89 Matching Annotations
  1. Last 7 days
    1. location / {
         proxy_pass http://backend;
         # You may need to uncomment the following line if your redirects are relative, e.g. /foo/bar
         #proxy_redirect / /;
         proxy_intercept_errors on;
         error_page 301 302 307 = @handle_redirect;
       }

       location @handle_redirect {
         set $saved_redirect_location '$upstream_http_location';
         proxy_pass $saved_redirect_location;
       }

      Usually the redirect is returned as the response and the client follows it. With this configuration nginx follows the redirect itself instead of handing it back to the client.

    1. The X-Forwarded-Proto request header helps you identify the protocol (HTTP or HTTPS) that a client used to connect to your load balancer. Your server access logs contain only the protocol used between the server and the load balancer; they contain no information about the protocol used between the client and the load balancer.

      The load balancer may talk to the server over plain HTTP, so when an AWS load balancer sits in front of nginx, $scheme can unexpectedly be http instead of https. Mapping from X-Forwarded-Proto avoids this:

      http {
          map $http_x_forwarded_proto $original_scheme {
            "" $scheme;
            default $http_x_forwarded_proto;
          }
      }
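
      A sketch of how the mapped variable might then be used; the upstream name `backend` and the https redirect are assumptions for illustration, not from the original note:

      ```nginx
      server {
          listen 80;

          # Redirect based on what the *client* used, not on what the
          # load balancer used to reach nginx.
          if ($original_scheme != "https") {
              return 301 https://$host$request_uri;
          }

          location / {
              proxy_set_header X-Forwarded-Proto $original_scheme;
              proxy_pass http://backend;
          }
      }
      ```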
      
  2. Oct 2019
    1. I had a similar issue with nginx+passenger (for Ruby on Rails / Rack / etc.), and I confirm that by default multiple slashes are collapsed (in both PATH_INFO and REQUEST_URI). Adding merge_slashes off; in the server context of the nginx configuration fixed it.
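
      A minimal sketch of where the directive goes (the server name is an illustrative assumption):

      ```nginx
      server {
          listen 80;
          server_name app.example.com;

          # Keep // sequences in the URI instead of collapsing them to /
          merge_slashes off;
      }
      ```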
    1. non-blocking internal requests

      Note that ngx.location.capture only works on internal requests, which means that if you want to request an external endpoint dynamically you need to set up something like the below and call that internal endpoint instead of calling the external URL directly.

      Say, for example, you want to send a request to the / endpoint with the third-party URL as part of the path (http://proxy-server.com/http://example.com).

      location /external/ {
        internal;
        set $upstream "";
        rewrite_by_lua_file ./lua/get_external.lua;
        resolver 8.8.8.8;
        proxy_pass $upstream;
      }

      Where lua/get_external.lua:

      -- strip beginning '/' from uri path
      ngx.var.upstream = ngx.var.request_uri:sub(2)
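
      A hedged sketch of how the internal location might then be invoked from Lua. Since ngx.var.request_uri in a subrequest still refers to the main request's URI, the client would request e.g. /http://example.com/some/path and the handler would issue the subrequest like this (the handler context is an assumption):

      ```lua
      -- Subrequest to the internal /external/ location; get_external.lua
      -- derives the real upstream URL from the main request's URI.
      local res = ngx.location.capture("/external/")
      if res.status == ngx.HTTP_OK then
        ngx.say(res.body)
      else
        ngx.exit(res.status)
      end
      ```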
      
    1. set $template_root /usr/local/openresty/nginx/html/templates;

      We should probably use this instead of root since root has other implications.

    1. # kill cache
       add_header Last-Modified $date_gmt;
       add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
       if_modified_since off;
       expires off;
       etag off;

      disable nginx caching

    1. Because it can handle a high volume of connections, NGINX is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to slower upstream servers – anything from legacy database servers to microservices.
    2. Dynamic sites, built using anything from Node.js to PHP, commonly deploy NGINX as a content cache and reverse proxy to reduce load on application servers and make the most effective use of the underlying hardware.
    3. With its event-driven, asynchronous architecture, NGINX revolutionized how servers operate in high-performance contexts and became the fastest web server available.
    4. NGINX has grown along with it and now supports all the components of the modern Web, including WebSocket, HTTP/2, and streaming of multiple video formats (HDS, HLS, RTMP, and others).
    5. NGINX consistently beats Apache and other servers in benchmarks measuring web server performance.
  3. Sep 2019
    1. This API function (as well as ngx.location.capture_multi) always buffers the whole response body of the subrequest in memory. Thus, you should use cosockets and streaming processing instead if you have to handle large subrequest responses.

      So my interpretation of this is the request is issued, the entire response is buffered in memory, then res.headers is read from that buffer. I wonder if there is a way to cut off this buffer and close the connection early such that only the headers make it into the buffer for responses that are very large.
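
      One hedged way to keep the body out of the buffer entirely is to skip ngx.location.capture and talk to the upstream over a cosocket, stopping (or using HEAD) after the header block; the host and path here are assumptions:

      ```lua
      -- Sketch: fetch only the status line and headers over a raw cosocket.
      -- Assumes a context where cosockets are allowed (e.g. content_by_lua).
      local sock = ngx.socket.tcp()
      sock:settimeout(2000)  -- ms
      local ok, err = sock:connect("backend.internal", 80)
      if not ok then
        ngx.log(ngx.ERR, "connect failed: ", err)
        return ngx.exit(ngx.HTTP_BAD_GATEWAY)
      end
      -- HEAD avoids a body altogether; a GET could instead be closed
      -- as soon as the blank line ending the headers is seen.
      sock:send("HEAD /large-resource HTTP/1.0\r\nHost: backend.internal\r\n\r\n")
      local headers = {}
      while true do
        local line = sock:receive("*l")        -- one header line at a time
        if not line or line == "" then break end -- blank line ends the headers
        local name, value = line:match("^([^:]+):%s*(.+)$")
        if name then headers[name:lower()] = value end
      end
      sock:close()  -- connection closed before any body is buffered
      ```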

    2. Network I/O operations in user code should only be done through the Nginx Lua API calls as the Nginx event loop may be blocked and performance drop off dramatically otherwise. Disk operations with relatively small amount of data can be done using the standard Lua io library but huge file reading and writing should be avoided wherever possible as they may block the Nginx process significantly. Delegating all network and disk I/O operations to Nginx's subrequests (via the ngx.location.capture method and similar) is strongly recommended for maximum performance.

      Very important: ngx.location.capture does not block the nginx event loop.

    1. Nginx, by default, will consider any header that contains underscores as invalid.

      Interesting.
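
      If I understand the docs correctly, this default can be changed with a directive that exists for exactly this case:

      ```nginx
      server {
          # Accept headers like X_Custom_Header instead of silently dropping them
          underscores_in_headers on;
      }
      ```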

    2. This is a really great introductory article to how nginx works. Definitely worth a read!

    3. While buffering can help free up the backend server to handle more requests, Nginx also provides a way to cache content from backend servers, eliminating the need to connect to the upstream at all for many requests.

      This seems like something we might want to look into long term, but I also wonder whether it's really necessary. I thought nginx cached responses by default, in which case it wouldn't go out to the upstream server anyway; it would just return the previous still-valid response. But maybe I'm missing something.

    1. local LuaSocket = require("socket")
       client = LuaSocket.connect("example.com", 80)
       client:send("GET /login.php?login=admin&pass=admin HTTP/1.0\r\nHost: example.com\r\n\r\n")
       while true do
         s, status, partial = client:receive('*a')
         print(s or partial)
         if status == "closed" then break end
       end
       client:close()

      How to issue an HTTP request through a socket in Lua.

    1. proxy_ssl_trusted_certificate file;

      And we should probably specify this.

    2. proxy_ssl_verify on

      We should probably turn this on.

    3. proxy_ssl_server_name on

      Turn this on when you need to proxy to an https server.
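
      Putting the three directives together, a sketch of a verified HTTPS proxy block; the upstream host and CA bundle path are assumptions:

      ```nginx
      location / {
          proxy_pass https://upstream.example.com;

          # Verify the upstream's certificate against a trusted CA bundle
          proxy_ssl_verify on;
          proxy_ssl_trusted_certificate /etc/nginx/certs/ca-bundle.crt;

          # Send SNI so the upstream can present the right certificate
          proxy_ssl_server_name on;
      }
      ```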

    4. There could be several proxy_redirect directives:

       proxy_redirect default;
       proxy_redirect http://localhost:8000/ /;
       proxy_redirect http://www.example.com/ /;

      In case there are multiple layers of redirection.

    5. proxy_redirect ~*/user/([^/]+)/(.+)$ http://$1.example.com/$2;

      So you can basically rewrite any Location header back to the original URL, so the client is never aware there were layers of redirects:

      proxy_redirect ~^.*$ /original/endpoint;
      
    1. proxy_redirect $scheme://$host:$server_port/ /gitlab/;

      This doesn't seem to be working as I'd expect it to override the location response header.

    1. proxy_redirect: If you need to modify the redirect (i.e. location header) being returned to the browser
    1. more_clear_headers

      Look into this, as setting the Cookie header to "" in nginx doesn't appear to be working. Might also be able to use a Lua header filter.

    2. openresty/headers-more-nginx-module

      Allows you to clear headers.
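
      A sketch of clearing the Cookie request header with this module; note it would be more_clear_input_headers here, since Cookie is a request header (more_clear_headers is for response headers), and the upstream name is an assumption:

      ```nginx
      location / {
          # From openresty/headers-more-nginx-module:
          # drop the Cookie request header before proxying
          more_clear_input_headers 'Cookie';
          proxy_pass http://backend;
      }
      ```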

    1. header_filter_by_lua 'ngx.header.Foo = "blah"';

      Not sure why you wouldn't use raw nginx for this.

    2. access_by_lua '
         local res = ngx.location.capture("/auth")
         if res.status == ngx.HTTP_OK then
           return
         end
         if res.status == ngx.HTTP_FORBIDDEN then
           ngx.exit(res.status)
         end
         ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
       ';

      Potentially something like this for getting the content-type response header and using that to decide which proxy server to send the original request to.

    3. The Lua code cache can be temporarily disabled during development by switching lua_code_cache off

      For dev mode switch the lua file cache off.

    1. # map to different upstream backends based on header
       map $http_x_server_select $pool {
         default "apache";
         staging "staging";
         dev "development";
       }

      A case statement on the X-Server-Select header. I wonder if we could do something similar to select which IPs get rate limited.
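
      A hedged sketch of the analogous pattern for rate limiting, exempting some client IPs by mapping them to an empty key; the CIDR ranges, zone size, and rate are illustrative assumptions:

      ```nginx
      # 1 = rate limit this client, 0 = exempt
      geo $limit {
          default        1;
          10.0.0.0/8     0;
          192.168.0.0/16 0;
      }

      # Requests with an empty key are not rate limited
      map $limit $limit_key {
          0 "";
          1 $binary_remote_addr;
      }

      limit_req_zone $limit_key zone=per_ip:10m rate=10r/s;

      server {
          location / {
              limit_req zone=per_ip burst=20;
          }
      }
      ```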

    1. location ~* ^/proxy/(?<pschema>https?)/(?<phost>[\w.]+)(?<puri>/.*) {
         set $adr $pschema://$phost;
         rewrite .* $puri break;
         proxy_pass $adr;
       }

      Nice idea as it doesn't use lua but also a bit fragile I think.

    1. You can only really adjust worker_connections, since worker_processes is based on the number of CPUs you have available.
    2. the product of your worker_connections and worker_processes directive settings
    3. worker_rlimit_nofile directive. This changes the limit on the maximum number of open files
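
      A sketch of how the three directives fit together; the numbers are illustrative assumptions, not recommendations:

      ```nginx
      # One worker per CPU core
      worker_processes auto;

      # Raise the per-worker open-file limit so worker_connections can
      # actually be reached (each connection needs at least one fd,
      # proxied connections need two)
      worker_rlimit_nofile 20000;

      events {
          # Max simultaneous connections per worker; total capacity is
          # roughly worker_processes * worker_connections
          worker_connections 10000;
      }
      ```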
    1. set $stripped_cookie $http_cookie;

       if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
         set $stripped_cookie $1$2;
       }

       if ($stripped_cookie ~ "(.*)(?:^|;)\s*csrftoken=[^;]+(.*)$") {
         set $stripped_cookie $1$2;
       }
  4. Jun 2019
  5. Jan 2019
    1. this is in /srv/www/ on the host.

      This site actually gives somewhat clear instructions about which directories to run the commands from. I think where I went wrong before was using various directories that in the end did not match the actual installations.

  6. Dec 2018
  7. Nov 2018
  8. Oct 2018
    1. This NGINX module provides a mechanism to cryptographically bind HTTP cookies to the client's HTTPS channel using Token Binding, as defined by the following IETF drafts:
  9. Apr 2018
  10. Feb 2018
  11. Aug 2017
  12. Jul 2017
    1. since lua-resty-redis cannot be used in set_by_lua*, dynamic routing based on redis should be implemented in the access_by_lua block

    1. In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In the configuration above, the default server is the first one — which is nginx’s standard default behaviour. It can also be set explicitly which server should be default, with the default_server parameter in the listen directive:
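
      From the quoted docs, an explicitly chosen default server might look like this (the names are illustrative):

      ```nginx
      server {
          # Requests whose Host header matches no server_name land here
          listen 80 default_server;
          server_name catchall.example.com;
      }

      server {
          listen 80;
          server_name app.example.com;
      }
      ```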
    1. First, Nginx looks at the IP address and the port of the request. It matches this against the listen directive of each server to build a list of the server blocks that can possibly resolve the request.
    2. The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  13. Apr 2017
  14. Mar 2017
  15. Feb 2017
  16. Jan 2017