136 Matching Annotations
  1. Aug 2023
    1. Reverse proxy implementation in nginx includes in-band (or passive) server health checks. If the response from a particular server fails with an error, nginx will mark this server as failed, and will try to avoid selecting this server for subsequent inbound requests for a while.

      Handy!
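
      The thresholds are tunable per server in the upstream block via max_fails and fail_timeout; a minimal sketch (addresses assumed):

      upstream backend {
          # after 3 failures within 30s, skip this server for the next 30s
          server 192.0.2.10:8080 max_fails=3 fail_timeout=30s;
          server 192.0.2.11:8080 max_fails=3 fail_timeout=30s;
      }

      server {
          listen 80;
          location / {
              proxy_pass http://backend;
          }
      }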

  2. Dec 2022
  3. Nov 2022
    1. ./nginx    # start nginx
       nginx -s reload|reopen|stop|quit    # reload config | restart | stop | quit
       nginx -t    # test the configuration for syntax errors

       nginx [-?hvVtq] [-s signal] [-c filename] [-p prefix] [-g directives]
         -?,-h         : show help
         -v            : show the version and exit
         -V            : show the version and configure options, then exit
         -t            : test the configuration file for syntax errors, then exit
         -q            : suppress non-error messages while testing the configuration
         -s signal     : send a signal to the nginx master process: stop, quit, reopen, reload
         -p prefix     : set the prefix path (default: /usr/local/nginx/)
         -c filename   : set the configuration file (default: /usr/local/nginx/conf/nginx.conf)
         -g directives : set global directives outside the configuration file

       -s signal sends a signal to the master process; the corresponding system signals are: stop = SIGTERM, quit = SIGQUIT, reopen = SIGUSR1, reload = SIGHUP.

       SIGNALS: the master process of nginx can handle the following signals:
         SIGINT, SIGTERM : shut down quickly
         SIGHUP          : reload configuration, start new worker processes with the new configuration, and gracefully shut down old worker processes
         SIGQUIT         : shut down gracefully
         SIGUSR1         : reopen log files
         SIGUSR2         : upgrade the nginx executable on the fly
         SIGWINCH        : shut down worker processes gracefully

      Start nginx

    2. II. Stop commands

       1. Find the process ID:

       $ ps -ef | grep nginx
       root      5747     1  0 May23 ?      00:00:00 nginx: master process /usr/local/nginx/sbin/nginx
       500      12037  7886  0 10:00 pts/1  00:00:00 grep nginx
       nobody   25581  5747  0 Sep27 ?      00:01:16 nginx: worker process
       nobody   25582  5747  0 Sep27 ?      00:01:25 nginx: worker process
       nobody   25583  5747  0 Sep27 ?      00:02:59 nginx: worker process
       nobody   25584  5747  0 Sep27 ?      00:02:05 nginx: worker process

       2. Kill the process. Find the master process in the list; its PID is the master process number. Note that a server may be running several nginx instances, so several master processes can show up; be careful not to stop the wrong one.

       # Graceful stop:
       $ kill -QUIT 5747
       # Fast stop:
       $ kill -TERM 5747
       # Force stop (commonly used):
       $ kill -9 5747

      Stop nginx

  4. Jul 2022
    1. By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.

      '
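
      For example, given a template like the following (the NGINX_PORT and NGINX_HOST variable names follow the official image's docs; the rest is illustrative), starting the container with -e NGINX_PORT=80 -e NGINX_HOST=example.com renders it to /etc/nginx/conf.d/default.conf:

      # /etc/nginx/templates/default.conf.template
      server {
          listen ${NGINX_PORT};
          server_name ${NGINX_HOST};
          location / {
              proxy_pass http://backend;
          }
      }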

  5. May 2022
    1. You should mention what you listed after the word try_files. Here's what I ended up using that seemed to work: try_files $uri $uri/index.html $uri.html /index.html; The /index.html at the end needs to match the fallback: 'index.html' part of your adapter-static config. Otherwise going directly to a route that doesn't have a matching file at that path -- such as any route with a dynamic param like [id] -- will result in a 404.
    1. Here's another convenient use of try_files, as unconditional redirects to named locations. The named locations are effectively acting as subroutines, saving duplication of code.
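
      A sketch of the pattern (location names and upstream are assumptions):

      location /images/ {
          try_files /nonexistent @fetch;  # "/nonexistent" never matches a file, so this always jumps
      }

      location /css/ {
          try_files /nonexistent @fetch;
      }

      # named location acting as the shared subroutine
      location @fetch {
          proxy_pass http://backend;
      }
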
  6. Dec 2021
  7. Oct 2021
  8. Sep 2021
  9. Dec 2020
    1. The ngx_http_auth_jwt_module module (1.11.3) implements client authorization by validating the provided JSON Web Token (JWT) using the specified keys. JWT claims must be encoded in a JSON Web Signature (JWS) structure. The module can be used for OpenID Connect authentication.

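      A minimal sketch of its use, assuming a key file path and realm string (note the module ships with the commercial NGINX Plus, not open source nginx):

      location /api/ {
          auth_jwt          "closed api";
          auth_jwt_key_file /etc/nginx/conf.d/api.jwk;
          proxy_pass        http://backend;
      }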

    1. ngx_http_access_module

      A module that allows restricting access based on the client address.

  10. May 2020
  11. Mar 2020
    1. The fastest way to preventively block the scripts that require prior consent is to install a module on your own server that we have developed for Apache, IIS and NGINX. After the initial configuration, the module will autonomously block all the resources that are subject to prior consent, on all sites on that server that are using the Cookie Solution.
  12. Dec 2019
  13. Nov 2019
    1. output = lustache:render("{{title}} spends {{calc}}", view_model)

      This will return a string of the rendered template.

    1. Make text substitutions in response bodies, using both regular expressions and fixed strings, in this filter module.

      We need to use this. It's under nginx plus though so does that mean we have to pay for it/it doesn't work with regular nginx?

    1. nginx_substitutions_filter is a filter module which can do both regular expression and fixed string substitutions on response bodies. This module is quite different from the Nginx's native Substitution Module.

      We might need to switch to this if we want to do replacement with regexes.
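
      A sketch with this module's subs_filter directive (patterns and upstream are made up; the r flag makes the match a regex, i makes it case-insensitive):

      location / {
          subs_filter_types text/html text/css;
          subs_filter st(\d+)\.example\.com cdn$1.example.com ir;  # regex with a capture
          subs_filter a.example.com s.example.com;                 # fixed string
          proxy_pass http://backend;
      }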

    1. http {
           proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;

           server {
               location / {
                   proxy_pass http://1.2.3.4;
                   proxy_set_header Host $host;
                   proxy_buffering on;
                   proxy_cache STATIC;
                   proxy_cache_valid 200 1d;
                   proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
               }
           }
       }

      example proxy_cache config.

    1. Default: proxy_cache off;

      So sounds like proxy caching is off by default.

    2. Default: proxy_cache_lock off;

      Hmm... OK, this is off by default, so probably not a bottleneck in that case.

    3. Default: proxy_cache_lock_age 5s;

      Again, seems like this should be a smaller timeout.

    4. Default: proxy_cache_lock_timeout 5s;

      This is a very long time! We should definitely shorten this. I actually wonder if this is perhaps why we are facing a bottleneck when we see a bunch of requests at once. When I performance tested, I ran without caching so it's possible the caching is actually bottlenecking us.

    5. Sets an offset in bytes for byte-range requests. If the range is beyond the offset, the range request will be passed to the proxied server and the response will not be cached.

      I don't see a need for this right now, as it's not the range requests that are really causing the perceived slowness, but it might be worth looking at later.

    6. “GET” and “HEAD” methods are always added to the list, though it is recommended to specify them explicitly.

      I wonder why?

    7. If the value is set to off, temporary files will be put directly in the cache directory.

      So use_temp_path should probably be set to off. I don't see a reason why we would need to first write them to a different directory.
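
      i.e., roughly (zone and path borrowed from the example above):

      proxy_cache_path /data/nginx/cache keys_zone=STATIC:10m use_temp_path=off;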

    8. Cached data that are not accessed during the time specified by the inactive parameter get removed from the cache regardless of their freshness. By default, inactive is set to 10 minutes.

      This default seems like it should be raised to at least an hour, probably more. Cloudflare is set at 4 hours, which seems perfectly reasonable to me.

    9. The special “cache manager” process monitors the maximum cache size set by the max_size parameter. When this size is exceeded, it removes the least recently used data.

      So we might need to increase this?

    10. Cache data are stored in files.

      So it caches them on the file system by way of storing them in a temp file and renaming them. The default here says --. I thought nginx cached things by default, so does this mean that's not the case, or maybe it doesn't save them? Confused; need to investigate more.

    11. Default: proxy_cache_min_uses 1;

      This seems right for us.

    12. The levels parameter defines hierarchy levels of a cache: from 1 to 3

      I assumed this was like a hardware cache hierarchy, with 1 being the first level checked, but it's actually the on-disk directory layout for cached files; see the sketch below.

    13. each level accepts values 1 or 2

      I don't understand what this means.
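
      To unpack that: levels defines how cached files are sharded into subdirectories on disk, and each value (1 or 2) is how many hex characters of the cache key's MD5 hash name that directory level. The docs' own example:

      proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m;
      # a response whose key hashes to ...65029c is stored at
      #   /data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
      # ("c" = last 1 char of the hash, "29" = the next 2 chars)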

  14. Oct 2019
    1. location / {
           sub_filter '<a href="http://127.0.0.1:8080/'  '<a href="https://$host/';
           sub_filter '<img src="http://127.0.0.1:8080/' '<img src="https://$host/';
           sub_filter_once on;
       }

      How to replace strings in an html response on-the-fly.

      Note the response must not be compressed for this to work (sub_filter can't rewrite gzipped bodies).

    1. # request will be sent to backend without uri changed
       # to '/' due to if
       location /proxy-pass-uri {
           proxy_pass http://127.0.0.1:8080/;
           set $true 1;
           if ($true) {
               # nothing
           }
       }

      This is a weird one. I guess as long as you aren't using $uri it has no impact though.

    1. Is anyone aware of a lua http lib that supports keepalive?

      When sending a request you can pass the following keepalive settings which will keep the connection open:

      local http = require "resty.http"
      local httpc = http.new()
      -- request_uri() opens the connection and, on success, returns it to the
      -- keepalive pool itself; the keepalive options belong to this method
      -- (the lower-level request() expects a manual set_keepalive() call).
      -- keepalive_timeout is in milliseconds.
      local response, err = httpc:request_uri("http://127.0.0.1:9081/proxy" .. ngx.var.request_uri, {
        method = "HEAD",
        headers = ngx.req.get_headers(),
        keepalive_timeout = 60000,
        keepalive_pool = 10,
      })
      
    1. Also, it's always a good idea to add ipv6=off to the resolver directive when your dns server may return IPv6 addresses and your network does not support it.

      This might help.
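
      i.e.:

      resolver 8.8.8.8 ipv6=off;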

    1. Another difference between $uri and $request_uri in proxy_cache_key is that $request_uri will include the anchor part, but $uri$is_args$args will ignore it. Do a curl operation: curl -I static.io/hello.htm

      This could be a problem. We'll probably have to fix this when we move via2 outside lms.

    1. The proxy_buffering off setting tells NGINX to pass the response directly back to the client; otherwise, it will try to buffer it in memory or on disk. I recommend turning it off if the upstream response can be large.

      We may want to not buffer on pdf endpoints.
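
      A sketch of that (the path and upstream are placeholders):

      location /pdf/ {
          proxy_pass http://backend;
          proxy_buffering off;  # stream the potentially large response straight to the client
      }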

    1. set $stripped_cookie $http_cookie;

       if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
           set $stripped_cookie $1$2;
       }
       if ($stripped_cookie ~ "(.*)(?:^|;)\s*csrftoken=[^;]+(.*)$") {
           set $stripped_cookie $1$2;
       }
    1. location / {
           proxy_pass http://backend;
           # You may need to uncomment the following line if your redirects are relative, e.g. /foo/bar
           #proxy_redirect / /;
           proxy_intercept_errors on;
           error_page 301 302 307 = @handle_redirect;
       }

       location @handle_redirect {
           set $saved_redirect_location '$upstream_http_location';
           proxy_pass $saved_redirect_location;
       }

      Usually the redirect is returned as the response and the client follows the redirect. This will follow a redirect inside nginx rather than the client.

    1. The X-Forwarded-Proto request header helps you identify the protocol (HTTP or HTTPS) that a client used to connect to your load balancer. Your server access logs contain only the protocol used between the server and the load balancer; they contain no information about the protocol used between the client and the load balancer.

      The load balancer may talk to the server via http so using $scheme in nginx when there's an AWS load balancer in front may lead to the $scheme being unexpectedly http instead of https.

      http {
          map $http_x_forwarded_proto $original_scheme {
            "" $scheme;
            default $http_x_forwarded_proto;
          }
      }
      
    1. I had a similar issue with nginx+passenger (for Ruby on Rails / Rack / etc.), and I confirm that by default, multiple slashes are collapsed (in both PATH_INFO and REQUEST_URI). Adding merge_slashes off; in the server context of the nginx configuration fixed it.
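
      i.e., roughly:

      server {
          listen 80;
          merge_slashes off;  # keep "//" sequences in the URI instead of collapsing them
      }
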
    1. non-blocking internal requests

      Note ngx.location.capture only works on internal requests, which means if you want to request an external endpoint dynamically you need to set up something like the below and call that internal endpoint instead of calling the external url directly.

      Say, for example, you want to send a request to the / endpoint with the third-party url as part of the path (http://proxy-server.com/http://example.com).

      location /external/ {
          internal;
          set $upstream "";
          rewrite_by_lua_file ./lua/get_external.lua;
          resolver 8.8.8.8;
          proxy_pass $upstream;
      }
      

      Where lua/get_external.lua:

      -- strip beginning '/' from uri path
      ngx.var.upstream = ngx.var.request_uri:sub(2)
      
    1. set $template_root /usr/local/openresty/nginx/html/templates;

      We should probably use this instead of root since root has other implications.

    1. # kill cache
       add_header Last-Modified $date_gmt;
       add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
       if_modified_since off;
       expires off;
       etag off;

      disable nginx caching

    1. Because it can handle a high volume of connections, NGINX is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to slower upstream servers – anything from legacy database servers to microservices.
    2. Dynamic sites, built using anything from Node.js to PHP, commonly deploy NGINX as a content cache and reverse proxy to reduce load on application servers and make the most effective use of the underlying hardware.
    3. With its event-driven, asynchronous architecture, NGINX revolutionized how servers operate in high-performance contexts and became the fastest web server available.
    4. NGINX has grown along with it and now supports all the components of the modern Web, including WebSocket, HTTP/2, and streaming of multiple video formats (HDS, HLS, RTMP, and others).
    5. NGINX consistently beats Apache and other servers in benchmarks measuring web server performance.
  15. Sep 2019
    1. This API function (as well as ngx.location.capture_multi) always buffers the whole response body of the subrequest in memory. Thus, you should use cosockets and streaming processing instead if you have to handle large subrequest responses.

      So my interpretation of this is the request is issued, the entire response is buffered in memory, then res.headers is read from that buffer. I wonder if there is a way to cut off this buffer and close the connection early such that only the headers make it into the buffer for responses that are very large.

    2. Network I/O operations in user code should only be done through the Nginx Lua API calls as the Nginx event loop may be blocked and performance drop off dramatically otherwise. Disk operations with relatively small amount of data can be done using the standard Lua io library but huge file reading and writing should be avoided wherever possible as they may block the Nginx process significantly. Delegating all network and disk I/O operations to Nginx's subrequests (via the ngx.location.capture method and similar) is strongly recommended for maximum performance.

      Very important: ngx.location.capture does not block nginx.

    1. Nginx, by default, will consider any header that contains underscores as invalid.

      Interesting.
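
      If such headers are expected, the underscores_in_headers directive changes this:

      server {
          listen 80;
          underscores_in_headers on;  # accept request headers like "my_header" instead of ignoring them
      }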

    2. This is a really great introductory article to how nginx works. Definitely worth a read!

    3. While buffering can help free up the backend server to handle more requests, Nginx also provides a way to cache content from backend servers, eliminating the need to connect to the upstream at all for many requests.

      This seems like something we might want to look into long term but also I wonder whether this is really necessary. I thought nginx also cached responses by default so in that scenario I don't think it would even go out to the upstream server anyway, it would just return the previous still valid response but maybe I'm missing something.

    1. local LuaSocket = require("socket")
       client = LuaSocket.connect("example.com", 80)
       client:send("GET /login.php?login=admin&pass=admin HTTP/1.0\r\nHost: example.com\r\n\r\n")
       while true do
           s, status, partial = client:receive('*a')
           print(s or partial)
           if status == "closed" then
               break
           end
       end
       client:close()

      How to issue an http request through a socket in lua.

    1. proxy_ssl_trusted_certificate file;

      And we should probably specify this.

    2. proxy_ssl_verify on

      We should probably turn this on.

    3. proxy_ssl_server_name on

      Turn this on when you need to proxy to an https server.
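
      Putting the three together, a sketch (upstream host and CA bundle path are assumptions):

      location / {
          proxy_pass https://upstream.example.com;
          proxy_ssl_server_name on;   # send SNI for the upstream host
          proxy_ssl_verify on;        # verify the upstream certificate
          proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
      }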

    4. There could be several proxy_redirect directives: proxy_redirect default; proxy_redirect http://localhost:8000/ /; proxy_redirect http://www.example.com/ /;

      In case there are multiple layers of redirection.

    5. proxy_redirect ~*/user/([^/]+)/(.+)$ http://$1.example.com/$2;

      So you can basically rewrite any redirect back to the original url, so the client is not aware there were layers of redirects, with:

      proxy_redirect ~^.*$ /original/endpoint;
      
    1. proxy_redirect $scheme://$host:$server_port/ /gitlab/;

      This doesn't seem to be working as I'd expect it to override the location response header.

    1. proxy_redirect: If you need to modify the redirect (i.e. location header) being returned to the browser
    1. more_clear_headers

      Look into this as the nginx set header Cookie "" doesn't appear to be working. Might also be able to use lua header filter.

    2. openresty/headers-more-nginx-module

      Allows you to clear headers.
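
      For a request header like Cookie it would presumably be more_clear_input_headers (more_clear_headers operates on the response); a sketch:

      location / {
          more_clear_input_headers 'Cookie';  # strip Cookie before the request is proxied
          proxy_pass http://backend;
      }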

    1. header_filter_by_lua 'ngx.header.Foo = "blah"';

      Not sure why you wouldn't use raw nginx for this.

    2. access_by_lua '
           local res = ngx.location.capture("/auth")
           if res.status == ngx.HTTP_OK then
               return
           end
           if res.status == ngx.HTTP_FORBIDDEN then
               ngx.exit(res.status)
           end
           ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
       ';

      Potentially something like this for getting the content-type response header and using that to decide which proxy server to send the original request to.

    3. The Lua code cache can be temporarily disabled during development by switching lua_code_cache off

      For dev mode switch the lua file cache off.

    1. # map to different upstream backends based on header
       map $http_x_server_select $pool {
           default "apache";
           staging "staging";
           dev     "development";
       }

      A case statement on the X-Server-Select header. I wonder if we could do something similar for rate limiting ip selection; see the sketch below.
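
      A sketch of that idea for rate limiting, relying on the documented behavior that limit_req_zone ignores requests whose key is empty (the header name and zone are assumptions):

      map $http_x_internal_client $limit_key {
          default $binary_remote_addr;  # everyone else is limited per IP
          "1"     "";                   # empty key = exempt from the limit
      }

      limit_req_zone $limit_key zone=perip:10m rate=10r/s;

      server {
          location / {
              limit_req zone=perip burst=20 nodelay;
          }
      }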

    1. location ~* ^/proxy/(?<pschema>https?)/(?<phost>[\w.]+)(?<puri>/.*) {
           set $adr $pschema://$phost;
           rewrite .* $puri break;
           proxy_pass $adr;
       }

      Nice idea as it doesn't use lua but also a bit fragile I think.

    1. You can only really adjust worker_connections since worker_processes is based off the number of CPUs you have available.
    2. the product of your worker_connections and worker_processes directive settings
    3. worker_rlimit_nofile directive. This changes the limit on the maximum number of open files
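
      Putting those together (values are illustrative):

      worker_processes auto;        # one worker per CPU core
      worker_rlimit_nofile 8192;    # raise the open-file limit for workers

      events {
          worker_connections 4096;  # per worker; total capacity is roughly workers * connections
      }
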
  16. Jun 2019
  17. Jan 2019
    1. this is in /srv/www/ on the host.

      This site actually gives somewhat clear instructions about which directories to run the commands from. I think where I went wrong before was using various directories that in the end did not match the actual installations.

  18. Dec 2018
  19. Nov 2018
  20. Oct 2018
    1. This NGINX module provides mechanism to cryptographically bind HTTP cookies to client's HTTPS channel using Token Binding, as defined by following IETF drafts:
  21. Apr 2018
  22. Feb 2018
  23. Aug 2017
  24. Jul 2017
    1. since lua-resty-redis cannot be used in set_by_lua*, dynamic routing based on redis should be implemented in the access_by_lua block

    1. In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In the configuration above, the default server is the first one — which is nginx’s standard default behaviour. It can also be set explicitly which server should be default, with the default_server parameter in the listen directive:
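
      i.e. something like:

      server {
          listen 80 default_server;
          server_name example.net www.example.net;
      }
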
    1. First, Nginx looks at the IP address and the port of the request. It matches this against the listen directive of each server to build a list of the server blocks that can possibly resolve the request.
    2. The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
  25. Apr 2017
  26. Mar 2017
  27. Feb 2017
  28. Jan 2017