- Aug 2023
-
nginx.org nginx.org
-
Reverse proxy implementation in nginx includes in-band (or passive) server health checks. If the response from a particular server fails with an error, nginx will mark this server as failed, and will try to avoid selecting this server for subsequent inbound requests for a while.
Handy!
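The knobs behind this passive checking live on the upstream's server lines; a minimal sketch (upstream name and addresses are made up):

```nginx
upstream backend {
    # After 3 failed attempts within 30s, skip this server for 30s
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}
```

Note fail_timeout does double duty: it is both the window for counting failures and the time the server is considered unavailable afterwards.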
-
- Dec 2022
-
www.zhihu.com www.zhihu.com
-
How to explain the relationship between CGI, FastCGI, and php-fpm in plain terms?
-
- Nov 2022
-
blog.51cto.com blog.51cto.com
-
./nginx                                  # start nginx
nginx -s reload|reopen|stop|quit         # reload config | restart | stop | quit
nginx -t                                 # test the configuration for syntax errors
nginx [-?hvVtq] [-s signal] [-c filename] [-p prefix] [-g directives]
  -?,-h         : show help
  -v            : show version and exit
  -V            : show version and configure options, then exit
  -t            : test the configuration file for syntax errors, then exit
  -q            : suppress non-error messages during configuration testing
  -s signal     : send a signal to the nginx master process: stop, quit, reopen, reload
  -p prefix     : set the prefix path (default: /usr/local/nginx/)
  -c filename   : set the configuration file (default: /usr/local/nginx/conf/nginx.conf)
  -g directives : set global directives outside the configuration file
-s signal  Send a signal to the master process. The argument signal can be one of: stop, quit, reopen, reload. The corresponding system signals: stop SIGTERM, quit SIGQUIT, reopen SIGUSR1.
SIGNALS  The master process of nginx can handle the following signals:
  SIGINT, SIGTERM  Shut down quickly.
  SIGHUP           Reload configuration, start new worker processes with the new configuration, and gracefully shut down old worker processes.
  SIGQUIT          Shut down gracefully.
  SIGUSR1          Reopen log files.
  SIGUSR2          Upgrade the nginx executable on the fly.
  SIGWINCH         Shut down worker processes gracefully.
Starting nginx
-
II. Stopping nginx
1. Find the process ID:
$ ps -ef | grep nginx
root      5747     1  0 May23 ?      00:00:00 nginx: master process /usr/local/nginx/sbin/nginx
500      12037  7886  0 10:00 pts/1  00:00:00 grep nginx
nobody   25581  5747  0 Sep27 ?      00:01:16 nginx: worker process
nobody   25582  5747  0 Sep27 ?      00:01:25 nginx: worker process
nobody   25583  5747  0 Sep27 ?      00:02:59 nginx: worker process
nobody   25584  5747  0 Sep27 ?      00:02:05 nginx: worker process
2. Kill the process. Find the master process in the list; its PID is the main process ID. Note that the server may be running several nginx instances, so more than one master process may show up; be careful not to stop the wrong one.
# Gracefully stop nginx:
$ kill -QUIT 5747
# Quickly stop nginx:
$ kill -TERM 5747
# Force-stop nginx (commonly used):
$ kill -9 5747
Stopping nginx
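Instead of grepping ps, the master PID can also be read from nginx's pid file; a sketch, simulating the pid file with a temp file since the real path varies by build (often /usr/local/nginx/logs/nginx.pid):

```shell
# Simulate nginx's pid file with a temporary file
pidfile=$(mktemp)
echo "5747" > "$pidfile"

# Read the master PID and print the graceful-stop command that would be run
master_pid=$(cat "$pidfile")
echo "kill -QUIT $master_pid"   # equivalent to: nginx -s quit

rm -f "$pidfile"
```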
-
- Jul 2022
-
howto.philippkeller.com howto.philippkeller.com
-
Turns out setting up SSL for nginx is this simple.
-
-
github.com github.com
-
By default, this function reads template files in /etc/nginx/templates/*.template and outputs the result of executing envsubst to /etc/nginx/conf.d.
-
- May 2022
-
stackoverflow.com stackoverflow.com
-
You should mention what you listed after the word try_files. Here's what I ended up using that seemed to work: try_files $uri $uri/index.html $uri.html /index.html; The /index.html at the end needs to match the fallback: 'index.html' part of your adapter-static config. Otherwise, going directly to a route that doesn't have a matching file at that path (such as any route with a dynamic param like [id]) will result in a 404.
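As an nginx fragment, the answer above might look like this (the root path is illustrative):

```nginx
location / {
    root /srv/app/build;
    # /index.html must match the `fallback` of the adapter-static config
    try_files $uri $uri/index.html $uri.html /index.html;
}
```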
-
-
serverfault.com serverfault.com
-
Here's another convenient use of try_files, as unconditional redirects to named locations. The named locations are effectively acting as subroutines, saving duplication of code.
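A sketch of that idiom: since the last try_files argument is an unconditional fallback, a first argument that never matches jumps straight to the named location (location names and upstream are hypothetical):

```nginx
location /api/ {
    try_files /dev/null @proxy;
}

location @proxy {
    # shared proxying logic, reusable from several locations
    proxy_pass http://backend;
}
```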
-
- Dec 2021
-
ngbala6.medium.com ngbala6.medium.com
-
discuss.streamlit.io discuss.streamlit.io
- Oct 2021
-
opensourcelibs.com opensourcelibs.com
- Sep 2021
-
-
-
segmentfault.com segmentfault.com
-
-
www.lijiaocn.com www.lijiaocn.com
- Dec 2020
-
nginx.org nginx.org
-
The ngx_http_auth_jwt_module module (1.11.3) implements client authorization by validating the provided JSON Web Token (JWT) using the specified keys. JWT claims must be encoded in a JSON Web Signature (JWS) structure. The module can be used for OpenID Connect authentication.
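Per the module docs, usage looks roughly like this (the realm string and key-file path are illustrative; the module ships with NGINX Plus):

```nginx
location /api/ {
    auth_jwt          "closed site";
    auth_jwt_key_file /etc/nginx/keys/jwt.jwk;
}
```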
-
-
nginx.org nginx.org
-
ngx_http_access_module
A module that allows restricting access based on the client address.
-
- May 2020
- Mar 2020
-
www.iubenda.com www.iubenda.com
-
The fastest way to preventively block the scripts that require prior consent is to install a module on your own server that we have developed for Apache, IIS and NGINX. After the initial configuration, the module will autonomously block all the resources that are subject to prior consent, on all sites on that server that are using the Cookie Solution.
-
- Dec 2019
-
github.com github.com
-
Using environment variables in nginx configuration
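nginx does not expand environment variables in its config natively; one common OpenResty workaround, sketched here with a made-up variable name, is to whitelist the variable with env and read it via set_by_lua_block:

```nginx
# Whitelist the variable so worker processes inherit it (top-level context)
env API_UPSTREAM;

http {
    server {
        location / {
            set_by_lua_block $api_upstream { return os.getenv("API_UPSTREAM") }
            # a resolver directive is also needed if $api_upstream is a hostname
            proxy_pass http://$api_upstream;
        }
    }
}
```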
-
- Nov 2019
-
github.com github.com
-
output = lustache:render("{{title}} spends {{calc}}", view_model)
This will return a string of the rendered template.
-
-
github.com github.com
-
1connect/nginx-config-formatter
Possible nginx config formatter.
-
-
docs.nginx.com docs.nginx.com
-
Make text substitutions in response bodies, using both regular expressions and fixed strings, in this filter module.
We need to use this. It's under NGINX Plus though, so does that mean we have to pay for it, or that it doesn't work with regular nginx?
-
-
github.com github.com
-
nginx_substitutions_filter is a filter module which can do both regular expression and fixed string substitutions on response bodies. This module is quite different from the Nginx's native Substitution Module.
We might need to switch to this if we want to do replacement with regex's.
-
-
www.nginx.com www.nginx.com
-
http {
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m inactive=24h max_size=1g;
    server {
        location / {
            proxy_pass http://1.2.3.4;
            proxy_set_header Host $host;
            proxy_buffering on;
            proxy_cache STATIC;
            proxy_cache_valid 200 1d;
            proxy_cache_use_stale error timeout invalid_header updating
                                  http_500 http_502 http_503 http_504;
        }
    }
}
example proxy_cache config.
-
-
nginx.org nginx.org
-
Default: proxy_cache off;
So sounds like proxy caching is off by default.
-
Default: proxy_cache_lock off;
hmm...Ok this is off by default so probably not a bottleneck in that case.
-
Default: proxy_cache_lock_age 5s;
Again, seems like this should be a smaller timeout.
-
Default: proxy_cache_lock_timeout 5s;
This is a very long time! We should definitely shorten this. I actually wonder if this is perhaps why we are facing a bottleneck when we see a bunch of requests at once. When I performance tested, I ran without caching so it's possible the caching is actually bottlenecking us.
-
Sets an offset in bytes for byte-range requests. If the range is beyond the offset, the range request will be passed to the proxied server and the response will not be cached.
I don't see a need for this right now as it's not the range requests that are really the perceived slowness but it might be worth looking at later.
-
“GET” and “HEAD” methods are always added to the list, though it is recommended to specify them explicitly.
I wonder why?
-
If the value is set to off, temporary files will be put directly in the cache directory.
So use_temp_path should probably be set to off. I don't see a reason why we would need to first write them to a different directory.
-
Cached data that are not accessed during the time specified by the inactive parameter get removed from the cache regardless of their freshness. By default, inactive is set to 10 minutes.
This default seems like it should be set to minimally an hour, probably more. Cloudflare is set at 4 hours, which seems perfectly reasonable to me.
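Pulling the notes above together, a tuned proxy_cache_path might look like this; every value is a suggestion from these annotations, not an nginx default:

```nginx
proxy_cache_path /data/nginx/cache
                 levels=1:2
                 keys_zone=STATIC:10m
                 inactive=4h          # keep entries longer than the 10m default
                 use_temp_path=off    # write files directly into the cache dir
                 max_size=10g;        # cache manager evicts LRU data beyond this
```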
-
The special “cache manager” process monitors the maximum cache size set by the max_size parameter. When this size is exceeded, it removes the least recently used data.
So we might need to increase this?
-
Cache data are stored in files.
So it caches them on the file system by way of storing them in a temp file and renaming them. The default here says --. I thought nginx cached things by default, so does this mean that's not the case, or maybe it doesn't save them? Confused; need to investigate more.
-
Default: proxy_cache_min_uses 1;
This seems right for us.
-
The levels parameter defines hierarchy levels of a cache: from 1 to 3
Basically the same concept as a hardware cache with 1 I assume being the first cache that will be checked.
-
each level accepts values 1 or 2
I don't understand what this means.
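What it means, per the nginx docs: subdirectory names are taken from the tail of the MD5 of the cache key, and each number in levels is how many hex characters that level uses. For example:

```nginx
# levels=1:2 => first-level dirs are 1 hex char, second-level dirs are 2.
# A key whose MD5 ends in ...029c is stored at:
#   /data/nginx/cache/c/29/b7f54b2df7773722d382f4809d65029c
proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=STATIC:10m;
```

The point of the hierarchy is to avoid putting huge numbers of files in a single directory, which some filesystems handle poorly.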
-
- Oct 2019
-
nginx.org nginx.org
-
location / {
    sub_filter '<a href="http://127.0.0.1:8080/' '<a href="https://$host/';
    sub_filter '<img src="http://127.0.0.1:8080/' '<img src="https://$host/';
    sub_filter_once on;
}
How to replace strings in an html response on-the-fly.
Note the Content-Type must be requested as not compressed for this to work.
-
-
serverfault.com serverfault.com
-
proxy_set_header Accept-Encoding "";
Note this is key to using sub_filter to replace strings in the response body.
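Combining the two annotations, a working sub_filter block might look like this (the upstream address is illustrative):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;
    # Ask the upstream for an uncompressed body so sub_filter can rewrite it
    proxy_set_header Accept-Encoding "";
    sub_filter 'http://127.0.0.1:8080/' 'https://$host/';
    sub_filter_once off;   # replace every occurrence, not just the first
}
```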
-
-
www.nginx.com www.nginx.com
-
# request will be sent to backend without uri changed
# to '/' due to if
location /proxy-pass-uri {
    proxy_pass http://127.0.0.1:8080/;
    set $true 1;
    if ($true) {
        # nothing
    }
}
This is a weird one. I guess as long as you aren't using $uri it has no impact though.
-
-
lua.2524044.n2.nabble.com lua.2524044.n2.nabble.com
-
Is anyone aware of a lua http lib that supports keepalive?
When sending a request you can pass the following keepalive settings which will keep the connection open:
local http = require "resty.http"
local httpc = http.new()
httpc:connect("127.0.0.1", 9081)
local response, err = httpc:request({
    path = "/proxy" .. ngx.var.request_uri,
    method = "HEAD",
    headers = ngx.req.get_headers(),
    keepalive_timeout = 60,
    keepalive_pool = 10,
})
-
-
github.com github.com
-
Also, it's always a good idea to add ipv6=off to the resolver directive when your dns server may return IPv6 addresses and your network does not support it.
This might help.
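That is, something like the following (the valid= cache time is an illustrative extra, not part of the quoted advice):

```nginx
resolver 8.8.8.8 ipv6=off valid=30s;
```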
-
-
stackoverflow.com stackoverflow.com
-
Another difference between $uri and $request_uri in proxy_cache_key is that $request_uri will include the anchor part, but $uri$is_args$args will ignore it. Do a curl operation: curl -I static.io/hello.htm
This could be a problem. We'll probably have to fix this when we move via2 outside lms.
-
-
clubhouse.io clubhouse.io
-
Turning the proxy_buffering option off tells NGINX to pass the response directly back to the client; otherwise, it will try to buffer it in memory or on disk. I recommend this if the upstream response can be large.
We may want to not buffer on pdf endpoints.
-
-
www.ruby-forum.com www.ruby-forum.com
-
set $stripped_cookie $http_cookie;
if ($http_cookie ~ "(.*)(?:^|;)\s*sessionid=[^;]+(.*)$") {
    set $stripped_cookie $1$2;
}
if ($stripped_cookie ~ "(.*)(?:^|;)\s*csrftoken=[^;]+(.*)$") {
    set $stripped_cookie $1$2;
}
-
-
serverfault.com serverfault.com
-
location / {
    proxy_pass http://backend;
    # You may need to uncomment the following line if your redirects are relative, e.g. /foo/bar
    #proxy_redirect / /;
    proxy_intercept_errors on;
    error_page 301 302 307 = @handle_redirect;
}
location @handle_redirect {
    set $saved_redirect_location '$upstream_http_location';
    proxy_pass $saved_redirect_location;
}
Usually the redirect is returned as the response and the client follows the redirect. This will follow a redirect inside nginx rather than the client.
-
-
docs.aws.amazon.com docs.aws.amazon.com
-
The X-Forwarded-Proto request header helps you identify the protocol (HTTP or HTTPS) that a client used to connect to your load balancer. Your server access logs contain only the protocol used between the server and the load balancer; they contain no information about the protocol used between the client and the load balancer.
The load balancer may talk to the server via http so using $scheme in nginx when there's an AWS load balancer in front may lead to the $scheme being unexpectedly http instead of https.
http { map $http_x_forwarded_proto $original_scheme { "" $scheme; default $http_x_forwarded_proto; } }
-
-
stackoverflow.com stackoverflow.com
-
I had a similar issue with nginx+passenger (for Ruby on Rails / Rack / etc.), and I confirm that by default, multiple slashes are collapsed (in both PATH_INFO and REQUEST_URI). Adding merge_slashes off; in the server context of the nginx configuration fixed it
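That fix as a fragment (valid in the server or http context):

```nginx
server {
    # preserve repeated slashes in URIs instead of collapsing them
    merge_slashes off;
}
```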
-
-
openresty-reference.readthedocs.io openresty-reference.readthedocs.io
-
non-blocking internal requests
Note ngx.location.capture only works on internal requests, which means if you want to request an external endpoint dynamically then you need to set up something like below and call that internal endpoint instead of calling the external url directly.
Say for example you want to send a request to the / endpoint with the third-party url as part of the path (http://proxy-server.com/http://example.com).
location /external/ {
    internal;
    set $upstream "";
    rewrite_by_lua_file ./lua/get_external.lua;
    resolver 8.8.8.8;
    proxy_pass $upstream;
}
Where lua/get_external.lua:
-- strip beginning '/' from uri path
ngx.var.upstream = ngx.var.request_uri:sub(2)
-
-
github.com github.com
-
set $template_root /usr/local/openresty/nginx/html/templates;
We should probably use this instead of root since root has other implications.
-
-
github.com github.com
-
docker-openresty/alpine/Dockerfile.fat
openresty nginx image that Hypothesis's new proxy-server is based on.
-
-
stackoverflow.com stackoverflow.com
-
# kill cache
add_header Last-Modified $date_gmt;
add_header Cache-Control 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
if_modified_since off;
expires off;
etag off;
disable nginx caching
-
-
www.nginx.com www.nginx.com
-
Because it can handle a high volume of connections, NGINX is commonly used as a reverse proxy and load balancer to manage incoming traffic and distribute it to slower upstream servers – anything from legacy database servers to microservices.
-
Dynamic sites, built using anything from Node.js to PHP, commonly deploy NGINX as a content cache and reverse proxy to reduce load on application servers and make the most effective use of the underlying hardware.
-
With its event-driven, asynchronous architecture, NGINX revolutionized how servers operate in high-performance contexts and became the fastest web server available.
-
NGINX has grown along with it and now supports all the components of the modern Web, including WebSocket, HTTP/2, and streaming of multiple video formats (HDS, HLS, RTMP, and others).
-
NGINX consistently beats Apache and other servers in benchmarks measuring web server performance.
-
- Sep 2019
-
openresty-reference.readthedocs.io openresty-reference.readthedocs.io
-
This API function (as well as ngx.location.capture_multi) always buffers the whole response body of the subrequest in memory. Thus, you should use cosockets and streaming processing instead if you have to handle large subrequest responses.
So my interpretation of this is the request is issued, the entire response is buffered in memory, then res.headers is read from that buffer. I wonder if there is a way to cut off this buffer and close the connection early such that only the headers make it into the buffer for responses that are very large.
-
Network I/O operations in user code should only be done through the Nginx Lua API calls as the Nginx event loop may be blocked and performance drop off dramatically otherwise. Disk operations with relatively small amount of data can be done using the standard Lua io library but huge file reading and writing should be avoided wherever possible as they may block the Nginx process significantly. Delegating all network and disk I/O operations to Nginx's subrequests (via the ngx.location.capture method and similar) is strongly recommended for maximum performance.
Very important ngx.location.capture does not block in nginx.
-
-
www.digitalocean.com www.digitalocean.com
-
Nginx, by default, will consider any header that contains underscores as invalid.
Interesting.
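If you do need to accept such headers rather than rename them, nginx can be told to pass them through (server or http context):

```nginx
underscores_in_headers on;
```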
-
This is a really great introductory article to how nginx works. Definitely worth a read!
-
While buffering can help free up the backend server to handle more requests, Nginx also provides a way to cache content from backend servers, eliminating the need to connect to the upstream at all for many requests.
This seems like something we might want to look into long term, but I also wonder whether it's really necessary. I thought nginx cached responses by default, in which case I don't think it would even go out to the upstream server; it would just return the previous still-valid response. But maybe I'm missing something.
-
-
stackoverflow.com stackoverflow.com
-
local LuaSocket = require("socket")
client = LuaSocket.connect("example.com", 80)
client:send("GET /login.php?login=admin&pass=admin HTTP/1.0\r\nHost: example.com\r\n\r\n")
while true do
    s, status, partial = client:receive('*a')
    print(s or partial)
    if status == "closed" then break end
end
client:close()
How to issue an http request through a socket in lua.
-
-
stackoverflow.com stackoverflow.com
-
ngx.unescape_uri(str)
decode a url string
-
-
nginx.org nginx.org
-
proxy_ssl_trusted_certificate file;
And we should probably specify this.
-
proxy_ssl_verify on
We should probably turn this on.
-
proxy_ssl_server_name on
Turn this on when you need to proxy to an https server.
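The three proxy_ssl_* annotations above combine into something like this (the upstream host and CA bundle path are illustrative):

```nginx
location / {
    proxy_pass https://upstream.example.com;
    proxy_ssl_server_name on;    # send SNI so virtual-hosted HTTPS upstreams work
    proxy_ssl_verify on;         # actually check the upstream certificate
    proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
}
```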
-
There could be several proxy_redirect directives:
proxy_redirect default;
proxy_redirect http://localhost:8000/ /;
proxy_redirect http://www.example.com/ /;
In case there are multiple layers of redirection.
-
proxy_redirect ~*/user/([^/]+)/(.+)$ http://$1.example.com/$2;
So you can basically redirect anything to be the original url so the client is not aware there were layers of redirects by:
proxy_redirect ~^.*$ /original/endpoint;
-
-
serverfault.com serverfault.com
-
proxy_redirect $scheme://$host:$server_port/ /gitlab/;
This doesn't seem to be working as I'd expect it to override the location response header.
-
-
serverfault.com serverfault.com
-
proxy_redirect: If you need to modify the redirect (i.e. location header) being returned to the browser
-
-
stackoverflow.com stackoverflow.com
-
ngx.log(ngx.STDERR, 'your message here')
For debugging and printing strings to the console.
-
-
www.lua.org www.lua.org
-
print(a .. " World") --> Hello World
-
-
github.com github.com
-
more_clear_headers
Look into this as the nginx set header Cookie "" doesn't appear to be working. Might also be able to use lua header filter.
-
openresty/headers-more-nginx-module
Allows you to clear headers.
-
-
openresty-reference.readthedocs.io openresty-reference.readthedocs.io
-
header_filter_by_lua 'ngx.header.Foo = "blah"';
Not sure why you wouldn't use raw nginx for this.
-
access_by_lua '
    local res = ngx.location.capture("/auth")
    if res.status == ngx.HTTP_OK then
        return
    end
    if res.status == ngx.HTTP_FORBIDDEN then
        ngx.exit(res.status)
    end
    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
';
Potentially something like this for getting the content-type response header and using that to decide which proxy server to send the original request to.
-
The Lua code cache can be temporarily disabled during development by switching lua_code_cache off
For dev mode switch the lua file cache off.
-
-
sites.psu.edu sites.psu.edu
-
# map to different upstream backends based on header
map $http_x_server_select $pool {
    default "apache";
    staging "staging";
    dev "development";
}
case statement on the header X-Server-Select. I wonder if we could do something similar for rate limiting ip selection.
-
-
www.getpagespeed.com www.getpagespeed.com
-
location ~* ^/proxy/(?<pschema>https?)/(?<phost>[\w.]+)(?<puri>/.*) {
    set $adr $pschema://$phost;
    rewrite .* $puri break;
    proxy_pass $adr;
}
Nice idea as it doesn't use lua but also a bit fragile I think.
-
-
-
You can only really adjust worker_connections, since worker_processes is based on the number of CPUs you have available.
-
the product of your worker_connections and worker_processes directive settings
-
worker_rlimit_nofile directive. This changes the limit on the maximum number of open files
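A sketch of the three directives together (the numbers are illustrative, not recommendations):

```nginx
worker_processes auto;        # typically one per available CPU
worker_rlimit_nofile 4096;    # raise each worker's open-file limit

events {
    # max clients ~= worker_processes * worker_connections;
    # must stay under worker_rlimit_nofile
    worker_connections 2048;
}
```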
-
-
github.com github.com
-
it defaults to /etc/nginx/conf.d/default.conf
-
-
github.com github.com
-
default nginx config for openresty lua
-
-
github.com github.com
-
docs on lua templating
-
-
stackoverflow.com stackoverflow.com
-
resolver 8.8.8.8;
-
-
docs.coronalabs.com docs.coronalabs.com
-
string.sub() Returns a substring (a specified portion of an existing string).
-
-
github.com github.com
-
local data = ngx.req.get_body_data()
-
- Jun 2019
- Jan 2019
-
-
this is in /srv/www/ on the host.
This site actually gives somewhat clear instructions about which directories to run the commands from. I think where I went wrong before was using various directories that in the end did not match the actual installations.
-
-
unify.id unify.id
-
- Dec 2018
-
rzetterberg.github.io rzetterberg.github.io
-
- Nov 2018
-
stackoverflow.com stackoverflow.com
- Oct 2018
-
stackoverflow.com stackoverflow.com
-
www.wormly.com www.wormly.com
-
-
www.digitalocean.com www.digitalocean.com
-
blog.cloudflare.com blog.cloudflare.com
-
github.com github.com
-
github.com github.com
-
-
maestroh.github.io maestroh.github.io
-
-
gkedge.gitbooks.io gkedge.gitbooks.io
-
-
stackoverflow.com stackoverflow.com
-
codeburst.io codeburst.io
-
-
www.nginx.com www.nginx.com
-
gist.github.com gist.github.com
-
-
www.nginx.com www.nginx.com
-
medium.com medium.com
-
github.com github.com
-
This NGINX module provides a mechanism to cryptographically bind HTTP cookies to the client's HTTPS channel using Token Binding, as defined by the following IETF drafts:
-
- Apr 2018
- Feb 2018
-
www.drupal.org www.drupal.org
-
- Aug 2017
-
www.digitalocean.com www.digitalocean.com
-
when the location is matched using regular expressions
nginx will send the original client request URI
-
- Jul 2017
-
www.nginx.com www.nginx.com
-
openresty.org openresty.org
-
since lua-resty-redis cannot be used in set_by_lua*, dynamic routing based on redis should be implemented in the access_by_lua block
-
-
stackoverflow.com stackoverflow.com
-
using set_by_lua to set the url to proxy_pass on-the-fly
-
-
www.digitalocean.com www.digitalocean.com
-
nginx.org nginx.org
-
In this configuration nginx tests only the request’s header field “Host” to determine which server the request should be routed to. If its value does not match any server name, or the request does not contain this header field at all, then nginx will route the request to the default server for this port. In the configuration above, the default server is the first one — which is nginx’s standard default behaviour. It can also be set explicitly which server should be default, with the default_server parameter in the listen directive:
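The explicit form mentioned at the end of the passage looks like this (the server name is illustrative):

```nginx
server {
    listen 80 default_server;
    server_name example.org www.example.org;
}
```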
-
-
www.digitalocean.com www.digitalocean.com
-
First, Nginx looks at the IP address and the port of the request. It matches this against the listen directive of each server to build a list of the server blocks that can possibly resolve the request.
-
The main server block directives that Nginx is concerned with during this process are the listen directive, and the server_name directive.
-
- Apr 2017
- Mar 2017
-
laravel-recipes.com laravel-recipes.com
- Feb 2017
-
blog.stickleback.dk blog.stickleback.dk
-
www.routerperformance.net www.routerperformance.net
-
www.digitalocean.com www.digitalocean.com
- Jan 2017
-
groups.google.com groups.google.com
-
A discussion on deployment of Kong with OWASP mod_security
-