108

I have a rather large and slow (complex data, complex frontend) web application built in RoR, served by Puma with nginx as a reverse proxy. Looking at the nginx error log, I see quite a few entries like:

2014/04/08 09:46:08 [warn] 20058#0: *819237 an upstream response is buffered to a temporary file 
    /var/lib/nginx/proxy/8/47/0000038478 while reading upstream, 
    client: 5.144.169.242, server: engagement-console.foo.it, 
    request: "GET /elements/pending?customer_id=2&page=2 HTTP/1.0", 
    upstream: "http://unix:///home/deployer/apps/conversationflow/shared/sockets/puma.sock:/elements/pending?customer_id=2&page=2", 
    host: "ec.reputationmonitor.it", 
    referrer: "http://ec.foo.it/elements/pending?customer_id=2&page=3"

I am rather curious, as it's very unlikely that the page stays the same across different users and different user interactions, so I would not think that buffering the response on disk is necessary or useful.

I know about proxy_max_temp_file_size and setting it to 0, but it seems a little awkward to me (my proxy tries to buffer but has no file to buffer to... how can that be faster?).

My questions are:

  1. How can I remove the [warn] and avoid buffering responses? Is it better to turn off proxy_buffering or to set proxy_max_temp_file_size to 0? Why?

  2. If nginx buffers a response: When does it serve the buffered response, to whom, and why?

  3. Why does nginx turn proxy_buffering on by default and then [warn] you if it actually buffers a response?

  4. When does a response trigger that option? When it takes more than some number of seconds (how many?) to serve the response? Is this configurable?

TIA, ngw.

ngw
  • 3
    I have a feeling that you are confusing buffering with caching. Buffering is the mechanism that allows handling more data than the allocated memory can hold. – Slavic Jul 25 '16 at 15:32

4 Answers

119
  1. How can I remove the [warn] and avoid buffering responses? Is it better to turn off proxy_buffering or set proxy_max_temp_file_size to 0? Why?

You should set proxy_max_temp_file_size to 0 in order to remove the warning. The proxy_buffering directive isn't directly related to it. You can switch proxy_buffering off to stop any buffering at all, but that isn't recommended in general (unless it's needed for Comet).
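
For reference, this is roughly how the two options look in the nginx config, inside the http, server or location block that holds your proxy_pass (a sketch, not a drop-in config):

proxy_max_temp_file_size 0;   # keep buffering in memory, but never spill to a temp file on disk
# proxy_buffering off;        # alternative (generally not recommended): no buffering at all, stream straight to the client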

  2. If nginx buffers a response: when does it serve the buffered response, to whom, and why?

It serves the response immediately, but a client usually has a much slower connection and can't consume the response data as fast as it is produced by your application. Nginx tries to buffer the whole response in order to release your application ASAP.

See also: http://aosabook.org/en/nginx.html

  3. Why does nginx turn proxy_buffering on by default and then [warn] you if it actually buffers a response?

As I already mentioned, proxy_buffering isn't directly related to the warning. It's generally needed for efficient proxy operation, and turning it off degrades performance and throughput.

Nginx only warns you when a response doesn't fit into configured memory buffers. You may ignore the warning if it's ok for you.

  4. When does a response trigger that option? When it takes more than some number of seconds (how many?) to serve the response? Is this configurable?

It triggers when the memory buffers are full. Please look at the docs; the whole mechanism is explained there: http://nginx.org/r/proxy_max_temp_file_size

You may want to increase memory buffers.
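
For example, something like the following (the sizes are only illustrative; tune them to your typical response size and per-connection memory budget):

proxy_buffer_size 16k;        # first part of the response, usually just the headers
proxy_buffers 32 16k;         # 32 buffers of 16k (512k in total) per connection for the body
proxy_busy_buffers_size 32k;  # portion of those buffers that may be busy sending to the client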

VBart
  • 5
    For #1, how does removing the size limit of temporary files prevent the buffering warning? I don't think this is correct, because I have this directive set to 0 and still get warnings. – Phil May 09 '17 at 08:44
  • 2
    This answer is not clear enough: is it better for performance to set proxy_max_temp_file_size to 0, or is this only a way to remove that warning? – Offir Oct 20 '19 at 09:11
  • 5
    I've set proxy_max_temp_file_size to 0 and still get the warnings. I have oodles of spare memory, what should I change to utilise it and avoid this message? – Codemonkey Jul 09 '20 at 20:04
  • 1
    For reference I get this message when a user downloads a large JPEG file (let's say 20MB). The default setting is apparently 1024MB, so even without setting it to 0 you'd think the warning wouldn't occur. A bug in nginx? – Codemonkey Jul 09 '20 at 20:05
  • As of October 2020, proxy_max_temp_file_size causes an error; maybe the directive doesn't exist anymore? – realtebo Oct 14 '20 at 07:18
  • Another fun time you can get this is when the requestor's server goes away or is blocked. I just triggered it, for example, by firewalling the user's IP address after a request had come in, but before the response had gone out. – mlissner Apr 11 '22 at 20:16
  • This setting worked for me and suppressed the messages. As I understood it, the warning appears because proxy_buffers is not big enough to hold the response in memory while it is being sent to the client (e.g. big file, slow client), so nginx offloads it to a temp file in order to still close the upstream connection quickly; but reading from and writing to a file is slow, hence the warning nudging you to tweak the buffer settings. Setting proxy_max_temp_file_size to 0 turns that off, so the upstream suffers, since nginx can't close the connection quickly and has to keep it open for as long as the client is reading. – Rowanto Sep 14 '23 at 22:12
32

The following configuration works fine on my server.

proxy_buffers 16 16k;    # 16 buffers of 16k each (256k per connection) for the response body
proxy_buffer_size 16k;   # separate buffer for the first part of the response (usually the headers)
Haluk
  • 7
    Is the second directive redundant, i.e. the 16k in the first line is doing the exact same as the second line? – EoghanM Jul 05 '16 at 10:55
  • 10
    @EoghanM According to the docs, no. proxy_buffer_size (buffer, not plural) applies to the first part of the proxied server's response (i.e. the headers): "Sets the size of the buffer used for reading the first part of the response received from the proxied server. This part usually contains a small response header." proxy_buffers is for the rest of the response. – cde Dec 08 '16 at 18:53
  • 4
    You have saved some of my hair today. I was moving nginx from our server to a docker container and it started to be INCREDIBLY slow. This "fixed" it. Not sure if the version on our server had this on by default or not, but the one in the container definitely needed these settings. – Krystian Jan 02 '18 at 11:33
  • 13
    Is there a good strategy for determining the values of proxy_buffers, and proxy_buffer_size? – Brian Jan 09 '20 at 07:42
  • 4
    What are the defaults for these two configs? – realtebo Jan 11 '21 at 15:47
  • 7
    I found some good explanation here: https://www.getpagespeed.com/server-setup/nginx/tuning-proxy_buffer_size-in-nginx – Melroy van den Berg Jan 18 '21 at 21:35
  • 1
    @Brian proxy_buffer_size is often 8k by default, which is usually enough for response headers. The default for proxy_buffers is then proxy_buffers 8 8k; – Melroy van den Berg Jan 18 '21 at 21:38
0

The nginx sendfile directive might help with this for static content.

https://docs.nginx.com/nginx/admin-guide/web-server/serving-static-content/

Enabling sendfile:

By default, NGINX handles file transmission itself and copies the file into the buffer before sending it. Enabling the sendfile directive eliminates the step of copying the data into the buffer and enables direct copying data from one file descriptor to another. To prevent one fast connection from entirely occupying the worker process, you can use the sendfile_max_chunk directive to limit the amount of data transferred in a single sendfile() call (in this example, to 1 MB):

location /mp3 {
    sendfile           on;
    sendfile_max_chunk 1m;
    #...
}
0

Add these to nginx.conf or to a virtual-host config file, in the http, server, or location context:

proxy_max_temp_file_size 10240m;   # cap the on-disk temp file at 10 GB per request
proxy_buffers 240 240k;            # 240 in-memory buffers of 240k each per connection
proxy_busy_buffers_size 240k;      # how much of that may be busy sending to the client at once
proxy_buffer_size 240k;            # buffer for the first part of the response (headers)

You can adjust the values to your situation, but they must be balanced with each other (don't use wildly different sizes for the buffer directives).
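
To put those numbers in perspective: proxy_buffers 240 240k lets nginx hold up to 240 × 240 KB ≈ 56 MB of a single response in memory per proxied connection before it starts writing to a temp file, and proxy_max_temp_file_size 10240m then caps that temp file at roughly 10 GB per request, so this configuration trades a fair amount of memory (and potentially disk) for keeping the upstream connection short.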

I had the same problem, and I tested these settings with a script doing heavy, long-running IO, SQL, and PHP processing; it works fine.

Eyni Kave