THREESCALE-12258 Stream response back when using proxy #1572
tkan145 wants to merge 3 commits into 3scale:master
What
THREESCALE-12258
In the proxy code, we call `httpc:proxy_response(res)`, and lua-resty-http then calls `sock:receive(max_chunk_size)`. When the `Content-Length` header exists but no `max_chunk_size` is passed in, the function will try to read a single chunk whose size equals the value of the `Content-Length` header (see here). On every iteration of the loop, `proxy_response` then allocates a string of that size on the Lua/LuaJIT GC heap. These strings are short-lived (passed to `ngx.print` and then discarded), but LuaJIT's GC doesn't free them immediately (read more here). With a large response body, the dead strings pile up; LuaJIT's GC is incremental and may not keep up, so peak memory usage can be much higher than the actual response body size.
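A minimal sketch of the idea behind the fix, assuming lua-resty-http's `proxy_response(response, chunksize)` signature; the upstream host/port, request path, and the 64 KiB chunk size here are illustrative only, not taken from the actual patch:

```lua
-- Hypothetical sketch: bound the per-iteration read size so each
-- ngx.print() only allocates a small transient Lua string, instead of
-- one string the size of the whole Content-Length.
local http = require("resty.http")

local httpc = http.new()
assert(httpc:connect("upstream.example.com", 80))  -- illustrative upstream

local res = assert(httpc:request({ path = "/large-body" }))

-- Passing an explicit chunk size caps the size of each short-lived
-- string placed on the LuaJIT GC heap while the body is proxied.
httpc:proxy_response(res, 64 * 1024)
httpc:set_keepalive()
```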
Before
After
NOTE: we can see that even after this patch, the response is still fully buffered in memory.
When running on a local machine, I found that responses were sent almost instantly and memory usage remained stable at around ~60MB throughout the request.
However, when running in an OCP cluster and sending a request from outside the cluster, the response appeared to be buffered and only sent after the entire response had been downloaded.
request with 30M payload

Requests to the same gateway via a different pod don't seem to cache the response, so I suspect there's something funny with the OCP routing/ingress.
We can call `ngx.flush(true)` immediately after `ngx.print` to flush the buffer. This essentially achieves the same result as setting `proxy_buffering` to `off`. However, I remain uncertain about the merits of this idea.

The response served via our Lua code using the proxy policies will have the `Transfer-Encoding: chunked` header (we strip the `Content-Length` header here).

via nginx
via proxy policies
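The flush-after-print idea described above could be sketched like this, assuming lua-resty-http's `res.body_reader` iterator; the 64 KiB chunk size is an assumption for illustration:

```lua
-- Sketch: flush after every print so each chunk is pushed to the client
-- instead of accumulating in nginx's output buffers (similar in effect
-- to proxy_buffering off).
local reader = res.body_reader
repeat
  local chunk, err = reader(64 * 1024)  -- bounded read size (assumption)
  if err then
    ngx.log(ngx.ERR, "body read error: ", err)
    break
  end
  if chunk then
    ngx.print(chunk)
    ngx.flush(true)  -- wait until this chunk is written to the client
  end
until not chunk
```

`ngx.flush(true)` blocks until the data has actually been written out, which is what makes the response stream rather than buffer, at the cost of tying the Lua handler's progress to the client's read speed.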
Verification steps
Modify `docker-compose.yaml` as follows:
Take note of the memory usage.