Conversation
Important: Review skipped (draft detected). Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Configuration used: Organization UI · Review profile: CHILL · Plan: Pro
**Walkthrough**

Adds bidirectional incremental HTML streaming with async-props support, a length-prefixed wire format, a VM execution-context refactor, a request-prechecks abstraction, an HTTPX `stream_bidi` patch with explicit `request.close` behavior, an `AsyncPropsManager`, new Node/Rails endpoints and emitters, and many tests/fixtures across the Node renderer, Pro client, and Rails integration.
**Sequence Diagram(s)**

sequenceDiagram
participant Client
participant NodeRenderer as Node Renderer
participant StreamParser as NDJSON Stream Parser
participant ReqPrechecks as Request Prechecks
participant ExecCtx as ExecutionContext/VM
participant AsyncMgr as AsyncPropsManager
participant Response
Client->>NodeRenderer: POST /bundles/:ts/incremental-render/:digest (NDJSON stream)
NodeRenderer->>ReqPrechecks: performRequestPrechecks(firstChunkBody)
NodeRenderer->>StreamParser: consume request.raw (with chunk timeout)
StreamParser->>NodeRenderer: onRenderRequestReceived(firstObj)
NodeRenderer->>ExecCtx: handleIncrementalRenderRequest(firstObj) => {response, sink?}
ExecCtx->>Response: emit initial length-prefixed chunk
Client->>StreamParser: send update NDJSON line
StreamParser->>NodeRenderer: onUpdateReceived(updateObj)
NodeRenderer->>ExecCtx: sink.add(updateObj)
ExecCtx->>ExecCtx: runInVM(updateChunk)
ExecCtx->>AsyncMgr: setProp(propName, value)
AsyncMgr->>ExecCtx: resolve awaiting async-prop promises
ExecCtx->>Response: emit updated length-prefixed chunk
Client->>StreamParser: close stream
StreamParser->>NodeRenderer: onRequestEnded()
NodeRenderer->>ExecCtx: sink.handleRequestClosed()
ExecCtx->>AsyncMgr: endStream()
Response->>Client: final chunk / stream closed
**Estimated code review effort**: 🎯 5 (Critical) | ⏱️ ~120 minutes
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: da37b46080
ℹ️ About Codex in GitHub
Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".
```erb
<%# TODO Remove after adding support for requesting async props from node renderer side %>

<% if @rsc_payload_component_name.start_with? "AsyncPropsComponent" %>
  <%= rsc_payload_react_component_with_async_props("AsyncPropsComponent", props: { name: "John Doe", age: 30, description: "Software Engineer" }) do |emit|
```
**Remove hardcoded async-props rendering from shared RSC view**

This template now special-cases any `@rsc_payload_component_name` starting with "AsyncPropsComponent" and always renders "AsyncPropsComponent" with fixed props. That overrides the caller-provided component name/props for that namespace and introduces artificial sleep delays in request handling. Because `react_on_rails_pro/rsc_payload.text.erb` is the shared production payload view, this test-specific branch can corrupt real payload responses and add latency for matching component names.
```js
addAsyncPropsCapabilityToComponentProps() {
  throw new Error('addAsyncPropsCapabilityToComponentProps requires react-on-rails-pro package');
},
```
**Add missing core stub for `getOrCreateAsyncPropsManager`**

The core object now defines a Pro-only stub for `addAsyncPropsCapabilityToComponentProps`, but it does not define one for `getOrCreateAsyncPropsManager`, even though that method was added to `ReactOnRailsInternal`. In core-only builds, calling `ReactOnRails.getOrCreateAsyncPropsManager` will therefore hit `undefined` and throw a generic `TypeError` instead of the intended explicit "requires react-on-rails-pro" error contract used by other Pro APIs.
**Greptile Summary**
**Confidence Score: 4/5**

Not safe to merge until the missing item is addressed. One confirmed P1 bug:
**Sequence Diagram**

sequenceDiagram
participant Rails
participant HTTPX
participant Node as Node Renderer
participant React
Rails->>HTTPX: POST /incremental-render (stream_bidi)
HTTPX->>Node: NDJSON line 1: {renderingRequest, onRequestClosedUpdateChunk, ...}
Node->>React: renderServerComponent(props w/ getReactOnRailsAsyncProp)
React-->>Node: suspend on asyncPropsManager.getProp("users")
Node-->>HTTPX: streaming HTML shell (length-prefixed chunks)
HTTPX-->>Rails: HTML shell chunks → browser
par Rails sends async props concurrently
Rails->>HTTPX: NDJSON line 2: {bundleTimestamp, updateChunk: setProp("users", ...)}
HTTPX->>Node: updateChunk JS executed
Node->>React: asyncPropsManager.setProp("users", data)
React-->>Node: resumes, more HTML
Node-->>HTTPX: more HTML chunks
HTTPX-->>Rails: HTML chunks → browser
end
Rails->>HTTPX: close request (END_STREAM)
HTTPX->>Node: handleRequestClosed → endStream()
Node->>React: reject unresolved props
React-->>Node: final HTML / error boundaries
Node-->>HTTPX: stream end
HTTPX-->>Rails: done
Reviews (2): Last reviewed commit: "Replace NDJSON envelope with length-pref..."
```js
onResponseStartPromise = Promise.resolve(onResponseStart(response));

if (!continueFlag) {
  return;
}
```
**Unhandled Promise rejection on early return**

When `shouldContinue` is false (e.g., auth failure or protocol-version mismatch), `onResponseStartPromise` is assigned but the function returns immediately, leaving the promise floating with no `.catch()` handler. If `setResponse` later rejects (for example, because the connection was reset between the content-type parser finishing and `res.send()` writing the headers), this becomes an unhandled Promise rejection. In Node.js 15+, unhandled rejections crash the process by default, which would kill the worker.
The fix is to await the promise before the early return:
```diff
 onResponseStartPromise = Promise.resolve(onResponseStart(response));
 if (!continueFlag) {
+  await onResponseStartPromise;
   return;
 }
```
```ruby
loop do
  stream_response = @request_executor.call(send_bundle, barrier)

  # Chunks can be merged during streaming, so we separate them by newlines
  # Also, we check the status code inside the loop block because calling `status` outside the loop block
  # is blocking, it will wait for the response to be fully received
  # Look at the spec of `status` in `spec/react_on_rails_pro/stream_spec.rb` for more details
  process_response_chunks(stream_response, error_body, &block)
  break
rescue HTTPX::HTTPError => e
  send_bundle = handle_http_error(e, error_body, send_bundle)
rescue HTTPX::ReadTimeoutError => e
  raise ReactOnRailsPro::Error, "Time out error while server side render streaming a component.\n" \
                                "Original error:\n#{e}\n#{e.backtrace}"
end

barrier.wait
```
**Uncaught exception from `async_props_block` propagates after response is already streamed**
If async_props_block.call(emitter) raises (e.g. a database timeout), the exception is stored in the Async::Barrier task. process_response_chunks will complete and break the loop normally, and then barrier.wait re-raises the exception. At that point the streaming HTML has already been fully sent to the client (or partially, if React is still rendering).
The propagating exception will unwind the Rails request stack, potentially triggering error-handling middleware that tries to issue a 500 response on an already-started stream — and will surface as a noisy uncaught exception in logs even when the user already received a complete page.
The ensure request.close inside the barrier.async block does correctly notify the Node renderer (so asyncPropsManager.endStream() fires and React shows error boundaries), but the Ruby side exception handling needs to explicitly rescue and log the barrier error rather than let it propagate:
```ruby
begin
  barrier.wait
rescue StandardError => e
  Rails.logger.error do
    "[ReactOnRailsPro] Error in async_props_block: #{e.class} - #{e.message}\n" \
      "#{e.backtrace&.first(5)&.join("\n")}"
  end
end
```

```ruby
  response
end

# Executes an HTTP POST request using build_request pattern.
# For streaming requests, calls request.close to send END_STREAM flag.
def execute_http_request(path, form: nil, json: nil, stream: false)
  body, content_type = encode_request_body(form: form, json: json)

  request_options = {
    headers: { "content-type" => content_type },
    body: body
  }
  request_options[:stream] = true if stream

  request = connection.build_request("POST", path, **request_options)
  request.close if stream # Signal end of request body to send END_STREAM flag

  connection.request(request)
end

# Encodes request body for use with build_request.
# Supports both form data (with automatic multipart detection) and JSON data.
def encode_request_body(form: nil, json: nil)
  if form
    encoder = if HTTPX::Transcoder::Multipart.multipart?(form)
                HTTPX::Transcoder::Multipart.encode(form)
              else
                HTTPX::Transcoder::Form.encode(form)
              end
    [encoder.to_s, encoder.content_type]
  elsif json
    [JSON.generate(json), "application/json"]
  else
    raise ArgumentError, "Either form: or json: must be provided"
  end
end

def form_with_code(js_code, send_bundle)
  form = common_form_data
  form["renderingRequest"] = js_code
```
**Dead code: `json:` branch in `encode_request_body` is never exercised**

`perform_request` only ever receives `form:` (never `json:`), and `execute_http_request` passes through the same keywords. No existing call site passes `json:` to either method. The `elsif json` branch in `encode_request_body` is unreachable dead code, which may confuse future maintainers into thinking JSON request bodies are already supported.
Consider either removing the `json:` parameter until a concrete caller is added, or adding a comment noting it is reserved for a planned use case.
```js
// Stream ended normally
await onRequestEnded();
await onResponseStartPromise;
```
`onResponseStartPromise` remains `null` if the NDJSON stream ends without a single object being received (e.g. client disconnected immediately). `await null` is harmless, so there is no crash, but `onRequestEnded` fires with `incrementalSink === undefined` (a no-op) and no error response is ever sent to the client. Consider adding an explicit guard:

```js
if (!hasReceivedFirstObject) {
  throw new Error('Incremental render stream ended without receiving any NDJSON objects');
}
```

```js
/**
 * If no chunk is received within this timeout, the request is cancelled.
 * This prevents requests from hanging forever when clients disconnect without properly closing the stream.
 */
export const STREAM_CHUNK_TIMEOUT_MS = 20 * 1000; // 20 seconds
```
**`STREAM_CHUNK_TIMEOUT_MS` (20 s) may be too short for slow async props**

The 20-second per-chunk timeout will cancel an incremental render if the Ruby side takes more than 20 s to emit the next NDJSON line. A long-running DB query, a slow external API call, or a GC pause under load can easily exceed this. The timeout fires a `StreamChunkTimeoutError`, which propagates through the outer catch in `worker.ts` and destroys the connection.
Consider making this configurable via the existing `Config` mechanism so operators can tune it per deployment, or at least align the default with the `read_timeout` already set in the HTTPX connection options.
Actionable comments posted: 15
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/react-on-rails-pro-node-renderer/src/worker/checkProtocolVersionHandler.ts (1)
43-55: ⚠️ Potential issue | 🟠 Major
**Don't destructure or echo the raw body in the 412 path.**
Line 44 assumes `body` is always an object, and Line 54 serializes the entire payload into the error text. That turns an empty/malformed request into a 500, and a protocol mismatch into a response that reflects values like `password` or large props back to the client. Falling back to an empty object and reporting field names instead of values keeps the diagnostic without leaking request data.
🔒 Suggested change

```diff
-export function checkProtocolVersion(body: RequestBody) {
-  const { protocolVersion: reqProtocolVersion, gemVersion, railsEnv } = body;
+export function checkProtocolVersion(body: RequestBody | undefined) {
+  const safeBody = body ?? {};
+  const { protocolVersion: reqProtocolVersion, gemVersion, railsEnv } = safeBody;
+  const receivedFields = Object.keys(safeBody).join(', ') || 'none';
@@
       reqProtocolVersion
         ? `request protocol ${reqProtocolVersion}`
-        : `MISSING with body ${JSON.stringify(body)}`
+        : `MISSING (received fields: ${receivedFields})`
     } does not match installed renderer protocol ${packageJson.protocolVersion} for version ${packageJson.version}. Update either the renderer or the Rails server`,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/src/worker/checkProtocolVersionHandler.ts` around lines 43 - 55, In checkProtocolVersion, avoid destructuring and echoing the raw RequestBody: replace the current destructuring of body and JSON.stringify(body) with safe checks (e.g., treat body as {} unless typeof body === 'object' && body !== null) and extract only presence indicators (or the literal protocol version if safe) for protocolVersion, gemVersion, and railsEnv instead of serializing the whole payload; update the 412 error text to mention which fields are present/missing (field names) and the expected packageJson.protocolVersion / packageJson.version rather than dumping request values so no sensitive or large request content is reflected back.
🟡 Minor comments (8)
scripts/publish-to-builds-repo.sh-153-155 (1)
153-155: ⚠️ Potential issue | 🟡 Minor
**Portability: `sed -i` syntax differs between macOS and Linux.**
On macOS (BSD sed), `sed -i` requires a backup extension argument (use `sed -i ''` for no backup). On Linux (GNU sed), it works without the argument. This script will fail on macOS with "invalid command code".
🔧 Portable fix using a temp file or Perl

Option 1: Use Perl (available on both platforms):

```diff
 if grep -q '"workspace:\*"' "$PKG_JSON" 2>/dev/null; then
-  sed -i "s/\"workspace:\*\"/\"$VERSION\"/g" "$PKG_JSON"
+  perl -i -pe "s/\"workspace:\*\"/\"$VERSION\"/g" "$PKG_JSON"
   echo "  Fixed $pkg/package.json"
 fi
```

Option 2: Detect platform:

```diff
 if grep -q '"workspace:\*"' "$PKG_JSON" 2>/dev/null; then
-  sed -i "s/\"workspace:\*\"/\"$VERSION\"/g" "$PKG_JSON"
+  if [[ "$(uname)" == "Darwin" ]]; then
+    sed -i '' "s/\"workspace:\*\"/\"$VERSION\"/g" "$PKG_JSON"
+  else
+    sed -i "s/\"workspace:\*\"/\"$VERSION\"/g" "$PKG_JSON"
+  fi
   echo "  Fixed $pkg/package.json"
 fi
```
Verify each finding against the current code and only fix it if needed. In `@scripts/publish-to-builds-repo.sh` around lines 153 - 155, The sed -i usage in the block that checks PKG_JSON is not portable to macOS; replace the inline sed call so it works on both GNU and BSD sed (or use Perl). Specifically, in the if grep -q ... "$PKG_JSON" block update the sed invocation that currently uses sed -i "s/\"workspace:\*\"/\"$VERSION\"/g" "$PKG_JSON" to a portable form (e.g., use perl -pi -e 's/.../.../g' "$PKG_JSON" or use sed -i.bak ... then rm the .bak) and ensure the variables PKG_JSON, VERSION and pkg remain used identically; remove any created backup files if you choose the .bak route.

CHANGELOG.md-27-32 (1)
27-32: ⚠️ Potential issue | 🟡 Minor
**Changelog formatting here does not follow repo conventions.**
Line 27 introduces a separate `#### Pro` subsection, and Line 31 uses `[PR 2251](...)`. This repo expects Pro entries inline under standard headings with `**[Pro]**`, and PR references formatted as `[2251](url) by [username](url)`.
✍️ Suggested changelog fix

```diff
-#### Pro
-
-##### Fixed
-
-- **HTTPX Streaming Compatibility**: Fixed streaming request timeouts when using HTTPX with both `:stream` and `:stream_bidi` plugins. Refactored `perform_request` to use the `build_request` pattern with explicit `request.close` to send the HTTP/2 `END_STREAM` flag. Also includes a temporary workaround for an [HTTPX stream_bidi plugin retry bug](https://github.com/HoneyryderChuck/httpx/issues/124) that caused crashes on request retries. [PR 2251](https://github.com/shakacode/react_on_rails/pull/2251) by [AbanoubGhadban](https://github.com/AbanoubGhadban).
+#### Fixed
+
+- **[Pro]** **HTTPX Streaming Compatibility**: Fixed streaming request timeouts when using HTTPX with both `:stream` and `:stream_bidi` plugins. Refactored `perform_request` to use the `build_request` pattern with explicit `request.close` to send the HTTP/2 `END_STREAM` flag. Also includes a temporary workaround for an [HTTPX stream_bidi plugin retry bug](https://github.com/HoneyryderChuck/httpx/issues/124) that caused crashes on request retries. [2251](https://github.com/shakacode/react_on_rails/pull/2251) by [AbanoubGhadban](https://github.com/AbanoubGhadban).
```

As per coding guidelines: Pro entries must use an inline `**[Pro]**` tag within standard sections, and CHANGELOG references must be formatted as `[PR NUMBER](url) by [username](url)` without `PR` text.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` around lines 27 - 32, Replace the standalone "#### Pro" subsection and the `**HTTPX Streaming Compatibility**` entry so it uses the inline Pro tag and correct PR formatting: move the entry under the existing appropriate section (e.g., "Fixed") and change the subsection marker to an inline "**[Pro]**" tag inside the bullet, and replace `"[PR 2251](...)"` with the required format `[2251](https://github.com/shakacode/react_on_rails/pull/2251) by [AbanoubGhadban](https://github.com/AbanoubGhadban)` so the bullet reads like the repo convention (include the explicit `request.close` note and mention of the HTTPX workaround in the same bullet).

packages/react-on-rails-pro-node-renderer/tests/worker.test.ts-538-543 (1)
538-543: ⚠️ Potential issue | 🟡 Minor
**Ignore the completeness marker when counting files.**
These assertions assume the directory contains only the uploaded bundle/assets, but `/upload-assets` can also write the completeness marker once the bundle becomes complete. That makes `toHaveLength(3)` / `toHaveLength(1)` false negatives even when the right files are present. Assert the required filenames instead, or filter the marker out before counting.
Based on learnings, the `.complete` marker is written when the bundle directory was previously incomplete and the bundle file is present.
Also applies to: 581-587, 690-693
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/worker.test.ts` around lines 538 - 543, The test currently counts files using fs.readdirSync into bundle1Files/bundle2Files and expects lengths (toHaveLength(3)/toHaveLength(1)), which fails when a '.complete' marker is present; update the assertions to either (a) filter out the completeness marker before counting (e.g., bundle1Files.filter(f => f !== '.complete')) and assert the filtered length, or (b) assert the exact required filenames exist (e.g., expect(bundle1Files).toEqual(expect.arrayContaining(['<bundle-file-name>', 'asset1', 'asset2'])) using the same fs.readdirSync results), and apply the same change to the other occurrences that use bundle1Files/bundle2Files counting logic.

react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb-418-447 (1)
418-447: ⚠️ Potential issue | 🟡 Minor
**This overlap check is timing-dependent.**
`sleep 0.01` plus an exact `execution_order` array will flake if the scheduler runs the stream branch first, or if both branches finish in the same time slice. Use an explicit latch/barrier to prove overlap instead of wall-clock timing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb` around lines 418 - 447, The test's concurrency assertion is timing-dependent; replace the sleep-based check in the test_async_props_block with an explicit synchronization barrier: create two synchronization primitives (e.g., Ruby Queue or Mutex+ConditionVariable) called something like start_q and continue_q, have test_async_props_block push a token into start_q when it begins and then block on continue_q before pushing the end token, and modify the mocked response each block to wait for start_q (ensuring the async block has started) and then push a token into continue_q to allow the async block to finish; update references to test_async_props_block and the allow(mock_response).to receive(:each) stub so the execution_order assertion reliably observes [:async_block_start, :chunk_yielded, :async_block_end].

packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts-467-476 (1)
467-476: ⚠️ Potential issue | 🟡 Minor
**The empty-line spec never writes an empty line.**
This loop only sends valid JSON objects, so the test would still pass if blank NDJSON lines broke parsing. Interleave raw `\n` / `\r\n` writes before asserting the sink calls.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts` around lines 467 - 476, The test currently only writes JSON chunks (chunksToSend) so it never exercises empty NDJSON lines; update the loop that writes to req (the req.write calls inside the for loop) to interleave raw empty-line writes (e.g., write '\n' or '\r\n' directly to req) before or between the JSON writes and before calling waitFor/expect; ensure the assertion using waitFor and sinkAdd still runs after the JSON chunk write so you verify that an empty-line write does not break parsing.

packages/react-on-rails-pro-node-renderer/tests/handleIncrementalRenderStream.test.ts-170-178 (1)
170-178: ⚠️ Potential issue | 🟡 Minor
**This regression case never reaches the threshold it claims to protect.**
`100 * ~1KB` is only about 100 KB, so the spec still passes if line-buffer accounting accidentally keeps accumulating across newline boundaries. To prove the reset behavior, make the aggregate payload exceed `FIELD_SIZE_LIMIT` while keeping each individual line below it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/handleIncrementalRenderStream.test.ts` around lines 170 - 178, The test "resets buffer after processing newlines (no false positive on total size)" currently creates 100 ~1KB lines (~100KB) which doesn't exceed FIELD_SIZE_LIMIT; change the test so the total aggregated payload across lines exceeds FIELD_SIZE_LIMIT while each individual line remains smaller than FIELD_SIZE_LIMIT: use the existing FIELD_SIZE_LIMIT constant to compute a per-line smallJson length (e.g., keep JSON.stringify({data: 'x'.repeat(n)}) length < FIELD_SIZE_LIMIT) and increase the number of chunks (or per-chunk repetition) so that number_of_chunks * per_line_size > FIELD_SIZE_LIMIT, then push that many Buffer.from(smallJson + '\n') into chunks before running handleIncrementalRenderStream to verify reset behavior.

packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderStream.ts-160-163 (1)
160-163: ⚠️ Potential issue | 🟡 Minor
**Await `onResponseStart` before returning early.**
When `shouldContinue` is false, the function exits without awaiting `onResponseStartPromise`. That turns the precheck/error path into fire-and-forget I/O and can surface rejections outside this handler.
🐛 Proposed fix

```diff
 onResponseStartPromise = Promise.resolve(onResponseStart(response));
 if (!continueFlag) {
-  return;
+  await onResponseStartPromise;
+  return;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderStream.ts` around lines 160 - 163, The early-return path after computing onResponseStartPromise (created from onResponseStart(response)) must await that promise to avoid unhandled rejections: in the block that checks continueFlag (the if (!continueFlag) return; near onResponseStartPromise), replace the bare return with awaiting onResponseStartPromise first (e.g., await onResponseStartPromise) and then return; ensure you reference the existing onResponseStartPromise and onResponseStart symbols so the change is applied to that exact precheck/error path.

packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts-231-233 (1)
231-233: ⚠️ Potential issue | 🟡 Minor
**Missing timeout cleanup in `onClose` handler.**
The `onClose` handler rejects the promise but does not clear the timeout, which could lead to the timeout callback executing after the promise has already settled (causing a redundant rejection attempt).
Proposed fix

```diff
 const onClose = () => {
+  clearTimeout(timeoutId);
+  cancelDataListener();
   reject(new Error('Stream Closed'));
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts` around lines 231 - 233, The onClose handler currently only calls reject(new Error('Stream Closed')) which can leave the request timeout active; update the onClose function (const onClose) to also clear the timeout/timer you created (e.g., call clearTimeout(timeoutId) or clearTimeout(timer)) before rejecting so the timeout callback cannot run after the promise is settled.
🧹 Nitpick comments (13)
packages/react-on-rails-pro/tests/testUtils.ts (2)
5-8: Minor documentation issues.
Typo on line 5: "Pusing" → "Pushing".
Also, the JSDoc return type on line 8 specifies `any` but the implementation uses `unknown`. Consider updating for consistency:

```diff
- * Pusing a null or undefined chunk will end the stream.
+ * Pushing a null or undefined chunk will end the stream.
  ...
- * push: (chunk: any) => void
+ * push: (chunk: unknown) => void
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro/tests/testUtils.ts` around lines 5 - 8, Fix the JSDoc: correct the typo "Pusing" to "Pushing" in the comment and make the return typedef consistent with the implementation by changing the push parameter type from "any" to "unknown" in the `@returns` block so it matches the actual push implementation and the declared stream: Readable and push function signatures (look for the JSDoc above the Readable/ push declarations).
12-13: Widening to `unknown` is a good improvement for generic streaming tests.
The change from `Buffer` to `unknown` types allows this utility to handle various chunk types (NDJSON objects, strings, Buffers) needed for the incremental rendering tests.
However, `pushFn` is typed as non-nullable but declared without initialization. Consider making the type explicit:

```diff
-  let pushFn: (chunk: unknown) => void;
+  let pushFn: ((chunk: unknown) => void) | undefined;
```

This aligns the type with the actual runtime behavior where `pushFn` is undefined until `read()` is called.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro/tests/testUtils.ts` around lines 12 - 13, pushFn is declared non-nullable but actually remains undefined until read() is called; change its type to allow undefined by declaring pushFn as ((chunk: unknown) => void) | undefined (and initialize to undefined) and update any usage sites (e.g., in read() and where pushFn is invoked) to guard against undefined or assert existence after read() has set it; refer to the identifiers pendingChunks, pushFn, and read() to locate and fix the declarations and call sites.

scripts/publish-to-builds-repo.sh (2)
133-138: Consider failing if `lib/` is missing for any package.
The script warns but continues if `lib/` doesn't exist. This could result in publishing incomplete builds if a package build silently fails. Since these are critical distribution artifacts, a missing `lib/` should likely abort the script.
♻️ Proposed stricter handling

```diff
 # Copy lib/ directory
 if [[ -d "$PKG_SRC/lib" ]]; then
   cp -r "$PKG_SRC/lib" "$PKG_DEST/lib"
 else
-  echo "  WARNING: $PKG_SRC/lib does not exist! Was the build successful?" >&2
+  echo "  ERROR: $PKG_SRC/lib does not exist! Build may have failed." >&2
+  exit 1
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/publish-to-builds-repo.sh` around lines 133 - 138, The script currently only warns when "$PKG_SRC/lib" is missing which can allow publishing incomplete packages; change the behavior in the block that handles copying lib/ (the if [[ -d "$PKG_SRC/lib" ]] ... else ...) to treat a missing lib/ as a fatal error: write a clear error to stderr referencing "$PKG_SRC/lib" and then exit with a non-zero status (e.g., exit 1) so the publishing process aborts rather than continuing to "$PKG_DEST/lib" handling and potentially producing incomplete builds.
242-249: Tag existence check only verifies local refs; remote tag may still conflict.
With a shallow clone, tags aren't fetched by default. If the tag exists on the remote but not locally, `git rev-parse "$DIST_TAG"` succeeds (tag not found), but `git push origin "$DIST_TAG"` will fail with "already exists".
💡 Check remote tags instead

```diff
 # Create and push tag
-if git rev-parse "$DIST_TAG" &>/dev/null; then
+if git ls-remote --tags origin "$DIST_TAG" | grep -q "$DIST_TAG"; then
   echo "==> Tag $DIST_TAG already exists, skipping tag creation."
 else
   git tag "$DIST_TAG"
   git push origin "$DIST_TAG"
   echo "==> Tag $DIST_TAG pushed."
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/publish-to-builds-repo.sh` around lines 242 - 249, The tag-exists check currently uses git rev-parse on DIST_TAG which only inspects local refs; change the logic to verify the remote before creating/pushing the tag by querying the remote (e.g., using git ls-remote --tags or fetching the specific tag) for DIST_TAG and skip creation if the remote already has it; update the branch where DIST_TAG is created/pushed so it only runs when neither local nor remote contain the tag, and keep the existing echo messages (references: DIST_TAG, the git rev-parse check and the git push origin "$DIST_TAG" branch).

react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb (1)
140-148: Avoid mutating the caller's `options` hash in these wrappers.
Reusing the same hash for a later helper call will keep the old `:async_props_block` attached, which is a surprising cross-call side effect for a public helper.
Suggested refactor

```diff
 def stream_react_component_with_async_props(component_name, options = {}, &props_block)
-  options[:async_props_block] = props_block
-  stream_react_component(component_name, options)
+  stream_react_component(
+    component_name,
+    props_block ? options.merge(async_props_block: props_block) : options.dup
+  )
 end

 def rsc_payload_react_component_with_async_props(component_name, options = {}, &props_block)
-  options[:async_props_block] = props_block
-  rsc_payload_react_component(component_name, options)
+  rsc_payload_react_component(
+    component_name,
+    props_block ? options.merge(async_props_block: props_block) : options.dup
+  )
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb` around lines 140 - 148, The wrappers stream_react_component_with_async_props and rsc_payload_react_component_with_async_props mutate the caller-provided options hash by setting :async_props_block directly; instead duplicate the options (e.g., local_opts = options.dup or options.merge({})) and set local_opts[:async_props_block] = props_block, then call the underlying helpers stream_react_component(component_name, local_opts) and rsc_payload_react_component(component_name, local_opts) to avoid surprising cross-call side effects.

react_on_rails_pro/spec/dummy/app/views/pages/test_incremental_rendering.html.erb (1)
4-4: Wrap this ERB helper call across multiple lines.

Line 4 is over the Ruby line-length limit, so this is harder to scan than the surrounding dummy views.
♻️ Suggested formatting
```diff
-<%= stream_react_component_with_async_props("AsyncPropsComponent", props: { name: "John Doe", age: 30, description: "Software Engineer" }) do |emit|
+<%= stream_react_component_with_async_props(
+  "AsyncPropsComponent",
+  props: {
+    name: "John Doe",
+    age: 30,
+    description: "Software Engineer",
+  },
+) do |emit|
```

As per coding guidelines, "Line length max 120 characters in Ruby code; break long chains using multi-line method calls".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails_pro/spec/dummy/app/views/pages/test_incremental_rendering.html.erb` at line 4, The ERB call to stream_react_component_with_async_props for "AsyncPropsComponent" exceeds the Ruby line-length limit; reformat it into a multi-line method call by placing the helper name and first argument on one line, put the props: hash on its own indented lines (breaking each key/value or the hash across lines as needed), and start the do |emit| block on a new line so the entire call is readable and wrapped under 120 characters; update the call site referencing stream_react_component_with_async_props, "AsyncPropsComponent", props and the emit block accordingly.

react_on_rails_pro/spec/dummy/app/views/pages/server_router.html.erb (1)
27-28: Wrap the long `researches` emission.

The `emitter.call` for `researches` spills past the 120-character Ruby limit once indentation is included.

♻️ Suggested formatting
- emitter.call("researches", ["Machine Learning Study", "React Performance Optimization", "Database Indexing Strategies"]) + emitter.call( + "researches", + [ + "Machine Learning Study", + "React Performance Optimization", + "Database Indexing Strategies", + ], + )As per coding guidelines, "Line length max 120 characters in Ruby code; break long chains using multi-line method calls".
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails_pro/spec/dummy/app/views/pages/server_router.html.erb` around lines 27 - 28, The emitter.call invocation for "researches" exceeds the 120-char limit; refactor it into a multi-line method call by placing the array argument on its own indented lines and breaking long strings if needed (e.g., change emitter.call("researches", ["Machine Learning Study", "React Performance Optimization", "Database Indexing Strategies"]) to a multi-line form using emitter.call("researches", [\n "Machine Learning Study",\n "React Performance Optimization",\n "Database Indexing Strategies"\n]) so the line length stays under 120 while preserving the same arguments and behavior.

react_on_rails_pro/spec/react_on_rails_pro/async_props_emitter_spec.rb (1)
13-43: Add a happy-path assertion for `#end_stream_chunk`.

This spec covers `#call`, but the close-stream marker is the other half of the protocol; if it regresses, pending async-prop waiters can hang indefinitely. A small example asserting `bundleTimestamp` and `asyncPropsManager.endStream()` would make this much less brittle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails_pro/spec/react_on_rails_pro/async_props_emitter_spec.rb` around lines 13 - 43, Add a happy-path spec for `#end_stream_chunk` that verifies the close-stream NDJSON marker is written: call emitter.end_stream_chunk (or the public method that emits the stream-close marker), assert request_stream received :<< with a string ending in "\n", parse JSON.parse(output.chomp) and expect parsed["bundleTimestamp"] to equal bundle_timestamp and parsed["updateChunk"] to include 'asyncPropsManager.endStream()' so regressions won't leave waiters hanging.

packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts (1)
243-247: Potential race condition when checking `stream.closed`.

The check for `stream.closed` happens after attaching the `close` listener. If the stream closes between attaching the listener and the check, both paths could be triggered. Additionally, if the stream is already closed when entering this function, the `close` event may have already fired and won't fire again, but calling `onClose()` directly is correct here.

Consider reordering to check `stream.closed` before attaching listeners:

Proposed fix
```diff
-  stream.once('data', onData);
-  stream.once('error', onError);
-  if ('closed' in stream && stream.closed) {
-    onClose();
-  } else {
-    stream.once('close', onClose);
-  }
+  // Check closed state first to avoid race
+  if ('closed' in stream && stream.closed) {
+    onClose();
+    return;
+  }
+
+  stream.once('data', onData);
+  stream.once('error', onError);
+  stream.once('close', onClose);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts` around lines 243 - 247, The current logic may call onClose twice because it checks stream.closed after attaching the 'close' listener; reverse the order so you check if ('closed' in stream && stream.closed) first and call onClose() immediately, otherwise attach the listener with stream.once('close', onClose). Update the block around the stream/onClose handling (the code using stream.closed and stream.once('close', onClose)) to perform the closed check before registering listeners to avoid the race.

packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderRequest.ts (2)
119-129: Error handling swallows failures silently (by design).

The `sink.add` method catches all errors and logs them without propagating. This appears intentional for resilience (an error in one update chunk shouldn't crash the stream), but callers have no way to know if updates are failing.

Consider whether the sink should expose an error state or callback for observability, or if this fire-and-forget pattern is sufficient for the use case.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderRequest.ts` around lines 119 - 129, sink.add currently swallows all failures by catching errors inside the add handler (see assertIsUpdateChunk, getRequestBundleFilePath and executionContext.runInVM) and only logging via log.error, leaving callers unaware of update failures; modify the API so failures are observable — for example, add an onError callback or an observable/error state on the sink that the caller can subscribe to, and invoke it whenever either assertIsUpdateChunk throws or runInVM rejects (include the err and chunk payload); ensure backward compatibility by keeping the log.error but also emitting the error through the new channel so callers can react or record metrics.
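A minimal TypeScript sketch of the observability channel suggested above; `createSink`, the chunk shape, and `onError` are illustrative names, not the renderer's actual API:

```typescript
type UpdateChunk = { bundleTimestamp: number; updateChunk: string };

// Hypothetical sink: errors are still logged (preserving current behavior),
// but are also surfaced through an optional onError callback so callers
// can observe failing update chunks.
function createSink(
  run: (chunk: UpdateChunk) => Promise<void>,
  onError?: (err: unknown, chunk: UpdateChunk) => void,
) {
  return {
    add(chunk: UpdateChunk): void {
      run(chunk).catch((err: unknown) => {
        console.error('update chunk failed', err); // existing log.error stays
        onError?.(err, chunk); // new observability channel
      });
    },
  };
}
```

Callers that need metrics can pass `onError`; callers happy with fire-and-forget simply omit it.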
130-145: `handleRequestClosed` returns `void` but contains an unhandled floating promise.

The method signature returns `void`, but `executionContext.runInVM(...)` returns a `Promise`. The `.catch()` handles errors, but the overall promise is not returned or awaited. While this fire-and-forget pattern is common for cleanup handlers, TypeScript's `void` return type may hide the async nature from callers.

Consider making the return type explicit:
Proposed clarification
```diff
-  handleRequestClosed: () => {
+  handleRequestClosed: (): void => {
+    // Fire-and-forget: errors are logged but not propagated
     if (!onRequestClosedUpdateChunk) {
       return;
     }
     const bundlePath = getRequestBundleFilePath(onRequestClosedUpdateChunk.bundleTimestamp);
-    executionContext
+    void executionContext
       .runInVM(onRequestClosedUpdateChunk.updateChunk, bundlePath)
       .catch((err: unknown) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderRequest.ts` around lines 130 - 145, The handler handleRequestClosed currently returns void but starts an unreturned Promise from executionContext.runInVM; either return or explicitly discard that promise to make the async intent clear: change handleRequestClosed to return the Promise (e.g., make the function return Promise<void> and return executionContext.runInVM(onRequestClosedUpdateChunk.updateChunk, bundlePath).catch(...)) or mark it as fire-and-forget by prepending void to the call (void executionContext.runInVM(...).catch(...)); reference handleRequestClosed, executionContext.runInVM, getRequestBundleFilePath and onRequestClosedUpdateChunk when applying the change.

packages/react-on-rails-pro-node-renderer/tests/helper.ts (1)
201-213: Redundant timeout check inside catch block.

The check at line 209 (`if (Date.now() - startTime >= timeout)`) is redundant because the `while` loop condition at line 201 already ensures we exit when the timeout is reached. The inner check will always be false (or nearly false) because we just entered the loop body.

Proposed simplification
```diff
 while (Date.now() - startTime < timeout) {
   try {
     expectFn();
     // If we get here, the expect passed, so we can return
     return;
   } catch (error) {
     lastError = error as Error;
-    // Expect failed, continue retrying
-    if (Date.now() - startTime >= timeout) {
-      // Timeout reached, re-throw the last error
-      throw error;
-    }
+    // Expect failed, continue retrying (loop will exit on timeout)
   }
   // Wait before next retry
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/helper.ts` around lines 201 - 213, The retry loop in the helper uses a redundant timeout check inside the catch block; remove the inner if (Date.now() - startTime >= timeout) check and instead let the while (Date.now() - startTime < timeout) loop exit naturally, then after the loop re-throw the stored lastError (or throw a new Error if lastError is undefined). Update the code around expectFn(), startTime, timeout and lastError so failures are retried until the loop ends and the final throw happens once after the loop.

packages/react-on-rails-pro-node-renderer/src/worker/vm.ts (1)
359-359: Consider adding a type annotation to `sharedExecutionContext`.

The `sharedExecutionContext` is declared as a plain `Map()` without type parameters. While this provides flexibility, adding a type annotation would improve code clarity and enable better IDE support.

Proposed type annotation
```diff
-const sharedExecutionContext = new Map();
+const sharedExecutionContext = new Map<string, unknown>();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/src/worker/vm.ts` at line 359, sharedExecutionContext is created with new Map() without type parameters; add a TypeScript annotation to improve clarity and IDE support by declaring the expected key/value types (for example replace with const sharedExecutionContext: Map<string, unknown> = new Map(); or, if there is a specific interface used elsewhere, use that type like Map<string, ExecutionContext>); update any code that reads/writes this map to use the chosen type so types align.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/react-on-rails-pro-node-renderer/src/worker.ts`:
- Around line 367-373: The code currently logs the full parsed NDJSON update
object (obj) at info level before calling incrementalSink.add, which can leak
sensitive payloads and inflate logs; change the log call in the try block to
avoid serializing the entire obj — instead log a small, safe set of metadata
(for example obj.type, obj.id, and the payload size) or a redacted summary, e.g.
log.info with fields like { type: obj?.type, id: obj?.id, size:
Buffer.byteLength(JSON.stringify(obj || {})) } — keep the
incrementalSink.add(obj) call unchanged and ensure no full-object serialization
occurs anywhere near the log statement.
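As a sketch of the redacted-logging idea above (the field names mirror the prompt's example and are assumptions, not the worker's real log schema):

```typescript
// Log only safe metadata about a parsed NDJSON object instead of
// serializing the whole, possibly sensitive, payload.
function safeLogFields(obj: { type?: string; id?: string } | null | undefined): {
  type?: string;
  id?: string;
  size: number;
} {
  return {
    type: obj?.type,
    id: obj?.id,
    // byte size of the serialized object, without logging its contents
    size: Buffer.byteLength(JSON.stringify(obj ?? {})),
  };
}
```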
In
`@packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderStream.ts`:
- Around line 118-123: The current check in handleIncrementalRenderStream uses
buffer.length on the whole buffered chunk and can abort when a transport chunk
contains many valid newline-terminated NDJSON objects and also measures
characters not bytes; change the guard to split the accumulated buffer on
newline boundaries and enforce FIELD_SIZE_LIMIT per individual line using byte
length (e.g., Buffer.byteLength(line, 'utf8')) and only throw if any complete
line exceeds FIELD_SIZE_LIMIT, leaving partial trailing fragment in the buffer
for the next chunk processing; reference the buffer variable, FIELD_SIZE_LIMIT,
and the line-splitting logic in handleIncrementalRenderStream when making this
change.
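The per-line guard could look roughly like this sketch (`FIELD_SIZE_LIMIT` and `consumeBuffer` are stand-ins for the renderer's actual constant and parsing loop):

```typescript
const FIELD_SIZE_LIMIT = 1024 * 1024; // assumed limit, in bytes

// Split the accumulated buffer on newline boundaries, enforce the size
// limit per complete line (measured in bytes, not characters), and keep
// the trailing partial fragment for the next transport chunk.
function consumeBuffer(buffer: string): { lines: string[]; rest: string } {
  const parts = buffer.split('\n');
  const rest = parts.pop() ?? ''; // incomplete trailing fragment
  for (const line of parts) {
    if (Buffer.byteLength(line, 'utf8') > FIELD_SIZE_LIMIT) {
      throw new Error('NDJSON line exceeds FIELD_SIZE_LIMIT');
    }
  }
  return { lines: parts.filter((line) => line.length > 0), rest };
}
```

Because the limit applies per line, a transport chunk carrying many small valid objects no longer trips the guard.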
In `@packages/react-on-rails-pro-node-renderer/src/worker/vm.ts`:
- Around line 417-420: The branch that serializes the execution output
incorrectly double-serializes when a Promise resolves to a string: update the
logic around the local variable "result" (the RenderCodeResult handling) to
await the promise first and then check the resolved value's type — if the
awaited value is a string use it directly, otherwise JSON.stringify the object;
reference the existing checks around "result" and the awaited "objectResult" to
implement this safe type-check-after-await behavior.
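A sketch of the check-after-await behavior (names are illustrative; the real code operates on `RenderCodeResult`):

```typescript
// Await first, then decide how to serialize: a promise resolving to a
// string must be used directly, never JSON.stringified a second time.
async function serializeResult(result: unknown): Promise<string> {
  const resolved = result instanceof Promise ? await result : result;
  return typeof resolved === 'string' ? resolved : JSON.stringify(resolved);
}
```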
In
`@packages/react-on-rails-pro-node-renderer/tests/fixtures/secondary-bundle-incremental.js`:
- Around line 26-32: The helper addStreamValueToFirstBundle currently assumes
sharedExecutionContext.get('stream') exists but ReactOnRails.getStreamValues()
only seeds secondaryStream, so change the initialization logic: when
sharedExecutionContext lacks 'stream', call ReactOnRails.getStreamValues() and
then if only 'secondaryStream' was created, copy or seed that value into
sharedExecutionContext under the 'stream' key (e.g., get
sharedExecutionContext.get('secondaryStream') and set
sharedExecutionContext.set('stream', { stream: <that stream> })); ensure you
still handle the case where neither exists by creating a new stream before
destructuring stream and calling stream.write(value). This touches
addStreamValueToFirstBundle, sharedExecutionContext,
ReactOnRails.getStreamValues, and secondaryStream.
In `@packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts`:
- Around line 592-599: The test currently assumes a 1:1 correspondence between
response.streamedData entries and responseChunks which is unreliable; update the
assertions in the incremental render test to validate the combined streamed
output instead of per-chunk ordering: join/concatenate all entries in
response.streamedData into a single string (or parse them using an explicit
delimiter if you add framing) and compare that combined result to the
concatenated expected responseChunks (or to the expected framed parse); adjust
any other similar assertions that use response.streamedData[index] (and any
push()-based test helpers) to use concatenation or a proper framing/parser
before asserting.
In `@packages/react-on-rails-pro-node-renderer/tests/vm.test.ts`:
- Line 11: The test calls buildVM() but only imports buildExecutionContext,
hasVMContextForBundle, and resetVM; either export and import buildVM from the vm
module or (recommended) replace the two buildVM() calls with
buildExecutionContext() so the test uses the existing exported API; update the
two occurrences at lines where buildVM is invoked to call buildExecutionContext
with the same args and ensure any returned shape is handled the same way as
other test usages of buildExecutionContext.
In `@packages/react-on-rails-pro/src/AsyncPropsManager.ts`:
- Around line 80-87: The bug is that setProp (and the symmetric code around
lines 103-111) allows late updates because getOrCreatePromiseController returns
an existing controller even after the stream is closed; change setProp to check
this.isClosed (or check promiseController.isClosed) immediately after obtaining
the controller and throw the same error as when no controller exists so
post-endStream() calls reject, and update getOrCreatePromiseController (or
callers) so it never creates or returns controllers once isClosed is true; also
add a regression test in AsyncPropManager.test.ts that calls getProp('x');
endStream(); then setProp('x', value) and asserts that setProp throws to cover
the already-requested-prop case.
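The closed-stream guard can be sketched like this (a simplified stand-in for `AsyncPropsManager`, not its real implementation):

```typescript
// Once endStream() runs, setProp must throw even for props whose
// promise controller already exists from an earlier getProp call.
class PropsManagerSketch {
  private closed = false;
  private values = new Map<string, unknown>();

  setProp(name: string, value: unknown): void {
    if (this.closed) {
      throw new Error(`Cannot set prop "${name}" after the stream has ended`);
    }
    this.values.set(name, value);
  }

  getProp(name: string): unknown {
    return this.values.get(name);
  }

  endStream(): void {
    this.closed = true;
  }
}
```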
In `@packages/react-on-rails-pro/src/RSCRequestTracker.ts`:
- Around line 41-46: Tests instantiate RSCRequestTracker with only the
railsContext but the constructor now requires a second generateRSCPayload
argument; update the two failing tests to pass a mock/stub generateRSCPayload
function (e.g., a no-op or simple stub returning expected payload) when
constructing RSCRequestTracker so getRSCPayloadStream can run; locate the
constructor calls to RSCRequestTracker in the test files and add the second
parameter accordingly, ensuring the mock matches the GenerateRSCPayloadFunction
signature used by getRSCPayloadStream.
In `@react_on_rails_pro/app/views/react_on_rails_pro/rsc_payload.text.erb`:
- Around line 3-16: The template currently hardcodes rendering
"AsyncPropsComponent" with demo props instead of using the incoming
`@rsc_payload_component_name` and `@rsc_payload_component_props`; update the
rsc_payload_react_component_with_async_props invocation to use
`@rsc_payload_component_name` as the component name and
`@rsc_payload_component_props` as the props, preserve the special-case for
"AsyncPropsComponentForTesting" by calling read_async_props_from_redis(emit)
when `@rsc_payload_component_name` == "AsyncPropsComponentForTesting", and remove
the demo sleeps and hardcoded emit.call(...) demo data so real requests return
the correct component and props while the emit block only supplies async data
when explicitly required by the testing branch.
In `@react_on_rails_pro/lib/react_on_rails_pro/request.rb`:
- Around line 79-100: The incremental-update emitter is being created with
pool.rsc_bundle_hash which always tags updates for the RSC bundle; change the
ReactOnRailsPro::AsyncPropsEmitter instantiation inside the
ReactOnRailsPro::StreamRequest.create block (the spot that currently calls
ReactOnRailsPro::AsyncPropsEmitter.new(pool.rsc_bundle_hash, request)) to use
the bundle identifier for the server bundle referenced by path (i.e., derive the
bundle hash from path or use the pool method that maps path → bundle) so that
render_code_with_incremental_updates sends async-prop updates to the same
execution context as the initial render; update the emitter creation and any
helper call sites (e.g., build_initial_incremental_request) accordingly.
In `@react_on_rails_pro/react_on_rails_pro.gemspec`:
- Around line 37-40: The runtime dependency for httpx must be constrained to
prevent accepting incompatible major versions—update the gemspec constraint for
"httpx" from ">= 1.7.0" to a bounded range that prevents 2.x (for example ">=
1.7.0", "< 2.0") so the monkeypatch in httpx_stream_bidi_patch.rb (which uses
RequestMethods.prepend and relies on internals like `@headers_sent` and
`@options.stream`) won't be applied against a breaking httpx major release; change
the version spec in the existing s.add_runtime_dependency call for "httpx"
accordingly.
In `@react_on_rails_pro/spec/dummy/app/controllers/pages_controller.rb`:
- Around line 225-254: The read_async_props_from_redis method opens a Redis
connection (redis = ::Redis.new) but never closes it and only checks for nil
request_id; update read_async_props_from_redis to validate request_id is present
and not blank (e.g., nil or empty/whitespace) before doing any XREAD, raising
immediately if invalid, and ensure the Redis client is closed on both normal
completion and on exceptions (use an ensure/finally-style block around the
XREAD/stream loop to call redis.close or redis.quit); keep
emitter.call(message_key[1..], JSON.parse(message_value)) behavior but protect
the stream loop so sockets aren’t leaked.
In `@react_on_rails_pro/spec/dummy/e2e-tests/asyncProps.spec.ts`:
- Around line 6-27: The test body is not ensuring endRedisStream() always runs;
wrap the existing test logic (the async callback that uses page, sendRedisValue,
endRedisStream) in a try/finally so that endRedisStream() is invoked in the
finally block regardless of failures; locate the async test function (the
anonymous async ({ page, sendRedisValue, endRedisStream }) => { ... }) and move
all assertions and sendRedisValue calls into the try, then call endRedisStream()
in the finally to guarantee the Redis stream is closed.
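The shape of that fix, as a small sketch (the fixture name `endRedisStream` comes from the review; `runWithCleanup` is an illustrative helper, not part of the test suite):

```typescript
// Guarantee the cleanup step runs whether or not the test body throws,
// and let the original failure propagate afterwards.
async function runWithCleanup(
  body: () => Promise<void>,
  endRedisStream: () => Promise<void>,
): Promise<void> {
  try {
    await body();
  } finally {
    await endRedisStream(); // always close the Redis stream
  }
}
```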
In `@react_on_rails_pro/spec/dummy/Procfile.dev`:
- Line 3: Uncomment and restore the two process entries for the dummy app so it
actually serves and SSR/RSC-renders: re-enable the "rails" entry (the commented
"rails: rails s -p 3000" line) and the "node-renderer" entry so both processes
run alongside the watchers in Procfile.dev; ensure the lines remain present and
active (not prefixed with '#') so the Procfile can start the full dummy stack
when used.
In `@scripts/publish-to-builds-repo.sh`:
- Around line 51-56: The VERSION read uses a relative path
("./packages/react-on-rails/package.json") which breaks when the script is run
outside the repo root; update the node invocation that sets VERSION to reference
the absolute repo path (use the existing MONOREPO_ROOT variable or derive the
script's root) so it reads "$MONOREPO_ROOT/packages/react-on-rails/package.json"
(or equivalent) instead of the relative path; ensure the VERSION assignment and
subsequent error check continue to work with the absolute path.
---
Outside diff comments:
In
`@packages/react-on-rails-pro-node-renderer/src/worker/checkProtocolVersionHandler.ts`:
- Around line 43-55: In checkProtocolVersion, avoid destructuring and echoing
the raw RequestBody: replace the current destructuring of body and
JSON.stringify(body) with safe checks (e.g., treat body as {} unless typeof body
=== 'object' && body !== null) and extract only presence indicators (or the
literal protocol version if safe) for protocolVersion, gemVersion, and railsEnv
instead of serializing the whole payload; update the 412 error text to mention
which fields are present/missing (field names) and the expected
packageJson.protocolVersion / packageJson.version rather than dumping request
values so no sensitive or large request content is reflected back.
---
Minor comments:
In `@CHANGELOG.md`:
- Around line 27-32: Replace the standalone "#### Pro" subsection and the
`**HTTPX Streaming Compatibility**` entry so it uses the inline Pro tag and
correct PR formatting: move the entry under the existing appropriate section
(e.g., "Fixed") and change the subsection marker to an inline "**[Pro]**" tag
inside the bullet, and replace `"[PR 2251](...)"` with the required format
`[2251](https://github.com/shakacode/react_on_rails/pull/2251) by
[AbanoubGhadban](https://github.com/AbanoubGhadban)` so the bullet reads like
the repo convention (include the explicit `request.close` note and mention of
the HTTPX workaround in the same bullet).
In
`@packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderStream.ts`:
- Around line 160-163: The early-return path after computing
onResponseStartPromise (created from onResponseStart(response)) must await that
promise to avoid unhandled rejections: in the block that checks continueFlag
(the if (!continueFlag) return; near onResponseStartPromise), replace the bare
return with awaiting onResponseStartPromise first (e.g., await
onResponseStartPromise) and then return; ensure you reference the existing
onResponseStartPromise and onResponseStart symbols so the change is applied to
that exact precheck/error path.
In
`@packages/react-on-rails-pro-node-renderer/tests/handleIncrementalRenderStream.test.ts`:
- Around line 170-178: The test "resets buffer after processing newlines (no
false positive on total size)" currently creates 100 ~1KB lines (~100KB) which
doesn't exceed FIELD_SIZE_LIMIT; change the test so the total aggregated payload
across lines exceeds FIELD_SIZE_LIMIT while each individual line remains smaller
than FIELD_SIZE_LIMIT: use the existing FIELD_SIZE_LIMIT constant to compute a
per-line smallJson length (e.g., keep JSON.stringify({data: 'x'.repeat(n)})
length < FIELD_SIZE_LIMIT) and increase the number of chunks (or per-chunk
repetition) so that number_of_chunks * per_line_size > FIELD_SIZE_LIMIT, then
push that many Buffer.from(smallJson + '\n') into chunks before running
handleIncrementalRenderStream to verify reset behavior.
In `@packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts`:
- Around line 231-233: The onClose handler currently only calls reject(new
Error('Stream Closed')) which can leave the request timeout active; update the
onClose function (const onClose) to also clear the timeout/timer you created
(e.g., call clearTimeout(timeoutId) or clearTimeout(timer)) before rejecting so
the timeout callback cannot run after the promise is settled.
In `@packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts`:
- Around line 467-476: The test currently only writes JSON chunks (chunksToSend)
so it never exercises empty NDJSON lines; update the loop that writes to req
(the req.write calls inside the for loop) to interleave raw empty-line writes
(e.g., write '\n' or '\r\n' directly to req) before or between the JSON writes
and before calling waitFor/expect; ensure the assertion using waitFor and
sinkAdd still runs after the JSON chunk write so you verify that an empty-line
write does not break parsing.
In `@packages/react-on-rails-pro-node-renderer/tests/worker.test.ts`:
- Around line 538-543: The test currently counts files using fs.readdirSync into
bundle1Files/bundle2Files and expects lengths (toHaveLength(3)/toHaveLength(1)),
which fails when a '.complete' marker is present; update the assertions to
either (a) filter out the completeness marker before counting (e.g.,
bundle1Files.filter(f => f !== '.complete')) and assert the filtered length, or
(b) assert the exact required filenames exist (e.g.,
expect(bundle1Files).toEqual(expect.arrayContaining(['<bundle-file-name>',
'asset1', 'asset2'])) using the same fs.readdirSync results), and apply the same
change to the other occurrences that use bundle1Files/bundle2Files counting
logic.
In `@react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb`:
- Around line 418-447: The test's concurrency assertion is timing-dependent;
replace the sleep-based check in the test_async_props_block with an explicit
synchronization barrier: create two synchronization primitives (e.g., Ruby Queue
or Mutex+ConditionVariable) called something like start_q and continue_q, have
test_async_props_block push a token into start_q when it begins and then block
on continue_q before pushing the end token, and modify the mocked response each
block to wait for start_q (ensuring the async block has started) and then push a
token into continue_q to allow the async block to finish; update references to
test_async_props_block and the allow(mock_response).to receive(:each) stub so
the execution_order assertion reliably observes [:async_block_start,
:chunk_yielded, :async_block_end].
In `@scripts/publish-to-builds-repo.sh`:
- Around line 153-155: The sed -i usage in the block that checks PKG_JSON is not
portable to macOS; replace the inline sed call so it works on both GNU and BSD
sed (or use Perl). Specifically, in the if grep -q ... "$PKG_JSON" block update
the sed invocation that currently uses sed -i
"s/\"workspace:\*\"/\"$VERSION\"/g" "$PKG_JSON" to a portable form (e.g., use
perl -pi -e 's/.../.../g' "$PKG_JSON" or use sed -i.bak ... then rm the .bak)
and ensure the variables PKG_JSON, VERSION and pkg remain used identically;
remove any created backup files if you choose the .bak route.
---
Nitpick comments:
In
`@packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderRequest.ts`:
- Around line 119-129: sink.add currently swallows all failures by catching
errors inside the add handler (see assertIsUpdateChunk, getRequestBundleFilePath
and executionContext.runInVM) and only logging via log.error, leaving callers
unaware of update failures; modify the API so failures are observable — for
example, add an onError callback or an observable/error state on the sink that
the caller can subscribe to, and invoke it whenever either assertIsUpdateChunk
throws or runInVM rejects (include the err and chunk payload); ensure backward
compatibility by keeping the log.error but also emitting the error through the
new channel so callers can react or record metrics.
- Around line 130-145: The handler handleRequestClosed currently returns void
but starts an unreturned Promise from executionContext.runInVM; either return or
explicitly discard that promise to make the async intent clear: change
handleRequestClosed to return the Promise (e.g., make the function return
Promise<void> and return
executionContext.runInVM(onRequestClosedUpdateChunk.updateChunk,
bundlePath).catch(...)) or mark it as fire-and-forget by prepending void to the
call (void executionContext.runInVM(...).catch(...)); reference
handleRequestClosed, executionContext.runInVM, getRequestBundleFilePath and
onRequestClosedUpdateChunk when applying the change.
In `@packages/react-on-rails-pro-node-renderer/src/worker/vm.ts`:
- Line 359: sharedExecutionContext is created with new Map() without type
parameters; add a TypeScript annotation to improve clarity and IDE support by
declaring the expected key/value types (for example replace with const
sharedExecutionContext: Map<string, unknown> = new Map(); or, if there is a
specific interface used elsewhere, use that type like Map<string,
ExecutionContext>); update any code that reads/writes this map to use the chosen
type so types align.
In `@packages/react-on-rails-pro-node-renderer/tests/helper.ts`:
- Around line 201-213: The retry loop in the helper uses a redundant timeout
check inside the catch block; remove the inner if (Date.now() - startTime >=
timeout) check and instead let the while (Date.now() - startTime < timeout) loop
exit naturally, then after the loop re-throw the stored lastError (or throw a
new Error if lastError is undefined). Update the code around expectFn(),
startTime, timeout and lastError so failures are retried until the loop ends and
the final throw happens once after the loop.
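The simplified loop described above, sketched with assumed names (`expectFn`, `timeout`, and `lastError` come from the review; the polling interval is an assumption):

```typescript
// Retry an assertion until it passes or the timeout elapses, then
// re-throw the last failure once, after the loop.
async function retryUntil(
  expectFn: () => void,
  timeout: number,
  intervalMs = 25,
): Promise<void> {
  const startTime = Date.now();
  let lastError: Error | undefined;
  while (Date.now() - startTime < timeout) {
    try {
      expectFn();
      return; // expectation passed
    } catch (error) {
      lastError = error as Error; // keep retrying; the loop exits on timeout
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw lastError ?? new Error(`retryUntil timed out after ${timeout}ms`);
}
```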
In `@packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts`:
- Around line 243-247: The current logic may call onClose twice because it
checks stream.closed after attaching the 'close' listener; reverse the order so
you check if ('closed' in stream && stream.closed) first and call onClose()
immediately, otherwise attach the listener with stream.once('close', onClose).
Update the block around the stream/onClose handling (the code using
stream.closed and stream.once('close', onClose)) to perform the closed check
before registering listeners to avoid the race.
In `@packages/react-on-rails-pro/tests/testUtils.ts`:
- Around line 5-8: Fix the JSDoc: correct the typo "Pusing" to "Pushing" in the
comment and make the return typedef consistent with the implementation by
changing the push parameter type from "any" to "unknown" in the `@returns` block
so it matches the actual push implementation and the declared stream: Readable
and push function signatures (look for the JSDoc above the Readable/ push
declarations).
- Around line 12-13: pushFn is declared non-nullable but actually remains
undefined until read() is called; change its type to allow undefined by
declaring pushFn as ((chunk: unknown) => void) | undefined (and initialize to
undefined) and update any usage sites (e.g., in read() and where pushFn is
invoked) to guard against undefined or assert existence after read() has set it;
refer to the identifiers pendingChunks, pushFn, and read() to locate and fix the
declarations and call sites.
In `@react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb`:
- Around line 140-148: The wrappers stream_react_component_with_async_props and
rsc_payload_react_component_with_async_props mutate the caller-provided options
hash by setting :async_props_block directly; instead duplicate the options
(e.g., local_opts = options.dup or options.merge({})) and set
local_opts[:async_props_block] = props_block, then call the underlying helpers
stream_react_component(component_name, local_opts) and
rsc_payload_react_component(component_name, local_opts) to avoid surprising
cross-call side effects.
In `@react_on_rails_pro/spec/dummy/app/views/pages/server_router.html.erb`:
- Around line 27-28: The emitter.call invocation for "researches" exceeds the
120-char limit; refactor it into a multi-line method call by placing the array
argument on its own indented lines and breaking long strings if needed (e.g.,
change emitter.call("researches", ["Machine Learning Study", "React Performance
Optimization", "Database Indexing Strategies"]) to a multi-line form using
emitter.call("researches", [\n "Machine Learning Study",\n "React Performance
Optimization",\n "Database Indexing Strategies"\n]) so the line length stays
under 120 while preserving the same arguments and behavior.
In
`@react_on_rails_pro/spec/dummy/app/views/pages/test_incremental_rendering.html.erb`:
- Line 4: The ERB call to stream_react_component_with_async_props for
"AsyncPropsComponent" exceeds the Ruby line-length limit; reformat it into a
multi-line method call by placing the helper name and first argument on one
line, put the props: hash on its own indented lines (breaking each key/value or
the hash across lines as needed), and start the do |emit| block on a new line so
the entire call is readable and wrapped under 120 characters; update the call
site referencing stream_react_component_with_async_props, "AsyncPropsComponent",
props and the emit block accordingly.
In `@react_on_rails_pro/spec/react_on_rails_pro/async_props_emitter_spec.rb`:
- Around line 13-43: Add a happy-path spec for `#end_stream_chunk` that verifies
the close-stream NDJSON marker is written: call emitter.end_stream_chunk (or the
public method that emits the stream-close marker), assert request_stream
received :<< with a string ending in "\n", parse JSON.parse(output.chomp) and
expect parsed["bundleTimestamp"] to equal bundle_timestamp and
parsed["updateChunk"] to include 'asyncPropsManager.endStream()' so regressions
won't leave waiters hanging.
In `@scripts/publish-to-builds-repo.sh`:
- Around line 133-138: The script currently only warns when "$PKG_SRC/lib" is
missing which can allow publishing incomplete packages; change the behavior in
the block that handles copying lib/ (the if [[ -d "$PKG_SRC/lib" ]] ... else
...) to treat a missing lib/ as a fatal error: write a clear error to stderr
referencing "$PKG_SRC/lib" and then exit with a non-zero status (e.g., exit 1)
so the publishing process aborts rather than continuing to "$PKG_DEST/lib"
handling and potentially producing incomplete builds.
- Around line 242-249: The tag-exists check currently uses git rev-parse on
DIST_TAG which only inspects local refs; change the logic to verify the remote
before creating/pushing the tag by querying the remote (e.g., using git
ls-remote --tags or fetching the specific tag) for DIST_TAG and skip creation if
the remote already has it; update the branch where DIST_TAG is created/pushed so
it only runs when neither local nor remote contain the tag, and keep the
existing echo messages (references: DIST_TAG, the git rev-parse check and the
git push origin "$DIST_TAG" branch).
try {
  log.info(`Received a new update chunk ${JSON.stringify(obj)}`);
  await incrementalSink.add(obj);
} catch (err) {
  // Log error but don't stop processing
  log.error({ err, msg: 'Error processing update chunk' });
}
Don’t log raw incremental update bodies at info level.
obj here is the parsed NDJSON update and can contain arbitrary async-prop payloads. Serializing every chunk into the info log risks leaking user data and can blow up log volume on busy streams.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro-node-renderer/src/worker.ts` around lines 367 -
373, The code currently logs the full parsed NDJSON update object (obj) at info
level before calling incrementalSink.add, which can leak sensitive payloads and
inflate logs; change the log call in the try block to avoid serializing the
entire obj — instead log a small, safe set of metadata (for example obj.type,
obj.id, and the payload size) or a redacted summary, e.g. log.info with fields
like { type: obj?.type, id: obj?.id, size: Buffer.byteLength(JSON.stringify(obj
|| {})) } — keep the incrementalSink.add(obj) call unchanged and ensure no
full-object serialization occurs anywhere near the log statement.
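A minimal sketch of the redacted-summary idea: log only safe metadata (type, id, byte size) instead of serializing the whole update. The field names `type` and `id` are assumptions from the prompt, not the renderer's confirmed schema:

```typescript
// Build log fields that never include the raw payload; only a type tag,
// an id, and the serialized size reach the logger.
function updateLogFields(obj: { type?: string; id?: string } | null | undefined) {
  return {
    type: obj?.type,
    id: obj?.id,
    size: Buffer.byteLength(JSON.stringify(obj ?? {}), 'utf8'),
  };
}
```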
// Check single line size limit (protects against missing newlines)
if (buffer.length > FIELD_SIZE_LIMIT) {
  throw new Error(
    `NDJSON line exceeds maximum size of ${FIELD_SIZE_LIMIT} bytes (${Math.round(FIELD_SIZE_LIMIT / 1024 / 1024)}MB). ` +
      `Current buffer: ${buffer.length} bytes. Ensure each JSON object is followed by a newline.`,
  );
Enforce FIELD_SIZE_LIMIT per NDJSON line, not per buffered chunk.
This guard runs before complete lines are drained, so one transport chunk containing several small newline-terminated objects can still exceed the limit and abort a valid stream. It also measures characters instead of bytes.
🐛 Proposed fix

-      // Check single line size limit (protects against missing newlines)
-      if (buffer.length > FIELD_SIZE_LIMIT) {
+      // Check only the unfinished line suffix here; complete lines are validated after splitting.
+      const lastBoundary = buffer.lastIndexOf('\n');
+      const pendingLine = lastBoundary === -1 ? buffer : buffer.slice(lastBoundary + 1);
+      if (Buffer.byteLength(pendingLine, 'utf8') > FIELD_SIZE_LIMIT) {
         throw new Error(
           `NDJSON line exceeds maximum size of ${FIELD_SIZE_LIMIT} bytes (${Math.round(FIELD_SIZE_LIMIT / 1024 / 1024)}MB). ` +
-            `Current buffer: ${buffer.length} bytes. Ensure each JSON object is followed by a newline.`,
+            `Current buffer: ${Buffer.byteLength(pendingLine, 'utf8')} bytes. Ensure each JSON object is followed by a newline.`,
         );
       }

-      const rawObject = buffer.slice(0, boundary).trim();
+      const rawLine = buffer.slice(0, boundary);
+      if (Buffer.byteLength(rawLine, 'utf8') > FIELD_SIZE_LIMIT) {
+        throw new Error(
+          `NDJSON line exceeds maximum size of ${FIELD_SIZE_LIMIT} bytes (${Math.round(FIELD_SIZE_LIMIT / 1024 / 1024)}MB). ` +
+            `Current buffer: ${Buffer.byteLength(rawLine, 'utf8')} bytes. Ensure each JSON object is followed by a newline.`,
+        );
+      }
+      const rawObject = rawLine.trim();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderStream.ts`
around lines 118 - 123, The current check in handleIncrementalRenderStream uses
buffer.length on the whole buffered chunk and can abort when a transport chunk
contains many valid newline-terminated NDJSON objects and also measures
characters not bytes; change the guard to split the accumulated buffer on
newline boundaries and enforce FIELD_SIZE_LIMIT per individual line using byte
length (e.g., Buffer.byteLength(line, 'utf8')) and only throw if any complete
line exceeds FIELD_SIZE_LIMIT, leaving partial trailing fragment in the buffer
for the next chunk processing; reference the buffer variable, FIELD_SIZE_LIMIT,
and the line-splitting logic in handleIncrementalRenderStream when making this
change.
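The per-line enforcement can be sketched as a small parser that drains complete lines, checks each one in bytes, and carries the unfinished tail to the next chunk. `FIELD_SIZE_LIMIT`'s value and the callback name are placeholders, not the renderer's actual identifiers:

```typescript
const FIELD_SIZE_LIMIT = 1024 * 1024; // 1 MB per line (assumed value)

function createNdjsonParser(onObject: (obj: unknown) => void) {
  let buffer = '';
  return (chunk: string): void => {
    buffer += chunk;
    let boundary = buffer.indexOf('\n');
    while (boundary !== -1) {
      const line = buffer.slice(0, boundary);
      buffer = buffer.slice(boundary + 1);
      // Enforce the limit in bytes on each complete line, not on the whole chunk.
      if (Buffer.byteLength(line, 'utf8') > FIELD_SIZE_LIMIT) {
        throw new Error(`NDJSON line exceeds ${FIELD_SIZE_LIMIT} bytes`);
      }
      const trimmed = line.trim();
      if (trimmed) onObject(JSON.parse(trimmed));
      boundary = buffer.indexOf('\n');
    }
    // Between chunks, only the unfinished tail needs the guard.
    if (Buffer.byteLength(buffer, 'utf8') > FIELD_SIZE_LIMIT) {
      throw new Error('Unterminated NDJSON line exceeds size limit');
    }
  };
}
```

A transport chunk containing many small newline-terminated objects now passes, because the guard never sees more than one logical line at a time.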
addStreamValueToFirstBundle: function (value) {
  if (!sharedExecutionContext.has('stream')) {
    // Create the stream first if it doesn't exist
    ReactOnRails.getStreamValues();
  }
  const { stream } = sharedExecutionContext.get('stream');
  stream.write(value);
Seed stream here instead of secondaryStream.
ReactOnRails.getStreamValues() only creates secondaryStream. If the first bundle has not already populated stream, Line 31 still destructures undefined and this helper throws before writing the cross-bundle update.
Proposed fix
  addStreamValueToFirstBundle: function (value) {
    if (!sharedExecutionContext.has('stream')) {
-     // Create the stream first if it doesn't exist
-     ReactOnRails.getStreamValues();
+     const stream = new PassThrough();
+     sharedExecutionContext.set('stream', { stream });
    }
    const { stream } = sharedExecutionContext.get('stream');
    stream.write(value);
  },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/react-on-rails-pro-node-renderer/tests/fixtures/secondary-bundle-incremental.js`
around lines 26 - 32, The helper addStreamValueToFirstBundle currently assumes
sharedExecutionContext.get('stream') exists but ReactOnRails.getStreamValues()
only seeds secondaryStream, so change the initialization logic: when
sharedExecutionContext lacks 'stream', call ReactOnRails.getStreamValues() and
then if only 'secondaryStream' was created, copy or seed that value into
sharedExecutionContext under the 'stream' key (e.g., get
sharedExecutionContext.get('secondaryStream') and set
sharedExecutionContext.set('stream', { stream: <that stream> })); ensure you
still handle the case where neither exists by creating a new stream before
destructuring stream and calling stream.write(value). This touches
addStreamValueToFirstBundle, sharedExecutionContext,
ReactOnRails.getStreamValues, and secondaryStream.
// Verify that we received all the streamed chunks
expect(response.streamedData).toHaveLength(responseChunks.length);

// Verify that each chunk was received in order
responseChunks.forEach((expectedChunk, index) => {
  const receivedChunk = response.streamedData[index];
  expect(receivedChunk).toEqual(expectedChunk);
});
🧩 Analysis chain
🏁 Script executed:
wc -l packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts
Repository: shakacode/react_on_rails
Length of output: 145

🏁 Script executed:
cat -n packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts | sed -n '585,610p'
Repository: shakacode/react_on_rails
Length of output: 1195

🏁 Script executed:
cat -n packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts | sed -n '680,700p'
Repository: shakacode/react_on_rails
Length of output: 1052

🏁 Script executed:
cat -n packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts | sed -n '550,595p'
Repository: shakacode/react_on_rails
Length of output: 1835

🏁 Script executed:
cat -n packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts | sed -n '1,100p'
Repository: shakacode/react_on_rails
Length of output: 3836

🏁 Script executed:
rg -n "streamedData" packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts -A 3 -B 3
Repository: shakacode/react_on_rails
Length of output: 2489

🏁 Script executed:
rg -n "responseChunks" packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts -B 5 -A 2
Repository: shakacode/react_on_rails
Length of output: 1172

🏁 Script executed:
cat -n packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts | sed -n '220,235p'
Repository: shakacode/react_on_rails
Length of output: 792

🏁 Script executed:
cat -n packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts | sed -n '527,550p'
Repository: shakacode/react_on_rails
Length of output: 924

🏁 Script executed:
rg -n "createStreamingTestSetup" packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts -A 30
Repository: shakacode/react_on_rails
Length of output: 3453
Avoid asserting 1:1 correspondence between stream writes and HTTP data events.
TCP/HTTP can coalesce or split writes arbitrarily. These tests push plain strings without framing (lines 592-599) or without parsing frames (lines 686-693), so streamedData[index] may not correspond to the original push() call. Restructure assertions to either:
- Concatenate all streamed data and validate the final output
- Use explicit delimiters (e.g., newline-separated) and parse accordingly
- Implement a framing protocol and validate parsed frames
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts`
around lines 592 - 599, The test currently assumes a 1:1 correspondence between
response.streamedData entries and responseChunks which is unreliable; update the
assertions in the incremental render test to validate the combined streamed
output instead of per-chunk ordering: join/concatenate all entries in
response.streamedData into a single string (or parse them using an explicit
delimiter if you add framing) and compare that combined result to the
concatenated expected responseChunks (or to the expected framed parse); adjust
any other similar assertions that use response.streamedData[index] (and any
push()-based test helpers) to use concatenation or a proper framing/parser
before asserting.
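The concatenation option can be sketched like this; the parameter names mirror the test but are assumptions about its shape:

```typescript
// Coalescing-safe assertion: TCP may split or merge writes, so only the
// concatenation of all received data is stable, never per-event chunks.
function assertStreamedOutput(streamedData: string[], responseChunks: string[]): void {
  const received = streamedData.join('');
  const expected = responseChunks.join('');
  if (received !== expected) {
    throw new Error(
      `Streamed output mismatch: got ${received.length} chars, expected ${expected.length}`,
    );
  }
}
```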
s.add_runtime_dependency "httpx", ">= 1.7.0"
# Needed to avoid this bug at httpx versions >= 1.6.0:
# https://github.com/HoneyryderChuck/httpx/issues/118
s.add_runtime_dependency "http-2", ">= 1.1.1"
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "HTTPX constraint in the gemspec:"
rg -n 'add_runtime_dependency "httpx"' react_on_rails_pro/react_on_rails_pro.gemspec
echo
echo "HTTPX-specific integration points in the Pro gem:"
rg -n 'stream_bidi|httpx_stream_bidi_patch|HTTPX' react_on_rails_pro
Repository: shakacode/react_on_rails
Length of output: 8303

🏁 Script executed:
cat -n react_on_rails_pro/lib/react_on_rails_pro/httpx_stream_bidi_patch.rb
Repository: shakacode/react_on_rails
Length of output: 1417

🏁 Script executed:
sed -n '395,410p' react_on_rails_pro/lib/react_on_rails_pro/request.rb
Repository: shakacode/react_on_rails
Length of output: 887

🏁 Script executed:
rg -A 5 -B 5 'stream_bidi' react_on_rails_pro/lib/react_on_rails_pro/request.rb | head -40
Repository: shakacode/react_on_rails
Length of output: 1615
Add an upper bound for httpx to prevent major version breaks.
The gem directly monkeypatches internal HTTPX APIs via RequestMethods.prepend in httpx_stream_bidi_patch.rb, relying on internal state (@headers_sent, @options.stream) and callback methods. Without an upper bound, ">= 1.7.0" will accept HTTPX 2.x when released, which would likely break the patch due to internal API changes.
Suggested fix
- s.add_runtime_dependency "httpx", ">= 1.7.0"
+ s.add_runtime_dependency "httpx", ">= 1.7.0", "< 2"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails_pro/react_on_rails_pro.gemspec` around lines 37 - 40, The
runtime dependency for httpx must be constrained to prevent accepting
incompatible major versions—update the gemspec constraint for "httpx" from ">=
1.7.0" to a bounded range that prevents 2.x (for example ">= 1.7.0", "< 2.0") so
the monkeypatch in httpx_stream_bidi_patch.rb (which uses RequestMethods.prepend
and relies on internals like `@headers_sent` and `@options.stream`) won't be applied
against a breaking httpx major release; change the version spec in the existing
s.add_runtime_dependency call for "httpx" accordingly.
def read_async_props_from_redis(emitter)
  redis = ::Redis.new
  request_id = params[:request_id]

  unless request_id
    sleep 1
    raise "You must pass the request_id param to the page, this page is inteded to be used for testing only"
  end

  ended = false
  last_received_id = "0-0"
  stream_id = "stream:#{request_id}"
  until ended
    received_messages = redis.xread(stream_id, last_received_id, block: 0)[stream_id]
    # receive_messages are like [[msg1_id, [**msg_entries]], [msg2_id, [**msg_entries]]]
    last_received_id = received_messages.last.first
    received_messages.each do |_message_id, message_entries| # rubocop:disable Style/HashEachMethods
      message_entries.each do |message_key, message_value|
        if message_key == "end"
          ended = true
          next
        end

        sleep 0.1
        # Key starts with :
        emitter.call(message_key[1..], JSON.parse(message_value))
      end
    end
  end
end
Close the Redis client here and reject blank request_ids before XREAD.
Line 226 opens a Redis connection per render, but nothing closes it, so successful streams and exceptions both leak sockets. Line 229 also only rejects nil; ?request_id= will subscribe to stream: and hang instead of failing fast.
🛠 Suggested change
def read_async_props_from_redis(emitter)
- redis = ::Redis.new
+ redis = nil
request_id = params[:request_id]
- unless request_id
+ if request_id.blank?
sleep 1
- raise "You must pass the request_id param to the page, this page is inteded to be used for testing only"
+ raise(
+ "You must pass the request_id param to the page, this page is intended to be used for testing only"
+ )
end
+ redis = ::Redis.new
@@
end
end
+ ensure
+ begin
+ redis&.close
+ rescue StandardError => e
+ Rails.logger.warn "Failed to close Redis: #{e.message}"
+ end
end

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails_pro/spec/dummy/app/controllers/pages_controller.rb` around
lines 225 - 254, The read_async_props_from_redis method opens a Redis connection
(redis = ::Redis.new) but never closes it and only checks for nil request_id;
update read_async_props_from_redis to validate request_id is present and not
blank (e.g., nil or empty/whitespace) before doing any XREAD, raising
immediately if invalid, and ensure the Redis client is closed on both normal
completion and on exceptions (use an ensure/finally-style block around the
XREAD/stream loop to call redis.close or redis.quit); keep
emitter.call(message_key[1..], JSON.parse(message_value)) behavior but protect
the stream loop so sockets aren’t leaked.
async ({ page, sendRedisValue, endRedisStream }) => {
  await expect(page.getByRole('heading', { name: 'Async Props Component' })).toBeVisible();
  await expect(page.getByText('Name: John Doe')).toBeVisible();
  await expect(page.getByText('Age: 30')).toBeVisible();
  await expect(page.getByText('Description: Software Engineer')).toBeVisible();
  await expect(page.getByText('Clean Code')).not.toBeVisible();
  await expect(page.getByText('Design Patterns')).not.toBeVisible();
  await expect(page.getByText('Engineering in software')).not.toBeVisible();
  await expect(page.getByText('The Need for Software Engineering')).not.toBeVisible();

  await sendRedisValue('books', ['Clean Code', 'Design Patterns']);
  await expect(page.getByText('Clean Code')).toBeVisible();
  await expect(page.getByText('Design Patterns')).toBeVisible();
  await expect(page.getByText('Engineering in software')).not.toBeVisible();
  await expect(page.getByText('The Need for Software Engineering')).not.toBeVisible();

  await sendRedisValue('researches', ['Engineering in software', 'The Need for Software Engineering']);
  await expect(page.getByText('Clean Code')).toBeVisible();
  await expect(page.getByText('Design Patterns')).toBeVisible();
  await expect(page.getByText('Engineering in software')).toBeVisible();
  await expect(page.getByText('The Need for Software Engineering')).toBeVisible();
  await endRedisStream();
Always close the Redis stream in a finally block.
If any expectation or sendRedisValue call fails before Line 27, endRedisStream() is skipped and the next E2E case inherits an open backend stream.
🧹 Suggested fix
async ({ page, sendRedisValue, endRedisStream }) => {
- await expect(page.getByRole('heading', { name: 'Async Props Component' })).toBeVisible();
- await expect(page.getByText('Name: John Doe')).toBeVisible();
- await expect(page.getByText('Age: 30')).toBeVisible();
- await expect(page.getByText('Description: Software Engineer')).toBeVisible();
- await expect(page.getByText('Clean Code')).not.toBeVisible();
- await expect(page.getByText('Design Patterns')).not.toBeVisible();
- await expect(page.getByText('Engineering in software')).not.toBeVisible();
- await expect(page.getByText('The Need for Software Engineering')).not.toBeVisible();
-
- await sendRedisValue('books', ['Clean Code', 'Design Patterns']);
- await expect(page.getByText('Clean Code')).toBeVisible();
- await expect(page.getByText('Design Patterns')).toBeVisible();
- await expect(page.getByText('Engineering in software')).not.toBeVisible();
- await expect(page.getByText('The Need for Software Engineering')).not.toBeVisible();
-
- await sendRedisValue('researches', ['Engineering in software', 'The Need for Software Engineering']);
- await expect(page.getByText('Clean Code')).toBeVisible();
- await expect(page.getByText('Design Patterns')).toBeVisible();
- await expect(page.getByText('Engineering in software')).toBeVisible();
- await expect(page.getByText('The Need for Software Engineering')).toBeVisible();
- await endRedisStream();
+ try {
+ await expect(page.getByRole('heading', { name: 'Async Props Component' })).toBeVisible();
+ await expect(page.getByText('Name: John Doe')).toBeVisible();
+ await expect(page.getByText('Age: 30')).toBeVisible();
+ await expect(page.getByText('Description: Software Engineer')).toBeVisible();
+ await expect(page.getByText('Clean Code')).not.toBeVisible();
+ await expect(page.getByText('Design Patterns')).not.toBeVisible();
+ await expect(page.getByText('Engineering in software')).not.toBeVisible();
+ await expect(page.getByText('The Need for Software Engineering')).not.toBeVisible();
+
+ await sendRedisValue('books', ['Clean Code', 'Design Patterns']);
+ await expect(page.getByText('Clean Code')).toBeVisible();
+ await expect(page.getByText('Design Patterns')).toBeVisible();
+ await expect(page.getByText('Engineering in software')).not.toBeVisible();
+ await expect(page.getByText('The Need for Software Engineering')).not.toBeVisible();
+
+ await sendRedisValue('researches', ['Engineering in software', 'The Need for Software Engineering']);
+ await expect(page.getByText('Clean Code')).toBeVisible();
+ await expect(page.getByText('Design Patterns')).toBeVisible();
+ await expect(page.getByText('Engineering in software')).toBeVisible();
+ await expect(page.getByText('The Need for Software Engineering')).toBeVisible();
+ } finally {
+ await endRedisStream();
+ }
},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails_pro/spec/dummy/e2e-tests/asyncProps.spec.ts` around lines 6 -
27, The test body is not ensuring endRedisStream() always runs; wrap the
existing test logic (the async callback that uses page, sendRedisValue,
endRedisStream) in a try/finally so that endRedisStream() is invoked in the
finally block regardless of failures; locate the async test function (the
anonymous async ({ page, sendRedisValue, endRedisStream }) => { ... }) and move
all assertions and sendRedisValue calls into the try, then call endRedisStream()
in the finally to guarantee the Redis stream is closed.
# Procfile for development with hot reloading of JavaScript and CSS

rails: rails s -p 3000
# rails: rails s -p 3000
Re-enable Rails and node-renderer in this dev Procfile
Commenting out both rails and node-renderer leaves only watcher processes, so the dummy app stack cannot actually serve or SSR/RSC-render when started via this Procfile. Unless this is intentionally temporary/local-only, this should stay runnable.
Suggested fix
-# rails: rails s -p 3000
+rails: rails s -p 3000
...
-# node-renderer: RENDERER_LOG_LEVEL=debug RENDERER_PORT=3800 node --inspect client/node-renderer.js
+node-renderer: RENDERER_LOG_LEVEL=debug RENDERER_PORT=3800 node --inspect client/node-renderer.js

Also applies to: 14-14
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails_pro/spec/dummy/Procfile.dev` at line 3, Uncomment and restore
the two process entries for the dummy app so it actually serves and
SSR/RSC-renders: re-enable the "rails" entry (the commented "rails: rails s -p
3000" line) and the "node-renderer" entry so both processes run alongside the
watchers in Procfile.dev; ensure the lines remain present and active (not
prefixed with '#') so the Procfile can start the full dummy stack when used.
# Read version from react-on-rails package.json
VERSION=$(node -e "console.log(require('./packages/react-on-rails/package.json').version)" 2>/dev/null)
if [[ -z "$VERSION" ]]; then
  echo "Error: Could not read version from packages/react-on-rails/package.json" >&2
  exit 1
fi
Bug: Relative path fails if script is invoked from outside repo root.
Line 52 uses ./packages/react-on-rails/package.json which is relative to the current working directory, not MONOREPO_ROOT. If the script is invoked from a different directory (e.g., ./react_on_rails/scripts/publish-to-builds-repo.sh), version detection will fail.
🐛 Proposed fix to use absolute path
 # Read version from react-on-rails package.json
-VERSION=$(node -e "console.log(require('./packages/react-on-rails/package.json').version)" 2>/dev/null)
+VERSION=$(node -e "console.log(require('$MONOREPO_ROOT/packages/react-on-rails/package.json').version)" 2>/dev/null)
 if [[ -z "$VERSION" ]]; then
   echo "Error: Could not read version from packages/react-on-rails/package.json" >&2
   exit 1
 fi

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@scripts/publish-to-builds-repo.sh` around lines 51 - 56, The VERSION read
uses a relative path ("./packages/react-on-rails/package.json") which breaks
when the script is run outside the repo root; update the node invocation that
sets VERSION to reference the absolute repo path (use the existing MONOREPO_ROOT
variable or derive the script's root) so it reads
"$MONOREPO_ROOT/packages/react-on-rails/package.json" (or equivalent) instead of
the relative path; ensure the VERSION assignment and subsequent error check
continue to work with the absolute path.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2c96039b4e
pool = ReactOnRailsPro::ServerRenderingPool::NodeRenderingPool

# Incremental rendering streams raw text from the VM without length-prefixed envelope
ReactOnRailsPro::StreamRequest.create(length_prefixed: false) do |send_bundle, barrier|
Parse incremental renderer responses as length-prefixed chunks
render_code_with_incremental_updates forces StreamRequest into length_prefixed: false, but the incremental endpoint still returns the same framed stream produced by handleRenderRequest (the renderer emits <metadata>\t<len>\n<content> chunks). In this mode downstream code receives plain header/content strings instead of hash chunks, so stream_react_component_with_async_props flows into helper code that expects chunk_json_result["html"] and can fail while building server-rendered output. This makes incremental async-props rendering break for real SSR responses rather than only test fixtures.
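The envelope described above (`<metadata>\t<len>\n<content>`) can be parsed with a sketch like this; the exact field layout is taken from the review text, not from the renderer source, and length is counted in characters here for simplicity:

```typescript
// Parse a concatenation of length-prefixed frames of the assumed form
// <metadata>\t<len>\n<content>, returning complete frames and stopping
// at the first incomplete one.
function parseFrames(input: string): Array<{ metadata: string; content: string }> {
  const frames: Array<{ metadata: string; content: string }> = [];
  let rest = input;
  while (rest.length > 0) {
    const headerEnd = rest.indexOf('\n');
    if (headerEnd === -1) break; // incomplete header: wait for more data
    const header = rest.slice(0, headerEnd);
    const tab = header.lastIndexOf('\t');
    const metadata = header.slice(0, tab);
    const len = Number(header.slice(tab + 1));
    const content = rest.slice(headerEnd + 1, headerEnd + 1 + len);
    if (content.length < len) break; // incomplete body
    frames.push({ metadata, content });
    rest = rest.slice(headerEnd + 1 + len);
  }
  return frames;
}
```

A consumer that treats this stream as plain text sees header bytes interleaved with content, which is the failure mode the comment describes.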
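The framed format referenced in this comment can be illustrated with a small round-trip sketch. This is not the renderer's actual encoder; the function names are hypothetical, and the decoder ignores cross-chunk buffering and the byte/UTF-16 length mismatch that non-ASCII content would introduce:

```typescript
// Minimal sketch of the framed wire format described above:
//   <metadata JSON>\t<content byte length in hex>\n<raw content>
// Illustrative only: the renderer's real encoder/decoder operate on bytes
// and handle cross-chunk buffering, which this sketch omits.
function encodeFrame(metadata: Record<string, unknown>, content: string): string {
  const byteLen = Buffer.byteLength(content, 'utf8');
  return `${JSON.stringify(metadata)}\t${byteLen.toString(16)}\n${content}`;
}

function decodeFrame(frame: string): { metadata: Record<string, unknown>; content: string } {
  const newlineIdx = frame.indexOf('\n');
  const header = frame.slice(0, newlineIdx);
  const tabIdx = header.indexOf('\t');
  const metadata = JSON.parse(header.slice(0, tabIdx)) as Record<string, unknown>;
  const contentLen = parseInt(header.slice(tabIdx + 1), 16);
  // Caution: slice() counts UTF-16 code units, so this only matches the hex
  // byte length for ASCII content; the real parser works on Buffers.
  return { metadata, content: frame.slice(newlineIdx + 1, newlineIdx + 1 + contentLen) };
}
```

A consumer expecting this envelope will misparse a raw-text stream, which is the mismatch the comment describes.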
```ruby
chunk.split("\n").each do |line|
  stripped = line.strip
  yield stripped unless stripped.empty?
```
Preserve cross-chunk buffering in line-based stream parsing
The new line-based parser splits each transport chunk with chunk.split("\n") and strips each piece, but HTTP stream chunk boundaries are arbitrary and may cut through a logical line. When that happens, partial fragments are yielded as independent messages and whitespace/newline semantics are lost, which can corrupt incremental stream payloads whenever the sender or network fragments data differently.
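The buffering this comment asks for can be sketched as follows. The file under review is Ruby; this TypeScript version (with illustrative names) only demonstrates the accumulator technique of holding back a trailing partial line until the next chunk or a final flush completes it:

```typescript
// Sketch of cross-chunk line buffering: a trailing partial line is held back
// until the next chunk (or flush) completes it. Names are illustrative.
class LineBuffer {
  private leftover = '';

  // Feed one transport chunk; onLine fires once per complete non-empty line.
  feed(chunk: string, onLine: (line: string) => void): void {
    const pieces = (this.leftover + chunk).split('\n');
    this.leftover = pieces.pop() ?? ''; // keep the partial tail for later
    for (const piece of pieces) {
      const stripped = piece.trim();
      if (stripped.length > 0) onLine(stripped);
    }
  }

  // Call at end-of-stream to emit a final line that lacked a newline.
  flush(onLine: (line: string) => void): void {
    const stripped = this.leftover.trim();
    this.leftover = '';
    if (stripped.length > 0) onLine(stripped);
  }
}
```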
Actionable comments posted: 11
🧹 Nitpick comments (3)
packages/react-on-rails-pro-node-renderer/src/worker/vm.ts (1)

432-436: Consider simplifying the return statement.

In an `async` function, you can return a plain object directly rather than wrapping it in `Promise.resolve()`.

🧹 Optional simplification

```diff
  } catch (exception) {
    const exceptionMessage = formatExceptionMessage(renderingRequest, exception);
    log.debug('Caught exception in rendering request: %s', exceptionMessage);
-   return Promise.resolve({ exceptionMessage });
+   return { exceptionMessage };
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/src/worker/vm.ts` around lines 432 - 436, the catch block currently returns Promise.resolve({ exceptionMessage }) which is unnecessary inside an async function; change the return to a plain object (return { exceptionMessage }) in the catch handling where formatExceptionMessage(renderingRequest, exception) is used so the async function returns the object directly instead of wrapping it with Promise.resolve.

react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb (1)

61-61: Fix test description typo (yeilds → yields).

Small naming cleanup to keep spec output professional and searchable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb` at line 61, update the example description string in the spec: change the test description in the example labeled "yeilds chunks in order" to the correctly spelled "yields chunks in order" so the spec reads with proper spelling; locate the it "yeilds chunks in order" example in request_spec.rb and correct the spelling.

packages/react-on-rails-pro/tests/injectRSCPayload.test.ts (1)

13-34: Add one split-frame RSC case here.

`createMockRSCStream()` always emits a complete length-prefixed frame per chunk, so this suite never exercises the parser across arbitrary stream boundaries. Since `injectRSCPayload()` now feeds raw chunks into `LengthPrefixedStreamParser`, please add at least one case where a single framed payload is split across multiple pushes, e.g. header and body delivered separately.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro/tests/injectRSCPayload.test.ts` around lines 13 - 34, add a test that exercises a split-frame RSC by constructing one full length-prefixed payload (use toLengthPrefixed on the payload string), then split that prefixed byte sequence into two parts and push them into the stream separately so injectRSCPayload/LengthPrefixedStreamParser sees the header and body across different chunks; implement this either by adding a new case in the existing test suite that builds the prefixed Uint8Array, slices it into two strings/Uint8Arrays and returns a PassThrough that pushes the first slice, then the second, then null, or extend createMockRSCStream usage in the test to accept pre-prefixed slices (keeping createMockRSCStream name to locate code and referencing injectRSCPayload and LengthPrefixedStreamParser in the test).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts`:
- Around line 249-261: The catch in getNextChunk is swallowing all errors once
any data is buffered; change it to only treat timeout/stream-closure sentinel
errors as normal boundaries and return receivedChunks.join(''), but rethrow any
other errors even if receivedChunks has data. Specifically, in getNextChunk's
catch block (calling getNextChunkInternal), inspect the caught error (e.g., by
checking error.name, error.code, or a sentinel property used by
getNextChunkInternal such as 'TimeoutError' or 'StreamClosed') and if it matches
timeout/closed semantics return buffered data, otherwise throw the error so real
stream errors are not masked.
- Around line 203-245: The current getNextChunkInternal implementation removes
and reattaches once('data') listeners which risks data loss and split-UTF8
decoding; change getNextChunkInternal to install a single long-lived
onData/onError/onClose listener (do not remove/re-add per-call) that pushes
incoming Buffer chunks into an internal queue/array and only resolves callers by
concatenating queued Buffers with Buffer.concat(buffers).toString('utf8');
preserve timeout behavior by having each getNextChunkInternal call check the
shared queue and resolve immediately if data exists or register a per-call
promise to be fulfilled when the queue grows, and ensure errors/close events are
forwarded to pending promises (reference getNextChunkInternal, onData,
cancelDataListener, and makeRequest for Buffer.concat decoding pattern).
In `@packages/react-on-rails-pro/src/getReactServerComponent.client.ts`:
- Around line 58-63: In createFromFetch, check response.ok before using the
response body stream: if response.ok is false, read the error payload (e.g.,
await response.text()) and throw a clear Error including response.status and the
response text so HTTP errors aren’t fed into the Flight/length-prefixed parser;
only proceed to access response.body and start the Flight parsing when
response.ok is true.
In `@packages/react-on-rails-pro/src/parseLengthPrefixedStream.ts`:
- Around line 45-90: The parser currently accepts a truncated final frame
silently; add a public finish() (or end()/checkEOF()) method on the same class
that validates end-of-stream: ensure this.state === 'header' and this.buf.length
=== 0, and if not throw a descriptive Error indicating truncated mid-header or
truncated mid-content (reference this.state, this.buf, this.contentLen and the
feed() logic that sets them); callers should invoke finish() when the source is
closed so leftover buffered bytes are reported as errors rather than dropped.
- Around line 69-73: The header length field parsing in
parseLengthPrefixedStream.ts is too permissive: before calling parseInt on
lenHex (derived from header.subarray(tabIdx + 1) and decoder.decode), validate
that lenHex is a non-empty string consisting only of hex digits (use a regex
like /^[0-9A-Fa-f]+$/) to reject garbage or negative signs, then parse with
parseInt(lenHex, 16) and ensure the resulting this.contentLen is >= 0; on any
failure (empty, regex mismatch, or negative) throw an Error indicating an
invalid content length hex so malformed headers cannot desynchronize frames
(refer to variables lenHex, decoder, header, tabIdx, and this.contentLen).
In `@react_on_rails_pro/lib/react_on_rails_pro/stream_request.rb`:
- Around line 153-165: process_line_based_chunks currently splits each incoming
chunk separately and can yield partial NDJSON lines when a line spans chunk
boundaries; restore a line accumulator (e.g., a local or instance variable such
as line_accumulator or leftover) that you append each chunk to, then split that
buffer on "\n", yield every complete stripped line via yield, and keep the final
partial segment in the accumulator for the next chunk; also continue to append
to error_body when response_has_error_status?(stream_response) and still set
`@react_on_rails_received_first_chunk` on the stream_response as before.
In `@react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb`:
- Line 13: The single-line hash assigned to metadata is too long—refactor the
metadata = { ... } literal into a multi-line hash so each key/value pair is on
its own line (e.g., metadata = { "consoleReplayScript" => "", "hasErrors" =>
false, ... } → a multi-line block) to satisfy the Ruby line-length rule; update
the assignment of metadata in request_spec.rb (the metadata variable/hash
literal) by breaking it across lines and aligning keys for readability.
- Around line 439-469: The current test in request_spec.rb is flaky because it
asserts exact execution_order equality with sleep timing; change the assertion
to check relative ordering instead of exact sequence: after collecting
execution_order from test_async_props_block and stream.each_chunk (used in
described_class.render_code_with_incremental_updates and stream.each_chunk),
assert that execution_order.index(:async_block_start) <
execution_order.index(:chunk_yielded) and execution_order.index(:chunk_yielded)
< execution_order.index(:async_block_end) (or equivalent RSpec matcher), so the
test only requires that the chunk was yielded while the async block was running
rather than enforcing a strict array equality.
In `@react_on_rails_pro/spec/react_on_rails_pro/stream_request_spec.rb`:
- Line 13: The metadata hash in stream_request_spec.rb assigned to metadata is
too long; break the hash across multiple lines (one key/value per line) so the
line length stays under 120 characters and improves readability—locate the
metadata assignment (metadata = { "consoleReplayScript" => "", "hasErrors" =>
false, "isShellReady" => true, "payloadType" => "string" }) and reformat it into
a multi-line hash with each key/value on its own indented line, preserving the
same keys and values.
In `@react_on_rails/lib/react_on_rails/length_prefixed_parser.rb`:
- Around line 18-30: The parse_one_chunk_result method currently only checks how
many complete chunks were yielded but doesn't ensure the input had no leftover
or partial-frame bytes; after calling parser.feed(str.to_s.b) and collecting
results, verify the parser consumed the entire input (no leftover/unparsed bytes
or an incomplete second frame) and if any leftover/partial data exists raise
ReactOnRails::Error with a clear message; update the logic around
parse_one_chunk_result, parser.feed and results to perform this extra
EOF/leftover check and reject inputs that are not exactly one complete
length-prefixed chunk.
- Around line 101-105: After parsing the hex header into `@content_len` using
Integer(len_hex, 16) in length_prefixed_parser.rb, validate that `@content_len` is
non‑negative and raise ReactOnRails::Error if it's negative; specifically, after
the Integer(...) call check if `@content_len` < 0 and raise with a message that
includes len_hex.force_encoding(Encoding::UTF_8).inspect so negative signed hex
values like "-1" or "-FF" are rejected before byteslice is used.
---
Nitpick comments:
In `@packages/react-on-rails-pro-node-renderer/src/worker/vm.ts`:
- Around line 432-436: The catch block currently returns Promise.resolve({
exceptionMessage }) which is unnecessary inside an async function; change the
return to a plain object (return { exceptionMessage }) in the catch handling
where formatExceptionMessage(renderingRequest, exception) is used so the async
function returns the object directly instead of wrapping it with
Promise.resolve.
In `@packages/react-on-rails-pro/tests/injectRSCPayload.test.ts`:
- Around line 13-34: Add a test that exercises a split-frame RSC by constructing
one full length-prefixed payload (use toLengthPrefixed on the payload string),
then split that prefixed byte sequence into two parts and push them into the
stream separately so injectRSCPayload/LengthPrefixedStreamParser sees the header
and body across different chunks; implement this either by adding a new case in
the existing test suite that builds the prefixed Uint8Array, slices it into two
strings/Uint8Arrays and returns a PassThrough that pushes the first slice, then
the second, then null, or extend createMockRSCStream usage in the test to accept
pre-prefixed slices (keeping createMockRSCStream name to locate code and
referencing injectRSCPayload and LengthPrefixedStreamParser in the test).
In `@react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb`:
- Line 61: Update the example description string in the spec: change the test
description in the example labeled "yeilds chunks in order" to the correctly
spelled "yields chunks in order" so the spec reads with proper spelling; locate
the it "yeilds chunks in order" example in request_spec.rb and correct the
spelling.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0583d187-ffe0-49c9-8451-517a0ac4dd0d
📒 Files selected for processing (39)
- packages/react-on-rails-pro-node-renderer/src/worker/vm.ts
- packages/react-on-rails-pro-node-renderer/tests/concurrentHtmlStreaming.test.ts
- packages/react-on-rails-pro-node-renderer/tests/handleRenderRequest.test.ts
- packages/react-on-rails-pro-node-renderer/tests/htmlStreaming.test.js
- packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts
- packages/react-on-rails-pro-node-renderer/tests/parseLengthPrefixedStream.ts
- packages/react-on-rails-pro-node-renderer/tests/streamErrorHang.test.ts
- packages/react-on-rails-pro/src/getReactServerComponent.client.ts
- packages/react-on-rails-pro/src/injectRSCPayload.ts
- packages/react-on-rails-pro/src/parseLengthPrefixedStream.ts
- packages/react-on-rails-pro/src/streamingUtils.ts
- packages/react-on-rails-pro/src/transformRSCNodeStream.ts
- packages/react-on-rails-pro/src/transformRSCStreamAndReplayConsoleLogs.ts
- packages/react-on-rails-pro/tests/injectRSCPayload.test.ts
- packages/react-on-rails-pro/tests/streamServerRenderedReactComponent.test.jsx
- packages/react-on-rails-pro/tests/transformRSCStreamAndReplayConsoleLogs.test.ts
- packages/react-on-rails-pro/tests/utils/removeRSCChunkStack.ts
- packages/react-on-rails/src/base/client.ts
- packages/react-on-rails/src/base/full.rsc.ts
- packages/react-on-rails/src/base/full.ts
- packages/react-on-rails/src/serverRenderReactComponent.ts
- packages/react-on-rails/src/serverRenderUtils.ts
- packages/react-on-rails/src/types/index.ts
- packages/react-on-rails/tests/serverRenderReactComponent.test.ts
- react_on_rails/lib/react_on_rails/helper.rb
- react_on_rails/lib/react_on_rails/length_prefixed_parser.rb
- react_on_rails/lib/react_on_rails/server_rendering_pool/ruby_embedded_java_script.rb
- react_on_rails/spec/dummy/spec/helpers/react_on_rails_helper_spec.rb
- react_on_rails/spec/dummy/spec/requests/server_render_check_spec.rb
- react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
- react_on_rails_pro/app/views/react_on_rails_pro/rsc_payload.text.erb
- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
- react_on_rails_pro/lib/react_on_rails_pro/request.rb
- react_on_rails_pro/lib/react_on_rails_pro/stream_request.rb
- react_on_rails_pro/spec/dummy/spec/helpers/react_on_rails_pro_helper_spec.rb
- react_on_rails_pro/spec/dummy/spec/requests/rsc_payload_spec.rb
- react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb
- react_on_rails_pro/spec/react_on_rails_pro/stream_integration_spec.rb
- react_on_rails_pro/spec/react_on_rails_pro/stream_request_spec.rb
💤 Files with no reviewable changes (2)
- packages/react-on-rails-pro/tests/transformRSCStreamAndReplayConsoleLogs.test.ts
- packages/react-on-rails-pro/src/transformRSCStreamAndReplayConsoleLogs.ts
✅ Files skipped from review due to trivial changes (3)
- react_on_rails_pro/lib/react_on_rails_pro/concerns/stream.rb
- react_on_rails/spec/dummy/spec/requests/server_render_check_spec.rb
- react_on_rails_pro/spec/react_on_rails_pro/stream_integration_spec.rb
🚧 Files skipped from review as they are similar to previous changes (4)
- react_on_rails_pro/app/views/react_on_rails_pro/rsc_payload.text.erb
- packages/react-on-rails/src/base/client.ts
- react_on_rails_pro/app/helpers/react_on_rails_pro_helper.rb
- react_on_rails_pro/lib/react_on_rails_pro/request.rb
```typescript
export const getNextChunkInternal = (
  stream: NodeJS.ReadableStream,
  { timeout = 250 }: { timeout?: number } = {},
) => {
  return new Promise<string>((resolve, reject) => {
    let timeoutId: NodeJS.Timeout;
    let cancelDataListener = () => {};
    if (timeout) {
      timeoutId = setTimeout(() => {
        cancelDataListener();
        reject(new Error(`Timeout after waiting for ${timeout}ms to get the next stream chunk`));
      }, timeout);
    }

    const onData = (chunk: Buffer) => {
      clearTimeout(timeoutId);
      cancelDataListener();
      resolve(chunk.toString());
    };

    const onError = (error: Error) => {
      clearTimeout(timeoutId);
      cancelDataListener();
      reject(error);
    };

    const onClose = () => {
      reject(new Error('Stream Closed'));
    };

    cancelDataListener = () => {
      stream.off('data', onData);
      stream.off('error', onError);
      stream.off('close', onClose);
    };

    stream.once('data', onData);
    stream.once('error', onError);
    if ('closed' in stream && stream.closed) {
      onClose();
    } else {
      stream.once('close', onClose);
    }
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Get the file size and content
wc -l packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts
```

Repository: shakacode/react_on_rails
Length of output: 139
🏁 Script executed:
```shell
# Read the specific file
cat -n packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts | head -100
```

Repository: shakacode/react_on_rails
Length of output: 4282
🏁 Script executed:
```shell
# Search for usages of getNextChunkInternal
rg "getNextChunkInternal" packages/react-on-rails-pro-node-renderer/
```

Repository: shakacode/react_on_rails
Length of output: 306
🏁 Script executed:
```shell
# Look for makeRequest function to understand the "split-UTF-8 corruption" fix
rg "makeRequest\|split.*UTF" packages/react-on-rails-pro-node-renderer/ -A 5
```

Repository: shakacode/react_on_rails
Length of output: 50
🏁 Script executed:
```shell
# Read the rest of the file from line 100 onwards
cat -n packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts | tail -170
```

Repository: shakacode/react_on_rails
Length of output: 6170
🏁 Script executed:
```shell
# Search more broadly for the caller context
rg "getNextChunkInternal\|makeRequest" packages/react-on-rails-pro-node-renderer/tests/ -B 5 -A 10
```

Repository: shakacode/react_on_rails
Length of output: 50
🏁 Script executed:
```shell
# Look for any references to UTF-8, Buffer.concat, or stream handling patterns
rg "Buffer\.concat|UTF-8|split.*chunk|data.*listener" packages/react-on-rails-pro-node-renderer/ -A 3 -B 3
```

Repository: shakacode/react_on_rails
Length of output: 50382
🏁 Script executed:
```shell
# Search for any tests or usage patterns of getNextChunk/getNextChunkInternal
rg "getNextChunk\|makeRequest" packages/react-on-rails-pro-node-renderer/ -l
```

Repository: shakacode/react_on_rails
Length of output: 50
🏁 Script executed:
```shell
# Check if there are any test files using these functions
find packages/react-on-rails-pro-node-renderer -name "*.test.ts" -o -name "*.spec.ts" | head -20
```

Repository: shakacode/react_on_rails
Length of output: 1237
🏁 Script executed:
```shell
# Look for test files that might use these utilities
rg "getNextChunk|describe|it\(" packages/react-on-rails-pro-node-renderer/tests/ -l --type ts
```

Repository: shakacode/react_on_rails
Length of output: 1152
🏁 Script executed:
```shell
# Search for where getNextChunk is actually being called in test files
rg "getNextChunk|import.*httpRequestUtils" packages/react-on-rails-pro-node-renderer/tests/ -B 2 -A 5 --type ts
```

Repository: shakacode/react_on_rails
Length of output: 10909
🏁 Script executed:
```shell
# Look at the actual test to see if UTF-8 or stream behavior issues are observable
cat -n packages/react-on-rails-pro-node-renderer/tests/incrementalHtmlStreaming.test.ts | head -150
```

Repository: shakacode/react_on_rails
Length of output: 5362
Refactor getNextChunk* functions to use a single long-lived buffer and queue instead of reattaching once('data') in a loop.
onData() removes all listeners before the next getNextChunkInternal() call reattaches them. This causes two problems:

1. Data loss in the gap: when listeners are removed, the stream pauses. Data from upstream can be lost during the interval before new listeners are attached, and errors during this gap have no handler.
2. Split-UTF-8 corruption: calling `chunk.toString()` on individual chunks (line 220) reintroduces the multi-byte UTF-8 splitting issue that `makeRequest()` solves by using `Buffer.concat()` before decoding (line 143).

Instead, maintain a single long-lived listener and internal queue for the entire read burst, accumulating buffers and decoding only after all chunks are collected with `Buffer.concat(buffers).toString('utf8')`, similar to the pattern in `makeRequest()`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts` around
lines 203 - 245, The current getNextChunkInternal implementation removes and
reattaches once('data') listeners which risks data loss and split-UTF8 decoding;
change getNextChunkInternal to install a single long-lived
onData/onError/onClose listener (do not remove/re-add per-call) that pushes
incoming Buffer chunks into an internal queue/array and only resolves callers by
concatenating queued Buffers with Buffer.concat(buffers).toString('utf8');
preserve timeout behavior by having each getNextChunkInternal call check the
shared queue and resolve immediately if data exists or register a per-call
promise to be fulfilled when the queue grows, and ensure errors/close events are
forwarded to pending promises (reference getNextChunkInternal, onData,
cancelDataListener, and makeRequest for Buffer.concat decoding pattern).
```typescript
export const getNextChunk = async (stream: NodeJS.ReadableStream, options: { timeout?: number } = {}) => {
  const receivedChunks: string[] = [];
  // eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
  while (true) {
    try {
      // eslint-disable-next-line no-await-in-loop
      const chunk = await getNextChunkInternal(stream, options);
      receivedChunks.push(chunk);
    } catch (err) {
      if (receivedChunks.length > 0) {
        return receivedChunks.join('');
      }
      throw err;
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts
```

Repository: shakacode/react_on_rails
Length of output: 8217
Distinguish between stream termination and stream errors in the catch block.
The catch block currently returns buffered data for any error—including real stream errors—once data has been received. This masks corruption and stream failures as partial success, weakening error-path tests. Only treat timeout and stream close as normal boundaries and return buffered data; rethrow actual stream errors regardless of buffered chunks.
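One way to implement that distinction, assuming hypothetical sentinel error classes (the real helper might match on `error.name` or `error.code` instead):

```typescript
// Sketch of separating expected read boundaries from real failures.
// TimeoutError and StreamClosedError are hypothetical sentinel classes.
class TimeoutError extends Error {}
class StreamClosedError extends Error {}

function isExpectedBoundary(err: unknown): boolean {
  return err instanceof TimeoutError || err instanceof StreamClosedError;
}

// The idea for getNextChunk's catch block: an expected boundary ends the
// read burst normally when data was buffered; any other error propagates.
function resolveReadError(err: unknown, buffered: string[]): string {
  if (buffered.length > 0 && isExpectedBoundary(err)) {
    return buffered.join('');
  }
  throw err;
}
```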
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts` around
lines 249 - 261, The catch in getNextChunk is swallowing all errors once any
data is buffered; change it to only treat timeout/stream-closure sentinel errors
as normal boundaries and return receivedChunks.join(''), but rethrow any other
errors even if receivedChunks has data. Specifically, in getNextChunk's catch
block (calling getNextChunkInternal), inspect the caught error (e.g., by
checking error.name, error.code, or a sentinel property used by
getNextChunkInternal such as 'TimeoutError' or 'StreamClosed') and if it matches
timeout/closed semantics return buffered data, otherwise throw the error so real
stream errors are not masked.
```diff
 const createFromFetch = async (fetchPromise: Promise<Response>, cspNonce?: string) => {
   const response = await fetchPromise;
-  const stream = response.body;
-  if (!stream) {
+  const { body } = response;
+  if (!body) {
     throw new Error('No stream found in response');
   }
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's check if the file exists and get its full content
cat -n packages/react-on-rails-pro/src/getReactServerComponent.client.ts
```

Repository: shakacode/react_on_rails
Length of output: 11239
Check response.ok before starting the Flight parser.
fetch resolves on 4xx/5xx status codes. Without an HTTP status check, error responses with text bodies will be fed into the length-prefixed parser, surfacing confusing framing/Flight parsing failures instead of the actual HTTP error.
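A sketch of the suggested guard; the function name and error wording are assumptions, not the module's actual code:

```typescript
// Guard the Flight/length-prefixed parser behind an HTTP status check so an
// error page is surfaced as an HTTP error, not a framing failure.
async function getOkStream(fetchPromise: Promise<Response>): Promise<ReadableStream<Uint8Array>> {
  const response = await fetchPromise;
  if (!response.ok) {
    // Read the error payload and raise it instead of feeding it downstream.
    const errorText = await response.text();
    throw new Error(`RSC request failed with status ${response.status}: ${errorText}`);
  }
  if (!response.body) {
    throw new Error('No stream found in response');
  }
  return response.body;
}
```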
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro/src/getReactServerComponent.client.ts` around
lines 58 - 63, In createFromFetch, check response.ok before using the response
body stream: if response.ok is false, read the error payload (e.g., await
response.text()) and throw a clear Error including response.status and the
response text so HTTP errors aren’t fed into the Flight/length-prefixed parser;
only proceed to access response.body and start the Flight parsing when
response.ok is true.
```typescript
feed(chunk: Uint8Array, onChunk: (content: Uint8Array, metadata: Record<string, unknown>) => void): void {
  this.buf = concatBytes(this.buf, chunk);

  let canExtract = true;
  while (canExtract) {
    if (this.state === 'header') {
      const idx = this.buf.indexOf(0x0a); // \n
      if (idx >= 0) {
        const header = this.buf.subarray(0, idx);
        this.buf = this.buf.subarray(idx + 1);
        const tabIdx = header.indexOf(0x09); // \t
        if (tabIdx < 0) {
          throw new Error(
            `Malformed length-prefixed header: missing tab separator in: ${JSON.stringify(decoder.decode(header))}`,
          );
        }
        const metaStr = decoder.decode(header.subarray(0, tabIdx));
        try {
          this.metadata = JSON.parse(metaStr);
        } catch {
          throw new Error(
            `Malformed length-prefixed header: invalid metadata JSON: ${JSON.stringify(metaStr)}`,
          );
        }
        const lenHex = decoder.decode(header.subarray(tabIdx + 1));
        this.contentLen = parseInt(lenHex, 16);
        if (Number.isNaN(this.contentLen)) {
          throw new Error(`Invalid content length hex: ${JSON.stringify(lenHex)}`);
        }
        this.state = 'content';
      } else {
        canExtract = false;
      }
    } else if (this.buf.length >= this.contentLen) {
      const content = this.buf.subarray(0, this.contentLen);
      this.buf = this.buf.subarray(this.contentLen);
      // Strip protocol-internal payloadType before passing to consumers
      const metadata = { ...this.metadata };
      delete metadata.payloadType;
      onChunk(content, metadata);
      this.state = 'header';
    } else {
      canExtract = false;
    }
  }
}
```
Don't silently accept a truncated final frame.
If the source ends mid-header or mid-content, feed() leaves buffered bytes behind and nothing reports that as an error. The current consumers will just close the stream and drop the tail. This parser needs a finish()/EOF check that requires state === 'header' and an empty buffer.
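A sketch of such a `finish()` check, assuming the internal fields named in this comment (`state`, `buf`, `contentLen`); field names are taken from the review, so treat them as illustrative:

```typescript
// EOF validation for the length-prefixed parser: a clean end of stream means
// we are between frames (state 'header') with nothing buffered.
interface ParserEofState {
  state: 'header' | 'content';
  buf: Uint8Array;
  contentLen: number;
}

function finish(parser: ParserEofState): void {
  if (parser.state === 'header' && parser.buf.length === 0) {
    return; // clean EOF: not mid-frame, nothing buffered
  }
  if (parser.state === 'content') {
    throw new Error(
      `Truncated length-prefixed stream: expected ${parser.contentLen} content bytes, got ${parser.buf.length} at EOF`,
    );
  }
  throw new Error(`Truncated length-prefixed stream: ${parser.buf.length} unparsed header bytes at EOF`);
}
```

Callers would invoke this when the source closes, so leftover bytes become an error instead of being silently dropped.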
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro/src/parseLengthPrefixedStream.ts` around lines 45
- 90, The parser currently accepts a truncated final frame silently; add a
public finish() (or end()/checkEOF()) method on the same class that validates
end-of-stream: ensure this.state === 'header' and this.buf.length === 0, and if
not throw a descriptive Error indicating truncated mid-header or truncated
mid-content (reference this.state, this.buf, this.contentLen and the feed()
logic that sets them); callers should invoke finish() when the source is closed
so leftover buffered bytes are reported as errors rather than dropped.
```typescript
const lenHex = decoder.decode(header.subarray(tabIdx + 1));
this.contentLen = parseInt(lenHex, 16);
if (Number.isNaN(this.contentLen)) {
  throw new Error(`Invalid content length hex: ${JSON.stringify(lenHex)}`);
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
cd packages/react-on-rails-pro/src && cat -n parseLengthPrefixedStream.ts | sed -n '50,90p'
```

Repository: shakacode/react_on_rails
Length of output: 1950
🏁 Script executed:
```shell
cat packages/react-on-rails-pro/src/parseLengthPrefixedStream.ts | wc -l
```

Repository: shakacode/react_on_rails
Length of output: 70
🏁 Script executed:
```shell
cat -n packages/react-on-rails-pro/src/parseLengthPrefixedStream.ts
```

Repository: shakacode/react_on_rails
Length of output: 3823
🏁 Script executed:
```shell
node <<'EOF'
// Verify parseInt behavior with various inputs
console.log("parseInt('10garbage', 16):", parseInt('10garbage', 16));
console.log("parseInt('-1', 16):", parseInt('-1', 16));
console.log("parseInt('abc', 16):", parseInt('abc', 16));
console.log("parseInt('-abc', 16):", parseInt('-abc', 16));
console.log("parseInt('', 16):", parseInt('', 16));
console.log("parseInt('xyz', 16):", parseInt('xyz', 16));
console.log("Number.isNaN(parseInt('10garbage', 16)):", Number.isNaN(parseInt('10garbage', 16)));
console.log("Number.isNaN(parseInt('-1', 16)):", Number.isNaN(parseInt('-1', 16)));
EOF
```

Repository: shakacode/react_on_rails
Length of output: 308
🏁 Script executed:
```shell
cd packages/react-on-rails-pro/src && grep -r "LengthPrefixedStreamParser" --include="*.ts" --include="*.tsx"
```

Repository: shakacode/react_on_rails
Length of output: 688
🏁 Script executed:
```shell
cd packages/react-on-rails-pro/src && grep -r "feed(" transformRSCNodeStream.ts getReactServerComponent.client.ts -A 3 -B 3
```

Repository: shakacode/react_on_rails
Length of output: 1037
🏁 Script executed:
```shell
cd packages/react-on-rails-pro/src && head -n 100 transformRSCNodeStream.ts | tail -n 50
```

Repository: shakacode/react_on_rails
Length of output: 1731
🏁 Script executed:
```shell
cd packages/react-on-rails-pro/src && grep -B5 "parser.feed" transformRSCNodeStream.ts getReactServerComponent.client.ts | head -n 30
```

Repository: shakacode/react_on_rails
Length of output: 884
🏁 Script executed:
```shell
# Check if there's any upstream validation or encoding of the lenHex field
cd packages/react-on-rails-pro/src && grep -r "parseInt\|lenHex\|parseInt.*16" --include="*.ts" -B2 -A2
```

Repository: shakacode/react_on_rails
Length of output: 625
🏁 Script executed:
```shell
# Check what format is produced on the encoding side
cd packages/react-on-rails-pro/src && find . -name "*.ts" -exec grep -l "subarray\|concat\|0x09\|0x0a" {} \; | head -5
```

Repository: shakacode/react_on_rails
Length of output: 118
Validate the length field as strict unsigned hex before parseInt.
parseInt(lenHex, 16) accepts strings like "10garbage" (parses as 16) and "-1" (parses as -1). The current Number.isNaN() check only catches completely unparseable strings, allowing malformed headers to cause silent frame desynchronization. Add validation to ensure lenHex contains only hex digits [0-9a-fA-F] before parsing, and reject negative values or trailing garbage.
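A possible strict validation, sketched as a standalone helper rather than the parser's actual code:

```typescript
// Strict unsigned-hex parsing for the length field: rejects empty strings,
// signs, whitespace, "0x" prefixes, and trailing garbage that parseInt
// would silently accept.
function parseContentLength(lenHex: string): number {
  if (!/^[0-9A-Fa-f]+$/.test(lenHex)) {
    throw new Error(`Invalid content length hex: ${JSON.stringify(lenHex)}`);
  }
  const len = parseInt(lenHex, 16);
  if (!Number.isSafeInteger(len) || len < 0) {
    throw new Error(`Invalid content length hex: ${JSON.stringify(lenHex)}`);
  }
  return len;
}
```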
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro/src/parseLengthPrefixedStream.ts` around lines 69
- 73, The header length field parsing in parseLengthPrefixedStream.ts is too
permissive: before calling parseInt on lenHex (derived from
header.subarray(tabIdx + 1) and decoder.decode), validate that lenHex is a
non-empty string consisting only of hex digits (use a regex like
/^[0-9A-Fa-f]+$/) to reject garbage or negative signs, then parse with
parseInt(lenHex, 16) and ensure the resulting this.contentLen is >= 0; on any
failure (empty, regex mismatch, or negative) throw an Error indicating an
invalid content length hex so malformed headers cannot desynchronize frames
(refer to variables lenHex, decoder, header, tabIdx, and this.contentLen).
```ruby
# Encodes an HTML string in the length-prefixed wire format expected by LengthPrefixedParser.
# Wire format: <metadata JSON>\t<content byte length hex>\n<raw content bytes>
def to_length_prefixed(html)
  metadata = { "consoleReplayScript" => "", "hasErrors" => false, "isShellReady" => true, "payloadType" => "string" }
```
Wrap the long metadata literal.
Please split this hash across multiple lines to satisfy the Ruby line-length rule.
💡 Suggested change
```diff
-  metadata = { "consoleReplayScript" => "", "hasErrors" => false, "isShellReady" => true, "payloadType" => "string" }
+  metadata = {
+    "consoleReplayScript" => "",
+    "hasErrors" => false,
+    "isShellReady" => true,
+    "payloadType" => "string",
+  }
```

As per coding guidelines, `**/*.rb`: "Line length max 120 characters in Ruby code; break long chains using multi-line method calls".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ruby
metadata = {
  "consoleReplayScript" => "",
  "hasErrors" => false,
  "isShellReady" => true,
  "payloadType" => "string",
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb` at line 13, The
single-line hash assigned to metadata is too long—refactor the metadata = { ...
} literal into a multi-line hash so each key/value pair is on its own line
(e.g., metadata = { "consoleReplayScript" => "", "hasErrors" => false, ... } → a
multi-line block) to satisfy the Ruby line-length rule; update the assignment of
metadata in request_spec.rb (the metadata variable/hash literal) by breaking it
across lines and aligning keys for readability.
```ruby
it "executes async_props_block concurrently with response streaming via barrier.async" do
  execution_order = []

  test_async_props_block = proc do |_emitter|
    execution_order << :async_block_start
    # Simulate async work - this runs in a separate fiber
    sleep 0.01
    execution_order << :async_block_end
  end

  # Track when chunks are yielded during streaming
  allow(mock_response).to receive(:each) do |&block|
    execution_order << :chunk_yielded
    block.call("chunk\n")
  end

  # Allow real emitter to be created for this test
  allow(ReactOnRailsPro::AsyncPropsEmitter).to receive(:new).and_call_original

  stream = described_class.render_code_with_incremental_updates(
    "/render-incremental",
    js_code,
    async_props_block: test_async_props_block
  )

  stream.each_chunk(&:itself)

  # Verify concurrent execution: chunk should be yielded while async block is running
  # If synchronous, order would be [:async_block_start, :async_block_end, :chunk_yielded]
  expect(execution_order).to eq(%i[async_block_start chunk_yielded async_block_end])
end
```
The concurrency-order assertion is flaky due to strict sequence matching.
Using sleep + exact array equality makes this test timing-sensitive and can fail nondeterministically under CI load.
💡 Suggested change
```diff
-  expect(execution_order).to eq(%i[async_block_start chunk_yielded async_block_end])
+  expect(execution_order).to include(:async_block_start, :chunk_yielded, :async_block_end)
+  expect(execution_order.index(:async_block_start)).to be < execution_order.index(:chunk_yielded)
+  expect(execution_order.index(:chunk_yielded)).to be < execution_order.index(:async_block_end)
```

🤖 Prompt for AI Agents
+ expect(execution_order.index(:chunk_yielded)).to be < execution_order.index(:async_block_end)🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails_pro/spec/react_on_rails_pro/request_spec.rb` around lines 439
- 469, The current test in request_spec.rb is flaky because it asserts exact
execution_order equality with sleep timing; change the assertion to check
relative ordering instead of exact sequence: after collecting execution_order
from test_async_props_block and stream.each_chunk (used in
described_class.render_code_with_incremental_updates and stream.each_chunk),
assert that execution_order.index(:async_block_start) <
execution_order.index(:chunk_yielded) and execution_order.index(:chunk_yielded)
< execution_order.index(:async_block_end) (or equivalent RSpec matcher), so the
test only requires that the chunk was yielded while the async block was running
rather than enforcing a strict array equality.
```ruby
RSpec.describe ReactOnRailsPro::StreamRequest do
  # Wraps a string in the length-prefixed wire format for mock streaming responses.
  def to_length_prefixed(html)
    metadata = { "consoleReplayScript" => "", "hasErrors" => false, "isShellReady" => true, "payloadType" => "string" }
```
Split the metadata hash across lines to satisfy Ruby line-length rules.
This line is too long and should be wrapped for readability and guideline compliance.
💡 Suggested change
```diff
-    metadata = { "consoleReplayScript" => "", "hasErrors" => false, "isShellReady" => true, "payloadType" => "string" }
+    metadata = {
+      "consoleReplayScript" => "",
+      "hasErrors" => false,
+      "isShellReady" => true,
+      "payloadType" => "string",
+    }
```

As per coding guidelines, `**/*.rb`: "Line length max 120 characters in Ruby code; break long chains using multi-line method calls".
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ruby
metadata = {
  "consoleReplayScript" => "",
  "hasErrors" => false,
  "isShellReady" => true,
  "payloadType" => "string",
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails_pro/spec/react_on_rails_pro/stream_request_spec.rb` at line
13, The metadata hash in stream_request_spec.rb assigned to metadata is too
long; break the hash across multiple lines (one key/value per line) so the line
length stays under 120 characters and improves readability—locate the metadata
assignment (metadata = { "consoleReplayScript" => "", "hasErrors" => false,
"isShellReady" => true, "payloadType" => "string" }) and reformat it into a
multi-line hash with each key/value on its own indented line, preserving the
same keys and values.
```ruby
def self.parse_one_chunk_result(str)
  parser = new
  results = []
  parser.feed(str.to_s.b) { |chunk| results << chunk }
  if results.empty?
    raise ReactOnRails::Error,
          "Malformed render result: expected exactly one length-prefixed chunk but found none"
  end
  if results.size > 1
    raise ReactOnRails::Error,
          "Malformed render result: expected exactly one length-prefixed chunk but found #{results.size}"
  end
  results.first
```
Require a clean EOF for parse_one_chunk_result.
This only verifies how many complete frames were emitted. One valid frame followed by leftover bytes or a partial second frame still returns results.first, which weakens the “exactly one chunk” contract for non-streaming renders.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails/lib/react_on_rails/length_prefixed_parser.rb` around lines 18
- 30, The parse_one_chunk_result method currently only checks how many complete
chunks were yielded but doesn't ensure the input had no leftover or
partial-frame bytes; after calling parser.feed(str.to_s.b) and collecting
results, verify the parser consumed the entire input (no leftover/unparsed bytes
or an incomplete second frame) and if any leftover/partial data exists raise
ReactOnRails::Error with a clear message; update the logic around
parse_one_chunk_result, parser.feed and results to perform this extra
EOF/leftover check and reject inputs that are not exactly one complete
length-prefixed chunk.
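To make the stricter contract concrete, here is a minimal sketch of an "exactly one chunk, clean EOF" parse. It uses a simplified inline frame parser rather than the real `LengthPrefixedParser`, and the method and error names (`parse_single_frame`, `MalformedRenderResult`) are illustrative, not the library's actual API:

```ruby
# Sketch of a strict single-chunk parse over the wire format
# <metadata JSON>\t<content byte length hex>\n<raw content bytes>.
# Any trailing bytes after the one frame (even a partial second frame)
# are treated as a protocol error instead of being silently ignored.
class MalformedRenderResult < StandardError; end

# Parses one frame; returns [content, remaining_bytes].
def parse_single_frame(bytes)
  header_end = bytes.index("\n") or raise MalformedRenderResult, "no header terminator"
  header = bytes[0...header_end]
  _metadata_json, len_hex = header.split("\t", 2)
  raise MalformedRenderResult, "bad length #{len_hex.inspect}" unless len_hex&.match?(/\A\h+\z/)

  content_len = Integer(len_hex, 16)
  content = bytes.byteslice(header_end + 1, content_len)
  raise MalformedRenderResult, "truncated content" if content.nil? || content.bytesize < content_len

  [content, bytes.byteslice(header_end + 1 + content_len, bytes.bytesize)]
end

def parse_one_chunk_result(str)
  content, leftover = parse_single_frame(str.to_s.b)
  # Clean-EOF check: the review's point is that leftover bytes must fail loudly.
  raise MalformedRenderResult, "#{leftover.bytesize} leftover byte(s) after chunk" unless leftover.empty?

  content
end
```

The same leftover check could be expressed against the real parser by exposing its internal buffer size after `feed` returns.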
```ruby
begin
  @content_len = Integer(len_hex, 16)
rescue ArgumentError
  raise ReactOnRails::Error, "Invalid content length hex: #{len_hex.force_encoding(Encoding::UTF_8).inspect}"
end
```
🧩 Analysis chain
🏁 Script executed:
find . -name "length_prefixed_parser.rb" -type f
Length of output: 129
🏁 Script executed:
cat -n react_on_rails/lib/react_on_rails/length_prefixed_parser.rb
Length of output: 5700
🏁 Script executed:
python3 << 'EOF'
# Test Integer() behavior with signed hex
# This mimics Ruby's Integer(str, 16) behavior
# In Python, we can test with int()
test_values = ["-1", "-FF", "FF", "10"]
for val in test_values:
try:
result = int(val, 16)
print(f"int('{val}', 16) = {result}")
except ValueError as e:
print(f"int('{val}', 16) raises: {e}")
EOF
Length of output: 147
Reject negative content lengths in the header parser.
Integer(len_hex, 16) accepts signed hex values like -1, -FF, etc., causing @content_len to become negative. This then causes byteslice at lines 121–122 to behave unexpectedly (negative lengths count from the end) instead of failing fast at the protocol boundary. Add validation to reject negative values immediately after parsing.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@react_on_rails/lib/react_on_rails/length_prefixed_parser.rb` around lines 101
- 105, After parsing the hex header into `@content_len` using Integer(len_hex, 16)
in length_prefixed_parser.rb, validate that `@content_len` is non‑negative and
raise ReactOnRails::Error if it's negative; specifically, after the Integer(...)
call check if `@content_len` < 0 and raise with a message that includes
len_hex.force_encoding(Encoding::UTF_8).inspect so negative signed hex values
like "-1" or "-FF" are rejected before byteslice is used.
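The fix can be sketched as a regex guard before parsing. Note that Ruby's `Integer(str, 16)` already rejects trailing garbage (unlike JavaScript's `parseInt`), but it still accepts signed values like `"-1"` and digit separators like `"1_0"`, so a strict unsigned-hex check is needed; the method and error names here are illustrative:

```ruby
# Sketch: strict unsigned-hex validation for the frame-length header.
# The regex \A\h+\z allows only [0-9a-fA-F], so negatives, underscores,
# signs, and empty strings are rejected before Integer() ever runs.
class InvalidContentLength < StandardError; end

def parse_content_length(len_hex)
  unless len_hex.match?(/\A\h+\z/)
    raise InvalidContentLength, "Invalid content length hex: #{len_hex.inspect}"
  end

  Integer(len_hex, 16) # guaranteed >= 0 by the regex above
end
```

Because the regex forbids a leading minus sign, the separate `@content_len < 0` check becomes unreachable, but keeping it as a belt-and-suspenders assertion is harmless.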
…er Components (#1853)

This PR adds a new helper `stream_react_component_with_async_props` that lets you start rendering a React component **immediately** while database queries run in the background. As each query completes, the result streams to the component, which progressively updates. This dramatically improves Time to First Byte (TTFB) for pages with slow data fetches.

```ruby
<%= stream_react_component("Dashboard", props: {
  users: User.all.to_a,    # 1 second
  posts: Post.recent.to_a, # 1 second
  metrics: Metric.summary  # 1 second
}) %>

<%= stream_react_component_with_async_props("Dashboard", props: { title: "My Dashboard" }) do |emit|
  emit.call("users", User.all.to_a)    # Streams when ready
  emit.call("posts", Post.recent.to_a) # Streams when ready
  emit.call("metrics", Metric.summary) # Streams when ready
end %>
```

---

1. [The Problem](#the-problem)
2. [The Solution](#the-solution)
3. [Key Concepts & Terminology](#key-concepts--terminology)
4. [Dependencies & Infrastructure](#dependencies--infrastructure)
5. [How It Works](#how-it-works)
6. [Architecture Overview](#architecture-overview)
7. [Usage Guide](#usage-guide)
8. [Design Decisions](#design-decisions)
9. [Implementation Details](#implementation-details)
10. [Testing](#testing)
11. [Files Changed](#files-changed)

---

## The Problem

React on Rails Pro supports **streaming SSR** using React 18's `renderToPipeableStream`. This allows the server to start sending HTML before the entire page is rendered, showing loading states via `<Suspense>` boundaries.

However, there's a fundamental bottleneck: **all props must be fetched before rendering begins**.
```
Current Flow:
  Time:     0ms                                              3000ms
  Rails:    [──── Fetch users ────][── Fetch posts ──][─ metrics ─]
  Node:                                                    [render]
  Browser:                                                     [HTML]

  User sees: nothing for 3 seconds, then the entire page appears
```

Even though React *can* stream HTML incrementally, we're not leveraging it because Rails blocks on data fetching first.

**What we want:**

```
Desired Flow:
  Time:     0ms      500ms      1500ms     2500ms     3000ms
  Rails:    [── users ──]
            [────── posts ──────]
            [─────────── metrics ───────────]
  Node:     [shell]────[+users]────[+posts]────[+metrics]
  Browser:  [loading]  [users!]    [posts!]    [complete]

  User sees: loading state at 50ms, progressive content after
```

---

## The Solution

We introduce **async props** - props that are declared upfront but whose values arrive *after* rendering has already started. The React component renders immediately with loading states. As each async prop value arrives from Rails, the component re-renders with the new data.

This is powered by:

1. **Bidirectional HTTP/2 streaming** between Rails and Node
2. **Promise-based suspension** in React Server Components
3.
**Request-scoped state sharing** to coordinate between initial render and updates

| Component | Purpose |
|-----------|---------|
| `stream_react_component_with_async_props` | New Rails helper that accepts a block for async props |
| `AsyncPropsEmitter` | Ruby class that sends prop values to Node as they become available |
| `/incremental-render/` endpoint | New Node endpoint that accepts streaming updates |
| `AsyncPropsManager` | TypeScript class that manages promises for async props |
| `sharedExecutionContext` | Request-scoped Map for sharing state between render and updates |

---

## Key Concepts & Terminology

Before diving into the architecture, let's define the key concepts:

### NDJSON

A simple streaming format where each line is a valid JSON object:

```
{"type": "init", "data": "..."}
{"type": "update", "propName": "users", "value": [...]}
{"type": "update", "propName": "posts", "value": [...]}
```

**Why NDJSON?** It allows streaming JSON objects one at a time over a single HTTP connection. Each line can be parsed independently as it arrives.

### Bidirectional HTTP/2 Streaming

HTTP/2 allows sending data in both directions simultaneously on a single connection:

```
┌─────────────────────────────────────────────────────────────┐
│ HTTP/2 Stream                                               │
├─────────────────────────────────────────────────────────────┤
│ Rails → Node: Request data, then updates as props ready     │
│ Node → Rails: HTML chunks as React renders them             │
│                                                             │
│ Both happen at the same time on ONE connection!             │
└─────────────────────────────────────────────────────────────┘
```

### Suspense-Based Streaming

React 18's streaming SSR can pause rendering when it hits a Promise, continue with other parts of the tree, and resume when the Promise resolves:

```jsx
function Dashboard() {
  return (
    <div>
      <Header /> {/* Renders immediately */}
      <Suspense fallback={<Spinner />}>
        <UserList users={asyncPromise} /> {/* Waits for promise */}
      </Suspense>
      <Footer /> {/* Renders immediately */}
    </div>
  );
}
```

### Shared Execution Context

When Node renders a React component, it creates an **ExecutionContext** - an isolated environment for that specific HTTP request. The `sharedExecutionContext` is a `Map` within this environment that allows the initial render code and subsequent update code to share data (like the AsyncPropsManager instance) **without using global variables**.

```javascript
// During initial render:
sharedExecutionContext.set("asyncPropsManager", manager);

// During update (same request, moments later):
const manager = sharedExecutionContext.get("asyncPropsManager");
manager.setProp("users", data);
```

**Why not globals?** The Node renderer handles multiple concurrent requests. Global variables would leak data between users.

---

## Dependencies & Infrastructure

### The Async gem

Provides cooperative concurrency via Fibers. Key constructs we use:

```ruby
Async do |task|
  barrier = Async::Barrier.new
  barrier.async { fetch_users } # Runs concurrently
  barrier.async { fetch_posts } # Runs concurrently
  barrier.wait                  # Waits for all to complete
end
```

We use `barrier.async` to run the async props block concurrently with streaming the response.

### HTTPX with `stream_bidi`

HTTPX is our HTTP client for communicating with the Node renderer. The `stream_bidi` plugin enables bidirectional streaming:

```ruby
response = connection.post(url, body: data)

request = connection.build_request("POST", url, stream: true)
response = connection.request(request, stream: true)
request << "more data" # Send while response streams back
request.close          # Signal end of request body
```

### Fastify

The Node renderer uses Fastify to handle HTTP requests.
We register a new endpoint:

```typescript
app.post('/bundles/:bundleTimestamp/incremental-render/:digest', handler);
```

### VM Contexts

Each JavaScript bundle runs in an isolated VM context. The `ExecutionContext` wraps VM operations and manages the `sharedExecutionContext` Map.

---

## How It Works

Let's trace through a complete request step by step:

```erb
<%= stream_react_component_with_async_props("Dashboard", props: { title: "Hi" }) do |emit|
  emit.call("users", User.all.to_a)
end %>
```

The helper:

1. Stores the block in `options[:async_props_block]`
2. Calls the standard `stream_react_component` pipeline

In `NodeRenderingPool#eval_streaming_js`:

```ruby
async_props_block = render_options.internal_option(:async_props_block)
if async_props_block
  # Use incremental rendering endpoint
  Request.render_code_with_incremental_updates(path, js_code, async_props_block: async_props_block)
else
  # Use standard streaming endpoint
  Request.render_code_as_stream(path, js_code)
end
```

```ruby
request = connection.build_request("POST", path,
  headers: { "content-type" => "application/x-ndjson" },
  body: [],
  stream: true
)
emitter = AsyncPropsEmitter.new(bundle_timestamp, request)
response = connection.request(request, stream: true)
request << "#{initial_data.to_json}\n"
```

```ruby
barrier.async do
  async_props_block.call(emitter) # User's block runs here
ensure
  request.close # Signals end of request body
end
```

The block runs in a separate Fiber while the response streams back to the client.
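The producer/consumer shape of this flow can be sketched without the `async` gem. The following stand-in uses a thread and a queue instead of fibers and HTTPX, so it is only an illustration of the ordering guarantees (block runs concurrently, then signals end-of-stream), not the actual `Request` implementation; all names here are hypothetical:

```ruby
require "json"

# Simplified stand-in for the fiber-based flow: the props block runs
# concurrently (here, on a thread) while the consumer drains chunks.
# `request_stream` mimics the HTTPX request-body sink.
request_stream = Queue.new

producer = Thread.new do
  emitter = ->(name, value) { request_stream << { propName: name, value: value }.to_json }
  emitter.call("users", [{ id: 1 }])
  emitter.call("posts", [{ id: 2 }])
ensure
  request_stream << :closed # stands in for request.close / END_STREAM
end

# Consumer side: drain NDJSON lines until the stream is closed.
lines = []
while (line = request_stream.pop) != :closed
  lines << line
end
producer.join
```

The key property mirrored here is that the close signal always arrives (via `ensure`) even if the user's block raises, so the Node side can run its cleanup chunk.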
The `/incremental-render/` endpoint parses the first NDJSON line:

```typescript
// handleIncrementalRenderStream.ts
for await (const chunk of request.raw) {
  // Parse NDJSON lines
  const parsed = JSON.parse(line);
  if (!hasReceivedFirstObject) {
    // First line: Start rendering
    const result = await handleIncrementalRenderRequest(parsed);
    // Returns response stream + sink for updates
  }
}
```

The rendering code includes setup for async props:

```javascript
// Generated by ServerRenderingJsCode#async_props_setup_js
if (ReactOnRails.isRSCBundle) {
  var { props, asyncPropManager } = ReactOnRails.addAsyncPropsCapabilityToComponentProps(usedProps);
  usedProps = props;
  sharedExecutionContext.set("asyncPropsManager", asyncPropManager);
}

// Then render the component with enhanced props
return ReactOnRails.serverRenderRSCReactComponent({ name, props: usedProps, ... });
```

```tsx
async function Dashboard({ title, getReactOnRailsAsyncProp }) {
  // This returns a Promise that will resolve when the prop arrives
  const usersPromise = getReactOnRailsAsyncProp('users');

  return (
    <Suspense fallback={<Spinner />}>
      <UserList users={usersPromise} /> {/* Suspends on promise */}
    </Suspense>
  );
}
```

React suspends on the unresolved promise and streams the initial shell with loading state.

When `emit.call("users", data)` is called:

```ruby
update_chunk = {
  bundleTimestamp: @bundle_timestamp,
  updateChunk: <<~JS
    (function(){
      var manager = sharedExecutionContext.get("asyncPropsManager");
      manager.setProp("users", #{data.to_json});
    })()
  JS
}
@request_stream << "#{update_chunk.to_json}\n"
```

```typescript
// handleIncrementalRenderStream.ts - for subsequent NDJSON lines
await incrementalSink.add(parsed);

// handleIncrementalRenderRequest.ts - sink.add implementation
add: async (chunk) => {
  await executionContext.runInVM(chunk.updateChunk, bundlePath);
}
```

The update chunk JavaScript runs in the same ExecutionContext, accessing the same `sharedExecutionContext` Map.
```typescript
// AsyncPropsManager#setProp
setProp(propName, propValue) {
  const controller = this.propNameToPromiseController.get(propName);
  controller.resolve(propValue); // Promise resolves!
}
```

React detects the resolved promise, renders the component with data, and streams more HTML.

When the Rails block completes and `request.close` is called:

1. HTTP/2 END_STREAM flag is sent
2. Node calls `sink.handleRequestClosed()`
3. This executes `onRequestClosedUpdateChunk` which calls `asyncPropsManager.endStream()`
4. Any props that were never sent get their promises rejected

---

## Architecture Overview

Now that you understand the concepts, here's how they fit together:

```mermaid
sequenceDiagram
    participant View as Rails View
    participant Emitter as AsyncPropsEmitter
    participant HTTPX as HTTPX (HTTP/2)
    participant Node as Node Renderer
    participant VM as ExecutionContext
    participant React as React Component
    participant Manager as AsyncPropsManager

    View->>HTTPX: Open bidirectional stream
    HTTPX->>Node: NDJSON Line 1: {renderingRequest}
    Node->>VM: Execute initial render code
    VM->>Manager: Create AsyncPropsManager
    VM->>VM: Store manager in sharedExecutionContext
    VM->>React: Render component
    React->>Manager: getReactOnRailsAsyncProp("users")
    Manager-->>React: Returns Promise (unresolved)
    React-->>React: Suspends on Promise
    Node-->>HTTPX: Stream initial HTML (loading states)
    HTTPX-->>View: Forward HTML to client
    Note over View,Emitter: Meanwhile, in barrier.async fiber...
    View->>Emitter: emit.call("users", data)
    Emitter->>HTTPX: NDJSON Line 2: {updateChunk}
    HTTPX->>Node: Forward update
    Node->>VM: Execute updateChunk
    VM->>Manager: setProp("users", data)
    Manager->>React: Resolve Promise
    React-->>React: Resume rendering with data
    Node-->>HTTPX: Stream more HTML
    HTTPX-->>View: Forward HTML
    View->>HTTPX: Close request (block complete)
    HTTPX->>Node: END_STREAM
    Node->>VM: Execute onRequestClosedUpdateChunk
    VM->>Manager: endStream()
```

```
RAILS APPLICATION
  View: dashboard/index.html.erb
    <%= stream_react_component_with_async_props("Dashboard") do |emit| %>
      <% emit.call("users", User.all) %>
    <% end %>
        │
        ▼
  ReactOnRailsProHelper
    ├── stream_react_component_with_async_props
    │     └── Stores block, calls stream_react_component
    └── consumer_stream_async
          └── Sets up barrier for concurrent execution
        │
        ▼
  NodeRenderingPool#eval_streaming_js
    └── Detects async_props_block, routes to incremental endpoint
        │
        ▼
  Request#render_code_with_incremental_updates
    ├── Creates HTTPX bidirectional stream
    ├── Sends initial NDJSON request
    └── barrier.async { block.call(emitter); request.close }
        │
        ▼
  AsyncPropsEmitter
    ├── call(prop_name, value) → Sends NDJSON update line
    └── end_stream_chunk → Code to run when request closes

        │  HTTP/2 + NDJSON
        ▼

NODE RENDERER
  worker.ts - Fastify Server
    └── POST /bundles/:timestamp/incremental-render/:digest
        │
        ▼
  handleIncrementalRenderStream.ts
    ├── Parses incoming NDJSON stream
    ├── Line 1 → onRenderRequestReceived (initial render)
    ├── Line N → onUpdateReceived (prop updates)
    └── END_STREAM → onRequestEnded (cleanup)
        │
        ▼
  handleIncrementalRenderRequest.ts
    ├── Executes initial render via handleRenderRequest
    └── Returns "sink" object with:
        ├── add(chunk) → Executes update in VM
        └── handleRequestClosed() → Executes cleanup code
        │
        ▼
  vm.ts - ExecutionContext
    ├── Created per HTTP request (not shared between requests!)
    ├── sharedExecutionContext = new Map()
    │     └── Allows initial render & updates to share state
    └── runInVM(code) → Executes JS in isolated VM context

        │  VM Execution
        ▼

REACT-ON-RAILS-PRO PACKAGE
  ReactOnRailsRSC.ts
    └── addAsyncPropsCapabilityToComponentProps(props)
        ├── Creates AsyncPropsManager instance
        ├── Adds getReactOnRailsAsyncProp function to props
        └── Returns { asyncPropManager, props }
        │
        ▼
  AsyncPropsManager.ts
    ├── getProp(name) → Returns Promise (cached per prop name)
    ├── setProp(name, value) → Resolves the Promise
    └── endStream() → Rejects all unresolved Promises
```

---

## Usage Guide

**Rails View:**

```erb
<%# app/views/dashboard/index.html.erb %>
<%= stream_react_component_with_async_props("Dashboard", props: { title: "My Dashboard" }) do |emit| %>
  <%# These queries run concurrently with rendering %>
  <% emit.call("users", User.active.includes(:profile).to_a) %>
  <% emit.call("recent_posts", Post.recent.limit(10).to_a) %>
  <% emit.call("notifications", current_user.notifications.unread.to_a) %>
<% end %>
```

**React Component:**

```tsx
// app/javascript/components/Dashboard.tsx
import { Suspense } from 'react';
import type { WithAsyncProps } from 'react-on-rails-pro';

// Define your prop types
type SyncProps = {
  title: string;
};

type AsyncProps = {
  users: User[];
  recent_posts: Post[];
  notifications: Notification[];
};

// WithAsyncProps adds getReactOnRailsAsyncProp to your props
type Props = WithAsyncProps<AsyncProps, SyncProps>;

export default function Dashboard({ title, getReactOnRailsAsyncProp }: Props) {
  return (
    <div>
      <h1>{title}</h1> {/* Renders immediately */}

      <Suspense fallback={<UsersSkeleton />}>
        <UsersList users={getReactOnRailsAsyncProp('users')} />
      </Suspense>

      <Suspense fallback={<PostsSkeleton />}>
        <RecentPosts posts={getReactOnRailsAsyncProp('recent_posts')} />
      </Suspense>

      <Suspense fallback={<NotificationsSkeleton />}>
        <NotificationsBell items={getReactOnRailsAsyncProp('notifications')} />
      </Suspense>
    </div>
  );
}

// Async component that awaits the promise
async function UsersList({ users }: { users: Promise<User[]> }) {
  const resolvedUsers = await users;
  return (
    <ul>
      {resolvedUsers.map(user => (
        <li key={user.id}>{user.name}</li>
      ))}
    </ul>
  );
}
```

### Error Handling

If a prop is never sent (e.g., the block raises an error), the promise will reject when `endStream()` is called:

```tsx
import { ErrorBoundary } from 'react-error-boundary';

function Dashboard({ getReactOnRailsAsyncProp }) {
  return (
    <ErrorBoundary fallback={<p>Failed to load users</p>}>
      <Suspense fallback={<Spinner />}>
        <UsersList users={getReactOnRailsAsyncProp('users')} />
      </Suspense>
    </ErrorBoundary>
  );
}
```

### When to Use Async Props

**Good candidates:**

- Slow database queries (N+1, complex joins)
- External API calls
- Heavy computations
- Data that varies in load time

**Not needed for:**

- Fast queries (< 50ms)
- Static data
- Data needed for the initial shell/layout

---

## Design Decisions

**Problem:** The initial render and update chunks need to share the `AsyncPropsManager` instance. Where should it live?
**Option 1: Global Variable (Rejected)**

```javascript
// ❌ DANGEROUS - Node handles multiple concurrent requests
global.asyncPropsManager = new AsyncPropsManager();

// Request A stores Alice's manager
global.asyncPropsManager = aliceManager;

// Request B (different user!) might read it
global.asyncPropsManager.setProp("data", bobsData); // Corrupts Alice's render!
```

**Option 2: Request-Scoped Map (Chosen)**

```javascript
// ✅ SAFE - Each request has its own Map
// Created in buildExecutionContext():
const sharedExecutionContext = new Map();

// Initial render stores manager
sharedExecutionContext.set("asyncPropsManager", manager);

// Update chunk (same ExecutionContext) retrieves it
const manager = sharedExecutionContext.get("asyncPropsManager");
```

The `sharedExecutionContext` is:

- Created once per HTTP request
- Passed to VM only during code execution
- Never stored globally
- Automatically garbage collected when request ends

**Problem:** How should the component access async props?

**Option 1: Custom Hook (Rejected)**

```tsx
// ❌ DOESN'T WORK - React Server Components can't use hooks
function Dashboard() {
  const users = useAsyncProps('users'); // Error: Hooks not available in RSC
  return <div>{users}</div>;
}
```

**Option 2: Function in Props (Chosen)**

```tsx
// ✅ WORKS - Just a closure, no hook runtime needed
async function Dashboard({ getReactOnRailsAsyncProp }) {
  const users = await getReactOnRailsAsyncProp('users');
  return <div>{users}</div>;
}
```

React Server Components are async functions, not components with a lifecycle. They can't use hooks because the hook runtime isn't available server-side.

**Problem:** React may call render functions multiple times. What happens if `getProp()` returns a new Promise each time?

```typescript
// ❌ BAD - New promise each call
getProp(name) {
  return new Promise(resolve => { ... }); // Different object!
}

// React renders, gets Promise A
// React re-renders, gets Promise B (different object)
// React thinks state changed, re-renders again
// Gets Promise C... infinite loop! 💥
```

**Solution:** Return the same Promise object for repeated calls:

```typescript
// ✅ GOOD - Cached promise
private cache = new Map<string, PromiseController>();

getProp(name) {
  if (this.cache.has(name)) {
    return this.cache.get(name).promise; // Same object
  }
  // ... create and cache new promise
}
```

---

## Implementation Details

### Wire Format

**Request (Rails → Node):**

Line 1 - Initial render request:

```json
{
  "gemVersion": "16.2.0",
  "protocolVersion": "2.0.0",
  "password": "...",
  "renderingRequest": "(function(componentName = 'Dashboard') { ... })()",
  "onRequestClosedUpdateChunk": {
    "bundleTimestamp": "abc123",
    "updateChunk": "(function(){ sharedExecutionContext.get('asyncPropsManager').endStream(); })()"
  }
}
```

Line 2+ - Update chunks:

```json
{
  "bundleTimestamp": "abc123",
  "updateChunk": "(function(){ sharedExecutionContext.get('asyncPropsManager').setProp('users', [...]); })()"
}
```

**Response (Node → Rails):** Streaming HTML chunks:

```json
{"html": "<div id=\"Dashboard\"><!--$-->Loading...", "consoleReplayScript": "", "isShellReady": true}
{"html": "<ul><li>User 1</li></ul><!--/$-->", "consoleReplayScript": "", "isShellReady": true}
```

### HTTPX Patch

We discovered a bug in HTTPX's `stream_bidi` plugin where the `@headers_sent` flag isn't reset on retry, causing crashes. We include a patch:

```ruby
module HTTPX::Plugins::StreamBidi::RequestMethodsRetryFix
  def transition(nextstate)
    @headers_sent = false if nextstate == :idle
    super
  end
end
```

This will be removed once fixed upstream.
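The cached-promise design above can also be illustrated in Ruby. This is an analog of the TypeScript `AsyncPropsManager`, not its actual implementation: `PropCell` and `StreamEndedError` are hypothetical names, and Queue-backed cells stand in for promises. The invariant it demonstrates is the same: repeated `get_prop` calls for one name return the *same* object, `set_prop` resolves it, and `end_stream` rejects anything still pending:

```ruby
# Queue-backed stand-in for a promise: resolves or rejects exactly once,
# and memoizes the outcome so repeated awaits return the same result.
class PropCell
  def initialize
    @queue = Queue.new
  end

  def resolve(value)
    @queue << [:ok, value]
  end

  def reject(error)
    @queue << [:error, error]
  end

  def await
    @result ||= @queue.pop # first outcome wins; later pushes are ignored
    status, payload = @result
    raise payload if status == :error

    payload
  end
end

# Ruby analog of AsyncPropsManager's cached-controller idea.
class AsyncPropsManagerSketch
  StreamEndedError = Class.new(StandardError)

  def initialize
    @cells = Hash.new { |h, k| h[k] = PropCell.new } # one cell per prop name
  end

  def get_prop(name)
    @cells[name] # same object on every call for a given name
  end

  def set_prop(name, value)
    @cells[name].resolve(value)
  end

  # Push a rejection into every cell; cells already resolved keep their
  # first (memoized) outcome, so only never-sent props actually fail.
  def end_stream
    @cells.each_value { |cell| cell.reject(StreamEndedError.new("stream ended")) }
  end
end
```

The identity guarantee (`get_prop` returning the same cell) is what prevents the re-render loop described above.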
---

```bash
cd packages/react-on-rails-pro-node-renderer
pnpm test

cd packages/react-on-rails-pro
pnpm test

cd react_on_rails_pro
bundle exec rspec

cd react_on_rails_pro/spec/dummy
bundle exec rspec spec/requests/incremental_rendering_integration_spec.rb
```

| Component | Test File | Lines |
|-----------|-----------|-------|
| NDJSON Stream Parser | `handleIncrementalRenderStream.test.ts` | 205 |
| Incremental Render Handler | `incrementalRender.test.ts` | 875 |
| HTML Streaming | `incrementalHtmlStreaming.test.ts` | 233 |
| AsyncPropsManager | `AsyncPropManager.test.ts` | 142 |
| Ruby AsyncPropsEmitter | `async_props_emitter_spec.rb` | 43 |
| Ruby Request | `request_spec.rb` | 184 |

---

| File | Purpose |
|------|---------|
| `handleIncrementalRenderRequest.ts` | Handles initial incremental render, returns sink |
| `handleIncrementalRenderStream.ts` | Parses NDJSON stream from Rails |
| `AsyncPropsManager.ts` | Manages promises for async props |
| `async_props_emitter.rb` | Sends prop values to Node |
| `httpx_stream_bidi_patch.rb` | Fixes HTTPX retry bug |

| File | Changes |
|------|---------|
| `worker.ts` | Added `/incremental-render/` endpoint |
| `vm.ts` | Added `sharedExecutionContext` to ExecutionContext |
| `request.rb` | Added `render_code_with_incremental_updates` |
| `server_rendering_js_code.rb` | Added `async_props_setup_js` |
| `node_rendering_pool.rb` | Routes async props to incremental endpoint |
| `react_on_rails_pro_helper.rb` | Added `stream_react_component_with_async_props` |
| `ReactOnRailsRSC.ts` | Added `addAsyncPropsCapabilityToComponentProps` |
| `types/index.ts` | Added `WithAsyncProps` type |

---

**None.** This is an additive feature. Existing code continues to work unchanged.
---

- [ ] Add timeout support for individual async props
- [ ] Performance benchmarks comparing before/after
- [ ] DevTools integration to visualize async prop timing
- [ ] Support for async props in non-RSC streaming (client components)

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

* **New Features**
  * Incremental NDJSON HTML streaming over HTTP/2 with bidirectional support and per-request execution contexts.
  * Async props system for progressive prop delivery to components during streaming; test and demo components added.
  * Bundle existence validation to prevent renders when required bundles are missing.
* **Bug Fixes**
  * Fixed HTTP/2 streaming reliability and END_STREAM signaling; improved streaming error handling and size limits.
* **Security**
  * Dependency updates to mitigate CVE-2025-55182.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude <noreply@anthropic.com>
Fixes race conditions that prevented async props from being delivered when
`emit.call` is invoked without delays.
- Update chunks could arrive before `AsyncPropsManager` was initialized
- Added `getOrCreateAsyncPropsManager()` for lazy initialization on
first access
- `handleIncrementalRenderStream` returned before HTTP response was
fully sent
- Changed `void onResponseStart()` to capture and await the promise
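The lazy-initialization and promise-caching behavior described above can be sketched as a minimal standalone manager. This is an illustration of the pattern only, not the shipped `AsyncPropsManager`; the internal field layout is assumed:

```javascript
// Minimal sketch of lazy, promise-caching async props (illustrative, not the shipped class).
class AsyncPropsManager {
  constructor() {
    this.controllers = new Map(); // propName -> { promise, resolve }
  }

  // Lazily creates the controller on first access and always returns the SAME
  // promise object for repeated calls, so repeated renders never see a "new" value.
  getProp(name) {
    let controller = this.controllers.get(name);
    if (!controller) {
      let resolve;
      const promise = new Promise((r) => {
        resolve = r;
      });
      controller = { promise, resolve };
      this.controllers.set(name, controller);
    }
    return controller.promise;
  }

  // Works even if the value arrives before anyone asked for the prop:
  // getProp() creates the controller, then we resolve it.
  setProp(name, value) {
    this.getProp(name); // ensure controller exists (lazy init)
    this.controllers.get(name).resolve(value);
  }
}

const manager = new AsyncPropsManager();
const p1 = manager.getProp('users');
const p2 = manager.getProp('users');
console.log(p1 === p2); // true: same object, no re-render loop
manager.setProp('users', ['Alice', 'Bob']);
```

Because `setProp` goes through `getProp` first, an update chunk arriving before the component asked for the prop resolves the same cached promise the component later receives, which is the ordering bug the lazy `getOrCreateAsyncPropsManager()` fix addresses.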
When using HTTPX's `stream_bidi` plugin with `retries` plugin for
bidirectional HTTP/2 streaming, a protocol error occurs during
connection retry:
```
stream 0 closed with error: protocol_error
```
**Root cause:** When a connection fails and retry kicks in, the
`retries` plugin calls `transition(:idle)` to reset the request state.
However:
1. The body callbacks from the previous attempt remain registered
2. These stale callbacks can fire during the retry, causing HTTP/2
protocol violations
**Fix:** Clear body callbacks when transitioning to `:idle` state in our
`httpx_stream_bidi_patch.rb`:
```ruby
if nextstate == :idle
callbacks(:body).clear
end
```
This ensures a clean slate for the retried request.
**Problem:** When a `StreamChunkTimeoutError` occurs during incremental
rendering, workers would take 30+ seconds to shut down during graceful
termination.
**Root causes identified:**
1. **`onResponse` not called on abort/timeout:** The
`handleGracefulShutdown.ts` only used `onResponse` to decrement
`activeRequestsCount`. But `onResponse` is NOT called when:
- Client aborts the connection (`onRequestAbort` fires instead)
- Request times out (`onTimeout` fires instead)
2. **PassThrough stream not closed on error:** In `handleStreamError()`,
when the React render stream errors, the PassThrough wrapper was never
destroyed. Node.js `pipe()` does NOT automatically close the destination
on source error, leaving the HTTP response open.
3. **HTTP response not closed on chunk timeout:** When
`StreamChunkTimeoutError` fires, we called `handleRequestClosed()` to
tell React to finish, but nothing closed the HTTP response to Rails.
**Fixes applied:**
1. **`handleGracefulShutdown.ts`:** Added `onRequestAbort` and
`onTimeout` hooks to decrement `activeRequestsCount`
2. **`utils.ts`:** Fixed `handleStreamError()` to destroy the
PassThrough when source stream errors:
```typescript
stream.on('error', (error) => {
onError(error);
newStreamAfterHandlingError.destroy(error); // Close PassThrough!
});
```
3. **`worker.ts`:** Added `res.raw.destroy()` after chunk timeout to
immediately close the HTTP connection
Fixes #2270
Fixes #2308
- [x] Test `stream_react_component_with_async_props` without sleep
delays
- [x] Verify async props are delivered correctly regardless of timing
- [x] Test worker restart scenario with persistent HTTP/2 connections
- [x] Verify graceful shutdown completes promptly after chunk timeout
errors
🤖 Generated with [Claude Code](https://claude.ai/code)
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
## Summary

- Adds `scripts/publish-to-builds-repo.sh` that builds all JS packages and pushes the compiled output to [shakacode/react-on-rails-builds](https://github.com/shakacode/react-on-rails-builds)
- The builds repo contains pre-built `package.json` + `lib/` for each package, ready for consumption as pnpm git dependencies — no TypeScript compilation needed during install
- The script defaults to pushing to a branch matching the current react_on_rails branch name (overridable with `--branch`)

### What the script does

1. Runs `pnpm build` for all packages
2. Copies `package.json` + `lib/` to the builds repo
3. Replaces `workspace:*` references with the actual version
4. Strips `devDependencies` and build-related scripts
5. Commits, pushes, and tags (e.g., `v16.4.0`)

### Usage

```bash
./scripts/publish-to-builds-repo.sh               # build + push + tag
./scripts/publish-to-builds-repo.sh --dry-run     # preview without pushing
./scripts/publish-to-builds-repo.sh --tag v1.2.3  # custom tag
./scripts/publish-to-builds-repo.sh --branch main # override branch
```

### Consumer usage (pnpm)

```json
{
  "react-on-rails": "github:shakacode/react-on-rails-builds#v16.4.0&path:react-on-rails",
  "react-on-rails-pro": "github:shakacode/react-on-rails-builds#v16.4.0&path:react-on-rails-pro",
  "react-on-rails-pro-node-renderer": "github:shakacode/react-on-rails-builds#v16.4.0&path:react-on-rails-pro-node-renderer"
}
```

## Test plan

- [x] Dry run completes successfully
- [x] Real run pushes to `shakacode/react-on-rails-builds` with correct structure
- [x] Verified `workspace:*` replaced with actual version in dist `package.json`
- [x] Verified `devDependencies` and build scripts stripped
- [x] Verified consumer `pnpm install` resolves correct packages in ~30s

🤖 Generated with [Claude Code](https://claude.com/claude-code)

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary

Closes #2614

- Replace `JSON.stringify` wrapping of HTML content in the Node→Ruby streaming protocol with a length-prefixed format that sends raw content bytes without escaping
- Auto-detect format on the Ruby side for backward compatibility with older node renderers
- Refactor console replay logging into a shared method to handle both Hash and String chunks

## Problem

The current NDJSON protocol wraps every HTML chunk in a JSON envelope:

```json
{"html":"<div>...escaped HTML...</div>","consoleReplayScript":"","hasErrors":false,"isShellReady":true}\n
```

`JSON.stringify` escapes every `"`, `\`, and `\n` in the HTML content — adding **~30% overhead** for typical payloads. Ruby then `JSON.parse`s each line, undoing all the escaping. This serialize→deserialize round-trip is pure waste for the bulk content (HTML is 99.9% of each chunk by size).

## Solution

New wire format per chunk:

```
<metadata JSON>\t<content byte length hex>\n<raw content bytes>
```

Example:

```
{"consoleReplayScript":"","hasErrors":false,"isShellReady":true}\t00028A3C\n<div>...(raw HTML, zero escaping)...
```

- **Metadata** (~80 bytes): small JSON without the `html` field — negligible escaping cost
- **Content** (bulk): raw bytes with length prefix — **zero** escaping overhead
- **Ruby parser**: reads the header line (until `\n`), extracts the content length, reads exactly that many raw bytes
- **Auto-detection**: if the header line contains `\t`, it's length-prefixed; otherwise legacy NDJSON (backward compatible)

## Changes

| File | Change |
|------|--------|
| `streamingUtils.ts` | Write `metadata\tcontent_length\nraw_content` instead of `JSON.stringify(envelope)\n` |
| `stream_request.rb` | Replace `loop_response_lines` with `loop_response_chunks` (length-prefixed parser with auto-detection) |
| `ruby_embedded_java_script.rb` | Handle both Hash (new) and String (legacy) chunks; extract shared `replay_console_to_rails_logger` |
| `streamServerRenderedReactComponent.test.jsx` | Update `expectStreamChunk` to parse length-prefixed format |
| `removeRSCChunkStack.ts` | Update to parse length-prefixed format with NDJSON fallback |

## Test plan

- [x] `streamServerRenderedReactComponent.test.jsx` — 16/16 pass (core streaming tests)
- [x] `concurrentRSCPayloadGeneration.rsc.test.tsx` — pass (parallel RSC rendering)
- [x] `RSCSerialization.rsc.test.tsx` — pass
- [x] `ReactOnRailsRSC.rsc.test.tsx` — pass
- [x] `serverRenderRSCReactComponent.rsc.test.tsx` — pass
- [x] `injectRSCPayload.test.ts` — 9/9 pass
- [ ] Integration tests with real node renderer
- [ ] E2E tests with dummy app

🤖 Generated with [Claude Code](https://claude.com/claude-code)

<!-- This is an auto-generated comment: release notes by coderabbit.ai -->

## Summary by CodeRabbit

* **Improvements**
  * Switched streaming to a length‑prefixed protocol for atomic chunk delivery, correct multibyte handling, and compatibility with legacy newline‑delimited streams.
  * More reliable stream parsing with improved server-side console replay/logging, clearer error signaling, and robust end‑of‑stream handling.
* **Tests**
  * Added tests validating length‑prefixed parsing, including UTF‑8 multibyte content.

<!-- end of auto-generated comment: release notes by coderabbit.ai -->

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Justin Gordon <justin@shakacode.com>
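The chunk layout above is simple enough to exercise end to end. Below is a minimal encoder/decoder sketch in plain Node.js: it follows the `<metadata JSON>\t<hex length>\n<raw bytes>` layout and the 8-hex-digit padding seen in the example, but the function names are assumptions for illustration, not the shipped `streamingUtils.ts` or Ruby parser:

```javascript
// Encode one chunk: metadata JSON + "\t" + 8-hex-digit byte length + "\n" + raw content.
function encodeChunk(metadata, content) {
  const contentBuf = Buffer.from(content, 'utf8');
  // Byte length (not character length), so multibyte UTF-8 content is handled correctly.
  const lenHex = contentBuf.length.toString(16).toUpperCase().padStart(8, '0');
  return Buffer.concat([
    Buffer.from(`${JSON.stringify(metadata)}\t${lenHex}\n`, 'utf8'),
    contentBuf,
  ]);
}

// Decode a buffer containing one or more concatenated chunks.
function decodeChunks(buf) {
  const chunks = [];
  let offset = 0;
  while (offset < buf.length) {
    // Header line runs to the first '\n'; JSON.stringify escapes any '\n' in metadata,
    // so the newline can only be the header terminator.
    const newlineIdx = buf.indexOf(0x0a, offset);
    const header = buf.subarray(offset, newlineIdx).toString('utf8');
    const tabIdx = header.indexOf('\t');
    const metadata = JSON.parse(header.slice(0, tabIdx));
    const contentLen = parseInt(header.slice(tabIdx + 1), 16);
    const start = newlineIdx + 1;
    // Raw bytes are read by length, never scanned, so quotes and newlines cost nothing.
    const content = buf.subarray(start, start + contentLen).toString('utf8');
    chunks.push({ metadata, content });
    offset = start + contentLen;
  }
  return chunks;
}

const wire = encodeChunk({ isShellReady: true }, '<div>"quotes" & \n newlines cost nothing</div>');
console.log(decodeChunks(wire)[0].content); // raw bytes survive round-trip unescaped
```

Note the decoder never inspects the content bytes, which is where the ~30% escaping overhead of the NDJSON envelope disappears.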
…moval

The rebase onto main replaced the local FastifyRequest import with performRequestPrechecks, making the `@typescript-eslint/no-shadow` suppression on the `declare module` block unnecessary.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Code Review —
```typescript
  );
}
const lenHex = decoder.decode(header.subarray(tabIdx + 1));
this.contentLen = parseInt(lenHex, 16);
```
No upper-bound validation on contentLen — potential DoS / memory exhaustion.
parseInt(lenHex, 16) can return any non-negative integer. A malformed or compromised renderer can claim contentLen = 0xFFFFFFFF (4 GB), causing the parser to buffer indefinitely waiting for bytes that never arrive while consuming memory.
Suggested fix — add a bounds check immediately after parsing:
```diff
 this.contentLen = parseInt(lenHex, 16);
+if (Number.isNaN(this.contentLen)) {
+  throw new Error(`Invalid content length hex: ${JSON.stringify(lenHex)}`);
+}
+// Guard against absurdly large payloads (same ceiling used elsewhere in the stack)
+const MAX_CONTENT_LEN = 100 * 1024 * 1024; // 100 MB
+if (this.contentLen > MAX_CONTENT_LEN) {
+  throw new Error(
+    `Content length ${this.contentLen} exceeds maximum allowed size of ${MAX_CONTENT_LEN} bytes`,
+  );
+}
```
The Ruby LengthPrefixedParser (react_on_rails/lib/react_on_rails/length_prefixed_parser.rb:102) has the same gap and should receive the same treatment.
```typescript
onResponseStartPromise = Promise.resolve(onResponseStart(response));

if (!continueFlag) {
  return;
```
onResponseStartPromise is not awaited on the early-return path — risk of unhandled promise rejection.
onResponseStart is an async function (setResponse is async). Wrapping it in Promise.resolve(...) starts the async work but does not wait for it. When !continueFlag the function returns immediately, so onResponseStartPromise is never awaited and any rejection becomes an unhandled promise rejection.
The finally block at line 193–195 is a no-op here (just a log), and lines 198–199 are never reached because of the return.
Simplest fix — await before returning:
```diff
 onResponseStartPromise = Promise.resolve(onResponseStart(response));
 if (!continueFlag) {
+  await onResponseStartPromise;
   return;
 }
```
Alternatively, move await onResponseStartPromise into the finally block so it covers every exit path.
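Either variant works because the promise then gets awaited on every exit path. A runnable sketch of the capture-and-await-in-`finally` pattern (function names are illustrative, not the project's code):

```javascript
// Illustrative only: why capturing and awaiting the started promise matters.
let responseFullySent = false;

async function onResponseStart() {
  // Simulates the async response flush (e.g. setResponse finishing).
  await new Promise((resolve) => setTimeout(resolve, 10));
  responseFullySent = true;
}

// Before: fire-and-forget. The handler can return while the flush is still
// in flight, and any rejection becomes an unhandled promise rejection.
function handlerBefore() {
  void onResponseStart();
  return 'returned (flush may still be pending)';
}

// After: capture the promise once, then await it in `finally`,
// which covers the early-return path as well as the normal one.
async function handlerAfter() {
  const onResponseStartPromise = onResponseStart();
  try {
    return 'returned';
  } finally {
    await onResponseStartPromise;
  }
}

handlerAfter().then(() => console.log('flushed?', responseFullySent)); // flushed? true
```

The key detail is that `finally` runs before the returned promise settles, so callers of `handlerAfter()` only observe completion after the flush has finished.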
```diff
 app.addHook('onResponse', (_req, _reply, done) => {
-  activeRequestsCount -= 1;
-  if (isShuttingDown && activeRequestsCount === 0) {
-    log.debug('Worker #%d served all active requests and going to be killed', worker.id);
-    worker.destroy();
-  }
+  decrementAndMaybeShutdown('onResponse');
   done();
 });
+
+// Handle client abort - onResponse is NOT called when client disconnects
+app.addHook('onRequestAbort', (_req, done) => {
+  log.debug('Worker #%d: request aborted by client', worker.id);
+  decrementAndMaybeShutdown('onRequestAbort');
+  done();
+});
+
+// Handle request timeout - onResponse is NOT called when request times out
+app.addHook('onTimeout', (_req, _reply, done) => {
+  log.debug('Worker #%d: request timed out', worker.id);
+  decrementAndMaybeShutdown('onTimeout');
+  done();
+});
```
Potential double-decrement of activeRequestsCount when res.raw.destroy() is called mid-stream.
The comment "onResponse is NOT called when client disconnects / times out" holds for normal Fastify flows. However, the incremental render error handler in worker.ts calls res.raw.destroy() explicitly after streaming has already started. The resulting socket-close event may trigger onRequestAbort even though no earlier hook has decremented the counter yet — or, on some Node.js versions, it triggers nothing and the counter leaks in the other direction.
Consider making decrementAndMaybeShutdown idempotent per request by passing the request object and tracking which requests have already been decremented:
```typescript
// Inside the hook closures, guard with a per-request Set:
const completedRequests = new WeakSet<object>();
const decrementAndMaybeShutdown = (req: object, context: string) => {
  if (completedRequests.has(req)) return;
  completedRequests.add(req);
  activeRequestsCount -= 1;
  // ...
};
```

This prevents `activeRequestsCount` from ever going negative, which would otherwise cause `worker.destroy()` to be skipped permanently during shutdown.
```diff
@@ -0,0 +1,42 @@
+# frozen_string_literal: true
```
Monkey-patch has no version guard — silent breakage risk on HTTPX upgrades.
This patch works around a specific HTTPX retry bug (HoneyryderChuck/httpx#124). Without a version pin, two failure modes exist:

- HTTPX ships a fix that changes the internal state machine — the patch resets `@headers_sent` at a point that is now wrong, causing regressions.
- HTTPX renames/removes `RequestMethods` or the `:idle` transition — the patch silently does nothing, and the original bug resurfaces.
Consider adding a guard:
```ruby
HTTPX_BIDI_PATCH_MAX_VERSION = "0.x.y" # last version confirmed to need this patch

if Gem::Version.new(HTTPX::VERSION) <= Gem::Version.new(HTTPX_BIDI_PATCH_MAX_VERSION)
  # apply patch
end
```

And add a `TODO: remove when HTTPX > x.y.z is the minimum required version` comment referencing the upstream issue/PR.
```ruby
    "Original error:\n#{e}\n#{e.backtrace}"
  end

  barrier.wait
```
barrier.wait can block the thread indefinitely.
barrier.wait (no timeout argument) suspends until every fiber spawned inside the barrier finishes. For async-props, those fibers call user-supplied Rails code (DB queries, API calls, etc.). If that code:
- Acquires a mutex already held by another thread
- Blocks on I/O without an async-compatible adapter
- Raises an exception that the fiber silently swallows
…the barrier never unblocks, the HTTP/2 connection stays open, and the Rails thread is stuck.
Add a deadline so the request can fail fast:

```ruby
barrier.wait(timeout: ReactOnRailsPro.configuration.rendering_timeout)
```

(If `rendering_timeout` isn't already surfaced in config, this is a good prompt to add it.)
```ruby
chunk.split("\n").each do |line|
  stripped = line.strip
  yield stripped unless stripped.empty?
end
```
Per-chunk split("\n") loses data when a JSON object straddles two TCP chunks.
Example: if chunk 1 ends with {"prop":"va and chunk 2 starts with lue"}\n, the second half arrives as a new chunk whose first line lue"} is not valid JSON and will be silently dropped after stripped.empty? passes (it's not empty, but JSON.parse would fail).
This only affects the non-length-prefixed legacy code path, but it's the same class of bug the new length-prefixed protocol was designed to eliminate. A line buffer fixes it:
```ruby
def process_line_based_chunks(stream_response, error_body)
  line_buf = +""
  stream_response.each do |chunk|
    stream_response.instance_variable_set(:@react_on_rails_received_first_chunk, true)
    if response_has_error_status?(stream_response)
      error_body << chunk
      next
    end
    line_buf << chunk
    while (newline_idx = line_buf.index("\n"))
      line = line_buf.slice!(0, newline_idx + 1).strip
      yield line unless line.empty?
    end
  end
end
```
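The same line-buffering idea is easy to verify in isolation. Here is a runnable JavaScript sketch of the pattern (illustrative only; the actual fix is the Ruby version above):

```javascript
// Accumulate partial lines across chunk boundaries; emit only complete lines.
function makeLineSplitter(onLine) {
  let lineBuf = '';
  return (chunk) => {
    lineBuf += chunk;
    let newlineIdx;
    while ((newlineIdx = lineBuf.indexOf('\n')) !== -1) {
      const line = lineBuf.slice(0, newlineIdx).trim();
      lineBuf = lineBuf.slice(newlineIdx + 1);
      if (line) onLine(line); // skip blank lines, never drop partial ones
    }
  };
}

const lines = [];
const feed = makeLineSplitter((l) => lines.push(l));

// A JSON object straddling two TCP chunks is reassembled, not dropped.
feed('{"prop":"va');
feed('lue"}\n{"next":1}\n');
console.log(lines); // [ '{"prop":"value"}', '{"next":1}' ]
```

The per-chunk `split("\n")` approach would have handed `lue"}` to the JSON parser as its own line; the buffer makes chunk boundaries invisible to the consumer.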
## Code Review: Incremental Streaming and NDJSON Pipeline

This is a substantial PR adding incremental HTML streaming with async props and a bidirectional NDJSON update protocol. The architecture is well-structured. I found a few bugs and security/robustness issues worth addressing before merge.

**Bug: response never sent when `shouldContinue` is false**

In `handleIncrementalRenderStream.ts`, when the initial render request callback returns `shouldContinue: false` (e.g. auth failure, protocol mismatch), the function calls `return` on line 163 immediately after setting `onResponseStartPromise`. The `await onResponseStartPromise` at line 199 is outside the try/finally block and is never reached. `onResponseStart` is an async function - calling it only starts it. The `await setResponse(response, res)` inside it may not complete before the Fastify handler returns, resulting in the error response never being sent to the client.

**Bug: counter underflow in graceful shutdown**

In `handleGracefulShutdown.ts`, the new `onTimeout` hook decrements `activeRequestsCount` - but Fastify timeout handling also triggers `onResponse` after sending the 408 reply. This means both `onTimeout` and `onResponse` fire for the same timed-out request, decrementing the counter twice and potentially driving it negative.

**Security/robustness: no upper bound on `contentLen` in `LengthPrefixedStreamParser`**

`parseLengthPrefixedStream.ts` parses an arbitrary hex integer from the header as `contentLen` with no maximum validation. A malformed or adversarial stream could specify a multi-gigabyte length, causing the parser to buffer indefinitely. A sanity cap (e.g. matching `FIELD_SIZE_LIMIT`) should be checked after `parseInt`.

**Minor: full body including password serialised into protocol-version error**

`checkProtocolVersionHandler.ts` line 54 serialises the entire request body into the error response data when `protocolVersion` is missing. This includes the `password` field from `AuthBody`, which would be visible in client error responses and server logs. Consider stripping sensitive fields before `JSON.stringify`. This pre-existed the refactor, but the body type now explicitly includes `password` as an optional string.
```typescript
onResponseStartPromise = Promise.resolve(onResponseStart(response));

if (!continueFlag) {
  return;
```
Bug: onResponseStartPromise never awaited on this early-return path.
onResponseStart(response) starts the async response flush (line 160), but await onResponseStartPromise at line 199 is outside the try/finally block and is skipped when return is hit here. The Fastify handler can finish before setResponse completes, leaving the error response (auth failure, protocol mismatch, etc.) unsent.
Fix: await the promise before returning, or move the await into the finally block:
```diff
 if (!continueFlag) {
+  await onResponseStartPromise;
   return;
```
```typescript
});

// Handle request timeout - onResponse is NOT called when request times out
app.addHook('onTimeout', (_req, _reply, done) => {
```
Bug: double-decrement on request timeout — counter can go negative.
Fastify's timeout handling works like this: when a request times out, onTimeout fires, then Fastify sends a 408 reply, which triggers onResponse too. Both hooks call decrementAndMaybeShutdown, so activeRequestsCount is decremented twice for the same request.
This can drive the counter negative, meaning the worker never shuts down (the === 0 check is never satisfied) and the counter is permanently wrong for all subsequent requests.
One fix is to guard the onTimeout decrement with a flag per-request, or simply remove the onTimeout hook and rely on onResponse (which Fastify does call after the 408 is sent). The onRequestAbort hook is correct because Fastify does not call onResponse for aborted requests.
```typescript
this.contentLen = parseInt(lenHex, 16);
if (Number.isNaN(this.contentLen)) {
```
Robustness: no upper bound on contentLen.
parseInt(lenHex, 16) accepts arbitrarily large values. A malformed or adversarial header specifying e.g. ffffffff (4 GB) would cause the parser to accumulate data in this.buf indefinitely without ever emitting a chunk, silently exhausting memory.
Add a size guard after the NaN check:
```diff
 this.contentLen = parseInt(lenHex, 16);
 if (Number.isNaN(this.contentLen)) {
   throw new Error(`Invalid content length hex: ${JSON.stringify(lenHex)}`);
 }
+const MAX_CONTENT_LEN = 10 * 1024 * 1024; // 10 MB, matching FIELD_SIZE_LIMIT
+if (this.contentLen > MAX_CONTENT_LEN) {
+  throw new Error(`Content length ${this.contentLen} exceeds maximum allowed ${MAX_CONTENT_LEN} bytes`);
+}
```
```diff
 reqProtocolVersion
   ? `request protocol ${reqProtocolVersion}`
-  : `MISSING with body ${JSON.stringify(req.body)}`
+  : `MISSING with body ${JSON.stringify(body)}`
```
Minor security note: password field may appear in error responses and logs.
body here is typed as RequestPrechecksBody which extends AuthBody { password?: string }. When protocolVersion is absent, JSON.stringify(body) will include the plaintext password in the error response data returned to the caller and in any server-side logging that captures response bodies.
Consider sanitising before serialising:
```diff
-  : `MISSING with body ${JSON.stringify(body)}`
+  : `MISSING with body ${JSON.stringify({ ...body, password: body.password ? '[REDACTED]' : undefined })}`
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 2ed2b346b5
```ruby
def self.parse_one_chunk_result(str)
  parser = new
  results = []
  parser.feed(str.to_s.b) { |chunk| results << chunk }
```
Validate parser buffer before returning one-chunk results
parse_one_chunk_result returns the first parsed chunk after feed without verifying that the input was fully consumed. A payload like "<valid chunk><trailing garbage>" (or a truncated second chunk header) will still be accepted as a valid single-chunk response, so corrupted renderer output can silently become partial HTML instead of raising. This is a protocol-regression risk compared with strict parsing; add an end-of-input validation step that errors on leftover buffer or non-header parser state.
```ruby
)

# Create emitter - it will write NDJSON lines to the request stream
emitter = ReactOnRailsPro::AsyncPropsEmitter.new(pool.rsc_bundle_hash, request)
```
Guard async-props incremental path when RSC is disabled
render_code_with_incremental_updates always builds AsyncPropsEmitter with pool.rsc_bundle_hash, but this path is reachable from stream_react_component_with_async_props without checking enable_rsc_support. With the default enable_rsc_support = false, this forces lookup of the RSC bundle hash/file and can raise before any request is sent in apps that do not build an RSC bundle. Add an explicit guard (clear error) or derive the emitter bundle from the active render bundle when RSC is off.
…ndpoint

Three test fixes for API changes from main (PR #2768):

1. `vm.test.ts`: Replace direct `buildVM()` calls with `buildExecutionContext()` since `buildVM` is no longer exported (now a private function).
2. `worker.test.ts`: Update empty upload-assets test to expect 400 status since the endpoint now requires at least one `bundle_<hash>` field.
3. `worker.test.ts`: Update targetBundles directory test since the endpoint no longer creates directories for targetBundles (only `bundle_<hash>` fields determine directory placement).

Also update the `buildVM` spy to use `buildExecutionContext`, which is the exported function.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 3
♻️ Duplicate comments (4)
packages/react-on-rails-pro-node-renderer/src/worker.ts (1)
**386-392: ⚠️ Potential issue | 🟠 Major: Don't log raw incremental update bodies at info level.**

`JSON.stringify(obj)` serializes the full update chunk payload, which can contain arbitrary async-prop data. This risks leaking user data and inflating log volume on busy streams.

🔧 Suggested fix: log minimal metadata instead

```diff
- log.info(`Received a new update chunk ${JSON.stringify(obj)}`);
+ log.debug({ msg: 'Received update chunk', hasObj: !!obj });
```

packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts (2)
**686-693: ⚠️ Potential issue | 🟠 Major: Same stream assertion issue in echo server test.**

This test also assumes 1:1 correspondence between response chunks and `streamedData` entries. Apply the same fix: concatenate and compare the full content.
**592-599: ⚠️ Potential issue | 🟠 Major: Avoid asserting 1:1 correspondence between stream writes and HTTP data events.**

TCP/HTTP can coalesce or split writes arbitrarily. The assertion `expect(response.streamedData).toHaveLength(responseChunks.length)` assumes each `responseStream.push()` results in exactly one HTTP data event, which is unreliable.

♻️ Suggested fix: validate concatenated output

```diff
- // Verify that we received all the streamed chunks
- expect(response.streamedData).toHaveLength(responseChunks.length);
-
- // Verify that each chunk was received in order
- responseChunks.forEach((expectedChunk, index) => {
-   const receivedChunk = response.streamedData[index];
-   expect(receivedChunk).toEqual(expectedChunk);
- });
+ // Verify that we received all the streamed content (order preserved, but may be coalesced)
+ const receivedContent = response.streamedData.join('');
+ const expectedContent = responseChunks.join('');
+ expect(receivedContent).toEqual(expectedContent);
```

packages/react-on-rails-pro/src/getReactServerComponent.client.ts (1)
**58-63: ⚠️ Potential issue | 🟠 Major: Reject non-2xx RSC responses before starting the Flight parser.**

`fetch()` resolves on 4xx/5xx. Right now those error bodies are fed straight into `LengthPrefixedStreamParser`, which hides the real HTTP failure behind a framing/Flight parse error.

🔧 Proposed fix

```diff
 const createFromFetch = async (fetchPromise: Promise<Response>, cspNonce?: string) => {
   const response = await fetchPromise;
+  if (!response.ok) {
+    const errorBody = await response.text();
+    throw new Error(`RSC request failed with ${response.status}: ${errorBody}`);
+  }
   const { body } = response;
   if (!body) {
     throw new Error('No stream found in response');
   }
```
🧹 Nitpick comments (3)
packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts (1)
21-26: Module-level directory creation is a side effect during import.

Creating directories at module load time (`if (!fs.existsSync(BUNDLE_PATH))`) can cause issues if the test file is imported but tests aren't run, or during parallel test execution. Move this into `beforeAll`.

♻️ Suggested refactor

  describe('incremental render NDJSON endpoint', () => {
    const TEST_NAME = 'incrementalRender';
    const BUNDLE_PATH = path.join(__dirname, 'tmp', TEST_NAME);
-   if (!fs.existsSync(BUNDLE_PATH)) {
-     fs.mkdirSync(BUNDLE_PATH, { recursive: true });
-   }
    const app = worker({
-     bundlePath: BUNDLE_PATH,
+     serverBundleCachePath: BUNDLE_PATH,
      password: 'myPassword1',
      logHttpLevel: 'silent' as const,
      supportModules: true,
    });
+   beforeAll(async () => {
+     if (!fs.existsSync(BUNDLE_PATH)) {
+       fs.mkdirSync(BUNDLE_PATH, { recursive: true });
+     }
+     await app.ready();
+     await app.listen({ port: 0 });
+   });

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts` around lines 21 - 26, The module currently creates the test directory at import time using TEST_NAME and BUNDLE_PATH, causing side effects; move that logic into the test lifecycle by removing the fs.existsSync/mkdirSync block from the top-level and instead create the directory inside a beforeAll() within the describe('incremental render NDJSON endpoint', ...) block (and optionally clean up in afterAll()), referencing TEST_NAME and BUNDLE_PATH to build the path and ensure the directory is created before tests run to avoid import-time side effects and parallel-run races.

packages/react-on-rails-pro-node-renderer/tests/helper.ts (1)
189-225: `waitFor` silently passes when an async `expectFn` is provided.

If someone passes an async function, `expectFn()` returns a Promise immediately without throwing. The `try/catch` won't catch the rejection, and the function returns successfully, causing false positives in tests.

Consider supporting async functions by awaiting the result:
Proposed fix to support async expectFn
  export const waitFor = async (
-   expectFn: () => void,
+   expectFn: () => void | Promise<void>,
    options: {
      timeout?: number;
      interval?: number;
      message?: string;
    } = {},
  ): Promise<void> => {
    const { timeout = 1000, interval = 10, message } = options;
    const startTime = Date.now();
    let lastError: Error | null = null;
    while (Date.now() - startTime < timeout) {
      try {
-       expectFn();
+       await expectFn();
        // If we get here, the expect passed, so we can return
        return;
      } catch (error) {
        lastError = error as Error;
-       // Expect failed, continue retrying
-       if (Date.now() - startTime >= timeout) {
-         // Timeout reached, re-throw the last error
-         throw error;
-       }
+       // Expect failed, continue retrying after interval
      }
      // Wait before next retry
      // eslint-disable-next-line no-await-in-loop
      await new Promise<void>((resolve) => {
        setTimeout(resolve, interval);
      });
    }
    // Timeout reached, throw error with descriptive message
    const defaultMessage = `Expect condition not met within ${timeout}ms`;
    throw new Error(message || defaultMessage + (lastError ? `\nLast error: ${lastError.message}` : ''));
  };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/helper.ts` around lines 189 - 225, The waitFor helper currently calls expectFn() but does not await it, so async expectFn returns a Promise and false positives occur; update waitFor to await the result of expectFn (e.g., await expectFn() or await Promise.resolve(expectFn())) inside the try block so thrown or rejected errors are caught by the catch handler, preserve the lastError assignment and the timeout/rethrow logic (using lastError when timing out) in the same function (waitFor) so async and sync expectation functions are both supported.

react_on_rails/lib/react_on_rails/length_prefixed_parser.rb (1)
66-70: Consider whether silent discard in `flush` is the right behavior.

The `flush` method silently discards incomplete data. While the comment explains that connection errors surface via HTTPX error handling, this could mask protocol-level truncation issues. Consider logging a warning or raising in development mode if `@buf` is non-empty.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@react_on_rails/lib/react_on_rails/length_prefixed_parser.rb` around lines 66 - 70, The current flush method in LengthPrefixedParser silently discards any incomplete buffer (`@buf`) on stream end; update the flush method to detect a non-empty `@buf` and either log a warning (using the project's logger) or raise an exception in development/test environments to surface protocol truncation issues, while keeping the existing no-op behavior in production. Locate the flush method in the LengthPrefixedParser class and: check `@buf` presence, call the shared logger to warn with context (e.g., "LengthPrefixedParser: incomplete data discarded") or raise a descriptive error when Rails.env.development? / test, and ensure behavior is safe for production (no exception thrown).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/react-on-rails-pro-node-renderer/tests/helper.ts`:
- Around line 68-78: The comments in createVmBundle and createSecondaryVmBundle
incorrectly claim a "Build config with module support before creating VM bundle"
even though no buildConfig call occurs; update those comment lines (above
vmBundlePath and vmSecondaryBundlePath operations) to accurately describe the
actions (e.g., "Copy fixture bundle and build execution context" or similar) or,
if module support is intended, add an explicit buildConfig call with
supportModules: true before calling buildExecutionContext; reference the
createVmBundle and createSecondaryVmBundle functions when making the change.
In `@packages/react-on-rails-pro/src/injectRSCPayload.ts`:
- Around line 269-272: The server currently double-wraps
metadata.consoleReplayScript (produced by
buildConsoleReplay()/wrapInScriptTags()) by passing it into createScriptTag(),
producing nested <script> tags; update the injectRSCPayload code to detect and
strip an existing outer <script> wrapper from metadata.consoleReplayScript
before pushing into rscPayloadBuffers (or avoid calling createScriptTag when the
string is already wrapped), referencing metadata.consoleReplayScript and
createScriptTag to locate the change, and add a regression test in
injectRSCPayload.test.ts that supplies a non-empty consoleReplayScript (actual
console code wrapped in <script>...</script>) to assert the emitted buffer
contains a single script tag with the inner code (client fetch path already
expects stripping).
In `@packages/react-on-rails-pro/tests/testUtils.ts`:
- Around line 12-13: Change the pendingChunks and pushFn types to match Node.js
Readable.push() expectations: introduce a ReadableChunk type
(Buffer|string|Uint8Array|null), make pendingChunks: ReadableChunk[] and pushFn:
((chunk: ReadableChunk) => boolean) | undefined (since it's only set inside
read()), and in the code path that shifts from pendingChunks normalize undefined
to null before calling stream.push or pushFn; also update the JSDoc to state
that null (not undefined) terminates the stream and reflect the new types
(referencing pendingChunks, pushFn, read(), and the public push API).
---
Duplicate comments:
In `@packages/react-on-rails-pro-node-renderer/src/worker.ts`:
- Around line 386-392: The current log.info call in the worker (around the
incrementalSink.add usage) prints the entire update chunk via
JSON.stringify(obj), risking sensitive data leakage and high log volume; change
the log to avoid serializing the full obj and instead log only minimal, safe
metadata (e.g., obj.type, obj.streamId or a small set of non-sensitive fields or
a helper like getUpdateMetadata(obj)) before calling incrementalSink.add, and
remove or redact any raw payload fields from logs while keeping the error catch
logging of err unchanged.
In `@packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts`:
- Around line 686-693: The test incorrectly asserts a 1:1 mapping between
chunksToSend and response.streamedData; change it to concatenate and compare the
full stream content instead: remove the
expect(response.streamedData).toHaveLength(...) and the forEach loop, build the
expected string by mapping chunksToSend to `processed ${JSON.stringify(chunk)}`
and joining them (e.g., expectedFull = mapped.join('')), then assert that
response.streamedData.join('') (or the appropriate join) equals expectedFull;
reference variables: chunksToSend, response.streamedData,
expectedEcho/receivedEcho to locate and replace the length and per-chunk
assertions.
- Around line 592-599: The test currently asserts a 1:1 correspondence between
writes and HTTP data events (using response.streamedData length and per-index
equality against responseChunks); instead, remove the length and per-chunk order
assertions and validate the effective stream payload by concatenating the
expected chunks (responseChunks) into one string/Buffer and concatenating the
received pieces (response.streamedData) into one string/Buffer, then assert
those two concatenated values are equal; reference response.streamedData,
responseChunks, and any code that pushes into the response stream (e.g.,
responseStream.push) when making this change.
In `@packages/react-on-rails-pro/src/getReactServerComponent.client.ts`:
- Around line 58-63: In createFromFetch, reject non-2xx HTTP responses before
handing the body/stream to the Flight parser: after awaiting fetchPromise and
before using response.body or instantiating LengthPrefixedStreamParser, check
response.ok and if false read the error payload (e.g., await response.text())
and throw a descriptive Error containing response.status, response.statusText
and the response body; then continue with the existing logic (using cspNonce and
the LengthPrefixedStreamParser) only for successful responses.
---
Nitpick comments:
In `@packages/react-on-rails-pro-node-renderer/tests/helper.ts`:
- Around line 189-225: The waitFor helper currently calls expectFn() but does
not await it, so async expectFn returns a Promise and false positives occur;
update waitFor to await the result of expectFn (e.g., await expectFn() or await
Promise.resolve(expectFn())) inside the try block so thrown or rejected errors
are caught by the catch handler, preserve the lastError assignment and the
timeout/rethrow logic (using lastError when timing out) in the same function
(waitFor) so async and sync expectation functions are both supported.
In `@packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts`:
- Around line 21-26: The module currently creates the test directory at import
time using TEST_NAME and BUNDLE_PATH, causing side effects; move that logic into
the test lifecycle by removing the fs.existsSync/mkdirSync block from the
top-level and instead create the directory inside a beforeAll() within the
describe('incremental render NDJSON endpoint', ...) block (and optionally clean
up in afterAll()), referencing TEST_NAME and BUNDLE_PATH to build the path and
ensure the directory is created before tests run to avoid import-time side
effects and parallel-run races.
In `@react_on_rails/lib/react_on_rails/length_prefixed_parser.rb`:
- Around line 66-70: The current flush method in LengthPrefixedParser silently
discards any incomplete buffer (`@buf`) on stream end; update the flush method to
detect a non-empty `@buf` and either log a warning (using the project's logger) or
raise an exception in development/test environments to surface protocol
truncation issues, while keeping the existing no-op behavior in production.
Locate the flush method in the LengthPrefixedParser class and: check `@buf`
presence, call the shared logger to warn with context (e.g.,
"LengthPrefixedParser: incomplete data discarded") or raise a descriptive error
when Rails.env.development? / test, and ensure behavior is safe for production
(no exception thrown).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5b74c52c-33d8-4fcd-9c8b-e12e9a38b01f
📒 Files selected for processing (58)
- CHANGELOG.md
- packages/react-on-rails-pro-node-renderer/src/shared/constants.ts
- packages/react-on-rails-pro-node-renderer/src/shared/utils.ts
- packages/react-on-rails-pro-node-renderer/src/worker.ts
- packages/react-on-rails-pro-node-renderer/src/worker/authHandler.ts
- packages/react-on-rails-pro-node-renderer/src/worker/checkProtocolVersionHandler.ts
- packages/react-on-rails-pro-node-renderer/src/worker/handleGracefulShutdown.ts
- packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderRequest.ts
- packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderStream.ts
- packages/react-on-rails-pro-node-renderer/src/worker/handleRenderRequest.ts
- packages/react-on-rails-pro-node-renderer/src/worker/requestPrechecks.ts
- packages/react-on-rails-pro-node-renderer/src/worker/types.ts
- packages/react-on-rails-pro-node-renderer/src/worker/vm.ts
- packages/react-on-rails-pro-node-renderer/tests/concurrentHtmlStreaming.test.ts
- packages/react-on-rails-pro-node-renderer/tests/fixtures/bundle-incremental.js
- packages/react-on-rails-pro-node-renderer/tests/fixtures/projects/spec-dummy/asyncComponentsTreeForTestingRenderingRequest.js
- packages/react-on-rails-pro-node-renderer/tests/fixtures/secondary-bundle-incremental.js
- packages/react-on-rails-pro-node-renderer/tests/handleIncrementalRenderStream.test.ts
- packages/react-on-rails-pro-node-renderer/tests/handleRenderRequest.test.ts
- packages/react-on-rails-pro-node-renderer/tests/helper.ts
- packages/react-on-rails-pro-node-renderer/tests/htmlStreaming.test.js
- packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts
- packages/react-on-rails-pro-node-renderer/tests/incrementalHtmlStreaming.test.ts
- packages/react-on-rails-pro-node-renderer/tests/incrementalRender.test.ts
- packages/react-on-rails-pro-node-renderer/tests/parseLengthPrefixedStream.ts
- packages/react-on-rails-pro-node-renderer/tests/serverRenderRSCReactComponent.test.js
- packages/react-on-rails-pro-node-renderer/tests/streamErrorHang.test.ts
- packages/react-on-rails-pro-node-renderer/tests/vm.test.ts
- packages/react-on-rails-pro-node-renderer/tests/worker.test.ts
- packages/react-on-rails-pro/src/AsyncPropsManager.ts
- packages/react-on-rails-pro/src/RSCRequestTracker.ts
- packages/react-on-rails-pro/src/ReactOnRailsRSC.ts
- packages/react-on-rails-pro/src/createReactOnRailsPro.ts
- packages/react-on-rails-pro/src/getReactServerComponent.client.ts
- packages/react-on-rails-pro/src/injectRSCPayload.ts
- packages/react-on-rails-pro/src/parseLengthPrefixedStream.ts
- packages/react-on-rails-pro/src/streamingUtils.ts
- packages/react-on-rails-pro/src/transformRSCNodeStream.ts
- packages/react-on-rails-pro/src/transformRSCStreamAndReplayConsoleLogs.ts
- packages/react-on-rails-pro/tests/AsyncPropManager.test.ts
- packages/react-on-rails-pro/tests/injectRSCPayload.test.ts
- packages/react-on-rails-pro/tests/streamServerRenderedReactComponent.test.jsx
- packages/react-on-rails-pro/tests/testUtils.ts
- packages/react-on-rails-pro/tests/transformRSCStreamAndReplayConsoleLogs.test.ts
- packages/react-on-rails-pro/tests/utils/removeRSCChunkStack.ts
- packages/react-on-rails/src/base/client.ts
- packages/react-on-rails/src/base/full.rsc.ts
- packages/react-on-rails/src/base/full.ts
- packages/react-on-rails/src/createReactOnRails.ts
- packages/react-on-rails/src/serverRenderReactComponent.ts
- packages/react-on-rails/src/serverRenderUtils.ts
- packages/react-on-rails/src/types/index.ts
- packages/react-on-rails/tests/serverRenderReactComponent.test.ts
- react_on_rails/lib/react_on_rails/helper.rb
- react_on_rails/lib/react_on_rails/length_prefixed_parser.rb
- react_on_rails/lib/react_on_rails/server_rendering_pool/ruby_embedded_java_script.rb
- react_on_rails/spec/dummy/spec/helpers/react_on_rails_helper_spec.rb
- react_on_rails/spec/dummy/spec/requests/server_render_check_spec.rb
💤 Files with no reviewable changes (3)
- packages/react-on-rails-pro-node-renderer/src/worker/types.ts
- packages/react-on-rails-pro/tests/transformRSCStreamAndReplayConsoleLogs.test.ts
- packages/react-on-rails-pro/src/transformRSCStreamAndReplayConsoleLogs.ts
✅ Files skipped from review due to trivial changes (5)
- packages/react-on-rails-pro-node-renderer/src/shared/constants.ts
- packages/react-on-rails-pro-node-renderer/src/worker/requestPrechecks.ts
- packages/react-on-rails-pro-node-renderer/tests/parseLengthPrefixedStream.ts
- packages/react-on-rails-pro-node-renderer/tests/incrementalHtmlStreaming.test.ts
- react_on_rails/lib/react_on_rails/server_rendering_pool/ruby_embedded_java_script.rb
🚧 Files skipped from review as they are similar to previous changes (23)
- packages/react-on-rails-pro-node-renderer/tests/handleRenderRequest.test.ts
- packages/react-on-rails-pro-node-renderer/src/worker/handleGracefulShutdown.ts
- react_on_rails/spec/dummy/spec/requests/server_render_check_spec.rb
- CHANGELOG.md
- packages/react-on-rails-pro/src/transformRSCNodeStream.ts
- packages/react-on-rails/src/base/client.ts
- packages/react-on-rails/src/base/full.rsc.ts
- packages/react-on-rails-pro-node-renderer/tests/handleIncrementalRenderStream.test.ts
- packages/react-on-rails-pro-node-renderer/tests/fixtures/bundle-incremental.js
- packages/react-on-rails-pro-node-renderer/src/shared/utils.ts
- packages/react-on-rails/src/createReactOnRails.ts
- packages/react-on-rails-pro-node-renderer/tests/fixtures/secondary-bundle-incremental.js
- packages/react-on-rails-pro/tests/streamServerRenderedReactComponent.test.jsx
- packages/react-on-rails-pro/tests/utils/removeRSCChunkStack.ts
- packages/react-on-rails-pro-node-renderer/src/worker/handleIncrementalRenderStream.ts
- packages/react-on-rails-pro/tests/AsyncPropManager.test.ts
- packages/react-on-rails/src/types/index.ts
- packages/react-on-rails-pro-node-renderer/tests/httpRequestUtils.ts
- packages/react-on-rails-pro/src/createReactOnRailsPro.ts
- packages/react-on-rails-pro-node-renderer/tests/streamErrorHang.test.ts
- packages/react-on-rails/tests/serverRenderReactComponent.test.ts
- packages/react-on-rails-pro-node-renderer/src/worker/vm.ts
- packages/react-on-rails-pro-node-renderer/tests/vm.test.ts
  export async function createVmBundle(testName: string) {
    // Build config with module support before creating VM bundle
    await safeCopyFileAsync(getFixtureBundle(), vmBundlePath(testName));
-   return buildVM(vmBundlePath(testName));
+   await buildExecutionContext([vmBundlePath(testName)], /* buildVmsIfNeeded */ true);
  }

  export async function createSecondaryVmBundle(testName: string) {
    // Build config with module support before creating VM bundle
    await safeCopyFileAsync(getFixtureSecondaryBundle(), vmSecondaryBundlePath(testName));
-   return buildVM(vmSecondaryBundlePath(testName));
+   await buildExecutionContext([vmSecondaryBundlePath(testName)], /* buildVmsIfNeeded */ true);
  }
Misleading comments: no buildConfig call with module support here.
Lines 69 and 75 state "Build config with module support before creating VM bundle," but these functions do not call buildConfig. They rely on setConfig being called earlier (via resetForTest), which only sets serverBundleCachePath without supportModules: true. Either remove/update the comments or add the buildConfig call if module support is actually intended here.
Suggested fix: update or remove the misleading comments
export async function createVmBundle(testName: string) {
- // Build config with module support before creating VM bundle
await safeCopyFileAsync(getFixtureBundle(), vmBundlePath(testName));
await buildExecutionContext([vmBundlePath(testName)], /* buildVmsIfNeeded */ true);
}
export async function createSecondaryVmBundle(testName: string) {
- // Build config with module support before creating VM bundle
await safeCopyFileAsync(getFixtureSecondaryBundle(), vmSecondaryBundlePath(testName));
await buildExecutionContext([vmSecondaryBundlePath(testName)], /* buildVmsIfNeeded */ true);
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
export async function createVmBundle(testName: string) {
  await safeCopyFileAsync(getFixtureBundle(), vmBundlePath(testName));
  await buildExecutionContext([vmBundlePath(testName)], /* buildVmsIfNeeded */ true);
}

export async function createSecondaryVmBundle(testName: string) {
  await safeCopyFileAsync(getFixtureSecondaryBundle(), vmSecondaryBundlePath(testName));
  await buildExecutionContext([vmSecondaryBundlePath(testName)], /* buildVmsIfNeeded */ true);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro-node-renderer/tests/helper.ts` around lines 68 -
78, The comments in createVmBundle and createSecondaryVmBundle incorrectly claim
a "Build config with module support before creating VM bundle" even though no
buildConfig call occurs; update those comment lines (above vmBundlePath and
vmSecondaryBundlePath operations) to accurately describe the actions (e.g.,
"Copy fixture bundle and build execution context" or similar) or, if module
support is intended, add an explicit buildConfig call with supportModules: true
before calling buildExecutionContext; reference the createVmBundle and
createSecondaryVmBundle functions when making the change.
// Emit console replay as a separate <script> tag (not inside the payload)
const consoleScript = metadata.consoleReplayScript as string;
if (consoleScript) {
  rscPayloadBuffers.push(Buffer.from(createScriptTag(consoleScript, sanitizedNonce)));
🧩 Analysis chain
🏁 Script executed:
cd /tmp && find . -type f -name "*.ts" -o -name "*.tsx" 2>/dev/null | head -20

Repository: shakacode/react_on_rails
Length of output: 50
🏁 Script executed:
# Search for where consoleReplayScript is generated and used
rg "consoleReplayScript" --type ts -B 3 -A 3

Repository: shakacode/react_on_rails
Length of output: 14271
🏁 Script executed:
# Find the replayConsole function mentioned in the learnings
rg "replayConsole" --type ts -B 3 -A 3

Repository: shakacode/react_on_rails
Length of output: 4934
🏁 Script executed:
# Find createScriptTag implementation
rg "createScriptTag" --type ts -B 3 -A 3

Repository: shakacode/react_on_rails
Length of output: 2216
🏁 Script executed:
# Look for the getReactServerComponent.client.ts file mentioned
fd "getReactServerComponent.client" --type f

Repository: shakacode/react_on_rails
Length of output: 133
🏁 Script executed:
# Examine the injectRSCPayload.ts file around the disputed lines
fd "injectRSCPayload" --type f

Repository: shakacode/react_on_rails
Length of output: 178
🏁 Script executed:
# Find the consoleReplay function definition
rg "function consoleReplay|const consoleReplay|export.*consoleReplay" --type ts -B 2 -A 10

Repository: shakacode/react_on_rails
Length of output: 6119
🏁 Script executed:
# Also search for where it's imported from
rg "import.*consoleReplay" --type ts

Repository: shakacode/react_on_rails
Length of output: 420
🏁 Script executed:
# Look at the injectRSCPayload.ts file more closely, especially around the disputed lines
cat -n packages/react-on-rails-pro/src/injectRSCPayload.ts | sed -n '260,280p'

Repository: shakacode/react_on_rails
Length of output: 1210
🏁 Script executed:
# Check the full context of getReactServerComponent.client.ts for the replayConsole function
cat -n packages/react-on-rails-pro/src/getReactServerComponent.client.ts | grep -A 20 "const replayConsole"

Repository: shakacode/react_on_rails
Length of output: 816
🏁 Script executed:
# Check the test file for console script test coverage
cat -n packages/react-on-rails-pro/tests/injectRSCPayload.test.ts | head -100

Repository: shakacode/react_on_rails
Length of output: 4902
🏁 Script executed:
# Search for consoleReplayScript in the test file
rg "consoleReplayScript" packages/react-on-rails-pro/tests/injectRSCPayload.test.ts -B 3 -A 3

Repository: shakacode/react_on_rails
Length of output: 476
🏁 Script executed:
# Find wrapInScriptTags to confirm it adds script tags
rg "wrapInScriptTags" --type ts -B 2 -A 5

Repository: shakacode/react_on_rails
Length of output: 1446
🏁 Script executed:
# Also check escapeScript to understand what it does with already-wrapped content
rg "function escapeScript|const escapeScript|export.*escapeScript" --type ts -B 2 -A 10

Repository: shakacode/react_on_rails
Length of output: 1224
Strip the existing <script> wrapper before embedding console replay.
metadata.consoleReplayScript is already a complete <script> tag (produced by wrapInScriptTags() in buildConsoleReplay()). Wrapping it again in createScriptTag() creates <script><script>...</script></script>, which is invalid and will fail when the streamed RSC chunk carries console logs. The client-side fetch path already correctly strips this wrapper before reinjecting, so this server-side streaming path should do the same:
- const consoleScript = metadata.consoleReplayScript as string;
- if (consoleScript) {
- rscPayloadBuffers.push(Buffer.from(createScriptTag(consoleScript, sanitizedNonce)));
+ const consoleScript = (metadata.consoleReplayScript as string | undefined)?.trim() ?? '';
+ const consoleCode = consoleScript
+ .replace(/^<script[^>]*>/i, '')
+ .replace(/<\/script>$/i, '');
+ if (consoleCode) {
+ rscPayloadBuffers.push(Buffer.from(createScriptTag(consoleCode, sanitizedNonce)));
}The existing test in injectRSCPayload.test.ts mocks consoleReplayScript as empty, so add a regression test with non-empty consoleReplayScript containing actual console code.
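As a standalone illustration, the stripping logic proposed above can be sketched as follows; `stripScriptWrapper` is a hypothetical helper mirroring the suggested regexes, not the library's actual API:

```typescript
// Hypothetical helper: remove one outer <script ...>...</script> wrapper,
// leaving already-unwrapped code untouched.
function stripScriptWrapper(consoleScript: string): string {
  return consoleScript
    .trim()
    .replace(/^<script[^>]*>/i, '')
    .replace(/<\/script>$/i, '');
}

const wrapped = '<script id="consoleReplayLog">console.log("hi")</script>';
console.log(stripScriptWrapper(wrapped)); // console.log("hi")
console.log(stripScriptWrapper('console.log("x")')); // left unchanged
```

Because the second `replace` anchors at the end of the string, interior `</script>` occurrences inside the replayed code are left alone; only the single outer wrapper is removed.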
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro/src/injectRSCPayload.ts` around lines 269 - 272,
The server currently double-wraps metadata.consoleReplayScript (produced by
buildConsoleReplay()/wrapInScriptTags()) by passing it into createScriptTag(),
producing nested <script> tags; update the injectRSCPayload code to detect and
strip an existing outer <script> wrapper from metadata.consoleReplayScript
before pushing into rscPayloadBuffers (or avoid calling createScriptTag when the
string is already wrapped), referencing metadata.consoleReplayScript and
createScriptTag to locate the change, and add a regression test in
injectRSCPayload.test.ts that supplies a non-empty consoleReplayScript (actual
console code wrapped in <script>...</script>) to assert the emitted buffer
contains a single script tag with the inner code (client fetch path already
expects stripping).
const pendingChunks: unknown[] = [];
let pushFn: (chunk: unknown) => void;
🧩 Analysis chain
🌐 Web query:
In Node.js Readable streams (without objectMode), what chunk types are accepted by readable.push(), and does passing undefined end the stream or only null?
💡 Result:
In Node.js Readable streams without objectMode, readable.push accepts these chunk types: Buffer, string, Uint8Array, or null. Strings are converted to Buffers using the specified or default encoding. Passing null signals end-of-stream (EOF). Passing undefined does not end the stream; it is treated as invalid data and may cause errors or unexpected behavior, as the implementation expects specific types and explicitly handles null for ending.
Citations:
- 1: https://nodejs.org/api/stream.html
- 2: https://searchcode.com/file/287300574/node_modules/readable-stream/lib/_stream_readable.js/
- 3: https://stackoverflow.com/questions/34959339/nodejs-attempt-to-push-to-readable-throws-error-not-implemented
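The push semantics summarized in the query result can be demonstrated with a minimal, self-contained sketch (illustrative only, not the project's test helper):

```typescript
import { Readable } from 'stream';

// In non-object mode, push() accepts Buffer, string, or Uint8Array chunks;
// push(null) — and only null — signals end-of-stream.
const stream = new Readable({ read() {} });

stream.push('hello ');             // string chunk, converted to a Buffer
stream.push(Buffer.from('world')); // Buffer chunk
stream.push(null);                 // EOF marker

// With data already buffered, read() with no size argument drains the
// whole internal buffer synchronously.
const data = stream.read() as Buffer;
console.log(data.toString()); // "hello world"
```

Calling `stream.push(undefined)` at any point above would throw a type error rather than end the stream, which is exactly the hazard the review comment describes.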
🏁 Script executed:
cat -n packages/react-on-rails-pro/tests/testUtils.ts | head -40

Repository: shakacode/react_on_rails
Length of output: 1146
Constrain chunk types to match Node.js Readable API instead of accepting unknown.
The helper currently accepts arbitrary types via unknown, but Readable.push() in non-object mode only accepts Buffer | string | Uint8Array | null. Passing undefined (which Array.shift() can return) causes runtime errors, not stream termination. The JSDoc incorrectly claims undefined ends the stream.
Additionally, pushFn should be typed as optional since it's only assigned inside the read() callback. Model it as ((chunk: ReadableChunk) => boolean) | undefined and normalize undefined chunks to null when calling the public push API.
Proposed fix
export const createNodeReadableStream = () => {
- const pendingChunks: unknown[] = [];
- let pushFn: (chunk: unknown) => void;
+ type ReadableChunk = Buffer | string | Uint8Array | null;
+ const pendingChunks: ReadableChunk[] = [];
+ let pushFn: ((chunk: ReadableChunk) => boolean) | undefined;
const stream = new Readable({
read() {
pushFn = this.push.bind(this);
if (pendingChunks.length > 0) {
- pushFn(pendingChunks.shift());
+ const next = pendingChunks.shift();
+ if (next !== undefined) pushFn(next);
}
},
});
- const push = (chunk: unknown) => {
+ const push = (chunk: ReadableChunk | undefined) => {
+ const normalizedChunk: ReadableChunk = chunk ?? null;
if (pushFn) {
- pushFn(chunk);
+ pushFn(normalizedChunk);
} else {
- pendingChunks.push(chunk);
+ pendingChunks.push(normalizedChunk);
}
};📝 Committable suggestion
export const createNodeReadableStream = () => {
  type ReadableChunk = Buffer | string | Uint8Array | null;
  const pendingChunks: ReadableChunk[] = [];
  let pushFn: ((chunk: ReadableChunk) => boolean) | undefined;
  const stream = new Readable({
    read() {
      pushFn = this.push.bind(this);
      if (pendingChunks.length > 0) {
        const next = pendingChunks.shift();
        if (next !== undefined) pushFn(next);
      }
    },
  });
  const push = (chunk: ReadableChunk | undefined) => {
    const normalizedChunk: ReadableChunk = chunk ?? null;
    if (pushFn) {
      pushFn(normalizedChunk);
    } else {
      pendingChunks.push(normalizedChunk);
    }
  };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/react-on-rails-pro/tests/testUtils.ts` around lines 12-13, change
the pendingChunks and pushFn types to match Node.js Readable.push()
expectations: introduce a ReadableChunk type (Buffer|string|Uint8Array|null),
make pendingChunks: ReadableChunk[] and pushFn: ((chunk: ReadableChunk) =>
boolean) | undefined (since it's only set inside read()), and in the code path
that shifts from pendingChunks normalize undefined to null before calling
stream.push or pushFn; also update the JSDoc to state that null (not undefined)
terminates the stream and reflect the new types (referencing pendingChunks,
pushFn, read(), and the public push API).
```ts
  );
}
const lenHex = decoder.decode(header.subarray(tabIdx + 1));
this.contentLen = parseInt(lenHex, 16);
```
No upper bound is enforced on contentLen. A crafted stream header can specify an arbitrarily large value (e.g. parseInt("ffffffff", 16) → ~4 GB), causing the parser to wait indefinitely for content that never arrives. Because very large values still parse to valid finite numbers, the Number.isNaN check passes, the parser silently enters the 'content' state, and the connection hangs.
Consider adding a max-chunk guard immediately after this line:
```diff
 this.contentLen = parseInt(lenHex, 16);
+if (Number.isNaN(this.contentLen) || this.contentLen < 0 || this.contentLen > FIELD_SIZE_LIMIT) {
+  throw new Error(`Invalid content length: ${JSON.stringify(lenHex)} (parsed: ${this.contentLen})`);
+}
```
(Import FIELD_SIZE_LIMIT from ../shared/constants or define a suitable local constant.)
```ts
  log.debug('Worker #%d: request timed out', worker.id);
  decrementAndMaybeShutdown('onTimeout');
  done();
});
```
Potential double-decrement: worker.ts calls res.raw.destroy() after a streaming error (to force-close the connection). This can trigger both onRequestAbort (socket destroyed) and, because the response was already started, onResponse — decrementing activeRequestsCount twice for the same request. The counter would go negative, breaking shutdown logic.
A safe fix is to track which requests have already been decremented:
```ts
const completedRequests = new Set<string>();

const decrementAndMaybeShutdown = (reqId: string, context: string) => {
  if (completedRequests.has(reqId)) {
    log.warn('Worker #%d: duplicate lifecycle hook %s for request %s, skipping', worker.id, context, reqId);
    return;
  }
  completedRequests.add(reqId);
  activeRequestsCount -= 1;
  if (isShuttingDown && activeRequestsCount === 0) {
    log.debug('Worker #%d has no active requests after %s, killing the worker', worker.id, context);
    worker.destroy();
  }
};
```

Then pass req.id (Fastify assigns a unique ID per request) as the first argument in each hook.
```ts
}

// Stream ended normally
await onRequestEnded();
```
Missing response when stream has no NDJSON objects: if the request body is empty or truncated (no complete NDJSON line arrives), the for await loop exits without ever setting hasReceivedFirstObject, so onResponseStart is never called. The handler returns normally, and Fastify will send whatever default it uses for a handler that returns without replying (typically an empty 200). The client receives no HTML and no error.
A guard before onRequestEnded() would make the failure explicit:
```ts
// Stream ended normally
if (!hasReceivedFirstObject) {
  throw new Error('Stream ended without receiving any NDJSON objects');
}
await onRequestEnded();
await onResponseStartPromise;
```
```ruby
    "#{e.class} - #{e.message}\n#{backtrace}"
  end
  # Continue - don't abort entire render because one prop failed
end
```
Silent prop loss causes React to hang for up to 20 s: when the stream write fails here, the async prop is never delivered to the Node renderer. React's suspended component waits forever (until STREAM_CHUNK_TIMEOUT_MS expires). From the user's perspective, the page just hangs with no error boundary shown.
If call fails, the render session is effectively broken. At minimum, the emitter should record which props failed so the caller can issue an endStream to unblock React immediately rather than waiting for the timeout.
```ruby
module StreamBidi
  module RequestMethodsRetryFix
    def transition(nextstate)
      @headers_sent = false if nextstate == :idle
```
@headers_sent is reset for all requests, not only streaming ones. The guard return super unless @options.stream on line 30 restricts the callbacks(:body).clear call to streaming requests, but @headers_sent = false runs unconditionally on every :idle transition. Any non-streaming HTTPX request sharing the same session will have its header-sent flag cleared, which could cause headers to be re-sent on the next request.
Suggested fix:

```ruby
def transition(nextstate)
  if nextstate == :idle && @options.stream
    @headers_sent = false
    callbacks(:body).clear
    return super
  end
  super
end
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 4a7fb775ec
```ts
  htmlStr = JSON.stringify(html);
}
const metadata = JSON.stringify(metadataObj);
const byteLength = utf8ByteLength(htmlStr);
```
Handle undefined payloads before length-prefix encoding
buildLengthPrefixedResult assumes non-string payloads always stringify to a string, but JSON.stringify(undefined) returns undefined, so utf8ByteLength(htmlStr) throws. This is reachable from server_render_js, where the evaluated expression can legitimately return undefined; before this change that value was serialized without crashing, but now it raises a runtime TypeError and aborts rendering. Normalize undefined (for example to null or "") before computing byte length.
```ts
  throw new Error(`Can't set the async prop "${propName}" because the stream is already closed`);
}

promiseController.resolve(propValue);
```
Double-resolution is silently dropped. If setProp is called twice for the same prop (e.g., a bug in the Rails emitter sends the same update chunk twice), the second resolve() call is a no-op on an already-resolved promise, and resolved is overwritten to true again — no error, no log. The second value is silently discarded.
```diff
-promiseController.resolve(propValue);
+if (promiseController.resolved) {
+  log.warn?.(`AsyncPropsManager: setProp called twice for "${propName}", ignoring duplicate`);
+  return;
+}
+promiseController.resolve(propValue);
+promiseController.resolved = true;
```
(or guard at the top of the method before calling getOrCreatePromiseController)
**Code Review: Incremental HTML Streaming and Async Props**

This is a large, architecturally significant PR. The overall design is sound. Six issues are called out below: four as inline comments on specific lines, two here.

Summary of inline comments posted:
Minor observations (no inline comment):
🧹 Nitpick comments (1)
packages/react-on-rails-pro-node-renderer/tests/worker.test.ts (1)
1080-1119: Test title and TODO comment are misleading. The title says "only processes first one for now" and there's a TODO to "Implement incremental updates", but based on `handleIncrementalRenderStream.ts` (context snippet 1, lines 171-181), subsequent chunks ARE already processed through `onUpdateReceived`. The first chunk produces the response; subsequent chunks are update chunks processed by the sink.

Consider updating the title to clarify the actual behavior:

```diff
-  // TODO: Implement incremental updates and update this test
-  test('handles multiple NDJSON chunks but only processes first one for now', async () => {
+  test('handles multiple NDJSON chunks - first produces response, subsequent are updates', async () => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/react-on-rails-pro-node-renderer/tests/worker.test.ts` around lines 1080-1119, the test title and TODO are misleading because subsequent NDJSON chunks are processed via onUpdateReceived in handleIncrementalRenderStream.ts; update the test name 'handles multiple NDJSON chunks but only processes first one for now' and remove or reword the TODO to reflect that the first chunk generates the response while additional chunks are treated as updates (processed by onUpdateReceived/sink). Ensure the test description and any TODO comment clearly state that the first NDJSON chunk returns the initial render response and later chunks are consumed as incremental updates rather than being ignored.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/react-on-rails-pro-node-renderer/tests/worker.test.ts`:
- Around line 1080-1119: The test title and TODO are misleading because
subsequent NDJSON chunks are processed via onUpdateReceived in
handleIncrementalRenderStream.ts; update the test name 'handles multiple NDJSON
chunks but only processes first one for now' and remove or reword the TODO to
reflect that the first chunk generates the response while additional chunks are
treated as updates (processed by onUpdateReceived/sink). Ensure the test
description and any TODO comment clearly state that the first NDJSON chunk
returns the initial render response and later chunks are consumed as incremental
updates rather than being ignored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 4f8fc020-87bd-4df7-8a5b-d082648f532d
📒 Files selected for processing (3)
- packages/react-on-rails-pro-node-renderer/src/worker.ts
- packages/react-on-rails-pro-node-renderer/tests/vm.test.ts
- packages/react-on-rails-pro-node-renderer/tests/worker.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/react-on-rails-pro-node-renderer/tests/vm.test.ts
## Summary
- Removes the `length_prefixed` flag from `StreamRequest`, making it
always parse the length-prefixed wire format
- Removes the dead `process_line_based_chunks` method since the
incremental rendering path now uses the same
`streamServerRenderedReactComponent` transform as one-shot streaming
- Updates incremental rendering tests to use
`to_length_prefixed("chunk")` instead of raw `"chunk\n"`
## Test plan
- [ ] Pro gem unit tests pass (`cd react_on_rails_pro && bundle exec
rspec spec/react_on_rails_pro/`)
- [ ] Pro integration tests pass (`cd react_on_rails_pro/spec/dummy &&
bundle exec rspec spec/`)
- [ ] Verify incremental rendering (async props) works end-to-end with
streaming components
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
#3196)

## Summary

- Replace `setTimeout(flush, 0)` in `injectRSCPayload.ts` with React's `destination.flush()` signal for per-render-cycle flushing
- Yield to the event loop (`setImmediate`) between NDJSON update chunks in `handleIncrementalRenderStream.ts` so each async prop triggers a separate React render cycle

## Problem

The RSC shell HTML + all resolved Suspense boundary content were being merged into a single wire-format message, defeating progressive streaming. On pages with fast DB queries (product, blog, restaurant search), RSC FCP was **worse** than SSR.

**Product page before fix:**

| Metric | SSR | RSC |
|--------|-----|-----|
| FCP | 136ms | 370ms (+172%) |
| First wire msg | 70KB | 124KB (shell + all resolved + Flight data) |

**Product page after fix (with 3ms sleep workaround confirming the diagnosis):**

| Metric | SSR | RSC |
|--------|-----|-----|
| FCP | 140ms | 172ms (+23%) |
| First wire msg | 70KB | 38KB (shell + skeletons only) |

## Root Cause

Two issues combined:

1. **`injectRSCPayload.ts`**: `setTimeout(flush, 0)` batches by event loop tick. When multiple Suspense boundaries resolve in the same tick, all HTML merges into one output.
2. **`handleIncrementalRenderStream.ts`**: The `while` loop processes all NDJSON update chunks synchronously. Each `setProp()` adds to React's `pingedTasks`, but React only calls `setImmediate(performWork)` on the first ping. All props batch into one `performWork` → one `flushCompletedQueues` → one giant chunk.

## Fix

**`injectRSCPayload.ts`**: React calls `flushBuffered(destination)` → `destination.flush()` at the end of every `flushCompletedQueues` cycle ([React source](https://github.com/facebook/react/blob/main/packages/react-server/src/ReactServerStreamConfigNode.js#L31-L38), [PR facebook/react#21625](facebook/react#21625)). By adding `.flush()` to `htmlStream`, we flush buffered chunks at exactly the right granularity: per render cycle, not per event loop tick. This also solves the partial HTML tag problem (React's 4KB buffer can split tags mid-attribute) without the over-batching of `setTimeout(0)`.

**`handleIncrementalRenderStream.ts`**: `await setImmediate` after each update chunk gives React's `performWork` a chance to run between `setProp()` calls, producing separate `flushCompletedQueues` → `flush()` cycles per Suspense boundary.

Both changes are needed together; see the issue for a detailed explanation of why each alone is insufficient.

## Verification

Test script rendering 4 large Suspense boundaries (>4KB each):

```
Without flush(): 147 chunks, 84% contain partial HTML tags
With flush():    5 flush cycles, 0% partial tags

Without flush: [Hero+Fallbacks+Details] → [Reviews] → [Related] → [Recommended]
With flush:    [Hero+Fallbacks] → [Details] → [Reviews] → [Related] → [Recommended]
```

## Test plan

- [ ] Existing streaming tests pass
- [ ] Product page RSC: first wire-format message contains only shell + skeletons (not resolved content)
- [ ] Blog page RSC: FCP is faster or equal to SSR (was 28% slower)
- [ ] Dashboard RSC: no regression (queries are slow enough that streaming already worked)
- [ ] No partial HTML tags in any output chunk (verified by test script)

Closes #3194

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Summary
CI comparison PR, used to identify which CI failures are pre-existing on `upcoming-v16.3.0` vs introduced by PR #2615.

This PR is NOT intended to be merged. It exists solely to run CI on `upcoming-v16.3.0` against `main`.

🤖 Generated with Claude Code
Summary by CodeRabbit
Bug Fixes
New Features
Tests
Chores