
Conversation

@avik-pal
Member

No description provided.

@gemini-code-assist

Summary of Changes

Hello @avik-pal, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing the integration of Reactant and Enzyme within the Lux framework. It updates the documentation to accurately reflect the current recommended usage of these tools for automatic differentiation and introduces significant performance improvements by caching compiled functions within the training loop. Additionally, it includes a minor cleanup of documentation dependencies.

Highlights

  • Documentation Updates: The documentation has been updated to reflect the use of Reactant and Enzyme for automatic differentiation, replacing Zygote in quickstart examples. This includes changes to prerequisite packages and example code snippets.
  • Reactant Compilation Caching: Performance optimizations have been introduced in the LuxReactantExt by implementing caching for compiled gradient computation and parameter update functions. This prevents redundant recompilation during training steps, improving efficiency.
  • Dependency Cleanup: The OpenSSL_jll dependency has been removed from the documentation's Project.toml file, streamlining the documentation build process.
  • Model Architecture Adjustment: A minor adjustment was made to the neural network architecture in the quickstart example, changing a Dense layer from (256, 1) to (256, 256).
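The caching pattern described in the second highlight can be sketched in a few lines. This is a minimal, hypothetical illustration only: `CompiledStepCache` and `maybe_compile!` are made-up names standing in for the actual `TrainingBackendCache` machinery in `LuxReactantExt`, and the "compilation" here is a plain closure rather than a real `Reactant.@compile` call.

```julia
# Hypothetical sketch: compile (here, wrap) a training-step function once,
# then reuse the cached result on every subsequent step to avoid
# redundant recompilation.
mutable struct CompiledStepCache
    compiled::Union{Nothing,Function}
end

CompiledStepCache() = CompiledStepCache(nothing)

function maybe_compile!(cache::CompiledStepCache, f, args...)
    if cache.compiled === nothing
        # In LuxReactantExt this slot would hold a Reactant-compiled function;
        # wrapping `f` keeps the sketch self-contained.
        cache.compiled = (a...) -> f(a...)
    end
    return cache.compiled(args...)
end

cache = CompiledStepCache()
step(x, y) = x .+ y                        # stand-in for a gradient + update step
maybe_compile!(cache, step, [1.0], [2.0])  # populates the cache on the first call
maybe_compile!(cache, step, [3.0], [4.0])  # reuses the cached function
```

The point of the pattern is that the expensive work (tracing and compiling the gradient-and-step function) happens once per `TrainState`, while every later training step only pays for the call itself.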

```julia
)
else
    cache = TrainingBackendCache(
        backend, False(), dparameters, (; compiled_grad_and_step_function, is_sharded)
```
Contributor

[JuliaFormatter] reported by reviewdog 🐶

Suggested change:

```diff
-    backend, False(), dparameters, (; compiled_grad_and_step_function, is_sharded)
+    backend,
+    False(),
+    dparameters,
+    (; compiled_grad_and_step_function, is_sharded),
```

@gemini-code-assist bot left a comment

Code Review

This pull request updates the documentation to use Reactant and Enzyme and refactors the Reactant training extension. The documentation changes look good but introduce a small bug in the quickstart example. The refactoring of the training extension is a nice improvement, but it introduces a critical bug in the apply_gradients implementation that could lead to a crash. My review includes suggestions to fix these issues.

Comment on lines 67 to 89
```diff
 ## First construct a TrainState
-train_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))
+train_state = Training.TrainState(model, ps, st, Adam(0.0001f0))
 ## We can compute the gradients using Training.compute_gradients
 ## TrainState handles compilation internally
 gs, loss, stats, train_state = Lux.Training.compute_gradients(
-    AutoZygote(), MSELoss(),
-    (x, dev(rand(rng, Float32, 10, 2))), train_state
+    AutoEnzyme(),
+    MSELoss(),
+    (x, dev(rand(rng, Float32, 10, 2))),
+    train_state
 )
 ## Optimization
 train_state = Training.apply_gradients!(train_state, gs) # or Training.apply_gradients (no `!` at the end)
-# Both these steps can be combined into a single call
+# Both these steps can be combined into a single call (preferred approach)
 gs, loss, stats, train_state = Training.single_train_step!(
-    AutoZygote(), MSELoss(),
-    (x, dev(rand(rng, Float32, 10, 2))), train_state
+    AutoEnzyme(),
+    MSELoss(),
+    (x, dev(rand(rng, Float32, 10, 2))),
+    train_state
 )
```

Severity: medium

The Training module is not imported directly, so calls to Training.TrainState, Training.apply_gradients!, and Training.single_train_step! will fail. You should prefix them with Lux. to match the style in the rest of the file and ensure the example code is runnable.

```julia
## First construct a TrainState
train_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))

## We can compute the gradients using Lux.Training.compute_gradients
## TrainState handles compilation internally
gs, loss, stats, train_state = Lux.Training.compute_gradients(
    AutoEnzyme(),
    MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))),
    train_state
)

## Optimization
train_state = Lux.Training.apply_gradients!(train_state, gs) # or Lux.Training.apply_gradients (no `!` at the end)

# Both these steps can be combined into a single call (preferred approach)
gs, loss, stats, train_state = Lux.Training.single_train_step!(
    AutoEnzyme(),
    MSELoss(),
    (x, dev(rand(rng, Float32, 10, 2))),
    train_state
)
```
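The underlying issue is Julia's module scoping: a nested module such as `Lux.Training` is not reachable by its bare name unless it is explicitly brought into scope. A self-contained toy illustration (the module names here are made up, not part of Lux):

```julia
# A nested module, analogous to `Training` living inside `Lux`.
module Outer
    module Inner
        greet() = "hi"
    end
end

# `Outer.Inner.greet()` always works; the bare name `Inner` does not resolve
# until we bring it into scope explicitly:
using .Outer: Inner

Inner.greet()  # now resolves to "hi"
```

So the alternative to prefixing every call with `Lux.` would be a single `using Lux: Training` near the top of the example; either style works as long as it is applied consistently.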

@avik-pal force-pushed the ap/fix_docs_build branch 2 times, most recently from 84094d2 to 0fe8fa8 on November 13, 2025 03:29
@codecov

codecov bot commented Nov 13, 2025

Codecov Report

❌ Patch coverage is 55.10204% with 22 lines in your changes missing coverage. Please review.
✅ Project coverage is 73.87%. Comparing base (8c14c89) to head (b272de7).
⚠️ Report is 1 commit behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| ext/LuxReactantExt/training.jl | 55.55% | 20 Missing ⚠️ |
| src/helpers/training.jl | 50.00% | 2 Missing ⚠️ |

❗ There is a different number of reports uploaded between BASE (8c14c89) and HEAD (b272de7). Click for more details.

HEAD has 22 uploads less than BASE:

| Flag | BASE (8c14c89) | HEAD (b272de7) |
| --- | --- | --- |
|  | 51 | 29 |
Additional details and impacted files
```
@@            Coverage Diff             @@
##             main    #1548      +/-   ##
==========================================
- Coverage   82.47%   73.87%   -8.61%
==========================================
  Files         168      168
  Lines        6957     6954       -3
==========================================
- Hits         5738     5137     -601
- Misses       1219     1817     +598
```

☔ View full report in Codecov by Sentry.

@github-actions
Contributor

github-actions bot commented Nov 13, 2025

Benchmark Results (Julia v1.11)

Time benchmarks

| Benchmark | main | b272de7... | main / b272de7... |
| --- | --- | --- | --- |
| basics/MHA | 4.16 ± 0.59 μs | 4.18 ± 0.7 μs | 0.995 ± 0.22 |
| basics/MHA (first run) | 4.41 ± 0.45 μs | 4.52 ± 0.66 μs | 0.976 ± 0.17 |
| basics/MHA reactant | 0.0663 ± 0.011 ms | 0.0632 ± 0.013 ms | 1.05 ± 0.27 |
| basics/MHA reactant (comp + run) | 0.153 ± 0.0037 s | 0.146 ± 0.0069 s | 1.05 ± 0.056 |
| basics/conv | 12.3 ± 11 μs | 12.5 ± 12 μs | 0.989 ± 1.3 |
| basics/conv (first run) | 10.3 ± 0.56 μs | 10.2 ± 0.63 μs | 1.01 ± 0.083 |
| basics/conv reactant | 0.0518 ± 0.0027 ms | 0.0532 ± 0.0044 ms | 0.975 ± 0.095 |
| basics/conv reactant (comp + run) | 0.0931 ± 0.0032 s | 0.0984 ± 0.0046 s | 0.947 ± 0.055 |
| basics/dense | 0.18 ± 0.001 μs | 0.18 ± 0.001 μs | 1 ± 0.0079 |
| basics/dense (first run) | 0.2 ± 0.01 μs | 0.2 ± 0.001 μs | 1 ± 0.05 |
| basics/dense reactant | 0.0459 ± 0.0031 ms | 0.0507 ± 0.0025 ms | 0.907 ± 0.076 |
| basics/dense reactant (comp + run) | 0.0806 ± 0.0035 s | 0.0816 ± 0.0027 s | 0.988 ± 0.053 |
| time_to_load | 0.927 ± 0.019 s | 0.954 ± 0.016 s | 0.972 ± 0.025 |
Memory benchmarks

| Benchmark | main | b272de7... | main / b272de7... |
| --- | --- | --- | --- |
| basics/MHA | 0.087 k allocs: 6.05 kB | 0.087 k allocs: 6.05 kB | 1 |
| basics/MHA (first run) | 0.087 k allocs: 6.05 kB | 0.087 k allocs: 6.05 kB | 1 |
| basics/MHA reactant | 19 allocs: 0.578 kB | 19 allocs: 0.578 kB | 1 |
| basics/MHA reactant (comp + run) | 18 k allocs: 1.38 MB | 18 k allocs: 1.38 MB | 1 |
| basics/conv | 0.038 k allocs: 5.12 kB | 0.038 k allocs: 5.12 kB | 1 |
| basics/conv (first run) | 0.038 k allocs: 5.12 kB | 0.038 k allocs: 5.12 kB | 1 |
| basics/conv reactant | 15 allocs: 0.438 kB | 15 allocs: 0.438 kB | 1 |
| basics/conv reactant (comp + run) | 6.16 k allocs: 0.823 MB | 6.16 k allocs: 0.823 MB | 1 |
| basics/dense | 2 allocs: 0.109 kB | 2 allocs: 0.109 kB | 1 |
| basics/dense (first run) | 2 allocs: 0.109 kB | 2 allocs: 0.109 kB | 1 |
| basics/dense reactant | 15 allocs: 0.422 kB | 15 allocs: 0.422 kB | 1 |
| basics/dense reactant (comp + run) | 5.9 k allocs: 0.805 MB | 5.9 k allocs: 0.805 MB | 1 |
| time_to_load | 0.159 k allocs: 11.2 kB | 0.159 k allocs: 11.2 kB | 1 |

@avik-pal merged commit 510f710 into main on Nov 13, 2025 (45 of 49 checks passed).
@avik-pal deleted the ap/fix_docs_build branch on November 13, 2025 05:11.