
Releases: LuxDL/Lux.jl

MLDataDevices-v1.15.3

08 Dec 04:21
4a170a1


MLDataDevices v1.15.3

Diff since MLDataDevices-v1.15.2

Merged pull requests:

  • Fix isbits type support for GPU device transfer (#1587) (@Copilot)

Closed issues:

  • [MLDataDevices] broken support for moving isbits types (#1586)
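The isbits fix (#1587) can be illustrated with a short sketch. The `Point` struct below is purely hypothetical; only the public `gpu_device` API from MLDataDevices is assumed, and `gpu_device()` falls back to `CPUDevice` when no GPU backend is loaded:

```julia
using MLDataDevices

# Hypothetical isbits struct used only for illustration.
struct Point
    x::Float64
    y::Float64
end

dev = gpu_device()        # returns CPUDevice() if no GPU backend is available
p = dev(Point(1.0, 2.0))  # isbits values should pass through unchanged
```

Before this release, moving such isbits values through a GPU device could error even though they require no conversion.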

v1.27.1

06 Dec 18:36


Lux v1.27.1

Diff since v1.27.0

Merged pull requests:

  • chore: bump actions/download-artifact from 5 to 6 (#1575) (@dependabot[bot])
  • ci: fix download path for cuda ci (#1576) (@avik-pal)
  • chore: bump crate-ci/typos from 1.39.2 to 1.40.0 (#1578) (@dependabot[bot])
  • test: Metal test now works (#1581) (@avik-pal)
  • Add AbstractChar array support to MLDataDevices (#1582) (@Copilot)
  • Mark LuxCore imports as public using @public (#1585) (@Copilot)

Closed issues:

  • Overhead of convolution on AMD GPU (#1557)
  • LuxTestUtils not re-exported (#1579)
  • [MLDataDevices] failure when transferring non-numerical arrays (#1580)
  • Mark LuxCore imports in Lux as public (#1584)
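A minimal sketch of the non-numerical array transfer added in #1582, assuming only the public `cpu_device` API from MLDataDevices:

```julia
using MLDataDevices

dev = cpu_device()
chars = dev(['L', 'u', 'x'])  # AbstractChar arrays previously errored on device transfer
```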

MLDataDevices-v1.15.2

06 Dec 18:36


MLDataDevices v1.15.2

Diff since MLDataDevices-v1.15.1

Merged pull requests:

Closed issues:

  • Add simple tests for other accelerators (#686)
  • DifferentiationInterface testing (#769)
  • Enzyme Cache Invalidation Failure with v1.10 (#1551)
  • OneHotArrays + Reactant with cross entropy loss (#1556)
  • Overhead of convolution on AMD GPU (#1557)
  • LuxTestUtils not re-exported (#1579)
  • [MLDataDevices] failure when transferring non-numerical arrays (#1580)
  • Mark LuxCore imports in Lux as public (#1584)

v1.27.0

22 Nov 22:58
9029706


Lux v1.27.0

Diff since v1.26.0

Merged pull requests:

Closed issues:

  • Add simple tests for other accelerators (#686)
  • DifferentiationInterface testing (#769)
  • Enzyme Cache Invalidation Failure with v1.10 (#1551)
  • OneHotArrays + Reactant with cross entropy loss (#1556)

LuxLib-v1.13.1

22 Nov 22:58
9029706


LuxLib v1.13.1

Diff since LuxLib-v1.13.0

Merged pull requests:

Closed issues:

  • Add simple tests for other accelerators (#686)
  • DifferentiationInterface testing (#769)
  • Automatically cache allocations for JuliaGPU workloads (#1527)
  • Enzyme Cache Invalidation Failure with v1.10 (#1551)
  • OneHotArrays + Reactant with cross entropy loss (#1556)

v1.26.0

13 Nov 18:50
584ca77


Lux v1.26.0

Diff since v1.25.0

Merged pull requests:

  • feat: migrate DDIM to Reactant (#1158) (@avik-pal)
  • docs: stop manual specification of precision config (#1536) (@avik-pal)
  • CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#1537) (@github-actions[bot])
  • CompatHelper: add new compat entry for ImageShow at version 0.3 for package DDIM, (keep existing compat) (#1538) (@github-actions[bot])
  • CompatHelper: add new compat entry for OhMyThreads at version 0.8 for package DDIM, (keep existing compat) (#1539) (@github-actions[bot])
  • chore: bump crate-ci/typos from 1.38.1 to 1.39.0 (#1541) (@dependabot[bot])
  • refactor: use EnzymeRules.@easy_rule in Lux.jl (#1542) (@avik-pal)
  • Fix identity_init filling entire submatrix instead of diagonal (#1544) (@Copilot)
  • feat: more informative error on constructing trainstate with compiled function (#1547) (@avik-pal)
  • fix: minor reactant stuff + docs build (#1548) (@avik-pal)
  • feat: use a caching allocator for GPUArrays workflows (#1549) (@avik-pal)
  • Avoid reconstruction in Internal.unsafe_free! (#1550) (@AntonOresten)

Closed issues:

  • Automatically cache allocations for JuliaGPU workloads (#1527)
  • Exporting to Jax manual entry segfaults with recent reactant (#1540)
  • Identity matrix initialization fills all entries with ones (#1543)
  • Embedding Layer results in scalar indexing with Reactant? (#1546)

MLDataDevices-v1.15.1

13 Nov 18:50
584ca77


MLDataDevices v1.15.1

Diff since MLDataDevices-v1.15.0

Merged pull requests:

  • feat: migrate DDIM to Reactant (#1158) (@avik-pal)
  • feat: support distributed training via TrainState API (#1529) (@avik-pal)
  • ci: run LuxCore + MLDataDevices testing on 1.12 (#1534) (@avik-pal)
  • docs: stop manual specification of precision config (#1536) (@avik-pal)
  • CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#1537) (@github-actions[bot])
  • CompatHelper: add new compat entry for ImageShow at version 0.3 for package DDIM, (keep existing compat) (#1538) (@github-actions[bot])
  • CompatHelper: add new compat entry for OhMyThreads at version 0.8 for package DDIM, (keep existing compat) (#1539) (@github-actions[bot])
  • chore: bump crate-ci/typos from 1.38.1 to 1.39.0 (#1541) (@dependabot[bot])
  • refactor: use EnzymeRules.@easy_rule in Lux.jl (#1542) (@avik-pal)
  • Fix identity_init filling entire submatrix instead of diagonal (#1544) (@Copilot)
  • feat: more informative error on constructing trainstate with compiled function (#1547) (@avik-pal)
  • fix: minor reactant stuff + docs build (#1548) (@avik-pal)
  • feat: use a caching allocator for GPUArrays workflows (#1549) (@avik-pal)
  • Avoid reconstruction in Internal.unsafe_free! (#1550) (@AntonOresten)

Closed issues:

  • Reactant get_device with sharding throws error inside of MLDataDevices, impossible to use with TrainState API (#1520)
  • Automatically cache allocations for JuliaGPU workloads (#1527)
  • Exporting to Jax manual entry segfaults with recent reactant (#1540)
  • Identity matrix initialization fills all entries with ones (#1543)
  • Embedding Layer results in scalar indexing with Reactant? (#1546)

LuxLib-v1.13.0

12 Nov 16:30
0d0558e


LuxLib v1.13.0

Diff since LuxLib-v1.12.1

Merged pull requests:

  • feat: migrate DDIM to Reactant (#1158) (@avik-pal)
  • Add type-stable eltype control to device adaptors with comprehensive testing (#1498) (@Copilot)
  • chore: bump crate-ci/typos from 1.36.2 to 1.36.3 (#1499) (@dependabot[bot])
  • chore: bump crate-ci/typos from 1.36.3 to 1.37.2 (#1500) (@dependabot[bot])
  • CompatHelper: bump compat for BFloat16s to 0.6 for package CIFAR10, (keep existing compat) (#1501) (@github-actions[bot])
  • CompatHelper: bump compat for BFloat16s to 0.6 for package Qwen3, (keep existing compat) (#1502) (@github-actions[bot])
  • CompatHelper: bump compat for JLArrays to 0.3 for package test, (keep existing compat) (#1503) (@github-actions[bot])
  • ci: use 1.11 (#1504) (@avik-pal)
  • feat: JVP and VJP APIs for Reactant (#1506) (@avik-pal)
  • chore: bump crate-ci/typos from 1.37.2 to 1.38.1 (#1508) (@dependabot[bot])
  • CompatHelper: bump compat for Optimization to 5 for package GravitationalWaveForm, (keep existing compat) (#1510) (@github-actions[bot])
  • CompatHelper: bump compat for Optimization to 5 for package OptimizationIntegration, (keep existing compat) (#1511) (@github-actions[bot])
  • CompatHelper: bump compat for BLISBLAS in [weakdeps] to 0.2 for package LuxLib, (keep existing compat) (#1512) (@github-actions[bot])
  • CompatHelper: bump compat for BLISBLAS to 0.2 for package test, (keep existing compat) (#1513) (@github-actions[bot])
  • feat: move rng to reactant device (#1517) (@avik-pal)
  • fix: donation errors for reactant (#1518) (@avik-pal)
  • feat: allow passing a sync option (#1519) (@avik-pal)
  • chore: bump actions/upload-artifact from 4 to 5 (#1526) (@dependabot[bot])
  • feat: support distributed training via TrainState API (#1529) (@avik-pal)
  • feat: support track numbers via reactant device API (#1533) (@avik-pal)
  • ci: run LuxCore + MLDataDevices testing on 1.12 (#1534) (@avik-pal)
  • docs: stop manual specification of precision config (#1536) (@avik-pal)
  • CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#1537) (@github-actions[bot])
  • CompatHelper: add new compat entry for ImageShow at version 0.3 for package DDIM, (keep existing compat) (#1538) (@github-actions[bot])
  • CompatHelper: add new compat entry for OhMyThreads at version 0.8 for package DDIM, (keep existing compat) (#1539) (@github-actions[bot])
  • chore: bump crate-ci/typos from 1.38.1 to 1.39.0 (#1541) (@dependabot[bot])
  • refactor: use EnzymeRules.@easy_rule in Lux.jl (#1542) (@avik-pal)
  • Fix identity_init filling entire submatrix instead of diagonal (#1544) (@Copilot)

Closed issues:

  • Rethinking eltype conversions in Adaptors (#1015)
  • CUDA.jl alone cannot trigger automatic GPU backend selection (#1245)
  • Fix remaining CUDA testing (#1457)
  • Global configuration for setting sync=true in training API (#1509)
  • Invalid buffer donation in new Reactant versions (#1514)
  • Lux.jl and Reactant and StableRNG interaction (#1515)
  • memory leak (?) on AMD MI250X GPUs (#1516)
  • Reactant get_device with sharding throws error inside of MLDataDevices, impossible to use with TrainState API (#1520)
  • Error "failed to run pass manager on module" only on Vector input (#1521)
  • Local MPI rank is always 0 if Ipopt solver is imported before Lux and MPI (#1525)
  • Reactant RNG handling broken in latest release (#1531)
  • Exporting to Jax manual entry segfaults with recent reactant (#1540)
  • Identity matrix initialization fills all entries with ones (#1543)

WeightInitializers-v1.2.2

07 Nov 21:41
226beb3


WeightInitializers v1.2.2

Diff since WeightInitializers-v1.2.1

Merged pull requests:

  • feat: migrate DDIM to Reactant (#1158) (@avik-pal)
  • feat: use new NCCL version (#1492) (@avik-pal)
  • feat: replace Compat.jl with SciMLPublic.jl for @public macro (#1497) (@Copilot)
  • Add type-stable eltype control to device adaptors with comprehensive testing (#1498) (@Copilot)
  • chore: bump crate-ci/typos from 1.36.2 to 1.36.3 (#1499) (@dependabot[bot])
  • chore: bump crate-ci/typos from 1.36.3 to 1.37.2 (#1500) (@dependabot[bot])
  • CompatHelper: bump compat for BFloat16s to 0.6 for package CIFAR10, (keep existing compat) (#1501) (@github-actions[bot])
  • CompatHelper: bump compat for BFloat16s to 0.6 for package Qwen3, (keep existing compat) (#1502) (@github-actions[bot])
  • CompatHelper: bump compat for JLArrays to 0.3 for package test, (keep existing compat) (#1503) (@github-actions[bot])
  • ci: use 1.11 (#1504) (@avik-pal)
  • feat: JVP and VJP APIs for Reactant (#1506) (@avik-pal)
  • chore: bump crate-ci/typos from 1.37.2 to 1.38.1 (#1508) (@dependabot[bot])
  • CompatHelper: bump compat for Optimization to 5 for package GravitationalWaveForm, (keep existing compat) (#1510) (@github-actions[bot])
  • CompatHelper: bump compat for Optimization to 5 for package OptimizationIntegration, (keep existing compat) (#1511) (@github-actions[bot])
  • CompatHelper: bump compat for BLISBLAS in [weakdeps] to 0.2 for package LuxLib, (keep existing compat) (#1512) (@github-actions[bot])
  • CompatHelper: bump compat for BLISBLAS to 0.2 for package test, (keep existing compat) (#1513) (@github-actions[bot])
  • feat: move rng to reactant device (#1517) (@avik-pal)
  • fix: donation errors for reactant (#1518) (@avik-pal)
  • feat: allow passing a sync option (#1519) (@avik-pal)
  • chore: bump actions/upload-artifact from 4 to 5 (#1526) (@dependabot[bot])
  • feat: support distributed training via TrainState API (#1529) (@avik-pal)
  • feat: support track numbers via reactant device API (#1533) (@avik-pal)
  • ci: run LuxCore + MLDataDevices testing on 1.12 (#1534) (@avik-pal)
  • docs: stop manual specification of precision config (#1536) (@avik-pal)
  • CompatHelper: add new compat entry for TensorBoardLogger at version 0.1 for package DDIM, (keep existing compat) (#1537) (@github-actions[bot])
  • CompatHelper: add new compat entry for ImageShow at version 0.3 for package DDIM, (keep existing compat) (#1538) (@github-actions[bot])
  • CompatHelper: add new compat entry for OhMyThreads at version 0.8 for package DDIM, (keep existing compat) (#1539) (@github-actions[bot])
  • chore: bump crate-ci/typos from 1.38.1 to 1.39.0 (#1541) (@dependabot[bot])
  • Fix identity_init filling entire submatrix instead of diagonal (#1544) (@Copilot)

Closed issues:

  • Rethinking eltype conversions in Adaptors (#1015)
  • CUDA.jl alone cannot trigger automatic GPU backend selection (#1245)
  • Fix remaining CUDA testing (#1457)
  • Relax NCCL dep for testing (#1479)
  • Use SciMLPublic.jl instead of Compat for @public (#1496)
  • Global configuration for setting sync=true in training API (#1509)
  • Invalid buffer donation in new Reactant versions (#1514)
  • Lux.jl and Reactant and StableRNG interaction (#1515)
  • memory leak (?) on AMD MI250X GPUs (#1516)
  • Reactant get_device with sharding throws error inside of MLDataDevices, impossible to use with TrainState API (#1520)
  • Error "failed to run pass manager on module" only on Vector input (#1521)
  • Local MPI rank is always 0 if Ipopt solver is imported before Lux and MPI (#1525)
  • Reactant RNG handling broken in latest release (#1531)
  • Exporting to Jax manual entry segfaults with recent reactant (#1540)
  • Identity matrix initialization fills all entries with ones (#1543)
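The `identity_init` fix (#1544, closing #1543) can be illustrated as follows. This is a sketch assuming `identity_init`'s documented behavior of placing ones (scaled by `gain`) on the diagonal only:

```julia
using WeightInitializers

W = identity_init(3, 3)
# After the fix: ones on the diagonal, zeros elsewhere,
# rather than the whole submatrix being filled with ones.
```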

v1.25.0

31 Oct 18:03
4de5284


Lux v1.25.0

Diff since v1.24.0

Merged pull requests:

  • chore: bump actions/upload-artifact from 4 to 5 (#1526) (@dependabot[bot])
  • feat: support distributed training via TrainState API (#1529) (@avik-pal)
  • feat: support track numbers via reactant device API (#1533) (@avik-pal)
  • ci: run LuxCore + MLDataDevices testing on 1.12 (#1534) (@avik-pal)

Closed issues:

  • Fix remaining CUDA testing (#1457)
  • memory leak (?) on AMD MI250X GPUs (#1516)
  • Reactant get_device with sharding throws error inside of MLDataDevices, impossible to use with TrainState API (#1520)
  • Error "failed to run pass manager on module" only on Vector input (#1521)
  • Local MPI rank is always 0 if Ipopt solver is imported before Lux and MPI (#1525)
  • Reactant RNG handling broken in latest release (#1531)