[task] fix: fix bug for args.train.accelerator.fsdp_config.mixed_precision.enable #638

Merged
FoolPlayer merged 4 commits into ByteDance-Seed:main from UserChen666:main
Apr 18, 2026

Conversation

@UserChen666 (Contributor)

What does this PR do?

fix bug for #636

Checklist Before Starting

  • Search for related PRs/issues and link here: ...
  • PR title follows [{modules}] {type}: {description} format (see check_pr_title.yml for the full list of allowed modules and types)
    • Breaking changes: prepend [BREAKING] — e.g. [BREAKING][parallel, model] feat: dynamic batching

Test

Validation results (training curves, eval metrics) for changes not covered by CI.

API and Usage Example

Show API changes and usage examples if applicable.

Design & Code Changes

High-level design description and specific change list.

Checklist Before Submitting

  • Read the Contribute Guide
  • Applied pre-commit checks
  • Added/updated documentation
  • If tasks/ training scripts were moved or renamed: updated docs/ examples and verified python3 scripts/ci/check_doc_task_paths.py passes (also enforced by the Check doc task paths CI workflow)
  • Added tests to CI workflow (or explained why not feasible)

@CLAassistant commented Apr 9, 2026

CLA assistant check
All committers have signed the CLA.

@github-actions Bot added the bug (Something isn't working) and fix labels Apr 9, 2026
@UserChen666 UserChen666 changed the title fix bug for args.train.accelerator.fsdp_config.mixed_precision.enable [arguments]fix bug for args.train.accelerator.fsdp_config.mixed_precision.enable Apr 9, 2026
@gemini-code-assist Bot left a comment (Contributor)


Code Review

This pull request updates various training scripts to use the mixed precision configuration object instead of a boolean flag. However, the review identifies a critical issue where the build_parallelize_model function expects a MixedPrecisionConfig object rather than a boolean, meaning the current implementation ignores the user's configuration. The reviewer suggests passing the full mixed_precision configuration object to ensure the settings are correctly applied.

Comment thread: tasks/deprecated_task/train_flux.py (Outdated)

  enable_full_shard=args.train.accelerator.fsdp_config.full_shard,
  enable_reshard_after_forward=args.train.accelerator.fsdp_config.reshard_after_forward,
- enable_mixed_precision=args.train.enable_mixed_precision,
+ enable_mixed_precision=args.train.accelerator.fsdp_config.mixed_precision.enable,

Severity: high

The build_parallelize_model function expects a mixed_precision parameter of type MixedPrecisionConfig, not a boolean enable_mixed_precision. Passing a boolean to a non-existent parameter name will cause the function to use its default MixedPrecisionConfig(enable=True), effectively ignoring the user's configuration for mixed precision.

Suggested change:
- enable_mixed_precision=args.train.accelerator.fsdp_config.mixed_precision.enable,
+ mixed_precision=args.train.accelerator.fsdp_config.mixed_precision,
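The silent failure the reviewer describes can be reproduced with a minimal sketch. The `MixedPrecisionConfig` and `build_parallelize_model` below are hypothetical stand-ins for the real API, assuming (as the review implies) that the function accepts a `**kwargs` catch-all and therefore swallows unrecognized keyword names instead of raising a TypeError:

```python
from dataclasses import dataclass


@dataclass
class MixedPrecisionConfig:
    enable: bool = True


def build_parallelize_model(model, mixed_precision=None, **kwargs):
    # A **kwargs catch-all silently absorbs misspelled or renamed keywords,
    # so enable_mixed_precision=... never reaches the mixed_precision parameter
    # and the default config (enable=True) wins.
    if mixed_precision is None:
        mixed_precision = MixedPrecisionConfig(enable=True)
    return mixed_precision


# Buggy call: the boolean lands in **kwargs and is ignored.
buggy = build_parallelize_model(None, enable_mixed_precision=False)
# Fixed call: pass the config object under the expected parameter name.
fixed = build_parallelize_model(
    None, mixed_precision=MixedPrecisionConfig(enable=False)
)
print(buggy.enable, fixed.enable)
```

With the buggy keyword the user's `False` is discarded and the default `enable=True` is returned, which is exactly why the suggested change passes the whole `mixed_precision` config object instead of a boolean.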

Comment thread: tasks/deprecated_task/train_qwen_vl.py (Outdated)

  enable_full_shard=args.train.accelerator.fsdp_config.full_shard,
  enable_reshard_after_forward=args.train.accelerator.fsdp_config.reshard_after_forward,
- enable_mixed_precision=args.train.enable_mixed_precision,
+ enable_mixed_precision=args.train.accelerator.fsdp_config.mixed_precision.enable,

Severity: high

Same issue as in train_flux.py: build_parallelize_model expects a mixed_precision parameter of type MixedPrecisionConfig, not a boolean enable_mixed_precision, so the user's configuration is silently ignored.

Suggested change:
- enable_mixed_precision=args.train.accelerator.fsdp_config.mixed_precision.enable,
+ mixed_precision=args.train.accelerator.fsdp_config.mixed_precision,

Comment thread: tasks/deprecated_task/train_torch.py (Outdated)

  enable_full_shard=args.train.accelerator.fsdp_config.full_shard,
  enable_reshard_after_forward=args.train.accelerator.fsdp_config.reshard_after_forward,
- enable_mixed_precision=args.train.enable_mixed_precision,
+ enable_mixed_precision=args.train.accelerator.fsdp_config.mixed_precision.enable,

Severity: high

Same issue as in train_flux.py: build_parallelize_model expects a mixed_precision parameter of type MixedPrecisionConfig, not a boolean enable_mixed_precision, so the user's configuration is silently ignored.

Suggested change:
- enable_mixed_precision=args.train.accelerator.fsdp_config.mixed_precision.enable,
+ mixed_precision=args.train.accelerator.fsdp_config.mixed_precision,

Comment thread: tasks/deprecated_task/train_wan.py (Outdated)
@FoolPlayer (Collaborator)

Hi, can you help fix the lint errors with

make style   # auto-fix lint + format
make quality # verify everything passes

@UserChen666 (Contributor, Author)

> Hi, can you help fix lint error with
>
> make style   # auto-fix lint + format
> make quality # verify everything passes

I will do it immediately.

@UserChen666 (Contributor, Author)

> Hi, can you help fix lint error with
>
> make style   # auto-fix lint + format
> make quality # verify everything passes

done

@FoolPlayer FoolPlayer changed the title [arguments]fix bug for args.train.accelerator.fsdp_config.mixed_precision.enable [task] fix: fix bug for args.train.accelerator.fsdp_config.mixed_precision.enable Apr 18, 2026
@FoolPlayer FoolPlayer merged commit 27b8723 into ByteDance-Seed:main Apr 18, 2026
17 of 18 checks passed

Labels

bug (Something isn't working), fix

Projects

None yet

Development

Successfully merging this pull request may close these issues.

3 participants