
Error by CUequivariance execution #170

@Seunghyo-Noh

Description


Hi, I want to use cuEquivariance for accelerated training, but I am encountering the following error. How can I fix it?
I have attached a captured image of the error.

  • The command I ran is "sevenn input_test.yaml -s --enable_cueq"
  • I am using SevenNet version 0.11.0 (multi-fidelity)
  • I modified some of the presets provided by SevenNet; the modified file is attached:
    input_test.yaml.txt

Traceback (most recent call last):
  File "/home/test/miniconda3/envs/sevenn/bin/sevenn", line 8, in <module>
    sys.exit(main())
  File "/home/test/miniconda3/envs/sevenn/lib/python3.10/site-packages/sevenn/main/sevenn.py", line 190, in main
    run(args)
  File "/home/test/miniconda3/envs/sevenn/lib/python3.10/site-packages/sevenn/main/sevenn.py", line 129, in run
    train_v2(global_config, working_dir)
  File "/home/test/miniconda3/envs/sevenn/lib/python3.10/site-packages/sevenn/scripts/train.py", line 84, in train_v2
    trainer.load_state_dicts(*state_dicts, strict=False)
  File "/home/test/miniconda3/envs/sevenn/lib/python3.10/site-packages/sevenn/train/trainer.py", line 234, in load_state_dicts
    self.model.load_state_dict(model_state_dict, strict=strict)
  File "/home/test/miniconda3/envs/sevenn/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2215, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for AtomGraphSequential:
    size mismatch for onehot_to_feature_x.linear.weight: copying a param with shape torch.Size([11392]) from checkpoint, the shape in current model is torch.Size([1, 11392]).
    size mismatch for 0_self_connection_intro.linear.weight: copying a param with shape torch.Size([28672]) from checkpoint, the shape in current model is torch.Size([1, 28672]).
    size mismatch for 0_self_interaction_1.linear.weight: copying a param with shape torch.Size([16640]) from checkpoint, the shape in current model is torch.Size([1, 16640]).
    size mismatch for 0_self_interaction_2.linear.weight: copying a param with shape torch.Size([41408]) from checkpoint, the shape in current model is torch.Size([1, 41408]).
    size mismatch for 1_self_connection_intro.linear.weight: copying a param with shape torch.Size([33792]) from checkpoint, the shape in current model is torch.Size([1, 33792]).
    size mismatch for 1_self_interaction_1.linear.weight: copying a param with shape torch.Size([21760]) from checkpoint, the shape in current model is torch.Size([1, 21760]).
    size mismatch for 1_self_interaction_2.linear.weight: copying a param with shape torch.Size([86464]) from checkpoint, the shape in current model is torch.Size([1, 86464]).
    size mismatch for 2_self_connection_intro.linear.weight: copying a param with shape torch.Size([33792]) from checkpoint, the shape in current model is torch.Size([1, 33792]).
    size mismatch for 2_self_interaction_1.linear.weight: copying a param with shape torch.Size([21760]) from checkpoint, the shape in current model is torch.Size([1, 21760]).
    size mismatch for 2_self_interaction_2.linear.weight: copying a param with shape torch.Size([86464]) from checkpoint, the shape in current model is torch.Size([1, 86464]).
    size mismatch for 3_self_connection_intro.linear.weight: copying a param with shape torch.Size([33792]) from checkpoint, the shape in current model is torch.Size([1, 33792]).
    size mismatch for 3_self_interaction_1.linear.weight: copying a param with shape torch.Size([21760]) from checkpoint, the shape in current model is torch.Size([1, 21760]).
    size mismatch for 3_self_interaction_2.linear.weight: copying a param with shape torch.Size([86464]) from checkpoint, the shape in current model is torch.Size([1, 86464]).
    size mismatch for 4_self_connection_intro.linear.weight: copying a param with shape torch.Size([16384]) from checkpoint, the shape in current model is torch.Size([1, 16384]).
    size mismatch for 4_self_interaction_1.linear.weight: copying a param with shape torch.Size([21760]) from checkpoint, the shape in current model is torch.Size([1, 21760]).
    size mismatch for 4_self_interaction_2.linear.weight: copying a param with shape torch.Size([28928]) from checkpoint, the shape in current model is torch.Size([1, 28928]).
    size mismatch for reduce_input_to_hidden.linear.weight: copying a param with shape torch.Size([8320]) from checkpoint, the shape in current model is torch.Size([1, 8320]).
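Note the pattern in every mismatched key: the checkpoint stores each .linear.weight as a flat tensor of shape [N], while the cueq-enabled model expects [1, N]. As a rough workaround sketch (not an official SevenNet fix; the function name is my own and whether the remapped state dict is otherwise compatible is an assumption), the flat tensors could be given a leading dimension before calling load_state_dict:

```python
import torch

def unsqueeze_linear_weights(state_dict):
    """Add a leading dim of size 1 to 1-D '.linear.weight' tensors,
    matching the [1, N] layout reported by the cueq-enabled model.
    This only touches the keys that appear in the error above."""
    fixed = {}
    for name, tensor in state_dict.items():
        if name.endswith('.linear.weight') and tensor.dim() == 1:
            fixed[name] = tensor.unsqueeze(0)  # [N] -> [1, N]
        else:
            fixed[name] = tensor
    return fixed

# Toy demonstration with one of the shapes from the traceback:
sd = {'onehot_to_feature_x.linear.weight': torch.zeros(11392)}
fixed = unsqueeze_linear_weights(sd)
print(tuple(fixed['onehot_to_feature_x.linear.weight'].shape))  # (1, 11392)
```

Even if this silences the shape errors, the safer path is to check whether the checkpoint was produced without --enable_cueq and whether the installed SevenNet version supports converting such checkpoints.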
