torch backprop does not forget that something was nan #2

@FredDeCeuster

Description

The following code:

import torch
from torch.optim import Adam

# m[0] == 0, so 1 / m produces inf at that position
m = torch.arange(5.0, requires_grad=True)
optimizer = Adam([m], lr=0.1)

f = 1 / m
# overwrite the bad value in the forward pass; the division stays in the graph
f[m == 0] = 1.0

loss = torch.mean(f**2)

optimizer.zero_grad()
loss.backward()
optimizer.step()

will leave the tensor m, after the optimizer step, evaluating to:

tensor([   nan, 1.1000, 2.1000, 3.1000, 4.1000], requires_grad=True)
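The masking only patches the forward value: the backward pass still differentiates through 1 / m at m == 0, where the local derivative -1 / m**2 is infinite, and the zeroed upstream gradient from the masked assignment gives inf * 0 = nan, which Adam then writes into m[0].

A minimal sketch of the usual workaround, assuming the intent is simply to skip the division at masked positions (the name safe_m is illustrative): replace the zero entries with a dummy denominator before dividing, so neither the forward nor the backward pass ever evaluates 1 / 0.

import torch
from torch.optim import Adam

m = torch.arange(5.0, requires_grad=True)
optimizer = Adam([m], lr=0.1)

# Substitute a harmless denominator where m == 0, then mask the result.
safe_m = torch.where(m == 0, torch.ones_like(m), m)
f = torch.where(m == 0, torch.ones_like(m), 1 / safe_m)

loss = torch.mean(f**2)

optimizer.zero_grad()
loss.backward()
optimizer.step()

Both torch.where calls are needed: the inner substitution keeps the division finite so its backward pass is finite, and the outer one restores the intended forward value of 1.0 at the masked positions.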
