
Reduce GPU OOM in layer gradient computation by offloading tensors to CPU#1796

Open

styusuf wants to merge 1 commit into meta-pytorch:master from styusuf:export-D94915367

Conversation

@styusuf (Contributor) commented Mar 3, 2026

Summary:
A number of jobs using `LayerGradientXActivation` are failing with OOM errors. These errors stem from how we consume GPU memory during the forward and backward passes to obtain layer evaluations, copies of those evaluations, and layer gradients.

The issue with the current way of getting evaluations and gradients is the following:

  • After getting the evaluations (activations) from the forward pass, we gather all the activation tensors across multiple devices onto [one device](https://www.internalfb.com/code/fbsource/[0579e5aab76b1f89fc82d27913cbb3a6e0160b5f]/fbcode/pytorch/captum/captum/_utils/common.py?lines=785), meaning that one device holds its own layer activations plus a copy of the activations from every device. This is the peak of memory utilization, and it comes on top of storing the model graph and the original layer activations in memory (see the sketch after this list).

  • After the backward pass on the `saved_layer` (still on GPU), we do a similar operation on the gradients: collecting gradients from all devices onto the first device. At this point, though, memory utilization is lower since the backward pass is already complete, so it is safer to gather all gradients on GPU. We can then offload them to CPU afterwards.
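
As a rough illustration of the gather step in the first bullet, here is a minimal sketch (not Captum's actual code; the helper name is hypothetical):

```python
import torch

def gather_to_first_device(tensors):
    # Each .to() call materializes a copy on the first device, so that
    # device briefly holds its own shard plus a copy of every other
    # device's shard before and during torch.cat -- the memory peak.
    first_device = tensors[0].device
    return torch.cat([t.to(first_device) for t in tensors], dim=0)
```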

What we want to do now is offload the layer activations to CPU during this peak GPU utilization. Before we run the expensive torch.cat, we offload all of these tensors to CPU first. That way, the concatenation actually runs on CPU, freeing up GPU memory.

Additionally, when we get to the gradients, after gathering all the tensors together we also offload them to CPU, as sketched below.
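
A hedged sketch of both offloading steps (illustrative helper names, not Captum's actual internals):

```python
import torch

def gather_activations_on_cpu(tensors):
    # Offload each activation shard to CPU first, so torch.cat
    # allocates the concatenated result in host memory and the GPUs
    # are freed at the point of peak utilization.
    return torch.cat([t.cpu() for t in tensors], dim=0)

def gather_gradients_then_offload(grads):
    # Gradients: gather on the first device (memory pressure is lower
    # after the backward pass), then offload the result to CPU.
    first_device = grads[0].device
    return torch.cat([g.to(first_device) for g in grads], dim=0).cpu()
```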

With this simple change, we significantly reduce peak GPU memory utilization. We add a flag that gates this memory-efficient path in the implementation.

Adds a `memory_efficient` mode to `LayerGradientXActivation` and an `offload_to_cpu` parameter to `compute_layer_gradients_and_eval` to reduce peak GPU memory usage during multi-layer attribution.
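
For illustration, a minimal usage sketch; passing `memory_efficient` through the constructor is an assumption based on this summary, not confirmed API:

```python
import torch
import torch.nn as nn
from captum.attr import LayerGradientXActivation

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 4)
        self.fc2 = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

net = TinyNet()
# memory_efficient is the flag this PR adds; its exact placement here
# is an assumption.
lga = LayerGradientXActivation(net, net.fc1, memory_efficient=True)
attributions = lga.attribute(torch.randn(3, 8), target=0)
```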

Differential Revision: D94915367

meta-codesync bot (Contributor) commented Mar 3, 2026

@styusuf has exported this pull request. If you are a Meta employee, you can view the originating Diff in D94915367.
