Save Memory When Operating on Tensors

PyTorch
Author

Imad Dabbura

Published

September 19, 2022

To avoid allocating new memory when operating on tensors, we can use in-place operations. There are three common forms, and each keeps x pointing at the same object:

import torch

# 1. In-place method (trailing underscore)
x = torch.randn(1000, 1000)
before = id(x)
x.add_(100)
id(x) == before #=> True

# 2. Augmented assignment
x = torch.randn(1000, 1000)
before = id(x)
x += 100
id(x) == before #=> True

# 3. Slice assignment into the existing tensor
x = torch.randn(1000, 1000)
before = id(x)
x[:] = x + 100
id(x) == before #=> True
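Many PyTorch operations also accept an `out=` keyword argument that writes the result into a pre-allocated tensor, which is another way to avoid a fresh allocation. A minimal sketch (the tensor shape is arbitrary, chosen to match the examples above):

```python
import torch

x = torch.randn(1000, 1000)
out = torch.empty_like(x)   # pre-allocate the result buffer once
before = id(out)

torch.add(x, 100, out=out)  # result written into `out`; no new tensor allocated
id(out) == before #=> True
```

This pattern is useful in loops, where the same result buffer can be reused on every iteration instead of allocating a new tensor each time.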

Otherwise, if we do the following, PyTorch will allocate a new tensor for the result and rebind x to it; the old tensor becomes garbage to be collected later. This puts unnecessary pressure on the host memory.

x = torch.randn(1000, 1000)
before = id(x)
x = x + 100
id(x) == before #=> False
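Note that `id()` only checks the identity of the Python wrapper object. To confirm that the underlying data buffer is reused, we can compare `Tensor.data_ptr()`, which returns the address of the first element of the tensor's storage. A short sketch illustrating both cases:

```python
import torch

x = torch.randn(1000, 1000)
storage_before = x.data_ptr()          # address of the underlying data buffer

x += 100                               # in-place: same buffer is reused
x.data_ptr() == storage_before #=> True

x = x + 100                            # out-of-place: new buffer, x rebound
x.data_ptr() == storage_before #=> False
```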