To avoid allocating new memory when operating on tensors, we can do one of the following:

- Perform in-place operations:
```python
import torch

x = torch.randn(1000, 1000)
before = id(x)
x.add_(100)
id(x) == before  #=> True
```
- Or mutate the object that `x` currently refers to, using an augmented assignment:
```python
x = torch.randn(1000, 1000)
before = id(x)
x += 100
id(x) == before  #=> True
```
- Or write the result of the operation into the previously allocated tensor with slice assignment:
```python
x = torch.randn(1000, 1000)
before = id(x)
x[:] = x + 100
id(x) == before  #=> True
```
Otherwise, if we do the following, it will create a new object and rebind `x` to it. This puts unnecessary pressure on the host memory.
```python
x = torch.randn(1000, 1000)
before = id(x)
x = x + 100
id(x) == before  #=> False
```