# PyLops-GPU

> **Note:** This library is under early development. Expect things to change constantly until version v1.0.0.

PyLops-GPU is an extension of PyLops that allows linear operators to be run on GPUs.

Just as numpy and scipy lie at the core of the parent project PyLops, PyLops-GPU builds heavily on top of PyTorch and leverages the same optimized tensor computations that PyTorch uses for deep learning on GPUs and CPUs. As a result, linear operators can be applied directly on GPUs.

Here is a simple example showing how a diagonal operator can be created and applied using PyLops:

```
import numpy as np
from pylops import Diagonal

n = int(1e6)
x = np.ones(n)         # model vector
d = np.arange(n) + 1.  # diagonal entries
Dop = Diagonal(d)      # diagonal linear operator
# y = Dx
y = Dop * x            # forward pass
```

and similarly using PyLops-GPU:

```
import torch
from pylops_gpu.utils.backend import device
from pylops_gpu import Diagonal

dev = device()  # returns 'gpu' if a GPU is available, 'cpu' otherwise
n = int(1e6)
x = torch.ones(n, dtype=torch.float64).to(dev)              # model vector
d = (torch.arange(0, n, dtype=torch.float64) + 1.).to(dev)  # diagonal entries
Dop = Diagonal(d, device=dev)                               # diagonal linear operator
# y = Dx
y = Dop * x                                                 # forward pass
```
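Both snippets stop at the forward pass, but the operator can also be inverted. A minimal sketch continuing the PyLops snippet above, where the `/` operator solves `Dx = y` with an iterative least-squares solver under the hood:

```
# invert the operator: solve Dx = y for x (iterative solver under the hood)
xinv = Dop / y
```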

Running these two snippets in Google Colab with a GPU runtime enabled gives a speed-up of more than 50× for the forward pass.
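Such a comparison can be reproduced with a small timing helper. The sketch below is illustrative (the `time_forward` helper is ours, not part of either library); it synchronizes the CUDA stream so that asynchronous GPU kernels are timed correctly:

```
import time

import torch

def time_forward(Op, x, nreps=10):
    """Average wall-clock time of the forward pass y = Op * x."""
    _ = Op * x  # warm-up run, excludes one-off initialization costs
    if torch.cuda.is_available():
        torch.cuda.synchronize()  # wait for pending GPU kernels
    t0 = time.time()
    for _ in range(nreps):
        _ = Op * x
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.time() - t0) / nreps
```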

As a by-product of implementing PyLops linear operators in PyTorch, we can easily chain our operators with any nonlinear mathematical operation (e.g., log, sin, tan, pow, …) as well as with operators from the `torch.nn` submodule and obtain *Automatic Differentiation* (AD) for the entire chain. Since the gradient of a linear operator is simply its *adjoint*, we have implemented a single class, `pylops_gpu.TorchOperator`, which can wrap any linear operator from the PyLops and PyLops-GPU libraries and return a `torch.autograd.Function` object.
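A minimal sketch of this workflow (assuming, as in the parent PyLops library, that the wrapped operator is applied via an `apply` method returning a differentiable tensor):

```
import torch
from pylops_gpu import Diagonal, TorchOperator

n = 10
d = torch.arange(1., n + 1)            # diagonal entries
x = torch.ones(n, requires_grad=True)  # input tracked by autograd

# wrap the linear operator so it can participate in autograd
Dop = TorchOperator(Diagonal(d))

# chain with a nonlinear operation and backpropagate through the chain
y = torch.sin(Dop.apply(x))
y.sum().backward()
print(x.grad)  # equals D^T cos(Dx), computed via the operator's adjoint
```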