Matrix Multiplication

This example shows how to use the pylops_gpu.MatrixMult operator to invert the following linear system:

\[\mathbf{y}= \mathbf{A} \mathbf{x}\]

Since \(\mathbf{A}\) is square (and will be made symmetric positive definite), we can use the pylops_gpu.optimization.cg.cg solver.
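The cg solver implements the conjugate-gradient method for symmetric positive-definite systems. A minimal pure-NumPy sketch of the iteration (for illustration only; the name cg_solve is hypothetical, and the library's cg has its own signature and stopping criteria) looks like:

```python
import numpy as np

def cg_solve(A, y, niter, tol=1e-10):
    # Minimal conjugate-gradient iteration for symmetric positive-definite A.
    x = np.zeros_like(y)
    r = y - A @ x          # residual
    p = r.copy()           # search direction
    rr = r @ r
    for _ in range(niter):
        Ap = A @ p
        alpha = rr / (p @ Ap)   # step length along current direction
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p  # new direction, A-conjugate to previous ones
        rr = rr_new
    return x

# Tiny SPD system whose exact solution is [1, 1]
A = np.array([[4.0, 1.0], [1.0, 3.0]])
y = A @ np.ones(2)
x = cg_solve(A, y, niter=10)
```

In exact arithmetic CG converges in at most N iterations for an N x N system, which is why the example below runs the solver for 2*N iterations to absorb floating-point error.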

import torch
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.gridspec as pltgs
import pylops_gpu

from pylops_gpu.utils.backend import device
from pylops_gpu.optimization.cg import cg

torch.manual_seed(0)
dev = device()
print('PyLops-gpu working on %s...' % dev)
plt.close('all')

Out:

PyLops-gpu working on cpu...

Let’s define the size N of the square matrix \(\mathbf{A}\) and fill it with random numbers

N = 20
A = torch.randn((N, N), dtype=torch.float32).to(dev)
A = torch.matmul(A.t(), A) # make A symmetric positive definite, as required by cg
Aop = pylops_gpu.MatrixMult(A, dtype=torch.float32)

x = torch.ones(N, dtype=torch.float32).to(dev)
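Why does forming \(\mathbf{A}^T \mathbf{A}\) help? For any full-rank square \(\mathbf{A}\), the product is symmetric with strictly positive eigenvalues, which is exactly what CG requires. A quick NumPy check of both properties (a standalone sketch with its own random matrix, not the torch tensor above):

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = rng.standard_normal((20, 20))
A = A0.T @ A0                    # symmetric by construction
eigvals = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix

# A is symmetric, and all eigenvalues are nonnegative
# (strictly positive when A0 has full rank)
```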

We can now apply the forward operator to create the data vector \(\mathbf{y}\) and use the / operator to solve the system with the default explicit solver. If you prefer to customize the solver (e.g., choose the number of iterations), use the div method instead.

y = Aop * x
xest = Aop / y
xest = Aop.div(y, niter=2*N)

print('x', x)
print('xest', xest)

Out:

x tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
        1., 1.])
xest tensor([1.0778, 0.5838, 0.4961, 0.9645, 0.4359, 0.7856, 0.8009, 1.0520, 0.8892,
        1.0553, 0.7420, 1.1305, 1.0049, 0.9770, 1.4189, 0.7818, 1.0504, 1.1458,
        1.0992, 1.0956])
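The estimate deviates visibly from the true \(\mathbf{x}\) because \(\mathbf{A}^T \mathbf{A}\) is typically ill-conditioned: squaring a matrix squares its condition number, and CG convergence slows as conditioning worsens. A NumPy illustration of this effect (using its own random matrix, not the torch tensor above):

```python
import numpy as np

rng = np.random.default_rng(0)
A0 = rng.standard_normal((20, 20))
A = A0.T @ A0

cond_A0 = np.linalg.cond(A0)  # 2-norm condition number of A0
cond_A = np.linalg.cond(A)    # condition number of A0^T A0

# The singular values of A0^T A0 are the squares of those of A0,
# so cond(A0^T A0) == cond(A0)**2 up to floating-point error.
```

This is why running more iterations (as with div(y, niter=2*N)) or preconditioning improves the estimate for such systems.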

Total running time of the script: ( 0 minutes 0.006 seconds)
