Tensors
Copyright 2022 National Technology & Engineering Solutions of Sandia,
LLC (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the
U.S. Government retains certain rights in this software.
Tensors are extensions of multidimensional arrays with additional operations defined on them. Here we explain the basics of creating and working with dense tensors. For more details, see the pyttb.tensor class documentation.
import pyttb as ttb
import numpy as np
import sys
from pyttb.matlab.matlab_support import matlab_print
Creating a tensor from an array
The pyttb.tensor command converts a (multidimensional) array into a tensor object. By default, it creates a deep copy of the input object. It also reorders that copy to be F-ordered if it isn’t already. For a tensor of size \(m \times n \times p\), the shape is (m,n,p).
M = np.ones((4, 3, 2)) # Create numpy 4 x 3 x 2 array of ones.
X = ttb.tensor(M) # Convert to 4 x 3 x 2 tensor object.
X
tensor of shape (4, 3, 2) with order F
data[:, :, 0] =
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
data[:, :, 1] =
[[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]
[1. 1. 1.]]
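As a quick supplementary check (not part of the original example), we can verify the two behaviors described above: the constructor made its own copy of M, and that copy is stored in Fortran (column-major) order.
print(np.shares_memory(M, X.data)) # False: X holds its own deep copy of M.
print(M.flags['F_CONTIGUOUS'], X.data.flags['F_CONTIGUOUS']) # False True: the copy was reordered to F order.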
Optionally, you can specify a different shape for the tensor, so long as the input array has the right number of elements.
X = ttb.tensor(M, (4, 6)) # Reshape to 4 x 6 tensor.
X
tensor of shape (4, 6) with order F
data[:, :] =
[[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1. 1.]]
There is an option to make only a shallow copy of the input data, but in that case the input must be F-ordered. This can be useful for larger data. (A vector is both C- and F-ordered, which is useful for functions that don’t support alternative orderings.)
np.random.seed(0)
v = np.random.rand(12) # length-12 vector of uniform random numbers.
X = ttb.tensor(v, (2, 3, 2), copy=False) # Converted to 2 x 3 x 2 tensor.
X
tensor of shape (2, 3, 2) with order F
data[:, :, 0] =
[[0.5488135 0.60276338 0.4236548 ]
[0.71518937 0.54488318 0.64589411]]
data[:, :, 1] =
[[0.43758721 0.96366276 0.79172504]
[0.891773 0.38344152 0.52889492]]
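As a supplementary check (not in the original example), np.shares_memory shows whether the copy=False path actually avoided a copy here; for a contiguous input vector the F-ordered reshape can be done as a view.
print(np.shares_memory(v, X.data)) # Expect True here: X reuses the buffer of v.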
A Note on Display of Tensors
A tensor is displayed by frontal slices, where the first two indices range over the slice and the remaining indices stay fixed. This is different from how Python normally displays multidimensional arrays, where the last two indices range and the rest stay fixed.
# Display of the above tensor object in the usual Python way.
X.data
array([[[0.5488135 , 0.43758721],
[0.60276338, 0.96366276],
[0.4236548 , 0.79172504]],
[[0.71518937, 0.891773 ],
[0.54488318, 0.38344152],
[0.64589411, 0.52889492]]])
Printing similar to MATLAB
It is possible to print tensors in a MATLAB-like format using matlab_print(), which has the optional arguments name and format to further customize the output. You will need
from pyttb.matlab.matlab_support import matlab_print
in your code for this to work as shown here.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
X is a tensor of shape 2 x 3 x 2
X(:,:, 0) =
0.5488 0.6028 0.4237
0.7152 0.5449 0.6459
X(:,:, 1) =
0.4376 0.9637 0.7917
0.8918 0.3834 0.5289
Creating a tensor with elements generated by a function
The method pyttb.tensor.from_function() takes a function that is used to generate the entries of the tensor. The returned array should be in Fortran order to avoid unnecessary copies and rearrangement. Since the data will be reshaped in any case, returning a vector is recommended; alternatively, ensure the function returns an F-ordered array, for those functions that support it.
# Ensure reproducibility of random numbers.
np.random.seed(1)
# Function to generate normally distributed random numbers.
randn = lambda s: np.random.randn(np.prod(s))
# Create 2 x 3 x 2 tensor of normally distributed random numbers.
X = ttb.tensor.from_function(randn, (2, 3, 2))
# Print tensor X in MATLAB format.
matlab_print(X,name='X',format='7.4f')
X is a tensor of shape 2 x 3 x 2
X(:,:, 0) =
1.6243 -0.5282 0.8654
-0.6118 -1.0730 -2.3015
X(:,:, 1) =
1.7448 0.3190 1.4621
-0.7612 -0.2494 -2.0601
We show how to use pyttb.tensor.from_function()
in the next example to create a tensor of all ones, but it’s even easier to use pyttb.tenones()
described below.
# Function to generate tensor of ones. Uses explicit Fortran order.
ones = lambda s: np.ones(s,order='F')
# Create 3 x 4 x 2 tensor of ones.
X = ttb.tensor.from_function(ones, (3, 4, 2))
# Print tensor X in MATLAB format.
matlab_print(X,name='X',format='2.0f')
X is a tensor of shape 3 x 4 x 2
X(:,:, 0) =
1 1 1 1
1 1 1 1
1 1 1 1
X(:,:, 1) =
1 1 1 1
1 1 1 1
1 1 1 1
Create a tensor of all ones
Use pyttb.tenones() to create a tensor of all ones.
X = ttb.tenones((3, 4, 2))
matlab_print(X,name='X',format='2.0f')
X is a tensor of shape 3 x 4 x 2
X(:,:, 0) =
1 1 1 1
1 1 1 1
1 1 1 1
X(:,:, 1) =
1 1 1 1
1 1 1 1
1 1 1 1
Create a tensor of all zeros
Use pyttb.tenzeros()
to create a tensor of all zeros.
X = ttb.tenzeros((1, 4, 2)) # Creates a 1 x 4 x 2 tensor of zeros.
matlab_print(X,name='X',format='2.0f')
X is a tensor of shape 1 x 4 x 2
X(:,:, 0) =
0 0 0 0
X(:,:, 1) =
0 0 0 0
Create a random tensor
Use pyttb.tenrand()
to create a tensor with uniform random values from [0,1].
# Creates a 5 x 4 x 2 tensor of uniform [0,1] random numbers
np.random.seed(2) # Reproducible random numbers
X = ttb.tenrand((5, 4, 2))
matlab_print(X,name='X',format='7.4f')
X is a tensor of shape 5 x 4 x 2
X(:,:, 0) =
0.4360 0.3303 0.6211 0.7853
0.0259 0.2046 0.5291 0.8540
0.5497 0.6193 0.1346 0.4942
0.4353 0.2997 0.5136 0.8466
0.4204 0.2668 0.1844 0.0796
X(:,:, 1) =
0.5052 0.5967 0.4678 0.3869
0.0653 0.2260 0.2017 0.7936
0.4281 0.1069 0.6404 0.5800
0.0965 0.2203 0.4831 0.1623
0.1272 0.3498 0.5052 0.7008
Creating a one-dimensional tensor
To specify a 1-way tensor of size \(m\), the shape should be of the form (m,).
np.random.seed(3)
X = ttb.tenrand((5,)) # Creates a 1-way tensor.
matlab_print(X,name='X',format='7.4f')
X is a tensor of shape 5
X(:) =
0.5508
0.7081
0.2909
0.5108
0.8929
Specifying trailing singleton dimensions in a tensor
Likewise, trailing singleton dimensions must be explicitly specified.
np.random.seed(4)
Y = ttb.tenrand((3, 4)) # Creates a 2-way tensor of size 3 x 4.
matlab_print(Y,name='Y',format='7.4f')
Y is a tensor of shape 3 x 4
Y(:,:) =
0.9670 0.7148 0.9763 0.4348
0.5472 0.6977 0.0062 0.7794
0.9727 0.2161 0.2530 0.1977
np.random.seed(4)
Y = ttb.tenrand((3, 4, 1)) # Creates a 3-way tensor of size 3 x 4 x 1.
matlab_print(Y,name='Y',format='7.4f')
Y is a tensor of shape 3 x 4 x 1
Y(:,:, 0) =
0.9670 0.7148 0.9763 0.4348
0.5472 0.6977 0.0062 0.7794
0.9727 0.2161 0.2530 0.1977
The constituent parts of a tensor
A tensor has two parts: data
(a multidimensional array) and shape
(a tuple of integers).
np.random.seed(5)
X = ttb.tenrand((2, 4, 3)) # Create tensor of size 2 x 4 x 3 with random numbers.
matlab_print(X,name='X',format='7.4f')
X is a tensor of shape 2 x 4 x 3
X(:,:, 0) =
0.2220 0.2067 0.4884 0.7659
0.8707 0.9186 0.6117 0.5184
X(:,:, 1) =
0.2968 0.0807 0.4413 0.8799
0.1877 0.7384 0.1583 0.2741
X(:,:, 2) =
0.4142 0.6288 0.5999 0.2847
0.2961 0.5798 0.2658 0.2536
# The array (note that its display order is different from the tensor).
X.data # The array.
array([[[0.22199317, 0.2968005 , 0.41423502],
[0.20671916, 0.08074127, 0.62878791],
[0.48841119, 0.44130922, 0.5999292 ],
[0.76590786, 0.87993703, 0.28468588]],
[[0.87073231, 0.18772123, 0.29607993],
[0.91861091, 0.7384403 , 0.57983781],
[0.61174386, 0.15830987, 0.26581912],
[0.51841799, 0.27408646, 0.25358821]]])
# Note that it is stored in Fortran order (F_CONTIGUOUS = True).
X.data.flags
C_CONTIGUOUS : False
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : True
ALIGNED : True
WRITEBACKIFCOPY : False
# The shape
X.shape
(2, 4, 3)
Creating a tensor from its constituent parts
This is not the most efficient way to create a copy of a tensor, but it illustrates the role of the parts. More efficient alternatives are Y = X (shallow copy) or Y = X.copy() (deep copy).
np.random.seed(0)
X = ttb.tenrand((2, 4, 3)) # Create data.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
Y = ttb.tensor(X.data, X.shape) # Creates a (deep) copy of X from its parts.
matlab_print(Y,name='Y',format='7.4f')
X is a tensor of shape 2 x 4 x 3
X(:,:, 0) =
0.5488 0.6028 0.4237 0.4376
0.7152 0.5449 0.6459 0.8918
X(:,:, 1) =
0.9637 0.7917 0.5680 0.0710
0.3834 0.5289 0.9256 0.0871
X(:,:, 2) =
0.0202 0.7782 0.9786 0.4615
0.8326 0.8700 0.7992 0.7805
Y is a tensor of shape 2 x 4 x 3
Y(:,:, 0) =
0.5488 0.6028 0.4237 0.4376
0.7152 0.5449 0.6459 0.8918
Y(:,:, 1) =
0.9637 0.7917 0.5680 0.0710
0.3834 0.5289 0.9256 0.0871
Y(:,:, 2) =
0.0202 0.7782 0.9786 0.4615
0.8326 0.8700 0.7992 0.7805
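For comparison, here is a brief sketch (added for this discussion, not in the original) of the shallow and deep copies mentioned above.
Y_shallow = X # Shallow copy: the very same tensor object.
Y_deep = X.copy() # Deep copy: independent data.
print(Y_shallow is X) # True
print(np.shares_memory(X.data, Y_deep.data)) # False: the deep copy has its own buffer.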
Creating an empty tensor
Calling the constructor with no arguments creates an empty tensor.
X = ttb.tensor() # Creates an empty tensor
matlab_print(X,name='X',format='7.4f')
X is a tensor of shape
X(:) =
Removing singleton dimensions from a tensor
Use pyttb.tensor.squeeze() to remove singleton dimensions from a tensor.
np.random.seed(0)
Y = ttb.tenrand((4, 3, 1)) # Create the data.
matlab_print(Y,name='Y',format='7.4f') # Print tensor Y in MATLAB format.
Y is a tensor of shape 4 x 3 x 1
Y(:,:, 0) =
0.5488 0.4237 0.9637
0.7152 0.6459 0.3834
0.6028 0.4376 0.7917
0.5449 0.8918 0.5289
Z = Y.squeeze() # Squeeze out the singleton dimension.
matlab_print(Z,name='Z',format='7.4f') # Print tensor Z in MATLAB format.
Z is a tensor of shape 4 x 3
Z(:,:) =
0.5488 0.4237 0.9637
0.7152 0.6459 0.3834
0.6028 0.4376 0.7917
0.5449 0.8918 0.5289
Convert a tensor to a (multidimensional) array
Use pyttb.tensor.double()
to convert a tensor to a numpy array; this is identical to extracting the data
member.
np.random.seed(0)
X = ttb.tenrand((2, 5, 4)) # Create the data.
X.double() # Converts X to an array of doubles.
array([[[0.5488135 , 0.79172504, 0.97861834, 0.26455561],
[0.60276338, 0.56804456, 0.46147936, 0.45615033],
[0.4236548 , 0.07103606, 0.11827443, 0.0187898 ],
[0.43758721, 0.0202184 , 0.14335329, 0.61209572],
[0.96366276, 0.77815675, 0.52184832, 0.94374808]],
[[0.71518937, 0.52889492, 0.79915856, 0.77423369],
[0.54488318, 0.92559664, 0.78052918, 0.56843395],
[0.64589411, 0.0871293 , 0.63992102, 0.6176355 ],
[0.891773 , 0.83261985, 0.94466892, 0.616934 ],
[0.38344152, 0.87001215, 0.41466194, 0.6818203 ]]])
X.data # Same thing.
array([[[0.5488135 , 0.79172504, 0.97861834, 0.26455561],
[0.60276338, 0.56804456, 0.46147936, 0.45615033],
[0.4236548 , 0.07103606, 0.11827443, 0.0187898 ],
[0.43758721, 0.0202184 , 0.14335329, 0.61209572],
[0.96366276, 0.77815675, 0.52184832, 0.94374808]],
[[0.71518937, 0.52889492, 0.79915856, 0.77423369],
[0.54488318, 0.92559664, 0.78052918, 0.56843395],
[0.64589411, 0.0871293 , 0.63992102, 0.6176355 ],
[0.891773 , 0.83261985, 0.94466892, 0.616934 ],
[0.38344152, 0.87001215, 0.41466194, 0.6818203 ]]])
Use ndims
and shape
to get the shape of a tensor
X = ttb.tenrand((4,3,2)) # Create a 4 x 3 x 2 tensor of random numbers.
X.ndims # Number of dimensions (or ways).
3
X.shape # Tuple with the sizes of all dimensions.
(4, 3, 2)
X.shape[2] # Size of a single dimension.
2
Subscripted reference for a tensor
np.random.seed(0)
X = ttb.tenrand((3, 4, 2, 1)) # Create a 3x4x2x1 random tensor.
X[0, 0, 0, 0] # Extract a single element.
0.5488135039273248
It is possible to extract a subtensor that contains a single element. Observe that singleton dimensions are not dropped unless they are explicitly indexed with a scalar, as in the example above.
X[0, 0, 0, :] # Produces a tensor of order 1 and shape 1.
tensor of shape (1,) with order F
data[:] =
[0.5488135]
X[:, 0, 0, :] # Produces a tensor of shape 3x1.
tensor of shape (3, 1) with order F
data[:, :] =
[[0.5488135 ]
[0.71518937]
[0.60276338]]
Moreover, the subtensor is automatically renumbered/resized in the same way NumPy handles arrays, except that singleton dimensions are handled explicitly.
X[0:2, [1, 3], 0, :] # Produces a tensor of shape 2x2x1.
tensor of shape (2, 2, 1) with order F
data[:, :, 0] =
[[0.54488318 0.38344152]
[0.4236548 0.79172504]]
It’s also possible to extract a list of elements by passing in an array of subscripts or an array of linear indices.
subs = np.array([[0, 0, 0, 0], [2, 3, 1, 0]])
X[subs] # Extract 2 values by subscript.
array([0.5488135 , 0.78052918])
inds = np.array([0, 23])
X[inds] # Same thing with linear indices.
array([0.5488135 , 0.78052918])
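To see how a linear index relates to a subscript, recall that pyttb linearizes indices in Fortran (column-major) order; NumPy’s ravel_multi_index reproduces the mapping (a supplementary check, not part of the original example).
print(np.ravel_multi_index((2, 3, 1, 0), X.shape, order='F')) # 23, the linear index of subscript (2, 3, 1, 0).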
np.random.seed(0)
X = ttb.tenrand((10,)) # Create a random tensor.
X[0:5] # Extract a subtensor.
array([0.5488135 , 0.71518937, 0.60276338, 0.54488318, 0.4236548 ])
Subscripted assignment for a tensor
We can assign a single element, an entire subtensor, or a list of values for a tensor.
np.random.seed(0)
X = ttb.tenrand((3,4,2)) # Create some data.
X[0, 0, 0] = 0 # Replaces the [0,0,0] element.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
X is a tensor of shape 3 x 4 x 2
X(:,:, 0) =
0.0000 0.5449 0.4376 0.3834
0.7152 0.4237 0.8918 0.7917
0.6028 0.6459 0.9637 0.5289
X(:,:, 1) =
0.5680 0.0871 0.7782 0.7992
0.9256 0.0202 0.8700 0.4615
0.0710 0.8326 0.9786 0.7805
X[0:2, 0:2,0] = np.ones((2, 2)) # Replaces a subtensor.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
X is a tensor of shape 3 x 4 x 2
X(:,:, 0) =
1.0000 1.0000 0.4376 0.3834
1.0000 1.0000 0.8918 0.7917
0.6028 0.6459 0.9637 0.5289
X(:,:, 1) =
0.5680 0.0871 0.7782 0.7992
0.9256 0.0202 0.8700 0.4615
0.0710 0.8326 0.9786 0.7805
subs = np.array([[0, 0, 0], [0,0,1]])
X[subs] = [5, 7] # Replaces the (0,0,0) and (0,0,1) elements.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
X is a tensor of shape 3 x 4 x 2
X(:,:, 0) =
5.0000 1.0000 0.4376 0.3834
1.0000 1.0000 0.8918 0.7917
0.6028 0.6459 0.9637 0.5289
X(:,:, 1) =
7.0000 0.0871 0.7782 0.7992
0.9256 0.0202 0.8700 0.4615
0.0710 0.8326 0.9786 0.7805
X[[0, 12]] = [5, 7] # Same as above using linear indices.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
X is a tensor of shape 3 x 4 x 2
X(:,:, 0) =
5.0000 1.0000 0.4376 0.3834
1.0000 1.0000 0.8918 0.7917
0.6028 0.6459 0.9637 0.5289
X(:,:, 1) =
7.0000 0.0871 0.7782 0.7992
0.9256 0.0202 0.8700 0.4615
0.0710 0.8326 0.9786 0.7805
It is possible to grow the tensor automatically by assigning elements outside the original range of the tensor.
X[0,0,2] = 1 # Grows the shape of the tensor.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
X is a tensor of shape 3 x 4 x 3
X(:,:, 0) =
5.0000 1.0000 0.4376 0.3834
1.0000 1.0000 0.8918 0.7917
0.6028 0.6459 0.9637 0.5289
X(:,:, 1) =
7.0000 0.0871 0.7782 0.7992
0.9256 0.0202 0.8700 0.4615
0.0710 0.8326 0.9786 0.7805
X(:,:, 2) =
1.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
0.0000 0.0000 0.0000 0.0000
Using negative indexing for the last array index
np.random.seed(0)
X = ttb.tenrand((4,3,2)) # Create some data.
matlab_print(X,name='X',format='7.4f') # Print tensor X in MATLAB format.
X is a tensor of shape 4 x 3 x 2
X(:,:, 0) =
0.5488 0.4237 0.9637
0.7152 0.6459 0.3834
0.6028 0.4376 0.7917
0.5449 0.8918 0.5289
X(:,:, 1) =
0.5680 0.0202 0.9786
0.9256 0.8326 0.7992
0.0710 0.7782 0.4615
0.0871 0.8700 0.7805
f"Last value in array is {X[-1]:.4f}" # Same as X(3,2,1)
'Last value in array is 0.7805'
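As a quick check (added here), the explicit subscript mentioned in the comment above returns the same value as X[-1].
f"X[3,2,1] is {X[3, 2, 1]:.4f}" # Same element as X[-1].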
Extracting subscripts of nonzero elements of a tensor
Use pyttb.tensor.find()
to get nonzero elements and values from a tensor.
# Create a tensor that's about 33% zeros.
np.random.seed(5)
randint = lambda s: np.random.randint(0, 3, np.prod(s))
X = ttb.tensor.from_function(randint, (2, 2, 2)) # Create a tensor.
matlab_print(X,name='X',format='2.0f') # Print tensor X in MATLAB format.
X is a tensor of shape 2 x 2 x 2
X(:,:, 0) =
2 2
1 2
X(:,:, 1) =
0 0
1 0
S, V = X.find() # Find all the nonzero subscripts and values.
S # Nonzero subscripts
array([[0, 0, 0],
[1, 0, 0],
[0, 1, 0],
[1, 1, 0],
[1, 0, 1]])
V # Values
array([[2],
[1],
[2],
[2],
[1]])
larger_entries = X >= 2 # Find entries >= 2.
larger_subs, larger_vals = larger_entries.find() # Find subscripts of values >= 2.
larger_subs, larger_vals
(array([[0, 0, 0],
[0, 1, 0],
[1, 1, 0]]),
array([[ True],
[ True],
[ True]]))
V = X[larger_subs]
V
array([2, 2, 2])
Computing the Frobenius norm of a tensor
The method pyttb.tensor.norm()
computes the Frobenius norm of a tensor. This corresponds to the Euclidean norm of the vectorized tensor.
np.random.seed(0)
X = ttb.tenrand((2,3,3))
X.norm()
2.631397990238147
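As a consistency check (not part of the original example), the same value should be obtained from NumPy’s Euclidean norm of the flattened data.
print(np.linalg.norm(X.data.ravel())) # Should match X.norm() above.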
Reshaping a tensor
The method pyttb.tensor.reshape() reshapes a tensor to a given shape. The total number of elements in the tensor cannot change.
Currently, this method creates a copy of the tensor; this needs to be fixed.
np.random.seed(0)
randint = lambda s: np.random.randint(0, 10, np.prod(s))
X = ttb.tensor.from_function(randint, (3, 2, 3))
matlab_print(X,name='X',format='2.0f')
Y = X.reshape((3,3,2))
matlab_print(Y,name='Y',format='2.0f')
X is a tensor of shape 3 x 2 x 3
X(:,:, 0) =
5 3
0 7
3 9
X(:,:, 1) =
3 4
5 7
2 6
X(:,:, 2) =
8 6
8 7
1 7
Y is a tensor of shape 3 x 3 x 2
Y(:,:, 0) =
5 3 3
0 7 5
3 9 2
Y(:,:, 1) =
4 8 6
7 8 7
6 1 7
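Because the total number of elements cannot change, requesting a shape with a different element count should fail. Here is a minimal sketch (not from the original tutorial; the exact exception type is an assumption, so a broad except is used).
try:
    X.reshape((4, 4)) # 16 elements, but X has 18.
except Exception as e:
    print(f"reshape rejected the new shape: {e}")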
Basic operations (plus, minus, and, or, etc.) on a tensor
Tensors support the plus, minus, times, divide, power, equals, and not-equals operators. These operators can be applied with another tensor or with a scalar (with the exception of the equality operators, which only take tensors). All mathematical operators are elementwise operations.
np.random.seed(0)
A = ttb.tensor(np.floor(3 * np.random.rand(2, 2, 3))) # Generate some data.
B = ttb.tensor(np.floor(3 * np.random.rand(2, 2, 3)))
A.logical_and(B) # Calls and.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 0.]
[1. 1.]]
data[:, :, 1] =
[[1. 0.]
[1. 1.]]
data[:, :, 2] =
[[0. 1.]
[1. 1.]]
A.logical_or(B)
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[1. 1.]
[1. 1.]]
data[:, :, 2] =
[[1. 1.]
[1. 1.]]
A.logical_xor(B)
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[0. 1.]
[0. 0.]]
data[:, :, 1] =
[[0. 1.]
[0. 0.]]
data[:, :, 2] =
[[1. 0.]
[0. 0.]]
A == B # Calls eq.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ True False]
[False False]]
data[:, :, 1] =
[[ True False]
[ True False]]
data[:, :, 2] =
[[False False]
[ True False]]
A != B # Calls neq.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[False True]
[ True True]]
data[:, :, 1] =
[[False True]
[False True]]
data[:, :, 2] =
[[ True True]
[False True]]
A > B # Calls gt.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[False True]
[False False]]
data[:, :, 1] =
[[False True]
[False True]]
data[:, :, 2] =
[[ True False]
[False False]]
A >= B # Calls ge.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ True True]
[False False]]
data[:, :, 1] =
[[ True True]
[ True True]]
data[:, :, 2] =
[[ True False]
[ True False]]
A < B # Calls lt.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[False False]
[ True True]]
data[:, :, 1] =
[[False False]
[False False]]
data[:, :, 2] =
[[False True]
[False True]]
A <= B # Calls le.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ True False]
[ True True]]
data[:, :, 1] =
[[ True False]
[ True False]]
data[:, :, 2] =
[[False True]
[ True True]]
A.logical_not() # Calls not.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[0. 0.]
[0. 0.]]
data[:, :, 1] =
[[0. 0.]
[0. 0.]]
data[:, :, 2] =
[[0. 0.]
[0. 0.]]
+A # Calls uplus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[2. 1.]
[2. 2.]]
data[:, :, 2] =
[[1. 1.]
[2. 1.]]
-A # Calls uminus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[-1. -1.]
[-1. -1.]]
data[:, :, 1] =
[[-2. -1.]
[-2. -2.]]
data[:, :, 2] =
[[-1. -1.]
[-2. -1.]]
A + B # Calls plus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[2. 1.]
[3. 3.]]
data[:, :, 1] =
[[4. 1.]
[4. 3.]]
data[:, :, 2] =
[[1. 3.]
[4. 3.]]
A - B # Calls minus.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[ 0. 1.]
[-1. -1.]]
data[:, :, 1] =
[[0. 1.]
[0. 1.]]
data[:, :, 2] =
[[ 1. -1.]
[ 0. -1.]]
A * B # Calls times.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 0.]
[2. 2.]]
data[:, :, 1] =
[[4. 0.]
[4. 2.]]
data[:, :, 2] =
[[0. 2.]
[4. 2.]]
5 * A # Calls mtimes.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[5. 5.]
[5. 5.]]
data[:, :, 1] =
[[10. 5.]
[10. 10.]]
data[:, :, 2] =
[[ 5. 5.]
[10. 5.]]
A**B # Calls power.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[4. 1.]
[4. 2.]]
data[:, :, 2] =
[[1. 1.]
[4. 1.]]
A**2 # Calls power.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[4. 1.]
[4. 4.]]
data[:, :, 2] =
[[1. 1.]
[4. 1.]]
A / B # Calls ldivide.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. inf]
[0.5 0.5]]
data[:, :, 1] =
[[ 1. inf]
[ 1. 2.]]
data[:, :, 2] =
[[inf 0.5]
[1. 0.5]]
2 / A # Calls rdivide.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[2. 2.]
[2. 2.]]
data[:, :, 1] =
[[1. 2.]
[1. 1.]]
data[:, :, 2] =
[[2. 2.]
[1. 2.]]
Using tenfun
for elementwise operations on one or more tensors
The method tenfun applies a specified function elementwise to one or more tensors. This can be used for any function that is not predefined for tensors.
np.random.seed(0)
A = ttb.tensor(np.floor(3 * np.random.rand(2, 2, 3), order="F")) # Generate some data.
A.tenfun(lambda x: x + 1) # Increment every element of A by one.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[2. 2.]
[2. 2.]]
data[:, :, 1] =
[[3. 2.]
[3. 3.]]
data[:, :, 2] =
[[2. 2.]
[3. 2.]]
# Wrap np.maximum in a function with a function signature that Python's inspect.signature can handle.
def max_elements(a, b):
    return np.maximum(a, b)
A.tenfun(max_elements, B) # Max of A and B, elementwise.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[2. 2.]]
data[:, :, 1] =
[[2. 1.]
[2. 2.]]
data[:, :, 2] =
[[1. 2.]
[2. 2.]]
np.random.seed(0)
C = ttb.tensor(
np.floor(5 * np.random.rand(2, 2, 3), order="F")
) # Create another tensor.
def elementwise_mean(X):
    # finding mean for the columns
    return np.floor(np.mean(X, axis=0), order="F")
A.tenfun(elementwise_mean, B, C) # Elementwise means for A, B, and C.
tensor of shape (2, 2, 3) with order F
data[:, :, 0] =
[[1. 1.]
[1. 1.]]
data[:, :, 1] =
[[2. 1.]
[2. 2.]]
data[:, :, 2] =
[[1. 2.]
[2. 1.]]
Use permute
to reorder the modes of a tensor
X = ttb.tensor(np.arange(1, 25), shape=(2, 3, 4))
print(f"X is a {X}")
X is a tensor of shape (2, 3, 4) with order F
data[:, :, 0] =
[[1 3 5]
[2 4 6]]
data[:, :, 1] =
[[ 7 9 11]
[ 8 10 12]]
data[:, :, 2] =
[[13 15 17]
[14 16 18]]
data[:, :, 3] =
[[19 21 23]
[20 22 24]]
X.permute(np.array((2, 1, 0))) # Reverse the modes.
tensor of shape (4, 3, 2) with order F
data[:, :, 0] =
[[ 1 3 5]
[ 7 9 11]
[13 15 17]
[19 21 23]]
data[:, :, 1] =
[[ 2 4 6]
[ 8 10 12]
[14 16 18]
[20 22 24]]
Permuting a 1-dimensional tensor works correctly.
X = ttb.tensor(np.arange(1, 5), (4,))
X.permute(np.array((0,))) # The identity permutation of a 1-way tensor.
tensor of shape (4,) with order F
data[:] =
[1 2 3 4]
Symmetrizing and checking for symmetry in a tensor
A tensor can be symmetrized in a collection of modes with the command symmetrize
. The new, symmetric tensor is formed by averaging over all elements in the tensor which are required to be equal.
np.random.seed(0)
W = ttb.tensor(np.random.rand(4, 4, 4)) # Create some data.
Y = W.symmetrize() # Symmetrize over all modes.
An optional argument grps
can also be passed to symmetrize
which specifies an array of modes with respect to which the tensor should be symmetrized.
np.random.seed(0)
X = ttb.tensor(np.random.rand(3, 3, 2))
Z = X.symmetrize(np.array((0, 1)))
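To illustrate the averaging described above, the symmetrization over modes 0 and 1 should equal the average of X and its transpose in those two modes (a supplementary check, not part of the original tutorial).
Xt = X.permute(np.array((1, 0, 2))) # Swap modes 0 and 1.
print(np.allclose(Z.data, 0.5 * (X.data + Xt.data))) # Expect True.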
Additionally, one can check for symmetry in tensors with the issymmetric function. Similar to symmetrize, a collection of modes can be passed as an argument.
Y.issymmetric()
True
Z.issymmetric(np.array((1, 2)))
False
Displaying a tensor
print(X)
tensor of shape (3, 3, 2) with order F
data[:, :, 0] =
[[0.5488135 0.60276338 0.4236548 ]
[0.43758721 0.96366276 0.79172504]
[0.56804456 0.07103606 0.0202184 ]]
data[:, :, 1] =
[[0.71518937 0.54488318 0.64589411]
[0.891773 0.38344152 0.52889492]
[0.92559664 0.0871293 0.83261985]]
X # The same display when X is evaluated in the Python interpreter.
tensor of shape (3, 3, 2) with order F
data[:, :, 0] =
[[0.5488135 0.60276338 0.4236548 ]
[0.43758721 0.96366276 0.79172504]
[0.56804456 0.07103606 0.0202184 ]]
data[:, :, 1] =
[[0.71518937 0.54488318 0.64589411]
[0.891773 0.38344152 0.52889492]
[0.92559664 0.0871293 0.83261985]]