Delving into PyTorch's torch.Tensor.aminmax: Finding Maximum and Minimum Values
Purpose
- Computes the minimum and maximum values along a specified dimension (or for the entire tensor if no dimension is given).
Syntax
torch.aminmax(input, *, dim=None, keepdim=False)
Parameters
- input (torch.Tensor): The input tensor for which you want to find the minimum and maximum values.
- dim (int, optional): The dimension along which to compute the minimum and maximum. Defaults to None, in which case the minimum and maximum are computed over all elements of the tensor.
- keepdim (bool, optional): If True, the output tensors min and max retain the reduced dimension dim with size 1, so they have the same number of dimensions as the input tensor. Defaults to False, in which case the output tensors have one fewer dimension than the input tensor.
Returns
- A named tuple (min, max) containing two tensors:
  - min (torch.Tensor): A tensor containing the minimum values along the specified dimension (or for the entire tensor if dim is None).
  - max (torch.Tensor): A tensor containing the maximum values along the specified dimension (or for the entire tensor if dim is None).
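Because the result is a named tuple, the two tensors can also be accessed by field name instead of unpacking:
import torch
result = torch.aminmax(torch.tensor([[1, 5, 3], [7, 2, 4]]))
print(result.min, result.max) # Output: tensor(1) tensor(7)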
Example
import torch
tensor = torch.tensor([[1, 5, 3], [7, 2, 4]])
# Find min and max over all elements (dim=None)
min_all, max_all = torch.aminmax(tensor)
print(min_all, max_all) # Output: tensor(1) tensor(7)
# Find min and max along the first dimension (dim=0)
min_dim0, max_dim0 = torch.aminmax(tensor, dim=0)
print(min_dim0, max_dim0) # Output: tensor([1, 2, 3]) tensor([7, 5, 4])
# Find min and max along the second dimension (dim=1), keeping dimensions
min_dim1_keepdim, max_dim1_keepdim = torch.aminmax(tensor, dim=1, keepdim=True)
print(min_dim1_keepdim.shape, max_dim1_keepdim.shape) # Output: torch.Size([2, 1]), torch.Size([2, 1])
print(min_dim1_keepdim, max_dim1_keepdim)
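# Output: tensor([[1], [2]]) tensor([[5], [7]])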
Key Points
- NaN values (Not a Number) are propagated to the output if at least one value in the input tensor is NaN (see the short sketch after this list).
- If dim refers to a dimension of size 0, a RuntimeError will be raised.
- aminmax is useful for various tasks like finding outliers, scaling data, and analyzing distributions.
- For complex data types (e.g., torch.complex64), aminmax might not be supported, so you might need to use custom functions or work with the real and imaginary parts separately.
- torch.aminmax is generally faster than calling torch.min and torch.max sequentially, especially for higher-dimensional tensors, because both values are computed in a single pass over the data.
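A quick illustration of the NaN behavior (a minimal sketch; the exact printed representation may vary across PyTorch versions):
import torch
t = torch.tensor([1.0, float('nan'), 3.0])
print(torch.aminmax(t)) # Both min and max come back as nan because NaN propagates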
Finding Minimum and Maximum Values Within a Specific Range
import torch
# Sample data
data = torch.randn(100)
# Define a threshold range
low_threshold = 0.2
high_threshold = 0.8
# Filter data within the range
filtered_data = data[(data >= low_threshold) & (data <= high_threshold)]
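# Note: if no elements fall within the range, filtered_data is empty and aminmax raises a RuntimeError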
# Find min and max within the filtered range
min_filtered, max_filtered = torch.aminmax(filtered_data)
print("Minimum value within range:", min_filtered)
print("Maximum value within range:", max_filtered)
Normalizing Data Using Min-Max Scaling
import torch
# Sample data
data = torch.tensor([10, -5, 2, 8])
# Find min and max values
min_val, max_val = torch.aminmax(data)
# Normalize data (scale to range 0-1)
normalized_data = (data - min_val) / (max_val - min_val)
print("Normalized data:", normalized_data)
Finding Minimum and Maximum Indices
import torch
# Sample data
data = torch.tensor([3, 1, 5, 2])
# Find min and max values
min_val, max_val = torch.aminmax(data)
# Find indices of min and max values
min_idx = (data == min_val).nonzero(as_tuple=True)[0][0]
max_idx = (data == max_val).nonzero(as_tuple=True)[0][0]
print("Index of minimum value:", min_idx)
print("Index of maximum value:", max_idx)
Per-Row and Per-Column Minimum and Maximum with keepdim
import torch
# Sample data
data = torch.arange(12).reshape(3, 4)
# Find min and max along the columns (dim=1), keeping the reduced dimension
min_rows, max_rows = torch.aminmax(data, dim=1, keepdim=True)
print(min_rows.shape, max_rows.shape) # Output: torch.Size([3, 1]) torch.Size([3, 1])
# Find min and max along the rows (dim=0), keeping the reduced dimension
min_cols, max_cols = torch.aminmax(data, dim=0, keepdim=True)
print("Minimum values for each row:", min_rows.squeeze(dim=1))
print("Maximum values for each column:", max_cols.squeeze(dim=0))
Using torch.min and torch.max sequentially
import torch
tensor = torch.tensor([[1, 5, 3], [7, 2, 4]])
# Find min and max over all elements (dim=None)
min_all = torch.min(tensor)
max_all = torch.max(tensor)
print(min_all, max_all)
# Find min and max along the first dimension (dim=0)
min_dim0 = torch.min(tensor, dim=0)[0] # torch.min with dim returns (values, indices); [0] selects the values
max_dim0 = torch.max(tensor, dim=0)[0]
print(min_dim0, max_dim0)
- This approach requires separate calls to torch.min and torch.max, which can be less performant for large tensors.
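To check this on your own hardware, here is a rough timing sketch using time.perf_counter (numbers will vary with machine and tensor size):
import time
import torch
x = torch.randn(10_000_000)
start = time.perf_counter()
torch.aminmax(x)
print("aminmax:", time.perf_counter() - start)
start = time.perf_counter()
torch.min(x)
torch.max(x)
print("min + max:", time.perf_counter() - start)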
Using Python built-ins or a loop (for basic cases)
import torch
tensor = torch.tensor([[1, 5, 3], [7, 2, 4]])
# Find min and max over all elements using Python's built-in min/max
min_all = min(tensor.flatten())
max_all = max(tensor.flatten())
print(min_all, max_all)
# Find min and max along the first dimension (dim=0) using a loop
min_dim0 = tensor[0].clone() # Start from the first row
max_dim0 = tensor[0].clone()
for row in tensor[1:]:
    min_dim0 = torch.minimum(min_dim0, row) # Element-wise minimum across rows
    max_dim0 = torch.maximum(max_dim0, row)
print(min_dim0, max_dim0) # Output: tensor([1, 2, 3]) tensor([7, 5, 4])
- This method is suitable for small tensors or simpler use cases but can be cumbersome and slow for larger datasets.
Custom functions (for complex scenarios)
If you need to handle edge cases or perform specific operations along with finding min/max, you can create a custom function:
import torch
def custom_aminmax(tensor, dim=None):
    # Example of custom logic: ignore NaNs instead of propagating them
    # (torch.aminmax itself propagates NaNs; this is just an illustration)
    if dim is None:
        flat = tensor.flatten()
        flat = flat[~torch.isnan(flat)] # Drop NaNs before reducing
        return flat.min(), flat.max()
    # Replace NaNs with +/-inf so they can never win the reduction
    min_val = torch.nan_to_num(tensor, nan=float('inf')).amin(dim=dim)
    max_val = torch.nan_to_num(tensor, nan=float('-inf')).amax(dim=dim)
    return min_val, max_val
tensor = torch.tensor([[1.0, 5.0, float('nan')], [7.0, 2.0, 4.0]])
min_all, max_all = custom_aminmax(tensor)
print(min_all, max_all) # Output: tensor(1.) tensor(7.)
- This approach offers flexibility but requires writing and maintaining custom code.
- Use alternative methods when aminmax doesn't suit your specific needs or for educational purposes.
- torch.Tensor.aminmax is generally the recommended approach for efficiency and clarity.