Improved performance from existing einops repo. Tensor operations made 150 percent faster.
This implementation is inspired from einops
but takes it further with optimized performance, smart caching, and user-friendly error handling and designed to simplify tensor manipulation with speed.
- Pattern-Based Flexibility: Transform tensors using readable patterns like
'h w c -> (h w) c'
or'b c h w -> b (h w) c'
. - Broadcasting Magic: Add new dimensions or expand existing ones without breaking a sweat.
- Performance Boost: Powered by
lru_cache
, it caches operations for fast repeated calls. - Helpful Debugging: Custom
EinopsError
exceptions provide clear, actionable feedback when something goes wrong. - Unique Design: Improved performance and usability advantages.
The rearrange
function is a tensor-transforming wizard that operates in four seamless steps:
-
Pattern Parsing
- Takes a pattern like
'h w c -> w h c'
and splits it into input axes (h
,w
,c
) and output axes (w
,h
,c
). - Understands grouped axes like
(h w)
for advanced reshaping.
- Takes a pattern like
-
Custom Axis Size
- Calculates the size of each axis based on the tensorโs shape.
- Supports custom sizes via an optional
axes_lengths
argument (e.g.,h=32, w=32
).
-
Tensor Transformation
- Splits: Breaks down grouped axes (e.g.,
(h w)
into a single dimension). - Permutes: Reorders axes to match the output pattern.
- Broadcasts: Expands or adds dimensions as needed.
- Reshapes: Delivers the final tensor in the desired shape.
- Splits: Breaks down grouped axes (e.g.,
-
Caching
- Stores parsed patterns and operations in memory, so repeated calls are fast.