A decoder-only Transformer whose attention operates on geodesic distances in a learned Riemannian manifold with gravitational curvature and variable per-token dimensionality. Based on Directional Relational Manifolds (DRM).
A physics-aware transformer architecture that replaces standard query-key-value attention with Newton's law of gravitation, producing a minimal yet powerful model optimised for resource-constrained environments, edge deployment, and VictorOS cognitive-runtime integration.
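The gravitational attention idea can be sketched as follows: each token gets a positive scalar "mass", and pairwise attention scores follow an inverse-square law over embedding-space distances instead of query-key dot products. This is a minimal illustrative sketch, not the repository's implementation; the mass projection, the softplus choice, the epsilon term, and the plain row normalisation are all assumptions made for the example.

```python
import numpy as np

def gravitational_attention(x, w_mass, eps=1e-6):
    """Sketch of Newton-style attention (assumed form, not the repo's code).

    x: (T, D) token embeddings; w_mass: (D,) projection to scalar masses.
    Scores follow m_i * m_j / (d_ij^2 + eps), causally masked for a
    decoder-only model, then row-normalised to sum to 1.
    """
    T, _ = x.shape
    m = np.log1p(np.exp(x @ w_mass))                 # softplus -> positive "masses"
    diff = x[:, None, :] - x[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)                  # squared pairwise distances
    scores = (m[:, None] * m[None, :]) / (d2 + eps)  # inverse-square attraction
    mask = np.tril(np.ones((T, T), dtype=bool))      # causal: attend to past only
    scores = np.where(mask, scores, 0.0)
    weights = scores / scores.sum(axis=-1, keepdims=True)
    return weights @ x                               # aggregate token values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = gravitational_attention(x, rng.normal(size=8))
print(out.shape)  # (4, 8)
```

Note the epsilon term: on the diagonal the self-distance is zero, so without it the self-attraction score would diverge, mirroring the softening length used in N-body gravity simulations.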