SPATIOTEMPORAL


MOTION INTELLIGENCE

The physical world unfolds over time. People, vehicles, robots, and environments are defined by how they move, not by static snapshots.

We are building a Large SpatioTemporal Model that learns movement directly. The model captures how behaviour evolves and how likely futures branch from the present.

This work sits between perception and action. It complements vision systems and decision layers by focusing on change over time and early signals of intent.

COMPRESSING SPACE & TIME

LSTM-01 is the world's first Large SpatioTemporal Model.

Rather than operating on pixels or fixed trajectories, the model compresses short windows of kinematic change into motion tokens to form a vocabulary of movement.

Token sequences describe how agents accelerate, slow, drift, hesitate, or interact. Intent emerges from how motion unfolds over time.

This structure supports real-time inference, robustness to noise, and generalisation across sensors and environments.
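As a rough illustration only (the details of LSTM-01's tokenizer are not public), a motion tokenizer along these lines might reduce each short window of a trajectory to kinematic features and snap them to the nearest entry in a small codebook; the codebook names and features below are invented for the sketch:

```python
import math

# Hypothetical sketch: compress a 2-D trajectory into "motion tokens".
# Each short window of positions is reduced to kinematic features
# (mean speed, net speed change), then mapped to the nearest entry in
# a tiny hand-made codebook -- a toy "vocabulary of movement".
CODEBOOK = {
    "CRUISE": (1.0, 0.0),   # steady speed
    "SURGE":  (1.5, 0.5),   # accelerating
    "BRAKE":  (0.5, -0.5),  # slowing down
    "STOP":   (0.0, 0.0),   # stationary
}

def window_features(points, dt=1.0):
    """Mean speed and net speed change over a window of (x, y) points."""
    speeds = [
        math.dist(points[i], points[i + 1]) / dt
        for i in range(len(points) - 1)
    ]
    return sum(speeds) / len(speeds), speeds[-1] - speeds[0]

def tokenize(trajectory, window=4):
    """Slide a fixed window over the trajectory; emit the nearest token."""
    tokens = []
    for start in range(0, len(trajectory) - window + 1, window):
        feats = window_features(trajectory[start:start + window])
        tokens.append(min(CODEBOOK, key=lambda t: math.dist(CODEBOOK[t], feats)))
    return tokens

# A car that cruises, then brakes to a stop.
xs = [0, 1, 2, 3, 4, 4.5, 4.8, 4.9, 4.9, 4.9, 4.9, 4.9]
trajectory = [(x, 0.0) for x in xs]
print(tokenize(trajectory))  # ['CRUISE', 'BRAKE', 'STOP']
```

In a real system the codebook would be learned from data rather than hand-written, but the output has the property the text describes: a discrete sequence from which patterns like braking or hesitation can be read over time.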

APPLIED MOTION

Motion intelligence matters wherever agents share space.

Humans infer intent from movement instinctively. Machines generally do not. This gap becomes critical in environments where safety and coordination depend on subtle temporal cues.

Relevant domains include shared-space transportation, driver assistance, mobile robotics, warehouse and industrial automation, and human–robot interaction.

In these settings, risk often appears before an explicit action occurs. LSTM-01 provides a predictive layer that helps systems recognise and respond to those early signals.

MEET CAM

Cam is the first deployment of LSTM-01, a voice-first dash companion that notices what others miss: merging traffic, sudden stops, distracted drivers.

Privacy is central. Video is processed on-device, and everything beyond observations like "the red car is moving into your lane" is discarded. The system focuses on how things move, not who or where they are.

It runs on your phone, so no expensive hardware is needed. (BYO suction cup, though.)

Think of Cam as your AI backseat driver – a second set of eyes on the road ahead.

STAY INFORMED

We are building a foundation for motion understanding that supports anticipation in real-world systems. Much of the work ahead involves evaluation, multi-agent reasoning, and integration.

If you are working in areas where motion and interaction matter, or are interested in the research direction, get in touch.
