Universal Robots and Scale AI launch the UR AI Trainer

Revealed at GTC 2026, the leader-follower imitation learning platform captures force, motion, and visual data directly on production hardware, closing the gap between AI research labs and factory floors.


Universal Robots has launched the UR AI Trainer, a hardware-software system built in collaboration with Scale AI that allows operators to generate high-fidelity robot training data directly on the same cobots they deploy in production.

Announced at NVIDIA’s GTC 2026 conference in San Jose on 16 March, the system is designed to close what the robotics industry calls the lab-to-factory gap: the practical difficulty of moving AI models trained in controlled research settings into real-world manufacturing environments.

The core mechanism is a leader-follower setup. A human operator physically guides a leader robot through a task (packaging a smartphone, say) while a follower robot mirrors the motion in real time. Throughout each demonstration, the system simultaneously captures motion trajectories, force feedback data, and visual information, producing the structured multimodal datasets needed to train Vision-Language-Action models.
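To make the data-capture idea concrete, here is a minimal sketch of the kind of synchronized record such a rig might log per timestep. The field names, rates, and schema are illustrative assumptions, not UR's or Scale AI's actual format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Frame:
    t: float                      # timestamp in seconds
    joint_positions: List[float]  # leader arm joint angles (rad), 6-DOF
    wrench: List[float]           # wrist force/torque: [Fx, Fy, Fz, Tx, Ty, Tz]
    image_ref: str                # pointer to the synchronized camera frame

@dataclass
class Episode:
    task: str
    frames: List[Frame] = field(default_factory=list)

    def add(self, frame: Frame) -> None:
        # Enforce monotonically increasing timestamps so trajectories stay ordered.
        if self.frames and frame.t <= self.frames[-1].t:
            raise ValueError("frames must arrive in time order")
        self.frames.append(frame)

    def duration(self) -> float:
        return self.frames[-1].t - self.frames[0].t if self.frames else 0.0

# Recording a two-frame demonstration:
ep = Episode(task="smartphone_packaging")
ep.add(Frame(0.00, [0.0] * 6, [0.0] * 6, "cam0/000000.png"))
ep.add(Frame(0.02, [0.01] * 6, [0.0, 0.0, 1.5, 0.0, 0.0, 0.0], "cam0/000001.png"))
```

The point of bundling motion, force, and vision per timestep is that a Vision-Language-Action model can then learn correlations across all three modalities from a single aligned stream.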

The key differentiator is that this happens on the same industrial cobots UR sells into production: training data collected on a UR3e or UR7e in a controlled AI training cell can be used to train models that then run on identical hardware in a factory.

“Our customers, ranging from large enterprises to AI research labs, are no longer just asking for AI features. They need a way to collect high-fidelity, synchronized robot and vision data to train AI models on the same robots they intend to deploy. Our AI Trainer is the industry’s first direct lab-to-factory solution for AI model training.”  – Anders Beck, VP of AI Robotics Products, Universal Robots

Why force feedback changes the physics of robot training

Most robot training data today is collected on research platforms using vision alone. That approach works for tasks where position is sufficient, but it fails for anything involving delicate contact: screwing, pressing, inserting, or any manipulation where the robot must respond to resistance.

Universal Robots argues that its Direct Torque Control and force feedback capabilities give the AI Trainer a physical fidelity advantage: the robot can learn not only what to do visually but also how it should feel to do it correctly.

This matters particularly for the category of tasks the robotics research community calls contact-rich manipulation: assembly operations where parts must fit together with precision and the robot must adjust its grip in response to what it encounters. Those tasks have historically been among the hardest to automate reliably, and they represent a significant share of the manufacturing operations that remain human-dependent.
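The difference force sensing makes can be shown with a toy compliance rule. This is purely illustrative, not UR's Direct Torque Control: during an insertion, the commanded downward step shrinks as measured resistance grows, so the robot yields instead of forcing the part.

```python
def compliant_step(commanded_step_mm: float, measured_force_n: float,
                   force_limit_n: float = 10.0) -> float:
    """Scale a commanded insertion step by the remaining force budget.

    A vision-only controller would return commanded_step_mm unchanged.
    With force feedback, the step shrinks linearly to zero as the measured
    force approaches the limit, and reverses slightly beyond it.
    """
    if measured_force_n >= force_limit_n:
        return -0.1  # back off a little once the force limit is exceeded
    budget = 1.0 - measured_force_n / force_limit_n
    return commanded_step_mm * budget

# Free space: full step. Near contact: reduced step. Over limit: retreat.
print(compliant_step(1.0, 0.0))   # 1.0
print(compliant_step(1.0, 5.0))   # 0.5
print(compliant_step(1.0, 12.0))  # -0.1
```

A learned policy trained on force-annotated demonstrations can internalize this kind of yielding behavior directly from data, rather than relying on a hand-tuned rule like the one above.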

Scale AI builds the data flywheel

The UR AI Trainer deploys on UR’s AI Accelerator platform and integrates Scale AI’s software stack to capture, structure, and manage the training data generated during demonstrations. The collaboration is explicitly framed as a flywheel: operators collect demonstration data, models are trained on that data, deployed robots improve performance, and the improved performance feeds back into the next round of training.

“Universal Robots is a leader in industrial robotics, and its global footprint offers the ideal foundation for data capture and AI deployment. Together, we’ve created an integrated robotics data flywheel, allowing customers to train, deploy, and improve their AI models faster than ever before.”  – Ben Levin, General Manager, Physical AI, Scale AI

As part of the collaboration, Universal Robots and Scale AI will release a large-scale industrial dataset collected on UR robots later in 2026. The GTC demo captures this pipeline in miniature: visitors at UR’s booth can guide two UR3e leader robots through a smartphone packaging task, with the demonstration data recorded in real time on Scale’s stack and immediately replayable on the AI Trainer.

A parallel virtual demo, built in NVIDIA Omniverse using Isaac Sim, shows the same task being trained synthetically using two Haply Inverse3 haptic devices, demonstrating the simulation-to-real pathway alongside the physical data collection.

Generalist AI’s first public demo

Accompanying the AI Trainer launch is the first public demonstration of Generalist AI’s embodied foundation models. Generalist was founded by Pete Florence, a former Senior Research Scientist at Google DeepMind whose prior work includes co-authorship on RT-2 (Robotic Transformer 2) and PaLM-E, alongside Andy Zeng and Andy Barry, both former colleagues at DeepMind and MIT.

The startup, which counts NVIDIA’s venture arm NVentures among its investors, emerged from stealth at GTC 2025 and has since been developing what it describes as embodied foundation models for general-purpose robot dexterity.

At GTC 2026, two UR7e robots running Generalist’s model autonomously execute the same smartphone packaging task that the AI Trainer demos use for human-guided data collection. The demonstration is designed to show the end state that the training pipeline is building towards: robots that can complete contact-rich manipulation tasks reliably and without pre-programmed trajectories.

“Generalist is building embodied foundation models that deliver industry-leading dexterity and reliability. This demonstration on Universal Robots’ trusted industrial platform shows how physical commonsense can be translated into real-world capability, paving the way for deployment across industries at scale.”  – Pete Florence, co-founder and CEO, Generalist AI

UR’s industrial footprint as the training advantage

Universal Robots frames the industrial scale of its installed base, over 100,000 cobots deployed worldwide, as a structural advantage in the race to build physical AI. The argument is that the quality of an AI model depends heavily on the quality and quantity of the training data, and that UR’s fleet of production robots represents the largest potential source of real-world manipulation data in the industry.

The AI Trainer is the mechanism for unlocking that data, and NVIDIA's physical AI ecosystem surrounds the launch: Universal Robots is also exploring use of the NVIDIA Physical AI Data Factory Blueprint to automate synthetic data generation, complementing the physical demonstration data.

“The shift toward Physical AI requires a fundamental move from rigid, pre-programmed automation to generalist robots that can perceive, reason, and learn through human-like interaction. By leveraging the NVIDIA Isaac simulation frameworks, Universal Robots is building a scalable engine for high-fidelity data capture and generation, providing the essential infrastructure to train the next generation of autonomous systems at scale.”  – Amit Goel, Head of Robotics and Edge AI Ecosystem, NVIDIA

Universal Robots is a subsidiary of Teradyne Robotics, itself a division of Teradyne (NASDAQ: TER). The GTC 2026 announcement comes at a moment when physical AI, the application of AI techniques to real-world robotic manipulation, has attracted significant attention and investment, driven partly by the success of large language models and the argument that similar scaling approaches can work for robot learning given sufficient high-quality data.
