Google has released a new robotics model, RT-1, a Transformer-based model similar in spirit to the GPT models developed by OpenAI. The new model was designed with Google’s other robotics programs, including its driverless car program, in mind. RT-1 is a step toward generative AI models in the field of robotics: in real-world trials, it can execute over 700 instructions with a 97% success rate.
Recent advances in machine learning (ML) research, such as computer vision and natural language processing, have been enabled by a common approach that pairs large, diverse datasets with expressive models. Although there have been various attempts to apply this approach to robotics, robots so far have not benefited from highly capable models as much as other subfields.
The architecture of RT-1 is straightforward: the model encodes a written command and a set of camera images as tokens using a pre-trained FiLM EfficientNet, compresses them with TokenLearner, and passes the resulting sequence to a Transformer, which produces action tokens.
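To make the token flow concrete, here is a minimal shape-level sketch of the compression step. The hyperparameters (6 history images, 81 vision tokens per image, 8 compressed tokens) and the stand-in random weights are assumptions for illustration, not the paper's exact values or implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def token_learner(x, w):
    """Pool n_tokens down to num_out tokens via softmax attention maps.
    x: (n_tokens, dim) vision tokens; w: (dim, num_out) stand-in for
    learned TokenLearner weights."""
    attn = x @ w                                          # (n_tokens, num_out)
    attn = np.exp(attn - attn.max(0)) / np.exp(attn - attn.max(0)).sum(0)
    return attn.T @ x                                     # (num_out, dim)

history, n_tokens, dim, num_out = 6, 81, 512, 8
w = rng.normal(size=(dim, num_out))

# A FiLM EfficientNet would emit n_tokens vision tokens per image,
# already conditioned on the language instruction; random placeholders here.
images = rng.normal(size=(history, n_tokens, dim))

compressed = np.stack([token_learner(img, w) for img in images])  # (6, 8, 512)
seq = compressed.reshape(-1, dim)  # flat sequence fed to the Transformer
print(seq.shape)  # (48, 512)
```

Compressing 81 tokens per image down to 8 is what lets the downstream Transformer attend over several history frames at once without the sequence length blowing up.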
Developers gathered a sizable, varied dataset of robot trajectories in order to build a system that could generalize to new tasks and remain robust to various distractors and backgrounds. To gather 130k episodes over 17 months, they deployed 13 EDR robot manipulators, each with a 7-degree-of-freedom arm, a two-finger gripper, and a mobile base. The researchers collected human demonstrations through remote teleoperation and annotated each episode with a text description of the instruction the robot had just carried out. Picking and placing objects, opening and closing drawers, getting objects into and out of drawers, positioning elongated objects upright, knocking over objects, pulling napkins, and opening jars are among the high-level skills included in the dataset.
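A single annotated teleoperation episode in a dataset like this might be stored along the following lines. The field names and values below are hypothetical, chosen only to illustrate the pieces the text mentions (text instruction, 7-DoF arm, two-finger gripper, mobile base), and are not the dataset's actual schema.

```python
# Hypothetical schema for one teleoperated episode; names are illustrative.
episode = {
    "instruction": "pick up the apple and place it in the top drawer",
    "robot_id": 7,                                   # one of the 13 manipulators
    "steps": [
        {
            "image": "frame_0000.png",               # camera observation
            "arm": [0.1, -0.3, 0.5, 0.0, 0.0, 0.0, 1.0],  # 7-DoF arm command
            "gripper": "open",                       # two-finger gripper state
            "base": [0.0, 0.0],                      # mobile base motion
        },
    ],
}
print(len(episode["steps"][0]["arm"]))  # 7
```

The key design point is that every episode pairs raw observations and low-level actions with a free-form language label, which is what lets a single model be trained across hundreds of distinct instructions.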
The video below shows a few sample PaLM-SayCan-RT1 long-horizon task executions in several real kitchens.
Across all four evaluation areas (performance on seen tasks, generalization to unseen tasks, and robustness to distractors and to new backgrounds), RT-1 performs significantly better than baselines, displaying exceptional generalization and resilience.
The RT-1 Robotics Transformer is a simple, scalable action-generation model for real-world robotics tasks. It tokenizes all inputs and outputs, using a pre-trained EfficientNet with early language fusion and a TokenLearner module for compression. RT-1 demonstrates strong performance across hundreds of tasks, as well as extensive generalization and robustness in real-world settings.
The post Google releases a “GPT-like” robot model, the RT-1 appeared first on Metaverse Post.