March 5, 2025
Revolutionary App Launches to Tackle the Visual Training Data Bottleneck for Generalizable Robotics Models
A team of researchers at New York University has released AnySense, a game-changing iOS app designed to collect high-quality visual training data for generalizable robotics models. The brainchild of Raunaq Bhirangi, Zeyu Bian, Venkatesh Pattabiraman, Haritheja Etukuru, Mehmet Enes Erciyes, Nur Muhammad Mahi Shafiullah, and Prof. Lerrel Pinto, AnySense is the culmination of a collaborative effort to address a significant bottleneck in robotics model training: the scarcity of diverse, real-world training data.
To lay the groundwork for this effort, Prof. Pinto and his team had previously launched an open-source research project called Robot Utility Models (RUM), which aims to generalize robot training by reducing the need for thousands of task-specific examples. Policies trained with RUM can be deployed zero-shot, meaning they can operate in new, unfamiliar environments without any additional training.
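To make the zero-shot idea concrete, here is a minimal Python sketch of the deployment pattern, assuming hypothetical `PretrainedPolicy`, camera, and robot interfaces; none of these names come from RUM's actual codebase, they only illustrate the workflow of running a previously trained policy in a new scene without retraining.

```python
# Minimal sketch of zero-shot deployment, RUM-style. All class and
# method names are hypothetical stand-ins, not RUM's actual API: the
# point is that the policy is trained once on diverse demonstrations
# and then run as-is in a new environment, with no fine-tuning.

import numpy as np


class PretrainedPolicy:
    """Stand-in for a policy checkpoint trained on many environments."""

    def predict(self, rgb_frame: np.ndarray) -> np.ndarray:
        # A real policy would run a learned model on the image; this
        # placeholder returns a 7-D action (6-DoF arm delta + gripper).
        return np.zeros(7)


def deploy_zero_shot(policy, camera, robot, steps: int = 200) -> None:
    """Observe, predict, act: no new demonstrations, no retraining."""
    for _ in range(steps):
        frame = camera.read()           # RGB frame from the unseen scene
        action = policy.predict(frame)  # inference only
        robot.apply(action)             # hypothetical robot interface
```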
To support this approach, the Generalizable Robotics and AI Lab (GRAIL) at NYU created a data-collection device called “the Stick,” which uses a mounted iPhone as its visual sensing system. The device was designed to bridge the gap between scalable data-collection interfaces and affordable sensors, both essential for tackling the real-world data bottleneck.
The AnySense app grew directly out of the RUM project and is designed for multisensory data collection and learning. By pairing the iPhone's built-in sensors with external multisensory inputs connected over Bluetooth and wired interfaces, the app lets users collect diverse, high-quality training data in real-world settings.
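The recording pattern this implies can be sketched as a time-stamped, multi-stream capture loop. The sketch below is in Python for readability (AnySense itself is an iOS app), and every name in it is an assumption for illustration rather than something taken from the app's code.

```python
# Hypothetical sketch of a multisensory recording loop in the spirit of
# AnySense: each sample bundles the latest reading from every stream
# (camera, depth, external tactile sensor over Bluetooth) under a shared
# timestamp, so the streams can be aligned later during training.

import time
from dataclasses import dataclass, field


@dataclass
class Sample:
    timestamp: float   # shared clock for cross-stream alignment
    rgb: bytes         # camera frame
    depth: bytes       # depth frame, where the phone provides one
    tactile: list      # latest packet from the Bluetooth sensor


@dataclass
class Recorder:
    samples: list = field(default_factory=list)

    def record(self, camera, depth_sensor, tactile, duration_s: float = 10.0):
        """Capture time-aligned samples for duration_s seconds."""
        start = time.monotonic()
        while time.monotonic() - start < duration_s:
            self.samples.append(Sample(
                timestamp=time.monotonic(),
                rgb=camera.read(),
                depth=depth_sensor.read(),
                tactile=tactile.latest(),  # most recent Bluetooth reading
            ))
```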
One of the key features of the app is its seamless integration with the versatile tactile sensor called AnySkin, which provides feedback for gripping tasks through a touch-sensitive pad. Developed by the NYU researchers, AnySkin boasts an easy-to-assemble design, compatibility with various robotic end effectors, and generalizability to new skin instances.
Unlike traditional tactile sensors, AnySkin senses contact through distortions in the magnetic field generated by magnetized iron particles embedded in its sensing surface: when the soft surface deforms under touch, the field measured beneath it shifts. The flexible surface is also designed to be easily replaced when damaged, allowing data collection to continue without interruption.
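Because the skin is magnetized, touch can be detected as a deviation of the measured field from its idle baseline. Below is a minimal Python sketch of that idea; the function names and the threshold are assumptions for illustration, not AnySkin's actual processing pipeline.

```python
# Illustrative sketch of magnetic-field-based contact detection in the
# spirit of AnySkin. The threshold and function names are assumptions;
# the real sensor pipeline is not reproduced here.

import numpy as np


def calibrate_baseline(idle_readings: np.ndarray) -> np.ndarray:
    """Average N x 3 magnetometer field vectors recorded with no contact."""
    return idle_readings.mean(axis=0)


def contact_detected(reading: np.ndarray, baseline: np.ndarray,
                     threshold: float = 5.0) -> bool:
    """Flag contact when the field strays far enough from the baseline.

    Pressing the skin deforms it, moving the magnetized particles and
    shifting the field seen by the magnetometer underneath.
    """
    return float(np.linalg.norm(reading - baseline)) > threshold
```

In this framing, the replaceable-skin design works because a fresh skin produces a comparable field signature, the generalizability to new skin instances mentioned above.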
With the launch of AnySense, researchers and developers can now efficiently gather visual training data for generalizable robotics models, paving the way for further breakthroughs in the field. The resulting potential for more intuitive and adaptable robots is vast, with far-reaching implications for industries such as manufacturing, logistics, and healthcare.
The AnySkin device has already garnered attention from researchers worldwide, who are now using it in their own robotics projects. As the robotics community continues to grow, tools like AnySense will play an increasingly crucial role in shaping the future of intelligent machines.