MIT’s Solution For Teaching Robots How to Interact With Water
Last year, the Massachusetts Institute of Technology created a new tool to help robots simulate water. While robots like Tesla’s up-and-coming Optimus bot can handle solids, teaching robots to interact with fluids poses a whole new challenge. Fortunately, some of MIT’s best computer scientists and AI researchers are already developing specialized tools to help.
Fluid Simulation
Fundamentally, robots learn by being exposed to a simulation and copying it. Just as modern large language models learn from batches of real-world data, robots learn by mapping the world around them, often using LiDAR scanning, and then mimicking behaviors like grasping and pattern-based sorting. Doing the same for ever-changing fluids is much more difficult.
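To make that learn-by-copying loop concrete, here is a minimal, hypothetical sketch of imitation learning: a small policy network is fit by gradient descent to reproduce demonstrated actions from observed states. Every name and all the data here are illustrative assumptions, not FluidLab’s actual API.

```python
# Minimal behavior-cloning sketch (illustrative only, not FluidLab's API).
# A tiny policy maps observed states (e.g., LiDAR-derived features) to
# actions, and is fit to recorded demonstrations by gradient descent.
import jax
import jax.numpy as jnp

def policy(params, state):
    """One-hidden-layer MLP: state features -> action."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(state @ w1 + b1)
    return h @ w2 + b2

def loss(params, states, expert_actions):
    """Mean squared error between policy output and demonstrated actions."""
    pred = jax.vmap(lambda s: policy(params, s))(states)
    return jnp.mean((pred - expert_actions) ** 2)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = [
    jax.random.normal(k1, (16, 32)) * 0.1, jnp.zeros(32),
    jax.random.normal(k2, (32, 4)) * 0.1, jnp.zeros(4),
]

# Fake demonstration data: 256 observed states and the expert's actions.
states = jax.random.normal(k1, (256, 16))
expert_actions = jax.random.normal(k2, (256, 4))

grad_fn = jax.jit(jax.grad(loss))
for step in range(100):
    grads = grad_fn(params, states, expert_actions)
    params = [p - 0.05 * g for p, g in zip(params, grads)]
```

This pattern works well when the demonstrated behavior is repeatable, which is precisely what breaks down for fluids: the same pour never produces the same state twice.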
Outside of scientific endeavors, entertainment services have made great leaps in fluid simulation. Even unrelated industries like iGaming often integrate basic water animations into their games, featuring transparent underwater scenes or crashing waves. Games like the Big Bass Splash slot focus on fishing, using an animated water surface and trailing bubbles to distinguish the slot’s reels. Other media have created more advanced fluid simulations, often designed to be interacted with by other digital characters. However, creating a fluid simulation that plays nicely with a real-world robot is a new challenge.
At best, we have taught basic robots to pour fluid from a cup. Even then, the motion is clumsy, and the robot is interacting more with the cup than with what’s in it. Even humans can scarcely predict with complete accuracy how fluid will behave in our hands. For a robot, lacking that accuracy leaves a lot of room for error, from the embarrassing to the catastrophic.
MIT’s FluidLab
That’s where MIT’s FluidLab comes in. It is the pet project of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), which wants to enable robots to perform fluid manipulation at the same level humans can. Examples include making lattes or rolled ice cream and drawing delicate art on top of their fragile surfaces.
Robots conquer complex fluid dynamics from latte art to ice cream rolls, using a GPU-accelerated, fully differentiable physics simulator: https://t.co/63rytplJ5E
— MIT CSAIL (@MIT_CSAIL) May 29, 2023
Despite the name, FluidLab can model solids and gases too, and can assign traits like elasticity to objects. The tool is powered by FluidEngine, a GPU-accelerated physics simulator that mimics real-world interactions between digital materials. Using this state-of-the-art simulation, researchers can optimize robotic manipulation of fluids, where a slight mistake can result in a big mess. MIT-IBM Watson AI Lab researcher Chuang Gan said that “FluidEngine is, to our knowledge, the first-of-its-kind physics engine that supports a wide range of materials and couplings while being fully differentiable.” Where previous simulation tools focused on specific desired movements and outcomes, FluidLab takes a holistic approach, letting researchers train robots to handle everything we’ll inevitably throw at them.
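Why does “fully differentiable” matter? When gradients can flow through the simulator itself, a control input can be tuned by plain gradient descent instead of trial and error. Below is a hedged sketch of that idea in JAX rather than FluidEngine itself; the toy ballistic dynamics and every name are assumptions for illustration, while FluidLab’s real engine models far richer material couplings.

```python
# Toy differentiable-simulation sketch (illustrative; not FluidEngine code).
# We simulate a single particle "poured" with some initial velocity and use
# autodiff to tune that velocity so the particle lands on a target -- the
# optimize-through-the-simulator loop a differentiable engine enables.
import jax
import jax.numpy as jnp

DT, STEPS = 0.01, 100
GRAVITY = jnp.array([0.0, -9.81])

def simulate(init_velocity):
    """Integrate simple ballistic dynamics; return the final position."""
    def step(state, _):
        pos, vel = state
        vel = vel + GRAVITY * DT
        pos = pos + vel * DT
        return (pos, vel), None
    (pos, _), _ = jax.lax.scan(step, (jnp.zeros(2), init_velocity),
                               None, length=STEPS)
    return pos

def loss(init_velocity, target):
    """Distance between where the particle lands and where we want it."""
    return jnp.sum((simulate(init_velocity) - target) ** 2)

target = jnp.array([1.5, 0.0])
velocity = jnp.array([1.0, 1.0])
grad_fn = jax.jit(jax.grad(loss))

# Gradients flow *through the simulator*, so gradient descent on the
# control input converges directly -- no random sampling of pours needed.
for _ in range(200):
    velocity = velocity - 0.1 * grad_fn(velocity, target)
```

In a full engine the step function would update millions of particle states and their couplings with solids and gases, but the optimization pattern stays the same.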
While humanoid robotics technology is in its infancy, there’s no doubt that MIT will play an important role in its future. MIT engineers are some of the best in the world, having previously cracked feats like flying planes without moving parts. Now MIT has delivered a crucial tool for fluid modeling and simulation that robots can understand. Through its partnership with IBM, MIT is also exploring the latest in machine learning and generative AI. By combining the two, we may be able to create the coffee-serving robots once thought the realm of science fiction.