Google’s ClearGrasp AI model helps robots better recognize transparent objects

Optical sensors such as cameras and lidar are a fundamental part of modern robotics platforms, but they suffer from a common flaw: transparent objects like glass containers tend to confuse them. That's because most of the algorithms analyzing data from those sensors assume all surfaces are Lambertian, meaning they reflect light evenly in all directions and from all angles. Transparent objects, by contrast, both refract and reflect light, rendering much of the depth data invalid or noisy.
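
As a rough intuition for that assumption, the toy sketch below (not from the researchers' work) computes the brightness a Lambertian model predicts for a surface patch; the function name and albedo value are illustrative.

```python
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=0.8):
    """Brightness predicted for a matte (Lambertian) surface patch.

    A Lambertian surface scatters light equally in all directions, so the
    observed brightness depends only on the angle between the surface normal
    and the incoming light, not on the viewing angle.
    """
    normal = normal / np.linalg.norm(normal)
    light_dir = light_dir / np.linalg.norm(light_dir)
    return albedo * max(0.0, float(np.dot(normal, light_dir)))

# A patch facing the light returns a strong, predictable signal (prints 0.8)...
print(lambertian_intensity(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0])))
# ...but glass refracts and mirror-reflects instead, so a sensor relying on this
# model reads back depth values that are missing or noisy.
```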

In search of a solution, a team of Google researchers collaborated with Columbia University and Synthesis AI, a data generation platform for computer vision, to develop ClearGrasp. It's an algorithm capable of estimating accurate 3D data of transparent objects from RGB-D images, and importantly one that works with inputs from any standard RGB-D camera, using AI to reconstruct the depth of transparent objects and to generalize to objects unseen during training.
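
To make the task concrete, here is a minimal, hypothetical sketch of the depth-completion problem ClearGrasp addresses: keep the depth pixels the sensor measured and fill in the ones it lost on glass. The function name and the mean-fill stand-in are illustrative placeholders, not the project's actual method or API.

```python
import numpy as np

def fill_missing_depth(rgb: np.ndarray, raw_depth: np.ndarray) -> np.ndarray:
    """Return a dense depth map, keeping valid readings and filling holes.

    Hypothetical stand-in for a learned depth-completion model: a real model
    would use the color image (rgb) as a cue; here the holes are simply filled
    with the mean of the valid depths so the sketch runs end to end.
    """
    invalid = (raw_depth <= 0) | np.isnan(raw_depth)   # pixels the sensor failed on
    completed = raw_depth.copy()
    completed[invalid] = raw_depth[~invalid].mean()    # placeholder prediction
    return completed

# Toy RGB-D frame: a 4x4 color image and a depth map with a "transparent" hole.
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
depth = np.full((4, 4), 1.2, dtype=np.float32)
depth[1:3, 1:3] = 0.0   # glass region: the sensor returned no depth here
print(fill_missing_depth(rgb, depth))
```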
