Image credit: Ras Labs
The human sense of touch is our tactile window to the world. Our fingertips contain some of the most advanced touch receptors on the human body. Humans can feel a butterfly alight on their hand, hold a delicate raspberry, and adjust their grip to keep a glass from slipping. New artificial intelligence and deep learning technologies are bringing that level of sensitivity to machines, advancing robotic capabilities.
Much of today’s artificial intelligence stems from vision. Captured images are analyzed to determine whether a manufactured item has a defect, for example. Now companies are integrating machine-learning algorithms that assess pressure and touch, adding tactile sensitivity to AI systems.
AI at Your Fingers
Boston-based Ras Labs is testing tactile capabilities in FingerTip, a technology based on the company’s Synthetic Muscle product, which uses electroactive polymers and actuators that sense a range of pressures. The FingerTip sensors are incorporated into robotic grippers or end-of-arm tooling (EOAT).
The soft pads integrate multichannel sensors, machine learning, and artificial intelligence to identify trends and patterns. The sensors detect even minimal contact, enabling the device to gently hold an item, such as a raspberry, without crushing or dropping it. FingerTip can also determine a change in pressure and adjust its grip if the object begins to slip, thus preventing the item from falling.
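The grip-adjustment behavior described above can be sketched as a simple control loop: watch the measured normal force, and if it drops sharply between samples, treat that as incipient slip and tighten the grip. The sketch below is purely illustrative; the function name, thresholds, and sensor interface are assumptions, not Ras Labs’ actual implementation.

```python
# Hypothetical sketch of slip detection from force readings.
# All names and thresholds are illustrative assumptions.

def adjust_grip(force_samples, target_force=0.5, slip_drop=0.1):
    """Scan a window of normal-force readings (in newtons) and
    return an updated grip-force command.

    A sharp drop between consecutive samples is treated as the
    object starting to slip, so the commanded force is raised;
    sustained excess force triggers a gentle relaxation instead.
    """
    command = target_force
    for prev, curr in zip(force_samples, force_samples[1:]):
        if prev - curr > slip_drop:   # force fell sharply: likely slip
            command *= 1.5            # tighten the grip
        elif curr > command * 2:      # holding far harder than needed
            command *= 0.9            # relax to avoid crushing
    return command
```

With steady readings the command stays at the target force; a sudden drop, as when a raspberry begins to slide, raises it.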
The technology is extremely sensitive, detecting pressures from as light as 0.05 N up to 50 N, with 1 mm spatial resolution. Human touch has a pressure sensitivity of around 0.1 N. In fact, Ras Labs’ FingerTip technology not only can detect a heartbeat when placed against a person’s wrist or fingers, it can actually distinguish the four steps of cardiac conduction within a single heartbeat.
Bringing human-like dexterity to robotic grippers will enable new levels of automation across multiple industries. Amazon, for example, is investing in computer vision and technologies that support robotic shoppers that one day could select produce for grocery customers.
Tactile Simulation for Deep Learning
To help other companies develop touch-based capabilities, AI researcher and entrepreneur Jason Toy launched SenseNet, a sensorimotor and touch simulator that allows AI devices to better integrate tactile feedback into machine-learning algorithms. It provides a framework for researchers to integrate tactile and haptic input, including shape, contour, texture, and hardness, into robots. Building these sensorimotor neural systems will allow robotic devices to recognize objects through touch.
To gain true intelligence, a robotic device must be able to interact effectively with its environment, Toy has said. That means it needs both vision- and touch-based machine learning algorithms. Toy hopes that SenseNet will do for touch what ImageNet did for vision. ImageNet is an open database of millions of labeled images accessible to researchers and developers working on vision-based technologies.
The SenseNet framework for 3D objects was built on OpenAI’s Gym and the Intel® Distribution of OpenVINO™ toolkit. Intel’s Reinforcement Learning Coach expedited the training and testing of reinforcement learning algorithms. SenseNet includes other resources for developers, including a hand simulation tool built on the Bullet physics engine. SenseNet is part of the Intel® Developer Zone.
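Frameworks built on OpenAI’s Gym follow its standard reset/step episode loop, which is what lets a reinforcement-learning agent practice touch-based tasks in simulation. The sketch below shows that loop with a stub tactile environment; `TouchEnv`, its pressure model, and the reward scheme are invented for illustration and do not represent SenseNet’s actual environments.

```python
# Minimal sketch of the Gym-style reset/step loop that simulators
# such as SenseNet build on. TouchEnv is a stub standing in for a
# real tactile simulator; its observation is a fake pressure value.

class TouchEnv:
    """Illustrative task: press until a target contact force is reached."""

    def reset(self):
        self.pressure = 0.0
        return self.pressure                  # initial observation

    def step(self, action):
        if action == "press":
            self.pressure += 0.2
        else:                                 # "release"
            self.pressure = max(0.0, self.pressure - 0.2)
        done = self.pressure >= 1.0           # episode ends at target force
        reward = 1.0 if done else 0.0
        return self.pressure, reward, done, {}

def run_episode(env, policy, max_steps=50):
    """Run one episode: the policy maps observations to actions."""
    obs = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(policy(obs))
        total_reward += reward
        if done:
            break
    return total_reward
```

A trivial policy that always presses earns the episode reward; a learning agent would instead update its policy from the observations and rewards returned by `step`.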
In the future, SenseNet technologies will enable robotic hands to act more like human hands. Having a human-like grasp could enable robots to prepare food, distribute pills and other medications to patients, and sort and assemble components in a factory.