MIT Develops Finger Sensor for Advanced Robot Dexterity
26-10-2023 | By Robin Mitchell
Recognising the challenges faced by robotic systems, MIT researchers recently demonstrated a new finger-shaped sensor that can be used to handle various objects with ease. What challenges do robotic grippers face, what did the researchers develop, and how could it help with future robotic systems?
What challenges do robotic grippers face?
Over the past few decades, the field of robotics has seen major strides in technological advances, and one only has to look at companies like Boston Dynamics to see examples of such developments. However, barring robots that perform parkour, backflips, and dancing, the physical side of robotics hasn’t seen substantial advances in that time, as bipedal robots have been around since the 1990s.
What has seen major changes is software, sensors, and the use of AI. Creating a balancing robot that can move around is easily achievable with an Arduino and a few motors, but trying to get a robot to walk on uneven surfaces, recognise its environment, and adjust for unexpected events is extremely challenging.
As such, many robotic developers have been focusing heavily on the software side of designs, as well as the computational hardware they use. This shift in focus also shows how computing platforms play an increasingly important role, with modern robots capable of independent control being entirely dependent on advances in the field of microprocessor technologies.
Despite all of the advances made in these areas of technology, one challenge that robotic systems still face is dexterity. While humans can easily pick up random objects and decide how much pressure to apply to maintain grip without damaging the object, robotic systems are notoriously poor at this.
The fundamental reason for this comes down to a lack of sensing. The human fingertip is one of the most sensitive parts of the body, containing around 3,000 nerve endings, all of which are capable of detecting pressure and temperature. These nerve endings can also be used to determine surface texture and even to detect when an object has started to slip. In fact, the human finger is so capable and sensitive that it can feel surface features as small as 13 nm.
By contrast, robotic grippers are lucky to incorporate more than three sensors, and those sensors usually provide only a limited amount of data. For example, a gripper may be able to feel the force exerted on an object, but cannot determine that object’s surface texture, nor the weight borne by each individual finger.
Thus, robotic systems often struggle to switch between picking up eggs and lifting heavy objects without major code changes, or without a pre-defined routine that knows when delicate objects are being dealt with.
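To illustrate the kind of feedback loop that rich tactile sensing enables, here is a minimal sketch (not any real robot’s API) of a controller that starts with a gentle grip and ramps up force only when slip is sensed, rather than relying on per-object presets. The function name and force values are invented for illustration.

```python
# Hypothetical sketch: ramp grip force up in small steps only while slip
# is detected, so delicate and heavy objects share one control routine.

def adjust_grip_force(current_force, slip_detected, max_force=10.0, step=0.5):
    """Increase grip force by `step` while slip is sensed; otherwise hold."""
    if slip_detected:
        return min(current_force + step, max_force)
    return current_force

# Starting from a gentle 1.0 N grip, two slip events raise the force to 2.0 N,
# after which the grip holds steady.
force = 1.0
for slipped in [True, True, False]:
    force = adjust_grip_force(force, slipped)
```

With slip detection available from the sensor, the same loop handles an egg and a brick: the force simply stops rising once the object stops slipping.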
MIT Researchers create dexterous robot fingers
Recognising the challenges faced by robotic systems, researchers from MIT have recently demonstrated a new robotic finger that not only determines pressure but also allows for extremely dexterous handling of various objects.
Instead of trying to create a finger with 3,000 receptors, the researchers took a combined approach, utilising both touch sensors and visual cameras. By making the finger curved, a small camera located inside the finger at its base, together with two mirrors, provides a complete visual picture of what is being gripped.
The purpose of the mirrors is to give the camera a view of the entire finger; without them, even a small camera would have to be positioned far from the finger to capture it all. Using mirrors therefore allows the camera to be mounted much closer, reducing the overall size of the gripper setup.
However, the camera does not receive images that humans would typically understand; instead, it captures the reflection and refraction patterns that occur along the length of the finger. By comparing these against simulations of expected reflection and refraction patterns, a computer can determine the exact nature of the object being held, such as its size and orientation.
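The matching step described above can be sketched in very simplified form: compare the observed pattern against a library of simulated patterns and pick the closest. This is only an illustration of the idea, not the MIT pipeline; the pattern library, labels, and 2×2 "images" are invented for the example.

```python
import numpy as np

def best_match(observed, simulated_patterns):
    """Return the label of the simulated pattern closest to the observed
    pattern, using sum-of-squared-differences as the distance."""
    best_label, best_score = None, float("inf")
    for label, pattern in simulated_patterns.items():
        score = float(np.sum((observed - pattern) ** 2))
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Toy 2x2 "reflection patterns" standing in for two candidate objects.
patterns = {
    "small_sphere": np.array([[0.1, 0.1], [0.1, 0.9]]),
    "large_sphere": np.array([[0.8, 0.8], [0.8, 0.9]]),
}
obs = np.array([[0.15, 0.1], [0.05, 0.85]])
```

In practice the comparison would run over full camera frames and a far richer simulated library, but the principle, matching observation to simulation, is the same.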
What’s even more impressive is that internal LEDs (red and green) allow the pressure applied by the gripper to be measured, as these colours are reflected differently depending on the pressure. Finally, the backbone of the finger also incorporates sensors to measure the pressure being applied to the gripper.
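Conceptually, mapping the colour response to a pressure reading amounts to a calibration curve. The sketch below assumes a hypothetical lookup table relating the red/green intensity ratio seen by the camera to applied pressure; the calibration values are entirely made up for illustration.

```python
import numpy as np

# Hypothetical calibration: red/green intensity ratios measured at known
# pressures (values invented for this example, in newtons).
CAL_RATIOS = np.array([1.0, 1.2, 1.5, 2.0])
CAL_PRESSURES = np.array([0.0, 0.5, 1.5, 3.0])

def pressure_from_ratio(ratio):
    """Linearly interpolate an applied pressure from the measured
    red/green colour ratio using the calibration table."""
    return float(np.interp(ratio, CAL_RATIOS, CAL_PRESSURES))
```

A real sensor would calibrate per pixel and likely use a learned model rather than a single lookup table, but the idea of turning colour shifts into pressure estimates is the same.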
The resulting three-finger gripper demonstrated an extraordinary amount of dexterity, holding different objects with ease. Furthermore, the gripper was able to hold objects even at the fingertips, indicating its ability to handle delicate objects.
How could such sensors help with the future of robotics?
The use of an optical sensing solution over a touch-based system is undeniably clever, as it automatically provides a wide sensing area (given that such a camera can have millions of pixels, this implies an effective receptor density greater than that of a human finger). Furthermore, cameras produce data that is easy to process rapidly with AI, and thus could help to create real-time reactive systems in robotics.
If the finger developed by the MIT team can be commercialised, it could provide robotic operators with all kinds of new opportunities, ranging from standard automation in production lines all the way to next-generation robotic systems used in tasks normally dominated by humans. Of course, the sensor first needs to prove itself beyond a few laboratory setups, but from what has been shown, it certainly shows promise.