Robots now have much greater dexterity thanks to fingertip sensors

TECHi's Take
Carl Durrek

As good as robots are at repeating the same motion over and over, they can't adapt to new situations nearly as well as good old flesh and bone. That's where MIT's new fingertip sensor comes in. The technology used to give a robot a version of our fingertips is sophisticated, yet surprisingly simple in concept. It builds on an existing project called GelSight, a sensor whose rubber-like skin can map a surface in microscopic detail when pressed against it. The MIT team used a version that's 100 times less sensitive than the original, but as a result it's small enough to fit onto a fingertip and give the robot real-time feedback about the surfaces it touches.
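How does a piece of rubber map a surface? GelSight-style sensors image the back of the gel skin under lights coming from a few known directions and recover the surface shape from the shading, a classic technique called photometric stereo. The sketch below is a minimal illustration of that idea under assumed Lambertian shading; it is not the MIT team's actual pipeline.

```python
import numpy as np

def recover_normals(images, light_dirs):
    """Photometric stereo: recover per-pixel surface normals.

    images:     (k, H, W) grayscale captures, one per light
    light_dirs: (k, 3) unit vectors pointing toward each light
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)              # (k, H*W)
    # Lambertian model: I = L @ (albedo * normal); solve by least squares.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, H*W)
    albedo = np.linalg.norm(g, axis=0) + 1e-8
    return (g / albedo).T.reshape(h, w, 3)

def integrate_heights(normals):
    """Naive integration of surface gradients into a rough height map."""
    nz = np.clip(normals[..., 2], 1e-3, None)
    p = -normals[..., 0] / nz                        # dz/dx
    q = -normals[..., 1] / nz                        # dz/dy
    # Cumulative sums give a quick, path-dependent height estimate;
    # real systems would use a proper Poisson solver instead.
    return np.cumsum(q, axis=0) + np.cumsum(p, axis=1)
```

Pressing the gel against an object deforms the skin into the object's shape, so the recovered height map is effectively a tactile image of whatever the fingertip is touching.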

Newsoffice

Researchers at MIT and Northeastern University have equipped a robot with a novel tactile sensor that lets it grasp a USB cable draped freely over a hook and insert it into a USB port. The sensor is an adaptation of a technology called GelSight, which was developed by the lab of Edward Adelson, the John and Dorothy Wilson Professor of Vision Science at MIT, and first described in 2009.

The new sensor isn't as sensitive as the original GelSight sensor, which could resolve details on the micrometer scale. But it's smaller, small enough to fit on a robot's gripper, and its processing algorithm is faster, so it can give the robot feedback in real time.

Industrial robots are capable of remarkable precision when the objects they're manipulating are perfectly positioned in advance. But according to Robert Platt, an assistant professor of computer science at Northeastern and the research team's robotics expert, for a robot taking its bearings as it goes, this type of fine-grained manipulation is unprecedented. "People have been trying to do this for a long time," Platt says, "and they haven't succeeded because the sensors they're using aren't accurate enough and don't have enough information to localize the pose of the object that they're holding."
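Platt's comment about localizing the pose of the held object hints at a simple closed loop: read a tactile imprint, estimate the plug's offset in the gripper frame, and nudge the arm until the plug is centered over the port. The sketch below is a hypothetical illustration of such a loop; read_tactile_image, move_arm, and the template-matching pose estimate are stand-ins, not the team's actual method.

```python
import numpy as np
from scipy.signal import fftconvolve

def estimate_plug_offset(imprint, template):
    """Cross-correlate the tactile imprint with a stored template of the
    plug to find its (dx, dy) offset, in pixels, in the gripper frame."""
    corr = fftconvolve(imprint, template[::-1, ::-1], mode="same")
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = np.array(corr.shape) // 2
    return dx - cx, dy - cy

def servo_insert(read_tactile_image, move_arm, template, tol_px=2, max_iters=50):
    """Nudge the arm until the plug is centered, then report success."""
    for _ in range(max_iters):
        dx, dy = estimate_plug_offset(read_tactile_image(), template)
        if abs(dx) <= tol_px and abs(dy) <= tol_px:
            return True               # aligned: safe to attempt insertion
        move_arm(-dx, -dy)            # correct the measured offset
    return False                      # never converged; abort
```

The hard part, and the reason Platt calls the result unprecedented, is the sensing itself: a loop like this only works if the tactile image is accurate enough to pin down where the plug actually sits in the grasp.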

NOTE: TECHi Two-Takes are stories we have chosen from the web, presented alongside a short paragraph of our own opinion. Please check the original story via the source link.

More Two-Takes from MIT

MIT’s new nanotechnology could revolutionize medicine
MIT’s new nanotechnology could revolutionize medicine

A team of MIT researchers have developed nanoparticle sensors that could eventually be used to monitor tumors or other diseases,…

Old car batteries could be turned into low-cost solar cells

It's great that manufacturers recover lead from discarded car batteries to use in new ones, since lead production from ores…

MIT has found a way to generate electricity using water droplets

Water is pretty wild when you think about it: all of its three states of matter are consumable by humans,…

MIT has created a new lighting system that uses drones

Models, you’ll never have to fear being photographed in bad lighting again. Researchers at MIT and Cornell University are looking…