Expert Speak Digital Frontiers
Published on Nov 22, 2018


Chelsea Finn is teaching Brett the Robot how the world works

Image Source: Earl McCollough/The New York Times

Times Square — a series on jobs, automation and anxiety from the world’s public square.


Four years ago, when Chelsea Finn began her PhD at the University of California, Berkeley, she created an algorithm that enabled a robot to insert a block into a cube, screw a cap onto a bottle, and lift an object with a spatula and place it into a bowl. While each advance sparked excitement, Brett the Robot, a star of Finn’s lab at Berkeley, had one problem — an intense level of specificity. “He” could screw “that cap onto that bottle” or lift “that object with that spatula.” If you gave Brett any other bottle or spatula, Brett would have to learn from scratch.

Since then, Finn has focused her energies on training her robots to learn general things about the world and to acquire what she calls general notions of intelligence. Finn is now developing robots that can learn by “learning how the world works.” Brett the Robot is currently learning how to set the table by watching a young boy do the same task and “imitating” him.

“The hardware for robots is far more advanced than the software. It’s the software that I’m interested in,” says Finn about the inputs that go into making smart robots that can work in new and untested environments without, say, harming anyone or causing damage.

Central to the struggle is a tenet of artificial intelligence called Moravec’s paradox, which says that it is relatively easy for AI to mimic the high-level intellectual or computational capabilities of a human, but far harder to give a robot the perception and motor skills of even a toddler. That gap is precisely Finn’s speciality. Her algorithms require far less data than is usually needed to train an AI, so her robots can learn how to manipulate an object just by watching a video of a human doing it.

Nikhila Natarajan in New York spoke with Chelsea Finn on the US West Coast. Excerpts from the conversation are below.

Nikhila Natarajan: Tell us about your experiences with Brett the Robot.

Chelsea Finn: I think the kind of thing I’m really excited about is not teaching robots a specific skill but having them learn more general things about the world and hence being able to generalise. Before I began my PhD, much of the research on robots was focused on getting the robot to do one particular thing. Some of the advances are trying to get at broader notions of tasks and having robots learn about the world both by understanding the outcomes of actions and by trying to predict what will happen.

Natarajan: What are the computational models that capture human learning abilities for a large class of visual concepts?

Finn: The thinking underlying few-shot learning is that instead of training a machine learning system from scratch on one narrow task — say, recognising objects or translating from French to English — you give it a small amount of data for a broad number of tasks and adapt the model so that it learns how to learn new tasks. Then, when it sees a new task, it can learn very quickly from a small amount of data. The recent breakthroughs have been to combine existing knowledge with deep learning and large neural networks in order to scale to, say, identifying pictures from raw pixels alone.
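Finn’s published work on model-agnostic meta-learning (MAML) is one concrete form of this “learning to learn” idea: an outer loop searches for initial parameters such that a single inner gradient step on a handful of examples already solves a new task. The sketch below only illustrates that two-loop structure; it uses a first-order approximation and a hypothetical toy family of one-dimensional regression tasks, not Finn’s actual models, data or hyperparameters.

```python
# Illustrative sketch of "learning to learn" (first-order MAML-style meta-learning)
# on toy 1-D linear-regression tasks. Everything here is a placeholder example.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is a random line y = a*x + b; each task provides only ten points."""
    a, b = rng.uniform(-2, 2, size=2)
    xs = rng.uniform(-1, 1, size=10)
    ys = a * xs + b
    return xs, ys

def loss_and_grad(theta, xs, ys):
    """Mean squared error of the linear model y_hat = w*x + c, and its gradient."""
    w, c = theta
    err = w * xs + c - ys
    loss = np.mean(err ** 2)
    grad = np.array([np.mean(2 * err * xs), np.mean(2 * err)])
    return loss, grad

theta = np.zeros(2)            # meta-parameters shared across all tasks
inner_lr, outer_lr = 0.1, 0.01

for step in range(2000):
    meta_grad = np.zeros_like(theta)
    for _ in range(5):                       # small batch of tasks per meta-step
        xs, ys = sample_task()
        # Inner loop: adapt to this task with one gradient step on five examples.
        _, g = loss_and_grad(theta, xs[:5], ys[:5])
        adapted = theta - inner_lr * g
        # Outer loop: score the adapted parameters on held-out points.
        # First-order approximation: the meta-gradient is just the query-loss
        # gradient evaluated at the adapted parameters.
        _, g_query = loss_and_grad(adapted, xs[5:], ys[5:])
        meta_grad += g_query
    theta -= outer_lr * meta_grad / 5        # move theta so one adaptation step works well

# After meta-training, a brand-new task can be fit from just five examples.
xs, ys = sample_task()
_, g = loss_and_grad(theta, xs[:5], ys[:5])
adapted = theta - inner_lr * g
print("loss on unseen points after one adaptation step:",
      loss_and_grad(adapted, xs[5:], ys[5:])[0])
```

The point of the sketch is the objective: the outer loop never optimises performance before adaptation, only performance after a small adaptation step, which is what “learning quickly with a small amount of data” means in practice.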

Natarajan: What’s the value of creating machine learning systems as generalists?

Finn: In typical applications of robotics, you have a robot that does one thing over and over again. If something goes wrong, the robot is not able to adapt. If you have a system that, instead of memorising exactly what to do, has a more general understanding of how the world works, it should be able to adapt to many different situations and handle the diversity of the real world and unstructured environments. So if you train a robot from scratch to screw a cap onto a bottle, its entire universe is that bottle and that lid. Give it another bottle and lid and it won’t know what to do, because it has never seen anything else in its lifetime. The robot should be able to tell whether the lid is appropriate to the bottle, and it should be able to figure that out relatively quickly — like a human would.

Natarajan: What does success look like in your field?

Finn: If we’re able to put a robot in a disaster zone where it’s never been before, or in a hospital, and have it do even the most basic things that a human can do — I think that would be success. These kinds of generalisation capabilities, though, are way beyond existing techniques. So a lot of the work I have done so far, instead of putting a robot in a totally new environment, involves introducing the robot to new objects it hasn’t seen before in the same environment: give it a new bottle, or ask it to set a table in an environment that it knows but with objects it has never seen before.

Natarajan: In the marketplace for robots, which ones are the most impressive so far?

Finn: It’s all about the software. The hardware is way ahead of the software. You can get a tele-operated robot to do remarkable things, but that’s because a human is controlling it; working with Brett means coding the brain of the robot itself so that it acts autonomously. The software side is where we need to see advances. There’s a company that’s thinking of putting robots in hotels, and I think that’s interesting because that will be an environment with humans in it; the robot could get off the elevator on the wrong floor, other things might go wrong, but the robot has to be able to adapt.

The views expressed above belong to the author(s).

Author

Nikhila Natarajan

Nikhila Natarajan is Senior Programme Manager for Media and Digital Content with ORF America. Her work focuses on the future of jobs and current research in ...
