The three waves of robotics | Jeremy Wyatt | TEDxRoma


Translator: Martina Pia Ruino
Reviewer: Lisa Thompson

Good afternoon, and thank you for inviting me. I'd like to tell you a simple story about robots. The truly autonomous robot has been a vision, a dream, of scientists, engineers, and artists for many years. We're all fascinated by the idea that one day, it might be possible to construct a thinking machine that, just as we can, can move and act in our world. I want you to think, to imagine just for a moment, what it might be like to look into the eyes of a machine and know that it's looking back at you; that in some, even limited, sense it has sentience, thoughts, intentions, plans, and maybe one day even desires. That's a huge vision. How can we realize it? We're still a long way from that vision, but the question I have to answer for you is: is it destined to remain science fiction, or is it achievable, in whole or in part? I believe that it is achievable, and I want to tell you how.

Now, I want to tell you this story in three parts, three waves.
You can think about robotics and artificial intelligence technology as coming in waves. Now, that's an approximation to the truth, a mental tool, but I think it's a good first-order approximation.

So, let me tell you about the first wave. It's what I call structured manipulation, and in fact this wave you're all already familiar with: it's what we call industrial robotics. This is a very, very particular technology, based on a very, very particular idea, which is precise control of positions. These industrial robots move back and forth between two or three positions very repetitively, very beautifully, and very accurately, and this has revolutionized manufacturing.
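To make that idea concrete, here is a minimal sketch, in Python, of what first-wave position control amounts to. The waypoints and the `move_to_joint_positions` callable are illustrative stand-ins, not any real robot's API:

```python
# A first-wave robot in miniature: replay taught positions, open-loop
# with respect to the world. Everything here is an illustrative
# stand-in, not a real robot API.

TAUGHT_WAYPOINTS = [
    (0.00, 1.20, -0.50),   # joint angles (rad): above the conveyor
    (0.35, 0.90, -0.10),   # at the workpiece
    (0.00, 1.20, -0.50),   # back above the conveyor
]

def run_cycles(move_to_joint_positions, cycles=1000):
    """Repeat the taught motion precisely and blindly. Nothing here
    checks whether the part is actually where it was when the motion
    was taught, which is exactly the weakness described next."""
    for _ in range(cycles):
        for waypoint in TAUGHT_WAYPOINTS:
            move_to_joint_positions(waypoint)
```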
The problem, however, with these industrial robots is that they can't cope with uncertainty or novelty. If, for example, you moved the cars in this car factory out of sync by maybe one centimeter, everything in the factory would go wrong. And if we want to take those robots out of factories and bring them into our world, we need to be able to deal with that uncertainty and novelty. So what do we need to do that? Well, we need some kind of intelligence, and that's where we really come to artificial intelligence.
Now, the thing that you need to understand about AI is that very early on, we had a lot of big successes. They're very interesting, because the big early successes in AI were all things like chess, mathematical theorem proving, and medical diagnosis, and all of those are tasks that, when done by people, we think of as being done by people of high intelligence. There's a paradox there, because conversely, the simple, everyday tasks that you don't even think about as requiring thought – walking, seeing, having an everyday conversation, making a cup of coffee – all turn out to be far, far harder to automate than we ever anticipated.
Now, in order to be able to do these things, we can't use that kind of old-fashioned AI. In fact, although it's been going for a long time, we've made steady progress in a different kind of AI, based on machine learning and reasoning under uncertainty, and this has been critical to being able to do these lower-level tasks, as it were.
This is a set of images from a task called ImageNet, a competition in which different machine learning algorithms try to learn to recognize things in images. What you do is take a learning machine – many of the popular ones are based on principles of the brain – and you feed it hundreds of thousands of images, and then it learns to distinguish between, for example, a caravan and a car, between a motorcycle and a bicycle, and a giraffe, and so on. Between 2010 and 2014, we suddenly got much better at these kinds of tasks; there was a breakthrough in machine learning that enabled this.
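As a sketch of what such a learner looks like, here is a toy convolutional network in PyTorch. It only illustrates the recipe of feeding labeled images to a brain-inspired model and adjusting its weights; the networks behind the actual ImageNet breakthrough were orders of magnitude larger:

```python
# A toy brain-inspired image classifier: layers of convolutions learn
# to tell a handful of categories apart from labeled examples.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),   # e.g., car / caravan / motorcycle / bicycle / giraffe
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for hundreds of thousands of labeled images.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 5, (64,))

for _ in range(10):                 # one tiny "training run"
    logits = model(images)
    loss = loss_fn(logits, labels)  # how wrong the current weights are
    optimizer.zero_grad()
    loss.backward()                 # adjust weights to reduce the error
    optimizer.step()
```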
But hand in hand with machine learning is something we call probabilistic AI. It started with, for example, James Bernoulli, but also Thomas Bayes in England – that's the man sitting behind me now – and they came up with the first laws of probability theory, which has turned out to be really important for AI. Bayes, in particular, came up with the idea that you could combine your prior beliefs about the world with evidence from unreliable data, unreliable sensing. That's perfect for our robots, but it took us another 300 years to get the algorithms efficient enough to be able to use them in our robots.
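Bayes' rule itself is one line. Here is a minimal sketch, with made-up numbers, of a robot combining a prior belief about which room it is in with one unreliable sensor reading:

```python
# Combine a prior belief with evidence from an unreliable sensor.
prior = {"kitchen": 0.5, "hall": 0.3, "office": 0.2}

# P(sensor reports "door" | room): an unreliable sensor model
# (the numbers are made up for illustration).
likelihood = {"kitchen": 0.2, "hall": 0.8, "office": 0.3}

# Bayes' rule: posterior is proportional to likelihood times prior.
unnormalized = {room: likelihood[room] * prior[room] for room in prior}
total = sum(unnormalized.values())
posterior = {room: p / total for room, p in unnormalized.items()}

print(posterior)   # belief shifts toward "hall": about 0.25, 0.60, 0.15
```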
Now, with this machine learning and probabilistic AI, we get to the second wave: unstructured mobility. This means things like self-driving cars. In the picture behind me, you see a self-driving car with a laser scanner on top. With that kind of laser-based navigation, you can localize and build maps of the world, but the problem is that the laser sitting on top of the car is very expensive. It generates a lot of data, but it's worth more than the car itself, so that's not a practical technology.
Instead, machine learning has now advanced to the point where we can just take a regular camera, stick it on top of your dashboard, and it will recognize where it is. This is work done at the University of Oxford, where the car is able to recognize where it is in the street based on its previous memories, and this is a real achievement. What it's doing is spotting patches between images that are similar and robust to changes in weather, time of day, and lighting, and it learns many specific models like this. It's very, very powerful stuff.
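This is not the Oxford system, but a crude sketch of the underlying idea is easy to write down with standard tools: describe the current camera image by its local patches and match them against images remembered from earlier visits. The sketch below uses OpenCV's ORB features; the learned, weather-robust models in the real system go far beyond this:

```python
# Not the Oxford system, just the underlying idea in miniature:
# describe an image by its local patches, match against known places.
import cv2  # OpenCV

orb = cv2.ORB_create()
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def describe(gray_image):
    """Extract local patch descriptors from one grayscale image."""
    _, descriptors = orb.detectAndCompute(gray_image, None)
    return descriptors

def best_place(current_image, memory):
    """memory maps place names to descriptors saved on earlier visits;
    return the remembered place whose patches match the view best."""
    query = describe(current_image)
    scores = {place: len(matcher.match(query, stored))
              for place, stored in memory.items()}
    return max(scores, key=scores.get)
```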
However, this kind of revolution in perception isn't just going to work for cars; it also works for domestic robots, like this vacuum cleaner. This is work that was done at Imperial College London: the robot picks out visual landmarks that it uses to track its way around the house, so it never gets lost. That means it can plan where it should go to clean, so it will always go and clean the places that are going to be dirtiest or that it has cleaned least recently. That's a technology that's really going to revolutionize our lives.
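The cleaning policy itself can be sketched in a few lines; the grid-of-timestamps representation here is an assumption for illustration, not the product's actual map:

```python
# Always head for the part of the house cleaned least recently.
import time

last_cleaned = {}   # (x, y) map cell -> time of the last visit

def next_target(cells):
    """Pick the reachable cell cleaned least recently; never-cleaned
    cells default to 0.0, so they win."""
    return min(cells, key=lambda cell: last_cleaned.get(cell, 0.0))

def mark_cleaned(cell):
    last_cleaned[cell] = time.time()
```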
However, there's something missing: these robots have had a breakthrough in perception, but they don't have what we would call cognition; they can't think ahead in complex ways. And that is what we've been looking at.

So, imagine that you took your domestic assistant robot out of the box the day that you bought it, and you asked it to get you something for breakfast, and perhaps something to read while you're eating. The problem is that because you've only just switched your robot on, it doesn't know the house; it doesn't know where the kitchen is, so it can't find the objects that you want. To address this, the robot has to have two very special properties: first, what we call common sense, and second, imagination.

So in my laboratory, we've been building robot planning systems that are capable of reasoning in novel worlds. You drop the robot into the office, and it's able to reason very efficiently not only about how what it does changes the world, but about how what it does changes what it knows about the world. It actually imagines the existence of the things that it has to find and the kinds of places it's going to find them, and it uses a combination of Bayesian reasoning – probability – together with artificial intelligence planning techniques and common sense. So it has common sense and imagination.
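Here is a toy sketch of that object search, with assumed numbers: the robot starts with a common-sense prior over rooms, imagines the object in the likeliest place, looks there, and uses a Bayesian update when it fails to find it. The `look_for_cereal_in` stub stands in for real perception; none of this is the actual system from the talk:

```python
# Common sense (the prior) plus imagination (hypothesizing the object
# in each room) drives the search; misses update the belief.

prior = {"kitchen": 0.7, "dining room": 0.2, "living room": 0.1}
DETECTOR_RELIABILITY = 0.9   # P(robot sees the box | box is there and it looks)

def look_for_cereal_in(room):
    """Stand-in for real perception; replace with an actual detector."""
    return room == "dining room"

def find_cereal(belief, max_attempts=10):
    belief = dict(belief)
    for _ in range(max_attempts):
        room = max(belief, key=belief.get)   # imagine it where it's likeliest
        print(f"searching {room} (belief {belief[room]:.2f})")
        if look_for_cereal_in(room):
            return room
        # Bayesian update after a miss: probably not in this room.
        belief[room] *= (1.0 - DETECTOR_RELIABILITY)
        total = sum(belief.values())
        belief = {r: p / total for r, p in belief.items()}
    return None

find_cereal(prior)   # searches the kitchen first, then the dining room
```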
However, this is still not enough. This wave of unstructured mobile robots is already revolutionizing our world; you see it every day when you switch on the TV and hear a news story about a new application. But what we really need is the ability to bring together the manipulation from the first wave with the ability to deal with unstructured data from the second wave. That will give us what we call unstructured manipulation, so that a robot can go and pick up that cornflakes box and manipulate it. That third wave is really hard, and it's still beyond us; it's not hitting us yet, and it might not for some time.
This is a robot called Justin, from the German Aerospace Center (DLR); it was a project that I helped work on a little bit. Justin is performing a manipulation task: here he's making a drink, and he has to manipulate these objects. If you move the objects, that's fine, but if you changed one of the objects – if you put out a different glass, for example – Justin would have to be completely reprogrammed.
So, in my laboratory, we've been working on machine learning and planning techniques for grasping novel objects. This is Boris, one of my robots. You can show Boris how to pick up, for example, a jug with a human-type hand, and then he can see many, many more objects that he's never seen before, and yet he's able to pick them up. He's able, in fact, to imagine lots and lots of possible grasps for these objects – hundreds and hundreds of grasps within seconds. Here we play a few of them for you, and then you can see him executing them. As you'll be able to see in a moment, he's a very British robot, because of the way that he picks up his teacup. (Laughter) Very polite. And the nice thing here is that if you look at the little red dots, that's all the robot's sensor can see, so not only is the object novel, but the robot can't actually see very much of the object that it has to manipulate.
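Here is a sketch of that imagine-and-rank loop: sample many candidate grasps around the visible points, score each with a learned model, and execute the best. The `grasp_quality_model` callable stands in for whatever Boris learned from the human demonstration; none of this is his actual code:

```python
# Imagine many candidate grasps, keep the one the learned model likes.
import random

def sample_grasp(point_cloud):
    """Propose one candidate hand pose near a random visible point."""
    x, y, z = random.choice(point_cloud)        # the "little red dots"
    approach = random.choice(["from_above", "from_side", "by_handle"])
    return (x, y, z, approach)

def best_grasp(point_cloud, grasp_quality_model, n_candidates=500):
    """Imagine hundreds of grasps within seconds; keep the best one."""
    candidates = [sample_grasp(point_cloud) for _ in range(n_candidates)]
    return max(candidates, key=grasp_quality_model)
```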
However, there's another problem, which is that while we're using this kind of machine learning technique to deal with positional uncertainty and shape uncertainty, we still, in fact, have the wrong kinds of actuators. Robots like Boris and Justin are a small step on from the industrial robots of the past, in the sense that they can feel forces, but only fairly crudely; they're still basically position-control devices. We need very different kinds of robot hardware, hardware much more like our own, to be able to manipulate in unstructured environments.
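One way to see the difference is a simple impedance-control law, which commands forces like a spring and damper rather than demanding exact positions. This is a textbook sketch, not any specific robot's controller:

```python
def impedance_control(x_desired, x, velocity, stiffness=50.0, damping=10.0):
    """Commanded force behaves like a spring-damper pulling toward the
    target, so the arm yields when it meets unexpected contact."""
    return stiffness * (x_desired - x) - damping * velocity

# A pure position controller is effectively the same law with the
# stiffness turned up so high that contact forces are simply ignored:
stiff_force = impedance_control(0.5, 0.48, 0.0, stiffness=5e4)   # rigid, first-wave style
soft_force = impedance_control(0.5, 0.48, 0.0, stiffness=50.0)   # compliant, human-like
```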
The other thing is that we also need new kinds of machine learning and optimization techniques. For example, people working on walking robots have really begun to crack this problem: the robot reasons not just about where to put its legs, but about the forces that it needs to apply, and with this you can do amazing things. This was work done at the University of Southern California, which involved a colleague of mine who's now in Birmingham. So it turns out you need the right kind of hardware and also the right kind of brain, and we still have many problems left to solve.
So, in summary, we already have intelligent robots. Wave one has come; wave two, mobility in unstructured environments, is hitting us now; and wave three will be unstructured manipulation. I want you to imagine how revolutionary that will be, because when we have unstructured manipulating robots, we will be able to automate any task that humans can do that involves manipulation, which is essentially everything except walking. And is that a good thing? I believe yes. If you want to think about the impact of this kind of automation, go back and imagine what it must have been like to live in the pre-industrial age, because all of the benefits in our society have arisen from automation. So the possibilities at that point really will become almost endless.

Thank you very much.

(Applause)

