Reith Lectures: AI and why people should be scared – BBC News

Rory Cellan-Jones
Technology correspondent
@BBCRoryCJ on Twitter

Prof Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence, at the University of California, Berkeley, is giving this year's Reith Lectures.
His four lectures, Living With Artificial Intelligence, address the existential threat from machines more powerful than humans – and offer a way forward.
Last month, he spoke to the then BBC News technology correspondent Rory Cellan-Jones about what to expect.
The first drafts that I sent them were much too pointy-headed, much too focused on the intellectual roots of AI and the various definitions of rationality and how they emerged over history and things like that.
So I readjusted – and we have one lecture that introduces AI and the future prospects both good and bad.
And then, we talk about weapons and we talk about jobs.
And then, the fourth one will be: "OK, here's how we avoid losing control over AI systems in the future."
Yes, it's machines that perceive and act and hopefully choose actions that will achieve their objectives.
All these other things that you read about, like deep learning and so on, they're all just special cases of that.
It's a continuum.
Thermostats perceive and act and, in a sense, they have one little rule that says: "If the temperature is below this, turn on the heat.
"If the temperature is above this, turn off the heat."
So that's a trivial program and it's a program that was completely written by a person, so there was no learning involved.
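The thermostat rule described above really is a program of a few lines. Here is a minimal sketch of it, with the threshold temperatures chosen arbitrarily for illustration:

```python
# Minimal sketch of the thermostat "agent" described above:
# a single hand-written rule, no learning involved.
# The threshold values (18 and 22 degrees) are arbitrary.

def thermostat(temperature, low=18.0, high=22.0):
    """Return the action the thermostat takes at this temperature."""
    if temperature < low:
        return "heat_on"    # too cold: turn on the heat
    if temperature > high:
        return "heat_off"   # too warm: turn off the heat
    return "no_change"      # within the comfort band

print(thermostat(15.0))  # → heat_on
print(thermostat(25.0))  # → heat_off
```

Every line of behaviour here was written by a person, which is exactly why it sits at the trivial end of the continuum Russell describes.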
All the way up the other end – you have the self-driving cars, where the decision-making is much more complicated, where a lot of learning was involved in achieving that quality of decision-making.
But there's no hard-and-fast line.
We can't say anything below this doesn't count as AI and anything above this does count.
In object recognition, for example, which was one of the things we've been trying to do since the 1960s, we've gone from completely pathetic to superhuman, according to some measures.
And in machine translation, again we've gone from completely pathetic to really pretty good.
If you look at what the founders of the field said their goal was, it was general-purpose AI – not a program that's really good at playing Go, or a program that's really good at machine translation, but something that can do pretty much anything a human could do, and probably a lot more besides, because machines have huge bandwidth and memory advantages over humans.
Just say we need a new school.
The robots would show up.
The robot trucks, the construction robots and the construction management software would know how to build it, how to get the permits, and how to talk to the school district and the principal to figure out the right design for the school, and so on and so forth – and a week later, you have a school.
I'd say we're a fair bit of the way.
Clearly, there are some major breakthroughs that still have to happen.
And I think the biggest one is around complex decision-making.
So if you think about the example of building a school – we start from the goal that we want a school, then all the conversations happen, then all the construction happens – how do humans do that?
Well, humans have an ability to think at multiple scales of abstraction.
So we might say: "OK, well the first thing we need to figure out is where we're going to put it. And how big should it be?"
We don't start by thinking about whether to move the left finger first or the right foot first – we focus on the high-level decisions that need to be made.
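The idea of reasoning at multiple scales of abstraction can be made concrete with a toy task hierarchy. This is only an illustration of the principle, not any real planning system, and the task names are invented:

```python
# Toy illustration of thinking at multiple scales of abstraction:
# a high-level goal decomposes into sub-tasks, not motor commands.
# All task names here are invented for illustration.

TASKS = {
    "build_school": ["choose_site", "design_building", "construct"],
    "choose_site": ["survey_locations", "decide_size"],
    "design_building": [],  # treated as primitive at this level of detail
    "construct": [],
}

def expand(task, depth=0, out=None):
    """Recursively list a task and its sub-tasks, most abstract first."""
    if out is None:
        out = []
    out.append("  " * depth + task)
    for sub in TASKS.get(task, []):
        expand(sub, depth + 1, out)
    return out

for line in expand("build_school"):
    print(line)
```

The point of the sketch is that the plan bottoms out long before finger movements: each level only commits to decisions appropriate to its scale.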
I think so, yes.
There are two arguments as to why we should pay attention.
One is that even though our algorithms right now are nowhere close to general human capabilities, when you have billions of them running they can still have a very big effect on the world.
The other reason to worry is that it's entirely plausible – and most experts think very likely – that we will have general-purpose AI within either our lifetimes or in the lifetimes of our children.
I think if general-purpose AI is created in the current context of superpower rivalry – you know, whoever rules AI rules the world, that kind of mentality – then I think the outcomes could be the worst possible.
Because I think it's really important and really urgent.
And the reason it's urgent is because the weapons that we have been talking about for the last six years or seven years are now starting to be manufactured and sold.
So in 2017, for example, we produced a movie called Slaughterbots about a small quadcopter about 3in [8cm] in diameter that carries an explosive charge and can kill people by getting close enough to them to blow up.
We showed this first at diplomatic meetings in Geneva and I remember the Russian ambassador basically sneering and sniffing and saying: "Well, you know, this is just science fiction, we don't have to worry about these things for 25 or 30 years."
I explained what my robotics colleagues had said, which is that no, they could put a weapon like this together in a few months with a few graduate students.
And in the following month, so three weeks later, the Turkish manufacturer STM [Savunma Teknolojileri Mühendislik ve Ticaret AŞ] actually announced the Kargu drone, which is basically a slightly larger version of the Slaughterbot.
All of the above – I think a little bit of fear is appropriate. Not fear when you get up tomorrow morning and think your laptop is going to murder you, but, thinking about the future, the same kind of fear we have about the climate – or, rather, should have about the climate.
I think some people just say: "Well, it looks like a nice day today," and they don't think about the longer timescale or the broader picture.
And I think a little bit of fear is necessary, because that's what makes you act now rather than acting when it's too late, which is, in fact, what we have done with the climate.
The Reith Lectures will be on BBC Radio 4, BBC World Service and BBC Sounds.
© 2022 BBC. The BBC is not responsible for the content of external sites. Read about our approach to external linking.
