The UN Will Finally Decide In 2017 If Armies Can Use ‘Killer Robots’

annabenedetti

like marbles on glass
Isn't it limited by the training, though?

Google Translate's just made a big jump in the way it's "thinking."

They’re all moving to this method not only because they can improve machine translation, but because they can improve it in a much faster and much broader way. “The key thing about neural network models is that they are able to generalize better from the data,” says Microsoft researcher Arul Menezes. “With the previous models, no matter how much data we threw at them, they failed to make basic generalizations. At some point, more data was just not making them any better.”

For machine translation, Google is using a form of deep neural network called an LSTM, short for long short-term memory. An LSTM can retain information in both the short and the long term—kind of like your own memory. That allows it to learn in more complex ways. As it analyzes a sentence, it can remember the beginning as it gets to the end. That’s different from Google’s previous translation method, Phrase-Based Machine Translation, which breaks sentences into individual words and phrases. The new method looks at the entire collection of words.
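For anyone curious what that looks like in code, here is a minimal sketch (not Google's actual system) of an LSTM encoder that reads a whole sentence at once and carries context from the first word to the last. The use of PyTorch, the class name, and the vocabulary and layer sizes are illustrative assumptions.

```python
# Minimal sketch, assuming PyTorch; names and sizes are illustrative only.
import torch
import torch.nn as nn

class LSTMEncoder(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids):
        # token_ids has shape (batch, sentence_length): the entire sentence,
        # not the isolated chunks used by phrase-based translation.
        embedded = self.embed(token_ids)
        outputs, (hidden, cell) = self.lstm(embedded)
        # By the time the LSTM reaches the last word, `hidden` and `cell`
        # still carry information from the beginning of the sentence.
        return outputs, hidden

encoder = LSTMEncoder()
sentence = torch.randint(0, 10000, (1, 12))  # one toy sentence of 12 token ids
outputs, summary = encoder(sentence)
print(summary.shape)  # torch.Size([1, 1, 512]): a whole-sentence summary
```

A real translation system would then feed that summary into a decoder LSTM that emits the target-language words one at a time.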

Of course, researchers have been trying to get LSTMs to work on translation for years. The trouble with LSTMs for machine translation was that they couldn’t operate at the pace we have all come to expect from an online service. Google finally got them to work at speed—fast enough to run a service across the Internet at large. “Without doing lots of engineering work and algorithmic work to improve the models,” says Microsoft researcher Jacob Devlin, “the speed is very much slower than traditional models.”

According to Schuster, Google has achieved this speed partly through changes to the LSTMs themselves. Deep neural networks consist of layer after layer of mathematical calculations—linear algebra—with the results of one layer feeding into the next. One trick Google uses is to start the calculations for the second layer before the first layer is finished—and so on. But Schuster also says that much of the speed is driven by Google’s tensor processing units, chips the company specifically built for AI. With TPUs, Schuster says, the same sentence that once took ten seconds to translate via this LSTM model now takes 300 milliseconds.
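To make that layer-pipelining trick concrete, here is a toy, purely illustrative sketch (it is not Google's code, and the simple `cell_step` stands in for a real LSTM cell's gated linear algebra). It only records the order in which work becomes available, showing that the second layer can start on the first words of a sentence before the first layer has finished the last words; on real hardware those independent steps run in parallel.

```python
# Toy illustration of why stacked recurrent layers can be pipelined.

def cell_step(state, x):
    # Stand-in for one recurrent-cell update; a real LSTM cell does gated
    # linear algebra here, the per-layer math described above.
    return state + x

def forward_with_schedule(inputs, num_layers=2):
    states = [0.0] * num_layers   # one running state per layer
    schedule = []                 # (layer, time_step) in the order work is ready
    values = list(inputs)
    for t in range(len(inputs)):
        x = values[t]
        for layer in range(num_layers):
            states[layer] = cell_step(states[layer], x)
            x = states[layer]
            schedule.append((layer, t))
        values[t] = x
    return values, schedule

_, schedule = forward_with_schedule([1.0, 2.0, 3.0])
print(schedule)
# [(0, 0), (1, 0), (0, 1), (1, 1), (0, 2), (1, 2)]
# The second layer's work on step 0 is ready long before the first layer
# reaches step 2, so the layers can overlap instead of waiting on each other.
```

That overlap, plus running the linear algebra on TPUs, is what the article credits for cutting a ten-second translation down to roughly 300 milliseconds.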
 

Quincy

New member
Wow, Quincy. That's just so visual. And a scary thought.

It's a good reason to not allow the development of robotic warfare, I suppose. While I don't think any a.i. will ever escape the confines of its programming and I don't believe anyone will train one to turn on people, it's still terrifying, for sure.

Now, if nations were to only engage in robotic warfare and human lives were never lost to war again, I'd totally go for it. But developing robots just to slaughter nomadic jihadists or similar, no, that's just a nightmare.
 

annabenedetti

like marbles on glass
It's a good reason to not allow the development of robotic warfare, I suppose. While I don't think any a.i. will ever escape the confines of its programming and I don't believe anyone will train one to turn on people, it's still terrifying, for sure.

There are a lot of Singularity hypotheses out there.

Now, if nations were to only engage in robotic warfare and human lives were never lost to war again, I'd totally go for it. But developing robots just to slaughter nomadic jihadists or similar, no, that's just a nightmare.

Which can be done now with a human-operated drone. Imagine a fleet of drones equipped with biological weapons.
 

Quincy

New member
There are a lot of Singularity hypotheses out there.

I suppose it's possible. Perhaps people think a.i. will experience something like that because of Moore's Law, not sure. I don't know, it just seems very science fictiony and, like Berean mentioned, the form matters. The form these a.i.s would have to take would trend towards something androidish before they could destroy us. I think, or at least hope, that engineering can't pick up pace to the point that we couldn't just pull the plug on the whole thing, maybe?

I've got to say, my hope is that if we ever create an a.i. that advanced, we're smart enough to put it in its own simulation, kind of like what God did with us :eek: .

Which can be done now with a human-operated drone. Imagine a fleet of drones equipped with biological weapons.

Terrifying scenario, indeed! But what if those drones were programmed to only attack other drones, or were piloted only by humans, and the skills of drone pilots (or a.i. developers) hundreds or thousands of miles away actually determined the outcome of war, with no human life lost at all?
 

ok doser

lifeguard at the cement pond
Darn right.

I'll tell you what - nothing would unite the nations of the Earth better than an alien attack.
Maybe the governments of the world should stage one.

even better - we could hire some aliens to stage one


i hear california's full of aliens
 

aCultureWarrior

BANNED
Banned
LIFETIME MEMBER
They've been using robots in combat and law enforcement for what, a decade or more? In some ways they save lives, and in some ways they take lives. Their use raises ethical questions that are going to have to be addressed, and sooner rather than later because the future is already here.

What's so surprising to me is that no Trump lemmings have come forward and said that the US is a sovereign nation and doesn't take orders from the United Nations.

I wonder how Donald Trump will address this issue, as he claims that he doesn't take orders from anyone.
 

Crucible

BANNED
Banned
The military doesn't have anything that is 'autonomous', only things that are remotely controlled or, at most, programmed to do limited tasks. They have 'trojan horses' that look like miniature AT-ATs, and they have devices to disarm bombs.

The technology isn't there yet, but it will be, and the UN isn't the world's authority: as soon as another country decides to use autonomous robots, it will compel others to do the same.
I feel like the UN talks just to be saying something a lot of the time :rolleyes:
 

rexlunae

New member
The military doesn't have anything that is 'autonomous', only things that are remotely controlled or, at most, programmed to do limited tasks.

That's true, as far as I know. There are some UAVs that have semi-autonomous navigation, but even that requires a human-programmed flight plan. This is about preventing the tech from being developed, and there's going to be a lot of incentive to develop it.
 