
Mark Zuckerberg, Elon Musk and their feud over killer robots

NYT/ 11 Jun 18 | 10:22 PM


Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist.


Mr. Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, had taken it upon himself to warn the world that artificial intelligence was “potentially more dangerous than nukes” in television interviews and on social media.

So, on Nov. 19, 2014, Mr. Zuckerberg, Facebook’s chief executive, invited Mr. Musk to dinner at his home in Palo Alto, Calif. Two top researchers from Facebook’s new artificial intelligence lab and two other Facebook executives joined them.

As they ate, the Facebook contingent tried to convince Mr. Musk that he was wrong. But he wasn’t budging. “I genuinely believe this is dangerous,” Mr. Musk told the table, according to one of the dinner’s attendees, Yann LeCun, the researcher who led Facebook’s A.I. lab.

Mr. Musk’s fears of A.I., distilled to their essence, were simple: If we create machines that are smarter than humans, they could turn against us. (See: “The Terminator,” “The Matrix,” and “2001: A Space Odyssey.”) Let’s, for once, he was saying to the rest of the tech industry, consider the unintended consequences of what we are creating before we unleash it on the world.

Neither Mr. Musk nor Mr. Zuckerberg would talk in detail about the dinner, which has not been reported before, or their long-running A.I. debate.

The creation of “superintelligence” — the name for the supersmart technological breakthrough that takes A.I. to the next level and creates machines that not only perform narrow tasks that typically require human intelligence (like self-driving cars) but can actually outthink humans — still feels like science fiction. But the fight over the future of A.I. has spread across the tech industry. More than 4,000 Google employees recently signed a petition protesting a $9 million A.I. contract the company had signed with the Pentagon — a deal worth chicken feed to the internet giant, but deeply troubling to many artificial intelligence researchers at the company. Last week, Google executives, trying to head off a worker rebellion, said they wouldn’t renew the contract when it expires next year.

Artificial intelligence research has enormous potential and enormous implications, both as an economic engine and a source of military superiority. The Chinese government has said it is willing to spend billions in the coming years to make the country the world’s leader in A.I., while the Pentagon is aggressively courting the tech industry for help. A new breed of autonomous weapons can’t be far away.

All sorts of deep thinkers have joined the debate, from a gathering of philosophers and scientists held along the central California coast to an annual conference hosted in Palm Springs, Calif., by Amazon’s chief executive, Jeff Bezos.

“You can now talk about the risks of A.I. without seeming like you are lost in science fiction,” said Allan Dafoe, a director of the governance of A.I. program at the Future of Humanity Institute, a research center at the University of Oxford that explores the risks and opportunities of advanced technology.

And the public roasting of Facebook and other tech companies over the past few months has done plenty to raise the issue of the unintended consequences of the technology created by Silicon Valley. 

In April, Mr. Zuckerberg spent two days answering questions from members of Congress about data privacy and Facebook’s role in the spread of misinformation before the 2016 election. He faced a similar grilling in Europe last month.

Facebook’s recognition that it was slow to understand what was going on has led to a rare moment of self-reflection in an industry that has long believed it is making the world a better place, whether the world likes it or not. Even such influential figures as the Microsoft founder Bill Gates and the late Stephen Hawking have expressed concern about creating machines that are more intelligent than we are. Even though superintelligence seems decades away, they and others have said, shouldn’t we consider the consequences before it’s too late?

“The kind of systems we are creating are very powerful,” said Bart Selman, a Cornell University computer science professor and former Bell Labs researcher. “And we cannot understand their impact.”

© 2018 The New York Times
