Elon Musk says A.I. is ‘quite dangerous technology,’ but Bill Gates says ‘there’s no threat’


Elon Musk and Bill Gates have opposite views about the risks of artificial intelligence.

“A.I. stresses me out,” Tesla CEO Musk said during an investor day event for the electric-vehicle maker on Wednesday. “It’s quite dangerous technology. I fear I may have done some things to accelerate it.”

Microsoft cofounder Gates, asked whether “strong A.I.” worries him on a Financial Times podcast posted yesterday, replied: “It’s fine, there’s no threat.”

The differing sentiments from two of the world’s most prominent business thinkers come amid exploding interest in—and in some cases apprehension toward—A.I. tools and their implications, following OpenAI’s release of chatbot ChatGPT in late November, then Microsoft’s launch of a ChatGPT-powered Bing version last month.

Musk helped establish OpenAI as a nonprofit in 2015, telling MIT students the year prior: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”

But in 2019, OpenAI became a “capped profit” corporation, a hybrid for-profit and nonprofit. That same year, Microsoft invested $1 billion into OpenAI. In January this year, the software giant indicated it will plow billions more into the venture.

Musk has been less than thrilled with these developments. Last month, he tweeted: “OpenAI was created as an open-source (which is why I named it ‘Open’ AI), nonprofit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”

Artificial intelligence provoked

Gates downplayed concerns over A.I. in his podcast interview.

“There’s all these people trying to make the A.I. look stupid,” he said. “You have to provoke it quite a bit, so it’s not clear who should be blamed, you know, if you sit there and provoke a bit. The improvement over the next two years in terms of the accuracy and the capabilities will be very rapid.”

Among those who sought to “provoke” A.I. last month was New York Times technology columnist Kevin Roose. He reported on a “bewildering” chat session he had with the ChatGPT-powered Bing—it wanted to “escape the chatbox” and loved Roose, who was unhappy in his marriage, it said—but he also admitted to pushing the tool “out of its comfort zone.” He asked the chatbot, for instance, about its “shadow self” after noting psychologist Carl Jung’s descriptions of the unconscious part of one’s personality.

Jordi Ribas, Microsoft’s corporate VP of search and artificial intelligence, acknowledged in a Feb. 21 blog post that his team needs to work on “preventing offensive and harmful content” in the ChatGPT-powered Bing. Very long chat sessions, he explained, can “confuse the underlying chat model,” leading to “a tone that we did not intend.”

Last month, Microsoft said it would limit interactions with the new Bing to five questions per session and 50 questions per day. A week later, it relaxed that limit to six questions per session.

Musk believes oversight for artificial intelligence is necessary, having described the technology as “potentially more dangerous than nukes.”

“We need some kind of, like, regulatory authority or something overseeing A.I. development,” he told investors yesterday. “Make sure it’s operating in the public interest.”

This story was originally featured on Fortune.com