Elon Musk, the boss of Tesla and SpaceX, has said the development of intelligent machines that far surpass human intelligence poses a greater threat to civilization than nuclear weapons.
The technology entrepreneur told an audience at the South by Southwest (SXSW) festival in Austin, Texas, that efforts to advance artificial intelligence posed a "very serious danger" to the public, and called for AI research to be properly regulated.
Musk has been one of the most vocal critics of AI development, previously describing it as "our biggest existential threat".
The billionaire recently quit the board of OpenAI, a non-profit research group he co-founded to develop "safer" AI, to avoid any conflict of interest.
"I'm close to AI and it scares the hell out of me," Musk was quoted as saying by Deadline. "It's capable of vastly more than anyone knows, and the improvement is exponential."
He pointed to AlphaGo, the AI developed by Google's DeepMind, which defeated the world's number one Go player, Ke Jie, in May last year.
"Those experts who think AI is not progressing: look at things like Go," Musk said. "Their batting average is quite weak.
"The danger of AI is much greater than the danger of nuclear warheads – by a lot. Mark my words, AI is far more dangerous than nukes."
Musk questioned why no public body had been set up to oversee research into AI.
"I'm not normally an advocate of regulation and oversight," he stated. "This is a situation where you have a very serious danger to the public. There needs to be a public body that has insight and oversight so that everyone is delivering AI safely. This is extremely important.
"Nobody would suggest that we allow anyone to just build nuclear warheads if they want, that would be insane.
"My point was AI is far more dangerous than nukes. So why do we have no regulatory oversight? It's insane."
Musk is not alone in his dire warnings about a future world that is dominated by intelligent machines.
Prof Stephen Hawking, one of the world's foremost physicists, has warned that developing true AI "could spell the end of the human race".
"AI would take off on its own, and re-design itself at an ever increasing rate," Hawking told the BBC in 2014.
"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."