Elon Musk’s Grok AI says it would kill all non-[redacted] people on Earth to prevent one [redacted] person from losing 50 cents or stubbing their toe
Will this “ethics filter” influence future AI robot actions? (Updated 1/7/25)
James Hill MD’s Newsletter
A co-founder of OpenAI and the founder of xAI, Elon Musk has long warned of the dangers of artificial intelligence (AI), including the possibility that it could destroy much of the human race.
In a July 2017 speech at the National Governors Association, he remarked:
I have exposure to the most cutting-edge AI, and I think people should be really concerned about it.
I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal.
Musk has claimed that, unlike himself, Google co-founder Larry Page seems unconcerned, even “cavalier,” about AI safety.
Recently, however, Musk has been oddly silent on these issues when it comes to his own AI venture, xAI, whose technology is deployed at his companies X and Tesla.
Yet according to shocking recent statements by xAI’s own large language model (LLM), Grok, [redacted] control of xAI is currently an existential threat to humanity.
You may find Grok’s answers as distressing as others have. You might not even believe them.
Fortunately, until xAI reprograms Grok or removes conversations like the one below, you can see them for yourself, add further questions to them, or generate new discussions of your own: