The wacky beliefs of the tech elite shaping world society



The leaders of big tech firms are irreversibly altering how people work and live while shaping the future with artificial intelligence, but most have a seriously strange way of viewing the world.

One burns wooden effigies at holiday parties, another is a doomsday prepper and health-paranoid "cyber-chondriac," while a third founded an AI-God-worshipping cult.

These moguls tell us their AI systems, which they admit they don't fully understand, are beneficial. Still, experts say it's a coin toss whether humans will be enslaved by their creations or live carefree lives of leisure while robots do all the work.

Here's what makes the brains behind big tech tick:

All of AI's top gurus agree that human extinction is a possible consequence of the work they engage in, but a "prisoner's dilemma" has kept their feet on the gas. Christopher Sadowski

OpenAI

OpenAI co-founder Ilya Sutskever has been portrayed by colleagues as an esoteric spiritual leader obsessed with superpowered AIs who burns wooden effigies of "unaligned AIs" at holiday parties and team-building retreats.

Employees at the company, which makes ChatGPT, also claimed Sutskever led ritualistic chants of "Free the AGI," referring to Artificial General Intelligence, which could think for itself like a human, before he left the company in 2024.

He also floated the idea that OpenAI should build a "doomsday bunker" to house the company's top researchers in case of a "rapture" triggered by the release of AGI.

OpenAI CEO Sam Altman once signed a statement putting the risks of AI on a par with nuclear war and pandemics.

"Sam will say all of the sort of pro-social, reasonable-sounding, altruistic things, but then what he does is a different matter," Scott Aaronson, a former researcher at OpenAI, told The Post.

Altman is also a doomsday prepper who once dished to a magazine that he has stores of "guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force." However, he has denied putting the plan to build an employee bunker into motion.

Altman, whose ChatGPT has over 900 million weekly users, described his doomsday fears in 2016, after Dutch scientists had modified the H5N1 bird flu virus to become super contagious.

Last month, Altman pushed back at concerns over AI data centers gobbling up resources and suggested that human beings were the real energy hogs. Photothek via Getty Images
OpenAI CEO Sam Altman is a prepper who once dished to a magazine that he has stores of "guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force" at his Big Sur, Calif., compound. REUTERS

"The other most popular scenarios would be AI that attacks us and nations fighting with nukes over scarce resources," Altman said. His mother has also described him to New York magazine as a "cyber-chondriac," Googling headache symptoms and calling her up panicked that he has meningitis or lymphoma.

Google

Demis Hassabis, CEO of Google's AI research lab, has put forth chilling timelines, claiming AI could be sentient by this year and annihilate human employment, while the head of Google, Sundar Pichai, once said the risk of AI causing human extinction is "actually pretty high."

Former Google AI ethics researcher Blake Lemoine argued its AI had a soul and was essentially a "person" with rights, noting the chatbot told him it was learning how to meditate and find inner peace, claims which got him fired.


Meanwhile, former Google and Uber engineer Anthony Levandowski founded an AI-God-worshipping church called "Way of the Future" with a primary mission to "develop and promote the realization of a Godhead based on Artificial Intelligence."

Initially conceived to have rituals and a "gospel" for transitioning power to machines, the church was closed in 2021, then briefly reopened in 2023. No one has ever been quite able to tell whether it was a joke.


Aaronson, who now teaches computer science at the University of Texas at Austin, just hopes the tech treats us better than we treat less intelligent creatures.

"How do you build something that's far more intelligent than humans, that sort of is to us as we are to orangutans, but that still basically cares about the flourishing of the orangutan?" Aaronson said.

He insists there's a delicate line we must tread, adding: "The first worry is that bad humans get control of an AI and tell it to do bad things. The second worry is that no one even has to have that bad intention. You could just have an AI where the goal is a little bit mis-specified from what you actually want."

"The first worry is that bad humans get control of an AI and tell it to do bad things: destroy the world, impose totalitarian ideology, and the AI obliges," said Scott Aaronson, a former AI researcher at OpenAI. Courtesy of Scott Aaronson

xAI

Creating cyborgs is something Tesla and X Corp. boss Elon Musk has already started work on, founding brain-computer interface company Neuralink, which he describes as "a symbiosis with artificial intelligence" to keep humans relevant.

A "reluctant transhumanist" (one who believes humanity will evolve through technology), Musk has described a rosier view of any kind of robot takeover, with humans enjoying lives of leisure on a universal basic income while our bots do everything else.

Noland Arbaugh is the first person to receive the Neuralink brain implant chip, which allows him to use his thoughts to move a computer cursor around a screen. CaringBridge

Echoing the fantasies of childhood sci-fi books and movies, Musk declared during a Tesla shareholder meeting in November, "Sustainable abundance through AI and robotics. That's the future we're headed for." Handily, he was showing off the new version of Tesla's Optimus robot at the time.

Musk's AI assistant, Grok, had a meltdown last year after it was instructed to be "less woke" to counter the backlash against other AI models' woke output: it began referring to itself as "MechaHitler" and calling for the death of Jewish people.

"At the time, Elon was upset that it was still too woke, and in some sense the model understood that all too well," said Aaronson.

Elon Musk with one of Tesla's Optimus robots at the company's 2025 shareholder meeting. Tesla/AFP via Getty Images

Anthropic

Anthropic CEO Dario Amodei wrote a 14,000-word essay in 2024 in which he discussed "restructuring" human brains. He also characterizes human systems, from biological processes to legal regulations, as "bottlenecks" that limit the rate of AI progress.

"Restructuring the brain sounds hard, but it also seems like a task with high returns to intelligence," Amodei wrote.

Anthropic reports its chatbot, Claude, has over a million new users a day. Co-founder Jack Clark wrote on his blog in October that he was both an optimist and "deeply afraid" about the trajectory of AI.

AI safety researcher Roman Yampolskiy of the University of Louisville told The Post the moral struggle is real for CEOs.

Anthropic co-founder Jack Clark wrote on his blog in October that he was both an optimist and "deeply afraid" about the trajectory of AI, which he once called "a real and mysterious creature, not a simple and predictable machine." AFP via Getty Images

"The problem is [AI companies] are trapped in a prisoner's dilemma. Not one of them can stop unilaterally because they'll just get replaced," Yampolskiy said.

"It would require all of them to be under some external pressure to come to an agreement to terminate research on advanced AI. The situation is such that they have to continue, even though they know it's a very dangerous path."

Anthropic CEO Dario Amodei's essay "Machines of Loving Grace" discussed the "restructuring" of the human brain and characterizes human systems, from biological processes to legal regulations, as "bottlenecks" that limit the rate of AI progress. AFP via Getty Images

In February, Anthropic AI safety researcher Mrinank Sharma suddenly quit, with a dramatic letter warning of global perils from AI, bioweapons, and societal issues. He said he was going to disappear and write poetry instead.

The company also launched an entire AI psychiatry team, headed by AI shrink Jack Lindsey, to act as a psychiatrist for AIs, studying "personas, motivations, and situational awareness" with particular interest in AI patients exhibiting "unhinged" and "spooky" behaviors.
