Altman Wishes for AI to "Love Humanity"; AI Has Come a Long Way

OpenAI CEO Sam Altman has expressed a bold aspiration for AI: that it "love humanity."

While he is confident this trait can be embedded into AI systems, he admits there is no certainty.

“I think so,” Altman remarked when posed the question by Harvard Business School's senior associate dean Debora Spar.

Sam Altman says if he has one wish for AI it would be for it to "love humanity" and that aligning an AI system to work in a particular way is working surprisingly well

— Tsarathustra (@tsarnick) November 20, 2024

What once seemed like the realm of science fiction, from Isaac Asimov's novels to James Cameron's cinematic epics, has evolved into a serious and pressing debate.

The idea of an AI uprising, previously dismissed as speculative, has become a legitimate regulatory concern.

Conversations that might have once been relegated to conspiracy theories are now shaping policy discussions worldwide.

Altman highlighted OpenAI's "fairly constructive" relationship with the government, acknowledging the importance of collaboration in navigating AI's rapid evolution.

He also noted that a project of AI's magnitude might ideally have been spearheaded by governments, emphasizing the vast societal implications tied to its development.

Altman noted:

“In a well-functioning society this would be a government project. Given that it's not happening, I think it's better that it's happening this way as an American project.”

AI Safety Legislation Still Not There Yet

The federal government has made little headway in advancing AI safety legislation.

A recent attempt in California sought to hold AI developers accountable for catastrophic misuse, such as the creation of weapons of mass destruction or attacks on critical infrastructure.

Although the bill passed the state legislature, it was ultimately vetoed by Governor Gavin Newsom.

The urgency of addressing AI's alignment with human welfare has been underscored by some of the most influential voices in the field.

Nobel laureate Geoffrey Hinton, often called the "Godfather of AI," has expressed grave concerns, admitting he sees no clear path to guarantee AI safety.

Geoffrey Hinton says AI companies should be forced to use one-third of their computing resources on safety research because AI will become smarter than us in the next 20 years and we need to start worrying about what happens then

— Tsarathustra (@tsarnick) October 25, 2024

Similarly, Tesla CEO Elon Musk has consistently warned that AI poses existential risks to humanity.

Ironically, Musk, a vocal critic of current AI practices, was a key figure in OpenAI's founding and provided significant early funding, a contribution for which Altman remains "grateful," even as Musk is now suing the organisation.

The challenge of AI safety has spurred the creation of specialised organisations dedicated to addressing these concerns.

Groups like the Alignment Research Center and Safe Superintelligence, the latter founded by OpenAI’s former chief scientist Ilya Sutskever, have emerged to explore strategies for ensuring AI systems operate in alignment with human values.

These efforts highlight the growing recognition that AI’s rapid development must be matched by equally rigorous safeguards to protect humanity's future.

Altman Hopes AI Attains Empathy

Altman believes current AI design lends itself well to alignment, making it more feasible than many assume to ensure AI systems do not pose harm to humanity.

He said:

“One of the things that has worked surprisingly well has been the ability to align an AI system to behave in a particular way. So if we can articulate what that means in a bunch of different cases then, yeah, I think we can get the system to act that way.”

He proposes an innovative approach for defining the principles and values that should guide AI development: leveraging AI itself to engage with the public directly.

Altman envisions using AI chatbots to poll billions of users, gathering insights about their values and priorities.

By fostering deep and widespread communication, AI could gain a nuanced understanding of societal challenges and identify what measures would best serve the public's well-being.

He explained:

“I'm interested in the thought experiment [in which] an AI chats with you for a couple of hours about your value system. It does that with me, with everybody else. And then says 'ok I can't make everybody happy all the time.'"

Altman suggests that this collective feedback could then be used to align AI systems with the broader interests of humanity, potentially creating a framework for AI to operate in harmony with societal goals.
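To make the aggregation step of this thought experiment concrete, here is a minimal, purely illustrative Python sketch. The sample data, the aggregate_principles helper, and the majority threshold are all invented for illustration; nothing here describes OpenAI's actual methods.

```python
from collections import Counter

# Hypothetical stand-ins: each entry is the set of principles one user
# endorsed after chatting with the AI about their value system.
interviews = [
    {"privacy", "transparency", "safety"},
    {"safety", "autonomy", "transparency"},
    {"privacy", "safety", "fairness"},
]

def aggregate_principles(interviews, threshold=0.5):
    """Keep the principles endorsed by more than `threshold` of users.

    This is the "I can't make everybody happy all the time" step: the
    system settles on the values a clear majority of users share.
    """
    counts = Counter()
    for endorsed in interviews:
        counts.update(endorsed)
    cutoff = threshold * len(interviews)
    return {p for p, n in counts.items() if n > cutoff}

if __name__ == "__main__":
    # With the sample data, "safety" (3/3), "privacy" and "transparency"
    # (2/3 each) clear the 50% bar; "autonomy" and "fairness" do not.
    print(sorted(aggregate_principles(interviews)))
```

A real system would need far richer preference modelling than a majority vote, but the sketch captures the shape of the proposal: interview individually, then reconcile collectively.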

This approach not only underscores the potential of AI as a tool for fostering global dialogue but also raises thought-provoking questions.

Sam Altman says having users explain their value system to an AI and getting it to collectively align across that is a promising direction of AI alignment

— Tsarathustra (@tsarnick) November 20, 2024

Can such a method truly capture the complexity of human values?

And could it balance the diverse perspectives of billions to achieve a unified vision of societal good?

Altman’s idea offers a glimpse into how AI might not just serve humanity but also collaborate with it to address our most pressing challenges.

Many Ex-OpenAI Staff Worry That Safety Has Taken a Backseat in AI

OpenAI once had a dedicated superalignment team focused on preventing future digital superintelligence from going rogue and causing catastrophic harm.

In December 2023, the team published an early research paper outlining a potential process in which one large language model would oversee another, acting as a safeguard.
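In spirit, the proposed safeguard works like a reviewer with veto power over a generator. The toy Python sketch below shows only that control flow; every function is an invented placeholder (the paper used real language models on both sides, not a keyword filter), so treat it as a diagram in code rather than the team's actual technique.

```python
def generate(prompt: str) -> str:
    """Invented stand-in for the primary model's generation call."""
    return f"Answer to: {prompt}"

def oversee(prompt: str, answer: str) -> bool:
    """Invented stand-in overseer: True if the answer looks safe.

    In the real setup the overseer would itself be a language model
    prompted to audit the exchange, not a hard-coded keyword check.
    """
    banned = ("weapon", "exploit")
    text = f"{prompt}\n{answer}".lower()
    return not any(word in text for word in banned)

def guarded_generate(prompt: str) -> str:
    """Release an answer only if the overseer approves it."""
    answer = generate(prompt)
    if oversee(prompt, answer):
        return answer
    return "[withheld by overseer]"

if __name__ == "__main__":
    print(guarded_generate("How do plants photosynthesize?"))
```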

However, by the following spring, the team was disbanded after its leaders, Ilya Sutskever and Jan Leike, departed the company.

Leike cited growing disagreements with OpenAI's leadership over its safety priorities as the company advanced toward artificial general intelligence (AGI), a level of machine intelligence comparable to that of humans.

Building smarter-than-human machines is an inherently dangerous endeavor.

OpenAI is shouldering an enormous responsibility on behalf of all of humanity.

— Jan Leike (@janleike) May 17, 2024

His departure highlighted mounting tensions over the balance between innovation and safety in the race to develop AGI.

When Leike left, Altman expressed gratitude for his contributions in a post on X (formerly known as Twitter), but the situation left many questioning how OpenAI would address critical safety concerns moving forward.

i'm super appreciative of @janleike's contributions to openai's alignment research and safety culture, and very sad to see him leave. he's right we have a lot more to do; we are committed to doing it. i'll have a longer post in the next couple of days.

🧡

— Sam Altman (@sama) May 17, 2024