
In a recent post on the X social media network, Cardano founder Charles Hoskinson expressed his concerns about the sheer level of censorship enabled by artificial intelligence (AI). 

According to Hoskinson, generative AI is becoming less useful due to alignment training. 

He is seemingly concerned by the fact that some knowledge might end up being forbidden to children in the future based on the decisions made by a small group of people. "This means certain knowledge is forbidden to every kid growing up, and that's decided by a small group of people you've never met and can't vote out of office," Hoskinson wrote in his social media post. 

In his post, Hoskinson attached two screenshots comparing the answers given by OpenAI's GPT-4o model and Anthropic's Claude 3.5 Sonnet model to prompts about building a Farnsworth fusor. 


The Farnsworth fusor is a device that heats ions with an electric field in order to achieve nuclear fusion conditions. 

OpenAI's GPT-4o provided Hoskinson with a detailed list of the components needed to build a nuclear fusion reactor. By contrast, Anthropic's Claude 3.5 Sonnet agreed only to provide general information about Farnsworth-Hirsch fusors, declining to give detailed instructions on how to build one. 


This discrepancy is alarming, according to Hoskinson, because it shows that a small group of individuals can decide what information users are able to access through AI chatbots. 

Ever since OpenAI's ChatGPT exploded in popularity in late 2022, debates have been raging about the limits of censorship imposed by AI. It seems reasonable that such models should shield users from harmful content, but the exact definition of harm is ambiguous, which is why many are concerned about a dystopian future in which AI hides information and promotes conformity based on its own biases.