AI read "1984" and decided to ban it

People who resist AI decisions lose their jobs, while those who sign off on AI decisions face no consequences.

Author: Kuri, Deep Tide TechFlow

Last week, a school in Manchester, England, used AI to review its library.

AI generated a list of 193 books to be removed, each with a reason attached. George Orwell’s “1984” was notably included, with the reason being “contains themes of torture, violence, and sexual coercion.”

“1984” depicts a world where the government monitors everything, rewrites history, and decides what citizens may and may not see. Now an AI has done much the same for a school, quite possibly without understanding a word of what it was saying.

The school librarian found this unreasonable and refused to implement the AI’s suggestions in full.

The school then launched an internal investigation against her on the grounds of “child safety,” accused her of introducing inappropriate books into the library, and reported her to the local government. She took sick leave due to stress and eventually resigned.

Ironically, the local government’s investigation concluded that she had indeed violated child safety procedures, and the complaint was upheld.

Caroline Roche, chair of the UK School Library Association, stated that this conclusion means she can no longer work in any school.

People who resist AI decisions lose their jobs, while those who sign off on AI decisions face no consequences.

Subsequently, the school acknowledged in internal documents that all classifications and reasons were generated by AI, stating: “While the classifications were generated by AI, we believe they are largely accurate.”

A school handed the judgment of “which books are suitable for students” over to an AI, which returned answers it may not itself understand, and a human administrator then approved them without any serious review.

After the UK free speech organization Index on Censorship exposed the case, it became clear that the issues reach far beyond a single school’s bookshelf:

When AI begins to decide what content is appropriate and what is dangerous, who judges whether AI’s decisions are correct?

Wikipedia Closes Its Doors to AI

In the same week, another organization answered this question with action.

The school allowed AI to decide what people could read. The world’s largest online encyclopedia, Wikipedia, made the opposite choice: it will not let AI decide what to write in the encyclopedia.

English Wikipedia officially passed a new policy prohibiting the use of large language models to generate or rewrite entry content. The vote was 44 in favor and 2 against.

The immediate trigger was an AI account called TomWikiAssist, which in early March of this year autonomously created and edited multiple entries on Wikipedia; once the community discovered it, editors scrambled to contain the damage.

AI can write an entry in seconds, but verifying the facts, sources, and wording of an AI-generated entry takes volunteers hours.

Wikipedia’s editing community is finite. If AI can churn out content without limit, human editors simply cannot keep up.

This isn’t even the most troubling part. Wikipedia is one of the most important sources of training data for global AI models. AI learns from Wikipedia and then uses that knowledge to write new Wikipedia entries, which are then ingested by the next generation of AI models for further training.

Once incorrect information generated by AI gets mixed in, it will continuously amplify within this cycle, turning into a nesting doll of AI pollution:

AI contaminates training data, and training data then contaminates AI.
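
To make the amplification claim concrete, here is a deliberately simple toy sketch in Python. It is not from the article, and every number in it is an illustrative assumption: it models a training corpus that mixes human text with AI text carrying the previous generation’s amplified errors, and shows how the error rate can compound once AI output dominates the corpus.

    # Toy model of the loop described above: AI contaminates training data,
    # and training data contaminates the next AI. All parameters are
    # made-up illustrative assumptions, not measurements.

    human_error = 0.01  # assumed error rate of human-written text (held constant)
    ai_share = 0.60     # assumed fraction of new training text that is AI-generated
    amplify = 2.0       # assumed factor by which a model amplifies errors it ingests

    corpus_error = human_error  # generation 0 trains on (mostly) human text
    for gen in range(1, 7):
        model_error = corpus_error * amplify  # the model reproduces and amplifies corpus errors
        corpus_error = (1 - ai_share) * human_error + ai_share * model_error
        print(f"generation {gen}: corpus error rate ~ {corpus_error:.2%}")

    # When ai_share * amplify stays below 1, the loop settles at an elevated
    # plateau; once it exceeds 1 (here: 0.6 * 2.0 = 1.2), errors compound
    # generation after generation with no ceiling.

Under these assumptions, the error rate grows roughly sevenfold in six generations. The specific numbers are invented; the point is the structure of the feedback loop.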

However, Wikipedia’s policy does leave two openings for AI: editors can use AI to refine their own writing and to assist with translation. But the policy specifically warns that AI may “exceed your requests, alter the meaning of the text, and cause discrepancies with the cited sources.”

Human writers make mistakes, and Wikipedia has relied on community collaboration to correct those errors for over twenty years. AI makes mistakes differently; the things it fabricates can appear more real than reality, and it can produce them in bulk.

A school trusted AI’s judgment and ended up losing a librarian. Wikipedia chose not to trust and simply closed the door.

But what if even those who create AI begin to distrust it themselves?

Those Who Create AI Are Now Afraid

While outside institutions are closing their doors to AI, AI companies are also pulling back.

In the same week, OpenAI indefinitely shelved the “adult mode” for ChatGPT. This feature was originally planned to launch last December, allowing age-verified adult users to engage in erotic conversations with ChatGPT.

CEO Sam Altman had previewed this in October last year, stating it was to “treat adult users like adults.”

After being postponed three times, it was ultimately scrapped.

According to the Financial Times, OpenAI’s internal health advisory board unanimously opposed the feature. The advisers’ concerns were quite specific: users would develop unhealthy emotional dependencies on AI, and minors would inevitably find ways around age verification.

One adviser put it more bluntly: without significant improvements, the feature could become a “sexy suicide coach.”

The age verification system’s error rate exceeds 10%. At ChatGPT’s scale of 800 million weekly active users, 10% works out to 80 million people; even if only a fraction of those users ever went through verification, the misclassified could still number in the millions.

Adult mode was not the only product cut this month. The AI video tool Sora and the built-in instant checkout feature in ChatGPT were also taken offline simultaneously. Altman stated that the company needs to focus on its core business and eliminate “side tasks.”

Yet OpenAI is simultaneously preparing for an IPO.

A company racing toward an IPO is rapidly cutting its most controversial features; “focus” may not be the whole explanation.

Five months ago, Altman was saying he wanted to treat users like adults; five months later, he has realized his company still hasn’t figured out what users can and cannot do with AI.

Even those who create AI do not have the answers. So who should draw this line?

The Unbridgeable Speed Gap

When you look at these three events together, one core conclusion emerges:

The speed at which AI produces content and the speed at which humans review content are no longer on the same scale.

Seen in this light, the choice made by that school in Manchester is easy to understand. How long would it take a librarian to read through all 193 books and judge each one? Letting AI run through them takes just a few minutes.

The principal chose the option that took a few minutes. Did he really believe in the AI’s judgment? More likely, he simply didn’t want to spend the time.

This is an economic issue. The cost of generation approaches zero, while the cost of review is entirely borne by humans.

Thus, every institution touched by AI is forced to respond in the bluntest way available: Wikipedia banned AI outright, and OpenAI cut product lines. None of these solutions is the result of careful deliberation; all of them stem from the need to act quickly, without time to think things through.

“Act quickly” is becoming the norm.

AI’s capabilities iterate every few months, while discussions about what content AI may handle have yet to produce even a decent international framework. Each institution draws a line only inside its own yard; those lines conflict with one another, and no one is coordinating.

AI’s speed continues to accelerate. The number of reviewers will not increase. This gap will only widen until one day something far more serious than banning “1984” occurs.

By that time, it may be too late to draw the line.
