When AI takes over content platforms, how can encrypted staking help restore trust?
Author: Nancy, PANews
Today’s social media still appears lively, but the “sense of real presence” is gradually fading. As AI slop floods the major mainstream platforms and fake, clickbait content proliferates, more and more genuine users are losing their desire to share, and some have even begun to flee.
In the face of rampant AI garbage, simple algorithmic moderation is no longer sufficient. Recently, the top venture capital firm a16z proposed the concept of “Staked Media,” which uses real funds to filter out AI noise and has attracted market attention.
When AI begins imitating itself, the internet is flooded with “pre-made content”
“AI is starting to imitate AI itself.”
Recently, moderators on Reddit have been overwhelmed fighting massive amounts of AI-generated content. On r/AmItheAsshole, a subreddit with 24 million users, moderators complain that over half of the submissions are AI-generated.
In just the first half of 2025, Reddit deleted over 40 million pieces of spam and false content. The phenomenon has spread like a virus to platforms such as Facebook, Instagram, X, YouTube, Xiaohongshu, and TikTok.
In an era where information seems to explode while genuine voices grow ever scarcer, AI-produced junk content permeates nearly the entire internet, quietly eroding people’s minds. As generative tools like ChatGPT and Gemini become widespread, manual content creation is being replaced by AI and turned into an “assembly-line factory.”
According to the latest research by SEO company Graphite, since ChatGPT was publicly released at the end of 2022, the proportion of AI-generated articles has skyrocketed from about 10% that year to over 40% in 2024. As of May this year, the figure reached 52%.
However, most of this AI-generated content resembles “pre-cooked dishes”: fixed recipes and standardized processes, but no soul, and dull to read. And AI is no longer clumsy; it can mimic human tone and even replicate emotion. From travel guides to relationship disputes, and even social conflicts deliberately stirred up for traffic, AI handles it all with ease.
More dangerously, when AI hallucinates, it can spout nonsense with a straight face, churning out information junk and triggering a crisis of trust.
In an era of AI proliferation, building media trust with real funds
Despite updated moderation mechanisms and the introduction of AI assistance, governance has had only limited effect against AI garbage content online. In a16z crypto’s recent annual report, Robert Hackett proposed the concept of “Staked Media.” (Read more: a16z: 17 Exciting New Directions in Crypto for 2026)
The report points out that the traditional media model emphasizes objectivity, but its flaws have long been evident. The internet has given everyone a voice; now more practitioners and builders communicate their views directly to the public, and those views reflect their stakes in the world. Ironically, audiences trust them not despite their conflicts of interest, but precisely because of them.
This new trend is driven not by the rise of social media but by the emergence of cryptographic tools that let people make publicly verifiable commitments. As AI drastically cuts the cost of mass-producing content that can plausibly adopt any perspective or identity, statements alone, whether from humans or bots, are no longer convincing. Tokenized assets, programmable escrow, prediction markets, and on-chain historical records provide a more solid foundation for trust: commentators can prove their words match their actions by backing their views with funds; podcasters can lock tokens to demonstrate they won’t opportunistically change stance or manipulate markets; analysts can bind predictions to publicly settled markets, creating auditable records.
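To make the token-locking idea concrete, here is a minimal Python sketch of a time-locked commitment, the kind of programmable escrow described above. It is illustrative only: the TokenLock class and its fields are hypothetical, not any real protocol’s API.

```python
# Minimal sketch (not a real protocol): a podcaster's time-locked token
# commitment. All names here are hypothetical illustrations.
import time
from dataclasses import dataclass

@dataclass
class TokenLock:
    holder: str        # public address of the committer
    amount: float      # tokens locked as a public commitment
    unlock_at: float   # Unix timestamp before which withdrawal must fail

    def withdraw(self, now: float | None = None) -> float:
        """Release the tokens only after the lock period has elapsed."""
        now = time.time() if now is None else now
        if now < self.unlock_at:
            raise PermissionError("tokens still locked; commitment holds")
        return self.amount

# A podcaster locks 1,000 tokens for 180 days; anyone can check that
# they cannot quietly sell before the lock expires.
lock = TokenLock("0xPodcaster", 1_000.0, time.time() + 180 * 86_400)
```

In a real deployment this logic would live in a smart contract so the lock is publicly auditable; the sketch only shows the rule being committed to.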
This is the early form of what is called “Staked Media”: such media do not merely hold interests; they provide tangible proof behind their claims. In this model, credibility comes not from feigned neutrality or baseless assertion, but from transparent, verifiable commitments of interest. Staked media will not replace other forms of media; it will complement the existing media ecosystem. Its message is new: no longer “trust me, I am neutral,” but “here is the risk I am willing to bear, and here is how you can verify that I am telling the truth.”
Robert Hackett predicts that this field will keep growing, much as mass media did in the 20th century, when outlets adapted to the technology and incentives of their time (attracting audiences and advertisers) by ostensibly pursuing “objectivity” and “neutrality.” Today, AI makes creating or forging any content effortless; what is truly scarce is proof. Those who can make verifiable commitments and genuinely back their claims will have the advantage.
Using staking to raise the cost of faking, and a dual verification system for content
This innovative idea has also gained recognition among crypto practitioners, who have offered suggestions.
Crypto analyst Chen Jian notes that from major outlets to independent creators, false information is rampant, and stories are routinely reported, then reversed, then reversed again. The root cause is that faking is cheap and the rewards are high. If every information disseminator is treated as a node, why not borrow blockchain’s PoS (Proof of Stake) economic game to solve the problem? He suggests, for example, requiring each node to stake funds before publishing a view; the more it stakes, the more trustworthy it is. Others can gather evidence to challenge it, and if the challenge succeeds, the system confiscates the staked funds and rewards the challenger. This process, of course, raises privacy and efficiency issues. Existing approaches such as Swarm Network combine ZK (zero-knowledge proofs) and AI to protect participant privacy and assist verification through multi-model data analysis, similar to Grok’s fact-checking on X (Twitter).
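The stake-and-challenge game Chen Jian describes can be sketched in a few lines of Python. Everything here is a toy model: the proposal specifies no reward split, so the 50/50 division between challenger and treasury, and all names, are invented for illustration.

```python
# Toy sketch of the stake-and-challenge game (hypothetical names, not a
# real protocol): a publisher stakes funds behind a statement; a
# successful challenge slashes the stake and pays part of it out.

CHALLENGER_REWARD_SHARE = 0.5  # assumed split; the proposal specifies none

class StakedStatement:
    def __init__(self, publisher: str, stake: float, text: str):
        self.publisher = publisher
        self.stake = stake
        self.text = text
        self.resolved = False

    def resolve_challenge(self, challenger: str, upheld: bool) -> dict:
        """Settle a challenge: slash on success, refund otherwise."""
        assert not self.resolved, "statement already settled"
        self.resolved = True
        if upheld:
            reward = self.stake * CHALLENGER_REWARD_SHARE
            return {challenger: reward, "treasury": self.stake - reward}
        return {self.publisher: self.stake}  # honest publisher made whole

s = StakedStatement("0xMedia", stake=500.0, text="Project X raised $10M")
print(s.resolve_challenge("0xWatchdog", upheld=True))
# {'0xWatchdog': 250.0, 'treasury': 250.0}
```

The key economic property is that lying now carries a direct, pre-committed cost, while exposing lies carries a direct reward.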
Crypto influencer Blue Fox likewise believes that cryptographic techniques such as zero-knowledge proofs (ZK) can let media outlets or individuals prove their credibility online, akin to “writing a note” on the internet that cannot be tampered with once on-chain. But a note alone is not enough; a certain amount of assets must be staked as collateral, such as ETH, USDC, or other tokens.
The logic of staking is straightforward: if the content is proven false, the staked assets are confiscated; if it is genuine and reliable, they are returned after a set period, possibly with a reward on top (such as tokens issued by the staked media outlet or a share of funds confiscated from fakers). This mechanism creates an environment that rewards telling the truth. For media, staking does raise capital costs, but it also earns genuine audience trust, which matters all the more in an era of rampant fake news.
For example, a YouTuber promoting a product might “write a note” on Ethereum and stake ETH or USDC; if the video proves false, the stake is forfeited, so viewers can place more trust in the content. A creator recommending a phone might stake $100 worth of ETH and declare, “If this phone’s beauty filter doesn’t live up to my claims, I will compensate you.” Seeing money on the line, viewers naturally find the recommendation more credible; if the content turns out to be AI-faked, the creator loses the stake.
For judging truthfulness, Blue Fox suggests a dual “community + algorithm” verification system. On the community side, users with voting rights (which require staking crypto assets) vote on-chain; if votes for falsehood exceed a set threshold (e.g., 60%), the content is deemed fake. On the algorithm side, data analysis cross-checks the voting results. For arbitration, a creator who disputes the verdict can appeal to an expert committee, and malicious voters can have their staked assets confiscated. Both voters and experts earn rewards, funded by confiscated stakes and media tokens. In addition, creators can use zero-knowledge proofs to generate proof of authenticity at the source, for instance verifying that a video is genuine original footage.
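As a rough illustration of this dual check, the following Python sketch combines a stake-weighted community vote (using the 60% threshold from the text) with an algorithmic score, escalating to arbitration when the two signals disagree. The stake weighting, the model-score cutoff, and all names are assumptions added for this example, not part of Blue Fox’s proposal.

```python
# Illustrative sketch of the "community + algorithm" dual check. The 60%
# threshold comes from the text; everything else is assumed.

FALSEHOOD_THRESHOLD = 0.60

def false_share(votes: dict[str, tuple[float, bool]]) -> float:
    """votes maps voter -> (staked weight, voted_false); returns the
    stake-weighted share of voters who judged the content false."""
    total = sum(w for w, _ in votes.values())
    false_w = sum(w for w, voted_false in votes.values() if voted_false)
    return false_w / total if total else 0.0

def verdict(votes: dict[str, tuple[float, bool]], model_false_prob: float) -> str:
    community_false = false_share(votes) >= FALSEHOOD_THRESHOLD
    model_false = model_false_prob >= FALSEHOOD_THRESHOLD  # assumed cutoff
    if community_false and model_false:
        return "fake: confiscate stake"
    if community_false != model_false:
        return "disputed: escalate to expert arbitration"
    return "genuine: return stake"

votes = {"a": (100.0, True), "b": (50.0, True), "c": (80.0, False)}
print(verdict(votes, model_false_prob=0.9))  # ~65% vote false -> confiscate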
To deter well-capitalized actors from gaming the staking mechanism to churn out fakes, Blue Fox recommends raising the long-term cost of faking, not only in capital but also in time, historical record, reputation, and legal liability. For example, accounts that have been penalized and had stakes confiscated would be flagged; their subsequent content would require larger stakes; repeat violations would sharply reduce their credibility, and severe cases could face legal consequences.
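The escalating-cost idea could look something like the sketch below, where each confirmed violation raises the minimum stake an account must post. The base stake and the doubling factor are invented for illustration; the text prescribes no specific numbers.

```python
# Sketch of the escalating-cost idea: each confirmed violation raises the
# minimum stake a flagged account must post before publishing again.

BASE_STAKE = 100.0    # minimum stake for a clean account (assumed)
PENALTY_FACTOR = 2.0  # each violation doubles the requirement (assumed)

def required_stake(violations: int) -> float:
    """Minimum stake grows exponentially with an account's record."""
    return BASE_STAKE * PENALTY_FACTOR ** violations

for v in range(4):
    print(v, required_stake(v))  # 0: 100.0, 1: 200.0, 2: 400.0, 3: 800.0
```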