a16z Special Article: When AI Takes Over Content Platforms, How Can Crypto Staking Regain Trust?

Social media is flooded with AI-generated garbage content, and genuine users’ willingness to share is declining. Top venture capital firm a16z has proposed the concept of Staked Media: filtering out AI noise through crypto-asset staking mechanisms, using real money to verify commitments and rebuild trust in content.
(Background: Animal Crossing hooked up to an AI LLM, “playability skyrockets”; streamers who tried it report endless conversations without repetition)
(Additional context: AI-generated game suspected of copying Pokémon! Palworld launched on Steam, selling over 2 million copies; is Nintendo legal action imminent?)
Table of Contents
When AI begins self-replication, the internet is flooded with “pre-made content”
In the era of AI proliferation, building media trust with real money
Using staking mechanisms to raise the cost of faking, and proposing a dual content verification system
Currently, social media appears lively, but the sense of “human presence” is gradually disappearing. As massive amounts of AI slop flood the mainstream platforms and fake, clickbait content runs rampant, more and more genuine users are losing the desire to share, and some are starting to leave altogether.
In the face of AI-generated garbage, simple algorithmic moderation proves insufficient. Recently, top venture capital firm a16z proposed the concept of Staked Media, using real money to filter AI noise, attracting market attention.
When AI begins self-replication, the internet is flooded with “pre-made content”
“AI is starting to imitate AI.”
Reddit moderators have recently been thrown into chaos battling massive amounts of AI-generated content. The r/AmItheAsshole subreddit, with 24 million users, reports that over half of its content is AI-generated.
In the first half of 2025 alone, Reddit deleted over 40 million pieces of spam and false content. This phenomenon is spreading like a virus to Facebook, Instagram, X, YouTube, Xiaohongshu, and TikTok.
Today, information seems more abundant than ever, yet authentic voices are diminishing: AI-produced garbage content permeates the entire internet, quietly eroding readers’ trust and attention. As generative tools like ChatGPT and Gemini have become widespread, manual content creation is being displaced by AI and turned into a “production line.”
According to the latest research from SEO company Graphite, since ChatGPT’s public release at the end of 2022, the proportion of AI-generated articles has surged from about 10% to over 40% in 2024. As of May this year, the figure reached 52%.
However, most AI-generated content resembles “pre-made dishes”: fixed recipes, standardized processes, no soul, and dull to read. At the same time, AI is no longer clumsy; it can mimic human tone and even simulate emotion. From travel guides to relationship disputes, and even social conflicts deliberately stirred up for traffic, AI can produce it all.
More dangerously, when AI models hallucinate, they confidently spout nonsense, generating informational garbage and triggering a crisis of trust.
In the era of AI proliferation, building media trust with real money
Despite updated moderation mechanisms and AI-assisted governance, their effectiveness remains limited. In a16z crypto’s flagship annual report, Robert Hackett introduced the concept of Staked Media.
The report points out that the traditional media model emphasizes objectivity, but its flaws have long been evident. The internet has given everyone a voice, and increasingly it is practitioners, operators, and builders who communicate their perspectives directly to the public. Their viewpoints reflect their stake in the world. Ironically, audiences respect them not because they are “disinterested,” but precisely because they have vested interests.
This shift is driven not by the rise of social media but by the emergence of crypto tools, which enable publicly verifiable commitments. As AI drastically lowers the cost of generating vast amounts of content from any perspective or identity, with authenticity ever harder to verify, relying solely on someone’s (or some bot’s) say-so is no longer convincing. Tokenized assets, programmable locks, prediction markets, and on-chain history provide a more solid foundation for trust: commentators can prove consistency by backing their views with funds; podcasters can lock tokens to demonstrate they won’t opportunistically change stance or dump; analysts can link predictions to publicly settled markets, creating auditable records.
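To make “locking tokens to demonstrate commitment” concrete, here is a minimal Python sketch of the idea. The `TokenLock` class, its fields, and the 180-day period are hypothetical illustrations, not any specific protocol’s interface; a real lock would live in a smart contract, where anyone can read it.

```python
from dataclasses import dataclass
import time

@dataclass
class TokenLock:
    """Hypothetical on-chain lock: tokens cannot move until unlock_at."""
    owner: str
    amount: float      # tokens committed
    unlock_at: float   # Unix timestamp when the lock expires

    def can_withdraw(self, now: float | None = None) -> bool:
        """Funds are withdrawable only after the lock expires, so any
        observer can verify the owner cannot dump before that date."""
        now = time.time() if now is None else now
        return now >= self.unlock_at

# A podcaster locks 1,000 tokens for 180 days as a public commitment.
lock = TokenLock(owner="podcaster.eth", amount=1_000,
                 unlock_at=time.time() + 180 * 24 * 3600)
assert not lock.can_withdraw()  # anyone can check: no early exit
```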
These mechanisms are the early form of “Staked Media”: such media not only acknowledge their vested interests but also put up tangible proof behind their claims. In this model, credibility comes not from feigned neutrality or baseless assertion but from transparent, verifiable commitments of interest. Staked media will not replace other forms of media; it will complement the existing media ecosystem. It carries a new message: “Trust me, I am neutral” is replaced by “Here is the risk I am willing to bear, and here is how you can verify my claim.”
Robert Hackett predicts that this field will keep growing, much as 20th-century mass media adapted to the technology and incentives of its time (attracting audiences and advertisers) by presenting itself as pursuing “objectivity” and “neutrality.” Today, AI makes creating, or faking, any content effortless; what is truly scarce is evidence and proof. Those who can make verifiable commitments and genuinely back their claims will hold the advantage.
Using staking mechanisms to raise the cost of faking, and proposing a dual content verification system
This innovative idea has also gained recognition among crypto practitioners, who have offered suggestions.
Crypto analyst Chen Jian notes that from mainstream media to self-media, false information is rampant, with stories repeatedly reversing themselves after publication. The root cause is that faking is low-cost and high-reward. If each information disseminator is viewed as a node, why not borrow the economic game theory of blockchain PoS (Proof of Stake) to solve the problem? He suggests, for example, requiring each node to stake funds before publishing a claim: the more they stake, the more trustworthy the claim. Others can gather evidence to challenge it; if the challenge succeeds, the system confiscates the stake and rewards the challenger. This naturally raises privacy and efficiency issues. Solutions like Swarm Network currently combine ZK (zero-knowledge) proofs with AI, protecting participant privacy while assisting verification through multi-model data analysis, similar to Grok’s fact-checking on X.
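A minimal sketch of the stake-and-challenge game Chen Jian describes, in Python. The adjudication step is abstracted into a single boolean, and all names (`StakedFeed`, `Claim`) are illustrative, not an existing system:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    author: str
    stake: float   # funds the author puts behind the claim

class StakedFeed:
    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def post(self, author: str, stake: float) -> Claim:
        # The more staked, the stronger the trust signal the claim carries.
        self.balances[author] = self.balances.get(author, 0) - stake
        return Claim(author=author, stake=stake)

    def challenge(self, claim: Claim, challenger: str, bond: float,
                  claim_is_false: bool) -> None:
        """claim_is_false stands in for whatever adjudication process
        (evidence review, vote, arbitration) settles the dispute."""
        self.balances[challenger] = self.balances.get(challenger, 0) - bond
        if claim_is_false:
            # Confiscate the author's stake and pay it to the challenger.
            self.balances[challenger] += bond + claim.stake
            claim.stake = 0
        else:
            # Failed challenge: the bond goes to the truthful author.
            self.balances[claim.author] += bond

feed = StakedFeed()
claim = feed.post("reporter_a", stake=500)
feed.challenge(claim, "skeptic_b", bond=200, claim_is_false=True)
# skeptic_b nets the author's 500 stake; reporter_a is out 500.
```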
Crypto influencer Lan Hu likewise believes that cryptographic techniques such as zero-knowledge (ZK) proofs can let media outlets or individuals prove their credibility online, akin to “writing a receipt” on the web that is immutable once on-chain. But a receipt alone is not enough; a certain amount of assets (ETH, USDC, or other tokens) should also be staked as collateral.
The logic of staking is straightforward: if the content is proven false, the staked assets are forfeited; if it proves true and reliable, the assets are returned after a period, possibly with additional rewards (such as tokens issued by the staked-media platform, or a share of the funds forfeited by fakers). This mechanism creates an environment that encourages truth-telling. For media, staking does raise capital costs, but it buys genuine audience trust, which matters all the more in an era of rampant fake news.
For example, a YouTuber posting a product-recommendation video stakes ETH or USDC on Ethereum to “write a receipt.” If the content later proves false, the stake is confiscated, which gives viewers grounds to trust the video. A creator recommending a phone might stake $100 worth of ETH and declare, “If this phone’s camera beautification features fall short of my claims, I will compensate you,” making the recommendation far more credible; if the content is AI-faked, the creator loses the stake.
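A sketch of that receipt lifecycle, assuming a simple dispute window and a shared pool of forfeited stakes; the `Receipt` type, the 7-day window, and the 10% reward share are invented for illustration:

```python
from dataclasses import dataclass

FORFEIT_POOL = 0.0   # accumulates stakes confiscated from fakers

@dataclass
class Receipt:
    creator: str
    stake: float
    posted_at: float
    dispute_window: float = 7 * 24 * 3600   # illustrative 7-day window

def resolve(receipt: Receipt, ruled_false: bool, now: float) -> float:
    """Payout to the creator once the dispute is settled."""
    global FORFEIT_POOL
    if ruled_false:
        FORFEIT_POOL += receipt.stake   # faker forfeits the stake
        return 0.0
    assert now >= receipt.posted_at + receipt.dispute_window, "window still open"
    bonus = 0.10 * FORFEIT_POOL         # illustrative 10% share of forfeitures
    FORFEIT_POOL -= bonus
    return receipt.stake + bonus        # stake back, plus a truth-telling reward
```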
On the question of judging content authenticity, Lan Hu suggests a “community + algorithm” dual verification system. On the community side, users with voting rights (obtained by staking crypto assets) vote on-chain; if the share of “fake” votes exceeds a threshold (e.g., 60%), the content is deemed false. On the algorithm side, data analysis helps verify the voting results. For disputes, the content creator can initiate arbitration before an expert committee; voters found to have acted maliciously have their staked assets confiscated, while honest voters and the experts are rewarded, funded by those confiscations and by media-issued tokens. In addition, content creators can use zero-knowledge proofs to generate proofs of source authenticity, such as verifying a video’s genuine origin.
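A sketch of that vote-and-arbitration flow, assuming stake-weighted on-chain voting and the 60% threshold Lan Hu mentions. Treating any voter on the overturned side as “malicious” is a simplification; the experts’ reward share and the algorithmic cross-check are omitted for brevity:

```python
Votes = dict[str, tuple[bool, float]]   # voter -> (says_fake, staked weight)

def tally(votes: Votes, threshold: float = 0.60) -> bool:
    """True if stake-weighted 'fake' votes exceed the threshold."""
    total = sum(w for _, w in votes.values())
    fake = sum(w for says_fake, w in votes.values() if says_fake)
    return total > 0 and fake / total > threshold

def settle_arbitration(votes: Votes, final_verdict_fake: bool) -> dict[str, float]:
    """After expert arbitration, voters on the overturned side forfeit their
    voting stake (a crude proxy for malicious voting); the pool is split
    among voters on the winning side."""
    pool = sum(w for says_fake, w in votes.values()
               if says_fake != final_verdict_fake)
    winners = {v: w for v, (says_fake, w) in votes.items()
               if says_fake == final_verdict_fake}
    total = sum(winners.values())
    return {v: w / total * pool for v, w in winners.items()} if total else {}

votes = {"alice": (True, 30.0), "bob": (True, 40.0), "carol": (False, 30.0)}
print(tally(votes))                     # True: 70% of stake says "fake"
print(settle_arbitration(votes, True))  # carol forfeits 30; alice and bob split it
```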
As for well-capitalized actors who might try to buy credibility through the staking mechanism, Lan Hu recommends raising the long-term cost of deception, not only in funds but also in time, historical records, reputation systems, and legal liability. For example, penalized accounts are publicly marked; their subsequent content requires higher stakes; repeated penalties sharply reduce an account’s credibility; and severe cases may even carry legal consequences.
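One way to encode that escalating cost, sketched in Python; the doubling multiplier and three-strike cutoff are invented for illustration, not part of Lan Hu’s proposal:

```python
from dataclasses import dataclass

@dataclass
class Reputation:
    penalties: int = 0   # times this account has been slashed

    def required_stake(self, base: float) -> float:
        """Each past penalty doubles the stake needed to post again,
        making repeated faking exponentially expensive."""
        return base * (2 ** self.penalties)

    @property
    def banned(self) -> bool:
        # Illustrative cutoff: three strikes and the account is frozen.
        return self.penalties >= 3

rep = Reputation(penalties=2)
print(rep.required_stake(100.0))   # 400.0: prior offenders pay more up front
```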