It’s International Fact-Checking Day. Refresh your AI identification skills
AI-generated content is everywhere these days, making it increasingly difficult to separate fact from fiction, particularly when it comes to breaking news.
Look no further than the Iran war. Since the U.S. and Israel attacked Iran on Feb. 28, researchers have identified an unprecedented number of false and misleading images that were generated using artificial intelligence and have reached countless people around the world. Among them: fake footage of bombings that never happened, images of soldiers who were supposedly captured, and propaganda videos created by Iran that depict President Donald Trump and others as blocky, Lego-like miniatures.
Today, the 10th annual International Fact-Checking Day, provides a good opportunity to look at these evolving challenges.
Misinformation created with AI is being shared with unprecedented speed from an endless number of sources. From the outset of the Iran war, accounts from all sides of the conflict promoted such content.
The Institute for Strategic Dialogue, which tracks disinformation and online extremism, has been examining social media posts around the Iran war. Among its findings was a group of roughly two dozen X accounts, many with blue check verification, that regularly post AI-generated content and collectively gained more than one billion views since the conflict began.
Here are some tips for distinguishing AI-generated content from reality in an online world where that continues to get harder.
Look for visual cues
When AI-generated images first began spreading widely online, there were often obvious tells that could identify them as fabricated. Perhaps a person had too few — or too many — fingers or their voice was out of sync with their mouth. Text may have been nonsensical. Objects were frequently distorted or missing key components. As the technology continues to evolve, these clues aren’t as common as they once were, but it’s still worth looking for them. Watch for inconsistencies such as a car that is in a video one moment and gone the next or actions that aren’t possible according to the laws of physics. Some images may also be overly polished or have an unnatural sheen.
Seek out a source
AI-generated images get shared over and over again. One way to determine their authenticity (or lack thereof) is to hunt for their origin. Using a reverse image search is a simple way to do this. If you’re looking at a video, take a screenshot first. This can lead to a social media account that specifically generates AI content, an older image that is being misrepresented, or something entirely unexpected.
Listen to the experts
Look for multiple verified sources that can help authenticate the image. That can mean a fact-check from a reputable media outlet, a statement from a public figure, or a social media post from a misinformation expert. These sources may have more advanced techniques for identifying AI-generated content, or access to information about the image that is not available to the general public.
Make use of technology
There are many AI detection tools that can be a helpful place to start. But be wary, as they are not always correct in their assessments. Images that have been generated or altered with AI using Google’s Gemini app include an invisible digital watermark called SynthID, which the app can detect. Other AI creation tools have added visible watermarks to the content they generate. Those are often easy to remove, though, meaning the absence of such a watermark is not proof that an image is genuine.
Slow down
Sometimes it’s just about going back to basics. Stop, take a breath and don’t immediately share something you don’t know is real. Bad actors are often counting on people letting their emotions and existing viewpoints guide their reactions to content. Looking at the comments may provide clues about whether the image you’re looking at is real: another user might have noticed something you didn’t or been able to find the original source. Ultimately, though, it’s not always possible to determine with 100% accuracy whether an image is AI-generated, so remain alert to the possibility that it might not be real.
See something that looks false or misleading? Email us at [email protected].
Find AP Fact Checks here: