
Honor’s New AI Detection System Stops Deepfakes From Fooling You


In 2024, President Donald Trump’s account on X shared a fake image of Taylor Swift that depicted her as a supporter of his campaign. Then there was that uncanny AI-generated clip of The Joker and other popular figures joining Lil Yachty’s dynamic stage entrance at the Lyrical Lemonade Summer Smash Festival in 2021. At first glance, these incidents were obviously products of artificial intelligence. Today, it’s not so easy.

Someone could release a photo of you doing something you never did, and it would be hard to contest. AI has become so skilled at mimicking voices, faces, and entire movements that it’s hard to tell reality from a well-fabricated lie. 

Citing statistics from the Entrust Cybersecurity Institute and Deloitte on the rising frequency of deepfake attacks and how difficult they are to detect, smartphone maker Honor is launching its own detection technology. It’s set to roll out globally in April 2025. Here’s how it will detect manipulated content in real time.

A detection system that sees better than the human eye

Deepfake of Taylor Swift advocating for Donald Trump presidency
Image: Deadline

Honor introduced the AI Deepfake Detection technology at the Berlin Internationale Funkausstellung (IFA) 2024 event. The event brought together over 1,800 global exhibitors and attracted more than 215,000 visitors from 138 countries. It was the perfect avenue for Honor to launch the Magic V3 foldable phone, MagicPad 2 tablet, and other new devices.

“While the rise of AI has brought incredible advancements, it also poses unseen challenges such as the proliferation of sophisticated deepfakes. According to Entrust Cybersecurity Institute, a deepfake attack happens every five minutes in 2024.

These manipulated images, audio recordings, and videos are becoming increasingly difficult to detect, with Deloitte’s 2024 Connected Consumer Study revealing that 59% of respondents struggle to tell the difference between human-created and AI-generated content.

While 84% of those familiar with generative AI believe such content should be labeled, HONOR recognizes that proactive detection measures and industry collaboration are crucial for robust protection.”

Honor

Deepfake abuse statistics
Image: Honor

Their AI detection system analyzes content in videos and images, immediately warning you if something seems manipulated. It looks for pixel-level imperfections, border artifacts, inter-frame continuity issues, and inconsistencies in facial features and hairstyles.

Pixel-level imperfections occur because AI struggles to generate textures consistently. Even high-quality deepfakes can have tiny distortions, smudges, or unnatural lighting in the eyes, skin, or teeth. 
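Honor hasn’t published how its detector works, but the idea behind a pixel-level check can be sketched in a few lines. The toy function below (the names, patch size, and threshold are all illustrative assumptions, not Honor’s implementation) flags image patches whose texture variance is suspiciously low, the kind of over-smoothed skin that generators often produce:

```python
import numpy as np

def local_variance_map(gray: np.ndarray, patch: int = 8) -> np.ndarray:
    """Texture variance of each non-overlapping patch of a grayscale image."""
    h, w = gray.shape
    h, w = h - h % patch, w - w % patch  # trim to a multiple of the patch size
    tiles = gray[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return tiles.var(axis=(1, 3))

def flag_oversmooth(gray: np.ndarray, patch: int = 8, z: float = 2.0) -> np.ndarray:
    """Boolean map of patches whose texture variance is unusually low
    compared with the rest of the image -- a crude over-smoothing flag."""
    v = local_variance_map(gray, patch)
    return v < max(v.mean() - z * v.std(), 0.0)
```

A real detector would learn these statistics from data rather than use a fixed z-score cutoff, but the principle is the same: generated regions often lack the natural sensor noise of a camera.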

Border artifacts, meanwhile, happen when algorithms blend a generated face onto a real person’s body. There are bound to be unnatural transitions around the edges, such as blurry borders, uneven lighting, or skin tones that look off. These flaws are dead giveaways to the system, even when your eyes can’t ordinarily detect them.
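One simple way to capture that “pasted-on” seam, sketched below purely for illustration (this is an assumed heuristic, not Honor’s algorithm), is to compare edge strength along the outline of a candidate face region against edge strength inside it. A blended face tends to leave an unnatural gradient ring around its border:

```python
import numpy as np

def border_score(gray: np.ndarray, box: tuple) -> float:
    """Ratio of gradient strength on the outline of `box` (top, left,
    bottom, right) to gradient strength inside it. A high ratio hints
    at a blend seam around a pasted-in region."""
    gy = np.abs(np.diff(gray, axis=0))          # vertical gradients
    gx = np.abs(np.diff(gray, axis=1))          # horizontal gradients
    grad = gy[:, :-1] + gx[:-1, :]              # combined gradient magnitude
    t, l, b, r = box
    ring = np.concatenate([grad[t, l:r], grad[b - 1, l:r],
                           grad[t:b, l], grad[t:b, r - 1]])
    inner = grad[t + 1:b - 1, l + 1:r - 1]
    return float(ring.mean() / (inner.mean() + 1e-8))
```

In practice the candidate box would come from a face detector, and the decision threshold would be learned, but a score far above 1.0 is the kind of signal such a check looks for.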

Related: Honor Launches Magic 7 Pro Phone In Europe

Fighting deepfakes privately can be a thing

Mark Zuckerberg Imagine Me generated images on Threads
Image: Meta

The wait for Honor’s AI Deepfake Detection system seems long. If implemented with on-device processing, the feature would work locally on your phone’s chipset, reducing exposure to data breaches or misuse. If that same data were processed in the cloud instead, malicious actors would have far more opportunities to intercept and abuse it.

The use cases are almost endless, and social media is the area that needs the most attention. A single post can spread misinformation widely, so flagging manipulated content before it travels across platforms is essential. You could also use the feature to identify and challenge deepfakes that could harm reputations.

Banks and verification services can stop fraudulent attempts by requiring video-based identity authentication. When a customer tries to verify their identity through a video call or a recorded submission, the system checks whether the face in the video belongs to a real, live person. 
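For video, the inter-frame continuity signal mentioned earlier matters most: a genuine recording changes smoothly from frame to frame, while spliced or generated segments can cause a per-frame authenticity score to lurch. The helper below is a minimal sketch of that idea (the score source and the jump threshold are assumptions for illustration):

```python
import numpy as np

def flag_discontinuities(scores, jump: float = 0.3) -> np.ndarray:
    """Indices of frames whose per-frame realness score changes sharply
    from the previous frame -- a toy stand-in for an inter-frame
    continuity check in a video verification pipeline."""
    s = np.asarray(scores, dtype=float)
    return np.flatnonzero(np.abs(np.diff(s)) > jump) + 1
```

Here `scores` would come from a frame-level classifier; a verification service could then reject a submission whose score sequence contains such jumps.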




