Relying on AI Detectors Raises Censorship Concerns After Real Videos Are Labeled as Fake

The idea of using AI to detect, and perhaps even censor, AI-generated content online has gained momentum over the last year. However, if recent tests are to be believed, the accuracy of such technology is far from perfect, meaning genuine content could be wrongly censored if these tools are trusted too readily.

In the fight against deepfake videos, Intel has released a new system named “FakeCatcher” that can allegedly distinguish between genuine and manipulated digital media. The system’s effectiveness was put to the test using a mixture of real and doctored clips of former President Donald Trump and current President Joe Biden. FakeCatcher reportedly relies on photoplethysmography, a technique that detects subtle changes in blood circulation visible in the skin, combined with eye-movement tracking, to identify and expose deepfakes.

Ilke Demir, a scientist on the Intel Labs research team, explains that the process judges the authenticity of content against physiological signals such as a person’s blood flow changes and the consistency of their eye movements, the BBC reported.
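To illustrate the general idea behind video-based blood-flow detection (remote photoplethysmography), here is a minimal sketch in Python. It is not Intel’s FakeCatcher implementation; the skin region, frequency band, and threshold logic are illustrative assumptions only. The sketch averages the green channel over a patch of skin in each frame and checks how much of the signal’s power falls in a plausible heart-rate band, since genuine faces on camera show faint periodic colour changes as blood circulates, while many synthetic faces do not.

```python
# Illustrative sketch of remote photoplethysmography (rPPG), NOT Intel's FakeCatcher.
# Region, band limits, and demo data below are assumptions for demonstration.

import numpy as np

def ppg_signal(frames, region):
    """Average green-channel intensity inside a (y0, y1, x0, x1) skin region, per frame."""
    y0, y1, x0, x1 = region
    return np.array([frame[y0:y1, x0:x1, 1].mean() for frame in frames])

def heart_band_power_fraction(signal, fps, low_hz=0.7, high_hz=4.0):
    """Fraction of spectral power in a plausible heart-rate band (~42-240 bpm)."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    total = spectrum[1:].sum()  # ignore the DC component
    return spectrum[band].sum() / total if total > 0 else 0.0

# Synthetic demo: 10 seconds of 30 fps "video" with a faint 1.2 Hz (72 bpm) pulse
# added to the green channel of an otherwise flat gray frame.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
frames = []
for ti in t:
    frame = np.full((64, 64, 3), 128.0)
    frame[..., 1] += 0.5 * np.sin(2 * np.pi * 1.2 * ti) + np.random.normal(0, 0.2, (64, 64))
    frames.append(frame)

signal = ppg_signal(frames, region=(16, 48, 16, 48))
print(f"Heart-rate-band power fraction: {heart_band_power_fraction(signal, fps):.2f}")
# A real face tends to score high here; a clip with no pulse-like signal scores low.
```

A production detector would, of course, add face and skin detection, motion compensation, and a trained classifier on top of signals like this, which is where accuracy problems and false positives can creep in.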
