5 min read
The Rise of Synthetic Media: How Can You Tell If a Video Is AI-Generated?
In the swiftly evolving digital era, we are witnessing an unprecedented surge in AI-generated content that is remarkably convincing. This content can be crafted instantaneously, right at your fingertips. Yet, the critical question remains: can one discern whether a video is AI-generated?
The Problem
The growing sophistication of AI-generated videos has made it increasingly challenging to distinguish them from authentic ones, potentially undermining the value of human authenticity and creativity. As social media platforms are inundated daily with a blend of AI-generated and genuine videos, users face a growing struggle to discern truth from deception. The proliferation of deepfake technology, which uses AI to create convincing videos that misrepresent people and events, further blurs the line between reality and falsehood. This poses significant threats across industries, facilitating identity fraud, unsolicited communications, and a host of other issues. The ease with which AI can now be leveraged for deception is particularly alarming, with potential implications even for national security.
A prime example of this trend occurred in December 2023, when over 100 deepfake video advertisements featuring UK Prime Minister Rishi Sunak circulated on Facebook. Similarly, in February 2024, a deepfake video of US President Joe Biden calling for military conscription in Jordan went viral, further illustrating the potential for AI-generated content to mislead and manipulate public perception.
Consider the videos created from a single reference image using Emo Portrait Alive. If the gap between real and fake content is not addressed, it can lead to widespread misinformation and uncertainty; if we fail to tackle this, we will all gradually lose trust in digital media.
The proposed EU AI Act offers some guidance: it sets out transparency requirements for content generated by General-Purpose Artificial Intelligence (GPAI), including AI systems that generate or manipulate deepfake content. In a world where genuine and honest actors are not always the norm, we need more effective mechanisms to confront those with malicious intent.
The Concept of Traceability
Traceability in digital content is the capacity to verify the source, legitimacy, and journey of a piece of content from its creation to its present state. This is particularly important today, as content is easily generated, distributed, and manipulated.
First, traceability can establish where a piece of content originated: for example, the mobile phone on which a photo was captured, the author of a book, or the tool used to produce a digital blueprint.
Second, traceability also involves tracking the changes made to a piece of content over time. Following a piece of content's journey can help prove whether it has been altered since its creation: for example, revisions to a document, changes to a photo's metadata, or the many platforms an original video has been reshared on.
Third, traceability strengthens accountability and transparency. When we can trace a piece of content back to its source, it becomes easier to assert the creator's rights or hold accountable those responsible for changes or manipulations. This is particularly relevant in areas such as journalism, academic research, and the judiciary.
In principle, traceability in digital content acts as a kind of family tree, helping us follow the life cycle of a piece of content. It can be an effective tool for preserving the integrity and trustworthiness of content in the digital space.
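To make the "family tree" idea concrete, here is a minimal, purely illustrative sketch in Python of a provenance log. Each entry stores a hash of the content at that point in its life and a link to the previous entry, so later tampering with the history becomes detectable. The function names and fields here are hypothetical, not part of any specific standard.

```python
import hashlib
import json
import time

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of some bytes as a hex string."""
    return hashlib.sha256(data).hexdigest()

def record_version(log: list, content: bytes, note: str) -> dict:
    """Append a provenance entry that links this version to the previous one."""
    entry = {
        "version": len(log) + 1,
        "timestamp": time.time(),
        "note": note,
        "content_hash": sha256_hex(content),
        # Each entry points at the hash of the previous entry, forming a chain.
        "prev_entry_hash": sha256_hex(json.dumps(log[-1], sort_keys=True).encode()) if log else None,
    }
    log.append(entry)
    return entry

# Hypothetical lifecycle of a single piece of content.
provenance: list = []
record_version(provenance, b"original capture", "captured on device")
record_version(provenance, b"original capture, colour corrected", "edited in post")
print(json.dumps(provenance, indent=2))
```

Anyone holding this log can recompute the hashes and spot whether a version has been silently swapped out or an edit has been hidden.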
Journey from Camera Lens to Video Format
The camera plays a fundamental role in capturing raw video footage. The overview below shows how video footage moves from lens to screen; a short code sketch of the encode/decode steps follows the list.
Lens: The camera lens captures light from a scene and focuses it onto the image sensor inside the camera.
Encoder: The sensor converts the light into electrical signals that represent each image. The raw footage is then processed frame by frame and compressed into a digital format (video encoding).
Transmission: The encoded video can be easily saved, broadcast, and streamed to countless devices.
Decoder: The encoded video is transformed back into a format that can be displayed on a screen.
Screen: The decoded video data is then displayed on a screen.
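The encode/decode steps can be sketched in a few lines using OpenCV. This is a hedged illustration of the round trip, not how any particular camera firmware works: synthetic frames stand in for the sensor output, get compressed into a video file, and are then decoded back into displayable frames.

```python
import numpy as np
import cv2  # pip install opencv-python

width, height, fps = 640, 480, 30

# Encoder: compress raw frames into a digital video format (here, MPEG-4).
fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("demo.mp4", fourcc, fps, (width, height))
for i in range(60):
    # Stand-in for the sensor output: a synthetic frame that changes over time.
    frame = np.full((height, width, 3), fill_value=(i * 4) % 255, dtype=np.uint8)
    writer.write(frame)
writer.release()

# Decoder: read the encoded file back into displayable frames.
cap = cv2.VideoCapture("demo.mp4")
decoded = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    decoded += 1  # In a player, each decoded frame would now be sent to the screen.
cap.release()
print(f"decoded {decoded} frames")
```

Every hop in this pipeline, from capture to re-encoding on a sharing platform, is a point where content can be altered, which is exactly why traceability matters.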
Blockchain Application in Traceability
Blockchain technology, in simple terms, is a decentralised, distributed database used to record transactions across a network so that no record can be altered retroactively without also revising every subsequent block. No single entity or authority controls it, which allows users to verify and audit transactions or changes independently. Because of these properties, blockchain technology emerges as a promising tool for authenticating the validity of videos.
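As a toy illustration of the core idea (the hash-linking, not a real blockchain network with consensus or mining), the sketch below appends records to a chain in which each block commits to the hash of the previous block. Changing any earlier record breaks every link that follows, which is what makes retroactive edits detectable.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, record: str) -> None:
    """Append a block that commits to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "record": record, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """Re-check every link; any retroactive edit invalidates the chain."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

chain: list = []
append_block(chain, "video_hash=abc123, device=camera-01")
append_block(chain, "video_hash=def456, device=camera-02")
print(is_valid(chain))           # True
chain[0]["record"] = "tampered"  # Retroactive edit...
print(is_valid(chain))           # False: the later links are now broken
```

With that mechanism in mind, here are two examples of blockchain technology being applied to video traceability: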
- Nodle, in collaboration with Adobe and the Linux Foundation, brings ContentSign to the Content Authenticity Initiative. This establishes a potential specification for media verification that demonstrates data integrity from the moment of capture using blockchain. The process includes a digital stamp verifying that an authentic camera captured the video; the video is signed with a private key (known only to the authenticated camera), and a trace of the video is broadcast to a blockchain.
- PROVER is another approach to traceability: a technology for validating the authenticity of video content. During recording, the user moves the phone in particular directions (up, down, left, right, and diagonally), generating a unique 'swype-code' pattern for every video. Once the video is recorded, PROVER stores that unique 'swype-code' together with a hash of the video file. This makes it an easily accessible option for most video creators, and the cost of the service is only 6 cents ($0.06).
PROVER can be implemented either as a standalone web or mobile app, or via APIs and extensions integrated into third-party applications. The solution is based on proprietary video analysis algorithms combined with a decentralised, blockchain-based registry.
Common use cases include property insurance, educational projects, gaming, performance reports, proof of video authorship, traffic-violation videos, remote monitoring, legal content videos, mortgage approvals, and many more.
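Both approaches above ultimately rest on the same pattern: hash the captured video, sign that hash with a key held by the capture device or app, and anchor the result somewhere tamper-evident. The sketch below shows just the hash-and-sign step using an Ed25519 key from the `cryptography` package; it is a generic illustration under those assumptions, not Nodle's or PROVER's actual implementation.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519  # pip install cryptography

# In a real deployment the private key would live inside the capture device
# (ideally in secure hardware); here we simply generate one for illustration.
device_key = ed25519.Ed25519PrivateKey.generate()

def fingerprint(video_bytes: bytes) -> bytes:
    """Compute a SHA-256 fingerprint of the raw video file."""
    return hashlib.sha256(video_bytes).digest()

def sign_capture(video_bytes: bytes) -> bytes:
    """Sign the fingerprint so anyone with the public key can verify its origin."""
    return device_key.sign(fingerprint(video_bytes))

video = b"...raw bytes of the captured video file..."
signature = sign_capture(video)

# A verifier only needs the public key, the video, and the signature;
# the (hash, signature) pair is what would be anchored on a blockchain.
device_key.public_key().verify(signature, fingerprint(video))
print("signature verified")
```

If even one byte of the video changes after capture, the fingerprint no longer matches and verification fails, which is what ties the recorded file back to the moment and device of capture.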
Conclusions
We took a deep dive into how AI can make fake videos that look just like the real deal, and how tricky it can be to figure out what's real and what's not. While this technology is undeniably impressive, its misuse could create serious challenges for our society.
As society grapples with the ethical, legal, and societal ramifications of AI-generated content, it is imperative to understand the complexities surrounding its creation and dissemination so that we can make better-informed decisions about AI-generated videos.
While blockchain holds a lot of promise, it is vital to note that writing manipulated video data to a blockchain does not make it genuine. Like any technology, blockchain is not immune to malicious use.