Recently I was considering the problem of deep-faked videos and photoshopped images. The use case I had in mind was this: suppose a news organization plays an edited clip of a video that appears to show someone saying or doing something they never said or did.
How to counter this? Perhaps the party in question could release the original, full video along with some mechanism that verifies it is “official,” something the edited version would lack.
I believe the standard technique would be RSA digital signing, which uses asymmetric (public-key) cryptography.
User Gilles ‘SO- stop being evil’ on Stack Overflow describes it well:
To use RSA for signing, Alice takes a hash of the message, encrypts the hash using her own private key, and appends the result (this is the signature) to the message. [..] Bob can decrypt the signature using Alice’s public key and see if [his hash of the message] matches. If it does, it must have been encrypted using Alice’s private key, which only she has, so it must have come from Alice.
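To make the sign-then-verify flow above concrete, here is a minimal sketch in Python using the third-party `cryptography` package (`pip install cryptography`). The key pair, the placeholder video bytes, and the `is_authentic` helper are all my own illustrative inventions; note also that modern APIs hash and sign in one call rather than “encrypting the hash” by hand as the quote loosely describes.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Alice generates a key pair; in practice the private key stays offline.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

original_video = b"...stand-in for the original, unedited video bytes..."

# Alice signs: the library hashes the message and signs the digest.
signature = private_key.sign(
    original_video,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

def is_authentic(message: bytes, sig: bytes) -> bool:
    """Bob's check: does the signature match this exact byte stream?"""
    try:
        public_key.verify(
            sig,
            message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

print(is_authentic(original_video, signature))             # True
print(is_authentic(b"...edited clip bytes...", signature))  # False
```

Any tampering, even a single changed byte in an edited clip, makes verification fail, which is exactly the property you’d want for an “official release” mechanism.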
I also found this article:
A Method for verifying integrity & authenticating digital media
Ted Roche suggests taking a look at Web of Trust, which is interesting.
But perhaps this is all a waste of time. With respect to the news media, we now live in a world where declarations can be made without verification; “proof” is no longer important. Someone can edit a video or document (whether derived from authentic sources or not), state whatever the fuck they want about it, and it will have an impact, potentially a damaging one, with or without an RSA signature attached.
We need to work on making Proof a thing.