Deepfake is an AI-based technology used to produce
or alter video content so that it presents something that didn't, in fact,
occur. The term is named for a Reddit user known as deepfakes who, in December
2017, used deep learning technology to superimpose the faces of celebrities
onto people in pornographic video clips. The term, which applies to both the
technology and the videos created with it, is a portmanteau of deep learning and fake.
Deepfake video is created using two competing AI systems -- one called the
generator and the other called the discriminator. The generator creates a fake
video clip and then asks the discriminator to determine whether the clip is
real or fake. Each time the discriminator correctly identifies a clip as fake,
its feedback tells the generator what to avoid when creating the next clip.
Together, the generator and
discriminator form what is called a generative adversarial network (GAN).
The first step in establishing a GAN is to identify the desired output and
create a training dataset for the generator. Once the generator begins
producing output of acceptable quality, its clips can be fed to the discriminator.
As the generator gets better at
creating fake video clips, the discriminator gets better at spotting them.
Conversely, as the discriminator gets better at spotting fake videos, the
generator gets better at creating them.
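The adversarial loop described above can be sketched in a few lines of code. The example below is a hypothetical toy, not a video model: the "real" data are simply numbers drawn from a Gaussian distribution, the generator is a linear map of random noise, and the discriminator is a logistic-regression classifier. All variable names, learning rates, and the target distribution are illustrative assumptions, not part of any real deepfake system.

```python
# Toy GAN on 1-D data: the generator learns to mimic samples from a
# Gaussian, guided only by the discriminator's feedback.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
REAL_MEAN, REAL_STD = 4.0, 1.0   # the "real" data distribution (assumed)
a, b = 1.0, 0.0                  # generator params: G(z) = a*z + b
w, c = 0.0, 0.0                  # discriminator params: D(x) = sigmoid(w*x + c)
lr_d, lr_g, batch = 0.05, 0.05, 64

for step in range(2000):
    real = rng.normal(REAL_MEAN, REAL_STD, batch)  # genuine samples
    z = rng.normal(0.0, 1.0, batch)                # random noise input
    fake = a * z + b                               # generator's attempt

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradient ascent on log D(real) + log(1 - D(fake))).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: use the discriminator's verdict as a training
    # signal -- push D(fake) toward 1 (non-saturating generator loss).
    d_fake = sigmoid(w * fake + c)
    signal = (1 - d_fake) * w        # gradient of log D at each fake sample
    b += lr_g * np.mean(signal)
    a += lr_g * np.mean(signal * z)

print(f"generator offset after training: {b:.2f} (target mean {REAL_MEAN})")
```

As the two models push against each other, the generator's offset `b` should drift from 0 toward the real mean, mirroring the feedback loop described above: every time the discriminator catches a fake, its output tells the generator which direction to adjust.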
Until recently, video content was
difficult to alter in any substantial way. Because deepfakes are
created through AI, however, they don't require the considerable skill it
would otherwise take to produce a realistic fake video. Unfortunately, this means
that just about anyone can create a deepfake to promote a chosen agenda.
One danger is that people will take such videos at face value; another is that
people will stop trusting the validity of any video content at all.