How long before my computer can hallucinate an entire movie?

2018 is the year the 'deepfake' trend started. Photoshop is so last year; the kids nowadays use a machine-learning tool to automatically replace one person's face with another in a video. Obviously, since this is the internet age, people are using it for two things: fake celebrity porn and inserting Nicolas Cage into random movies.

The Deepfake algorithm is able to digitally hallucinate realistic video.

At the moment the process is simple but processor-intensive... it can take several days to generate a short clip of plausible footage at low resolution. The pornographic results have been banned by the big-name websites, and so the media hoo-ha has started to die down. But the can of worms is open. They are wriggling out slowly.

This machine-learning system makes use of multiple open-source libraries (Keras with a TensorFlow back-end). Its creator trained a neural network to reconstruct a photorealistic human face from a distorted input image. After several hours of training on many distorted images of a particular person's face, it becomes possible to swap out the distorted input image and replace it with a non-distorted image of a different face. The algorithm then 'corrects' the new face to more closely resemble the imagery it was trained on. The results are remarkable. If you haven't yet seen them, go do some Googling now.
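To make the training trick concrete, here is a deliberately tiny toy sketch of the core idea: learn to undo a known distortion by training only on one person's images, then feed a *different* face through the learned corrector. The real system uses a deep autoencoder in Keras; this linear stand-in (plain NumPy, faces reduced to small vectors, a matrix as the "distortion") is an illustration of the reconstruct-from-distorted-input principle, not the actual Deepfake code.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 16                        # size of a tiny "face" vector (a stand-in for pixels)
W = rng.normal(size=(D, D))   # a fixed distortion applied to every training input

# Many "images" of person A, plus their distorted versions used as network input.
faces_A = rng.normal(size=(200, D))
distorted_A = faces_A @ W.T

# "Training": find the linear map M that best reconstructs the original
# faces from their distorted versions (least squares plays the role of
# gradient descent on a reconstruction loss).
M, *_ = np.linalg.lstsq(distorted_A, faces_A, rcond=None)

# "Face swap": distort a face the corrector has never seen and push it
# through the learned reconstruction.
face_B = rng.normal(size=D)
corrected = (face_B @ W.T) @ M
```

In this linear toy the corrector undoes the distortion exactly, so person B's face comes out unchanged; in the real nonlinear autoencoder, the decoder has only ever learned to produce person A's appearance, which is why the swapped-in face gets pulled toward A's likeness.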

There are various privacy concerns being discussed in the media. More worryingly, this technology, used maliciously, could spread propaganda by manipulating what appears to be reality... Fake imagery is something we have grown accustomed to over the last few years, but fake video could catch out a lot of poorly informed people.

The mass media has been distracted by the content generated by this tool and has missed several interesting things. Things that hint at the way AI will evolve and integrate into society...

This system was created by building on free open-source tools, using techniques that can be learnt online. There is no need for expensive commercial software, no need to be 'taught' in the traditional sense, no need to own specialist hardware. The barriers to entry into the field of cutting-edge AI research have been eroded away. Anybody with an internet connection, intelligence, motivation and time can design and implement cutting-edge AI. The playing field is being truly levelled.

Although the work has been attributed to a single person, this is really a collaborative achievement. The 'engineering' for the system was put in place by the developers of the open-source tools used. Example scripts and instructions for using the software are maintained by an active community... While researching this short article I came across several freely downloadable code bases which build and improve on the ideas used in Deepfake. This advancement is just a small piece in the constantly evolving AI toolkit.

The internet has been awash with talk of applying ethics to AI; great in theory but impossible to implement in the real world. How do you encourage an anonymous collective to act responsibly?

I believe the commercial implications of this technology have not yet been fully appreciated... There is no need to stop at faces: why not construct an algorithm that swaps entire actors in films? Or maybe an algorithm that takes an existing cartoon and up-samples it into something more photorealistic by training on live video. Maybe in the not-too-distant future I'll be able to sketch out a few stick-men battling on a piece of paper and have some AI helpers render it as photorealistic actors, with appropriate sound effects?

AdamTemper.com - Writer of speculative fiction, narrative non-fiction and science articles.
