📺 Deep Video Portraits – SIGGRAPH 2018 | YouTube

Watched Deep Video Portraits - SIGGRAPH 2018 from YouTube

H. Kim, P. Garrido, A. Tewari, W. Xu, J. Thies, M. Nießner, P. Pérez, C. Richardt, M. Zollhöfer, C. Theobalt, Deep Video Portraits, ACM Transactions on Graphics (SIGGRAPH 2018)

We present a novel approach that enables photo-realistic re-animation of portrait videos using only an input video. In contrast to existing approaches that are restricted to manipulations of facial expressions only, we are the first to transfer the full 3D head position, head rotation, face expression, eye gaze, and eye blinking from a source actor to a portrait video of a target actor. The core of our approach is a generative neural network with a novel space-time architecture. The network takes as input synthetic renderings of a parametric face model, based on which it predicts photo-realistic video frames for a given target actor. The realism in this rendering-to-video transfer is achieved by careful adversarial training, and as a result, we can create modified target videos that mimic the behavior of the synthetically-created input. In order to enable source-to-target video re-animation, we render a synthetic target video with the reconstructed head animation parameters from a source video, and feed it into the trained network -- thus taking full control of the target. With the ability to freely recombine source and target parameters, we are able to demonstrate a large variety of video rewrite applications without explicitly modeling hair, body or background. For instance, we can reenact the full head using interactive user-controlled editing, and realize high-fidelity visual dubbing. To demonstrate the high quality of our output, we conduct an extensive series of experiments and evaluations, where for instance a user study shows that our video edits are hard to detect.

Some fascinating work here. Just as the printing press lowered the barrier so that almost anyone can publish a book, soon it won't just be major studios with massive budgets making realistic special effects.

📑 How The Wall Street Journal is preparing its journalists to detect deepfakes | Nieman Lab

Annotated How The Wall Street Journal is preparing its journalists to detect deepfakes (Nieman Lab)
As deepfakes make their way into social media, their spread will likely follow the same pattern as other fake news stories. In an MIT study investigating the diffusion of false content on Twitter published between 2006 and 2017, researchers found that “falsehood diffused significantly farther, faster, deeper, and more broadly than truth in all categories of information.” False stories were 70 percent more likely to be retweeted than the truth and reached 1,500 people six times more quickly than accurate articles.

This sort of research should make it easier to find and stamp out such content from the social media side of things. We need regulations to actually make that happen, however.


👓 How The Wall Street Journal is preparing its journalists to detect deepfakes | Nieman Journalism Lab

Read How The Wall Street Journal is preparing its journalists to detect deepfakes (Nieman Lab)
"We have seen this rapid rise in deep learning technology and the question is: Is that going to keep going, or is it plateauing? What's going to happen next?"