In this era of fake news, an emerging technology could make the situation even worse. Some of you may have seen the scarily accurate lip-syncing videos of Barack Obama known as “deep fakes.”

In those videos, artificial intelligence was used to feed audio into a simulated version of the former president. The face and voice were his, but the lip movements were completely fake.

Taking this ethically sketchy technology a step further, a new version debuts at this year’s SIGGRAPH conference in British Columbia in August. A video accompanying the paper, published by a team from Stanford, shows a sample of what the new technology, called “Deep Video Portraits,” can do.

In a nutshell, the system captures facial-expression and lip-movement data from a source actor, then transfers that data to a video of a target actor. This means anyone could serve as the source actor and have their expressions transferred to the portrait of, say, Donald Trump. Full 3D head position, head rotation, and eye blinking can also be recreated.
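To make the capture-and-transfer idea concrete, here is a minimal, purely illustrative sketch. The real system uses monocular 3D face reconstruction and a neural rendering network; this toy version just represents per-frame face parameters as dictionaries and re-targets them to a new identity. All function and field names here are hypothetical, not from the paper.

```python
# Toy illustration of expression/pose transfer.
# Assumption: each frame is already reduced to a few parameters;
# the actual method estimates these from video and renders photorealistic output.

def capture_parameters(source_frames):
    """Pull motion parameters (expression, head pose, eye state) from each source frame."""
    return [
        {
            "expression": f["expression"],
            "head_pose": f["head_pose"],   # full 3D rotation/position
            "eye_blink": f["eye_blink"],
        }
        for f in source_frames
    ]

def transfer_to_target(params, target_identity):
    """Keep the source's motion but attach the target actor's identity."""
    return [{**p, "identity": target_identity} for p in params]

# Two fabricated example frames from a hypothetical source actor.
source = [
    {"expression": "smile", "head_pose": (0.0, 5.0, 0.0), "eye_blink": 0.0},
    {"expression": "talk",  "head_pose": (0.0, 4.0, 1.0), "eye_blink": 1.0},
]

retargeted = transfer_to_target(capture_parameters(source), "target_actor")
```

The key point the sketch mirrors: the target contributes only identity (the face being rendered), while every frame's motion comes from the source.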

It is evident why many are worried about this technology’s potential to contribute to fake news. You can read the full paper about Deep Video Portraits here.