Computers are becoming rather versatile copycats, thanks to deep-learning algorithms.
Just last year, researchers “trained” machines to transfer the brushstrokes of iconic artists onto any still image. Now, Manuel Ruder and a team of computer scientists from the University of Freiburg in Germany have taken the technology a step further: They’re altering videos. The team’s style transfer algorithm makes clips from Ice Age or the television show Miss Marple appear as living paintings crafted by the likes of Van Gogh, Picasso or any other artist. And the results speak for themselves.
Cutting Through the Layers
Deep-learning algorithms rely on artificial neural networks that operate similarly to the connections in our brain. They allow computers to identify complex patterns and relationships in data by parsing it layer by layer: the deeper the layer, the more abstract the information it extracts.
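The layer-by-layer idea can be sketched in a few lines of Python. This toy network is purely illustrative (the weights and sizes are made up, not learned); it passes an input through stacked layers and keeps each layer's activations, the rough analogue of the features a style-transfer method reads out of a real network:

```python
# Toy illustration of layer-by-layer processing in a neural network.
# The weights here are arbitrary; a real network learns them from data.

def relu(x):
    return [max(0.0, v) for v in x]

def layer(inputs, weights):
    # Each output unit is a weighted sum of the inputs, passed through ReLU.
    return relu([sum(w * v for w, v in zip(row, inputs)) for row in weights])

def forward(inputs, all_weights):
    activations = []          # one entry per layer, shallow to deep
    x = inputs
    for weights in all_weights:
        x = layer(x, weights)
        activations.append(x)
    return activations

# A three-layer "network" with hand-picked weights.
net = [
    [[1.0, -1.0], [0.5, 0.5]],   # layer 1: 2 inputs -> 2 units
    [[1.0, 1.0]],                # layer 2: 2 units -> 1 unit
    [[2.0]],                     # layer 3: 1 unit -> 1 unit
]
acts = forward([3.0, 1.0], net)  # activations for every layer
```

Real style-transfer systems do the same thing at vastly larger scale, reading activations out of a network with millions of learned weights rather than three hand-written layers.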
Last year, researchers at the University of Tübingen demonstrated that it was possible to separate the content of an image from its artistic style using these deep-learning algorithms. In effect, an artist's "style" works like an image filter, regardless of the image's content; you can now add a Starry Night twist to your own images. Ruder and his team built on this work, applying the technique frame by frame to videos.
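In the Tübingen work, "style" is captured by the correlations between a network's feature channels (so-called Gram matrices), while "content" is the feature activations themselves. A bare-bones sketch of that split, using toy numbers in place of real network features, might look like this:

```python
# Sketch of the content/style split behind neural style transfer.
# "features" stands in for a CNN feature map: one row per channel,
# one column per spatial position. Real maps come from a trained network.

def gram(features):
    # Style representation: channel-by-channel correlations, which
    # discard WHERE features occur (that positional info is the content).
    n = len(features)
    return [[sum(a * b for a, b in zip(features[i], features[j]))
             for j in range(n)] for i in range(n)]

def style_loss(f1, f2):
    # Squared difference between the two Gram matrices.
    g1, g2 = gram(f1), gram(f2)
    return sum((a - b) ** 2
               for r1, r2 in zip(g1, g2) for a, b in zip(r1, r2))

# Two "feature maps" with the same features at different positions:
a = [[1.0, 0.0], [0.0, 2.0]]
b = [[0.0, 1.0], [2.0, 0.0]]
# style_loss(a, b) == 0.0: identical style, different content.
```

Because the Gram matrix throws away position, two images can have zero style difference while their content differs completely, which is exactly what lets a painter's style act as a filter over arbitrary photographs.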
Making it Smooth
However, Ruder and company discovered that the algorithm interpreted each frame independently, producing unwatchable videos that flickered when the frames were strung together. As a workaround, they designed a constraint that penalizes deviations between consecutive frames, which reduced the jitter and yielded smooth, eye-pleasing clips.
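The spirit of that constraint can be sketched as a weighted per-pixel penalty: compare the current stylized frame against the previous stylized frame warped along the motion, and switch the penalty off wherever the motion estimate is unreliable (at occlusions, for example). The function and variable names below are illustrative, not the researchers' actual code:

```python
# Rough sketch of a temporal-consistency penalty for stylized video.
# Frames are flattened lists of pixel values; "warped_prev" is the
# previous stylized frame warped along the estimated motion, and "mask"
# is 1 where the motion estimate is reliable, 0 at disocclusions.

def temporal_penalty(current, warped_prev, mask):
    n = len(current)
    return sum(m * (c - w) ** 2
               for c, w, m in zip(current, warped_prev, mask)) / n

current     = [0.5, 0.8, 0.1, 0.9]
warped_prev = [0.5, 0.6, 0.9, 0.9]
mask        = [1.0, 1.0, 0.0, 1.0]   # third pixel: occluded, ignored

# The big mismatch at the occluded pixel costs nothing, while the small
# drift at the second pixel is penalized, nudging consecutive frames
# toward consistency and suppressing the flicker.
```

Minimizing a term like this alongside the usual content and style losses is what turns a sequence of independently stylized frames into something that holds together as video.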
Although the algorithm still struggles with large, fast movements between frames, the preliminary results are quite beautiful. The team recently submitted its findings to the preprint server arXiv.
Now that computer scientists appear to have nailed down video, it’s only a matter of time before we find ourselves inside works of art via a virtual reality headset.