Victor Espinoza generates detailed, surreal digital artwork using a technique called neural style transfer — a method of applying the ‘style’ of one image to another image. The transfer is performed by a neural net (a form of artificial intelligence) that extracts the defining visual elements of the style image and then applies them to the content image.
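In the most widely used formulation of neural style transfer (the approach introduced by Gatys et al.), "style" is represented as the correlations between a network layer's feature channels, captured in a Gram matrix. The following is a minimal NumPy sketch of that idea; the feature arrays here are random stand-ins for the convolutional activations a real network would produce:

```python
import numpy as np

def gram_matrix(features):
    """Style representation: correlations between feature channels.

    features: array of shape (channels, height, width), standing in for
    the activations of one convolutional layer for an image.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (c * h * w)     # (channels, channels), normalized

def style_loss(gram_style, gram_generated):
    """Mean squared difference between two Gram matrices."""
    return float(np.mean((gram_style - gram_generated) ** 2))

# Toy stand-ins for the activations of a style image and a generated image:
rng = np.random.default_rng(0)
style_feats = rng.standard_normal((8, 16, 16))
gen_feats = rng.standard_normal((8, 16, 16))

loss = style_loss(gram_matrix(style_feats), gram_matrix(gen_feats))
```

Because the Gram matrix discards *where* features occur and keeps only how they co-occur, matching it reproduces textures and brushwork without copying the style image's layout.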
Espinoza posts his work on ArtStation, Tumblr, Instagram, and Reddit. Some of his prints are available for purchase as well.
The result is a surreal blending of two or more images that could never be accomplished traditionally (at least not easily). With neural style transfer, you could apply the ominous swirls of Van Gogh’s Starry Night to the Mona Lisa to make it look like a Van Gogh portrait. You could take Picasso’s 1907 self-portrait and apply it to a picture of yourself to transform your selfie into a cubist painting. Style transfer can even be applied to videos, frame by frame.
It’s a process of trial and error, since every result is different and what exactly goes on under the hood of the neural net is opaque. Style transfer artists must tweak various arcane-sounding parameters: how heavily the style is weighted relative to the content, the scale at which the AI looks for various style elements, how many times the neural net will iterate over its own result, and many other factors. It’s even possible to use multiple style images at the same time.
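Those parameters map directly onto the optimization objective: the generated image is updated iteratively to minimize a weighted sum of one content loss and several style losses (one per layer or scale being matched, and per style image). A hedged sketch of how the knobs combine — the loss values below are placeholders, not real network outputs:

```python
# Hypothetical illustration of how the tunable parameters combine into a
# single objective; the numbers are made up, not real network outputs.
content_weight = 1.0     # how strongly the output must track the content image
style_weight = 1000.0    # how strongly it must match the style statistics

# One style term per (style image, layer/scale) pair being matched:
style_losses = {
    ("starry_night", "conv1"): 0.12,   # coarse strokes
    ("starry_night", "conv4"): 0.05,   # fine texture
}
content_loss = 0.30

total_loss = content_weight * content_loss + style_weight * sum(style_losses.values())
# An optimizer (e.g. L-BFGS or Adam) would now nudge the image's pixels to
# reduce total_loss, repeating for the chosen number of iterations.
```

Raising `style_weight` pushes the result toward pure texture; raising `content_weight` keeps the original image more recognizable. A second style image simply contributes additional weighted terms to the sum.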
Every attempt is unique, and it can take many tries to get a worthwhile result. Depending on image resolution and the number of iterations, each attempt can take anywhere from minutes to hours to run. However, once the right parameters are found, they can be reused for that style image and applied to various content images over and over.
Unfortunately, the generation of high quality style transfers is not very accessible. Not only is the process dependent on trial and error, but it also requires powerful and expensive hardware. Style transfer—like most software that uses neural nets—is heavily dependent on GPUs (graphics cards). And the cards that have enough memory to generate high resolution images can cost quite a bit.
If you don’t want to bother learning a command line interface to run the scripts, or don’t have the requisite hardware, there are still ways to generate these images.
Various apps (Pikazo, Guava, Stylator, etc.) have sprung up that yield fast results on both iOS and Android, and a recent update to Adobe Photoshop added a neural style transfer feature. Some developers have even wrapped the standard neural style transfer scripts in a handy GUI to make them easier to use. These apps and implementations almost never offer the same flexibility and control as the command-line-based implementations you can find on GitHub, but they’ll often give you decent results. Online commercial services (Deep Dream Generator, DeepArt.io) also exist that can generate higher quality style transfers.
Style transfer has slowly but steadily been making headway into the mainstream. It’s a fascinating technique, and it provides a small glimpse of the impact artificial intelligence can have on making art.