Machine Learning and Art (2 of 2)
Machine learning and deep learning are increasingly disrupting every sector of society, making it possible, among many other benefits, to improve artificial intelligence and halt the spread of malware. However, scientists are not the only ones interested in these technologies. So are artists…
Although the expression “machine learning” has existed since 1959, it is only fairly recently that this technology, which enables computers to master certain tasks on their own, became widely accessible. In artistic fields, machine learning is used mainly to process, organize and analyze images, but the technology is also beginning to be applied elsewhere, including in music and video games.
This post presents two artistic machine-learning experiments showcased in the session Machine Learning for Art, A Residency at the Google Cultural Institute at the Google I/O 2016 conference, by code artist Mario Klingemann and digital interactions artist Cyril Diagne.
Both artists are currently in residence with the Google Cultural Institute’s CILEx team, which Google’s Damien Henry introduced during a recent interview with CMF Trends.
Mario Klingemann’s Lucid Dreaming
The most recent artist to join the CILEx team in residence is Mario Klingemann, a code artist, i.e., someone who “uses code and algorithms to generate things that look interesting and that some could qualify as art.”
In his first experiments with deep learning, the German artist discovered, among other things, DeepDream, a computer vision program developed by Google. DeepDream uses a convolutional neural network to identify and reinforce the structures it detects in images. In simpler terms, the tool can generate rather psychedelic works from ordinary-looking images.
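For readers curious about the mechanics, the core DeepDream trick can be sketched in a few lines of Python. The sketch below uses the PyTorch and torchvision libraries rather than Google’s original code, and the layer index, step count and step size are arbitrary illustrative choices: an image is pushed through a pretrained convolutional network, and its pixels are nudged, step by step, in whatever direction makes a chosen layer respond more strongly, so the structures the network “sees” become exaggerated.

```python
# A minimal sketch of the DeepDream principle, not Google's actual implementation:
# repeatedly adjust the input image's pixels so that a chosen layer of a pretrained
# convolutional network activates more strongly (gradient ascent on the image).
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
# any pretrained convolutional network will do; VGG16's feature stack is easy to index into
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.to(device).eval()

def deep_dream(image_path, layer_index=20, steps=30, step_size=0.02):
    img = T.Compose([T.Resize(512), T.ToTensor()])(
        Image.open(image_path).convert("RGB")
    ).unsqueeze(0).to(device)
    img.requires_grad_(True)

    for _ in range(steps):
        x = img
        for i, layer in enumerate(model):
            x = layer(x)
            if i == layer_index:              # stop at the layer we want to amplify
                break
        loss = x.norm()                       # "how strongly does this layer respond?"
        loss.backward()
        with torch.no_grad():
            # move the pixels in the direction that increases the activation
            img += step_size * img.grad / (img.grad.abs().mean() + 1e-8)
            img.grad.zero_()
            img.clamp_(0, 1)                  # keep valid pixel values

    return T.ToPILImage()(img.detach().squeeze(0).cpu())

# dreamed = deep_dream("input.jpg")   # "input.jpg" is a placeholder path
# dreamed.save("dreamed.jpg")
```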
“I had fun with DeepDream for several weeks but I was convinced that I could somehow improve its code,” explained Mario Klingemann during his presentation at Google I/O 2016.
His objective was to enable users to exercise better control over the images created by the tool and thus transform DeepDream into Lucid Dreaming.
To provide that level of control, Mario Klingemann modified DeepDream so that users can adjust images’ colour saturation levels. He also made it possible to temporarily switch off certain categories of forms recognized by DeepDream, allowing the algorithm to produce more abstract images (a rough sketch of this second trick follows below).
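That second trick can be approximated in a few lines. Building on the hypothetical deep_dream() sketch above, a forward hook silences selected feature channels of the amplified layer so that the shapes those channels respond to are never reinforced; the channel indices below are arbitrary placeholders, not the ones Klingemann used.

```python
# A hedged sketch of "muting" certain recognized forms: zero out selected channels of the
# amplified layer during the forward pass so their patterns never get reinforced.
# Reuses the model and deep_dream() defined in the previous sketch.
import torch

MUTED_CHANNELS = [5, 17, 42]          # placeholder channel indices

def mute_channels(module, inputs, output):
    mask = torch.ones_like(output)
    mask[:, MUTED_CHANNELS] = 0       # silence those feature maps
    return output * mask              # the returned tensor replaces the layer's output

# hook the same layer that deep_dream() amplifies (index 20 in the sketch above)
hook = model[20].register_forward_hook(mute_channels)
# abstract_image = deep_dream("input.jpg", layer_index=20)
hook.remove()                         # restore normal behaviour afterwards
```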
Finally, using the fine-tuning technique, he had DeepDream’s neural network learn to recognize initial caps, so as to modify the overall appearance of the images generated by the tool.
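Fine-tuning itself is a standard technique: take a network already trained on generic photographs, replace its final classification layer and continue training on a small custom dataset so that the features shift toward the new imagery. The sketch below shows the usual PyTorch pattern; the initial_caps/ folder, the 26-class head and the training settings are illustrative assumptions, not Klingemann’s actual setup.

```python
# A generic fine-tuning sketch (not Klingemann's code): keep a pretrained network's features,
# swap in a new classification head, and train only that head on a small custom dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                 # freeze the pretrained feature extractor
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 26)   # e.g. one class per letter (an assumption)
model = model.to(device)

# "initial_caps/" is a hypothetical folder with one subfolder of images per class
data = datasets.ImageFolder(
    "initial_caps/",
    transform=transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()]),
)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                 # a handful of passes is often enough for a small head
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```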
The works Mario Klingemann created with Lucid Dreaming were showcased at the DeepDream: The Art of Neural Networks exhibition held by the Gray Area Foundation for the Arts, among other venues.
Cyril Diagne’s Portrait Finder
“I knew nothing about machine learning as recently as a year ago,” admitted Cyril Diagne at Google I/O 2016. The digital interactions artist began to develop an interest in it when Damien Henry asked him a very simple question: “What could you do with 7 million artefacts?”
That question prompted Cyril Diagne to try reorganizing the works that make up the Google Cultural Institute’s collection in different ways, for example in chronological order or by colour. “You make interesting findings, but I told myself that there had to be something more to do with all of this incredible material and all of this technology.”
That is when the French artist developed an interest in machine learning, particularly in automatic image annotation, which tags images with natural-language labels. He was thus able to index the 7 million artefacts using the same technology that lets you, for example, type the keyword “forest” into Google Photos to find pictures of trees.
“I was astonished by the results I received,” explained Cyril Diagne, who still seemed just as excited about the experience several months later. The machine learning algorithms grouped the images under more than 4,000 different labels, making it possible to bring together paintings from different eras or works sharing unusual themes, such as “ladies in waiting.”
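The Cultural Institute’s pipeline has not been published, but the general indexing idea can be reproduced with off-the-shelf tools: run each image through a pretrained classifier, keep its top predicted labels and build an inverted index from every label to the artworks it describes. In the sketch below, the artworks/ folder and the choice of model are assumptions for illustration only.

```python
# A toy version of the indexing idea: classify each image, keep its top labels,
# and build an inverted index mapping label -> artworks. Not the Cultural Institute's pipeline.
from collections import defaultdict
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).to(device).eval()
preprocess = weights.transforms()              # the resizing/normalization the model expects
categories = weights.meta["categories"]        # human-readable label names

def top_labels(image_path, k=3):
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0).to(device)
    with torch.no_grad():
        probs = model(img).softmax(dim=1).squeeze(0)
    return [categories[i] for i in probs.topk(k).indices.tolist()]

index = defaultdict(list)                      # label -> list of artwork filenames
for path in Path("artworks/").glob("*.jpg"):   # "artworks/" is a placeholder folder
    for label in top_labels(path):
        index[label].append(path.name)

print(index.get("castle", []))                 # e.g. every artwork the model tagged "castle"
```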
“We are going to publish everything shortly,” promised the artist.
After experimenting with these data, Cyril Diagne developed Portrait Finder, a tool that takes a photo of a person and finds a closely resembling portrait among the 40,000 paintings that make up the collection.
To do so, the tool uses FaceNet, a face recognition model also developed by Google.
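Portrait Finder’s code has not been released either, but the matching step can be approximated with the open-source facenet-pytorch reimplementation of FaceNet: each face is converted into a numerical embedding, and the portrait whose embedding lies closest to the visitor’s photo is returned. The portraits/ folder and file names below are placeholders.

```python
# A hedged approximation of Portrait Finder's matching step, not Diagne's implementation:
# embed every face with a FaceNet-style model and return the portrait whose embedding is
# closest to the visitor's photo. Uses the third-party facenet-pytorch package.
from pathlib import Path

import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)                               # detects and crops the face
embedder = InceptionResnetV1(pretrained="vggface2").eval()  # FaceNet-style embedding network

def embed(image_path):
    face = mtcnn(Image.open(image_path).convert("RGB"))
    if face is None:                                        # no face detected
        return None
    with torch.no_grad():
        return embedder(face.unsqueeze(0)).squeeze(0)       # 512-dimensional face embedding

# pre-compute embeddings for the (placeholder) portrait collection
gallery = {p.name: embed(p) for p in Path("portraits/").glob("*.jpg")}
gallery = {name: emb for name, emb in gallery.items() if emb is not None}

def closest_portrait(photo_path):
    query = embed(photo_path)
    # the smaller the distance between embeddings, the more similar the faces
    return min(gallery, key=lambda name: torch.dist(query, gallery[name]).item())

print(closest_portrait("visitor.jpg"))                      # "visitor.jpg" is a placeholder
```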
The result is quite amusing. “Everyone wants to find their double in the database,” says the artist, who has also found portraits resembling celebrities, including U.S. President Barack Obama and actors from HBO’s TV series Silicon Valley.
A technology to watch in every sector
The use of machine learning in artistic fields remains limited, but it should gain ground in the coming years, given that the available tools are constantly improving and artists’ interest in the technology is growing.
Google’s own Magenta project even published its first piece of music generated with machine learning, a 90-second piano melody, on June 1.
Whether it is used to automatically generate a film soundtrack based on what is unfolding on screen, to make archived video content easier to discover or to automatically build certain sections of video games, machine learning has the potential to profoundly transform many facets of arts and culture.
And the greatest innovations to come have probably not even been imagined yet…
You can read the first part of this blog post here.