On Thursday, Nov. 14, Jesse Spencer-Smith, chief data scientist at Vanderbilt University, visited the College of William and Mary to host a workshop titled “A.I. for Artists.” The workshop explored the basics of artificial intelligence and various ways of integrating artificial intelligence into art and music.
Spencer-Smith began the workshop by explaining the differences between how people and artificial intelligence — which he often referred to as deep learning networks — learn new information. He contrasted how people and AIs learn concepts such as language: people take classes, get context and practice, learning from the top down, while deep learning networks learn from the bottom up through a process called back propagation.
“The first thing data scientists talk about is how deep learning networks actually learn language,” Spencer-Smith said. “The second thing is the way we learn languages. We arrive at our understanding of language very differently. We take a class, we get the context, we practice speaking with other people, we learn from top-down almost, but deep learning networks learn bottom-up through a process called back propagation.”
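The bottom-up learning Spencer-Smith describes can be sketched in a few lines of code. The toy network below is a hypothetical example, not from the workshop: a one-hidden-layer network learns to fit a simple line. The forward pass makes a prediction, and back propagation sends the error backward through the layers so every weight knows how to change.

```python
import numpy as np

# Toy illustration of back propagation (not from the workshop):
# a one-hidden-layer network learns to fit y = 2x - 1.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(64, 1))
y = 2 * X - 1

W1 = rng.normal(0, 0.5, size=(1, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(0, 0.5, size=(8, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 0.1

for _ in range(2000):
    # Forward pass: compute the network's current predictions.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                       # gradient of mean-squared error

    # Backward pass: the error flows from the output layer back toward
    # the input, assigning blame to every weight along the way.
    dW2 = h.T @ err / len(X); db2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)     # tanh derivative
    dW1 = X.T @ dh / len(X); db1 = dh.mean(axis=0)

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean())
```

Nothing in the loop resembles how a person studies a language, yet after enough passes the network's predictions converge on the target, which is the point of the comparison.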
Spencer-Smith noted that despite these differences in method, people and AIs end up with comparable levels of understanding of subjects such as language.
“What is really interesting is that back propagation looks nothing like the way we learn,” Spencer-Smith said. “It is completely different. But yet, when we compare what an AI learns from language and what we learn from language, we are beginning to meet in the middle. The capabilities we are beginning to see with AI are on par with what we get.”
Spencer-Smith explained various ways AIs can be used for creative purposes, such as writing, composing music or creating artworks. He presented AI as a helper in the creative process, a tool for generating new ideas for existing projects.
“Deep learning networks can help us be creative in different ways,” Spencer-Smith said. “We get to interact with a computer in a new way. If you want to write something right now, you type it and you type it letter by letter and it becomes words. What you can do now is to type a sentence, start in a particular direction and ask a deep learning network to continue writing for you. See what direction it goes and see what ideas come out. It can often be a really fun way of expressing yourself. The same can be done with music and art.”
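The seed-and-continue interaction Spencer-Smith describes can be illustrated without a deep learning network at all. The sketch below substitutes a simple word-level Markov chain, deliberately far simpler than the models he demonstrated, to show the same pattern: the user types a few words and the model keeps writing. The miniature corpus is an invented placeholder.

```python
import random
from collections import defaultdict

# Toy "continue my sentence" helper.  A word-level Markov chain stands
# in for a deep learning network; the corpus is an invented placeholder.
corpus = ("the poppies bloom in the field and the field glows red "
          "and the painter sees the field and the painter paints").split()

# Bigram table: each word maps to the words observed to follow it.
chain = defaultdict(list)
for first, second in zip(corpus, corpus[1:]):
    chain[first].append(second)

def continue_text(seed, length=8, seed_rng=0):
    """Start from the user's words and keep writing, one word at a time."""
    rng = random.Random(seed_rng)
    words = seed.split()
    for _ in range(length):
        options = chain.get(words[-1])
        if not options:            # no observed continuation: stop early
            break
        words.append(rng.choice(options))
    return " ".join(words)
```

For example, `continue_text("the painter")` extends the seed with words the model has seen follow one another. A deep learning network does the same thing at vastly larger scale, with a learned sense of context instead of a lookup table.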
Spencer-Smith then described another artistic use of AI: creating new art in a particular artistic style. For example, he used an AI to render a photo of poppies he had taken as paintings inspired by historic styles such as impressionism, post-impressionism and ukiyo-e.
“Let’s say you have an artist you really like and want to create art in that style but different,” Spencer-Smith said. “One way to do this is to take two deep learning networks and put them one against the other. One network is designed to discriminate between real and fake. The other network is designed to generate ‘good fakes.’ This process is called generative adversarial networks. Over time, one network learns how to generate artwork in the style of a particular artist.”
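The adversarial setup Spencer-Smith describes can be sketched with two tiny "networks" competing over numbers instead of paintings. In this assumed toy example, the "real art" is simply draws from a normal distribution centered at 3; the discriminator learns to tell real draws from generated ones while the generator learns to fool it. A small weight-decay term is added to the discriminator, a common stabilizer for this kind of training that is not part of the basic description above.

```python
import numpy as np

# Toy generative adversarial setup (an assumed example, not the
# workshop demo).  "Real art" is numbers drawn from N(3, 1); the
# discriminator D and generator G are each a single linear unit.
rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, c = 0.1, 0.0      # discriminator: D(x) = sigmoid(w * x + c)
a, b = 1.0, 0.0      # generator:     G(z) = a * z + b
lr, decay = 0.05, 0.1

for _ in range(3000):
    real = rng.normal(3.0, 1.0, 32)           # "real" samples
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b                          # generated "fakes"

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0,
    # with a small weight decay to keep training stable.
    dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
    w -= lr * ((-(1 - dr) * real + df * fake).mean() + decay * w)
    c -= lr * ((-(1 - dr) + df).mean() + decay * c)

    # Generator step: push D(fake) toward 1 (generate "good fakes").
    df = sigmoid(w * fake + c)
    grad_fake = -(1 - df) * w                 # d(loss)/d(fake sample)
    a -= lr * (grad_fake * z).mean()
    b -= lr * grad_fake.mean()

# Over time the generator's output distribution drifts toward the real
# data: the parameter b, the mean of its output, should approach 3.
```

Replace the numbers with images and the linear units with deep networks, and the same tug-of-war is what lets one network learn to produce work in a particular artist's style.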
Spencer-Smith demonstrated the capabilities of deep learning networks using two websites: http://talktotransformer.com and a program called “MuseNet” on http://openai.com. He explained the functions and uses of both AIs. The first is an AI trained on language that continues a piece of writing based on words the user provides. The second is an AI trained on sound and music that composes a piece in a particular artist’s style or a particular genre. He then encouraged attendees at the workshop to experiment with the software for themselves.
Spencer-Smith also addressed the moral dilemma of ownership that arises when AIs are used to produce art. He challenged attendees by asking whether art created by an AI trained on a particular artist’s work belongs to that artist, to the user of the AI or to the AI itself.
Campbell Scheverman ’20 responded that ownership of the art belongs to the user of the AI.
“I think that someone could look at a certain portfolio of an artist’s work and do the same thing by hand,” Scheverman said. “So, I do not think there is much of a difference in using an AI to copy versus copying yourself.”
Joshua Otten ’23 enjoyed working with the AIs and plans to use them in his own artwork.
“I am very interested in new technologies, especially artificial intelligence,” Otten said. “I wanted to see more about how it worked and what the potential in it was for the future. I enjoy writing music and painting, and I think using AI could potentially help me in fleshing out ideas in the future.”