Forum examines the future of AI, data science

Friday, Nov. 15, McGlothlin-Street Hall at the College of William and Mary was packed as students and faculty members gathered to listen to Jesse Spencer-Smith, chief data scientist at Vanderbilt University’s Data Science Institute. Before taking that position, Spencer-Smith worked as the director of enterprise data science for HCA Healthcare, and he drew on both roles during his talk.

In his discussion, Spencer-Smith reflected on the ability of artificial intelligence to generate reasonable text and explained how artificial intelligence has become a recent and wide-reaching phenomenon. OpenAI, the organization behind the text-generating model he demonstrated, has not released the full algorithm because of the danger it poses in creating spam messages and bots on Twitter.

“Well, imagine you have something like this now, that can be far more convincing, and can write something that’s far longer, so it’s not just a tweet,” Spencer-Smith said. “It can generate a blog post. It could generate a posting on Facebook, be believable, and take whatever particular viewpoint with whatever information you care to include.”

Spencer-Smith defined deep solutions in the context of GPT-2, OpenAI’s new text-generating model, and discussed how its consequences are far-reaching and not easily contained.

“So these are deep solutions, meaning they can do one kind of thing really well, sometimes remarkably well, though what it is not is a general intelligence,” Spencer-Smith said. “So what you will see, if you let this go on long enough, GPT-2 will begin to contradict itself. It will say things that are a little bit nonsensical. It’s getting the concepts right, but it doesn’t have a sense of the full narrative, nor does it know how to end. It never stops.” 

GPT-2 developed its language model on its own after training on 10 million documents gathered from links shared in Reddit posts with a karma score of three or higher. The network was trained to guess the next word of a document given the words that came before it. Essentially, by repeatedly predicting what comes next in a sequence, the network learned to recognize words and phrases similar to the ones it was given. Grammar can be acquired through this learning process, as well as the broader concepts through which words are related.
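
As an illustration of that next-word objective, here is a minimal sketch of asking a pretrained GPT-2 model to predict the most likely next word of a prompt. It assumes the Hugging Face transformers and PyTorch libraries, which are illustrative choices and were not named in the talk.

    # Next-word prediction with a pretrained GPT-2
    # (assumes Hugging Face transformers + PyTorch; illustrative, not from the talk)
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "Students gathered in McGlothlin-Street Hall to hear a talk about"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits      # scores over the whole vocabulary
    next_id = int(logits[0, -1].argmax())    # highest-scoring next token
    print(tokenizer.decode([next_id]))       # the model's guess for the next word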

The algorithm demonstrates how a pretrained network can be used to solve complex problems, which is a new approach in the context of data science.

Spencer-Smith then shifted to explaining the history of AI. AI has been around since the 1950s with the creation of the General Problem Solver. After this initial advancement came the perceptron, an artificial neural network made of a single neuron, and, later, expert systems.

“I think the definition of AI is: what is the hardest thing that we’re working on right now that has to do with cognition or cogitation,” Spencer-Smith said.  

In the 2000s, machine learning became a reality, but data had to be prepared in a specific way for the machine to learn. Deep learning gradually became able to answer previously unsolved problems, addressing them through training rather than explicit programming. Unlike earlier approaches, deep learning does not stop improving as the amount of data grows; it can learn from unstructured data on its own and can be put to use almost instantaneously.

Spencer-Smith described some of the innovations that led to progress in AI technology. Computing power was enhanced through GPUs, which made larger artificial neural networks possible. Image recognition tasks were improved with convolutional neural networks, an architecture loosely modeled on the human visual cortex.

The practice of data science is shifting as deep learning solutions are increasingly used to solve problems.

“Because with pretrained networks, you can solve problems with a lot less training data,” Spencer-Smith said. “And you just saw the example with GPT-2 doing what’s called zero-shot learning, which means you don’t need to give it any training whatsoever.”
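
For readers curious what zero-shot use of a pretrained model looks like in practice, here is a minimal sketch of generating text with GPT-2 and no task-specific training at all. The Hugging Face pipeline API used below is an assumption for illustration, not something referenced at the event.

    # Zero-shot text generation: a pretrained model, a prompt, and no fine-tuning
    # (assumes the Hugging Face transformers library; illustrative only)
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    result = generator(
        "The future of data science is",
        max_length=60,
        num_return_sequences=1,
    )
    print(result[0]["generated_text"])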

There will be less programming, and it will be more important to choose the right pretrained network. Data science projects will be faster. Spencer-Smith also indicated that the shift will have vast effects on the tools data science can create.
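
As a concrete picture of what choosing the right pretrained network can mean, here is a minimal sketch of adapting a pretrained image model to a new task with very little training data. The torchvision ResNet below is an assumed example for illustration, not a model discussed at the event.

    # Transfer learning sketch: reuse a pretrained network and retrain only a
    # small output layer on a new, much smaller dataset.
    # (assumes PyTorch and torchvision; the model choice is illustrative)
    import torch
    import torchvision

    model = torchvision.models.resnet18(weights="DEFAULT")   # pretrained on ImageNet
    for param in model.parameters():
        param.requires_grad = False                          # freeze pretrained layers

    num_classes = 5                                          # e.g. a small custom dataset
    model.fc = torch.nn.Linear(model.fc.in_features, num_classes)

    # Only the new output layer is trained, so far less data is needed.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)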

Spencer-Smith also discussed the skills that data scientists need. Deep learning has become the priority, with traditional machine learning secondary. Hands-on experience with deep learning is essential, and Spencer-Smith encouraged the audience to try out pretrained networks on their own.

“It used to be that programming was enough, but now you have to be thinking at the higher conceptual level. What kind of problem is this? What kind of analogy can I make?” Spencer-Smith said.

Natalie Larsen ’21 said she enjoyed the talk because it provided an opportunity to learn more about AI.  

 “I’m interested in data science as a career, so it was cool to see where it was headed and learn more about AI,” Larsen said.  

Keely Copperthite ’20 also found the event interesting. 

“I have taken a few computer science classes but never anything in data science, so it was interesting to learn more about not only the practical applications, but the skills you need to learn more about the subject,” Copperthite said.  
