Can computers model the human mind? Associate Professor Malte Willer examines potential AI shortcomings

May 20, 2021

For decades, developers have lauded artificial intelligence (AI) as a groundbreaking way of accessing and categorizing information. But how well does AI really understand the information it processes? Malte Willer, Associate Professor of Philosophy at the University of Chicago, is using insights from contemporary philosophy of language and linguistics to investigate foundational questions raised by AI.

"Many ethical questions are raised when using AI and determining what the rules are when applying this technology to scientific inquiry," Willer said. "But there are also foundational epistemological issues: we need to determine to what extent we can rely on these mechanisms and how we can safely use them in, for instance, medicine."

Willer is currently working on a related project through the Neubauer Collegium for Culture and Society in partnership with David Schloen, Professor of Near Eastern Archaeology in the Oriental Institute and Department of Near Eastern Languages and Civilizations, and Samuel Volchenboum, Associate Professor of Pediatrics. The project focuses on ontology design, the practice of specifying how a database's information is categorized and related. The problem has practical significance, for instance when different hospitals attempt to merge their databases: doctors care about how medical data are categorized but often disagree on the best scheme, so merging records across institutions can turn differences in linguistic terminology into conflicting or imprecise information.
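A toy sketch can make the terminology problem concrete. The following is not drawn from Willer's project; the record layout, diagnosis labels, and mapping are all hypothetical, invented purely to illustrate why merging databases requires a shared ontology:

```python
# Toy illustration of the terminology problem when merging medical databases.
# All names, labels, and mappings here are hypothetical.

# Two hospitals file the same condition under different labels.
hospital_a = {"patient_17": {"diagnosis": "myocardial infarction"}}
hospital_b = {"patient_98": {"diagnosis": "heart attack"}}

# A shared ontology maps each local term onto one canonical concept.
shared_ontology = {
    "myocardial infarction": "MI",
    "heart attack": "MI",
}

def normalize(records):
    """Rewrite each record's diagnosis into the shared vocabulary."""
    out = {}
    for patient, record in records.items():
        term = record["diagnosis"]
        # Terms missing from the ontology are flagged rather than guessed at.
        out[patient] = {"diagnosis": shared_ontology.get(term, f"UNMAPPED:{term}")}
    return out

merged = {**normalize(hospital_a), **normalize(hospital_b)}
print(merged)  # both records now share the canonical code "MI"
```

Without the mapping step, a naive merge would leave "myocardial infarction" and "heart attack" looking like two different conditions, which is exactly the kind of imprecision the project's ontology design aims to prevent.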

Willer’s ontology work led him to explore second-wave AI, specifically the challenges posed by neural networks: algorithms that aim to recognize underlying relationships in a set of data through a process that loosely mimics the operation of the human brain.
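In rough outline, a neural network is a stack of layers, each computing weighted sums of its inputs and passing them through a simple nonlinearity. This minimal numpy sketch (a generic illustration, not any particular system Willer studies; the sizes and random weights are placeholders) shows the structure that training later adjusts:

```python
import numpy as np

# A minimal two-layer neural network forward pass. Each layer multiplies its
# input by a weight matrix and applies a nonlinearity. The weights are the
# part that training adjusts; here they are just random placeholders.

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # input layer -> hidden layer
W2 = rng.normal(size=(8, 3))   # hidden layer -> output layer

def relu(z):
    # "Neurons" fire only when their weighted input sum is positive.
    return np.maximum(0, z)

def forward(x):
    hidden = relu(x @ W1)       # hidden-layer activations
    return hidden @ W2          # output scores, one per candidate category

x = rng.normal(size=4)          # a toy 4-feature input
print(forward(x))               # three unnormalized scores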

"There are a lot of questions about whether what the designers of these networks say is true," Willer said. "To evaluate these claims, you have to look at the technical details, for instance at how these networks are trained."

Neural networks are sometimes described as theory-neutral, but it is unclear how much theory, and what kinds of bias, factor into their design. Willer is exploring this question in hopes of discovering to what extent second-wave AI differs from first-wave AI, and which criticisms of the first wave carry over to the second. Last summer, he stepped back to learn more about the technology behind neural networks, picking up Python programming and other techniques. One of his more interesting findings about the practice of neural network design is how much depends on solving engineering problems on the fly.

"You design a neural network that can recognize numbers and you train the network on a huge amount of human-written numbers," Willer explained. "But how do you actually train? Simple trial and error methods play a major role. That, of course, matters for the question of whether to trust these systems."

Willer is also bringing these issues into the classroom this quarter, co-teaching the course The Epistemology of Deep Learning with Anubav Vasudevan, Associate Professor in the Department of Philosophy. The course gives students a comprehensive understanding of how neural networks work and, on that basis, explores a range of metaphysical and theoretical issues surrounding second-wave AI.

"It’s interesting because the course is taught by two philosophers who have different interests that converge," Willer said. "We think it is very difficult to teach because we need to expose students to the technical details of these systems and algorithms and make sure they truly understand them in order to discover the deeper philosophical implications."

To learn more about Willer’s research, visit his website.