A new interdisciplinary initiative at Western is putting a human-interest twist on artificial intelligence (AI).
Three postdoctoral associates are bringing their unique backgrounds and research interests to the classroom this year to help students better understand the assumptions that underlie AI and the ethical conflicts its use poses.
Michael Barnes, Bartek Chomanski and Mahi Hardalupas have joined the Rotman Institute of Philosophy, bringing a focus on the ethical and social aspects of data analytics and AI, and their overall impact on individuals, organizations and society.
Chomanski and Hardalupas have developed new courses funded through a $3-million gift from the Royal Bank of Canada last year, which also supports scholarships and workshops in design thinking.
“Michael, Bartek and Mahi offer great expertise in ethics and AI,” said philosophy chair Carolyn McLeod. “We are grateful to RBC for investing in an important area of study that will help us better understand the broader implications as AI continues to govern so many facets of our lives.”
The three researchers are collaborating with members of the faculties of Science, Engineering, Information and Media Studies, and the Schulich School of Medicine & Dentistry to explore questions surrounding the development and use of AI technology.
Interdisciplinary approach
Before joining the Rotman Institute, Barnes was a postdoctoral fellow at the Institute for the Study of Human Flourishing at the University of Oklahoma, where his research focused on the spread of hate online. Barnes brings that background to his teaching in Philosophy and Artificial Intelligence, an undergraduate philosophy course that has grown in popularity since its inception a few years ago.
“We cover a wide range of subjects, starting with the question of what AI even is,” said Barnes. “That takes us back to the earliest history of AI research, and up to AI as it manifests in our world now, mainly in the machine learning applications that have exploded, from social media to banking and finance, predictive policing, facial recognition and a whole host of other big data uses.”
It is on those fronts that Barnes will have students explore a “whole host of ethical issues.”
“What does it mean to have our data harvested and surveilled in such incredible numbers, and then have our lives run and determined in various ways by machines that claim to be objective but show that they are not?” Barnes said.
“AI systems are everywhere, perhaps most notably on social media, whether it’s in the recommendation and amplification of certain content, or in its curation, moderation or censorship. That raises a lot of questions about responsibilities in that new terrain of online speech.”
Barnes’ research addresses the worrying trend of online hate spilling into the ‘real’ world. He said the amplification of violent ideologies through online platforms is a growing, worldwide problem causing personal harm that is often invisible.
“When people are victims of online abuse, they tend to avoid online platforms or get pushed off, so their voices aren’t heard,” Barnes said. “The remedies are few and far between, other than getting the platform to block users from interacting with the individual. But there’s not a lot that can be done for you in that regard, if, say, creating new accounts is so easy.”
“There’s currently no cultural or social consensus about how to respond to those kinds of issues. I think it requires an interdisciplinary approach, and a university setting is a good place to do this type of work.”
Real-world inputs
While his initial interest was in philosophy of mind, Bartek Chomanski’s research now focuses on the ethics of emerging technologies, and how the development of increasingly powerful AI will impact both the near and distant future.
For graduate students in philosophy and computer science taking AI Ethics: A Comprehensive Introduction (co-taught with computer science professor Mike Katchabaw), Chomanski provides an overview of the contemporary issues in AI ethics.
“We look at everything from algorithmic bias and autonomous vehicles, and all the ethical issues associated with them, up to and including more speculative problems such as the emergence of super-intelligent machines or human-equivalent machines we might see in the future. We examine how we should approach these things as individuals, and as a society,” Chomanski said.
Chomanski wants students to appreciate nuanced perspectives, and a philosophy that resists the black-and-white classifications they may have adopted by following stories in the media.
When it comes to the question of bias and algorithmic decision making, for example, Chomanski urges students to look at it through an ethical lens, “but also try to come up with realistic solutions to the problems, which is a more technical question, but the two areas very much intersect.”
Chomanski will also coordinate a different graduate course, Ethical and Legal Challenges of AI: A Case Study Approach, intended to be a first exposure to AI ethics for graduate students from technical disciplines, as well as upper-year undergraduates. In addition to offering a basic grounding in ethical thinking, the course brings in academics from other disciplines, such as Law, and Science and Technology Studies, as well as industry experts, including those from RBC, to offer real-world perspectives on contemporary issues and challenges. The course’s focus is on ethical problems associated with currently existing AI technologies.
“Topics like data privacy regulations and other ethical problems we address in the abstract come to life through the everyday work experience of the guests,” Chomanski said. “The students are encouraged to participate, asking questions and coming up with different workable solutions by themselves or through group work.”
“I think one of the most valuable outcomes for students is the ability to appreciate the news behind the headlines. They may come to class thinking that most technology is good and for the betterment of humanity, or that most of it’s bad. What I’d like to accomplish with them is to ensure their understanding is more subtle and less binary.”
Help not harm
Through AI, Ethics and Society, Hardalupas will introduce science and engineering students to ethical frameworks for understanding artificial intelligence and data science in their social and political context. The course will be offered next term.
“We wanted to have a course that could help students who don’t have a humanities background gain a better understanding of the humanities’ approaches to questions about technology and AI,” Hardalupas said. “And though I come from a philosophy background, I also think it’s really important to engage with lots of different disciplines when thinking about these issues.”
After students learn key ethical concepts and frameworks, the goal is for them to construct and defend their own arguments about the ethical conflicts that arise when AI is applied in society.
One aspect of the course requires students to think about the design of algorithmic systems, considering who creates them and who benefits from their use. Hardalupas will use a variety of case studies, including those from her own research where she applies feminist philosophy to analyze the use of AI for medical diagnosis and prognosis.
“In the case of medicine, we question whether we really need to redirect all of our resources towards trying to automate certain processes, or whether it’s better to look at other ways to address issues in healthcare systems,” she said. “It’s one thing to say a system is being created to help patients, but if there aren’t any patients, or people who interact with medical systems, involved in deciding what that algorithm looks like, then perhaps we’re not creating systems that benefit society.
“The question for me is, ‘How do you think about developing technologies in a way that ensures they are actually benefiting the people who are supposed to be helped by them?’”