Mark Daley has been appointed Western University’s first-ever chief Artificial Intelligence (AI) officer, as Western becomes the first university in Canada to house such a role within its senior executive. Daley’s five-year term begins Oct. 15.
The post aligns with Western’s strategic efforts to champion AI across campus.
Daley, an AI researcher and respected leader in neural computation, will develop and implement a university-wide AI strategy that supports Western’s academic mission and research objectives.
“Mark is uniquely qualified for this exciting new role that will help propel Western to the forefront of AI research and application,” said Alan Shepard, Western’s president and vice-chancellor. “With his extensive background in computer science and neuroscience, and deep expertise in both neural computation and academic administration, Mark will help Western navigate a rapidly evolving digital era, guiding our university’s efforts in the ethical and responsible development and deployment of AI.”
Daley’s experience includes tenure as vice-president of research at the Canadian Institute for Advanced Research, a world-renowned institute supporting AI research and Canada’s AI strategy. The multidisciplinary scholar has held cross-appointments in several departments at Western, including computer science, mathematics, statistics and actuarial sciences, biology, electrical and computer engineering, and epidemiology and biostatistics.
He most recently served as Western’s first-ever chief digital officer leading Western Technology Services, and is excited to take on his new role, buoyed by Western’s “forward-thinking approach to AI.”
“Creating this role shows real vision on the part of Western’s leadership,” he said. “This technology is going to transform society in ways other technologies haven’t. It’s being compared to the internet and the steam engine. Those are legitimate comparisons, but I think this is even bigger. I think this is more like the discovery of fire.”
AI, according to Daley, will fundamentally and rapidly change many things about society, including Western’s role.
“There’s a moral imperative for us to not just be aware of it, but to engage and lead. We’re at a very important moment in time where we need to have challenging conversations about everything from regulation versus open source to freedom of expression, as well as the moral, ethical and societal implications of this technology.
“As a research institution, we must contribute to these conversations. As an institute of higher learning, we have an obligation to prepare a generation of citizens for a world that will look very different from today and Western is perfectly positioned to lead in this area.”
Since the release of OpenAI’s ChatGPT chatbot in November 2022, Daley has been a much-sought expert, providing insight and commentary to media outlets across the country.
Western News sat down with Daley to learn more about his thoughts on AI, its use across campus and how Western is positioned to lead important discussions and developments.
Western News: What will be your first priority as you settle into this role?
Mark Daley: My first step is to listen to our community of students, faculty and employees: What are their fears? What are their aspirations? What do people want? From there, I can offer resources and support.
This is going to transform everything from how we teach and do research to how we do snow removal. My job is to help people along that journey of transformation, but that transformation is going to have to be led everywhere by everyone, right down to each individual student who knows they have to engage with this new technology because it is going to affect their world.
The intent is to use AI as much as possible. We want to be a lab for what’s possible on the administrative side of AI, like a test kitchen.
WN: Was this rapid advancement of AI technology something you would have expected?
Daley: Even though I’ve been involved in neural computation research for most of my career, I didn’t imagine AI getting to the point it is at right now within my lifetime, or thought it might only happen after I retired. We are much further ahead than I expected. So I’m sympathetic to people who aren’t researchers in this area and feel this came out of nowhere.
There is a lot of fear and doom generated in the media but I’m an optimist. I do feel a sense of obligation, because this is an important moment in history and we – all of us – have an opportunity to help push toward making good decisions for humanity.
WN: There are fears around AI, including how it could replace jobs in society.
Daley: I don’t see AI replacing people. I see AI augmenting what people do. In the history of human technological innovation, every time we’ve invented a new technology, it has ended up creating more jobs than it took away.
There are fundamental issues of trust, and what AI is good at and not good at is still being explored. There’s still a role for humans in exercising judgement and oversight. As a university, we need to be part of that broader societal discussion about what that means and how we cope with that.
WN: AI is trained on the internet, which carries its own biases. How do we manage and mitigate that?
Daley: AI reflects the biases of the people who trained it, just like our children can reflect our biases. It’s actually a human bias problem.
OpenAI has demonstrated a technique called reinforcement learning from human feedback, where people interact with the model and, when it says something “bad,” tell it, “no, you’re not allowed to say that,” and it learns and updates. If you try to get ChatGPT to say something inappropriate, most of the time it will refuse.
The challenge with that approach, in the case of ChatGPT, is that it follows a set of values set by Silicon Valley tech professionals. If your values are aligned with that, great, but if you live in a small village in sub-Saharan Africa, for example, you probably have very different lived and cultural experience and values. Is it okay for California values to become the default values for all of humanity?
My opinion is the only way to mitigate that is to democratize access to AI, so that any cultural group that wants this technology can have it and create a model that reflects and instills their culture and their values. That requires a pluralistic approach and right now there are a lot of governments talking about regulation and locking it down. That is the worst possible outcome: where only a handful of elites have input.
The idea of one company or one country dominating this research is existentially dangerous. Meta has been open sourcing for exactly this reason so that researchers, hobbyists, anyone who wants to engage with this technology, can, and I think that is critically important.
WN: What about misinformation?
Daley: The technology we have right now is really good at generating misinformation. You can use a large language model to create sock puppet accounts (fictitious online identities created for the purposes of deception) on Twitter and use them to astroturf political positions and influence society.
I think we have to train people to be skeptical again and not to trust any piece of information they cannot verify for themselves. The world was like that before.
We have to think critically, and fortunately, we are an institution that teaches critical thinking. It is an even more important citizenship skill in the 21st century. Learning to think, and to think about how you think, is always going to be valuable.
WN: How is Western positioned to take on and contribute to these important discussions about AI?
Daley: We have strengths in computer science and electrical engineering, and colleagues who work in AI technology for a living. There are philosophers in our Rotman Institute of Philosophy who study ethics, consciousness and the philosophy of science. We also have a world-class neuroscience research group, including neuroscientists who study consciousness, like Adrian Owen and Mel Goodale. We have a critical interdisciplinary mass who can ask some of the important questions that transcend the purely technical questions about societal impact. There are scholars in FIMS (Faculty of Information and Media Studies), like Luke Stark, whose research asks what is AI going to do to society and what is society going to do to AI.
There is an AI and trust conference being hosted by Western in October. I see all sorts of opportunities to bring those teams together and an opportunity for Western to really plant a flag in this area.
WN: How will you work with each faculty and professors to work AI into their pedagogy and curriculum?
Daley: Each individual faculty member has academic freedom and full autonomy. They know how to teach their subject. Where there’s interest and opportunity, I see my role as providing resources and support for those who think it makes sense to integrate this. And there are places where it doesn’t make sense and isn’t appropriate.
If you’re a piano performance major, having a robot perform your sonata for you is missing the point. But there are other places where AI is a really powerful tool.
I just spent some time in a classroom with students in a professional master’s program in environment and sustainability. For the final in-class assignment, they had 45 minutes to write a 15-page report on the sustainability practices of a local company, using ChatGPT and Bing. They all succeeded, with two of the groups generating impressive reports. The students were off-the-charts excited and engaged in deep critical discussions about what this means for the future of humanity.
WN: Western has already incorporated AI, as it has emerged, into a lot of its course material.
Daley: Yes. We certainly already have existing courses in AI, including those in science and engineering, as well as existing programs connected with the Vector Institute for Artificial Intelligence in Toronto, one of Canada’s three nationally supported AI institutes. That’s just on the technical side of AI. We also have two courses focusing on the ethical and social aspects of AI, supported by a larger donation from RBC (Royal Bank of Canada), as well as courses in philosophy and artificial intelligence, and many more.
WN: How are the professors and students responding?
Daley: Over the summer, I’ve had a number of conversations with colleagues who are bringing this technology into the classroom. They’re just trying it out, which I think is exactly the right approach. None of us yet knows how to use generative AI, because it is so new, and each of us has our own use cases. The best possible response is to experiment. Until you spend time with a technology, you’ll never really understand what it is good at and where it fails.
Our students are experimenting, and we should encourage them to engage with this technology. It’s going to be with them for the rest of their lives.
WN: You seem invigorated about the possibilities of AI and taking on this role.
Daley: I am an optimist and I believe in human agency. I’m so excited to be alive right now and to see this technology get to the level it’s at, and to see what’s ahead for the next five years.
*Edited for brevity and clarity.