When Golnoosh Farnadi, an assistant professor of computer science at McGill, was working on her PhD, she experienced what she describes as “a turning point in my career.”
In her doctoral research, she devised machine learning methods for determining personality traits based on how people used social media.
“I was one of the few people working in this field from a computational perspective,” says Farnadi, who now holds a Canada CIFAR AI Chair. Her model assessed “the things people wrote on their Facebook pages, the pages they liked and shared, the images they were uploading.”
The approach offered intriguing insights, but Farnadi herself knew it wasn’t perfect. For instance, the modelling might decide that some individuals were shy when in fact they were quite outgoing – they just weren’t all that active on social media.
So, she felt uneasy when companies started reaching out to her. “They were looking for tools for hiring purposes, to make it more automated,” says Farnadi. “I saw that these tools I was working on could have a real impact on people’s lives.”
She decided to switch course. “Instead of building tools without really considering the potential consequences on society, the work I do now is about how [AI technologies] are going to be used in society and what the risks could be.”
Farnadi pursues this work as the founder and principal investigator of the EQUAL lab. Affiliated with both McGill and Mila – Quebec AI Institute, the lab focuses on algorithmic fairness and the responsible use of AI.
She is also the co-director of the McGill Collaborative for AI and Society (McCAIS), where she is helping to build up a network of McGill experts from a broad range of disciplines who all want to ensure that AI is used responsibly, effectively, and ethically.
“We are focused on how to use AI for the good of society,” says Samira Abbasgholizadeh-Rahimi, the other co-director for McCAIS.
Abbasgholizadeh-Rahimi is an assistant professor of family medicine and McGill’s Canada Research Chair in Advanced Digital Primary Health Care. Her lab is conducting research into whether AI technologies can play a valuable role in the early detection of dementia and in predicting and preventing cardiovascular disease in women.
In some ways, her team represents what McCAIS is hoping to encourage across the University.
Bridging the language divide
“In my research lab, we have students and staff from very different disciplines,” says Abbasgholizadeh-Rahimi. “We have people with medical backgrounds, people with social sciences backgrounds, people with backgrounds in engineering and computer science. This is a multidisciplinary field, so you need to have all these different types of expertise working together.
“But part of the challenge is for people to be able to understand one another, because the [different disciplines] do have different languages. I’ve been trained in engineering and on the primary health care side, so I’ve seen that for myself.”
Abbasgholizadeh-Rahimi says an important part of her job is to “help [her team members] learn how to collaborate.”

“There are people at McGill who are involved with AI in very different fields, and they don’t always know about each other,” says Eric Kolaczyk, who played a leading role in the creation of McCAIS and works closely with its co-directors around strategy and development. “One of our major goals with McCAIS is to develop the scaffolding for an ecosystem that will encourage interdisciplinary collaborations.”
A professor in the Department of Mathematics and Statistics, Kolaczyk joined McGill in 2022. He is the former director of the Rafik B. Hariri Institute for Computing and Computational Science & Engineering at Boston University. The institute encompassed several research centres engaged in work involving AI, cloud computing, cyber security, digital health and other areas.
Kolaczyk is the founding director of McGill’s Computational and Data Systems Initiative (CDSI).
“I’ve always been someone who loves the start-up stage of things,” says Kolaczyk. “And the chance to work with a university with all the talent that [McGill] has, but one that hadn’t yet set up the type of connective structure that we see at many of our peers worldwide, was just too good to pass up.”
The CDSI was created to nurture and amplify McGill’s strengths in data science and computing – and a big piece of that relates to the University’s work on AI. “McCAIS is hosted within CDSI, and it is supported by CDSI staff,” explains Kolaczyk.
“I came to Montreal knowing that it was an international hub for AI,” says Kolaczyk. “I’m even more impressed by what I’ve seen now that I’ve been working here for a few years.
“McGill is well recognized as a global leader in technical areas like reinforcement learning,” he adds. “But I have been extraordinarily impressed with the investments I’ve seen around AI throughout the different faculties at McGill.
“We have people in agriculture who are leaders in the implementation of ‘smart’ agriculture – for the humane treatment of animals in the dairy sector, for instance. We have thought leaders in philosophy [on issues related to AI]. There are people here who are doing interesting work in management, in education, in all these different disciplines. McCAIS is a reflection of McGill’s strengths.”
Promoting interdisciplinary partnerships
McCAIS oversees two programs that aim to capitalize on those strengths by encouraging more interdisciplinary partnerships.
The BMO Responsible AI Scholars Program funds students working on AI-related projects at both the undergraduate and graduate levels. According to the McCAIS site, these projects should “encourage research aimed at understanding and responsibly influencing the impact of AI on society.”
Nikki Tye, an undergraduate majoring in international development, was a BMO Junior Responsible AI Scholar in 2024. Her project explored whether AI tools could be used by Indigenous communities and NGOs in their efforts to protect the Amazon rainforest from illegal logging and mining activities.
“One of the criteria that we have for this funding, both for undergraduates and at the graduate level, is that students need to have supervisors from two different departments or faculties to promote interdisciplinary collaboration,” says Abbasgholizadeh-Rahimi.
The Interdisciplinary Research Development Awards program has similar aims, but funds work done by McGill professors.
“I’m not a person who sees things as being either black or white. And I think that’s very much true for AI. AI can have a positive impact.”
Golnoosh Farnadi, co-director of the McGill Collaborative for AI and Society
“Some of the [award recipients] have told us that it’s very hard to find funding for this kind of interdisciplinary research,” says Farnadi.
Last December, at a symposium that marked McCAIS’s one-year anniversary, the 2024 award winners discussed their work. Yichuan Ding, an associate professor at the Desautels Faculty of Management, spoke about his project, which uses AI to help create a user-friendly system for managing trips to hospital emergency departments.
The system was designed to speed up the registration and triage process for patients, redirect them to general practitioners or pharmacists if their symptoms didn’t warrant a trip to the ER, and even offer estimated waiting times for different emergency departments. Ding’s collaborators on the project include Steve Liu, a computer science professor, and Lawrence Rosenberg, MDCM’79, MSc’82, PhD’85, a surgery professor and the president and CEO of the Integrated Health and Social Services University Network for West-Central Montreal.
Another project funded by the Interdisciplinary Research Development Awards is exploring whether AI can assist with making culturally grounded education materials more easily accessible for Indigenous communities in remote parts of Peru. The project is led by Joseph Levitan, an associate professor in the Faculty of Education, who is working in close collaboration with those communities.
Shangpeng Sun, an assistant professor of bioresource engineering, is heading up another project that uses AI to power a precision spraying system for crops; up to 98 per cent of the herbicides currently applied for weed control don’t stay on the target plants and instead flow into the surrounding environment.
Involving end users at the beginning
Farnadi says these collaborative projects provide a roadmap for how AI solutions should be developed and employed.
“One of the things that the field of responsible AI has often been lacking is having in-depth knowledge about application domains,” says Farnadi. Computer scientists developing AI tools “for healthcare, for education, for agriculture, if they don’t have people on the team who represent those fields, they’re just creating a technical solution without really understanding the nuances of how these tools will be used.”
Abbasgholizadeh-Rahimi agrees.
“Our team conducted a study on the use of AI in primary healthcare a few years ago and we found that about 95 per cent of the technology that was being field tested just stayed at that stage. It didn’t go to the implementation stage. It didn’t go into real practice.
“We rarely see clinicians or patients involved in these AI projects at the earliest stages. Most of the time, these technologies have already been developed and it’s only at that point that someone says, ‘Let’s go talk to the clinicians and patients and get their feedback.’ You need to work with people from the very beginning, from the design stage, to make sure that the technologies are in line with the needs of the end users.”
Embedding ethics
Another McCAIS initiative could affect how several classes are taught at the University.
“We’ve been working with Dean [Lisa] Shapiro in the Faculty of Arts to find ways to embed ethics throughout the teaching of [AI-related courses],” says Kolaczyk.
Experts from different parts of the University, including the Department of Philosophy, the Faculty of Engineering, the School of Computer Science and the Laidley Centre for Business Ethics and Equity, have been collaborating on the project.
“We are selecting courses that are already offered to undergraduate students, and we are creating modules that are going to be very much tailored for those courses,” says Farnadi. “It will give students a new lens” for looking at the content. “So, if you are studying software engineering, you’ll see that there are these ethical considerations around AI that you should be aware of.”
McCAIS is also partnering with IVADO, a Quebec AI research consortium, to create new courses and workshops. “These will be on algorithmic fairness, the role of privacy and definitions of privacy in AI systems, creativity in AI, and AI in the health sciences. Those will be our first four,” says Kolaczyk.
The initiative is just one example of McCAIS building partnerships with off-campus players in the AI realm.
“We’re developing a relationship with the global grassroots organization Women in AI,” says Kolaczyk. “We’re in the later stages of developing the framework for an external affiliates program. Essentially, we would be creating an industry sandbox, where you could sit down at a table with your peers and with experts from academia and other sectors to talk about [issues associated with AI].”
While Farnadi has spent much of her career identifying the risks associated with poorly designed AI technologies, that doesn’t mean she sees AI itself as a negative force.
“I’m not a person who sees things as being either black or white. And I think that’s very much true for AI. Look at the area of drug discovery. We’re starting to see things that weren’t even possible before. AI can have a positive impact.”
Some of those positive impacts are likely to result from the work that McCAIS is doing today.