Interview: Dr. Muhammad Mamdani on AI as a Bridge to Better Care
January 22, 2024
For Muhammad Mamdani, machine learning is a bridge to better patient care.
As Vice President of Data Science and Advanced Analytics at Unity Health Toronto, Dr. Muhammad Mamdani applies advanced analytics and machine learning to healthcare decision-making, with the ultimate goal of improving patient outcomes and hospital efficiency. He is also Director of the University of Toronto Temerty Faculty of Medicine Centre for Artificial Intelligence Research and Education in Medicine (T-CAIREM) and has previously been named among Canada’s Top 40 Under 40. His prolific publishing career includes over 500 peer-reviewed papers in the areas of drug safety, pharmacoeconomics, and the application of machine learning to clinical and policy issues. In this conversation, Mamdani talks about where he sees healthcare AI going in Canada – and how to push it along.
Tell us a bit about Unity Health Toronto.
Unity Health Toronto is a healthcare organization with services spanning the City of Toronto and three main sites: St. Michael's Hospital, Providence Healthcare, and St. Joseph's Health Centre. Since 2017, our Data Science and Advanced Analytics team has put more than 50 innovations into practice. We’re already seeing what a positive impact AI can have on hospital operations and patient care, and we’re just getting started.
What is generative AI?
The term refers to algorithms that enable computers to ‘learn’ from existing data, such as text, images, audio, or video, and apply this learning to generate new content. For example, it can create new text or images – content that never existed before – from simple prompts. It can also answer questions and concisely summarize complex information.
What are some of the challenges in developing generative AI in healthcare?
A model is only as good as the data you feed it, and applications of generative AI are often developed to answer questions that may not be ‘answerable’ with the existing data. That’s why ChatGPT sometimes generates things that don’t make much sense – what we call ‘hallucinating.’ It’s often sifting through incomplete or unreliable data from the Internet. As the saying goes, garbage in, garbage out. For AI to have value in healthcare, the data sources need to be a lot more credible, such as data from trusted peer-reviewed publications. Big tech companies are already working on training AI algorithms with better-quality data, which will result in much more reliable and accurate healthcare tools.
“Data sources for AI need to be credible, such as data from peer-reviewed publications. Big tech companies are already working on training AI algorithms with better-quality data.”
What changes need to happen for AI to transform healthcare?
At the societal level, education plays a big role: we need to become more literate and comfortable with data and AI. At the health-systems level, we need to “axe the fax” and fully embrace the digital world. We’re already doing this at Unity Health: it’s digital and data all the way. The more data we have, the more powerful AI becomes. For example, our hospital may only see a couple of cases of a rare condition every year, but if we multiply that by a hundred hospitals, we now have enough data to build an algorithm around that condition. This consolidation and aggregation is where we need a lot of work. Another piece is data quality. We need alignment on data standards across jurisdictions so we can create high-quality, nationwide datasets to feed into AI.
Are there any myths about AI that need to be put to rest?
A prevailing myth is that you can apply AI to just about any challenge. In reality, you have to be very disciplined and only develop AI from high-quality datasets. Otherwise, you compromise your AI algorithms and make them useless as real-world tools. Most importantly, people underestimate the amount of work it takes to develop and deploy AI in healthcare. They suggest “giving the job to AI,” as though we could just snap our fingers and make it happen. That’s because they conflate AI research with AI application. Conducting and publishing AI research is only a first step, and it often goes nowhere. As a case in point, a recent study reviewed about 400 articles on AI tools developed to address issues emerging from the pandemic. The authors looked at each of the papers to see how many of the tools could be applied in the real world. The answer was zero.
“You have to be very disciplined and develop AI from high-quality datasets – otherwise your AI algorithms become useless as real-world tools.”
So what’s the solution? How do you bridge the gap between AI research and application?
You have to create a human environment that supports the technology. If you don’t do proper change management, nobody will use the tools. This means stakeholders have to be part of the development process. That’s the model we’re using at Unity Health. We involve the end users – clinicians and staff – from the get-go. You also need the teams and supports in place to resource an AI application before, during, and after launch.
Are there any AI applications you’re especially proud of?
Unity Health helped launch an AI-based early warning system called CHARTwatch at St. Michael’s Hospital in 2020. It runs every hour on the hour, pulling patient data and categorizing each patient as low, medium, or high risk. As soon as the system flags a patient as high risk, the medical team is paged and must see the patient within two hours. As expected, St. Michael’s saw a big increase in mortality during the early months of the COVID-19 pandemic. After we deployed CHARTwatch, mortality rates started falling even as COVID-19 cases continued to rise. To put a figure on it, we had over 20% fewer deaths following our deployment of CHARTwatch. That’s when we got very excited: our AI tool is actually saving lives.
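To make the workflow concrete, here is a minimal sketch of an hourly flag-and-page loop of the kind Mamdani describes. It is purely illustrative, not Unity Health’s actual implementation: the features, scoring rule, thresholds, and paging hook are all assumptions, since the interview does not describe CHARTwatch’s model internals.

```python
# Hypothetical sketch of an hourly early-warning loop in the spirit of
# CHARTwatch. NOT the actual Unity Health implementation: the features,
# scoring rule, thresholds, and paging hook are all illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    heart_rate: float  # most recent vitals pulled from the record
    lab_score: float   # assumed 0-1 composite of recent lab values

def predict_risk(patient: Patient) -> str:
    """Stand-in for the trained model; returns 'low', 'medium', or 'high'."""
    score = 0.6 * min(patient.heart_rate / 120, 1.0) + 0.4 * patient.lab_score
    if score > 0.8:
        return "high"
    return "medium" if score > 0.5 else "low"

def page_team(patient: Patient) -> None:
    # In production this would page the ward team; here we just print.
    print(f"PAGE: patient {patient.patient_id} is HIGH risk -- assess within 2 hours")

def hourly_run(ward: list[Patient]) -> None:
    """One scheduled pass: score every patient and page on any high-risk flag."""
    for patient in ward:
        if predict_risk(patient) == "high":
            page_team(patient)

if __name__ == "__main__":
    ward = [Patient("A-001", 118.0, 0.9), Patient("A-002", 72.0, 0.2)]
    hourly_run(ward)  # a scheduler would invoke this every hour
```

The design point worth noting is the hard operational rule attached to the prediction: a high-risk flag triggers a page and a two-hour response window, which is part of what turns a research model into a clinical tool.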
In ten years, what role do you think AI will play in healthcare settings?
We’re probably going to see AI much more ingrained in day-to-day healthcare. Generative AI will be pulling data from our systems and helping us make sense of it. I’m hoping that AI will be reliable enough to help us with diagnosis and treatment. I see it playing a special role in predicting health risks in specific individuals and serving as a ‘clinical assistant’ that suggests diagnoses and optimal treatment pathways. I also anticipate a huge expansion in the role of AI in automation. Right now we’re using AI to help us with menial tasks such as scheduling. As AI becomes more powerful, I think our definition of “menial” will expand and we’ll be automating an increasing variety of administrative tasks.
“Within 10 years, I hope AI will be reliable enough to help us with diagnosis and treatment. I see it as playing a special role in predicting personal health risks.”
How do we balance the risks and benefits of AI in healthcare?
It’s a matter of embracing and welcoming AI while putting some guardrails around it. It’s a tough balance, because too many guardrails can stifle innovation, while blind trust can result in serious mistakes. That’s why it’s helpful to have guiding principles, such as the Good Machine Learning Practice for Medical Device Development guidance published in 2021 by the FDA, Health Canada, and the UK’s Medicines and Healthcare products Regulatory Agency. The document offers a values-oriented approach that can be applied to a variety of healthcare AI projects.
Overall, how would you say Canada is doing in healthcare AI?
Canada is home to many AI experts. Several of us are involved in discussions to help inform regulations, so Canadians can benefit from responsible AI that helps improve our healthcare system. We’re also ramping up data consolidation, with groups like the IC/ES data repository in Ontario and initiatives like Genome Canada, which consolidates data around genomics. Alberta Health Services has taken the bold step of saying, as an entire province: ‘we’re going to bring together our data to make it more powerful and use it to help our patients.’ So we’re starting to think the right way about all this.