
Artificial intelligence that explains how it thinks

When health is at stake, it's important to know how AI arrives at its answers so that healthcare professionals can confirm whether they are correct. AI models need to be able to explain what they're thinking.

Webinar: Artificial intelligence that explains how it thinks
Michael Kampffmeyer researches how to use artificial intelligence to analyse medical images.

We need AI that explains what it's thinking in order to increase trust in the service, apply AI to harder problems, gain new insights, and improve models.

Artificial intelligence shouldn't be a black box that claims its answers are the only truth. It should be transparent, so that it can serve as decision support for healthcare professionals. For example, the model can mark the areas of a radiological image it believes show pneumonia or broken bones, so that healthcare professionals can double-check whether this is true.
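To make the idea of "marking areas of an image" concrete, here is a minimal, hypothetical sketch of a Grad-CAM style heatmap. It is not the presenter's method: it assumes an untrained torchvision ResNet-18 standing in for an image classifier and a placeholder input tensor, and simply shows how activations and gradients from the last convolutional block can be combined into a heatmap over the input image.

```python
# Illustrative Grad-CAM style saliency sketch (assumed setup: torchvision
# ResNet-18 as a stand-in classifier, random tensor as a stand-in image).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # untrained stand-in model
features, gradients = {}, {}

def capture(module, inputs, output):
    """Store the last conv block's activations and their gradients."""
    features["maps"] = output.detach()
    output.register_hook(lambda grad: gradients.update(maps=grad.detach()))

model.layer4.register_forward_hook(capture)

def explain(image, target_class):
    """Return a heatmap of the image regions that drove the given prediction."""
    scores = model(image)                                        # (1, num_classes)
    scores[0, target_class].backward()
    weights = gradients["maps"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
    cam = F.relu((weights * features["maps"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()                  # heatmap in [0, 1]

image = torch.randn(1, 3, 224, 224)                              # placeholder image
heatmap = explain(image, model(image).argmax().item())
print(heatmap.shape)                                             # torch.Size([224, 224])
```

In a clinical setting, such a heatmap would be overlaid on the original image so that a radiologist can see which regions the model relied on and judge whether the reasoning is plausible.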

Presentation by Professor Michael Kampffmeyer from UiT The Arctic University of Norway.

Using examples from research, Kampffmeyer will talk about how we can develop AI models that explain what they do. He will also explain how we can detect whether the data are representative of the problem the model is intended to solve. And if you're still in doubt, he'll convince you why we need transparency in health AI.

Recording

You can download the podcast to your mobile on Apple Podcasts, Spotify or Podbean. Search for "Norwegian Centre for E-health Research".