Explainability and Transparency in AI: Trusting the Process


What is Explainability?

Have you ever been presented with something that you thought you understood, yet fell short of knowing exactly what was going on behind the scenes? Explainability is a characteristic that allows the ‘behind the scenes’ of an AI system to be understood by a person. Artificial intelligence surrounds us in our daily lives: anything from your Google Assistant or Alexa to unlocking your smartphone with facial recognition can be categorized as AI. These technologies make the digital world more accessible and convenient, and understanding how they aid us in our everyday lives is essential. For more helpful information about artificial intelligence, machine learning, and explainability, you can read The Medical Futurist’s article, Explainable AI in Healthcare.

Here are some reasons why explainability, also known as ‘interpretability’, ‘transparency’, or even a ‘white-box’ approach, is important when it comes to AI. 

Explainability Increases Trust

It’s important that patients, providers, and other stakeholders understand and trust the AI they engage with. Patients, for example, may be using AI for remote patient monitoring to manage chronic conditions. These conditions are sensitive and can be unpredictable, so patients need to know that the technology won’t let them down and that it offers real benefits compared to more traditional management methods.

Equally, clinicians may be using AI for clinical decision support in diagnosis, and again, trust in these processes is essential. At the same time, explainable AI and a transparent product can help reassure providers that they aren’t at risk of losing their jobs to a more capable AI.

Explainability Increases Security

Cyberattacks pose a significant threat to online healthcare systems, often leading to stolen patient data or even the loss of medical documentation. Both users and developers need to understand how their systems work in order to protect them properly from these attacks. Explainability can significantly increase security by allowing humans and AI to work hand in hand to protect their technology systems.

Implementing Explainability 

A recent study investigated the impact of explainability when using AI to classify skin lesions and found that both AI alone and a human alone were at a disadvantage in performing the task. The study suggests ‘interactive machine learning’, “where a human in-the-loop contributes to reducing the complexity of NP-hard problems”. The authors outline a ‘glass-box’ approach: explainable AI that enables humans to directly interact with learning algorithms, as sketched below.
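To make the idea concrete, here is a minimal sketch of a ‘glass-box’ model: a shallow decision tree whose learned rules can be printed and audited by a human reviewer. This is an illustration only, not the study’s actual method; the choice of scikit-learn, the dataset, and the hyperparameters are all assumptions made for demonstration.

```python
# A minimal "glass-box" sketch (illustrative assumptions, not the study's setup):
# a shallow decision tree whose learned rules a human reviewer can read,
# question, and correct.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()  # a small, public diagnostic dataset
model = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow = readable
model.fit(data.data, data.target)

# Print the decision rules in plain text so a clinician can audit each split.
print(export_text(model, feature_names=list(data.feature_names)))
```

Because every split in the tree is visible, a domain expert can spot a rule that contradicts clinical knowledge and feed that correction back into the modeling process, which is the essence of keeping a human in the loop.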

Explainability can be implemented in a number of ways, from combining natural language and visual aids to walk users through a model’s reasoning, to the ‘glass-box’ approach above; one widely used model-agnostic technique is sketched below. Check out TechTarget’s article 4 Explainable AI Techniques for Machine Learning Models for some more in-depth examples of ways to implement explainability.
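For instance, permutation feature importance explains an already-trained model by shuffling one feature at a time and measuring how much the model’s score drops: the bigger the drop, the more the model relies on that feature. The sketch below assumes scikit-learn and a public diagnostic dataset purely for illustration.

```python
# A sketch of permutation feature importance (library and dataset choices
# are assumptions): shuffle each feature on held-out data and record how
# much the model's score drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature 10 times and average the resulting score drops.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five features the model depends on most.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: score drop {result.importances_mean[i]:.3f}")
```

An output like this can then be translated into the natural-language and visual explanations mentioned above, giving users a concrete answer to “what is the model actually looking at?”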

SRG’s Approach to Transparent AI-ML

At SRG, we focus on building transparent and explainable AI and ML models, earning high levels of user trust and engagement with advanced analytics that enhance clinical, patient engagement, and administrative applications.

Written by:

Maxine Wesley
