All Article Properties:
{
"access_control": false,
"status": "publish",
"objectType": "Article",
"id": "2065237",
"signature": "Article:2065237",
"url": "https://staging.dailymaverick.co.za/opinion-piece/2065237-risks-of-ai-prediction-performance-should-be-measured-especially-in-critical-areas-like-healthcare",
"shorturl": "https://staging.dailymaverick.co.za/opinion-piece/2065237",
"slug": "risks-of-ai-prediction-performance-should-be-measured-especially-in-critical-areas-like-healthcare",
"contentType": {
"id": "3",
"name": "Opinionistas",
"slug": "opinion-piece"
},
"views": 0,
"comments": 0,
"preview_limit": null,
"excludedFromGoogleSearchEngine": 0,
"title": "Risks of AI prediction performance should be measured, especially in critical areas like healthcare",
"firstPublished": "2024-02-23 15:37:27",
"lastUpdate": "2024-02-23 15:37:27",
"categories": [
{
"id": "435053",
"name": "Opinionistas",
"signature": "Category:435053",
"slug": "opinionistas",
"typeId": {
"typeId": "1",
"name": "Daily Maverick",
"slug": "",
"includeInIssue": "0",
"shortened_domain": "",
"stylesheetClass": "",
"domain": "staging.dailymaverick.co.za",
"articleUrlPrefix": "",
"access_groups": "[]",
"locale": "",
"preview_limit": null
},
"parentId": null,
"parent": [],
"image": "",
"cover": "",
"logo": "",
"paid": "0",
"objectType": "Category",
"url": "https://staging.dailymaverick.co.za/category/opinionistas/",
"cssCode": "",
"template": "default",
"tagline": "",
"link_param": null,
"description": "",
"metaDescription": "",
"order": "0",
"pageId": null,
"articlesCount": null,
"allowComments": "0",
"accessType": "freecount",
"status": "1",
"children": [],
"cached": true
}
],
"content_length": 8258,
"contents": "<span style=\"font-weight: 400;\">Suppose Tinyiko from Duthuni Village goes to Elim Hospital. The nurse, Thuso, orders an emergency X-ray of Tinyiko’s lungs to ascertain whether he has a condition called pulmonary embolism. Because no doctor is in sight, Thuso uses an AI system that predicts whether Tinyiko has a pulmonary embolism.</span>\r\n\r\n<span style=\"font-weight: 400;\">The AI system returns a diagnosis stating that Tinyiko does not have a pulmonary embolism. AI systems like this have been under development for a long time. For example, in 2007, </span><a href=\"https://arxiv.org/abs/0706.0300\"><span style=\"font-weight: 400;\">Simon Scurrell, David Rubin and I</span></a><span style=\"font-weight: 400;\"> developed an AI system that predicts whether a patient has a pulmonary embolism.</span>\r\n\r\n<span style=\"font-weight: 400;\">With the increase in data and computational power, these systems are beginning to exceed the accuracy of human doctors. </span>\r\n<blockquote><span style=\"font-weight: 400;\">The consequences of errors in healthcare applications are exceedingly severe, given that they may result in life-threatening misdiagnoses, inappropriate treatments, or lost opportunities for early intervention.</span></blockquote>\r\n<span style=\"font-weight: 400;\">The crucial question is whether an AI system that merely predicts whether a patient such as Tinyiko has a pulmonary embolism is enough. An AI system can go further: it can determine whether Tinyiko has a pulmonary embolism and additionally state its confidence in that prediction. For example, the AI system can quantify the prediction risk by stipulating that it is 80% confident that Tinyiko has a pulmonary embolism.</span>\r\n\r\n<span style=\"font-weight: 400;\">Of course, this additional confidence or risk quantification requires further computational and, thus, financial resources. 
This article argues that without carefully measuring this risk (the 80% confidence level), society is placed in a precarious position, exposed to unanticipated repercussions that may erode public confidence and the ethical underpinnings that ought to govern AI.</span>\r\n<h4><b>AI prediction risk</b></h4>\r\n<span style=\"font-weight: 400;\">Measuring AI prediction risk is paramount in augmenting AI systems’ transparency. Providing a coherent structure that enables stakeholders to understand predictive models’ constraints and possible modes of failure enhances their capacity to make well-informed decisions.</span>\r\n\r\n<span style=\"font-weight: 400;\">End-users and those impacted by AI-driven choices, in addition to developers and administrators of AI systems, must have transparent access to AI prediction risk information. Such access promotes a climate of responsibility in which AI system developers are incentivised to comply with elevated benchmarks of dependability and security.</span>\r\n\r\n<b>Read more in Daily Maverick: </b><a href=\"https://www.dailymaverick.co.za/opinionista/2023-04-05-world-health-day-heres-how-ai-and-digital-health-are-shaping-the-future-of-healthcare/\"><span style=\"font-weight: 400;\">Here’s how AI and digital health are shaping the future of healthcare</span></a>\r\n\r\n<span style=\"font-weight: 400;\">Furthermore, measuring prediction performance risk is crucial for establishing and sustaining public confidence in AI technologies. Trust is the foundation for the widespread adoption and acceptance of AI. 
People are more likely to adopt AI solutions when they comprehend the associated risks and know that safety and risk management protocols are in effect.</span>\r\n\r\n<span style=\"font-weight: 400;\">Conversely, insufficient AI prediction risk quantification and communication may result in adverse public reactions, regulatory repercussions, and hindered progress.</span>\r\n<h4><b>Technical and social requirement</b></h4>\r\n<span style=\"font-weight: 400;\">Measuring the risk associated with AI prediction performance is not only a technical requirement but also a social one. AI prediction programs are prone to failure. The consequences can range from moderate to severe, depending on the situation.</span>\r\n\r\n<span style=\"font-weight: 400;\">For example, the failure of AI-powered financial algorithms can significantly disrupt markets, while imprecise predictive policing models can exacerbate social inequality.</span>\r\n\r\n<span style=\"font-weight: 400;\">Measuring risk is critical for understanding, mitigating, and communicating the possibility of such failures, thereby protecting against their most serious consequences. The quantification of AI prediction risk takes us to the exciting world of Reverend Thomas Bayes.</span>\r\n\r\n<span style=\"font-weight: 400;\">Thomas Bayes was an English Presbyterian minister, philosopher, and statistician born around 1701. His most renowned contribution outside theology is the development of </span><a href=\"https://www.dailymaverick.co.za/article/2011-10-07-maths-in-the-dock/\"><span style=\"font-weight: 400;\">Bayes’ Theorem</span></a><span style=\"font-weight: 400;\">, which describes the probability of an event based on prior knowledge and evidence of potentially related conditions. 
Bayes’ contribution, which remains seminal in statistics, was not published during his lifetime.</span>\r\n\r\n<span style=\"font-weight: 400;\">Following his death, Richard Price, an acquaintance of Bayes, published it on his behalf. </span>\r\n\r\n<span style=\"font-weight: 400;\">Bayes’ work has emerged as an essential tool for measuring the risk of AI predictions. So, how does Bayes’ Theorem operate to quantify AI prediction risk?</span>\r\n<h4><b>Robust mechanism</b></h4>\r\n<span style=\"font-weight: 400;\">With its probabilistic underpinnings, the Bayesian framework provides a robust mechanism for incorporating prior information and evidence into the AI prediction procedure, thus yielding a measure of AI prediction risk. This Bayesian procedure has been applied successfully in many vital areas.</span>\r\n\r\n<span style=\"font-weight: 400;\">One example is my 2001 </span><a href=\"https://arc.aiaa.org/doi/pdf/10.2514/2.2745?casa_token=QsRkm9XvgLUAAAAA:cuGxm2aRFET9cQsV2IOfIUM1-EcKfxeZOxfPeD1LTnO4-1RZXEfxCFobVcyM8CtzXf926DU40NUCkw\"><span style=\"font-weight: 400;\">work</span></a><span style=\"font-weight: 400;\"> applying AI systems based on Bayes’ work to aircraft structures. Another, by </span><a href=\"https://www.dailymaverick.co.za/article/2023-03-22-algorithms-are-moulding-and-shaping-our-politics-heres-how-to-avoid-being-gamed/\"><span style=\"font-weight: 400;\">Chantelle Gray</span></a><span style=\"font-weight: 400;\">, shows how Bayes’ work is used to build the algorithms shaping our politics.</span>\r\n\r\n<span style=\"font-weight: 400;\">Although the Bayesian method presents notable benefits in terms of adaptability, accuracy and uncertainty management, it is crucial to consider the substantial investments in computational and financial resources necessary to implement and maintain these approaches successfully.</span>\r\n\r\n<span style=\"font-weight: 400;\">However, methods have been developed to reduce this computational load. 
For example, in 2016, </span><a href=\"https://www.amazon.co.jp/Probabilistic-Element-Updating-Bayesian-Statistics/dp/1119153034\"><span style=\"font-weight: 400;\">Ilyes Boulkaibet, Sondipon Adhikari and I</span></a><span style=\"font-weight: 400;\"> developed robust methods for reducing the computational cost of the Bayesian AI prediction risk quantification procedure.</span>\r\n\r\n<span style=\"font-weight: 400;\">Furthermore, Tsakane Mongwe, Rendani Mbuvha and I, in our 2023 </span><a href=\"https://www.amazon.co.jp/-/en/Marwala/dp/0443190356\"><span style=\"font-weight: 400;\">book</span></a><span style=\"font-weight: 400;\">, developed a Bayesian risk quantification method for machine learning. Given the viability of AI prediction risk quantification, what are the governance, regulatory and policy implications?</span>\r\n\r\n<span style=\"font-weight: 400;\">A concerted effort from all parties involved in the development, deployment and governance of AI systems is required to maximise the benefits of AI while mitigating its risks. It is imperative that policymakers champion and enact regulations mandating AI prediction risk quantification.</span>\r\n\r\n<span style=\"font-weight: 400;\">AI developers and organisations must incorporate risk quantification into their development lifecycle as a fundamental component of ethical AI development, rather than treating it as an afterthought.</span>\r\n\r\n<span style=\"font-weight: 400;\">End-users and the public should be engaged in a transparent dialogue regarding AI prediction risks, ensuring that the design and deployment of AI systems reflect societal values and ethical considerations.</span>\r\n\r\n<span style=\"font-weight: 400;\">Returning to Tinyiko’s hospital visit, it is evident that measuring AI prediction risk is not only advantageous but imperative in the healthcare industry. 
The consequences of healthcare decisions on patients are substantial; therefore, it is vital to comprehend the reliability and constraints of AI-powered predictions.</span>\r\n<h4><b>Severe consequences of errors</b></h4>\r\n<span style=\"font-weight: 400;\">The consequences of errors in healthcare applications are exceedingly severe, given that they may result in life-threatening misdiagnoses, inappropriate treatments, or lost opportunities for early intervention.</span>\r\n\r\n<span style=\"font-weight: 400;\">By measuring the risk associated with AI predictions, healthcare personnel can weigh the inherent uncertainties of AI-driven insights against the benefits they provide and make informed decisions. This methodology facilitates a sophisticated approach to patient care by integrating AI recommendations with clinical expertise in a transparent, accountable, and patient-centric manner.</span>\r\n\r\n<span style=\"font-weight: 400;\">Moreover, from a regulatory standpoint, it is critical to quantify prediction risk to verify that AI systems satisfy rigorous safety and effectiveness criteria before implementation in essential healthcare settings. In the era of AI, this meticulous risk assessment is vital to preserving patient confidence and adhering to the ethical standards of medical practice.</span>\r\n\r\n<span style=\"font-weight: 400;\">To conclude, measuring the performance risk associated with AI predictions, even though it adds cost, is not merely a technical obstacle but also a social and moral imperative.</span>\r\n\r\n<span style=\"font-weight: 400;\">Our collective endeavours for safety, fairness and success will be determined by our capacity to quantify and manage the risks associated with these powerful technologies as we approach a future that AI progressively influences.</span>\r\n\r\n<span style=\"font-weight: 400;\">Measuring AI prediction risk must become mandatory for all critical applications such as healthcare. </span><b>DM</b>",
"authors": [
{
"id": "7591",
"name": "Tshilidzi Marwala",
"image": "https://www.dailymaverick.co.za/wp-content/uploads/Tshilidzi-Marwala-01_from-JanP-20180531-USE.jpg",
"url": "https://staging.dailymaverick.co.za/author/tshilidzi-marwala/",
"editorialName": "tshilidzi-marwala",
"department": "",
"name_latin": ""
}
],
"keywords": [
{
"type": "Keyword",
"data": {
"keywordId": "18821",
"name": "Ethics",
"url": "https://staging.dailymaverick.co.za/keyword/ethics/",
"slug": "ethics",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Ethics",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "70223",
"name": "Tshilidzi Marwala",
"url": "https://staging.dailymaverick.co.za/keyword/tshilidzi-marwala/",
"slug": "tshilidzi-marwala",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Tshilidzi Marwala",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "77890",
"name": "healthcare",
"url": "https://staging.dailymaverick.co.za/keyword/healthcare/",
"slug": "healthcare",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "healthcare",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "97828",
"name": "machine learning",
"url": "https://staging.dailymaverick.co.za/keyword/machine-learning/",
"slug": "machine-learning",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "machine learning",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "351007",
"name": "Opinionista",
"url": "https://staging.dailymaverick.co.za/keyword/opinionista/",
"slug": "opinionista",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Opinionista",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "415283",
"name": "AI prediction risk",
"url": "https://staging.dailymaverick.co.za/keyword/ai-prediction-risk/",
"slug": "ai-prediction-risk",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "AI prediction risk",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "415284",
"name": "Thomas Bayes",
"url": "https://staging.dailymaverick.co.za/keyword/thomas-bayes/",
"slug": "thomas-bayes",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Thomas Bayes",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "415285",
"name": "AI risks",
"url": "https://staging.dailymaverick.co.za/keyword/ai-risks/",
"slug": "ai-risks",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "AI risks",
"translations": null
}
}
],
"related": [],
"summary": "Artificial intelligence (AI) developers and organisations must incorporate risk quantification into their development lifecycle as a fundamental component of ethical AI development, rather than treating it as an afterthought. Measuring AI prediction risk is paramount in augmenting AI systems’ transparency.\r\n",
"elements": [],
"seo": {
"search_title": "Risks of AI prediction performance should be measured, especially in critical areas like healthcare",
"search_description": "<span style=\"font-weight: 400;\">Suppose Tinyiko from Duthuni Village goes to Elim Hospital. The nurse, Thuso, orders an emergency x-ray image of Tinyiko’s lung to ascertain whether he has a condition ",
"social_title": "Risks of AI prediction performance should be measured, especially in critical areas like healthcare",
"social_description": "<span style=\"font-weight: 400;\">Suppose Tinyiko from Duthuni Village goes to Elim Hospital. The nurse, Thuso, orders an emergency x-ray image of Tinyiko’s lung to ascertain whether he has a condition ",
"social_image": ""
},
"cached": true,
"access_allowed": true
}