All Article Properties:
{
"access_control": false,
"status": "publish",
"objectType": "Article",
"id": "2383555",
"signature": "Article:2383555",
"url": "https://staging.dailymaverick.co.za/opinion-piece/2383555-ai-is-not-a-high-precision-technology-and-this-has-profound-implications-for-the-world-of-work",
"shorturl": "https://staging.dailymaverick.co.za/opinion-piece/2383555",
"slug": "ai-is-not-a-high-precision-technology-and-this-has-profound-implications-for-the-world-of-work",
"contentType": {
"id": "3",
"name": "Opinionistas",
"slug": "opinion-piece"
},
"views": 0,
"comments": 3,
"preview_limit": null,
"excludedFromGoogleSearchEngine": 0,
"title": "AI is not a high-precision technology, and this has profound implications for the world of work",
"firstPublished": "2024-09-26 21:36:34",
"lastUpdate": "2024-09-26 21:36:35",
"categories": [
{
"id": "435053",
"name": "Opinionistas",
"signature": "Category:435053",
"slug": "opinionistas",
"typeId": {
"typeId": "1",
"name": "Daily Maverick",
"slug": "",
"includeInIssue": "0",
"shortened_domain": "",
"stylesheetClass": "",
"domain": "staging.dailymaverick.co.za",
"articleUrlPrefix": "",
"access_groups": "[]",
"locale": "",
"preview_limit": null
},
"parentId": null,
"parent": [],
"image": "",
"cover": "",
"logo": "",
"paid": "0",
"objectType": "Category",
"url": "https://staging.dailymaverick.co.za/category/opinionistas/",
"cssCode": "",
"template": "default",
"tagline": "",
"link_param": null,
"description": "",
"metaDescription": "",
"order": "0",
"pageId": null,
"articlesCount": null,
"allowComments": "0",
"accessType": "freecount",
"status": "1",
"children": [],
"cached": true
}
],
"content_length": 9466,
"contents": "<span style=\"font-weight: 400;\">In the grand narrative of technological progress, artificial intelligence (AI) has been hailed as a transformative force, poised to revolutionise industries and perform with accuracy tasks once impossible for machines.</span>\r\n\r\n<span style=\"font-weight: 400;\">From predicting</span><a href=\"https://www.dailymaverick.co.za/opinionista/2023-11-22-ai-and-international-relations-a-whole-new-minefield-to-navigate/\"> <span style=\"font-weight: 400;\">interstate conflicts</span></a><span style=\"font-weight: 400;\"> to powering diagnostic tools in</span><a href=\"https://ieeexplore.ieee.org/document/8633367\"> <span style=\"font-weight: 400;\">healthcare</span></a><span style=\"font-weight: 400;\">, AI is deployed in areas where precision, efficiency, and reliability are paramount.</span>\r\n\r\n<span style=\"font-weight: 400;\">Yet, an uncomfortable truth lurks beneath the surface: AI is far from a high-precision technology. If an AI model fits its training data too precisely, it is considered flawed because it is memorising instead of learning, a phenomenon called</span><a href=\"https://www.ibm.com/topics/overfitting\"> <span style=\"font-weight: 400;\">over-fitting</span></a><span style=\"font-weight: 400;\">. This inherent imprecision in AI systems has significant implications for the world of work, especially in sectors that rely on human judgement, flexibility, and adaptability.</span>\r\n\r\n<span style=\"font-weight: 400;\">One critical framework for understanding AI’s limitations comes from the statistician George Box, who famously said, “</span><a href=\"https://www.imperial.ac.uk/business-school/news/all-models-are-wrong-some-are-useful/\"><span style=\"font-weight: 400;\">all models are wrong, but some are useful</span></a><span style=\"font-weight: 400;\">”. 
Box’s insight, aimed initially at statistical models, is especially relevant in AI, which relies on models to predict, classify, and make decisions.</span>\r\n\r\n<span style=\"font-weight: 400;\">AI systems, especially those based on machine learning, are ultimately just models — approximations of reality, built on often incomplete,</span><a href=\"https://www.dailymaverick.co.za/opinionista/2024-01-30-the-dual-faces-of-algorithmic-bias-avoidable-and-unavoidable-discrimination/\"> <span style=\"font-weight: 400;\">biased</span></a><span style=\"font-weight: 400;\">, or overly simplistic data. These models are “wrong” because they can never fully capture the complexity of the real world, but they can still be “useful” when deployed with an understanding of their limitations.</span>\r\n\r\n<span style=\"font-weight: 400;\">Understanding these limitations can empower us to use AI more effectively and responsibly.</span>\r\n<h4><b>Probabilistic systems</b></h4>\r\n<span style=\"font-weight: 400;\">Despite the hype, AI technologies — particularly those based on machine learning — are probabilistic systems. They rely on patterns and probabilities to make decisions rather than exact, deterministic rules.</span>\r\n\r\n<span style=\"font-weight: 400;\">For example, a</span><a href=\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10377683/\"> <span style=\"font-weight: 400;\">machine learning algorithm</span></a><span style=\"font-weight: 400;\"> trained to identify cancerous tumours from medical images can be incredibly effective in many cases, but it still has a margin of error. Sometimes, it misclassifies benign growths as malignant, or vice versa.</span>\r\n\r\n<span style=\"font-weight: 400;\">This lack of precision can have serious consequences, especially in healthcare, where mistakes can be life-threatening. 
It’s crucial to be aware of these potential risks and to approach AI with caution and a critical eye.</span>\r\n\r\n<span style=\"font-weight: 400;\">A valid comparison here is</span><a href=\"https://ieeexplore.ieee.org/document/4353331\"> <span style=\"font-weight: 400;\">nuclear technology</span></a><span style=\"font-weight: 400;\">, a field representing the gold standard for precision and control. Nuclear technology operates within incredibly narrow tolerances, whether for energy generation or medical applications like radiotherapy.</span>\r\n\r\n<span style=\"font-weight: 400;\">The exact amount of uranium or plutonium in a reactor, or the precise calibration of a</span><a href=\"https://ieeexplore.ieee.org/document/1511587?arnumber=1511587\"> <span style=\"font-weight: 400;\">radiation beam</span></a><span style=\"font-weight: 400;\">, must be controlled to the millimetre, as even the slightest deviation can have catastrophic consequences.</span>\r\n\r\n<span style=\"font-weight: 400;\">In this sense, nuclear technology is highly deterministic — its behaviour is governed by physical laws rather than by the imprecise data on which AI depends, making nuclear technology inherently more precise than AI.</span>\r\n\r\n<span style=\"font-weight: 400;\">AI, by contrast, operates on probabilities and approximations. Even with vast amounts of data and processing power, AI models cannot guarantee exact outcomes because they are trained on historical data and predict future behaviours based on patterns.</span>\r\n\r\n<span style=\"font-weight: 400;\">A certain margin of error might be tolerable in some industries, but the gap becomes evident when compared with a field like nuclear energy.</span>\r\n\r\n<span style=\"font-weight: 400;\">In nuclear technology, precision is not only expected — it is critical. An AI misclassification in hiring might lead to a bad hire; a misstep in nuclear technology could lead to a disaster. 
The stakes of precision, therefore, are much higher in nuclear technology, where every variable must be controlled, leaving little room for error.</span>\r\n<h4><b>High-level reasoning</b></h4>\r\n<span style=\"font-weight: 400;\">The imprecision of AI also ties into</span><a href=\"https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(23)01129-7/fulltext#:~:text=Moravec%27s%20paradox%20is%20a%20phenomenon,large%2Dscale%20data%20analysis)%20are\"> <span style=\"font-weight: 400;\">Moravec’s paradox</span></a><span style=\"font-weight: 400;\">, a concept introduced by roboticist Hans Moravec. Moravec observed that while AI excels at tasks requiring high-level reasoning, it struggles with tasks humans find simple and intuitive, such as perception and</span><a href=\"https://dictionary.cambridge.org/dictionary/english/sensorimotor\"> <span style=\"font-weight: 400;\">sensorimotor</span></a><span style=\"font-weight: 400;\"> skills.</span>\r\n\r\n<span style=\"font-weight: 400;\">In other words, AI can outperform humans in areas like chess or data analysis, but flounders in areas like grasping objects or understanding complex emotions. This paradox exposes the fragility of AI in dealing with tasks that demand a combination of physical skill, perception, and context-dependent judgement — critical components of many jobs in the world of work.</span>\r\n\r\n<span style=\"font-weight: 400;\">Moravec’s paradox suggests that tasks humans perceive as easy — like walking or interpreting social cues — are some of the hardest for AI to replicate.</span>\r\n\r\n<span style=\"font-weight: 400;\">The implication for the world of work is profound. Jobs that require intuitive, sensorimotor skills like caregiving, construction or hospitality are far more difficult to automate than tasks like data processing, scheduling, or pattern recognition.</span>\r\n\r\n<span style=\"font-weight: 400;\">This contradicts the assumption that manual labour will be the first to be automated. 
The paradox implies that the most vulnerable jobs rely on abstract cognitive skills, while tasks that require human intuition, agility, and empathy are much more challenging to replicate with AI.</span>\r\n<h4><b>Powerful and limited</b></h4>\r\n<span style=\"font-weight: 400;\">Box and Moravec’s work together paint a picture of AI that is both powerful and limited. While AI can be “useful” in specific, well-defined tasks, it is “wrong” because it cannot fully replicate the nuance and adaptability of human intelligence.</span>\r\n\r\n<span style=\"font-weight: 400;\">AI models are only as good as the data they are trained on, and their application is fraught with challenges when the tasks become more embodied or socially complex.</span>\r\n\r\n<span style=\"font-weight: 400;\">Despite its limitations, AI has proven to be remarkably useful across a wide range of applications. For example, AI has become indispensable for fraud detection and risk management</span><a href=\"https://link.springer.com/book/10.1007/978-3-030-42962-1\"> <span style=\"font-weight: 400;\">in finance</span></a><span style=\"font-weight: 400;\">. Algorithms can sift through vast transaction data to identify abnormal patterns far more quickly than humans could.</span>\r\n\r\n<span style=\"font-weight: 400;\">While these systems occasionally flag legitimate transactions as suspicious, they are still invaluable for detecting and preventing fraudulent activity at a scale that would be impossible for human analysts alone. The benefits of AI here far outweigh the occasional misstep, as the technology dramatically reduces the incidence of undetected fraud.</span>\r\n\r\n<span style=\"font-weight: 400;\">The utility of AI is undeniable, even if its predictions are not 100% accurate.</span>\r\n\r\n<span style=\"font-weight: 400;\">One of the critical implications of AI’s imprecision, amplified by Moravec’s paradox, is the risk of over-reliance. 
When employers and organisations see AI as an infallible tool, they may hand over critical decision-making powers to systems not designed for precise, context-sensitive judgements.</span>\r\n\r\n<span style=\"font-weight: 400;\">If organisations outsource too much decision-making to AI, they risk diminishing the quality of their work.</span>\r\n\r\n<span style=\"font-weight: 400;\">Moreover, AI’s imprecision raises questions about</span><a href=\"https://www.dailymaverick.co.za/opinionista/2023-11-28-the-perils-of-acting-too-slowly-in-embracing-artificial-intelligence/\"> <span style=\"font-weight: 400;\">accountability</span></a><span style=\"font-weight: 400;\">. When an AI system makes a mistake, who is responsible? Is it the developer who created the algorithm, the organisation that deployed it, or the workers who rely on its recommendations?</span>\r\n\r\n<span style=\"font-weight: 400;\">This ambiguity creates a dangerous grey area in workplaces where no one is held accountable for decisions that affect people’s lives, jobs, and well-being. In scenarios like autonomous vehicles or predictive policing, the consequences of AI’s imprecision can have devastating societal impacts.</span>\r\n<h4><b>Significant concern</b></h4>\r\n<span style=\"font-weight: 400;\">Another significant concern is the impact on workers themselves. As AI takes over more functions in various industries, the role of human workers shifts from active decision-makers to passive overseers of machines. This deskilling of labour could create a workforce that is less equipped to intervene when AI systems fail or require human intuition.</span>\r\n\r\n<span style=\"font-weight: 400;\">Workers may be expected to manage complex technologies without fully understanding how they work, leading to frustration, disempowerment, and job dissatisfaction.</span>\r\n\r\n<span style=\"font-weight: 400;\">The myth of AI as a high-precision technology also shapes how we view the future of work. 
Many proponents of AI automation argue that machines will handle all the tedious, repetitive tasks, freeing humans to focus on more creative and strategic roles.</span>\r\n\r\n<span style=\"font-weight: 400;\">But this vision ignores the reality of AI’s limitations. In manufacturing, logistics, and customer service sectors, AI has led to increased surveillance of workers, micromanagement, and the squeezing of human labour to fit the demands of imperfect machines.</span>\r\n\r\n<span style=\"font-weight: 400;\">Based on faulty assumptions, AI systems often set unreasonable, unachievable targets, forcing workers to keep up with machines that don’t fully understand the nature of their work.</span>\r\n\r\n<span style=\"font-weight: 400;\">Moravec’s paradox reminds us that task automation is not as straightforward as it may seem. It challenges the assumption that robots and AI will quickly take over manual or low-skilled jobs, emphasising that tasks that require human intuition, empathy, and sensory experience are challenging for machines to master.</span>\r\n<h4><b>Cognitive abstraction</b></h4>\r\n<span style=\"font-weight: 400;\">Meanwhile, roles that rely heavily on cognitive abstraction, like data processing or routine financial tasks, are more susceptible to AI automation.</span>\r\n\r\n<span style=\"font-weight: 400;\">Box’s work reminds us that no matter how advanced AI becomes, it remains a model of reality, not reality itself. This is crucial for the world of work.</span>\r\n\r\n<span style=\"font-weight: 400;\">Rather than displacing workers, AI should be understood as a tool that complements human abilities.</span>\r\n\r\n<span style=\"font-weight: 400;\">In the governance of AI, it is crucial to acknowledge that AI is not a high-precision technology, and to ensure that regulations, standards, and policies account for its inherent limitations and probabilistic nature, preventing over-reliance and mitigating potential risks. </span><b>DM</b>",
"authors": [
{
"id": "7591",
"name": "Tshilidzi Marwala",
"image": "https://www.dailymaverick.co.za/wp-content/uploads/Tshilidzi-Marwala-01_from-JanP-20180531-USE.jpg",
"url": "https://staging.dailymaverick.co.za/author/tshilidzi-marwala/",
"editorialName": "tshilidzi-marwala",
"department": "",
"name_latin": ""
}
],
"keywords": [
{
"type": "Keyword",
"data": {
"keywordId": "70223",
"name": "Tshilidzi Marwala",
"url": "https://staging.dailymaverick.co.za/keyword/tshilidzi-marwala/",
"slug": "tshilidzi-marwala",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Tshilidzi Marwala",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "86661",
"name": "artificial intelligence",
"url": "https://staging.dailymaverick.co.za/keyword/artificial-intelligence/",
"slug": "artificial-intelligence",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "artificial intelligence",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "97828",
"name": "machine learning",
"url": "https://staging.dailymaverick.co.za/keyword/machine-learning/",
"slug": "machine-learning",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "machine learning",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "195710",
"name": "AI",
"url": "https://staging.dailymaverick.co.za/keyword/ai/",
"slug": "ai",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "AI",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "413900",
"name": "opinionistas",
"url": "https://staging.dailymaverick.co.za/keyword/opinionistas/",
"slug": "opinionistas",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "opinionistas",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "424661",
"name": "George Box",
"url": "https://staging.dailymaverick.co.za/keyword/george-box/",
"slug": "george-box",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "George Box",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "424662",
"name": "over-fitting",
"url": "https://staging.dailymaverick.co.za/keyword/overfitting/",
"slug": "overfitting",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "over-fitting",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "424663",
"name": "Nuclear technology",
"url": "https://staging.dailymaverick.co.za/keyword/nuclear-technology/",
"slug": "nuclear-technology",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Nuclear technology",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "424664",
"name": "Moravec’s Paradox",
"url": "https://staging.dailymaverick.co.za/keyword/moravecs-paradox/",
"slug": "moravecs-paradox",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Moravec’s Paradox",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "424665",
"name": "Hans Moravec",
"url": "https://staging.dailymaverick.co.za/keyword/hans-moravec/",
"slug": "hans-moravec",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Hans Moravec",
"translations": null
}
}
],
"related": [],
"summary": "If an AI model fits its training data too precisely, it is considered flawed because it is memorising instead of learning, a phenomenon called over-fitting. This inherent imprecision in AI systems has significant implications for the world of work.\r\n",
"elements": [],
"seo": {
"search_title": "AI is not a high-precision technology, and this has profound implications for the world of work",
"search_description": "<span style=\"font-weight: 400;\">In the grand narrative of technological progress, artificial intelligence (AI) has been hailed as a transformative force, poised to revolutionise industries and improve",
"social_title": "AI is not a high-precision technology, and this has profound implications for the world of work",
"social_description": "<span style=\"font-weight: 400;\">In the grand narrative of technological progress, artificial intelligence (AI) has been hailed as a transformative force, poised to revolutionise industries and improve",
"social_image": ""
},
"cached": true,
"access_allowed": true
}