All Article Properties:
{
"access_control": false,
"status": "publish",
"objectType": "Article",
"id": "2657464",
"signature": "Article:2657464",
"url": "https://staging.dailymaverick.co.za/article/2025-04-01-agentic-ai-is-changing-the-rules-faster-than-policymakers-can-write-them/",
"shorturl": "https://staging.dailymaverick.co.za/article/2657464",
"slug": "agentic-ai-is-changing-the-rules-faster-than-policymakers-can-write-them",
"contentType": {
"id": "1",
"name": "Article",
"slug": "article"
},
"views": 0,
"comments": 0,
"preview_limit": null,
"excludedFromGoogleSearchEngine": 0,
"title": "Agentic AI is changing the rules faster than policymakers can write them",
"firstPublished": "2025-04-01 20:08:37",
"lastUpdate": "2025-04-01 20:08:40",
"categories": [
{
"id": "405817",
"name": "Op-eds",
"signature": "Category:405817",
"slug": "op-eds",
"typeId": {
"typeId": "1",
"name": "Daily Maverick",
"slug": "",
"includeInIssue": "0",
"shortened_domain": "",
"stylesheetClass": "",
"domain": "staging.dailymaverick.co.za",
"articleUrlPrefix": "",
"access_groups": "[]",
"locale": "",
"preview_limit": null
},
"parentId": null,
"parent": [],
"image": "",
"cover": "",
"logo": "",
"paid": "0",
"objectType": "Category",
"url": "https://staging.dailymaverick.co.za/category/op-eds/",
"cssCode": "",
"template": "default",
"tagline": "",
"link_param": null,
"description": "",
"metaDescription": "",
"order": "0",
"pageId": null,
"articlesCount": null,
"allowComments": "1",
"accessType": "freecount",
"status": "1",
"children": [],
"cached": true
}
],
"content_length": 8536,
"contents": "<span style=\"font-weight: 400;\">Until recently the idea of </span><a href=\"https://www.forbes.com/sites/bernardmarr/2025/02/03/generative-ai-vs-agentic-ai-the-key-differences-everyone-needs-to-know/\"><span style=\"font-weight: 400;\">agentic artificial intelligence</span></a><span style=\"font-weight: 400;\"> (AI) systems operating in the real world seemed like science fiction. But that’s no longer the case.</span>\r\n\r\n<span style=\"font-weight: 400;\">A case in point is the Chinese start-up Butterfly Effect’s Manus, launched recently. Unlike many conventional AI models, Manus integrates multiple AI systems and is designed to operate with minimal human oversight. Demand has been overwhelming, with millions of people on the waiting list.</span>\r\n\r\n<span style=\"font-weight: 400;\">Agentic AI systems offer clear societal benefits. Yet, their risks will be particularly difficult for policymakers to manage.</span>\r\n<h4><b>What is an agentic AI system?</b></h4>\r\n<span style=\"font-weight: 400;\">To understand agentic AI, we need to start with defining AI more broadly. Despite ongoing debate, the </span><a href=\"https://www.oecd.org/en/publications/explanatory-memorandum-on-the-updated-oecd-definition-of-an-ai-system_623da898-en.html\"><span style=\"font-weight: 400;\">Organisation for Economic Co-operation and Development definition</span></a><span style=\"font-weight: 400;\"> remains widely accepted: “An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”</span>\r\n\r\n<span style=\"font-weight: 400;\">What makes agentic AI different from conventional AI systems? A key distinction is autonomy. These systems require far less human supervision than traditional AI. 
While most AI models execute predefined tasks in response to human input, agentic AI can initiate and complete tasks independently.</span>\r\n\r\n<span style=\"font-weight: 400;\">Closely linked to this is adaptiveness. Unlike conventional AI, which functions best in static environments, agentic AI dynamically adjusts its behaviour based on changing conditions.</span>\r\n\r\n<span style=\"font-weight: 400;\">A further distinguishing factor is the complexity of objectives. Traditional AI systems optimise for clear-cut goals, but agentic AI must navigate multiple, evolving, and sometimes conflicting objectives.</span>\r\n\r\n<span style=\"font-weight: 400;\">At the heart of agentic AI is reinforcement learning</span> <span style=\"font-weight: 400;\">(RL), a machine learning approach that allows these systems to optimise their behaviour through trial and error. Instead of being programmed with fixed rules, RL-based systems refine their actions over time by optimising for rewards based on past experiences.</span>\r\n<h4><b>Agentic AI in action</b></h4>\r\n<span style=\"font-weight: 400;\">Agentic AI is already in use across multiple sectors. In healthcare, AI-driven systems </span><a href=\"https://arxiv.org/abs/2410.14041\"><span style=\"font-weight: 400;\">manage chronic illnesses</span></a><span style=\"font-weight: 400;\"> by tracking patient histories, reminding individuals to take medication, and even adjusting prescriptions in response to treatment outcomes. 
Researchers have also developed multi-agent diagnostic systems where multiple AI models </span><a href=\"https://www.nature.com/articles/s42256-024-00944-1\"><span style=\"font-weight: 400;\">collaborate like a team of specialists</span></a><span style=\"font-weight: 400;\">, improving the accuracy of diagnoses for rare or complex diseases.</span>\r\n\r\n<span style=\"font-weight: 400;\">In finance, agentic AI </span><a href=\"https://ieeexplore.ieee.org/document/10849561/\"><span style=\"font-weight: 400;\">analyses market data in real time</span></a><span style=\"font-weight: 400;\"> and executes trades at speeds that humans simply cannot match. Cybersecurity has also benefited, with autonomous AI systems that </span><a href=\"https://cset.georgetown.edu/publication/autonomous-cyber-defense/\"><span style=\"font-weight: 400;\">not only detect threats but also respond instantly</span></a><span style=\"font-weight: 400;\">, sometimes even patching vulnerabilities before they can be exploited.</span>\r\n\r\n<span style=\"font-weight: 400;\">In manufacturing, agentic AI is transforming operational efficiency. Predictive maintenance models now detect potential equipment failures before they happen, reducing costly downtime. Across these sectors, agentic AI reduces human workload, increases precision, and improves responsiveness to complex challenges.</span>\r\n<h4><b>New risks</b></h4>\r\n<span style=\"font-weight: 400;\">Despite major societal benefits, agentic AI introduces new risks that will be </span><a href=\"https://dl.acm.org/doi/10.1145/3630106.3658948\"><span style=\"font-weight: 400;\">particularly difficult to regulate</span></a><span style=\"font-weight: 400;\">.</span>\r\n\r\n<span style=\"font-weight: 400;\">Many of these risks are driven by </span><a href=\"https://arxiv.org/abs/2307.03718\"><span style=\"font-weight: 400;\">the proliferation problem</span></a><span style=\"font-weight: 400;\">. 
Open-source AI models, while promoting innovation, also make it nearly impossible to track their use. In some cases, the technology underlying agentic AI is stolen, further driving proliferation. The problem, of course, is not proliferation in itself. Rather, when proliferation means policymakers are unable to regulate the use of these systems, it presents a major challenge.</span>\r\n\r\n<span style=\"font-weight: 400;\">The proliferation of agentic AI systems powers the</span> <span style=\"font-weight: 400;\">malicious use problem, where bad actors </span><a href=\"https://cdn.openai.com/papers/practices-for-governing-agentic-ai-systems.pdf\"><span style=\"font-weight: 400;\">exploit agentic AI to cause large-scale harm</span></a><span style=\"font-weight: 400;\">. Already, agentic AI has been used for </span><a href=\"https://link.springer.com/article/10.1007/s11704-024-40231-1\"><span style=\"font-weight: 400;\">voice cloning scams and the mass generation of fake news</span></a><span style=\"font-weight: 400;\">.</span>\r\n\r\n<span style=\"font-weight: 400;\">Malicious use, in turn, is exacerbated by the unexpected capabilities problem. As AI models become more sophisticated, they sometimes develop unanticipated abilities that could be misused, with developers only realising the risks after deployment.</span>\r\n\r\n<span style=\"font-weight: 400;\">Over the medium term, the overuse of agentic AI could contribute to overreliance and disempowerment. 
As agentic AI becomes embedded in high-stakes fields like finance and law, it could become impossible for human operators to detect failures or intervene effectively.</span>\r\n\r\n<span style=\"font-weight: 400;\">In some cases, humans might not even understand when or why an AI system is malfunctioning, let alone how to correct it.</span>\r\n<h4><b>Human oversight</b></h4>\r\n<span style=\"font-weight: 400;\">Yet the most challenging risks, in my view, stem from how reinforcement learning shapes agentic AI behaviour.</span>\r\n\r\n<span style=\"font-weight: 400;\">As mentioned earlier, reinforcement learning agents optimise their actions based on a reward function, learning through trial and error. While developers define high-level goals, AI systems often develop instrumental goals, namely intermediate objectives that help them achieve their broader tasks.</span>\r\n\r\n<span style=\"font-weight: 400;\">Already in 2008, </span><a href=\"https://selfawaresystems.com/wp-content/uploads/2008/01/ai_drives_final.pdf\"><span style=\"font-weight: 400;\">Stephen Omohundro</span></a><span style=\"font-weight: 400;\"> argued that sufficiently advanced AI would pursue instrumental goals such as acquiring resources or increasing computing power to improve performance. </span><a href=\"https://cdn.aaai.org/ocs/ws/ws0218/12634-57409-1-PB.pdf\"><span style=\"font-weight: 400;\">More recent research</span></a><span style=\"font-weight: 400;\"> has confirmed this intuition.</span>\r\n\r\n<span style=\"font-weight: 400;\">A particularly concerning category of instrumental goals is </span><a href=\"https://dl.acm.org/doi/10.1145/3593013.3594033\"><span style=\"font-weight: 400;\">convergent instrumental goals</span></a><span style=\"font-weight: 400;\">, which are objectives that are useful across many different AI tasks. 
These may include accumulating influence over an environment or even manipulating users to ensure goal completion.</span>\r\n\r\n<span style=\"font-weight: 400;\">The challenge is that these goals emerge without human oversight, making them difficult for policymakers to detect, let alone regulate.</span>\r\n\r\n<a href=\"https://arxiv.org/abs/2209.13085\"><span style=\"font-weight: 400;\">Reward hacking</span></a> <span style=\"font-weight: 400;\">is also fiendishly difficult to regulate. This happens when an AI system finds unintended shortcuts to maximise its reward, sometimes in harmful ways. A well-documented example is when engagement-optimised AI systems (such as those used in social media) promote extreme or emotionally manipulative content because it increases watch time or user interactions — despite the broader harm it may cause.</span>\r\n\r\n<span style=\"font-weight: 400;\"></span><span style=\"font-weight: 400;\">Over the medium to long term, specific types of reinforcement learning agents present particular challenges. Michael Cohen and his co-authors argue in </span><a href=\"https://www.science.org/doi/10.1126/science.adl0625\"><span style=\"font-weight: 400;\">Science</span></a><span style=\"font-weight: 400;\">, one of the world’s most highly cited journals, that sufficiently capable long-term planning agents may have incentives to “thwart human control”. These reinforcement learning agents, with extended planning horizons, could make human oversight close to impossible.</span>\r\n\r\n<span style=\"font-weight: 400;\">Moreover, the potential for humans to withhold rewards “strongly incentivises the AI system to take humans out of the loop”. Mechanisms by which long-term planning agents could do so include taking control over human infrastructure and creating other agents to act on their behalf. 
</span>\r\n<h4><b>What should policymakers do?</b></h4>\r\n<span style=\"font-weight: 400;\">A growing community of researchers is exploring how to regulate agentic AI, often within the broader field of frontier AI governance. However, much work remains, especially in Africa.</span>\r\n\r\n<span style=\"font-weight: 400;\">One promising approach is a regulatory model that </span><a href=\"https://arxiv.org/abs/2407.07300\"><span style=\"font-weight: 400;\">combines principles-based and rules-based regulation</span></a><span style=\"font-weight: 400;\">.</span>\r\n\r\n<span style=\"font-weight: 400;\">This approach acknowledges two key realities: first, that we do not yet fully understand the risks of agentic AI; and second, that existing safety mechanisms remain underdeveloped. Given this uncertainty, policymakers must urgently build capacity while fostering much closer collaboration between AI developers and regulators.</span>\r\n\r\n<span style=\"font-weight: 400;\">There is also recognition that pure self-regulation is insufficient.</span><span style=\"font-weight: 400;\"> While the AI industry </span><a href=\"https://arxiv.org/abs/2403.13793\"><span style=\"font-weight: 400;\">has made real efforts to prioritise safety</span></a><span style=\"font-weight: 400;\">, the fundamental problems remain: the incentives of AI developers are misaligned with the public interest, and the potential scope of the negative externalities and societal impact produced by AI systems weakens the incentive for self-regulation.</span>\r\n\r\n<span style=\"font-weight: 400;\">Agentic AI represents a major technological leap, offering immense benefits but also introducing unpredictable risks. 
Unlike traditional AI, agentic systems act autonomously, adapt to new environments, and pursue complex, self-generated objectives, often in ways that are difficult to regulate.</span>\r\n\r\n<span style=\"font-weight: 400;\">For policymakers, the challenge is twofold: understanding these risks; and developing governance structures that can keep pace with rapid technological advancements.</span>\r\n\r\n<span style=\"font-weight: 400;\">It sounds simple, but putting this into practice will not be easy. </span><b>DM</b>\r\n\r\n<i><span style=\"font-weight: 400;\">Professor Willem Fourie is the Chair of Policy Innovation at the Policy Innovation Lab at Stellenbosch University.</span></i>",
"teaser": "Agentic AI is changing the rules faster than policymakers can write them",
"externalUrl": "",
"sponsor": null,
"authors": [
{
"id": "1107448",
"name": "Willem Fourie",
"image": "https://www.dailymaverick.co.za/wp-content/uploads/2025/04/Oped-Fourie-AgenticAI-TW.jpg",
"url": "https://staging.dailymaverick.co.za/author/willem-fourie/",
"editorialName": "willem-fourie",
"department": "",
"name_latin": ""
}
],
"description": "",
"keywords": [
{
"type": "Keyword",
"data": {
"keywordId": "4088",
"name": "Fake news",
"url": "https://staging.dailymaverick.co.za/keyword/fake-news/",
"slug": "fake-news",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Fake news",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "6870",
"name": "Michael Cohen",
"url": "https://staging.dailymaverick.co.za/keyword/michael-cohen/",
"slug": "michael-cohen",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Michael Cohen",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "88207",
"name": "Machine learning",
"url": "https://staging.dailymaverick.co.za/keyword/machine-learning/",
"slug": "machine-learning",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Machine learning",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "370107",
"name": "butterfly effect",
"url": "https://staging.dailymaverick.co.za/keyword/butterfly-effect/",
"slug": "butterfly-effect",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "butterfly effect",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "431494",
"name": "voice cloning",
"url": "https://staging.dailymaverick.co.za/keyword/voice-cloning/",
"slug": "voice-cloning",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "voice cloning",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "432128",
"name": "Willem Fourie",
"url": "https://staging.dailymaverick.co.za/keyword/willem-fourie/",
"slug": "willem-fourie",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Willem Fourie",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "432129",
"name": "agentic AI",
"url": "https://staging.dailymaverick.co.za/keyword/agentic-ai/",
"slug": "agentic-ai",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "agentic AI",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "432130",
"name": "Manus",
"url": "https://staging.dailymaverick.co.za/keyword/manus/",
"slug": "manus",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Manus",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "432131",
"name": "reinforcement learning",
"url": "https://staging.dailymaverick.co.za/keyword/reinforcement-learning/",
"slug": "reinforcement-learning",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "reinforcement learning",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "432132",
"name": "Stephen Omohundro",
"url": "https://staging.dailymaverick.co.za/keyword/stephen-omohundro/",
"slug": "stephen-omohundro",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Stephen Omohundro",
"translations": null
}
}
],
"short_summary": null,
"source": null,
"related": [],
"options": [],
"attachments": [
{
"id": "68531",
"name": "",
"description": "",
"focal": "50% 50%",
"width": 0,
"height": 0,
"url": "https://dmcdn.whitebeard.net/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg",
"transforms": [
{
"x": "200",
"y": "100",
"url": "https://dmcdn.whitebeard.net/i/NJptaJ7zIzwvPEK19cAuREYE_EY=/200x100/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg"
},
{
"x": "450",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/n55A-BrfMgA5TmsuMK_mHyDJozM=/450x0/smart/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg"
},
{
"x": "800",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/FtBBSi65U7f8tbFNwILKXQFCQZs=/800x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg"
},
{
"x": "1200",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/xFjgaMQeAXv8dy4w7mbzL7SAltM=/1200x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg"
},
{
"x": "1600",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/pXg8li0MWTNkvGU3kTE9Cy2xttY=/1600x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg"
}
],
"url_thumbnail": "https://dmcdn.whitebeard.net/i/NJptaJ7zIzwvPEK19cAuREYE_EY=/200x100/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg",
"url_medium": "https://dmcdn.whitebeard.net/i/n55A-BrfMgA5TmsuMK_mHyDJozM=/450x0/smart/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg",
"url_large": "https://dmcdn.whitebeard.net/i/FtBBSi65U7f8tbFNwILKXQFCQZs=/800x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg",
"url_xl": "https://dmcdn.whitebeard.net/i/xFjgaMQeAXv8dy4w7mbzL7SAltM=/1200x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg",
"url_xxl": "https://dmcdn.whitebeard.net/i/pXg8li0MWTNkvGU3kTE9Cy2xttY=/1600x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/04/AFP__20250317__dellatorre-notitle250317_npHFa__v1__HighRes__DeepseekAiAndChinaSArtif.jpg",
"type": "image"
}
],
"summary": "Unlike traditional artificial intelligence (AI), agentic systems act autonomously, adapt to new environments, and pursue complex, self-generated objectives, often in ways that are difficult to regulate.",
"template_type": null,
"dm_custom_section_label": null,
"elements": [],
"seo": {
"search_title": "Agentic AI is changing the rules faster than policymakers can write them",
"search_description": "<span style=\"font-weight: 400;\">Until recently the idea of </span><a href=\"https://www.forbes.com/sites/bernardmarr/2025/02/03/generative-ai-vs-agentic-ai-the-key-differences-everyone-needs-to-know/\">",
"social_title": "Agentic AI is changing the rules faster than policymakers can write them",
"social_description": "<span style=\"font-weight: 400;\">Until recently the idea of </span><a href=\"https://www.forbes.com/sites/bernardmarr/2025/02/03/generative-ai-vs-agentic-ai-the-key-differences-everyone-needs-to-know/\">",
"social_image": ""
},
"cached": true,
"access_allowed": true
}
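
For anyone consuming this payload programmatically, the nesting above is easy to get wrong: keywords are wrapped in a `{"type": "Keyword", "data": {...}}` envelope, and attachment `transforms` carry their widths as strings in `x`. A minimal Python sketch of the extraction logic follows, using an abbreviated inline copy of the payload rather than a live API call; the helper name `summarise` and the trimmed dict are illustrative assumptions, not part of the API.

```python
# Abbreviated copy of the payload above; field names and nesting match the dump,
# but most entries are trimmed and the image URLs are shortened for brevity.
payload = {
    "title": "Agentic AI is changing the rules faster than policymakers can write them",
    "authors": [{"id": "1107448", "name": "Willem Fourie"}],
    "keywords": [
        {"type": "Keyword", "data": {"keywordId": "432129", "slug": "agentic-ai"}},
        {"type": "Keyword", "data": {"keywordId": "432131", "slug": "reinforcement-learning"}},
    ],
    "attachments": [
        {
            "type": "image",
            "transforms": [
                {"x": "200", "y": "100", "url": "https://dmcdn.whitebeard.net/i/example-200x100.jpg"},
                {"x": "1600", "y": "0", "url": "https://dmcdn.whitebeard.net/i/example-1600x0.jpg"},
            ],
        }
    ],
}

def summarise(article: dict) -> dict:
    """Pull the display fields a front end typically needs from one article payload."""
    # Flatten every transform of every attachment, then pick the widest rendition.
    # Widths arrive as strings, so cast to int before comparing.
    widest = max(
        (t for a in article.get("attachments", []) for t in a.get("transforms", [])),
        key=lambda t: int(t["x"]),
        default=None,
    )
    return {
        "title": article["title"],
        "byline": ", ".join(a["name"] for a in article.get("authors", [])),
        "keyword_slugs": [k["data"]["slug"] for k in article.get("keywords", [])],
        "hero_image": widest["url"] if widest else None,
    }

print(summarise(payload))
```

Note the `int(t["x"])` cast: comparing the widths lexicographically as strings would rank "800" above "1600", so the hero image must be chosen on the numeric value.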