All Article Properties:
{
"access_control": false,
"status": "publish",
"objectType": "Article",
"id": "930979",
"signature": "Article:930979",
"url": "https://staging.dailymaverick.co.za/article/2021-05-28-plethora-of-concerns-on-issues-of-accountability-transparency-and-privacy-in-ai-content-moderation/",
"shorturl": "https://staging.dailymaverick.co.za/article/930979",
"slug": "plethora-of-concerns-on-issues-of-accountability-transparency-and-privacy-in-ai-content-moderation",
"contentType": {
"id": "1",
"name": "Article",
"slug": "article"
},
"views": 0,
"comments": 0,
"preview_limit": null,
"excludedFromGoogleSearchEngine": 0,
"title": "Plethora of concerns on issues of accountability, transparency and privacy in AI content moderation",
"firstPublished": "2021-05-28 16:07:09",
"lastUpdate": "2021-05-28 16:07:09",
"categories": [
{
"id": "29",
"name": "South Africa",
"signature": "Category:29",
"slug": "south-africa",
"typeId": {
"typeId": "1",
"name": "Daily Maverick",
"slug": "",
"includeInIssue": "0",
"shortened_domain": "",
"stylesheetClass": "",
"domain": "staging.dailymaverick.co.za",
"articleUrlPrefix": "",
"access_groups": "[]",
"locale": "",
"preview_limit": null
},
"parentId": null,
"parent": [],
"image": "",
"cover": "",
"logo": "",
"paid": "0",
"objectType": "Category",
"url": "https://staging.dailymaverick.co.za/category/south-africa/",
"cssCode": "",
"template": "default",
"tagline": "",
"link_param": null,
"description": "Daily Maverick is an independent online news publication and weekly print newspaper in South Africa.\r\n\r\nIt is known for breaking some of the defining stories of South Africa in the past decade, including the Marikana Massacre, in which the South African Police Service killed 34 miners in August 2012.\r\n\r\nIt also investigated the Gupta Leaks, which won the 2019 Global Shining Light Award.\r\n\r\nThat investigation was credited with exposing the Indian-born Gupta family and former President Jacob Zuma for their role in the systemic political corruption referred to as state capture.\r\n\r\nIn 2018, co-founder and editor-in-chief Branislav ‘Branko’ Brkic was awarded the country’s prestigious Nat Nakasa Award, recognised for initiating the investigative collaboration after receiving the hard drive that included the email tranche.\r\n\r\nIn 2021, co-founder and CEO Styli Charalambous also received the award.\r\n\r\nDaily Maverick covers the latest political and news developments in South Africa with breaking news updates, analysis, opinions and more.",
"metaDescription": "",
"order": "0",
"pageId": null,
"articlesCount": null,
"allowComments": "1",
"accessType": "freecount",
"status": "1",
"children": [],
"cached": true
},
{
"id": "38",
"name": "World",
"signature": "Category:38",
"slug": "world",
"typeId": {
"typeId": "1",
"name": "Daily Maverick",
"slug": "",
"includeInIssue": "0",
"shortened_domain": "",
"stylesheetClass": "",
"domain": "staging.dailymaverick.co.za",
"articleUrlPrefix": "",
"access_groups": "[]",
"locale": "",
"preview_limit": null
},
"parentId": null,
"parent": [],
"image": "",
"cover": "",
"logo": "",
"paid": "0",
"objectType": "Category",
"url": "https://staging.dailymaverick.co.za/category/world/",
"cssCode": "",
"template": "default",
"tagline": "",
"link_param": null,
"description": "",
"metaDescription": "",
"order": "0",
"pageId": null,
"articlesCount": null,
"allowComments": "1",
"accessType": "freecount",
"status": "1",
"children": [],
"cached": true
}
],
"content_length": 6548,
"contents": "<span style=\"font-weight: 400;\">As South Africa talks about the Fourth Industrial Revolution, the use of artificial intelligence (AI) is going to become more and more prevalent. While AI can be a powerful tool in moderating online social interactions, there is a plethora of concerns regarding issues of accountability, transparency and privacy in AI content moderation. </span>\r\n\r\n<span style=\"font-weight: 400;\">As Dr Mari-Sanna Paukkeri, CEO and co-founder of the Finnish tech firm, Utopia Analytics, and Avani Singh, a South African data-rights activist and lawyer, pointed out in conversation with </span><span style=\"font-weight: 400;\">the Institute for Security Studies’ senior research adviser, Karen Allen, the need for ethical AI is more important than ever. </span>\r\n\r\n<iframe title=\"YouTube video player\" src=\"https://www.youtube.com/embed/VBlPUbTeBeo\" width=\"853\" height=\"480\" frameborder=\"0\" allowfullscreen=\"allowfullscreen\"></iframe>\r\n\r\n<span style=\"font-weight: 400;\">“What do we mean by ethical artificial intelligence?” Allen asked the AI experts.</span>\r\n\r\n<span style=\"font-weight: 400;\">Ethical AI would aim to ensure “the protection and promotion of fundamental rights like freedom of expression, access to information, privacy, quality, non-discrimination and access to a meaningful remedy,” Singh responded. </span>\r\n\r\n<span style=\"font-weight: 400;\">These may be the minimum requirements of what constitutes ethical AI, but the confluence of other elements such as “transparency, accountability, openness and processes of due diligence”, also speak to ethical artificial intelligence, she added. </span>\r\n\r\n<span style=\"font-weight: 400;\">In South Africa, AI is being used by media companies to sift through large volumes of data to produce articles and, in some instances, to write articles from scratch, said Singh. </span>\r\n\r\n<span style=\"font-weight: 400;\">But AI is increasingly “being used to moderate or curate platforms to determine what is put out”, said Singh. This has tremendous benefits, she added, but AI content moderation also poses tremendous risks, including issues around biases and restrictions on free speech.</span>\r\n\r\n<b>AI content moderation</b>\r\n\r\n<span style=\"font-weight: 400;\">For Utopia Analytics, </span><a href=\"https://utopiaanalytics.com\"><span style=\"font-weight: 400;\">described</span></a><span style=\"font-weight: 400;\"> as a “text analytics and content moderation” firm, AI content moderation would simply refer to the curation of online content by an automated AI tool that would have learnt from human moderators, said Paukkeri. </span>\r\n\r\n<span style=\"font-weight: 400;\">For example, if a platform is looking to restrict hate speech, Utopia Analytics’ AI intelligence moderator could be used to curate and eliminate improper content. </span>\r\n\r\n<span style=\"font-weight: 400;\">The firm’s AI moderator receives human moderation decisions, uses the decisions as training data and learns what it is that made the human moderator accept or decline the comment, and then mimics human moderators’ decisions on a larger scale, said Paukkeri. </span>\r\n\r\n<span style=\"font-weight: 400;\">“This works for any language and any dialect in the world, [and] it takes only two weeks to get the AI model working for a new social platform”, said Paukkeri. 
</span>\r\n\r\n<span style=\"font-weight: 400;\">Singh commended Utopia Analytics for respecting the United Nations’ Universal Declaration for Human Rights, but raised concerns about other AI service providers not following similar guidelines or codes of conduct — leaving the public at the mercy of AI moderators. </span>\r\n\r\n<span style=\"font-weight: 400;\">As the AI moderator learns from human decision-making, it would therefore receive what humans would define as, for example, hate speech. </span>\r\n\r\n<span style=\"font-weight: 400;\">“[But] when we talk about moderating hate speech, there is no internationally accepted definition of what constitutes hate speech and so legitimate content may be removed based on the very cautious or overbearing nature of the platforms themselves, which creates huge implications for the right to freedom of expression,” explained Singh. </span>\r\n\r\n<span style=\"font-weight: 400;\">AI moderation also raises privacy concerns regarding data collection and storage. </span>\r\n\r\n<span style=\"font-weight: 400;\">Utopia Analytics adheres to the EU privacy legislation, said Paukkeri. But the company does not collect any data and would receive data from its customer companies that have been provided by users of the platform.</span>\r\n\r\n<span style=\"font-weight: 400;\">If the firm no longer has a relationship with a customer, EU privacy legislation requires that the AI moderated data be deleted, she said. </span>\r\n\r\n<b>Human and machine biases</b>\r\n\r\n<span style=\"font-weight: 400;\">Another challenge in the moderation of content through AI is the human biases that AI moderating tools effectively acquire from the human training data. According to Paukkeri, there are many ways of eliminating built-in human biases from being passed on to AI moderators. </span>\r\n\r\n<span style=\"font-weight: 400;\">“Every time we build a new AI model from the data we receive from the customer, we [assess it] and if there’s something that shouldn’t be there we will let the customer know and ask them to re-moderate that content,” said Paukkeri.</span>\r\n\r\n<span style=\"font-weight: 400;\">Utopia Analytics’ AI also learns a “general moderation policy” and therefore notices if there are outliers or if different humans have been moderating in a different way, which happens very often, said Paukkeri. </span>\r\n\r\n<span style=\"font-weight: 400;\">However, there are many AI moderation solutions using reputation as part of their model which could work to perpetuate biases. For example, if a user who has a history of bad online behaviour types a message that happens to be the same as a user who has a history of good online behaviour, only the badly behaved user’s message would be eliminated, said Paukkeri. </span>\r\n\r\n<b>‘How do you hold an algorithm to account?’</b>\r\n\r\n<span style=\"font-weight: 400;\">Because of the increasing presence of AI content moderation, it has become necessary for professional bodies to update declarations and codes of conduct to deal with the issue of accountability. “Accountability remains one of the biggest gaps when we talk about artificial intelligence and that’s partly because of the voluntary codes that don’t necessarily establish accountability measures and perhaps a slight reluctance [sic],” said Singh. </span>\r\n\r\n<span style=\"font-weight: 400;\">Earlier this year, the European Commission published its draft regulation surrounding “the harmonisation of rules [regarding] AI”, she said. 
The commission seeks to establish an EU AI board, where users would be subject to significant fines of up to </span><span style=\"font-weight: 400;\">€</span><span style=\"font-weight: 400;\">300 million for the most egregious violations, she added. </span>\r\n\r\n<span style=\"font-weight: 400;\">While we are seeing methods to achieve accountability and legislation moving swiftly in certain parts of the world such as the EU, there is a lag in the African context, said Singh. </span>\r\n\r\n<span style=\"font-weight: 400;\">In South Africa, the </span><span style=\"font-weight: 400;\">Protection of Personal Information</span><span style=\"font-weight: 400;\"> Act achieves some accountability, but when considering AI and machine learning the act is still largely outdated, she said. </span>\r\n\r\n<span style=\"font-weight: 400;\">“I fully support the benefits that AI can provide and I take nothing away from the potential that it offers to achieve really meaningful solutions,” said Singh. “I just think we are on a very concerning path at the moment, which is an unaccountable, poorly transparent one. And so, I think there is a lot of work that needs to be done.” </span><b>DM</b>\r\n\r\n<i><span style=\"font-weight: 400;\">Subscribe to the </span></i><span style=\"font-weight: 400;\">Daily Maverick</span><i><span style=\"font-weight: 400;\"> webinar newsletter and keep updated with our upcoming conversations: </span></i><a href=\"https://email.touchbasepro.com/h/d/38911C881454EE15\"><i><span style=\"font-weight: 400;\">https://email.touchbasepro.com/h/d/38911C881454EE15</span></i></a><i><span style=\"font-weight: 400;\"> </span></i>",
"teaser": "Plethora of concerns on issues of accountability, transparency and privacy in AI content moderation",
"externalUrl": "",
"sponsor": null,
"authors": [
{
"id": "72465",
"name": "Victoria O’Regan",
"image": "https://www.dailymaverick.co.za/wp-content/uploads/2024/05/Victoria-ORegan.png",
"url": "https://staging.dailymaverick.co.za/author/victoria-oregan-3-2-2/",
"editorialName": "victoria-oregan-3-2-2",
"department": "",
"name_latin": ""
}
],
"description": "",
"keywords": [
{
"type": "Keyword",
"data": {
"keywordId": "7857",
"name": "Ethics",
"url": "https://staging.dailymaverick.co.za/keyword/ethics/",
"slug": "ethics",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Ethics",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "14767",
"name": "Bias",
"url": "https://staging.dailymaverick.co.za/keyword/bias/",
"slug": "bias",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Bias",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "17214",
"name": "Artificial intelligence",
"url": "https://staging.dailymaverick.co.za/keyword/artificial-intelligence/",
"slug": "artificial-intelligence",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Artificial intelligence",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "85262",
"name": "Cybersecurity",
"url": "https://staging.dailymaverick.co.za/keyword/cybersecurity/",
"slug": "cybersecurity",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Cybersecurity",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "88207",
"name": "Machine learning",
"url": "https://staging.dailymaverick.co.za/keyword/machine-learning/",
"slug": "machine-learning",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Machine learning",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "352319",
"name": "social interactions",
"url": "https://staging.dailymaverick.co.za/keyword/social-interactions/",
"slug": "social-interactions",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "social interactions",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "352320",
"name": "content moderation",
"url": "https://staging.dailymaverick.co.za/keyword/content-moderation/",
"slug": "content-moderation",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "content moderation",
"translations": null
}
}
],
"short_summary": null,
"source": null,
"related": [],
"options": [],
"attachments": [
{
"id": "74619",
"name": "",
"description": "",
"focal": "50% 50%",
"width": 0,
"height": 0,
"url": "https://dmcdn.whitebeard.net/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg",
"transforms": [
{
"x": "200",
"y": "100",
"url": "https://dmcdn.whitebeard.net/i/uGRtvMpmbC7cys05Ng6-8BzDono=/200x100/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg"
},
{
"x": "450",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/TdTc4ll-tytiycgUK9b5CAvHR3w=/450x0/smart/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg"
},
{
"x": "800",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/djxC6xKiCEVFJIb2XNpQ3LTXgKA=/800x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg"
},
{
"x": "1200",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/MRXcgmQ-ZA_Iz-hSxUb31tjiql8=/1200x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg"
},
{
"x": "1600",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/cjj4knUWeiEIWnGJj3yg31ScYRk=/1600x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg"
}
],
"url_thumbnail": "https://dmcdn.whitebeard.net/i/uGRtvMpmbC7cys05Ng6-8BzDono=/200x100/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg",
"url_medium": "https://dmcdn.whitebeard.net/i/TdTc4ll-tytiycgUK9b5CAvHR3w=/450x0/smart/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg",
"url_large": "https://dmcdn.whitebeard.net/i/djxC6xKiCEVFJIb2XNpQ3LTXgKA=/800x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg",
"url_xl": "https://dmcdn.whitebeard.net/i/MRXcgmQ-ZA_Iz-hSxUb31tjiql8=/1200x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg",
"url_xxl": "https://dmcdn.whitebeard.net/i/cjj4knUWeiEIWnGJj3yg31ScYRk=/1600x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2021/05/Tori-webinarEthicalAI.jpg",
"type": "image"
}
],
"summary": "In the second instalment of the cybersecurity webinar series with former BBC foreign correspondent Karen Allen on Wednesday, participants heard about the ‘promise and perils’ of artificial intelligence and content moderation on social media platforms.",
"template_type": null,
"dm_custom_section_label": null,
"elements": [],
"seo": {
"search_title": "Plethora of concerns on issues of accountability, transparency and privacy in AI content moderation",
"search_description": "<span style=\"font-weight: 400;\">As South Africa talks about the Fourth Industrial Revolution, the use of artificial intelligence (AI) is going to become more and more prevalent. While AI can be a powe",
"social_title": "Plethora of concerns on issues of accountability, transparency and privacy in AI content moderation",
"social_description": "<span style=\"font-weight: 400;\">As South Africa talks about the Fourth Industrial Revolution, the use of artificial intelligence (AI) is going to become more and more prevalent. While AI can be a powe",
"social_image": ""
},
"cached": true,
"access_allowed": true
}
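
For reference, below is a minimal Python sketch of how a consumer might read this payload. It assumes the object above has been saved locally as article.json (a hypothetical filename); the field names it touches ("title", "authors", "categories", "attachments", "transforms", "x", "url") are taken directly from the dump, while the width-selection logic is illustrative rather than part of any CMS API.

import json

def best_image_url(article, min_width=800):
    # Return the URL of the narrowest image transform that is at least
    # min_width pixels wide; fall back to the attachment's base URL.
    # Transform widths arrive as strings in "x" (e.g. "800"), per the dump above.
    for attachment in article.get("attachments", []):
        if attachment.get("type") != "image":
            continue
        wide_enough = sorted(
            (t for t in attachment.get("transforms", []) if int(t["x"]) >= min_width),
            key=lambda t: int(t["x"]),
        )
        return wide_enough[0]["url"] if wide_enough else attachment.get("url")
    return None

# "article.json" is a hypothetical local copy of the object printed above.
with open("article.json", encoding="utf-8") as fh:
    article = json.load(fh)

print(article["title"])
print(", ".join(author["name"] for author in article["authors"]))
print(", ".join(category["slug"] for category in article["categories"]))
print(best_image_url(article, min_width=800))

The same approach extends to the other fields in the object, such as the "seo" block or the "keywords" list.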