All Article Properties:
{
"access_control": false,
"status": "publish",
"objectType": "Article",
"id": "2719306",
"signature": "Article:2719306",
"url": "https://staging.dailymaverick.co.za/article/2025-05-20-how-to-tell-if-a-photos-fake-you-probably-cant-new-rules-are-needed/",
"shorturl": "https://staging.dailymaverick.co.za/article/2719306",
"slug": "how-to-tell-if-a-photos-fake-you-probably-cant-new-rules-are-needed",
"contentType": {
"id": "1",
"name": "Article",
"slug": "article"
},
"views": 0,
"comments": 0,
"preview_limit": null,
"excludedFromGoogleSearchEngine": 0,
"title": "How to tell if a photo’s fake? You probably can’t. New rules are needed",
"firstPublished": "2025-05-20 15:00:12",
"lastUpdate": "2025-05-17 12:46:03",
"categories": [
{
"id": "1825",
"name": "Maverick Life",
"signature": "Category:1825",
"slug": "maverick-life",
"typeId": {
"typeId": "1",
"name": "Daily Maverick",
"slug": "",
"includeInIssue": "0",
"shortened_domain": "",
"stylesheetClass": "",
"domain": "staging.dailymaverick.co.za",
"articleUrlPrefix": "",
"access_groups": "[]",
"locale": "",
"preview_limit": null
},
"parentId": null,
"parent": [],
"image": "",
"cover": "",
"logo": "",
"paid": "0",
"objectType": "Category",
"url": "https://staging.dailymaverick.co.za/category/maverick-life/",
"cssCode": "",
"template": "default",
"tagline": "",
"link_param": null,
"description": "",
"metaDescription": "",
"order": "0",
"pageId": null,
"articlesCount": null,
"allowComments": "1",
"accessType": "freecount",
"status": "1",
"children": [],
"cached": true
}
],
"content_length": 7498,
"contents": "The problem is simple: it’s <a href=\"https://theconversation.com/can-you-tell-the-difference-between-real-and-fake-news-photos-take-the-quiz-to-find-out-253539\">hard to know</a> whether a photo’s real or not anymore. Photo manipulation tools are so good, so common and easy to use, that a picture’s truthfulness is no longer guaranteed.\r\n\r\nThe situation got trickier with the uptake of <a href=\"https://theconversation.com/topics/generative-ai-133426\">generative artificial intelligence</a>. Anyone with an internet connection can cook up just about any image, plausible or fantasy, with photorealistic quality, and present it as real. This affects our ability to discern truth in a world increasingly influenced by images.\r\n\r\nI <a href=\"https://www.wits.ac.za/news/latest-news/opinion/2024/2024-02/how-academics-can-counter-ai-thinks-therefore-i-am.html\">teach</a> and <a href=\"https://scholar.google.com/citations?user=y_wCKHQAAAAJ&hl=en&oi=ao\">research</a> the ethics of artificial intelligence (AI), including how we use and understand digital images.\r\n\r\nMany people ask how we can tell if an image has been changed, but that’s fast becoming too difficult. Instead, here I suggest a system where creators and users of images openly state what changes they’ve made. Any similar system will do, but new rules are needed if AI images are to be deployed ethically – at least among those who want to be trusted, especially the media.\r\n\r\nDoing nothing isn’t an option, because what we believe about media affects how much we trust each other and our institutions. There are several ways forward. 
Clear labelling of photos is one of them.\r\n<h4><strong>Deepfakes and fake news</strong></h4>\r\nPhoto manipulation was once the preserve of government propaganda teams, and later, expert users of <a href=\"https://www.tandfonline.com/doi/abs/10.1207/S15327728JMME1802_05?casa_token=0Xir7SwwaOQAAAAA:ZkIg7_2eyQFmTc8Ix34dObJWolQPHlvwUVeyaleeAdmdamNJYDNJ79HsYJpKMTSDXZU3BxYmGi4e\">Photoshop</a>, the popular software for editing, altering or creating digital images.\r\n\r\n<p><img loading=\"lazy\" class=\"wp-image-2719389 size-full\" src=\"https://www.dailymaverick.co.za/wp-content/uploads/2025/05/GettyImages-973118252-scaled.jpg\" alt=\"A logo is displayed in the Adobe Systems Inc. Photoshop Express application on an Apple Inc. iPhone in an arranged photograph taken in Tiskilwa, Illinois, U.S.\" width=\"2560\" height=\"1651\" /> A logo is displayed in the Adobe Systems Inc. Photoshop Express application on an Apple Inc. iPhone in an arranged photograph taken in Tiskilwa, Illinois, U.S. Photographer: Daniel Acker/Bloomberg via Getty Images</p>\r\n\r\nToday, digital photos are automatically subjected to colour-correcting filters on phones and cameras. Some social media tools <a href=\"https://www.newsweek.com/tiktok-beauty-filter-glitch-automatic-report-1600809\">automatically “prettify”</a> users’ pictures of faces. Is a photo taken of oneself by oneself even real anymore?\r\n\r\nThe basis of shared social understanding and consensus – trust regarding what one sees – is being eroded. This is accompanied by the apparent rise of untrustworthy (and often malicious) news reporting. 
We have new language for the situation: <a href=\"https://theconversation.com/fake-news-the-internet-has-turned-an-age-old-problem-into-a-new-threat-72111\">fake news</a> (false reporting in general) and <a href=\"https://theconversation.com/deepfakes-in-south-africa-protecting-your-image-online-is-the-key-to-fighting-them-223383\">deepfakes</a> (deliberately manipulated images, whether for waging war or garnering more social media followers).\r\n\r\n<strong>Read more:</strong> <a href=\"https://www.dailymaverick.co.za/article/2025-03-18-tiktok-influencers-instagram-gangsters-and-fake-news-how-socialmedia-is-affecting-crime/\">TikTok influencers, Instagram gangsters and fake news — how #socialmedia is affecting crime</a>\r\n\r\n<a href=\"https://www.dailymaverick.co.za/article/2025-03-05-crime-intelligence-target-of-social-media-fake-news-but-parliament-hears-unit-also-like-a-mafia/\">Misinformation</a> campaigns using manipulated images can <a href=\"https://arxiv.org/abs/2005.02443\">sway elections</a>, deepen divisions, and even incite violence. <a href=\"https://journals.sagepub.com/doi/full/10.1177/1940161218811981?casa_token=e7UPcvnzpqwAAAAA%3ALqlZj1WX0jZSOg6eCtcsTlyMZ2dASNFaY64l_NWH8TCLrd_q_GtvFrXL7ZhyYrTtQ5k033mS4Zvb\">Scepticism</a> towards trustworthy media has untethered ordinary people from fact-based accounting of events, and has fuelled conspiracy theories and fringe groups.\r\n<h4><strong>Ethical questions</strong></h4>\r\nA further problem for producers of images (personal or professional) is the difficulty of knowing what’s permissible. In a world of doctored images, is it acceptable to prettify yourself? 
How about editing an ex-partner out of a picture and posting it online?\r\n\r\nWould it matter if a well-respected western newspaper published a photo of Russian President Vladimir Putin pulling his face in disgust (an expression that he surely has made at some point, but of which no actual image has been captured, say) using AI?\r\n\r\nThe ethical boundaries blur further in highly charged contexts. Does it matter if opposition political ads against then-presidential candidate Barack Obama in the US deliberately <a href=\"https://nationalpost.com/news/world/obamas-skin-looked-darker-in-republican-ads\">darkened his skin</a>?\r\n\r\nWould generated images of dead bodies in Gaza be more palatable, perhaps more moral, than actual photographs of dead humans? Is a magazine cover showing a model digitally altered to <a href=\"https://heinonline.org/HOL/Page?handle=hein.journals/regjil10&div=16&g_sent=1&casa_token=kBh-STXeH8kAAAAA:LdWrDD1IiJTbtW4YawTviHZntgBOa6hxvrUvEaLJ4sJ9KYQtucDGGxMxWCiTEWZFAZKv8WT_&collection=journals\">unattainable beauty standards</a>, while not declaring the level of photo manipulation, unethical?\r\n<h4><strong>A fix</strong></h4>\r\nPart of the solution to this social problem demands two simple and clear actions. First, declare that photo manipulation has taken place. Second, disclose what kind of photo manipulation was carried out.\r\n\r\nThe first step is straightforward: in the same way pictures are published with author credits, a clear and unobtrusive “enhancement acknowledgement” or EA should be added to caption lines.\r\n\r\nThe second is about how an image has been altered. Here I call for five “categories of manipulation” (not unlike a film rating). 
Accountability and clarity create an ethical foundation.\r\n\r\nThe five categories could be:\r\n\r\n<strong>C – Corrected</strong>\r\n\r\nEdits that preserve the essence of the original photo while refining its overall clarity or aesthetic appeal – like correcting colour balance, contrast or lens distortion. Such corrections are often automated (for instance by smartphone cameras) but can be performed manually.\r\n\r\n<strong>E – Enhanced</strong>\r\n\r\nAlterations that are mainly about colour or tone adjustments. This extends to slight cosmetic retouching, like the removal of minor blemishes (such as acne) or the artificial addition of makeup, provided the edits don’t reshape physical features or objects. This includes all filters involving colour changes.\r\n\r\n<strong>B – Body manipulated</strong>\r\n\r\nThis is flagged when a physical feature is altered. Changes in body shape, like slimming arms or enlarging shoulders, or the altering of skin or hair colour, fall under this category.\r\n\r\n<strong>O – Object manipulated</strong>\r\n\r\nThis declares that the physical position of an object has been changed. A finger or limb moved, a vase added, a person edited out, a background element added or removed.\r\n\r\n<strong>G – Generated</strong>\r\n\r\nEntirely fabricated yet photorealistic depictions, such as a scene that never existed, must be flagged here. This covers all images created digitally, including by generative AI, but is limited to photographic depictions. (An AI-generated cartoon of the pope would be excluded, but a photo-like picture of the <a href=\"https://www.buzzfeednews.com/article/chrisstokelwalker/pope-puffy-jacket-ai-midjourney-image-creator-interview\">pontiff in a puffer jacket</a> is rated G.)\r\n\r\nThe suggested categories are value-blind: they are (or ought to be) triggered simply by the occurrence of any manipulation. 
So, colour filters applied to an image of a politician trigger an E category, whether the alteration makes the person appear friendlier or scarier. A critical feature for accepting a rating system like this is that it is transparent and unbiased.\r\n\r\nThe CEBOG categories above aren’t fixed; there may be overlap: B (Body manipulated) might often imply E (Enhanced), for example.\r\n\r\n<p><img loading=\"lazy\" class=\"wp-image-2719331 size-full\" src=\"https://www.dailymaverick.co.za/wp-content/uploads/2025/05/GettyImages-1676402246-scaled.jpg\" alt=\"Mobile billboard is seen near the U.S. Capitol on September 12, 2023 in Washington, DC. \" width=\"2560\" height=\"1707\" /> A mobile billboard is seen near the U.S. Capitol on September 12, 2023 in Washington, DC. (Photo by Tasos Katopodis/Getty Images for Friends of the Earth)</p>\r\n<h4>Feasibility</h4>\r\nResponsible photo manipulation software could automatically indicate to users which class of manipulation has been carried out. If needed, it could watermark the image, or simply record the category in the picture’s metadata (as with data about the source, owner or photographer). Automation could very well ensure ease of use, and perhaps reduce human error, encouraging consistent application across platforms.\r\n\r\nOf course, displaying the rating will ultimately be an editorial decision, and good users, like good editors, will do this responsibly, hopefully maintaining or improving the reputation of their images and publications. While one would hope that social media would buy into this kind of editorial ideal and encourage labelled images, much room for ambiguity and deception remains.\r\n\r\nThe success of an initiative like this hinges on technology developers, media organisations and policymakers collaborating to create a shared commitment to transparency in digital media. 
<strong>DM <iframe style=\"border: none !important;\" src=\"https://counter.theconversation.com/content/252645/count.gif?distributor=republish-lightbox-advanced\" width=\"1\" height=\"1\"></iframe>\r\n</strong>\r\n\r\n<a href=\"https://theconversation.com/how-to-tell-if-a-photos-fake-you-probably-cant-thats-why-new-rules-are-needed-252645\"><em>This story was first published in</em> The Conversation.</a> <em>Martin Bekker is a Computational Social Scientist at the University of the Witwatersrand.</em>",
"teaser": "How to tell if a photo’s fake? You probably can’t. New rules are needed",
"externalUrl": "",
"sponsor": null,
"authors": [
{
"id": "1128678",
"name": "Martin Bekker",
"image": "",
"url": "https://staging.dailymaverick.co.za/author/martin-bekker-2/",
"editorialName": "martin-bekker-2",
"department": "",
"name_latin": ""
}
],
"description": "",
"keywords": [
{
"type": "Keyword",
"data": {
"keywordId": "4084",
"name": "Social media",
"url": "https://staging.dailymaverick.co.za/keyword/social-media/",
"slug": "social-media",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Social media",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "4088",
"name": "Fake news",
"url": "https://staging.dailymaverick.co.za/keyword/fake-news/",
"slug": "fake-news",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Fake news",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "88798",
"name": "Misinformation",
"url": "https://staging.dailymaverick.co.za/keyword/misinformation/",
"slug": "misinformation",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Misinformation",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "398396",
"name": "Artificial general intelligence",
"url": "https://staging.dailymaverick.co.za/keyword/artificial-general-intelligence/",
"slug": "artificial-general-intelligence",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Artificial general intelligence",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "433822",
"name": "Photo manipulation",
"url": "https://staging.dailymaverick.co.za/keyword/photo-manipulation/",
"slug": "photo-manipulation",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Photo manipulation",
"translations": null
}
},
{
"type": "Keyword",
"data": {
"keywordId": "433823",
"name": "Photoshop",
"url": "https://staging.dailymaverick.co.za/keyword/photoshop/",
"slug": "photoshop",
"description": "",
"articlesCount": 0,
"replacedWith": null,
"display_name": "Photoshop",
"translations": null
}
}
],
"short_summary": null,
"source": null,
"related": [],
"options": [],
"attachments": [
{
"id": "84493",
"name": "WASHINGTON, DC - SEPTEMBER 12: Mobile billboard is seen near the U.S. Capitol on September 12, 2023 in Washington, DC. Eko, Greenpeace, Friends of the Earth, Climate Action Against Disinformation, and Kairos Action have a mobile billboard circling the U.S. Capitol during congressional hearings and forums on artificial intelligence (AI) to highlight the new technology’s dangers to climate change. (Photo by Tasos Katopodis/Getty Images for Friends of the Earth)",
"description": "",
"focal": "50% 50%",
"width": 0,
"height": 0,
"url": "https://dmcdn.whitebeard.net/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg",
"transforms": [
{
"x": "200",
"y": "100",
"url": "https://dmcdn.whitebeard.net/i/L2TUMw2bHqAtXSRqNGhwrE8TZZk=/200x100/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg"
},
{
"x": "450",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/QgdYA0mOOxrl2gwrzroE15H05-w=/450x0/smart/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg"
},
{
"x": "800",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/WoyFg9VUM8yZHwiqi8x0EbIy1-U=/800x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg"
},
{
"x": "1200",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/6pKQ7BeIXfvWrX8KzBTXLpClyw8=/1200x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg"
},
{
"x": "1600",
"y": "0",
"url": "https://dmcdn.whitebeard.net/i/LdIysVOPAmoSiGi-I9jM2I40_pg=/1600x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg"
}
],
"url_thumbnail": "https://dmcdn.whitebeard.net/i/L2TUMw2bHqAtXSRqNGhwrE8TZZk=/200x100/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg",
"url_medium": "https://dmcdn.whitebeard.net/i/QgdYA0mOOxrl2gwrzroE15H05-w=/450x0/smart/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg",
"url_large": "https://dmcdn.whitebeard.net/i/WoyFg9VUM8yZHwiqi8x0EbIy1-U=/800x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg",
"url_xl": "https://dmcdn.whitebeard.net/i/6pKQ7BeIXfvWrX8KzBTXLpClyw8=/1200x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg",
"url_xxl": "https://dmcdn.whitebeard.net/i/LdIysVOPAmoSiGi-I9jM2I40_pg=/1600x0/smart/filters:strip_exif()/file/dailymaverick/wp-content/uploads/2025/05/GettyImages-869590262.jpg",
"type": "image"
}
],
"summary": "The solution demands two simple and clear actions: declare that photo manipulation has taken place and disclose what manipulation was carried out.",
"template_type": null,
"dm_custom_section_label": null,
"elements": [],
"seo": {
"search_title": "How to tell if a photo’s fake? You probably can’t. New rules are needed",
"search_description": "The problem is simple: it’s hard to know whether a photo’s real",
"social_title": "How to tell if a photo’s fake? You probably can’t. New rules are needed",
"social_description": "The problem is simple: it’s hard to know whether a photo’s real",
"social_image": ""
},
"cached": true,
"access_allowed": true
}