Introduction

On July 11th, a bipartisan group of senators introduced a new bill, the COPIED Act, which aims to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

A day later, on July 12th, the European Union's Artificial Intelligence Act, which addresses similar concerns on a far broader scale, was published in the EU Official Journal, making it the first comprehensive horizontal legal framework for regulating AI systems across the EU.

Regulations like these aim to install new guardrails around generative AI by creating transparency standards, putting journalists, artists, and musicians in control of their content, giving individuals a right to sue violators, and prohibiting tampering with or disabling AI provenance information. These two acts are the latest in a series of AI-related regulations that governments globally are implementing as they try to understand and control this technology.

The launch of OpenAI’s ChatGPT in November 2022 sparked a boom in public interest in AI and has since led to its mainstream adoption by consumers and enterprises worldwide. 

Five-year trend of interest in AI (Source: Google)

While the first version of ChatGPT was purely text-based, the latest GPT-4o can reason across audio, vision, and text in real time. Sora, OpenAI's dedicated generative AI video model, promises to produce lifelike videos from nothing more than text prompts.

OpenAI is only one among many well-funded AI firms making strides in generative AI. Consider this AI-generated video, produced using a model by Runway, another AI firm that specializes in high-fidelity, controllable video generation:

With tens of billions invested in AI last year and leading players seeking trillions more, the tech industry is racing to release a growing pile of new, more capable AI models. Some have already surpassed human performance on several benchmarks, including image classification, visual reasoning, and English understanding:

Source: Stanford University - The AI Index Report: Measuring Trends In AI

This brings us to the question of trust. With increasing computing power and abundant training data, it is becoming ever harder to distinguish what is fake from what is real.

Societies have long regarded images, video, and audio as close to ironclad proof that something is real. AI is changing this by blurring the boundary between what is real and what is “generated.” It will eventually be capable of producing artificial but realistic-looking content in almost every format with nothing more than a few good prompts and a click of a mouse. 

According to a recent KPMG report, most people today are either ambivalent or unwilling to trust AI systems. There is a strong association between trust in AI and acceptance of AI (correlation r=0.71).

Source: Trust in Artificial Intelligence, a global study by University of Queensland and KPMG Australia

This problem of trust is fundamentally an old one. New technologies, from the printing press to the internet, have often made it easier to spread untruths or impersonate the trustworthy. AI only scales this problem to a new level.

Where Generative AI falls on the synthetic content spectrum and its potential harms (Source: Digital Content Next)

Without proper regulations, AI can create a dystopian future in which trust is a rare commodity and anyone’s likeness can be hijacked for nefarious purposes. Governments, however, don’t have a great track record of keeping up with emerging technology, and AI is no exception. Banking on regulation alone, then, is a risky way to protect trust.

AI has vast ethical implications, but it also offers substantial benefits to publishers. Most publishers, however, worry that AI will erode public trust:

Source: Reuters Institute, Journalism, Media, and Technology Trends and Predictions 2024

How can digital media publishers build and protect trust in the age of AI? This blog explores key areas in which publishers can take proactive steps to build trust and use it as a cornerstone for future AI initiatives.

Disinformation

Online disinformation isn’t new, but AI tools have supercharged it. As of late 2023, 85% of internet users worried about their inability to spot fake content online. A major part of the problem stems from social media companies' failure to address the threat, as most have severely cut back on the human content moderators who were the most successful defense against disinformation. 

Meta, for example, drastically reduced content moderation teams, shelved a fact-checking tool under development and canceled contracts with external content moderators. Now, the platform is dealing with a flood of bizarre advertising-driven AI-generated content. YouTube also cut its content moderation team. Such steps were even more drastic at X.

Publishers need to work harder to close the gaps in which disinformation thrives. The Financial Times has outlined some great steps to fight disinformation:

  • Be transparent: the better readers understand how news decisions and assumptions are made, the more confidence they will have in the press. The Internet allows for more detailed explanations and background information for each article, yet few publishers use this opportunity, leaving the perception that articles are based on superficial opinions or misleading data.

  • Join forces: collaborate to form fact-checking entities (such as the Comprova project) and promote standards for image metadata and provenance information formats.

  • Incorporate image checking into your workflow: it is easy for AI-made or out-of-context images to be incorporated into narratives. When such images slip through, the damage to reputation can be hard to repair. A number of tools can be used for reverse image searches and for detecting AI-generated or edited images, such as TinEye, Google Lens, AI or Not, and Intel’s FakeCatcher.

  • Review your editorial standards: are they current? Are headlines and notifications accurate, or do they superficially reflect opinions? Verifying quotes in headlines (even when attributed to someone in authority) can help distance publishers from the source of disinformation and avoid disseminating false narratives.

  • Use metadata: prominently label archive images and articles with the date and provenance, especially outside a paywall. Much misinformation is genuine content taken out of context and shared inadvertently. A minimal sketch of such a metadata check appears after this list.

  • Protect your systems: publishers have been targets of cyber attacks in the past few years. Attacks can compromise the ability to publish new content and also damage archives.

  • Invest in media literacy: some of the most effective interventions rely on “inoculating” audiences so they can identify disinformation. Educate your audience.
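
As one small illustration of the “use metadata” point above, the sketch below (Python with the Pillow library; the choice of EXIF fields and the file path are only examples, not a prescribed standard) flags archive images that lack basic date or provenance metadata before they are reused.

    from PIL import Image
    from PIL.ExifTags import TAGS

    def missing_provenance_fields(path):
        """Return the provenance-related EXIF fields an image is missing."""
        exif = Image.open(path).getexif()
        # Map numeric EXIF tag ids to readable names such as "DateTime" or "Artist".
        named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
        required = ["DateTime", "Artist", "Copyright"]  # illustrative choice of fields
        return [field for field in required if not named.get(field)]

    missing = missing_provenance_fields("archive/flood-coverage-2012.jpg")  # example path
    if missing:
        print("Label before reuse; missing metadata:", missing)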

The Black Box Problem

Algorithms have long been an integral part of our lives, shaping everything from social media feeds to loan approvals. Traditional algorithms were simple, traceable, and interpretable sets of instructions, which made them easy to trust. Modern AI has changed this.

State-of-the-art AIs, such as the one behind ChatGPT, have become extremely powerful, capable of generating text, images, and videos that often rival the work of expert writers, designers, and video makers. In doing so, they have also become “black-box” models whose decision-making humans cannot fully trace.

Since AI systems are only as good as the data they use, bad data can lead to bias, inaccuracy, and hallucinations. Publishers must ensure that the data used to train their models is of the highest quality. 

Several techniques can be used to mitigate and ideally remove bias from poorly trained models. As users of AI, publishers must acknowledge the possibility of bias and ask algorithm makers probing questions to ensure they have safeguards in place to avoid discrimination. Automate in small doses and thoroughly test new AI models before putting them out for use at scale.
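
The “automate in small doses and test thoroughly” advice can be made concrete with a rough sketch like the one below: plain Python, with a made-up review set and threshold, gating a candidate model behind a small human-reviewed evaluation before it is enabled at scale.

    def evaluate(model, labelled_examples):
        """Fraction of examples where the model's answer matches the expected label."""
        correct = sum(1 for prompt, expected in labelled_examples if model(prompt) == expected)
        return correct / len(labelled_examples)

    def approve_for_rollout(model, labelled_examples, min_accuracy=0.95):
        score = evaluate(model, labelled_examples)
        print(f"Evaluation accuracy: {score:.0%}")
        return score >= min_accuracy

    # Hypothetical usage with a stand-in "model" and a tiny, human-reviewed set.
    review_set = [("2 + 2", "4"), ("capital of France", "Paris")]
    stub_model = lambda prompt: {"2 + 2": "4", "capital of France": "Paris"}.get(prompt, "")
    if not approve_for_rollout(stub_model, review_set):
        print("Keep the model in limited, human-reviewed use.")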

Ethical Concerns

Several initiatives and frameworks have been developed to address the ethical concerns surrounding AI. For example, the Partnership on AI’s (PAI) Responsible Practices for Synthetic Media is a framework for responsibly developing, creating, and sharing synthetic media. 

Similarly, the EU’s Ethics Guidelines For Trustworthy AI identifies seven key requirements that AI systems should meet to be deemed trustworthy:

Source: EU’s Ethics Guidelines For Trustworthy AI.

Using these resources as guidelines, publishers can establish ethical standards and best practices to ensure responsible AI use. Implementing and communicating these guidelines externally helps build trust with the readership. It also aids internal teams in understanding and managing the ethical implications of AI-generated content.

Privacy, Data Protection, and Consent

AI systems magnify the problem of data protection. Although large data sets produce accurate and representative results, they run a higher privacy risk if breached. Even seemingly anonymized personal data can easily be de-anonymized by AI. Researchers in Europe published a method they say can correctly re-identify 99.98% of individuals in anonymized data sets with just 15 demographic attributes.

Publishers’ privacy policies must ensure their AI systems are transparent about how they collect user information. The data collected, and the purposes for which AI uses it, must be limited by design. Users must be able to opt out of such systems, and the collected data must be deleted upon request.
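
To make the opt-out and deletion requirements concrete, here is a deliberately minimal sketch of a consent check an AI feature could consult before touching reader data; the names and in-memory storage are illustrative assumptions, not a reference implementation.

    # Illustrative only: real systems need durable storage, authentication, and audit logs.
    consent = {}         # user_id -> True if the reader has opted in to AI features
    retained_data = {}   # user_id -> data kept for those features

    def delete_user_data(user_id):
        retained_data.pop(user_id, None)

    def opt_out(user_id):
        consent[user_id] = False
        delete_user_data(user_id)           # delete on request, as the policy promises

    def may_use_for_ai(user_id):
        return consent.get(user_id, False)  # default to "no": limited by design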

Bias and Discrimination

If the training data is biased or doesn’t represent diverse perspectives, the output will likely also be biased. This becomes particularly important when publishers use AI in research and content creation. Language models may end up favoring one group of people over others or may reinforce existing stereotypes by generating content that aligns with those stereotypes.

To ensure their training data is diverse, representative, and free from bias, publishers must regularly audit AI systems and evaluate AI tools to identify any biases or discriminatory patterns in the output. It is also a good idea to seek guidance from ethics experts on mitigating bias and discrimination in AI systems.
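
One simple shape such an audit can take is comparing how often a tool produces a given outcome across groups. The sketch below (plain Python, with fabricated records and an arbitrary threshold) flags large gaps; a real audit would need representative sampling and expert review.

    from collections import defaultdict

    def outcome_rates(records):
        """records: (group, outcome) pairs with outcome 1 or 0; returns rate per group."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, outcome in records:
            totals[group] += 1
            positives[group] += outcome
        return {group: positives[group] / totals[group] for group in totals}

    def flag_disparity(records, max_gap=0.2):
        rates = outcome_rates(records)
        return max(rates.values()) - min(rates.values()) > max_gap, rates

    # Fabricated example: how often each group is described positively by a summarizer.
    records = [("group_a", 1), ("group_a", 1), ("group_b", 1), ("group_b", 0)]
    flagged, rates = flag_disparity(records)
    print(rates, "- review needed" if flagged else "- within threshold")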

Intellectual Property

Multiple AI firms have been accused of circumventing common web standards used by publishers to block the scraping of their content for use in generative AI systems. Recently, Forbes accused Perplexity, an AI answer engine, of plagiarizing its investigative stories in AI-generated summaries without citing or asking for permission.

This presents a two-way challenge for publishers. On the one hand, they must protect their content from unauthorized scraping, and on the other, they must ensure that the AI models they use do not reproduce existing content without permission.
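
On the scraping side, one of the “common web standards” mentioned above is robots.txt. The sketch below uses Python's standard urllib.robotparser to check whether a few publicly documented AI crawler user agents are actually disallowed from a site's article paths; the crawler names, site, and path are examples to verify against current documentation.

    from urllib.robotparser import RobotFileParser

    AI_CRAWLERS = ["GPTBot", "CCBot", "Google-Extended"]  # illustrative; verify current names

    def blocked_ai_crawlers(site, path="/articles/"):
        parser = RobotFileParser(url=f"{site}/robots.txt")
        parser.read()  # fetches and parses the live robots.txt
        return {agent: not parser.can_fetch(agent, f"{site}{path}") for agent in AI_CRAWLERS}

    print(blocked_ai_crawlers("https://www.example.com"))  # example domain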

It is important to take active steps to detect plagiarism. This is difficult because the tools designed to identify copied material might not recognize subtle rephrasings or deep structural similarities that AI can produce. Last year, OpenAI pulled the plug on its AI text detector tool because of a low accuracy rate.
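
A basic overlap check, as sketched below with Python's standard difflib (the text samples and threshold are invented), can catch verbatim or lightly edited copying in AI-assisted drafts. As the paragraph above notes, it will miss subtle rephrasings, so treat it as one layer among several rather than a complete defense.

    from difflib import SequenceMatcher

    def overlap_ratio(draft, source):
        """Rough similarity in [0, 1]; high values suggest near-verbatim overlap."""
        return SequenceMatcher(None, draft.lower(), source.lower()).ratio()

    draft = "The committee approved the budget after a lengthy debate on Tuesday."
    source = "After a lengthy debate on Tuesday, the committee approved the budget."
    if overlap_ratio(draft, source) > 0.6:  # threshold is illustrative
        print("High textual overlap; review against the original before publishing.")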

To develop trust, publishers can pivot to more engaging formats like video. Generative AI video models today make it possible to convert text-based articles into videos at scale with minimal resources and investment. Video is a top priority for publishers in 2024, according to the Reuters Institute’s Journalism, Media, and Technology Trends and Predictions 2024:

Source: Reuters Institute, Journalism, Media, and Technology Trends and Predictions 2024

Learn more about the power of video and why publishers must be serious about their video strategy in 2024.

How Top Publishers Approach Trust-Building in AI

NYT

The New York Times uses Generative AI to assist journalists in uncovering the truth and helping more people understand the world. AI makes its content more accessible through features like digitally voiced articles and translations into more languages. 

NYT has invested significant resources in community moderation to ensure that its readers have a productive place to discuss all sides of an issue and connect freely over the topics that matter most without abuse. NYT has also developed an AI tool to recognize members of Congress!

Reuters

In 2018, Reuters developed an automation tool to help reporters spot hidden stories in their data and has since taken a big gamble on AI-supported journalism. In 2020, it applied AI to 100 years of archive video to enable faster discovery and introduced automated video reports.

Reuters also utilizes natural language processing (NLP) to produce economic indicator alerts from government sources, giving it a distinct advantage over other news organizations. Reuters' AI initiatives are governed by its trust principles, which help preserve its independence, integrity, and freedom from bias in the gathering and dissemination of information and news.

BBC

BBC has outlined three principles that shape its approach toward Gen AI: (1) always act in the best interests of the public, (2) always prioritize talent and creativity, and (3) be open and transparent.

In a February update this year, the BBC shared that most of its AI pilots were internal-only and not yet ready to create audience-facing content. The broadcaster announced 12 pilots covering areas such as translating content to reach wider audiences, reformatting existing content to widen its appeal, a BBC Assistant, more personalized marketing, support for journalists, and streamlining how it organizes and labels its content.

The Economist

The Economist claims to be in a phase of “cautious experimentation” when it comes to AI and clearly labels its work as “AI-generated” if it was produced, at least in some small way, using generative artificial intelligence.

It has outlined four principles that underlie how it intends to work with generative AI: (1) using AI only when it truly enhances the quality of journalism, (2) always maintaining transparency, (3) taking responsibility for what is published (with some exceptions), and (4) a privacy- and copyright-first approach.

These are only some examples of how publishers handle trust. While there are deep concerns about trust and intellectual property protection, most publishers see advantages in using AI to make their businesses more efficient and relevant to audiences.

Summary

Societies have long regarded images, video, and audio as close to ironclad proof that something is real. With AI capabilities fast approaching human levels, this is changing. Trust may become a rare commodity if publishers rely solely on regulations instead of taking active steps. Now is the time to act and make trust a cornerstone of your AI strategy.