The Role of AI in Journalism: Balancing Efficiency and Integrity in the AI Era

2025-04-03

CAO YIN, CHINA DAILY

**Transformation of Media by AI**


Ladies and gentlemen,

Today, I want to talk about something that is changing our world—AI. Artificial intelligence tools, such as ChatGPT and DeepSeek, are everywhere now, and journalism is no exception.

Imagine having a super-smart helper that can gather information, write drafts, and even create videos for you. Sounds great, right? But just like any powerful tool, AI comes with its own set of challenges.

Let’s start with the good stuff. AI can do amazing things. In Kunshan, East China’s Jiangsu province, the police used AI to catch criminals. I reported on a case in which someone was cheated out of nearly 1 million yuan. The AI team traced the money in just 10 minutes and stopped half of it from being transferred. They even caught the suspects. That is the power of AI. It can analyze data faster than any human and help solve complex cases that might otherwise go unsolved.

But here is the flip side. AI is also being used for bad things. Many cases have shown that AI can be dangerous if it falls into the wrong hands. It can be used to create fake identities, manipulate images, and even spread false information.

And it is not just about crime. AI is also causing problems with people’s rights. There was a case in Beijing where a voice actor’s voice was copied without her permission. Her voice was used in audio books, and she didn’t even know. The court ruled that this was a violation of her rights. Imagine someone using your voice or face without asking. That is a big deal. It is not just unethical; it is illegal.

AI is also being used in journalism to create multimedia content. Tools that generate images based on explanatory text are becoming popular. These tools can produce high-quality visuals to appeal to a broader audience. However, this also raises concerns about the authenticity of the content. With AI, it is easier to create deepfakes and manipulate audio-visual content, which can spread misinformation.

**The Double-Edged Sword of AI in News Writing**

Now, let’s focus on journalism. AI can be a huge help. It can write drafts, suggest headlines, and even find interesting angles for stories. For example, AI can help journalists transcribe interviews quickly, saving time and effort. It can also tailor content to individual readers’ preferences, increasing online views.

I use AI in my work as well. Sometimes I ask it to suggest alternative headlines for a story, so I can choose one that not only summarizes the main idea of the report but is also more attractive to readers. At other times I discuss hot issues with the tool, such as cyberbullying, juvenile crime and even AI itself, trying to discover more angles for my reports and learn about others’ opinions.

Yet the more I benefit from it, the more I realize that AI can also mess things up.

First, it might use or spread outdated information, data and pictures, as some AI systems are not good at learning new things or cannot be updated in a timely manner. Journalists who rely solely on AI to collect information and report can easily fall for, or even produce, fake news.

Second, AI poses significant challenges to news copyright. AI systems often scrape large amounts of data from the internet for training, including copyrighted news content. This practice may infringe on the rights of copyright holders, as it involves the unauthorized use of their intellectual property. The legal battles over AI’s use of copyrighted material highlight the tension between technological innovation and the protection of creators’ rights. If AI-generated content is deemed to infringe on copyright, it could lead to stricter regulations that might stifle AI development, while also ensuring fair compensation for content creators.

Third, AI-generated news has the potential to create and disseminate misinformation. With the ability to produce deepfakes and other forms of synthetic content, AI can be used to manipulate news stories, making it difficult for the public to distinguish between real and fake information. This not only undermines the credibility of news organizations but also poses a threat to democratic processes by spreading false narratives. The ease with which AI can generate convincing content raises concerns about the potential for malicious actors to exploit these technologies for fraudulent or harmful purposes.

In conclusion, the application of AI in journalism brings numerous potential risks, including the spread of false information, algorithmic bias, information manipulation, ethical challenges and news copyright disputes. These risks not only affect the authenticity and credibility of news, but also pose threats to social trust and public safety.

Therefore, the journalism industry must be more careful when using AI, ensuring the accuracy and authenticity of content while protecting the public’s right to information and privacy.

**Higher Standards for Editing and Reporting in the AI Era**

So, what do we do? I think newsrooms need rules for using AI. We cannot let AI replace human judgment. My suggestion is that we must double-check AI-generated content carefully to make sure it is accurate before publishing.

We also need to be honest about when we use AI. If an article is partly written by AI, we should say so. And if facts, figures or pictures come from other news outlets, we must credit the sources, both to respect others’ work and to maintain order in the industry.

We need to balance the use of AI with the protection of people’s rights. The bottom line of AI application is that we must not infringe upon others’ reputation, privacy or likeness; a number of court cases heard by Beijing courts have shown how AI-generated content can harm others.

To address the problems, China has taken some actions.

China’s Cyberspace Administration has already issued rules to regulate AI applications while promoting the technology’s development. In 2023, for example, the authority jointly released interim measures for managing generative AI services with six other departments, drawing a bottom line for providers of such services to safeguard national security and public interests.

The measures clarify that AI-generated content must not subvert State power, nor involve terrorism, pornography, ethnic discrimination or violence, nor anything that may disturb economic and social order.

In March, the administration also worked with several other ministries to issue another regulation, requiring that AI-generated content be labeled so users can easily distinguish AI-generated information online.

But it is not just about rules. We also need to educate ourselves and each other, and we need more research on the AI industry to establish boundaries on what should be done and how.

In early March, China concluded its most important annual political event, the Two Sessions, the meetings of the country’s top legislative body and top political advisory body. During that period, a number of national lawmakers and political advisers, along with many legal experts, called for more research on AI regulations, rules or even laws.

Meanwhile, I think we need more international cooperation to deal with these new challenges. We need more discussions on how to promote technological development while regulating its application.

As journalists in this fast-developing information age, we have to keep pace with technology and understand more about AI. We need to know how it works, how it can help us, and how it can hurt us. We need to stay informed and keep learning. Because if we don’t, we might end up spreading misinformation or violating people’s rights.

Ladies and gentlemen, AI is changing how we do journalism. It can make us more efficient, but it also brings risks. We need to use AI wisely, check our facts more carefully, and protect people’s rights. By doing this, we can make sure AI helps us do better journalism, not worse.

Thank you!