(by Professor Zheng Changzhong at the China-Central and Eastern European Countries Media Roundtable Forum on May 22, 2022)
Leaders and friends, good day!
I am delighted to participate in the “Central and Eastern European Journalists Visit China” program. Along the way, we have not only conducted site visits but also made many friends, and I have shared my perspectives on the principles behind the phenomena we observed. Thank you all for your generous help throughout the journey. As the last speaker today, while this is billed as a summary speech, it is really an opportunity to share my reflections after listening to everyone’s insights.
Earlier, journalist friends freely discussed media responsibilities in the AI era, deeply analyzing both the challenges posed by AI and the duties media must shoulder. From my notes, the primary topic was whether we should avoid, oppose, or actively embrace the wave of AI. Despite concerns about its disruptions, the consensus on “active engagement” was unanimous.
The arrival of the AI era has a distinct duality: on one hand, it holds enormous potential to drive innovative development in the media industry; on the other, it inevitably brings negative impacts.
Let’s first discuss the benefits. As a new form of productive force, AI can significantly enhance work efficiency and break through traditional human capability limits, truly enabling “empowerment.” It also creates entirely new communication scenarios and collaboration opportunities, making human-machine interaction a key direction for collaboration in the AI era—a point deeply explored and widely endorsed by everyone.
However, this new phenomenon also poses potential risks to human development. Many intellectual tasks are being replaced by AI, sparking concerns about an “AI dependency syndrome.” Such dependency could hinder human intellectual development and even replace intuitive thinking, leading to “mental inertia.” Additionally, AI’s widespread use may reduce face-to-face interactions, weakening emotional bonds. A more pressing concern is AI’s reliance on diverse data inputs: uneven technological development across countries could create disparities in the quantity, quality, and variety of data fed into AI systems, potentially exacerbating digital divides along linguistic and national lines. Cultural and linguistic differences may also trigger new conflicts. For media professionals, the proliferation of disinformation is a “lethal threat”—AI can not only “recreate and upgrade” existing false information but also “fabricate” more deceptive content when lacking valid data.
In response, participants proposed countermeasures. The first is to strengthen collaboration between humans and AI, particularly in verifying disinformation. Beyond traditional manual checks, the practice of The Paper (a Chinese news outlet), introduced by Deputy Director Yang Jian of Shanghai United Media Group, of using AI to verify AI is worth emulating. This demonstrates that AI is not an enemy but a tool; whether it becomes a friend or foe depends on how we use it. The Paper has made significant progress in this regard, and its AI tool “Pengpai Mingcha” could become a global public good for AI governance, not merely a practical solution for Chinese citizens. I encourage you to explore collaborations with such tools upon your return.
During the discussion, important themes like AI ethics, education in the AI era, and the brilliance of human nature were also raised, prompting reflections on media responsibilities in these areas. As a university teacher and think tank researcher not directly engaged in media work, I was deeply inspired by these conversations and formed my own insights.
Two questions just occurred to me, which might provide a coherent framework for our thinking: Who created AI? What is the purpose of media?
We know that AI is a product of human intelligence and a significant leap in human rationality at a specific stage—it originates from human rationality and inherently carries the imprint of human civilization. This leads to further questions: Can the light of human nature be amplified through AI? Might the dark side of human nature be exacerbated by AI? When discussing AI governance, we are essentially revisiting profound questions about human nature: Technology has always been a mirror, and the “good” or “evil” of AI reflects how humans perceive themselves and define the future of civilization.
While accompanying you from Beijing to Xi’an and on to Zhejiang, I have been contemplating a major proposition: How has human civilization evolved to its current state? As one of the world’s earliest civilizations and the only one with an unbroken heritage, Chinese civilization embodies both the originality of a “source-type” civilization and the continuity of a “tree-ring-type” civilization: each historical stage’s rational development and civilizational progress are clearly recorded and layered, like the rings of a tree.
A core realization has grown clearer during this journey: The key to the Chinese civilization’s five-thousand-year continuity lies in its human-centered philosophy as a secular civilization. From the ancient political philosophy of “the people are the foundation of the state” to the contemporary governance ideology of “putting people at the center,” this consistent logic runs through the evolution of human history (over 1 million years), cultural history (over 10,000 years), and civilizational history (over 5,000 years). In this human-centered evolution, we have witnessed technological revolutions and driven iterative upgrades in governance systems and practices.
Notably, Chinese civilization has always maintained a clear dialectical mindset when embracing change: acknowledging the humanistic brilliance brought by technological progress while addressing the accompanying governance challenges. By distilling the accumulated human experiences and governance wisdom from history, we have formed a civilizational cognitive system that penetrates the essence of things—this ability to explore “inner mechanisms” is the spiritual code behind Chinese civilization’s enduring vitality.
In short, facing the waves of a new society and future, we must draw wisdom from our ancestors while openly confronting new phenomena and envisioning new prospects, applying the rational laws of human cognition of self and nature—developed through civilizational evolution—to current practices and future foresight.
In this process, we advocate for the creative transformation and innovative development of traditional culture—a path not unique to China, as every nation has its own unique cultural roots. Due to its long and unbroken history, Chinese civilization’s accumulated wisdom and rational thinking should serve as a source of inspiration for addressing challenges in the AI era. Chinese traditional wisdom belongs not only to China but to all humanity; similarly, each nation’s traditional experiences are part of humanity’s shared civilizational treasure.
Therefore, in the face of current epochal challenges, we must revisit the origins of human civilization and deeply grasp the light and limitations of human nature through continuous questioning and reflection. In the AI era, how can we amplify the strengths of human nature while overcoming its weaknesses? This is not just a technical question but a core inquiry into civilizational development.
Crucially, in AI governance and the construction of digital civilization, the power of a single nation is insufficient—globalization has already interconnected the world inseparably, and AI-era data circulation and technological applications require shared participation. Now more than ever, we recognize that humanity faces unprecedented common challenges.
If “a community with a shared future for mankind” and “universal harmony” were once seen as ideal visions, they have become actionable agendas in the face of AI-induced common crises. As Romanian journalist Mr. Ștefan, our delegation leader, said, we need closer solidarity to jointly analyze potential risks in the AI era and collaboratively establish global governance standards—turning “shared destiny” from a concept into practice and making it a spiritual beacon for technological civilization.
The concept of a community with a shared future for mankind is no longer just a slogan in the AI era; it is a critical strategic task to be implemented. For media, pursuing and delivering truth is fundamental. As a Global Times colleague noted, much of AI’s data originates from news media products. A thought-provoking case was shared: when someone intentionally input incorrect information, the AI-generated results were equally flawed. This phenomenon warns us that in the AI era, the media’s mission to safeguard truth is more urgent than ever—because technology’s magnifying glass can both project the light of human nature and amplify the shadow of falsehood.
In this context, some mentioned the importance of education, especially in guiding the next generation. Media professionals’ responsibilities extend beyond their roles as parents or as today’s communicators: the words we write and the truths we safeguard are preserving authentic historical memories for human civilization, allowing future generations to trace our journey clearly.
In the long river of Chinese civilization, historical memories were once preserved through truthful records in historical classics; today, artificial intelligence has become a new carrier, with vast media information integrated into technological currents through “data feeding.” If AI is like a “child” nurtured by humanity, inputting truth into it is not only a responsibility to technology but also to the future of our descendants.
I am convinced that the mission of media professionals in the AI era—to pursue truth, safeguard humanistic brilliance, and transmit shared global values—is more urgent and profound than ever.
Thank you!
*The views and opinions expressed in the articles are solely those of the individual authors and do not reflect the position of the Secretariat of the Belt and Road Journalist Network.