Ethics for an Intelligent Future: The Third China Forum on Ethics of Science and Technology Focuses on Emerging Questions in the AI Era

On the morning of November 21, 2025, the Third China Forum on Ethics of Science and Technology opened at the Wu Wenzheng Auditorium of Fudan University. With the theme “Ethics of Science and Technology in the Era of Artificial Intelligence,” the Forum was hosted by the China Society for Ethics of Science and Technology (Preparatory) and organized by the CAST-FDU Institute of Technology Ethics for Human Future, the School of Philosophy, Fudan University, and the CAST National Academy of Innovation Strategy (NAIS).
Wang Jinzhan, a member of the CAST leadership and Executive Secretary of CAST; Professor Jin Li, Academician of the Chinese Academy of Sciences and President of Fudan University; and Professor Zheng Qinghua, Academician of the Chinese Academy of Engineering and Party Secretary of Tongji University, attended the Opening Ceremony and delivered remarks. The Forum attracted over one hundred experts and scholars from across the country, as well as nearly 300 graduate students. Jin Haiyan, Chair of the Council of the CAST-FDU Institute of Technology Ethics for Human Future, moderated the Opening Ceremony.

During the Opening Ceremony, Professor Jin Li first extended a welcome to more than 400 participating scholars on behalf of Fudan University and expressed gratitude for the guidance and support of CAST and the Shanghai Association for Science and Technology. He emphasized that the rapid development of emerging technologies such as Artificial Intelligence and Brain–Computer Interfaces places higher demands on the development of ethics of science and technology, as ethics serves as a civilizational cornerstone balancing innovation and values. Jin noted that Fudan University has taken the lead in establishing the Institute of Technology Ethics, launching a professional master’s program in Applied Ethics, and releasing “Yi Jian,” the world’s first AI agent system designed specifically for ethics review. He stressed that Fudan’s efforts to build an independent Chinese knowledge system for ethics of science and technology have never ceased. Looking to the future, Jin expressed his hope that the Forum would help build broader consensus, deepen the theoretical framework for Technology for Good, and encourage greater public engagement with ethical issues in science and technology.

Wang Jinzhan expressed warm congratulations on the convening of the Forum. He noted that emerging technologies represented by AI are rapidly iterating, and the complexity and global nature of ethical challenges have become increasingly prominent. He offered three suggestions: first, to uphold the fundamental principle of being people-centered and oriented toward Technology for Good while strengthening ethical norms for AI; second, to accelerate the establishment of the China Society for Ethics of Science and Technology and reinforce the role of the academic community; and third, to systematically build a human-responsibility-centered governance framework and enhance China’s international discourse power. He expressed confidence that through joint efforts, China will build a sound governance system for ethics of science and technology and safeguard the realization of high-level scientific and technological self-reliance.

Following the opening ceremony, the keynote speech session was moderated by Professor Zhang Shuangli, Dean of the School of Philosophy, Fudan University. Professor Yang Min from Fudan University, Professor Wang Shuqin from Capital Normal University, and Professor Gong Qun from Renmin University of China delivered keynote speeches on core topics such as AI safety, AI for good, and the moral risks of AI.
Professor Yang Min pointed out that global governance for AI safety is seriously lagging behind technological development. To monitor risk levels in AI systems, the first step is to conduct safety evaluations of Large Language Models (LLMs). Yang introduced the Fudan Baize Dynamic Safety and Compliance Evaluation Platform, noting that it not only enables efficient evaluation of foundational LLMs but also provides forward-looking decision support for national security governance. Second, on AI risk early warning, Yang reported that tests of 32 open-source LLMs revealed that 11 already exhibit autonomous replication capability, indicating potential risks of model runaway. Based on these findings, he called for establishing cross-disciplinary governance mechanisms to ensure that AI develops along a controllable trajectory.

Professor Wang Shuqin emphasized that modern science and technology represented by AI have become a “fourth force” influencing the future of humanity, after politics, land, and capital. She argued that the instrumental value of technology must remain subordinate to human purposive values. Intelligent agents, she observed, essentially embody human intentions and goals; therefore, the responsibility for Human–AI Value Alignment ultimately falls on humans. Technology practitioners must ensure that algorithms reflect universally shared human values. Given the inherent lag of legal regulation, virtue ethics plays an indispensable complementary role. Scientists and engineers must rely on moral self-discipline to resist temptations in the grey areas where explicit rules are lacking. Likewise, evaluators of technology must uphold virtues of fairness and impartiality to prevent review processes from being compromised by collusive interests, thereby establishing a governance system in which law and virtue reinforce each other.

Professor Gong Qun argued that the ethical risks of generative AI stem from the modern tension between instrumental rationality and value rationality. He identified three core risks of generative AI: the opacity of black-box systems creates regulatory and ethical challenges; AI hallucination produces factual and perceptual errors; and the extensive collection of personal data poses privacy risks. To address these issues, Gong advocated establishing a governance framework centered on human responsibility, strengthening institutional mechanisms for algorithmic auditing, and safeguarding personal data rights, thereby achieving harmony between technological progress and human values.

After the keynote speeches, Professor Wang Guoyu from Fudan University and Professor Yuan Zhenguo, Lifetime Professor at East China Normal University, engaged in a Thematic Dialogue. Focusing on “Educational Transformation and Ethical Boundaries in the Era of Artificial Intelligence,” the two scholars examined issues of responsibility attribution and the nature of creativity in human–AI collaboration, using the recent experiment at ECNU involving an AI “lead author” call-for-papers as a point of departure. Professor Yuan clarified that this initiative was an extreme exploration of AI creativity, aimed at documenting how humans guide AI collaborative writing through Prompt Engineering, thereby exploring the new human role in the age of AI. Discussing the transformation of educational paradigms, Yuan proposed a triadic “teacher–student–agent” interactive model and predicted that future education will transcend physical boundaries to form personalized, all-space–time learning environments. Concluding the dialogue, Yuan remarked: “The essence of education in the age of artificial intelligence is to teach teachers and students to remain the masters of machines within human–AI coexistence.”

Finally, Professor Zheng Qinghua, Academician of the Chinese Academy of Engineering, delivered an Invited Speech titled “Understanding and Reflections on Human–AI Value Alignment.” He noted that value alignment has become one of the central challenges facing society today. The rapid development of AI brings risks such as algorithmic bias, AI hallucination, and blurred boundaries of intellectual property. The fundamental tension, he argued, lies in the conflict between the complexity and ambiguity of human values and the precision of AI goal-pursuit. To address this, Zheng proposed that alignment must be mutual: AI should adjust its values through reinforcement learning from human feedback, while humans must adapt to new paradigms of Human–AI Collaboration. Achieving value alignment ultimately requires building a multidimensional and evolving system in which technology, governance, and culture advance together.

In the afternoon, the Forum convened several Thematic Forums on key issues including human autonomy and human identity in the era of human–AI coexistence, opportunities and challenges of AI-empowered medicine and neuroscience, AI-driven moral enhancement and possibilities for AI-empowered ethics, ethical governance of AI and global discourse building, methodological studies in ethics of science and technology, and AI ethics in art and narrative. In addition, the Forum gathered emerging scholars of technology ethics from major universities across China and organized eight Graduate Sub-forums, accompanied by an outstanding paper competition. The final results were announced at the Closing Ceremony on the afternoon of November 22.
Staying closely aligned with the theme "Ethics of Science and Technology in the Era of Artificial Intelligence," this year's Forum offered incisive analysis of the opportunities and challenges accompanying the current wave of AI development. Through in-depth exchanges and discussions, the Forum aimed to contribute forward-looking and constructive insights for AI governance and to advance academic efforts toward Technology for Good.