【LHSS】Towards A Confucian Ethics Governing Human-AI Relations


Title: Towards A Confucian Ethics Governing Human-AI Relations

Speaker: Brian Wong Yue Shun (黄裕舜), Assistant Professor of Philosophy and Fellow at the Centre on Contemporary China and the World, University of Hong Kong

Moderator: Norman P. Ho, Professor of Law, Peking University School of Transnational Law

Date and Time: December 13, 2024 (Friday), 12:15 PM – 1:30 PM (China Standard Time)

Venue: STL 209

Language: English

About the Speaker

Brian Wong is an Assistant Professor of Philosophy at the University of Hong Kong. His research examines the intersection of geopolitics, political and moral philosophy, and technology, with particular interests in the ethics and dynamics of non-democratic regimes and their foreign policies, responses to historical and colonial injustices, and the impact of automation on labour and human societies. Brian is a Fellow at the newly established Centre on Contemporary China and the World at the University of Hong Kong. As Chief Strategy Officer of the HK-ASEAN Foundation, he advises multinational corporations, family offices, and leading think-tanks on geopolitical affairs and macro risks throughout Asia.

Lecture Summary

With the rapid rise and development of Artificial Intelligence (AI) over the past two decades, questions have increasingly been raised concerning the moral status and associated rights of AI, as well as the ethics of how we should train, use, and interact with it. The Confucian principle of ‘Love with Distinction’, expressed as “showing intimacy to his relatives, benevolence to his people, and love to objects” (qinqin renmin aiwu), envisioned by Mencius (Mengzi) and expanded upon by Neo-Confucians such as Wang Yangming, can help us establish different conceptions of human-AI relations in response to different types of AI. ‘Love with Distinction’ is best interpreted as a process through which we grow our circle of love. On this view, all AI today, given its non-sentience, should be conceived of as wu (objects) to be loved; the more similar an AI agent is to humans, the more apt it is to serve as a starting point in this organic process. Should technological developments give rise to sentient AI possessing the internal and external conditions necessary to develop a sense of morality, we should treat such AI as min (people): AI min ought to be treated with benevolence and are capable of receiving praise and blame for their actions. There may also be contingent reasons to interact with non-min AI as if they were min, so as to develop and reinforce their capacity to behave predictably in an ethically virtuous manner, an approach compatible with Xunzi’s conception of moral education. As for treating some AI as qin (relatives), for whom intimacy is the appropriate response, there exist tentative reasons both for and against such an approach; more careful calibration and weighing of these reasons is needed.
