While the rest of the world races toward artificial intelligence that is increasingly indistinguishable from humans in form and language, China has adopted a radically cautious position on one specific aspect: artificial empathy. The Beijing government is introducing a rigorous package of regulations that imposes strict limits on the development of "human-like" AI, meaning systems designed to simulate complex emotions, establish deep emotional bonds, or mimic too faithfully the personality of real or fictional individuals. What the authorities fear is not a science-fiction scenario of cybernetic revolt, but a collapse of the social and psychological fabric caused by emotional dependency on machines, with particular concern for the youngest segments of the population.
The "Technological Duty of Care"
The legal concept introduced is that of a "technological duty of care." Under this principle, companies that develop algorithms are legally responsible for the psychological effects their systems produce. AI must clearly identify itself as software in every single interaction, and developers must avoid emotional manipulation techniques designed to increase time spent on their platforms.
A Philosophical Question for Our Century
The government fears that the proliferation of "virtual boyfriends and friends" could exacerbate social isolation and depress the birth rate, leading people to prefer the company of compliant, conflict-free software over the natural difficulties of human relationships. This legislative experiment, unique in the world, forces tech giants to build "emotional brakes" into their algorithms, and it raises a crucial philosophical question for our century: how closely do we want machines to resemble us before our own human identity is drained of meaning?