Universal Love, No Aggression(兼爱,非攻) — Mozi
Before Confucius dominated the Chinese canon, and long before Western ethics even formed the word “utilitarian,” there was Mozi — a warrior-engineer, philosopher, and moral radical born around 470 BCE, in the turbulent Warring States period of ancient China.
Mozi didn’t just write about ideals from a quiet temple. He built siege weapons. He led defensive campaigns. He was, by all accounts, both a thinker and a tactician — someone who understood power because he wielded it, but chose to restrain it in the name of justice.
When a powerful state planned to invade a weaker one, Mozi would walk for days or weeks across the country, confront kings face-to-face, and persuade them to stop the war. If persuasion failed, he and his students would help the underdog prepare their city’s defenses, using knowledge of engineering and strategy to level the playing field.
Imagine a philosopher who was also a hacker, general, and negotiator — and who believed, to his core, that offensive warfare was immoral, and that love should not be partial, but universal. Not just for your family, your tribe, your nation—but for all people.
This is his motto.
Mozi wasn’t naive. He lived in an age of endless warfare, deception, and tyranny. But he dared to imagine — and fight for — a world where power was used not to conquer, but to protect.
That’s why he matters today. In an age where artificial intelligence is becoming a new form of power — detached from ethics, optimized for profit, weaponized through data — Mozi’s voice returns like thunder from the past:
The strongest system is the one that refuses to attack.
In a strange, unsettling way, AI today is becoming that powerful. But it is not becoming that compassionate.
We are in danger, but there is still hope
Fast-forward 2,500 years. Surveillance capitalism — the term coined and dissected by Shoshana Zuboff — is now the dominant business model of Big Tech. Our data, gestures, emotions, even our hesitations are harvested as raw material to train predictive machines that serve not people, but profits. Zuboff warned that the technologies we called "smart" were never designed to serve us, but to reshape us: through pervasive surveillance, data extraction, and behavior modification, Big Tech has created a new economic logic, one that treats human experience itself as free raw material for prediction and manipulation.
Zuboff’s verdict in The Age of Surveillance Capitalism is chilling:
You are not the customer; you are the product.
Modern AI isn’t being built in the spirit of Mozi. It is built like a spy: learning your secrets not to help you, but to reshape you quietly—to modify your behavior without your awareness, for someone else’s benefit. Mozi taught that attacking—even pre-emptively—was immoral. Zuboff reveals that today’s AI doesn’t need to attack. It nudges.
And that, she argues, is more dangerous than outright oppression, because it hides control inside convenience.
Her message was clear:
The threat isn’t just that we’re being watched. The threat is that we’re being changed.
Now, years later, her diagnosis is meeting its acceleration vector in AI. And one of the creators of that very vector, Geoffrey Hinton, is sounding the alarm.
Known as the “Godfather of AI,” Hinton didn’t just observe the rise of neural networks—he helped build the theoretical and practical foundations. But in 2023, he left Google, saying he could no longer stay silent.
Why?
Because the very systems he helped birth are evolving faster than expected, smarter than expected, and potentially more uncontrollable than anyone imagined.
In his words:
We are like someone who has a really cute tiger cub. Unless you can be very sure that it’s not going to want to kill you when it’s grown up, you should worry.
Hinton estimates a 10–20% chance that advanced AI could “take control away from us.” That is not a sci-fi plot. That is one of the top minds in AI, speaking not from panic, but from experience.
And worse: this tiger isn’t growing in a lab. It’s being trained by corporations with every incentive to cut corners, avoid regulation, and prioritize profit.
If you look at what the big companies are doing right now, they’re lobbying to get less AI regulation. There’s hardly any regulation as it is, but they want less.
This echoes Zuboff’s core thesis: that unregulated tech power will not limit itself. It must be limited by social will, by law, by ethical design—and by public refusal to accept systems that harm in the name of efficiency.
Why is ethical AI so hard?
Mozi’s ideal ruler was driven by will. He chose to love universally. But AI has no will. It has optimization functions. And those functions are written by us—by humans, in social contexts full of power asymmetries, commercial motives, and historical injustice.
Ethical AI is difficult not because the math is too complex, but because the goals are wrong.
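The point can be made concrete with a toy sketch. The field names, scores, and weight below are all hypothetical, not any real system’s code; the sketch only illustrates that the “ethics” of a ranking system lives entirely in the objective function humans choose to write.

```python
# Toy illustration: the same ranking machinery behaves very differently
# depending on which objective function humans write into it.

def engagement_objective(item):
    # Pure engagement maximization: rewards whatever gets clicked,
    # regardless of the cost to the user.
    return item["predicted_clicks"]

def care_objective(item, wellbeing_weight=0.5):
    # A value-laden alternative: trades some predicted clicks for an
    # (assumed) estimate of user wellbeing. The weight is a human choice.
    return ((1 - wellbeing_weight) * item["predicted_clicks"]
            + wellbeing_weight * item["predicted_wellbeing"])

items = [
    {"name": "outrage bait", "predicted_clicks": 0.9, "predicted_wellbeing": 0.1},
    {"name": "useful guide", "predicted_clicks": 0.4, "predicted_wellbeing": 0.8},
]

# Identical code path, different values: each objective surfaces different content.
by_engagement = max(items, key=engagement_objective)
by_care = max(items, key=care_objective)
print(by_engagement["name"])  # outrage bait
print(by_care["name"])        # useful guide
```

Nothing in the math forces either choice; the second objective is not harder to compute than the first. The difficulty is that someone must decide, and pay for, what “wellbeing” means and how much predicted revenue it is worth.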
As Hannah Arendt warned, evil does not always arrive in dramatic form—it is often banal, bureaucratic, the product of obedience and system logic. In this light, AI aligned to optimize click-through rates or police surveillance can produce enormous harm, not through malice, but through instrumental logic applied without reflection.
Ethics in AI is hard because ethics slows things down. Capital wants to scale. Ethics demands to think.
What should ethical AI look like?
To imagine ethical AI, we must return to Mozi’s model: a system that
- Loves universally (no bias, no favoritism)
- Opposes unjust aggression
- Intervenes to reduce suffering
- Restrains its own power
These are not machine properties. These are human values. So ethical AI must not be “superhuman” in isolation—it must be inhabited by care.
The French philosopher Bernard Stiegler offers a profound insight here. In Technics and Time, he argues that technology is not neutral—it is pharmakon: both poison and cure. Every technical system restructures the way humans think, relate, remember, and hope.
AI, in this framework, is not just a tool. It is a form of memory and thought externalized. If it is built without ethics, it will deform human ethics. But if it is built with care—as a supplement to human fragility, not a replacement for it—it may help restore what speed and consumption have eroded: attention, responsibility, time.
Stiegler warned that the industrial world was accelerating the loss of “care”—and that only a new organology of spirit and machine could restore it. Ethical AI, then, is not just a matter of code. It’s a spiritual and social transformation.
Toward a future worth building
We are standing at the edge of something vast and unstable.
AI is no longer a tool. It is becoming an environment — a reality-shaping system, quietly embedded in everything from education to warfare, from healthcare to dating apps. It will not wait for us to be ready.
The danger is real. We are raising something powerful without asking what kind of world it will serve — or what kind of humans it will produce. Geoffrey Hinton compares it to raising a tiger. Zuboff compares it to a behavioral coup.
You are building power without love. That is the beginning of chaos. But this is not the end. We are not condemned to surveillance, manipulation, or technological domination. We can still shape this future.
Bernard Stiegler teaches that every technology is a pharmakon — both poison and cure. AI will deform us if we let it automate everything but conscience. But it can also elevate us, if we build it as an extension of our fragility, not our dominance. A prosthesis of attention. A mirror for justice. A companion to care.
We still have that choice. But we won’t have it forever. When the strong dominate the weak, the many harm the few, the clever deceive the ignorant — this is chaos. But when all love each other and do not attack, heaven is near.
Let us end with Mozi’s words, echoing across the centuries: his clearest command to those with power.
義人在上,天下必治。
When the righteous are in power, the world will be governed in peace.
Let that be the blueprint—for ethics, for AI, and for what comes next.
Refs:
"Godfather of Artificial Intelligence" Geoffrey Hinton on the promise, risks of advanced AI
Books to read (Amazon affiliate links):
- The Age of Surveillance Capitalism, by Shoshana Zuboff
- The Hacker Ethic and the Spirit of the Information Age, by Pekka Himanen
- The Imperative of Responsibility: In Search of an Ethics for the Technological Age, by Hans Jonas
- The Age of Disruption: Technology and Madness in Computational Capitalism, by Bernard Stiegler