AI ethics

Why so serious? Why do we need AI ethics?

In my last blog post, I hinted that I would talk about AI ethics this time. So yeah, it’s AI ethics again. Lately, it feels like you can’t read an article about Artificial Intelligence without stumbling into a heavy discussion about “AI ethics.” The term is everywhere, and it’s always spoken about with a sense of gravity and urgency. It got me thinking…

Actually, what is AI ethics? Why are people specifically highlighting its importance now? When we started using computers, mobile phones, or the internet, did we have “computer ethics,” “mobile phone ethics,” or “internet ethics”? For other technologies that we’ve adopted widely, did we place such a heavy emphasis on their ethical implications from the get-go?

These questions seem fair. After all, isn’t AI just another tool? Why the special treatment?

After digging into it, I found the answer.

And the answer is YES.

We’ve Always Had Ethics for New Tech

The call for ethical guidelines is not new. Every time a transformative technology emerges, we humans have to pause and think about how to use it responsibly. It’s a feature, not a bug, of societal progress.

Based on the research I did, we absolutely did have “computer ethics.” The field has been around for decades, tackling issues like data privacy, hacking, and intellectual property. The Association for Computing Machinery (ACM), a leading professional organization, published its first code of professional conduct way back in 1972! And I even found a “Ten Commandments of Computer Ethics”, published by the Computer Ethics Institute in 1992, where the first rule is:

Thou Shalt Not Use A Computer To Harm Other People

The Ten Commandments of Computer Ethics

Then came the internet, and with it, “cyberethics.” This new field grappled with challenges unique to a globally connected world: freedom of speech versus harmful content, online anonymity, and the digital divide. These were, and still are, serious ethical debates.

But here comes the crucial distinction, the one that explains why the conversation around AI feels so different.

The Key Difference: A Tool vs. An Agent

At the end of the day, a computer or the internet is fundamentally a tool. A computer follows the instructions we give it, and the internet is a medium for communication and information. The ethical frameworks we built around them were, in essence, still focused on the human using the tool.

We don’t blame a computer for giving you a virus; we condemn the programmer who wrote the malicious code, or we get mad at the careless friend who sent it to you. The responsibility rests with the person.

AI is different. It’s crossing the line from being a simple tool to becoming an autonomous agent.

The Perfect Analogy: A Sickle vs. A Child

To understand this shift, I came up with an analogy that makes it crystal clear.

The Sickle: An Extension of Our Will

Imagine farmers from 500 years ago. They designed and used tools like sickles. Now, could someone use a sickle to harm another person? Absolutely. But nobody worried about the “ethics of the sickle.”

Why? Because a sickle is just an inanimate object. It has no will and no ability to make a decision. It literally cannot move unless someone moves it, let alone commit a murder by itself. The moral responsibility lies 100% with the person holding it.

The Child: An Agent Who Makes Decisions

Now, consider the same farmers, or even their great-great-great-grandfathers. While they didn’t worry about the ethics of the new farming tools they made, they were already deeply concerned with establishing ethics for something else they created: other humans, namely their own children. Because children can decide and act on their own, they had to be taught right from wrong, fairness, and responsibility.

Why the different approach? Because a child is not a tool. A child is an agent who can make their own decisions, for good or for ill. You can’t give them a direct command for every possible situation they will face in life; instead, you instill a moral compass to guide their autonomous actions. That’s why the commandment is “Thou shalt not kill”, not “Thou shalt not use a sickle to kill”: ethics govern the agent who acts, not the tool.

I think now you can see why AI ethics is so critical.

For the first time, we are building non-human agents and giving them some degree of autonomy and the power to make decisions on their own. We are asking AI to drive our cars, diagnose diseases, approve our loans, and recommend who gets hired. In the process, it will make countless sub-decisions without our direct, real-time input. Just as with a child, we can’t possibly write a rule for every single decision the AI agent makes. We have to try to build in a set of guiding principles, an ethical framework, to ensure they act in ways that are safe, fair, and aligned with human values.

The Uncharted Territory of AI Ethics

This transition from tool to agent is what makes AI ethics so uniquely challenging and why the conversation is so serious. We are in uncharted territory, facing questions we’ve never had to answer at this scale:

  • Consciousness and Intent: A human child develops consciousness, emotions, empathy, and a genuine understanding of why an action is wrong. An AI currently does not. It follows rules and optimizes goals without any subjective experience or moral understanding. It doesn’t “feel” fairness; it can only be trained on data that we label as fair.
  • Accountability: If a self-driving car causes an accident, who is responsible? The owner? The manufacturer? The software developer? A child eventually grows into an adult and becomes accountable for their own actions, but an AI doesn’t. Our legal and moral frameworks for accountability are not yet equipped for this complicated, even philosophical, question.
  • The “Black Box” Problem: Some of the most powerful AI systems are so complex that even their creators don’t fully understand why and how they make a particular decision. How can we trust or correct an agent whose reasoning is a mystery?
  • Scale and Speed: A single human’s bad decisions, while potentially tragic, are limited in their immediate scale. A single biased algorithm used globally in hiring or loan applications can negatively impact millions of people instantly, reinforcing societal inequalities at a massive scale (see the toy sketch after this list).
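To make that last point about scale concrete, here is a deliberately oversimplified Python sketch. The groups, hire rates, and “model” are all invented for illustration; no real hiring system works exactly like this. The idea is simply that a rule learned from skewed history denies one group across the board, and applying it to a hundred thousand applicants takes a fraction of a second:

```python
import random

random.seed(0)

# Biased history (made up): group A was hired 70% of the time,
# group B only 20%, regardless of each candidate's qualifications.
history = [(g, random.random() < (0.7 if g == "A" else 0.2))
           for g in "AB" for _ in range(1000)]

# "Training": the model just memorizes each group's historical hire rate.
rate = {g: sum(hired for grp, hired in history if grp == g) / 1000
        for g in "AB"}

# The learned rule approves a group only if its past rate clears 50%.
def model(group):
    return rate[group] > 0.5

# Applied at scale, the one biased rule hits everyone in group B identically.
applicants = [random.choice("AB") for _ in range(100_000)]
print("A approved:", sum(model(g) for g in applicants if g == "A"))
print("B approved:", sum(model(g) for g in applicants if g == "B"))  # ~0
```

The point isn’t the code itself; it’s that the unfairness baked in during training is replayed automatically, at machine speed, for every single person the system touches.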

So, the next time you hear someone talking about the importance of AI ethics, you’ll know it’s not just hype. It’s a necessary, urgent conversation about how we build and raise our new, powerful, and autonomous creations to ensure they help build a better world, not a more dangerous one.

Side note: While I was writing this article, I couldn’t stop thinking about what we are creating. As my analogy shows, I inevitably end up comparing AI with a human. So, how similar is an AI to a human? That is a big question to ponder.

About the author

Antony Wong, a tech enthusiast who has a lot to say but is also an introvert at the same time.
