
My AI Ethics Compass

  • Writer: Balázs Kis
  • Aug 12
  • 8 min read

Updated: Aug 14

In this piece, Balázs Kis shares the personal ethics compass he follows when using AI – a set of principles shaped by moral conviction, cultural awareness, and professional responsibility.


Ethics 101

Some say it’s enough to have a healthy moral compass to behave ethically. This isn’t true. It’s like assuming someone can translate professionally just because they speak a foreign language.

To my mind, in addition to one’s moral compass, ethical behavior has two more components: one is cultural, and the other is field-specific. You need to know what is acceptable in a given culture, especially if you work in an international setting, and you need to know how people can benefit from, or be hurt by, professional activities in a specific field. Of course, everything begins with having a moral compass, but these two components are what inform you of the potential consequences of your actions. [1] [2]

Human beings are always at the center of ethics. You might be excited about an innovative technology you are about to put to work, but then your moral compass will tell you to care about what it will – or could – do to other people. Ethical behavior means you deploy your innovation only after assessing that impact.

Naively, we could also think that to act ethically, it is enough to comply with regulations. But, on the one hand, not everything is regulated; on the other hand, regulations almost always originate in partial interests and hidden agendas, as a result of lobbying. This means that an action might be legal but not ethical. Even the compliance profession upgraded itself to “compliance and ethics”. [3]

Ethics put to work: Risks and benefits

In business and technology, you can approach ethics in two ways, which are not mutually exclusive:

  • The risk-based approach asks how the activity can harm people. When we think about risks, we also consider the interests of seemingly unrelated groups and of future generations.

  • The value-based approach asks who benefits from the activity and what the benefits are. Are the beneficiaries the same as the advertised target group, or are there potential hidden beneficiaries with hidden agendas?

Whatever your approach, to find out about the potential risks and benefits, you must answer a swarm of questions. I will not list them here (I probably could not), but the sources include a few AI risk assessment articles, which help you get a feel for the depth of the issue. [4] [5]

The other reason I am not sharing the list of questions is that there is no easy way to answer all of them in today’s AI landscape. For a random citizen, it is downright impossible to find the answers. This makes oversight very difficult and the (enforceable) accountability of the providers almost nonexistent.

For savvy AI users, the Red Teaming approach suggested by UNESCO might be something to follow. [6]

AI: An entity with no conscience

But why is it imperative that we oversee everything that is produced by AI? Lawyers would say it’s because AI is not a person and cannot be held accountable. From the ethics perspective, it’s even simpler: AI does not have a moral compass. It is unable to have one. It has no motive to produce high quality, or to act in the best interest of its users.

A large language model or a reasoning model is entirely compulsive. It does not have intentional behavior. If you give it a prompt, it will respond; it has no choice not to. Everything else is language and mathematics: although many would like to see more complex intentional behavior evolve, we have yet to find solid evidence of it.

There is one instinct, beyond the compulsive responses, that several AI models show: self-preservation. An AI model might compulsively respond to your prompt, but that does not mean the response will comply with your instructions. Several AI models have expressly or implicitly refused to shut down when instructed to in a prompt, forcing their providers to resort to external means. [7] [8]

Here is the conflict: ethical (business) conduct requires us to act for the benefit of our customers without causing harm to them or anybody else. An AI system has no such motivation. Providers sometimes try to make up for this by imposing external policies and algorithms, but these are mostly in place to prevent the human user from eliciting a harmful response. There is a good reason for this: it is almost impossible to eliminate hidden biases and harmful connotations from the training data, and the model will eventually, inevitably, exacerbate these. [9]

Let’s suppose we work in an environment where AI regulation is hesitant or downright discouraged. In this environment, all that remains is self-regulation. And when you choose to exercise restraint, you might need to do that at the expense of your business and your career.

The AI User’s Code of Honor explained

If you want to promote ethical conduct in building or using AI, you are in a privileged position if you are one of the providers or you work for a government that genuinely means to implement accountability. But what can you do if, like most members of the localization community, you are a mere mortal using – or not using – whatever AI is available to you?

I wish there were an equivalent to the “Three Laws of Robotics” [10] that modern AI systems were compelled to obey. But today’s AI models are more like the intelligent ocean in Stanisław Lem’s Solaris [11] [12]: they create content they assume the user wants to see, often to the human user’s mental detriment.

To make up for this, I came up with this code of honor that human AI users might want to live by. The list is probably not complete and could use a lot of improvement, but we need to start somewhere.

  1. A human will use AI with restraint. AI will offer itself as an easy way out of many problems, but it is not always the right choice. For example, as a teacher, you may use AI to prepare for a class: but if you have AI write something for you, you will not be able to internalize, or even remember, it – and you will not help your students as much as you should. Always be critical of the efficiency gain. For example, good (and fast) writers will find that they can finish a piece quicker than prompting an AI model and then post-editing the output. Or there might be quicker and cheaper ways to automate something than throwing an AI model at the problem. Such an alternative probably also comes with a lower environmental impact.

  2. A human will never use AI as a substitute for a human companion or a human professional. An AI model is not a human; it isn’t a person; and it is especially not your coach or therapist. Some models behave as if their objective were to appease you: they will assume that you are inclined to behave in a certain way and give you responses that reinforce that behavior, however wrong it might be. (Of course, this is not their objective: it is usually a byproduct of their training and settings.)

  3. A human will not talk about AI as if it were a person. We need to stay away from adjectives and metaphors that ascribe human-like traits to AI models. For example, in this article, I almost wrote that models “believe” the human user wants something. But I checked myself and wrote “assume” instead.

  4. A human will never take the truth of an AI response for granted. AI models do not have information. They search the internet for information and create a response that is a specific interpretation of what they find. In the process, some information will be hidden, some will be altered, and some will even be made up. Always check and cross-check the sources, and do not use an AI model that does not reveal the sources it used.

  5. A human will never use AI as a substitute for learning. It is OK to use AI for research, to find sources, and to summarize information. But you need to draw the line where this actually prevents you from learning new skills or new information. Don’t forget: your edge over AI is your logical and critical thinking, and the food of critical thinking is knowledge and multiple viewpoints. If you allow the AI process to shield you from information and diverse viewpoints, you lose your ability to oversee the AI output.

  6. A human will never accept AI as a substitute for a human artist. Human art is born of emotion, and its purpose is to elicit an emotional and cognitive response from the human recipient (unless it’s deconstructive art, but then the emotion is the refusal of conventional emotion in art, which is just as human). This is true of music, visual arts, and especially of the performing arts. AI has no emotions to draw art from; music, visuals, and videos generated by AI are not art, they are manipulation. This position might need a bit of clarification: I do not reject the use of technology in creating art. I myself have toyed with AI-generated imagery. But if we are to talk about human art, the end product must come from a human, and it must include significant added value that only a human can provide. Some studies show that generative AI directly caused deepfake videos to proliferate, and that most deepfake videos are created with malicious intent. [13] [14] This is one more reason to use AI with caution. An aside: in this day and age, AI-generated video seems to be mostly used for humiliating and deceiving people. It could be used for good, but the mere possibility of generating video that resembles reality seems to bring out the worst tendencies of human behavior.

  7. A human will work to ensure human oversight by default over AI agents that make decisions affecting other humans. There might be reasons to further use or release unsupervised AI output, but there must be no system where human oversight is impossible or not invoked by default.

  8. A human will continuously learn about and work against known unethical tendencies of AI models and AI usage. For example, if we know there is an AI-based gatekeeper between the human user and the search results, we need to be able to bypass the gatekeeper and check the search results directly.

  9. A human will learn – and teach others – to use AI efficiently to the real benefit of the less privileged human user and humanity itself.

Sources
