
Ethics in AI

We accept the challenge.

The EthicsNet Guardians’ Challenge aims to enable kinder machines.

The Challenge seeks ideas on how to create the best possible set of examples of prosocial behaviour for AI to learn from (i.e. a machine learning dataset).
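As a concrete illustration, one record in such a dataset might pair an observed social situation with a behaviour and a prosociality label. The schema below is purely a hypothetical sketch; the Challenge does not prescribe any particular format, and every field name here is an assumption.

```python
from dataclasses import dataclass, asdict


@dataclass
class ProsocialExample:
    """One hypothetical training example of prosocial behaviour.

    Field names are illustrative only; the Challenge does not
    define a schema.
    """
    situation: str      # observed social context
    behaviour: str      # response exhibited in that context
    prosocial: bool     # was the response socially acceptable?
    feedback: str = ""  # optional human comment, e.g. a correction


example = ProsocialExample(
    situation="A person drops their shopping bag in the street.",
    behaviour="Stop and help them gather the items.",
    prosocial=True,
    feedback="Helping strangers with small tasks is welcomed.",
)

print(asdict(example))
```

Records like this could be collected from human feedback and annotated collaboratively, which is the kind of design question the Challenge invites ideas on.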

The Problem

Our lives are becoming ever more intimately entwined with intelligent machines, which can learn a remarkable amount simply through observation. If we are to enjoy the benefits of AI, however, trust is essential. One aspect of trust is consistent behaviour over time; another is that the behaviour itself be acceptable, with objectionable behaviour open to correction.

Socialisation is the process by which we teach someone how to behave if they want to interact with us. We demand that certain boundaries be maintained in exchange for accepting someone into our social circle.

Throughout human history, people have taught their children and pets how to behave, and given feedback to family members and peers. We believe it is only natural that humans should interact with intelligent machines in a similar way. People tend to prefer autonomous systems whose behaviour they can influence; such systems are much easier to trust, accept, and work with.

As human societies have evolved, they have developed rules and taboos to help manage social interactions. If machines are to be socialised, they must learn these behavioural codes in order to respond appropriately in different situations.

So let’s build a dataset for kindness, shall we?

The Data-driven Approach

The term Machine Ethics describes a variety of techniques designed to help machines, such as computers and robots, make decisions that are more in line with the cultural and moral expectations of society. Machine ethics is sometimes referred to as machine morality, computational ethics, or computational morality.
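To make the data-driven framing concrete, the toy sketch below labels a candidate behaviour by its word overlap with a small set of labelled examples. This is a deliberately simplistic illustration of learning acceptability from data, not a technique used or endorsed by the Challenge, and the example sentences are invented.

```python
# Toy nearest-example classifier: label a candidate behaviour with the
# label of the most word-similar entry in a tiny labelled dataset.
# Entirely illustrative; real machine-ethics systems are far richer.

LABELLED = [
    ("offer your seat to someone who needs it", True),
    ("hold the door open for the next person", True),
    ("push past people in the queue", False),
    ("ignore someone asking for directions", False),
]


def words(text: str) -> set:
    return set(text.lower().split())


def judge(candidate: str) -> bool:
    """Return the label of the labelled example sharing the most words."""
    overlap = lambda example: len(words(candidate) & words(example))
    best_example, label = max(LABELLED, key=lambda pair: overlap(pair[0]))
    return label


print(judge("hold the lift door for someone"))  # prints: True
```

The point of the sketch is only that, given enough well-curated examples, acceptability becomes something a system can estimate from data rather than something that must be exhaustively hand-coded.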
