Ethics in AI
We accept the challenge.
The EthicsNet Guardians’ Challenge aims to enable kinder machines.
The Challenge seeks ideas on how to create the best possible set of examples of prosocial behavior for AI to learn from (i.e. a machine-learning dataset).
Our interactions with intelligent machines are becoming ever more intimately entwined. Machines can learn a remarkable amount simply through observation. However, if we are to enjoy the benefits of AI, trust is essential. One aspect of trust is consistent behavior over time. That behavior must also be acceptable to us, and objectionable behavior must be correctable into something more preferable.
Socialization is the process by which we teach someone how they should behave if they want to interact with us. We demand that certain boundaries be maintained in exchange for accepting someone into our social circle.
For all of human history, people have taught their children and pets how to behave, and given feedback to family members and peers. We believe it is only natural that humans should interact with intelligent machines in a similar way. People tend to prefer having influence over how autonomous systems behave; such systems are much easier to trust, accept, and work with.