Understanding responsible artificial intelligence: a case study on the considerations to be made and how they can be addressed
Master thesis
Permanent link: http://hdl.handle.net/11250/2611612
Publication date: 2019
Collection: Master Thesis [4380]
Abstract
The aim of this thesis is to contribute new insights into the concept of responsible artificial
intelligence (RAI) by answering the following main research question:
How can we understand responsible artificial intelligence?
We stand at the precipice of a new era with rapid advancements in artificial intelligence (AI).
Though AI is already deeply embedded in our society and almost every industry, companies
might not know how to take a responsible approach to AI. The area of RAI has gained limited
attention in academia and little research has been conducted on the concept. The purpose of
our master thesis has therefore been to shed light on the concept of RAI, including which
considerations should be made and how they can be addressed when working toward
RAI. To do so, we have conducted a single case study on Equinor and collected qualitative
data through semi-structured interviews with employees.
We find that RAI means taking a thorough and holistic approach to using AI
responsibly; it entails acknowledging the importance of humans when using AI, and it
demands an understanding of both responsibility and AI. This understanding of RAI can be
expressed in two main findings: (i) humans are more important than expected, and (ii)
understanding responsibility and AI is a prerequisite. First, acknowledging the importance of
humans when using AI involves holding humans responsible for the AI, entrusting humans to
ensure that ethical principles are maintained, placing humans in control of AI, utilizing the
knowledge and experience of the employees rather than simply replacing them with AI, and
designing the AI in a way that facilitates humans doing what they do best and being able to
fulfill their responsibilities. Second, an understanding of responsibility that facilitates RAI is
the notion that responsibility entails doing more than what is required or expected. An
understanding of AI is needed because it makes it possible to mitigate the potential negative
outcomes of AI and to ensure transparency, and thereby trust in and acceptance of AI. This
understanding is also at the core of an RAI strategy.
Based on our findings, we believe that when a company understands and acts in accordance
with these insights, it has achieved responsible artificial intelligence.