Digital Trust and Artificial Intelligence: Ethical Standards and Risk

Theo Anderson, Ewa Stawicka

Research output: Contribution to Book/Report › Chapter › peer-review

Abstract

Artificial intelligence (AI) will change many companies and industries, yet the pace at which AI is adopted in practice is held back by a lack of trust. Traditionally, trust was placed in family and friends and, in an extended form, in organizations or professional groups. Stakeholder choices rest on human ethical standards and on elements such as family, culture, religion, and community. Creating a framework for the use of AI and for managing its risks may seem complicated, but the process resembles the creation of the controls, rules, and processes that already exist for humans. The risks of AI technology depend on how it is used; the technology, however, remains under human control. The aim of this study is to assess stakeholder confidence in AI in relation to ethical standards and the degree of risk, drawing on information and data from the literature and on the results of the authors' own research. AI has been found to cause unease among some users. Only restrictive guidelines and a high level of ethical standards can change stakeholder attitudes and build trust in AI.
Original language: English
Title of host publication: Trust, Digital Business and Technology: Issues and Challenges
Editors: Joanna Paliszkiewicz, José Luis Guerrero Cusumano, Jerzy Goluchowski
Place of publication: United States
Publisher: Routledge
Chapter: 12
Pages: 144-155
Number of pages: 12
ISBN (Electronic): 9781003266495
ISBN (Print): 9781032210469
DOIs
Publication status: Published - 30 Sept 2022

