ETHICAL ISSUES OF THE DIGITAL AGE: ARTIFICIAL INTELLIGENCE AND MORALITY

The most significant issue regarding artificial intelligence, which is expected to affect all areas of human life, is whether it will act in accordance with our moral values. Although some scientists hold that artificial intelligence can behave morally, there are several reasons to think this is unlikely. These reasons can be outlined as follows:

The subjectivity of good and evil does not align with an algorithmic structure

Artificial intelligence, by its nature, lacks emotions and a conscious moral understanding; its systems make decisions through data analysis and learning algorithms. It is therefore not possible to characterize artificial intelligence as moral or immoral in itself. Implementing ethical theories in artificial intelligence nevertheless requires digitalization and data- and algorithm-oriented solutions (Kose, 2022, 74–75). For an algorithm to be constructed, its subject matter must be expressible in numerical form. If something varies inherently with the situation, it does not seem feasible to convert that variability into a numerical algorithm that accounts for every possible scenario. Since good and evil, which we accept as the foundation of morality, vary with time, circumstances, purpose, intent, and conditions, it follows that morality cannot be represented algorithmically as numerical data. Consider the act of killing a human being.

Classifying this act as “evil” from an ethical standpoint and encoding this moral judgment into an algorithm would amount to asserting that killing is wrong under all circumstances. This approach disregards the fact that killing may be deemed acceptable in situations such as self-defense or warfare. Similarly, defining the concept of “good” algorithmically risks oversimplifying and reducing a complex concept that varies with context and intent. Algorithms lack the capacity to comprehend and reflect the nuances and complexities of ethical evaluation. Ethical judgments are not based solely on logical rules; they also rely on emotional assessments, cultural contexts, and individual beliefs. Algorithms cannot fully account for these factors and can therefore lead to ethically problematic outcomes.
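The difficulty described above can be made concrete with a deliberately naive sketch. All function names and rules below are hypothetical illustrations, not a proposal for an actual moral algorithm; the point is that a context-free rule fixes one verdict per act, while real moral judgment depends on an open-ended set of circumstances.

```python
def naive_moral_judgment(act: str) -> str:
    """A context-free rule: the same act always receives the same verdict."""
    forbidden = {"killing", "stealing", "lying"}
    return "evil" if act in forbidden else "permissible"

def contextual_moral_judgment(act: str, context: dict) -> str:
    """Adding even one contextual flag changes the verdict — and the list of
    relevant flags (self-defense, wartime, intent, coercion, ...) is
    open-ended, so no finite rule set can anticipate them all."""
    if act == "killing" and context.get("self_defense"):
        return "contested"  # many ethical traditions permit killing here
    return naive_moral_judgment(act)

# The naive rule returns the same verdict regardless of circumstances:
print(naive_moral_judgment("killing"))                               # evil
print(contextual_moral_judgment("killing", {"self_defense": True}))  # contested
```

The sketch shows only the structural problem: each exception must be hand-coded, whereas the text argues that the variability of good and evil cannot be exhaustively enumerated in advance.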

The idea of a universal morality is not accepted by everyone