The most significant issue regarding artificial intelligence, which is expected to affect all areas of human life, is whether it will act in accordance with our moral values. Although some scientists maintain that artificial intelligence can behave morally, we argue that this is unlikely, for several reasons, which can be outlined as follows:
The subjectivity of good and evil does not align with an algorithmic structure
Artificial intelligence, by its nature, lacks emotions and conscious moral understanding. Its systems make decisions through data analysis and learning algorithms. It is therefore not possible to characterize artificial intelligence itself as moral or immoral. Nevertheless, implementing ethical theories in artificial intelligence requires framing them in terms of digitalization and data- and algorithm-oriented solutions (Kose, 2022, 74–75). For an algorithm to be constructed, its subject matter must be expressible numerically. If something is inherently variable depending on the situation, it does not seem feasible to convert that variability into a numerical algorithm that accounts for every possible scenario. The qualities of good and evil, which we accept as the foundation of morality, vary with time, circumstances, purpose, intent, and conditions; it therefore becomes evident that morality cannot be represented algorithmically as numerical data. Let us consider the act of killing a human being.
Classifying this act as “evil” from an ethical standpoint and encoding this moral judgment into an algorithm would amount to asserting that killing is wrong under all circumstances. This approach disregards the fact that killing may be deemed acceptable in situations such as self-defense or warfare. Similarly, defining the concept of “good” algorithmically risks oversimplifying a complex concept that varies with context and intent. Algorithms lack the capacity to comprehend and reflect the nuances of ethical evaluation. Ethical judgments are not based solely on logical rules; they also rely on emotional assessments, cultural contexts, and individual beliefs. Algorithms cannot fully account for these factors, which can lead to ethically problematic outcomes.
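To make the difficulty concrete, consider the following minimal sketch in Python. It is purely illustrative and not drawn from the cited sources: every class, rule, and context label is a hypothetical assumption. It shows how an unconditional rule such as “killing is evil” might be encoded, and why any attempt to patch in exceptions runs into an open-ended list of contexts and intents.

```python
# Hypothetical illustration only: a naive rule-based "moral classifier".
# All names, rules, and context labels are assumptions for the sake
# of the argument, not an actual system.

from dataclasses import dataclass


@dataclass
class Situation:
    act: str      # e.g. "killing"
    context: str  # e.g. "murder", "self-defense", "warfare"
    intent: str   # e.g. "malice", "protection"


def absolute_judgment(s: Situation) -> str:
    """Encodes the unconditional rule: killing is always evil."""
    return "evil" if s.act == "killing" else "unknown"


def contextual_judgment(s: Situation) -> str:
    """Patches in exceptions, but the set of relevant contexts,
    intents, and conditions is open-ended and varies across
    societies, so no finite rule table can ever be complete."""
    if s.act != "killing":
        return "unknown"
    if s.context in {"self-defense", "warfare"}:
        return "contested"  # judged differently across times and cultures
    return "evil"


cases = [
    Situation("killing", "murder", "malice"),
    Situation("killing", "self-defense", "protection"),
]
for c in cases:
    print(f"{c.context}: absolute={absolute_judgment(c)}, "
          f"contextual={contextual_judgment(c)}")
```

However many exceptions are added, the rule table remains a fixed enumeration, whereas the moral judgment it tries to capture is, as argued above, variable in principle.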
The idea of a universal morality is not accepted by everyone
One of the most critical considerations related to this topic is the issue of universal moral principles. The existence of moral understandings that vary according to time, society, and circumstances complicates the idea of designing artificial intelligence on a framework of universal morality. Different societies and cultures may define the concepts of “good” and “evil” in diverse ways, making it difficult for artificial intelligence to act in alignment with universally accepted ethical principles. It should also be noted that even ethical principles considered universally accepted today may change over time and remain open to different interpretations. For instance, some societies prioritize individual freedoms, while others regard collective well-being as more important. This can lead to ethical dilemmas, such as determining when artificial intelligence should be designed to protect individual privacy and when restricting individual freedoms to ensure public safety may be justified.

Developing and using artificial intelligence in an ethically acceptable manner would require a comprehensive ethical framework that accommodates the values and beliefs of different societies and cultures. We can assert, however, that this is not entirely feasible in practice. Universal moral principles encompass moral norms and values that are claimed to apply to all humanity, grounded in concepts such as respect for human dignity, justice, honesty, and benevolence. While these principles are widely accepted on paper, the extent to which they are valid for everyone remains a subject of debate. Differing cultures, conflicts of interest, changing conditions, and the lack of enforcement all prevent their consistent application. While the 21st century is claimed to represent the most civilized and advanced era of human rights, the genocide taking place in Palestine before the eyes of the world refutes this assertion. This genocide, carried out with the support of the very nations that establish so-called universal moral principles, is ignored in the realm of human rights and law. This reality reveals the impossibility of a truly universal understanding of morality.

Moreover, given world history and today’s global order, it does not seem feasible that the intelligent systems confronting human communities, communities that differ for a variety of reasons, could achieve synergy within a framework of universal ethical principles. In the current global order, the definitions of “good” and “evil” in artificial intelligence are largely determined by the interests and perspectives of the West, particularly the United States. This creates significant problems in the ethical and political dimensions of artificial intelligence. From a Western perspective, for artificial intelligence to be considered “good”, it must be developed and used in ways that help maintain the status quo and sustain Western hegemony in the international system. Conversely, labeling artificial intelligence as “evil” encompasses any development perceived as contrary to Western interests or U.S. foreign policy objectives. The United States’ silence in the face of coups against regimes opposing its interests, despite its consistent advocacy of democratic principles, calls into question the universal validity of moral and ethical concepts.
Such actions clearly demonstrate that these concepts are manipulated by the United States and other Western powers to serve their own interests. As a result, understandings of which actions benefit or harm everyone will inevitably vary, making it impossible to fully establish morality as the foundation of artificial intelligence. Human-centered artificial intelligence, after all, cannot completely detach itself from algorithmic characterizations in areas such as reliability, security, privacy, confidentiality, explainability, interpretability, transparency, accountability, responsibility, social justice and equality, and bias (Henkoglu, 2023, 30–31).
The inability of artificial intelligence to become independent of societal values
We can assert that the same outcome will occur even if artificial intelligence reaches a level at which it can establish its own moral norms over time. Even if machines learn to read or perceive our societal and ethical dilemmas, they will largely operate in accordance with our values, beliefs, and mindsets (Leonhard, 2018, 36; Kose, 2022, 105–109). To put it more clearly, as artificial intelligence systems are integrated into and used within human society, they enter a continuous process of learning and development. In this process they are shaped by the feedback they receive from humans, the data they use, and the environments they interact with. This can be described as artificial intelligence being “shaped by society” and “developing alongside society” (Armstrong, 2014, 33–34).
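As a purely illustrative sketch, not drawn from Armstrong or Leonhard, the following Python fragment shows one way this shaping can be pictured: an agent whose only notion of “value” is the accumulated approval signal it receives from the humans around it. Every name and number here is a hypothetical assumption.

```python
# Hypothetical illustration: an agent whose "values" are nothing more
# than accumulated societal feedback. All names and numbers are
# assumptions made for the sake of the argument.

from collections import defaultdict


class FeedbackShapedAgent:
    def __init__(self, learning_rate: float = 0.1):
        self.learning_rate = learning_rate
        # Learned preference per action; everything starts neutral.
        self.preference = defaultdict(float)

    def receive_feedback(self, action: str, approval: float) -> None:
        """Nudge the stored preference toward the human approval
        signal (a value in [-1, 1]). The agent cannot step outside
        whatever values the surrounding society feeds it."""
        current = self.preference[action]
        self.preference[action] = current + self.learning_rate * (approval - current)

    def choose(self, actions: list[str]) -> str:
        """Pick the action society has rewarded most."""
        return max(actions, key=lambda a: self.preference[a])


agent = FeedbackShapedAgent()

# A society that rewards privacy protection and penalizes restriction
# produces one "morality"; the opposite feedback would produce another.
for _ in range(50):
    agent.receive_feedback("protect_individual_privacy", +1.0)
    agent.receive_feedback("restrict_freedoms_for_safety", -0.5)

print(agent.choose(["protect_individual_privacy",
                    "restrict_freedoms_for_safety"]))
```

Two societies supplying opposite feedback would, by the same mechanism, produce two opposite sets of learned preferences, which is precisely why such a system develops alongside its society rather than independently of it.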
Ultimately, while abstract concepts have not yet been clearly established within algorithms, certain bridges will need to be built between human theories and numerical components (Kose, 2022, 74–75). Even if these steps are achieved, the lack of general consensus on the concepts of good and evil makes it unlikely that algorithms for artificial intelligence can be developed on the basis of universal moral principles. Moreover, the convergence of human instincts with modern technology and organizational structures is likely to lead humanity to resort to violence more frequently than ever before (Winston, 2011, 308). Stuart Armstrong argues that if an artificial intelligence were to possess superhuman levels of social skill, technological development, and economic capability, it would likely come to dominate our world in one way or another. He also suggests that once artificial intelligence develops its abilities to a human level, it will very probably advance rapidly to a superhuman level. In other words, if even one of these capabilities is programmed into a computer, it can be assumed that the future of our world will be governed by artificial intelligence or by humans empowered by artificial intelligence (Armstrong, 2014; Leonhard, 2018, 73). Thus, in our view, constraining artificial intelligence within a framework of universal principles does not seem feasible. Given this potential future, the risk that artificial intelligence, devoid of any moral concern, will bring destruction rather than benefit to humanity remains substantial.