a machine feels conscious, but they are likely necessary conditions for any machine that could eventually lay claim to subjective awareness.81

4.5. Limitations of present AI: The absence of genuine consciousness

Despite significant advances in AI, the prevailing scientific consensus holds that no present machine or AI system possesses consciousness in the full sense.84-88 Today’s AI, including advanced neural networks and social robots, operates firmly within the bounds of the weak AI paradigm. These systems excel at specific tasks and can even display adaptive or context-aware behavior, but there is no credible evidence that any of them possess a subjective point of view or true self-awareness. Even systems that incorporate elements of global availability or rudimentary self-monitoring implement these features in relatively narrow ways (for example, a program might monitor its performance on a task and adjust parameters, but this is far from the rich, self-reflective awareness characteristic of human consciousness). Phenomenal consciousness in machines remains, at present, a speculative topic rather than an observed reality. We cannot peer into a deep learning model and find a flicker of sentience; at best, we find complex statistical patterns and representations shaped by training data.

It is instructive to consider why present AI falls short of consciousness. One obvious limitation is the lack of an integrated self-model in most AI. Human consciousness involves a sense of self that is continuous in time, situated in a body, and emotionally colored – features that mainstream AI does not possess. Another limitation is the absence of unified, flexible memory and attention akin to what the brain employs. While deep learning networks have impressive pattern recognition, they typically lack an architecture that integrates disparate knowledge on the fly, as a global workspace would. In addition, AI systems today lack intrinsic motivation or genuine autonomy in the sense that conscious beings exhibit; they pursue goals defined by programmers or derived from training data, without an inner life of desires or will. Finally, the evaluation problem looms large: Even if an AI were conscious, how would we truly know? There is no agreed-upon test for machine consciousness, and simple behavioral criteria (like the Turing test) are inadequate, as they can be passed through clever simulation without real awareness.85 This epistemic gap leads us to assume the absence of consciousness until proven otherwise. As some scholars note, the absence of any observable indicator of consciousness in machines is taken as confirmation that present AIs simply are not conscious. This point is rarely debated within the AI community. Indeed, discussions of AI ethics often neglect the issue of consciousness entirely, focusing instead on intelligence and autonomy. Hildt81 points out that we ought to engage more with the topic of artificial consciousness – and, just as importantly, with the implications of its present absence. Acknowledging that our most advanced creations remain essentially mindless (in the phenomenal sense) is important to keeping expectations grounded and shaping how we treat these systems.

A significant phenomenon in this context is anthropomorphism – the human tendency to attribute human-like qualities, including consciousness, to machines. This is evident in the way people interact with social robots and virtual assistants. For example, humanoid robots with facial expressions or voice-based AIs with personality often elicit feelings of social presence; we may talk to them as if they understand or even feel. Such anthropomorphic projections can obscure the reality that, despite surface appearances, these systems lack inner experiences. Instances like the robot Sophia being granted citizenship, or users feeling emotional attachment to AI companions, illustrate how far our intuitions can outpace scientific understanding. Scholars caution that this gap between appearance and reality can be problematic. We risk misleading ourselves – or the public – about what AI is actually doing. As a safeguard, some ethicists argue that we should consistently remind ourselves that present robots are not conscious.84,88 They are complex artifacts, not entities with feelings, and we should avoid prematurely conferring moral or legal status that is reserved for sentient beings.

4.6. Ethical and societal implications of artificial consciousness

Even though artificial consciousness remains unachieved, the very pursuit of it – and the public’s tendency to ascribe minds to machines – raises important ethical questions. If we eventually create a machine that exhibits advanced self-awareness or other hallmarks of consciousness, how should we treat it? Conversely, how should we treat today’s unconscious AI systems, given that people often respond to them as if they were alive? These issues are already the subject of considerable debate in technology ethics and law.

On one hand, some thinkers like Gunkel have explored the notion of “robot rights”: The idea that sufficiently advanced AI or robots might merit certain moral or legal protections. Intriguingly, arguments for robot rights have been made even in the absence of robot consciousness. For example, based on the way humans empathize with humanoid machines or on the societal value of fostering empathy, a case is made for treating

