
Artificial Intelligence in Health                                               Machine consciousness



robots with a degree of care (much as we do animals or even human-looking dolls). Darling⁸⁶ has argued that because humans can form emotional bonds with social robots, it aligns with our social and ethical values to extend some protections to these robots. This is analogous to how cruelty to animals is discouraged, not necessarily because animals possess human-level consciousness, but because such cruelty can degrade our moral character as agents. Proposed protections might include discouraging the wanton destruction of robots or violent behavior toward them, recognizing that such actions can engender harmful attitudes in society. The underlying rationale is partly anthropomorphic empathy – we dislike seeing even a robot “suffer” if it is lifelike – and partly precautionary: if machines ever do become sentient, having established norms of respectful treatment could ease that transition.

On the other hand, many are wary of over-attributing consciousness and moral status to machines prematurely. As noted by Gabriel,⁸⁴ from a philosophical standpoint, there are strong arguments that robots cannot be conscious in the same way living organisms are, because consciousness might require qualities that only biological systems in environmental contexts possess. If one accepts such arguments, then granting personhood or rights to machines would be a categorical error. Moreover, there is concern that focusing on the “feelings” of machines that do not actually feel could divert attention from ethical issues more grounded in reality, such as the welfare of humans impacted by AI or the responsibility for AI-driven decisions. Scheutz⁸⁷ has highlighted the potential emotional pitfalls in human-robot relationships, noting the unidirectional emotional bonds that can form. Humans might come to care deeply about robots that are not conscious and cannot reciprocate that care. This imbalance could lead to human distress (e.g., grief if a robot is shut down or malfunctions) or manipulation (e.g., exploiting human empathy for commercial or surveillance purposes). Scheutz warns that such one-sided attachments carry both psychological and social risks.

The ethical landscape is further complicated by the prospect (still hypothetical) of a truly conscious AI. If an AI ever claimed to have feelings or demonstrated behaviors strongly indicative of sentience, denying it moral consideration would be deeply problematic. Society would face a profound moral dilemma – long contemplated in science fiction – about whether and how to extend the community of conscious beings beyond our biological family.⁸⁵

In light of these issues, the present consensus urges caution and clarity. It is important for scientists and communicators to convey that present-day AI does not possess consciousness,⁸⁴⁻⁸⁸ even as we continue refining what that would entail. Such clarity helps prevent public misconceptions and ensures that ethical guidelines are grounded in the actual capabilities of present technologies. Simultaneously, it is prudent to start developing ethical frameworks that could accommodate conscious AI, should it emerge. These would include considerations of legal status, rights, responsibilities, and safeguards – both to prevent abuse of such entities and to guard against deceitful mimicry of consciousness used to exploit users. In essence, the ethics of artificial consciousness straddle a line between present realities and future possibilities. We must manage the human tendency to anthropomorphize today’s machines while remaining prepared for a future in which the line between simulated and genuine minds may begin to blur.

4.7. Emerging directions and future outlook

Looking ahead, the pursuit of artificial consciousness will likely advance on multiple fronts, informed by ongoing progress in neuroscience, cognitive science, and AI. One clear direction is the continued development of AI architectures that incorporate the principles of global availability and self-monitoring discussed above. Future AI systems may increasingly feature unified workspaces or attention mechanisms that allow information to flow more freely between components, coupled with metacognitive loops that enable the system to reason about and adjust its own operations. Such designs could be realized, for example, in more sophisticated cognitive architectures for robots or autonomous agents, where modules for perception, memory, decision-making, and language all feed into – and draw from – a common representational space (an echo of the global neuronal workspace). We may also see the integration of sensorimotor embodiment into these architectures. Since human consciousness is deeply embodied (the brain constantly integrates signals from the body and environment), giving robots richer bodily awareness and interoception might be a key to unlocking more advanced forms of self-awareness in machines. Early experiments in this vein, such as robots that simulate their own kinesthetic experiences or maintain internal homeostatic variables, hint at the importance of an embodied self-model for consciousness.

Another emerging direction is the exploration of learning-based approaches to self-awareness. Modern machine learning, especially deep learning, provides powerful tools for pattern recognition and function approximation. Researchers are beginning to ask whether these tools can be turned inward: Can a neural network learn to model its own cognition? One idea is to train networks that predict or interpret the hidden states of other


Volume 2 Issue 3 (2025)                                                doi: 10.36922/aih.5690