guide whose answers should be examined critically. On the other hand, combining ChatGPT with other tools, such as virtual simulators, can be extremely beneficial for medical students.40 However, it is during this time that ChatGPT should be thoroughly tested against possible errors that can be made in medical education processes. It is also worth emphasizing that the long-term impact of AI tools, including ChatGPT, on learning outcomes, especially in the field of medicine, should be examined.41

On the other hand, an interesting study analyzed medical students’ readiness for AI-based solutions.42 The findings revealed that students who believed AI technologies would contribute to their profession and reduce workload outnumbered those who held a different view. In addition, a study proposed a Persian version of the Medical AI Readiness Scale43 to evaluate the readiness of medical students to work with AI, including factors such as cognition, ability, vision, and ethics.

4. Ethical risks in the implementation of AI in medical education

Each of the four examples of AI’s significant role in medicine and medical education offers great hope for rapid improvements in medical practice. However, these advancements come with ethical risks that, if not addressed, could result in a curse of malpractice and bad outcomes for educationalists and their students as well as for practitioners. There has been a discussion regarding AI and ethics for many years, as illustrated by Dennett’s vision of a novel-writing machine5 and the dilemmas it raises about the notion of self. Yet, it is only recently that a focus on ethical risks, AI, and medical education has appeared,44 no doubt in tandem with the rapid development of technology. Indeed, on the general level, as noted above, Weidener and Fischer13 demonstrated that there is a lack of discussion concerning AI and medical education overall, even though, as Civaner et al.14 pointed out, there is a recognition amongst many medical students that AI needs to play a role in medical education. This shows that there is a student (or consumer) demand for AI in educational curricula and a need for educators to fill that gap. There is thus a clear requirement for AI to be integrated into medical education programs, but reasons can be advanced for the slow pace of adoption. For example, such programs are extensive and well-established, and there may be resistance from course designers and managers, educators, and other stakeholders. On the other hand, the integration of AI into medical education is likely inevitable,45 paving the way for serious disruption and commercial opportunities. Indeed, it is necessary since a lack of integration will constitute a further type of broad ethical risk: if students are not equipped with AI knowledge, they will be less able to cope with the various and detailed types of ethical risk as practitioners. However, advances are being made even while calls for a faster pace of change are being made.46,47 An outline model for the application of AI in medical education is provided by Zarei et al.,48 along with an assessment of challenges such as the current lack of infrastructure. Krive et al.49 designed and tested a model comprising a modular 4-week AI course, which proved to be successful.

As a specific area, radiology, for example, depends heavily on data.50-52 It is immediately apparent that the successful manipulation of information-intensive radiological data using AI requires significant computational resources. This raises concerns about energy use, costs, and environmental impact, where developing countries may be at a disadvantage, thus increasing ethical risk for them.

Another extremely important issue concerns how the accuracy of AI predictions is to be evaluated using various types of metrics.53 This is connected with algorithmic fairness:54 If one method of evaluation produces a different metric than another, the outcome could be unfair to one cohort or another, which is an ethical issue. The most popular metrics in the field of medicine are the Dice coefficient and accuracy.4,55 However, there is no accepted standardization for the assessment of such metrics in medicine.
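
To make the evaluation issue concrete, the following Python sketch (illustrative only, and not drawn from the cited studies) shows how the Dice coefficient and accuracy are computed for binary segmentation masks and why a single overall number can hide cohort-level differences: on synthetic data in which one hypothetical cohort has much smaller lesions, pixel-wise accuracy is nearly identical for both cohorts while the Dice coefficient diverges sharply.

# Illustrative sketch on synthetic data: Dice coefficient and accuracy for
# binary segmentation masks, reported per hypothetical demographic cohort.
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2*|P & T| / (|P| + |T|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, truth).sum() / denom

def accuracy(pred, truth):
    """Fraction of pixels labelled correctly (foreground and background)."""
    return float((pred.astype(bool) == truth.astype(bool)).mean())

rng = np.random.default_rng(0)

def synthetic_case(lesion_fraction, error_rate):
    """Hypothetical ground-truth mask and a prediction corrupted by random flips."""
    truth = rng.random((128, 128)) < lesion_fraction
    flip = rng.random((128, 128)) < error_rate
    return np.logical_xor(truth, flip), truth

# Two hypothetical cohorts with the same pixel-wise error rate: cohort_B has
# much smaller lesions, so the same errors hurt its Dice score far more.
cohorts = {"cohort_A": synthetic_case(0.10, 0.01),
           "cohort_B": synthetic_case(0.01, 0.01)}

for name, (pred, truth) in cohorts.items():
    print(f"{name}: accuracy={accuracy(pred, truth):.3f}, "
          f"Dice={dice_coefficient(pred, truth):.3f}")

Which metric is reported, and whether it is broken down by cohort at all, are precisely the evaluation conventions that remain unstandardized.
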
Turning to the issue of data biases, the extensive account provided by Ueda et al.,56 broadly separated into machine- and human-originated biases, and the discussion of biases identified by Pregowska and Perkins (passim) prompt the need for two underlying dimensions of bias to be highlighted in addition. The first is intentional versus unintentional. The introduction of bias into a dataset (such as the over-representation of one demographic cohort at the expense of another, or incorrect model and interpretation bias57) may be intentional on the part of the human agent or unintentional58 (due to accident, neglect, human error, or subconscious attitude).
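
As a minimal illustration of this first dimension, the sketch below (hypothetical field names and data; an equal share across groups is used as a deliberately simple reference, whereas in practice the target population would define it) flags the over-representation of one demographic cohort in a training set. Such a check cannot tell whether an imbalance is intentional or unintentional, but it makes the imbalance visible before a model is trained.

# Minimal sketch with hypothetical metadata: flag groups whose share of a
# training dataset deviates strongly from an equal share across groups.
from collections import Counter

def representation_report(records, group_key, tolerance=0.5):
    """Return {group: (share, flagged)}; flagged when the group's share
    deviates from 1/n_groups by more than `tolerance` of that equal share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    equal_share = 1.0 / len(counts)
    report = {}
    for group, n in counts.items():
        share = n / total
        flagged = abs(share - equal_share) > tolerance * equal_share
        report[group] = (share, flagged)
    return report

# Hypothetical metadata for a radiology training set.
records = [{"sex": "F"}] * 180 + [{"sex": "M"}] * 820
for group, (share, flagged) in representation_report(records, "sex").items():
    print(f"{group}: {share:.0%}{'  <- over/under-represented' if flagged else ''}")

In a real pipeline, the reference distribution and the tolerance would themselves need to be justified and documented, which is part of the transparency discussed below.
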
Once intentional bias has been identified, the question of motivation arises as a second underlying dimension. Bias can be introduced into dataset selection, and datasets can be manipulated due to social and political attitudes in some societies. The profit motive may also raise issues of control, ownership, deployment, and use of data, and even falsification.59 The increasing role of AI, along with its ability to create and amplify biases or distort information – complicated by the need to share radiological data between institutions and across borders60 – highlights the importance of transparently identifying agents within the system and their access to AI tools. This transparency should be integrated into medical education from the outset.61 In addition, convincing practitioners of the significant benefits AI offers to