and lacks the capacities for commonsense reasoning and inference, which makes it inflexible and limits its range of meaningful and responsible applications.10,11 Artworks such as Machine Learning Porn, Level of Confidence, and The Normalizing Machine point to the fact that CV errors combine technical deficiencies in recognition range or accuracy with much more decisive human factors, such as cognitive flaws, prejudices, biases, and conflicting economic or political interests.

Figure 8. Mushon Zer-Aviv, The Normalizing Machine (since 2018). Installation view at the Fotomuseum Winterthur, 2019. Photograph: Lucidia Grande. Courtesy of the artist.

2.4. Ethical and epistemic limits

Human factors impact AI development, its extensive industrialization, and its sometimes rushed application in sensitive areas, such as jurisdiction, HR, insurance, or health care, by retaining or amplifying the existing cultural, economic, linguistic, ethnic, gender, and other inequities.4,68-70 However, many undesirable human-induced byproducts, such as biases, remain unanticipated during research or unregistered in testing. Instead, they are often mitigated only after being detected in the deployed AI products, which hints at the soundness of the safety culture in AI engineering and the social responsibility standards of the AI industry.

These issues are particularly conspicuous in face detection and identification due to the facial convergence of evolutionarily significant visual markers and the psychological role of the face as a representative locus of the self and identity.76 Flaws of network architectures used for facial recognition77 and biases in facial data annotation and classification have been identified by both scientists71 and artists, as exemplified by Joy Buolamwini and Timnit Gebru's Gender Shades (2018). It started in 2017 as scientific research for Buolamwini's master's thesis72 and morphed into a documentary and educational project with artistic overtones. Using a custom benchmark dataset of diverse skin types based on 1,270 images of parliamentarians from three African and three European countries, Buolamwini and Gebru assessed the accuracy of several corporate facial classifiers (Adience, IBM, Microsoft, and Face++) with respect to gender, skin type, and the gender/skin type intersection. They showed that the error rate of the tested classifiers was significantly higher for women with darker skin color and published their dataset to be used for accuracy calibration. Their findings gained public attention and influenced United States (US) policymakers and the AI industry.73
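To make the intersectional character of such an audit concrete, the following sketch (not Buolamwini and Gebru's code; the record format and group labels are illustrative assumptions) shows how a classifier's error rate can be disaggregated by gender, by skin type, and by their intersection:

```python
# Illustrative sketch of a disaggregated accuracy audit; the records are hypothetical.
from collections import defaultdict

# Each record: (true gender, predicted gender, skin-type group)
records = [
    ("female", "male", "darker"),     # a misclassification
    ("female", "female", "lighter"),
    ("male", "male", "darker"),
    ("male", "male", "lighter"),
]

def error_rate_by(records, group_of):
    """Misclassification rate per group, where group_of maps a record to its group label."""
    totals, errors = defaultdict(int), defaultdict(int)
    for true_g, pred_g, skin in records:
        g = group_of((true_g, pred_g, skin))
        totals[g] += 1
        errors[g] += int(true_g != pred_g)
    return {g: errors[g] / totals[g] for g in totals}

print(error_rate_by(records, lambda r: r[0]))          # by gender
print(error_rate_by(records, lambda r: r[2]))          # by skin type
print(error_rate_by(records, lambda r: (r[0], r[2])))  # gender/skin type intersection
```

Audits of this kind expose subgroup disparities that a single aggregate accuracy figure would hide.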
Kate Crawford and Trevor Paglen's multipart project Training Humans (2019–2020)74 followed a similar agenda. Its critique of the racial bias manifest in CV training datasets, and of the use of facial images and videos collected without consent to build these datasets, was widely credited with raising public attention to the problems in the online database ImageNet, whose more than 14 million Internet-scraped pictures have been used in ML since 2009. However, claims that part of this project, the work called ImageNet Roulette (2019),75 stirred ImageNet to excise 600,000 offensive synsets are purely conjectural.10 More importantly, it was revealed that the creators of the JAFFE, CK, and FERET datasets (featured alongside ImageNet in Training Humans) had duly obtained permissions from the depicted persons, whereas Paglen and Crawford themselves collected, reproduced, and exhibited images from these datasets without consent and made technical errors in their critical analysis of the purpose of several datasets. It is no less dubious that Paglen and Crawford found it appropriate to partner with the high fashion industry (Prada Mode Paris) to promote Training Humans, somehow overlooking that industry's forefront position in the sustainability and environmental crises and its baggage of exploitative business practices.78,79 Perhaps the critical compromises and ethical inconsistencies of this project can be recognized as tradeoffs of Paglen's position in the mainstream art world.80

A slew of artworks centers on human perceptive flaws that slip into perceptive apparatuses. For instance, Benedikt Groß and Joey Lee's online project Aerial Bold

10 ImageNet's staff had already begun addressing its problems in 2018, and their statement about the database improvements, which is cited in several writings about ImageNet Roulette, makes no mention of that project and no reference to the public criticism stirred by the art scene as the motives for removing the synsets. Synsets are the groupings of synonymous words that express the same concept. They are used in the NLP modules of CV architectures to generate image tags or descriptions.
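For readers unfamiliar with the structure, the brief sketch below (a minimal illustration assuming NLTK with its WordNet corpus downloaded; it is not part of ImageNet's own tooling) shows one synset, the synonymous words grouped under it, and the gloss that can serve as an image tag or description:

```python
# Minimal illustration of a WordNet synset, the unit that ImageNet borrows for its class labels.
# Assumes: pip install nltk, followed by nltk.download("wordnet").
from nltk.corpus import wordnet as wn

synset = wn.synset("dog.n.01")    # one concept: the domestic dog
print(synset.lemma_names())       # the synonymous words expressing that concept
print(synset.definition())        # the gloss, usable as an image tag or description
print(f"n{synset.offset():08d}")  # the WordNet offset underlying ImageNet's "n########" class identifiers
```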

