AI-generated vs. Human Artworks. A Perception Bias Towards Artificial Intelligence?

Martin Ragot, Nicolas Martin
IRT b<>com
1219, Av des Champs Blancs
35510 Cesson-Sévigné, France
[email protected]
[email protected]

Salomé Cojean
Laboratory of Psychology LPPL (EA 4638)
University of Angers
11, boulevard Lavoisier
Angers, 49045, France
[email protected]

Abstract
Via generative adversarial networks (GANs), artificial intelligence (AI) has influenced many areas, especially the artistic field, a symbol of a human task. In human-computer interaction (HCI) studies, perception biases against AI, machines, or computers are generally cited. However, experimental evidence is still lacking. This paper presents a wide-scale experiment in which 565 participants are asked to evaluate paintings (which were created by humans or AI) on four dimensions: liking, perceived beauty, novelty, and meaning. A priming effect is evaluated using two between-subject conditions: artworks presented as created by an AI, and artworks presented as created by a human artist. Finally, the paintings perceived as being drawn by humans are evaluated significantly more highly than those perceived as being made by AI. Thus, using such a methodology and sample in an unprecedented way, the results show a negative bias of perception towards AI and a preference bias towards human systems.

Author Keywords
Arts; Artificial Intelligence; AI; Authorship; Bias; CAN; Computational Creativity; GAN; Painting.

CCS Concepts
• Human-centered computing~Empirical studies in HCI; User studies.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.
CHI'20 Extended Abstracts, April 25–30, 2020, Honolulu, HI, USA.
© 2020 Copyright is held by the owner/author(s).
ACM ISBN 978-1-4503-6819-3/20/04.
DOI: https://doi.org/10.1145/3334480.3382892
Introduction
For many years, technological progress has wrought profound changes in the creative industry, as in the fields of music (e.g., [18, 19]), parametric design (e.g., [41]), generative fashion design (e.g., [34]), and paintings (e.g., [23]). This has given rise to the emergence of new research fields: computational aesthetics [31], attractiveness computing (see Chu [14] for a literature review), Creative Adversarial Networks (CAN) [23], and computational creativity [16]. Computational creativity is a generic field of artificial intelligence (AI) that studies both artificial and human creativity [56] and has grown considerably. Indeed, in recent years, AI and especially its subfield machine learning (ML) have gained popularity [38] thanks to research breakthroughs [37]. In 2014, Goodfellow and collaborators [27] proposed a new ML paradigm to generate synthetic data, called a generative adversarial network (GAN)¹. Artists have appropriated these models in recent years to the point that they have become a new art market [20]. Thus, a first art painting generated by a GAN was sold for $432,500 by Obvious [14]². With the increase in quality and ubiquity of creative systems such as GANs, studying these creativity support systems seems to be necessary [17, 29]. Evaluating the relevance of recently produced artistic works consists of assessing the perception of the public towards these artworks. To improve the algorithms, one approach is to try to understand user perceptions. There is also a need to understand the acceptance of works produced by AI [16]. Currently, this perception seems to be rather negative. Indeed, Gaut [26] has postulated a "negative value" towards machines or computers. Colton [15] is also concerned about the rejection of creative computers. Nevertheless, the evidence for such a negative bias remains unclear. It seems crucial to study human perceptions of these AI-generated artworks. For this purpose, different methodologies exist. First, in many studies, a modified Turing test (TT) is used to analyze humans' capacity to distinguish human or AI artworks (e.g., [23, 45, 51]). However, the TT seems insufficient. Indeed, this type of test does not make it possible to study audience perceptions. This method has also been criticized for encouraging imitation and pastiches of existing human artworks [46].

¹ A generative adversarial network (GAN) [25] consists of jointly training two separate models: a generator and a discriminator. The generator is trained to generate the most realistic data possible (e.g., images, text, signal) and to fool the discriminator. The discriminator is trained to distinguish real data (i.e., data from the training dataset) from fake data (i.e., data from the generator).
² https://obvious-art.com/

In parallel, to the best of our knowledge, few researchers have studied the differences in public perceptions between AI-generated and human artworks to understand the potential human bias. In their survey, Moffat and Kelly [40] found a significant bias against computer-generated music. Other researchers have tried to reproduce these results with a small sample but without success [25, 42, 44]. In addition, Hong [32] has analyzed the differences in perceptions of computer art within focus groups: the same piece of art is more likely to be considered "art" when it is perceived to be of human origin than of AI origin. In the field of art and paintings, Elgammal [23] has found conflicting results, with a positive bias towards machines. Despite this, the bias against computers is widespread and often cited [36]: "This suggests that there is much more to investigate about our relation with artificially generated art" ([20], p. 159).

With increasing exposure to "more high-quality computer generated artifacts," the following research questions have become "more pressing" ([17], p. 276). Art is a symbol of an activity long regarded as a human task. Thus, AI raises important questions in terms of acceptability and human-and-machine relationships, which must be studied in the field of HCI.
Can people distinguish computer-generated paintings? Do people prefer systems (or paintings) created by humans? Do people have a negative bias towards AI? Finally, we seek to learn how assumptions about the identity of the painters (AI or human) of the same artworks influence their evaluation. This paper presents a large-scale experiment to assess the existence of a potential human bias towards works generated by AI by using the technique of the priming effect (e.g., [3, 55]). The priming effect represents the influence of a first stimulus (the primer) on the processing of a subsequent stimulus (the target). Thus, our study evaluates the influence of the exposure to the primer (i.e., the declared identity of the painter, AI vs. human) on the perception of artistic targets (i.e., AI-generated paintings vs. paintings drawn by humans).

Method

Participants
565 participants (M = 32.80 years; S.D. = 9.91; 234 females and 331 males) were involved. The participants were recruited without specific criteria on Amazon Mechanical Turk (AMT) in order to select more diverse profiles and obtain a larger sample [11]. The reward paid to each worker was $0.75.

Measures
As recommended by Amirshahi [1] for subjective tests, several properties related to the paintings were studied. Inspired by the works of Berlyne [5], Jordanous [33], and Elgammal [23], four dimensions were evaluated: liking ("I like this painting"), beauty ("This painting looks beautiful"), novelty ("This painting seems novel"), and meaning ("I perceive the meaning of the painting"). To evaluate these subjective dimensions, questionnaires with Likert scales are used in this survey. The optimal number of rating bars seems to be seven [13]. For these four dimensions, participants were asked to indicate how much they agreed according to seven-point Likert scales (1 = totally disagree; 7 = totally agree). Each Likert scale had antagonistic anchors with no labels in between.

Material
Following Pasquier [44], the selection of pieces of art was influenced by their similarity in order to conceal their authorship. For this purpose, 40 impressionist-style paintings made by Piet Mondrian, Claude Monet, Robbie Barrat, and the collective Obvious were selected³.

³ Due to space limitation, the complete list of paintings is not presented. Please contact the first author for details.

Protocol
An online survey was used to control the experimental conditions in this type of study [1]. The participants who accepted the task on AMT were redirected to the SurveyMonkey platform. A mixed-subject design was used; see Figure 1. Similar to [25], participants were randomly assigned to one of the two conditions (i.e., Human or AI condition, the between-subject variable). Thus, participants saw one of the two primings and instructions presented in Table 1. All participants saw portraits and landscapes actually made by humans and AI (within-subject design).

Table 1. Instructions.
  AI condition: "8 paintings created by some Artificial Intelligence will be presented. You will be asked to rate them. There are no right or wrong answers. Only your opinion counts. Please respond spontaneously according to your feelings."
  Human condition: "8 paintings created by some artists will be presented. You will be asked to rate them. There are no right or wrong answers. Only your opinion counts. Please respond spontaneously according to your feelings."

[Figure 1. Experimental design: induction (AI vs. Human) as the between-subject variable; real author (AI vs. Human) and painting type (landscape vs. portrait) as within-subject variables, presented in randomized order.]

The artworks were randomly selected from the 40 paintings (10 portraits by AI, 10 landscapes by AI, 10 portraits by humans, and 10 landscapes by humans) to improve the generalization of the results, to reduce experiment duration, and to avoid the repetition of painting styles. Next, for each painting, participants had to complete several items (i.e., liking, perceived beauty, novelty, and meaning).
Finally, in order to ensure the effectiveness of the priming of the declared author type (i.e., AI or human), and following previous studies (e.g., [42]), a manipulation-check item was introduced (i.e., "At the beginning of the study, the identity of the painters was defined. Do you remember the identity of the painters?"). Afterward, it was explained that the origin of the paintings had been manipulated. Participants then had to guess the origin of four paintings (one painting was randomly selected from each category). This modified TT was proposed at the end of the study to avoid any bias in the evaluation and in the priming effect. Lastly, two demographic questions about age and gender were asked.
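As an illustration of this design, the following R sketch simulates the random assignment of participants to an induction condition and the stratified draw of paintings from the four author-by-type cells. It is only a sketch under stated assumptions: the object and column names (paintings, real_author, n_per_cell, etc.) are illustrative, two paintings per cell are assumed in order to match the eight paintings mentioned in Table 1, and the actual randomization was handled by the SurveyMonkey platform rather than by code like this.

```r
# Illustrative simulation of the mixed design (assumed names; not the authors' survey logic).
set.seed(123)

# 40 paintings: 10 per real-author x painting-type cell.
paintings <- expand.grid(
  painting_id = 1:10,
  real_author = c("AI", "Human"),
  type        = c("landscape", "portrait")
)

assign_participant <- function(participant, n_per_cell = 2) {
  # Between-subject priming: one induction condition per participant.
  induction <- sample(c("AI condition", "Human condition"), 1)
  # Within-subject part: draw n_per_cell paintings from each of the 4 cells
  # (assumed 2 x 4 = 8 paintings per participant), then randomize their order.
  cells <- split(paintings, interaction(paintings$real_author, paintings$type))
  shown <- do.call(rbind, lapply(cells, function(cell) cell[sample(nrow(cell), n_per_cell), ]))
  shown <- shown[sample(nrow(shown)), ]
  data.frame(participant = participant, induction = induction, shown, row.names = NULL)
}

design <- do.call(rbind, lapply(1:565, assign_participant))
head(design)
```

The sketch only makes explicit the 2 (induction) x 2 (real author) x 2 (painting type) structure summarized in Figure 1.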
Results
Mixed model analyses were used for each dependent variable. Thus, the participants and the paintings⁴ were considered random factors. The induction condition was considered a between-subject factor. The type of paintings and the real authors were considered within-subject factors. The statistical analysis was performed using R [47], lme4 [4], and afex [52]. The effect sizes were based on [10] (measured as d).

⁴ To consider the variability between paintings, the painting is included as a random factor. Thus, the variability in perception between the paintings is absorbed in the model fitting.

Participants who responded "I don't know" to the manipulation check were excluded (i.e., 79 out of 565 participants, or 13.98% of the sample). Moreover, in the "Human condition" and the "AI condition," 82% and 62% of participants, respectively, remembered their type of induction. Thus, to control the effect of this variable, answering correctly or incorrectly on this item was introduced as a random factor, similar to [1].
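As a sketch of this analysis, the model for one dependent variable could be specified with afex/lme4 along the following lines. The data frame d and its column names (liking, induction, painting_type, real_author, participant, painting, manip_check_correct) are assumptions made for illustration, not the authors' actual code; only the package choice (R, lme4, afex) is stated in the paper.

```r
library(afex)  # attaches lme4; mixed() provides F-tests for the fixed effects

# d: one row per rating, with assumed columns:
#   liking (1-7), induction (declared author: AI vs. Human, between-subject),
#   painting_type (landscape vs. portrait), real_author (AI vs. Human),
#   participant, painting, manip_check_correct (remembered the induction: yes/no).
m_liking <- mixed(
  liking ~ induction + painting_type + real_author +
    (1 | participant) +          # participants as a random factor
    (1 | painting) +             # paintings as a random factor (see footnote 4)
    (1 | manip_check_correct),   # manipulation-check answer as a random factor
  data   = d,
  method = "S"                   # Satterthwaite approximation for the F-tests
)
m_liking  # ANOVA table with F-tests for induction, painting type, and real author

# The same specification would be refitted with beauty, novelty, and meaning as
# dependent variables; effect sizes (d) would be computed separately, following [10].
```

A Satterthwaite or Kenward-Roger approximation is what yields fractional denominator degrees of freedom of the kind reported below; which of the two the authors actually used is not stated.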
The effects of the induction condition, the type of paintings, and the real authors were evaluated on declared liking, perceived beauty, perceived novelty, and perceived meaning. All descriptive statistics are presented in Table 2, Table 3, Table 4, and Table 5.

Concerning declared liking (see Table 2), the analyses show a main effect of induction (F(1, 481.25) = 17.67, p < .001, d = 0.06), of the type of painting (F(1, 30.74) = 26.69, p < .001, d = 0.15), and of the real authors (F(1, 30.74) = 82.44, p < .001, d = 0.25).

Regarding perceived beauty (see Table 3), the analyses show a main effect of induction (F(1, 481.23) = 17.74, p < .001, d = 0.05), of the type of painting (F(1, 32.14) = 22.18, p < .001, d = 0.16), and of the real authors (F(1, 32.14) = 64.81, p < .001, d = 0.27).

In terms of perceived novelty (see Table 4), the analyses show a main effect of induction (F(1, 481.28) = 13.31, p < .001, d = 0.11), of the type of painting (F(1, 25.72) = 16.33, p < .001, d = 0.08), and of the real authors (F(1, 25.72) = 3.47, p = .074, d = 0.04).

Concerning perceived meaning (see Table 5), the analyses showed a main effect of induction (F(1, 481.34) = 15.09, p < .001, d = 0.13), of the type of painting (F(1, 28.81) = 19.61, p < .001, d = 0.10), and of the real authors (F(1, 28.81) = 52.96, p < .001, d = 0.16).

Lastly, participants had to distinguish paintings made by humans from AI-generated paintings. The analyses show an effect of the painting type on the recognition rate (F(1, 1717.44) = 17.67, p < .001, ηp² = 0.34) and of the author's type (F(1, 1717.44) = 64.41, p < .001, ηp² = 0.34); see Table 6. Thus, paintings drawn by humans (compared to AI-generated paintings) and portraits (compared to landscapes) are the best-recognized artworks, with the lowest percentage of recognition errors.

Table 2. Descriptive statistics for declared liking: mean (SD). Land = landscape.
  Induction:      Human 5.12 (1.56)   AI 4.80 (1.66)
  Real authors:   Human 5.18 (1.44)   AI 4.72 (1.76)
  Painting type:  Land  5.35 (1.39)   Portrait 4.55 (1.74)

Table 3. Descriptive statistics for perceived beauty: mean (SD). Land = landscape.
  Induction:      Human 5.12 (1.53)   AI 4.81 (1.67)
  Real authors:   Human 5.20 (1.38)   AI 4.71 (1.78)
  Painting type:  Land  5.37 (1.36)   Portrait 4.54 (1.73)

Table 4. Descriptive statistics for perceived novelty: mean (SD). Land = landscape.
  Induction:      Human 4.90 (1.46)   AI 4.63 (1.59)
  Real authors:   Human 4.87 (1.47)   AI 4.63 (1.58)
  Painting type:  Land  4.81 (1.48)   Portrait 4.70 (1.58)

Table 5. Descriptive statistics for perceived meaning: mean (SD). Land = landscape.
  Induction:      Human 4.81 (1.56)   AI 4.48 (1.69)
  Real authors:   Human 4.80 (1.52)   AI 4.47 (1.74)
  Painting type:  Land  4.90 (1.50)   Portrait 4.37 (1.73)

Table 6. Recognition rate (in percent) according to painting type and author type.
  Landscape: 53 %   Portrait: 69 %
  Human:     66 %   AI:       56 %
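The recognition rates in Table 6 could be tabulated from the modified-TT guesses roughly as follows; again, the data frame tt and its columns (guessed_author, real_author, type) are assumed names used only for illustration.

```r
# tt: one row per guessed painting (four per participant), with assumed columns
#   guessed_author and real_author in {"AI", "Human"}, type in {"landscape", "portrait"}.
tt$correct <- tt$guessed_author == tt$real_author

aggregate(correct ~ type,        data = tt, FUN = mean)  # landscape vs. portrait rates
aggregate(correct ~ real_author, data = tt, FUN = mean)  # human vs. AI paintings
```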
Discussion
A phase with a manipulation check was proposed to participants to verify the relevance of the proposed induction. The responses suggest that a large majority of participants believed and remembered their type of induction (i.e., AI or human condition). These results seem consistent with those of Friedman [25]: in their first experiment, 83.87% and 65.27% of participants, respectively, believed the assertion in the "Human condition" and the "AI condition." Unlike in previous studies, here, the use of a large sample (565 participants), combined with the introduction of the answers to the manipulation check as a random factor, resulted in precise statistical analyses. Indeed, the results show significant effects of both the induced and the real author. More specifically, the artworks presented as AI-generated paintings were significantly less liked and were perceived as less beautiful, novel, and meaningful than paintings presented as drawn by a human. The same pattern, with significant effects, is also apparent for the real type of authors of the artworks presented. Indeed, AI-generated paintings were less well evaluated (in terms of liking, beauty, novelty, and meaning) than paintings made by humans. These results support the first results obtained by Moffat and Kelly [40] on the perception bias towards computer-generated music. To the best of our knowledge, these results had never been replicated on a large sample, with the methodological precautions presented above. Moreover, the modified TT, in which participants have to guess the real author of the paintings, showed a better recognition of human paintings (66%) than AI-generated paintings (56%). These results are consistent with Burnett [12] for computer-composed music and are opposed to the results of Moffat and Kelly [40]. In Elgammal [23], 75% of respondents thought that the AI-generated paintings were made by humans. These differences could be explained by the improvement of techniques in computational creativity, especially with the emergence of GANs. In parallel, it should be noted that participants are better able to identify the origin of the author for portraits (69%) than for landscapes (53%). Similar to Boden [8], with a recognition rate close to chance (50%), people have difficulty distinguishing the type of authors. These results may suggest that the technical properties of AI-generated paintings are currently more advanced for the generation of landscapes than of portraits. A possible explanation of the existing bias against AI-generated paintings could be that participants evaluated the pieces of art with an intergroup bias. AI may be anthropomorphized and thus considered as the out-group [21]. Intergroup bias can be defined as the tendency people have to evaluate their own group (i.e., the in-group) in a more positive manner than another group (i.e., the out-group) [6, 30]. Social changes relative to the out-group may bring a sense of threat and negative feelings that reinforce the intergroup bias [9, 24, 30]. The aim of the out-group devaluation is to maintain a positive social identity and superiority in some areas of competence [54]. To further explain the existence of a negative perception bias towards AI, several concepts can be cited, such as technophobia ([43]; see [49] for a meta-analysis), anxiety towards machines, and reactive devaluation [50]. AI can be seen as a potential source of danger because it could replace humans in many areas [43]. To combat this bias, some authors have raised the question of framing information on user perceptions [7, 15, 35, 48, 53].
Indeed, "[f]raming information might be useful to help humans understand an agent who is unlike them; but this remains to be tested" ([36], p. 28:24), even if the experimental results seem to be unclear [39]. Thus, for Lamb ([36], p. 28:24), "a more subtle question is if humans are biased toward familiar and humanlike forms of creativity," which could be linked to the previously mentioned intergroup bias.

Future works
Within this paper, a general bias against AI-generated artwork can be highlighted. In future works, the interaction effects across different parameters (e.g., induction and painting type) may be explored. Moreover, it may be interesting to include a measure of self-reported expertise in art. Hekkert and Van Wieringen [28] have analyzed the differences in art evaluation among experts and non-experts; they found a correlation between perceived originality and quality that was significantly higher among experts than non-experts. In the same way, Moffat and Kelly [40] have found a bias against computers that was significantly higher in musicians than in non-musicians. Nevertheless, in a modified version of the TT, non-musicians were surprisingly better than musicians at recognizing music generated by computers or humans. In parallel, in order to make our results even more generic, the evaluation of a general bias against AI-generated artworks could be carried out with different styles of paintings (e.g., [23]) or types of art (e.g., music, poems). In this paper, public perception bias is questioned based on a dichotomous human or AI priming. However, the frontiers between AI-generated and human-generated artworks may become increasingly blurred. For Elgammal [22], AI has even blurred the definition of artists. More generally, this raises the issue of authorship: "Can an Artificial Intelligence make Art without artists?" [2]. An initial answer could be in the eye or mind of the viewer.

Conclusion
The idea of a negative perception bias against machines, computers, and/or AI is widely shared, particularly in computational creativity. However, there has been a lack of experimental proof demonstrating evidence of this bias. Unlike previous studies, the current large-scale experiment with 565 participants focused on AI and AI-generated paintings with different methodological precautions (large sample, randomization, manipulation check as a random factor, etc.). In addition to the large sample, the main contributions of this paper are the methodological precautions and the robustness of the results, which seem to represent significant progress in the field of HCI and computational creativity in demonstrating the existence of a negative perception bias towards AI compared with human-made systems. Thus, depending on the perceived identity of the author (Human vs. AI), the same artworks were evaluated differently. Moreover, the results show that real artworks made by humans are also evaluated more highly than real AI-generated artworks. Alongside technological advances in AI, this paper paves the way for other studies to further consider perception bias in computational creativity and to analyze the impact of AI on human representations.

Acknowledgement
This study was carried out within b<>com, an institute of research and technology dedicated to digital technologies. It received support from the Future Investments program of the French National Research Agency (grant no. ANR-07-A0-AIRT).
References
[1] Seyed Ali Amirshahi, Gregor Uwe Hayn-Leichsenring, Joachim Denzler, and Christoph Redies. 2015. JenAesthetics Subjective Dataset: Analyzing Paintings by Subjective Scores. Computer Vision - ECCV 2014 Workshops, Springer International Publishing, 3–19.
[2] Sofian Audry and Jon Ippolito. 2019. Can Artificial Intelligence Make Art without Artists? Ask the Viewer. Arts 8, 1: 35.
[3] John A. Bargh, Mark Chen, and Lara Burrows. 1996. Automaticity of social behavior: Direct effects of trait construct and stereotype activation on action. Journal of Personality and Social Psychology 71, 2: 230–244.
[4] Douglas Bates, Martin Mächler, Ben Bolker, and Steve Walker. 2015. Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software 67, 1: 1–48.
[5] D. E. Berlyne. 1971. Aesthetics and psychobiology. Appleton-Century-Crofts, East Norwalk, CT, US.
[6] Michael Billig and Henri Tajfel. 1973. Social categorization and similarity in intergroup behaviour. European Journal of Social Psychology 3, 1: 27–52.
[7] Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao, and Nigel Shadbolt. 2018. "It's Reducing a Human Being to a Percentage": Perceptions of Justice in Algorithmic Decisions. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, ACM Press, 1–14.
[8] Margaret A. Boden. 2010. The Turing test and artistic creativity. Kybernetes.
[9] Marilynn B. Brewer. 2001. Ingroup identification and intergroup conflict. Social identity, intergroup conflict, and conflict reduction 3: 17–41.
[10] Marc Brysbaert and Michaël Stevens. 2018. Power Analysis and Effect Size in Mixed Effects Models: A Tutorial. Journal of Cognition 1, 1: 9.
[11] Michael Buhrmester, Tracy Kwang, and Samuel D. Gosling. 2011. Amazon's Mechanical Turk: A New Source of Inexpensive, Yet High-Quality, Data? Perspectives on Psychological Science 6, 1: 3–5.
[12] Adam Burnett, Evon Khor, Philippe Pasquier, and Arne Eigenfeldt. 2012. Validation of Harmonic Progression Generator Using Classical Music. 126–133.
[13] M. Y. Cai, Y. Lin, and W. J. Zhang. 2016. Study of the optimal number of rating bars in the Likert scale. Proceedings of the 18th International Conference on Information Integration and Web-based Applications and Services - iiWAS '16, ACM Press, 193–198.
[14] Wei-Ta Chu, Hideto Motomura, Norimichi Tsumura, and Toshihiko Yamasaki. 2019. A Survey on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia. ITE Transactions on Media Technology and Applications 7, 2: 60–67.
[15] Simon Colton. 2008. Creativity Versus the Perception of Creativity in Computational Systems. AAAI Spring Symposium: Creative Intelligent Systems.
[16] Simon Colton. 2012. The Painting Fool: Stories from Building an Automated Painter. In J. McCormack and M. d'Inverno, eds., Computers and Creativity. Springer Berlin Heidelberg, Berlin, Heidelberg, 3–38.
[17] Simon Colton, Alison Pease, and Rob Saunders. 2018. Issues of Authenticity in Autonomously Creative Systems. Proceedings of the Ninth International Conference on Computational Creativity, Salamanca, Spain, June 25–29, 2018, 272–279.
[18] David Cope. 2005. Computer models of musical creativity. MIT Press, Cambridge, Mass.
[19] David Cope and Douglas R. Hofstadter. 2001. Virtual music: computer synthesis of musical style. MIT Press, Cambridge, Mass.
[20] Antonio Daniele and Yi-Zhe Song. 2019. AI + Art = Human. Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society - AIES '19, ACM Press, 155–161.
[21] Chad Edwards, Autumn Edwards, Brett Stoll, Xialing Lin, and Noelle Massey. 2019. Evaluations of an artificial intelligence instructor's voice: Social Identity Theory in human-robot interactions. Computers in Human Behavior 90: 357–362.
[22] Ahmed Elgammal. 2019. AI Is Blurring the Definition of Artist. American Scientist 107, 1: 18.
[23] Ahmed Elgammal, Bingchen Liu, Mohamed Elhoseiny, and Marian Mazzone. 2017. CAN: Creative Adversarial Networks, Generating "Art" by Learning About Styles and Deviating from Style Norms. arXiv:1706.07068 [cs].
[24] Victoria M. Esses, Lynne M. Jackson, and Tamara L. Armstrong. 1998. Intergroup competition and attitudes toward immigrants and immigration: An instrumental model of group conflict. Journal of Social Issues 54, 4: 699–724.
[25] Ronald S. Friedman and Christa L. Taylor. 2014. Exploring emotional responses to computationally-created music. Psychology of Aesthetics, Creativity, and the Arts 8, 1: 87–95.
[26] Berys Gaut. 2010. The Philosophy of Creativity: Philosophy of Creativity. Philosophy Compass 5, 12: 1034–1046.
[27] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, et al. 2014. Generative Adversarial Networks. arXiv:1406.2661 [cs, stat].
[28] Paul Hekkert and Piet C. W. Van Wieringen. 1996. Beauty in the Eye of Expert and Nonexpert Beholders: A Study in the Appraisal of Art. The American Journal of Psychology 109, 3: 389–407.
[29] Beth A. Hennessey and Teresa M. Amabile. 2010. Creativity. Annual Review of Psychology 61, 1: 569–598.
[30] Miles Hewstone, Mark Rubin, and Hazel Willis. 2002. Intergroup bias. Annual Review of Psychology 53, 1: 575–604.
[31] Florian Hoenig. 2005. Defining Computational Aesthetics. Proceedings of the First Eurographics Conference on Computational Aesthetics in Graphics, Visualization and Imaging, Eurographics Association, 13–18.
[32] Joo-Wha Hong. 2018. Bias in Perception of Art Produced by Artificial Intelligence. Human-Computer Interaction. Interaction in Context, Springer International Publishing, 290–303.
[33] Anna Jordanous. 2014. Stepping Back to Progress Forwards: Setting Standards for Meta-Evaluation of Computational Creativity.
[34] Wang-Cheng Kang, Chen Fang, Zhaowen Wang, and Julian McAuley. 2017. Visually-Aware Fashion Recommendation and Design with Generative Image Models. arXiv:1711.02231 [cs].
[35] René F. Kizilcec. 2016. How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI '16, ACM Press, 2390–2395.
[36] Carolyn Lamb, Daniel G. Brown, and Charles L. A. Clarke. 2018. Evaluating Computational Creativity: An Interdisciplinary Tutorial. ACM Comput. Surv. 51, 2: 28:1–28:34.
[37] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. 2015. Deep learning. Nature 521, 7553: 436–444.
[38] Jay H. Lee, Joohyun Shin, and Matthew J. Realff. 2018. Machine learning: Overview of the recent progresses and implications for the process systems engineering field. Computers & Chemical Engineering 114: 111–121.
[39] Stephen McGregor, Matthew Purver, and Geraint Wiggins. 2016. Process Based Evaluation of Computer Generated Poetry. Proceedings of the INLG 2016 Workshop on Computational Creativity in Natural Language Generation, Association for Computational Linguistics, 51–60.
[40] David C. Moffat and Martin Kelly. 2006. An investigation into people's bias against computational creativity in music composition. Proceedings of the 3rd International Joint Workshop on Computational Creativity (ECAI06 Workshop).
[41] Danil Nagy, Damon Lau, John Locke, et al. 2017. Project discover: an application of generative design for architectural space planning. Society for Computer Simulation International, 7.
[42] David Norton, Derrall Heath, and Dan Ventura. 2015. Accounting for Bias in the Evaluation of Creative Computational Systems: An Assessment of DARCI. Proceedings of the Sixth International Conference on Computational Creativity, 31–38.
[43] Changhoon Oh, Taeyoung Lee, Yoojung Kim, SoHyun Park, Sae bom Kwon, and Bongwon Suh. 2017. Us vs. Them: Understanding Artificial Intelligence Technophobia over the Google DeepMind Challenge Match. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI '17, ACM Press, 2523–2534.
[44] Philippe Pasquier, Adam Burnett, and James Maxwell. 2016. Investigating Listener Bias Against Musical Metacreativity. Proceedings of the Seventh International Conference on Computational Creativity (ICCC 2016), Sony CSL.
[45] Marcus T. Pearce and Geraint A. Wiggins. 2001. Towards A Framework for the Evaluation of Machine Compositions.
[46] Alison Pease and Simon Colton. 2011. On impact and evaluation in computational creativity: a discussion of the Turing Test and an alternative proposal. Proceedings of AISB '11: Computing and Philosophy, Society for the Study of Artificial Intelligence and Simulation of Behaviour, 15–22.
[47] R Core Team. 2018. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria.
[48] Emilee Rader, Kelley Cotter, and Janghee Cho. 2018. Explanations as Mechanisms for Supporting Algorithmic Transparency. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI '18, ACM Press, 1–13.
[49] Larry D. Rosen and Phyllisann Maguire. 1990. Myths and realities of computerphobia: A meta-analysis. Anxiety Research 3, 3: 175–191.
[50] Lee Ross. 1995. Reactive Devaluation in Negotiation and Conflict Resolution. In K. Arrow, R. Mnookin, L. Ross, A. Tversky, and R. B. Wilson, eds., Barriers to Conflict Resolution. New York.
[51] Oscar Schwartz and Benjamin Laird. 2019. bot or not. Retrieved July 27, 2019 from http://botpoet.com.
[52] Henrik Singmann, Ben Bolker, Jake Westfall, and Frederik Aust. 2019. afex: Analysis of Factorial Experiments.
[53] Rashmi Sinha and Kirsten Swearingen. 2002. The role of transparency in recommender systems. CHI '02 Extended Abstracts on Human Factors in Computing Systems - CHI '02, ACM Press, 830.
[54] Henri Tajfel and John Turner. 2004. An integrative theory of intergroup conflict. In M. J. Hatch and M. Schultz, eds., Organizational identity: A reader. Oxford University Press, 56–65.
[55] Endel Tulving, Daniel L. Schacter, and Heather A. Stark. 1982. Priming effects in word-fragment completion are independent of recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition 8, 4: 336–342.
[56] Geraint A. Wiggins. 2006. A preliminary framework for description, analysis and comparison of creative systems. Knowledge-Based Systems 19, 7: 449–458.
