SCIART MAGAZINE

FEBRUARY 2019


PERCEPTION

Is Seeing Believing?
A Collaborative Art and Design Research Project
Thinkstock image titled Artificial Intelligence, featured in “What AI Can and Cannot Do Today” in Network World, MAR 30, 2017.
In 2018, Anne Trouillet Rogers, founder of art and innovation consultancy Culture A, and digital sociologist Lisa Talia Moretti partnered to create Is Seeing Believing?, a project that explores public perceptions and understanding of Artificial Intelligence (AI) through visual and non-visual stimuli. The project expands upon It Speaks, a research initiative funded by ReFig (a five-year project supported by the Social Sciences and Humanities Research Council of Canada) that was led by Lisa in early 2018 and used social theory to create a framework that would allow multidisciplinary teams to manage bias within big language data sets. This article outlines Anne and Lisa’s process for Is Seeing Believing?, presents their early findings, and discusses their blue-sky ambitions.

“People’s ways of using media, real or imagined, enter into everyday conversations as parts of the narratives through which people constantly construct who they are and who are the people they talk about.” - André H. Caron and Letizia Caronia, 2007
By Anne Trouillet Rogers & Lisa Talia Moretti, guest contributors

Technology is often thought of as a series of products. However, it’s not quite so simple. Rather, we should think of technology as a series of products that are connected to ideas, embedded into culture, and that go on to create a system. It’s for this reason the authors prefer to define technology as a system and not a product. Expanding the definition of technology in this way helps us to visualize how the system of technology is connected to other social systems like information, knowledge, work, power, race, sex, and gender. Technology is therefore sociotechnical in nature, and for this reason we need to re-engineer old methods and develop new ones to evaluate technology’s impact on society. Technology needs a heavy dose of lateral thinking driven by social theory. Put another way, there is an urgent need to "produce different knowledge, and produce knowledge differently" (Richardson and St. Pierre, 1997, p. 969).
 
In May 2018, digital sociologist Lisa Talia Moretti led a piece of research called It Speaks that explored the language used within the Artificial Intelligence (AI) industry, with a specific focus on building chatbots. The work was inspired by a paper titled "Reframing AI Discourse" (2017), in which researchers D.G. Johnson and M. Verdicchio discuss how the multiple roles humans play in the creation, design, and deployment of AI have caused two major problems. Firstly, they have led people to think and believe that AI is 'uncontrollable', because in the mind of the public the term "autonomous machines" is being equated with human autonomy. Secondly, they have created 'sociotechnical blindness' by hiding the essential and pivotal roles humans play in the life and times of machines.
iStock image titled Artificial Intelligence, featured in “A Stanford-led survey of trends in artificial intelligence finds advances in working with human languages, global reach” in Stanford News, DEC 12, 2018.
Within It Speaks, the researchers confirmed this through an analysis of media headlines and images, in addition to an analysis of language used within the technology industry. The latter analysis led to the discovery of how language has contributed to creating sociotechnical blindness within the technology industry. For more on this, Lisa invites you to read the It Speaks paper, which can be downloaded here.
 
While Lisa was conducting her analysis for the report, Anne was investigating the marketplace of AI tools as well as recent artist and tech collaborations. Coincidentally, their research surfaced the same visuals being used to represent AI: glowing blue brains, brains illustrated as circuit boards, shiny white or metallic robot figures, and disembodied hands appearing again and again in media associated with 'artificial intelligence'. Where were the people whose lives were being impacted, for better or worse, by AI? Why was there such a disconnect between the art and the reality? Unintentionally, Anne and Lisa happened upon more evidence to support the argument for sociotechnical blindness, discovering a new and exciting research territory in the branding, or rather, re-branding of AI.
 
Combining their unique skill sets and experience in the art, technology, and creative fields, Anne and Lisa created and designed Is Seeing Believing? They have devised the project as a multi-phase research initiative that explores public perceptions and understanding of AI through visual and non-visual stimuli. They are currently in the middle of the first phase, which analyzes diverse visual sources of AI imagery in the mass media.
Ferdinand de Saussure’s sign analysis, modelled on his signifier and signified framework. Diagram by authors.

CS Peirce’s Semiotics Model, which shows the relationship between a sign, an object, and a meaning. Diagram by authors.
Using Semiotics to Decode the Visual Language of AI
 
Much of what we think we know about AI is influenced by what we 'see' it to be from images and video transferred across the world. These come to us in many forms: an image on a blog or a news site, a blockbuster sci-fi film, a fancy walking, talking gadget (Hello, Sophia!). We’ve started the first phase of our project, Is Seeing Believing?, by exploring how visual media has contributed, along with language, to sustaining and perpetuating sociotechnical blindness. In addition, we are investigating how AI imagery, distributed through digital networks and search engines, has educated the public about this very powerful and increasingly pervasive technology. Using a combination of social listening (Talkwalker), keyword search analysis (Answer The Public), the Google image search engine, and stock imagery websites to access images (our 'raw' data for this project), we have conducted, and continue to conduct, an ongoing semiotic analysis of AI imagery that is tagged, stored, and distributed through public sources.
 
For the semiotic analysis, we adopted the methodologies of Ferdinand de Saussure and CS Peirce (Curtin, 2006). Each image has been analyzed by its primary sign components, what those components signify, and what significations or external meanings arise. The images were analyzed without assigning fixed meanings; rather, we investigated how the public may construct meaning based on stimuli such as the image’s color, composition, and subject matter. We began by mapping innovations in AI across an extensive historical timeline and assessing them through the shifting feast of visual media tagged with the key phrase "artificial intelligence." The semiotic analysis focused on five primary public sources of AI visuals: Google Image Search, Stock Art, Entertainment, Product, and Emoji. To date, over 1,000 images across Google Image Search and Stock Imagery alone have been analyzed, and the outputs of this are discussed here.
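The article does not describe the tooling behind this coding exercise. Purely as an illustration of how such a coding scheme might be captured and tallied across a large corpus, the sketch below (in Python, with hypothetical field names, not the authors’ actual instrument) records each image’s sign components, what they signify, and their wider significations.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ImageCoding:
    """One coded image; hypothetical fields mirroring the sign / signified / signification breakdown."""
    source: str                                              # e.g. "Google Image Search", "Stock Art"
    signs: list[str] = field(default_factory=list)           # primary visual components ("glowing blue brain")
    signified: list[str] = field(default_factory=list)       # what those components denote ("intellect", "network")
    significations: list[str] = field(default_factory=list)  # wider cultural meanings ("machine as mind")
    dominant_color: str = ""                                  # coder's judgement, e.g. "blue"

def tally_signs(corpus: list[ImageCoding]) -> Counter:
    """Count how often each sign component recurs across the coded corpus."""
    counts: Counter = Counter()
    for image in corpus:
        counts.update(image.signs)
    return counts

# Invented example codings, only to show how recurring signs would surface:
corpus = [
    ImageCoding("Stock Art", ["glowing blue brain", "binary code"],
                ["intellect", "computation"], ["machine as mind"], "blue"),
    ImageCoding("Google Image Search", ["white robot", "human hand"],
                ["embodied AI", "cooperation"], ["future of work"], "white"),
]
print(tally_signs(corpus).most_common(3))
```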

How Do We Evolve the Visual Language of AI?
 
The AI visuals analyzed are difficult to interpret at face value because they present a lot of conceptual nuance to be deciphered by the viewer/reader. Creative imagery such as this is often challenging to construct meaning from, as it relies heavily on signs and meanings that must be deciphered (Peterson, 2017). One of the ways we assessed images was by whether they appear to function as performative or decorative imagery. We discovered that most AI imagery is decorative and/or reiterative rather than performative in nature. The effect is that AI comes to be understood as a physical robot or a green brain living on a screen instead of invisible code that is embedded within the systems in which we conduct our lives. AI becomes something that simply is, rather than a cultural artifact whose potential should be debated by the public.
Throughout the ongoing analysis, we’ve also noted traits of historical art movements present in some of the images, particularly with regard to surrealism, futurism, heroic realism, and humanism. However, to the average person who doesn’t have detailed knowledge and understanding of these movements, these images are very difficult to 'decode'. As a result, one of the early challenges we have come across is that the visual language being used for some AI images is unlikely to translate into individual understanding for many people. Secondly, we have discovered that many of the images are not performing an active and supportive role in educating the public about AI. According to the literature, the most impactful creative images are performative and provoke an experience in the viewer. Yet more often than not, creative images end up as merely decorative; that is, they represent a headline and its story but serve only as a content placeholder. Imagine a story about the release of a new iPhone. A decorative image would be a picture of that new iPhone. By contrast, performative images provide more of an experience for the reader by drawing them in as an active participant in understanding what that image means to them. Performative images aid and support the learning process. Their subject, color, and composition also tend to be more arresting in nature, and they can become 1,000-word stories in themselves.

In this respect, we believe that one of the key ways we can evolve the visual language of AI is by making images work harder: more performative rather than decorative. We need to ensure that, within the visual culture in which we live, we take care in selecting the visuals that will accompany stories of AI. These images shouldn’t just be 'cool' or 'good enough' but rather images selected for their ability to tell a story and be an educational experience. In order to do this, we need to be better versed in why a particular kind of technology has been created (its purpose in the world), what it can do (its intent), what the desired experience is, as well as what its many impacts will be on the world, known as well as imagined.
Shutterstock image titled Artificial Intelligence, featured in “DeepMind: can we ever trust a machine to diagnose cancer?” in The Conversation, DEC 06, 2017.
Signs, Signifiers and Significations
 
Based on this early-stage analysis, images tagged 'artificial intelligence' most often depict the following in renderings, illustrations, and other visuals. These are the signs through which, we’ve noted, visual media signifies artificial intelligence to the public.
 
Blue As The Dominant Color: Throughout our analysis, we were struck by the use of blue and bluish hues as the dominant coloration in images across all sources (such as the blue brains mentioned previously). Curious to investigate how this choice of color affects viewers, we dug into scholarship on the psychology of color, particularly exploring how the color blue mentally and physically affects the state of the viewer. According to a 2009 study from the University of British Columbia and work by leading psychologists, blue environmental cues are linked to increased creativity, productive brainstorming, and feelings of calm and receptiveness (Science Daily, 2009; Valdez and Mehrabian, 1994). We are further exploring how these cues may affect a viewer’s perception of a headline and digestion of the associated news linked to AI.
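The observation about blue dominance comes from the authors’ visual coding rather than any automated measurement. As a rough, hypothetical illustration only, a short script like the one below (using the Pillow imaging library; the folder name and the pixel-level definition of "blue-dominant" are assumptions) could be used to sanity-check how blue-heavy a collection of downloaded AI images is.

```python
from pathlib import Path
from PIL import Image  # Pillow

def blue_share(path: Path, sample_size: int = 64) -> float:
    """Fraction of sampled pixels in which the blue channel is the strongest."""
    img = Image.open(path).convert("RGB").resize((sample_size, sample_size))
    pixels = list(img.getdata())
    blue_dominant = sum(1 for r, g, b in pixels if b > r and b > g)
    return blue_dominant / len(pixels)

# Hypothetical usage: average blue dominance across a folder of collected imagery.
folder = Path("ai_images")  # assumed folder of images gathered from search and stock sites
scores = [blue_share(p) for p in sorted(folder.glob("*.jpg"))]
if scores:
    print(f"Mean share of blue-dominant pixels: {sum(scores) / len(scores):.1%}")
```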
 
Brains, Binary Code, and Circuit Boards: These are usually rendered in prominent blue coloration and glowing, or in green coloration and made to appear as pulsing networks or circuit boards. The image of the brain is the dominant visual, with binary code or circuit-board illustration added as detailed layers. This art often features pulsating connection points, illustrating neural networks. It’s a very literal and heavy-handed approach to likening the network of the brain to AI’s networks of information and networked knowledge. Whereas the heart is usually associated with emotions (the irrational), the brain is associated with being the source of life, intellect, and knowledge. To use one’s head, not one’s heart, in decision-making is thought to be more rational and considered.
Getty image titled Artificial Intelligence, featured in “Is Artificial Intelligence Replacing Your Intelligence?” in Entrepreneur, MAR 21, 2018.
iStock image titled Artificial Intelligence, featured in “Artificial I, Human Rights, and Algorithmic Transparency” in Dot Magazine, NOV 2018.
Primarily Gender-Neutral Robots With Human Features: When shown as a physical form, 'artificial intelligence' is often depicted as a robot with a white or silver body and a noticeable sheen. These "shiny" robots are often gazing upwards (the eyes clearly defined), engaged in physical tasks such as touching a holographic screen or working at a desk, or shown in contemplative poses akin to Auguste Rodin’s iconic The Thinker. The robots are placed in various areas of the knowledge and manufacturing economies, signaling their diverse skills and uses. They are most often modeled with human features, with attention paid to the eyes, mouths, torsos, and hands. Some of the robots featured as AI display more aggressive features, such as gnarled teeth and glowing red eyes. Such imagery shows the continued influence of mainstream films such as The Terminator, Blade Runner, and I, Robot on the media’s depiction of robots and their good (or bad) intentions within society.
iStock image titled Thinking Robot, featured in “‘Machines That Think’ predicts the future of artificial intelligence” in Science Magazine, FEB 17, 2018.
The Thinker by Auguste Rodin, conceived in 1880 and cast in 1902, bronze, located at the Musée Rodin in Paris, France. Image from the Musée Rodin collection.
Thinkstock image titled Artificial Intelligence / Machine Learning, featured in “5 ways industrial AI is revolutionizing manufacturing” in CIO Magazine, SEP 27, 2018.
Still image from The Terminator film franchise, credited to Melinda Sue Gordon, featured in “Has humanity already lost control of artificial intelligence?” in The Daily Mail, APR 11, 2017.
Disembodied Robotic Or Human Hands: Most often these elements are rendered as outstretched arms, one human and one robotic, reaching out to shake hands. We have also found many instances where a single robotic hand is holding a miniature world or pressing control buttons, signifying the robot as creator and controller of systems. Additionally, we have often seen recreations of Michelangelo’s The Creation of Adam. What is really interesting in these images is that from 2008 to 2010, the robot’s hand is modeled on Adam’s hand and the human hand is modeled on God’s hand. However, in 2011, the hands suddenly switch positions; the human hand, representing Adam, moves to the left and the robot hand, representing God, moves to the right and therefore into the higher position in the composition. Questions arise from these images: who has the power to make, create, or destroy life? Is it human or machine? The media’s answer is conflicted.
Uncredited stock image, featured in article “The technology behind AI in PPC” in Marketing Land, JAN 18, 2018.
Detail of The Creation of Adam by Michelangelo, c. 1508–1512, fresco painting, located in the Sistine Chapel in Rome, Italy. Image from wikipedia.com.
Getty Image titled Business leader - human hand touching robot hand, featured in article “Augmented Intelligence” in The Royal Society of Chemistry, AUG 2017.
We interpret important significations communicated by the media through these visual signs and signifiers. These include the future of work, the future of communication, the future of AI/robot and human relations, technological domination, and the future of governance, among other important social topics. Notably, there was a lack of racial diversity in images picturing humans. In the most popular stock images, humans were Caucasian or Asian. This is sadly reflective of the current technology industry, which features a lack of representation from Black, Hispanic, and other ethnic minority groups. From a gender standpoint, the depiction of male and female figures was relatively balanced. This is not reflective of current industry numbers, which show that men predominantly make up the technology workforce.
 
So Why Evolve the Visual Language of AI?
 
As with It Speaks, one of the major outputs of Is Seeing Believing? is to present not only a catalyst for creating more performative, experiential, and truthful images of AI and AI-related content, but also a strategy for improved collaboration between the science and humanities communities. We want to evolve visual language to support the democratization of AI as it permeates everyday use and mass consumption. According to noted visual language specialist Robert E. Horn, "People think visually. People think in language. When words and visual elements are closely intertwined, we create something new and we augment our communal intelligence" (2002). Horn details methods by which visual language improves communication and experience across various industries. Using his reasoning, we believe evolving the visual language of AI will not only improve sociotechnical vision (as opposed to blindness) and engagement between humans and the technology system as a whole, but also spark the development of a global rebranding of AI. This starts with critiquing and creating fresh and dynamic stock imagery for commercial media and policymaker use, developing educational programs on AI that can be shared in public forums, and activating more research and collaborations between art and technology.
 
With a nod to this notion of activated research and collaborations, we created a prototype artist brief based on our initial data findings. The brief lays out criteria for how we might create more performative and dynamic imagery around the challenging subject of AI. The brief includes key themes, a glossary of terms, social media hashtags, and emoji that contextualize the public perception of AI. In Fall 2018, we circulated the brief to a few artists working in different mediums. Our aim was to observe how these artists would develop ideas and visuals geared towards commercial media use in response to the initial findings of our semiotic analysis. We also asked the artists to provide us with feedback in an effort to evolve the brief for future distribution. 
 
In response to the brief, some artists remarked on the challenge of visualizing the inherently abstract connotations of AI. Comments included:

“This was a tough one!! …. It was actually really difficult not to revert to the cliché robot
or circuit/brain image!” - Jake Messing (painter)

“Gosh! This ended up being more tricky than I imagined….I've opted for a more conceptual approach to try and visualize AI as an idea rather than an object/robot/gadget. Each time I thought about the project I considered a completely different approach...it is a fun subject to think about.” - Helen Dennis (photographer, printmaker)

 
Each artist offered a unique approach to creating new imagery. Pauline Batista, for example, proposed a creative campaign of Future Postcards, illustrating a future where AI has taken over all forms of labour and humans are instead living on the beach (Pauline’s from Rio de Janeiro!) and enjoying life because work is obsolete. Her images suggest digital postcards: one half of the card shows a task a machine is performing and the other half of the card shows what humans are doing now that they no longer have to do this task. 
Future Postcards 2 by Pauline Batista, 2018.
Jake Messing explored the concept of AI as the third eye, referring to it as a gate that leads to inner realms and spaces of higher consciousness. In his illustration, the iris is replaced with the power on/off symbol, which merges the digital/visual experience with the human/visual experience.
Graphic 3rd Eye by Jake Messing, 2018.
Kysa Johnson wanted to depict pattern recognition in AI and "the set of inherent relationships in visual representations of things that allows us and self-learning computers to recognize things that are not exactly the same but share the same relationships as versions of the same thing." She was drawn to the notion of AI and interconnectivity. Interestingly, this was also a theme that Helen Dennis extracted from the artist brief.
Detail of Blow Up 322 by Kysa Johnson, 2017.
Untitled by Helen Dennis, 2018.
We’re eager to distribute the brief to many more artists and designers after edits inspired by our early prototyping round. If you’re an artist or designer keen to stay in the loop, please connect with us!
 
Conclusion and Next Steps
 
As previously mentioned, while significant work has been done to date, Is Seeing Believing? is in its infancy, with many more initiatives being planned. The authors have an open and live survey (which can be found here) that is collecting responses from the public about their perceptions of AI. In addition, future data will be collected through subject-matter expert interviews and co-creation workshops with members of the public and artists. Ultimately, Lisa and Anne have ambitions to create an immersive exhibition that uses the findings from the visual analysis to push the project into the next frontier: sound. Why an immersive sound exhibition? Because the face of AI is likely not to be a face at all, but rather a voice.
 
In a world that is hurtling toward voice being the next computational interface (Hi Siri, Hi Alexa!), technologists need to work more imaginatively and collaboratively with designers, artists, and other creatives. As the entire AI industry grapples with the challenge of 'explainable' AI in order to foster trust with a bewildered and exhausted public, this early finding in favor of technologist-creative collaboration doesn’t just feel like a good suggestion, but an obvious one. In the same vein, a cross-sector working group that spans the media, education, and technology industries and makes public education about AI its priority emerges as another strong finding and recommendation at this early stage.
 
Representing AI, and educating the public about it, using innovative story techniques and creative multimedia that performs is not set to get easier over time. If anything, the future will continue to pose fresh challenges. In fact, the authors already foresee one on the horizon:
 
How do we visually represent the heard but unseen? 


SciArt Magazine is a publication of SciArt Initiative, Inc.