SCIART MAGAZINE

OCTOBER 2019


STRAIGHT TALK

Yago de Quay’s Artistic Gestures
and Perceptuo-Motor Communication
Yago de Quay. Photo courtesy of the artist.
By Joe Ferguson, contributor

Yago de Quay ruined Ferris Bueller's Day Off for me.

You see, there’s a scene in this movie where Ferris’s friend Cameron - played by a young Alan Ruck - stares, passive and frozen, at Seurat’s A Sunday Afternoon on the Island of La Grande Jatte. The camera cuts back and forth between Cameron and the figure of a child in the painting, zooming closer each time, until the child blurs into the abstraction of Seurat’s pointillism. Cameron realizes that the closer he looks at this image the less he sees, and he is struck by the fear that this is how people may see him. Of course, in pure 80s fashion, the scene is accompanied by a heart-tugging soundtrack - Please, Please, Please, Let Me Get What I Want by The Dream Academy.
 
It’s classic John Hughes. Look up the scene on YouTube.
 
When this movie was released and teens flocked to theaters - no streaming service back then - every humanities instructor rejoiced. Hughes had conveyed a truth too few of these teachers had ever effectively related…
 
…passive reflection on art leads to intellectual epiphany!
 
Many have longed for a similar moment, but it is - depressingly - uncommon. 
 
Up until now, I’ve been fine with that. I’ve studied art for so long that I’ve acquired a visual literacy for most mediums. And while that’s led to a greater appreciation, I still lack that naive experience - the aha story that so many of my art-loving friends passionately tell.
 
I thought it was my training in the sciences - my proclivity for facts, figures, and biological concepts - that blinded me to the innate pleasure of the gallery experience. The absence of that lightning-bolt moment while staring at Seurat’s masterpiece, however, may not be the fault of the work itself. It may be that the metaphor of pointillism is not effectively communicated to everyone through an exclusively visual medium.
 
Our brains are hardwired for face-to-face communication. We understand what someone is telling us through language, tone, facial expression, and, importantly, gesture. Think about those emojis you use every day - faces and hands to relate what you fear may be misunderstood through words alone. Understanding concepts through gesture is called perceptuo-motor communication.
 
Visual art, language, and music are forms of communication that lack a perceptuo-motor experience. Dance and theater may contain gestures during a performance, but the experience for the viewer is still passive. Even immersive installation art still has you walking around while passively considering the work.
 
So, here I am, a few hundred words into this essay, and I’m telling you, I’m not bitter I’ve never had that moment. I swear. I still really love that Ferris Bueller scene, but that’s where Yago de Quay comes in.
 
You see, in many of his works he uses technology to capture gesture and create live art. And in many pieces, the viewer makes the gestures. This turns thousands of years of art appreciation on its head - passive reflection may be neither the best nor the only way to experience art. And understanding that, I had a Dada-esque moment. Like the effect of Duchamp’s mustachioed Mona Lisa, I can’t ever see that Ferris Bueller scene the same way again. I’m stuck with the idea that relating conceptual metaphors through the perceptuo-motor experience may provide more people with their aha moments.
 
But enough about the 80s and Ferris Bueller. For more on this emerging art form, let’s see what Yago has to say…
"3D[Embodied]" (2014). Live performance. Photo courtesy of Yago de Quay.
Joe Ferguson: Tell us a bit about your background. How did you make the transition to technology and art?
 
Yago de Quay: So, it’s a cold, snowy January in 2009 in Boston, where I had just graduated from Berklee College of Music with a degree in Jazz Guitar. And I was in a crisis: I had fallen out of love with the guitar and wasn't that good at it anymore. I still wanted to make music, but I didn’t want to play any kind of traditional instrument. So I decided to invent my own instrument. To make something completely different. The problem was I didn’t have the engineering skills to build stuff.
 
Later that year I enrolled in a Master’s in Multimedia, with a focus on interactive music, which was a brand new program at the Faculty of Engineering of the University of Porto. That’s where I found my calling. We were a motley crew of teachers, engineers, musicians, visual artists, developers, and physicists. We were definitely the oddballs in the department, but I felt at home. Under the auspices of Carlos Guedes - the program director - we learned technical skills like software and hardware engineering as well as media theory.
 
More importantly, Carlos encouraged me to write scientific papers and participate in scientific conferences, and I was surprised to find myself enjoying it. In fact, I got so into it that I published seven scientific articles about my new instruments in 2011 alone - more than anyone else in the program - and presented them all at academic conferences along with performances. That scientific output led to a series of grants from the European Cooperation in Science and Technology and later from the Portuguese Foundation for Science and Technology to create more instruments that used motion capture technologies.
"AdMortuos" (2016). Live performance. Photo courtesy of Pedro Miguel Resende & Filipa Rodrigues.
JF: Traditionally, dancers either followed choreographed movements or reacted to music. With your gesture-based innovation, performers can create the music. Why did you create these types of pieces?
 
YDQ: Yes, traditionally dance follows music. In my works, I invert that relationship by making music systems that follow the dancer. I hook up all sorts of sensors on the performer so that their body literally becomes an instrument. What that means is that in addition to composing the music, I have to code software and build hardware to move data from the performers to the audio tracks. Aside from the sheer fun of it, the reason I went in this direction was to be different from DJs and media artists who perform behind laptops.
 
For example, in my show at TEDxLuanda I created music software that transformed the gestures of the dancer, Piny Orchidaceae, into chords and sound effects in real time. In the first half of the song there are four poses that are matched to four chords. She can improvise a chord progression by striking different poses. This was achieved using an AI algorithm trained on her movement. In the second half of the song the height of her arms controls filters and synthesizers.
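De Quay doesn’t spell out the implementation, but the mapping he describes - a handful of rehearsed poses matched to chords, with arm height driving filters - can be sketched minimally. In the Python sketch below, the pose vectors, chord voicings, and distance threshold are invented for illustration, and a nearest-centroid lookup stands in for the trained AI model his actual system used:

    import numpy as np

    # Four rehearsed poses, each summarized as a (hypothetical) joint-position
    # vector captured from the motion tracking stream during training.
    POSE_CENTROIDS = {
        "pose_1": np.array([0.1, 0.9, -0.2, 0.8]),
        "pose_2": np.array([0.7, 0.2, 0.5, -0.1]),
        "pose_3": np.array([-0.6, 0.4, 0.3, 0.9]),
        "pose_4": np.array([0.0, -0.5, 0.8, 0.2]),
    }

    # Each recognized pose is matched to one chord (MIDI note numbers),
    # so striking poses in sequence improvises a chord progression.
    POSE_TO_CHORD = {
        "pose_1": [48, 52, 55],  # C major
        "pose_2": [45, 48, 52],  # A minor
        "pose_3": [41, 45, 48],  # F major
        "pose_4": [43, 47, 50],  # G major
    }

    def classify_pose(frame, threshold=0.5):
        """Return the nearest rehearsed pose, or None if nothing is close enough."""
        name, dist = min(
            ((n, float(np.linalg.norm(frame - c))) for n, c in POSE_CENTROIDS.items()),
            key=lambda pair: pair[1],
        )
        return name if dist < threshold else None

    def arm_height_to_cutoff(y, low_hz=200.0, high_hz=8000.0):
        """Map normalized arm height (0..1) to a synth filter cutoff frequency."""
        y = min(max(y, 0.0), 1.0)
        return low_hz + y * (high_hz - low_hz)

    # One incoming motion-capture frame, close to pose_1.
    frame = np.array([0.12, 0.88, -0.18, 0.79])
    pose = classify_pose(frame)
    if pose is not None:
        print("trigger chord:", POSE_TO_CHORD[pose])
    print("filter cutoff: %.0f Hz" % arm_height_to_cutoff(0.6))

The two halves of the song correspond to the two mappings here: discrete pose recognition triggering chords, and a continuous parameter (arm height) sweeping filters.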
 
Another example is Breakdown, created in collaboration with Rodrigo Carvalho (visual artist), Sunny Shen (dancer), and Po-Yang Sung (lighting designer), in which my arms controlled multiple instruments and sound effects while I sang.
"ArcAttack" (2013). Music vídeo. Photo courtesy of Yago de Quay.
JF: How were these performances received by audiences and performers?
 
YDQ: Audiences like my shows because I put a lot of thought into balancing experimental and popular aesthetics. The technology, the experimental part, is exciting, and the electronic beat, based on popular music, is entertaining. But for the audience my performances symbolize much more than just an artwork. They are an opportunity to marvel at scientific progress, an unexpected place where they stop and think, "Wow, we can do this now!"
 
For us artists, working with bleeding-edge interactive systems opens the door to an exciting world of opportunities. This type of collaboration is different from traditional collaborations because each artist is codependent on the others. Biometric data must pump into the system to change the music, and the dancers need to hear the music change in order to choreograph. It’s an interdisciplinary feedback loop. There is room for serendipity during rehearsals, but careful planning is also needed - any engineer will tell you that interactive systems take days or weeks to develop. It’s a lot of tinkering, practicing, rehearsing, and wondering.
"Be Real" (2015). Music video. Photo courtesy of Filipa Rodrigues.
JF: Our usual interaction with technology is very utilitarian. Is your art an extension of the “technology as a tool” model (e.g., a virtual paintbrush vs. a real one), or is it something different? Is it a prescient view of how we will use technology in the future?
 
YDQ: If you have a spectrum ranging from technology as art to technology as a tool, my works fall all over that spectrum. Sometimes you want to highlight the technology - like I did in Curie - where you want the audience to see the technology and understand its role and capabilities. My team built a wireless motion capture wristband that lit up when gestures were detected, signaling to the audience that the wristbands have a purpose and that the performer is not just waving their hands around. We also created a big theremin-like instrument that lit up when your hands were over it. In commercial performances you have to embed the technology tastefully in the artwork, as a sort of product placement.
 
On the other hand, sometimes you want the technology to be an invisible tool that creates a magical experience. This is particularly popular with interactive immersive installations that respond to participants. In my installation Future Earths for Nokia Bell Labs, we developed a computer vision grid and smart earbuds that collected movement, voice, and gaze information, and we used that data to adapt the artwork to each individual.

Of course, one of the issues with using advanced technology in live performances is that you obscure the cause-and-effect relationship. For example, when you see someone hit a key on a piano and you hear a musical note, there is no doubt in your mind that the pianist played that note. The action-sound relationship is clear. However, with a motion capture wristband attached to a dancer’s arm, it’s not always clear to the audience what sounds that wristband is playing. To convincingly establish that artificial cause-and-effect relationship in the eyes of the viewer, the artist needs to apply principles of synchronicity, superposition, simplicity, repetition, and semantic congruency.
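To make the first of those principles, synchronicity, concrete: a minimal sketch, with hypothetical send_note and set_light callbacks standing in for the audio engine and the wristband LEDs, in which both feedback channels fire from the same detection event so gesture, light, and sound read as one cause-and-effect unit:

    from queue import Queue

    def feedback_loop(detections, send_note, set_light):
        """Co-trigger sound and light for each detected gesture.

        Synchronicity: both channels fire in the same iteration, from the
        same event. If the light lagged the note by more than a few tens
        of milliseconds, the causal illusion would break.
        """
        while True:
            gesture = detections.get()
            if gesture is None:        # sentinel: the performance is over
                break
            send_note(gesture)         # audible feedback
            set_light(gesture)         # visible feedback on the tracked limb

    # Usage sketch with stand-in callbacks:
    q = Queue()
    q.put("raise_arm")
    q.put(None)
    feedback_loop(q,
                  send_note=lambda g: print("note for", g),
                  set_light=lambda g: print("light for", g))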
 
Anyway, that is how art will use technology - sometimes in an exuberant, showy manner, sometimes in a hidden, magical way. Nevertheless, two things are clear: first, technology will enrich immersive experiences like theme parks, escape rooms, immersive theaters, and exhibitions by providing personalized content; second, scientific advances in other fields will spill over into the art world and stimulate innovation. Many of the motion capture technologies I use - Xbox’s Kinect camera, Nintendo’s Wii, and Xsens - were repurposed from their original applications.
"Curie" (2016). Live performance. Photo courtesy of Filipa Rodrigues.
JF: What is Stupefy, and how did it come about?
 
YDQ: Early in my PhD at UT Austin, Marcus Swagger, a fantastic digital fabricator, prompted me to meet this band called ArcAttack, which consists of two brothers, Joe and John DiPrima, who play rock music using Tesla coils. Just to clarify: their music is produced by lightning bolts. I thought that was awesome, so we met up and I hooked my motion tracking system to their Tesla coils and started to play music using gestures. I ended up touring all over the country with them and appearing on Spanish and Portuguese national television. You can watch a video here.
 
Anyway, what that experience taught me was that having a visible lightning bolt hitting my hand when I performed a gesture really helped the audience understand I was controlling an instrument. Since then, I’ve included visual feedback in all my works.

Once I started incorporating big interactive visuals in my shows, I started getting calls from companies to perform at their events. It wasn’t only about music anymore; it was about connecting performers, visuals, and music on stage to create one big show, and companies loved it. They either wanted their tech products to be launched with my shows, or their marketing events to have cutting-edge entertainment, or simply to collaborate with me to accelerate innovation in their products. Eventually, I opened a studio called Stupefy that creates tailored tech-driven shows and installations for events. Clients include Intel, Nokia, Toyota, Peugeot, NBC, Ferrari... We are housed in an amazing creative tech incubator called NEW INC in New York.
 
JF: Technology, in one form or another, has always been utilized in the performing arts (e.g., lighting, sound, etc.). In your work, however, technology is as much a performer as the people on stage. From this perspective, do you view the engineer as another performer or as an enabler of the human performers on stage?
 
YDQ: I suppose that if you are on stage, you are a performer - if not, you are part of the crew. There is a whole DJ-inspired template used by engineers-cum-performers where - at its most basic - you have a performer on stage behind a laptop, a screen behind them with visuals, and music. Artists like Ryoji Ikeda, Toru Izumida, and Boris Chimp 504 come to mind. But when you start getting more serious with theatrics, engineers will very likely huddle around a media server somewhere backstage and monitor the interactive system.
 
I’ve been both a performer on stage, like in AdMortuos, and an enabler backstage, like in 3D[Embodied]. It just depends on the type of show you want to put on.
"Future Earths" (2019). Installation. Photo courtesy of Yago de Quay.
"Future Earths VR" (2019). VR installation. Photo courtesy of Yago de Quay.
JF: Conventional artistic appreciation is physically passive - a very cerebral encounter with an art piece (e.g., looking at a painting). A number of your works require the viewer to interact with the piece or the performer on stage. How does this affect the artistic experience? Do you consider the people who come to these types of performances, or interact with these types of pieces, as viewers or participants? Is this the future of the artistic experience?
 
YDQ: In 2016, I injured my left foot while playing soccer. That meant I couldn’t dance onstage and use my interactive systems for a while. So instead, I started to build installations where the audience would interact with the system. The genesis - of all places - was in nightclubs in Portugal and Norway.
 
I developed a music information retrieval algorithm that analyzed the DJ’s music and built motion tracking platforms where patrons could play harmonically correct melodies by dancing. I quickly realized that nightclubs were a messy place to implement interactive music, so I started creating stand-alone immersive art installations.
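"Harmonically correct" presumably means constraining the patrons’ notes to the key the retrieval algorithm detects in the DJ’s track. Here is a minimal sketch of that quantization step, assuming the key estimation happens upstream; the G major example and the 0-to-1 dancer position are invented for illustration:

    MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets within one octave

    def scale_notes(root_midi, octaves=2):
        """All MIDI notes of a major scale starting at root_midi."""
        return [root_midi + 12 * o + s for o in range(octaves) for s in MAJOR_SCALE]

    def position_to_note(x, notes):
        """Quantize a normalized dancer position (0..1) onto the allowed notes."""
        x = min(max(x, 0.0), 1.0)
        idx = min(int(x * len(notes)), len(notes) - 1)
        return notes[idx]

    # Suppose the MIR stage has estimated that the DJ's track is in G major.
    allowed = scale_notes(root_midi=55)    # G3 and up

    # As a hand sweeps upward, every position lands on an in-key note.
    for x in (0.1, 0.35, 0.6, 0.9):
        print("position %.2f -> MIDI note %d" % (x, position_to_note(x, allowed)))

Whatever the patrons do, every note they trigger stays inside the detected key, which is what makes the result sound intentional rather than random.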
 
I think the key word you used is experience. In an interactive installation, all your senses are enveloped by the artwork, and for a moment you forget where and who you are and instead become a player in this fabricated world. In an installation, it is important to design engagement entry points for a range of participants and observers. My writer for Future Earths, Ross Tipograph, who worked on Punchdrunk’s massive immersive theater hit Sleep No More, recalled that some visitors observed the act from a distance, some engaged with actors, and some spent hours exploring every thingamajig in the room.
 
Regarding the future, according to a 2019 report by the Immersive Design Summit, the U.S. immersive entertainment industry is valued at $50 billion, behind video games but way ahead of the music industry. The immersive industry is growing and it includes theme parks, theater, escape rooms, haunted attractions, exhibitions, museums, VR, AR, experiential marketing... The main issues are high production costs, marketing, and scaling. In the future, ubiquitous displays, sensors, and AI will provide a scalable opportunity for personalized, intimate immersive experiences.
"Breakdown" (2014). Live performance. Photo courtesy of Meg Seidel.
JF: In addition to artistic expression, are there other applications for the intersection of technology and new media in your work?
 
YDQ: One unexpected discovery in my PhD research was that my collaboration with Intel increased their knowledge base, expanded product applications, accelerated development, improved quality, and promoted their products. The hidden world of innovation in tech-driven art can rejig a company’s resources, activities, and organizational structure. To this end, I created a collaborative framework called Live Product Development. It’s a joint product-and-artwork development methodology that generates economic, artistic, and scientific value.
 
I now work a lot with tech companies to create experiences with their products, with the aim of accelerating innovation. And I’m not alone. Surveys of arts graduates indicate that about two-thirds work in non-arts contexts that are, nevertheless, relevant to their artistic training. In this ever-changing world, creativity in product development is at a premium.
 
JF: What can we look forward to in your work? 
 
YDQ: I just wrapped up Future Earths, a climate change simulator installation for Nokia Bell Labs. I’m currently working again with Intel on WeSketch, an augmented reality mobile app for events that enables audience members to become instant superstar artists. 
Yago de Quay. Photo courtesy of the artist.


SciArt Magazine is a publication of
SciArt Initiative, Inc.