
Artistic footprints and Artificial Intelligence

This writing is part of the panel «Does volume equal power?» presented at the MUTEK Montreal 2021 festival.
In collaboration with artists Analucía Roeder and Gabrielle Harnois-Blouin, in the framework of the AMPLIFY D.A.I Collaborations Fund, with the support of the British Council, the Canada Council for the Arts and the Conseil des arts du Canada.

In the audio world, artificial intelligence (AI, from now on) is having a significant impact, mainly on workflows. Assisted mastering, assisted mixing and assisted composition are among the most developed areas in this regard.

As artists who work with sound, we know that any decision on these processes has a crucial impact on our productions.

However, most of the time these AI-assisted processes go unnoticed.

When we talk about art and AI, we refer to how we artists develop works based on data, machine learning and AI.

However, we rarely talk about how AI uses the data generated by our works, the data about us as artists, about the audience that consumes our works, the data generated by publishing or advertising our works on certain platforms, the data generated by the festivals in which we participate.


As artists working with technology, we are data producers. As an audience, we are data producers. As organizers, we are data producers.


Digital Footprints


What are the digital footprints we leave as creators and users in the virtual world?

What impact will this data have on the construction of immersive experiences in the near future?

AI is shaping our experience of art in an invisible way.

The AI that is closest to our everyday life is narrow or weak AI: the kind used by voice assistants, text translators and financial systems.

The reason we call it «weak AI» is that these machines are far from having human-like intelligence. They lack self-awareness. In other words, they can’t think for themselves.

However, what this first form of AI is doing is gathering and classifying an immense amount of data, which enables it to distinguish between what a human being says and what they actually want.

AI can understand our needs, tastes and wishes. It can determine what we want to buy, watch, listen to and read, or in any case, it “suggests” some of these actions to us.

Do Androids Dream of Electric Sheep? by Philip K. Dick

Androids may not dream of sheep yet, but AI is already doing something more significant and much more impactful:


Unperceivable decisions


How do these unperceivable decisions affect our works of art, our publications, our festivals?

At this very moment, artificial intelligence is deciding how you listen to this stream.
One algorithm is analyzing my voice and filtering out all the frequencies that are considered background noise, irrelevant sound.

Another algorithm is analyzing the intensity of my voice and normalizing it so that it is heard evenly.

Other algorithms are analyzing the Internet bandwidth of each person connected to this forum, as well as the browsers we are using, among other factors that help achieve the best streaming performance.
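As a rough illustration (not the actual algorithms used by any streaming platform), the two voice-processing steps described above, gating out low-level background noise and normalizing loudness, can be sketched in a few lines of Python. The thresholds and levels here are made-up values:

```python
import numpy as np

def noise_gate(signal, threshold=0.02, frame_size=512):
    """Silence frames whose RMS level falls below a threshold,
    treating them as background noise. The algorithm decides
    which sounds are 'irrelevant' for the listener."""
    out = signal.copy()
    for start in range(0, len(signal), frame_size):
        frame = signal[start:start + frame_size]
        rms = np.sqrt(np.mean(frame ** 2))
        if rms < threshold:
            out[start:start + frame_size] = 0.0
    return out

def normalize_rms(signal, target_rms=0.1):
    """Scale the signal so its overall RMS matches a target level,
    so the voice is heard at an even loudness."""
    rms = np.sqrt(np.mean(signal ** 2))
    if rms == 0:
        return signal
    return signal * (target_rms / rms)

# One second of a quiet synthetic 'voice' tone (toy data).
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
voice = 0.05 * np.sin(2 * np.pi * 220 * t)
processed = normalize_rms(noise_gate(voice), target_rms=0.1)
```

The point of the sketch is that both decisions, what counts as noise and how loud the voice should be, are fixed by parameters the listener never sees or chooses.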

All these algorithms work together to achieve the best user experience. That’s one of their main goals.
It works. Indeed, it does work in this context.

But what happens in the context of art and immersive experiences, where several other factors related to perception and subjectivity come into play?


How do we choose if artificial intelligence gets to decide for us without even asking us first?

What if some of you wanted to listen to the background noise around me because you consider those sounds to be relevant information? The algorithms that analyze my voice have already decided for you.

Immersed as we are in this virtual sea, the next immersive experiences will depend more and more on AI, on algorithms that analyze who we are, what we like and how we consume each experience.

To make AI respond as “we humans” do, a large amount of information and explicit instructions are needed.

The more comprehensive the selection of data, the more possibilities AI will be able to analyze, and the broader its view of humans and our perceptions will become.


We need more data inclusion


When we talk about the “we humans” represented by AI, that “we” is not you or me.

That “we humans” is represented by the people who work in AI. A closed, homogeneous circle of people.

  • Most of them are highly educated.
  • They mostly live in the so-called first world countries.
  • They are mainly male, white and heteronormative.

They are the ones who program the systems and select most of the data for AI analysis, and that is how AI comes to reflect the same values, ideals and worldviews as its creators.

In its 2021 Diversity Annual Report, Google published figures for its employees in technology roles.

Of these employees:

  • 75.4% are men
  • 24.6% are women
  • fewer than 7% self-identify as non-binary or another gender.

Less than 20% of those of us participating in this forum are represented in those numbers.

Talking about diversity is not the same as making room for diversity within databases, within algorithms and frameworks that make up the AI ecosystem.


Different perspectives


Different perspectives on artificial intelligence within art can be identified:

  • AI as co-author of data-driven works.
  • AI as an assistant that analyzes and transforms works to achieve a better user experience.
  • AI as a producer of databases where artists and users are represented in all their complexities and particularities.

In this scenario, the following questions arise:

  • What are the digital footprints we leave as creators and users every time we create or stand before an artistic work or performance?
  • What will be the impact of this data on the construction of immersive experiences in the near future?
  • What is the impact we as artists and users have on the learning of AI?

The answers to these questions are built through a collective construction: an analysis of how we produce our works, how we label and describe them, and how we think of ourselves as artists in light of the development of AI and its impact on our works of art.

The challenge we face is to draw upon the arts and new technologies in order to shape a community that has an impact on machine learning.

This is where digital footprints, the data we generate about ourselves, our works and our audience, become truly significant.

This is what will eventually help us to develop more complete immersive experiences that invite us to perceive the world in all its particularities.

Author: Sol Rezza
Translation: Patricia Labastié
Presented on 26.8.2021 at MUTEK Festival 2021.
