Computational Arts

Mirror (in progress)


For our final project, I’m collaborating with Clemence Debaig and Romain Biros to create a series of interactive performances exploring how audience engagement shifts as the degree of participation and interactivity changes. This is still a work in progress; we’re preparing for the performance itself.

We started by looking at different modes of interaction between performers and audiences. We found it hard to find examples of performer-to-audience and audience-to-audience interaction. Audience-to-audience interaction does exist as an established mode, but it’s usually framed as games rather than art. We then decided to create performances that interrogate this gap. Following Helen’s remarks from the last class, we will focus on the relationship between audience and performer.

One of the texts I referred to when developing this concept was “The Space of Electronic Time: The Memory Machines of Jim Campbell” by Marita Sturken. She writes:

“It is the visitor rather than the artist who performs the piece in an installation… An interactive work constructs a complex negotiation with its viewers, both anticipating their potential responses and allowing for their agency in some way.”

Our performance, “Mirror”, is one of the most complex performances I’ve ever been a part of. First, my entanglement with the scene (as prompted by last week’s discussion of material technoscience) means inhabiting many roles at once. I’m helping to develop the concept, shape the execution, and program the technology we’ll use. During the performance, I’ll be a performer, an observer, and a surveyor of audience reactions. Finally, after the performance is over, I’ll join my teammates in theorizing about it and writing a final text.

Spatially, the positions of the performers and audience members will shift, allowing for multilayered physical perspectives. For one performance, I’m developing a MaxMSP program to be displayed on laptops carried by the performers, who will move throughout the space. It feels strange to create something that’s seen by different people at different times. Every participant’s experience will be different.

I hope the performance goes smoothly; there are still a lot of moving parts. As mentioned above, we will depend on the audience to perform the piece. They have to show up. We will try to direct their behavior, but we can’t control it completely. That’s always the caveat when creating any interactive performance. We’ll see!

References

Sturken, Marita. “The Space of Electronic Time: The Memory Machines of Jim Campbell.” Space, Site, Intervention: Situating Installation Art. Ed. Erika Suderburg. Minneapolis: University of Minnesota Press, 2000. 287-296.

How Alexa Figures the Human


To explore the concept of figuration presented in Lucy Suchman’s chapter from “Human-Machine Reconfigurations”, I looked at Amazon’s Alexa voice assistant. Alexa, a machine, figures existing dynamics between genders and races. I returned to the essay “Why Alexa is Racist and Sexist” by Andrew Prescott, which interrogates why digital voice assistants tend to emulate white women. Alexa is an assistant to humans; we order it to do things for our convenience and pleasure.

Alexa lacks agency. It only does what humans instruct it to do. It wholly supports human agency by performing tasks to make its owners’ lives more convenient. Suchman writes, “In the case of the human, the prevailing figuration in Euro-American imaginaries is one of autonomous, rational agency, and projects of artificial intelligence reiterate that culturally specific imaginary.”

In its representation as a white woman, Alexa reinforces existing social hierarchies. Prescott writes, “Alexa is a woman because it suggests subservience. She is white because that is taken to signify intelligence and efficiency.” Suchman writes that “specifically located individuals conceive technologies made in their own image, while figuring the latter as universal.” The tech industry is dominated by white men. Do they feel comfortable ordering a white woman around? In this case, the creators are not conceiving technologies in their own image but adding characteristics (femininity) based on societal gender dynamics.

Furthermore, Alexa is not embodied. It is a voice, both human and machine-like, inside a device. This always foregrounds that it is a machine, one we should not feel bad about ordering around. Prescott also writes of “a continuing machine-like quality for her voice that seems reassuring to human users. Again, it is about distances and what she [sic] feel to be appropriate social relationships. We want to know that Alexa is a machine.” Suchman presents the idea that “embodiment, rather than being coincidental, is a fundamental condition for intelligence.” Using this framework, Alexa can never be an intelligent agent.

Reading Suchman’s description of her encounter with Stelarc’s Prosthetic Head, I realized that some of the slippages she noted were due to older, deficient technology. The Prosthetic Heads of Suchman’s time have morphed into the Alexas of today. Her experience reminded me of my own brief experience unboxing an Amazon Echo for my parents one Christmas. I remember talking to it. Its machine learning had not yet adapted to the contours of our voices, so we repeatedly yelled the same command until it understood. I thought: this is the dystopian device that will take over our lives? The voice interface meant I had to feel around to discover Alexa’s limitations; they weren’t immediately apparent the way they would be in a graphical interface. Its machineness was evident. As AI gets more advanced, will the gap between human and machine narrow? Will we always need to figure robots as subservient machines to reinforce our human agency and superiority?

References

Prescott, Andrew. “Why Alexa is Racist and Sexist.” Artificially Intelligent. Eds. Irini Papadimitriou, Andrew Prescott, and Jon Rogers. 2018. pp. 56-57.

Suchman, Lucy. Human-Machine Reconfigurations: Plans and Situated Actions. Cambridge: Cambridge University Press, 2007.

FatFinger and Human Control of Machines

FatFinger by Daniel Temkin

I chose to read Daniel Temkin’s essay “Entropy and FatFinger: Challenging the Compulsiveness of Code with Programmatic Anti-Styles,” published in the journal Leonardo. Temkin has created two esoteric languages that challenge the logic and order of code. He begins by writing:

“The style of most programming languages is aspirational; they connote orderliness and structure, in the face of heaps of evidence that bugs are endemic to code.”

This statement resonates with my experience as a programmer. Each language comes with its own syntax that I have to squeeze my intentions into. I think about learning OpenFrameworks in C++ now, after having worked in JavaScript, which is higher level and further from the machine. I get frustrated with having to tell the program things I assume it should already know. Still, I appreciate that this should, at least in theory, eventually give me more control over the computer.

After introducing Entropy and FatFinger, Temkin writes:

“Both these projects speak to the actual experience of coding, which is fraught with error. Entropy makes error inevitable, while FatFinger tolerates a sloppiness of text (and of the thought behind it) that ordinarily would never pass muster with the interpreter. They work against the compulsiveness of programming. They encourage a style that is more accepting of the inevitable presence of error and of the limited capacity of the programmer to control the machine.”

FatFinger is JavaScript, but with tolerance for typos: “dokkkkkkkkkkkkkkkkkkumint,” for example, can replace “document” and still be evaluated correctly. The results can be hilarious to read, and learning how the interpreter resolves mistyped code interested me. But I wonder if it goes far enough. FatFinger still enforces syntax, and on a higher level, programmers are still structuring their actions around the language.
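
Out of curiosity about the mechanics, here is a minimal sketch of how that kind of typo resolution might work, mapping a misspelled name onto the nearest known identifier by Levenshtein edit distance. This is my own illustration in TypeScript, not FatFinger’s actual matching algorithm, which Temkin built with its own heuristics:

```typescript
// Toy typo-tolerant name resolution: map a misspelled identifier onto
// the closest known name by Levenshtein edit distance. (FatFinger's
// real matching rules differ; this only illustrates the general idea.)

function editDistance(a: string, b: string): number {
  // dp[i][j] = edits needed to turn a[0..i) into b[0..j)
  const dp: number[][] = Array.from({ length: a.length + 1 }, () =>
    new Array<number>(b.length + 1).fill(0)
  );
  for (let i = 0; i <= a.length; i++) dp[i][0] = i;
  for (let j = 0; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                   // deletion
        dp[i][j - 1] + 1,                                   // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Resolve a typo to whichever known identifier is "closest".
function resolveIdentifier(typo: string, known: string[]): string {
  return known.reduce((best, name) =>
    editDistance(typo, name) < editDistance(typo, best) ? name : best
  );
}

// "dokkumint" lands on "document" rather than "window" or "console".
console.log(resolveIdentifier("dokkumint", ["document", "window", "console"]));
```

Even this toy version exposes the limits I mention above: the resolver still needs a fixed vocabulary and well-formed syntax around the name; only the spelling is forgiven.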

Are we just putting more distance between the human and the machine, by fitting the machine to our sloppy standards? Are there ways in which FatFinger can extend beyond typos to syntax and structure – as Temkin writes, a more “gestural” way of instructing a machine? I think of Rebecca Fiebrink’s Wekinator, where users can simply wave their arm and teach the computer via machine learning. Would programmers actually lose control over the machine in this process?

Ultimately, the dance between humans and computers is complex. I appreciate Temkin’s esoteric languages, and wonder how we could use those ideas to push further.

References

Temkin, Daniel. “Entropy and FatFinger: Challenging the Compulsiveness of Code with Programmatic Anti-Styles.” Leonardo, vol. 51, no. 4, Aug. 2018. https://www.mitpressjournals.org/doi/abs/10.1162/leon_a_01651.

The Algorithm as Meme

“The algorithm” has become a meme at this point. At the Frieze Art Fair, I saw a painting by Jim Shaw depicting a crowd surrounding a yowling pig with “Social Media Newsfeed Algorythms” written on its back.


The whole exhibition was framed in opposition to Trump, a president known for his (mis)use of social media. Still, seeing the painting the same week that I read Tarleton Gillespie’s essay “Algorithm [draft] [#digitalkeywords],” I thought about how the concept of the algorithm has pervaded even the arts. It’s been reduced to a singular entity. Gillespie writes:

“Algorithm” may in fact serve as an abbreviation for the sociotechnical assemblage that includes algorithm, model, target goal, data, training data, application, hardware — and connect it all to a broader social endeavor.

In the arts, where people comment on social issues but may not necessarily have a technical background, the algorithm is a synecdoche. It’s an authority, like Trump, but also an object to be stared at, mocked, and scribbled out. It is Facebook, meaning the actors behind Facebook, but also a technological process out of our direct control.

I researched the Facebook newsfeed algorithm for this assignment. I came across articles from Hootsuite geared toward helping marketers spread their content. The algorithm is constantly changing, which keeps marketers on their toes as they respond with new strategies. In the most recent change, Mark Zuckerberg announced that Facebook would prioritize content from “friends, family and groups.” Hootsuite writes:

The new algorithm prioritizes active interactions like commenting and sharing over likes and click-throughs (passive interactions)—the idea being that actions requiring more effort on the part of the user are of higher quality and thus more meaningful.

It struck me that what Facebook says about its algorithm is itself a form of marketing, making the company seem like it rewards “meaningful” interactions for the good of its users. The algorithm’s exact process is always hidden, leaving people to guess at it. People’s livelihoods depend on it, whether a company is promoting a brand or a person is simply trying to maintain social connections. Hence, the algorithm becomes something powerful yet unknowable.
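
To make the guessing concrete: read literally, “active over passive” suggests something like a weighted sum. Here is a purely hypothetical toy model; the signal names, weights, and linear form are all my assumptions, since Facebook’s actual ranking system is proprietary and far more complex:

```typescript
// Purely hypothetical model of "active vs. passive" interaction
// weighting. The signals, weights, and linear scoring are assumptions;
// Facebook's real newsfeed ranking is unpublished.

interface PostSignals {
  comments: number; // active
  shares: number;   // active
  likes: number;    // passive
  clicks: number;   // passive
}

function engagementScore(s: PostSignals): number {
  // Active interactions count for more than passive ones.
  return 4 * s.comments + 5 * s.shares + 1 * s.likes + 0.5 * s.clicks;
}

// A post with a few comments and shares outranks one with far more likes:
console.log(engagementScore({ comments: 10, shares: 2, likes: 5, clicks: 20 })); // 65
console.log(engagementScore({ comments: 0, shares: 0, likes: 40, clicks: 20 })); // 50
```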

In a way, we are all conducting software studies experiments on our Facebook newsfeeds. Whether consciously or not, every action we perform (a like, a comment, even a look) influences what the algorithm shows us.

One actual experiment I “liked” was Mat Honan’s article for Wired, in which he liked everything he saw on Facebook for two days. He reported that by the end, his “feed was almost completely devoid of human content” and politically polarized, with extremes on both the left and right.

I would extend this experiment by reporting on how interacting with the algorithm felt emotionally. Personally, I would be mortified to publicly “like” something I disagree with, or something sensitive like bad news from a friend. I would be interested in studying how people feel when interacting with the algorithm in routine ways: do they feel empowered? Disempowered? Do they know how the algorithm works? Gillespie’s article was all theory, though some of it rang true to me. From what I’ve seen, software studies work on algorithms doesn’t dig into how they make users feel. I would like to explore the intersection of algorithms and human emotion further.

References

Gillespie, Tarleton. “Algorithm [draft] [#digitalkeywords].” Culture Digitally, 25 Jun. 2014. http://culturedigitally.org/2014/06/algorithm-draft-digitalkeyword/.

Honan, Mat. “I Liked Everything I Saw on Facebook for Two Days. Here’s What It Did to Me.” Wired, 11 Aug. 2014. https://www.wired.com/2014/08/i-liked-everything-i-saw-on-facebook-for-two-days-heres-what-it-did-to-me/.

Tien, Shannon. “How the Facebook Algorithm Works and How to Make it Work for You.” Hootsuite, 25 Apr. 2018. https://blog.hootsuite.com/facebook-algorithm/.

A fish can’t judge the water

Blue Feed, Red Feed by The Wall Street Journal

Reading Femke Snelting’s essay “A fish can’t judge the water,” I thought about how social media has reshaped our communication. Each platform has an algorithm that orders and highlights posts. It is at once knowable and opaque, which ties into Tarleton Gillespie’s “Algorithm [draft] [#digitalkeywords].”

There’s been a lot of criticism lately about how social media creates filter bubbles. We choose whom to follow, which is an essential function of social media networks, and then we only see information from those whose views we find palatable. In addition, a combination of human decisions and software algorithms selects which ads to display to us, based on whom we’ve chosen to follow and other information we’ve given.

Snelting writes that software “is shaped through and locked into economic models of production and distribution.” Social media splits society along political lines by showing people only what they want to see and shutting out contrasting opinions. There have been many revelations, and much speculation, about how Facebook influenced the 2016 American election by presenting political propaganda to people who would be receptive to it.

Recently, I was surprised to see an article about “two internets” around the Brett Kavanaugh hearings for the US Supreme Court. Everything I have seen on the internet, from my liberal friends and news sources, denounces him as a sexual predator. Apparently there is a whole separate ecosystem supporting him. Although I believe that this issue is black and white, it has become harder for people to empathize with each other on more nuanced issues. The functions of social media algorithms continue to push us further apart.

References

Snelting, Femke. “A fish can’t judge the water.” Constant Verlag, 2006. http://www.constantvzw.org/verlag/spip.php?page=article&id_article=72.