Agnosis: Lost Memories
Agnosis: Lost Memories, created by Colombian artist Fito Segrera, was presented and live-streamed in Mozilla Hubs social VR at the RealMix2020 Festival in Bogotá on December 1st, 2020. A computer interface constructed by the artist captured moments of mental absence, and an AI programmed by Segrera recreated and displayed these moments in a performance piece. The music was created by New York-based spatial sound composer Matthew Gantt. The RealMix2020 Festival was co-curated by Hyphen-Hub.
Transcription of Fito Segrera’s performance Agnosis:
Hi, my name is Fito Segrera. I'm an artist who has been working with technology for over a decade now, doing all sorts of different experiments with art and technology, research, creation, and all sorts of interesting things. This space was created specifically for the RealMix2020 Festival, and I want to speak a bit about what the piece is about.

This space is basically the result of a work I did a couple of years ago in Shanghai, where I built a brain-computer interface with an embedded camera and an array of sensors: an accelerometer, a gyroscope, a GPS, and so on. I was wearing this device every day, all the time, for a couple of weeks. The device was programmed to capture moments of absence of attention, moments where my attention was dropping, and it would record these moments by taking a photograph of whatever was in front of my eyes. The brain data and the sensor data were all packed together and sent to a machine that was learning from this data, and through an AI it tried to reconstruct and interpret these lost memories and give them back to me.

So this space you see here is a sort of attempt to recreate or manifest those lost memories as interpreted by the machine. The sculptures you see around you were all generated by an AI using my brain and sensor data. The same is true of the data you see all around you on the ceiling, walls, and ground, which was organized by a system trying to make sense of it. There is also a layer of sound, turned off right now because of the performance, where the machine narrates and voices some of the memories it recreated as poetry, as a way of returning those lost memories to me.

So that is what the piece is about. The performance is basically an invitation: Asher, who co-curated the festival, brought in Matthew Gantt, who is an amazing performer, musician, and artist dedicated to virtual spaces and generative music, and today we will be collaborating. What is going to happen now is that I will be live-streaming some brain data to Matthew over the internet. He is in New York, I am in Colombia, and in real time he will receive that data and transform it into sound, and on top of that he will be playing some of his own music. We will also have some graphics and images in the back, which are the result of some live programming that I am doing along with my brain data.
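The loop Segrera describes — watch for a drop in attention, photograph whatever is in front of the eyes, bundle the image with the brain and sensor data, and stream the packet to a machine that learns from it — might be sketched roughly as below. This is only a minimal illustration of that capture-and-stream idea, not the artist's actual code; the threshold, the receiver address, and the stand-in sensor functions are assumptions.

```python
import json
import socket
import time
from random import random

# All values and helpers below are illustrative stand-ins; the real device,
# thresholds, data format, and receiving machine are not documented.
ATTENTION_THRESHOLD = 0.35          # below this, treat the moment as "absence"
RECEIVER = ("127.0.0.1", 9000)      # e.g. the machine doing the reconstruction

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def read_attention() -> float:
    """Stand-in for an EEG-derived attention estimate in [0, 1]."""
    return random()

def read_sensors() -> dict:
    """Stand-in for accelerometer / gyroscope / GPS readings."""
    return {"accel": [0.0, 0.0, 9.8], "gyro": [0.0, 0.0, 0.0], "gps": None}

def capture_photo() -> str:
    """Stand-in for grabbing a frame from the head-mounted camera."""
    return "frame_%d.jpg" % int(time.time())

while True:
    attention = read_attention()
    if attention < ATTENTION_THRESHOLD:
        # A lapse of attention: bundle the photo with the brain and sensor
        # data, as the piece describes, and stream the packet onward.
        packet = {
            "t": time.time(),
            "attention": attention,
            "photo": capture_photo(),
            "sensors": read_sensors(),
        }
        sock.sendto(json.dumps(packet).encode("utf-8"), RECEIVER)
    time.sleep(0.5)
```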
Matthew Gantt: I just want to say thank you to Asher, thank you to Fito, thank you to Hyphen-Hub and RealMix for making this happen. As they said, my own background is in experimental music, spatial sound, experimental composition, and virtual reality. It's funny, because this particular collaboration came around rather quickly, when I found out about this notion of procedural composition that Fito is doing. Initially, given the time frame, I was very skeptical, until I saw it firsthand and it felt very hand in glove with my own work, where I do a lot of procedural composition and data-driven work, exploring different sounds, the materiality of sound, and the different processes of creating sound and organizing it.

So tonight, as Fito mentioned, you will be seeing the data that has gone into the space itself manifested in multiple mediums: you will see it in a visual capacity, and I will also be sonifying it in real time, in addition to some of my own compositions. The way I've been thinking of this is almost like a concerto, an orchestra of electronic sound and brain data that I will be manipulating in real time. Other than that, a quick procedural note: the sound in this space is spatial, just like you might expect in real life, so as you get closer to us you might hear us louder, and if you move further away it might get quieter. We encourage you all to get as close as you would like, make yourself at home, and thank you for coming out.
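Two of the technical points Gantt mentions — mapping incoming data values onto sound parameters in real time, and a source's volume falling off with distance in the virtual room — can be illustrated with simple mappings like the ones below. The frequency range, the roll-off constants, and the inverse-distance model are illustrative assumptions, not the settings used in the performance or by Mozilla Hubs.

```python
import math

def sonify(value, lo=0.0, hi=1.0, f_min=110.0, f_max=880.0):
    """Map a normalized data value (e.g. an attention estimate) onto a
    frequency in Hz and an amplitude; the ranges are illustrative only."""
    x = min(max((value - lo) / (hi - lo), 0.0), 1.0)
    freq = f_min * (f_max / f_min) ** x   # exponential sweep from f_min to f_max
    amp = 0.2 + 0.8 * x                   # quiet at the low end, full at the top
    return freq, amp

def spatial_gain(listener_pos, source_pos, ref_distance=1.0, rolloff=1.0):
    """Inverse-distance attenuation: full volume inside ref_distance,
    then a gradual falloff as the listener moves away from the source."""
    d = math.dist(listener_pos, source_pos)
    return ref_distance / (ref_distance + rolloff * max(d - ref_distance, 0.0))
```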
Link to video: https://www.youtube.com/watch?v=8JRFbCeHBtQ