I’m Dreaming of Diffusion: Installation, Metalworking, Archives

Project Overview

   I’m Dreaming of Diffusion is an archiving project that took the form of an installation piece displayed at the Parsons AI and Design Exhibition (AIADE), held in partnership with LG. It likens the visuals created by AI to dreams and attempts to strengthen the connections between the two. Through a series of interviews, participants’ dreams were recorded, and the descriptions were run through a Stable Diffusion model. After about a month, the participants were re-interviewed and shown their dream renderings. They were asked to recall what originally happened in the dream, and to reflect on the ways the AI was accurate or inaccurate to it. The interviews and dream renderings were then displayed in a steel tube lined with fractured mirrors.


Concept

   In 2020, I started experimenting with generative AI using the Python library PyTorch, but at the time I wasn’t able to get the high-resolution images I was hoping for. When the AI boom hit in 2023, I was interested in experimenting with some of the new tools, and started to play around with a Google Colab version of Stable Diffusion called Deforum. Deforum was able to produce high-quality video in a way that I hadn’t managed before.

    At the time I was also having regular conversations about people’s dreams, and decided to join the two ideas. I was primarily focused on getting folks to talk about and conceptualize AI in a slightly different way. By centering the piece on the similarity between AI processing and the human subconscious, I hoped to bridge a relational gap for people who were apprehensive about the ways generative AI was going to be used.

Interviews

   The interview process took place in two steps, with the AI processing happening between them. In the first interview, participants described a recent dream in as much detail as possible. They were specifically asked to describe the environment around them and to use as many descriptors as possible, to give the AI enough information to work with.

    The second interview took place after two to five AI-generated videos of the dream had been made, around a month after the first, in the hope that participants would no longer remember the dream as well as they had during the first round. Participants were shown each video and asked to recall the dream and reflect on which components the AI was able to accurately replicate. The conversation then widened to the participant’s relationship with dreaming as a whole and their perception of generative AI.

Stable Diffusion

   The AI generation process took place in a Google Colab version of Stable Diffusion. Each dream was generated at 1024x1024px with 50 steps per frame. Five prompts were input, 20 frames apart, for a total of 100 frames per video. The prompts were based on the dreams from the first interviews, with some of the language slightly adjusted to work better as AI prompts.


Some prompts include:

    "00": "An overgrown swimming pool in the forest with blue and white tiles"
    "20": "Creepers growing into pool from ancient trees"
    "40": "A pool with a bull inside filled with black liquid"
    "60": "bull refuses to eat beef"
    "80": "Wrestling a bull in a pool filled with black water"
One dream generation from the prompts 
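
For readers curious about the mechanics, the schedule above maps roughly onto Deforum’s animation-prompt format, where a dictionary keys each prompt to the frame number at which it takes effect. This is a minimal sketch assuming the settings described above; the exact field names vary between Deforum versions, so treat them as illustrative rather than exact.

```python
# Sketch of a Deforum-style prompt schedule: keys are frame numbers (as strings),
# values are the prompt active from that frame onward.
animation_prompts = {
    "00": "An overgrown swimming pool in the forest with blue and white tiles",
    "20": "Creepers growing into pool from ancient trees",
    "40": "A pool with a bull inside filled with black liquid",
    "60": "bull refuses to eat beef",
    "80": "Wrestling a bull in a pool filled with black water",
}

# Generation settings as described above (names are illustrative, not exact Deforum fields).
settings = {
    "W": 1024,          # output width in pixels
    "H": 1024,          # output height in pixels
    "steps": 50,        # diffusion steps per frame
    "max_frames": 100,  # 5 prompts, 20 frames apart
}
```

With five prompts spaced 20 frames apart, each prompt governs one fifth of the 100-frame video, and the diffusion process interpolates between neighboring prompts as the animation advances.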



Fabrication

    This project was my first foray into metalwork, so I wanted to keep it simple. I made a tube out of two 2x4 ft cold-rolled steel sheets and joined the sides of the tube with pop rivets. I then lined the inside of the piece with a collection of broken mirrors, which reflect the small monitor screen at the bottom of the tube.

Exhibition

    The piece was exhibited from December 2023 through January 2024 in the Arnold & Sheila Aronson Galleries with support from LG. An article about the exhibition can be found here.