Sphinx Gate

Reading Time: 2 minutes

Role: Designer, Developer
Context: Interactive art installation
Location: UnScruz 2025, Burning Man 2025
Audience: Festivalgoers

Interactive art installation at UnScruz 2025 and Burning Man 2025

The idea:

The objective of the piece was to help participants connect their personal experiences to the universal patterns and archetypes that shape us all. Six participants at a time conversed with an AI through a series of questions that uncovered the topics and sentiments most on their minds. They then lay down on a 12-foot-diameter spinning turntable. Above them, projected onto the ceiling over the turntable, a video of preprogrammed elements and custom clips, mapped to an analysis of their conversations, walked them through a narrative of their personal transformation.

The Space:

The piece was situated among many other large-scale art installations, first at UnScruz, a 5,000-participant regional burn, and then at the 80,000-attendee Burning Man festival. The constraints were tight: the budget was meager, and most resources were donated. On site we ran three machines to handle user interaction, video streaming, and video projection. For AI compute we connected to the internet via Starlink. Power came from a 4,000 W generator, and the DLP projectors were driven by a Resolume instance running on one of the local machines.

The Execution:

The project went through many evolutions over its four-month development period. Initially, the development team wanted to render video in real time on local machines in response to participant input. I was responsible for specifying the development and production platform: Python, Docker, and Runpod. Collaborating with a technical team of three, I helped map the taxonomies for incorporating six different AI services for emotion detection, voice recognition, and voice synthesis. I also designed and produced a catalog of 100 AI-generated video clips mapped to participants' emotional states and displayed during the real-time experience.

After numerous prototypes, this approach was deprecated in favor of pre-rendering all the generated video and running the interaction compute on cloud servers accessed over Starlink. Because the team had started with foundational taxonomies for emotion, transformation, and the UX, it was easy for me to pre-render aesthetic videos for the entire catalog of potential transformations. Ultimately, the user experience was delivered by a mix of locally run systems, remote servers, and API calls to AI services.
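The shift from real-time generation to a pre-rendered catalog comes down to a lookup: once the emotion and transformation taxonomies are fixed, every combination can be rendered ahead of time and selected at show time. The sketch below illustrates that idea in Python; the taxonomy values, filenames, and fallback behavior are hypothetical, not the installation's actual code.

```python
from dataclasses import dataclass

# Hypothetical taxonomies for illustration only -- the real installation's
# emotion and transformation categories are not reproduced here.
EMOTIONS = ["joy", "grief", "fear", "wonder"]
STAGES = ["departure", "threshold", "return"]


@dataclass(frozen=True)
class Clip:
    filename: str
    emotion: str
    stage: str


# A catalog of pre-rendered clips, one per (emotion, stage) combination.
CATALOG = {
    (e, s): Clip(f"{e}_{s}.mp4", e, s)
    for e in EMOTIONS
    for s in STAGES
}


def select_clip(emotion: str, stage: str, fallback: str = "wonder") -> Clip:
    """Pick the pre-rendered clip for a detected emotion and narrative stage,
    falling back to a neutral emotion when detection is ambiguous."""
    return CATALOG.get((emotion, stage), CATALOG[(fallback, stage)])


# Assemble a participant's narrative arc from their detected emotion.
playlist = [select_clip("grief", stage) for stage in STAGES]
print([clip.filename for clip in playlist])
```

Because the whole catalog is enumerable up front, the playout machine only needs a dictionary lookup per stage, which is why the cloud round trip over Starlink could be limited to the conversation analysis itself.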

The Outcome:

The piece attracted hundreds of participants at UnScruz and thousands at Burning Man. Anecdotally, the response was very positive. I had guessed that the most awkward part of the experience would be getting participants to speak to the AI. This turned out not to be so: a portion of participants spent much longer than the two to three minutes we had scheduled for the conversations, and some, when we let them, continued talking with their AI interlocutors for 20 minutes or more. On a future installation like this, I would make the experience more abstract; it would engage a wider array of audiences, and the whole experience could run on local compute resources.