
Audio Reactive Stable Diffusion in Realtime

• Written on October 20, 2024

I used Olegchomp’s TouchDiffusion component in TouchDesigner to create a real-time audio-visual experience. This component allowed me to blend diffusion models with live audio input, generating captivating visuals that react in sync with the sound. The result is a dynamic, immersive visual experience that showcases the power of combining generative art with real-time sound analysis.

The component performs img2img generation with StreamDiffusion, creating real-time visuals driven by an audio-reactive mask I developed. I’ve achieved impressive results with both SD-turbo and realisticVisionV60B1 + LCM.
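
To illustrate the idea of an audio-reactive mask (this is not the TouchDiffusion internals, which live in TouchDesigner operators), here is a minimal Python sketch: it reads live audio with the sounddevice library and turns the signal’s loudness into a grayscale circle that pulses with the music. The `audio_to_mask` helper and its parameters are hypothetical names chosen for illustration.

```python
import numpy as np
import sounddevice as sd

def audio_to_mask(block, size=512, gain=4.0):
    """Map an audio block's RMS loudness to a soft circular grayscale mask."""
    rms = np.sqrt(np.mean(block ** 2))
    # Louder audio -> larger circle; clip to keep the radius in a sane range.
    radius = np.clip(rms * gain, 0.05, 1.0) * (size / 2)
    y, x = np.ogrid[:size, :size]
    center = size / 2
    dist = np.sqrt((x - center) ** 2 + (y - center) ** 2)
    # 1.0 at the center, fading to 0.0 at the circle's edge.
    mask = np.clip(1.0 - dist / radius, 0.0, 1.0)
    return (mask * 255).astype(np.uint8)

def callback(indata, frames, time, status):
    mask = audio_to_mask(indata[:, 0])
    # Hand the mask to the img2img pipeline here
    # (in TouchDesigner this would feed a TOP used by TouchDiffusion).

# Listen to the default input device for 5 seconds.
with sd.InputStream(channels=1, samplerate=44100, callback=callback):
    sd.sleep(5000)
```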


This technology is truly exciting, with endless possibilities. The ability to update prompts in real-time and have visuals rendered in response to audio elevates stage performance to an entirely new level. I can’t wait to showcase this in my upcoming performances!
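
As a rough sketch of that render loop (using the plain diffusers library rather than TouchDiffusion, with hypothetical prompts), the snippet below feeds each rendered frame back in as the next img2img input, while the prompt can be swapped between iterations:

```python
import torch
from PIL import Image
from diffusers import AutoPipelineForImage2Image

# SD-turbo, one of the models mentioned above.
pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16
).to("cuda")

prompts = ["crystal cave", "solar flare"]  # swapped live in a real performance
frame = Image.new("RGB", (512, 512), "gray")

for i in range(4):
    frame = pipe(
        prompt=prompts[i % len(prompts)],
        image=frame,            # feed the previous frame back in
        strength=0.5,
        guidance_scale=0.0,     # sd-turbo is trained without CFG
        num_inference_steps=2,  # steps * strength must be >= 1 for turbo
    ).images[0]
```

In a live setup, the prompt list would be replaced by whatever text the performer types or triggers mid-show.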