MIST
Sounds / Interactive / Generative / Computational Installation
2025
INTRO
Mist is an interactive sound installation that invites audiences to explore the relationship between body, sound, and space. The installation features five laser beams suspended at angles, paired with sensors that detect physical interaction. When a beam is interrupted, it triggers changes in sound, transforming simple gestures into dynamic, evolving soundscapes.

At the core of Mist is the idea of randomness and participation, where each interaction creates a unique sonic response, blending sound synthesis with audience movement. The projected visuals, created from 3D-scanned Buddhist sculptures from the Yungang Grottoes, add a temporal and cultural layer, merging digital heritage with contemporary media art practices.

By combining real-time sound synthesis, generative video content, and physical interaction, Mist transforms the exhibition space into an ever-changing audiovisual experience. The installation reflects on cross-cultural narratives, sensory perception, and digital embodiment, offering an immersive environment where sound and light become extensions of the human body.

In my vision, Mist becomes an interactive generative music synthesizer, where the audience can co-create unique real-time soundscapes through simple body movements. Mist aims to break the boundary between artistic creation and the audience, using generative music and interactive technology to build a collaborative future sound experiment. It envisions a future society where humans and technology create together in new and innovative ways.
Mist is an interactive sound installation that integrates laser beams, sensors, real-time sound synthesis, and video projection to create an immersive audiovisual environment. The installation responds to audience interaction: physical disruptions of the laser beams trigger dynamic sound transformations and visual changes.

The system combines sensor data processing with real-time audiovisual generation. An Arduino Nano handles sensor input, while Pure Data performs the sound synthesis. TouchDesigner provides the video projection content, blending 3D-scanned models from the Yungang Grottoes with motion-capture animations. The custom sensor mounts and structural supports were designed in SolidWorks and fabricated through 3D printing to ensure precise alignment and stability for the laser modules and sensors.
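As a rough sketch of how the sensing side of such a system can be wired up (not the installation's actual firmware), the Arduino Nano could poll the five photoelectric sensors and report every beam interruption as a short serial message that Pure Data and TouchDesigner then react to. The pin numbers, the sensor polarity, and the "beam <index> <state>" message format below are illustrative assumptions.

// Hypothetical Arduino Nano sketch: report laser-beam interruptions over serial.
// Pin assignments, sensor polarity, and the message format are assumptions made
// for illustration; they are not taken from the installation's firmware.

const int NUM_BEAMS = 5;
const int sensorPins[NUM_BEAMS] = {2, 3, 4, 5, 6};  // assumed digital input pins
bool beamBroken[NUM_BEAMS];

void setup() {
  Serial.begin(115200);
  for (int i = 0; i < NUM_BEAMS; i++) {
    pinMode(sensorPins[i], INPUT_PULLUP);  // assumes an open-collector sensor output
    beamBroken[i] = false;
  }
}

void loop() {
  for (int i = 0; i < NUM_BEAMS; i++) {
    // Assumed wiring: the line reads HIGH while the beam is interrupted.
    bool broken = (digitalRead(sensorPins[i]) == HIGH);
    if (broken != beamBroken[i]) {
      beamBroken[i] = broken;
      // One line per state change, e.g. "beam 2 1" = beam 2 just interrupted.
      Serial.print("beam ");
      Serial.print(i);
      Serial.print(' ');
      Serial.println(broken ? 1 : 0);
    }
  }
  delay(5);  // simple rate limiting / crude debounce
}

On the computer, messages in this form could be read in Pure Data through a serial object such as comport and routed to the corresponding sound layer, while TouchDesigner listens to the same stream to drive the projection.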
Technical Components
Software:
- Pure Data (for sound processing and generation)
- TouchDesigner (for visual processing and interaction)
- Arduino IDE (for microcontroller programming and sensor integration)
- SolidWorks (for designing custom mounts and structural components)
Hardware:
- 5 Laser modules (for beam projection and interaction)
- Photoelectric sensors (for detecting beam interruptions and triggering responses)
- Custom sensor mounts (designed in SolidWorks, 3D-printed for precise alignment and stability)
- Arduino Nano (for sensor data acquisition and real-time processing)
- Speakers (for spatialized sound output based on interactions)
- Projector (for dynamic video display and immersive visuals)
Interaction:
Each laser beam represents a distinct sound layer. When a beam is interrupted, it triggers variations in pitch, timbre, and rhythm, resulting in constantly evolving soundscapes. The accompanying video projection adds a narrative layer to the immersive experience.
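The exact pitch, timbre, and rhythm mapping lives in the Pure Data patch; as a purely hypothetical sketch of the idea, the fragment below gives each beam its own register and, on every interruption, draws a random note from a pentatonic scale and a random rhythmic subdivision for that layer. The scale, registers, and message format are assumptions for illustration only.

// Hypothetical generative mapping: each beam owns one sound layer, and every
// interruption yields a new pitch and rhythm variation for that layer.
// Scale, registers, and the "layer <i> <note> <divisions>" format are assumed.

const int NUM_BEAMS = 5;
const int pentatonic[5] = {0, 3, 5, 7, 10};              // semitone offsets (assumed scale)
const int basePitch[NUM_BEAMS] = {36, 48, 60, 72, 84};   // one register per layer

void onBeamInterrupted(int beam) {
  int note = basePitch[beam] + pentatonic[random(5)];    // random pitch within the layer's register
  int divisions = 1 << random(1, 4);                     // 2, 4 or 8 subdivisions per bar
  Serial.print("layer ");
  Serial.print(beam);
  Serial.print(' ');
  Serial.print(note);
  Serial.print(' ');
  Serial.println(divisions);
}

void setup() {
  Serial.begin(115200);
  randomSeed(analogRead(A0));  // unconnected analog pin as a crude entropy source
}

void loop() {
  // In the real system this would be called from the sensor edge-detection code;
  // here the mapping is simply demonstrated once per second with a random beam.
  onBeamInterrupted(random(NUM_BEAMS));
  delay(1000);
}

Because every trigger draws new values, two identical gestures never produce exactly the same result, which is what keeps the soundscape evolving.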
Installation Requirements
Space: Minimum 300 x 300 x 250 cm (adaptable to different spaces)
Audio System: 2-4 speakers for spatial sound diffusion
Power Supply: 5 V
Dark Environment and Smoke: Necessary for optimal visibility of the laser beams