Continuing the spatial sound research of GAP+ and exploring sonification and spatial narratives from a human perspective, we developed a complete spatial DSP system driven by first-person movement. The system aims to create multi-sensory, audio-driven experiences through sonification, procedural audio modulation, and HRTF (Head-Related Transfer Function) spatialization. Built upon a non-linear musical narrative, its recursive musical progression can be easily learned and controlled by audiences of all backgrounds.
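The core mapping from first-person movement to spatial audio parameters can be sketched as follows. This is a minimal illustration rather than the system's actual DSP chain: a real HRTF renderer convolves each source with measured head-related impulse responses, whereas here constant-power panning stands in for that step, and the function names (`relative_direction`, `pan_gains`) are hypothetical.

```python
import math

def relative_direction(listener_pos, listener_yaw, source_pos):
    """Azimuth (radians) of a sound source relative to the listener's heading.

    Positions are (x, y) tuples on the ground plane; listener_yaw is the
    facing angle in radians (0 = +x axis, counterclockwise positive).
    Positive azimuth means the source lies to the listener's left.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    azimuth = math.atan2(dy, dx) - listener_yaw
    # Wrap to (-pi, pi] so left/right panning is unambiguous.
    return math.atan2(math.sin(azimuth), math.cos(azimuth))

def pan_gains(azimuth):
    """Constant-power stereo gains as a crude stand-in for HRTF filtering.

    Sources behind the listener simply fold to the front here; front/back
    and elevation cues are exactly what full HRTF convolution adds.
    """
    p = (1.0 - math.sin(azimuth)) / 2.0  # 0 = hard left, 1 = hard right
    return math.cos(p * math.pi / 2.0), math.sin(p * math.pi / 2.0)
```

Recomputing these gains every frame from the tracked head pose is what ties the sound field to first-person movement: walking past a source sweeps it smoothly across the stereo image.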
Temporal Architecture, Spatial Music
In architectural discourse, space is traditionally perceived as an environment composed of material structures and static visual elements. Dynamic elements and human-centered sensory experiences, though often proposed, frequently fail to materialize in final architectural products. The discipline predominantly gravitates toward the allure of drawings and visual forms, constrained by the static nature of materiality. This often leads to a disconnect between the envisioned dynamic experiences and their translation into tangible form, overlooking the human perspective and diluting the original concept. In practice, while building accessibility and acoustic design are addressed through regulations, their potential for creative, non-visual spatial experiences remains largely untapped. This oversight marginalizes certain demographics, such as visually impaired individuals. The book “Spaces Speak, Are You Listening?: Experiencing Aural Architecture” shows that our brains perceive visual and acoustic space through different mechanisms, yet the prevailing mindset in the field remains ocularcentric.

In a broader sense, the art world involves more mediums, seemingly offering more experiential options. In recent years, however, the term “multisensory” has become overused, typically denoting an assemblage of sensory elements arranged within a visually driven narrative, still entrenched in a linear, static, ocularcentric culture. Discussions around terms like “soundscape” tend to be limited, focusing on spatial audio techniques such as simulating sound localization and environmental occlusion, or on creating atmospheric audio from sound samples placed in a space. This treatment of “multisensory” elements, especially prevalent in media art, often fails to forge meaningful connections between interactive or non-interactive visuals and their accompanying soundscapes, and seldom offers fresh cultural insights or innovative thought processes in sound design.
This sensory bias not only perpetuates perceptual inequality but also marginalizes certain audience groups who might experience narratives differently.
The field of music and sound, influenced by ocularcentric perspectives, often relegates sound to a supplementary role in interactive designs, where narratives remain inaccessible without visual aids or graphical interfaces. The dominance of tactile and visual interfaces in music, particularly since the introduction of hardware synthesizers, further exemplifies this trend. The advent of Digital Audio Workstations (DAWs) and graphical Digital Signal Processing (DSP) software such as Max/MSP revolutionized the industry, yet these tools often require intricate knowledge or training for effective use. Notable systems like Iannis Xenakis’s UPIC and the subsequent IanniX explored graphic, geometric, and spatial controls for audio synthesis and manipulation, but they were designed primarily top-down for professional musicians, with little consideration for the listener’s spatial experience within the sound field. This dynamic fosters a binary division between performers, who interact directly with the narrative, and audiences, who are relegated to a passive role, unable to direct the narrative.
In scientific research, sonification has been used to interpret data with sound, while its subcategory PMAP (Pulsed Melodic Affective Processing) focuses on emotional and affective aspects. Influenced by ocularcentrism, sonification has been framed as a supplemental method, and its own aesthetics remain underexplored and distant from the public: in most situations the resulting sounds are straight, plain, or minimal, serving purely scientific ends. The visual is prioritized; interfaces and artworks driven by this mindset exhibit strong ocularcentric tendencies, while acoustics-focused interfaces remain underexplored. Sonification methods shaped by this visual mindset stress complexity but in fact create chaos: complexity disconnected from the meaning and characteristics of the data, because the richness of sonic information easily tips into disorder, as widely observed in mass-data sonification. Forming complexity seems to be a goal pursued by most artists, further influencing the field of sound. Both the process and its results are questionable: does such a rule inherently apply to all senses, or is it merely a habit of visual culture that leads us to operate on sound in a visual manner? From this perspective, our project attempts to provide a new way of accessing narrative through acoustics-driven spatial interaction.
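The tension described above can be made concrete with the simplest sonification technique, parameter mapping, in which data values drive pitch directly. The sketch below (hypothetical names, not from any cited system) maps a data series linearly onto a frequency range and then snaps each pitch to a pentatonic scale — one common design choice for keeping dense data from collapsing into the chaos criticized here.

```python
import math

# Semitone offsets of a major pentatonic scale; the trailing 12 lets a
# pitch snap upward to the next octave's root instead of stretching down.
PENTATONIC = [0, 2, 4, 7, 9, 12]

def map_to_pitch(values, low_hz=220.0, high_hz=880.0):
    """Linearly map each data value to a frequency in [low_hz, high_hz]."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

def snap_to_scale(freq_hz, base_hz=220.0):
    """Quantize a frequency to the nearest pentatonic scale degree."""
    semitones = 12.0 * math.log2(freq_hz / base_hz)
    octave, step = divmod(round(semitones), 12)
    nearest = min(PENTATONIC, key=lambda s: abs(s - step))
    return base_hz * 2.0 ** (octave + nearest / 12.0)
```

Whether such quantization clarifies the data or merely prettifies it is precisely the aesthetic question raised above: the raw linear mapping preserves more detail, while the scale-constrained version is more legible to a general audience.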