This article explores the dominance of ocular-centric culture in art, design, and architecture, which often overshadows non-visual forms of participation. This bias extends its influence into sound and music, traditionally limiting audience interactivity and reinforcing a linear, static mindset. The widespread acceptance of this sensory bias, both by audiences and creators within the industry, exacerbates the issue of sensory imbalance, contributing to a skewed narrative in media art and design practices. This exploration aims to highlight and challenge these entrenched cultural norms, advocating for a more inclusive and multi-sensory perspective in creative fields.
The concept of the “Temporal Architect” is introduced, situated at the confluence of architecture and music. This role leverages expertise from both domains, orchestrating narratives in time and space through the innovative use of spatial music systems. Drawing on the author’s early forays into integrating spatial music systems within architectural and installation contexts, the article presents these systems and interfaces as practical case studies. It delves into the politics of sensory bias and human cognition, offering critical insights into these domains. This narrative serves as a reflective commentary on the current state of the music industry and the built environment, informed by contemporary mediums. It ventures a bold forecast about the future roles of architects and musicians, not confining their practices or responsibilities but rather envisioning the potential of a multidisciplinary approach. It contemplates the advancement of humanity and the pursuit of social justice from a cultural and technological standpoint.
In architectural discourse, space is traditionally perceived as an environment composed of material structures and static visual elements. Dynamic elements and human-centered sensory experiences, though often proposed, frequently fail to materialize in final architectural products. The discipline predominantly gravitates towards the allure of drawings and visual forms, constrained by the static nature of materiality. This often leads to a disconnect between the envisioned dynamic experiences and their translation into tangible forms, overlooking the human perspective and diluting the original concept. In practice, while building accessibility and acoustic design are addressed through regulations, their potential for creative, non-visual spatial experiences remains largely untapped. This oversight marginalizes certain demographics, such as visually impaired individuals.
Traditional architectural theories and research often adhere to a static, linear understanding of space, neglecting its inherently responsive and experiential nature. Space is experienced through individual interactions, unfolding non-linearly and varying from one person to another. This ocular-centric bias is evident in research from the print era, where scholars focused on representations such as notations and images, often mistaking visual similarities for ontological connections. Both musicians and architects exemplify this approach: John Cage, in research that concentrated on musical notations and spatial sequences on paper, and Bernard Tschumi, whose architectural drawings in The Manhattan Transcripts depicted human movement in space as a movie sequence on paper. These studies, however, overlook the inherent limitations of their mediums: the loss of visual and acoustic richness in static, timeless media, and the reduction of spatial dimensions to the flat surface of paper, which excludes the audience from the environment and builds a closed loop of soliloquy. Such research often focuses more on the appearance of these notations than on the holistic experience that integrates both spatial and temporal dimensions. The adage “Architecture is frozen music; music is fluid architecture” mirrors this cultural inclination, emphasizing the representation rather than the actual presentation of experiences.
In a broader sense, the art world involves more mediums, providing seemingly more options for experiences. In recent years, the term “multisensory” has become fashionable. However, this term is often superficially applied, typically encompassing an assemblage of sensory elements arranged within a visually driven narrative, still entrenched in the linear and static ocular-centric culture. Discussions around terms like “soundscape” tend to be limited, focusing on spatial audio techniques such as simulating sound localization and environmental occlusion, or creating atmospheric audio from sound samples in a space. This superficial treatment of “multisensory” elements, especially prevalent in media art, often fails to forge meaningful connections between interactive or non-interactive visuals and their accompanying soundscapes, seldom offering fresh cultural insights or innovative thought processes in sound design. This sensory bias not only perpetuates perceptual inequality but also marginalizes audience groups who might experience narratives differently.
The field of music and sound, influenced by ocular-centric perspectives, often relegates sound to a supplementary role in interactive designs, where narratives remain inaccessible without visual aids or graphical interfaces. The dominance of tactile and visual interfaces in music, particularly since the introduction of hardware synthesizers, further exemplifies this trend. The advent of Digital Audio Workstations (DAWs) and graphical Digital Signal Processing (DSP) software, such as Max/MSP, revolutionized the industry, yet these tools often require intricate knowledge or training for effective use. Notable systems such as Iannis Xenakis’s UPIC and the subsequent IanniX explored graphic, geometric, and spatial controls for audio synthesis and manipulation, but they were designed primarily from a top-down approach for professional musicians, with little consideration for the listener’s spatial experience in the sound field. This dynamic fosters a binary division between performers, who interact directly with the narrative, and passive audiences.
In scientific research, sonification has been used to interpret data with sound, while its subcategory PMAP (Pulsed Melodic Affective Processing) focuses more on emotional and affective aspects. Influenced by ocular-centrism, sonification has been defined as a supplemental method, while the aesthetics of sonification itself remain underexplored and distant from the public – sounds are straight, plain, or minimal. Interfaces and artworks driven by such methods (e.g., Photophore by Takai Systems) present sounds disconnected from the meaning and character of the data, owing to the chaos caused by the richness of sonic information – a problem widely observed in mass-data sonification. Forming complexity seems to be a goal pursued by most artists, further influencing the field of sound. Both the process and its results are questionable: does such a rule inherently apply to all senses? Or is it merely a habit of visual culture that makes us operate sound in a visual manner?
In response to these challenges, this article proposes rethinking the way we approach non-visual interfaces and multisensory narratives. It advocates for interfaces that provide independent access through various sensory channels, fostering more inclusive and engaging experiences. Additionally, it envisions establishing new performance spaces, communities, and cultures that transcend traditional visual and auditory limitations. This article outlines the author’s explorations in audience-driven spatial music narratives and introduces the concept of the “temporal architect” or “spatial musician.” The role, situated at the confluence of space-accurate and time-accurate narratives, offers fresh perspectives and frameworks for developing creative scenes and communities that embrace a broader spectrum of sensory experiences.
The Definition of Temporal Architecture
The concept of a “Temporal Architect” emerges at the intersection of architectural and musical disciplines, embodying a synthesis of distinct professional mindsets. From a systemic viewpoint, an architect’s expertise lies in crafting complex systems within defined contexts and orchestrating spatial arrangements and sequences across multiple scales. In contrast, musicians and sound artists focus on developing complex systems as temporal sequences at multiple scales, navigating through layers of rhythmic and melodic structures.
This chapter delves into the potential of temporal architecture, examining it through both cultural and technological lenses. Architectural education emphasizes the development of spatial awareness and the ability to construct complex systems that unfold into personal experiences as one navigates through them. On the other hand, music offers a rich tapestry of multi-sequential narratives, intricately woven with sound and deeply connected to the human brain’s capacity for pattern recognition and predictive processing, as elucidated in Elizabeth Hellmuth Margulis’s On Repeat: How Music Plays the Mind.
The role of the Temporal Architect, therefore, can be seen as an amalgamation of these two spheres. It involves integrating the precise, space-accurate thinking characteristic of architects – focusing on tangible, multi-scale environmental dynamics – with the time-sensitive, process-oriented approach of sound artists and musicians. This hybrid role envisions how information, perceived as a spatial distribution, can be dynamically unfolded and experienced over time through a framework of multi-sequential, multi-cyclic systems influencing all senses. Such a role not only bridges the gap between physical space and temporal experience but also opens up new avenues for creative expression and understanding, blending the physicality of architecture with the fluidity of musical narratives. This innovative approach promises to redefine our interaction with and perception of the built and auditory environments, offering fresh perspectives on how we experience and engage with our surroundings.
Human-Centered Spatial Music and Interface
Since 2019, my work has been anchored at the nexus of architecture, media art installations, and music, exploring the concepts of “Spatial Instrument/Music” and “Invisible Architecture.” This exploration later evolved into augmented and extended reality (AR/XR) projects and immersive audio-visual installations. The core inquiry of my work is to investigate how multisensory messages – whether visual, auditory, or tactile – can connect individuals to their environment through personal data. This encompasses examining the operational modes of these messages, whether in spatial or temporal dimensions, and their mediums, be they virtual or physical. In this realm, human behavior and systemic interactivity are intertwined, mutually influencing one another and dynamically shaping the human experience within space.
From 2019 to 2023, my endeavors focused on spatializing music systems and narrative structures across various projects. The objectives are twofold: first, to create intuitive instruments that reflect the general audience’s presence and enable them to engage in performances and steer narrative directions; and second, to evolve narrative forms and hierarchies innately tied to such interactive experiences. This journey has led to the development of a unique methodology rooted in this system, aiming to forge new avenues for audience-driven performances and immersive experiences. These endeavors are set against the backdrop of urban history and accessibility, striving to unveil potential cultural shifts that move beyond the prevalent ocular-centric paradigm.
- GAP+: Spatial Instrument and Invisible Architecture
In 2019, inspired by probing questions about the nature of space and interactivity, I embarked on the project “GAP+,” which sought to challenge conventions of visualization and visual interfaces. The project posits the concept of “invisible space” as an interactive soundscape, diverging from traditional visual-dominated approaches. It draws inspiration from the structure of “live set” music, which functions as a progression through repeating musical patterns. It offers a nuanced perspective on the conservation of historical urban spaces, constructing an auditory landscape that invites audience participation and engagement with the commercially dominated urban soundscape. This project serves as a critique and redefinition of architectural presentation, introducing innovative concepts of real-time and remote experiences within the built environment, manifested as a spatial instrument.
This work is rooted in two primary inspirations: the narrative structure of electronic music and the personal auditory experiences of navigating the gaps between buildings on the small island of Kulangsu. Correspondingly, “GAP+” is divided into two distinct components. The first is a conceptual physical Sound Museum, comprising curved forms and acoustical units with openings that form a labyrinth of narrow alleyways. Within these spaces, stored sounds loop and mix dynamically, creating an immersive soundscape that visitors experience as they traverse the environment. The second component is a remote virtual Sound Museum, employing sensors, Max/MSP, projectors, and speakers to assemble a theremin-like interface for general audiences, connected to sounds collected on Kulangsu. Here, visitors’ gestures act as the interface for navigating the virtual space, transforming spatial sound locations into a real-time musical performance. The technical backbone of the project is a k-d tree algorithm for audio zoning, facilitating a collective re-creation of the soundscape and offering a spatialized music-triggering mechanism akin to a live set. This setup democratizes the creation and experience of spatial sound narratives, accessible to all without specialized training.
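The k-d tree zoning mechanism can be sketched in a few lines: each stored sound loop has a position, and the visitor’s (or hand-tracked cursor’s) position is mapped to the nearest loop, which becomes the active layer of the mix. The original system was built in Max/MSP; the following Python is only an illustrative stand-in, and names such as `SoundZone` are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class SoundZone:
    name: str
    pos: tuple  # (x, y) in the virtual museum plan

def build_kdtree(zones, depth=0):
    """Recursively build a 2-D k-d tree over zone positions."""
    if not zones:
        return None
    axis = depth % 2
    zones = sorted(zones, key=lambda z: z.pos[axis])
    mid = len(zones) // 2
    return {
        "zone": zones[mid],
        "axis": axis,
        "left": build_kdtree(zones[:mid], depth + 1),
        "right": build_kdtree(zones[mid + 1:], depth + 1),
    }

def nearest(node, point, best=None):
    """Return the SoundZone nearest to `point`."""
    if node is None:
        return best
    zone, axis = node["zone"], node["axis"]
    def dist2(z):
        return (z.pos[0] - point[0]) ** 2 + (z.pos[1] - point[1]) ** 2
    if best is None or dist2(zone) < dist2(best):
        best = zone
    diff = point[axis] - zone.pos[axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, point, best)
    # Only descend the far side if the splitting plane is closer than the best hit.
    if diff ** 2 < dist2(best):
        best = nearest(far, point, best)
    return best

zones = [SoundZone("harbor", (0, 0)), SoundZone("alley", (4, 1)),
         SoundZone("piano", (2, 5))]
tree = build_kdtree(zones)
print(nearest(tree, (3.5, 0.5)).name)  # a visitor near (3.5, 0.5) activates "alley"
```

A production version would query the tree on every tracking frame and crossfade between the loops of the previous and current zone.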
Culturally, “GAP+” fosters a fresh perspective on non-linear history and reinterprets the identity conflicts of Kulangsu. The island has a multifaceted history, marked by periods of occupation and influence by native fishers, Japan, European countries, and, currently, external businesses. Its past, shaped by European colonization during the World War era, and its present, buzzing with the rhythms of tourism, the internet, and commerce, make Kulangsu a unique historical tapestry. Recently celebrated as a World Cultural Heritage site and nicknamed “the island of piano,” Kulangsu has often been showcased as a museum where culture and history are presented in a linear and static manner. “GAP+” challenges this traditional museum narrative, focusing on the lived experience and present essence of Kulangsu. It aims to recreate personal histories for the future, shifting away from the conventional view that places greater value on the past than on the present.
- Acoustic Garden
Continuing the concept of invisible space and acoustic architecture, previous research led to a reflection on sensory preferences, unearthing the cultural imbalance embedded in what is considered “visually appealing” versus “acoustically appealing,” a dichotomy that inadvertently perpetuates notions of inequality. In 2021, the scope of this exploration was narrowed down, employing augmented reality and spatial audio to investigate the relationship between sound and bodily movement in space. How does sound influence movement, and how does movement in turn generate sound narratives? These questions were the catalyst for the inception of “Acoustic Garden”: a project that unfolds around a mixed-reality interactive spatial audio experience, emphasizing auditory exploration at the pedestrian level.
The project, underpinned by game-engine and augmented reality technology, reimagines the structure of electronic music through the lens of audio effect modulation and binaural spatialization. It creates a navigational tool that guides users through space using auditory cues from a range of virtual audio objects rather than visual indicators. By doing so, it places mainstream users in a position similar to that of blind users. These objects respond to user movement, crafting immersive musical experiences. Users engage with these audio objects to shape their own musical narratives, fostering a dynamic interaction with the sound environment.
Technically, “Acoustic Garden” translates the narrative structures of classical electronic music into a spatial context. Here, the interplay of sound loops and effect modulation forms a musical structure that evolves through repetition and variation. The system translates the spatial relationships and movements of users into modulations of sound effects (modulating parameters such as the cutoff frequency of a filter, the grain length of a granular synthesizer, or the rate of an LFO), enabling the exploration of virtual objects through auditory changes. This interaction not only enhances the walking experience but transforms it into a real-time musical journey. The approach opens new avenues for architectural design, particularly in creating inclusive experiences for visually impaired individuals, and pushes the boundaries of AR/VR by offering spatial experiences independent of visual stimuli. In the realm of music, it democratizes the creation and experience of non-linear interactive music, making it accessible without extensive training.
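The distance-to-parameter mapping described above can be sketched as follows: the listener’s distance to a virtual audio object is remapped to DSP parameters (filter cutoff, grain length, LFO rate). This is a minimal sketch; the parameter ranges and curves are illustrative assumptions, not the project’s actual values.

```python
import math

def lerp(a, b, t):
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def modulate(listener_pos, object_pos, radius=10.0):
    """Map distance (0..radius) to normalized closeness, then to DSP parameters."""
    d = math.dist(listener_pos, object_pos)
    closeness = max(0.0, 1.0 - min(d, radius) / radius)  # 1 at the object, 0 at the edge
    return {
        # Closer -> brighter: open the low-pass filter (squaring gives an exponential feel)
        "filter_cutoff_hz": lerp(200.0, 8000.0, closeness ** 2),
        # Closer -> shorter grains: a tighter, more articulate granular texture
        "grain_length_ms": lerp(500.0, 30.0, closeness),
        # Closer -> faster LFO
        "lfo_rate_hz": lerp(0.1, 6.0, closeness),
    }

params = modulate((1.0, 0.0, 0.0), (0.0, 0.0, 0.0))
```

In an engine such as Unity or Unreal, `modulate` would run per frame per audio object, with the returned values smoothed before being applied to the DSP graph to avoid zipper noise.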
However, “Acoustic Garden” goes beyond traditional sonification methods that simply replicate or translate data. It places emphasis on the emotional resonance of auditory messages, exploring the potential for audience-involved, spatially driven, non-linear musical narratives. Throughout the various implementation phases, challenges in sound design, hardware constraints, and cognitive processing were encountered, underscoring the project’s ongoing exploratory nature. Given the advancements in neuroscience and studies of the human brain, the discourse shifts to the threshold of human sensory capacity and the differential processing of signals in environmental contexts. This reflective journey promises insights and innovations in how we interact with and perceive our surroundings amid the flux of messages, leading to the next experiment, “Maelstrom”.
“Maelstrom” is an immersive installation that addresses the challenge of navigating the overwhelming data landscape, invoking discourse upon contemporary urban development and mental wellness. With 7.1 surround sound and generative projection, the audience becomes the driving force of the experience, shaping it through their movements in time and space. The project explores how the processing of visual information and embracing a multi-sensory approach can aid in navigating and understanding our environment. It draws upon the cyclical nature of information flow, allowing the audience to engage with environmental features and witness the reciprocal influence between human actions and their surroundings. The installation represents an interconnected ecological system that evolves with human interaction.
Inspired by Marshall McLuhan’s interpretation of Edgar Allan Poe’s A Descent into the Maelstrom, the exhibit emulates the process of escaping information overload through pattern recognition. Visitors navigate the installation by responding to auditory cues, moving through a network of images and sounds that represent the evolution of urban and geological landscapes. Each scene in the installation, while similar, has its unique characteristics, creating a loop of ever-evolving visual and auditory experiences.
The visuals of “Maelstrom” employ a generative AI model as a branching sequence for image metamorphosis dictated by users. It places the audience at the center of the generative map, where imagery functions not as a static image but as a temporal process of evolution. It employs DALL·E 3 to generate variations of urban satellite images in real time as maps for navigation, mirroring geological shifts and generational evolution. Similarity across generations becomes the key: characteristics of the map are passed down through generations, enabling the audience to progressively learn and adapt based on past experiences and memories. Concurrently, the audience’s actions determine the evolutionary direction of the next scene, forging a reciprocal molding relationship with the environment.
From an auditory standpoint, “Maelstrom” is an interactive sound installation that guides its audience to learn and respond to environmental shifts through an evolving sonic narrative, where audio guides movement and movement reshapes audio to deliver a multi-sensory experience. Built on a core framework of distance-related audio effect modulation, it surpasses traditional audio-visual experiences, becoming an interactive musical instrument and space where audiences can learn and engage via a minimal interface and minimal controls. It challenges the conventions of music with spatial and non-linear narratives in which musical progression is multi-branched, moving beyond mere sonification and visualization. In each scene, which resembles a circular maelstrom, the user’s distance to multiple targets modulates audio parameters: moving from the center to the boundary, noise levels and high-frequency content are filtered, and each sound presents procedural changes in pitch, speed, phase, and overdrive. As the user reaches the boundary, fill-in audio effects are triggered, and on quantization the scene switches to a new one, like a musical progression. Guided by spatial audio, the installation allows all audience members, including the visually impaired, to participate as independent performers. Technically, using the latest Unreal Engine technologies, MetaSounds, and a distributed speaker array, it interprets the responsive environment of Dolby Atmos spatial audio with greater interactivity.
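The scene logic above can be compressed into a small sketch: within a circular “maelstrom” scene, the user’s radial position modulates filtering, and crossing the boundary arms a scene switch that fires on the next quantization point. All names and ranges here are illustrative assumptions, not the installation’s actual MetaSounds graph.

```python
def scene_state(user_pos, center, radius, beat_phase):
    """Compute per-frame audio state for one circular scene.

    beat_phase is in [0, 1); 0.0 marks a musical quantization boundary.
    """
    dx, dy = user_pos[0] - center[0], user_pos[1] - center[1]
    r = min((dx * dx + dy * dy) ** 0.5 / radius, 1.0)  # 0 at center .. 1 at boundary
    return {
        "noise_level": 1.0 - r,          # noise fades as the user moves outward
        "highpass_amount": 1.0 - r,      # high frequencies filtered toward the edge
        "fill_triggered": r >= 1.0,      # fill-in effect arms at the boundary
        # the switch itself waits for the quantization point
        "switch_scene": r >= 1.0 and beat_phase == 0.0,
    }

state = scene_state((3.0, 4.0), (0.0, 0.0), 5.0, 0.0)
print(state["switch_scene"])  # True: at the boundary, on the beat
```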
Practice as a Temporal Architect
Summarizing the journey of spatial music exploration, the focus has evolved from a binary argument over visuals and acoustics, towards addressing accessibility and sensory experiences through sensory constraints, ultimately converging on the individual’s cognitive challenges and well-being amidst urban information overload. As a “Temporal Architect” or “Spatial Musician”, there’s potential to forge a system and culture that is parallel with, yet challenges, the dominant visual-centric paradigm. This approach encourages a fresh perspective on technological advancement, steering away from the relentless pursuit of eye-catching visuals to embrace temporal and musical influences in creative production.
This chapter delves deeper into this new role, examining how it intersects with data and information technology, thematic forms, and cultural contexts. It investigates how the role can influence technological developments and cultural trends, offering an alternative to the prevailing visual-centric approach. By extending the principles of spatial music, the discussion aims to forecast the impact and practical applications of this innovative role, exploring its potential to reshape narratives in terms of data and information technology, form and topic, as well as culture and community.
- Generative Technology, Message and Interface
In terms of data, the democratization of spatial computing devices and the ongoing development of sensors allow us to quickly obtain richer, higher-precision personal data for interactive design and multi-sensory narratives. Yet data is abused in the current media art industry, as creators dump more and more data into their pieces. Unlike traditional sonification and visualization, what we need is not translation between non-human-centered channels of information (such as visual to acoustic), but the connection of different channels across the sensory spectrum through personal data, from an individual perspective – reflecting a person’s relationship to their surroundings, aiding their cognitive system, and providing an immersive experience.
Regarding generative models, whether classic generative models (such as reaction-diffusion) or new AI models (such as Stable Diffusion), most people tend to focus on the details and specific information of the result – the fidelity of 3D animation generated from motion capture, the resolution and detail of a generated image, or the quality of generated audio – driven by a mindset of optimization. People, especially engineers, often assume that technological advancement alone leads to significant changes in efficiency and humanity. But does such homogenized progress have an essential impact on human-computer interaction? While resolution, frame rate, and audio refinement are technically necessary, the industry often over-focuses on optimization while neglecting the readability and perceptibility of information. When optimization surpasses the average human perceptual threshold, the effort invested becomes inefficient. From the cognitive perspective of an individual, patterns reflect the characteristics of data more than resolution does – as in traditional generative art, where the same information is transformed by fractal or multi-agent algorithms, retaining the original distribution patterns and characteristics to express new information, connected to affective feedback. For AI models, techniques such as style transfer, img2img, or the “spiral image generation challenge” (which trended on social media in 2023) all reflect the consistency of data patterns through the generative process. In terms of time, audio follows a similar principle: grooves and progression are conveyed by variations of audio effect modulation or notes rather than by the detail of sound textures. From this perspective, neural network models and classic generative models are essentially the same in how they reflect pattern and distribution.
In the future, spatial music systems could provide a more holistic, pattern-oriented perspective and set of solutions for such generative technologies.
Technically speaking, intersecting systems based on spatial and temporal scales form the basis of the earlier Spatial Music System explorations. Can we arrange all types of data in the manner of narrow-sense music, as complex, time-accurate sequences? In ambient sound, the length of clips or notes is generally non-decisive, as subtle changes in length and speed make little difference, especially from the audience’s perspective. For musical sequences, however, time-accuracy is necessary – compared with ambient performances, audio-visual performances, DJ sets, and music games emphasize such accuracy, bringing grooves and motivation to audiences in the live scene. In current media art explorations, beyond overused ambient music and endless buzzing drones, we can potentially bring new experiences via spatial music systems: just as Ryoji Ikeda’s works construct experiences through the frequency of patterns, audio and visual patterns might be reflected in musical sequences in space.
- Practice, Narrative and Topics
The practice of a temporal architect is to fine-tune sensory balance and define boundaries in experiences. Narratives can be constructed by manipulating the thresholds of perception and the synchronization or asynchronization of information, serving as means to redefine human perception by controlling the overload and restriction of specific sensory channels. The aim is not to define standards of “good” or “bad” experience, nor to pursue a balance between the senses for everyone; instead, we can artistically and intentionally set up limitations and boundaries that impose a defined situation upon individuals. As in the earlier spatial music explorations, by reversing the sensory bias and priorities of the current ocular-centric culture, the visual can become a secondary element, sound can be foregrounded, and the power of the narrative can be reversed – placing the mainstream audience in a position similar to that of a blind person.
Sensory limits are often utilized. Some game designs limit visual range to demonstrate the importance of sound, such as Dark Echo, a game about navigating and avoiding enemies through sound emission and reflection in the dark (though it is still driven by visuals). Alternatively, it is common to darken the room to emphasize sound in experiential installations. Sensory overload is another way to set up limits and invalidate a sense: immersion in a mass of ineffective information also forms a constraint. In later Acoustic Garden research, tests showed that multiple concurrent sounds in space can exceed the processing capacity of the human brain.
Meanwhile, the synchrony and asynchrony of the senses in time, or their alignment and misalignment in space, offer further approaches for experience and performance. In the earlier spatial music explorations, the system functions as a parallel timeline alongside the user’s narrative: interaction is not immediately responsive but follows musical quantization. This synchronization to musical pattern, combined with asynchronization from user behavior, becomes a scheduling system that can serve as an information-tuning system. For instance, remapping or integrating perception maps and spatial data into discontinuous, non-linear patterns for display on interfaces significantly affects the input and output of information. Misaligning spatial orientation or rearranging sequences in space can provide a smart and safe mechanism, allowing everyone to participate as a performer with minimal interfaces.
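The quantized scheduling described above – deferring user actions to the next musical boundary, much like “launch quantization” in a live set – can be sketched as follows. The tempo and grid values are assumptions for illustration.

```python
def next_boundary(t, bpm=120.0, beats_per_bar=4):
    """Return the time of the next bar line at or after time t (seconds)."""
    bar_len = beats_per_bar * 60.0 / bpm  # 2.0 s per bar at 120 BPM in 4/4
    bars_elapsed = int(t // bar_len)
    boundary = bars_elapsed * bar_len
    return boundary if boundary == t else boundary + bar_len

class QuantizedScheduler:
    """Collects user actions and releases them only on bar boundaries."""
    def __init__(self, bpm=120.0, beats_per_bar=4):
        self.bpm, self.beats_per_bar = bpm, beats_per_bar
        self.pending = []  # (due_time, action) pairs

    def submit(self, t, action):
        """Register an action at time t; it becomes due at the next bar line."""
        due = next_boundary(t, self.bpm, self.beats_per_bar)
        self.pending.append((due, action))
        return due

    def tick(self, t):
        """Return (and remove) every action whose boundary has passed."""
        ready = [a for due, a in self.pending if due <= t]
        self.pending = [(due, a) for due, a in self.pending if due > t]
        return ready

sched = QuantizedScheduler()
sched.submit(0.7, "switch_scene")   # arrives mid-bar...
print(sched.tick(1.9))              # ...not yet released: []
print(sched.tick(2.0))              # released on the bar line: ['switch_scene']
```

The asynchrony is visible in the gap between `submit` time and release time: the user acts freely, but the system answers only in musical time.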
From a personal perspective, immersive technology and interfaces today are slowly approaching the ideal ubiquitous computing environment described by Mark Weiser in his article The Computer for the 21st Century. Yet while we are immersed in information, the issue of message overload remains unresolved. For neurodiverse persons, such as those with ADHD (Attention-Deficit/Hyperactivity Disorder), the stress and chaos caused by data streams in interfaces have long-term impacts on mental wellness (e.g., the overflow of advertisement redirection in web browsers). Technically, for today’s generative and reality-capture technologies (e.g., photogrammetry), we still need tools to structure complex and disparate datasets. Classic spatial structures like k-d trees and quadtrees can classify these fractal structures into musical sequences and loops, providing categorized information to users and audiences at various levels. For instance, laser-scanned data of archaeological sites can be deconstructed, temporally arranged according to cluster size, re-interpreted by a spatial music system (for example, by arpeggiating or sequencing notes), and converted into visual and auditory experiences. How do we process information? How can we filter and sort unstructured information and complexity for cognitive purposes? Can spatial music systems reprocess information to fine-tune mental states or offer spatial experiences?
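The cluster-to-sequence idea above can be sketched under simplifying assumptions: coarse grid binning stands in for k-d tree or quadtree classification, clusters are ordered by size, and each becomes an arpeggiated phrase. The scale choice and MIDI mapping are hypothetical.

```python
from collections import defaultdict

PENTATONIC = [0, 3, 5, 7, 10]  # minor pentatonic intervals (semitones)

def cluster_points(points, cell=1.0):
    """Quadtree-like coarse classification: bin 2-D points into square cells."""
    cells = defaultdict(list)
    for x, y in points:
        cells[(int(x // cell), int(y // cell))].append((x, y))
    return sorted(cells.values(), key=len, reverse=True)  # largest cluster first

def arpeggiate(clusters, root=48):
    """Larger clusters -> longer arpeggios, played earlier in the sequence."""
    sequence = []
    for i, cluster in enumerate(clusters):
        octave = root + 12 * (i % 2)                      # alternate octaves per cluster
        notes = [octave + PENTATONIC[n % 5] for n in range(len(cluster))]
        sequence.append(notes)
    return sequence

# A toy "scan": a dense cell near the origin, a mid-sized cell, and an outlier.
points = [(0.1, 0.2), (0.4, 0.3), (0.9, 0.8),
          (3.2, 3.1), (3.4, 3.3),
          (7.5, 1.1)]
print(arpeggiate(cluster_points(points)))  # [[48, 51, 53], [60, 63], [48]]
```

Real laser-scan data would call for a proper spatial index and a density-based clustering pass, but the mapping principle – cluster size to phrase length and order – stays the same.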
As an outcome, the topics of the temporal architect are interdependent with the medium and the system. Narratives derive from the core topic of time and are presented spatially, involving the development of places and phenomena as processes of how things evolve – from natural phenomena to urban history, from cellular development to personal life journeys, from public to private spaces, from embryonic development to insect metamorphosis – just as the project “GAP+” offers a musical architecture that reveals a critical, non-linear perspective on linear history and heritage preservation. The aging and growing processes of things provide affective feedback and an emotional connection between audience and technology. In spatial terms, the concept concerns the potential development and evolution dictated by the user’s multi-sequential interaction in space – as in the “Maelstrom” project, where the environment is decisively developed by the audience, offering a navigation and decision-making tool for surviving information overload.
As an extension, scientific theories and methods can be involved: for example, Morphic Resonance by biologist Rupert Sheldrake, which challenges genetic theory by proposing that the development of organisms and populations is a collection of memories and habits; or a latent-space walk through an AI dataset, where each generation is a collective hybrid of neighboring data in a higher dimension. Theoretically, any system concerned with process and possibility can be related to the practice of the temporal architect and translated into musical/temporal experiences in space, while the system provides an interactive tool for the audience to participate in production. Essentially, the medium of the temporal architect does not necessarily provide new topics but enables individually focused experiences and re-interpretations of histories and memories.
- Culture and Community
Currently, public and technological trends are driven by the prevailing cultures of the last generation, such as ocular-centric culture, inheriting the inequalities of past societies. From the development of graphical interfaces and visual arts, we can see a consistent subconscious habit of ignoring marginalized groups without reflecting on culture. While hurtling towards a future that is bigger, brighter, and ever more powerful than we yet understand – or discussing the future and labeling everything with the prefix "future" without criticizing our current mindset – how many of these visions are truly achievable? How many are proposed as mere theories? As with speculative architecture, how do we avoid escapism and irresponsibility in the name of the "future"? How do we approach a future by establishing new cultural scenes that allow proactive and effective practice, rather than constantly creating new future visions? Culturally, just as ocular-centric culture imposes linear and static thinking upon everything, could an acoustic culture built by temporal architects universally offer new rules and mindsets for all the senses?
In the near future, as an extension of previous spatial music research, the potential cultural scenes and organizations implied by this medium remain to be explored. It enables possibilities not just for the visual realm but also for the music and sound industry, involving new ways of organization and collaboration. It emphasizes the power structures between the audience and performers, as well as between organizations.
The cultural impact of the temporal architect's practice across sound and visual fields is significant, particularly in music. When spatial distribution and temporal sequences become participatory tools, movements and behaviors in space transform from random acts into decisive elements, shaping and directing narratives. This shift reveals new possibilities in music distribution and performance: merging non-linear spatial music structures with interfaces such as touchscreens and gesture sensors to offer innovative distribution models for audio-visual pieces. These models surpass traditional vinyl, CD, and streaming methods, presenting interactive music that can be reinterpreted and shaped by audience interaction, offering a dynamic experience beyond the fixed formats of records or streaming services. In the performance realm, high-energy raves or audio-visual shows with audience participation are poised to redefine event organization. This approach blurs the line between performer and audience, democratizing the creation process. While it remains uncharted territory with uncertainties about its functionality – especially regarding the audience's role in information filtering and decision-making – these novel scenes suggest an alternative, inclusive direction for the mainstream music industry.
Similarly, this interactive, audience-driven narrative approach could inspire new architecture and art. Contrasting the classic static mediums of architecture and art collectibles, such interactive formats imply that art could embrace a streaming media model, prioritizing replicability and audience engagement over traditional collectibility. This paradigm shift underscores the evolving role of the audience, from passive viewers to active participants, in shaping the artistic experience.
Temporal architecture, as an "event," proposes new collaborative frameworks for creators and audiences, akin to the dynamics of music labels or curatorial teams. Throughout ancestral history, and still today in some indigenous communities, music and art have been group activities of co-creation and participation, where interactive relationships build community rather than a binary division between performer and audience. Like oral history, such dynamic culture is returning in our current electronic era after a long silence during the print era, as predicted by Marshall McLuhan. This concept envisions a cooperative art group where joint projects manifest as curated exhibitions, public installations, or spatial designs. In this setting, each designer or artist contributes as an independent creator, co-developing unique structures or spaces for audio-visual installations and performances. The identity of each artist is distinct and independent, allowing for individual narrative representation. This organizational structure diverges from traditional design firms or personal studios, emphasizing shared interests and maximizing the visibility and creative autonomy of each participant. It aligns with the principles of a DAO (Decentralized Autonomous Organization), functioning as a cooperative network that maintains the integrity and credit of its creators. In this event-centered culture, real-time participation reshapes both the creative process and the experiential aspect, fostering a more dynamic and inclusive community. This approach not only democratizes the creation process but also enriches the experience for both creators and audiences, heralding a new era of collaborative and participatory art and design.