Architecture of Sound

Responsive Sonic Space in Architecture

This project builds on the author’s earlier exploration, GAP+ Architecture of Sound, and aims to develop a next-generation interface and system for responsive sound environments.

Brief

The software is the first multi-user platform that allows you to design responsive 3D space with architectural definitions. The system is specialized for data transmission and real-time content in architectural space, revealing an augmented acoustic world to everyone.

From architects

Draw your own responsive environment with an architectural interface!

Sound navigation will greatly improve the experience for the blind community, as well as for everyone else.

For musicians

From space definitions, parameters and signal circuits can be bound to user behavior inputs.

Now you can use your body like a theremin on a soundwalk! Music scenes and collaboration can be easily controlled in a shared space!
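As a rough illustration (not the project’s actual code), the sketch below maps a hypothetical body-tracking input, the distance from a hand to a sensor, onto synth pitch the way a theremin does; all names and ranges are placeholders.

```cpp
// Hypothetical sketch: mapping a body-tracking input to a synth parameter,
// theremin-style. Names and ranges are illustrative only.
#include <algorithm>
#include <cmath>

// Map a hand-to-sensor distance (meters) onto a pitch in Hz.
float DistanceToPitchHz(float HandDistanceMeters)
{
    const float MinDist = 0.1f, MaxDist = 1.0f;     // usable range of the gesture
    const float MinHz = 110.0f, MaxHz = 880.0f;     // three octaves, A2 to A5
    // Normalize and clamp the gesture into [0, 1].
    float T = std::clamp((HandDistanceMeters - MinDist) / (MaxDist - MinDist), 0.0f, 1.0f);
    // Interpolate exponentially so equal hand movements feel like equal musical intervals.
    return MinHz * std::pow(MaxHz / MinHz, 1.0f - T);  // closer hand = higher pitch
}
```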

To everyone

Sound can be output to devices such as earphones and experienced in real time.

Everyone is a player in your soundscape! Share and enjoy the sound toys you make with your friends!

Core mechanism:

Based on the user’s location and behavior, modulations are sampled from space definitions (zone/field) and passed to space elements (visual/sound), generating various scenes.
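A minimal sketch of that pipeline, assuming a hypothetical scalar field and a sound element with a single modulation input (illustrative names, not the project’s API):

```cpp
// Hypothetical sketch of the core mechanism: sample a modulation value from a
// field at the user's location and pass it on to a sound element.
#include <functional>

struct Vec3 { float X, Y, Z; };

// A "field" is a spatial distribution of a parameter: here, a function of position.
using Field = std::function<float(const Vec3&)>;

struct SoundElement
{
    float Pitch = 440.0f;
    void ApplyModulation(float Mod) { Pitch = 220.0f + 660.0f * Mod; } // map [0,1] onto 220-880 Hz
};

// Called every tick with the user's current location.
void UpdateFromSpace(const Field& ModField, const Vec3& UserLocation, SoundElement& Element)
{
    float Mod = ModField(UserLocation);   // sample the space definition
    Element.ApplyModulation(Mod);         // drive the space element
}
```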

Target Platform:

PC/macOS, iOS (ARKit), Magic Leap

What is the future of interactive architecture? What is the future of music interface?

See the PDF below for the detailed proposal.

Framework

How do we design a responsive sound environment in an architectural way? How does a temporal medium like sound shape our perceptual world through spatialized interactivity?

How does architecture integrate concepts from other industries?

For architects, a “zone” defines boundaries, and a “field” describes the distribution of parameters.

For musicians, a “scene” defines a combination of sounds, and a “modulation” controls the parameters of a sound.

When they are combined, a “zone” defines the macro attributes of a specific 3D area, and a “field” defines how those attributes vary across locations.

When we move through a 3D space, we can switch between “scenes,” which construct different events on our timeline.

Based on where we and other objects are located, parameters can be read from the “field,” modifying the messages we receive.
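To make the zone/field/scene relationship concrete, here is a hedged sketch under simplified assumptions (spherical zones, a radial field, placeholder names; none of this is the project’s actual class design): the enclosing zone selects the scene, and its field supplies a location-dependent parameter.

```cpp
// Hypothetical sketch: zones select scenes, fields vary parameters within them.
#include <cmath>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct Vec3 { float X, Y, Z; };

struct Zone
{
    Vec3 Center;            // the zone is a simple sphere for illustration
    float Radius;
    std::string SceneName;  // the "scene" this zone activates

    bool Contains(const Vec3& P) const
    {
        float dx = P.X - Center.X, dy = P.Y - Center.Y, dz = P.Z - Center.Z;
        return dx * dx + dy * dy + dz * dz <= Radius * Radius;
    }

    // The "field": here, a parameter that falls off from the zone's center.
    float SampleField(const Vec3& P) const
    {
        float dx = P.X - Center.X, dy = P.Y - Center.Y, dz = P.Z - Center.Z;
        float Dist = std::sqrt(dx * dx + dy * dy + dz * dz);
        return 1.0f - std::fmin(Dist / Radius, 1.0f);   // 1 at the center, 0 at the edge
    }
};

// As the listener moves, the enclosing zone decides the scene and its field
// decides the parameter value handed to that scene.
std::optional<std::pair<std::string, float>>
Evaluate(const std::vector<Zone>& Zones, const Vec3& Listener)
{
    for (const Zone& Z : Zones)
        if (Z.Contains(Listener))
            return std::make_pair(Z.SceneName, Z.SampleField(Listener));
    return std::nullopt;   // outside every zone: no scene active
}
```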

As users alter these spatial definitions, interaction happens between user and space, as well as between user and user. This is exactly what architectural drawings like The Manhattan Transcripts try to argue.

The future of the music interface

  1. Spatial interface and collaboration
  2. Complex user input

The future of Interactive Architecture

  1. Hyper-dynamic media for instant experience
  2. Augmented reality

Why do we need it?

The acoustic world is the future of interactive architecture: sound scenes can be far more tangible than visual or material space.

How to do it?

Unreal Engine provides full support for sound processing and spatialization.
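As one example of that support, a spatialized cue can be played at a world location with a single call in UE4 C++; the sound and attenuation assets below are placeholders, not assets from this project.

```cpp
// UE4 sketch: play a sound at a world location with distance attenuation so it
// is spatialized relative to the listener. Asset references are placeholders.
#include "Kismet/GameplayStatics.h"
#include "Sound/SoundAttenuation.h"
#include "Sound/SoundBase.h"

void PlaySpatializedCue(UObject* WorldContext, USoundBase* Cue,
                        USoundAttenuation* Attenuation, const FVector& Location)
{
    if (!Cue) return;
    // Spawns a fire-and-forget audio component at Location; the attenuation asset
    // controls spatialization (falloff curve, spread, occlusion, and so on).
    UGameplayStatics::SpawnSoundAtLocation(WorldContext, Cue, Location,
                                           FRotator::ZeroRotator,
                                           /*VolumeMultiplier=*/1.0f,
                                           /*PitchMultiplier=*/1.0f,
                                           /*StartTime=*/0.0f,
                                           Attenuation);
}
```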

Who does it serve?

Visual and audio performers, the blind community, exhibition organizers, architectural designers, and anyone interested in sonic space.

Engineer it

or

PLAY IT.

When to release?

The project is in the prototype stage; the field class is under testing.

Before 2021: First usable build for basic object creation and mapping

Spring 2021: Testing with musicians and sound designers.

Summer 2021: Open test release. Multiplayer will enter development.

See also:

Research Essay

Plan & Progress

20201219 – ↑↑↑↑↑↑↑↑ Granular Synth Overlay
20201218 – ↑↑↑↑↑↑↑↑ Collision Input on Objects, Chord randomize/arpeggiator, Synth presets

This version is a cleanup:
1. Zone and Field are merged into a single “Space” class, which can carry a field layer and a zone layer.
2. Sound Element and Visual Element are merged into a single “Object” class, an open framework carrying an acoustic layer and a visual layer. An Object is both a trigger and a receiver, and messages from other objects can affect it. (There is now no difference between triggers and elements; it depends on which behaviors are added to the object.) A structural sketch follows after the link below.

https://streamable.com/dp2wl1
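A hedged sketch of how that merged structure could look (placeholder types, not the actual classes): a single Object carries optional acoustic and visual layers, and whether it behaves as a trigger or an element depends only on the behaviors attached to it.

```cpp
// Hypothetical sketch of the merged "Object" structure described above.
#include <functional>
#include <memory>
#include <vector>

struct Message { float Value; };             // whatever one object passes to another

struct AcousticLayer { float Pitch = 440.0f; };
struct VisualLayer   { float Brightness = 1.0f; };

struct Object
{
    // Optional layers: an object may be sound-only, visual-only, or both.
    std::unique_ptr<AcousticLayer> Acoustic;
    std::unique_ptr<VisualLayer>   Visual;

    // Behaviors decide how the object reacts to incoming messages; whether it
    // acts as a "trigger" or an "element" depends only on what is added here.
    std::vector<std::function<void(Object&, const Message&)>> Behaviors;

    std::vector<Object*> Receivers;          // objects this one sends messages to

    void Receive(const Message& Msg)
    {
        for (auto& Behavior : Behaviors)
            Behavior(*this, Msg);
    }

    void Send(const Message& Msg)
    {
        for (Object* R : Receivers)
            if (R) R->Receive(Msg);
    }
};
```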
20201016 – ↑↑↑↑↑↑↑↑ Constant spawn input + custom mod synth + system look and feel
https://streamable.com/s8vmhb
20201016 – ↑↑↑↑↑↑↑↑ Collision note input
https://streamable.com/o0g0qx
20201015 – ↑↑↑↑↑↑↑↑ Object Create/Delete + synth input + data mapping
https://streamable.com/xeaonl
20201013 – ↑↑↑↑↑↑↑↑ Tool Set + Mode Switch + Spawning Actors + Snap
https://streamable.com/wpq41g
20201006 – ↑↑↑↑↑↑↑↑ Optimized Menu + Wavetable
https://streamable.com/ni4sdg
20200926 – ↑↑↑↑↑↑↑↑ Optimized Menu + menu data flow
https://streamable.com/myvreb
20200923 – ↑↑↑↑↑↑↑↑ Real-time control over the field!!! (Modulation still mapped to pitch)
https://streamable.com/j44wc2
20200922 – Trace Channel Switch + MouseClickMenu Setup

The Mouse Click Menu spawns near the player while editing the zone and field channels, and near the target while editing visual/sound elements. It is used to modify actors in the different edit modes. TAB is bound to switching the Edit Mode: 1. edit geometry (size, shape, rotation); 2. edit data (modulation mapping, mapping range, mapping target, etc.). A sketch of this switch follows below.
Lag becomes obvious after running UE for a whole day; RAM is overdriven.
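A minimal sketch of that TAB-driven switch (hypothetical names, not the project’s input setup):

```cpp
// Hypothetical sketch of the TAB-driven edit mode switch described above.
enum class EEditMode { Geometry, Data };   // 1. size/shape/rotation  2. mapping settings

struct EditState
{
    EEditMode Mode = EEditMode::Geometry;

    // Bound to TAB: flip between the two edit modes; the Mouse Click Menu then
    // shows geometry controls or data-mapping controls accordingly.
    void ToggleEditMode()
    {
        Mode = (Mode == EEditMode::Geometry) ? EEditMode::Data : EEditMode::Geometry;
    }
};
```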
20200919 – Time Synth Setup + Menu Design
https://streamable.com/xgvcoj
20200917 – Sample from Field + Automatic Offset (LFO) ✌✌✌
https://streamable.com/m50c1j
20200913 – Field Class build up + Manual brush editing?
https://streamable.com/aeppwg
20200903 – Zone Class build up + Manual Binary Subdivision function