by Sriram Viswanathan on Wednesday, March 6, 2013
- Performing Arts and Music
The idea, in summary, is to use people as sound/music elements, enabling a dynamic musical collaboration environment that is not pre-decided in any way. Each person's location and their proximity to others would define new sounds in every possible interaction.
The basic idea is to take techniques like frequency modulation, sound effects, and sampling/synthesis algorithms, and assign them to various trackers and markers. These markers could then be tracked using a webcam: each audience member is given a 'hat' to wear with a marker on top, so that a roof-mounted webcam can see all the markers and track their movement.
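To make the mapping concrete, here is a minimal sketch of how a tracked marker position could drive a sound parameter. The specific mapping (proximity between two hats controlling an FM modulation index) and all names, ranges, and coordinates below are illustrative assumptions, not a finished design:

```python
import math

def distance(a, b):
    """Pixel distance between two marker centroids in the camera frame."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def proximity_to_mod_index(d, max_dist=500.0, max_index=8.0):
    """Closer markers -> stronger modulation; 0 when far apart.
    max_dist and max_index are made-up tuning values."""
    closeness = max(0.0, 1.0 - d / max_dist)
    return max_index * closeness

def fm_sample(t, carrier_hz, mod_hz, mod_index):
    """One sample of a basic FM voice at time t (seconds)."""
    return math.sin(2 * math.pi * carrier_hz * t
                    + mod_index * math.sin(2 * math.pi * mod_hz * t))

# Two hats as seen by the overhead camera (pixel coordinates):
hat_a, hat_b = (100, 120), (160, 200)
idx = proximity_to_mod_index(distance(hat_a, hat_b))
sample = fm_sample(0.01, carrier_hz=220.0, mod_hz=110.0, mod_index=idx)
```

In a real setup the marker positions would come from per-frame blob detection (e.g. with OpenCV) rather than hard-coded tuples, and the output would feed a real-time synthesis engine instead of computing one sample.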
Another way this can be done is by tracking infrared LEDs for movement, but that approach cannot identify 'specific' people; it can only count how many things are in an area and measure similar aggregates.
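The LED alternative amounts to thresholding a camera frame and counting bright blobs (connected components). The sketch below illustrates this on a tiny hand-made grayscale frame; the frame data and threshold are invented test values, and it counts how many LEDs are in view without telling which person is which, which is exactly the limitation noted above:

```python
def count_blobs(frame, threshold=200):
    """Count connected bright regions in a 2D grayscale frame."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                blobs += 1
                stack = [(r, c)]  # flood-fill this bright region
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < rows and 0 <= x < cols
                            and frame[y][x] >= threshold and not seen[y][x]):
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x),
                                      (y, x + 1), (y, x - 1)])
    return blobs

# Two bright LED spots in a dark 4x6 frame:
frame = [
    [0,   0, 255, 255, 0, 0],
    [0,   0, 255,   0, 0, 0],
    [0,   0,   0,   0, 0, 0],
    [250, 0,   0,   0, 0, 0],
]
print(count_blobs(frame))  # two separate bright regions
```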
The collaboration should have 'some' demonstration element to it, but ultimate freedom should be left to the crowd. The idea is inspired by the ReacTable, a touch-based surface with elements for sound creation.
Wide-angle webcam; cardboard/felt for making colored or marked hats, or colored lights; and good-quality speakers (preferably multiple speakers for localized sounds).
Sriram has a bachelor's in computer engineering and an MS in Music Technology from the Georgia Tech Center for Music Technology (GTCMT), Atlanta, GA, USA. He is currently employed by ThoughtWorks, an IT consultancy firm based out of Gurgaon.