This is a demonstration of the Clickable Space authoring suite, developed by Mike Wozniewski and Zack Settel at the Society for Arts and Technology in Montreal, Canada.
The software allows for the creation of spatial interfaces, where interaction is based on the relative movement of 3D content and the resulting geometric events (e.g. intersection, relative distance, and incidence) that occur. A user is represented in the interaction scene by an avatar, which is controlled via standard input devices (e.g. keyboard, mouse, joystick, Wii controllers) or via real-time tracking systems. Thus, a user's (avatar) movement in the scene relative to other content generates geometric events. These events can be mapped onto actions, which are in turn sent to systems that control sound, lighting, graphics, or other media.
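To make the interaction model concrete, the following minimal sketch shows how geometric events such as intersection, relative distance, and incidence might be derived from the 3D transforms of an avatar and a scene node. All names and the bounding-sphere test are illustrative assumptions, not the Clickable Space API.

    # Sketch (hypothetical names, not the Clickable Space API): deriving
    # geometric events between an avatar and one scene node.
    import math
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        position: tuple                      # (x, y, z) world position
        radius: float                        # bounding-sphere radius for intersection tests
        heading: tuple = (0.0, 0.0, 1.0)     # unit vector the node is "facing"

    def distance(a: Node, b: Node) -> float:
        return math.dist(a.position, b.position)

    def intersects(a: Node, b: Node) -> bool:
        # Overlapping bounding spheres are treated as an "intersection" event.
        return distance(a, b) <= a.radius + b.radius

    def incidence(a: Node, b: Node) -> float:
        # Angle (radians) between a's heading and the direction from a to b;
        # small values mean a is pointing at b.
        dx, dy, dz = (b.position[i] - a.position[i] for i in range(3))
        norm = math.sqrt(dx*dx + dy*dy + dz*dz) or 1.0
        dot = sum(h * d / norm for h, d in zip(a.heading, (dx, dy, dz)))
        return math.acos(max(-1.0, min(1.0, dot)))

    def geometric_events(avatar: Node, node: Node):
        # Emit (event, value) pairs describing the avatar's relation to the node.
        yield ("distance", distance(avatar, node))
        yield ("incidence", incidence(avatar, node))
        if intersects(avatar, node):
            yield ("intersect", 1)

    avatar = Node("performer", (0.0, 0.0, 0.5), 0.5)
    region = Node("trigger_zone", (0.2, 0.0, 0.5), 0.4)
    for event, value in geometric_events(avatar, region):
        print(event, value)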
In a typical application for live performance, a motion-tracked performer interacts with an overlaid virtual 3D interaction scene that corresponds to the exact dimensions of the performance space. As the performer moves about the stage, his/her avatar encounters the geometric content defined in the scene and triggers the associated actions; his/her use of a wireless controller adds the modality of "clicking", thus offering a wider range of possible event-to-action mappings.
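One way to picture such an event-to-action mapping, including the "clicking" modality, is sketched below. The action names and the sound-control mapping are assumptions chosen for illustration only.

    # Illustrative sketch (hypothetical action names): the same geometric event
    # can produce different actions depending on whether the performer's
    # wireless controller is "clicked".
    def map_event_to_action(event: str, value, clicked: bool):
        if event == "intersect" and not clicked:
            return ("sound/play", "ambient_loop")
        if event == "intersect" and clicked:
            return ("sound/play", "accent_hit")
        if event == "distance":
            # Continuous mapping: closer to the node -> higher gain.
            gain = max(0.0, 1.0 - value)
            return ("sound/gain", round(gain, 3))
        return None

    print(map_event_to_action("intersect", 1, clicked=True))    # ('sound/play', 'accent_hit')
    print(map_event_to_action("distance", 0.2, clicked=False))  # ('sound/gain', 0.8)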
The system operates in a distributed fashion over IP networks, supporting applications that can be shared between multiple users in different locations.
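As a rough illustration of that distributed message flow (the message format and peer list below are assumptions, not the actual Clickable Space protocol), an action resulting from a scene event could be forwarded to remote sites over UDP so that each location reacts to the same interaction:

    # Minimal sketch (assumed message format): forwarding an action to remote
    # peers over UDP so that several sites can respond to the same scene event.
    import socket

    PEERS = [("192.168.1.20", 9000), ("192.168.1.21", 9000)]   # example hosts

    def send_action(action: str, value) -> None:
        message = f"{action} {value}".encode("utf-8")
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            for host, port in PEERS:
                sock.sendto(message, (host, port))

    send_action("sound/play", "accent_hit")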