GridJam is a real-time, geographically distributed, networked multimedia event. It is an experimental project that brings together visual artists, composers, musicians, and computer scientists using the new high-speed international LambdaRail network. GridJam will demonstrate real-time, low-latency, interactive, distance computing through the complexity of a live, partly improvised, 3D-visualized musical performance, serving both as a world-class work of art and as a research project in high-performance collaborative network computing.
GridJam will use artist Jack Ox and programmer David Britton's Virtual Color Organ to visualize Alvin Curran's electronic music, performed by musicians located in distant locales but connected via next-generation networking technologies. The Virtual Color Organ (VCO) is a 3D immersive environment in which music is visually realized in colored and image-textured shapes as it is heard. The visualization remains as a 3D graphical sculpture after the performance. The colors, images, shapes, and even the motions and placement of the visualized musical shapes are governed by artist-defined metaphoric relationships, created by hand as aesthetic and symbolic qualities rather than algorithmically. The VCO visually illustrates the information contained in the music's score, the composer's instructions to the musicians, and the musicians' contributions to the score as they improvise in reaction to each other's performances and to the immersive visual experience. Illustrative of synesthesia and intermedia, the VCO displays the emergent properties within the meaning of music, both as information and as art.
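As a rough illustration of how such hand-authored metaphoric relationships might be represented in software, the sketch below shows one sound carrying an artist-defined record linking it to its visual treatment. This is a minimal sketch only; the class and field names are hypothetical and are not taken from the VCO codebase.

```python
# Hypothetical sketch of an artist-defined mapping record for a single sound.
# Field names and values are illustrative only, not the actual VCO data model.
from dataclasses import dataclass


@dataclass
class VisualMapping:
    sound_id: str              # which sound file / MIDI trigger this applies to
    texture_map: str           # hand-drawn landscape image used as base texture
    color_gradient: list[str]  # layered colors chosen for the sound's timbre
    shape_model: str           # path to the hand-made 3D model for this sound
    motion: str = "timeline"   # how the object moves when the sound plays
    notes: str = ""            # the artist's own description of the metaphor


# Example entry, hand-authored rather than computed:
lion_roar = VisualMapping(
    sound_id="lion_roar_01",
    texture_map="desert_rock_formation_3.png",
    color_gradient=["ochre", "deep red", "umber"],
    shape_model="models/lion_roar_01.obj",
    notes="low, rough timbre mapped to heavy layered earth tones",
)
```

The point of such a structure is that every visual attribute is an explicit, artist-chosen value rather than the output of an algorithm.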
This aesthetic, rather than algorithmic, quality is exemplified in the extremely high-resolution, detailed, hand-drawn imagery of desert landscape formations that comprises the current "visual organ stop" to be used for GridJam. The images are designed to provide suggestive metaphors for the representation of timbre-based contemporary music. In this case, composer Alvin Curran's library of nearly two hundred found sounds provides the starting point for the electronic composition. For GridJam, each of these sound files is given a hand-made 3D object whose shape is based on the amplitude and pitch waveforms derived from the sound file's audio analysis.
GridJam will employ 4 musical performers at 4-6 networked nodes. Each node will be a system of one or more computers providing the following services:
These nodes are attached to the Lambda optical network for the lowest possible latency in transmissions among one another. The data transmitted consists of low-bandwidth MIDI control signals, the digitized audio stream, and the high-bandwidth video avatar streams, as well as the high-bandwidth captured video imagery from the observational viewpoints.
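To illustrate the kind of low-bandwidth control traffic involved, the sketch below sends a single timestamped MIDI note-on event between nodes over plain UDP. This is a minimal sketch under assumed parameters; the peer address, port, and packet layout are illustrative and do not describe the GridJam transport software, which is yet to be developed.

```python
# Minimal sketch: sending a timestamped MIDI note-on message between nodes over UDP.
# Address, port, and packet layout are illustrative assumptions, not the GridJam protocol.
import socket
import struct
import time

REMOTE_NODE = ("127.0.0.1", 9000)   # hypothetical peer address (loopback for illustration)


def send_note_on(sock, channel, pitch, velocity):
    # Pack a send timestamp (useful for latency measurement) plus the 3-byte MIDI message.
    status = 0x90 | (channel & 0x0F)  # note-on status byte
    packet = struct.pack("!dBBB", time.time(), status, pitch & 0x7F, velocity & 0x7F)
    sock.sendto(packet, REMOTE_NODE)


sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_note_on(sock, channel=0, pitch=60, velocity=100)   # middle C
```

An 11-byte datagram per note event is negligible next to the audio and video streams, which is why the MIDI control channel is described above as low bandwidth.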
The VCO is a computational system for translating musical compositions into visual performance. This interactive instrument consists of three basic parts:
The VCO is capable of having multiple visual organ stops. An organ stop on a traditional organ is a voice that affects the entire keyboard of notes. An organ stop in the VCO is the 3D immersive environment in which the visualized music will exist, as well as the visual vocabulary applied to the musical objects. GridJam will take place within black-and-white, hand-drawn desert sand and rock structures derived from real deserts in California and Arizona. The original drawings by Ox, from which the 3D models were made, serve as the basic texture maps for all of the musical objects.
For some 30 years, Jack Ox's artistic research has been focused on the "translation" of musical language, composition, orchestration, notation, and sound-structure into original visual representations, using models spanning the Romantic symphonies, Schwitters' post-Dada sound-poems, and purely digitally generated structures; these "paintings" and drawings always recast her sound-sources in extremely rigorous but nonetheless fanciful visual creations, visually emulating, say, the color of a trombone section or a random-generating algorithm, as if these and similar sound-objects were naked models in front of her.
In the context of GridJam, the music takes on new meanings as it is both subject and object, live and deferred, acoustic and electronic, near and far, all at the same time. Nothing in this overview can be said to be new, except for the fact that the music in one form or another is practically interchangeable with its visually projected images, either as a passive representation of a given set of 3-dimensionally visualized waveforms or as an active modifier and transformer of the same.
Since the early 1980s, using land-based European radio grids, I have created works which, virtually or realistically, were made by combining musicians and natural sounds from multiple and very distant geographical locations, so the challenge of making a music from 4 distantly located sites is not new to me. What is new is that this music will produce a stream of images as it is made, allowing one to hear and see into the most minute structures of sound.
In this context of very advanced communications networks, multimedia caves, and simultaneous performance locations, I have made a seemingly strange decision to favor well-known acoustic "flavors": the ubiquitous grand piano, the human voice, viola, bass flute/soprano saxophone, trombone, and violoncello, as well as the laptop computer, whose role will be the electronic transformation of all sounds emitted, acoustic and digital, including sounds created by the computers themselves.
The piano, too, is no ordinary piano but a Disklavier, capable of triggering sampled sounds or processing messages. So, from the orchestrational view, there will be 4 quartets, one in each of the given performance locations, each with a Disklavier grand piano, a high soprano singer, one of the above-mentioned instruments (violin, flute, trombone, cello), and one computer player.
While the origins of Jack Ox's waveform images are derived from a set of nearly 200 recorded natural sounds of mine (car crashes, lion roars, Pavarotti's highest note, etc.), these sounds, triggered by the pianos, computers, or other MIDI controllers, will seldom be heard in their raw form but more often radically transformed by real-time stretching, phase vocoding, shuffling, granularization, sequential frequency modulation, and extreme filtering carried out by the computer player. The purely acoustic sounds (voice, piano, winds, and strings) will be modified by similar processing at selected moments; this will confer a continuous flow of timbral qualities, especially on the acoustic instruments, which of course will also be heard unmodified.
On the "macro" level, the composition will have simple unifying qualities common to much Western music: tonal centers, equal-tempered and microtonal scales, drones, pulses, repetitive melodic motives, etc., but these elements will not necessarily conform to "good musical practice," as both unpredictable transformations and conjunctions of the sonic events themselves may lead to unforeseeable consequences.
On the "micro" level, the composition, using conventionally notated music, structurally improvised music, and algorithmically determined events, will comprise a large array of simple compositional structures, including massive walls of sound, textures of intense quiet and emptiness, polyphonic layering of sustained tones, unison melody, and rapid jump-and-cut gestures, all reflecting Ox's centrifugal voyage over the desert floor. A critically important piece of software will determine in what way and to what degree the images are "controlling the music" and vice versa, so the entire music will have moments of complex but predictable coordination as well as completely unforeseeable relations between the sonic and visual actions. Each performance of this work will be in itself a "first performance."
The final relation between all 4 musical groups and each of their projection environments leads to a summed result that no one musician, artist, or spectator can see and hear at once. Hence, while there will be 4 interrelated concerted events taking place simultaneously, each will have its own "local character" but will musically be summed and mixed with the others, or parts of others, appearing both visually and audibly in each space, with each space at the same time free to decide its degree of autonomy. The details of the algorithmic control required for the above musical interactions are currently being developed.
Alvin Curran
Rome
January 6, 2005
The systems for production of the electronic music include the following components:
This system relies on off-the-shelf inexpensive hardware, but requires the development of custom software for the laptops, and the development or refinement of networking software for the transmission and receipt of MIDI and digitized audio streams. This software development is a significant component of the work to be funded under this proposal.
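As one illustration of what the networking software for digitized audio might involve, the sketch below shows the receive side of an audio stream in which each UDP packet carries a sequence number followed by a block of 16-bit PCM samples. The packet format, port, and block size are assumptions made for illustration; they are not the software to be developed under this proposal.

```python
# Minimal sketch of the receive side of a digitized audio stream: each UDP packet
# carries a 4-byte sequence number followed by a block of 16-bit PCM samples.
# Packet format, port, and block size are assumptions for illustration only.
import socket
import struct

import numpy as np

PORT = 9001          # hypothetical audio port
BLOCK_SAMPLES = 256  # small blocks keep per-packet latency low

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))

expected_seq = 0
while True:
    packet, _addr = sock.recvfrom(4 + BLOCK_SAMPLES * 2)
    (seq,) = struct.unpack_from("!I", packet, 0)          # sequence number
    samples = np.frombuffer(packet, dtype=">i2", offset=4)  # big-endian 16-bit PCM
    if seq > expected_seq:
        # One or more packets were lost or reordered; fill the gap with silence
        # rather than waiting, since waiting would add audible latency.
        silence = np.zeros((seq - expected_seq) * BLOCK_SAMPLES, dtype=np.int16)
        # ...hand `silence` to the audio output stage here...
    expected_seq = seq + 1
    # ...hand `samples` to the audio output / mixing stage here...
```

The design choice sketched here, dropping lost packets instead of retransmitting, reflects the low-latency priority of a live performance over a dedicated optical network.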
The original building blocks of GridJam come from a list of almost 200 collected sounds belonging to Curran. The sound files include John Cage reciting short phrases, Maria Callas singing a high note, animal sounds, objects such as coins being tossed, and various musical sounds. Ox ran them through a Max program that creates graphs of melody and dynamics. She sorted them into 8 groups based on these simple visualizations and made 3D models reflecting melody on the front and dynamics on the top of each object. There are eight groups because there are eight landscape texture maps; each object in a group has the same hand-drawn landscape picture as the base of its texture map. The colors are based on the timbre of each specific sound sample and can be rather complicated gradients, often in layers. If the sound sample contains a sequence of vowel sounds, the colors in the gradient reflect them according to Ox's color system, which defines how and where vowels are produced in the vocal tract. Other timbre colors come from the extensive list of timbres created by Ox.
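The analysis step described above was carried out in Max; a rough stand-in for the same idea is sketched below, assuming mono 16-bit WAV input. The frame size and the autocorrelation-based pitch estimate are simplifications for illustration and do not reproduce Ox's actual patch.

```python
# Sketch: extract a loudness (dynamics) envelope and a rough pitch (melody) contour
# from a mono 16-bit WAV file, analogous to the graphs produced by the Max program.
# Frame size and the autocorrelation pitch estimate are illustrative simplifications.
import wave

import numpy as np

FRAME = 1024  # samples per analysis frame


def analyze(path):
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()
        raw = wav.readframes(wav.getnframes())
    samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64)

    loudness, pitch = [], []
    for start in range(0, len(samples) - FRAME, FRAME):
        frame = samples[start:start + FRAME]
        loudness.append(np.sqrt(np.mean(frame ** 2)))              # RMS amplitude
        # Crude pitch estimate: strongest autocorrelation peak beyond very short lags.
        ac = np.correlate(frame, frame, mode="full")[FRAME - 1:]   # lags 0..FRAME-1
        lag = np.argmax(ac[20:]) + 20
        pitch.append(rate / lag)                                   # lag -> frequency in Hz
    return np.array(loudness), np.array(pitch)
```

The two returned envelopes correspond to the two faces of Ox's models: the pitch contour shaping the front and the dynamics envelope shaping the top.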
The 3D objects made by Ox from Curran's collected sound files will be able to appear in small portions of the whole, based on the duration of the played sound. If the Disklavier key remains depressed for longer than the actual sound file, the object will begin to appear a second time, moving along the time line. The time line follows a path from the center of the virtual desert outward until it curves up and back towards the center. Before reaching the center it curves up again and moves outward once more, repeating this pattern as many times as necessary to accommodate the playing. Because of this path, time moves both horizontally and vertically.
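A minimal geometric sketch of such a switchback time line is given below, mapping elapsed playing time to a horizontal distance from the center and a height. The pass duration, maximum radius, and climb per pass are invented parameters for illustration, not values taken from the VCO.

```python
# Sketch: map elapsed time to a position on a switchback time line that runs
# outward from the center, turns back, climbs, and repeats. Pass duration,
# radius, and climb per pass are illustrative values, not the VCO's actual ones.
def timeline_position(t, pass_seconds=30.0, max_radius=100.0, climb_per_pass=10.0):
    n_pass = int(t // pass_seconds)           # how many passes have been completed
    frac = (t % pass_seconds) / pass_seconds  # progress within the current pass
    if n_pass % 2 == 0:
        radius = frac * max_radius            # even passes: center -> outward
    else:
        radius = (1.0 - frac) * max_radius    # odd passes: back toward the center
    height = (n_pass + frac) * climb_per_pass  # time also climbs steadily upward
    return radius, height                      # horizontal and vertical components


# Example: a sound held for 75 seconds ends partway through its third pass.
print(timeline_position(75.0))                 # -> (50.0, 25.0)
```

This captures the stated behavior that time advances both horizontally (out and back across the desert) and vertically (each pass sits above the last).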
At the end of the performance the visualization will remain and can be more thoroughly inspected as a place with moving objects in a black and white desert environment. We will also have recorded the entire multimedia experience so that it can be played back in digital environments, such as the ever more ubiquitous digital planetarium theaters.
The system for production of the VCO graphics real time virtual reality display includes the following components: