Via Lateral Films: "Composer Mark Applebaum's cryptic, painfully fastidious, wildly elaborate, and unreasonably behemoth pictographic score, The Metaphysics of Notation, consists of 70 linear feet of highly detailed, hand-drawn glyphs, two hanging mobiles, and absolutely no written or verbal instructions.
Installed for one year at the Cantor Arts Center Museum on the Stanford University campus, it received 45 weekly performances from interpreters from around the world.
There's No Sound In My Head investigates the project and Applebaum's development as a composer. Through interviews with composers and musicologists, performance footage, and conversations with Applebaum as he draws in his studio, the film poses questions about the borders between music and visual art."
Via Chronicle Books: "This comprehensive monograph celebrates the visual art of renowned musician Brian Eno. Spanning more than 40 years, Brian Eno: Visual Music weaves a dialogue between Eno’s museum and gallery installations and his musical endeavors."
Brian Eno: "I wouldn’t call myself a synaesthete in the sense that Nabokov was. But I’ll talk about a sound as being cold blue or dark brown. For descriptive purposes, yes, I often see colors when I’m listening to music and think, 'Oh, there’s not enough sort of yellowy stuff in here, or not enough white.'"
Via Daniel Franke: "The basic idea of the project is built upon the consideration of creating a moving sculpture from the recorded motion data of a real person. For our work we asked a dancer to visualize a musical piece (Kreukeltape by Machinefabriek) as closely as possible through the movements of her body. She was recorded by three depth cameras (Kinect), whose overlapping images were later merged into a three-dimensional volume (3D point cloud), so we were able to use the collected data throughout the further process. The three-dimensional image allowed us completely free handling of the virtual camera, without limitations of perspective. The camera also reacts to the sound and supports the physical imitation of the musical piece by the performer. It moves along a noise field, where a simple modification of the random seed can consistently create new versions of the video, each offering a different composition of the recorded performance. The multi-dimensionality of the sound sculpture is already contained in every movement of the dancer, as the camera footage allows any imaginable perspective."
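The pipeline described above, three calibrated depth cameras fused into one point cloud plus a seed-driven virtual camera, can be sketched roughly in Python. This is a minimal illustration under assumed conditions (known camera-to-world transforms, a random-walk noise field for the camera), not a reconstruction of Franke's actual tooling:

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Fuse per-camera point clouds into one volume.

    clouds: list of (N_i, 3) arrays in each depth camera's local space.
    extrinsics: list of (4, 4) camera-to-world transforms, assumed known
    from calibration.
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
        merged.append((homo @ T.T)[:, :3])               # into shared world space
    return np.vstack(merged)

def camera_path(n_frames, seed, scale=1.0):
    """Smooth pseudo-random camera offsets; a new seed yields a new 'version'."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(0.0, scale, size=(n_frames, 3))
    path = np.cumsum(steps, axis=0)                      # random walk in 3D
    # moving-average smoothing so the camera drifts rather than jitters
    kernel = np.ones(9) / 9.0
    return np.stack([np.convolve(path[:, i], kernel, mode="same")
                     for i in range(3)], axis=1)
```

Because the merged cloud lives in world space, any camera path, deterministic per seed, can be replayed over the same recorded performance.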
[ Visual Music ]
Video to Sound and Back Again, and Music
Mixing Video Over an Audio Mixer
Via Create Digital Music: "PixiVisor is software for desktop (Mac, Windows, Linux) and mobile (iOS, Android) that transforms images to sound and back again. Producing sound from images is an idea found in a variety of tools. But PixiVisor is unique in that it goes the other way, too: sound can be turned back into the original imagery as video. In the demo video here from developer Alexander Zolotov, a simple audio mixer can mix together multiple video sources (in beautiful low fidelity) and add effects. A DIY 4-pole plug connects the signal to the mobile gadget – iOS, in this case. The video source (and recording format) is animated GIF files. Alexander Zolotov is also the creator of SunVox, the powerful music-making app."
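PixiVisor's actual encoding isn't documented here, but the general idea, scanning pixel brightness into an audio signal and reversing the scan to recover the frame, can be sketched as follows. The amplitude mapping and `SAMPLES_PER_PIXEL` rate are assumptions for illustration, not PixiVisor's protocol:

```python
import numpy as np

SAMPLES_PER_PIXEL = 4  # assumed hold time; real systems trade this off against fidelity

def image_to_audio(img):
    """Scan an 8-bit grayscale image row by row into an audio signal.

    Brightness is mapped to amplitude in [-1, 1]; each pixel is held for
    SAMPLES_PER_PIXEL consecutive samples.
    """
    line = img.astype(np.float64).ravel() / 127.5 - 1.0  # row-major scan
    return np.repeat(line, SAMPLES_PER_PIXEL)

def audio_to_image(audio, width, height):
    """Invert the scan: average each pixel's samples and reshape to the frame."""
    px = audio.reshape(-1, SAMPLES_PER_PIXEL).mean(axis=1)
    img = np.clip(np.rint((px + 1.0) * 127.5), 0, 255)
    return img.reshape(height, width).astype(np.uint8)
```

Once a frame is just a 1D amplitude signal, it can be routed through mixers and effects like any other audio, which is exactly what makes the mixer demo possible.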
Via Generation Z: "The late 1920s was also the period in which sound was being developed to accompany films and animations in Russia. In 1929 one of the leading experimental Soviet filmmakers, the painter, book illustrator and animator Mikhail Tsekhanovsky (1889-1965), was involved in the production of the first Soviet sound movie Piatiletka. The Plan of the Great Works. When in October of that year the first roll of film was developed, it was Tsekhanovsky who voiced the idea: 'What if we take some Egyptian or ancient Greek ornaments as a sound track? Perhaps we will hear some unknown archaic music?' He was referring to the shapes and outlines of vases and how these could be used as if wave forms to generate sound. It was at this precise moment that the technology of synthesizing sound from light, the so-called Graphical Sound techniques, was invented, producing possibly the first electronic soundtracks ever created.
The group with whom he was working included the talented inventor and engineer Evgeny Sholpo (1891-1951) who was already working on new techniques of so-called performer-less music, but the most outstanding participant in the project was the aforementioned composer Arseny Avraamov. The next day they were already furiously at work on experiments in what they referred to variously as ornamental, drawn, paper, graphical, artificial or synthetic sound. It was Avraamov who completed the first artificial sound tracks in 1930 and by 1936 there were four main trends of Graphical Sound in Soviet Russia: hand-drawn Ornamental Sound (Avraamov, early Boris Yankovsky, 1905-1973); hand-made Paper Sound (Nikolai Voinov, 1900-1958); Variophone or automated Paper Sound (Evgeny Sholpo, Georgy Rimsky-Korsakov); and the spectral analysis, decomposition and re-synthesis technique (Boris Yankovsky). Yankovsky's idea was related to the separation of the spectral content of sound and its formants, resembling the popular recent computer music techniques of cross synthesis and the phase vocoder. It was certainly one of the most radical, paradigm-shifting propositions of the mid 1930s. Researchers involved in Graphical Sound had to overcome enormous technical and theoretical (as well as more mundane) difficulties during its short existence. The results of their work were surprising and unexpected, and ahead of the group's time by decades. However, collision with the state was fatal. In less than ten years, all of their work had ended and was almost instantly forgotten."
Here is a great timeline of the technology of synthesizing sound. Via UMATIC: "Optical sound technology was first developed solely for recording soundtracks for early talkies, and every one of the Russian innovators used their graphical sound techniques to provide music scores for the kino. But the connection with the Visual Music movement in cinema is also very close, with perhaps the works of Norman McLaren providing the strongest bridge. And the direct cinema techniques of many filmmakers from the 1920s and 1930s on through the 1960s and 1970s show more than a casual relationship with the techniques of direct optical sound synthesis. The works of Oskar Fischinger, Len Lye, Stan Brakhage, John Whitney, Hy Hirsch, Harry Smith, Jordan Belson, Larry Cuba and many others all reflect an ongoing lineage of this visual music tradition. (...) My hope is that this small survey sparks more interest in all of these inventors, composers and artists and their incredible works (...)"
Via Create Digital Music: "Now, I could say more, but perhaps it’s best to watch the videos. Normally, when you see a demo video with 10 or 11 minutes on the timeline, you might tune out. Here, I predict you’ll be too busy trying to get your jaw off the floor to skip ahead in the timeline.
At the same time, to me this kind of visualization of music opens a very, very wide door to new audiovisual exploration. Christian’s eye-popping work is the result of countless decisions – which visualization to use, which sound to use, which interaction to devise, which combination of interfaces, of instruments – and, most importantly, what kind of music. Any one of those decisions represents a branch that could lead elsewhere. If I’m right – and I dearly hope I am – we’re seeing the first future echoes of a vast, expanding audiovisual universe yet unseen."
Via Wikipedia: "UPIC is a computerised musical composition tool, devised by the composer Iannis Xenakis. It was developed at the Centre d'Etudes de Mathématique et Automatique Musicales (CEMAMu) in Paris, and was completed in 1977. The name is an acronym of Unité Polyagogique Informatique du CEMAMu. Xenakis used it on his subsequent piece Mycènes Alpha (1978), and it has been used by composers such as Jean-Claude Risset (on Saxatile (1992)), Takehito Shimazu (Illusions in Desolate Fields (1994)), Aphex Twin, Mari King, and Curtis Roads.
Physically, the UPIC is a digitising tablet linked to a computer, which has a vector display. Its functionality is similar to that of the later Fairlight CMI, in that the user draws waveforms and volume envelopes on the tablet, which are rendered by the computer. Once the waveforms have been stored, the user can compose with them by drawing compositions on the tablet, with the X-axis representing time, and the Y-axis representing pitch. The compositions can be stretched in duration from a few seconds to an hour. They can also be transposed, reversed, inverted, and subject to a number of algorithmic transformations. The system allows for real time performance by moving the stylus across the tablet."
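The composition stage described above, strokes on a page with X as time and Y as pitch, can be approximated in a few lines: each stroke becomes a pitch-versus-time breakpoint curve rendered by an oscillator. This sketch substitutes a plain sine for the UPIC's user-drawn waveforms and ignores its drawn volume envelopes:

```python
import numpy as np

def render_strokes(strokes, duration, sample_rate=44100):
    """Render UPIC-like 'arcs' into a mono mix of sine oscillators.

    strokes: list of strokes; each stroke is a list of (time_sec, pitch_hz)
    breakpoints sorted by time. Pitch is interpolated between breakpoints,
    and phase is integrated so glissandi stay click-free.
    """
    n = int(duration * sample_rate)
    t = np.arange(n) / sample_rate
    out = np.zeros(n)
    for pts in strokes:
        times = np.array([p[0] for p in pts])
        freqs = np.array([p[1] for p in pts])
        f = np.interp(t, times, freqs)                  # pitch curve over time
        phase = 2 * np.pi * np.cumsum(f) / sample_rate  # integrate frequency
        active = (t >= times[0]) & (t <= times[-1])     # stroke's time extent
        out += np.sin(phase) * active
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out
```

Stretching a composition from seconds to an hour, as the UPIC allowed, amounts to rescaling the breakpoint times while the pitch axis stays fixed.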
Via IMDB: "Dislocation in time, time signatures, time as a philosophical concept, and slavery to time are some of the themes touched upon in this nine-minute, experimental film, which was written, directed, and produced by Jim Henson – and starred Jim Henson! Screened for the first time at the Museum of Modern Art in May of 1965, Time Piece enjoyed an eighteen-month run at one Manhattan movie theater and was nominated for an Academy Award for outstanding short subject."