Each project informs the next. Design-driven inquiry.
Nosaj Thing Visual Show
The performance centers on disorientation and on using the projector as a light source. The first half of the show is dedicated to slowly building a visual style. The images are black and white lines and squares. The positive/negative space creates a rhythm synced with the audio. As the set progresses, the imagery becomes less abstract and less focused on light. Patterns begin to form and space begins to open, as if the two dimensions exploded into a third. The relationship the graphics create with the performer is interesting: as the graphics become more spatial, the performer flattens, becoming a 2-D cutout version of himself.
The Visual Show was the first project in my thesis studies. I spent the better part of a year working on it, through many iterations. I began by trying to answer my first research question: How can sound create visuals, and how do visuals create music? I thought the right approach would be to program software to accomplish this. I learned Max/MSP and attempted to develop a show with it, but the results were not dynamic and felt cold. They lacked liveness, one of the most important qualities of a concert. I then began using a VJ mixing application, VDMX, to apply live effects to footage I shot for each song. Each song used 2-3 video clips; the look was filmic, but not dynamic enough for Nosaj's music. In the end, the approach we used was based on simple shapes and light. Using the projector as a light source was far more effective than the previous versions, and it fit perfectly with Nosaj Thing's style of music and performance.
What Am I Investigating?
My investigation was led by my research questions: What does it mean for a designer to be the performer? How can sound create visuals, and how do visuals create music? I never quite reached the point where sound could create the visuals directly; instead, I created a set of visuals that mimicked the way the sound is organized. Each song contains pieces: parts of drums, synthesizers, bass, and so on. I decided to use this structure to constrain our performance. Because we couldn't use the sound itself to generate visuals on the fly, this structure is what enabled us to perform our visual set live, in concert. It also led to every experiment and project that followed; each is inspired by, or an offshoot of, the Visual Show.
A Visual Prompt
After the earlier sound to image translations (link to it), I found that the results I was getting were not musical. People submitted files of themselves playing guitar, but the recordings lacked the richness that a musical composition could have. The data felt more like doodles than a finished piece. So I asked five musicians I knew to compose a song based on a video I made: 55 seconds of liquid light footage, manipulated and layered. I asked them not to think of it as a score, but to use the piece as inspiration for songwriting. The length of the finished song depended on the musician, not the video. They could make their composition perfectly synced with the visual, or use it as a starting point for a 10-minute opus.
What Am I Investigating?
The ultimate goal is to use my ability as a visual maker to influence or inspire the creation of music. I am challenging the question of how a musician collaborates with a designer. I want to make a contribution in music, but how do I make those contributions? This prompt, like my other research projects, is attempting to use an alternative method for generating content. The ideal situation would have one or all musicians using the song for their own album material. Granted, this technique has been used before for film soundtracks and art projects, but the emphasis for this project is me finding my place amongst musicians and designers. It helps define the work I will pursue in the future.
The tracks I received in return all captured a certain moodiness that comes through in the silent prompt. Nosaj Thing composed a soothing, synchronized track, uncharacteristic of his usual style. He embraced the visual and composed the song like a true professional sound designer. Kevin Corcoran recorded his track in an old key-making shop, 4 feet wide and 20 feet deep, using an old weather radio and mixer feedback. Bulbs recorded his track after a night of bike rides and breathing practice; the focus of his track is subtlety. Chris Larsen said he was going to use the two tracks he recorded for his own album: "I am going to stick with this and make an album..." I didn't expect that response on the first try, but it is encouraging for my future endeavors.
This study began with a conductive pencil that makes a buzzing noise as you draw. I used an Arduino-based kit called Drawdio as a research tool. My interest was very focused at this point: How can I translate music into an image? What makes the translation meaningful? I had already used a computer to literally translate the audio file of a song into its respective visual data, but I found that the process lacked character. Music is powerful because it can evoke emotion. Emotion is not a quantifiable quality, so why should I use a computer program to translate a song for me?
Using the buzzing pencil, I asked two of my classmates to draw portraits of musicians while they listened to their music. They were told to use the pencil to draw, but they also had to make a beat that coincided with the song they were hearing. By having a person translate the music manually, the visual (the portrait) is directly affected by the music, in real time.
The second iteration of this project uses a larger canvas for the users to draw on. The first version confined the person drawing to an 8.5" x 11" sheet of paper, so their movements and gestures were understated. Using a large sheet of paper and giving them a wall to draw on enabled their movements to be more expressive.
In this version, the portraits became the focal point, as opposed to the buzzing pencil. The outcome was a set of large portraits made with graffiti markers, so the lines are much bolder and more expressive than in the pencil version. Because the portraits are so large, the individual strokes became less important; instead, each portrait developed over the full course of a 3-4 minute song. In the pencil iteration, the participants did not use the full length of the song to finish their portraits.
Still, there were many layers to the project that the users had to manage. They had to listen to the music, be mindful of the buzzing marker, and also reference the portrait of the artist as they drew. They were able to do this, but I wanted to simplify the process, hence the third iteration: the Blind Fold version.
Blind Fold Iteration
This version uses the same buzzing marker as the last iteration, but the user is blindfolded. They are told not to draw what the artist looks like; instead, they must draw the rhythm and beat of the music. The Blind Fold iteration removes any concern for formal qualities while drawing. The user does not have to look at a portrait of an artist, and cannot even see the paper. This frees up their ears and allows them to focus solely on the music and on how their arm movements relate to the song.
The drawings turned out to have a rhythmic quality that was not seen in the first two versions. The focus becomes less about the artist and more about the mechanics of drawing. How does a person respond to a drum beat? What does a droning vocal look like? How do you draw a build? These questions are all answered in the documentation of this iteration. The short answer is: in rhythm.
Image to Sound Translations
Translation became an interest while working on Nosaj Thing's Visual Show. The early ideation for that project involved Nosaj's Ableton Live set controlling parameters in Max/MSP and Processing to create live, dynamic visuals perfectly synced to his music. This would have allowed the visuals to be created solely through his music, live. That idea never came to fruition, but it is explored in this study. The emphasis is not on the technology, but on how a person interprets it. Using a program that translates .wav files into a .bmp image, I asked users of Amazon's Mechanical Turk to play what they think the image sounds like. I sent out a video clip that had no sound, just a line that moved across the image to keep time.
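The text doesn't name the program that did the .wav-to-.bmp translation, but the underlying idea of mapping raw audio sample data onto a pixel grid can be sketched in a few lines. This is a minimal, hypothetical Python version (the function name and the grayscale mapping are my assumptions, not the actual tool used), which treats each 8-bit sample as one pixel's brightness and writes a standard 24-bit BMP by hand:

```python
import struct

def samples_to_bmp(samples, width):
    """Hypothetical sketch: map 8-bit audio samples (0-255) to grayscale
    pixels in a 24-bit BMP. Loud samples become bright pixels. This is an
    illustration of the general technique, not the author's actual program."""
    height = max(1, len(samples) // width)
    pixels = samples[:width * height]
    # BMP rows are stored bottom-up and padded to 4-byte boundaries.
    row_size = (width * 3 + 3) // 4 * 4
    image_size = row_size * height
    header = struct.pack(
        "<2sIHHIIiiHHIIiiII",
        b"BM", 54 + image_size, 0, 0, 54,  # file header: size, pixel offset
        40, width, height, 1, 24,          # DIB header: 24 bits per pixel
        0, image_size, 2835, 2835, 0, 0,   # no compression, 72 DPI
    )
    body = bytearray()
    for row in range(height - 1, -1, -1):  # bottom row first
        for col in range(width):
            v = pixels[row * width + col]
            body += bytes((v, v, v))       # grayscale: B = G = R
        body += b"\x00" * (row_size - width * 3)  # row padding
    return header + bytes(body)
```

Real .wav data would be pulled in with the standard-library `wave` module and rescaled to 0-255 before being handed to a function like this; the point is only that the translation is a mechanical remapping of amplitude to brightness, which is exactly why the resulting graphic carries rhythm (peaks) but no musical meaning a viewer could read back.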
How does an image get translated into sound? Is it through the computer, is it through the human eye? Is it both? Can there be a magic box that can turn any graphic into music? These were all questions that were investigated as I conducted this research project.
This study is interesting in what it reveals. The original image (above) is me playing the A chord every fourth beat. The graphic shows rhythm in the bold peaks, and sustain in the waves dissolving into the next peak. I asked people to play it with no understanding of how the translation was made. Some people followed the rhythm of the peaks and were able to stay in time, but always with a different chord. Even if someone somehow guessed that the graphic was an A chord, that's not the point of interest; I wanted to see whether a person could use the graphic to express themselves. The results were all over the place. People sent in songs of themselves soloing with heavy distortion, gentle fingerpicking, dissonant chord strumming, and something reminiscent of the 7th Heaven theme song. Every single person interpreted the graphic so differently that there was no way for me to group the responses. The graphic is visually complex and not legible to a person unfamiliar with the computer program, so I can see how the responses ended up so scattered. There is not enough guidance for the user.
To reduce the randomness, I had to create a video that directed the user better than the first attempt. I made two videos with very clear instructions: play the C chord every time you see a shape, and play the F chord when the cursor hits the line. These two videos were sent out to users of Amazon's Mechanical Turk, and I asked them to follow the directions and send me an .mp3 of themselves playing the guitar.
These results were not illuminating in any way. We all know that everyone will play a C chord differently: the chord stays the same, but many factors (the type of microphone, the type of guitar, etc.) determine the way a recording sounds. I received 10 clips of people playing the F chord along with a video, but so what? In contrast, the project "In B flat" by Darren Solomon shows the possibilities of using anonymous musicians to create a beautiful piece, and it is very successful because the parameters are just right. Solomon asked contributors "to sing or play an instrument, in Bb major. Simple, floating textures work best, with no tempo or groove. Leave lots of silence between phrases." The result is a mesmerizing collage of musical meandering in the key of B flat.
As I began working on my thesis, I had to come to terms with what my skill set was. I tried programming in Max/MSP; I'm not a programmer. I tried designing instruments; I'm no industrial designer. In the end, I focused on what I know how to do best: make do. These failures are all part of the process. It's necessary to find out what works and what doesn't.
It was crucial to get responses from different genres of music in order to see how the visual translates. I reached out to Nosaj Thing to get the project started. I asked two musicians who make rock music to contribute: Chris Larsen from Buildings Breeding and James Higgs from Spanish Prisoners. I also asked two friends who make experimental music, Jonathan Almaraz from Bulbs and Kevin Corcoran.
Norman Klein has been an influence on my work since I started working with Nosaj Thing. He has a way about his words that makes sense with my nonsense. His constant spewing of ideas is integral to articulating my thoughts. I come to him with a vision. I say “Norman, I’m making a project that does this.” He says a million and one things in response.