Art Center MDP Thesis 2011
By Link Huang

Physical Motion in Mutating Kinetic Interfaces

Humans are fascinated by motion, whether created by physical objects or displayed on a digital screen. Anything that moves draws our visual attention. Physical movement is important to our vision because it provides additional information, such as volume and the spatial relationships between ourselves and an object or environment. It draws us in and engages our perception at a deeper level. Visual perception is not a passive act but an active learning experience: humans learn to see, and to understand the data perceived through motion, in order to make sense of the world around us. It involves the integration of multi-sensory processing, from our senses and muscles to the neurological functions of the brain that coordinate them. Our brain strings this information together and provides our cognition with a phenomenological understanding of motion. We live in a world surrounded by endless motion, whether from walking, the physical transformation of a flower blooming, a car driving by, or a leaf falling from a tree under gravity. Our brain is constantly connecting imagery to understand the attributes of all the information it perceives. Every person creates a different phenomenological interpretation, because it reflects each individual's experiences regardless of what each person sees.

The physics of motion has always fascinated humans. Philosophers in ancient Greece argued that motion was a kind of illusion; the physics of motion was later demonstrated and explained by Galileo, Newton, and Einstein's general theory of relativity. Even more than understanding movement itself, artists have been passionate about reproducing artificial motion through visual effects techniques. These techniques progress alongside inventions that turn energy into motion, whether through mechanical structures or digital screen representations.
The steam engine, the driving force behind the Industrial Revolution, led to the invention of machines and mechanical structures that enabled new technologies such as photography and film. It was the development of motion pictures that then exemplified the power of motion as a medium for visual communication.
Motion graphics and visual effects mediate between the worlds of fantasy and reality. For centuries, animated graphic representations and information displayed in scripted scenarios have been a tradition of storytelling. These special mechanics create effects that enhance visual and cognitive perception, suspending the viewer's disbelief and immersing them in a fictional scripted world. In the entertainment world, hyperreality merges imagination with reality. It is crucial to completing the storytelling, from stage performance to silent film, and from television to cinema. (Brewster, 10)

Hyperreality is built up from layers of creative imagination, a humanistic desire for a different world: a world that varies by experiential interpretation, creatively expressed in a system of simulated experiences. These realities are mediated to an extent that they can no longer be distinguished from fantasy. Take Disneyland, for example; as Jean Baudrillard puts it, it is "an imaginary effect concealing that reality no more exists outside than inside the bounds of the artificial perimeter." (Baudrillard, 1) Baudrillard defines three orders of simulation. In the first, the real world is artificially represented in different media such as books, paintings, maps, projections, television screens, and more. The second order blurs the boundaries between reality and representation. The third order describes the simulation and its surroundings, which he calls hyperreality; it is produced algorithmically, as when computer code constructs virtual reality or augmented experiences. Baudrillard believes that hyperreality will dominate the way we experience and understand the world we live in. (Lane, 2)

Even now that we have moved into a digital era, the use of motion as a communication medium has mostly been limited to the flat rectangular screens of cinema, television, computers, and personal digital devices. These devices vary in size, resolution, and portability, yet even with the development of augmented reality and pervasive computing, the fundamental paradigm of user experience and interaction remains similar. Much of this technology has evolved, from the scale and portability of digital screens to projection methods, haptic-feedback wearables for VR and AR, and animated 3D projection mapping onto physical surfaces, but the user's interaction with motion, and perception of it, remains virtual. (Burdea, 6)


In recent years there has been increasing interest in creating real kinetic motion of physical objects as a communication medium. Although it seems like a new movement, the idea is not entirely new. The obsession with animating physical objects dates back as far as the 18th century and can be found in early works of automatons and clockwork mechanics. Throughout history, artisans have handmade intricate objects of this kind; their skills were passed from one generation to the next and were considered a rite of passage within a family. Constructing the mechanics that drive physical animation requires both mathematical calculation and a sculptor's sensitivity to feel and create.
It was not until the recent emergence of new "smart" materials and tools, such as tiny motors, microcontrollers, organic actuators, fast networked embedded processors, rapid prototyping, cheaper building materials, more accessible pricing, and widely available software, that new opportunities arose for moving motion off the digital screen and displaying it in the real world. Instead of creating animation and visual effects on a flat screen, designers can revive the craft of material making and dynamically reshape and reconfigure real physical objects to interact with the user. By designing mechanical structures and writing programs to customize motor behaviors, designers can now readily create interchangeable movements in tactile objects, giving the look and feel of physical animation. These objects stimulate not only visual but also tactile, aural, and kinesthetic perception. Tangible user interfaces provide affordances for digital information while leveraging humans' natural capability to grasp and manipulate physical objects in the real world. (Ishii, 4) Translating visual effects and motion animation into tangible objects with digital entities creates much richer and more effective interactions from a new perspective. Humans' natural response to motion is deeply rooted: we recognize the qualities of an object that feels alive, and this provokes a significantly deeper emotional response from the user.

We can build on the foundation provided by the evolution of motion design and apply these theories to robotics and kinetic art in order to design interactive physical interfaces. These principles inform future designs of physical objects and architectural systems. Humans' cognitive understanding of movement has been undervalued because of the limitations of flat, compact screens. This thesis investigation explores means of interacting with physically animated objects and of manipulating these digital devices. Translating the principles and techniques of motion graphic design into physical devices will open a broader spectrum of interaction design.

Historical Context

There is abundant historical documentation of human beings' obsession with creating physical animation in the forms of art, automatons, and robotics, and it is a major inspiration for this work. One of the most famous clockwork automatons was the Canard Digérateur (the Digesting Duck) of 1739, a mechanical duck made by the Frenchman Jacques de Vaucanson. The duck could quack, eat, drink, splash water, and appear to digest food. Another famous automaton from that era was The Writer by the Swiss maker Pierre Jaquet-Droz. It consists of internal clockwork mechanics and a life-size boy that can write any message of up to 40 letters. These creations were crafted with highly delicate parts and were made purely for enjoyment. That was the beginning of humanity's fascination with creating artificially animated objects. (Riskin, 7)
During the early 1900s, Italian Futurist painters explored motion as a concept for artistic interpretation. Even though they did not build intricate mechanical automatons, they were among the first to investigate the principles of motion and speed, interpreting values and composition to create a visual vocabulary based on motion. Painters like Giacomo Balla produced many works dealing with speed and movement; Swifts, created in 1913, is one of Balla's most famous pieces. (Perloff, 9)

Then in the 1920s, Walter Gropius invited László Moholy-Nagy and others to join the staff of the Bauhaus school. Moholy-Nagy, along with artists such as Alexander Calder and Takis, helped begin the "Kinetic Art" movement. Their experiments involved creating sculptures with parts moved by air, magnets, and sometimes the audience themselves. These artists made motion the central element of their artistic expression, and these early works of physical movement carry strong aesthetic values of motion. (Simanowski, 8) This vocabulary of physical motion is still being explored today by companies like ART+COM, which created the famous BMW kinetic sculpture of metal balls individually suspended from the ceiling by strings. Other artists, like Sachiko Kodama in 2000, have created organic kinetic sculptures based not on rigid physical objects but on magnetically actuated ferrofluids; her installations poetically transform organic liquid into solid forms.

Today, robotic art offers rich motion aesthetics in both perceptual and functional characteristics. Some projects attempt to simulate animals or humans, such as Aibo, the robot dog, or Asimo, Honda's humanoid robot. Others experiment with alternative forms of motion vocabulary to communicate with the user. In 2009, the researcher Fabian Hemmert investigated motion vocabularies for mobile phones, communicating and responding to various modes of usage through haptics, weight shifting, and ambient physical movements. Projects like Hemmert's mobile phone explorations are limited by current forms of technology: they build closely on existing objects and devices, adding embellishments rather than inventing new devices that break out of conventional forms and limitations. The physical movements in this example are not very interactive with the user's input; they respond mostly to features of the phone.

From these historical examples we can conclude that self-generated motion, whether from a physical or a flat medium, is engaging and can communicate with humans. Motion plays a major role in visual effects, which encompass a vast number of techniques for creating specific types of motion. One particular aspect of VFX is morphology, which continually mesmerizes human perception by displaying the transformation and mutation of forms over a predetermined period of time. When an object structurally alters itself into another representation, the movement can be driven by computational data sensed from the user's input or the environment. The object's form responds to the user's input with physical changes, adapting to a new function or context. These transformations can communicate with the user on multiple levels. Form no longer follows function; form becomes function.
These animated devices are not limited to interacting with humans; they can also respond to and create dialogue within a system of multiple devices. If we live in an environment full of activated objects with rich embedded content and interactions, these tangible things and inhabitable spaces will assemble a new symbiotic system, providing an ongoing relationship between humans and their environment. This new ecology of things is an evolving system that can be interpreted and influenced by the interactions and decisions of people and other objects. (Allen, 5)

Process

My goal in these experiments is to draw inspiration from techniques used in motion graphics and visual effects animation, then embed those principles into physical objects, devices, and environments. These project explorations begin with a matrix chart that I put together. The left column of the grid lists the vocabulary I have defined for techniques used in the visual effects language of the digital screen. The top row lists various human sensory inputs and psychologically related approaches, such as eyes, breath, heat, proximity, force, bend, touch, gestalt and cognitive psychology, and neurological tests and theories. I choose a category from the side, then follow it to a cross section, and use that hybrid of ideas to inspire a project. Many of my ideas and inspirations come from years of experience working in the motion graphics industry. During graduate school, I studied the neurology and psychology of human visual perception as part of my research. The human brain constantly interprets data gathered from sensory inputs about its environment, but when the information is incomplete, it continuously generates gestalts in an attempt to make connections and logical sense of the abstraction. (Ramachandran, 3) This information derives from foveal vision, the far periphery of vision, touch, or any other sense of the human body. Understanding the way our eyes and brain work, as well as this phenomenological comprehension, has greatly influenced my decisions to create devices that put a twist on human perception.

Prototype Phase I
In the early stage of my investigation I began with the idea of manipulating visual perception, or relocating one's eyesight and control to another physical location or space. I came up with experiments using DIY eye-tracking technology as the main user input and experimented with what the user can perceive or control purely with their eyes.


Physically Animated Motion Graphics Immersive Space

This is the first of my experiments to incorporate the idea of translating motion graphics into the physical world. If text and graphic elements have volume and occupy physical space, how can these elements give an experience similar to a television commercial or a film title sequence? My approach in this investigation is to design an installation displaying animated physical graphic elements. These physical objects are connected to ropes driven by multiple pulleys. The installation is presented in a dark space with illuminated graphic elements. By sitting or standing in a designated position, the viewer experiences a pre-scripted animation as the graphic elements move into the viewer's periphery in sequence.


Super Hero Gaze Telekinesis

This project takes on the idea, familiar from visual effects in film, of a superhero with the power to control and manipulate things with their eyes. The controlling interface of this experiment shares similarities with the Saccade Controlled Visual Angle project. It also uses eye-tracking glasses, here to control a flashlight driven by two-axis servo motors: as the user looks in different directions, the flashlight points where the viewer is looking. The purpose of this setup is to open a physical box by looking at it. The box is constructed with a light sensor at its top center. When light triggers the photocell, a servo underneath the box pulls on four strings attached to its four sides, folding the flaps downward in a synchronized motion. This experiment uses the gaze of the human eye to trigger the physical transformation of objects in the real world. What if everything in the world could be activated and controlled just by looking at it?
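As a rough illustration of the trigger logic (not the project's actual code), the box-opening behavior can be sketched as a threshold check with a latch, so the flaps stay open once triggered. The threshold and servo angles below are invented for illustration:

```python
# Hypothetical sketch of the box-opening trigger: when the photocell
# reading crosses a brightness threshold, one servo winds four strings
# to fold the flaps down in a single synchronized motion.

LIGHT_THRESHOLD = 600   # assumed 10-bit ADC reading; tune per photocell
CLOSED_ANGLE = 0        # servo at rest: strings slack, flaps up
OPEN_ANGLE = 120        # servo wound: strings taut, flaps folded down

def flap_angle(light_reading, is_open):
    """Return the target servo angle and the new open/closed state.

    Once opened, the box stays open even if the flashlight moves away,
    so a passing glance does not make the flaps flutter.
    """
    if is_open or light_reading >= LIGHT_THRESHOLD:
        return OPEN_ANGLE, True
    return CLOSED_ANGLE, False
```

The latch is a design choice: without it, the flaps would chatter whenever the gaze-driven flashlight wandered on and off the photocell.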
Extended Sight

There are two versions of this project, both inspired by the idea of relocating one's eyesight to another physical location while still being able to control the direction of view. Both use eye-tracking glasses that incorporate a hacked PS3 infrared webcam for the left eye and an LCD monitor mounted for the right eye. The LCD monitor is mounted on the glasses with brackets driven by two featherweight servos, allowing two-axis rotation to follow the focus of the pupil. The main task of the eye tracking is to control another physical device that holds the viewing camera feeding the LCD screen. The biggest difference between the two versions is how the viewing camera is mounted.
In the first iteration, called "Extended Sight," the viewing camera is mounted on a small robotic bracket driven by two standard-size servos, allowing x- and y-axis rotation so the camera can look around the outside world. If human eyesight can be physically moved to another location, what perspective can the viewer perceive?
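The control loop shared by both versions can be sketched roughly as a mapping from the tracked pupil position to a pair of servo angles. This is a hypothetical sketch, not the project's firmware; the angle ranges and the 90-degree servo midpoint are assumed values:

```python
# Illustrative mapping from a normalized pupil position to pan/tilt
# servo angles. Coordinates in [-1, 1] are clamped so a mistracked
# pupil cannot drive the servos past their travel.

def pupil_to_servo(x, y, pan_range=90.0, tilt_range=60.0):
    """Map pupil coordinates in [-1, 1] to pan/tilt angles in degrees.

    (0, 0) is the pupil at rest, which centers both servos at 90 deg,
    the midpoint of a typical hobby servo's travel.
    """
    clamp = lambda v: max(-1.0, min(1.0, v))
    pan = 90.0 + clamp(x) * pan_range / 2.0
    tilt = 90.0 + clamp(y) * tilt_range / 2.0
    return pan, tilt
```

For example, a pupil at the far right edge of its range (`x = 1`) would swing the pan servo to 135 degrees while leaving tilt centered; the same mapping, run in reverse on the display bracket, keeps the LCD in front of the moving pupil.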


The Split Experience

When a person gazes at something or someone, the eye's focus naturally jumps back and forth, scanning and following contours while identifying information about the subject. These quick eye movements are called saccades. Although our brain interprets saccadic imagery as seamless, we are biologically blind between saccade points. This experiment is the next iteration of my previous project, "Extended Sight"; it takes the metaphor of saccadic blindness and combines it with the jump-cut technique films use to display different angles of one scene. The device consists of two parts: glasses with a mounted eye-tracking camera, which control two semicircular acrylic armatures driven by two servos rotating on the x and y axes. The main purpose of this structure is to mount a viewing camera so it can rotate 180 degrees around an object placed on a central platform. Because of scale constraints, the maximum size of object this prototype can revolve around is approximately 6 in x 8.5 in. The display screen of the viewing camera is mounted over the right eye of the glasses, so when the user wears them, the left eye controls the viewing camera and the right eye watches its display. Because of the technological constraints of this type of viewing screen, the only way to foveate on the LCD display while the left eye is being tracked is to rotate the screen in the same direction the pupil moves. This requires an additional two-axis bracket, driven by two featherweight servos, that turns the display toward the direction the eyes are pointing.
Through this device, the eyes are no longer used only for navigation and identification; they gain a new, exaptive function of control. It takes viewers out of their normal perspective and presents objects from new angles.

Prototype Phase II

Alongside exploring eye tracking as a tool for perception manipulation, I was also investigating objects that change their form through physical motion. The Physical Mutation Interface is an interactive morphing interface inspired by the idea of organic mutation: a morphing physical object that works as an interactive display. These objects embody physical motion to reshape or reconfigure their form, and sensors to communicate with the user. They display data through mechanical transformation or organic mutation; changing shape itself creates a form of motion. That is the key concept of the physical mutation interface: it brings the communicative motion of visual effects from the digital screen into the real world. In my research, I define two major categories of physical form alteration for these tangible interfaces. The first is Mechanical Transformation, much as most people would imagine the science-fiction Transformers, where you see the mechanics of moving parts as the object adapts its form to a different context. The other is Organic Mutation, where, as the name suggests, the altering form is covered by a layer of skin: as the form seamlessly morphs, the surface remains intact and flexible. To date, I have developed three directions for the Physical Mutation Interface: Progressive Interaction, Non-Visual Interaction, and Temporal Interaction.

PROGRESSIVE INTERACTION

This experiment investigates how a complex system can be hidden within a simple form and progressively reveal layers of capability through user interaction. When certain parts are activated, the object physically animates and transforms to display or hide features. This system provides constraints, as opposed to laying out all features and controls on a single display.

Key traits:

- The goal is to design a system that is complex but presents simplicity, so different functions are revealed as needed.
- The initial interface is presented in its raw form.
- Layers of capability are revealed progressively.
- Functions are isolated and new features introduced one at a time.
- When paired with other objects, it changes its form to interact with them.
- Ideally, when every side of the object can open, it provides a perpetual sense of discovery during use.
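The progressive-reveal behavior can be sketched as a small state machine in which each hidden feature is bound to the interaction that exposes it. This is a speculative sketch; the gesture and feature names are invented for illustration:

```python
# Speculative sketch of progressive interaction: each feature stays
# hidden until its triggering gesture is sensed, so complexity is
# revealed in layers rather than all at once.

class ProgressiveInterface:
    def __init__(self):
        # Each hidden feature maps to the gesture that reveals it
        # (hypothetical names, not from the actual prototype).
        self.triggers = {"tap_top": "display",
                         "press_side": "dial",
                         "twist_base": "stand"}
        self.revealed = []   # features opened so far, in order

    def interact(self, gesture):
        """Reveal the feature bound to this gesture, if any."""
        feature = self.triggers.get(gesture)
        if feature and feature not in self.revealed:
            self.revealed.append(feature)
        return self.revealed
```

Because unrecognized gestures simply leave the state unchanged, the object stays in its simple raw form until the user discovers each trigger, which is the "perpetual sense of discovery" the trait list describes.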


NON-VISUAL INTERACTION

Tactile Organic Mutational Interface

This form of tactile interface is purely organic; the skin that protects the internal structures remains continuously seamless as the object alters its form. It is designed to feel natural in the user's hand, starting out neutral with a shape like a sphere or ellipsoid. As the user squeezes, pushes, twists, and grips the device, the object changes its form to provide new features (which are yet to be defined). The form also protrudes bumps at various points as indicators of new affordances. Since the goal of this device is to operate purely through touch, without needing to be seen, it is well suited to controls or tasks performed while the user's sight or other senses are occupied. In a driving scenario, for example, it may be detrimental to take your eyes off the road, so the user can operate this device without looking at it.


TEMPORAL INTERACTION

Physical Interface Changing Over Time

This experiment investigates how a physical interface that changes over time can introduce new ways of interacting by building a long-term relationship with the user; "short" might be a couple of hours, "long" as much as a week. I envision its form changing with or without the user's interaction. The metaphor for this type of object-user relationship is similar to crafting and caring for a Japanese bonsai tree: through the user's attentive care and crafting, the object develops personal value for each individual user. Although the interaction is still very much at a surface level, this direction is very interesting to me. What influences the form to change over time is something I still have to work out; it may be a set of data from the internet, or even personal encounters and experiences.
Key Characteristics:

- Changes through human interaction
- Changes over time on its own
- Introduces new ways of interacting as the form changes
- Changes slowly, on the scale of days
- Changes based on how it is used, or on data coming from elsewhere
- Builds a closer relationship with the user, who invests time and effort over a long period


Reflections

The experiments I have created so far indicate two major directions. One is a perceptual exaptation of the eye, providing it with new functions such as control and the manipulation of the user's own perception. The other is creating mutating physical interfaces, allowing the user to manipulate and influence the form and the outcomes of the device. Each has its technological limitations, and the interactions are still quite simple and at a surface level.
Because of the motion-tracking software used in the eye-tracking device, the location of the iris is not always accurate. The software requires movement for the tracker to detect where pixels have changed, so in the current iteration the tracker works only about half the time while the user is trying to control the device. If I were to develop The Split Experience further, I would experiment with the type of object being observed. It might not be a single object but a collection of three-dimensional stage-performance scenes; as the user rotates the camera around, it could display different scenes and angles.
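The tracker's failure mode can be illustrated with a toy frame-differencing sketch: because only changed pixels register, a stationary pupil produces no signal at all. This is a simplified illustration, not the tracking software actually used:

```python
# Why a motion-based tracker loses a still eye: frame differencing
# reports only pixels whose brightness changed between frames, so a
# stationary pupil yields nothing to track.

def changed_pixels(prev, curr, threshold=10):
    """Return indices of pixels whose brightness changed past a threshold."""
    return [i for i, (a, b) in enumerate(zip(prev, curr))
            if abs(a - b) > threshold]

still = [50, 50, 200, 50]   # eye holding still: two identical frames
moved = [50, 200, 50, 50]   # pupil shifted one pixel to the left

assert changed_pixels(still, still) == []       # no motion, no fix on the iris
assert changed_pixels(still, moved) == [1, 2]   # motion makes the pupil visible
```

This is consistent with the observed behavior: the device responds while the eye is moving but drops out whenever the user fixates, which is exactly when they are trying to hold the camera steady.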
The physical mutation interface is still at a rough, initial stage. The interactions are very straightforward: the object allows only one or two manipulations of its form. The interactions on the human receptive side are also not complex enough to produce varied personal interpretations. The importance of the physical mutation interface lies not in the result or outcome but in the process of interaction. Take a puzzle-like object as an example: if every side of a complex polycube can be triggered to pop open or extend at a touch of the surface, different sequences can cause different sides to respond, and through different users' interactions the polycube can arrive at different outcomes. It can thus create interactions whose productive results vary from user to user.


Plans for Spring Term

The direction I have chosen to develop further over the spring term is the physical mutation interface. One major aspect my project lacks is a specific context for these objects to live in; different contexts can greatly change the design of an object, and settling on one for each mutation interface will be a hard decision. My strategy is to brainstorm lists of contexts suited to the principles I have defined, pick out ideal possibilities pertaining to my interests, and then redesign the forms and interactive features around the selected context. My thesis investigation aims to introduce more complex systems of object mutation, creating movements and transformations that push the boundaries and potential of motion in physical interactive objects. After I have settled on the contexts, my thesis paper will require a rewrite for the midterm; the topic will remain similar, but it will be adjusted to read as more unified around the chosen context.


Bibliography

1. Baudrillard, Jean. Simulacra and Simulation (The Body, In Theory: Histories of Cultural Materialism). University of Michigan Press, 1995.
2. Lane, Richard J. Jean Baudrillard. Routledge, 2nd edition, 2009.
3. Ramachandran, V. S., and Sandra Blakeslee. Phantoms in the Brain. Harper Perennial, 1999.
4. Ishii, H., and Ullmer, B. "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms." Proceedings of CHI 1997, ACM Press, 1997.
5. Allen, Philip van. The New Ecology of Things. Media Design Program at Art Center College of Design, limited edition, 2007.
6. Burdea, G. Force and Touch Feedback for Virtual Reality. John Wiley and Sons, 1996.
7. Riskin, J. "The Defecating Duck, or, the Ambiguous Origins of Artificial Life." Critical Inquiry 29, no. 4 (2003): 599-633.
8. Simanowski, R. Digital Art and Meaning: Reading Kinetic Poetry, Text Machines, Mapping Art, and Interactive Installations. University of Minnesota Press, 2011.
9. Perloff, Marjorie. The Futurist Movement: Avant-Garde, Avant Guerre, and the Language of Rupture. University of Chicago Press, 2003.
10. Brewster, Ben, and Lea Jacobs. Theatre to Cinema: Stage Pictorialism and the Early Feature Film. Oxford University Press, 1998.

 

 
 
Art Center College of Design © Link Huang 2012