Monday, February 11, 2008

Pixelate - applied Reflexive Tile script

This work is the result of a research project conducted at the Graduate School of Design in collaboration with Dido Tsigaridi. Our research produced a 360-degree rotating mirror platform utilizing Shape Memory Alloys (SMA) set into a particular counterbalanced formation. The idea was to create a field of sensor-enabled mirrors that respond to human presence and movement. The bottom of this post describes the proposed (Dec. 2005) responsive surface in more detail.

While this project has remained dormant, I have been trying for some time to visualize the dynamics of the surface (presumed to be a wave-like motion). The construction of the rotating mirror platform was meant to demonstrate the movement of a single modular unit. Up until this point, the most I could do to visualize a full-scale ripple effect was to model it through standard 3D programs and, at best, produce an animation of that movement. Recent innovations to the Second Life platform (and more specifically the development of the 'reflexive tile script' by Oze Aichi) now provide the ability to study, in real time, the effect of avatar/human proximity on individual and overall tile movement response.

While this method is used only to simulate the idealized movement of the prototype wall tiles, it remains a demonstrative way to study the dynamics of distributed responsive form. This idealized representation of the surface movement (which is not quite as fast in its physical form) presents the opportunity to observe unforeseen patterns in the behavior of the elements as well as the behavior of those individuals participating. This study in particular focuses on the movement and patterning of the surface, which is why we don't see representation of the supporting structure and electronics. This is partly for clarity's sake and partly to reflect the virtual nature of the form (and the open possibilities that this engenders).

The following video is the result of brief experimentation with the script (Reflexive Tile) provided by Oze Aichi at The Tech Museum via The Arch by Keystone Bouchard. In this trial I reduced the scale of the tile and set a static, reflective texture to match the surrounding steel. This combination produced some interesting and unpredictable patterns as the avatar engages the surface. While the original design calls for mirrors, I turned up the reflectivity of each prim to compensate. Here is a video demonstration of the resulting surface movement.

VIDEO (Hi-res version also found at YouTube link here).


The SMA Responsive Surface
Research project in collaboration with Dido Tsigaridi at the Harvard Graduate School of Design.

The surface deforms and reflects based upon human presence and movement along its surface. Each device senses and moves individually based upon a simple proximity sensor feedback loop.

The mirror remains stable and reflects the surrounding environment until approached by a pedestrian. The mirror then rotates toward the pedestrian, its angle dependent upon their proximity to it. At its closest proximity, the mirror resets to its stationary position until the user begins to move away. The result is a pedestrian who can see their own image undistorted while onlookers see a refracted and shifting image of the pedestrian. The surface becomes a statement about how we see ourselves in relation to how others perceive us in an observational context.
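The feedback loop described above can be sketched roughly as follows. This is a minimal illustration only; the threshold distances, sensor range, and function names are all hypothetical and not taken from the actual prototype electronics:

```python
import math

RESET_DISTANCE = 0.5   # hypothetical: "closest proximity" at which the mirror resets (m)
SENSE_RANGE = 5.0      # hypothetical: proximity sensor detection range (m)

def mirror_angle(pedestrian_xy, mirror_xy, distance):
    """Return the mirror's target rotation (radians) for one sensor update.

    Beyond sensor range, or at closest proximity, the mirror rests at its
    stationary (neutral) position; in between, it deflects toward the
    pedestrian, with deflection growing as they approach.
    """
    if distance > SENSE_RANGE or distance <= RESET_DISTANCE:
        return 0.0
    dx = pedestrian_xy[0] - mirror_xy[0]
    dy = pedestrian_xy[1] - mirror_xy[1]
    bearing = math.atan2(dy, dx)
    # Closer pedestrian -> weight closer to 1 -> larger deflection.
    weight = 1.0 - (distance - RESET_DISTANCE) / (SENSE_RANGE - RESET_DISTANCE)
    return bearing * weight
```

Run inside each unit's sense-and-move loop, this kind of rule is enough to produce the rotate-toward / reset-at-contact behavior described above, with no central coordination between mirrors.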

Initial testing and experimentation for the kinetic mirror produced a 360-degree rotating and adjustable platform operating on a standard 9-volt battery. The use of SMA greatly reduces the need for moving mechanical actuators or servos, decreasing energy use and minimizing potential for wear or damage. You can find the video of the functioning platform here, and a link to the project page on my website here.

Tuesday, February 5, 2008

Archidemo - Potential of the 2D Image within Virtual Space

I took a moment to visit an interesting variety of builds representing the work of Hidenori Watanave and the Archidemo group located on the NikkeiBP+NikkeiBP way sim. I discovered the group and sim location through a post on Networked Performance. The island's work is clearly experimental but it is interesting to see what people have come up with to date. While some works were a bit confusing, I think this was more the result of project overlap and crowding than the clarity of the builds themselves. I was able to recognize some responsive elements such as proximity and tracking scripts, and most were utilized in a fairly straightforward manner. Still, it's worth the visit as this group seems to take a different approach than most of the other sims I have had the opportunity to visit. Here is a brief (rough) clip I made of one of the more compelling projects.


The build consists of panoramic 2D still images which collide and ricochet within a surrounding panoramic image (also a 2D still). As the avatar navigates the surrounding space they have the ability to touch a central portion of the floor within one of the individual floating spaces. Touching the floor situates the avatar within that particular room, which provides the opportunity to examine the entirety of the RL panoramic image. This seems to invoke the concept of 'compound space' as the visitor has the ability to essentially 'jump' between physical (although static) locations through the medium of virtual space.

As the virtual has enabled communication between physical spaces for quite some time, this build becomes an interesting metaphor for the spatial/informational relationships between (physical <--> physical) and (physical <--> virtual) spaces. While this project is more a study of the possibilities of the 2D image within virtual space, a natural next step for this build might be to incorporate live video feeds into the RL spatial pockets (for example, 360-degree panoramic room cameras that are directly manipulated by the avatar inhabiting that particular spatial pocket). This might result in a powerful new way to experience physical spaces through Second Life and other related virtual media.

Wednesday, January 30, 2008


We seem to have recently developed quite a vocabulary with regard to 4D virtual architectures (interactive, active, reflexive, reactive, responsive, reflective, 4D, flexspace, etc.) While there may be a variety of reasons for creating these works, they usually tend to fulfill some level of practical purpose: collaboration, entertainment, challenging perceptual norms, a focus for socialization, or a tool for simulating and testing real world interactions. While these virtual builds contain particular reactive qualities such as response to movement, presence, voice or other behaviors, I feel that they also allow for a more robust form of interaction rarely taken advantage of in SL builds (with certain recent exceptions).

There have been a handful of spectacular builds I have had the absolute pleasure to visit lately. After experiencing Parsec, the reactive sculpture garden, and other recent interactive works posted on The Arch, New World Notes, and Dusan Writer's Metaverse, I think we are starting to see the emergence of a particular quality of architectural space so easily engendered by the nature of the virtual construct in which we work. This quality might be known as Gamespace.

While most responsive builds I visit tend to induce the initial 'wow' factor, the post wow hangover usually leaves me with little more than a few new friend contacts and some interesting topics to discuss (all in all, not a bad result). As the usual scenario plays out: I walk through the build, toy around with the responsive elements a few times, and then find myself thinking - that was fun, now what? While the initial visit may be spectacular, there is rarely any reason to return unless there is a new addition to the build or I am introducing it to another avatar. This has been the case for much of my own limited collection of responsive builds which prompted me to write this piece.

The answer? I think the most successful interactive architectural builds (physical or virtual) allow the users to engage the system and each other in some form of game. Virtual interactive space is perfectly suited to adopt game-like features that can easily be programmed into the overall interactive experience. The point of a game is to play within a set of rules to accomplish a goal of some sort. Sometimes this goal is winning; sometimes the goal is to get the most points for a given task; sometimes it is simply to compete, socialize, or even to create music, art or architectural form. The combination of rules and goals creates the game dynamic, and this provides a purpose to the activity that drives it.

Gamespaces generally allow for open, playable environments which contain both defined games (defined rule sets with required actions and definitive goals within the game) and the ability to improvise free-form games which develop through the course of play and experimentation (emergent games). Emergent games are not specifically engineered into the original purpose of the game, but the unique way the environment is constructed allows the players to devise their own corresponding rule sets which then develop through interaction with the system and each other. The nature of the gamespace requires a delicate balance between these defined rule sets and the freedom which can lead to emergent gaming behavior. As interactive architects I believe it is our duty to explore and establish this relationship as it relates to architectures and those who inhabit them.

Let's take Parsec as a recent example. Parsec contains an interesting feature that unlocks a spectacular visualization when the right combination of voices or vocal gestures is enacted. There are no explicit rules to define interaction with the system (besides the brief introductory sequence); one simply shows up and begins speaking. Through the course of interacting with the system and the individuals present, the game is determined by the players - as they play it. The individuals talk and are able to control the spheres which jump and respond to the inflection in their voices. They become entranced by the surrounding visuals and their vocal response to them. They respond to the spectacular ‘supernova’ with delight when the combination of their voices unlocks that particular visualization. Individuals that contribute to the game environment become the players and, in the case of Parsec, the game (and goal) becomes both socialization and the creation of music.

Through this dynamic the players are encouraged to explore the effect their voices have on the system (and each other) and are rewarded through those activities. In other words, there is a driver (a goal) behind the vocalization in addition to the generated conversation between participants. The system rewards the speaking voice with a playful visual response. The system rewards conversation by releasing spectacular visuals when the individuals interact vocally. This carefully balanced combination of rule sets, goals, and open playability creates spontaneity to the game as well as a conscious, defined structure that drives the players toward a particular purpose. This is crucial to gamespace because the resultant experience is unique every time the game is played.
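The two-tier reward structure described here can be sketched as a simple rule set. This is purely illustrative of the game dynamic, not Parsec's actual code: the voice-count and loudness thresholds, and all names, are hypothetical:

```python
SUPERNOVA_VOICES = 3    # hypothetical: distinct voices needed to unlock
SUPERNOVA_LEVEL = 0.6   # hypothetical: average loudness threshold (0..1)

def visual_response(voice_levels):
    """voice_levels: {avatar_name: loudness in 0..1} sampled at one moment.

    Ordinary speech is rewarded per-voice ('sphere' bounces); a loud
    combination of several voices unlocks the shared 'supernova' visual.
    """
    active = {name: lvl for name, lvl in voice_levels.items() if lvl > 0.05}
    total = sum(active.values())
    if len(active) >= SUPERNOVA_VOICES and total >= SUPERNOVA_LEVEL * len(active):
        return ('supernova', total)
    # Default reward: each speaking voice makes its own sphere jump.
    return ('sphere', {name: lvl * 2.0 for name, lvl in active.items()})
```

The point of the sketch is the structure, not the numbers: a per-player reward keeps individuals engaged, while a reward that only a group can trigger drives the socialization that the gamespace is really about.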

Another good example is the Architectural Jazz build by avatar Keystone Bouchard that I discuss in an earlier post. The rule set of the game here is simple - movement toward particular prims results in musical notes when each prim is approached. The goal - the creation of music. While this format reflects an uncomplicated (and brilliant) interaction, it may be considered a form of game nonetheless. Maybe a bit more of an instrument or tool, but that is for an upcoming post.

If we examine this music analogy for a moment, we have a composition on one hand and jazz on the other. A musical composition has a set of rules that, when carefully followed, produces a scripted and balanced structure of sound. Jazz, on the other hand, relies upon improvisation, intuition, a good sense of timing, and sociability. Both methods produce music, but one allows for a free-form and whimsical style much more conducive to sociability, unpredictability, and just plain fun. Keystone has successfully adopted this feature and in doing so has created a consistently unique and inviting interactive virtual experience. I think this is a quality more interactive builds would do well to adopt.

While this free form (open) interactive quality is crucial to the game, even jazz has rules. While I think this build might benefit from a bit more of a defined or choreographed rule set, the potential for emergent gaming is clearly present here. Say, for example, that the players were able to 'carve' visual paths through the build which represented that particular sequence of notes. Other players could then run through the same paths making detours to create variations on each other's 'musical paths'. This might provide some structure to the experience to balance the already present open playability.

While rules can provide a framework for interaction they must not be allowed to fully encompass a build. I believe the more rigid the ruleset, the less opportunity for spontaneous emergent gameplay that may become realized through interaction with the build. The player should be given the choice as to whether they desire to engage the system. The game need not necessarily be the dominant feature of the interaction, but must be exhibited through a subtle and real quality of the architectural space.

As virtual interactive architectures move to adopt this quality, we may begin to realize spaces and structures that form through the game: a space where the interaction among avatars actually creates form. As architects we have recently become obsessed with patterns and forms generated through software algorithms adopted from the natural environment or developed through human observation and analysis (known as generative forms). What if these generative properties become tied to behavior or actions? Maybe movement, conversation, or the power of voice alone is enough to generate form. What if form becomes the by-product of socialization?

I believe it is inevitable that responsive spaces will begin to adopt the quality of gamespace. We can only spend so much time with virtual architectures exploring the movement of prims that change color, distance, or fragment with human/avatar input. While these explorations help us to begin to define certain interactions and interactive elements, I think the true purpose of such explorations is to develop a platform for the game by defining the parameters and introducing the players to it.

After all, SL and other virtual spaces were born from gaming engines. This was their original function. Maybe it's time to take the next step and begin utilizing active, reflexive, interactive, reactive, responsive, reflective, 4D, flexspaces toward their fully realized potential: gamespace.

Up next... virtual interactive architectures as a tool for creation.

Thursday, January 3, 2008

Virtual Dome - Form Follows Presence

This build represents the first release of S.O.N.A.R 1.2 as it stands in its current form. Here is the location of the video.

As avatars move about the center of the landing pod, a fluid swarm will begin to form a dome above the pod perimeter. The 'seeds' emerge from the arms of the pod to rest at random proximal locations about an avatar. After this migration stage, they begin to grow into static physical elements known as 'fruit'. Due to this randomized localization, the form of the dome remains constantly in flux. It is also programmed to follow the movement of the avatar, so the location of the dome is variable but predictable. The result is a responsive, variable (fluid) dome generated through the presence and movement of multiple avatars.

I'm still working additional elements into the project but in the meantime, have a go at this SLURL location (Architecture Island). The eventual goal is a fully interactive system allowing avatars to have some direct and indirect control over the function of the individual elements. This is still a prototype so if you experience any bugs please IM Far Link in Second Life.
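The migration stage described above amounts to scattering points at random proximal locations around each avatar. A minimal sketch of that idea follows; the radii, seed count, and function names are hypothetical and not taken from the actual S.O.N.A.R scripts:

```python
import math
import random

SEED_MIN_R = 3.0   # hypothetical inner radius around an avatar (m)
SEED_MAX_R = 6.0   # hypothetical outer radius (m)

def migrate_seeds(avatar_positions, seeds_per_avatar=8, rng=random):
    """Scatter seed positions at random proximal points around each avatar.

    Each seed lands on a random point of an upper hemisphere centered on
    the avatar, so the accumulated 'fruit' of several avatars settles into
    an approximate dome that tracks wherever the avatars happen to be.
    """
    seeds = []
    for ax, ay, az in avatar_positions:
        for _ in range(seeds_per_avatar):
            r = rng.uniform(SEED_MIN_R, SEED_MAX_R)
            theta = rng.uniform(0, 2 * math.pi)   # heading around the avatar
            phi = rng.uniform(0, math.pi / 2)     # elevation: upper half only
            seeds.append((ax + r * math.cos(theta) * math.cos(phi),
                          ay + r * math.sin(theta) * math.cos(phi),
                          az + r * math.sin(phi)))
    return seeds
```

Re-running a rule like this as avatars move is what keeps the dome in flux while leaving its overall location predictable: the randomness is local to each avatar, but the hemisphere constraint is fixed.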

Wednesday, January 2, 2008

Second Life as a Development Tool for Interactive Builds

Hello all,

This is a post from the sonar website, but I think it is a bit more applicable to this blog. I'm trying to identify practical developmental characteristics of Second Life: particularly its role as a development tool to simulate interactive architectures and/or responsive environments. The following are some thoughts about how SL provides advantages over other 3D modelling tools for this purpose. The ideas are not fully developed so I'm looking for some additions to the list. Feel free to comment or criticize..

Recent advances in virtual gaming engines (specifically Second Life) make it an ideal platform for the study of interactive architectures and responsive environments. The unique characteristics of virtual space allow for 4D spatial processing of kinetic architectures as they respond to human (avatar) behaviors and, in turn, affect those behaviors in an ongoing feedback cycle. This facilitates observation, testing, and analysis of the interaction between humans and scripted objects (programmable elements) and the resultant behaviors and dynamics that emerge from those interactions.

In Tobi Schneidler's interactive architectural and media works, he identifies a critical precondition of the development of any responsive or interactive space: the need for a 1:1 prototype scale for usability testing. He goes on to stress the importance of such testing as there are behavioral patterns and outcomes that the designer cannot predict or account for until they are observed within the full context of the interaction. Current 3D modeling and animation tools allow for more flexibility in formal expression and kinetics, but the search continues for the proper digital tools to test and develop various types of interactive architectural environments. I believe that Second Life provides an ideal platform for this type of research/design development.

I am identifying 3 key qualities of Second Life virtual space and its ability to test and develop responsive/interactive environments. These ideas are explained briefly here but are part of a larger essay that will be released in the coming weeks. These include..
-Human factors engineering
-Physics and time
-Encoding objects and environments

Human Factors Engineering

A pre-inhabited environment provides the opportunity for chance social encounters and interactions. This more accurately reflects the way we 'happen upon' and engage space (and each other) while exploring the physical world. For this reason, the environment of Second Life places more demands upon the designer's ability to visually market a designed space to chance passersby. This holds true to the 'view from the street' effect which visually entices users to visit and engage a site as opposed to simply downloading an animation or being passed through a series of internet links to a website video. It requires a bit more commitment on the part of the user. The designer is now required to consider how a site or architectural installation will attract users through its physical appearance, siting, and behaviors. If the site/installation does not look appealing or engaging, avatars may simply walk by and make the choice not to engage it.

Social Interrelation and Cooperation
We bring our physical mannerisms and behavioral tendencies into Second Life and act these out through the representative avatar. For this reason, characteristics of Second Life tend to reproduce some aspects of social interaction within physical reality. This is true for qualities ranging from altruism and selfishness to embarrassment and self-esteem. Etiquette also plays a role: for example, it is usually considered courteous to face an avatar when speaking with them and to maintain equal elevation when holding a conversation with another avatar or group of avatars (two avatars speaking will generally approach each other, hover to the same height, and face each other to speak). Second Life also allows for sociospatial physics: for example, an avatar must be within 'listening distance' of another avatar for a conversation to take place. These human behaviors, and the nature of the space which allows them to play out, are crucial to take into account when attempting to model responsive environments which must react accurately and reliably to both predictable and unpredictable human behaviors.
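The 'listening distance' constraint above is a concrete, testable rule. A sketch of that sociospatial check follows; the 20 m figure is roughly Second Life's local chat ('say') range, but treat it as an approximation, and the function names are my own:

```python
import math

CHAT_RANGE = 20.0  # approximate Second Life 'say' range in meters (assumption)

def can_converse(pos_a, pos_b, chat_range=CHAT_RANGE):
    """A conversation is only possible if two avatars are within chat range."""
    return math.dist(pos_a, pos_b) <= chat_range

def audible_listeners(speaker, avatars, chat_range=CHAT_RANGE):
    """Return the avatars close enough to 'hear' a given speaker -
    the set of possible participants in that conversation."""
    return [a for a in avatars if math.dist(speaker, a) <= chat_range]
```

A responsive build that reacts to conversation (rather than mere presence) would need to respect exactly this kind of spatial constraint, which is why modeling it in-world is more faithful than an animation.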

Avatar-Human Proxy
Current 3D modeling and animation tools mostly place the user in the role of observer. In these cases the user is generally represented by a virtual camera that the user controls to adjust their position and orientation with respect to a given modeled structure or environment. Second Life, on the other hand, allows the user to adopt the role of participant which enables the user to interact with designed space. The user relates to and associates themselves with the avatar (their digital representation) and this generally results in the user acting through the avatar as opposed to perceiving the control of a separate entity. They begin to associate the avatar's actions with their own and thus there is an identity associated with every action. The user may also begin to feel that their actions have consequence or causality associated with them and this gives significance to those actions (be they virtual, physical, or both).

Recording Experience
Through the use of mouselook and machinima, we can actually record the experience of users in any given space. Second Life gives us the possibility of establishing cameras in any position or in multiple positions simultaneously. This gives designers the capability to record for analysis the interaction taking place from multiple perspectives including those from the 'eyes' of the users themselves. We also have the ability to visually track movement and positions in a given space or structure. When this data is compiled, we can begin to recognize trends and discover patterns of use unseen by the naked eye. This component will be utilized heavily in the coming experiments and more on this to come..

Physics and Time

Kinetics, Physics Modeling and Usability
Second Life allows for physics modeling such as mass, momentum, gravity, and a host of other characteristics. While physical construction is a necessity for any kinetic/dynamic object or structure (due to scalar complications like friction, tensile strength, etc.), Second Life provides the opportunity to study the interaction taking place between individuals and environments, which helps to develop that particular aspect of a responsive/interactive space. This can then serve as the basis for further development of the physical build as it progresses through its stages of development.

'Real Time' and Co-Inhabitation
Second Life allows multiple users to interact with environments and each other in an uninterrupted singular temporal stream. There is no reversal or playback within the program itself (with the exception of exported video, which is dissociated from the main program); therefore traits like embarrassment, mistakes, and other realistic human qualities become engineered into the overall interactive experience. As Second Life has developed over time, we also have the possibility of diurnal cycles (day and night) which provide opportunity for time-based interactive works that utilize lighting for daytime or nighttime participation. This forces the designer to address such cycles and provides an additional element of realism which must be accounted for in any simulated interactive build.

Encoding Objects and Environments

Complete freedom to encode objects and environments
This opens the potential for surface, structural or environmental characteristics to contain an element of responsiveness. While many of these characteristics may not apply to a subsequent physical build, the point is the ease with which the designer can encode an environment. This allows the designer to explore different avenues of a particular idea not necessarily possible through physical exploration. This may present additional ideas and opportunities that will eventually inform another part of the project unforeseen in the original design.

Exchange of Information
The nature of Second Life's virtual environment allows information to be transmitted to and from this space in a bi-directional manner. While virtual space provides an excellent platform for the study of interactive works, it also allows the subsequent built physical work to communicate with its virtual simulation. This feature has been utilized in many projects, best demonstrated in the Muscle Project by Oosterhuis et al. A virtual simulation of the structure was initially built in Virtools for testing purposes. The eventual built form retained this virtual double, allowing users to have an effect on the final build by manipulating both the virtual form as well as the physical build directly. This idea of the 'dual reality interface' allows the interactive work to be affected from multiple locations as well as through multiple forms of media.

Quantifying Interaction and Inhabitation
Second Life provides unprecedented ability to record and quantify interactions. Cameras and script tracking provide for in-world real-time data acquisition and analysis. All actions in Second Life are ultimately quantifiable: every discussion and action has the ability to be recorded and documented for review and study. Designers are able to track avatar movement through space, visual attention through mouselook recordings, textual conversations, or attendance for any given designed space. While this has sometimes resulted in unwarranted and unethical surveillance in virtual environments, its responsible use can assist in testing variables such as a site's popularity, its use, and specific interactions between avatars and kinetic elements.
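To make the analysis step concrete: once avatar positions are logged at regular intervals by a tracking script, the simplest way to surface "patterns of use unseen by the naked eye" is to bucket the samples into a coarse occupancy grid. This is a generic sketch, not any particular SL tool; the cell size and names are hypothetical:

```python
from collections import Counter

CELL = 2.0  # hypothetical grid cell size in meters

def occupancy_grid(position_log, cell=CELL):
    """Bucket logged (x, y) avatar positions into a coarse grid.

    position_log: iterable of (x, y) samples taken at regular intervals.
    Returns a Counter mapping grid cell -> sample count, so the
    most-visited regions of a build stand out immediately.
    """
    grid = Counter()
    for x, y in position_log:
        grid[(int(x // cell), int(y // cell))] += 1
    return grid

def hotspots(position_log, n=3, cell=CELL):
    """The n most-occupied cells - candidate 'patterns of use'."""
    return occupancy_grid(position_log, cell).most_common(n)
```

Because samples are taken at a fixed interval, cell counts are proportional to dwell time, so the same data answers both "where do avatars go?" and "where do they linger?" - exactly the kind of trend analysis described above.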