Flightphase was brought onto this project by HUSH Studios as Art and Technology Director to create, in collaboration with HUSH and 160over90, an image-based responsive environment at the University of Dayton. The 36-foot wall at the admissions center was to become an interactive attractor for prospective students and their families.

The brief for this project sounded great. We were to take the concept of the wall ‘opening up’ to reveal fragments of experiences at UD, and turn it into a responsive environment that uses intuitive gestural interaction and is evocative of UD’s visual brand. The goal was also to make it engaging and simply fun.

Our task was to design and implement the installation: formulate the visual design and the interaction design, in concert with the software and hardware solutions that would support them.

From the wider range of ideas we came up with, the University of Dayton chose the concept of a field of cubes that animate in various ways and, when someone is present in front of them, come together to form ‘screens’ playing videos. Both this procedural animation and the computer vision application that provides the interactive input were built with the open-source coding platform openFrameworks.

The videos in the installation were directed by Peter Rhoads, and produced, as was the entire project, by HUSH.

Visual Language

Our basic element, the cube, a great mechanism for revealing the videos, was now the starting point for explorations of a visual design that would make the cubes more than ‘just cubes’, and that would echo the visual language of UD, especially the flat rectangular shapes of the Dayton banners. We experimented with ways of rendering the cubes to abstract them to their basic geometric shapes. We ended up using an orthographic camera, no lighting, and frequently rendering one face of each cube in the same color as the background.

We then added a color shader. Each face of each cube is rendered with a single color, but this color changes depending on the face’s angle to the camera. The color is picked from a pre-designed image gradient that constitutes a palette. There are several palettes that change frequently. The palettes change not only the color of the cubes, but also how much they are abstracted, that is, how much they merge with the background, creating further variation in the shapes of the pattern.
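As a minimal sketch of that idea (the production lookup runs in a shader, so the function and names here are purely illustrative): the face’s orientation relative to the camera is mapped to a horizontal position in the gradient image, and the color found there is used for the whole face.

```cpp
#include "ofMain.h"

// Hypothetical sketch: pick a face color from a pre-designed gradient image
// based on how directly the face points at the camera. The real lookup happens
// in a shader; this just illustrates the mapping.
ofColor colorForFace(const ofVec3f& faceNormal,
                     const ofVec3f& cameraForward,
                     const ofImage& palette) {
    // 1.0 when the face looks straight at the camera, 0.0 when seen edge-on.
    float facing = ofClamp(fabs(faceNormal.getNormalized().dot(cameraForward)), 0.0f, 1.0f);

    // Use the facing value as a horizontal coordinate into the gradient strip.
    int x = (int) ofMap(facing, 0.0f, 1.0f, 0, palette.getWidth() - 1);
    return palette.getColor(x, palette.getHeight() / 2);
}
```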

Altogether, the entire field of cubes, with the way they overlap and with the negative-space shapes formed between them, had the potential to create a variety of looks and patterns that could be perceived as flat 2D imagery or as 3D objects. This gave it the quality of something more structural and dimensional that could ‘open up’, rather than something sitting flat on the surface of the wall.

cu_dimensional1_Sm

The field of cubes is animated with waves of activity. The effects we apply to the cubes are grouped together to create a number of specific, distinct looks. Within each group, however, the effects are picked randomly, resulting in patterns which, while staying within the general aesthetic boundaries we created, vary in the specifics.

affectors1

The circles in the above ‘debug’ view of the application are what we call Affectors. Invisible in normal mode, they travel randomly across the wall, and each of them rotates, moves or scales the cubes it passes over. Several of these Affectors combined can produce very interesting effects.

The Affectors start small and grow to their final size as they travel around. The longer a cube has been under the effect of an Affector, the more it is influenced by it.
Most of the time many Affectors act on the cubes simultaneously, and they grow fairly large, as in the image below.
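As a rough illustration (the struct, fields and numbers below are guesses for the sake of the sketch, not the production code), an Affector can be thought of as a wandering disc with a growing radius and a falloff toward its edge:

```cpp
#include "ofMain.h"

// Illustrative sketch of an Affector: an invisible disc that wanders across the
// wall and rotates, moves or scales the cubes it passes over.
struct Affector {
    ofVec2f pos;         // current position in wall space
    ofVec2f velocity;    // random wander direction
    float   radius;      // starts small and grows toward maxRadius
    float   maxRadius;
    float   strength;    // how strongly it transforms the cubes it covers

    void update(float dt) {
        pos   += velocity * dt;
        radius = ofLerp(radius, maxRadius, 0.5f * dt);   // grow to final size
    }

    // Influence is zero outside the disc and ramps up toward the centre.
    float influenceAt(const ofVec2f& cubePos) const {
        float d = pos.distance(cubePos);
        return d < radius ? strength * (1.0f - d / radius) : 0.0f;
    }
};
```

In the installation, each cube would also track how long it has been under an Affector, so the influence builds up over time rather than switching on abruptly.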

affectors1

We put together sets of those Affectors that we thought looked good together, each creating a specific kind of pattern. Within each set the animation is subtle and fairly low-key. The change between Affector Sets is much more dramatic. It is triggered by interactive events, such as when someone enters the interaction area, and also periodically after some time has passed. The change in the Affector Set is accompanied by a change in palette. Both the sets and the palettes are picked randomly, creating more variation and unique looks.
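A hedged sketch of that switching logic, with the collections reduced to simple counts and the names made up for illustration:

```cpp
#include "ofMain.h"

// Illustrative sketch: pick a new Affector Set and palette at random, either
// when someone enters the interaction area or after an idle timer expires.
// 'numSets' and 'numPalettes' stand in for the actual collections.
void pickNewLook(int numSets, int numPalettes, int& currentSet, int& currentPalette) {
    currentSet     = (int) ofRandom(numSets);      // random Affector Set
    currentPalette = (int) ofRandom(numPalettes);  // random palette, changed together
}
```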

Interaction Design

The interaction is very intuitive: a person’s presence alone elicits a response from the system, without requiring any intentional interaction. The gestural interaction was designed based on our previous experience using Kinect cameras. At Dayton, four Kinects embedded in the ceiling in front of the wall provide information about the viewer’s presence and movement. The contour of the person, as seen by the overhead cameras, is mapped onto the field of cubes. Three large areas on the wall are regions where the videos can be revealed. The cubes that fall within both the person’s contour and a video region fade in their video tiles as they scale, rotate and move to congeal with other cubes, forming the video.
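Reduced to its essentials (names hypothetical), each cube’s decision to start joining a video screen comes down to a containment test against both the mapped person contour and one of the three reveal regions:

```cpp
#include "ofMain.h"

// Illustrative test: a cube starts fading in its video tile only when it falls
// inside both the person's contour (mapped into wall space) and one of the
// three video reveal regions.
bool shouldRevealVideo(const ofVec2f& cubePos,
                       const ofPolyline& personContour,
                       const ofRectangle& videoRegion) {
    return videoRegion.inside(cubePos.x, cubePos.y)
        && personContour.inside(cubePos.x, cubePos.y);
}
```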

The images below show the Kinect installation diagram and the combined depth images from the four Kinects. Misaligning the cameras slightly actually helped us avoid interference between them (a lesson we learned from aligning them too well!).

diagram2

kinect_view_

One of the most complicated aspects of the application was the transition from the inherent behavior of the cubes to the behaviors tied to revealing the video and responding to the person’s presence. While each cube needed to respond immediately to the person, it still needed to retain some of the animation it would have otherwise, creating a smooth transition: a video reveal that stays in the style of the pattern currently being ‘performed’ by the cubes. We also wanted the rectangle of the video never to have a straight edge, but to always be influenced by the waves of activity passing through the field of cubes.
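A simplified sketch of that blending idea, assuming a per-cube revealAmount that ramps up under the viewer’s influence (the 0.9 cap is an illustrative choice, not the production value):

```cpp
#include "ofMain.h"

// Illustrative blend between a cube's ambient, Affector-driven position and its
// target position in the video tile grid. Never blending all the way to 1.0
// leaves some ambient motion in place, so the video's edge keeps rippling with
// the waves passing through the field.
ofVec3f blendedPosition(const ofVec3f& ambientPos,
                        const ofVec3f& videoTilePos,
                        float revealAmount) {
    float t = ofClamp(revealAmount, 0.0f, 0.9f);   // illustrative cap
    return ambientPos.getInterpolated(videoTilePos, t);
}
```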

viewer_effect

In the images above you can also see our interaction simulator: an app that sends our main application the same kind of information about the position and contour of the person as the real tracker application does.

When a person enters the interaction area, they send a wave through the field of cubes, a kind of ‘hello’ from the system, a gesture of recognition and acknowledgment that each viewer has an impact on the entire system.

entry_wave

The cubes that are not in the video-revealing regions, but are within the area affected by the viewer, assume a different behavior from all the other cubes. This way the viewer leaves a trace as they move through the cubes, and their every move is immediately reflected in the patterns they see in front of them.

When nobody is present in the interaction area, the installation goes into ‘Idle Mode’. In this mode the graphics continue with their waves of activity, and on top of them typography animates on, forming questions that UD would like its students to ponder. The palettes and the behavior of the cubes change slightly to accommodate the legibility of the type.

entry_wave

Hardware and Software tools

The software for this project was built using openFrameworks. For video tracking we are using a modified version of TSPS (Toolkit for Sensing People in Spaces). We are using two Mac Minis to get input from the Kinect cameras: each Mac Mini runs the TSPS app, blending the input from two Kinects and sending the contour information over to the Mac tower. Another blending process there combines the depth images from all four Kinect cameras into a single long interaction area. The same tower also runs our application responsible for animating and drawing the cubes.
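A rough sketch of that stitching step on the tower, assuming the depth images cover adjacent slices of the wall (the real setup also deals with the slight overlap and misalignment between cameras):

```cpp
#include "ofMain.h"

// Illustrative sketch: paste the incoming depth images side by side into one
// long strip covering the whole interaction area.
void stitchDepthImages(const std::vector<ofPixels>& depthImages, ofPixels& combined) {
    if (depthImages.empty()) return;
    int w = depthImages[0].getWidth();
    int h = depthImages[0].getHeight();
    combined.allocate(w * depthImages.size(), h, OF_PIXELS_GRAY);
    for (size_t i = 0; i < depthImages.size(); i++) {
        depthImages[i].pasteInto(combined, i * w, 0);
    }
}
```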
There was yet another blending process involved: blending the projected image. We are using three projectors with some overlap between them, and to blend the image we are again using a slightly modified version of a projector-blending shader we had used on other projects.
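The core of such a blend shader boils down to a gamma-corrected ramp across each overlap region. Here is that ramp sketched as plain C++ (the real thing runs per pixel on the GPU, and the gamma value is just an assumption):

```cpp
#include "ofMain.h"
#include <cmath>

// Illustrative edge-blend ramp: attenuate a pixel that sits 'x' pixels into an
// overlap region of width 'overlapWidth', so the two projectors' contributions
// add up to a seamless image. Gamma compensates for the projector response.
float edgeBlendWeight(float x, float overlapWidth, float gamma = 2.2f) {
    float t = ofClamp(x / overlapWidth, 0.0f, 1.0f);   // 0 at the outer edge of the overlap
    return powf(t, gamma);
}
```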

Here Jeff is working on our Mac tower in the control room, with three displays mirroring the projector output. On the left you can see the two Mac Minis running TSPS.

ud_controlRoom

Prototyping Process: content design and software design

We were going for a happy medium between letting the system form things that would be unexpected and emergent, and achieving a look that was somewhat controlled and designed. We would often try things out, observe what they produced, and then curate the resulting effects down to the most desirable ones. The entire process was pretty great, with a constant feedback loop between the software design and the content design: the design setting the initial boundaries for the software tool, and the tool letting the design be expressed in specific ways. We started by creating motion tests to get a sense of what kind of movement and look we wanted to achieve. Based on that, Jeff started to build software capable of achieving those kinds of animations, which could also produce the emergent behavior we were going for. The rest of the process was spent working directly within this brand-new tool, with me creating the actual animations and designs and Jeff reshaping the tool and adding capabilities to it, all in a very tight cycle. This had the effect of really focusing the final outcome to fit the very specific vision we had.

More video documentation and Jeff and I talking about how it works:

A few impressions from everyone involved in the project: UD, 160over90 and HUSH:

Client: University of Dayton
Agency: 160over90
Production Company: HUSH
Art & Technology Director: Flightphase

Flightphase credits:
Creative Direction, Interaction Design, Bespoke Software Design

Creative Direction/Design: Karolina Sobecka
Technical Direction: James George, Jeff Crouse
Lead Software Development: Jeff Crouse
Additional Software Development: Caleb Johnston