Participants were asked to pick up a wind-generation device (a piece of foam board) and wave it toward the propeller. The propeller rotated at different speeds depending on the strength of the wind generated. Participants could then look through the simulation window to see how their wind affected the virtual environment.
The original version of this assignment required that we create either an audio or a visual experience, but not both. The original CTRL+W was a purely visual experience. For the second version, we could use both audio and visuals. I added a generative sound component by composing eight six-note 'phrases' that could be strung together into a melody. The system would select which phrase to play based on the propeller's speed.
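The write-up doesn't include the original implementation, but the speed-to-phrase mapping can be sketched as follows. The phrase contents (here arbitrary MIDI note numbers), the speed range, and the linear mapping are all assumptions for illustration:

```python
# Hypothetical sketch of the phrase-selection logic: eight six-note
# phrases, chosen by propeller speed. Note values and the speed scale
# are invented for illustration, not taken from the project.

# Eight six-note phrases (MIDI note numbers), ordered roughly from
# calm to energetic.
PHRASES = [
    [60, 62, 64, 65, 67, 69],
    [62, 64, 65, 67, 69, 71],
    [64, 65, 67, 69, 71, 72],
    [65, 67, 69, 71, 72, 74],
    [67, 69, 71, 72, 74, 76],
    [69, 71, 72, 74, 76, 77],
    [71, 72, 74, 76, 77, 79],
    [72, 74, 76, 77, 79, 81],
]

def select_phrase(propeller_speed, max_speed=100.0):
    """Map the propeller's current speed to one of the eight phrases.

    Faster spinning selects a later (more energetic) phrase; a melody
    emerges as successive phrases are strung together over time.
    """
    speed = max(0.0, min(propeller_speed, max_speed))  # clamp to range
    index = int(speed / max_speed * (len(PHRASES) - 1))
    return PHRASES[index]
```

A gentle wave would select one of the calmer opening phrases, while vigorous waving would push the melody toward the upper phrases, which is one way the sound could reward exploration.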
Aside from some minor technical glitches, CTRL+W was by far my favorite project of the term. At first, I felt constrained by the fact that we could only use a single knob to create an experience. In retrospect, the constraints forced me to be inventive and to consider the different facets of the interaction: Where does the user stand? What will she see or hear? How does the knob rotate, and how does the system respond? One thing I noticed was how much richer the experience became when sound was added. In its current state, the visual interface doesn't change very much during the interaction. The sound added an element of discovery to the experience; users were interested in hearing the different melodies triggered by their motions. To me, this kind of human-machine interaction space seems very promising as a possible thesis direction. I plan to investigate this hunch in future projects.