Runway – Natural & Intuitive 3D Modelling

Group number and name: #13, Team CAKE
Team members: Connie, Angie, Kiran, Edward

Project Summary

Runway is a 3D modelling application that makes 3D manipulation more intuitive by bringing virtual objects into the real world, allowing natural 3D interaction with models using gestures.

Previous Blog Posts

P1: Group Brainstorming
P2: Contextual Inquiry and Task Analysis
P3: Low-Fidelity Prototype
P4: Usability Test with Lo-Fi Prototype
P5: Working Prototype
P6: Pilot Usability Study

Demo Video

In this video, we use augmented reality technologies to show the virtual 3D scenes floating in midair where the user sees them. These were produced in realtime – the images were not overlaid after the fact. Sometimes, you may notice that the user’s hand passes in front of where one of the objects appears to be located, and yet the object blocks the user’s hand. This is simply because the camera used to record the video has no depth sensor – it does not have any way of knowing that the virtual objects should appear in a location further from the camera than the user’s hand.

Changes since P6

  • Implemented third-person video capture: This allows us to take demo videos that show the objects floating in front of the user, as they see them. Previously, only the active user could see the objects, whereas a camera would just see two images on the screen.
  • Implemented object creation: We added another operation mode that allows the user to create primitives at specified locations in the scene.
  • Implemented object scaling: We added gestures to the object manipulation mode so that users could also scale objects. Inserting two fingers into an object activates scaling, while having one finger within the object and one outside of it activates rotation (a sketch of this decision appears after this list).
  • Improved performance: We implemented spatial partitioning data structures to improve the efficiency of several operations, allowing us to include more complicated meshes in our scenes.
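
The scaling/rotation decision can be sketched roughly as follows. This is an illustrative C# (Unity) sketch rather than our actual script: it assumes fingertip positions have already been calibrated into Unity world space, and it approximates "inside the object" with the object's bounding box.

using UnityEngine;

// Illustrative sketch: choose between scaling and rotation based on how many
// tracked fingertips fall "inside" the object (approximated by its bounds).
public enum ManipulationMode { None, Rotate, Scale }

public class ManipulationModeChooser : MonoBehaviour
{
    public Collider target;   // collider of the object being manipulated

    // Fingertip positions are assumed to already be calibrated into world space.
    public ManipulationMode Choose(Vector3 fingerA, Vector3 fingerB)
    {
        bool aInside = target.bounds.Contains(fingerA);
        bool bInside = target.bounds.Contains(fingerB);

        if (aInside && bInside) return ManipulationMode.Scale;  // both fingers inside: scale
        if (aInside != bInside) return ManipulationMode.Rotate; // one inside, one outside: rotate
        return ManipulationMode.None;                           // neither inside: ignore
    }
}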

Evolution of Goals and Design

Our project goal has not changed significantly over the course of the semester, although our design focus has shifted from providing intuitive gestures to providing a usable system. Our goal has always been to provide a natural and intuitive user interface for 3D modeling by uniting a 3D workspace with a real-world interaction space. Because much of the core functionality of our application would be provided by existing hardware (e.g. 3D views from a 3D monitor and 3D gesture data from the Leap), the focus of our project design was originally to come up with an intuitive set of gestures to use for performing various 3D modeling tasks. As such, we used P4 to refine our gesture set based on how a user tends to act with a real, physical object. However, we found ourselves somewhat restricted while implementing these gestures (rotation especially), as we were limited to gestures that were easily recognizable by the hardware. As we attempted to refine the usability of our system, we began to focus more and more on dealing with aspects like stability and performance, which we found hindered users in P6 more than the gesture set itself. Thus, our focus has narrowed as we’ve realized that the basic usability of our application strongly affects the average user’s experience.

Our conception of our user base has also changed in light of the usability barriers we have come up against. Originally, we targeted experienced 3D modelers who could make the most effective use of the functionality we provided; however, as we pared down our functionality to the scope of a semester-long project, we began to realize that 3D modelers would be using such an application at a much finer and more sophisticated level than we could manage. Furthermore, as we struggled with the stability and performance of our implementation, we came to realize that it would take some time for modern hardware to catch up to the accuracy needed for 3D modeling. As a result, we began to focus more on making the application accessible to an average, inexperienced user who could give us more of an idea of how the typical user would approach a 3D modeling task, rather than observing the habitual methods of experienced 3D modelers. In this way, we could gain more insight into the system’s usability for general 3D interactions.

Critical Evaluation of System

Our work suggests that, with further iteration, Runway could definitely be turned into a useful real-world system. Our users found the system intuitive and relatively easy to use. They found the system to be more compelling than traditional 2D systems for 3D modelling, even with our small set of prototype functionality. The most interesting development for Runway would be to integrate our interaction paradigm of aligned gesture and stereoscopic display space with existing 3D modelling systems, which would take full advantage of these existing systems while providing the intuitive interface for interacting with 3D data that Runway provides.

Unfortunately, our users were at times greatly hindered by the instability of gesture detection with the Leap (in particular, detecting and distinguishing between clenched fists and pointing fingers). This tracking robustness is our main usability bottleneck: the Leap Motion sensor is not sufficiently reliable and accurate, which causes frustration for our users. It could be improved in future iterations in both software (with control theory to stabilize gesture detection) and hardware (with more advanced sensors). We could also improve the believability of the 3D representation: for the 3D display to be completely convincing, there should be face-tracking, so that the scene appears stationary as the user moves their head around. As hardware and algorithms improve, this type of interface will become more and more feasible.
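
To make the software side of this concrete, the kind of stabilization we have in mind is sketched below in C#. It is a hedged illustration, not the shipped code: fingertip positions are exponentially smoothed, and the discrete hand pose (fist vs. pointing finger) is debounced so that a single misclassified frame cannot flip the interaction mode.

using UnityEngine;

// Sketch of gesture stabilization: smooth positions, debounce discrete poses.
public class GestureStabilizer
{
    public float smoothing = 0.5f;     // 0 = no smoothing, approaching 1 = very sluggish
    public int framesToSwitch = 5;     // consecutive frames required to accept a new pose

    Vector3 smoothedTip;
    bool hasTip = false;
    string currentPose = "neutral";
    string candidatePose = "neutral";
    int candidateCount = 0;

    // Exponential moving average of the fingertip position.
    public Vector3 SmoothTip(Vector3 rawTip)
    {
        if (!hasTip) { smoothedTip = rawTip; hasTip = true; }
        smoothedTip = Vector3.Lerp(rawTip, smoothedTip, smoothing);
        return smoothedTip;
    }

    // Only switch poses after the same raw label has been seen several frames in a row.
    public string FilterPose(string rawPose)
    {
        if (rawPose == currentPose) { candidateCount = 0; return currentPose; }
        if (rawPose == candidatePose) candidateCount++;
        else { candidatePose = rawPose; candidateCount = 1; }
        if (candidateCount >= framesToSwitch) { currentPose = candidatePose; candidateCount = 0; }
        return currentPose;
    }
}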

In our application, the task that users picked up most quickly was the 3D painting task, for which the gesture seemed to be the most intuitive, as there was a direct analogue to finger painting on a physical object. This suggests that with the right gestures, this type of interface for 3D modelling would be intuitive and quick to pick up. In the space of 3D modelling, current applications are very complex and generally have tutorials and hints in the application as to what functions the user may want to access and how to access them. This was something that we lacked in our application, as well as something users commented on — the gestures may be more intuitive, but they still require some memorisation, and could use some form of prompting in the background of the application. In addition, current 3D modelling systems allow for a significant degree of precision, which is lost with our current level of gestures and hand-tracking.

Further Work

Extending the functionality of our core application would be first priority in our subsequent work. This includes:

  • More robust and sophisticated gestures. The inconsistency between the gestures users made and the interpreted command significantly hindered user experience with the application. To ameliorate this problem, we would like to integrate more robust gesture detection through software, and through hardware if possible. We would also like to further explore the possible gesture set to integrate more sophisticated gestures: in particular, rotation was something that users found difficult, and the gesture could likely be improved to be both more intuitive for the user and more robustly detected.
  • A gesture tutorial, along with better onscreen feedback on currently available gestures. During our user studies, we found that users commonly misunderstood or forgot the relevant gestures to achieve their goals. To mitigate this problem, we want to include not only a thorough demo and tutorial, but also a way to suggest and/or remind users of the available gestures within a mode. It would also be beneficial to have more useful feedback about gesture recognition and tracking state, for example a pop-up whenever users perform a gesture incorrectly, or a simple indicator of how many hands are currently being tracked (a sketch of such an indicator appears after this list).
  • Additional modelling operations commonly used in existing 3D modelling systems. Obviously, our system is not a fully fledged 3D modelling application yet. To become more useful, we want to implement more features that are commonly used in Maya or Blender, such as rigging and keyframing, as well as more complicated mesh manipulations such as face extrusion and mesh subdivision.
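
As a concrete (and deliberately simple) illustration of the feedback indicator mentioned above, the following Unity C# sketch draws the current mode, the number of tracked hands, and an optional hint in a corner of the screen. The field names are illustrative; whatever script consumes the Leap data would set them each frame.

using UnityEngine;

// Sketch of a minimal tracking-status overlay (not the actual status code).
public class TrackingStatusOverlay : MonoBehaviour
{
    public string currentMode = "Object Manipulation";
    public int trackedHands = 0;
    public string lastGestureHint = "";   // e.g. "Rotation needs one finger inside the object"

    void OnGUI()
    {
        GUI.Label(new Rect(10, 10, 400, 20), "Mode: " + currentMode);
        GUI.Label(new Rect(10, 30, 400, 20), "Hands tracked: " + trackedHands);
        if (lastGestureHint.Length > 0)
            GUI.Label(new Rect(10, 50, 600, 20), "Hint: " + lastGestureHint);
    }
}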

In terms of additional functionality, we would love to include the following:

  • Haptic feedback is another way that we can increase the believability of our system, as well as improve our understanding of how users perceive the stereoscopic objects. Something as simple as vibrating actuators on a glove that buzz when the user touches an object will provide valuable feedback and intuition. This is also important for resolving perceptual ambiguities with stereoscopic objects, especially since the brain often cannot fuse the two images forming an object into a single virtual 3D object. Haptic feedback will augment the spatial output of the system, so that if spatio-visual output is unreliable, spatio-haptic output will suffice (a sketch of this idea appears after this list).
  • Face-tracking is important for the believability of the stereoscopically rendered objects. Currently, if the user moves their head, the objects will appear to drift along with them instead of remaining stationary as if they were really floating in space. The easiest way to perform this is to put a fiducial marker (which looks like a barcode) on the 3D glasses, and use another webcam to track the position of that marker.
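
A minimal sketch of the haptic idea is below, assuming the glove's actuator is driven by an Arduino listening on a serial port with a one-character protocol ('1' = buzz, '0' = stop). The port name, the protocol, and the class itself are assumptions for illustration (and System.IO.Ports requires Unity's full .NET 2.0 API compatibility level).

using System.IO.Ports;
using UnityEngine;

// Hypothetical sketch of the haptic glove: buzz while the fingertip is inside the object.
public class HapticBuzzer : MonoBehaviour
{
    public Collider target;
    public string portName = "COM3";   // platform-dependent, e.g. "/dev/ttyACM0" on Linux

    SerialPort port;
    bool buzzing = false;

    void Start()
    {
        port = new SerialPort(portName, 9600);
        port.Open();
    }

    // Called each frame with the calibrated fingertip position.
    public void UpdateFingertip(Vector3 tip)
    {
        bool inside = target.bounds.Contains(tip);
        if (inside == buzzing) return;      // only write to the Arduino on state changes
        port.Write(inside ? "1" : "0");
        buzzing = inside;
    }

    void OnDestroy() { if (port != null && port.IsOpen) port.Close(); }
}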

We would like to perform further testing with users in a more directed fashion to understand several of the issues we have noticed with our system that we did not specifically address in our hi-fi prototype. For example, because of the first-person nature of our system, it is difficult to assess how accurate our spatial calibration of gesture and display space is; in our current system we have a pointer element that follows the user’s fingertips, but we could perhaps study how easily users perform tasks without such a feedback element. Haptic feedback would also be helpful in this regard. We also did not perform long tests to assess the effects of gorilla arm, but we know that gorilla arm fatigue is a common problem with gestural systems. We could experiment with allowing users to rest their elbows on the table, and determine the effects of this on gestural stability and fatigue. Finally, once our system is more refined and more fundamental operations are available, we would like to test our system with experienced 3D modellers to see if they approach the task of 3D modelling the same way inexperienced users do. This would also help us determine if our system would be truly useful as an application.

Source Code

The source code for our project is available here (Dropbox)

Third Party Code

  • Unity3D: We are using the Unity Game Engine as the core of our application.
  • Leap Motion: We use the Leap Motion SDK to perform our hand-tracking and gesture recognition.
  • DotNumerics: We use the DotNumerics Linear Algebra C# Library for some of our calibration code.
  • NYARToolkit: The NYARToolkit is an Augmented Reality C# library that we use to localize the third-person webcam used to record our demo videos.
  • .NET Framework: Since our code was written exclusively in C# for Unity3D, we made ample use of Microsoft’s .NET framework.
  • Although not used as source, we used ffmpeg as an integral component of our third-person video recording.

We did not directly use code that we had previously written, although the calibration process code was a direct extension of research performed in Edward’s Fall 2012 Independent Work.

Demo Session Materials

Our PDF presentation is available here

P5 Runway – Team CAKE

Group number and name: #13, Team CAKE
Team members: Connie, Angie, Kiran, Edward

Project Summary

Runway is a 3D modelling application that makes 3D manipulation more intuitive by bringing virtual objects into the real world, allowing natural 3D interaction with models using gestures.

Tasks Supported

The tasks cover the fundamentals of navigation in 3D space, as well as 3D painting. The easiest task is translation and rotation of the camera; this allows the user to examine a 3D scene. Once the user can navigate through a scene, they may want to be able to edit it. Thus the second task is object manipulation. This involves object selection, and then translation and rotation of the selected object, thus allowing a user to modify a 3D scene. The third task is 3D painting, allowing users to add colour to objects. In this task, the user enters into a ‘paint mode’ in which they can paint faces various colours using their fingers as a virtual brush.

Task Choice Discussion

From user testing of our low-fi prototype, we found that our tasks were natural and understandable for the goal of 3D modelling with a 3D gestural interface. 3D modelling requires being able to navigate through the 3D space, which is our first task of camera (or view) manipulation. Object selection and manipulation (the second task) are natural functions in editing a 3D scene. Our third task of 3D painting allows an artist to add vibrancy and style to their models. Thus our tasks have remained the same from P4. In fact, most of the insight we gained from P4 was in our gesture choices, and not in the requirements for performing our tasks.

Interface Design

Revised Interface Design

The primary adjustments we made to our design concerned the gestures themselves. From our user tests for P4, we found that vertex manipulation and rotation were rather unintuitive. This, in addition to our discovery that the original implementations would not be robust, prompted us to change the way these gestures worked. We added the space bar to vertex manipulation (hold space to drag the nearest vertex), both to distinguish vertex manipulation from object translation and to make vertex manipulation more controlled. For rotation, we changed our implementation to something with a more intuitive axis (one stationary fist/pointer acts as the center of rotation), so that the gesture itself is better-defined for the sake of implementation.
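
A rough C# (Unity) sketch of the spacebar-gated vertex drag is shown below. It is illustrative rather than our exact script: while space is held, the mesh vertex nearest the calibrated fingertip is grabbed and follows it.

using UnityEngine;

// Sketch: hold space to drag the nearest mesh vertex with the fingertip.
public class VertexDragger : MonoBehaviour
{
    public MeshFilter meshFilter;
    int grabbedIndex = -1;

    // Called once per frame with the calibrated fingertip position (world space).
    public void UpdateDrag(Vector3 fingertip)
    {
        Mesh mesh = meshFilter.mesh;
        Vector3 localTip = meshFilter.transform.InverseTransformPoint(fingertip);

        if (Input.GetKeyDown(KeyCode.Space))
            grabbedIndex = NearestVertex(mesh.vertices, localTip);

        if (Input.GetKey(KeyCode.Space) && grabbedIndex >= 0)
        {
            Vector3[] verts = mesh.vertices;
            verts[grabbedIndex] = localTip;   // the grabbed vertex follows the fingertip
            mesh.vertices = verts;
            mesh.RecalculateNormals();
        }

        if (Input.GetKeyUp(KeyCode.Space))
            grabbedIndex = -1;                // release the vertex
    }

    static int NearestVertex(Vector3[] verts, Vector3 p)
    {
        int best = 0;
        float bestDist = float.MaxValue;
        for (int i = 0; i < verts.Length; i++)
        {
            float d = (verts[i] - p).sqrMagnitude;
            if (d < bestDist) { bestDist = d; best = i; }
        }
        return best;
    }
}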

Two main features that we added were calibration (elaborated on below) and statuses. The statuses are on-screen text prompts that show data fetched from the Leap (e.g. which hand configuration(s) are currently being tracked), the current mode, the current color (if relevant), and some debugging information about the mesh and the calibration. These statuses are displayed at screen depth in the four corners, in unassuming white text. They are primarily for information and debugging purposes, and hopefully do not detract from the main display.

Everything else in our design remained largely the same, including the open-handed neutral gesture, pointing gestures for object manipulation and painting, and fists for camera/scene manipulation. We also mapped mode and color changes to keyboard keys somewhat arbitrarily, as this is easy to adjust. Currently, the modes are (1) calibration, (2) object manipulation, and (3) painting. Colors are adjusted using the angle bracket keys, which scroll through a predefined list of simple colors. We hope to adjust this further based on user testing feedback.

Updated Storyboards

Our storyboards have remained the same from P4, since our tasks from P4 are suitable for our system. These storyboards can be found here

Sketches of Unimplemented Features

The two major features that we have not implemented are head tracking and third-person viewing (both of which are hard to sketch but easily described). Head tracking would allow us to adjust the object view such that a user can view the side of the object by simply moving his or her head to the side (within limits). Software for third-person viewing can render the display in its relative 3D space in a video, so that we can demo this project more easily (this is difficult to depict in a sketch because any sketch is already in third person). Both of these features are described in more detail below.

Prototype Description

Implemented Functionality

Our current prototype provides most of the functionality required for performing our three tasks. The most fundamental task that we did not incorporate into our low-fi prototype was calibration of the gesture space and view space. This is necessary so that the system knows where the user’s hands and fingers are relative to where the 3D scene objects appear to float in front of the user. Calibration is a simple process that must be completed before any tasks can be performed; it requires the user to place their fingertip exactly in the center of where a target object appears (floating in 3D space) several times. This calibration process is demonstrated in Video A. Once calibration is completed, a pointer object appears and follows the user’s fingertip. This pointer helps provide feedback for the user, showing that the system is following them correctly. It also helps them if they need to fine-tune the calibration, which we have implemented at a very low level: various keyboard keys change the values of the calibration matrix.
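
To give a flavour of the calibration math, the sketch below fits only a per-axis scale and offset from the collected (Leap fingertip, target position) pairs using ordinary least squares. This is a deliberate simplification for illustration; the real system solves for a full calibration matrix (using DotNumerics), and the names here are made up.

// Simplified calibration sketch: fit target ~ scale * leap + offset for one axis.
public static class SimpleCalibration
{
    // leapCoord[i] is one coordinate of the Leap-reported fingertip position when the
    // user touched the i-th target; targetCoord[i] is the same coordinate of that target.
    public static void FitAxis(float[] leapCoord, float[] targetCoord,
                               out float scale, out float offset)
    {
        int n = leapCoord.Length;
        float sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
        for (int i = 0; i < n; i++)
        {
            sumX += leapCoord[i];
            sumY += targetCoord[i];
            sumXY += leapCoord[i] * targetCoord[i];
            sumXX += leapCoord[i] * leapCoord[i];
        }
        // Ordinary least squares solution for the line target = scale * leap + offset.
        scale = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
        offset = (sumY - scale * sumX) / n;
    }
}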

After calibration has been performed, the user can start performing the 3D tasks. As mentioned above, there are three modes in our prototype: View mode, Object Manipulation mode, and Painting mode. In all three of these modes, we allow camera manipulation, including translation and rotation, using fist gestures as described above. Basic camera manipulation is shown in Video B.

In Object Manipulation mode, we support translation and rotation of individual objects in the scene, as well as vertex translations. Translation requires the user to place a single fingertip onto the object to be moved, after which the object follows the fingertip until the user returns to the neutral hand gesture. Rotation involves first placing one fingertip into the object, and then using the other fingertip to rotate the object around the first fingertip. These operations are shown in Video C. Finally, vertex translation is activated by holding down the spacebar; when the user approaches suitably close to a vertex on an object, they can drag that vertex to a desired position and release it by releasing the spacebar. Vertex manipulation is shown in Video D.
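
The rotation gesture can be sketched as follows (illustrative C#, not our exact script): the first fingertip fixes the pivot, and the object rotates by the angle the second fingertip sweeps around that pivot between frames.

using UnityEngine;

// Sketch: rotate an object around a pivot fingertip by the second fingertip's sweep.
public class PivotRotator : MonoBehaviour
{
    Vector3 previousSecondTip;
    bool hasPrevious = false;

    public void UpdateRotation(Transform obj, Vector3 pivotTip, Vector3 secondTip)
    {
        if (hasPrevious)
        {
            Vector3 from = previousSecondTip - pivotTip;
            Vector3 to = secondTip - pivotTip;
            Vector3 axis = Vector3.Cross(from, to);
            float angle = Vector3.Angle(from, to);   // degrees
            if (axis.sqrMagnitude > 1e-8f)
                obj.RotateAround(pivotTip, axis.normalized, angle);
        }
        previousSecondTip = secondTip;
        hasPrevious = true;
    }

    // Call when the user returns to the neutral gesture.
    public void EndGesture() { hasPrevious = false; }
}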

Finally, the Painting mode allows the user to select a paint color using the keyboard and paint individual faces of a mesh using a pointing fingertip. When a fingertip approaches suitably close to a face on a mesh, that face is painted with the current color. This allows the user to easily paint large swathes of a mesh in arbitrary shapes. This process is shown in Video E.
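
The face-painting step can be sketched like this (illustrative only): find triangles whose centroid lies within a small threshold of the fingertip and assign the current color to their vertices. For truly per-face colors the mesh's shared vertices would need to be split, which is glossed over here.

using UnityEngine;

// Sketch: paint mesh faces near the fingertip using vertex colors.
public class FacePainter : MonoBehaviour
{
    public MeshFilter meshFilter;
    public Color currentColor = Color.red;
    public float paintDistance = 0.02f;   // threshold in the mesh's local units (assumed)

    public void TryPaint(Vector3 fingertip)
    {
        Mesh mesh = meshFilter.mesh;
        Vector3 tip = meshFilter.transform.InverseTransformPoint(fingertip);
        Vector3[] verts = mesh.vertices;
        int[] tris = mesh.triangles;
        Color[] colors = mesh.colors.Length == verts.Length ? mesh.colors : new Color[verts.Length];

        for (int t = 0; t < tris.Length; t += 3)
        {
            Vector3 centroid = (verts[tris[t]] + verts[tris[t + 1]] + verts[tris[t + 2]]) / 3f;
            if ((centroid - tip).magnitude < paintDistance)
            {
                colors[tris[t]] = currentColor;
                colors[tris[t + 1]] = currentColor;
                colors[tris[t + 2]] = currentColor;
            }
        }
        mesh.colors = colors;   // visible only with a vertex-color-aware shader
    }
}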

We implemented these features because they are central to performing our tasks, and we had enough time to implement these features to a reasonable level of usability.

Future Functionality

All of our basic functionality is in place. However, there are several things that should be completed for our final system, mentioned in the previous section.

  • The most important such task from a user standpoint is to add head tracking. This will allow the user to move their head and see the appropriately differing view of the scene while still maintaining the calibration between viewpoint and gesture. We will accomplish this by using an Augmented Reality toolkit that can provide the position and orientation of fiducial markers (which look like barcodes) in a camera frame; attaching a marker to the 3D glasses will allow us to track the user’s head position, and therefore viewpoint, relative to the monitor (a sketch of this head-coupled camera update appears after this list). We have not implemented this yet because the core scene manipulation functionality is more central to our system. Our system works fine without head tracking, as long as the user does not move their head too much.
  • For the purposes of demonstrating our project, we also want to use augmented reality toolkits to allow third-person viewers to see the 3D scene manipulation. Currently, as you can see in the first videos, the system looks rather unimpressive from a third-person point of view; only the active user can see that their finger appears in the same 3D location as the pointer. Adding this third-person view would allow us to take videos that show the 3D virtual scene “floating” in space just as the main user would see it. This is also implemented using the Augmented Reality toolkit, which will allow such a third-person camera to know its position relative to the Leap sensor (which we will place at a specified position relative to a fiducial marker on the desk). Since this is clearly for demo purposes, it is not central to the application’s core functionality that is required for user testing.
  • Finally, if we have time we would like to add in an additional hardware component to provide tactile feedback to the user. We have several small vibrating actuators that we can use with an arduino; this will allow us to provide a glove to the user that will vibrate a fingertip when it intersects an object. This would add a whole new dimension to our system, but we want to make sure that our core functionality is already well in place before extending our system like this.
  • Some already implemented parts of our system can also use improvement – for example, several parts of the mesh processing code will likely require optimization for use on larger meshes. Similarly, we can experiment with our handling of Leap sensor data to increase robustness of tracking; for example, some sensing inaccuracies can be seen in the videos below, where an extended fingertip is not detected.
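
As a sketch of the head-coupled camera update mentioned in the first item above (illustrative C#; the tracker call is a stand-in, not a real NYARToolkit API): the fiducial tracker reports the glasses' position relative to a webcam rigidly mounted near the monitor, and we move the stereo camera rig to match so the virtual scene stays put as the head moves.

using UnityEngine;

// Hypothetical sketch of head tracking: move the stereo rig to the tracked head position.
public class HeadCoupledCamera : MonoBehaviour
{
    public Transform stereoCameraRig;   // parent of the left/right eye cameras
    public Transform webcamAnchor;      // pose of the tracking webcam in the Unity scene
    public Transform displayCenter;     // center of the 3D monitor in the Unity scene

    // headInWebcamSpace would come from the fiducial tracker each frame.
    public void UpdateHead(Vector3 headInWebcamSpace)
    {
        // Convert from the webcam's coordinate frame into the scene, then move the rig.
        Vector3 headWorld = webcamAnchor.TransformPoint(headInWebcamSpace);
        stereoCameraRig.position = headWorld;
        stereoCameraRig.LookAt(displayCenter.position);   // keep the rig facing the display
    }
}

A fully correct head-coupled display would also use an off-axis projection so the display plane stays fixed; this sketch only repositions the cameras.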

Wizard of Oz techniques

No Wizard of Oz techniques were used in this prototype. Our system already has all of its basic functionality, so this was not necessary.

External Code

We are building off of the Unity framework as a platform for development, which provides much of the basic functionality of an application: input, rendering, and scene management. Conveniently, the Leap Motion SDK provides a Unity extension that allows us to access Leap sensor data from within Unity. We also use the DotNumerics Linear Algebra C# library for some calibration code.

We borrow several classes from the sample Unity code provided with the Leap SDK. Otherwise, the C# Unity scripts are all written by our team. For our unimplemented portions, namely the third-person view and head tracking, we will be using the NYARToolkit C# library for tracking and pose extraction.

Videos

Video A – This video shows the basic setup with stereoscopic 3D monitor and Leap. It also demonstrates the calibration workflow.

Video B – This video demonstrates camera manipulations, including how to translate and rotate the scene.

Video C – This video shows object manipulation, including how to select, translate, and rotate objects.

Video D – This video shows vertex translation, where the user can deform the object by selecting and moving a vertex on the mesh.

Video E – This video shows 3D painting, where the user can easily paint faces on the mesh using gestures.

P4 Runway – Team CAKE

Group number and name: #13 Team CAKE
First names of everyone in your group: Connie, Angela, Kiran, Edward

Project Summary

Runway is a 3D modelling application that makes 3D manipulation more intuitive by bringing virtual objects into the real world, allowing natural 3D interaction with models using gestures.

Test Method

Our user test simulates the appearance of virtual objects in real space by using a blob of homemade play dough; we use a human as our Wizard-of-Oz gesture recognizer to allow users to manipulate the play dough model.

Informed Consent

We modified the standard IRB Adult Consent Form to suit our purposes, both for expediency (we expect that our testers will read a form more quickly than we can speak) and for clarity (the visual layout with separate sections for description, confidentiality, benefits, and risks is very helpful). We will have our user read the form and sign two separate sections – one to consent to the experiment and to have their performance described at a high level in our blog post, and a separate one to have their photographs included in our blog post. The main differences between our form and the IRB form are the research/non-research distinction and the confidentiality section (since our results would be discussed on a public blog, there is very little confidentiality beyond not giving names). Our consent form is available at https://docs.google.com/document/d/17tbgbv7Gk_uJpzcbOPBmua0XOxpcuCdWelPdDq9OWo4

Participants

We had three participants, all juniors in the computer science department; one of them is female and the other two are male. All participants had experience in 3D computer graphics; one did independent work in graphics, while the other two are currently taking the graphics course. These participants were all acquaintances of group members; we asked them personally to take part in our study based on their background.

Testing Environment

We set up our prototype with dough simulating the 3D object(s) to be manipulated, as if they were projected into real space, and a small cardboard model of the coordinate planes to indicate the origin of the modelling coordinate system. One person is a “Wizard of Oz” who moves the dough in reaction to the user’s gestures. Both the “wizard” and the user sit at a table, to imitate the environment of working on a computer. For the painting task, a dish of soy sauce is used to paint on the dough. We have another “wizard” assist in the object manipulation task to reshape the dough according to the user’s actions. We performed two of the tests in the CS tearoom and one in a dorm room.

Roles

  • Kiran had the most intensive job of reacting to the user’s positioning gestures; he had to accurately and quickly move the models according to our gesture set.
  • Connie was the assistant wizard who did the clay shaping for model scaling and vertex manipulation. She was also the unofficial “in-app assistance” (the user’s “Clippy”).
  • Edward was one of our observers, and was also the photographer.
  • Angela presented the tasks from the pre-determined script. She was also an observer/scribe.

Testing Procedure

In our testing procedure, we first gave the participant a copy of the consent form to read, which also provided them a very general overview of the system and their tasks. We then showed them some examples of the basic navigation gestures, and also explained the difference between the global, fist gestures and the model-level, finger gestures. Although we were advised in the spec not to demonstrate one of our primary tasks, the nature of our gestural interface meant that there were no obvious actions the user could perform; demonstrating the basic possibilities of one vs. two handed and fist vs. finger gestures was necessary to show the range of inputs.

In the first task, we then had them experiment with the navigation gestures, familiarizing themselves with the basic gestures with the goal of placing the scene into a specific view, to demonstrate their understanding of the interface. In the object manipulation task, we specified several object transformations that we wanted the user to perform (some on the model level, and some on the vertex level). Finally, the painting task allowed the user to move into painting mode (pressing a key on an invisible keyboard), select a color, and paint a design onto the model.

The scripts we used for testing are available at https://docs.google.com/document/d/1V9iHcgyMkI4mnop9zU1EV72JxfLiamoJdSG0mwTxLEw/edit?usp=sharing

Images

User 1 performing task 1. This image shows the fist gestures, as well as the model (a head) and the coordinate axes.

User 3 performing task 2. Note the finger gesture (as opposed to the fist gesture)

User 2 performing task 3, painting “hair” onto the model’s head.

Results Summary

With the first task of view manipulation, all of the users picked up on the gestures quite easily. However, they also each attempted to obtain a large rotation by continually twisting their arms around, which is quite awkward, rather than realizing that they could return to a neutral gesture and rotate again from a more comfortable position. User 2 realized this fairly quickly; the other users needed some hints. For task 2, object manipulation, the users again easily manipulated the object. However, we gave little instruction on how to select an object and deform it, which all of the users struggled with. Moreover, none of the users realized that they could actually “touch” the object; once we mentioned this, selection became easier, though since we designed selection and deformation as single-handed gestures and the users only knew of two-handed gestures, it took a couple of tries for them to switch to a single-handed gesture. All of the users also tried pinching to deform, which is a more intuitive gesture. Task 3 was easiest for the users, as the gesture for painting is exactly like finger painting.

Discussion

User 1 commented that some of the gestures are not necessarily the most intuitive gestures (a minor usability problem) for the particular commands, but we are also limited by what a Leap sensor can detect. We received very positive feedback from User 2, who even remarked that our system would make working with meshview (a program used for viewing meshes in COS426) a lot easier.

The selection and deformation task (task 2) was the most difficult for all of the users, as they hadn’t seen any gestural commands demonstrated that were similar to those for selection and deformation. For this task, there were two main problems: (1) the users did not realize that they could “touch” the object, which selection required, and (2) the most natural gesture for deformation is pinching, as opposed to selecting, pulling, and going to a neutral gesture. For the former, the problem lies in the nature of the prototype, as the object was being held and manipulated by one of us, which made it seem like the user could not actually interact directly with the object. For the latter, we had considered using pinching as a gestural command, but the Leap sensor would have difficulty distinguishing between a pinch and a fist, so we decided on a pointing finger instead. All of the critical incidents related to users not being sure of what gesture to perform to achieve a goal. When we broke from script to give the users hints, they picked up the gestures easily. A good tutorial at the start of using such a gestural interface would probably take care of this problem.

We were not surprised at how easily our users picked up the painting task, since it was fairly obvious from our physical setup. We expect that, when confronted with our real system displaying a virtual object, in particular, one that the user could pass through with their hands, this task would have a very different set of potential problems. However, we do note that the tactile feedback that made the physical painting so easy would be helpful in our final system.

Next Steps

We are ready to build a higher fidelity prototype without further testing.

Our tests confirmed that in order to get truly useful feedback about our application, it is imperative that we have the actual system in place since most of our known difficulties will be related to viewing stereoscopic 3D, the lack of tactile feedback, and the latency/accuracy of gesture recognition. While we will definitely conduct tests after building Version 1 of the system, we do not believe we need to keep testing at the low-fidelity prototype phase. The most helpful part of this phase was affirming that our gestures definitely work, though they may not be optimal. However, to further refine gestures, we need to know how people interact with the real system.

Team CAKE – Lab 3

Connie, Angela, Kiran, Edward
Group #13

Robot Brainstorming

  1. Peristalsis robot: Use servo motors with rubber bands to get it to move with elastic energy somehow
  2. Helicopter robot: spinning rotor to make it fly (like the nano quadrotor except less cool)
  3. Puppet robot: use motor or servos to control puppet strings
  4. Crab style robot: crawling on six legs
  5. Robo-ball: omnidirectional rolling
  6. 3- or 4-wheeled robot: like a car
  7. fixed magnitude offset in motor speed of 2-wheel robot — move in squiggles
  8. Magnet-flinging robot: has a string with a magnet attached to it, uses a motor catapult to throw it forward, latches on to nearest magnet, and then has another motor to reel in the string. rinse and repeat
  9. Flashlight-controlled robot: use photosensors and it only moves if a lot of light is shone on it
  10. Tank robot: use treads around wheels
  11. Hopping robot: uses a servo to wind up a spring and fling itself forward
  12. Inchworm robot: moves like an inchworm
  13. Sidewinder robot: moves like a sidewinder
  14. Hot air balloon: make a fan that blows air past a heated element into a balloon (might be unsafe)
  15. Sculpture: moves linked magnets in constrained area with a magnet on motor (more of an art piece than a robot)

Red light Green light Robot

Our robot is a two-wheeled vehicle made of two paper plates. It plays red light green light with the user: when the user shines a flashlight on the vehicle, it moves forwards. It stops moving when the light stops shining on it.
We made this because it was a simple way to make a robot that was still interactive, rather than just moving arbitrarily on its own. While our electronics worked exactly as planned, it was very difficult to create a chassis that would allow the motor to drive the wheels while being able to support the weight of the battery pack, arduino, and breadboard. In fact, our robot didn’t really work – it just shuddered slightly but didn’t move. This was primarily due to the weight of the components; we’d need a more specialized set of parts like Lego or some structural kit with gears instead of sticks, plates, and tape. It was especially difficult to find a way to connect the smooth motor shaft with the plate (although we did get a very good attachment with just one plate and a motor).

Here is a shot of our robot in action, or to be more accurate, robot inaction.

In this picture you can see the electronics as well as the attachments of the components and dowels to the wheels.

This is the design sketch for the Red light Green light robot

Parts List

  • Arduino Uno with battery pack
  • Breadboard (as small as possible!)
  • Wires
  • Photoresistor
  • PN2222 Transistor
  • 1x DC motor
  • 1x 1N4001 diode
  • 1x 270Ω resistor
  • 1x 3KΩ resistor
  • 2x paper plates
  • Wooden dowel, at least 40cm long
  • Tape
  • Paperclips

Instructions

  1. Attach the photoresistor from 5V to analog pin 5 through a 3KΩ pulldown resistor
  2. Attach the motor as shown in http://learn.adafruit.com/adafruit-arduino-lesson-13-dc-motors/breadboard-layout; use the diode between the two sides of the motor, attaching the middle pin of the transistor to digital port 3 through the 270Ω resistor
  3. Measure the ambient light reading from the photoresistor, and then the flashlight reading, and set the threshold appropriately between the two readings
  4. Punch holes as appropriate in the paper plate wheels (small holes in the center, two larger ones opposite each other).
  5. Unfold a paperclip and wind one half around the spinning axle of the motor. Tape the other half flat on the outside of one wheel.
  6. Break the dowel in half and poke the halves through the larger holes in the wheels, tape them in place.
  7. Securely attach arduino, breadboard, and battery pack in a solid block. Connect the motor appropriately, and make sure the photoresistor faces upward.
  8. Unfold a paperclip and securely tape half onto the arduino/breadboard/batteries contraption. Unfold the other half and poke a straight prong through the paper plate not attached to the motor.

Source Code

/* Red light Green light robot
 * COS436 Lab 3, group 13
 */
int motorPin = 3;
int photoPin = A5;
int motorspeed = 0;
int threshold = 830;
 
void setup() 
{ 
  Serial.begin(9600);
  pinMode(motorPin, OUTPUT);
  pinMode(photoPin, INPUT);
} 
 
 
void loop() 
{
  Serial.println(analogRead(photoPin));
  if (analogRead(photoPin) > threshold) {
    motorspeed = 200;
  }
  else {
    motorspeed = 0;
  }  
  analogWrite(motorPin, motorspeed);
  delay(40);
} 

Assignment 3 – Heuristic Evaluation of SCORE

Junjun, Saswathi, Angela, Edward

1. Most Severe Problems

Login errors

When attempting to log in, users quite frequently get redirected to an error page saying that they are “not authorized to log into PeopleSoft”. This may also happen even when the username and password are correct, and before 2am. This is a very annoying and severe error (we rated it a 3).
This violates the following heuristics:

  • Heuristic 9: help users recognize, diagnose, and recover from errors: “Not authorized” is both unhelpful and seemingly incorrect (students should be authorized to log in), and does not propose any form of solution. The error message should at least be more informative (was it a bad username/password combination? Is the time invalid for the particular user?); it should also not occur when the username/password are correct.
  • Heuristic 7: flexibility and efficiency of use: The user must then go back to the original login page after receiving this error; it would be much more efficient to display error messages on the original login page, so that the user can easily try logging in again.

Class Registration

To find a specific course to enroll in, users must search by course number, and this information is not easily available through SCORE.
This violates the following heuristic:

  • Heuristic 6: recognition rather than recall: the course numbers must be looked up, as no one remembers them. Users should be able to search courses by keywords and words in the course titles, or have SCORE integrated with ICE or even the Registrar’s page.

2. Problems Recognized Because of Nielsen’s Heuristics

We noticed a problem with consistency in the navigation interfaces – tabs, drop-down menus, and radio buttons are all used at different times for navigation. We only noticed this as a distinct problem by learning about Nielsen’s heuristics. Thus, H4 was a useful heuristic.

Also, we did not directly notice the lack of help and documentation as a problem. Since we all have used SCORE a lot, we already know what to click and what all the cryptic things mean. However, we realized (and remembered) how little sense it made for a first-time user after reading over the list of heuristics. Thus, H10 was a useful heuristic.

3. Non-Nielsen Usability Problems

Since the heuristics seem very broad, most of our problems fit into them (and several problems seem to fit more than one heuristic).

One problem that didn’t quite fit was the login error (“You are not authorized by PeopleSoft”) described above. While the error message isn’t very useful, the main problem seems to be an error of functionality. The user should be able to log in, but they cannot.

We might add a heuristic called “consistent functionality”: The system should behave predictably and consistently.

4. Useful Class Discussion Questions

  1. Are any of the heuristics inherently more severe than the others? Explain why.
  2. Is there a line between usability flaws and functional flaws? How terrible can a UI be before the application is no longer functional?
  3. Here is a list of several usability problems. What heuristic violation would you categorize them as, and why?
    • automatic logout without warning
    • no file previews available
    • faulty spell check
    • etc.
  4. Give examples of severity level 4 violations for each heuristic.

5. Solo Heuristic Evaluation Links

Junjun: https://dl.dropbox.com/u/49280262/JunjunChenA2.pdf
Saswathi: https://docs.google.com/a/princeton.edu/file/d/0By483E15Y63_cGpDQXVFanAzUE0/edit?usp=sharing
Angela: https://docs.google.com/file/d/0B0fj2iAnOQwcOGZGSnk4dFJJUlU/edit?usp=sharing
Edward: https://docs.google.com/document/d/11uVTSsP-xRlUDFN6l3am83bPBsQVlEhQ3oJNN0eUoP8/edit?usp=sharing

Lab 2 – Team Cake

Members: Connie, Angie, Kiran, Edward
Lab Group 13

We built two sets of hardware interfaces: our final instrument was a wind instrument that played chords when you blew into “pipes” (straws), and our alternate interface was a “force-sensitive softpot” that combined the two sensors into a single control element. In our final instrument, we blew on a thermistor to change its temperature and combined three of these to create a wind-like instrument. We liked the weirdness of using heat to play music, as well as the quirky design: we used straws as the blowholes on our Arduino. The simple but exotic interface also lent itself well to our eerie synthesis mappings. However, it would have been better if we could have had a larger array of thermistors. Then, instead of mapping each thermistor change to a scale of notes, we could have done more complicated things and given the user more control. All in all, we were very satisfied with the sounds created by our final instrument – the ambiance that it gave off was really cool, and we thought it went really well together with our wind instrument interface.

Force-sensitive Softpot Instrument

For our first two instruments, we used the same hardware interface. We wanted to design an instrument that deeply combined two sensors to create a different experience than we made in previous labs, where all the sensors were separate in the interface. Thus, we decided to make a Force-sensitive Softpot, that sensed not only where on a slider you were touching but also how hard. Combining the two dimensions of input like this made controlling two parameters into one action, allowing for more artistic effects.

We created two different mappings for our FSS instrument. The first was very simple, where position along the softpot mapped directly to frequency and the force on the FSR mapped to volume. With our choice of waveform (a sine wave with slight reverb), this created an eerie-sounding howling effect. We found after some experimentation that the softpot required a certain minimum level of force to give stable readings, so we adapted the FSR mapping to be thresholded instead (only mapping readings above the threshold to volume).

Our second mapping was a little bit more radical. We mapped force on the FSR to timbral parameters used in waveshaping synthesis, to make a sine tone sound more “sharp” or “buzzy”. We also changed the softpot mapping to a discrete frequency mapping, so that tones were constrained to the 12 notes of the western chromatic scale. The timbral effects were especially cool – we could make sounds that almost sounded like speech, since the different levels of timbre sounded almost like different vowels, and changing between them sounded like “Wa” noises.

Final Instrument: Wind instrument


This “wind instrument” used the user’s breath to warm up thermistors placed at the bottom of the “pipes” (straws). Each pipe is responsible for a different chord and a different tempo. For example, in our demonstration we have a fast Fm6, a fast Cm9, and a slower, lower Cm9. The actual pitch classes played are randomly selected from the chord. The temperature reading controls the octave in which the chord is played: as you blow into the pipe, the chord gets higher, and as you stop blowing, the chord gradually falls down to a lower range as the thermistor cools off. This causes a fancy decaying effect, almost like the fading resonance of a struck bell.

Parts List

  • 3x Thermistor
  • 3x 330K Resistors
  • 3x Straws
  • Arduino Uno
  • Wires

We also require a computer installed with Processing and ChucK (and Arduino, of course).

Instructions

  1. Connect one thermistor to 5V and to A0. Connect A0 to ground through a 330K pull down resistor.
  2. Cut a straw so it is about 5cm in length. Place it on top of the thermistor
  3. Repeat steps 1 and 2 twice more for your two remaining thermistors, connecting them to A1 and A2.
  4. Loosely tape the straws together.
  5. Take some readings of the thermistor with the straws on, but without blowing into them. Set the maximum such value as the baseline in the ChucK code.
  6. To play the instrument, blow into the straws. The raised temperature of your breath should warm up the thermistors, which will gradually cool. Take off the straws if you want it to cool down faster.

Source Code

Arduino Code

/*
 Temperature Blow Straw Music Instrument
 Lab 2
 COS 436: Human-Computer Interfaces
 March 6, 2013
 Arduino Code
 */

//variables 
int temperature1; //temp sensor 1
int temperature2; //temp sensor 2 
int temperature3; //temp sensor 3

//pins
int pinT1 = A0; 
int pinT2 = A1; 
int pinT3 = A2; 

void setup()
{
  // initialize the serial communication:
  Serial.begin(9600);
}

void loop() {
  temperature1 = analogRead(pinT1);
  temperature2 = analogRead(pinT2);
  temperature3 = analogRead(pinT3);
  
  //Serial.print("Temperature 1: ");
  Serial.print(temperature1);
  Serial.print(" ");
  Serial.print(temperature2);
  Serial.print(" ");
  Serial.println(temperature3);
  
  delay(100);
}

Processing Code

/* Generic numerical message forwarding from Arduino to OSC */
import java.util.*;
import oscP5.*;
import netP5.*;
import processing.serial.*;

Serial myPort;

OscP5 oscP5;
NetAddress dest;
int port = 6448;
String oscName = "/arduinoData";

String data;

void setup() {
  size( 400, 300 );
  myPort = new Serial(this, Serial.list()[0], 9600);
  myPort.bufferUntil('\n');
  oscP5 = new OscP5(this,12000);
  dest = new NetAddress("127.0.0.1",port);
}

void serialEvent(Serial myPort) {
  String instr = myPort.readStringUntil('\n');
  data = instr;
  sendOsc();
}

void draw() {
 ;
}

void sendOsc() {
  OscMessage msg = new OscMessage(oscName);
  String[] tokens = data.split("\\s");
  for (int i = 0; i < tokens.length; i++) {
    if (tokens[i].length() > 0) {
      msg.add(Float.parseFloat(tokens[i]));  
    }
  }
  oscP5.send(msg, dest);
}

ChucK Code

// HCI Lab 2 - Wind Instrument 
// ChucK Synthesis Script
// Generates random notes in chords, with the octave based on the readings from the arduino.
JCRev r => Gain g => dac;
.1 => r.mix;
.7 => r.gain; 
.5 => g.gain;

// ---------------- Globals ---------------
32 => float baseline;   // Ambient temperature readings for thermistors

1 @=> float octave[];  // Current octave
1 @=> float in[];      // Current reading
// ----------------------------------------

// ---------------- Chords ----------------
[0, 2, 4, 7] @=> int chord1[];    // I9
[5, 7, 9, 12] @=> int chord2[];   // IV
[7, 11, 14, 17] @=> int chord3[]; // V7

[0, 2, 3, 7] @=> int chord4[];    // i9
[5, 8, 12, 14] @=> int chord5[];  // iv6
[7, 11, 14, 17] @=> int chord6[]; // V7
// ----------------------------------------


// Function that plays notes in appropriate chord
fun void playstuff(int chord[], int index, int tonic, int speed) {
    SinOsc s => r;
    .2 => s.gain;
    3 => int x;
    1 => int off;
    while (true) {
        if (in[index] >= baseline) {
            Math.round((in[index] - baseline)/2) => octave[index];
            0.5 => s.gain;
            if (Math.random2(0,1) == 0) {
                1 => off;
            } else {
                3 => off;
            }
            Std.mtof(tonic + chord[(x + off)%4] + 12*octave[index]) => s.freq;
        } else {
            0 => s.gain;
        }
        x + 3 => x;
        if (x > 3) x - 4 => x;
        speed::ms => now;
    }
}

// Initialize one thread for each thermistor
spork ~ playstuff(chord4, 0, 60, 150);
spork ~ playstuff(chord5, 1, 60, 150);
//spork ~ playstuff(chord6, 2, 60, 150);
spork ~ playstuff(chord4, 2, 48, 600);


// --------------------- OSC Processing -------------------------
OscRecv recv;
6448 => recv.port;

recv.listen();

recv.event("/arduinoData, f f f") @=> OscEvent @ oe;

while(true) {
    oe => now;
    while(oe.nextMsg()) {       
        oe.getFloat() => in[0];
        oe.getFloat() => in[1];
        oe.getFloat() => in[2];
        <<< in[0], in[1], in[2] >>>;  // debug print of the incoming sensor values
    }
}


A2 – Edward Zhang

1. Observations

I had the opportunity to observe people before classes of varying sizes – before a small seminar (COS598), a medium-sized class (COS435), a large lecture (COS436), and a huge lecture (MUS103). All of my observations were in the classrooms or directly outside of them before classes.

1.1 General Observations

  • In general, students are either having conversations with other students or are doing something on their laptop/phone.
  • Before the seminar class, most students are chatting with each other. Topics of conversation generally revolve around the papers to be discussed in that lecture. Essentially the entire class knows each other, since all of them are members of the graphics lab, but in previous seminar classes people have generally gained the same familiarity and could strike up casual or academic conversations with most others.
  • In the medium-sized class, very few people are talking with others. Most are on their laptops (the desk space and convenient outlets in the Friend classrooms make this very convenient).
  • In HCI, many people come to class with their friend groups and often chat with them on the way in and after sitting down. It appears that the popularity of the class means that most people are friends with some number of others in COS436.
  • In the huge class, it appears to be difficult to chat because of the acoustics of the room – people seem to stick in groups of two or three instead. There appear to be a lot more untalkative “singles” compared to HCI; many have their phones out while others have paper and pen out for notes (I assume these are mostly studious freshmen…). There are fewer laptops than I expect in a class of that size, perhaps due to fear of the professor.

1.2 Individual Observations

  1. COS435 (medium-sized class): I observed an individual who I was not acquainted with. He entered the classroom and went to the same seat that he usually sat in (IIRC), conveniently in front of me. After opening up his laptop, he made sure to plug it into an outlet (this reminded me of the tiny desks and lack of power in lecture halls – the small auditorium and the Friend classrooms are very convenient in comparison). This individual checked their email (apparently Princeton Gmail), glanced at Facebook, and then opened up what appeared to be code for an assignment. I also noticed his neighbor take out a physical mouse and start playing Starcraft right after the professor arrived; his game continued into the lecture (?!).
  2. MUS103 (Huge class in McCosh): A girl carrying a salad in a plastic container (from Frist?) came in (alone) and sat down in front of where I was sitting (and left one seat between her and the next person over; apparently the movie theater law of “at least one seat between separate parties” holds in class too). She checked her phone while eating the salad; I couldn’t tell what she was doing, although based on her hands I assume she sent at least one text. Right before the lecture started, she got up and retrieved one of the handouts at the front of the classroom (did she forget it or just have her hands full?)
  3. COS598 (seminar-sized): Although most interactions before this seminar were as described above, there was one interesting case. I observed a female grad student who was new to the department who took the time before class to find someone not already engaged in a conversation and introduce herself (I was one of the ones she accosted). She asked whether I was a graduate student, what I thought of the class so far, and we discussed some of the technicalities of the readings. Through our short conversation she mentioned that she was a grad student who had just switched from ELE to the COS PIXL group and wanted to get to know people. I then watched her go off to introduce herself to another graduate student (she did this twice more the next class, but after that she stopped – I assume she had met everyone by then). This special effort to get to know everyone personally made a huge impact on me, as I had never encountered someone who explicitly went to every person in the class for introductions. However, I think that since it was a seminar class, she probably would have gotten to know everyone eventually. I imagine that her main goal was to meet everyone in the research group rather than the class, but it still inspired my many ideas involving giving people contexts to meet others in a more practical way for a larger class.

Brainstorming

Shared brainstorming with Connie Wan (cwan)

  1. Traffic Light Crossing Planning Aid – Tracks status of the crossings at Washington and in front of Forbes.
  2. Say hi to the camera – Like security cameras at store entrances, let people wave at them to amuse themselves.
  3. Paperwork area – Get handouts, sign in, vote on class polls, etc. in one location
  4. Music Areas – Let people plug in their computers/iPods into communal (directional?) speakers
  5. Phone game XL – Put large screens somewhere in the classroom to make phone games a social activity.
  6. Phone silencer – Deactivate/silence phones during class automatically
  7. Bike rental – Have stations around campus with bikes that you can sign out to get to classes faster.
  8. Make-a-friend phone game – Adapt social mobile games to look for people in the same classroom.
  9. Classwide opt-in games – Classwide trivia game that everyone in the class can join into (like the Delta in-flight trivia game).
  10. Crowd Traffic Analysis – Tell people which seats/entrances are crowded so they can choose which entrance to go into.
  11. Food Smell Diffuser/Eliminator – Show off your delicious food by wafting the smell over the entire classroom. Or, eliminate the smell of obnoxiously pungent food.
  12. Informal Discussion Organizer – Share your informal conversation topics so that people can find you and join in
  13. Student(s) of the day – Introduce the entire class to each other by randomly selecting people to record and play a short 20 second video of themselves.
  14. Whiteboard/Graffiti area – Put whiteboards on some of (all) the walls for graffiti, psets, etc.
  15. Electronic device charging – map all the locations of the nearest outlets in the classroom and whether they’re in use or not
  16. Announcement/spam board – Post event announcements, cool links, lost & founds, restricted to the ten minutes before class
  17. Fun floor area – Make interactive floor tiles or simple wall touch gadgets for amusement on the way to class (inspired by Disneyworld lineup areas)
  18. Cooperative puzzle – Have the entire class work together on some multipart challenge (e.g. sporcle).

Paper Prototypes

My first choice for paper prototyping is the Informal Discussion Organizer (#12). I think it would be the most concretely useful, especially in classes like HCI where informal, creative discussion can introduce you to ideas, viewpoints, and systems that you might not have heard of otherwise.

Home screen for the “Conversation Finder”, aka Informal Discussion Organizer. Major features:

  • The current class and classroom (detected via your schedule OR through which access point you’re accessing wifi from)
  • A list of current conversation topics
  • A button allowing you to add a discussion/topic

When you click on one of the topics, it lists where the discussion is taking place and who started the conversation (and optionally who else is taking part).


When the “Add Topic” button is clicked on the main screen, you get brought to a simple form that lets you enter a subject, choose from a list of locations, and optionally indicate who else is discussing (your name would be auto-filled upon posting). You can post it by clicking the green button, or cancel by clicking the red one.

My second choice for paper prototyping is the Student of the Day. Every Princeton student is a fascinating person, and I think giving people the opportunity to get to know people in their classes is valuable both for social and academic reasons.

The Student of the Day (currently a generically named individual) displays the name and a short (under 30 seconds) video clip introducing themselves with whatever they can fit in the allotted time. Their netid is also displayed in case you want to contact them later. Underneath, a history of previous videos can be viewed based on date. (Names are random generic names)

At some point during the course, the app will select you as the Student of the Day. This message will show up prompting you to record your short clip.

Clicking the button will bring you to a camera recording interface, with a timer underneath showing you how long your video is.

User Testing

  1. Teodor Georgiev: This participant, after looking over the home screen of the app, decided to browse the current topics and then to “run off and join the conversation”. This was great because he did not spend long on the app and got the information he needed to go off and start talking. When I had an accomplice start a conversation with him (and hint that it might be of interest to other people), he tried out the Add Topic page. Unfortunately, he felt that the process was a little tedious. Furthermore, he said afterwards that he probably wouldn’t have thought to open the page without the obvious prompting, and that even if he had, it would have been a little inconvenient to think up and type in the topic.

    This user spent a bit of time looking at and analyzing every part of the interface, but after examining everything and getting “back in character” he decided to “go find one of the conversations” right after viewing it.

  2. Brenda Hiller: Like the first participant, this user was able to use the main interface to find a conversation of interest very rapidly. In contrast to the first participant, she was very excited to add her own topic and was very interested in advertising her conversation to the class. She did not consider the clunkiness of the Add Topic workflow a significant downside, although when questioned further she did admit that a more streamlined process would help a lot.

    This user checked a topic of interest and went off to find the conversation right after the first image was taken. The second image shows a conversation that an accomplice struck up with the user; the user added the conversation to the app with little prompting.

  3. David Bieber: This user tried to click everything on the app, which caused a lot of confusion when things did not work (either because they were not implemented in the prototype or because they were not intended to be interactive). However, after exhausting the possible click locations, the user decided to go and join a conversation. Upon looking at the options for adding a topic, he was confused by the location options because he had misunderstood the purpose of the app: he thought it was a general conversation finder, rather than one scoped to a single class at a specific time. Even after an explanation, he gave up, since he was in a rush.

    In the left image, the user is vigorously swiping and pressing, and is disappointed when only a few of the click locations actually do something. In the right image, he is confronted with a list of location options that he did not understand.

I think this study may be a little skewed, since it was obvious from the app that I intended users to participate in conversations. Still, at least one of the users tested probably would have just stayed on their laptop alone rather than actively seek out conversations.

Insights

Having a list of ongoing conversations in a handy place is very valuable to people – my participants rapidly pulled out the app, found what was going on, and went off to talk. Like Google, we want people to spend as little time in our app as possible, and it was successful in this regard. However, it is clear that getting people to populate the list of conversation topics will be very difficult, especially given the already short wait before class. This app would have to make the topic creation workflow nearly instant, or even automatic (picking up group conversations, extracting a topic, and posting it automatically). One interesting thing I noted was that people only opened the conversation info for topics they were interested in – they would click a topic of interest, determine where it was, and go find the people. Nobody opened the info for a conversation they were not interested in. This suggests that topic descriptions must be chosen extra carefully, since, if the app held a larger list of topics, people would want to filter conversations more aggressively.

Ambient Etch-a-Sketch

Connie Wan (cwan), Angela Dai (adai), Kiran Vodrahalli (knv), Edward Zhang (edwardz)

Group 13 (aka CAKE)

We built a pseudo Etch-A-Sketch emulator that modifies its appearance based on the user’s environment – the temperature and light conditions. Instead of only drawing horizontal and vertical lines, we can draw curved lines by changing the slope of the current line to be drawn. We use two potentiometers to serve as the knobs of this “Curve-A-Sketch” and use Processing to display the user’s drawing. The ambient temperature controls the color of the drawing, while the light level controls the width of the stroke. This project was a lot of fun to play with because of the nostalgia of playing with a childhood toy, enhanced with the novel control of changing its color and stroke width – for example, we could change colors simply by touching the temperature sensor, and change the width by moving our head over the photoresistor. We also wanted to blow on the temperature sensor to change the color, but that didn’t quite work out because of some bugs with the clear button. The clear button only worked sometimes, because the circuit was a bit buggy: quick changes in resistance garbled the data we sent from the Arduino to Processing.
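
One likely culprit (our guess, not something we verified) is the serial format: the Arduino streams five bare values per frame, one per line, so a single dropped or duplicated line shifts every later field on the Processing side. A minimal sketch of a more robust alternative – one comma-separated line per frame, using the same pin assignments as the code below – might look like this:

// Hypothetical alternative serial format (not the version we ran):
// send one comma-separated line per frame so that a glitch only
// corrupts a single frame instead of desynchronizing the stream.
void setup() {
  Serial.begin(9600);
  pinMode(7, INPUT);                 // clear button
}

void loop() {
  int x           = analogRead(A1);  // left knob
  int y           = analogRead(A0);  // right knob
  int light       = analogRead(A3);  // photoresistor
  int temperature = analogRead(A2);  // thermistor
  int clearButton = digitalRead(7);  // clear button

  Serial.print(x);           Serial.print(',');
  Serial.print(y);           Serial.print(',');
  Serial.print(light);       Serial.print(',');
  Serial.print(temperature); Serial.print(',');
  Serial.println(clearButton);
  delay(100);
}

On the Processing side, a frame would then only be drawn when split(trim(line), ',') returns exactly five fields.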

Sketches

Ambient Etch-A-Sketch: Remember the Etch-a-Sketch? This lets you do everything the Etch-a-Sketch did, and more! The features of your cursor, such as color and pen width, change based on your ambient environment conditions (temperature and light).

 

The upper half of this image shows an LED light array that lights up in concert with the location of your finger. With an added control for color, you can create a mini light show using just your hands.
The lower half shows our Smart Nightlight. As its basic function, the tricolor LED will be brighter if the room is darker (we put a screen between the LED and the photoresistor to eliminate feedback). The user can change the color and the overall brightness of the nightlight.
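
To make the nightlight behaviour concrete, here is a minimal sketch of the core loop (the pin choices are assumptions for illustration, not taken from our drawing):

// Hypothetical Smart Nightlight sketch: the darker the room, the
// brighter the LED. Pin choices here are illustrative assumptions.
const int lightPin = A0;  // photoresistor voltage divider
const int ledPin   = 9;   // PWM pin driving one channel of the tricolor LED

void setup() {
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int ambient = analogRead(lightPin);              // 0 (dark) .. 1023 (bright)
  int brightness = map(ambient, 0, 1023, 255, 0);  // invert: darker room -> brighter LED
  analogWrite(ledPin, brightness);
  delay(50);
}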

 

Storyboard

Ambient Etch-A-Sketch Storyboard

Final System


This video demonstrates the features of our Ambient Etch-a-Sketch. It shows the user controlling the color and the pen width with the thermistor and the photoresistor, respectively.

List of parts

  • 2x Potentiometer (rotary)
  • 1x Arduino Uno
  • 1x Push Button
  • 1x Photoresistor
  • 1x Thermistor
  • 3x 10KΩ Resistor
  • 1x Breadboard
  • Wires
  • Computer

Instructions

  1. Plug the two potentiometers into opposite sides of the breadboard. Fix an orientation – the one on the left controls the horizontal coordinate, and the one on the right controls the vertical coordinate. Attach the middle pin of the right potentiometer to A0 and the middle pin of the left potentiometer to A1. Attach the side pins of both potentiometers to 5V and ground.
  2. Connect one side of the thermistor to 5V, and the other side to A2 and ground through a 10KΩ pull-down resistor.
  3. Connect one side of the photoresistor to 5V, and the other side to A3 and ground through a 10KΩ pull-down resistor.
  4. Connect one side of the push button to 5V, and the other side to digital pin 7 and to ground through a 10KΩ pull-down resistor.
  5. Run the Arduino code, and then start the Processing application.
  6. Draw! The extreme settings of the knobs correspond to the sides of the Processing window. You can adjust the color by changing the temperature (the easiest way is to press the thermistor between your fingers), and change the stroke width by shadowing the photoresistor (e.g. leaning over so your head blocks some light). A summary of the pin assignments assumed by the code follows this list.
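
For reference, here is a summary of the pin assignments assumed by the Arduino sketch in the next section, written as constants (the constant names are ours; the original code uses plain variables):

// Pin assignments assumed by the Arduino code below (constant names are ours).
const int PIN_Y     = A0;  // right potentiometer wiper (vertical coordinate)
const int PIN_X     = A1;  // left potentiometer wiper (horizontal coordinate)
const int PIN_TEMP  = A2;  // thermistor divider (controls stroke color)
const int PIN_LIGHT = A3;  // photoresistor divider (controls stroke width)
const int PIN_CLEAR = 7;   // clear push button (active high with 10KΩ pull-down)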

Source Code

Arduino Code

/*
 Ambient Etch-a-Sketch!
 Lab 1
 COS 436: Human-Computer Interfaces
 February 20, 2013
 
 */

//variables 
int y; //y coordinate of etch-a-sketch
int x; //x coordinate of etch-a-sketch
int temperature; //temperature of the room 
int light; //light ambience of the room 
int clearButton; //clear Button 

//pins
int pinY = 0; 
int pinX = 1; 
int pinTemperature = A2; 
int pinLight = 3; 
int pinClearButton = 7;

void setup()
{
  // initialize the serial communication:
  Serial.begin(9600);
  
  // initialize the clear button as an input 
  pinMode(pinClearButton, INPUT);
  
}

void loop() {
  // read data on every loop 
  x = analogRead(pinX);
  y = analogRead(pinY);
  light = analogRead(pinLight);
  temperature = analogRead(pinTemperature);
  clearButton = digitalRead(pinClearButton);
  
  //test it works
  //Serial.println("Scaled x: \n");
  if((x < 1024) && (y < 1024) && (light < 1024) && (temperature < 1024) && (clearButton <= 1)) {
    Serial.println(x); 
    Serial.println(y);  
    Serial.println(light); 
    Serial.println(temperature);
    Serial.println(clearButton);
  }
  delay(100);
}

Processing Code

//Lab 1: Resistance 
//Connie Wan, Angela Dai, Kiran Vodrahalli, Edward Zhang 
//Feb 27, 2013

import processing.serial.*; 

Serial myPort;
float dx; //change from controls
float dy; //change from controls
float old_dx; //check prev
float old_dy; //check prev
float x; //current position 
float y; //current position
float light;  //current lighting
float temperature; //current temperature 
int clearButton; //clearButton 
String input; //get data from Arduino 

static final int X_WIDTH = 400;
static final int Y_HEIGHT = 400;
static final int UNIT_DRAW = 20; 

void setup() {
  //init
  myPort = new Serial(this, Serial.list()[4], 9600);
  println(Serial.list());
  x = 200;
  y = 200;
  size(X_WIDTH, Y_HEIGHT);
  colorMode(HSB, 255, 255, 255);
  background(0, 0, 255);
  stroke(0); 
  myPort.clear();
  if(myPort.available() > 0) {
    input = myPort.readStringUntil('\n');
    if(input != null){
      String xval = trim(input);
      old_dx = Integer.parseInt(xval);
      println(dx);
    }
    
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String yval = trim(input);
      old_dy = Integer.parseInt(yval);
      println(dy);
    }
   
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String lval = trim(input);
      light = Integer.parseInt(lval);
      println(light);
    }
  
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String tval = trim(input);
      temperature = Integer.parseInt(tval);
      println(temperature);
    }
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String cval = trim(input);
      clearButton = Integer.parseInt(cval);
      println(clearButton);
    }
  }
  myPort.clear();
}

void draw() {
  
 while(myPort.available() > 0) {
    input = myPort.readStringUntil('\n');
    if(input != null){
      String xval = trim(input);
      dx = Integer.parseInt(xval);
      //println(dx);
    }
    else {
      return;
    }
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String yval = trim(input);
      dy = Integer.parseInt(yval);
      //println(dy);
    }
    else {
      return;
    }
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String lval = trim(input);
      light = Integer.parseInt(lval);
      //println(light);
    }
    else {
      return;
    }
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String tval = trim(input);
      temperature = Integer.parseInt(tval);
      //println(temperature);
    }
    else {
      return;
    }
    input = myPort.readStringUntil('\n');
    if(input != null) {
      String cval = trim(input);
      clearButton = Integer.parseInt(cval);
      println(clearButton);
    }
    else {
      return;
    }
    myPort.clear();

    if(clearButton == 1) {
      background(0, 0, 255);
    }
    
    //scaling
    
    
    dx = UNIT_DRAW*((dx/1023.0) -0.5);
    dy = UNIT_DRAW*((dy/1023.0) - 0.5);
    
    light = light/1023.0;
    temperature = (temperature - 500) *(255.0/100.0);
    if(temperature > 255) {
        temperature = 255;
    }
    
    println(temperature);
    //change color
    stroke(temperature, 255, 255);
    //change thickness
    strokeWeight(10*light);
      if(x >= X_WIDTH) {
          x = X_WIDTH;
      }
      if(y >= Y_HEIGHT) {
          y = Y_HEIGHT;
      }
      print("x: " + x + "\n");
      print("y: " + y + "\n");
      
      // ignore tiny jitter: if a knob reading changed by less than 0.01,
      // treat it as unchanged
      if((dx < old_dx + .01) && (dx > old_dx - .01)){
        dx = old_dx;
      }
      if((dy < old_dy + .01) && (dy > old_dy - .01)){
        dy = old_dy;
      }
      
      if((dx != old_dx) || (dy != old_dy)){
        line(x, y, x + dx, y + dy);
        x = x + dx;
        y = y + dy;
     
        old_dx = dx;
        old_dy = dy;
      } 
  } 
}

Color Wheel

Colleen Caroll, Angela Dai, Connie Wan, Edward Zhang

The Color Wheel is a game that challenges players to create a specified color using LEDs and colored filters. It is a fun and easy way for children to learn about the primary colors and the secondary colors that they create! The system is great because it is simple to use and understand. A color appears on the computer screen (e.g. “purple”), one colored LED lights up on the board (e.g. red lights up), and then the user only has to turn the color wheel so that the color on the wheel mixes with the light to create the desired color (e.g. the player turns the wheel to blue, thus making purple). It was, however, difficult to find a medium that would diffuse the LED properly so that the color of the light and the filter would mix into the desired color. The LED is so focused that it was a challenge to get it to blend well with a material. We tried saran wrap, copy paper, toilet paper, and finally thin sketch paper with watercolor to increase the translucence slightly. We put a lot of effort into finding the most effective and aesthetically pleasing diffuser, as it is an integral part of our game. Our game works well (we have played several successful games), but it would be even better with a more diffuse light (or another filter) that blended better.

Diagrams:

This schematic shows the rotating diffusing filter (mounted on a potentiometer) atop the LEDs, with the “Submit” push button on the side.

Images of Final System:

The electronics. Turn the paper filter (which is mounted on a potentiometer) so that the color produced by blending the filter with the lit LED matches the prompted color. Push the button to submit your guess.

For a prompted color of purple and a blue LED light, we turn the paper filter to red so that a purple color is seen.

 

We used Processing to display the prompt color to be created (in this case green) using the lit LED and the filter paper.

Parts List:

Electronics:

  • Red, Yellow, and Blue LED (or a tricolor LED + yellow LED)
  • 3x 330Ω Resistors
  • 1x 10kΩ Resistor
  • Linear Potentiometer
  • 12mm button
  • Breadboard
  • Arduino Uno

Other:

  • Straw
  • Paper
  • Red, yellow, and blue markers
  • Duct Tape

Instructions:

  1. Connect each of the LEDs to ground through a 330Ω resistor. Connect the positive side of the red LED to digital port 9, the yellow LED to digital port 10, and the blue LED to digital port 11.
  2. Connect the middle pin of the potentiometer to A0 (analog input 0) on the Arduino, and the outer pins to ground and 5V power.
  3. Connect one pin of the button to 5V power, another to digital pin 2, and another to ground through a 10kΩ resistor.
  4. Tape the end of the straw to the potentiometer so that the straw sticks straight out of the knob. The tape should hold strongly enough that turning the straw turns the knob of the potentiometer.
  5. Cut the other end of the straw lengthwise such that it splits into four strips. Flatten out the strips so they are perpendicular to the rest of the straw.
  6. Using the markers, color a square of paper (approx. 10cm x 10cm) so that approximately a third is red, a third yellow, and a third blue. The paper should be translucent enough that, when an LED shines through a colored region, the light and the filter blend into the secondary colors (green, purple, and orange).
  7. Attach this paper to the straw strips using tape. The potentiometer-straw diffuser should be positioned such that the LEDs sit underneath the paper to one side, and turning the potentiometer places each of the differently colored paper regions over the LEDs (a worked example of how the knob position maps to the three regions follows this list).
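
As a sanity check on that layout, the Arduino code below converts the potentiometer reading into one of the three filter regions by rounding val / 1023 * 3 and wrapping 3 back around to 0, so both ends of the knob count as the red region. A small standalone sketch of that arithmetic (the sample readings are made up for illustration):

// Illustration of the pot-to-filter mapping used in the Arduino code below.
// Sample readings are invented for illustration; 0 = red, 1 = yellow, 2 = blue.
void setup() {
  Serial.begin(9600);
  int samples[] = {0, 341, 682, 1023};
  for (int i = 0; i < 4; i++) {
    int filterindex = (int) (samples[i] / 1023.0 * 3 + 0.5);
    if (filterindex == 3) filterindex = 0;   // wrap: both ends of the pot are red
    Serial.print(samples[i]);
    Serial.print(" -> region ");
    Serial.println(filterindex);             // prints 0, 1, 2, 0
  }
}

void loop() {}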

Source Code:
Arduino Code

/**
* arduino
**/

/*
 Radiate, L0
 */

int val = 0;       // variable to store the value coming from the sensor
int leds[] = {9, 10, 11}; // LED pins
int NUM_LEDS = 3;           // number of LEDs
const int buttonPin = 2;
int buttonState = 0;

enum COLOR {
  RED,
  YELLOW,
  BLUE,
  ORANGE,
  GREEN,
  PURPLE,
  NUM_PROMPTS
};
const int NUM_QUESTIONS = 9;
int question_leds[] =    {RED, RED,    RED,    YELLOW, YELLOW, YELLOW, BLUE, BLUE,   BLUE};
int question_prompts[] = {RED, ORANGE, PURPLE, YELLOW, ORANGE, GREEN,  BLUE, PURPLE, GREEN};
int solutions[] =        {RED, YELLOW, BLUE,   YELLOW, RED,    BLUE,   BLUE, RED,    YELLOW};

int index = 0; // index of question
int filterindex = 0;

void setup_question()
{
  // turn on/off corresponding leds
  for (int i = 0; i < NUM_LEDS; i++) {
    if (i == question_leds[index]) {
      digitalWrite(leds[i], HIGH);
    }
    else
      digitalWrite(leds[i], LOW);
  }
  Serial.println(question_prompts[index]);
}

void setup()
{
  // initialize the serial communication:
  Serial.begin(9600);

  // initialize pushbutton as input
  pinMode(buttonPin, INPUT);

  // initialize led outputs:
  for (int i = 0; i < NUM_LEDS; i++)
    pinMode(leds[i], OUTPUT);

  // pick question
  index = random(NUM_QUESTIONS);
  setup_question();
}

void loop() {
  // read state of pushbutton
  buttonState = digitalRead(buttonPin);

  // if pressed check solution
  if (buttonState == HIGH) {
    val = analogRead(A0);    // read value from pot
    delay(10);
    filterindex = (int) (val / 1023.0 * 3 + 0.5);
    if (filterindex == 3) filterindex = 0; // two red so that pot can be circular
   /* Serial.print(val);
    Serial.print(" ");
    Serial.print(filterindex);
    Serial.print(" ");
    Serial.println(solutions[index]); */
    if (filterindex == solutions[index]) {
      // correct! new question
      index = random(NUM_QUESTIONS);
      setup_question();
    }
  }
}

Processing Code:

/**
processing -- displays colour prompts
**/

 import processing.serial.*;
 Serial port;
 PFont f; // font to display messages
 int xp = 100; // position of text
 int yp = 70;

 void setup() {
   size(256, 150);

   println("Available serial ports:");
   println(Serial.list());

   port = new Serial(this, Serial.list()[4], 9600);  

   // If you know the name of the port used by the Arduino board, you
   // can specify it directly like this.
   //port = new Serial(this, "COM1", 9600);
   background(255, 255, 255);
   f = createFont("Arial", 16, true); // Arial, 16 point, anti-aliasing on
   fill(0);
   //text("BEGIN", xp, yp);
 }

 void draw() {
 }

 void serialEvent(Serial myPort) {
   // get ascii string
   String instring = myPort.readStringUntil('\n');
   if (instring != null) {
     // trim off whitespace
     instring = trim(instring);
     int prompt = int(instring);
     drawbackground(prompt);
   }
 }

 void drawbackground(int prompt) {
   switch (prompt) {
     case 0: 
     background(255, 0, 0);
     text("RED", xp, yp);
     break;
     case 1:
     background(255, 255, 0);
     text("YELLOW", xp, yp);
     break;
     case 2:
     background(0, 0, 255);
     text("BLUE", xp, yp);
     break;
     case 3:
     background(255, 127, 80);
     text("ORANGE", xp, yp);
     break;
     case 4:
     background(0, 255, 0);
     text("GREEN", xp, yp);
     break;
     case 5:
     background(255, 0, 255);
     text("PURPLE", xp, yp);
     break;
     default:
     background(0, 0, 0); // error
     fill(255);
     text("ERROR " + prompt, xp, yp);
     fill(0);
     break;
   }
 }

Other Ideas:

1. Chalk Dust Diffuser – Fill up a room with smoke (or chalk dust if a smoke machine is not readily available) and have multicolored LEDs changing color in sync to music. Awesome ambient lighting for a dance floor. This was a “diffuser-focused” idea.


2. Simon Says – Using a set of buttons and colored LEDs, this game presents the user with sequences of colored lights of increasing length. They have to press the buttons corresponding to the appropriate lights in the correct order to advance. Unfortunately, while interactive, this didn’t really use the idea of a diffuser effectively.

3. Color matching game – This was an interactive game that had a diffuser as an integral part of the system. Create colors by moving the appropriately colored filter above the lit LED.
