Group 20 — Name Redacted
Brian, Ed, Matt, and Josh
Summary: Our group is creating an interactive and fun way for middle school students to learn the fundamentals of computer science without the need for expensive software or hardware.
Introduction: The purpose of our system is to introduce students to the basic concepts and principles of computer science and to teach them in a fun, intuitive, and interactive way. The purpose of this experiment is to gauge aspects of our system such as intuitiveness, ease of use, and difficulty, and to identify changes we could make that would improve the user's overall experience. In particular, we want to see how easy or hard each of the three tasks is and whether or not the tasks are intuitive. Our bigger concern is intuitiveness: every student of computer science will find different topics to be of varying difficulty, but if our system is not intuitive, no student will be able to learn from it.
Implementation and Improvements:
- Since P5, we created a better debugging mechanism and initialization feature, which we have now made into our easy task.
- There are no Wizard-of-Oz techniques in P6, whereas P5 relied on them.
- The TOY Program has a compilation phase and a running phase so that jumps to labels defined later in the program work. The user does not notice the difference between the two phases, which provides a better level of abstraction. (A minimal sketch of this two-pass approach appears after this list.)
- During execution, the TOY Program steps through the program line by line, pausing 1.5 seconds between instructions so that users can watch the registers and other state change.
- The TOY Program displays error messages when either a runtime or a compile-time error occurs.
Link to our P5 post: http://blogs.princeton.edu/humancomputerinterface/2013/04/22/p5-name-redacted/
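As a rough illustration of the two-pass idea, here is a minimal Python sketch. The instruction set, program format, and function names are invented for illustration and are not our actual source:

    import time

    def compile_program(lines):
        """First pass: record where each label lives so that jumps
        to labels defined later in the program still resolve."""
        labels = {}
        for i, line in enumerate(lines):
            if line.endswith(":"):              # e.g. "end:"
                labels[line[:-1]] = i
        return labels

    def run_program(lines, labels):
        """Second pass: execute line by line, pausing 1.5 seconds
        so users can watch the state update after each instruction."""
        registers = {}
        pc = 0
        while pc < len(lines):
            op, *rest = lines[pc].split()
            if op == "jump":                    # jump <label>
                pc = labels[rest[0]]            # label resolved in pass one
                continue
            if op == "load":                    # load <register> <value>
                registers[rest[0]] = int(rest[1])
            pc += 1                             # label lines fall through as no-ops
            time.sleep(1.5)                     # let users see each step
            print(lines[pc - 1], registers)

    program = ["load r1 5", "jump end", "load r1 99", "end:", "load r2 7"]
    run_program(program, compile_program(program))

Because the labels are collected before execution begins, the forward "jump end" works even though "end:" appears later in the program, and the user only ever sees the program run.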
Method:
Participants: Our participants were all Princeton University students with varying degrees of computer science background, ranging from none to two introductory courses. We chose these participants because they did not have much computer science background, and our target audience is people who are trying to learn computer science but do not have much formal training. The participants' ages ranged from 19 to 22. One came from an engineering background, one from a social science background, and one from an anthropology background. All three participants were male, and each had some prior teaching experience. When choosing participants, we wanted students who had also taught, so that we could simultaneously test both groups in our market. With only three participants, we did not have much opportunity to create a diverse group.
Apparatus: Our project uses paper tags with corresponding text labels, a projector, a webcam, and a laptop. In a school, the projector would already be connected to a laptop, so we set that up for the testers. We asked the users to connect the webcam and then follow the initialization instructions so that the program can map the webcam coordinates to the projector coordinates. The paper tags already had tape on the back so that users could easily put them on the wall. We conducted the test in an eating club, which had a space about as large as a classroom with a large open wall to project onto.
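We do not reproduce our calibration code here, but as a sketch of the idea, the initialization can be treated as computing a homography from a few projected reference points to the pixel positions where the webcam sees them. The OpenCV calls below are standard; the specific point coordinates are made-up placeholders:

    import numpy as np
    import cv2

    # Four reference points in projector (screen) coordinates and the
    # pixel positions where the webcam saw them during initialization.
    # These specific coordinates are illustrative placeholders.
    projector_pts = np.float32([[0, 0], [1024, 0], [1024, 768], [0, 768]])
    webcam_pts = np.float32([[112, 80], [530, 95], [520, 400], [105, 390]])

    # Homography mapping webcam coordinates into projector coordinates.
    H, _ = cv2.findHomography(webcam_pts, projector_pts)

    def to_projector(x, y):
        """Map a detected tag position from webcam space to projector space."""
        pt = np.float32([[[x, y]]])
        return cv2.perspectiveTransform(pt, H)[0, 0]

    print(to_projector(300, 240))

Once this mapping exists, every tag the webcam detects can be drawn around or annotated at the correct spot on the projected image, no matter how the camera and projector are angled.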
Tasks:
Program Initialization (EASY): This task is the initialization of the program, which requires users to calibrate the webcam and projector space. It also covers key commands that let users debug their own code by showing them which lines are grouped together as rows. Part of the idea for this task came from expanding on the Tutorial feedback, since the tutorial should really begin when the program is first loaded on the teacher's laptop and be geared towards helping users debug their programs.
Numbers (MEDIUM): In this task, users are introduced to number systems and learn how to convert between them. Users can compose numbers in binary, octal, decimal, or hexadecimal and see them converted in real time into the other number systems. (A sketch of the conversion logic appears after the task list.)
TOY Program (HARD): In the TOY programming environment, users (primarily students) will be free to make and experiment with their own programs. Users create programs by placing instruction tags in the correct position and sequence on the whiteboard. The program can then be run, executing instructions in sequence and providing visual feedback to users to help them understand exactly what each instruction does.
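The conversion logic behind the Numbers task is small; here is a minimal sketch in Python, with the tag-detection layer omitted and the function name invented for illustration:

    def convert_all(digits, base):
        """Interpret the tag digits in the given base and show the same
        value in binary, octal, decimal, and hexadecimal."""
        value = int(digits, base)           # raises ValueError on bad digits
        return {
            "binary": bin(value)[2:],
            "octal": oct(value)[2:],
            "decimal": str(value),
            "hexadecimal": hex(value)[2:].upper(),
        }

    # A user who lays out the tags "1011" under the binary base tag sees:
    print(convert_all("1011", 2))
    # {'binary': '1011', 'octal': '13', 'decimal': '11', 'hexadecimal': 'B'}

The hard part of the task is not this arithmetic but presenting the results on the wall in a way users can follow, which is exactly where our testers reported confusion (see Results and Discussion).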
Task Selection: We chose the “Program Initialization” task because it is the first step in using our system and every user would have to complete this task. Once this step is completed, it makes sense to have the user try out each of the applications we developed for the platform.
Procedure: We made sure each participant consented to the study with the knowledge that images and/or video of them may be published in a blog post on the publicly searchable web. We introduced each participant to the goals of the project as stated in a previous section of this write-up, then gave them a high-level introduction to the system, explaining how the tags are used as well as what the tasks entail. We introduced the participant to each task and asked them to perform it. We then gave the participants a post-test questionnaire, which is in the Appendices section of this write-up.
Test Measures: We asked the participants to fill out a questionnaire rating our program on a scale from 1 to 5 on several aspects: Overall Ease of Use, Overall Intuitiveness, Initialization: Difficulty of Task, Assembly: Difficulty of Task, and Numbers: Difficulty of Task. We also made qualitative observations of how users interacted with the program, noting how much they seemed to struggle with a given task or how easily they completed it.
Documentation:
Photo captions:
- Each tag has a unique visual marker.
- Participant 1 using the "Numbers" program.
- Participant 2 performing the Initialization task.
- The TOY Program.
Results and Discussion:
Measure | Average Rating (out of 5)
Overall Ease of Use | 4.66
Overall Intuitiveness | 4.33
Initialization: Difficulty of Task | 4.66
Assembly: Difficulty of Task | 4.33
Numbers: Difficulty of Task | 4.00
Overall, we felt the tests were quite successful, not only in reinforcing the feedback we had received in previous iterations of the prototype but also in showing us how to further improve our system. To start, the feedback was extremely positive. On our anonymous survey we received an overall average "Ease of Use" rating of 4.66 out of 5 and an overall average "Intuitiveness" rating of 4.33 out of 5. Both ratings were high for users of moderate and of limited CS background alike. This tells us that we were able to provide users with an interesting and intuitive educational experience despite their very limited training in and knowledge of CS concepts.

Verbal feedback was also very positive. One user with no computer science background even remarked on the "theme of rows" running across our different applications. This was really interesting because the horizontal tag row is a fundamental metaphor built into the system, and it is something this user picked up without seeing a line of code. Of the three tasks, it was clear both from the self-reported ratings and from observing the users that the "Numbers" task was the most confusing. Users primarily took issue with where numbers were displayed in relation to the ID tags placed on the board. We will focus more energy on cleaning up that task and making it more visually intuitive and appealing.
Moving beyond the positive feedback, much of the constructive criticism we received centered on the depth of the product. Our users grasped the fundamentals of putting tags on the wall and even began to develop distinct usage patterns: one user, for example, would repeatedly "compile" his assembly code to check for errors, while most wrote the whole program and compiled once at the end. Where they struggled was with the tasks feeling too "abstract." This idea of abstractness has been on our minds since it was suggested during our first contextual inquiry. Users wanted to know "how the initialization step worked," and they wanted more information about the conversion process from binary to hex.
Jumping off from comments like these, we have specific changes in mind to strengthen the system's connection with its users. We are going to change the Numbers application so that we display exactly how each number is generated (e.g., 2^0 + 2^1 + 2^3 = 11). We are going to include more documentation for each of the three tasks. It was a little confusing where to put the base tags for the Numbers application, so we may add rows so that the user knows where to put the hexadecimal, decimal, and binary tags. It was also unclear that the base tags had to be at the far left of each row, so that will be covered in the documentation. Similarly, the testers asked what the arguments for various commands were, and this needs to be cleared up in the program's documentation. All of these specifics boil down to two main themes: we need the UI to make visual suggestions to the user about what to do next (e.g., showing how many arguments print takes with visual boxes), and we need to increase the visual documentation so that users can answer their own questions about what is going on "behind the curtain."
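As a sketch of the expansion display we have in mind, assuming binary input, the string shown for 11 could be generated like this (the function name is ours):

    def power_expansion(bits):
        """Build a '2^i + ...' string showing how a binary number is
        generated, e.g. '1011' -> '2^0 + 2^1 + 2^3 = 11'."""
        terms = [f"2^{i}" for i, b in enumerate(reversed(bits)) if b == "1"]
        return " + ".join(terms) + f" = {int(bits, 2)}"

    print(power_expansion("1011"))  # 2^0 + 2^1 + 2^3 = 11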
We are also working on tag persistence. Users do not realize that when they walk in front of the camera they block its field of view, so we want to make the camera less vulnerable to blocking by having it remember tags for a few seconds (sketched below). We also learned a few things about the space our project needs in order to work. Sunlight caused problems for the camera, since the black-and-white tags were not picked up by the software. We also needed to move the projector closer to the wall, which created a smaller output image, and the code section for the TOY Program was then a little small for fitting three tags on one row.
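One way to get this persistence, as a sketch: timestamp each sighting and treat a tag as present until it has gone unseen for a grace period. The three-second window and the shape of a detection here are assumptions:

    import time

    PERSIST_SECONDS = 3.0   # assumed grace period; tune by experiment

    last_seen = {}          # tag id -> (position, timestamp of last sighting)

    def update(detections):
        """Record every tag the camera sees this frame."""
        now = time.time()
        for tag_id, position in detections:
            last_seen[tag_id] = (position, now)

    def visible_tags():
        """Return tags seen recently, even if momentarily occluded by a user."""
        now = time.time()
        return {tag_id: pos
                for tag_id, (pos, seen_at) in last_seen.items()
                if now - seen_at <= PERSIST_SECONDS}

    update([(7, (120, 45)), (12, (300, 200))])   # frame with two tags
    time.sleep(0.1)                              # user briefly blocks tag 12
    update([(7, (120, 45))])
    print(visible_tags())                        # tag 12 is still reported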
Appendices:
Consent form – Updated from P4:
https://docs.google.com/document/d/17J1J9EMwjDzunhS-5s4-L517EtXTwkqsPd7rr3VfHc
Interface Feedback and Demographic form –
https://docs.google.com/forms/d/1xOa9_gey2mPCII_JeCPdjFmfMKsI_x2r1eScCl1P2Es/viewform
https://docs.google.com/a/princeton.edu/forms/d/1xOa9_gey2mPCII_JeCPdjFmfMKsI_x2r1eScCl1P2Es/edit
The demographics are on the second page of the form.