[Cross-posted from the CACM blog and my own blog]
You have probably come across some viewpoint articles about computational thinking in CACM in recent years (Guzdial, 2008; Wing, 2006). Computational thinking is the seductive notion that ways of solving problems like a computer scientist should be taught to kids early on, like reading, writing and ‘rithmetic. What’s not to like about that idea? No doubt every discipline is fond of its own methods and approaches and feels that everyone in the general population should know about them.
It’s a potentially useful idea for my current project, Making Games in Schools, which is training teachers to work with game-making software in their classes. The idea is that kids learn important computing concepts while making their games, so participation in the project should make learners more positive about CS and improve their understanding of such concepts. But how do you measure this? At the start of the project, I thought: “Great. I’ll just borrow from the material which is amassing in the CS education community about computational thinking and use it as a pre/post test measurement.” It turns out that the community isn’t quite at that stage yet. It seems we’re still discussing what the concept of computational thinking might include in detail, and therefore haven’t got as far as defining an assessment (if I’m wrong about this, I would be delighted if you would let me know!). I still had to find some way of measuring the kids’ knowledge of CS concepts, so here’s what I did.
Our colleagues at Sussex University (http://www.flipproject.org.uk/) have been working on a sister project to ours, and started thinking about how computational thinking and a visual programming language for game making might be related (Howland et al., 2009). Keiron Nicholson had the brainwave that you could perhaps assess some aspects of understanding of CS concepts by asking children to play a specially prepared game and then answer questions about it. Their team developed a pilot game which could be used as a design tool to find out how children expressed programming constructs in natural language. In our project, we needed a test which could be administered to hundreds of children and easily marked, so we developed another version of this game and matching multiple-choice questions. We now have a pre- and post-test of matching difficulty (I hope!) and have piloted it for a couple of iterations with children and teachers. The test is aimed at 11-12 year olds.
Based on the curriculum guidelines in Scotland, and what I have read about computational thinking, the test covers the following (there is a small code sketch after the list illustrating these constructs):
- Identifying rules describing complete sequences of actions in the right order
- Identifying conditions and consequences (IF and ELSE)
- Simple operators such as AND, OR and NOT
- Tracking variable state: reasoning about what events previously in the game have led to the current outcome
- Abstraction/categorisation (see below)
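
To make those constructs concrete for readers with a programming background, here is a tiny Python sketch of the kind of game logic the questions probe. The scenario and all the names are invented for illustration; the actual test presents equivalent situations inside the game world, not as code:

```python
# A toy encounter (all names invented for illustration; this is not the
# project's actual game code, which the children never see anyway).

torch_lit = True
has_key = False

# Sequence: the steps below only make sense in this order.
print("You enter the cave.")
has_key = True                      # variable state changes as events occur
print("You pick up a rusty key.")

# Condition and consequence (IF and ELSE), with a simple AND.
if has_key and torch_lit:
    print("The door creaks open.")
else:
    print("The door stays shut.")

# OR and NOT combined in one rule.
if not has_key or not torch_lit:
    print("Something is still missing...")
```

The questions ask children to reason about exactly this sort of thing in natural language: which events must have happened, in what order, for the door to be open now?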
Below is a screenshot from an example encounter.
The player meets The Guardian of the Well, who asks her to investigate why some of her animals are being poisoned when they drink from the well. A sequence of creatures is seen to come and drink at the well, and each either chokes and dies or continues to gambol around happily. The chickens and imps die, whereas the spiders and the pigs survive. A feature of each creature’s morphology determines its fate, which you can see by watching the video (I am not giving away the answer here!). A number of questions relate to this encounter, an example of which is below.
Essentially, they are intended to assess whether the viewer can identify rules which govern the behaviour of members of different classes and then extrapolate from these rules. It could be taken as a measure of ability to identify inheritance relationships, and to abstract away from the concrete to the more general.
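
In programming terms, the encounter amounts to a class hierarchy governed by one shared rule. Here is a rough Python sketch of that structure; the deciding morphological feature is deliberately left as a placeholder so as not to give away the answer:

```python
# Sketch of the rule structure underlying the well encounter. The real
# morphological trait is replaced by a placeholder to avoid spoilers.

class Creature:
    has_secret_feature = False      # the morphological trait, abstracted

    def drink_from_well(self):
        # One rule governs every subclass: the trait decides the fate.
        if self.has_secret_feature:
            return "chokes and dies"
        return "gambols around happily"

class Chicken(Creature):
    has_secret_feature = True       # chickens and imps share the fatal trait

class Imp(Creature):
    has_secret_feature = True

class Spider(Creature):
    has_secret_feature = False      # spiders and pigs lack it

class Pig(Creature):
    has_secret_feature = False

for beast in (Chicken(), Imp(), Spider(), Pig()):
    print(type(beast).__name__, beast.drink_from_well())
```

A child who answers the questions correctly is, in effect, inferring the rule in drink_from_well from the examples and predicting the value of the trait for a creature they have not yet seen.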
After pilot testing, we decided to create a video of the game for the class to view rather than getting individual kids to play the game for themselves. There are a number of reasons for this, but mostly it is because you can’t guarantee that the kids will play the game in the fashion intended, which means that everyone could have a different experience on which to base their test answers. For example, if a kid skipped a conversation or took a wrong turning, it could affect their score unfairly. We also found that the test was far too easy (average score 75%!), so I spent a contorted weekend trying to think of the most devious distractors I could which would match different sorts of misconceptions. Our latest panel of teachers reckon that their classes should find the current version challenging.
There are of course limitations to this approach. Are we really measuring computational thinking? The concept is so broad that it is almost impossible to say; the test measures those aspects of it which are easiest to define. Does it rely too heavily on English language skills such as reading comprehension? Perhaps, but we hope that enabling the learners to view a visual representation of the encounters should cut reliance on reading. Further, their teachers will read the questions out loud and answer vocabulary queries for the class. Is the multiple-choice style of the questions appropriate? It can measure learners’ ability to identify rules but not their ability to produce them. The problem is, of course, that learners at the beginning of their computing careers have no representation system for expressing rules apart from natural language, and in the Sussex group’s pilot project the children found this tremendously hard. They are not used to using ordinary language to express concepts concisely and comprehensively, and they have no alternative formalism at this stage.
You can see videos of the games and the questions here and here.
If you are interested in this area, please do take a look and let me know what you think. I expect another couple of iterations will be required to get it right, so expert opinions are very valuable. If you can think of more questions or encounters (particularly harder ones) I would be very grateful. My head is spinning from trying to think of them!
GUZDIAL, M. 2008. Education: Paving the way for computational thinking. Commun. ACM, 51, 25-27.
HOWLAND, K., GOOD, J. & NICHOLSON, K. 2009. Language-based support for computational thinking. Proceedings of the 2009 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC). IEEE Computer Society.
WING, J. M. 2006. Computational thinking. Commun. ACM, 49, 33-35.