After a teaching-induced hiatus, I have returned to some analysis of the Campie Primary School game making study. Today, for your delight and edification, I will look at the children's editing behaviour when using the game. By the way, I'm choosing to blog this analysis as a way for me to work out what it means, and as a precursor to writing an academic paper about it. Feel free to point out mistakes if you spot 'em.
Why is editing behaviour of interest?
What do I mean by editing behaviour anyway? I am looking at three categories of activity: adding, editing and deleting. Adding is when the children put something new into their game, such as a new blueprint object like a monster, a new conversation or line of dialogue within a conversation, a new idea on a fridge magnet, or a new reply to a Comment Cards discussion. You would expect adding to be most frequent, as it is core to building the game. Editing is when the child goes back to change the properties of a blueprint, alter a line of dialogue, rework the writing on a fridge magnet, or revise peer review remarks in Comment Cards. Deleting, as you would expect, is when an author removes a blueprint, an idea, a line of dialogue or a Comment Cards comment completely. You would expect editing and deleting to be relatively less frequent than adding, unless you had a neurotic author in the throes of writer's block.
Taking a cynical view, you might think that kids would spend hours just adding dragons willy-nilly to their areas. Actually, I have watched this happen in short workshops with younger kids - it's loads of fun to put as many skeleton warriors as possible in your game to see what happens. Now, being teachers, we hope for rather more than that in the long run. We want to see some thought on the part of our mini game designers. We want to see evidence that they are refining their ideas, making changes to their games on the basis of testing them out. In fact, one of the arguments for making games in school is that it encourages children to draft and redraft their work, because they get immediate feedback from the game when they play it. This is in comparison to writing, where children in the early stages of writing development typically spend loads of time generating text and less time going back to edit or revise it. (Mike Sharples makes this point in his book "How We Write: Writing as Creative Design".) As children develop as writers, they gain the metacognitive (thinking about thinking) skills to reflect on what they have written and evaluate how good it is. At a previous Adventure Author study, a visiting education advisor commented: "They are learning something about real authoring, the opportunity to compare, to adopt and adapt, a re-cycling of ideas". What I now have is log file data to examine this sort of observation in more detail.
On average, the children did spend most of their time adding things to their game (72%), but there is also evidence that they revised their game by editing (13%) and deleting (15%).
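To make the categories concrete, here is a minimal sketch of how proportions like these can be computed from a stream of logged actions. The log format here is invented for illustration - the real Adventure Author log files are of course richer than a flat list of (timestamp, child, action) rows:

```python
from collections import Counter

# Hypothetical log rows: (timestamp, child_id, action), where action is
# one of "add", "edit" or "delete". This only sketches the proportion
# calculation, not the real log parsing.
log = [
    ("2008-04-22 10:01", "R", "add"),
    ("2008-04-22 10:03", "R", "add"),
    ("2008-04-22 10:05", "R", "edit"),
    ("2008-04-22 10:07", "R", "delete"),
]

def action_proportions(rows):
    """Return each action's share of the total logged actions."""
    counts = Counter(action for _, _, action in rows)
    total = sum(counts.values())
    return {action: counts[action] / total for action in counts}

print(action_proportions(log))
# {'add': 0.5, 'edit': 0.25, 'delete': 0.25}
```

The same tally, run over a whole class's logs rather than four toy rows, gives class-level figures like the 72/13/15 split above.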
So far so good. But we might also wonder how the editing cycle works. Do the children spend one session making a game and the next editing it? This might be the equivalent of writing a first draft of a story and then returning the next day to make changes to it. You might predict, for example, a pattern of more edits as the project progressed. In fact, this is not the case, as the graph below shows.
You can see that in the average case, the children tended to add, edit and delete in every session. (I have no idea why there is a one-minute session on April the 29th. None of the AA team were there that day. Perhaps something else happened in the class to interrupt the session.) On May 19th the children used Comment Cards and so edited their comments and replies to each other, which explains the increase in edits for that session. It looks more likely that the children were going through quite tight create-test-revise cycles within single sessions. We can check that with the aid of more lovely graphs in the next blog post.
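For the curious, the per-session tallying behind a graph like this is straightforward. A quick sketch, again assuming a simplified, hypothetical (session date, action) event format:

```python
from collections import defaultdict, Counter

# Group events by session date to see whether adding, editing and
# deleting all occur within each session. The (date, action) event
# format is a simplification of the real log files.
def actions_per_session(events):
    sessions = defaultdict(Counter)
    for date, action in events:
        sessions[date][action] += 1
    return dict(sessions)

events = [
    ("2008-05-06", "add"), ("2008-05-06", "edit"), ("2008-05-06", "delete"),
    ("2008-05-13", "add"), ("2008-05-13", "add"), ("2008-05-13", "edit"),
]
for date, counts in actions_per_session(events).items():
    print(date, dict(counts))
```

If every session's counter contains all three action types, the children are mixing creating and revising within sessions rather than saving revision for a later "editing day".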
Of course, averaging across the whole class only takes us so far. It can be useful to look at the behaviour of individual children, especially when we know what the games they produced are like.
Child R: Regular, effective testing
Cathrin's notes say that Child R produced a well thought out game with attention to detail. In terms of his working patterns, he seemed to do a fair amount of work on blueprints and terrain, and then test his game. He does not, however, test his game in every session. Similarly, the graph indicates that he does not edit his game in every session. He does edit and test, but he perhaps has a longer period of building before he stops to test his ideas. It seems from the progress of his game file that he started working on a new area in the last four sessions; this is perhaps also indicated by the fact that the fourth-last session is all about adding new blueprints.
Child K: Careful revisions but without the benefit of game testing
Child K spent a lot of time editing her work, particularly once she started working on conversations. Her comments during the peer review session indicate that she felt her conversations needed still more work, even after all this editing. However, Cathrin's analysis of her game suggests that her conversations were more suited to a written story than a game, so she may have benefited from more regular testing. She spent 13% of her time game testing, compared to the class average of 22%. One could argue that her editing behaviour would have been more effective and better informed if it had been based on feedback from running the game.
Child C: Deletions from frustration
Child C had a very curious pattern of behaviour. In fact, when I first wrote my script to process his log files, I thought I had made a mistake. He spends ages adding blueprints and then deletes them and then adds almost exactly the same blueprints again. He doesn't spend very much time testing his game. He didn't change his game according to peer feedback. Cathrin noted that his game was disappointingly empty, and his teacher noted that he spent a lot of time helping other children with their games. In this case the fact that he changes his game by deleting things is probably not a good sign, as he repeatedly creates something similar. He implemented none of the ideas listed in his fridge magnets. Once again, testing his game by playing it might have helped him to see what would have improved it.
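Child C's delete-and-re-add pattern is exactly the kind of thing software could flag automatically. Here is a rough heuristic sketch, assuming a hypothetical (action, blueprint name) event format: it counts how many deleted blueprints are later re-added under the same name.

```python
# A rough heuristic for "delete and re-add" churn: count how many
# deleted blueprints are later added back with the same name.
# The (action, blueprint_name) event format is hypothetical.
def churn_count(events):
    deleted = set()
    churn = 0
    for action, name in events:
        if action == "delete":
            deleted.add(name)
        elif action == "add" and name in deleted:
            churn += 1
            deleted.discard(name)
    return churn

events = [
    ("add", "skeleton"), ("add", "dragon"),
    ("delete", "skeleton"), ("delete", "dragon"),
    ("add", "skeleton"), ("add", "dragon"),  # re-adding the same blueprints
]
print(churn_count(events))  # prints 2
```

A high churn count relative to total additions could trigger a gentle in-software suggestion to test the game, along the lines proposed below. Matching on blueprint name is crude - "almost exactly the same" blueprints with different names would need a fuzzier comparison of properties.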
You may be wondering why this analysis of children's behaviour from log files matters. What conclusions can we draw from what these individual children did? This is intended to be descriptive analysis: I'm not setting out to prove a hypothesis or compare one condition to another. The descriptions can be useful in a couple of ways:
1. We can use them to improve Adventure Author - if we know which patterns of behaviour are associated with good or less good games, and we can detect those patterns in the software, we can perhaps make helpful suggestions to the user.
2. We can explain to teachers which ways of working seem to be effective and which are the warning signs to look out for.
As far as the children's editing behaviour goes, there is evidence that they do revise their games. There is a general pattern of creating and editing within a single session rather than waiting to revise games after completing a first draft. Case study evidence suggests that editing behaviour combined with game testing is effective: the game itself provides useful feedback which helps the children revise their games in productive ways. There is also some case study evidence which points to unproductive editing behaviour related to a lack of testing.