[edited after doing another review!]
You may have read my last post and concluded that a) I am on some kind of crazy power trip, b) I have just served on a Programme Committee as an associate chair, or c) it is Friday afternoon. All three are true, and as I have been doing nothing but reviewing and marking all week, I feel like writing instead of reading. Here are my thoughts on how to get a paper accepted at IDC, which I hope might be useful to authors in future years. [IDC notifications and reviews will be out on Monday and this post does not contain spoilers!]
Interaction Design and Children is a conference series entering its second decade, a fact which makes me feel old as I attended the first one as a postdoc. As the years have gone by, the conference has become increasingly popular and standards have improved. This year we moved to a system where Associate Chairs are responsible for checking review quality and writing meta reviews. This is another step towards being all grown up and scientifically mature. This year 28 papers were accepted (some with shepherding), which is about a 30% acceptance rate. (For comparison, CHI had an acceptance rate of 22% in 2012, but started out 20 years ago at 45%.) Having reviewed papers for IDC almost every year, co-chaired the papers track in 2007, and acted as an AC for 10 papers this year, I feel like I have a handle on what is expected. Here are some thoughts to help new authors planning a submission in the future.
- More than cute systems. The days when you could get published at IDC for developing a cute system for cute children are hopefully gone. You'll need some kind of evaluation with kids, and more than just testing it on your neighbours' children and reporting that they thought it was "awesome". If you just have a cute system, a demo might be more appropriate.
- More than involving kids in the design. In my view, we should also be past the stage where a paper gets accepted simply because it used kids as design partners. There is certainly a place for critically evaluating design methodologies or introducing new methods for working with challenging user groups. But the paper needs more of a contribution than finding that the kids provided useful suggestions for the design.
- Describe the design of the software/technology. I want a screenshot! It didn't occur to me to include this one the first time, because it seems obvious to me as a computer scientist. But I have now reviewed several papers, possibly written by psychologists, which report results extensively but say little about the technology which was used. For the results to be interpretable, we need to know about the design features of the software used in the study.
- Methodology is important. Choose your methodology carefully, and execute it well. In my experience, IDC doesn't particularly value quantitative over qualitative methodologies. I don't think reviewers really have physics envy and want to see lots of numbers. So you don't need to put stats in just because you think they are expected. What matters is whether the method matches the research questions, and whether the conclusions can be supported by the evidence the methods produced. Pick a well-established and documented methodology and a set of commonly used measures. If you are going to include stats then please make sure you get them right! You can see my previous rants about this here and here. In short: don't bother with stats on small sample sizes, don't use t-tests or ANOVAs on Likert data, report effect sizes, and be careful about drawing conclusions when you have low power. Don't do multiple comparisons without applying post-hoc corrections, and don't throw in covariates without good reason; the first sketch after this list shows what some of this looks like in practice. (See also Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359–1366. doi:10.1177/0956797611417632)
- Rich description is more convincing than weak stats. As a more specific version of the above point, if you have access to only a small number of participants but feel there are still valuable points to be drawn from your study, consider a rigorous qualitative methodology instead of statistics. In some subfields it is difficult to find large numbers of participants (such as when working with children with autism or those with particular medical conditions). Rigour is the point here: don't just informally describe what the participants did, but use a well-known approach such as case study analysis or grounded theory to systematically draw a picture of the participants' experiences in some depth.
- Try to use well-known measures such as established questionnaires rather than making up your own questions or tests.
- Let the kids speak for themselves. It's generally frowned upon to use teachers' or parents' views as a proxy for the kids'. Gather data directly from the kids unless there is a very good reason not to.
- If using quantitative methods, state your hypotheses clearly and base them in the literature. I'm looking for a match between the findings of previous studies reported in the literature and clearly stated directional hypotheses (ideally with effect size predictions). Instead of "There will be a difference in children's attitudes to geography after playing with the Amazing Jupiter Robot", go for "The geography scores of children who played with the Amazing Jupiter Robot will be 10% higher than the control group's". Where did you get that prediction of 10%? You got it from Robertson's (2012) work on the Amazing Mars Robot, which reported similar effect sizes. You didn't pluck it out of the air. For a set of prompts for critically evaluating your article, see p. 79 of Dienes, Z. (2008). Understanding Psychology as a Science: An Introduction to Scientific and Statistical Inference (1st ed.). Palgrave Macmillan. To be completely honest, I have never reviewed or read a paper in CHI or IDC which does give effect size predictions. I would like to, though. Please make my day! If you can't do that, at least specify the direction of the hypothesis: "Users who play with the Amazing Jupiter Robot will perform better on standardized geography scores than control users". Don't sit on the fence. The more specific the hypothesis, the more informative it is to have it supported (or disconfirmed). A published effect size also lets you plan your sample size before you collect any data; see the second sketch after this list. Also, don't introduce new variables into the experimental design without having a good reason from the literature. Gender or social class might be easy to get data on, but please have a good theoretical reason for believing there are likely to be differences according to such groupings.
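To make the stats advice above more concrete, here is a minimal sketch in Python of the kind of analysis that tends to survive review: a non-parametric test on Likert data with a directional alternative, an effect size, and a correction for multiple comparisons. All the ratings and variable names are made up for illustration, and it assumes scipy ≥ 1.7 and statsmodels are available.

```python
# Sketch only: hypothetical 5-point Likert ratings from two conditions.
from scipy.stats import mannwhitneyu
from statsmodels.stats.multitest import multipletests

robot_group = [5, 4, 4, 5, 3, 4, 5, 4]    # made-up ratings
control_group = [3, 2, 4, 3, 3, 2, 3, 4]  # made-up ratings

# Mann-Whitney U instead of a t-test, with a directional alternative
u1, p = mannwhitneyu(robot_group, control_group, alternative="greater")

# Rank-biserial correlation as an effect size for U
# (assumes scipy >= 1.7, where the statistic is U for the first sample)
n1, n2 = len(robot_group), len(control_group)
rank_biserial = 2 * u1 / (n1 * n2) - 1
print(f"U = {u1}, p = {p:.3f}, rank-biserial r = {rank_biserial:.2f}")

# If this were one of several comparisons, correct the p-values (Holm here)
reject, p_corrected, _, _ = multipletests([p, 0.04, 0.012], method="holm")
print(p_corrected)
```

The point is not these particular functions but the shape of the analysis: a test that matches the data, an effect size alongside the p-value, and an honest correction when you test more than one thing.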
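And on effect size predictions: if prior work reports a standardized effect, you can use it to plan your sample size before running the study. Here is a minimal sketch using statsmodels' power calculator; the d = 0.5 is an illustrative stand-in for whatever the earlier literature actually reports, not a value from any real paper.

```python
# Sketch only: a-priori power analysis using an effect size borrowed
# from earlier published work.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,       # Cohen's d from prior literature (illustrative)
    alpha=0.05,            # significance level
    power=0.8,             # desired power
    alternative="larger",  # directional: robot group predicted to score higher
)
print(f"Participants needed per group: {n_per_group:.0f}")  # ~50 here
```

Doing this sum before recruiting is exactly what guards against the low-power conclusions complained about above: if the answer is 50 per group and you can only get 8, that tells you to switch to the rich qualitative description instead.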
I'm sure there are many more tips which other IDC bods could suggest, but that seems enough to be going on with for now.