Should you be strolling past Computers & Education you will notice that I have a paper in the forthcoming issue. (If you want a copy, go to my publications page and get a close-to-final draft without paying the ridiculous publisher fee.) Today's blog is not merely an exercise in self-congratulation, nor yet a rant against publishers. It is a story – nay, a parable – of the pain and torment which is academic publishing. If you wish to avoid reading about this pain and torment, skip to the practical suggestions at the end.
In 2009 I started analysing the data from a 2008 cohort of first year students and the sorts of learning which were visible from reading their blogs. I read a lot about self-directed learning, did a hefty amount of analysis and wrote a 28 page paper. I spent a while deciding where to publish it. Based on what sense I could make of the advice from my department and grizzled academics at the time, I decided on this strategy: be selective about publishing and aim high. I understood that it was quality rather than quantity which we were aiming for. So I decided to write a few high quality journal papers at that time rather than focussing on conferences. This actually proved too high risk, and has left me with a publishing gap, because I submitted three papers to top flight journals and got all three rejected. I am still not sure what to do with two of them, but I will rework them in some way and resubmit them elsewhere. In fact, I am not sure whether this is a failed publishing strategy or not, because it might work out for the best in the long run. It all depends on whether you take the attitude that we should be publishing for the sake of external validation (like the REF or RAE frameworks) or whether we should be doing it for the sake of contributing the best papers we can to our fields. It may be that it is possible for these to coincide.
Anyhow, with this particular paper, I submitted it initially to Journal of the Learning Sciences. It was rejected by the editor but she supplied some incredibly useful advice and references and suggested I revise it. I did so. The revisions took about the first three weeks of my maternity leave (before the baby!). While electricians rewired my house, I sought refuge in Moray House library, tracked down another 10 or so key articles, and systematically followed the advice of the editor. The paper swelled to 49 pages. I resubmitted it a few days before the baby was born. A few months later the journal outright rejected the paper, based on two pretty high quality reviews. Essentially, the work wasn't enough of a contribution, given what was known in that field already, and the study wasn't well enough designed. Well, that was pretty gutting.
Actually, I made a determined effort not to take it personally, so once I was back at work I decided to resubmit it to Computers & Education. But first I had to cut it seriously, as C&E won't take such massive manuscripts.
This time, I made a sensible move. My problem with Journal of the Learning Sciences was that I hadn't read enough of the previous work from the journal before submitting my paper. So I didn't really know what would be news to them, or what sort of methods they would favour. So with C&E, I hunted through some recent issues which had been published since I did my initial lit review. I found an article on blogging which proposed a framework of the educational affordances of blogging, and decided I would base my own article around it. I suspect this might be a good move in general: people in education like frameworks. It also made my contribution clear: I would extend this framework based on analysis of a larger set of data in a different domain. This required a major reshaping of the paper, which shrank back to 23 pages – but different ones from the original paper to JLS!
One of the reviewers wrote a long and detailed review. The paper was accepted on the condition that major revisions were made according to these comments. I have to say that the reviewer had a bee in her bonnet about a certain subset of work in the field, which I hadn't chosen to focus on, but I decided at this point to grit my teeth, make the changes and resubmit. It took three weeks' hard work and involved a whole other set of reading, including an entire book. Finally the paper was sent back to reviewers and another set of minor (and pointless) surface issues were raised. The damn thing was accepted a few weeks back and then there was a round of proofreading edits from the publisher. It took around two years from beginning writing to publication.
The parable of this story is not about the unfairness of reviewers, or the pressure on academics to publish. It's about the importance of having a mastery mindset.
The book which the last reviewer suggested I read was Carol Dweck's book on Mindsets. Although not entirely relevant to my paper itself, ironically it summed up the way I was thinking during the process of writing the paper. It's a fascinating book, and I strongly recommend it. Here are the bare bones: there are two sorts of attitudes which people can have towards their learning: performance oriented and mastery oriented. If you focus on an external outcome like getting praise from a teacher, passing an exam or getting a paper accepted in a high quality journal at the expense of learning, then this is performance oriented. The problem with this mindset is that it is vulnerable to failure. People with performance mindsets get discouraged and would rather undertake easy problems which they know they can do rather than trying something more challenging. People with mastery mindsets don't mind failing, because failures can give feedback which will help them to learn and improve their performance. If you approached paper writing with a mastery mindset, you would be aiming to learn how to write the best sort of paper you could in that field. You wouldn't mind reviewer and editor criticisms, and you would do lots and lots of revisions to keep improving the paper. Somewhere in the course of writing this paper, I think I adopted a mastery approach. And you know what? The published paper is hugely better than the original. I have learned a huge amount about writing papers in this field. From that point of view, the two year cycle was worth it. As to the publications gap, I am not sure how much it matters. If I really have learned a lot about writing papers, then future papers of mine ought to be of a higher standard – which is what the REF evaluation framework is trying to measure.
Based on this experience, here are some suggestions which I will try to follow next time. I hope you find them useful too. I am sorry if they seem obvious but I wish I had taken them on board before!
Consider where you might want to publish before you even plan the study. If you are aiming for a high quality journal, they may have expectations about the sort of experimental work you can publish there, and you need to make sure your experimental design is rigorous enough at the outset.
Before you decide on a journal, consider their stated word counts very carefully. Certainly do this before you start writing. It will save you over-generating text – writing stuff which you find interesting but which you will end up having to cut anyway.
Once you have decided on a journal, take the time to peruse recent issues for similar topics. Someone once told me that when you write a journal paper you are joining in a conversation which has probably been going on for some time. Make sure you refer to recent papers on the same topic, and take note of the analysis methods used. Even the way the results are reported can be a useful guideline because it shows that reviewers for the journal accept something in that format.
Don't take reviewers' comments as a personal attack. Sometimes it does feel like that, but the point is to improve your writing. A good review is one where the reviewer has taken the time to give detailed and thorough criticism and concrete suggestions for improvement. Work through these patiently and thoroughly, and write a letter explaining clearly how you responded to each comment. A bad review is one which is not detailed. I personally don't tend to bother making changes based on a vague review, because if I don't understand what it means I could just as easily make the paper worse rather than better by guessing. You can always ask the editor for clarification, or politely explain that you have chosen not to respond to that comment. Editors aren't stupid: they know a poorly written review when they see it.