A few weeks ago there were a number of interesting posts floating around the web discussing the appropriateness of science blogging as a form of self-promotion (see this post by Scicurious for an excellent backgrounder). This is an issue that I’ve spent a lot of time thinking about – communicating with people about our own research wasn’t the only reason that Peter and I got into blogging, but it was a very big part of it. And it’s one of the main reasons why I advocate for researchers to get involved in social media.
Out of curiosity I put up a poll asking people whether they felt it was ok for a person to blog about their own research, and today I thought I would share those results (sorry for the longer than expected delay getting them posted).
The survey asked “Is it ok to blog about your own research?”
The available options were:
1. Yes
2. Yes, but only if done using your real name
3. No, it is never ok to blog about your own research
We had 34 responses, which makes this an <sarcasm> extremely representative sample </sarcasm>. What’s more, I have no idea of the background of those who responded, although they are likely people who follow me, Scicurious, or Mr Epidemiology on Twitter. Here is how those responses broke down.
11 people (32.4% of respondents) stated unequivocally that yes, it is ok to blog about your own research. I presume that these people don’t care whether you identify yourself as the study’s author in the post (something that would be impossible if you chose to use a pseudonym). The other 23 respondents (67.6% of those who completed the survey) said that it is ok to blog about your own work, but only if you use your real name. This is how I voted personally, since I think it would prevent people from smearing their competition without disclosing their own conflict of interest – a concern that has been voiced by Drug Monkey. Given that concern (and other similar comments that I’ve heard in the past), I was surprised that not a single person said that it was completely inappropriate to blog about your own work under any circumstances (it might be worth noting that the vote breakdown does seem to be in general agreement with the bulk of the comments on Sci’s original post).
Finally, a small number of respondents added comments after the survey. Here are a few:
If using a pseudonym you need to create the links; otherwise you are being a bit dishonest. Don’t like all this talk of ‘trashing your competitors’ though – that smacks of your field not doing actual science but just competing for funding…
I have a few rules I impose on myself: I only blog about specifics after the paper’s published. I don’t do anything that I think might hint of “prior publication.”
[the above pretty much perfectly describes the approach that Peter and I have taken at Obesity Panacea... I can't imagine that our current or previous supervisors would have been so indulgent with our blogging if we were scooping our own publications!]
I’ve never seen a difference between a blog post explaining your paper and a conference presentation, other than that the conference presentation will likely be ephemeral, while the blog post is recorded and available indefinitely. If your research paints a contrasting picture to that of a colleague, as long as your data is available for comparison (i.e. published), then it shouldn’t matter where you discuss it.
Is there anyone out there who thinks it is inappropriate to blog about your own research under any circumstances? Let me know why in the comments below.
Thanks to everyone who took part in the survey!
Over the past year or so I’ve had a number of interesting conversations with people about peer review. It seems as though many people think the current system is broken, although I have yet to hear many suggestions on where to go from here. Sometimes people mention wikis or other web-based means of publication, but many (myself included) worry that there needs to be some form of peer review to assess study quality, catch fraud, and weed out other undesirable things (although whether traditional peer review achieves these goals is very much up for debate).
Here are my thoughts on the problem, and what I see as a relatively simple solution.
How do we assess study quality?
Historically, if you had to assess the quality of a study without having the luxury of reading it, you would probably ask two questions:
1. Was it published in a peer reviewed journal?
2. If published, how prestigious is the journal?
While far from perfect, these questions give a general sense of the quality of a piece of research. Something that was published in Nature is likely of higher quality than something published in a small society journal (or at least that’s an assumption that many of us are willing to make when pressed for time), and both of these papers are likely to be of higher quality than a paper that has been rejected from multiple journals and now sits unpublished in a desk drawer.
This quick and dirty assessment of paper quality worked for a long time, since there were a fairly limited number of journals where you could publish research on any given topic. If peer reviewers deemed your work to be of high enough quality and/or impact, then it was accepted for publication. If not, it went unpublished. That served as a simple, albeit crude, way to assess the quality of a study or experiment. If no one was willing to publish your paper, then it must not be of very high quality.
Taken a step further, these questions can also be used to assess the quality of a researcher. Are you publishing many peer-reviewed papers? Are they in top journals? If the answer to either of those questions is no, then the implication would be that your research was of lower quality than someone who answered yes.
There are problems with this line of reasoning (among several obvious problems: not all papers that get rejected are low quality, and not all papers that sneak through the peer review process are high quality), but in general I would say that many people were happy with the system, since it was simple and (at least perceived to be) reasonably effective at keeping low quality studies on the outside and higher quality studies on the inside.
Why don’t these questions work anymore?
There are a lot of new journals popping up. Not just one or two, but hundreds. New Open Access publisher Hindawi publishes more than 120 journals in medicine alone! I get several emails every week publicizing the launch of new journals, most of which are open access, and which seem to have varying standards of peer review (some use external reviewers, others are only reviewed in-house by the editors). The issue now is that if you can afford to pay the open access publishing fees, no paper is unpublishable. If you submit to enough journals, your paper will almost certainly be accepted eventually, at which point you can say that the study has been published in a peer reviewed journal. So the first question from above (“Was it published in a peer reviewed journal?”) is no longer a useful way to assess paper quality, since almost anything can be published in some form of peer reviewed journal eventually.
Related to the issue of journal proliferation, people are becoming less and less devoted to any single journal. Rather than reading a specific journal from cover to cover each month, I have email alerts that send me a message whenever a paper is published on certain topics, regardless of the journal. As a result, papers published in low-impact journals can still get lots of attention, even if few people actually read that journal on a regular basis. In contrast, before online journal access it would have been much less likely that anyone would come across a paper in an obscure journal, no matter how relevant to their work.
Article-level metrics (e.g. assessing the number of citations for a specific paper, rather than the impact of the journal itself) are also reducing the importance of publishing in “prestigious” journals, since people now have more precise ways of determining whether your paper is being cited regularly. This isn’t to say that there are no benefits to publishing in prestigious journals – far from it. But the penalties of publishing in a low-impact journal are now much smaller than they used to be.
Is this a good or bad thing?
This depends on your perspective. If you liked the system where there were few journals and not everything could be published, then this will almost certainly seem like a bad thing. Suddenly we’ve lost one of the simplest (albeit very imperfect) ways of determining the quality of a study, or the quality of a researcher. This means that you could conceivably find a paper (or publish a paper yourself) that proves/supports just about anything, regardless of how poorly the study was conducted, which is a big problem.
Despite these obvious problems, however, I think that the new system could be a good thing… if we are willing to tweak the peer review and publishing process.
A journal that publishes everything
This may sound a bit far-fetched, but hear me out. At this point, almost every paper will get published eventually. What’s worse, it will often get peer reviewed at multiple journals before finally being accepted. This means that much of the time and expense of reviewing and rejecting the paper at higher-tier journals is wasted, since the rejection doesn’t actually keep the paper from being published – it just bumps it down to a lower-quality journal.
So why not just publish everything that is submitted to a journal? PLoS ONE already does a more restricted version of this – they publish everything they receive that is above a certain threshold of quality (as opposed to other journals, which consider both quality and “impact”, e.g. whether it’s a splashy finding or not). The papers would still be peer reviewed, but the purpose of the review would be to assess the study quality using a pre-determined checklist (I’m picturing something similar to, but much more detailed than, the Downs and Black checklist that is sometimes used to assess study quality in systematic reviews).
The checklist would include things like methodology (cross-sectional, intervention, RCT, etc.), number of participants, likelihood of bias, and so on, and the resulting score could range from 0 to 100. The final score and the completed checklist would be published along with the paper, together with any additional reviewer comments. The peer review could be done using the current method of simply sending manuscripts out for review, or there could be a central clearing house run by the NIH or some similar organization. The critical point is that the articles would be peer reviewed, and the quality of the article would be made abundantly clear on the article itself.
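To make the idea concrete, here is a minimal sketch of how such a checklist score might be computed. Every item name and weight below is a hypothetical illustration for the sake of the example – a real instrument would need far more items and careful validation:

```python
# Hypothetical checklist items and their weights (must sum to 100).
# These are illustrative only, not part of any real instrument.
WEIGHTS = {
    "randomized_design": 30,        # e.g. RCT rather than cross-sectional
    "adequate_sample_size": 20,
    "low_risk_of_bias": 30,
    "data_publicly_available": 20,  # full dataset posted online
}

def quality_score(checklist):
    """Sum the weights of the criteria a study satisfies (0-100)."""
    return sum(WEIGHTS[item] for item, met in checklist.items() if met)

# Example: an observational study with open data but no randomization
print(quality_score({
    "randomized_design": False,
    "adequate_sample_size": True,
    "low_risk_of_bias": True,
    "data_publicly_available": True,
}))  # prints 70
```

Publishing the completed checklist alongside the total would let readers see exactly which criteria a paper did and did not meet, rather than just a single opaque number.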
Using this system, you could publish a paper as soon as it’s received for review – it would simply need to say “pending quality review” or something of that nature. You could also require that all studies put their full dataset online in order to aid replication and hopefully reduce the likelihood of fraud, which isn’t easily caught by traditional peer review anyway (some journals, such as BMC Public Health, already require that authors be willing to share data upon request, although I don’t think there is any mechanism for determining whether this actually takes place). The quality score of a paper could even be amended as authors improve their study by performing additional experiments or analyses.
This would keep the best aspects of peer review – extra eyes and ears providing thoughtful comments on how a paper could be improved – while acknowledging the fact that the current system doesn’t do a tremendous job of quality control (for an excellent look at the shortcomings of traditional peer review, please check out this paper by Richard Smith titled Classical Peer Review: An Empty Gun).
What are the advantages of this system?
I see a number of benefits to adopting this “publish everything” model.
1. This new system would make paper quality exceedingly clear – if I say that wifi causes cancer, and can only point to a study that scored 2/100 for quality, and you point to a study that found the opposite that scored a 90/100, then we have a better idea of which side to take. If the findings are conflicting and study quality is similar, then we know that the issue is yet to be settled. Essentially we are making it easier to do systematic reviews, by assessing study quality when a paper is published, rather than waiting for the systematic review to come along.
2. This system would incentivize high quality research, rather than “sexy” findings. If I know that my study will be judged on the quality of my methods, rather than the controversy or novelty of the findings, then it will help to improve the methodological quality of studies in general.
3. This system would make it ok to replicate prior work, or to publish null findings. We talk a lot about the importance of replication in science, but we also know that it’s really hard to publish a replication study in a prestigious journal (it’s hard to spin a replication study as “cutting edge” since, by definition, it’s already been done by someone else first). It’s the same thing with null results – we know they’re harder to publish, we know this introduces biases into systematic reviews, and yet there haven’t been many effective ways to fix it (I’m curious whether PLoS ONE publishes more “null” studies than other journals, since it doesn’t concern itself with a study’s impact – anyone with info on this, please let me know).
If papers are judged solely on quality rather than the novelty of the finding, that removes the incentives against performing/writing up replication studies and null results.
What are the downsides of this system?
The biggest downside of this system is that everything, regardless of quality, would be published. So if your assessment of study quality begins and ends with “was it published in a peer reviewed journal?”, then this is obviously going to be a problem. My counter-argument is that we’re already in a situation where you can publish almost anything regardless of quality, so this isn’t really a big change anyway. Of course there would be a lot of complicating factors (what goes into the quality checklist, who performs the review, how to make sure it’s applied consistently, ways to appeal if something was done incorrectly, etc.), but if the over-arching idea has merit then I think the plumbing could be dealt with in turn.
Why isn’t this working already?
As I was writing up this post, James Coyne pointed out that WebmedCentral has most of the characteristics I’m looking for. They publish everything, they do so rapidly, they publish their reviews online, and they include a quality score. However, the quality score seems to be completely arbitrary, and their 10-question quality checklist focuses on the writing (e.g. “Is the quality of the diction satisfactory?”) rather than the quality of the study methodology, which is the real issue. I think it’s a worthwhile attempt, but I don’t think any modified form of peer review (including post-publication peer review, which has been spectacular in a few specific situations – e.g. Rosie Redfield and #arsenicDNA – and generally underwhelming elsewhere) will really catch on without a true assessment of study methodology published alongside the paper.
So, what do you think?
I’ve been mulling this over for a while, and I’m very curious to hear if anyone thinks it is even remotely plausible. It’s basically a more extreme version of PLoS ONE, which was pretty extreme in its own way when it first came out. Could this idea ever work in practice? If not, why not? And if you think it’s a bad idea, I’m specifically curious to hear how this type of peer review would be worse than the current form, given that we’re already at the point where peer review is weeding out less and less material with the creation of every new journal.
I’d love to hear what you think!