Travis Saunders



Posts by Travis Saunders

Time for a new type of peer review?

Over the past year or so I’ve had a number of interesting conversations with people about peer review. It seems as though many people think the current system is broken, although I have yet to hear many suggestions on where to go from here. Sometimes people mention wikis or other web-based means of publication, but many (myself included) worry that there needs to be some form of peer review to assess study quality, detect fraud, and catch other undesirable problems (although whether traditional peer review achieves these goals is very much up for debate).

Here are my thoughts on the problem, and what I see as a relatively simple solution.

How do we assess study quality?

Historically, if you had to assess the quality of a study without having the luxury of reading it, you would probably ask two questions:

1. Was it published in a peer reviewed journal?

2. If published, how prestigious is the journal?

While far from perfect, these questions give a general sense of the quality of a piece of research. Something that was published in Nature is likely of higher quality than something published in a small society journal (or at least that’s an assumption that many of us are willing to make when pressed for time), and both of these papers are likely to be of higher quality than a paper that has been rejected from multiple journals and now sits unpublished in a desk drawer.

This quick and dirty assessment of paper quality worked for a long time, since there were a fairly limited number of journals where you could publish research on any given topic. If peer reviewers deemed your work to be of high enough quality and/or impact, then it was accepted for publication. If not, it went unpublished. That served as a simple, albeit crude, way to assess the quality of a study or experiment. If no one was willing to publish your paper, then it must not be of very high quality.

Taken a step further, these questions can also be used to assess the quality of a researcher. Are you publishing many peer-reviewed papers? Are they in top journals? If the answer to either of those questions is no, then the implication would be that your research was of lower quality than someone who answered yes.

There are problems with this line of reasoning (among several obvious problems: not all papers that get rejected are low quality, and not all papers that sneak through the peer review process are high quality), but in general I would say that many people were happy with the system, since it was simple and (at least perceived to be) reasonably effective at keeping low quality studies on the outside and higher quality studies on the inside.

Why don’t these questions work anymore?

There are a lot of new journals popping up. Not just one or two, but hundreds. The new Open Access publisher Hindawi publishes more than 120 journals in medicine alone! I get several emails every week publicizing the launch of new journals, most of which are open access, and which seem to have varying standards of peer review (some use external reviewers, others are reviewed only in-house by the editors). The issue now is that if you can afford to pay the open access publishing fees, no paper is unpublishable. If you submit to enough journals, your paper will almost certainly be accepted eventually, at which point you can say that the study has been published in a peer reviewed journal. So the first question from above (“Was it published in a peer reviewed journal?”) is no longer a useful way to assess paper quality, since almost anything can be published in some form of peer reviewed journal eventually.

Related to the issue of journal proliferation, people are becoming less and less devoted to any single journal. Rather than reading a specific journal from cover to cover each month, I have email alerts that send me a message whenever a paper is published on certain topics, regardless of the journal. As a result, papers published in low-impact journals can still get lots of attention, even if few people actually read that journal on a regular basis. In contrast, before online journal access it would have been much less likely that anyone would come across a paper in an obscure journal, no matter how relevant to their work.

Article-level metrics (e.g. assessing the number of citations for a specific paper, rather than the impact of the journal itself) are also reducing the importance of publishing in “prestigious” journals, since people now have more precise ways of determining whether your paper is being cited regularly. This isn’t to say that there are no benefits to publishing in a prestigious journal – far from it. But the penalties of publishing in a low-impact journal are now much smaller than they used to be.

Is this a good or bad thing?

This depends on your perspective. If you liked the system where there were few journals and not everything could be published, then this will almost certainly seem like a bad thing. Suddenly we’ve lost one of the simplest (albeit very imperfect) ways of determining the quality of a study, or the quality of a researcher. This means that you could conceivably find a paper (or publish a paper yourself) that proves/supports just about anything, regardless of how poorly the study was conducted, which is a big problem.

Despite these obvious problems, however, I think that the new system could be a good thing… if we are willing to tweak the peer review and publishing process.

A journal that publishes everything

This may sound a bit far-fetched, but hear me out on this. At this point, almost every paper will get published eventually. What’s worse, it will often get peer reviewed at multiple journals before finally being accepted. This means that much of the time and expense of reviewing/rejecting the paper at higher-tier journals is wasted, since it doesn’t actually keep the paper from being published – it just bumps it down to a lower quality journal.

So why not just publish everything that is submitted to a journal? PLoS ONE already does a more restricted version of this – they publish everything they receive that is above a certain threshold of quality (as opposed to other journals, which consider both quality and “impact”, e.g. whether it’s a splashy finding or not). The papers would still be peer reviewed, but the purpose of the review would be to assess the study quality, using a pre-determined checklist (I’m picturing something similar to, but much more detailed than, the Downs and Black checklist that is sometimes used to assess study quality in systematic reviews).

The checklist would include things like methodology (cross-sectional, intervention, RCT, etc.), number of participants, likelihood of bias, etc., and the score could range from 0 to 100. The final score and the checklists themselves would be published along with the paper, along with any additional reviewer comments. The peer review could be done using the current method of simply sending manuscripts out for review, or there could be a central clearing house run by the NIH or some such organization. The critical point is that the articles would be peer reviewed, and the quality of the article would be made abundantly clear on the article itself.
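To make the idea concrete, here is a minimal sketch of how such a checklist score might be computed. The item names and weights below are entirely hypothetical – a real instrument would need far more items and careful validation – but the mechanics are simple: each checklist item a paper satisfies contributes its weight toward a score out of 100.

```python
# A minimal sketch of a hypothetical quality-checklist scorer.
# Item names and weights are illustrative only, not a real instrument.

CHECKLIST = {
    "randomized_design": 25,       # RCT vs. cross-sectional, etc.
    "adequate_sample_size": 20,    # powered for the primary outcome
    "low_risk_of_bias": 25,        # blinding, allocation concealment
    "prespecified_analysis": 15,   # analysis plan registered in advance
    "data_publicly_available": 15, # full dataset posted online
}

def quality_score(responses):
    """Sum the weights of the items a paper satisfies (0-100).

    responses: dict mapping checklist item -> bool.
    Items missing from the dict count as not satisfied.
    """
    return sum(weight for item, weight in CHECKLIST.items()
               if responses.get(item, False))

# Example: a cross-sectional study with open data
paper = {"adequate_sample_size": True, "data_publicly_available": True}
print(quality_score(paper))  # 35
```

Publishing the filled-in checklist alongside the score (rather than the number alone) is what would make the review transparent – readers could see exactly which criteria a paper did and did not meet.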

Using this system, you could publish a paper as soon as it’s received for review – it would simply need to say “pending quality review” or something of that nature. You could also require that all studies put their full dataset online in order to aid replication and hopefully reduce the likelihood of fraud, which isn’t easily caught by traditional peer review anyway (some journals, such as BMC Public Health, already require that authors be willing to share data upon request, although I don’t think there is any mechanism for determining whether this actually takes place). The quality score of a paper could even be amended as authors improve their study by performing additional experiments or analyses.

This would keep the best aspects of peer review – extra eyes and ears providing thoughtful comments on how a paper could be improved – while acknowledging the fact that the current system doesn’t do a tremendous job of quality control (for an excellent look at the shortcomings of traditional peer review, please check out this paper by Richard Smith titled Classical Peer Review: An Empty Gun).

What are the advantages of this system?

I see a number of benefits to adopting this “publish everything” model.

1. This new system would make paper quality exceedingly clear – if I say that wifi causes cancer, and can only point to a study that scored 2/100 for quality, and you point to a study that found the opposite that scored a 90/100, then we have a better idea of which side to take. If the findings are conflicting and study quality is similar, then we know that the issue is yet to be settled. Essentially we are making it easier to do systematic reviews, by assessing study quality when a paper is published, rather than waiting for the systematic review to come along.

2. This system would incentivize high quality research, rather than “sexy” findings. If I know that my study will be judged on the quality of my methods, rather than the controversy or novelty of the findings, then it will help to improve the methodological quality of studies in general.

3. This system would make it ok to replicate prior work, or publish null findings. We talk a lot about the importance of replication in science, but we also know that it’s really hard to publish a replication study in a prestigious journal (it’s hard to spin a replication study as “cutting edge” since, by definition, it’s already been done by someone else first). It’s the same with null results – we know they’re harder to publish, we know this introduces biases into systematic reviews, and yet there haven’t been many effective ways to fix it (I’m curious whether PLoS ONE publishes more “null” studies than other journals, since it doesn’t concern itself with a study’s impact – anyone with info on this please let me know).

If papers are judged solely on quality rather than the novelty of the finding, that removes the incentives against performing/writing up replication studies and null results.

What are the downsides of this system?

The biggest downside of this system is that everything, regardless of quality, would be published. So if your assessment of study quality begins and ends with “was it published in a peer reviewed journal?”, then this is obviously going to be a problem. Of course my counter-argument is that we’re already in a situation where you can publish anything regardless of quality, so that’s not really going to be a big change anyway. There would also be a lot of complicating factors (what goes into the quality checklist, who performs the review, how to make sure it’s applied consistently, how to appeal if something was done incorrectly, etc.), but if the over-arching idea has merit then I think the plumbing could be dealt with in turn.

Why isn’t this working already?

As I was writing up this post, James Coyne pointed out that WebmedCentral has most of the characteristics I’m looking for. They publish everything, they do so rapidly, they publish their reviews online, and they include a quality score. However, the quality score seems to be completely arbitrary, and their 10-question quality checklist focuses on the writing (e.g. “Is the quality of the diction satisfactory?”) rather than the quality of the study methodology, which is the real issue. I think it’s a worthwhile attempt, but I don’t think any modified form of peer review (including post-publication peer review, which has been spectacular in a few specific situations – e.g. Rosie Redfield and #arsenicDNA – and generally underwhelming elsewhere) will really catch on without a true assessment of study methodology published alongside the paper.

So, what do you think?

I’ve been mulling this over in my head for a while, and I’m very curious to hear if anyone thinks this is even remotely plausible. It’s basically a more extreme version of PLoS ONE, which was pretty extreme in its own way when it first came out. Could this idea ever work in practice? If not, why not? And specifically, if you think it’s a bad idea, I’m curious to hear how this type of peer review would be worse than the current form, given that we’re already at the point where peer review is weeding out less and less material with the creation of every new journal.

I’d love to hear what you think!

Travis

 

What science blog networks do you visit most frequently?

It’s no secret that the science blogosphere has undergone massive changes in the past 18 months.  There have been new networks (Scientopia, PLoS BLoGs), dramatically expanded/revamped networks (Scientific American, The Guardian, Wired), and networks that are under new management (Scienceblogs). There are even networks that have stuck around through it all, largely unchanged (Nature Network).

I’ve come to think of these networks as each representing a distinct niche in the science blogosphere.  These niches may not perfectly represent each network, but they’re what I associate with the network, and what I look for when I’m visiting.

Scienceblogs is where I go to find animated discussions about atheism, skepticism, and climate science. Deep Sea News is where I go for things related to oceans and aquatic animals.  Scientopia’s bloggers are mostly active researchers, and on any given day their network has excellent posts on what it’s like to be a scientist – from trainee right through to PI.  Conversely, PLoS Blogs’ bloggers are mostly science journalists, who often discuss issues related to their work, along with large dollops of actual journalistic pieces (there are also a few active researchers there, myself included).  Like PLoS Blogs, Wired and Discover seem to be written mainly by professional journalists, doing science journalism.  And then there’s the new Scientific American blog network, which is a pleasant mix of several things – journalists, scientists, etc.

I like this new science blogosphere, as it offers a number of different experiences to suit different tastes and even different moods (I find that I enjoy Scientopia while working in the lab, but prefer to read the more journalistic pieces on PLoS Blogs and Scientific American in my free time).

With all of these choices, I’m curious to know which networks people read most frequently.  The survey below allows you to rank the 3 networks that you visit most frequently (excluding any networks where you contribute regularly).  I’m assuming that Scienceblogs still has the most absolute visitors, but I’m interested to hear how the various networks rank, and why people put them in that order.  I’ve tried to include the ones that I read and hear about most frequently, but this is by no means an exhaustive list. That being said, some of these networks are far more “niche” than others, so it may not be entirely fair to compare them all head-to-head.

Feel free to suggest ones that I might have missed in the comments.  Now go ahead and vote! Check back next week for the final tally.

Travis


Post Publication Peer Review: Blogs vs Letters to the Editor

There has been a lot of discussion recently about the value of peer review (including this phenomenal post by Joe Pickrell of Genomes Unzipped), and whether other models might be cheaper, faster, and ultimately better than the current system.

Regardless of what these alternative models of publishing look like, I agree with Joe that social media will play an important role in identifying high quality papers.  Social media would thus be acting as a form of post publication peer review (henceforth referred to as PPPR), and has actually been doing so for some time (Researchblogging.org being the best example that I can think of, although the PLoS Hubs are aimed at this as well).  This is in contrast to Letters to the Editor, which up until a few years ago were the only form of PPPR available to researchers.  I have recently had experience with both of these forms of PPPR, and thought it would be fun to compare and contrast the experience with each, focusing on the categories that I considered when deciding whether to publish my critique in a blog post or a Letter.

Speed

My experience with a Letter to the Editor came about last summer when I felt that the conclusions of this article in the International Journal of Behavioral Nutrition and Physical Activity (IJBNPA) did not match up with their data (actually, I felt that their conclusions were directly contradicted by their data).  The article was published on July 29th, 2010, and my colleague Stephanie emailed it to me that same day.  I read the paper in detail about a week later, and decided to write a Letter to the Editor with Stephanie and our co-supervisor.

Unfortunately IJBNPA had never published a Letter before, and it took some time for IJBNPA and their publisher (BMC) to decide whether they were willing to publish Letters in the journal, and whether or not they would charge their usual $1670 USD processing fee.  Fortunately, by the end of 2010 BMC had told us that IJBNPA would begin accepting Letters, and that they would waive their processing fee.

Thus our Letter was officially submitted to the journal in January of 2011, five months after the initial article had been published. Although it was accepted quite quickly, our Letter couldn’t be published until the authors of the original paper had had a chance to respond.  Thus our article was officially published on May 25, nearly ten months after the article we were critiquing.  It is worth noting that the original article received BMC’s “Highly Accessed” designation, meaning that it was among the more popular articles in the journal during that time-span (during this time readers had no way to know that anyone felt there was a problem with the paper).

In contrast to a Letter, a blog post about an article can be published as soon as it is written. In this case I wrote a blog post about our critique on June 12, 2011 and published it on June 13, for a total turn-around of 1 day.  Rosie Redfield’s famous #arsenicDNA blog post and Letter to the Editor showed similar time differences – her blog post was published on December 4, 2 days after the article she was critiquing.  In contrast, her Letter to the Editor wasn’t published online until May 27, a full 5 months after the original article.  When it comes to speed, traditional Letters can’t compete with blogs.

Winner: Blogging.

Impact


Academic career versus fulfilling personal life: are they mutually exclusive?


Travis’ Note: Earlier this year Nature held a Career Columnist Competition looking for Post Docs and PhD Students who were interested in writing about the ups and downs of being a trainee.  They received over 300 submissions, and the 6 who were chosen look fantastic. Unfortunately for me I was in the 294+ who did not get selected, but the good news is that I can now repost my submission here!  It doesn’t relate specifically to science communication, but I’m hoping it may still be of interest.

—–

I am currently in the second year of a four-year PhD program.  I enjoy the work that I am doing, and frankly I love the lab that I am working in.  But as I inch towards the completion of my degree, I can already feel myself becoming increasingly anxious about what comes next.  I have many friends who have gone down this road before me, and they have taken a number of routes, both traditional and otherwise.  Some have gone on to post-docs, and several are in tenure-track positions at research or teaching universities.  Still others have gone to work with industry or government, and a few have even decided to focus on science writing or other “non-academic” pursuits.

But when it comes to deciding what I want to do next, I really have no idea.  The one thing I do know is that in addition to a job, I also want a life.  And this is something that I have noticed many, if not most, of my graduate student peers are also looking for.  While they still love research, they don’t want the typical tenured professor’s life of previous generations – those professors who spent 16-hour days in the lab, whose entire life revolved around their work with little time for family or other interests.  In the limited number of conferences that I have attended, I have already heard multiple professors begin a Lifetime Achievement Award acceptance speech by thanking their family for “putting up with the fact that I was never home”.  That doesn’t strike me as a fond way to look back at a lifetime of research.

It seems at times that having a life outside of research – spending any waking hours away from work and the lab – can be viewed by some as a form of academic infidelity.  Take, for example, Kathy Weston’s recent article in Science Careers which details the way in which she fell out of love with her research career at a well-respected institution, largely because she fell in love with the rest of her life.  At times it seems that there really is no happy medium – we can either give up our lives for a successful career, or give up our career for a successful life.  And yet I would expect that any rational person (which I hope would include most scientists) would realize that allowing some semblance of work-life balance would not only make life as a professional researcher more pleasant, but also more alluring to students like myself.  As Dr. Weston explains in her recent article, she would likely not have become as disillusioned with her career as a scientist if it had been possible to accommodate her additional roles as a mother, wife, and daughter.

And so it is not surprising that when I speak with my graduate student colleagues, they describe long-term career goals that are likely more modest than those of previous generations – a fulfilling job with a modest income, and the ability to do good science and/or teaching.  But most are quick to add that they are not interested in being that professor – the world-travelling superstar – because most of us do not feel capable of being that type of scientist without giving up everything else in our lives.

So when I finish my PhD, I really don’t know where I will go next.  But I hope that there will be an option that allows me to be fulfilled both professionally and personally, rather than having to choose one over the other.

Travis

Scienceblogging Roundup: July 3-9

While we post lengthy discussions here on Science of Blogging, there are many research updates, news stories, videos, etc. related to science communication that we come across on a daily basis that never grace the pages of the blog. Most of these mini-stories we share with our followers on Twitter, and we encourage those of you with active Twitter accounts to communicate with us there to get real-time updates of all the stuff we are discussing (Follow Peter and/or Follow Travis). For those of you who shy away from Twitter, enjoy the best mini-stories that we came across during the past week below, along with links to the original sources so that you can follow each full story.

  • The effects of churnalism on healthcare news and the public (PLoS Guest Blog)
  • At a recent conference Rebecca Watson was propositioned in an elevator.  She told people, and all hell broke loose.  John Rennie examines the inhuman treatment Rebecca Watson has received this week, and makes the obvious but excellent point that it is wrong to make people pointlessly uncomfortable (Gleaming Retort)
  • Researchers at Johns Hopkins claim they can track public health trends using twitter (Johns Hopkins University)
  • Is it beneficial for obesity researchers to build trust with industry? (Obesity Panacea)
  • Scientific American has unveiled their new blog network, which has an absolutely amazing lineup (including the only lineup of the major science blogging networks that is more than 50% female).  Congrats to former Scibling and Plogster Bora Zivkovic for assembling such a terrific crew, and to all of the bloggers who have joined the network! (Scientific American)
Those are the posts that caught our eye this week.  Have a great weekend!
Travis