Knowledge Translation

Social Media for Scientists: A lecture

Travis and I recently gave a keynote presentation at our alma mater, Queen’s University, on the utility of social media for academics, researchers, and graduate students.

The one-hour presentation, entitled “How to win friends and influence people with social media”, covers the following topics:

1. Why researchers and graduate students use social media

2. The (many) pros and (few) cons of being an academic online

3. How to build a basic strategy for taking your research online

Enjoy the video and please share with any colleagues who might be interested. Feel free to skip to 4:20 for the start of the talk. Looking forward to your comments!

Peter

Time for a new type of peer review?

Over the past year or so I’ve had a number of interesting conversations with people about peer review. It seems as though many people think the current system is broken, although I have yet to hear many suggestions on where to go from here. Sometimes people mention wikis or other web-based means of publication, but many (myself included) worry that there needs to be some form of peer review to assess study quality, catch fraud, and flag other problems (although whether traditional peer review achieves these goals is very much up for debate).

Here are my thoughts on the problem, and what I see as a relatively simple solution.

How do we assess study quality?

Historically, if you had to assess the quality of a study without having the luxury of reading it, you would probably ask two questions:

1. Was it published in a peer reviewed journal?

2. If published, how prestigious is the journal?

While far from perfect, these questions give a general sense of the quality of a piece of research. Something that was published in Nature is likely of higher quality than something published in a small society journal (or at least that’s an assumption that many of us are willing to make when pressed for time), and both of these papers are likely to be of higher quality than a paper that has been rejected from multiple journals and now sits unpublished in a desk drawer.

This quick and dirty assessment of paper quality worked for a long time, since there were a fairly limited number of journals where you could publish research on any given topic. If peer reviewers deemed your work to be of high enough quality and/or impact, then it was accepted for publication. If not, it went unpublished. That served as a simple, albeit crude, way to assess the quality of a study or experiment. If no one was willing to publish your paper, then it must not be of very high quality.

Taken a step further, these questions can also be used to assess the quality of a researcher. Are you publishing many peer-reviewed papers? Are they in top journals? If the answer to either of those questions is no, then the implication would be that your research was of lower quality than someone who answered yes.

There are problems with this line of reasoning (among several obvious problems: not all papers that get rejected are low quality, and not all papers that sneak through the peer review process are high quality), but in general I would say that many people were happy with the system, since it was simple and (at least perceived to be) reasonably effective at keeping low quality studies on the outside and higher quality studies on the inside.

Why don’t these questions work anymore?

There are a lot of new journals popping up. Not just one or two, but hundreds. The new open-access publisher Hindawi publishes more than 120 journals in medicine alone! I get several emails every week publicizing the launch of new journals, most of which are open access and which seem to have varying standards of peer review (some use external reviewers, while others are reviewed only in-house by the editors). The issue now is that if you can afford to pay the open-access publishing fees, no paper is unpublishable. If you submit to enough journals, your paper will almost certainly be accepted eventually, at which point you can say that the study has been published in a peer-reviewed journal. So the first question from above (“Was it published in a peer reviewed journal?”) is no longer a useful way to assess paper quality, since almost anything can eventually be published in some form of peer-reviewed journal.

Related to the issue of journal proliferation, people are becoming less and less devoted to any single journal. Rather than reading a specific journal from cover to cover each month, I have email alerts that send me a message whenever a paper is published on certain topics, regardless of the journal. As a result, papers published in low-impact journals can still get lots of attention, even if few people actually read that journal on a regular basis. In contrast, before online journal access it would have been much less likely that anyone would come across a paper in an obscure journal, no matter how relevant to their work.

Article-level metrics (e.g. assessing the number of citations for a specific paper, rather than the impact of the journal itself) are also reducing the importance of publishing in “prestigious” journals, since people now have more precise ways of determining whether your paper is being cited regularly. This isn’t to say that there are no benefits to publishing in prestigious journals – far from it. But the penalties of publishing in a low-impact journal are now much smaller than they used to be.

Is this a good or bad thing?

This depends on your perspective. If you liked the system where there were few journals and not everything could be published, then this will almost certainly seem like a bad thing. Suddenly we’ve lost one of the simplest (albeit very imperfect) ways of determining the quality of a study, or the quality of a researcher. This means that you could conceivably find a paper (or publish a paper yourself) that proves/supports just about anything, regardless of how poorly the study was conducted, which is a big problem.

Despite these obvious problems, however, I think that the new system could be a good thing… if we are willing to tweak the peer review and publishing process.

A journal that publishes everything

This may sound a bit far-fetched, but hear me out. At this point, almost every paper will get published eventually. What’s worse, it will often be peer reviewed at multiple journals before finally being accepted. This means that much of the time and expense of reviewing and rejecting the paper at higher-tier journals is wasted, since the process doesn’t actually keep the paper from being published – it just bumps it down to a lower-quality journal.

So why not just publish everything that is submitted to a journal? PLoS ONE already does a more restricted version of this – they publish everything they receive that is above a certain threshold of quality (as opposed to other journals, which consider both quality and “impact”, i.e. whether it’s a splashy finding or not). The papers would still be peer reviewed, but the purpose of the review would be to assess study quality using a pre-determined checklist (I’m picturing something similar to, but much more detailed than, the Downs and Black checklist that is sometimes used to assess study quality in systematic reviews).

The checklist would include things like methodology (cross-sectional, intervention, RCT, etc.), number of participants, likelihood of bias, and so on, and the resulting score could range from 0 to 100. The final score and the completed checklist would be published with the paper, along with any additional reviewer comments. The peer review could be done using the current method of simply sending manuscripts out for review, or there could be a central clearinghouse run by the NIH or some such organization. The critical point is that the articles would be peer reviewed, and the quality of the article would be made abundantly clear on the article itself.
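To make the 0-to-100 idea concrete, here’s a minimal sketch of how such a checklist score might be computed. Every item and weight below is hypothetical – invented purely for illustration – and a real instrument would need to be developed and validated the way Downs and Black was:

```python
# Hypothetical quality checklist: every item and weight below is invented
# for illustration; a real instrument would need formal validation.
DESIGN_POINTS = {
    "cross-sectional": 5,
    "cohort": 10,
    "non-randomized intervention": 15,
    "RCT": 25,
}

ITEM_POINTS = {
    "adequately_powered": 20,      # sample size justified a priori
    "low_risk_of_bias": 25,        # blinding, allocation concealment, etc.
    "full_dataset_online": 15,     # aids replication, deters fraud
    "analysis_pre_registered": 15,
}

def quality_score(design, **items):
    """Return a 0-100 quality score; unknown designs or items score 0 points."""
    score = DESIGN_POINTS.get(design, 0)
    score += sum(points for item, points in ITEM_POINTS.items() if items.get(item))
    return score

# A well-run RCT with open data and a registered analysis: 25+20+25+15+15 = 100
print(quality_score("RCT", adequately_powered=True, low_risk_of_bias=True,
                    full_dataset_online=True, analysis_pre_registered=True))

# A small, unregistered cross-sectional study with no open data: 5
print(quality_score("cross-sectional"))
```

The point isn’t these particular numbers – it’s that the scoring rules are explicit, so two reviewers applying the same checklist should land near the same score.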

Using this system, you could publish a paper as soon as it’s received for review – it would simply need to say “pending quality review” or something of that nature. You could also require that all studies put their full dataset online in order to aid replication and hopefully reduce the likelihood of fraud, which isn’t easily caught by traditional peer review anyway (some journals, such as BMC Public Health, already require that authors be willing to share data upon request, although I don’t think there is any mechanism for determining whether this actually takes place). The quality score of a paper could even be amended as authors improve their study by performing additional experiments or analyses.

This would keep the best aspects of peer review – extra eyes and ears providing thoughtful comments on how a paper could be improved – while acknowledging the fact that the current system doesn’t do a tremendous job of quality control (for an excellent look at the shortcomings of traditional peer review, please check out Richard Smith’s paper “Classical Peer Review: An Empty Gun”).

What are the advantages of this system?

I see a number of benefits to adopting this “publish everything” model.

1. This new system would make paper quality exceedingly clear – if I say that wifi causes cancer and can only point to a study that scored 2/100 for quality, while you point to a study that found the opposite and scored 90/100, then we have a better idea of which side to take. If the findings conflict and study quality is similar, then we know that the issue is yet to be settled. Essentially, we would be making it easier to do systematic reviews by assessing study quality when a paper is published, rather than waiting for the systematic review to come along.

2. This system would incentivize high-quality research, rather than “sexy” findings. If I know that my study will be judged on the quality of my methods, rather than the controversy or novelty of the findings, that will help to improve the methodological quality of studies in general.

3. This system would make it acceptable to replicate prior work or publish null findings. We talk a lot about the importance of replication in science, but we also know that it’s really hard to publish a replication study in a prestigious journal (it’s hard to spin a replication study as “cutting edge” since, by definition, it’s already been done by someone else first). It’s the same with null results – we know they’re harder to publish, we know this introduces bias into systematic reviews, and yet there haven’t been many effective ways to fix it (I’m curious whether PLoS ONE publishes more “null” studies than other journals, since it doesn’t concern itself with a study’s impact – anyone with info on this, please let me know).

If papers are judged solely on quality rather than the novelty of the finding, that removes the incentives against performing/writing up replication studies and null results.

What are the downsides of this system?

The biggest downside of this system is that everything, regardless of quality, would be published. So if your assessment of study quality begins and ends with “was it published in a peer reviewed journal?”, then this is obviously going to be a problem. My counter-argument is that we’re already in a situation where you can publish almost anything regardless of quality, so this isn’t really a big change anyway. There would, of course, be a lot of complicating factors (what goes into the quality checklist, who performs the review, how to make sure it’s applied consistently, ways to appeal if something was done incorrectly, etc.), but if the overarching idea has merit then I think the plumbing could be dealt with in turn.

Why isn’t this working already?

As I was writing up this post, James Coyne pointed out that WebmedCentral has most of the characteristics I’m looking for. They publish everything, they do so rapidly, they publish their reviews online, and they include a quality score. However, the quality score seems to be completely arbitrary, and their 10-question quality checklist focuses on the writing (e.g. “Is the quality of the diction satisfactory?”) rather than on the quality of the study methodology, which is the real issue. I think it’s a worthwhile attempt, but I don’t think any modified form of peer review (including post-publication peer review, which has been spectacular in a few specific situations – e.g. Rosie Redfield and #arsenicDNA – and generally underwhelming elsewhere) will really catch on without a true assessment of study methodology published alongside the paper.

So, what do you think?

I’ve been mulling this over for a while, and I’m very curious to hear if anyone thinks it is even remotely plausible. It’s basically a more extreme version of PLoS ONE, which was pretty extreme in its own way when it first came out. Could this idea ever work in practice? If not, why not? And if you think it’s a bad idea, I’m specifically curious to hear how this type of peer review would be worse than the current form, given that we’re already at the point where peer review weeds out less and less material with the creation of every new journal.

I’d love to hear what you think!

Travis

 

To Blog Or Not To Blog?

Dear Professor, to blog or not to blog? This is not a question that you should worry about… for now. You compete successfully in three peer-review arenas: publishing, grant seeking, and tenure & promotion (T&P). These three are interdependent, with success in one begetting success in another. All three are built on the same assumption: that your peers are in the best position to critique your work and thus to make awards of publications, of grants, and of tenure. This isn’t going to change dramatically in the near future, so please don’t fret over all this blogging stuff. Your Klout score is not about to sway your T&P committee.

But note that in Canada, at least, times they are a changin’ (♫)

Canadian research funding is dominated by three federal granting councils (SSHRC, CIHR, and NSERC), all of which are rolling out new funding programs with non-academics on the peer-review committees. As I mentioned in a previous post, some (admittedly only a few) peer-reviewed journals are including non-academics on their editorial boards. Campus-community collaborations are increasingly recognized by T&P committees (especially when the university-based scholar and his/her community partner receive a $1M Community University Research Alliance grant), and there is even a national alliance examining academic reward and incentive structures for community-engaged scholarship.

But you don’t have to worry about that…for now.

Just know that blogs get way more traffic than your peer-reviewed paper ever will.  More >

An RCT to determine the value of blogging

[Photo: Blog World Expo 2008]

When Science of Blogging first went live, one of the first comments we received was from the well-known pseudonymous science blogger Drug Monkey, who said:

One of the mission critical assignments is to figure out how to show real-world impact of blogging. Traffic numbers are insufficient to convince a traditional audience. How to make the determination of impact easier, consistent and valid?

One of the main reasons that Peter and I started Science of Blogging is that we’ve seen it has a lot of value for us personally.  It’s been a useful way to promote our research and network with others, but DM has a point – simply telling someone that your post got X number of hits doesn’t really convey the benefits of blogging.  But I’m not sure that we will ever have an Impact Factor-like metric that allows people to easily quantify just how effective an individual blog is.  We could certainly create one based on some combination of comments, incoming links, and viewers per post (or Google rankings, etc.), but I’m skeptical that it would ever be used in performance reviews or the like.  It would be terrific if it were, but I just don’t see it happening.  If people don’t see value in blog traffic stats, I don’t think they’re going to value any other blog-related metric either.
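For what it’s worth, here’s roughly what such a composite metric might look like – every input and weight below is made up, which is part of why I doubt a score like this would ever carry weight in a performance review:

```python
import math

def blog_impact(comments_per_post, incoming_links, viewers_per_post):
    """A hypothetical composite blog metric; the weights are arbitrary.

    Log-scaling keeps a single viral post from dominating the score,
    much as citation counts are often log-transformed.
    """
    return (2.0 * math.log1p(comments_per_post)
            + 3.0 * math.log1p(incoming_links)
            + 1.0 * math.log1p(viewers_per_post))

print(round(blog_impact(5, 40, 800), 1))  # e.g. a modestly popular blog
```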

Instead, since we are all researchers anyway, I think it makes sense to do the studies to see whether blogging about a topic can help achieve hard outcomes that are already valued.  For example, does blogging about a journal article increase the number of downloads or citations that it receives?  Does it increase the likelihood that health-care professionals will perform an evidence-based treatment, or avoid a non-evidence-based treatment?  Does it help individuals to adopt healthier behaviours?

These are the things that will convince people that blogging is worth the effort.  And since we’re all researchers, it really wouldn’t be that hard to actually start to measure these things.

Here’s an example of an RCT that would be tremendously useful in determining the value of blogging in terms of increasing paper downloads and citations, and that would cost absolutely no money to perform.  Select 30 papers from a wide range of academic disciplines, all of which are at least 5 years old and have fewer than 3 citations (i.e. if they aren’t cited much now, it’s unlikely that they ever will be).  Randomly select 15 of these articles, and ask for volunteers from among the 1000+ active bloggers on Researchblogging.org who are willing to blog about the papers relevant to their discipline.  Then track the number of downloads and citations for the blogged and non-blogged papers over a period of several years, to see if there is a difference between the two groups.
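The randomization step is trivial to implement.  Here’s a quick sketch (the paper IDs are placeholders for whatever 30 articles actually get selected):

```python
import random

# Placeholder IDs standing in for the 30 selected low-citation papers.
papers = [f"paper_{i:02d}" for i in range(1, 31)]

random.seed(2011)  # fixed seed so the allocation is auditable
random.shuffle(papers)

blogged = sorted(papers[:15])    # assigned to volunteer bloggers
controls = sorted(papers[15:])   # left alone

print("Blog these:", blogged)
print("Controls:", controls)
```

After a few years, the blogged and control groups can be compared directly on downloads and citations.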

We could even do something similar using papers in the PLoS journals as a convenience sample – are the PLoS papers that have been discussed on blogs downloaded and cited more often?  This could be biased (I’m assuming that the papers that get blogged about are probably more interesting or novel, which would also make them more likely to be cited), but the data are freely available for anyone with a summer student with time to kill.
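Assuming the PLoS article-level metrics were exported to a spreadsheet with a column flagging whether each paper had been blogged about (the filename and column names here are hypothetical), the comparison itself is only a few lines:

```python
import pandas as pd
from scipy import stats

# Hypothetical export of PLoS article-level metrics; the filename and
# column names are invented for illustration.
df = pd.read_csv("plos_alm.csv")  # columns: doi, blogged (0/1), downloads, citations

blogged = df[df["blogged"] == 1]
control = df[df["blogged"] == 0]

for outcome in ("downloads", "citations"):
    # Non-parametric test, since download and citation counts are heavily skewed.
    stat, p = stats.mannwhitneyu(blogged[outcome], control[outcome])
    print(f"{outcome}: blogged median = {blogged[outcome].median()}, "
          f"control median = {control[outcome].median()}, p = {p:.3f}")
```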

I know there are a million and one qualitative studies that could also be done in this area, and I’ve participated in a few myself.  But lots of people (myself included) like to see hard numbers, and it really wouldn’t be very hard to get them. Seriously, why isn’t the science blogging community doing this?  If I’m just ignorant of the research, please tell me.  And if it really doesn’t exist, then why don’t we get it going?

Travis

Knowledge Dissemination: blogging vs peer review

Travis’ Note: Today’s post is from Dr David J Phipps of ResearchImpact, a Canadian knowledge-exchange network.  The original post can be found on Mobilize This!, the ResearchImpact blog.  Thanks to David for allowing us to cross-post his article here.

In an age of self publishing – including blogs, videos, and other Web-based media – why do we still seek to publish in traditional academic peer-reviewed journals? Vanity.

ResearchImpact-York published two academic papers in 2009. In 2010 we had one in press, two submitted, and one just rejected, for a second time, by the same journal. Since our first post on May 30, 2008, ResearchImpact has published 206 blog posts on Mobilize This!, an average of 6 or 7 each month.

Here’s a comparison of blogging and peer-reviewed publishing: More >

Top 5 Twitter Etiquette Tips

While I claim to be no expert on Twitter etiquette, I would hope that over the past 2 years of tweeting I have picked up at least a few morsels of useful info.

Whenever I’ve tried to explain how Twitter works, I’ve used the analogy of attending a large party with some potentially important guests in attendance.

Tip #1: How to make a Twitter entrance

As is the case with most large parties, you know very few people there. Thus, when you first arrive, you want to introduce yourself to as many people as possible.

But you wouldn’t simply enter through the front door holding a megaphone and announce to everyone present: “HELLO I AM JOHN AND I WOULD LIKE TO TALK TO ALL OF YOU!”

That is, you don’t want to just blindly follow hundreds or even thousands of people without really getting to know any of them, and giving them an opportunity to learn something about you.

The most appropriate method is to introduce yourself to a few people at a time and to move around the room, slowly building contacts. More >