When Science of Blogging first went live, one of the first comments we received was from the well-known pseudonymous science blogger DrugMonkey, who said:

One of the mission critical assignments is to figure out how to show real-world impact of blogging. Traffic numbers are insufficient to convince a traditional audience. How to make the determination of impact easier, consistent and valid?

One of the main reasons that Peter and I started Science of Blogging is that blogging has had a lot of value for us personally.  It's been a useful way to promote our research and to network with others, but DM has a point: simply telling someone that your post got X number of hits doesn't really convey the benefits of blogging.  Still, I'm not sure that we will ever have an Impact Factor-like metric that lets people easily quantify how effective an individual blog is.  We could certainly create one based on some combination of comments, incoming links, and viewers per post (or Google rankings, etc.), but I'm skeptical that it would ever be used in performance reviews or the like.  It would be terrific if it were, but I just don't see it happening.  If people don't see value in blog traffic stats, I don't think they're going to value any other blog-related metric either.

Instead, since we are all researchers anyway, I think it makes sense to do the studies to see whether blogging about a topic can help achieve hard outcomes that are already valued.  For example, does blogging about a journal article increase the number of downloads or citations that it receives?  Does it increase the likelihood that health-care professionals will perform an evidence-based treatment, or avoid a non-evidence-based treatment?  Does it help individuals to adopt healthier behaviours?

These are the outcomes that will convince people that blogging is worth the effort, and it really wouldn't be that hard to start measuring them.

Here’s an example of an RCT that would be tremendously useful in determining whether blogging increases paper downloads and citations, and would cost absolutely no money to perform.  Select 30 papers from a wide range of academic disciplines, all of which are at least 5 years old and have fewer than 3 citations (i.e. if they aren’t cited much now, it’s unlikely that they ever will be).  Randomly assign 15 of these articles to a blogging arm, and ask for volunteers from among the 1000+ active bloggers on ResearchBlogging.org who are willing to blog about the papers relevant to their discipline.  Then track the number of downloads and citations for the blogged and non-blogged papers over several years, to see if there is a difference between the two groups.
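For the sake of concreteness, here’s a minimal sketch in Python of what the random assignment could look like.  The DOIs below are placeholders; a real version would use the actual shortlist of 30 under-cited papers.

```python
import random

# Placeholder identifiers for the 30 under-cited papers;
# in practice these would be the real DOIs from the shortlist.
papers = ["10.1000/paper-{:02d}".format(i) for i in range(1, 31)]

random.seed(42)          # fixed seed so the assignment is reproducible
random.shuffle(papers)

blogged_arm = papers[:15]   # offered to volunteer bloggers to write about
control_arm = papers[15:]   # left alone; only downloads/citations tracked

print("Blog these:", blogged_arm)
print("Controls:  ", control_arm)
```

That's it: the entire "intervention assignment" step of the trial fits in a dozen lines, which is part of why I think there's no cost excuse for not doing this.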

We could even do something similar using papers in the PLoS journals as a convenience sample: are the PLoS papers that have been discussed in blogs downloaded and cited more often than those that haven’t been?  This comparison would be prone to bias (I’m assuming that the papers that get blogged about are probably more interesting or novel, which would also make them more likely to be cited), but the data are freely available for anyone with a summer student with time to kill.
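And once the follow-up numbers are in, the analysis itself is almost trivial.  Here’s a sketch with made-up citation counts (real numbers would come from Web of Science, Scopus, or the PLoS article-level metrics), using a Mann-Whitney U test on the assumption that citation counts will be too skewed for a t-test:

```python
from scipy.stats import mannwhitneyu

# Made-up citation counts after a few years of follow-up;
# 15 blogged papers vs. 15 non-blogged controls.
blogged_citations = [4, 0, 7, 2, 1, 5, 3, 0, 6, 2, 8, 1, 4, 3, 2]
control_citations = [1, 0, 2, 0, 3, 1, 0, 2, 1, 0, 4, 1, 0, 2, 1]

# Citation counts are heavily skewed, so a rank-based test is
# safer than one that assumes normality.
u_stat, p_value = mannwhitneyu(blogged_citations, control_citations,
                               alternative='two-sided')
print("U = {}, p = {:.3f}".format(u_stat, p_value))
```

A summer student could run this in an afternoon; the only real work is collecting the counts.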

I know there are a million and one qualitative studies that could also be done in this area, and I’ve participated in a few myself.  But lots of people (myself included) like to see hard numbers, and it really wouldn’t be very hard to get them. Seriously, why isn’t the science blogging community doing this?  If I’m just ignorant of the research, please tell me.  And if it really doesn’t exist, then why don’t we get it going?

Travis