Travis and I recently gave a keynote presentation at our alma mater, Queen’s University, on the utility of social media for academics, researchers, and graduate students.
The one-hour presentation, entitled “How to win friends and influence people with social media,” covers the following topics:
1. Why researchers and graduate students use social media
2. The (many) pros and (few) cons of being an academic online
3. How to build a basic strategy for taking your research online
Enjoy the video and please share with any colleagues who might be interested. Feel free to skip to 4:20 for the start of the talk. Looking forward to your comments!
Our friend Scicurious has an excellent post today on Scientopia discussing blogging as a form of scientific communication. Specifically, she asks whether it is appropriate to blog about your own research. Scicurious does not blog about her own work, but many people do. Peter and I explicitly started our blog Obesity Panacea to communicate our own research, and other work in our field of study.
Sci’s post stems from a recent session at Experimental Biology on Science Communication (full details of the event available here), and focuses specifically on the issue of self-promotion among academics. Quite frankly, it wasn’t viewed very positively by some of the presenters at the session. From her post, here is her description of their views (my emphasis added):
…academics have two different kinds of self-promotion. One is ok, and one is not. One takes place in the ivory tower, and one involves the dreaded public.
Academic self-promotion is good. Knowing and meeting the right people, staying in touch and making sure they remember who you are. Academic self-promotion is in fact more than good, it’s essential. The sad reality of biomedical science as I know it is that no one will fund your work if they don’t have a clue who you are. By “you”, I don’t mean you personally (though that certainly helps), but who you have trained with, who THAT person trained with, who’s in your department, and what you all have done. Grant people like to call this “evidence of past productivity”, and “training environment”, but what it really means is whether or not you’ve published, and who do you work with that they’ve heard of. There’s a reason we refer to papers as “Smith et al., 2011”, and not by their titles, because by referring to that person we are referring to their body of work, their history, and their expertise.
This means you have to do a lot of self-promotion within academia. We call this “networking”, “presenting at conferences”, “chatting up the seminar speaker at lunch”, and in extreme cases “brown nosing”. This is the “good” kind of self-promotion, the kind that we get a lot of lectures about.
Unfortunately, there’s also the “bad” self-promotion. This is the kind that we are taught to loathe in academia. The kind that involves seeking out the press, trumping up your findings, and becoming Dr. Oz. We are taught from the beginnings of grad school and even before to mistrust people who do this. If your science is good…well you shouldn’t HAVE to say anything. Build it and they will come. If you are trumpeting your science, holding press conferences, giving TED talks, and posing for magazines…scientists get very quick to mistrust your work. This is because behavior like this has a history, and it’s not a good one. Too many times, scientists like this have shot to fame in the public eye, and been shot down just as quickly. Self-promotion outside the ivory tower smacks of ego. The ideal scientist is the one that is famous only among other scientists.
Regular readers will notice that some of the above is reminiscent of a previous post I wrote discussing whether we can trust researchers who give TED talks.
The comments section of the post is very interesting, but I wanted to highlight one comment made by another blogger and researcher named DrugMonkey (my emphasis).
More on point, I do think it a bit of a problem to blog too closely to one’s own area. It just seems like an unfair extra attack on the scientific arguments.
A blogger could easily pursue an agenda that had positive effects on their next grant review by creating a bigger sense of Significance. Could tear down the competition too.
Fighting for Open Access and against GlamourMagification…ditto. Fighting the good fight on work hours, dual careers, geographical immobility….it gets slippery.
I don’t disagree that blogging about your own work could have an impact on the field, but I don’t think it is necessarily nefarious. I responded in the comments on Sci’s post, but thought it would be good to repost the comment here:
I’ve always been surprised by the view that blogging about your own work is somehow not kosher (keeping in mind that I’m a bit biased, since this was explicitly one of the reasons why Peter and I began blogging in the first place, and our field of study lends itself to knowledge translation activities). If it’s ok to do a plenary session, media interview, or editorial/review paper explaining how your work fits into the larger context, I don’t see why it’s offside to post similar things on a blog. Trashing another research group at a conference or in a Letter to the Editor would have at least as large an impact on the field as doing so on a blog, no?
I don’t disagree that this can potentially lead to changes in a paper’s citation count or its impact on the field, but is that by default a bad thing? Is it better for a good paper to languish uncited because it’s in a journal no one reads, or is it better for people to find out about that paper on your blog? If a person were lying about their own research, that’s one thing, but if you are telling people accurate information about your own work as well as other work in your field of research, I don’t see why this should be a problem. And as Sci points out, if you start playing up your work as something it’s not, that is going to bite you in the butt pretty quickly.
The one qualification that I would add is that if you are writing about your own work, I think it’s critical that you let people know it’s your own. Trashing your competitors and praising your own work, without letting your readers know about your conflict of interest, would be absolutely inappropriate. But if you are transparent about your position and potential bias, then I think it’s a completely legitimate form of scientific communication.
So, is it ok to blog about your own work? Does it matter whether you do so under your real name? If it’s not ok to blog about your own work, is it still ok to do media interviews and other forms of more traditional self-promotion?
I’m curious to hear what people think. But first, please do go read Sci’s awesome post.
Over the past year or so I’ve had a number of interesting conversations with people about peer review. It seems as though many people think the current system is broken, although I have yet to hear many suggestions on where to go from here. Sometimes people mention wikis or other web-based means of publication, but many (myself included) worry that there needs to be some form of peer review to assess study quality, detect fraud, and catch other problems (although whether traditional peer review achieves these goals is very much up for debate).
Here are my thoughts on the problem, and what I see as a relatively simple solution.
How do we assess study quality?
Historically, if you had to assess the quality of a study without having the luxury of reading it, you would probably ask two questions:
1. Was it published in a peer reviewed journal?
2. If published, how prestigious is the journal?
While far from perfect, these questions give a general sense of the quality of a piece of research. Something that was published in Nature is likely of higher quality than something published in a small society journal (or at least that’s an assumption that many of us are willing to make when pressed for time), and both of these papers are likely to be of higher quality than a paper that has been rejected from multiple journals and now sits unpublished in a desk drawer.
This quick and dirty assessment of paper quality worked for a long time, since there were a fairly limited number of journals where you could publish research on any given topic. If peer reviewers deemed your work to be of high enough quality and/or impact, then it was accepted for publication. If not, it went unpublished. That served as a simple, albeit crude, way to assess the quality of a study or experiment. If no one was willing to publish your paper, then it must not be of very high quality.
Taken a step further, these questions can also be used to assess the quality of a researcher. Are you publishing many peer-reviewed papers? Are they in top journals? If the answer to either of those questions is no, then the implication would be that your research was of lower quality than someone who answered yes.
There are problems with this line of reasoning (among several obvious problems: not all papers that get rejected are low quality, and not all papers that sneak through the peer review process are high quality), but in general I would say that many people were happy with the system, since it was simple and (at least perceived to be) reasonably effective at keeping low quality studies on the outside and higher quality studies on the inside.
Why don’t these questions work anymore?
There are a lot of new journals popping up. Not just one or two, but hundreds. New Open Access publisher Hindawi publishes more than 120 journals in medicine alone! I get several emails every week publicizing the launch of new journals, most of which are open access, and which seem to have varying standards of peer review (some use external reviewers, others are reviewed only in-house by the editors). The issue now is that if you can afford to pay the open access publishing fees, no paper is unpublishable. If you submit to enough journals, then your paper will almost certainly be accepted eventually, at which point you can say that the study has been published in a peer reviewed journal. So the first question from above (“Was it published in a peer reviewed journal?”) is no longer a useful way to assess paper quality, since almost anything can be published in some form of peer reviewed journal eventually.
Related to the issue of journal proliferation, people are becoming less and less devoted to any single journal. Rather than reading a specific journal from cover to cover each month, I have email alerts that send me a message whenever a paper is published on certain topics, regardless of the journal. As a result, papers published in low-impact journals can still get lots of attention, even if few people actually read that journal on a regular basis. In contrast, before online journal access it would have been much less likely that anyone would come across a paper in an obscure journal, no matter how relevant to their work.
Article-level metrics (e.g. assessing the number of citations for a specific paper, rather than the impact of the journal itself) are also reducing the importance of publishing in “prestigious” journals, since people now have more precise ways of determining whether your paper is being cited regularly. This isn’t to say that there are no benefits to publishing in a prestigious journal – far from it. But the penalties of publishing in a low-impact journal are now much less than they used to be.
Is this a good or bad thing?
This depends on your perspective. If you liked the system where there were few journals and not everything could be published, then this will almost certainly seem like a bad thing. Suddenly we’ve lost one of the simplest (albeit very imperfect) ways of determining the quality of a study, or the quality of a researcher. This means that you could conceivably find a paper (or publish a paper yourself) that proves/supports just about anything, regardless of how poorly the study was conducted, which is a big problem.
Despite these obvious problems, however, I think that the new system could be a good thing… if we are willing to tweak the peer review and publishing process.
A journal that publishes everything
This may sound a bit far-fetched, but hear me out. At this point, almost every paper will get published eventually. What’s worse, it will often get peer reviewed at multiple journals before finally being accepted. This means that much of the time and expense of reviewing/rejecting the paper at higher-tier journals is wasted, since it doesn’t actually keep the paper from being published – it just bumps it down to a lower-quality journal.
So why not just publish everything that is submitted to a journal? PLoS ONE already does a more restricted version of this – they publish everything they receive that is above a certain threshold of quality (as opposed to other journals, which consider both quality and “impact”, e.g. whether it’s a splashy finding or not). The papers would still be peer reviewed, but the purpose of the review would be to assess study quality, using a pre-determined checklist (I’m picturing something similar to, but much more detailed than, the Downs and Black checklist that is sometimes used to assess study quality in systematic reviews).
The checklist would include things like methodology (cross-sectional, intervention, RCT, etc.), number of participants, likelihood of bias, etc., and could range from 0 to 100. The final score and the completed checklist would be published along with the paper, along with any additional reviewer comments. The peer review could be done using the current method of simply sending manuscripts out for review, or there could be a central clearing house run by the NIH or some such organization. The critical point is that the articles would be peer reviewed, and the quality of the article would be made abundantly clear on the article itself.
Using this system, you could publish a paper as soon as it’s received for review – it would simply need to say “pending quality review” or something of that nature. You could also require that all studies put their full dataset online in order to aid replication and hopefully reduce the likelihood of fraud, which isn’t easily caught by traditional peer review anyway (some journals, such as BMC Public Health, already require that authors be willing to share data upon request, although I don’t think there is any mechanism for determining whether this actually takes place). The quality score of a paper could even be amended as authors improve their study by performing additional experiments or analyses.
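To make the proposal concrete, here is a minimal sketch of what such a checklist might look like as a data structure. The field names, weights, and scoring rules below are entirely my own illustrative assumptions (nothing here is a real standard or an existing journal’s rubric); the point is simply to show how a transparent 0–100 score could be computed from pre-determined criteria and published, and later amended, alongside a paper.

```python
from dataclasses import dataclass

# Illustrative weights for study design -- invented for this sketch,
# not drawn from any real quality-assessment instrument.
DESIGN_WEIGHTS = {
    "case report": 5,
    "cross-sectional": 15,
    "cohort": 25,
    "intervention": 30,
    "rct": 40,
}

@dataclass
class QualityChecklist:
    design: str           # e.g. "rct", "cross-sectional"
    n_participants: int   # sample size
    bias_score: int       # reviewer-rated, 0 (high risk) to 30 (low risk)
    data_shared: bool     # full dataset posted online?
    comments: str = ""    # free-text reviewer comments, published too

    def total(self) -> int:
        """Return the 0-100 quality score published alongside the paper."""
        score = DESIGN_WEIGHTS.get(self.design, 0)
        # Crude sample-size credit: one point per 50 participants, capped at 20.
        score += min(self.n_participants // 50, 20)
        score += self.bias_score              # up to 30 points
        score += 10 if self.data_shared else 0
        return min(score, 100)

# A well-run trial scores high; the score could be re-computed if the
# authors later improve the study (e.g. by sharing their data).
paper = QualityChecklist(design="rct", n_participants=400,
                         bias_score=25, data_shared=True)
print(paper.total())  # 40 + 8 + 25 + 10 = 83
```

The real work, of course, would be in agreeing on the fields and weights; the mechanics of publishing a score next to a paper are trivial by comparison.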
This would keep the best aspects of peer review – extra eyes and ears providing thoughtful comments on how a paper could be improved – while acknowledging the fact that the current system doesn’t do a tremendous job of quality control (for an excellent look at the shortcomings of traditional peer review, please check out this paper by Richard Smith titled Classical Peer Review: An Empty Gun).
What are the advantages of this system?
I see a number of benefits to adopting this “publish everything” model.
1. This new system would make paper quality exceedingly clear – if I say that wifi causes cancer, and can only point to a study that scored 2/100 for quality, and you point to a study that found the opposite that scored a 90/100, then we have a better idea of which side to take. If the findings are conflicting and study quality is similar, then we know that the issue is yet to be settled. Essentially we are making it easier to do systematic reviews, by assessing study quality when a paper is published, rather than waiting for the systematic review to come along.
2. This system would incentivize high quality research, rather than “sexy” findings. If I know that my study will be judged on the quality of my methods, rather than the controversy or novelty of the findings, then it will help to improve the methodological quality of studies in general.
3. This system would make it ok to replicate prior work, or publish null findings. We talk a lot about the importance of replication in science, but we also know that it’s really hard to publish a replication study in a prestigious journal (it’s hard to spin a replication study as “cutting edge” since, by definition, it’s already been done by someone else first). It’s the same thing with null results – we know they’re harder to publish, we know this introduces biases into systematic reviews, and yet there haven’t been many effective ways to fix it (I’m curious whether PLoS ONE publishes more “null” studies than other journals, since it doesn’t concern itself with a study’s impact – anyone with info on this please let me know).
If papers are judged solely on quality rather than the novelty of the finding, that removes the incentives against performing/writing up replication studies and null results.
What are the downsides of this system?
The biggest downside of this system is that everything, regardless of quality, would be published. So if your assessment of study quality begins and ends with “was it published in a peer reviewed journal?”, then this is obviously going to be a problem. My counter-argument, of course, is that we’re already in a situation where you can publish almost anything regardless of quality, so that’s not really a big change anyway. There would certainly be a lot of complicating factors (what goes into the quality checklist, who performs the review, how to make sure it’s applied consistently, ways to appeal if something was done incorrectly, etc.), but if the over-arching idea has merit then I think the plumbing could be dealt with in turn.
Why isn’t this working already?
As I was writing up this post, James Coyne pointed out that WebmedCentral has most of the characteristics I’m looking for. They publish everything, they do so rapidly, they publish their reviews online, and they include a quality score. However, the quality score seems to be completely arbitrary, and their 10-question quality checklist focuses on the writing (e.g. “Is the quality of the diction satisfactory?”) rather than the quality of the study methodology, which is the real issue. I think it’s a worthwhile attempt, but I don’t think any modified form of peer review (including post-publication peer review, which has been spectacular in a few specific situations – e.g. Rosie Redfield and #arsenicDNA – and generally underwhelming elsewhere) will really catch on without a true assessment of study methodology published alongside the paper.
So, what do you think?
I’ve been mulling this over in my head for a while, and I’m very curious to hear if anyone thinks this is even remotely plausible. It’s basically a more extreme version of PLoS ONE, which was pretty extreme in its own way when it first came out. Could this idea ever work in practice? If not, why not? And specifically, if you think it’s a bad idea, I’m curious to hear how this type of peer review would be worse than the current form, given that we’re already at the point where peer review is weeding out less and less material with the creation of every new journal.
I’d love to hear what you think!
I came across an interesting article this morning in Slate questioning recent papers on the “contagiousness” of factors ranging from obesity to divorce. The papers were published in top journals like the New England Journal of Medicine (I wrote this enthusiastic blog post about the findings back in 2008) and have generated a wide range of media attention, including the TED talk which I’ve embedded below.
As far as I know, the questions surrounding these papers have been entirely statistical (as opposed to ethical) in nature. Below is the abstract of a critique published in the journal Statistics, Politics and Policy earlier this year, which nicely outlines the problem of having a high-profile paper with a poor stats section:
The chronic widespread misuse of statistics is usually inadvertent, not intentional. We find cautionary examples in a series of recent papers by Christakis and Fowler that advance statistical arguments for the transmission via social networks of various personal characteristics, including obesity, smoking cessation, happiness, and loneliness. Those papers also assert that such influence extends to three degrees of separation in social networks. We shall show that these conclusions do not follow from Christakis and Fowler’s statistical analyses. In fact, their studies even provide some evidence against the existence of such transmission. The errors that we expose arose, in part, because the assumptions behind the statistical procedures used were insufficiently examined, not only by the authors, but also by the reviewers. Our examples are instructive because the practitioners are highly reputed, their results have received enormous popular attention, and the journals that published their studies are among the most respected in the world. An educational bonus emerges from the difficulty we report in getting our critique published. We discuss the relevance of this episode to understanding statistical literacy and the role of scientific review, as well as to reforming statistics education.
I should mention that frankly this stats discussion is well over my head, and it may be that these critiques are thoroughly off base – it took the authors a long time and multiple attempts to get this article published, which could be a sign that there is little weight to the arguments, although it could also be a sign that it’s just hard to get this sort of thing published (our friend Yoni Freedhoff detailed the whole process a few weeks ago, which is where I first heard about these new issues). The point being that these papers are among the most high-profile studies published in my field of research in the past few years, and yet people are now saying things like this:
“[Christakis and Fowler’s] errors are in some places so egregious that a critique of their work cannot exist without also calling into question the rigor of review process.”
When I was reading the Slate piece this morning it got me thinking about other recent scientific findings which have been presented in “big idea” forums like TED only to have important questions raised about their veracity.
For example, earlier this year Felisa Wolfe-Simon and other NASA researchers published a paper in Science claiming to have found bacteria which could use arsenic rather than phosphorous as the backbone of its DNA. Shortly thereafter Rosie Redfield wrote a scathing review of the paper on her blog, spawning a massive backlash against the paper in the field as a whole. This backlash prompted Dr Wolfe-Simon and her co-authors to retreat from the media and argue that:
Any discourse will have to be peer-reviewed in the same manner as our paper was, and go through a vetting process so that all discussion is properly moderated.
And yet just a few months later David Dobbs reported that Dr Wolfe-Simon had not only presented her findings at TED, but had also reiterated her paper’s highly disputed conclusions. Here is what David had to say in March:
Apparently the peer-reviewed realm now includes the high-profile TED conference, where on Wednesday Wolfe-Simon talked about her paper. Neither video nor transcript is released as yet [Travis’ Note: I still haven’t seen them online, but please let me know if someone else has seen them], but accounts suggest she discussed her controversial discovery outside the realm of peer review — in fact, in the most public venue imaginable — and one anonymous source I spoke to today said she repeated the paper’s explicit and disputed claims about arsenic incorporating DNA.
And then there is the case of Marc Hauser, popular author and Harvard researcher who has been under investigation for academic misconduct for the past year, and whose ultimate fate (as well as his guilt or innocence) remains very unclear. In fairness, he hasn’t presented at TED (although Slate called it “TED-level stuff”), but his popular book Moral Minds certainly places him into the “big idea” category of scientist.
The fact that these eminent “big idea” researchers seem to keep making questionable moral/ethical/academic misjudgments is distressing for a few reasons. First and foremost, these “big idea” scientists are really the stewards for all of us. TED talks, popular books – these are the way that many non-scientists find out about what it is that we do, and why it matters. If the people doing those talks and writing those books turn out to be sketchy, then it makes all of us look bad.
But it also worries me because this is not good for science. I had to stop myself from writing this is not the way that science is done, because that just seems like a cliché in a blog post mentioning Rosie Redfield and #arsenicDNA. But for science to be done there has to be room for genuine debate, and TED talks don’t seem to have much of that… they seem more like a monologue where you present your ideas as fact. If Dr Wolfe-Simon’s talk had been a debate between herself and one of her critics, then I think it would have been far more useful to the advancement of science. And while I realize that advancing science per se is not the purpose of a TED talk, I can’t help but feel that there is something fundamentally wrong about presenting your shiny new finding as fact to a large and very influential audience when it is still eminently unclear whether the finding is legitimate. It’s not that TED talks are wrong, but that it’s dangerous to get too far ahead of the science, or to present something as fact when there remain unresolved questions. I have been unable to find a video or transcript of Dr Wolfe-Simon’s talk, so I could be off base here, but it certainly seems distressing on the face of it.
The final thing that I find personally distressing about these issues is that I love listening to TED talks and reading books about big ideas. And to be honest, I would love to be one of those people who gives those sorts of compelling talks that so clearly demonstrate why an idea or piece of research has meaning outside of the lab. It worries me that other people who share that goal seem to be spreading a message which may not actually represent the “truth”… it makes me nervous about knowledge translation in general if those who are among the most successful are also those pushing the most questionable findings.
Most Science Content Is Available to a Very Small Number of People
Think of all the presentations that researchers give on a regular basis. Even grad students like myself give several talks and/or lectures every year, each of which is available only to the handful of people who attend the talk itself. Why not use social media to spread that content to a far greater number of people, with little or no additional effort? Below are three tools that I have found especially useful for sharing previously produced content with the world at large.
Earlier this year, well-known obesity researcher Angelo Tremblay gave a free lecture at the Children’s Hospital of Eastern Ontario in Ottawa. He was kind enough to allow me to record the audio of his talk (using my iPod and a Belkin TuneTalk), and then provided me with the slides of his presentation.
Following Dr Tremblay’s talk I published his slides on Slideshare, a free site for hosting PowerPoint presentations (it’s basically YouTube for slideshows). Using their simple “Slidecast” feature, I then uploaded the audio of Dr Tremblay’s talk and synced the audio with the slides. Since it was an hour-long talk, I broke the presentation into 4 separate sections, so that a person could watch any individual section in 10-15 minutes. Finally, I wrote up a short blog post on Obesity Panacea outlining each section of the presentation, and embedded the Slidecast so that people could watch it on the blog (I’ve embedded one of the presentations below as well to give a sense of what the Slidecasts look and sound like).
Dr Tremblay’s original talk had a very good turnout for a Lunch & Learn event – I’d say somewhere between 50 and 100 people. But that number is dwarfed by the number of people who have since accessed his content online. To date the 4 blog posts on Obesity Panacea containing Slidecasts have been viewed more than 2,500 times (2,100 unique views), and the individual Slidecasts have been played over 1,900 times (averaging just shy of 500 views per Slidecast).
Even if we assume that the same 500 people watched each Slidecast, that’s still at least 5x the number of people who were at Dr Tremblay’s original presentation in Ottawa. And this is content that Dr Tremblay had already produced – all I had to do was put it online so that people could access it. And it cost a total of ~$60 for a recording device that attaches to an iPod that I already have (a laptop and a cheap mic would also have done the trick).
I have done other Slidecasts myself, uploading a conference presentation and simply recording the audio while I was rehearsing my talk a few days before the event. It added maybe 20 minutes to my conference preparation, but it allowed me to share the content with far more people than would ever see me at the conference itself. Endocrinologist Yannis Guerra has done something similar, uploading the slides from a talk he gave at his hospital on the health impact of sedentary behaviour. Since the presentation is simply the slides with no audio, it would have literally taken just 2-3 minutes to create his account and upload that presentation, which has been seen nearly 100 times in its first few days (which would be a phenomenal turnout at most hospital seminars that I’ve been to!). That’s the great thing about Slideshare – it is extremely effective in promoting your work with very little additional effort.
Podcasts are similar to Slideshare in that they allow you to spread content that you have already produced. For example, I uploaded the audio from Dr Tremblay’s talk to the Obesity Panacea podcast, which has since received another 1,000 downloads. Podcasts are also a great excuse to meet new researchers. For example, at the recent Canadian National Obesity Summit I recorded conversations with a number of researchers, which I will be putting up on the Obesity Panacea podcast in the coming weeks. Most of the conversations I recorded were from poster sessions, with me simply talking to people about their projects (in this way podcasts and other forms of interviews can also be a great way to network with other researchers). Once those conversations are online, they will be available to a far greater number of people than could ever have attended the poster sessions themselves.
Video is another tool that is both easy and obvious. For example, a few years ago Peter recorded a lecture that he gave to an undergraduate physiology course, which has since been viewed several hundred times. This is content that he had already produced, all he did was record it so that he could share it online. Here is a short section of that video:
We have recently done something similar here in Ottawa, when we hosted a debate last week on the impact of food and exercise on body weight. There were roughly 150 people at the debate itself, with another ~80 people watching the debate live through Justin.tv, a free live-streaming website which allows you to embed live video on any website (the blog post with the embedded video can be seen here). The live-stream video was obtained using this $88 webcam (roughly 1/4 the cost of the “light refreshments” that we purchased for the event) and the audio was recorded using a Skype conference call microphone that I borrowed from my lab. The webcam recording of the debate has now been seen more than 350 times, well more than twice the number of people who attended the debate itself. And we are in the process of uploading a higher resolution version of the video which will be more suitable for people to share and embed on other websites.
For less than $100 (just 1/10 of the budget for the event as a whole) we were able to double the number of people who have accessed the debate.
Are there any tools that you have found especially useful in promoting scientific content online?
To get future posts delivered directly to your email inbox or to your RSS reader, be sure to subscribe to Science of Blogging.
Dear Professor, To blog or not to blog? This is not a question that you should worry about…for now. You compete successfully in three peer review arenas: publishing, grant seeking and tenure & promotion (T&P). These three are interdependent, with success in one begetting success in another. The three are built on the same assumption: that your peers are in the best position to critique and thus make awards of publications, of grants and of tenure. This isn’t going to change dramatically in the near future, so please don’t fret over all this blogging stuff. Your Klout score is not about to sway your T&P committee.
But note that in Canada, at least, times they are a changin’ (♫)
Canadian research funding is dominated by three federal granting councils (SSHRC, CIHR and NSERC), all of which are rolling out new funding programs with non-academics on the peer review committees. As I mentioned in a previous blog post, some (admittedly only a few) peer-reviewed journals are including non-academics on their editorial boards. Campus-community collaborations are increasingly recognized by T&P committees (especially when the university-based scholar and his/her community partner receive a $1M Community University Research Alliance), and there is even a national alliance to examine academic reward and incentive structures for community-engaged scholarship.
But you don’t have to worry about that…for now.
Just know that blogs get way more traffic than your peer-reviewed paper ever will.
Travis’ Note: Today’s guest post is from the epic pseudonymous blogger Scicurious. She is one of the founding members of the Scientopia network of science bloggers, where you can find her extremely interesting and popular blog Neurotic Physiology. She has written previous Science of Blogging guest posts on how to start a science blog, and issues to consider when deciding whether to blog under a pseudonym.
A few weeks ago, Sci had an opportunity to blog the Experimental Biology 2011 Conference (my posts on it are here). I’ll admit, I volunteered, but the organizers were wonderfully welcoming of a young blogger, and very pleased to have me on board. And then Travis and Peter let me know that they were going to blog an upcoming conference, and asked for tips. And TIPS. Boy, do I have TIPS! And they asked me to post them. So below you will find the stuff that I did, along with various tips on how to keep up your energy, and how to make the scientists LOVE your blog. But keep in mind: these tips apply best to scientists who are blogging conferences in their field. To journalists, not so much.
Before and during the conference, I did the following:
1) Went through the abstracts and found stuff I liked. I narrowed it down by cool titles and then looked for abstracts that were good. I made a real effort to get far outside my field, but stick within your own field if that’s what you prefer.
2) Emailed the contact people for each abstract (4 days before the meeting), asking them if they’d like their work at the conference to be blogged. In the initial email I made a point to include my academic position and university, as well as links to my blog. Each email was specific to their abstract, making it clear that I had read their abstract and was interested at more than a cursory level. Technically, this isn’t required; if something is presented at a conference, it’s public. But I know that many scientists don’t really feel that way, and I would much rather make friends than enemies.
3) When they got back to me (and they ALL did, no one said no, but everyone also said they’d scoped me out on my blog and on Google beforehand), I set up a time to meet with the presenter. Often the PI was present, especially if the student was younger.
4) I met with each group for 30 minutes. During that time I asked about their work, took copious notes, and also had them run through the presentation. I also took care to ask if there was anything in particular they wanted to emphasize. A couple of times I had to get an interpreter (wonderful presenter from Brazil, she spoke no English, and I no Portuguese. But we got through it! And her science is awesome.).
5) I then went back, sat my butt down, ate many cookies, and wrote it up. Before I posted it I sent it off to the authors of the study for approval, with a stated deadline of 12 hours (I told them during the interview when they would receive the post, and when I would need their edits back). Don’t worry, they’ll get back to you.
6) When the post went live (with their edits, everyone sent at least minor edits), I sent them a link to it with a thank you note. I have since ended up in several school and department newsletters and on some laboratory websites!
Tips for getting PIs and shy scientists to warm to you.
One of the mission-critical assignments is to figure out how to show the real-world impact of blogging. Traffic numbers are insufficient to convince a traditional audience. How can we make the determination of impact easier, more consistent, and valid?
One of the main reasons that Peter and I started Science of Blogging was because we’ve seen that it has a lot of value for us personally. It’s been a useful way to promote our research and network with others, but DM has a point – simply telling someone that your post got X number of hits doesn’t really convey the benefits of blogging. But I’m not sure that we will ever have an Impact Factor-like metric that will allow people to easily quantify just how effective an individual blog is. We could certainly create one based on some combination of comments, incoming links, and viewers per post (or Google rankings, etc), but I’m skeptical that it would ever be used in performance reviews or the like. It would be terrific if it did, but I just don’t see it happening. If people don’t see value in blog traffic stats, I don’t think they’re going to value any other blog-related metric either.
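To make the idea concrete, here is a minimal sketch of what such a composite metric might look like. Everything here is hypothetical: the function name, the inputs, and especially the weights are illustrative choices, not a validated measure of blog impact.

```python
def blog_impact_score(comments, incoming_links, views, posts):
    """Toy composite blog metric: per-post engagement, weighted so that
    incoming links count more than comments, and comments more than raw views.
    The weights (2.0, 5.0, 0.01) are arbitrary illustrations only."""
    if posts == 0:
        return 0.0
    per_post = (comments * 2.0 + incoming_links * 5.0 + views * 0.01) / posts
    return round(per_post, 2)

# Hypothetical blog: 120 comments, 30 incoming links, 40,000 views over 50 posts
score = blog_impact_score(120, 30, 40000, 50)
```

The design choice of dividing by the number of posts keeps the score from simply rewarding volume, which is the same problem per-article citation metrics try to solve; whether any such formula would ever be trusted by a T&P committee is, as argued above, a separate question.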
Instead, since we are all researchers anyway, I think it makes sense to do the studies to see whether blogging about a topic can help achieve hard outcomes that are already valued. For example, does blogging about a journal article increase the number of downloads or citations that it receives? Does it increase the likelihood that health-care professionals will perform an evidence-based treatment, or avoid a non-evidence-based treatment? Does it help individuals to adopt healthier behaviours?
These are the things that will convince people that blogging is worth the effort. And since we’re all researchers, it really wouldn’t be that hard to actually start to measure these things.
Here’s an example of an RCT that would be tremendously useful in determining the value of blogging in terms of increasing paper downloads and citations, and would cost absolutely no money to perform. Select 30 papers from a wide range of academic disciplines, all of which are at least 5 years old and have fewer than 3 citations (i.e. if they aren’t cited much now, it’s unlikely that they ever will be). Randomly select 15 of these articles, and ask for volunteers from among the 1000+ active bloggers on Researchblogging.org who are willing to blog about the papers relevant to their discipline. Then, track the number of downloads and citations for the blogged and non-blogged papers over a period of several years, to see if there is a difference between the two groups.
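The randomization step above is the only part that needs any tooling at all, and it can be sketched in a few lines. This is a toy illustration, assuming hypothetical paper identifiers; a fixed seed makes the assignment reproducible so it could be published alongside the study.

```python
import random

def assign_papers(paper_ids, n_blogged=15, seed=42):
    """Randomly split a list of papers into a 'blogged' treatment group
    and a 'control' group for the proposed RCT. Seeded for reproducibility."""
    rng = random.Random(seed)
    shuffled = list(paper_ids)
    rng.shuffle(shuffled)
    return {
        "blogged": sorted(shuffled[:n_blogged]),
        "control": sorted(shuffled[n_blogged:]),
    }

# Hypothetical pool of 30 under-cited papers
papers = [f"paper_{i:02d}" for i in range(30)]
groups = assign_papers(papers)
```

From there, the outcome measure is just the difference in mean downloads and citations between `groups["blogged"]` and `groups["control"]` after a few years of follow-up.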
We could even do something similar using papers in the PLoS journals as a convenience sample – are the PLoS papers that have been discussed in blogs downloaded and cited more often? This could be potentially biased (I’m assuming that the papers that get blogged about are probably more interesting or novel, which would make them more likely to get cited as well), but the data is freely available to anyone with a summer student who has time to kill.
I know there are a million and one qualitative studies that could also be done in this area, and I’ve participated in a few myself. But lots of people (myself included) like to see hard numbers, and it really wouldn’t be very hard to get them. Seriously, why isn’t the science blogging community doing this? If I’m just ignorant of the research, please tell me. And if it really doesn’t exist, then why don’t we get it going?
Travis’ Note: Today’s post is from Dr David J Phipps of ResearchImpact, a Canadian knowledge-exchange network. The original post can be found on Mobilize This!, the ResearchImpact blog. Thanks to David for allowing us to cross-post his article here.
In an age of self publishing – including blogs, videos, and other Web-based media – why do we still seek to publish in traditional academic peer-reviewed journals? Vanity.
ResearchImpact-York published two academic papers in 2009. In 2010 we had one in press, two submitted, and one just rejected for a second time, from the same journal. Since our first post on May 30, 2008, ResearchImpact has published 206 blogs on Mobilize This!, an average of 6 or 7 each month.
Here’s a comparison of blogging and peer-reviewed publishing: More >
Whenever I’ve tried to explain how Twitter works, I use the analogy of attending a large party with some potentially important guests in attendance.
Tip #1: How to make a Twitter entrance
As is the case with large parties, you know very few people there. Thus, when you first get there, you want to introduce yourself to as many people as possible.
But you wouldn’t simply enter through the front door holding a megaphone and announce to everyone present: “HELLO I AM JOHN AND I WOULD LIKE TO TALK TO ALL OF YOU!”
That is, you don’t want to just blindly follow hundreds or even thousands of people without really getting to know any of them, and giving them an opportunity to learn something about you.
The most appropriate method is to introduce yourself to a few people at a time, and to move around the room, slowly building contacts.