Travis and I recently gave a keynote presentation at our alma mater, Queen’s University, on the utility of social media for academics, researchers, and graduate students.
The one-hour presentation, entitled “How to win friends and influence people with social media,” covers the following topics:
1. Why researchers and graduate students use social media
2. The (many) pros and (few) cons of being an academic online
3. How to build a basic strategy for taking your research online
Enjoy the video and please share with any colleagues who might be interested. Feel free to skip to 4:20 for the start of the talk. Looking forward to your comments!
A few weeks ago I was approached by the folks at Petridish.org, asking if I’d be interested in doing a post on their science crowdfunding site. I’m fascinated by crowdfunding and think that it has a huge amount of potential, both as a means of funding science, and as a means of incentivizing science communication – in a world where the public funds your research directly, you have much more incentive to communicate with them about your work. Since I didn’t know much about Petridish.org at the time, I asked if one of their founders would be interested in doing an interview with me instead. Below is that interview, with Petridish.org co-founder Matt Salzberg. More on Matt can be found at the bottom of this post.
I have yet to try crowdfunding myself, but if you have any experience with Petridish or any other crowdfunding platform (or thoughts on which platform(s) will eventually succeed and pull away from the rest of the pack) I’d love to hear about it in the comments.
1. Simple question: what is crowdfunding?
Crowdfunding reflects the power of the internet to pool the collective actions of many small participants to make a larger project happen. In the case of Petridish.org, we help scientists and researchers raise funding for their projects from people who are passionate about their work.
2. Could you describe what Petridish.org is, and how it works?
Petridish.org is the largest crowdfunding website devoted entirely to science and research funding. Researchers post materials about a project they want to launch, and contributors on our site can donate to those projects in exchange for rewards and other tokens of appreciation.
Typically, researchers set a goal and a deadline by which they hope to raise the money. If they reach the goal by the deadline, then the project is successfully funded. If they don’t reach the goal, no money changes hands.
3. What gave you the idea for starting Petridish.org?
Before starting Petridish.org, I worked at a large venture capital firm and became interested in the power of the internet to transform the way things were funded. One website, Kickstarter, had become very successful in raising money for art and creative projects. I wanted to bring that power to science funding, which is an interest area of mine and an area that desperately needs new funding models.
4. Can you give a rough idea of the % of projects that have been funded so far, and how much funding has been received by researchers (either the average amount or the total across all projects)?
We’ve done significantly in excess of $100,000 of transaction volume since launching earlier this year. 80% of all of our projects have been successfully funded.
5. There have been a number of crowdfunding science websites in the past few years (Microryza, SciFlies.org, Science Donors, MyProjects, Open Genius, #SciFund Challenge, etc), many of which seem to have trouble taking off. What makes Petridish.org different and/or more likely to succeed?
There are a few things that make us different. First, we’ve focused on building a high quality, fun web experience for contributors. Aside from our focus on design, we hand select only the most interesting and impactful projects to feature on our site, including those with great videos, pictures, and rewards. Many of the existing sites focus exclusively on the experience for the scientist raising money – we cater to both sides of the marketplace.
Second, we only do “all or nothing” funding. We do this because it protects the scientist from having to do a project without sufficient funding and it protects contributors who wouldn’t want to donate to a project that doesn’t have enough funding to go through. It also encourages people to really pull together to promote a project, since a project won’t happen without enlisting the support of others as well.
6. The crowdfunding science initiative that I’ve been most impressed with is the #SciFund Challenge, since they’ve partnered with Rockethub to bring their projects to a wider audience, and also done a lot of work to promote science communication. How are you bringing people to Petridish.org to see the projects that are listed there?
We do extensive web marketing activities to help the projects get funded. We have presences on Facebook, Twitter, and Pinterest, which helps drive people to our projects. We also have a weekly newsletter that sends the projects to thousands of our supporters. And we’re building relationships with larger media companies for regular press coverage. The all or nothing mechanism also really incentivizes people to share the projects and enlist their friends to help the project get funded.
A few weeks ago there were a number of interesting posts floating around the web discussing the appropriateness of science blogging as a form of self-promotion (see this post by Scicurious for an excellent backgrounder). This is an issue that I’ve spent a lot of time thinking about – communicating with people about our own research wasn’t the only reason that Peter and I got into blogging, but it was a very big part of it. And it’s one of the main reasons why I advocate for researchers to get involved in social media.
Out of curiosity I put up a poll asking people whether they felt it was ok for a person to blog about their own research, and today I thought I would share those results (sorry for the longer than expected delay getting them posted).
The survey asked “Is it ok to blog about your own research?”
The available options were:
2. Yes, but only if done using your real name
We had 34 responses, which makes this an <sarcasm> extremely representative sample </sarcasm>. What’s more, I have no idea of the background of those who responded, although they are likely people who follow me, Scicurious, or Mr Epidemiology on Twitter. Here is how those responses broke down.
11 people (32.4% of respondents) stated unequivocally that yes, it is ok to blog about your own research. I presume that these people don’t care whether you identify yourself as the study’s author in the post (something which would be impossible should you choose to use a pseudonym). All 23 other respondents (67.6% of those who completed the survey) said that it is ok to blog about your own work, but only if you use your real name. This is how I voted personally since I think this would prevent people from smearing their competition without disclosing their own conflict of interest, which is a concern that has been voiced by Drug Monkey. Given that comment (and other similar comments that I’ve heard in the past), I was surprised that not a single person said that it was completely inappropriate to blog about your own work under any circumstances (it might be worth noting that the vote breakdown does seem to be in general agreement with the bulk of the comments on Sci’s original post).
Finally, a small minority of respondents added a few additional comments after the survey. Here are a few:
If using a pseudonym you need to create the links; otherwise you are being a bit dishonest. Don’t like all this talk of ‘trashing your competitors’ though – that smacks of your field not doing actual science but just competing for funding…
I have a few rules I impose on myself: I only blog about specifics after the paper’s published. I don’t do anything that I think might hint of “prior publication.”
[the above pretty much perfectly describes the approach that Peter and I have taken at Obesity Panacea... I can't imagine that our current or previous supervisors would have been so indulgent with our blogging if we were scooping our own publications!]
I’ve never seen a difference between a blog post explaining your paper and a conference presentation, other than that the content of your presentation will likely be ephemeral and unrecorded, rather than available indefinitely. If your research paints a contrasting picture to that of a colleague, as long as your data is available for comparison (i.e. published), then it shouldn’t matter where you discuss it.
Is there anyone out there who thinks it is inappropriate to blog about your own research under any circumstances? Let me know why in the comments below.
Thanks to everyone who took part in the survey!
Our friend Scicurious has an excellent post today on Scientopia discussing blogging as a form of scientific communication. Specifically, she asks whether it is appropriate to blog about your own research. Scicurious does not blog about her own work, but many people do. Peter and I explicitly started our blog Obesity Panacea to communicate our own research, and other work in our field of study.
Sci’s post stems from a recent session at Experimental Biology on Science Communication (full details of the event available here), and focuses specifically on the issue of self-promotion among academics. Quite frankly, it wasn’t viewed very positively by some of the presenters at the session. From her post, here is her description of their views (my emphasis added):
…academics have two different kinds of self-promotion. One is ok, and one is not. One takes place in the ivory tower, and one involves the dreaded public.
Academic self-promotion is good. Knowing and meeting the right people, staying in touch and making sure they remember who you are. Academic self-promotion is in fact more than good, it’s essential. The sad reality of biomedical science as I know it is that no one will fund your work if they don’t have a clue who you are. By “you”, I don’t mean you personally (though that certainly helps), but who you have trained with, who THAT person trained with, who’s in your department, and what you all have done. Grant people like to call this “evidence of past productivity”, and “training environment”, but what it really means is whether or not you’ve published, and who do you work with that they’ve heard of. There’s a reason we refer to papers as “Smith et al., 2011”, and not by their titles, because by referring to that person we are referring to their body of work, their history, and their expertise.
This means you have to do a lot of self-promotion within academia. We call this “networking”, “presenting at conferences”, “chatting up the seminar speaker at lunch”, and in extreme cases “brown nosing”. This is the “good” kind of self-promotion, the kind that we get a lot of lectures about.
Unfortunately, there’s also the “bad” self-promotion. This is the kind that we are taught to loathe in academia. The kind that involves seeking out the press, trumping up your findings, and becoming Dr. Oz. We are taught from the beginnings of grad school and even before to mistrust people who do this. If your science is good…well you shouldn’t HAVE to say anything. Build it and they will come. If you are trumpeting your science, holding press conferences, giving TED talks, and posing for magazines…scientists get very quick to mistrust your work. This is because behavior like this has a history, and it’s not a good one. Too many times, scientists like this have shot to fame in the public eye, and been shot down just as quickly. Self-promotion outside the ivory tower smacks of ego. The ideal scientist is the one that is famous only among other scientists.
Regular readers will notice that some of the above is reminiscent of a previous post I wrote discussing whether we can trust researchers who give TED talks.
The comments section of the post is very interesting, but I wanted to highlight one comment made by another blogger and researcher named Drug Monkey (my emphasis).
More on point, I do think it a bit of a problem to blog too closely to one’s own area. It just seems like an unfair extra attack on the scientific arguments.
A blogger could easily pursue an agenda that had positive effects on their next grant review by creating a bigger sense of Significance. Could tear down the competition too.
Fighting for Open Access and against GlamourMagification…ditto. Fighting the good fight on work hours, dual careers, geographical immobility….it gets slippery.
I don’t disagree that blogging about your own work could have an impact on the field, but I don’t think it is necessarily nefarious. I responded in the comments on Sci’s post, but thought it would be good to repost the comment here:
I’ve always been surprised by the view that blogging about your own work is somehow not Kosher (keeping in mind that I’m a bit biased since this was explicitly one of the reasons why Peter and I began blogging in the first place, and our field of study lends itself to knowledge translation activities). If it’s ok to do a plenary session or media interview or editorial/review paper explaining how your work fits into the larger context, I don’t see why it’s off-side to post similar things on a blog. Trashing another research group at a conference or in a Letter to the Editor would have at least as large an impact on the field as doing so on a blog, no?
I don’t disagree that this can potentially lead to changes in a paper’s citation count or its impact on the field, but is that by default a bad thing? Is it better for a good paper to languish uncited because it’s in a journal no one reads, or is it better for people to find out about that paper on your blog? If a person were lying about their own research that’s one thing, but if you are telling people accurate information about your own work as well as other work in your field of research, I don’t see why this should be a problem. And as Sci points out, if you start playing up your work as something it’s not, that is going to bite you in the butt pretty quickly.
The one qualification that I would add is that if you are writing about your own work, I think it’s critical that you let people know it’s your own. Trashing your competitors and praising your own work, without letting your readers know about your conflict of interest, would be absolutely inappropriate. But if you are transparent about your position and potential bias, then I think it’s a completely legitimate form of scientific communication.
So, is it ok to blog about your own work? Does it matter whether you do so under your real name? If it’s not ok to blog about your own work, is it still ok to do media interviews and other forms of more traditional self-promotion?
I’m curious to hear what people think. But first, please do go read Sci’s awesome post.
Over the past year or so I’ve had a number of interesting conversations with people about peer review. It seems as though many people think the current system is broken, although I have yet to hear many suggestions on where to go from here. Sometimes people mention wikis or other web-based means of publication, but many (myself included) worry that there needs to be some form of peer review to assess study quality, fraud, and other things that are undesirable (although whether traditional peer review achieves these goals is very much up for debate).
Here are my thoughts on the problem, and what I see as a relatively simple solution.
How do we assess study quality?
Historically, if you had to assess the quality of a study without having the luxury of reading it, you would probably ask two questions:
1. Was it published in a peer reviewed journal?
2. If published, how prestigious is the journal?
While far from perfect, these questions give a general sense of the quality of a piece of research. Something that was published in Nature is likely of higher quality than something published in a small society journal (or at least that’s an assumption that many of us are willing to make when pressed for time), and both of these papers are likely to be of higher quality than a paper that has been rejected from multiple journals and now sits unpublished in a desk drawer.
This quick and dirty assessment of paper quality worked for a long time, since there were a fairly limited number of journals where you could publish research on any given topic. If peer reviewers deemed your work to be of high enough quality and/or impact, then it was accepted for publication. If not, it went unpublished. That served as a simple, albeit crude, way to assess the quality of a study or experiment. If no one was willing to publish your paper, then it must not be of very high quality.
Taken a step further, these questions can also be used to assess the quality of a researcher. Are you publishing many peer-reviewed papers? Are they in top journals? If the answer to either of those questions is no, then the implication would be that your research was of lower quality than someone who answered yes.
There are problems with this line of reasoning (among several obvious problems: not all papers that get rejected are low quality, and not all papers that sneak through the peer review process are high quality), but in general I would say that many people were happy with the system, since it was simple and (at least perceived to be) reasonably effective at keeping low quality studies on the outside and higher quality studies on the inside.
Why don’t these questions work anymore?
There are a lot of new journals popping up. Not just one or two, but hundreds. New Open Access publisher Hindawi publishes more than 120 journals in medicine alone! I get several emails every week publicizing the launch of new journals, most of which are open access, and which seem to have varying standards of peer review (some use external reviewers, others are only reviewed in-house by the editors). The issue now is that if you can afford to pay the open access publishing fees, no paper is unpublishable. If you submit to enough journals, then your paper will almost certainly be accepted eventually, at which point you can say that the study has been published in a peer reviewed journal. So the first question from above (“Was it published in a peer reviewed journal?”) is no longer a useful way to assess paper quality, since almost anything can be published in some form of peer reviewed journal eventually.
Related to the issue of journal proliferation, people are becoming less and less devoted to any single journal. Rather than reading a specific journal from cover to cover each month, I have email alerts that send me a message whenever a paper is published on certain topics, regardless of the journal. As a result, papers published in low-impact journals can still get lots of attention, even if few people actually read that journal on a regular basis. In contrast, before online journal access it would have been much less likely that anyone would come across a paper in an obscure journal, no matter how relevant to their work.
Article-level metrics (e.g. assessing the number of citations for a specific paper, rather than the impact of the journal itself) are also reducing the importance of publishing in “prestigious” journals, since people now have more precise ways of determining whether your paper is being cited regularly. This isn’t to say that there are no benefits to publishing in prestigious journal – far from it. But the penalties of publishing in a low-impact journal are now much less than they used to be.
Is this a good or bad thing?
This depends on your perspective. If you liked the system where there were few journals and not everything could be published, then this will almost certainly seem like a bad thing. Suddenly we’ve lost one of the simplest (albeit very imperfect) ways of determining the quality of a study, or the quality of a researcher. This means that you could conceivably find a paper (or publish a paper yourself) that proves/supports just about anything, regardless of how poorly the study was conducted, which is a big problem.
Despite these obvious problems, however, I think that the new system could be a good thing… if we are willing to tweak the peer review and publishing process.
A journal that publishes everything
This may sound a bit far-fetched, but hear me out on this. At this point, almost every paper will get published eventually. What’s worse, it will often get peer reviewed at multiple journals before finally being accepted. This means that much of the time and expense of reviewing/rejecting the paper at higher journals is wasted, since it doesn’t actually keep the paper from being published – it just bumps it down to a lower quality journal.
So why not just publish everything that is submitted to a journal? PLoS ONE already does a more restricted version of this – they publish everything that they receive that is above a certain threshold of quality (as opposed to other journals, which consider both quality and “impact”, e.g. whether it’s a splashy finding or not). The papers would still be peer reviewed, but the purpose of the review would be to assess the study quality, using a pre-determined checklist (I’m picturing something similar to, but much more detailed than, the Downs and Black checklist that is sometimes used to assess study quality in systematic reviews).
The checklist would include things like methodology (cross-sectional, intervention, RCT, etc), number of participants, likelihood of bias, etc, and could range from 0-100. The final score and the checklists themselves would be published along with the paper, along with any additional reviewer comments. The peer review could be done using the current method of simply sending manuscripts out for review, or there could be a central clearing house run by the NIH or some such organization. The critical point is that the articles would be peer reviewed, and the quality of the article would be made abundantly clear on the article itself.
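To make the checklist idea concrete, here is a minimal, hypothetical Python sketch of how such a score might be computed. The criteria, point values, and weights below are entirely invented for illustration – a real checklist would be far more detailed and would be designed by methodologists, not sketched in a blog post.

```python
# Hypothetical study-quality checklist: each criterion carries a maximum
# number of points, and the points across all criteria sum to 100.
CHECKLIST = {
    "study_design": 30,       # e.g. RCT > intervention > cross-sectional
    "sample_size": 20,
    "risk_of_bias": 30,       # blinding, randomization, attrition, etc.
    "data_availability": 20,  # full dataset posted online
}

def quality_score(ratings):
    """Combine per-criterion ratings into a single 0-100 quality score.

    `ratings` maps each checklist criterion to the fraction (0.0-1.0)
    of its maximum points that the reviewers awarded. Criteria the
    reviewers did not rate score zero.
    """
    total = sum(CHECKLIST[c] * ratings.get(c, 0.0) for c in CHECKLIST)
    return round(total)

# A small cross-sectional study with no shared data might score:
score = quality_score({
    "study_design": 0.3,
    "sample_size": 0.4,
    "risk_of_bias": 0.5,
    "data_availability": 0.0,
})  # → 32
```

Publishing the per-criterion ratings alongside the total, as suggested above, would let readers see exactly where a paper lost points rather than trusting a single opaque number.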
Using this system, you could publish a paper as soon as it’s received for review – it would simply need to say “pending quality review” or something of that nature. You could also require that all studies put their full dataset online in order to aid with replication and hopefully reduce the likelihood of fraud, which isn’t easily caught by traditional peer review anyway (some journals, such as BMC Public Health, already require that authors be willing to share data upon request, although I don’t think there is any mechanism for determining whether this actually takes place). The quality score of a paper could even be amended as authors improve their study by performing additional experiments or analyses.
This would keep the best aspects of peer review – extra eyes and ears providing thoughtful comments on how a paper could be improved – while acknowledging the fact that the current system doesn’t do a tremendous job of quality control (for an excellent look at the shortcomings of traditional peer review, please check out this paper by Richard Smith titled Classical Peer Review: An Empty Gun).
What are the advantages of this system?
I see a number of benefits to adopting this “publish everything” model.
1. This new system would make paper quality exceedingly clear – if I say that wifi causes cancer, and can only point to a study that scored 2/100 for quality, and you point to a study that found the opposite that scored a 90/100, then we have a better idea of which side to take. If the findings are conflicting and study quality is similar, then we know that the issue is yet to be settled. Essentially we are making it easier to do systematic reviews, by assessing study quality when a paper is published, rather than waiting for the systematic review to come along.
2. This system would incentivize high quality research, rather than “sexy” findings. If I know that my study will be judged on the quality of my methods, rather than the controversy or novelty of the findings, then it will help to improve the methodological quality of studies in general.
2. This system would make it ok to replicate prior work, or publish null findings. We talk a lot about the importance of replication in science, but we also know that it’s really hard to publish a replication study in a prestigious journal (it’s hard to spin a replication study as “cutting edge” since, by definition, it’s already been done by someone else first). It’s the same thing with null results – we know they’re harder to publish, we know this introduces biases into systematic reviews, and yet there haven’t been many effective ways to fix it (I’m curious if PLoS ONE publishes more “null” studies than other journals, since it doesn’t concern itself with a study’s impact – anyone with info on this please let me know).
If papers are judged solely on quality rather than the novelty of the finding, that removes the incentives against performing/writing up replication studies and null results.
What are the downsides of this system?
The biggest downside of this system is that everything, regardless of quality, would be published. So if your assessment of study quality begins and ends with “was it published in a peer reviewed journal?”, then this is obviously going to be a problem. Of course my counter-argument is that we’re already in a situation where you can publish anything regardless of quality, so that’s not really going to be a big change anyway. Of course there would be a lot of complicating factors (what goes into the quality checklist, who performs the review, how to make sure it’s applied consistently, ways to appeal if something was done incorrectly, etc), but if the over-arching idea has merit then I think the plumbing could be dealt with in turn.
Why isn’t this working already?
As I was writing up this post, James Coyne pointed out that WebmedCentral has most of the characteristics I’m looking for. They publish everything, they do so rapidly, they publish their reviews online, and they include a quality score. However, the quality score seems to be completely arbitrary, and their 10-question quality checklist focuses on the writing (e.g. “Is the quality of the diction satisfactory?”) rather than the quality of the study methodology, which is the real issue. I think it’s a worthwhile attempt, but I don’t think any modified form of peer review (including post-publication peer review, which has been spectacular in a few specific situations – e.g. Rosie Redfield and #arsenicDNA – and generally underwhelming elsewhere) will really catch on without a true assessment of study methodology published alongside the paper.
So, what do you think?
I’ve been mulling this over in my head for a while, and I’m very curious to hear if anyone thinks this is even remotely plausible. It’s basically a more extreme version of PLoS ONE, which was pretty extreme in its own way when it first came out. Could this idea ever work in practice? If not, why not? And specifically, if you think it’s a bad idea, I’m curious to hear how this type of peer review would be worse than the current form, given that we’re already at the point where peer review is weeding out less and less material with the creation of every new journal.
I’d love to hear what you think!
Two weeks ago I posted a survey asking people to rank the 3 science blog networks they visit most frequently. The responses (23 in total) are probably not representative, given that most of the traffic to the post came from people clicking on my tweet or on those of other former Sciencebloggers (and current Scientopioids). But I thought I should post the results nonetheless, so here they are.
As you can see, Scientopia and Scientific American appear to be the big winners, with Scienceblogs, Wired, Discover and PLoS BLoGs packed slightly behind, and the other networks getting a few votes each. Notable omissions that were pointed out to me via Twitter were Free Thought Blogs (a network of skeptic/atheist bloggers including the part-time home of uber-blog Pharyngula) and Occam’s Typewriter (which Drug Monkey refers to as a collection of Nature Network refugees/émigrés). Thanks to DM and Cath Ennis for pointing out those omissions, which I’m guessing would have each received at least a couple of votes.
The survey also asked what people liked most about their favourite networks. Here are a couple of the most interesting answers [with their top 3 in brackets]:
“I go to SciAm for the science, Scientopia for the culture, and SciBlogs for a sense of nostalgia.” [Scientopia, SciAm, Scienceblogs] [Travis’ Note: this experience matches my own quite closely]
“When I see attention called to them on Twitter, they most fit my interest. I was originally attracted to the Guardian by Ben Goldacre.” [PLoS Blogs and The Guardian]
“Breadth of coverage.” [SciAm, Wired, Discover]
“Interesting topics, usually end up visiting from Twitter” [Scientopia, Wired, The Guardian]
So there you have it. It would be very interesting to have a more representative poll of regular visitors to science blog networks – the attendees of #Scio12 might be a good group of science communicators who are likely to be aware of most of these networks. It would also be interesting to see which of these networks have the highest number of drop-ins, as compared to those with the most devoted repeat visitors (my guess is that Scientopia, as well as any network hosting Pharyngula, would have the most dedicated following, while the more “mainstream” networks like Wired and Discover are more likely to get drop-ins, but that’s just my uneducated guess).
Thanks to everyone who completed the survey, and feel free to offer your own interpretations in the comments below.
It’s no secret that the science blogosphere has undergone massive changes in the past 18 months. There have been new networks (Scientopia, PLoS BLoGs), dramatically expanded/revamped networks (Scientific American, The Guardian, Wired), and networks that are under new management (Scienceblogs). There are even networks that have stuck around through it all, largely unchanged (Nature Network).
I’ve come to think of these networks as each representing a distinct niche in the science blogosphere. These niches may not perfectly represent each network, but they’re what I associate with the network, and what I look for when I’m visiting.
Scienceblogs is where I go to find animated discussions about atheism, skepticism, and climate science. Deep Sea News is where I go for things related to oceans and aquatic animals. Scientopia’s bloggers are mostly active researchers, and on any given day their network has excellent posts on what it’s like to be a scientist – from trainee right through to PI. Conversely, PLoS BLoGgers are mostly science journalists, who often discuss issues related to their work and also serve up large dollops of actual journalism (there are also a few active researchers there, myself included). Like PLoS Blogs, Wired and Discover seem to be written mainly by professional journalists, doing science journalism. And then there’s the new Scientific American blog network, which is a pleasant mix of several things – journalists, scientists, etc.
I like this new science blogosphere, as it offers a number of different experiences to suit different tastes and even different moods (I find that I enjoy Scientopia while working in the lab, but prefer to read the more journalistic pieces on PLoS BLoGs and Scientific American in my free time).
With all of these choices, I’m curious to know what networks people read most frequently. The survey below allows you to rank the 3 networks that you visit most frequently (excluding any networks where you contribute regularly). I’m assuming that Scienceblogs still has the most absolute visitors, but I’m interested to hear how the various networks rank, and why people put them in that order. I’ve tried to get in the ones that I read and hear about most frequently, but this is by no means an exhaustive list. That being said, some of these networks are far more “niche” than others, so it may not be entirely fair to compare them all head-to-head.
Feel free to suggest ones that I might have missed in the comments. Now go ahead and vote! Check back next week for the final tally.
Today we have another post in our Get To Know a Scienceblogger series.
Jonathan Eisen is a professor at the Genome Center at the University of California (UC), Davis and holds appointments in the Department of Evolution and Ecology in the College of Biological Sciences and Medical Microbiology and Immunology in the School of Medicine. In addition to his research, Dr. Eisen is also a vocal advocate for “open access” to scientific publications and is the Academic Editor-in-Chief of PLoS Biology. He is also an active and award-winning blogger/microblogger at The Tree of Life and on Twitter. You can learn more about him here. The info in this biography and the picture at left have been taken from Jonathan’s blog, which uses a Creative Commons Attribution License.
What is the topic of your blog?
Many threads woven together
Open science and open access to scientific literature
Microbiology and microbial diversity
Genomics and evolution
What was your primary reason for starting a blog?
Sharing with others fun things I was doing — got sick of sending out lots of email messages and wanted a better way to share …
How often do you post, and roughly how much time goes into each post?
Varies – no system. I post when I have time and have something interesting to post about. Maybe 2-3 x / week. Some posts take five minutes, some take 4 hours …
How do you fit in time for social networking?
I view it as a fundamental part of my job as a scientist and an educator. I use social networking to follow the literature, to do outreach, to communicate with colleagues, etc.
Have there been any benefits to blogging, either personally or professionally?
Lots. See http://phylogenomics.
Have there been any downsides to blogging, either personally or professionally?
#1 issue is when I write something that is too obnoxious and regret it later. I have done this maybe 3-4 times and have learned to try and write about ideas without criticizing individual people too much.
What piece of advice would you give other scientists in your situation who are considering moving into social media?
Don’t be afraid. Spend as much time or as little time as you want on this. These systems are tools, no more or no less. You decide how to use them just like you decide how to use a microscope. But like a microscope they can be really useful – so consider experimenting with them.
What have been the most effective ways of promoting your blog?
Were you surprised by anything blog related, either good or bad?
Not really … it’s all pretty straightforward. Main surprise I guess is how many people read my blog …
Any other information that you think people would find useful?
Blogs, twitter, facebook, etc are all just computer programs. They are neither good nor bad. They can be used well or poorly.
There has been a lot of discussion recently about the value of peer review (including this phenomenal post by Joe Pickrell of Genomes Unzipped), and whether other models might be cheaper, faster, and ultimately better than the current system.
Regardless of what these alternative models of publishing look like, I agree with Joe that social media will play an important role in identifying high quality papers. Social media would thus be acting as a form of post-publication peer review (henceforth referred to as PPPR), and has actually been doing so for some time (Researchblogging.org being the best example that I can think of, although the PLoS Hubs are aimed at this as well). This is in contrast to Letters to the Editor, which until a few years ago were the only form of PPPR available to researchers. I have recently had experience with both of these forms of PPPR, and thought it would be fun to compare and contrast the experience with each, focusing on the categories that I considered when deciding whether to publish my critique as a blog post or a Letter.
My experience with a Letter to the Editor came about last summer when I felt that the conclusions of this article in the International Journal of Behavioral Nutrition and Physical Activity (IJBNPA) did not match up with their data (actually, I felt that their conclusions were directly contradicted by their data). The article was published on July 29th, 2010, and my colleague Stephanie emailed it to me that same day. I read the paper in detail about a week later, and decided to write a Letter to the Editor with Stephanie and our co-supervisor.
Unfortunately IJBNPA had never published a Letter before, and it took some time for IJBNPA and their publisher (BMC) to decide whether they were willing to publish Letters in the journal, and whether or not they would charge their usual $1670 USD processing fee. Fortunately, by the end of 2010 BMC had told us that IJBNPA would begin accepting Letters, and that they would waive their processing fee.
Thus our Letter was officially submitted to the journal in January of 2011, five months after the initial article had been published. Although it was accepted quite quickly, our Letter couldn’t be published until the authors of the original paper had had a chance to respond. As a result, our Letter was officially published on May 25, nearly ten months after the article we were critiquing. It is worth noting that the original article received BMC’s “Highly Accessed” designation, meaning that it was among the more popular articles in the journal during that time-span (during this time readers had no way to know that anyone felt there was a problem with the paper).
In contrast to a Letter, a blog post about an article can be published as soon as it is written. In this case I wrote a blog post about our critique on June 12, 2011 and published it on June 13, for a total turn-around of 1 day. Rosie Redfield’s famous #arsenicDNA blog post and Letter to the Editor showed similar time differences – her blog post was published on December 4, 2 days after the article she was critiquing. In contrast, her Letter to the Editor wasn’t published online until May 27, a full 5 months after the original article. When it comes to speed, traditional Letters can’t compete with blogs.
Travis’ Note: Earlier this year Nature held a Career Columnist Competition looking for Post Docs and PhD Students who were interested in writing about the ups and downs of being a trainee. They received over 300 submissions, and the 6 who were chosen look fantastic. Unfortunately for me I was in the 294+ who did not get selected, but the good news is that I can now repost my submission here! It doesn’t relate specifically to science communication, but I’m hoping it may still be of interest.
I am currently in the second year of a four-year PhD program. I enjoy the work that I am doing, and frankly I love the lab that I am working in. But as I inch towards the completion of my degree, I can already feel myself becoming increasingly anxious about what comes next. I have many friends who have gone down this road before me, and they have taken a number of routes, both traditional and otherwise. Some have gone on to post-docs, and several are in tenure-track positions at research or teaching universities. Still others have gone to work with industry or government, and a few have even decided to focus on science writing or other “non-academic” pursuits.
But when it comes to deciding what I want to do next, I really have no idea. The one thing I do know is that in addition to a job, I also want a life. And this is something that I have noticed many, if not most, of my graduate student peers are also looking for. While they still love research, they don’t want the typical tenured professor’s life of previous generations – those professors who spent 16-hour days in the lab, whose entire life revolved around their work with little time for family or other interests. In the limited number of conferences that I have attended, I have already heard multiple professors begin a Lifetime Achievement Award acceptance speech by thanking their family for “putting up with the fact that I was never home”. That doesn’t strike me as a fond way to look back at a lifetime of research.
It seems at times that having a life outside of research – spending any waking hours away from work and the lab – can be viewed by some as a form of academic infidelity. Take, for example, Kathy Weston’s recent article in Science Careers which details the way in which she fell out of love with her research career at a well-respected institution, largely because she fell in love with the rest of her life. At times it seems that there really is no happy medium – we can either give up our lives for a successful career, or give up our career for a successful life. And yet I would expect that any rational person (which I hope would include most scientists) would realize that allowing some semblance of work-life balance would not only make life as a professional researcher more pleasant, but also more alluring to students like myself. As Dr. Weston explains in her recent article, she would likely not have become as disillusioned with her career as a scientist if it had been possible to accommodate her additional roles as a mother, wife, and daughter.
And so it is not surprising that when I speak with my graduate student colleagues, they describe long-term career goals that are likely more modest than those of previous generations – a fulfilling job with a modest income, and the ability to do good science and/or teaching. But most are quick to add that they are not interested in being that professor – the world-travelling superstar – because most of us do not feel capable of being that type of scientist without giving up everything else in our lives.
So when I finish my PhD, I really don’t know where I will go next. But I hope that there will be an option that allows me to be fulfilled both professionally and personally, rather than having to choose one over the other.