The Economist just published a pair of articles broadly about the state of affairs in scientific research (and from their perspective everything is in a tailspin): "How Science Goes Wrong" and "Trouble at the Lab". Both articles are worth reading, although few will find themselves in agreement with all of their conclusions. Neither article takes very long to read, so I will not try to sum up all of the arguments here. For two very different perspectives on these articles, check out Jerry Coyne's blog, where he largely agrees with the statements they make. For an alternative perspective on why these articles missed the mark almost entirely, see the post by Chris Waters, my colleague here at Michigan State University. Chris points out that most studies do not represent a single experiment examining a particular hypothesis, but several independent lines of evidence pointing in a similar direction (or at least excluding other possibilities).
However, instead of going through all of the various arguments that have been made, I want to point out some (I think) overlooked issues about replication of scientific experiments: principally, that it can be hard, and that even under extremely similar circumstances stochastic effects (sampling) may alter the results, at least somewhat.
Let's start by assuming that the original results are "valid", at least in the sense that there was no malfeasance (no results were faked), the experiments were done reasonably well (i.e. those performing the experiments did them carefully, with appropriate controls), and the results were not subject to "spin", with no crucial data left out of the paper (data that might negate the results of the experiments). In other words, ideally what we hope to see from scientists.
Now I try to replicate the experiments. Maybe I believe strongly in the old adage "trust but verify" (in other words, be a skeptical midwesterner). Perhaps the experimental methods or results seem like a crucial starting point for a new line of research (or an alternative approach to answering questions that I am interested in).
So I diligently read the methods of the paper summarizing the experiment (over and over and over again), get all of the components I need for the experiment, follow it as best as possible, and ... I find I cannot replicate the results. What happened? Instead of immediately assuming the worst about the authors of the manuscript, perhaps consider some of the following as well.
1- The description of methodological detail in the initial study is incomplete (this has been, and remains, a common issue). The replication is then based on faulty assumptions introduced into the experiment because of missing information in the paper. Frankly, this is the norm in the scientific literature, and it is hardly a new thing. Whether I read papers from the 1940s, the 1970s, or the present, I generally find the materials and methods section lacking from the perspective of replication. While this should be an easy fix in this day and age (extended materials and methods included as supplementary materials, or archived with the data itself), it rarely is.
What should you do? Contact the authors! Get them on the phone. Email is often a good start, but a phone or Skype call can be incredibly useful for getting all of the details out of those who did the experiment. Many researchers will also invite you to come spend time in their lab to try out the experiment under their conditions, which can really help. It also (in my mind) suggests that they are trying to be completely above board and feel confident about their experimental methods, and likely their results as well. If they are not willing to communicate with you about their experimental methods (or to share data, or explain how they performed their analysis), you probably have good reason to be skeptical about how they have done their work.
2- Death by a thousand cuts. One important issue (related to the above) is that it is almost impossible to perfectly replicate an experiment, ingredient for ingredient (what we call reagents). Maybe the authors used a particular enzyme. So you go ahead and order that enzyme, but it turns out to be from a different batch, and the company has changed the preservative used in the solution. Now, all of a sudden, the results stop working. Maybe the enzyme itself is slightly different (particularly if you order it from a different company).
If you are using a model organism like a fruit fly, maybe the control (wild-type) strain you have used is slightly different from the one in the original study. Indeed, in the post by Jerry Coyne mentioned above, he discusses three situations where he attempted to replicate other findings and failed to do so. However, in at least two of the cases I know about, it turned out that there were substantial differences in the wild-type strains of flies used. Interesting arguments ensued, and for a brief summary, check out box 2 in this paper. I highly recommend reading the attempts at replication by Jerry Coyne and colleagues, and the responses (and additional experiments) by the authors of the original papers (in particular for the role of the tan gene in fruit fly pigmentation).
Assuming that the original results are valid, but you cannot replicate them, does that invalidate the totality of the results? Not necessarily. However, it may well make the results far less generalizable, which is important to know and is itself an important part of the scientific process.
3- Sampling effects. Even if you follow the experimental protocol as closely as possible, with all of the same ingredients and strains of organisms (or cell types, or whatever you might be using), you may still find somewhat different results. Why? Stochasticity. Most scientists take at least some rudimentary courses in statistics, and one of the first topics they learn about is sampling. If you use a relatively small number of independent samples (a few fruit flies in your experimental group, compared to a small number in the control group), there is likely to be a lot of stochasticity in your results because of sampling. Thankfully, we have tools to quantify the uncertainty associated with this (in particular standard errors and confidence intervals). However, many studies treat large quantitative differences as if they were essentially discrete ("compound A turns transcription of gene X off..."). Even if the effects are large, repeating the experiment may produce somewhat different results (a different estimate, even if the confidence intervals overlap).
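To make the sampling point concrete, here is a minimal simulation sketch (mine, not taken from any of the studies discussed here); the "true" effect, the noise, and the sample size are all invented for illustration. Running the same small experiment five times gives noticeably different estimates each time, even though nothing about the underlying "biology" changes between runs.

```python
# Hypothetical simulation: how much does the estimated effect bounce around
# when each replicate uses only 10 individuals per group? All numbers are
# made up purely for illustration.
import numpy as np

rng = np.random.default_rng(42)
true_effect = 2.0   # invented "real" difference between treatment and control
noise_sd = 3.0      # invented individual-to-individual variation
n_per_group = 10    # small sample, as in many fly experiments

for rep in range(5):
    control = rng.normal(0.0, noise_sd, n_per_group)
    treated = rng.normal(true_effect, noise_sd, n_per_group)
    diff = treated.mean() - control.mean()
    # standard error of the difference in means, then an approximate 95% CI
    se = np.sqrt(control.var(ddof=1) / n_per_group + treated.var(ddof=1) / n_per_group)
    lo, hi = diff - 1.96 * se, diff + 1.96 * se
    print(f"replicate {rep + 1}: estimate = {diff:5.2f}, 95% CI = ({lo:5.2f}, {hi:5.2f})")
```

With larger samples the estimates settle down, which is exactly the uncertainty that the standard errors and confidence intervals are quantifying.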
If the way you assess "replication" is something like "compound A significantly reduced expression of gene X in the first experiment; does it also significantly reduce expression upon replication?", then you may be doomed to frequently failing to replicate results. Indeed, statistical significance (based on p-values, etc.) is a very poor tool for assessing replication. Instead, you can ask whether the effect is in the same direction, and whether the confidence intervals of the initial estimate and the new estimate from the replication overlap.
Ask the authors of the original study for their data (if it is not already available in a data repository), so you can compute the appropriate estimates and compare them to yours. How large was their sample size? How about yours? Can that explain the differences?
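As a hedged sketch of that comparison (the summary numbers below are hypothetical, standing in for an original study and a replication attempt, not taken from any real data), a replication check based on direction of effect and overlapping confidence intervals might look something like this:

```python
# Hypothetical replication check: same direction of effect AND overlapping
# approximate 95% confidence intervals, rather than "significant in both".

def approx_ci(estimate, se, z=1.96):
    """Approximate 95% confidence interval for an estimated effect."""
    return (estimate - z * se, estimate + z * se)

def consistent_replication(est_orig, se_orig, est_rep, se_rep):
    """True if the two estimates agree in sign and their 95% CIs overlap."""
    same_direction = (est_orig > 0) == (est_rep > 0)
    lo1, hi1 = approx_ci(est_orig, se_orig)
    lo2, hi2 = approx_ci(est_rep, se_rep)
    overlap = (lo1 <= hi2) and (lo2 <= hi1)
    return same_direction and overlap

# Original study: large apparent effect from a small sample (wide CI).
# Replication: smaller estimate, but same direction and overlapping CI.
print(consistent_replication(est_orig=2.5, se_orig=0.9, est_rep=1.1, se_rep=0.7))  # True
```

Note that in this made-up example the replication's confidence interval spans zero, so a "significant in both experiments" rule would call it a failure to replicate, even though the two estimates are entirely compatible with each other.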
4- Finally, make sure you have done a careful job of replicating the initial experiment itself. I have seen a number of instances where it was not the initial results, but the replication itself, that was suspect.
Are there problems with replication in scientific studies? Yes. Are some of them due to the types of problems discussed in The Economist or on Retraction Watch? Of course. However, it is worth keeping in mind how hard it is to replicate findings, and this is one of the major reasons I think meta-analyses are so important. It also makes it clear why ALL scientists need to make their data available through disciplinary or data-type-specific repositories like Dryad, NCBI GEO, and the Short Read Archive, or more general ones like figshare.
Monday, October 14, 2013
Fallout from John Bohannon's "Who's afraid of peer review?"
As many, many scientists, librarians, and concerned folk who are interested in scientific publishing and the state of peer review are aware, the whole 'verse was talking about the "news feature" in Science by John Bohannon entitled "Who's afraid of peer review?".
The basics of the article: a year-long "sting" operation on a "select" group of journals (that happened to be open access... more on this in a second), focusing in part on predatory/vanity journals. That is, some of the journals had the "air" of a real science journal, but in fact would publish the paper (any paper?) for a fee. Basically, Bohannon generated a set of faux scientific articles that at a first (and superficial) glance appeared to represent a serious study, but upon even modest examination it would be clear to the reader (i.e. reviewers and editors for the journal) that the experimental methodology was so deeply flawed that the results were essentially meaningless.
Bohannon reported that a large number of the journals he submitted to accepted the article, clearly demonstrating insufficient (or non-existent) peer review. This, and the headline, has apparently led to a large amount of popular press and many interviews (I only managed to catch the NPR one, I am afraid).
However, this sting immediately generated a great deal of criticism, both for the way it was carried out and, more importantly, for the way the results were interpreted. First and foremost (to many): ALL of the journals that were used were open access, so there was no control group of journals with the "traditional" subscription-based model (where libraries pay for subscriptions to the journals). In addition, the journals were sieved to over-represent the shadiest predatory journals. That is, it did not represent a random sample of open access journals. One thing that really pissed many people off (in particular among advocates of open access journals, but even beyond this group) was that Science (a very traditional subscription-based journal) used the summary headline "A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals.", clearly implying that there was something fundamentally wrong with open access journals. There are a large number of really useful critiques of the article by Bohannon, including ones by Michael Eisen, the Martinez-Arias lab, Lenny Teytelman, Peter Suber, and Adam Gunn (including a list of other blogs and comments about it at the end). There is another list of responses found here as well. Several folks also suggested that some open access advocates were getting overly upset, as the sting was meant to focus on just the predatory journals. Read the summary line from the article highlighted in italics above, as well as the article itself, and decide for yourself. I also suggest looking at some of the comment threads, as Bohannon joins in on the comments on Suber's post, and many of the "big" players are in on the discussion.
A number of folks (including myself) were also very frustrated with how Science (the magazine) presented this (and not just the summary line): making the "sting" appear to be scientifically rigorous in its methods, but then turning around and saying it was just a "news" piece whenever any methodological criticism was raised. For instance, when readers commented on both the lack of peer review and the biased sampling of journals used for the "sting" operation in Bohannon's article, this was the response from John Travis (managing editor of News for Science magazine):
I was most interested in the fact that Science (the journal) hosted an online panel consisting of Bohannon, Eisen, and David Roos (with Jon Cohen moderating) to discuss these issues. Much of it (especially the first half hour) is worth watching. I think it is important to point out that Bohannon suggests he did not realize how his use of only OA journals in the sting operation would be viewed. He suggests that he meant this largely as a sting of the predatory journals, and that if he did it again he would include subscription-based journals as a control group. You can watch it and decide for yourself.
The panelists also brought up two other important points that do not seem to get discussed as much in the context of open access vs. subscription models for paying for publication or for peer review.
First, many subscription-based journals (including Science) have page charges and/or figure charges that the author of the manuscript pays to the journal. As discussed among the panelists (and I have personal experience paying for publication of my own research), these tend to be in the same ballpark as the charges for publishing open access papers. Thus the "charge" that the financial model of OA journals would lead to more papers being accepted applies to many subscription journals as well (in particular journals that are entirely online).
Second (and the useful point to come out of Bohannon's piece): there are clear problems with peer review being done sufficiently well. One suggestion made by both Eisen and Roos (and suggested many times before) is that the reviews provided by the peer referees of the manuscript, and by the editor, could be published alongside the accepted manuscript (or as supplemental data on figshare), so that all interested readers can assess the extent to which peer review was conducted. Indeed, there are a few journals that already do this, such as PeerJ, EMBO Journal, eLife, F1000Research, Biology Direct and some other BMC-series journals (see here for an interesting example), Molecular Systems Biology, and the Copernicus journals. Thanks to folks on twitter for helping me put together this list!
This latter point (providing the reviews alongside published papers) seems so trivial to accomplish, and the reviewers' names could easily remain anonymous (or they could provide their names, gaining a degree of academic credit and credibility in the scientific community) if so desired. So why has this not happened for all scientific journals? I am quite curious about whether there are any reasons NOT to provide such reviews.
Wednesday, July 24, 2013
Genetics really is hard (to interpret)
I am sure this will not surprise most of you, but genetics research can be really hard. I don't simply mean that doing genetics experiments is hard (which it can be), but that interpreting the results from genetic analysis can be difficult. This post is about an interesting story involving the analysis of a gene called I'm not dead yet (Indy) in the fruit fly Drosophila (one of geneticists' favorite organisms) and its role in extending lifespan. This story, which has played out over the past decade, has taken a number of interesting twists and turns involving many of the subjects that I like to discuss in this blog and in my own work, including trying to make sense of the results from genetic studies, the influence of factors like genetic background and environment on mutational effects, and of course Drosophila itself. While I do not study lifespan (longevity), I have been interested in, and following, this research over the past 5-6 years because of the role of genetic background effects (which I do work on). I should also mention that, other than being a geneticist, I do not claim to have any great knowledge of the study of aging, but I will do my best.
I hope in this (and future) posts to accomplish a few things, so I thought I would lay them all out first (in case I start to ramble off in strange directions).
1- Describe a cool story about something important to just about everyone (who does not want to find out how to live longer?).
2- Discuss the means and logic of genetic analysis. That is, how we (geneticists) go about figuring out whether a particular gene (or variant of a gene) influences something we care about (like how long we live).
3- Show that context matters a lot for genetic analysis. Factors like the food used to feed your critters (among many other factors), and the genetic background (of the critters) in which the mutation is studied, can profoundly change what you see (the results).
4- Show that scientists, even when making honest efforts to perform good, reproducible research, can get different results because of seemingly subtle differences in points 2 and 3.
Not surprisingly, many scientists are interested in the biology of aging, and in particular in what factors influence longevity. In addition to being very cool, and of obvious importance to many people on the planet, it is also important for aspects of evolutionary theory. The point being that many scientists are interested, and approach questions of aging from many different perspectives, which is great. It is also not surprising that geneticists (and again the general public) are interested in finding genes that influence the aging process (why do some people live longer than others?). So in the year 2000 (you know, when all of our computers did not shut down), when a paper entitled "Extended life-span conferred by cotransporter gene mutations in Drosophila" came out, there was a lot of buzz. The basic results suggested that reducing the function or expression of a particular gene, Indy, increased how long fruit flies lived. While we (the people) are not fruit flies, by the year 2000 research had already clearly demonstrated that there are many shared genes in all animals (including people and flies), and many seem to have pretty similar functions. Thus the excitement and buzz. By the way, Indy is short for "I'm not dead yet", and if you do not get the reference, check this out (start at 0:58 if the two minutes is too long), or here if you prefer it in musical form, or here as a cartoon.
So what did they do in this study? The punchline is that, using multiple, independently generated mutations, they demonstrated that as you reduce Indy expression and function, the fruit flies live longer (increased longevity) compared to fruit flies with normal (wild-type) copies of the Indy gene. Seems straightforward enough, and by using multiple independent mutations they demonstrate (at one level) the repeatability of the results. That is, their results are not some strange one-off random result, but can be reproduced, which provides some degree of generality.
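As an illustrative sketch only (this is not the authors' analysis; the lifespans, effect sizes, and sample sizes below are invented), the logic of comparing several independently generated mutant lines against a common wild-type control might look something like this:

```python
# Hypothetical lifespan comparison: several independent (made-up) mutant
# alleles, each compared against the same wild-type control.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_flies = 50
wild_type = rng.normal(45, 8, n_flies)  # hypothetical wild-type lifespans, in days

mutant_lines = {
    "indy_allele_1": rng.normal(55, 8, n_flies),
    "indy_allele_2": rng.normal(53, 8, n_flies),
    "indy_allele_3": rng.normal(56, 8, n_flies),
}

for name, lifespans in mutant_lines.items():
    # One-sided test: does this mutant line live longer than wild type?
    stat, p = stats.mannwhitneyu(lifespans, wild_type, alternative="greater")
    print(f"{name}: median {np.median(lifespans):.1f} d vs "
          f"wild type {np.median(wild_type):.1f} d, p = {p:.2g}")
```

A consistent lifespan increase across independent alleles is much harder to dismiss as a quirk of any one mutation, although, as the follow-up studies discussed below make clear, genetic background and diet can still complicate the picture.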
Of course, results are rarely so simple and clear, and additional investigations often reveal layers of complexity. Studying longevity can be particularly difficult, and not only because you have to wait a long time to see when something dies of natural causes.
So does Indy actually influence lifespan? The short answer is that the results from follow-up studies have been pretty mixed, so it is perhaps not as clear as hoped from the original study. More on that in subsequent posts!
References and links if you want more information from the original studies
Rogina B, et al. 2000. Extended life-span conferred by cotransporter gene mutations in Drosophila. Science 290:2137–2140.
Toivonen JM, et al. 2007. No influence of Indy on lifespan in Drosophila after correction for genetic and cytoplasmic background effects. PLoS Genetics 3(6):e95.
Wang et al. 2009. Long-lived Indy and calorie restriction interact to extend life span. Proc Natl Acad Sci USA 106(23):9262–9267. doi: 10.1073/pnas.0904115106
Toivonen JM, Gems D, Partridge L. 2009. Longevity of Indy mutant Drosophila not attributable to Indy mutation. Proc Natl Acad Sci USA. doi: 10.1073/pnas.0902462106
Helfand SL, et al. 2009. Reply to Partridge et al.: Longevity of Drosophila Indy mutant is influenced by caloric intake and genetic background. Proc Natl Acad Sci USA 106(21):E54. doi: 10.1073/pnas.0902947106
Frankel S, Rogina B. 2012. Indy mutants: live long and prosper. Frontiers in Genetics 3:13. doi: 10.3389/fgene.2012.00013
Rogina B, Helfand SL. 2013. Indy mutations and Drosophila longevity. Frontiers in Genetics 4:47. doi: 10.3389/fgene.2013.00047
Friday, February 27, 2009
Why are scientists so cautious?
An article from the New York Times writer Gina Kolata from June 27th is getting a fair bit of buzz, both around the blogosphere and among my friends on Facebook (many of whom are scientists as well).
http://www.nytimes.com/2009/06/28/health/research/28cancer.html
The gist is as follows. The current trend among large scientific research grant agencies is to fund projects that "play it safe". That is, they will not be risky projects with respect to having significant "productive" output in terms of research articles out the other side (i.e. funds = research articles). The proposals that may be the most groundbreaking (both in terms of basic research and any potential for significant clinical advances) are also often the most risky. The article does a good job of getting at both the political and cultural components of this issue.
However, it got me thinking about the culture of science in general, and our mentorship process. In particular, about how a major part of the training of scientists with respect to critical reasoning perhaps also leads to excessive skepticism. Is this possible? Now, I tend to be an overly skeptical person, like most scientists, and I often look for the flaws in all of the experiments I perform. However, is it possible that we take it too far as a scientific community?
From my own training experience, I know that some of the most valuable time I spent was in "journal club", where a group of students, post-docs, and faculty would get together each week over coffee and argue about a couple of recent papers. However, in most situations, this would turn into a session to find every possible flaw in the study. While there is certainly value in this (knowing a good experiment from a bad one, for instance), I am now wondering whether this leads to a culture of scientists who are unable to take risks, or to appreciate proposals for "risky" science.
This is something I will have to mull over.....
http://www.nytimes.com/2009/06/28/health/28cancerside.html