Most science reporters aren’t scientists. I suspect this wouldn’t bother me nearly as much if I hadn’t been a scientist before I became a science writer. That gave me fairly high standards for researching what I write about. The truth is, my standards are high enough that I don’t always manage to live up to them. Still, I don’t really ask for much. All I want is for science reporters to think about the studies they’re discussing before propagating their own misunderstandings to the masses.
It’s not always easy to do—even for scientists. I actually spend a lot of time teaching my graduate students how to read, and critically evaluate, scientific studies rather than just taking what the authors say at face value. The first lesson? Don’t just stop at the abstract.
When scientists publish studies in peer-reviewed journals (generally considered your best choice for reliable research information, since experts review these papers to see whether the science is good enough to be worthy of publication—at least in theory), the articles start off with an abstract—approximately 250 words that summarize what the study was looking for and its basic results. These do not, however, tell the whole story.
Abstracts rarely go into details about the study design, who was excluded, the actual questions asked on a survey, or any of the other really important information that tells you what the scientists actually did instead of how they interpreted it. The devil is in those details. What you learn about a study when you read the whole thing, as opposed to just looking at the abstract, can completely change your mind about what it says.
Take the recent study that many people are claiming supports the teaching of abstinence-only education in schools. If you read the abstract, you learn that fewer students in the abstinence-only group went on to have sex than students in the control or comprehensive education groups. If you read the whole paper, on the other hand, you also learn that:
1. Approximately one quarter of the 7th and 8th grade students enrolled in the study were already sexually active before the study began.
2. At one year out from the intervention, the number of students who became sexually active was essentially the same in the abstinence-only group and one of the two comprehensive-education groups. The difference only really showed up two years out.
3. The abstinence-only program used in the study was significantly different from most abstinence-only programs currently taught in classrooms around the country, and as such the results can’t be used to talk about the efficacy of those programs.
Even ignoring the conservative outlets that understandably jumped on anything that looked like plausible research support for their pet issue, there were far more blatant pronouncements that this study had turned the sex education world on its head than nuanced analyses of what it actually said.
Which brings us back to the fact that most reporters, and readers, aren’t trained as scientists. Scientists learn that we take an abstract at face value at our peril. Even if that short snippet does accurately and fully describe the research that was done, one study does not define reality. Scientists talk about the “weight of evidence,” which reflects the fact that you need lots of data, piling up relentlessly, slotting together and filling in gaps in understanding, to support a scientific conclusion.
The truth is that this should be obvious even to the completely non-scientifically inclined reader. Just think about how frustrating it is every time an article on the health benefits of a particular diet is followed up by another article claiming it can kill you. It takes a lot of scientists doing a lot of studies, examining a problem in a lot of different ways, before there is actually enough data for people to be able to make informed decisions, and it seems like many science writers forget that in their desire to get as much information out to the public as quickly as possible. It’s the same quest for content that makes people report press releases as news, even when they know they should question the information.
I’ve been guilty of this myself. I work as a science writer, and sometimes I get so excited about the possibilities in an abstract that I want to share it right away. I try to be careful in how I talk about research, discuss conclusions as preliminary, and mention that more work needs to be done, but these subtleties can be difficult to convey.
It’s a hard line to walk. On one hand, there is the desire to believe that every new study is news and newsworthy—a desire fueled by press releases, looming deadlines, and editors who will chastise you if you don’t tackle the same topics covered by the competition. On the other hand, you don’t want to confuse people by presenting preliminary results as accepted fact or make readers look too desperately for a solution that may still be decades away.
Still, these issues are why I think much of the best writing about science, and sex, can be found in the blogosphere rather than in the mainstream press. Although you have to choose your sources carefully, the people who write about a topic because they spend their lives studying it often have more useful insights on the background and complexities of an issue than those who pick it up as a thread on the A.P. This is particularly true when you’re talking about sexual health topics. These subjects can be so heavily stigmatized that it’s difficult for even the most seasoned reporters to discuss them without hiding behind flippancy or personal baggage. It’s often easier to make light of the science of sex, in part because people have trouble writing about sex without worrying that readers may think they’re discussing their own sex lives.
The trouble is, of course, picking your sources. In theory, although often not in reality, someone at the mainstream media houses is looking over the writers’ shoulders to make certain that their research checks out and their writing makes sense. When you’re getting your science information from blogs and other less “reputable” outfits, you have to make those judgments yourself. Fortunately, most of the better writers let you know where they learned about the issues they’re discussing, so you at least have a chance of making informed judgments about the quality of their information.
I know that these things may be easier for me than they are for many science writers, because I write about a field in which I have also worked as a researcher, but it’s still not easy. Sure, I’m intimately familiar with the science of the intimate from the perspective of a scientist, a teacher, and an author… as well as a rational human being. Yes, I have a good understanding of where the questions lie, how certain studies fail to address them, and why doing some types of research is completely impractical. And you know what? Sometimes I still get it completely wrong.
Two of Elizabeth’s favorite sources for insightful analysis of sexual health news are:
1. The Wisdom of Whores. This blog is written by Dr. Elizabeth Pisani—an epidemiologist studying HIV. She uses carefully chosen data to craft well thought out arguments about hot sexual-science issues in the news.
2. Sexuality and Religion. In this blog, Reverend Debra Haffner uses her expertise as a religious leader and activist to discuss, among other things, the interactions between sexuality, religion, and the political process.
This sums up so much that bothers me about popular science reporting. One of my biggest pet peeves is science reporters who assume humans are naturally monogamous, then fold, spindle, and mutilate study abstracts about animals to support this.