For knowledge’s sake


There’s a really interesting article about endogenous retroviruses and their implications for human evolution (in brief: they’re invaluable shards of evolutionary history scattered throughout our genome like potsherds in a Sumerian rubbish pit) in the most recent issue of the New Yorker. You should really read the whole thing, but my brain kind of snagged on one bit in particular, which I think raises some thorny intellectual and ethical questions about the nature of science, knowledge and research. This is obviously relevant to researchers and scientists, but it’s also pertinent to folks like me, who just like to invent and extrapolate upon what those hard-working individuals discover.

First, the article discusses the burgeoning and promising new field of paleovirology, wherein scientists of various disciplines use the reconstructed genomes of viruses that have been extinct for millennia to learn more about the nature and process of evolution. It’s a novel way of learning about our prehistoric past, with serious shades of Jurassic Park. And like Jurassic Park, this new technique has dangers lurking just beneath the surface. Exhibit A: a group of researchers who used commonly available materials and information to reconstruct a working version of the polio virus:

Thanks to steady advances in computing power and DNA technology, a talented undergraduate with a decent laptop and access to any university biology lab can assemble a virus with ease. Five years ago, as if to prove that point, researchers from the State University of New York at Stony Brook “built” a polio virus, using widely available information and DNA they bought through the mail. To test their “polio recipe,” they injected the virus into mice. The animals first became paralyzed and then died. (“The reason we did it was to prove that it can be done,” Eckard Wimmer, who led the team, said at the time. “Progress in biomedical research has its benefits and it has its downside.”) The effort was widely seen as pointless and the justification absurd. “Proof of principle for bioterrorism,” Coffin called it. “Nothing more.”

There seems to be a strange assumption undergirding the criticism of the Stony Brook researchers: that if an experiment doesn’t have a direct, known application to a current human health or technological problem, then it isn’t worth pursuing (or, even worse, it’s enabling terrorism). I wondered about that as I continued through the article, until, several paragraphs later, the assumption was stated explicitly by one of the researchers:

“The knowledge you gain from resurrecting something that has not been alive for a million years has to be immensely valuable,” Harmit Malik told me in Seattle. “We didn’t take it lightly, and I don’t think any of our colleagues did, either.” He repeatedly pointed out that each virus was assembled in such a way that it could reproduce only once. “If you can’t apply the knowledge, you shouldn’t do the experiment,” he said. Malik is a basic research scientist. His work is not directly related to drug development or treating disease. Still, he thinks deeply about the link between what he does and the benefits such work might produce. That is an entirely new way to look at the purpose of scientific research, which in the past was always propelled by intellectual curiosity, not utilitarian goals. Among élite scientists, it was usually considered gauche to be obsessed with anything so tangible or immediate; brilliant discoveries were supposed to percolate. But that paradigm was constructed before laboratories around the world got into the business of reshaping, resurrecting, and creating various forms of life.

So…is the idea that the more potential danger a technology poses, the more we should restrict the purposes to which it can be applied? That in this age of Encino Man viruses, the only proper avenues of research are those that offer a direct and immediate benefit to humanity? Just to emphasize Malik’s point: “If you can’t apply the knowledge, you shouldn’t do the experiment.”

To which I can only say: I’m not a scientist, but something in me deeply rebels against attitudes like that. I can see the point of his reasoning, but it strikes me as so limited and impoverished a conception of the role that knowledge plays in society that I can only hope at least a few scientists don’t share it. I could rebut it using a kind of long-view utilitarianism, which goes something like this: we might not know the purpose of a research avenue now, but that doesn’t mean it won’t be invaluable to us later (sort of like modern string theorists’ use of Calabi-Yau space mathematics, when the mathematicians who first developed that math never imagined it having a scientific application). But I don’t think even this gets to the heart of what I find so objectionable about Malik’s attitude.

Because what he’s essentially saying is: if HUMANS can’t use it (and let’s say, for the sake of argument, that we can all be sure of its ultimate uselessness), then we shouldn’t bother to investigate it at all.

Of course, the caveat is that he’s referring to research with a potentially dangerous technique, but who’s to say that seemingly innocuous research into another area might not produce some other dangerous knowledge? Should we stop all scientists from learning information that could have dangerous applications without direct benefit to humans? And maybe I feel like this because of another, broader interpretation of long-view utilitarianism: even if the knowledge doesn’t help us scientifically or technologically, it could still contribute artistically. It could still inspire future discoveries, because the human mind doesn’t work in predictable, linear ways. And maybe I object on the basic principle that when you restrict our investigation into our world, for any reason, you open the door to restricting that investigation for more sinister, political, repressive reasons (stem cells, anyone?).

And yet, maybe I’m being a little hypocritical. Animal testing is undoubtedly central to much scientific research. Animal testing advocates will generally argue that a) testing these days has rigorous oversight, and everything possible is done to minimize suffering and maximize useful results, and b) well, if you’re so against animal testing, I hope you aren’t taking any diabetes medications, undergoing open-heart surgery, or relying on antidepressants, etc., etc.

Which is all perfectly true, and I’m not against animal testing for medical reasons. And YET, we all know that animals are routinely used in tests for applications other than the medical ones defended in these arguments. Some of those tests might eventually lead to knowledge that will help save someone’s life. But not all of them. Sometimes scientists experiment on animals to learn something whose application to human health and well-being is tangential at best. Seeing as how I hate the idea of restricting science based on short-term human utilitarianism, shouldn’t I be fine with any animal testing that furthers our knowledge of the world, so long as it stays within the ethical boundaries of argument a)?

And…I don’t know. The idea of those Stony Brook researchers paralyzing and killing mice just to prove that they could is horrifying. A certain science blog I like to read features occasional posts from undergraduate science students. One student in particular posts quite graphically about his misadventures in a zebrafish experiment, where it appears he has killed at least a dozen fish just by performing certain simple procedures ineptly. Is some Platonic ideal of Knowledge served by this student killing twelve fish before he learns how to properly inject a chemical? I guess I’ll cop to being “speciesist” in my thinking, but I just don’t know how far I can let that carry me.

The death and suffering of other living creatures has to be weighed against the pursuit of knowledge. And maybe restrictive, “speciesist” utilitarianism is the only framework we have to calibrate those scales.
