How one researcher learned to embrace negative results
Researchers don’t dream of negative studies, but experiments that don’t go as expected and trials that yield negative results are critical for moving science forward. To highlight this important part of the research process, we asked research scientists to speak about their own experiences with “failure.” Our first contributor is Michele Heisler, a health services researcher who develops and tests health system-based interventions.
By Michele Heisler
There is a certain moment that every researcher who develops and evaluates health care interventions both eagerly anticipates and dreads. It is the moment that comes after years of planning and honing the intervention to be tested, securing funding, recruiting study participants, delivering the intervention and ensuring it is conducted as envisioned, meticulously gathering and recording data, and desperately trying to keep people involved in the study. After years of hard work, you and perhaps some members of your research team gather in a room around a single computer. The code is written. All the relevant data are entered. You push the run button, the computer whirs, and output appears on the screen. You take a deep breath and lean over to review it.
And there it is. Often the results are there in a single regression. You peer down and sigh. It didn’t work. The patients who received your amazingly crafted, brilliant, sure-to-succeed intervention did not do any better on your primary outcome than patients who got usual care or some other approach that you didn’t think would work as well. All those years of work, and in a single minute you realize you have what is called a “negative” study. Your hypothesis was wrong. You can’t blame it on poor fidelity to your imagined intervention: you carefully assessed fidelity, and it was delivered as intended. You can’t blame it on lack of engagement: most of the participants engaged in the intervention as well as you could have hoped. It just didn’t work any better than the alternative.
The first time this happened to me, I felt crushed by a sense of failure. How could it not have worked? A similar intervention had worked beautifully with patients with a different health condition. I had felt so sure this would be an effective intervention. And all those years spent on it, just to find it didn’t work? I gave myself a bracing self-lecture on why randomized controlled trials, and the equipoise that justifies randomly assigning patients to different treatments, are so very important. I reminded myself that I was an objective researcher seeking truth, not an advocate for certain approaches until they were rigorously tested, and that even then I should continue to question and challenge. But it was only after I sulked for a while and then buckled down to make sense of the results and write them up that I began to see the importance of these “negative” findings. Happily, we had gathered qualitative data from participants about their views and experiences that we were able to scour. As I delved into those data, the reasons the intervention didn’t work as we had hoped became clear. They were fascinating and unexpected, yet made so much sense in retrospect. My team and I became excited about the lessons learned from the failure of the intervention, the reasons it failed, and how we needed to change and adapt our approach to incorporate them.
We want things to work. We believe in our ideas. Until they are rigorously tested, we just know our brilliant interventions will work as we imagined. I still dread the moment when the truth about success or failure is irrevocably flashed on the computer screen. But I now firmly believe that the lessons from failures may be as crucial as the “successes” in informing interventions that will improve health.
Want more on this subject? Read the second contribution to the series, in which surgical oncologist Anees Chagpar explains why she considers her non-significant and negative studies to be important parts of her publication history.
Featured image courtesy of Sebastian Sikora.