I get highly uncomfortable when people ask if the ministry I oversee “works.” Answering that question is not easy. In fact, in many cases, it is nearly impossible to provide a definitive answer. In this post, I’m continuing to explain why that is.
Last year, I had several meetings with representatives from Notre Dame’s Wilson Sheehan Lab for Economic Opportunities [LEO, for short] about conducting an outcome study on Resilient Recovery Ministries. LEO has the funds and expertise necessary to conduct outcome studies to determine if a program “works.”
They have a unique methodology that I was excited about. Using publicly available government databases, they can track the financial independence of a program’s participants. That would be great because getting a job and becoming financially independent is a good way to measure whether our participants are sober and living the kind of quiet life the Bible says we should make it our ambition to live.
At the end of several years of research, we might have been able to answer the question of whether Resilient Recovery works by saying something like, “People who attend our program are more financially stable and use fewer public benefits. On average, our program saves society x amount of dollars per attendee.”
There was one problem, though.
In order to conduct a rigorous study, we would need a control group—a group that doesn’t attend Resilient against whom we could compare Resilient’s participants.
But this proved nearly impossible. Our population lives extremely chaotic lives. A sizable proportion are chronically homeless. Most do not work and aren't involved in any social clubs or activities. They have no regular habits, no social ties, and no routines. That chaos means we can't maintain an experimental group, much less a control group, long enough to conduct an outcome study.
LEO suggested we use a waitlist as a control group. But that would not work for us. We serve everyone who comes through our doors. The sober living homes that we partner with are in a similar position. They don’t have a waitlist. A homeless addict makes a call in a moment of desperation, and if the home has an opening at that moment, they send a van to pick up the potential new client. Clients that aren’t picked up quickly often stumble back to the streets. Sometimes, the window of opportunity is open for just a couple of hours.
So we went another route.
Given the chaotic lives of many of our participants, a partnership with LEO was off the table. So, we tried something different. Along with the Meros Center, we conducted a study with a less ambitious goal. We called it The Stay in Touch Project. Our goal was to track people for a year. Like scientists studying a rare population of Speckled Horn Warblers, we just wanted to know where our people go and what happens to them. To do this, we enrolled 30 people in the study and checked in with them biweekly. In case they went missing, we obtained a list of family, friends, and treatment providers we could contact to help us track them down.
We hired a person to keep track of the 30 participants. We invested thousands of dollars of grant money into the project. Six months into the project, we lost track of roughly 84% of the people we so desperately wanted to stay in touch with.
An attrition rate of 84% makes outcome research impossible. Any conclusions about outcomes we drew from our study would be victims of survivorship bias, which ChatGPT defines this way:
Survivorship bias is the logical error of focusing only on successful outcomes or surviving examples while ignoring failures, leading to skewed conclusions. It distorts analysis by overlooking critical factors that caused others to fail, often resulting in overly optimistic assumptions or flawed decision-making.
To see why survivorship bias applies to Resilient Recovery, consider all the factors that affect a person's longevity in the program. Each of those factors is like a filter screening out short-term attendees. The people who "survive" to be counted at month four are different from those who don't "survive."
THE SURVIVORSHIP BIAS FILTER
People still around at the end of the program or study were healthier, more stable, and had more resources than those who didn’t make it to the finish line. Because those who survived to the end of the program were more “fit,” we can’t actually say it was our program that helped them. It would be equally accurate to say our program filtered out people for whom sobriety was more challenging to attain.
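The filter described above can be made concrete with a small simulation. This is a minimal sketch, not real data: it assumes a cohort whose members each have a baseline "stability" score, a program with zero actual effect, and attrition that screens out the less stable. All numbers and names here are hypothetical illustrations.

```python
import random

def simulate_attrition(n=1000, dropout_threshold=0.5, seed=42):
    """Illustrate survivorship bias with a made-up cohort.

    Each participant gets a baseline 'stability' score drawn
    uniformly from [0, 1). The simulated program does nothing;
    attrition simply removes everyone below the threshold before
    follow-up, so the survivors look healthier on average.
    """
    rng = random.Random(seed)
    cohort = [rng.random() for _ in range(n)]           # baseline stability
    survivors = [s for s in cohort if s >= dropout_threshold]
    avg_all = sum(cohort) / len(cohort)
    avg_survivors = sum(survivors) / len(survivors)
    attrition_rate = 1 - len(survivors) / len(cohort)
    return avg_all, avg_survivors, attrition_rate

avg_all, avg_survivors, attrition = simulate_attrition()
# The survivors' average stability is well above the full cohort's,
# even though the simulated "program" had no effect at all.
```

If you measured only the survivors, you would conclude the program produced stable people; in fact the filter did all the work, which is exactly the trap described above.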
But surely people don’t rely on data affected by the survivorship bias, right?
As I’ll show in next week’s post, people and organizations absolutely use data impacted by survivorship bias. They filter, filter, filter until only a hale and hearty few are left and then take credit for the resilience and chutzpah of the people who graduate from their program. In the same way that oblivious tourists wear loud shorts and black knee socks without a hint of irony, people use and quote data affected by survivorship bias as if they are sharing the results of a peer-reviewed article from a top-tier science journal.
I’ll repeat my central premise. Whether a charity, program, or ministry “works” is more difficult to answer than most people imagine. Like almost everybody else, I could tout the data from our Stay in Touch Project as evidence that our program works, but I would look—at least in my own eyes—like the couple in the picture above. And I am loath to do so.