Most people agree that doing robust studies and pilots is important to the development of mobile and digital health, and to realizing the cost savings these technologies offer. But what kind of evidence matters? When is it time to stop running pilots and go to scale, and what challenges does that process present? And does every provider organization have to reinvent the wheel, or can they learn from each other's pilots?
Those were a few of the questions addressed at the Partners Connected Health Symposium in Boston by speakers from Partners, Humana, the California Health Care Foundation (CHCF), and the Center for Technology and Aging.
Margaret Laws, the Director of Innovations for the Underserved at CHCF, said that pilots in clinical settings often fail to track return on investment.
"We think, if we have a really successful program in one community health system, all the others will just start doing it," she said. "Well it doesn't happen, and one reason is because there is no analysis of the work and time and resources that go in and what comes out."
David Lindeman, director of the Center for Technology and Aging, pointed out that the problem with using return on investment to evaluate pilots is that many of the newest endeavors in medicine are preventive and/or carry high startup costs, so short-term ROI can look deceptively bad. He described a calculator his team uses to evaluate longer-term return on investment. He said that using that tool, one intervention done by Centura moved "from a very modest first-year ROI, over several years to 3-to-1 and then 4-to-1, which allowed their organization to say 'We're going to take this statewide'."
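To see why first-year ROI can understate the value of a preventive intervention, consider a minimal sketch of the arithmetic. The figures below are hypothetical, not numbers from the Centura program or Lindeman's calculator; they only illustrate how a one-time startup cost depresses year-one ROI while cumulative multi-year ROI climbs.

```python
# Illustrative only: hypothetical numbers, not data from the Centura program.
# Shows how a one-time startup cost can make first-year ROI look poor
# even when cumulative multi-year ROI improves steadily.

startup_cost = 100_000          # one-time implementation cost (hypothetical)
annual_operating_cost = 20_000  # recurring cost per year (hypothetical)
annual_savings = 60_000         # avoided care costs per year (hypothetical)

cumulative_cost = 0.0
cumulative_savings = 0.0
for year in range(1, 6):
    cumulative_cost += annual_operating_cost + (startup_cost if year == 1 else 0)
    cumulative_savings += annual_savings
    roi = cumulative_savings / cumulative_cost  # dollars returned per dollar spent
    print(f"Year {year}: cumulative ROI = {roi:.2f}-to-1")

# Year 1 comes out at 0.50-to-1 (an apparent loss); by year 5 it reaches 1.50-to-1.
```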
Sree Chaguturu, the medical director of population health management at Partners, said return on investment is table stakes at this point, that is, the minimum evidence a pilot has to show. But in his role as a decision-maker for Partners' employee health plan, he also looks at whether technologies -- particularly monitoring or sensor technologies -- address specific pain points.
"The challenge as an employer is the 'Big Brother' problem," he said. "If you're providing these sensors, how do you make sure that's done in a sensitive way? Then there is the question of what do you do with that data. What are your intervention arms? Employers might have their own providers, but if they don't how do they get that information to the people who can intervene? So the challenges are trust and intervention."
He also mentioned that providers don't always think about the need for technical support, and don't have the human infrastructure in place to help users manage and understand new technology.
Both Laws and Rajni Aneja, a strategic executive at Humana, talked about the need to consider the specific population an intervention targets. Is the user interface one that older people will be able to use? Is it available in the right languages? That sort of population targeting can affect the success of an intervention, but it also makes it harder to use the same strategies at different sites without somewhat repetitive efficacy studies. Still, Aneja thinks that as the data builds, it will become more apparent what is and isn't generalizable.
"When we think about providing services to our members it's for better monitoring of their care," she said. "And when we do these sorts of pilots they ... build an ecosystem that in the longterm will help us with better results and better quality care. You do the learning and you have a plan to scale from it. We always start small where we think is the actual need. And once the adoptions and the results and the outcome are measured, then we can scale it into different markets within the US."