The role of evidence in mobile health app development — and the level of scrutiny such apps should be subjected to — is still an open question. At the mHealth Summit 2012, a variety of speakers from different sectors of the market offered their opinions on what, exactly, is meant by evidence and on the perennial question of the value of randomized controlled trials (RCTs) in mobile health.
Bonnie Spring, director of the Center for Behavior and Health at Northwestern University, said that evidence of effectiveness is a broad category.
“If I’m an engineer or a designer, evidence means user satisfaction and sustained use. If I’m a computer scientist, evidence means functionality and few bugs. If I’m a corporation it means sales and if I’m a venture capitalist it means a return on investments,” she said. “If I’m a scientist, which I am, I mean that it actually changes behavior and produces a health outcome as evidenced by a research design that gives me confidence.”
Spring believes that means continuing to seek rigorous scientific evidence, including RCTs, widely considered the gold standard for evidence. But several speakers at the conference feel the RCT is not a useful standard in the fast-moving field of mobile health.
Joseph Cafazzo of the Centre for Global eHealth Innovation in Canada went so far as to show a slide titled “Why I hate RCTs.”
“They’re enormously expensive,” he said. “We spend at least three times as much doing trials as building apps themselves.”
Cafazzo pointed to a pilot study he completed in 2007 for a mobile app to help patients with hypertension. The corresponding RCT was only recently completed, and its results largely mirrored the pilot’s. He said his group has just published a new pilot study on Bant, an app for kids with diabetes.
“I think the RCT could [be published] in 2015. But honestly, we’re learning things through small pilots that can get apps into the field right now. In 2015, we want to see Bant further along than it is now,” he said. “In the end, I haven’t had one parent say ‘I can’t wait for that RCT to be over so my kid can get this app.’ We’ll do the RCTs, but we have to be a lot more nimble for the purposes of these apps.”
Cafazzo said that RCTs were designed to evaluate pharmaceuticals, and that the big difference between drugs and apps is that drugs have more capacity to cause harm. The worst case scenario for most apps, he argued, is a null effect.
Spring pushed back, pointing to MelApp, a mole-evaluation app that she said has no efficacy data. The app is meant to help people decide whether a mole is worth seeing a dermatologist about.
“There’s a possibility that it could do harm – if people felt confident in a result that was inaccurate and, as a result, didn’t go,” she said.
Spring did agree that RCTs seemed too slow for the world of mHealth. She showed slides of an intervention built on PalmPilots that only recently made it through the trial and publication process.
At another session, Abdul Shaikh of the National Cancer Institute again took up Spring’s question of standards of evidence, pointing out that other kinds of evidence are relevant in a world where mobile health entrepreneurs have different options for their funding.
Gary Bennett of Duke University spoke about the disconnect between academics and entrepreneurs.
“We have a consumer market that doesn’t really privilege evidence,” he said, saying consumers are spending a lot of money on apps with no evidence behind them. “NIH funding is not all that plentiful right now. I’m not sure we have a sufficient amount of money to develop a market-ready app.”
He also echoed Cafazzo’s sentiments, saying that the Silicon Valley mentality of constant iteration didn’t mesh with the pace of an RCT.
“Many of my friends have had the experience of doing a trial and finding no one even has the device anymore,” he said. With his own project, iOTA, he said, “After that four years, our version 1.0 is not something we even want to disseminate.”
Bennett opted to release iOTA as an API rather than an app. He feels academics should play to their strengths — developing and proving evidence-based methods — and then put those methods in the hands of designers with a focus on marketing.
Anne DeGheest of MedStars, offering an investor perspective, said that evidence matters to investors, but that companies often privilege it to the exclusion of other relevant questions about a product.
“I give you the benefit of the doubt,” she said. “It works. So how big is the market? What’s the problem you’re solving? Are there a lot of people who are willing to pay for it? And then we go back and see if it works.”
Chris Bergstrom, Chief Strategy and Development Officer at WellDoc, spoke in support of the status quo, pointing to WellDoc, which is both FDA-cleared and backed by RCT evidence, as an example of a promising company that followed the rules.
“No doctor’s going to prescribe a product if they don’t believe it’s an effective product that will move the needle,” Bergstrom said. “I don’t really see that changing. This has been how healthcare has operated for decades and I think it will be for the next few years.”
Bergstrom countered the idea that trials are too slow, saying a four-year development cycle is not unusual in other industries, such as automotive or mobile phones. He also said it is possible to integrate a market-driven design process with an RCT.
“That can be your base level of claims and in parallel you can be improving on top of that,” he said. “As you’re working on your commercial product, your trials are scaling.”