Is the RCT too slow for mobile health?

By: Jonah Comstock | Dec 5, 2012        

Bant: A diabetes management app

The role of evidence in mobile health app development — and the level of scrutiny such apps should be subjected to — is still an open question. At the mHealth Summit 2012, a variety of speakers from different sectors of the market offered their opinions on what, exactly, is meant by evidence and on the perennial question of the value of randomized controlled trials (RCTs) in mobile health.

Bonnie Spring, director of the Center for Behavior and Health at Northwestern University, said that evidence of effectiveness is a broad category.

“If I’m an engineer or a designer, evidence means user satisfaction and sustained use. If I’m a computer scientist, evidence means functionality and few bugs. If I’m a corporation it means sales and if I’m a venture capitalist it means a return on investments,” she said. “If I’m a scientist, which I am, I mean that it actually changes behavior and produces a health outcome as evidenced by a research design that gives me confidence.”

For Spring, that means continuing to seek rigorous scientific evidence, including RCTs, which are widely considered the gold standard. But several speakers at the conference argued that the RCT is not a useful standard in the fast-moving field of mobile health.

Joseph Cafazzo of the Centre for Global eHealth Innovation in Canada went so far as to present a slide titled “Why I hate RCTs.”

“They’re enormously expensive,” he said. “We spend at least three times as much doing trials as building apps themselves.”

Cafazzo pointed to a pilot study he completed in 2007 for a mobile app to help patients with hypertension: the full RCT was only just completed, and its results are essentially the same as the pilot’s. He said his group has also just published a new pilot study of Bant, an app for kids with diabetes.

“I think the RCT could [be published] in 2015. But honestly, we’re learning things through small pilots that can get apps into the field right now. In 2015, we want to see Bant further along than it is now,” he said. “In the end, I haven’t had one parent say ‘I can’t wait for that RCT to be over so my kid can get this app.’ We’ll do the RCTs, but we have to be a lot more nimble for the purposes of these apps.”

Cafazzo said that RCTs were designed to evaluate pharmaceuticals, and that the big difference between drugs and apps is that drugs have more capacity to cause harm. The worst-case scenario for most apps, he argued, is a null effect.

Spring pushed back, pointing to MelApp, an app that helps people determine whether a mole is worth seeing a dermatologist about, which she said has no efficacy data behind it.

“There’s a possibility that it could do harm – if people felt confident in a result that was inaccurate and, as a result, didn’t go,” she said.

Spring did agree that RCTs can seem too slow for the world of mHealth, showing slides of an intervention built on PalmPilots that only recently made it through the trial process.

At another session, Abdul Shaikh of the National Cancer Institute returned to Spring’s question about standards of evidence, pointing out that other kinds of evidence become relevant when mobile health entrepreneurs have a range of options for their funding.

Gary Bennett of Duke University spoke about the disconnect between academics and entrepreneurs.

“We have a consumer market that doesn’t really privilege evidence,” he said, saying consumers are spending a lot of money on apps with no evidence behind them. “NIH funding is not all that plentiful right now. I’m not sure we have a sufficient amount of money to develop a market-ready app.”

He also echoed Cafazzo’s sentiments, saying that the Silicon Valley mentality of constant iteration didn’t mesh with the pace of an RCT.

“Many of my friends have had the experience of doing a trial and finding no one even has the device anymore,” he said. With his own project, iOTA, he said, “After that four years, our version 1.0 is not something we even want to disseminate.”

Bennett opted to release iOTA as an API rather than an app. He believes academics should play to their strengths, developing and validating evidence-based methods, and then put those methods in the hands of designers who can focus on marketing.

Anne DeGheest of MedStars, offering an investor’s perspective, said that evidence is important to investors, but that companies often privilege it to the exclusion of other relevant questions about a product.

“I give you the benefit of the doubt,” she said. “It works. So how big is the market? What’s the problem you’re solving? Are there a lot of people who are willing to pay for it? And then we go back and see if it works.”

Chris Bergstrom, Chief Strategy and Development Officer at WellDoc, also spoke in support of the status quo, pointing to his own company, which is both FDA-cleared and backed by RCT evidence, as an example of a promising company that followed the rules.

“No doctor’s going to prescribe a product if they don’t believe it’s an effective product that will move the needle,” Bergstrom said. “I don’t really see that changing. This has been how healthcare has operated for decades and I think it will be for the next few years.”

Bergstrom countered the idea that trials are too slow, saying a four-year development time is not unusual in other industries like automotive or mobile phones. He also said it’s possible to integrate a market design process with an RCT.

“That can be your base level of claims and in parallel you can be improving on top of that,” he said. “As you’re working on your commercial product, your trials are scaling.”

  • Ray Jones (http://twitter.com/rjonesplymouth)

    I agree with the arguments put forward by many reported in this article that a 3-4 year RCT process to examine the specifics of a particular app is too slow and so a waste of time and resources. But I also agree with Bergstrom that clinicians need to see the results of RCTs to be persuaded, and Spring that we need evidence of behaviour (and preferably health) change to be sure of the cost effectiveness of the e-health/m-health approach.

    These are not irreconcilable arguments. The key is in what is considered to be the intervention and what the control. Apps, websites, and the online world will be used by patients and will continue to evolve at a rapid pace. Each individual will have different health needs and different technological preferences and abilities. Many will use combinations of ‘tools’, so carrying out RCTs where one app is the intervention and ‘do nothing’ is the control is impossible. The key is to know how information, support, and communication impact on the patient’s wellbeing. RCTs are appropriate and still needed to assess the impact (and so the cost effectiveness) of providing that information, support, and communication. But this should not be formulated in terms of specific app(s) but in terms of a ‘functional specification’ of what is needed. So the academic world can carry out RCTs on the ‘functional specification’ (for app developers) or ‘passport to care’ (for patients) (in the form of a document) while software providers continue their rapid iterations of detailed development of apps and websites.

    A general study design for RCTs might be:
    1. Consultation with users on their varied needs for information, support, and communication, and agreement on the relevant outcomes to assess their wellbeing. Review of current apps, websites, and online resources. Drafting of the ‘functional specification’ and ‘passport to care’. Provision of the ‘functional specification’ to interested software developers.

    2. RCT
    INTERVENTION: patients proactively encouraged to make use of apps and online resources, provided with the ‘patient passport’, and perhaps some ‘e-health support’ [ref].
    CONTROL: patients passively continue to use whatever is available.
    ANALYSIS: intention-to-treat analysis addresses the question “Is the passport and online motivational support in using apps cost effective?”. ‘As-treated’ analysis and qualitative methods can be included to give more immediate feedback to developers.

    In the meantime, developers will continue to iterate improvements to products, while the slower RCT process provides evidence on the cost-effectiveness of the approach.

    What do you think?
    Ray Jones
    Professor of Health Informatics, Plymouth University, UK

    Ref: Sheaves B, Jones R, Williamson GR, Chauhan R. Phase 1 pilot study of e-mail support for people with long term conditions using the Internet. BMC Medical Informatics and Decision Making. 2011;11(20). Epub 5 Apr 2011.
