Brisbane, Australia-based digital health company ResApp has been talking up its Smartcough-C study, a multisite, double-blind trial of the company's smartphone-based respiratory analysis software, for months. Conducted at Massachusetts General Hospital, Cleveland Clinic, and Texas Children's Hospital, the study was intended to support ResApp's FDA submission for several respiratory conditions.
However, the topline results are in, and things did not go according to plan. Citing unexpected issues with the clinical data, the company says the study failed to meet most of its primary endpoints for both positive and negative agreement.
“These are not the results that we expected given our experience in Australia,” Tony Keating, CEO and managing director of ResApp Health, said in a statement. “It is obvious that a large number of tests have been affected by procedural anomalies and we now need to go through each case one by one to fully understand the results. I am confident that we can add another layer of detail to the next set of study protocols to deliver robust results and that we have adequate funding to complete a second US pediatric pivotal clinical study this US winter as well as continue and complete our adult program, including our US adult pivotal study which is also set to begin this US winter.”
ResApp's technology uses algorithms to identify respiratory conditions from recordings of cough sounds. Studies in the company's native Australia have returned promising results. In April 2016, ResApp reported an accuracy of 89 percent in a clinical study of 524 pediatric patients conducted at Joondalup Health Campus (JHC) and Princess Margaret Hospital (PMH) in Perth, Western Australia. In a smaller trial of 243 adult patients, also at Joondalup, the company saw accuracy between 91 and 100 percent.
But the Smartcough-C results appear to have been tainted by two data collection anomalies. First, contrary to instructions, many patients were treated before the recording was made, leading to a high level of inaccuracy. Second, a number of the recordings had too much interference to be used in the study. When those recordings were excluded, even the few conditions that achieved high accuracy, such as bronchitis, ended up with such small sample sizes that they might still not be useful for an FDA submission.
Across conditions, agreement with clinical diagnosis ranged from just 36 percent positive agreement for asthma to 95 percent negative agreement for bronchiolitis.
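For context, in diagnostic accuracy studies of this kind, positive agreement is the share of clinician-diagnosed cases the algorithm also flags (true positives divided by all reference-positive cases), while negative agreement is the share of cases without the diagnosis that the algorithm correctly rules out (true negatives divided by all reference-negative cases). A 36 percent positive agreement for asthma would mean the software identified roughly one in three of the asthma cases that clinicians diagnosed.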
This is a major setback for the company, but it hopes to apply the lessons from these mistakes to future studies.
"The Smartcough-C data provides a valuable insight into the recruited US population and into US diagnosis practices," Dr. Udantha Abeyratne, chief scientist at ResApp Health, said in a statement. "We can use this study data to retrain the algorithms to capture such differences and significantly boost the robustness of our algorithms as well as refine study procedures at the participating hospitals to deliver results which are more representative of the algorithms' capabilities."
This isn't the study's first complication. In April, ResApp announced it had to expand enrollment to compensate for unseasonably low pneumonia rates in the study population.