The days of claiming artificial intelligence as a feature that set one startup or company apart from the others are over. These days, one would be hard-pressed to find any technology company attracting venture funding or partnerships that doesn’t claim to use some form of machine learning. But for companies trying to innovate in healthcare using artificial intelligence, the stakes are considerably higher, meaning the hype surrounding the buzzword can deflate far more quickly than in other industries, where a mistaken algorithm doesn’t mean the difference between life and death.
Over the past five years, the number of digital health companies employing some form of artificial intelligence has dramatically increased. CB Insights tracked 100 AI-focused healthcare companies just this year, and noted that 50 had raised their first equity rounds since January 2015. Deals in the space grew from under 20 in 2012 to nearly 70 in 2016. A recent survey found that more than half of hospitals plan to adopt artificial intelligence within five years, and 35 percent plan to do so within two years. In Boston, Partners HealthCare just announced a 10-year collaboration with GE Healthcare to integrate deep learning technology across its network. The applications for AI go far beyond improving clinician workflow and processing claims faster.
“The problem we are trying to solve is one of productivity,” Andy Slavitt, the former acting administrator of the Centers for Medicare and Medicaid Services said during the Light Forum, a two-day conference that brought together CEOs, healthcare IT experts, policymakers and physicians at Stanford University last week. “We need to be taking care of more people with less resources, but if we chase too many problems and business models or try to invent new gadgets, that’s not going to change productivity. That’s where data and machine learning capabilities will come in."
Respondents to the hospital survey said the technology could have the most impact on population health, clinical decision support, diagnostic tools and precision medicine. Even drug development, real-world evidence collection and clinical trials could become faster, cheaper and more accurate with AI. But the time to put our full faith in AI has not yet arrived.
“The human brain is a really strong prior on what makes sense,” Andrew Maas, chief scientist and cofounder of Roam Analytics, said during the Light Forum. “Computers are powerful on assessing, but not on the level of reliability you will trust soon.”
How do we get there?
So everybody wants it, but just how soon will we see the purported transformation of healthcare from machine learning? Lately, we’ve seen it in everything from the most straightforward apps to the most complex diagnostic tasks, in forms ranging from natural language processing and image recognition to powerful algorithms crunching databases built on decades of medical research.
Like any other technology in healthcare, AI can’t be brought in without a mountain of extra challenges, including regulatory barriers, interoperability with legacy hospital IT systems, and serious limitations on access to the medical data needed to build powerful health-focused algorithms in the first place. But that’s not stopping innovation, albeit cautious innovation, and digital health stakeholders are realizing that unlocking AI’s full potential requires strategic partnerships, quality data, and a sober understanding of statistics.
As the understanding of AI in healthcare matures, the biggest names in technology aren’t shying away from those challenges. Just this week, Google announced it is extending its tried-and-true consumer-level machine learning capabilities into healthcare. Google Brain, the company’s research team, worked with the likes of Stanford and the University of California San Francisco to acquire de-identified data from millions of patients.
It’s more than that, as Google CEO Sundar Pichai explained at the tech giant’s Google I/O developer event last week. Last year, the company launched its Tensor Processing Units, custom chips powering what it describes as AI-first data centers.
“At Google, we are bringing all of our AI efforts together under Google.ai. It’s a collection of efforts and teams across the company focused on bringing the benefits of AI to everyone,” Pichai said. “Google.ai will focus on three areas: Research, Tools and Infrastructure, and Applied AI.”
In November, Google researchers published a paper in JAMA showing that a deep learning algorithm, trained on a large data set of retinal fundus images, can detect diabetic retinopathy with better than 90 percent accuracy. Pichai said pathology is another area where the company is looking to apply AI.
“This is a large data problem, but one which machine learning is uniquely equipped to solve,” he said. “So we built neural nets to detect cancer spreading to adjacent lymph nodes. It’s early days but our neural nets show a much higher degree of accuracy: 89 percent, compared to 73 percent. There are important caveats — we also have higher numbers of false positives — but already getting this in the hands of pathologists, they can improve diagnosis.”
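To make the caveat Pichai mentions concrete, here is a minimal sketch of how accuracy and false positive rate fall out of a binary classifier’s confusion matrix; the counts are invented for illustration and are not Google’s data.

```python
# A minimal sketch of the accuracy vs. false positive trade-off described above.
# The counts are invented for illustration; they are not Google's data.

def classification_metrics(tp, fp, tn, fn):
    """Basic metrics from a binary confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # share of true positives the model catches
    false_positive_rate = fp / (fp + tn)  # share of negatives wrongly flagged
    return accuracy, sensitivity, false_positive_rate

# Hypothetical counts: a model tuned to catch more cancers (higher sensitivity)
# will often flag more healthy tissue as suspicious (higher false positive rate).
acc, sens, fpr = classification_metrics(tp=89, fp=12, tn=88, fn=11)
print(f"accuracy={acc:.2f}  sensitivity={sens:.2f}  false_positive_rate={fpr:.2f}")
```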
Another example is Apple’s recent acquisition of AI company Lattice, which has a background in developing algorithms for healthcare applications.
Microsoft, too, is wading into the space. Just a couple of months ago, the company launched the Healthcare NExT initiative, which brings together artificial intelligence, cloud computing, research and industry partnerships. The initiative includes projects focused on genomics analysis and health chatbot technology, and a partnership with the University of Pittsburgh Medical Center. A couple of weeks ago, Microsoft partnered with data connectivity platform provider Validic to add patient engagement to its HealthVault Insights research project.
We’ve seen AI in various forms in lots of startups, too, from Ginger.io’s behavioral health monitoring and analytics platform and Sensely’s virtual assistants to fertility-tracking apps and wearables from companies like Ava – which just released research with the University of Zurich – and Clue. Others, like the recently launched Buoy Health, have created medicine-specific search engines. Buoy sources from more than 18,000 clinical papers covering 5 million patients and spanning 1,700 conditions. More than a simple symptom checker, Buoy starts by asking age, sex, and symptoms, then measures the answers against its proprietary, granular data to decide which questions to ask next. Over about two to three minutes, Buoy’s questions get more and more specific before offering individuals a list of possible conditions, along with options for what to do next.
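Buoy hasn’t published how its engine chooses the next question, but adaptive questioning like this is often built as a greedy loop that asks whichever question best splits the remaining candidate conditions. The sketch below is purely illustrative, with toy data and made-up condition profiles rather than Buoy’s actual model:

```python
import math

# Purely illustrative: toy condition profiles standing in for Buoy's proprietary
# data. Each hypothetical condition maps a question to a yes (1) / no (0) answer.
CONDITIONS = {
    "common cold": {"fever": 0, "cough": 1, "rash": 0},
    "influenza":   {"fever": 1, "cough": 1, "rash": 0},
    "measles":     {"fever": 1, "cough": 1, "rash": 1},
}

def entropy(candidates):
    """Uncertainty over the remaining candidates, assuming a uniform prior."""
    n = len(candidates)
    return math.log2(n) if n > 1 else 0.0

def best_question(candidates, asked):
    """Greedily pick the unasked question that most reduces uncertainty."""
    questions = {q for profile in candidates.values() for q in profile} - asked

    def expected_entropy(q):
        yes = {name: p for name, p in candidates.items() if p[q] == 1}
        no = {name: p for name, p in candidates.items() if p[q] == 0}
        n = len(candidates)
        return (len(yes) / n) * entropy(yes) + (len(no) / n) * entropy(no)

    return min(questions, key=expected_entropy) if questions else None

# Example: the user reported a fever, so only feverish conditions remain;
# the next most informative question is about the rash, not the cough.
remaining = {name: p for name, p in CONDITIONS.items() if p["fever"] == 1}
print(best_question(remaining, asked={"fever"}))  # -> "rash"
```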
Another promising area is medical imaging. In November, Israel-based Zebra Medical Vision, a machine-learning imaging analytics company, announced the launch of a new platform that allows people to upload their medical scans and receive analysis from anywhere with an internet connection. Zebra launched in 2014 with a mission to teach computers to automatically analyze medical images and diagnose various conditions, from bone health to cardiovascular disease. The company has steadily built up an imaging database, which it is combining with deep learning techniques to develop algorithms that automatically detect and diagnose medical conditions. Another Israeli company with a similar offering is AiDoc, which just raised $7 million.
But no matter how big and powerful the technology company may be, the availability of patient data is what makes the difference between a buzzword and an algorithm that can actually diagnose or predict outcomes. That’s why many companies are still in the training stage.
As Joe Lonsdale, CEO of venture capital firm 8VC, said during the Light Forum at Stanford, “The hard part is creating the data in the first place.”
Dr. Maya Peterson, a professor of biostatistics at the University of California Berkeley School of Public Health, offered an even more sober view.
“Relationships [between data] in the real world are complex, and we don’t fully understand them,” she said during HIMSS' Big Data and Healthcare Analytics Forum in San Francisco this week. “And machine learning is overly ambitious in a way, as we are going into more complex questions. That isn’t a good thing.”
A good algorithm is hard to build
Machines can only learn from the data provided them, so researchers, engineers and entrepreneurs alike are busy assembling larger and higher quality databases.
Last month, Alphabet-owned Verily launched the Project Baseline Study, a collaborative effort with Stanford Medicine and Duke University School of Medicine to amass a large collection of broad phenotypic health data in hopes of developing a well-defined reference of human health. Project Baseline aims to gather data from around 10,000 participants, each of whom will be followed for four years, and will use that data to develop a “baseline” map of human health as well as to gain insights about the transitions from health to disease. Data will come in a number of forms, including clinical, imaging, self-reported and behavioral data, as well as data from sensors and biospecimen samples. The study’s data repository will be built on Google computing infrastructure and hosted on Google Cloud Platform.
“If the government did data quality and data sharing initiatives, it would be a lot different,” Andrew Maas, chief scientist at Roam Analytics (a San Francisco-based machine learning analytics platform provider focused on life sciences) said at the Light Forum. “If the private sector wants to do that, and gather data in abundance, that’s great. Give us that data and we’ll be back and have something amazing in a year. But if data is not collected because people are scared, we can’t do anything.”
The availability of patient data and computing power means the difference between promises and actual impact. That brings us to IBM Watson Health, which has amassed giant amounts of data via numerous partnerships, using it to teach the cognitive computing models the company claims will unlock vast insights on patient health. With hard evidence yet to fully materialize, public opinion on IBM Watson is split. Some think it is the granddaddy of machine learning.
During the Light Forum, Chris Potts, Stanford University’s director of Linguistics and Computer Science as well as the chief scientist at Roam Analytics, said Watson is “arguably the most promising in health.” Others aren’t so sure – Social Capital CEO Chamath Palihapitiya called it “a joke.” But, as evidenced by the many collaborations we have reported on, that doesn’t seem to be hindering the company’s ability to take on new partners. Just last week, Watson Health joined with MAP Health Management to bring its machine learning capabilities to substance abuse disorder treatment, and IBM’s research arm is working with Sutter Health to develop methods to predict heart failure based on under-utilized EHR data.
IBM Watson actually got its start in 2011, when the machine won a game of Jeopardy, inspiring the company to get to work putting the technology to use.
“We had to train the technology for the medical domain, and there are many complexities there – it varies by specialty, and that’s all different in different parts of the world. We had to train system to understand language of medicine,” Shiva Kumar, Watson Health’s vice president and chief strategy officer said at the Light Forum. “The first step is natural language processing. Can you know enough to start deriving insights? Can you do that at the point that you engage in dialogue to come up with best possible answers? Talk to patient, go a step further, assimilate, continue moving on.”
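The “first step” Kumar describes, getting software to recognize the language of medicine, can be illustrated with a toy dictionary-based tagger. This is not IBM Watson’s method; real clinical NLP has to handle negation, abbreviations and specialty-specific vocabulary, but the sketch shows the basic idea of pulling structured entities out of free text:

```python
import re

# Toy illustration of medical natural language processing: a dictionary-based
# tagger that pulls known terms out of free-text notes. This is not IBM Watson's
# method; real clinical NLP must handle negation, abbreviations and synonyms.
MEDICAL_TERMS = {
    "myocardial infarction": "CONDITION",
    "heart failure": "CONDITION",
    "metformin": "DRUG",
    "ejection fraction": "MEASUREMENT",
}

def tag_entities(note):
    """Return (term, label, character offset) for each known term in a note."""
    found = []
    lowered = note.lower()
    for term, label in MEDICAL_TERMS.items():
        for match in re.finditer(re.escape(term), lowered):
            found.append((term, label, match.start()))
    return sorted(found, key=lambda hit: hit[2])

note = "Patient with prior myocardial infarction, on metformin; ejection fraction 35%."
print(tag_entities(note))
```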
To do that, IBM Watson must tackle the problem of unstructured data, Kumar explained.
“We tend to use word cognitive computing, because it goes beyond machine learning and deep learning. Being able to derive insights, being able to integrate, and learn.
“Healthcare is unique; it’s highly regulated, and has a ton of data it can’t use. And there are many silos,” he said. “So it’s a place where a lot of technology can improve it. But at the end of day, success is determined by practitioners.”
How to move forward
Many experts predict AI will hit healthcare in waves – Allscripts Analytics Chief Medical Officer Dr. Fatima Paruk told Becker’s Hospital Review she foresees the first applications in care management of chronic diseases, followed by developments that leverage the increasing availability of patient-centered health data along with environmental or socioeconomic factors. Next, genetic information, integrated into care management, will make precision medicine a reality.
One of the sectors where AI could make the biggest impact is also notoriously late to the technology game: pharmaceutical companies. But that’s starting to change.
During the Light Forum, Jeff Kindler, partner at Lux Capital and former chairman and CEO of Pfizer, called pharma the “classic example of the innovator’s dilemma,” because the industry has never been under enough financial pressure to be forced to shift its business model. But AI’s potential to speed up drug development is too compelling to pass up, although it will take more communication among healthcare stakeholders to figure out where to apply it.
“If you talk to payers, and they don’t know who pharma or big data or artificial intelligence, they think ‘I’m going to get screwed.’ So how does this trust gap get crossed?” Kindler said during the Light Forum. “Historically, pharma and device manufacturers were not distinguishing between the two because the data wasn’t available; it was like throwing darts. But as AI and machine learning becomes more robust, you will have a separation between costs of operation and costs that don’t matter because they are increasing efficiency.”
Efficiency is a key area for drug development, especially in light of shakeups at the FDA that could make AI even more readily impactful.
“I work in an industry where it takes 12 years to launch a product,” Judy Sewards, Pfizer’s vice president of digital strategy and data innovation said at the Light Forum. “That’s three presidential terms, or three World Cups. Over that time, it takes 1,600 scientists to look at research and 3,600 clinical trials involving thousands of patients. Where we start to think about AI is how can we speed up the process, make it smarter, connect breakthrough medicine and connect patients who need it the most?”
What’s bringing that to life, Sewards said, is the work Pfizer is doing with IBM Watson on immuno-oncology.
“Some worry that machines or AI will replace scientists or doctors, but it is actually more like they are the ultimate research assistant, or wingman,” she said.
Rajeev Ronanki, a principal in Deloitte’s life sciences and healthcare practice, told Becker’s Hospital Review there needs to be a confluence of three powerful forces to drive the machine learning trend forward: exponential data growth, faster distributed systems, and smarter algorithms that interpret and process that data. When that trifecta comes together, Ronanki forecasts, CIOs can expect returns in the form of cognitive insights that augment human decision-making, AI-based engagement tools, and AI automation within devices and processes to develop deep domain-specific expertise.
“We expect the growth to continue, with spending on machine intelligence expected to rise to $31.3 billion,” Ronanki told Becker’s, citing an IDC report.
“Where we are today is ground zero, basically,” Roam Analytics CEO and cofounder Alex Turkeltaub said during the Light Forum. “We’re more or less figuring out the commercial pathway, and at best using master’s level statistics, no more than that, because it’s hard to put data together and deal with regulation. Most of even the most cutting-edge deep learning algorithms were developed in the 60s, which were based on ideas from the 1600s. We’ve got to figure out a better way.”
Especially since, as Pfizer’s Judy Sewards pointed out: “In our industry, you need to be 100 percent. Error is someone’s life.”