Skin cancer is the most common type of cancer, with some 5.4 million cases in the United States each year, and it’s also the most treatable as long as it is detected early. However, not everyone has the same access to the dermatologists who could offer such promising prognoses, so a group of Stanford researchers set out to change that by creating an algorithm that can visually detect and potentially diagnose skin cancer.
In a study published this week in the journal Nature, researchers from Stanford’s Artificial Intelligence Laboratory explain how they fed the algorithm images as raw pixels with associated disease labels, requiring very little pre-processing of the data.
"We made a very powerful machine learning algorithm that learns from data, instead of writing into computer code exactly what to look for, you let the algorithm figure it ou," co-lead author Andre Esteva told Stanford’s news site.
First, they built on an existing Google-developed algorithm that had been trained on 1.28 million images to recognize 1,000 object categories. No comparable database of skin lesion images existed, however, so they went to work scraping images from the internet, many of which were labeled in foreign languages and varied widely in quality.
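In practical terms, “building on” a pretrained network means loading its learned weights and discarding its original 1,000-way output layer so that the visual features it learned can be reused. The short Python sketch below illustrates the idea; it assumes Keras’s InceptionV3 weights as a stand-in, since the article does not name the exact Google model.

```python
# Illustrative sketch only (not the authors' code): load a network pretrained
# on ImageNet, the ~1.28 million-image, 1,000-category dataset described above,
# so its learned visual features can be reused for skin lesion images.
# InceptionV3 is assumed here as a representative Google-developed classifier.
import tensorflow as tf

base_model = tf.keras.applications.InceptionV3(
    weights="imagenet",        # weights learned from the 1,000 object categories
    include_top=False,         # drop the original 1,000-way classification head
    input_shape=(299, 299, 3),
)
base_model.trainable = True    # leave the backbone open to fine-tuning on lesion data
```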
“There’s no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own,” said Brett Kuprel, graduate student and co-lead author of the paper. “We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy – the labels alone were in several languages, including German, Arabic and Latin.”
They then collaborated with Stanford dermatologists and microbiology and immunology professor Helen M. Blau to classify the images, ultimately compiling 130,000 images of skin lesions representing over 2,000 different diseases. They trained a convolutional neural network, or CNN, to classify skin lesions using only pixels and disease labels as inputs, then put it head-to-head with 21 board-certified dermatologists.
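A rough sketch of what that training step could look like appears below: the pretrained backbone from the earlier sketch gets a new classification head with one output per disease, and the whole network is fine-tuned on lesion photos grouped by label. The folder layout, class count, and hyperparameters here are assumptions for illustration, not details from the paper.

```python
# Rough fine-tuning sketch under the assumptions noted above: only raw pixels
# and disease labels go in, matching how the article describes the CNN's inputs.
import tensorflow as tf

NUM_CLASSES = 2000   # article says "over 2,000 different diseases"; exact count assumed

# Hypothetical directory layout: lesion_images/<disease_label>/<photo>.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesion_images/",
    image_size=(299, 299),
    batch_size=32,
)

backbone = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3)
)
backbone.trainable = True    # fine-tune every layer, not just the new head

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)  # scale pixels to [-1, 1]
x = backbone(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)  # one score per disease

model = tf.keras.Model(inputs, outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",   # integer disease labels from the folder names
    metrics=["accuracy"],
)
model.fit(train_ds, epochs=10)
```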
The testing involved biopsy-proven clinical images of the most common and deadliest types of skin cancer – malignant carcinomas and melanomas – provided by the University of Edinburgh and the International Skin Imaging Collaboration Project. Tasked with distinguishing benign lesions from the deadly ones, the algorithm matched the performance of the dermatologists.
“We realized it was feasible, not just to do something well, but as well as a human dermatologist,” Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory, said to the Stanford news site. “That’s when our thinking changed. That’s when we said, ‘Look, this is not just a class project for students, this is an opportunity to do something great for humanity.’”
While the algorithm currently runs only on a computer, the team intends to put it on smartphones to bring reliable skin cancer diagnosis to a much larger population. That will, of course, require rigorous validation – apps that promise to detect skin cancer with a smartphone camera have already been the subject of FTC action – but the researchers are hopeful that the technology could be a real option for the many people without access to a dermatologist.
“My main Eureka moment was when I realized how ubiquitous smartphones will be,” said Esteva. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?”