Artificial intelligence in dermatology

Authors: Manu Goyal, Research Scholar, Visual Computing Lab, Manchester Metropolitan University, Manchester, UK; A/Prof Moi Hoon Yap, Visual Computing Lab, Manchester Metropolitan University, Manchester, UK. DermNet NZ Editor in Chief: Adjunct A/Prof Amanda Oakley, Dermatologist, Hamilton, New Zealand. Copy edited by Gus Mitchell/Maria McGivern. July 2019.



What is artificial intelligence?

Artificial intelligence (AI) employs computer systems to perform tasks that normally require human intelligence, such as speech recognition and visual perception. AI relies on technologies and algorithms such as robotics, machine learning, and the internet to imitate the workings of the human brain. Given sufficient computational power and storage capacity, AI has the potential to outperform human beings at specific, well-defined tasks.

In medicine, computer vision algorithms have the potential to recognise abnormalities and diseases by evaluating colour, shape, and patterns [1].

Examples of AI applications include:

  • Technology to enable self-driving cars
  • Speech recognition algorithms to interact with humans, such as Apple's Siri, Amazon's Alexa, and Google Assistant
  • Language translation algorithms
  • Identification of dog breeds (one algorithm has been reported to have achieved an accuracy of more than 96%)
  • Prediction of user preferences such as a list of movies or targeted advertisements
  • Prediction of periods of high demand for a taxi or a flexible workforce [2,3].

How are artificial intelligence and deep learning used in medicine?

Deep learning algorithms for image analysis are based on convolutional neural networks. Neural networks are computational models inspired by the workings of the biological brain: a large number of connected nodes, called artificial neurones, behave in a way similar to biological neurones. These systems learn the features of an object by evaluating manually labelled data, such as ‘dog’ or ‘no dog’. The learned features can then be used to infer the nature of a new image.
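As an illustration, a minimal image classifier of this kind might look like the sketch below, written with the Keras library. The layer sizes, input dimensions, and the 'dog'/'no dog' task are assumptions for illustration, not a description of any particular published model.

```python
# A minimal sketch of a binary image classifier, assuming TensorFlow/Keras.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),        # RGB images, 128 x 128 pixels
    layers.Conv2D(16, 3, activation="relu"),  # early layers learn edges and corners
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),  # later layers learn higher-level shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),    # probability of 'dog' (vs 'no dog')
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(labelled_images, labels, epochs=10)  # train on manually labelled data
```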

Images are widely used to diagnose injury and disease and in studies of human anatomy and physiology. Advanced medical imaging techniques include magnetic resonance imaging (MRI), dual-energy X-ray absorptiometry, ultrasonography, and computed tomography (CT) [4–10].

In medical imaging, convolutional neural networks are used to classify images as ‘abnormal’ or ‘normal’. They are trained on large labelled databases of medical images and can match or exceed human vision in detecting abnormalities in areas such as:

  • Breast cancer
  • Brain tumour
  • Skin cancer
  • Alzheimer disease [11,12].

These computer algorithms will be scalable to multiple devices, platforms, and operating systems, reducing their cost and increasing their availability for diagnosis and research. Universities, governments, and research-funding agencies have recognised the opportunities to improve early diagnosis of diseases, such as cancer, heart disease, diabetes, and dementia, and are investing heavily in the sector.

AI techniques approved by the US Food and Drug Administration (FDA) for clinical use by September 2018 include products to:

  • Identify signs of diabetic retinopathy in retinal images
  • Recognise signs of stroke in CT scans
  • Visualise blood flow in the heart
  • Detect skin cancer from clinical images captured using a mobile app.

How is artificial intelligence used in skin cancer diagnosis?

According to the US Skin Cancer Foundation, more people are diagnosed with skin cancer each year in the US than all other cancers combined [13]. Skin cancers are commonly classified as melanoma or non-melanoma skin cancer (the keratinocytic cancers: basal cell carcinoma and squamous cell carcinoma). Skin cancers can be difficult to distinguish from common benign skin lesions, and the appearance of melanoma is especially variable. This means that:

  • Skin cancers can be missed because they are thought to be harmless
  • Large numbers of harmless lesions are unnecessarily surgically excised so as not to miss a potentially dangerous cancer.

Dermatologists examine skin lesions by visual inspection and dermoscopy, using their experience in pattern recognition to determine which skin lesions should be excised for diagnosis or treatment. In recent years, there has been huge interest in using AI algorithms to aid lesion diagnosis, and a number of skin lesion datasets are publicly available to support AI research.

Researchers at Stanford University performed dermatologist-level classification of skin cancer with a deep learning algorithm trained on a dataset of 129,450 clinical images covering 2,032 skin diseases [14]. They also tested their algorithm against 21 board-certified dermatologists and found its classification performance was on a par with that of the experts.

The International Skin Imaging Collaboration (ISIC) offers an extensive public dataset that, in September 2018, contained 23,906 digital dermoscopic images of more than 18 types of skin lesion. Since 2016, ISIC has also conducted a yearly ‘Skin lesion analysis towards melanoma detection’ challenge. The winner of the 2017 challenge achieved more than 98% accuracy in distinguishing melanomas from benign moles [15]. ISIC then included more categories of skin lesion in the 2018 challenge, such as basal cell carcinoma and actinic keratosis. We can expect improved accuracy and more categories of skin lesion to be added to the competition every year.

Machine learning algorithms for skin lesions

To create a new machine learning skin cancer algorithm, each type of skin lesion is assigned a class. At its simplest, there may be just two classes; for example, ‘benign’ and ‘malignant’, or ‘naevus’ and ‘melanoma’. More sophisticated algorithms can assess multiple classes.
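In code, the number of classes typically shows up only in the final layer of the network. A minimal sketch, assuming TensorFlow/Keras; the class names below are the examples given above, and the specific encoding is an illustrative assumption:

```python
import tensorflow as tf

# Two classes ('naevus' vs 'melanoma'): a single sigmoid unit outputs one probability.
binary_head = tf.keras.layers.Dense(1, activation="sigmoid")

# Multiple classes: one softmax unit per lesion type (illustrative class list).
lesion_classes = ["naevus", "melanoma", "basal cell carcinoma", "actinic keratosis"]
multiclass_head = tf.keras.layers.Dense(len(lesion_classes), activation="softmax")
```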

Before it can classify a new image, a deep learning algorithm must be trained on a large number of images in each class. The process involves three main stages (figure 1).

Stage 1

In stage 1, the algorithm is fed with digital macroscopic or dermoscopic images labelled with the 'ground truth'. (The ground truth in this context is the lesion diagnosis, which is assigned by an expert dermatologist or is the result of histopathological examination.)

Figure 1. Overview of training of different types of skin lesions with the help of deep learning
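In practice, the ground-truth label is often encoded in the way the image files are organised. A minimal sketch, assuming a Keras-style workflow and a hypothetical lesion_images/ directory with one subfolder per diagnosis:

```python
import tensorflow as tf

# Hypothetical layout: lesion_images/melanoma/..., lesion_images/naevus/..., etc.
# Each subfolder name serves as the ground-truth label for the images inside it.
train_data = tf.keras.utils.image_dataset_from_directory(
    "lesion_images",
    labels="inferred",         # read labels from the subfolder names
    image_size=(128, 128),     # resize all images to a common input size
    batch_size=32,
)
print(train_data.class_names)  # e.g. ['melanoma', 'naevus']
```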

Stage 2

In stage 2, convolutional layers (a series of filters applied to the input, such as an image) extract feature maps from the images. A feature map represents the data at a particular level of abstraction.

  • Initial convolutional layers extract low-level features like edges, corners, and shapes.
  • Later convolutional layers extract high-level features to detect the type of skin lesion (figure 2).

Figure 2. Typical feature maps learned using convolutional neural networks
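Feature maps like those in figure 2 can be inspected directly. A minimal sketch, assuming the Keras model from the earlier snippet (or any built convolutional model):

```python
import tensorflow as tf

# Build a sub-model that returns the output of every convolutional layer,
# i.e. the feature maps at increasing levels of abstraction.
conv_outputs = [layer.output for layer in model.layers
                if isinstance(layer, tf.keras.layers.Conv2D)]
feature_extractor = tf.keras.Model(inputs=model.inputs, outputs=conv_outputs)

# feature_maps[0] would hold low-level features (edges, corners);
# feature_maps[-1] higher-level features of the object or lesion.
# feature_maps = feature_extractor.predict(batch_of_images)
```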

Stage 3

In stage 3, the feature maps are used by the machine learning classifier for pattern recognition of different classes of skin lesion. The deep learning algorithm can now be used to classify a new image (figure 3).

Figure 3. Inference produced by deep learning algorithms on a new image of a skin lesion
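A minimal inference sketch, again assuming the trained binary Keras model from the earlier snippets and a hypothetical image file new_lesion.jpg:

```python
import numpy as np
import tensorflow as tf

# Load a new lesion image and preprocess it to match the training input size.
image = tf.keras.utils.load_img("new_lesion.jpg", target_size=(128, 128))
batch = np.expand_dims(tf.keras.utils.img_to_array(image), axis=0)

probability = float(model.predict(batch)[0][0])  # sigmoid output in [0, 1]
label = "melanoma" if probability >= 0.5 else "naevus"
print(f"{label} (probability of melanoma: {probability:.2f})")
```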

ABCD criteria

The clinical ABCD criteria used by non-experts to screen pigmented skin lesions are Asymmetry, Border irregularity, Colour variation, and Diameter over 6 mm (figure 4). See ABCDEs of melanoma, which includes 'E' for Evolution.

A: The asymmetry property checks whether the two halves of the skin lesion match in colour and shape. The lesion is divided into two halves along its long axis and again along its short axis. Melanoma is likely to have an asymmetrical appearance.

B: The border property assesses whether the edges of the skin lesion are smooth and well defined. Skin cancers tend to have irregular borders.

C: The colour property assesses the number and variability of colours throughout the skin lesion. Melanoma and pigmented basal cell carcinoma often include shades of 3–6 colours (black, tan, dark brown, grey, blue, red, and white), whereas naevi and freckles tend to have only one or two colours, which are symmetrically distributed.

D: The diameter property measures the approximate diameter of the skin lesion. The diameter of malignant skin lesions is generally greater than 6 mm (the size of a pencil eraser).

Figure 4. The ABCD rule for skin cancer
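Computerised versions of these criteria reduce each property to a number. The sketch below is an illustrative approximation only (not a validated scoring system), assuming the lesion has already been segmented into a boolean mask and an RGB image, both supplied as NumPy arrays:

```python
import numpy as np

def abcd_features(mask: np.ndarray, image: np.ndarray) -> dict:
    """Crude ABCD-style measurements. `mask` is a boolean lesion segmentation
    and `image` an RGB array of the same height and width. The formulas are
    illustrative approximations, not a validated clinical score."""
    # A: asymmetry -- how poorly the lesion overlaps its left-right mirror image
    # (0 = perfectly symmetrical, towards 1 = highly asymmetrical).
    mirrored = mask[:, ::-1]
    asymmetry = 1.0 - (mask & mirrored).sum() / max((mask | mirrored).sum(), 1)

    # B: border irregularity -- boundary pixels relative to lesion area;
    # smooth, compact borders give lower values than ragged ones.
    interior = (mask & np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                     & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    border_irregularity = (mask & ~interior).sum() / max(mask.sum(), 1)

    # C: colour variation -- spread of pixel colours inside the lesion.
    colour_variation = float(image[mask].std())

    # D: diameter -- widest extent of the lesion in pixels (a real system
    # would convert pixels to millimetres using the known image scale).
    rows, cols = np.nonzero(mask)
    diameter_px = int(max(rows.max() - rows.min(), cols.max() - cols.min()) + 1)

    return {"asymmetry": asymmetry, "border": border_irregularity,
            "colour": colour_variation, "diameter_px": diameter_px}
```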

Yang and colleagues proposed adapting the ABCD rule for image processing and machine learning algorithms [16]. They compared the performance of their system with that of doctors (general, junior, and expert) and of deep learning algorithms on a skin lesion testing dataset (table 1), inviting two doctors from each category to perform the task.

They trained their system on their SD-198 skin disease dataset of 6,584 clinical images from 198 lesion categories, extracting low-level features from three visual components: texture, colour, and borders. Yang’s computer-aided diagnosis (CAD) system performed better than the VGGNet and ResNet deep learning algorithms and was comparable with the junior doctors; however, the expert dermatologists were significantly superior to the CAD system.

Table 1. Performance evaluation of computerised methods and doctors on a testing set of 3,292 images

Method                         Accuracy (%)   Standard error
Yang CAD system                56.47          53.15
General doctors (n=2)          49.00          47.50
Junior doctors (n=2)           52.00          53.40
Expert dermatologists (n=2)    83.29          85.00
VGGNet (deep learning)         50.27          48.25
ResNet (deep learning)         53.35          51.25

Other machine learning research on skin cancer

IBM is also working on an AI tool, Watson, to analyse skin lesion images for the detection of melanoma. Watson uses six key features to determine the probability of melanoma: colour, border irregularity, degree of asymmetry, globules and network, similarity to skin lesion images in its database, and an overall melanoma score; these criteria are similar to the ABCD criteria [17].

MetaOptima Technology Inc. has launched its DermEngine platform to provide a teledermatology service. Its visual search tool compares a user-submitted image with similar images in a database of thousands of pathology-labelled images gathered from expert dermatologists around the world. Deep learning techniques are used to find related images based on visual features such as colour, shape, and pattern [18].
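Content-based image retrieval of this kind is commonly implemented by comparing feature vectors. A minimal sketch, assuming each image has already been reduced to a feature embedding by a convolutional network; the random embeddings here merely stand in for real CNN features:

```python
import numpy as np

def most_similar(query: np.ndarray, database: np.ndarray, top_k: int = 5):
    """Return indices of the top_k database embeddings most similar to the
    query, by cosine similarity. Shapes: query (d,), database (n, d)."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    similarity = db @ q                      # cosine similarity with each image
    return np.argsort(similarity)[::-1][:top_k]

# Example with random embeddings standing in for CNN features:
rng = np.random.default_rng(0)
database = rng.normal(size=(1000, 128))      # 1,000 images, 128-d features each
query = rng.normal(size=128)
print(most_similar(query, database))         # indices of the 5 closest images
```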

What is the future of artificial intelligence and skin cancer diagnosis?

Research involving AI is making encouraging progress in the diagnosis of skin lesions. However, AI is not going to replace medical experts in the near future. For one thing, a human is still needed to select the appropriate lesion for evaluation, often from among hundreds of unimportant ones.

Medical diagnosis relies on taking a careful medical history and perusing the patient’s records. It takes into account the patient's ethnicity; skin, hair, and eye colour; occupation; illnesses; medicines; existing sun damage; the number of melanocytic naevi; and lifestyle habits (such as sun exposure, smoking, and alcohol intake). The behaviour and previous treatment of the lesion are also clues to the diagnosis.

AI can offer a second opinion and can be used to screen out an entirely benign lesion, such as a melanocytic naevus that is symmetrical in colour and structure.

These algorithms will inevitably evolve, with improved accuracy in the detection of potentially malignant skin lesions, as databases expand to include more images and more patient- and lesion-specific labels.

 

References

  1. Chollet F. Building powerful image classification models using very little data. The Keras Blog. 5 June 2016. Available at: blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html (accessed 22 July 2019).
  2. Guizzo E. How Google’s self-driving car works. IEEE Spectrum. 18 October 2011. Available at: spectrum.ieee.org/automaton/robotics/artificial-intelligence/how-google-self-driving-car-works (accessed 22 July 2019).
  3. Hinton G, Deng L, Yu D, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine 2012; 29: 82–97. Journal
  4. Ahmad E, Goyal M, McPhee JS, Degens H, Yap MH. Semantic segmentation of human thigh quadriceps muscle in magnetic resonance images. Cornell University. arXiv:1801.00415. 2018. Available at: https://arxiv.org/abs/1801.00415 (accessed 23 July 2019).
  5. Goyal M, Reeves ND, Davison AK, Rajbhandari S, Spragg J, Yap MH. DFUNet: Convolutional neural networks for diabetic foot ulcer classification. Cornell University. arXiv:1711.10448. 2017. Available at: https://arxiv.org/abs/1711.10448 (accessed 23 July 2019).
  6. Alarifi JS, Goyal M, Davison AK, Dancey D, Khan R, Yap MH. Facial skin classification using convolutional neural networks. 2017. Available at: https://link.springer.com/chapter/10.1007/978-3-319-59876-5_53 (accessed 23 July 2019).
  7. Goyal M, Yap MH, Reeves ND, Rajbhandari S, Spragg J. Fully convolutional networks for diabetic foot ulcer segmentation. 2017 IEEE International Conference on Systems, Man, and Cybernetics (SMC). IEEE Xplore. Available at: https://ieeexplore.ieee.org/document/8122675 (accessed 23 July 2019).
  8. Goyal M, Reeves N, Rajbhandari S, Yap MH. Robust methods for real-time diabetic foot ulcer detection and localization on mobile devices. IEEE J Biomed Health Inform 2019; 23: 1730–41. DOI: 10.1109/JBHI.2018.2868656. PubMed
  9. Goyal M, Yap MH. Multi-class semantic segmentation of skin lesions via fully convolutional networks. Cornell University. arXiv:1711.10449. 2017. Available at: https://arxiv.org/abs/1711.10449 (accessed 23 July 2019).
  10. Yap MH, Pons G, Martí J, et al. Automated breast ultrasound lesions detection using convolutional neural networks. IEEE J Biomed Health Inform 2018; 22: 1218–26. DOI: 10.1109/JBHI.2017.2731873. PubMed
  11. Evans GW. Artificial intelligence: Where we came from, where we are now, and where we are going. University of Victoria. 11 July 2017. Available at: https://dspace.library.uvic.ca/handle/1828/8314 (accessed 23 July 2019).
  12. Weidlich V, Weidlich GA. Artificial intelligence in medicine and radiation oncology. Cureus 2018; 10: e2475. DOI: 10.7759/cureus.2475. PubMed Central
  13. American Cancer Society. Cancer facts and figures 2018. Atlanta: American Cancer Society, 2018. Available at: www.cancer.org/content/dam/cancer-org/research/cancer-facts-and-statistics/annual-cancer-facts-and-figures/2018/cancer-facts-and-figures-2018.pdf (accessed 3 May 2018).
  14. Esteva A, Kuprel B, Novoa RA, et al. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017; 542: 115–8. Journal
  15. Codella NC, Gutman D, Celebi ME, et al. Skin lesion analysis toward melanoma detection: A challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), hosted by the International Skin Imaging Collaboration (ISIC). Cornell University. arXiv:1710.05006. 2018. Available at: https://arxiv.org/abs/1710.05006 (accessed 23 July 2019).
  16. Yang J, Liang J, Sun X, Rosin PL. Clinical skin lesion diagnosis using representations inspired by dermatologist criteria. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE Xplore. Available at: https://ieeexplore.ieee.org/document/8578235 (accessed 7 July 2019).
  17. IBM. How Watson is learning to identify melanoma. Available at: www.ibm.com/cognitive/au-en/melanoma/ (accessed 7 July 2019).
  18. MetaOptima Technology Inc. What visual search can do for you. Available at: www.dermengine.com/en-ca/visual-search (accessed 7 July 2019).
