How AI will transform Health Care and your life
Health care systems around the world face a fundamental financial crisis. Training medical staff takes years and significant cost, and at the same time we face a rise in complex diseases that are expensive to treat, alongside an ageing population.
Moreover, there is the issue of misdiagnosis, and of the time taken to reach a correct diagnosis, caused by information overload across the system. These failures stem partly from overworked medical staff and partly from handling Big Data with inadequate information management systems.
There is a real cost to both misdiagnosis and delayed diagnosis. These costs are economic, but they are also personal, as patients and their families deal with the consequences of shattered lives.
Across the OECD countries, the younger age group (the future workforce that generates tax receipts, and on average a lower user of health care) is shrinking in number while medical treatment costs continue to rise. Many non-OECD countries, meanwhile, face growing populations with a shortage of specialised medical staff.
A solution to this impending crisis lies in the intelligent application of Big Data analytics and, in particular, advanced Artificial Intelligence technologies. Used intelligently, these technologies can relieve the pressure points and reduce costs across the overall health care system, while at the micro level delivering better results for the individual patient.
Furthermore, these technologies can help the insurance sector and hospitals manage costs efficiently, by reducing readmission rates and supporting correct treatment decisions.
Areas to be discussed in future articles include:
- Robotic Surgery
- Medical Imaging
- Electronic Health Record (EHR) analysis
- Clinical Trials
- Drug Discovery
- Prevention with wearables and real time analytics
Convolutional Neural Networks (CNNs) are particularly powerful for computer vision problems with large data sets and have had a major impact across many sectors of the economy. Examples include finance and retail, where they automate payments with face recognition and power visual search, and insurance, where they automate the claims verification process by classifying the make and type of the vehicle on a policy and checking that it matches the one in the picture sent by the claimant.
The next section of this article focusses on skin image analysis, cancer and Medical Imaging with the role of Convolutional Neural Networks (CNNs).
Cancer is a terrible disease, with huge costs both economically and for the patient and those close to them. The earlier it can be diagnosed, the better the probability of a successful outcome. Applying Deep Learning computer vision technologies to images from various scans increases the likelihood of successful diagnosis and quicker treatment.
The application of CNNs has huge potential impact in this area. The chart above, from Signify Research, shows the rapid growth of AI technologies for image analysis, with revenues forecast to hit $2Bn by 2023 and Deep Learning set to play the major role. A further advantage of deploying such technology is that it can alleviate the skills gap, since training medical staff takes years and significant investment.
The Deep Learn Strategies (DLS) team has worked on an advanced application of this technology to a data set of skin lesions, in order to demonstrate its potential to help solve the problems that the health care system faces.
The aim is to augment the medical staff rather than replace them.
Furthermore, our team has demonstrated the concept of explainable AI by visualising the CNN's decisions. This is important in the health care sector, where medical staff need to understand and trust the machine's decisions with confidence.
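The article does not name the visualisation method used, but one common technique for this kind of attention heat map is Grad-CAM: the gradients of the predicted class with respect to the last convolutional layer's feature maps are pooled into per-channel weights, and the weighted, ReLU-ed sum of the feature maps highlights where the network is looking. A minimal NumPy sketch, with synthetic activations and gradients standing in for a real network:

```python
import numpy as np

def grad_cam_heatmap(feature_maps, gradients):
    """Compute a Grad-CAM style heat map.

    feature_maps: (H, W, K) activations of the last conv layer
    gradients:    (H, W, K) gradients of the class score w.r.t. those activations
    """
    # Channel weights: global-average-pool the gradients (one weight per map)
    alphas = gradients.mean(axis=(0, 1))                 # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    cam = np.maximum((feature_maps * alphas).sum(axis=-1), 0.0)
    # Normalise to [0, 1] so it can be rendered as a heat-map overlay
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example with synthetic activations and gradients
rng = np.random.default_rng(0)
fmaps = rng.random((7, 7, 32))
grads = rng.standard_normal((7, 7, 32))
heat = grad_cam_heatmap(fmaps, grads)
print(heat.shape)  # (7, 7)
```

In practice the small heat map is upsampled to the input resolution and overlaid on the original photograph, which is what produces the red "highest attention" regions described below.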
We took a large dataset labelled with 7 different types of skin lesion, one of which is melanoma. We trained a state-of-the-art CNN to learn the patterns in the images so that the network could classify an unlabelled image into one of the seven classes. The end-to-end Deep Learning architecture consists of data pre-processing followed by the Deep Neural Network for training, and then a visualisation step that shows which areas of the image the network focusses its attention on to reach its decision.
The CNN was trained on NVIDIA GPUs using TensorFlow as the Deep Learning framework. We tried two backbone architectures, Inception V3 and DenseNet, and found that DenseNet performed better, which we attribute to the dense connections between its convolutional layers. The data set is challenging because many of its images are difficult to classify even for the human eye; Deep Learning copes with this relatively well because the network learns from thousands of images.
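The team's training code is not published, but a backbone swap of this kind can be sketched with tf.keras, where both architectures are available off the shelf. The input size, pooling head and randomly initialised weights below are assumptions for illustration:

```python
import tensorflow as tf

NUM_CLASSES = 7  # the seven skin-lesion categories

def build_classifier(backbone_name="densenet"):
    """Attach a 7-way softmax head to a chosen CNN backbone."""
    if backbone_name == "densenet":
        base = tf.keras.applications.DenseNet121(
            include_top=False, weights=None, input_shape=(224, 224, 3))
    else:
        base = tf.keras.applications.InceptionV3(
            include_top=False, weights=None, input_shape=(224, 224, 3))
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
    return tf.keras.Model(base.input, out)

model = build_classifier("densenet")
print(model.output_shape)  # (None, 7)
```

Keeping the head identical while swapping only the backbone is what makes the DenseNet-versus-Inception comparison a fair one.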
We used a dataset of approximately 10,000 images, with 90% for training and 10% for testing. Published research puts dermatologists' accuracy in detecting and identifying cancer at around 86.6%. Our CNN achieved an accuracy of 90% across the seven classes. We believe the accuracy would have been higher still but for the class imbalance within the dataset, and in particular if we had had access to a larger dataset.
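A 90/10 split can be sketched with scikit-learn's train_test_split (not necessarily the tool the team used); stratifying on the labels is one common way to mitigate the class-imbalance problem mentioned above, since it keeps the class mix similar in both splits. The labels here are synthetic stand-ins for the real lesion annotations:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the ~10,000 labelled lesion images (7 classes)
rng = np.random.default_rng(42)
labels = rng.integers(0, 7, size=10_000)
indices = np.arange(len(labels))

# 90% train / 10% test; stratify preserves the per-class proportions,
# which matters when some lesion types are much rarer than others
train_idx, test_idx = train_test_split(
    indices, test_size=0.10, stratify=labels, random_state=0)

print(len(train_idx), len(test_idx))  # 9000 1000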
Recently, a team from Google reported 99% accuracy on a different and larger dataset relating to lung cancer (our work was on skin lesions). Google's results demonstrate the benefit of hospitals and regulators enabling data to be shared, with anonymisation to protect patient identity, so that the startup community can access larger datasets and in the process show how this technology can help save lives.
Our CNN took three days to train on an NVIDIA Titan X GPU. We believe the visualisations of the CNN's output can help dermatologists focus their attention on the parts of the image the CNN identifies as showing indications of cancer.
Some of the visualisations are shown below for a type of skin lesion:
Image A is the original image, usually captured with a camera, and image B is a heat map visualising the network's attention, with colours indicating different levels of attention and red marking the highest.
[Figure panels: A. Original image, B. Heat map; C. Original image, D. Heat map]
Another example, shown in C, is a benign keratosis-like lesion, with image D as its heat map. Our network can process images that are difficult even for the human eye to analyse.
This article is intended to demonstrate the practical application of AI Deep Learning technology to medical imaging, with the potential to alleviate the stress on the health care system whilst providing better outcomes for the individual patient.
We recognise that there may be a degree of pushback from traditionalists and those suspicious of new technologies, and we recommend that such technology be trialled in a controlled environment and rigorously tested before being deployed at scale. However, the cost of not moving forward with such technology is to continue down a path of high costs for both the overall health care system and the individual patient: even an improvement of three to four percentage points in diagnostic accuracy has a significant impact once the overall population and health care system are taken into account.
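To make that population-level point concrete, here is a back-of-the-envelope calculation using the two accuracy figures reported earlier; the one-million screening volume is a hypothetical figure for illustration, not a number from the article:

```python
# Illustrative only: the screening volume is hypothetical
screenings_per_year = 1_000_000
baseline_accuracy = 0.866   # published dermatologist accuracy cited above
model_accuracy = 0.90       # CNN accuracy reported above

# Additional correct classifications per year at this volume
extra_correct = round(screenings_per_year * (model_accuracy - baseline_accuracy))
print(extra_correct)  # 34000
```

At that hypothetical volume, a 3.4-percentage-point gain corresponds to tens of thousands of additional correct classifications per year, which is the scale effect the argument relies on.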
Imtiaz Adam MSc Computer Science (AI and Machine Learning Engineer), Sloan Fellow in Strategy & The DLS team www.dls.ltd
Glossary of terms
Deep Learning is a field concerned with the application of neural networks that have hidden layers between the input and output layers.
CNN – Convolutional Neural Network. A deep neural network technique commonly used for computer vision when the data set is large enough and access to a GPU is available.
GPU – Graphical Processing Unit, is a parallel computing hardware allowing for massive computations through multiple computing cores.
DenseNet is a CNN architecture with dense connections between the layers.
InceptionV3 is a CNN architecture with kernels of different sizes within the same layer.
Kernels are weight matrices used in convolutions with images to generate feature maps in a CNN.
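To make the kernel definition concrete, here is a minimal sketch of a single 2-D convolution producing one feature map (as in CNN libraries, it is implemented as cross-correlation, i.e. without flipping the kernel); the image and kernel values are toy examples:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid 2-D convolution (no padding) producing one feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Each output value is the sum of an elementwise product
            # between the kernel and the image patch under it
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])   # crude vertical-edge detector
fmap = convolve2d(image, edge_kernel)
print(fmap.shape)  # (3, 3)
```

A CNN layer learns many such kernels at once, and sliding each one across the image is what produces the stack of feature maps the next layer consumes.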
Article Source: https://www.linkedin.com/pulse/using-ai-explainable-deep-learning-help-save-lives-imtiaz-adam/?published=t
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No GA727816.
© Copyright 2018 PULSE project – All Rights Reserved