The Fundamentals of AI in Medicine

Introduction

"If you can't explain it simply, you don't understand it well enough."

- Albert Einstein, Nobel Prize-winning physicist

Technologists contend that the world will change more in the next 20 years than it did in the previous 300.1  They say that the reason for this remarkable change will be the widespread application of an array of technologies commonly referred to as artificial intelligence, or AI.  Often statements such as these are more provocation than they are accurate prediction — and predicting the future has always been a dicey business. Nonetheless, there is widespread agreement that the technology that underlies AI will be disruptive over the coming decades, across all industries.  Healthcare will be no exception.

Table 1 provides a non-exhaustive list of current and potential applications of AI in medicine.  The table makes it clear: AI will change the way we diagnose patients. It will change the way we treat patients.  It will change the way we make new discoveries. Its presence will be more ubiquitous than the stethoscope. And, like any new technology in medicine — particularly one as fundamental as this — it’s worth understanding how the technology works before it ends up in front of us and our patients.  Those who don’t understand the fundamentals and basic tenets of AI may find themselves feeling left behind, walking around in a world (or hospital) full of technology that feels more like science fiction than reality.

The purpose of this article is to give clinicians a conceptual understanding of AI.  It is the first in a series of articles designed to get clinicians up to speed on artificial intelligence, machine learning, and deep learning within the context of healthcare.  It’s a guide intended to be accessible to all clinicians, regardless of aptitude or affection for mathematics or technology as a whole.  In this article, we will explain things simply, in plain English, in a manner that will enable clinicians to achieve a gut-level understanding of what these technologies are about.  Subsequent articles in the series will discuss how these technologies are likely to be applied to clinical medicine, followed by in-depth discussions of the social, economic, and legal implications of these technologies.  Taken together, these articles are designed to provide readers with an overarching picture of AI in healthcare.

Let me pause to attempt to instill a bit of humanity, camaraderie, and motivation for this journey.  The mission to better understand these technologies and their application in medicine is critically important, and extremely exciting.  We are very fortunate to be the ones at the forefront of medicine at the precise time that these technologies are maturing and transitioning from “bench” to bedside.  Of course, with this tremendous opportunity comes tremendous responsibility, and it is our generation’s responsibility to ensure that we harness the promise of AI in healthcare and avoid its hazards.  In order to fulfill this responsibility, we have to go beyond the abstractions of a philosopher in an armchair and understand the technology at a deeper level. We need to engage with the details of how machines see the world — what they “want”, their potential biases and failure modes, their temperamental quirks — because only then can we intelligently shape our roadmaps and policies with respect to AI.  We need a grasp of the fundamental principles, just as we did when we learned about antibiotics in medical school. And, since none of us had AI as part of our medical school curriculum, we have a bit of work to do. But, this kind of responsibility is something that we in healthcare are quite accustomed to. It is our job, and it is what we love.

And so we begin.  Today is a great time to start understanding how machines think.

First Principles:  The Difference between AI, Machine Learning, and Deep Learning

"One bit of advice:  It is important to view knowledge as sort of a semantic tree — make sure you understand the fundamental principles, i.e., the trunk and big branches, before you get into the leaves/details or there is nothing for them to hang on to."

- Elon Musk, Technology entrepreneur, investor, and engineer, CEO Tesla & SpaceX

AI, machine learning, and deep learning are not exactly the same thing.  Think of the relationship between AI, machine learning, and deep learning as concentric circles (Figure 1).  The largest circle is AI — the big idea that came first. Machine learning is a subset of AI — it’s an approach to AI concerned with one of the hallmarks of human intelligence, the ability to learn.  Deep learning is the smallest circle, and it fits within both of the bigger ones — it’s a technique for implementing machine learning that’s become one of the major drivers of today’s AI boom.  

In other words, all of machine learning is AI, but not all of AI is machine learning…  and so on.

Artificial Intelligence

"Artificial intelligence is getting computers to do things that traditionally require human intelligence..."

- Pedro Domingos, Professor of Computer Science and Engineering, University of Washington

Artificial intelligence is a field within computer science.  It’s a multidisciplinary field concerned with understanding and building intelligent entities, often in the form of software programs.  Its foundations include mathematics, logic, philosophy, probability, statistics, linguistics, neuroscience, and decision theory. Many subfields also fall under the umbrella of AI, such as computer vision, robotics, machine learning, and natural language processing.  Putting it all together, the goal of this constellation of fields and technologies is to get computers to do what had previously required human intelligence. The concept of AI has been in the heads of dreamers, researchers, and computer scientists for at least 70 years.  The computer scientist John McCarthy coined the term and birthed the field of AI when he brought together a group of pioneering minds for a conference at Dartmouth College in the summer of 1956 to discuss the development of machines that possessed characteristics of human intelligence.2  Over the decades, the field has gone through periods of optimism and pessimism, booms and busts, though it seems that in the last 10 years we’ve reached something of an inflection point in our ability to realize AI, and that’s why you’re reading this article.

Machine Learning

"[Machine learning is a] field of study that gives computers the ability to learn without being explicitly programmed."

- Arthur Samuel, Pioneer in Artificial Intelligence, Coined the term ‘machine learning’ in 1959

Machine learning is a subfield of artificial intelligence.  Its goal is to enable computers to learn on their own.  Machine learning has been one of the most successful approaches to artificial intelligence, so successful that it deserves much of the credit for the resurgence of its parent field in recent years.  Machine learning at its most basic is the practice of using algorithms to identify patterns in data, to build mathematical models based on those patterns, and then to use those models to make a determination or prediction about something in the world.  This is different from the usual way of doing things. Traditionally, the way you’d get a computer to do anything was for a human to tell it exactly what to do, line by line, in the form of human-authored computer code. Machine learning is different. It doesn’t require step-by-step guidance from a human; its learning algorithms enable computers to work things out on their own.  Think of this algorithmic learning process as the embodiment of the old adage that, “There is no better teacher than experience,” but in the case of machine learning, the teacher isn’t human, the teacher is data — lots of data. So, when we entered the Age of Big Data, suddenly there was plenty of digitized data to go around. This flood of data, combined with exponential increases in computing power, was the one-two punch that made machine learning the first AI approach to truly blossom.
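
To make the idea concrete, here is a minimal sketch of the workflow just described, written in Python and assuming the scikit-learn library.  The patients, lab values, and labels are entirely made up for illustration; this is not a clinical model and does not come from any real dataset.  The point is simply that no human writes the decision rules; the algorithm infers them from example data.

    # A toy illustration of machine learning with scikit-learn (hypothetical data).
    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical training examples: each row is a patient, [age, white blood cell count].
    X_train = [[25, 6.0], [40, 7.5], [65, 14.0], [70, 16.5], [55, 13.0], [30, 5.5]]
    # The labels the algorithm learns from: 0 = no infection, 1 = infection.
    y_train = [0, 0, 1, 1, 1, 0]

    # No hand-written rules: the algorithm finds the pattern in the examples itself.
    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)

    # The learned model can now make a prediction about a new, unseen patient.
    print(model.predict([[60, 15.0]]))  # prints [1], i.e., predicted infection

In a traditionally programmed version, a human would have had to type out the thresholds and rules by hand; here they are learned from the data.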

Deep Learning

"AI [deep learning] is akin to building a rocket ship. You need a huge engine and a lot of fuel. The rocket engine is the learning algorithms but the fuel is the huge amounts of data we can feed to these algorithms."

- Andrew Ng, Global leader in AI

Deep learning is a subset of machine learning.  It is a technique for implementing machine learning.  Like the rest of machine learning, deep learning uses algorithms for finding useful patterns in data, but deep learning is distinguished from the broader family of machine learning methods based on its use of artificial neural networks (ANNs) — an architecture originally inspired by a simplification of neurons in a brain.

Call Out:  You should know that artificial neural networks don’t really work like neurons in our brain.  Think of an ANN as a very complex math formula made up of a large number of more basic math formulas that “connect” and “send signals” between each other in ways that are reminiscent of the synapses and action potentials of biological neurons.
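
To see just how un-brain-like this is, here is a minimal sketch of a single artificial “neuron” in Python; the input values and weights are arbitrary numbers chosen only for illustration.  It is nothing more than a weighted sum of its inputs passed through a simple squashing function.

    # One artificial "neuron": just a small math formula, not a biological cell.
    import math

    def neuron(x1, x2, w1, w2, b):
        signal = w1 * x1 + w2 * x2 + b       # weighted sum of the inputs plus a bias
        return 1 / (1 + math.exp(-signal))   # sigmoid "activation": squashes output to 0..1

    # Arbitrary inputs and weights, purely for illustration.
    print(neuron(x1=0.8, x2=0.2, w1=1.5, w2=-0.5, b=0.1))

An artificial neural network is simply many of these little formulas wired together, with the weights adjusted during learning.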

 

Deep learning has advantages over traditional machine learning by virtue of its ability to handle more complex data and more complex relationships within that data.  It can often produce more accurate results than traditional machine learning approaches, but it requires even larger amounts of data to do so (deep learning is said to be extremely ‘data hungry’).  One reason deep neural networks can do this is their depth.  The ‘deep’ in deep learning refers to its mathematical depth — the number of layers of math formulas that make up the more complex math formula that is the neural network in its totality.
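
Here is a minimal sketch, in Python with NumPy, of what that depth means; the weights below are arbitrary made-up numbers, not learned values.  Each layer is a formula whose output becomes the next layer’s input, so the network as a whole is formulas nested inside formulas.

    # "Depth" illustrated: three small layers stacked, each feeding the next.
    import numpy as np

    def layer(inputs, weights, bias):
        # One layer: a weighted sum followed by a simple non-linearity (ReLU).
        return np.maximum(0, inputs @ weights + bias)

    x = np.array([0.8, 0.2])  # two hypothetical input values

    # Three stacked layers make a "deeper" overall formula; the weights are arbitrary.
    h1 = layer(x,  np.array([[0.5, -0.3], [0.1, 0.8]]), np.array([0.0, 0.1]))
    h2 = layer(h1, np.array([[0.7, 0.2], [-0.4, 0.6]]), np.array([0.1, 0.0]))
    out = layer(h2, np.array([[0.3], [0.9]]), np.array([0.0]))

    print(out)  # the network's output for this input

A production deep network works on exactly this principle, just with many more layers and millions of weights that are learned from data rather than typed in by hand.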

 

Call Out:  The idea of neural networks has been kicking around for decades—all the way back to the earliest days of AI—but the approach largely went out of vogue because of a few crucial bottlenecks that were holding it back (not enough data, not enough computational power, not the right neural network architectures, not enough depth to these networks, and so on).  Some true believers persisted, and in 2012, a small, heretical research group led by Geoffrey Hinton unblocked those bottlenecks and achieved a dramatic milestone in the ability of a computer to automatically identify objects within images. In so doing, the group reignited not only the approach, but the entire field of AI.

 

Over the last 10 years, deep neural networks of one design or another have been behind many of the major advancements in AI, not only in computer vision (e.g., face recognition, automated interpretation of medical imaging, etc.), but also in natural language processing, understanding, and generation (e.g., digital assistants like Apple’s Siri, Google Translate, etc.).   While it’s impossible to say what the future holds, one thing is certain:  Now is a very good time to start understanding how deep learning works.

Noteworthy examples of deep learning:

  • Face and object recognition in photos and videos

  • Automated interpretation of medical imaging, e.g., x-rays, pathology slides

  • Self-driving cars

  • Google search results

  • Natural language understanding and generation, e.g., Google Translate

  • Automatic speech recognition, e.g., digital assistants

  • Predicting molecular bioactivity for drug discovery

Table 1.  A non-exhaustive list of current & potential applications in medicine

Basic biomedical research

  • Automated experiments
  • Automated data collection
  • Gene function annotation
  • Predict transcription factor binding sites
  • Simulation of molecular dynamics
  • Literature mining

Translational research

  • Biomarker discovery
  • Drug-target prioritization
  • Drug discovery
  • Drug repurposing
  • Prediction of chemical toxicity
  • Genetic variation annotation

Clinical practice

  • Disease diagnosis
  • Treatment selection
  • Risk stratification
  • Patient monitoring
  • Automated surgery
  • Genomic interpretation
  • Digital assistant / scribe for doctors
  • Digital assistant for patients

Adapted from:  Kun-Hsing Yu, Andrew L Beam, and Isaac S Kohane. 2018. “Artificial intelligence in healthcare.” Nature Biomedical Engineering, 2, 10, Pp. 719.

 

Figure 1.  Machine learning and deep learning are subsets of AI.


 

Citations:

  1. Leonhard, Gerd.  “Futurist, humanist, keynote speaker, author, film-maker.”  Gerd Leonhard, 2019, https://www.futuristgerd.com/gerd/future-thinker/.

  2. McCarthy, John.  “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.”  Formal Reasoning Group, Stanford University, 3 Apr. 1996, http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html.

Quote Sources:

Albert Einstein quoted in “Talk:Albert Einstein” Wikiquote, A Wikimedia Project, https://en.wikiquote.org/wiki/Talk:Albert_Einstein#If_you_can't_explain_it_simply,_you_don't_understand_it_well_enough.

Musk, Elon.  “I am Elon Musk, CEO/CTO of a rocket company, AMA!”  Reddit, Jan. 5, 2015, https://www.reddit.com/r/IAmA/comments/2rgsan/i_am_elon_musk_ceocto_of_a_rocket_company_ama/cnfre0a/.

Pedro Domingos quoted in Reese, Byron.  “Voices in AI - Episode 23: A Conversation with Pedro Domingos.”  GigaOm, Dec 4, 2017, https://gigaom.com/2017/12/04/voices-in-ai-episode-23-a-conversation-with-pedro-domingos/.

Samuel, A. L.  “Some Studies in Machine Learning Using the Game of Checkers.”  IBM Journal of Research and Development, vol. 3, no. 3, 1959, pp. 210-229.

Andrew Ng quoted in Kelly, Kevin.  “Cognifying.”  The Inevitable:  Understanding the 12 Technological Forces That Will Shape Our Future, Viking Press, 2017, pp. 68-69.

 

About the Author

Peter Schilling, MD, MSc

Peter Schilling, MD, MSc is an orthopaedic surgeon with expertise in statistical risk modeling, outcomes analysis, and application of digital technology within the healthcare delivery system.
