Google Cloud Uses AI to Better Predict Patient Health

With better information comes better treatment. Google Cloud is working on AI engines that scour patient data to determine who is sick, who is at risk, and which treatments are viable. It all hinges on machine learning and access to patient data.

Image credit: Google Health

Deep learning and neural networks have the potential to predict patient outcomes. Researchers in Google Cloud’s healthcare AI division are developing algorithms that comb through patient data and predict a patient’s likelihood of contracting disease and, in some cases, of survival.

AI is already changing patient diagnosis and treatment, and it is currently being tested to help medical professionals identify specific conditions. As the king of search engines, Google has the added benefit of knowing the most searched medical conditions and can tailor its tools to meet those needs. Its healthcare tools rely on Google Cloud, a set of high-performance, customizable solutions based on the latest Intel® technologies designed to address the security, compute, and memory requirements of demanding enterprise workloads and applications.

Doctor Google

Google Cloud’s healthcare division is developing a tool that uses computer vision, image search, and deep learning techniques to diagnose skin, hair, and nail conditions. Google claims it can identify more than 80 percent of the conditions seen in clinics and more than 90 percent of those commonly searched online.
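Google hasn’t published the internals of this tool, but the underlying technique is deep learning image classification. A minimal sketch of that general approach might look like the following, where the saved model file and the condition labels are hypothetical placeholders, not Google’s actual model:

```python
# Minimal sketch of deep-learning image classification for a dermatology-style
# tool. The model file and labels are hypothetical placeholders; Google has not
# published the internals of its dermatology assist tool.
import numpy as np
import tensorflow as tf

# Hypothetical fine-tuned classifier saved to disk.
model = tf.keras.models.load_model("derm_classifier.keras")

# Hypothetical condition labels the classifier was trained on.
LABELS = ["acne", "eczema", "psoriasis", "melanoma", "other"]

def classify_skin_image(path: str, top_k: int = 3):
    """Load one photo, preprocess it, and return the top-k predicted conditions."""
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.utils.img_to_array(img) / 255.0   # scale pixels to [0, 1]
    probs = model.predict(x[np.newaxis, ...])[0]   # add a batch dimension
    top = np.argsort(probs)[::-1][:top_k]
    return [(LABELS[i], float(probs[i])) for i in top]

print(classify_skin_image("patient_photo.jpg"))
```

A production clinical tool would, of course, be trained on dermatologist-labeled photos and validated against clinical diagnoses before touching a patient.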

Similar AI models are helping clinicians identify lung cancer, and helping ophthalmologists identify eye diseases such as diabetic retinopathy and age-related macular degeneration (AMD), both of which can cause blindness. Google’s Automated Retinal Disease Assessment tool, which was tested in India and Thailand, not only can diagnose existing diabetic retinopathy, it also can determine which patients are likely to develop the condition down the line. Google’s AI algorithms can also suggest treatment for at least 50 eye diseases and can predict which patients will develop AMD within six months.

In testing, Google’s AI algorithms reach roughly 90 percent accuracy, but in practice they are not always as effective, for a variety of reasons. Training images are high quality to ensure accurate readings, and inconclusive images are often discarded.

Outside the lab, especially in less high-tech clinics, patient images can be fuzzy or otherwise hard for the AI to read. Those images are sent to doctors, which slows diagnosis. For patients with clear images, however, a diagnosis can come in minutes rather than weeks or months. Using AI to detect disease and assist clinicians could change the medical landscape in poorer countries with limited resources and overworked medical staff.
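The article doesn’t say how such a system decides an image is too fuzzy to read. One simple, hypothetical gradability check scores sharpness with the variance of the Laplacian and routes low-scoring images to a human reader; the threshold below is purely illustrative:

```python
# Hypothetical triage step: estimate image sharpness and route blurry images
# to a human reader instead of the classifier. The threshold is illustrative
# and would need tuning per camera and clinic.
import cv2

BLUR_THRESHOLD = 100.0  # variance-of-Laplacian cutoff (assumed value)

def triage_image(path: str) -> str:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        return "refer_to_doctor"  # unreadable file
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Sharp images go to the AI for a diagnosis in minutes;
    # fuzzy ones are referred to a clinician.
    return "send_to_model" if sharpness >= BLUR_THRESHOLD else "refer_to_doctor"

print(triage_image("retina_scan.png"))
```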

Google’s Automated Retinal Disease Assessment tool

Image credit: Google

In the Deep

Deep learning might affect how long your next hospital stay lasts. Google teamed up with researchers at the University of California, Stanford University, and the University of Chicago to create an AI-enabled system that predicts the outcomes of a patient’s hospital stay. The goal is to prevent patient readmissions and to reduce time spent in the hospital.

The AI tool looks at data from patients’ electronic health records (EHRs) and, through machine learning and predictive analytics, surfaces relevant information that improves clinicians’ decision-making. It can find and present insights into which treatments a patient is likely to respond to best.
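Google’s published system uses deep networks over a patient’s full EHR timeline. As a much-simplified, hypothetical illustration of predictive analytics on EHR-style features, here is a toy readmission-risk model trained on synthetic tabular data (every feature and value below is made up for illustration):

```python
# Toy illustration of predictive analytics on EHR-style features. Google's
# published work uses deep sequence models over full records; this sketch only
# shows the general shape of a readmission-risk predictor. Data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features: age, prior admissions, length of stay, abnormal-lab count.
X = np.column_stack([
    rng.integers(18, 90, 1000),
    rng.poisson(1.0, 1000),
    rng.integers(1, 30, 1000),
    rng.poisson(2.0, 1000),
]).astype(float)

# Synthetic label: 30-day readmission, loosely tied to the features.
logits = 0.02 * X[:, 0] + 0.8 * X[:, 1] + 0.05 * X[:, 2] + 0.3 * X[:, 3] - 5.0
y = (rng.random(1000) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Risk score a clinician-facing tool could surface for one patient.
patient = [[72, 3, 12, 4]]
print("30-day readmission risk:", model.predict_proba(patient)[0, 1])
```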

A critical component of this is Vertex AI, Google Cloud’s machine learning platform. Vertex AI lets developers build and deploy machine learning models more quickly, and it provides APIs for integrating vision, video, translation, and natural language machine learning into apps. With so many disparate EHR systems, not to mention each doctor’s method of recording patient notes, having a single platform for machine learning on Google Cloud can speed up the development process.
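As a rough sketch of that workflow, here is how a trained model might be uploaded, deployed, and queried with the Vertex AI Python SDK (google-cloud-aiplatform). The project, bucket path, and feature values are placeholders, and the serving container shown is one of Vertex AI’s prebuilt scikit-learn images:

```python
# Minimal sketch of deploying and querying a model with the Vertex AI Python
# SDK (google-cloud-aiplatform). Project, region, bucket, and instance values
# are placeholders, not a real deployment.
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")

# Upload a trained model artifact and deploy it behind a managed endpoint.
model = aiplatform.Model.upload(
    display_name="readmission-model",
    artifact_uri="gs://my-bucket/model/",  # placeholder Cloud Storage path
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)
endpoint = model.deploy(machine_type="n1-standard-4")

# Online prediction: one feature vector in, one risk score out.
response = endpoint.predict(instances=[[72, 3, 12, 4]])
print(response.predictions)
```

The same endpoint pattern applies whether the model was trained with AutoML or custom code, which is part of what makes a single platform attractive when EHR data arrives in so many shapes.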