The researchers trained a deep learning algorithm to detect the signs of Alzheimer's disease in patients an average of six years before the condition was diagnosed by a human doctor.
Californian scientists have shown that a neural network, once trained, could scan images of patients' brains and detect the presence of Alzheimer's an average of 75.8 months before the actual diagnosis.
The 20-person team based their research on an established diagnostic method, 18F-FDG PET (fluorine-18 fluorodeoxyglucose positron emission tomography), in which a radioactive glucose tracer is injected and its uptake in the brain is imaged.
Specialists then examine and interpret these images by eye, looking for signs of Alzheimer's, of its precursor, mild cognitive impairment (MCI), or of other related conditions across the spectrum.
Although slow, this method has led to faster, earlier diagnoses and more effective treatment.
But since the method depends on pattern recognition, the researchers saw an opportunity to dramatically improve its performance by applying a self-learning AI algorithm, publishing their results in Radiology.
"It is widely recognized that deep learning can help address the increasing complexity and volume of imaging data, as well as the varying expertise of trained imaging physicians," the team wrote.
"The application of machine learning technology to complex patterns of data, such as those found in functional brain PET imaging, is only starting to be explored.
"We hypothesized that the deep learning algorithm could detect features or patterns that are not evident in standard clinical image review and thus improve the final diagnostic classification of individuals."
They decided to evaluate whether a deep learning algorithm could be trained to predict the final clinical diagnosis in patients undergoing 18F-FDG PET, and how its performance compared with current clinical standards.
From their study of 2,109 images from 1,002 patients who had already been diagnosed, they found that their algorithm could detect Alzheimer's in images taken, on average, more than six years before diagnosis.
The algorithm outperformed doctors both at identifying patients who would go on to develop Alzheimer's and at identifying those who would develop neither Alzheimer's nor its precursor, MCI.
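Comparisons like this one are typically framed in terms of sensitivity (how many future Alzheimer's patients the model correctly flags) and specificity (how many unaffected patients it correctly clears). As a minimal illustration, not the researchers' actual evaluation code, a sketch of how those two rates are computed from binary labels:

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) for binary labels: 1 = later diagnosed, 0 = not."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical data: final diagnoses for six patients vs. model predictions
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
# Here the model catches 2 of 3 future cases (sensitivity 2/3)
# and clears 2 of 3 unaffected patients (specificity 2/3).
```

A model "better than doctors" on both counts would score higher on both rates over the same held-out patients.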
These results are the latest in a series of studies and trials demonstrating the potential of AI to transform health care and preventive diagnosis.
In September, the Francis Crick Institute revealed that an AI had learned to model and predict heart disease mortality rates in patients with greater accuracy than trained physicians or expert-built models.
Google's DeepMind AI project, meanwhile, reached an important milestone over the summer, when its system proved able to examine 3D eye scans, diagnose potentially sight-threatening conditions, and offer treatment recommendations in seconds.
The algorithm, tested in collaboration with Moorfields Eye Hospital in London, was able to recommend the best course of treatment for more than 50 eye diseases with 94% accuracy.
Despite acknowledging a handful of limitations, including a small sample size, the Californian researchers concluded that they had developed a deep learning algorithm capable of predicting Alzheimer's "with high accuracy and robustness".
They added that, with access to a much larger volume of data and opportunities to calibrate the model, the algorithm they developed could be integrated directly into physicians' workflows and serve as an essential support tool.