- Research
- Open access
A modified deep learning method for Alzheimer’s disease detection based on the facial submicroscopic features in mice
BioMedical Engineering OnLine volume 23, Article number: 109 (2024)
Abstract
Alzheimer’s disease (AD) is a chronic disease among people aged 65 and older. As the aging population continues to grow at a rapid pace, AD has emerged as a pressing public health issue globally. Early detection of the disease is important, because increasing evidence shows that early diagnosis holds the key to effective treatment of AD. In this work, we developed and refined a multi-layer cyclic residual convolutional neural network model, specifically tailored to identify AD-related submicroscopic characteristics in the facial images of mice. Our experiments involved classifying the mice into two distinct groups: a normal control group and an AD group. Compared with other deep learning models, the proposed model achieved better detection performance on the dataset from the mouse experiment. The accuracy, sensitivity, specificity and precision for AD identification with our proposed model were as high as 99.78%, 100%, 99.65% and 99.44%, respectively. Moreover, heat maps of AD correlation in the facial images of the mice were acquired with the class activation mapping algorithm, demonstrating that the facial images contained AD-related submicroscopic features. Consequently, through our mouse experiments, we validated the feasibility and accuracy of utilizing a facial image-based deep learning model for AD identification. The present study therefore suggests the potential of using facial images for AD detection in humans through deep learning-based methods.
Introduction
Alzheimer’s disease (AD) is a chronic neurodegenerative disease among people usually aged 65 and older. The progression of AD pathology can be described as a continuum with a long preclinical phase without clinical symptoms, an early clinical phase in which mild clinical symptoms are present, and finally a dementia phase. With the accelerated speed of population aging, AD has become an increasingly serious public health concern all over the world. At present, more than 35 million people have been diagnosed with AD worldwide. This number is expected to double every 20 years in large aging populations [1,2,3,4].
For an effective intervention (including counseling, psycho-education, cognitive training and medication), early AD detection is important, because increasing evidence shows that early diagnosis holds the key to effective treatment of AD. Therefore, a variety of AD detection techniques have been developed, and several tools have already been applied to detect AD in the clinic [2, 5,6,7,8,9,10, 12, 26].
Nowadays, three biomarkers are widely recognized and used for the routine diagnosis of AD in human cerebrospinal fluid (CSF) and blood: tau protein, amyloid β peptides (Aβ) and apolipoprotein E4 (APOE4). Biomarker studies help to better characterize the early stages of the disease, and early detection of AD is key to taking timely care measures and helping to slow the patient’s deterioration [1]. Most of these markers are related to clinical severity and neuronal loss. Mass spectrometry (MS), western blot, immunohistochemistry (IHC), flexible multi-analyte profiling (xMAP) and positron emission tomography (PET) are highly sensitive; magnetic resonance imaging (MRI) is rapid; and enzyme-linked immunosorbent assay (ELISA) is simple to use. Although all these techniques have helped to advance the detection of AD, they do have limitations. In recent years, deep learning technologies have demonstrated revolutionary performance in several areas including, but not limited to, visual object recognition, human action recognition, object tracking, image restoration, de-noising, segmentation tasks, audio classification, and brain–computer interaction. Following the success of deep learning in classifying 2D natural images, more and more studies have attempted to take advantage of deep learning in the domain of medical images, and a number of researchers have tried to use artificial intelligence techniques to detect and diagnose AD.
Liu et al. [5] developed a method with MRI features for AD classification, which used a depthwise separable convolution (DSC)-based convolutional neural network (CNN) and achieved a decent recognition accuracy. Acharya et al. [8] developed a computer-aided brain diagnosis (CABD) system that could determine whether a brain scan shows signs of Alzheimer’s disease; this method utilizes MRI for classification with several feature extraction techniques. Maqsood et al. [9] proposed an efficient and automated system based on a transfer learning classification model of Alzheimer’s disease for both binary and multi-class problems (Alzheimer’s stage detection). Their algorithm was validated using the testing data, giving overall accuracies of 89.6% and 92.8% for the binary and multi-class problems, respectively. Odusami et al. [11] proposed a deep learning-based method that can predict mild cognitive impairment (MCI), early MCI (EMCI), late MCI (LMCI), and AD; the Alzheimer’s Disease Neuroimaging Initiative (ADNI) functional MRI (fMRI) dataset consisting of 138 subjects was used for evaluation. Jo et al. [13] developed a deep learning-based framework to identify informative features for AD classification using tau PET scans. Cai et al. [14] investigated various methods for detecting AD using patients’ speech and transcript data from the DementiaBank Pitt database; their approach combined pre-trained language models with a Graph Neural Network (GNN), constructing a graph from the speech transcript and extracting features with the GNN for AD detection. Vu et al. [15] proposed a deep learning-based model of AD detection applied to MRI and PET images. Shankar et al. [16] suggested a novel model for AD detection with brain image analysis (BIA). Janghel et al. [18] presented a deep learning-based approach for AD detection from the ADNI database, whose dataset contained fMRI and PET images of AD patients as well as images of healthy persons. Altaf et al. [21] presented an AD detection and classification algorithm with features extracted from whole as well as segmented regions of magnetic resonance (MR) brain images. Ying et al. [26] developed a new approach for early detection of AD by noninvasive methods, utilizing multimodal features combining speech acoustic and linguistic features for the speech-based recognition of AD. Puente-Castro et al. [30] developed a system that automatically detects the presence of AD in sagittal MRI images.
Most of the methods proposed above utilize artificial intelligence technology to detect AD, so they require medical data such as MRI images of patients. However, acquiring these data demands large and expensive medical equipment, adding a financial burden to the patients. Moreover, some patients cannot undergo MRI imaging for personal reasons [16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31]. In our study, a modified deep learning CNN algorithm was proposed to extract AD-related submicroscopic features from the facial images of mice and use them to identify AD. We verified the feasibility and accuracy of the proposed method through mouse experiments, thereby laying a foundation for the potential application of facial submicroscopic feature-based methods in AD detection among humans. If the conventional optical facial image-based method proposed in this paper can be applied to detect AD in humans, a considerable socio-economic benefit could be achieved.
Dataset
In this work, we designed a modified deep learning method to identify AD for mice with conventional optical facial images.
We present the related data and methodology, as well as the pipeline for training and optimizing the modified neural network, as follows. First, we acquired high-definition facial images of AD model mice and normal mice in an animal experiment. Then, a deep learning model was proposed, trained and tested with the mouse facial images.
Facial image capture for mice in an animal experiment
Facial image capture was conducted in a mouse experiment, which was approved by the Animal Ethics Committee of the Institute of Modern Physics, Chinese Academy of Sciences, and carried out in accordance with the European Communities Council Directive of 22 September 2010 (2010/63/EU).
In the experiment, 15 female and 14 male 3xTg-AD mice (APP Swedish, MAPT P301L, and PSEN1 M146V) were purchased from Yangzhou Youdu Biotechnology Co., Ltd., China. 3xTg-AD mice are a transgenic model developed to simulate the pathogenesis of AD; they exhibit both plaque and tangle pathology, closely resembling the pathological features observed in human AD patients. The 3xTg-AD mouse model has been widely used to study the pathogenesis of AD and to evaluate potential therapeutic interventions, and its earliest cognitive impairment manifests at 4 months as a deficit in long-term retention [33, 34]. C57BL/6 mice (21 female and 20 male), which share the background strain of the transgenic mice, were used as a control group, i.e., normal mice. The animals were kept in a temperature- and humidity-controlled colony room, maintained on a 12 h/12 h light/dark cycle with ad libitum access to food and water. The mice were aged to 6 months before undergoing this study.
During the experiment, the facial images of the mice were acquired with a high-definition (HD) conventional optical camera. An HD video of 3–5 min was recorded for every mouse while it moved freely around a defined area. The HD facial images of the mice were then captured from the videos. Every facial image contained the mouth, whiskers, and at least one ear and one eye of the mouse. Shown in Figs. 1 and 2 are some representative facial images of the normal mice and AD mice.
In this study, 2530 facial images of 70 mice were obtained. Of these, 1369 facial images were captured for the 41 normal mice (634 for the 21 female and 762 for the 20 male mice), and the other 1181 for the 29 AD mice (612 for the female and 569 for the male AD mice). The 2530 facial images were then divided randomly into a training set and a testing set for the subsequent CNN model training and testing. The training set consisted of 2081 facial images of 57 mice, containing 1080 facial images of 33 normal mice and 1001 facial images of AD mice. The testing set was composed of 469 facial images of 13 mice, including 289 facial images of 8 normal mice and 180 facial images of 5 AD mice. Both the training and testing sets contained female and male mice assigned at random. The datasets of the facial images are detailed in Table 1.
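The split described above is at the mouse level: each mouse, with all of its images, is assigned to only one set, which avoids the same animal contributing images to both training and testing. A minimal sketch of such a split, using hypothetical image records tagged with mouse identifiers rather than the actual dataset:

```python
import random

def split_by_mouse(records, test_fraction=0.2, seed=0):
    """Split image records into train/test sets at the mouse level,
    so that no mouse contributes images to both sets."""
    mouse_ids = sorted({r["mouse_id"] for r in records})
    rng = random.Random(seed)
    rng.shuffle(mouse_ids)
    n_test = max(1, round(len(mouse_ids) * test_fraction))
    test_ids = set(mouse_ids[:n_test])
    train = [r for r in records if r["mouse_id"] not in test_ids]
    test = [r for r in records if r["mouse_id"] in test_ids]
    return train, test

# Hypothetical dataset: 10 mice with 5 images each.
records = [{"mouse_id": m, "image": f"img_{m}_{i}.png"}
           for m in range(10) for i in range(5)]
train, test = split_by_mouse(records)
```

Splitting by mouse rather than by image prevents near-duplicate frames of the same animal from leaking across the train/test boundary.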
Results
As mentioned above, the facial images of the mice used in this work were taken from our mouse experiment. For the subsequent experiments, we divided this dataset into training, validation and testing sets, as shown in Table 1. We selected the best parameters for the proposed method over a very large number of trials: a mini-batch size of 8 and an initial learning rate of 0.00001. The maximum number of epochs was 280, with early stopping according to the validation set.
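The schedule above (a maximum of 280 epochs with early stopping on the validation set) can be sketched as a patience-based loop; the patience value and the simulated loss curve below are illustrative assumptions, not values from this work:

```python
class EarlyStopping:
    """Stop training when the validation loss has not improved
    for `patience` consecutive epochs."""
    def __init__(self, patience=10):
        self.patience = patience
        self.best_loss = float("inf")
        self.epochs_without_improvement = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best_loss:
            self.best_loss = val_loss
            self.epochs_without_improvement = 0
        else:
            self.epochs_without_improvement += 1
        return self.epochs_without_improvement >= self.patience

# Simulated validation losses: improve for 15 epochs, then plateau.
losses = [1.0 / (e + 1) for e in range(15)] + [0.07] * 280
stopper = EarlyStopping(patience=10)
stopped_at = None
for epoch, loss in enumerate(losses[:280]):   # cap at 280 epochs
    if stopper.step(loss):
        stopped_at = epoch
        break
```

With this curve, training halts 10 epochs after the last improvement rather than running all 280 epochs.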
In order to test the efficiency and superiority of the proposed model, we also implemented seven other classical classification network models: AlexNet, LeNet, VGG16, VGG19, ResNet18, MobileNet and ZF-Net. For a fair comparison, the same training, validation and test sets shown in Table 1 were used for all these models, and the best training parameters were used for each network. We used the Python deep learning toolbox to implement and train these models on a GPU.
Table 2 shows the results of classification performance of the different deep learning models. Among all the methods, the classification accuracy rate obtained by our proposed method appeared to be the best. For our proposed method, the classification accuracy rate of normal and AD mice reached up to 99.78%, and the sensitivity, specificity and precision were 100%, 99.65% and 99.44%, respectively. Compared with the other methods, the proposed model achieved a better performance in identifying normal or AD mice using the facial images.
To identify the critical regions in the facial images of AD mice that serve as key influencing factors for AD detection, we generated heat maps of the mice’s facial images. We employed the CAM [35, 36] algorithm to perform the heat staining, enabling us to visualize the areas of significance in the images. Several representative heat maps of the facial images derived from the CAM algorithm are shown in Fig. 3. Clearly, the facial regions of the mice were indeed the key influencing factor for AD detection.
Discussion
Deep learning is widely used for automatic disease recognition from clinical image data. Recent studies have shown that, in certain circumstances, deep learning algorithms can detect AD even better than clinicians, which makes them rather appealing. Computer-aided diagnosis has therefore become an important research topic, owing to its relatively low cost while maintaining an expert level of performance [15, 33, 42].
The CNN model is popular in the deep learning community owing to its great success in image classification, and its achievements have attracted researchers to develop CNN-based systems for AD detection. Medical images such as MRI, fMRI, PET and even diffusion tensor imaging (DTI) have been used to identify AD; to our knowledge, the most prominent diagnostic modality for AD detection is MRI. However, although great efforts have been made to improve the accuracy of deep learning-based medical image classification, little of this work has been applied in the clinic for practical AD detection.
Chien et al. [41] developed new facial asymmetry measures to compare AD patients with healthy controls. A three-dimensional camera was used to capture facial images, and 68 facial landmarks were identified using an open-source machine-learning algorithm called OpenFace. A standard image registration method was used to align the three-dimensional original and mirrored facial images. Their study utilized the registration error, representing landmark superimposition asymmetry distances, to examine 29 pairs of landmarks characterizing facial asymmetry. After comparing the facial images of 150 AD patients with those of 150 age- and sex-matched non-demented controls, they found that the asymmetry of 20 landmarks was significantly different in AD than in the controls (p < 0.05). The AD-linked asymmetry was concentrated in the face edge, eyebrows, eyes, nostrils, and mouth. However, their study did not use deep learning algorithms and required more time to extract facial features. Compared with previous medical image-based work, the method proposed in this paper has several advantages. The acquisition of conventional optical facial images is convenient, fast and inexpensive in terms of equipment. Furthermore, there are no additional restrictions on the subjects under test, so the proposed method has the potential to be widely used in the clinic in the future. Compared with the other networks, our proposed deep learning algorithm achieved excellent performance on evaluation metrics such as accuracy, sensitivity, specificity and precision. Therefore, our proposed method offers a better socio-economic benefit.
Nevertheless, there are still some directions for improving our work in the future. Firstly, we demonstrated the feasibility of using facial images for AD detection via deep learning in mice, but there are great differences in facial features between mice and humans, so it is of vital importance to test our proposed method on humans under ethical permission.
Secondly, although 2530 facial images of 70 mice, including normal and AD mice of both sexes, were used to train and test the proposed model in this work, the number of captured facial images remains insufficient; this is probably the major limitation of this work. Furthermore, since the various pathological stages of AD can last for years, results obtained at a single time point may not apply well to every time point in the preclinical and early-onset phases. In future work, we will increase the number and quality of facial images of AD mice captured under different conditions to improve the robustness of the proposed method, and capture images at multiple pathological stages of AD mice to adapt to the complexity of AD patients in clinical applications.
Thirdly, we demonstrated with the CAM algorithm that the facial region of mice had a significant impact on the detection of AD. However, there is no clear biological evidence explaining the differences in facial features between AD and normal mice. In future work, the following two biological aspects should be explored to demonstrate that AD patients have facial features different from those of healthy people. Some researchers believe that a certain human hormone differs between AD patients and healthy people, and that this hormone causes subtle changes in the facial skin or organ fat of AD patients; the submicroscopic features generated by the hormone would then be a key factor for detecting AD from facial images. Besides, it is thought that AD patients undergo significant changes in emotion, and these changes can cause abnormal responses in neurons of the brain, which in turn alter facial expressions. These two aspects might be the mechanisms underlying the deep learning-based method for capturing abnormal submicroscopic changes to identify AD from facial images. In any event, the detailed mechanisms are worth further study.
Cost and accessibility: In our study, we employed conventional optical facial images for detecting Alzheimer’s disease (AD). The cost of capturing a facial image is extremely low owing to the widespread availability of cameras and smartphones. Compared to traditional methods, our software-algorithm-based approach exhibits notable cost advantages and greater feasibility.
Conclusion
In our current study, we have designed and refined a multi-layer cyclic residual neural network model, utilizing it to detect Alzheimer’s disease (AD) in mice through conventional optical facial images. The proposed model was trained using a dataset comprising 2081 facial images of both healthy and AD-affected mice. Subsequently, 469 facial images were used to test the algorithm, achieving an identification accuracy of 99.78%. Furthermore, we employed the CAM algorithm to generate heat maps highlighting the AD-related features in the mouse facial images, demonstrating the presence of AD-associated submicroscopic characteristics within these images. This experiment validated both the feasibility and precision of our proposed method. We believe that this study serves as a solid foundation for future deep learning-based AD detection techniques leveraging facial submicroscopic features in humans.
Methods
Image data acquisition and processing
In order to acquire appropriate facial images in the aforementioned mouse experiment, we designed an image acquisition system consisting of four modules: image capture, image transmission, deep learning model and detection result presentation. The structure of the system is shown in Fig. 4.
Image capture module: The facial images of the mice were captured using one or more conventional optical cameras. Frontal or side facial images were obtained, and these were the only input data used by the model for identifying AD. In particular, the facial images contained the eyes, nose, mouth, forehead, jaw and other facial organs of the mice in high definition.
Image transmission module: This module consisted of a computer with wired or wireless network and storage devices. The captured facial images were transmitted in real-time to the deep learning model module with the network or storage device.
Deep learning model module: This module was used to analyze the facial image and extract the submicroscopic features of the facial images for the detection of AD. The details of the deep learning model are described in Sect. “Deep learning model”.
Detection result presentation module: The detection result was judged and presented according to the output of the deep learning model.
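The four modules can be viewed as a sequential pipeline. The sketch below wires hypothetical stand-in functions together to show the data flow only; the function names and the dummy score are illustrative, not the actual implementation:

```python
def capture_image(camera_id):
    """Image capture module: stand-in returning a dummy frame."""
    return {"camera": camera_id, "pixels": [[0.0] * 4 for _ in range(4)]}

def transmit(image):
    """Image transmission module: stand-in for network/storage transfer."""
    return dict(image)  # copy, as if received on the analysis machine

def classify(image):
    """Deep learning model module: stand-in returning an AD-class score."""
    return 0.98  # dummy probability of the AD class

def present(score, threshold=0.5):
    """Detection result presentation module: judge and report the result."""
    return "AD" if score >= threshold else "normal"

result = present(classify(transmit(capture_image(camera_id=0))))
```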
Deep learning model
A multi-layer cyclic residual CNN model was designed and modified in this paper, which is particularly effective when dealing with scenes involving a large number of images. The structure of the model, shown in Fig. 4, consisted of convolutional layers, pooling layers, activation layers, fully connected layers and a multi-layer cyclic residual module. In detail, the convolutional layers were designed to extract different features of the image. The pooling layers further abstracted the original features, which greatly reduced the number of training parameters and eased the over-fitting of the model. The proposed model collects features through the convolutional kernels’ filtering mechanism, which decreases the number of network parameters through convolutional weight sharing and pooling. The soft-max classifier was placed after the fully connected layers to classify the samples once the features had been extracted.
The key part of the proposed model was a multi-layer cyclic residual module, which is shown on the left of Fig. 5. The details of the residual module are as follows: (I) the output of the previous layer in the proposed CNN was used as the input of the residual module; (II) the module consisted of a batch normalization layer, an activation layer, a convolutional layer, another batch normalization layer, another activation layer and another convolutional layer; (III) the results of (I) and (II) were added together as the output of the residual module. The proposed residual module was used to retain the original input image information and ensure that the proposed deep learning method could capture more submicroscopic features of the input facial image.
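The add-the-input structure of the residual module can be sketched in NumPy; the 3 × 3 kernel size, the simplified per-map normalization (not a true batch normalization) and the toy input are assumptions for illustration. Note that with all convolution weights set to zero the block reduces to the identity mapping, which reflects the retain-the-original-input property described above:

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 2-D convolution with zero padding ('same' output size).
    x: (H, W) feature map, w: (k, k) kernel."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x)
    H, W = x.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[i:i + k, j:j + k] * w)
    return out

def normalize(x, eps=1e-5):
    """Simplified per-map normalization (stand-in for batch norm)."""
    return (x - x.mean()) / np.sqrt(x.var() + eps)

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """norm -> ReLU -> conv -> norm -> ReLU -> conv, then add the input."""
    out = conv2d_same(relu(normalize(x)), w1)
    out = conv2d_same(relu(normalize(out)), w2)
    return x + out

x = np.arange(16.0).reshape(4, 4)
zero = np.zeros((3, 3))
identity_out = residual_block(x, zero, zero)  # zero weights -> identity
```

The skip connection guarantees that the block can always pass the input through unchanged, so stacking such modules does not lose the original image information.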
To extract the submicroscopic features from the facial images of the mice, there were N color images Xn (n ∈ [1, N]) after data pre-processing; their pixels were scaled to a size of 512 × 512 and normalized to the interval [0, 1]. The images were then fed to standard convolutional layers with kernels of size 5 × 5 for feature extraction. Each convolutional layer operation was followed by a batch normalization (BN) function and a rectified linear unit (ReLU) activation function. Thereafter, convolutional layers were accompanied by a max pooling of size 2 × 2, which downsampled the previous feature map by half. Seventeen such standard convolutional layers were applied in this model.
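Each 2 × 2 max pooling halves the spatial resolution, so reaching the 16 × 16 feature maps mentioned below from a 512 × 512 input implies five pooling stages (512 / 2⁵ = 16), i.e., not every one of the seventeen convolutional layers can be followed by pooling. A short sketch of this arithmetic:

```python
def spatial_size_after_pooling(input_size, n_poolings, pool=2):
    """Spatial side length after repeated non-overlapping poolings."""
    size = input_size
    for _ in range(n_poolings):
        size //= pool
    return size

# Trace the side length of a 512 x 512 input through five 2 x 2 poolings.
sizes = [spatial_size_after_pooling(512, n) for n in range(6)]
# 512 -> 256 -> 128 -> 64 -> 32 -> 16
```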
A 16 × 16 × 512 feature matrix was fed to the classification module after the feature extraction above. Firstly, the feature map was flattened to 1026 feature vectors, and the feature vectors were then reduced by two fully connected layers, each set to contain 2 neurons. C is the number of classes in the AD mice dataset. The C-dimensional score vector S = [S1, …, Sl, …, SC] was then converted to predictive probabilities with the soft-max function, with each value lying in [0, 1]. The soft-max function is given by

P(yn = l | Xn) = exp(Sl) / Σc exp(Sc),
where P(yn = l|Xn) is the forecasted likelihood for sample Xn to be class l.
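As a concrete illustration of this step, a numerically stable soft-max sketch in NumPy (the score values are illustrative):

```python
import numpy as np

def softmax(scores):
    """Convert a C-dimensional score vector S into class probabilities
    P(y = l | X) = exp(S_l) / sum_c exp(S_c), shifted by the maximum
    score for numerical stability."""
    shifted = scores - np.max(scores)
    exp_s = np.exp(shifted)
    return exp_s / exp_s.sum()

probs = softmax(np.array([2.0, 0.5]))  # two classes, e.g. AD vs. normal
```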
The network weights w and the cost function of the network need to be optimized during CNN training. Regularized cross-entropy was used as the cost function in this work. The cost function can be expressed as:

J(w) = −(1/N) Σn Σc ync log P(yn = c | Xn) + γ‖w‖²,

where ync is 1 if the ground-truth label of Xn is class c, and 0 otherwise. The l2 regularization with coefficient γ constrained the weights w during training and limited the model space so that over-fitting could be avoided.
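A sketch of this cost function in NumPy; the predicted probabilities, one-hot labels, weight vector and γ value are illustrative assumptions:

```python
import numpy as np

def regularized_cross_entropy(probs, labels, weights, gamma=1e-4):
    """Cross-entropy over N samples with one-hot labels y_nc, plus
    l2 regularization gamma * ||w||^2 on the network weights."""
    eps = 1e-12  # avoid log(0)
    ce = -np.mean(np.sum(labels * np.log(probs + eps), axis=1))
    return ce + gamma * np.sum(weights ** 2)

probs = np.array([[0.9, 0.1],   # predicted P(y_n = c | X_n)
                  [0.2, 0.8]])
labels = np.array([[1, 0],      # one-hot ground truth
                   [0, 1]])
loss = regularized_cross_entropy(probs, labels, np.zeros(3))
```

With confident, correct predictions the cross-entropy term is small; the l2 term penalizes large weights to discourage over-fitting.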
Evaluation metrics
Classification results obtained through the proposed method were evaluated using different evaluation metrics such as accuracy, sensitivity, specificity and precision.
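All four metrics follow directly from the confusion-matrix counts (true positives, true negatives, false positives, false negatives). A minimal sketch using illustrative counts, not the results reported in this work:

```python
def classification_metrics(tp, tn, fp, fn):
    """Standard classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on the positive (AD) class
    specificity = tn / (tn + fp)   # recall on the negative class
    precision = tp / (tp + fp)
    return accuracy, sensitivity, specificity, precision

# Illustrative counts only.
acc, sens, spec, prec = classification_metrics(tp=90, tn=95, fp=5, fn=10)
```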
Classical classification network models
To assess the efficiency and superiority of the proposed model in this study, we implemented seven additional classical classification network models, namely AlexNet, LeNet, VGG16, VGG19, ResNet18, MobileNet and ZF-Net. These models are presented as follows.
AlexNet was designed by Krizhevsky et al. who trained a large, deep convolutional neural network to classify 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes in 2012. The AlexNet model achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry in the ILSVRC-2012 [37].
LeNet was designed by Yann LeCun in 1998. LeNet is the first convolutional neural network applied to recognizing handwritten digits in deep learning and is considered the basis of modern convolutional neural networks.
VGG net was designed by the Visual Geometry Group at Oxford University in 2014. The outstanding contribution of VGG net is proving that stacks of small convolutions can effectively improve performance by increasing network depth. VGG16 is a convolutional neural network model comprising 16 layers, while VGG19 is a similar model with 19 layers. The VGG16 and VGG19 models have been widely used in image classification and target detection [38].
ResNet was proposed by Kaiming He et al. at Microsoft Research in 2015 and won the ILSVRC-2015. ResNet eases the difficulty of training very deep neural networks. ResNet18 follows the basic ResNet architecture and is 18 layers deep [39].
MobileNet was proposed by Google in 2017. It is a lightweight neural network designed for mobile devices [40].
ZF-Net was proposed by Matthew Zeiler and Rob Fergus in 2013. The network won the ILSVRC-2013. It improves on AlexNet by adjusting the architecture hyperparameters, leading to superior performance and outcomes [41].
Class activation mapping (CAM) algorithm
The CAM algorithm was adopted to acquire the region of interest (ROI) with the maximal influence on AD detection in the facial images of the mice. The principle of the CAM algorithm is described in the reference papers [32, 35].
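CAM forms a heat map as the class-weighted sum of the final convolutional feature maps, M(x, y) = Σk wk fk(x, y). A minimal NumPy sketch with random feature maps and weights standing in for a trained network:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """feature_maps: (K, H, W) activations of the last conv layer.
    class_weights: (K,) weights connecting the global-average-pooled
    features to the target class. Returns an (H, W) map scaled to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=1)  # (H, W)
    cam -= cam.min()          # shift so the minimum is 0
    if cam.max() > 0:
        cam /= cam.max()      # scale so the maximum is 1
    return cam

rng = np.random.default_rng(0)
features = rng.random((8, 16, 16))  # K=8 feature maps of size 16 x 16
weights = rng.random(8)             # weights for the target (AD) class
heat_map = class_activation_map(features, weights)
```

In practice the normalized map is resized to the input image resolution and overlaid on the facial image to visualize the regions driving the prediction.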
Data availability
No datasets were generated or analyzed during the current study.
References
Shui B, Tao D, Florea A, et al. Biosensors for Alzheimer’s disease biomarker detection: a review. Biochimie. 2018;147:13–24.
Noor MBT, Zenia NZ, Kaiser MS, et al. Application of deep learning in detecting neurological disorders from magnetic resonance images: a survey on the detection of Alzheimer’s disease Parkinson’s disease and schizophrenia. Brain Inform. 2020;7:1–21.
Venugopalan J, Tong L, Hassanzadeh HR, et al. Multimodal deep learning models for early detection of Alzheimer’s disease stage. Sci Rep. 2021;11(1):3254.
Ebrahimi-Ghahnavieh A, Luo S, Chiong R. Transfer learning for Alzheimer’s disease detection on MRI images. In: 2019 IEEE international conference on industry 4.0, artificial intelligence, and communications technology (IAICT). New York: IEEE; 2019. pp. 133–138.
Liu J, Li M, Luo Y, et al. Alzheimer’s disease detection using depthwise separable convolutional neural networks. Comput Method Progr Biomed. 2021;203: 106032.
De Roeck EE, De Deyn PP, Dierckx E, et al. Brief cognitive screening instruments for early detection of Alzheimer’s disease: a systematic review. Alzheimer Res Ther. 2019;11(1):1–14.
Balagopalan A, Eyre B, Rudzicz F, et al. To BERT or not to BERT: comparing speech and language-based approaches for Alzheimer’s disease detection. arXiv preprint. 2020. arXiv:2008.01551.
Acharya UR, Fernandes SL, WeiKoh JE, et al. Automated detection of Alzheimer’s disease using brain MRI images—a study with various feature extraction techniques. J Med Syst. 2019;43:1–14.
Maqsood M, Nazir F, Khan U, et al. Transfer learning assisted classification and detection of Alzheimer’s disease stages using 3D MRI scans. Sensors. 2019;19(11):2645.
Pan D, Zeng A, Jia L, et al. Early detection of Alzheimer’s disease using magnetic resonance imaging: a novel approach combining convolutional neural networks and ensemble learning. Front Neurosci. 2020;14:259.
Odusami M, Maskeliūnas R, Damaševičius R, et al. Analysis of features of Alzheimer’s disease: detection of early stage from functional brain changes in magnetic resonance images using a finetuned ResNet18 network. Diagnostics. 2021;11(6):1071.
Loewenstein DA, Curiel RE, Duara R, et al. Novel cognitive paradigms for the detection of memory impairment in preclinical Alzheimer’s disease. Assessment. 2018;25(3):348–59.
Jo T, Nho K, Risacher SL, et al. Deep learning detection of informative features in tau PET for Alzheimer’s disease classification. BMC Bioinform. 2020;21:1–13.
Cai H, Huang X, Liu Z, et al. Exploring multimodal approaches for Alzheimer’s disease detection using patient speech transcript and audio data. arXiv preprint. 2023. arXiv:2307.02514.
Vu TD, Ho NH, Yang HJ, et al. Non-white matter tissue extraction and deep convolutional neural network for Alzheimer’s disease detection. Soft Comput. 2018;22:6825–33.
Shankar K, Lakshmanaprabu SK, Khanna A, et al. Alzheimer detection using group grey wolf optimization based features with convolutional classifier. Comput Electr Eng. 2019;77:230–43.
Dubois B, Villain N, Frisoni GB, et al. Clinical diagnosis of Alzheimer’s disease: recommendations of the international working group. The Lancet Neurol. 2021;20(6):484–96.
Janghel RR, Rathore YK. Deep convolution neural network based system for early diagnosis of Alzheimer’s disease. IRBM. 2021;42(4):258–67.
Sangubotla R, Kim J. Recent trends in analytical approaches for detecting neurotransmitters in Alzheimer’s disease. TrAC Trend Anal Chem. 2018;105:240–50.
Afzal S, Maqsood M, Nazir F, et al. A data augmentation-based framework to handle class imbalance problem for Alzheimer’s stage detection. IEEE Access. 2019;7:115528–39.
Altaf T, Anwar SM, Gul N, et al. Multi-class Alzheimer’s disease classification using image and clinical features. Biomed Sign Process Control. 2018;43:64–74.
van Oostveen WM, de Lange ECM. Imaging techniques in Alzheimer’s disease: a review of applications in early diagnosis and longitudinal monitoring. Int J Mol Sci. 2021;22(4):2110.
Mehmood A, Yang S, Feng Z, et al. A transfer learning approach for early diagnosis of Alzheimer’s disease on MRI images. Neuroscience. 2021;460:43–52.
Weller J, Budson A. Current understanding of Alzheimer’s disease diagnosis and treatment. F1000Research. 2018. https://doi.org/10.12688/f1000research.14506.1.
Böhle M, Eitel F, Weygandt M, et al. Layer-wise relevance propagation for explaining deep neural network decisions in MRI-based Alzheimer’s disease classification. Front Aging Neurosci. 2019;11:194.
Ying Y, Yang T, Zhou H. Multimodal fusion for Alzheimer’s disease recognition. Appl Intell. 2023;53(12):16029–40.
Kruthika KR, Maheshappa HD. Alzheimer’s Disease neuroimaging initiative. Multistage classifier-based approach for Alzheimer’s disease prediction and retrieval. Inform Med Unlocked. 2019;14:34–42.
Dyrba M, Hanzig M, Altenstein S, et al. Improving 3D convolutional neural network comprehensibility via interactive visualization of relevance maps: evaluation in Alzheimer’s disease. Alzheimer Res Ther. 2021;13:1–18.
Xu L, Liang G, Liao C, et al. An efficient classifier for Alzheimer’s disease genes identification. Molecules. 2018;23(12):3140.
Puente-Castro A, Fernandez-Blanco E, Pazos A, et al. Automatic assessment of Alzheimer’s disease diagnosis based on deep learning techniques. Comput Biol Med. 2020;120: 103764.
Lahmiri S, Shmuel A. Performance of machine learning methods applied to structural MRI and ADAS cognitive scores in diagnosing Alzheimer’s disease. Biomed Signal Process Control. 2019;52:414–9.
Tanveer M, Richhariya B, Khan RU, et al. Machine learning techniques for the diagnosis of Alzheimer’s disease: a review. ACM Trans Multimed Comput Commun Appl (TOMM). 2020;16(1s):1–35.
Oddo S, Caccamo A, Shepherd JD, Murphy MP, Golde TE, Kayed R, Metherate R, Mattson MP, Akbari Y, LaFerla FM. Triple-transgenic model of Alzheimer’s disease with plaques and tangles: intracellular Abeta and synaptic dysfunction. Neuron. 2003;39(3):409–21.
Billings LM, Oddo S, Green KN, McGaugh JL, LaFerla FM. Intraneuronal Abeta causes the onset of early Alzheimer’s disease-related cognitive deficits in transgenic mice. Neuron. 2005;45(5):675–88.
Zhou B, Khosla A, Lapedriza A, et al. Learning deep features for discriminative localization. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE; 2016. https://doi.org/10.1109/CVPR.2016.319.
Selvaraju RR, Cogswell M, Das A, et al. Grad-CAM: visual explanations from deep networks via gradient-based localization. In: IEEE International Conference on Computer Vision (ICCV). IEEE; 2017. https://doi.org/10.1109/ICCV.2017.74.
Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems. New York: Curran Associates Inc.; 2012. p. 1097–105.
Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition. Comput Sci. 2014. https://doi.org/10.48550/arXiv.1409.1556.
Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vision. 2015;115(3):211–52. https://doi.org/10.1007/s11263-015-0816-y.
Howard AG, Zhu M, Chen B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv preprint. 2017. https://doi.org/10.48550/arXiv.1704.04861.
Zeiler MD, Fergus R. Visualizing and understanding convolutional networks. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T, editors. Computer vision—ECCV 2014: 13th European conference, Zurich, Switzerland, September 6–12, 2014, proceedings, part I. Cham: Springer International Publishing; 2014. https://link.springer.com/chapter/10.1007/978-3-319-10590-1_53.
Chien CF, Sung JL, et al. Analyzing facial asymmetry in Alzheimer’s dementia using image-based technology. Biomedicines. 2023;11(10):2802.
Funding
This work was jointly supported by the Top Leading Talent Program of Gansu Province, the Key Research and Development Program of Gansu Province (Grant No. 23YFFA0010) and the Science and Technology Major Special Program of Gansu Province (Grant No. 23ZDKA011).
Author information
Authors and Affiliations
Contributions
Guosheng Shen developed the deep learning method and conducted the computational experiments; Fei Ye and Wei Cheng designed and conducted the mouse experiments. Qiang Li proposed and financed the entire study.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Shen, G., Ye, F., Cheng, W. et al. A modified deep learning method for Alzheimer’s disease detection based on the facial submicroscopic features in mice. BioMed Eng OnLine 23, 109 (2024). https://doi.org/10.1186/s12938-024-01305-0
Received:
Accepted:
Published:
DOI: https://doi.org/10.1186/s12938-024-01305-0