AI Can Distinguish Brain Tumors from Healthy Tissue
Researchers have made significant advances in artificial intelligence (AI) for medical applications. AI holds particular promise in radiology, where delays in processing medical images can postpone patient care. Convolutional neural networks (CNNs) are robust tools for training AI models on large image datasets, enabling the networks to “learn” to identify, classify, and distinguish between different types of images. CNNs are also capable of “transfer learning,” in which a model trained for one task is adapted to a similar new task. AI models have already demonstrated the ability to identify brain tumors in MRI images with near-human accuracy. Now, in a new study, researchers have shown that AI models can also be trained to differentiate brain tumors from healthy tissue.
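To make the transfer-learning idea concrete, here is a minimal sketch in Python, assuming the PyTorch and torchvision libraries; the ResNet-18 backbone, the two-class head, and the dummy batch are illustrative assumptions, not the study's actual networks or data.

    # Minimal transfer-learning sketch (illustrative, not the study's networks).
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a CNN pretrained on a large generic image dataset (ImageNet).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained feature-extraction layers so only the new head trains.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classification layer for a new two-class task,
    # e.g. tumor vs. healthy tissue.
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Only the new layer's parameters are handed to the optimizer.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # One illustrative training step on a dummy batch of 224x224 RGB images.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 2, (8,))
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

The point of the pattern is that only the small replacement layer is trained from scratch; the pretrained feature extractor is reused as-is, which is what lets a model trained on one task transfer to another.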
While detecting camouflaged animals and classifying brain tumors may seem unrelated, the researchers from Boston University (Boston, MA, USA) saw a connection between the natural camouflage of animals and the way cancerous cells blend into surrounding healthy tissue. The ability to generalize, that is, to categorize varied items under a common identity, is crucial for an AI model detecting camouflaged objects, and the same capability could be particularly advantageous for detecting tumors. In their retrospective study of publicly available MRI data, the researchers explored how neural networks could be trained on brain cancer imaging data, incorporating a unique camouflage detection step to enhance the networks' tumor detection capabilities.
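A rough sketch of how such a staged training pipeline might look, again in Python with PyTorch; the fine_tune helper, the camouflage_loader and mri_loader data loaders, and the epoch counts are hypothetical stand-ins, since the paper's exact protocol is not described here.

    # Staged-training sketch (hypothetical protocol, not the study's own).
    import torch
    import torch.nn as nn
    from torchvision import models

    def fine_tune(model, loader, epochs, lr=1e-4):
        """Fine-tune all layers of `model` on batches yielded by `loader`."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        criterion = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()
        return model

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Stage 1: train the network to find camouflaged objects
    # (target vs. background), sharpening its ability to separate
    # a target from visually similar surroundings.
    # model = fine_tune(model, camouflage_loader, epochs=5)  # hypothetical loader

    # Stage 2: fine-tune the same weights on brain MRIs
    # (tumor vs. healthy tissue), reusing what stage 1 learned.
    # model = fine_tune(model, mri_loader, epochs=5)  # hypothetical loader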
The researchers used MRIs from public repositories of both cancerous and healthy brain scans to train the networks to identify cancerous areas, distinguish them from healthy tissue, and classify the type of cancer. The results, published in Biology Methods and Protocols, showed that the networks performed nearly flawlessly at detecting healthy brain scans, with only one to two false negatives, and were also able to differentiate between cancerous and non-cancerous brains. One network achieved an accuracy of 85.99% in detecting brain cancer, while the other reached 83.85%. An important feature of these networks is their ability to explain their decisions, which can increase the trust that both medical professionals and patients place in AI models; this transparency is particularly valuable because deep learning models are often criticized for their lack of interpretability. The networks could generate images highlighting the specific areas that drove their classification of a scan as tumor-positive or tumor-negative, allowing radiologists to verify the AI's findings and serving almost as a second opinion in radiology.
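As an illustration of how a network can highlight the areas behind its decision, here is a simple gradient-based saliency sketch in Python with PyTorch; it is a generic stand-in for the study's own explanation method, which the article does not specify, and the mri_slice in the usage note is hypothetical.

    # Gradient-based saliency sketch: which pixels most influence the
    # tumor/healthy decision (a generic stand-in, not the study's method).
    import torch

    def saliency_map(model, image):
        """Return per-pixel importance for the model's predicted class.

        `image` is a (3, H, W) tensor; `model` is any image classifier.
        """
        model.eval()
        x = image.unsqueeze(0).requires_grad_(True)  # add batch dimension
        scores = model(x)
        top_class = scores.argmax(dim=1).item()
        # Backpropagate the winning class score to the input pixels.
        scores[0, top_class].backward()
        # Importance = largest absolute gradient across color channels.
        return x.grad.abs().max(dim=1)[0].squeeze(0)  # shape (H, W)

    # Usage (hypothetical): overlay saliency_map(model, mri_slice) on the
    # scan so a radiologist can see which regions drove the classification.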
Going forward, the researchers believe that developing deep network models whose decisions are easy to explain will be crucial if AI is to play a transparent and supportive role in clinical settings. While the networks performed less effectively when distinguishing between different types of brain cancer, the study showed that they developed distinct internal representations of those types. The networks' accuracy and clarity improved as they were trained with camouflage detection: transfer learning increased their accuracy, and although the best-performing model was about 6% less accurate than standard human detection, the research successfully highlights the gains in accuracy brought about by this training approach. The researchers argue that, combined with methods for explaining the networks' decisions, this approach will foster the transparency needed for future AI applications in clinical settings.
“Advances in AI permit more accurate detection and recognition of patterns,” said the paper’s lead author, Arash Yazdanbakhsh. “This consequently allows for better imaging-based diagnosis aid and screening, but also necessitates more explanation of how AI accomplishes the task. Aiming for AI explainability enhances communication between humans and AI in general. This is particularly important between medical professionals and AI designed for medical purposes. Clear and explainable models are better positioned to assist diagnosis, track disease progression, and monitor treatment.”