AI That Sees Lung Scans the Way a Radiologist Does
Every day, radiologists read through large volumes of CT scans looking for something that might be easy to miss. A small nodule. A subtle shadow. Something that could be early lung cancer, or could be nothing at all.
Lung cancer kills more people globally than any other cancer. Survival is strongly linked to how early it is found. And the workload in most radiology departments leaves little room for the kind of sustained, dual-focus attention that finding a small suspicious lesion actually requires.
In a study just published in Scientific Reports, an international research team that includes researchers at Kaunas University of Technology (KTU) in Lithuania has built an AI system designed to do something existing tools have consistently struggled with: see the scan in two ways at the same time.
The way a radiologist actually reads a scan
When a radiologist works through a CT scan, they are constantly shifting perspective. Zooming in on an area of interest to examine fine detail. Stepping back to understand how that area relates to the whole lung. It is not one or the other. It is both, repeatedly, throughout the reading.
Most AI systems built for this task have had to choose. They are either good at capturing fine local detail, or they are good at understanding broader structural context. Getting both simultaneously has been a persistent technical problem.
The team's solution is a model they call C-Swin. It combines two different types of neural network architecture working together. A convolutional neural network handles the fine-grained local features, the kind of detailed pattern recognition that picks up small lesions and subtle textures. A Swin Transformer, an architecture that uses a shifting window approach to analyse spatial regions of the image, handles the broader context. The two components work in parallel, their outputs integrated rather than sequential.
Researcher Inzamam Mashood Nasir, based at KTU, describes it simply: one part of the model focuses on small details, such as tiny spots or textures in the lungs, while another looks at the overall image and understands the bigger picture. You can think of it as having a magnifying glass and a full view of the scan at the same time.
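The dual-branch idea can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the `local_features` and `global_features` functions below are crude stand-ins for the CNN and Swin Transformer branches, and the window sizes are arbitrary. What it does show accurately is the structural point from the paper: the two branches process the same image in parallel, and their outputs are fused rather than chained one after the other.

```python
import numpy as np

def local_features(img, k=3):
    """Stand-in for the CNN branch: small-kernel filtering that
    responds to fine local detail (small lesions, subtle textures)."""
    h, w = img.shape
    out = np.zeros_like(img)
    pad = np.pad(img, k // 2, mode="edge")
    for i in range(h):
        for j in range(w):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out.flatten()

def global_features(img, window=8):
    """Stand-in for the Swin Transformer branch: coarse per-window
    summaries that capture how regions relate across the whole lung."""
    h, w = img.shape
    feats = []
    for i in range(0, h, window):
        for j in range(0, w, window):
            feats.append(img[i:i + window, j:j + window].mean())
    return np.array(feats)

def fused_representation(img):
    """Both branches run in parallel on the same image; their outputs
    are concatenated into one representation for a classifier head."""
    return np.concatenate([local_features(img), global_features(img)])

img = np.random.rand(32, 32)  # toy 32x32 "scan"
rep = fused_representation(img)
print(rep.shape)  # 32*32 local values + (32/8)^2 = 16 global values
```

The fusion step is the point: a classifier trained on the concatenated vector sees the magnifying-glass view and the whole-scan view at once, which is what a sequential pipeline cannot offer.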
What the results showed
The model was tested on the IQ-OTH/NCCD dataset, a publicly available CT scan collection, classifying scans into three categories: normal, benign and malignant.
Distinguishing between benign (non-cancerous) nodules and malignant tumours is one of the hardest tasks in radiology; getting it wrong leads to either missed cancers or unnecessary, invasive biopsies.
The results were strong. C-Swin achieved 96.26% accuracy, with precision of 97.48% and an F1 score of 97.42%. Against existing methods the improvement in accuracy ranged from 2.31% to 6.81% depending on the comparison.
In medicine those margins are not small. A percentage point in diagnostic accuracy, scaled across thousands of patients and hundreds of thousands of scans, translates into real outcomes.
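To make the reported metrics concrete, here is a small worked example of how accuracy, precision, and F1 are computed. The labels below are made up for illustration and have nothing to do with the study's data; "malignant" is treated as the positive class.

```python
# Toy labels: 1 = malignant, 0 = not malignant (normal or benign).
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives

accuracy = (tp + tn) / len(y_true)
precision = tp / (tp + fp)   # high precision = fewer unnecessary biopsies
recall = tp / (tp + fn)      # high recall = fewer missed cancers
f1 = 2 * precision * recall / (precision + recall)

print(accuracy, precision, recall, f1)  # 0.8 0.75 0.75 0.75
```

Precision and recall pull against the two failure modes the article describes: false alarms that send patients for invasive procedures, and missed cancers. F1 balances both, which is why papers report it alongside raw accuracy.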
The researchers are careful about what they are claiming. The model was trained on a single dataset. It has not yet been tested across different scanner manufacturers, different imaging protocols, or different patient populations. Nasir is direct about this. In real-world conditions there are many variables, and the system needs to be tested across all of them before clinical use.
This caveat does not diminish the finding. It is the honest description of where good research sits before it becomes clinical practice. The next steps are clinical validation, testing in hospital environments, and integration into existing medical imaging systems.
Why timing is key
Lung cancer is still most commonly diagnosed late, when treatment options are narrower and outcomes are harder. The gap between what is biologically possible and what actually reaches patients in time is one of the defining problems in oncology.
AI tools that genuinely reduce missed cases and lower false positive rates, meaning fewer patients sent for unnecessary procedures and the anxiety that comes with them, address both sides of that problem at once.
Nasir points out that the architecture is not limited to lung cancer. Any medical imaging task that requires both detailed local analysis and broader structural understanding could benefit from the same approach. Brain tumours, breast cancer, and eye diseases are all mentioned as potential applications.
The bigger picture
This week, Google DeepMind CEO Demis Hassabis gave two significant interviews, one on the 20VC podcast with Harry Stebbings and one with science communicator Cleo Abram, covering his vision for what AI can do in medicine. His consistent message was that AI's most important work is not in consumer products. It is in defeating disease. He has spoken about wanting to see the decade-long drug discovery process compressed to months. About AI reaching the point where medicine does not look the way it does today.
The C-Swin paper is not that scale of ambition. It is one model, one dataset, one carefully bounded set of results waiting for clinical validation. But that is exactly how the distance between here and there gets covered. Not in single leaps but in studies like this one, done carefully, published openly, and built on by the next team.
The biology of lung cancer is becoming better understood. The treatments are beginning to follow. And now, slowly, so is the machinery of detection.
Source: Yousafzai SN, Nasir IM, Mansour S et al. A hybrid deep learning approach integrating CNN and transformer for lung cancer classification using CT scans. Scientific Reports. 2026. doi:10.1038/s41598-026-41161-7
Image: AI-generated illustration