Hybrid Deep Learning with Attention Fusion for Enhanced Colon Cancer Detection
| Field | Value | Language |
| --- | --- | --- |
| dc.contributor.author | Alpsalaz, Suheyla Demirtas | |
| dc.contributor.author | Aslan, Emrah | |
| dc.contributor.author | Ozupak, Yildirim | |
| dc.contributor.author | Alpsalaz, Feyyaz | |
| dc.contributor.author | Uzel, Hasan | |
| dc.contributor.author | Bereznychenko, Viktoria | |
| dc.date.accessioned | 2026-01-15T15:03:36Z | |
| dc.date.available | 2026-01-15T15:03:36Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | This study introduces a hybrid deep learning model integrating EfficientNet-B3 and Vision Transformer with an Attention Fusion mechanism for automated colon cancer detection using the Kvasir endoscopic dataset. The model leverages EfficientNet-B3's strength in capturing fine-grained local textures and Vision Transformer's ability to model global contextual relationships. A multi-head attention-based fusion block harmonizes these features, achieving comprehensive representations and enhanced classification stability. Model optimization was guided by the Matthews Correlation Coefficient (MCC), alongside evaluations of accuracy, F1-score, and Brier Score. Experimental results demonstrate a 96.2% accuracy and an MCC of 0.961, surpassing standalone baselines and existing benchmark architectures. Cross-validation confirmed robust generalization, while Grad-CAM analyses improved interpretability by visualizing salient histopathological regions influencing predictions. Despite slight overfitting tendencies, the model maintained strong performance across all eight image classes. These findings highlight the model's ability to address limitations of single-architecture approaches by combining local and global feature extraction, offering rapid, objective, and reliable diagnostic support. The proposed framework shows significant promise for integration into computer-aided colonoscopy systems, paving the way for enhanced clinical diagnostics and reduced pathologist workload through AI-driven precision medicine. | en_US |
| dc.identifier.doi | 10.1038/s41598-025-29447-8 | |
| dc.identifier.issn | 2045-2322 | |
| dc.identifier.scopus | 2-s2.0-105026242284 | |
| dc.identifier.uri | https://doi.org/10.1038/s41598-025-29447-8 | |
| dc.identifier.uri | https://hdl.handle.net/20.500.12514/10150 | |
| dc.language.iso | en | en_US |
| dc.publisher | Nature Portfolio | en_US |
| dc.relation.ispartof | Scientific Reports | en_US |
| dc.rights | info:eu-repo/semantics/openAccess | en_US |
| dc.subject | Colon Cancer | en_US |
| dc.subject | Deep Learning | en_US |
| dc.subject | Hybrid Model | en_US |
| dc.subject | EfficientNet-B3 | en_US |
| dc.subject | Vision Transformer | en_US |
| dc.title | Hybrid Deep Learning with Attention Fusion for Enhanced Colon Cancer Detection | en_US |
| dc.type | Article | en_US |
| dspace.entity.type | Publication | |
| gdc.author.scopusid | 60008497800 | |
| gdc.author.scopusid | 58083655800 | |
| gdc.author.scopusid | 57200142934 | |
| gdc.author.scopusid | 59221704100 | |
| gdc.author.scopusid | 58826043600 | |
| gdc.author.scopusid | 57207913770 | |
| gdc.bip.impulseclass | C5 | |
| gdc.bip.influenceclass | C5 | |
| gdc.bip.popularityclass | C5 | |
| gdc.collaboration.industrial | false | |
| gdc.description.department | Artuklu University | en_US |
| gdc.description.departmenttemp | [Alpsalaz, Suheyla Demirtas] Minist Hlth, Akdagmadeni State Hosp, Yozgat, Turkiye; [Aslan, Emrah] Mardin Artuklu Univ, Fac Engn & Architecture, Mardin, Turkiye; [Ozupak, Yildirim] Dicle Univ, Silvan Vocat Sch, Diyarbakir, Turkiye; [Alpsalaz, Feyyaz; Uzel, Hasan] Yozgat Bozok Univ, Akdagmadeni Vocat Sch, Yozgat, Turkiye; [Bereznychenko, Viktoria] Natl Acad Sci Ukraine, Inst Electrodynam, Dept Theoret Elect Engn & Diagnost Elect Equipment, Beresteyskiy Ave 56, UA-03057 Kyiv, Ukraine | en_US |
| gdc.description.issue | 1 | en_US |
| gdc.description.publicationcategory | Article - International Peer-Reviewed Journal - Institutional Faculty Member | en_US |
| gdc.description.scopusquality | Q1 | |
| gdc.description.volume | 15 | en_US |
| gdc.description.woscitationindex | Science Citation Index Expanded | |
| gdc.description.wosquality | Q1 | |
| gdc.identifier.openalex | W4416773191 | |
| gdc.identifier.pmid | 41315603 | |
| gdc.identifier.wos | WOS:001651442500034 | |
| gdc.index.type | WoS | |
| gdc.index.type | Scopus | |
| gdc.index.type | PubMed | |
| gdc.oaire.impulse | 1.0 | |
| gdc.oaire.influence | 2.5488711E-9 | |
| gdc.oaire.keywords | Article | |
| gdc.oaire.popularity | 3.5084218E-9 | |
| gdc.openalex.collaboration | International | |
| gdc.opencitations.count | 0 | |
| gdc.plumx.crossrefcites | 1 | |
| gdc.plumx.scopuscites | 0 | |
| gdc.scopus.citedcount | 0 | |
| gdc.virtual.author | Aslan, Emrah | |
| gdc.wos.citedcount | 0 | |
| relation.isAuthorOfPublication | ea96819c-4e93-4dc4-a97c-2ca74bd3f34d | |
| relation.isAuthorOfPublication.latestForDiscovery | ea96819c-4e93-4dc4-a97c-2ca74bd3f34d | |
| relation.isOrgUnitOfPublication | 39ccb12e-5b2b-4b51-b989-14849cf90cae | |
| relation.isOrgUnitOfPublication.latestForDiscovery | 39ccb12e-5b2b-4b51-b989-14849cf90cae |
