Dual-imaging 2026 Abstracts


Area 1 - Dual-imaging

Full Papers
Paper Nr: 5
Title:

Towards Prediction of Brain Tumour Responses after Intervention Based on 3D MR T2 Images

Authors:

Xiaohong W. Gao, Chia-Hui Chien, Guan-Lin Liu and Amja Manullang

Abstract: This paper presents a Siamese network to predict brain tumour responses after intervention. The work consists of two phases. The first phase is segmentation, which locates brain lesion regions. The dataset is obtained from the 2025 BraTS Brain Tumor Progression Challenge, organized in conjunction with MICCAI 2025. In this work, SAM-2 (Segment Anything Model 2) is applied to 3D MR images, achieving an average IoU of 60% on the validation dataset. After segmentation, the slices containing lesions are selected to classify tumour progression. A Siamese Vision Transformer (SViT) is then applied, which infers the change between the baseline state (i.e. after tumour resection or initial intervention) and later tumour development. The backbone model is CMT-Ti. Inspired by the work of MuSiC_ViT for chest X-ray disease detection, this SViT system performs four-class classification of therapeutic responses: Complete Response (CR), Partial Response (PR), Stable Disease (SD) and Progressive Disease (PD). Overall, on the available training dataset of 91 patients, 90% accuracy is achieved.
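The core idea of the Siamese comparison above can be sketched in a few lines: one shared encoder embeds both the baseline and the follow-up scan, and a classifier head maps the difference of the two embeddings to the four response classes. This is a minimal NumPy sketch, not the authors' model; the random weights, feature sizes, and the absolute-difference fusion are illustrative assumptions (the paper uses a CMT-Ti backbone on MR slices).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: flattened slice features and embedding dimension.
FEAT, EMB, CLASSES = 64, 16, 4  # CR, PR, SD, PD

# One set of encoder weights shared by both inputs: the weight sharing
# is the defining property of a Siamese architecture.
W_enc = rng.normal(scale=0.1, size=(FEAT, EMB))
W_cls = rng.normal(scale=0.1, size=(EMB, CLASSES))

def encode(x):
    """Shared embedding branch (stand-in for the CMT-Ti backbone)."""
    return np.maximum(x @ W_enc, 0.0)  # ReLU

def predict_response(baseline, follow_up):
    """Compare the two embeddings (here by absolute difference) and map
    the comparison to a softmax over the four response classes."""
    diff = np.abs(encode(baseline) - encode(follow_up))
    logits = diff @ W_cls
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

labels = ["CR", "PR", "SD", "PD"]
p = predict_response(rng.normal(size=FEAT), rng.normal(size=FEAT))
print(labels[int(np.argmax(p))])
```

Because the encoder is shared, the model learns a representation in which the *change* between the two time points, rather than either scan alone, drives the classification.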

Paper Nr: 6
Title:

Attenuation Correction in Preclinical PET/MRI Imaging Using Denoising Diffusion Probabilistic Models with Multi-Loss Optimization

Authors:

Kishore Krish, Sivakumar Duraisamy and Jyh-Cheng Chen

Abstract: Attenuation correction (AC) is a critical challenge in preclinical PET/MRI imaging due to high-resolution requirements and limited photon counts, which exacerbate attenuation effects and reconstruction errors. Conventional methods, including CT- and MR-based AC, provide standard solutions but are limited by additional radiation exposure, artifacts, and reliance on accurate segmentation. Deep learning (DL) approaches have emerged as promising alternatives, yet they often produce over-smoothed images and fail to preserve fine structural details. In this study, we present a conditional denoising diffusion probabilistic model (DDPM) to generate high-quality attenuation-corrected PET (PETAC) images directly from non-attenuation-corrected PET (PETNAC) inputs. The model was trained and validated on a curated dataset of micro-phantoms and FDG PET rat scans acquired with a Bruker 7T preclinical PET/MRI system. We evaluated multiple loss function strategies, including MSE, SSIM and VGG perceptual losses, individually and in combination. The combined MSE+SSIM+VGG loss achieved the best results, improving quantitative metrics such as PSNR, SSIM, RMSE, SNR, CNR, and SUV accuracy while preserving anatomical details. Our findings demonstrate that diffusion-based AC offers a robust, high-fidelity alternative to conventional approaches in preclinical PET/MRI imaging.
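The multi-loss strategy above combines a pixel-wise term, a structural term, and a perceptual term. The sketch below shows how such a weighted sum can be assembled, with each term zero for a perfect match. It is a simplified NumPy illustration: the SSIM here is a single global window rather than the usual sliding-window SSIM, and a gradient-map distance stands in for the VGG feature loss (the paper uses actual VGG activations); the equal weights are an assumption.

```python
import numpy as np

def mse(a, b):
    """Pixel-wise mean squared error."""
    return float(np.mean((a - b) ** 2))

def ssim_global(a, b, L=1.0):
    """Global (single-window) SSIM; L is the dynamic range."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return float(((2 * mu_a * mu_b + c1) * (2 * cov + c2)) /
                 ((mu_a**2 + mu_b**2 + c1) * (a.var() + b.var() + c2)))

def perceptual_proxy(a, b):
    """Crude stand-in for the VGG perceptual loss: difference in mean
    gradient magnitude (the paper compares VGG feature activations)."""
    ga = np.abs(np.diff(a, axis=0)).mean() + np.abs(np.diff(a, axis=1)).mean()
    gb = np.abs(np.diff(b, axis=0)).mean() + np.abs(np.diff(b, axis=1)).mean()
    return float(abs(ga - gb))

def combined_loss(pred, target, w=(1.0, 1.0, 1.0)):
    """Weighted MSE + SSIM + perceptual terms; SSIM enters as 1 - SSIM
    so every term vanishes when pred == target."""
    return (w[0] * mse(pred, target)
            + w[1] * (1.0 - ssim_global(pred, target))
            + w[2] * perceptual_proxy(pred, target))

rng = np.random.default_rng(1)
img = rng.random((32, 32))
loss_same = combined_loss(img, img)
loss_noisy = combined_loss(img + 0.1 * rng.normal(size=img.shape), img)
```

The motivation for the combination is that MSE alone rewards over-smoothed output, while the structural and perceptual terms penalize loss of fine detail, which matches the improvement the abstract reports over single-loss training.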

Short Papers
Paper Nr: 7
Title:

Generative Augmentation of Anatomical-Functional Imaging for Rare Lung Cancer Subtype Classification

Authors:

Rumeth Payagalage and Prasan Yapa

Abstract: Lung cancer is the most common cancer type worldwide, and it has the highest mortality rate. Lung cancer is mainly classified into two types, each with a variety of rare subtypes. Although these subtypes are highly similar to one another, their treatment strategies often differ. Early detection and accurate classification of the lung cancer subtype can therefore save patients' lives by enabling doctors to provide effective treatment strategies. Because high-quality anatomical and functional images are limited, the development of artificial intelligence tools for early lung cancer detection and classification remains underexplored. This study explores generative image augmentation techniques for PET scans to enrich training datasets with rare lung cancer subtypes, ensuring the fidelity and clinical relevance of generated samples while mitigating the common mode collapse issue of generative models.
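The mode collapse mentioned above can be detected with a simple diversity check: if generated samples are much closer to each other than real samples are, the generator is emitting near-duplicates. This is an illustrative NumPy sketch, not the study's method; the feature vectors and the diversity-ratio heuristic are assumptions for demonstration.

```python
import numpy as np

def mean_pairwise_distance(x):
    """Average Euclidean distance over all pairs of samples in a batch."""
    n = len(x)
    d = [np.linalg.norm(x[i] - x[j]) for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(d))

def diversity_ratio(real, generated):
    """Generated-to-real diversity ratio: values near 0 suggest the
    generator has collapsed onto a few modes; values near 1 suggest
    the generated batch is as varied as the real data."""
    return mean_pairwise_distance(generated) / mean_pairwise_distance(real)

rng = np.random.default_rng(2)
real = rng.normal(size=(20, 64))  # stand-in for real PET image features
# A collapsed generator: every sample is a tiny perturbation of one point.
collapsed = np.tile(rng.normal(size=(1, 64)), (20, 1)) + 1e-3 * rng.normal(size=(20, 64))
healthy = rng.normal(size=(20, 64))

r_collapsed = diversity_ratio(real, collapsed)  # near 0
r_healthy = diversity_ratio(real, healthy)      # near 1
```

Such a metric can be monitored during training so that augmentation batches drawn from a collapsed generator are rejected before they skew the rare-subtype classes.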