BIOIMAGING 2022 Abstracts


Full Papers
Paper Nr: 1
Title:

A Multiple-instance Learning Approach for the Assessment of Gallbladder Vascularity from Laparoscopic Images

Authors:

Constantinos Loukas, Athanasios Gazis and Dimitrios Schizas

Abstract: An important task at the onset of a laparoscopic cholecystectomy (LC) operation is the inspection of the gallbladder (GB) to evaluate the thickness of its wall, the presence of inflammation, and the extent of fat. Difficulty in visualizing the GB wall vessels may be due to these factors, potentially as a result of chronic inflammation or other diseases. In this paper, we propose a multiple-instance learning (MIL) technique for the assessment of GB wall vascularity via computer-vision analysis of images from LC operations. The bags correspond to a labeled (low vs. high) vascularity dataset of 181 GB images from 53 operations. The instances correspond to unlabeled patches extracted from these images. Each patch is represented by a vector of color, texture, and statistical features. We compare various state-of-the-art MIL and single-instance learning approaches, as well as a proposed MIL technique based on variational Bayesian inference. The methods were compared on two experimental tasks: image-based and video-based (i.e. patient-based) classification. The proposed approach achieves the best performance, with an accuracy of 92.1% and 90.3% for the first and second task, respectively. A significant advantage of the proposed technique is that it does not require the time-consuming task of manually labelling the instances.
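
As an illustration of the bag/instance construction described above, the sketch below splits an image into patches and represents each patch by simple colour statistics. It is a minimal sketch, not the authors' code: the patch size, the reduced feature set (the paper also uses texture features), and the helper name image_to_bag are assumptions.

```python
import numpy as np
from skimage.util import view_as_blocks

def image_to_bag(img, patch=32):
    """Split an RGB image into patches; each instance becomes a colour-statistics vector."""
    h, w, _ = img.shape
    img = img[: h - h % patch, : w - w % patch]            # crop to a patch multiple
    blocks = view_as_blocks(img, (patch, patch, 3)).reshape(-1, patch, patch, 3)
    feats = [np.concatenate([b.mean(axis=(0, 1)),          # mean colour per channel
                             b.std(axis=(0, 1))])          # colour spread per channel
             for b in blocks]
    return np.stack(feats)                                 # one bag: (n_instances, n_features)

bag = image_to_bag(np.random.rand(480, 640, 3))            # stand-in for a labelled GB frame
print(bag.shape)                                           # (300, 6)
```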

Paper Nr: 3
Title:

Robust Teeth Detection in 3D Dental Scans by Automated Multi-view Landmarking

Authors:

Tibor Kubík and Michal Španěl

Abstract: Landmark detection is frequently an intermediate step in medical data analysis. More and more often, these data are represented in the form of 3D models. An example is a 3D intraoral scan of dentition used in orthodontics, where landmarking is notably challenging due to malocclusion, teeth shift, and frequently missing teeth. Moreover, for 3D data, DNN processing comes with high memory and computation-time requirements, which do not meet the needs of clinical applications. We present a robust method for tooth landmark detection based on a multi-view approach, which transforms the task into the 2D domain, where the suggested network detects landmarks by heatmap regression from several viewpoints. Additionally, we propose a post-processing step based on Multi-view Confidence and Maximum Heatmap Activation Confidence, which can robustly determine whether a tooth is missing. Experiments have shown that the combination of Attention U-Net, 100 viewpoints, and a RANSAC consensus method is able to detect landmarks with an error of 0.75 ± 0.96 mm. In addition to the promising accuracy, our method is robust to missing teeth, as it can correctly detect the presence of teeth in 97.68% of cases.
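
The missing-tooth decision can be pictured with a small sketch of the Maximum Heatmap Activation Confidence idea: a tooth counts as present only if enough viewpoints produce a strong heatmap peak. The thresholds and array shapes are illustrative assumptions, not the paper's values.

```python
import numpy as np

def tooth_present(heatmaps, peak_thr=0.5, view_thr=0.6):
    """heatmaps: (n_views, H, W) regression outputs for one tooth landmark."""
    peaks = heatmaps.reshape(len(heatmaps), -1).max(axis=1)  # max activation per view
    return (peaks > peak_thr).mean() >= view_thr             # enough confident views?

weak = np.random.rand(100, 64, 64) * 0.4                     # weak peaks in all 100 views
print(tooth_present(weak))                                   # False -> tooth flagged missing
```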

Paper Nr: 6
Title:

Neural Network PET Reconstruction using Scattered Data in Energy-dependent Sinograms

Authors:

Gabrielle Fontaine, Peter Lindstrom and Stephen Pistorius

Abstract: PET image reconstruction largely relies on pre-reconstruction data correction, which may add noise and remove information. This loss is particularly notable when correcting for scattered coincidences, which are useful for image reconstruction, though algorithmic scatter reconstructions require a detector energy resolution that exceeds the current state of the art. Preliminary research has demonstrated the feasibility of using convolutional neural networks (CNNs) to reconstruct images directly from sinogram data. We have extended this approach to reconstruct images from data containing scattered coincidences. Monte Carlo simulations were used to generate PET data from digital phantoms. Data were modeled using detectors with 15% FWHM energy resolution. Energy-dependent sinograms (EDSs), containing true and scattered coincidences, were constructed from the data. After data augmentation, 210,000 sinograms were obtained. A CNN was trained on the EDS-activity pairs for image reconstruction. A second network was trained on sinograms containing only photopeak coincidences. Images were also reconstructed using FBP and MLEM approaches. The EDS-trained network outperformed the photopeak-trained network, with a higher mean structural similarity index (0.69 ± 0.05 vs. 0.63 ± 0.05) and lower average mean square error (0.16 ± 0.04 vs. 0.20 ± 0.04). Our work demonstrates that CNNs have the potential to extract useful information from scattered coincidences, even for data containing significant energy uncertainties.
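
A schematic sketch of what an energy-dependent sinogram might look like in code: list-mode coincidences are histogrammed over angle, radial offset, and the two photon energies. The bin counts, the energy window, and the assumption that angle and radial offset arrive pre-binned are all illustrative, not taken from the paper.

```python
import numpy as np

def build_eds(angle_bin, radial_bin, e1, e2, n_ang=90, n_rad=128, n_e=4):
    """Histogram list-mode coincidences into an energy-dependent sinogram stack."""
    e_edges = np.linspace(350.0, 650.0, n_e + 1)             # keV bins around 511 keV
    i = np.digitize(e1, e_edges) - 1                         # energy bin of photon 1
    j = np.digitize(e2, e_edges) - 1                         # energy bin of photon 2
    eds = np.zeros((n_ang, n_rad, n_e, n_e))
    keep = (i >= 0) & (i < n_e) & (j >= 0) & (j < n_e)       # drop out-of-window events
    np.add.at(eds, (angle_bin[keep], radial_bin[keep], i[keep], j[keep]), 1)
    return eds

rng = np.random.default_rng(0)
n = 10_000
eds = build_eds(rng.integers(0, 90, n), rng.integers(0, 128, n),
                rng.normal(511, 60, n), rng.normal(511, 60, n))
print(int(eds.sum()), "coincidences binned")
```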

Paper Nr: 10
Title:

Rendering Medical Images using WebAssembly

Authors:

Sébastien Jodogne

Abstract: The rendering of medical images is a critical step in a variety of medical applications, from diagnosis to therapy. Specialties such as radiotherapy and nuclear medicine must display complex images that are the fusion of several layers. Furthermore, the rise of artificial intelligence applied to medical imaging calls for viewers that can be used in research environments and adapted by scientists. However, desktop viewers are often developed using technologies that are totally different from those used for Web viewers, which results in a lack of code reuse and shared expertise between development teams. In this paper, we show how the emerging WebAssembly standard can be used to tackle these issues by sharing the same code base between heavyweight viewers and zero-footprint viewers. Moreover, we propose a full Web viewer developed using WebAssembly that can be used in research projects or in teleradiology applications. The source code of the developed Web viewer is available as free and open-source software.

Paper Nr: 14
Title:

Classifying Diabetic Retinopathy using CNN and Machine Learning

Authors:

Chaymaa Lahmar and Ali Idri

Abstract: Diabetic retinopathy (DR) is one of the main causes of vision loss around the world. Computer-aided diagnosis can help in the early detection of this disease, which can be beneficial for a better patient outcome. In this paper, we conduct an empirical evaluation of the performance of twenty-eight deep hybrid architectures for automatic binary classification of referable DR, and compare them to seven end-to-end deep learning (DL) architectures. The architectures were compared using the Scott-Knott test and the Borda count voting method. All empirical evaluations were carried out on the APTOS dataset, using five-fold cross-validation. The results showed the importance of combining DL techniques and classical machine learning techniques for the classification of DR. The hybrid architecture using an SVM classifier with MobileNet_V2 for feature extraction performed best, with an accuracy of 88.80%, and was ranked among the best-performing end-to-end deep learning architectures; note that none of the hybrid architectures outperformed all the end-to-end architectures.
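
The top-performing hybrid architecture lends itself to a short sketch: MobileNet_V2 with ImageNet weights as a frozen feature extractor feeding an SVM. This is a hedged sketch with stand-in data; the preprocessing and SVM hyper-parameters are assumptions, not the paper's configuration.

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from sklearn.svm import SVC

extractor = MobileNetV2(weights="imagenet", include_top=False, pooling="avg")

def features(images):                                        # images: (n, 224, 224, 3) in [0, 255]
    return extractor.predict(preprocess_input(images.astype("float32")), verbose=0)

X = features(np.random.rand(8, 224, 224, 3) * 255)           # stand-in for APTOS fundus images
y = np.array([0, 1] * 4)                                     # non-referable vs. referable DR
clf = SVC(kernel="rbf").fit(X, y)                            # classical ML head on deep features
print(clf.predict(features(np.random.rand(2, 224, 224, 3) * 255)))
```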

Paper Nr: 18
Title:

Automatic Label Detection in Chest Radiography Images

Authors:

João Pedrosa, Guilherme Aresta, Carlos Ferreira, Ana M. Mendonça and Aurélio Campilho

Abstract: Chest radiography is one of the most ubiquitous medical imaging exams, used for the diagnosis and follow-up of a wide array of pathologies. However, chest radiography analysis is time-consuming and often challenging, even for experts. This has led to the development of numerous automatic solutions for multipathology detection in chest radiography, particularly after the advent of deep learning. However, the black-box nature of deep learning solutions, together with the inherent class imbalance of medical imaging problems, often leads to weak generalization capabilities, with models learning features based on spurious correlations such as the aspect and position of laterality, patient-position, equipment, and hospital markers. In this study, an automatic method based on a YOLOv3 framework was thus developed for the detection of markers and written labels in chest radiography images. It is shown that this model successfully detects a large proportion of markers in chest radiography, even in datasets different from the training source, with a low rate of false positives per image. As such, this method could be used to perform automatic obscuration of markers in large datasets, so that more generic and meaningful features can be learned, thus improving classification performance and robustness.
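
The downstream obscuration step suggested above can be sketched in a few lines: given detector output as bounding boxes, the label regions are blacked out before classifier training. The box format and fill value are assumptions, not the paper's specification.

```python
import numpy as np

def obscure_markers(image, boxes, fill=0):
    """image: 2D radiograph; boxes: iterable of (x1, y1, x2, y2) detections."""
    out = image.copy()
    for x1, y1, x2, y2 in boxes:
        out[int(y1):int(y2), int(x1):int(x2)] = fill         # black out the label region
    return out

xray = np.random.rand(1024, 1024)
clean = obscure_markers(xray, [(20, 30, 120, 80)])           # one hypothetical detection
print(clean[30:80, 20:120].max())                            # 0.0
```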

Paper Nr: 20
Title:

Classifying Alzheimer’s Disease using MRIs and Transcriptomic Data

Authors:

Lucia Maddalena, Ilaria Granata, Maurizio Giordano, Mario Manzo, Mario R. Guarracino and Alzheimer’s Disease Neuroimaging Initiative (ADNI)

Abstract: Early diagnosis of neurodegenerative diseases is essential for the effectiveness of treatments to delay the onset of related symptoms. Our focus is on methods that aid in diagnosing Alzheimer’s disease, the most widespread neurocognitive disorder, and that rely on data acquired by non-invasive techniques compatible with the limitations imposed by pandemic situations. Here, we propose integrating multi-modal data consisting of omics data (gene expression values extracted from blood samples) and imaging data (magnetic resonance images), both available for some patients in the Alzheimer’s Disease Neuroimaging Initiative dataset. We show how a suitable integration of omics and imaging data, using well-known machine learning techniques, can lead to better classification results than either taken separately, while achieving performance competitive with the state of the art.
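
A minimal sketch of the feature-level integration described above, with random stand-ins for the omics and imaging feature matrices; the dimensions and the random-forest classifier are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
omics = rng.normal(size=(120, 200))          # gene-expression features (stand-in)
imaging = rng.normal(size=(120, 50))         # MRI-derived features (stand-in)
labels = rng.integers(0, 2, size=120)        # AD vs. control

fused = np.hstack([omics, imaging])          # simple feature-level integration
score = cross_val_score(RandomForestClassifier(random_state=0), fused, labels, cv=5).mean()
print(f"fused-modality CV accuracy: {score:.2f}")
```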

Paper Nr: 26
Title:

Detection of Microcalcifications in Digital Breast Tomosynthesis using Faster R-CNN and 3D Volume Rendering

Authors:

Ana M. Mota, Matthew J. Clarkson, Pedro Almeida and Nuno Matela

Abstract: Microcalcification clusters (MCs) are one of the most important biomarkers for breast cancer, and Digital Breast Tomosynthesis (DBT) has consolidated its role in breast cancer imaging. As there are mixed observations about MC detection using DBT, it is important to develop tools that improve this task. Furthermore, the visualization mode of MCs is also crucial, as their diagnosis is associated with their 3D morphology. In this work, DBT data from a public database were used to train a faster region-based convolutional neural network (Faster R-CNN) to locate MCs in entire DBT volumes. Additionally, the detected MCs were further analyzed through standard 2D visualization and a 3D volume rendering (VR) specifically developed for DBT data. For MC detection, the sensitivity of our Faster R-CNN was 60% with 4 false positives. These preliminary results are very promising and can be further improved. On the other hand, the 3D VR visualization provided important information, with higher quality and better discernment of the detected MCs. The developed pipeline may help radiologists since (1) it indicates specific breast regions with possible lesions that deserve additional attention and (2) as the rendering of the MCs is similar to a segmentation, a detailed complementary analysis of their 3D morphology is possible.
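
For illustration, a torchvision Faster R-CNN can be configured with a single microcalcification foreground class as sketched below. This is an assumption-laden sketch (backbone, the weights argument of torchvision >= 0.13, input handling), not the authors' training code.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")           # torchvision >= 0.13 API
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)  # MC + background

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])            # DBT slice replicated to 3 channels
print(detections[0]["boxes"].shape)                          # candidate MC boxes
```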

Short Papers
Paper Nr: 4
Title:

U-Net based Semantic Segmentation of Kidney and Kidney Tumours of CT Images

Authors:

Benjamin Bracke and Klaus Brinker

Abstract: Semantic segmentation of kidney tumours in medical image data is an important step for diagnosis as well as for planning and monitoring of treatments. The morphological heterogeneity of kidneys and tumours in medical image data is a major challenge for automatic segmentation methods; therefore, segmentations are typically performed manually by radiologists. In this paper, we use a state-of-the-art segmentation method based on the deep learning U-Net architecture to propose an algorithm for automatic semantic segmentation of kidneys and kidney tumours in 2D CT images. To this end, we particularly focus on transfer learning of U-Net architectures and provide an experimental evaluation of different hyperparameters for data augmentation, various loss functions, U-Net encoders of varying complexity, as well as different transfer learning strategies to increase the segmentation accuracy. We used the results of the evaluation to fix the hyperparameters of our final segmentation algorithm, which achieved a high segmentation accuracy for kidney pixels and a lower segmentation accuracy for tumour pixels.
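
A hedged sketch of the kind of transfer-learning setup under evaluation, using the segmentation_models_pytorch package: a U-Net whose encoder is initialised with ImageNet weights, trained with a Dice loss. The encoder choice, class count, and loss are examples of the hyperparameters compared above, not the final configuration.

```python
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",            # one of several encoder complexities to compare
    encoder_weights="imagenet",         # transfer learning: pretrained encoder
    in_channels=1,                      # single-channel CT slices
    classes=3,                          # background / kidney / tumour
)
loss_fn = smp.losses.DiceLoss(mode="multiclass")   # one candidate loss function

logits = model(torch.rand(2, 1, 256, 256))
print(logits.shape)                     # (2, 3, 256, 256)
```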

Paper Nr: 8
Title:

Weakly Supervised Deep Learning-based Intracranial Hemorrhage Localization

Authors:

Jakub Nemcek, Tomas Vicar and Roman Jakubicek

Abstract: Intracranial hemorrhage is a life-threatening disease which requires fast medical intervention. Owing to the time required for data annotation, head CT images are usually available only with slice-level labeling. However, information about the exact position could be beneficial for a radiologist. This paper presents a fully automated, weakly supervised method for precise hemorrhage localization in axial CT slices using only position-free labels. An algorithm based on multiple-instance learning is introduced that generates hemorrhage likelihood maps for a given CT slice and even finds the coordinates of the bleeding. Two different publicly available datasets are used to train and test the proposed method. A Dice coefficient, sensitivity, and positive predictive value of 58.08%, 54.72%, and 61.88%, respectively, are achieved on the test dataset.
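
The likelihood-map idea can be sketched as scoring a grid of patches with a patch-level classifier; the peak of the resulting map gives the bleeding coordinates. The scorer below is a stub (np.mean) standing in for a trained instance classifier, and the patch size is an assumption.

```python
import numpy as np

def likelihood_map(slice_2d, score_patch, patch=32):
    """Score every patch of an axial slice; high scores mark likely hemorrhage."""
    h, w = slice_2d.shape
    grid = np.zeros((h // patch, w // patch))
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            grid[i, j] = score_patch(slice_2d[i*patch:(i+1)*patch, j*patch:(j+1)*patch])
    return grid

scores = likelihood_map(np.random.rand(512, 512), score_patch=np.mean)  # stub scorer
y, x = np.unravel_index(scores.argmax(), scores.shape)
print("most suspicious patch (row, col):", y, x)
```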

Paper Nr: 9
Title:

Voronoi Diagrams and Perlin Noise for Simulation of Irregular Artefacts in Microscope Scans

Authors:

Atef Alreni, Galina Momcheva and Stoyan Pavlov

Abstract: Artefacts are a common occurrence in microscopic images and scans used in life science research. The artefacts may be regular or irregular and arise from different sources: distortions of the illumination field, optical aberrations, foreign particles in the illumination and optical path, errors or irregularities during the processing and staining phases, et cetera. While several computational approaches for dealing with patterned distortions exist, there is no universal, efficient, reliable, and facile method for removing irregular artefacts. This leaves life scientists in a cumbersome predicament, wastes valuable time, and may alter analysis results. In this article, the authors outline a systematic way to introduce synthetic irregular artefacts into microscopic scans via Perlin noise and Voronoi diagrams. The reasoning behind this task is to produce pairs of “successful” and manufactured “failed” image counterparts to be used as training pairs for an artificial neural network tuned for artefact removal. At the moment, the outlined method only works for grayscale images.
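
A hedged sketch of the synthesis recipe: a Voronoi partition (via nearest-site lookup) selects a few irregular regions, and a smoothed random field, standing in for true Perlin noise, modulates intensity inside them. All parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.spatial import cKDTree

def synthesize_artefacts(img, n_cells=40, n_bad=5, seed=0):
    rng = np.random.default_rng(seed)
    h, w = img.shape
    sites = rng.uniform(0, [h, w], size=(n_cells, 2))              # Voronoi seed points
    yy, xx = np.mgrid[0:h, 0:w]
    _, cell = cKDTree(sites).query(np.c_[yy.ravel(), xx.ravel()])  # nearest-site labelling
    mask = np.isin(cell.reshape(h, w), rng.choice(n_cells, n_bad, replace=False))
    noise = gaussian_filter(rng.normal(size=(h, w)), sigma=8)      # Perlin-noise stand-in
    noise = (noise - noise.min()) / (np.ptp(noise) + 1e-8)
    return np.where(mask, img * noise, img)                        # manufactured "failed" image

failed = synthesize_artefacts(np.random.rand(256, 256))            # pairs with the "successful" input
print(failed.shape)
```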

Paper Nr: 11
Title:

Vision Transformers for Brain Tumor Classification

Authors:

Eliott Simon and Alexia Briassouli

Abstract: With the increasing amount of data gathered by healthcare providers, interest has been growing in Machine Learning, and more specifically in Deep Learning. Medical applications of machine learning range from the prediction of medical events to computer-aided detection, diagnosis, and classification. This paper investigates the application of state-of-the-art (SoA) deep neural networks to classifying brain tumors. We distinguish between several types of brain tumors, which are typically diagnosed and classified by experts using Magnetic Resonance Imaging (MRI). The most common tumor types are gliomas and meningiomas; however, many more exist, varying in size and location. Convolutional Neural Networks (CNNs) are the SoA deep learning technique for image processing tasks such as image segmentation and classification. However, a recently developed architecture for image classification, Vision Transformers, has been shown to outperform classical CNNs in efficiency while requiring fewer computational resources. This work applies pure Transformer networks to brain tumor classification for the first time and compares their performance with CNNs. A significant difference between the two models is the lack of translational equivariance in Transformers, which CNNs possess by design. Experiments on benchmark real-world brain tumor classification datasets show that Transformers can achieve comparable or better performance, despite using limited training data.
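
For illustration, a pretrained Vision Transformer from the timm package can be set up for tumor classification as below; the model name, the 4-class head, and the single training step are assumptions, not the paper's experimental protocol.

```python
import timm
import torch

model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

images = torch.rand(2, 3, 224, 224)           # MRI slices replicated to 3 channels
labels = torch.tensor([0, 2])                 # e.g. glioma / meningioma / pituitary / none
loss = criterion(model(images), labels)       # one illustrative fine-tuning step
loss.backward()
optimizer.step()
print(float(loss))
```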

Paper Nr: 12
Title:

Improved MRI-based Pseudo-CT Synthesis via Segmentation Guided Attention Networks

Authors:

Gurbandurdy Dovletov, Duc D. Pham, Josef Pauli, Marcel Gratz and Harald H. Quick

Abstract: In this paper, we propose 2D MRI-based pseudo-CT (pCT) generation approaches that are inspired by U-Net and generative adversarial networks (GANs) and that additionally utilize coarse bone segmentation guided attention (SGA) mechanisms for better image synthesis. We first introduce and formulate SGA and its extended version (E-SGA), then embed them into our baseline U-Net and conditional Wasserstein GAN (cWGAN) architectures. Since manual bone annotations are expensive, we derive coarse bone segmentations from CT/pCT images via thresholding and utilize them during the training phase to guide the attention of the image-to-image translation networks. For inference, no additional segmentations are required. The image generation quality of the proposed methods is evaluated on the publicly available RIRE data set. Since the MR and CT image pairs in this data set are not correctly aligned with each other, we also briefly describe the applied image registration procedure. The results of our experiments are compared to baseline U-Net and conditional Wasserstein GAN implementations and demonstrate improvements for bone regions.
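
The thresholding-based coarse bone annotation can be sketched in a few lines; the Hounsfield-unit threshold below is a common rule-of-thumb assumption, not the value used in the paper.

```python
import numpy as np

def coarse_bone_mask(ct_hu, threshold=300.0):
    """CT volume in Hounsfield units -> binary bone mask to guide the attention."""
    return (ct_hu > threshold).astype(np.float32)

ct = np.random.normal(0.0, 500.0, size=(8, 128, 128))   # stand-in CT volume
print(coarse_bone_mask(ct).mean())                      # fraction labelled as bone
```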

Paper Nr: 15
Title:

Callus Thickness Determination Adjuvant to Tissue Oximetry Imaging

Authors:

Gennadi Saiko

Abstract: Introduction: Corns and calluses are thickened skin caused by repeated friction, pressure, or other irritation. While calluses are harmless in many cases, if not removed in a timely manner they may lead to skin ulceration or infection. Thus, the removal of calluses is an essential part of surgical debridement. Often, healthcare professionals experience problems with their identification. This study aims to develop an approach for callus thickness determination using hyperspectral imaging. Methods: Based on the two-layer tissue model developed by Yudovsky et al. (2010), we have developed a computationally simple way of extracting the epithelial thickness from spectral measurements of skin reflection. We performed a numerical evaluation of the proposed algorithm: we generated the reflectance spectrum using the two-layer model, added noise, and reconstructed the epidermal thickness L using the proposed method. To evaluate performance, we used the following parameters: epithelium thickness of 0.1–2 mm; dermal blood concentration of 0.2%, 3%, and 7%; and blood oxygen saturation of 60%, 80%, and 99%. Results: We found that the model extracts the epidermal thickness L reasonably well in the 0.1–1.5 mm range. Beyond that, the reflectance signal carries no information about the underlying layers. The most significant factor impacting the estimation is the scattering coefficient of the epidermis; other factors can largely be ignored. Conclusions: The proposed model can easily be implemented in image processing algorithms for hyperspectral/multispectral imaging systems.
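
The inversion loop described in the Methods can be sketched generically: the thickness L is recovered by matching a measured spectrum against a forward model over a grid of candidate thicknesses. The forward model below is a monotone placeholder, not Yudovsky's two-layer model, so only the structure of the estimate is illustrated.

```python
import numpy as np

wavelengths = np.linspace(500, 900, 80)                   # nm

def forward_model(L, wl):
    """Placeholder reflectance model, monotone in thickness L (mm)."""
    return np.exp(-L * (1.0 + 300.0 / wl))

def estimate_thickness(measured, wl, grid=np.linspace(0.1, 2.0, 200)):
    errors = [np.mean((forward_model(L, wl) - measured) ** 2) for L in grid]
    return grid[int(np.argmin(errors))]                   # best-matching thickness

truth = 0.8
noisy = forward_model(truth, wavelengths) + np.random.normal(0, 0.01, wavelengths.size)
print(estimate_thickness(noisy, wavelengths))             # close to 0.8
```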

Paper Nr: 17
Title:

Remote PPG Imaging by a Consumer-grade Camera under Rest and Elevation-invoked Physiological Stress Reveals Mayer Waves and Venous Outflow

Authors:

Timothy Burton, Gennadi Saiko and Alexandre Douplik

Abstract: Introduction: The photoplethysmographic (PPG) signal contains information about microvascular hemodynamics, including endothelial-related metabolic, neurogenic, myogenic, respiratory, and cardiac activities. The present goal is to explore the utility of a consumer-grade smartphone camera as a tool to study such activities. Traditional PPG is conducted using a contact method, but the resultant contact pressure can affect venous flow distribution and distort perfusion examination. This motivates the development of a remote PPG (rPPG) method. Methods: We used an imaging setup composed of a stand-mounted consumer-grade camera (iPhone 8) with on-board LED illumination. The camera acquired 1920x1080 video at 60 frames per second (fps); 90-second videos were captured of a hand in resting and elevated positions. Spatial averaging was performed to extract the rPPG signal, which was filtered using a continuous wavelet transform to analyse frequency ranges of interest. Results: The data demonstrated a plurality of observed patterns, which differed between the resting and elevated positions. In addition to cardiac and respiratory activities, we observed two further distinct low-frequency patterns: oscillations that we conclude are likely Mayer waves, and a monotonic increase in reflection (gravitational venous outflow). In some cases, these two patterns are combined. Conclusions: rPPG demonstrated potential for venous compartment examinations.
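
A hedged sketch of the signal path: per-frame spatial averaging yields the rPPG trace, and a continuous wavelet transform (PyWavelets, Morlet wavelet) isolates a low-frequency band such as the roughly 0.1 Hz Mayer-wave range. The frame count, ROI size, and band limits are illustrative assumptions.

```python
import numpy as np
import pywt

fps = 60
frames = np.random.rand(900, 64, 64)                      # 15 s of hand-skin ROI video (stand-in)
rppg = frames.mean(axis=(1, 2))                           # spatial averaging per frame

freqs = np.linspace(0.05, 2.0, 40)                        # Hz: venous/Mayer .. cardiac range
scales = pywt.central_frequency("morl") * fps / freqs     # scales matching those frequencies
coeffs, cwt_freqs = pywt.cwt(rppg, scales, "morl", sampling_period=1 / fps)
mayer = np.abs(coeffs[(cwt_freqs > 0.05) & (cwt_freqs < 0.15)]).mean(axis=0)
print(mayer.shape)                                        # per-frame Mayer-band amplitude
```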

Paper Nr: 19
Title:

COVID-19 Diagnosis using Single-modality and Joint Fusion Deep Convolutional Neural Network Models

Authors:

Sara El-Ateif and Ali Idri

Abstract: COVID-19 is a recently emerged pneumonia disease with threatening complications that can be avoided by early diagnosis. Deep learning (DL) multimodality fusion is rapidly becoming state of the art, leading to enhanced performance in various medical applications such as cognitive impairment diseases and lung cancer. In this paper, seven deep learning models (VGG19, DenseNet121, InceptionV3, InceptionResNetV2, Xception, ResNet50V2, and MobileNetV2) using single-modality and joint fusion were empirically examined for COVID-19 detection and contrasted in terms of accuracy, area under the curve, sensitivity, specificity, precision, and F1-score, using the Scott-Knott Effect Size Difference statistical test and the Borda count voting method. The empirical evaluations were conducted over two datasets, the COVID-19 Radiography Database and COVID-CT, using 5-fold cross-validation. Results showed that MobileNetV2 was the best-performing and least sensitive technique on the two datasets using a single modality, with an accuracy of 78% for the Computed Tomography (CT) modality and 92% for the Chest X-Ray (CXR) modality. Joint fusion outperformed single-modality DL techniques, with the joint fusion of MobileNetV2, ResNet50V2, and InceptionResNetV2 performing best for COVID-19 diagnosis with an accuracy of 99%. Therefore, we recommend the use of the joint fusion DL models MobileNetV2, ResNet50V2, and InceptionResNetV2 for the detection of COVID-19. As for single-modality, MobileNetV2 performed best and was the least sensitive to the two imaging modalities.
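
Joint fusion can be illustrated by concatenating feature vectors from several CNN branches before a shared classification head; the tiny branches below are stand-ins for the pretrained backbones named above, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

def branch(feat=64):
    """Tiny stand-in feature extractor (the paper uses pretrained CNN backbones)."""
    return nn.Sequential(nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                         nn.Linear(8, feat), nn.ReLU())

class JointFusion(nn.Module):
    def __init__(self, feat=64):
        super().__init__()
        self.branches = nn.ModuleList([branch(feat), branch(feat)])
        self.head = nn.Linear(2 * feat, 2)                 # COVID-19 vs. non-COVID

    def forward(self, x):
        return self.head(torch.cat([b(x) for b in self.branches], dim=1))

model = JointFusion()
print(model(torch.rand(4, 3, 224, 224)).shape)             # (4, 2)
```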

Paper Nr: 22
Title:

Relevance-based Channel Selection for EEG Source Reconstruction: An Approach to Identify Low-density Channel Subsets

Authors:

Andres Soler, Eduardo Giraldo, Lars Lundheim and Marta Molinas

Abstract: Electroencephalography (EEG) source reconstruction is the estimation of the underlying neural activity in cortical areas. Currently, the most accurate estimations are obtained by combining the information registered by high-density sets of electrodes distributed over the scalp with realistic head models that encode the morphology and conduction properties of the different head tissues. However, the use of high-density EEG can be impractical due to the large number of electrodes to set up, and it might not be required in all EEG applications. In this study, we applied relevance criteria to identify low-density subsets of electrodes that can be used to reconstruct the neural activity in given brain areas while maintaining the reconstruction quality of a high-density system. We compared the performance of the proposed relevance-based selection with multiple high- and low-density montages based on standard layouts and coverage, during the reconstruction of multiple sources and areas. We assessed several source reconstruction algorithms and concluded that the localization accuracy and waveforms of sources reconstructed with subsets of 6 and 9 relevant channels can be comparable to reconstructions obtained with a distributed set of 128 channels, and better than with 62 channels distributed in standard 10-10 positions.
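
One way to picture channel selection of this kind is a greedy search that keeps the channels whose subset reconstruction stays closest to the full 128-channel reconstruction. This simulated sketch is schematic, with a random leadfield and a minimum-norm estimate; it is not the authors' relevance criterion.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(128, 500))                  # leadfield: 128 channels x 500 sources
x = np.zeros(500); x[42] = 1.0                   # one active source
y = L @ x + 0.05 * rng.normal(size=128)          # simulated scalp measurements
x_full = np.linalg.pinv(L) @ y                   # 128-channel reference reconstruction

selected = []
for _ in range(9):                               # build a 9-channel subset greedily
    errs = {c: np.linalg.norm(np.linalg.pinv(L[selected + [c]]) @ y[selected + [c]] - x_full)
            for c in range(128) if c not in selected}
    selected.append(min(errs, key=errs.get))     # channel that best preserves the solution
print("selected channels:", selected)
```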

Paper Nr: 24
Title:

Unsupervised Image-to-Image Translation from MRI-based Simulated Images to Realistic Images Reflecting Specific Color Characteristics

Authors:

Naoya Wada and Masaya Kobayashi

Abstract: In this paper, a new domain adaptation technique is presented for image-to-image translation into the real-world color domain. Although CycleGAN has become a standard technique for image translation without paired training images, it cannot adapt the generated images to narrow domain characteristics such as color and illumination. Other techniques require large datasets for training. In our technique, two source images are introduced: one for image translation and another for color adaptation. Color adaptation is realized by introducing color histograms into the two CycleGAN generators and estimating color losses. Experiments using simulated images based on the OsteoArthritis Initiative MRI dataset show promising results in terms of color difference and image comparisons.
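
The color-loss idea can be sketched with a soft (differentiable) histogram, which lets a generator be penalised for deviating from a reference color distribution; the bin count and kernel bandwidth are assumptions, and the paper's exact loss formulation is not reproduced.

```python
import torch

def soft_histogram(img, bins=32, sigma=0.02):
    """img: (B, 3, H, W) in [0, 1] -> per-channel differentiable histogram (B, 3, bins)."""
    centers = torch.linspace(0, 1, bins, device=img.device)
    weights = torch.exp(-((img.flatten(2).unsqueeze(-1) - centers) ** 2) / (2 * sigma ** 2))
    hist = weights.sum(dim=2)                              # kernel-weighted bin counts
    return hist / hist.sum(dim=-1, keepdim=True)

fake = torch.rand(1, 3, 64, 64, requires_grad=True)        # generator output (stand-in)
ref = torch.rand(1, 3, 64, 64)                             # colour-reference source image
color_loss = torch.nn.functional.l1_loss(soft_histogram(fake), soft_histogram(ref))
color_loss.backward()                                      # gradients flow back to the generator
```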

Paper Nr: 27
Title:

Automatic Detection and Identification of Trichomonas Vaginalis from Fluorescence Microscopy Images

Authors:

Yongjian Yu and Jue Wang

Abstract: Trichomonas vaginalis (TV) causes sexually transmitted infections that, if not resolved in a timely manner, can lead to adverse health conditions. We construct a software platform integrating a novel, robust multiscale image analysis pipeline for automatic detection and characterization of TV from dual-resolution, multi-band digital fluorescence microscopy scans. We develop two spectral indices to highlight TV in spectrally contaminated images. The system employs a search algorithm that incorporates the spectral indices to locate the microorganisms in the low-resolution scans across the sample slide, and then identifies TV using multiscale edge-sensitive automatic thresholding segmentation and index-driven ranking in the high-resolution view. The method's capability is demonstrated by its discriminability in feature classification and in the TV test pipeline, both showing high sensitivity. This technique can be used to enable automatic, fast diagnosis of trichomoniasis at point-of-care clinics.
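
A normalised-difference spectral index of the general kind described above can be written in one line; the band pairing and the detection threshold below are hypothetical examples, not the two indices developed in the paper.

```python
import numpy as np

def spectral_index(band_a, band_b, eps=1e-8):
    """Normalised difference of two fluorescence bands, in [-1, 1]."""
    return (band_a - band_b) / (band_a + band_b + eps)

scan = np.random.rand(4, 512, 512)               # multi-band low-resolution scan (stand-in)
index = spectral_index(scan[1], scan[2])         # hypothetical band pairing
candidates = np.argwhere(index > 0.5)            # pixels to revisit at high resolution
print(len(candidates), "candidate locations")
```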

Paper Nr: 5
Title:

Random-walk Segmentation of Nuclei in Fluorescence Microscopic Images with Automatic Seed Detection

Authors:

Tabea G. Pakull, Frederike Wirth and Klaus Brinker

Abstract: In personalized immunotherapy against cancer, the analysis of cell nuclei in tissue samples can provide helpful information for predicting whether the benefits of the therapy outweigh the usually severe side effects. Since segmentation of nuclei is the basis for all further analyses of cell images, research into suitable methods is of particular relevance. In this paper, we present and evaluate two versions of a segmentation pipeline based on the established random-walk method. Both versions include automatic seed detection, one of them using a distance transformation. In addition, we present a method to select the required hyper-parameter of the random-walk algorithm. The evaluation on a benchmark dataset shows that promising results can be achieved with respect to common evaluation metrics. Furthermore, the segmentation accuracy can compete with a reference CellProfiler segmentation pipeline based on the watershed transformation. The random-walk method presented here can also be integrated into more advanced pipelines to further improve segmentation results.
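
A hedged sketch of such a pipeline with scikit-image: seeds are found automatically from a distance transform and passed to random_walker. The toy image, thresholds, and beta value are illustrative, and the paper's hyper-parameter selection method is not reproduced.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import random_walker

img = np.zeros((128, 128))
img[30:70, 30:70] = 1.0
img[60:110, 70:120] = 1.0                                  # two touching "nuclei"
noisy = img + 0.2 * np.random.default_rng(0).normal(size=img.shape)

distance = ndi.distance_transform_edt(noisy > 0.5)
peaks = peak_local_max(distance, min_distance=10, threshold_abs=5)  # one seed per nucleus
markers = np.zeros(img.shape, dtype=int)
markers[noisy < 0.2] = 1                                   # background seeds
for k, (r, c) in enumerate(peaks, start=2):
    markers[r, c] = k                                      # one foreground label per nucleus

labels = random_walker(noisy, markers, beta=130)           # beta: the key hyper-parameter
print(np.unique(labels))
```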

Paper Nr: 13
Title:

3D MRI Image Segmentation using 3D UNet Architectures: Technical Review

Authors:

Vijaya Kamble and Rohin Daruwala

Abstract: Over the last few decades, machine learning and deep convolutional neural networks (CNNs) have been used extensively and have shown remarkable performance in almost all fields, including medical diagnostics. In the medical domain, they are used for automatic tissue and lesion detection, anatomical structure segmentation, classification, and survival prediction. In this paper, we present an extensive technical literature review of 3D U-Net architectures applied to 3D brain magnetic resonance imaging (MRI) analysis. We mainly focus on the architectures and their modifications, pre-processing techniques, types of datasets, data preparation, methodology, GPU hardware, tumor types, and per-architecture evaluation measures. Our primary goal is to report how different 3D U-Net and CNN architectures have been used, to differentiate between state-of-the-art strategies, to compare their results on public and clinical datasets, and to examine their effectiveness. This paper is intended as a detailed reference for further research on the use of 3D U-Nets for automated brain MRI tumor detection, segmentation, and survival prediction. Finally, we present a perspective on future research directions for CNNs and 3D U-Net architectures to assist doctors and radiologists.
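
As a minimal illustration of what distinguishes the surveyed 3D U-Nets from their 2D counterparts, the block below applies volumetric convolutions to a (depth, height, width) MRI patch; the channel sizes are arbitrary and no full architecture is implied.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    """One volumetric encoder stage of a 3D U-Net."""
    return nn.Sequential(nn.Conv3d(cin, cout, 3, padding=1), nn.BatchNorm3d(cout),
                         nn.ReLU(), nn.Conv3d(cout, cout, 3, padding=1), nn.ReLU())

stage = conv_block(1, 16)
pooled = nn.MaxPool3d(2)(stage(torch.rand(1, 1, 32, 64, 64)))   # (batch, ch, D, H, W)
print(pooled.shape)                                             # (1, 16, 16, 32, 32)
```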

Paper Nr: 21
Title:

Multi-Modality Medical Image Translation for DICOM Brain Images

Authors:

Ninad Anklesaria, Yashvi Malu, Dhyey Nikalwala, Urmi Pathak, Jinal Patel, Nirali Nanavati, Preethi Srinivasan and Arnav Bhavsar

Abstract: The acquisition times for different MRI (Magnetic Resonance Imaging) modalities pose a unique challenge to the efficient usage of contemporary radiology technologies. The ability to synthesize one modality from another can benefit the diagnostic utility of the scans. Currently, all the exploration in the field of medical image-to-image translation is focused on NIfTI (Neuroimaging Informatics Technology Initiative) images. However, DICOM (Bidgood et al., 1997) images are the prevalent image standard in MRI centers. Here, we propose a modified deep learning network based on the U-Net architecture for T1-weighted (T1W) to T2-weighted (T2W) image-to-image translation of DICOM images, and vice versa. Our model exploits the pixel-wise correspondences between T1W and T2W images, which are important for understanding brain structures. The observations indicate better performance of our approach compared with previous state-of-the-art methods. Our approach can help decrease the acquisition time required for scans and thus also avoid motion artifacts.
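
The DICOM-specific preprocessing implied above can be sketched with pydicom: a T1-weighted slice is read, normalised, and shaped for a U-Net-style generator. The file name is hypothetical, the normalisation is an assumption, and the trained generator itself is omitted.

```python
import numpy as np
import pydicom
import torch

ds = pydicom.dcmread("t1_slice.dcm")                        # hypothetical T1W DICOM file
pixels = ds.pixel_array.astype(np.float32)
pixels = (pixels - pixels.min()) / (np.ptp(pixels) + 1e-8)  # per-slice min-max normalisation
t1 = torch.from_numpy(pixels)[None, None]                   # (1, 1, H, W) network input

# t2_pred = translator(t1)                                  # trained T1W -> T2W generator (omitted)
print(t1.shape)
```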