DCBIOSTEC 2020 Abstracts


Short Papers
Paper Nr: 1
Title:

Evolutionary Optimization of Image Processing for Cell Detection in Microscopy Images

Authors:

Andreas Haghofer, Josef Scharinger and Stephan Winkler

Abstract: In this paper, we present a new evolutionary algorithm for the automated, self-adaptive optimization of image processing workflows for cell detection in various kinds of microscopy images. We use evolution strategies to optimize the parameters of image processing pipelines. The algorithm automatically adjusts all degrees of freedom of the available image processing steps so that cells are detected as well as possible. Even without any further preprocessing or labeling of the images, our approach is able to calibrate and optimize the image analysis process where other machine-learning-based approaches tend to fail. Another benefit of our methodology is the extensibility and adjustability of the whole workflow: the complete set of image processing algorithms can be exchanged without rewriting the optimization algorithm. Support for multithreading and graphics processing units (GPUs) in independent optimization processes makes this workflow applicable on low-end hardware as well as on high-end servers.
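The core idea of the abstract above, an evolution strategy tuning pipeline parameters, can be sketched as follows. This is an editorial illustration, not the authors' implementation: the parameter names (`threshold`, `blur_sigma`) and the fitness function are hypothetical stand-ins for a real cell-detection score.

```python
import random

def fitness(params):
    # Stand-in for a real cell-detection score: rewards closeness to a
    # hypothetical optimal (threshold, blur_sigma) = (0.5, 2.0).
    threshold, blur_sigma = params
    return -((threshold - 0.5) ** 2 + (blur_sigma - 2.0) ** 2)

def evolution_strategy(mu=5, lam=20, generations=50, sigma=0.3):
    """(mu + lambda) evolution strategy: mutate parents with Gaussian
    noise, then keep the mu best of parents and offspring combined."""
    population = [[random.uniform(0.0, 1.0), random.uniform(0.0, 5.0)]
                  for _ in range(mu)]
    for _ in range(generations):
        offspring = [[gene + random.gauss(0.0, sigma)
                      for gene in random.choice(population)]
                     for _ in range(lam)]
        population = sorted(population + offspring,
                            key=fitness, reverse=True)[:mu]
    return population[0]

best = evolution_strategy()  # converges near the hypothetical optimum
```

Because the (mu + lambda) selection is elitist, the best individual never gets worse, which matches the self-adaptive calibration behaviour described in the abstract; independent runs of `evolution_strategy` could be distributed across threads or GPUs as the authors mention.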

Paper Nr: 2
Title:

Effect of e-Mental Health on Physiological Variables for Attention Regulation in Anxiety Disorders

Authors:

Claudia Lizbeth Martínez-González and Luis Fernando Burguete-Castillejos

Abstract: Currently, there are two problems: 1. establishing guidance strategies in relation to anxiety disorders, and 2. creating tools so that those already diagnosed with one of these disorders can make their treatment effective. This research work describes the situation faced by many Mexicans who cannot obtain clear information about what is happening to them, particularly when they have a panic attack and go to an emergency room to be treated. The work is also dedicated to designing and developing a technological tool that allows a patient diagnosed with an anxiety disorder, and who is experiencing a panic attack, to stop or diminish the symptoms of the emotional crisis. The tool is a mobile application intended to favorably alter the symptoms of anxiety. Following classical methods, it first induces physiological regulation through a breathing exercise and then promotes emotional regulation through the modification of cognitions, via a series of activities that allow users to regulate their attention.

Paper Nr: 3
Title:

Deep-learning based Analysis of Mammograms to Improve the Estimation of Breast Cancer Risk

Authors:

Francesca Lizzi

Abstract: My PhD research work consists in looking for a new automatic method to find image-based markers in mammograms for the early diagnosis of breast cancer. Breast cancer is the most frequently diagnosed cancer among women worldwide and the second leading cause of cancer death among women. It has been estimated that one woman in eight will develop breast cancer in her life. It is also widely accepted that early diagnosis is one of the most powerful instruments we have in fighting this type of cancer. For these reasons, mammographic screening programs are performed every two years on asymptomatic women at risk, aged between 45 and 74 years. Full Field Digital Mammography (FFDM) is a non-invasive, highly sensitive method for early-stage breast cancer detection and diagnosis, and represents the reference imaging technique to explore the breast in a complete way. Since mammography is a 2D X-ray projection imaging technique, it suffers from some intrinsic problems: a) breast structures overlap, b) malignant masses absorb X-rays similarly to benign ones, and c) sensitivity is lower for masses or microcalcification clusters in denser breasts. Breast density is defined as the amount of fibroglandular parenchyma, or dense tissue, with respect to fat tissue as seen on a mammographic exam. To achieve sufficient sensitivity in dense breasts, a higher radiation dose has to be delivered to the patient. Moreover, breast density is itself a risk factor for developing cancer. The most widely used density standard was established by the American College of Radiology (ACR) in 2013 and is reported in the Breast Imaging Reporting and Data System (BI-RADS) Atlas. This standard defines four qualitative classes: almost entirely fatty (``A''), scattered areas of fibroglandular density (``B''), heterogeneously dense (``C'') and extremely dense (``D'').
Since mammographic density assessment by radiologists suffers from non-negligible intra- and inter-observer variability, automatic methods have been developed to make the classification reproducible. The first problem in training machine learning models is the lack of large public mammogram datasets, which makes the comparison of different methods difficult. Furthermore, many previous approaches use a two-step classification, which implies that the classification is not completely automatic: first they extract features from the images or apply a segmentation method, and afterwards they train a classifier such as a Support Vector Machine or another machine learning method. In my previous work, a deep learning technique was explored to build a breast density classifier based on a residual convolutional neural network (CNN), a class of neural networks commonly used for image analysis. Thanks to the screening programs, huge amounts of mammograms can be collected and used for the development of analysis software, and in the last few years deep learning-based methods have been applied with success to a wide range of medical image analysis problems. Since deep learning methods need a huge amount of data, the ``Azienda Ospedaliero-Universitaria Pisana'' (AOUP) collected about 2000 mammographic exams (each consisting of 4 images) from its Senology Department. The exams were selected by a physician specialized in mammography and a radiology technician, and the dataset was anonymized and extracted from the AOUP database. We are also collecting a new longitudinal dataset of screening mammograms from the ``Azienda ASL Toscana Nord-Ovest'' (ATNO), made of both cancer and control cases along with histopathological reports and a questionnaire covering the known breast cancer risk factors.
The latter dataset will include all the screening mammographic exams of each woman before her diagnosis, as well as all the mammographic exams of each healthy woman. The main idea of my PhD research is to look for a signal that can distinguish women who are going to develop the disease from women who will not. To reach this goal, I will explore the trend of the CNN-extracted features, and of other classes of features related to breast density, over a woman's lifetime. A breast density classifier based on convolutional neural networks has been trained and evaluated, and I extracted the features it computed to perform the classification. At the same time, I trained another classifier, a Support Vector Machine, on first-order statistical features; the results obtained with the latter are promising. I am going to build a classifier that takes both the CNN-extracted features and the statistical ones as input in order to refine the performance. All the algorithms and protocols that can help explain the behavior of the classifiers, such as Class Activation Map analysis, will be studied in order to validate and control the performance beyond accuracy alone. Afterwards, all the features will be computed on the longitudinal dataset and studied in order to find a significant trend that distinguishes women who are going to develop the disease from women who are not. This could enable very early diagnosis and ensure the best possible prognosis for women.
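As an illustration of the first-order statistical features mentioned above, the sketch below (an editorial example, not the author's code; the exact feature set of mean, variance, skewness, kurtosis and histogram entropy is an assumption based on common radiomics practice) computes such features from a flattened image region:

```python
import math

def first_order_features(pixels, bins=16):
    """First-order (histogram-based) statistics of a list of pixel values."""
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(var)
    skew = sum((p - mean) ** 3 for p in pixels) / (n * std ** 3) if std else 0.0
    kurt = sum((p - mean) ** 4 for p in pixels) / (n * var ** 2) if var else 0.0
    # Shannon entropy of the intensity histogram over `bins` equal-width bins.
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for p in pixels:
        counts[min(int((p - lo) / width), bins - 1)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean, "variance": var, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy}

# Toy example: a bright, fairly uniform region (hypothetical intensities).
dense = first_order_features([200, 210, 205, 220, 215, 198, 202, 208])
```

In a setup like the one described, vectors of such per-image features could be concatenated with CNN-extracted features before training the Support Vector Machine.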

Paper Nr: 4
Title:

Multi-Scale Predictive Model for the Aggregation Kinetics of Biotherapeutic Proteins

Authors:

Ritpratik Mishra

Abstract: Protein aggregation is one of the primary causes of protein degradation and has also been linked to several diseases, such as Alzheimer's. Understanding the kinetics of protein aggregation is therefore fundamental to devising ways to mitigate the degradation. In recent years, with the development of high-performance computing techniques, it has become possible to understand aggregate formation at the atomic level, and to identify factors leading to aggregation that were difficult to capture with experimental techniques. These breakthroughs, coupled with the understanding gained from solution experiments, can enable us to build strategies to combat aggregation, including the design and evaluation of peptides and small molecules that inhibit the growth of aggregates or facilitate their dissociation. However, the time scales and length scales relevant for aggregation make accurate all-atom (AA) explicit-solvent simulations unfeasible for predicting all steps of aggregation thermodynamics and/or kinetics. This calls for multi-scale modelling, which enables us to use different computational techniques for different stages of aggregation. The computational technique used at each stage of our multi-scale model was chosen by finding a proper balance between the loss in resolution/accuracy in modelling the protein and the computational cost. In this study, we have considered insulin as the therapeutic protein owing to its low molecular weight and its significant importance in the pharmaceutical industry. We have modelled the aggregation kinetics of insulin using the widely used extended Lumry-Eyring model. The model can be briefly summarised as follows: a native insulin monomer undergoes conformational changes to form an aggregation-prone partially folded intermediate (PFI) species, which acts as a precursor for oligomer growth until nucleus formation, the rate-limiting step.
Broadly, the process can be divided into native insulin unfolding to the PFI, oligomer growth until nucleus formation, and finally fibril growth; each step is modelled at a different resolution. In earlier work in our group, we used AA explicit-solvent simulations to identify aggregation-prone PFIs of insulin. We then used a structural bioinformatics approach to identify the most stable N-PFI insulin homo-dimers, a putative aggregation-pathway complex. In contrast to functional protein-receptor association, there can be multiple aggregation-pathway complexes. To identify such complexes, a set of the most probable conformations of the N-PFI complex was first obtained using rigid-body docking coupled with MMPBSA energies for ranking the complexes. These docked complexes were used as starting structures for several long coarse-grained molecular dynamics (CGMD) simulations to obtain an ensemble of aggregation-pathway complexes. The physics-based MARTINI force field, appropriate for the non-native protein-protein interactions relevant for aggregation, was used for this set of simulations. We then used linear integer programming to identify the smallest number of distinct N-PFI complexes sufficient to represent the group of dimers obtained from the CGMD simulations. We hypothesise that this small set of N-PFI complexes can be taken as the configurational-diffusion-limited transient state for the insulin homo-dimer on the aggregation pathway. The dimerisation rate was then calculated by applying transition-complex theory to this set of N-PFI complexes. The insulin unfolding and dimerisation rates were then used as input to a mesoscale Population Balance Method (PBM) based on extended Lumry-Eyring kinetics. Predictions of nucleation time and oligomeric species concentrations from our multi-scale model were compared to experimental data on insulin aggregation kinetics, with very good agreement.
We also investigated the effect of the organic inhibitor BSPOTPE on the nucleation time and were able to make predictions very close to the reported experimental data. We are presently working on employing the developed multi-scale model to investigate higher-molecular-weight biotherapeutic proteins such as monoclonal antibodies.
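The reversible-unfolding-plus-irreversible-association core of the Lumry-Eyring scheme described above can be sketched numerically. This is an editorial toy model, not the authors' PBM code: the rate constants are illustrative, and oligomer growth beyond dimerisation is lumped into a single aggregate mass A.

```python
def lumry_eyring(N0=1.0, ku=0.05, kf=0.01, ka=0.1, dt=0.01, steps=20000):
    """Forward-Euler integration of a minimal Lumry-Eyring-type scheme:
    native monomer N unfolds reversibly to a partially folded
    intermediate I (rates ku, kf), and two I molecules associate
    irreversibly (rate ka) into aggregate mass A."""
    N, I, A = N0, 0.0, 0.0
    for _ in range(steps):
        unfold = ku * N - kf * I      # net reversible N -> I flux
        assoc = ka * I * I            # irreversible I + I association
        N += dt * (-unfold)
        I += dt * (unfold - 2.0 * assoc)
        A += dt * (2.0 * assoc)       # two monomers consumed per event
    return N, I, A                    # N + I + A stays ~ N0 (mass balance)

N, I, A = lumry_eyring()
```

In the full model described in the abstract, the single `ka` would be replaced by the AA/CGMD-derived dimerisation rate and the lumped A by size-resolved oligomer populations in the Population Balance Method.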

Paper Nr: 6
Title:

Study by Simulation and Reconstruction of a Brain-dedicated Positron Emission Tomograph based on Resistive Plate Chambers

Authors:

Ana L. Lopes, Miguel Couceiro, Paulo Crespo and Paulo Fonte

Abstract: Due to its high position accuracy and low production cost, Resistive Plate Chamber (RPC) technology has been under development at Laboratório de Instrumentação e Física Experimental de Partículas (LIP – Laboratory of Instrumentation and Experimental Particle Physics), in collaboration with Instituto de Ciências Nucleares Aplicadas à Saúde (ICNAS – Institute of Nuclear Sciences Applied to Health), for use in both animal and human Positron Emission Tomography (PET) imaging. These gaseous detectors present an excellent time resolution of 300 ps Full Width at Half Maximum (FWHM) for 511 keV photon pairs, which allows the use of Time-of-Flight information. Experimentally, these detectors have already proved able to provide Depth-of-Interaction information, which renders the corresponding images parallax-free. Thus, given the promising results obtained with the small-animal RPC-PET system, especially its spatial resolution of 0.4 mm FWHM, we are now aiming at the construction of an RPC-PET system dedicated to brain imaging, named HiRezBrainPET.
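As a back-of-the-envelope check (standard Time-of-Flight PET relation, not taken from the paper's simulations), the quoted 300 ps FWHM coincidence time resolution translates into a localisation uncertainty along the line of response via delta_x = c * delta_t / 2:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_position_fwhm_mm(time_fwhm_ps):
    """FWHM of the annihilation-point estimate along the line of
    response, in mm, for a given coincidence time resolution in ps."""
    return C * (time_fwhm_ps * 1e-12) / 2.0 * 1e3  # metres -> mm

fwhm_mm = tof_position_fwhm_mm(300.0)  # ~45 mm localisation FWHM
```

A ~45 mm localisation window does not by itself resolve structures, but constraining each event to that segment of the line of response reduces noise propagation in the reconstruction, which is the benefit Time-of-Flight information provides.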