Abstracts Track 2024


Nr: 16
Title:

Optimizing Critical Care Monitoring through Ambient Data Integration and Neural Networks

Authors:

Advait Thaploo, Max Wang, Ujjayi Pamidigantam and Parthiv Patel

Abstract: Blood oxygen saturation is a key clinical biomarker in critical care. Studies have shown that, potentially due to differences in skin melanin distribution, rapid pulse oximetry tests underperform in patients with darker skin tones. Further studies suggest skin color alone may not explain discrepancies between pulse oximetry tests and gold-standard arterial blood gas tests (SaO2). We hypothesized that an ensemble machine learning model could identify pulse oximetry measurements that are likely overestimations, using patient demographics as well as vitals and laboratory results collected at the time of concomitant SpO2 and SaO2 measurements from electronic health record (EHR) data. We defined occult hypoxemia as the binary case where an SpO2 reading shows higher blood oxygen levels than the concomitant SaO2 test. Data were taken from a subset of the MIMIC-IV dataset, which contained recordings from 81,798 patients who had received SpO2 and SaO2 tests at the Beth Israel Deaconess Medical Center (BIDMC) from 2008-2019. Ethical approval for this data was provided by the institutional review boards of the Massachusetts Institute of Technology and BIDMC. We utilized an ensemble of three machine learning classifiers to detect occult hypoxemia: Logistic Regression, Random Forest, and XGBoost, trained using stratified k-fold cross-validation, with hyperparameters optimized through GridSearchCV. The meta-learner model was trained on the validation predictions of the base models. We assessed model performance using metrics such as accuracy, balanced accuracy, precision, recall, F1 score, Cohen's Kappa, and Matthews Correlation Coefficient. We reported the confusion matrix, ranked features with SHAP values, conducted a permutation test for statistical significance (yielding a p-value), and calculated the AUROC. Our model ensemble achieved an accuracy of 70.8% and a recall of 80.8% for the positive class.
The ensemble was more successful in identifying true positive cases, as indicated by the high recall. The permutation test yielded a p-value of 0.0099, signifying that the model's performance is statistically significant against pure chance. However, the model performed more poorly on the negative class, with a precision of 60% for patients without occult hypoxemia. The Random Forest classifier was the most influential in the ensemble, with the best 5-fold cross-validation accuracy score of 0.7876; Logistic Regression reported a cross-validation score of 0.6153, and XGBoost 0.7420. The AUROC of 0.67 further highlights the model's modest discriminative ability. The most important features in the model were the SOFA respiratory score, FiO2, length of stay in the ICU, temperature, renal replacement status, heart rate, respiratory rate, RDW, CCI, and the anion gap. Race was the sixteenth-most important feature. This implies that patients with poorer oxygenation and respiration, longer ICU stays, and, to a certain extent, those from minority groups may experience reduced pulse oximetry accuracy. While this model shows promise and suggests modest efficacy in correcting SpO2 readings using ICU data, further development is essential before implementation in the clinic as a decision support algorithm. For future refinement, a more balanced dataset, especially with more negative cases, would improve specificity in patients without occult hypoxemia.
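As an illustration of the evaluation stage, every metric listed in this abstract can be derived from the four cells of the binary confusion matrix. The sketch below is not the authors' code, and the cell counts in the usage note are invented; it only shows the arithmetic behind the reported numbers:

```python
import math

def metrics(tp, fn, tn, fp):
    """Compute standard binary-classification metrics from a 2x2 confusion matrix."""
    n = tp + fn + tn + fp
    accuracy = (tp + tn) / n
    recall = tp / (tp + fn)            # sensitivity for the positive class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    balanced_accuracy = (recall + specificity) / 2
    f1 = 2 * precision * recall / (precision + recall)
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_obs = accuracy
    p_exp = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    # Matthews correlation coefficient: balanced measure over all four cells
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": accuracy, "balanced_accuracy": balanced_accuracy,
            "precision": precision, "recall": recall, "f1": f1,
            "kappa": kappa, "mcc": mcc}
```

For instance, a hypothetical matrix with 80 true positives, 20 false negatives, 50 true negatives, and 50 false positives gives a recall of 0.80 and a Cohen's kappa of 0.30.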

Nr: 61
Title:

Implementation and Optimization of Asymmetric Sigmoid Functions for Weighted Gene Co-Expression Network Analysis

Authors:

Merve Yarici and Muhammed Erkan Karabekmez

Abstract: In biological studies, examining gene expression at the genomic level often provides more robust and accurate results than the analysis of individual genes, a trend consistently observed across a wide range of contexts, and can enhance understanding of the molecular mechanisms driving molecular alterations. In this context, weighted gene co-expression network analysis (WGCNA) has recently been widely used to cluster transcriptomic datasets. Unlike the classical molecular biology perspective, which proceeds inductively, WGCNA is built on systems biology principles that treat multiple molecular entities as a whole; it applies thresholds to pairwise correlations to construct co-expression matrices, either hard thresholding with a sign function or soft thresholding with a power function. However, these functions may sometimes exaggerate minor differences in expression correlations. We have previously proposed using asymmetric sigmoid functions, parameterized via a grid-search approach, as an alternative soft-thresholding solution. However, the number of variables in asymmetric sigmoid functions may vary, and parameterization can be problematic. In this study, we introduce a systematic procedure for parameterizing the asymmetric sigmoid function more straightforwardly, easing its use as an alternative soft-thresholding solution in WGCNA. The efficiency of the approach was demonstrated on four different COVID-19 datasets, a yeast dataset, and an E. coli dataset. The results indicate that this approach provides biologically plausible associations for the resulting modules and that applying asymmetric sigmoid functions in soft thresholding enhances the efficiency of WGCNA.
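To make the two soft-thresholding alternatives concrete, the sketch below contrasts the classical WGCNA power function with one common asymmetric-sigmoid form (a Richards-type generalized logistic). The parameter values are purely illustrative, and the authors' exact parameterization may differ:

```python
import math

def power_adjacency(s, beta=6):
    """Classical WGCNA soft thresholding: raise |correlation| to a power."""
    return abs(s) ** beta

def asym_sigmoid_adjacency(s, k=20.0, m=0.8, nu=0.5):
    """Richards-type generalized logistic as an asymmetric sigmoid.

    With nu != 1 the curve rises and saturates at different rates on
    either side of the midpoint m, giving a gentler transition than the
    power function for correlations near the threshold."""
    return (1.0 + math.exp(-k * (abs(s) - m))) ** (-nu)
```

Both functions map a pairwise correlation in [-1, 1] to an adjacency weight in (0, 1), monotonically in |s|; the sigmoid's asymmetry is what avoids exaggerating small correlation differences.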

Nr: 62
Title:

Disease Classification and Biomarker Discovery in Breast Cancer Through Multi-Omics Integration Incorporating Prior Biological Knowledge

Authors:

Miray Unlu Yazici and Malik Yousef

Abstract: The diverse omics data generated through high-throughput technologies enable us to integrate multi-omics data, enhancing our understanding of the molecular mechanisms behind complex diseases. However, there is an apparent gap between omics data and integration strategies. The purpose of this study is to develop a novel machine learning (ML) approach, the Enriched Grouping-Scoring-Modeling (E-G-S-M) tool, designed for multi-omics integration (microRNA and mRNA expression profiles) that incorporates Prior Biological Knowledge (PBK) in breast cancer. The grouped potential biosignatures (a microRNA and its related mRNAs in each group) associated with the disease are selected for ML model development based on their biological function scores. The novelty of this approach is that scoring and ranking of the groups is built upon PBK. The Breast Invasive Carcinoma (BRCA) microRNA (miRNA) and gene expression (mRNA) datasets obtained from The Cancer Genome Atlas (TCGA) (Tomczak et al., 2014) are downloaded and classified by Hormone Receptor (HR) molecular subtype into the following groups: 425 samples of HR+ and 124 of HR-. The normalized omics data matrices (21,839 mRNAs and 1,882 miRNAs) are divided into training and test datasets (split ratio 90:10), and the training set is passed to a grouping component to find highly correlated miRNA-mRNA pairs. Subsequently, each group is given as input to a scoring component to select the most relevant ones associated with the target disease. This component is based on gene set enrichment analysis and returns a score for over-represented Reactome terms in significant groups. The group-level scoring via PBK is not a scoring of individual markers, but it allows us to identify the best-scored biosignature groups. The top-ranked groups are used to develop a Random Forest model for classification of the HR+ and HR- cases.
The performance of the model is evaluated on a test set over a 10-fold randomized cross-validation procedure (Jung & Hu, 2015). Our approach also identifies potential biomarkers associated with the disease and gives them as output. The top 10 groups, ranked and filtered based on their PBK scores, are used to perform classification of the HR+/- cases. The performance of the E-G-S-M and the miRcorrNet tool (Yousef et al., 2021), which does not incorporate PBK, are comparable, with similar performance metrics: the average AUROC scores are 0.98 and 0.99, and the accuracy scores are 0.92 and 0.94, for the E-G-S-M and miRcorrNet tools respectively. In addition, E-G-S-M tracks iteration information and reveals disease-related biological groups. The top detected group in classification of BRCA molecular subtypes consists of hsa-mir-17 and the associated genes CDC20, CENPA, and CDCA7. The other significant groups are hsa-mir-4728 (genes: STARD3, ERBB2, GRB7) and hsa-mir-106b (genes: CDCA7, ORC1, STMN1). These miRNAs and their associated genes have regulatory roles in BRCA molecular mechanisms. Multi-omics analysis driven by prior biological knowledge enables us to capture key biological molecules and to gain a deeper understanding of the cellular mechanisms of complex diseases. Integrating other PBK in the scoring stage could strengthen the differentiation between tumour and normal tissue or between different subtypes. Furthermore, this approach can be applied to other omics data for regulatory network analysis via the collective interaction of markers in the identified significant groups.
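The grouping component can be illustrated with a small, self-contained sketch: each miRNA is paired with the mRNAs whose expression correlates strongly with it across training samples. The expression profiles, gene names, and correlation threshold below are invented for the example; the actual tool operates on the normalized TCGA matrices:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def group_targets(mirna_expr, mrna_exprs, threshold=0.7):
    """Grouping step sketch: attach to one miRNA every mRNA whose
    expression profile is strongly (anti-)correlated with it."""
    return [gene for gene, expr in mrna_exprs.items()
            if abs(pearson(mirna_expr, expr)) >= threshold]
```

Each resulting group (one miRNA plus its correlated mRNAs) would then be handed to the enrichment-based scoring component described above.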

Nr: 73
Title:

Efflux Mechanism and Conformational Landscape of P-glycoprotein to Better Rationalize the Prediction of Inhibitors Involved in Drug-Drug Interactions

Authors:

Ahmad Elbahnsi, Bàlint Dudàs, Xavier Decleves, Salvatore Cisternino and Maria Miteva

Abstract: P-glycoprotein (P-gp) is one of the most studied ABC transporters and plays a key role in the ADMET (Absorption, Distribution, Metabolism, Excretion, Toxicity) properties of very diverse xenobiotics and drugs. Moreover, P-gp is of major interest in cancer treatment because of its involvement in the multidrug resistance (MDR) phenotype: by expelling anticancer drugs out of tumor cells, it reduces their therapeutic activity. The inhibition of P-gp also causes drug-drug interactions (DDI), which have an impact on drug efficacy/toxicity. The ability to identify substrates and inhibitors of P-gp is therefore crucial to prevent DDI and adverse drug reactions (ADR) during the clinical stages of the drug development process. To ensure its drug efflux functions, P-gp uses the energy provided by ATP binding and hydrolysis to transit between open states (inward-facing, IF) allowing compound binding, and closed states (outward-facing, OF) allowing its eviction from the cell. The IF and OF 3D structures of human P-gp were solved by cryo-EM (Kim and Chen, 2018; Nosol et al., 2020; Urgaonkar et al., 2022). However, these structures give only partial information and do not elucidate the efflux mechanism, which involves large conformational changes to expel very diverse drug substrates. Recently, we developed an original enhanced sampling method for molecular dynamics (MD), namely kinetically excited targeted MD, that allowed us to reveal the transitions between the IF and OF states and the translocation pathway in BCRP, another important ABC transporter (Dudas et al., 2022). Here, we optimized this approach to generate transitory conformations along the gating cycle of P-gp and to unveil the mechanisms of substrate efflux and inhibitor interactions. Our simulations provided the first exploration of the P-gp transition pathway, and the resulting conformational landscape revealed crucial features of its functions and dynamics.
Such data are subsequently useful to i) better define the binding sites, ii) characterize their interaction modes with known active molecules, and iii) rationalize the prediction of new inhibitors and substrates. References: Dudas, B., Decleves, X., Cisternino, S., Perahia, D., Miteva, M.A., 2022. ABCG2/BCRP transport mechanism revealed through kinetically excited targeted molecular dynamics simulations. Comput. Struct. Biotechnol. J. 20, 4195–4205. https://doi.org/10.1016/j.csbj.2022.07.035 Kim, Y., Chen, J., 2018. Molecular structure of human P-glycoprotein in the ATP-bound, outward-facing conformation. Science 359, 915–919. https://doi.org/10.1126/science.aar7389 Nosol, K., Romane, K., Irobalieva, R.N., Alam, A., Kowal, J., Fujita, N., Locher, K.P., 2020. Cryo-EM structures reveal distinct mechanisms of inhibition of the human multidrug transporter ABCB1. Proc. Natl. Acad. Sci. 117, 26245–26253. https://doi.org/10.1073/pnas.2010264117 Urgaonkar, S., Nosol, K., Said, A.M., Nasief, N.N., Bu, Y., Locher, K.P., Lau, J.Y.N., Smolinski, M.P., 2022. Discovery and Characterization of Potent Dual P-Glycoprotein and CYP3A4 Inhibitors: Design, Synthesis, Cryo-EM Analysis, and Biological Evaluations. J. Med. Chem. 65, 191–216. https://doi.org/10.1021/acs.jmedchem.1c01272.

Nr: 103
Title:

Laser Processing of Transparent Materials for the Enhancement of Raman Signal in Liquid Samples

Authors:

María Gabriela Fernández-Manteca, Celia Gómez Galdós, Borja García García, Luis Rodriguez-Cobo, Jose-Miguel Lopez-Higuera, Alain A. Ocampo Sosa and Adolfo Cobo

Abstract: Among advanced material processing techniques, ultrafast laser-assisted etching (ULAE) has emerged as a pivotal technology, revolutionizing the domain of precision microfabrication. This method is based on a two-step process. In the first step, non-linear absorption of ultrashort pulses produces highly localized, high-resolution modifications within transparent materials. The second step is a wet etching process that is significantly more selective toward the laser-affected zone than the pristine material - with etch rates up to 3 orders of magnitude higher. Traditionally, the etching rate depended significantly on the polarization of the laser beam. However, recent studies have shown that high etching rates can be achieved independently of the polarization [1]. This makes it possible to fabricate, from simple setups, custom-made three-dimensional glass devices for multiple applications [2]. Raman spectroscopy is a fast analytical technique that requires minimal sample preparation. It operates by detecting vibrational and rotational signals emitted by the molecular bonds of a sample when exposed to laser light. The Raman spectrum produced is distinctive for each sample, revealing its individual chemical composition. Traditionally, Raman spectroscopy faces difficulties in accurately identifying specific compounds within liquid samples due to the extensive drying area of the droplet on commonly used substrates. This large drying area often leads to the dispersion of analytes, making it challenging to obtain distinct signals. The integration of ULAE offers a solution to this problem: by strategically modifying the surface properties of Raman substrates through ULAE to enhance hydrophobicity, the analytes are forced into smaller, confined spaces, where they accumulate in higher concentrations. Consequently, this focused accumulation leads to an amplification of Raman signals.
The enhanced intensity of the Raman spectra obtained from these confined spaces allows greater specificity in the identification of individual compounds within liquid samples. In this study, we applied ULAE to commonly used transparent Raman substrates. Following this, liquid samples of various compositions were deposited as droplets. Spatial scans of these drops were conducted using a Raman spectroscopy system equipped with a 532 nm laser source. The results obtained under these diverse conditions were compared, providing a detailed analysis of the effects of ULAE on different liquid compositions. This comparative approach not only refines the sensitivity and accuracy of Raman spectroscopy, but also broadens its applicability, promising a future of advanced analytical methods. This work is part of the R+D projects PREVAL23/05, INNVAL19/17, INNVAL23/10, financed by IDIVAL, and TED2021-130378B-C21 and PID2022-137269OB-C22, funded by MCIN/AEI/10.13039/501100011033. References: [1] Mario Ochoa, Pablo Roldán-Varona, José Francisco Algorri, José Miguel López-Higuera, and Luis Rodríguez-Cobo (2023). “Polarisation-independent ultrafast laser selective etching processing in fused silica.” Lab on a Chip, 23(7), pp. 1752–1757. [2] Pablo Roldán-Varona, Calum A. Ross, Luis Rodríguez-Cobo, José Miguel López-Higuera, Eric Gaughan, Kevin Dhaliwal, Michael G. Tanner, Robert R. Thomson, and Helen E. Parker (2023). “Selective Plane Illumination Optical Endomicroscopy with Polymer Imaging Fibres.” APL Photonics, 8(1), pp. 016103.

Nr: 111
Title:

Challenges and Solutions of De-Identifying German Unstructured Medical Free Text in a Telehealth Disease Management Programme

Authors:

Martin Baumgartner, Karl Kreiner, Fabian Wiesmüller, Dieter Hayn, Gerhard Pölzl and Guenter Schreier

Abstract: Using structured medical data to develop modern artificial intelligence (AI) models can significantly improve medical care. However, nuanced information frequently remains unstructured in free-text data. Patients with the same diagnoses and vital parameters might have significantly different needs. Therefore, documents like doctors’ letters are required by healthcare professionals (HCPs) to get a comprehensive understanding of a patient’s condition. If AI models are to be capable of the same holistic view, such free-text data must be considered during training, and complex natural language processing (NLP) tasks are required to structure such intricate information. To collaborate with NLP specialists and increase patient privacy, references to identifying information must be removed, for which no standard model exists yet due to the lack of large open-source datasets of medical text. Language-specific solutions are found in the literature, mostly for English, though studies for other languages (e.g. Spanish, Chinese) exist. Experiments on German corpora are sparse and based on doctors’ letters, which are structured and spell-checked to a certain degree. We propose an algorithm that removes identifying references from completely unstructured medical notes from a telehealth-supported disease management programme (n ≥ 63,000 notes). These records were written by different HCPs (e.g. doctors, nurses) and include colloquial language, domain-specific abbreviations, spelling errors, nicknames, etc. These conditions are not fully addressed by any of the studies in the literature and make reliable de-identification especially challenging. To remove references to individuals (e.g. names, phone numbers, addresses), four types of checks were implemented: a) regular expressions, b) dictionaries of database-internal names, c) dictionaries of publicly available names, and d) a hand-curated list of common precursors to personal information.
The checks were compartmentalized into individual algorithms and arranged in an ordered ensemble. This made the algorithm a) more efficient, because no further checks were required once one of them was positive, and b) well testable, since individual unit tests could be written. During masking, a named entity recognition (NER) scheme was used to retain context by substituting references with the type of reference (e.g. patient, location). A subsample of 200 clinical notes, stratified by note length and de-identification frequency, was selected for evaluation. Each sample was manually assigned values for true positives (reference correctly redacted), true negatives (note correctly not redacted), false positives (reference wrongfully redacted) and false negatives (reference wrongfully not redacted). Performance was compared to the NER system of the Python library spaCy and to an earlier version of the algorithm. The current version de-identified texts with a sensitivity of 0.943 and a specificity of 0.933, representing a substantial improvement over spaCy (sensitivity: 0.082, specificity: 0.804) and the older version of the algorithm (sensitivity: 0.94, specificity: 0.62). We conclude that with the rise of large language models, de-identification of clinical free texts is becoming increasingly important. Our results showed that off-the-shelf solutions failed on our clinical notes. Our approach contributes novel ideas for de-identifying German medical text under challenging conditions.
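A minimal sketch of such an ordered ensemble is given below. The phone regex, name dictionaries, and precursor words are tiny invented stand-ins for the four check types described above, but the short-circuit ordering and the NER-style type placeholders follow the described scheme:

```python
import re

# Illustrative stand-ins for the four check types (all values invented):
PHONE = re.compile(r"\+?\d[\d /-]{6,}\d")     # structured identifiers via regex
KNOWN_NAMES = {"huber", "maier"}              # database-internal / public name lists
PRECURSORS = {"patient", "dr", "fr", "hr"}    # words that often precede a name

def deidentify(text):
    """Ordered ensemble sketch: each token passes through the checks in turn
    and is masked with its reference type as soon as one check fires."""
    out = []
    prev = ""
    for token in text.split():
        word = token.strip(".,;:").lower()
        if PHONE.fullmatch(token):
            out.append("[PHONE]")
        elif word in KNOWN_NAMES or prev in PRECURSORS:
            out.append("[NAME]")              # type placeholder retains context
        else:
            out.append(token)
        prev = word
    return " ".join(out)
```

For example, deidentify("Patient Huber called 0664/1234567") yields "Patient [NAME] called [PHONE]": the dictionary check never runs on the phone token because the regex check already fired.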

Nr: 147
Title:

3D Virtual Reality Prism Adaptation Simulation System for Hemispatial Neglect

Authors:

Jukang Lee, Tae Hee Kim, Je Hyun Yoo and Ji Young Kim

Abstract: Objectives: Hemispatial neglect syndrome is a neuropsychological condition in which patients fail to report, respond, or orient to external stimulation located contralateral to a brain lesion, when the failure cannot be attributed to a primary sensory or motor deficit (Heilman, Watson, & Valenstein, 1985). Prism adaptation training has been shown to be effective for the treatment of hemispatial neglect (Rossetti, 1998; Pisella, 2006). We developed a 3D VR prism adaptation simulation system. The purpose of the study was to evaluate the effects of VR training with the simulation system for hemispatial neglect rehabilitation. Materials and Methods: The 3D VR prism adaptation simulation system was developed to implement prism adaptation training. A Leap Motion® device was used for hand motion tracking, and an Oculus Rift DK2® headset was used to present the immersive virtual-reality prism simulation to the subject. The 3D VR prism training program consisted of 3 sessions, repeated 10 times over 2 weeks. In the first session, the subjects were instructed to move their virtual hand straight to a midline target in VR. During the training, the hand path tracking was not displayed except for the initial starting portion and the ending portion, which simulated the prism glasses training environment. The first session finished when the subjects succeeded in hitting the target 20 times consecutively. In the second session, the virtual hand path was programmed to deviate 10° rightwards from the original path, blinded to the subject, simulating the prism glasses condition. The subjects initially missed the target to the right side, but after several trials with targeting errors, they adapted to the deviation and could hit the target correctly. After adaptation to the deviation condition, the third session started, in which the hand path deviation was eliminated, again blinded to the subject.
The subjects initially missed targets to the left side, similar to ‘the after-effect’ of prism glasses training. Neglect tests (line bisection test, Albert’s test and star cancellation test) were performed before the intervention and after one and two weeks, to evaluate the effect of the 3D VR training. Results: Ten subjects (M:F = 7:3) with hemispatial neglect due to a right brain lesion were recruited. Line bisection test scores improved from 48.51% (before) to 57.31% (1 week) and 64.76% (2 weeks) (p < 0.01). Albert’s test scores improved from 62.75% (before) to 81.50% (1 week) and 92.50% (2 weeks) (p < 0.01). Star cancellation test scores improved from 41.25% (before) to 53.04% (1 week) and 59.82% (2 weeks) (p < 0.01). Conclusions: All neglect test scores improved after the 3D virtual reality prism adaptation simulation training. Hemispatial neglect improved significantly with the 3D VR prism adaptation simulation program.

Nr: 170
Title:

Predicting Laryngeal Aspiration Through Voice Analysis App Based on a Machine Learning Algorithm

Authors:

Jukang Lee, Kyoung Hyo Choi, Yoon Ghil Park, Jinyoung Park, Kyoung Cheon Seo, Yong Jae Na, Myungeun Yoo and Ji Young Kim

Abstract: Introduction: Dysphagia can result in serious complications such as aspiration pneumonia, malnutrition and dehydration. The gold-standard diagnostic method is the VFSS (videofluoroscopic swallowing study), conducted by a specialist in an x-ray room. There is a need for a simple screening tool for dysphagia for community use. Aspiration during swallowing can result in observable voice alterations ('wet voice'), and phonologic parameters (RAP, shimmer, HNR and VTI) significantly increase in patients with dysphagia (Ryu JS et al. Am J Phys Med Rehabil 2004). Voice analysis before and after swallowing could therefore serve as a screening tool for dysphagia, if the analysis process is simple and easy. Objective: We developed a smartphone app that analyses voice changes after swallowing based on a machine-learning algorithm. We aimed to evaluate the performance of the app as a screening tool for oropharyngeal dysphagia. Method: Among the patients referred for VFSS, 868 subjects were enrolled in the study. VFSS was performed to detect dysphagia. According to the VFSS findings, Penetration-Aspiration Scale (PAS) 1 was assigned to the normal group and PAS 2 to 8 to the abnormal (dysphagia) group. Each subject's voice (/a/ sound) was recorded using the mobile phone app before and after the VFSS. Based on the /a/ voice data captured with the mobile phone app before and after the VFSS, an autoencoder, which performs strongly in detecting abnormalities, was used to determine dysphagia. A dataset of 133 subjects was selected for the analysis: voice data from 55 normal cases were used for training, 8 normal and 44 abnormal cases were used for validation, and 29 normal and 104 abnormal cases were used as test data. Results: From the confusion matrix results, we achieved a sensitivity of 91.8%, a specificity of 50.0%, and an accuracy of 85.2%.
The area under the ROC curve was 0.7577. Conclusion: The voice analysis app showed acceptable performance for dysphagia screening. Further improvement of its performance is needed, as is testing of the application in community settings.

Nr: 270
Title:

CO-Dec: Bulk RNA Sequence Deconvolution with Combination of Deep Learning and Benchmarking Methods

Authors:

Toi Nishikawa

Abstract: Background: Single-cell RNA sequencing (scRNA-seq) plays a critical role in characterizing tissue composition and unveiling how cells influence disease. However, its application in routine clinical settings and research is challenging due to high costs and stringent sample collection requirements. Bulk RNA sequence deconvolution (bulk deconvolution), a computational method for estimating the cell composition of bulk profile samples, has the potential to broaden the applications of scRNA-seq. Early bulk deconvolution methods include csSAM (Shen-Orr et al., Nature Methods, 2010 Apr;7(4):287-9), utilizing linear regression, and CIBERSORT (Newman et al., Nature Methods, 2015 May;12(5):453-7), employing support vector machines. Since 2019, methods using scRNA-seq references have been introduced, such as MuSiC (Wang et al., Nature Communications, 2019 Jan 22;10(1):380), deconvSeq (Du et al., Bioinformatics, 2019 Dec 15;35(24)), DWLS (Tsoucas et al., Nature Communications, 2019 Jul 5;10(1):2975), BisqueRNA (Jew et al., Nature Communications, 2020 Jun 3;11(1)), and SCDC (Dong et al., Briefings in Bioinformatics, 2021 Jan 18;22(1)). Additionally, in recent years, deep learning models such as digitalDLSorter (Torroja et al., Frontiers in Genetics, 2019 Feb 06;10) and Scaden (Menden et al., Science Advances, 2020 Jul 22;6(30)) have been developed. Methods using scRNA-seq references achieve high accuracy and are widely used, but they are limited by their dependency on scRNA-seq reference quality (we refer to these as "Textbook approaches"). Conversely, deep learning methods cannot utilize scRNA-seq references, but offer the advantage of learning from a large paired dataset of bulk RNA-seq and cell composition (we call these "Exercise-book approaches"). In this study, we propose "bulk RNA sequence deconvolution with a combination of deep learning and benchmarking methods (CO-Dec)".
CO-Dec integrates the "Textbook approach", employing scRNA-seq references, with the "Exercise-book approach", utilizing deep learning methods. Materials and Methods: Following Cobos's study (Cobos et al., Nature Communications, 2020 Dec 2;11(1)), we used Baron's pancreatic scRNA-seq dataset as the scRNA-seq reference and simulated bulk RNA-seq from scRNA-seq data. Cell composition was estimated by a neural network model taking as input the cell composition vectors output by benchmarking methods (SCDC, BisqueRNA, FARDEEP, RLA, and DCQ) together with the bulk RNA sequence vector. We employed root mean squared error (RMSE) and Pearson's correlation coefficient as evaluation metrics to compare accuracy with the benchmarking methods. Results: Combining the deep learning method with the benchmarking methods improved accuracy. Pearson's correlation increased from 0.9027 to 0.9259, 0.365 to 0.7219, 0.8775 to 0.8877, 0.8851 to 0.8877, and 0.2859 to 0.6259 for SCDC, BisqueRNA, FARDEEP, RLA, and DCQ, respectively. RMSE decreased from 0.0640 to 0.0541, 0.1481 to 0.1005, 0.0722 to 0.0663, 0.0689 to 0.0651, and 0.1609 to 0.1108 for SCDC, BisqueRNA, FARDEEP, RLA, and DCQ, respectively. Discussion: CO-Dec outperformed the benchmarking methods on bulk deconvolution tasks by synergizing the strengths of the "Textbook approach" utilizing scRNA-seq references and the "Exercise-book approach" employing deep learning methods.
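The combination step can be sketched as follows: the cell-composition estimates of the benchmarking methods are concatenated with the bulk expression vector to form the network's input features, and RMSE is computed per sample for evaluation. The vectors below are invented toy values, and the real model is a trained neural network rather than this plain feature assembly:

```python
import math

def codec_features(bulk_vector, benchmark_estimates):
    """Assemble a CO-Dec-style input: cell-composition vectors from several
    benchmarking methods concatenated with the bulk RNA-seq vector itself."""
    features = [v for est in benchmark_estimates for v in est]
    return features + list(bulk_vector)

def rmse(estimated, true):
    """Root mean squared error between estimated and true cell compositions."""
    return math.sqrt(sum((e - t) ** 2 for e, t in zip(estimated, true)) / len(true))
```

With five benchmarking methods over k cell types and g genes in the bulk profile, the input is a (5k + g)-dimensional vector, so the network can learn when to trust which reference-based estimate.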

Nr: 271
Title:

Intelligent Conversational Agents for Patient Empowerment: Design, Development, Evaluation, and Implementation in Two Surgical Contexts Using Retrieval Augmented Generation with Large Language Models

Authors:

Federico Guede-Fernandez, Salomé Azevedo, Miguel Azeitona Santos, João Silva, Rúben Gabriel Carvalho and Ana Londral

Abstract: The pre- and post-surgical phases are extremely delicate for patients. Most patients have low health literacy about how to prepare for and recover from a particular surgical procedure. While the clinical team diligently provides valuable information, the sheer volume of instructions may overwhelm patients, hindering comprehension and accessibility outside of scheduled consultations. This challenge can impede their capacity to prepare for surgery adequately, adhere to post-surgical instructions, and make well-informed decisions regarding their treatment. In this situation, conversational agents (CA) can help answer patients' questions precisely when they need assistance. Large Language Models (LLM) are trained on an enormous amount of text data to comprehend and produce human-like language; they are proficient at tasks like text generation, translation, and question-answering. Retrieval-augmented generation (RAG) is a technique for improving the factual accuracy of LLM-generated responses by grounding them in data retrieved from external documents. We propose to design, develop, evaluate, and implement a CA based on the RAG framework to provide information about the surgery and the recovery process to patients and their relatives, making the patients' interactions with the CA more human-like. In this study, two surgical scenarios will be explored: cardiac and neurological. While cardiac surgeries have become less risky and are now quite common, they still evoke natural concerns for patients and their families despite their routine nature and low associated risks. Deep Brain Stimulation (DBS) is a neurosurgical intervention to address movement disorders linked to conditions such as Parkinson's disease (PD), essential tremor, dystonia, and various other neurological disorders.
It is crucial to note that DBS can provide relief from symptoms, but despite the procedure's efficacy, the community at large continues to be hesitant and misinformed about presumed associated risks. The proposed RAG-based CA will be able to analyse and understand the context of a conversation and retrieve relevant information from specific documents provided by clinical experts for these specific interventions. The main advantage of this model is that it learns language patterns and can generate contextual responses through training on a large dataset of texts. The Participatory Action Research (PAR) approach will be followed to support the collaborative development of the CA while simultaneously implementing and evaluating it, involving different stakeholders (patients, their relatives, and the clinical team) in the design and research process to learn their preferences and goals and to evaluate user engagement. This approach consists of the following stages: 1) Planning, which involves the design of the CA; 2) Action, which comprises the development of the CA; 3) Observation, which includes implementation of the CA; and 4) Reflection, which evaluates whether the CA's effects were desirable and suggests ways to improve the CA in the next iteration. By implementing a dedicated CA for cardiac surgery and DBS information, we aim to combat misinformation and hesitancy effectively by offering accurate and easily accessible details about the procedure, its benefits, and associated risks, as the CA becomes a reliable source of information. This dispels misconceptions and empowers patients to make informed decisions, fostering a more confident and well-informed community.
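The retrieval step of a RAG pipeline like the one proposed can be sketched with a minimal bag-of-words retriever. The document chunks and question below are invented examples; a production system would use dense embeddings for retrieval and an LLM to generate the final answer from the assembled prompt:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words term-count vectors."""
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def retrieve(question, chunks, k=1):
    """RAG retrieval step: rank clinician-provided document chunks by
    similarity to the question and keep the top k."""
    q = Counter(question.lower().split())
    ranked = sorted(chunks, reverse=True,
                    key=lambda c: cosine(q, Counter(c.lower().split())))
    return ranked[:k]

def build_prompt(question, context):
    """Generation step: constrain the LLM to the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

Because the prompt is built only from retrieved, clinician-approved chunks, the agent's answers stay grounded in vetted surgical information rather than the model's open-ended training data.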

Nr: 273
Title:

PerFSeeB: Designing Long High-Weight Single Spaced Seeds for Full Sensitivity Alignment with a Given Number of Mismatches

Authors:

Sofya Titarenko and Valeriy Titarenko

Abstract: Sequence comparison is one of the major problems in bioinformatics. It requires comparing macromolecules (DNA, RNA, and proteins) or their elements against each other and is widely used in genomic data science. We can formalise the comparison problem as the search for an optimal global (or local) alignment. The problem can be solved using dynamic programming algorithms (Smith-Waterman, Needleman-Wunsch). While dynamic programming guarantees finding the optimal alignment, it is very computationally expensive, so alternative techniques have been proposed and are extensively used (e.g., heuristic and alignment-free methods). The amount of data generated by modern genome-sequencing hardware has been increasing exponentially worldwide, and DNA sequencing technology is constantly changing and improving. This makes the problem of sequence alignment ever more challenging, especially with the new era of metagenomic data. To reduce computation time, it has been suggested to search first for candidate places. This can be done using so-called seeds (short sub-sequences). Different seeds result in different sensitivity (the proportion of correctly aligned sequences). The ultimate aim is to increase sensitivity (ideally to full sensitivity) while reducing the complexity of calculations. This work proposes an algorithm to investigate the properties of high-weight spaced seeds. The focus is on so-called lossless seeds, which guarantee full sensitivity. The algorithm confirms that the desirable property for optimal spaced seeds is periodicity and finds a certain relationship between the number of mismatches, the period block size, and the lengths of the seed and read. A second algorithm provides a list of optimal periodic spaced seeds based on the number of allowed mismatches (we experimented with numbers from 2 to 9) and block sizes of up to 50 symbols.
The third algorithm generates spaced seeds for an arbitrary read length using the list suggested by the second algorithm. The efficiency of the found seeds is compared with spaced seeds generated by other seed software (e.g., MegaBLAST, BFAST, rasbhari). We confirm that the seeds generated by the PerFSeeB method significantly reduce the computational complexity of sequence alignment. Moreover, all algorithms are optimised using low-level Intel intrinsic functions, which allow fast performance on a standalone multicore workstation and enable experiments with long high-weight seeds (a type of seed not considered before). The optimisation methods include a novel approach to data storage and to arithmetic and logic operations. The code can be found at https://github.com/vtman/PerFSeeB. The results are published in: Titarenko, V., Titarenko, S. PerFSeeB: designing long high-weight single spaced seeds for full sensitivity alignment with a given number of mismatches. BMC Bioinformatics 24, 396 (2023). https://doi.org/10.1186/s12859-023-05517-4
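The lossless (full-sensitivity) property of a spaced seed can be checked by brute force, as in the sketch below: a seed written as a string of '1' (required match) and '0' (don't care) is lossless for a given read length and mismatch count if every placement of that many mismatches still leaves some seed offset free of errors. This is only a naive illustration of the concept, not the optimised bit-level algorithms of PerFSeeB.

```python
from itertools import combinations

def is_lossless(seed, read_len, mismatches):
    """Brute-force check that a spaced seed (string of '1'/'0') detects every
    placement of `mismatches` errors in a read of length `read_len`.
    Illustrative only; PerFSeeB uses far more efficient bit-level code."""
    required = [i for i, c in enumerate(seed) if c == "1"]
    offsets = range(read_len - len(seed) + 1)
    for errs in combinations(range(read_len), mismatches):
        err = set(errs)
        # the seed must fit at some offset avoiding all error positions
        if not any(all(o + p not in err for p in required) for o in offsets):
            return False
    return True
```

For example, the contiguous seed "111" is lossless for reads of length 7 with one mismatch, but not with two.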

Nr: 276
Title:

Impedimetric Screen-Printed Immunosensor for the Rapid Detection of Chagas Disease

Authors:

Franchin Lara, Cecilia Yamil Chain, José Cisneros, Andrea Villagra, Carlos Labriola, Alessandro Paccagnella and Stefano Bonaldo

Abstract: Chagas disease is an endemic parasitic condition, caused by Trypanosoma cruzi, that affects Latin America and, more recently, Europe. This work proposes a new impedimetric immunosensor for the rapid, cost-effective, label-free, and low-volume detection of specific anti-T. cruzi antibodies, used as biomarkers of Chagas disease in human serum. The biosensor is based on gold screen-printed electrodes functionalized with isolated cruzipain proteins by adsorption, following a highly reproducible 40-minute chemical procedure. The biosensor is tested with clinical serum samples from patients with (positive) or without (negative) anti-T. cruzi antibodies, as previously verified by ELISA testing. The detection is performed through electrochemical impedance spectroscopy, showing a ~22% variation of the charge transfer resistance (Rct) for positive human serum due to the immunoreaction between the immobilized cruzipain and the specific antibodies contained in the samples. Based on a cutoff value for the Rct ratios, the developed immunosensor is capable of successfully discriminating positive from negative human serum samples at a dilution factor of 1/6400 in phosphate-buffered saline solution, showing excellent agreement with the ELISA results. The kinetics of the binding between immobilized cruzipain and specific antibodies are also characterized, with an equilibrium dissociation constant (KD) of 12.59 nM and negative cooperative binding indicated by a Hill coefficient of 0.85. The immunosensor demonstrates high sensitivity and specificity in detecting Chagas disease biomarkers in serum samples. The paper also discusses the cost-effectiveness of the immunosensor compared to ELISA, highlighting its potential as a point-of-care diagnostic tool for Chagas disease.
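The reported binding parameters can be plugged into the standard Hill equation; the sketch below (an illustration, not the authors' fitting code) shows how the fraction of occupied cruzipain sites would vary with antibody concentration under those parameters.

```python
def hill_fraction_bound(conc_nM, kd_nM=12.59, hill_n=0.85):
    """Fraction of cruzipain sites occupied at antibody concentration
    `conc_nM` under the Hill model. KD and the Hill coefficient are the
    values reported in the abstract; the function form is the standard
    Hill equation, used here as an illustrative sketch."""
    return conc_nM ** hill_n / (kd_nM ** hill_n + conc_nM ** hill_n)

# At conc == KD exactly half the sites are occupied regardless of n;
# a Hill coefficient below 1 (here 0.85) flattens the binding curve,
# the signature of negative cooperativity.
half_occupancy = hill_fraction_bound(12.59)
```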

Nr: 278
Title:

Processing of Complex-Valued Multi-Echo fMRI Data

Authors:

Michal Mikl, Marie Nováková, Anezka Kovarova, Martin Gajdoš and Radek Mareček

Abstract: Multi-echo functional magnetic resonance imaging (fMRI) data can be used to optimize sensitivity to the observed BOLD (blood oxygenation level dependent) response across the brain. The typical data processing pipeline is based on a weighted combination of the time series from the individual echoes, followed by the standard processing steps used for single-echo fMRI data. However, most fMRI procedures deal with magnitude data. Phase information is typically neglected, although previous studies showed that it might enhance the identification of non-BOLD signal components. In the case of complex-valued fMRI data, a simple weighted combination cannot be applied to the phase part or to the real+imaginary representation of the data. We propose a solution based on rotating the complex data to maximize the real part of each voxel time series. This operation does not affect the temporal signal fluctuations but enables a weighted summation of complex data based on the contrast-to-noise ratio (CNR), as is the typical approach for magnitude data. We used a dataset consisting of 13 healthy volunteers with complex-valued multi-echo fMRI data and electrophysiological recordings of ECG and breathing for subsequent evaluation of the physiological noise present in the fMRI data. Data were acquired with a 3T MR scanner. Seven fMRI runs with a visual-motor task and different acquisition parameters were measured during one visit. The runs differed in repetition time (from 3.05 s to 0.5 s), multi-slice acceleration factor (1, 4, 6, 8), and flip angle. The three echoes were obtained at echo times of 17, 35, and 52 ms. Matlab with the SPM12 toolbox and our own scripts were used to process the data. In the first step, magnitude and phase were converted to the real and imaginary representation. Magnitude data were realigned to correct for head movements, and the same parameters (translations and rotations) were applied to the real and imaginary data.
Subsequently, the rotation maximizing the real part of each voxel time series was applied to the real and imaginary data. The weighted combination of the three echoes was calculated from the rotated complex data. Finally, both the magnitude and the real plus imaginary parts of the combined data were used for the rest of the preprocessing (spatial normalization into MNI space and spatial smoothing with a Gaussian filter) and for activation analysis. The RETROICOR algorithm was applied to the data to evaluate the variance related to physiological noise. We compared the activation analysis of complex data with magnitude-only data. The activation patterns revealed by complex data were similar to those from magnitude-only data. The real part of the complex data provided higher correlation within the functional brain networks (r=0.34 for magnitude, r=0.44 for the real part) and a higher proportion of variability explained by the representative signals in individual regions (44% for the real part vs. 29% for magnitude data). We observed a similar portion of physiological noise in magnitude and complex data (about 11% of the variability in grey matter), but there was a big difference between the physiological noise explained in the real part and in the imaginary part (38% of the variability in grey matter). We introduced and tested a procedure for combining complex-valued multi-echo fMRI data. Our results showed that the procedure does not impair the ability to detect activation or connectivity patterns in the data. Moreover, our approach may partially decrease the amount of physiological noise in the real part of the processed complex data, but this requires more validation.
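The rotation-then-combination idea can be sketched as follows. The exact rotation criterion and CNR definition below are plausible assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rotate_to_real(z):
    """Rotate a complex voxel time series so its mean lies on the positive
    real axis. One plausible realisation of 'maximising the real part';
    the authors' exact criterion may differ."""
    theta = np.angle(z.mean())
    return z * np.exp(-1j * theta)

def cnr_weighted_combine(echoes):
    """Combine echo time series with weights proportional to a simple CNR
    surrogate (|mean| over std of the real part). Illustrative weighting
    only; the rotation makes a weighted sum of complex data meaningful."""
    rotated = [rotate_to_real(e) for e in echoes]
    w = np.array([abs(e.mean()) / (e.real.std() + 1e-12) for e in rotated])
    w = w / w.sum()
    return sum(wi * e for wi, e in zip(w, rotated))
```

Because each series is multiplied by a unit-modulus constant, its temporal fluctuations are preserved, which is the key property the abstract relies on.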

Nr: 281
Title:

Combining Deep Learning and Machine Learning for the Automatic Identification of Hip Prosthesis Failure: Development, Validation and Explainability Analysis

Authors:

Anna Corti, Federico Muscato, Francesco Manlio Gambaro, Katia Chiappetta, Mattia Loppini and Valentina D. A. Corino

Abstract: Revision hip arthroplasty yields less favorable outcomes compared to primary total hip arthroplasty (THA), and understanding the timing of THA failure is crucial. However, early detection of THA failure remains challenging. Artificial intelligence, specifically deep learning (DL) and machine learning (ML), can enhance diagnostic accuracy for THA failure by automatically evaluating X-ray images when monitoring patients with hip arthroplasties. This study aims to create a combined DL-ML approach for the automatic detection of hip prosthetic failure from conventional plain radiographs. The analysis involved antero-posterior and lateral radiographic views from routine post-surgery follow-ups. Two patient cohorts were included, for model development and validation, respectively. The training cohort consisted of 280 patients (140 failed and 140 non-failed) with a total of 560 images. The validation cohort consisted of 352 patients (275 failed and 77 non-failed) with a total of 771 images. After pre-processing, three images were generated: the original, acetabulum, and stem images. Convolutional neural networks were employed to predict prosthesis failure, encompassing loosening, bearing-surface wear and osteolysis, malpositioning, and dislocation. Two feature-based DL-ML pipelines were developed, in which deep features were extracted either from the original image (original-image pipeline) or from the three images (3-image pipeline). Features were either used directly or reduced through principal component analysis. The two feature-based pipelines were compared with a baseline end-to-end DL model (baseline model) applied to the original images. Support vector machine (SVM) and random forest (RF) classifiers were considered for each pipeline.
The SVM applied to the 3-image pipeline demonstrated the best performance, with an accuracy of 0.958 ± 0.006 in internal validation (compared to an accuracy of 0.945 ± 0.009 obtained with the SVM applied to the original image) and an F1-score of 0.874 in the external validation set. The baseline model achieved an accuracy of 0.936 ± 0.010, averaged over the 50 repetitions. Hence, combining global features from the original image with local information extracted from the acetabulum and the stem significantly enhanced the classifier's performance, underscoring the efficacy of feature concatenation. Nonetheless, the comparable outcomes of the original-image and 3-image pipelines implied a predominant influence of the original images in identifying failures. This proposition was confirmed by the explainability analysis, which pinpointed the features of the complete original image as the primary contributors. Nevertheless, the role of the acetabulum and stem images was also emphasized by the explainability analysis, which detected one feature from the acetabulum and one from the stem among the 20 most impactful features. In summary, this study introduces a novel combined DL-ML approach for THA failure detection, enhancing the role of stem and acetabular component analysis.
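The feature-concatenation idea of the 3-image pipeline can be illustrated with a small sketch; here a nearest-centroid classifier on synthetic 'deep features' stands in for the SVM, and all data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def concat_features(original, acetabulum, stem):
    """Concatenate deep features from the three images (3-image pipeline)."""
    return np.hstack([original, acetabulum, stem])

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

# synthetic 'deep features' for non-failed (0) vs failed (1) prostheses;
# the separations are made up for illustration
n, d = 40, 8
y = np.array([0] * n + [1] * n)
X = concat_features(
    rng.normal(y[:, None] * 2.0, 1.0, (2 * n, d)),   # original-image features
    rng.normal(y[:, None] * 1.0, 1.0, (2 * n, d)),   # acetabulum features
    rng.normal(y[:, None] * 1.0, 1.0, (2 * n, d)),   # stem features
)
model = fit_centroids(X, y)
acc = (predict(model, X) == y).mean()
```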

Nr: 284
Title:

Network Analyses for Functional Annotation and Drug Repurposing

Authors:

Erik Sonnhammer

Abstract: The FunCoup network database (https://funcoup.sbc.su.se) provides comprehensive functional association networks of genes/proteins that were inferred by integrating massive amounts of multi-omics data, combined with orthology transfer. The current release includes 22 species from all domains of life, including SARS-CoV-2, allowing users to visualize and analyze interactions between SARS-CoV-2 and human proteins in order to better understand COVID-19. The FunCoup networks can be used for a variety of purposes. The website allows users to visualize tissue-specific networks and regulatory interactions in the human interactome. A unique feature of the FunCoup website is the possibility to perform ‘comparative interactomics’, in which subnetworks of different species are aligned with each other using orthologues. This way, the level of conservation of the network between species can be easily studied. A number of functional genomics web resources use FunCoup. One example is the PathBIX (https://pathbix.sbc.su.se/) web server for network-based pathway analysis, which runs the ANUBIX algorithm, shown to be more accurate than previous network-based methods. The PathBIX website performs pathway annotation for 21 species and utilizes prefetched and preprocessed network data from FunCoup and pathway data from three databases: KEGG, Reactome, and WikiPathways. MaxLink (https://maxlink.sbc.su.se/) is a network-based gene prioritization server that uses FunCoup. Based on a user-supplied list of query genes, MaxLink identifies and ranks genes that have a statistically significant number of links to the query list. This functionality can be used, e.g., to predict new potential disease genes from an initial set of genes with known association to a disease. Another application of functional association networks is to identify candidates for drug repurposing.
We have evaluated network crosstalk-based methods that perform well for pathway enrichment on their ability to predict drug repurposing. FunCoup was used to construct a new benchmark for performance assessment of network-based drug repurposing tools, which was used to compare the network crosstalk-based methods ANUBIX, BinoX, and NEAT with a state-of-the-art technique, Network Proximity. We found that network crosstalk-based drug repurposing is able to rival the state-of-the-art method and in some cases outperform it. References: Persson E, Castresana-Aguirre M, Buzzao D, Guala D, Sonnhammer E. "FunCoup 5: Functional association networks in all domains of life, supporting directed links and tissue-specificity." J. Mol. Biol., 433:166835 (2021); Castresana-Aguirre M, Persson E, Sonnhammer E. "PathBIX - a web server for network-based pathway annotation with adaptive null models." Bioinformatics Advances, 1:vbab010 (2021); Guala D, Sonnhammer ELL. "Network crosstalk as a basis for drug repurposing." Front. Genet., 13:792090 (2022)
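The MaxLink-style prioritization idea, ranking candidate genes by the number of network links into a query set, can be sketched as follows. The significance testing performed by the real server is omitted, and the gene names are hypothetical.

```python
def rank_candidates(edges, query_genes):
    """Rank non-query genes by their number of network links into the query
    set, in the spirit of MaxLink. Sketch only: the real server also tests
    the statistical significance of each link count."""
    query = set(query_genes)
    counts = {}
    for a, b in edges:
        if a in query and b not in query:
            counts[b] = counts.get(b, 0) + 1
        if b in query and a not in query:
            counts[a] = counts.get(a, 0) + 1
    # highest link count first; ties broken alphabetically
    return sorted(counts.items(), key=lambda kv: (-kv[1], kv[0]))
```

A gene linked to many known disease genes in the query list rises to the top and becomes a candidate disease gene.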

Nr: 286
Title:

Combining Speech and Electroglottographic Data to Assess the Validity of Early Voice Biomarkers of Parkinson’s Disease

Authors:

Khalid Daoudi

Abstract: Background: Parkinson's disease (PD) is the second most common neurodegenerative disease. It is well known that the diagnosis of PD can be very challenging in early disease stages. On the other hand, it has been established that speech impairment is an early clinical feature of PD. For this reason, acoustic analysis of PD speech has recently gained ever-increasing interest. However, the large majority of research uses data from mid- and/or late-stage PD patients. This has led to numerous shortcuts and contradictory hypotheses on potential early acoustic markers of PD. Purpose: We used electroglottographic signals in conjunction with speech signals, recorded from early-stage PD patients, to check the validity of some hypotheses on early voice biomarkers of PD. Method: A total of 46 French speakers were recruited in the framework of a research project involving the neurology and ENT departments of two French university hospitals. Twenty-six patients (10 females and 16 males) were diagnosed with idiopathic PD (mean age of 62.2 ± 7.2 years and a mean symptom duration of 3.1 ± 1.6 years). Twenty healthy controls (HC) with a mean age of 59.1 ± 8.6 years (10 females and 10 males), without any history of neurological or communication disorders, were recruited. Each participant performed several speech tasks recorded by a head-mounted condenser microphone. In parallel to speech, electroglottographic signals were also recorded. An electroglottograph (EGG) is a non-invasive device that indexes the contact area between the two vocal folds during speech. From the EGG signals, we extracted the ground truth of the voiced/unvoiced speech segments and the time locations of the glottal closing instants (GCIs). The latter led, in particular, to the computation of ground-truth values of the instantaneous fundamental frequency (F0) in voiced segments.
This allowed a precise computation of the widely used phonation perturbation measures: jitter, shimmer, and HNR (harmonic-to-noise ratio). Ground-truth GCIs also allowed accurate estimation of the glottal flow, the airflow excitation signal generated by the vocal folds during phonation. We then applied a standard technique for the parametrization of the glottal flow waveform. Results: Statistical analysis revealed no significant difference between the phonation perturbation measures of the HC and PD groups. This indicates that these measures cannot be considered early voice markers of PD, contrary to the reports of numerous studies. We also found that speech-only automatic methods for the computation of these measures are sensitive to the accuracy of F0 estimation. This may explain some contradictory findings in the literature. Statistical analysis also revealed no significant difference between the glottal flow features of the HC and PD groups, indicating that glottal flow analysis is not suitable for the assessment of speech impairment in early PD. We also found that speech-only automatic methods for the computation of glottal features are sensitive to the accuracy of GCI detection algorithms. Indeed, some algorithms wrongly revealed a significant statistical difference between PD and HC. Conclusion: Using EGG data recorded from early PD patients, we showed the invalidity of some hypotheses on early PD voice biomarkers. We also identified potential sources of contradictory findings when using speech-only automatic methods. EGG biosignals can be very useful to confirm or reject hypotheses on PD voice biomarkers.
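The perturbation measures themselves are simple once ground-truth glottal periods are known. The sketch below uses one common local definition of jitter and shimmer, which may differ in detail from the variants used in the study.

```python
def jitter_percent(periods):
    """Local jitter: mean absolute difference between consecutive glottal
    periods, relative to the mean period (a standard definition; the
    study's exact variant may differ)."""
    diffs = [abs(a - b) for a, b in zip(periods, periods[1:])]
    return 100.0 * (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer_percent(amplitudes):
    """Local shimmer: the same measure applied to cycle peak amplitudes."""
    return jitter_percent(amplitudes)
```

With EGG-derived GCIs the period sequence is the ground truth, so these measures are exact; speech-only pipelines must first estimate F0, which is where the sensitivity discussed above enters.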

Nr: 287
Title:

Relationship Between Environmental Predictors and Hospital Admission in Lyon, France: A Distributed Lag Nonlinear Analysis

Authors:

Levi Monteiro Martins, Elsa Coz, Mohand-Said Hacid and Delphine Maucort-Boulch

Abstract: The congestion of emergency services is a global problem. For instance, in France, the annual number of visits to emergency services increased consistently from 1996 to 2019. In 1996, this number was 10.1 million for metropolitan France. It then increased continuously between 1996 and 2016 at an average rate of 3.5% per year, and at a slower pace of 1.6% per year on average between 2016 and 2019. This rise in medical visits leads to an increase in hospital admissions, creating problems related to limited resources, medical staff overload, and equity in healthcare. Many studies have shown an association between air pollution and hospital admissions. Thus, understanding the risks linking air pollution and hospital admissions can play an important role in helping hospital management better prepare and provide improved services to patients. In this work, we propose the use of quasi-Poisson generalized linear regression and a distributed lag non-linear model (DLNM) to investigate the delayed and non-linear effects of air pollution and weather on the daily hospital admissions of individuals diagnosed with diseases of the circulatory system and respiratory-related diseases (International Classification of Diseases, 10th Revision, I00-I99 and J00-J99). Furthermore, we propose to perform subgroup analyses by age (>=18 and <65, >=65), gender (male, female), French deprivation score (1-3, 4-5), and Charlson score (0, 1, 2, and >2). The study included all hospital admissions from January 1, 2009, to December 31, 2019, from the Hospices Civils de Lyon (HCL), a hospital network located in Lyon, France. Hospital data were extracted from the Program for the Medicalization of Information Systems (PMSI). Air pollution data were provided by Atmo Auvergne-Rhône-Alpes and the Institut national de l'environnement industriel et des risques (INERIS).
The pollutants included in this study were particulate matter (PM2.5 and PM10), ozone (O3), nitrogen dioxide (NO2), sulfur dioxide (SO2), and carbon monoxide (CO). Meteorological data were provided by Météo-France and included temperature, wind speed, relative humidity, and rainfall. Goodness of fit of the model was evaluated using the quasi-Poisson Akaike Information Criterion (Q-AIC) and residual plots. In addition, a sensitivity analysis was conducted by varying the degrees of freedom and the maximum lag period of the cross-basis predictors. We used the packages “mgcv” (version 1.9.0) and “dlnm” (version 2.4.7) for modeling in the R environment (version 4.3.2). In our preliminary results for hospital admissions combined, i.e., admissions for I00-I99 and J00-J99, no significant association with air pollutants was found. However, temperature and relative humidity exhibited a positive association at initial lags: extreme cold temperatures and low relative humidity increased the risk of admissions. For admissions related to circulatory system diseases, extreme concentrations of SO2 presented a positive association at initial lags. In addition, for admissions related to respiratory diseases, extreme concentrations of O3 presented a positive association at initial lags.
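The core structure of a distributed-lag analysis is a design matrix whose row for day t contains the exposure at lags 0..L. The sketch below builds such an unconstrained lag matrix; the study itself uses the penalised spline cross-bases of the R "dlnm" package, and the PM2.5 values here are made up.

```python
def lag_matrix(series, max_lag):
    """Build a distributed-lag design matrix: the row for day t holds the
    exposure at lags 0..max_lag. A simple unconstrained lag basis, standing
    in for the spline cross-bases of the R 'dlnm' package."""
    rows = []
    for t in range(max_lag, len(series)):
        rows.append([series[t - lag] for lag in range(max_lag + 1)])
    return rows

# hypothetical daily PM2.5 concentrations
pm25 = [10, 12, 15, 11, 9, 14, 13]
X_lag = lag_matrix(pm25, max_lag=2)
# each row: [today's PM2.5, yesterday's, two days ago]
```

Regressing daily admission counts on these columns (with a quasi-Poisson GLM) yields one coefficient per lag, i.e., the delayed effect profile the abstract describes.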

Nr: 290
Title:

Peer Support Apps Improve Lifestyles for Early Type II Diabetes

Authors:

Takeo Shibata, Mizuki Kosone, Shota Yoshihara, Megumi Shibuya and Kenji Sata

Abstract: Introduction: Diabetes is one of the leading causes of death and disability worldwide. The number of people with type II diabetes recently surpassed 500 million and has been projected to reach about 1.3 billion by 2050 (Lancet. 2023 Jul 15;402(10397):203-234). Although lifestyle improvements are necessary to prevent and treat type II diabetes, it is difficult to maintain better lifestyle habits. We provided a peer support app to help users maintain better lifestyle habits. This study aims to evaluate the effects of the peer support app on improving the lifestyles of people with early type II diabetes. Methods: A three-month randomized controlled trial was conducted on citizens with early type II diabetes (age from 40 to 70 years, HbA1c from 5.6% to 7.0%). They were randomly assigned to two groups, using or not using the peer support app. All participants set a daily walking step goal and tried to achieve it. Participants assigned to use the peer support app formed teams of five anonymous members within the app. They uploaded a challenge photo every day, and team members then encouraged each other. Daily walking steps, the achievement rate of daily walking step goals, BMI, HbA1c, blood pressure, and lifestyle habits were compared. The Mann-Whitney U test was used to evaluate these lifestyle-related improvements. Results: Thirty-eight participants were enrolled in this trial. Achievement rates for the group using the app and the group not using the app were 57.5% and 26.5%, respectively (p=0.037). The average numbers of daily steps over three months were 6,854 and 3,946 steps, respectively (p=0.034). No significant differences were shown for changes in BMI, HbA1c, or blood pressure. Significant differences in lifestyle habits were shown for the improvement period of dietary habits and for interest in exercising regularly.
Discussion: The five team members encourage each other every day on the peer support app, which helps them maintain their improved lifestyle habits. Because each team is small, it is difficult to slack off. Although some participants were over 65 years old in this study, they could use the app after a short training. Elderly people account for 29.1% of the Japanese population. This app may therefore be useful not only for people with early diabetes but also for elderly people in general, to improve their lifestyles and reduce their loneliness.
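The Mann-Whitney U statistic used for these comparisons can be computed by direct pair counting, as the sketch below shows; computing the p-value the study reports additionally requires the null distribution or a normal approximation.

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic via direct pair counting (ties count half):
    the number of (a, b) pairs with a > b. Sketch of the test statistic
    only; p-values need the null distribution on top of this."""
    u = 0.0
    for a in group_a:
        for b in group_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u
```

Complete separation of the two groups gives U = 0 (or U = n1*n2), while identical distributions give U near n1*n2/2.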

Nr: 293
Title:

Utilizing Cross-Validation to Reveal Hidden Biological Knowledge in Multi-Omics Data

Authors:

Miray Unlu Yazici, J. S. Marron, Burcu Bakir-Gungor and Malik Yousef

Abstract: Massive biological datasets generated with high-throughput technologies have shifted the emphasis towards the integration of multiple datasets. In silico investigations enable high-statistical-power approaches and machine learning algorithms to understand the underlying characteristics of complex diseases via multiple integration methods. We designed a novel method in which informative groups are constructed using miRNA and mRNA expression profiles. Following that, significant groups detected with a scoring function are used to develop a machine learning model. This model enables the identification of disease-related biomarkers and their joint interactions in the cellular signaling underlying diseases. Furthermore, a randomized bootstrap resampling method is employed to extract distinct molecular patterns via iterations and new biomarkers within the informative groups. The Breast Invasive Carcinoma (BRCA) microRNA (miRNA) and gene expression (mRNA) profiles were downloaded from The Cancer Genome Atlas (TCGA). The normalized datasets were split into training and test sets (split ratio of 90:10). The training set is introduced into the grouping component, and significant miRNA-mRNA pairs, also called groups, are constructed. Next, these groups are scored based on their classification performance in the scoring component. The top 10 groups are used to develop a Random Forest ML model. The pattern component identifies co-occurrence patterns and clustering of disease-associated groups by using appearance information over iterations and group-associated gene information, respectively. Our multi-omics integrative tool facilitates capturing key biological molecules and unveiling cellular mechanisms underlying heterogeneous diseases. Integrating biological knowledge from case sets with the components designed in our study provides strong differentiation between tumor and normal tissue.
This method can also be used for subtype identification and disease status differentiation. As a result, this approach provides a comprehensive view of disease-related molecular mechanisms.
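The group-scoring step can be illustrated with a toy stand-in: the function below scores each miRNA-mRNA group by a crude class-separation measure rather than the actual classification performance used in the study, and the group names are hypothetical.

```python
def separation_score(values, labels):
    """Absolute difference of class means over the pooled spread: a crude
    stand-in for the classification-based group scoring in the abstract."""
    g0 = [v for v, l in zip(values, labels) if l == 0]
    g1 = [v for v, l in zip(values, labels) if l == 1]
    m0, m1 = sum(g0) / len(g0), sum(g1) / len(g1)
    spread = (max(values) - min(values)) or 1.0
    return abs(m1 - m0) / spread

def top_groups(groups, labels, k=10):
    """Rank feature groups (name -> per-sample summary values) and keep
    the top k, mirroring the 'top 10 groups' selection."""
    scored = sorted(groups.items(),
                    key=lambda kv: separation_score(kv[1], labels),
                    reverse=True)
    return [name for name, _ in scored[:k]]
```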

Nr: 27
Title:

A Comparison Study of Marginal and Internal Fit Assessment Methods for Fixed Dental Prostheses: in-Vitro Study

Authors:

Byoungju Yun, Keunbada Son and Kyu-Bok Lee

Abstract: Numerous studies have previously evaluated the marginal and internal fit of fixed prostheses; however, few reports have performed an objective comparison of the various methods used for their assessment. The purpose of this study was to compare five marginal and internal fit assessment methods for fixed prostheses. A specially designed sample was used to measure the marginal and internal fit of the prosthesis according to the cross-sectional method (CSM), silicone replica technique (SRT), triple scan method (TSM), micro-computed tomography (MCT), and optical coherence tomography (OCT). The five methods showed significant differences in the four regions that were assessed (p < 0.001). The lowest mean values in the marginal, axial, angle, and occlusal regions were obtained with CSM (23.2 µm), TSM (56.3 µm), MCT (84.3 µm), and MCT (102.6 µm), respectively. The marginal fit across methods ranged from 23.2 to 83.4 µm, and the internal fit (axial, angle, and occlusal) ranged from 44.8 to 95.9 µm, 84.3 to 128.6 µm, and 102.6 to 140.5 µm, respectively. The marginal and internal fit showed significant differences depending on the method. Even if the assessed values of the marginal and internal fit are found to be in the clinically allowable range, the differences in the values according to the method should be considered.

Nr: 60
Title:

Insights into Yeast Dynamic Response to Chemotherapeutic Agent Through Time Series Genome-Scale Metabolic Models

Authors:

Muhammed Erkan Karabekmez

Abstract: Organism-specific genome-scale metabolic models (GSMMs) can unveil molecular mechanisms within cells and are commonly used in diverse applications, from synthetic biology, biotechnology, and systems biology to metabolic engineering. There are few studies incorporating time-series transcriptomics in GSMM simulations. Yeast is an easy-to-manipulate model organism for tumor research. Here, a novel approach was proposed to integrate time-series transcriptomics with GSMMs to narrow down the feasible solution space of all possible flux distributions and obtain time-series flux samples. After mapping gene expression to reaction expression through GPR rules, a time-point-specific GSMM was constructed for each time point of the transcriptomics using a modified version of the E-Flux approach (the existing approach could generate infeasible GSMMs, so it was modified to guarantee feasible models). After model construction, 2,000 flux samples for each reaction in each model were generated using the ACHR sampling algorithm of the COBRA Toolbox in Matlab. The flux samples were clustered using machine learning techniques, and functional analysis of the clusters was performed using reaction set enrichment analysis. The Wasserstein distance was used to calculate the distance between flux distributions. For the time-series analysis, a matrix of distances within the time series was calculated for each reaction. Using the norm distance between those matrices, k-medoid clustering was implemented after optimizing the number of clusters. As a case study, the time-series transcriptomic response of yeast cells to a chemotherapeutic reagent, doxorubicin, was mapped onto a yeast GSMM. Eleven flux clusters were obtained with our approach, and pathway dynamics were displayed. Induction of fluxes related to bicarbonate formation and transport, ergosterol and spermidine transport, and ATP production was captured.
Integrating time-series transcriptomics data with GSMMs is a promising approach to reveal pathway dynamics without any kinetic modeling and to detect pathways that cannot be identified through transcriptomics-only analysis.
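For equal-size empirical samples, the 1-D Wasserstein distance between two flux distributions has a simple closed form: the mean absolute difference of the sorted values, as sketched below.

```python
def wasserstein_1d(samples_a, samples_b):
    """1-D Wasserstein distance between two equal-size flux samples: the
    mean absolute difference of the sorted values (the closed form for
    empirical distributions of equal size)."""
    a, b = sorted(samples_a), sorted(samples_b)
    assert len(a) == len(b), "sketch assumes equal sample sizes"
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
```

Computed between the flux samples of a reaction at two time points, this distance feeds the per-reaction distance matrices that are then compared and clustered with k-medoids.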

Nr: 63
Title:

The Relationship Between Frailty Status and Psycho-Social Indices in Korean Adults: A Cohort Study

Authors:

Ji Young Kim

Abstract: Background: Frailty is a state of increased vulnerability to poor resolution of homeostasis after a stressor event, often resulting from cumulative physiological damage over a lifetime. In older adults, frailty is associated with a higher risk of adverse health outcomes, including falls, disabilities, hospitalization, and mortality. This study aims to investigate the connection between frailty and psychosocial factors. Methods: We enrolled 930 participants, aged 54 to 79, from the Aging-Cognition Cohort, a subset of the Wonju-Pyeongchang Arirang Cohort, between 2020 and 2022. We assessed frailty scores according to Fried's criteria. We utilized unsupervised machine learning techniques, such as clustering and dimensionality reduction, to explore the relationship between frailty status and clinical, laboratory, and psychosocial factors. Network-based analysis was employed to pinpoint key psychosocial factors among the PWI, CES, GDS, MSPSS, and UCLA indices. Association analyses were conducted to identify robust links between psychosocial factors and frailty. Results: Our network analysis revealed increasing complexity in the network connections among clinical, laboratory, and psychosocial indices and frailty components within the frail group. Principal component analysis using hub variables with strong interconnections did not distinctly separate frailty status. Hierarchical clustering of these hub variables showed a mild classification effect on the frailty index. Univariate linear regression demonstrated that 21 variables were statistically significant in both continuous and categorical linear models. Among the five psychosocial variables, UCLA emerged as the central hub with the most connections to other indices. The significant relationship between UCLA and frailty persisted even after adjusting for covariates.
Conclusions: This study highlights a robust association between frailty and psychosocial factors in the older Korean population, suggesting the need for a prospective cohort study to elucidate the causal relationship between frailty and psychosocial factors.
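The hierarchical clustering step described above can be sketched on synthetic data. The minimal average-linkage implementation and the two-group profiles below are illustrative stand-ins, not the cohort's actual hub variables or the authors' pipeline (a real analysis would use a library such as scipy.cluster.hierarchy):

```python
import numpy as np

def agglomerative_two_clusters(X):
    """Minimal average-linkage agglomerative clustering down to 2 clusters."""
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > 2:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # Average pairwise distance between members of the two clusters.
                d = np.mean([np.linalg.norm(X[i] - X[j])
                             for i in clusters[a] for j in clusters[b]])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    labels = np.empty(len(X), dtype=int)
    for lab, members in enumerate(clusters):
        labels[members] = lab
    return labels

rng = np.random.default_rng(0)
# Synthetic hub-variable profiles for two hypothetical groups
# (e.g. frail vs. non-frail), five variables per participant.
X = np.vstack([rng.normal(0, 1, (15, 5)), rng.normal(4, 1, (15, 5))])
labels = agglomerative_two_clusters(X)
truth = np.array([0] * 15 + [1] * 15)
acc = max(np.mean(labels == truth), np.mean(labels == 1 - truth))
print(acc)
```

With well-separated synthetic groups the dendrogram cut recovers the grouping; a "mild classification effect" as reported in the abstract corresponds to only partial agreement on real data.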

Nr: 149
Title:

Conjugating Statistical Inference via Machine Learning of Intelligible Rules from Data in the Framework of eXplainable AI with More Traditional Mathematical Modeling Deduction

Authors:

Diego Liberati

Abstract: Logical synthesis (IEEE Trans CaS I, 2000) makes it possible to infer, from binary-coded (bio-)data, understandable rules codifying the underlying process in a kind of piecewise clustering whose partitioning hyperplanes are orthogonal to the salient variables. When a single hyperplane is sufficient in the space of variables, the iterated cascade of Principal Component Analysis and K-means can bisect the orthonormalized space, again orthogonally to the most salient principal components (Intelligent Data Analysis 2007). When data are dynamic, PieceWise Affine regression (Automatica 2003) can identify, with the desired precision, the multivariable time course of the salient variables. Examples on biodata can be found in (IEEE Trans KDE 2002) and (Frontiers in Oncology, 2019), while examples on biosignals appear in (Nonlinear Analysis: Hybrid Systems, 2008 and 2009) and (Annals of Biomedical Engineering 2009). Such a powerful portfolio of quite simple methods can be applied quite generally. In systems biology, together with Galerkin simulation of the dynamics of the identified molecular domains, it was able to forecast a previously unknown mutant of the oncosuppressor SoS1 (Biotechnology Advances), while sensitivity analysis of the salient variables makes it possible to model the role of Tumor Necrosis Factor in inducing either apoptosis or survival, depending on the state of the targeted cell, in the so-called bystander effect in radiotherapy. Two applications are of interest here. 
At a macro bioelectric scale, non-invasive identification of an arrhythmic ectopic focus in the endocardium is approached via piecewise affine regression of virtually simulated potential time courses, a few millimeters apart and repeating beat to beat, in order to match the recorded multichannel EKG map of the torso. This contrasts with the traditional, unsuccessful approach of deconvolving the recorded map on an inverse model of the media, trusting that standard Tikhonov regularization could compensate for both rough modeling approximations and errors, besides ill-conditioning. Remote sensing could be added as a kind of ambulatory monitoring through our ROBO MD, developed within the Innovation 4 Welfare EU framework. At a micro (biochemistry/physical chemistry) scale, biomolecules are not just analyzed, as recalled above, but also synthesized, for instance in drug discovery, by conjugating the proposed intelligible machine learning with the traditional, computationally heavy Schroedinger-equation modeling approach, within the frame of the Centre Européen pour la Computation Atomistique et Moléculaire.
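One stage of the PCA + K-means bisection cascade can be sketched as follows; the synthetic data, the SVD-based PCA, and the plain Lloyd-iteration K-means are illustrative stand-ins for the published method, which iterates the stage on each resulting cluster:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two synthetic groups separated along one direction (the "salient" axis).
X = np.vstack([rng.normal(-2, 1, (50, 3)), rng.normal(2, 1, (50, 3))])

# Step 1: orthonormalize the space with PCA (via SVD of the centred data).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T            # coordinates on the principal components

# Step 2: bisect with 2-means (a few Lloyd iterations). The resulting
# partitioning hyperplane lies orthogonal to the leading principal component.
centers = scores[[0, -1]]     # crude initialization: one point from each end
for _ in range(20):
    d = np.linalg.norm(scores[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([scores[labels == k].mean(axis=0) for k in (0, 1)])

truth = np.array([0] * 50 + [1] * 50)
acc = max(np.mean(labels == truth), np.mean(labels == 1 - truth))
print(acc)
```

In the full cascade, each of the two resulting clusters would be re-orthonormalized and bisected again until no salient split remains.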

Nr: 156
Title:

Exploring the EHR Documentation Practices and Motives of the Elderly Care Physician: An Interview Study in Nursing Homes in the Netherlands

Authors:

Charlotte Albers, Yvonne Jorna, Marike E. de Boer, Karlijn Joling, Martine de Bruijne and Martin Smalbrugge

Abstract: Introduction Elderly care physicians (ECPs) in nursing homes register medical and healthcare information about their patients in the electronic health record (EHR). However, because they do so in an inconsistently standardized and structured manner, it is difficult to reuse the data for other purposes such as quality improvement and scientific research. To improve this, it is necessary to gain insight into the behavior and attitude of ECPs concerning standardized and structured registration in the EHR. This research aims to explore how and why ECPs register in the EHR and which factors influence standardized and structured registration. The findings will be used to formulate recommendations to improve the way in which data are captured by ECPs in Dutch nursing homes. Methods We conducted interviews with ECPs until we reached data saturation. We recruited participants through purposive sampling to ensure sufficient differences in age, gender, healthcare organization and use of different EHRs between participants. Analysis was performed based on the inductive thematic analysis method of Braun & Clarke. Results Reasons for ECPs to register in the EHR are mainly related to direct patient care: retrieving information about patients and transferring information about patients to colleagues. ECPs do not have reuse of data as one of their primary goals when recording data in the EHR. However, they are willing to change this. The most important factors that promote structured and standardized registration are giving it attention within an organization, having clear agreements on how and where to register, and receiving feedback on a physician's own actions and performance based on one's registered data. This increases the motivation to register in a more standardized way. 
Conclusion The main conclusion is that ECPs do not attach great importance to reuse of data, but they are nonetheless willing to make changes in their use of the EHR if the benefits are clear enough. It would be of great importance to make clearer to ECPs how they could benefit from standardized and structured registration. Therefore, one recommendation is to meet the ECPs' needs by involving them through feedback on their medical actions based on the data that they have registered. Another recommendation is to regularly discuss consensus on how to register within the EHR to ensure everyone is well-informed and aligned.

Nr: 194
Title:

Unsupervised Background Detection for Histological Whole Slide Image

Authors:

Kevin François Bouaou and Louise Mathé

Abstract: Introduction: Cancer diagnosis involves pathologists examining tumor tissue on glass slides. With the emergence of digitized whole slide images (WSI), the use of computer vision and machine learning (ML) methods to assist pathologists has increased. However, WSI are principally composed of unused background, which needlessly hinders model training and increases image analysis resources and processing time. Existing unsupervised solutions permit removing background, yet information can be lost and the methods are image-specific with custom parameters. An automatic, unsupervised and generalized method could optimize model training and preserve performance without losing useful information. Methods: We implemented a new method, SuperEntropy, and compared it with two methods from the literature, Otsu [1] and EntropyMasker [2]. In our method, the image is superpixel-segmented [3] and an entropy threshold of 3.7 is used to identify background-only regions. Edge detection and hole filling post-processing are applied to the resulting mask. The methods are evaluated on two datasets of resized H&E-stained WSI (1035x1339 pixels). The first dataset, from the 2022 Gleason Challenge, includes 26 prostatectomy biopsies digitized with 6 scanners (Philips, Akoya, KFBio, Leica, Olympus, and Zeiss). The second is from The Cancer Genome Atlas Program (TCGA), from which 30 WSI of biopsies from 8 different organs (kidney, breast, brain, bladder, ovary, lung, rectum and colon) were selected. Performance is assessed using F-score, sensitivity, and average processing time across both datasets, and statistical differences are analyzed using a Mann-Whitney U test. Results: For the Otsu method, the results are F-score = 0.96 ± 0.05 and sensitivity = 0.94 ± 0.07, with an average time of 0.22 ± 0.04 s. For EntropyMasker, the results are F-score = 0.94 ± 0.13 and sensitivity = 0.95 ± 0.14, with an average time of 4.24 ± 2.27 s. 
Finally, SuperEntropy results are F-score = 0.95 ± 0.09 and sensitivity = 0.97 ± 0.05, with an average time of 3.00 ± 1.36 s. The SuperEntropy F-score shows no significant difference with Otsu, but is significantly higher than EntropyMasker's. In terms of sensitivity, SuperEntropy is significantly higher than both Otsu and EntropyMasker. Conclusion: We aim to create an automatic, unsupervised technique to detect and remove background, reducing the data for ML without information loss. It is essential to lose as little information as possible to maintain model performance. Our new method, based on superpixel extraction and entropy estimation, shows significantly higher sensitivity with a preserved F-score, preventing substantial data loss. Future work may involve applying these techniques to the WSI ML training and inference process to assess the added value. Bibliography: [1] Wang, D., Khosla, A., Gargeya, R., et al. Deep learning for identifying metastatic breast cancer. arXiv, 2016. [2] Song, Y., Cisternino, F., Mekke, J. M., et al. An automatic entropy method to efficiently mask histology whole-slide images. medRxiv, 2022. [3] Pati, P., Jaume, G., Fernandes, L. A., Foncubierta, A., et al. HACT-NET: a hierarchical cell-to-tissue graph neural network for histopathological image classification. arXiv, 2020.
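The entropy-thresholding idea behind the method can be sketched as follows. Square blocks stand in for the SLIC superpixels of the actual pipeline, the synthetic image is illustrative only, and the 3.7 threshold is the value quoted in the abstract:

```python
import numpy as np

def shannon_entropy(patch, bins=32):
    """Shannon entropy (bits) of a grayscale patch's intensity histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def background_mask(img, block=16, threshold=3.7):
    """Mark low-entropy blocks as background (True = background)."""
    h, w = img.shape
    mask = np.zeros_like(img, dtype=bool)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = img[i:i + block, j:j + block]
            mask[i:i + block, j:j + block] = shannon_entropy(patch) < threshold
    return mask

# Synthetic slide: flat bright background with a textured "tissue" square.
rng = np.random.default_rng(2)
img = np.full((64, 64), 0.95)
img[16:48, 16:48] = rng.uniform(0.0, 1.0, (32, 32))   # high-entropy tissue
mask = background_mask(img)
print(mask[0, 0], mask[32, 32])
```

Flat background blocks have near-zero histogram entropy and are masked out, while textured tissue blocks exceed the threshold and are kept.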

Nr: 201
Title:

Comparison Between E.coli (W3110) and Enterobacter Species in Their hyc Operon Locus Using Bioinformatics Analysis

Authors:

Aida Taqi Al Lawati

Abstract: Enterobacter strains are considered more suitable for industrial-scale hydrogen production because they grow more quickly, can utilize a wide range of substrates, and have higher tolerance to dissolved oxygen, hydrogen partial pressure and pH. In order to enhance clean hydrogen production globally in a sustainable manner, bioinformatics analysis was performed to compare the genetic maps at the hyc locus of the two bacterial strains (E. coli and Enterobacter), this locus being considered the main gene cluster used by the mentioned bacteria in hydrogen production. In this study, the hydrogenase-3 gene cluster (hycABCDEFGHI) of Enterobacter was studied further and compared with that of E. coli. Analysis of the NCBI database showed that the hyc loci in both bacterial strains are highly homologous. However, some Enterobacter strains (Enterobacter Crenshaw and Enterobacter pseudoroggenkampii) lack the hyc operon region, suggesting that another hydrogenase may be available in the specified lifestyle of Enterobacter; the isolation source could also be a reason for the absence of hyc in the mentioned strains. A phylogenetic tree was constructed and clearly supports that almost all Enterobacter strains contain the hyc operon; although very few strains lack the hyc locus, they are all connected as one big clade. The physical maps of the hyc locus of both bacteria were very similar, as were the upstream and downstream regions. The non-coding region upstream of hycA and downstream of hycI is shared by both bacteria, although the sequence of the E. coli non-coding shared region was not homologous to that of Enterobacter. Since Enterobacter has a hyc operon very similar to that of E. coli, we conclude that it has the ability to produce hydrogen, and if both bacteria were combined, high quantities of green hydrogen gas might be produced.

Nr: 214
Title:

Sequence-Associated Mechanistic Insights into DNA Fragility

Authors:

Patrick Pflughaupt

Abstract: Genomic insertion and deletion alterations, which occur through the formation of DNA strand breaks, are the second most significant DNA modifications after point mutations. However, unlike point mutations, the various sequence-context dependencies of DNA strand breakpoints and the detailed regional variation of breakage propensities in the genome have not been thoroughly explored through cutting-edge computational means. This computational project aims to understand the genomic sequence dependencies in tissues experiencing different physiological, spontaneous and pathological processes, revealing the sequence-driven commonalities and differences across these processes leading to DNA strand breaks. Our work identified how the DNA sequence context influences the propensity of a breakpoint appearing under different conditions, showing that the sequence context can be separated into three distinct short-, medium-, and long-range effects. Focusing on the short-range effect, we quantified the k-meric breakage propensities and revealed the relationship between the DNA sequence and various types of induced breakages. We also applied sequence-based feature engineering to datasets of chromatin, epigenetic, and structural features of the genome. By combining these sequence determinants, we built a robust machine learning engine that understands different mechanical, biological and physicochemical aspects of DNA fragility. Our high-quality sequence-based features demonstrated significant performance enhancement in machine learning, as compared to the usage of basic triplet counts in a certain range surrounding a breakpoint location. This work is currently being developed and applied towards a generalised DNA fragility model for cancer applications.
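The triplet/k-mer counting underlying the breakage-propensity analysis can be sketched as follows; the window sequences are hypothetical, and the absence of a genomic-background correction is a simplification of the real method:

```python
from collections import Counter

def kmer_propensities(sequences, k=3):
    """Normalized k-mer frequencies across windows centred on breakpoints.
    A toy version of the 'k-meric breakage propensity' idea: in the real
    analysis these counts would be contrasted against a genomic background
    to isolate breakage-specific enrichment."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

# Hypothetical +/-4 bp windows around breakpoint positions.
windows = ["ACGTACGTA", "CGTACGTAC", "TTACGTACG"]
props = kmer_propensities(windows, k=3)
print(round(props["ACG"], 3))
```

Larger k captures the short-range sequence context; medium- and long-range effects require the additional chromatin, epigenetic and structural features mentioned above.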

Nr: 252
Title:

A Comparative Study of in Silico Modeling Approaches for the Ligand-Binding Domain of Zebrafish Androgen Receptor and Empirical Evaluation

Authors:

Md Adnan Karim, Chang Gyun Park, Hyunki Cho, Annmariya Elayanithottathil Sebastian, Chang Seon Ryu, Juyong Yoon and Young Jun Kim

Abstract: The activation of the androgen receptor (AR) by androgens is vital for tissue development, sexual differentiation, and reproductive attributes in zebrafish. However, the understanding of the molecular mechanism of AR activation remains limited due to the unavailability of crystal structures. In this study, we utilized both an ab initio (AlphaFold) and a homology (SWISS-MODEL) structural model of the zebrafish androgen receptor ligand-binding domain (zAR-LBD) to study the binding specificity, affinity and molecular interactions of the endogenous hormones testosterone (T), 11-ketotestosterone (11-KT), and dihydrotestosterone (DHT), both in silico and in vitro. Molecular docking analysis showed that both structures formed the same interactions and similar binding energy patterns with androgens. The SWISS-MODEL molecular dynamics simulation trajectory showed increased fluctuation in RMSD and RMSF analyses, while the AlphaFold model's hydrogen bond patterns correlated with in vitro androgenic activity, following the order DHT > T ≥ 11-KT. Furthermore, comparative analysis of 11 features obtained from molecular docking, molecular dynamics simulation and in vitro androgenic activity analysis also exhibited higher correlation for the AlphaFold-zAR-LBD complexes while preserving the maximum amount of information. Our findings emphasize the value of integrating ab initio structural models with in vitro and in silico analyses for insights into the molecular mechanisms of endocrine-disrupting effects across species.
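The RMSD metric used to compare trajectory fluctuation can be computed as follows; the coordinates are hypothetical, and no structural superposition (e.g. Kabsch alignment) is performed in this sketch, whereas a real trajectory analysis would superpose frames first:

```python
import numpy as np

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between two conformations of the same
    atoms (assumes the structures are already superposed)."""
    diff = coords_a - coords_b
    return float(np.sqrt((diff ** 2).sum(axis=1).mean()))

# Hypothetical C-alpha coordinates for a 4-residue fragment at two frames.
frame0 = np.array([[0.0, 0, 0], [1, 0, 0], [2, 0, 0], [3, 0, 0]])
frame1 = frame0 + np.array([0.0, 0.3, 0.0])   # uniform 0.3 angstrom shift
print(round(rmsd(frame0, frame1), 2))
```

RMSF is the per-atom analogue: the standard deviation of each atom's position over all frames rather than a single whole-structure deviation per frame.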

Nr: 254
Title:

Eigenvector Spatial Filter Effectively Removes Spatial Autocorrelation Inherent in Geographically Distributed Environmental Epidemiological Data

Authors:

Zio Kim, Hyung-Jin Yoon, Su Hwan Kim and Jin-Youp Kim

Abstract: Background Asthma is a disease significantly influenced by environmental factors and subject to inherent spatial autocorrelation (SA). Evaluation of the risk of air pollutants (nitrogen dioxide [NO2], sulfur dioxide, carbon monoxide, and particulate matter) on asthma has primarily been limited to statistical models such as time-series analysis, often neglecting the potential distortion caused by SA. We assessed the effectiveness of Eigenvector Spatial Filtering (ESF) in removing SA compared to conventional techniques. Methods We retrospectively analysed daily inpatient, outpatient and emergency department (ED) hospital visits for asthma between 2015 and 2017 in 72 districts of the Seoul Metropolitan Area. ESF, a widely used spatial method for removing latent SA from regional data, was used to remove SA. The proposed model with ESF (ESF model) was compared with an aggregated model, a general model, and a spatial model with fixed effects. Findings The ESF model successfully reduced SA in 12 outcome-pollutant combinations, showing statistically insignificant autocorrelation in all combinations. However, the spatial model with fixed effects did not yield consistent results for SA removal. The ESF model also corrected every underestimated Cumulative Relative Risk (CRR) of the pollutant-outcome relations at a statistical significance level of 0.05 (for example, NO2-asthma ED CRR: 0.71 [0.69-0.74] to 1.18 [1.15-1.21]; CO-asthma inpatient CRR: 0.71 [0.68-0.73] to 1.06 [1.04-1.09]). Interpretation We found that SA exists in an environmental disease - asthma - and that the ESF model can effectively remove SA in analyses considering various types of hospital visits.
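The construction of candidate eigenvector spatial filters can be sketched as follows; the toy 3x3 rook-adjacency grid stands in for the actual 72-district spatial weights matrix, and the selection rule is simplified:

```python
import numpy as np

# Sketch of eigenvector spatial filtering (ESF): eigenvectors of the
# double-centred spatial weights matrix serve as synthetic spatial covariates.
n_side = 3
n = n_side * n_side
W = np.zeros((n, n))                      # rook adjacency on a 3x3 grid
for r in range(n_side):
    for c in range(n_side):
        i = r * n_side + c
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < n_side and 0 <= cc < n_side:
                W[i, rr * n_side + cc] = 1.0

# Double-centre: M = (I - 11'/n) W (I - 11'/n), then eigendecompose.
C = np.eye(n) - np.ones((n, n)) / n
M = C @ W @ C
eigvals, eigvecs = np.linalg.eigh(M)

# Candidate spatial filters: eigenvectors with large positive eigenvalues
# (strong positive spatial autocorrelation), added as regressors so the
# model residuals are purged of latent SA.
order = np.argsort(eigvals)[::-1]
filters = eigvecs[:, order[:2]]
print(filters.shape)
```

In the full method, the retained eigenvectors enter the regression of hospital visits on pollutant exposure, absorbing the spatial structure that would otherwise bias the CRR estimates.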

Nr: 274
Title:

Use of T2* Mapping for Analysis of Multi-Echo fMRI Data

Authors:

Anezka Kovarova and Michal Mikl

Abstract: The multi-echo fMRI approach represents a paradigm shift, enhancing sensitivity to the Blood Oxygenation Level Dependent (BOLD) signal by capturing multiple echoes within a single imaging sequence. This advanced approach not only refines the spatiotemporal resolution of fMRI but also introduces the intriguing possibility of incorporating T2* mapping principles into every time point of the imaging data. This integration opens possibilities for comprehensive T2* parameter fitting throughout the entire fMRI time series, theoretically enriching the understanding of neurovascular dynamics. This abstract presents a pilot study of the multi-echo multi-band (ME MB) fMRI approach coupled with the principles of T2* mapping, as a synergistic strategy for advancing the field of fast functional imaging. The dataset consists of data from 49 healthy volunteers aged 20-38 without any neurological, psychiatric, or mental disorder. The whole study protocol was approved by the Masaryk University Ethics Committee. The measurements were performed on a Siemens Prisma 3T whole-body MR scanner with a 64-channel head-neck coil. The first step of the acquisition was high-resolution anatomical imaging (MPRAGE, voxel size 1 x 1 x 1 mm, FOV 224 x 224 x 240 mm), after which 7 different BOLD runs with a block design combining visual stimulation and motor activity (pressing buttons) were obtained. The acquisition time of each run was 6 minutes, echo times (TE) were 17, 35 and 52 ms, and voxel size was 3 x 3 x 3 mm. Repetition time (TR), number of scans and flip angle (FA) differed for every run as follows: 1) 3.05 s / 120 / 80°; 2) 3.05 s / 120 / 45°; 3) 0.8 s / 450 / 45°; 4) 0.8 s / 450 / 20°; 5) 0.6 s / 600 / 45°; 6) 0.6 s / 600 / 20°; 7) 0.4 s / 900 / 20°. Acquired data were processed as composite ME model data, i.e., the optimal combination of echoes weighted by the contrast-to-noise ratio, in the standard way in SPM12. The other model was calculated as a T2* estimation of the same data. 
We used GLM models on the single-subject and group levels to verify the activation results. The main focus was on activation level and statistical power, so we evaluated the differences between the multi-echo and T2* models by assessing the number of active voxels, the variance of residuals, percent signal change (PSC) and the power of the t-statistics. We explored both global and local metrics, which were chosen according to the level of activation. The comparison of ME MB data and T2* data was done using several metrics. For the global level of activation, we computed the number of active voxels, where the T2* model provided values similar to the ME model, and in runs 3 and 5 the T2* model yielded significantly higher values than the ME model. The variance of residuals was slightly worse in the T2* model. We also evaluated data in ROIs chosen based on activation and then picked the top 50 voxels from each ROI. When comparing the t-values, the T2* model outperformed the ME model especially in runs 3, 4, 5 and 6, where the difference was statistically significant. Both models provided similar values of PSC except for runs 1, 2, and 7. This pilot comparison of ME MB data and T2* mapping showed promising results in terms of achieving the same or even higher data quality in the observed metrics. We hope that T2* as a quantitative parameter could contribute to the robustness and stability of fast fMRI data processing.
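Per-voxel T2* estimation from the three echoes reduces to a log-linear fit of the mono-exponential decay model S(TE) = S0 * exp(-TE / T2*); the echo times match those quoted above, while the voxel parameters are hypothetical:

```python
import numpy as np

# Mono-exponential decay is linear in log-space, so T2* can be fitted per
# time point by ordinary least squares on log(S) vs TE.
TE = np.array([17.0, 35.0, 52.0])          # echo times in ms, as in the study
t2_true, s0_true = 45.0, 1000.0            # hypothetical voxel parameters
signal = s0_true * np.exp(-TE / t2_true)   # noise-free synthetic signal

# Fit log(S) = log(S0) - TE / T2*  ->  slope = -1 / T2*
slope, intercept = np.polyfit(TE, np.log(signal), 1)
t2_fit = -1.0 / slope
s0_fit = np.exp(intercept)
print(round(t2_fit, 1), round(s0_fit, 1))
```

Repeating this fit at every voxel and every TR yields the T2* time series that the study compares against the CNR-weighted composite ME model.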

Nr: 277
Title:

Electrochemical Biosensor for the Monitoring of Phages of Lactococcus lactis in Milk-Based Samples

Authors:

Franchin Lara, Stefano Bonaldo, Erica Cretaio, Elisabetta Pasqualotto, Matteo Scaramuzza and Alessandro Paccagnella

Abstract: Lactococcus lactis bacteriophage infections in milk prevent proper lactic fermentation, leading to the production of unsaleable low-quality products and great economic losses in the dairy industry. In this work, we present an innovative biosensing approach for the cost-effective detection of L. lactis phages in milk-based samples through electrochemical impedance spectroscopy (EIS). The detection is based on the evident parametric shifts in the charge transfer resistance and in the impedance phase at 100 Hz caused by differences in bacterial proliferation due to phage activity. The EIS results are compared with optical absorbance measurements at 600 nm in order to validate the proposed method. Preliminary experimental tests with filtered milk-based samples confirm the sensor's capability to detect different phage concentrations in milk-based solutions in less than 4 hours. In order to reach a higher sensitivity, we propose a new milk pre-treatment that adds calcium chloride (CaCl2) to the samples. The EIS results with CaCl2-treated milk show enhanced phage activity, which leads to a much larger parametric shift. Lastly, the sensor is tested with CaCl2-treated milk-based samples at different phage concentrations, achieving enhanced performance with a limit of detection of 10^3 PFU/mL. The sensor has thus proved to be a rapid and effective device for detecting different levels of phage contamination in the final application, and is capable of satisfying industry safety requirements, since the minimum contamination level detected in dairy plants is 10^4 PFU/mL.
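The phase readout at 100 Hz can be illustrated with a simplified Randles-type equivalent circuit for the electrode-electrolyte interface; all element values below are assumed for illustration and do not come from the study:

```python
import numpy as np

# Solution resistance Rs in series with charge transfer resistance Rct in
# parallel with a double-layer capacitance Cdl (values illustrative only).
Rs, Rct, Cdl = 100.0, 10_000.0, 1e-6      # ohm, ohm, farad (assumed)

def impedance(f):
    """Complex impedance Z(f) = Rs + Rct / (1 + j*2*pi*f*Rct*Cdl)."""
    omega = 2 * np.pi * f
    return Rs + Rct / (1 + 1j * omega * Rct * Cdl)

z100 = impedance(100.0)
phase_deg = np.degrees(np.angle(z100))
# A change in Rct (e.g., from bacterial proliferation altered by phage
# activity) shifts both |Z| and the phase at 100 Hz, the readout used above.
print(round(float(phase_deg), 1))
```

The computed phase is negative, consistent with the capacitive behaviour of the interface noted for the blood impedance measurements elsewhere in this collection and with standard EIS models.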

Nr: 282
Title:

Playing 3D Puzzle Activates Brain Waves in Elderly Higher Than in Young People

Authors:

DAI KITAGAWA, Shuhei Oba, Yuwa Osada and Takeo Shibata

Abstract: Introduction The number of dementia patients in Japan has increased to 5 million, owing to its super-aging society; elderly people account for 29.1% of the Japanese population. It is important to maintain and improve cognitive function in the elderly. The 3D puzzle (Cuboro; https://cuboro.ch/en/) is an educational toy in which players create their own path inside building blocks; it is widely used as an educational material for programming in Europe. This study aims to examine differences in brain wave responses while playing the 3D puzzle between young and elderly people. In addition, the relationship between brain wave responses and cognitive function test scores on the Mini-Mental State Examination (MMSE) is examined. Methods Fifty young people (healthy 18-24-year-olds) and fifty elderly people (65 years old or older with a score of 24 points or higher on the MMSE) were recruited. Brain waves were measured for a total of 5 minutes: 1 minute before starting (eyes closed), 3 minutes while playing the 3D puzzle, and 1 minute after playing (eyes closed). A linear mixed model was used to compare time-series variations in brain waves every minute. A MindWave Mobile 2 (NeuroSky Co., Ltd.) was used in this study; it measures alpha (low and high), beta (low and high), gamma (low and mid), theta and delta waves, and estimates attention and meditation scores. SPSS Ver. 26 was used for statistical analysis. Results Significant differences for groups, time series, and groups x time series (shape of the line graph) were observed in attention, high-alpha waves, low-gamma waves, low-beta waves, high-beta / low-alpha, high-beta / high-alpha, and (low-beta + high-beta) / (low-alpha + high-alpha). Significant differences for time series and for groups x time series were observed in meditation, low-alpha waves, and low-beta / low-alpha. Significant differences for time series were observed in high-beta waves. 
Significant differences for time series were also seen in the low-beta waves. In the elderly, brain wave activity was compared between people with and without mild cognitive impairment (MMSE score of 27 or less vs. 28 and above). Significant group differences were shown for attention, low-alpha, high-alpha, low-beta, low-gamma, low-beta / low-alpha, low-beta / high-alpha, high-beta / low-alpha, high-beta / high-alpha, and (low-beta + high-beta) / (low-alpha + high-alpha). Discussion Our results revealed that many brain wave activities were higher in the elderly than in young people. Gamma waves are expected to reduce amyloid beta (Cell. 2019 Apr 4;177(2):256-271.e22). Because this study showed higher low-gamma wave activity in the elderly, playing Cuboro is expected to help prevent Alzheimer's disease. In addition, many other brain waves were more activated in healthy elderly people. Furthermore, brain wave activity in those with mild cognitive impairment was lower than in those without. Long-term trials in people with mild cognitive impairment are needed to evaluate dementia prevention.
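Band-power ratios of the kind reported above (e.g. high-beta / high-alpha) can be computed from a periodogram; the signal, sampling rate, and band edges below are synthetic and do not reflect the NeuroSky device's internal processing:

```python
import numpy as np

fs = 256                                   # sampling rate (Hz), assumed
t = np.arange(0, 4, 1 / fs)                # 4 s of signal
rng = np.random.default_rng(3)
eeg = (2.0 * np.sin(2 * np.pi * 11 * t)    # alpha component (11 Hz)
       + 1.0 * np.sin(2 * np.pi * 24 * t)  # beta component (24 Hz)
       + 0.1 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
psd = np.abs(np.fft.rfft(eeg)) ** 2        # simple periodogram

def band_power(lo, hi):
    """Total spectral power in [lo, hi) Hz."""
    sel = (freqs >= lo) & (freqs < hi)
    return float(psd[sel].sum())

alpha = band_power(8, 13)                  # conventional alpha band
beta = band_power(13, 30)                  # conventional beta band
ratio = beta / alpha
print(ratio < 1.0)
```

Here the alpha component dominates by construction, so the beta/alpha ratio falls below 1; group comparisons such as those in the study would be run on ratios computed per minute per participant.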

Nr: 283
Title:

Implantable Bone Conduction Transducer with Improved MRI Compatibility

Authors:

Dong Ho Shin

Abstract: Recently, the Bone Conduction Implant (BCI) has emerged as an attractive option for individuals unable to use conventional hearing aids. Since the external auditory canal remains unaffected, there is no discomfort or sense of occlusion due to ear blockage, and no risk to residual hearing. The BCI transmits sound through bone to the inner ear, with a transducer implanted in the mastoid region behind the ear. However, its applicability is limited for hearing loss with a hearing threshold of 70 dB HL or higher, as the vibrations generated by the transducer are attenuated during propagation through the skull, resulting in a loss of vibration. To address this limitation, recent research on implantable bone conduction hearing aids has focused on increasing the vibration output of the transducer. Most bone conduction transducers utilize coils and permanent magnets, which present challenges during MRI because of image distortion caused by the magnetic field of the permanent magnet, making accurate lesion diagnosis difficult. Therefore, there is a need to develop bone conduction transducers that minimize image distortion in the MRI environment. This study describes the design of a bone conduction transducer that improves the MRI compatibility of bone conduction implants. To reduce the magnitude of image artifacts generated in the MRI environment, the permanent magnet constituting the transducer was given a three-pole structure. After modelling the 1.5 T static magnetic field environment using finite element analysis software, the magnitude of the image artifact generated by the proposed three-pole transducer in the MRI environment was calculated. To confirm that the proposed transducer can reduce artifact magnitude, the same analysis was performed on a transducer with a two-pole structure and the image artifact magnitudes were compared. 
Comparing the analysis results, the artifact magnitude of the three-pole transducer was reduced by more than 50% compared to the two-pole transducer. (This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) NRF-2022R1A2C400123).

Nr: 285
Title:

In-Depth Analysis of Taxonomic Classification Markers of DNA Viruses: From Sequence to Structure

Authors:

Hyeon S. Son, Mikyung Je and Myeongji Cho

Abstract: Virus taxonomy is a challenging task due to the high genetic diversity and wide host range of viruses. Moreover, the acceleration of environmental change and the rapid accumulation of relevant data raise concerns that the current classification system may not adequately cope with the emergence of new viruses. In response, the International Committee on Taxonomy of Viruses (ICTV) is actively exploring multifaceted approaches to effectively classify viruses. In the field of bioinformatics, various clustering techniques based on conserved domains, divergence times, and the distance spaces of individual viruses in the phylogeny have been applied to rationally place unclassified or already classified viruses into more sophisticated classification systems. Generally, viruses are classified into seven groups according to the Baltimore classification system. Each group contains conserved regions known as RNA recognition motifs, which sometimes play a crucial role in the life cycle of viruses. These motifs, such as RNA-dependent RNA polymerases, reverse transcriptases, and the superfamily 3 helicase, are involved in viral genome replication and can have a significant impact on virus survival when mutations occur. While they can serve as markers to some extent, there is ambiguity in that they are not all present in a single group. In this study, we aimed to investigate the utility of the rolling-circle replication endonuclease (RCRE) and the double-jelly-roll capsid protein (DJR-CP) as markers for DNA viruses (ssDNA, dsDNA) at the family level. Biological sequences and structural data for the analysis were obtained from publicly available large-scale repositories. 
Using bioinformatics techniques such as protein structure alignment and the analysis of physicochemical parameters such as molecular weight, grand average of hydropathicity, and instability index, we compared the characteristics expressed in different groups to identify specific similarities and differences between these two markers in DNA viruses.
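The grand average of hydropathicity (GRAVY) mentioned above is simply the mean Kyte-Doolittle hydropathy of a sequence; the peptide below is a toy example, not one of the analyzed capsid or endonuclease proteins:

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KYTE_DOOLITTLE = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}

def gravy(seq):
    """GRAVY score: mean hydropathy over the sequence; positive values
    indicate overall hydrophobic character, negative values hydrophilic."""
    return sum(KYTE_DOOLITTLE[aa] for aa in seq) / len(seq)

# Toy peptide for illustration.
print(round(gravy("MKTAYIAK"), 3))
```

Molecular weight and instability index are computed analogously from per-residue tables, which is why such parameters are convenient for comparing marker groups at scale.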

Nr: 289
Title:

Comprehensive Exploration of Intradialysis Hypotension Prediction via Deep Learning with Therapy Data Management System

Authors:

Siun Kim, Jiwon Ryu, Su Hwan Kim, Myeongju Kim, Sejoong Kim and Hyung-Jin Yoon

Abstract: Intradialysis Hypotension (IDH) is one of the most frequent problems in dialysis, occurring in approximately 8-15% of dialysis sessions. Previous studies using deep learning techniques to predict IDH have differentiated between pre-dialysis (i.e., pre-session) and intra-dialysis (i.e., intra-session) models in terms of the time of prediction. However, there is no in-depth comparison of model performance between these two task formulations, and the lack of consensus on the definition of IDH makes it difficult to compare model performances. In this context, this study aims to develop diverse IDH prediction models and compare their performance, considering different task settings and definitions of IDH. Additionally, time-series data on dialyzer state from the Therapy Data Management System (TDMS) were collected to develop intra-session prediction models. We prospectively collected data from 51 outpatient dialysis patients at Seoul National University Bundang Hospital from July 2022 to October 2023 (5050 sessions). Covariates included patient demographics, lab results, medical history, prescriptions, dialysis-related data, and blood pressure metrics. Three IDH definitions (SBP90, SBP100, and KDOQI) were employed. IDH occurred in 10.8%, 19.5%, and 71.3% of sessions using the SBP90, SBP100, and KDOQI definitions, respectively. Pre-session models achieved AUROCs of 0.770, 0.912, and 0.799, while intra-session models, utilizing TDMS data from 30 minutes prior to blood pressure measurement, yielded AUROCs of 0.797, 0.830, and 0.917 for SBP90, SBP100, and KDOQI, respectively. When comparing pre-/intra-session models in predicting the time of IDH occurrence, intra-session models exhibited average performance improvements of 0.114, 0.252, and 0.134 for SBP90, SBP100, and KDOQI, respectively, based on AUROC. Although TDMS data proved valuable, limitations include the small patient sample and suboptimal pre-session model performance. 
Future plans involve retrospectively collecting data from a larger patient pool to enhance the pre-session model and assess TDMS's ongoing utility in predicting the time of IDH occurrence. This study is sponsored by Ainex Corporation.
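As background for the metric used throughout the abstract above, AUROC can be read as the probability that a randomly chosen positive (IDH) session receives a higher risk score than a randomly chosen negative one. A minimal rank-based sketch, using hypothetical labels and scores rather than the study's data:

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U formulation: the probability that a
    random positive outranks a random negative (ties count as 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A score of 0.5 corresponds to a random model and 1.0 to perfect ranking, which is why the reported intra-session gains of 0.1-0.25 AUROC are substantial.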

Nr: 291
Title:

A New Approach to Evaluate Blood Flow Properties by Measuring the AC Phase Shift Response in Parallel with Blood Viscosity

Authors:

Nadia Mladenova Antonova, Roumen K Zlatev, Rogelio Arturo Ramos, Margarita Stoytcheva and Vasilka Paskova

Abstract: There are various methods for assessing blood flow properties and their changes. One of them is the determination of the blood's electrical properties - electrical conductivity and impedance during flow. A virtual instrument based on the LabVIEW platform was developed to measure the phase shift between the applied AC (alternating current) and the voltage response caused by a blood sample, at 100 mV peak-to-peak AC excitation over the frequency range from 1 Hz to 10 kHz. Based on the developed instrument and software, a new approach was proposed and applied to study changes in the dynamic viscosity and shear stresses of human blood in parallel with the conductivity and the AC/voltage phase shift caused by the blood samples. Preserved human blood samples from healthy subjects, collected in CPD bags (200 ml blood per 63 ml CPD preserving solution) and obtained from the National Center for Clinical and Transfusion Hematology in Sofia, were investigated. The rheological properties were examined with the rotational viscometer Low Shear 300 (ProRheo, Germany). The dependencies of dynamic viscosity and shear stress on shear rate were measured under a trapezium-like flow regime over a shear rate range of 0 s-1 to 50 s-1, with a duration of 6 minutes, at 37 °C. In parallel, the kinetics of blood conductivity and the phase shift over the frequency range from 1 Hz to 10 kHz were evaluated under the above flow conditions. Each sample exhibited non-Newtonian rheological properties. The flow curves were analyzed with the power-law, Bingham, and Casson rheological equations, yielding the characteristic parameters of the samples. The phase shift of the blood samples is negative, which correlates very well with theoretical models and with the equivalent electrical circuit of an electrode-electrolyte interface. 
Combining the AC/voltage phase shift with the rheological parameters, above all the blood viscosity, enhances the understanding and interpretation of hemorheological disturbances in terms of blood circulation. Acknowledgments: The study has been supported by the project КП-06-Н57/14 from 16.11.2021, “Investigation of the hemorheological parameters, the mechanical properties of the blood cells as a basis for mathematical modeling of their role for the blood flow in cerebrovascular, peripheral vascular diseases and Diabetes mellitus type 2”, funded by the Bulgarian National Science Fund.
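Of the three rheological models named above, the Casson equation, sqrt(tau) = sqrt(tau0) + sqrt(eta_c * gamma), becomes linear after a square-root transform, so its parameters (yield stress tau0 and Casson viscosity eta_c) can be estimated by ordinary least squares. A minimal sketch with hypothetical shear-rate/shear-stress values, not the measured samples:

```python
import math

def fit_casson(shear_rates, shear_stresses):
    """Fit the Casson model sqrt(tau) = sqrt(tau0) + sqrt(eta_c)*sqrt(gamma)
    by ordinary least squares on the square-root-transformed data.
    Returns (tau0, eta_c)."""
    xs = [math.sqrt(g) for g in shear_rates]
    ys = [math.sqrt(t) for t in shear_stresses]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    # Undo the square-root transform to recover the physical parameters.
    return intercept ** 2, slope ** 2
```

The power-law and Bingham models can be fitted analogously (log-log and plain linear regression, respectively).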

Nr: 292
Title:

Disease Detection Methods Using Independent Data Analysis of Telemonitoring Vital Data

Authors:

Naoki Kobayashi, Shota Ueki and Satoki Homma

Abstract: [Purpose] Among elderly people with chronic illnesses, it is important to detect deterioration due to acute diseases and to convey this information to doctors. We are exploring a method to detect disease deterioration by analyzing time series of multiple vital signs using independent component analysis (ICA). [Method] Nineteen participants took part in this study, four of whom suffered from acute illnesses during the study period (patients A, B, C, and D). The participants were instructed to measure five vital signs daily: systolic blood pressure, diastolic blood pressure, heart rate, body temperature, and body weight. We performed ICA on the time-series signals and attempted to detect worsening of disease. [Results] The ICA scores computed from the vital data of all 19 participants were plotted. The scores of patient A, who had chronic heart failure, were concentrated in the upper left of the plot, whereas the scatter plot of ICA scores for participants without acute diseases remained normal. The peak of patient B, who was affected by cholelithiasis, also appeared in the upper left and could be clearly detected. Patient C, who had influenza, likewise showed acute changes. However, it was difficult to detect the deterioration of patient D's pneumonia. In the ICA, Component 1 loaded mainly on diastolic blood pressure and pulse rate, and Component 2 on systolic blood pressure. [Conclusion] Using Components 1 and 2 of the ICA plot derived from five vital signs, ICA showed the potential to reveal the characteristics of each patient and acute illness in more detail.
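The core operation described above - extracting independent components from multivariate daily vital-sign series - can be sketched with a minimal two-component FastICA. This is an illustrative implementation under the usual FastICA assumptions (tanh nonlinearity, symmetric decorrelation), not the authors' code, and the mapping of columns to particular vitals is hypothetical:

```python
import numpy as np

def fastica_two(X, n_iter=200, seed=0):
    """Minimal two-component FastICA for illustration.
    X: (n_days, 2) matrix, e.g. daily [diastolic BP, pulse rate] pairs.
    Returns the (n_days, 2) independent-component score series."""
    X = X - X.mean(axis=0)
    # Whiten via eigen-decomposition of the covariance matrix.
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Xw = X @ E @ np.diag(d ** -0.5) @ E.T
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((2, 2))
    for _ in range(n_iter):
        # Fixed-point update: w <- E[x g(w.x)] - E[g'(w.x)] w
        WX = Xw @ W.T
        g, g_prime = np.tanh(WX), 1 - np.tanh(WX) ** 2
        W = (g.T @ Xw) / len(Xw) - np.diag(g_prime.mean(axis=0)) @ W
        # Symmetric decorrelation keeps the unmixing rows orthonormal.
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt
    return Xw @ W.T
```

Plotting the two returned score columns against each other gives the kind of scatter plot the abstract describes, in which a patient's acute episode can stand apart from the normal cluster (up to ICA's inherent sign and ordering ambiguity).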

Nr: 295
Title:

Physio-Tracker: A Camera-Computer Application Supervising Physiotherapeutic Home Exercise Programs

Authors:

Verena Stieve

Abstract: Regular execution of physiotherapeutic exercises is important for the therapeutic outcome, but adherence to home exercise programs is usually very low. Common reasons are a lack of motivation and of knowledge. Technical supervision of the training process can address these problems. We are therefore developing an application that runs on a camera computer, which patients can use at home. The application includes exercise explanations as well as supervision during the exercises: it tracks the patient's movements, determines the joint coordinates of the skeleton, and uses them to evaluate the quality of the movements. The quality of the skeleton detection is essential for the acceptance of the product. Unfortunately, detection quality is poor for frontal movements in existing products, so our research currently focuses on improving tracking for our application.
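One common way joint coordinates from a skeleton tracker are turned into a movement-quality measure is to compute the angle at a joint from three keypoints and compare it against the target range for the exercise. A minimal 2D sketch of that step (hypothetical; not the Physio-Tracker implementation):

```python
import math

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by keypoints a-b-c
    (e.g. hip-knee-ankle coordinates from a skeleton tracker)."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))
```

For example, a squat supervisor could flag a repetition whenever the knee angle fails to pass below a configured threshold.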