RespAI-HealthInfo 2026 Abstracts


Area 1 - RespAI-HealthInfo

Full Papers
Paper Nr: 7
Title:

Chronic Wound Assessment Solution with Secure Communication and Explainability as Key Enablers

Authors:

Andreea Ancuta Corici, Kalpana Chaudhary, Benny Häusler, Martin Hocquel-Hans, Nils Lahmann, Hemant Zope, Marius Corici, Mareike Tabea Jansen and Simone Kuntz

Abstract: In the context of chronic wounds, which require constant monitoring, close collaboration between the nursing and technical teams is a driver for improvement. Leveraging AI and visual computing technologies when assessing wounds can lead to higher accuracy; at the same time, the results have to be validated by nurses to ensure high-quality data storage and processing. Explainability is thus essential both for gaining nurses' trust in the new technology and for ensuring high adoption. In this paper, the turning points and results in the design, implementation, and evaluation of a solution for chronic wound assessment using 3D scans and 3D wound model analysis are presented, with secure data exchange and AI explainability as key enablers.

Paper Nr: 8
Title:

Using Responsible AI for Prediction Applied to a Brazilian Dataset on Sepsis

Authors:

Thiago Q. Oliveira, Leandro A. Carvalho, Flávio R. C. Sousa and João B. F. Filho

Abstract: Machine learning models for clinical prediction must be not only accurate but also fair, transparent, and reliable, so that physicians feel confident in their decision-making process. Responsible AI practices are crucial for predicting sepsis, a disease associated with high mortality rates and long hospital stays, and for which few real-world datasets are available. This study used a novel dataset containing real-world information on sepsis from a Brazilian public hospital to predict patient mortality and length of stay (LOS), applying the principles of responsible AI, such as explainability, transparency, and privacy, while addressing a significant gap in AI research in healthcare for languages other than English. The results demonstrated robust predictive performance, with Extreme Gradient Boosting (XGBoost) achieving an Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.959 for mortality prediction and Random Forest attaining an AUROC of 0.846 for length of stay. Furthermore, explainability analysis validated clinical coherence, identifying septic shock and noradrenaline as the main predictors.
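As a reminder of what the reported AUROC values measure, the metric equals the probability that a randomly chosen positive case receives a higher predicted risk than a randomly chosen negative case. A minimal pure-Python sketch (the labels and scores below are invented for illustration, not from the paper's dataset):

```python
# AUROC via the rank-sum (Mann-Whitney U) formulation:
# P(score of a random positive > score of a random negative),
# with ties counted as 1/2.

def auroc(labels, scores):
    """Compute AUROC for binary labels (1 = event, 0 = no event)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical predicted mortality risks for six patients:
y_true = [1, 1, 1, 0, 0, 0]
y_score = [0.92, 0.80, 0.55, 0.60, 0.30, 0.10]
print(auroc(y_true, y_score))  # 8 of 9 positive-negative pairs ranked correctly
```

In practice one would use an established implementation such as scikit-learn's `roc_auc_score`; the sketch above only makes the metric's definition concrete.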

Short Papers
Paper Nr: 9
Title:

Assessing the Trustworthiness of AI-Generated Online Health Information: Toward a Socio-Technical Framework and System

Authors:

Tahir Hameed, Andrew Tollison and Lisa Perks

Abstract: AI-generated online health information (AI-OHI) systems, including conversational agents and virtual doctors, have transformed how the public seeks health information. AI-OHI systems address limitations related to information overload, personalization, and interaction, but they also introduce hallucinations, uneven information quality, and bias. These issues undermine the trustworthiness of the information, with dire consequences for patient safety. Public trust in AI-OHI is shaped by multiple social and technical factors, of which trustworthiness is a primary driver. This work-in-progress paper presents an assessment approach for evaluating the trustworthiness of AI-OHI as a core component of a broader socio-technical framework and system for AI-OHI management. A narrative systematic review of literature on online health information, AI-driven health systems, and trust in digital information identifies five tenets of AI-OHI trustworthiness: accuracy, safety, clarity, validity, and fairness. A weighted scoring model is developed using established benchmarks and measurement instruments for each dimension. The model produces a standardized trustworthiness score and a tiered classification of AI-OHI content trustworthiness. Information that does not meet acceptable risk thresholds is routed for human-in-the-loop review. This approach provides a transparent, auditable, and scalable mechanism for assessing AI-OHI trustworthiness prior to dissemination, supporting patient safety and responsible AI deployment. Future work will calibrate model weights through hybrid approaches and implement the model within the AI-driven socio-technical platform.
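The weighted scoring model with tiered routing described above can be sketched as follows. The five tenet names come from the abstract; the weights, tier thresholds, and tier labels are placeholder assumptions, not the paper's calibrated values:

```python
# Sketch of a weighted trustworthiness score with tiered routing.
# Each dimension score is assumed to be normalized to [0, 1].
WEIGHTS = {"accuracy": 0.30, "safety": 0.25, "clarity": 0.15,
           "validity": 0.15, "fairness": 0.15}

def trustworthiness(scores):
    """Weighted sum of per-dimension scores, each in [0, 1]."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def classify(total):
    """Tiered decision; content below threshold goes to human review."""
    if total >= 0.80:
        return "publish"
    if total >= 0.60:
        return "publish-with-caveats"
    return "human-in-the-loop review"

sample = {"accuracy": 0.9, "safety": 0.85, "clarity": 0.8,
          "validity": 0.7, "fairness": 0.75}
total = trustworthiness(sample)  # 0.82 for this hypothetical content item
print(classify(total))
```

The point of the sketch is the routing logic: any item falling below the acceptable-risk threshold is never auto-published but queued for human review, matching the human-in-the-loop step in the abstract.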

Paper Nr: 10
Title:

Designing Guardrails to Align LLM-Generated Social Scenarios with the Needs of Autistic Learners

Authors:

Mahmoud Elbattah and Federica Cilia

Abstract: Large Language Models (LLMs) offer new opportunities for generating flexible social interaction scenarios within digital tools that support health and wellbeing. However, their use with autistic learners requires careful attention to safety, predictability, and appropriateness. This paper explores the use of explicit guardrails to guide and constrain LLM-generated social scenarios. Our approach relies on explicitly defined learning objectives, structured prompt templates, and scenario-specific constraints to shape model outputs in low-stakes educational settings. We illustrate this design through a small set of everyday scenarios situated in academic and social environments, including library interactions, online study groups, and basic emotion recognition tasks. Rather than aiming for clinical assessment or behavioural diagnosis, the scenarios are intended to support exploratory practice of social communication and emotional regulation skills. The paper concludes with a discussion of design considerations, ethical implications, and practical limitations relevant to the use of guardrails when deploying LLMs in supportive digital applications.
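The combination of explicit learning objectives, structured prompt templates, and scenario-specific constraints can be sketched as a template builder that refuses to produce a prompt unless the required guardrails are present. Field names, the constraint set, and the library scenario are illustrative assumptions, not the authors' actual templates:

```python
# Sketch of a structured prompt template with explicit guardrails
# for a generic chat-completion LLM.
TEMPLATE = (
    "Learning objective: {objective}\n"
    "Setting: {setting}\n"
    "Constraints: {constraints}\n"
    "Write a short, predictable social scenario with two speakers. "
    "Avoid surprises, sarcasm, and ambiguous idioms."
)

REQUIRED_GUARDRAILS = {"no sarcasm", "literal language", "calm tone"}

def build_prompt(objective, setting, constraints):
    """Render the template, failing fast if required guardrails are missing."""
    missing = REQUIRED_GUARDRAILS - set(constraints)
    if missing:
        raise ValueError(f"missing guardrails: {sorted(missing)}")
    return TEMPLATE.format(objective=objective, setting=setting,
                           constraints="; ".join(sorted(constraints)))

prompt = build_prompt(
    objective="practise asking a librarian for help",
    setting="university library front desk",
    constraints={"no sarcasm", "literal language", "calm tone"},
)
print(prompt.splitlines()[0])
```

Encoding the guardrails as a hard precondition, rather than free text, is one way to make the "explicitly defined" constraints of the abstract enforceable before any model call is made.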

Paper Nr: 11
Title:

A Systematic Literature Review of Generative AI Approaches for Synthetic Clinical Data Generation: Balancing Realism and Privacy

Authors:

Daniel Bhola and Syed Ahmad Chan Bukhari

Abstract: Healthcare artificial intelligence (AI) development is constrained by stringent data privacy regulations, restrictive data-sharing agreements, and complex patient consent requirements, limiting access to large and diverse clinical datasets needed for robust model development. Generative artificial intelligence offers a promising solution by enabling the creation of synthetic clinical data that preserves statistical and clinical utility while protecting patient privacy. This systematic review evaluates generative AI approaches for balancing data realism and privacy across clinical data modalities. Following PRISMA guidelines, we analyzed studies published between 2018 and 2024 from PubMed, IEEE Xplore, ACM Digital Library, and Web of Science. We reviewed Generative Adversarial Networks (GANs), Variational Autoencoders (VAEs), diffusion models, and transformer-based architectures applied to structured electronic health records, medical imaging, genomic data, clinical time series, and text. Results indicate that the realism–privacy tradeoff is highly context-dependent and varies by data modality and application. GANs and diffusion models perform well for temporal and longitudinal data, while large language models dominate clinical text generation. Despite privacy being cited as a primary objective in 94% of studies, rigorous evaluation of re-identification risk remains limited. This review provides evidence-based guidance for selecting appropriate generative AI methods for clinical applications and highlights the need for standardized evaluation frameworks to ensure privacy-preserving synthetic data generation.
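One simple form of the re-identification risk evaluation the review finds lacking is a nearest-neighbor check: flag any synthetic record that lies suspiciously close to a real record. The distance metric, threshold, and records below are illustrative assumptions, not a method from any reviewed study:

```python
# Toy re-identification risk check for synthetic tabular records:
# flag synthetic rows whose nearest real row is "too close".

def dist(a, b):
    """Normalized L1 distance between equal-length numeric records."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def risky_rows(real, synthetic, threshold=0.05):
    """Indices of synthetic rows nearly identical to some real row."""
    return [i for i, s in enumerate(synthetic)
            if min(dist(s, r) for r in real) < threshold]

real = [(0.2, 0.5, 0.9), (0.8, 0.1, 0.4)]
synthetic = [(0.21, 0.49, 0.90),   # near-copy of the first real row
             (0.55, 0.55, 0.55)]   # genuinely novel record
print(risky_rows(real, synthetic))  # only index 0 is flagged
```

Production privacy audits use far stronger tools (membership-inference attacks, differential-privacy accounting), but even this minimal check distinguishes memorized near-copies from genuinely novel synthetic rows.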

Paper Nr: 5
Title:

Semantic Web Knowledge Graphs for Patient-Centred Healthcare: A Systematic Review and Design Framework

Authors:

Chenyang Ding and João Ponciano

Abstract: Knowledge graphs (KGs) show great potential in the field of patient-centred healthcare data analytics by integrating heterogeneous clinical data sources, representing complex associative relationships, and playing an important role in tasks such as personalized therapy, diagnostic support, operational optimization, and drug discovery. Following PRISMA 2020, we identified 72 original studies applying KG-based methods to patient-centred healthcare data. Due to space constraints, this short paper reports aggregate patterns across the full set and presents detailed comparative illustrations using a representative subset of 28 studies spanning the major application domains and methodological paradigms. Given the heterogeneity of tasks, datasets, and assessment metrics, we used an interpretability-scalability perspective for qualitative synthesis. The research focuses on four core areas: patient similarity and classification, diagnostic support, personalized treatment and medication reuse, and operational insights such as infection tracking. Ontology-based knowledge graphs enable semantic traceability and guideline-aligned representations, embedding-based approaches enhance high-dimensional network scalability, pipelines driven by natural language processing extract structured knowledge from clinical text, and hybrid systems provide a more balanced direction. Key translational priorities include standardized reporting and benchmarking, privacy preservation and federated knowledge graph construction, and incremental knowledge graph updates that support deployable, auditable decision support.