CV

Quantifying Inconsistencies in the Hamburg Sign Language Notation System

This work thoroughly analyzes the HamNoSys labels provided by the maintainers of open sign language corpora in five sign languages. It examines the challenges encountered in labeling video data and investigates the consistency and objectivity of HamNoSys-based labels for the purpose of training machine learning models.
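
One common way to quantify this kind of label (in)consistency is inter-annotator agreement. The snippet below is a hypothetical Python sketch using Cohen's kappa on toy labels from two imagined annotators; it illustrates the general idea rather than the exact methodology of the paper.

    # Hypothetical illustration: agreement between two annotators labeling the same clips.
    from sklearn.metrics import cohen_kappa_score

    annotator_a = ["flat_hand", "fist", "flat_hand", "index", "fist"]
    annotator_b = ["flat_hand", "fist", "index", "index", "flat_hand"]

    kappa = cohen_kappa_score(annotator_a, annotator_b)
    print(f"Cohen's kappa: {kappa:.2f}")  # 1.0 = perfect agreement, around 0 = chance level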

Debugging my STEM journey

Sylwia will talk about her STEM career, starting from choosing technical studies and ending at her current position at AstraZeneca. Her speech will cover her academic and professional career, together with volunteering activities: participating in open-source AI4Good projects and serving as a mentor for kids.

The Non-Profit HearAI Project - A Case Study of the Hamburg Notation System for Sign Languages

Sylwia will talk about the advantages and limitations of sign language corpora for sign language recognition via deep learning-based methods. Her speech will cover the challenges posed by the diversity of sign languages across nationalities and insights from the research conducted by the non-profit HearAI project.

Towards explainable motion prediction using heterogeneous graph representations

We proposed the Explainable Heterogeneous Graph-based Policy (XHGP), a new model based on a heterogeneous graph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. We provided a detailed explainability analysis, a first step towards more transparent and reliable motion prediction systems, which is important from the perspective of users, developers, and regulatory agencies.
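
As a rough, hypothetical Python sketch of the object-level attention idea mentioned above (toy dimensions, plain PyTorch, not the actual XHGP implementation): the attention weights over a target agent's neighbours are exactly the quantities that can later be inspected for explainability.

    import torch
    import torch.nn.functional as F

    d = 16
    target = torch.randn(1, d)       # encoded target agent
    neighbours = torch.randn(5, d)   # encoded surrounding objects (agents, lanes, ...)

    W_q, W_k, W_v = (torch.nn.Linear(d, d) for _ in range(3))
    q, k, v = W_q(target), W_k(neighbours), W_v(neighbours)

    attn = F.softmax(q @ k.T / d ** 0.5, dim=-1)  # object-level attention weights, shape (1, 5)
    context = attn @ v                            # aggregated interaction feature
    print(attn)                                   # inspecting these weights supports explanations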

Leveraging Synthetic Data for Skin Lesion Analysis

This research delves into the application of unconditional and conditional Generative Adversarial Networks (GANs) in both centralized and decentralized settings. The centralized approach replicates studies on a large but imbalanced skin lesion dataset, while the decentralized approach emulates a more realistic hospital scenario with data from three institutions. We meticulously assess the models' performance in terms of fidelity, diversity, training speed, and the predictive capabilities of classifiers trained on synthetic data. Moreover, we examine the explainability of the models, focusing on both global and local features. Crucially, we validate the authenticity of the generated samples by calculating the distance between real images and their respective projections in the latent space, addressing a key concern in such applications.
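
The latent-projection check can be sketched as an optimisation of a latent code so that the generator reproduces a given real image; the reconstruction error then serves as the distance. The Python sketch below is a hedged illustration with placeholder names, not the exact procedure used in the study.

    import torch

    def project(generator, real_image, latent_dim=128, steps=200, lr=0.05):
        """Optimise a latent code z so that generator(z) approximates real_image."""
        z = torch.randn(1, latent_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(generator(z), real_image)
            loss.backward()
            opt.step()
        # a small final distance suggests the generator can reproduce (or may have memorised) the sample
        return z.detach(), loss.item()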

Unlocking the Heart Using Adaptive Locked Agnostic Networks

We introduce the Adaptive Locked Agnostic Network (ALAN), a concept involving self-supervised visual feature extraction using a large backbone model to produce anatomically robust semantic self-segmentation. In the ALAN methodology, this self-supervised training occurs only once on a large and diverse dataset. We applied the ALAN approach to three publicly available echocardiography datasets and designed two downstream models, one for segmenting a target anatomical region, and a second for echocardiogram view classification.
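
A minimal Python sketch of the "locked backbone plus light task heads" pattern described above (stand-in architecture and toy shapes, assumed rather than taken from the paper): the backbone is frozen after self-supervised training, and small heads are trained per downstream task.

    import torch
    import torch.nn as nn

    backbone = nn.Sequential(                  # stand-in for the large self-supervised model
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    )
    for p in backbone.parameters():            # "locked": never updated downstream
        p.requires_grad = False

    seg_head = nn.Conv2d(64, 2, 1)             # per-pixel head: target region vs. background
    view_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 4))

    x = torch.randn(8, 1, 112, 112)            # a toy batch of echocardiogram frames
    features = backbone(x)
    mask_logits = seg_head(features)           # (8, 2, 112, 112) segmentation logits
    view_logits = view_head(features)          # (8, 4) view-classification logits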

Assessing GAN-Based Generative Modeling on Skin Lesions Images

We evaluated the models' performance in terms of fidelity, diversity, training speed, and the predictive ability of classifiers trained on the generated synthetic data. In addition, we provided explainability through exploration of the latent space and embedding projections, focusing on both global and local explanations.
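
The embedding-projection part can be illustrated with a short Python sketch: placeholder embeddings of real and synthetic samples are projected to 2-D with t-SNE and compared visually. The names and data here are assumptions for illustration, not the study's actual setup.

    import numpy as np
    from sklearn.manifold import TSNE
    import matplotlib.pyplot as plt

    real_emb = np.random.randn(200, 512)          # placeholder embeddings of real lesions
    synth_emb = np.random.randn(200, 512) + 0.5   # placeholder embeddings of GAN samples

    proj = TSNE(n_components=2, perplexity=30).fit_transform(np.vstack([real_emb, synth_emb]))

    plt.scatter(*proj[:200].T, s=5, label="real")
    plt.scatter(*proj[200:].T, s=5, label="synthetic")
    plt.legend(); plt.title("Real vs. synthetic samples in embedding space")
    plt.show()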

Towards trustworthy multi-modal motion prediction: Holistic evaluation and interpretability of outputs

We aim to advance towards the design of trustworthy motion prediction systems, based on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs.

Overcoming Barriers to Data-Driven Workflows in Healthcare and Mobility

This work explores the usability of synthetic data for training visual deep learning models in real-world applications.