We introduced the Annotated Germs for Automated Recognition (AGAR) dataset, an image database of microbial colonies cultured on agar plates. It contains 18,000 photos of five different microorganisms as single or mixed cultures, taken under diverse lighting conditions with two different cameras.
We employed self-supervised learning for automated semantic segmentation of echocardiogram sequences on open datasets like EchoNet-Dynamic and CAMUS. Our approach effectively segments the left ventricle by identifying and aggregating anatomically relevant subregions across cardiac phases.
Welcome to the “International Workshop on Promoting AI-Supported Eco-Activism in the Western Balkans”. This workshop aims to bring together researchers, activists, policymakers, and practitioners from the region to explore innovative ways of utilizing artificial intelligence (AI) to enhance environmental activism.
This work thoroughly analyzes the HamNoSys labels provided by the maintainers of open sign language corpora in five sign languages, examining the challenges encountered in labeling video data and investigating the consistency and objectivity of HamNoSys-based labels for training machine learning models.
Sylwia will talk about her STEM career, from choosing technical studies to her current position at AstraZeneca. Her speech will cover her academic and professional career, together with volunteering activities: participating in open-source AI4Good projects and serving as a mentor for kids.
Sylwia will talk about the advantages and limitations of sign language corpora for sign language recognition with deep learning-based methods. Her speech will cover the challenges posed by the diversity of sign languages across nationalities and insights from research conducted by the non-profit HearAI project.
The purpose of the workshop is to facilitate a comprehensive understanding of multimodal data science applications within the medical domain. The code provided with the classes supports the delivery of a cutting-edge workshop introducing researchers to this rapidly evolving field. The classes were designed for AstraZeneca employees.
We proposed a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. We provided a detailed explainability analysis, a first step towards more transparent and reliable motion prediction systems, which is important from the perspective of users, developers, and regulatory agencies.
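The two attention levels mentioned above can be illustrated with a minimal sketch: first, neighbors of the same type (e.g. vehicles, pedestrians, lanes) are aggregated with object-level attention against the target agent's embedding; then the per-type summaries are combined with type-level attention. All names, dimensions, and random features below are hypothetical stand-ins, not the actual XHGP architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical heterogeneous neighborhood of a target vehicle, grouped by node type.
rng = np.random.default_rng(0)
d = 8
neighbors = {
    "vehicle":    rng.normal(size=(3, d)),
    "pedestrian": rng.normal(size=(2, d)),
    "lane":       rng.normal(size=(4, d)),
}
target = rng.normal(size=d)  # embedding of the agent whose motion is predicted

# Object-level attention: weight same-type neighbors by similarity to the target.
type_summaries = {}
for ntype, feats in neighbors.items():
    weights = softmax(feats @ target)          # one weight per neighbor
    type_summaries[ntype] = weights @ feats    # weighted sum -> one vector per type

# Type-level attention: weight the per-type summaries against the target.
types = list(type_summaries)
summaries = np.stack([type_summaries[t] for t in types])
type_weights = softmax(summaries @ target)
context = type_weights @ summaries             # final interaction context vector

print({t: round(float(w), 3) for t, w in zip(types, type_weights)})
```

The type-level weights double as a coarse explanation: they indicate which class of scene element the model attended to most for this prediction.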
This research delves into the application of unconditional and conditional Generative Adversarial Networks (GANs) in both centralized and decentralized settings. The centralized approach replicates studies on a large but imbalanced skin lesion dataset, while the decentralized approach emulates a more realistic hospital scenario with data from three institutions. We meticulously assess the models' performance in terms of fidelity, diversity, training speed, and the predictive capabilities of classifiers trained on synthetic data. Moreover, we examine the explainability of the models, focusing on both global and local features. Crucially, we validate the authenticity of the generated samples by calculating the distance between real images and their respective projections in the latent space, addressing a key concern in such applications.
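The authenticity check described above can be sketched in miniature: project a real image into the generator's latent space, reconstruct it, and measure the distance between the image and its reconstruction. A small distance indicates the sample lies on the generator's manifold; an unusually small distance for a generated sample can flag memorization of training data. The tiny linear "generator" below is a hypothetical stand-in for a trained GAN, and the least-squares projection replaces the iterative latent optimization a real GAN would require.

```python
import numpy as np

rng = np.random.default_rng(42)
latent_dim, img_dim = 4, 16
W = rng.normal(size=(img_dim, latent_dim))  # stand-in linear "generator" weights

def generate(z):
    # Stand-in for G(z); a real GAN generator is a deep nonlinear network.
    return W @ z

real = rng.normal(size=img_dim)  # stand-in for a flattened real image

# Latent projection: find z* minimizing ||G(z) - real||. For this linear
# stand-in, ordinary least squares gives the exact minimizer; with a real
# GAN this step is gradient-based latent optimization.
z_star, *_ = np.linalg.lstsq(W, real, rcond=None)
reconstruction = generate(z_star)
distance = float(np.linalg.norm(real - reconstruction))
print(f"distance to latent projection: {distance:.3f}")
```

Comparing these distances between real images and generated samples is one simple way to quantify how faithfully the generator covers the data distribution without copying it.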