We introduce the Adaptive Locked Agnostic Network (ALAN), a concept in which a large backbone model is trained with self-supervision to extract visual features that yield anatomically robust semantic self-segmentation. In the ALAN methodology, this self-supervised training occurs only once, on a large and diverse dataset. We applied the ALAN approach to three publicly available echocardiography datasets and designed two downstream models: one for segmenting a target anatomical region and a second for echocardiogram view classification.
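As a rough illustration of the locked-backbone idea (a minimal sketch, not the authors' code), the example below assumes the publicly available DINO ViT-S/16 as the self-supervised backbone and trains only a lightweight segmentation head on its frozen patch features; the head, class count, and all names are illustrative placeholders.

```python
import torch
import torch.nn as nn

# One-time self-supervised backbone (here: public DINO ViT-S/16 via torch.hub),
# then locked: no gradients flow into it when training downstream heads.
backbone = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False  # "locked agnostic" backbone

class SegHead(nn.Module):
    """Lightweight downstream head: per-patch class logits from frozen features."""
    def __init__(self, backbone, dim=384, n_classes=2, patch=16):
        super().__init__()
        self.backbone, self.patch = backbone, patch
        self.proj = nn.Conv2d(dim, n_classes, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        tokens = self.backbone.get_intermediate_layers(x, n=1)[0]  # (B, 1+N, dim)
        patches = tokens[:, 1:, :].transpose(1, 2)                 # drop CLS -> (B, dim, N)
        fmap = patches.reshape(b, -1, h // self.patch, w // self.patch)
        return self.proj(fmap)  # coarse per-patch segmentation logits

# Only the tiny head is trained; the backbone stays task-agnostic.
head = SegHead(backbone)
logits = head(torch.randn(1, 3, 224, 224))  # -> (1, 2, 14, 14)
```

A view-classification head would follow the same pattern, replacing the 1x1 convolution with a small classifier over the CLS token.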
We evaluated the models' performance in terms of fidelity, diversity, speed of training, and the predictive ability of classifiers trained on the generated synthetic data. In addition, we provided explainability through exploration of the latent space and embedding projections, focusing on both global and local explanations.
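One concrete instance of the "predictive ability of classifiers trained on synthetic data" criterion is the standard train-on-synthetic, test-on-real (TSTR) protocol. The sketch below is a hedged, minimal version with random placeholder arrays standing in for generated and held-out real data; it is not the paper's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# TSTR: fit a classifier on synthetic samples, score it on real data.
def tstr_score(X_syn, y_syn, X_real, y_real):
    clf = LogisticRegression(max_iter=1000).fit(X_syn, y_syn)
    return accuracy_score(y_real, clf.predict(X_real))

# Placeholder data only, to make the sketch runnable end to end.
rng = np.random.default_rng(0)
X_syn, y_syn = rng.normal(size=(200, 8)), rng.integers(0, 2, size=200)
X_real, y_real = rng.normal(size=(100, 8)), rng.integers(0, 2, size=100)
print(f"TSTR accuracy: {tstr_score(X_syn, y_syn, X_real, y_real):.2f}")
```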
We aim to advance towards the design of trustworthy motion prediction systems, building on some of the requirements for the design of Trustworthy Artificial Intelligence. The focus is on evaluation criteria, robustness, and interpretability of outputs.
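As one hedged example of a robustness criterion of the kind mentioned (not any specific system's evaluation code), the sketch below perturbs the observed trajectory with Gaussian noise and reports the degradation in average displacement error (ADE); `model`, the shapes, and all parameters are illustrative placeholders.

```python
import numpy as np

def ade(pred, gt):
    """Mean Euclidean distance between predicted and ground-truth positions."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def robustness_gap(model, history, future, sigma=0.1, n_trials=20, seed=0):
    """Increase in ADE when the observed history is perturbed with noise."""
    rng = np.random.default_rng(seed)
    base = ade(model(history), future)
    noisy = [ade(model(history + rng.normal(0.0, sigma, history.shape)), future)
             for _ in range(n_trials)]
    return float(np.mean(noisy) - base)  # larger gap = less robust to noise

# Toy usage with a constant-position "predictor" on placeholder trajectories.
hist, fut = np.zeros((10, 2)), np.zeros((5, 2))
print(robustness_gap(lambda h: np.repeat(h[-1:], 5, axis=0), hist, fut))
```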
The Eye for AI program is uniquely designed to attract high-performing AI talent with a Master's or doctoral degree and 0-3 years of work experience, typically in the fields of data management, data science, and machine learning.
The detect-waste team conducted comprehensive research on the use of Artificial Intelligence in waste detection and classification to fight the world's waste pollution problem.
This work thoroughly analyzes the HamNoSys labels provided by various maintainers of open sign language corpora in five sign languages, in order to examine the challenges encountered in labeling video data and to investigate the consistency and objectivity of HamNoSys-based labels for the purpose of training machine learning models.
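One simple way to quantify labeling consistency of the kind examined here is an inter-annotator agreement statistic. The sketch below uses Cohen's kappa on aligned symbol labels from two annotators; the symbol lists are placeholders, and the paper's actual analysis may differ.

```python
from sklearn.metrics import cohen_kappa_score

# Agreement between two annotators on the same aligned video segments.
# Labels are illustrative HamNoSys handshape symbols, not real corpus data.
annotator_a = ['hamflathand', 'hamfinger2', 'hamflathand', 'hamfist']
annotator_b = ['hamflathand', 'hamfinger2', 'hamfist',     'hamfist']
print(cohen_kappa_score(annotator_a, annotator_b))  # 1.0 = perfect agreement
```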
We proposed a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. We provided a detailed explainability analysis, which is a first step towards more transparent and reliable motion prediction systems, important from the perspective of users, developers, and regulatory agencies.
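As a hedged sketch of the two named ingredients, a heterogeneous scene graph and attention applied per node/edge type, the example below assumes PyTorch Geometric; node and edge type names are illustrative, and this is not the XHGP implementation.

```python
import torch
from torch_geometric.data import HeteroData
from torch_geometric.nn import HeteroConv, GATConv

# A tiny heterogeneous traffic graph: agent and lane nodes with typed edges.
data = HeteroData()
data['agent'].x = torch.randn(4, 16)   # e.g. vehicles, pedestrians
data['lane'].x = torch.randn(6, 16)    # lane-graph segments
data['agent', 'interacts', 'agent'].edge_index = torch.tensor([[0, 1], [1, 0]])
data['agent', 'on', 'lane'].edge_index = torch.tensor([[0, 1, 2], [0, 2, 5]])

# Object-level attention: GAT weights over neighboring nodes within each
# edge type. Type-level aggregation: HeteroConv combines the per-type results.
conv = HeteroConv({
    ('agent', 'interacts', 'agent'): GATConv(16, 32, add_self_loops=False),
    ('agent', 'on', 'lane'): GATConv((16, 16), 32, add_self_loops=False),
}, aggr='sum')

out = conv(data.x_dict, data.edge_index_dict)  # updated features per node type
```

In this setup, the learned attention coefficients are exactly the quantities one would inspect for the kind of explainability analysis described above.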