The Eye for AI program is uniquely designed to attract high-performing AI talent with a Master's or doctoral degree and 0-3 years of work experience, typically in the fields of data management, data science, and machine learning.
The detect-waste team conducted comprehensive research on Artificial Intelligence usage in waste detection and classification to fight the world's waste pollution problem.
This work thoroughly analyzes the HamNoSys labels provided by various maintainers of open sign language corpora in five sign languages, in order to examine the challenges encountered in labeling video data and to investigate the consistency and objectivity of HamNoSys-based labels for the purpose of training machine learning models.
We evaluated the models' performance in terms of fidelity, diversity, speed of training, and the predictive ability of classifiers trained on the generated synthetic data. In addition, we provided explainability through exploration of the latent space and embedding projections, focused on both global and local explanations.
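The "predictive ability of classifiers trained on the generated synthetic data" is commonly measured with a Train-on-Synthetic, Test-on-Real (TSTR) protocol. The following is a minimal, self-contained sketch of that idea, not the actual pipeline used in the work: the "synthetic" and "real" sets here are toy Gaussian samples standing in for generator output and held-out real data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n):
    # Toy two-class data: two Gaussians centered at -1 and +1.
    x0 = rng.normal(loc=-1.0, scale=1.0, size=(n, 2))
    x1 = rng.normal(loc=+1.0, scale=1.0, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# Stand-ins: in a real evaluation, X_synth comes from the generator
# and X_real is a held-out split of the original dataset.
X_synth, y_synth = make_data(500)
X_real, y_real = make_data(500)

# TSTR: fit on synthetic data, score on real data. High TSTR accuracy
# suggests the synthetic data preserves the predictive structure.
clf = LogisticRegression().fit(X_synth, y_synth)
tstr_acc = accuracy_score(y_real, clf.predict(X_real))
print(f"TSTR accuracy: {tstr_acc:.2f}")
```

A common complement is TRTS (train on real, test on synthetic); a large gap between the two scores can indicate mode collapse or distribution shift in the generated data.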
We proposed a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. We provided a detailed explainability analysis, a first step towards more transparent and reliable motion prediction systems, which is important from the perspective of users, developers, and regulatory agencies.
We identified the gaps in current evaluation methodologies and proposed a more comprehensive and holistic evaluation framework for multi-modal motion prediction in autonomous vehicle systems.
This work explored unconditional and conditional GANs to compare how they inherit bias and how the resulting synthetic data influences downstream models, and examined classification models trained on both real and synthetic data using counterfactual bias explanations.
We compared the performance of three well-known deep learning approaches for object detection on the AGAR dataset, namely two-stage, one-stage, and transformer-based neural networks.