We identified gaps in current evaluation methodologies and proposed a more comprehensive and holistic evaluation framework for multi-modal motion prediction in autonomous vehicle systems.
This work explored unconditional and conditional GANs to compare how they inherit bias and how the resulting synthetic data influenced downstream models, and examined classification models trained on both real and synthetic data using counterfactual bias explanations.
We evaluated the models' performance in terms of fidelity, diversity, training speed, and the predictive ability of classifiers trained on the generated synthetic data. In addition, we provided explainability through exploration of the latent space and embedding projections, focusing on both global and local explanations.
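Assessing the predictive ability of classifiers trained on synthetic data typically follows the train-on-synthetic, test-on-real (TSTR) pattern. The sketch below illustrates that pattern with a simple nearest-centroid classifier standing in for whatever model family was actually used; the function name and classifier choice are assumptions for illustration, not the project's implementation.

```python
import numpy as np

def train_synthetic_test_real(X_syn, y_syn, X_real, y_real):
    """Fit a nearest-centroid classifier on synthetic data and
    report its accuracy on held-out real data (TSTR score)."""
    classes = np.unique(y_syn)
    # One centroid per class, computed from the synthetic samples only.
    centroids = np.stack([X_syn[y_syn == c].mean(axis=0) for c in classes])
    # Assign each real sample to the class of its nearest synthetic centroid.
    dists = np.linalg.norm(X_real[:, None, :] - centroids[None, :, :], axis=2)
    preds = classes[np.argmin(dists, axis=1)]
    return float(np.mean(preds == y_real))
```

A TSTR score close to the accuracy of a classifier trained on real data suggests the generator captured the class-conditional structure of the data.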
We compared the performance of three well-known deep learning approaches for object detection on the AGAR dataset, namely two-stage, one-stage, and transformer-based neural networks.
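Whatever the detector family (two-stage, one-stage, or transformer-based), a fair comparison reduces to matching predicted boxes to ground truth by intersection-over-union (IoU) before computing metrics such as mAP. A minimal IoU sketch, assuming boxes in `(x1, y1, x2, y2)` corner format:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes
    given as (x1, y1, x2, y2) tuples."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is usually counted as a true positive when its IoU with a ground-truth box exceeds a threshold such as 0.5.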
In the HearAI non-profit project, we investigated different multilingual open sign language corpora labeled by linguists in the language-agnostic HAMburg NOtation SYStem (HamNoSys).
In our solution, we proposed computer-friendly numeric multilabels that greatly simplify the structure of the language-agnostic HamNoSys without significant loss of gloss meaning.
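The idea of numeric multilabels can be sketched as mapping each annotated component of a gloss to an index into a small vocabulary, yielding one label per component. The component names and vocabularies below are hypothetical placeholders; the real HamNoSys inventories are much larger and structured differently.

```python
# Hypothetical sub-symbol vocabularies for illustration only.
HANDSHAPE = {"flat": 0, "fist": 1, "index": 2}
LOCATION = {"head": 0, "chest": 1, "neutral": 2}
MOVEMENT = {"up": 0, "down": 1, "circle": 2}

def encode_gloss(handshape, location, movement):
    """Map one annotated gloss to a numeric multilabel vector,
    one integer label per notation component."""
    return [HANDSHAPE[handshape], LOCATION[location], MOVEMENT[movement]]
```

Such fixed-length integer vectors can feed a multi-head classifier directly, one output head per notation component.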