We introduce the Adaptive Locked Agnostic Network (ALAN), a framework in which a large backbone model is trained with self-supervision to extract visual features that yield anatomically robust semantic self-segmentation. In the ALAN methodology, this self-supervised training occurs only once, on a large and diverse dataset; the backbone is then locked and reused. We applied ALAN to three publicly available echocardiography datasets and designed two downstream models: one for segmenting a target anatomical region, and a second for echocardiogram view classification.
This work explored unconditional and conditional GANs to compare how they inherit bias and how their synthetic data influences downstream models, and examined classification models trained on both real and synthetic data using counterfactual bias explanations.
We evaluated the models' performance in terms of fidelity, diversity, training speed, and the predictive ability of classifiers trained on the generated synthetic data. In addition, we provided explainability through exploration of the latent space and embedding projections, focusing on both global and local explanations.
We used generative modelling techniques to generate synthetic data of skin diseases.