We introduce the Adaptive Locked Agnostic Network (ALAN), a concept in which self-supervised visual feature extraction by a large backbone model yields anatomically robust semantic self-segmentation. In the ALAN methodology, this self-supervised training occurs only once, on a large and diverse dataset, after which the backbone is locked and reused for all downstream tasks. We applied the ALAN approach to three publicly available echocardiography datasets and designed two downstream models: one for segmenting a target anatomical region and one for classifying the echocardiographic view.
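To make this pattern concrete, the sketch below shows one way the locked-backbone setup could be wired in PyTorch: a self-supervised backbone (e.g., a ViT-style encoder) is frozen, and two lightweight heads, one for segmentation and one for view classification, are trained on its features. This is a minimal illustration under stated assumptions, not the authors' implementation; the class names (`LockedBackbone`, `SegmentationHead`, `ViewClassifierHead`) and the dummy backbone are hypothetical.

```python
# Minimal sketch of the ALAN pattern: train a self-supervised backbone once,
# lock it, and attach small task-specific heads. All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LockedBackbone(nn.Module):
    """Wraps a pretrained feature extractor and freezes ("locks") its weights."""

    def __init__(self, backbone: nn.Module):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # locked: no gradient updates downstream
        self.backbone.eval()

    @torch.no_grad()
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumed to return patch tokens of shape (B, num_patches, feat_dim).
        return self.backbone(x)


class SegmentationHead(nn.Module):
    """Small head mapping patch tokens to per-pixel class logits."""

    def __init__(self, feat_dim: int, num_classes: int, grid: int, image_size: int):
        super().__init__()
        self.grid = grid
        self.image_size = image_size
        self.proj = nn.Conv2d(feat_dim, num_classes, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        b, n, d = feats.shape
        # Reshape token sequence back into a 2-D feature map, then upsample.
        fmap = feats.transpose(1, 2).reshape(b, d, self.grid, self.grid)
        logits = self.proj(fmap)
        return F.interpolate(logits, size=self.image_size,
                             mode="bilinear", align_corners=False)


class ViewClassifierHead(nn.Module):
    """Small head mapping mean-pooled patch tokens to view logits."""

    def __init__(self, feat_dim: int, num_views: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, num_views)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(feats.mean(dim=1))


if __name__ == "__main__":
    class DummyViT(nn.Module):
        """Stand-in for a self-supervised ViT producing 14x14 patch tokens."""
        def forward(self, x):
            return torch.randn(x.shape[0], 14 * 14, 384)

    locked = LockedBackbone(DummyViT())
    seg = SegmentationHead(feat_dim=384, num_classes=2, grid=14, image_size=224)
    cls = ViewClassifierHead(feat_dim=384, num_views=3)

    x = torch.randn(2, 3, 224, 224)          # batch of echo frames
    feats = locked(x)                         # shared frozen features
    print(seg(feats).shape)                   # torch.Size([2, 2, 224, 224])
    print(cls(feats).shape)                   # torch.Size([2, 3])
```

Because the backbone is frozen, only the small heads require gradients, so each downstream task trains cheaply against the same shared features.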