# Does applying robust loss always help?

## Learning a custom classifier for the CIFAR10 dataset based on Variational Autoencoder embeddings

The VAE and VQ-VAE algorithms were tested on the CIFAR10 dataset. VAE turned out to be the better of the two, showing higher performance with all loss functions. Its latent dimension is set to 128, the best of the tested values ([64, 128, 256, 512]); a minimal sketch of such a VAE appears further below.

To start training, install the requirements, change the configs in `configs.py` if necessary, and run `run.py`.

As the experiment, a number of robust loss functions were compared with each other and with the standard MSE loss; the implementations can be found in `blocks.py`.
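As a rough illustration (not the actual `blocks.py` code), robust losses of this kind typically look as follows, assuming a PyTorch implementation; Huber and log-cosh here are common examples standing in for whichever losses the repo actually implements:

```python
# Illustrative robust reconstruction losses, compared against plain MSE.
# These are common robust choices; the repo's actual set lives in blocks.py.
import torch
import torch.nn.functional as F

def mse_loss(recon, target):
    # Standard baseline: quadratic everywhere, so it heavily penalizes outliers.
    return F.mse_loss(recon, target)

def huber_loss(recon, target, delta=1.0):
    # Quadratic near zero, linear in the tails: less sensitive to outliers.
    return F.huber_loss(recon, target, delta=delta)

def log_cosh_loss(recon, target):
    # Smooth compromise between MSE and MAE, another common robust option.
    return torch.mean(torch.log(torch.cosh(recon - target)))
```

Compared to MSE, both alternatives grow only linearly for large residuals, which is what makes them robust to outlier pixels.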

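For context, here is a minimal PyTorch sketch of a VAE with the chosen latent dimension of 128, whose reconstruction term accepts any of the losses above; the layer sizes are illustrative assumptions, not the repo's actual architecture:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, latent_dim=128):  # 128 was the best of [64, 128, 256, 512]
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(64 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 64 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),  # 8x8 -> 16x16
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 16x16 -> 32x32
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(self.fc_dec(z)), mu, logvar

def vae_loss(recon, x, mu, logvar, recon_loss_fn):
    # Swappable reconstruction term (MSE or a robust loss) plus KL divergence.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss_fn(recon, x) + kl
```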
### For comparison, I visualized the hidden representations of the validation dataset with Multidimensional Scaling
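A sketch of how such a plot can be produced with scikit-learn, assuming `latents` holds the encoder means for the validation set and `labels` the CIFAR10 classes (both names are illustrative):

```python
# Sketch: embed validation-set latent vectors into 2-D with MDS and plot
# them colored by class label.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
latents = rng.normal(size=(500, 128))   # placeholder for real encoder outputs
labels = rng.integers(0, 10, size=500)  # placeholder for real class labels

coords = MDS(n_components=2, random_state=0).fit_transform(latents)

plt.figure(figsize=(6, 6))
scatter = plt.scatter(coords[:, 0], coords[:, 1], c=labels, cmap="tab10", s=8)
plt.colorbar(scatter, label="class")
plt.title("MDS of validation latents")
plt.show()
```

MDS tries to preserve pairwise distances, so well-separated class clusters in the plot suggest a latent space that a downstream classifier can exploit.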