undercomplete
Undercomplete describes a data representation or neural network architecture in which the latent or hidden layer has fewer dimensions than the input. In autoencoders, an undercomplete autoencoder compresses the input to a bottleneck layer with fewer neurons than the input features, and then reconstructs the original data from this compact representation.
The key idea is to enforce a compact encoding: because the bottleneck cannot store the input exactly, the model must capture the most salient features of the data rather than copying it through unchanged.
Undercomplete representations are contrasted with overcomplete representations, where the latent space has more dimensions than the input. Overcomplete models typically need additional constraints, such as sparsity penalties, to keep them from learning a trivial identity mapping.
Training an undercomplete autoencoder minimizes a reconstruction loss, typically the mean squared error between the input and its reconstruction; the narrow bottleneck itself is what prevents the network from simply copying its inputs.
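As a minimal sketch of this training setup, the following NumPy example builds a one-hidden-layer undercomplete autoencoder (an assumed toy configuration: 8-dimensional inputs, a 3-unit bottleneck, tanh encoder, linear decoder) and trains it by gradient descent on the mean squared reconstruction error:

```python
import numpy as np

# Toy setup (assumed for illustration): 8-dim inputs, 3-unit bottleneck.
rng = np.random.default_rng(0)
n_in, n_hidden = 8, 3              # bottleneck smaller than the input
X = rng.normal(size=(256, n_in))   # synthetic data, 256 samples

W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
lr = 0.05
losses = []

for step in range(500):
    H = np.tanh(X @ W_enc)         # encode into the bottleneck
    X_hat = H @ W_dec              # decode back to input space
    err = X_hat - X
    losses.append(np.mean(err ** 2))   # reconstruction MSE

    # Backpropagate the MSE through decoder and encoder weights.
    grad_out = 2 * err / X.shape[0]
    grad_W_dec = H.T @ grad_out
    grad_H = grad_out @ W_dec.T
    grad_pre = grad_H * (1 - H ** 2)   # tanh derivative
    grad_W_enc = X.T @ grad_pre

    W_dec -= lr * grad_W_dec
    W_enc -= lr * grad_W_enc
```

The reconstruction loss falls during training but cannot reach zero on generic data, since the 3-dimensional code can only retain part of the 8-dimensional input; that residual error is exactly the compression the bottleneck enforces.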
Applications include dimensionality reduction, unsupervised feature learning, and data compression. The approach is most effective when the data lie on or near a low-dimensional manifold, so that a small latent space can capture most of the meaningful variation.