Revisiting non-linear PCA with progressively grown autoencoders
By José Lezama
In this talk I will revisit the old problem of non-linear dimensionality reduction with hierarchical representations. That is, representations where the first n components induce the n-dimensional manifold (with some degree of smoothness) that best approximates the data points, as in standard PCA. I will introduce a method that allows the latent dimension of an autoencoder to be grown progressively, without losing the hierarchy condition. Experimental results on real data, in both unsupervised and supervised scenarios, will be shown.
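To give a feel for the hierarchy condition, here is a minimal sketch (not the speaker's actual method) of growing a latent representation one dimension at a time: each new component is fit by gradient descent on the reconstruction residual of the previous ones, so the first n components always form the best n-dimensional approximation found so far. In the linear case shown here this recovers PCA; the talk's contribution extends the idea to non-linear autoencoders. All function and variable names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with decreasing variance along 3 axes,
# so the components have a natural importance ordering.
X = rng.normal(size=(500, 3)) * np.array([3.0, 1.0, 0.3])

def train_component(X_residual, steps=2000, lr=0.01):
    """Fit one latent direction by gradient descent on the
    reconstruction error of a tied-weight linear autoencoder."""
    w = rng.normal(size=X_residual.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        z = X_residual @ w                  # encode: 1-D latent code
        recon = np.outer(z, w)              # decode: rank-1 reconstruction
        # Oja-style gradient step on the reconstruction error
        grad = -2 * (X_residual - recon).T @ z / len(X_residual)
        w -= lr * grad
        w /= np.linalg.norm(w)              # keep the direction unit-norm
    return w

components, errors = [], []
residual = X.copy()
for k in range(3):                          # grow latent dimension 1 -> 3
    w = train_component(residual)
    components.append(w)
    # Deflate: later components only model what earlier ones missed,
    # which is what preserves the hierarchy.
    residual = residual - np.outer(residual @ w, w)
    errors.append(np.mean(residual ** 2))

print(errors)  # reconstruction error shrinks as the latent dimension grows
```

Because each stage only sees the residual of the previous stages, adding a dimension never degrades the earlier components, mirroring the nested structure of PCA.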