Authors: Keisuke Toida¹; Naoki Kato²; Osamu Segawa²; Takeshi Nakamura² and Kazuhiro Hotta¹
Affiliations: ¹Meijo University, 1-501 Shiogamaguchi, Tempaku-ku, Nagoya 468-8502, Japan; ²Chubu Electric Power Co., Inc., 1-1 Higashishin-cho, Higashi-ku, Nagoya 461-8680, Japan
Keyword(s): Variational Autoencoder, Homography Transformation, Unsupervised Learning.
Abstract: We propose Homography VAE, a novel architecture that combines Variational AutoEncoders with Homography transformation for unsupervised standardized-view image reconstruction. By incorporating coordinate transformation into the VAE framework, our model decomposes the latent space into feature and transformation components, enabling the generation of consistent standardized views from multi-viewpoint images without explicit supervision. The effectiveness of our approach is demonstrated through experiments on the MNIST and GRID datasets, where standardized reconstructions show significantly improved consistency across all evaluation metrics. For the MNIST dataset, the cosine similarity among standardized views reaches 0.66, while the original and transformed views show 0.29 and 0.37, respectively. The number of PCA components required to explain 95% of the variance decreases from 193.5 to 33.2, indicating more consistent representations. Even more pronounced improvements are observed on the GRID dataset, where the standardized views achieve a cosine similarity of 0.92 and require only 7 PCA components, compared to 167 for the original images. Furthermore, the first principal component of the standardized views explains 71% of the total variance, suggesting highly consistent geometric patterns. These results validate that Homography VAE successfully learns to generate consistent standardized-view representations from various viewpoints without requiring ground-truth Homography matrices.
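The two consistency metrics quoted above, mean pairwise cosine similarity and the number of PCA components needed to explain 95% of the variance, can be reproduced with standard tooling. The sketch below is one plausible reading, assuming images are flattened before comparison; the paper's exact preprocessing may differ, and `consistency_metrics` and its inputs are illustrative.

```python
# Hedged sketch of the abstract's consistency metrics: mean cosine
# similarity over all distinct image pairs, and the smallest number of PCA
# components whose cumulative explained variance reaches 95%.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

def consistency_metrics(images, var_threshold=0.95):
    X = images.reshape(len(images), -1).astype(np.float64)
    # Mean cosine similarity over off-diagonal (distinct) pairs.
    sim = cosine_similarity(X)
    n = len(X)
    mean_sim = (sim.sum() - n) / (n * (n - 1))
    # First index where the cumulative explained-variance ratio crosses
    # the threshold, converted to a component count.
    cum = np.cumsum(PCA().fit(X).explained_variance_ratio_)
    n_components = int(np.searchsorted(cum, var_threshold) + 1)
    return mean_sim, n_components

views = np.random.rand(200, 28, 28)   # stand-in for standardized-view outputs
sim, k = consistency_metrics(views)
print(f"mean pairwise cosine similarity: {sim:.2f}, PCA components for 95%: {k}")
```

Higher mean similarity and fewer components both indicate that the standardized reconstructions collapse onto a common geometric pattern, which is the behavior the reported numbers (e.g. 0.92 similarity and 7 components on GRID) are used to demonstrate.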