- Zhiyuan Yang ORCID: orcid.org/0000-0003-1738-6096
- Qingfu Zhang ORCID: orcid.org/0000-0003-0786-0671
Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13623)
Included in the following conference series: International Conference on Neural Information Processing (ICONIP)
Abstract
Neural Radiance Fields (NeRF) learn a model for high-quality 3D view reconstruction of a single object. Category-specific representations make it possible to generalize to the reconstruction, and even the generation, of multiple objects. Existing efforts mainly focus on reconstruction performance, including speed and quality. The steerability of the generation process has not been well studied, even though semantic attributes are still embedded in 3D neural representations. Inspired by work on interpreting the underlying factors of GANs, this paper proposes a novel method named EigenGRF that disentangles the latent semantic subspace in an unsupervised manner. By learning a set of eigenbases, we can readily control both the process and the result of object synthesis. Concretely, our method adds a mapping network to NeRF by conditioning it on FiLM-SIREN layers, and then applies a component analysis method to discover steerable latent subspaces. Experiments on both synthetic and real-world datasets show that the proposed method enables powerful, steerable 3D-aware generation.
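The abstract names two ingredients: a mapping network that conditions the NeRF generator through FiLM-SIREN layers, and a component analysis over latent codes that yields an eigenbasis of steerable directions. The following sketch illustrates these two ideas in simplified form; the class names, shapes, and hyperparameters (`FiLMSiren`, `MappingNetwork`, `latent_eigenbasis`, w0 = 30) are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Minimal sketch (not the paper's code): FiLM-SIREN conditioning and
# PCA-based discovery of steerable latent directions.
import torch
import torch.nn as nn


class FiLMSiren(nn.Module):
    """Sinusoidal layer whose frequencies and phases are modulated per sample
    by a conditioning code (FiLM), as in pi-GAN-style NeRF generators."""

    def __init__(self, in_dim, out_dim, w0=30.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.w0 = w0

    def forward(self, x, freq, phase):
        # x:     (B, N, in_dim)  sampled 3D points (plus any encodings)
        # freq:  (B, 1, out_dim) frequency modulation from the mapping network
        # phase: (B, 1, out_dim) phase shift from the mapping network
        return torch.sin(self.w0 * freq * self.linear(x) + phase)


class MappingNetwork(nn.Module):
    """Maps a latent code z to per-layer FiLM parameters (illustrative sizes)."""

    def __init__(self, z_dim=128, hidden=256, n_layers=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 2 * hidden * n_layers),  # freqs and phases for all layers
        )
        self.hidden, self.n_layers = hidden, n_layers

    def forward(self, z):
        params = self.net(z).view(z.shape[0], self.n_layers, 2, self.hidden)
        freqs, phases = params[:, :, 0], params[:, :, 1]
        return freqs.unsqueeze(2), phases.unsqueeze(2)  # each (B, L, 1, hidden)


def latent_eigenbasis(codes, k=8):
    """Unsupervised direction discovery: PCA over sampled latent (or modulation)
    codes; the top-k eigenvectors form an editable eigenbasis."""
    centered = codes - codes.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / (codes.shape[0] - 1)
    eigvals, eigvecs = torch.linalg.eigh(cov)          # ascending order
    return eigvecs[:, -k:].flip(-1), eigvals[-k:].flip(0)


def edit(z, basis, i, alpha):
    """Move a latent code along the i-th eigen-direction with strength alpha."""
    return z + alpha * basis[:, i]
```

In this reading, steerable editing amounts to shifting the latent code along one eigenvector before the mapping network produces the FiLM parameters, so a single direction changes one semantic attribute of the rendered object while the rest of the radiance field stays fixed.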
Author information
Authors and Affiliations
Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong
Zhiyuan Yang & Qingfu Zhang
- Zhiyuan Yang
- Qingfu Zhang
Corresponding author
Correspondence to Zhiyuan Yang.
Editor information
Editors and Affiliations
Indian Institute of Technology Indore, Indore, India
Mohammad Tanveer
Indian Institute of Information Technology - Allahabad, Prayagraj, India
Sonali Agarwal
Kobe University, Kobe, Japan
Seiichi Ozawa
Indian Institute of Technology Patna, Patna, India
Asif Ekbal
University of Innsbruck, Innsbruck, Austria
Adam Jatowt
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Yang, Z., Zhang, Q. (2023). EigenGRF: Layer-Wise Eigen-Learning for Controllable Generative Radiance Fields. In: Tanveer, M., Agarwal, S., Ozawa, S., Ekbal, A., Jatowt, A. (eds) Neural Information Processing. ICONIP 2022. Lecture Notes in Computer Science, vol 13623. Springer, Cham. https://doi.org/10.1007/978-3-031-30105-6_16
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-30104-9
Online ISBN: 978-3-031-30105-6
eBook Packages: Computer Science, Computer Science (R0)