Abstract
We present an object relighting system that allows an artist to select an object from an image and insert it into a target scene. Through simple interactions, the system can adjust the illumination on the inserted object so that it appears natural in the scene. To support image-based relighting, we build an object model from the image and propose a perceptually-inspired approximate shading model for relighting. It decomposes the shading field into (a) a rough shape term that can be reshaded, (b) a parametric shading detail that encodes features missing from the first term, and (c) a geometric detail term that captures fine-scale material properties. With this decomposition, the shading model combines 3D rendering and image-based composition and allows more flexible compositing than image-based methods. Quantitative evaluation and a set of user studies suggest our method is a promising alternative to existing methods of object insertion.
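As an illustration of how the decomposition described above could be recombined at relighting time, the sketch below recomposes the three terms into a shading field and composites the relit object into a target image. It is a minimal sketch under stated assumptions: the function names, the intrinsic split of the object image into albedo and shading, and the elementwise-product operator used to recombine the terms are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def recombine_shading(reshaded_coarse, parametric_detail, geometric_detail):
    """Recombine the three terms of the decomposition.

    reshaded_coarse  : shading of the rough shape term (a), re-rendered under
                       the target scene's illumination
    parametric_detail: parametric shading-detail layer (b)
    geometric_detail : fine-scale geometric/material detail layer (c)

    The elementwise product is an assumed composition operator, used here
    only to illustrate how the terms could be combined.
    """
    return reshaded_coarse * parametric_detail * geometric_detail

def insert_object(albedo, shading, alpha, target_image):
    """Intrinsic recomposition of the relit object (albedo * shading) and
    standard alpha compositing into the target scene (assumed pipeline)."""
    relit = albedo * shading
    return alpha * relit + (1.0 - alpha) * target_image
```

In this sketch all inputs are H x W x 3 arrays in linear RGB; in practice the re-shaded coarse term would come from rendering the recovered rough shape under the target scene's illumination.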
Acknowledgements
DAF is supported in part by Division of Information and Intelligent Systems (US) (IIS 09-16014), Division of Information and Intelligent Systems (IIS-1421521) and Office of Naval Research (N00014-10-10934). ZL is supported in part by NSFC Grant No. 61602406, ZJNSF Grant No. Q15F020006 and a special fund from Alibaba – Zhejiang University Joint Institute of Frontier Technologies.
Author information
Authors and Affiliations
College of Computer Science, Zhejiang University, Hangzhou, China
Zicheng Liao & Hongyi Zhang
Department of Computer Science, University of Illinois at Urbana-Champaign, Champaign, USA
Zicheng Liao, Kevin Karsch & David Forsyth
Corresponding author
Correspondence to Zicheng Liao.
Additional information
Communicated by Zhouchen Lin.
About this article
Cite this article
Liao, Z., Karsch, K., Zhang, H., et al. An Approximate Shading Model with Detail Decomposition for Object Relighting. Int J Comput Vis 127, 22–37 (2019). https://doi.org/10.1007/s11263-018-1090-6