
Extended Co-occurrence HOG with Dense Trajectories for Fine-Grained Activity Recognition

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9007)

Included in the following conference series: ACCV: Asian Conference on Computer Vision

Abstract

In this paper we propose a novel feature descriptor, Extended Co-occurrence HOG (ECoHOG), and integrate it with dense point trajectories, demonstrating its usefulness in fine-grained activity recognition. The feature is inspired by the original Co-occurrence HOG (CoHOG), which is based on histograms of occurrences of pairs of image gradients in the image. Instead of relying only on pure occurrence counts, we accumulate the sum of gradient magnitudes of co-occurring pairs of image gradients. This gives more weight to object boundaries and strengthens the distinction between the moving foreground and the static background. We also couple ECoHOG with dense point trajectories extracted from video sequences using optical flow and demonstrate that the combination is extremely well suited to fine-grained activity recognition. Using our feature we outperform state-of-the-art methods on this task and provide an extensive quantitative evaluation.
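To make the descriptor concrete, below is a minimal Python/NumPy sketch of the magnitude-weighted co-occurrence idea described in the abstract: orientation bins of co-occurring gradient pairs index a 2-D table, and the sum of the two gradient magnitudes is accumulated instead of a simple count. The function name ecohog_sketch, the bin count, the offset set, and the normalisation are illustrative assumptions, not the authors' exact parameters, and the coupling with dense trajectories is not shown.

```python
import numpy as np


def ecohog_sketch(gray, n_bins=8, offsets=((0, 1), (1, 0), (1, 1), (1, -1))):
    """Sketch of a magnitude-weighted co-occurrence-of-gradients descriptor.

    For each pixel and each spatial offset, the orientation bins of the two
    co-occurring gradients index a 2-D co-occurrence table. Instead of
    counting the pair (as in CoHOG), the sum of the two gradient magnitudes
    is accumulated, which is the weighting described in the abstract.
    Bin count, offsets and normalisation are illustrative choices.
    """
    gray = np.asarray(gray, dtype=np.float64)
    gy, gx = np.gradient(gray)                      # gradients along rows, cols
    mag = np.hypot(gx, gy)                          # gradient magnitude
    ori = np.mod(np.arctan2(gy, gx), 2.0 * np.pi)   # orientation in [0, 2*pi)
    bins = np.minimum((ori / (2.0 * np.pi) * n_bins).astype(int), n_bins - 1)

    h, w = gray.shape
    feat = np.zeros((len(offsets), n_bins, n_bins))
    for k, (dy, dx) in enumerate(offsets):
        # Region where both the pixel and its offset neighbour lie inside the image.
        y0, y1 = max(0, -dy), min(h, h - dy)
        x0, x1 = max(0, -dx), min(w, w - dx)
        b1 = bins[y0:y1, x0:x1]
        b2 = bins[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
        weight = mag[y0:y1, x0:x1] + mag[y0 + dy:y1 + dy, x0 + dx:x1 + dx]
        # Accumulate the magnitude sums into the (bin, bin) co-occurrence table.
        np.add.at(feat[k], (b1, b2), weight)

    feat = feat.ravel()
    return feat / (np.linalg.norm(feat) + 1e-12)    # L2 normalisation


if __name__ == "__main__":
    # Tiny usage example on a random patch (stands in for a trajectory-aligned patch).
    patch = np.random.rand(32, 32)
    print(ecohog_sketch(patch).shape)               # 4 offsets * 8 * 8 bins = (256,)
```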



Author information

Authors and Affiliations

  1. The University of Tokyo, Tokyo, Japan

    Hirokatsu Kataoka

  2. Keio University, Minato, Japan

    Hirokatsu Kataoka, Kiyoshi Hashimoto & Yoshimitsu Aoki

  3. National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan

    Kenji Iwata & Yutaka Satoh

  4. Technische Universität München (TUM), Munich, Germany

    Nassir Navab & Slobodan Ilic

Authors
  1. Hirokatsu Kataoka
  2. Kiyoshi Hashimoto
  3. Kenji Iwata
  4. Yutaka Satoh
  5. Nassir Navab
  6. Slobodan Ilic
  7. Yoshimitsu Aoki

Corresponding author

Correspondence to Hirokatsu Kataoka.

Editor information

Editors and Affiliations

  1. Technische Universität München, Garching, Germany

    Daniel Cremers

  2. University of Adelaide, Adelaide, South Australia, Australia

    Ian Reid

  3. Keio University, Yokohama, Kanagawa, Japan

    Hideo Saito

  4. University of California at Merced, Merced, California, USA

    Ming-Hsuan Yang

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Kataoka, H. et al. (2015). Extended Co-occurrence HOG with Dense Trajectories for Fine-Grained Activity Recognition. In: Cremers, D., Reid, I., Saito, H., Yang, M.-H. (eds.) Computer Vision -- ACCV 2014. Lecture Notes in Computer Science, vol. 9007. Springer, Cham. https://doi.org/10.1007/978-3-319-16814-2_22
