
Accurate target tracking via Gaussian sparsity and locality-constrained coding in heavy occlusion

Published in: Multimedia Tools and Applications

Abstract

This paper presents a Gaussian sparse representation cooperative model for tracking a target in heavily occluded video sequences by combining sparse coding and locality-constrained linear coding. Unlike the usual approach of using an ℓ1-norm regularization term in the particle filter framework to form the sparse collaborative appearance model (SCM), we employed both the ℓ1-norm and the ℓ2-norm for feature selection, and then encoded the candidate samples to generate the sparse coefficients. Consequently, our method not only obtains sparse solutions easily but also reduces the reconstruction error. Compared with state-of-the-art algorithms, our scheme achieved better performance when tracking a target through heavily occluded video sequences. Extensive target tracking experiments were carried out to compare the proposed algorithm with a variety of other tracking methods.
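To make the coding step concrete, the sketch below illustrates one way a candidate sample could be encoded over a template dictionary with a combined ℓ1 (sparsity) and Gaussian locality-weighted ℓ2 penalty, in the spirit of the abstract. It is not the authors' implementation: the exact objective, the Gaussian weighting, the ISTA solver, and the names gaussian_locality_weights, encode_candidate, lam1, lam2 and sigma are illustrative assumptions.

```python
# Minimal sketch (not the paper's released code): encode one candidate
# sample x over a template dictionary D with an l1 sparsity term plus a
# Gaussian locality-weighted l2 term, solved by plain ISTA iterations.
import numpy as np


def gaussian_locality_weights(x, D, sigma=1.0):
    """Penalty weights that grow with the distance between x and each
    dictionary atom, so far-away templates are discouraged (locality)."""
    dists = np.linalg.norm(D - x[:, None], axis=0)      # distance to each atom
    return np.exp(dists ** 2 / (2.0 * sigma ** 2))      # Gaussian-shaped weights


def encode_candidate(x, D, lam1=0.01, lam2=0.1, sigma=1.0, n_iter=200):
    """Solve min_c 0.5*||x - D c||^2 + lam1*||c||_1 + 0.5*lam2*||w * c||^2
    with soft-thresholded gradient steps (ISTA)."""
    w = gaussian_locality_weights(x, D, sigma)
    c = np.zeros(D.shape[1])
    # Lipschitz constant of the smooth part gives a safe step size.
    L = np.linalg.norm(D, 2) ** 2 + lam2 * np.max(w ** 2)
    step = 1.0 / L
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x) + lam2 * (w ** 2) * c  # gradient of smooth terms
        z = c - step * grad
        c = np.sign(z) * np.maximum(np.abs(z) - step * lam1, 0.0)  # soft threshold
    return c


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 20))                   # 20 template atoms
    D /= np.linalg.norm(D, axis=0)                      # unit-norm columns
    x = D[:, 3] + 0.05 * rng.standard_normal(64)        # candidate close to atom 3
    c = encode_candidate(x, D)
    print("nonzero coefficients:", int(np.count_nonzero(np.abs(c) > 1e-3)))
    print("reconstruction error:", float(np.linalg.norm(x - D @ c)))
```

In a particle-filter tracker, such a coder would typically be applied to every candidate patch in each frame, with the reconstruction error (or a likelihood derived from it) used to score candidates; the exact scoring rule of the paper is not reproduced here.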

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (61603161, 61650402); the Natural Science Foundation of Jiangxi Province of China (20151BAB207049); the Key Science Foundation of the Educational Commission of Jiangxi Province of China (GJJ160768); the Scholastic Youth Talent Support Program of Jiangxi Science and Technology Normal University (2016QNBJRC004); the Natural Science Foundation of the Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing (2016WICSIP031); and the Key Science Foundation of Jiangxi Science and Technology Normal University (2014XJZD002). We would like to thank LetPub (www.letpub.com) for providing linguistic assistance during the preparation of this manuscript.

Author information

Authors and Affiliations

  1. School of Communication and Electronics, Jiangxi Science and Technology Normal University, 605 Fenglin Rd., Nanchang, China

    Zhijian Yin, Linhan Dai, Fan Yang & Zhen Yang

  2. Department of Automation, Shanghai Jiao Tong University, 800 Dongchuan Rd., Shanghai, 200240, China

    Huilin Xiong

Authors
  1. Zhijian Yin
  2. Linhan Dai
  3. Huilin Xiong
  4. Fan Yang
  5. Zhen Yang

Corresponding author

Correspondence to Zhen Yang.

About this article

Cite this article

Yin, Z., Dai, L., Xiong, H. et al. Accurate target tracking via Gaussian sparsity and locality-constrained coding in heavy occlusion. Multimed Tools Appl 77, 26485–26507 (2018). https://doi.org/10.1007/s11042-018-5872-1
