Abstract
Automatic image captioning does not depend equally on every spatial region of the visual input; it is instead guided by the topic the image expresses. To address the decoupling between visual spatial attention and the semantic decoder, a topic-based multi-channel attention model (TMA) under a hybrid mode is proposed for image captioning. First, natural language processing (NLP) techniques are used to preprocess the reference captions: stop words are filtered, word frequencies are analyzed, and a semantic network graph with labeled nodes is constructed. Then, combined with image features extracted by a convolutional neural network (CNN), a semantic perception network is designed to achieve cross-domain prediction from image to topic. Next, a topic-based multi-channel attention fusion mechanism is proposed to produce a joint image-text attention representation under the combined action of the global spatial features of the image, the local semantic features of the graph nodes, and the hidden-layer features of the long short-term memory (LSTM) decoder. Finally, a multi-task loss function is used to train the TMA. Experimental results show that the proposed model, with its topic-focused attention, achieves better evaluation performance than state-of-the-art (SOTA) methods.
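To make the fusion step concrete, here is a minimal PyTorch sketch of a topic-based multi-channel attention module. It is an illustrative assumption, not the paper's exact architecture: the layer names, the feature dimensions (`d_v`, `d_g`, `d_h`, `d_a`), and the scalar fusion gate are all hypothetical. The module attends over global CNN region features and over topic-graph node embeddings in parallel, each query conditioned on the LSTM decoder's hidden state, then gates the two context vectors into a single fused representation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopicMultiChannelAttention(nn.Module):
    """Illustrative multi-channel attention step (hypothetical dimensions).

    Channels fused at each decoding step:
      V: global spatial CNN features, shape (B, R, d_v), R regions
      G: local semantic graph-node embeddings, shape (B, N, d_g), N nodes
      h: LSTM decoder hidden state, shape (B, d_h)
    """

    def __init__(self, d_v=2048, d_g=300, d_h=512, d_a=512):
        super().__init__()
        self.proj_v = nn.Linear(d_v, d_a)   # project visual regions
        self.proj_g = nn.Linear(d_g, d_a)   # project graph nodes
        self.proj_h = nn.Linear(d_h, d_a)   # project decoder state (query)
        self.score_v = nn.Linear(d_a, 1)    # additive attention scores, visual channel
        self.score_g = nn.Linear(d_a, 1)    # additive attention scores, semantic channel
        self.gate = nn.Linear(d_h + 2 * d_a, 1)  # scalar channel-fusion gate

    def forward(self, V, G, h):
        q = self.proj_h(h).unsqueeze(1)                 # (B, 1, d_a) query
        Vp, Gp = self.proj_v(V), self.proj_g(G)         # projected channels
        # channel 1: spatial attention over CNN regions
        w_v = F.softmax(self.score_v(torch.tanh(Vp + q)), dim=1)  # (B, R, 1)
        ctx_v = (w_v * Vp).sum(dim=1)                   # (B, d_a)
        # channel 2: semantic attention over topic-graph nodes
        w_g = F.softmax(self.score_g(torch.tanh(Gp + q)), dim=1)  # (B, N, 1)
        ctx_g = (w_g * Gp).sum(dim=1)                   # (B, d_a)
        # gate the two contexts, conditioned on the decoder state
        beta = torch.sigmoid(self.gate(torch.cat([h, ctx_v, ctx_g], dim=-1)))
        return beta * ctx_v + (1.0 - beta) * ctx_g      # fused context (B, d_a)

# usage sketch: batch of 4 images, 7x7 = 49 regions, 20 graph nodes
att = TopicMultiChannelAttention()
fused = att(torch.randn(4, 49, 2048), torch.randn(4, 20, 300), torch.randn(4, 512))
print(fused.shape)  # torch.Size([4, 512])
```

At each decoding step the fused context would be fed to the LSTM together with the previous word embedding; under the multi-task training the abstract describes, the caption cross-entropy would be combined with a topic-prediction term, e.g. `loss = ce_caption + lam * loss_topic`, where the weighting hyperparameter `lam` is likewise an assumption.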
Acknowledgements
This work was supported by the Nanjing Institute of Technology High-level Scientific Research Foundation for the Introduction of Talent (No. YKJ201918), the Natural Science Foundation of Jiangsu Province of China, Youth Fund (No. BK20210931), and the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 20KJB510049), and was partially supported by the National Natural Science Foundation of China (No. 61902179).
Author information
Authors and Affiliations
School of Automation, Nanjing Institute of Technology, Nanjing, China
Kui Qian & Lei Tian
Corresponding author
Correspondence to Kui Qian.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
About this article
Cite this article
Qian, K., Tian, L. A topic-based multi-channel attention model under hybrid mode for image caption. Neural Comput & Applic 34, 2207–2216 (2022). https://doi.org/10.1007/s00521-021-06557-8