Improving recognition of deteriorated historical Persian geometric patterns by fusion decision methods

  • S.I.: Visual Pattern Recognition and Extraction for Cultural Heritage
  • Published in Neural Computing and Applications

This article has been updated

Abstract

Historical architecture exhibits distinct styles attributed to each era, dynasty, or region. These styles share common features such as geometric properties, ratios, scales, colors, and artistic techniques, and historical geometric ornaments therefore lend themselves well to classification based on their geometric characteristics. Automatic pattern recognition allows researchers to organize large heritage databases and make them usefully searchable. The main goal of this paper is to detect the categories of geometric patterns for classification and documentation, so that from photographs of the ornaments in a monument, the pattern types and the number of instances of each type can be estimated quickly. Because these patterns often suffer deterioration, our method also contributes to recognizing deteriorated patterns: when numerous deteriorated fragments must be reassembled and reconstructed, manual recognition is usually impractical or time-consuming. Using image processing and machine learning, we therefore aim to recognize historical geometric patterns automatically, even when deterioration acts as an occlusion. A key challenge in detecting the types of historical geometric patterns is the variety of geometric textures, especially under occlusion such as deterioration, which limits the success of classification based on a single extracted feature. A further requirement is that the extracted features be invariant to transformations such as scale, rotation, and noise. To address these challenges and classify accurately, we use a fusion method based on global and local features: the proposed strategy operates at both the feature and the decision level, and its core is three combination methods for decision fusion. The dataset comprises four main Persian geometric pattern types: Tond dah, Kond tabl ghenas, Hasht va 4 lengeh, and Hasht va tabl Kond. The model is trained on the global and local features of the images separately, using random forest, a widely used machine learning algorithm, to predict the class of input images. Finally, the prediction probabilities of the random forest classifiers are fused by the Decision Templates (DT) combiner, the Naïve Bayes (NB) combiner, and the Dempster–Shafer combination method. Compared with individual classifier accuracies of 80% and 85% for global and local features, respectively, the proposed approach achieves improved accuracies of 90%, 88%, and 90% with the DT combiner, the NB combiner, and the Dempster–Shafer combination method.
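To make the decision-fusion step concrete, the sketch below illustrates the overall flow under stated assumptions; it is not the authors' implementation. It assumes scikit-learn's RandomForestClassifier, treats the global and local feature matrices (for example, Zernike-moment descriptors and bag-of-visual-words histograms) as hypothetical, user-supplied inputs, and uses a simplified Dempster–Shafer rule in which belief mass is placed only on singleton classes, so the combination reduces to a normalized element-wise product of the two classifiers' probability vectors.

```python
# Minimal sketch (not the authors' code) of decision-level fusion of two
# random forests: one trained on global features, one on local features.
# A simplified Dempster-Shafer rule combines their class probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier


def dempster_shafer_fusion(p_a, p_b, eps=1e-12):
    """Combine two (n_samples, n_classes) probability arrays.

    With mass assigned only to singleton classes, Dempster's rule of
    combination reduces to the normalized element-wise product.
    """
    combined = p_a * p_b
    return combined / (combined.sum(axis=1, keepdims=True) + eps)


def train_and_fuse(Xg_train, Xl_train, y_train, Xg_test, Xl_test):
    # Xg_* / Xl_* are hypothetical global and local feature matrices
    # (e.g., Zernike moments and bag-of-visual-words histograms).
    rf_global = RandomForestClassifier(n_estimators=200, random_state=0)
    rf_local = RandomForestClassifier(n_estimators=200, random_state=0)
    rf_global.fit(Xg_train, y_train)
    rf_local.fit(Xl_train, y_train)

    # Per-classifier posterior estimates on the test images.
    p_global = rf_global.predict_proba(Xg_test)
    p_local = rf_local.predict_proba(Xl_test)

    # Fuse the two probability outputs and return the predicted labels.
    p_fused = dempster_shafer_fusion(p_global, p_local)
    return rf_global.classes_[np.argmax(p_fused, axis=1)]
```

The Decision Templates and Naïve Bayes combiners described in the abstract would aggregate the same per-classifier probability outputs with their own rules in place of dempster_shafer_fusion.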




Data availability

Data will be made available on reasonable request.

Change history

  • 28 June 2024

    The original online version of this article was revised to include the ORCID for the author Bita Hajebi.


Acknowledgements

This project was funded by the Iran Science Elites Federation. The authors thank Dr. Leily A. Bakhtiar for her support and critical comments regarding this article.

Author information

Author notes
  1. Bita Hajebi and Pooya Hajebi contributed equally to this work.

Authors and Affiliations

  1. Department of Architectural and Urban Conservation, Art University of Isfahan, Hakim Nezami, Isfahan, 8175894418, Esfahan, Iran

    Bita Hajebi

  2. Department of Electrical Engineering, Yazd University, University Boulevard, Yazd, 8915818411, Yazd, Iran

    Pooya Hajebi

  3. Department of Mechanical Engineering, Isfahan University of Technology, Isfahan, 8415683111, Iran

    Pooya Hajebi

Authors
  1. Bita Hajebi
  2. Pooya Hajebi

Corresponding author

Correspondence to Pooya Hajebi.

Ethics declarations

Conflict of interest

The authors have no conflicts of interest that are relevant to the content of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

Reprints and permissions

About this article


Cite this article

Hajebi, B., Hajebi, P. Improving recognition of deteriorated historical Persian geometric patterns by fusion decision methods. Neural Comput & Applic 36, 11809–11831 (2024). https://doi.org/10.1007/s00521-024-09932-3


Associated Content

Part of a collection:

Special Issue on Visual Pattern Recognition and Extraction for Cultural Heritage
