arXiv:2305.17473 (cs)
[Submitted on 27 May 2023 (v1), last revised 17 Mar 2025 (this version, v4)]
Title: A Comprehensive Overview and Comparative Analysis on Deep Learning Models: CNN, RNN, LSTM, GRU
Abstract: Deep learning (DL) has emerged as a powerful subset of machine learning (ML) and artificial intelligence (AI), outperforming traditional ML methods, especially in handling unstructured and large datasets. Its impact spans various domains, including speech recognition, healthcare, autonomous vehicles, cybersecurity, predictive analytics, and more. However, the complexity and dynamic nature of real-world problems present challenges in designing effective deep learning models. Consequently, several deep learning models have been developed to address different problems and applications. In this article, we conduct a comprehensive survey of various deep learning models, including Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Temporal Convolutional Networks (TCN), Transformer, Kolmogorov-Arnold Networks (KAN), Generative Models, Deep Reinforcement Learning (DRL), and Deep Transfer Learning. We examine the structure, applications, benefits, and limitations of each model. Furthermore, we perform an analysis using three publicly available datasets: IMDB, ARAS, and Fruit-360. We compare the performance of six renowned deep learning models: CNN, RNN, Long Short-Term Memory (LSTM), Bidirectional LSTM, Gated Recurrent Unit (GRU), and Bidirectional GRU, alongside two newer models, TCN and Transformer, using the IMDB and ARAS datasets. Additionally, we evaluate the performance of eight CNN-based models, including VGG (Visual Geometry Group), Inception, ResNet (Residual Network), InceptionResNet, Xception (Extreme Inception), MobileNet, DenseNet (Dense Convolutional Network), and NASNet (Neural Architecture Search Network), for image classification tasks using the Fruit-360 dataset.
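To make the comparative setup concrete, the sketch below shows one plausible way to run the kind of recurrent-model benchmark the abstract describes on the IMDB dataset, using Keras. This is an illustrative sketch, not the authors' code: every hyperparameter (vocabulary size, sequence length, embedding width, 64 recurrent units, two training epochs) is an assumption of this sketch rather than a setting reported in the paper.

# Minimal, self-contained sketch (not the authors' code) of benchmarking
# LSTM, GRU, and their bidirectional variants on IMDB sentiment classification.
# All hyperparameters below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.datasets import imdb

VOCAB_SIZE = 10000  # keep only the 10k most frequent tokens (assumption)
MAX_LEN = 200       # pad/truncate each review to 200 tokens (assumption)

(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=VOCAB_SIZE)
x_train = tf.keras.utils.pad_sequences(x_train, maxlen=MAX_LEN)
x_test = tf.keras.utils.pad_sequences(x_test, maxlen=MAX_LEN)

def build_model(recurrent_layer):
    # Shared skeleton: embedding -> recurrent encoder -> sigmoid classifier,
    # so only the recurrent layer differs between candidates.
    return models.Sequential([
        layers.Embedding(VOCAB_SIZE, 64),
        recurrent_layer,
        layers.Dense(1, activation="sigmoid"),  # binary sentiment output
    ])

candidates = {
    "LSTM":   layers.LSTM(64),
    "GRU":    layers.GRU(64),
    "BiLSTM": layers.Bidirectional(layers.LSTM(64)),
    "BiGRU":  layers.Bidirectional(layers.GRU(64)),
}

for name, encoder in candidates.items():
    model = build_model(encoder)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=2, batch_size=128,
              validation_split=0.1, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"{name}: test accuracy = {acc:.4f}")

The paper's remaining baselines would follow the same train-and-evaluate pattern: CNN, TCN, and Transformer encoders can be dropped into the same skeleton in place of the recurrent layer, and the eight pretrained CNN architectures evaluated on Fruit-360 (VGG, ResNet, MobileNet, etc.) are available through tf.keras.applications for an analogous image-classification loop.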
Comments: 62 pages, 37 figures
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2305.17473 [cs.LG] (or arXiv:2305.17473v4 [cs.LG] for this version)
DOI: https://doi.org/10.48550/arXiv.2305.17473 (arXiv-issued DOI via DataCite)
Journal reference: Journal on Artificial Intelligence, 2024, Vol. 6, Issue 1, pp. 301-360
Related DOI: https://doi.org/10.32604/jai.2024.054314
Submission history
From: Farhad Mortezapour Shiri
[v1] Sat, 27 May 2023 13:23:21 UTC (1,384 KB)
[v2] Thu, 1 Jun 2023 16:53:28 UTC (1,455 KB)
[v3] Thu, 24 Oct 2024 17:41:58 UTC (3,143 KB)
[v4] Mon, 17 Mar 2025 10:18:52 UTC (3,246 KB)