Automated design of error-resilient and hardware-efficient deep neural networks

  • Original Article
  • Published in: Neural Computing and Applications

Abstract

Applying deep neural networks (DNNs) in mobile and safety-critical systems, such as autonomous vehicles, demands reliable and efficient execution on hardware. The design of the neural architecture has a large influence on the achievable efficiency and bit error resilience of the network on hardware. Since there are numerous design choices for the architecture of DNNs, with partially opposing effects on the desired characteristics (such as low error rates at low latency), multi-objective optimization strategies are necessary. In this paper, we develop an evolutionary optimization technique for the automated design of hardware-optimized DNN architectures. For this purpose, we derive a set of inexpensively computable objective functions, which enable the fast evaluation of DNN architectures with respect to their hardware efficiency and error resilience. We observe a strong correlation between predicted error resilience and actual measurements obtained from fault injection simulations. Furthermore, we analyze two different quantization schemes for efficient DNN computation and find that one provides significantly higher error resilience than the other. Finally, a comparison of the architectures produced by our algorithm with the popular MobileNetV2 and NASNet-A models reveals up to seven times higher bit error resilience for our models. We are the first to combine error resilience, efficiency, and performance optimization in a neural architecture search framework.
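The abstract reports validating predicted error resilience against fault injection simulations on quantized networks. As a rough illustration of what such a simulation involves, the following is a minimal, self-contained Python sketch (NumPy only) that flips random bits in the int8-quantized weights of a toy linear classifier and measures the resulting accuracy drop. The uniform quantizer, the toy model, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: estimating bit-error resilience of a quantized layer via
# random fault injection. The toy model and quantizer are illustrative
# assumptions, not the method from the paper.
import numpy as np

rng = np.random.default_rng(0)

def quantize_int8(w, scale):
    """Uniform symmetric int8 quantization (one of many possible schemes)."""
    return np.clip(np.round(w / scale), -128, 127).astype(np.int8)

def inject_bit_flips(q, ber, rng):
    """Flip each stored bit independently with probability `ber` (bit error rate)."""
    bits = np.unpackbits(q.view(np.uint8))          # raw bit view of the int8 payload
    flips = rng.random(bits.shape) < ber
    faulty = np.bitwise_xor(bits, flips.astype(np.uint8))
    return np.packbits(faulty).view(np.int8).reshape(q.shape)

def accuracy(weights_q, scale, x, labels):
    """Toy linear classifier: argmax over dequantized weight rows."""
    logits = x @ (weights_q.astype(np.float32) * scale).T
    return float(np.mean(np.argmax(logits, axis=1) == labels))

# Toy data and weights, standing in for a trained DNN layer.
x = rng.normal(size=(256, 64)).astype(np.float32)
w = rng.normal(size=(10, 64)).astype(np.float32)
labels = np.argmax(x @ w.T, axis=1)                 # labels the clean model predicts
scale = np.abs(w).max() / 127.0
wq = quantize_int8(w, scale)

# Sweep bit error rates; a model whose curve decays more slowly under
# injected faults is more error-resilient.
for ber in (1e-5, 1e-4, 1e-3, 1e-2):
    accs = [accuracy(inject_bit_flips(wq, ber, rng), scale, x, labels)
            for _ in range(20)]
    print(f"BER={ber:.0e}: mean accuracy {np.mean(accs):.3f}")
```

Sweeping the bit error rate and comparing how quickly accuracy decays is one simple way to rank models by resilience; this is the kind of comparison the abstract draws between its two quantization schemes and between its architectures and MobileNetV2/NASNet-A.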


Author information

Authors and Affiliations

  1. Bosch Corporate Research, Robert Bosch GmbH, Renningen, Germany

    Christoph Schorn, Sebastian Vogel, Armin Runge & Andre Guntoro

  2. Institute for Communication Technologies and Embedded Systems, RWTH Aachen University, Aachen, Germany

    Christoph Schorn, Sebastian Vogel & Gerd Ascheid

  3. Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Germany

    Thomas Elsken

  4. Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany

    Thomas Elsken

Authors
  1. Christoph Schorn
  2. Thomas Elsken
  3. Sebastian Vogel
  4. Armin Runge
  5. Andre Guntoro
  6. Gerd Ascheid

Corresponding author

Correspondence to Christoph Schorn.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Schorn, C., Elsken, T., Vogel, S., et al. Automated design of error-resilient and hardware-efficient deep neural networks. Neural Comput & Applic 32, 18327–18345 (2020). https://doi.org/10.1007/s00521-020-04969-6
