Abstract
Applying deep neural networks (DNNs) in mobile and safety-critical systems, such as autonomous vehicles, demands reliable and efficient execution on hardware. The neural architecture strongly influences the efficiency and bit error resilience that a network can achieve on hardware. Since the architecture of a DNN involves numerous design choices, some with opposing effects on the desired characteristics (such as low error rates at low latency), multi-objective optimization strategies are necessary. In this paper, we develop an evolutionary optimization technique for the automated design of hardware-optimized DNN architectures. For this purpose, we derive a set of inexpensively computable objective functions that enable fast evaluation of DNN architectures with respect to hardware efficiency and error resilience. We observe a strong correlation between the predicted error resilience and measurements obtained from fault injection simulations. Furthermore, we analyze two quantization schemes for efficient DNN computation and find that one provides significantly higher error resilience than the other. Finally, comparing the architectures produced by our algorithm with the popular MobileNetV2 and NASNet-A models shows that our models achieve up to seven times higher bit error resilience. We are the first to combine error resilience, efficiency, and performance optimization in a neural architecture search framework.
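The fault injection simulations mentioned in the abstract can be illustrated with a minimal sketch (this is not the authors' actual tooling, just the general idea): assuming weights are stored as 8-bit fixed-point codes, each memory bit is flipped independently with a given bit error rate (BER), and the perturbed weights are then used to measure the resulting accuracy drop.

```python
import numpy as np

def flip_bits(weights_q, ber, rng=None):
    """Inject independent random bit flips into int8 weight codes.

    weights_q : np.int8 array of fixed-point weight codes
    ber       : bit error rate, i.e. probability that each stored bit flips
    """
    rng = np.random.default_rng() if rng is None else rng
    raw = weights_q.view(np.uint8).reshape(-1)      # reinterpret bit patterns, no value change
    # One Bernoulli(ber) draw per bit of every weight.
    flips = rng.random((raw.size, 8)) < ber
    bit_values = 2 ** np.arange(8, dtype=np.uint8)  # 1, 2, 4, ..., 128
    xor_mask = (flips * bit_values).sum(axis=1).astype(np.uint8)
    return (raw ^ xor_mask).view(np.int8).reshape(weights_q.shape)
```

A resilience estimate then follows by sweeping `ber` over a range (e.g. 1e-7 to 1e-2), re-evaluating the quantized network's accuracy with the perturbed weights at each point, and comparing the degradation curve to the fault-free baseline.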
Author information
Authors and Affiliations
Bosch Corporate Research, Robert Bosch GmbH, Renningen, Germany
Christoph Schorn, Sebastian Vogel, Armin Runge & Andre Guntoro
Institute for Communication Technologies and Embedded Systems, RWTH Aachen University, Aachen, Germany
Christoph Schorn, Sebastian Vogel & Gerd Ascheid
Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Germany
Thomas Elsken
Department of Computer Science, University of Freiburg, Freiburg im Breisgau, Germany
Thomas Elsken
Corresponding author
Correspondence to Christoph Schorn.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Schorn, C., Elsken, T., Vogel, S. et al. Automated design of error-resilient and hardware-efficient deep neural networks. Neural Comput & Applic 32, 18327–18345 (2020). https://doi.org/10.1007/s00521-020-04969-6