1. Zhu, J.-Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In 2017 IEEE Int. Conference on Computer Vision (ICCV) 2223–2232 (IEEE, 2017).

2. van den Oord, A. et al. WaveNet: A generative model for raw audio. Preprint at https://arxiv.org/abs/1609.03499 (2016).

3. Wu, Y. et al. Google’s neural machine translation system: Bridging the gap between human and machine translation. Preprint at https://arxiv.org/abs/1609.08144 (2016).

4. Johnson, J., Alahi, A. & Li, F.-F. Perceptual losses for real-time style transfer and super-resolution. In Proc. European Conference on Computer Vision 694–711 (Springer, 2016).

5. He, Y. et al. Streaming end-to-end speech recognition for mobile devices. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 6381–6385 (IEEE, 2019).

6. Ali Eslami, S. M. et al. Neural scene representation and rendering. Science 360, 1204–1210 (2018).

7. Anderson, P. et al. Vision-and-language navigation: Interpreting visually-grounded navigation instructions in real environments. In Proc. IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 3674–3683 (IEEE, 2018).

8. Yi, K. et al. Neural-symbolic VQA: Disentangling reasoning from vision and language understanding. In Advances in Neural Information Processing Systems 31 (NIPS 2018) 1031–1042 (MIT Press, 2018).

9. Cloud TPU (Google, 2020); https://cloud.google.com/tpu

10. Intel Movidius Myriad X Vision Processing Unit Technical Specifications (Intel, 2020); https://www.intel.com/content/www/us/en/products/processors/movidius-vpu/movidius-myriad-x.html

11. Chen, Y.-H., Yang, T.-J., Emer, J. & Sze, V. Eyeriss v2: A flexible accelerator for emerging deep neural networks on mobile devices. IEEE J. Emerg. Sel. Top. Circuits Syst. 9, 292–308 (2019).

12. Guo, K. et al. Angel-Eye: A complete design flow for mapping CNN onto embedded FPGA. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 37, 35–47 (2017).

13. Valavi, H., Ramadge, P. J., Nestler, E. & Verma, N. A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute. IEEE J. Solid-State Circuits 54, 1789–1799 (2019).

14. Xu, X. et al. Scaling for edge inference of deep neural networks. Nat. Electron. 1, 216–222 (2018).

15. Zhang, S. et al. Cambricon-X: An accelerator for sparse neural networks. In 49th Annual IEEE/ACM Int. Symposium on Microarchitecture https://doi.org/10.1109/MICRO.2016.7783723 (IEEE, 2016).

16. Holzinger, A., Biemann, C., Pattichis, C. S. & Kell, D. B. What do we need to build explainable AI systems for the medical domain? Preprint at https://arxiv.org/abs/1712.09923 (2017).

17. Vovk, V., Gammerman, A. & Shafer, G. Algorithmic Learning in a Random World (Springer, 2005).

18. Papadopoulos, H., Vovk, V. & Gammerman, A. Conformal prediction with neural networks. In 19th IEEE Int. Conference on Tools with Artificial Intelligence (ICTAI 2007) 2, 388–395 (IEEE, 2007).

19. DeVries, T. & Taylor, G. W. Leveraging uncertainty estimates for predicting segmentation quality. Preprint at https://arxiv.org/abs/1807.00502 (2018).

20. DeVries, T. & Taylor, G. W. Learning confidence for out-of-distribution detection in neural networks. Preprint at https://arxiv.org/abs/1802.04865 (2018).

21. Guo, C., Pleiss, G., Sun, Y. & Weinberger, K. Q. On calibration of modern neural networks. In Proc. 34th International Conference on Machine Learning 70, 1321–1330 (ACM, 2017).

22. Malinin, A. & Gales, M. Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems 31 (NIPS 2018) 7047–7058 (MIT Press, 2018).

23. Lakshminarayanan, B., Pritzel, A. & Blundell, C. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems 30 (NIPS 2017) 6402–6413 (MIT Press, 2017).

24. Geifman, Y., Uziel, G. & El-Yaniv, R. Bias-reduced uncertainty estimation for deep neural classifiers. In Proc. 7th Int. Conference on Learning Representations (ICLR) (ICLR, 2019).

25. Ding, Y., Liu, J., Xiong, J. & Shi, Y. Revisiting the evaluation of uncertainty estimation and its application to explore model complexity-uncertainty trade-off. In CVPR Workshop on Fair, Data Efficient and Trusted Computer Vision 4–5 (IEEE, 2020).

26. Roy, A. G., Conjeti, S., Navab, N. & Wachinger, C. Inherent brain segmentation quality control from fully ConvNet Monte Carlo sampling. In Int. Conference on Medical Image Computing and Computer-Assisted Intervention 664–672 (Springer, 2018).

27. Su, H., Yin, Z., Huh, S., Kanade, T. & Zhu, J. Interactive cell segmentation based on active and semi-supervised learning. IEEE Trans. Med. Imaging 35, 762–777 (2015).

28. McAllister, R. et al. Concrete problems for autonomous vehicle safety: advantages of Bayesian deep learning. In Int. Joint Conference on Artificial Intelligence 4745–4753 (IJCAI, 2017).

29. Gasser, U. & Almeida, V. A. A layered model for AI governance. IEEE Internet Comput. 21, 58–62 (2017).

30. O’Sullivan, S. et al. Legal, regulatory, and ethical frameworks for development of standards in artificial intelligence (AI) and autonomous robotic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 15, e1968 (2019).

31. Shih, P.-J. Ethical guidelines for artificial intelligence (AI) development and the new “trust” between humans and machines. Int. J. Autom. Smart Technol. 9, 41–43 (2019).

32. Liang, S., Li, Y. & Srikant, R. Enhancing the reliability of out-of-distribution image detection in neural networks. In Proc. 6th Int. Conference on Learning Representations (ICLR) (ICLR, 2018).

33. Kumar, A., Sarawagi, S. & Jain, U. Trainable calibration measures for neural networks from kernel mean embeddings. In Int. Conference on Machine Learning 2810–2819 (PMLR, 2018).

34. Naeini, M. P., Cooper, G. & Hauskrecht, M. Obtaining well calibrated probabilities using Bayesian binning. In 29th AAAI Conference on Artificial Intelligence 2901–2907 (AAAI, 2015).

35. Kiureghian, A. D. & Ditlevsen, O. Aleatory or epistemic? Does it matter? Struct. Saf. 31, 105–112 (2009).

36. Kendall, A. & Gal, Y. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems 30 (NIPS 2017) 5574–5584 (MIT Press, 2017).

37. Shalev, G., Adi, Y. & Keshet, J. Out-of-distribution detection using multiple semantic label representations. In Advances in Neural Information Processing Systems 31 (NIPS 2018) 7375–7385 (MIT Press, 2018).

38. Lee, K., Lee, H., Lee, K. & Shin, J. Training confidence-calibrated classifiers for detecting out-of-distribution samples. In Proc. 6th Int. Conference on Learning Representations (ICLR) (ICLR, 2018).

39. Hendrycks, D. & Gimpel, K. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In Proc. 5th Int. Conference on Learning Representations (ICLR) (ICLR, 2017).

40. Brock, A., Donahue, J. & Simonyan, K. Large scale GAN training for high fidelity natural image synthesis. In Proc. 7th Int. Conference on Learning Representations (ICLR) (ICLR, 2019).

41. Devlin, J. et al. BERT: Pre-training of deep bidirectional transformers for language understanding. In Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies 4171–4186 (ACL, 2019).

42. Sandler, M. et al. MobileNetV2: Inverted residuals and linear bottlenecks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 4510–4520 (IEEE, 2018).

43. Chen, Y.-H., Emer, J. & Sze, V. Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks. In Proc. 43rd International Symposium on Computer Architecture 367–379 (IEEE, 2016).

44. Gao, M., Ayers, G. & Kozyrakis, C. Practical near-data processing for in-memory analytics frameworks. In 2015 International Conference on Parallel Architecture and Compilation (PACT) 113–124 (IEEE, 2015).

45. Xue, C.-X. et al. 24.1 A 1Mb multibit ReRAM computing-in-memory macro with 14.6 ns parallel MAC computing time for CNN-based AI edge processors. In 2019 IEEE International Solid-State Circuits Conference (ISSCC) 388–390 (IEEE, 2019).

46. Jiang, W., Xie, B., Liu, C. et al. Integrating memristors and CMOS for better AI. Nat. Electron. 2, 376–377 (2019).

47. Huang, G., Liu, Z., van der Maaten, L. & Weinberger, K. Q. Densely connected convolutional networks. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 4700–4708 (IEEE, 2017).

48. Zagoruyko, S. & Komodakis, N. Wide residual networks. In Proc. British Machine Vision Conference (BMVC) 87.1–87.12 (BMVA, 2016).

49. Ni, K. et al. Fundamental understanding and control of device-to-device variation in deeply scaled ferroelectric FETs. In Proc. 2019 Symposium on VLSI Technology 40–41 (IEEE, 2019).

50. Jerry, M. et al. Ferroelectric FET analog synapse for acceleration of deep neural network training. In Proc. 2017 IEEE International Electron Devices Meeting (IEDM) 6.2.1–6.2.4 (IEEE, 2017).

51. Zhao, M. et al. Investigation of statistical retention of filamentary analog RRAM for neuromorphic computing. In Proc. 2017 IEEE International Electron Devices Meeting (IEDM) 39.4.1–39.4.4 (IEEE, 2017).

52. Chou, T., Tang, W., Botimer, J. & Zhang, Z. CASCADE: Connecting RRAMs to extend analog dataflow in an end-to-end in-memory processing paradigm. In Proc. 52nd Annual IEEE/ACM International Symposium on Microarchitecture 114–125 (IEEE, 2019).

53. MacKay, D. J. A practical Bayesian framework for backpropagation networks. Neural Comput. 4, 448–472 (1992).

54. Neal, R. M. Bayesian Learning for Neural Networks Vol. 118 (Springer, 2012).

55. Blundell, C., Cornebise, J., Kavukcuoglu, K. & Wierstra, D. Weight uncertainty in neural network. In Proc. International Conference on Machine Learning 1613–1622 (ACM, 2015).

56. Graves, A. Practical variational inference for neural networks. In Advances in Neural Information Processing Systems 24 (NIPS 2011) 2348–2356 (MIT Press, 2011).

57. Louizos, C. & Welling, M. Multiplicative normalizing flows for variational Bayesian neural networks. In Proc. 34th International Conference on Machine Learning https://doi.org/10.5555/3305890.3305910 (ACM, 2017).

58. Nair, T., Precup, D., Arnold, D. L. & Arbel, T. Exploring uncertainty measures in deep networks for multiple sclerosis lesion detection and segmentation. In Proc. Int. Conference on Medical Image Computing and Computer-Assisted Intervention 655–663 (Springer, 2018).

59. Ovadia, Y. et al. Can you trust your model’s uncertainty? Evaluating predictive uncertainty under dataset shift. In Advances in Neural Information Processing Systems 32 (NIPS 2019) 13991–14002 (MIT Press, 2019).

60. Dhamija, A. R., Günther, M. & Boult, T. Reducing network agnostophobia. In Advances in Neural Information Processing Systems 31 (NIPS 2018) 9175–9186 (MIT Press, 2018).

61. Hein, M., Andriushchenko, M. & Bitterwolf, J. Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In Proc. Conference on Computer Vision and Pattern Recognition (CVPR) 41–50 (IEEE, 2018).

62. Alexandari, A., Kundaje, A. & Shrikumar, A. Maximum likelihood with bias-corrected calibration is hard-to-beat at label shift adaptation. Preprint at https://arxiv.org/abs/1901.06852 (2019).

63. Chen, T., Navrátil, J., Iyengar, V. & Shanmugam, K. Confidence scoring using whitebox meta-models with linear classifier probes. In Proc. 22nd International Conference on Artificial Intelligence and Statistics 1467–1475 (PMLR, 2019).

64. Mandelbaum, A. & Weinshall, D. Distance-based confidence score for neural network classifiers. Preprint at https://arxiv.org/abs/1709.09844 (2017).

65. Oberdiek, P., Rottmann, M. & Gottschalk, H. Classification uncertainty of deep neural networks based on gradient information. In IAPR Workshop on Artificial Neural Networks in Pattern Recognition 113–125 (Springer, 2018).

66. Teerapittayanon, S., McDanel, B. & Kung, H.-T. BranchyNet: Fast inference via early exiting from deep neural networks. In 2016 23rd Int. Conference on Pattern Recognition (ICPR) 2464–2469 (IEEE, 2016).

67. Wang, X. et al. IDK cascades: Fast deep learning by learning not to overthink. In Proc. Conference on Uncertainty in Artificial Intelligence 580–590 (AUAI, 2018).

68. Sze, V., Chen, Y.-H., Yang, T.-J. & Emer, J. S. Efficient processing of deep neural networks: a tutorial and survey. Proc. IEEE 105, 2295–2329 (2017).

69. Geifman, Y. & El-Yaniv, R. SelectiveNet: A deep neural network with an integrated reject option. In Proc. Int. Conference on Machine Learning 2151–2159 (PMLR, 2019).

70. Song, C., Liu, B., Wen, W., Li, H. & Chen, Y. A quantization-aware regularized learning method in multilevel memristor-based neuromorphic computing system. In 2017 IEEE 6th Non-Volatile Memory Systems and Applications Symposium (NVMSA) https://doi.org/10.1109/NVMSA.2017.8064465 (IEEE, 2017).

71. He, K., Zhang, X., Ren, S. & Sun, J. Deep residual learning for image recognition. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 770–778 (IEEE, 2016).

72. Gaier, A. & Ha, D. Weight agnostic neural networks. In Advances in Neural Information Processing Systems 32 (NIPS 2019) 5364–5378 (MIT Press, 2019).

73. Nguyen-Phuoc, T., Li, C., Theis, L., Richardt, C. & Yang, Y.-L. HoloGAN: Unsupervised learning of 3D representations from natural images. In Proc. IEEE Int. Conference on Computer Vision 7588–7597 (IEEE, 2019).

74. Nalisnick, E. et al. Hybrid models with deep and invertible features. In Proc. Int. Conference on Machine Learning 4723–4732 (PMLR, 2019).

75. Wu, C.-J. et al. Machine learning at Facebook: understanding inference at the edge. In 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA) 331–344 (IEEE, 2019).

76. Gupta, U. et al. The architectural implications of Facebook’s DNN-based personalized recommendation. In 2020 IEEE Int. Symposium on High Performance Computer Architecture (HPCA) 488–501 (IEEE, 2020).

77. Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems 25 (NIPS 2012) 1097–1105 (MIT Press, 2012).

78. Zoph, B. & Le, Q. V. Neural architecture search with reinforcement learning. In Proc. 5th Int. Conference on Learning Representations (ICLR) (ICLR, 2017).

79. Tan, M. et al. MnasNet: Platform-aware neural architecture search for mobile. In Proc. IEEE Conference on Computer Vision and Pattern Recognition 2820–2828 (IEEE, 2019).

80. Yang, L. et al. Co-exploring neural architecture and network-on-chip design for real-time artificial intelligence. In Proc. Asia and South Pacific Design Automation Conference (ASP-DAC) 85–90 (IEEE, 2020).

81. Liu, H., Simonyan, K. & Yang, Y. DARTS: Differentiable architecture search. In Proc. 7th Int. Conference on Learning Representations (ICLR) (ICLR, 2019).

82. Hendrycks, D. & Dietterich, T. Benchmarking neural network robustness to common corruptions and perturbations. In Proc. 7th Int. Conference on Learning Representations (ICLR) (ICLR, 2019).

83. Huang, X., Kwiatkowska, M., Wang, S. & Wu, M. Safety verification of deep neural networks. In Proc. Int. Conference on Computer Aided Verification 3–29 (Springer, 2017).

84. Papernot, N. et al. Practical black-box attacks against machine learning. In Proc. 2017 ACM Asia Conference on Computer and Communications Security 506–519 (ACM, 2017).

85. Abadi, M. et al. Deep learning with differential privacy. In Proc. 2016 ACM SIGSAC Conference on Computer and Communications Security 308–318 (ACM, 2016).

86. Papernot, N. et al. Practical black-box attacks against machine learning. In Proc. 2017 ACM Asia Conference on Computer and Communications Security 506–519 (ACM, 2017).

87. Gal, Y. & Ghahramani, Z. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proc. Int. Conference on Machine Learning 1050–1059 (PMLR, 2016).

88. Liu, S. et al. Cambricon: An instruction set architecture for neural networks. In 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA) 393–405 (IEEE, 2016).

89. Chen, Y.-H., Krishna, T., Emer, J. S. & Sze, V. Eyeriss: An energy-efficient reconfigurable accelerator for deep convolutional neural networks. IEEE J. Solid-State Circuits 52, 127–138 (2016).

90. Chen, T. et al. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning. ACM SIGARCH Comput. Archit. News 42, 269–284 (2014).

91. Chen, Y. et al. DaDianNao: A machine-learning supercomputer. In 2014 47th Annual IEEE/ACM Int. Symposium on Microarchitecture 609–622 (IEEE, 2014).

92. Jouppi, N. P. et al. In-datacenter performance analysis of a tensor processing unit. In Proc. 44th Annual Int. Symposium on Computer Architecture https://doi.org/10.1145/3140659.3080246 (ACM, 2017).

93. Han, S. et al. EIE: Efficient inference engine on compressed deep neural network. ACM SIGARCH Comput. Archit. News 44, 243–254 (2016).

94. Farabet, C. et al. NeuFlow: A runtime reconfigurable dataflow processor for vision. In CVPR 2011 Workshops 109–116 (IEEE, 2011).

95. Yoo, H.-J. et al. A 1.93 TOPS/W scalable deep learning/inference processor with tetra-parallel MIMD architecture for big data applications. In IEEE Int. Solid-State Circuits Conference 80–81 (IEEE, 2015).

96. Du, Z. et al. ShiDianNao: Shifting vision processing closer to the sensor. In Proc. 42nd Annual International Symposium on Computer Architecture 92–104 (IEEE, 2015).

97. Moons, B. & Verhelst, M. A 0.3–2.6 TOPS/W precision-scalable processor for real-time large-scale ConvNets. In Proc. 2016 IEEE Symposium on VLSI Circuits (VLSI-Circuits) https://doi.org/10.1109/VLSIC.2016.7573525 (IEEE, 2016).

98. Whatmough, P. N. et al. 14.3 A 28nm SoC with a 1.2 GHz 568nJ/prediction sparse deep-neural-network engine with >0.1 timing error rate tolerance for IoT applications. In Proc. 2017 IEEE Int. Solid-State Circuits Conference (ISSCC) 242–243 (IEEE, 2017).

99. Zhou, X. et al. Cambricon-S: Addressing irregularity in sparse neural networks through a cooperative software/hardware approach. In Proc. 2018 51st Annual IEEE/ACM Int. Symposium on Microarchitecture (MICRO) 15–28 (IEEE, 2018).

100. Song, J. et al. 7.1 An 11.5 TOPS/W 1024-MAC butterfly structure dual-core sparsity-aware neural processing unit in 8nm flagship mobile SoC. In 2019 IEEE Int. Solid-State Circuits Conference (ISSCC) 130–132 (IEEE, 2019).

101. Desoli, G. et al. 14.1 A 2.9 TOPS/W deep convolutional neural network SoC in FD-SOI 28nm for intelligent embedded systems. In 2017 IEEE Int. Solid-State Circuits Conference (ISSCC) 238–239 (IEEE, 2017).

102. Lee, J. et al. UNPU: A 50.6 TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision. In 2018 IEEE Int. Solid-State Circuits Conference (ISSCC) 218–220 (IEEE, 2018).

103. Park, E., Kim, D. & Yoo, S. Energy-efficient neural network accelerator based on outlier-aware low-precision computation. In 2018 ACM/IEEE 45th Annual Int. Symposium on Computer Architecture (ISCA) 688–698 (IEEE, 2018).

104. Judd, P., Albericio, J., Hetherington, T., Aamodt, T. M. & Moshovos, A. Stripes: Bit-serial deep neural network computing. In 2016 49th Annual IEEE/ACM Int. Symposium on Microarchitecture (MICRO) https://doi.org/10.1109/MICRO.2016.7783722 (IEEE, 2016).

105. Sharma, H. et al. Bit Fusion: Bit-level dynamically composable architecture for accelerating deep neural networks. In 2018 ACM/IEEE 45th Annual Int. Symposium on Computer Architecture (ISCA) 764–775 (IEEE, 2018).

106. Aimar, A. et al. NullHop: A flexible convolutional neural network accelerator based on sparse representations of feature maps. IEEE Trans. Neural Netw. Learn. Syst. 30, 644–656 (2018).

107. Parashar, A. et al. SCNN: An accelerator for compressed-sparse convolutional neural networks. ACM SIGARCH Comput. Archit. News 45, 27–40 (2017).

108. Moloney, D. et al. Myriad 2: Eye of the computational vision storm. In 2014 IEEE Hot Chips 26 Symposium (HCS) https://doi.org/10.1109/HOTCHIPS.2014.7478823 (IEEE, 2014).

109. Intel Agilex FPGAs and SoCs (Intel, 2020); https://www.intel.com/content/www/us/en/products/programmable/fpga/agilex.html

110. List of Nvidia Graphics Processing Units (Wikipedia, 2020); https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units

111. Kumbhare, P. et al. A selectorless RRAM with record memory window and nonlinearity based on trap filled limit mechanism. In 2015 15th Non-Volatile Memory Technology Symposium (NVMTS) https://doi.org/10.1109/NVMTS.2015.7457491 (IEEE, 2015).

112. Larcher, L. et al. A compact model of program window in HfOx RRAM devices for conductive filament characteristics analysis. IEEE Trans. Electron Dev. 61, 2668–2673 (2014).

113. Lee, S. et al. Engineering oxygen vacancy of tunnel barrier and switching layer for both selectivity and reliability of selector-less ReRAM. IEEE Electron Device Lett. 35, 1022–1024 (2014).

114. Lee, S. et al. Selector-less ReRAM with an excellent non-linearity and reliability by the band-gap engineered multi-layer titanium oxide and triangular shaped AC pulse. In 2013 IEEE Int. Electron Devices Meeting 10.6.1–10.6.4 (IEEE, 2013).

115. Woo, J. et al. Selector-less RRAM with non-linearity of device for cross-point array applications. Microelectron. Eng. 109, 360–363 (2013).

116. Lee, S. et al. Effect of AC pulse overshoot on nonlinearity and reliability of selectorless resistive random access memory in AC pulse operation. Solid-State Electron. 104, 70–74 (2015).

117. Dongale, T. D. et al. Effect of write voltage and frequency on the reliability aspects of memristor-based RRAM. Int. Nano Lett. 7, 209–216 (2017).

118. Gismatulin, A., Volodin, V., Gritsenko, V. & Chin, A. All nonmetal resistive random access memory. Sci. Rep. 9, 6144 (2019).

119. Grossi, A. et al. Experimental investigation of 4-kb RRAM arrays programming conditions suitable for TCAM. IEEE Trans. VLSI Syst. 26, 2599–2607 (2018).

120. Mulaosmanovic, H. et al. Evidence of single domain switching in hafnium oxide based FeFETs: Enabler for multi-level FeFET memory cells. In Proc. 2015 IEEE Int. Electron Devices Meeting (IEDM) 26.8.1–26.8.3 (IEEE, 2015).

121. Ni, K., Li, X., Smith, J. A., Jerry, M. & Datta, S. Write disturb in ferroelectric FETs and its implication for 1T-FeFET AND memory arrays. IEEE Electron Device Lett. 39, 1656–1659 (2018).

122. Zhang, Z., Dalca, A. V. & Sabuncu, M. R. Confidence calibration for convolutional neural networks using structured dropout. Preprint at https://arxiv.org/abs/1906.09551 (2019).

123. Atanov, A., Ashukha, A., Molchanov, D., Neklyudov, K. & Vetrov, D. Uncertainty estimation via stochastic batch normalization. In Proc. Int. Symposium on Neural Networks 261–269 (Springer, 2019).

124. Liu, Z. et al. Deep gamblers: learning to abstain with portfolio theory. In Advances in Neural Information Processing Systems 32 (NIPS 2019) 10622–10632 (MIT Press, 2019).

125. Qiu, X., Meyerson, E. & Miikkulainen, R. Quantifying point-prediction uncertainty in neural networks via residual estimation with an I/O kernel. In Proc. 8th Int. Conference on Learning Representations (ICLR) (ICLR, 2020).


