Tutorials
Date: Monday, January 13, 2020 (9:00-18:00)
9:00-11:30
- Room 306A: Tutorial-1 [9:00-11:30] AI Chip Technologies and DFT Methodologies
- Room 306B: Tutorial-2 [9:00-11:30] A Journey from Devices to Systems with FeFETs and NCFETs
- Room 307A: Tutorial-3 [9:00-11:30] Comparison and Summary of Impulse-Sensitivity-Function (ISF) Extraction for Oscillator Phase Noise Optimization
- Room 307B: Tutorial-4 [9:00-11:30] General Trends of Security Engineering for In-vehicle Network Architecture in Modern Electric Vehicle

13:00-18:00
- Room 306A: Tutorial-5 [15:30-18:00] Machine Learning for Reliability of ICs and Systems
- Room 306B: Tutorial-6 [13:00-15:20] Compression and Neural Architecture Search for Efficient Deep Learning; Tutorial-7 [15:30-18:00] Designing Application-Specific AI Processors
- Room 307A: Tutorial-8 [14:00-18:00] Hardware-based Security Solutions for the Internet of Things
- Room 307B: Tutorial-9 [14:00-18:00] An Emerging Trend in Post Moore Era: Monolithic 3D IC Technology
Tutorial-1: Monday, January 13, 9:00-11:30 @ Room 306A
AI Chip Technologies and DFT Methodologies
- Organizer:
- Yu Huang (Mentor, A Siemens Business)
- Speakers:
- Rahul Singhal (Mentor, A Siemens Business)
- Yu Huang (Mentor, A Siemens Business)
Abstract:
Hardware acceleration for Artificial Intelligence (AI) is now a very competitive and rapidly evolving market. In this tutorial, we will start by covering the basics of deep learning. We will then give an overview of the new and exciting field of using AI chips to accelerate deep learning computations, covering the critical and special characteristics and the architecture of the most popular AI chips. Next, we will summarize the features of AI chips from a design-for-test (DFT) perspective and introduce the DFT technologies that can help test AI chips and speed up time-to-market. Finally, we will present a few case studies on how DFT is implemented on real AI chips.
Biography:


Tutorial-2: Monday, January 13, 9:00-11:30 @ Room 306B
A Journey from Devices to Systems with FeFETs and NCFETs
- Organizers:
- Prof. X.Sharon Hu (University of Notre Dame)
- Dr. Hussam Amrouch (Karlsruhe Institute of Technology)
- Speakers:
- Prof. X.Sharon Hu (University of Notre Dame) or Prof. Xunzhao Yin (Zhejiang University)
- Dr. Hussam Amrouch (Karlsruhe Institute of Technology)
Abstract:
FeFETs for In-Memory Computing: Data transfer between a processor and memory is a major bottleneck in improving application-level performance. This is particularly the case for data-intensive tasks such as some machine learning applications. In-memory computing, where certain data processing is performed in memory, can be an effective solution to address this bottleneck; thus, compact, low-power, fast, and non-volatile in-memory computing is highly desirable. This talk presents a cross-layer effort to design in-memory computing modules based on ferroelectric field effect transistors (FeFETs), an emerging, non-volatile device. An FeFET is made by integrating a ferroelectric material layer in the gate stack of a MOSFET, and can behave as both a transistor and a non-volatile storage element. This unique property enables area-efficient, low-power, finely integrated logic and memory. After introducing the basics of FeFETs, this talk will focus on two major topics in FeFET-based circuit and architecture design: (i) FeFET-based ternary content addressable memory (TCAM), and (ii) FeFET-based compute-in-memory (CiM). For each topic, issues related to circuits, architectures, and application-level benchmarking will be elaborated. We will conclude the talk with a specific application-level case study: memory-augmented neural networks for few-shot learning.
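To make the TCAM lookup concrete, here is a minimal functional sketch of ternary matching semantics, the operation that FeFET-based TCAM designs implement in hardware; the table entries and query below are made-up examples, not data from the talk.

```python
# Ternary content addressable memory (TCAM) semantics: entries are words
# over {'0', '1', 'X'}, where 'X' is a don't-care bit that matches either
# input value. A search compares the query against ALL entries in parallel
# in hardware; this software sketch does it sequentially.

def tcam_search(entries, query):
    """Return indices of stored ternary entries that match the query word."""
    matches = []
    for i, entry in enumerate(entries):
        if all(e == 'X' or e == q for e, q in zip(entry, query)):
            matches.append(i)
    return matches

table = ["10X1", "0XX0", "1001"]
print(tcam_search(table, "1001"))  # entries 0 and 2 match: [0, 2]
```

The don't-care bit is what distinguishes TCAM from ordinary CAM and what makes it useful for nearest-pattern lookups such as the few-shot-learning case study mentioned above.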
NCFETs to Address the Fundamental Limits in Technology Scaling:
The inability of MOSFET transistors to switch faster than 60mV/decade, due to the non-scalable Boltzmann factor, is one of the key fundamental physical limits on technology scaling. It is, in fact, the bottleneck in voltage scaling that led to the end of Dennard scaling more than a decade ago. As a result, on-chip power densities have continuously increased, and the operating frequency of processors has stopped improving over the last decade to prevent unsustainable on-chip temperatures. In this talk, we will demonstrate how improvements in the electrical characteristics of transistors, obtained using a ferroelectric material, can be investigated from the physics level, where they originate, all the way up to the system level, where they ultimately affect the efficiency of computing. We will focus on the Negative Capacitance FET (NCFET), which, unlike the FeFET devices mentioned above, operates in the hysteresis-free region. We will explain how the NCFET pushes the sub-threshold swing below its fundamental limit and how this can revive the prior trends in processor design with respect to voltage and frequency scaling. To assess the impact of NCFET technology on computing efficiency, we will focus on three key questions: to what extent will NCFET technology enable processors (i) to operate at higher frequencies without increasing voltage, (ii) to operate at higher frequencies without increasing power density, and (iii) to operate at lower voltages while still fulfilling performance requirements? The latter is crucial for IoT devices, where available power budgets are extremely restricted. We will also demonstrate how employing NCFET technology has a significant impact not only on circuits but also on architecture- and system-level management techniques. For example, as opposed to conventional CMOS technology, in which reducing the voltage minimizes the leakage power, NCFET has an inverse dependency.
This means that conventional power management techniques will no longer work, since they would lead to suboptimal results depending on system-level workload properties. This example and other implications at the architectural and system levels will also be discussed in this tutorial talk, providing the audience with the big picture behind NCFET technology.
Biography:



Tutorial-3: Monday, January 13, 9:00-11:30 @ Room 307A
Comparison and Summary of Impulse-Sensitivity-Function (ISF) Extraction for Oscillator Phase Noise Optimization
- Organizer:
- Dr. Yong Chen(Nick) (University of Macau)
- Speakers:
- Dr. Yong Chen(Nick) (University of Macau)
Abstract:
TBD
Biography:

Tutorial-4: Monday, January 13, 9:00-11:30 @ Room 307B
General Trends of Security Engineering for In-vehicle Network Architecture in Modern Electric Vehicle
- Organizer:
- Dr. Yi (Estelle) Wang (Continental Automotive Singapore)
- Speakers:
- Dr. Yi (Estelle) Wang (Continental Automotive Singapore)
- Prof. Naehyuck Chang (Korea Advanced Institute of Science and Technology)
Abstract:
This tutorial consists of two parts. The first part depicts the in-vehicle network architecture of the modern electric vehicle. The second part analyzes security-related issues in the automotive domain from the perspective of an automotive tier-1 supplier (top 5 worldwide and top 2 in Germany). The motivation of this tutorial is to give a brief overview of two current hot topics, electric vehicles and automotive security, which have attracted more and more attention, as reflected in the growing number of related submissions to ASP-DAC.
The first part presents the architecture of a modern electric vehicle and covers three aspects:
(1) Why do we drive electric vehicles? It is hard to argue that electric vehicles offer higher performance than internal combustion engine vehicles in a similar price range. There are financial benefits, including government subsidies and tax deductions, but these cannot be sustained indefinitely. Low maintenance cost is a real advantage, but vehicle depreciation is a big question mark. Environmental friendliness, therefore, should be one clear motivation to drive electric vehicles. However, electric vehicles are "zero emission" only in terms of exhaust: tire and brake emissions remain, and they occupy a large portion of total vehicle emissions. Even putting tire and brake emissions aside, electric vehicles still contribute a significant amount of pollution because of the source of their electricity. Electric vehicles produce less than half the equivalent exhaust emissions of gasoline vehicles, but not much less than hybrid vehicles. The high MPGe (miles per gallon gasoline equivalent) of electric vehicles can be largely misleading about energy efficiency once "well-to-wheel" efficiency, which takes the entire energy ecosystem into account, is considered.
(2) Challenges in making electric vehicles more efficient: It is challenging to make electric vehicles more fuel-efficient because the key powertrain components are already highly efficient, leaving very narrow headroom for further enhancement. Consequently, extending the range of electric vehicles ends up relying on lighter materials, which directly impacts manufacturing and repair costs and may make the actual cost of ownership very high. An extended driving range is one of the most demanding requirements of current and potential electric vehicle owners, but using a larger-capacity battery pack makes the vehicle curb weight heavier and thus the fuel efficiency worse.
(3) A system-level solution to enhance electric vehicle fuel efficiency with current powertrain technology: First, we develop an instantaneous power consumption model of electric vehicles, parameterized by curb weight, speed, acceleration, road slope, passenger and cargo weight, motor capacity, and so on, together with a battery discharge model. We validate the model's fidelity by fabricating a lightweight custom electric vehicle and performing extensive measurements; this fidelity enables more accurate range estimation. We pursue both design-time and runtime energy optimization using electric-vehicle-specific energy characteristics. We emphasize that electric vehicles show completely different fuel consumption behaviors from internal combustion engine vehicles due to the significant differences in the drivetrain. We introduce minimum-energy driving methods for electric vehicles, which differ substantially from the eco-driving methods of internal combustion engine vehicles. We also propose rapid energy-aware electric vehicle synthesis that allows users to quickly customize their own electric vehicle powertrain specification without deep knowledge of the underlying technology.
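As a rough illustration of the kind of instantaneous power model described in part (3), the sketch below uses textbook longitudinal vehicle dynamics (acceleration, road slope, rolling resistance, aerodynamic drag). All parameter values are hypothetical placeholders for illustration, not the tutorial's fitted model.

```python
import math

# Instantaneous battery power draw of an EV as a function of speed,
# acceleration, and road slope. Tractive force at the wheels:
#   F = m*a + m*g*sin(theta) + Crr*m*g*cos(theta) + 0.5*rho*CdA*v^2
# Wheel power is F*v; the battery supplies extra to cover drivetrain
# losses, and recovers only a fraction of braking power via regeneration.

def battery_power_w(v, a, slope_rad,
                    mass_kg=1600.0,   # curb + passenger/cargo weight (placeholder)
                    crr=0.010,        # rolling resistance coefficient
                    cd_a=0.6,         # drag coefficient * frontal area (m^2)
                    rho=1.2,          # air density (kg/m^3)
                    eta_drive=0.9,    # powertrain efficiency
                    eta_regen=0.6):   # regenerative braking efficiency
    g = 9.81
    force = (mass_kg * a                                # acceleration
             + mass_kg * g * math.sin(slope_rad)        # road slope
             + crr * mass_kg * g * math.cos(slope_rad)  # rolling resistance
             + 0.5 * rho * cd_a * v * v)                # aerodynamic drag
    p_wheel = force * v
    return p_wheel / eta_drive if p_wheel >= 0 else p_wheel * eta_regen

# Cruising at 20 m/s (72 km/h) on a flat road: roughly 6.7 kW with these numbers.
print(round(battery_power_w(20.0, 0.0, 0.0)))
```

Integrating this power over a planned speed profile is what makes range estimation and minimum-energy driving optimization possible.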
The second part presents a general introduction to security in automotive engineering. Continental, as a leading automotive technology company, has a holistic security architecture to cope with current and future challenges for electric vehicles, covering protection from a single ECU to the gateway, and from in-vehicle communication to car-to-car/car-to-infrastructure and backend communication.
From Continental's perspective, the renovation of vehicle architectures enlarges a vehicle's attack surface and brings new challenges in automotive security. Current challenges include feasible implementation, security processes, state-of-the-art technology, heterogeneous architectures, secure development, and legislation. Future automotive security engineering includes discussions of automotive Ethernet, anomaly detection systems, over-the-air updates, post-quantum cryptography, and crypto agility. We detail the key technologies in this domain and provide an overview of each technology as used in a modern electric vehicle.
Biography:

Dr. Chang is an ACM Fellow and an IEEE Fellow for contributions to low-power systems. He was the Chair of ACM SIGDA (Special Interest Group on Design Automation) and is now its Past Chair. Dr. Chang is the Editor-in-Chief of the ACM (Association for Computing Machinery) Transactions on Design Automation of Electronic Systems, and an Associate Editor of IEEE Transactions on Very Large Scale Integration Systems. He has also served as an Associate Editor of IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, IEEE Embedded Systems Letters, ACM Transactions on Embedded Computing Systems, and others.
Dr. Chang was the General Co-Chair of VLSI-SoC (Very Large Scale Integration) 2015, ICCD (International Conference on Computer Design) 2014 and 2015, ISLPED (International Symposium on Low-Power Electronics and Design) 2011, etc. He was the Technical Program Chair of DAC (Design Automation Conference) 2016, and the Technical Program (Co-)Chair of ASP-DAC (Asia and South Pacific Design Automation Conference) 2015, ICCD 2014, CODES+ISSS (Hardware-Software Codesign and System Synthesis) 2012, ISLPED 2009, etc.
Dr. Chang is the winner of the 2014 ISLPED Best Paper Award, 2011 SAE Vincent Bendix Automotive Electronics Engineering Award, 2011 Sinyang Academic Award, 2009 IEEE SSCS International SoC Design Conference Seoul Chapter Award, and several ISLPED Low-Power Design Contest Awards in 2002, 2003, 2004, 2007, 2012, 2014, and 2017.
Dr. Chang is a co-founder and the founding CEO of EMVcon Inc., a company providing battery solutions for electric vehicle conversion.

Dr. Wang is also active in professional society activities, with more than 40 papers in top international journals (IEEE and ACM transactions) and conferences, and 5 patents. She is an IEEE Senior Member and serves as a Committee Member of the IEEE Circuits and Systems Society Singapore Chapter. She has served as a Technical Program Committee member for ASP-DAC 2016, ASP-DAC 2018 (security track), and CPSS 2018; as a program committee member of various reputable conferences, such as FPT-2013, FPT-2014, WESS-2014, UIC-2010, UIC-2011, UIC-2012, UIC-2013, etc.; and as a reviewer for many conferences/journals, such as TVLSI, TCAS-I, TCAS-II, TRETS, JSA, MICPRO, CSSP, CHES, FPT, WESS, ISCAS, ASP-DAC, VLSI, Latincrypt, ICCIA, UIC, TSP, etc.
Tutorial-5: Monday, January 13, 15:30-18:00 @ Room 306A
Machine Learning for Reliability of ICs and Systems
- Organizer:
- Mehdi B. Tahoori (Karlsruhe Institute of Technology)
- Speakers:
- Krishnendu Chakrabarty (Duke University)
- Mehdi B. Tahoori (Karlsruhe Institute of Technology)
Abstract:
With the increasing complexity of digital systems and the use of advanced nanoscale technology nodes, various process and runtime variabilities threaten the correct operation of these systems. The interdependence of these reliability detractors, and their dependence on circuit structure as well as running workloads, makes it very hard to derive simple deterministic models to analyze and target them. As a result, machine learning techniques can be used to extract information that helps to effectively monitor and improve the reliability of digital systems. These learning schemes are typically run offline on large data sets to obtain regression models, which are then used during runtime operation to predict the health of the system and guide appropriate adaptation and countermeasure schemes. The purpose of this tutorial is to discuss and evaluate various learning schemes for analyzing the reliability of ICs and systems under runtime failure mechanisms that originate from process and runtime variabilities, such as thermal and voltage fluctuations, device and interconnect aging mechanisms, and radiation-induced soft errors. The tutorial will also describe how time-series data analytics based on key performance indicators can be used to detect anomalies and predict failure in complex electronic systems. A comprehensive set of experimental results will be presented for data collected during 30 days of field operation from over 20 core routers.
Biography:


Prof. Chakrabarty is a recipient of the National Science Foundation CAREER award, the Office of Naval Research Young Investigator award, the Humboldt Research Award from the Alexander von Humboldt Foundation, Germany, the IEEE Transactions on CAD Donald O. Pederson Best Paper Award (2015), the ACM Transactions on Design Automation of Electronic Systems Best Paper Award (2017), and over a dozen best paper awards at major conferences. He is also a recipient of the IEEE Computer Society Technical Achievement Award (2015), the IEEE Circuits and Systems Society Charles A. Desoer Technical Achievement Award (2017), the Semiconductor Research Corporation Technical Excellence Award (2018), and the Distinguished Alumnus Award from the Indian Institute of Technology, Kharagpur (2014). He is a Research Ambassador of the University of Bremen (Germany) and a Hans Fischer Senior Fellow (named after Nobel Laureate Prof. Hans Fischer) at the Institute for Advanced Study, Technical University of Munich, Germany. He is a 2018 recipient of the Japan Society for the Promotion of Science (JSPS) Fellowship in the "Short Term S: Nobel Prize Level" category (typically awarded to eminent researchers who have won the Nobel Prize or similar honors), and he was a 2009 Invitational Fellow of JSPS. He has held Visiting Professor positions at University of Tokyo and the Nara Institute of Science and Technology (NAIST) in Japan, and Visiting Chair Professor positions at Tsinghua University (Beijing, China) and National Cheng Kung University (Tainan, Taiwan). He is currently an Honorary Chair Professor at National Tsing Hua University in Hsinchu, Taiwan, and an Honorary Professor at Xidian University in Xi’an, China.
Prof. Chakrabarty’s current research projects include: testing and design-for-testability of integrated circuits and systems; digital microfluidics, biochips, and cyberphysical systems; data analytics for fault diagnosis, failure prediction, anomaly detection, and hardware security; neuromorphic computing systems. He has authored 20 books on these topics (with one translated into Chinese), published over 660 papers in journals and refereed conference proceedings, and given over 300 invited, keynote, and plenary talks. He has also presented 60 tutorials at major international conferences, including DAC, ICCAD, DATE, ITC, and ISCAS. Prof. Chakrabarty is a Fellow of ACM, a Fellow of IEEE, and a Golden Core Member of the IEEE Computer Society. He holds 11 US patents, with several patents pending. He is a recipient of the 2008 Duke University Graduate School Dean’s Award for excellence in mentoring, and the 2010 Capers and Marion McDonald Award for Excellence in Mentoring and Advising, Pratt School of Engineering, Duke University. He has served as a Distinguished Visitor of the IEEE Computer Society (2005-2007, 2010-2012), a Distinguished Lecturer of the IEEE Circuits and Systems Society (2006-2007, 2012-2013), and an ACM Distinguished Speaker (2008-2016).
Prof. Chakrabarty served as the Editor-in-Chief of IEEE Design & Test of Computers during 2010-2012 and ACM Journal on Emerging Technologies in Computing Systems during 2010-2015. Currently he serves as the Editor-in-Chief of IEEE Transactions on VLSI Systems. He is also an Associate Editor of IEEE Transactions on Biomedical Circuits and Systems, IEEE Transactions on Multiscale Computing Systems, and ACM Transactions on Design Automation of Electronic Systems, and a coordinating editor for Springer Journal of Electronic Testing (JETTA).
Tutorial-6: Monday, January 13, 13:00-15:20 @ Room 306B
Compression and Neural Architecture Search for Efficient Deep Learning
- Organizer:
- Song Han (MIT EECS)
- Speakers:
- Song Han (MIT EECS)
Abstract:
Efficient deep learning computing requires algorithm and hardware co-design to enable specialization: we usually need to change the algorithm to reduce memory footprint and improve energy efficiency. However, the extra degree of freedom creates a much larger design space. Human engineers can hardly exhaust the design space by heuristics, and there is a shortage of machine learning engineers. We propose techniques to architect efficient neural networks efficiently and automatically. We first introduce the Deep Compression (ICLR’16) techniques to reduce the size of neural networks, followed by the EIE accelerator (ISCA’16), which directly accelerates a sparse and compressed model. We then investigate automatically designing small and fast models (ProxylessNAS, ICLR’19), automatic channel pruning (AMC, ECCV’18), and automatic mixed-precision quantization (HAQ, CVPR’19). We demonstrate that such learning-based, automated design achieves superior performance and efficiency compared with rule-based human design. Finally, we accelerate computation-intensive AI applications, including TSM (ICCV’19) for efficient video recognition and PVCNN (NeurIPS’19) for efficient 3D point cloud recognition.
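As a toy illustration of the pruning step that underlies Deep Compression, the sketch below zeroes out the smallest-magnitude weights of a layer, producing the kind of sparse model a sparse accelerator such as EIE can exploit. The retraining of surviving weights, weight sharing, and Huffman coding stages of the full pipeline are omitted, and the random weight matrix is a stand-in for a trained layer.

```python
import numpy as np

def prune_by_magnitude(weights, sparsity=0.9):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights.

    Returns the pruned weights and the boolean mask of kept positions.
    """
    threshold = np.percentile(np.abs(weights), sparsity * 100)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.standard_normal((64, 64))          # stand-in for a trained layer
w_pruned, mask = prune_by_magnitude(w, sparsity=0.9)
print(f"kept fraction: {mask.mean():.2f}")  # ~0.10 of weights survive
```

In the full method, the kept fraction is tuned per layer (and in AMC, chosen automatically), and the network is retrained with the mask fixed to recover accuracy.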
Biography:

Tutorial-7: Monday, January 13, 15:30-18:00 @ Room 306B
Designing Application-Specific AI Processors
- Organizer:
- Zhiru Zhang (Cornell University)
- Speakers:
- Manish Pandey (Synopsys, Inc.)
- Claudionor Coelho (Google, Inc. )
- Zhiru Zhang (Cornell University)
Abstract:
As machine learning is used in increasingly diverse applications, ranging from autonomous drones and IoT edge devices to self-driving vehicles, specialized computing architectures and platforms are emerging as alternatives to CPUs and GPUs to meet the energy, cost, and performance (throughput/latency) requirements imposed by these applications. This tutorial starts with an overview of the compute and data complexity of deep neural networks (DNNs), the underlying operations, and how these can be realized as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). Exploiting the underlying parallelism in DNNs requires large computational arrays and high-bandwidth memory accesses for weights, feature maps, and inter-layer communication. These arrays, consisting of adders, multipliers, square-root and other arithmetic circuits, consume expensive chip real estate. Memory accesses, necessary to store network parameters and activation values, impose high bandwidth requirements, necessitating both on-chip memory and high-bandwidth off-chip memory interconnects. The tutorial discusses algorithm-hardware co-design, starting with benchmarking metrics and energy-driven DNN models, and covers a number of hardware optimizations, including reduction of parameters and floating-point operations, network pruning and compression, and data size reduction. The power and latency cost of memory accesses has prompted new near-memory and in-memory computing architectures, which reduce energy cost by embedding computations in memory structures.
Biography:



Tutorial-8: Monday, January 13, 14:00-18:00 @ Room 307A
Hardware-based Security Solutions for the Internet of Things
- Organizer:
- Dr. Basel Halak (University of Southampton)
- Speakers:
- Prof. Gang Qu
- Dr. Yier Jin
- Dr. Chongyan Gu
- Dr. Basel Halak
Abstract:
The Internet of Things (IoT) is expected to generate tremendous economic benefits, but this promise is undermined by major security threats. First, the vast majority of these devices are expected to communicate wirelessly and be connected to the Internet, which makes them especially susceptible to confidentiality threats from attackers snooping on message contents. Second, most IoT devices are expected to be deployed in remote locations with little or no protection, making them vulnerable to both invasive and side-channel attacks: malicious adversaries can potentially gain access to a device and apply well-known power or timing analyses to extract sensitive data stored on the IoT node, such as encryption keys, digital identifiers, and recorded measurements. Furthermore, with ubiquitous systems, it can no longer be assumed that the attacker is remote. Indeed, the attack could even come from within the system itself, from rogue embedded hardware (e.g., Trojans) or malicious software (e.g., malware). A large proportion of IoT devices operate in an energy-constrained environment with very limited computing resources, which makes typical defence mechanisms such as classic cryptographic algorithms prohibitively expensive. The challenges for building secure IoT systems can be stated as three questions:
1) How to develop cryptographic primitives which are secure and energy-efficient
2) How to implement complex security protocols with very limited resources
3) How to remotely verify the secure and reliable operation of an IoT node
This tutorial provides a detailed explanation of the state-of-the-art techniques used to tackle the above three challenges, including lightweight authentication protocols, attestation schemes, and physically unclonable functions (PUFs).
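To illustrate why PUFs suit lightweight authentication, here is a toy sketch of the challenge-response flow. A real PUF derives its response from device-specific silicon variation with no stored secret; purely for illustration, a per-device random value stands in for that physical function here. The enrollment/verification protocol is the part being shown, not a PUF construction.

```python
import hashlib
import secrets

class SimulatedPUF:
    """Software stand-in for a PUF: a hypothetical model, not a real PUF.

    The per-instance random bytes play the role of uncontrollable
    manufacturing variation, which makes each device's challenge-response
    behavior unique and hard to clone.
    """
    def __init__(self):
        self._silicon = secrets.token_bytes(16)  # stand-in for process variation

    def respond(self, challenge: bytes) -> bytes:
        return hashlib.sha256(self._silicon + challenge).digest()

# Enrollment (trusted phase): the server records challenge-response pairs (CRPs).
device = SimulatedPUF()
challenge = secrets.token_bytes(16)
stored_response = device.respond(challenge)

# Field authentication: the server replays a stored challenge. The genuine
# device reproduces the response; a counterfeit device cannot.
assert device.respond(challenge) == stored_response
assert SimulatedPUF().respond(challenge) != stored_response
print("device authenticated")
```

Because the "key" is never stored in non-volatile memory, this approach avoids both the cost of protected key storage and a class of invasive key-extraction attacks, which is what makes it attractive for resource-constrained IoT nodes.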
Biography:

Speakers’ related Experience:
This tutorial was first presented at IEEE DATE 2019; in addition, the participating speakers have delivered related material at other events. A list of related talks for each speaker is included below:

Invited Speaker, University of Kaiserslautern, Germany, On the Personalities of Electronics Systems and their Security Applications, 2018.
Embedded Tutorial: Physically Unclonable Functions: Design Principles, Applications and Outstanding Challenges, IEEE International Verification and Security Workshop (IVSW), Spain, 2018.
Half-Day Tutorial: Hardware-based Security Solutions for the Internet of Things. 59th IEEE International Midwest Symposium on Circuits and Systems, Abu Dhabi, 2016

Tutorial: Hardware-based Lightweight Authentication for IoT Applications, ACM SIGDA Design Automation Summer School at DAC, 2017
Tutorial: Hardware-based Lightweight Authentication for IoT Applications, VLSI Test Technology Workshop, 2017
Invited Talk on Lightweight Authentication for IoT, RISE Spring School at the University of Cambridge, 2018

Invited Talk: Hardware-Supported Cybersecurity for the Internet of Things, Northwestern University, Chicago, IL, February 2018
Keynote: Hardware-Supported Cybersecurity for the Internet of Things, 18th International Workshop on Microprocessor/SoC Test, Security & Verification, Austin, TX, December 2017

Invited Talk on IoT security, Global Grand Challenges Summit organized by the Chinese Academy of Engineering, UK Royal Academy of Engineering and US National Academy of Engineering, Beijing, September 2015.
Tutorial-9: Monday, January 13, 14:00-18:00 @ Room 307B
An Emerging Trend in Post Moore Era: Monolithic 3D IC Technology
- Organizer:
- Yuanqing Cheng (Beihang University)
- Speakers:
- Sébastien Thuries (CEA-Leti)
- Mohamed M. Sabry Aly (Nanyang Technological University)
- Aida Todri-Sanial (LIRMM/University of Montpellier)
- Ricardo Reis (UFRGS)
- Yuanqing Cheng (Beihang University)
Abstract:
The semiconductor industry is entering the post-Moore era, in which transistor scaling is no longer the sole driving force in delivering the required performance and functionality. To harvest the gains of higher integration density without shrinking the feature size, monolithic three-dimensional (3D) integration has been proposed as a highly promising alternative to drive system integration in the post-Moore regime. This tutorial introduces this emerging technology from the perspectives of fabrication process, design automation, reliable and low-power design, and architecture innovations.
Biography:




