Hardware Acceleration for AI on the Edge

Track Chair(s)

Po-Tsang Huang (NYCU, Taiwan)

Topics of Interest

AI as a field has advanced significantly in recent years with the advent of deep neural networks (DNNs), which can carry out cognitive tasks with excellent performance. However, this algorithmic performance comes with massive computational and memory costs that pose serious challenges to the hardware platforms on which DNNs execute. Therefore, especially as DNN complexity increases, the exploration of new devices, architectures, and algorithms is necessary to improve processing efficiency. This track will promote innovation, adoption, and early access to advanced technologies, including silicon and systems for accelerating AI edge workloads. The topics of interest of the track include, but are not limited to:

  • Deep learning with real-time and low-power efficiency
  • Applications of deep learning on smart mobile platforms and IoT devices
  • Hardware acceleration for machine learning
  • Algorithm-hardware co-design and optimization for ML
  • Benchmarking machine learning workloads
  • Latest trends in AI chip design and commercialization
  • Applying AI to CAD and EDA design
  • Inference in edge computing
  • Applications of AI accelerators in consumer electronics, including robotics, autonomous vehicles, prosthetics, etc.

Former Chair(s)

  • Lan-Da Van, NYCU, Taiwan, MCSoC 2021
  • Yoichi Tomioka, The University of Aizu, Japan, MCSoC 2018 (Note: The track was renamed)