Embedded Hardware Acceleration for AI on the Edge

Chairs

 Po-Tsang Huang, National Chiao Tung University, Taiwan

Shaswot Shresthamali, Keio University, Japan

AI as a field has experienced significant advancement in recent years with the advent of deep neural networks (DNNs) that can carry out cognitive tasks with excellent performance. However, the algorithmic performance of DNNs comes with massive computational and memory costs that pose severe challenges to the hardware platforms on which they are executed. Therefore, the exploration of new devices, architectures, and algorithms, especially as the complexity of DNNs increases, is necessary to improve processing efficiency. The topics of interest of the track include, but are not limited to:

  • Novel methods and architectures to accelerate deep neural networks (DNNs)
  • Deep learning with real-time and low-power efficiency
  • Applications of deep learning on intelligent mobile platforms and IoT devices
  • Hardware acceleration for machine learning
  • Algorithm-hardware co-design and optimization for ML
  • Benchmarking machine learning workloads
  • Latest trends in AI chip design and commercialization
  • Applying AI to CAD and EDA design
  • Inference in edge computing
  • Applications of AI accelerators in consumer electronics, including robotics, autonomous vehicles, prosthetics, etc.

Former Chairs

  • Po-Tsang Huang, National Chiao Tung University, Taiwan (16th IEEE MCSoC-2023)