Neuromorphic Approaches for Foundation Models and On‑Device AI


Session Chair

TBC

Foundation models are rapidly transforming the landscape of artificial intelligence, yet their computational and energy demands remain a major barrier to scalable, real‑time, and ubiquitous deployment. Neuromorphic engineering offers a compelling path forward: by leveraging event‑driven computation, sparse representations, in‑memory processing, and biologically inspired learning, it can enable efficient, adaptive, and privacy‑preserving intelligence at the edge. This track invites original research exploring how neuromorphic principles, circuits, systems, and algorithms can reshape the design, training, and deployment of foundation models and on‑device AI. Submissions that bridge the gap between large‑scale AI and low‑power neuromorphic hardware are especially encouraged.

Topics of interest include, but are not limited to:

Neuromorphic Foundations for Large‑Scale AI

  • Neuromorphic acceleration of foundation models (LLMs, vision‑language models, multimodal models)
  • Sparse, event‑driven, or spiking formulations of transformer architectures
  • Neuromorphic approaches to scaling laws, model compression, and efficient inference
  • In‑memory and compute‑near‑memory techniques for large‑parameter models

On‑Device and Edge Intelligence

  • Ultra‑low‑power neuromorphic processors for on‑device AI
  • Real‑time learning and adaptation on edge devices
  • Privacy‑preserving and federated neuromorphic learning
  • Neuromorphic SoCs for mobile, wearable, and IoT applications

Algorithms, Learning Rules, and Co‑Design

  • Spiking or event‑driven variants of attention, diffusion, and generative models
  • Local learning rules and continual learning for foundation‑model‑scale systems
  • Hardware–algorithm co‑design for efficient training and inference
  • Compilers, toolchains, and software frameworks for neuromorphic deployment

Applications and Emerging Directions

  • Neuromorphic acceleration for robotics, autonomous systems, and embodied AI
  • Event‑based sensing integrated with foundation‑model pipelines
  • Hybrid neuromorphic–digital architectures for real‑time multimodal processing
  • Novel benchmarks, datasets, and evaluation methodologies for neuromorphic FM/edge AI

Former Chairs

  • Prof. Khanh Dang, University of Aizu, Japan (16th IEEE MCSoC 2023)
  • Prof. Anh Vu Doan, Infineon, Germany (16th IEEE MCSoC 2023)

Permanent link to this article: https://mcsoc-forum.org/site/index.php/neurocore-t10/