Funding

Grants awarded in support of our research projects.

Natural Science Foundation of Sichuan Province, Youth Program (四川省自然科学基金-青年项目)

2026.01-2027.12

Research on Hybrid Spiking Models for Multimodal Visual Fusion (面向多模态视觉融合的混合脉冲模型研究), No. 2026NSFSC1468

Visual models based on traditional cameras suffer from degraded imaging quality under complex lighting conditions such as dynamic light interference and non-uniform illumination, which hurts their performance in deployed applications. This has become a common challenge hindering the development of autonomous driving, industrial inspection, and other fields. As novel brain-inspired visual sensors, event cameras feature high dynamic range, low latency, and sparse event streams, making them highly robust to lighting variations; combining them with traditional cameras is therefore an effective way to handle complex lighting scenarios. However, current approaches make limited use of the two modalities' complementary strengths, and in particular exploit the discrete temporal information of events inefficiently. This project therefore leverages the sparse encoding characteristics of spiking neural networks to investigate a hybrid spiking model for fusing event cameras with traditional cameras. Focusing on three aspects (low-level visual encoding, high-level feature fusion, and collaborative optimization of hybrid models), it explores multimodal visual encoding based on dynamic-static complementarity and modular learning mechanisms for hybrid spiking models. The aim is to resolve cross-modal semantic association between traditional images and event data, as well as the collaborative optimization of heterogeneous hybrid networks, so as to integrate multimodal visual information efficiently, build high-performance models that are robust to lighting variations, and provide core technological support for visual applications.
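To make the fusion idea concrete, below is a minimal sketch (not the project's actual model) of a dynamic-static hybrid: frames pass through a conventional convolutional branch, event streams through a spiking leaky integrate-and-fire (LIF) branch, and the two feature maps are fused. All module names, layer sizes, and neuron constants are illustrative assumptions, and the LIF is forward-only (no surrogate-gradient training).

```python
import torch
import torch.nn as nn

class LIF(nn.Module):
    """Leaky integrate-and-fire layer applied over a leading time dimension."""
    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th

    def forward(self, x):                        # x: (T, B, C, H, W)
        v = torch.zeros_like(x[0])               # membrane potential
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau        # leaky integration of input current
            s = (v >= self.v_th).float()         # emit a spike where threshold is crossed
            v = v * (1.0 - s)                    # hard reset of neurons that fired
            spikes.append(s)
        return torch.stack(spikes)               # binary spike trains, (T, B, C, H, W)

class HybridFusionNet(nn.Module):
    """Static (frame) conv branch + dynamic (event) spiking branch, fused by 1x1 conv."""
    def __init__(self, frame_ch: int = 3, event_ch: int = 2, feat: int = 16):
        super().__init__()
        self.frame_branch = nn.Sequential(
            nn.Conv2d(frame_ch, feat, 3, padding=1), nn.ReLU())
        self.event_conv = nn.Conv2d(event_ch, feat, 3, padding=1)
        self.event_lif = LIF()
        self.fuse = nn.Conv2d(2 * feat, feat, 1)

    def forward(self, frame, events):            # frame: (B, 3, H, W); events: (T, B, 2, H, W)
        static_feat = self.frame_branch(frame)
        T, B = events.shape[:2]
        cur = self.event_conv(events.reshape(T * B, *events.shape[2:]))
        cur = cur.reshape(T, B, *cur.shape[1:])
        dynamic_feat = self.event_lif(cur).mean(dim=0)   # firing rate as a dense feature map
        return self.fuse(torch.cat([static_feat, dynamic_feat], dim=1))

net = HybridFusionNet()
out = net(torch.rand(1, 3, 32, 32), torch.rand(8, 1, 2, 32, 32))
print(out.shape)  # torch.Size([1, 16, 32, 32])
```

Averaging the spike train over time turns the event branch's output into a firing-rate map that can be concatenated with the frame features, which is one simple way to bridge discrete events and dense images.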

Fundamental Research Funds for the Central Universities, Young Faculty Development Program (中央高校基本科研业务费-青年教师成长项目)

2026.01-2026.12

Research on Spiking Transfer Learning Methods for Event-Driven Cameras (面向事件相机的脉冲迁移学习方法研究), No. JBK202511077

In recent years, event cameras have gained prominence for advantages such as high dynamic range and high temporal resolution. However, their high hardware cost and the scarcity of public datasets make training new models expensive, while their low-power potential remains underexploited by traditional networks. To address this challenge, the project proposes a spiking transfer learning approach that transfers knowledge from well-labeled traditional vision tasks to event cameras in order to train low-power spiking models. First, we extract prior knowledge linking color and texture information to structural features in traditional vision domains. Then, we construct a spiking model based on structural knowledge transfer and design a transfer loss function from a spatio-temporal alignment perspective. This approach is expected to enable low-cost training of spiking models for event cameras, laying a theoretical foundation for brain-inspired vision systems and offering new low-power solutions for defense, military, and security surveillance applications.
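As one way to picture the spatio-temporal alignment idea, here is a hedged sketch of a distillation-style transfer loss: the spiking student's time-averaged features are matched to those of a frozen frame-based teacher (spatial term), and consecutive time steps are kept consistent (temporal term). The function name, tensor shapes, and weighting below are hypothetical stand-ins, not the project's actual formulation.

```python
import torch
import torch.nn.functional as F

def transfer_loss(student_feats, teacher_feats, temporal_weight: float = 0.1):
    """student_feats: (T, B, C, H, W) spiking features; teacher_feats: (B, C, H, W)."""
    # Spatial alignment: match the student's firing-rate map to the teacher's features.
    rate = student_feats.mean(dim=0)                       # (B, C, H, W)
    spatial = F.mse_loss(rate, teacher_feats)
    # Temporal alignment: penalize abrupt changes between consecutive time steps,
    # a simple stand-in for a spatio-temporal alignment term.
    temporal = F.mse_loss(student_feats[1:], student_feats[:-1])
    return spatial + temporal_weight * temporal

# Toy usage with random tensors standing in for real student/teacher features.
T, B, C, H, W = 8, 4, 16, 14, 14
loss = transfer_loss(torch.rand(T, B, C, H, W), torch.rand(B, C, H, W))
print(loss.item())
```

In a distillation setup of this kind, only the spiking student would be updated by this loss, with the frame-based teacher held fixed as the source of transferred knowledge.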