Publications

(2024). A Case for Low Bitwidth Floating Point Arithmetic on FPGA for Transformer Based DNN Inference. In IPDPSW 2024.

PDF

(2023). SqueezeBlock: A Transparent Weight Compression Scheme for Deep Neural Networks. In ICFPT 2023.

DOI

(2023). DyBit: Dynamic Bit-Precision Numbers for Efficient Quantized Neural Network Inference. In IEEE TCAD.

Code DOI

(2023). MSD: Mixing Signed Digit Representations for Hardware-efficient DNN Acceleration on FPGA with Heterogeneous Resources. In FCCM 2023.

PDF Code DOI

(2023). Model-Platform Optimized Deep Neural Network Accelerator Generation through Mixed-integer Geometric Programming. In FCCM 2023.

PDF Code DOI

(2022). Energy-Efficient Intelligent Pulmonary Auscultation for Post COVID-19 Era Wearable Monitoring Enabled by Two-Stage Hybrid Neural Network. In ISCAS 2022.

(2021). An Energy-efficient Deep Belief Network Processor Based on Heterogeneous Multi-core Architecture with Transposable Memory and On-chip Learning. In IEEE JETCAS.

Code DOI

(2021). A Reconfigurable Area and Energy Efficient Hardware Accelerator of Five High-order Operators for Vision Sensor Based Robot Systems. In ICTA 2021.

DOI

(2021). In Situ Aging-Aware Error Monitoring Scheme for IMPLY-Based Memristive Computing-in-Memory Systems. In IEEE TCAS-I.

DOI

(2021). Efficient Design of Spiking Neural Network with STDP Learning Based on Fast CORDIC. In IEEE TCAS-I.

Slides DOI

(2020). An Energy-efficient Multi-core Restricted Boltzmann Machine Processor with On-chip Bio-plausible Learning and Reconfigurable Sparsity. In A-SSCC 2020.

Code Slides Video DOI