I am a Research Scientist and affiliated Postdoctoral Researcher at the Samsung AI Center Cambridge and the University of Cambridge (affiliated with Clare Hall College and the Machine Learning Systems Lab led by Prof. Nicholas Lane). My research interests focus on hardware systems and machine learning. During my Ph.D., I studied high-performance machine learning algorithms and hardware, supervised by Prof. Wayne Luk at Imperial College London. My current research includes:

Our research has received Best Paper Nominations at ASAP’19 and FPT’18.


News!

2024/03: One paper on edge CGRA hardware is accepted by ISCA’24!

2024/02: One paper on hardware acceleration of Trustworthy AI is accepted by DAC’24!

2024/01: One paper on language models and AI privacy is accepted by ICLR’24!

2023/11: Hongxiang has been invited to serve on the DAC 2024 TPC. Welcome to submit!

2023/09: Four ACM reproducibility badges have been awarded to our MICRO’23 paper (artifacts available, functional, reusable, and reproducible)! The code is available here.

2023/08: Our preprint for the DAC’23 paper is now available on arXiv at the link! Try our open-source code if you are interested in accelerating robust deep learning on FPGAs! You can also explore our other projects through the code release available at this link.

2023/07: One paper titled “Sparse-DySta: Sparsity-Aware Dynamic and Static Scheduling for Sparse Multi-DNN Workloads” is accepted by MICRO’23!

2023/06: Invited to serve on the FPT 2023 TPC. Welcome to submit!

2023/02: One paper is accepted by DAC’23! See you in San Francisco this July.

2022/12: One paper titled “Design of Fully Spectral CNNs for Efficient FPGA-Based Acceleration” is accepted by TNNLS!

2022/11: Hongxiang has been invited to serve on the DAC 2023 TPC. Welcome to submit!

2022/10: One paper titled “Design Space Exploration for Efficient Quantum Most-Significant Digit-First Arithmetic” is accepted by TC!

2022/08: Three ACM reproducibility badges have been awarded to our MICRO’22 paper (artifact availability, functionality, and reproducibility)! The code is available here.

2022/07: One paper titled “Adaptable Butterfly Accelerator for Attention-based NNs via Hardware and Algorithm Co-design” is accepted by MICRO’22!

2022/05: Hongxiang has been invited to serve on the FPT 2022 TPC. Welcome to submit!

2022/05: One paper titled “Remarn: A Reconfigurable Multi-threaded Multi-core Accelerator for Recurrent Neural Networks” is accepted by TRETS.

2022/02: Two papers are accepted by DAC’22:

  • “Optimizing Quantum Circuit Placement via Machine Learning”, Hongxiang Fan et al.
  • "Fast Uncertainty Estimation by Accelerating Bayesian Transformers", Hongxiang Fan et al.

2022/02: One paper titled “FPGA-based Acceleration for Bayesian Convolutional Neural Networks” is accepted by TCAD.

2022/01: One paper titled “Accelerating Bayesian Neural Networks via Algorithmic and Hardware Optimizations” is accepted by TPDS.

2022/01: One paper titled “Customizable FPGA-based Accelerator for Binarized Graph Neural Networks” is accepted by ISCAS. This is the first time I have supervised a first-year Ph.D. student through publishing a conference paper, serving as the corresponding author!

2021/12: One co-authored paper related to recurrent neural networks is accepted by TVLSI.

2021/11: Joining the Samsung AI Center Cambridge as a Research Intern, working with Prof. Nicholas Lane, Dr. Mohamed Abdelfattah, and Dr. Thomas C. P. Chau.

2021/09: One paper titled “High-Performance Acceleration of 2-D and 3-D CNNs on FPGAs Using Static Block Floating Point” is accepted by TNNLS.

2021/09: One paper titled “Algorithm and Hardware Co-design for Reconfigurable CNN Accelerator” is accepted by ASP-DAC’22.

…………….

2019/07: Our paper “F-E3D: FPGA-based Acceleration of An Efficient 3D Convolutional Neural Network for Human Action Recognition” received a Best Paper Nomination at ASAP’19.

…………….

2018/12: Our paper “A Real-Time Object Detection Accelerator with Compressed SSDLite on FPGA” received a Best Paper Nomination at FPT’18.

…………….