8 | A Neural Probe with Up to 966 Electrodes and Up to 384 Configurable Channels in 0.13um SOI CMOS | IEEE Transactions on Biomedical Circuits and Systems 11.3 (2017): 510-522 | The circuit-focused companion to the Neuropixels Nature paper, published concurrently; it covers many circuit-implementation details | A useful reference for unit-cell design in our future work on sensing-and-computing integrated electrodes; the circuit design choices are especially instructive | 孙彪 | [Download](https://ieeexplore.ieee.org/abstract/document/7900417/) |
9 | Deep compressive autoencoder for action potential compression in large-scale neural recording | Journal of Neural Engineering 15.6 (2018): 066019 | From Prof. Zhi Yang's group; uses an autoencoder for neural-signal compression with strong performance | Required reading for students working on neural-signal compression and quantization; also serves as a comparison baseline | 孙彪 | [Download](https://iopscience.iop.org/article/10.1088/1741-2552/aae18d/meta) |
10 | Sparse Bayesian Learning for End-to-End EEG Decoding | IEEE Transactions on Pattern Analysis and Machine Intelligence 45.12 (2023): 15632-15649 | David Wipf, the originator of sparse Bayesian learning (SBL), applies SBL to EEG signal decoding | Required reading for students in neural decoding; a circuit implementation of SBL is worth considering | 孙彪 | [Download](https://ieeexplore.ieee.org/abstract/document/10197212) |
1 | AdderNet: Do We Really Need Multiplications in Deep Learning? | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020 | The paper that first proposed AdderNet; a whole line of follow-up work builds on it | Our subsequent multiplication-free network ideas are all based on this paper; required reading for students working on AI circuits | 孙彪 | [Download](https://openaccess.thecvf.com/content_CVPR_2020/html/Chen_AdderNet_Do_We_Really_Need_Multiplications_in_Deep_Learning_CVPR_2020_paper.html) |
2 | Conjugate Adder Net (CAddNet) - A Space-Efficient Approximate CNN | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022 | Another multiplication-free network, CAddNet | Required reading for students working on AI circuits | 孙彪 | [Download](https://openaccess.thecvf.com/content/CVPR2022W/ECV/html/Shen_Conjugate_Adder_Net_CAddNet_-_A_Space-Efficient_Approximate_CNN_CVPRW_2022_paper.html) |
3 | Redistribution of Weights and Activations for AdderNet Quantization | Advances in Neural Information Processing Systems 35 (2022): 22739-22751 | Huawei's work on AdderNet quantization, reportedly achieving 4-bit quantization | Required reading for students working on AI circuits | 孙彪 | [Download](https://proceedings.neurips.cc/paper_files/paper/2022/hash/8f15e0b418ccdefec8313affc897dc8c-Abstract-Conference.html) |
4 | WSQ-AdderNet: Efficient Weight Standardization Based Quantized AdderNet FPGA Accelerator Design with High-Density INT8 DSP-LUT Co-Packing Optimization | Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided Design. 2022 | Our team's AdderNet quantization paper, achieving 8-bit quantization | Required reading for students working on AI circuits | 孙彪 | [Download](https://dl.acm.org/doi/abs/10.1145/3508352.3549439) |
5 | Adder Attention for Vision Transformer | Advances in Neural Information Processing Systems 34 (2021): 19899-19909 | Huawei's work applying adder operations to the Vision Transformer | Required reading for students working on AI circuits | 孙彪 | [Download](https://proceedings.neurips.cc/paper/2021/hash/a57e8915461b83adefb011530b711704-Abstract.html) |
6 | Searching for Energy-Efficient Hybrid Adder-Convolution Neural Networks | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022 | Uses NAS to search hybrid network architectures that combine adder and convolution layers | Instructive for our NAS work; can serve as a baseline | 孙彪 | [Download](https://openaccess.thecvf.com/content/CVPR2022W/NAS/html/Li_Searching_for_Energy-Efficient_Hybrid_Adder-Convolution_Neural_Networks_CVPRW_2022_paper.html) |