Yunhe Wang

I am a principal researcher at Huawei Noah's Ark Lab, where I work on deep learning, model compression, and computer vision.

I received my PhD from the School of EECS, Peking University, where I was co-advised by Prof. Chao Xu and Prof. Dacheng Tao. I did my bachelor's at the School of Science, Xidian University.

Email  /  Google Scholar  /  LinkedIn  /  Zhihu

Research

I'm interested in computer vision, deep learning, model compression, and machine learning. Much of my research is about developing resource efficient neural networks for computer vision tasks (e.g. classification, detection, segmentation, and super-resolution).

News

2/2020, seven papers have been accepted by CVPR 2020. Great!

1/2020, one paper has been accepted by IEEE TNNLS.

11/2019, three papers have been accepted by AAAI 2020.

Publications
Beyond Dropout: Feature Map Distortion to Regularize Deep Neural Networks
Yehui Tang, Yunhe Wang, Yixing Xu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu
AAAI, 2020
pdf / bibtex

A novel regularization method for improving the performance of deep neural networks.

Distilling Portable Generative Adversarial Networks for Image Translation
Hanting Chen, Yunhe Wang, Han Shu, Changyuan Wen, Chunjing Xu, Boxin Shi, Chao Xu, Chang Xu
AAAI, 2020
pdf / bibtex

Knowledge distillation for generative adversarial networks.

Efficient Residual Dense Block Search for Image Super-Resolution
Dehua Song, Chang Xu, Xu Jia, Yiyi Chen, Chunjing Xu, Yunhe Wang
AAAI, 2020
pdf / bibtex

NAS for super-resolution.

Positive-Unlabeled Compression on the Cloud
Yixing Xu, Yunhe Wang, Hanting Chen, Kai Han, Chunjing Xu, Dacheng Tao, Chang Xu
NeurIPS, 2019
pdf / code / bibtex / supplement

Using a small proportion of labeled data and massive unlabeled data on the cloud to conduct model compression.

Data-Free Learning of Student Networks
Hanting Chen, Yunhe Wang, Chang Xu, Zhaohui Yang, Chuanjian Liu, Boxin Shi, Chunjing Xu, Chao Xu, Chang Xu
ICCV, 2019
pdf / code / bibtex

The first work for distilling student networks without any training data by exploiting a generator.

Co-Evolutionary Compression for Unpaired Image Translation
Han Shu, Yunhe Wang, Xu Jia, Kai Han, Hanting Chen, Chunjing Xu, Chang Xu
ICCV, 2019
pdf / code / bibtex

Compressing CycleGAN with an evolutionary algorithm.

Searching for Accurate Binary Neural Architectures
Mingzhu Shen, Kai Han, Chunjing Xu, Yunhe Wang
ICCV Neural Architectures Workshop, 2019
pdf / bibtex

Searching for binary networks that match the performance of full-precision models.

LegoNet: Efficient Convolutional Neural Networks with Lego Filters
Zhaohui Yang, Yunhe Wang, Hanting Chen, Chuanjian Liu, Boxin Shi, Chao Xu, Chunjing Xu, Chang Xu
ICML, 2019
pdf / bibtex / code

A split-transform-merge strategy for efficient convolution.

Learning Instance-wise Sparsity for Accelerating Deep Models
Chuanjian Liu, Yunhe Wang, Kai Han, Chunjing Xu, Chang Xu
IJCAI, 2019
pdf / bibtex

An instance-wise feature pruning method during online inference.

Attribute Aware Pooling for Pedestrian Attribute Recognition
Kai Han, Yunhe Wang, Han Shu, Chuanjian Liu, Chunjing Xu, Chang Xu
IJCAI, 2019
pdf / bibtex

Attribute-aware pooling for multi-attribute classification in pedestrian attribute recognition.

Crafting Efficient Neural Graph of Large Entropy
Minjing Dong, Hanting Chen, Yunhe Wang, Chang Xu
IJCAI, 2019
pdf / bibtex

Pruning neural networks under the supervision of graph entropy.

Low Resolution Visual Recognition via Deep Feature Distillation
Mingjian Zhu, Kai Han, Chao Zhang, Jinlong Lin, Yunhe Wang
ICASSP, 2019
pdf / bibtex

Exploiting feature distillation to learn well-performing models for recognizing low-resolution objects.

Learning Versatile Filters for Efficient Convolutional Neural Networks
Yunhe Wang, Chang Xu, Chunjing Xu, Chao Xu, Dacheng Tao
NeurIPS, 2018
pdf / code / bibtex / supplement

Versatile filters for constructing efficient convolutional neural networks.

Towards Evolutionary Compression
Yunhe Wang, Chang Xu, Jiayan Qiu, Chao Xu, Dacheng Tao
SIGKDD, 2018
pdf / bibtex

Using an evolutionary algorithm to compress and accelerate CNNs.

Autoencoder Inspired Unsupervised Feature Selection
Kai Han, Yunhe Wang, Chao Zhang, Chao Li, Chao Xu
ICASSP, 2018
pdf / bibtex

AutoEncoder Feature Selector (AEFS) for unsupervised feature selection.

Adversarial Learning of Portable Student Networks
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI, 2018
pdf / bibtex

Knowledge distillation by introducing a discriminator.

Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
IEEE TPAMI, 2018
pdf / bibtex

Compressing and speeding up CNNs in the frequency domain.

Beyond Filters: Compact Feature Map for Portable Deep Model
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
ICML, 2017
pdf / code / bibtex / supplement

Eliminating redundancy in the feature maps of CNNs.

Beyond RPCA: Flattening Complex Noise in the Frequency Domain
Yunhe Wang, Chang Xu, Chao Xu, Dacheng Tao
AAAI, 2017
pdf / bibtex

Image denoising in the frequency domain.

Privileged Multi-Label Learning
Shan You, Chang Xu, Yunhe Wang, Chao Xu, Dacheng Tao
IJCAI, 2017
pdf / bibtex

Exploiting the relationships between labels in multi-label learning problems.

DCT Regularized Extreme Visual Recovery
Yunhe Wang, Chang Xu, Shan You, Chao Xu, Dacheng Tao
IEEE TIP, 2017
pdf / bibtex

Extreme visual recovery based on the discrete cosine transform.

CNNpack: Packing Convolutional Neural Networks in the Frequency Domain
Yunhe Wang, Chang Xu, Shan You, Dacheng Tao, Chao Xu
NIPS, 2016
pdf / bibtex / supplement

Transforming convolutional filters and feature maps into the frequency domain to compress and accelerate CNNs.

DCT inspired feature transform for image retrieval and reconstruction
Yunhe Wang, Miaojing Shi, Shan You, Chao Xu
IEEE TIP, 2016
pdf / bibtex

A new DCT-inspired feature transform for representing images in computer vision tasks.

Service
Senior PC Member, IJCAI 2020
Senior PC Member, IJCAI 2019

Awards
Google PhD Fellowship, 2017
Baidu Scholarship, 2017


Thanks to Jon Barron for sharing his code.