Qianyi Wu ("吴潜溢" in Chinese)
Email: wqy9619 [at] gmail.com
Github     Resume     Scholar     Twitter    

About Me

Qianyi Wu is currently a PhD student in the Department of Data Science and AI at Monash University. He received his B.S. degree from the Special Class for the Gifted Youth at the University of Science and Technology of China (USTC) in 2016, and his M.Sc. degree from the Graphics and Geometric Computing Laboratory of the School of Mathematical Sciences at USTC in 2019, under the supervision of Prof. Juyong Zhang. He spent one year as a research intern at Nanyang Technological University, mentored by Prof. Jianfei Cai and Prof. Jianmin Zheng, and worked as a research scientist at SenseTime from 2019 to 2021, collaborating closely with Dr. Wayne Wu.

News

  • Expected graduation at the end of 2024; open to research scientist and postdoc positions (email, CV).

  • 2024: 2 CVPR papers accepted.
  • 2023: 1 ICCV, 1 SIGGRAPH, and 1 AAAI paper accepted.
  • 2022: 1 NeurIPS, 3 ECCV, 2 CVPR, 2 SIGGRAPH/SIGGRAPH Asia, and 1 MICCAI paper accepted.
  • 07/21: After two wonderful years at SenseTime, I am starting my PhD at Monash University! 🐨

  • 2020: 1 ECCV and 1 NeurIPS paper accepted.
  • 2019: 1 CVPR paper accepted.
  • 2018: 1 CVPR spotlight paper accepted.

Selected Publications [Google Scholar]

ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces.
Qianyi Wu, Kaisiyuan Wang, Kejie Li, Jianmin Zheng, Jianfei Cai
International Conference on Computer Vision (ICCV), 2023
[PDF] [Project Page] [Code]
Explicit Correspondence Matching for Generalizable Neural Radiance Fields.
Yuedong Chen, Haofei Xu, Qianyi Wu, Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai
arXiv preprint
[PDF] [Project Page] [Code]
Efficient Video Portrait Reenactment via Grid-based Codebook.
Kaisiyuan Wang, Hang Zhou, Qianyi Wu, Jiaxiang Tang, Zhiliang Xu, Borong Liang, Tianshu Hu, Errui Ding, Jingtuo Liu, Ziwei Liu, Jingdong Wang
ACM SIGGRAPH 2023 (Conference Proceedings)
[PDF] [Project Page] [Code]
Audio-Driven Co-Speech Gesture Image Generation.
Xian Liu, Qianyi Wu, Hang Zhou, Yuanqi Du, Wayne Wu, Dahua Lin, Ziwei Liu
Neural Information Processing Systems (NeurIPS), 2022 (Spotlight Presentation)
[PDF] [Project Page] [Code]
Object-Compositional Neural Implicit Surfaces.
Qianyi Wu, Xian Liu, Yuedong Chen, Kejie Li, Chuanxia Zheng, Jianfei Cai, Jianmin Zheng
European Conference on Computer Vision (ECCV), 2022
[PDF] [Project Page] [Code]
Sem2NeRF: Converting Single-View Semantic Masks to Neural Radiance Fields.
Yuedong Chen, Qianyi Wu, Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai
European Conference on Computer Vision (ECCV), 2022
[PDF] [Project Page] [Code]
Semantic-Aware Implicit Neural Audio-Driven Video Portrait Generation.
Xian Liu, Yinghao Xu, Qianyi Wu, Hang Zhou, Wayne Wu, Bolei Zhou
European Conference on Computer Vision (ECCV), 2022 (Oral Presentation)
[PDF] [Project Page] [Code]
EAMM: One-Shot Emotional Talking Face via Audio-Based Emotion-Aware Motion Model.
Xinya Ji, Hang Zhou, Kaisiyuan Wang, Qianyi Wu, Wayne Wu, Feng Xu, Xu Cao
ACM SIGGRAPH 2022 (Conference Proceedings)
[PDF] [Project Page] [Code]
Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation.
Xian Liu, Qianyi Wu, Hang Zhou, Yinghao Xu, Rui Qian, Xinyi Lin, Xiaowei Zhou, Wayne Wu, Bo Dai, Bolei Zhou
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2022
[PDF] [Project Page] [Code]
MEAD: A Large-scale Audio-visual Dataset for Emotional Talking Face Generation.
Kaisiyuan Wang*, Qianyi Wu*, Linsen Song*, Zhuoqian Yang, Wayne Wu, Chen Qian, Ran He, Yu Qiao, Chen Change Loy
European Conference on Computer Vision (ECCV), 2020
[PDF] [Project Page] [Code]
Disentangled Representation Learning for 3D Face Shape.
Zi-Hang Jiang, Qianyi Wu, Keyu Chen, Juyong Zhang
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019
[PDF] [Code]
Alive Caricature from 2D to 3D.
Qianyi Wu, Juyong Zhang, Yu-Kun Lai, Jianmin Zheng, Jianfei Cai.
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018 (Spotlight Presentation)
[PDF] [Data]

Industrial Experience

SenseAR DigitalHuman - Audio-Driven Virtual Human
SenseAR Digital Human is a human-like, intelligent, multi-modal interactive system. As one of the primary members of the team, I researched and developed, together with my colleagues, several key algorithms for audio-driven virtual humans.
[Product] [Press(China Daily)]

Awards

  • National Scholarship, USTC, 2018.