Talk Announcement: Prof. Yisen Wang (Peking University), December 21
Posted: 2020-12-16

Title: Towards Trustworthy Machine Learning
Time: December 21, 14:00–15:00
Venue: Room A404, North Information Building (信息北楼A404)

Abstract: Machine learning and deep learning have achieved tremendous success across many application areas. Unfortunately, they rely on large amounts of high-quality labelled data and are seriously vulnerable to adversarial examples. These issues greatly hinder the deployment of machine learning in practice, since most real-world data are imperfect or corrupted. In this talk, I will introduce our recent work on trustworthy machine learning from the theoretical view of robust optimization, covering reliability under noisy labels and robustness against adversarial examples.
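The abstract mentions adversarial examples: inputs modified by tiny, deliberately chosen perturbations that flip a model's prediction. As a concrete illustration (our own, not material from the talk), below is a minimal sketch of the classic Fast Gradient Sign Method applied to a toy logistic-regression model; the weights, input point, and `eps` are made up for the demo.

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step Fast Gradient Sign Method on a logistic-regression model:
    perturb x in the direction that increases the loss, with each
    coordinate moved by at most eps (an L-infinity budget)."""
    # Model: p = sigmoid(w.x + b); cross-entropy loss for label y in {0, 1}
    z = np.dot(w, x) + b
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the cross-entropy loss w.r.t. the input x
    grad_x = (p - y) * w
    # Single signed step bounded by eps per coordinate
    return x + eps * np.sign(grad_x)

# Toy demo: a clean point the model classifies correctly as class 1
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([0.5, -0.5]); y = 1          # z = 1.5 > 0, predicted class 1
x_adv = fgsm_attack(x, y, w, b, eps=1.0)  # adversarial copy of x
z_adv = np.dot(w, x_adv) + b              # z_adv < 0: prediction flipped
```

The point of the sketch is that a perturbation bounded coordinate-wise by `eps` is enough to flip the sign of the decision score; robust optimization, as discussed in the talk, trains against exactly this kind of worst-case perturbation.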

Speaker bio: Yisen Wang is an assistant professor and doctoral supervisor at Peking University. He received his Ph.D. from the Department of Computer Science at Tsinghua University in 2018. His research focuses on machine learning and deep learning, and he has published more than 30 papers at top AI/ML conferences and journals, including ICML, NeurIPS, ICLR, CVPR, ICCV, ECCV, AAAI, and IJCAI. His honors include a Baidu Scholarship (10 recipients worldwide) and a nomination for the ACM China Doctoral Dissertation Award (5 nominees nationwide).

Homepage: http://www.cis.pku.edu.cn/info/1084/1637.htm (Chinese) | https://sites.google.com/site/csyisenwang/ (English)


Below are Prof. Wang's most recent papers (2020); everyone is welcome to join the discussion:

  • Adversarial Weight Perturbation Helps Robust Generalization. Dongxian Wu, Shu-Tao Xia, Yisen Wang#. NeurIPS 2020.

  • Normalized Loss Functions for Deep Learning with Noisy Labels. Xingjun Ma*, Hanxun Huang*, Yisen Wang#, Simone Romano, Sarah Erfani, James Bailey. ICML 2020.

  • Improving Adversarial Robustness Requires Revisiting Misclassified Examples. Yisen Wang*, Difan Zou*, Jinfeng Yi, James Bailey, Xingjun Ma, Quanquan Gu. ICLR 2020.

  • Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. Dongxian Wu, Yisen Wang#, Shu-Tao Xia, James Bailey, Xingjun Ma. ICLR 2020.

  • Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles. Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, Kai Qin, Yun Yang. CVPR 2020.

  • Improving Query Efficiency of Black-box Adversarial Attack. Yang Bai*, Yuyuan Zeng*, Yong Jiang#, Yisen Wang#, Shu-Tao Xia, Weiwei Guo. ECCV 2020.
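The ICML 2020 paper above studies making losses robust to noisy labels by normalizing them, so that the loss is bounded and no single mislabelled example can dominate training. As a rough illustration of that normalization idea applied to cross-entropy (our own sketch with made-up logits, not the authors' code):

```python
import numpy as np

def normalized_cross_entropy(logits, y):
    """Normalized cross-entropy sketch: divide the usual CE loss for the
    given label by the sum of CE losses over every candidate class,
    bounding the result in (0, 1)."""
    # Numerically stable log-softmax
    shifted = logits - np.max(logits)
    log_p = shifted - np.log(np.sum(np.exp(shifted)))
    ce_true = -log_p[y]      # CE for the given (possibly noisy) label
    ce_all = -np.sum(log_p)  # sum of CE over all candidate labels
    return ce_true / ce_all

# Made-up 3-class logits where the model strongly prefers class 0
logits = np.array([2.0, 0.5, -1.0])
loss_clean = normalized_cross_entropy(logits, 0)  # label agrees with model
loss_noisy = normalized_cross_entropy(logits, 2)  # label contradicts model
```

Unlike plain cross-entropy, which is unbounded for a confident wrong prediction, the normalized version stays below 1 even when the label contradicts the model, which is the boundedness property the paper exploits for noisy-label robustness.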

