One paper has been accepted to PRCV 2023.
Source: Weifeng Liu / China University of Petroleum (East China)
2023-08-25

Our paper entitled "A Stable Vision Transformer for Out-of-Distribution Generalization" has been accepted to PRCV 2023.


A Stable Vision Transformer for Out-of-Distribution Generalization

Haoran Yu, Baodi Liu, Yingjie Wang, Kai Zhang, Dapeng Tao, Weifeng Liu

The Vision Transformer (ViT) has achieved impressive results in many visual applications where training and testing instances are drawn from the same, independent and identically distributed, source. However, in real open environments, performance drops drastically when the distribution of testing instances differs from that of the training ones. To tackle this challenge, we propose a Stable Vision Transformer (SViT) for out-of-distribution (OOD) generalization. In particular, SViT weights samples to eliminate spurious correlations among token features in the Vision Transformer, thereby boosting performance on OOD generalization. We conduct extensive experiments on the popular PACS dataset, and the results demonstrate the superiority of SViT on OOD generalization tasks.
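The abstract describes the idea only at a high level. Below is a minimal, hypothetical PyTorch sketch of the general technique it names: learning per-sample weights that decorrelate features, then using those weights in the classification loss. This is not the paper's implementation; the decorrelation objective, the use of the [CLS] token as the feature, and all function names and hyperparameters here are assumptions made for illustration.

```python
# A minimal sketch of sample reweighting for feature decorrelation, in the spirit
# of "weights samples to eliminate spurious correlations of token features".
# NOT the paper's method: the objective, the [CLS]-token choice, and the
# hyperparameters below are assumptions for illustration only.
import torch
import torch.nn.functional as F


def decorrelation_loss(features: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    """Sum of squared off-diagonal entries of the weighted feature covariance.

    features: (N, D) token features (e.g. the ViT [CLS] token per sample).
    weights:  (N,) non-negative sample weights.
    """
    w = weights.unsqueeze(1)                              # (N, 1)
    mean = (w * features).sum(0) / weights.sum()          # weighted feature mean
    centered = features - mean
    cov = (w * centered).t() @ centered / weights.sum()   # weighted covariance (D, D)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()


def learn_sample_weights(features: torch.Tensor, steps: int = 100, lr: float = 0.1) -> torch.Tensor:
    """Optimize per-sample weights so the weighted features become decorrelated."""
    n = features.size(0)
    logits = torch.zeros(n, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        weights = n * torch.softmax(logits, dim=0)        # positive, sums to n
        loss = decorrelation_loss(features.detach(), weights)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (n * torch.softmax(logits, dim=0)).detach()


if __name__ == "__main__":
    # Toy usage: plug the learned weights into a weighted classification loss.
    feats = torch.randn(32, 768)            # stand-in for ViT [CLS] features
    labels = torch.randint(0, 7, (32,))     # e.g. the 7 PACS classes
    preds = torch.randn(32, 7, requires_grad=True)
    w = learn_sample_weights(feats)
    loss = (w * F.cross_entropy(preds, labels, reduction="none")).mean()
    loss.backward()
```

The design choice in this sketch is to parameterize the weights through a softmax so they stay positive and sum to the batch size; how SViT actually constrains or learns its weights is not specified in the abstract.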

