Lab Paper Accepted by NAACL, a Top NLP Conference

A paper from our lab has been accepted by NAACL, one of the five top conferences in the NLP field. The first author is Xiaozhi Zhu, a second-year master's student, and both the first-author and corresponding-author affiliations are South China Normal University.

Title: A Self-supervised Joint Training Framework for Document Reranking

Authors: Xiaozhi Zhu, Tianyong Hao, Sijie Cheng, Fu Lee Wang, Hai Liu

Abstract: Pretrained language models such as BERT have been successfully applied to a wide range of natural language processing tasks and have also achieved impressive performance on document reranking tasks. Recent work indicates that further pretraining the language models on task-specific datasets before fine-tuning helps improve reranking performance. However, pre-training tasks like masked language modeling and next sentence prediction are based on the context of documents rather than encouraging the model to understand the content of queries in the document reranking task. In this paper, we propose a new self-supervised joint training framework (SJTF) with a self-supervised method called Masked Query Prediction (MQP) to establish semantic relations between given queries and positive documents. The framework randomly masks a token of the query, encodes the masked query paired with positive documents, and uses a linear layer as a decoder to predict the masked token. In addition, MQP is used to jointly optimize the model with the supervised ranking objective during the fine-tuning stage, without an extra further pre-training stage. Extensive experiments on the MS MARCO passage ranking and TREC Robust datasets show that models trained with our framework obtain significant improvements over the original models.
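The abstract's core mechanism can be sketched in a few lines: mask one random query token, pair the masked query with a positive document as model input, and add the MQP loss to the supervised ranking loss during fine-tuning. The sketch below is a minimal, hypothetical illustration of that data flow in plain Python; the function names, the `alpha` weighting parameter, and the placeholder loss values are assumptions for illustration, not the paper's actual implementation.

```python
import random

MASK = "[MASK]"

def mask_query(query_tokens, rng):
    """MQP step 1: randomly mask one token of the query.
    Returns the masked query and the target token the decoder must predict."""
    idx = rng.randrange(len(query_tokens))
    target = query_tokens[idx]
    masked = list(query_tokens)
    masked[idx] = MASK
    return masked, target

def build_input(masked_query, doc_tokens):
    """MQP step 2: pair the masked query with a positive document,
    BERT-style, so the prediction is conditioned on the document."""
    return ["[CLS]", *masked_query, "[SEP]", *doc_tokens, "[SEP]"]

def joint_loss(ranking_loss, mqp_loss, alpha=0.5):
    """Joint fine-tuning objective: supervised ranking loss plus a weighted
    MQP loss, optimized together (no extra further pre-training stage).
    `alpha` is a hypothetical weighting hyperparameter."""
    return ranking_loss + alpha * mqp_loss

rng = random.Random(0)
masked, target = mask_query(["what", "is", "bert"], rng)
pair = build_input(masked, ["bert", "is", "a", "language", "model"])
loss = joint_loss(ranking_loss=0.8, mqp_loss=0.4)  # placeholder loss values
print(masked, target)
print(pair)
print(loss)
```

In a real setup the encoder would be a pretrained transformer and both losses would come from its outputs; the point here is only the shape of the pipeline: masked query + positive document in, one joint loss out.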

Acceptance type: Long paper

Note: This content is an individual academic update shared by a SCHOLAT user and does not represent the platform's position.
