Brief Paper Summaries

· Paper Review
Advancing continual lifelong learning in neural information retrieval: definition, dataset, framework, and empirical evaluation
Publication Info: Information Sciences 2025
URL: https://www.sciencedirect.com/science/article/pii/S0020025524012829
Contribution: Clearly defines the Continual Learning paradigm in the context of IR tasks. To evaluate Continual IR, proposes the Topic-MS-MARCO dataset, which includes topic-wise IR tasks and predefined task similarities. CLNIR..
· Paper Review
Dense Retrieval Adaptation using Target Domain Description
Cited by 3 (as of 2024-10-22)
Publication Info: ACM ICTIR 2023
URL: https://arxiv.org/abs/2307.02740
In information retrieval (IR), domain adaptation is the process of adapting a retrieval model to a new domain whose data distribution is different from the source domain. Existing methods ..
· Paper Review
GPL: Generative Pseudo Labeling for Unsupervised Domain Adaptation of Dense Retrieval
Cited by 142 (as of 2024-10-22)
Publication Info: NAACL 2022
URL: https://aclanthology.org/2022.naacl-main.168
Kexin Wang, Nandan Thakur, Nils Reimers, Iryna Gurevych. Proceedings of the 2022 Conference of the North American Chapter of ..
· Paper Review
Continual Learning of Long Topic Sequences in Neural Information Retrieval
Cited by 6 (as of 2024-10-22)
Publication Info: ECIR 2022
URL: https://arxiv.org/abs/2201.03356
In information retrieval (IR) systems, trends and users' interests may change over time, altering either the distribution of requests or contents to be recommend ..
· Paper Review
Studying Catastrophic Forgetting in Neural Ranking Models
Cited by 23 (as of 2024-10-22)
Publication Info: ECIR 2021
URL: https://arxiv.org/abs/2101.06984
Several deep neural ranking models have been proposed in the recent IR literature. While their transferability to one target domain held by a dataset has been widely addressed using traditional domain adaptation strategies, the question of their cross ..
· Paper Review
(EMNLP 2023) SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
arXiv: https://arxiv.org/abs/2303.08896
code: https://github.com/potsawee/selfcheckgpt
1. Problem: Hallucination detection. Existing fact-verification methods may not work with black-box models such as ChatGPT, so a new approach is needed that can detect hallucinations without any external resources.
2. Related Works: intrinsic uncertainty metrics ..
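As a rough illustration of the zero-resource idea (a hedged sketch, not the paper's implementation): sample several responses from the same model, then flag a sentence as likely hallucinated when the other samples fail to support it. The token-overlap scorer and the example sentences below are simplifications I'm introducing for illustration; the paper itself uses BERTScore, NLI, and question-answering consistency scorers.

```python
import re

# Sketch of sampling-based consistency checking in the spirit of
# SelfCheckGPT: a sentence is suspect when it is poorly supported by
# other stochastic samples from the same model.

def tokens(text: str) -> set[str]:
    """Lowercased word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def support(sentence: str, sample: str) -> float:
    """Fraction of the sentence's tokens that also appear in one sample."""
    sent = tokens(sentence)
    return len(sent & tokens(sample)) / len(sent) if sent else 0.0

def inconsistency(sentence: str, samples: list[str]) -> float:
    """1 - mean support across samples; higher = more likely hallucinated."""
    return 1.0 - sum(support(sentence, s) for s in samples) / len(samples)

# Toy "sampled responses" standing in for multiple LLM generations.
samples = [
    "Marie Curie won the Nobel Prize in Physics in 1903.",
    "Marie Curie won the Nobel Prize in Physics.",
    "Curie received the 1903 Nobel Prize in Physics.",
]
supported = inconsistency("Marie Curie won the Nobel Prize in Physics.", samples)
fabricated = inconsistency("Marie Curie was born in Berlin in 1867.", samples)
assert supported < fabricated  # the unsupported claim scores as more inconsistent
```

The key property this keeps from the paper is that everything is computed from the model's own samples, with no external knowledge base or access to token probabilities.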
· Paper Review
(ICLR 2023, notable-top-25%) Semantic Uncertainty: Linguistic Invariances for Uncertainty Estimation in Natural Language Generation
arXiv: https://arxiv.org/abs/2302.09664
code: https://github.com/lorenzkuhn/semantic_uncertainty
1. Motivation: Estimating the uncertainty of answers generated by an LLM is an important problem for trustworthy LLMs. However, existing token-likelihood-based methods for estimating answer uncertainty do not account for semantic equivalence. semantic ..
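To illustrate the semantic-equivalence issue with a hedged sketch (not the authors' code): token-level metrics treat "Paris." and "paris" as distinct answers, whereas semantic entropy first clusters sampled answers by meaning and then takes the entropy over clusters. The string normalizer below is a toy stand-in for the paper's bidirectional-entailment clustering with an NLI model.

```python
import math
import re
from collections import Counter

# Sketch of semantic entropy: cluster sampled answers by meaning, then
# compute the entropy of the cluster distribution.

def normalize(answer: str) -> str:
    """Toy semantic-equivalence key: lowercase, drop punctuation and articles."""
    words = re.findall(r"[a-z0-9]+", answer.lower())
    return " ".join(w for w in words if w not in {"a", "an", "the"})

def semantic_entropy(answers: list[str]) -> float:
    """Entropy over semantic clusters; 0 means all answers agree in meaning."""
    clusters = Counter(normalize(a) for a in answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in clusters.values())

agreeing = semantic_entropy(["Paris.", "paris", "The Paris"])     # one cluster
disagreeing = semantic_entropy(["Paris.", "London.", "Berlin."])  # three clusters
assert agreeing == 0.0
assert disagreeing > agreeing
```

A pure token-likelihood measure would assign the first set nonzero dispersion because the surface strings differ, even though the three answers are semantically identical; clustering first removes exactly that artifact.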
· Paper Review
(venue year) Title
arXiv:
code:
1. Problem ..
2. Importance of the Problem ..
3. Related Works ..
4. Proposed Key Ideas ..
5. Summary of Experimental Results ..