Computer Science > Information Retrieval
arXiv:2204.00185 (cs) [Submitted on 1 Apr 2022 (v1), last revised 28 Apr 2022 (this version, v2)]
Title: Distill-VQ: Learning Retrieval Oriented Vector Quantization By Distilling Knowledge from Dense Embeddings
Authors: Shitao Xiao, Zheng Liu, Weihao Han, Jianjin Zhang, Defu Lian, Yeyun Gong, Qi Chen, Fan Yang, Hao Sun, Yingxia Shao, Denvy Deng, Qi Zhang, Xing Xie
Abstract: Vector quantization (VQ) based ANN indexes, such as Inverted File System (IVF) and Product Quantization (PQ), have been widely applied to embedding-based document retrieval thanks to their competitive time and memory efficiency. Originally, VQ is learned to minimize the reconstruction loss, i.e., the distortion between the original dense embeddings and the reconstructed embeddings after quantization. Unfortunately, such an objective is inconsistent with the goal of selecting ground-truth documents for the input query, which may cause severe loss of retrieval quality. Recent works identify this defect and propose to minimize the retrieval loss through contrastive learning. However, these methods rely heavily on queries with ground-truth documents, so their performance is limited by the scarcity of labeled data. In this paper, we propose Distill-VQ, which unifies the learning of IVF and PQ within a knowledge distillation framework. In Distill-VQ, the dense embeddings are leveraged as "teachers", which predict the query's relevance to the sampled documents. The VQ modules are treated as the "students", which are trained to reproduce the predicted relevance, such that the reconstructed embeddings may fully preserve the retrieval results of the dense embeddings. By doing so, Distill-VQ is able to derive substantial training signals from massive unlabeled data, which significantly contributes to the retrieval quality. We perform comprehensive explorations of how best to conduct the knowledge distillation, which may provide useful insights for the learning of VQ based ANN indexes. We also show experimentally that labeled data is no longer a necessity for high-quality vector quantization, which indicates Distill-VQ's strong applicability in practice.
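To make the teacher-student setup in the abstract concrete, below is a minimal PyTorch sketch of one plausible form of the distillation objective. It is not the authors' reference implementation: the soft PQ quantizer, the KL-divergence loss, the temperature, and all names (SoftProductQuantizer, distill_loss) are assumptions introduced for illustration. The only idea taken from the abstract is that the frozen dense embeddings score the sampled documents for a query (teacher), and the quantized reconstructions are trained to reproduce that relevance (student).

```python
# Illustrative sketch only (assumed details, not the paper's code):
# - the "teacher" scores are inner products of frozen dense embeddings;
# - the "student" is a PQ-style quantizer with soft codeword assignments,
#   so gradients can flow into the codebooks;
# - the distillation loss is a KL divergence between teacher and student
#   softmax relevance distributions over the sampled documents.
import torch
import torch.nn.functional as F


class SoftProductQuantizer(torch.nn.Module):
    """PQ-style student: splits each embedding into M sub-vectors and
    reconstructs each sub-vector as a softmax-weighted mix of K codewords."""

    def __init__(self, dim: int, num_subvectors: int = 8, num_codewords: int = 256):
        super().__init__()
        assert dim % num_subvectors == 0
        self.m, self.k = num_subvectors, num_codewords
        self.sub_dim = dim // num_subvectors
        self.codebooks = torch.nn.Parameter(
            torch.randn(self.m, self.k, self.sub_dim) * 0.1
        )

    def forward(self, doc_emb: torch.Tensor) -> torch.Tensor:
        # doc_emb: (batch, dim) -> (batch, M, sub_dim)
        sub = doc_emb.view(doc_emb.size(0), self.m, self.sub_dim)
        # Distance of every sub-vector to every codeword -> soft assignment.
        dist = torch.cdist(sub.transpose(0, 1), self.codebooks)  # (M, batch, K)
        assign = F.softmax(-dist, dim=-1)                        # (M, batch, K)
        recon = torch.einsum("mbk,mkd->bmd", assign, self.codebooks)
        return recon.reshape(doc_emb.size(0), -1)                # (batch, dim)


def distill_loss(query_emb, doc_emb, quantizer, temperature: float = 1.0):
    """Teacher scores use the original dense embeddings; student scores use
    the quantized reconstructions. The student is trained to match the
    teacher's relevance distribution over the sampled documents."""
    with torch.no_grad():                        # teacher is frozen
        teacher_scores = query_emb @ doc_emb.T   # (Q, D) inner-product relevance
    student_scores = query_emb @ quantizer(doc_emb).T
    teacher_p = F.softmax(teacher_scores / temperature, dim=-1)
    student_logp = F.log_softmax(student_scores / temperature, dim=-1)
    return F.kl_div(student_logp, teacher_p, reduction="batchmean")


if __name__ == "__main__":
    q = F.normalize(torch.randn(4, 128), dim=-1)   # unlabeled queries
    d = F.normalize(torch.randn(32, 128), dim=-1)  # sampled documents
    pq = SoftProductQuantizer(dim=128)
    loss = distill_loss(q, d, pq)
    loss.backward()                                # updates only the codebooks
    print(f"distillation loss: {loss.item():.4f}")
```

Note that nothing in this sketch requires relevance labels: the teacher's scores over sampled (possibly unlabeled) documents are the only supervision, which is the property the abstract highlights.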
Comments: Accepted by SIGIR 2022
Subjects: Information Retrieval (cs.IR); Artificial Intelligence (cs.AI)
Cite as: arXiv:2204.00185 [cs.IR] (or arXiv:2204.00185v2 [cs.IR] for this version)
DOI: https://doi.org/10.48550/arXiv.2204.00185 (arXiv-issued DOI via DataCite)
Submission history
From: Shitao Xiao
[v1] Fri, 1 Apr 2022 03:30:40 UTC (1,086 KB)
[v2] Thu, 28 Apr 2022 09:46:15 UTC (1,088 KB)