Alex Vaith
Jul 24, 2023


To my knowledge, you need to use the same model for encoding your documents (converting text into embeddings) and for embedding the query. LLMs, BERT models, etc. each have a completely different embedding space, so querying with a different model will return essentially random results. LangChain also lets you compute the document embeddings directly with the same model you want to use for the queries.
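As a minimal sketch of the point (using a hypothetical toy character-frequency "embedder" rather than a real model): the index and the query have to go through the same `embed` function, otherwise the similarity scores are meaningless.

```python
import math

def embed(text):
    # Toy "embedding": normalized letter frequencies. A real system would
    # call one fixed model (e.g. the same BERT / sentence-transformer
    # checkpoint) here for BOTH documents and queries.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already unit-length, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

docs = ["the cat sat on the mat", "stock prices fell sharply"]
index = [(d, embed(d)) for d in docs]   # index built with embed()

query = "a cat on a mat"
q_vec = embed(query)                    # query must use the SAME embed()
best = max(index, key=lambda pair: cosine(q_vec, pair[1]))
print(best[0])                          # → "the cat sat on the mat"
```

If the query were embedded with a different function (a different model), its vector would live in an unrelated space and the nearest-neighbor lookup would be effectively random.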
