This article shows you the main steps for performing multimodal text-to-image searches using user-provided embeddings.
Generate an embedding for each image with your chosen model, then add a `_vectors` field for each image in your database.

Next, configure an embedder in your index settings, setting its `source` to `userProvided`:
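A sketch of the embedder configuration payload, assuming an index named `images`, an embedder named `image`, and a 512-dimension model (all three values are illustrative):

```python
import json

# Payload for PATCH /indexes/images/settings/embedders.
# "image" stands in for EMBEDDER_NAME; 512 stands in for MODEL_DIMENSIONS
# and must match the output size of your chosen model.
embedder_settings = {
    "image": {
        "source": "userProvided",
        "dimensions": 512,
    }
}

print(json.dumps(embedder_settings, indent=2))
```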
Replace `EMBEDDER_NAME` with the name you wish to give your embedder, and `MODEL_DIMENSIONS` with the number of dimensions of your chosen model.
Then use the `/documents` endpoint to upload the vectorized images. In most cases, you should automate this step so Meilisearch stays up to date with your primary database.
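A sketch of a document payload for this upload; the `id`, `title`, and `url` fields are illustrative metadata, and the key under `_vectors` must match your embedder's name (here, the assumed name `image`):

```python
import json

# Documents for POST /indexes/images/documents.
# Each document carries its embedding under _vectors.<embedder name>;
# the vector is truncated to three values here for readability.
documents = [
    {
        "id": 1,
        "title": "Sunset over the bay",        # illustrative metadata
        "url": "https://example.com/1.jpg",    # illustrative metadata
        "_vectors": {
            "image": [0.015, -0.214, 0.097],   # embedding from your provider
        },
    }
]

print(json.dumps(documents))
```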
When searching with a `userProvided` embedder, you must also generate an embedding for the search query. This process should be similar to generating embeddings for your images:
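A minimal sketch, assuming a hypothetical `embed_text` helper: in practice it would call the same multimodal model (for example, a CLIP variant) you used for your images, and the 512-dimension output is a placeholder, not a real API:

```python
# Hypothetical stub standing in for your embedding provider.
# A real implementation returns the model's actual output vector.
def embed_text(query: str) -> list[float]:
    return [0.0] * 512  # placeholder vector matching MODEL_DIMENSIONS

vectorized_query = embed_text("a sailboat at sunset")
print(len(vectorized_query))
```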
Then use the `vector` search parameter to perform a semantic, AI-powered search:
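A sketch of the search request payload, assuming the `image` embedder configured earlier and an abbreviated query vector; recent Meilisearch versions require naming the embedder via `hybrid.embedder` even for pure vector searches:

```python
import json

# Payload for POST /indexes/images/search.
# "vector" holds VECTORIZED_QUERY; hybrid.embedder names the embedder
# that should interpret it.
search_payload = {
    "vector": [0.015, -0.214, 0.097],  # your full query embedding
    "hybrid": {"embedder": "image"},
}

print(json.dumps(search_payload))
```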
Replace `VECTORIZED_QUERY` with the embedding generated by your provider, and `EMBEDDER_NAME` with the name of your embedder.
If your images have any associated metadata, you may perform a hybrid search by including the original `q` search parameter:
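A sketch of a hybrid search payload under the same assumptions (`image` embedder, abbreviated vector); `semanticRatio` blends the two result sets, where 0 is keyword-only, 1 is semantic-only, and 0.5 is the default when omitted:

```python
import json

# Hybrid search payload for POST /indexes/images/search:
# the keyword query "q" is ranked together with the vector results.
hybrid_payload = {
    "q": "sailboat",
    "vector": [0.015, -0.214, 0.097],  # your full query embedding
    "hybrid": {
        "embedder": "image",
        "semanticRatio": 0.9,  # favor semantic matches over keyword matches
    },
}

print(json.dumps(hybrid_payload))
```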