SemanticSearch
- class SemanticSearch
semantic | Perform approximate k-NN search on the paragraph embeddings using the HNSW vector index, and optionally rescore the results.
Leverages the registered embeddings service to encode the query. Note: the index embeddings must be compatible with the query embeddings.
- pydantic model PluginConfig
- Fields:
apply_phrases_as_knn_filter (bool)
apply_phrases_to_encoder (bool)
apply_query_context_to_encoder (bool)
apply_rescore (bool)
embedding_type (squirro.common.clients.transformers.EmbeddingDataType | None)
filter_query (str)
k (int)
knn_boost (float)
knn_filter_stage (squirro.lib.search.relevancy.plugins.retrieve.embedding_retrievers.KnnFilterStage)
num_candidates (int)
perform_only_knn (bool)
similarity_threshold (float | None)
text (str)
truncate_dimensions (int | None)
vector_field (str | None)
worker (str)
- PluginConfig.plugin_name: ClassVar[str] = 'semantic'
Used to register and reference the plugin within a query.
- field PluginConfig.filter_query: str = ''
Squirro query used to filter the kNN scope. Queries the paragraph index, e.g. filtering on facets, phrases or even terms (entities are not supported).
- field PluginConfig.worker: str = 'query-fast'
The deployed sentence-embeddings worker (@transformer-service) to use.
- field PluginConfig.similarity_threshold: Optional[float] = None
The required minimum similarity for a vector to be considered a match (optional). The scale of the threshold depends on the similarity metric in use and refers to the true similarity before it is transformed into _score and the boost is applied; use the corresponding inverted score function.
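For orientation, with Elasticsearch's cosine metric the reported _score is (1 + cosine) / 2, and with l2_norm it is 1 / (1 + l2²), so an observed _score can be mapped back to the raw similarity this threshold expects. A minimal sketch, assuming those documented Elasticsearch transforms apply to the configured vector field:

    def cosine_from_score(score: float) -> float:
        # Invert Elasticsearch's cosine transform: _score = (1 + cosine) / 2
        return 2.0 * score - 1.0

    def l2_from_score(score: float) -> float:
        # Invert Elasticsearch's l2_norm transform: _score = 1 / (1 + l2**2)
        return ((1.0 / score) - 1.0) ** 0.5

    # e.g. an observed _score of 0.925 under cosine corresponds to a raw
    # similarity of 0.85, which is the value to pass as similarity_threshold.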
- field PluginConfig.embedding_type: Optional[EmbeddingDataType] = None
The data type used to encode embeddings, either float or byte. If set to byte, embeddings are quantized. If not set, the default type is read from the project configuration using the topic.search.default-embedding-settings config.
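Byte embeddings trade a little accuracy for a much smaller index. As a generic illustration of scalar quantization (not necessarily the scheme used by the embeddings service), each float dimension is mapped onto the int8 range:

    import numpy as np

    def quantize_to_int8(embedding: np.ndarray) -> np.ndarray:
        # Illustrative symmetric scalar quantization: scale so the largest
        # absolute component maps to 127, then round to 8-bit integers.
        scale = 127.0 / float(np.max(np.abs(embedding)))
        return np.round(embedding * scale).astype(np.int8)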
- field PluginConfig.truncate_dimensions: Optional[int] = 0
The dimension to truncate sentence embeddings to; 0 performs no truncation. Only applicable to models trained with MRL (Matryoshka Representation Learning).
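MRL-trained models concentrate the most important information in the leading dimensions, so truncation keeps the first truncate_dimensions components and re-normalizes. An illustrative sketch of that operation (not the service's internal code):

    import numpy as np

    def truncate_embedding(vec: np.ndarray, dims: int) -> np.ndarray:
        # Keep the leading dimensions of an MRL-trained embedding and
        # re-normalize so cosine/dot-product scores stay comparable.
        truncated = vec[:dims]
        return truncated / np.linalg.norm(truncated)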
- field PluginConfig.vector_field: Optional[str] = None
The name of the vector field in Elasticsearch used for querying, e.g. 384-byte-intfloat/multilingual-e5-small.
- field PluginConfig.apply_phrases_as_knn_filter: bool = True
Query processing might rewrite the user query to match detected entities exactly as a phrase. If this is enabled, semantic search is applied only to documents that match this phrase exactly (with the configured proximity slop). All searchable fields are used for the pre-selection to fulfill the condition.
- field PluginConfig.apply_phrases_to_encoder: bool = True
Query processing might rewrite the user query to match detected entities exactly as a phrase. If this is enabled, the additional phrase is appended to the user terms and used in the query embedding call.
- field PluginConfig.apply_query_context_to_encoder: bool = False
The text snippet used to create the embedding is built from the text argument, or from all terms automatically injected from the overall query. If this flag is true, all information is combined (profile.text + outer-scope query-context terms).
- field PluginConfig.perform_only_knn: bool = False
The query generator always adds some top-level query filtering options (project scoring roles, ACLs, filtering on deleted sources, etc.). These can be removed to execute only the standalone kNN query. This is especially useful when RRF hybrid search is performed and the underlying semantic search should be as fast and pure as possible.
- field PluginConfig.apply_rescore: bool = False
Whether to apply rescoring after the initial kNN search.
- field PluginConfig.knn_filter_stage: KnnFilterStage = KnnFilterStage.pre
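A minimal sketch of building the plugin configuration programmatically, assuming SemanticSearch is importable from the same module that provides KnnFilterStage (the query text, filter and threshold values below are purely illustrative):

    # Import path assumed from the KnnFilterStage reference above; adjust if
    # SemanticSearch lives in a different module in your installation.
    from squirro.lib.search.relevancy.plugins.retrieve.embedding_retrievers import (
        KnnFilterStage,
        SemanticSearch,
    )

    config = SemanticSearch.PluginConfig(
        text="renewable energy storage",      # query text to embed
        k=20,                                 # nearest neighbours to return
        num_candidates=200,                   # approximate-search candidates to consider
        worker="query-fast",                  # deployed sentence-embeddings worker
        filter_query='source:"News"',         # hypothetical facet filter narrowing the kNN scope
        similarity_threshold=0.7,             # minimum raw similarity (metric-dependent)
        apply_rescore=True,                   # rescore the initial kNN hits
        knn_filter_stage=KnnFilterStage.pre,  # default: filter before the kNN search
    )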