context (Optional): Controls the context snippets sent to the LLM.
filter (Optional): Optionally restrict which documents can be retrieved by filtering on document metadata fields.
include (Optional): If true, the assistant will be instructed to return highlights from the referenced documents that support its response.
json (Optional): If true, the assistant will be instructed to return a JSON response. Cannot be used with streaming.
The MessagesModel to send to the assistant: either a list of strings or a list of objects. If sent as a list of
objects, each object must have exactly two keys, role and content; the role key can only be user or assistant.
model (Optional): The large language model to use for answer generation.
temperature (Optional): Controls the randomness of the model's output. Lower values make responses more deterministic, while higher values increase creativity and variability. If the model does not support a temperature parameter, it will be ignored.
The list of queries/chats to send to the assistant.
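As a rough illustration, a request body using the parameters above might be assembled and validated like this. This is a sketch, not an official client: the payload keys mirror this reference, while the model name and all values are placeholder assumptions.

```python
def validate_messages(messages):
    """Check each message: plain strings pass through unchanged; objects
    must have exactly the keys 'role' and 'content', with role being
    either 'user' or 'assistant'."""
    for msg in messages:
        if isinstance(msg, str):
            continue  # a list of strings is also an accepted shape
        if set(msg) != {"role", "content"}:
            raise ValueError("each message object needs exactly role and content")
        if msg["role"] not in ("user", "assistant"):
            raise ValueError("role must be 'user' or 'assistant'")
    return messages

# Hypothetical request payload; only fields documented above are used.
payload = {
    "messages": validate_messages([
        {"role": "user", "content": "What is our refund policy?"},
    ]),
    "model": "example-model",  # placeholder model name
    "temperature": 0.2,        # lower = more deterministic
    "json": False,             # cannot be combined with streaming
}
```

The validator enforces the two-key constraint strictly (exactly role and content, nothing more), matching the description of the object form of MessagesModel.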