The options for sending a chat request to an assistant.

interface ChatOptions {
    contextOptions?: ChatContextOptions;
    filter?: object;
    includeHighlights?: boolean;
    jsonResponse?: boolean;
    messages: MessagesModel;
    model?: string;
    temperature?: number;
}
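As a sketch, a minimal options object matching this interface might look like the following (the message content, model name, and temperature value are hypothetical, chosen only for illustration):

```typescript
// Hypothetical chat options; field names mirror the ChatOptions
// interface above. Only `messages` is required.
const options = {
  messages: [
    { role: "user", content: "What is the refund policy?" },
  ],
  model: "gpt-4o",         // optional: LLM used for answer generation
  temperature: 0.2,        // optional: lower = more deterministic output
  includeHighlights: true, // optional: return supporting highlights
};
```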

Properties

contextOptions?: ChatContextOptions

Controls the context snippets sent to the LLM.

filter?: object

Optionally filter which documents can be retrieved, based on their metadata fields.
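As an illustration, a metadata filter is a plain object keyed by metadata field names. This sketch assumes MongoDB-style comparison operators such as $eq and $gte; the field names ("department", "year") are hypothetical:

```typescript
// Hypothetical filter restricting retrieval to documents tagged
// with a given department and a publication year of 2023 or later.
const filter = {
  department: { $eq: "engineering" },
  year: { $gte: 2023 },
};
```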

includeHighlights?: boolean

If true, the assistant will be instructed to return highlights from the referenced documents that support its response.

jsonResponse?: boolean

If true, the assistant will be instructed to return a JSON response. Cannot be used with streaming.

messages: MessagesModel

The MessagesModel to send to the Assistant. Can be a list of strings or a list of objects. If sent as a list of objects, each object must have exactly two keys: role and content. The role key can only be one of user or assistant.
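The two accepted shapes described above can be sketched side by side (the message text is hypothetical):

```typescript
// Shape 1: a plain list of strings (each treated as a user message).
const asStrings = ["What is our travel policy?"];

// Shape 2: a list of objects with exactly two keys, `role` and
// `content`, where `role` is either "user" or "assistant".
const asObjects = [
  { role: "user", content: "What is our travel policy?" },
  { role: "assistant", content: "Employees may book economy fares." },
  { role: "user", content: "What about international trips?" },
];
```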

model?: string

The large language model to use for answer generation.

temperature?: number

Controls the randomness of the model's output: lower values make responses more deterministic, while higher values increase creativity and variability. If the model does not support a temperature parameter, the parameter will be ignored.