Interface ChatContextOptions

Controls the context snippets sent to the LLM.

interface ChatContextOptions {
    includeBinaryContent?: boolean;
    multimodal?: boolean;
    snippetSize?: number;
    topK?: number;
}

Properties

includeBinaryContent?: boolean

If image-related context snippets are sent to the LLM, this field determines whether they include base64 image data. If false, only the image caption is sent. Only applies when multimodal is true.

multimodal?: boolean

Whether to send image-related context snippets to the LLM. If false, only text context snippets are sent.

snippetSize?: number

The maximum size of each context snippet, in tokens. Default is 2048. Minimum is 512. Maximum is 8192.

topK?: number

The maximum number of context snippets to use. Default is 16. Maximum is 64.
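As a rough illustration, the sketch below shows how these options might be assembled and passed along with a chat request. The chat function and its context parameter are hypothetical placeholders, not part of this documented interface; only the ChatContextOptions fields themselves come from the definitions above.

// Hypothetical usage sketch; chat() and its signature are assumptions.
const contextOptions: ChatContextOptions = {
    multimodal: true,            // allow image-related context snippets
    includeBinaryContent: false, // send only image captions, not base64 data
    snippetSize: 4096,           // per-snippet limit in tokens (512–8192)
    topK: 32,                    // include at most 32 snippets (max 64)
};

// const reply = await chat({ prompt: "Summarize the attached report.", context: contextOptions });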