List LLM Backends

Headers

Authorization (string, required)

Header authentication of the form Bearer <token>
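Below is a minimal sketch of calling this endpoint with bearer-token authentication from Python using requests. The base URL and the "/v1/models/" path are assumptions, not confirmed by this reference; substitute the values for your deployment and your own API token.

```python
# Minimal sketch of calling this endpoint with bearer-token authentication.
# The base URL and the "/v1/models/" path are assumptions; substitute the
# values for your deployment and your own API token.
import requests

BASE_URL = "http://localhost:8283"  # assumed server address
API_TOKEN = "your-api-token"        # placeholder token

resp = requests.get(
    f"{BASE_URL}/v1/models/",
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
llm_backends = resp.json()  # list of LLM config objects, fields described below
```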

Response

Successful Response

model (string)

LLM model name.

model_endpoint_type (enum)

The endpoint type for the model.

context_window (integer)

The context window size for the model.

model_endpoint (string, optional)

The endpoint for the model.

model_wrapper (string, optional)

The wrapper for the model.

put_inner_thoughts_in_kwargs (boolean, optional)

If set to true, puts inner_thoughts as a kwarg in the function call. This helps with function-calling performance and with the generation of inner thoughts.

handle (string, optional)

The handle for this config, in the format provider/model-name.

temperature (double, optional, defaults to 0.7)

The temperature to use when generating text with the model. A higher temperature will result in more random text.

max_tokens (integer, optional)

The maximum number of tokens to generate. If not set, the model will use its default value.

enable_reasoner (boolean, optional, defaults to false)

Whether the model should use extended thinking if it is a reasoning-style model.

reasoning_effort (enum, optional)

The reasoning effort to use when generating text with reasoning models.

Allowed values:

max_reasoning_tokens (integer, optional, defaults to 0)

Configurable thinking budget for extended thinking, only used if enable_reasoner is True. Minimum value is 1024.
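For reference, here is a sketch of what a single item in the response might look like, built only from the fields documented above. The concrete values are illustrative placeholders, not output from a real server.

```python
# Illustrative shape of one item in the response, using the fields documented
# above. The values shown here are examples only, not real server output.
example_backend = {
    "model": "gpt-4o-mini",                  # LLM model name
    "model_endpoint_type": "openai",         # endpoint type for the model
    "context_window": 128000,                # context window size
    "model_endpoint": "https://api.openai.com/v1",
    "model_wrapper": None,
    "put_inner_thoughts_in_kwargs": True,
    "handle": "openai/gpt-4o-mini",          # format: provider/model-name
    "temperature": 0.7,
    "max_tokens": None,
    "enable_reasoner": False,
    "reasoning_effort": None,
    "max_reasoning_tokens": 0,
}
```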