List LLM Backends
Headers
Header authentication of the form Bearer <token>.
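The Bearer header described above can be sketched as follows; the token value is a placeholder, not a real credential, and the header name Authorization is the conventional carrier for Bearer tokens:

```python
# Build the authentication header in the form "Bearer <token>".
# "sk-example-token" is a placeholder, not a real credential.
token = "sk-example-token"
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"])  # → Bearer sk-example-token
```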
Response
Successful Response
LLM model name.
The endpoint type for the model.
The context window size for the model.
The endpoint for the model.
The wrapper for the model.
Puts 'inner_thoughts' as a kwarg in the function call if this is set to True. This helps with function-calling performance and also the generation of inner thoughts.
The handle for this config, in the format provider/model-name.
The temperature to use when generating text with the model. A higher temperature will result in more random text.
The maximum number of tokens to generate. If not set, the model will use its default value.
Whether the model should use extended thinking if it is a 'reasoning'-style model.
The reasoning effort to use when generating text with reasoning models.
Configurable thinking budget for extended thinking, only used if enable_reasoner is True. Minimum value is 1024.
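A response item carrying the fields described above can be sketched as follows. Since this reference lists only field descriptions, the field names used here (model, model_endpoint_type, context_window, handle, put_inner_thoughts_in_kwargs, temperature, max_tokens, enable_reasoner, max_reasoning_tokens) and the sample values are assumptions for illustration:

```python
import json

# A hypothetical response item for one LLM backend config.
# Field names and values are illustrative assumptions, matched to the
# descriptions in this reference, not a confirmed schema.
sample = json.loads("""
{
  "model": "gpt-4o",
  "model_endpoint_type": "openai",
  "context_window": 128000,
  "handle": "openai/gpt-4o",
  "put_inner_thoughts_in_kwargs": true,
  "temperature": 0.7,
  "max_tokens": 4096,
  "enable_reasoner": false,
  "max_reasoning_tokens": 1024
}
""")

# The handle follows the provider/model-name format described above,
# so splitting on the first "/" recovers both parts.
provider, model_name = sample["handle"].split("/", 1)
print(provider, model_name)          # → openai gpt-4o
print(sample["context_window"])      # → 128000
```

Splitting with a maxsplit of 1 keeps any "/" inside the model name intact, which matters for handles like provider/org/model-variant.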