List LLM Models
List available LLM models, using the asynchronous implementation for improved performance.
Headers
Authorization
Header-based authentication of the form Bearer <token>.
Query parameters
provider_category
Allowed values:
provider_name
provider_type
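As a sketch of how a client might call this endpoint, the helper below builds a GET request with the bearer Authorization header and the optional query parameters listed above. The base URL and endpoint path here are hypothetical placeholders; substitute your actual API host and route.

```python
from urllib.parse import urlencode
from urllib.request import Request

# Hypothetical base URL; replace with your deployment's actual endpoint.
BASE_URL = "https://api.example.com/v1/models"

def build_list_models_request(token, provider_category=None,
                              provider_name=None, provider_type=None):
    """Build a GET request for the list-models endpoint with bearer auth."""
    # Include only the query parameters that were actually supplied.
    params = {k: v for k, v in {
        "provider_category": provider_category,
        "provider_name": provider_name,
        "provider_type": provider_type,
    }.items() if v is not None}
    url = BASE_URL + ("?" + urlencode(params) if params else "")
    return Request(url, headers={"Authorization": f"Bearer {token}"},
                   method="GET")

req = build_list_models_request("my-token", provider_name="openai")
print(req.full_url)
```

Passing the constructed request to urllib.request.urlopen (or issuing the same URL and header with any HTTP client) would perform the actual call.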
Response
Successful Response
model
LLM model name.
model_endpoint_type
The endpoint type for the model.
context_window
The context window size for the model.
model_endpoint
The endpoint for the model.
provider_name
The provider name for the model.
provider_category
The provider category for the model.
Allowed values:
model_wrapper
The wrapper for the model.
put_inner_thoughts_in_kwargs
Puts 'inner_thoughts' as a kwarg in the function call when set to True. This improves function-calling performance and the generation of inner thoughts.
handle
The handle for this config, in the format provider/model-name.
temperature
The temperature to use when generating text with the model. A higher temperature will result in more random text.
max_tokens
The maximum number of tokens to generate. If not set, the model will use its default value.
enable_reasoner
Whether the model should use extended thinking if it is a 'reasoning'-style model.
reasoning_effort
The reasoning effort to use when generating text with reasoning models.
Allowed values:
max_reasoning_tokens
Configurable thinking budget for extended thinking, only used if enable_reasoner is True. Minimum value is 1024.
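To make the response schema concrete, here is an illustrative single model entry shaped like the fields above. All values are hypothetical examples, not output from a real call; in particular, the provider_category and reasoning_effort values are placeholders, since the allowed values are not listed here.

```python
# Hypothetical example of one model entry from a successful response.
# Field names follow the schema documented above; values are illustrative.
example_model = {
    "model": "gpt-4o",                     # LLM model name
    "model_endpoint_type": "openai",       # endpoint type
    "context_window": 128000,              # context window size
    "model_endpoint": "https://api.openai.com/v1",
    "provider_name": "openai",
    "provider_category": "base",           # placeholder value
    "model_wrapper": None,                 # no wrapper configured
    "put_inner_thoughts_in_kwargs": True,
    "handle": "openai/gpt-4o",             # provider/model-name format
    "temperature": 0.7,
    "max_tokens": 4096,
    "enable_reasoner": False,              # extended thinking disabled
    "reasoning_effort": None,              # placeholder; unset here
    "max_reasoning_tokens": 0,             # only used if enable_reasoner is True
}

# The handle combines the provider name and model name.
assert example_model["handle"] == (
    f"{example_model['provider_name']}/{example_model['model']}"
)
```

Note that max_reasoning_tokens has a documented minimum of 1024 when enable_reasoner is True; since reasoning is disabled in this example, the field is left at 0.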