# LiteLLM Response Model

!!! note "LiteLLM Response API Model class"

    - [Read on GitHub](https://github.com/swe-agent/mini-swe-agent/blob/main/src/minisweagent/models/litellm_response_model.py)

??? note "Full source code"

    ```python
    --8<-- "src/minisweagent/models/litellm_response_model.py"
    ```

!!! tip "When to use this model"

    * Use this model class when you want to use OpenAI's [Responses API](https://platform.openai.com/docs/api-reference/responses) with native tool calling.
    * This is particularly useful for models like GPT-5 that benefit from the extended thinking/reasoning capabilities provided by the Responses API.
    * This model maintains conversation state across turns using `previous_response_id`.

## Usage

To use the Responses API model, specify `model_class: "litellm_response"` in your agent config:

```yaml
model:
  model_class: "litellm_response"
  model_name: "openai/gpt-5.2"
  model_kwargs:
    drop_params: true
    reasoning:
      effort: "high"
```

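To see how the nested `model_kwargs` come out of such a config, note that the YAML loads into a plain nested dict (this snippet assumes PyYAML is available; how the agent itself consumes the config may differ):

```python
import yaml  # PyYAML; assumed available for this illustration

config = yaml.safe_load("""
model:
  model_class: "litellm_response"
  model_name: "openai/gpt-5.2"
  model_kwargs:
    drop_params: true
    reasoning:
      effort: "high"
""")

model_cfg = config["model"]
# `model_kwargs` is an ordinary nested dict that is ultimately forwarded
# as keyword arguments to the underlying API call.
print(model_cfg["model_kwargs"]["reasoning"]["effort"])  # high
```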
Or via the command line:

```bash
mini -m "openai/gpt-5.2" --model-class litellm_response
```

::: minisweagent.models.litellm_response_model

{% include-markdown "../../_footer.md" %}