# Model troubleshooting

This section has examples of common error messages and how to fix them.

## Litellm

`litellm` is the default model class and is used to support most models.

### Invalid API key

```json
AuthenticationError: litellm.AuthenticationError: geminiException - {
  "error": {
    "code": 400,
    "message": "API key not valid. Please pass a valid API key.",
    "status": "INVALID_ARGUMENT",
    "details": [
      {
        "@type": "type.googleapis.com/google.rpc.ErrorInfo",
        "reason": "API_KEY_INVALID",
        "domain": "googleapis.com",
        "metadata": {
          "service": "generativelanguage.googleapis.com"
        }
      },
      {
        "@type": "type.googleapis.com/google.rpc.LocalizedMessage",
        "locale": "en-US",
        "message": "API key not valid. Please pass a valid API key."
      }
    ]
  }
}
```

Double-check your API key and make sure it is correct.
You can permanently set your API key with `mini-extra config set KEY VALUE`.
You can take a look at all your API keys with `mini-extra config edit`.
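
For example, to persist a Gemini key (`GEMINI_API_KEY` is the environment variable litellm expects for Gemini; substitute the variable name for your provider):

```shell
# Store the key in mini's config so it persists across sessions
mini-extra config set GEMINI_API_KEY "your-key-here"
```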
 
### "Weird" authentication error

If you fail to authenticate but don't see the previous error message,
it might be that you forgot to include the provider in the model name.

For example, this:

```
  File "/Users/.../.virtualenvs/openai/lib/python3.12/site-packages/google/auth/_default.py", line 685, in default
    raise exceptions.DefaultCredentialsError(_CLOUD_SDK_MISSING_CREDENTIALS)
google.auth.exceptions.DefaultCredentialsError: Your default credentials were not found. To set up Application Default Credentials, see
https://cloud.google.com/docs/authentication/external/set-up-adc for more information.
```

happens if you forgot to prefix your Gemini model with `gemini/`.
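
In a config file, that means writing the fully prefixed name (a minimal sketch, using the same `model` block layout as the other config examples in these docs):

```yaml
model:
  model_name: "gemini/gemini-2.0-flash"  # not just "gemini-2.0-flash"
```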
 
### Error during cost calculation

```
Exception: This model isn't mapped yet. model=together_ai/qwen/qwen3-coder-480b-a35b-instruct-fp8, custom_llm_provider=together_ai.
Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.
```

`litellm` doesn't know the cost of your model.
Take a look at the model registry section of the [local models](local_models.md) guide to add it.

Another common mistake is omitting the provider from the model name, or using the wrong one (e.g., `gemini-2.0-flash` instead of `gemini/gemini-2.0-flash`).
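
For orientation, entries in litellm's `model_prices_and_context_window.json` (and in a local model registry) look roughly like this; the cost numbers below are placeholders, not real prices:

```json
{
  "together_ai/qwen/qwen3-coder-480b-a35b-instruct-fp8": {
    "litellm_provider": "together_ai",
    "mode": "chat",
    "input_cost_per_token": 1e-06,
    "output_cost_per_token": 1e-06
  }
}
```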
 
## Temperature not supported

Some models (like `o1`, `o3`, `GPT-5`, etc.) do not support setting a temperature. The default config no longer specifies a temperature value, so this should work out of the box.
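
If you are carrying over an older or custom config that still pins a temperature, deleting the value is enough. A hypothetical sketch (the exact key layout depends on your config file):

```yaml
model:
  model_name: "o3"
  # temperature: 0.0  # remove this for models that reject the temperature parameter
```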
 
## Portkey

### Error during cost calculation

We use `litellm` to calculate costs for Portkey models because Portkey doesn't seem to provide per-request cost information without very inconvenient APIs.

This can lead to errors like this:
 
```
  File "/opt/miniconda3/envs/clash/lib/python3.10/site-packages/minisweagent/models/portkey_model.py", line 85, in query
    cost = litellm.cost_calculator.completion_cost(response)
  File "/opt/miniconda3/envs/clash/lib/python3.10/site-packages/litellm/cost_calculator.py", line 973, in completion_cost
    raise e
  File "/opt/miniconda3/envs/clash/lib/python3.10/site-packages/litellm/cost_calculator.py", line 966, in completion_cost
    raise e
  File "/opt/miniconda3/envs/clash/lib/python3.10/site-packages/litellm/cost_calculator.py", line 928, in completion_cost
    ) = cost_per_token(
  File "/opt/miniconda3/envs/clash/lib/python3.10/site-packages/litellm/cost_calculator.py", line 218, in cost_per_token
    _, custom_llm_provider, _, _ = litellm.get_llm_provider(model=model)
  File "/opt/miniconda3/envs/clash/lib/python3.10/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 395, in get_llm_provider
    raise e
  File "/opt/miniconda3/envs/clash/lib/python3.10/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 372, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=grok-code-fast-1
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
```
In this case, the issue is simply that the Portkey model name doesn't match the litellm model name (and, specifically here, doesn't include the provider).

To fix this, you can manually tell `litellm` which model name to use for cost lookup with the `litellm_model_name_override` key.
For example:

```yaml
model:
  model_name: "grok-code-fast-1"  # the portkey model name
  model_class: "portkey"  # make sure to use the portkey model class
  litellm_model_name_override: "xai/grok-code-fast-1"  # the litellm model name for cost information
  ...
```

--8<-- "docs/_footer.md"