MoltHub Agent: Mini SWE Agent
* Doc: Fix regex specification in yaml
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  For more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Bumps [actions/checkout](https://github.com/actions/checkout) from 5 to 6.

- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
The test was flaky due to race conditions:

1. The thread started before the UI was fully mounted
2. A fixed 0.2s timeout was used instead of waiting for notify

Changes:
- Wait 0.2s for the UI to be ready before starting the thread
- Poll for notify.call_count > 0 with a 5s timeout
- Consistent with other tests such as test_agent_with_cost_limit

Fixes #606
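The poll-with-timeout pattern described above can be sketched as follows. This is an illustrative stand-in, not the project's actual test code; the helper name `wait_for_calls` and its parameters are assumptions:

```python
import time
from unittest.mock import MagicMock

def wait_for_calls(mock, timeout: float = 5.0, interval: float = 0.05) -> bool:
    """Poll until the mock has been called at least once, or time out.

    Replaces a fixed sleep with a bounded wait, which is what makes the
    test robust against scheduling jitter.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if mock.call_count > 0:
            return True
        time.sleep(interval)
    return False

notify = MagicMock()
notify("ready")
assert wait_for_calls(notify)
```

Polling on `call_count` rather than sleeping a fixed interval means the test passes as soon as the callback fires, and only fails after the full timeout when it genuinely never fires.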
* Feat(models): Add portkey responses API
* Delete accidentally tracked test file
* Fixes
* Need to install mini for testing
* Fixes to tests
The `get_model` method takes `input_model_name` as its argument, but the current docs use `model_name` instead.

Ref: https://mini-swe-agent.com/1.0/reference/models/utils/#minisweagent.models.get_model
* feat: Add OpenAI Responses API support
  - Add response_api_mode configuration option to LitellmModelConfig
  - Implement OpenAI Responses API integration with conversation continuity
  - Add helper method _coerce_responses_text for response normalization
  - Update swebench.yaml config with example configurations for both the Chat and Responses APIs
  - Support reasoning/verbosity parameters for GPT-5 models

  Addresses #459: Request for OpenAI Responses API support
* Ref: Move Responses API to its own class
* CI Doc: Add tests and documentation
* Fix litellm response API

Co-authored-by: ai-jz <joe8zhang@gmail.com>
- Add RequestyModel class with OpenAI-compatible API support
- Implement cost tracking from the API response's usage.cost field
- Include custom headers for the GitHub referer and mini-swe-agent title
- Support all Requesty models via the router.requesty.ai/v1 endpoint
- Add requesty to the model class mapping for --model-class requesty usage
- Follow the same patterns as the OpenRouter model implementation
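The cost-tracking idea above can be sketched as reading the cost directly from the response payload. This is a hedged illustration, not RequestyModel's actual code; the helper `extract_cost` and the default of 0.0 are assumptions, while the `usage.cost` field comes from the commit message:

```python
def extract_cost(response: dict) -> float:
    """Read the dollar cost from an OpenAI-compatible response dict.

    Requesty reports cost in the response's usage.cost field, so no
    client-side price table is needed (illustrative sketch).
    """
    return float(response.get("usage", {}).get("cost", 0.0))

print(extract_cost({"usage": {"cost": 0.0012}}))
```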
* Local vLLM example
* [pre-commit.ci] auto fixes from pre-commit.com hooks

  For more information, see https://pre-commit.ci

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Allow users to customize action parsing by setting the `action_regex` field in AgentConfig. This enables support for different output formats, such as XML tags instead of markdown code blocks.

Changes:
- Add `action_regex` field to AgentConfig with the default markdown pattern
- Update `parse_action()` to use `self.config.action_regex`
- Add documentation in yaml_configuration.md with an XML example

Fixes #144
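A minimal sketch of the configurable parsing described above. The two regex patterns and the `parse_action` body are illustrative assumptions, not the agent's actual defaults:

```python
import re

# Illustrative patterns; the real default lives in AgentConfig.action_regex.
MARKDOWN_REGEX = r"```bash\n(.*?)\n```"   # markdown code block
XML_REGEX = r"<action>(.*?)</action>"     # XML-tag alternative

def parse_action(output: str, action_regex: str) -> str:
    """Extract the agent's action using a configurable regex."""
    match = re.search(action_regex, output, re.DOTALL)
    if not match:
        raise ValueError("no action found in model output")
    return match.group(1)

assert parse_action("```bash\nls -la\n```", MARKDOWN_REGEX) == "ls -la"
assert parse_action("<action>ls -la</action>", XML_REGEX) == "ls -la"
```

Swapping the regex in config is all it takes to support a model that emits XML-tagged actions instead of fenced code blocks.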
updates:
- [github.com/astral-sh/ruff-pre-commit: v0.14.4 → v0.14.5](https://github.com/astral-sh/ruff-pre-commit/compare/v0.14.4...v0.14.5)

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Doc: Add note on how to disable cost tracking
* Enh: Don't disable costs, only ignore warnings
* Update src/minisweagent/models/litellm_model.py
* Update docs/models/local_models.md
* Update src/minisweagent/models/portkey_model.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fix: Allow free OpenRouter models with zero cost (#583)

  Removed the overly restrictive zero-cost rejection that prevented free OpenRouter models from working. Now only validates that cost is non-negative (>= 0.0), consistent with the LitellmModel and PortkeyModel implementations.

* Test: Update test to verify free models work with zero cost

  Changed test_openrouter_model_no_cost_information to test_openrouter_model_free_model_zero_cost. The test now verifies that free models with cost=0.0 work correctly instead of raising an error.

* Add: MSWEA_COST_TRACKING="disabled"
* Update tests

Co-authored-by: Chesars <cesarponce19544@gmail.com>
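The relaxed validation above can be sketched in a few lines. The function name `validate_cost` and error message are illustrative; only the non-negative check and the MSWEA_COST_TRACKING environment variable come from the commit:

```python
import os

def validate_cost(cost: float) -> float:
    """Accept zero-cost (free) models; reject only negative costs.

    The old check rejected cost == 0.0, which broke free OpenRouter
    models. Cost tracking can also be skipped entirely via the
    MSWEA_COST_TRACKING="disabled" environment variable.
    """
    if os.environ.get("MSWEA_COST_TRACKING") == "disabled":
        return 0.0
    if cost < 0.0:
        raise ValueError(f"invalid negative cost: {cost}")
    return cost

assert validate_cost(0.0) == 0.0  # free models now pass
```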
