LM Studio can run a local model server that Sero treats like an OpenAI-compatible provider. Use this when you want to test local models, work with private local endpoints, or avoid sending a specific task to a hosted model provider.
Local models still need enough memory and GPU/CPU capacity on your machine. They may be slower or less capable than hosted models, and they are not automatically reachable from every workspace container; the endpoint URL must be reachable from the process that needs it.
In LM Studio, start the local server and load a model. The default base URL is http://localhost:1234/v1, and the API key can be lm-studio or none unless your server requires a specific key.
The Sero preset expects:
| Field | Value |
|---|---|
| Provider name | lm-studio |
| Base URL | http://localhost:1234/v1 |
| API shape | openai-completions |
| API key | lm-studio |
| Compatibility | developer role off, reasoning effort off |
If your LM Studio server uses a different port, update the base URL before testing.
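Before saving, you can sanity-check these values with any OpenAI-compatible client. A minimal sketch using the official openai Python package (the model id is a placeholder; substitute whatever LM Studio has loaded):

```python
from openai import OpenAI

# Mirror the Sero preset above; adjust the port if your server differs.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

# One-shot completion against the loaded model.
reply = client.chat.completions.create(
    model="local-model",  # placeholder: use an id reported by the server
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(reply.choices[0].message.content)
```

If this prints a response, the base URL, port, and key are all correct.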
Sero discovers available models through the server's /models endpoint. It writes local provider configuration to <SERO_HOME>/agent/models.json and refreshes model availability after saving.
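If you want to confirm the /models endpoint without any SDK, a plain-HTTP check is enough (assuming the default port):

```python
import json
import urllib.request

# Query the OpenAI-compatible model listing directly.
with urllib.request.urlopen("http://localhost:1234/v1/models", timeout=5) as resp:
    payload = json.load(resp)

# An empty "data" list usually means no model is loaded in LM Studio.
for entry in payload.get("data", []):
    print(entry["id"])
```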
After saving the provider, open Admin → Model and choose LOW, MED, and HIGH defaults.
A practical local setup is:
| Tier | Suggested use |
|---|---|
| LOW | Small/fast local model for quick edits or summaries |
| MED | Stronger local model for everyday development work |
| HIGH | Best local model you can run comfortably, or a hosted fallback |
Thinking levels only appear when Sero believes the selected model supports them. Many OpenAI-compatible local servers do not support reasoning-effort controls, so the LM Studio preset disables that compatibility flag.
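You can check how your own server reacts to a reasoning-effort parameter by passing it explicitly. A sketch using the openai package's extra_body escape hatch (the model id is a placeholder; some servers reject the field, others silently ignore it):

```python
from openai import OpenAI, BadRequestError

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

try:
    client.chat.completions.create(
        model="local-model",  # placeholder: use an id reported by the server
        messages=[{"role": "user", "content": "ping"}],
        extra_body={"reasoning_effort": "low"},  # non-standard for most local servers
    )
    print("request accepted (the field may still have been ignored)")
except BadRequestError as exc:
    print(f"server rejected reasoning_effort: {exc}")
```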
localhost means “this process's machine or network namespace.” That is usually fine for Sero desktop talking to LM Studio on your Mac. If a tool inside a workspace container must call the same local server directly, localhost from inside the container may point at the container, not the host.
If a containerized command cannot reach LM Studio:
- Point the tool at a host-reachable address (for example, the host's LAN IP, or host.docker.internal on Docker Desktop) instead of localhost when your setup supports it; a quick probe is sketched below.

For most model selection and chat usage, configure the provider through Sero and let Sero manage model calls from the desktop process.
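To see which address actually works from inside a container, you can probe the candidates directly. A sketch (the candidate list is an assumption: host.docker.internal exists on Docker Desktop, and the LAN IP shown is hypothetical):

```python
import urllib.request

# Candidate base URLs; which one resolves depends on your container runtime.
candidates = [
    "http://localhost:1234/v1",             # inside a container: the container itself
    "http://host.docker.internal:1234/v1",  # Docker Desktop alias for the host
    "http://192.168.1.10:1234/v1",          # hypothetical host LAN IP
]

for base in candidates:
    try:
        # URLError is an OSError subclass, so one except clause covers both.
        with urllib.request.urlopen(f"{base}/models", timeout=3):
            print(f"reachable: {base}")
    except OSError as exc:
        print(f"unreachable: {base} ({exc})")
```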
| Problem | What to check |
|---|---|
| Test connection fails | LM Studio server is running, base URL includes /v1, port is correct, firewall is not blocking loopback. |
| Fetch returns no models | A model is loaded in LM Studio and the server's /models endpoint returns data. |
| HTTP 404 | Base URL may be missing /v1 or using the wrong port. |
| Auth error | Use lm-studio or none unless your server requires a real API key. |
| Model appears but tier save warns | The model may have been unloaded, renamed, or removed from the local server. Fetch again or choose another model. |
| Container tool cannot reach the server | See the host/container reachability caveat above. |