# GET /v1/models

OpenAI-compatible model listing.

Source: `src/api/routes/models.ts`

## Endpoint

`GET http://localhost:8080/v1/models`
## Auth

Public — listed in `PUBLIC_ROUTES`; no `Authorization` header is required, even on Pro.
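Because the route is public, a bare client call works; a minimal TypeScript sketch (Node 18+ global `fetch`, host and port taken from the endpoint above):

```ts
// No Authorization header: /v1/models is in PUBLIC_ROUTES.
const res = await fetch("http://localhost:8080/v1/models");
const { data } = (await res.json()) as { data: { id: string; owned_by: string }[] };
console.log(data.map((m) => m.id)); // e.g. ["llama3:8b", "mistral:7b"]
```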
## Behaviour

- Calls `GET <OLLAMA_HOST>/api/tags` (8 s timeout). Each model becomes
  `{ "id": "<model name>", "object": "model", "created": <unix>, "owned_by": "local" }` (see the sketch after this list).
- Free tier (`isProUser() === false`) — returns the local list as-is.
- Pro tier — additionally calls `GET /api/models` on LocoPilot Cloud and concatenates the result. Cloud entries arrive with `owned_by: "remote"` from the upstream manifest. If the cloud call fails or times out, the response degrades gracefully to the local list and a warning is logged.
- If Ollama itself is unreachable, the local list is empty (`[]`) and a warning is logged.
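The flow above maps roughly to the sketch below. This is not the literal contents of `src/api/routes/models.ts`; `CLOUD_BASE`, the shape of the cloud manifest, and the source of `created` are assumptions made for illustration.

```ts
type Model = { id: string; object: "model"; created: number; owned_by: "local" | "remote" };

const OLLAMA_HOST = process.env.OLLAMA_HOST ?? "http://localhost:11434";
const CLOUD_BASE = "https://cloud.example.com"; // hypothetical cloud base URL

async function listModels(isPro: boolean): Promise<{ object: "list"; data: Model[] }> {
  let data: Model[] = [];
  try {
    // Local list: GET <OLLAMA_HOST>/api/tags with the documented 8 s timeout.
    const res = await fetch(`${OLLAMA_HOST}/api/tags`, { signal: AbortSignal.timeout(8_000) });
    const body = (await res.json()) as { models: { name: string }[] };
    data = body.models.map((m): Model => ({
      id: m.name,
      object: "model",
      created: Math.floor(Date.now() / 1000), // assumption: the real source of `created` is unspecified
      owned_by: "local",
    }));
  } catch {
    console.warn("Ollama unreachable; local model list is empty"); // degrade to []
  }

  if (isPro) {
    try {
      // Pro tier: concatenate the cloud manifest (entries already carry owned_by: "remote").
      const res = await fetch(`${CLOUD_BASE}/api/models`, { signal: AbortSignal.timeout(8_000) });
      data = data.concat((await res.json()) as Model[]); // assumption: manifest is a bare array
    } catch {
      console.warn("LocoPilot Cloud unreachable; degrading to local list");
    }
  }

  return { object: "list", data };
}
```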
## Response

```json
{
  "object": "list",
  "data": [
    { "id": "llama3:8b", "object": "model", "owned_by": "local", "created": 1746820000 },
    { "id": "mistral:7b", "object": "model", "owned_by": "local", "created": 1746820000 },
    { "id": "llama-3.1-70b-instruct", "object": "model", "owned_by": "remote", "created": 1746820000 }
  ]
}
```
| `owned_by` | Meaning |
|---|---|
| `local` | Pulled into your Ollama runtime — served locally, free |
| `remote` | Available via LocoPilot Cloud — requires Pro tier to invoke |
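That split is enough for a client to predict routing before it calls `chat/completions`; a small sketch (the helper name is hypothetical, the response shape is as documented above):

```ts
type ModelEntry = { id: string; owned_by: "local" | "remote" };

// True when the model appears in the list as a locally served Ollama model.
async function routesLocally(modelId: string): Promise<boolean> {
  const res = await fetch("http://localhost:8080/v1/models");
  const { data } = (await res.json()) as { data: ModelEntry[] };
  return data.some((m) => m.id === modelId && m.owned_by === "local");
}
```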
:::note
The endpoint never errors out — Ollama failures and cloud failures both degrade silently to a partial list. Use `/v1/locopilot/health` when you need a true reachability signal.
:::
## Use cases

- Populate a model picker in your own UI
- Detect whether a given model would route locally or to the cloud before you call `chat/completions` (see the sketch above)
- CI checks (assert that a list of expected models is present; see the example after this list)
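For the CI case, a hedged example (the expected list is illustrative; adjust it to your deployment):

```ts
import assert from "node:assert";

const expected = ["llama3:8b", "mistral:7b"]; // illustrative, not a required set
const res = await fetch("http://localhost:8080/v1/models");
const ids = new Set(((await res.json()) as { data: { id: string }[] }).data.map((m) => m.id));
for (const id of expected) {
  assert(ids.has(id), `expected model ${id} missing from /v1/models`);
}
```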