GET /v1/models

OpenAI-compatible model listing.

Source: src/api/routes/models.ts

Endpoint

GET http://localhost:8080/v1/models

Auth

Public — listed in PUBLIC_ROUTES, no Authorization header required, even on Pro.
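
For example, a plain request with no Authorization header works on every tier. A minimal sketch using the built-in fetch of a modern Node/TypeScript runtime:

// Public route — no Authorization header, works on Free and Pro alike.
const res = await fetch("http://localhost:8080/v1/models");
console.log(await res.json()); // => { object: "list", data: [...] }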

Behaviour

  1. Calls GET <OLLAMA_HOST>/api/tags (8 s timeout). Each model becomes:
    { "id": "<model name>", "object": "model", "created": <unix>, "owned_by": "local" }
  2. Free tier (isProUser() === false) — returns the local list as-is.
  3. Pro tier — additionally calls GET /api/models on LocoPilot Cloud and concatenates the result. Cloud entries arrive with owned_by: "remote" from the upstream manifest. If the cloud call fails or times out, the response degrades gracefully to the local list and a warning is logged.
  4. If Ollama itself is unreachable, the local list is empty ([]) and a warning is logged. (The full merge logic is sketched after this list.)
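
In essence, the handler merges two sources with graceful degradation. A simplified TypeScript sketch of the steps above — not the actual src/api/routes/models.ts; CLOUD_BASE, isProUser(), and how the created timestamp is derived are stand-in assumptions:

declare function isProUser(): boolean;                 // tier check from the docs above
const CLOUD_BASE = "https://cloud.locopilot.example";  // hypothetical cloud base URL

async function listModels(): Promise<object[]> {
  let local: object[] = [];
  try {
    // Step 1: fetch local models from Ollama with an 8 s timeout.
    const res = await fetch(`${process.env.OLLAMA_HOST}/api/tags`, {
      signal: AbortSignal.timeout(8000),
    });
    const { models } = (await res.json()) as { models: { name: string }[] };
    local = models.map((m) => ({
      id: m.name,
      object: "model",
      created: Math.floor(Date.now() / 1000),          // assumption: source of "created"
      owned_by: "local",
    }));
  } catch {
    console.warn("ollama unreachable; local list is empty");   // Step 4
  }
  if (!isProUser()) return local;                      // Step 2: free tier stops here
  try {
    // Step 3: append cloud models; upstream entries already carry owned_by: "remote".
    const res = await fetch(`${CLOUD_BASE}/api/models`);
    return local.concat((await res.json()) as object[]);
  } catch {
    console.warn("cloud unreachable; degrading to local list");
    return local;
  }
}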

Response

{
  "object": "list",
  "data": [
    { "id": "llama3:8b", "object": "model", "owned_by": "local", "created": 1746820000 },
    { "id": "mistral:7b", "object": "model", "owned_by": "local", "created": 1746820000 },
    { "id": "llama-3.1-70b-instruct", "object": "model", "owned_by": "remote", "created": 1746820000 }
  ]
}
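
For typed clients, the shape can be captured in a small TypeScript interface (a sketch matching the example above):

interface ModelEntry {
  id: string;                     // model name, e.g. "llama3:8b"
  object: "model";
  created: number;                // unix timestamp, in seconds
  owned_by: "local" | "remote";   // see the table below
}

interface ModelListResponse {
  object: "list";
  data: ModelEntry[];
}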
owned_by | Meaning
local    | Pulled into your Ollama runtime — served locally, free
remote   | Available via LocoPilot Cloud — requires Pro tier to invoke
Note

The endpoint never returns an error: Ollama failures and cloud failures alike degrade to a partial (or empty) list, surfacing only as logged warnings. Use /v1/locopilot/health when you need a true reachability signal.

Use cases

  • Populate a model picker in your own UI
  • Detect whether a given model would route locally or to the cloud before you call /v1/chat/completions
  • CI checks (assert that every expected model is present; see the sketch below)
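
A sketch of such a CI check, reusing the ModelListResponse interface defined above (the expected list is illustrative):

// Hypothetical CI check: fail if an expected model is missing, and
// report where each listed model would be served.
const expected = ["llama3:8b", "mistral:7b"];          // illustrative list

async function checkModels(base = "http://localhost:8080"): Promise<void> {
  const res = await fetch(`${base}/v1/models`);
  const { data } = (await res.json()) as ModelListResponse;
  const ids = new Set(data.map((m) => m.id));
  for (const id of expected) {
    if (!ids.has(id)) throw new Error(`missing model: ${id}`);
  }
  for (const m of data) {
    console.log(`${m.id} -> ${m.owned_by}`);           // "local" or "remote"
  }
}

checkModels().catch((err) => {
  console.error(err);
  process.exit(1);
});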