Run large language models locally: Llama, Mistral, Gemma, and custom models on your machine.
```
npx -y @ollama/mcp-server-ollama
```
Add to your Claude Desktop config (`claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "ollama": {
      "command": "npx",
      "args": ["-y", "@ollama/mcp-server-ollama"]
    }
  }
}
```
Tools:

- `run_inference`
- `list_models`
- `get_predictions`
- `train_model`
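Once configured, Claude invokes these tools over MCP's `tools/call` method. A minimal sketch of such a request, assuming `run_inference` takes `model` and `prompt` arguments (the exact argument names are defined by this server and may differ):

```json
{
  "method": "tools/call",
  "params": {
    "name": "run_inference",
    "arguments": {
      "model": "llama3",
      "prompt": "Why is the sky blue?"
    }
  }
}
```

The server forwards requests to a local Ollama instance, so Ollama must be installed and running before the tools will respond.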
Language: Go
Category: AI & ML