This page describes how to run OpenChat Playground (OCP) with Docker Model Runner integration.
- Get the repository root.

  ```bash
  # bash/zsh
  REPOSITORY_ROOT=$(git rev-parse --show-toplevel)
  ```

  ```powershell
  # PowerShell
  $REPOSITORY_ROOT = git rev-parse --show-toplevel
  ```
- Make sure the Docker Model Runner is up and running, and ready to accept requests, with the following command.

  ```bash
  docker model status
  ```

  It should say `Docker Model Runner is running.`

  ```bash
  # bash/zsh
  curl http://localhost:12434
  ```

  ```powershell
  # PowerShell
  Invoke-WebRequest http://localhost:12434
  ```

  It should say `The service is running`. If it says `Connection refused`, turn on the "Enable host-side TCP support" option in the Docker Desktop settings.
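Once the TCP endpoint responds, you can also sanity-check the OpenAI-compatible API that the app talks to. A minimal sketch, assuming Docker Model Runner's documented `/engines/v1` route prefix:

```shell
# List the models exposed through the OpenAI-compatible endpoint.
# The /engines/v1/models route is assumed from Docker Model Runner's
# documented API surface; adjust if your version differs.
curl http://localhost:12434/engines/v1/models
```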
- Download the model. The default model OCP uses is `ai/smollm2`.

  ```bash
  docker model pull ai/smollm2
  ```

  Alternatively, if you want to run with a different model, say `ai/qwen3`, download it first by running the following command.

  ```bash
  docker model pull ai/qwen3
  ```
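To confirm the download before moving on, you can list the models Docker Model Runner has pulled locally (a quick check using the same `docker model` CLI as above):

```shell
# Show locally available models; ai/smollm2 (or ai/qwen3) should appear.
docker model list
```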
- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```
- Run the app.

  ```bash
  # bash/zsh
  dotnet run --project $REPOSITORY_ROOT/src/OpenChat.PlaygroundApp -- \
      --connector-type DockerModelRunner
  ```

  ```powershell
  # PowerShell
  dotnet run --project $REPOSITORY_ROOT\src\OpenChat.PlaygroundApp -- `
      --connector-type DockerModelRunner
  ```
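If the browser shows nothing at first, the app may still be starting up. A small polling sketch, assuming the default `http://localhost:5280` endpoint used in this guide:

```shell
# Wait up to ~30 seconds for the app to start answering HTTP requests.
for _ in $(seq 1 30); do
  if curl -fsS http://localhost:5280 >/dev/null 2>&1; then
    echo "App is up."
    break
  fi
  sleep 1
done
```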
- Open your web browser, navigate to `http://localhost:5280`, and enter prompts.
- Make sure the Docker Model Runner is up and running, and ready to accept requests, with the following command.

  ```bash
  docker model status
  ```

  It should say `Docker Model Runner is running.`

  ```bash
  # bash/zsh
  curl http://localhost:12434
  ```

  ```powershell
  # PowerShell
  Invoke-WebRequest http://localhost:12434
  ```

  It should say `The service is running`. If it says `Connection refused`, turn on the "Enable host-side TCP support" option in the Docker Desktop settings.
- Download the model. The default model OCP uses is `ai/smollm2`.

  ```bash
  docker model pull ai/smollm2
  ```

  Alternatively, if you want to run with a different model, say `ai/qwen3`, download it first by running the following command.

  ```bash
  docker model pull ai/qwen3
  ```
- Make sure you are at the repository root.

  ```bash
  cd $REPOSITORY_ROOT
  ```
- Build a container.

  ```bash
  docker build -f Dockerfile -t openchat-playground:latest .
  ```
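Before running it, you can verify that the image was built and tagged as expected (a quick check using the standard Docker CLI):

```shell
# The openchat-playground:latest image should be listed with its size and age.
docker images openchat-playground:latest
```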
- Run the app. The default model OCP uses is `ai/smollm2`.

  ```bash
  # bash/zsh - from locally built container
  docker run -i --rm -p 8080:8080 openchat-playground:latest \
      --connector-type DockerModelRunner \
      --base-url http://host.docker.internal:12434
  ```

  ```powershell
  # PowerShell - from locally built container
  docker run -i --rm -p 8080:8080 openchat-playground:latest `
      --connector-type DockerModelRunner `
      --base-url http://host.docker.internal:12434
  ```

  ```bash
  # bash/zsh - from GitHub Container Registry
  docker run -i --rm -p 8080:8080 ghcr.io/aliencube/open-chat-playground/openchat-playground:latest \
      --connector-type DockerModelRunner \
      --base-url http://host.docker.internal:12434
  ```

  ```powershell
  # PowerShell - from GitHub Container Registry
  docker run -i --rm -p 8080:8080 ghcr.io/aliencube/open-chat-playground/openchat-playground:latest `
      --connector-type DockerModelRunner `
      --base-url http://host.docker.internal:12434
  ```

  Alternatively, if you want to run with a different model, say `ai/qwen3`, make sure you've already downloaded it by running the `docker model pull ai/qwen3` command.

  ```bash
  # bash/zsh - from locally built container
  docker run -i --rm -p 8080:8080 openchat-playground:latest \
      --connector-type DockerModelRunner \
      --base-url http://host.docker.internal:12434 \
      --model ai/qwen3
  ```

  ```powershell
  # PowerShell - from locally built container
  docker run -i --rm -p 8080:8080 openchat-playground:latest `
      --connector-type DockerModelRunner `
      --base-url http://host.docker.internal:12434 `
      --model ai/qwen3
  ```

  ```bash
  # bash/zsh - from GitHub Container Registry
  docker run -i --rm -p 8080:8080 ghcr.io/aliencube/open-chat-playground/openchat-playground:latest \
      --connector-type DockerModelRunner \
      --base-url http://host.docker.internal:12434 \
      --model ai/qwen3
  ```

  ```powershell
  # PowerShell - from GitHub Container Registry
  docker run -i --rm -p 8080:8080 ghcr.io/aliencube/open-chat-playground/openchat-playground:latest `
      --connector-type DockerModelRunner `
      --base-url http://host.docker.internal:12434 `
      --model ai/qwen3
  ```
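The `--base-url` points at `host.docker.internal` because, inside the container, `localhost` refers to the container itself rather than your machine. If the app cannot reach the Model Runner, a quick reachability sketch (the `curlimages/curl` image is an assumption; any image with `curl` works):

```shell
# From inside a throwaway container, hit the host's Model Runner endpoint.
docker run --rm curlimages/curl \
    curl -fsS http://host.docker.internal:12434
```

On Linux, `host.docker.internal` is not defined by default; add `--add-host=host.docker.internal:host-gateway` to the `docker run` commands above.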
- Open your web browser, navigate to `http://localhost:8080`, and enter prompts.