Port artifacts are now modeled as logical references plus concrete variants.
The sample config in `examples/port.toml` binds that contract to `demo-kernel`
and `demo-guest`.
Each artifact now carries four related concepts:
- Reference: a logical identifier such as `demo-fs/port/demo-kernel:v1`
- Variant: one concrete selection of `architecture`, `substrate`, and `protection_mode`
- Distribution backend: where `push` publishes and `pull` fetches that variant
- Cache root: where Port stores a pulled or published local copy outside the canonical build output path
In the sample model:
- the shipped backends are `file-system` and `hosted-api`
- `oci-registry` is also executable when the selected variant points at a reachable OCI registry and `oras` is available on `PATH`
- the sample store root is `artifact-store/demo-fs/`
- the sample cache root is `.port/cache/`
- hosted control-plane storage lives under `.port/hosted/<control-plane>/artifacts/`
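
A quick sketch for inspecting those roots once the workflows below have populated them:

```
ls artifact-store/demo-fs/   # file-system backend store root
ls .port/cache/              # local cache of pushed or pulled variants
ls .port/hosted/             # hosted control-plane storage, one directory per control plane
```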
Build and validate one native variant:

```
port --config examples/port.toml artifacts build --artifact demo-kernel --architecture native
port --config examples/port.toml artifacts validate --artifact demo-kernel --architecture native
port --config examples/port.toml artifacts build --artifact demo-guest --architecture native
port --config examples/port.toml artifacts validate --artifact demo-guest --architecture native
```

Publish and fetch one selected variant:

```
port --config examples/port.toml artifacts push --artifact demo-kernel --architecture x86-64
rm -f artifacts/kernel/demo/x86_64/firecracker/standard/vmlinux
port --config examples/port.toml artifacts pull --artifact demo-kernel --architecture x86-64
```

Local OCI registry proof:
- Copy `examples/port.toml` to a temp config.
- In `[artifacts.kernels.demo-kernel.reference]`, replace `registry = "demo-fs"` with `registry = "127.0.0.1:5510"`.
- Replace the existing `push` and `pull` sections with:

  ```toml
  [artifacts.kernels.demo-kernel.distribution.push]
  backend = "oci-registry"
  transport = "plain-http"

  [artifacts.kernels.demo-kernel.distribution.push.auth]
  kind = "anonymous"

  [artifacts.kernels.demo-kernel.distribution.pull]
  backend = "oci-registry"
  transport = "plain-http"

  [artifacts.kernels.demo-kernel.distribution.pull.auth]
  kind = "anonymous"
  ```

- Start a local OCI registry. The repo-local proof tasks use Docker plus the standard `registry:2` image on `127.0.0.1:5510` (see the sketch after this proof).
- Run the round trip:

  ```
  just demo-push-oci
  just demo-pull-oci
  ```

That proof:
- builds the native `demo-kernel` variant
- pushes it through `port artifacts push`
- removes the canonical local kernel path
- restores it through `port artifacts pull`
- never depends on a public registry
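
One minimal way to start that registry, assuming Docker is available; the container name is illustrative, and `registry:2` listens on port 5000 inside the container:

```
docker run -d --rm --name port-oci-proof -p 127.0.0.1:5510:5000 registry:2
# ...run the proof tasks, then tear the registry down:
docker stop port-oci-proof
```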

Publish and fetch the protected `x86_64/firecracker/pvm` variant:

```
port --config examples/port.toml artifacts push --artifact demo-kernel --architecture x86-64 --substrate firecracker --protection-mode pvm
rm -f artifacts/kernel/demo/x86_64/firecracker/pvm/vmlinux
port --config examples/port.toml artifacts pull --artifact demo-kernel --architecture x86-64 --substrate firecracker --protection-mode pvm
```

Hosted backend proof:
- Copy `examples/port.toml` to a temp config.
- Replace the existing `[control_planes.demo]`, `[artifacts.kernels.demo-kernel.distribution.push]`, and `[artifacts.kernels.demo-kernel.distribution.pull]` sections with:

  ```toml
  [control_planes.demo]
  endpoint = "http://127.0.0.1:7040"
  audience = "port-hosted-demo"

  [artifacts.kernels.demo-kernel.distribution.push]
  backend = "hosted-api"
  endpoint = "http://127.0.0.1:7040"

  [artifacts.kernels.demo-kernel.distribution.pull]
  backend = "hosted-api"
  endpoint = "http://127.0.0.1:7040"
  ```

- Run the hosted round trip through the canonical CLI:
  ```
  cp examples/port.toml /tmp/port-hosted-artifacts.toml
  PORT_DEMO_TOKEN=demo-token port --config /tmp/port-hosted-artifacts.toml control-plane serve --control-plane demo --bind 127.0.0.1:7040
  port --config /tmp/port-hosted-artifacts.toml artifacts build --artifact demo-kernel --architecture native
  PORT_DEMO_TOKEN=demo-token port --config /tmp/port-hosted-artifacts.toml artifacts push --artifact demo-kernel --architecture native
  # On x86_64 remove artifacts/kernel/demo/x86_64/firecracker/standard/vmlinux.
  # On aarch64 remove artifacts/kernel/demo/aarch64/firecracker/standard/vmlinux.
  PORT_DEMO_TOKEN=demo-token port --config /tmp/port-hosted-artifacts.toml artifacts pull --artifact demo-kernel --architecture native
  ```

The hosted control plane owns the stored bytes for that workflow. For
`demo-kernel`, Port persists the selected variant under:

`.port/hosted/<control-plane>/artifacts/<registry>/<repository>/<version>/<architecture>/<substrate>/<protection-mode>/<filename>`
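
As a sketch, after the hosted push above on an x86_64 host, that template resolves to a concrete file owned by the control plane; the `demo` control-plane name and the unchanged `demo-fs` reference both come from the sample config:

```
ls -l .port/hosted/demo/artifacts/demo-fs/port/demo-kernel/v1/x86_64/firecracker/standard/vmlinux
```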
Auth expectations:
- the hosted artifact route uses the configured bearer token contract
- the sample control plane reads that token from `PORT_DEMO_TOKEN`
- the CLI sends it through the configured `authorization` header
The selector flags are the canonical compatibility surface:
- `--architecture <native|x86-64|aarch64>`
- `--substrate <firecracker|cloud-hypervisor|avf>`
- `--protection-mode <standard|pvm>`
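
The same flags compose for the other shipped variants. As a sketch, assuming the `x86_64/firecracker/pvm` guest-image variant has already been built on an x86_64 host, the guest image moves the same way the kernel does above:

```
port --config examples/port.toml artifacts push --artifact demo-guest --architecture x86-64 --substrate firecracker --protection-mode pvm
port --config examples/port.toml artifacts pull --artifact demo-guest --architecture x86-64 --substrate firecracker --protection-mode pvm
```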
Current runtime boundary:
- the demo build and validate pipelines only run for the native host architecture
- the sample config ships Firecracker/standard variants plus `x86_64/firecracker/pvm` kernel and guest-image variants
- `aarch64/firecracker/pvm` remains research-only and fails fast in the build/validate scripts
- push/pull are already variant-aware even when the selected variant is published or fetched on another host
The sample config points at deterministic local output paths:
- kernel: `artifacts/kernel/demo/<architecture>/firecracker/standard/vmlinux`
- kernel PVM: `artifacts/kernel/demo/x86_64/firecracker/pvm/vmlinux`
- guest image: `artifacts/guest/demo/<architecture>/firecracker/standard/rootfs.ext4`
- guest image PVM: `artifacts/guest/demo/x86_64/firecracker/pvm/rootfs.ext4`
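
A quick sketch of checking those outputs after the native build on an x86_64 host (substitute `aarch64` on an aarch64 host):

```
test -f artifacts/kernel/demo/x86_64/firecracker/standard/vmlinux && echo "kernel output present"
test -f artifacts/guest/demo/x86_64/firecracker/standard/rootfs.ext4 && echo "guest image output present"
```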
The file-backed store layout is similarly deterministic:
- store path: `artifact-store/demo-fs/<registry>/<repository>/<version>/<architecture>/<substrate>/<protection-mode>/<filename>`
- cache path: `.port/cache/<registry>/<repository>/<version>/<architecture>/<substrate>/<protection-mode>/<filename>`

For `demo-kernel` on `x86_64/firecracker/standard`, that becomes:

- local path: `artifacts/kernel/demo/x86_64/firecracker/standard/vmlinux`
- store path: `artifact-store/demo-fs/demo-fs/port/demo-kernel/v1/x86_64/firecracker/standard/vmlinux`
- cache path: `.port/cache/demo-fs/port/demo-kernel/v1/x86_64/firecracker/standard/vmlinux`
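
A minimal sketch of confirming that mapping after the x86-64 push/pull round trip shown earlier; all three copies should exist:

```
ls -l artifacts/kernel/demo/x86_64/firecracker/standard/vmlinux
ls -l artifact-store/demo-fs/demo-fs/port/demo-kernel/v1/x86_64/firecracker/standard/vmlinux
ls -l .port/cache/demo-fs/port/demo-kernel/v1/x86_64/firecracker/standard/vmlinux
```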

The `demo-kernel` artifact:

- Logical reference: `demo-fs/port/demo-kernel:v1`
- Source: pinned upstream kernel sources for each compatibility lane: Firecracker CI demo kernels for the standard Firecracker lanes, and the sibling `pvm-builds` flake for the `x86_64/firecracker/pvm` lane
- Current pinned keys: `firecracker-ci/v1.14/x86_64/vmlinux-6.1.155` and `firecracker-ci/v1.14/aarch64/vmlinux-6.1.155`
- Validation: compare the built artifact against the lane-specific upstream source plus a non-zero file size check
- PVM note: `x86_64/firecracker/pvm` now resolves a dedicated PVM guest kernel from `pvm-builds` via `PORT_PVM_BUILD_FLAKE_REF` or a sibling `../pvm-builds` checkout. It no longer reuses the standard Firecracker CI demo kernel.
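
A minimal sketch of those two resolution options; the flake reference below is a placeholder, not a published location:

```
# Option 1: point the documented override at an explicit pvm-builds flake reference.
export PORT_PVM_BUILD_FLAKE_REF="github:example/pvm-builds"   # placeholder location

# Option 2: rely on a sibling checkout that the build scripts discover on their own.
ls ../pvm-builds
```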

The `demo-guest` artifact:

- Logical reference: `demo-fs/port/demo-guest:v1`
- Inputs: a release build of `port-guest-agent`, a BusyBox userspace when available, runtime shared libraries discovered with `ldd`, and an `init` script authored in-repo
- Files installed into the image: `/init`, `/bin/busybox`, `/usr/bin/port-guest-agent`, and minimal `/etc/passwd` and `/etc/group`
- Validation: `e2fsck -fn` for filesystem integrity plus `debugfs` checks that `/init`, `/bin/busybox`, and `/usr/bin/port-guest-agent` exist and that `/init` launches `port-guest-agent`
- PVM note: the `x86_64/firecracker/pvm` guest-image variant carries explicit `/etc/port-protection-mode` and `/etc/port-guest-architecture` markers so validation can distinguish it from the standard Firecracker lane, and `x86_64/firecracker/*` guest images now ship a sibling `initrd.cpio.gz` so Port can boot a read-only base image plus a writable overlay on the hosted AWS PVM lane
Port's Firecracker/PVM lane is not a metadata-only variation of the standard artifacts.
Required contract:
- kernel variant path under `.../x86_64/firecracker/pvm/`
- guest-image variant path under `.../x86_64/firecracker/pvm/`
- dedicated validation for those PVM variants
- no silent fallback to `standard` artifacts
The current foundation slice materializes those x86_64 PVM selectors and
keeps aarch64 as research-only. The full host-kit and artifact-kit contract
for that lane lives in
`pvm.md`.
- Run the artifact workflows from the repository root so `examples/port.toml` resolves correctly.
- `nix develop` is one way to provide `curl`, `mkfs.ext4`, `debugfs`, and related tooling, and it exports `PORT_BUSYBOX_BIN` for workflows that need BusyBox. Port also works when those tools are installed directly on the host.
- `push` and `pull` are the canonical artifact-mobility verbs even though the shipped runtime now implements the file-backed, hosted-api, and oci-registry backends.
- The hosted control plane owns `.port/hosted/<control-plane>/artifacts/` for hosted pushes and pulls; the local caller still keeps the selected cache copy under `.port/cache/`.
- The file-backed backend is still useful for repo-local proofs, but it is no longer the only executable artifact distribution path.
- The repo-local OCI proof uses `oras` plus a local registry container; direct CLI use works the same way when those tools are installed on the host.