
Routing

Routing conventions and best practices for the k0rdent platform

Multi-Region API Routing Architecture

Status: Draft
Date: 2026-02-08


1. Region Naming Convention

1.1 Industry Research

Every major platform has converged on a similar hierarchy but with different encoding choices:

| Provider | Format | Example | Hierarchy |
|---|---|---|---|
| AWS | {continent}{cardinal}-{n} | us-east-1 | continent-direction-number |
| AWS (short) | {continent}{cardinal}{n} | use1 | compressed, no delimiters |
| AWS (AZ) | {region}{letter} | us-east-1a | region + zone letter |
| GCP | {continent}-{subregion}{n} | us-central1 | continent-subregion-number |
| GCP (zone) | {region}-{letter} | us-central1-a | region + zone letter |
| Azure | {region} | eastus, westus2 | flat slug, no delimiters |
| Azure (regional DNS) | {service}-{region}-01 | contoso-westus2-01.regional.azure-api.net | service-region-instance |
| Cloudflare | {iata} | DFW, IAD | 3-letter IATA, uppercase |
| Vercel | {iata}{n} | sfo1, iad1, dub1 | city code + number |
| Hetzner | {location}{n} | fsn1, nbg1, ash1 | location code + number |
| Hetzner (DC) | {location}-dc{n} | fsn1-dc14 | location + datacenter number |
| Fly.io | {iata} | dfw, iad, ams | 3-letter IATA airport code |
| Kubernetes | topology.kubernetes.io/zone | us-east-2a | follows cloud provider convention |

1.2 Format

{iata}{n} — a lowercase three-letter IATA metro code plus a facility number (e.g., sfo1, lax2; see FR-17). IATA airport codes map directly to the metro areas where datacenters operate. Developers recognize them immediately. The number distinguishes multiple facilities in the same metro.

1.3 Why IATA

Fly.io, Cloudflare, Fastly, and Wikimedia all use IATA codes for physical infrastructure. k0rdent sells bare metal in specific physical locations — the same category. Cloud-style codes (us-west-1) encode abstract geography because you never know the specific datacenter. With bare metal, the physical location is the product.

IATA codes are globally unique, assigned by an international body, 3 characters, and human-memorable. sfo1 carries more information in 4 characters than usca1 does in 5 — it tells you the metro, not just the state.

1.4 Subdomain Format

Regional gateways are addressed as {region}.api.example.com — e.g., sfo1.api.example.com. The global endpoint api.example.com routes to healthy regions (NFR-09, NFR-11).

1.5 Region Registry

One code used everywhere — subdomains, request IDs, database columns, Kubernetes labels, log correlation. No translation tables (FR-18).
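A sketch of what such a registry might look like in code (the metro metadata below is illustrative; only the codes sfo1, lax1, and ams1 are taken from examples elsewhere in this document):

```typescript
// Hypothetical region registry. The `code` string is the single identifier
// used in subdomains, request IDs, DB columns, K8s labels, and logs (FR-18).
interface Region {
  code: string;   // {iata}{n}, e.g. "sfo1"
  iata: string;   // 3-letter IATA metro code
  metro: string;  // human-readable metro area (illustrative values)
}

const REGIONS: Record<string, Region> = {
  sfo1: { code: "sfo1", iata: "SFO", metro: "San Francisco" },
  lax1: { code: "lax1", iata: "LAX", metro: "Los Angeles" },
  ams1: { code: "ams1", iata: "AMS", metro: "Amsterdam" },
};

// Registry lookups are in-memory with zero network calls (NFR-18).
function isValidRegion(code: string): boolean {
  return Object.prototype.hasOwnProperty.call(REGIONS, code);
}
```

Because the registry is a static in-process map, validating a region code never adds network latency to the resolution path.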


2. Request IDs vs Resource IDs

2.1 Request IDs

Every API request gets a unique Request ID for tracing, debugging, and correlation across services. Request IDs embed the region to enable grep-based debugging across the entire stack.

Format: req_{region}-{timestamp}-{entropy} — e.g., req_sfo1-1770564159296-7d4b9e1f3a5b

Components:

  • req_ — Platform prefix for easy grepping in logs
  • sfo1 — Region code where request was processed
  • 1770564159296 — Unix timestamp in milliseconds (sortable, debuggable)
  • 7d4b9e1f3a5b — Random entropy (12 hex characters)

Why embed region in Request IDs:

  • Grep any log file for sfo1 and see all activity in that region
  • Trace a request across services without additional lookups
  • Instant debugging: see the request ID, know which region handled it
  • Correlate metrics, traces, and logs by region prefix

Usage:

  • Returned in X-Request-Id response header
  • Logged by all services handling the request
  • Used for distributed tracing correlation
  • NOT exposed to end users (internal debugging only)

2.2 Resource IDs

Resources (clusters, servers, organizations, etc.) get globally unique, opaque IDs that do NOT contain region information. This decouples resource identity from physical location.

Format: {prefix}_{base62}

| Resource | Prefix | Example |
|---|---|---|
| Organization | org_ | org_8TcVx2WkZddNmK3Pt9JwX7BzWrLM |
| Server | srv_ | srv_3KpQm9WnXccFjH2Ls8DkT6VzRqYU |
| Cluster | cls_ | cls_6NZtkvWLBbbmHfPi7L6oz7KZpqET |
| Stack | stk_ | stk_5MfRp4WjYbbHmG8Nt2LvS9CxPqZK |
| Workflow Run | run_ | run_7NhTq6WlAbbKmF5Rt3MxU8DzSqWJ |
| Pool | pool_ | pool_2LgPn8WmXccGjE7Mt4KwV9BySrTL |
| Allocation | alloc_ | alloc_9QjSr3WnZddMmH6Pt5LxW2CzUrYK |
| API Key | key_ | key_4KfQm7WkYccJmG3Nt8MvX9BzSqWL |
| Event | evt_ | evt_6MgRp2WlXbbKmF9Rt5NxU3DzTqZJ |

Special ID: org_system is reserved for platform-level admin operations. TBD: whether this ID is still needed — it was originally introduced for a different purpose.

Components:

  • {prefix}_ — Resource type for debugging clarity (e.g., cls_, srv_, org_)
  • {base62} — 26-character base62-encoded unique identifier (case-sensitive)
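A minimal sketch of a generator for this format, assuming Node's crypto module (the function name is hypothetical; rejection sampling keeps the base62 distribution uniform):

```typescript
import { randomBytes } from "node:crypto";

const BASE62 = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";

// Generate an opaque {prefix}_{base62} resource ID with 26 base62 characters.
// Note: no region is encoded anywhere in the ID (FR-08).
function generateResourceId(prefix: string): string {
  let body = "";
  while (body.length < 26) {
    for (const byte of randomBytes(32)) {
      if (body.length === 26) break;
      // 248 = 62 * 4: reject the top 8 byte values to avoid modulo bias.
      if (byte < 248) body += BASE62[byte % 62];
    }
  }
  return `${prefix}_${body}`;
}
```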

Why NOT embed region:

  • Resources can migrate between regions without ID changes
  • Multi-region resources don't have a single "home" region
  • Simpler, shorter IDs (30 chars vs 40+)
  • Region is stored as metadata on the resource, queryable via API
  • Resource identity is independent of physical topology

Region resolution: Region information is resolved through the routing cascade:

  1. Subdomain (sfo1.api.example.com)
  2. Header (X-Region: sfo1)
  3. Query param (?region=sfo1)
  4. Request body ({ "region": "sfo1" })
  5. Session context (project/org default region)
  6. Database lookup (stored on resource record)

3. API Design

3.1 Core Principle

Region is a routing concern, not a resource hierarchy. The API path describes what. The subdomain/header/session describes where (FR-06).

3.2 Atlas API (Operator Console)

Regions are queryable resources, not path hierarchy. Atlas operators scope requests with subdomain pinning or query filters: sfo1.api.example.com/v1/region/global/servers returns only servers in that region.

3.3 Arc API (Customer Console)

Organization is resolved from the session, never from the URL (FR-10-ORG). Region is specified in the request body for creates: POST /compute/clusters { "name": "prod", "region": "sfo1" }. Once a resource exists, its region is stored as metadata on the resource record (FR-08) and resolved through the routing cascade.

3.4 Region Resolution Cascade

Every request resolves a region before reaching application code (FR-01):

Explicit always wins (FR-02). A single-region org user never thinks about regions — the session default handles everything (FR-10).
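The cascade can be sketched as a pure function over the incoming request (the request shape and field names are illustrative, not the real gateway types; the DB-lookup and fan-out steps are represented by the null fall-through):

```typescript
interface IncomingRequest {
  hostname: string;                      // e.g. "sfo1.api.example.com"
  headers: Record<string, string>;       // lower-cased header names
  query: Record<string, string>;
  body?: { region?: string };
  session?: { defaultRegion?: string };  // single-region org default (FR-10)
}

// Deterministic priority (FR-02): subdomain → header → query → body → session.
// null means: fall through to DB lookup, then fan-out or 400 (FR-03/FR-04).
function resolveRegion(req: IncomingRequest): { region: string; source: string } | null {
  const sub = req.hostname.split(".")[0];
  if (/^[a-z]{3}\d+$/.test(sub)) return { region: sub, source: "subdomain" };
  const header = req.headers["x-region"];
  if (header) return { region: header, source: "header" };
  const query = req.query["region"];
  if (query) return { region: query, source: "query" };
  const body = req.body?.region;
  if (body) return { region: body, source: "body" };
  const session = req.session?.defaultRegion;
  if (session) return { region: session, source: "session" };
  return null;
}
```

A single-region org user hits the session branch without ever specifying a region, which is exactly the FR-10 experience.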

3.5 Persona Experiences

Arc developer, single-region org:

Arc developer, multi-region org:

Atlas operator:


4. Database Topology

4.1 Option A: Single Primary + Regional Read Replicas (MVP)

Infrastructure mutations (create cluster, provision server) go through the workflow orchestrator to the primary (FR-14). User-facing writes (settings, notifications) go cross-region to the primary — 20-80ms extra, acceptable for MVP. See §9.1 for evolution triggers.

4.2 Option B: Hybrid — Regional for Fast Writes, Central for State of Record

Operational overhead: Medium. Central PG for source of truth (you already manage this). Regional Redis or lightweight store for fast user-facing writes that are either eventually consistent (notification read status) or can be synced lazily.

The insight: "Mark notification as read" and "update my theme preference" don't need the same consistency guarantees as "create a Kubernetes cluster." Separating the write path by consistency requirement lets you keep the simple central database for things that matter while giving snappy UX for things that don't.
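That separation can be sketched as a lookup keyed by consistency class (the operation names and their classification below are illustrative assumptions, not the platform's real write catalog):

```typescript
type WriteTarget = "central-pg" | "regional-store";

// Writes that are eventually consistent by nature — snappy UX matters more
// than immediate global visibility. Names are illustrative.
const EVENTUALLY_CONSISTENT = new Set([
  "notification.markRead",
  "user.updateTheme",
]);

function writeTarget(operation: string): WriteTarget {
  // Default to the central source of truth; only opted-in user-facing writes
  // take the regional fast path and sync lazily to central PG.
  return EVENTUALLY_CONSISTENT.has(operation) ? "regional-store" : "central-pg";
}
```

Defaulting unknown operations to the central primary is the safe choice here: a write only takes the fast path after someone has explicitly decided it tolerates eventual consistency.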

Verdict: Good upgrade path from Option A. Start with Option A, identify which user-facing writes are actually causing UX pain from cross-region latency, then selectively move those to regional stores.

4.3 Recommendation: Start A, Evolve to B

4.4 Write Latency by Type

| Write | Latency Sensitivity | Path |
|---|---|---|
| Create cluster | Low (async workflow) | Arc → Workflow Queue → Primary |
| Provision server | Low (async workflow) | Atlas → Workflow Queue → Primary |
| Update user settings | High | Arc → Primary (cross-region) |
| Mark notification read | High | Arc → Primary (cross-region) |
| Login/session | High (auth handles this) | Auth → Primary |

5. Routing Architecture

5.1 Request Flow

5.2 Request Flow Examples

Arc user creates a cluster (multi-region org, no subdomain):

Arc user marks a notification as read:

Atlas operator lists all servers globally:

5.3 Gateway Implementation (Hono)
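The gateway code itself is not reproduced in this draft. As a hedged sketch of the middleware shape — written against a minimal structural context type rather than Hono's real Context so it stays self-contained, with simplified entropy generation:

```typescript
interface GatewayCtx {
  req: { host: string; header(name: string): string | undefined };
  set(key: string, value: string): void;      // request-scoped variables
  header(name: string, value: string): void;  // response headers
}

// Resolve the region, mint a request ID, and stamp both onto the response
// (NFR-15: every response carries X-Region and X-Request-Id).
async function regionMiddleware(c: GatewayCtx, next: () => Promise<void>): Promise<void> {
  const sub = c.req.host.split(".")[0];
  const region = /^[a-z]{3}\d+$/.test(sub)
    ? sub                                     // subdomain wins (FR-02)
    : c.req.header("X-Region") ?? "unresolved";
  // Simplified entropy — a real gateway would use crypto-grade randomness.
  const entropy = Math.random().toString(16).slice(2, 14).padEnd(12, "0");
  const requestId = `req_${region}-${Date.now()}-${entropy}`;
  c.set("region", region);
  c.set("requestId", requestId);
  await next();
  c.header("X-Request-Id", requestId);
  c.header("X-Region", region);
}
```

In the real gateway this would be registered with Hono's app.use so every downstream handler sees the resolved region and request ID.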

5.4 SDK Configuration


6. Request ID Generation

Request IDs are generated at the gateway for every API request. They provide distributed tracing and enable grep-based debugging across services.

6.1 Generation Function
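The generation function is not included in this draft; a minimal sketch matching the format in §2.1, assuming Node's crypto module for entropy:

```typescript
import { randomBytes } from "node:crypto";

// req_{region}-{timestamp}-{entropy} (FR-09),
// e.g. req_sfo1-1770564159296-7d4b9e1f3a5b
function generateRequestId(region: string): string {
  const timestamp = Date.now();                    // ms since epoch, sortable
  const entropy = randomBytes(6).toString("hex");  // 12 hex characters
  return `req_${region}-${timestamp}-${entropy}`;
}
```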

6.2 Region Extraction from Request IDs

The region code in Request IDs is the same string used in subdomains, database columns, Kubernetes labels, and log correlation (FR-18). This enables powerful debugging workflows:
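For instance, any logged request ID can be decomposed without a lookup (a sketch; the parser is hypothetical tooling, not part of the platform):

```typescript
interface ParsedRequestId {
  region: string;    // same string as subdomains, DB columns, K8s labels (FR-18)
  timestamp: number; // ms since epoch
  entropy: string;
}

function parseRequestId(id: string): ParsedRequestId | null {
  const match = /^req_([a-z]{3}\d+)-(\d+)-([0-9a-f]+)$/.exec(id);
  if (!match) return null;
  return { region: match[1], timestamp: Number(match[2]), entropy: match[3] };
}
```

Grepping any log file for req_sfo1- then surfaces every request handled in that region, with a sortable timestamp already embedded in each match.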


7. Functional Requirements

Any alternative proposal must satisfy these. If it can't, the burden is on the proposer to explain why the requirement is wrong.

Critical Requirements

| ID | Requirement | Rationale |
|---|---|---|
| FR-01 | Resolve target region for every request before it reaches application code. | Routing is infrastructure, not application logic. |
| FR-02 | Follow deterministic priority: Subdomain → Header → Query → Body → Session → DB Lookup → Fan-out/Error. | Developers must predict where requests go. |
| FR-06 | Region never appears as a path segment. | Decouples API contract from topology changes. |
| FR-08 | Resource IDs are globally unique and opaque (base62, 26 chars). Region stored as metadata. | Enables resource migration between regions without ID changes. |
| FR-09 | Request IDs embed region, timestamp, and entropy. Format: req_{region}-{ts}-{entropy}. | Enables grep-based debugging and distributed tracing. |
| FR-10 | Single-region org users never need to specify a region. | The DX bar. If they think about regions, we failed. |
| FR-11 | Arc requests only return resources for the authenticated org, regardless of gateway. | Tenant isolation is non-negotiable. |
| FR-14 | Infrastructure mutations go through the workflow orchestrator to primary DB. | Eventual consistency on "does this cluster exist" is not acceptable. |

See Appendix A for the complete list of functional requirements, including FR-03 through FR-19.


8. Non-Functional Requirements

MVP and production targets. The gap between them defines when to scale.

Critical Targets

| ID | Metric | MVP | Production |
|---|---|---|---|
| NFR-02 | Same-region read (Arc list/get) | < 100ms p95 | < 50ms p95 |
| NFR-03 | Cross-region write (settings, notifications) | < 150ms p95 | < 100ms p95 |
| NFR-07 | Per-region read availability | 99.5% monthly | 99.9% monthly |
| NFR-10 | Mothership availability | 99.5% monthly | 99.9% monthly |
| NFR-13 | Primary DB down | All writes fail. Reads continue. 503 + Retry-After. | Managed failover within 60-120s. RTO < 5 min. |
| NFR-19 | Every request emits OTel span: region, region_source, request_id, org_id, latency_ms, status_code | Required | Required |

See Appendix B for the complete list of non-functional requirements, NFR-01 through NFR-23.


9. Scaling Roadmap

Each item is a deliberate MVP limitation. The table below summarizes what to monitor, what triggers action, and what to build. Details follow.

| # | Monitor | Trigger | Build | Effort |
|---|---|---|---|---|
| 9.1 | Cross-region write p95 | > 200ms | Regional Redis for user state | 2-3 sprints |
| 9.2 | Write availability, failover RTO | RTO > 5 min or SLA > 99.5% | Managed PG failover → standby → multi-primary | 1 → 2 → 6+ sprints |
| 9.3 | Customer demand | HA across failure domains requested | Federation layer over single-region clusters | 4-6 sprints |
| 9.4 | Gateway latency from distance | p95 > 100ms or 3+ regions active | GeoDNS → edge gateways → edge compute | 1 → 2-3 → 4+ sprints |
| 9.5 | Session validation latency | > 50ms per request or > 500 RPS | Regional Redis session cache | 1-2 sprints |

9.1 Single-Primary Write Latency

What you'll see: NFR-03 cross-region write latency for user settings and notification reads climbing toward 200ms p95. UI feels sluggish on interactions that trigger writes. Customers in distant regions experience noticeably worse responsiveness than those near the mothership.

Why it happens: All writes go to the single PG primary in the mothership region. A user in ams1 marking a notification as read incurs a transatlantic round-trip. This is acceptable at 20-80ms but degrades as regions get farther from the primary or write volume increases.

What to build: Regional Redis (or Turso/SQLite) for user-scoped state — notification read status, user preferences, UI state. These writes hit the local store immediately and async-sync to central PG. Infrastructure state (clusters, servers, org config) stays in central PG where consistency matters. Effort: 2-3 sprints. Requires defining consistency model per data type, deploying regional Redis, and implementing sync workers.

Dependencies: None. Can be deployed independently per region.


9.2 Single-Primary Availability

What you'll see: The mothership primary goes down (the NFR-13 scenario). All writes across all regions fail immediately. Reads continue from replicas. Duration depends on recovery — manual intervention could take 10-30 minutes. This is the architecture's single biggest risk.

Why it happens: One PG primary handles all writes. No standby, no automatic failover. The managed database service provides backups but not instant promotion.

What to build (staged):

Stage 1 — Managed failover. Enable Multi-AZ on managed PostgreSQL. Standby in a second availability zone within the mothership region. Automatic failover in 60-120s. No application code changes. Effort: 1 sprint (infrastructure configuration).

Stage 2 — Cross-region standby. Promote a replica in a second region to synchronous standby. If mothership fails, DNS update points writes to the standby. Requires connection drain and cache invalidation. Effort: 2 sprints.

Stage 3 — Multi-primary. CockroachDB, Spanner, or PostgreSQL BDR. Local writes in every region. Only justified by contractual SLA requirements from very large customers. Effort: 6+ sprints with dedicated database engineering.

Dependencies: Stage 1 is independent. Stage 2 requires monitoring from NFR-19. Stage 3 requires significant application changes.


9.3 Single-Region Clusters

What you'll see: Customers ask for high-availability workloads that survive a full region failure. Competitors offer multi-region Kubernetes. Sales loses deals where cross-region resilience is a hard requirement.

Why it happens: A cluster lives in exactly one region (its region is single-valued metadata on the cluster record, per FR-08). All nodes are co-located. If sfo1 goes down, clusters in sfo1 are down.

What to build: A federation layer. A "multi-region deployment" is a logical resource that owns multiple single-region clusters (e.g., one in sfo1, one in lax1). The routing architecture doesn't change — each underlying cluster still has a single region. The federation layer handles cross-cluster orchestration, health monitoring, and failover. Effort: 4-6 sprints.

Dependencies: Requires §9.4 GeoDNS for meaningful cross-region traffic steering. Requires fat-pipe interconnects between DCs for viable cross-region networking.


9.4 Manual Region Selection

What you'll see: api.example.com resolves to a single gateway (or round-robin). Users in Amsterdam hit a US gateway before being routed to ams1. NFR-02 read latency inflated by unnecessary cross-region hop.

Why it happens: No geographic DNS routing. api.example.com points to one place. Developers pick their region via subdomain, header, or SDK config — there's no automatic "nearest region" behavior.

What to build (staged):

Stage 1 — GeoDNS. Route53 latency-based routing or Cloudflare load balancing. api.example.com resolves to the nearest healthy regional gateway. DNS configuration only, no code changes. Effort: 1 sprint.

Stage 2 — Edge gateways. Lightweight gateways at CDN edge (Cloudflare Workers, CloudFront Functions). TLS termination + region resolution at edge. Reduces first-byte latency. Effort: 2-3 sprints.

Stage 3 — Edge compute. Move read-heavy operations (list caches, notification counts) to edge. Requires cache invalidation design. Effort: 4+ sprints.

Dependencies: Stage 1 requires 3+ active regions to be meaningful. Stage 2 requires NFR-19 observability to measure improvement. Stage 3 pairs with §9.1 regional state.


9.5 Centralized Session Auth

What you'll see: Session validation adds measurable latency to every request because the auth DB is in the mothership region. At scale, the auth service becomes a throughput bottleneck.

Why it happens: Stateful sessions live in mothership PG. Every request that needs session validation from a non-mothership region pays a cross-region round-trip.

What to build: Regional Redis session cache, populated on login, validated locally. Session revocation propagated via pub/sub with < 5s delay. This uses the same regional Redis infrastructure as §9.1 — same deployment, same operational cost. Effort: 1-2 sprints if regional Redis already exists from §9.1.

Dependencies: Pairs naturally with §9.1. Deploy together for shared infrastructure cost.


9.6 Phase Summary

Each phase triggered by measurable thresholds, not calendar dates. Observability (NFR-19 through NFR-23) provides the data to know when.


10. Decision Summary

| Decision | Choice | Reference |
|---|---|---|
| Region code format | IATA + number → sfo1, lax2 | FR-17 |
| Physical topology (zone, rack, cage) | Metadata on server records, not routing | FR-19 |
| Subdomain format | sfo1.api.example.com | §1.4 |
| API prefix | /v1/{service}; /v1/region/global, /v1/region/{region} | §3.2 |
| Region in API path | Never | FR-06 |
| Resource ID format | {prefix}_{base62} (26 chars, opaque) | §2.2, FR-08 |
| Request ID format | req_{region}-{timestamp}-{entropy} | §2.1, FR-09 |
| Database topology (MVP) | Single primary + regional read replicas | §4, NFR-03 |
| Resolution priority | Subdomain → Header → Query → Body → Session → DB → Fan-out/Error | FR-02 |
| Cross-region clusters | Not for MVP | §9.3 |
| Write availability risk | Accepted for MVP, managed failover first | NFR-13, §9.2 |
| Monitoring baseline | OTel on every request from day one | NFR-19 |

Appendix A: All Functional Requirements

Region Resolution

| ID | Requirement | Rationale |
|---|---|---|
| FR-01 | Resolve target region for every request before it reaches application code. | Routing is infrastructure, not application logic. |
| FR-02 | Follow deterministic priority: Subdomain → Header → Query → Body → Session → DB Lookup → Fan-out/Error. | Developers must predict where requests go. |
| FR-03 | Mutations without a resolved region return 400, never silently default. | Wrong-region creates are infrastructure incidents. |
| FR-04 | List operations without a region fan out to all org-accessible regions. | GET /clusters returns all clusters, not a random subset. |
| FR-05 | Resolution adds < 5ms overhead. | It's a parse, not a network call. |

API Contract

| ID | Requirement | Rationale |
|---|---|---|
| FR-06 | Region never appears as a path segment. | Decouples API contract from topology changes. |
| FR-07 | OpenAPI spec is identical across all regional gateways. | One SDK, one set of docs. |
| FR-08 | Resource IDs are globally unique and opaque ({prefix}_{base62}, 26 chars). Region stored as metadata. | Enables resource migration between regions without ID changes. |
| FR-09 | Request IDs embed region, timestamp, and entropy (req_{region}-{ts}-{entropy}). | Enables grep-based debugging and distributed tracing. |
| FR-10-ORG | Organization resolved from session, never from URL. | Prevents enumeration, simplifies API surface. |
| FR-10 | Single-region org users never need to specify a region. | The DX bar. If they think about regions, we failed. |

Multi-Tenancy

| ID | Requirement | Rationale |
|---|---|---|
| FR-11 | Arc requests only return resources for the authenticated org, regardless of gateway. | Tenant isolation is non-negotiable. |
| FR-12 | Resource creation rejected if target region not in org's allowed set. | API enforces region access, not just UI. |
| FR-13 | Atlas requests require platform-level auth, enforced at gateway before routing. | Atlas exposes cross-org data. |

Data Consistency

| ID | Requirement | Rationale |
|---|---|---|
| FR-14 | Infrastructure mutations go through the workflow orchestrator to primary DB. | Eventual consistency on "does this cluster exist" is not acceptable. |
| FR-15 | Read replicas serve Arc list/get operations. Reads don't hit primary unless read-after-write is needed. | Local read latency is the point of replicas. |
| FR-16 | After creation, resource visible in same-session lists within 5 seconds. | Read-after-write for the creating user. |

Naming

| ID | Requirement | Rationale |
|---|---|---|
| FR-17 | Region codes use IATA + number format (e.g., sfo1, lax2). | Industry standard for physical infrastructure, instantly recognizable. |
| FR-18 | Same region string used in subdomains, IDs, DB columns, K8s labels, and logs. | One code everywhere, no translation tables. |
| FR-19 | Physical topology below region (zone, rack, cage) is metadata on resources, not part of the routing hierarchy or API contract. | Provider topology changes don't break API contracts. |

Appendix B: All Non-Functional Requirements

Latency

| ID | Metric | MVP | Production | Measured At |
|---|---|---|---|---|
| NFR-01 | Gateway resolution overhead | < 5ms p99 | < 2ms p99 | Request arrival → region resolved |
| NFR-02 | Same-region read (Arc list/get) | < 100ms p95 | < 50ms p95 | Gateway → response, local replica |
| NFR-03 | Cross-region write (settings, notifications) | < 150ms p95 | < 100ms p95 | Round-trip including primary write |
| NFR-04 | Infrastructure mutation acceptance | < 500ms p95 | < 300ms p95 | Time to 202 Accepted + enqueue |
| NFR-05 | Fan-out list (all org regions) | < 500ms p95 | < 250ms p95 | Parallel query + merge |
| NFR-06 | Replication lag | < 5s p99 | < 1s p99 | pg_stat_replication |

Availability

| ID | Metric | MVP | Production |
|---|---|---|---|
| NFR-07 | Per-region reads | 99.5% monthly | 99.9% monthly |
| NFR-08 | Per-region writes | 99.0% monthly | 99.5% monthly |
| NFR-09 | Global endpoint (api.example.com) | 99.5% monthly | 99.9% monthly |
| NFR-10 | Mothership availability | 99.5% monthly | 99.9% monthly |

Failover

| ID | Scenario | MVP Behavior | Production Behavior |
|---|---|---|---|
| NFR-11 | Regional gateway down | {region}.api.example.com returns 503. Global endpoint routes to healthy regions. | Automatic DNS failover within 60s. |
| NFR-12 | Regional replica down | Reads fall back to primary (higher latency). | Automatic replica promotion within 30s. |
| NFR-13 | Primary DB down | All writes fail. Reads continue from replicas. API returns 503 with Retry-After. | Managed failover restores writes within 60-120s. RTO < 5 min. |
| NFR-14 | Mesh partition | Local reads continue. Writes fail with clear error. No silent data loss. | Regional write buffer retries for 5 min, then fails with audit trail. |
| NFR-15 | Degraded mode | Responses include X-Region, X-Request-Id. Degraded responses add X-Degraded: true + reason. | Same, plus GET /health/region with replica lag, primary connectivity, uptime. |

Throughput

| ID | Metric | MVP | Production |
|---|---|---|---|
| NFR-16 | RPS per regional gateway | 100 sustained | 1,000 sustained |
| NFR-17 | Concurrent fan-out ops | 10 | 100 |
| NFR-18 | Region registry lookup | In-memory, zero network calls | Same |

Observability

| ID | Requirement | MVP | Production |
|---|---|---|---|
| NFR-19 | Every request emits OTel span: region, region_source, request_id, org_id, latency_ms, status_code | Required | Required |
| NFR-20 | Cross-region write latency tracked separately from same-region reads, per-region p50/p95/p99 | Required | Required |
| NFR-21 | Replication lag monitored per replica, alert at 5s (MVP) / 1s (prod) | Required | Required |
| NFR-22 | Fan-out tracks per-region sub-request latency | Nice-to-have | Required |
| NFR-23 | Gateway exposes Prometheus /metrics: resolution latency, routing source distribution, error rates | Required | Required |
