This page is the SDK-agnostic mental model. It uses JSON-RPC wire shapes (not Rust or TypeScript code) so the picture holds whether you’re building against the Rust SDK, the planned TypeScript SDK, or writing your own implementation from PROTOCOL.md. Each section has a “go deeper” link to the corresponding protocol page.

Peers

A scomp connection has two peers. One runs an LLM agent’s harness and originates code submissions; the other hosts a runtime that evaluates them. The protocol calls these the client and server, but they’re symmetric at the wire level — either side declares capabilities, either side can be invoked. The role is determined by who sends the handshake first. That peer is the client; the responder is the server.

Go deeper

Architecture — the five-layer model (agent, harness, client SDK, server SDK, runtime) and which boundaries the protocol governs.

Runtimes and bindings

A runtime is whatever the server uses to evaluate submitted code — typically a JS sandbox (QuickJS in the reference implementation), but the protocol is runtime-agnostic. The runtime hosts persistent state and a set of bindings: capabilities exposed by name with declared input/output schemas.
{
  "name": "webSearch",
  "description": "Search the web. Returns a list of result objects.",
  "input": { "type": "object", "properties": { "query": { "type": "string" } } },
  "output": { "type": "array", "items": { "$ref": "#/$defs/SearchResult" } },
  "effects": ["read", "external"]
}
Both sides declare bindings. Server-declared bindings are callable from inside evaluated code; client-declared bindings are callable from the server back to the client.

Go deeper

Bindings — the full metadata shape, effects, hints, and JSON Schema requirements.

Handshake

The connection opens with a single round-trip. The client sends handshake with its protocol version, declared bindings, and optional metadata (auth claims, client identity). The server responds with its protocol version, its own declared bindings, and a server-issued sessionId.
// → handshake request
{ "jsonrpc": "2.0", "id": 1, "method": "handshake",
  "params": { "protocol": "0.1", "bindings": [/* ... */] } }

// ← handshake response
{ "jsonrpc": "2.0", "id": 1,
  "result": { "protocol": "0.1", "sessionId": "sess_8f2a3c", "bindings": [/* ... */] } }
A handshake error is terminal: the client must close the connection to retry.
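For illustration, a failed handshake might look like this — the error code and message here are placeholders, not normative values; see the lifecycle page for the real error catalogue:

```json
// ← handshake error (illustrative code/message; connection must be closed before retrying)
{ "jsonrpc": "2.0", "id": 1,
  "error": { "code": -32600, "message": "unsupported protocol version" } }
```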

Go deeper

Lifecycle — the full state machine from open through steady state to close.

Eval

The client submits code to the runtime via eval. The runtime evaluates it (typically using captured bindings, possibly invoking back into the client mid-eval), and returns the result of the final expression.
// → eval
{ "jsonrpc": "2.0", "id": 2, "method": "eval",
  "params": { "code": "const o = await getOrder({ id: 'o_001' }); o.total" } }

// ← eval response
{ "jsonrpc": "2.0", "id": 2, "result": { "value": 142.50 } }
Evals serialize on the server: one in flight at a time per port. The protocol returns the result, not logs or traces — observability is the harness’s job, wired up via reverse-invoke bindings.
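A sketch of that serialization, with two evals submitted back-to-back (the code and values are illustrative):

```json
// → two evals queued; the second does not start until the first completes
{ "jsonrpc": "2.0", "id": 3, "method": "eval", "params": { "code": "1 + 1" } }
{ "jsonrpc": "2.0", "id": 4, "method": "eval", "params": { "code": "2 + 2" } }

// ← responses come back in submission order
{ "jsonrpc": "2.0", "id": 3, "result": { "value": 2 } }
{ "jsonrpc": "2.0", "id": 4, "result": { "value": 4 } }
```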

Go deeper

Lifecycle — eval serialization, queueing, and how invokes interleave with active evals.

Invoke

invoke calls a binding by name. Either peer may originate it.
// Server-to-client invoke (the runtime calls a client-declared binding)
{ "jsonrpc": "2.0", "id": 100, "method": "invoke",
  "params": { "name": "notify", "args": { "message": "build finished" } } }
Client-to-server invokes also work, calling server-declared bindings without going through eval. That’s deliberate: it keeps the calling path runtime-agnostic — a Lua server, a Wasm server, and a QuickJS server all accept the same invoke shape.
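A client-to-server invoke uses the same shape as the server-to-client one above — here calling the webSearch binding declared earlier (args are illustrative):

```json
// → client-to-server invoke (calls a server-declared binding directly, no eval)
{ "jsonrpc": "2.0", "id": 5, "method": "invoke",
  "params": { "name": "webSearch", "args": { "query": "scomp protocol" } } }
```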

Go deeper

Lifecycle — invoke semantics, the bidirectional symmetry, and why nested invokes don’t deadlock.

Sessions

A successful handshake establishes a session, identified by the server-issued sessionId. Sessions outlive connections: if the transport drops, the client can reconnect and supply the same sessionId in its next handshake to resume — same runtime, same captured state, fresh transport. The runtime’s state (globals, captured callbacks) persists across the disconnection. Function references (see below) do not — those are connection-scoped.
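A resumption handshake might look like the following — a sketch assuming the sessionId rides in the handshake params; see the Sessions page for the normative field placement:

```json
// → reconnect handshake carrying the prior sessionId to resume the session
{ "jsonrpc": "2.0", "id": 1, "method": "handshake",
  "params": { "protocol": "0.1", "sessionId": "sess_8f2a3c", "bindings": [/* ... */] } }
```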

Go deeper

Sessions — resumption semantics, what survives a reconnect, and the not-found failure mode.

Function references

The protocol carries function-valued arguments. When a binding call includes a callable (e.g., an inline JS arrow function passed into a server binding), it crosses the wire as a sentinel:
{ "$scomp": { "kind": "fn", "id": "fn_7af3",
              "input": {/* schema */}, "output": {/* schema */} } }
The holder of the reference can invoke it like any other binding, using ref instead of name:
{ "jsonrpc": "2.0", "id": 101, "method": "invoke",
  "params": { "ref": "fn_7af3", "args": { "order": {/* ... */} } } }
References are connection-scoped — when the connection ends, all refs from that connection are dropped. Sessions persist; refs don’t.
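The sentinel also works in the other direction: a client-to-server invoke can pass a callable as an argument. This sketch assumes a hypothetical server binding named processOrders taking a callback — the binding name and argument shape are illustrative, not from the protocol:

```json
// → invoke passing a callable; the caller serializes it as a $scomp sentinel
//   ("processOrders" and "onEachResult" are hypothetical names)
{ "jsonrpc": "2.0", "id": 6, "method": "invoke",
  "params": { "name": "processOrders",
              "args": { "onEachResult": { "$scomp": { "kind": "fn", "id": "fn_9b21" } } } } }
```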

Go deeper

Function references — the $scomp.fn sentinel, the release protocol, and why refs are connection-scoped (not session-scoped).

Putting it together

A minimum-viable session is four messages:
1.  client → server   handshake          declare bindings, request session
2.  server → client   handshake response declare bindings, issue sessionId
3.  client → server   eval               submit code
4.  server → client   eval response      return result
…with any number of invoke messages flying in either direction between steps 2 and 4 (or after, until the transport closes). That’s the entire protocol surface; everything else is convention.
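Spelled out as wire messages, assuming empty binding lists on both sides and a trivial expression (the sessionId and code are illustrative):

```json
// 1 → handshake
{ "jsonrpc": "2.0", "id": 1, "method": "handshake",
  "params": { "protocol": "0.1", "bindings": [] } }
// 2 ← handshake response
{ "jsonrpc": "2.0", "id": 1,
  "result": { "protocol": "0.1", "sessionId": "sess_01", "bindings": [] } }
// 3 → eval
{ "jsonrpc": "2.0", "id": 2, "method": "eval", "params": { "code": "6 * 7" } }
// 4 ← eval response
{ "jsonrpc": "2.0", "id": 2, "result": { "value": 42 } }
```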

Build it

Quickstart — the same flow in Rust, runnable in 5 minutes.