Bird’s eye view

GoAkt is a framework for building concurrent, distributed, and fault-tolerant systems in Go using the actor model. Every unit of computation is an actor: a lightweight, isolated entity that communicates exclusively through messages. The Actor System is the top-level runtime that hosts actors and orchestrates messaging, clustering, and lifecycle.

Three deployment modes

| Mode | Description |
| --- | --- |
| Standalone | Single process. Actors communicate in-process. |
| Clustered | Multiple nodes. Discovery via Consul, etcd, Kubernetes, NATS, mDNS, or static. Location-transparent actors. |
| Multi-Datacenter | Multiple clusters across DCs. Pluggable control plane (NATS JetStream or etcd). DC-aware placement. |

Core building blocks

| Concept | Role |
| --- | --- |
| Reactive stream | Composable stream.Source → Flow stages → Sink pipelines, materialized with RunnableGraph.Run on an ActorSystem. Stages run as actors with demand-driven backpressure. See Streams. |
| CRDT / Replicator | Conflict-free replicated data types (crdt package), merged by a per-node Replicator system actor. Replication is eventually consistent; separate from the Olric actor/grain registry (quorum-strong). See Distributed data. |

Message flow

Message flow: a sender sends a Tell (fire-and-forget) or Ask (request-response) message to a receiver via local or TCP transport. Messages are enqueued in the receiver's mailbox, then dequeued into its Receive handler. For remote messages, the remoting layer serializes the payload over a custom TCP frame protocol with optional compression.

Actor hierarchy

Actor hierarchy: the root guardian sits at the top, with a /system branch for internal actors and a /user branch for user-defined actors and their nested children. When a parent stops, all of its children stop first (depth-first). A parent supervises its children.

Cluster architecture

Cluster architecture: each GoAkt cluster node runs an ActorSystem connected to the Cluster layer (Olric DMap), Discovery, Remoting (TCP/TLS), and a TCP server. Placement and membership state (which actors and grains live where) is stored in Olric (a distributed hash map) with configurable quorum. Node membership uses HashiCorp Memberlist. CRDT application state (when enabled via ClusterConfig.WithCRDT) is replicated separately by the Replicator actor using delta broadcast over the topic bus, not through Olric DMap writes. See Code Map for package layout and Distributed data for CRDT behaviour.