Distributed Concourse

Alpha Feature

Distributed Concourse is currently in alpha. It is available for testing and evaluation but is not yet recommended for production deployments.

Concourse supports forming a distributed cluster of multiple nodes that together act as a single logical database. Data is automatically partitioned and replicated across nodes for fault tolerance and availability.

Architecture

Peer-to-Peer

Concourse uses a peer-to-peer architecture with no single leader. Every node in the cluster can accept client connections and process requests. Requests are transparently routed to the node(s) that own the relevant data.

Consistency Model

Distributed Concourse is a CP system (Consistency and Partition tolerance) with optimistic availability:

  • Strong consistency: All reads reflect the most recent write. There are no stale reads.
  • Partition tolerance: The cluster continues to operate correctly when network partitions occur, as long as a majority of nodes are reachable.
  • Optimistic availability: The system remains available for reads and writes as long as a quorum of replicas is reachable.

Replication

Each piece of data is stored on multiple nodes according to the configured replication_factor. The default replication factor is n/2 + 1 (a majority of the n nodes in the cluster, using integer division), which ensures that any quorum of nodes contains at least one copy of every piece of data.
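The arithmetic behind this default can be sketched as follows (a standalone illustration, not Concourse code): with integer division, a majority of n nodes is n/2 + 1, and any two majorities of the same node set must overlap in at least one node, so a majority-sized replica set is always reachable from any quorum.

```java
public class QuorumMath {

    // A majority of n nodes, which is also the default
    // replication factor for an n-node cluster.
    static int majority(int n) {
        return n / 2 + 1; // integer division
    }

    public static void main(String[] args) {
        // With 3 nodes, the default replication factor is 2, so any
        // majority (2 of 3 nodes) contains at least one replica.
        System.out.println(majority(3)); // prints 2
        System.out.println(majority(5)); // prints 3
    }
}
```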

Configuration

To form a cluster, configure the cluster section in concourse.yaml on every node:

cluster:
  nodes:
    - node1.example.com:1717
    - node2.example.com:1717
    - node3.example.com:1717
  replication_factor: 2

Settings

Setting                       Description
cluster.nodes                 List of all node addresses (host:port) in the cluster, including this node
cluster.replication_factor    Number of nodes that store a copy of each piece of data

Requirements

  • All nodes must list the same set of nodes in their configuration.
  • All nodes must use the same replication_factor.
  • All nodes must run the same version of Concourse.
  • All nodes must have identical configurations (aside from node-specific settings like buffer_directory).
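To illustrate the last point, here is a sketch of what one node's concourse.yaml might look like (the buffer_directory path is purely illustrative): the cluster section is byte-for-byte identical on every node, while node-specific settings like buffer_directory may differ.

```yaml
# Shared cluster settings: identical on every node in the cluster.
cluster:
  nodes:
    - node1.example.com:1717
    - node2.example.com:1717
    - node3.example.com:1717
  replication_factor: 2

# Node-specific setting: may differ from node to node.
buffer_directory: /var/lib/concourse/buffer
```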

Client Connections

Clients can connect to any node in the cluster. The node receiving the connection acts as a coordinator, transparently routing requests to the appropriate node(s) that own the data.

// Java - connect to any node
Concourse concourse = Concourse.at()
    .host("node1.example.com")
    .port(1717)
    .connect();

From the client’s perspective, the cluster behaves identically to a single-node deployment. No changes to application code are required.
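As a sketch of what this means in practice (assuming a reachable cluster and the standard single-node Java client methods add, get, and exit), ordinary operations run unchanged; the coordinator routes each one to the node(s) that own the record's data:

```java
import com.cinchapi.concourse.Concourse;

public class ClusterClientSketch {
    public static void main(String[] args) {
        // Any node in the cluster can serve as the entry point.
        Concourse concourse = Concourse.at()
                .host("node2.example.com") // any node, not just node1
                .port(1717)
                .connect();
        try {
            // Same API as a single-node deployment; the cluster
            // handles data placement and routing transparently.
            concourse.add("name", "jeff", 1);
            Object name = concourse.get("name", 1);
        }
        finally {
            concourse.exit(); // close the connection
        }
    }
}
```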

Current Limitations

The alpha release of distributed Concourse has the following limitations:

  • No conversion from standalone: Existing single-node deployments cannot be converted to a cluster. Clusters must be started fresh.
  • No dynamic membership: Nodes cannot be added to or removed from a running cluster. The cluster membership is fixed at startup.
  • Identical configurations: All nodes must have identical configurations.
  • Same version required: All nodes must run the same version of Concourse. Rolling upgrades are not supported.

These limitations will be addressed in future releases as the distributed feature matures.