
Dual Control Planes for Geo-Partitioned Event Sourcing: Surviving Region Blackouts

Audience: platform, SRE, and staff engineers running multi-region Kafka/Postgres event stores who need to survive full regional loss without breaking audit guarantees.

Md Sanwar Hossain · March 2026 · 15 min read · Technology

Table of Contents

  1. Introduction
  2. Story: When the East Coast Disappears
  3. Why Dual Control Planes in Event-Sourced Stacks
  4. Architecture Blueprint
  5. Event Store Topology
  6. Control Plane Mechanics and Failover
  7. Step-by-Step Recovery Flow
  8. Operational Commands and Config Snippets
  9. Failure Modes to Drill
  10. Observability and Runbooks
  11. Data Repair and Consistency
  12. Trade-offs and Costs
  13. Mistakes to Avoid
  14. Key Takeaways

Introduction

Dual Control Plane Architecture — mdsanwarhossain.me

Event-sourced systems love audit trails, reliable replays, and append-only histories. They hate ambiguous failovers, partial replicas, and control planes that vanish mid-incident. In a single region, the control plane that owns schemas, topic policies, consumer offsets, and projection rollouts can be centralized. In a geo-partitioned deployment, centralization becomes a liability: if the region hosting the control plane disappears, operators are blind and the replay choreography that keeps projections coherent freezes.

This piece distills the pattern into concrete commands, configs, and a step-by-step recovery flow you can lift into a runbook.

Story: When the East Coast Disappears

It is 08:12 UTC on a Tuesday. An upstream cloud networking incident silently isolates us-east-1. Your Kafka control plane, Schema Registry primary, and GitOps controllers all live there. Producers in eu-west-1 keep writing locally. Consumers in ap-southeast-1 stall because their offset commits target a control plane that is now unreachable. Within 15 minutes, incident commanders say, “Fail east traffic to EU.” The runbook says nothing about projection rebuild order or who owns schema evolution. Without a second control plane, you are left with hand-edited configs and hope.

Why Dual Control Planes in Event-Sourced Stacks

Multi-Cluster Architecture — mdsanwarhossain.me

Dual control planes are not about hot spares; they are about autonomy. Each region needs a control plane that can validate and evolve schemas, apply topic policies and ACLs, track consumer group ownership and offsets, and roll out projections without waiting on the other region.

Instead of a single orchestration brain, you operate two peer brains that exchange state through a low-frequency, signed policy mirror. During an outage, each region keeps processing with its last validated policy and queues diffs for later reconciliation.

Architecture Blueprint

The pattern builds four planes per region: data (Kafka + event store), control (GitOps + registries + orchestration), compute (services + projections), and observability (metrics/logs/traces). Duplicate the control plane in at least two regions and mirror policy as code through signed Git remotes. In practice:

Dual Control Planes Architecture — mdsanwarhossain.me
# flux-kustomization.yaml (region-scoped)
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: control-plane
spec:
  interval: 2m
  targetNamespace: platform-system
  path: ./clusters/us-east-1
  prune: true
  sourceRef:
    kind: GitRepository
    name: platform-config
  suspend: false
  postBuild:
    substitute:
      REGION: us-east-1
      KAFKA_CLUSTER: kafka-east
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: policy-signing
  namespace: platform-system
data:
  publicKey.pem: |
    -----BEGIN PUBLIC KEY-----
    
    -----END PUBLIC KEY-----

Each region runs its own Flux/ArgoCD instance pointed at the same Git mirror but with region-scoped paths, so a regional loss does not freeze reconciliation elsewhere. The orchestration code that fans out projection rebuilds should still enforce structured lifecycles: every replay job owns its child tasks and cancels them cleanly when switching failover states.
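
As a sketch of that lifecycle discipline, a replay job written with Java structured concurrency (StructuredTaskScope, shown here in its JDK 21 preview shape) owns every per-partition task it forks; the class and method names below are illustrative, not platform code:

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.StructuredTaskScope;

public class ProjectionReplayJob {

    // Replays one projection; every per-partition task is owned by this scope,
    // so a failure or an external cancellation tears down all siblings together.
    void replay(List<Integer> partitions) throws InterruptedException, ExecutionException {
        try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
            for (int partition : partitions) {
                scope.fork(() -> replayPartition(partition)); // child task, never orphaned
            }
            scope.join();            // wait for all children
            scope.throwIfFailed();   // first failure cancels the rest and propagates
        }                            // leaving the scope guarantees no leaked replay tasks
    }

    private Void replayPartition(int partition) {
        // read events from the checkpointed offset and re-apply them to the projection (omitted)
        return null;
    }
}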

Event Store Topology

For geo-partitioned event sourcing, the typical topology pins each write topic to its home region (orders.us in the east, orders.eu in Europe), mirrors it asynchronously to the peer region via MirrorMaker 2 or Cluster Linking, and sets retention to cover the replay window.

Example topic creation that enforces region pinning and retention aligned to replay windows:

$ kafka-topics --bootstrap-server kafka-east:9092 \
  --create --topic orders.us \
  --partitions 12 --replication-factor 3 \
  --config min.insync.replicas=2 \
  --config retention.ms=1209600000 # 14 days

# single-quoted heredoc prevents shell expansion and keeps the MM2 config literal
$ cat > mm2.properties <<'EOF'
clusters = east,eu
east.bootstrap.servers = kafka-east:9092
eu.bootstrap.servers = kafka-eu:9092
east->eu.enabled = true
east->eu.topics = orders.us
sync.topic.acls.enabled = true
tasks.max = 4
EOF
$ connect-mirror-maker.sh mm2.properties

Control Plane Mechanics and Failover

Dual control planes operate in active-active mode with a treaty:

  1. Policy is mirrored in Git: schemas, ACLs, consumer group ownership, projection rollout manifests.
  2. Runtime state is local: offsets, DLQ topics, replay cursors.
  3. Leaders are regional: each control plane is authoritative for its region’s compute plane.

During normal operations, policy PRs can merge in either region; every merge is signed and mirrored to the peer. During a blackout, the surviving control plane freezes cross-region policy merges but keeps applying region-local manifests, then reconciles deterministically when the failed region returns.
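
What "signed" means can be as lightweight as a detached signature checked against the region-local public key before anything is applied. A sketch that reuses the publicKey.pem from the ConfigMap above; the mount path and .sig filename are assumptions:

# Verify the schema/policy bundle against the mounted public key before applying it
$ openssl dgst -sha256 \
    -verify /etc/policy-signing/publicKey.pem \
    -signature schemas/signed-bundle-us-east.tar.gz.sig \
    schemas/signed-bundle-us-east.tar.gz

# For the Git mirror itself, require signed commits and verify before reconciling
$ git -C platform-config verify-commit HEAD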

Step-by-Step Recovery Flow

The runbook below assumes us-east-1 went dark while eu-west-1 survived:

  1. Freeze cross-region changes: Pause MirrorMaker/Cluster Linking for topics sourced from the failed region to prevent stale backfill.
  2. Promote standby schemas: In EU, set Schema Registry compatibility to BACKWARD and apply the last signed schema bundle.
  3. Re-point producers: Toggle DNS or service mesh to route US traffic to EU ingress; writes land in orders.eu.
  4. Replay critical projections: Kick off scoped replays (orders, payments) from a checkpointed offset snapshot, canceling on first failure.
  5. Switch read models: Update API read endpoints to point to EU projections.
  6. Drain DLQs locally: EU control plane owns DLQ processing; halt cross-region DLQ shipping until east recovers.
  7. Observe lag budgets: Alert if replication lag for orders.us mirrors exceeds your RPO (e.g., 5 minutes).
  8. Restore east: When east recovers, keep it read-only, reconcile schemas/ACLs, then re-enable mirrors from EU to US.
  9. Offset reconciliation and unfreeze: Export EU offsets, import to US once caught up, then reopen cross-region Git mirrors and automation.

Operational Commands and Config Snippets

# Pause mirror links from failed region
$ kafka-cluster-links --bootstrap-server kafka-eu:9092 \
  --alter --link east-to-eu --config "link.mode=paused"

# Promote EU schema bundle
$ curl -X PUT http://schema-eu:8081/config \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d '{"compatibility": "BACKWARD"}'
$ tar -xf schemas/signed-bundle-us-east.tar.gz -C /tmp/schemas
$ curl -X POST http://schema-eu:8081/subjects/orders-value/versions \
  -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  -d @/tmp/schemas/orders-value.json

# Replay projections with checkpointed offsets
$ kubectl -n platform-system create job replay-orders \
  --image=registry/replayer:1.4 \
  -- \
  --topic orders.eu --group projections.orders \
  --offset-snapshot s3://backups/checkpoints/orders-eu.json

# Drain DLQ locally
$ kafka-console-consumer --bootstrap-server kafka-eu:9092 \
  --topic orders.dlq --from-beginning --property print.headers=true
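
Step 9's offset export and import can lean on the stock consumer-groups tool. A minimal sketch, assuming the EU group has fully caught up and that topic names in the exported CSV are remapped wherever the mirrored topic name differs:

# Export the EU group's current offsets as a reset plan (CSV on stdout)
$ kafka-consumer-groups --bootstrap-server kafka-eu:9092 \
  --group projections.orders --topic orders.eu \
  --reset-offsets --to-current --export > offsets-eu.csv

# Once the recovered US cluster has caught up, apply the plan there
$ kafka-consumer-groups --bootstrap-server kafka-east:9092 \
  --group projections.orders \
  --reset-offsets --from-file offsets-eu.csv --execute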

Configuring consumer failover with region affinity:

spring:
  kafka:
    bootstrap-servers:
      - kafka-eu:9092
      - kafka-east:9092
    properties:
      client.rack: eu-west-1
      partition.assignment.strategy: org.apache.kafka.clients.consumer.StickyAssignor
    consumer:
      group-id: projections.orders
      auto-offset-reset: latest
      enable-auto-commit: false

Failure Modes to Drill

Drill the same partitions the chaos-engineering section below simulates: full isolation of one region, asymmetric partitions where only one direction of traffic survives, partial partitions where data-plane traffic keeps flowing while control-plane API calls fail, and clock skew that makes fencing-token comparisons misbehave.

Observability and Runbooks

Blackouts blur visibility. Keep observability regional and federate asynchronously. Runbooks should express steps as structured workflows rather than ad-hoc commands; nested tasks that own their children prevent orphaned replays when failover decisions change. The orchestration discipline in structured concurrency maps directly to incident automations.
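
One way to federate asynchronously is a low-frequency Prometheus federation scrape that pulls only the cross-region SLO series. A sketch, with the job name, target, and metric selector as placeholder assumptions rather than anything prescribed above:

# prometheus-eu.yml (fragment): pull a narrow slice of east's metrics, slowly
scrape_configs:
  - job_name: federate-us-east
    honor_labels: true
    scrape_interval: 60s          # low frequency; regional dashboards stay local
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__=~"kafka_consumergroup_lag|up"}'   # example series only
    static_configs:
      - targets: ['prometheus-east:9090']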

Data Repair and Consistency

Once the failed region returns:

  1. Read-only staging: Keep producers on the surviving region; mount recovered brokers as mirrors.
  2. Verify deltas: Compare partition checksums and event counts; only proceed when parity holds (see the sketch after this list).
  3. Import and restore: Import surviving offsets, run a dry-run replay to confirm idempotency, then shift traffic back gradually (10%, 25%, 50%, 100%).
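
A quick way to check event-count parity per partition is to compare end offsets on both clusters. A sketch, assuming a Kafka 3.x installation that ships the kafka-get-offsets tool:

# End offsets per partition (--time -1 means latest) on each cluster
$ kafka-get-offsets --bootstrap-server kafka-east:9092 --topic orders.us --time -1 > east.offsets
$ kafka-get-offsets --bootstrap-server kafka-eu:9092   --topic orders.us --time -1 > eu.offsets
# Proceed only when the outputs match
$ diff east.offsets eu.offsets && echo "parity holds"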

Trade-offs and Costs

Dual control planes add expense: duplicate GitOps controllers, schema registries, and CI runners per region, plus more storage for mirrored topics. The payoff is autonomy: regional outages become localized incidents, and you keep audit-grade guarantees because replay order and schema governance never rely on a single region.

Mistakes to Avoid

Key Takeaways

Conclusion

Surviving a regional blackout in an event-sourced world is less about “flip DNS” and more about choreography: pausing mirrors, rerouting producers, replaying projections in the right order, and reconciling offsets with proof. Dual control planes give you that choreography even when one side of the world goes dark. For the concurrency discipline behind those workflows, revisit structured concurrency and apply it to your incident automations.

For orchestration patterns behind these workflows, read the structured concurrency guide: https://mdsanwarhossain.me/blog-java-structured-concurrency.html.

Implementation Deep Dive: Kafka MirrorMaker 2 for Cross-Region Replication

MirrorMaker 2 (MM2) is the production-grade solution for cross-cluster Kafka replication, replacing the fragile single-consumer MirrorMaker 1 with a Kafka Connect-based architecture that supports bi-directional replication, offset translation, and consumer group migration. In a dual control plane setup, MM2 handles the data plane replication while your control plane manages topic policies, ACLs, and schema synchronization independently.

A complete MM2 configuration for a geo-partitioned deployment targeting us-east-1 and eu-west-1:

# mm2-config.properties — MirrorMaker 2 deployed as Kafka Connect cluster
clusters = east, eu
east.bootstrap.servers = kafka-east:9092
eu.bootstrap.servers   = kafka-eu:9092

# Replication rules: east → eu (active mirror)
east->eu.enabled = true
east->eu.topics = orders\..*,inventory\..*,payments\..*
east->eu.topics.exclude = .*\.internal,.*\.dlq

# Offset synchronisation — allows consumer failover without full replay
east->eu.sync.consumer.groups.enabled      = true
east->eu.emit.heartbeats.enabled           = true
east->eu.emit.checkpoints.enabled          = true
east->eu.sync.group.offsets.enabled        = true
east->eu.sync.group.offsets.interval.seconds     = 60
east->eu.emit.checkpoints.interval.seconds = 30

# Replication factor for mirror-internal topics
replication.factor                        = 3
checkpoints.topic.replication.factor      = 3
heartbeats.topic.replication.factor       = 3

# Exclude internal consumer groups
groups.exclude = connect-.*,__consumer_offsets.*

# Performance tuning
tasks.max = 4
producer.override.compression.type = lz4
producer.override.linger.ms        = 5
consumer.poll.timeout.ms           = 1000

Critical MM2 operational concerns that the documentation understates:

MM2 Connector             | Purpose                     | Key Config                           | Alert Threshold
MirrorSourceConnector     | Replicates topic data       | refresh.topics.interval.seconds=30   | Lag > 30 s
MirrorCheckpointConnector | Translates consumer offsets | emit.checkpoints.interval.seconds=30 | Checkpoint age > 5 min
MirrorHeartbeatConnector  | Cluster connectivity probe  | emit.heartbeats.interval.seconds=1   | No heartbeat > 10 s

Quorum Fencing in Practice: ZooKeeper, etcd, and Custom Implementations

Fencing tokens are the foundational primitive that prevents split-brain writes during failover. The invariant is simple: any operation that could cause data divergence must carry a monotonically increasing fencing token, and the resource being written must reject any operation whose token is lower than the highest it has seen. Without fencing, two control-plane instances that each believe themselves to be primary will issue conflicting schema changes or projection-restart commands, producing silent data divergence that surfaces hours later during replay.

A ZooKeeper-based leader election with fencing token generation using Apache Curator:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.leader.LeaderSelector;
import org.apache.curator.framework.recipes.leader.LeaderSelectorListenerAdapter;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.atomic.AtomicLong;

public class ControlPlaneLeaderElector extends LeaderSelectorListenerAdapter {
    private static final Logger log = LoggerFactory.getLogger(ControlPlaneLeaderElector.class);
    private final LeaderSelector leaderSelector;
    private final AtomicLong fencingToken = new AtomicLong(0);

    public ControlPlaneLeaderElector(CuratorFramework client) {
        this.leaderSelector = new LeaderSelector(
            client, "/control-plane/leader", this);
        this.leaderSelector.autoRequeue(); // re-enter election after losing leadership
    }

    @Override
    public void takeLeadership(CuratorFramework client) throws Exception {
        // Increment fencing token on every leadership acquisition
        long token = fencingToken.incrementAndGet();
        client.setData()
              .withVersion(-1)
              .forPath("/control-plane/fence",
                       Long.toString(token).getBytes());
        log.info("Acquired leadership, fencing token={}", token);
        try {
            runControlPlane(token); // blocks until leadership is lost
        } finally {
            log.info("Lost leadership — fencing token {} invalidated", token);
        }
    }

    public boolean validateToken(long incoming) {
        return incoming >= fencingToken.get();
    }

    private void runControlPlane(long token) throws Exception {
        // region-scoped reconciliation loop; blocks until leadership is lost
    }
}

// etcd-based election using Jetcd (preferred for KRaft-mode Kafka deployments)
import io.etcd.jetcd.ByteSequence;
import io.etcd.jetcd.Client;
import io.etcd.jetcd.Election;
import io.etcd.jetcd.election.CampaignResponse;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import static java.nio.charset.StandardCharsets.UTF_8;

public class EtcdLeaderElector {
    private static final Logger log = LoggerFactory.getLogger(EtcdLeaderElector.class);
    private static final String ELECTION = "/control-plane-election";
    private final Client client;
    private final Election electionClient;

    public EtcdLeaderElector(Client client) {
        this.client = client;
        this.electionClient = client.getElectionClient();
    }

    public void campaign(String instanceId) throws Exception {
        ByteSequence key = ByteSequence.from(ELECTION, UTF_8);
        ByteSequence val = ByteSequence.from(instanceId, UTF_8);
        // A lease keeps the leadership key alive; it expires if this instance dies
        long leaseId = client.getLeaseClient().grant(10).get().getID();
        // campaign() blocks until this instance wins the election
        CampaignResponse response = electionClient.campaign(key, leaseId, val).get();
        // etcd revision is the fencing token — globally monotonically increasing
        long fenceToken = response.getLeader().getRevision();
        log.info("Won election, etcd revision {} is fencing token", fenceToken);
        runControlPlaneWithToken(fenceToken);
    }

    private void runControlPlaneWithToken(long fencingToken) {
        // hand the token to every guarded write path while leadership holds
    }
}

The etcd revision number is an excellent fencing token because etcd guarantees a globally monotonically increasing revision across all write operations in a cluster. When writing to any protected resource — Schema Registry, topic ACLs, or the GitOps policy repository — include the current revision as a header. A guard layer in each resource rejects writes whose fencing token is lower than the highest it has seen.
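
A minimal sketch of that guard layer, using a hypothetical FencedResource wrapper placed in front of each protected write path; the electors above would pass in their current token:

import java.util.concurrent.atomic.AtomicLong;

public class FencedResource {
    private final AtomicLong highestSeenToken = new AtomicLong(Long.MIN_VALUE);

    // Runs the protected write only if the caller's fencing token is at least as
    // high as the highest token this resource has already accepted.
    public void write(long fencingToken, Runnable protectedWrite) {
        long highest = highestSeenToken.updateAndGet(seen -> Math.max(seen, fencingToken));
        if (fencingToken < highest) {
            throw new IllegalStateException("Stale fencing token " + fencingToken
                + " (highest seen " + highest + "); write rejected");
        }
        protectedWrite.run();
    }
}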

Testing Geo-Partition Failures: Chaos Engineering Approaches

Chaos engineering for geo-partitioned systems requires simulating network partitions between regions, not just killing individual pods. The failure modes that matter most are: (1) complete network isolation between two regions, (2) asymmetric partitions where region A can reach region B but not vice versa, (3) partial partitions where data-plane traffic continues but control-plane API calls fail, and (4) clock skew between regions causing fencing token comparisons to behave unpredictably.

Simulating a region partition with Linux tc and iptables on test Kubernetes nodes hosting east-region Kafka brokers:

# Block all traffic from eu-region (10.2.0.0/16) on the Kafka port
iptables -I INPUT  -s 10.2.0.0/16 -p tcp --dport 9092 -j DROP
iptables -I OUTPUT -d 10.2.0.0/16 -p tcp --sport 9092 -j DROP

# Alternatively — add 500 ms latency with ±100 ms jitter to simulate a degraded link
tc qdisc add dev eth0 root handle 1: prio
tc qdisc add dev eth0 parent 1:3 handle 30: netem delay 500ms 100ms
tc filter add dev eth0 protocol ip parent 1:0 prio 3 u32 \
    match ip dst 10.2.0.0/16 flowid 1:3

# Add 5 % packet loss to stress-test MM2 retry and exactly-once semantics
tc qdisc change dev eth0 parent 1:3 handle 30: netem delay 200ms loss 5%

# Restore
tc qdisc del dev eth0 root
iptables -D INPUT  -s 10.2.0.0/16 -p tcp --dport 9092 -j DROP
iptables -D OUTPUT -d 10.2.0.0/16 -p tcp --sport 9092 -j DROP

For Kubernetes-native chaos, Chaos Mesh provides cleaner namespace-scoped experiments without requiring node-level access:
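
A sketch of such an experiment as a NetworkChaos manifest, assuming hypothetical kafka-east and kafka-eu namespaces for the two broker fleets:

# network-partition.yaml: isolate east's brokers from eu's for ten minutes
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: partition-east-from-eu
  namespace: chaos-testing
spec:
  action: partition
  mode: all
  selector:
    namespaces:
      - kafka-east
  direction: both            # drop traffic both ways; use "to" or "from" for asymmetric drills
  target:
    mode: all
    selector:
      namespaces:
        - kafka-eu
  duration: "10m"

Applying and deleting the manifest with kubectl starts and ends the partition, which keeps the drill reversible without touching node-level iptables.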

Recovery validation checklist after each chaos experiment: consumer lag returns to zero on the surviving cluster, MM2 checkpoints and heartbeats resume, replayed projections contain no duplicate events (idempotency holds), fencing tokens stay monotonic, and any paused mirror links resume cleanly.

Operational Runbook: Responding to Region Outages

A runbook without specific, timed steps is a wishlist. The following procedure targets an RTO of 15 minutes for traffic failover and an RPO of under 60 seconds of committed-but-not-yet-replicated events. These targets assume MM2 is running with checkpoint intervals of 30 seconds and heartbeat intervals of 1 second.

Time | Action | Owner | Target SLO | Verification
T+0  | Alert fires: MM2 heartbeat absent > 10 s from us-east-1 | On-call | Ack in 2 min | PagerDuty acknowledged
T+2  | Confirm partition: ping east brokers, check cloud-provider health dashboard for regional AZ status | On-call | 2 min | Health-check endpoints unreachable
T+4  | Pause MM2 links: kafka-cluster-links --alter --link east-to-eu --config "link.mode=paused" | On-call | 1 min | Connector status shows PAUSED
T+5  | Promote EU control plane: ./scripts/promote-control-plane.sh eu-west-1 (triggers etcd campaign and invalidates east fencing tokens) | On-call | 2 min | etcd leader shows eu-west-1 instance
T+7  | Redirect producers: update DNS CNAME or service-mesh weighted routing to point at EU brokers | On-call | 2 min | Producer metrics show EU as bootstrap target
T+9  | Restart projection consumers using translated offsets from MM2 checkpoints; avoid auto-offset-reset=latest | On-call | 3 min | Consumer lag reaches zero on EU cluster
T+12 | Validate business metrics: order-processing rate, error rate, projection staleness indicators on Grafana | On-call + Lead | 3 min | All dashboards green
T+15 | Post-incident communication: update status page, notify stakeholders, open post-mortem ticket | Lead | Ongoing | Status page updated; stakeholders notified

The table's RTO/RPO targets only hold if the communication steps (status-page update, stakeholder notification, post-mortem ticket) are rehearsed as part of the same runbook rather than handled as an afterthought.

Md Sanwar Hossain

Software Engineer · Java · Spring Boot · Microservices

Last updated: March 22, 2026