HBase offers built-in multi-site replication for disaster recovery (DR). Replication streams write-ahead log (WAL) edits from one cluster to one or more other clusters. Because replication is asynchronous, it provides neither automatic failover nor zero-data-loss guarantees; applications must handle high-availability logic at the architectural level.
Replication Topologies
Modern HBase supports several replication patterns:
- Master → Slave: a primary cluster replicates edits to one or more secondary clusters. Simple and widely used for DR.
- Master ↔ Master: two clusters replicate to each other. HBase prevents loops via cluster IDs and mutation tracking.
- Cyclic Replication: multiple clusters replicate in a ring or mesh, combining master-slave and master-master relationships.
The choice of topology depends on consistency requirements, operational workflows and failover plans.
How Replication Works
Replication is WAL-based. Each RegionServer tails its WAL, identifies edits marked for replication and sends them to the target cluster’s RegionServers. Replication does not copy table definitions—only WAL edits—so the target must be pre-configured with matching schema objects.
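For example, before enabling replication the destination cluster needs the same table created ahead of time. A minimal sketch from the destination cluster's shell, using the placeholder table and family names from the steps below:

create 'your_table', { NAME => 'family_name' }

Family names and settings must match the source exactly, or replicated edits will fail to apply.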
Key Requirements and Best Practices
- Version compatibility: All clusters must run compatible HBase and HDFS versions. Cross-version replication between major releases is not supported.
- Independent ZooKeeper ensembles: Do not share ZooKeeper between clusters. Each HBase deployment needs its own quorum.
- Full network connectivity: Every RegionServer in cluster A must be able to reach every RegionServer in cluster B, as well as cluster B’s ZooKeeper ensemble.
- Matching schemas: All replicated tables and column families must exist on all clusters with identical names and settings.
- WAL settings: Ensure WAL compression and encryption settings are compatible across clusters. Some features may not replicate seamlessly depending on the version.
- Deletes and TTLs: Consider increasing TTL and minimum versions on DR clusters to retain more history and support recovery scenarios.
Enabling Replication
Step 1: Enable replication in configuration
Set the following property in hbase-site.xml on every cluster participating in replication (on recent releases it defaults to true, but older deployments must set it explicitly):
hbase.replication = true
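In hbase-site.xml this corresponds to:

<property>
  <name>hbase.replication</name>
  <value>true</value>
</property>

The value is read at startup, so restart the RegionServers after changing it.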
Step 2: Enable replication on the column family
From the HBase shell:
alter 'your_table', { NAME => 'family_name', REPLICATION_SCOPE => 1 }
A replication scope of 1 marks the family’s WAL edits for replication; the default scope of 0 keeps edits local.
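To confirm the change, describe the table; the family’s attributes should now include REPLICATION_SCOPE => '1':

describe 'your_table'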
Step 3: Add a replication peer
add_peer '1', "zk1,zk2,zk3:2181:/hbase-backup"
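The second argument is the destination’s cluster key: its ZooKeeper quorum, client port and parent znode, in the form hbase.zookeeper.quorum:clientPort:zookeeper.znode.parent. Recent shell versions also accept named arguments; the peer ID and key below are the same illustrative values:

add_peer '1', CLUSTER_KEY => "zk1,zk2,zk3:2181:/hbase-backup"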
List peers:
list_peers
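Once a peer is active, the shell can report per-RegionServer source and sink metrics to confirm that edits are flowing:

status 'replication'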
Operational Considerations
Disaster recovery clusters often retain more history:
- Increase TTL on replicated tables
- Increase MIN_VERSIONS to preserve older cells
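A minimal sketch of such an adjustment on the DR cluster, reusing the placeholder names from above; the 30-day TTL and minimum of five versions are illustrative values:

alter 'your_table', { NAME => 'family_name', TTL => 2592000, MIN_VERSIONS => 5 }

TTL is specified in seconds (2592000 = 30 days); MIN_VERSIONS keeps at least that many cells per column even after the TTL expires.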
To disable a peer:
disable_peer '1'
Disabling replication does not remove previously replicated edits. To fully reset state, logs must be rolled and the peer re-added. Procedures vary by HBase version; use the appropriate WAL roll command or administrative API for your distribution.
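A minimal reset sequence, reusing the peer from Step 3; the WAL roll appears only as a comment because the exact command varies by version:

disable_peer '1'
remove_peer '1'
# roll the WALs on each RegionServer with your version's command or Admin API
add_peer '1', "zk1,zk2,zk3:2181:/hbase-backup"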
Important Notes
- Replication is asynchronous; applications must manage failover, consistency checks and cutover workflows (see the verification sketch after this list).
- Bulk loads, snapshots and filesystem-level copies are not automatically replicated unless specifically integrated.
- Rolling upgrades must maintain version compatibility across all clusters to avoid replication stalls.
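For consistency checks, HBase ships a MapReduce job that compares a table’s contents against a peer. A minimal invocation sketch, assuming the peer ID and table from the steps above and a client configured for MapReduce:

hbase org.apache.hadoop.hbase.mapreduce.replication.VerifyReplication 1 your_table

The job reports GOODROWS and BADROWS counters; the --starttime and --stoptime options bound the comparison window, which keeps runs manageable on large tables.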