etcd HA template
This commit is contained in:
parent 4ba17efe58
commit 6542b975ca

37 templates/etcd-ha/0/README.md Normal file
@ -0,0 +1,37 @@

# Etcd

A distributed, highly-available key/value store written in Go.

### Info:

This creates an N-node etcd cluster on top of Rancher. The bootstrap process is performed by a standalone etcd discovery node; once the cluster enters a running state, this discovery service shuts itself down. The state of the etcd cluster is stored on etcd itself, as a key/value pair under the hidden key `/_state`.
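
Since `/_state` is an ordinary (if hidden) etcd key, it can be inspected directly once the cluster is up; a minimal sketch, assuming the stack's internal DNS name `etcd` used elsewhere in this README:

```
etcdctl --endpoints http://etcd:2379 get /_state
```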

Etcd node restarts and upgrades are fully supported. To maintain service stability, restart or upgrade at most `floor(N/2)` nodes at one time: in a 5-node cluster, for example, quorum is 3, so up to `floor(5/2) = 2` nodes may safely be down simultaneously. Restarting or upgrading all nodes at once will cause service downtime, though the containerized data volumes should prevent data loss. While this template can survive `floor(N/2)` node or data-volume failures while maintaining service uptime, a corrupted data volume may require manual intervention.

Scaling up an existing etcd cluster is fully automated using the [Etcd Members API](https://coreos.com/etcd/docs/2.3.0/members_api.html).
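
For reference, adding a member through that API boils down to a single POST; a hedged sketch of the kind of call the template automates (the peer IP here is hypothetical):

```
curl http://etcd:2379/v2/members -XPOST \
  -H "Content-Type: application/json" \
  -d '{"peerURLs": ["http://10.42.0.99:2380"]}'
```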

Scaling down is unsupported.

### Usage:

Select etcd from the catalog page.

Fill in the number of nodes desired. This should be an **ODD** number. Recommended configurations are 3, 5, or 7 node deployments. More nodes increase read availability while decreasing read latency; fewer nodes decrease write latency, but sacrifice read latency and availability.

Click deploy.

Once the stack is deployed, and assuming your application is deployed within it, your application can access the etcd cluster like so:

```
etcdctl --endpoints http://etcd:2379 member list
```
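
A quick smoke test from the same vantage point (the key name and value are arbitrary examples):

```
etcdctl --endpoints http://etcd:2379 set /healthcheck ok
etcdctl --endpoints http://etcd:2379 get /healthcheck
```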

On the etcd cluster itself, the ETCDCTL_ENDPOINT environment variable is already set, so you may inspect the cluster like so:

```
etcdctl member list
```

It is always possible that DNS will return the IP address of an etcd node that is dying. Your application should implement connection retry logic when it uses etcd, or alternatively provide two or more endpoints by IP address to ensure high availability.
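
For example, pinning two node IPs (hypothetical addresses here) keeps a reachable endpoint even if one node dies mid-request:

```
etcdctl --endpoints http://10.42.0.5:2379,http://10.42.0.6:2379 member list
```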

38 templates/etcd-ha/0/docker-compose.yml Normal file
@ -0,0 +1,38 @@

etcd:
  image: rancher/etcd:v2.3.0
  labels:
    io.rancher.sidekicks: data
    # try not to schedule etcd nodes on the same host
    io.rancher.scheduler.affinity:container_label_soft_ne: etcd=node
    etcd: node
  expose:
    - "2379"
    - "2380"
  environment:
    ETCDCTL_ENDPOINT: http://etcd:2379
  volumes_from:
    - data

# containerize data volume to enable restarts and upgrades
data:
  image: busybox
  command: /bin/true
  net: none
  volumes:
    - /data
  labels:
    io.rancher.container.start_once: 'true'

# Discovery containers are used for bootstrapping a cluster.
# They will shutdown once this process is completed.
etcd-discovery:
  image: rancher/etcd:v2.3.0
  command: discovery_node
  labels:
    io.rancher.container.start_once: 'true'
    io.rancher.sidekicks: bootstrap

bootstrap:
  image: rancher/etcd:v2.3.0
  command: bootstrap ${REPLICAS}
  link: container:etcd-discovery
  labels:
    io.rancher.container.start_once: 'true'
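
The `data` sidekick above exists only to own the `/data` volume, so etcd containers can be replaced during restarts and upgrades without losing their data. A minimal illustration of the same volumes-from pattern in plain Docker (container names here are hypothetical):

```
docker run -d --name etcd-data -v /data busybox /bin/true
docker run --rm --volumes-from etcd-data busybox ls /data
```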

16 templates/etcd-ha/0/rancher-compose.yml Normal file
@ -0,0 +1,16 @@

.catalog:
  name: "Etcd"
  version: "2.3.0-rancher1"
  description: |
    Distributed highly-available key-value store
  minimum_rancher_version: "v0.46.0"
  questions:
    - variable: "REPLICAS"
      description: "Number of Etcd nodes. 3, 5, or 7 are good choices"
      label: "Number of Nodes:"
      required: true
      default: 3
      type: "int"
etcd:
  retain_ip: true
  scale: ${REPLICAS}
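
For context, the `REPLICAS` answer collected by the catalog question above is interpolated into both compose files. With the default of 3, the effective values would be (an illustration, not generated output):

```
# docker-compose.yml: the bootstrap command receives the node count
command: bootstrap 3
# rancher-compose.yml: the etcd service is scaled to match
scale: 3
```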

(image file: 2.7 KiB before and after)

5 templates/etcd-ha/config.yml Normal file
@ -0,0 +1,5 @@

name: Etcd
description: |
  A highly-available key value store
version: 2.3.0-rancher1
category: Clustering

@ -1,25 +0,0 @@

# etcd (Experimental)

### Info:

This creates an N-node etcd cluster on top of Rancher. The bootstrap process is done statically, and adjustments to cluster scale must be managed manually. The cluster is available for immediate use.

### Usage:

Select etcd from the catalog page.

Fill in the number of nodes desired. This should be an **ODD** number.

Click deploy.

Once the stack is deployed, you can access the cluster in your application via its IP or DNS addresses like so:

```
etcdctl -C http://10.42.16.231:2379,.... member list
```

@ -1,10 +0,0 @@

etcd:
  image: rancher/etcd:v2.2.1-2
  labels:
    io.rancher.container.hostname_override: container_name
  ports:
    - '4001'
    - '2380'
    - '2379'
    - '7001'
  command: /opt/rancher/run.sh

@ -1,15 +0,0 @@

.catalog:
  name: "Etcd"
  version: "2.2.1-rancher1"
  description: |
    (Experimental) Distributed reliable key-value store
  minimum_rancher_version: "v0.46.0"
  questions:
    - variable: "scale"
      description: "Number of Etcd nodes"
      label: "Number of Nodes:"
      required: true
      default: 1
      type: "int"
etcd:
  scale: "${scale}"

@ -1,5 +0,0 @@

name: Etcd
description: |
  (Experimental) A highly-available key value store
version: 2.2.1-rancher1
category: Clustering