In part 1 we created 4 nodes running CoreOS using Vagrant and installed the components on CoreOS that Ansible needs to work. In this part, we're going to configure another key component of a Kubernetes cluster: etcd.
etcd

etcd is a distributed key-value store, which is the heart of a Kubernetes cluster as it holds the state of the cluster. The number one rule of high availability is to protect the data, so we have to cluster etcd to make it redundant and reliable.
The official site here gives very detailed instructions on how to set up a clustered etcd; we just need to convert them into an Ansible role that configures and runs etcd on the 3 master nodes.
We want to establish an SSL-protected cluster, so the first step is to generate the necessary certs and keys. We do this with Ruby code inside the Vagrantfile. What we need to generate:
- Root CA cert and key for etcd
- Server cert and key signed by the root etcd CA
- Client cert and key signed by the root etcd CA
- Peer cert and key for each master, signed by the root etcd CA
    def signTLS(is_ca:, subject:, issuer_subject: '', issuer_cert: nil, public_key:, ca_private_key:, key_usage: '', extended_key_usage: '', san: '')
Tips:
- Be careful with the subject and issuer_subject fields; they have to be consistent, i.e. the issuer_subject of a signed cert must match the subject of the CA cert that signs it.
- When generating the server certificate, you have to pass all the domain names and reachable IP addresses of your etcd nodes in the san argument, otherwise the TLS handshake will not work properly.
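To make the helper's usage concrete, here is a minimal sketch of generating the server cert with it. This is an illustration, not the original Vagrantfile code: the variables etcd_ca_cert and etcd_ca_key stand for the root CA generated earlier, the hostnames and IPs are placeholders, and it assumes signTLS returns an OpenSSL::X509::Certificate.

    require 'openssl'

    # Hypothetical sketch -- names, paths, and addresses are placeholders.
    server_key = OpenSSL::PKey::RSA.new(2048)
    server_cert = signTLS(
      is_ca: false,
      subject: '/CN=etcd-server',
      issuer_subject: '/CN=etcd-ca',   # must match the root etcd CA's subject
      issuer_cert: etcd_ca_cert,       # root etcd CA cert generated earlier
      public_key: server_key.public_key,
      ca_private_key: etcd_ca_key,     # root etcd CA private key
      key_usage: 'digitalSignature,keyEncipherment',
      extended_key_usage: 'serverAuth',
      # every name/IP a client may use to reach etcd must be listed here
      san: 'DNS:master01,DNS:master02,DNS:master03,IP:172.17.4.101,IP:172.17.4.102,IP:172.17.4.103'
    )

    File.write('ssl/etcd-server.crt', server_cert.to_pem)
    File.write('ssl/etcd-server.key', server_key.to_pem)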
The full Vagrantfile can be found here. Once we have all these in place, we use an Ansible role to set up the etcd cluster.
    ########################
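The role's tasks are truncated above. As a rough sketch, the tasks for such a role could look like the following, assuming etcd runs through the etcd-member systemd service, the certs land in /etc/ssl/etcd, and the drop-in template from the next snippet; all details here are illustrative rather than the original role:

    # roles/etcd/tasks/main.yml -- hypothetical sketch, not the original role
    - name: Create cert directory for etcd
      file:
        path: /etc/ssl/etcd
        state: directory

    - name: Copy etcd certs and keys
      copy:
        src: "{{ item }}"
        dest: /etc/ssl/etcd/
      with_items:
        - ca.crt
        - etcd-server.crt
        - etcd-server.key
        - etcd-client.crt
        - etcd-client.key

    - name: Create drop-in directory for the etcd service
      file:
        path: /etc/systemd/system/etcd-member.service.d
        state: directory

    - name: Install the cluster drop-in
      template:
        src: 40-cluster-ips.conf.j2
        dest: /etc/systemd/system/etcd-member.service.d/40-cluster-ips.conf

    - name: Enable and start etcd
      systemd:
        name: etcd-member
        enabled: yes
        state: started
        daemon_reload: yes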
The content of the drop-in file 40-cluster-ips.conf.j2 configures the etcd cluster:
    [Service]
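The body of the drop-in is truncated above. A sketch of what such a template typically contains is shown below, assuming etcd picks up its configuration from the standard ETCD_* environment variables, the inventory group is called masters, and the cert paths match the role sketch; on Container Linux's rkt-based etcd-member.service the cert directory may additionally need exposing via ETCD_SSL_DIR. All of this is illustrative, not the original template:

    [Service]
    Environment="ETCD_NAME={{ ansible_hostname }}"
    Environment="ETCD_LISTEN_CLIENT_URLS=https://0.0.0.0:2379"
    Environment="ETCD_ADVERTISE_CLIENT_URLS=https://{{ ansible_default_ipv4.address }}:2379"
    Environment="ETCD_LISTEN_PEER_URLS=https://{{ ansible_default_ipv4.address }}:2380"
    Environment="ETCD_INITIAL_ADVERTISE_PEER_URLS=https://{{ ansible_default_ipv4.address }}:2380"
    Environment="ETCD_INITIAL_CLUSTER={% for h in groups['masters'] %}{{ hostvars[h].ansible_hostname }}=https://{{ hostvars[h].ansible_default_ipv4.address }}:2380{% if not loop.last %},{% endif %}{% endfor %}"
    Environment="ETCD_INITIAL_CLUSTER_STATE=new"
    Environment="ETCD_CERT_FILE=/etc/ssl/etcd/etcd-server.crt"
    Environment="ETCD_KEY_FILE=/etc/ssl/etcd/etcd-server.key"
    Environment="ETCD_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.crt"
    Environment="ETCD_CLIENT_CERT_AUTH=true"
    Environment="ETCD_PEER_CERT_FILE=/etc/ssl/etcd/{{ ansible_hostname }}-peer.crt"
    Environment="ETCD_PEER_KEY_FILE=/etc/ssl/etcd/{{ ansible_hostname }}-peer.key"
    Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/ssl/etcd/ca.crt"
    Environment="ETCD_PEER_CLIENT_CERT_AUTH=true"

The ETCD_INITIAL_CLUSTER line is where the static configuration happens: Ansible expands the masters group into a comma-separated list of name=peer-url pairs at template time.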
The key is to set the etcd cluster IPs, certs, and keys correctly. We go with the static configuration method here because Ansible can easily gather all the masters' IPs. The full source code for the role can be found at etcd.
Once the role is ready, update the playbook and add the following section.
    ################################
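The playbook section is truncated above; in spirit it just applies the role to the master nodes, something along these lines (the masters group name and the etcd role name are assumptions):

    # Hypothetical playbook section -- group and role names are assumptions
    - hosts: masters
      roles:
        - etcd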
Now use Vagrant and Ansible to apply the changes to the masters.
    vagrant up --provision
After this is done, ssh into any of the master machines and check the etcd cluster health and member list.
    core@master03 ~ $ etcdctl --ca-file /etc/ssl/etcd/ca.crt --cert-file /etc/ssl/etcd/etcd-client.crt --key-file /etc/ssl/etcd/etcd-client.key --endpoints https://127.0.0.1:2379 cluster-health
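The member list mentioned above can be checked with the same client flags; with etcdctl's v2 commands that would be (output omitted):

    core@master03 ~ $ etcdctl --ca-file /etc/ssl/etcd/ca.crt --cert-file /etc/ssl/etcd/etcd-client.crt --key-file /etc/ssl/etcd/etcd-client.key --endpoints https://127.0.0.1:2379 member list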
By now the etcd cluster is up and running on the 3 master nodes, and we have the backbone of a Kubernetes cluster ready. In the next part, we'll write an Ansible role to configure and run kubelet on all nodes.