From 65f8ff7b725f17145a7488da34fc079cfabf0171 Mon Sep 17 00:00:00 2001
From: "Elijah C. Voigt" <elijah.caine@coreos.com>
Date: Mon, 15 May 2017 16:52:31 -0700
Subject: [PATCH 1/2] etcd: configuring etcd-member by hand.

---
 etcd/getting-started-with-etcd-manually.md | 127 +++++++++++++++++++++
 etcd/getting-started-with-etcd.md          |   3 +
 2 files changed, 130 insertions(+)
 create mode 100644 etcd/getting-started-with-etcd-manually.md

diff --git a/etcd/getting-started-with-etcd-manually.md b/etcd/getting-started-with-etcd-manually.md
new file mode 100644
index 000000000..9d755799a
--- /dev/null
+++ b/etcd/getting-started-with-etcd-manually.md
@@ -0,0 +1,127 @@
+# Setting up etcd v3 on Container Linux "by hand"
+
+The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering, how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt!
+
+**Before we begin** if you are able to use Container Linux Configs or ignition configs [to provision your Container Linux nodes][easier-setup], you should go that route. Only follow this guide if you *have* to setup etcd the 'hard' way.
+
+This tutorial outlines how to setup the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.
+
+We will deploy a simple 2 node etcd v3 cluster on two local Virtual Machines. This tutorial does not cover setting up TLS, however principles and commands in the [etcd clustering guide][etcd-clustering] carry over into this workflow.
+
+| Node # | IP              | etcd member name |
+| ------ | --------------- | ---------------- |
+| 0      | 192.168.100.100 | my-etcd-0        |
+| 1      | 192.168.100.101 | my-etcd-1        |
+
+First, run `sudo systemctl edit etcd-member` and paste the following code into the editor:
+
+```ini
+[Service]
+Environment="ETCD_IMAGE_TAG=v3.1.7"
+Environment="ETCD_OPTS=\
+  --name=\"my-etcd-0\" \
+  --listen-client-urls=\"http://192.168.100.100:2379\" \
+  --advertise-client-urls=\"http://192.168.100.100:2379\" \
+  --listen-peer-urls=\"http://192.168.100.100:2380\" \
+  --initial-advertise-peer-urls=\"http://192.168.100.100:2380\" \
+  --initial-cluster=\"my-etcd-0=http://192.168.100.100:2380,my-etcd-1=http://192.168.100.101:2380\" \
+  --initial-cluster-token=\"f7b787ea26e0c8d44033de08c2f80632\" \
+  --initial-cluster-state=\"new\""
+```
+
+Replace:
+
+| Variable                           | Value                                                                                        |
+| ---------------------------------- | -------------------------------------------------------------------------------------------- |
+| `http://192.168.100.100`           | Your first node's IP address. Found easily by running `ifconfig`.                            |
+| `http://192.168.100.101`           | The second node's IP address.                                                                |
+| `my-etcd-0`                        | The first node's name (can be whatever you want).                                            |
+| `my-etcd-1`                        | The other node's name.                                                                       |
+| `f7b787ea26e0c8d44033de08c2f80632` | The discovery token obtained from https://discovery.etcd.io/new?size=2 (generate your own!). |
+
+*If you want a cluster of more than 2 nodes, make sure `size=#` where # is the number of nodes you want. Otherwise the extra ndoes will become proxies.*
+
+1. Edit the file appropriately and save it. Run `systemctl daemon-reload`.
+2. Do the same on the other node, swapping the names and ip-addresses appropriately. It should look like this:
+
+
+```ini
+[Service]
+Environment="ETCD_IMAGE_TAG=v3.1.7"
+Environment="ETCD_OPTS=\
+  --name=\"my-etcd-1\" \
+  --listen-client-urls=\"http://192.168.100.101:2379\" \
+  --advertise-client-urls=\"http://192.168.100.101:2379\" \
+  --listen-peer-urls=\"http://192.168.100.101:2380\" \
+  --initial-advertise-peer-urls=\"http://192.168.100.101:2380\" \
+  --initial-cluster=\"my-etcd-0=http://192.168.100.100:2380,my-etcd-1=http://192.168.100.101:2380\" \
+  --initial-cluster-token=\"f7b787ea26e0c8d44033de08c2f80632\" \
+  --initial-cluster-state=\"new\""
+```
+
+*If at any point you get confused about this configuration file, keep in mind that these arguments are the same as those passed to the etcd binary when starting a cluster. With that in mind, reference the [etcd clustering guide][etcd-clustering] for help and sanity-checks.*
+
+## Verification
+
+You can verify that the services have been configured by running `systemctl cat etcd-member`. This will print the service and it's override conf to the screen. You should see your changes on both nodes.
+
+On both nodes run `systemctl enable etcd-member && systemctl start etcd-member`.
+
+If this command hangs for a very long time, <Ctrl>+c to exit out and run `journalctl -xef`. If this outputs something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)` this means there is existing data on the nodes. Since we are starting completely new nodes we will wipe away the existing data and re-start the service. Run the following on both nodes:
+
+```sh
+$ rm -rf /var/lib/etcd
+$ systemctl restart etcd-member
+```
+
+On your local machine, you should be able to run `etcdctl` commands that talk to this etcd cluster.
+
+```sh
+$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" cluster-health
+member fccad8b3e5be5a7 is healthy: got healthy result from http://192.168.100.100:2379
+member c337d56ffee02e40 is healthy: got healthy result from http://192.168.100.101:2379
+cluster is healthy
+$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" set it-works true
+true
+$ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" get it-works 
+true
+```
+
+There you have it! You have now setup etcd v3 by hand. Pat yourself on the back. Take five.
+
+## Troubleshooting
+
+In the process of setting up your etcd cluster you got it into a non-working state, you have a few options:
+
+1. Reference the [runtime configuration guide][runtime-guide].
+2. Reset your environment.
+
+Since etcd is running in a container, the second option is very easy.
+
+Start by stopping the `etcd-member` service (run these commands *on* the Container Linux nodes).
+
+```sh
+$ systemctl stop etcd-member
+$ systemctl status etcd-member
+● etcd-member.service - etcd (System Application Container)
+   Loaded: loaded (/usr/lib/systemd/system/etcd-member.service; disabled; vendor preset: disabled)
+  Drop-In: /etc/systemd/system/etcd-member.service.d
+           └─override.conf
+   Active: inactive (dead)
+     Docs: https://github.com/coreos/etcd
+```
+
+Next, delete the etcd data (again, run on the Container Linux nodes):
+
+```sh
+$ rm /var/lib/etcd2
+$ rm /var/lib/etcd
+```
+
+*If you set the etcd-member to have a custom data directory, you will need to run a different `rm` command.*
+
+Edit the etcd-member service, restart the `etcd-member` service, and basically start this guide again from the top.
+
+[runtime-guide]: https://coreos.com/etcd/docs/latest/op-guide/runtime-configuration.html
+[etcd-clustering]: https://coreos.com/etcd/docs/latest/op-guide/clustering.html
+[easier-setup]: getting-started-with-etcd.md
diff --git a/etcd/getting-started-with-etcd.md b/etcd/getting-started-with-etcd.md
index be73b7fd1..a94e78d43 100644
--- a/etcd/getting-started-with-etcd.md
+++ b/etcd/getting-started-with-etcd.md
@@ -30,6 +30,8 @@ etcd:
   initial_cluster_state:       new
 ```
 
+If you are unable to provision your machine using Container Linux configs, check out the [Setting up etcd v3 on Container Linux "by hand"][by-hand]
+
 ## Reading and writing to etcd
 
 The HTTP-based API is easy to use. This guide will show both `etcdctl` and `curl` examples.
@@ -235,3 +237,4 @@ $ curl http://127.0.0.1:2379/v2/keys/foo
 [etcd-v3-upgrade]: https://github.com/coreos/etcd/blob/master/Documentation/upgrades/upgrade_3_0.md
 [os-faq]: os-faq.md
 [setup-internal-anchor]: #setting-up-etcd
+[by-hand]: getting-started-with-etcd-manually.md

From fb51bf7ed6b356e6071b41744dbbdec20e4bb68b Mon Sep 17 00:00:00 2001
From: "Elijah C. Voigt" <elijah.caine@coreos.com>
Date: Tue, 16 May 2017 14:57:31 -0700
Subject: [PATCH 2/2] etcd: feedback on etcd-member doc.

---
 etcd/getting-started-with-etcd-manually.md | 65 ++++++++++++----------
 etcd/getting-started-with-etcd.md          |  2 +-
 2 files changed, 38 insertions(+), 29 deletions(-)

diff --git a/etcd/getting-started-with-etcd-manually.md b/etcd/getting-started-with-etcd-manually.md
index 9d755799a..0ea4dd7eb 100644
--- a/etcd/getting-started-with-etcd-manually.md
+++ b/etcd/getting-started-with-etcd-manually.md
@@ -1,19 +1,23 @@
-# Setting up etcd v3 on Container Linux "by hand"
+# Manual configuration of etcd3 on Container Linux
 
-The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering, how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt!
+The etcd v3 binary is not slated to ship with Container Linux. With this in mind, you might be wondering: how do I run the newest etcd on my Container Linux node? The short answer: systemd and rkt.
 
-**Before we begin** if you are able to use Container Linux Configs or ignition configs [to provision your Container Linux nodes][easier-setup], you should go that route. Only follow this guide if you *have* to setup etcd the 'hard' way.
+**Before we begin**: If you are able to use Container Linux Configs [to provision your Container Linux nodes][easier-setup], you should go that route. Use this guide only if you must set up etcd the *hard* way.
 
-This tutorial outlines how to setup the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.
+This tutorial outlines how to set up the newest version of etcd on a Container Linux cluster using the `etcd-member` systemd service. This service spawns a rkt container which houses the etcd process.
 
-We will deploy a simple 2 node etcd v3 cluster on two local Virtual Machines. This tutorial does not cover setting up TLS, however principles and commands in the [etcd clustering guide][etcd-clustering] carry over into this workflow.
+This guide assumes some familiarity with etcd operations, and that you have at least skimmed the [etcd clustering guide][etcd-clustering] first.
+
+We will deploy a simple 2 node etcd v3 cluster on two local virtual machines. This tutorial does not cover setting up TLS; however, the principles and commands in the [etcd clustering guide][etcd-clustering] carry over into this workflow.
 
 | Node # | IP              | etcd member name |
 | ------ | --------------- | ---------------- |
 | 0      | 192.168.100.100 | my-etcd-0        |
 | 1      | 192.168.100.101 | my-etcd-1        |
 
-First, run `sudo systemctl edit etcd-member` and paste the following code into the editor:
+These IP addresses are visible from within your two machines as well as on the host machine. Once the VMs are set up, you should be able to run `ping 192.168.100.100` and `ping 192.168.100.101` from the host.
+
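+For example, a quick sanity check from the host (illustrative; substitute your own IPs if they differ):
+
+```sh
+# Confirm both VMs are reachable from the host.
+$ ping -c 1 192.168.100.100
+$ ping -c 1 192.168.100.101
+```
+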
+SSH into your first node, run `systemctl edit etcd-member`, and paste the following code into the editor:
 
 ```ini
 [Service]
@@ -29,6 +33,8 @@ Environment="ETCD_OPTS=\
   --initial-cluster-state=\"new\""
 ```
 
+This will create a systemd unit *override* and open the new, empty file in `vi`. Paste the above code into the editor and type `:wq` to save it.
+
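+As a sanity check, you can view the override that `systemctl edit` created; the drop-in path below is the standard location (it also appears in the `systemctl status` output later in this guide):
+
+```sh
+# Print the newly created override file.
+$ cat /etc/systemd/system/etcd-member.service.d/override.conf
+[Service]
+Environment="ETCD_IMAGE_TAG=v3.1.7"
+...
+```
+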
 Replace:
 
 | Variable                           | Value                                                                                        |
@@ -39,10 +45,12 @@ Replace:
 | `my-etcd-1`                        | The other node's name.                                                                       |
 | `f7b787ea26e0c8d44033de08c2f80632` | The discovery token obtained from https://discovery.etcd.io/new?size=2 (generate your own!). |
 
-*If you want a cluster of more than 2 nodes, make sure `size=#` where # is the number of nodes you want. Otherwise the extra ndoes will become proxies.*
+> To create a cluster of more than 2 nodes, set `size=#`, where `#` is the number of nodes you wish to create. Any nodes beyond that count will join the cluster as proxies.
 
-1. Edit the file appropriately and save it. Run `systemctl daemon-reload`.
-2. Do the same on the other node, swapping the names and ip-addresses appropriately. It should look like this:
+1. Edit the service override.
+2. Save the changes.
+3. Run `systemctl daemon-reload`.
+4. Do the same on the other node, swapping the names and IP addresses appropriately. It should look something like this:
 
 
 ```ini
@@ -59,15 +67,17 @@ Environment="ETCD_OPTS=\
   --initial-cluster-state=\"new\""
 ```
 
-*If at any point you get confused about this configuration file, keep in mind that these arguments are the same as those passed to the etcd binary when starting a cluster. With that in mind, reference the [etcd clustering guide][etcd-clustering] for help and sanity-checks.*
+Note that the arguments used in this configuration file are the same as those passed to the etcd binary when starting a cluster. For help and sanity checks, see the [etcd clustering guide][etcd-clustering].
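+
+For comparison, here is a sketch of the equivalent direct invocation for the first node. On Container Linux the `etcd-member` service runs this for you inside a rkt container, so this block is illustrative only:
+
+```sh
+# Roughly the flags my-etcd-0 is started with (illustrative).
+$ etcd --name my-etcd-0 \
+    --listen-client-urls http://192.168.100.100:2379 \
+    --advertise-client-urls http://192.168.100.100:2379 \
+    --listen-peer-urls http://192.168.100.100:2380 \
+    --initial-advertise-peer-urls http://192.168.100.100:2380 \
+    --initial-cluster my-etcd-0=http://192.168.100.100:2380,my-etcd-1=http://192.168.100.101:2380 \
+    --initial-cluster-token f7b787ea26e0c8d44033de08c2f80632 \
+    --initial-cluster-state new
+```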
 
 ## Verification
 
-You can verify that the services have been configured by running `systemctl cat etcd-member`. This will print the service and it's override conf to the screen. You should see your changes on both nodes.
+1. To verify that the services have been configured, run `systemctl cat etcd-member` on the manually configured nodes. This will print the service and its override configuration to the screen. You should see the overrides on both nodes.
 
-On both nodes run `systemctl enable etcd-member && systemctl start etcd-member`.
+2. To enable the service on boot, run `systemctl enable etcd-member` on all nodes.
 
-If this command hangs for a very long time, <Ctrl>+c to exit out and run `journalctl -xef`. If this outputs something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)` this means there is existing data on the nodes. Since we are starting completely new nodes we will wipe away the existing data and re-start the service. Run the following on both nodes:
+3. To start the service, run `systemctl start etcd-member`. This command may take a while to complete because it is downloading a rkt container and setting up etcd. A quick way to confirm the container is running is shown in the sketch after this list.
+
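+Once the service is up, you can confirm that rkt launched the etcd container (a sketch; the pod UUID, image tag, and exact columns will differ on your machine):
+
+```sh
+# List running rkt pods; an etcd pod should appear.
+$ rkt list
+UUID      APP   IMAGE NAME                  STATE    ...
+5bc1e1b8  etcd  quay.io/coreos/etcd:v3.1.7  running  ...
+```
+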
+If the last command hangs for a very long time (10+ minutes), press Ctrl+c to exit the command and run `journalctl -xef`. If this outputs something like `rafthttp: request cluster ID mismatch (got 7db8ba5f405afa8d want 5030a2a4c52d7b21)`, there is existing data on the nodes. Since we are starting completely new nodes, we will wipe away the existing data and restart the service. Run the following on both nodes:
 
 ```sh
 $ rm -rf /var/lib/etcd
@@ -87,22 +97,24 @@ $ etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379"
 true
 ```
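+
+The commands above use the etcdctl v2 API. Because the cluster is running etcd v3, you can also exercise the v3 API by setting `ETCDCTL_API=3` (a sketch, assuming your local etcdctl build supports it):
+
+```sh
+# Write and read a key through the v3 API.
+$ ETCDCTL_API=3 etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" put it-works-v3 true
+OK
+$ ETCDCTL_API=3 etcdctl --endpoints="http://192.168.100.100:2379,http://192.168.100.101:2379" get it-works-v3
+it-works-v3
+true
+```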
 
-There you have it! You have now setup etcd v3 by hand. Pat yourself on the back. Take five.
+There you have it! You have now set up etcd v3 by hand. Pat yourself on the back. Take five.
 
 ## Troubleshooting
 
-In the process of setting up your etcd cluster you got it into a non-working state, you have a few options:
+If you got your etcd cluster into a non-working state in the process of setting it up, you have a few options:
 
-1. Reference the [runtime configuration guide][runtime-guide].
-2. Reset your environment.
+* Reference the [runtime configuration guide][runtime-guide].
+* Reset your environment.
 
 Since etcd is running in a container, the second option is very easy.
 
-Start by stopping the `etcd-member` service (run these commands *on* the Container Linux nodes).
+Run the following commands on the Container Linux nodes:
+
+1. `systemctl stop etcd-member` to stop the service.
+2. `systemctl status etcd-member` to verify the service has exited. The output should look like:
 
 ```sh
-$ systemctl stop etcd-member
-$ systemctl status etcd-member
 ● etcd-member.service - etcd (System Application Container)
    Loaded: loaded (/usr/lib/systemd/system/etcd-member.service; disabled; vendor preset: disabled)
   Drop-In: /etc/systemd/system/etcd-member.service.d
@@ -111,16 +123,13 @@ $ systemctl status etcd-member
      Docs: https://github.com/coreos/etcd
 ```
 
-Next, delete the etcd data (again, run on the Container Linux nodes):
-
-```sh
-$ rm /var/lib/etcd2
-$ rm /var/lib/etcd
-```
+3. `rm -rf /var/lib/etcd2` to remove the etcd v2 data.
+4. `rm -rf /var/lib/etcd` to remove the etcd v3 data.
 
-*If you set the etcd-member to have a custom data directory, you will need to run a different `rm` command.*
+> If you set a custom data directory for the etcd-member service, you will need to run a modified `rm` command.
 
-Edit the etcd-member service, restart the `etcd-member` service, and basically start this guide again from the top.
+5. Edit the etcd-member service with `systemctl edit etcd-member`.
+6. Restart the etcd-member service with `systemctl start etcd-member`.
 
 [runtime-guide]: https://coreos.com/etcd/docs/latest/op-guide/runtime-configuration.html
 [etcd-clustering]: https://coreos.com/etcd/docs/latest/op-guide/clustering.html
diff --git a/etcd/getting-started-with-etcd.md b/etcd/getting-started-with-etcd.md
index a94e78d43..ff7aaefc2 100644
--- a/etcd/getting-started-with-etcd.md
+++ b/etcd/getting-started-with-etcd.md
@@ -30,7 +30,7 @@ etcd:
   initial_cluster_state:       new
 ```
 
-If you are unable to provision your machine using Container Linux configs, check out the [Setting up etcd v3 on Container Linux "by hand"][by-hand]
+If you are unable to provision your machine using Container Linux configs, refer to [Setting up etcd v3 on Container Linux "by hand"][by-hand].
 
 ## Reading and writing to etcd