---
fullname: Verdi Raft
shortname: verdi-raft
opam_name: coq-verdi-raft
organization: uwplse
community: false
action: true
dune: false
coqdoc: false
synopsis: Verified implementation of the Raft distributed consensus protocol in Coq
description: |-
  Raft is a distributed consensus algorithm that is designed to be easy to understand
  and is equivalent to Paxos in fault tolerance and performance. Verdi Raft is a
  verified implementation of Raft in Coq, constructed using the Verdi framework.
  Included is a verified fault-tolerant key-value store using Raft.
publications:
- pub_url: https://homes.cs.washington.edu/~mernst/pubs/verify-distsystem-pldi2015.pdf
  pub_title: 'Verdi: A Framework for Implementing and Verifying Distributed Systems'
  pub_doi: 10.1145/2737924.2737958
- pub_url: https://homes.cs.washington.edu/~mernst/pubs/raft-proof-cpp2016.pdf
  pub_title: 'Planning for Change in a Formal Verification of the Raft Consensus Protocol'
  pub_doi: 10.1145/2854065.2854081
authors:
- name: Justin Adsuara
- name: Steve Anton
- name: Ryan Doenges
- name: Karl Palmskog
- name: Pavel Panchekha
- name: Zachary Tatlock
- name: James R. Wilcox
- name: Doug Woos
maintainers:
- name: Karl Palmskog
  nickname: palmskog
opam-file-maintainer: [email protected]
opam-file-version: dev
license:
  fullname: BSD 2-Clause "Simplified" license
  identifier: BSD-2-Clause
supported_coq_versions:
  text: 8.14 or later
  opam: '{>= "8.14"}'
tested_coq_opam_versions:
- version: dev
- version: '8.18'
- version: '8.17'
- version: '8.16'
- version: '8.15'
- version: '8.14'
dependencies:
- opam:
    name: coq-verdi
    version: '{= "dev"}'
  description: |-
    [Verdi](https://github.com/uwplse/verdi)
- opam:
    name: coq-struct-tact
    version: '{= "dev"}'
  description: |-
    [StructTact](https://github.com/uwplse/StructTact)
- opam:
    name: coq-cheerios
    version: '{= "dev"}'
  description: |-
    [Cheerios](https://github.com/uwplse/cheerios)
ci_extra_dev: true
namespace: VerdiRaft
keywords:
- name: program verification
- name: distributed algorithms
- name: fault tolerance
- name: key-value store
- name: raft
categories:
- name: Computer Science/Concurrent Systems and Protocols/Theory of concurrent systems
build: |-
  ## Optional requirements

  Executable `vard` key-value store:
  - [`OCaml`](https://ocaml.org/docs/install.html) (4.02.3 or later)
  - [`OCamlbuild`](https://github.com/ocaml/ocamlbuild)
  - [`verdi-runtime`](https://github.com/DistributedComponents/verdi-runtime)
  - [`cheerios-runtime`](https://github.com/uwplse/cheerios)

  Client for `vard`:
  - [`Python 2.7`](https://www.python.org/download/releases/2.7/)

  Integration testing of `vard`:
  - [`Python 2.7`](https://www.python.org/download/releases/2.7/)

  Unit testing of unverified `vard` code:
  - [`OUnit`](http://ounit.forge.ocamlcore.org) (2.0.0 or later)

  ## Building and installation instructions

  We recommend installing the dependencies of Verdi Raft via
  [opam](http://opam.ocaml.org/doc/Install.html):
  ```shell
  opam repo add coq-extra-dev https://coq.inria.fr/opam/extra-dev
  opam install coq-struct-tact coq-cheerios coq-verdi
  ```
  Then, run `make` in the root directory. This will compile the Raft
  implementation and proof interfaces, and check all the proofs.
  To speed up proof checking on multi-core machines, use `make -jX`,
  where `X` is at least the number of cores on your machine.

  To build the `vard` key-value store program in `extraction/vard`,
  you first need to install its requirements. Then, run `make vard`
  in the root directory. If the Coq implementation has been compiled
  as above, this simply compiles the extracted OCaml code to a native
  executable; otherwise, the implementation is extracted to OCaml and
  compiled without checking any proofs.
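  Putting the steps above together, a full build might look like this (assuming
  a 4-core machine; adjust `-j` to match your core count):
  ```shell
  make -j4    # compile the Raft implementation and check all proofs
  make vard   # compile the extracted OCaml code into extraction/vard
  ```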
documentation: |-
  ## Files

  The `Raft` and `RaftProofs` subdirectories of `theories` contain the implementation
  and verification of Raft. For each proof interface file in `Raft`, there is a
  corresponding proof file in `RaftProofs`. The files in the `Raft`
  subdirectory include:
  - `Raft.v`: an implementation of Raft in Verdi
  - `RaftRefinementInterface.v`: an application of the ghost-variable transformer
    to Raft, which tracks several ghost variables used in the
    verification of Raft
  - `CommonTheorems.v`: several useful theorems about functions used by
    the Raft implementation
  - `OneLeaderPerTermInterface.v`: a statement of Raft's *election
    safety* property; see also the corresponding proof file in `RaftProofs`
  - `CandidatesVoteForSelvesInterface.v`, `VotesCorrectInterface.v`, and
    `CroniesCorrectInterface.v`: statements of properties used by the proof in
    `OneLeaderPerTermProof.v`
  - `LogMatchingInterface.v`: a statement of Raft's *log matching*
    property; see also `LogMatchingProof.v` in `RaftProofs`
  - `LeaderSublogInterface.v`, `SortedInterface.v`, and `UniqueIndicesInterface.v`:
    statements of properties used by `LogMatchingProof.v`

  The file `EndToEndLinearizability.v` in `RaftProofs` uses the proofs of
  all proof interfaces to show Raft's *linearizability* property.
  ## The `vard` Key-Value Store

  `vard` is a simple key-value store implemented using
  Verdi. `vard` is specified and verified against Verdi's state-machine
  semantics in the `VarD.v` example system distributed with Verdi. When the Raft transformer
  is applied, `vard` can be run as a strongly consistent, fault-tolerant key-value store
  along the lines of [`etcd`](https://github.com/coreos/etcd).

  After running `make vard` in the root directory, OCaml code for `vard`
  is extracted, compiled, and linked against a Verdi shim and some `vard`-specific
  serialization and debugging code to produce a `vard.native` binary in `extraction/vard`.
  Running `make bench-vard` in `extraction/vard` will produce some
  benchmark numbers, which are largely meaningless on
  `localhost`: multiple processes writing and fsync-ing to the same disk
  and communicating over loopback do not accurately model real-world
  use cases. Running `make debug` will get you a `tmux` session where
  you can play around with a `vard` cluster in debug mode; look in
  `bench/vard.py` for a simple Python `vard` client.

  As the name suggests, `vard` is designed to be comparable to the `etcd`
  key-value store (although it currently supports far fewer
  features). To that end, we include a very simple `etcd` "client" which
  can be used for benchmarking. Running `make bench-etcd` will run the
  `vard` benchmarks against `etcd` (although see above for why these results
  are not particularly meaningful). See below for instructions on running
  both stores on a cluster to get a more useful performance
  comparison.
  ### Running `vard` on a cluster

  `vard` accepts the following command-line options:
  ```
  -me NAME           name for this node
  -port PORT         port for client commands
  -dbpath DIRECTORY  directory for storing database files
  -node NAME,IP:PORT node in the cluster
  -debug             run in debug mode
  ```
  Note that `vard` node names are integers starting from 0.

  For example, to run `vard` on a cluster with IP addresses
  `192.168.0.1`, `192.168.0.2`, and `192.168.0.3`, client (input) port 8000,
  and port 9000 for inter-node communication, use the following:
  ```
  # on 192.168.0.1
  $ ./vard.native -dbpath /tmp/vard-8000 -port 8000 -me 0 -node 0,192.168.0.1:9000 \
      -node 1,192.168.0.2:9000 -node 2,192.168.0.3:9000

  # on 192.168.0.2
  $ ./vard.native -dbpath /tmp/vard-8000 -port 8000 -me 1 -node 0,192.168.0.1:9000 \
      -node 1,192.168.0.2:9000 -node 2,192.168.0.3:9000

  # on 192.168.0.3
  $ ./vard.native -dbpath /tmp/vard-8000 -port 8000 -me 2 -node 0,192.168.0.1:9000 \
      -node 1,192.168.0.2:9000 -node 2,192.168.0.3:9000
  ```
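  Since node names are consecutive integers starting from 0, the repeated
  `-node` arguments above can be generated mechanically. A minimal sketch
  (the `node_args` helper is hypothetical, not part of the `vard` distribution):
  ```python
  def node_args(ips, peer_port=9000):
      """Build vard's repeated -node NAME,IP:PORT arguments;
      names are integers assigned in order, starting from 0."""
      args = []
      for name, ip in enumerate(ips):
          args += ["-node", f"{name},{ip}:{peer_port}"]
      return args

  print(" ".join(node_args(["192.168.0.1", "192.168.0.2", "192.168.0.3"])))
  # -node 0,192.168.0.1:9000 -node 1,192.168.0.2:9000 -node 2,192.168.0.3:9000
  ```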
  When the cluster is set up, a benchmark can be run as follows:
  ```
  # on the client machine
  $ python2 bench/setup.py --service vard --keys 50 \
      --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000"
  $ python2 bench/bench.py --service vard --keys 50 \
      --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000" \
      --threads 8 --requests 100
  ```
  ### Running `etcd` on a cluster

  We can compare numbers for `vard` and `etcd` running on the same cluster as
  follows:
  ```
  # on 192.168.0.1
  $ etcd --name=one \
      --listen-client-urls http://192.168.0.1:8000 \
      --advertise-client-urls http://192.168.0.1:8000 \
      --initial-advertise-peer-urls http://192.168.0.1:9000 \
      --listen-peer-urls http://192.168.0.1:9000 \
      --data-dir=/tmp/etcd \
      --initial-cluster "one=http://192.168.0.1:9000,two=http://192.168.0.2:9000,three=http://192.168.0.3:9000"

  # on 192.168.0.2
  $ etcd --name=two \
      --listen-client-urls http://192.168.0.2:8000 \
      --advertise-client-urls http://192.168.0.2:8000 \
      --initial-advertise-peer-urls http://192.168.0.2:9000 \
      --listen-peer-urls http://192.168.0.2:9000 \
      --data-dir=/tmp/etcd \
      --initial-cluster "one=http://192.168.0.1:9000,two=http://192.168.0.2:9000,three=http://192.168.0.3:9000"

  # on 192.168.0.3
  $ etcd --name=three \
      --listen-client-urls http://192.168.0.3:8000 \
      --advertise-client-urls http://192.168.0.3:8000 \
      --initial-advertise-peer-urls http://192.168.0.3:9000 \
      --listen-peer-urls http://192.168.0.3:9000 \
      --data-dir=/tmp/etcd \
      --initial-cluster "one=http://192.168.0.1:9000,two=http://192.168.0.2:9000,three=http://192.168.0.3:9000"

  # on the client machine
  $ python2 bench/setup.py --service etcd --keys 50 \
      --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000"
  $ python2 bench/bench.py --service etcd --keys 50 \
      --cluster "192.168.0.1:8000,192.168.0.2:8000,192.168.0.3:8000" \
      --threads 8 --requests 100
  ```
---