Commit 8c6f863

Author: Felix Hennig (committed)
Spike/distributed component (#313)
# Description
*Please add a description here. This will become the commit message of the merge request later.*
1 parent a4b7ec9 commit 8c6f863

26 files changed (+31 −35 lines)

docs/antora.yml

Lines changed: 2 additions & 6 deletions
@@ -1,7 +1,3 @@
-name: hbase
+---
+name: home
 version: "nightly"
-title: Stackable Operator for Apache HBase
-nav:
-  - modules/getting_started/nav.adoc
-  - modules/ROOT/nav.adoc
-prerelease: true

docs/modules/ROOT/nav.adoc

Lines changed: 0 additions & 3 deletions
This file was deleted.

docs/modules/getting_started/nav.adoc

Lines changed: 0 additions & 3 deletions
This file was deleted.

docs/modules/getting_started/pages/first_steps.adoc renamed to docs/modules/hbase/pages/getting_started/first_steps.adoc

Lines changed: 16 additions & 16 deletions
@@ -1,6 +1,6 @@
 = First steps
 
-Once you have followed the steps in the xref:installation.adoc[] section to install the operator and its dependencies, you will now deploy an HBase cluster and its dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
+Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you will now deploy an HBase cluster and its dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
 
 == Setup
 
@@ -9,30 +9,30 @@ Once you have followed the steps in the xref:installation.adoc[] section to inst
 To deploy a ZooKeeper cluster create one file called `zk.yaml`:
 
 [source,yaml]
-include::example$code/zk.yaml[]
+include::example$getting_started/zk.yaml[]
 
 We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper. Create another file called `znode.yaml`:
 
 [source,yaml]
-include::example$code/znode.yaml[]
+include::example$getting_started/znode.yaml[]
 
 Apply both of these files:
 
 [source]
-include::example$code/getting_started.sh[tag=install-zk]
+include::example$getting_started/getting_started.sh[tag=install-zk]
 
 The state of the ZooKeeper cluster can be tracked with `kubectl`:
 
 [source]
-include::example$code/getting_started.sh[tag=watch-zk-rollout]
+include::example$getting_started/getting_started.sh[tag=watch-zk-rollout]
 
 === HDFS
 
 An HDFS cluster has three components: the `namenode`, the `datanode` and the `journalnode`. Create a file named `hdfs.yaml` defining 2 `namenodes` and one `datanode` and `journalnode` each:
 
 [source,yaml]
 ----
-include::example$code/hdfs.yaml[]
+include::example$getting_started/hdfs.yaml[]
 ----
 
 Where:
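The `zk.yaml` and `znode.yaml` files are pulled in via `include::` directives and are not part of this diff. As a hedged illustration only, a minimal ZookeeperCluster plus ZookeeperZnode pair typically looks roughly like this (the API version, field names and the `simple-zk`/`simple-znode` names are assumptions, not taken from this commit):

```yaml
# Hypothetical sketch of zk.yaml -- not the actual file referenced by the include above.
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: simple-zk
spec:
  servers:
    roleGroups:
      default:
        replicas: 3
---
# Hypothetical sketch of znode.yaml: the ZNode that the HDFS and HBase clusters
# use to reference ZooKeeper.
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-znode
spec:
  clusterRef:
    name: simple-zk
```

The ZNode resource points at the ZooKeeper cluster by name, which is why the guide creates the two files together before applying them.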
@@ -47,21 +47,21 @@ It should generally be safe to simply use the latest image version that is avail
 Create the actual HDFS cluster by applying the file:
 
 ----
-include::example$code/getting_started.sh[tag=install-hdfs]
+include::example$getting_started/getting_started.sh[tag=install-hdfs]
 ----
 
 Track the progress with `kubectl` as this step may take a few minutes:
 
 [source]
-include::example$code/getting_started.sh[tag=watch-hdfs-rollout]
+include::example$getting_started/getting_started.sh[tag=watch-hdfs-rollout]
 
 === HBase
 
 You can now create the HBase cluster. Create a file called `hbase.yaml` containing the following:
 
 [source,yaml]
 ----
-include::example$code/hbase.yaml[]
+include::example$getting_started/hbase.yaml[]
 ----
 
 == Verify that it works
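The included `hbase.yaml` is likewise outside this diff. A hedged sketch of the general shape of an HbaseCluster manifest follows; all field names, the resource names, and the discovery ConfigMap references are assumptions rather than content from this commit:

```yaml
# Hypothetical sketch of hbase.yaml -- not the actual file referenced by the include above.
apiVersion: hbase.stackable.tech/v1alpha1
kind: HbaseCluster
metadata:
  name: simple-hbase
spec:
  # Discovery ConfigMaps published by the HDFS cluster and the ZNode created earlier:
  hdfsConfigMapName: simple-hdfs
  zookeeperConfigMapName: simple-znode
  masters:
    roleGroups:
      default:
        replicas: 1
  regionServers:
    roleGroups:
      default:
        replicas: 1
  restServers:
    roleGroups:
      default:
        replicas: 1
```

The `restServers` role is what exposes the REST API used in the verification steps below.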
@@ -71,7 +71,7 @@ To test the cluster you will use the REST API to check its version and status, a
 First, check the cluster version with this callout:
 
 [source]
-include::example$code/getting_started.sh[tag=cluster-version]
+include::example$getting_started/getting_started.sh[tag=cluster-version]
 
 This will return the version that was specified in the HBase cluster definition:
 
@@ -81,7 +81,7 @@ This will return the version that was specified in the HBase cluster definition:
 The cluster status can be checked and formatted like this:
 
 [source]
-include::example$code/getting_started.sh[tag=cluster-status]
+include::example$getting_started/getting_started.sh[tag=cluster-status]
 
 which will display cluster metadata that looks like this (only the first region is included for the sake of readability):
 
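The concrete REST calls live in `getting_started.sh`, which this diff only references by tag. As a hedged illustration of the kind of request behind the table-creation step below, this Python sketch builds the JSON body that the standard HBase REST API accepts on `PUT /<table>/schema` (the `users` and `cf` names mirror the example on this page; the REST server host and port would be deployment-specific):

```python
import json

def table_schema(table: str, column_families: list[str]) -> str:
    """Build the JSON body for PUT /<table>/schema on the HBase REST server."""
    return json.dumps({
        "name": table,
        "ColumnSchema": [{"name": cf} for cf in column_families],
    })

# For the `users` table with a single column family `cf`:
print(table_schema("users", ["cf"]))
```

Sending this body (with `Content-Type: application/json`) to the REST server creates the table; the same endpoint with GET returns the schema, which is how the listing step verifies it.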
@@ -123,12 +123,12 @@ which will display cluster metadata that looks like this (only the first region
 You can now create a table like this:
 
 [source]
-include::example$code/getting_started.sh[tag=create-table]
+include::example$getting_started/getting_started.sh[tag=create-table]
 
 This will create a table `users` with a single column family `cf`. Its creation can be verified by listing it:
 
 [source]
-include::example$code/getting_started.sh[tag=get-table]
+include::example$getting_started/getting_started.sh[tag=get-table]
 
 [source,json]
 {
@@ -142,7 +142,7 @@ include::example$code/getting_started.sh[tag=get-table]
 An alternative way to interact with HBase is to use the https://phoenix.apache.org/index.html[Phoenix] library that is pre-installed on the Stackable HBase image (in the /stackable/phoenix directory). Use the python utility `psql.py` (found in /stackable/phoenix/bin) to create, populate and query a table called `WEB_STAT`:
 
 [source]
-include::example$code/getting_started.sh[tag=phoenix-table]
+include::example$getting_started/getting_started.sh[tag=phoenix-table]
 
 The final command will display some grouped data like this:
 
@@ -156,7 +156,7 @@ Time: 0.017 sec(s)
 Check the tables again with:
 
 [source]
-include::example$code/getting_started.sh[tag=get-table]
+include::example$getting_started/getting_started.sh[tag=get-table]
 
 This time the list includes not just `users` (created above with the REST API) and `WEB_STAT`, but several other tables too:
 
@@ -200,4 +200,4 @@ This is because Phoenix requires these `SYSTEM.` tables for its own internal map
 
 == What's next
 
-Look at the xref:ROOT:usage.adoc[Usage page] to find out more about configuring your HBase cluster.
+Look at the xref:usage.adoc[Usage page] to find out more about configuring your HBase cluster.

docs/modules/getting_started/pages/index.adoc renamed to docs/modules/hbase/pages/getting_started/index.adoc

Lines changed: 2 additions & 2 deletions
@@ -20,5 +20,5 @@ Resource sizing depends on cluster type(s), usage and scope, but as a starting p
 
 The Guide is divided into two steps:
 
-* xref:installation.adoc[Installing the Operators].
-* xref:first_steps.adoc[Setting up the HBase cluster and verifying it works].
+* xref:getting_started/installation.adoc[Installing the Operators].
+* xref:getting_started/first_steps.adoc[Setting up the HBase cluster and verifying it works].

docs/modules/getting_started/pages/installation.adoc renamed to docs/modules/hbase/pages/getting_started/installation.adoc

Lines changed: 5 additions & 5 deletions
@@ -19,13 +19,13 @@ After you have installed stackablectl run the following command to install all o
 
 [source,bash]
 ----
-include::example$code/getting_started.sh[tag=stackablectl-install-operators]
+include::example$getting_started/getting_started.sh[tag=stackablectl-install-operators]
 ----
 
 The tool will show
 
 [source]
-include::example$code/install_output.txt[]
+include::example$getting_started/install_output.txt[]
 
 
 TIP: Consult the xref:stackablectl::quickstart.adoc[] to learn more about how to use stackablectl. For example, you can use the `-k` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind].
@@ -35,17 +35,17 @@ TIP: Consult the xref:stackablectl::quickstart.adoc[] to learn more about how to
 You can also use Helm to install the operators. Add the Stackable Helm repository:
 [source,bash]
 ----
-include::example$code/getting_started.sh[tag=helm-add-repo]
+include::example$getting_started/getting_started.sh[tag=helm-add-repo]
 ----
 
 Then install the Stackable Operators:
 [source,bash]
 ----
-include::example$code/getting_started.sh[tag=helm-install-operators]
+include::example$getting_started/getting_started.sh[tag=helm-install-operators]
 ----
 
 Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the HBase cluster (as well as the CRDs for the required operators). You are now ready to deploy HBase in Kubernetes.
 
 == What's next
 
-xref:first_steps.adoc[Set up an HBase cluster] and its dependencies and xref:first_steps.adoc#_verify_that_it_works[verify that it works].
+xref:getting_started/first_steps.adoc[Set up an HBase cluster] and its dependencies and xref:getting_started/first_steps.adoc#_verify_that_it_works[verify that it works].
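The tagged commands themselves live in `getting_started.sh`, outside this diff. A hedged sketch of what the Helm route typically amounts to; the repository URL, chart names and the exact set of dependency operators are assumptions, not taken from this commit:

```shell
# Hypothetical sketch -- not the tagged script content.
# Add the Stackable Helm repository (URL is an assumption):
helm repo add stackable-stable https://repo.stackable.tech/repository/helm-stable/
helm repo update

# Install the HBase operator plus the operators for its dependencies:
helm install zookeeper-operator stackable-stable/zookeeper-operator
helm install hdfs-operator stackable-stable/hdfs-operator
helm install hbase-operator stackable-stable/hbase-operator
```

Each `helm install` deploys one operator; together they provide the CRDs that the `zk.yaml`, `hdfs.yaml` and `hbase.yaml` manifests in the first-steps guide rely on.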

docs/modules/hbase/partials/nav.adoc

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+* xref:hbase:getting_started/index.adoc[]
+** xref:hbase:getting_started/installation.adoc[]
+** xref:hbase:getting_started/first_steps.adoc[]
+* xref:hbase:configuration.adoc[]
+* xref:hbase:usage.adoc[]
+* xref:hbase:discovery.adoc[]
