docs/modules/hbase/pages/getting_started/first_steps.adoc (+16 −16)
@@ -1,6 +1,6 @@
 = First steps
 
-Once you have followed the steps in the xref:installation.adoc[] section to install the operator and its dependencies, you will now deploy an HBase cluster and its dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
+Once you have followed the steps in the xref:getting_started/installation.adoc[] section to install the operator and its dependencies, you can deploy an HBase cluster and its dependencies. Afterwards you can <<_verify_that_it_works, verify that it works>> by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
 
 
 == Setup
@@ -9,30 +9,30 @@ Once you have followed the steps in the xref:installation.adoc[] section to inst
 To deploy a ZooKeeper cluster create one file called `zk.yaml`:
 
 [source,yaml]
-include::example$code/zk.yaml[]
+include::example$getting_started/zk.yaml[]
 
 
We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper. Create another file called `znode.yaml`:
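Stackable's ZooKeeper operator models such a ZNode as its own custom resource. As a rough sketch of what `znode.yaml` could contain (the resource names, and the exact `apiVersion`, are assumptions here — the example file included by the docs is authoritative):

[source,yaml]
----
# Hypothetical sketch of znode.yaml. Field names follow the ZookeeperZnode
# CRD as assumed here; compare against the bundled example file.
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: simple-znode        # assumed name; HDFS and HBase reference this ZNode
spec:
  clusterRef:
    name: simple-zk         # assumed name of the ZookeeperCluster from zk.yaml
----

The ZNode resource points at the ZooKeeper cluster via `clusterRef`, so HDFS and HBase never need the raw ZooKeeper connection string themselves.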
An HDFS cluster has three components: the `namenode`, the `datanode` and the `journalnode`. Create a file named `hdfs.yaml` defining two `namenodes` and one `datanode` and one `journalnode` each:
 [source,yaml]
 ----
-include::example$code/hdfs.yaml[]
+include::example$getting_started/hdfs.yaml[]
 ----
 
Where:
@@ -47,21 +47,21 @@ It should generally be safe to simply use the latest image version that is avail
Create the actual HDFS cluster by applying the file:
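The manifests created in the steps above are applied with `kubectl`. A sketch of the commands (the label selector in the `wait` command is an assumption; adjust it to the labels your cluster actually uses):

[source,shell]
----
# Apply the manifests created in the previous steps.
kubectl apply -f zk.yaml
kubectl apply -f znode.yaml
kubectl apply -f hdfs.yaml

# Wait for the HDFS pods to become ready before continuing.
# The label selector and timeout are illustrative assumptions.
kubectl wait --for=condition=ready pod \
    -l app.kubernetes.io/name=hdfs --timeout=600s
----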
An alternative way to interact with HBase is the https://phoenix.apache.org/index.html[Phoenix] library, which is pre-installed on the Stackable HBase image (in the `/stackable/phoenix` directory). Use the Python utility `psql.py` (found in `/stackable/phoenix/bin`) to create, populate and query a table called `WEB_STAT`:
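The utility runs inside one of the HBase pods. A sketch of the invocation (the pod name is a placeholder, and the example file paths are assumptions based on the sample files Phoenix ships):

[source,shell]
----
# Open a shell in an HBase pod. "simple-hbase-master-default-0" is a
# placeholder -- look up the real pod name with `kubectl get pods`.
kubectl exec -it simple-hbase-master-default-0 -- /bin/bash

# Inside the pod: create, populate and query the WEB_STAT table.
# The .sql/.csv paths are assumptions; Phoenix ships sample files like
# these under its examples directory.
/stackable/phoenix/bin/psql.py \
    /stackable/phoenix/examples/WEB_STAT.sql \
    /stackable/phoenix/examples/WEB_STAT.csv \
    /stackable/phoenix/examples/WEB_STAT_QUERIES.sql
----

`psql.py` treats `.sql` arguments as statements to execute and `.csv` arguments as data to bulk-load, so the single call creates the table, loads rows and runs the queries.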
TIP: Consult the xref:stackablectl::quickstart.adoc[] to learn more about how to use stackablectl. For example, you can use the `-k` flag to create a Kubernetes cluster with link:https://kind.sigs.k8s.io/[kind].
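For example, installing the operators with stackablectl could look like the following (the operator list and flag usage are assumptions — the linked quickstart is authoritative):

[source,shell]
----
# Install the required operators; -k additionally creates a local
# Kubernetes cluster with kind. Operator names are assumptions here --
# consult `stackablectl operator list` for the real ones.
stackablectl operator install zookeeper hdfs hbase commons secret -k
----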
@@ -35,17 +35,17 @@ TIP: Consult the xref:stackablectl::quickstart.adoc[] to learn more about how to
You can also use Helm to install the operators. Add the Stackable Helm repository:
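A sketch of the Helm commands, assuming the standard Stackable repository layout (the repository URL and chart names are assumptions — check the Stackable documentation for the current ones):

[source,shell]
----
# Add the Stackable Helm repository (URL assumed) and refresh the index.
helm repo add stackable https://repo.stackable.tech/repository/helm-stable/
helm repo update

# Install the HBase operator and the operators it depends on.
# Chart names are assumptions based on the operators mentioned above.
helm install zookeeper-operator stackable/zookeeper-operator
helm install hdfs-operator stackable/hdfs-operator
helm install commons-operator stackable/commons-operator
helm install secret-operator stackable/secret-operator
helm install hbase-operator stackable/hbase-operator
----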
Helm will deploy the operators in a Kubernetes Deployment and apply the CRDs for the HBase cluster (as well as the CRDs for the required operators). You are now ready to deploy HBase in Kubernetes.
== What's next
 
-xref:first_steps.adoc[Set up an HBase cluster] and its dependencies and xref:first_steps.adoc#_verify_that_it_works[verify that it works].
+xref:getting_started/first_steps.adoc[Set up an HBase cluster] and its dependencies and xref:getting_started/first_steps.adoc#_verify_that_it_works[verify that it works].