
Commit e2b9aab

fhennig and maltesander authored
Docs: new index page (#366)
* Added metadata
* Better intro and some restructuring
* Added diagram
* index page done
* Split usage guide (still messy)
* ...
* Update docs/modules/hive/pages/usage-guide/configuration-environment-overrides.adoc
* Update docs/modules/hive/pages/usage-guide/derby-example.adoc
* Update docs/modules/hive/pages/usage-guide/derby-example.adoc
* Update docs/modules/hive/pages/usage-guide/configuration-environment-overrides.adoc

Co-authored-by: Malte Sander <[email protected]>
1 parent fe38f7d commit e2b9aab

15 files changed: +409 -379 lines changed

docs/modules/hive/images/hive_overview.drawio.svg

+4

docs/modules/hive/pages/getting_started/first_steps.adoc

+1 -1
@@ -72,4 +72,4 @@ For further testing we recommend to use e.g. the python https://github.com/quint

== What's next

- Have a look at the xref:usage.adoc[] page to find out more about the features of the Operator.
+ Have a look at the xref:usage-guide/index.adoc[usage guide] to find out more about the features of the Operator.

docs/modules/hive/pages/index.adoc

+41 -20
@@ -1,32 +1,53 @@
= Stackable Operator for Apache Hive
:description: The Stackable Operator for Apache Hive is a Kubernetes operator that can manage Apache Hive metastores. Learn about its features, resources, dependencies and demos, and see the list of supported Hive versions.
:keywords: Stackable Operator, Hadoop, Apache Hive, Kubernetes, k8s, operator, engineer, big data, metadata, storage, query

- This is an operator for Kubernetes that can manage https://hive.apache.org[Apache Hive] metastores.
- The Apache Hive metastore (HMS) stores information on the location of tables and partitions in file and blob storages such as HDFS and S3.
This is an operator for Kubernetes that can manage https://hive.apache.org[Apache Hive] metastores.
The Apache Hive metastore (HMS) was originally developed as part of Apache Hive. It stores information on the location of tables and partitions in file and blob storages such as xref:hdfs:index.adoc[Apache HDFS] and S3, and is now also used by tools other than Hive to access tables stored in files.
This Operator does not support deploying Hive itself; xref:trino:index.adoc[Trino] is recommended as an alternative query engine.
- Only the metastore is supported, not Hive itself.
- There are several reasons why running Hive on Kubernetes may not be an optimal solution.
- The most obvious reason is that Hive requires YARN as an execution framework, and YARN assumes much of the same role as Kubernetes - i.e. assigning resources.
- For this reason we provide xref:trino:index.adoc[Trino] as a query engine in the Stackable Data Platform instead of Hive. Trino still uses the Hive Metastore, hence the inclusion of this operator as well.
- There are multiple tools that can use the HMS:
-
- * HiveServer2
- ** This is the "original" tool using the HMS.
- ** It offers an endpoint, where you can submit HiveQL (similar to SQL) queries.
- ** It needs a execution engine, e.g. YARN or Spark.
- *** This operator does not support running the Hive server because of the complexity needed to operate YARN on Kubernetes. YARN is a resource manager which is not meant to be running on Kubernetes as Kubernetes already manages its own resources.
- *** We offer Trino as a (often times drop-in) replacement (see below)
- * Trino
- ** Takes SQL queries and executes them against the tables, whose metadata are stored in HMS.
- ** It should offer all the capabilities Hive offers including a lot of additional functionality, such as connections to other data sources.
- * Spark
- ** Takes SQL or programmatic jobs and executes them against the tables, whose metadata are stored in HMS.
- * And others
== Getting started

Follow the xref:getting_started/index.adoc[Getting started guide], which walks you through installing the Stackable Hive Operator and its dependencies, setting up a Hive metastore and connecting it to a demo PostgreSQL database and a MinIO instance to store data in.

Afterwards you can consult the xref:usage-guide/index.adoc[] to learn more about tailoring your Hive metastore configuration to your needs, or have a look at the <<demos, demos>> for some example setups with either xref:trino:index.adoc[Trino] or xref:spark-k8s:index.adoc[Spark].

== Operator model

The Operator manages the _HiveCluster_ custom resource. The cluster implements a single `metastore` xref:home:concepts:roles-and-role-groups.adoc[role].

image::hive_overview.drawio.svg[A diagram depicting the Kubernetes resources created by the Stackable Operator for Apache Hive]

For every role group the Operator creates a ConfigMap and a StatefulSet, which can have multiple replicas (Pods). Every role group is accessible through its own Service, and there is a Service for the whole cluster.
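
For illustration, roles and role groups map directly onto the HiveCluster resource. A minimal sketch with two role groups — the group names and replica counts are made up for this example, and the database settings are borrowed from the Derby example further down in this commit:

[source,yaml]
----
apiVersion: hive.stackable.tech/v1alpha1
kind: HiveCluster
metadata:
  name: example-hive # illustrative name
spec:
  image:
    productVersion: 3.1.3
  clusterConfig:
    database: # borrowed from the Derby example; not suitable for production
      connString: jdbc:derby:;databaseName=/tmp/metastore_db;create=true
      user: APP
      password: mine
      dbType: derby
  metastore: # the single role
    roleGroups:
      default: # each role group gets its own StatefulSet, ConfigMap and Service
        replicas: 2
      spare: # a second, illustrative role group
        replicas: 1
----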

The Operator creates a xref:concepts:service_discovery.adoc[service discovery ConfigMap] for the Hive metastore instance. The discovery ConfigMap contains information on how to connect to the HMS.
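
Such a discovery ConfigMap could look roughly like the sketch below. The key name and the connection string format are assumptions based on the HMS Thrift protocol (default port 9083), not taken from this commit:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple-hive-derby # named after the HiveCluster
data:
  # assumed key and format; clients read this to find the metastore
  HIVE: thrift://simple-hive-derby-metastore.default.svc.cluster.local:9083
----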

== Dependencies

The Stackable Operator for Apache Hive depends on the Stackable xref:commons-operator:index.adoc[commons] and xref:secret-operator:index.adoc[secret] operators.

== Required external component: An SQL database

The Hive metastore requires a database to store metadata.
Consult the xref:required-external-components.adoc[required external components page] for an overview of the supported databases and minimum supported versions.

[[demos]]
== Demos

Three demos make use of the Hive metastore.

The xref:stackablectl::demos/spark-k8s-anomaly-detection-taxi-data.adoc[] and xref:stackablectl::demos/trino-taxi-data.adoc[] demos use the HMS to store metadata about taxi data. The first demo then analyzes the data using xref:spark-k8s:index.adoc[Apache Spark], the second one using xref:trino:index.adoc[Trino].

The xref:stackablectl::demos/data-lakehouse-iceberg-trino-spark.adoc[] demo is the biggest demo available. It uses both Spark and Trino for analysis.

== Why is the Hive query engine not supported?

Only the metastore is supported, not Hive itself.
There are several reasons why running Hive on Kubernetes may not be an optimal solution.
The most obvious one is that Hive requires YARN as an execution framework, and YARN assumes much of the same role as Kubernetes, i.e. assigning resources.
For this reason the Stackable Data Platform provides xref:trino:index.adoc[Trino] as a query engine instead of Hive. Trino still uses the Hive metastore, hence the inclusion of this operator. Trino should offer all the capabilities Hive offers, including a lot of additional functionality such as connections to other data sources.

Additionally, tables in the HMS can also be accessed from xref:spark-k8s:index.adoc[Apache Spark].

== Supported Versions

The Stackable Operator for Apache Hive currently supports the following versions of Hive:
@@ -1,4 +1,4 @@

- = Cluster Operation
+ = Cluster operation

Hive installations can be configured with different cluster operations like pausing reconciliation or stopping the cluster. See xref:concepts:cluster_operations.adoc[cluster operations] for more details.
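
Judging from the linked concepts page, these operations are toggled through `spec.clusterOperation` on the HiveCluster resource. A hedged sketch — the field names follow the general Stackable conventions and are not taken from this commit:

[source,yaml]
----
spec:
  clusterOperation:
    reconciliationPaused: true # operator keeps Pods running but stops applying changes
    stopped: false             # true would scale the cluster down to zero Pods
----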
docs/modules/hive/pages/usage-guide/configuration-environment-overrides.adoc

@@ -0,0 +1,92 @@

= Configuration & environment overrides

The cluster definition also supports overriding configuration properties and environment variables, either per role or per role group, where the more specific override (role group) has precedence over the less specific one (role).

IMPORTANT: Overriding certain properties which are set by the operator (such as the HTTP port) can interfere with the operator and lead to problems.

== Configuration Properties

For a role or role group, at the same level as `config`, you can specify `configOverrides` for the following files:

- `hive-site.xml`
- `security.properties`

For example, if you want to set `datanucleus.connectionPool.maxPoolSize` for the metastore to 20, adapt the `metastore` section of the cluster resource like so:

[source,yaml]
----
metastore:
  roleGroups:
    default:
      config: {}
      configOverrides:
        hive-site.xml:
          datanucleus.connectionPool.maxPoolSize: "20"
      replicas: 1
----

Just as for the `config`, it is possible to specify this at role level as well:

[source,yaml]
----
metastore:
  configOverrides:
    hive-site.xml:
      datanucleus.connectionPool.maxPoolSize: "20"
  roleGroups:
    default:
      config: {}
      replicas: 1
----

All override property values must be strings. The properties are formatted and escaped correctly into the XML file.

For a full list of configuration options we refer to the Hive https://cwiki.apache.org/confluence/display/hive/configuration+properties[Configuration Reference].

== The security.properties file

The `security.properties` file is used to configure JVM security properties. It is very seldom that users need to tweak any of these, but there is one use case that stands out and that users need to be aware of: the JVM DNS cache.

The JVM manages its own cache of successfully resolved host names as well as a cache of host names that cannot be resolved. Some products of the Stackable platform are very sensitive to the contents of these caches, and their performance is heavily affected by them. As of version 3.1.3, Apache Hive performs poorly if the positive cache is disabled. To cache resolved host names, you can configure the TTL of entries in the positive cache like this:

[source,yaml]
----
metastore:
  configOverrides:
    security.properties:
      networkaddress.cache.ttl: "30"
      networkaddress.cache.negative.ttl: "0"
----

NOTE: The operator configures DNS caching by default as shown in the example above.

For details on JVM security, see https://docs.oracle.com/en/java/javase/11/security/java-security-overview1.html.

== Environment Variables

In a similar fashion, environment variables can be (over)written. For example per role group:

[source,yaml]
----
metastore:
  roleGroups:
    default:
      config: {}
      envOverrides:
        MY_ENV_VAR: "MY_VALUE"
      replicas: 1
----

or per role:

[source,yaml]
----
metastore:
  envOverrides:
    MY_ENV_VAR: "MY_VALUE"
  roleGroups:
    default:
      config: {}
      replicas: 1
----
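
Putting both levels together, the precedence rule from the top of this page can be made concrete. In this sketch (the `analytics` role group and the values are illustrative), Pods of the `analytics` group end up with a pool size of 50, while the `default` group keeps the role-level value of 20:

[source,yaml]
----
metastore:
  configOverrides:
    hive-site.xml:
      datanucleus.connectionPool.maxPoolSize: "20" # role level: applies to all groups
  roleGroups:
    default:
      config: {}
      replicas: 1
    analytics: # illustrative role group
      configOverrides:
        hive-site.xml:
          datanucleus.connectionPool.maxPoolSize: "50" # role group level: wins over the role level
      replicas: 1
----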
@@ -0,0 +1,37 @@

= Data storage backends

Hive does not store data, only metadata. It can store metadata about data stored in various places. The Stackable Operator currently supports S3 and HDFS.

[[s3]]
== S3 support

Hive supports creating tables in S3-compatible object stores.
To use this feature you need to provide connection details for the object store using the xref:concepts:s3.adoc[S3Connection] in the top level `clusterConfig`.

An example usage can look like this:

[source,yaml]
----
clusterConfig:
  s3:
    inline:
      host: minio
      port: 9000
      accessStyle: Path
      credentials:
        secretClass: simple-hive-s3-secret-class
----
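
Instead of an `inline` definition, the xref:concepts:s3.adoc[S3 concepts page] also describes referencing a standalone S3Connection resource by name. A sketch — the resource name is illustrative:

[source,yaml]
----
clusterConfig:
  s3:
    reference: my-s3-connection # name of an S3Connection resource defined elsewhere
----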

[[hdfs]]
== Apache HDFS support

As well as S3, Hive also supports creating tables in HDFS.
You can add the HDFS connection in the top level `clusterConfig` as follows:

[source,yaml]
----
clusterConfig:
  hdfs:
    configMap: my-hdfs-cluster # Name of the HdfsCluster
----

Read about the xref:hdfs:index.adoc[Stackable Operator for Apache HDFS] to learn more about setting up HDFS.
docs/modules/hive/pages/usage-guide/derby-example.adoc

@@ -0,0 +1,142 @@

= Derby example

Please note that the version you specify is not only the version of Apache Hive you want to roll out, but is also tied to a Stackable version.
The Stackable version is the version of the underlying container image which is used to execute the processes.
For a list of available versions please check our https://repo.stackable.tech/#browse/browse:docker:v2%2Fstackable%2Fhive%2Ftags[image registry].
It should generally be safe to simply use the latest image version that is available.

.Create a single node Apache Hive Metastore cluster using Derby:
[source,yaml]
----
---
apiVersion: hive.stackable.tech/v1alpha1
kind: HiveCluster
metadata:
  name: simple-hive-derby
spec:
  image:
    productVersion: 3.1.3
  clusterConfig:
    database:
      connString: jdbc:derby:;databaseName=/tmp/metastore_db;create=true
      user: APP
      password: mine
      dbType: derby
  metastore:
    roleGroups:
      default:
        replicas: 1
----

WARNING: You should not use the Derby database in production. Derby stores data locally, which does not work in high availability setups (multiple replicas), and all data is lost after Pod restarts.

To create a single node Apache Hive Metastore (v3.1.3) cluster with Derby and S3 access, deploy a MinIO instance (or use any available S3 bucket):

[source,bash]
----
helm install minio \
  minio \
  --repo https://charts.bitnami.com/bitnami \
  --set auth.rootUser=minio-access-key \
  --set auth.rootPassword=minio-secret-key
----

In order to upload data to MinIO, we need a port-forward to access the web UI:

[source,bash]
----
kubectl port-forward service/minio 9001
----

Then connect to localhost:9001, log in with the user `minio-access-key` and password `minio-secret-key`, and create a bucket and upload data.

Deploy the Hive cluster:

[source,yaml]
----
---
apiVersion: hive.stackable.tech/v1alpha1
kind: HiveCluster
metadata:
  name: simple-hive-derby
spec:
  image:
    productVersion: 3.1.3
  clusterConfig:
    database:
      connString: jdbc:derby:;databaseName=/stackable/metastore_db;create=true
      user: APP
      password: mine
      dbType: derby
    s3:
      inline:
        host: minio
        port: 9000
        accessStyle: Path
        credentials:
          secretClass: simple-hive-s3-secret-class
  metastore:
    roleGroups:
      default:
        replicas: 1
---
apiVersion: secrets.stackable.tech/v1alpha1
kind: SecretClass
metadata:
  name: simple-hive-s3-secret-class
spec:
  backend:
    k8sSearch:
      searchNamespace:
        pod: {}
---
apiVersion: v1
kind: Secret
metadata:
  name: simple-hive-s3-secret
  labels:
    secrets.stackable.tech/class: simple-hive-s3-secret-class
stringData:
  accessKey: minio-access-key
  secretKey: minio-secret-key
----

To create a single node Apache Hive Metastore using PostgreSQL, deploy a PostgreSQL instance via helm.

[sidebar]
PostgreSQL introduced a new way to encrypt its passwords in version 10.
This is called `scram-sha-256` and has been the default since PostgreSQL 14.
Unfortunately, Hive up until the latest 3.3.x version ships with JDBC drivers that do https://wiki.postgresql.org/wiki/List_of_drivers[_not_ support] this method.
You might see an error message like this:
`The authentication type 10 is not supported.`
If this is the case, please either use an older PostgreSQL version or change its https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-PASSWORD-ENCRYPTION[`password_encryption`] setting to `md5`.

This installs PostgreSQL in version 10 to work around the issue mentioned above:

[source,bash]
----
# assumes the bitnami chart repo has been added, e.g. via:
# helm repo add bitnami https://charts.bitnami.com/bitnami
helm install hive bitnami/postgresql --version=12.1.5 \
  --set postgresqlUsername=hive \
  --set postgresqlPassword=hive \
  --set postgresqlDatabase=hive
----

.Create Hive Metastore using a PostgreSQL database
[source,yaml]
----
apiVersion: hive.stackable.tech/v1alpha1
kind: HiveCluster
metadata:
  name: simple-hive-postgres
spec:
  image:
    productVersion: 3.1.3
  clusterConfig:
    database:
      connString: jdbc:postgresql://hive-postgresql.default.svc.cluster.local:5432/hive
      user: hive
      password: hive
      dbType: postgres
  metastore:
    roleGroups:
      default:
        replicas: 1
----
docs/modules/hive/pages/usage-guide/index.adoc

@@ -0,0 +1,4 @@

= Usage guide
:page-aliases: usage.adoc

This section helps you use and configure the Stackable Operator for Apache Hive in various ways. You should already be familiar with how to set up a basic instance; follow the xref:getting_started/index.adoc[] guide to learn how to do that with all the required dependencies.
@@ -0,0 +1,18 @@

= Log aggregation

The logs can be forwarded to a Vector log aggregator by providing a discovery ConfigMap for the aggregator and by enabling the log agent:

[source,yaml]
----
spec:
  clusterConfig:
    vectorAggregatorConfigMapName: vector-aggregator-discovery
  metastore:
    config:
      logging:
        enableVectorAgent: true
----

Further information on how to configure logging can be found in xref:concepts:logging.adoc[].
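
For reference, the discovery ConfigMap named above could look roughly like this. The `ADDRESS` key and the endpoint are assumptions (a Vector aggregator listening on its native protocol port), not taken from this commit:

[source,yaml]
----
apiVersion: v1
kind: ConfigMap
metadata:
  name: vector-aggregator-discovery
data:
  ADDRESS: vector-aggregator.default.svc.cluster.local:6000 # assumed host and port
----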
@@ -0,0 +1,4 @@

= Monitoring

The managed Hive instances are automatically configured to export Prometheus metrics. See xref:operators:monitoring.adoc[] for more details.
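
If the Prometheus Operator is in use, scraping could be wired up with a ServiceMonitor along these lines. The label selector and port name are assumptions and need to be matched against the Services the operator actually creates:

[source,yaml]
----
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: hive-metastore-metrics
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: hive # assumed label; check the generated Services
  endpoints:
    - port: metrics # assumed port name
----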
