content/deploy/install/services.md (+12 -12)
@@ -11,7 +11,7 @@ The final installation step is to install the Rhize services in your Kubernetes
## Prerequisites
This topic assumes you have done the following:
-- [Set up Kubernetes](/deploy/install/setup-kubernetes) and [Configured Keycloak]({{< relref "keycloak" >}}). All the prerequisites for those topics apply here.
+- [Set up Kubernetes]({{< relref "setup-kubernetes" >}}) and [Configured Keycloak]({{< relref "keycloak" >}}). All the prerequisites for those topics apply here.
- Configured load balancing for the following DNS records:
{{< reusable/default-urls >}}
@@ -123,7 +123,7 @@ If enabling the Audit Trail, also include the configuration in [Enable chang
1. Go to Keycloak UI and add all new {{< param db >}} roles to the `ADMIN` group.
@@ ... @@ If the install is successful, the Keycloak UI is available on its
-The Rhize [Audit]({{< relref "/how-to/audit">}}) service provides an audit trail for database changes. The Audit service uses PostgreSQL for storage.
+The Rhize [Audit]({{< relref "../../how-to/audit">}}) service provides an audit trail for database changes. The Audit service uses PostgreSQL for storage.
Install Audit Service with these steps:
@@ -314,11 +314,11 @@ Install Audit Service with these steps:
-For details about maintaining the Audit trail, read [Archive the PostgreSQL Audit trail]({{< relref "/deploy/maintain/audit/">}}).
+For details about maintaining the Audit trail, read [Archive the PostgreSQL Audit trail]({{< relref "../maintain/audit/">}}).
### Enable change data capture
-The Audit trail requires [change data capture (CDC)]({{<ref "track-changes">}}) to function. To enable CDC in {{< param application_name >}} BAAS, include the following values for the Helm chart overrides:
+The Audit trail requires [change data capture (CDC)]({{<relref "../../how-to/publish-subscribe/track-changes">}}) to function. To enable CDC in {{< param application_name >}} BAAS, include the following values for the Helm chart overrides:
@@ ... @@
-The [{{< param brand_name >}} calendar service]({{< relref "/how-to/work-calendars">}}) monitors work calendar definitions and creates work calendar entries in real time, both in the [Graph](#db) and time-series databases.
+The [{{< param brand_name >}} calendar service]({{< relref "../../how-to/work-calendars">}}) monitors work calendar definitions and creates work calendar entries in real time, both in the [Graph](#db) and time-series databases.
>**Requirements:** The calendar service requires the [GraphDB](#db), [Keycloak](#keycloak), and [NATS](#nats) services.
@@ -452,7 +452,7 @@ Install the calendar service with these steps:
## Optional: change service configuration
The services installed in the previous step have many parameters that you can configure for your performance and deployment requirements.
-Review the full list in the [Service configuration]({{< relref "/reference/service-config">}}) reference.
+Review the full list in the [Service configuration]({{< relref "../../reference/service-config">}}) reference.
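The override values referenced above are passed to Helm when a service is installed or upgraded. A minimal sketch only: the release name `baas`, chart reference `rhize/baas`, namespace `rhize`, and file `service-overrides.yaml` are hypothetical placeholders, not values from the Rhize docs.

```sh
# Apply configuration overrides from a values file at install or upgrade time.
# Release, chart, namespace, and file names below are placeholders.
helm upgrade --install baas rhize/baas \
  --namespace rhize \
  --values service-overrides.yaml
```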
## Troubleshoot
@@ -473,7 +473,7 @@ For particular problems, try these commands:
- **Access service through browser**
Some services are accessible through the browser.
-To access them, visit local host on the service's [default port]({{< ref "default-ports" >}}).
+To access them, visit local host on the service's [default port]({{< relref "../../reference/default-ports" >}}).
- **I installed a service too early**.
If you installed a service too early, use Helm to uninstall:
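The uninstall command itself falls outside the captured hunk. A minimal sketch, assuming a hypothetical release name `audit` in a hypothetical `rhize` namespace:

```sh
# Find the release that was installed too early, then remove it.
helm list --namespace rhize
helm uninstall audit --namespace rhize
```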
content/deploy/maintain/audit.md (+6 -6)
@@ -6,7 +6,7 @@ description: How to archive a partition of the Audit trail on your Rhize deploym
weight: 100
---
-The [audit trial]({{< relref "/how-to/audit" >}}) can generate a high volume of data, so it is a good practice to periodically _archive_ portions of it.
+The [audit trail]({{< relref "../../how-to/audit" >}}) can generate a high volume of data, so it is a good practice to periodically _archive_ portions of it.
An archive separates a portion of the data from the database and keeps it for long-term storage. This process involves the use of PostgreSQL [Table Partitions](https://www.postgresql.org/docs/current/ddl-partitioning.html).
Archiving a partition improves query speed for current data, while providing a cost-effective way to store older.
@@ -17,7 +17,7 @@ Archiving a partition improves query speed for current data, while providing a c
Before you start, ensure you have the following:
- A designated backup location, for example `~/rhize-archives/libre-audit`.
-- Access to the [Rhize Kubernetes Environment](/deploy/install/setup-kubernetes) {{% param pre_reqs %}}
+- Access to the [Rhize Kubernetes Environment]({{< relref "../install/setup-kubernetes" >}}) {{% param pre_reqs %}}
Also, before you start, confirm you are in the right context and namespace.
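A quick way to make that check, using standard kubectl commands (no Rhize-specific names assumed):

```sh
# Print the active context and the namespace it defaults to.
kubectl config current-context
kubectl config view --minify --output 'jsonpath={..namespace}'; echo
```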
@@ -66,7 +66,7 @@ To archive the PostgreSQL Audit trail, follow these steps:
## Next Steps
- For full backups of Rhize services, read how to back up:
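The archive steps themselves are not part of this diff. As an illustrative sketch only, using standard PostgreSQL partition tooling: the database name `audit`, parent table `audit_log`, and partition `audit_log_2024_01` are hypothetical, and the backup path reuses the example location from the prerequisites.

```sh
# Detach the old partition so current queries no longer touch it,
# dump it to the backup location, and drop it only after the dump is verified.
psql -d audit -c 'ALTER TABLE audit_log DETACH PARTITION audit_log_2024_01;'
pg_dump -d audit --table=audit_log_2024_01 \
  --file ~/rhize-archives/libre-audit/audit_log_2024_01.sql
psql -d audit -c 'DROP TABLE audit_log_2024_01;'
```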
content/deploy/maintain/bpmn-nodes.md (+2 -2)
@@ -6,7 +6,7 @@ description: >-
categories: ["concepts"]
---
-[{{< abbr "BPMN" >}} processes]({{< relref "/how-to/bpmn" >}}) often have longer execution durations and many steps.
+[{{< abbr "BPMN" >}} processes]({{< relref "../../how-to/bpmn" >}}) often have longer execution durations and many steps.
If a BPMN node suddenly fails (for example through a panic or loss of power),
Rhize needs to ensure that the workflow completes.
@@ -42,4 +42,4 @@ To learn more, read the NATS topic on [Disaster recovery](https://docs.nats.io/r
If an element in a BPMN workflow takes longer than 10 minutes, NATS ages the workflow out of the queue. The process continues, but if the pod executing the element dies or is interrupted, that workflow is permanently dropped.
This ten-minute execution limit should be sufficient for any element in a BPMN process.
-Processes that take longer, such as cooling or fermentation periods, should be implemented as [BPMN event triggers]({{< relref "/how-to/bpmn/bpmn-elements" >}}) or as polls that periodically check data sources between intervals of sleep.
+Processes that take longer, such as cooling or fermentation periods, should be implemented as [BPMN event triggers]({{< relref "../../how-to/bpmn/bpmn-elements" >}}) or as polls that periodically check data sources between intervals of sleep.
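As a rough illustration of the poll-and-sleep pattern that line describes (the real implementation would be BPMN elements; the endpoint, status value, and interval below are hypothetical):

```sh
# Each poll finishes in seconds, so no single step approaches the ten-minute
# NATS limit; the long wait lives in the sleep between checks.
while true; do
  status=$(curl -s https://example.com/fermenter/status)  # hypothetical data source
  if [ "$status" = "complete" ]; then
    break                                                 # condition met; let the workflow resume
  fi
  sleep 300                                               # check again in five minutes
done
```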
content/get-started/introduction.md (+2 -2)
@@ -7,7 +7,7 @@ description: A Hub to join all manufacturing data in place. Build manufacturing
weight: 1
---
-Rhize is a real-time, event-driven [manufacturing data hub]({{< relref "/explanations/manufacturing-data-hub" >}}).
+Rhize is a real-time, event-driven [manufacturing data hub]({{< relref "../explanations/manufacturing-data-hub" >}}).
It unites data analysis, event monitoring, and process execution in one platform.
Its interface and architecture are designed to conform to your processes.
We assume nothing about what your manufacturing workflows look like.
@@ -51,7 +51,7 @@ Some examples of the flexibility include:
- **Low-code interface**. Model your schema and execute processes using BPMN, a visual programming language. The visual interface makes Rhize and your manufacturing automation accessible to the widest possible audience.
- **Generic data collection**. Rhize receives data from all levels of the manufacturing process. The [NATS](https://nats.io) broker publishes and subscribes to low-level data from [MQTT](https://mqtt.org/) and [OPC-UA](https://opcfoundation.org/about/opc-technologies/opc-ua/), but the database can also receive ERP inventories and documents sent over HTTP.
-[Read about use cases]({{< relref "/use-cases" >}}).
+[Read about use cases]({{< relref "../use-cases" >}}).
| Message | The topic the message publishes to on the Rhize Broker. The topic structure follows MQTT syntax |
-| Inputs | Variables to send in the body. For messages to the Rhize broker, use the [special variable]({{< relref "/how-to/bpmn/variables">}}) `BODY`. Value can be JSON or JSONata. |
+| Inputs | Variables to send in the body. For messages to the Rhize broker, use the [special variable]({{< relref "./variables">}}) `BODY`. Value can be JSON or JSONata. |
| Headers | {{< param boilerplate.headers >}} |
| Outputs | JSON or JSONata. Optional variables to add to the {{< abbr "process variable context" >}}. |
@@ -209,7 +209,7 @@ Besides the call parameters, the Query task has the following additional fields:
-| Input response | {{% param boilerplate.jsonata_response %}}. For GraphQL operations, use this only to map values. Rely on [GQL filters]({{< relref "/how-to/gql/filter" >}}) to limit the payload. |
+| Input response | {{% param boilerplate.jsonata_response %}}. For GraphQL operations, use this only to map values. Rely on [GQL filters]({{< relref "../gql/filter" >}}) to limit the payload. |
| Headers | {{< param boilerplate.headers >}} |
### GraphQL Mutation
@@ -332,7 +332,7 @@ The outputs have the following parameters:
|Local variable name | What to name the incoming data, as it will be accessed in the parent process (that is, the key name) |
| assignment value | The value to pass from the child variable context |
-For a guide to reusing functions, read the [Reuse workflows section]({{< relref "/how-to/bpmn/create-workflow/#reuse-workflows" >}}) in the "Create workflow" guide.
+For a guide to reusing functions, read the [Reuse workflows section]({{< relref "./create-workflow/#reuse-workflows" >}}) in the "Create workflow" guide.