Merge remote-tracking branch 'origin/foundation-2021' into release-29.x
Benjamin Reed committed Dec 10, 2021
2 parents 6e0e422 + 7b4d5e0 commit 8f738b0
Showing 55 changed files with 3,346 additions and 63 deletions.
@@ -314,6 +314,10 @@ public static boolean isInetAddressInRange(final String addrString, final String
}
}

public static boolean areSameInetAddress(final byte[] leftInetAddr, final byte[] rightInetAddr) {
    return s_BYTE_ARRAY_COMPARATOR.compare(leftInetAddr, rightInetAddr) == 0;
}

public static boolean inSameScope(final InetAddress addr1, final InetAddress addr2) {
if (addr1 instanceof Inet4Address) {
return (addr2 instanceof Inet4Address);
2,366 changes: 2,366 additions & 0 deletions docs/modules/operation/images/flows/flow_integration_overview.graphml

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion docs/modules/operation/nav.adoc
@@ -129,7 +129,7 @@
** xref:elasticsearch/features/alarm-history.adoc[]
* xref:flows/introduction.adoc[]
** xref:flows/setup.adoc[]
** xref:flows/basic.adoc[]
** xref:flows/classification-engine.adoc[]
** xref:flows/aggregation.adoc[]
@@ -5,6 +5,7 @@ Configuration changes can require a restart of OpenNMS and some daemons are able

NOTE: Check the xref:reference:daemons/introduction#ga-daemons[daemon reference section] for an overview of all daemons, their related configuration files, and which ones you can reload without restarting OpenNMS.

[[daemon-reload]]
== Reload daemons by CLI

To use the reload commands in the CLI, log into the Karaf Shell on your system using:
143 changes: 143 additions & 0 deletions docs/modules/operation/pages/flows/basic.adoc
@@ -0,0 +1,143 @@

[[flows-basic]]
= Basic Flows Setup

This section describes how to get started with flows to collect, enrich (classify), persist, and visualize flows.

== Requirements

Make sure you have the following before you set up flows:

* OpenNMS up and running.
* Device(s) that send flows, visible to OpenNMS and monitored with SNMP.
* Elasticsearch cluster set up with the link:https://github.com/OpenNMS/elasticsearch-drift-plugin[Elasticsearch Drift plugin] installed on every Elasticsearch node.
** The Drift plugin persists and queries flows that {page-component-title} collects.
The Drift version must match the targeted Elasticsearch version.
** (optional) Configure Elasticsearch variables such as `search.max_buckets` or the maximum heap size (`ES_JAVA_OPTS`) if the default values are not sufficient for your volume of flows or number of nodes.
** (optional) Create a job to clean the indices so that the disk does not fill up; for example, keep X days of flows.
Filled disks are a challenging problem to address for non-Elasticsearch experts.
We recommend the Elasticsearch link:https://www.elastic.co/guide/en/elasticsearch/client/curator/current/index.html[Curator tool] to do this.
** Monitor the Elasticsearch stack in OpenNMS to get an alarm if it goes down.
* Set up OpenNMS Helm with Grafana to visualize flows.
** Configure the flow and performance data sources.
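
Before continuing, it can help to verify that the Elasticsearch cluster is reachable and that the Drift plugin is present on every node.
A quick sketch of such a check, assuming an Elasticsearch node at `elastic:9200` (adjust the host and port to your environment):

.Check Elasticsearch health and plugins
[source, console]
----
# The cluster should report green or yellow status
curl http://elastic:9200/_cluster/health?pretty

# Every node should list the Drift plugin
curl http://elastic:9200/_cat/plugins
----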

== Configure OpenNMS to communicate with Elasticsearch

OpenNMS must be able to communicate with Elasticsearch and know where to store the flows data it collects (persistence).

From a Karaf shell on your {page-component-title} instance, update `$\{OPENNMS_HOME}/etc/org.opennms.features.flows.persistence.elastic.cfg` to configure the flow persistence to use your Elasticsearch cluster:

.Connect to Karaf shell
[source, console]
----
ssh -p 8101 admin@localhost
----

.Configure Elasticsearch settings within Karaf
[source, karaf]
----
config:edit org.opennms.features.flows.persistence.elastic
config:property-set elasticUrl http://elastic:9200
config:update
----

We also recommend setting the following:

.Edit (or create) `$\{OPENNMS_HOME}/etc/org.opennms.features.flows.persistence.elastic.cfg`
[source, properties]
----
# Elasticsearch persistence configuration
elasticUrl = http://10.10.3.218:9200 <1>
connTimeout = 30000
readTimeout = 300000
settings.index.number_of_replicas = 0
settings.index.number_of_shards = 1
settings.index.refresh_interval = 10s
elasticIndexStrategy = daily
----
<1> Replace with a comma-separated list of your Elasticsearch nodes.

See <<elasticsearch/introduction.adoc#ga-elasticsearch-integration-configuration, General Elasticsearch Configuration>> for a complete set of options.
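
To confirm the persistence settings were applied, you can list the effective configuration from the same Karaf shell; the output (abbreviated here) should show the values you set:

.List the flow persistence configuration within Karaf
[source, karaf]
----
config:list "(service.pid=org.opennms.features.flows.persistence.elastic)"
----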

== Enable protocols

Update `$\{OPENNMS_HOME}/etc/telemetryd-configuration.xml` to enable one or more of the protocols you want to handle.

This example enables the NetFlow v5 protocol.
Use the same process for any of the other flow-related protocols.

[source, xml]
----
<listener name="Netflow-5-UDP-8877" class-name="org.opennms.netmgt.telemetry.listeners.UdpListener" enabled="true">
    <parameter key="port" value="8877"/>
    <parser name="Netflow-5-Parser" class-name="org.opennms.netmgt.telemetry.protocols.netflow.parser.Netflow5UdpParser" queue="Netflow-5"/>
</listener>
<queue name="Netflow-5">
    <adapter name="Netflow-5-Adapter" class-name="org.opennms.netmgt.telemetry.protocols.netflow.adapter.netflow5.Netflow5Adapter" enabled="true">
    </adapter>
</queue>
----

Send a `reloadDaemonConfig` event via the CLI to apply the changes without restarting:

[source, console]
----
$\{OPENNMS_HOME}/bin/send-event.pl -p 'daemonName Telemetryd' uei.opennms.org/internal/reloadDaemonConfig
----

This opens a UDP socket bound to `0.0.0.0:8877` to which NetFlow v5 messages are forwarded.
(See also xref:operation:admin/daemon-config-files.adoc#daemon-reload[Reload daemons by CLI].)
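
To confirm the listener is bound, you can check for the open UDP socket on the {page-component-title} host; this assumes the `ss` utility is available (`netstat -ulnp` works similarly):

.Check for the listener socket
[source, console]
----
ss -ulnp | grep 8877
----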

=== Multi-port listener

If you are monitoring multiple flow protocols, you normally need to set up a flow listener for each one, on its own UDP port.

By default, {page-component-title} enables a multi-port listener option, which monitors multiple protocols on a single UDP port (9999).
If desired, edit `$\{OPENNMS_HOME}/etc/telemetryd-configuration.xml` to change the port number or add/remove protocols.

IMPORTANT: Make sure any ports you configure for receiving flow data are added to your firewall allow list.
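
As an example, on a host that uses firewalld, allowing the default multi-port listener port might look like the following; adjust the port number to match your configuration:

.Allow flow traffic through firewalld (example)
[source, console]
----
firewall-cmd --permanent --add-port=9999/udp
firewall-cmd --reload
----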

== Enable flows on your device(s)

Configure your devices to send flows.
Refer to the manufacturer's documentation.
You may need to set the flow receiver to your {page-component-title} instance and enable sending flows on each relevant interface (for example, on a firewall).
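
For illustration only, enabling NetFlow v5 export on a Cisco IOS device might look like the following sketch; the destination address, port, and interface name are placeholders, and the exact commands vary by vendor and OS version:

.Enable NetFlow v5 export (Cisco IOS sketch)
[source, console]
----
ip flow-export version 5
ip flow-export destination 192.0.2.10 8877
interface GigabitEthernet0/1
 ip flow ingress
 ip flow egress
----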

== Link the web UI to Helm

To access flow-related graphs from the {page-component-title} web interface, you must configure a link to your instance of OpenNMS Helm.

.Connect to Karaf shell
[source, console]
----
ssh -p 8101 admin@localhost
----

.Configure Helm settings within Karaf
[source, karaf]
----
config:edit org.opennms.netmgt.flows.rest
config:property-set flowGraphUrl 'http://grafana:3000/dashboard/flows?node=$nodeId&interface=$ifIndex'
config:update
----

NOTE: This URL can optionally point to other tools as well.
It supports placeholders for `$nodeId`, `$ifIndex`, `$start`, and `$end`.

Once configured, an icon appears in the top-right corner of a resource graph for an SNMP interface when flow data exists for that interface.

*You have completed a basic flows setup.*
If you have issues, refer to the flows troubleshooting section.
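
To verify that flows are being persisted, you can count the flow documents in Elasticsearch; the `netflow-*` index pattern assumes the default index prefix, so adjust it if you changed the index settings:

.Count persisted flow documents
[source, console]
----
curl http://elastic:9200/netflow-*/_count?pretty
----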

== Beyond basic flows setup

You may want to do the following:

* *Classify data flows*
** OpenNMS resolves flows to application names.
Create rules to override the default classifications and customize them to your preferences.
See xref:flows/classification-engine.adoc#ga-flow-support-classification-engine[Application Classification].

* *Enable remote flows data collection* (Add cross-reference to Minion section.)
@@ -1,6 +1,6 @@

[[ga-flow-support-classification-engine]]
= Classification Engine
= Application Classification

The Classification Engine applies a set of user- and/or system-defined rules to each flow to classify it.
This lets users group flows by application; for example, all flows to port 80 can be marked as `http`.
37 changes: 28 additions & 9 deletions docs/modules/operation/pages/flows/introduction.adoc
@@ -1,22 +1,41 @@

[[ga-flow-support-introduction]]
= Flow Support
= Flows

{page-component-title} supports receiving, decoding, and persisting flow information sent from your network devices.
A list of currently supported protocols is available in the <<reference:telemetryd/protocols/introduction.adoc#ref-protocol, Telemetry>> section.
While flows offer a great breadth of information, the current focus of the support in {page-component-title} is aimed at:
Flows refers to the summary of network traffic sent by network devices (switches, routers, and so on).
This information includes, but is not limited to, source and destination address, source and destination port, octet count, and duration of activity.
Collecting and analyzing flows data provides a picture of network usage and helps to diagnose network issues.
Persisting flows for long-term storage can aid in forensic analysis.

* Network diagnostic: viewing the top protocols and top talkers within the context of a particular network interface.
* Forensic analysis: persisting the flows for long-term storage.
{page-component-title} provides the following:

* A platform to collect, persist, and visualize flows, with support for NetFlow versions 5 and 9, IPFIX, and sFlow
* Inventory enrichment (mapping to OpenNMS nodes)
* Application classification
* Horizontal scaling
* Enterprise reporting (generate PDF reports)
* Top K statistics by interface, application, host, and conversation, with QoS

See the <<reference:telemetryd/protocols/introduction.adoc#ref-protocol, Telemetry>> section for a list of supported protocols.

This section presents a set of procedures to set up flows, progressing from a basic environment to more complex ones:

* xref:operation:flows/basic.adoc#flows-basic[Basic setup] (out-of-the-box)
* Flows data in a distributed/remote network (add a Minion)
* Processing large volumes of flows data (add Sentinel to scale)
* Issues with flows at scale and queries taking too long (add Nephron for aggregation and streaming analytics)

.Flow integration overview
image::flows/flow_integration_overview.png[width=70%]

== How it works

At a high level:
At a high level, with a xref:operation:flows/basic.adoc#flows-basic[basic setup], OpenNMS processes flows as follows:

* <<telemetryd/introduction.adoc#ga-telemetryd, Telemetryd>> receives and decodes flows on both {page-component-title} and Minion.
* <<telemetryd/introduction.adoc#ga-telemetryd, Telemetryd>> receives and decodes flows on {page-component-title}.
* Telemetryd adapters convert the flows to a canonical flow model.
* Flows are enriched:
** Flows are tagged with an application name via the <<flows/classification-engine.adoc#ga-flow-support-classification-engine, classification engine>>.
** The <<flows/classification-engine.adoc#ga-flow-support-classification-engine, classification engine>> tags flows with an application name.
** Metadata related to associated nodes (such as IDs and categories) are also added to the flows.
* Enriched flows are persisted in Elasticsearch and/or forwarded to Kafka.
* You can use <<flows/nephron.adoc#ga-nephron, Nephron>> to aggregate flows and output aggregates to Elasticsearch, Cortex, or Kafka.
4 changes: 2 additions & 2 deletions docs/modules/operation/pages/meta-data.adoc
@@ -9,7 +9,7 @@ The metadata is a simple triad of strings containing a context, a key and the as
Each node, each interface and each service can have an arbitrary number of metadata elements assigned to it.
The only restriction is that the tuple of context and key must be unique in the element with which it is associated.

The association of metadata with nodes, interfaces, and services happens during provisioning with the use of <<reference:configuration/provisioning/detectors.adoc#ref-provisioning-meta-data, detectors>>.
The association of metadata with nodes, interfaces, and services happens during provisioning with the use of <<reference:provisioning/detectors.adoc#ref-provisioning-meta-data, detectors>>.
Users can add, query, modify, or delete metadata through the requisition editor in the web UI, or through the xref:development:rest/meta-data.adoc#metadata-rest[ReST endpoints].

A <<ga-meta-data-dsl, simple domain-specific language>> (DSL) lets users access the metadata associated with the elements they are working on, and use it as a variable in parameters and expressions.
@@ -169,7 +169,7 @@ admin@opennms>
=== Uses
The following places allow the use of the Metadata DSL:

* <<reference:configuration/provisioning/detectors.adoc#ref-provisioning-meta-data,Provisioning Detectors>>
* <<reference:provisioning/detectors.adoc#ref-provisioning-meta-data,Provisioning Detectors>>
* <<service-assurance/configuration.adoc#ga-pollerd-configuration-meta-data, Service Assurance>>
* <<performance-data-collection/collectd/collection-packages.adoc#ga-collectd-packages-services-meta-data, Performance Management>>
* <<reference:configuration/ttl-rpc.adoc#metadata-ttls, Using metadata for TTLs>>
4 changes: 2 additions & 2 deletions docs/modules/operation/pages/provisioning/detectors.adoc
@@ -5,7 +5,7 @@ Use detectors within the provisioning process to detect available services on no

== Supported detectors

For information on supported detectors and how to configure them, see xref:reference:configuration/provisioning/detectors.adoc[provisioning detectors reference section].
For information on supported detectors and how to configure them, see xref:reference:provisioning/detectors.adoc[provisioning detectors reference section].

[[ga-detector-provisioning-meta-data]]
== Metadata DSL
@@ -16,4 +16,4 @@ The syntax lets you use patterns in an expression, whereby the metadata is repla
During evaluation of an expression, the following scopes are available:

* Node metadata
* Interface metadata
* Interface metadata
@@ -129,4 +129,4 @@ Once created, you can add nodes to the requisition.

* xref:provisioning/directed-discovery.adoc#directed-discovery[Manually specify nodes to add to a requisition]
* xref:provisioning/auto-discovery.adoc#auto-discovery[Automatically discover nodes to add to a requisition]
* Customize a requisition with xref:reference:configuration/provisioning/detectors.adoc#ref-detectors[detectors] and xref:provisioning/policies.adoc#policies[policies]
* Customize a requisition with xref:reference:provisioning/detectors.adoc#ref-detectors[detectors] and xref:provisioning/policies.adoc#policies[policies]
2 changes: 1 addition & 1 deletion docs/modules/operation/pages/provisioning/policies.adoc
@@ -14,4 +14,4 @@ String values are assumed to be a substring match, unless the parameter is prefi

== Supported policies

For information on supported policies and how to configure them, see xref:reference:configuration/provisioning/policies.adoc[policies reference section].
For information on supported policies and how to configure them, see xref:reference:provisioning/policies.adoc[policies reference section].
2 changes: 1 addition & 1 deletion docs/modules/operation/pages/snmp-poller/concepts.adoc
@@ -199,7 +199,7 @@ Defaults to "2".

Besides enabling the service and configuring packages and interfaces to match your use case, you must add a policy that enables polling to the foreign source definition of the import requisition(s) for the devices on which you want to use this feature.

Use the `ENABLE_POLLING` and `DISABLE_POLLING` actions of the <<reference:configuration/provisioning/policies/snmp-interface.adoc#snmp-interface-policy, matching SNMP interface policy>> to manage which SNMP interfaces this service polls along with the appropriate `matchBehavior` and parameters for your use case.
Use the `ENABLE_POLLING` and `DISABLE_POLLING` actions of the <<reference:provisioning/policies/snmp-interface.adoc#snmp-interface-policy, matching SNMP interface policy>> to manage which SNMP interfaces this service polls along with the appropriate `matchBehavior` and parameters for your use case.

As an example, you could create a policy named pollVoIPDialPeers that marks interfaces with `ifType 104` to be polled.
Set the `action` to `ENABLE_POLLING` and `matchBehavior` to `ALL_PARAMETERS`.
44 changes: 22 additions & 22 deletions docs/modules/reference/nav.adoc
@@ -113,28 +113,28 @@
*** xref:provisioning/handlers/file.adoc[File]
*** xref:provisioning/handlers/http.adoc[HTTP]
*** xref:provisioning/handlers/vmware.adoc[VMware]
** xref:configuration/provisioning/policies.adoc[]
*** xref:configuration/provisioning/policies/ip-interface.adoc[]
*** xref:configuration/provisioning/policies/metadata.adoc[]
*** xref:configuration/provisioning/policies/node-category.adoc[]
*** xref:configuration/provisioning/policies/script.adoc[]
*** xref:configuration/provisioning/policies/snmp-interface.adoc[]
** xref:configuration/provisioning/detectors.adoc[]
*** xref:configuration/provisioning/detectors/ActiveMQDetector.adoc[ActiveMQ]
*** xref:configuration/provisioning/detectors/BgpSessionDetector.adoc[BGP Session]
*** xref:configuration/provisioning/detectors/BsfDetector.adoc[Bean Script]
*** xref:configuration/provisioning/detectors/DnsDetector.adoc[DNS]
*** xref:configuration/provisioning/detectors/FtpDetector.adoc[FTP]
*** xref:configuration/provisioning/detectors/HostResourceSWRunDetector.adoc[HostResourceSWRun]
*** xref:configuration/provisioning/detectors/HttpDetector.adoc[HTTP]
*** xref:configuration/provisioning/detectors/HttpsDetector.adoc[HTTPS]
*** xref:configuration/provisioning/detectors/ReverseDNSLookupDetector.adoc[Reverse DNS]
*** xref:configuration/provisioning/detectors/SnmpDetector.adoc[SNMP]
*** xref:configuration/provisioning/detectors/WebDetector.adoc[Web]
*** xref:configuration/provisioning/detectors/Win32ServiceDetector.adoc[Win32 Service]
*** xref:configuration/provisioning/detectors/WmiDetector.adoc[WMI]
*** xref:configuration/provisioning/detectors/WsmanDetector.adoc[WS-MAN]
*** xref:configuration/provisioning/detectors/WsmanWqlDetector.adoc[WS-MAN WQL]
** xref:provisioning/policies.adoc[]
*** xref:provisioning/policies/ip-interface.adoc[]
*** xref:provisioning/policies/metadata.adoc[]
*** xref:provisioning/policies/node-category.adoc[]
*** xref:provisioning/policies/script.adoc[]
*** xref:provisioning/policies/snmp-interface.adoc[]
** xref:provisioning/detectors.adoc[]
*** xref:provisioning/detectors/ActiveMQDetector.adoc[ActiveMQ]
*** xref:provisioning/detectors/BgpSessionDetector.adoc[BGP Session]
*** xref:provisioning/detectors/BsfDetector.adoc[Bean Script]
*** xref:provisioning/detectors/DnsDetector.adoc[DNS]
*** xref:provisioning/detectors/FtpDetector.adoc[FTP]
*** xref:provisioning/detectors/HostResourceSWRunDetector.adoc[HostResourceSWRun]
*** xref:provisioning/detectors/HttpDetector.adoc[HTTP]
*** xref:provisioning/detectors/HttpsDetector.adoc[HTTPS]
*** xref:provisioning/detectors/ReverseDNSLookupDetector.adoc[Reverse DNS]
*** xref:provisioning/detectors/SnmpDetector.adoc[SNMP]
*** xref:provisioning/detectors/WebDetector.adoc[Web]
*** xref:provisioning/detectors/Win32ServiceDetector.adoc[Win32 Service]
*** xref:provisioning/detectors/WmiDetector.adoc[WMI]
*** xref:provisioning/detectors/WsmanDetector.adoc[WS-MAN]
*** xref:provisioning/detectors/WsmanWqlDetector.adoc[WS-MAN WQL]
* xref:daemons/introduction.adoc[]
** xref:daemons/daemon-config-files/alarmd.adoc[]
** xref:daemons/daemon-config-files/collectd.adoc[]
@@ -8,7 +8,7 @@ Since this detector uses SNMP to accomplish its work, systems you use it against
Most modern SNMP agents, including most distributions of the Net-SNMP agent and the SNMP service that ship with Microsoft Windows, support this MIB.
Out-of-the-box support for HOST-RESOURCES-MIB among commercial Unix operating systems may be spotty.

NOTE: This detector implements the configuration parameters inherited from the xref:configuration/provisioning/detectors/SnmpDetector.adoc[SNMP Detector].
NOTE: This detector implements the configuration parameters inherited from the xref:provisioning/detectors/SnmpDetector.adoc[SNMP Detector].

== Detector facts

@@ -14,5 +14,5 @@ Use this detector to find and assign services based on HTTPS.

.Parameters for the HTTPS service detector
|===
| Parameters for the HTTPS detector are the same parameters as the <<configuration/provisioning/detectors/HttpDetector.adoc#HttpDetector, HttpDetector>>.
| Parameters for the HTTPS detector are the same parameters as the <<provisioning/detectors/HttpDetector.adoc#HttpDetector, HttpDetector>>.
|===