
Commit

Merge branch 'develop' of github.com:SANSA-Stack/SANSA-Inference into develop
LorenzBuehmann committed May 23, 2017
2 parents 7a6bca0 + 76a51d9 commit 6b06c38
Showing 1 changed file with 59 additions and 15 deletions.
74 changes: 59 additions & 15 deletions README.md
@@ -1,7 +1,30 @@


# SANSA Inference Layer
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/net.sansa-stack/sansa-inference-parent_2.11/badge.svg)](https://maven-badges.herokuapp.com/maven-central/net.sansa-stack/sansa-inference-parent_2.11)
[![Build Status](https://ci.aksw.org/jenkins/job/SANSA%20Inference%20Layer/job/develop/badge/icon)](https://ci.aksw.org/jenkins/job/SANSA%20Inference%20Layer/job/develop/)

**Table of Contents**

- [SANSA Inference Layer](#sansa-inference-layer)
  - [Structure](#structure)
    - [sansa-inference-common](#sansa-inference-common)
    - [sansa-inference-spark](#sansa-inference-spark)
    - [sansa-inference-flink](#sansa-inference-flink)
    - [sansa-inference-tests](#sansa-inference-tests)
  - [Setup](#setup)
    - [Prerequisites](#prerequisites)
    - [From source](#from-source)
    - [Using Maven pre-built artifacts](#using-maven-pre-built-artifacts)
    - [Using SBT](#using-sbt)
  - [Usage](#usage)
    - [Example](#example)
  - [Supported Reasoning Profiles](#supported-reasoning-profiles)
    - [RDFS](#rdfs)
    - [RDFS Simple](#rdfs-simple)
    - [OWL Horst](#owl-horst)


## Structure
### sansa-inference-common
* common data structures
@@ -124,27 +147,48 @@ and for Apache Flink add
where `VERSION` is the released version you want to use.
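
In SBT, for example, the corresponding declarations might look like the following sketch; the artifact names are inferred from the Maven Central badge above and should be checked against the published coordinates:

```scala
// build.sbt: a minimal sketch. The artifact names are assumptions based on
// the Maven Central badge above; VERSION stands for a released version.
libraryDependencies ++= Seq(
  "net.sansa-stack" %% "sansa-inference-spark" % "VERSION", // Apache Spark backend
  "net.sansa-stack" %% "sansa-inference-flink" % "VERSION"  // Apache Flink backend
)
```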

## Usage
Besides using the Inference API in your application code, we also provide a command-line interface with various options that allows for a convenient way to use the core reasoning algorithms:
```
RDFGraphMaterializer 0.1.0
Usage: RDFGraphMaterializer [options]
  -i, --input <path1>,<path2>,...
                           path to file or directory that contains the input files (in N-Triples format)
  -o, --out <directory>    the output directory
  --properties <property1>,<property2>,...
                           list of properties for which the transitive closure will be computed (used only for profile 'transitive')
  -p, --profile {rdfs | rdfs-simple | owl-horst | transitive}
                           the reasoning profile
  --single-file            write the output to a single file in the output directory
  --sorted                 sorted output of the triples (per file)
  --parallelism <value>    the degree of parallelism, i.e. the number of Spark partitions used in the Spark operations
  --help                   prints this usage text
```
This can easily be used when submitting the job to Spark (resp. Flink), e.g. for Spark

```bash
/PATH/TO/SPARK/bin/spark-submit [spark-options] /PATH/TO/INFERENCE-SPARK-DISTRIBUTION/FILE.jar [inference-api-arguments]
```
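
For instance, a local test run might look like this; the jar name here is a placeholder, and it is assumed that the distribution jar sets the main class in its manifest:

```bash
# hypothetical local Spark run; adjust master, memory and jar name to your setup
/PATH/TO/SPARK/bin/spark-submit --master "local[4]" \
  sansa-inference-spark_2.11-VERSION-dist.jar \
  -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs
```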

and for Flink

```bash
/PATH/TO/FLINK/bin/flink run [flink-options] /PATH/TO/INFERENCE-FLINK-DISTRIBUTION/FILE.jar [inference-api-arguments]
```

In addition, we provide shell scripts that wrap the Spark (resp. Flink) deployment. They can be used by first setting the environment variable `SPARK_HOME` (resp. `FLINK_HOME`) and then calling
```bash
/PATH/TO/INFERENCE-DISTRIBUTION/bin/cli [inference-api-arguments]
```
(Note that setting Spark (resp. Flink) options isn't supported here; it has to be done via the corresponding config files.)
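
A minimal sketch for the Spark case (all paths are placeholders):

```bash
# point the wrapper at an existing Spark installation, then run the CLI
export SPARK_HOME=/PATH/TO/SPARK
/PATH/TO/INFERENCE-DISTRIBUTION/bin/cli -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs
```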

### Example

```bash
RDFGraphMaterializer -i /PATH/TO/FILE/test.nt -o /PATH/TO/TEST_OUTPUT_DIRECTORY/ -p rdfs
```
This will compute the RDFS materialization on the data contained in `test.nt` and write the inferred RDF graph to the given directory `TEST_OUTPUT_DIRECTORY`.
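
The same materialization can also be triggered from application code via the Inference API. The following Scala sketch is modeled on the SANSA examples of this period; package names, class names, and signatures are assumptions to be verified against the API of the release you use:

```scala
// A minimal sketch of programmatic RDFS materialization on Spark.
// Imports, class names and signatures are assumptions modeled on the
// SANSA examples of this period; verify against the release you use.
import net.sansa_stack.inference.spark.data.loader.RDFGraphLoader
import net.sansa_stack.inference.spark.data.writer.RDFGraphWriter
import net.sansa_stack.inference.spark.forwardchaining.ForwardRuleReasonerRDFS
import org.apache.spark.sql.SparkSession

object RDFSMaterializationExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RDFS materialization")
      .getOrCreate()

    // load the N-Triples input into an RDF graph (an RDD of triples)
    val graph = RDFGraphLoader.loadFromDisk(spark, "/PATH/TO/FILE/test.nt", 4)

    // apply the forward-chaining RDFS reasoner to compute the closure
    val inferredGraph = new ForwardRuleReasonerRDFS(spark.sparkContext).apply(graph)

    // write the inferred graph to the output directory
    RDFGraphWriter.writeToDisk(inferredGraph, "/PATH/TO/TEST_OUTPUT_DIRECTORY/")

    spark.stop()
  }
}
```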

## Supported Reasoning Profiles

