diff --git a/docs/about.md b/docs/about.md
index bbf969c..ea35889 100644
--- a/docs/about.md
+++ b/docs/about.md
@@ -1,4 +1,6 @@
-# About
+• [Home](index.md) • About
+
+# About voice2json
 
 `voice2json` was created and is currently maintained by [Michael Hansen](https://synesthesiam.com/).
 
diff --git a/docs/commands.md b/docs/commands.md
index ff5e4f0..6ecc83a 100644
--- a/docs/commands.md
+++ b/docs/commands.md
@@ -1,3 +1,5 @@
+• [Home](index.md) • Commands
+
 # Command-Line Tools
 
 ```bash
diff --git a/docs/formats.md b/docs/formats.md
index 6e52c8f..6b0214d 100644
--- a/docs/formats.md
+++ b/docs/formats.md
@@ -1,3 +1,5 @@
+• [Home](index.md) • Formats
+
 # Data Formats
 
 `voice2json` strives to use only common data formats, preferably text-based. Some artifacts generated during [training](commands.md#train-profile), such as your [language model](#language-models), are even usable by [other speech systems](https://github.com/mozilla/DeepSpeech).
diff --git a/docs/install.md b/docs/install.md
index 86ccb2a..1c45ecf 100644
--- a/docs/install.md
+++ b/docs/install.md
@@ -1,3 +1,5 @@
+• [Home](index.md) • Install
+
 # Installing voice2json
 
 `voice2json` has been tested on Ubuntu 18.04. It should be able to run on most any flavor of Linux using the [Docker image](#docker-image). It may even run on Mac OSX, but I don't have a Mac to test this out.
@@ -45,7 +47,7 @@ DEB_BUILD_ARCH=amd64
 Next, install the `.deb` file:
 
 ```bash
-$ sudo dpkg -i /path/to/voice2json_<version>_<arch>.deb
+$ sudo apt install /path/to/voice2json_<version>_<arch>.deb
 ```
 
 where `<version>` is `voice2json`'s version (probably 1.0) and `<arch>` is your build architecture.
diff --git a/docs/profiles.md b/docs/profiles.md
index 25ca5a3..a87362c 100644
--- a/docs/profiles.md
+++ b/docs/profiles.md
@@ -1,4 +1,6 @@
-# Profiles
+• [Home](index.md) • Profiles
+
+# Your Profile
 
 A `voice2json` profile contains everything necessary to recognize voice commands, including:
 
diff --git a/docs/recipes.md b/docs/recipes.md
index 7d307a9..e36fecb 100644
--- a/docs/recipes.md
+++ b/docs/recipes.md
@@ -1,3 +1,5 @@
+• [Home](index.md) • Recipes
+
 # voice2json Recipes
 
 Below are small demonstrations of how to use `voice2json` for a specific problem or as part of a larger system.
@@ -471,7 +473,7 @@ $ gst-launch-1.0 \
 ```
 
 where `<port>` matches the first command and `<command>` is [wait-wake](commands.md#wait-wake), [record-command](commands.md#record-command), or [record-examples](commands.md#record-examples).
 
-See the GStreamer [multiudpsink plugin](https://gstreamer.freedesktop.org/documentation/udp/multiudpsink.html) for streaming to multiple machines simultaneously (it also has multicast support too).
+See the GStreamer [multiudpsink plugin](https://gstreamer.freedesktop.org/documentation/udp/multiudpsink.html) for streaming to multiple machines simultaneously (it also has multicast support).
 
 ---
diff --git a/docs/sentences.md b/docs/sentences.md
index 39cb7fc..e1832a3 100644
--- a/docs/sentences.md
+++ b/docs/sentences.md
@@ -1,3 +1,5 @@
+• [Home](index.md) • Sentences
+
 # Template Language
 
 Voice commands are recognized by `voice2json` from a set of **template sentences** that you define in your [profile](profiles.md). These are stored in an [ini file](https://docs.python.org/3/library/configparser.html) (`sentences.ini`) whose section values are simplified [JSGF grammars](https://www.w3.org/TR/jsgf/). The set of all sentences *represented* in these grammars is used to create an [ARPA language model](https://cmusphinx.github.io/wiki/arpaformat/) and an intent recognizer. See [the whitepaper](whitepaper.md) for details.
diff --git a/docs/whitepaper.md b/docs/whitepaper.md
index 230309f..40c23bf 100644
--- a/docs/whitepaper.md
+++ b/docs/whitepaper.md
@@ -1,3 +1,7 @@
+• [Home](index.md) • Whitepaper
+
+# How voice2json Works
+
 At a high level, `voice2json` transforms audio data (voice commands) into JSON events.
 
 ![voice2json overview](img/overview-1.svg)