From 0ff7b0373670ad434bcefb7caa3e4c7afa0c292a Mon Sep 17 00:00:00 2001 From: Karthik Muthuraman Date: Fri, 11 Sep 2020 16:07:21 -0700 Subject: [PATCH 1/2] initial pass --- notebooks/Integrate_NLP_Libraries.ipynb | 30 ++++++++++++++++--- 1 file changed, 19 insertions(+), 11 deletions(-) diff --git a/notebooks/Integrate_NLP_Libraries.ipynb b/notebooks/Integrate_NLP_Libraries.ipynb index 75df4090..3e0983ef 100644 --- a/notebooks/Integrate_NLP_Libraries.ipynb +++ b/notebooks/Integrate_NLP_Libraries.ipynb @@ -15,8 +15,8 @@ "source": [ "# Introduction\n", "\n", - "This notebook shows how the open source library [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas) lets you use use [Pandas](https://pandas.pydata.org/) DataFrames as a bridge between multiple natural language processing libraries. \n", - "The example that we show here uses the capabilities of the [Watson Natural Language Understanding](https://www.ibm.com/cloud/watson-natural-language-understanding) service and [SpaCy](https://spacy.io/) to implement a complex NLP task." + "This notebook demonstrates the interoperability features of the open source library [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas). Specifically, we use [Pandas](https://pandas.pydata.org/) DataFrames as a bridge between multiple natural language processing libraries. \n", + "The example that we show here uses the capabilities of IBM's [Watson Natural Language Understanding](https://www.ibm.com/cloud/watson-natural-language-understanding) service and [SpaCy](https://spacy.io/) to solve a complex NLP task." ] }, { @@ -27,14 +27,14 @@ "\n", "This notebook requires a Python 3.7 or later environment with the following packages:\n", "* The dependencies listed in the [\"requirements.txt\" file for Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas/blob/master/requirements.txt)\n", - "* The [ibm_watson](https://pypi.org/project/ibm-watson/) package, available via `pip install ibm-watson`\n", - "* `spacy`\n", + "* The [ibm_watson](https://pypi.org/project/ibm-watson/) package, available via PyPI. It can be installed with a simple `pip install ibm-watson` command.\n", + "* The [spacy](https://pypi.org/project/spacy/) package, available via PyPI. It can be installed with a simple `pip install spacy` command.\n", "* `text_extensions_for_pandas`\n", "\n", "You can satisfy the dependency on `text_extensions_for_pandas` in either of two ways:\n", "\n", - "* Run `pip install text_extensions_for_pandas` before running this notebook. This command adds the library to your Python environment.\n", - "* Run this notebook out of your local copy of the Text Extensions for Pandas project's [source tree](https://github.com/CODAIT/text-extensions-for-pandas). In this case, the notebook will use the version of Text Extensions for Pandas in your local source tree **if the package is not installed in your Python environment**." + "* Run `pip install text_extensions_for_pandas` before running this notebook. This command adds the library to your Python environment from the latest PyPI release.\n", + "* Alternatively, run this notebook out of your local copy of the Text Extensions for Pandas project's [source tree](https://github.com/CODAIT/text-extensions-for-pandas). In this case, the notebook will use the version of Text Extensions for Pandas in your local source tree **if the package is not installed in your Python environment**."
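,
+ "\n",
+ "If you want to confirm which copy of the library the notebook picked up, one quick check is to inspect the resolved module path (a hypothetical snippet, not part of the original setup instructions):\n",
+ "\n",
+ "```python\n",
+ "import text_extensions_for_pandas as tp\n",
+ "\n",
+ "# The module path shows whether Python resolved the installed package\n",
+ "# or the local source tree.\n",
+ "print(tp.__file__)\n",
+ "```"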
] }, { @@ -163,7 +163,7 @@ "The [example document](https://raw.githubusercontent.com/CODAIT/text-extensions-for-pandas/master/resources/holy_grail_short.txt) that we use here is an excerpt from\n", "the plot summary for *Monty Python and the Holy Grail*, drawn from the [Wikipedia entry](https://en.wikipedia.org/wiki/Monty_Python_and_the_Holy_Grail) for that movie.\n", "\n", - "Let's show what the raw text looks like:" + "Let's preview what the raw text looks like:" ] }, { @@ -197,7 +197,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "In the code below, we instruct Watson Natural Language Understanding to perform five different kinds of analysis on the example document:\n", + "Watson Natural Language Understanding can perform multiple kinds of analysis on the example document. \n", + "\n", + "We will be looking at the following:\n", "* entities (with sentiment)\n", "* keywords (with sentiment and emotion)\n", "* relations\n", @@ -262,7 +264,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "Text Extensions for Pandas includes a function `watson_nlu_parse_response()` that turns the output of Watson NLU's `analyze()` function into a dictionary of Pandas DataFrames. Let's run our response object through that conversion." + "Text Extensions for Pandas includes a handy function `watson_nlu_parse_response()` that turns the output of Watson NLU's `analyze()` function into a dictionary of Pandas DataFrames. This makes it much easier to process the output from NLU and perform downstream operations. Let's run the NLU response object through that conversion below." ] }, { @@ -538,6 +540,9 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "\n", + "As you can see above, the output is much more organized and convenient to work with once we have it as a DataFrame.\n", + "\n", "Each row in the DataFrame contains information about a single relationship that Watson Natural Language Understanding\n", "identified in our input text. As you can see, Watson NLU returns a lot of information about each relationship.\n", "For simplicity, let's focus on three columns:\n", @@ -716,7 +721,8 @@ "For example, `SpanDtype` defines the `+` (also known as `__add__()`) operation \n", "for spans to mean \"the shortest span that completely covers both input spans\". So we can\n", "\"add\" the contents of the \"arguments.0.span\" and \"arguments.1.span\" columns of our DataFrame\n", - "to obtain a span that covers both arguments, plus the text in between them:" + "to obtain a span that covers both arguments, plus the text in between them. \n", + "The cell below demonstrates a simple `+` operation on spans.
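\n",
+ "\n",
+ "A rough sketch of what that cell computes (`relations_df` is a hypothetical name for the relations DataFrame shown above):\n",
+ "\n",
+ "```python\n",
+ "# \"Adding\" two span columns yields, row by row, the shortest span that\n",
+ "# covers both input spans plus any text between them.\n",
+ "combined = relations_df['arguments.0.span'] + relations_df['arguments.1.span']\n",
+ "```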
" ] }, { @@ -934,6 +940,8 @@ "cell_type": "markdown", "metadata": {}, "source": [ + "This makes it very easy to visually inspect relevant portions of a document and to present any findings.\n", + "\n", "You can also convert an individual element of the array into a Python object of type `Span` that\n", "represents that single span as a scalar value:" ] @@ -3502,7 +3510,7 @@ "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", - "version": "3.7.7" + "version": "3.7.1" }, "toc-autonumbering": false }, From fef28078b130cd9892b1e16daf59bfa91edc5a30 Mon Sep 17 00:00:00 2001 From: Karthik Muthuraman Date: Fri, 25 Sep 2020 15:26:30 -0700 Subject: [PATCH 2/2] add table of contents and instructions for Watson Studio --- notebooks/Integrate_NLP_Libraries.ipynb | 273 ++++++++++++++---------- 1 file changed, 162 insertions(+), 111 deletions(-) diff --git a/notebooks/Integrate_NLP_Libraries.ipynb b/notebooks/Integrate_NLP_Libraries.ipynb index 3e0983ef..6fbac942 100644 --- a/notebooks/Integrate_NLP_Libraries.ipynb +++ b/notebooks/Integrate_NLP_Libraries.ipynb @@ -16,14 +16,28 @@ "# Introduction\n", "\n", "This notebook demonstrates the interoperable capabilities of the open source library [Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas). Specifically we use [Pandas](https://pandas.pydata.org/) DataFrames as a bridge between multiple natural language processing libraries. \n", - "The example that we show here uses the capabilities of IBM's [Watson Natural Language Understanding](https://www.ibm.com/cloud/watson-natural-language-understanding) service and [SpaCy](https://spacy.io/) to solve a complex NLP task." + "The example that we show here uses the capabilities of IBM's [Watson Natural Language Understanding](https://www.ibm.com/cloud/watson-natural-language-understanding) service and [SpaCy](https://spacy.io/) to perform a number of NLP tasks such as extracting entities, relations, spans and sentiment." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ - "# Environment Setup\n", + "# Table of Contents\n", + "\n", + "* [Environment Setup](#environment-setup)\n", + "* [Set up the and use Watson Natural Language Understanding Service](#watson-nlu)\n", + "* [Manipulate Span Data with Text Extensions for Pandas](#manipulate-span)\n", + "* [Extract Additional Features with SpaCy](#spacy)\n", + "* [Combine Outputs from Various Packages](#combine-outputs)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Environment Setup \n", + "\n", "\n", "This notebook requires a Python 3.7 or later environment with the following packages:\n", "* The dependencies listed in the [\"requirements.txt\" file for Text Extensions for Pandas](https://github.com/CODAIT/text-extensions-for-pandas/blob/master/requirements.txt)\n", @@ -42,6 +56,20 @@ "execution_count": 1, "metadata": {}, "outputs": [], + "source": [ + "# Uncomment and run this cell if you are using this notebook in a cloud environment such as IBM Watson Studio or Google Colab and you want to install the required packages. 
\n", + "# Note: This will install packages to your environment so only run if you need to install these packages.\n", + "\n", + "\n", + "# Uncomment below cell to install packages\n", + "# !pip install ibm_watson spacy text_extensions_for_pandas" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], "source": [ "# Core Python libraries\n", "import json\n", @@ -76,7 +104,13 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Set up the Watson Natural Language Understanding Service\n", + "# Using the Watson Natural Language Understanding Service \n", + "\n", + "In this section, we will setup various parts of the Watson NLU service and pass our documents through the service to obtain various features and outputs from the service. \n", + "\n", + "This section is divided into subsections to setup, connect and use the service. \n", + "\n", + "## Set up the Watson Natural Language Understanding Service \n", "\n", "In this part of the notebook, we will use the Watson Natural Language Understanding (NLU) service to extract key features from our example document.\n", "\n", @@ -98,7 +132,7 @@ }, { "cell_type": "code", - "execution_count": 2, + "execution_count": 3, "metadata": {}, "outputs": [], "source": [ @@ -117,7 +151,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Connect to the Watson Natural Language Understanding Python API\n", + "## Connect to the Watson Natural Language Understanding Python API\n", "\n", "This notebook uses the IBM Watson Python SDK to perform authentication on the IBM Cloud via the \n", "`IAMAuthenticator` class. See [the IBM Watson Python SDK documentation](https://github.com/watson-developer-cloud/python-sdk#iam) for more information. \n", @@ -128,16 +162,16 @@ }, { "cell_type": "code", - "execution_count": 3, + "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/plain": [ - "" + "" ] }, - "execution_count": 3, + "execution_count": 4, "metadata": {}, "output_type": "execute_result" } @@ -155,7 +189,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Pass a Document through the Watson NLU Service\n", + "## Pass a Document through the Watson NLU Service\n", "\n", "Once you've opened a connection to the Watson NLU service, you can pass documents through \n", "the service by invoking the [`analyze()` method](https://cloud.ibm.com/apidocs/natural-language-understanding?code=python#analyze).\n", @@ -168,7 +202,7 @@ }, { "cell_type": "code", - "execution_count": 4, + "execution_count": 5, "metadata": {}, "outputs": [ { @@ -211,7 +245,7 @@ }, { "cell_type": "code", - "execution_count": 5, + "execution_count": 6, "metadata": {}, "outputs": [], "source": [ @@ -242,7 +276,7 @@ }, { "cell_type": "code", - "execution_count": 6, + "execution_count": 7, "metadata": {}, "outputs": [ { @@ -251,7 +285,7 @@ "dict_keys(['usage', 'syntax', 'semantic_roles', 'relations', 'language', 'keywords', 'entities', 'analyzed_text'])" ] }, - "execution_count": 6, + "execution_count": 7, "metadata": {}, "output_type": "execute_result" } @@ -269,7 +303,7 @@ }, { "cell_type": "code", - "execution_count": 7, + "execution_count": 8, "metadata": {}, "outputs": [ { @@ -278,7 +312,7 @@ "dict_keys(['syntax', 'entities', 'keywords', 'relations', 'semantic_roles'])" ] }, - "execution_count": 7, + "execution_count": 8, "metadata": {}, "output_type": "execute_result" } @@ -299,7 +333,7 @@ }, { "cell_type": "code", - "execution_count": 8, + "execution_count": 9, "metadata": { "scrolled": true }, @@ -363,7 +397,7 @@ " 
'entities': [{'type': 'EventCommunication', 'text': 'speaks'}]}]}]" ] }, - "execution_count": 8, + "execution_count": 9, "metadata": {}, "output_type": "execute_result" } @@ -381,7 +415,7 @@ }, { "cell_type": "code", - "execution_count": 9, + "execution_count": 10, "metadata": {}, "outputs": [ { @@ -527,7 +561,7 @@ "5 speaks " ] }, - "execution_count": 9, + "execution_count": 10, "metadata": {}, "output_type": "execute_result" } @@ -553,7 +587,7 @@ }, { "cell_type": "code", - "execution_count": 10, + "execution_count": 11, "metadata": {}, "outputs": [ { @@ -633,7 +667,7 @@ "5 affectedBy [572, 576): 'them' [562, 568): 'speaks'" ] }, - "execution_count": 10, + "execution_count": 11, "metadata": {}, "output_type": "execute_result" } @@ -647,7 +681,7 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Manipulate Span Data\n", + "# Manipulate Span Data \n", "\n", "Text Extensions for Pandas uses Pandas *extension types* to represent spans (regions of a document) and tensors (multi-dimensional arrays).\n", "For example, the \"arguments.0.span\" and \"arguments.1.span\" columns in the above DataFrame are both stored using the extension type for spans.\n", @@ -657,7 +691,7 @@ }, { "cell_type": "code", - "execution_count": 11, + "execution_count": 12, "metadata": {}, "outputs": [ { @@ -669,7 +703,7 @@ "dtype: object" ] }, - "execution_count": 11, + "execution_count": 12, "metadata": {}, "output_type": "execute_result" } @@ -694,7 +728,7 @@ }, { "cell_type": "code", - "execution_count": 12, + "execution_count": 13, "metadata": {}, "outputs": [ { @@ -727,7 +761,7 @@ }, { "cell_type": "code", - "execution_count": 13, + "execution_count": 14, "metadata": {}, "outputs": [ { @@ -822,7 +856,7 @@ "5 [562, 576): 'speaks to them' " ] }, - "execution_count": 13, + "execution_count": 14, "metadata": {}, "output_type": "execute_result" } @@ -846,7 +880,7 @@ }, { "cell_type": "code", - "execution_count": 14, + "execution_count": 15, "metadata": {}, "outputs": [ { @@ -927,7 +961,7 @@ "Length: 6, dtype: SpanDtype" ] }, - "execution_count": 14, + "execution_count": 15, "metadata": {}, "output_type": "execute_result" } @@ -948,7 +982,7 @@ }, { "cell_type": "code", - "execution_count": 15, + "execution_count": 16, "metadata": {}, "outputs": [ { @@ -957,7 +991,7 @@ "[323, 328): 'their'" ] }, - "execution_count": 15, + "execution_count": 16, "metadata": {}, "output_type": "execute_result" } @@ -979,7 +1013,7 @@ }, { "cell_type": "code", - "execution_count": 16, + "execution_count": 17, "metadata": {}, "outputs": [ { @@ -1038,7 +1072,7 @@ "1 [266, 328): 'Lancelot, and Sir Not-Appearing-i... " ] }, - "execution_count": 16, + "execution_count": 17, "metadata": {}, "output_type": "execute_result" } @@ -1058,7 +1092,7 @@ }, { "cell_type": "code", - "execution_count": 17, + "execution_count": 18, "metadata": {}, "outputs": [ { @@ -1084,18 +1118,24 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Extract Additional Features with SpaCy\n", + "# Extract Additional Features with SpaCy \n", "\n", "With Text Extensions for Pandas, you can use Pandas DataFrames as a common representation for you NLP application's intermediate data, regardless of which NLP library you used to produce that data.\n", "\n", "In the cell that follows, we take the text that we just ran through Watson NLU and feed that text through a \n", "[SpaCy](https://spacy.io/) langauge model. 
Then we use the `make_tokens_and_features()` function from Text \n", - "Extensions for Pandas to convert this output to a Pandas DataFrame of token features." + "Extensions for Pandas to convert this output to a Pandas DataFrame of token features.\n", + "\n", + "In order to load the spacy language model, download the spacy model using the following command:\n", + "`$ python -m spacy download en_core_web_sm`\n", + "\n", + "You can also add a line in the below cell to install it inline:\n", + "`!python -m spacy download en_core_web_sm`" ] }, { "cell_type": "code", - "execution_count": 18, + "execution_count": 19, "metadata": {}, "outputs": [ { @@ -1359,7 +1399,7 @@ "[147 rows x 13 columns]" ] }, - "execution_count": 18, + "execution_count": 19, "metadata": {}, "output_type": "execute_result" } @@ -1382,7 +1422,7 @@ }, { "cell_type": "code", - "execution_count": 19, + "execution_count": 20, "metadata": {}, "outputs": [ { @@ -1391,7 +1431,7 @@ "[208, 328): 'Galahad the Pure, Sir Robin the Not-Quite-So-Brave-as-Sir-Lancelot, and [...]'" ] }, - "execution_count": 19, + "execution_count": 20, "metadata": {}, "output_type": "execute_result" } @@ -1411,7 +1451,7 @@ }, { "cell_type": "code", - "execution_count": 20, + "execution_count": 21, "metadata": {}, "outputs": [ { @@ -2147,7 +2187,7 @@ "79 [280, 361): 'Sir Not-Appearing-in-this-Film, a... " ] }, - "execution_count": 20, + "execution_count": 21, "metadata": {}, "output_type": "execute_result" } @@ -2161,14 +2201,14 @@ "cell_type": "markdown", "metadata": {}, "source": [ - "# Combine Outputs of Both Libraries\n", + "# Combine Outputs of Both Libraries \n", "\n", "Notice that the \"sentence\" column of the SpaCy output in the previous cell contains multiple different values, even though all the tokens are actually from the same sentence. SpaCy's language model has incorrectly split this sentence into multiple smaller sentences. We can use `pandas.DataFrame.drop_duplicates()` to show exactly which sentence fragments are present in this slice of the SpaCy output: " ] }, { "cell_type": "code", - "execution_count": 21, + "execution_count": 22, "metadata": {}, "outputs": [ { @@ -2224,7 +2264,7 @@ "66 [280, 361): 'Sir Not-Appearing-in-this-Film, a..." ] }, - "execution_count": 21, + "execution_count": 22, "metadata": {}, "output_type": "execute_result" } @@ -2242,7 +2282,7 @@ }, { "cell_type": "code", - "execution_count": 22, + "execution_count": 23, "metadata": {}, "outputs": [ { @@ -2319,7 +2359,7 @@ "Length: 4, dtype: TokenSpanDtype" ] }, - "execution_count": 22, + "execution_count": 23, "metadata": {}, "output_type": "execute_result" } @@ -2339,7 +2379,7 @@ }, { "cell_type": "code", - "execution_count": 23, + "execution_count": 24, "metadata": {}, "outputs": [ { @@ -2704,7 +2744,7 @@ "79 [130, 361): 'Along the way, he recruits Sir Be... " ] }, - "execution_count": 23, + "execution_count": 24, "metadata": {}, "output_type": "execute_result" } @@ -2724,7 +2764,7 @@ }, { "cell_type": "code", - "execution_count": 24, + "execution_count": 25, "metadata": {}, "outputs": [ { @@ -2774,7 +2814,7 @@ "Length: 1, dtype: TokenSpanDtype" ] }, - "execution_count": 24, + "execution_count": 25, "metadata": {}, "output_type": "execute_result" } @@ -2792,7 +2832,7 @@ }, { "cell_type": "code", - "execution_count": 25, + "execution_count": 26, "metadata": {}, "outputs": [ { @@ -3034,7 +3074,7 @@ "53 [130, 361): 'Along the way, he recruits Sir Be... 
" ] }, - "execution_count": 25, + "execution_count": 26, "metadata": {}, "output_type": "execute_result" } @@ -3062,13 +3102,13 @@ }, { "cell_type": "code", - "execution_count": 26, + "execution_count": 27, "metadata": {}, "outputs": [ { "data": { "text/html": [ - "\n", + "\n", "\n", " Galahad\n", " NNP\n", @@ -3250,225 +3290,225 @@ "\n", "\n", "\n", - " \n", + " \n", " \n", - " det\n", + " det\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " dobj\n", + " dobj\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " compound\n", + " compound\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " neg\n", + " neg\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " advmod\n", + " advmod\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " prep\n", + " prep\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " compound\n", + " compound\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " pobj\n", + " pobj\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " cc\n", + " cc\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " nmod\n", + " nmod\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " neg\n", + " neg\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " nmod\n", + " nmod\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " prep\n", + " prep\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " pobj\n", + " pobj\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " punct\n", + " punct\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " prep\n", + " prep\n", " \n", " \n", "\n", "\n", "\n", - " \n", + " \n", " \n", - " prep\n", + " prep\n", " \n", " \n", "\n", @@ -3486,6 +3526,17 @@ "tp.render_parse_tree(spacy_context_tokens)" ] }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Conclusion\n", + "\n", + "In this notebook we demonstrated how Text Extensions for Pandas can be used to perform various NLP tasks. We started by loading our document and passing it through Watson NLU service. We extracted various entities and relations. We used Text Extensions for Pandas to manipualte the Span data and visualize some of our findings. Finally we pass this through a language model using SpaCy which gives us more insights such as parts of speech tagging. We then combine all the results to render a parse tree. \n", + "\n", + "This notebook also demonstrates how easy it is to inter-operate with other popular NLP packages such as SpaCy, pandas and IBM Watson NLU. 
" + ] + }, { "cell_type": "code", "execution_count": null,