1,233 changes: 109 additions & 1,124 deletions examples/Sample Skyspark VAV Validation With API Request.ipynb

Large diffs are not rendered by default.

978 changes: 73 additions & 905 deletions examples/Sample Skyspark VAV Validation.ipynb

Large diffs are not rendered by default.

356 changes: 356 additions & 0 deletions examples/Skyspark Validation Batched.ipynb
@@ -0,0 +1,356 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "electoral-palestinian",
"metadata": {},
"source": [
"# Sample Skyspark Batched"
]
},
{
"cell_type": "markdown",
"id": "numeric-spelling",
"metadata": {},
"source": [
"This is a modified version of the \"Skyspark Validation\" notebook. It allows a user to query for a set of similar equipment and validate each equipemnt in turn."
]
},
{
"cell_type": "markdown",
"id": "possible-bidding",
"metadata": {},
"source": [
"# 1) Setup"
]
},
{
"cell_type": "markdown",
"id": "resident-brook",
"metadata": {},
"source": [
"## Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "irish-applicant",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# ----------------------------------------\n",
"# Imports\n",
"# ----------------------------------------\n",
"import os\n",
"import json\n",
"import re\n",
"\n",
"from rdflib import Namespace, SH, RDF, BNode, Graph\n",
"from pyshacl import validate\n",
"from dotenv import load_dotenv\n",
"load_dotenv()\n",
"\n",
"from tasty import constants as tc\n",
"from tasty import graphs as tg\n",
"from tasty.skyspark import client as cl\n",
"from tasty.skyspark import process_graphs as pg\n",
"from tasty.skyspark import helpers"
]
},
{
"cell_type": "markdown",
"id": "editorial-consolidation",
"metadata": {},
"source": [
"## Inputs\n",
"Define the key variables and input information here\n",
"\n",
"***Items to Change***\n",
"- `equip_tag`: this is the equipment tag to use in the axon query to find the relevant equipment; this can also be an axon query in itself (e.g. 'vav and hotWaterHeat')\n",
"- `building`: this is the building name to use in the axon query (should match the building's \"dis\" name in Skyspark)\n",
"- `SHAPE`: this is the name of the SHACL equipment shape against which you would like to validate your sample equipment in the instance data\n",
"- `input_namespace_uri`: this is the namespace uri used for your sample equipment in the instance data\n",
"- `raw_data_graph_filename`: this is the filename/filepath to save the raw instance data (in turtle format) retrieved from the Skyspark API call\n",
"- `data_graph_filename`: this is the filename/filepath to save the cleaned/processed instance data for the data graph to be used for validation\n",
"- `shapes_graph_filename`: this is the filename/filepath of the SHACL shapes data for the shapes graph\n",
"\n",
"***Remaining Items***<br/>\n",
"These items should be okay as is, but can be changed if needed. If you are printing out results, <u>*make sure that the output directory exists in your local file structure*</u>.\n",
"- `output_directory`: this is the directory where output files will be printed to below\n",
"- `tasty_main_directory`: this is the absolute path of the main tasty directory. It should just be the parent directory of the current working directory."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "informative-chassis",
"metadata": {},
"outputs": [],
"source": [
"# ----------------------------------------\n",
"# User Defined Variables\n",
"# ----------------------------------------\n",
"\n",
"equip_tag = 'fcu'\n",
"# equip_tag = 'ahu'\n",
"building = 'OTF'\n",
"\n",
"SHAPE = 'NREL-FCU-CS-HW-CHW-Shape'\n",
"# SHAPE = 'NREL-AHU-VAV-MZ-HW-CHW-Evap-Shape'\n",
"input_namespace_uri = 'urn:/_#'\n",
"\n",
"raw_data_graph_filename = 'examples/output/sample_skyspark_vav_raw.ttl'\n",
"data_graph_filename = 'examples/output/sample_skyspark_vav_clean.ttl'\n",
"shapes_graph_filename = 'tasty/generated_shapes/haystack_all.ttl'\n",
"validation_results_output_filename = 'examples/output/OTF_FCU_results.txt'\n",
"\n",
"output_directory = os.path.join(os.path.abspath(''), 'example_data/output')\n",
"tasty_main_directory = os.path.join(os.path.abspath(''), '../')\n",
"# print(tasty_main_directory)\n",
"\n",
"# ----------------------------------------\n",
"# Variables and Constants\n",
"# ----------------------------------------\n",
"skyspark_api_url = os.environ.get('API_ENDPOINT')\n",
"\n",
"NAMESPACE = Namespace(input_namespace_uri)\n",
"shape_name = tc.PH_SHAPES_NREL[SHAPE]\n",
"output_file = os.path.join(tasty_main_directory, validation_results_output_filename)"
]
},
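{
"cell_type": "markdown",
"id": "added-endpoint-check-note",
"metadata": {},
"source": [
"*(Added sketch, not part of the original notebook)* -- a guard against a missing `API_ENDPOINT` before the client is constructed below. This assumes only what the setup cell already shows: `skyspark_api_url` comes from `os.environ.get('API_ENDPOINT')`, which returns `None` when the variable is absent from the `.env` file."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "added-endpoint-check",
"metadata": {},
"outputs": [],
"source": [
"# (Added sketch) Fail fast if the API endpoint was not loaded from .env;\n",
"# os.environ.get returns None for a missing variable.\n",
"if skyspark_api_url is None:\n",
" raise RuntimeError(\"API_ENDPOINT is not set; check your .env file\")"
]
},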
{
"cell_type": "markdown",
"id": "married-interpretation",
"metadata": {},
"source": [
"## API Request From Skyspark \n",
"NOTE - Must be connected to NREL network to access the api endpoint"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ordinary-gazette",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"client = cl.SkysparkClient(skyspark_api_url)"
]
},
{
"cell_type": "markdown",
"id": "artificial-allocation",
"metadata": {},
"source": [
"make query for list of equipment with equip_tag (defined above)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "living-mirror",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"from IPython.display import JSON\n",
"\n",
"axon_query_string = client.generate_axon_query_for_equip_type(equip_tag, building)\n",
"print(axon_query_string)\n",
"\n",
"response = client.make_get_request(axon_query_string, 'json')\n",
"\n",
"print(response.status_code, end = \" - \")\n",
"if response.status_code == 200:\n",
" print(\"Sucess\")\n",
"elif response.status_code == 404:\n",
" print(\"Not Found\")\n",
"\n",
"raw_skyspark_data = json.loads(response.text)\n",
"JSON(raw_skyspark_data['rows'], expanded=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "valid-gnome",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"equip_list = []\n",
"\n",
"for item in raw_skyspark_data['rows']:\n",
" new_equip = item['navName']\n",
" equip_list.append(new_equip)\n",
"for equip in equip_list:\n",
" print(equip)"
]
},
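{
"cell_type": "markdown",
"id": "added-subset-note",
"metadata": {},
"source": [
"*(Added sketch, not part of the original notebook)* -- if you only want to validate a subset of the equipment found above, filter `equip_list` before running the batch loop at the bottom of the notebook. The 'FCU' substring below is just an illustrative pattern, not something the original workflow requires."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "added-subset-filter",
"metadata": {},
"outputs": [],
"source": [
"# (Added sketch) Optionally narrow the batch, e.g. to names containing 'FCU'.\n",
"# equip_list = [e for e in equip_list if 'FCU' in e]\n",
"print(f\"{len(equip_list)} equipment queued for validation\")"
]
},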
{
"cell_type": "markdown",
"id": "invalid-award",
"metadata": {},
"source": [
"### Define the Validate Equip Function"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "coated-astronomy",
"metadata": {},
"outputs": [],
"source": [
"def validate_equip(equip_name, building):\n",
" helpers.append_data_to_file(f\"Equipment Name:\\t{equip_name} \\n\", output_file) \n",
" # ----------------------------------------\n",
" # Get Axon Query\n",
" # ----------------------------------------\n",
" axon_query_string = client.generate_axon_query_for_equip(equip_name, building)\n",
" print(\"Generated axon query: \" + axon_query_string)\n",
"\n",
" # ----------------------------------------\n",
" # Perform Request\n",
" # ----------------------------------------\n",
" response = client.make_get_request(axon_query_string, 'turtle')\n",
"\n",
" print(response.status_code, end = \" - \")\n",
" if response.status_code == 200:\n",
" print(\"Sucess\")\n",
" elif response.status_code == 404:\n",
" print(\"Not Found\")\n",
"\n",
" raw_skyspark_data = response.text\n",
" # print(raw_skyspark_data)\n",
"\n",
" # ----------------------------------------\n",
" # save Response to File\n",
" # ----------------------------------------\n",
" f = os.path.join(tasty_main_directory, raw_data_graph_filename)\n",
" helpers.save_data_to_file(raw_skyspark_data, f)\n",
" print(f\"raw instance data saved to '{raw_data_graph_filename}' \")\n",
"\n",
" # ----------------------------------------\n",
" # Get Equip ID\n",
" # ----------------------------------------\n",
" equip_id = client.get_equip_id(equip_name, building)\n",
" print(\"Equipment id: \" + equip_id)\n",
" helpers.append_data_to_file(f\"Equipment ID:\\t{equip_id}\\n\", output_file) \n",
" target_node = NAMESPACE[equip_id]\n",
" # ----------------------------------------\n",
" # Create instance of SkysparkGraphProcessor\n",
" # ----------------------------------------\n",
" schema = tc.HAYSTACK\n",
" version = tc.V3_9_10\n",
" sgp = pg.SkysparkGraphProcessor(input_namespace_uri,schema, version)\n",
"\n",
" # ----------------------------------------\n",
" # Pre Process raw skyspark .ttl file \n",
" # ----------------------------------------\n",
" f1 = os.path.join(tasty_main_directory, raw_data_graph_filename)\n",
" f2 = os.path.join(tasty_main_directory, data_graph_filename)\n",
" sgp.clean_raw_skyspark_turtle(f1,f2)\n",
" print(f\"cleaned instance data saved to '{data_graph_filename}' \")\n",
"\n",
" # ----------------------------------------\n",
" # Generate Graphs\n",
" # ----------------------------------------\n",
"\n",
" # Data Graph\n",
" dg_file = os.path.join(tasty_main_directory, data_graph_filename)\n",
" data_graph = sgp.get_data_graph(dg_file)\n",
" print(\"...loaded data graph\")\n",
"\n",
" # Shapes Graph\n",
" sg_file = os.path.join(tasty_main_directory, shapes_graph_filename)\n",
" shapes_graph = sgp.get_shapes_graph(sg_file, target_node, shape_name)\n",
" print(\"...loaded shapes graph\")\n",
" helpers.append_data_to_file(f\"SHACL Shape:\\t{shape_name}\\n\", output_file) \n",
"\n",
" # Ontology Graph\n",
" ont_graph = sgp.get_ontology_graph()\n",
" print(\"...loaded ontology graph\")\n",
"\n",
" # ----------------------------------------\n",
" # Run pySCHACL Validation\n",
" # ---------------------------------------- \n",
" result = validate(data_graph, shacl_graph=shapes_graph, ont_graph=ont_graph)\n",
" conforms, results_graph, results = result\n",
" \n",
" print(f\"Conforms: {conforms}\")\n",
" helpers.append_data_to_file(f\"Conforms:\\t\\t{conforms}\\n\", output_file) \n",
"\n",
" # ----------------------------------------\n",
" # Determine Missing Points\n",
" # ----------------------------------------\n",
" missing_points = sgp.determine_missing_points(results_graph)\n",
" \n",
" report_string = \"\"\n",
" if len(missing_points['required']) <= 0:\n",
" print(\"No Required Points Missing\")\n",
" report_string += \"No Required Points Missing\\n\"\n",
" else:\n",
" print(f\"{len(missing_points['required'])} Missing Required Points:\")\n",
" report_string += f\"{len(missing_points['required'])} Missing Required Points:\\n\"\n",
" for point in missing_points['required']:\n",
" print(f\"\\t{point}\")\n",
" point_trunc = point[point.rfind('#') + 1:]\n",
" report_string += f\"\\t{point_trunc}\\n\"\n",
"\n",
" if len(missing_points['optional']) <= 0:\n",
" print(\"No Optional Points Missing\")\n",
" report_string += \"No Optional Points Missing\\n\"\n",
" else:\n",
" print(f\"{len(missing_points['optional'])} Missing Optional Points:\")\n",
" report_string += f\"{len(missing_points['optional'])} Missing Optional Points:\\n\"\n",
" for point in missing_points['optional']:\n",
" print(f\"\\t{point}\")\n",
" point_trunc = point[point.rfind('#') + 1:]\n",
" report_string += f\"\\t{point_trunc}\\n\"\n",
" helpers.append_data_to_file(report_string, output_file) "
]
},
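{
"cell_type": "markdown",
"id": "added-truncation-note",
"metadata": {},
"source": [
"*(Added sketch, not part of the original notebook)* -- a quick illustration of the point-name truncation used in `validate_equip` above. The point name below is made up for illustration; the slice keeps everything after the last `#`, and because `str.rfind` returns `-1` when no `#` is present, a plain name passes through unchanged."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "added-truncation-demo",
"metadata": {},
"outputs": [],
"source": [
"# (Added sketch) Truncate a point URI to its local name, mirroring the\n",
"# report_string logic in validate_equip above.\n",
"point = 'urn:/_#discharge-air-temp-sensor' # hypothetical point URI\n",
"print(point[point.rfind('#') + 1:]) # -> discharge-air-temp-sensor\n",
"\n",
"# str.rfind returns -1 when '#' is absent, so the slice starts at 0\n",
"# and a plain name passes through unchanged\n",
"name = 'plainName'\n",
"print(name[name.rfind('#') + 1:]) # -> plainName"
]
},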
{
"cell_type": "code",
"execution_count": null,
"id": "elder-farmer",
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"helpers.save_data_to_file(\"\", output_file) # clear the output file\n",
"for equip in equip_list:\n",
" validate_equip(equip, building)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}