Developer Documentation
Welcome to the Developer API page. Here you'll find documentation on functionality that is only available to developers who download the source code of nexport. Clone or fork this repository to get started!
| Generic functions (`generic.py`) | Utility functions (`utils.py`) | Model classes (`models.py`) |
|---|---|---|
| `append_extension()` | `detect_framework()` | `BFNetwork()` |
| | | `FFNetwork()` |
| | | `ICARNetwork()` |

| Export functions (`*/exporting.py`) | Import functions (`*/importing.py`) |
|---|---|
| `pytorch.exporting.create_layer_object()` | `pytorch.importing.import_from_file()` |
| `pytorch.exporting.create_model_metadata()` | |
| `pytorch.exporting.create_model_object()` | |
| `pytorch.exporting.create_parameter_arrays()` | |
| `pytorch.exporting.export_to_file()` | |
| `pytorch.exporting.export_to_json()` | |
When you call a function in nexport - we'll call these "user functions" - it calls "hidden functions" that handle data input and output for you. nexport automatically detects which deep learning framework you are using based on the modules you've imported in your current Python session, and uses this information to decide which hidden functions to run, dynamically selecting and executing code to match your environment. You can check which deep learning framework nexport thinks you're using by executing the following line after importing nexport:
```python
print(nexport.__framework__)
```
When nexport calls hidden functions, it forwards the arguments you supplied to the user function on to each hidden function in the chain. This wiki page explains where and how nexport calls hidden functions and which arguments are passed.
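To make the dispatch pattern concrete, here is a minimal sketch. It is illustrative only: the stand-in hidden function and the exact string stored in `nexport.__framework__` are assumptions, not nexport's actual implementation (the real hidden functions live in the framework-specific modules such as `pytorch/exporting.py`).

```python
import nexport  # sets nexport.__framework__ based on the modules you've imported


def _hidden_export(model, filename):
    """Stand-in for a framework-specific hidden function (hypothetical)."""
    print(f"exporting {type(model).__name__} to {filename}")


def export_to_file(model, filename="model"):
    """Sketch of a user function dispatching on the detected framework."""
    if str(nexport.__framework__).lower() == "pytorch":  # exact value is an assumption
        # the user's arguments are forwarded unchanged to the hidden function
        return _hidden_export(model=model, filename=filename)
    raise RuntimeError(f"Unsupported framework: {nexport.__framework__}")
```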
Utility functions are functions callable directly from nexport that don't involve exporting or importing deep neural networks. They are helper functions that provide information about your neural networks and are useful when you need architecture details quickly.
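For example, `detect_framework()` is listed in `utils.py` above. Its exact signature and return value aren't documented on this page, so the zero-argument call below is an assumption made for illustration:

```python
import torch    # import your deep learning framework first so nexport can detect it
import nexport

# Assumed usage of the detect_framework() utility; see utils.py for the real signature.
print(nexport.detect_framework())
print(nexport.__framework__)
```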
Generic functions are functions shared by the nexport code for each deep learning framework. They are not usually callable by the user, but are kept separate from the framework-specific hidden functions to keep the module DRY.
`append_extension()` verifies that the provided file extension is supported by nexport, appends the extension to the provided file name, and returns the full filename. This removes the need for users to remember to append their own file extension and eliminates the errors such a reliance introduces.
```python
filename = nexport.append_extension(filename="model",
                                    extension="txt")
```
| Argument | Data type | Default | Description |
|---|---|---|---|
| `filename` | string | | Name to be used for exported file |
| `extension` | string | | Extension of filetype to be used for exported file |

| Output | Data type | Description |
|---|---|---|
| `filename` | string | File name including file extension |
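With the example call above, the returned `filename` is the base name with its extension appended, i.e. `model.txt`.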
`export_to_file()` allows users to export an instantiated PyTorch model to `.txt` format. The output is essentially a parameter dump of multi-dimensional arrays.
```python
nexport.export_to_file(model=model,
                       filename="model file")
```
| Argument | Data type | Default | Description |
|---|---|---|---|
| `model` | object | | Instantiated PyTorch model |
| `filename` | string | `model` | Name to be used for output file |
This function does not currently return anything, but a success boolean return is planned.
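A minimal end-to-end sketch, assuming PyTorch is installed and imported so nexport detects it; the layer sizes and filename are arbitrary choices for illustration:

```python
import torch
import nexport

# A small throwaway model; the docs describe the input simply as an
# instantiated PyTorch model.
model = torch.nn.Sequential(
    torch.nn.Linear(4, 8),
    torch.nn.ReLU(),
    torch.nn.Linear(8, 1),
)

# Dumps the model's parameters to a text file named after `filename`.
nexport.export_to_file(model=model, filename="tiny_model")
```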
`export_to_json()` allows users to export an instantiated PyTorch model to `.json` format. The file is structured to be as human-readable as possible: parameters are grouped into layers, then neurons, then weights and biases.
```python
nexport.export_to_json(model=model,
                       filename="model file",
                       indent=2,
                       verbose=2,
                       include_metadata=True,
                       model_name="My Neural Network",
                       model_author="Andrew Ng")
```
| Argument | Data type | Default | Description |
|---|---|---|---|
| `model` | object | | Instantiated PyTorch model |
| `filename` | string | `model` | Name to be used for output file |
| `indent` | integer | `4` | Number of spaces used to indent the JSON object |
| `verbose` | integer | `1` | Degree of status output during model export |
| `include_metadata` | boolean | `False` | Whether to include arguments as a metadata header in the output file |
| `model_name` | string | `My Model` | Name of model to be used in the metadata header |
| `model_author` | string | Name of logged-in user | Name of the model's author to be used in the metadata header |
Accepted values for `verbose`: `[0, 1, 2, 3]`
This function does not currently return anything, but a success boolean return is planned.
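As a sketch of how the export might be inspected afterwards, the following uses only documented arguments plus the standard library. The output path (`filename` plus a `.json` extension) is an assumption based on how `append_extension()` is described above, and the JSON key names are not assumed at all, only listed:

```python
import json
import nexport

# Assumes `model` is an instantiated PyTorch model, as in the previous example.
nexport.export_to_json(model=model,
                       filename="tiny_model",
                       include_metadata=True,
                       model_name="Tiny MLP",
                       model_author="Jane Doe")

# Inspect the result with the standard library; the exact schema is defined
# by nexport's pytorch/exporting.py. "tiny_model.json" is the assumed path.
with open("tiny_model.json") as f:
    exported = json.load(f)
print(list(exported.keys()))
```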
`import_from_file()` allows the user to import a PyTorch model by parsing the `.txt` file generated by `export_to_file()`. It returns an instantiated PyTorch model with initialized parameters.
```python
model = nexport.import_from_file(filepath="Desktop/model.txt",
                                 framework="PyTorch",
                                 architecture="linear")
```
| Argument | Data type | Default | Description |
|---|---|---|---|
| `filepath` | string | | Path to file to be used for import |
| `framework` | string | `PyTorch` | Framework used to initially instantiate the model |
| `architecture` | string | `linear` | Architecture used to connect the model's hidden layers |
Accepted values for `framework`: `['PyTorch']`

Accepted values for `architecture`: `['linear']`
| Output | Data type | Description |
|---|---|---|
| `model` | object | Instantiated model |
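A hedged round-trip sketch, exporting a model and importing it back. The file path `model.txt` is an assumption based on the default `filename` (`model`) plus the `.txt` extension described above:

```python
import torch
import nexport

# Any instantiated PyTorch model; the architecture here is arbitrary.
model = torch.nn.Sequential(torch.nn.Linear(4, 2),
                            torch.nn.ReLU(),
                            torch.nn.Linear(2, 1))

# Export to text, then re-import from the generated file.
nexport.export_to_file(model=model, filename="model")   # assumed to write model.txt
restored = nexport.import_from_file(filepath="model.txt",
                                    framework="PyTorch",
                                    architecture="linear")
```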
Created and maintained by Jordan Welsman @ LBNL.
nexport
Copyright (c) 2022-2023, The Regents of the University of California, through Lawrence Berkeley National Laboratory (subject to receipt of any required approvals from the U.S. Dept. of Energy).
All rights reserved.