
Releases: BerkeleyLab/fiats

0.2.0 JSON file read/write capability, more accurate default inference strategy

23 Jan 00:24
790cff0

What's Changed

Full Changelog: 0.1.2...0.2.0
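For context, a round trip through the new JSON capability might look like the sketch below. This is a hypothetical illustration only: the module name and the to_json/from_json procedure names are assumptions, not confirmed fiats API.

```fortran
! Hypothetical sketch only: the module name and the to_json/from_json
! procedure names are assumptions for illustration, not confirmed fiats API.
program json_round_trip
  use inference_engine_m, only : inference_engine_t  ! assumed module name
  implicit none
  type(inference_engine_t) :: network

  call network%from_json("network.json")     ! read a network from a JSON file
  call network%to_json("network-copy.json")  ! write it back out
end program
```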

0.1.2 Bug fix and documentation edits

02 Dec 08:41
08bd468

What's Changed

  • fix(README.md): correct typo, eliminate redundancy by @rouson in #21
  • Fix handling of example-program input arguments in run-fpm.sh script by @rouson in #22
  • Increment version number to 0.1.2 by @rouson in #23

Full Changelog: 0.1.1...0.1.2

0.1.1

29 Nov 03:53
6addc82

What's Changed

  • Prepare open-source release by @rouson in #18
    • Add LICENSE.txt file with copyright notice and license agreement
    • Add statement referring to the license at the top of each source file
    • Add build instructions to the README.md
    • Add a basic ford project file
    • Set up the CI to post the ford documentation to GitHub Pages
  • Add asymmetric network test by @rouson in #19
    • The new test uses a network that encodes a two-input/one-output digital logic circuit equivalent to "XOR AND input-2" (see the truth-table sketch after this list).
  • Fix asymmetric-network test of matmul-based inference by @rouson in #20
    • Addresses an issue that the new asymmetric test exposed.
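
To make the asymmetry concrete: "XOR AND input-2" is true only when input 1 is false and input 2 is true, so swapping the two inputs changes the output, which is what lets this test catch transposed-weight bugs that a symmetric gate such as plain XOR would miss. The standalone snippet below (an illustration, not library code) prints the truth table:

```fortran
! Standalone illustration (not fiats code): print the truth table of
! (input-1 XOR input-2) AND input-2, which is asymmetric in its inputs.
program xor_and_input_2
  implicit none
  logical :: i1, i2
  integer :: a, b

  print '(a)', "    i1    i2   out"
  do a = 0, 1
    do b = 0, 1
      i1 = a == 1
      i2 = b == 1
      print '(3l6)', i1, i2, (i1 .neqv. i2) .and. i2  ! .neqv. is logical XOR
    end do
  end do
end program
```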

Full Changelog: 0.1.0...0.1.1

Initial Release: Concurrent Inference Capability

22 Nov 02:08
a1089ce

This release provides an inference_engine_t type that encapsulates the state and behavior of a dense neural network with

  1. State:
    a. Weights and biases gathered into contiguous, multidimensional arrays, 🏋️
    b. Hidden layers with a uniform number of neurons. 🧠
  2. Behavior:
    a. A pure infer type-bound procedure that propagates input through the above architecture to produce output. ❄️
    b. An elemental interface for activation functions with one currently implemented: a step function. 🪜
    c. Runtime selection of the inference method via the Strategy Pattern:
      • concurrent execution of dot_product intrinsic-function invocations or
      • matmul intrinsic-function invocations.
    d. Runtime selection of activation functions: currently only a step function is implemented to support the unit tests.
  3. Unit tests that
    a. Read and write neural networks to files, 📁
    b. Construct a network that implements an exclusive-or (XOR) gate in the first hidden layer, followed by a second hidden layer whose identity-matrix weights make it a pass-through layer. 🚧
  4. Examples
    a. Concurrent inference on multiple independent neural networks encapsulated in inference_engine_t objects. 🤔
    b. Construction and writing of a neural network to a file, starting from user-defined weights and biases (useful for unit testing); see the sketch after this list.
    c. Reading a neural network from a file and querying it for basic properties.
    d. Reading from and writing to a NetCDF file (for future incorporation into the Inference-Engine library to read validation data sets). 🥅
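
A minimal sketch of example (4b) follows. The constructor's keyword arguments, the writer's name, and the numeric values are all assumptions for illustration, not confirmed fiats API; the identity-matrix hidden weights echo the pass-through second layer used in the unit tests.

```fortran
! Hypothetical sketch of example (4b): construct a network from user-defined
! weights and biases, then write it to a file. Constructor keywords, the
! writer's name, and the placeholder values are assumptions, not confirmed API.
program construct_and_write
  use inference_engine_m, only : inference_engine_t  ! assumed module name
  implicit none
  integer, parameter :: neurons = 3, inputs = 2, outputs = 1, layers = 2
  type(inference_engine_t) :: network

  network = inference_engine_t( &
    input_weights  = reshape([real :: 1, 1, 0, 0, 1, 1], [neurons, inputs]), &
    hidden_weights = reshape([real :: 1, 0, 0, 0, 1, 0, 0, 0, 1], &
                             [neurons, neurons, layers - 1]), &  ! identity: pass-through layer
    biases         = reshape([real :: 0, -1, 0, 0, 0, 0], [neurons, layers]), &
    output_weights = reshape([real :: 1, -2, 1], [outputs, neurons]) &
  )
  call network%write_network("xor-net.txt")  ! hypothetical writer name
end program
```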

Because the infer procedure is pure, it can be invoked inside a do concurrent construct, which facilitates concurrent inference with multiple independent neural networks, as in the sketch below.
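
The infer signature shown here (a function taking an input vector and returning an output vector) is an assumption; what the release notes do establish is that infer is pure, which is exactly what makes the do concurrent loop legal.

```fortran
! Sketch of concurrent inference over independent networks. The infer
! signature is assumed; the do concurrent usage is legal because infer is pure.
program concurrent_inference
  use inference_engine_m, only : inference_engine_t  ! assumed module name
  implicit none
  integer, parameter :: num_engines = 4
  type(inference_engine_t) :: engine(num_engines)
  real :: inputs(2, num_engines), outputs(1, num_engines)
  integer :: i

  ! ... populate engine(:) and inputs(:,:) here ...

  do concurrent (i = 1:num_engines)
    outputs(:,i) = engine(i)%infer(inputs(:,i))  ! each iteration is independent
  end do
end program
```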

Full Changelog: https://github.com/BerkeleyLab/inference-engine/commits/0.1.0