The classify function takes two inputs: nodes, the input nodes for classification, and h, the feature matrix. The function returns a softmax output for the specified nodes. It also accepts several configuration values, whose defaults are indicated by a colon (:) in function signatures. The same notation sets or overwrites them when calling functions, as we do for softmax to apply it row-wise; think of them as keyword arguments. The defaults for the classify function are: epochs, the number of training epochs, which defaults to 3000; patience, the early stopping patience, which defaults to 100; and lr, the learning rate, which defaults to 0.01.
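For reference, a declaration matching this description would look like the following sketch; the body is an assumption based on the description above, gathering the representations of the requested nodes and applying a row-wise softmax:

```rust
fn classify(nodes, h, epochs: 3000, patience: 100, lr: 0.01) {
	return softmax(h[nodes], dim: "row");
}
```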
Exclamation marks (!) before numbers broadcast them to all subsequent function calls as new defaults for the same configurations. Broadcasted configurations are retrievable from JGNN's Neuralang model builder, which is useful for Java integration later. Configuration values are resolved with the following priority, from highest to lowest (see the sketch after this list):
1. function call arguments
2. broadcasted configurations (the last value set, which includes configurations set from Java)
3. function signature defaults
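As a quick sketch of the two notations (the values are illustrative, not from the running example): the first call below overrides epochs for that call only, while the second also broadcasts the value as the new default for every subsequent call, making it retrievable from Java:

```rust
classify(nodes, h, epochs: 1000)
classify(nodes, h, epochs: !1000)
```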
```java
ModelTraining trainer = new ModelTraining()
	...
	.setValidationLoss(new CategoricalCrossEntropy());
```
In the above example, a dataset (Cora) is loaded, and its graph is prepared by adding self-loops (the renormalization trick) and performing symmetric normalization. A Neuralang instance is then created; this is a ModelBuilder that can parse scripts given as either file Paths or plain text. Constants like the adjacency matrix A and the feature matrix h are set, along with variables (nodes) and configurations (classes, hidden). The model and its output are defined with a Neuralang statement. Finally, dimension names and sizes for the ? symbols found in the model declaration are filled in by calling autosize. The example uses empty tensors to avoid unnecessary computations while determining the dimensions.
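The part of the example elided above roughly corresponds to the following sketch; the script path, hidden layer size, and the exact dataset accessors are assumptions for illustration rather than verbatim source:

```java
Dataset dataset = new Cora();
// renormalization trick (self-loops) followed by symmetric normalization
dataset.graph().setMainDiagonal(1).setToSymmetricNormalization();

ModelBuilder modelBuilder = new Neuralang()
	.parse(Paths.get("architecture.nn"))   // scripts parse from file Paths or plain text
	.constant("A", dataset.graph())        // adjacency matrix
	.constant("h", dataset.features())     // feature matrix
	.var("nodes")                          // variable holding the classified nodes
	.config("classes", dataset.labels().getCols())  // assumed accessor for class count
	.config("hidden", 64)                  // assumed hidden layer size
	.out("classify(nodes, h)")             // the model's output as a Neuralang statement
	.autosize(new EmptyTensor(dataset.samples().getSlice().size()));  // empty tensor input
```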
A ModelTraining instance is finally configured using parameters retrieved from the ModelBuilder, namely the configurations found in the classify function. Don't forget to broadcast any configuration values that you need to access from Java code later.
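A sketch of that step, assuming the builder exposes broadcasted values through a getConfig accessor; this only works if the script broadcast them with exclamation marks (e.g., epochs: !3000):

```java
ModelTraining trainer = new ModelTraining()
	.setOptimizer(new Adam(modelBuilder.getConfig("lr")))          // broadcasted learning rate
	.setEpochs((int) modelBuilder.getConfig("epochs"))             // broadcasted epoch count
	.setPatience((int) modelBuilder.getConfig("patience"))         // broadcasted early stopping patience
	.setLoss(new CategoricalCrossEntropy())
	.setValidationLoss(new CategoricalCrossEntropy());
```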