diff --git a/docs/basics/appa_build_basics.md b/docs/basics/appa_build_basics.md
index 9554b3c..898a6c8 100644
--- a/docs/basics/appa_build_basics.md
+++ b/docs/basics/appa_build_basics.md
@@ -138,13 +138,14 @@ outputs:
appabuild_version: "0.2" #(8)!
parameters: #(9)!
- name: cuda_core #(10)!
- type: float #(15)!
+ type: float #(11)!
default: 512 #(12)!
- pm_perc: 0.1 #(13)!
+ min: 256 #(16)!
+ max: 4096 #(17)!
- name: architecture
- type: enum #(14)!
+ type: enum #(14)!
default: Maxwell
- weights: #(10)!
+ weights: #(15)!
Maxwell: 1
Pascal: 1
- name: usage_location
@@ -155,16 +156,15 @@ outputs:
EU: 1
- name: energy_per_inference
type: float
- default: 0.05
- min: 0.01 #(16)!
- max: 0.1 #(17)!
+ default: 110.0
+ pm_perc: 0.2 #(13)!
- name: lifespan
type: float
- default: 2
+ default: 2.0
pm: 1 #(18)!
- name: inference_per_day
type: float
- default: 3600
+ default: 86400000.0
pm_perc: 0 #(19)!
:::
@@ -175,14 +175,14 @@ outputs:
5. Name of the yaml file corresponding to the impact model (do not include file extension)
6. If True, all symbolic expressions needed by Appa Run to generate any kind of results will be precomputed by Appa Build and stored in the impact model. This will result in larger impact model files and a longer building time. If False, those symbolic expressions will be computed by Appa Run when required, which will increase runtime (but only for the first computation).
7. The following fields are informative and will be included in the impact model, and are meant to help the user of the impact model to better understand the LCA leading to the impact model, as well as to facilitate reproducibility.
-8. Appa Build version used to create the impact model. Note that this has to be entered manually at the moment. The Appa Build version can be found in setup.py.
+8. Appa Build version used to create the impact model. Note that this has to be entered manually at the moment. The Appa Build version can be found in setup.py.
9. Information about all free parameters needed by the FU.
10. Name of the parameter. For float parameter, this name will be present in the amount expression of some exchanges. For enum parameter, this will correspond to the name of a switch.
11. Type of the parameter, which can either be float or enum. Float parameters are used to parameterize the amount of exchange(s).
12. Default value used by Appa Run if the user does not specify a new one.
13. Used to determine the lower and upper bounds of the parameter, for features using Monte Carlo simulation. Pm_perc (plus/minus, in percent) dynamically sets the minimum and maximum values to default*(1-pm_perc) and default*(1+pm_perc), respectively.
14. Type of the parameter, which can be either float or enum. Enum parameter can be used to include modularity in LCA, i.e. to modify the exchange's amount, exchange's input activity, or exchange's input activity's parameterization depending on the value of a variable.
-15. Contains two information: the possible values of the enum parameter (should correspond to the name of the switch's options), and their corresponding probability, for features such as Monte Carlo simulation.
+15. Contains two pieces of information: the possible values of the enum parameter (these should correspond to the names of the switch's options) and their corresponding probabilities, used by features such as Monte Carlo simulation.
16. The minimum limit of the parameter, used for features such as Monte Carlo simulation.
17. The maximum limit of the parameter, used for features such as Monte Carlo simulation.
18. Used to set the lower and upper bounds of the parameter, for features using Monte Carlo simulation. Pm (plus/minus) dynamically sets the minimum and maximum values as default-pm and default+pm, respectively.
diff --git a/docs/basics/appa_run_basics.md b/docs/basics/appa_run_basics.md
index 142b5e6..85cfa2f 100644
--- a/docs/basics/appa_run_basics.md
+++ b/docs/basics/appa_run_basics.md
@@ -19,14 +19,23 @@ You can click on the
to
lifespan: 3 #(1)!
architecture: Maxwell #(2)!
-cuda_core: [256, 512, 1024] #(3)!
-energy_per_inference: [0.05, 0.06, 0.065] #(4)!
+cuda_core: #(3)!
+ architecture:
+ Maxwell: 1344
+ Pascal: 1280
+usage_location: [FR, EU, EU] #(4)!
+inference_per_day: [86400000, 172800000, 259200000] #(5)!
+energy_per_inference:
+ architecture:
+ Maxwell: 0.0878 * cuda_core
+ Pascal: 0.0679 * cuda_core
:::
1. Float type parameter
2. Enum type parameter. The value must match with one of the possible options.
-3. Parameters (float and enum) can also be given as a list, which will give a set of scores for each set of parameters. When list parameters coexist with single value parameters as, in this example, the single value is duplicated to the size of the list parameters.
-4. When you use two list parameters, their size should match.
+3. It is possible to use expressions as values; for more details, see the [Appa Run in depth](../in_depth/appa_run_in_depth.md) section.
+4. Parameters (float and enum) can also be given as a list, which yields one set of scores per set of parameter values. When list parameters coexist with single-value parameters, as in this example, the single value is duplicated to match the length of the list parameters.
+5. When you use two or more list parameters, their lengths must match.
The following command calculates the scores. You need to tell Appa Run where the impact models are stored by setting the `APPARUN_IMPACT_MODELS_DIR` environment variable (here, to `samples/`).
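+
+For example, in a POSIX shell you can set it as follows before running the command (a minimal sketch; adjust the path if your impact models are stored elsewhere):
+
+```bash
+# Point Appa Run at the directory containing the impact model files
+export APPARUN_IMPACT_MODELS_DIR=samples/
+```
+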
@@ -42,7 +51,7 @@ Finally, `--output-file-path outputs/scores.yaml` is an optional argument to sav
Here is what the command should print (and optionally save):
```
-{'scores': {'EFV3_CLIMATE_CHANGE': [6.814605183702477, 23.409114107243994, 124.77822686500075]}}
+{'scores': {'EFV3_CLIMATE_CHANGE': [234869.69128796642, 111981.06440540637, 167839.53117020638]}}
```
### Using Python API
@@ -55,8 +64,20 @@ The equivalent using the Python API is as follows, and should produce the same r
scores = impact_model.get_scores(lifespan=3,
architecture="Maxwell",
- cuda_core=[256, 512, 1024],
- energy_per_inference=[0.05, 0.06, 0.065])
+ cuda_core={
+ "architecture": {
+ "Maxwell": 1344,
+ "Pascal": 1280
+ }
+ },
+ usage_location=["FR", "EU", "EU"],
+ inference_per_day=[86400000, 172800000, 259200000],
+ energy_per_inference={
+ "architecture": {
+ "Maxwell": "0.0878 * cuda_core",
+ "Pascal": "0.0679 * cuda_core"
+ }
+ })
print(scores)
:::
@@ -77,7 +98,7 @@ apparun compute-nodes nvidia_ai_gpu_chip samples/conf/parameters.yaml --output-f
Result:
```
-[NodeScores(name='ai_use_phase', parent='nvidia_ai_gpu_chip', properties=NodeProperties(properties={}), lcia_scores=LCIAScores(scores={'EFV3_CLIMATE_CHANGE': [1.420092, 1.7041104000000002, 1.8461196000000002]})), NodeScores(name='nvidia_gpu_chip_manufacturing', parent='nvidia_ai_gpu_chip', properties=NodeProperties(properties={}), lcia_scores=LCIAScores(scores={'EFV3_CLIMATE_CHANGE': [6.742999152294716, 23.323186869554682, 124.68513902417065]})), NodeScores(name='nvidia_ai_gpu_chip', parent='', properties=NodeProperties(properties={}), lcia_scores=LCIAScores(scores={'EFV3_CLIMATE_CHANGE': [8.163091152294715, 25.027297269554683, 126.53125862417065]}))]
+[NodeScores(name='ai_use_phase', parent='nvidia_ai_gpu_chip', properties=NodeProperties(properties={}), lcia_scores=LCIAScores(scores={'EFV3_CLIMATE_CHANGE': [234605.56041216006, 111716.93352960002, 167575.40029440002]})), NodeScores(name='nvidia_gpu_chip_manufacturing', parent='nvidia_ai_gpu_chip', properties=NodeProperties(properties={}), lcia_scores=LCIAScores(scores={'EFV3_CLIMATE_CHANGE': [264.13087580635846, 264.13087580635846, 264.13087580635846]})), NodeScores(name='nvidia_ai_gpu_chip', parent='', properties=NodeProperties(properties={}), lcia_scores=LCIAScores(scores={'EFV3_CLIMATE_CHANGE': [234869.69128796642, 111981.06440540637, 167839.53117020638]}))]
```
@@ -91,8 +112,20 @@ The equivalent using the Python API is as follows, and should produce the same r
nodes_scores = impact_model.get_nodes_scores(lifespan=3,
architecture="Maxwell",
- cuda_core=[256, 512, 1024],
- energy_per_inference=[0.05, 0.06, 0.065])
+ cuda_core={
+ "architecture": {
+ "Maxwell": 1344,
+ "Pascal": 1280
+ }
+ },
+ usage_location=["FR", "EU", "EU"],
+ inference_per_day=[86400000, 172800000, 259200000],
+ energy_per_inference={
+ "architecture": {
+ "Maxwell": "0.0878 * cuda_core",
+ "Pascal": "0.0679 * cuda_core"
+ }
+ })
print(nodes_scores)
:::
@@ -136,6 +169,7 @@ You can click on the
to
6. Path to save plots as a pdf file. If this argument is not set, no pdf file will be generated.
7. Path to save the table. If this argument is not set, no table file will be generated.
8. Path to save plots as a png file. If this argument is not set, no png file will be generated.
+9. Width of the generated images.
10. Height of the generated images.
Here is a figure we can obtain:
diff --git a/docs/in_depth/appa_run_in_depth.md b/docs/in_depth/appa_run_in_depth.md
index b61bfb4..29aa6b0 100644
--- a/docs/in_depth/appa_run_in_depth.md
+++ b/docs/in_depth/appa_run_in_depth.md
@@ -1 +1,72 @@
-# Appa run in depth
\ No newline at end of file
+# Appa Run in depth
+
+## Expressions as values for impact model parameters
+
+When computing the FU scores or the node scores of an impact model, it is possible to use
+expressions instead of constants for the parameters.
+
+There are two types of expressions: enum type expressions and float type expressions.
+
+Float type expressions are arithmetic expressions whose variables are existing parameters
+of the impact model. Each parameter used in a float type expression must be a float type parameter.
+
+Enum type expressions are similar to Python's match statement. Each enum type expression
+uses exactly one enum type parameter of the impact model, and each option of that parameter
+must have an associated sub-expression. Sub-expressions can themselves be enum or float type
+expressions, or constants.
+
+### Using CLI
+
+Here is an example of how to use expressions in a YAML parameters file:
+:::{code-block} yaml
+:caption: samples/conf/parameters.yaml
+:lineno-start: 1
+
+lifespan: log(inference_per_day) * 0.1 - cuda_core * pow(energy_per_inference, 3) #(1)!
+architecture: [Maxwell, Pascal, Maxwell]
+cuda_core:
+ architecture: #(2)!
+ Maxwell: 512
+ Pascal: 560
+energy_per_inference:
+ - architecture: #(3)!
+ Maxwell: 0.024
+ Pascal: 0.0198
+ - log(inference_per_day) * 0.006
+ - 0.0235
+:::
+
+1. Float type expression
+2. Enum type expression
+3. Expressions can be used in lists just like constants.
+
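+To make the analogy with Python's match statement concrete, the `cuda_core` entry above selects a sub-expression depending on the value of `architecture`, roughly like the following snippet. This is purely illustrative: the hypothetical `resolve_cuda_core` helper is not part of Appa Run, which performs the equivalent selection itself when it evaluates the expression.
+
+:::{code-block} python
+# Illustration only: an enum type expression behaves like a match over the
+# options of its enum parameter (requires Python >= 3.10 for `match`).
+def resolve_cuda_core(architecture: str) -> int:
+    match architecture:
+        case "Maxwell":
+            return 512
+        case "Pascal":
+            return 560
+        case _:
+            raise ValueError(f"Unknown architecture: {architecture}")
+:::
+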
+### Using the Python API
+
+The equivalent using the Python API is as follows, and should produce the same result:
+
+:::{code-block} python
+:lineno-start: 1
+
+scores = impact_model.get_scores(lifespan="log(inference_per_day) * 0.1 - cuda_core * pow(energy_per_inference, 3)",
+ architecture=["Maxwell", "Pascal", "Maxwell"],
+ cuda_core={
+ "architecture": {
+ "Maxwell": 512,
+ "Pascal": 560
+ }
+ },
+ energy_per_inference=[
+ {
+ "architecture": {
+ "Maxwell": 0.024,
+ "Pascal": 0.0198
+ }
+ },
+ "log(inference_per_day) * 0.006",
+ 0.0235
+ ])
+print(scores)
+:::
\ No newline at end of file