Commit
deploy: ea1044a
HugoGranstrom committed Mar 29, 2024
1 parent 3c7b713 commit 68bea55
Showing 9 changed files with 39 additions and 39 deletions.
Binary file modified __pycache__/mymod.cpython-310.pyc
Binary file not shown.
2 changes: 1 addition & 1 deletion basics/common_datatypes.html
@@ -374,7 +374,7 @@ A few more typical ways to create sequences

standard library provides a `rand` procedure we can combine with `mapIt`:

```nim
import random
randomize()
echo toSeq(0 ..< 5).mapIt(rand(10.0))
```
- @[3.248404591893539, 0.8147250549024365, 6.561275897577749, 8.244125886845488, 8.793871974402697]
+ @[9.579251111264801, 1.578132457619805, 4.889709401360747, 8.848071916580956, 2.386300214710697]

samples 5 floating point numbers between 0 and 10.
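The sampled values differ from run to run because `randomize()` with no argument seeds the generator from the system clock. A minimal sketch of a reproducible variant, assuming a fixed seed is acceptable here:

```nim
import std/[random, sequtils]

# Seeding with a constant makes the sampled sequence identical on every run.
randomize(42)
echo toSeq(0 ..< 5).mapIt(rand(10.0))
```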
## `Tensor[T]` - an ND-array type from [Arraymancer](https://github.com/mratsim/Arraymancer)

Arraymancer provides an ND-array type called `Tensor[T]` that is best compared to a numpy
12 changes: 6 additions & 6 deletions external_language_integration/julia/nimjl_arrays.html
@@ -436,13 +436,13 @@ Conversion between JlArray[T] and Arraymancer's Tensor[T] (and dealing with

```nim
echo localTensor

var localTensor2 = localArray.to(Tensor[int], rowMajor)
assert(localTensor == localTensor2)
```
- [3 5 4 1 3; 4 2 1 1 5; 1 1 1 1 1; 1 3 5 4 2; 1 3 3 4 2]
+ [3 5 2 1 2; 5 1 3 4 4; 3 3 5 4 3; 5 2 5 4 2; 5 2 3 4 5]
  Tensor[system.int] of shape "[5, 5]" on backend "Cpu"
- |3 5 4 1 3|
- |4 2 1 1 5|
- |1 1 1 1 1|
- |1 3 5 4 2|
- |1 3 3 4 2|
+ |3 5 2 1 2|
+ |5 1 3 4 4|
+ |3 3 5 4 3|
+ |5 2 5 4 2|
+ |5 2 3 4 5|
Both Tensors have identical indexed values, but the underlying buffers differ according to the memory-layout argument.

When passing a Tensor directly as a value in a `jlCall` / `Julia.` expression, a `JlArray[T]` will be constructed from its buffer, so you should be aware of the buffer's memory layout.
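A minimal sketch of that layout point, assuming the `to` overload shown above also accepts Arraymancer's `colMajor` order:

```nim
# Element-wise equality is layout-independent, so the assertion still holds,
# but code that reads the raw buffer would see the elements in a different order.
var localTensor3 = localArray.to(Tensor[int], colMajor)
assert(localTensor3 == localTensor2)
```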
```nim
var orderedTensor = newTensor[int]([3, 2])
```
Binary file modified images/levmarq_comparision.png
Binary file modified images/levmarq_rawdata.png
28 changes: 14 additions & 14 deletions numerical_methods/curve_fitting.html
@@ -290,14 +290,14 @@ Curve fitting using `numericalnim`

Now we are ready to do the actual fitting:

```nim
let solution = levmarq(fitFunc, initialGuess, t, y)
echo solution
```
  Tensor[system.float] of shape "[4]" on backend "Cpu"
- 0.49973 5.96365 0.114903 0.981149
+ 0.503366 6.03074 0.0489922 1.00508
As we can see, the found parameters are very close to the actual ones. But maybe we can do better: `levmarq` accepts an `options` parameter, which is the same as the one described in [Optimization](./optimization.html) with the addition of the `lambda0` parameter. We can reduce the `tol` and see if we get an even better fit:

```nim
let options = levmarqOptions(tol=1e-10)
let solution = levmarq(fitFunc, initialGuess, t, y, options=options)
echo solution
```
  Tensor[system.float] of shape "[4]" on backend "Cpu"
- 0.49973 5.96365 0.114903 0.981149
+ 0.503366 6.03074 0.0489921 1.00508
As we can see, there isn't really any difference, so we can conclude that the found solution has in fact converged.
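For completeness, `lambda0` (the initial damping factor of the Levenberg-Marquardt iteration) is set the same way; a hedged sketch, assuming `levmarqOptions` takes it as a keyword argument alongside `tol`:

```nim
# lambda0 controls how strongly the first iterations are damped:
# larger values behave more like gradient descent, smaller ones more like Gauss-Newton.
let dampedOptions = levmarqOptions(tol = 1e-10, lambda0 = 10.0)
let dampedSolution = levmarq(fitFunc, initialGuess, t, y, options = dampedOptions)
echo dampedSolution
```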
Here's a plot comparing the fitted and original functions:
@@ -321,7 +321,7 @@ Errors & Uncertainties

```nim
  fitFunc(solution, x)
```

Now we can calculate the $\chi^2$ using numericalnim's `chi2` proc:

```nim
let chi = chi2(y, yCurve, yError)
echo "χ² = ", chi
```
- χ² = 16.72018127975399
+ χ² = 16.93637503832095
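For reference, this is the usual weighted sum of squared residuals, with `yError` supplying the per-point uncertainties $\sigma_i$ (the standard definition, which `chi2` is assumed to follow):

$$\chi^2 = \sum_i \frac{\left(y_i - f(x_i)\right)^2}{\sigma_i^2}$$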
Great! Now we have a measure of how good the fit is, but what if we add more points? Then we will get a better fit, but we will also get more points to sum over. And what if we choose another curve to fit with more parameters?

@@ -337,7 +337,7 @@ Errors & Uncertainties

score adjusted to penalize too complex curves.
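Concretely, the reduced $\chi^2$ divides by the number of degrees of freedom, i.e. the number of data points minus the number of fitted parameters, which is exactly the `(y.len - solution.len)` factor in the code below:

$$\chi^2_\nu = \frac{\chi^2}{N_\text{points} - N_\text{params}}$$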
Let's calculate it!

```nim
let reducedChi = chi / (y.len - solution.len).float
echo "Reduced χ² = ", reducedChi
```
- Reduced χ² = 1.045011329984624
+ Reduced χ² = 1.058523439895059
As a rule of thumb, values around 1 are desirable. If it is much larger than 1, it indicates a bad fit. And if it is much smaller than 1, it means that the fit is much better than the uncertainties suggested. This could

@@ -351,28 +351,28 @@ Parameter uncertainties

`numericalnim` can compute the covariance matrix for us using `paramUncertainties`:
```nim
let cov = paramUncertainties(solution, fitFunc, t, y, yError, returnFullCov=true)
echo cov
```
  Tensor[system.float] of shape "[4, 4]" on backend "Cpu"
- |1.68076e-05 1.53698e-05 -9.88303e-06 -2.1667e-06|
- |1.53698e-05 0.000670963 -0.000247717 5.82197e-06|
- |-9.88303e-06 -0.000247717 0.000246359 -1.88845e-06|
- |-2.1667e-06 5.82197e-06 -1.88845e-06 0.000409496|
+ |1.70541e-05 1.58002e-05 -1.04688e-05 -2.4089e-06|
+ |1.58002e-05 0.000707119 -0.000250193 -9.67158e-06|
+ |-1.04688e-05 -0.000250193 0.000245993 4.47236e-06|
+ |-2.4089e-06 -9.67158e-06 4.47236e-06 0.000446313|
That is the full covariance matrix, but we are only interested in the diagonal elements. By default `returnFullCov` is false, and then we get a 1D Tensor with the diagonal instead:

```nim
let variances = paramUncertainties(solution, fitFunc, t, y, yError)
echo variances
```
  Tensor[system.float] of shape "[4]" on backend "Cpu"
- 1.68076e-05 0.000670963 0.000246359 0.000409496
+ 1.70541e-05 0.000707119 0.000245993 0.000446313
It's important to note that these values are the **variances** of the parameters. So if we want the standard deviations we will have to take the square root:

```nim
let paramUncertainty = sqrt(variances)
echo "Uncertainties: ", paramUncertainty
```
  Uncertainties: Tensor[system.float] of shape "[4]" on backend "Cpu"
- 0.0040997 0.0259029 0.0156958 0.020236
+ 0.00412966 0.0265917 0.0156842 0.0211261
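In symbols (a standard relation, not specific to numericalnim), the 1-σ uncertainty of parameter $i$ is the square root of the corresponding diagonal element of the covariance matrix, while the off-diagonal elements encode correlations between parameters:

$$\sigma_i = \sqrt{\mathrm{Cov}_{ii}}$$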
All in all, these are the values and uncertainties we got for each of the parameters:

```nim
echo "α = ", solution[0], " ± ", paramUncertainty[0]
echo "β = ", solution[1], " ± ", paramUncertainty[1]
echo "γ = ", solution[2], " ± ", paramUncertainty[2]
echo "δ = ", solution[3], " ± ", paramUncertainty[3]
```
- α = 0.4997296913630543 ± 0.004099702618360112
- β = 5.963645135105484 ± 0.02590294391910104
- γ = 0.1149030201635752 ± 0.01569584154385546
- δ = 0.9811490467398611 ± 0.02023601553057025
+ α = 0.5033663155477502 ± 0.004129656819321223
+ β = 6.030735258169046 ± 0.02659170150059456
+ γ = 0.0489920969605781 ± 0.01568416864219338
+ δ = 1.005076206375797 ± 0.02112613290725861
## Further reading

- [numericalnim's documentation on optimization](https://scinim.github.io/numericalnim/numericalnim/optimize.html)
12 changes: 6 additions & 6 deletions numerical_methods/integration1d.html
@@ -331,12 +331,12 @@ Let the integration begin!

```nim
timeIt "AdaGauss":
  keep adaptiveGauss(f, a, b, tol=tol)
```
   min time    avg time   std dv   runs  name
- 0.059 ms    0.069 ms   ±0.062   x1000  Trapz
- 0.059 ms    0.069 ms   ±0.056   x1000  Simpson
- 0.101 ms    0.112 ms   ±0.058   x1000  GaussQuad
- 0.077 ms    0.101 ms   ±0.128   x1000  Romberg
- 0.274 ms    0.470 ms   ±1.165   x1000  AdaSimpson
- 0.077 ms    0.123 ms   ±0.266   x1000  AdaGauss
+ 0.017 ms    0.019 ms   ±0.001   x1000  Trapz
+ 0.018 ms    0.019 ms   ±0.005   x1000  Simpson
+ 0.066 ms    0.076 ms   ±0.128   x1000  GaussQuad
+ 0.036 ms    0.040 ms   ±0.002   x1000  Romberg
+ 0.293 ms    0.493 ms   ±1.211   x1000  AdaSimpson
+ 0.040 ms    0.045 ms   ±0.003   x1000  AdaGauss
As we can see, all methods except AdaSimpson were roughly equally fast. So if I were to choose a winner, it would be `adaptiveGauss`, because it was the most accurate while still being among the fastest methods.
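A minimal usage sketch, reusing the `f`, `a`, `b` and `tol` defined earlier on that page (not shown in this diff): tightening the tolerance only adds work where the integrand actually demands it.

```nim
# adaptiveGauss subdivides the interval adaptively, so a tighter tolerance
# refines only the sub-intervals whose local error estimate is still too large.
let looseResult = adaptiveGauss(f, a, b, tol = 1e-4)
let tightResult = adaptiveGauss(f, a, b, tol = 1e-10)
echo abs(looseResult - tightResult)
```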
16 changes: 8 additions & 8 deletions numerical_methods/ode.html
@@ -332,10 +332,10 @@ Let's code

`odeOptions` would probably get them on-par with the others. There is one parameter we haven't talked about though: the execution time. Let's look at that and see if it brings any further insights:

    min time    avg time   std dv   runs name
-  7.521 ms   16.037 ms   ±6.048   x307 heun2
- 10.688 ms   14.887 ms   ±5.421   x334 rk4
-  0.278 ms    0.327 ms   ±0.175  x1000 rk21
-  0.342 ms    0.385 ms   ±0.124  x1000 tsit54
+ 11.259 ms   20.406 ms   ±6.787   x243 heun2
+ 14.426 ms   16.404 ms   ±3.768   x295 rk4
+  0.259 ms    0.284 ms   ±0.009  x1000 rk21
+  0.304 ms    0.361 ms   ±0.118  x1000 tsit54
As we can see, the adaptive methods are orders of magnitude faster while achieving roughly the same errors. This is because they take fewer and longer steps when the function is flatter and only decrease the step-size when the function is changing more rapidly.
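The mechanism behind this is standard embedded error control (a general description, not necessarily the exact controller numericalnim implements): after every step the solver compares a local error estimate against the tolerance and rescales the step size roughly as

$$h_\text{new} = h \cdot \left(\frac{\text{tol}}{\text{err}}\right)^{1/(p+1)},$$

where $p$ is the order of the method, so long, cheap steps are taken wherever the local error allows it.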
@@ -393,10 +393,10 @@ Vector-valued functions
* tsit54: 2.114975e-13
- 4572.201 ms  4575.286 ms   ±4.364    x2 heun2
- 9347.713 ms  9347.713 ms   ±0.000    x1 rk4
-   88.028 ms    88.096 ms   ±0.039   x57 rk21
-  411.828 ms   411.949 ms   ±0.091   x13 tsit54
+ 2615.167 ms  2626.198 ms  ±15.601    x2 heun2
+ 5261.133 ms  5261.133 ms   ±0.000    x1 rk4
+   50.778 ms    51.349 ms   ±0.291   x98 rk21
+  224.308 ms   225.149 ms   ±0.704   x23 tsit54
We can once again see that the high-order adaptive methods are both more accurate and faster than the fixed-order ones.
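A rough rule of thumb from numerical analysis (not taken from this page) explains why: for a fixed-step method of order $p$ the global error scales as

$$\text{error} \sim C\,h^{p},$$

so a higher-order method reaches a given error target with far fewer, larger steps, which is the accuracy-versus-runtime trade-off visible in the timings above.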
## Further reading
8 changes: 4 additions & 4 deletions numerical_methods/optimization.html
@@ -300,10 +300,10 @@ Optimize functions in Nim

  LBFGS: Tensor[system.float] of shape "[2]" on backend "Cpu"
  1 1
     min time    avg time    std dv   runs name
- 357.320 ms  429.597 ms  ±45.295    x12 Steepest
-   0.172 ms    0.328 ms   ±0.858  x1000 Newton
-   2.839 ms   12.164 ms   ±7.746   x412 BFGS
-   5.458 ms   15.298 ms   ±7.515   x324 LBFGS
+ 214.122 ms  315.523 ms  ±49.139    x16 Steepest
+   0.201 ms    0.268 ms   ±0.332  x1000 Newton
+   3.051 ms    9.665 ms   ±7.740   x509 BFGS
+   4.878 ms    9.730 ms   ±5.731   x504 LBFGS
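For context on why Newton is so fast here (the standard formulation of the method, not code from this page): each iteration solves a linear system with the Hessian to obtain the step

$$x_{k+1} = x_k - \left[\nabla^2 f(x_k)\right]^{-1} \nabla f(x_k),$$

which converges quadratically near the minimum, while (L)BFGS builds an approximation of the same curvature information from gradients alone.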
As we can see, Newton, BFGS and LBFGS found the exact solution, and they did it fast. Steepest descent didn't find as good a solution and took by far the longest time. Steepest descent has an order of convergence of $O(N)$, Newton has $O(N^2)$ and (L)BFGS
