diff --git a/docs/SimpleBayesianNetwork.ipynb b/docs/SimpleBayesianNetwork.ipynb
new file mode 100644
index 0000000..dad3453
--- /dev/null
+++ b/docs/SimpleBayesianNetwork.ipynb
@@ -0,0 +1,56 @@
+{
+ "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "# Hi"
+   ]
+  },
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "\n",
+    "$P(Q) = \\hat{Y}$\n",
+    "\n",
+    "```\n",
+    "\n",
+    "hi\n",
+    "nice man\n",
+    "```\n",
+    "\n",
+    "```mermaid\n",
+    "\n",
+    "graph TD\n",
+    "    Q((Q))\n",
+    "    Y((Y))\n",
+    "\n",
+    "    Q-->Y\n",
+    "\n",
+    "```"
+   ]
+  }
+ ],
+ "metadata": {
+  "kernelspec": {
+   "display_name": ".venv",
+   "language": "python",
+   "name": "python3"
+  },
+  "language_info": {
+   "codemirror_mode": {
+    "name": "ipython",
+    "version": 3
+   },
+   "file_extension": ".py",
+   "mimetype": "text/x-python",
+   "name": "python",
+   "nbconvert_exporter": "python",
+   "pygments_lexer": "ipython3",
+   "version": "3.10.7"
+  }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/docs/index.md b/docs/index.md
index 3f4b542..b86a978 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,6 +1,8 @@
+[TOC]
+
 # Original sum-product algorithm
 
-## Simple example Baysian network
+## Simple example Bayesian network
 
 First consider a simple Bayesian network, in which there is one hidden variable $Q$ and one observed variable $Y$.
 
@@ -78,7 +80,7 @@ f_1(q) = P(Q) \\
 f_2(q, y) = P(Y|Q)
 ```
 
-### Message definitions
+#### Message definitions
 
 ```math
 
@@ -88,8 +90,8 @@ d_3(y) & = P(\hat{Y}|Y)
 \qquad \qquad \qquad \qquad \qquad \qquad \qquad
 &
 \begin{cases}
-  1 & \text{if } \hat{y} = y \\
-  0 & \text{if } \hat{y} \ne y \\
+  1 & \text{if } y = \hat{y} \\
+  0 & \text{if } y \ne \hat{y} \\
 \end{cases}
 \\
 a_3(y) & = d_3(y)
@@ -115,4 +117,34 @@ b_1(q) & =
 \end{align}
 
+```
+
+#### Inference
+
+Since most messages depend on other messages, a number of iterations is needed to calculate the final values for all messages (in this case 4 iterations: $d_3$→$a_3$→$b_2$→$a_2$→$b_3$).
+
+After that, the messages can be used to perform inference on the hidden variable $Q$:
+
+```math
+
+a_1(q)b_1(q) = P(\hat{Y}|Q)P(Q)
+
+```
+
+Normalizing yields the posterior distribution for $Q$:
+
+```math
+
+P(Q|\hat{Y}) = \frac{P(\hat{Y}|Q)P(Q)}{P(\hat{Y})}
+             = \frac{P(\hat{Y}|Q)P(Q)}{\sum\limits_{q}{P(\hat{Y}|q)P(q)}}
+             = \frac{a_1(q)b_1(q)}{\sum\limits_{q}{a_1(q)b_1(q)}}
+
+```
+
+Furthermore, the messages can be used to calculate the conditional probability distributions between hidden and/or observed variables:
+
+```math
+
+
+```
\ No newline at end of file
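Reviewer note on the added Inference section: a minimal numerical sketch of the posterior computation $P(Q|\hat{Y}) \propto a_1(q)b_1(q) = P(\hat{Y}|Q)P(Q)$. All numbers are hypothetical, and since the excerpt does not show the individual definitions of $a_1$ and $b_1$, only their product is used.

```python
import numpy as np

# Hypothetical distributions for the two-variable network Q -> Y.
p_q = np.array([0.6, 0.4])            # P(Q): prior over the hidden variable
p_y_given_q = np.array([[0.9, 0.1],   # P(Y|Q): row indexed by q, column by y
                        [0.2, 0.8]])

y_hat = 1                             # observed value of Y

# d_3(y) = a_3(y): indicator message, 1 if y == y_hat, else 0.
a3 = np.zeros(2)
a3[y_hat] = 1.0

# Summing P(Y|Q) against the indicator message gives the likelihood P(y_hat|Q),
# so this product equals a_1(q) b_1(q) = P(y_hat|Q) P(Q) from the text.
a1_times_b1 = (p_y_given_q @ a3) * p_q

# Normalizing over q yields the posterior P(Q | y_hat).
posterior = a1_times_b1 / a1_times_b1.sum()
print(posterior)  # approximately [0.158, 0.842] for these numbers
```

Because the evidence $P(\hat{Y})$ is just the normalizer, it never needs to be computed separately; dividing by the sum over $q$ is exactly the last equality in the added derivation.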