diff --git a/code/asp/pdist.lp b/code/asp/pdist.lp
index a76b72f..a62615b 100644
--- a/code/asp/pdist.lp
+++ b/code/asp/pdist.lp
@@ -1,3 +1,2 @@
 a; -a.
-b; c :- a.
-e :- 1 { b ; c }.
\ No newline at end of file
+b; c :- a.
\ No newline at end of file
diff --git a/students/amartins/tarefas/__pycache__/bninput.cpython-39.pyc b/students/amartins/tarefas/__pycache__/bninput.cpython-39.pyc
new file mode 100644
index 0000000..3e10148
Binary files /dev/null and b/students/amartins/tarefas/__pycache__/bninput.cpython-39.pyc differ
diff --git a/students/amartins/tarefas/tarefa2.md b/students/amartins/tarefas/tarefa2.md
new file mode 100644
index 0000000..ebb3b04
--- /dev/null
+++ b/students/amartins/tarefas/tarefa2.md
@@ -0,0 +1,87 @@

# Task 2: Read Bayesian Networks, Write Logic Programs

> **Task Status.** Importing Bayesian Networks - OK; Building a Logic Program from a BN - In Progress.

## Importing a Bayesian Network

Steps:

- [x] Implement
- [ ] Test and Document
- [x] Use

Function `summary_dag(filename)` in the `bninput` module. **It must still be tested and documented.**

## Building a Logic Program from a Bayesian Network

Steps:

- [/] Implement
- [ ] Test and Document
- [ ] Use

### 2023-07-20

The file `tarefa2.py` is **almost** adequate for this task. In particular, it has code to convert the description of a BN into _something that resembles a logic program_. However:

**Create functions.** As you did in `bninput`, you should **put the "essential" code into functions**. That is, the core of

```python
if __name__ == "__main__":
    summary = summary_dag("asia2.bif")
    model = summary["bnmodel"]
    probabilities = get_yes_probabilities(model)
    for node, yes_prob in probabilities.items():
        parents = model.get_parents(node)
        s = ""
        if len(parents) == 0:
...
```

should go into a function.
My suggestion is that the argument of that function be a `model`, which could come from, for example, `summary_dag(...)`.

**Adapt the notation of the logic programs.**

The syntax for the logic programs is the following:

```prolog
f.                   /* Deterministic Fact */
h :- b1, ..., bN.    /* Deterministic Rule */
p::f.                /* Probabilistic Fact */
p::h :- b1, ..., bN. /* Probabilistic Rule */
```

where `p` is a probability (a `float` between 0 and 1); `f` is a "fact" (for example, `asia`); and `h :- b1, ..., bN` is a "rule" in which `h` is the "head" and the "body" consists of "literals" (facts or negations of facts) `b1, ..., bN`. The symbol "`,`" denotes conjunction ($\wedge$), "`-`" denotes negation ($\neg$), and "`:-`" (instead of "`<-`"; read "if") denotes $\leftarrow$.

Moreover, in contrast with what your program currently produces, each rule and each fact ends with "`.`". So, **the syntax still has to be matched with that of the logic programs.**

**Syntax, part 2**

There is one additional point: the programs that process these logic programs do not support (more or less, in general, for now) probabilistic facts and rules. This means that the syntax

```prolog
p::f.                /* Probabilistic Fact */
p::h :- b1, ..., bN. /* Probabilistic Rule */
```

is "wrong" for those programs. What we can do, for now, is write

```prolog
%* p::f. *%
f ; -f.
%* p::h. *%
h ; -h :- b1, ..., bN.
```

For example,

```prolog
%* 0.01::asia. *%
asia ; -asia.
```

instead of

```prolog
0.01::asia.
```

In these examples the syntax of the logic programs is extended with "`;`", denoting disjunction ($\vee$), and with "`%* ... *%`" for comment blocks. That is,

```prolog
%* 0.01::asia. *%
asia ; -asia.
```

says that we have a **disjunctive fact**, `asia ; -asia`, stating that either `asia` "happens" or not-`asia` "happens". The comment `%* 0.01::asia. *%` serves to "carry" the information about the probabilities.
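The conversion just described can be sketched in Python. This is a minimal sketch: the function name `facts_to_program` and the dict-of-probabilities input are illustrative assumptions, not existing code in `tarefa2.py`:

```python
def facts_to_program(probabilities):
    """Render probabilistic facts as disjunctive facts, carrying the
    probabilities in %* ... *% comment blocks (hypothetical helper)."""
    lines = []
    for fact, p in probabilities.items():
        lines.append(f"%* {p}::{fact}. *%")  # carry the probability information
        lines.append(f"{fact} ; -{fact}.")   # disjunctive fact: fact or its negation
    return "\n".join(lines)

print(facts_to_program({"asia": 0.01}))
# %* 0.01::asia. *%
# asia ; -asia.
```

A function with this shape would slot in after the `model`-processing step suggested above.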
This information will be handled later, perhaps in task 4 or task 5.
\ No newline at end of file
diff --git a/students/amartins/tarefas/tarefa2.pdf b/students/amartins/tarefas/tarefa2.pdf
new file mode 100644
index 0000000..d751b67
Binary files /dev/null and b/students/amartins/tarefas/tarefa2.pdf differ
diff --git a/text/paper_01/pre-paper.pdf b/text/paper_01/pre-paper.pdf
index 7863c0f..8aa43bd 100644
Binary files a/text/paper_01/pre-paper.pdf and b/text/paper_01/pre-paper.pdf differ
diff --git a/text/paper_01/pre-paper.tex b/text/paper_01/pre-paper.tex
index d43c474..6e53ece 100644
--- a/text/paper_01/pre-paper.tex
+++ b/text/paper_01/pre-paper.tex
@@ -5,6 +5,7 @@ bibstyle=numeric, citestyle=numeric
 ]{biblatex} %Imports biblatex package
 \addbibresource{zugzwang.bib} %Import the bibliography file
+
 \usepackage[x11colors]{xcolor}
 \usepackage{tikz}
@@ -95,7 +96,7 @@ citecolor=blue,
 \acrodef{KL}[KL]{Kullback-Leibler}
 \title{An Algebraic Approach to Stochastic ASP
-    %Zugzwang\\\emph{Logic and Artificial Intelligence}\\{\bruno Why this title?}
+    %Zugzwang\\\emph{Logic and Artificial Intelligence}\\
 }
 \author{
@@ -132,7 +133,7 @@ citecolor=blue,
 \Acf{ASP} is a logic programming paradigm based on the \ac{SM} semantics of \acp{NP} that can be implemented using the latest advances in SAT solving technology. Unlike Prolog, \ac{ASP} is a truly declarative language that supports language constructs such as disjunction in the head of a clause, choice rules, and hard and weak constraints. \todo{references}
-The \ac{DS} is a key approach to extend logical representations with probabilistic reasoning. \Acp{PF} are the most basic \ac{DS} stochastic primitives and take the form of logical facts, $a$, labelled with probabilities, $p$, such as $\probfact{p}{a}$; Each \ac{PF} represents a boolean random variable that is true with probability $p$ and false with probability $\co{p} = 1 - p$.
A (consistent) combination of the \acp{PF} defines a \acf{TC} $t = \set{\probfact{p}{a}, \ldots}$ such that \franc{changed \acl{TC} $c$ to $t$ everywhere.}
+The \ac{DS} is a key approach to extend logical representations with probabilistic reasoning. \Acp{PF} are the most basic \ac{DS} stochastic primitives and take the form of logical facts, $a$, labelled with probabilities, $p$, such as $\probfact{p}{a}$; each \ac{PF} represents a boolean random variable that is true with probability $p$ and false with probability $\co{p} = 1 - p$. A (consistent) combination of the \acp{PF} defines a \acf{TC} $t = \set{\probfact{p}{a}, \ldots}$ such that %\franc{changed \acl{TC} $c$ to $t$ everywhere.}
 \begin{equation}
 \pr{T = t} = \prod_{a\in t} p \prod_{a \not\in t} \co{p}.
@@ -146,9 +147,9 @@ Our goal is to extend this probability, from \acp{TC}, to cover the \emph{specif
 \item Also, given a dataset and a divergence measure, the specification can be scored (by the divergence w.r.t.\ the \emph{empiric} distribution of the dataset), and weighted or sorted amongst other specifications. These are key ingredients in algorithms searching, for example, for optimal specifications of a dataset.
 \end{enumerate}
-Our idea to extend probabilities starts with the stance that a specification describes an \emph{observable system} and that observed events must be related with the \acp{SM} of that specification. From here, probabilities must be extended from \aclp{TC} to \acp{SM} and then from \acp{SM} to any event.
+Our idea to extend probabilities from \acp{TC} starts with the stance that a specification describes an \emph{observable system} and that observed events must be related to the \acp{SM} of that specification. From here, probabilities must be extended from \aclp{TC} to \acp{SM} and then from \acp{SM} to any event.
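The total choice probability $\pr{T = t}$ defined above can be sketched numerically (plain Python, not part of the paper; function and variable names are illustrative):

```python
def total_choice_probability(prob_facts, choice):
    """P(T = t): multiply p for each probabilistic fact included in the
    total choice t, and 1 - p for each fact left out."""
    prob = 1.0
    for fact, p in prob_facts.items():
        prob *= p if fact in choice else 1.0 - p
    return prob

# Two probabilistic facts, 0.3::a and 0.2::b, and the total choice {a}:
print(total_choice_probability({"a": 0.3, "b": 0.2}, {"a"}))  # 0.3 * (1 - 0.2)
```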
-Extending probability from \acp{TC} to \acp{SM} faces a critical problem, illustrated by the example in \cref{sec:example.1}, concerning situations where multiple \acp{SM}, $ab$ and $ac$, result from a single \ac{TC}, $a$, but there is not enough information (in the specification) to assign a single probability to each \ac{SM}. We propose to address this issue by using algebraic variables to describe that lack of information and then estimate the value of those variables from empirical data.
+Extending probabilities from \acp{TC} to \acp{SM} faces a critical problem, illustrated by the example in \cref{sec:example.1}, concerning situations where multiple \acp{SM}, $ab$ and $ac$, result from a single \ac{TC}, $a$, but there is not enough information (in the specification) to assign a single probability to each \ac{SM}. We propose to address this issue by using algebraic variables to describe that lack of information and then estimate the value of those variables from empirical data.
 In a related work, \cite{verreet2022inference}, epistemic uncertainty (or model uncertainty) is considered as a lack of knowledge about the underlying model, which may be mitigated via further observations. This seems to presuppose a Bayesian approach to imperfect knowledge, in the sense that having further observations allows one to improve/correct the model. Indeed, the approach in that work uses Beta distributions in order to be able to learn the full distribution. This approach seems especially well suited to telling when some probability lies beneath some given value. \todo{Our approach seems to be similar in spirit.
If so, we should mention this in the introduction.} \todo{Also remark that our approach remains algebraic in the way that we address the problems concerning the extension of probabilities.}
@@ -161,7 +162,10 @@
 \section{A simple but fruitful example}\label{sec:example.1}
-\todo{Write an introduction to the section}
+%\todo{Write an introduction to the section}
+
+{\bruno In this section we consider a somewhat simple example that showcases the problem of extending probabilities from \aclp{TC} to \aclp{SM}. As mentioned before, the main issue arises from the lack of information in the specification to assign a single probability to each \acl{SM}. This becomes a crucial problem in situations where multiple \aclp{SM} result from a single \acl{TC}. We will come back to the example given in this section in \cref{S:SBF_developed}, after we present our proposal for extending the probabilities from \aclp{TC} to \aclp{SM} in \cref{sec:extending.probalilities}.}
+
 \begin{example}\label{running.example}
 Consider the following specification
@@ -268,12 +272,12 @@ The \aclp{SM} $ab, ac$ from \cref{running.example} result from the clause $b \v
 \label{fig:running.example}
 \end{figure}
-\todo{Somewhere, we need to shift the language from extending \emph{probabilities} to extending \emph{measures}}
-
-\note{$\emptyevent$ notation introduced in \cref{fig:running.example}.}
+%\note{$\emptyevent$ notation introduced in \cref{fig:running.example}.}
 The diagram in \cref{fig:running.example} illustrates the problem of extending probabilities from \acp{TC} nodes to \acp{SM} and then to general events in a \emph{node-wise} process. This quickly leads to \remark{coherence problems}{for example?} concerning probability, with no clear systematic approach --- instead, weight extension can be based on the relation an event has with the \aclp{SM}.
+{\bruno We will consider first the problem of extending measures, since the problem of extending probabilities easily follows by means of a suitable normalization (see \eqref{E:Normalization} and \eqref{E:measure_to_prob}).} + \subsection{An Equivalence Relation}\label{subsec:equivalence.relation} \begin{figure}[t] @@ -383,11 +387,10 @@ The diagram in \cref{fig:running.example} illustrates the problem of extending p \label{fig:running.example.classes} \end{figure} -Given an ASP specification, -\remark{{\bruno Introduce also the sets mentioned below}}{how?} -we consider the \emph{atoms} $a \in \fml{A}$ and \emph{literals}, $z \in \fml{L}$, \emph{events} $e \in \fml{E} \iff e \subseteq \fml{L}$ and \emph{worlds} $w \in \fml{W}$ (consistent events), \emph{\aclp{TC} } $t \in \fml{T} \iff t = a \vee \neg a$ and \emph{\aclp{SM} } $s \in \fml{S}\subset\fml{W}$. +Given an ASP specification, we consider {\bruno a set of \emph{atoms} $ \fml{A}$, a set of \emph{literals}, $\fml{L}$, and a set of \emph{events} $\fml{E}$ such that $e \in \fml{E} \iff e \subseteq \fml{L}$. We also consider a set of \emph{worlds} $\fml{W}$ (consistent events), a set of \emph{\aclp{TC} } such that for every $a \in \fml{A}$ we have $t \in \fml{T} \iff t = a \vee \neg a$, and a set of \emph{\aclp{SM} } such that $ \fml{S}\subset\fml{W}$.} +%the \emph{atoms} $a \in \fml{A}$ and \emph{literals}, $z \in \fml{L}$, \emph{events} $e \in \fml{E} \iff e \subseteq \fml{L}$ and \emph{worlds} $w \in \fml{W}$ (consistent events), \emph{\aclp{TC} } $t \in \fml{T} \iff t = a \vee \neg a$ and \emph{\aclp{SM} } $s \in \fml{S}\subset\fml{W}$. -Our path starts with a perspective of \aclp{SM} as playing a role similar to \emph{prime} factors. The \aclp{SM} of a specification are the irreducible events entailed from that specification and any event must be \replace{interpreted}{considered} under its relation with the \aclp{SM}. 
+Our path starts with a perspective of \aclp{SM} as playing a role similar to \emph{prime} factors. The \aclp{SM} of a specification are the irreducible events entailed from that specification and any event must be considered under its relation with the \aclp{SM}. %\remark{\todo{Introduce a structure with worlds, events, and \aclp{SM} }}{seems irrelevant} This focus on the \acp{SM} leads to the following definition: @@ -427,7 +430,7 @@ Observe that the minimality of \aclp{SM} implies that, in \cref{def:stable.core \end{cases}\label{eq:event.class} \end{equation} -The subsets of the \aclp{SM}, together with $\inconsistent$, form a set of representatives. Consider again Example~\ref{running.example}. As previously mentioned, the \aclp{SM} are $\fml{S} = \co{a}, ab, ac$ so the quotient set of this relation is: +The subsets of the \aclp{SM}, together with $\inconsistent$, form a set of representatives. Consider again \cref{running.example}. As previously mentioned, the \aclp{SM} are $\fml{S} = \co{a}, ab, ac$ so the quotient set of this relation is: \begin{equation} \class{\fml{E}} = \set{ \inconsistent, @@ -521,7 +524,7 @@ where $\indepclass$ denotes both the class of \emph{independent} events $e$ such \item Normalization of the weights. \end{enumerate} -The ``extension'' phase, traced by equations (\ref{eq:prob.total.choice}) and (\ref{eq:weight.tchoice} --- \ref{eq:weight.events}), starts with the weight (probability) of \aclp{TC}, $\pw{t} = \pr{T = t}$, expands it to \aclp{SM}, $\pw{s}$, and then, within the equivalence relation from \cref{eq:equiv.rel}, to (general) events, $\pw{e}$, including (consistent) worlds. +The ``extension'' phase, traced by \cref{eq:prob.total.choice} and eqs. 
\eqref{eq:weight.tchoice} to \eqref{eq:weight.events}, starts with the weight (probability) of \aclp{TC}, $\pw{t} = \pr{T = t}$, expands it to \aclp{SM}, $\pw{s}$, and then, within the equivalence relation from \cref{eq:equiv.rel}, to (general) events, $\pw{e}$, including (consistent) worlds. \begin{description} % @@ -555,7 +558,7 @@ The ``extension'' phase, traced by equations (\ref{eq:prob.total.choice}) and (\ \pw{\indepclass, t} := 0. \label{eq:weight.class.independent} \end{equation} - \item[Other Classes.] The extension must be constant within a class, its value should result from the elements in the \acl{SC}, and respect the assumption \ref{assumption:smodels.independence} (\aclp{SM} independence): + \item[Other Classes.] The extension must be constant within a class, its value should result from the elements in the \acl{SC}, and respect assumption \ref{assumption:smodels.independence} (\aclp{SM} independence): \begin{equation} \pw{\class{e}, t} := \sum_{k=1}^{n}\pw{s_k, t},~\text{if}~\stablecore{e} = \set{s_1, \ldots, s_n}. \label{eq:weight.class.other} @@ -605,9 +608,9 @@ Equation \eqref{eq:weight.class.other} results from conditional independence of \section{Developed Examples} -\subsection{The SBF Example} +\subsection{The SBF Example}\label{S:SBF_developed} -We continue with the specification from Equation \eqref{eq:example.1}. +We continue with the specification from \cref{eq:example.1}. \begin{description} % @@ -682,14 +685,14 @@ We continue with the specification from Equation \eqref{eq:example.1}. \end{array} \end{equation*} \item[Normalization.] To get a weight that sums up to one, we compute the \emph{normalization factor}. 
Since $\pw{\cdot}$ is constant on classes,\todo{prove that we get a probability.}
-	\begin{equation*}
+	\begin{equation}\label{E:Normalization}
	Z := \sum_{e\in\fml{E}} \pw{e} = \sum_{\class{e} \in\class{\fml{E}}} \frac{\pw{\class{e}}}{\#\class{e}},
-	\end{equation*}
+	\end{equation}
 that divides the weight function into a normalized weight
-	\begin{equation*}
+	\begin{equation}\label{E:measure_to_prob}
	\pr{e} := \frac{\pw{e}}{Z}.
-	\end{equation*}
+	\end{equation}
 such that
$$
\sum_{e \in \fml{E}} \pr{e} = 1.
$$
@@ -782,83 +785,75 @@ We continue with the specification from Equation \eqref{eq:example.1}.
 %
 \subsection{An example involving Bayesian networks}
-\franc{Comments:}
-\begin{itemize}
-	\item There is a macro, $\backslash\text{pr}\{A\}$, to denote the probability function: $\pr{A}$ instead of $P(A)$. By the way, for the conditional there is also a command, $\backslash\text{given}$: $\pr{A \given B}$.
-	\item And, of course, for facts+probabilities: $\probfact{p}{a}$.
-	\item The naming of the `weights' is not consistent: $pj\_a$ and $a\_be$. I made a macro (\emph{hehe}) to systematize this: \condsymb{a}{bnc}.
-	\item In the programs, I aligned on the facts. That is, $\probfact{0.3}{a}$ and $a \leftarrow b$ align at the (end of the) $a$.
-\end{itemize}
-
-As it turns out, our framework is suitable to deal with more sophisticated cases, \replace{for example}{in particular} cases involving Bayesian networks. In order to illustrate this, in this section we see how the classical example of the Burglary, Earthquake, Alarm \cite{Judea88} works in our setting. This example is a commonly used example in Bayesian networks because it illustrates reasoning under uncertainty. The gist of example is given in \cref{Figure_Alarm}. It involves a simple network of events and conditional probabilities.
+As it turns out, our framework is suitable to deal with more sophisticated cases, in particular cases involving Bayesian networks.
In order to illustrate this, in this section we show how the classical Burglary, Earthquake, Alarm example \cite{Judea88} works in our setting. This example is commonly used in the Bayesian network literature because it illustrates reasoning under uncertainty. The gist of the example is given in \cref{Figure_Alarm}. It involves a simple network of events and conditional probabilities.
-The events are: Burglary ($B$), Earthquake ($E$), Alarm ($A$), Mary calls ($M$) and John calls ($J$). The initial events $B$ and $E$ are assumed to be independent events that occur with probabilities $P(B)$ and $P(E)$, respectively. There is an alarm system that can be triggered by either of the initial events $B$ and $E$. The probability of the alarm going off is a conditional probability given that $B$ and $E$ have occurred. One denotes these probabilities, as per usual, by $P(A|B)$, and $P(A|E)$. There are two neighbours, Mary and John who have agreed to call if they hear the alarm. The probability that they do actually call is also a conditional probability denoted by $P(M|A)$ and $P(J|A)$, respectively.
+The events are: Burglary ($B$), Earthquake ($E$), Alarm ($A$), Mary calls ($M$) and John calls ($J$). The initial events $B$ and $E$ are assumed to be independent and occur with probabilities $\pr{B}$ and $\pr{E}$, respectively. There is an alarm system that can be triggered by either of the initial events $B$ and $E$. The probability of the alarm going off is a conditional probability given the occurrence of $B$ and $E$; one denotes these probabilities, as per usual, by $\pr{A \given B}$ and $\pr{A \given E}$. There are two neighbours, Mary and John, who have agreed to call if they hear the alarm. The probability that they actually do call is also a conditional probability, denoted by $\pr{M \given A}$ and $\pr{J \given A}$, respectively.
\begin{figure} \begin{center} \begin{tikzpicture}[node distance=2.5cm] - + % Nodes \node[smodel, circle] (A) {A}; \node[tchoice, above right of=A] (B) {B}; \node[tchoice, above left of=A] (E) {E}; \node[tchoice, below left of=A] (M) {M}; \node[tchoice, below right of=A] (J) {J}; - + % Edges - \draw[->] (B) to[bend left] (A) node[right,xshift=1.1cm,yshift=0.8cm] {\footnotesize{$P(B)=0.001$}} ; - \draw[->] (E) to[bend right] (A) node[left, xshift=-1.4cm,yshift=0.8cm] {\footnotesize{$P(E)=0.002$}} ; - \draw[->] (A) to[bend right] (M) node[left,xshift=0.2cm,yshift=0.7cm] {\footnotesize{$P(M|A)$}}; - \draw[->] (A) to[bend left] (J) node[right,xshift=-0.2cm,yshift=0.7cm] {\footnotesize{$P(J|A)$}} ; + \draw[->] (B) to[bend left] (A) node[right,xshift=1.1cm,yshift=0.8cm] {\footnotesize{$\pr{B}=0.001$}} ; + \draw[->] (E) to[bend right] (A) node[left, xshift=-1.4cm,yshift=0.8cm] {\footnotesize{$\pr{E}=0.002$}} ; + \draw[->] (A) to[bend right] (M) node[left,xshift=0.2cm,yshift=0.7cm] {\footnotesize{$\pr{M \given A}$}}; + \draw[->] (A) to[bend left] (J) node[right,xshift=-0.2cm,yshift=0.7cm] {\footnotesize{$\pr{J \given A}$}} ; \end{tikzpicture} \end{center} - + \begin{multicols}{3} - + \footnotesize{ - \begin{equation*} - \begin{split} - &P(M|A)\\ - & \begin{array}{c|cc} - & m & \neg m \\ - \hline - a & 0.9 & 0.1 \\ - \neg a & 0.05 & 0.95 - \end{array} - \end{split} - \end{equation*} + \begin{equation*} + \begin{split} + &\pr{M \given A}\\ + & \begin{array}{c|cc} + & m & \neg m \\ + \hline + a & 0.9 & 0.1\\ + \neg a& 0.05 & 0.95 + \end{array} + \end{split} + \end{equation*} } - + \footnotesize{ - \begin{equation*} - \begin{split} - &P(J|A)\\ - & \begin{array}{c|cc} - & j & \neg j \\ - \hline - a & 0.7 & 0.3 \\ - \neg a & 0.01 & 0.99 - \end{array} - \end{split} - \end{equation*} + \begin{equation*} + \begin{split} + &\pr{J \given A}\\ + & \begin{array}{c|cc} + & j & \neg j \\ + \hline + a & 0.7 & 0.3\\ + \neg a& 0.01 & 0.99 + \end{array} + \end{split} + \end{equation*} } 
 \footnotesize{
-	\begin{equation*}
-	\begin{split}
-	P(A|B \wedge E)\\
-	\begin{array}{c|c|cc}
-	 &  & a & \neg a \\
-	\hline
-	b & e & 0.95 & 0.05 \\
-	b & \neg e & 0.94 & 0.06 \\
-	\neg b & e & 0.29 & 0.71 \\
-	\neg b & \neg e & 0.001 & 0.999
-	\end{array}
-	\end{split}
-	\end{equation*}
+	\begin{equation*}
+	\begin{split}
+	\pr{A \given B \wedge E}\\
+	\begin{array}{c|c|cc}
+	 &  & a & \neg a \\
+	\hline
+	b & e & 0.95 & 0.05\\
+	b & \neg e & 0.94 & 0.06\\
+	\neg b & e & 0.29 & 0.71\\
+	\neg b & \neg e & 0.001 & 0.999
+	\end{array}
+	\end{split}
+	\end{equation*}
 }
 \end{multicols}
 \caption{The Earthquake, Burglary, Alarm model}
@@ -866,64 +861,63 @@
 \end{figure}
-Considering the probabilities given in \cref{Figure_Alarm} we obtain the following spe\-ci\-fi\-ca\-tion
+Considering the probabilities given in \cref{Figure_Alarm} we obtain the following spe\-ci\-fi\-ca\-tion:
 \begin{equation*}
 	\begin{aligned}
-	\probfact{0.001}{b} & ,\cr
-	\probfact{0.002}{e} & ,\cr
-	\end{aligned}
+	\probfact{0.001}{b}&,\cr
+	\probfact{0.002}{e}&,\cr
+	\end{aligned}
 \label{eq:not_so_simple_example}
 \end{equation*}
-For the table giving the probability $P(M|A)$ we obtain the specification:
+For the table giving the probability $\pr{M \given A}$ we obtain the specification:
 \begin{equation*}
 	\begin{aligned}
-	\probfact{0.9}{pm\_a} & ,\cr
-	\probfact{0.05}{pm\_na} & ,\cr
-	m & \leftarrow a, pm\_a,\cr
-	\neg m & \leftarrow a, \neg pm\_a.
-	\end{aligned}
+	\probfact{0.9}{\condsymb{m}{a}}&,\cr
+	\probfact{0.05}{\condsymb{m}{na}}&,\cr
+	m & \leftarrow a, \condsymb{m}{a},\cr
+	\neg m & \leftarrow a, \neg \condsymb{m}{a},\cr
+	m & \leftarrow \neg a, \condsymb{m}{na},\cr
+	\neg m & \leftarrow \neg a, \neg \condsymb{m}{na}.
+	\end{aligned}
 \end{equation*}
This latter specification can be simplified by writing $\probfact{0.9}{m \leftarrow a}$ and $\probfact{0.05}{m \leftarrow \neg a}$.
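As a sanity check on the numbers in \cref{Figure_Alarm}, the prior probability of the alarm can be computed by marginalizing the table $\pr{A \given B \wedge E}$ over $B$ and $E$ (plain Python, not part of the paper):

```python
p_b, p_e = 0.001, 0.002  # P(B), P(E) from the figure

p_a_given = {  # P(A = true | B, E) table, keyed by (b, e)
    (True, True): 0.95, (True, False): 0.94,
    (False, True): 0.29, (False, False): 0.001,
}

# P(A) = sum over the four (b, e) rows of P(A | b, e) * P(b) * P(e)
p_a = sum(
    p_a_given[b, e]
    * (p_b if b else 1 - p_b)
    * (p_e if e else 1 - p_e)
    for b in (True, False)
    for e in (True, False)
)
print(round(p_a, 6))  # roughly 0.002516
```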
-Similarly, for the probability $P(J|A)$ we obtain
+Similarly, for the probability $\pr{J \given A}$ we obtain
 \begin{equation*}
 	\begin{aligned}
-	\probfact{0.7}{pj\_a} & ,\cr
-	\probfact{0.01}{pj\_na} & ,\cr
-	j & \leftarrow a, pj\_a,\cr
-	\neg j & \leftarrow a, \neg pj\_a.\cr
-	\end{aligned}
+	\probfact{0.7}{\condsymb{j}{a}}&,\cr
+	\probfact{0.01}{\condsymb{j}{na}}&,\cr
+	j & \leftarrow a, \condsymb{j}{a},\cr
+	\neg j & \leftarrow a, \neg \condsymb{j}{a},\cr
+	j & \leftarrow \neg a, \condsymb{j}{na},\cr
+	\neg j & \leftarrow \neg a, \neg \condsymb{j}{na}.\cr
+	\end{aligned}
 \end{equation*}
Again, this can be simplified by writing $\probfact{0.7}{j \leftarrow a}$ and $\probfact{0.01}{j \leftarrow \neg a}$.
-Finally, for the probability $P(A|B \wedge E)$ we obtain
+Finally, for the probability $\pr{A \given B \wedge E}$ we obtain
 \begin{equation*}
 	\begin{aligned}
-	\probfact{0.95}{a\_be} & ,\cr
-	\probfact{0.94}{a\_bne} & ,\cr
-	\probfact{0.29}{a\_nbe} & ,\cr
-	\probfact{0.001}{a\_nbne} & ,\cr
-	a & \leftarrow b, e, a\_be,\cr
-	\neg a & \leftarrow b,e, \neg a\_be, \cr
-	a & \leftarrow b,e, a\_bne,\cr
-	\neg a & \leftarrow b,e, \neg a\_bne, \cr
-	a & \leftarrow b,e, a\_nbe,\cr
-	\neg a & \leftarrow b,e, \neg a\_nbe, \cr
-	a & \leftarrow b,e, a\_nbne,\cr
-	\neg a & \leftarrow b,e, \neg a\_nbne. \cr
-	\end{aligned}
+	\probfact{0.95}{\condsymb{a}{be}}&,\cr
+	\probfact{0.94}{\condsymb{a}{bne}}&,\cr
+	\probfact{0.29}{\condsymb{a}{nbe}}&,\cr
+	\probfact{0.001}{\condsymb{a}{nbne}}&,\cr
+	a & \leftarrow b, e, \condsymb{a}{be},\cr
+	\neg a & \leftarrow b, e, \neg \condsymb{a}{be}, \cr
+	a & \leftarrow b, \neg e, \condsymb{a}{bne},\cr
+	\neg a & \leftarrow b, \neg e, \neg \condsymb{a}{bne}, \cr
+	a & \leftarrow \neg b, e, \condsymb{a}{nbe},\cr
+	\neg a & \leftarrow \neg b, e, \neg \condsymb{a}{nbe}, \cr
+	a & \leftarrow \neg b, \neg e, \condsymb{a}{nbne},\cr
+	\neg a & \leftarrow \neg b, \neg e, \neg \condsymb{a}{nbne}. \cr
+	\end{aligned}
 \end{equation*}
-One can then proceed as in the previous subsection and analyse this example.
The details of such analysis are not given here since they are analogous, albeit admittedly more cumbersome. - +One can then proceed as in the previous subsection and analyse this example. The details of such analysis are not given here since they are analogous, albeit admittedly more cumbersome. \section{Discussion} -- libgit2 0.21.2