Commit 652cc0d69db72f1419a45c191b97c59d29458925

Authored by Francisco Coelho
1 parent d6ce783e
Exists in master

updated amartins tasks

code/asp/pdist.lp
1 1 a; -a.
2 - -b; c :- a.
3 2 -e :- 1 { b ; c }.
  3 +b; c :- a.
\ No newline at end of file
students/amartins/tarefas/__pycache__/bninput.cpython-39.pyc 0 → 100644
No preview for this file type
students/amartins/tarefas/tarefa2.md 0 → 100644
@@ -0,0 +1,87 @@
  1 +# Task 2: Read Bayesian Networks, Write Logic Programs
  2 +
  3 +> **Task Status.** Importing Bayesian Networks - OK; Building a Logic Program from a BN - In Progress.
  4 +
  5 +## Importing a Bayesian Network
  6 +
  7 +Steps:
  8 +
  9 +- [x] Implement
  10 +- [ ] Test and Document
  11 +- [x] Use
  12 +
  13 +Function `summary_dag(filename)` in the `bninput` module. **It must be tested and documented.**
  14 +
  15 +## Building a Logic Program from a Bayesian Network
  16 +
  17 +Steps:
  18 +
  19 +- [/] Implement
  20 +- [ ] Test and Document
  21 +- [ ] Use
  22 +
  23 +### 2023-07-20
  24 +
  25 +The file `tarefa2.py` is **almost** adequate for this task. In particular, it has code to convert the description of a BN into _something that resembles a logic program_. However:
  26 +
  27 +**Create functions.** As you did in `bninput`, you should **put the "essential" code into functions**. That is, the essence of
  28 +
  29 +```python
  30 +if __name__ == "__main__":
  31 +    summary = summary_dag("asia2.bif")
  32 +    model = summary["bnmodel"]
  33 +    probabilities = get_yes_probabilities(model)
  34 +    for node, yes_prob in probabilities.items():
  35 +        parents = model.get_parents(node)
  36 +        s = ""
  37 +        if len(parents) == 0:
  38 +...
  39 +```
  40 +
  41 +should go into a function. My suggestion is that this function take a `model` argument, which could come from, for example, `summary_dag(...)`.
  42 +
  43 +**Adapt the notation of the logic programs.**
  44 +
  45 +The syntax for the logic programs is the following:
  46 +```prolog
  47 +f. /* Deterministic Fact */
  48 +h :- b1, ..., bN. /* Deterministic Rule */
  49 +p::f. /* Probabilistic Fact */
  50 +p::h :- b1, ..., bN. /* Probabilistic Rule */
  51 +```
  52 +
  53 +where `p` is a probability (a `float` between 0 and 1); `f` is a "fact" (for example, `asia`) and `h :- b1, ..., bN` is a "rule" in which `h` is the "head" and the "body" is made of "literals" (facts or negations of facts) `b1, ..., bN`. The symbol "`,`" denotes _conjunction_ ($\wedge$), "`-`" denotes negation ($\neg$), and "`:-`" (instead of "`<-`"; read "if") denotes $\leftarrow$.
  54 +
  55 +Moreover, in contrast with what your program currently produces, every rule and every fact ends with "`.`". So, **the syntax still has to be aligned with that of the logic programs.**
  56 +
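A small formatter is enough for this adjustment. The sketch below is mine, not part of `tarefa2.py`; the function name and the `(prob, head, body)` signature are assumptions:

```python
def format_clause(prob, head, body=()):
    """Format one clause in the logic-program syntax described above.

    prob: float in [0, 1], or None for a deterministic clause.
    head: head literal; body: iterable of body literals.
    Every fact and every rule ends with ".".
    """
    label = "" if prob is None else f"{prob}::"
    if body:
        return f"{label}{head} :- {', '.join(body)}."
    return f"{label}{head}."

print(format_clause(None, "f"))               # deterministic fact
print(format_clause(None, "h", ["b1", "bN"])) # deterministic rule
print(format_clause(0.01, "asia"))            # probabilistic fact
```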
  57 +**Syntax, part 2**
  58 +
  59 +There is one additional aspect: the programs that process the logic programs do not support (more or less, in general, for now) probabilistic facts and rules. This means that the syntax
  60 +```prolog
  61 +p::f. /* Probabilistic Fact */
  62 +p::h :- b1, ..., bN. /* Probabilistic Rule */
  63 +```
  64 +is "wrong" for those programs. What we can do, for now, is write
  65 +```prolog
  66 +%* p::f. *%
  67 +f ; -f.
  68 +%* p::h. *%
  69 +h ; -h :- b1, ..., bN.
  70 +```
  71 +
  72 +For example,
  73 +```prolog
  74 +%* 0.01::asia. *%
  75 +asia ; -asia.
  76 +```
  77 +instead of
  78 +```prolog
  79 +0.01::asia.
  80 +```
  81 +
  82 +In these examples the syntax of the logic programs is extended with "`;`" to denote disjunction ($\vee$) and "`%* ... *%`" for comment blocks. That is,
  83 +```prolog
  84 +%* 0.01::asia. *%
  85 +asia ; -asia.
  86 +```
  87 +says that we have a **disjunctive fact**, `asia ; -asia`, which states that either `asia` "happens" or not-`asia` "happens". The comment `%* 0.01::asia. *%` serves to "carry" the information about the probabilities. This information will be handled later, perhaps in task 4 or 5.
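This rewriting can be automated. A minimal sketch (the function name `to_disjunctive` is hypothetical, and for rules it keeps the whole clause, body included, inside the comment, a small deviation from the `%* p::h. *%` shorthand above):

```python
def to_disjunctive(prob, head, body=()):
    """Rewrite a probabilistic fact/rule into the supported syntax:
    the probability is carried in a %* ... *% comment and the clause
    itself becomes disjunctive, `head ; -head [:- body].`"""
    suffix = f" :- {', '.join(body)}" if body else ""
    comment = f"%* {prob}::{head}{suffix}. *%"
    clause = f"{head} ; -{head}{suffix}."
    return comment + "\n" + clause

print(to_disjunctive(0.01, "asia"))
print(to_disjunctive(0.9, "m", ["a"]))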
\ No newline at end of file
students/amartins/tarefas/tarefa2.pdf 0 → 100644
No preview for this file type
text/paper_01/pre-paper.pdf
No preview for this file type
text/paper_01/pre-paper.tex
@@ -5,6 +5,7 @@ bibstyle=numeric,
5 5 citestyle=numeric
6 6 ]{biblatex} %Imports biblatex package
7 7 \addbibresource{zugzwang.bib} %Import the bibliography file
  8 +
8 9 \usepackage[x11colors]{xcolor}
9 10 
10 11 \usepackage{tikz}
@@ -95,7 +96,7 @@ citecolor=blue,
95 96 \acrodef{KL}[KL]{Kullback-Leibler}
96 97 
97 98 \title{An Algebraic Approach to Stochastic ASP
98 - %Zugzwang\\\emph{Logic and Artificial Intelligence}\\{\bruno Why this title?}
  99 + %Zugzwang\\\emph{Logic and Artificial Intelligence}\\
99 100 }
100 101 
101 102 \author{
@@ -132,7 +133,7 @@ citecolor=blue,
132 133 \Acf{ASP} is a logic programming paradigm based on the \ac{SM} semantics of \acp{NP} that can be implemented using the latest advances in SAT solving technology. Unlike ProLog, \ac{ASP} is a truly declarative language that supports language constructs such as disjunction in the head of a clause, choice rules, and hard and weak constraints.
133 134 
134 135 \todo{references}
135 - The \ac{DS} is a key approach to extend logical representations with probabilistic reasoning. \Acp{PF} are the most basic \ac{DS} stochastic primitives and take the form of logical facts, $a$, labelled with probabilities, $p$, such as $\probfact{p}{a}$; Each \ac{PF} represents a boolean random variable that is true with probability $p$ and false with probability $\co{p} = 1 - p$. A (consistent) combination of the \acp{PF} defines a \acf{TC} $t = \set{\probfact{p}{a}, \ldots}$ such that \franc{changed \acl{TC} $c$ to $t$ everywhere.}
  136 + The \ac{DS} is a key approach to extend logical representations with probabilistic reasoning. \Acp{PF} are the most basic \ac{DS} stochastic primitives and take the form of logical facts, $a$, labelled with probabilities, $p$, such as $\probfact{p}{a}$; each \ac{PF} represents a boolean random variable that is true with probability $p$ and false with probability $\co{p} = 1 - p$. A (consistent) combination of the \acp{PF} defines a \acf{TC} $t = \set{\probfact{p}{a}, \ldots}$ such that %\franc{changed \acl{TC} $c$ to $t$ everywhere.}
136 137 
137 138 \begin{equation}
138 139 \pr{T = t} = \prod_{a\in t} p \prod_{a \not\in t} \co{p}.
@@ -146,9 +147,9 @@ Our goal is to extend this probability, from \acp{TC}, to cover the \emph{specif
146 147 \item Also, given a dataset and a divergence measure, the specification can be scored (by the divergence w.r.t.\ the \emph{empiric} distribution of the dataset), and weighted or sorted amongst other specifications. These are key ingredients in algorithms searching, for example, optimal specifications of a dataset.
147 148 \end{enumerate}
148 149 
149 - Our idea to extend probabilities starts with the stance that a specification describes an \emph{observable system} and that observed events must be related with the \acp{SM} of that specification. From here, probabilities must be extended from \aclp{TC} to \acp{SM} and then from \acp{SM} to any event.
  150 + Our idea to extend probabilities from \acp{TC} starts with the stance that a specification describes an \emph{observable system} and that observed events must be related with the \acp{SM} of that specification. From here, probabilities must be extended from \aclp{TC} to \acp{SM} and then from \acp{SM} to any event.
150 151 
151 - Extending probability from \acp{TC} to \acp{SM} faces a critical problem, illustrated by the example in \cref{sec:example.1}, concerning situations where multiple \acp{SM}, $ab$ and $ac$, result from a single \ac{TC}, $a$, but there is not enough information (in the specification) to assign a single probability to each \ac{SM}. We propose to address this issue by using algebraic variables to describe that lack of information and then estimate the value of those variables from empirical data.
  152 + Extending probabilities from \acp{TC} to \acp{SM} faces a critical problem, illustrated by the example in \cref{sec:example.1}, concerning situations where multiple \acp{SM}, $ab$ and $ac$, result from a single \ac{TC}, $a$, but there is not enough information (in the specification) to assign a single probability to each \ac{SM}. We propose to address this issue by using algebraic variables to describe that lack of information and then estimate the value of those variables from empirical data.
152 153 
153 154 In a related work, \cite{verreet2022inference}, epistemic uncertainty (or model uncertainty) is considered as a lack of knowledge about the underlying model, that may be mitigated via further observations. This seems to presuppose a Bayesian approach to imperfect knowledge in the sense that having further observations allows to improve/correct the model. Indeed, the approach in that work uses Beta distributions in order to be able to learn the full distribution. This approach seems to be especially suited to telling when some probability lies beneath some given value. \todo{Our approach seems to be similar in spirit. If so, we should mention this in the introduction.}
154 155 \todo{Also remark that our approach remains algebraic in the way that we address the problems concerning the extension of probabilities.}
@@ -161,7 +162,10 @@ In a related work, \cite{verreet2022inference}, epistemic uncertainty (or model
161 162 
162 163 \section{A simple but fruitful example}\label{sec:example.1}
163 164 
164 - \todo{Write an introduction to the section}
  165 + %\todo{Write an introduction to the section}
  166 +
  167 + {\bruno In this section we consider a somewhat simple example that showcases the problem of extending probabilities from \aclp{TC} to \aclp{SM}. As mentioned before, the main issue arises from the lack of information in the specification to assign a single probability to each \acl{SM}. This becomes a crucial problem in situations where multiple \aclp{SM} result from a single \acl{TC}. We will come back to the example given in this section in \cref{S:SBF_developed}, after we present our proposal for extending the probabilities from \aclp{TC} to \aclp{SM} in \cref{sec:extending.probalilities}.}
  168 +
165 169 
166 170 \begin{example}\label{running.example}
167 171 Consider the following specification
@@ -268,12 +272,12 @@ The \aclp{SM} $ab, ac$ from \cref{running.example} result from the clause $b \v
268 272 \label{fig:running.example}
269 273 \end{figure}
270 274 
271 - \todo{Somewhere, we need to shift the language from extending \emph{probabilities} to extending \emph{measures}}
272 - 
273 - \note{$\emptyevent$ notation introduced in \cref{fig:running.example}.}
  275 + %\note{$\emptyevent$ notation introduced in \cref{fig:running.example}.}
274 276 
275 277 The diagram in \cref{fig:running.example} illustrates the problem of extending probabilities from \acp{TC} nodes to \acp{SM} and then to general events in a \emph{node-wise} process. This quickly leads to \remark{coherence problems}{for example?} concerning probability, with no clear systematic approach; instead, weight extension can be based on the relation an event has with the \aclp{SM}.
276 278 
  279 + {\bruno We will consider first the problem of extending measures, since the problem of extending probabilities easily follows by means of a suitable normalization (see \eqref{E:Normalization} and \eqref{E:measure_to_prob}).}
  280 +
277 281 \subsection{An Equivalence Relation}\label{subsec:equivalence.relation}
278 282 
279 283 \begin{figure}[t]
@@ -383,11 +387,10 @@ The diagram in \cref{fig:running.example} illustrates the problem of extending p
383 387 \label{fig:running.example.classes}
384 388 \end{figure}
385 389 
386 - Given an ASP specification,
387 - \remark{{\bruno Introduce also the sets mentioned below}}{how?}
388 - we consider the \emph{atoms} $a \in \fml{A}$ and \emph{literals}, $z \in \fml{L}$, \emph{events} $e \in \fml{E} \iff e \subseteq \fml{L}$ and \emph{worlds} $w \in \fml{W}$ (consistent events), \emph{\aclp{TC} } $t \in \fml{T} \iff t = a \vee \neg a$ and \emph{\aclp{SM} } $s \in \fml{S}\subset\fml{W}$.
  390 + Given an ASP specification, we consider {\bruno a set of \emph{atoms} $\fml{A}$, a set of \emph{literals} $\fml{L}$, and a set of \emph{events} $\fml{E}$ such that $e \in \fml{E} \iff e \subseteq \fml{L}$. We also consider a set of \emph{worlds} $\fml{W}$ (consistent events), a set of \emph{\aclp{TC}} $\fml{T}$ such that, for every $a \in \fml{A}$, $t \in \fml{T} \iff t = a \vee \neg a$, and a set of \emph{\aclp{SM}} $\fml{S} \subset \fml{W}$.}
  391 + %the \emph{atoms} $a \in \fml{A}$ and \emph{literals}, $z \in \fml{L}$, \emph{events} $e \in \fml{E} \iff e \subseteq \fml{L}$ and \emph{worlds} $w \in \fml{W}$ (consistent events), \emph{\aclp{TC} } $t \in \fml{T} \iff t = a \vee \neg a$ and \emph{\aclp{SM} } $s \in \fml{S}\subset\fml{W}$.
389 392 
390 - Our path starts with a perspective of \aclp{SM} as playing a role similar to \emph{prime} factors. The \aclp{SM} of a specification are the irreducible events entailed from that specification and any event must be \replace{interpreted}{considered} under its relation with the \aclp{SM}.
  393 + Our path starts with a perspective of \aclp{SM} as playing a role similar to \emph{prime} factors. The \aclp{SM} of a specification are the irreducible events entailed from that specification and any event must be considered under its relation with the \aclp{SM}.
391 394 
392 395 %\remark{\todo{Introduce a structure with worlds, events, and \aclp{SM} }}{seems irrelevant}
393 396 This focus on the \acp{SM} leads to the following definition:
@@ -427,7 +430,7 @@ Observe that the minimality of \aclp{SM} implies that, in \cref{def:stable.core
427 430 \end{cases}\label{eq:event.class}
428 431 \end{equation}
429 432 
430 - The subsets of the \aclp{SM}, together with $\inconsistent$, form a set of representatives. Consider again Example~\ref{running.example}. As previously mentioned, the \aclp{SM} are $\fml{S} = \co{a}, ab, ac$ so the quotient set of this relation is:
  433 + The subsets of the \aclp{SM}, together with $\inconsistent$, form a set of representatives. Consider again \cref{running.example}. As previously mentioned, the \aclp{SM} are $\fml{S} = \co{a}, ab, ac$ so the quotient set of this relation is:
431 434 \begin{equation}
432 435 \class{\fml{E}} = \set{
433 436 \inconsistent,
@@ -521,7 +524,7 @@ where $\indepclass$ denotes both the class of \emph{independent} events $e$ such
521 524 \item Normalization of the weights.
522 525 \end{enumerate}
523 526 
524 - The ``extension'' phase, traced by equations (\ref{eq:prob.total.choice}) and (\ref{eq:weight.tchoice} --- \ref{eq:weight.events}), starts with the weight (probability) of \aclp{TC}, $\pw{t} = \pr{T = t}$, expands it to \aclp{SM}, $\pw{s}$, and then, within the equivalence relation from \cref{eq:equiv.rel}, to (general) events, $\pw{e}$, including (consistent) worlds.
  527 + The ``extension'' phase, traced by \cref{eq:prob.total.choice} and \crefrange{eq:weight.tchoice}{eq:weight.events}, starts with the weight (probability) of \aclp{TC}, $\pw{t} = \pr{T = t}$, expands it to \aclp{SM}, $\pw{s}$, and then, within the equivalence relation from \cref{eq:equiv.rel}, to (general) events, $\pw{e}$, including (consistent) worlds.
525 528 
526 529 \begin{description}
527 530 %
@@ -555,7 +558,7 @@ The ``extension'' phase, traced by equations (\ref{eq:prob.total.choice}) and (\
555 558 \pw{\indepclass, t} := 0.
556 559 \label{eq:weight.class.independent}
557 560 \end{equation}
558 - \item[Other Classes.] The extension must be constant within a class, its value should result from the elements in the \acl{SC}, and respect the assumption \ref{assumption:smodels.independence} (\aclp{SM} independence):
  561 + \item[Other Classes.] The extension must be constant within a class, its value should result from the elements in the \acl{SC}, and respect assumption \ref{assumption:smodels.independence} (\aclp{SM} independence):
559 562 \begin{equation}
560 563 \pw{\class{e}, t} := \sum_{k=1}^{n}\pw{s_k, t},~\text{if}~\stablecore{e} = \set{s_1, \ldots, s_n}.
561 564 \label{eq:weight.class.other}
@@ -605,9 +608,9 @@ Equation \eqref{eq:weight.class.other} results from conditional independence of
605 608 
606 609 \section{Developed Examples}
607 610 
608 - \subsection{The SBF Example}
  611 + \subsection{The SBF Example}\label{S:SBF_developed}
609 612 
610 - We continue with the specification from Equation \eqref{eq:example.1}.
  613 + We continue with the specification from \cref{eq:example.1}.
611 614 
612 615 \begin{description}
613 616 %
@@ -682,14 +685,14 @@ We continue with the specification from Equation \eqref{eq:example.1}.
682 685 \end{array}
683 686 \end{equation*}
684 687 \item[Normalization.] To get a weight that sums up to one, we compute the \emph{normalization factor}. Since $\pw{\cdot}$ is constant on classes,\todo{prove that we get a probability.}
685 - \begin{equation*}
  688 + \begin{equation}\label{E:Normalization}
686 689 Z := \sum_{e\in\fml{E}} \pw{e}
687 690 = \sum_{\class{e} \in\class{\fml{E}}} \frac{\pw{\class{e}}}{\#\class{e}},
688 - \end{equation*}
  691 + \end{equation}
689 692 that divides the weight function into a normalized weight
690 - \begin{equation*}
  693 + \begin{equation}\label{E:measure_to_prob}
691 694 \pr{e} := \frac{\pw{e}}{Z}.
692 - \end{equation*}
  695 + \end{equation}
693 696 such that
694 697 $$
695 698 \sum_{e \in \fml{E}} \pr{e} = 1.
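As a quick numeric sanity check of this normalization step (toy per-event weights, invented for illustration; not the values of the worked example):

```python
# Normalization phase on made-up event weights w(e):
# Z is the sum of all weights and Pr(e) = w(e)/Z, so Pr sums to one.
weights = {"inconsistent": 0.0, "a": 0.3, "ab": 0.2, "ac": 0.1, "empty": 0.4}

Z = sum(weights.values())
prob = {e: w / Z for e, w in weights.items()}

assert abs(sum(prob.values()) - 1.0) < 1e-12
print(Z, prob)
```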
@@ -782,83 +785,75 @@ We continue with the specification from Equation \eqref{eq:example.1}.
782 785 %
783 786 \subsection{An example involving Bayesian networks}
784 787 
785 - \franc{Comments:}
786 - \begin{itemize}
787 -  \item There is a macro, $\backslash\text{pr}\{A\}$, to denote the probability function: $\pr{A}$ instead of $P(A)$. For the conditional there is also a command, $\backslash\text{given}$: $\pr{A \given B}$.
788 -  \item And, of course, for facts with probabilities: $\probfact{p}{a}$.
789 -  \item The naming of the `weights' is not consistent: $pj\_a$ and $a\_be$. I made a macro (\emph{hehe}) to systematize this: \condsymb{a}{bnc}.
790 -  \item In the programs, I aligned on the facts. That is, $\probfact{0.3}{a}$ and $a \leftarrow b$ align on (the end of) $a$.
791 - \end{itemize}
792 - 
793 788 
794 - As it turns out, our framework is suitable to deal with more sophisticated cases, \replace{for example}{in particular} cases involving Bayesian networks. In order to illustrate this, in this section we see how the classical example of the Burglary, Earthquake, Alarm \cite{Judea88} works in our setting. This example is a commonly used example in Bayesian networks because it illustrates reasoning under uncertainty. The gist of example is given in \cref{Figure_Alarm}. It involves a simple network of events and conditional probabilities.
  789 + As it turns out, our framework is suitable to deal with more sophisticated cases, in particular cases involving Bayesian networks. In order to illustrate this, in this section we see how the classical example of the Burglary, Earthquake, Alarm \cite{Judea88} works in our setting. This example is a commonly used example in Bayesian networks because it illustrates reasoning under uncertainty. The gist of the example is given in \cref{Figure_Alarm}. It involves a simple network of events and conditional probabilities.
795 790 
796 - The events are: Burglary ($B$), Earthquake ($E$), Alarm ($A$), Mary calls ($M$) and John calls ($J$). The initial events $B$ and $E$ are assumed to be independent events that occur with probabilities $P(B)$ and $P(E)$, respectively. There is an alarm system that can be triggered by either of the initial events $B$ and $E$. The probability of the alarm going off is a conditional probability given that $B$ and $E$ have occurred. One denotes these probabilities, as per usual, by $P(A|B)$, and $P(A|E)$. There are two neighbours, Mary and John who have agreed to call if they hear the alarm. The probability that they do actually call is also a conditional probability denoted by $P(M|A)$ and $P(J|A)$, respectively.
  791 + The events are: Burglary ($B$), Earthquake ($E$), Alarm ($A$), Mary calls ($M$) and John calls ($J$). The initial events $B$ and $E$ are assumed to be independent events that occur with probabilities $\pr{B}$ and $\pr{E}$, respectively. There is an alarm system that can be triggered by either of the initial events $B$ and $E$. The probability of the alarm going off is a conditional probability given that $B$ and $E$ have occurred. One denotes these probabilities, as per usual, by $\pr{A \given B}$ and $\pr{A \given E}$. There are two neighbours, Mary and John, who have agreed to call if they hear the alarm. The probability that they actually do call is also a conditional probability, denoted by $\pr{M \given A}$ and $\pr{J \given A}$, respectively.
797 792 
798 793 
800 795 \begin{figure}
801 796 \begin{center}
802 797 \begin{tikzpicture}[node distance=2.5cm]
803 - 
  798 +
804 799 % Nodes
805 800 \node[smodel, circle] (A) {A};
806 801 \node[tchoice, above right of=A] (B) {B};
807 802 \node[tchoice, above left of=A] (E) {E};
808 803 \node[tchoice, below left of=A] (M) {M};
809 804 \node[tchoice, below right of=A] (J) {J};
810 - 
  805 +
811 806 % Edges
812 - \draw[->] (B) to[bend left] (A) node[right,xshift=1.1cm,yshift=0.8cm] {\footnotesize{$P(B)=0.001$}} ;
813 - \draw[->] (E) to[bend right] (A) node[left, xshift=-1.4cm,yshift=0.8cm] {\footnotesize{$P(E)=0.002$}} ;
814 - \draw[->] (A) to[bend right] (M) node[left,xshift=0.2cm,yshift=0.7cm] {\footnotesize{$P(M|A)$}};
815 - \draw[->] (A) to[bend left] (J) node[right,xshift=-0.2cm,yshift=0.7cm] {\footnotesize{$P(J|A)$}} ;
  807 + \draw[->] (B) to[bend left] (A) node[right,xshift=1.1cm,yshift=0.8cm] {\footnotesize{$\pr{B}=0.001$}} ;
  808 + \draw[->] (E) to[bend right] (A) node[left, xshift=-1.4cm,yshift=0.8cm] {\footnotesize{$\pr{E}=0.002$}} ;
  809 + \draw[->] (A) to[bend right] (M) node[left,xshift=0.2cm,yshift=0.7cm] {\footnotesize{$\pr{M \given A}$}};
  810 + \draw[->] (A) to[bend left] (J) node[right,xshift=-0.2cm,yshift=0.7cm] {\footnotesize{$\pr{J \given A}$}} ;
816 811 \end{tikzpicture}
817 812 \end{center}
818 - 
  813 +
819 814 \begin{multicols}{3}
820 - 
  815 +
821 816 \footnotesize{
822 - \begin{equation*}
823 - \begin{split}
824 - &P(M|A)\\
825 - & \begin{array}{c|cc}
826 - & m & \neg m \\
827 - \hline
828 - a & 0.9 & 0.1 \\
829 - \neg a & 0.05 & 0.95
830 - \end{array}
831 - \end{split}
832 - \end{equation*}
  817 + \begin{equation*}
  818 + \begin{split}
  819 + &\pr{M \given A}\\
  820 + & \begin{array}{c|cc}
  821 + & m & \neg m \\
  822 + \hline
  823 + a & 0.9 & 0.1\\
  824 + \neg a& 0.05 & 0.95
  825 + \end{array}
  826 + \end{split}
  827 + \end{equation*}
833 828 }
834 - 
  829 +
835 830 \footnotesize{
836 - \begin{equation*}
837 - \begin{split}
838 - &P(J|A)\\
839 - & \begin{array}{c|cc}
840 - & j & \neg j \\
841 - \hline
842 - a & 0.7 & 0.3 \\
843 - \neg a & 0.01 & 0.99
844 - \end{array}
845 - \end{split}
846 - \end{equation*}
  831 + \begin{equation*}
  832 + \begin{split}
  833 + &\pr{J \given A}\\
  834 + & \begin{array}{c|cc}
  835 + & j & \neg j \\
  836 + \hline
  837 + a & 0.7 & 0.3\\
  838 + \neg a& 0.01 & 0.99
  839 + \end{array}
  840 + \end{split}
  841 + \end{equation*}
847 842 }
848 843 \footnotesize{
849 - \begin{equation*}
850 - \begin{split}
851 - P(A|B \wedge E)\\
852 - \begin{array}{c|c|cc}
853 - & & a & \neg a \\
854 - \hline
855 - b & e & 0.95 & 0.05 \\
856 - b & \neg e & 0.94 & 0.06 \\
857 - \neg b & e & 0.29 & 0.71 \\
858 - \neg b & \neg e & 0.001 & 0.999
859 - \end{array}
860 - \end{split}
861 - \end{equation*}
  844 + \begin{equation*}
  845 + \begin{split}
  846 + \pr{A \given B \wedge E}\\
  847 + \begin{array}{c|c|cc}
  848 + & & a & \neg a \\
  849 + \hline
  850 + b & e & 0.95 & 0.05\\
  851 + b & \neg e & 0.94 & 0.06\\
  852 + \neg b & e & 0.29 & 0.71\\
  853 + \neg b & \neg e & 0.001 & 0.999
  854 + \end{array}
  855 + \end{split}
  856 + \end{equation*}
862 857 }
863 858 \end{multicols}
864 859 \caption{The Earthquake, Burglary, Alarm model}
@@ -866,64 +861,63 @@ The events are: Burglary ($B$), Earthquake ($E$), Alarm ($A$), Mary calls ($M$)
866 861 \end{figure}
867 862 
868 863 
869 - Considering the probabilities given in \cref{Figure_Alarm} we obtain the following spe\-ci\-fi\-ca\-tion
  864 + Considering the probabilities given in \cref{Figure_Alarm} we obtain the following spe\-ci\-fi\-ca\-tion:
870 865 
871 866 \begin{equation*}
872 867 \begin{aligned}
873 - \probfact{0.001}{b} & ,\cr
874 - \probfact{0.002}{e} & ,\cr
875 - \end{aligned}
  868 + \probfact{0.001}{b}&,\cr
  869 + \probfact{0.002}{e}&,\cr
  870 + \end{aligned}
876 871 \label{eq:not_so_simple_example}
877 872 \end{equation*}
878 873 
879 - For the table giving the probability $P(M|A)$ we obtain the specification:
  874 + For the table giving the probability $\pr{M \given A}$ we obtain the specification:
880 875 
881 876 
882 877 \begin{equation*}
883 878 \begin{aligned}
884 - \probfact{0.9}{pm\_a} & ,\cr
885 - \probfact{0.05}{pm\_na} & ,\cr
886 - m & \leftarrow a, pm\_a,\cr
887 - \neg m & \leftarrow a, \neg pm\_a.
888 - \end{aligned}
  879 + \probfact{0.9}{\condsymb{m}{a}}&,\cr
  880 + \probfact{0.05}{\condsymb{m}{na}}&,\cr
  881 + m & \leftarrow a, \condsymb{m}{a},\cr
  882 + \neg m & \leftarrow a, \neg \condsymb{m}{a}.
  883 + \end{aligned}
889 884 \end{equation*}
890 885 
891 886 This latter specification can be simplified by writing $\probfact{0.9}{m \leftarrow a}$ and $\probfact{0.05}{m \leftarrow \neg a}$.
892 887 
893 - Similarly, for the probability $P(J|A)$ we obtain
  888 + Similarly, for the probability $\pr{J \given A}$ we obtain
894 889 
895 890 \begin{equation*}
896 891 \begin{aligned}
897 - \probfact{0.7}{pj\_a} & ,\cr
898 - \probfact{0.01}{pj\_na} & ,\cr
899 - j & \leftarrow a, pj\_a,\cr
900 - \neg j & \leftarrow a, \neg pj\_a.\cr
901 - \end{aligned}
  892 + \probfact{0.7}{\condsymb{j}{a}}&,\cr
  893 + \probfact{0.01}{\condsymb{j}{na}}&,\cr
  894 + j & \leftarrow a, \condsymb{j}{a},\cr
  895 + \neg j & \leftarrow a, \neg \condsymb{j}{a}.\cr
  896 + \end{aligned}
902 897 \end{equation*}
903 898 
904 899 Again, this can be simplified by writing $\probfact{0.7}{j \leftarrow a}$ and $\probfact{0.01}{j \leftarrow \neg a}$.
905 900 
906 - Finally, for the probability $P(A|B \wedge E)$ we obtain
  901 + Finally, for the probability $\pr{A \given B \wedge E}$ we obtain
907 902 
908 903 \begin{equation*}
909 904 \begin{aligned}
910 - \probfact{0.95}{a\_be} & ,\cr
911 - \probfact{0.94}{a\_bne} & ,\cr
912 - \probfact{0.29}{a\_nbe} & ,\cr
913 - \probfact{0.001}{a\_nbne} & ,\cr
914 - a & \leftarrow b, e, a\_be,\cr
915 - \neg a & \leftarrow b,e, \neg a\_be, \cr
916 - a & \leftarrow b,e, a\_bne,\cr
917 - \neg a & \leftarrow b,e, \neg a\_bne, \cr
918 - a & \leftarrow b,e, a\_nbe,\cr
919 - \neg a & \leftarrow b,e, \neg a\_nbe, \cr
920 - a & \leftarrow b,e, a\_nbne,\cr
921 - \neg a & \leftarrow b,e, \neg a\_nbne. \cr
922 - \end{aligned}
  905 + \probfact{0.95}{\condsymb{a}{be}}&,\cr
  906 + \probfact{0.94}{\condsymb{a}{bne}}&,\cr
  907 + \probfact{0.29}{\condsymb{a}{nbe}}&,\cr
  908 + \probfact{0.001}{\condsymb{a}{nbne}}&,\cr
  909 + a & \leftarrow b, e, \condsymb{a}{be},\cr
  910 + \neg a & \leftarrow b, e, \neg \condsymb{a}{be}, \cr
  911 + a & \leftarrow b, \neg e, \condsymb{a}{bne},\cr
  912 + \neg a & \leftarrow b, \neg e, \neg \condsymb{a}{bne}, \cr
  913 + a & \leftarrow \neg b, e, \condsymb{a}{nbe},\cr
  914 + \neg a & \leftarrow \neg b, e, \neg \condsymb{a}{nbe}, \cr
  915 + a & \leftarrow \neg b, \neg e, \condsymb{a}{nbne},\cr
  916 + \neg a & \leftarrow \neg b, \neg e, \neg \condsymb{a}{nbne}. \cr
  917 + \end{aligned}
923 918 \end{equation*}
924 919 
925 - One can then proceed as in the previous subsection and analyse this example. The details of such analysis are not given here since they are analogous, albeit admittedly more cumbersome.
926 - 
  920 + One can then proceed as in the previous subsection and analyse this example. The details of such analysis are not given here since they are analogous, albeit admittedly more cumbersome.
927 921 
928 922 \section{Discussion}
929 923 