Exercise List 04 - PGM 2015
Renato Assunção - DCC, UFMG
1. Install OpenBUGS on your machine and run the tutorial in the OpenBUGS User Manual.
2. In the one-dimensional robot example, the Kalman filter alternates between two
steps. In the first step, at time t, it has a Gaussian probability distribution
representing the belief about the possible values of the robot's true location µt at
that moment: µt ∼ N(mt, 1/τt). The robot receives a noisy measurement yt
from a sensor, centered around its true position µt with a certain variance,
also modeled as a Gaussian distribution: (yt | µt) ∼ N(µt, 1/τy). In class, it was
stated that, after receiving the sensor measurement, the belief about µt is updated
using Bayes' theorem and, as a result, (µt | yt) is a new Gaussian distribution combining
the two previous Gaussian distributions. In this problem, you are going to show
exactly how this is done.
Using Bayes' theorem, the posterior density f(µt | yt) is given by

f(µt | yt) = f(yt | µt) f(µt) / f(yt)
           ∝ f(yt | µt) f(µt)                                              (A)
           ∝ exp(−(τy/2)(yt − µt)²) exp(−(τt/2)(µt − mt)²)                 (B)
           ∝ exp(−(1/2)[µt²(τy + τt) − 2µt(τy yt + τt mt)])                (C)
           ∝ exp(−((τt + τy)/2)(µt − at+1)²)                               (D)
           ∼ N(at+1, vt)                                                   (E)

where at+1 = wt yt + (1 − wt)mt, wt = τy/(τy + τt), and vt = 1/(τy + τt).
Explain how each of the expressions (A), (B), (C), (D), and (E) is obtained from the
previous one.
SOLUTION:
• (A): The marginal density f(yt) in the denominator does not depend on
µt; it is a constant with respect to µt. Hence, we can write the density up to a
proportionality constant.
• (B): Simply substitute the Gaussian densities for f(yt | µt) and
f(µt) and drop the constant factors.
• (C): Expand the two squared terms and collect the terms involving µt and µt².
The terms that do not involve µt are constant with respect to µt and can be
absorbed into a new proportionality constant.
• (D): Complete the square by adding and subtracting at+1 = wt yt + (1 − wt)mt,
where wt = τy/(τy + τt): the quadratic in (C) becomes
µt²(τy + τt) − 2µt(τy yt + τt mt) = (τy + τt)(µt − at+1)² − (τy + τt)at+1²,
and the last term, which does not involve µt, is absorbed into a new proportionality
constant.
• (E): Up to a proportionality constant, the expression in (D) is identical to
the density of a Gaussian with expected value at+1 and variance 1/(τy + τt ).
Therefore, (µt |yt ) must be a Gaussian distribution with these parameters.
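As a concrete numerical check (not part of the original exercise), a minimal Python sketch of the measurement update derived above; the values of mt, τt, yt, and τy are arbitrary illustrations.

# Minimal numerical check of the measurement update (mu_t | y_t) ~ N(a_{t+1}, v_t).
# All numerical values below are arbitrary illustrations, not data from the exercise.

m_t, tau_t = 0.0, 1.0      # prior belief: mu_t ~ N(m_t, 1/tau_t)
y_t, tau_y = 2.0, 4.0      # sensor model: (y_t | mu_t) ~ N(mu_t, 1/tau_y)

w_t = tau_y / (tau_y + tau_t)          # weight on the measurement
a_next = w_t * y_t + (1 - w_t) * m_t   # posterior mean a_{t+1}
v_t = 1.0 / (tau_y + tau_t)            # posterior variance v_t

print(f"(mu_t | y_t) ~ N({a_next:.3f}, {v_t:.3f})")   # N(1.600, 0.200)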
3. In the second step of the Kalman filter, there is another probability manipulation.
The current belief distribution for the robot location after receiving the
sensor measurement, (µt | yt), is the Gaussian obtained in the previous step,
N(at+1, vt). The robot receives a command to move st units in
the time interval. It does so, but adds an independent Gaussian
noise εt to this movement: εt ∼ N(0, 1/τd),
independent of all other variables involved in the modeling so far. Hence, the
true displacement of the robot is Dt = st + εt and its new location is given by
µt + st + εt. As a consequence, the probability distribution describing the belief about
the robot's true new location µt+1 must be updated. Explain why this new
belief distribution is given by the Gaussian µt+1 ∼ N(at+1 + st, vt + 1/τd).
SOLUTION: The real displacement is the sum of the commanded displacement st, a
non-random and known constant, and the unknown random Gaussian noise
εt ∼ N(0, 1/τd). A Gaussian plus a constant is a new Gaussian with a shifted
expected value; therefore, Dt = st + εt ∼ N(0 + st, 1/τd). The sum
of two Gaussian random variables is also Gaussian, with expected value given by the sum of the
expected values of the components; if the terms in the sum are independent,
the new variance is the sum of the variances. Therefore, µt+1 =
µt + Dt is a Gaussian N(at+1 + st, vt + 1/τd).
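Again as an illustration outside the original exercise, a minimal Python sketch of this prediction step, assuming the arbitrary values st = 3 and τd = 10 and the posterior N(1.6, 0.2) from the previous sketch.

# Minimal sketch of the prediction step: shift the mean by s_t and add the noise variance.
a_next, v_t = 1.6, 0.2        # belief after the measurement update: N(a_{t+1}, v_t)
s_t, tau_d = 3.0, 10.0        # commanded displacement and precision of the motion noise

mean_new = a_next + s_t       # E[mu_{t+1}] = a_{t+1} + s_t
var_new = v_t + 1.0 / tau_d   # Var[mu_{t+1}] = v_t + 1/tau_d

print(f"mu_{{t+1}} ~ N({mean_new:.3f}, {var_new:.3f})")   # N(4.600, 0.300)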
4. Figure 1 shows a small Bayesian network that attempts to describe the symptoms
associated with flu and pneumonia in a patient. Note that none of the
edges can have its direction reversed without violating common sense. It also does not
seem reasonable to add any further edge. Starting from this BN, and after reading the
text DiscreteBNGibbs.pdf, carry out the following tasks:
(a) Using the rules of conditional probability, compute the exact value of the
following probabilities: P(FE = yes) and P(MY = yes). ANSWER: 0.125 and
0.276, respectively.
Figure 1: BN describing the symptoms of flu and pneumonia.
(b) Using simple Monte Carlo (WITHOUT the Gibbs sampler), generate a sample
of N = 10000 instances of the network and estimate the following probabilities
(a sketch of this forward-sampling approach follows the list):
• P(FE = yes), comparing with the exact value you already computed;
• P(MY = yes), comparing with the exact value;
• P(TEMP > 37.5, PN = yes);
• P(TEMP > 37.5, PN = yes, MY = yes).
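A minimal Python sketch of the simple Monte Carlo (ancestral/forward sampling) approach. Only the node names FE, MY, TEMP, and PN come from the exercise; the FLU node, the parent structure, the Gaussian model for TEMP, and every number in the tables below are placeholders, since Figure 1's actual tables are not reproduced in this text. Replace them with the values from the figure.

import random

# Placeholder CPTs -- replace every number and the parent structure with Figure 1's.
P_FLU = 0.10                                        # P(FLU = yes), assumed root node
P_PN = 0.05                                         # P(PN = yes), assumed root node
P_FE = {(True, True): 0.95, (True, False): 0.80,
        (False, True): 0.85, (False, False): 0.02}  # P(FE = yes | FLU, PN), assumed parents
P_MY = {True: 0.90, False: 0.10}                    # P(MY = yes | FLU), assumed parent
TEMP_MEAN = {True: 39.0, False: 36.8}               # mean of TEMP given FE, assumed Gaussian

def sample_instance():
    """One instance of the network, drawn in topological order (ancestral sampling)."""
    flu = random.random() < P_FLU
    pn = random.random() < P_PN
    fe = random.random() < P_FE[(flu, pn)]
    my = random.random() < P_MY[flu]
    temp = random.gauss(TEMP_MEAN[fe], 0.5)
    return {"FLU": flu, "PN": pn, "FE": fe, "MY": my, "TEMP": temp}

N = 10000
sample = [sample_instance() for _ in range(N)]

# Unconditional estimates are relative frequencies over the whole sample.
p_fe = sum(x["FE"] for x in sample) / N
p_temp_pn = sum((x["TEMP"] > 37.5) and x["PN"] for x in sample) / N
print(p_fe, p_temp_pn)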
(c) Using simple Monte Carlo and the rejection method, starting from the sample
already created, estimate the probabilities below (see the sketch after this list).
In each case, check the size to which your sample was reduced.
• P(FE = yes | MY = yes);
• P(MY = yes | FE = yes);
• P(TEMP > 37.5 | PN = yes);
• P(TEMP > 37.5 | PN = yes, MY = yes).
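A sketch of the rejection step, reusing the `sample` list from the previous sketch: discard the instances that are inconsistent with the evidence and estimate the conditional probability on the reduced sample.

# Rejection method: keep only instances consistent with the evidence MY = yes,
# then estimate P(FE = yes | MY = yes) on the accepted subsample.
accepted = [x for x in sample if x["MY"]]
p_fe_given_my = sum(x["FE"] for x in accepted) / len(accepted)
print(len(accepted), p_fe_given_my)   # note how much smaller the accepted sample is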
(d) To use the Gibbs sampler, obtain the tables of the full conditional distributions
of each vertex and generate a sample of size 10000 from the joint distribution. With your sample, estimate the unconditional probabilities from item (b).
(e) Using the Gibbs sampler and the full conditional tables, generate samples
of size 10000 for each of the four evidence configurations fixed in item (c).
A sketch of the Gibbs mechanics for items (d) and (e) follows.
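For items (d) and (e), a sketch of the Gibbs mechanics for the binary nodes, reusing the placeholder CPTs and the `random` import from the earlier sketch (TEMP is omitted for brevity, and all numbers remain hypothetical). Each node is resampled from its full conditional, which for a binary node is proportional to the joint evaluated at its two values; evidence nodes are clamped and never resampled.

def joint(state):
    """Unnormalized joint probability of the binary nodes under the placeholder CPTs."""
    flu, pn, fe, my = state["FLU"], state["PN"], state["FE"], state["MY"]
    p = P_FLU if flu else 1 - P_FLU
    p *= P_PN if pn else 1 - P_PN
    p *= P_FE[(flu, pn)] if fe else 1 - P_FE[(flu, pn)]
    p *= P_MY[flu] if my else 1 - P_MY[flu]
    return p

def gibbs(n_iter, evidence=None):
    """Gibbs sampler: sweep over the free nodes, resampling each from its full conditional."""
    evidence = evidence or {}
    state = {"FLU": False, "PN": False, "FE": False, "MY": False}
    state.update(evidence)
    chain = []
    for _ in range(n_iter):
        for node in ("FLU", "PN", "FE", "MY"):
            if node in evidence:
                continue                       # evidence nodes stay clamped
            state[node] = True
            p_true = joint(state)
            state[node] = False
            p_false = joint(state)
            state[node] = random.random() < p_true / (p_true + p_false)
        chain.append(dict(state))
    return chain

chain_d = gibbs(10000)                             # item (d): no evidence fixed
print(sum(x["FE"] for x in chain_d) / len(chain_d))
chain_e = gibbs(10000, evidence={"MY": True})      # item (e): clamp the evidence MY = yes
print(sum(x["FE"] for x in chain_e) / len(chain_e))

In practice one would also discard an initial burn-in portion of the chain before computing the estimates.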