# Baum–Welch Algorithm

## Introduction

The Baum–Welch algorithm estimates the parameters ${\displaystyle \theta =(A,B,\pi )}$ of a hidden Markov model with hidden states ${\displaystyle X_{t}}$ and observations ${\displaystyle Y_{t}}$:

Transition probabilities:

${\displaystyle A=\{a_{ij}\}=P(X_{t}=j|X_{t-1}=i)}$

Initial state distribution:

${\displaystyle \pi _{i}=P(X_{1}=i)}$

Emission probabilities:

${\displaystyle b_{j}(y_{i})=P(Y_{t}=y_{i}|X_{t}=j)}$
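As a minimal illustration (the values are the ones used in the example later in this article, and the column encoding of the observations is an assumption), these three parameter groups can be stored as NumPy arrays; each row of ${\displaystyle A}$ and ${\displaystyle B}$, and ${\displaystyle \pi }$ itself, must sum to one:

```python
import numpy as np

# Transition matrix A: A[i, j] = P(X_t = j | X_{t-1} = i)
A = np.array([[0.5, 0.5],
              [0.3, 0.7]])

# Initial state distribution: pi[i] = P(X_1 = i)
pi = np.array([0.2, 0.8])

# Emission matrix B: B[j, k] = P(Y_t = v_k | X_t = j)
# (columns assumed: v_0 = No Eggs, v_1 = Eggs)
B = np.array([[0.3, 0.7],
              [0.8, 0.2]])
```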

## Algorithm

#### Forward procedure

${\displaystyle \alpha _{i}(t)=P(Y_{1}=y_{1},Y_{2}=y_{2},\cdots ,Y_{t}=y_{t},X_{t}=i|\theta )}$ is the probability of seeing the observations ${\displaystyle y_{1},y_{2},\cdots ,y_{t}}$ and being in state ${\displaystyle i}$ at time ${\displaystyle t}$, given the parameters ${\displaystyle \theta }$. It can be computed recursively:

1. ${\displaystyle \alpha _{i}(1)=\pi _{i}b_{i}(y_{1})}$
2. ${\displaystyle \alpha _{i}(t+1)=b_{i}(y_{t+1})\sum _{j=1}^{N}\alpha _{j}(t)a_{ji}}$
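The recursion above can be sketched in NumPy. The parameter values are taken from the example later in this article, and the observation encoding (0 = No Eggs, 1 = Eggs) is an assumption:

```python
import numpy as np

A  = np.array([[0.5, 0.5], [0.3, 0.7]])   # transition probabilities a_ij
B  = np.array([[0.3, 0.7], [0.8, 0.2]])   # emission probabilities b_j(v_k)
pi = np.array([0.2, 0.8])                 # initial distribution pi_i

def forward(obs):
    """alpha[t, i] = P(y_1..y_{t+1}, X_{t+1} = i | theta), with 0-based t."""
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]                      # alpha_i(1) = pi_i b_i(y_1)
    for t in range(1, T):
        # alpha_i(t+1) = b_i(y_{t+1}) * sum_j alpha_j(t) a_ji
        alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
    return alpha

alpha = forward([0, 0])    # observation sequence "NN"
alpha[-1].sum()            # P(Y | theta): total probability of the observations
```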

#### Backward procedure

${\displaystyle \beta _{i}(t)=P(Y_{t+1}=y_{t+1},\cdots ,Y_{T}=y_{T}|X_{t}=i,\theta )}$ is the probability of the remaining partial observation sequence ${\displaystyle y_{t+1},\cdots ,y_{T}}$, given the parameters ${\displaystyle \theta }$ and that the state at time ${\displaystyle t}$ is ${\displaystyle i}$. It can also be computed recursively:

1. ${\displaystyle \beta _{i}(T)=1}$
2. ${\displaystyle \beta _{i}(t)=\sum _{j=1}^{N}\beta _{j}(t+1)a_{ij}b_{j}(y_{t+1})}$
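A matching sketch of the backward recursion, under the same assumptions as the forward sketch (example parameters, 0 = No Eggs, 1 = Eggs). Note that ${\displaystyle P(Y|\theta )}$ can be recovered from the backward variables alone as ${\displaystyle \sum _{i}\pi _{i}b_{i}(y_{1})\beta _{i}(1)}$, which must agree with the forward procedure:

```python
import numpy as np

A  = np.array([[0.5, 0.5], [0.3, 0.7]])   # transition probabilities a_ij
B  = np.array([[0.3, 0.7], [0.8, 0.2]])   # emission probabilities b_j(v_k)
pi = np.array([0.2, 0.8])                 # initial distribution pi_i

def backward(obs):
    """beta[t, i] = P(y_{t+2}..y_T | X_{t+1} = i, theta), with 0-based t."""
    T, N = len(obs), len(pi)
    beta = np.ones((T, N))                # beta_i(T) = 1
    for t in range(T - 2, -1, -1):
        # beta_i(t) = sum_j a_ij b_j(y_{t+1}) beta_j(t+1)
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

beta = backward([0, 0])                   # observation sequence "NN"
(pi * B[:, 0] * beta[0]).sum()            # P(Y | theta), same value as the forward pass
```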

#### Update

- Using Bayes' theorem, compute two sets of temporary variables.
- The probability of being in state ${\displaystyle i}$ at time ${\displaystyle t}$, given the observed sequence ${\displaystyle Y}$ and the parameters ${\displaystyle \theta }$: ${\displaystyle \gamma _{i}(t)=P(X_{t}=i|Y,\theta )={\frac {P(X_{t}=i,Y|\theta )}{P(Y|\theta )}}={\frac {\alpha _{i}(t)\beta _{i}(t)}{\sum _{j=1}^{N}\alpha _{j}(t)\beta _{j}(t)}}}$
- The probability of being in state ${\displaystyle i}$ at time ${\displaystyle t}$ and in state ${\displaystyle j}$ at time ${\displaystyle t+1}$, given the observed sequence ${\displaystyle Y}$ and the parameters ${\displaystyle \theta }$: ${\displaystyle \xi _{ij}(t)=P(X_{t}=i,X_{t+1}=j|Y,\theta )={\frac {P(X_{t}=i,X_{t+1}=j,Y|\theta )}{P(Y|\theta )}}={\frac {\alpha _{i}(t)a_{ij}\beta _{j}(t+1)b_{j}(y_{t+1})}{\sum _{i=1}^{N}\sum _{j=1}^{N}\alpha _{i}(t)a_{ij}b_{j}(y_{t+1})\beta _{j}(t+1)}}}$
- ${\displaystyle \gamma _{i}(t)}$ and ${\displaystyle \xi _{ij}(t)}$ share the same denominator, the probability of observing the sequence ${\displaystyle Y}$ given the parameters ${\displaystyle \theta }$.
- Then update the parameters:
  - ${\displaystyle \pi _{i}^{*}=\gamma _{i}(1)}$, the probability of being in state ${\displaystyle i}$ at time ${\displaystyle 1}$.
  - ${\displaystyle a_{ij}^{*}={\frac {\sum _{t=1}^{T-1}\xi _{ij}(t)}{\sum _{t=1}^{T-1}\gamma _{i}(t)}}}$, the expected number of transitions from state ${\displaystyle i}$ to state ${\displaystyle j}$ divided by the expected total number of transitions out of state ${\displaystyle i}$.
  - ${\displaystyle b_{i}^{*}(v_{k})={\frac {\sum _{t=1}^{T}1_{y_{t}=v_{k}}\gamma _{i}(t)}{\sum _{t=1}^{T}\gamma _{i}(t)}}}$, where ${\displaystyle 1_{y_{t}=v_{k}}={\begin{cases}1{\text{ if }}y_{t}=v_{k},\\0{\text{ otherwise}}\end{cases}}}$ is an indicator function; ${\displaystyle b_{i}^{*}(v_{k})}$ is the expected number of times the observation equals ${\displaystyle v_{k}}$ while in state ${\displaystyle i}$, divided by the expected total number of times in state ${\displaystyle i}$.
- Repeat the steps above until convergence. The algorithm may overfit, and it is not guaranteed to converge to a global maximum.
- Computing ${\displaystyle \gamma _{i}(t)}$ and ${\displaystyle \xi _{ij}(t)}$ corresponds to the E-step of the expectation-maximization (EM) algorithm, while updating ${\displaystyle \pi _{i}^{*}}$, ${\displaystyle a_{ij}^{*}}$ and ${\displaystyle b_{i}^{*}(v_{k})}$ corresponds to the M-step.
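Putting the forward pass, backward pass, and update formulas together, one EM iteration can be sketched as follows. The parameters are the ones from this article's example; the observation sequence NEEN is a hypothetical input, encoded 0 = No Eggs, 1 = Eggs:

```python
import numpy as np

A  = np.array([[0.5, 0.5], [0.3, 0.7]])   # transition probabilities a_ij
B  = np.array([[0.3, 0.7], [0.8, 0.2]])   # emission probabilities b_j(v_k)
pi = np.array([0.2, 0.8])                 # initial distribution pi_i
obs = [0, 1, 1, 0]                        # hypothetical observation sequence "NEEN"

T, N = len(obs), len(pi)

# E-step: forward and backward variables
alpha = np.zeros((T, N)); alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):
    alpha[t] = B[:, obs[t]] * (alpha[t - 1] @ A)
beta = np.ones((T, N))
for t in range(T - 2, -1, -1):
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

P_Y = alpha[-1].sum()                      # P(Y | theta), the shared denominator
gamma = alpha * beta / P_Y                 # gamma[t, i]
xi = np.array([alpha[t, :, None] * A * B[:, obs[t + 1]] * beta[t + 1] / P_Y
               for t in range(T - 1)])     # xi[t, i, j]

# M-step: re-estimate the parameters
pi_new = gamma[0]
A_new  = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
B_new  = np.array([[gamma[np.array(obs) == k, i].sum() for k in range(B.shape[1])]
                   for i in range(N)]) / gamma.sum(axis=0)[:, None]
```

Each re-estimated quantity remains a probability distribution: every row of `A_new` and `B_new`, and `pi_new` itself, sums to one.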

## Example

Transition matrix

|         | State 1 | State 2 |
| ------- | ------- | ------- |
| State 1 | 0.5     | 0.5     |
| State 2 | 0.3     | 0.7     |

Emission matrix

|         | No Eggs | Eggs |
| ------- | ------- | ---- |
| State 1 | 0.3     | 0.7  |
| State 2 | 0.8     | 0.2  |

Initial state distribution

| State 1 | State 2 |
| ------- | ------- |
| 0.2     | 0.8     |

| Observed sequence | Probability of the sequence with states ${\displaystyle S_{1}}$ then ${\displaystyle S_{2}}$ | Highest probability of observing that sequence |
| ----------------- | ----- | ------ |
| NN | 0.024 | 0.3584 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| NN | 0.024 | 0.3584 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| NN | 0.024 | 0.3584 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| NN | 0.024 | 0.3584 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| NE | 0.006 | 0.1344 (${\displaystyle S_{2}}$, ${\displaystyle S_{1}}$) |
| EE | 0.014 | 0.0490 (${\displaystyle S_{1}}$, ${\displaystyle S_{1}}$) |
| EN | 0.056 | 0.0896 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| NN | 0.024 | 0.3584 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| NN | 0.024 | 0.3584 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| Total | 0.22 | 2.4234 |
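Each row of this table can be reproduced by direct multiplication with the matrices above. For the sequence NN, the second column is the probability along the path ${\displaystyle S_{1}}$ then ${\displaystyle S_{2}}$, and the most likely path is ${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$:

```python
# Probability of NN with state path (S1, S2):
p_s1_s2 = 0.2 * 0.3 * 0.5 * 0.8    # pi_1 * b_1(N) * a_12 * b_2(N)

# Probability of NN with the most likely path (S2, S2):
p_s2_s2 = 0.8 * 0.8 * 0.7 * 0.8    # pi_2 * b_2(N) * a_22 * b_2(N)
```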

Old transition matrix

|         | State 1 | State 2 |
| ------- | ------- | ------- |
| State 1 | 0.5     | 0.5     |
| State 2 | 0.3     | 0.7     |

New transition matrix (pseudo-probabilities)

|         | State 1 | State 2 |
| ------- | ------- | ------- |
| State 1 | 0.0598  | 0.0908  |
| State 2 | 0.2179  | 0.9705  |

New transition matrix (after normalization)

|         | State 1 | State 2 |
| ------- | ------- | ------- |
| State 1 | 0.3973  | 0.6027  |
| State 2 | 0.1833  | 0.8167  |
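Normalization divides each row of the pseudo-probability matrix by its row sum, which can be checked with NumPy (tiny discrepancies with the table come from rounding in the pseudo-probabilities):

```python
import numpy as np

# Pseudo-probability transition matrix from the table above
pseudo = np.array([[0.0598, 0.0908],
                   [0.2179, 0.9705]])

# Divide each row by its sum so every row becomes a probability distribution
normalized = pseudo / pseudo.sum(axis=1, keepdims=True)
```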

| Observed sequence | Highest probability of observing that sequence, if E is assumed to come from ${\displaystyle S_{1}}$ | Highest probability of observing that sequence |
| ----------------- | ------ | ------ |
| NE | 0.1344 (${\displaystyle S_{2}}$, ${\displaystyle S_{1}}$) | 0.1344 (${\displaystyle S_{2}}$, ${\displaystyle S_{1}}$) |
| EE | 0.0490 (${\displaystyle S_{1}}$, ${\displaystyle S_{1}}$) | 0.0490 (${\displaystyle S_{1}}$, ${\displaystyle S_{1}}$) |
| EN | 0.0560 (${\displaystyle S_{1}}$, ${\displaystyle S_{2}}$) | 0.0896 (${\displaystyle S_{2}}$, ${\displaystyle S_{2}}$) |
| Total | 0.2394 | 0.2730 |

Old emission matrix

|         | No Eggs | Eggs |
| ------- | ------- | ---- |
| State 1 | 0.3     | 0.7  |
| State 2 | 0.8     | 0.2  |

New emission matrix (estimates)

|         | No Eggs | Eggs   |
| ------- | ------- | ------ |
| State 1 | 0.0876  | 0.8769 |
| State 2 | 1.0000  | 0.7385 |

New emission matrix (after normalization)

|         | No Eggs | Eggs   |
| ------- | ------- | ------ |
| State 1 | 0.0908  | 0.9092 |
| State 2 | 0.5752  | 0.4248 |

## Code

A short example using the hmmlearn library. Note that `GaussianHMM` models continuous (Gaussian) emissions, so it only approximates the discrete example above.

```python
from hmmlearn import hmm
import numpy as np

# Observation sequence (1 = Eggs, 0 = No Eggs), one feature per time step
X = np.array([1, 1, 1, 1, 1, 0, 0, 1, 1, 1]).reshape(-1, 1)
model = hmm.GaussianHMM(n_components=2, covariance_type='full')
model.fit(X)

# Log-likelihood recorded at each EM iteration
model.monitor_.history

# pi: initial state distribution
model.startprob_

# state transition matrix
model.transmat_

# emission matrix: likelihood of each unique observation under each state
# (_compute_log_likelihood is a private hmmlearn method and may change between versions)
np.power(np.e, model._compute_log_likelihood(np.unique(X).reshape(-1, 1)))
```