Bayesian Network Representation of Joint Normal Distributions - Confounding Variables Model

Denis Papaioannou
March 21, 2022

This post is the second of a series discussing the Bayesian Network representation of multivariate normal distributions. In the first post we introduced a cascading regressions model leading to a Bayesian network representation of any joint normal distribution [DP22]. A joint normal distribution, being fully specified by its mean vector and its covariance matrix, is not as simple to interact with as its Bayesian network equivalent. Representing a joint normal distribution as a Bayesian network enables visualizing and interacting with the distribution through the lens of probabilistic graphical models with TKRISK®.

We demonstrate in this post a simple yet powerful approach using a confounding variables model.

Introduction

A normal distribution $\mathcal{N}(m, v)$ is fully specified by two parameters: its mean $m \in \mathbb{R}$ and its variance $v \in \mathbb{R}_+$. Likewise, an $n$-dimensional multivariate normal distribution $\mathcal{N}(M, V)$ is fully specified by its mean vector $M \in \mathbb{R}^n$ and its covariance matrix $V \in \mathbb{R}^{n \times n}$.

The latter can also be represented as a Bayesian Network, more specifically as a Gaussian Bayesian network [KF09] [IC20].

Theoretical Results

In what follows we will be working with a vector of random variables $X = (X_1, \ldots, X_n)$ following a multivariate normal distribution $\mathcal{N}(M, V)$.

By definition, $M = (\mathbb{E}[X_i])_{i \in \llbracket 1, n \rrbracket}$ and $V = (\mathrm{cov}(X_i, X_j))_{(i,j) \in \llbracket 1, n \rrbracket^2}$.
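
As a quick illustration, the following is a minimal sketch assuming NumPy (the example values of $M$ and $V$ are arbitrary): the pair $(M, V)$ is all that is needed to specify and sample the joint distribution.

```python
import numpy as np

# Arbitrary illustrative parameters (any valid mean vector / covariance matrix works)
M = np.array([1.0, -2.0, 0.5])
V = np.array([[1.0, 0.3, 0.0],
              [0.3, 2.0, 0.5],
              [0.0, 0.5, 1.5]])

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(M, V, size=100_000)

print(samples.mean(axis=0))           # close to M
print(np.cov(samples, rowvar=False))  # close to V
```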

Confounding Variables Model
Mathematical Derivation

Through basic matrix operations we will show how $\mathcal{N}(M, V)$ can be represented through a confounding variables model, a particular case of a linear Gaussian Structural Equation Model (SEM) [MD11] [GP19], which in turn has a direct Gaussian Bayesian Network equivalent.

Applying the LDL decomposition to $V$ yields $V = L D L^T$ with $L$ unit lower triangular and $D$ diagonal.

Therefore $X - M \sim L E$ with $E \sim \mathcal{N}(0, D)$.
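
Below is a minimal sketch of this step, assuming NumPy; the helper name `ldl_decomposition` is ours rather than a library function. It returns the unit lower triangular $L$ and the diagonal of $D$, and also covers positive semi-definite (singular) covariance matrices.

```python
import numpy as np

def ldl_decomposition(V, tol=1e-12):
    """LDL^T decomposition of a symmetric positive (semi-)definite matrix V.

    Returns L (unit lower triangular) and d (the diagonal of D)
    such that V = L @ np.diag(d) @ L.T.
    """
    n = V.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        d[i] = V[i, i] - np.sum(L[i, :i] ** 2 * d[:i])
        for j in range(i + 1, n):
            if d[i] > tol:
                L[j, i] = (V[j, i] - np.sum(L[j, :i] * L[i, :i] * d[:i])) / d[i]
            # if d[i] == 0 (semi-definite case) the column below stays 0
    return L, d

# Quick check on an arbitrary covariance matrix
V = np.array([[1.0, 0.1, 0.0, 0.0],
              [0.1, 1.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.5, 0.0, 1.0]])
L, d = ldl_decomposition(V)
assert np.allclose(L @ np.diag(d) @ L.T, V)
```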

Instead of rewriting the equation as in [DP22], which leads to the cascading regressions model, we can directly interpret $E$ as a set of confounding variables. Writing the regression model as

$$X_i = m_i + \sum_{j=1}^{i-1} l_{ij} e_j + e_i$$

with $L = (l_{ij})_{1 \leq i, j \leq n}$, we obtain a direct Gaussian Bayesian Network representation in which the variables $(e_i)_{1 \leq i \leq n}$ are confounding: they are not observed, but they introduce a correlation structure among the observed variables ($X_1, \ldots, X_n$ in our case).

In this case $Pa(X_i) = \{e_1, \ldots, e_i\}$.
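
To make the confounding-variables reading concrete, the following sketch (reusing the hypothetical `ldl_decomposition` helper above) simulates independent $e_j \sim \mathcal{N}(0, d_{j,j})$ and assembles each $X_i$ from the regression equation; the empirical covariance of $X$ recovers $V$ up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.zeros(4)
V = np.array([[1.0, 0.1, 0.0, 0.0],
              [0.1, 1.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.5, 0.0, 1.0]])
L, d = ldl_decomposition(V)  # hypothetical helper sketched above

n_samples = 200_000
# Independent confounding variables e_j ~ N(0, d_jj)
E = rng.normal(size=(n_samples, len(d))) * np.sqrt(d)
# X_i = m_i + sum_{j<i} l_ij e_j + e_i, i.e. X = M + L e (since l_ii = 1)
X = M + E @ L.T

print(np.round(np.cov(X, rowvar=False), 2))  # recovers V up to sampling noise
```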

Algorithm

procedure BUILD-GAUSSIAN-BAYESIAN-NETWORK-FROM-COVARIANCE($M$, $V$)
    Set $\mathcal{G}$ to an empty graph
    $L, D \gets \textit{LDL}(V)$    ▷ apply LDL decomposition
    for $i = 1$ to $n$ do
        Add $e_i \sim \mathcal{N}(0, d_{i,i})$ to $\mathcal{G}$    ▷ initialize variables with the right marginal distribution
        Add $X_i \sim \mathcal{N}(m_i, 0)$ to $\mathcal{G}$
        Add $e_i \xrightarrow{1} X_i$ to $\mathcal{G}$    ▷ add $X_i$'s parents with the right coefficients
        $X_i \gets m_i$    ▷ set $X_i$'s intercept
        for $j = 1$ to $i - 1$ do
            if $l_{i,j} \neq 0$ then
                Add $e_j \xrightarrow{l_{i,j}} X_i$ to $\mathcal{G}$
            end if
        end for
    end for
    return $\mathcal{G}$
end procedure
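
A minimal Python sketch of the procedure, assuming NumPy and the hypothetical `ldl_decomposition` helper from earlier; the network is represented with plain dictionaries (node → (mean, variance), edge → coefficient) rather than any specific Bayesian network library or TKRISK® itself.

```python
import numpy as np

def build_gaussian_bn_from_covariance(M, V, tol=1e-12):
    """Build a Gaussian Bayesian Network (confounding variables model) from (M, V).

    Returns (nodes, edges) where nodes maps a node name to its marginal
    (mean, variance) and edges maps (parent, child) to the linear coefficient.
    """
    L, d = ldl_decomposition(V)  # hypothetical helper sketched earlier
    n = len(M)
    nodes, edges = {}, {}
    for i in range(n):
        nodes[f"e{i+1}"] = (0.0, float(d[i]))   # confounding variable e_i ~ N(0, d_ii)
        nodes[f"X{i+1}"] = (float(M[i]), 0.0)   # observed variable: intercept m_i, no own noise
        edges[(f"e{i+1}", f"X{i+1}")] = 1.0     # e_i --1--> X_i
        for j in range(i):
            if abs(L[i, j]) > tol:
                edges[(f"e{j+1}", f"X{i+1}")] = float(L[i, j])  # e_j --l_ij--> X_i
    return nodes, edges
```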

It is important to note that a different ordering of the variables will lead to a different Bayesian Network structure, which nevertheless represents the exact same joint distribution.

As with the cascading regressions model [DP22], the above approach is simple, computationally efficient, and allows generating Bayesian Networks straight from any dataset whose variables are assumed normally distributed.

The LDL decomposition is sufficient here to obtain all Bayesian network parameters and can handle any valid covariance matrix (even singular, i.e. positive semi-definite but not positive definite), and thus any multivariate normal distribution. The simplicity of the algorithm comes at the cost of additional variables: the confounding variables.

Normalization

It is common to work on standardized normal distributions, i.e. with zero mean and unit variance:

  • $M = 0$
  • $\textit{Diag}(V) = [1, \cdots, 1]$

In this case covariance and correlation matrices are one and the same. This is a convenient setup which does not imply loss of generality, as non-normalized marginals can be recovered by shifting and scaling the normalized ones. Working in the normalized space allows for direct comparison of dependencies between variables, as these are independent of the variables' actual magnitudes.
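
A small sketch of this standardization step (NumPy assumed, strictly positive marginal variances assumed): it turns an arbitrary $(M, V)$ into the correlation matrix used below, while keeping the shifts and scales needed to recover the original marginals.

```python
import numpy as np

def standardize(M, V):
    """Convert (M, V) to the standardized setup (zero means, unit variances).

    Assumes strictly positive marginal variances. Returns (R, shift, scale)
    where R is the correlation matrix and the original marginals are
    recovered as X_i = shift_i + scale_i * Z_i.
    """
    scale = np.sqrt(np.diag(V))        # marginal standard deviations
    R = V / np.outer(scale, scale)     # correlation matrix
    return R, np.asarray(M, dtype=float), scale
```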

For a self-contained graph representation, contrary to the generic approach above, the Gaussian Bayesian Network alone is insufficient: it would require additional deterministic nodes to apply the shifting and scaling (directly shifting and scaling a node of the Bayesian Network would propagate to its children and thus alter the joint distribution).

For the sake of simplicity, we will present examples on standardized normal distributions only.

Generic Examples

To visualize and interpret the above approach, we will take basic examples of 4-dimensional multivariate normal distributions.

Independent    Independent variables naturally translate to no connections between nodes, with the correlation matrix being the identity matrix.

$$V = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$

[Figure: vcv2bn_hv_theoretical_independent]

Bayesian Network resulting from a confounding variables model.
Sparse    In this case $X_3$ is independent of all other variables, hence the lack of incoming and outgoing edges.
$$V = \begin{bmatrix} 1 & 0.1 & 0 & 0 \\ 0.1 & 1 & 0 & 0.5 \\ 0 & 0 & 1 & 0 \\ 0 & 0.5 & 0 & 1 \end{bmatrix}$$

[Figure: vcv2bn_hv_theoretical_sparse]

Bayesian Network resulting from a confounding variables model.
Dense    In the case where all variables are correlated to each other, the corresponding network is fully connected.
$$V = \begin{bmatrix} 1 & 0.1 & 0.2 & 0.3 \\ 0.1 & 1 & 0.4 & 0.5 \\ 0.2 & 0.4 & 1 & 0.6 \\ 0.3 & 0.5 & 0.6 & 1 \end{bmatrix}$$

[Figure: vcv2bn_hv_theoretical_dense]

Bayesian Network resulting from a confounding variables model.
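
As an illustration, running the hypothetical `build_gaussian_bn_from_covariance` sketch from earlier on the sparse matrix above lists the edges of the corresponding network:

```python
import numpy as np

M = np.zeros(4)
V_sparse = np.array([[1.0, 0.1, 0.0, 0.0],
                     [0.1, 1.0, 0.0, 0.5],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.5, 0.0, 1.0]])

nodes, edges = build_gaussian_bn_from_covariance(M, V_sparse)
for (parent, child), coef in edges.items():
    print(f"{parent} --{coef:.3f}--> {child}")
```

Under this sketch, besides the unit edges $e_i \xrightarrow{1} X_i$, the only cross edges are $e_1 \xrightarrow{0.1} X_2$ and $e_2 \xrightarrow{\approx 0.505} X_4$, so $X_3$ shares no confounder with the other variables, consistent with the sparse example above.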
References
[KF09]
Daphne Koller and Nir Friedman. Probabilistic Graphical Models: Principles and Techniques, pages 251-254. The MIT Press, second edition, 2009.
[IC20]
Irene Córdoba, Concha Bielza, Pedro Larrañaga, Gherardo Varando. Sparse Cholesky Covariance Parametrization for Recovering Latent Structure in Ordered Data. IEEE, 2020.
[GP19]
Gunwoong Park, Youngwhan Kim. Identifiability of Gaussian Linear Structural Equation Models with Homogeneous and Heterogeneous Error Variances. Journal of the Korean Statistical Society, 2019.
[MD11]
Mathias Drton, Rina Foygel, Seth Sullivant. Global Identifiability of Linear Structural Equation Models. The Annals of Statistics, 2011.