Monday, June 30, 2014

The Power Spectrum under Linear Turbulence

In an attempt to square up my understanding of nonlinear fluid dynamics and the connection between Green's functions and two-point correlation functions, I have been playing with the following, very general equation

\begin{eqnarray}
L^0 u_\alpha(\vec{x},t) &=& f_\alpha (\vec{x},t)+ M_{\alpha \beta \gamma}(\vec{x},t) \int u_\beta (\vec{q},t)\, u_\gamma(\vec{x}-\vec{q},t)\, d^3q
\end{eqnarray}

Where, in the above, $L^0$ is a linear differential operator. The diffusion equation would be a good example:

\begin{eqnarray}
L^0 \sim \frac{\partial}{\partial t} - \nu \vec{\nabla}^2
\end{eqnarray}

By linear, I mean that $L^0$ is not a function of our function of interest $u_\alpha(\vec{x},t)$, which we are trying to solve for/predict. I have given $u$ a subscript $\alpha$ to keep track of four components: the three Cartesian velocities and the density. This fully describes fluid motion.

Let's call the quantity $f_\alpha$ the "noise" or "stirring" term, which excites the system over space and time. The final term in the above equation is our nonlinear "mixing" term, but more on that later.

We can solve the linear system

\begin{eqnarray}
L^0 u_\alpha(\vec{x},t) &=& f_\alpha (\vec{x},t)
\end{eqnarray}

by just writing our function of interest $u_\alpha (\vec{x},t)$ in terms of what's called a linear Green's function. The Green's function has the following properties:

\begin{eqnarray}
L^0 G^0(\vec{x},t \vert \vec{x}^\prime, t^\prime) &=& \delta(\vec{x}-\vec{x}^\prime)\delta(t-t^\prime) \\
G^0(\vec{x},t \vert \vec{x}^\prime, t^\prime) &=& 0 \ \ \ \mathrm{if} \ \ t<t^\prime
\end{eqnarray}

and can be built out of something called the propagator, which has some slightly different characteristics:

\begin{eqnarray}
L^0K^0(\vec{x},t \vert \vec{x}^\prime, t^\prime) &=& 0 \ \ \ \mathrm{if} \ \ t>t^\prime \\
K^0(\vec{x},t \vert \vec{x}^\prime, t) &=&  \delta(\vec{x}-\vec{x}^\prime) \ \ \ \mathrm{for \ \ equal \ \ times}
\end{eqnarray}
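
To make these properties concrete, here is a minimal numerical sketch in Python/NumPy for the diffusion operator above, whose Green's function is the 1D heat kernel. The viscosity $\nu$ and the grid are illustrative choices:

```python
import numpy as np

nu = 0.5  # viscosity (illustrative choice)

def G0(x, t, xp=0.0, tp=0.0):
    """Heat kernel G0(x,t|x',t'); vanishes for t <= t' (causality)."""
    if t <= tp:
        return np.zeros_like(x)
    s = 4.0 * nu * (t - tp)
    return np.exp(-(x - xp) ** 2 / s) / np.sqrt(np.pi * s)

x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]

# The delta-function source deposits unit "mass":
print(np.trapz(G0(x, 1.0), x))              # ~1

# Away from t = t', L0 G0 = (d/dt - nu d^2/dx^2) G0 should vanish:
t, h = 1.0, 1e-5
dGdt = (G0(x, t + h) - G0(x, t - h)) / (2 * h)
d2Gdx2 = np.gradient(np.gradient(G0(x, t), dx), dx)
print(np.max(np.abs(dGdt - nu * d2Gdx2)))   # ~0 (small finite-difference error)
```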

If we take a look at our linear system in Fourier space, we will see that the differential operator -- transforming $\vec{x}$, not $t$ -- becomes an algebraic function of $\vec{k}$ and $\partial_t$:

\begin{eqnarray}
\tilde{L}^0(\vec{k},\partial_t) u_\alpha(\vec{k},t) &=& f_\alpha (\vec{k},t)
\end{eqnarray}

We would like to construct our linear solution to this system using the Green's function. Now we know, from above, that $\tilde{L}^0$ applied to our Green's function yields

\begin{eqnarray}
\tilde{L}^0 G^0(\vec{k},t \vert \vec{k}^\prime, t^\prime) &=& 1 \cdot \delta(t-t^\prime)
\end{eqnarray}

Where I have taken the Fourier transform of the spatial Dirac delta function from before, which is unity. In a translationally invariant -- or momentum-conserving -- system, we expect all Fourier transforms of functions to carry with them an extra Dirac delta term:

\begin{eqnarray}
\tilde{L}^0 G^0(\vec{k},t \vert \vec{k}^\prime, t^\prime) \delta(\vec{k}+\vec{k}^\prime) &=& 1 \cdot \delta(t-t^\prime)
\end{eqnarray}

And so we find

\begin{eqnarray}
\tilde{L}^0 G^0(\vec{k},t \vert -\vec{k}, t^\prime) &=& 1 \cdot \delta(t-t^\prime)
\end{eqnarray}
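
As a concrete example, for the diffusion operator from before, $\tilde{L}^0(\vec{k},\partial_t) = \partial_t + \nu k^2$, and the Green's function is just a decaying exponential that switches on at $t^\prime$:

\begin{eqnarray}
G^0(\vec{k},t \vert -\vec{k}, t^\prime) &=& e^{-\nu k^2 (t-t^\prime)}\ \Theta(t-t^\prime)
\end{eqnarray}

since applying $\partial_t + \nu k^2$ kills the exponential and leaves $\delta(t-t^\prime)$ from the step function.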

Convolving the Green's function with our source function in time, mode by mode in $k$-space, will yield our linear solution

\begin{eqnarray}
\tilde{L}^0 \left( G^0(\vec{k},t \vert -\vec{k}, t^\prime)\star f_\alpha(\vec{k},t^\prime) \right) &=&  f_\alpha(\vec{k},t) \\
u_\alpha^0(\vec{k},t) &=& \int G^0(\vec{k},t \vert -\vec{k}, t^\prime)f_\alpha(\vec{k},t^\prime) dt^\prime
\end{eqnarray}
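
For the diffusion example, this solution is explicit: each mode is an exponentially weighted memory of its own forcing history,

\begin{eqnarray}
u_\alpha^0(\vec{k},t) &=& \int_{-\infty}^{t} e^{-\nu k^2 (t-t^\prime)} f_\alpha(\vec{k},t^\prime)\, dt^\prime
\end{eqnarray}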

This immediately shows that the Fourier decomposition of our linear system does not "mix modes" over time. We have no integral over $d^3k$ written above, and so we can claim, under linear forces, that the power spectrum of our system should retain the same shape.

Let's look into this more explicitly. If we take the ensemble average of two of our vector functions of interest in $k$-space:

\begin{eqnarray*}
\langle u_\alpha^0(\vec{k_1},t_1) u_\beta^0(\vec{k_2},t_2)  \rangle &=& \int \int G^0(\vec{k_1},t_1 \vert -\vec{k_1}, t_1^\prime) G^0(\vec{k_2},t_2 \vert -\vec{k_2}, t_2^\prime) \langle f_\alpha(\vec{k_1},t_1^\prime) f_\beta(\vec{k_2},t_2^\prime)  \rangle dt_1^\prime dt_2^\prime
\end{eqnarray*}

Now, if we assume the following isotropic, Gaussian form for our noise function (recalling that we've already assumed a translationally invariant system)

\begin{eqnarray}
\langle f_\alpha(\vec{k_1},t_1^\prime) f_\beta(\vec{k_2},t_2^\prime)  \rangle &=& \left(\delta_{\alpha \beta} - \frac{k_\alpha k_\beta}{k^2}\right) W(\vert k \vert) \delta(t_1^\prime - t_2^\prime) \delta(\vec{k}_1 + \vec{k}_2)
\end{eqnarray}

Then our expectation value for the second moment, or the pair correlation tensor, above, simplifies immensely (writing $D_{\alpha \beta} \equiv \delta_{\alpha \beta} - k_\alpha k_\beta/k^2$ for the projection tensor that appears):

\begin{eqnarray*}
\langle u_\alpha^0(\vec{k_1},t_1) u_\beta^0(\vec{k_2},t_2)  \rangle &=& D_{\alpha \beta} \int  W(\vert \mathbf{k} \vert) G^0\left(\mathbf{k},t_1 \vert -\mathbf{k},t^\prime \right)G^0\left(\mathbf{k},t_2 \vert -\mathbf{k},t^\prime \right) \ dt^\prime
\end{eqnarray*}

Now I find this very interesting, because if we correlate two modes at equal time $t_1 = t_2$ and set $\alpha = \beta = 0$, we've got the power spectrum:

\begin{eqnarray*}
P(k,t)\delta(k+k^\prime) &=& \langle u_0^0(\vec{k},t) u_0^0(\vec{k^\prime},t)  \rangle \\
&=& \langle u_0^0(\vec{k},t) u_0^0(-\vec{k},t)  \rangle \\
&=& \int_{t_0}^t  W(\vert \mathbf{k} \vert) \vert \hat{G}^0(\mathbf{k},t ,t^\prime) \vert^2 dt^\prime
\end{eqnarray*}
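
As a quick numerical sanity check of this last formula, here is a sketch in Python/NumPy for the diffusion Green's function $e^{-\nu k^2 (t-t^\prime)}$: we stir each mode with delta-correlated noise of spectrum $W(k)$ and compare the ensemble power with $W(k)\int_{t_0}^{t} e^{-2\nu k^2 (t-t^\prime)} dt^\prime$. The choices of $\nu$, $W(k)$, time step, and ensemble size are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, dt, nsteps, nreal = 0.1, 1e-3, 2000, 5000
k = np.linspace(0.5, 5.0, 32)        # a handful of Fourier modes
W = np.exp(-(k - 2.0) ** 2)          # assumed noise spectrum W(k)

decay = np.exp(-nu * k ** 2 * dt)    # exact one-step Green's function
u = np.zeros((nreal, k.size))        # start from u = 0 at t0 = 0
for _ in range(nsteps):
    # delta-correlated stirring: variance W(k)*dt per step
    f = rng.normal(scale=np.sqrt(W * dt), size=u.shape)
    u = decay * u + f                # each mode evolves independently

P_mc = (u ** 2).mean(axis=0)         # ensemble-averaged power at time t
t = nsteps * dt
P_th = W * (1 - np.exp(-2 * nu * k ** 2 * t)) / (2 * nu * k ** 2)
print(np.max(np.abs(P_mc / P_th - 1)))   # ~few percent (sampling noise)
```

Since every $k$ bin is evolved and averaged on its own, the agreement is mode by mode: no power leaks between bins, as claimed above.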

If we take the limit as $t \to t_0$, then the "width" of the integral goes to zero and we are left with just the value of our integrand, which, given the properties of the propagator, should be $W(k)$. Thus our power spectrum under no time evolution yields

\begin{eqnarray}
P(k,t_0) &=& W(k)
\end{eqnarray}

As one would expect. If $W(k)$ is time independent, then we would expect this function to represent the "seeds" for fluid perturbations -- or in cosmology, for gravitational growth!

Now the following could perhaps be spurious, but maybe the propagator is the transfer function I am so used to seeing in cosmology:

\begin{eqnarray*}
P(k) &=& T(k)^2 P(k,t_0)\\
&=&  P(k,t_0) \int_{t_0}^t  \vert \hat{G}^0(\mathbf{k},t ,t^\prime) \vert^2 dt^\prime \\
T(k)^2 &=&  \int_{t_0}^t  \vert \hat{G}^0(\mathbf{k},t ,t^\prime) \vert^2 dt^\prime
\end{eqnarray*}

The key here is that the time dependence in the above integral has been "shucked" out by separating time -- or redshift -- into the linear growth factor $a(t)$.

Tuesday, June 24, 2014

Conditioning Multinormals

In the following, all equals signs will really mean "proportional," as we're going to work with probability densities and, for the time being, don't want to worry about normalization.

Let $\mathbf{x}$ and $\mathbf{y}$ be random vectors drawn from the following multinormal:

\begin{eqnarray}
P(\mathbf{x},\mathbf{y}) &\sim& \mathrm{N}\left( \left[\begin{array}{c} \mu_x \\ \mu_y \end{array}\right], \left[\begin{array}{cc} A & C \\ C^T & B \end{array}\right]^{-1} \right)\\
&\sim& e^{-\frac{1}{2} \left( \left[\begin{array}{cc} x-\mu_x & y - \mu_y \end{array}\right] \cdot \left[\begin{array}{cc} A & C \\ C^T & B \end{array}\right]\cdot \left[\begin{array}{c} x-\mu_x \\ y-\mu_y \end{array}\right] \right)}
\end{eqnarray}

The covariance matrix above is quite complicated, but, loosely, we can think of $C^{-1}$ as cross-correlating our vectors $\mathbf{x}$ and $\mathbf{y}$, and $A^{-1}$ and $B^{-1}$ as "auto"-correlating within the vectors $\mathbf{x}$ and $\mathbf{y}$. We have set this up as a partitioned, or block, matrix. We need not have $\mathbf{x}$ and $\mathbf{y}$ be of the same dimension.

To find the inverse of a block matrix like the one above, we take a general partitioned matrix

\begin{equation}
\left[\begin{array}{cc} P & Q \\ R & S \end{array}\right]
\end{equation}

and write:

\begin{eqnarray}
\left[\begin{array}{cc} P & Q \\ R & S \end{array}\right]\left[\begin{array}{cc} X_1 & X_2 \\ X_3 & X_4 \end{array}\right] &=& \left[\begin{array}{cc} I & 0 \\ 0 & I \end{array}\right]
\end{eqnarray}

Which results in the following linear equations:

\begin{eqnarray}
PX_1+QX_3 &=& I \\
PX_2 &=& - Q X_4 \\
RX_1 &=& - S X_3 \\
RX_2+SX_4 &=& I
\end{eqnarray}

with the solutions:

\begin{eqnarray}
X_1 &=& P^{-1}+P^{-1}QMRP^{-1}\\
X_2 &=& -P^{-1}QM\\
X_3 &=& -MRP^{-1} \\
X_4 &=& M \\
M &=& (S-RP^{-1}Q)^{-1}
\end{eqnarray}

and

\begin{eqnarray}
X_1 &=& N\\
X_2 &=& -NQS^{-1}\\
X_3 &=& -S^{-1}RN \\
X_4 &=& S^{-1}+S^{-1}RNQS^{-1} \\
N &=& (P-QS^{-1}R)^{-1}
\end{eqnarray}
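
These formulas are easy to misremember, so here is a minimal NumPy check of the first solution set against randomly generated blocks (the block sizes are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
P, Q = rng.normal(size=(n, n)), rng.normal(size=(n, m))
R, S = rng.normal(size=(m, n)), rng.normal(size=(m, m))

Pinv = np.linalg.inv(P)
M = np.linalg.inv(S - R @ Pinv @ Q)   # M = (S - R P^{-1} Q)^{-1}
X = np.block([[Pinv + Pinv @ Q @ M @ R @ Pinv, -Pinv @ Q @ M],
              [-M @ R @ Pinv,                   M           ]])

full = np.block([[P, Q], [R, S]])
print(np.allclose(full @ X, np.eye(n + m)))   # True
```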

Now, let's say we are interested in the conditional multinormal

\begin{eqnarray}
(\mathbf{x} \vert \mathbf{y}) \sim \ ?
\end{eqnarray}

We expect ?, above, to be another Gaussian, which will be uniquely determined by its mean and variance -- the first two cumulants. Let's write down the conditional expectation value using Bayes' theorem:

\begin{eqnarray}
\langle\mathbf{x} \vert \mathbf{y}\rangle &=& \int \mathbf{x}\, P(\mathbf{x}\vert \mathbf{y})\, d\mathbf{x} \\
&=& \int \mathbf{x} \frac{P(\mathbf{x},\mathbf{y})}{P(\mathbf{y})} d\mathbf{x}
\end{eqnarray}

And now, to expand things to higher order in $\mathbf{x}$, we can create a conditional moment generating function:

\begin{eqnarray}
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& \int d\mathbf{x} \left(e^{\mathbf{\lambda}\cdot (\mathbf{x} \vert \mathbf{y})}\right)\ \ \frac{e^{-\frac{1}{2} \left( \left[\begin{array}{cc} x-\mu_x & y-\mu_y \end{array}\right] \cdot \left[\begin{array}{cc} A & C \\ C^T & B \end{array}\right]\cdot \left[\begin{array}{c} x-\mu_x \\ y-\mu_y \end{array}\right] \right)} }{e^{-\frac{1}{2} \left( \left[\begin{array}{cc} x-\mu_x & y-\mu_y \end{array}\right] \cdot \left[\begin{array}{cc} 0 & 0 \\ 0 & B \end{array}\right]\cdot \left[\begin{array}{c} x-\mu_x \\ y-\mu_y \end{array}\right] \right)}}
\end{eqnarray}

Notice how the denominator plays the role of the probability density on $\mathbf{y}$ (up to factors that depend only on $\mathbf{y}$, which conditioning treats as constants). Let's rewrite $\mathbf{x}^\prime = \mathbf{x}-\mathbf{\mu}_x$ and $\mathbf{y}^\prime = \mathbf{y}-\mathbf{\mu}_y$ to simplify things. Let's also get rid of the conditional notation $\vert y $ in the exponent to clean things up and make conditioning implicit:

\begin{eqnarray}
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& \int d\mathbf{x} \left(e^{\mathbf{\lambda}\cdot (\mathbf{x^\prime}+\mathbf{\mu}_x )}\right)\ \ e^{-\frac{1}{2} \left( \left[\begin{array}{cc} x^\prime & y^\prime \end{array}\right] \cdot \left[\begin{array}{cc} A & C \\ C^T & 0 \end{array}\right]\cdot \left[\begin{array}{c} x^\prime \\ y^\prime \end{array}\right] \right)} \\
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& \int d\mathbf{x} \left(e^{\mathbf{\lambda}\cdot (\mathbf{x^\prime}+\mathbf{\mu}_x )}\right)\ \ e^{-\frac{1}{2} \left( \left[\begin{array}{cc} x^\prime & y^\prime \end{array}\right] \cdot \left[\begin{array}{c} Ax^\prime+Cy^\prime \\ C^Tx^\prime \end{array}\right] \right)} \\
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& e^{\lambda \cdot \mathbf{\mu}_x }\int d\mathbf{x} \left(e^{\mathbf{\lambda}\cdot \mathbf{x^\prime}}\right)\ \ e^{-\frac{1}{2} x^\prime_i A_{ij}x^\prime_j \ -\  C_{ij} x^\prime_i y^\prime_j }
\end{eqnarray}

Now we can simplify things a little bit, by writing $ik_i = \lambda_i-C_{ij}y^\prime_j$
\begin{eqnarray}
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& e^{\lambda \cdot \mathbf{\mu}_x }\int d\mathbf{x} \left(e^{(\lambda_i-C_{ij}y^\prime_j)x^\prime_i}\right)\ \ e^{-\frac{1}{2}\left( x^\prime_i A_{ij}x^\prime_j \right)}\\
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& e^{\lambda \cdot \mathbf{\mu}_x }\int d\mathbf{x}\ e^{ik_i x^\prime_i}\ \ e^{-\frac{1}{2}\left( x^\prime_i A_{ij}x^\prime_j \right)}
\end{eqnarray}

And now we have a multidimensional Fourier transform of a Gaussian (which is another Gaussian).
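
For reference, the standard multidimensional Gaussian integral gives, up to the $(2\pi)^{n/2}\vert A \vert^{-1/2}$ normalization we're ignoring,

\begin{eqnarray*}
\int d\mathbf{x}\ e^{i k_i x^\prime_i}\ e^{-\frac{1}{2} x^\prime_i A_{ij} x^\prime_j} &\propto& e^{-\frac{1}{2} k_i A^{-1}_{ij} k_j}
\end{eqnarray*}

and since $ik_i = \lambda_i-C_{ij}y^\prime_j$, we have $-\frac{1}{2}k_i A^{-1}_{ij}k_j = +\frac{1}{2}(\lambda_i-C_{il}y^\prime_l)A^{-1}_{ij}(\lambda_j-C_{jm}y^\prime_m)$; note the sign flip coming from $i^2 = -1$: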

\begin{eqnarray}
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& e^{\lambda \cdot \mathbf{\mu}_x }e^{-\frac{1}{2}k_i A^{-1}_{ij}k_j}\\
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& e^{\lambda \cdot \mathbf{\mu}_x +\frac{1}{2}(\lambda_i-C_{il}y^\prime_l)A^{-1}_{ij}(\lambda_j-C_{jm}y^\prime_m)}
\end{eqnarray}

Foiling out and collecting like terms in $\lambda $, we've got:

\begin{eqnarray}
\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle &=& e^{\lambda_i \left[\mathbf{\mu}^{(x)}_i  - A^{-1}_{ij} C_{jm}(y_m-\mu^{(y)}_m)\right]+\left[\frac{1}{2}\lambda_i A^{-1}_{ij}\lambda_j\right] \ \ + \ \ \frac{1}{2}(y_l-\mu^{(y)}_l)C_{il}A^{-1}_{ij}C_{jm}(y_m-\mu^{(y)}_m)}
\end{eqnarray}

And now we see that the first moment and variance behave differently than one would expect:
\begin{eqnarray}
\left.\frac{\partial}{\partial \lambda_i}\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle\right\vert_{\mathbf{\lambda}=0} &=& \langle x_i \rangle = \mathbf{\mu}^{(x)}_i  - A^{-1}_{ij} C_{jm}(y_m-\mu^{(y)}_m) \\
\left.\frac{\partial^2}{\partial \lambda_i \partial \lambda_j}\langle e^{\mathbf{\lambda}\cdot(\mathbf{x} \vert \mathbf{y})} \rangle\right\vert_{\mathbf{\lambda}=0} - \langle x_i \rangle \langle x_j \rangle &=& \langle x_i x_j \rangle - \langle x_i \rangle \langle x_j \rangle = A^{-1}_{ij}
\end{eqnarray}

Where each of these moments also picks up a factor depending only on $\mathbf{y}$, which, since we are conditioning on $\mathbf{y}$, is just another normalization factor that we've thrown out:
\begin{eqnarray}
f(y) &=& \mathrm{Exp}\left[\frac{1}{2}(y_l-\mu^{(y)}_l)C_{il}A^{-1}_{ij}C_{jm}(y_m-\mu^{(y)}_m)\right]
\end{eqnarray}
(Reassuringly, this factor combines with the $e^{-\frac{1}{2}y^\prime B y^\prime}$ we divided out to reproduce the true marginal on $\mathbf{y}$, whose precision is $B-C^TA^{-1}C$.)

So now, if we write the conditional probability density function $P(\mathbf{x} \vert \mathbf{y})$, we have:

\begin{eqnarray}
P(\mathbf{x} \vert \mathbf{y}) &=& \mathrm{N}\left[ \mathbf{\mu}^{(x)}_i  - A^{-1}_{ij} C_{jm}(y_m-\mu^{(y)}_m),\ A^{-1}_{ij} \right]
\end{eqnarray}

In terms of the blocks of the covariance matrix, $\left[\begin{array}{cc} A & C \\ C^T & B \end{array}\right]^{-1} = \left[\begin{array}{cc} \Sigma_{xx} & \Sigma_{xy} \\ \Sigma_{yx} & \Sigma_{yy} \end{array}\right]$, the block-inversion identities above give $A^{-1} = \Sigma_{xx}-\Sigma_{xy}\Sigma_{yy}^{-1}\Sigma_{yx}$ and $-A^{-1}C = \Sigma_{xy}\Sigma_{yy}^{-1}$, so this is exactly the familiar Gaussian conditioning formula.
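
Finally, a minimal NumPy check of this result against the covariance-block form just quoted, with randomly generated values standing in for a real problem:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
Z = rng.normal(size=(n + m, n + m))
prec = Z @ Z.T + (n + m) * np.eye(n + m)   # random SPD precision matrix
A, C, B = prec[:n, :n], prec[:n, n:], prec[n:, n:]

cov = np.linalg.inv(prec)                  # covariance blocks Sigma_ab
Sxx, Sxy = cov[:n, :n], cov[:n, n:]
Syx, Syy = cov[n:, :n], cov[n:, n:]

mu_x, mu_y = rng.normal(size=n), rng.normal(size=m)
y = rng.normal(size=m)                     # the value we condition on

Ainv = np.linalg.inv(A)
mean_prec = mu_x - Ainv @ C @ (y - mu_y)                 # result above
mean_cov = mu_x + Sxy @ np.linalg.inv(Syy) @ (y - mu_y)  # covariance form
cov_prec = Ainv
cov_cov = Sxx - Sxy @ np.linalg.inv(Syy) @ Syx

print(np.allclose(mean_prec, mean_cov), np.allclose(cov_prec, cov_cov))  # True True
```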