Ultimately we are interested in functions $\Phi: \mathbf{R}^d\rightarrow\mathbf{R}^N$, where we want to estimate parameters of features in $\mathbf{R}^d$ from observations in $\mathbf{R}^N$.  The target application is multivariate financial data of high dimension $N \gg d$, but before tackling this relatively complex problem we consider $d=N=1$ and a one-dimensional function $f:\mathbf{R}\rightarrow\mathbf{R}$.  We expand $f(B_t) = \sum_{i=0}^\infty a_i H_i(t,B_t)$ (**), where the Hermite polynomials $H_i$ have the nice property (see this) $dH_{k+1}(t,B_t) = H_k(t,B_t)\,dB_t$.

Now the even powers of $B_t$ have expectation $E(B_t^{2n}) = \frac{(2n)!}{n!\,2^n} t^n$, which can be used to evaluate $E(H_k(t,B_t)\,dB_t)$ (*) via the Itô formula with $f(x) = x^k$ together with the explicit expressions for the Hermite polynomials.  We can thus tabulate the expectations (*) as explicit functions of $t$.  These functions of $t$ can in turn be used to estimate the coefficients $a_i$ of (**) from observations $(y_0,\dots,y_T)$ of a univariate time series, giving us the parameters of $f$, which may be regarded as the functional.  This solves the problem of determining $f$, for a twice-differentiable function of $B_t$, in the one-dimensional case.
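A minimal sketch of this scheme in Python (a sketch, not the full estimation procedure: it uses the normalization $H_0=1$, $H_1=x$, $H_{n+1}(t,x) = (xH_n - tH_{n-1})/(n+1)$, which gives $\partial_x H_n = H_{n-1}$ and hence the martingale property above, and it assumes for illustration that the driving path $B_t$ is observed alongside $y$; the function names are mine):

```python
import numpy as np

def hermite(n, t, x):
    """Heat-equation Hermite polynomials H_n(t, x), normalized so that
    H_0 = 1, H_1 = x and d/dx H_n = H_{n-1} (hence dH_{n+1} = H_n dB_t)."""
    h_prev, h = np.ones_like(x * 1.0), x * 1.0
    if n == 0:
        return h_prev
    for k in range(1, n):
        # Recurrence H_{k+1} = (x H_k - t H_{k-1}) / (k + 1)
        h_prev, h = h, (x * h - t * h_prev) / (k + 1)
    return h

def fit_coefficients(t, b, y, order):
    """Least-squares estimate of a_0..a_order in y = sum_i a_i H_i(t, B_t).
    Assumes the path b of B_t is observed (a simplifying assumption; in the
    intended application B_t would be latent)."""
    X = np.column_stack([hermite(i, t, b) for i in range(order + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

# Synthetic check: build y from known coefficients and recover them.
rng = np.random.default_rng(0)
t = np.linspace(0.01, 1.0, 500)
b = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t, prepend=0.0))))
a_true = np.array([0.5, -1.0, 2.0, 0.3])
y = sum(a * hermite(i, t, b) for i, a in enumerate(a_true))
a_hat = fit_coefficients(t, b, y, order=3)
```

The recurrence reproduces the familiar low orders, e.g. $H_2(t,x) = (x^2 - t)/2$ and $H_3(t,x) = (x^3 - 3tx)/6$, and on exact synthetic data the least-squares fit recovers the coefficients.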
Now for the multidimensional case let $\Phi:\mathbf{R}^d\rightarrow\mathbf{R}^N$ be $C^2$.  When $d=1$, the above approach suffices when applied componentwise to $(\Phi_1,\dots,\Phi_N)$.
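The componentwise reduction for $d=1$ can be sketched as follows — each column of the observation matrix is one component $\Phi_j(B_t)$, and a single least-squares solve handles all $N$ columns at once (again assuming, for illustration only, an observed driving path; names are mine):

```python
import numpy as np

def hermite(n, t, x):
    """H_n(t, x) with H_0 = 1, H_1 = x, H_{n+1} = (x H_n - t H_{n-1})/(n+1)."""
    h_prev, h = np.ones_like(x * 1.0), x * 1.0
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, (x * h - t * h_prev) / (k + 1)
    return h

def fit_components(t, b, Y, order):
    """Fit each component Phi_j of Phi: R -> R^N separately.
    Y is (T, N), one observed series per column; returns an
    (order+1, N) matrix, one Hermite expansion per component."""
    X = np.column_stack([hermite(i, t, b) for i in range(order + 1)])
    coefs, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves all N columns at once
    return coefs

# Synthetic check with N = 2 components and order 2.
rng = np.random.default_rng(1)
t = np.linspace(0.01, 1.0, 400)
b = np.cumsum(rng.normal(0.0, np.sqrt(np.diff(t, prepend=0.0))))
A_true = np.array([[1.0, 0.0],
                   [0.5, -2.0],
                   [0.0, 1.5]])          # rows: a_0, a_1, a_2; columns: components
X = np.column_stack([hermite(i, t, b) for i in range(3)])
Y = X @ A_true
A_hat = fit_components(t, b, Y, order=2)
```

Since the components share the same Hermite design matrix, the $N$ univariate problems collapse into one matrix least-squares solve.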