While standard estimation assumes that all datapoints come from a probability distribution with the same fixed parameters $\theta$, we focus on maximum likelihood (ML) adaptive estimation for nonstationary time series: separately estimating parameters $\theta_T$ for each time $T$ based on the earlier values $(x_t)_{t<T}$, using the (exponential) moving ML estimator $\theta_T=\arg\max_\theta l_T$ for $l_T=\sum_{t<T} \eta^{T-t} \ln(\rho_\theta(x_t))$ and some $\eta\in(0,1]$. The computational cost of such a moving estimator is generally much higher, as the log-likelihood has to be optimized many times; however, in many cases it can be made inexpensive by exploiting dependencies between successive estimates. We focus on one such example: the exponential power distribution (EPD) family $\rho(x)\propto \exp(-|(x-\mu)/\sigma|^\kappa/\kappa)$, which covers a wide range of tail behaviors, including the Gaussian ($\kappa=2$) and Laplace ($\kappa=1$) distributions. It is also convenient for adaptive estimation of the scale parameter $\sigma$, as its standard ML estimate of $\sigma^\kappa$ is the average of $|x-\mu|^\kappa$. By simply replacing this average with an exponential moving average, $(\sigma_{T+1})^\kappa=\eta(\sigma_T)^\kappa+(1-\eta)|x_T-\mu|^\kappa$, we can make the estimation adaptive at low cost. The approach is tested on daily log-return series for DJIA companies, leading to essentially better log-likelihoods than standard (static) estimation, with the optimal tail type $\kappa$ surprisingly varying between companies. The presented general alternative estimation philosophy provides tools which might be useful for building better models for the analysis of nonstationary time series.
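The exponential-moving-average update of the scale parameter can be sketched as follows; this is a minimal illustrative implementation (function name and defaults are our own, not from the text), assuming known center $\mu$ and shape $\kappa$:

```python
import numpy as np

def adaptive_epd_scale(x, mu=0.0, kappa=2.0, eta=0.95, sigma0=1.0):
    """Adaptive (moving) ML estimate of the EPD scale parameter sigma.

    Applies the update (sigma_{T+1})^kappa = eta*(sigma_T)^kappa
    + (1-eta)*|x_T - mu|^kappa, i.e. an exponential moving average
    of |x - mu|^kappa. Returns, for each T, the sigma_T estimated
    from the earlier values x_0, ..., x_{T-1} only.
    """
    sk = sigma0 ** kappa            # current estimate of sigma^kappa
    sigmas = np.empty(len(x))
    for t, xt in enumerate(x):
        sigmas[t] = sk ** (1.0 / kappa)   # prediction before seeing x_t
        sk = eta * sk + (1.0 - eta) * abs(xt - mu) ** kappa
    return sigmas
```

Each step costs O(1), so the whole adaptive scale estimate is a single pass over the series, in contrast to re-optimizing the weighted log-likelihood at every time $T$.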