Rob Hicks — Posts about maximum likelihood (http://rlhick.people.wm.edu/en), last updated Mon, 28 Jun 2021 12:20:40 GMT. Generated by Nikola (getnikola.com).

Tensorflow with Custom Likelihood Functions
http://rlhick.people.wm.edu/posts/custom-likes-tensorflow.html
<div><p>
This post builds on earlier ones dealing with <a href="http://rlhick.people.wm.edu/posts/estimating-custom-mle.html">custom likelihood
functions in python</a> and maximum likelihood estimation with <a href="http://rlhick.people.wm.edu/posts/mle-autograd.html">auto
differentiation</a>. This post approaches tensorflow from an
econometrics perspective and is based on a series of tests and notes I
developed for using tensorflow in some of my work. In early
explorations of tensorflow nearly all of the examples I encountered
were from a machine learning perspective, making it difficult to fit
code examples to econometric problems. For the tensorflow uninitiated
who want to dive in (like me!), I hope this will prove useful.
</p>
<p>
The goals of the post:
</p>
<ol class="org-ol">
<li>Some tensorflow basics I wish I had known before I started this work</li>
<li>Define a custom log-likelihood function in tensorflow and perform
differentiation over model parameters to illustrate how, under the
hood, tensorflow's model graph is designed to calculate derivatives
"free of charge" (no programming required and very little to no
additional compute time).</li>
<li>Use the tensorflow log-likelihood to estimate a maximum likelihood
model using <code>tensorflow_probability.optimizer</code> capabilities.</li>
<li>Illustrate how the <code>tensorflow_probability.mcmc</code> libraries can be
used with custom log-likelihoods.</li>
</ol>
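<p>
To make the "free of charge" point concrete, here is a minimal sketch of the second goal. The model and data are my own toy example (a simple normal log-likelihood), not code from the post itself: we write the negative log-likelihood with ordinary tensorflow ops and recover exact gradients from the graph via <code>tf.GradientTape</code>.
</p>

```python
import numpy as np
import tensorflow as tf

# Hypothetical data for a simple normal model (not from the post itself)
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=100)
n = float(y.size)

mu = tf.Variable(0.0, dtype=tf.float64)
log_sigma = tf.Variable(0.0, dtype=tf.float64)  # log-parameterization keeps sigma > 0

def neg_log_like(mu, log_sigma):
    """Negative normal log-likelihood, written with ordinary tensorflow ops."""
    sigma = tf.exp(log_sigma)
    return (n * (log_sigma + 0.5 * np.log(2.0 * np.pi))
            + 0.5 * tf.reduce_sum(((y - mu) / sigma) ** 2))

# tensorflow's model graph supplies exact derivatives "free of charge"
with tf.GradientTape() as tape:
    nll = neg_log_like(mu, log_sigma)
grads = tape.gradient(nll, [mu, log_sigma])
```

<p>
At the starting values (<code>mu = 0</code>, <code>sigma = 1</code>) the gradient with respect to <code>mu</code> is analytically \(-\sum_i y_i\), which is a convenient sanity check that the taped derivatives are exact rather than finite-difference approximations.
</p>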
<p><a href="http://rlhick.people.wm.edu/posts/custom-likes-tensorflow.html">Read more…</a> (12 min remaining to read)</p></div>

Tags: likelihood, maximum likelihood, mcmc, tensorflow
Published: Mon, 24 Feb 2020 08:15:50 GMT

Using Autograd for Maximum Likelihood Estimation
http://rlhick.people.wm.edu/posts/mle-autograd.html
<div><p>
Thanks to an excellent series of posts on the python package <code>autograd</code> for automatic differentiation by John Kitchin (e.g. <a href="http://kitchingroup.cheme.cmu.edu/blog/2017/11/22/More-auto-differentiation-goodness-for-science-and-engineering/">More Auto-differentiation Goodness for Science and Engineering</a>), this post revisits some earlier work on <a href="http://rlhick.people.wm.edu/posts/estimating-custom-mle.html">maximum likelihood estimation in Python</a> and investigates the use of auto differentiation. As pointed out in <a href="https://arxiv.org/pdf/1502.05767.pdf">this article</a>, auto-differentiation "can be thought of as performing a non-standard interpretation of a computer program where this interpretation involves augmenting the standard computation with the calculation of various derivatives."
</p>
<p>
Auto-differentiation is neither symbolic differentiation nor a numerical approximation using finite difference methods. Instead, it augments your code so that derivatives of your functions are provided free of charge. In this post, we will be using the <code>autograd</code> package in python after defining a function in the usual <code>numpy</code> way. In python, another auto-differentiation choice is the Theano package, which is used by PyMC3, a Bayesian probabilistic programming package that I use in my research and teaching. There are probably other implementations in python, as auto-differentiation is becoming a must-have in the machine learning field. Implementations also exist in C/C++, R, Matlab, and probably other languages.
</p>
<p>
The three primary reasons for incorporating auto-differentiation capabilities into your research are
</p>
<ol class="org-ol">
<li>In nearly all cases, your code will run faster. For some problems, much faster.</li>
<li>For difficult problems, your model is likely to converge closer to the true parameter values and may be less sensitive to starting values.</li>
<li>Your model will provide more accurate calculations for things like gradients and Hessians (so your standard errors will be more accurately calculated).</li>
</ol>
<p>
With auto-differentiation, gone are the days of deriving analytical derivatives and programming them into your estimation routine. In this short note, we show a simple example of auto-differentiation, expand on it for maximum likelihood estimation, and show that for problems where likelihood calculations are expensive, or where many parameters are being estimated, there can be dramatic speed-ups.
</p>
<p><a href="http://rlhick.people.wm.edu/posts/mle-autograd.html">Read more…</a> (8 min remaining to read)</p></div>

Tags: autograd, ipython, maximum likelihood
Published: Tue, 06 Mar 2018 08:30:50 GMT

Estimating Custom Maximum Likelihood Models in Python (and Matlab)
http://rlhick.people.wm.edu/posts/estimating-custom-mle.html
<div><p>
In this post I show various ways of estimating "generic" maximum likelihood models in python. For each, we'll recover standard errors.
</p>
<p>
We will implement a simple ordinary least squares model like this
</p>
\begin{equation}
\mathbf{y = x\beta +\epsilon}
\end{equation}
<p>
where \(\epsilon\) is assumed distributed i.i.d. normal with mean 0 and variance \(\sigma^2\). In our simple model, there is only a constant and one slope coefficient (\(\beta = \begin{bmatrix} \beta_0 & \beta_1 \end{bmatrix}\)).
</p>
<p>
For this model, we would probably never bother going to the trouble of manually implementing maximum likelihood estimators as we show in this post. However, for more complicated models for which there is no established package or command, there are benefits to knowing how to build your own likelihood function and use it for estimation. It is also worthwhile noting that most of the methods shown here don't use analytical gradients or Hessians, so they are likely (1) to have longer execution times and (2) to be less precise than methods where known analytical gradients and Hessians are built into the estimation method. I might explore those issues in a later post.
</p>
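<p>
One such "generic" approach, sketched here with my own simulated data (not the post's code), is to hand the negative log-likelihood of the model above to <code>scipy.optimize.minimize</code> and recover approximate standard errors from BFGS's inverse-Hessian estimate:
</p>

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data (hypothetical): a constant plus one slope, as in the model above
np.random.seed(0)
n = 500
x = np.random.randn(n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([1.0, 2.0]) + np.random.randn(n)

def neg_ll(theta):
    """Negative normal log-likelihood; theta = (beta0, beta1, log sigma)."""
    beta, log_s = theta[:2], theta[2]
    s = np.exp(log_s)
    resid = y - X @ beta
    return n * (log_s + 0.5 * np.log(2.0 * np.pi)) + 0.5 * np.sum((resid / s) ** 2)

res = minimize(neg_ll, np.zeros(3), method="BFGS")
se = np.sqrt(np.diag(res.hess_inv))  # approximate standard errors from the inverse Hessian
```

<p>
Note that BFGS builds up <code>hess_inv</code> from finite-difference gradient information, so these standard errors are the less-precise kind the paragraph above warns about.
</p>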
<p>
<b>tl;dr</b>: There are numerous ways to estimate custom maximum likelihood models in Python, and what I find is:
</p>
<ol class="org-ol">
<li>For the most features, I recommend using the <a href="http://rlhick.people.wm.edu/posts/estimating-custom-mle.html#org1a27bdc"><code>GenericLikelihoodModel</code> class from Statsmodels</a>, even if it is the least intuitive way for programmers familiar with Matlab. If you are comfortable with object-oriented programming, you should definitely go this route.</li>
<li>For fastest run times and computationally expensive problems, Matlab will most likely be significantly faster even with lots of code optimizations.</li>
</ol>
<p><a href="http://rlhick.people.wm.edu/posts/estimating-custom-mle.html">Read more…</a> (6 min remaining to read)</p></div>

Tags: ipython, matlab, maximum likelihood
Published: Sat, 06 May 2017 08:15:50 GMT