{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "

Minimization

" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib\n", "%matplotlib inline\n", "import matplotlib.pyplot as plt\n", "from scipy.stats import chi2" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " Minimization, not using least squares

\n", "\n", "Let's try a general minimization function, for Poisson uncertainties on a Gaussian profile. First write our obligatory routine to return a Gaussian. Here, we'll add another constant background/continuum value. We will also add an option to return the vector of derivatives of the Gaussian with respect to each parameter.\n", "
\n", "if \n", "$$f = A \\exp{-0.5(x-x_0)/\\sigma^2} + c$$\n", "then the derivatives are:\n", "$${df\\over dA} = \\exp{-0.5(x-x_0)^2/\\sigma^2}$$\n", "$${df\\over dx_0} = f {(x-x_0)\\over \\sigma^2}$$\n", "$${df\\over d\\sigma} = f {(x-x_0)^2\\over \\sigma^3}$$\n", "$${df\\over dc } = 1$$" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# here we define the function. For use with curve_fit, we need each parameter to \n", "# be a separate argument (i.e., not have all arguments as a single list/array)\n", "def model(x,amp,cent,sigma,const,deriv=False) :\n", " \"\"\" Gaussian profile function with amplitude, center, sigma, plus constant\n", " with deriv=False, return array of model values at location of input x\n", " with deriv=True, also return array of derivatives\n", " \"\"\"\n", " \n", " if not deriv :\n", " return # return function\n", " else :\n", " # extra code here for computing derivatives if requested\n", " \n", " return # return function and derivatives\n", "\n", "# simulate some data. Note that I've defined a par array for convenience, but pass it as\n", "# individual arguments using *par\n", "x= # set independent variable\n", "par= # set some parameters\n", "\n", "# here's what we would get if we assumed Gaussian uncertainties\n", "sig=np.sqrt(model(x,*par))\n", "y=model(x,*par)+np.random.normal(0.,sig,size=len(x))\n", "plt.errorbar(x,y,sig,fmt='ro')\n", "\n", "# here's using Poisson uncertainties\n", "y=np.random.poisson(model(x,*par))\n", "plt.errorbar(x,y,sig,fmt='go')\n", "plt.plot(x,model(x,*par))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Given that, for a Poisson distribution, the probability of getting an observed value, $y_i$, given an underlying model value, $f_i$, at each $x_i$ is: \n", "$$P(x_i|f_i) = {\\exp(-f_i) f_i^{x_i} \\over x_i!}$$\n", "\n", "what is the expression/function for the log(likelihood) of this data set? Remember the probability of the data set is just the product of the probabilties of the individual points.\n", "\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ " ANSWER: " ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "To maximize the likelihood, we will minimize the log(likelihood). For a Poisson distribution, the probability of getting an observed value, y, given an underlying model value, f, is: \n", "$$P(x|f) = {\\exp(-f) f^x \\over x!}$$\n", "Take the log to get:\n", "$$ ln(P(x)) = -f(x) + x ln(f) - ln(x!)$$\n", "Multiplying the probabilities of each individual point is taking the sum of the logs. Since the last term is independent of the model parameters, we can neglect it when looking for a maximum with respect to parameter values.\n", "$$ln(L) = \\sum -f_i + y ln(f_i)$$\n", "where $f_i$ is the model value at each $x_i$\n", "
\n", "Write a routine to routine the -ln(likelihood), which will use the data values, as well as the model values given the independent variable values. Include an option to return the vector of derivatives with respect to each parameters as well." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "def logl(pars,x,y,deriv=False) :\n", " \"\"\" log(likelihood) function\n", " Poisson p(x|f) = exp(-f)*f(y) / y!\n", " ln(p) = -f + y*f - ln(y!)\n", " but last term is independent of pars\n", " \"\"\"\n", " mod=model(x,*pars,deriv=deriv)\n", " if not deriv :\n", " return # return -ln(likelihood)\n", " else :\n", " return # return -ln(likelihood) and vector of derivatives with respect to the parameters\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Now we will use scipy.optimize.minimize() to find the minimum of our -ln(likelihood) function. You'll need to supply a starting guess." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from scipy.optimize import minimize\n", "p0= # starting guess\n", "# default BFGS algorithm, numerical derivatives\n", "\n", "# notice how we use the args= keyword to supply extra parameters to our ln(l) routine \n", "# beyond the parameters to be fit; these are needed to calculate ln(l)\n", "out=minimize(logl,p0,args=(x,y))\n", "print(out)\n", "\n", "xfit=np.arange(-10,10,0.01)\n", "plt.errorbar(x,y,sig,fmt='go')\n", "plt.plot(xfit,model(xfit,*out.x),color='b')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here's an example of specifying bounds on a parameter" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "nobounds=(None,None)\n", "out=minimize(logl,p0,args=(x,y),bounds=[(0,None),nobounds,(0,None),(0,None)])\n", "\n", "print(out)\n", "\n", "xfit=np.arange(-10,10,0.01)\n", "plt.errorbar(x,y,sig,fmt='go')\n", "plt.plot(xfit,model(xfit,*out.x),color='b')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Here are some examples for timing different algorithms, with and without analytical derivatives" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# default BFGS algorithm, numerical derivatives\n", "print('BFGS, numerical derivatives')\n", "%timeit -n100 out=minimize(logl,p0,args=(x,y))\n", "out=minimize(logl,p0,args=(x,y))\n", "print(out)\n", "\n", "# BFGS algorithm, supplying derivatives: jac=True means that function returns derivatives as well as value\n", "print('\\nBFGS, analytical derivatives')\n", "%timeit -n100 out=minimize(logl,p0,args=(x,y,True),jac=True)\n", "out=minimize(logl,p0,args=(x,y,True),jac=True)\n", "print(out)\n", "\n", "# Downhill simplex (Nelder-Mead) algorithm\n", "print('\\nDownhill simplex (Nelder-Mead)')\n", "%timeit -n100 out=minimize(logl,p0,args=(x,y),method='Nelder-Mead')\n", "out=minimize(logl,p0,args=(x,y),method='Nelder-Mead')\n", "print(out)\n", "\n", "# here's treating the problem with least squares\n", "print('\\nleast squares')\n", "from scipy.optimize import curve_fit\n", "%timeit -n100 fit=curve_fit(model,x,y,p0=p0,sigma=sig)\n", "fit=curve_fit(model,x,y,p0=p0,sigma=sig)\n", "print('least squares: ',fit[0])\n", "\n", "xfit=np.arange(-10,10,0.01)\n", "plt.errorbar(x,y,sig,fmt='go')\n", "plt.plot(xfit,model(xfit,*out.x),color='b')\n", "plt.plot(xfit,model(xfit,*fit[0]),color='r')\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "How do the speeds of the different algorithms compare? Are they what you would expect, given what you know about the algorithms?\n", "
ANSWER HERE: " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.11" } }, "nbformat": 4, "nbformat_minor": 1 }