Computation of Infinite Depth Green's Function
The normal incidence Green's function for infinite depth has already been published [[Squire and Dixon 2000]], [[Dixon and Squire 2001]], the solution being expressed in terms of the auxiliary sine and cosine integrals, or equivalently in terms of the exponential integral (see [[#Normal Incidence Simplification |Appendix C.2.3]]). Similarly, the finite depth Green's function was derived by [[Evans and Porter 2003a]] and was also presented in [[Section 3.1]] of this thesis for both normal and oblique incidence. However, at present, the required derivatives <math>g^{(n)}(x)</math> of the infinite depth Green's function for oblique incidence may only be calculated exactly when <math>x=0^\pm</math>. Formulae for these quantities were published by [[Williams and Squire 2002]] and are also derived below in [[#Exact Calculation of Derivatives at the Origin |Appendix C.2.2]]. Although the aforementioned derivatives may be calculated when <math>x\neq0</math> by Fast Fourier Transform, as shown in [[Section 3.2]], this is often not the most efficient approach, especially when the Green's function is only required to be evaluated at a few points. The following two sections outline two approaches that are more suitable for such a scenario.
The first uses a partial fractions decomposition to reduce them (the required derivatives of <math>g</math>, or more specifically of <math>g_1</math>) into the sum of five simpler transforms which each satisfy linear inhomogeneous second order ordinary differential equations (ODEs). The inhomogeneous term is the zeroth order modified Bessel function of the second kind, <math>K_0</math>, which has a standard series representation ([[Abramowitz and Stegun 1964]], p375). Using a method that is similar to both the Method of Undetermined Coefficients ([[Boyce and DiPrima 1997]], p164 ff.) and the Method of Frobenius ([[Boyce and DiPrima 1997]], p262 ff.), we search for solutions that have similar expansions to that of <math>K_0</math>, and obtain simple recurrence relations for the series coefficients. The series obtained are convergent for all values of <math>x</math>, with fewer terms needed in the expansion when <math>x</math> is smaller.
A second approach, which uses a contour integral to transform the inversion integral into one that may be calculated more easily by numerical quadrature, is more efficient for larger values of <math>x</math>. It is described below in [[#Contour Integration |Section C.2]], but first we will introduce the series expansion method.
where <math>K_0</math> is a zeroth order modified Bessel function of the second kind ([[Abramowitz and Stegun 1964]]). For <math>|\mbox{Arg}(w)|<\pi</math>, it has the expansion
<center><math>\begin{matrix}
K_0(w)=-\left(\log\frac{w}2+\gamma_e\right)I_0(w)
+\sum_{m=1}^\infty\frac{\chi_m}{(m!)^2}\left(\frac{w}2\right)^{2m},
\end{matrix}</math></center>
where <math>\gamma_e</math> is the Euler constant, <math>\chi_m=\sum_{r=1}^m1/r</math>, and <math>I_0</math> is the modified Bessel function of the first kind (also given by [[Abramowitz and Stegun 1964]]):
<center><math>\begin{matrix}
I_0(w)=\sum_{m=0}^\infty\frac1{(m!)^2}\left(\frac{w}2\right)^{2m}.
\end{matrix}</math></center>
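These standard expansions are easy to check numerically against a library implementation. The sketch below is illustrative only: the truncation at 30 terms is arbitrary, and <code>scipy.special.k0</code> is used as the reference.

```python
import numpy as np
from scipy.special import k0  # reference K_0 for real argument

GAMMA_E = 0.5772156649015329  # Euler's constant gamma_e

def k0_series(w, terms=30):
    """Truncated standard series for K_0(w), real w > 0."""
    m = np.arange(1, terms + 1)
    fact = np.cumprod(m.astype(float))         # m!
    chi = np.cumsum(1.0 / m)                   # chi_m = sum_{r=1}^m 1/r
    powers = (w / 2.0) ** (2 * m) / fact ** 2  # (w/2)^{2m} / (m!)^2
    i0 = 1.0 + powers.sum()                    # truncated I_0(w) series
    return -(np.log(w / 2.0) + GAMMA_E) * i0 + np.dot(chi, powers)

# the truncated series agrees with scipy's K_0 to near machine precision
assert abs(k0_series(0.5) - k0(0.5)) < 1e-12
```

Since the series is in powers of <math>(w/2)^2</math>, far fewer than 30 terms are needed for small arguments, which is the regime in which the series method of this section is intended to be used.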
==Contour Integration==
An alternative means of calculating <math>g</math> is by using contour integration to transform the inverse Fourier transform into an integral which is more easily evaluated numerically. This technique is more efficient for larger values of <math>x</math>, but it turns out that the resulting integral may actually be evaluated analytically when <math>x=0</math>. [[#Derivation and Numerical Approximation |Section C.2.1]] explains the contour used and outlines the numerical method of approximation of the resulting integral for general <math>x</math> values, while [[#Exact Calculation of Derivatives at the Origin |Section C.2.2]] derives the formula for that integral when <math>x=0</math>.
Some additional simplifications can be obtained for normal incidence. We show that the integral for general <math>x</math> reduces to an expression involving the well-known and easily evaluated exponential integral function. This also acts as a further check on the oblique incidence results because the normal incidence results can be compared to those obtained by [[Dixon and Squire 2001a]]. This is done in [[#Normal Incidence Simplification |Section C.2.3]].
===Derivation and Numerical Approximation===
<center><math>
g_{1n}(x)=\frac12\int_{-\infty}^\infty\frac{\mbox{e}^{\mbox{i} kx}\mbox{d} k}{\kappa(\alpha_n^2-k^2)} \qquad (C.13)
</math></center>
<center><math>
g_{1n}(x)=g_{3n}(x)-\frac{\epsilon_n\pi \mbox{i}}{2\alpha_n\gamma_n}\mbox{e}^{\mbox{i}\alpha_nx}, \qquad (C.14)
</math></center>
<center><math>
g_{3n}(x)=\int_{\mbox{i} l}^{\mbox{i}\infty}\frac{\mbox{e}^{\mbox{i} kx}\mbox{d} k}{\kappa(\alpha_n^2-k^2)}
=\int_{l}^{\infty}\frac{\mbox{e}^{-\kappa x}\mbox{d}\kappa}{k(\kappa^2+\alpha_n^2)}. \qquad (C.15)
</math></center>
The <math>\epsilon_n</math> results from defining the square root in <math>\kappa</math> as having a branch cut along the negative real axis. Consequently, <math>\sqrt{\alpha_n^2+l^2}=\epsilon_n\gamma_n</math> in the residue of <math>g_{1n}</math> at <math>\alpha_n</math>. In addition, <math>\kappa</math> in the left hand integral in (C.15) is taken from the positive imaginary axis.
Now let us define the analogous sum to (C.1):
<center><math>\begin{matrix}
g_3(x)=-\frac1\pi\sum_{n=-2}^2A_n\alpha_n\gamma_ng_{3n}(x)
=-\frac1\pi\int_l^\infty\frac{k\mbox{e}^{-\kappa x}\mbox{d}\kappa}{\Lambda^2(k)k^2+1}. &\qquad (C.16)
\end{matrix}</math></center>
From (C.14), we can now write the relationship between <math>g_1</math> and <math>g_3</math>:
<center><math>\begin{matrix}
g_1(x)=g_3(x)+\frac{\mbox{i}}2\sum_{n=-2}^2A_n\epsilon_n\mbox{e}^{\mbox{i}\alpha_nx}. &\qquad (C.17)
\end{matrix}</math></center>
<center><math>\begin{matrix}
g_2(x)=\mbox{i}\sum_{n=-2}^0A_n\mbox{e}^{\mbox{i}\alpha_nx}. &\qquad (C.18)
\end{matrix}</math></center>
This confirms the results (3.20) and (3.21), which were obtained by taking the limit in (3.7) as <math>H\to\infty</math> directly.
<math>g_3(x)</math> as given by (C.16) is an integral that is more amenable to calculation by numerical quadrature than the inverse Fourier transform <math>g_1</math>. The integrand will never have any singularities on the real line and, because it is also <math>O(\mbox{e}^{-kx})</math> as <math>k\to\infty</math>, <math>g_3</math> will be infinitely differentiable for <math>x>0</math>. However, since the non-exponential factor is only <math>O(k^{-9})</math> as <math>k\to\infty</math>, its eighth derivative will be singular at the origin, as was mentioned in [[#Ordinary Differential Equations |Section 3.1.1]].
When <math>\theta=0</math>, <math>g_3</math> may be calculated analytically in terms of the exponential integral <math>E_1</math> as described below in [[#Normal Incidence Simplification |Section C.2.3]]. First, however, we will show how any derivative of <math>g_3</math> may be efficiently calculated for oblique incidence.
Differentiating (C.16) <math>n</math> times gives
<center><math>\begin{matrix}
&g_3^{(n)}(x)=-\frac{(-1)^{n}}\pi\int_l^\infty\frac{k\kappa^n\mbox{e}^{-\kappa x}\mbox{d}\kappa}{\Lambda^2(k)k^2+1} &\qquad (C.19a)\\
&=-\frac{(-1)^{n}}\pi\times x^{8-n}\mbox{e}^{-lx}\int_0^\infty
\frac{t^{1/2}(t+lx)^n(t+2lx)^{1/2}\mbox{e}^{-t}\mbox{d} t}{t(t+2lx)\big(t^2(t+2lx)^2+\varpi x^4\big)^2+x^{10}}, &\qquad (C.19b)
\end{matrix}</math></center>
where <math>t=(\kappa-l)x</math>. This integral may now be found using generalised Gauss-Laguerre integration, using the weight function <math>t^{1/2}\mbox{e}^{-t}</math>. The remainder of the integrand will now be relatively smooth, with the exception of the <math>\sqrt{t+2lx}</math> term, which has a branch cut on the line <math>(-\infty,-2lx]</math>. However, we do not need to integrate over that singularity, and the larger <math>x</math> becomes the further it moves away from the region of integration. Hence, the integral will converge faster for larger values of <math>x</math>. Also note that (C.19b) shows that <math>g_3^{(n)}</math> decays exponentially as <math>x\to\infty</math>.
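The generalised Gauss-Laguerre rule referred to above is available in standard scientific libraries. The sketch below illustrates the mechanics with the correct weight function <math>t^{1/2}\mbox{e}^{-t}</math> but a simplified smooth factor standing in for the full integrand of (C.19b); the value <math>lx=1</math> and the 60-point rule are arbitrary illustrative choices, not values taken from the thesis.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import roots_genlaguerre

# Nodes/weights for the weight function t^{1/2} e^{-t} on [0, inf):
t, w = roots_genlaguerre(60, 0.5)

# A smooth stand-in for the non-weight part of the integrand; lx = 1.0
# is an arbitrary illustrative value.
lx = 1.0
def smooth_part(t):
    return np.sqrt(t + 2 * lx) / ((t + 2 * lx) ** 2 + 1.0)

# Quadrature approximation: sum_i w_i f(t_i) ~ int_0^inf t^{1/2} e^{-t} f(t) dt
approx = np.dot(w, smooth_part(t))

# Reference value from adaptive quadrature of the full integrand:
ref, _ = quad(lambda s: np.sqrt(s) * np.exp(-s) * smooth_part(s), 0, np.inf)
assert abs(approx - ref) < 1e-3
```

As the text notes, the nearest singularity of the smooth factor sits a distance of order <math>lx</math> to the left of the integration range, so the rule converges faster as <math>x</math> grows.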
===Exact Calculation of Derivatives at the Origin===
Equation (C.15) when <math>x=0</math> is
<center><math>\begin{matrix}
q_n'=q_n+\frac{\mbox{i}\epsilon_n}{2\alpha_n\gamma_n}=g_{3n}(0^+)=I(\alpha_n)+I(-\alpha_n), &\qquad (C.20)
\end{matrix}</math></center>
<center><math>\begin{matrix}
I(\alpha)&=\frac1{2\mbox{i}\alpha}\int_l^\infty\frac{\mbox{d}\kappa}{k(\kappa-\mbox{i}\alpha)}
=-\frac1{\alpha}\int_0^1\frac{\mbox{d} w}{(\alpha-\mbox{i} l)w^2-(\alpha+\mbox{i} l)}, &\qquad (C.21)
\end{matrix}</math></center>
and we have made the transformation <math>\kappa=l(1+w^2)/(1-w^2)</math>. If we also define
<math>\beta</math> and <math>\gamma</math> by the relations <math>\beta^2=(\alpha+\mbox{i} l)/(\alpha-\mbox{i} l)</math> and <math>\beta=\gamma/(\alpha-\mbox{i} l)=(\alpha+\mbox{i} l)/\gamma</math> (so that <math>\gamma^2=\alpha^2+l^2</math>), (C.21) becomes
<center><math>\begin{matrix}
&I(\alpha)=-\frac1{\alpha(\alpha-\mbox{i} l)}\int_0^1\frac{\mbox{d} w}{w^2-\beta^2}
=-\frac1{2\alpha\gamma}\left(\int_0^1\frac{\mbox{d} w}{w-\beta}
-\int_0^1\frac{\mbox{d} w}{w+\beta}\right)\\
&=\frac1{2\alpha\gamma}\big(\log(1+\beta)-\log(1-\beta)-\log\beta+\log(-\beta)\big). &\qquad (C.22)
\end{matrix}</math></center>
To simplify (C.22), we choose <math>\beta</math> to be the square root from the upper half plane (which also implies that <math>1+\beta</math> is in the upper half plane, and that <math>\log(-\beta)=\log\beta-\mbox{i}\pi</math>). <math>1-\beta</math> will then be in the lower half plane, so the quotient <math>(1+\beta)/(1-\beta)</math> corresponds to an anti-clockwise rotation of <math>1+\beta</math>. Potentially this could rotate <math>1+\beta</math> across the branch cut in the logarithm on the negative real axis, but it can be shown explicitly that with our choice of <math>\beta</math> the aforementioned quotient remains in the upper half plane. Thus, the normal logarithm rules for quotients hold, and so (C.22) may be written
<center><math>\begin{matrix}
&I(\alpha)=\frac1{2\alpha\gamma}\big(\log\frac{1+\beta}{1-\beta}-\pi \mbox{i}\big)
=\frac1{2\alpha\gamma}\big(\log\frac{\gamma+\alpha+\mbox{i} l}{\gamma-\alpha-\mbox{i} l}-\pi \mbox{i}\big). &\qquad (C.23)
\end{matrix}</math></center>
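The claim that the quotient <math>(1+\beta)/(1-\beta)</math> remains in the upper half plane, so that the principal logarithm of the quotient splits into a difference of logarithms, can also be confirmed numerically. The sample values of <math>\beta</math> below are arbitrary points in the upper half plane, not roots of any particular dispersion relation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary beta values with Im(beta) > 0:
betas = rng.standard_normal(1000) + 1j * rng.uniform(1e-3, 5.0, 1000)

q = (1 + betas) / (1 - betas)
# Im(q) = 2 Im(beta)/|1-beta|^2, so the quotient stays in the upper half plane:
assert np.all(q.imag > 0)

# hence the principal-branch quotient rule for logarithms holds exactly:
assert np.allclose(np.log(q), np.log(1 + betas) - np.log(1 - betas))
```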
Note that the above expression is even in <math>\gamma</math>, so it doesn't matter after all which root we choose for it. Thus,
<center><math>\begin{matrix}
q_n'=\frac1{2\alpha_n\gamma_n}\big(\log\frac{\gamma_n+\alpha_n+\mbox{i} l}{\gamma_n-\alpha_n-\mbox{i} l}
-\log\frac{\gamma_n-\alpha_n+\mbox{i} l}{\gamma_n+\alpha_n-\mbox{i} l}\big)
=\frac1{2\alpha_n\gamma_n}\log\Theta_n, &\qquad (C.24)
\end{matrix}</math></center>
where <math>\Theta_n=(\gamma_n+\alpha_n)/(\gamma_n-\alpha_n)</math>. The latter step is justified since if <math>\mathfrak{Im}[\alpha_n]>0</math> the argument of the first logarithm will be in the first quadrant, while the argument of the second logarithm will be in the second quadrant. This means that we can now write
<center><math>
g_3^{(2n)}(0^+)=-\frac1\pi\sum_{m=-2}^2A_m(\mbox{i}\alpha_m)^{2n}\alpha_m\gamma_mq'_m
=-\frac1{2\pi}\sum_{m=-2}^2A_m(\mbox{i}\alpha_m)^{2n}\log{\Theta_m}. \qquad (C.25)
</math></center>
Now, for small <math>l</math>, <math>\alpha_n\sim\epsilon_n\gamma_n\big(1-l^2/2\gamma_n^2+O(l^4)\big)</math>, where <math>\epsilon_n=\mbox{sgn}(\mathfrak{Im}[\gamma_n])</math>, so at first glance it seems like the above expression will become singular as <math>l\to0</math>. However, this is not the case. If we define <math>A'_n=A_n\alpha_n/\gamma_n=-\gamma_n^2/(4\gamma_n^5+1)</math>, then <math>A_n\sim\epsilon_nA_n'+O(l^2)</math>, and it is easily shown that the <math>A_n'</math> satisfy the rules
<center><math>
\sum_{m=-2}^{2}A_m'\gamma_m^{2n}=0 \quad \mbox{for} \ n=0,\ldots,3, \qquad (C.26)
</math></center>
which are similar to those presented by [[Squire and Dixon 2000]]. Thus as <math>l\to0</math>,
<center><math>\begin{matrix}
g_3^{(2n)}(0^+)&\sim-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\big(1+O(l^2)\big)\times
\left(\log\big(\gamma_m^2\big(1+O(l^2)\big)\big)-\log\frac{l^2}4\right)\\
&\to-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log{\gamma^2_m}. &\qquad (C.27)
\end{matrix}</math></center>
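The rules (C.26) may also be verified numerically. The check below assumes, purely for illustration, that the <math>\gamma_n</math> are the five roots of a monic quintic of the form <math>\gamma^5+c\gamma-1=0</math>; this form is inferred from the expression <math>A_n'=-\gamma_n^2/(4\gamma_n^5+1)</math> (at any root of such a quintic, <math>\gamma f'(\gamma)=5\gamma^5+c\gamma=4\gamma^5+1</math>), but it is an assumption made here, not a statement of the actual dispersion relation.

```python
import numpy as np

c = 0.7  # arbitrary illustrative coefficient in the assumed quintic
# Roots of g^5 + c*g - 1 = 0 (coefficients ordered highest power first):
gammas = np.roots([1.0, 0.0, 0.0, 0.0, c, -1.0])
A = -gammas ** 2 / (4 * gammas ** 5 + 1)  # A'_n = -g_n^2 / (g_n f'(g_n))

# sum_m A'_m gamma_m^{2n} vanishes for n = 0, ..., 3 ...
for n in range(4):
    assert abs(np.sum(A * gammas ** (2 * n))) < 1e-8
# ... but not for n = 4: there the sum equals -1 for a monic quintic.
assert abs(np.sum(A * gammas ** 8) + 1.0) < 1e-8
```

The sums vanish because <math>A_m'\gamma_m^{2n}=-\gamma_m^{2n+1}/f'(\gamma_m)</math> and a sum of <math>\gamma^k/f'(\gamma)</math> over all roots vanishes for <math>k\leq3</math>, while the dispersion relation reduces the exponents <math>k=5,7</math> to lower ones.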
===Normal Incidence Simplification===
An alternative expansion to (C.20) for the <math>g_{3n}</math> is
<center><math>\begin{matrix}
&2\gamma_n^2g_{3n}(x)=\int_l^\infty\left(\frac2k-\frac1{k+\mbox{i}\gamma_n}-\frac1{k-\mbox{i}\gamma_n}\right)\mbox{e}^{-\kappa x}\mbox{d}\kappa\\
&=2K_0(lx)-\int_l^\infty\left(\frac1{k+\mbox{i}\gamma_n}+\frac1{k-\mbox{i}\gamma_n}\right)\mbox{e}^{-\kappa x}\mbox{d}\kappa. &\qquad (C.28)
\end{matrix}</math></center>
Using the rules (C.26) to eliminate the Bessel function <math>K_0</math>, we can thus write the <math>2n</math>-th derivative of <math>g_3</math> as
<center><math>\begin{matrix}
&g_3^{(2n)}(x)=-\frac1\pi\sum_{m=-2}^2A_m'\gamma_m^2(\mbox{i}\alpha_m)^{2n}g_{3m}(x)\\
&=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\alpha_m)^{2n}\int_l^\infty\left(\frac1{k+\mbox{i}\gamma_m}+\frac1{k-\mbox{i}\gamma_m}\right)\mbox{e}^{-\kappa x}\mbox{d}\kappa. &\qquad (C.29)
\end{matrix}</math></center>
Now, if we let <math>\theta\to0</math>, <math>l\to0</math>, <math>\alpha_n^2\to\gamma_n^2</math>, and <math>\kappa\to k</math> on the real line, (C.29) becomes
<center><math>\begin{matrix}
g_3^{(2n)}(x)&=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\left(
\mbox{e}^{\mbox{i}\gamma_mx}E_1(\mbox{i}\gamma_mx)+\mbox{e}^{-\mbox{i}\gamma_mx}E_1(-\mbox{i}\gamma_mx)\right), &\qquad (C.30)
\end{matrix}</math></center>
where <math>E_1</math> is the well-known exponential integral function ([[Abramowitz and Stegun 1964]], p228 ff.), defined as follows:
<center><math>
E_1(w)=\int_w^\infty\frac{\mbox{e}^{-t}}t\mbox{d} t
=-\gamma_e-\log w-\sum_{n=1}^\infty\frac{(-w)^n}{n\,n!}
\quad \mbox{for} \ |\mbox{Arg}(w)|<\pi. \qquad (C.31)
</math></center>
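The truncated form of (C.31) can be checked against a library implementation of <math>E_1</math>; a sketch in which the truncation at 40 terms is arbitrary, with <code>scipy.special.exp1</code> (which accepts complex arguments) as the reference:

```python
import numpy as np
from scipy.special import exp1  # reference E_1

GAMMA_E = 0.5772156649015329  # Euler's constant gamma_e

def e1_series(w, terms=40):
    """Truncated series (C.31) for E_1(w), valid for |Arg(w)| < pi."""
    n = np.arange(1, terms + 1)
    fact = np.cumprod(n.astype(float))  # n!
    return -GAMMA_E - np.log(w) - np.sum((-w) ** n / (n * fact))

# agreement for real and complex arguments of moderate size:
for w in [0.3, 1.0, 0.5 + 0.5j]:
    assert abs(e1_series(w) - exp1(w)) < 1e-10
```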
Note that, since no imaginary solutions exist for the infinite depth dispersion relation, <math>\big|\mbox{Arg}[\pm\mbox{i}\gamma_nx]\big|<\pi</math>, so the above formulae are valid. Also note that the <math>g_3^{(2n)}</math> will be everywhere bounded, despite the logarithmic singularities in <math>E_1</math>. This also follows from the rules (C.26), allowing us to deduce that taking the limit in (C.30) as <math>x\to0^+</math> gives
<center><math>\begin{matrix}
&g_3^{(2n)}(0^+)=-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}
\big(\log(\mbox{i}\gamma_m)+\log(-\mbox{i}\gamma_m)\big)\\
&=-\frac1\pi\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log\gamma_m
-2\mbox{i}\big(A'_{-2}(\mbox{i}\gamma_1)^{2n}+A'_2(\mbox{i}\gamma_2)^{2n}\big) &\qquad (C.32a)\\
&=-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log\gamma_m^2, &\qquad (C.32b)
\end{matrix}</math></center>
which agrees with (C.27). Recalling that <math>A_n\to\mbox{sgn}(\mathfrak{Im}[\gamma_n])A_n'</math> as <math>l\to0</math>, adding <math>g_2^{(2n)}(0^+)</math> to (C.32a) and using (C.26) to simplify the product gives the following result for <math>g^{(2n)}(0^+)</math> itself:
<center><math>\begin{matrix}
g^{(2n)}(0^+)&=-\frac1\pi\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log\gamma_m
-2\mbox{i}\big(A'_{-2}(\mbox{i}\gamma_1)^{2n}+A'_2(\mbox{i}\gamma_2)^{2n}\big). &\qquad (C.33)
\end{matrix}</math></center>
The above result is unstable as <math>\gamma_2</math> becomes negative and so is only reliable for <math>\varpi>\varpi_\infty</math>. For a result that holds for all <math>\varpi</math>, use (C.32b) and the ordinary definition of <math>g_2</math>.
Odd derivatives of <math>g_3</math> may also be obtained by differentiating (C.30) directly, again making use of (C.31) and also (C.26) to eliminate any <math>1/x</math> type singularities produced. Thus,
<center><math>\begin{matrix}
g_3^{(2n+1)}(x)&=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n+1}\left(
\mbox{e}^{\mbox{i}\gamma_mx}E_1(\mbox{i}\gamma_mx)-\mbox{e}^{-\mbox{i}\gamma_mx}E_1(-\mbox{i}\gamma_mx)\right), &\qquad (C.34)
\end{matrix}</math></center>
<center><math>\begin{matrix}
g_3^{(n)}(x)&=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{n}\left(
\mbox{e}^{\mbox{i}\gamma_mx}E_1(\mbox{i}\gamma_mx)+\mbox{e}^{\mbox{i}(n\pi-\gamma_mx)}E_1(-\mbox{i}\gamma_mx)\right). &\qquad (C.35)
\end{matrix}</math></center>
Recall from (3.13) that we already know what values <math>g^{(2n+1)}</math> will take at the origin, so there is no need to take the limit in (C.34) as <math>x\to0^+</math>. In addition, we can also check the above results numerically by letting <math>l\to0</math> and <math>\kappa\to k</math> in (C.19a) and using generalised Gauss-Laguerre integration (with weight function <math>k^{n+1}\mbox{e}^{-kx}</math>).
A further confirmation of (C.35) can be obtained by checking the <math>n=0</math> result against the equivalent result of [[Dixon and Squire 2001]]. Let us denote their nondimensional length variables as <math>x',z',\xi'</math> and <math>\zeta'</math>, which equate to <math>x/a,z/a,\xi/a</math> and <math>\zeta/a</math> respectively. In their expression (6) for the Green's function in that scheme, which we shall denote <math>G_\mbox{SD}</math>, the integral when <math>x'>\xi'</math> and <math>z',\zeta'\approx0</math> is obtained by doubling the real part of equation (14b). Doing so, and then differentiating with respect to <math>\zeta'</math> followed by <math>z'</math> gives the result
<center><math>\begin{matrix}
&g(x-\xi)=\frac1{a^2}\partial^2_{z'\zeta'}G_\mbox{SD}(x'-\xi',0,0)+\mbox{i} A_0\cos\gamma_0x\\
&=\mbox{i}\sum_{m=-2}^0A'_m\epsilon_m\mbox{e}^{\mbox{i}\epsilon_m\gamma_mx}
+\frac1\pi\sum_{m=-2}^2A_m'g_\mbox{cos}(\epsilon_m'\gamma_mx), &\qquad (C.36)
\end{matrix}</math></center>
where <math>\epsilon_n'=\mbox{sgn}\big(\mathfrak{Re}[\gamma_n]\big)</math>. The cosine term needs to be added to allow for the difference between taking the Cauchy principal value of the integral (3.4) and letting the poles <math>\pm\alpha_0</math> approach the real line from the upper or lower half plane as we did, while <math>g_\mbox{cos}</math> is the auxiliary cosine function ([[Abramowitz and Stegun 1964]]) which may be written as
<center><math>
g_\mbox{cos}(w)=\int_0^\infty\frac{\cos t}{w+t}\mbox{d} t
=\frac12\big(\mbox{e}^{\mbox{i} w}E_1(\mbox{i} w)+\mbox{e}^{-\mbox{i} w}E_1(-\mbox{i} w)\big),
\quad \mathfrak{Re}[w]>0. \qquad (C.37)
</math></center>
Since the first summation in (C.36) is obviously <math>g_2(x)</math>, the second sum must be <math>g_3(x)</math>. In light of the definition of <math>g_\mbox{cos}</math>, the result presented by [[Dixon and Squire 2001]] clearly agrees with (C.35).
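As a final numerical cross-check, the identity (C.37) itself can be verified directly, evaluating the left-hand integral with a cosine-weighted quadrature rule; the test value <math>w=1</math> is arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import exp1

def g_cos(w):
    """Right-hand side of (C.37): the auxiliary cosine function via E_1."""
    val = 0.5 * (np.exp(1j * w) * exp1(1j * w)
                 + np.exp(-1j * w) * exp1(-1j * w))
    return val.real  # the two terms are conjugate for real w > 0

w = 1.0
# Left-hand side: int_0^inf cos(t)/(w + t) dt, via QUADPACK's Fourier
# (cosine-weighted) rule for semi-infinite intervals:
lhs, _ = quad(lambda t: 1.0 / (w + t), 0, np.inf, weight='cos', wvar=1.0)
assert abs(lhs - g_cos(w)) < 1e-6
```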
[[Reflections on Ice]] | [[Reflections on Ice]] |
Latest revision as of 10:34, 10 September 2009
The normal incidence Green's function for infinite depth has already been published Squire and Dixon 2000, Dixon and Squire 2001, the solution being expressed in terms of the auxiliary sine and cosine integrals, or equivalently in terms of the exponential integral (see Appendix C.2.3). Similarly, the finite depth Green's function was derived by Evans and Porter 2003a and was also presented in Section 3.1 of this thesis for both normal and oblique incidence. However, at present, the required derivatives [math]\displaystyle{ g^{(n)}(x) }[/math] of the infinite depth Green's function for oblique incidence may only be calculated exactly when [math]\displaystyle{ x=0^\pm }[/math]. Formulae for these quantities were published by Williams and Squire 2002 and are also derived below in Appendix C.2.2. Although the aforementioned derivatives may be calculated when [math]\displaystyle{ x\neq0 }[/math] by Fast Fourier Transform, as shown in Section 3.2, this is often not the most efficient approach, especially when the Green's function only needs to be evaluated at a few points. The following two sections outline two approaches that are more suitable for such a scenario.
The first uses a partial fractions decomposition to reduce them (the required derivatives of [math]\displaystyle{ g }[/math], or more specifically of [math]\displaystyle{ g_1 }[/math]) to the sum of five simpler transforms, each of which satisfies a linear inhomogeneous second order ordinary differential equation (ODE). The inhomogeneous term is the zeroth order modified Bessel function of the second kind, [math]\displaystyle{ K_0 }[/math], which has a standard series representation (Abramowitz and Stegun 1964, p. 375). Using a method that is similar to both the Method of Undetermined Coefficients (Boyce and DiPrima 1997, p. 164 ff.) and the Method of Frobenius (Boyce and DiPrima 1997, p. 262 ff.), we search for solutions that have expansions similar to that of [math]\displaystyle{ K_0 }[/math], and obtain simple recurrence relations for the series coefficients. The series obtained are convergent for all values of [math]\displaystyle{ x }[/math], with fewer terms needed in the expansion when [math]\displaystyle{ x }[/math] is smaller.
A second approach, which uses a contour integral to transform the inversion integral into one that may be calculated more easily by numerical quadrature, is more efficient for larger values of [math]\displaystyle{ x }[/math]. It is described below in Section C.2, but first we will introduce the series expansion method.
Note that both methods assume in their derivations that the poles [math]\displaystyle{ \alpha_n }[/math] have imaginary part greater than zero. However, it will be seen that the final expressions for [math]\displaystyle{ g }[/math] retain the same form in the limit as any such poles are allowed to return to the real line. Consequently, all results hold for any value of [math]\displaystyle{ \varpi }[/math].
Series Expansion
As stated in the introduction to this appendix, the series representation of [math]\displaystyle{ g_1(x) }[/math] is obtained by breaking it up into the sum of five simpler transforms which each satisfy a different ODE. Section C.1.1 presents this decomposition and derives the ODEs that they satisfy. Section C.1.2 solves these ODEs by proposing series solutions of a particular type, which are similar to the series expansion of the inhomogeneous term (a modified Bessel function of the second kind).
Ordinary Differential Equations
From (3.18), the Fourier transform of the [math]\displaystyle{ 2n }[/math]-th derivative of [math]\displaystyle{ g_1 }[/math] may be expanded by partial fractions as
where the [math]\displaystyle{ A_n }[/math] were defined in Section 3.2 as [math]\displaystyle{ A_n=-\gamma_n^3/\alpha_n(4\gamma_n^5+1) }[/math], and
Multiplying this by [math]\displaystyle{ \alpha_n^2-k^2 }[/math] gives the Fourier transform of the ODE
where [math]\displaystyle{ K_0 }[/math] is a zeroth order modified Bessel function of the second kind (Abramowitz and Stegun 1964). For [math]\displaystyle{ |\mbox{Arg}(w)|\lt \pi }[/math], it has the expansion
where [math]\displaystyle{ \gamma_e }[/math] is the Euler constant, [math]\displaystyle{ \chi_m=\sum_{r=1}^m1/r }[/math], and [math]\displaystyle{ I_0 }[/math] is the modified Bessel function of the first kind (also given by Abramowitz and Stegun 1964):
The coefficients [math]\displaystyle{ a_m }[/math] and [math]\displaystyle{ d_m }[/math] are given by [math]\displaystyle{ a_m=1/2^{2m}(m!)^2 }[/math] and, taking [math]\displaystyle{ \chi_0=0 }[/math], [math]\displaystyle{ d_m=\log2-\gamma_e+\chi_m }[/math], so the above series converges for all [math]\displaystyle{ w }[/math]. Thus we anticipate that series representations for the [math]\displaystyle{ g_{1n} }[/math] that satisfy (C.3) will also converge for all [math]\displaystyle{ x }[/math]. The [math]\displaystyle{ a_m }[/math] and [math]\displaystyle{ d_m }[/math] themselves may be calculated most efficiently by the recurrence relations [math]\displaystyle{ a_m=a_{m-1}/4m^2 }[/math] and [math]\displaystyle{ d_m=d_{m-1}+1/m }[/math], with [math]\displaystyle{ a_0=1 }[/math] and [math]\displaystyle{ d_0=\log2-\gamma_e }[/math].
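As an illustrative sketch (the function name and term count are our own, not from the thesis), the recurrences above can be used to sum the [math]\displaystyle{ K_0 }[/math] expansion directly:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant, gamma_e

def k0_series(w, nterms=30):
    """Sum K_0(w) = sum_m a_m (d_m - log w) w^{2m} for real w > 0,
    generating the coefficients by the recurrences
    a_m = a_{m-1}/(4 m^2) and d_m = d_{m-1} + 1/m,
    with a_0 = 1 and d_0 = log 2 - gamma_e."""
    a = 1.0
    d = math.log(2.0) - EULER_GAMMA
    log_w = math.log(w)
    w2 = w * w
    wpow = 1.0          # holds w^{2m}
    total = 0.0
    for m in range(1, nterms + 1):
        total += a * (d - log_w) * wpow
        wpow *= w2
        a /= 4.0 * m * m        # a_m from a_{m-1}
        d += 1.0 / m            # d_m from d_{m-1}
    return total
```

Fewer terms are needed for smaller [math]\displaystyle{ w }[/math], matching the convergence remark above; with 30 terms, `k0_series(1.0)` agrees with the standard value [math]\displaystyle{ K_0(1)\approx0.4210244 }[/math] to machine precision.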
The [math]\displaystyle{ g_{1n} }[/math] should also satisfy the initial conditions
where the [math]\displaystyle{ q_n }[/math] can be evaluated analytically as shown in Section C.2.2 below.
In addition, we know that even derivatives of [math]\displaystyle{ g_1 }[/math] are even functions, and that odd derivatives are odd functions (since their Fourier transforms are). Consequently values of [math]\displaystyle{ g_1^{(n)} }[/math] for negative [math]\displaystyle{ x }[/math] can be generated in an obvious fashion from their values at [math]\displaystyle{ -x }[/math], and so we will assume [math]\displaystyle{ x\gt 0 }[/math] from here on.
Trial Series Solution
We will solve equations (C.3) by using a variation of the Method of Frobenius to obtain individual series expansions for each of the [math]\displaystyle{ g_{1n} }[/math]. Letting [math]\displaystyle{ w=lx }[/math], [math]\displaystyle{ \alpha=\alpha_n/l }[/math], and [math]\displaystyle{ J(w)=l^2\big(g_{1n}(x)-q_n\cos\alpha_nx\big) }[/math], [math]\displaystyle{ J }[/math] should now satisfy
We can see from their transforms that each of the [math]\displaystyle{ g_{1n} }[/math] is continuous and even and has a continuous first derivative, but has a logarithmic singularity in its second derivative, and so we propose a trial solution for (C.8) of the form
which clearly satisfies (C.8b). This has first derivative
enabling odd derivatives of [math]\displaystyle{ g_1 }[/math] to also be produced once the previous even one has been calculated. Proceeding to find [math]\displaystyle{ J }[/math], however, we now substitute our trial solution into the left hand side of (C.8a), giving
Consequently, [math]\displaystyle{ J }[/math] will satisfy (C.8) if our [math]\displaystyle{ b_m }[/math] and [math]\displaystyle{ c_m }[/math] satisfy the recurrence relations
and if [math]\displaystyle{ 2c_0=-a_0 }[/math] and [math]\displaystyle{ 2b_0=a_0d_0-3c_0 }[/math] (i.e., if [math]\displaystyle{ b_0=(\log2+3/2-\gamma_e)/2 }[/math] and [math]\displaystyle{ c_0=-1/2 }[/math]). The latter relation (C.12b) implies that [math]\displaystyle{ |c_m/c_{m-1}|\to0 }[/math] as [math]\displaystyle{ m\to\infty }[/math], and so the series involving those coefficients in (C.9) will converge for all [math]\displaystyle{ w }[/math]. This also implies that the [math]\displaystyle{ c_m }[/math] themselves tend towards zero as [math]\displaystyle{ m\to\infty }[/math], and so we can also deduce from (C.12a) that the ratio [math]\displaystyle{ |b_m/b_{m-1}| }[/math] will do the same for large [math]\displaystyle{ m }[/math]. Consequently, (C.9) will converge in entirety for all [math]\displaystyle{ w }[/math], although it will have a branch cut along the negative real axis due to the logarithmic factor.
Contour Integration
An alternative means of calculating [math]\displaystyle{ g }[/math] is by using contour integration to transform the inverse Fourier transform into an integral which is more easily evaluated numerically. This technique is more efficient for larger values of [math]\displaystyle{ x }[/math], but it turns out that the resulting integral may actually be evaluated analytically when [math]\displaystyle{ x=0 }[/math]. Section C.2.1 explains the contour used and outlines the numerical method of approximation of the resulting integral for general [math]\displaystyle{ x }[/math] values, while Section C.2.2 derives the formula for that integral when [math]\displaystyle{ x=0 }[/math].
Some additional simplifications can be obtained for normal incidence. We show that the integral for general [math]\displaystyle{ x }[/math] reduces to an expression involving the well-known and easily evaluated exponential integral function. This also acts as a further check on the oblique incidence results, because the normal incidence results can be compared with those obtained by Dixon and Squire 2001. This is done in Section C.2.3.
Derivation and Numerical Approximation
For larger values of [math]\displaystyle{ x }[/math], when the series expansions for the [math]\displaystyle{ g_{1n} }[/math] take longer to converge, we would like to transform the integral
into one that is easier to evaluate numerically. To do this we close the contour in the upper half plane, allowing for the pole at [math]\displaystyle{ k=\alpha_n }[/math] and the branch cut along the line [math]\displaystyle{ k=\mbox{i}\times[l,\infty) }[/math]. Thus we can write
where [math]\displaystyle{ \epsilon_n=\mbox{sgn}\big(\mathfrak{Re}[\gamma_n]\big) }[/math], and
The [math]\displaystyle{ \epsilon_n }[/math] results from defining the square root in [math]\displaystyle{ \kappa }[/math] as having a branch cut along the negative real axis. Consequently, [math]\displaystyle{ \sqrt{\alpha_n^2+l^2}=\epsilon_n\gamma_n }[/math] in the residue of [math]\displaystyle{ g_{1n} }[/math] at [math]\displaystyle{ \alpha_n }[/math]. In addition, [math]\displaystyle{ \kappa }[/math] in the left hand integral in (C.15) is taken from the positive imaginary axis.
Now let us define the sum analogous to (C.1):
From (C.14), we now can write the relationship between [math]\displaystyle{ g_1 }[/math] and [math]\displaystyle{ g_3 }[/math]:
Adding [math]\displaystyle{ g_0 }[/math] to [math]\displaystyle{ g_1 }[/math] then enables us to write [math]\displaystyle{ g(x)=g_2(x)+g_3(x) }[/math], where
This confirms the results (3.20) and (3.21), which were obtained by taking the limit in (3.7) as [math]\displaystyle{ H\to\infty }[/math] directly.
[math]\displaystyle{ g_3(x) }[/math] as given by (C.16) is an integral that is more amenable to calculation by numerical quadrature than the inverse Fourier transform [math]\displaystyle{ g_1 }[/math]. The integrand will never have any singularities on the real line and, because it is also [math]\displaystyle{ O(\mbox{e}^{-kx}) }[/math] as [math]\displaystyle{ k\to\infty }[/math], [math]\displaystyle{ g_3 }[/math] will be infinitely differentiable for [math]\displaystyle{ x\gt 0 }[/math]. However, since the non-exponential factor is only [math]\displaystyle{ O(k^{-9}) }[/math] as [math]\displaystyle{ k\to\infty }[/math], its eighth derivative will be singular at the origin, as was mentioned in Section 3.1.1.
When [math]\displaystyle{ \theta=0 }[/math], [math]\displaystyle{ g_3 }[/math] may be calculated analytically in terms of the exponential integral [math]\displaystyle{ E_1 }[/math] as described below in Section C.2.3. First, however, we will show how any derivative of [math]\displaystyle{ g_3 }[/math] may be efficiently calculated for oblique incidence.
Differentiating (C.16) [math]\displaystyle{ n }[/math] times gives
where [math]\displaystyle{ t=(\kappa-l)x }[/math]. This integral may now be found using generalised Gauss-Laguerre integration with the weight function [math]\displaystyle{ t^{1/2}\mbox{e}^{-t} }[/math]. The remainder of the integrand will now be relatively smooth, with the exception of the [math]\displaystyle{ \sqrt{t+2lx} }[/math] term, which has a branch cut on the line [math]\displaystyle{ (-\infty,-2lx] }[/math]. However, we do not need to integrate over that singularity, and the larger [math]\displaystyle{ x }[/math] becomes the further the branch point moves from the region of integration. Hence, the integral will converge faster for larger values of [math]\displaystyle{ x }[/math]. Also note that (C.19b) shows that [math]\displaystyle{ g_3^{(n)} }[/math] decays exponentially as [math]\displaystyle{ x\to\infty }[/math].
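As a sketch of how such a rule might be constructed (assuming NumPy is available; the Golub-Welsch eigenvalue construction used here is a standard generic way to obtain nodes and weights, and all names are illustrative rather than from the thesis):

```python
import math
import numpy as np

def gen_gauss_laguerre(n, alpha=0.5):
    """Nodes t_i and weights w_i so that sum_i w_i f(t_i) approximates
    int_0^inf t^alpha e^{-t} f(t) dt (exact for polynomials f of
    degree <= 2n - 1).  Uses the Golub-Welsch method: eigen-decompose
    the Jacobi matrix of the monic generalised Laguerre recurrence,
    whose coefficients are a_k = 2k + alpha + 1 and b_k = k(k + alpha)."""
    k = np.arange(n, dtype=float)
    diag = 2.0 * k + alpha + 1.0
    off = np.sqrt(k[1:] * (k[1:] + alpha))
    jacobi = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(jacobi)
    # mu_0 = int_0^inf t^alpha e^{-t} dt = Gamma(alpha + 1)
    weights = math.gamma(alpha + 1.0) * vecs[0, :] ** 2
    return nodes, weights

# With alpha = 1/2 this gives the t^{1/2} e^{-t} rule used above.
nodes, weights = gen_gauss_laguerre(8)
```

The smooth non-weight part of the integrand is then simply evaluated at the nodes and dotted with the weights.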
Exact Calculation of Derivatives at the Origin
Equation (C.15) when [math]\displaystyle{ x=0 }[/math] is
where
and we have made the transformation [math]\displaystyle{ \kappa=l(1+w^2)/(1-w^2) }[/math]. If we also define [math]\displaystyle{ \beta }[/math] and [math]\displaystyle{ \gamma }[/math] by the relations [math]\displaystyle{ \beta^2=(\alpha+\mbox{i} l)/(\alpha-\mbox{i} l) }[/math] and [math]\displaystyle{ \beta=\gamma/(\alpha-\mbox{i} l)=(\alpha+\mbox{i} l)/\gamma }[/math] (so that [math]\displaystyle{ \gamma^2=\alpha^2+l^2 }[/math]), (C.21) becomes
To simplify (C.22), we choose [math]\displaystyle{ \beta }[/math] to be the square root from the upper half plane (which also implies that [math]\displaystyle{ 1+\beta }[/math] is in the upper half plane, and that [math]\displaystyle{ \log(-\beta)=\log\beta-\mbox{i}\pi }[/math]). [math]\displaystyle{ 1-\beta }[/math] will then be in the lower half-plane so the quotient [math]\displaystyle{ (1+\beta)/(1-\beta) }[/math] corresponds to an anti-clockwise rotation of [math]\displaystyle{ 1+\beta }[/math]. Potentially this could rotate [math]\displaystyle{ 1+\beta }[/math] across the branch cut in the logarithm on the negative real axis, but it can be shown explicitly that with our choice of [math]\displaystyle{ \beta }[/math] the aforementioned quotient remains in the upper half plane. Thus, the normal logarithm rules for quotients hold, and so (C.22) may be written
Note that the above expression is even in [math]\displaystyle{ \gamma }[/math], so it doesn't matter after all which root we choose for it. Thus,
where [math]\displaystyle{ \Theta_n=(\gamma_n+\alpha_n)/(\gamma_n-\alpha_n) }[/math]. The latter step is justified since, if [math]\displaystyle{ \mathfrak{Im}[\alpha_n]\gt 0 }[/math], the argument of the first logarithm will be in the first quadrant, while the argument of the second logarithm will be in the second quadrant. This means that we can now write
Now, for small [math]\displaystyle{ l }[/math], [math]\displaystyle{ \alpha_n\sim\epsilon_n\gamma_n\big(1-l^2/2\gamma_n^2+O(l^4)\big) }[/math], where [math]\displaystyle{ \epsilon_n=\mbox{sgn}(\mathfrak{Im}[\gamma_n]) }[/math], so at first glance it seems that the above expression will become singular as [math]\displaystyle{ l \to 0 }[/math]. However, this is not the case. If we define [math]\displaystyle{ A'_n=A_n\alpha_n/\gamma_n=-\gamma_n^2/(4\gamma_n^5+1) }[/math], then [math]\displaystyle{ A_n\sim\epsilon_nA_n'+O(l^2) }[/math], and it is easily shown that for [math]\displaystyle{ n=0,1,2,3 }[/math] they satisfy the rules
which are similar to those presented by Squire and Dixon 2000. Thus as [math]\displaystyle{ l \to 0 }[/math],
Normal Incidence Simplification
An alternative expansion to (C.20) for the [math]\displaystyle{ g_{3n} }[/math] is
Using the rules (C.26) to eliminate the Bessel function [math]\displaystyle{ K_0 }[/math], we can thus write the [math]\displaystyle{ 2n }[/math]-th derivative of [math]\displaystyle{ g_3 }[/math] as
Now, if we let [math]\displaystyle{ \theta\to0 }[/math], [math]\displaystyle{ l\to0 }[/math], [math]\displaystyle{ \alpha_n^2\to\gamma_n^2 }[/math], and [math]\displaystyle{ \kappa\to k }[/math] on the real line, (C.29) becomes
where [math]\displaystyle{ E_1 }[/math] is the well-known exponential integral function (Abramowitz and Stegun 1964, p228ff.) defined as follows
Note that, since no imaginary solutions exist for the infinite depth dispersion relation, [math]\displaystyle{ \big| \mbox{Arg} [\pm \mbox{i}\gamma_nx]\big|\lt \pi }[/math], so the above formulae are valid. Also note that the [math]\displaystyle{ g_3^{(2n)} }[/math] will be everywhere bounded, despite the logarithmic singularities in [math]\displaystyle{ E_1 }[/math]. This also follows from the rules (C.26), which allow us to deduce that taking the limit in (C.30) as [math]\displaystyle{ x \to 0^+ }[/math] gives
which agrees with (C.27). Recalling that [math]\displaystyle{ A_n\to\mbox{sgn}(\mathfrak{Im}[\gamma_n])A_n' }[/math] as [math]\displaystyle{ l \to 0 }[/math], adding [math]\displaystyle{ g_2^{(2n)}(0^+) }[/math] to (C.32a) and using (C.26) to simplify the product then gives the following result for [math]\displaystyle{ g^{(2n)}(0^+) }[/math] itself:
The above result is unstable as [math]\displaystyle{ \gamma_2 }[/math] becomes negative and so is only reliable for [math]\displaystyle{ \varpi\gt \varpi_\infty }[/math]. For a result that holds for all [math]\displaystyle{ \varpi }[/math], use (C.32b) and the ordinary definition of [math]\displaystyle{ g_2 }[/math].
Odd derivatives of [math]\displaystyle{ g_3 }[/math] may also be obtained by differentiating (C.30) directly, again making use of (C.31) and also (C.26) to eliminate any [math]\displaystyle{ 1/x }[/math] type singularities produced. Thus,
and so we can write the general [math]\displaystyle{ n^\mbox{th} }[/math] ([math]\displaystyle{ 0\leq n\leq7 }[/math]) derivative of [math]\displaystyle{ g_3 }[/math] as
Recall from (3.13) that we already know what values [math]\displaystyle{ g^{(2n+1)} }[/math] will take at the origin, so there is no need to take the limit in (C.34) as [math]\displaystyle{ x \to 0^+ }[/math]. In addition, we can also check the above results numerically by letting [math]\displaystyle{ l\to0 }[/math] and [math]\displaystyle{ \kappa\to k }[/math] in (C.19a) and using generalised Gauss-Laguerre integration (with weight function [math]\displaystyle{ k^{n+1}\mbox{e}^{-kx} }[/math]).
A further confirmation of (C.35) can be obtained by checking the [math]\displaystyle{ n=0 }[/math] result against the equivalent result of Dixon and Squire 2001. Let us denote their nondimensional length variables as [math]\displaystyle{ x',z',\xi' }[/math] and [math]\displaystyle{ \zeta' }[/math], which equate to [math]\displaystyle{ x/a,z/a,\xi/a }[/math] and [math]\displaystyle{ \zeta/a }[/math] respectively. In their expression (6) for the Green's function in that scheme, which we shall denote [math]\displaystyle{ G_\mbox{SD} }[/math], the integral when [math]\displaystyle{ x'\gt \xi' }[/math] and [math]\displaystyle{ z',\zeta'\approx0 }[/math] is obtained by doubling the real part of equation (14b). Doing so, and then differentiating with respect to [math]\displaystyle{ \zeta' }[/math] followed by [math]\displaystyle{ z' }[/math], gives the result

[math]\displaystyle{ \begin{matrix} g(x-\xi)&=\frac1{a^2}\partial^2_{z'\zeta'}G_\mbox{SD}(x'-\xi',0,0)+\mbox{i} A_0\cos\gamma_0x\\ &=\mbox{i}\sum_{m=-2}^0A'_m\epsilon_m\mbox{e}^{\mbox{i}\epsilon_m\gamma_mx} +\frac1\pi\sum_{m=-2}^2A_m'g_\mbox{cos}(\epsilon_m'\gamma_mx), \end{matrix} \qquad (\mbox{C.36}) }[/math]

where [math]\displaystyle{ \epsilon_n'=\mbox{sgn}\big(\mathfrak{Re}[\gamma_n]\big) }[/math]. The cosine term needs to be added to allow for the difference between taking the Cauchy principal value of the integral (3.4) and letting the poles [math]\displaystyle{ \pm\alpha_0 }[/math] approach the real line from the upper or lower half plane as we did, while [math]\displaystyle{ g_\mbox{cos} }[/math] is the auxiliary cosine function (Abramowitz and Stegun 1964), which may be written as

[math]\displaystyle{ g_\mbox{cos}(w)=\int_0^\infty\frac{\cos t}{w+t}\,\mbox{d} t=\frac12\big(\mbox{e}^{\mbox{i} w}E_1(\mbox{i} w)+\mbox{e}^{-\mbox{i} w}E_1(-\mbox{i} w)\big), \quad \mathfrak{Re}[w]\gt 0. \qquad (\mbox{C.37}) }[/math]
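A numerical sketch of this definition (the names and truncation level are our own; the power series used for [math]\displaystyle{ E_1 }[/math] is the standard one and is adequate for moderate [math]\displaystyle{ |w| }[/math] with [math]\displaystyle{ |\mbox{Arg}\,w|\lt \pi }[/math]):

```python
import cmath

EULER_GAMMA = 0.5772156649015329  # Euler's constant

def e1(z, nterms=80):
    """Exponential integral E_1(z) by its power series
    E_1(z) = -gamma_e - log z + sum_{n>=1} (-1)^{n+1} z^n / (n n!)."""
    total = -EULER_GAMMA - cmath.log(z)
    term = 1.0 + 0.0j               # holds (-z)^n / n!, starting at n = 0
    for n in range(1, nterms + 1):
        term *= -z / n
        total -= term / n
    return total

def g_cos(w):
    """Auxiliary cosine integral int_0^inf cos t / (w + t) dt, Re w > 0,
    assembled from E_1 exactly as in (C.37)."""
    return 0.5 * (cmath.exp(1j * w) * e1(1j * w)
                  + cmath.exp(-1j * w) * e1(-1j * w))
```

For real [math]\displaystyle{ w\gt 0 }[/math] the two terms are complex conjugates, so the result is real to rounding error; for example [math]\displaystyle{ g_\mbox{cos}(1)\approx0.34338 }[/math], which matches the sine and cosine integral form [math]\displaystyle{ -\mbox{Ci}(1)\cos 1+(\pi/2-\mbox{Si}(1))\sin 1 }[/math].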
Since the first summation in (C.36) is obviously [math]\displaystyle{ g_2(x) }[/math], the second sum must be [math]\displaystyle{ g_3(x) }[/math]. In light of the definition of [math]\displaystyle{ g_\mbox{cos} }[/math], the result presented by Dixon and Squire 2001 clearly agrees with (C.35).