Computation of Infinite Depth Green's Function

From WikiWaves


The normal incidence Green's function for infinite depth has already been published by Squire and Dixon 2000 and Dixon and Squire 2001, the solution being expressed in terms of the auxiliary sine and cosine integrals, or equivalently in terms of the exponential integral (see Appendix C.2.3). Similarly, the finite depth Green's function was derived by Evans and Porter 2003a and was also presented in Section 3.1 of this thesis for both normal and oblique incidence. However, at present, the required derivatives [math]\displaystyle{ g^{(n)}(x) }[/math] of the infinite depth Green's function for oblique incidence may only be calculated exactly when [math]\displaystyle{ x=0^\pm }[/math]. Formulae for these quantities were published by Williams and Squire 2002 and are also derived below in Appendix C.2.2. Although the aforementioned derivatives may be calculated when [math]\displaystyle{ x\neq0 }[/math] by Fast Fourier Transform, as shown in Section 3.2, this is often not the most efficient approach, especially when the Green's function is only required to be evaluated at a few points. The following two sections outline two approaches that are more suitable for such a scenario.

The first uses a partial fractions decomposition to reduce them (the required derivatives of [math]\displaystyle{ g }[/math], or more specifically of [math]\displaystyle{ g_1 }[/math]) to the sum of five simpler transforms, each of which satisfies a linear inhomogeneous second order ordinary differential equation (ODE). The inhomogeneous term is the zeroth order modified Bessel function of the second kind, [math]\displaystyle{ K_0 }[/math], which has a standard series representation (Abramowitz and Stegun 1964, p375). Using a method that is similar to both the Method of Undetermined Coefficients (Boyce and DiPrima 1997, p164 ff.) and the Method of Frobenius (Boyce and DiPrima 1997, p262 ff.), we search for solutions that have similar expansions to that of [math]\displaystyle{ K_0 }[/math], and obtain simple recurrence relations for the series coefficients. The series obtained are convergent for all values of [math]\displaystyle{ x }[/math], with fewer terms needed in the expansion when [math]\displaystyle{ x }[/math] is smaller.

A second approach, which uses a contour integral to transform the inversion integral into one that may be calculated more easily by numerical quadrature, is more efficient for larger values of [math]\displaystyle{ x }[/math]. It is described below in Section C.2, but first we will introduce the series expansion method.

Note that both methods assume in their derivations that the poles [math]\displaystyle{ \alpha_n }[/math] have imaginary part greater than zero. However, it will be seen that the final expressions for [math]\displaystyle{ g }[/math] retain the same form in the limit as any such poles are allowed to return to the real line. Consequently, all results hold for any value of [math]\displaystyle{ \varpi }[/math].

Series Expansion

As stated in the introduction to this appendix, the series representation of [math]\displaystyle{ g_1(x) }[/math] is obtained by breaking it up into the sum of five simpler transforms which each satisfy a different ODE. Section C.1.1 presents this decomposition and derives the ODEs that they satisfy. Section C.1.2 solves these ODEs by proposing series solutions of a particular type, which are similar to the series expansion of the inhomogeneous term (a modified Bessel function of the second kind).

Ordinary Differential Equations

From (3.18), the Fourier transform of the [math]\displaystyle{ 2n }[/math]-th derivative of [math]\displaystyle{ g_1 }[/math] may be expanded by partial fractions as

[math]\displaystyle{ \begin{matrix} (-ik)^{(2n)}\hat g_1(k)=-\frac1{\pi \mbox{i}}\sum^2_{m=-2}A_m\gamma_m(\mbox{i}\alpha_m)^{2n+1}\hat g_{1m}(k), &\qquad (C.1) \end{matrix} }[/math]

where the [math]\displaystyle{ A_n }[/math] were defined in Section 3.2 as [math]\displaystyle{ A_n=-\gamma_n^3/\big(\alpha_n(4\gamma_n^5+1)\big) }[/math], and

[math]\displaystyle{ \begin{matrix} \hat g_{1n}(k)=\frac{\pi}{\kappa(\alpha_n^2-k^2)}. &\qquad (C.2) \end{matrix} }[/math]

Multiplying this by [math]\displaystyle{ \alpha_n^2-k^2 }[/math] gives the Fourier transform of the ODE

[math]\displaystyle{ \begin{matrix} \big(\partial_x^2+\alpha_n^2\big)g_{1n}(x)=K_0(l|x|), &\qquad (C.3) \end{matrix} }[/math]

where [math]\displaystyle{ K_0 }[/math] is a zeroth order modified Bessel function of the second kind (Abramowitz and Stegun 1964). For [math]\displaystyle{ |\mbox{Arg}(w)|\lt \pi }[/math], it has the expansion

[math]\displaystyle{ \begin{matrix} K_0(w)&=-\big(\log(w/2)+\gamma_e\big)I_0(w)+\sum_{m=1}^\infty\chi_m\frac{(w/2)^{2m}}{(m!)^2}\\ &=\sum_{m=0}^\infty a_mw^{2m}\big(d_m-\log w\big), &\qquad (C.4) \end{matrix} }[/math]

where [math]\displaystyle{ \gamma_e }[/math] is the Euler constant, [math]\displaystyle{ \chi_m=\sum_{r=1}^m1/r }[/math], and [math]\displaystyle{ I_0 }[/math] is the modified Bessel function of the first kind (also given by Abramowitz and Stegun 1964):

[math]\displaystyle{ \begin{matrix} I_0(w)=\sum_{m=0}^\infty\frac{(w/2)^{2m}}{(m!)^2}=\sum_{m=0}^\infty a_mw^{2m}. &\qquad (C.6) \end{matrix} }[/math]

The coefficients [math]\displaystyle{ a_m }[/math] and [math]\displaystyle{ d_m }[/math] are given by [math]\displaystyle{ a_m=1/\big(2^{2m}(m!)^2\big) }[/math] and [math]\displaystyle{ d_m=\log2-\gamma_e+\chi_m }[/math] (with [math]\displaystyle{ \chi_0=0 }[/math]), so the above series converges for all [math]\displaystyle{ w }[/math]. Thus we anticipate that series representations for the [math]\displaystyle{ g_{1n} }[/math] that satisfy (C.3) will also converge for all [math]\displaystyle{ x }[/math]. The [math]\displaystyle{ a_m }[/math] and [math]\displaystyle{ d_m }[/math] themselves may be calculated most efficiently by the recurrence relations [math]\displaystyle{ a_m=a_{m-1}/4m^2 }[/math] and [math]\displaystyle{ d_m=d_{m-1}+1/m }[/math], with [math]\displaystyle{ a_0=1 }[/math] and [math]\displaystyle{ d_0=\log2-\gamma_e. }[/math]
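
As a concrete check, the above recurrences translate directly into code. The sketch below (an illustrative implementation, not taken from the thesis) builds the series (C.4) and compares it against the standard integral representation [math]\displaystyle{ K_0(w)=\int_0^\infty\mbox{e}^{-w\cosh t}\,\mbox{d}t }[/math], evaluated here by simple trapezoidal quadrature:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler's constant gamma_e

def k0_series(w, terms=40):
    """K_0(w) = sum_m a_m w^{2m} (d_m - log w), using the recurrences
    a_m = a_{m-1}/(4 m^2) and d_m = d_{m-1} + 1/m."""
    a = 1.0                            # a_0 = 1
    d = math.log(2.0) - EULER_GAMMA    # d_0 = log 2 - gamma_e
    wpow, total = 1.0, 0.0
    logw = math.log(w)
    for m in range(1, terms + 1):
        total += a * wpow * (d - logw)
        wpow *= w * w
        a /= 4.0 * m * m
        d += 1.0 / m
    return total

def k0_integral(w, upper=30.0, n=200000):
    """Independent check via K_0(w) = int_0^inf exp(-w cosh t) dt,
    evaluated with composite trapezoidal quadrature."""
    h = upper / n
    s = 0.5 * (math.exp(-w) + math.exp(-w * math.cosh(upper)))
    for i in range(1, n):
        s += math.exp(-w * math.cosh(i * h))
    return s * h
```

As expected, the two agree to quadrature accuracy, and for moderate arguments only a handful of series terms contribute.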

The [math]\displaystyle{ g_{1n} }[/math] should also satisfy the initial conditions

[math]\displaystyle{ \begin{matrix} g_{1n}(0)-q_n=g'_{1n}(0)=0, &\qquad (C.7) \end{matrix} }[/math]

where the [math]\displaystyle{ q_n }[/math] can be evaluated analytically as shown in Section C.2.2 below.

In addition, we know that even derivatives of [math]\displaystyle{ g_1 }[/math] are even functions, and that odd derivatives are odd functions (since their Fourier transforms are). Consequently values of [math]\displaystyle{ g_1^{(n)} }[/math] for negative [math]\displaystyle{ x }[/math] can be generated in an obvious fashion from their values at [math]\displaystyle{ -x }[/math], and so we will assume [math]\displaystyle{ x\gt 0 }[/math] from here on.

Trial Series Solution

We will solve equations (C.3) by using a variation of the Method of Frobenius to obtain individual series expansions for each of the [math]\displaystyle{ g_{1n} }[/math]. Letting [math]\displaystyle{ w=lx }[/math], [math]\displaystyle{ \alpha=\alpha_n/l }[/math], and [math]\displaystyle{ J(w)=l^2\big(g_{1n}(x)-q_n\cos\alpha_nx\big) }[/math], [math]\displaystyle{ J }[/math] should now satisfy

[math]\displaystyle{ \begin{matrix} J''(w)+\alpha^2J(w)=K_0(w) \quad \mbox{for} \ w\gt 0, &\qquad (C.8a)\\ J(0^+)=J'(0^+)=0. &\qquad (C.8b) \end{matrix} }[/math]

We can see from their transforms that each of the [math]\displaystyle{ g_{1n} }[/math] are continuous, even, and have continuous first derivatives, but that they have logarithmic singularities in their second derivatives, and so we propose a trial solution for (C.8) of the form

[math]\displaystyle{ J(w) = \sum_{m=0}^\infty \big( b_m + c_m \log w \big) w^{2m+2}, \qquad (C.9) }[/math]

which clearly satisfies (C.8b). This has first derivative

[math]\displaystyle{ \begin{matrix} J'(w)=\sum_{m=0}^\infty\big((2m+2)b_m+c_m+(2m+2)c_m\log w\big)w^{2m+1}, &\qquad (C.10) \end{matrix} }[/math]

enabling odd derivatives of [math]\displaystyle{ g_1 }[/math] to also be produced once the previous even one has been calculated. Proceeding to find [math]\displaystyle{ J }[/math], however, we now substitute our trial solution into the left hand side of (C.8a), giving

[math]\displaystyle{ \begin{matrix} &J''(w)+\alpha^2J=2b_0+3c_0+2c_0\log w\\ &\quad+\sum_{m=1}^\infty\big( (2m+2)(2m+1)b_{m}+\alpha^2b_{m-1}+(4m+3)c_m\big)w^{2m}\\ &\quad+\sum_{m=1}^\infty\big((2m+2)(2m+1)c_m+\alpha^2c_{m-1}\big)w^{2m}\log w. &\qquad (C.11) \end{matrix} }[/math]

Consequently, [math]\displaystyle{ J }[/math] will satisfy (C.8) if our [math]\displaystyle{ b_m }[/math] and [math]\displaystyle{ c_m }[/math] satisfy the recurrence relations

[math]\displaystyle{ \begin{matrix} (2m+2)(2m+1)b_{m}+\alpha^2b_{m-1}+(4m+3)c_{m}=a_{m}d_{m} \quad &\mbox{for} \ m\geq1, &\qquad (C.12a)\\ (2m+2)(2m+1)c_m+\alpha^2c_{m-1}=-a_m \quad & \mbox{for} \ m\geq1, &\qquad (C.12b) \end{matrix} }[/math]

and if [math]\displaystyle{ 2c_0=-a_0 }[/math] and [math]\displaystyle{ 2b_0=a_0d_0-3c_0 }[/math] (i.e., if [math]\displaystyle{ b_0=(\log2+3/2-\gamma_e)/2 }[/math] and [math]\displaystyle{ c_0=-1/2 }[/math]). The latter relation (C.12b) implies that [math]\displaystyle{ |c_m/c_{m-1}|\to0 }[/math] as [math]\displaystyle{ m\to\infty }[/math], and so the series involving those coefficients in (C.9) will converge for all [math]\displaystyle{ w }[/math]. This also implies that the [math]\displaystyle{ c_m }[/math] themselves tend towards zero as [math]\displaystyle{ m\to\infty }[/math], and so we can also deduce from (C.12a) that the ratio [math]\displaystyle{ |b_m/b_{m-1}| }[/math] will do the same for large [math]\displaystyle{ m }[/math]. Consequently, (C.9) will converge in its entirety for all [math]\displaystyle{ w }[/math], although it will have a branch cut along the negative real axis due to the logarithmic factor.
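
The recurrences (C.12), together with the starting values for [math]\displaystyle{ b_0 }[/math] and [math]\displaystyle{ c_0 }[/math], can be coded directly. The sketch below is illustrative only: it takes a real value of [math]\displaystyle{ \alpha }[/math] for simplicity (the [math]\displaystyle{ \alpha_n }[/math] of the text are in general complex) and verifies the ODE (C.8a) with a central finite difference:

```python
import math

EULER_GAMMA = 0.5772156649015329

def coeffs(alpha, terms=40):
    """a_m, d_m for the K_0 series, plus b_m, c_m from (C.12)
    with 2 c_0 = -a_0 and 2 b_0 = a_0 d_0 - 3 c_0."""
    a = [1.0]
    d = [math.log(2.0) - EULER_GAMMA]
    for m in range(1, terms):
        a.append(a[-1] / (4.0 * m * m))
        d.append(d[-1] + 1.0 / m)
    c = [-0.5 * a[0]]
    b = [0.5 * (a[0] * d[0] - 3.0 * c[0])]
    for m in range(1, terms):
        denom = (2 * m + 2) * (2 * m + 1)
        c.append((-a[m] - alpha ** 2 * c[m - 1]) / denom)
        b.append((a[m] * d[m] - alpha ** 2 * b[m - 1] - (4 * m + 3) * c[m]) / denom)
    return a, d, b, c

def J(w, alpha, terms=40):
    """Trial series (C.9): J(w) = sum_m (b_m + c_m log w) w^{2m+2}."""
    _, _, b, c = coeffs(alpha, terms)
    logw = math.log(w)
    return sum((b[m] + c[m] * logw) * w ** (2 * m + 2) for m in range(terms))

def K0(w, terms=40):
    """Series (C.4): K_0(w) = sum_m a_m w^{2m} (d_m - log w)."""
    a, d, _, _ = coeffs(0.0, terms)
    logw = math.log(w)
    return sum(a[m] * (d[m] - logw) * w ** (2 * m) for m in range(terms))
```

The residual [math]\displaystyle{ J''+\alpha^2J-K_0 }[/math] vanishes to finite-difference accuracy, and [math]\displaystyle{ J(0^+)=0 }[/math] as required by (C.8b).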

Contour Integration

An alternative means of calculating [math]\displaystyle{ g }[/math] is by using contour integration to transform the inverse Fourier transform into an integral which is more easily evaluated numerically. This technique is more efficient for larger values of [math]\displaystyle{ x }[/math], but it turns out that the resulting integral may actually be evaluated analytically when [math]\displaystyle{ x=0 }[/math]. Section C.2.1 explains the contour used and outlines the numerical method of approximation of the resulting integral for general [math]\displaystyle{ x }[/math] values, while Section C.2.2 derives the formula for that integral when [math]\displaystyle{ x=0 }[/math].

Some additional simplifications can be obtained for normal incidence. We show that the integral for general [math]\displaystyle{ x }[/math] reduces to an expression involving the well-known and easily evaluated exponential integral function. This also acts as a further check on the oblique incidence results because the normal incidence results can be compared to those obtained by Dixon and Squire 2001a. This is done in Section C.2.3.

Derivation and Numerical Approximation

For larger values of [math]\displaystyle{ x }[/math], when the series expansions for the [math]\displaystyle{ g_{1n} }[/math] take longer to converge, we would like to transform the integral

[math]\displaystyle{ g_{1n}(x)=\frac12\int_{-\infty}^\infty\frac{\mbox{e}^{\mbox{i} kx}\mbox{d} k}{\kappa(\alpha_n^2-k^2)} \qquad (C.13) }[/math]

into one that is easier to evaluate numerically. To do this we close the contour in the upper half plane, allowing for the pole at [math]\displaystyle{ k=\alpha_n }[/math] and the branch cut along the line [math]\displaystyle{ k=\mbox{i}\times[l,\infty) }[/math]. Thus we can write

[math]\displaystyle{ g_{1n}(x)=g_{3n}(x)-\frac{\epsilon_n\pi \mbox{i}}{2\alpha_n\gamma_n}\mbox{e}^{\mbox{i}\alpha_nx}, \qquad (C.14) }[/math]

where [math]\displaystyle{ \epsilon_n=\mbox{sgn}\big(\mathfrak{Re}[\gamma_n]\big) }[/math], and

[math]\displaystyle{ g_{3n}(x)=\int_{\mbox{i} l}^{\mbox{i}\infty}\frac{\mbox{e}^{\mbox{i} kx}\mbox{d} k}{\kappa(\alpha_n^2-k^2)}=\int_{l}^{\infty}\frac{\mbox{e}^{-\kappa x}\mbox{d}\kappa}{k(\kappa^2+\alpha_n^2)}. \qquad (C.15) }[/math]

The [math]\displaystyle{ \epsilon_n }[/math] results from defining the square root in [math]\displaystyle{ \kappa }[/math] as having a branch cut along the negative real axis. Consequently, [math]\displaystyle{ \sqrt{\alpha_n^2+l^2}=\epsilon_n\gamma_n }[/math] in the residue of [math]\displaystyle{ g_{1n} }[/math] at [math]\displaystyle{ \alpha_n }[/math]. In addition, [math]\displaystyle{ \kappa }[/math] in the left hand integral in (C.15) is taken from the positive imaginary axis.

Now let us define the analogous sum to (C.1):

[math]\displaystyle{ \begin{matrix} g_3(x)=-\frac1\pi\sum_{n=-2}^2A_n\alpha_n\gamma_ng_{3n}(x)=-\frac1\pi\int_l^\infty\frac{k\mbox{e}^{-\kappa x}\mbox{d}\kappa}{\Lambda^2(k)k^2+1}. &\qquad (C.16) \end{matrix} }[/math]

From (C.14), we now can write the relationship between [math]\displaystyle{ g_1 }[/math] and [math]\displaystyle{ g_3 }[/math]:

[math]\displaystyle{ \begin{matrix} g_1(x)=g_3(x)+\frac{\mbox{i}}{2}\sum_{n=-2}^2A_n\epsilon_n\mbox{e}^{\mbox{i}\alpha_nx}. &\qquad (C.17) \end{matrix} }[/math]

Adding [math]\displaystyle{ g_0 }[/math] to [math]\displaystyle{ g_1 }[/math] then enables us to write [math]\displaystyle{ g(x)=g_2(x)+g_3(x) }[/math], where

[math]\displaystyle{ \begin{matrix} g_2(x)=\mbox{i}\sum_{n=-2}^0A_n\mbox{e}^{\mbox{i}\alpha_nx}. &\qquad (C.18) \end{matrix} }[/math]

This confirms the results (3.20) and (3.21), which were obtained by taking the limit in (3.7) as [math]\displaystyle{ H\to\infty }[/math] directly.

[math]\displaystyle{ g_3(x) }[/math] as given by (C.16) is an integral that is more amenable to calculation by numerical quadrature than the inverse Fourier transform defining [math]\displaystyle{ g_1 }[/math]. The integrand never has any singularities on the real line and, because it is also [math]\displaystyle{ O(\mbox{e}^{-\kappa x}) }[/math] as [math]\displaystyle{ \kappa\to\infty }[/math], [math]\displaystyle{ g_3 }[/math] will be infinitely differentiable for [math]\displaystyle{ x\gt 0 }[/math]. However, since the non-exponential factor is only [math]\displaystyle{ O(\kappa^{-9}) }[/math] as [math]\displaystyle{ \kappa\to\infty }[/math], its eighth derivative will be singular at the origin, as was mentioned in Section 3.1.1.

When [math]\displaystyle{ \theta=0 }[/math], [math]\displaystyle{ g_3 }[/math] may be calculated analytically in terms of the exponential integral [math]\displaystyle{ E_1 }[/math] as described below in Section C.2.3. First, however, we will show how any derivative of [math]\displaystyle{ g_3 }[/math] may be efficiently calculated for oblique incidence.

Differentiating (C.16) [math]\displaystyle{ n }[/math] times gives

[math]\displaystyle{ \begin{matrix} &g_3^{(n)}(x)=-\frac{(-1)^{n}}\pi\int_l^\infty\frac{k\kappa^ne^{-\kappa x}\mbox{d}\kappa}{\Lambda^2(k)k^2+1} &\qquad (C.19a)\\ &=-\frac{(-1)^{n}}\pi\times x^{8-n}\mbox{e}^{-lx}\int_0^\infty \frac{t^{1/2}(t+lx)^n(t+2lx)^{1/2}\mbox{e}^{-t}\mbox{d} t}{t(t+2lx)\big(t^2(t+2lx)^2+\varpi x^4\big)^2+x^{10}} &\qquad (C.19b), \end{matrix} }[/math]

where [math]\displaystyle{ t=(\kappa-l)x }[/math]. This integral may now be evaluated using generalised Gauss-Laguerre integration with the weight function [math]\displaystyle{ t^{1/2}\mbox{e}^{-t} }[/math]. The remainder of the integrand is relatively smooth, with the exception of the [math]\displaystyle{ \sqrt{t+2lx} }[/math] term, which has a branch cut on the line [math]\displaystyle{ (-\infty,-2lx] }[/math]. However, we do not need to integrate over that singularity, and it moves further away from the region of integration as [math]\displaystyle{ x }[/math] grows, so the quadrature converges faster for larger values of [math]\displaystyle{ x }[/math]. Also note that (C.19b) shows that [math]\displaystyle{ g_3^{(n)} }[/math] decays exponentially as [math]\displaystyle{ x\to\infty }[/math].
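
Generalised Gauss-Laguerre rules with weight [math]\displaystyle{ t^{1/2}\mbox{e}^{-t} }[/math] are available in standard libraries, but they can also be generated from the three-term recurrence of the generalised Laguerre polynomials via the Golub-Welsch eigenvalue method. The sketch below is illustrative: rather than the full integrand of (C.19b), it tests the rule on the model integrand [math]\displaystyle{ f(t)=\mbox{e}^{-t} }[/math], for which the weighted integral is [math]\displaystyle{ \Gamma(3/2)/2^{3/2} }[/math]:

```python
import math
import numpy as np

def gen_gauss_laguerre(n, alpha=0.5):
    """Nodes and weights for int_0^inf t^alpha e^{-t} f(t) dt, built by the
    Golub-Welsch method: eigen-decompose the Jacobi matrix of the
    generalised Laguerre recurrence a_k = 2k+alpha+1, b_k = k(k+alpha)."""
    k = np.arange(n)
    diag = 2.0 * k + alpha + 1.0
    off = np.sqrt(k[1:] * (k[1:] + alpha))
    jacobi = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(jacobi)
    # weights = mu_0 * (first component of each eigenvector)^2,
    # where mu_0 = int_0^inf t^alpha e^{-t} dt = Gamma(alpha+1)
    weights = math.gamma(alpha + 1.0) * vecs[0, :] ** 2
    return nodes, weights

# model test: f(t) = e^{-t}, so the weighted integral is
# int_0^inf t^{1/2} e^{-2t} dt = Gamma(3/2) / 2^{3/2}
nodes, weights = gen_gauss_laguerre(20)
approx = float(np.dot(weights, np.exp(-nodes)))
exact = math.gamma(1.5) / 2.0 ** 1.5
```

Because the exponential decay has been factored into the weight, a modest number of nodes suffices for smooth remainders, consistent with the convergence remarks above.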

Exact Calculation of Derivatives at the Origin

Equation (C.15) when [math]\displaystyle{ x=0 }[/math] is

[math]\displaystyle{ \begin{matrix} q_n'=q_n+\frac{\mbox{i}\epsilon_n}{2\alpha_n\gamma_n}=g_{3n}(0^+)=I(\alpha_n)+I(-\alpha_n), &\qquad (C.20) \end{matrix} }[/math]

where

[math]\displaystyle{ \begin{matrix} I(\alpha)&=\frac1{2\mbox{i}\alpha}\int_l^\infty\frac{\mbox{d}\kappa}{k(\kappa-\mbox{i}\alpha)} =-\frac1{\alpha}\int_0^1\frac{\mbox{d} w}{(\alpha-\mbox{i} l)w^2-(\alpha+\mbox{i} l)}, &\qquad (C.21) \end{matrix} }[/math]

and we have made the transformation [math]\displaystyle{ \kappa=l(1+w^2)/(1-w^2) }[/math]. If we also define [math]\displaystyle{ \beta }[/math] and [math]\displaystyle{ \gamma }[/math] by the relations [math]\displaystyle{ \beta^2=(\alpha+\mbox{i} l)/(\alpha-\mbox{i} l) }[/math] and [math]\displaystyle{ \beta=\gamma/(\alpha-\mbox{i} l)=(\alpha+\mbox{i} l)/\gamma }[/math] (so that [math]\displaystyle{ \gamma^2=\alpha^2+l^2 }[/math]), (C.21) becomes

[math]\displaystyle{ \begin{matrix} &I(\alpha)=-\frac1{\alpha(\alpha-\mbox{i} l)}\int_0^1\frac{\mbox{d} w}{w^2-\beta^2} =-\frac1{2\alpha\gamma}\left(\int_0^1\frac{\mbox{d} w}{w-\beta} -\int_0^1\frac{\mbox{d} w}{w+\beta}\right)\\ &=\frac1{2\alpha\gamma}\big(\log(1+\beta)-\log(1-\beta)-\log\beta+\log(-\beta)\big). &\qquad (C.22) \end{matrix} }[/math]

To simplify (C.22), we choose [math]\displaystyle{ \beta }[/math] to be the square root from the upper half plane (which also implies that [math]\displaystyle{ 1+\beta }[/math] is in the upper half plane, and that [math]\displaystyle{ \log(-\beta)=\log\beta-\mbox{i}\pi }[/math]). [math]\displaystyle{ 1-\beta }[/math] will then be in the lower half-plane so the quotient [math]\displaystyle{ (1+\beta)/(1-\beta) }[/math] corresponds to an anti-clockwise rotation of [math]\displaystyle{ 1+\beta }[/math]. Potentially this could rotate [math]\displaystyle{ 1+\beta }[/math] across the branch cut in the logarithm on the negative real axis, but it can be shown explicitly that with our choice of [math]\displaystyle{ \beta }[/math] the aforementioned quotient remains in the upper half plane. Thus, the normal logarithm rules for quotients hold, and so (C.22) may be written

[math]\displaystyle{ \begin{matrix} &I(\alpha)=\frac1{2\alpha\gamma}\big(\log\frac{1+\beta}{1-\beta}-\pi \mbox{i}\big) =\frac1{2\alpha\gamma}\big(\log\frac{\gamma+\alpha+\mbox{i} l}{\gamma-\alpha-\mbox{i} l}-\pi \mbox{i}\big). &\qquad (C.23) \end{matrix} }[/math]

Note that the above expression is even in [math]\displaystyle{ \gamma }[/math], so it doesn't matter after all which root we choose for it. Thus,

[math]\displaystyle{ \begin{matrix} q_n'=\frac1{2\alpha_n\gamma_n}\big(\log\frac{\gamma_n+\alpha_n+\mbox{i} l}{\gamma_n-\alpha_n-\mbox{i} l} -\log\frac{\gamma_n-\alpha_n+\mbox{i} l}{\gamma_n+\alpha_n-\mbox{i} l}\big)=\frac1{2\alpha_n\gamma_n}\log\Theta_n, &\qquad (C.24) \end{matrix} }[/math]

where [math]\displaystyle{ \Theta_n=(\gamma_n+\alpha_n)/(\gamma_n-\alpha_n) }[/math]. The latter step is justified since if [math]\displaystyle{ \mathfrak{Im}[\alpha_n]\gt 0 }[/math] the argument of the first logarithm will be in the first quadrant, while the argument of the second logarithm will be in the second quadrant. This means that we can now write

[math]\displaystyle{ g_3^{(2n)}(0^+)=-\frac1{\pi \mbox{i}}\sum_{m=-2}^2A_m\gamma_m(\mbox{i}\alpha_m)^{2n+1}q'_m =-\frac1{2\pi}\sum_{m=-2}^2A_m(\mbox{i}\alpha_m)^{2n}\log{\Theta_m}. \qquad (C.25) }[/math]
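
For real parameters the closed form (C.24) can be checked against direct quadrature of (C.15) at [math]\displaystyle{ x=0 }[/math]: substituting [math]\displaystyle{ \kappa=l\cosh u }[/math] gives [math]\displaystyle{ g_{3n}(0^+)=\int_0^\infty\mbox{d}u/(l^2\cosh^2u+\alpha_n^2) }[/math]. The sketch below is illustrative, with real values of [math]\displaystyle{ \alpha }[/math] and [math]\displaystyle{ l }[/math] (the [math]\displaystyle{ \alpha_n }[/math] of the text are in general complex):

```python
import math

def qprime_closed(alpha, l):
    """(C.24): q_n' = log(Theta_n)/(2 alpha_n gamma_n), where
    Theta_n = (gamma_n + alpha_n)/(gamma_n - alpha_n), gamma_n^2 = alpha_n^2 + l^2."""
    gamma = math.sqrt(alpha ** 2 + l ** 2)
    return math.log((gamma + alpha) / (gamma - alpha)) / (2.0 * alpha * gamma)

def qprime_quad(alpha, l, upper=30.0, n=300000):
    """(C.15) at x = 0 after the substitution kappa = l cosh(u):
    g_3n(0+) = int_0^inf du / (l^2 cosh^2(u) + alpha^2)."""
    h = upper / n
    s = 0.5 / (l ** 2 + alpha ** 2)    # u = 0 endpoint; the far end is ~0
    for i in range(1, n + 1):
        s += 1.0 / (l ** 2 * math.cosh(i * h) ** 2 + alpha ** 2)
    return s * h
```

The two agree to quadrature accuracy, confirming the branch choices made in deriving (C.23) and (C.24), at least on the real line.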

Now, for small [math]\displaystyle{ l }[/math], [math]\displaystyle{ \alpha_n\sim\epsilon_n\gamma_n\big(1-l^2/2\gamma_n^2+ O(l^4)\big) }[/math], where [math]\displaystyle{ \epsilon_n=\mbox{sgn}(\mathfrak{Im}[\gamma_n]) }[/math], so at first glance it seems like the above expression will become singular as [math]\displaystyle{ l \to 0 }[/math]. However, this is not the case. If we define [math]\displaystyle{ A'_n=A_n\alpha_n/\gamma_n=-\gamma_n^2/ (4\gamma_n^5+1) }[/math], then [math]\displaystyle{ A_n\sim\epsilon_nA_n'+O(l^2) }[/math], and it is easily shown that the [math]\displaystyle{ A'_n }[/math] satisfy the rules

[math]\displaystyle{ \sum_{m=-2}^{2}A_m'\gamma_m^{2n}=0 \quad \mbox{for} \ n=0,\ldots,3, \qquad (C.26) }[/math]

which are similar to those presented by Squire and Dixon 2000. Thus as [math]\displaystyle{ l \to 0 }[/math],

[math]\displaystyle{ \begin{matrix} g_3^{(2n)}(0^+)&\sim-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\big(1+O(l^2)\big)\times \left(\log\big( \gamma_m^2\big(1+O(l^2)\big) \big)-\log\frac{l^2}4\right)\\ &\to-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log{\gamma^2_m}. &\qquad (C.27) \end{matrix} }[/math]

Normal Incidence Simplification

An alternative expansion to (C.20) for the [math]\displaystyle{ g_{3n} }[/math] is

[math]\displaystyle{ \begin{matrix} &2\gamma_n^2g_{3n}(x)=\int_l^\infty\left(\frac2k-\frac1{k+\mbox{i}\gamma_n}-\frac1{k-\mbox{i}\gamma_n}\right)\mbox{e}^{-\kappa x}\mbox{d}\kappa\\ &=2K_0(lx)-\int_l^\infty\left(\frac1{k+\mbox{i}\gamma_n}+\frac1{k-\mbox{i}\gamma_n}\right)\mbox{e}^{-\kappa x}\mbox{d}\kappa. &\qquad (C.28) \end{matrix} }[/math]

Using the rules (C.26) to eliminate the Bessel function [math]\displaystyle{ K_0 }[/math], we can thus write the [math]\displaystyle{ 2n }[/math]-th derivative of [math]\displaystyle{ g_3 }[/math] as

[math]\displaystyle{ \begin{matrix} &g_3^{(2n)}(x)=-\frac1\pi\sum_{m=-2}^2A_m'\gamma_m^2(\mbox{i}\alpha_m)^{2n}g_{3m}(x)\\ &=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\alpha_m)^{2n}\int_l^\infty\left(\frac1{k+\mbox{i}\gamma_m}+\frac1{k-\mbox{i}\gamma_m}\right)\mbox{e}^{-\kappa x}\mbox{d}\kappa. &\qquad (C.29) \end{matrix} }[/math]

Now, if we let [math]\displaystyle{ \theta\to0 }[/math], [math]\displaystyle{ l\to0 }[/math], [math]\displaystyle{ \alpha_n^2\to\gamma_n^2 }[/math], and [math]\displaystyle{ \kappa\to k }[/math] on the real line, (C.29) becomes

[math]\displaystyle{ \begin{matrix} g_3^{(2n)}(x)&=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\left( \mbox{e}^{\mbox{i}\gamma_mx}E_1(\mbox{i}\gamma_mx)+\mbox{e}^{-\mbox{i}\gamma_mx}E_1(-\mbox{i}\gamma_mx) \right), &\qquad (C.30) \end{matrix} }[/math]

where [math]\displaystyle{ E_1 }[/math] is the well-known exponential integral function (Abramowitz and Stegun 1964, p228ff.), defined by

[math]\displaystyle{ E_1(w)=\int_w^\infty\frac{\mbox{e}^{-t}}t\mbox{d}t=-\gamma_e-\log w - \sum_{n=1}^\infty \frac{(-w)^n}{n\,n!} \quad \mbox{for} \ |\mbox{Arg}(w)| \lt \pi. \qquad (C.31) }[/math]
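
The series and integral forms in (C.31) are straightforward to cross-check numerically. The sketch below is illustrative, using plain trapezoidal quadrature for the defining integral and real arguments:

```python
import math

EULER_GAMMA = 0.5772156649015329

def e1_series(w, terms=60):
    """(C.31): E_1(w) = -gamma_e - log w - sum_{n>=1} (-w)^n / (n n!)."""
    s = -EULER_GAMMA - math.log(w)
    term = 1.0                      # running value of (-w)^n / n!
    for n in range(1, terms):
        term *= -w / n
        s -= term / n
    return s

def e1_integral(w, upper=40.0, n=400000):
    """Defining integral int_w^inf e^{-t}/t dt, by trapezoidal quadrature."""
    h = (upper - w) / n
    s = 0.5 * (math.exp(-w) / w + math.exp(-upper) / upper)
    for i in range(1, n):
        t = w + i * h
        s += math.exp(-t) / t
    return s * h
```

For the moderate arguments arising here, a few dozen series terms already reproduce the integral to quadrature accuracy.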

Note that, since no imaginary solutions exist for the infinite depth dispersion relation, [math]\displaystyle{ \big| \mbox{Arg} [\pm \mbox{i}\gamma_nx]\big|\lt \pi }[/math], so the above formulae are valid. Also note that the [math]\displaystyle{ g_3^{(2n)} }[/math] will be everywhere bounded, despite the logarithmic singularities in [math]\displaystyle{ E_1 }[/math]. This also follows from the rules (C.26), allowing us to deduce that taking the limit in (C.30) as [math]\displaystyle{ x \to 0^+ }[/math] gives

[math]\displaystyle{ \begin{matrix} &g_3^{(2n)}(0^+)=-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n} \big(\log(\mbox{i}\gamma_m)+\log(-\mbox{i}\gamma_m)\big)\\ &=-\frac1\pi\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log\gamma_m -2i\big(A'_{-2}(\mbox{i}\gamma_1)^{2n}+A'_2(\mbox{i}\gamma_2)^{2n}\big) &\qquad (C.32a)\\ &=-\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log\gamma_m^2, &\qquad (C.32b) \end{matrix} }[/math]

which agrees with (C.27). Recalling that [math]\displaystyle{ A_n\to\mbox{sgn}(\mathfrak{Im}[\gamma_n])A_n' }[/math] as [math]\displaystyle{ l \to 0 }[/math], adding [math]\displaystyle{ g_2^{(2n)}(0^+) }[/math] to (C.32a) and using (C.26) to simplify the product, then gives the following result for [math]\displaystyle{ g^{(2n)}(0^+) }[/math] itself:

[math]\displaystyle{ \begin{matrix} g^{(2n)}(0^+)&=-\frac1\pi\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n}\log\gamma_m -2i\big(A'_{-2}(\mbox{i}\gamma_1)^{2n}+A'_2(\mbox{i}\gamma_2)^{2n}\big). &\qquad (C.33) \end{matrix} }[/math]

The above result is unstable as [math]\displaystyle{ \gamma_2 }[/math] becomes negative and so is only reliable for [math]\displaystyle{ \varpi\gt \varpi_\infty }[/math]. For a result that holds for all [math]\displaystyle{ \varpi }[/math], use (C.32b) and the ordinary definition of [math]\displaystyle{ g_2 }[/math].

Odd derivatives of [math]\displaystyle{ g_3 }[/math] may also be obtained by differentiating (C.30) directly, again making use of (C.31) and also (C.26) to eliminate any [math]\displaystyle{ 1/x }[/math] type singularities produced. Thus,

[math]\displaystyle{ \begin{matrix} g_3^{(2n+1)}(x)&=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{2n+1}\left( \mbox{e}^{\mbox{i}\gamma_mx}E_1(\mbox{i}\gamma_mx)-\mbox{e}^{-\mbox{i}\gamma_mx}E_1(-\mbox{i}\gamma_mx)\right), &\qquad (C.34) \end{matrix} }[/math]

and so we can write the general [math]\displaystyle{ n^{\mbox{th}} }[/math] ([math]\displaystyle{ 0\leq n\leq7 }[/math]) derivative of [math]\displaystyle{ g_3 }[/math] as

[math]\displaystyle{ \begin{matrix} g_3^{(n)}(x)&=\frac1{2\pi}\sum_{m=-2}^2A_m'(\mbox{i}\gamma_m)^{n}\left( \mbox{e}^{\mbox{i}\gamma_mx}E_1(\mbox{i}\gamma_mx)+\mbox{e}^{\mbox{i}(n\pi-\gamma_mx)}E_1(-\mbox{i}\gamma_mx)\right). &\qquad (C.35) \end{matrix} }[/math]

Recall from (3.13) that we already know what values [math]\displaystyle{ g^{(2n+1)} }[/math] will take at the origin, so there is no need to take the limit in (C.34) as [math]\displaystyle{ x \to 0^+ }[/math]. In addition, we can also check the above results numerically by letting [math]\displaystyle{ l\to0 }[/math] and [math]\displaystyle{ \kappa\to k }[/math] in (C.19a) and using generalised Gauss-Laguerre integration (with weight function [math]\displaystyle{ k^{n+1}\mbox{e}^{-kx} }[/math]).

A further confirmation of (C.35) can be obtained by checking the [math]\displaystyle{ n=0 }[/math] result against the equivalent result of Dixon and Squire 2001. Let us denote their nondimensional length variables as [math]\displaystyle{ x',z',\xi' }[/math] and [math]\displaystyle{ \zeta' }[/math], which equate to [math]\displaystyle{ x/a,z/a,\xi/a }[/math] and [math]\displaystyle{ \zeta/a }[/math] respectively. In their expression (6) for the Green's function in that scheme, which we shall denote [math]\displaystyle{ G_\mbox{SD} }[/math], the integral when [math]\displaystyle{ x'\gt \xi' }[/math] and [math]\displaystyle{ z',\zeta'\approx0 }[/math] is obtained by doubling the real part of equation (14b). Doing so, and then differentiating with respect to [math]\displaystyle{ \zeta' }[/math] followed by [math]\displaystyle{ z' }[/math] gives the result

[math]\displaystyle{ \begin{matrix} &g(x-\xi)=\frac1{a^2}\partial^2_{z'\zeta'}G_\mbox{SD}(x'-\xi',0,0)+\mbox{i} A_0\cos\gamma_0x\\ &=\mbox{i}\sum_{m=-2}^0A'_m\epsilon_me^{\mbox{i}\epsilon_m\gamma_mx} +\frac1\pi\sum_{m=-2}^2A_m'g_\mbox{cos}(\epsilon_m'\gamma_mx), &\qquad (C.36) \end{matrix} }[/math]

where [math]\displaystyle{ \epsilon_n'=\mbox{sgn}\big(\mathfrak{Re}[\gamma_n]\big) }[/math]. The cosine term needs to be added to allow for the difference between taking the Cauchy principal value of the integral (3.4) and letting the poles [math]\displaystyle{ \pm\alpha_0 }[/math] approach the real line from the upper or lower half plane as we did, while [math]\displaystyle{ g_\mbox{cos} }[/math] is the auxiliary cosine function (Abramowitz and Stegun 1964), which may be written as

[math]\displaystyle{ g_\mbox{cos}(w)=\int_0^\infty\frac{\cos t}{w+t}\mbox{d} t=\frac12\big(\mbox{e}^{\mbox{i} w}E_1(\mbox{i} w)+\mbox{e}^{-\mbox{i} w}E_1(-\mbox{i} w)\big), \quad \mathfrak{Re}[w]\gt 0. \qquad (C.37) }[/math]
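
Identity (C.37) can itself be verified numerically for real [math]\displaystyle{ w\gt 0 }[/math]: rotating the contour in the defining integral gives the rapidly convergent form [math]\displaystyle{ g_\mbox{cos}(w)=\int_0^\infty s\,\mbox{e}^{-s}\,\mbox{d}s/(w^2+s^2) }[/math], which can be compared with the right-hand side evaluated via the power series (C.31) at the imaginary arguments [math]\displaystyle{ \pm\mbox{i}w }[/math]. An illustrative sketch (helper names are ours, not from Dixon and Squire):

```python
import cmath
import math

EULER_GAMMA = 0.5772156649015329

def e1(z, terms=80):
    """E_1 by its power series, valid for |Arg z| < pi (here z = +/- i w)."""
    s = -EULER_GAMMA - cmath.log(z)
    term = 1.0 + 0j                 # running value of (-z)^n / n!
    for n in range(1, terms):
        term *= -z / n
        s -= term / n
    return s

def gcos_via_e1(w):
    """(C.37): g_cos(w) = (e^{iw} E_1(iw) + e^{-iw} E_1(-iw)) / 2."""
    z = 1j * w
    return (cmath.exp(z) * e1(z) + cmath.exp(-z) * e1(-z)).real / 2.0

def gcos_direct(w, upper=40.0, n=400000):
    """Rotating the contour in int_0^inf cos(t)/(w+t) dt gives the
    rapidly convergent real form int_0^inf s e^{-s}/(w^2+s^2) ds."""
    h = upper / n
    s = 0.0                         # integrand vanishes at both endpoints
    for i in range(1, n):
        t = i * h
        s += t * math.exp(-t) / (w ** 2 + t ** 2)
    return s * h
```

The agreement between the two evaluations mirrors the check against Dixon and Squire's normal incidence results described in the text.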

Since the first summation in (C.36) is obviously [math]\displaystyle{ g_2(x) }[/math], the second sum must be [math]\displaystyle{ g_3(x) }[/math]. In light of the definition of [math]\displaystyle{ g_\mbox{cos} }[/math], the result presented by Dixon and Squire 2001 clearly agrees with (C.35).


Reflections on Ice