



Introduction

The inverse scattering transformation gives a way to solve the KdV equation exactly. You can think of it as being analogous to the Fourier transform, except that it works for a nonlinear equation. We want to be able to solve

[math]\displaystyle{ \begin{align} \partial_{t}u+6u\partial_{x}u+\partial_{x}^{3}u & =0\\ u(x,0) & =f\left( x\right) \end{align} }[/math]

with [math]\displaystyle{ \left\vert u\right\vert \rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty. }[/math]

The Miura transformation is given by

[math]\displaystyle{ u=v^{2}+v_{x} \, }[/math]

and if [math]\displaystyle{ v }[/math] satisfies the mKdV

[math]\displaystyle{ \partial_{t}v-6v^{2}\partial_{x}v+\partial_{x}^{3}v=0 }[/math]

then [math]\displaystyle{ u }[/math] satisfies the KdV (but not vice versa). We can think of the Miura transformation as a nonlinear ODE solving for [math]\displaystyle{ v }[/math] given [math]\displaystyle{ u. }[/math] This nonlinear ODE is also known as the Riccati equation, and there is a well-known transformation which linearises this equation. If we write

[math]\displaystyle{ v=\frac{\left( \partial_{x}w\right) }{w} }[/math]

then we obtain the equation

[math]\displaystyle{ \partial_{x}^{2}w+uw=0 }[/math]

The KdV is invariant under the transformation [math]\displaystyle{ x\rightarrow x+6\lambda t, }[/math] [math]\displaystyle{ u\rightarrow u+\lambda. }[/math] Therefore we consider the associated eigenvalue problem

[math]\displaystyle{ \partial_{x}^{2}w+uw=-\lambda w }[/math]

The eigenfunctions and eigenvalues of this scattering problem play a key role in the inverse scattering transformation. Note that this is the time-independent Schrödinger equation.

Properties of the eigenfunctions

The equation

[math]\displaystyle{ \partial_{x}^{2}w+uw=-\lambda w }[/math]

has two kinds of solutions for [math]\displaystyle{ u\rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty. }[/math] The first are waves and the second are bound solutions. It is well known that there are at most a finite number of bound solutions (provided [math]\displaystyle{ u\rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty }[/math] sufficiently rapidly) and a continuum of solutions for the incident waves.

Example: Scattering by a Well

The properties of the eigenfunctions are perhaps seen most easily through the following example

[math]\displaystyle{ u\left( x\right) =\left\{ \begin{matrix} 0, & x\notin\left[ -1,1\right] \\ b, & x\in\left[ -1,1\right] \end{matrix} \right. }[/math]

where [math]\displaystyle{ b\gt 0. }[/math]

Case when [math]\displaystyle{ \lambda\lt 0 }[/math]

If we solve this equation for the case when [math]\displaystyle{ \lambda\lt 0, }[/math] writing [math]\displaystyle{ \lambda=-k^{2}, }[/math] we get

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} a_{1}\mathrm{e}^{kx}, & x\lt -1\\ b_{1}\cos\kappa x+b_{2}\sin\kappa x & -1\lt x\lt 1\\ a_{2}\mathrm{e}^{-kx} & x\gt 1 \end{matrix} \right. }[/math]

where [math]\displaystyle{ \kappa=\sqrt{b-k^{2}} }[/math] and we have assumed that [math]\displaystyle{ b\gt k^{2} }[/math] (there is no solution for [math]\displaystyle{ b\lt k^{2} }[/math]). We then match [math]\displaystyle{ w }[/math] and its derivative at [math]\displaystyle{ x=\pm1 }[/math] to solve for the coefficients. This leads to two systems of equations, one for the even solutions ([math]\displaystyle{ a_{1}=a_{2} }[/math] and [math]\displaystyle{ b_{2}=0 }[/math]) and one for the odd solutions ([math]\displaystyle{ a_{1}=-a_{2} }[/math] and [math]\displaystyle{ b_{1}=0 }[/math]). The system for the even solutions is

[math]\displaystyle{ \left( \begin{matrix} \mathrm{e}^{-k} & -\cos\kappa\\ k\mathrm{e}^{-k} & -\kappa\sin\kappa \end{matrix} \right) \left( \begin{matrix} a_{1}\\ b_{1} \end{matrix} \right) =\left( \begin{matrix} 0\\ 0 \end{matrix} \right) }[/math]

This has non-trivial solutions when

[math]\displaystyle{ \det\left( \begin{matrix} \mathrm{e}^{-k} & -\cos\kappa\\ k\mathrm{e}^{-k} & -\kappa\sin\kappa \end{matrix} \right) =0 }[/math]

which gives us the equation

[math]\displaystyle{ -\kappa\sin\kappa\,\mathrm{e}^{-k}+k\cos\kappa\,\mathrm{e}^{-k}=0 }[/math]

or

[math]\displaystyle{ \kappa \tan\kappa=k=\sqrt{b-\kappa^{2}} }[/math]

We know that [math]\displaystyle{ 0\lt \kappa\lt \sqrt{b} }[/math] and if we plot this we see that we obtain a finite number of solutions.

In other words, we solve the final equation above for [math]\displaystyle{ k }[/math] to obtain our eigenvalues corresponding to even solutions. Similarly we repeat the above process for the odd solutions.
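
As a quick numerical illustration (this sketch is ours, not part of the original notes, and assumes NumPy and SciPy are available; the function name even_bound_states is illustrative), the even eigenvalues can be found by bracketing the roots of the transcendental equation on the interval from 0 to the square root of b:

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

# Locate the even bound states of the square well of depth b on [-1, 1] by
# solving kappa * tan(kappa) = sqrt(b - kappa^2) for 0 < kappa < sqrt(b),
# then recovering k = sqrt(b - kappa^2) and lambda = -k^2.
def even_bound_states(b, samples=10_000):
    f = lambda kap: kap * np.tan(kap) - np.sqrt(b - kap**2)
    kappas = []
    grid = np.linspace(1e-9, np.sqrt(b) - 1e-9, samples)
    for left, right in zip(grid[:-1], grid[1:]):
        if f(left) * f(right) < 0:
            root = brentq(f, left, right)
            # Discard spurious "roots" at the poles of tan, where |f| stays large.
            if abs(f(root)) < 1e-6:
                kappas.append(root)
    ks = [np.sqrt(b - kap**2) for kap in kappas]
    return kappas, ks

kappas, ks = even_bound_states(b=10.0)
print("kappa values:", kappas)
print("eigenvalues lambda = -k^2:", [-k**2 for k in ks])
</syntaxhighlight>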

Case when [math]\displaystyle{ \lambda\gt 0 }[/math]

When [math]\displaystyle{ \lambda\gt 0 }[/math] we write [math]\displaystyle{ \lambda=k^{2} }[/math] and we obtain the solution

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} \mathrm{e}^{-\mathrm{i}kx}+r\mathrm{e}^{\mathrm{i}kx}, & x\lt -1\\ b_{1}\cos\kappa x+b_{2}\sin\kappa x & -1\lt x\lt 1\\ a\mathrm{e}^{-\mathrm{i}kx} & x\gt 1 \end{matrix} \right. }[/math]

where [math]\displaystyle{ \kappa=\sqrt{b+k^{2}}. }[/math] Matching [math]\displaystyle{ w }[/math] and its derivative at [math]\displaystyle{ x=\pm1 }[/math] we obtain

[math]\displaystyle{ \left( \begin{matrix} -\mathrm{e}^{-\mathrm{i}k} & \cos\kappa & -\sin\kappa & 0\\ -\mathrm{i}k\mathrm{e}^{-\mathrm{i}k} & \kappa\sin\kappa & \kappa\cos\kappa & 0\\ 0 & \cos\kappa & \sin\kappa & -\mathrm{e}^{-\mathrm{i}k}\\ 0 & -\kappa\sin\kappa & \kappa\cos\kappa & \mathrm{i}k\mathrm{e}^{-\mathrm{i}k} \end{matrix} \right) \left( \begin{matrix} r\\ b_{1}\\ b_{2}\\ a \end{matrix} \right) =\left( \begin{matrix} \mathrm{e}^{\mathrm{i}k}\\ -\mathrm{i}k\mathrm{e}^{\mathrm{i}k}\\ 0\\ 0 \end{matrix} \right) }[/math]
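
This system can be solved directly. The following sketch (our own numerical check, assuming NumPy; the function name scattering_coefficients is illustrative) assembles the matching conditions at x = -1 and x = 1 and solves for (r, b1, b2, a); for a real potential the result should satisfy |r|^2 + |a|^2 close to 1:

<syntaxhighlight lang="python">
import numpy as np

# Solve the 4x4 matching system for the continuous-spectrum solution of the
# square well of depth b on [-1, 1], given wavenumber k > 0.
def scattering_coefficients(k, b):
    kappa = np.sqrt(b + k**2)
    i = 1j
    M = np.array([
        [-np.exp(-i*k),      np.cos(kappa),       -np.sin(kappa),       0.0],
        [-i*k*np.exp(-i*k),  kappa*np.sin(kappa),  kappa*np.cos(kappa), 0.0],
        [0.0,                np.cos(kappa),        np.sin(kappa),       -np.exp(-i*k)],
        [0.0,               -kappa*np.sin(kappa),  kappa*np.cos(kappa),  i*k*np.exp(-i*k)],
    ], dtype=complex)
    rhs = np.array([np.exp(i*k), -i*k*np.exp(i*k), 0.0, 0.0], dtype=complex)
    r, b1, b2, a = np.linalg.solve(M, rhs)
    return r, b1, b2, a

r, b1, b2, a = scattering_coefficients(k=1.0, b=10.0)
print("reflection r =", r, "transmission a =", a)
print("|r|^2 + |a|^2 =", abs(r)**2 + abs(a)**2)  # should be close to 1
</syntaxhighlight>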

Connection with the KdV

If we substitute the relationship

[math]\displaystyle{ \partial_{x}^{2}w+uw=-\lambda w }[/math]

into the KdV, then after some manipulation we obtain

[math]\displaystyle{ \partial_{t}\lambda w^{2}+\partial_{x}\left( w\partial_{x}Q-\partial _{x}wQ\right) =0 }[/math]

where [math]\displaystyle{ Q=\partial_{t}w+\partial_{x}^{3}w-3\left( \lambda-u\right) \partial_{x}w. }[/math] If we integrate this equation then we obtain the result that

[math]\displaystyle{ \partial_{t}\lambda=0 }[/math]

provided that the eigenfunction [math]\displaystyle{ w }[/math] is bounded (which is true for the bound-state eigenfunctions). This shows that the discrete eigenvalues are unchanged as [math]\displaystyle{ u\left( x,t\right) }[/math] evolves according to the KdV.

Scattering Data

For the discrete spectrum the eigenfunctions behave like

[math]\displaystyle{ w_{n}\left( x\right) =c_{n}\left( t\right) \mathrm{e}^{-k_{n}x} }[/math]

as [math]\displaystyle{ x\rightarrow\infty }[/math] with

[math]\displaystyle{ \int_{-\infty}^{\infty}\left( w_{n}\left( x\right) \right) ^{2}dx=1 }[/math]

The continuous spectrum looks like

[math]\displaystyle{ v\left( x,t\right) \approx \mathrm{e}^{-\mathrm{i}kx}+r\left( k,t\right) \mathrm{e}^{\mathrm{i}kx} ,\ \ \ x\rightarrow-\infty }[/math]
[math]\displaystyle{ v\left( x,t\right) \approx a\left( k,t\right) \mathrm{e}^{-\mathrm{i}kx},\ \ \ x\rightarrow \infty }[/math]

where [math]\displaystyle{ r }[/math] is the reflection coefficient and [math]\displaystyle{ a }[/math] is the transmission coefficient. This gives us the scattering data at [math]\displaystyle{ t=0 }[/math]

[math]\displaystyle{ S\left( \lambda,0\right) =\left( \left\{ k_{n},c_{n}\left( 0\right) \right\} _{n=1}^{N},r\left( k,0\right) ,a\left( k,0\right) \right) }[/math]

The scattering data evolves as

[math]\displaystyle{ k_{n}\left( t\right) =k_{n}\left( 0\right) }[/math]
[math]\displaystyle{ c_{n}\left( t\right) =c_{n}\left( 0\right) \mathrm{e}^{4k_{n}^{3}t} }[/math]
[math]\displaystyle{ r\left( k,t\right) =r\left( k,0\right) \mathrm{e}^{8ik^{3}t} }[/math]
[math]\displaystyle{ a\left( k,t\right) =a\left( k,0\right) }[/math]
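
These relations are simple enough to transcribe directly. The following minimal sketch (ours, not from the original notes; the names kn, cn0, r0, a0 are illustrative) evolves a given set of scattering data in time:

<syntaxhighlight lang="python">
import numpy as np

# Evolve the scattering data according to the relations above.
def evolve_scattering_data(kn, cn0, r0, a0, t):
    """kn: discrete eigenvalue parameters (unchanged in time);
    cn0: norming constants at t = 0; r0, a0: functions of k at t = 0."""
    cn_t = cn0 * np.exp(4 * kn**3 * t)
    r_t = lambda k: r0(k) * np.exp(8j * k**3 * t)
    a_t = lambda k: a0(k)  # the transmission coefficient does not evolve
    return kn, cn_t, r_t, a_t
</syntaxhighlight>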

We can recover [math]\displaystyle{ u }[/math] from scattering data. We write

[math]\displaystyle{ F\left( x,t\right) =\sum_{n=1}^{N}c_{n}^{2}\left( t\right) \mathrm{e}^{-k_{n} x}+\int_{-\infty}^{\infty}r\left( k,t\right) \mathrm{e}^{\mathrm{i}kx}\mathrm{d}k }[/math]

Then solve

[math]\displaystyle{ K\left( x,y;t\right) +F\left( x+y;t\right) +\int_{x}^{\infty}K\left( x,z;t\right) F\left( z+y;t\right) \mathrm{d}z=0 }[/math]

This is a linear integral equation called the Gelfand-Levitan-Marchenko equation. We then find [math]\displaystyle{ u }[/math] from

[math]\displaystyle{ u\left( x,t\right) =2\partial_{x}K\left( x,x,t\right) }[/math]


Reflectionless Potential

In general the IST is difficult to carry out. However, there is a simplification we can make when we have a reflectionless potential (which we will see gives rise to the soliton solutions). A reflectionless potential is one for which [math]\displaystyle{ r\left( k,0\right) =0 }[/math] for all values of [math]\displaystyle{ k. }[/math] In this case

[math]\displaystyle{ F\left( x,t\right) =\sum_{n=1}^{N}c_{n}^{2}\left( t\right) \mathrm{e}^{-k_{n}x} }[/math]

then

[math]\displaystyle{ K\left( x,y,t\right) +\sum_{n=1}^{N}c_{n}^{2}\left( t\right) \mathrm{e}^{-k_{n}\left( x+y\right) }+\int_{x}^{\infty}K\left( x,z,t\right) \sum_{n=1}^{N}c_{n}^{2}\left( t\right) \mathrm{e}^{-k_{n}\left( y+z\right) }dz=0 }[/math]

From the equation we can see that

[math]\displaystyle{ K\left( x,y,t\right) =-\sum_{m=1}^{N}c_{m}\left( t\right) v_{m}\left( x\right) \mathrm{e}^{-k_{m}y} }[/math]

If we substitute this into the equation we obtain

[math]\displaystyle{ -\sum_{n=1}^{N}c_{n}\left( t\right) v_{n}\left( x\right) \mathrm{e}^{-k_{n}y} +\sum_{n=1}^{N}c_{n}^{2}\left( t\right) \mathrm{e}^{-k_{n}\left( x+y\right) } +\int_{x}^{\infty}-\sum_{m=1}^{N}c_{m}\left( t\right) v_{m}\left( x\right) \mathrm{e}^{-k_{m}z}\sum_{n=1}^{N}c_{n}^{2}\left( t\right) \mathrm{e}^{-k_{n}\left( y+z\right) }dz=0 }[/math]

which leads to

[math]\displaystyle{ -\sum_{n=1}^{N}c_{n}\left( t\right) v_{n}\left( x\right) \mathrm{e}^{-k_{n}y} +\sum_{n=1}^{N}c_{n}^{2}\left( t\right) \mathrm{e}^{-k_{n}\left( x+y\right) } -\sum_{n=1}^{N}\sum_{m=1}^{N}\frac{c_{m}\left( t\right) c_{n}^{2}\left( t\right) }{k_{n}+k_{m}}v_{m}\left( x\right) \mathrm{e}^{-k_{m}x}\mathrm{e}^{-k_{n}\left( y+x\right) }=0 }[/math]

Since this must hold for all [math]\displaystyle{ y }[/math] and the functions [math]\displaystyle{ \mathrm{e}^{-k_{n}y} }[/math] are linearly independent, each coefficient in the sum over [math]\displaystyle{ n }[/math] must vanish; cancelling the common factor [math]\displaystyle{ c_{n}\left( t\right) }[/math] we obtain

[math]\displaystyle{ -v_{n}\left( x\right) +c_{n}\left( t\right) \mathrm{e}^{-k_{n}x}-\sum_{m=1} ^{N}\frac{c_{n}\left( t\right) c_{m}\left( t\right) }{k_{n}+k_{m}} v_{m}\left( x\right) \mathrm{e}^{-\left( k_{m}+k_{n}\right) x}=0 }[/math]

which is an algebraic (finite-dimensional) system for the unknowns [math]\displaystyle{ v_{n}. }[/math] We can write this as

[math]\displaystyle{ \left( \mathbf{I}+\mathbf{C}\right) \vec{v}=\vec{f} }[/math]

where [math]\displaystyle{ f_{m}=c_{m}\left( t\right) \mathrm{e}^{-k_{m}x} }[/math] and

[math]\displaystyle{ c_{mn}=\frac{c_{n}\left( t\right) c_{m}\left( t\right) }{k_{n}+k_{m}}\mathrm{e}^{-\left( k_{m}+k_{n}\right) x} }[/math]
so that

[math]\displaystyle{ K\left( x,y,t\right) =-\sum_{m=1}^{N}c_{m}\left( t\right) \left[ \left( \mathbf{I}+\mathbf{C}\right) ^{-1}\vec{f}\right] _{m}\mathrm{e}^{-k_{m}y} }[/math]

This leads to

[math]\displaystyle{ u\left( x,t\right) =2\partial_{x}^{2}\log\left[ \det\left( \mathbf{I} +\mathbf{C}\right) \right] }[/math]
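
For the reflectionless case this formula is easy to evaluate numerically. The sketch below (our own, assuming NumPy; the name u_reflectionless is illustrative) builds the matrix C from the k_n and c_n(0), evolves the c_n in time as above, and approximates the second x-derivative of log det(I + C) by central differences:

<syntaxhighlight lang="python">
import numpy as np

# Reflectionless-potential reconstruction u(x, t) = 2 d^2/dx^2 log det(I + C),
# with C_{mn} = c_m(t) c_n(t) exp(-(k_m + k_n) x) / (k_m + k_n)
# and c_n(t) = c_n(0) exp(4 k_n^3 t).
def u_reflectionless(x, t, kn, cn0, dx=1e-4):
    kn = np.asarray(kn, dtype=float)
    cn = np.asarray(cn0, dtype=float) * np.exp(4 * kn**3 * t)

    def log_det(xv):
        C = np.outer(cn, cn) * np.exp(-(kn[:, None] + kn[None, :]) * xv) \
            / (kn[:, None] + kn[None, :])
        sign, logdet = np.linalg.slogdet(np.eye(len(kn)) + C)
        return logdet

    # Second derivative of log det(I + C) by central differences.
    return 2 * (log_det(x + dx) - 2 * log_det(x) + log_det(x - dx)) / dx**2

# Example: a two-soliton profile at t = 0 (parameter values are illustrative).
xs = np.linspace(-10, 10, 201)
u_vals = [u_reflectionless(x, t=0.0, kn=[1.0, 2.0], cn0=[1.0, 1.0]) for x in xs]
print("peak value:", max(u_vals))
</syntaxhighlight>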

Let's consider some simple examples. First of all, if [math]\displaystyle{ N=1 }[/math] (the single-soliton solution) we get

[math]\displaystyle{ \begin{matrix} K\left( x,x,t\right) & =-\frac{c_{1}\left( t\right) c_{1}\left( t\right) \mathrm{e}^{-k_{1}x}\mathrm{e}^{-k_{1}x}}{1+\frac{c_{1}\left( t\right) c_{1}\left( t\right) }{k_{1}+k_{1}}\mathrm{e}^{-\left( k_{1}+k_{1}\right) x}}\\ & =\frac{-2k_{1}}{1+\mathrm{e}^{2k_{1}x-8k_{1}^{3}t-\alpha}} \end{matrix} }[/math]

where [math]\displaystyle{ \mathrm{e}^{\alpha}=c_{1}^{2}\left( 0\right) /\left( 2k_{1}\right) . }[/math] Therefore

[math]\displaystyle{ \begin{matrix} u\left( x,t\right) & =2\partial_{x}K\left( x,x,t\right) \\ & =\frac{8k_{1}^{2}\mathrm{e}^{2k_{1}x-8k_{1}^{3}t-\alpha}}{\left( 1+\mathrm{e}^{2k_{1}x-8k_{1}^{3}t-\alpha}\right) ^{2}}\\ & =\frac{8k_{1}^{2}}{\left( \mathrm{e}^{\theta}+\mathrm{e}^{-\theta}\right) ^{2}}\\ & =2k_{1}^{2}\mathrm{sech}^{2}\left\{ k_{1}\left( x-x_{0}\right) -4k_{1}^{3}t\right\} \end{matrix} }[/math]

where [math]\displaystyle{ \theta=k_{1}x-4k_{1}^{3}t-\alpha/2 }[/math] and [math]\displaystyle{ x_{0}=\alpha/\left( 2k_{1}\right) }[/math]. This is of course the single-soliton solution.
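
As a consistency check (again our own sketch, reusing the u_reflectionless function from the sketch above), the N = 1 case of the log-det formula should reproduce this sech-squared profile:

<syntaxhighlight lang="python">
import numpy as np

# Compare the numerical log-det reconstruction with the closed-form
# single-soliton solution u(x, t) = 2 k1^2 sech^2(k1 (x - x0) - 4 k1^3 t),
# where x0 = alpha / (2 k1) and e^alpha = c1(0)^2 / (2 k1).
k1, c10, t = 1.5, 1.0, 0.3
alpha = np.log(c10**2 / (2 * k1))
x0 = alpha / (2 * k1)
for x in (-2.0, 0.0, 2.0):
    numeric = u_reflectionless(x, t, kn=[k1], cn0=[c10])
    analytic = 2 * k1**2 / np.cosh(k1 * (x - x0) - 4 * k1**3 * t)**2
    print(x, numeric, analytic)
</syntaxhighlight>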