Properties of the Linear Schrodinger Equation

Nonlinear PDE's Course
Current Topic Properties of the Linear Schrodinger Equation
Next Topic Connection between KdV and the Schrodinger Equation
Previous Topic Introduction to the Inverse Scattering Transform


The linear Schrodinger equation

[math]\displaystyle{ \partial_{x}^{2}w+uw=-\lambda w }[/math]

has two kinds of solutions for [math]\displaystyle{ u\rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty. }[/math] The first are waves and the second are bound solutions. It is well known that there are at most a finite number of bound solutions (provided [math]\displaystyle{ u\rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty }[/math] sufficiently rapidly) and a continuum of solutions for the incident waves. This is most easily seen through the following examples.

Example 1: [math]\displaystyle{ \delta }[/math] function potential

We consider here the case when [math]\displaystyle{ u\left( x,0\right) = u_0 \delta\left( x\right) . }[/math] Note that this function can be thought of as the limit as [math]\displaystyle{ \varepsilon\rightarrow0 }[/math] of the potential

[math]\displaystyle{ u\left( x\right) =\left\{ \begin{matrix} 0 & x\notin\left[ -\varepsilon,\varepsilon\right] \\ \frac{u_{0}}{2\varepsilon} & x\in\left[ -\varepsilon,\varepsilon\right] \end{matrix} \right. }[/math]
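To see this, note that for every [math]\displaystyle{ \varepsilon }[/math] the area under this potential is

[math]\displaystyle{ \int_{-\varepsilon}^{\varepsilon}\frac{u_{0}}{2\varepsilon}\,\mathrm{d}x=u_{0}, }[/math]

which is the defining property of [math]\displaystyle{ u_{0}\delta\left( x\right) }[/math] in the limit [math]\displaystyle{ \varepsilon\rightarrow0. }[/math]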

In this case we need to solve

[math]\displaystyle{ \partial_{x}^{2}w+ u_0\delta(x) w=-\lambda w }[/math]

We consider the case of [math]\displaystyle{ \lambda\lt 0 }[/math] and [math]\displaystyle{ \lambda\gt 0 }[/math] separately. For the first case we write [math]\displaystyle{ \lambda=-k^{2} }[/math] and we obtain

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} ae^{kx}, & x\lt 0\\ be^{-kx}, & x\gt 0 \end{matrix} \right. }[/math]

We have two conditions at [math]\displaystyle{ x=0: }[/math] [math]\displaystyle{ w }[/math] must be continuous at [math]\displaystyle{ 0, }[/math] and [math]\displaystyle{ \partial_{x}w\left( 0^{+}\right) -\partial_{x}w\left( 0^{-}\right) +u_0 w\left( 0\right) =0. }[/math] This final condition is obtained by integrating 'across' zero as follows

[math]\displaystyle{ \begin{align} \int_{0^{-}}^{0^{+}} \partial_x^2 w + u_0\delta(x) w + \lambda w \ \mathrm{d}x = 0. \end{align} }[/math]
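Evaluating each term (and noting that [math]\displaystyle{ w }[/math] is bounded, so the [math]\displaystyle{ \lambda w }[/math] term contributes nothing as the interval shrinks) gives

[math]\displaystyle{ \int_{0^{-}}^{0^{+}}\partial_{x}^{2}w\,\mathrm{d}x=\partial_{x}w\left( 0^{+}\right) -\partial_{x}w\left( 0^{-}\right) ,\quad\int_{0^{-}}^{0^{+}}u_{0}\delta\left( x\right) w\,\mathrm{d}x=u_{0}w\left( 0\right) ,\quad\int_{0^{-}}^{0^{+}}\lambda w\,\mathrm{d}x=0, }[/math]

which is exactly the jump condition stated above.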

Continuity at zero gives [math]\displaystyle{ a=b, }[/math] and substituting the solution into the jump condition gives [math]\displaystyle{ -kb-ka+u_{0}a=0, }[/math] so that [math]\displaystyle{ k=u_{0}/2. }[/math] We need to normalise the eigenfunctions so that

[math]\displaystyle{ \int_{-\infty}^{\infty}\left( w\left( x\right) \right) ^{2}\mathrm{d}x=1. }[/math]

Therefore

[math]\displaystyle{ 2\int_{0}^{\infty}\left( ae^{-u_{0}x/2}\right) ^{2}\mathrm{d}x=1 }[/math]

Evaluating the integral gives [math]\displaystyle{ 2a^{2}/u_{0}=1, }[/math] which means that [math]\displaystyle{ a=\sqrt{u_{0}/2}. }[/math] Therefore, there is only one discrete spectral point, which we denote by [math]\displaystyle{ k_{1}=u_{0}/2, }[/math] with corresponding eigenfunction

[math]\displaystyle{ w_{1}\left( x\right) =\left\{ \begin{matrix} \sqrt{k_{1}}e^{k_{1}x}, & x\lt 0\\ \sqrt{k_{1}}e^{-k_{1}x}, & x\gt 0 \end{matrix} \right. }[/math]

The continuous eigenfunctions, which correspond to [math]\displaystyle{ \lambda=k^{2}\gt 0, }[/math] are of the form

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} \mathrm{e}^{-\mathrm{i}kx}+r\mathrm{e}^{\mathrm{i}kx}, & x\lt 0\\ a\mathrm{e}^{-\mathrm{i}kx}, & x\gt 0 \end{matrix} \right. }[/math]

Again we have the conditions that [math]\displaystyle{ w }[/math] must be continuous at [math]\displaystyle{ 0 }[/math] and [math]\displaystyle{ \partial_{x}w\left( 0^{+}\right) -\partial_{x}w\left( 0^{-}\right) +u_{0}w\left( 0\right) =0. }[/math] This gives us

[math]\displaystyle{ \begin{matrix} 1+r & =a\\ -ika+ik-ikr & =-au_{0} \end{matrix} }[/math]

which has solution

[math]\displaystyle{ \begin{matrix} r & =\frac{u_{0}}{2ik-u_{0}}\\ a & =\frac{2ik}{2ik-u_{0}} \end{matrix} }[/math]
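Note that these coefficients satisfy the usual flux-conservation identity for a real potential,

[math]\displaystyle{ \left\vert r\right\vert ^{2}+\left\vert a\right\vert ^{2}=\frac{u_{0}^{2}}{4k^{2}+u_{0}^{2}}+\frac{4k^{2}}{4k^{2}+u_{0}^{2}}=1, }[/math]

which provides a quick check on the algebra.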

Example 2: Hat Function Potential

The properties of the eigenfunctions are perhaps seen most easily through the following example

[math]\displaystyle{ u\left( x\right) =\left\{ \begin{matrix} 0 & x\notin\left[ -\varsigma,\varsigma\right] \\ b & x\in\left[ -\varsigma,\varsigma\right] \end{matrix} \right. }[/math]

where [math]\displaystyle{ b\gt 0. }[/math]

Case when [math]\displaystyle{ \lambda\lt 0 }[/math]

If we solve this equation for the case when [math]\displaystyle{ \lambda\lt 0, }[/math] writing [math]\displaystyle{ \lambda=-k^{2}, }[/math] we get

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} a_{1}e^{kx}, & x\lt -\varsigma,\\ b_{1}\cos\kappa x+b_{2}\sin\kappa x, & -\varsigma\lt x\lt \varsigma,\\ a_{2}e^{-kx}, & x\gt \varsigma, \end{matrix} \right. }[/math]

where [math]\displaystyle{ \kappa=\sqrt{b-k^{2}}, }[/math] which means that [math]\displaystyle{ 0\leq k\leq\sqrt{b} }[/math] (there is no solution for [math]\displaystyle{ k\gt \sqrt{b} }[/math]). We then match [math]\displaystyle{ w }[/math] and its derivative at [math]\displaystyle{ x=\pm\varsigma }[/math] to solve for the constants. Because the potential is even, the eigenfunctions can be taken to be either even or odd, and this leads to two systems of equations, one for the even solutions ([math]\displaystyle{ a_{1}=a_{2} }[/math] and [math]\displaystyle{ b_{2}=0 }[/math]) and one for the odd solutions ([math]\displaystyle{ a_{1}=-a_{2} }[/math] and [math]\displaystyle{ b_{1}=0 }[/math]). The even solutions are of the form

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} a_{1}e^{kx}, & x\lt -\varsigma,\\ b_{1}\cos\kappa x, & -\varsigma\lt x\lt \varsigma,\\ a_{1}e^{-kx}, & x\gt \varsigma, \end{matrix} \right. }[/math]

If we impose the condition that the function and its derivative are continuous at [math]\displaystyle{ x=\pm\varsigma }[/math] we obtain the following equation

[math]\displaystyle{ \left( \begin{matrix} e^{-k\varsigma} & -\cos\kappa\varsigma\\ ke^{-k\varsigma} & -\kappa\sin\kappa\varsigma \end{matrix} \right) \left( \begin{matrix} a_{1}\\ b_{1} \end{matrix} \right) =\left( \begin{matrix} 0\\ 0 \end{matrix} \right) }[/math]

This has non-trivial solutions when

[math]\displaystyle{ \det\left( \begin{matrix} e^{-k\varsigma} & -\cos\kappa\varsigma\\ ke^{-k\varsigma} & -\kappa\sin\kappa\varsigma \end{matrix} \right) =0 }[/math]

which gives us the equation

[math]\displaystyle{ -\kappa e^{-k\varsigma}\sin\kappa\varsigma+k\cos\kappa\varsigma e^{-k\varsigma}=0 }[/math]

or

[math]\displaystyle{ \tan\kappa\varsigma=\frac{k}{\kappa} }[/math]

We know that [math]\displaystyle{ 0\lt \kappa\lt \sqrt{b} }[/math] with [math]\displaystyle{ k=\sqrt{b-\kappa^{2}}, }[/math] and if we plot both sides of this equation we see that there are only a finite number of intersections, and hence a finite number of even bound states.

The odd solutions are of the form

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} a_{1}e^{kx}, & x\lt -\varsigma,\\ b_{2}\sin\kappa x, & -\varsigma\lt x\lt \varsigma,\\ -a_{1}e^{-kx}, & x\gt \varsigma, \end{matrix} \right. }[/math]

Imposing continuity of the function and its derivative at [math]\displaystyle{ x=\pm\varsigma }[/math] gives

[math]\displaystyle{ \left( \begin{matrix} e^{-k\varsigma} & \sin\kappa\varsigma\\ ke^{-k\varsigma} & -\kappa\cos\kappa\varsigma \end{matrix} \right) \left( \begin{matrix} a_{1}\\ b_{2} \end{matrix} \right) =\left( \begin{matrix} 0\\ 0 \end{matrix} \right) }[/math]

This has non-trivial solutions when

[math]\displaystyle{ \det\left( \begin{matrix} e^{-k\varsigma} & \sin\kappa\varsigma\\ ke^{-k\varsigma} & -\kappa\cos\kappa\varsigma \end{matrix} \right) =0 }[/math]

which gives us the equation

[math]\displaystyle{ -\kappa e^{-k\varsigma}\cos\kappa\varsigma-k\sin\kappa\varsigma e^{-k\varsigma}=0 }[/math]

or

[math]\displaystyle{ \tan\kappa\varsigma=-\frac{\kappa}{k} }[/math]
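Both of these transcendental equations are easily solved numerically. The following is a minimal Python sketch (the values of [math]\displaystyle{ b }[/math] and [math]\displaystyle{ \varsigma }[/math] are purely illustrative) which locates the even and odd bound states by finding the roots of the equivalent pole-free forms [math]\displaystyle{ \kappa\sin\kappa\varsigma-k\cos\kappa\varsigma=0 }[/math] and [math]\displaystyle{ \kappa\cos\kappa\varsigma+k\sin\kappa\varsigma=0 }[/math] on [math]\displaystyle{ 0\lt \kappa\lt \sqrt{b}. }[/math]

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

# Illustrative hat-potential parameters (assumed values): height b, half-width varsigma = s
b, s = 20.0, 1.0

def even_eq(kappa):
    # kappa*sin(kappa*s) - k*cos(kappa*s) = 0 is equivalent to tan(kappa*s) = k/kappa
    k = np.sqrt(b - kappa**2)
    return kappa * np.sin(kappa * s) - k * np.cos(kappa * s)

def odd_eq(kappa):
    # kappa*cos(kappa*s) + k*sin(kappa*s) = 0 is equivalent to tan(kappa*s) = -kappa/k
    k = np.sqrt(b - kappa**2)
    return kappa * np.cos(kappa * s) + k * np.sin(kappa * s)

def find_roots(f):
    # scan (0, sqrt(b)) for sign changes and refine each bracket with Brent's method
    grid = np.linspace(1e-9, np.sqrt(b) - 1e-9, 5001)
    vals = np.array([f(x) for x in grid])
    return [brentq(f, x0, x1)
            for x0, x1, v0, v1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:])
            if v0 * v1 < 0]

print("even kappa roots:", find_roots(even_eq))
print("odd  kappa roots:", find_roots(odd_eq))
</syntaxhighlight>

For these values there are only a handful of roots, illustrating that the number of bound states is finite.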

Case when [math]\displaystyle{ \lambda\gt 0 }[/math]

When [math]\displaystyle{ \lambda\gt 0 }[/math] we write [math]\displaystyle{ \lambda=k^{2} }[/math] and we obtain the solution

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} \mathrm{e}^{-\mathrm{i}kx}+r\mathrm{e}^{\mathrm{i}kx}, & x\lt -\varsigma,\\ b_{1}\cos\kappa x+b_{2}\sin\kappa x, & -\varsigma\lt x\lt \varsigma,\\ a\mathrm{e}^{-\mathrm{i}kx}, & x\gt \varsigma \end{matrix} \right. }[/math]

where [math]\displaystyle{ \kappa=\sqrt{b+k^{2}}. }[/math] Matching [math]\displaystyle{ w }[/math] and its derivatives at [math]\displaystyle{ x=\pm\varsigma }[/math] we obtain

[math]\displaystyle{ \left( \begin{matrix} -\mathrm{e}^{-\mathrm{i}k\varsigma} & \cos\kappa\varsigma & -\sin\kappa\varsigma & 0\\ ik\mathrm{e}^{-\mathrm{i}k\varsigma} & -\kappa\sin\kappa\varsigma & -\kappa\cos\kappa \varsigma & 0\\ 0 & \cos\kappa\varsigma & \sin\kappa\varsigma & -\mathrm{e}^{-\mathrm{i}k\varsigma}\\ 0 & -\kappa\sin\kappa\varsigma & \kappa\cos\kappa\varsigma & ik\mathrm{e}^{-\mathrm{i}k\varsigma} \end{matrix} \right) \left( \begin{matrix} r\\ b_{1}\\ b_{2}\\ a \end{matrix} \right) =\left( \begin{matrix} \mathrm{e}^{\mathrm{i}k\varsigma}\\ ik\mathrm{e}^{\mathrm{i}k\varsigma}\\ 0\\ 0 \end{matrix} \right) }[/math]
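As a rough consistency check, a minimal Python sketch (with arbitrary illustrative parameter values) can solve this [math]\displaystyle{ 4\times4 }[/math] system directly; the resulting coefficients should satisfy [math]\displaystyle{ \left\vert r\right\vert ^{2}+\left\vert a\right\vert ^{2}=1, }[/math] as required for scattering by a real potential.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative parameters (assumed values): height b, half-width varsigma = s, wavenumber k
b, s, k = 2.0, 1.0, 0.7
kappa = np.sqrt(b + k**2)

e = np.exp(-1j * k * s)                      # e^{-i k varsigma}
c, sn = np.cos(kappa * s), np.sin(kappa * s)

# matching conditions at x = -varsigma (rows 1-2) and x = +varsigma (rows 3-4)
A = np.array([
    [-e,          c,           -sn,          0],
    [1j * k * e, -kappa * sn, -kappa * c,    0],
    [0,           c,            sn,         -e],
    [0,          -kappa * sn,   kappa * c,   1j * k * e],
], dtype=complex)
rhs = np.array([np.exp(1j * k * s), 1j * k * np.exp(1j * k * s), 0, 0], dtype=complex)

r, b1, b2, a = np.linalg.solve(A, rhs)
print(abs(r)**2 + abs(a)**2)                 # should print a value very close to 1
</syntaxhighlight>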