Properties of the Linear Schrodinger Equation

Nonlinear PDE's Course
Current Topic: Properties of the Linear Schrodinger Equation
Next Topic: Connection between KdV and the Schrodinger Equation
Previous Topic: Introduction to the Inverse Scattering Transform


The linear Schrödinger equation

[math]\displaystyle{ \partial_{x}^{2}w+uw=-\lambda w }[/math]

has two kinds of solutions for [math]\displaystyle{ u\rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty. }[/math] The first are waves and the second are bound solutions. It is well known that there are at most a finite number of bound solutions (provided [math]\displaystyle{ u\rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty }[/math] sufficiently rapidly) and a continuum of solutions for the incident waves. This is most easily seen through the following examples.

Example 1: [math]\displaystyle{ \delta }[/math] function potential

We consider here the case when [math]\displaystyle{ u\left( x,0\right) = u_0 \delta\left( x\right) . }[/math] Note that this function can be thought of as the limit as [math]\displaystyle{ \varepsilon\rightarrow0 }[/math] of the potential

[math]\displaystyle{ u\left( x\right) =\left\{ \begin{matrix} 0 & x\notin\left[ -\varepsilon,\varepsilon\right] \\ \frac{u_{0}}{2\varepsilon} & x\in\left[ -\varepsilon,\varepsilon\right] \end{matrix} \right. }[/math]
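
In particular, the area under this approximating potential is [math]\displaystyle{ u_{0} }[/math] for every [math]\displaystyle{ \varepsilon }[/math], matching the defining property of the [math]\displaystyle{ \delta }[/math] function:

[math]\displaystyle{ \int_{-\varepsilon}^{\varepsilon}\frac{u_{0}}{2\varepsilon}\,\mathrm{d}x=u_{0}=\int_{-\infty}^{\infty}u_{0}\delta\left( x\right) \,\mathrm{d}x. }[/math]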

In this case we need to solve

[math]\displaystyle{ \partial_{x}^{2}w+ u_0\delta(x) w=-\lambda w }[/math]

Case when [math]\displaystyle{ \lambda\lt 0 }[/math]

We consider the cases of [math]\displaystyle{ \lambda\lt 0 }[/math] and [math]\displaystyle{ \lambda\gt 0 }[/math] separately. For the first case we write [math]\displaystyle{ \lambda=-k^{2} }[/math] and we obtain (as [math]\displaystyle{ w\rightarrow0 }[/math] as [math]\displaystyle{ x\rightarrow\pm\infty }[/math])

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} ae^{kx}, & x\lt 0\\ be^{-kx}, & x\gt 0 \end{matrix} \right. }[/math]

We have two conditions at [math]\displaystyle{ x=0 }[/math]: [math]\displaystyle{ w }[/math] must be continuous at [math]\displaystyle{ 0 }[/math], and [math]\displaystyle{ \partial_{x}w\left( 0^{+}\right) -\partial_{x}w\left( 0^{-}\right) +u_0 w\left( 0\right) =0. }[/math] This final condition is obtained by integrating 'across' zero as follows

[math]\displaystyle{ \begin{align} \int_{0^{-}}^{0^{+}} \left(\partial_x^2 w +u_0\delta(x) w + \lambda w \right) \ \mathrm{d}x = 0. \end{align} }[/math]
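
Evaluating each term of this integral over the shrinking interval makes the jump condition explicit (the [math]\displaystyle{ \lambda w }[/math] term contributes nothing because [math]\displaystyle{ w }[/math] is bounded):

[math]\displaystyle{ \int_{0^{-}}^{0^{+}}\partial_x^2 w \,\mathrm{d}x = \partial_{x}w\left( 0^{+}\right) -\partial_{x}w\left( 0^{-}\right), \qquad \int_{0^{-}}^{0^{+}}u_0\delta(x) w \,\mathrm{d}x = u_0 w\left( 0\right), \qquad \int_{0^{-}}^{0^{+}}\lambda w \,\mathrm{d}x = 0. }[/math]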

This gives the condition that [math]\displaystyle{ a=b }[/math] and [math]\displaystyle{ k=u_{0}/2. }[/math] We need to normalise the eigenfunctions so that

[math]\displaystyle{ \int_{-\infty}^{\infty}\left( w\left( x\right) \right) ^{2}\mathrm{d}x=1. }[/math]

Therefore

[math]\displaystyle{ 2\int_{0}^{\infty}\left( ae^{-u_{0}x/2}\right) ^{2}\mathrm{d}x=1 }[/math]

[math]\displaystyle{ 2\left( a^2\dfrac{e^{-u_{0}x}}{-u_{0}}\right)\Bigg|_{0}^{\infty}=1 }[/math]

[math]\displaystyle{ \dfrac{2a^2}{u_{0}}=1 }[/math]

which means that [math]\displaystyle{ a=\sqrt{u_{0}/2}. }[/math] Therefore, there is only one discrete spectral point, which we denote by [math]\displaystyle{ k_{1}=u_{0}/2 }[/math], with eigenfunction

[math]\displaystyle{ w_{1}\left( x\right) =\left\{ \begin{matrix} \sqrt{k_{1}}e^{k_{1}x}, & x\lt 0\\ \sqrt{k_{1}}e^{-k_{1}x}, & x\gt 0 \end{matrix} \right. }[/math]
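
As a sanity check, this bound state can also be found numerically. The following is a minimal sketch (not part of the original article) that discretises the operator [math]\displaystyle{ -\partial_x^2-u }[/math] with a grid-point approximation of the [math]\displaystyle{ \delta }[/math] function; the values of [math]\displaystyle{ u_0 }[/math], the domain and the grid are illustrative choices. It should report a single negative eigenvalue close to [math]\displaystyle{ -u_0^2/4 }[/math] and a peak of the normalised eigenfunction close to [math]\displaystyle{ \sqrt{u_0/2} }[/math].

<syntaxhighlight lang="python">
import numpy as np

u0 = 2.0                      # assumed strength of the delta potential
L, n = 30.0, 1501             # domain [-L, L] and (odd) number of grid points
x = np.linspace(-L, L, n)
dx = x[1] - x[0]

# Discrete delta function: one grid point of height u0/dx, so that sum(u)*dx = u0
u = np.zeros(n)
u[n // 2] = u0 / dx

# Central-difference second derivative with Dirichlet ends
D2 = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / dx**2

H = -D2 - np.diag(u)          # eigenvalue problem H w = lambda w
lam, vec = np.linalg.eigh(H)

w1 = vec[:, 0] / np.sqrt(dx)  # normalise so that int w1^2 dx = 1
print("number of negative eigenvalues:", np.sum(lam < 0))      # expect 1
print("lambda_1 numeric vs -u0^2/4:", lam[0], -u0**2 / 4)
print("peak of w1 vs sqrt(u0/2):", np.max(np.abs(w1)), np.sqrt(u0 / 2))
</syntaxhighlight>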

Case when [math]\displaystyle{ \lambda\gt 0 }[/math]

The continuous eigenfunctions corresponding to [math]\displaystyle{ \lambda=k^{2}\gt 0 }[/math] are of the form

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} \mathrm{e}^{-\mathrm{i}kx}+r\mathrm{e}^{\mathrm{i}kx}, & x\lt 0\\ t\mathrm{e}^{-\mathrm{i}kx}, & x\gt 0 \end{matrix} \right. }[/math]

where [math]\displaystyle{ \mathrm{e}^{-\mathrm{i}kx} }[/math] is the incident wave, [math]\displaystyle{ r\mathrm{e}^{\mathrm{i}kx} }[/math] is the reflected wave, and [math]\displaystyle{ t\mathrm{e}^{-\mathrm{i}kx} }[/math] is the transmitted wave.

Again we have the conditions that [math]\displaystyle{ w }[/math] must be continuous at [math]\displaystyle{ 0 }[/math] and [math]\displaystyle{ \partial_{x}w\left( 0^{+}\right) -\partial_{x}w\left( 0^{-}\right) +u_{0}w\left( 0\right) =0. }[/math] This gives us

[math]\displaystyle{ \begin{matrix} 1+r & =t\\ -ikt+ik-ikr & =-tu_{0} \end{matrix} }[/math]

which has solution

[math]\displaystyle{ \begin{matrix} r & =\frac{u_{0}}{2ik-u_{0}}\\ t & =\frac{2ik}{2ik-u_{0}} \end{matrix} }[/math]
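
These coefficients can be checked directly. The short sketch below (with illustrative values of [math]\displaystyle{ u_0 }[/math] and [math]\displaystyle{ k }[/math], not taken from the article) verifies the continuity condition, the derivative jump condition, and energy conservation [math]\displaystyle{ |r|^2+|t|^2=1 }[/math].

<syntaxhighlight lang="python">
import numpy as np

u0, k = 2.0, 1.5               # illustrative potential strength and wavenumber
r = u0 / (2j * k - u0)
t = 2j * k / (2j * k - u0)

# continuity of w at x = 0
print(np.isclose(1 + r, t))
# derivative jump: dw/dx(0+) - dw/dx(0-) + u0 w(0) = 0
print(np.isclose((-1j * k * t) - (-1j * k + 1j * k * r) + u0 * t, 0))
# energy conservation: |r|^2 + |t|^2 = 1
print(np.isclose(abs(r)**2 + abs(t)**2, 1.0))
</syntaxhighlight>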

Example 2: Hat Function Potential

The properties of the eigenfunctions are perhaps seen most easily through the following example

[math]\displaystyle{ u\left( x\right) =\left\{ \begin{matrix} 0 & x\notin\left[ -\zeta,\zeta\right] \\ b & x\in\left[ -\zeta,\zeta\right] \end{matrix} \right. }[/math]

where [math]\displaystyle{ b\gt 0. }[/math]

Case when [math]\displaystyle{ \lambda\lt 0 }[/math]

If we solve this equation for the case when [math]\displaystyle{ \lambda\lt 0, }[/math] writing [math]\displaystyle{ \lambda=-k^{2}, }[/math] we get

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} a_{1}e^{kx}, & x\lt -\zeta,\\ b_{1}\cos\kappa x+b_{2}\sin\kappa x, & -\zeta\lt x \lt \zeta,\\ a_{2}e^{-kx}, & x\gt \zeta, \end{matrix} \right. }[/math]

where [math]\displaystyle{ \kappa=\sqrt{b-k^{2}} }[/math] which means that [math]\displaystyle{ 0\leq k\leq\sqrt{b} }[/math] (there is no solution for [math]\displaystyle{ k\gt \sqrt{b}). }[/math] We then match [math]\displaystyle{ w }[/math] and its derivative at [math]\displaystyle{ x=\pm\zeta }[/math] to solve for the coefficients. This leads to two systems of equations, one for the even solutions ([math]\displaystyle{ a_{1}=a_{2} }[/math] and [math]\displaystyle{ b_{2}=0 }[/math]) and one for the odd solutions ([math]\displaystyle{ a_{1}=-a_{2} }[/math] and [math]\displaystyle{ b_{1}=0) }[/math]. The solution for the even solutions is

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} a_{1}e^{kx}, & x\lt -\zeta,\\ b_{1}\cos\kappa x, & -\zeta\lt x \lt \zeta,\\ a_{1}e^{-kx}, & x\gt \zeta, \end{matrix} \right. }[/math]

If we impose the condition that the function and its derivative are continuous at [math]\displaystyle{ x=\pm\zeta }[/math] we obtain the following equation

[math]\displaystyle{ \left( \begin{matrix} e^{-k\zeta} & -\cos\kappa\zeta\\ ke^{-k\zeta} & -\kappa\sin\kappa\zeta \end{matrix} \right) \left( \begin{matrix} a_{1}\\ b_{1} \end{matrix} \right) =\left( \begin{matrix} 0\\ 0 \end{matrix} \right) }[/math]

This has non-trivial solutions when

[math]\displaystyle{ \det\left( \begin{matrix} e^{-k\zeta} & -\cos\kappa\zeta\\ ke^{-k\zeta} & -\kappa\sin\kappa\zeta \end{matrix} \right) =0 }[/math]

which gives us the equation

[math]\displaystyle{ -\kappa e^{-k\zeta}\sin\kappa\zeta+k\cos\kappa\zeta e^{-k\zeta}=0 }[/math]

or

[math]\displaystyle{ \tan\kappa\zeta=\frac{k}{\kappa} }[/math]

We know that [math]\displaystyle{ 0\lt \kappa\lt \sqrt{b} }[/math], and if we plot both sides of this equation we see that we obtain a finite number of solutions.
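
The root counting can also be reproduced numerically. The sketch below (illustrative values of [math]\displaystyle{ b }[/math] and [math]\displaystyle{ \zeta }[/math], not from the article) rewrites the even-mode condition as [math]\displaystyle{ \kappa\sin\kappa\zeta-k\cos\kappa\zeta=0 }[/math] with [math]\displaystyle{ k=\sqrt{b-\kappa^{2}} }[/math], which avoids the singularities of [math]\displaystyle{ \tan }[/math], and brackets its roots on [math]\displaystyle{ 0\lt \kappa\lt \sqrt{b} }[/math].

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

b, zeta = 25.0, 1.0                      # assumed depth and half-width of the hat potential

def even_condition(kappa):
    k = np.sqrt(b - kappa**2)            # decay rate outside the well
    return kappa * np.sin(kappa * zeta) - k * np.cos(kappa * zeta)

grid = np.linspace(1e-6, np.sqrt(b) - 1e-6, 2000)
vals = even_condition(grid)
roots = [brentq(even_condition, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]

print("even bound states, kappa values:", np.round(roots, 4))
print("corresponding k = sqrt(b - kappa^2):", np.round(np.sqrt(b - np.array(roots)**2), 4))
</syntaxhighlight>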

The solution for the odd solutions is

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} a_{1}e^{kx}, & x \lt -\zeta,\\ b_{2}\sin\kappa x, & -\zeta\lt x \lt \zeta,\\ -a_{1}e^{-kx} & x \gt \zeta, \end{matrix} \right. }[/math]

and again imposing the condition that the solution and its derivative are continuous at [math]\displaystyle{ x=\pm\zeta }[/math] gives

[math]\displaystyle{ \left( \begin{matrix} e^{-k\zeta} & \sin\kappa\zeta\\ ke^{-k\zeta} & -\kappa\cos\kappa\zeta \end{matrix} \right) \left( \begin{matrix} a_{1}\\ b_{2} \end{matrix} \right) =\left( \begin{matrix} 0\\ 0 \end{matrix} \right) }[/math]

This has non-trivial solutions when

[math]\displaystyle{ \det\left( \begin{matrix} e^{-k\zeta} & \sin\kappa\zeta\\ ke^{-k\zeta} & -\kappa\cos\kappa\zeta \end{matrix} \right) =0 }[/math]

which gives us the equation

[math]\displaystyle{ \kappa e^{-k\zeta}\cos\kappa\zeta+k\sin\kappa\zeta e^{-k\zeta}=0 }[/math]

or

[math]\displaystyle{ \tan\kappa\zeta=-\frac{\kappa}{k} }[/math]
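
A companion sketch to the even-mode one above (same assumed [math]\displaystyle{ b }[/math] and [math]\displaystyle{ \zeta }[/math]) finds the odd bound states from [math]\displaystyle{ k\sin\kappa\zeta+\kappa\cos\kappa\zeta=0 }[/math], the singularity-free form of [math]\displaystyle{ \tan\kappa\zeta=-\kappa/k }[/math]; together the two lists give all the bound states of the hat potential.

<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import brentq

b, zeta = 25.0, 1.0                      # same illustrative values as the even-mode sketch

def odd_condition(kappa):
    k = np.sqrt(b - kappa**2)
    return k * np.sin(kappa * zeta) + kappa * np.cos(kappa * zeta)

grid = np.linspace(1e-6, np.sqrt(b) - 1e-6, 2000)
vals = odd_condition(grid)
roots = [brentq(odd_condition, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if vals[i] * vals[i + 1] < 0]

print("odd bound states, kappa values:", np.round(roots, 4))
</syntaxhighlight>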

Case when [math]\displaystyle{ \lambda\gt 0 }[/math]

When [math]\displaystyle{ \lambda\gt 0 }[/math] we write [math]\displaystyle{ \lambda=k^{2} }[/math] and we obtain the solution

[math]\displaystyle{ w\left( x\right) =\left\{ \begin{matrix} \mathrm{e}^{-\mathrm{i}kx}+r\mathrm{e}^{\mathrm{i}kx}, & x \lt -\zeta\\ b_{1}\cos\kappa x+b_{2}\sin\kappa x & -\zeta\lt x \lt \zeta\\ t\mathrm{e}^{-\mathrm{i}kx} & x\gt \zeta \end{matrix} \right. }[/math]

where [math]\displaystyle{ \kappa=\sqrt{b+k^{2}}. }[/math] Matching [math]\displaystyle{ w }[/math] and its derivatives at [math]\displaystyle{ x=\pm\zeta }[/math] we obtain

[math]\displaystyle{ \left( \begin{matrix} -\mathrm{e}^{-\mathrm{i}k\zeta} & \cos\kappa\zeta & -\sin\kappa\zeta & 0\\ ik\mathrm{e}^{-\mathrm{i}k\zeta} & -\kappa\sin\kappa\zeta & -\kappa\cos\kappa\zeta & 0\\ 0 & \cos\kappa\zeta & \sin\kappa\zeta & -\mathrm{e}^{-\mathrm{i}k\zeta}\\ 0 & -\kappa\sin\kappa\zeta & \kappa\cos\kappa\zeta & ik\mathrm{e}^{-\mathrm{i}k\zeta} \end{matrix} \right) \left( \begin{matrix} r\\ b_{1}\\ b_{2}\\ t \end{matrix} \right) =\left( \begin{matrix} \mathrm{e}^{\mathrm{i}k\zeta}\\ ik\mathrm{e}^{\mathrm{i}k\zeta}\\ 0\\ 0 \end{matrix} \right) }[/math]
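
The system above can be solved directly for each [math]\displaystyle{ k }[/math]. Below is a minimal sketch (assumed values of [math]\displaystyle{ b }[/math], [math]\displaystyle{ \zeta }[/math] and [math]\displaystyle{ k }[/math]) that assembles the matrix, solves for [math]\displaystyle{ r,b_1,b_2,t }[/math], and checks that [math]\displaystyle{ |r|^2+|t|^2=1 }[/math].

<syntaxhighlight lang="python">
import numpy as np

b, zeta, k = 25.0, 1.0, 2.0            # illustrative hat-potential and wavenumber values
kappa = np.sqrt(b + k**2)

c, s = np.cos(kappa * zeta), np.sin(kappa * zeta)
em = np.exp(-1j * k * zeta)            # e^{-ik zeta}
ep = np.exp(1j * k * zeta)             # e^{+ik zeta}

M = np.array([
    [-em,          c,          -s,          0],
    [1j * k * em, -kappa * s,  -kappa * c,  0],
    [0,            c,           s,         -em],
    [0,           -kappa * s,   kappa * c,  1j * k * em],
], dtype=complex)
rhs = np.array([ep, 1j * k * ep, 0, 0], dtype=complex)

r, b1, b2, t = np.linalg.solve(M, rhs)
print("r =", r)
print("t =", t)
print("|r|^2 + |t|^2 =", abs(r)**2 + abs(t)**2)   # should equal 1 to machine precision
</syntaxhighlight>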

Lecture Videos

Part 1: https://www.youtube.com/watch?v=anAThvCcpNw
Part 2: https://www.youtube.com/watch?v=SDPIx42VjLQ
Part 3: https://www.youtube.com/watch?v=OUmjeLZWr3M
Part 4: https://www.youtube.com/watch?v=hIfcO3a8_XU
Part 5: https://www.youtube.com/watch?v=z13lKSTficA
Part 6: https://www.youtube.com/watch?v=2XlQpEscxE4
Part 7: https://www.youtube.com/watch?v=iMMQ4NUdXNc
Part 8: https://www.youtube.com/watch?v=0F_dINNxMlw