% Script 1 for Week 5
Most of the first half of this script is discussion, so in that half I will identify any functioning MATLAB code (not just function M-files) by surrounding it with lines of asterisks.
Numerical Solution of Square and Diagonalizable Linear DE systems
The Theory:
Taylor Series Exponential of a Matrix
In the solution of linear DE systems we will need the Taylor-series analogue of the exponential exp(t), applied to a square matrix M.
If M is a matrix, MATLAB's exp(M) applies the exponential to each entry of M separately. That is not what we want here!
For square matrix M we want I + M + M^2/2 + M^3/3! + M^4/4! +....
which is the Taylor expansion of the exponential function applied to this matrix.
The problem is that it is not easy to see how to calculate the exponential of a matrix directly. The factorials in the denominators crush the matrix entries to 0 fast enough for the matrix sum to converge (easy to prove), so the truncation error can be made tiny, but computing the matrix powers magnifies roundoff errors unreasonably. MATLAB can, however, do this calculation if M is DIAGONALIZABLE. This is guaranteed whenever the nxn matrix M has n distinct eigenvalues, and often it can be done even when eigenvalues are repeated.
The following function is based on the easy-to-check fact that if M is diagonal the exponential Taylor series of M is diagonal too. And if c is a number on the diagonal of M then exp(c) is located in the exponential at the same place as c in M.
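As a quick check of this easy fact, you can compare MATLAB's built-in matrix exponential expm against exp applied entrywise to the diagonal of a diagonal matrix:
%************************************************
d = diag([1 2 3]);       % a diagonal matrix
expm(d)                  % MATLAB's true matrix exponential
diag(exp([1 2 3]))       % exp applied to the diagonal entries: the same matrix
%************************************************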
So suppose M is diagonalizable. That means there is an invertible matrix E and diagonal matrix e for which
M=E*e*inv(E) <---> e=inv(E)*M*E
Now
I + M + M^2/2 + M^3/3! + M^4/4! +....
= I + E*e*inv(E) + (E*e*inv(E))^2/2 + (E*e*inv(E))^3/3! + ....
= I + E*e*inv(E) + E*e*inv(E)*E*e*inv(E)/2
+ E*e*inv(E)*E*e*inv(E)*E*e*inv(E)/3! + ....
and since every interior product inv(E)*E is the identity, this collapses to
= E*( I + e + e^2/2 + e^3/3! + e^4/4! +.... )*inv(E)
and the inner matrix is easy to calculate.
MATLAB finds invertible E and e when M is diagonalizable.
%****************************************************
function [EXPMAT,E,e] = TayExp(t,M)
% The output gives the Taylor exponential of M*t where t is real and
% M is a diagonalizable matrix.
[E,e]=eig(M);
if rcond(E)<10^(-13),error('This matrix is probably not diagonalizable.'),end
EXPMAT=E*diag(exp(t*diag(e)))*inv(E);
end
%****************************************************
So the Taylor Exponential of M itself is TayExp(1,M).
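As a sanity check (assuming the TayExp function above is saved on your path), you can compare its output against MATLAB's built-in expm for a small symmetric (hence diagonalizable) matrix; the difference should be near machine precision:
%************************************************
M = [2 1; 1 2];                % symmetric, so diagonalizable
norm(TayExp(1,M) - expm(M))    % should be tiny
%************************************************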
We now discuss Eigenvalues and Linear DE Systems
The Theory:
The type of analysis we do below will work WHENEVER a REAL nxn matrix such as M below has n different eigenvalues or, more generally, whenever the matrix is diagonalizable.
(Since the entries of a matrix come from measured physical parameters of your situation, matrices created this way essentially ALWAYS have n different eigenvalues. Only "theoretical, perfectly tuned" physical systems fail to be diagonalizable. And these perfectly tuned systems will behave very much like---for a finite time interval, at least---a diagonalizable system with very-close-together eigenvalues.)
We consider the system of linear differential equations
dY/dt= M*Y
At each specific time t the solution to this DE is
Y = TayExp(t,M)*Y0
or, if you prefer you can use the explicit in-line function
%************************************************
[E,e]=eig(M);
EINV=inv(E);
Y = @(t) E*diag(exp(t*diag(e)))*EINV*Y0
%************************************************
where Y0 represents the vector of initial conditions.
This last representation is much more efficient than using the TayExp function if you will be calling it many times for many different t. (TayExp recreates E, e and inv(E) at each call, which is very expensive.)
Note: calculating EINV=inv(E) once, before building the function, is also best!
Look at the parts of the function Y.
Suppose E_i is the ith eigenbasis vector, the ith column E(:,i) of E, and g_i(t)=exp(t*e(i,i))
EINV*Y0 is a column of constants (c_1,...c_n)'.
So E*diag(exp(t*diag(e)))*EINV*Y0 is exactly
c_1*g_1(t)*E_1 + c_2*g_2(t)*E_2 + .... + c_n*g_n(t)*E_n
so the solution function Y(t) is the sum of ordinary (but possibly complex) exponential functions times a (possibly complex) constant multiple of the corresponding (possibly complex) eigenvector.
If the solution must be real then the complex parts (if any) must all cancel out, and we will see an example of this later.
Ideal DE systems whose matrices are NOT diagonalizable do exist. Slightly more complicated methods, using something called the Jordan canonical form, are needed to express their solutions. These methods are not unimportant, but I don't deal with them here. We do have alternatives: for instance, we can solve such systems numerically (rather than getting an expression for the exact solution as we do here) using approximation methods we have learned or will learn about, such as rk4sys or ode45.
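For example, here is a minimal sketch of the ode45 alternative for a system dY/dt = M*Y. The particular M and Y0 here are placeholders of my own; substitute your matrix and initial condition:
%************************************************
M  = [0 1; -1 0];                          % placeholder matrix
Y0 = [1; 0];                               % placeholder initial condition
[tt,YY] = ode45(@(t,y) M*y, [0 10], Y0);   % row j of YY is Y at time tt(j)
plot(tt,YY)
%************************************************
Note that ode45 works whether or not M is diagonalizable, since it never uses the eigenvector decomposition.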
The Examples:
In problem 13.6 of Chapra 4th Ed. they discuss a circuit with a battery, inductors and capacitors, and for physical reasons conclude that the three loop currents I1, I2 and I3 must satisfy a specified system of differential equations.
In their situation all the inductances and capacitances are the same, 1 henry and 0.25 farad respectively. This produces the DE system
(d^2/dt^2)I1 = 4*I2-4*I1
(d^2/dt^2)I2 = 4*I1-8*I2+4*I3
(d^2/dt^2)I3 = 4*I2-8*I3
Let J1, J2 and J3 be defined by
J1=(d/dt)I1
J2=(d/dt)I2
J3=(d/dt)I3
and Y=[J1 J2 J3 I1 I2 I3]'
and we can now turn the second order DE on R^3 into a first order differential equation dY/dt= M*Y on R^6
where
%************************************************
M=[0 0 0 1 0 0
0 0 0 0 1 0
0 0 0 0 0 1
-4 4 0 0 0 0
4 -8 4 0 0 0
0 4 -8 0 0 0]
[E,e]=eig(M)
%************************************************
We can see that this particular M is diagonalizable with all pure imaginary eigenvalues.
There are 6 different eigenvalues e(1,1), e(2,2), e(3,3), e(4,4), e(5,5), e(6,6) for eigenvectors E(:,1), E(:,2), E(:,3), E(:,4), E(:,5), E(:,6).
Eigenvectors for different eigenvalues are linearly independent, and the real and imaginary parts of an eigenvector for a real matrix are (if nonzero) also independent.
Another fact that is easy to show: if the matrix M has real entries, then the real and imaginary parts of a complex solution are ALSO, each alone, solutions, corresponding to the real and imaginary parts of the initial conditions. Most initial conditions will be real.
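We can check this fact numerically with a small real matrix of my own (not the circuit matrix): the real part of a complex eigensolution satisfies the DE on its own, which we verify with a centered finite difference.
%************************************************
M = [0 -2; 2 0];                 % real matrix with eigenvalues +-2i
[E,e] = eig(M);
Y = @(t) exp(t*e(1,1))*E(:,1);   % one complex eigensolution of dY/dt=M*Y
h = 1e-6; t0 = 0.3;
lhs = ( real(Y(t0+h)) - real(Y(t0-h)) )/(2*h);   % d/dt of the real part
rhs = M*real(Y(t0));             % M applied to the real part
norm(lhs - rhs)                  % tiny: the real part is itself a solution
%************************************************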
We have 6 of these complex solutions, corresponding to the columns of E.
exp(t*e(i,i))*E(:,i) for i=1...n
Our solution will be a complex linear combination of these.
We suppose now that our initial condition is real, as it usually will be.
The 6 complex columns of E give 12 real vectors (their real and imaginary parts), while the space of initial conditions has real dimension 6. So we just need to find, among these 12, 6 linearly independent initial condition vectors to produce solutions for all possible initial conditions. You can't just pick 6 of them at random: you will likely get duplicates, since conjugate pairs of columns share the same real and imaginary parts up to sign. I looked at E and picked the real and imaginary parts of columns 1, 3 and 5.
%************************************************
K=[real(E(:,1)) real(E(:,3)) imag(E(:,5)) imag(E(:,1)) imag(E(:,3)) real(E(:,5)) ]
det(K)
%************************************************
The determinant is checked to be nonzero, so the columns of K are in fact 6 linearly independent real initial condition vectors. So the whole 6-dimensional solution space can be created by using linear combinations of these initial conditions.
In third quarter calculus we (can) show that exp(a+b*i)=exp(a)*( cos(b)+i*sin(b) ), so all solutions in our case will be linear combinations of sines and cosines with periods 2*pi/(imaginary part of the eigenvalue).
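A quick numerical confirmation of this identity, for one arbitrary choice of a and b:
%************************************************
a = 2; b = 3;
abs( exp(a+b*1i) - exp(a)*(cos(b)+1i*sin(b)) )   % essentially zero
%************************************************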
In the graphs below we plot the eigensolutions corresponding to the six columns of K. We graph only the currents, because the derivatives of these currents, though needed to create the solution, are not the main interest.
We specify a column of K and then create the cell array z which contains a list of vectors for each particular t value.
The 4th, 5th and 6th entry of z{i} are the values of the solution for that column of K at the ith time. Use these values to plot the three currents and then move on to the next column of K.
I use a cell array because the cells can contain (almost?) any MATLAB object and this approach seemed very straightforward to me. (It also illustrates how cells can be used.) You might want to see how to use just matrices to do the same thing. (Note: the values in a, b and c are wiped out as each next column is considered. If you want the actual values of all the eigensolutions, not just the last one, you will have to do something to preserve them.)
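As a sketch of that "just matrices" alternative: store each solution vector as a column of an ordinary 6 x length(t) matrix Z instead of in a cell array. This assumes E, e, EINV and K have already been created as in the script below.
%************************************************
t = linspace(-4,4,500);
for L = 1:6
    Y = @(s) E*diag(exp(s*diag(e)))*EINV*K(:,L);
    Z = zeros(6,length(t));     % column j will hold the solution at t(j)
    for j = 1:length(t)
        Z(:,j) = Y(t(j));
    end
    figure(L)
    plot(t, real(Z(4:6,:)))     % rows 4:6 of Z are the three currents
    grid
end
%************************************************
Unlike the a, b, c vectors below, Z holds all six components at every time, so nothing is wiped out until the next pass of the loop.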
We now return to the standard approach of commenting out non-MATLAB lines.
%************************************************
t=linspace(-4,4,500);
EINV=inv(E);
for L=1:6
Y=@(t) E*diag(exp(t*diag(e)))*EINV*K(:,L) ;
for j=1:length(t)
z{j}=Y(t(j));
a(j)=z{j}(4);
b(j)=z{j}(5);
c(j)=z{j}(6);
end
figure(L)
plot(t,a,t,b,t,c)
legend('current in first loop','current in second loop','current in third loop')
title('The Six Current Eigensolutions','FontSize',20,'FontWeight','bold')
grid
end
% If you want to have the solution corresponding to a particular
% collection of initial condition
Y0=[1 2 3 5 6 8]'
% the solution is
t=linspace(-4,4,500);
Yparticular=@(t) E*diag(exp(t*diag(e)))*EINV*Y0 ;
for j=1:length(t)
z{j}=Yparticular(t(j));
a(j)=z{j}(4);
b(j)=z{j}(5);
c(j)=z{j}(6);
end
figure(44)
hold on
plot(t,a,t,b,t,c)
legend('current in first loop','current in second loop','current in third loop')
init=Yparticular(0);
plot(0,init(4),'*',0,init(5),'*',0,init(6),'*')
title('Currents for this Particular Solution','FontSize',20,'FontWeight','bold')
grid
hold off
% Here is another interesting example featuring an attractor.
% dY/dt=M*Y where
M=[1/2 -1;
1 -1]
[E e]=eig(M)
EINV=inv(E)
% The solution is Y = exp(t*M)*Y0
% Y = E*exp(t*e)*EINV*Y0
% The above expression isn't a functioning MATLAB expression because of
% dimension issues involving the time vector t. The expression will work
% only for individual t values.
% Below we can just write down diag(exp(t*diag(e))) explicitly because the
% matrix is 2x2. diag(exp(t*diag(e)))=[exp(e(1,1)*t) 0 ; 0 exp(e(2,2)*t)].
t=linspace(0,30,500);
for L=-2:9
Y0=[7,L]';
Y=@(t) E*[exp(e(1,1)*t) 0 ; 0 exp(e(2,2)*t)]*EINV*Y0;
for j=1:length(t)
z{j}=Y(t(j));
a(j)=z{j}(1);
b(j)=z{j}(2);
end
plot(a,b)
hold on
end
title('Solution Curves for 12 Initial Conditions','FontSize',20,'FontWeight','bold')
hold off
% This next one is called a saddle, an unstable equilibrium at the origin.
M=[4 -2;
3 -3]
[E e]=eig(M)
EINV=inv(E)
t=linspace(-.6,0.6,500);
for L=-5:0.5:5
Y0=[L,L]';
Y=@(t) E*[exp(e(1,1)*t) 0 ; 0 exp(e(2,2)*t)]*EINV*Y0;
for j=1:length(t)
z{j}=Y(t(j));
a(j)=z{j}(1);
b(j)=z{j}(2);
end
plot(a,b)
hold on
end
title('Solution Curves at a Saddle','FontSize',20,'FontWeight','bold')
hold off