% Script 1 for Week 4
% Sorting is a huge deal in computer programming and the built-in MATLAB sorting routines are highly optimized.
x=[5 8 2 8 1 9 6 7]
min(x)
max([5 8 2 8 1 9 6 7])
sort([5 8 2 8 1 9 6 7])
find(x==3)
% returns an empty array, since 3 does not appear in x
find(x==min(x))
find(x==8)
sort([5; 8; 2; 8; 1; 9 ;6 ;7])
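% sort can also return the index permutation it applied, which often replaces
% the find trick above (s and idx are names I chose for this sketch):
[s,idx]=sort(x)
% x(idx) reproduces s, and idx(1) is the location of the minimum.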
% Here is a handy sorting feature for matrices: you can sort by the values in a column and move the entire row along with each value. This is a common operation in spreadsheet programs.
M=[5 8 2 1 9 6 7;
4 6 0 9 8 7 5;
2 4 6 7 8 3 2]
sortrows(M,2)
% produces sorting by values in the second column
sortrows(M,3)
% produces sorting by values in the third column
% If you want to sort by values in a row use transpose twice:
sortrows(M',2)'
% Interesting fact: the following creates a new matrix from the 7th, 3rd, and 4th columns of M, in that order
M
M(:,[7 3 4])
%**********************************************************
M=[5 8 2 ;
4 6 0 ;
2 4 6 ]
B=[ 7.85;-19.3;71.4]
% LU factorization
% If we want to solve M*X=B we can do it with the LU factorization of M.
[L,U, P]=lu(M)
% What do you do with the LU factorization?
% If you are trying to solve an equation M*X=B, you rewrite the corresponding row equations in the order given by the row-permutation matrix P:
% (P*M)*X=P*B
% P represents the permutation of the rows needed to avoid zeros (or tiny numbers) in the pivot locations during the production of the L and U factors.
% P*M=L*U so you solve L*U*X=P*B in two stages.
% First you solve L*Y=P*B by forward substitution and then you solve U*X=Y by back substitution.
% The resulting X then satisfies M*X=B.
% You would use this method when the matrix is huge and you are going to solve these equations over and over for different target vectors B: find L and U once, then solve the equation for many different B vectors using forward and back substitutions.
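% A minimal sketch of the two-stage solve, using the L, U, P, and B above
% (backslash recognizes triangular systems and substitutes accordingly):
Y=L\(P*B);   % forward substitution solves L*Y=P*B
X=U\Y        % back substitution solves U*X=Y
% X should agree with M\B up to roundoff.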
% Cholesky Decomposition
% The Cholesky decomposition is similar but only works for symmetric (Hermitian) positive definite matrices.
W = M'*M
% (I use the above line just to create a matrix W I know is symmetric.)
X=chol(W)
% For Cholesky X is upper triangular so X' is lower triangular and W=X'*X
W
X'*X
% So this is a special kind of LU decomposition.
% And X is a kind of "square root" of the symmetric matrix W.
% Cholesky factorization ASSUMES W is Hermitian: that is, W'=W.
% It gives an error message if W is not positive definite (that is, if W has a zero or negative eigenvalue; a Hermitian matrix always has real eigenvalues).
% If W is not positive definite, use the singular value decomposition svd instead, for example.
% There are lots of options in chol and many uses (inner products, etc.)
% I have used the EXISTENCE of a Cholesky factorization in proofs, but I personally have not used an ACTUAL Cholesky factorization for anything.
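% One common use, assuming the W and X above: solve W*z=b with two triangular
% solves, since W=X'*X (b here is a right-hand side I made up):
b=[1;2;3];
z=X\(X'\b)   % forward substitution with X', then back substitution with X
% z should agree with W\b up to roundoff.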
% QR factorization
M=[5 8 2 0 10 4 12;
4 6 0 0 8 12 1;
2 4 6 0 4 8 0;
0 0 0 0 0 0 0]
[Q R P]=qr(M)
Q*R
M*P
% M*P permutes the columns of M.
Q'*Q
% The QR decomposition performs a Gram-Schmidt-like orthogonalization process on the columns of matrix M. It uses pivoting to switch the order of some of the columns of M if there is dependence among the columns or if the order as listed would cause numerical trouble according to its internal criterion.
% Let vector space C be the columnspace of M.
% The first columns of Q are the Gram-Schmidt vectors created from a spanning set of C taken from some of the columns of M. The remaining columns of Q are an orthonormal basis of C^(perp).
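% One consequence, using the R above: the numerical rank of M is the number of
% non-negligible diagonal entries of R (the tolerance is one I made up):
tol=1e-10;
nnz(abs(diag(R))>tol)
% This should agree with rank(M).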
% SVD: Singular Value Decomposition, the most useful of all!!
M=[5 8 2 0 10 ;
4 6 0 0 8 ;
2 4 6 0 4 ;
0 0 1 0 0 ;
2 4 6 0 4 ]
[Left ,Sings, Right]=svd(M)
Left*Sings*Right'
M*Right(:,1)/Sings(1,1)
% The result should reproduce Left(:,1), since M times the first right singular
% vector equals the first singular value times the first left singular vector.
% M sends each column of Right to a singular value times the corresponding column of Left. The columns of Right and Left form orthonormal bases of the domain and range spaces, respectively. This is ALMOST as good as having a basis of eigenvectors. But the SVD exists for ANY matrix, square or not.
% If M is symmetric then Left=Right (up to the signs of the columns; exactly, when M is also positive semidefinite) and these columns form a basis of eigenvectors.
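% A small check of that claim on a symmetric positive definite matrix I made up:
A=[2 1;1 2];
[LA,SA,RA]=svd(A);
eig(A)       % eigenvalues are 1 and 3 (in some order)
diag(SA)     % singular values are 3 and 1: the same numbers, sorted decreasing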
% The singular value decomposition is fantastically useful for many things. Data compression is one of them. Here is an example of that.
% I create below a 1000x1000 matrix M constructed from just 50 "signals." However the matrix itself has virtually all nonzero entries, and no apparent pattern to them. In an application you will not know how M is constructed but you will hope (or know on theoretical grounds) that the entries of M are formed as the sum of a reasonable number of simpler "signals". The entries of M could, for instance, be grayscale values at each point of a 1000x1000 image file.
C = randi([-12 12],1000);
M=zeros(1000);
for k=1:50
M=M+C(:,k+50)*C(k,:);
end
[Left ,Sings, Right]=svd(M);
x=diag(Sings);
plot(x)
% Only the first 50 entries of x have non-negligible values, the rest of the singular values are tiny. That is an important tip-off that there are really just 50 signals.
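% As a sanity check, the numerical rank should come out to (about) 50:
rank(M)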
S=zeros(1000);
R=Right';
for k=1:50
S=S+x(k)*Left(:,k)*R(k,:);
end
difference=S-M;
y=max(abs(difference));
max(y)
% So rather than sending 1 million numbers to your friend who wants to render the picture, you might instead send the first 50 columns of Left, the first 50 rows of Right', and the first 50 diagonal values in Sings.
% That is just 100050 numbers, a compression factor of almost 10.
% You can (virtually) reconstitute M at the receiving location.
% As another example, the pseudoinverse pinv(M) is created by using the SVD.
% If M is mxn and there are k nonzero singular values (k=3 in the example below):
M=[6 3 9 7 1;
6 3 8 3 1;
6 3 0 8 2;
0 0 0 0 0]
[Left, Diag, Right]=svd(M)
% M=Left*Diag*Right' is the SVD.
% Left=[L1 L2] is mxm orthogonal (L1 is mxk, the first k columns):
L1=Left(:,1:3)
% Right=[R1 R2] is nxn orthogonal (R1 is nxk, the first k columns):
R1=Right(:,1:3)
% Diag is of the form
%   D 0
%   0 0
% where D is kxk diagonal:
D=Diag(1:3,1:3)
M=L1*D*R1'
% The pseudoinverse is the nxm matrix R1*D^(-1)*L1':
R1*inv(D)*L1'
% and satisfies lots of properties that the inverse of M WOULD have, if M had one. For instance:
P=pinv(M)
% satisfies (as would a true inverse),
% M*P*M=M
% P*M*P=P
% (M*P)'=M*P
% (P*M)'=P*M
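% A quick numerical check of those four properties, using the M and P above
% (each of these norms should be tiny, on the order of roundoff):
norm(M*P*M-M)
norm(P*M*P-P)
norm((M*P)'-M*P)
norm((P*M)'-P*M)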
% Example of SVD compression with a greyscale bitmap image.
% Place the image file grid.bmp in MATLAB's working directory and drag it into the MATLAB window. Two variables will be created, and cdata is the one we want to look at. It is a 1000x1000 matrix of 8-bit integers that represent the grayscale density throughout the image. svd requires floating-point input, so we first convert to double precision.
a=double(cdata);
[Left ,Sings, Right]=svd(a);
x=diag(Sings);
plot(x)
% It seems only the first 30 or 40 singular values are (relatively) substantial.
S=zeros(1000);
R=Right';
for k=1:34
S=S+x(k)*Left(:,k)*R(k,:);
end
% S, though it has a million entries just like cdata, has been created using only 68034 numbers (34 columns of Left, 34 rows of Right', and 34 singular values).
% Now convert back to uint8 format and create the imagefile.
b=uint8(S);
imwrite(b, 'gridcompressed.bmp')