author     nunzip <np.scarh@gmail.com>  2018-11-19 21:20:26 +0000
committer  nunzip <np.scarh@gmail.com>  2018-11-19 21:20:26 +0000
commit     d7f69b70517148af31369f77a78296a0dc85b9c2 (patch)
tree       cd002679f7f7bea5157576394f938fab83d9e102 /report
parent     5632396b4adc3b1dad1dbb4286b8abd1e672db07 (diff)
Shrink pictures part 1
Diffstat (limited to 'report')
-rwxr-xr-x  report/paper.md  56
1 file changed, 28 insertions, 28 deletions
diff --git a/report/paper.md b/report/paper.md
index 963f087..b9392e6 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -25,7 +25,7 @@ as a sudden drop for eigenvalues after the 363rd.
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/eigenvalues.pdf}
+\includegraphics[width=17em]{fig/eigenvalues.pdf}
\caption{Log plot of all eigenvalues}
\label{fig:logeig}
\end{center}
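
For context, a minimal numpy/matplotlib sketch of how a log-scale plot of the eigenvalue spectrum can be produced (the data, shapes and names are illustrative placeholders, not taken from the repository):

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder for the training faces, stacked as rows (n_images x n_pixels).
rng = np.random.default_rng(0)
X = rng.random((364, 2576))

A = X - X.mean(axis=0)                    # subtract the mean face
S = (A.T @ A) / A.shape[0]                # full covariance matrix (n_pixels x n_pixels)
eigvals = np.linalg.eigvalsh(S)[::-1]     # all eigenvalues, largest first

plt.semilogy(np.maximum(eigvals, 1e-12))  # log plot; clip round-off negatives
plt.xlabel("eigenvalue index")
plt.ylabel("eigenvalue (log scale)")
plt.show()
```

With N training images the mean-centred covariance has at most N-1 non-zero eigenvalues, which is what produces the sudden drop mentioned above.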
@@ -39,8 +39,8 @@ obtained with different seeds for split can be observed in figure \ref{fig:mean_
\begin{figure}
\begin{center}
-\includegraphics[width=8em]{fig/mean_face.pdf}
-\includegraphics[width=8em]{fig/mean2.pdf}
+\includegraphics[width=5em]{fig/mean_face.pdf}
+\includegraphics[width=5em]{fig/mean2.pdf}
\caption{Mean Faces}
\label{fig:mean_face}
\end{center}
@@ -54,7 +54,7 @@ to flatten.
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/accuracy.pdf}
+\includegraphics[width=17em]{fig/accuracy.pdf}
\caption{NN Recognition Accuracy varying M}
\label{fig:accuracy}
\end{center}
@@ -145,7 +145,7 @@ eigenvalues.
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/variance.pdf}
+\includegraphics[width=17em]{fig/variance.pdf}
\caption{Data variance carried by each of M eigenvalues}
\label{fig:eigvariance}
\end{center}
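
The variance carried by each eigenvalue, as plotted above, is simply its share of the total; a small sketch (illustrative function name, with eigvals sorted as in the earlier snippet):

```python
import numpy as np

def variance_per_eigenvalue(eigvals):
    """Fraction of the total data variance carried by each eigenvalue."""
    eigvals = np.clip(np.asarray(eigvals, dtype=float), 0.0, None)  # drop round-off negatives
    return eigvals / eigvals.sum()
```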
@@ -162,13 +162,13 @@ element to the projected test image, assigning the same class as the neighbor fo
Recognition accuracy of NN classification can be observed in figure \ref{fig:accuracy}.
A confusion matrix showing success and failure cases for Nearest Neighbor classification
-can be observed in figure \label{fig:cm}:
+can be observed in figure \ref{fig:cm}:
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/pcacm.pdf}
-\label{fig:cm}
+\includegraphics[width=15em]{fig/pcacm.pdf}
\caption{Confusion Matrix NN, M=99}
+\label{fig:cm}
\end{center}
\end{figure}
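
A hedged numpy sketch of the nearest-neighbour classification described above (projection onto the top-M eigenfaces, then 1-NN on Euclidean distance); the names and the exact eigenface computation are illustrative, not the report's code:

```python
import numpy as np

def pca_nn_classify(X_train, y_train, X_test, M=99):
    """Project onto the top-M eigenfaces and label each test face with the
    class of its nearest training neighbour (illustrative sketch)."""
    mean_face = X_train.mean(axis=0)
    A = X_train - mean_face
    # Low-dimensional trick: eigenvectors of A A^T give the eigenfaces cheaply.
    evals, evecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(evals)[::-1][:M]
    W = A.T @ evecs[:, order]                      # eigenfaces, one per column
    W /= np.linalg.norm(W, axis=0)
    proj_train = A @ W
    proj_test = (X_test - mean_face) @ W
    # Nearest neighbour by Euclidean distance in the projected space.
    d = np.linalg.norm(proj_test[:, None, :] - proj_train[None, :, :], axis=2)
    return np.asarray(y_train)[d.argmin(axis=1)]
```

A confusion matrix such as the one above can then be tabulated from the true test labels and the returned predictions.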
@@ -178,8 +178,8 @@ classification.
\begin{figure}
\begin{center}
-\includegraphics[width=7em]{fig/face2.pdf}
-\includegraphics[width=7em]{fig/face5.pdf}
+\includegraphics[width=5em]{fig/face2.pdf}
+\includegraphics[width=5em]{fig/face5.pdf}
\caption{Failure case for NN. Test face left. NN right}
\label{fig:nn_fail}
\end{center}
@@ -187,8 +187,8 @@ classification.
\begin{figure}
\begin{center}
-\includegraphics[width=7em]{fig/success1.pdf}
-\includegraphics[width=7em]{fig/success1t.pdf}
+\includegraphics[width=5em]{fig/success1.pdf}
+\includegraphics[width=5em]{fig/success1t.pdf}
\caption{Success case for NN. Test face left. NN right}
\label{fig:nn_succ}
\end{center}
@@ -201,7 +201,7 @@ K=1, as can be observed in figure \ref{fig:k-diff}.
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/kneighbors_diffk.pdf}
+\includegraphics[width=17em]{fig/kneighbors_diffk.pdf}
\caption{NN recognition accuracy varying K. Split: 80-20}
\label{fig:k-diff}
\end{center}
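
One possible way to reproduce the K sweep behind figure \ref{fig:k-diff}, here using scikit-learn's KNeighborsClassifier on the projected data (an assumption for illustration; the report's own implementation is not shown in this diff):

```python
from sklearn.neighbors import KNeighborsClassifier

def accuracy_vs_k(proj_train, y_train, proj_test, y_test, ks=range(1, 11)):
    """Recognition accuracy of K-NN on PCA-projected faces for several K."""
    return {k: KNeighborsClassifier(n_neighbors=k)
               .fit(proj_train, y_train)
               .score(proj_test, y_test)
            for k in ks}
```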
@@ -219,7 +219,7 @@ will be used for each generated class-subspace.
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/alternative_accuracy.pdf}
+\includegraphics[width=17em]{fig/alternative_accuracy.pdf}
\caption{Accuracy of Alternative Method varying M}
\label{fig:altacc}
\end{center}
@@ -230,7 +230,7 @@ can be observed in figure \ref{fig:cm-alt}.
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/altcm.pdf}
+\includegraphics[width=15em]{fig/altcm.pdf}
\caption{Confusion Matrix for alternative method, M=5}
\label{fig:cm-alt}
\end{center}
@@ -240,9 +240,9 @@ Similarly to the NN case, we present two cases, respectively failure (figure \re
\begin{figure}
\begin{center}
-\includegraphics[width=7em]{fig/FO.JPG}
-\includegraphics[width=7em]{fig/FR.JPG}
-\includegraphics[width=7em]{fig/FL.JPG}
+\includegraphics[width=5em]{fig/FO.JPG}
+\includegraphics[width=5em]{fig/FR.JPG}
+\includegraphics[width=5em]{fig/FL.JPG}
\caption{Alternative method failure. Respectively test image, reconstructed image, class assigned}
\label{fig:altfail}
\end{center}
@@ -250,9 +250,9 @@ Similarly to the NN case, we present two cases, respectively failure (figure \re
\begin{figure}
\begin{center}
-\includegraphics[width=7em]{fig/SO.JPG}
-\includegraphics[width=7em]{fig/SR.JPG}
-\includegraphics[width=7em]{fig/SL.JPG}
+\includegraphics[width=5em]{fig/SO.JPG}
+\includegraphics[width=5em]{fig/SR.JPG}
+\includegraphics[width=5em]{fig/SL.JPG}
\caption{Alternative method success. Respectively test image, reconstructed image, class assigned}
\label{fig:altsucc}
\end{center}
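
As the captions above suggest (test image, reconstructed image, class assigned), the alternative method builds a small PCA subspace per class and reconstructs the test image in each of them; a hedged sketch, assuming the class with the smallest reconstruction error is assigned and using M=5 as in the confusion-matrix caption:

```python
import numpy as np

def reconstruction_classify(X_train, y_train, X_test, M=5):
    """Per-class subspace method (illustrative): reconstruct each test face in
    every class-subspace and assign the class with the smallest error."""
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    errors = np.empty((len(X_test), len(classes)))
    for j, c in enumerate(classes):
        Xc = X_train[y_train == c]
        mean_c = Xc.mean(axis=0)
        # Top-M principal directions of this class (rows of Vt).
        _, _, Vt = np.linalg.svd(Xc - mean_c, full_matrices=False)
        Wc = Vt[:M].T                              # class-subspace basis, one column each
        recon = (X_test - mean_c) @ Wc @ Wc.T + mean_c
        errors[:, j] = np.linalg.norm(X_test - recon, axis=1)
    return classes[errors.argmin(axis=1)]
```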
@@ -335,7 +335,7 @@ varying between 0.11s (low M_pca) and 0.19s (high M_pca).
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/ldapca3dacc.pdf}
+\includegraphics[width=17em]{fig/ldapca3dacc.pdf}
\caption{PCA-LDA NN Recognition Accuracy varying hyper-parameters}
\label{fig:ldapca_acc}
\end{center}
@@ -353,7 +353,7 @@ observed in the confusion matrix shown in figure \ref{fig:ldapca_cm}.
\begin{figure}
\begin{center}
-\includegraphics[width=20em]{fig/cmldapca.pdf}
+\includegraphics[width=17em]{fig/cmldapca.pdf}
\caption{PCA-LDA NN Recognition Confusion Matrix Mlda=50, Mpca=115}
\label{fig:ldapca_cm}
\end{center}
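
The PCA-LDA pipeline itself (PCA to M_pca dimensions, LDA to M_lda dimensions, then NN classification) could be reproduced for comparison with scikit-learn as below; this is an assumption for illustration, using the Mpca=115, Mlda=50 values from the caption, not the report's own implementation:

```python
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

M_pca, M_lda = 115, 50          # M_lda must not exceed (number of classes - 1)

model = make_pipeline(
    PCA(n_components=M_pca),                         # PCA step: keep M_pca eigenfaces
    LinearDiscriminantAnalysis(n_components=M_lda),  # LDA step in the PCA subspace
    KNeighborsClassifier(n_neighbors=1),             # nearest-neighbour classification
)
# model.fit(X_train, y_train); y_pred = model.predict(X_test)
```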
@@ -363,8 +363,8 @@ Two recognition examples are reported: success in figure \ref{fig:succ_ldapca} a
\begin{figure}
\begin{center}
-\includegraphics[width=7em]{fig/ldapcaf2.pdf}
-\includegraphics[width=7em]{fig/ldapcaf1.pdf}
+\includegraphics[width=5em]{fig/ldapcaf2.pdf}
+\includegraphics[width=5em]{fig/ldapcaf1.pdf}
\caption{Failure case for PCA-LDA. Test face left. NN right}
\label{fig:fail_ldapca}
\end{center}
@@ -372,8 +372,8 @@ Two recognition examples are reported: success in figure \ref{fig:succ_ldapca} a
\begin{figure}
\begin{center}
-\includegraphics[width=7em]{fig/ldapcas1.pdf}
-\includegraphics[width=7em]{fig/ldapcas2.pdf}
+\includegraphics[width=5em]{fig/ldapcas1.pdf}
+\includegraphics[width=5em]{fig/ldapcas2.pdf}
\caption{Success case for PCA-LDA. Test face left. NN right}
\label{fig:succ_ldapca}
\end{center}
@@ -429,7 +429,7 @@ Bagging is performed by generating each dataset for the ensembles by randomly pi
Feature space randomisation involves randomising the features analysed by the model. In the case of PCA-LDA this can be achieved by randomising the eigenvectors used in the PCA step. For instance, instead of choosing the 120 most variant eigenfaces, we may choose the 90 eigenvectors with the largest variance and pick 70 of the remaining non-zero eigenvectors at random.
-![Ensemble size effect on accraucy with 160 eeigen values (m_c=90,m_r=70\label{fig:random-e}](fig/random-ensemble.pdf)
+![Ensemble size effect on accuracy with 160 eigenvalues (m_c=90, m_r=70)\label{fig:random-e}](fig/random-ensemble.pdf)
In figure \ref{fig:random-e} we can see the effect of ensemble size when using the 90 largest eigenvalues and 70 random eigenvalues.
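
A hedged sketch of the eigenvector randomisation just described, keeping the m_c=90 most variant eigenfaces and drawing m_r=70 more at random from the remaining non-zero ones (names are illustrative; how the ensemble members' predictions are then fused, e.g. by majority vote, is not shown in this excerpt):

```python
import numpy as np

def random_eigvec_subset(eigvecs, m_c=90, m_r=70, rng=None):
    """Feature-space randomisation: fixed top-m_c eigenvectors plus m_r random
    ones from the rest; eigvecs columns assumed sorted by decreasing eigenvalue
    and limited to the non-zero spectrum."""
    rng = np.random.default_rng() if rng is None else rng
    fixed = np.arange(m_c)
    rest = rng.choice(np.arange(m_c, eigvecs.shape[1]), size=m_r, replace=False)
    return eigvecs[:, np.concatenate([fixed, rest])]   # m_c + m_r columns in total

# Each ensemble member would run the LDA + NN training on its own randomised subset.
```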