From 2bfa88f521b89a803f0b870b048b4ad593c03c9e Mon Sep 17 00:00:00 2001
From: Vasil Zlatanov <v@skozl.com>
Date: Tue, 20 Nov 2018 16:44:56 +0000
Subject: Minor word-count reductions

---
 report/paper.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/report/paper.md b/report/paper.md
index 809af3a..ef1e9c4 100755
--- a/report/paper.md
+++ b/report/paper.md
@@ -219,7 +219,7 @@ affect recognition the most are: glasses, hair, sex and brightness of the pictur
 
 # Question 2, Generative and Discriminative Subspace Learning
 
-To combine both method it is possible to perform LDA in a generative subspace created by PCA. In order to
+One way to combine generative and discriminative learning is to perform LDA in a generative subspace created by PCA. In order to
 maximize class separation and minimize the distance between elements of the same class, it is necessary to
 maximize the function J(W) (the generalized Rayleigh quotient): $J(W) = \frac{W^T S_B W}{W^T S_W W}$.
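
As an illustration of how this criterion is optimised in practice, the following is a minimal sketch (not the report's implementation) that builds the scatter matrices and maximises J(W) by solving the generalised eigenproblem $S_B w = \lambda S_W w$ with NumPy/SciPy; the helper name `lda_directions` and the small ridge `eps` added to $S_W$ are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def lda_directions(X, y, n_components, eps=1e-6):
    """Sketch: maximise J(W) = (W^T S_B W) / (W^T S_W W) by solving
    the generalised eigenproblem S_B w = lambda S_W w."""
    d = X.shape[1]
    mu = X.mean(axis=0)
    S_W = np.zeros((d, d))  # within-class scatter
    S_B = np.zeros((d, d))  # between-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        S_W += (Xc - mu_c).T @ (Xc - mu_c)
        diff = (mu_c - mu)[:, None]
        S_B += len(Xc) * (diff @ diff.T)
    # eps * I keeps S_W positive definite when it is close to singular
    vals, vecs = eigh(S_B, S_W + eps * np.eye(d))
    order = np.argsort(vals)[::-1]  # largest generalised eigenvalues first
    return vecs[:, order[:n_components]]
```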
 
@@ -239,7 +239,7 @@ of the projected samples: $W\textsuperscript{T}\textsubscript{pca} = arg\underse
 = \arg\max_{W}\frac{|W^T W_{pca}^T S_B W_{pca} W|}
 {|W^T W_{pca}^T S_W W_{pca} W|}$.
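
To make the two-stage projection concrete, here is a minimal sketch (an assumed pipeline, not the report's code) that first computes $W_{pca}$ from the centred data and then runs LDA inside that subspace, reusing the hypothetical `lda_directions` helper above; `m_pca` and `m_lda` are user-chosen dimensions, with `m_pca` typically at most $N - c$ so that $S_W$ stays nonsingular after projection.

```python
import numpy as np

def pca_then_lda(X, y, m_pca, m_lda):
    """Sketch: W_opt^T = W_lda^T W_pca^T (LDA inside the PCA subspace)."""
    mu = X.mean(axis=0)
    A = X - mu
    # PCA basis from the SVD of the centred data matrix
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    W_pca = Vt[:m_pca].T                 # shape (d, m_pca)
    Z = A @ W_pca                        # samples in the generative subspace
    W_lda = lda_directions(Z, y, m_lda)  # discriminative step (sketch above)
    return W_pca @ W_lda                 # combined projection, shape (d, m_lda)
```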
 
-However, performing PCA followed by LDA carries a loss of discriminative information. This problem can
+Performing PCA followed by LDA incurs a loss of discriminative information. This problem can
 be avoided through a linear combination of the two [@pca-lda]. In the following section we will use a 
 1-dimensional subspace *e*. The cost functions associated with PCA and LDA (with $\epsilon$ being a very
 small number) are $H_{pca}(e) =
-- 
cgit v1.2.3-70-g09d2