Random Forest & 92.23\% & 0.92\\
\hline
\end{tabular}
\captionsetup{width=0.80\textwidth}
\caption{Comparison of the accuracy and training time of each neural
network and traditional machine learning technique}
\label{tab:results}
\end{table}
\par
We can see from these results that Deep Neural Networks outperform our benchmark classification models in terms of accuracy.
However, the time required to train these networks is significantly greater.
An additional consideration is the extra layer of abstraction present in the FCN but not in the CNN.
This may indicate that the FCN could achieve higher accuracy given more training time (epochs).
\\
% models that learn relationships between pixels outperform those that don't
\todo{
Discussion of the results:
- Was this what we expected to see?
- What was surprising?
- If you take learning time into account, are NNs still as good?
- We also said we would report the other measures, so we should at least try to include them. The question is then also what they show.
}
\par
Of the benchmark classifiers we see the best performance with Random
Forests and the worst performance with K Nearest Neighbours.
The low training time of the Random Forests could be due to the task being one of binary classification and the traversal of binary decision trees being efficient.
In terms of accuracy, this ordering is supported by the rest of the results and comes down to a model's ability to learn the hidden relationships between pixels.
This is made more apparent by the performance of the Neural Networks.
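The accuracy/training-time trade-off between the benchmark classifiers can be sketched with scikit-learn. This is illustrative only: it uses synthetic data from \texttt{make\_classification} as a stand-in for the actual Waldo image patches, so the numbers will not match Table \ref{tab:results}.

```python
import time
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flattened image patches (shapes are assumptions,
# not the paper's data): 1000 samples, 64 features, binary labels.
X, y = make_classification(n_samples=1000, n_features=64,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for name, clf in [("Random Forest", RandomForestClassifier(random_state=0)),
                  ("KNN", KNeighborsClassifier())]:
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)                    # measure training time only
    fit_time = time.perf_counter() - start
    results[name] = clf.score(X_te, y_te)  # held-out accuracy
    print(f"{name}: accuracy={results[name]:.2f}, fit time={fit_time:.3f}s")
```

Note that KNN's fit step is cheap (it mostly stores the training set); its cost appears at prediction time, which is one reason wall-clock comparisons should state which phase is being timed.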
\section{Conclusion} \label{sec:conclusion}
Images from the ``Where's Waldo?'' puzzle books are ideal for testing
image classification techniques. Their tendency for hidden objects and ``red
It would be interesting to investigate several of these methods further.
Considerable ground could be gained by using specialized variants of
these clustering algorithms.
\clearpage % Ensures that the references are on a separate page
\pagebreak
\bibliographystyle{alpha}