Group work for a Monash Research Methods course

Reports edits

Silver-T 30fbc81e 9ab43a78

+21 -20
mini_proj/report/waldo.tex
···
302 302     Random Forest & 92.23\% & 0.92\\
303 303     \hline
304 304     \end{tabular}
305     -   \captionsetup{width=0.70\textwidth}
    305 +   \captionsetup{width=0.80\textwidth}
306 306     \caption{Comparison of the accuracy and training time of each neural
307 307     network and traditional machine learning technique}
308 308     \label{tab:results}
309 309 \end{table}
310 310
311     - We can see by the results that Deep Neural Networks outperform our benchmark
312     - classification models, although the time required to train these networks is
313     - significantly greater.
314     -
315     - % models that learn relationships between pixels outperform those that don't
316     -
    311 + \par
    312 + We can see in these results that Deep Neural Networks outperform our benchmark classification models in terms of the accuracy they achieve.
    313 + However, the time required to train these networks is significantly greater.
    314 + An additional consideration is the extra layer of abstraction present in the FCN and not the CNN.
    315 + This may indicate that the FCN can achieve better accuracies, given more training time (epochs).
    316 + \\
    317 + % models that learn relationships between pixels outperform those that don't
    318 + \todo{
    319 + Discussion of the results:
    320 + - Was this what we expected to see?
    321 + - What was surprising?
    322 + - If you take learning time into account, are NN still as good?
    323 + - We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
    324 + }
    325 + \par
317 326 Of the benchmark classifiers we see the best performance with Random
318     - Forests and the worst performance with K Nearest Neighbours. As supported
319     - by the rest of the results, this comes down to a models ability to learn
320     - the hidden relationships between the pixels. This is made more apparent by
321     - performance of the Neural Networks.
    327 + Forests and the worst performance with K Nearest Neighbours.
    328 + The low training time of the random forests could be due to the task being one of binary classification, and the traversal of binary trees being efficient resulting in low training time.
    329 + In terms of the models' accuracies, this is supported by the rest of the results and comes down to a model's ability to learn hidden relationships between pixels.
    330 + This is made more apparent by performance of the Neural Networks.
322 331
323     - \section{Conclusion} \label{sec:conclusion}
    332 + \section{Conclusion} \label{sec:conclusion}
324 333
325 334 Image from the ``Where's Waldo?'' puzzle books are ideal images to test
326 335 image classification techniques. Their tendency for hidden objects and ``red
···
337 346 It would be interesting to investigate various of these methods further.
338 347 There might be quite a lot of ground that could be gained by using
339 348 specialized variants of these clustering algorithms.
340     -
341     -
342     - Discussion of the results:
343     - - Was this what we expected to see?
344     - - What was surprising?
345     - - If you take learning time into account, are NN still as good?
346     - - We also did say we would have these other measures, so we should at least try to include them. Then the question is also what do they show.
347     -
348 349 \clearpage % Ensures that the references are on a separate page
349 350 \pagebreak
350 351 \bibliographystyle{alpha}