LEVERAGING CONTEXTUAL DATA TO IMPROVE MACHINE-LEARNING CLASSIFICATIONS OF MARINE ZOOPLANKTON (E)
‘Deep Learning’ has led to many recent breakthroughs in automated recognition of diverse types of objects. However, current out-of-the-box deep learning architectures have performed less well on some types of digital images, including those of zooplankton, because these algorithms can search only for patterns within the pixels of the captured image. In this work, we investigate techniques for providing contextual metadata to Convolutional Neural Network algorithms applied to digital images acquired with three digital imaging devices: our new Zooglider, the ZooScan, and the UVP5. We augment the pixel data with physical measurements, hydrographic information, and other contextual information, and examine the effects on classification accuracy. We also compare the efficacy of deep learning classification with more conventional feature-based algorithms (Support Vector Machine and Random Forest). We suggest that these results are not unique to zooplankton imagery, and that the approach is translatable to other oceanographic machine learning tasks.
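One simple way such metadata augmentation might be realized, sketched below purely for illustration, is to concatenate contextual measurements (e.g., depth, temperature, salinity at capture) onto an image-derived feature vector before it reaches a conventional classifier. The function and variable names here are hypothetical, and this is not the authors' actual pipeline; the study's architectures may fuse metadata differently.

```python
def augment_features(pixel_features, metadata):
    """Append contextual metadata (e.g., depth, temperature, salinity)
    to a per-image feature vector derived from the pixels.

    Illustrative sketch only; names and structure are assumptions,
    not the implementation used in the study.
    """
    return list(pixel_features) + list(metadata)


# Example: four image-derived feature values plus three hydrographic
# measurements yield a seven-value vector for the downstream classifier
# (e.g., a Support Vector Machine or Random Forest).
combined = augment_features([0.1, 0.5, 0.2, 0.9], [120.0, 11.4, 33.6])
print(len(combined))  # 7
```

The appeal of this framing is that a feature-based classifier treats the metadata columns no differently from pixel-derived columns, so the comparison between augmented and pixel-only inputs isolates the contribution of the contextual information.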
Ellen, J. S., University of California San Diego, USA, firstname.lastname@example.org
Ohman, M. D., Scripps Institution of Oceanography/UC San Diego, USA, email@example.com
Location: 313 A
Presentation is given by student: Yes