Abstract:
Using camera traps to acquire wildlife images is
becoming more common within conservancies. The information
provided by these camera traps enhances understanding of
wildlife behaviour and population patterns. Detecting and
counting the animals present in each captured image provides
valuable information that can guide conservation
efforts. Manual annotation of these wildlife images is a tedious,
labour-intensive process. It is becoming increasingly common to
use tools that employ AI either to annotate camera trap datasets
automatically or to assist human annotators. These AI tools are
usually trained on species endemic to a particular region. The
ability to fine-tune such models on species endemic to one's own
region is therefore important, as it reduces the time
conservationists spend manually reviewing misclassified images. In this paper, we present
a case study where we used a YOLOv5 object detection model
trained to detect the presence and count the number of impala
and other animals from a dataset collected by researchers at the
Dedan Kimathi University of Technology Conservancy. We
analyze the results of the AI’s performance with respect to a
manually annotated dataset. The model was able to annotate
72% of the dataset at a human level of accuracy. This work
shows promise for reducing the time spent labelling camera trap
images by leveraging the presence of particular species to auto-annotate the majority of the dataset.