Mara Evaluator
Module to evaluate the inference with respect to a ground truth
Last updated
This page describes how to run the BisQue module named MaraEvaluation
This module takes in a dataset of images along with two valid G-Object names and generates a confusion matrix that quantifies the agreement between the ground truth annotations and the predicted annotations.
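To make the comparison concrete, the following minimal sketch builds a confusion matrix from two parallel lists of class labels, one from the ground truth and one from the predictions. The class names (`zebra`, `wildebeest`) are illustrative examples, not taken from the module itself.

```python
from collections import Counter

def confusion_matrix(ground_truth, predicted, classes):
    # counts[(g, p)] = number of annotations labeled g in the ground
    # truth and p by the model, over paired annotations
    counts = Counter(zip(ground_truth, predicted))
    return [[counts[(g, p)] for p in classes] for g in classes]

# Illustrative labels only (not from the actual module output)
gt   = ["zebra", "zebra", "wildebeest", "zebra"]
pred = ["zebra", "wildebeest", "wildebeest", "zebra"]
cm = confusion_matrix(gt, pred, ["zebra", "wildebeest"])
# cm[i][j] counts ground-truth class i predicted as class j
```

Diagonal entries of `cm` are correct predictions; off-diagonal entries show which classes the model confuses.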
Login >> Analyze >> Maasai Mara (in Groups Column) >> MaraEvaluation
An image or a dataset of images
A sample dataset of input images is available via the module page
Ground Truth Annotations
String representing the name of a G-Object (e.g., ground_truth_annotations)
Predicted Annotations
String representing the name of a G-Object (e.g., annos_from_AI)
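The two G-Object name strings are used to select the matching annotation sets from the image's metadata. As a rough sketch, assuming annotations are stored as BisQue-style `gobject` XML elements (the document layout and names below are hypothetical), the lookup might resemble:

```python
import xml.etree.ElementTree as ET

# Hypothetical BisQue-style annotation document; the gobject name
# attributes must match the two strings supplied to the module.
doc = ET.fromstring(
    '<image>'
    '<gobject name="ground_truth_annotations"/>'
    '<gobject name="annos_from_AI"/>'
    '</image>'
)

def find_gobject(root, name):
    # Select the G-Object whose name matches the user-supplied string
    return root.find(f".//gobject[@name='{name}']")

gt_objs = find_gobject(doc, "ground_truth_annotations")
pred_objs = find_gobject(doc, "annos_from_AI")
```

If either name does not match any G-Object in the image, the lookup returns nothing, which is why both names must be valid.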
High-level statistics along with a confusion matrix
A table that can be exported as CSV
A confusion matrix
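As a sketch of how the outputs relate, the snippet below derives one high-level statistic (overall accuracy) from a confusion matrix and writes the matrix as CSV, mirroring the exportable table. The matrix values and class names are illustrative assumptions, not actual module output.

```python
import csv
import io

classes = ["zebra", "wildebeest"]  # illustrative class names
cm = [[2, 1], [0, 1]]              # rows: ground truth, cols: predicted

# Overall accuracy: diagonal mass over the total annotation count
total = sum(sum(row) for row in cm)
correct = sum(cm[i][i] for i in range(len(classes)))
accuracy = correct / total

# Write the matrix as CSV with a header row and row labels
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow([""] + classes)
for cls, row in zip(classes, cm):
    writer.writerow([cls] + row)
csv_text = buf.getvalue()
```

The resulting CSV has predicted classes as columns and ground-truth classes as rows, matching the usual confusion-matrix orientation.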
Better Visualization of Confusion Matrix: