Mara Evaluator
A module that evaluates inference results against ground-truth annotations.
This page describes how to run the BisQue module named MaraEvaluation.
Run Instructions
This module takes an image or a dataset of images along with two valid G-Object names and generates a confusion matrix representing the agreement between ground-truth annotations and predicted annotations.
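To make the comparison concrete, here is a minimal sketch of how a confusion matrix is built from paired ground-truth and predicted labels. The class names (zebra, gazelle) and the pure-Python implementation are illustrative assumptions, not the module's actual code.

```python
from collections import Counter

def confusion_matrix(ground_truth, predicted, labels):
    """Rows are ground-truth classes, columns are predicted classes."""
    # counts[(true, pred)] = number of annotations with that label pair
    counts = Counter(zip(ground_truth, predicted))
    return [[counts[(t, p)] for p in labels] for t in labels]

# Hypothetical annotation labels for two Maasai Mara classes
gt   = ["zebra", "zebra", "gazelle", "gazelle"]
pred = ["zebra", "gazelle", "gazelle", "gazelle"]
m = confusion_matrix(gt, pred, ["zebra", "gazelle"])
# m == [[1, 1], [0, 2]]: one zebra correct, one zebra mislabeled as gazelle,
# both gazelles correct
```

Diagonal cells count agreements between the two annotation sets; off-diagonal cells count disagreements.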
Navigate to Module Page
Login >> Analyze >> Maasai Mara (in Groups Column) >> MaraEvaluation
Expected Inputs
An image or a dataset of images
Click here for a sample dataset of input images
Ground Truth Annotations
String representing the name of a G-Object (Ex: ground_truth_annotations)
Predicted Annotations
String representing the name of a G-Object (Ex: annos_from_AI)

Expected Outputs
High level stats along with confusion matrix
A Table that can be exported as CSV
A Confusion Matrix
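The outputs above (high-level stats plus a CSV-exportable table) can be sketched as follows. This is an illustrative example under assumed helper names, not the module's export code: it derives one high-level stat (overall accuracy) from the matrix diagonal and serializes the table with Python's standard csv module.

```python
import csv
import io

def accuracy(matrix):
    # Fraction of annotations on the diagonal (prediction matches ground truth)
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total if total else 0.0

def matrix_to_csv(matrix, labels):
    # Render the confusion matrix as CSV with a header row and label column
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([""] + labels)
    for label, row in zip(labels, matrix):
        writer.writerow([label] + row)
    return buf.getvalue()

m = [[1, 1], [0, 2]]          # example matrix for two assumed classes
print(accuracy(m))            # 0.75
print(matrix_to_csv(m, ["zebra", "gazelle"]))
```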

Better Visualization of Confusion Matrix: