How to Use Batch Inference

How to run a model over a new dataset in the platform

Written by Taylor Maggos
Updated over 3 months ago

You have a trained model in the platform! Congratulations!! Now it’s time to put that model to work and see how it performs on new data, a key part of model evaluation. When you run inference, you are testing the model’s outputs/predictions over a new set of data to see how well the model truly performs.

In another article we cover the two ways to run inference, so check that out! Here, though, we’ll walk through the step-by-step process of running Batch Inference in the platform.

Steps to run batch inference:

  1. Upload a new dataset to the platform to use as your inference dataset. Note: this dataset should NOT contain any media you used to train the model; it should be all new media the model will be seeing for the first time! (A quick local check for overlap is sketched after this list.)

  2. In your Project, navigate to the last tab at the top, the “Productionize” tab. Note: you will only be able to run inference in the Productionize tab once your model is 100% trained, so if you can’t see anything here yet, check back in the “Train” tab to see where your model is in the training cycle.

  3. Then click the “+ Run New Batch Prediction” button.

  4. From here, you will have to select the model you want to use.

  5. Each model trained in your project is listed by its ID, the time it was trained, and the training type, so use that information to select the model you would like to test.

  6. Then select the dataset you would like to use to test the model (most likely the new dataset you uploaded in step 1).

  7. Then press the “Run Model” button.

  8. The model will start running predictions over the new data and the status will change from “Running” to “Done” when the job is finished.

  9. You can then select the checkbox next to the prediction job you just ran and press “View Results”.

  10. This will open the Predictions tab and show you the prediction overlays on the media you uploaded in the inference dataset. You will also be able to see any associated metadata.

  11. You can click on a specific piece of media to bring up a larger view, then toggle between the mask overlay and the original image by pressing the space bar and see for yourself how well the model is predicting!
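
Before you run a job, it can be worth double-checking the note in step 1. Below is a minimal local sketch (not a platform feature) that compares file names between a training media folder and a new inference folder; the folder paths are hypothetical placeholders.

```python
from pathlib import Path

train_dir = Path("datasets/training_media")       # media used to train the model (hypothetical path)
inference_dir = Path("datasets/inference_media")  # new media intended for batch inference (hypothetical path)

# Collect file names from each folder and look for overlap.
train_files = {p.name for p in train_dir.iterdir() if p.is_file()}
inference_files = {p.name for p in inference_dir.iterdir() if p.is_file()}

overlap = train_files & inference_files
if overlap:
    print(f"Remove {len(overlap)} file(s) the model has already seen: {sorted(overlap)}")
else:
    print("No overlap - the inference dataset is all new media.")
```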

How to Export your Batch Inference Results

  1. If you select the inference job by clicking its checkbox, an “Export Results” button becomes available.

  2. Click “Export Results” and your export will start processing; you will know the request went through when a green confirmation message appears at the top of the screen.

  3. Once the job is complete, press the “View Exports” button.

  4. This will open another screen where you can select the export you want and click the “Download Export Link”. This will download a .zip file to your computer.
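
Once the .zip has downloaded, you can unpack it by hand or programmatically. A minimal sketch in Python, assuming the archive was saved as batch_inference_export.zip (a hypothetical file name):

```python
import zipfile

# Unpack the downloaded export and list everything inside it.
with zipfile.ZipFile("batch_inference_export.zip") as zf:
    zf.extractall("batch_inference_export")  # extract next to the archive
    print("\n".join(zf.namelist()))          # show the export's contents
```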

In the .zip file you will find:

  1. A Prediction Masks folder (all of the mask overlays of the prediction outputs in PNG format, each titled prediction_mask_{media_id}.png)

  2. Metadata JSON File (additional info about the export)

  3. Readme Text File (the first file you should start with, which gives you information on everything you just exported)

  4. Records JSON File (long-format, record-level info that would be useful for developers working with the exports)

  5. Summary CSV (spreadsheet summary of prediction exports, including: Media ID, Media Title, Dataset ID, Predictor ID, Time Created, and Prediction Mask File Path; see the sketch after this list)
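
If you want to work with the export programmatically, the Summary CSV is a convenient index into the mask files. A sketch, assuming the column names listed above and a hypothetical summary.csv file name inside the unzipped export folder:

```python
import csv
from pathlib import Path

export_dir = Path("batch_inference_export")  # the unzipped export folder (hypothetical name)

# Pair each media item in the summary with its prediction mask file.
with open(export_dir / "summary.csv", newline="") as f:
    for row in csv.DictReader(f):
        mask_path = export_dir / row["Prediction Mask File Path"]
        print(f"{row['Media Title']} (media {row['Media ID']}) -> {mask_path}")
```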

Note: A predictor ID is the ID of the specific model trained. A CV model is made up of predictors and weights, which can also be exported so the model can be used at the edge, in a bare-metal container, or in various other scenarios!
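
Finally, you can reproduce the platform’s overlay view locally. A sketch using Pillow, assuming you still have the original media on disk; the file names are hypothetical, and the masks follow the prediction_mask_{media_id}.png pattern from the Prediction Masks folder:

```python
from PIL import Image

# Hypothetical file names for one media item and its exported mask.
original = Image.open("media/12345.png").convert("RGBA")
mask = Image.open("prediction_masks/prediction_mask_12345.png").convert("RGBA")
mask = mask.resize(original.size)  # make sure the two images line up

# Blend the mask over the original at 50% opacity,
# similar to toggling the overlay with the space bar in the platform.
overlay = Image.blend(original, mask, alpha=0.5)
overlay.save("overlay_12345.png")
```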
