You have labeled all your data to create a training set for your model, and now it's time to click the train button! It truly is as simple as clicking the train button on the platform to kick off model training, but we let you do a bit more than that if you want.
In the Annotate tab of your project, once you click the blue "Start Training" button, the screen below pops up to give you a few extra options.
First Option: Overriding the Training Type
You may have created training data for an object detection model, but you're interested to see how the labels would convert to segmentation and how that model would perform. We allow you to choose a different model type from the drop-down menu to test out other architectures. This can be a great data point for examining your use case, or it can be a quick way to build a segmentation model from object detection data, since labeling bounding boxes is much faster than drawing a unique polygon around each object.
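To make that conversion concrete, here is a minimal sketch in plain Python (not the platform's actual code) of the simplest way a bounding-box label can seed a segmentation label: tracing the box as a four-corner polygon.

```python
def box_to_polygon(x_min, y_min, x_max, y_max):
    """Convert a bounding-box label into a four-corner polygon.

    This is the crudest possible box-to-segmentation conversion:
    the polygon just traces the box corners. A real converter would
    refine this outline to hug the object, but it shows why box
    labels can bootstrap a segmentation model.
    """
    return [
        (x_min, y_min),  # top-left
        (x_max, y_min),  # top-right
        (x_max, y_max),  # bottom-right
        (x_min, y_max),  # bottom-left
    ]

# Example: one box label becomes a rectangular polygon outline.
polygon = box_to_polygon(10, 20, 110, 220)
```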
You can train as many models as you would like on the platform, and each will have its own unique identifiers (weights, ID, etc.) for you to experiment with during inference testing outside of the platform. Therefore, if you think your segmentation model may perform better as an object detection model -- don't hesitate to give it a shot!
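Because every trained model gets its own ID and weights, inference code can be pointed at whichever one you want to compare. The endpoint and field names below are hypothetical placeholders, not the platform's documented API; this is only a sketch of the idea:

```python
import requests

# Hypothetical endpoint and field names -- substitute your platform's
# actual inference API. The point is that each trained model is
# addressed by its own unique ID.
MODEL_ID = "segmentation-attempt-2"  # hypothetical ID of one trained model

with open("test_image.jpg", "rb") as f:
    response = requests.post(
        f"https://api.example.com/models/{MODEL_ID}/predict",  # placeholder URL
        files={"image": f},
    )

print(response.json())  # predictions from that specific model's weights
```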
Second Option: Level of Detail
You can choose to Zoom Out or use the Native Resolution. By zooming out, you are training on the "big picture" -- the full image. This works best when your object of interest is not tiny. If you need to zoom in on the image to see the object you are trying to detect, you may want to stick with Native Resolution. If you have tiled your imagery, it is typically best to train at native resolution because the model will train on the tile-sized media and pick up all the small details.
My rule of thumb here: if your object of interest is very small, use native resolution. If your object of interest takes up a lot of space in your imagery and is easy to identify without zooming in, then you can use the zoom out feature. As a fallback plan -- you can always train one of each and see which performs better!
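If it helps to picture the difference, here is a rough sketch using Pillow (not the platform's internals, and the file name is made up): "Zoom Out" is effectively downscaling the whole frame, while native resolution keeps every original pixel, often by training on fixed-size tiles.

```python
from PIL import Image

img = Image.open("aerial_scene.jpg")  # e.g., a large 4000x4000 source image

# "Zoom Out": shrink the full frame so the model sees the big picture.
# Large objects survive the downscale; tiny ones can vanish entirely.
zoomed_out = img.resize((640, 640))

# Native resolution: keep every original pixel by cutting fixed-size
# tiles, so small objects stay visible at full detail.
tile_size = 640
tiles = []
for top in range(0, img.height - tile_size + 1, tile_size):
    for left in range(0, img.width - tile_size + 1, tile_size):
        tiles.append(img.crop((left, top, left + tile_size, top + tile_size)))
```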
Third Option: Object Identification
** This only applies to segmentation models **
If you want to train a semantic segmentation model, choose "Group by Type". This will take everything labeled with one category and treat it as the same object. For example, if you are labeling vegetation, every polygon with the vegetation label will be lumped into one group, and that's how the model will be trained to identify vegetation -- it's all one big object spread out across the media. Or another example (bringing back the cake): if you have labeled cake, but some labels are slices of cake and some are the cake from which the slices were cut, the model will learn that however the cake may look -- in slices or as a whole -- it's all the same cake.
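In code terms, "Group by Type" is like collapsing every vegetation polygon into a single binary mask for the class. A small NumPy sketch of the idea (not the platform's implementation):

```python
import numpy as np

# Three separate "vegetation" polygons, rasterized to binary masks
# (1 where the polygon covers a pixel, 0 elsewhere).
mask_a = np.zeros((100, 100), dtype=np.uint8)
mask_a[10:30, 10:30] = 1
mask_b = np.zeros((100, 100), dtype=np.uint8)
mask_b[50:70, 40:60] = 1
mask_c = np.zeros((100, 100), dtype=np.uint8)
mask_c[80:95, 75:95] = 1

# Semantic segmentation ("Group by Type"): union all labels of one
# class into a single mask -- the model only learns "vegetation vs.
# not vegetation", not individual plants.
vegetation_mask = mask_a | mask_b | mask_c
```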
If you want to train an instance segmentation model, choose "Separate Individual Objects". This will teach the model not only to identify the vegetation as a whole, but to notice and detect each individual leaf, just as our eyes and brain can do. In terms of the cake, the model will be able to differentiate the whole cake from a slice of cake -- still recognizing it's all cake, while at the same time detecting the difference between the whole cake and a slice.
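"Separate Individual Objects" keeps each labeled region distinct. Continuing the toy masks from the sketch above, connected-component labeling shows the difference: the same pixels, but each blob gets its own ID.

```python
import numpy as np
from scipy import ndimage

# Rebuild the combined vegetation mask from the previous sketch.
vegetation_mask = np.zeros((100, 100), dtype=np.uint8)
vegetation_mask[10:30, 10:30] = 1
vegetation_mask[50:70, 40:60] = 1
vegetation_mask[80:95, 75:95] = 1

# Instance segmentation ("Separate Individual Objects"): give each
# disconnected region its own integer ID instead of merging them.
instance_map, num_instances = ndimage.label(vegetation_mask)

print(num_instances)  # 3 -- each labeled region is its own instance
# instance_map holds 0 for background and 1..3 for the individual
# objects, which is how the model can tell a slice from the whole cake.
```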