We know that there is no one-size-fits-all solution for image or video moderation. Expectations vary between countries, cultures and applications.
This is why we have worked to make our API endpoints and models flexible and suitable to most needs.
As you may know, we have multiple moderation models available. When you make a request to the API, you choose which models to apply and how each model's output is used to decide whether an image should be approved, rejected or reviewed. See our model reference to learn more about each model's output.
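As a rough illustration of how such a request might be assembled, here is a minimal sketch. The endpoint URL and the parameter names (`models`, `api_user`, `api_secret`) are assumptions for the example, not the exact API schema; consult the API reference for the real request format.

```python
import urllib.parse

# Hedged sketch: the endpoint and parameter names below are illustrative
# assumptions, not the documented API contract.
def build_moderation_request(image_url, models, api_user, api_secret):
    """Build a GET request URL applying the chosen moderation models."""
    params = {
        "url": image_url,             # image to analyze
        "models": ",".join(models),   # which moderation models to apply
        "api_user": api_user,
        "api_secret": api_secret,
    }
    return "https://api.example.com/check.json?" + urllib.parse.urlencode(params)

request_url = build_moderation_request(
    "https://example.com/photo.jpg",
    ["nudity", "weapons"],
    "my_user",
    "my_secret",
)
```

The point is simply that model selection happens per request: each call names the models to run, and your own code interprets their combined output.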
Our models do not return simple binary classifications (such as 'nudity' or 'no nudity'). They return richer information so that you can make a fine-grained decision on how to handle each case. The Nudity Detection model, for instance, tells you what level of nudity it has encountered. When faced with partial nudity, we tell you exactly what the image contains, so that you can take appropriate action - or simply accept the photo.
In other words, you don't need to worry about how images are handled when they contain borderline nudity. You can simply define an action for each possible situation:
| Situation | Your decision |
|---|---|
| Woman in a bikini | Accept, Reject or Review? |
| Bare male chest | Accept, Reject or Review? |
| Woman in lingerie | Accept, Reject or Review? |
| Suggestive cleavage | Accept, Reject or Review? |
| Miniskirt | Accept, Reject or Review? |
| and so on... | see more details |
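The per-situation decisions above can be encoded as a simple policy lookup on the model's scores. The class names below (`raw`, `partial`, `safe`) and the response shape are illustrative assumptions, not the exact API output; substitute the real fields from the model reference.

```python
# Hedged sketch: the keys "raw", "partial" and "safe" stand in for the
# actual nudity classes returned by the API.
POLICY = {
    "raw": "reject",      # explicit nudity: reject outright
    "partial": "review",  # bikinis, lingerie, cleavage: send to human review
    "safe": "accept",     # no nudity detected: accept
}

def decide(nudity_scores: dict, policy: dict, default: str = "review") -> str:
    """Return the action for the highest-scoring nudity class."""
    top_class = max(nudity_scores, key=nudity_scores.get)
    return policy.get(top_class, default)

# Example response: the model is confident the image shows partial nudity.
sample = {"raw": 0.01, "partial": 0.85, "safe": 0.14}
print(decide(sample, POLICY))  # prints "review"
```

Because the policy is just data, tightening or relaxing moderation for a given audience means editing the mapping, not the decision code.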
That said, if you believe you need more customization than is provided out-of-the-box, for instance new detection capabilities or custom decision filters, please get in touch.