The WAD endpoint detects several categories of content that are not appropriate for a general audience. The endpoint is useful for anyone who needs to moderate user-generated images and filter out photos that contain gun-related violence or displays of alcohol or drugs.
Guns, rifles, carbines and military weapons, along with other types of weapons such as knives.
Wine, beer, liquors, spirits and other alcoholic beverages.
Prescription drugs, as well as syringes and other medical devices.
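As a minimal sketch of how the three WAD categories might be consumed, the snippet below parses a hypothetical JSON response and flags the categories that exceed a threshold. The field names ("weapon", "alcohol", "drugs") and the 0.5 cut-off are illustrative assumptions, not the documented schema.

```python
import json

# Hypothetical example of a WAD-style response; the field names
# ("weapon", "alcohol", "drugs") are illustrative assumptions,
# not the documented schema.
sample_response = json.loads("""
{
  "status": "success",
  "weapon": 0.92,
  "alcohol": 0.01,
  "drugs": 0.05
}
""")

THRESHOLD = 0.5  # assumed cut-off; tune to your own moderation policy

def flagged_categories(result, threshold=THRESHOLD):
    """Return the WAD categories whose score meets or exceeds the threshold."""
    return [cat for cat in ("weapon", "alcohol", "drugs")
            if result.get(cat, 0.0) >= threshold]

print(flagged_categories(sample_response))  # ['weapon']
```

In practice you would feed the live API response into `flagged_categories` and route any flagged image to your rejection or review queue.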
The API returns moderation results in real time and scales automatically to meet your needs.
Easily grow your moderation pipeline to tens of millions of images per month.
Built on state-of-the-art models and proprietary technology, moderation decisions are consistent and auditable, with feedback loops and continuous improvement built in.
The API was built by developers, for developers. With our simple SDKs and detailed documentation, you only need a few lines of code to be up and running.
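To make the "few lines of code" claim concrete, here is a minimal sketch of assembling a moderation request. The endpoint URL and the parameter names (`url`, `api_user`, `api_secret`) are placeholders assumed for illustration; consult the actual API reference for the real URL, parameters and authentication.

```python
from urllib.parse import urlencode

# Placeholder endpoint, assumed for illustration only.
API_URL = "https://api.example.com/1.0/check-wad.json"

def build_request_url(image_url, api_user, api_secret):
    """Assemble the GET URL for a WAD moderation check (parameter
    names are assumptions, not the documented API)."""
    params = urlencode({
        "url": image_url,
        "api_user": api_user,
        "api_secret": api_secret,
    })
    return f"{API_URL}?{params}"

# Sending the request is then a single call, e.g. with the stdlib:
# from urllib.request import urlopen
# result = urlopen(build_request_url("https://example.com/photo.jpg",
#                                    "my_user", "my_secret")).read()
```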
No human moderator is involved: your images remain private and are not shared with any third party.
Detect raw and partial nudity
Detect babies, children and teenagers under 18
Detect blurry and low quality images, get dominant colors
Detect text and watermarks artificially added to images
Detect violent and inappropriate content