The Weapon Alcohol Drug detection model helps you determine if an image or video contains displays of weapons, alcoholic beverages, recreational drugs or medical drugs.
The API will return a weapon value between 0 and 1. This value reflects the API's confidence. Media with a weapon value closer to 1 have a higher probability of containing a weapon while media with a value closer to 0 have a lower probability.
The API has been designed to detect all displays of wearable or personal weapons/arms typically found in user-submitted photos. These include:
The detection is also robust to varying levels of zoom, blur, and rotation.
The API has not been designed to detect the following concepts:
This model is usually used to moderate user-submitted images or videos and prevent users from posting or displaying unwanted content. Specific use-cases include:
You might also want to detect scenes containing blood, harm or horrific imagery. To do so, use the Gore and Graphic Violence Detection Model.
When processing the weapon value returned by the API, users generally set a threshold: images or videos with a value above the threshold are flagged as potentially containing a weapon, while those with a value below it are considered safe.
Thresholds should be fine-tuned for each use case: adjust them according to your tolerance for false positives and false negatives.
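The thresholding step described above can be sketched in a few lines of Python. The 0.5 cutoff used here is an illustrative assumption, not a recommended value:

```python
# Flag media whose weapon value exceeds a chosen threshold.
# 0.5 is an illustrative assumption: tune it for your own
# tolerance to false positives and false negatives.
WEAPON_THRESHOLD = 0.5

def is_weapon_flagged(weapon_value, threshold=WEAPON_THRESHOLD):
    """Return True if the media should be flagged as potentially containing a weapon."""
    return weapon_value >= threshold

print(is_weapon_flagged(0.773))  # True
print(is_weapon_flagged(0.001))  # False
```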
The value returned is between 0 and 1. Images with an alcohol value closer to 1 have a higher probability of containing alcohol, while images with a value closer to 0 have a lower probability.
The API detects the following concepts:
The detection is also robust to varying levels of zoom, blur, and rotation.
The API has *not* been designed to detect drunk people or other effects of alcohol consumption.
This model is usually used to moderate user-submitted images or videos and prevent users from posting or displaying unwanted content. Specific use-cases include:
When processing the alcohol value returned by the API, users generally set a threshold: images or videos with a value above the threshold are flagged as potentially containing alcoholic beverages, while those with a value below it are considered safe.
Thresholds should be fine-tuned for each use case: adjust them according to your tolerance for false positives and false negatives.
The value returned is between 0 and 1. Images with a drugs value closer to 1 are likely to contain displays of drugs, while images with a value closer to 0 are considered safe.
The API detects the following concepts:
- Symbols used to represent marijuana / cannabis or, more generally, drugs.
- Dried buds, cannabis crystals, kief.
- Cannabis cigarettes, also known as a spliff or doobie. The model is designed to differentiate regular tobacco cigarettes from cannabis joints.
- Devices used to smoke cannabis and other herbal substances.
- Medical syringes, usually made of plastic, that might be used for recreational purposes such as drug injection.
- Pills, tablets, and other small doses of medicine.
- Self-administration of some recreational drugs such as ketamine or cocaine.

All of these concepts are reported under the same drugs value.
The detection is also robust to varying levels of zoom, blur, and rotation.
When processing the drugs value returned by the API, users generally set a threshold: images or videos with a value above the threshold are flagged as potentially containing drugs, while those with a value below it are considered safe.
Thresholds should be fine-tuned for each use case: adjust them according to your tolerance for false positives and false negatives.
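Since the same thresholding logic applies to the weapon, alcohol, and drugs values, a single helper can check all three in one pass. A minimal sketch, in which the threshold values are illustrative assumptions to be tuned per model:

```python
# Illustrative per-model thresholds -- each one should be tuned
# independently for your own false-positive/false-negative tolerance.
THRESHOLDS = {"weapon": 0.5, "alcohol": 0.5, "drugs": 0.5}

def flagged_models(api_output, thresholds=THRESHOLDS):
    """Return the names of the models whose value exceeds their threshold."""
    return [model for model, threshold in thresholds.items()
            if api_output.get(model, 0) >= threshold]

# Using the values from the sample response shown further down:
print(flagged_models({"weapon": 0.773, "alcohol": 0.001, "drugs": 0}))  # ['weapon']
```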
If you haven't already, create an account to get your own API keys.
Let's say you want to moderate the following image:
You can either submit the image's public URL or upload the raw binary image. Here's how to proceed if you choose to share the image's public URL:
curl -X GET -G 'https://api.sightengine.com/1.0/check.json' \
-d 'models=weapon' \
-d 'api_user={api_user}&api_secret={api_secret}' \
--data-urlencode 'url=https://sightengine.com/assets/img/examples/example-tt-1000.jpg'
# this example uses requests
import requests
import json
params = {
'url': 'https://sightengine.com/assets/img/examples/example-tt-1000.jpg',
'models': 'weapon',
'api_user': '{api_user}',
'api_secret': '{api_secret}'
}
r = requests.get('https://api.sightengine.com/1.0/check.json', params=params)
output = json.loads(r.text)
$params = array(
'url' => 'https://sightengine.com/assets/img/examples/example-tt-1000.jpg',
'models' => 'weapon',
'api_user' => '{api_user}',
'api_secret' => '{api_secret}',
);
// this example uses cURL
$ch = curl_init('https://api.sightengine.com/1.0/check.json?'.http_build_query($params));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);
$output = json_decode($response, true);
// this example uses axios
const axios = require('axios');
axios.get('https://api.sightengine.com/1.0/check.json', {
params: {
'url': 'https://sightengine.com/assets/img/examples/example-tt-1000.jpg',
'models': 'weapon',
'api_user': '{api_user}',
'api_secret': '{api_secret}',
}
})
.then(function (response) {
// on success: handle response
console.log(response.data);
})
.catch(function (error) {
// handle error
if (error.response) console.log(error.response.data);
else console.log(error.message);
});
The API will then return a JSON response:
{
"status": "success",
"request": {
"id": "req_1OjggusalNb2S7MxwLq2h",
"timestamp": 1509132120.6988,
"operations": 1
},
"weapon": 0.773,
"alcohol": 0.001,
"drugs": 0,
"media": {
"id": "med_1OjgEqvJtOhqP7sfNe3ga",
"uri": "https://sightengine.com/assets/img/examples/example-tt-1000.jpg"
}
}
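Once decoded, the response can be checked and acted on. A minimal sketch in Python using the sample values above; the 0.5 threshold is an illustrative assumption:

```python
# Sample response, as returned by the API above.
output = {
    "status": "success",
    "request": {"id": "req_1OjggusalNb2S7MxwLq2h",
                "timestamp": 1509132120.6988, "operations": 1},
    "weapon": 0.773,
    "alcohol": 0.001,
    "drugs": 0,
    "media": {"id": "med_1OjgEqvJtOhqP7sfNe3ga",
              "uri": "https://sightengine.com/assets/img/examples/example-tt-1000.jpg"},
}

if output["status"] == "success":
    # 0.5 is an illustrative threshold, not a recommended value.
    verdict = "flagged" if output["weapon"] >= 0.5 else "safe"
    print(f'{output["media"]["uri"]}: weapon={output["weapon"]} -> {verdict}')
else:
    # On failure the API reports an error instead of model values.
    print("request failed")
```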
See our full list of Image/Video models for details on other filters and checks you can run on your images and videos. You might also want to check our Text models to moderate text-based content: messages, reviews, comments, usernames...