
How fast is Sightengine? How can I make Sightengine requests faster?

How fast is Sightengine?

Sightengine has been built with speed in mind. Images submitted to our Nudity Detection API, for instance, are usually processed within a few hundred milliseconds. This is much faster than what human moderators can achieve.

We achieve this through a series of steps:

  • Our inference algorithms have been optimized both for accuracy and speed.
  • We run them on specialized hardware (such as high-end CPUs and GPUs).
  • We hate queues. While many vendors put incoming requests in a queue and let them wait until a resource becomes available, we provision enough capacity to make sure you never have to wait. Queue-less processing is our default.
  • We control the complete stack end to end, obsessing over every detail to build high-quality infrastructure (powerful bare-metal servers).
  • We reduce request latency with a geo-located inference network that lets you choose the location of the inference servers you use (Enterprise option only).

Some of our models are heavier and take a bit more time to run than Nudity Detection. The largest ones are currently the Text Detection models, which take closer to 1 second to process on our standard plans.

Make requests even faster

Several elements influence the latency of a request to Sightengine (a small timing sketch follows this list):

  • The network overhead: the time it takes for your request to reach our servers. It is influenced by the different steps involved (DNS lookup, TCP round-trips, SSL negotiation, etc.) and by the distance between your servers and ours.
  • Retrieving the image or video: depending on how you submit it, our servers might have to perform an extra step to retrieve it.
  • The actual image processing: this is the most important step, where we process your image or video.
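
One rough way to see how these components split is to time a first request against a second request made over the same connection: the difference is roughly the one-off network overhead (DNS lookup, TCP round-trips, SSL negotiation), and what remains is mostly retrieval plus processing. The sketch below is only an illustration, not an official client: it assumes the standard check.json endpoint, the credentials are placeholders, the model name is just an example, and the image URL is hypothetical.

```python
import time
import requests

API_URL = "https://api.sightengine.com/1.0/check.json"   # standard check endpoint
PARAMS = {
    "models": "nudity-2.1",                  # example model name
    "api_user": "YOUR_API_USER",             # placeholder credentials
    "api_secret": "YOUR_API_SECRET",
    "url": "https://example.com/image.jpg",  # hypothetical image URL
}

session = requests.Session()

# First request: pays the full network overhead (DNS lookup, TCP round-trips,
# SSL negotiation) on top of retrieval and processing.
start = time.perf_counter()
session.get(API_URL, params=PARAMS)
cold = time.perf_counter() - start

# Second request on the same session: the connection is re-used, so what is
# left is mostly retrieval + processing time.
start = time.perf_counter()
session.get(API_URL, params=PARAMS)
warm = time.perf_counter() - start

print(f"cold: {cold * 1000:.0f} ms, warm: {warm * 1000:.0f} ms")
print(f"approximate connection setup overhead: {(cold - warm) * 1000:.0f} ms")
```

Run it a few times: the warm figure is closer to the pure retrieval-and-processing time described above.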

That being said, there are a few things you can do to make sure you get the quickest responses (a combined example follows this list):

  • Properly re-use connections between subsequent requests, so that you don't perform a new SSL negotiation/handshake on each request.
  • Prefer POSTing the images directly to the API, rather than sending a public URL to the image. This way our servers don't need the extra time to fetch the image.
  • Make sure you optimize the size and content of the images or videos you submit, as this will save both time and bandwidth.
  • If you need to run multiple checks on a given image or video, then make sure you batch those checks in a single request rather than doing separate API requests for each.
  • With our Enterprise plans, you can choose exactly where your images are processed and minimize latency by selecting a data center close to your back-end. Enterprise plans also let you ask for priority processing.
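
Most of these tips can be combined into a single request pattern: a re-used session, a downsized image POSTed directly as bytes, and all the checks you need batched into one models parameter. The sketch below is a rough illustration under those assumptions, not an official client: the credentials are placeholders, the model names and resize settings are only examples, and photo.jpg is a hypothetical local file.

```python
import io
import requests
from PIL import Image  # Pillow, used here only to downsize the image before upload

API_URL = "https://api.sightengine.com/1.0/check.json"

# Re-use one session for all requests so the SSL handshake is only paid once.
session = requests.Session()

def moderate(path: str) -> dict:
    # Optimize the payload: large originals can be resized before submission,
    # which saves upload time and bandwidth.
    img = Image.open(path)
    img.thumbnail((1280, 1280))                 # illustrative size cap
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=85)
    buf.seek(0)

    # POST the image bytes directly (no public URL for our servers to fetch),
    # and batch all the checks you need into a single 'models' parameter
    # instead of making one API request per model.
    response = session.post(
        API_URL,
        files={"media": ("image.jpg", buf, "image/jpeg")},
        data={
            "models": "nudity-2.1,text-content",  # illustrative model names
            "api_user": "YOUR_API_USER",          # placeholder credentials
            "api_secret": "YOUR_API_SECRET",
        },
        timeout=10,
    )
    return response.json()

result = moderate("photo.jpg")   # hypothetical local file
print(result)
```

A single call like this returns the results for every batched model in one response, so you pay the network and retrieval overhead only once per image.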
