Frequently asked questions
We certified our architecture at iBeta:
- the frontend SDK capture method (at iBeta, we provided a mobile SDK) and
- the full processing of presentation attack detection on the Oz API side.
If you replace Mobile SDK capture with Web SDK capture, the architecture and the resistance to presentation attacks remain the same, but in some cases the False Rejection Rate might increase due to the lower quality of laptop cameras.
Also, for the Web SDK scenario, we have implemented an additional layer of protection against injection and deepfake attacks: camera spoofing detection.
Oz Liveness against the new generation of biometric attacks
However, these types of attacks are not related to presentation attacks and are not included in the iBeta certification scope.
On the basic level, you can test them yourself without even involving our engineers: visit our Web Demo page or download our demo application with the on-device functionality from Google Play (Android) or TestFlight (iOS). To explore more features, contact us to get the credentials and the necessary links to access:
- the server-based liveness and face matching analyses,
- extended Web Demo,
- Web console to check the info on orders and analyses,
- and REST API.
We'll be happy to provide you with any support.
Yes, we do provide protection from injection attacks. If you take a video using Web SDK, you get two-layer protection against injection attacks: the way we capture videos allows our neural networks to trace attacks in the video itself, and the information gathered about the browser context and camera properties helps detect the usage of virtual cameras and other injection methods.
To prevent malicious use, we do not allow the usage of virtual cameras.
You can analyze any media. However, with our SDKs, the accuracy of the analyses is higher, as the quality of a video taken by our SDKs is optimal for the subsequent Liveness analysis. The seamless integration of the video capturing and video analyzing components ensures the whole process goes smoothly. Also, we designed our SDKs with real users in mind, providing a great user experience.
- APCER (FAR): 0%.
- BPCER (FRR), for the portrait orientation, as the majority of the media we analyze are taken this way:
  - Mobile SDK – 0.1-0.15%, depending on the gesture.
  - Web SDK – 0.5-0.6%, depending on the gesture.
Usually, the analysis takes 1-3 seconds. The time depends on the analysis type (on-device or server-based) and some other factors.
- Server-based: for the on-premise model of usage, it is your hardware that matters the most. For SaaS, our hardware is used, but at peak times the analysis might still take longer because of the increased load.
- On-device iOS: less than a second.
- On-device Android: up to three seconds on new smartphones; on older models, these numbers might be a bit higher. The newer your smartphone is, the faster the analyses are.
The hybrid analysis type is a combination of the on-device and server-based types that provides the benefits of both while avoiding their drawbacks. If you switch to the hybrid analysis from the on-device one, the main benefit you'll get is increased accuracy.
Occasionally, the outcome of the analysis conducted on the device may be uncertain. With the hybrid approach, the system simply forwards the corresponding media to the server for an additional check. When a definite result is determined on the device, no data is transmitted (configurable within the `Analysis` structure). Consequently, there will be a negligible increase in computational capacity and bandwidth usage, while the results become more accurate.
The hybrid analysis type is a combination of the on-device and server-based types that provides the benefits of both while avoiding their drawbacks. If you switch to the hybrid analysis from the server-based one, the main benefits you'll get are:
- reduced computational capacity,
- quicker analyses,
- decreased data transmission.
In most cases, the on-device models can furnish you with a conclusive analysis result, so you can significantly decrease analysis time and resource usage. At the same time, the accuracy level will be similar to that of the server-based analysis: whenever there's uncertainty, the system forwards the corresponding media to the server for an additional check. As for definite on-device outcomes, you can configure what gets transmitted and stored on the server, ranging from the original media to nothing, based on the settings within the `Analysis` structure.
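To make the hybrid flow concrete, here is a minimal, purely illustrative sketch of the decision logic in Python. It is not the SDK API: the model and server calls are stubs, and the confidence threshold is a hypothetical stand-in for what the `Analysis` structure lets you configure.

```python
import random
from dataclasses import dataclass

@dataclass
class OnDeviceResult:
    is_live: bool
    confidence: float  # 0.0-1.0, produced by the on-device model

def run_on_device_model(media: bytes) -> OnDeviceResult:
    """Stub standing in for the on-device liveness model."""
    score = random.random()
    return OnDeviceResult(is_live=score > 0.5, confidence=abs(score - 0.5) * 2)

def run_server_analysis(media: bytes) -> bool:
    """Stub standing in for the server-based liveness analysis."""
    return True

# Hypothetical threshold below which the on-device verdict counts as uncertain.
CONFIDENCE_THRESHOLD = 0.9

def hybrid_liveness_check(media: bytes) -> bool:
    """Decide on-device; escalate to the server only when uncertain."""
    result = run_on_device_model(media)
    if result.confidence >= CONFIDENCE_THRESHOLD:
        # Definite on-device outcome: no media needs to be transmitted.
        return result.is_live
    # Uncertain outcome: forward the media for a server-based re-check.
    return run_server_analysis(media)

print(hybrid_liveness_check(b"<video bytes>"))
```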
All your orders' data can be retrieved via API: call `GET Folder [LIST]`. The response will be a JSON with all the information needed.
If you require detailed data on your analyses, make sure the `with_analyses` parameter is set to `true`: `GET /api/folders/?with_analyses=true`. You can also specify other parameters like user ID or analysis type. To download our Postman collection, please proceed here.
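As a sketch, here is the same request made with Python's `requests` library. The host is a placeholder, and the `X-Forensic-Access-Token` header name is an assumption based on a typical Oz API setup; verify both against the API reference.

```python
import requests

API_HOST = "https://your-oz-api-host"  # placeholder: substitute your host
ACCESS_TOKEN = "<your_access_token>"   # obtained via the authentication endpoint

# Request all folders, including the detailed analyses data.
response = requests.get(
    f"{API_HOST}/api/folders/",
    params={"with_analyses": "true"},
    headers={"X-Forensic-Access-Token": ACCESS_TOKEN},
    timeout=30,
)
response.raise_for_status()

folders = response.json()  # JSON with all the orders (folders) data
print(folders)
```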
To retrieve the information on a particular order (folder), you'll need its ID. In the web console, go to the Orders tab, click Filter, enter the order number in the appropriate field, and click Search. Once the order you need is displayed, click the PDF icon, select the report template, and click Continue. The order data will be downloaded in PDF format. To learn more about working with the Oz web console, please refer to this section.
With API 5.0.0 and above, you can perform the Liveness analysis on any media, including images.
With API 4.0.8 and below, Liveness doesn't work with images but accepts frame sequences. Thus, to perform an analysis using a photo, you need to turn it into a frame sequence consisting of a single frame: place the photo into a ZIP archive and apply video tags, so that the Liveness algorithms treat this photo as a video. You can find more information in our integration guide.
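A minimal sketch of the packaging step in Python. The file and archive names are arbitrary examples, and the video tags themselves are applied when you upload the archive (see the integration guide for the exact tag values).

```python
import zipfile

PHOTO_PATH = "selfie.jpg"          # arbitrary example file name
ARCHIVE_PATH = "selfie_video.zip"  # arbitrary example archive name

# Wrap the single photo into a ZIP archive so that API 4.0.8 and below
# can treat it as a one-frame "video" for the Liveness analysis.
with zipfile.ZipFile(ARCHIVE_PATH, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write(PHOTO_PATH)

# Upload ARCHIVE_PATH with video media tags (the exact values are listed
# in the integration guide) so that the photo is analyzed as a video.
```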
One_shot and Selfie are Passive Liveness actions. For the Selfie action, we take a short video and send it for analysis. For One_shot, only a single frame is used, which lowers both the file size and the analysis accuracy.
The `FAILED` status of the analysis means that this analysis couldn't be finished due to some kind of error. For instance, when the system can't find any face in a media file, there's nothing to analyze, thus the analysis receives the `FAILED` status.
The `DECLINED` status means that the analysis has finished, and the system considered the check unsuccessful. For instance, the analysis receives the `DECLINED` status when two faces in media files don't match (Face Matching) or when a spoofing attack is detected (Liveness).
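As an illustration, here is a hedged sketch of branching on these statuses when reading a folder's analyses from the API response. The field names (`analyses`, `type`, `resolution_status`) are assumptions; verify them against the actual response schema.

```python
def summarize_analyses(folder: dict) -> None:
    """Print a short verdict for each analysis in a folder payload."""
    for analysis in folder.get("analyses", []):
        name = analysis.get("type")
        status = analysis.get("resolution_status")
        if status == "FAILED":
            # The analysis couldn't be finished (e.g., no face was found).
            print(f"{name}: error, nothing to analyze")
        elif status == "DECLINED":
            # The analysis finished, but the check was not successful
            # (e.g., faces don't match, or a spoofing attack was detected).
            print(f"{name}: check not passed")
        else:
            print(f"{name}: {status}")

# Example with a hypothetical folder payload:
summarize_analyses({
    "analyses": [
        {"type": "QUALITY", "resolution_status": "DECLINED"},
        {"type": "BIOMETRY", "resolution_status": "SUCCESS"},
    ]
})
```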
When calculating the spoofing score, the neural networks take many factors into account. Some of them can be explained: e.g., better (and better-controlled) cameras usually produce more accurate scores, which is why the native SDKs' scores for genuine attempts are typically lower than the Web SDK ones. However, because of the number of factors, the scores are not directly interpretable.
Even when you repeat attempts under similar conditions and make them seemingly identical, the images differ slightly from attempt to attempt, and the scores might differ as well.