Oz Knowledge Base

FAQ

Frequently asked questions

General

Do Oz Forensics Products Have Any Certificates?
We certified our architecture at iBeta:
  • the frontend SDK capture method (at iBeta, we provided a mobile SDK) and
  • the full presentation attack detection processing on the Oz API side.
If you replace Mobile SDK capture with Web SDK capture, the architecture and resistance to presentation attacks remain the same, but in some cases the False Rejection Rate might increase due to the lower quality of laptop cameras.
Also, for the scenario of using our Web SDK, we have implemented an additional layer of protection against injection and deepfake attacks: camera spoofing detection.
Oz Liveness against new generation of biometric attacks
However, these types of attacks are not related to presentation attacks and are not included in the iBeta certification scope.
How Do I Authorize in Oz API?
Obtain the authorization token via API, as described here. To get the token, you need the credentials and API address provided by us. The token is required for all subsequent interactions with Oz API.
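As a sketch, token retrieval and reuse might look like the following Python snippet. The endpoint path (`/api/authorize/auth`), the payload shape, and the `X-Forensic-Access-Token` header name are assumptions for illustration; use the exact values from the API documentation linked above.

```python
import json
import urllib.request

def get_access_token(api_url: str, email: str, password: str) -> str:
    """Obtain an Oz API access token. The endpoint path and payload
    field names are assumptions; check the API reference for your version."""
    payload = json.dumps(
        {"credentials": {"email": email, "password": password}}
    ).encode("utf-8")
    req = urllib.request.Request(
        f"{api_url}/api/authorize/auth",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["access_token"]

def auth_headers(token: str) -> dict:
    # Header name is an assumption; it must match your API version.
    return {"X-Forensic-Access-Token": token}
```

Every subsequent Oz API call would then pass `auth_headers(token)` along with the request.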
What Are Oz System Requirements?
Please check the compatibility table.
The server requirements are listed here. Please note: if you implement the on-premise model of usage, the better your hardware is, the faster the analyses are.
How to Test Oz Solutions?
At the basic level, you can test them yourself without even involving our engineers: visit our Web Demo page or download our demo application from Google Play (Android) or TestFlight (iOS) with the on-device functionality. To explore more features, contact us to get the credentials and the necessary links to access:
  • the server-based liveness and face matching analyses,
  • extended Web Demo,
  • Web console to check the info on orders and analyses,
  • and REST API.
We'll be happy to provide you with any support.
Can I Use My Own Dataset for Testing?
Certainly, you can. Contact us to get your credentials for accessing Oz API, then follow the instructions from this integration guide.
Does Oz SDK Protect from Injection Attacks (Virtual Cameras etc.)?
Yes, we provide protection from injection attacks. If you take a video using Web SDK, you get two-layer protection: the way we capture videos lets our neural networks trace attacks in the video itself, and the information gathered about the browser context and camera properties helps detect virtual cameras or other injection methods.
To prevent malicious use, we do not allow the usage of virtual cameras.

SDK

What Is the Size of Oz Mobile SDKs?
iOS: the server-based version – 3.8 MB, full (server-based + on-device) version – 12.1 MB.
Android: the server-based version – 11 MB, full (server-based + on-device) version – 18.5 MB.
What Is the Captured Video File Size?
Native (Mobile) SDKs: simple Selfie – 0.8-1.2 MB, other gestures – 2-6 MB.
Web SDK: 2-5 MB.
For the detailed information on the file sizes, please refer to this article.
What Languages Do Your SDKs Support, and How Can I Add a Custom Language?
Web SDK: EN, ES, PT-BR, KK.
Web Demo: EN, ES, PT-BR, KK.
To add your own language to the Web SDK, please refer to these instructions.
Mobile SDK: EN, ES, HY, KK, KY, TR, PT-BR.
Mobile Demo: EN, ES, PT-BR.
To add your own language to the Mobile SDKs, please refer to these instructions: iOS, Android.
Do Oz SDKs Work with Flutter (Xamarin, React, etc.)?
Our SDKs can be integrated with any framework or library. We also have Flutter SDK: please refer to this section.
What If I Do Not Use Oz SDK to Capture Video?
You can use any media for the analysis. However, with our SDKs, the accuracy of the analyses is higher, as the quality of a video taken by our SDKs is optimal for the subsequent Liveness analysis. The seamless integration of the video capturing and video analyzing components ensures the whole process goes smoothly. Also, we designed our SDKs with real users in mind, providing a great user experience.
As mentioned above, videos taken with Web SDK also get two-layer protection against injection attacks: our neural networks trace attacks in the video itself, and the browser context and camera properties help detect virtual cameras or other injection methods.

Analyses

How Accurate Are the Analyses?
The accuracy of our analyses is above 99%, confirmed by iBeta (a NIST-accredited lab) and NIST FRVT.
APCER (FAR): 0%.
BPCER (FRR) (please note: the numbers below are for portrait orientation, as the majority of the media we analyze are taken this way):
  • Mobile SDK – 0.1-0.15%, depending on the gesture.
  • Web SDK – 0.5-0.6%, depending on the gesture.
How Much Time Do the Analyses Take?
Usually, an analysis takes 1-3 seconds. The time depends on the analysis type (on-device or server-based) and some other factors:
  • Server-based: for the on-premise model of usage, your hardware matters most. For SaaS, our hardware is used, but at peak times the analysis might still take longer because of the increased load.
  • On-device iOS: less than a second.
  • On-device Android: up to three seconds on new smartphones; on older models, slightly longer. The newer your smartphone is, the faster the analyses are.
I Use the On-Device Analyses. Why Should I Switch to Hybrid?
The hybrid analysis type combines the on-device and server-based types, providing the benefits of both without their drawbacks. If you switch from the on-device analysis to the hybrid one, the main benefit you’ll get is increased accuracy.
Occasionally, the outcome of the analysis conducted on the device may be uncertain. With the hybrid approach, the system then forwards the corresponding media to the server for an additional check. If a definite result is determined on the device, no data is transmitted (configurable within the Analysis structure). Consequently, you get a negligible increase in computational load and bandwidth usage while obtaining more accurate results.
To enable the hybrid mode on Android, set Mode to HYBRID when you launch the analysis. To do the same on iOS, set mode to hybrid. Please note: you’ll need server access. If you already have access, the credentials remain the same.
I Use the Server-Based Analyses. Why Should I Switch to Hybrid?
The hybrid analysis type combines the on-device and server-based types, providing the benefits of both without their drawbacks. If you switch from the server-based analysis to the hybrid one, the main benefits you’ll get are:
  • reduced computational capacity,
  • quicker analyses,
  • decreased data transmission.
In most cases, the on-device models can furnish you with a conclusive analysis result, so you can significantly decrease analysis time and resource usage. At the same time, the accuracy level will be similar to the server-based analysis one: whenever there's uncertainty, the system forwards the corresponding media to the server for an additional check. As for definite on-device outcomes, you have the option to configure what gets transmitted and stored on the server, ranging from the original media to nothing, based on the settings within the Analysis structure.
Please note: you can always find the on-device analysis results for a media file in this media's metadata, under the on-device-liveness-confidence key. If the value is lower than 0.5, the on-device analysis has finished with the SUCCESS status.
To enable the hybrid mode on Android, set Mode to HYBRID when you launch the analysis. To do the same on iOS, set mode to hybrid. Your server access credentials remain the same.
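A minimal sketch of interpreting that metadata key, assuming the metadata arrives as a plain dictionary of strings (the exact payload shape depends on your integration):

```python
def on_device_succeeded(metadata: dict) -> bool:
    """Return True if the on-device liveness check finished with SUCCESS.

    Per the FAQ above, the on-device result is stored under the
    on-device-liveness-confidence metadata key, and values below 0.5
    indicate the SUCCESS status. The dict-of-strings payload shape
    here is an assumption for illustration.
    """
    raw = metadata.get("on-device-liveness-confidence")
    if raw is None:
        return False  # no on-device result recorded for this media
    return float(raw) < 0.5
```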
How to Download All Orders' Data?
All your orders' data can be retrieved via API: call GET Folder [LIST]. The response is a JSON with all the information needed.
If you require the detailed data on your analyses, make sure the with_analyses parameter is set to True: GET /api/folders/?with_analyses=true. You can also specify other parameters like user ID or analysis type. To download our Postman collection, please proceed here.
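As a sketch, fetching the folder list from Python might look like this. The `/api/folders/` path and the `with_analyses` parameter come from the answer above; the `X-Forensic-Access-Token` header name is an assumption, and extra filters (user ID, analysis type) are passed through as-is.

```python
import json
import urllib.parse
import urllib.request

def build_folders_url(api_url: str, with_analyses: bool = True, **filters) -> str:
    """Build the folder-list URL; with_analyses comes from the FAQ,
    extra filters are appended as query parameters."""
    params = {"with_analyses": str(with_analyses).lower(), **filters}
    return f"{api_url}/api/folders/?" + urllib.parse.urlencode(params)

def list_folders(api_url: str, token: str, **filters) -> list:
    # Header name is an assumption; it must match your API version.
    req = urllib.request.Request(
        build_folders_url(api_url, **filters),
        headers={"X-Forensic-Access-Token": token},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())
```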
To retrieve the information on the particular order (folder), you'll need its ID. In the web console, go to the Orders tab, click Filter, enter the order number in the appropriate field, and click Search. Once the order needed is displayed, click the icon with PDF on it, select the report template, and click Continue. The order data will be downloaded in the PDF format. To learn more about working with Oz web console, please refer to this section.
How Do I Perform the Liveness Analysis for a Photo?
With API 5.0.0 and above, you can perform the Liveness analysis for any media, including images.
With API 4.0.8 and below, Liveness doesn’t work with images but accepts frame sequences. Thus, to analyze a photo, you need to turn it into a frame sequence consisting of a single frame: place the photo into a ZIP archive and apply video tags so that the Liveness algorithms treat this photo as a video. You can find more information in our integration guide.
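For API 4.0.8 and below, packing the photo as a single-frame sequence might look like the following sketch. The in-archive frame name is an assumption for illustration; the video tags themselves are applied when uploading the media, as described in the integration guide.

```python
import zipfile

def photo_to_frame_sequence(photo_path: str, archive_path: str = "frames.zip") -> str:
    """Pack a single photo into a ZIP archive so it can be uploaded as a
    one-frame video for the Liveness analysis. The in-archive frame name
    is an assumption for illustration."""
    with zipfile.ZipFile(archive_path, "w") as zf:
        zf.write(photo_path, arcname="frame_000.jpg")
    return archive_path
```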
What Is the Difference Between One_shot, Selfie, and Best_shot?
One_shot and Selfie are Passive Liveness actions. For the Selfie action, we take a short video and send it to the analysis. For One_shot, only a single frame is used, which lowers both the file size and the analysis accuracy.
Best_shot is not an action. It is an option that allows you to save the best (highest-quality) frame from the Liveness video. This option works only as an addition to Liveness.
What is the Difference between the DECLINED and FAILED Statuses?
The FAILED status means that the analysis couldn't be finished due to some kind of error. For instance, when the system can't find any face in a media file, there's nothing to analyze, so the analysis receives the FAILED status.
The DECLINED status means that the analysis has finished, but the system considered the check unsuccessful. For instance, the analysis receives the DECLINED status when the faces in two media files don't match (Face Matching) or when a spoofing attack is detected (Liveness).
For more details, please refer to this article.
What Impacts the Spoofing (Confidence) Score?
When calculating the spoofing score, the neural networks take many factors into account. Some of them can be explained: for example, better (and better-controlled) cameras usually produce more accurate scores, which is why native SDKs' scores for genuine attempts are typically lower than Web SDK ones. However, because of the number of factors involved, the scores are not directly interpretable.
Even when you repeat attempts under similar conditions and make them seemingly identical, the images differ slightly from attempt to attempt, and the scores might differ as well.