FAQ

Frequently asked questions

General

Do Oz Forensics Products Have Any Certificates?

We certified our architecture at iBeta:

  • the frontend SDK capture method (for the iBeta testing, we provided a mobile SDK) and

  • the full processing of presentation attack detection on the side of Oz API.

If you replace Mobile SDK capture with Web SDK capture, the architecture and the resistance to presentation attacks remain the same, but in some cases the False Rejection Rate might increase due to the lower quality of laptop cameras.

However, attacks of other types, such as injection attacks, are not presentation attacks and are therefore not included in the iBeta certification scope.

How Do I Authorize in Oz API?

Obtain the authorization token via API, as described here. To get the token, you need the credentials and the API address we provide. The token is required for all subsequent interactions with Oz API.
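For illustration, a minimal token request in Python might look like the sketch below; the endpoint path, payload shape, and header name are assumptions based on common integration patterns, so follow the exact steps from the API documentation:

```python
import requests

API_URL = "https://api.example.ozforensics.com"  # hypothetical address; use the one we provide

# Exchange your credentials for an access token. The endpoint and payload
# shape below are assumptions -- check the API documentation for your instance.
resp = requests.post(
    f"{API_URL}/api/authorize/auth",
    json={"credentials": {"email": "user@example.com", "password": "secret"}},
    timeout=30,
)
resp.raise_for_status()
token = resp.json()["access_token"]

# The token accompanies every subsequent Oz API request.
headers = {"X-Forensic-Access-Token": token}
```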

What Are Oz System Requirements?

Oz Biometry / Liveness Server

  • CPU: 16 cores

  • RAM: 16 GB

  • Disk: 80 GB

Oz API / Web UI / Web SDK Server

  • CPU: 8 cores

  • RAM: 16 GB

  • Disk: 300 GB

Please check the compatibility table for OS versions.

You can run two virtual machines on a single physical host, but make sure the server is powerful enough. Please note: with the on-premises model of usage, the better your hardware, the faster the analyses.

How Do I Test Oz Solutions?

At the basic level, you can test them yourself without involving our engineers: visit our Web Demo page or download our demo application with the on-device functionality from Google Play (Android) or TestFlight (iOS). To explore more features, contact us to get the credentials and the links needed to access:

  • the server-based liveness and face matching analyses,

  • extended Web Demo,

  • Web console to check the info on orders and analyses,

  • and REST API.

We'll be happy to provide you with any support.

Can I Use My Own Dataset for Testing?

Yes, you can. Contact us to get your credentials for accessing Oz API, then follow the instructions from this integration guide.

Does Oz SDK Protect from Injection Attacks (Virtual Cameras, etc.)?

Yes, we provide protection from injection attacks. If you take a video using Web SDK, you get two-layer protection: the way we capture videos allows our neural networks to trace attacks in the video itself, and the information gathered about the browser context and camera properties helps detect the use of virtual cameras and other injection methods.

To prevent malicious use, we do not allow virtual cameras.

How Often Do You Release New Software Versions?

We continuously improve our software, enhancing its performance and security. The typical update frequency is as follows:

  • Mobile SDKs: monthly updates, with major releases featuring significant changes occurring annually.

  • Web SDK: monthly updates, which may include significant changes.

  • API: updated once or twice a year, potentially incorporating significant changes.

Regardless of the update's contents, we provide full support throughout the updating process to ensure seamless functionality.

SDK

What Is the Size of Oz Mobile SDKs?

iOS: the server-based version – 3.9 MB, full (server-based + on-device) version – 11.6 MB.

Android: the server-based version – 11 MB, full (server-based + on-device) version – 18.5 MB. Please note that the final size depends on the ABI you use.

What Is the Captured Video File Size?

Native (Mobile) SDKs: simple Selfie – 0.8-1.2 MB, other gestures – 2-6 MB.

Web SDK: 2-5 MB.

For detailed information on file sizes, please refer to this article.

Can I Capture a Document Using Oz SDK?

Document processing is not our specialization, and we do not have plans to develop in this direction. Our primary focus is to provide Liveness and Face Matching services with clear and precise results.

Native (Mobile) SDKs do not include document capture functionality. Web SDK only offers a basic document capture screen. Neither SDK provides OCR (document fields recognition), document type identification, quality check, etc. For these tasks, consider integrating your own or third-party OCR software.

You can still use document photos for server-side Face Matching against a Liveness video. Use the photo_id_front and photo_id_back tags for photos of the front and back sides of a document, respectively.
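As a hedged sketch (the folder-creation endpoint and the tag-passing format below are assumptions; see the integration guide for the exact request shape), uploading a Liveness video together with tagged document photos might look like this:

```python
import json
import requests

API_URL = "https://api.example.ozforensics.com"  # hypothetical address
headers = {"X-Forensic-Access-Token": token}     # token from the authorization step

# Upload a Liveness video and both document sides into one folder, tagging
# each file so the server knows its role. The endpoint and payload format
# are assumptions; only photo_id_front/photo_id_back come from this FAQ.
with open("liveness.mp4", "rb") as video, \
     open("passport_front.jpg", "rb") as front, \
     open("passport_back.jpg", "rb") as back:
    files = [("video1", video), ("doc_front", front), ("doc_back", back)]
    payload = {"media:tags": {
        "video1": ["video_selfie"],
        "doc_front": ["photo_id_front"],
        "doc_back": ["photo_id_back"],
    }}
    resp = requests.post(
        f"{API_URL}/api/folders/",
        headers=headers,
        files=files,
        data={"payload": json.dumps(payload)},
        timeout=60,
    )
resp.raise_for_status()
folder_id = resp.json()["folder_id"]  # field name is an assumption
```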

What Languages Do Your SDKs Support, and How Can I Add a Custom Language?

Web SDK: EN, ES, PT-BR, KK.

Web Demo: EN, ES, PT-BR, KK.

To add your own language to the Web SDK, please refer to these instructions.

Mobile SDK: EN, ES, HY, KK, KY, TR, PT-BR.

Mobile Demo: EN, ES, PT-BR.

To add your own language to the Mobile SDKs, please refer to these instructions: iOS, Android.

How Do I Change Texts in SDK Strings?

You can change any SDK message. To do so, proceed to the localization section of the required SDK and download the strings file. It consists of localization records; each record contains a key and the text displayed for that key. Make your changes and follow the instructions provided.
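As a purely hypothetical illustration (the real file format and key names differ per SDK; use the downloaded strings file as your template), a localization record simply pairs a key with the text shown for it:

```python
# Hypothetical illustration only: each localization record maps a key to
# the text displayed for that key. Real formats and key names vary per SDK.
custom_strings = {
    "selfie_hint": "Look straight at the camera",  # key -> your replacement text
}
```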

Please note: in the mobile SDKs, the text changing feature has been available since version 8.1.0.

Do Oz SDKs Work with Flutter (Xamarin, React, etc.)?

Our SDKs can be integrated with any framework or library. To name a few: React Native, Flutter, Cordova, Xamarin, Ionic, Kotlin Multiplatform, Angular, Svelte, React.js.

We have created our own Flutter SDK for iOS and Android: please refer to this section.

For Web SDK, we have samples for Angular and React. Replace https://web-sdk.sandbox.ozforensics.com in index.html with your Web Adapter link if needed.

What If I Do Not Use Oz SDK to Capture Video?

You can submit any media for analysis. However, the accuracy of the analyses is higher with our SDKs, as the quality of a video taken by our SDKs is optimal for the subsequent Liveness analysis. The seamless integration of the video capturing and video analyzing components ensures the whole process runs smoothly. We also designed our SDKs with real users in mind, providing a great user experience.

Additionally, if you take a video using Web SDK, you get the two-layer protection against injection attacks described above: the neural networks trace attacks in the video itself, and the browser context and camera properties help detect virtual cameras and other injection methods.

Do You Perform Any Media Quality Checks During Analyses?

The analyses handle media in their raw form without assessing their quality. However, we do pay attention to media quality when we take a photo or video using our SDK. We check illumination, face size and position, presence of any additional objects, etc. You can find the list of our SDK checks here.

Please note that if you use your own or a third-party SDK to capture a photo or video, the result should meet our quality criteria. Otherwise, it might affect the accuracy of the analyses.

Analyses

How Accurate Are the Analyses?

The accuracy of our analyses is >99%, confirmed by iBeta (a NIST-accredited lab) and NIST FRVT.

APCER (FAR): 0%.

BPCER (FRR) (please note: the numbers below are for the portrait orientation, as the majority of the media we analyze are taken this way):

  • Mobile SDK – 0.1-0.15%, depending on the gesture.

  • Web SDK – 0.5-0.6%, depending on the gesture.

How Much Time Do the Analyses Take?

Usually, an analysis takes 1-3 seconds. The time depends on the analysis type (on-device or server-based) and some other factors.

  • Server-based: for the on-premises model of usage, it is your hardware that matters most. For SaaS, our hardware is used, but at peak times the analysis might still take longer because of the increased load.

  • On-device (iOS): less than a second.

  • On-device (Android): on new smartphones, up to three seconds, depending on the CPU. On older models, these numbers might be slightly higher; the newer your smartphone, the faster the analyses.

I Use the On-Device Analyses. Why Should I Switch to Hybrid?

The hybrid analysis type combines the on-device and server-based types, providing the benefits of both without their drawbacks. If you switch from the on-device analysis to the hybrid one, the main benefit is increased accuracy.

Occasionally, the result of an analysis conducted on the device may be uncertain. With the hybrid approach, the system simply forwards the corresponding media to the server for an additional check. If a definite result is reached on the device, no data is transmitted (configurable within the Analysis structure). Consequently, the extra computational capacity and bandwidth usage are negligible, while the results become more accurate.

To enable the hybrid mode on Android, set Mode to HYBRID when you launch the analysis. On iOS, set mode to hybrid. Please note: you'll need server access. If you already have access, your credentials remain the same.

I Use the Server-Based Analyses. Why Should I Switch to Hybrid?

The hybrid analysis type combines the on-device and server-based types, providing the benefits of both without their drawbacks. If you switch from the server-based analysis to the hybrid one, the main benefits are:

  • reduced computational capacity,

  • quicker analyses,

  • decreased data transmission.

In most cases, the on-device models can give you a conclusive analysis result, so you can significantly decrease analysis time and resource usage. At the same time, the accuracy level is similar to that of the server-based analysis: whenever there's uncertainty, the system forwards the corresponding media to the server for an additional check. For definite on-device outcomes, you can configure what gets transmitted and stored on the server, from the original media to nothing, via the settings in the Analysis structure. Here are some figures derived from our experience:

  • The customer journey is shortened by 30%.

  • The necessary disk space is reduced by 2.5 times.

  • The CPU usage is reduced by 2.5 to 3 times.

Please note: you can always find the on-device analysis results for a media file in this file's metadata. The metadata key is on-device-liveness-confidence. If the value is lower than 0.5, the on-device analysis has finished with the SUCCESS status.
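For illustration, checking this key in a server response might look like the sketch below; only the on-device-liveness-confidence key and the 0.5 threshold come from this FAQ, while the surrounding field names are assumptions:

```python
# Hedged sketch: scan a folder's media for on-device analysis results.
# "folder" is one item from a GET folders response; the "media"/"metadata"
# field names are assumptions made for illustration.
for media in folder.get("media", []):
    confidence = media.get("metadata", {}).get("on-device-liveness-confidence")
    if confidence is not None and float(confidence) < 0.5:
        print("On-device liveness finished with SUCCESS for", media.get("original_name"))
```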

To enable the hybrid mode on Android, set Mode to HYBRID when you launch the analysis. On iOS, set mode to hybrid. Your server access credentials remain the same.

How Do I Download All Orders' Data?

All your orders' data can be retrieved via API: call GET Folder [LIST]. The response will be a JSON file with all the information needed.

If you require the detailed data on your analyses, make sure the with_analyses parameter is set to True: GET /api/folders/?with_analyses=true. You can also specify other parameters like user ID or analysis type. To download our Postman collection, please proceed here.
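A minimal sketch of that request in Python (the with_analyses parameter is from this FAQ; the hostname is a placeholder):

```python
import requests

API_URL = "https://api.example.ozforensics.com"  # hypothetical address
headers = {"X-Forensic-Access-Token": token}     # token from the authorization step

# Retrieve all folders together with their detailed analyses. Other filters
# (e.g., user ID or analysis type) can be passed as extra query parameters.
resp = requests.get(
    f"{API_URL}/api/folders/",
    headers=headers,
    params={"with_analyses": "true"},
    timeout=60,
)
resp.raise_for_status()
folders = resp.json()
```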

To retrieve the information on a particular order (folder), you'll need its ID. In the web console, go to the Orders tab, click Filter, enter the order number in the appropriate field, and click Search. Once the order is displayed, click the PDF icon, select the report template, and click Continue. The order data will be downloaded as a PDF file. To learn more about working with the Oz web console, please refer to this section.

How Do I Perform the Liveness Analysis for a Photo?

With API 5.0.0 and above, you can perform the Liveness analysis for any media, including images.

With API 4.0.8 and below, Liveness doesn't accept single images, only frame sequences. To analyze a photo, turn it into a frame sequence consisting of a single frame: place the photo into a ZIP archive and apply video tags so that the Liveness algorithms treat it as a video. You can find more information in our integration guide.
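A minimal sketch of the archiving step using Python's zipfile module; the archive is then uploaded with video tags as described in the integration guide:

```python
import zipfile

# Wrap a single photo in a ZIP archive so that API 4.0.8 and below treats
# it as a one-frame sequence suitable for the Liveness analysis.
with zipfile.ZipFile("selfie_frames.zip", "w") as archive:
    archive.write("selfie.jpg", arcname="frame_0001.jpg")
# Upload selfie_frames.zip with video tags so it is processed as a video.
```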

What Is the Difference Between One_shot, Selfie, and Best_shot?

One_shot and Selfie are Passive Liveness actions. For the Selfie action, we take a short video and send it to the analysis. For One_shot, only a single frame is used, which lowers both file size and analysis accuracy.

Best_shot is not an action; it is an option that saves the best (highest-quality, best-framed) frame from the Liveness video. It works only as an addition to Liveness.
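For illustration, launching a server-side Liveness analysis with the best shot option might look like the sketch below; the endpoint, the "quality" analysis type, and the extract_best_shot parameter name are assumptions to verify against the API reference:

```python
import requests

API_URL = "https://api.example.ozforensics.com"  # hypothetical address
headers = {"X-Forensic-Access-Token": token}     # token from the authorization step

# Start a Liveness analysis on an existing folder and ask the system to keep
# the best frame. Endpoint, analysis type, and parameter name are assumptions.
resp = requests.post(
    f"{API_URL}/api/folders/{folder_id}/analyses/",
    headers=headers,
    json={"analyses": [{"type": "quality", "params": {"extract_best_shot": True}}]},
    timeout=60,
)
resp.raise_for_status()
```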

What Is the Difference Between the DECLINED and FAILED Statuses?

The analysis is marked as FAILED when it encounters an error preventing its completion. For example, if the system fails to detect any faces in a media file, there is no content to analyze, resulting in the FAILED status. The system provides an error description detailing the cause of the failure. If you sent the analysis request via API, the error description is in the error_message field of the JSON response. In the Web Console, open the order and click the red exclamation mark to see the description.

The DECLINED status means that the analysis has finished, and the system considers the check unsuccessful. You will also get the resulting score of the analysis. For instance, an analysis receives the DECLINED status when two faces in the media files don't match (Face Matching) or when a spoofing attack is detected (Liveness).
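A hedged sketch of telling the two statuses apart in an API response; only the error_message field and the status names come from this FAQ, while the other field names are assumptions:

```python
# Hedged sketch: interpret an analysis result from the Oz API JSON response.
# The "resolution" and "score" field names are assumptions for illustration;
# error_message and the status names are documented above.
def describe(analysis: dict) -> str:
    resolution = analysis.get("resolution")
    if resolution == "FAILED":
        # An error prevented completion; the cause is in error_message.
        return f"FAILED: {analysis.get('error_message')}"
    if resolution == "DECLINED":
        # The check finished and was not passed; a resulting score is available.
        return f"DECLINED, score: {analysis.get('score')}"
    return str(resolution)
```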

For more details on statuses, please refer to this article.

What Impacts the Spoofing (Confidence) Score?

When calculating the spoofing score, the neural networks take many factors into account. Some of them can be explained: for example, better (and better controlled) cameras usually produce more accurate scores, which is why scores from the native SDKs are typically lower for genuine attempts than those from Web SDK. However, due to the multitude of factors involved, the scores are not directly interpretable.

Even when attempts are repeated under seemingly identical conditions, the images vary slightly from attempt to attempt, which can lead to differences in scores as well.

However, the following factors can have a more significant impact and may result in false rejects, increasing the FRR:

  • Camera defects or objects that partially obstruct the camera's field of view, such as protective films, cases, or holders.

  • Extreme lighting conditions.

  • Use of low-quality or specialized cameras.

  • Glasses with thick frames and screen reflections (standard optical glasses are generally accepted).

  • Headgear, which is generally accepted when not covering the face, but may slightly increase the likelihood of a false rejection.

  • Not using our SDK to record media can also be a significant factor contributing to poor liveness scores and false rejects.
