FAQ
Frequently asked questions
We certified our architecture at iBeta:
the frontend SDK capture method (at iBeta we provided a mobile SDK) and
the full processing of presentation attack detection on the side of Oz API.
If you replace Mobile SDK capture with Web SDK capture, the architecture and the resistance to presentation attacks remain the same, but in some cases the False Rejection Rate might increase due to the lower quality of laptop cameras.
Also, for the Web SDK scenario, we have implemented an additional layer of protection against deepfake and injection attacks: camera spoofing detection. For details, see Oz Liveness against the new generation of biometric attacks.
However, these types of attacks are not related to presentation attacks and are not included in the iBeta certification scope.
Obtain the authorization token via API, as described here. To get the token, you need the credentials and the API address provided by us. The token is required for all subsequent interactions with Oz API.
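As a sketch, preparing the token request and the subsequent header can look like this. The endpoint path, payload shape, and header name here are assumptions for illustration; use the actual API address and credentials we provide.

```python
# Sketch of preparing the authorization request. The /api/authorize/auth
# path, the payload shape, and the X-Forensic-Access-Token header name
# are assumptions for illustration, not a definitive schema.

def build_auth_request(host, email, password):
    """Build the URL and JSON payload for the token request."""
    url = host.rstrip("/") + "/api/authorize/auth"
    payload = {"credentials": {"email": email, "password": password}}
    return url, payload

def auth_header(response_json):
    """Turn the authorization response into a header for later calls."""
    return {"X-Forensic-Access-Token": response_json["access_token"]}

url, payload = build_auth_request("https://api.example.ozforensics.com",
                                  "user@example.com", "secret")
# Send with any HTTP client, e.g. requests.post(url, json=payload),
# then attach auth_header(resp.json()) to all subsequent requests.
```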
Oz Biometry / Liveness Server
CPU: 16 cores
RAM: 32 GB
Disk: 80 GB
Oz API / Web UI / Web SDK Server
CPU: 8 cores
RAM: 16 GB
Disk: 300 GB
Please check the compatibility table for OS versions.
You can run two virtual machines on a single physical host, but make sure your server is powerful enough. Please note: with the on-premises deployment model, the better your hardware, the faster the analyses.
The system's throughput capacity depends on the number of Oz Biometry/Liveness servers you’ve set up. Each server or pod (for Podman/Docker/K8s) can handle:
Up to 34 analyses per minute (2,040 analyses per hour);
Up to 2 simultaneous threads.
To increase performance, scale the deployment horizontally by adding more servers or pods.
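For rough capacity planning, the figures above translate into a simple calculation. The per-server numbers come from this answer; real-world throughput also depends on your hardware.

```python
import math

# Per-server throughput figures from this answer.
PER_SERVER_PER_MINUTE = 34
PER_SERVER_PER_HOUR = PER_SERVER_PER_MINUTE * 60  # 2,040

def servers_needed(target_analyses_per_hour):
    """How many Oz Biometry/Liveness servers (or pods) cover a target load."""
    return math.ceil(target_analyses_per_hour / PER_SERVER_PER_HOUR)

# Example: a target of roughly 5,000 analyses per hour needs 3 servers.
```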
On the basic level, you can test them yourself without even involving our engineers: visit our Web Demo page or download our demo application from Google Play (Android) or TestFlight (iOS) with the on-device functionality. To explore more features, contact us to get the credentials and necessary links to access:
the server-based liveness and face matching analyses,
extended Web Demo,
Web console to check the info on orders and analyses,
and REST API.
We'll be happy to provide you with any support.
Certainly. Contact us to get your credentials for accessing Oz API, then follow the instructions in this integration guide.
Yes, we provide protection from injection attacks. If you take a video using our SDK, you get two-layer protection: the way we capture videos allows our neural networks to trace attack artifacts in the video itself, and the information gathered about the context and camera properties helps detect the use of virtual cameras and other injection methods.
To prevent malicious use, we do not allow virtual cameras.
Telemetry is a logging service that captures every detail of a user session, from SDK initialization to the check results, including security checks and common device data.
The information we collect via telemetry is anonymized. Through telemetry, we do not store:
Direct identifiers (such as IMEI or username),
Financial data,
Communication data,
Location information,
Or any other data that is not required for incident investigation.
Each event generates a telemetry record, and these records help a lot in investigating incidents and enhancing our defense against current and potential threats. We highly recommend enabling and storing telemetry data.
For users of the cloud Oz API with native SDK version 8.0.0 and above, telemetry is automatically configured and saved alongside analysis data. In on-premise setups, we suggest configuring telemetry and sending the data to our servers. Please contact us to obtain the necessary credentials.
We continuously improve our software, enhancing its performance and security. The typical update frequency is as follows:
Mobile SDKs: monthly updates, with major releases featuring significant changes occurring annually.
Web SDK: monthly updates, which may include significant changes.
API: updated once or twice a year, potentially incorporating significant changes.
Regardless of the update's contents, we provide full support throughout the updating process to ensure seamless functionality.
Dynamic: the server-based version – 5 MB, full (server-based + on-device) version – 13 MB.
Static: the server-based version – 30 MB, full version – 37 MB.
Bundle (recommended): the server-based version – 10.1 MB, full (server-based + on-device) version – 18.1 MB. Please note that the final size depends on the ABI you use.
If you need Universal APK: the server-based version – 15.9 MB, full version – 24 MB.
Native (Mobile) SDKs: simple Selfie – 0.8-1.2 MB, other gestures – 2-6 MB.
Web SDK: 2-5 MB.
For the detailed information on the file sizes, please refer to this article.
Document processing is not our specialization, and we do not have plans to develop in this direction. Our primary focus is to provide Liveness and Face Matching services with clear and precise results.
Native (Mobile) SDKs do not include document capture functionality. Web SDK only offers a basic document capture screen. Neither SDK provides OCR (document fields recognition), document type identification, quality check, etc. For these tasks, consider integrating your own or third-party OCR software.
You can use documents for Face Matching with a Liveness video on the server side. Use the photo_id_front and photo_id_back tags for photos of the front and back sides of a document, respectively.
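A minimal sketch of organizing such tagging on the client side. Only the tag names photo_id_front and photo_id_back come from this answer; the payload shape is illustrative, not the exact API schema.

```python
# Illustrative helper: pair document photos with the Oz media tags.
# The list-of-dicts shape is an assumption, not the actual upload schema.
def tag_document_photos(front_path, back_path):
    return [
        {"file": front_path, "tags": ["photo_id_front"]},
        {"file": back_path, "tags": ["photo_id_back"]},
    ]

media = tag_document_photos("passport_front.jpg", "passport_back.jpg")
# Upload these files with their tags alongside the Liveness video,
# then run the server-side Face Matching analysis on the folder.
```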
Web SDK: EN, ES, PT-BR, KK.
Web Demo: EN, ES, PT-BR, KK.
To add your own language to the Web SDK, please refer to these instructions.
Mobile SDK: EN, ES, HY, KK, KY, TR, PT-BR.
Mobile Demo: EN, ES, PT-BR.
To add your own language to the Mobile SDKs, please refer to these instructions: iOS, Android.
You can change any SDK message. To do so, please proceed to the localization section of the required SDK:
Download the strings file. It consists of the localization records. Each record contains a key and a text for this key. Make your changes and follow the instructions provided.
Please note: in the Mobile SDKs, the text changing feature was introduced in version 8.1.0.
Our SDKs can be integrated with any framework or library. To name a few: React Native, Flutter, Cordova, Xamarin, Ionic, Kotlin Multiplatform, Angular, Svelte, React.js.
We have created our own Flutter SDK for iOS and Android: please refer to this section.
For Web SDK, we have samples for Angular and React. Replace https://web-sdk.sandbox.ozforensics.com in index.html with your Web Adapter link if needed.
You can use any media to analyze. However, with our SDKs, the accuracy of the analyses is higher as the quality of a video taken by our SDKs is optimal for the subsequent Liveness analysis. The seamless integration of video capturing and video analyzing components ensures the whole process goes smoothly. Also, we designed our SDKs with real users in mind, providing great user experience.
If you take a video using our SDK, you get two-layer protection against injection attacks: the way we capture videos allows our neural networks to trace attack artifacts in the video itself, and the information gathered about the context and camera properties helps detect the use of virtual cameras and other injection methods.
The analyses handle media in their raw form without assessing their quality. However, we do pay attention to media quality when we take a photo or video using our SDK. We check illumination, face size and position, presence of any additional objects, etc. You can find the list of our SDK checks here.
Please note that if you use your own or a third-party SDK to capture a photo or video, the result should meet our quality criteria. Otherwise, it might affect the accuracy of the analyses.
The accuracy of our analyses is above 99%, as confirmed by iBeta (a NIST-accredited lab) and NIST FRVT.
APCER (FAR): 0%.
BPCER (FRR) (please note: the numbers below are for the portrait orientation, as the majority of the media we analyze are taken this way):
MSDK – depending on the gesture, 2.75% on iOS and 3.57% on Android for on-device analysis; 0.1-0.15% for server-based and hybrid analyses on both platforms.
Web SDK – 0.5-0.6%, depending on the gesture.
Usually the analysis takes 1-3 seconds.
The time depends on the analysis type (on-device or server-based) and some other factors.
Server-based: for the on-premise model of usage, it is your hardware that matters the most. For SaaS, our hardware is used, but still, at the peak times the analysis might take longer because of the increased load.
On-device iOS: less than a second.
On-device Android: on new smartphones, up to three seconds, depending on the CPU.
On older models, these numbers might be slightly higher; the newer the smartphone, the faster the analyses.
The hybrid analysis type is a combination of the on-device and server-based types that provides the benefits of both while avoiding their drawbacks. If you switch to the hybrid analysis from the on-device one, the main benefit you'll get is increased accuracy.
Occasionally, the outcome of the analysis conducted on the device may be uncertain. With the hybrid approach, the system simply forwards the corresponding media to the server for an additional check. Should a definite result be determined on the device, no data is transmitted (configurable within the Analysis structure). Consequently, there is a negligible increase in computational capacity and bandwidth usage, while the results become more accurate.
To enable the hybrid mode on Android, set Mode to HYBRID when you launch the analysis. To do the same on iOS, set mode to hybrid. Please note: you'll require server access. If you already have access, the credentials remain the same.
The hybrid analysis type is a combination of the on-device and server-based types that provides the benefits of both while avoiding their drawbacks. If you switch to the hybrid analysis from the server-based one, the main benefits you'll get are:
reduced computational capacity,
quicker analyses,
decreased data transmission.
In most cases, the on-device models can furnish you with a conclusive analysis result, so you can significantly decrease analysis time and resource usage. At the same time, the accuracy level will be similar to that of the server-based analysis: whenever there's uncertainty, the system forwards the corresponding media to the server for an additional check. As for definite on-device outcomes, you have the option to configure what gets transmitted and stored on the server, ranging from the original media to nothing, based on the settings within the Analysis structure. Here are some figures derived from our experience:
The customer journey is decreased by 30%.
The necessary disk space is reduced by 2.5 times.
The CPU usage is reduced by 2.5 to 3 times.
To enable the hybrid mode on Android, set Mode to HYBRID when you launch the analysis. To do the same on iOS, set mode to hybrid. Your server access credentials remain the same.
All your orders' data can be retrieved via API: call GET Folder [LIST].
The response is a JSON with all the information needed.
If you require the detailed data on your analyses, make sure the with_analyses parameter is set to True: GET /api/folders/?with_analyses=true. You can also specify other parameters like user ID or analysis type. To download our Postman collection, please proceed here.
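As a sketch, the request URL can be assembled like this. Only the /api/folders/ path and the with_analyses parameter come from this answer; any extra filter names are assumptions that depend on your API version.

```python
from urllib.parse import urlencode

def build_folder_list_url(host, with_analyses=True, **filters):
    """Assemble the GET /api/folders/ URL with optional filter parameters.

    Extra keyword arguments (e.g. a user ID filter) are passed through
    as query parameters; their exact names depend on the API version.
    """
    params = {"with_analyses": str(with_analyses).lower(), **filters}
    return host.rstrip("/") + "/api/folders/?" + urlencode(params)

url = build_folder_list_url("https://api.example.ozforensics.com")
# Send the request with your authorization token attached;
# the response is JSON describing your folders and their analyses.
```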
To retrieve the information on a particular order (folder), you'll need its ID. In the web console, go to the Orders tab, click Filter, enter the order number in the appropriate field, and click Search. Once the order is displayed, click the PDF icon, select the report template, and click Continue. The order data will be downloaded as a PDF. To learn more about working with the Oz web console, please refer to this section.
With API 5.0.0 and above, you can perform the Liveness analysis for any media, including images.
With API 4.0.8 and below, Liveness doesn’t work with images, but accepts frame sequences. Thus, to perform an analysis using a photo, you need to turn it into a frame sequence of a single frame. Place the photo into a ZIP archive and use video tags, so that Liveness algorithms treat this photo as a video. You can find more information in our integration guide.
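For API 4.0.8 and below, wrapping a photo into such an archive can be sketched with the standard zipfile module; the frame file name inside the archive is arbitrary.

```python
import io
import zipfile

def photo_to_frame_archive(photo_bytes, frame_name="frame_0.jpg"):
    """Wrap a single photo into a ZIP archive so that API 4.0.8 and
    below can treat it as a one-frame video for the Liveness analysis."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(frame_name, photo_bytes)
    return buf.getvalue()

archive = photo_to_frame_archive(b"...jpeg bytes...")
# Upload `archive` with video tags so the Liveness algorithms
# process the photo as a video.
```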
One_shot and Selfie are Passive Liveness actions. For the Selfie action, we take a short video and send it for analysis. For One_shot, only a single frame is used, which lowers both the file size and the analysis accuracy.
Best_shot is not an action but an option that saves the best (highest-quality, best-framed) frame from the Liveness video. This option works only as an addition to Liveness.
The analysis is marked as FAILED when it encounters an error preventing its completion. For example, if the system fails to detect any faces in a media file, there is no content to analyze, resulting in the FAILED status. The system provides an error description detailing the cause of the failure. If you have sent the analysis request via API, the error description is in the error_message field of the JSON response. In the Web Console order, click the red exclamation mark to open the window with the description.
The DECLINED status means that the analysis has finished, and the system considered the check unsuccessful. You will also get a number, which is the analysis' resulting score. For instance, the analysis receives the DECLINED status when two faces in media files don't match (Face Matching) or when a spoofing attack is detected (Liveness).
For more details on statuses, please refer to this article.
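A hedged sketch of handling these statuses when parsing the API response. The FAILED and DECLINED statuses and the error_message field come from the answers above; the resolution and resolution_score field names are assumptions for illustration.

```python
# Illustrative status handling. "resolution" and "resolution_score" are
# assumed field names; FAILED, DECLINED, and error_message are from the
# answers above.
def summarize_analysis(analysis):
    status = analysis.get("resolution")
    if status == "FAILED":
        # The analysis could not complete; the reason is in error_message.
        return "failed: " + analysis.get("error_message", "unknown error")
    if status == "DECLINED":
        # The check finished but was not successful; a score accompanies it.
        return "declined with score " + str(analysis.get("resolution_score"))
    return status.lower() if status else "unknown"
```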
When calculating the spoofing score, the neural networks take into account many factors. Some of them can be explained, e.g., better (and better controlled) cameras usually produce more accurate scores. That is why scores from native SDKs are typically lower for original attempts than those from Web SDKs. However, due to the multitude of factors involved, the scores are not directly interpretable.
Even when repeating attempts under similar conditions and seemingly making them the same, the images vary slightly from attempt to attempt, leading to potential differences in scores as well.
However, the following factors can have a more significant impact and may result in false rejects, increasing the FRR:
Camera defects or objects that partially obstruct the camera's field of view, such as protective films, cases, or holders.
Extreme lighting conditions.
Use of low-quality or specialized cameras.
Glasses with thick frames and screen reflections (standard optical glasses are generally accepted).
Headgear, which is mainly accepted when not covering the face, but may slightly affect the false rejection score.
Not using our SDK to record media can also be a significant factor contributing to poor liveness scores and false rejects.