How passive and active Liveness checks work.
The objective of the Liveness check is to verify the authenticity and physical presence of an individual in front of the camera. In the passive Liveness check, it is sufficient to capture a user's face while they look into the camera. Conversely, the active Liveness check requires the user to perform an action such as smiling, blinking, or turning their head. While passive Liveness is more user-friendly, active Liveness may be necessary in some situations to confirm that the user is aware of undergoing the Liveness check.
In our Mobile or Web SDKs, you can define what action the user is required to do. You can also combine several actions into a sequence. Actions vary in the following dimensions:
User experience,
File size,
Liveness check accuracy,
Suitability for review by a human operator or in court.
In most cases, the Selfie action is optimal, but you can choose other actions based on your specific needs. Here is a summary of available actions:
Selfie
A short video, around 0.7 sec. Users are not required to do anything. Recommended for most cases. It offers the best combination of user experience and liveness check accuracy.
One shot
Similar to “Selfie”, but only one image is kept instead of the whole video. Recommended when media size is the most important factor. A single shot is hard for a human (e.g., an operator or a court) to evaluate for spoofing.
Scan
A 5-second video in which the user is asked to follow on-screen text with their eyes. Recommended when a longer video is required, e.g., for subsequent review by a human operator or in court.
Smile
Blink
Tilt head up
Tilt head down
Turn head left
Turn head right
A user is required to complete a particular gesture within 5 seconds.
Use active Liveness when you need confirmation that the user is aware of undergoing a Liveness check.
Video length and file size may vary depending on how soon a user completes a gesture.
To recognize the actions from either passive or active Liveness, our algorithms refer to the corresponding tags. These tags indicate the type of action that a user performs within a media. For more information, please read the Media Tags article. Detailed information on how the actions (in other words, gestures) are named in different Oz Liveness components is available here.
This article describes Oz components that can be integrated into your infrastructure in various combinations depending on your needs.
The typical integration scenarios are described in the Integration Quick Start Guides section.
Oz API is the central component of the system. It provides a RESTful application programming interface to the core functionality of Liveness and Face matching analyses, along with many important supplemental features:
Persistence: your media and analyses are stored for future reference unless you explicitly delete them,
Authentication, roles and access management,
Asynchronous analyses,
Ability to work with videos as well as images.
For more information, please refer to Oz API Key Concepts and Oz API Developer Guide. To test Oz API, please check the Postman collection here.
Logically, Oz API consists of the following components:
File storage and database where media, analyses, and other data are stored,
The Oz BIO module that runs neural network models to perform facial biometry magic,
Licensing logic.
The front-end components (Oz Liveness Mobile or Web SDK) connect to Oz API to perform server-side analyses either directly or via customer's back end.
iOS and Android SDKs are collectively referred to as Mobile SDKs or Native SDKs. They are written in Swift and Kotlin/Java, respectively, and designed to be integrated into your native mobile application.
Mobile SDKs implement an out-of-the-box, customizable user interface for capturing Liveness video and ensure that the two main objectives are met:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
After the Liveness video is recorded and available to your mobile application, you can run the server-side analysis. You can use the corresponding SDK methods, call the API directly from your mobile application, or pass the media to your back end and interact with Oz API from there.
The basic integration option is described in the Quick Start Guide.
Mobile SDKs are also capable of On-device Liveness and Face matching. On-device analyses may be a good option in low-risk contexts, or when you don’t want the media to leave the users’ smartphones. Oz API is not required for On-device analyses. To learn how it works, please refer to this Integration Quick Start Guide.
Web Adapter and Web Plugin together constitute Web SDK.
Web SDK is designed to be integrated into your web applications and has the same main goals as Mobile SDKs:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
Web Adapter needs to be set up on the server side. Web Plugin is called by your web application and works in a browser context. It communicates with Web Adapter, which, in turn, communicates with Oz API.
Web SDK adds two layers of protection against injection attacks:
Collects information about browser context and camera properties to detect usage of virtual cameras or other injection methods.
Records liveness video in a format that allows server-side neural networks to search for traces of injection attack in the video itself.
Check the Integration Quick Start Guide for the basic integration scenario, and explore the Web SDK Developer Guide for more details.
Web UI is a convenient user interface that lets you explore the stored API data easily. It relies on the API authentication and database and does not store any data of its own.
The Web Console interface is intuitive; nevertheless, a user guide is available here.
Liveness and Face Matching can also be provided by the Oz API Lite module, which is conceptually different from Oz API:
Fully stateless, no persistence,
Extremely easy to scale horizontally,
No built-in authentication and access management,
Works with single images, not videos.
Oz API Lite is suitable when you want to embed it into your product and/or have extremely high performance requirements (millions of checks per week).
For more details, please refer to the Oz API Lite Developer Guide.
For commercial use of Oz Forensics products, a license is required. The license is time-limited and defines the software access parameters based on the terms of your agreement.
Once you initialize a Mobile SDK, run the Web Plugin, or use Oz BIO, the system checks whether your license is valid. The check runs in the background and has minimal impact on the user experience.
As you can see in the scheme above, a license is required for:
Mobile SDKs for iOS and Android,
Web SDK, which consists of Web Adapter and Web Plugin,
Each component requires a separate license bound to that component. Thus, if you use all three components, three licenses are required.
To issue a license for mobile SDK, we require your bundle (application) ID. There are two types of licenses for iOS and Android SDKs: online and offline. Any license type can be applied to any analysis mode: on-device, server-based, or hybrid.
As its name suggests, an online license requires a stable connection. Once you initialize our SDK with this license, it connects to our license server and retrieves information about license parameters, including counters of transactions or devices, where:
Transaction: increments each time you start a video capture.
Device: increments when our SDK is installed on a new device.
The online license can be transaction-limited, device-limited, or both, according to your agreement.
The main advantages of the online license are:
You don’t need to update your application after the license renewal,
And if you want to add a new bundle ID to the license, there’s also no need to re-issue it. Everything is done on the fly.
The data exchange for the online license is quick, so your users will experience almost no delay compared to the offline license.
Please note that even though on-device analyses don’t require the Internet themselves, you still need a connection for license verification.
The online license is the default option for Mobile SDKs. If you require an offline license, please inform your manager.
The offline license can work without an Internet connection. All license parameters are set in the license file, and you just need to add the file to your project. This license type doesn’t have any restrictions on transactions or devices.
The main benefit of the offline license is its autonomy: it functions without a network connection. However, when your license expires and you add a new one, you’ll need to release a new version of your application on Google Play and the App Store. Otherwise, the SDK won’t function.
How to add a license to mobile SDK:
The Web SDK license is similar to the Mobile SDK offline license. It can function without a network connection, and the license file contains all the necessary parameters, such as the expiration date. The Web SDK license also has no restrictions on transactions or devices. The difference is that you don’t need to release a new application version when the Web SDK license is renewed.
Once you're ready to move to commercial use, a new production license will be issued. We’ll provide you with new production credentials and assist you with integration and configuration. Our engineers are always available to help.
Our software offers flexible licensing options to meet your specific needs. Whether you prioritize seamless updates or prefer autonomous operation, we have a solution tailored for you. If you have any questions, please contact us.
This section lists the guides for server-based liveness check integrations.
Oz Forensics specializes in liveness and face matching: we develop products that help you to identify your clients remotely and avoid any kind of spoofing or deepfake attack. Oz software helps you to add facial recognition to your software systems and products. You can integrate Oz modules in many ways depending on your needs. We are constantly improving our components, increasing their quality.
Oz Liveness is responsible for recognizing a living person on a video it receives. Oz Liveness can distinguish a real human from their photo, video, mask, or other kinds of spoofing and deepfake attacks. The algorithm is certified against the ISO/IEC 30107-3 standard by iBeta, a NIST-accredited biometric test laboratory, with 100% accuracy.
Our liveness technology protects both against injection and presentation attacks.
The injection attack detection is layered. Our SDK examines the user environment (browser, camera, etc.) to detect potential manipulations. Then, a deep neural network comes into play to defend against even the most sophisticated injection attacks.
Oz Face Matching (Biometry) identifies the person, verifying that the person who performs the check and the owner of the submitted documents are the same person. Oz Biometry looks through the video, finds the best shot where the face is clearly visible, and compares it with the photo from an ID or another document. The algorithm's accuracy is 99.99%, as confirmed by NIST FRVT.
Our biometry technology offers both 1:1 Face Verification and 1:N Face Identification, which are also based on ML algorithms. To train our neural networks, we use our own framework based on state-of-the-art technologies. A large private dataset (over 4.5 million unique faces) with a wide representation of ethnic groups, along with other attributes (predicted race, age, etc.), helps our biometric models provide robust matching scores.
Our face detector works with both photos and videos. It also excels at detecting faces in images of IDs and passports (which can be rotated or of low quality).
Oz BIO, which is needed for server-side analyses and is installed on-premise.
The license is bound to the URLs of your domains and/or subdomains. To add the license to your SDK instance, place it in the Web SDK container as described in the corresponding guide. In rare cases, other placement options are also possible.
For on-premise installations, we offer a dedicated license with a limitation on activations, with each activation representing a separate Oz BIO seat. This license can be online or offline, depending on whether your Oz BIO servers have Internet access. The online license is verified through our license server, while for offline licenses, we assist you in setting everything up within your infrastructure and activating the license.
For test integration purposes, we provide a free trial license that is sufficient for initial use, such as testing with your datasets to check analysis accuracy. For Mobile SDKs, you can generate a one-month license yourself on our website. If you would like to integrate with your web application, please contact us to obtain a license, and we will also assist you in configuring your dedicated instance of our Web SDK. With the license, you will receive credentials to access our services.
The presentation attack detection is based on deep neural networks of various architectures, combined with a proprietary ensembling algorithm to achieve optimal performance. The networks consider multiple factors, including reflection, focus, background scene, motion patterns, etc. We offer both passive (no gestures) and active (various gestures) Liveness checks, ensuring that your customers enjoy a smooth user experience while you receive accurate results. The iBeta test was conducted using passive Liveness, and since then, we have significantly enhanced our networks to better meet the needs of our clients.
The Oz software combines accuracy in analysis with ease of integration and use. To further simplify the integration process, we have provided a detailed description of all the key concepts of our system in this section. If you're ready to get started, please refer to our integration quick start guides, which provide step-by-step instructions on how to achieve your facial recognition goals quickly and easily.
This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and performing on-device liveness checks without sending any data to a server.
The SDK implements a ready-to-use face capture user interface that is essential for a seamless customer experience and accurate liveness results.
Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue a one-month trial license on our website or email us for a long-term license.
In the build.gradle of your project, add:
In the build.gradle of the module, add:
Rename the license file to forensics.license and place it into the project's res/raw folder.
To start recording, use startActivityForResult:
To obtain the captured video, use onActivityResult:
The sdkMediaResult object contains the captured videos.
To run the analyses, execute the code below. Note that mediaList is an array of objects that were captured (sdkMediaResult) or otherwise created (media you captured on your own).
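For orientation, here is a minimal Kotlin sketch of the three steps above in one place. The SDK entry points shown (createStartIntent, getResultFromIntent, AnalysisRequest.Builder, and the listener shape) follow common Oz Android SDK usage but may differ between versions, so treat the exact names and signatures as assumptions and verify them against the current SDK reference.

```kotlin
import android.content.Intent
import androidx.appcompat.app.AppCompatActivity
// Oz SDK imports omitted: package paths vary by SDK version.

class LivenessActivity : AppCompatActivity() {

    private val livenessRequestCode = 1

    // 1. Start recording: open the SDK capture screen (Selfie/no-gesture action assumed).
    private fun startLiveness() {
        val intent = OzLivenessSDK.createStartIntent(listOf(OzAction.Blank))
        startActivityForResult(intent, livenessRequestCode)
    }

    // 2. Obtain the captured video in onActivityResult.
    override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
        super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == livenessRequestCode) {
            val sdkMediaResult = OzLivenessSDK.getResultFromIntent(data)
            if (!sdkMediaResult.isNullOrEmpty()) runOnDeviceAnalysis(sdkMediaResult)
        }
    }

    // 3. Run the Liveness analysis fully on the device: no data leaves the phone.
    private fun runOnDeviceAnalysis(mediaList: List<OzAbstractMedia>) {
        AnalysisRequest.Builder()
            .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.ON_DEVICE, mediaList))
            .build()
            .run(object : AnalysisRequest.AnalysisListener {
                override fun onSuccess(result: RequestResult) { /* inspect the resolution */ }
                override fun onError(error: OzException) { /* handle capture/analysis errors */ }
            })
    }
}
```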
Install OZLivenessSDK via CocoaPods. To integrate OZLivenessSDK into an Xcode project, add to Podfile:
Rename the license file to forensics.license and put it into the project.
Create a controller that will capture videos as follows:
The delegate object must implement the OZLivenessDelegate protocol:
Use AnalysisRequestBuilder to initiate the Liveness analysis.
With these steps, you are done with the basic integration of Mobile SDKs. The data from the on-device analysis is not transferred anywhere, so bear in mind that you cannot access it via API or the Web Console. However, the Internet is still required to check the license. Additionally, we recommend using our logging service called telemetry, as it helps a lot when investigating attack details. We'll provide you with credentials.
This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and subsequently analyzing them on the server.
The SDK implements a ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results. The SDK methods for liveness analysis communicate with Oz API under the hood.
Before you begin, make sure you have Oz API credentials. When using SaaS API, you get them from us:
Login: j.doe@yourcompany.com
Password: …
API: https://sandbox.ohio.ozforensics.com
Web Console: https://sandbox.ohio.ozforensics.com
For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the guide on user creation via Web Console. Consider the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a set of credentials similar to what you would get in the SaaS scenario.
We also recommend using our logging service called telemetry, as it helps a lot when investigating attack details. For Oz API users, the service is enabled by default. For on-premise installations, we'll provide you with credentials.
Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue a one-month trial license on our website or email us for a long-term license.
In the build.gradle of your project, add:
In the build.gradle of the module, add:
Rename the license file to forensics.license and place it into the project's res/raw folder.
Use API credentials (login, password, and API URL) that you’ve got from us.
In production, instead of hard-coding the login and password in the application, it is recommended to obtain an access token on your back end with the API auth method and then pass it to your application:
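As a sketch, the token-based connection could look like this on Android; the OzConnection.fromServiceToken and setApiConnection names are assumptions to verify against your SDK version's reference.

```kotlin
// A backend-issued token keeps credentials out of the mobile app.
// Method names below are assumptions; verify against the SDK reference.
val token = fetchAccessTokenFromYourBackend() // your back end calls POST /api/authorize/auth/
OzLivenessSDK.setApiConnection(
    OzConnection.fromServiceToken("https://sandbox.ohio.ozforensics.com", token)
)
```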
To start recording, use startActivityForResult:
To obtain the captured video, use onActivityResult:
The sdkMediaResult object contains the captured videos.
To run the analyses, execute the code below. Note that mediaList is an array of objects that were captured (sdkMediaResult) or otherwise created (media you captured on your own).
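Since the capture steps match the on-device guide, only the analysis call differs. A minimal sketch, with the same assumed SDK names as in the on-device example:

```kotlin
// SERVER_BASED uploads the captured media and runs the analysis via Oz API.
AnalysisRequest.Builder()
    .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaList))
    .build()
    .run(analysisListener) // same listener shape as in the on-device sketch
```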
Install OZLivenessSDK via CocoaPods. To integrate OZLivenessSDK into an Xcode project, add to Podfile:
Rename the license file to forensics.license and put it into the project.
Use API credentials (login, password, and API URL) that you’ve got from us.
In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your back end using the API auth method, then pass it to your application:
Create a controller that will capture videos as follows:
The delegate object must implement the OZLivenessDelegate protocol:
Use AnalysisRequestBuilder to initiate the Liveness analysis. The communication with Oz API is under the hood of the run method.
With these steps, you are done with basic integration of Mobile SDKs. You will be able to access recorded media and analysis results in Web Console via browser or programmatically via API.
In developer guides, you can also find instructions for customizing the SDK look-and-feel and access the full list of our Mobile SDK methods. Check out the table below:
Please note that the Oz Liveness Mobile SDK does not include a user interface for scanning official documents. You may need to explore alternative SDKs that offer that functionality or implement it on your own. Web SDK does include a simple photo ID capture screen.
This guide describes the steps needed to add face matching to your liveness check.
By this time you should have already implemented liveness video recording and liveness check. If not, please refer to these guides:
Simply add photo_id_front to the list of actions for the plugin, e.g.,
For the purpose of this guide, it is assumed that your reference photo (e.g., front side of an ID) is stored on the device as reference.jpg.
Modify the code that runs the analysis as follows:
For on-device analyses, you can change the analysis mode from Analysis.Mode.SERVER_BASED to Analysis.Mode.ON_DEVICE.
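As a sketch, the modified analysis code could look like this; the OzAbstractMedia photo subtype and tag names are assumptions to verify against the SDK reference.

```kotlin
import java.io.File
// Oz SDK imports omitted: package paths vary by SDK version.

// The reference photo is attached alongside the captured liveness media,
// and a BIOMETRY analysis is added next to the QUALITY (liveness) one.
val referencePhoto = OzAbstractMedia.OzDocumentPhoto(
    OzMediaTag.PhotoIdFront,                      // tag for an ID front side (assumed name)
    File(filesDir, "reference.jpg").absolutePath
)
AnalysisRequest.Builder()
    .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, sdkMediaResult))
    .addAnalysis(Analysis(Analysis.Type.BIOMETRY, Analysis.Mode.SERVER_BASED, sdkMediaResult + referencePhoto))
    .build()
    .run(analysisListener)
```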
Check also the Android sample app source code.
For the purpose of this guide, it is assumed that your reference photo (e.g., front side of an ID) is stored on the device as reference.jpg.
Modify the code that runs the analysis as follows:
For on-device analyses, you can change the analysis mode from mode: .serverBased to mode: .onDevice.
Check also the iOS sample app source code.
You will be able to access your media and analysis results in Web UI via browser or programmatically via API.
Oz API methods as well as Mobile and Web SDK methods can be combined with great flexibility. Explore the options available in the Developer Guide section.
This guide outlines the steps for integrating the Oz Liveness Web SDK into a customer web application for capturing facial videos and subsequently analyzing them on a server.
The SDK implements a ready-to-use face capture user interface that is essential for a seamless customer experience and accurate liveness results. Under the hood, it communicates with Oz API.
Oz Liveness Web SDK detects both presentation and injection attacks. An injection attack is an attempt to feed pre-recorded video into the system using a virtual camera.
Finally, while the cloud-based service provides fully fledged functionality, we also offer an on-premise version with the same functions but no need to send any data to our cloud. We recommend starting with the SaaS mode and then reconnecting your web app to the on-premise Web Adapter and Oz API to ensure seamless integration between your front end and back end. With these guidelines in mind, integrating the Oz Liveness Web SDK into your web application can be a simple and straightforward process.
Tell us the domain names of the pages from which you are going to call Web SDK and an email for admin access, e.g.:
Add the following tags to your HTML code. Use the Web Adapter URL you received before:
Add the code that opens the plugin and handles the results:
Customizing plugin look-and-feel
Adding custom language pack
Tuning plugin behavior
Plugin parameters and callbacks
Security recommendations
For Angular and React, replace https://web-sdk.sandbox.ohio.ozforensics.com in index.html.
Oz API is the most important component of the system. It ensures that all other components are connected with each other. Oz API:
provides a unified REST API interface to run the Liveness and Biometry analyses
processes authorization and user permissions management
tracks and records requested orders and analyses to the database
archives the inbound media files
collects telemetry from connected mobile apps
provides settings for specific device models
generates reports with analyses results
Description of the REST API scheme:
In this section, you will find the description of both the API and SDK components of the Oz Forensics Liveness and face biometry system. API is the back-end component of the system; it enables all the system modules to interact with each other. SDK is the front-end component that is used to:
1) take videos or images which are then processed via API,
2) display results.
We provide two versions of API. The full version offers all the functionality of Oz API. The Lite version is simple and lightweight, with only the necessary functions included.
The SDK component consists of web SDK and mobile SDK.
Web SDK comprises a plugin that you can embed into your website page and an adapter for this plugin.
Mobile SDK comprises the SDKs for iOS and Android.
This guide describes how to match a liveness video with a reference photo of a person that is already stored in your database.
However, if you prefer to include a photo ID capture step in your liveness process instead of using a stored photo, you can refer to the corresponding guide in this section.
By this time you should have already implemented liveness video recording and liveness check. If not, please refer to these guides:
In this scenario, you upload your reference image to the same folder where you have a liveness video, initiate the BIOMETRY analysis, and poll for the results.
folder_id
Given that you already have the liveness video recorded and uploaded, you will be working with the same Oz API folder where your liveness video is. Obtain the folder ID as described below, and pass it to your back end.
For a video recorded by the Android or iOS SDK, retrieve the folder_id from the analysis results as shown below:
Android:
iOS:
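For the Android side, a minimal Kotlin sketch might look like this; the folderId field name on the result object is an assumption to check against your SDK version (the iOS result carries the same identifier).

```kotlin
// Android sketch: the server-based analysis result carries the ID of the
// Oz API folder holding the uploaded video (field name assumed).
val analysisListener = object : AnalysisRequest.AnalysisListener {
    override fun onSuccess(result: RequestResult) {
        val folderId = result.folderId
        sendFolderIdToYourBackend(folderId) // your own transport, e.g. an HTTPS call
    }
    override fun onError(error: OzException) { /* handle */ }
}
```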
Set the appropriate tags in the payload field of the request, depending on the nature of a reference photo that you have.
get the qualitative result from resolution (SUCCESS or DECLINED),
get the quantitative results from analyses.results_data.min_confidence.
Here is the Postman collection for this guide.
With these steps completed, you are done with adding face matching via Oz API. You will be able to access your media and analysis results in Web UI via browser or programmatically via API.
Android source codes
iOS source codes
Android OzLiveness SDK
iOS OzLiveness SDK
in PlayMarket
in TestFlight
In response, you’ll get URLs and credentials for further integration and usage. When using SaaS API, you get them from us:
For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the guide on user creation via Web Console. Consider the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a set of credentials similar to what you would get in the SaaS scenario.
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, here.
With these steps, you are done with the basic integration of Web SDK into your web application. You will be able to access recorded media and analysis results in the Web Console via browser or programmatically via Oz API.
In the Web SDK Developer Guide, you can find instructions for common next steps:
Please find a sample for Oz Liveness Web SDK here. To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.
For a video recorded by Web SDK, get the folder_id as described here.
Call the POST /api/folders/{{folder_id}}/media/ method, replacing the folder_id with the ID you’ve got in the previous step. This will upload your new media to the folder where your ready-made liveness video is located.
To launch the analysis, call POST /api/folders/{{folder_id}}/analyses/ with the folder_id from the previous step. In the request body, specify the biometry check to be launched.
Repeat GET /api/analyses/{{analyse_id}} with the analyse_id from the previous step once a second until the state changes from PROCESSING to something else. For a finished analysis:
Oz API methods can be combined with great flexibility. Explore Oz API using the Postman collection.
Domain names from which WebSDK will be called:
www.yourbrand.com
www.yourbrand2.com
Email for admin access:
j.doe@yourcompany.com
Login: j.doe@yourcompany.com
Password: …
API: https://sandbox.ohio.ozforensics.com/
Web Console: https://sandbox.ohio.ozforensics.com
Web Adapter: https://web-sdk.cdn.sandbox.ozforensics.com/your_company_name/
The "Best shot" algorithm is intended to choose the highest-quality frame with a clearly visible face from a video record. This algorithm works as a part of the liveness analysis, so here, we describe only the best shot part.
Please note: historically, some instances are configured to allow Best Shot only for certain gestures.
1. Initiate the analysis similarly to Liveness, but make sure that "extract_best_shot" is set to True as shown below:
If you want to use a webhook for response, add it to the payload at this step, as described here.
2. Check and interpret results in the same way as for the pure Liveness analysis.
3. The URL of the best shot is located in the results_media -> output_images -> original_url field of the response.
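For illustration, the request body in step 1 might look like the sketch below; the exact JSON shape is an assumption to verify against the Postman collection.

```kotlin
// Quality (Liveness) analysis with best shot extraction enabled (body shape assumed).
val requestBody = """{"analyses":[{"type":"QUALITY","extract_best_shot":true}]}"""
```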
The Biometry algorithm is intended to compare two or more photos and detect the level of similarity of the depicted faces. As source media, the algorithm takes photos, videos, and documents (with photos).
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.
Check the response for the min_confidence value. It is the quantitative result of matching the people on the uploaded media.
To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked by tags: they describe what's pictured in a media and determine the applicable analyses.
For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into the ZIP archive and apply the video-related tags.
To create a folder and upload media to it, call POST /api/folders/
To add files to an existing folder, call POST /api/folders/{{folder_id}}/media/
Add the files to the request body; tags should be specified in the payload.
Here's an example of the payload for a passive Liveness video and an ID front side photo.
An example of usage (Postman):
The successful response will return the folder data.
To get an access token, call POST /api/authorize/auth/ with the credentials you've got from us (the request body contains the email and password). The host address should be the API address (the one you've also got from us).
The successful response will return a pair of tokens: access_token and expire_token.
access_token is a key that grants you access to system resources. To access a resource, you need to add your access_token to the header.
headers = {'X-Forensic-Access-Token': <access_token>}
access_token is time-limited; the limits depend on the account type:
service accounts – OZ_SESSION_LONGLIVE_TTL (5 years by default),
other accounts – OZ_SESSION_TTL (15 minutes by default).
expire_token is the token you can use to renew your access token if necessary.
If the value of expire_date is greater than the current date, the expire_date of the current session is set to the current date plus the time period defined as shown above (depending on the account type).
To renew access_token and expire_token, call POST /api/authorize/refresh/.
Add expire_token to the request body and X-Forensic-Access-Token to the header.
In case of success, you'll receive a new pair of access_token and expire_token. The "old" pair will be deleted upon the first authentication with the renewed tokens.
Possible errors:
400, "Could not locate field for key_path expire_token from provided dict data" – the expire_token is missing from the request body.
401, "Session not found" – the session with the expire_token you have passed doesn't exist.
403, "You have not access to refresh this session" – the user who makes the request is not the owner of this expire_token session.
How to compare a photo or video with ones from your database.
The blacklist check algorithm is designed to check the presence of a person using a database of preloaded photos. A video fragment and/or a photo can be used as a source for comparison.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
1. Initiate the analysis: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Wait for the resolution_status and resolution fields to change to any status other than PROCESSING, and treat this as the result.
If you want to know which person from your collection matched the media you have uploaded, find the collection analysis in the response, check results_media, and retrieve person_id. This is the ID of the person who matched the person in your media. To get information about this person, use GET /api/collections/{{collection_id}}/persons/{{person_id}} with the IDs of your collection and person.
The webhook feature simplifies getting analyses' results. Instead of polling after the analyses are launched, add a webhook that will call your website once the results are ready.
When you create a folder, add the webhook endpoint (resolution_endpoint) into the payload section of your request body:
You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
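As a sketch, the payload section might carry the endpoint like this; the payload shape is an assumption, and the URL is illustrative only.

```kotlin
// Folder-creation payload with a webhook endpoint (shape assumed; URL illustrative).
val payload = """{"resolution_endpoint":"https://your-backend.example.com/oz-results"}"""
```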
This article describes how to create a collection via API, how to add persons and photos to it, and how to delete them, as well as the collection itself, if you no longer need it. You can do the same in the Web Console, but this article covers API methods only.
A collection in Oz API is a database of facial photos that are compared with the face from the captured photo or video during the Black list analysis. A person represents a human in the collection. You can upload several photos for a single person.
The collection should be created within a company, so your company's company_id is required as a prerequisite.
If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the company_id you've received.
Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and the company_id of your company:
In a response, you'll get your new collection identifier: collection_id.
To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using the collection_id of the collection needed. In the request body, add one or several photos. Mark them with appropriate tags in the payload:
The response will contain the person_id, which stands for the person identifier within your collection.
If you want to add a name of the person, in the request payload, add it as metadata:
To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ using the appropriate person_id. Fill in the request body as you did before with POST /api/collections/{{collection_id}}/persons/.
To obtain information on all the persons within a single collection, call GET /api/collections/{{collection_id}}/persons/.
To obtain a list of photos for a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain person_image_id. You'll need this ID, for instance, if you want to delete the photo.
To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity if it has any related analyses, which means the Black list analysis used this photo for comparison and found a match. To delete such a person, first delete these analyses using DELETE /api/analyses/{{analyse_id}} with the analyse_id of the collection (Black list) analysis.
To delete all the collection-related analyses, get a list of folders where the Black list analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the analyse_id of the required analysis and delete the analysis: DELETE /api/analyses/{{analyse_id}}.
To delete a single photo of a person, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}}/images/{{person_image_id}} with the collection, person, and image identifiers specified.
Delete the information on all the persons from this collection as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.
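A Kotlin sketch of the two core calls above, using OkHttp and org.json. The endpoints follow this article; the request-body shapes and response field names are assumptions to verify against the API reference.

```kotlin
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.MultipartBody
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.asRequestBody
import okhttp3.RequestBody.Companion.toRequestBody
import org.json.JSONObject
import java.io.File

// Create a collection within your company; returns collection_id.
fun createCollection(client: OkHttpClient, host: String, token: String, companyId: String): String {
    val request = Request.Builder()
        .url("$host/api/collections/")
        .header("X-Forensic-Access-Token", token)
        .post("""{"alias":"blacklist","company_id":"$companyId"}"""
            .toRequestBody("application/json".toMediaType()))
        .build()
    return client.newCall(request).execute().use {
        JSONObject(it.body!!.string()).getString("collection_id")
    }
}

// Add a person with one tagged photo; returns person_id (payload shape assumed).
fun addPerson(client: OkHttpClient, host: String, token: String, collectionId: String, photo: File): String {
    val body = MultipartBody.Builder().setType(MultipartBody.FORM)
        .addFormDataPart("payload", """{"media:tags":{"photo1":["photo_selfie"]}}""")
        .addFormDataPart("photo1", photo.name, photo.asRequestBody("image/jpeg".toMediaType()))
        .build()
    val request = Request.Builder()
        .url("$host/api/collections/$collectionId/persons/")
        .header("X-Forensic-Access-Token", token)
        .post(body)
        .build()
    return client.newCall(request).execute().use {
        JSONObject(it.body!!.string()).getString("person_id")
    }
}
```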
Here, you'll get acquainted with the types of analyses that Oz API provides and learn how to interpret the output.
Using Oz API, you can perform one of the following analyses:
The possible results of the analyses are explained here.
Each of the analyses has its own threshold that determines the output. By default, the threshold for Liveness is 0.5 (50%); for Blacklist and Biometry (Face Matching), it is 0.85 (85%).
Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.
Blacklist: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.
Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
To configure the threshold depending on your needs, please contact us.
For more information on how to read the numbers in analyses' results, please refer to Quantitative Results.
The Biometry algorithm allows you to compare several media and check whether the people on them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media (for details, please refer to Rules of Assigning Analyses).
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100% to 0% (1 to 0), where:
100% (1) – the faces are similar; the media represent the same person,
0% (0) – the faces are not similar and belong to different people.
The Liveness detection (Quality) algorithm aims to check whether a person in a media is a real human acting in good faith, not a fake of any kind.
The Best Shot algorithm picks the best shot from a video (the best-quality frame, where the face is most clearly visible). It is an addition to Liveness.
After the check, the analysis shows the chance of a spoofing attack as a percentage:
100% (1) – an attack is detected, the person in the video is not a real living person,
0% (0) – a person in the video is a real living person.
*Spoofing in biometrics is a kind of scam where a person disguises themselves as another person using software or physical tools such as deepfakes, masks, ready-made photos, or fake videos.
The Documents analysis aims to recognize the document and check if its fields are correct according to its type.
Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.
As an output, you'll get a list of document fields with recognition results for each field and a result of checking that can be:
The documents passed the check successfully,
The documents failed to pass the check.
Additionally, the result of Biometry check is displayed.
The Blacklist checking algorithm determines whether the person in a photo or video is present in the database of pre-uploaded images. This database can be used as a blacklist or a whitelist. In the former case, the person's face is compared with the faces of known swindlers; in the latter case, it might be a list of VIPs.
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100% to 0% (1 to 0), where:
100% (1) – the person in the image or video matches someone in the blacklist,
0% (0) – the person is not found in the blacklist.
This article covers the default rules of applying analyses.
Analyses in Oz system can be applied in two ways:
manually, for instance, when you choose the Liveness scenario in our demo application;
automatically, when you don’t choose anything and just assign all possible analyses (via API or SDK).
The automatic assignment means that the Oz system decides itself what analyses to apply to media files based on their tags and type. If you upload files via the web console, you select the tags needed; if you take a photo or video via Web SDK, the SDK picks the tags automatically. As for the media type, it can be IMAGE (a photo), VIDEO, or SHOTS_SET, where SHOTS_SET is a .zip archive equivalent to a video.
Below, you will find the tag and type requirements for all analyses. If a media file doesn’t match the requirements for a certain analysis, it is ignored by the algorithms.
The rules listed below act by default. To change the mapping configuration, please contact us.
This analysis is applied to all media, regardless of the gesture recorded (gesture tags begin with video_selfie).
Important: to process a photo in API 4.0.8 and below, pack it into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored.
This analysis is applied to all media.
If the folder contains fewer than two matching media files, the system will return an error. If there are more than two files, all pairs will be compared, and the system will return the result for the pair with the least similar faces.
This analysis works only when you have a pre-made image database, which is called the blacklist. The analysis is applied to all media in the folder (or the ones marked as source media).
Best Shot is an addition to the Quality (Liveness) analysis. It requires the appropriate option enabled. The analysis is applied to all media files that can be processed by the Quality analysis.
The Documents analysis is applied to images with the tags photo_id_front and photo_id_back (documents) and photo_selfie (selfie). The result will be positive if the system finds the selfie photo and matches it with a photo on one of the valid documents from the following list:
personal ID card
driver license
foreign passport
Possible errors:
202, "Could not locate face on source media [media_id]" – no face is found in the media being processed, or the source media has a wrong (photo_id_back) and/or missing tag.
202, "Biometry. Analyse requires at least 2 media objects to process" – the algorithms did not find two appropriate media for the analysis. This might happen when only a single media has been sent for analysis, or a media is missing a tag.
202, "Processing error - did not found any document candidates on image" – the Documents analysis can't be finished because the uploaded photo does not appear to be a document, or it has wrong (not photo_id_*) and/or missing tags.
5, "Invalid/missed tag values to process quality check" – the tags applied can't be processed by the Quality algorithm (most likely, the tags begin with photo_*; for Quality, media should be marked with video_*).
5, "Invalid/missed tag values to process blacklist check" – the tags applied can't be processed by the Blacklist algorithm. This might happen when a media is missing a tag.
Response codes 2XX indicate a successfully processed request (e.g., code 200 for retrieving data, code 201 for adding a new entity, code 204 for deletion, etc.).
Response codes 4XX indicate that a request could not be processed correctly because of some client-side data issues (e.g., 404 when addressing a non-existing resource).
Response codes 5XX indicate that an internal server-side error occurred during the request processing (e.g., when database is temporarily unavailable).
Each response error includes HTTP code and JSON data with error description. It has the following structure:
error_code – integer error code;
error_message – text error description;
details – additional error details (the format is specific to each case); can be empty.
Sample error response:
Error codes:
0 – UNKNOWN: unknown server error.
1 – NOT ALLOWED: an unallowed method is called; usually accompanied by the 405 HTTP response status. For example, requesting the PATCH method when only GET/POST are supported.
2 – NOT REALIZED: the method is documented but not implemented, for a temporary or permanent reason.
3 – INVALID STRUCTURE: incorrect request structure; some required fields are missing, or a format validation error occurred.
4 – INVALID VALUE: incorrect value of a parameter inside the request body or query.
5 – INVALID TYPE: invalid data type of a request parameter.
6 – AUTH NOT PROVIDED: access token not specified.
7 – AUTH INVALID: the access token does not exist in the database.
8 – AUTH EXPIRED: the auth token has expired.
9 – AUTH FORBIDDEN: access denied for the current user.
10 – NOT EXIST: the requested resource is not found (equivalent to HTTP status_code = 404).
11 – EXTERNAL SERVICE: error in an external information system.
12 – DATABASE: critical database error on the server host.
What is Oz API Lite, when and how to use it.
Oz API Lite is the lightweight yet powerful version of Oz API. The Lite version is less resource-demanding, faster, and easier to work with. The analyses are made within the API Lite image. As Oz API Lite doesn't include any additional services like statistics or data storage, this version is the one to use when you need high performance.
To check the Liveness processor, call GET /v1/face/liveness/health.
To check the Biometry processor, call GET /v1/face/pattern/health.
To perform the liveness check for an image, call POST /v1/face/liveness/detect (it takes an image as input and evaluates the chance that the image is a spoofing attack).
To compare the faces in two images, call POST /v1/face/pattern/extract_and_compare (it takes two images as input, derives the biometric templates from them, and compares the templates).
To compare one image with a set of images, call POST /v1/face/pattern/extract_and_compare_n.
For the full list of Oz API Lite methods, please refer to API Methods.
You can generate a trial license here or contact us by email to get a production license. To create the license, your applicationId (bundle id) is required.
To pass your license file to the SDK, call the OzLivenessSDK.init method with a list of LicenseSources. Use one of the following:
LicenseSource.LicenseAssetId should contain a path to a license file called forensics.license, which has to be located in the project's res/raw folder.
LicenseSource.LicenseFilePath should contain a file path to the place in the device's storage where the license file is located.
In case of any license errors, the onError function is called; use it to handle the exception as shown above. Otherwise, the system will return information about the license. To check the license data manually, use the getLicensePayload method.
License error. License at (your_URI) not found – the license file is missing. Please check its name and the path to the file.
License error. Cannot parse license from (your_URI), invalid format – the license file is damaged. Please email us the file.
License error. Bundle company.application.id is not in the list allowed by license (bundle.id1, bundle.id2) – the bundle (application) identifier you specified is missing from the allowed list. Please check the spelling; if it is correct, you need another license for your application.
License error. Current date yyyy-mm-dd hh:mm:ss is later than license expiration date yyyy-mm-dd hh:mm:ss – your license has expired. Please contact us.
License is not initialized. Call 'OzLivenessSDK.init' before using SDK – you haven't initialized the license. Call OzLivenessSDK.init with your license data as explained above.
Since Android and iOS SDKs 8.0.0, we have introduced the hybrid Liveness analysis mode. It is a combination of the on-device and server-based analyses that sums up the benefits of both modes. If the on-device analysis is uncertain about the real human presence, the system initiates the server-based analysis; otherwise, no additional analyses are done.
You need less computational capacity: in the majority of cases, there's no need to apply the server-side analyses, and fewer requests are sent back and forth. On Android, only 8-9% of analyses require an additional server check; on iOS, it's 4.5-5.5%.
The accuracy is similar to the one of the server-based analysis: if the on-device analysis result is uncertain, the server analysis is launched. We offer the hybrid analysis as one of the default analysis modes, but you can also implement your own logic of hybrid analysis by combining the server-based and on-device analyses in your code.
Since the 8.3.0 release, you get the analysis result faster, and less data will be transmitted (by up to 10 times): the on-device analysis is enough in the majority of cases so that you don’t need to upload the full video to analyze it on the server, and, therefore, don’t send or receive the additional data. The customer journey gets shorter.
The hybrid analysis has been available in our native (mobile) SDKs since 8.0.0. As mentioned before, the server-based analysis is launched in a minority of cases: if the analysis on your device has finished with a confident answer, there's no need for a second check. This results in fewer server resources being involved.
To enable the hybrid mode on Android, set Mode to HYBRID when you launch the analysis. To do the same on iOS, set mode to hybrid. That’s all, as easy as falling off a log.
If you have any questions left, we’ll be happy to answer them.
This article describes the main types of analyses that Oz software is able to perform.
Liveness checks whether a person in a media is a real human.
Face Matching examines two or more media to identify similarities between the faces depicted in them.
Black list looks for resemblances between an individual featured in a media and individuals in a pre-existing photo database.
These analyses are accessible in the Oz API for both SaaS and On-Premise models. Liveness and Face Matching are also offered in the On-Device model. Please visit this page to learn more about the usage models.
The Liveness check is important to protect facial recognition from the two types of attacks.
A presentation attack, also known as a spoofing attack, is an attempt to deceive a facial recognition system by presenting to the camera a video, photo, or any other type of media that mimics the appearance of a genuine user. These attacks can include the use of realistic masks or makeup.
An injection attack is an attempt to deceive a facial recognition system by replacing physical camera input with a prerecorded image or video, manipulating physical camera output before it becomes input to the facial recognition system, or injecting malicious code. Virtual camera software is the most common tool for injection attacks.
Oz Liveness is able to detect both types of attacks. Any component can detect presentation attacks, and for injection attack detection, use Oz Liveness SDK. To learn about how to use Oz components to prevent attacks, check our integration quick start guides:
Once the Liveness check is finished, you can check both qualitative and quantitative analysis results.
Asking users to perform a gesture, such as smiling or turning their head, is a popular requirement when recording a Liveness video. With Oz Liveness Mobile and Web SDK, you can also request gestures from users. However, our Liveness check relies on other factors, analyzed by neural networks, and does not depend on gestures. For more details, please check Passive and Active Liveness.
The Liveness check can also return the best shot from a video: the best-quality frame, where the face is most clearly visible.
The Biometry algorithm allows you to compare several media and check whether the people on them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media.
Wonder how to integrate face matching into your processes? Check our integration quick start guides.
In Oz API, you can configure one or more black lists, or face collections. These collections are databases of people depicted in photos. When the Black list analysis is conducted, Oz software compares the face in a captured photo or video with the faces in this pre-made database and shows whether the face exists in a collection.
For additional information, please refer to this article.
To connect SDK to Oz API, specify the API URL and access token as shown below.
Please note: in your host application, it is recommended that you set the API address on the screen that precedes the liveness check. Setting the API URL initiates a service call to the API, which may cause excessive server load when done at application initialization or startup.
Alternatively, you can use the login and password provided by your Oz Forensics account manager:
However, for security reasons, the preferred option is authentication via an access token.
By default, logs are saved along with the analyses' data. If you need to keep the logs distinct from the analysis data, set up the separate connection for telemetry as shown below:
Clearing authorization:
Check for the presence of the saved Oz API access token:
LogOut:
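Taken together, the connection and session calls might look like this in Kotlin; every name here is an assumption to verify against the Android SDK reference for your version.

```kotlin
// All method names below are assumptions; check the SDK reference.

// Preferred: token-based connection.
OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(apiUrl, accessToken))

// Alternative: credentials-based connection.
OzLivenessSDK.setApiConnection(OzConnection.fromCredentials(apiUrl, login, password))

// Separate connection for telemetry (logs), if it must be kept apart from analysis data.
OzLivenessSDK.setEventsConnection(OzConnection.fromServiceToken(telemetryUrl, telemetryToken))

// Check for a saved Oz API access token, then clear authorization / log out.
val hasSavedToken = OzLivenessSDK.isLoggedIn() // assumed helper name
OzLivenessSDK.logout()
```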
We offer different usage models for the Oz software to meet your specific needs. You can either utilize the software as a service from one of our cloud instances or integrate it into your existing infrastructure. Regardless of the usage model you choose, all Oz modules will function equally. It’s only up to you what to pick, depending on your needs.
With the SaaS model, you can access one of our clouds without having to install our software in your own infrastructure.
Choose SaaS when you want:
Faster start as you don’t need to procure or allocate hardware within your company and set up a new instance.
Zero infrastructure cost as server components are located in Oz cloud.
Lower maintenance cost as Oz maintains and upgrades server components.
No cross-border data transfer for the regions where Oz has cloud instances.
The on-premise model implies that all the Oz components required are installed within your infrastructure. Choose on-premise for:
Your data not leaving your infrastructure.
Full and detailed control over the configuration.
We also provide an opportunity of using the on-device Liveness and Face matching. This model is available in Mobile SDKs.
Consider the on-device option when:
You can’t transmit facial images to any server due to privacy concerns,
The network conditions under which you plan to use Oz products are extremely poor.
The choice is yours to make, but we're always available to provide assistance.
If you use our SDK just for capturing videos, omit this step.
To check liveness and face biometry, you need to upload media to our system and then analyze them.
To interpret the results of analyses, please refer to Types of Analyses.
Here’s an example of performing a check:
To delete media files after the checks are finished, use the clearActionVideos method.
To add metadata to a folder, use the addFolderMeta method.
In the params field of the Analysis structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.
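For instance, a sketch; the params argument shape is an assumption, while extract_best_shot is the parameter name used by Oz API.

```kotlin
// Ask the server to extract the best shot while running the Liveness check.
Analysis(
    Analysis.Type.QUALITY,
    Analysis.Mode.SERVER_BASED,
    mediaList,
    params = mapOf("extract_best_shot" to "true") // argument shape assumed
)
```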
To use a media file that is captured with another SDK (not Oz Android SDK), specify the path to it in OzAbstractMedia:
If you want to add your media to an existing folder, use the setFolderId method:
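A Kotlin sketch combining both options; the OzVideo subtype name and its constructor arguments are assumptions based on this documentation's mention of OzAbstractMedia subclasses for images and videos.

```kotlin
// Sketch only: subtype and constructor arguments are assumed.
val existingFolderId = "<folder_id>" // folder created earlier via Oz API
val externalVideo = OzAbstractMedia.OzVideo(
    OzMediaTag.Blank,    // tag for a video with no action (assumed type)
    "/path/to/video.mp4" // file captured by another SDK
)

val request = AnalysisRequest.Builder()
    .uploadMedia(externalVideo)
    .setFolderId(existingFolderId) // reuse the existing folder
    .build()
```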
If you want to return to the design of the previous versions (up to 6.4.2), reset the customization settings of the capture screen and apply the parameters listed below.
Download and install the Postman client from this page. Then download the JSON file needed:
Oz API 5.1.0 works with the same collection.
Launch the client and import Oz API collection for Postman by clicking the Import button:
Click files, locate the JSON needed, and hit Open to add it:
The collection will be imported and will appear in the Postman interface:
To start using Oz Android SDK, follow the steps below.
Embed Oz Android SDK into your project as described here.
Connect SDK to API as described here. This step is optional, as this connection is required only when you need to process data on a server. If you use the on-device mode, the data is not transferred anywhere, and no connection is needed.
Capture videos using methods described here. You'll send them for analysis afterward.
Analyze media you've taken at the previous step. The process of checking liveness and face biometry is described here.
If you want to customize the look-and-feel of Oz Android SDK, please refer to this section.
Recommended Android version: 5+ (the newer the smartphone, the faster the analyses run).
Recommended versions of components:

| Component | Version |
|---|---|
| Gradle | 7.5.1 |
| Kotlin | 1.7.21 |
| AGP | 7.3.1 |
| Java Target Level | 1.8 |
| JDK | 17 |
We do not support emulators.
Available languages: EN, ES, HY, KK, KY, TR, PT-BR.
To obtain the sample apps source code for the Oz Liveness SDK, proceed to the GitLab repository:
Follow the link below to see a list of SDK methods and properties:
Download the demo app latest build here.
To work properly, the resolution algorithms need each uploaded media file to be marked with special tags. The tags differ for videos and images. They help the algorithms identify what should be present in the photo or video and analyze the content.
The following tag types should be specified in the system for video files.
To identify the data type of the video:
video_selfie.
To identify the orientation of the video:
orientation_portrait – portrait orientation;
orientation_landscape – landscape orientation.
To identify the action on the video:
video_selfie_left – head turn to the left;
video_selfie_right – head turn to the right;
video_selfie_down – head tilt downwards;
video_selfie_high – head raise up;
video_selfie_smile – smile;
video_selfie_eyes – blink;
video_selfie_scan – scanning;
video_selfie_oneshot – a one-frame analysis;
video_selfie_blank – no action.
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with the video_* tags. Otherwise, it will be ignored by the algorithms.
Example of the correct tag set for a video file with the “blink” action:
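The original sample is not shown here; based on the tag lists above, a correct set for a portrait-orientation video with the blink action would be:

```json
["video_selfie", "video_selfie_eyes", "orientation_portrait"]
```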
The following tag types should be specified in the system for photo files:
A tag for selfies:
photo_selfie – to identify the image type as “selfie”.
Tags for photos/scans of ID cards:
photo_id – to identify the image type as “ID”;
photo_id_front – for the photo of the ID front side;
photo_id_back – for the photo of the ID back side.
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with the video_* tags. Otherwise, it will be ignored by the algorithms.
Examples of correct tag sets for a “selfie” photo and for the front and back sides of an ID card are shown below.
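The original JSON samples are not shown here; the combinations below are inferred from the tag descriptions in this article:

```json
["photo_selfie"]                   // a “selfie” photo
["photo_id", "photo_id_front"]     // the front side of an ID card
["photo_id", "photo_id_back"]      // the back side of an ID card
```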
Download and install the Postman client from this page. Then download the JSON file needed:
Launch the client and import Oz API Lite collection for Postman by clicking the Import button:
Click files, locate the JSON needed, and hit Open to add it:
The collection will be imported and will appear in the Postman interface:
Oz Mobile SDK stands for the Software Developer’s Kit of the Oz Forensics Liveness and Face Biometric System, providing seamless integration with customers’ mobile apps for login and biometric identification.
Currently, both Android and iOS SDKs work in portrait mode.
The Oz API is a comprehensive Rest API that enables facial biometrics, allowing for both face matching and liveness checks. This write-up provides an overview of the essential concepts that one should keep in mind while using the Oz API.
To ensure security, every Oz API call requires an access token in its HTTP headers. To obtain this token, execute the POST /api/authorize/auth method with the login and password provided by us. Pass this token in the X-Forensic-Access-Token header in subsequent Oz API calls.
This article provides comprehensive details on the authentication process. Kindly refer to it for further information.
Furthermore, the Oz API offers distinct user roles, ranging from CLIENT, who can perform checks and access reports but lacks administrative rights (e.g., deleting folders), to ADMIN, who enjoys nearly unrestricted access to all system objects. For additional information, please consult this guide.
The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check the aggregated result. A folder can contain an unlimited number of media, each of which can be a target of several analyses, and a single analysis can cover several media at once.
Oz API works with photos and videos. A video can be either a regular video container, e.g., MP4 or MOV, or a ZIP archive with a sequence of images. Oz API uses the file MIME type to determine whether a media file is an image, a video, or a shot set.
It is also important to determine the semantics of the content, e.g., whether an image is a photo of a document or a selfie of a person. This is achieved by using tags. The selection of tags impacts whether specific types of analyses will recognize or ignore particular media files. The most important tags are:
photo_id_front – for the front side of a photo ID;
photo_selfie – for a non-document reference photo;
video_selfie_blank – for a liveness video recorded outside of the Oz Liveness SDK.
If a media file is captured using the Oz Liveness SDK, the tags are assigned automatically.
The full list of Oz media tags with their explanation and examples can be found here.
Since video analysis may take a few seconds, the analyses are performed asynchronously. This implies that you initiate an analysis (POST /api/folders/{{folder_id}}/analyses/) and then monitor the outcome by polling until processing is complete (GET /api/analyses/{{analyse_id}} for a single analysis or GET /api/folders/{{folder_id}}/analyses/ for all of the folder's analyses). Alternatively, there is a webhook option available. To see an example of how to use both the polling and webhook options, please check this guide.
These were the key concepts of Oz API. To gain a deeper understanding of its capabilities, please refer to the Oz API section of our developer guide.
This section contains the most common cases of integrating the Oz Forensics Liveness and Face Biometry system.
The scenarios can be combined, for example, integrating liveness into both web and mobile applications, or integrating liveness with face matching.
We recommend applying these settings when starting the app.
To customize the Oz Liveness interface, use UICustomization as shown below. For the description of the customization parameters, please refer to Android SDK Methods and Properties.
By default, SDK uses the locale of the device. To switch the locale, use the code below:
Add the following URL to the build.gradle of the project:
Add this to the build.gradle of the module (VERSION is the version you need to implement; please refer to the Changelog):
Please note: this is the default version.
Please note: the resulting file will be larger.
Also, regardless of the mode chosen, add:
Oz API is a rich Rest API for facial biometry, where you can do liveness checks and face matching. Oz API features are:
Persistence: your media and analyses are stored for future reference unless you explicitly delete them,
Ability to work with videos as well as images,
Asynchronous analyses,
Authentication,
Roles and access management.
The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check for the aggregated result.
This step-by-step guide describes how to perform a liveness check with the Oz backend on a facial image or video that you already have: create a folder, upload your media to this folder, initiate the liveness check, and poll for the results.
For better accuracy and user experience, we recommend that you use our Web and/or Native SDK on your front end for face capturing. Please refer to the relevant guides:
Get the access_token from the response and add it to the X-Forensic-Access-Token header of all subsequent requests.
This step is not needed for API 5.0.0 and above.
With API 4.0.8 and below, Oz API requires a video or an archived sequence of images in order to perform a liveness check. If you want to analyze a single image, you need to transform it into a ZIP archive; Oz API will treat this archive as a video. Make sure that you use the corresponding video-related tags later on.
In the payload field, set the appropriate tags:
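A sketch of such a payload, assuming a single video file attached under the name video1; the media:tags field name and the overall shape are assumptions and may differ in your API version:

```json
{
  "media:tags": {
    "video1": ["video_selfie", "video_selfie_blank", "orientation_portrait"]
  }
}
```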
The successful response will return code 201 and the folder_id you'll need later on.
The results will be available in a short while. The method will return the analyse_id that you'll need at the next step.
For a finished analysis:
get the qualitative result from resolution (SUCCESS or DECLINED);
get the quantitative result from results_media[0].results_data.confidence_spoofing, which ranges from 0.0 (real person) to 1.0 (spoofing).
Here is the Postman collection for this guide.
With these steps completed, you are done with the Liveness check via Oz API. You can access your media and analysis results in the Web UI via a browser or programmatically via the API.
The tags listed allow the algorithms to recognize the files as suitable for the Quality (Liveness) and Biometry analyses.
photo_id_back – for the photo of the ID back side (ignored by other analyses).
Before you begin, make sure you have Oz API credentials. When using the SaaS API, you receive them from us:
For the on-premise Oz API, you need to create a user yourself or ask the team that manages your API; see the corresponding instructions. Choose the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.
You can explore all API methods with the Postman collection.
For security reasons, we recommend obtaining an access token instead of using the credentials directly: call POST /api/authorize/auth with your login and password in the request body.
To create a folder and upload your media into it, call the POST /api/folders/ method, adding the media you need to the body of the request.
To launch the analysis, call POST /api/folders/{{folder_id}}/analyses/ with the folder_id from the previous step. In the request body, specify the liveness check to be launched.
Repeat GET /api/analyses/{{analyse_id}} with the analyse_id from the previous step once a second until the state changes from PROCESSING to something else. For a finished analysis, check the resolution and the quantitative results as described above.
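A compact Kotlin sketch of this polling loop, using only the endpoint and header named in this guide; the naive regex-based state extraction is for brevity (use a real JSON parser in production), and host, token, and analyseId are placeholders:

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// Polls GET /api/analyses/{analyse_id} once a second until the state
// is no longer PROCESSING, then returns the raw JSON response.
fun pollAnalysis(host: String, token: String, analyseId: String): String {
    val client = HttpClient.newHttpClient()
    val request = HttpRequest.newBuilder(URI.create("$host/api/analyses/$analyseId"))
        .header("X-Forensic-Access-Token", token)
        .GET()
        .build()
    while (true) {
        val body = client.send(request, HttpResponse.BodyHandlers.ofString()).body()
        val state = Regex("\"state\"\\s*:\\s*\"(\\w+)\"").find(body)?.groupValues?.get(1)
        if (state != null && state != "PROCESSING") return body
        Thread.sleep(1000) // poll once a second, as recommended above
    }
}
```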
Oz API methods can be combined with great flexibility. Explore Oz API using the Postman collection.
Login: j.doe@yourcompany.com
Password: …
API: https://sandbox.ohio.ozforensics.com
Web Console: https://sandbox.ohio.ozforensics.com
You can generate a trial license here or contact us by email to get a production license. To create the license, your bundle id is required. After you get a license file, there are two ways to add the license to your project.
Rename this file to forensics.license and put it into the project. In this case, you don't need to set the path to the license.
During the runtime: when initializing SDK, use the following method.
or
LicenseSource is the source of the license, and LicenseData is the information about your license. Please note: this method checks whether you already have an active license; if you do, it won't be replaced with a new one. To force the license replacement, use the setLicense method.
In case of any license errors, the system will use your error handling code as shown above. Otherwise, the system will return information about the license. To check the license data manually, use OZSDK.licenseData.
| Error | Description |
|---|---|
| License error. License at (your_URI) not found | The license file is missing. Please check its name and the path to the file. |
| License error. Cannot parse license from (your_URI), invalid format | The license file is damaged. Please email us the file. |
| License error. Bundle company.application.id is not in the list allowed by license (bundle.id1, bundle.id2) | The bundle (application) identifier you specified is missing from the allowed list. Check the spelling; if it is correct, you need to get another license for your application. |
| License error. Current date yyyy-mm-dd hh:mm:ss is later than license expiration date yyyy-mm-dd hh:mm:ss | Your license has expired. Please contact us. |
| License is not initialized | You haven't initialized the license. Please add the license to your project as described above. |
A master license is an offline license that allows using the Mobile SDKs with any bundle_id, unlike regular licenses. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that.
Your application signs its bundle_id with the private key, and the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.
This section describes the process of creating your private and public keys.
To create a private key, run the commands below one by one.
You will get these files:
privateKey.der is a private key in the .der format;
privateKey.txt is privateKey.der encoded in base64. Its content will be used as the host app bundle_id signature.
File examples:
The OpenSSL command specification: https://www.openssl.org/docs/man1.1.1/man1/openssl-pkcs8.html
To create a public key, run this command.
You will get the public key file: publicKey.pub. To get a license, please email us this file. We will email you the license.
File example:
SDK initialization:
License setting:
Prior to the SDK initialization, create a base64-encoded signature for the host app bundle_id using the private key.
Signature creation example:
Pass the signature as the masterLicenseSignature parameter either during the SDK initialization or when setting the license.
If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_id values included in the license, as it does by default without a master license.
Please note: this feature has been implemented in 8.1.0.
To add or update the language pack for Oz Android SDK, please follow these instructions:
The localization record consists of the localization key and its string value, e.g., <string name="about">"About"</string>.
Go to the folder for the locale needed, or create a new folder. Proceed to this guide for the details.
Create the file called strings.xml.
Copy the strings from the attached file to your freshly created file.
Redefine the strings you need in the appropriate localization records.
A list of keys for Android:
The keys action_*_go refer to the appropriate gestures. Others refer to the hints for any gesture, info messages, or errors.
When new keys appear with new versions, if no translation is provided in your file, the new strings are shown in English.
A master license is an offline license that allows using the Mobile SDKs with any bundle_id, unlike regular licenses. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that.
Your application signs its bundle_id with the private key, and the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.
This section describes the process of creating your private and public keys.
To create a private key, run the commands below one by one.
You will get these files:
privateKey.der is a private key in the .der format;
privateKey.txt is privateKey.der encoded in base64. Its content will be used as the host app bundle_id signature.
File examples:
The OpenSSL command specification: https://www.openssl.org/docs/man1.1.1/man1/openssl-pkcs8.html
To create a public key, run this command.
You will get the public key file: publicKey.pub. To get a license, please email us this file. We will email you the license.
File example:
SDK initialization:
For Android 6.0 (API level 23) and older:
Add the implementation 'com.madgag.spongycastle:prov:1.58.0.0' dependency;
Before creating a signature, call Security.insertProviderAt(org.spongycastle.jce.provider.BouncyCastleProvider(), 1).
Prior to the SDK initialization, create a base64-encoded signature for the host app bundle_id using the private key.
Signature creation example:
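The original example is not shown here; below is a runnable Kotlin sketch of signing the host app bundle_id with the private key created above. The signature algorithm (SHA256withRSA) is an assumption; use the one agreed upon with Oz Forensics.

```kotlin
import java.security.KeyFactory
import java.security.Signature
import java.security.spec.PKCS8EncodedKeySpec
import java.util.Base64

// Signs the bundle_id with the RSA private key (the content of privateKey.der)
// and returns the base64-encoded signature to pass as masterLicenseSignature.
fun signBundleId(bundleId: String, privateKeyDer: ByteArray): String {
    val privateKey = KeyFactory.getInstance("RSA")
        .generatePrivate(PKCS8EncodedKeySpec(privateKeyDer))
    val signature = Signature.getInstance("SHA256withRSA").apply { // assumed algorithm
        initSign(privateKey)
        update(bundleId.toByteArray(Charsets.UTF_8))
    }
    return Base64.getEncoder().encodeToString(signature.sign())
}
```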
Pass the signature as the masterLicenseSignature parameter during the SDK initialization.
If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_id values included in the license, as it does by default without a master license.
The Liveness detection algorithm is intended to detect whether a media file shows a real living person.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
For API 4.0.8 and below, please note: the Liveness analysis works with videos and shot sets; images are ignored. If you want to analyze an image, upload it as a shot set (archive) with a single image and mark it with the video_selfie_blank tag.
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need the analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step;
GET api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat the check until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.
For the Liveness analysis, check the confidence_spoofing value related to the video you need. It indicates the likelihood that the person is not a real one.
If you want to return to the design of the previous versions (up to 6.4.2), reset the customization settings of the capture screen and apply the parameters listed below.
To connect SDK to Oz API, specify the API URL and access token as shown below.
Please note: in your host application, it is recommended to set the API address on the screen that precedes the liveness check. Setting the API URL initiates a service call to the API, which may cause excessive server load when done at application initialization or startup.
Alternatively, you can use the login and password provided by your Oz Forensics account manager:
By default, logs are saved along with the analyses' data. If you need to keep the logs separate from the analysis data, set up a separate connection for telemetry as shown below:
Create a controller that will capture videos as follows:
action – a list of the user's actions while capturing the video.
Once the video is captured, the system calls the onOZLivenessResult method:
The method returns the results of video capturing: the [OZMedia] objects. The system uses these objects to perform checks.
If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.
If a user closes the capturing screen manually, the failedBecauseUserCancelled error appears.
To customize the Oz Liveness interface, use OZCustomization as shown below. For the description of the customization parameters, please refer to iOS SDK Methods and Properties.
Please note: the customization methods should be called before the video capturing ones.
If you use our SDK just for capturing videos, omit this step.
To check liveness and face biometry, you need to upload media to our system and then analyze them.
To interpret the results of analyses, please refer to Types of Analyses.
Below, you'll see the example of performing a check and its description.
To delete media files after the checks are finished, use the cleanTempDirectory method.
To add metadata to a folder, use AnalysisRequest.addFolderMeta.
In the params field of the Analysis structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.
To use a media file that is captured with another SDK (not Oz iOS SDK), specify the path to it in the OzMedia structure (the bestShotURL property):
If you want to add your media to an existing folder, use the addFolderId method:
To integrate OZLivenessSDK into an Xcode project via the CocoaPods dependency manager, add the following code to Podfile:
Version is optional as, by default, the newest version is integrated. However, if necessary, you can find the older version number in Changelog.
Since 8.1.0, you can also use a simpler code:
By default, the full version is being installed. It contains both server-based and on-device analysis modes. To install the server-based version only, use the following code:
For 8.1.0 and higher:
Please note: installation via SPM is available for versions 8.7.0 and above.
Add the following package dependencies via SPM: https://gitlab.com/oz-forensics/oz-mobile-ios-sdk (if you need a guide on adding the package dependencies, please refer to the Apple documentation). OzLivenessSDK is mandatory. If you don't need the on-device analyses, skip the OzLivenessSDKOnDevice file.
You can also add the necessary frameworks to your project manually.
Download the SDK files from here and add them to your project.
OZLivenessSDK.xcframework,
OZLivenessSDKResources.bundle,
OZLivenessSDKOnDeviceResources.bundle (if you don't need the on-device analyses, skip this file).
Download the TensorFlow framework 2.11 from here.
Make sure that:
both xcframeworks are in Target-Build Phases -> Link Binary With Libraries and Target-General -> Frameworks, Libraries, and Embedded Content;
the bundle file(s) are in Target-Build Phases -> Copy Bundle Resources.
In this section, there's a guide for the integration of the on-device liveness check.
To start using Oz iOS SDK, follow the steps below.
Embed Oz iOS SDK into your project as described here.
Connect SDK to API as described here. This step is optional, as this connection is required only when you need to process data on a server. If you use the on-device mode, the data is not transferred anywhere, and no connection is needed.
Capture videos by creating the controller as described here. You'll send them for analysis afterwards.
Upload and analyze media you've taken at the previous step. The process of checking liveness and face biometry is described here.
If you want to customize the look-and-feel of Oz iOS SDK, please refer to this section.
Minimum iOS version: 11.
Minimum Xcode version: 15 for SDK versions 8.10.0 and newer.
Available languages: EN, ES, HY, KK, KY, TR, PT-BR.
A sample app source code using the Oz Liveness SDK is located in the GitLab repository:
Follow the link below to see a list of SDK methods and properties:
Download the demo app latest build here.
In this section, we listed the guides for the face matching checks.
This article describes how to get the analysis scores.
When you perform an analysis, the result you get is a number. For biometry, it reflects the chance that the two or more people represented in your media are the same person. For liveness, it shows the chance of a deepfake or spoofing attack, i.e., that the person in the uploaded media is not a real one. You can get these numbers via API from a JSON response.
Make a request to the folder or the analysis endpoint to get a JSON response. Set the with_analyses parameter to True.
For the Biometry analysis, check the response for the min_confidence value:
This value is the quantitative result of matching the people on the media uploaded.
For the Liveness analysis, seek the confidence_spoofing value related to the video you need:
This value is the chance that the person is not a real one.
To process a bunch of analysis results, you can parse the appropriate JSON response.
In this section, you'll learn how to perform analyses and where to get the numeric results.
Liveness checks that the person in a video is a real living person.
Biometry compares two or more faces from different media files and shows whether the faces belong to the same person or not.
Best shot is an addition to the Liveness check. The system chooses the best frame from a video and saves it as a picture for later use.
Blacklist checks whether a face in a photo or a video matches one of the faces in the pre-created database.
The Quantitative Results section explains where and how to find the numeric results of analyses.
To learn what each analysis means, please refer to Types of Analyses.
The description of the objects you can find in Oz Forensics system.
System objects in Oz Forensics products are hierarchically structured as shown in the picture below.
On the top level, there is a Company. You can use one copy of Oz API to work with several companies.
Besides these parameters, each object type has specific ones.
Please note: this feature has been implemented in 8.1.0.
To add or update the language pack for Oz iOS SDK, use the set(languageBundle: Bundle) method. It tells the SDK that you are going to use a non-standard bundle. In OzLocalizationCode, use the custom language (optional).
The localization record consists of the localization key and its string value, e.g., "about" = "About".
If you don’t set the custom language and bundle, the SDK uses the pre-installed languages only.
If the custom bundle is set (and the language is not), it has priority when checking translations, i.e., the SDK checks for the localization record in the custom bundle localization file. If the key is not found in the custom bundle, the standard bundle text for this key is used.
If both custom bundle and language are set, SDK retrieves all the translations from the custom bundle localization file.
A list of keys for iOS:
The keys Action.*.Task refer to the appropriate gestures. Others refer to the hints for any gesture, info messages, or errors.
When new keys appear with new versions, if no translation is provided by your custom bundle localization file, you’ll see the default (English) text.
Android SDK changes
Security updates.
You can now disable the video validation that was implemented to avoid recording extremely short videos (3 frames or less): switch the option off using the corresponding setting.
Fixed the bug with green videos on some smartphone models.
Security updates.
Fixed bugs that could have caused crashes on some phone models.
Changed the wording for the head_down gesture: the new wording is “tilt down”.
Added proper focus order for TalkBack when the antiscam hint is enabled.
Added the public setting extract_action_shot in the Demo Application.
Fixed bugs.
Security updates.
Fixed the bug when the recorded videos might appear green.
Resolved codec issues on some smartphone models.
Accessibility updates according to WCAG requirements: the SDK hints and UI controls can be voiced.
Improved user experience with head movement gestures.
Moved the large video compression step to the Liveness screen closure.
Fixed the bug when the best shot frame could contain an image with closed eyes.
Minor bug fixes and telemetry updates.
Security and telemetry updates.
Security updates.
Security updates.
Security and telemetry updates.
Fixed the RuntimeException error with the server-based Liveness that appeared on some devices.
Security updates.
Security updates.
Bug fixes.
Updated the Android Gradle plugin version to 8.0.0.
Internal SDK improvements.
Internal SDK improvements.
Security updates.
Security updates.
Security updates.
Security updates.
Added a description for the error that occurs when providing an empty string as an ID in the setFolderID method.
Fixed a bug causing an endless spinner to appear if the user switches to another application during the Liveness check.
Fixed some smartphone-model-specific bugs.
Upgraded the on-device Liveness model.
Security updates.
Removed the pause after the Scan gesture.
If the recorded video is larger than 10 MB, it gets compressed.
Security and logging updates.
Changed the master license validation algorithm.
Downgraded the required compileSdkVersion from 34 to 33.
Security updates.
Updated the on-device Liveness model.
Fixed some bugs.
Internal licensing improvements.
Internal SDK improvements.
Bug fixes.
Implemented the possibility of using a master license that works with any bundle_id.
Video compression failure on some phone models is now fixed.
Bug fixes.
The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is sent to the server in the case of the hybrid analysis once the on-device analysis finishes successfully.
The messages for the errors that are retrieved from the API are now detailed.
Updated the Liveness on-device model.
Added the Portuguese (Brazilian) locale.
If a media hasn't been uploaded correctly, the system repeats the upload.
Created a new method to retrieve the telemetry (logging) identifier: getEventSessionId.
The login and auth methods are now deprecated. Use the setAPIConnection method instead.
OzConfig.baseURL and OzConfig.permanentAccessToken are now deprecated.
If a user closes the screen during video capture, the appropriate error is now being handled by SDK.
Fixed some bugs and improved the SDK work.
Fixed errors.
The SDK now works properly with baseURL set to null.
The dependencies' versions have been brought into line with Kotlin version.
Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Kotlin version requirements lowered to 1.7.21.
Improved the on-device models.
For some phone models, fixed the fatal device error.
The hint text width can now exceed the frame width (when using the main camera).
Photos taken during the One Shot analysis are now being sent to the server in the original size.
Removed the OzAnalysisResult class. The onSuccess method of AnalysisRequest.run now uses the RequestResult structure instead of List<OzAnalysisResult>.
All exceptions are moved to the com.ozforensics.liveness.sdk.core.exceptions package (see changes below).
Classes related to AnalysisRequest are moved to the com.ozforensics.liveness.sdk.analysis package (see changes below).
The methods below are no longer supported:
Restructured the settings screen.
Added the center hint background customization.
Added new face frame forms (Circle, Square).
The OzLivenessSDK::init method no longer crashes if a StatusListener parameter is passed.
Changed the scan gesture animation.
Please note: for this version, we updated Kotlin to 1.8.20.
Improved the SDK algorithms.
Updated the model for the on-device analyses.
Fixed the animation for sunglasses/mask.
The oval size for Liveness is now smaller.
Fixed the error with the server-based analyses when using permanentAccessToken for authorization.
You can now hide the status bar and system buttons (works with 7.0.0 and higher).
OzLivenessSDK.init now requires context as the first parameter.
OzAnalysisResult now shows the server-based analyses' scores properly.
Fixed initialization issues, displaying of wrong customization settings, authorization failures on Android <7.1.1.
Fixed crashes for Android v.6 and below.
Fixed oval positioning for some phone models.
Internal fixes and improvements.
Updated security.
Implemented some internal improvements.
The addMedia method is now deprecated; please use uploadMedia for uploading.
Changed the way of sharing dependencies. Due to security issues, we now share two types of libraries as shown below: sdk provides server analysis only, full provides both server and on-device analyses:
UICustomization has been implemented instead of OzCustomization.
Added the Spanish locale.
Fixed the bug with freezes that had appeared on some phone models.
SDK now captures videos in 720p.
Synchronized the names of the analysis modes with iOS: SERVER_BASED and ON_DEVICE.
Fixed the bug with displaying of localization settings.
Now you can use Fragment as Liveness screen.
Added a new field to the Analysis structure. The params field is for any additional parameters, for instance, if you need to set extracting the best shot on the server to true. The best shot algorithm chooses the highest-quality frame from a video.
The Zoom in and Zoom out gestures are no longer supported.
Updated the biometry model.
Added a new simplified API – AnalysisRequest. With it, it’s easier to create a request for the media and analysis you need.
Published the on-device module for on-device liveness and biometry analyses. To add this module to your project, use:
To launch these analyses, use the runOnDeviceBiometryAnalysis and runOnDeviceLivenessAnalysis methods from the OzLivenessSDK class:
Liveness now goes smoother.
Fixed freezes on Xiaomi devices.
Optimized image converting.
New metadata parameter for OzLivenessSDK.uploadMedia and the new OzLivenessSDK.uploadMediaAndAnalyze method to pass this parameter to folders.
Added functions for SDK initialization with license sources: LicenseSource.LicenseAssetId and LicenseSource.LicenseFilePath. Use the OzLivenessSDK.init method to start initialization.
You can now get the license info upon initialization: val licensePayload = OzLivenessSDK.getLicensePayload().
.
Added the Kyrgyz locale.
Added local analysis functions.
You can now configure the face frame.
Fixed version number at the Liveness screen.
Added the main camera support.
Added configuration from license support.
Added the OneShot gesture.
Added new states for OzAnalysisResult.Resolution.
Added the uploadMediaAndAnalyze method to upload a bunch of media to the server at once and send them for analysis immediately.
OzMedia is renamed to OzAbstractMedia and got subclasses for images and videos.
Fixed camera bugs for some devices.
The access token now updates automatically.
Renamed accessToken to permanentAccessToken.
Added R8 rules.
Configuration became easier: config settings are mutable.
Fixed the oval frame.
Removed the unusable parameters from AnalyseRequest.
Removed default attempt limits.
To customize the configuration options, the config property is added instead of baseURL, accessToken, etc. Use OzConfig.Builder for initialization.
Added license support. Licenses should be installed as raw resources. To pass them to OzConfig, use setLicenseResourceId.
Replaced the context-dependent methods with analogs.
Improved the image analysis.
Removed unusable dependencies.
Fixed logging.
Please find the Flutter repository here.
Add the lines below to the pubspec.yaml of the project you want to add the plugin to.
Add the license file (e.g., license.json or forensics.license) to the Flutter application/assets folder. In pubspec.yaml, specify the Flutter asset:
For Android, add the Oz repository to /android/build.gradle, allprojects → repositories section:
For Flutter 8.24.0 and above or Android Gradle plugin 8.0.0 and above, add to android/gradle.properties:
The minimum SDK version should be 21 or higher:
For iOS, set the minimum platform to 13 or higher in the Runner → Info → Deployment target → iOS Deployment Target.
In ios/Podfile, comment out the use_frameworks! line (#use_frameworks!).
Initialize the SDK by calling the init plugin method. Note that the license file name and path should match the ones specified in pubspec.yaml (e.g., assets/license.json).
Use the API credentials (login, password, and API URL) that you’ve received from us.
In production, instead of hard-coding the login and password inside the application, it is recommended to get the access token on your backend via the API auth method, then pass it to your application:
or
To start recording and obtain the recorded media, use the startLiveness method:
Please note: for versions 8.11 and below, the method name is executeLiveness, and it returns the recorded media.
To obtain the media result, subscribe to livenessResult as shown below:
To run the analyses, execute the code below.
Create the Analysis object:
Execute the formed analysis:
The analysisResult list of objects contains the results of the analyses.
If you want to use media captured by another SDK, the code should look like this:
The whole code block will look like this:
The next level is a User. A company can contain any number of users. There are several user roles with different permissions. For more information, refer to the user roles description.
When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders, and each folder can contain any number of media. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed here.
The length of the Selfie gesture has changed (this affects the video file size).
You can use your own logo instead of the Oz logo if your license allows it.
If multiple analyses are applied to the folder simultaneously, the system sends them as a group. This means that the “worst” of the results, not the latest, is taken as the resolution. Please refer to this section for details.
For the Liveness analysis, the system now treats the highest score as the quantitative result. The Liveness analysis output is described here.
You can now add a custom language pack or update an existing one. The instructions can be found here.
Added the antiscam widget and its customization. This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes. The purpose is to safeguard against scammers who may attempt to deceive a person into approving a fraudulent transaction.
Added new customization options.
Implemented a range of customization options and switched to the new design. To restore the previous settings, please refer to this section.
By default, logs are saved along with the analyses' data. If you need to keep the logs separate from the analysis data, set up a separate connection for telemetry as shown below:
| Parameter | Type | Description |
|---|---|---|
| time_created | Timestamp | Object (except user and company) creation time |
| time_updated | Timestamp | Object (except user and company) update time |
| meta_data | Json | Any user parameters |
| technical_meta_data | Json | Module-required parameters; reserved for internal needs |
| Parameter | Type | Description |
|---|---|---|
| company_id | UUID | Company ID within the system |
| name | String | Company name within the system |
| Parameter | Type | Description |
|---|---|---|
| user_id | UUID | User ID within the system |
| user_type | String | |
| first_name | String | Name |
| last_name | String | Surname |
| middle_name | String | Middle name |
| email | String | User email = login |
| password | String | User password (only required for new users or to change) |
| can_start_analyze_* | String | Depends on user roles |
| company_id | UUID | Current user company's ID within the system |
| is_admin | Boolean | Whether this user is an admin or not |
| is_service | Boolean | Whether this user account is a service one or not |
| Parameter | Type | Description |
|---|---|---|
| folder_id | UUID | Folder ID within the system |
| resolution_status | ResolutionStatus | The latest analysis status |
| Parameter | Type | Description |
|---|---|---|
| media_id | UUID | Media ID |
| original_name | String | Original filename (how the file was called on the client machine) |
| original_url | Url | HTTP link to this file on the API server |
| tags | Array(String) | List of tags for this file |
| Parameter | Type | Description |
|---|---|---|
| analyse_id | UUID | ID of the analysis |
| folder_id | UUID | ID of the folder |
| type | String | Analysis type (BIOMETRY / QUALITY / DOCUMENTS) |
| results_data | JSON | Results of the analysis |
| Removed method | Replacement |
|---|---|
| OzLivenessSDK.uploadMediaAndAnalyze | AnalysisRequest.run |
| OzLivenessSDK.uploadMedia | AnalysisRequest.Builder.uploadMedia |
| OzLivenessSDK.runOnDeviceBiometryAnalysis | AnalysisRequest.run |
| OzLivenessSDK.runOnDeviceLivenessAnalysis | AnalysisRequest.run |
| AnalysisRequest.build(): AnalysisRequest | - |
| AnalysisRequest.Builder.addMedia | AnalysisRequest.Builder.uploadMedia |
| Parameter | Type | Description |
|---|---|---|
| actions | List<VerificationAction> | Actions from the captured video |
| use_main_camera | Boolean | If True, uses the main camera, otherwise the front one |
Metadata is any optional data you might need to attach to a system object. In the meta_data section, you can include any information you want, simply by providing any number of fields with their values:
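For example (the field names here are arbitrary and shown for illustration only):

```json
"meta_data": {
  "client_id": "4e5ffa14",
  "channel": "mobile-app"
}
```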
Metadata is available for most Oz system objects. Here is the list of these objects with the API methods required to add metadata. Please note: you can also add metadata to these objects during their creation.
| Object | API method |
|---|---|
| User | PATCH /api/users/{{user_id}} |
| Folder | PATCH /api/folders/{{folder_id}}/meta_data/ |
| Media | PATCH /api/media/{{media_id}}/meta_data |
| Analysis | PATCH /api/analyses/{{analyse_id}}/meta_data |
| Collection | PATCH /api/collections/{{collection_id}}/meta_data/ and, for a person in a collection, PATCH /api/collections/{{collection_id}}/persons/{{person_id}}/meta_data |
You can also change or delete metadata. Please refer to our API documentation.
You may want to use metadata to group folders by a person or lead. For example, if you want to calculate conversion when a single lead makes several Liveness attempts, just add the person/lead identifier to the folder metadata.
Here is how to add the client ID iin to a folder object. In the request body, add:
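A minimal sketch of such a request body (the iin value is a placeholder):

```json
{
  "meta_data": {
    "iin": "123456789012"
  }
}
```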
You can pass an ID of a person in this field, and use this ID to combine requests with the same person and count unique persons (same ID = same person, different IDs = different persons). This ID can be a phone number, an IIN, an SSN, or any other kind of unique ID. The ID will be displayed in the report as an additional column.
Another case is security: when you need to process the analyses' results from your back end but don't want to do it using the folder ID, add an ID (transaction_id) to this folder and use it to search for the required information. This case is thoroughly explained here.
If you store PII in metadata, make sure it complies with the relevant regulatory requirements.
You can also add metadata via SDK to process the information later using API methods. Please refer to the corresponding SDK sections:
API Lite (FaceVer) changes
Fixed the bug with the time_created and folder_id parameters of the Detect method that sometimes might have been generated incorrectly.
Security updates.
Updated models.
The file size for the detect Liveness method is now capped at 15 MB, with a maximum of 10 files per request.
Updated the gesture list for the best_shot analysis: it now supports head turns (left and right), tilts (up and down), smiling, and blinking.
Introduced the new Liveness detect method that can process videos and archives as well.
Added the version check method.
API Lite now accepts base64.
Improved the biometric model.
Added the 1:N mode.
Added the CORS policy.
Published the documentation.
Improved error messages – made them more detailed.
Simplified the Liveness/Detect methods.
Reworked and improved the core.
Added anti-spoofing algorithms.
Added the extract_and_compare method.
This article contains the full description of folders' and analyses' statuses in API.
| Status | Analysis state | Analysis result | Folder resolution_status | Folder result |
|---|---|---|---|---|
| INITIAL | - | - | starting state | starting state |
| PROCESSING | starting state | starting state | analyses in progress | analyses in progress |
| FAILED | system error | system error | system error | system error |
| FINISHED | finished successfully | - | finished successfully | - |
| DECLINED | - | check failed | - | check failed |
| OPERATOR_REQUIRED | - | additional check is needed | - | additional check is needed |
| SUCCESS | - | check succeeded | - | check succeeded |
The details on each status are below.
This is the state of the analysis while it is being processed. The possible values are:
PROCESSING – the analysis is in progress;
FAILED – the analysis failed due to some error and couldn't get finished;
FINISHED – the analysis is finished, and you can check the result.
Once the analysis is finished, you'll see one of the following results:
SUCCESS – everything went fine, the check succeeded (e.g., faces match or liveness is confirmed);
OPERATOR_REQUIRED (except the Liveness analysis) – the result should be additionally checked by a human operator; this status appears only if it is set up in the biometry settings;
DECLINED – the check failed (e.g., faces don't match or a spoofing attack was detected).
If the analysis hasn't finished yet, the result inherits a value from analyse.state: PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).
A folder is an entity that contains media to analyze. If the analyses have not finished yet, the stage of processing is shown in resolution_status:
INITIAL – no analyses applied;
PROCESSING – analyses are in progress;
FAILED – one of the analyses failed due to some error and couldn't get finished;
FINISHED – the media in this folder are processed, and the analyses are finished.
The folder result is the consolidated result of all analyses applied to media from this folder. Please note: the folder result is the result of the last-finished group of analyses. If all analyses are finished, the result will be:
SUCCESS – everything went fine, all analyses completed successfully;
OPERATOR_REQUIRED (except the Liveness analysis) – there are no analyses with the DECLINED status, but one or more analyses have been completed with the OPERATOR_REQUIRED status;
DECLINED – one or more analyses have been completed with the DECLINED status.
The analyses you send in a single POST request form a group. The group result is the "worst" result of the analyses this group contains: INITIAL > PROCESSING > FAILED > DECLINED > OPERATOR_REQUIRED > SUCCESS, where SUCCESS means all analyses in the group have been completed successfully without any errors.
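For example, if a group contains two analyses and one finishes with SUCCESS while the other finishes with DECLINED, the group result is DECLINED, since DECLINED is the “worse” of the two under the ordering above.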
In this section, we explain how to use Oz Flutter SDK for iOS and Android.
Before you start, it is recommended that you install:
Flutter 3.0.0 or higher;
Android SDK 21 or higher;
Dart 2.18.6 or higher;
iOS platform 13 or higher;
Xcode.
Please find the Flutter repository here.
API changes
Improved the resource efficiency of server-based biometry analysis.
API can now extract action shots from videos of a person performing gestures. This is done to comply with the new Kazakhstan regulatory requirements for biometric identification. Dependencies with other system components are specified here.
Created a new report template that also complies with the requirements mentioned above.
If action shots are enabled, the thumbnails for the report are generated from them.
Updated the Postman collection. Please see the new collection here and at https://apidoc.ozforensics.com/.
Added a new method to check the timezone settings: GET {{host}}/api/config.
Added parameters to the GET {{host}}/api/event_sessions method: time_created, time_created.min, time_created.max, time_updated, time_updated.min, time_updated.max, session_id, session_id.exclude, sorting, offset, limit, total_omit.
If you create a folder using SHOT_SET, the corresponding video will be in media.video_url.
Fixed the bug with CLIENT ADMIN being unable to change passwords for users from their company.
Security updates.
Face Identification 1:N is now live, significantly increasing the data processing capacity of the Oz API to find matches. Even huge face databases (containing millions of photos and more) are no longer an issue.
The Liveness (QUALITY) analysis now ignores photos tagged with photo_id, photo_id_front, or photo_id_back, preventing these photos from causing the tag-related analysis error.
Security updates.
You can now apply the Liveness (QUALITY) analysis to a single image.
Fixed the bug where the Liveness analysis could finish with the SUCCESS result with no media uploaded.
The default value for the extract_best_shot parameter is now True.
RAR archives are no longer supported.
By default, analyses.results_media.results_data now contains the confidence_spoofing parameter. However, if you need all three parameters for backward compatibility, it is possible to change the response back to three parameters: confidence_replay, confidence_liveness, and confidence_spoofing.
Updated the default PDF report template.
The name of the PDF report now contains the folder_id.
Security updates.
Set the autorotation of logs.
Added the CLI command for user deletion.
You can now switch off the video preview generation.
The ADMIN access token is now valid for 5 years.
Added the folder identifier folder_id to the report name.
Fixed bugs and optimized the API work.
For the sliced video, the system now deletes the unnecessary frames.
Added new methods: GET and POST at media/<media_id>/snapshot/.
Replaced the default report template.
The shot set preview now keeps images’ aspect ratio.
ADMIN and OPERATOR now receive system_company as the company they belong to.
Added the company_id attribute to User, Folder, Analyse, and Media.
Added the Analysis group_id attribute.
Added the system_resolution attribute to Folder and Analysis.
The analysis resolution_status now returns the system_resolution value.
Removed the PATCH method for collections.
Added the resolution_status filter to Folder Analyses [LIST] and the analyse.resolution_status filter to Folder [LIST].
Added the audit log for Folder, User, Company.
Improved the company deletion algorithm.
Reforged the blacklist processing logic.
Fixed a few bugs.
The Photo Expert and KYC modules are now removed.
The endpoint for the user password change is now POST users/user_id/change-password instead of PATCH.
Provided log for the Celery app.
Added filters to the Folder [LIST] request parameters: analyse.time_created, analyse.results_data for the Documents analysis, results_data for the Biometry analysis, and results_media_results_data for the QUALITY analysis. To enable the filters, set the with_results_media_filter query parameter to True.
Added a new attribute for users – is_active (True by default). If is_active == False, any user operation is blocked.
Added a new exception code (1401 with status code 401) for the actions of the blocked users.
Added shots sets preview.
You can now save a shots set archive to disk (with the original_local_path and original_url attributes).
A new original_info attribute is added to store the md5, size, and mime type of a shots set.
Fixed ReportInfo for shots sets.
Added health check at GET api/healthcheck.
Fixed the shots set thumbnail URL.
Now, the first frame of a shots set becomes this shots set's thumbnail URL.
Modified the retry policy – the default maximum number of analysis attempts is increased to 3, and a jitter configuration is introduced.
Changed the callback algorithm.
Refactored and documented the command line tools.
Refactored modules.
Changed the delete personal information endpoint from delete_pi to /pi and the method from POST to DELETE, respectively.
Improved the delete personal information algorithm.
It is now forbidden to add media to cleaned folders.
Changed the authorize/restore endpoint name from auth to auth_restore.
Added a new tag – video_selfie_oneshot.
Added the password validation setting (OZ_PASSWORD_POLICY).
Added auth, rest_unauthorized, and rps_with_token throttling (use OZ_THROTTLING_RATES in configuration; off by default).
User permissions are now used to access static files (OZ_USE_PERMISSIONS_FOR_STATIC in configuration, false by default).
Added a new folder endpoint – /delete_pi. It clears all personal information from a folder and the analyses related to this folder.
Fixed a bug with no error while trying to synchronize empty collections.
If persons are uploaded, the analyse collection TFSS request is sent.
Added the fields_to_check parameter to the document analysis (by default, all fields are checked).
Added the double_page_spread parameter to the document analysis (True by default).
Fixed collection synchronization.
The authorization token can now be refreshed by expire_token.
Added support for application/x-gzip.
Renamed shots_set.images to shots_set.frames.
Added user sessions API.
Users can now change a folder owner (limited by permissions).
Changed dependencies rules.
Changed the access_token prolongation policy to fix the bug of prolongation happening before the expiration permission check.
Moved oz_collection_binding (the collection synchronization functionality) to oz_core.
Simplified the shots sets functionality. One archive keeps one shot set.
Improved the document sides recognition for the docker version.
Moved the orientation tag check to liveness at quality analysis.
Added a default report template for Admin and Operator.
Updated the biometric model.
A new ShotsSet object is not created if there are no photos for it.
Updated the data exchange format for the documents' recognition module.
You can’t delete a Collection if there are associated analyses with Collection Persons.
Added time marks to the analysis: time_task_send_to_broker, time_task_received, and time_task_finished.
Added a new authorization engine. You can now connect with Active Directory by LDAP (settings configuration required).
A new type of media in Folders – "shots_set".
You can’t delete a CollectionPerson if there are analyses associated with it.
Renamed the folder field resolution_suggest to operator_status.
Added a folder text field operator_comment.
The folder fields operator_status and operator_comment can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.
Only Admin and Client Admin can delete folder, folder media, report template, report template attachments, reports, and analyses (within their company).
Fixed a deletion error: when a report author is deleted, their reports get deleted as well.
Client can now view only their own profile.
Client Operator can now edit only their profile.
Client can't delete own folders, media, reports, or analyses anymore.
Client Service can now create Collection Person and read reports within their company.
Client, Client Admin, Client Operator have read access to users profiles only in their company.
A/B testing is now available.
Added support for expiration date header.
Added document recognition module Standalone/Dockered binding support.
Added a new role of Client Operator (like Client Admin without permissions for company and account management).
Client Admin and Client Operator can change the analysis status.
From now on, only Admin and Client Admin (for their company) can perform create, update, and delete operations on the Collection and CollectionPerson models.
Added a check for user permissions to report template when creating a folder report.
Collection creation now returns status code 201 instead of 200.
Each new API user obtains a role that defines access restrictions for direct API connections.
Every role is combined with the is_admin and is_service flags, which imply additional restrictions.
is_service is a flag that marks the user account as a service account for automatic connection purposes. Authentication of such a user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, by default, the lifetime of a token is extended with each request (parameterized).
ADMIN is a system administrator. Has unlimited access to all system objects, but can't change the analyses' statuses;
OPERATOR is a system operator. Can view all system objects and choose the analysis result via the Make Decision button (usually needed if the status is OPERATOR_REQUIRED);
CLIENT is a regular consumer account. Can upload media files, process analyses, view results in personal folders, and generate reports for analyses.
is_admin – if set, the user obtains access to other users' data within this admin's company;
can_start_analyse_biometry – an additional flag to allow access to BIOMETRY analyses (enabled by default);
can_start_analyse_quality – an additional flag to allow access to LIVENESS (QUALITY) analyses (enabled by default);
CLIENT ADMIN is a company administrator who can manage their company account and the users within it. Additionally, CLIENT ADMIN can view and edit the data of all users within their company, delete files in folders, add or delete report templates (with or without attachments), the reports themselves, and single analyses, check statistics, and add new blacklist collections. The role is present in the Web UI only. Outside the Web UI, CLIENT ADMIN is replaced by the CLIENT role with the is_admin flag set to true.
CLIENT OPERATOR
is similar to OPERATOR
within their company.
Here's the detailed information on access levels.
| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | - | + | - | - |
| CLIENT | - | their company data | - | - |
| CLIENT SERVICE | - | their company data | - | - |
| CLIENT OPERATOR | - | their company data | - | - |
| CLIENT ADMIN | - | their company data | their company data | their company data |

| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | their folders | their folders | their folders | - |
| CLIENT SERVICE | within their company | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | - | within their company | - | - |
| CLIENT SERVICE | - | within their company | - | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | + | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | - | within their company | - |
| CLIENT OPERATOR | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | + | + | - |
| CLIENT | in their folders | in their folders | - |
| CLIENT SERVICE | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | in their folders | in their folders | - | - |
| CLIENT SERVICE | within their company | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | - | + | - | - |
| CLIENT | - | within their company | - | - |
| CLIENT SERVICE | within their company | within their company | - | - |
| CLIENT OPERATOR | - | within their company | - | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | - | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | within their company | within their company | - |
| CLIENT OPERATOR | - | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | - | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | - | within their company | - |
| CLIENT OPERATOR | - | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | - | + | their data | - |
| CLIENT | - | their data | their data | - |
| CLIENT SERVICE | - | within their company | their data | - |
| CLIENT OPERATOR | - | within their company | their data | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |
A dedicated Web Adapter in our cloud or an adapter deployed on-premise; the adapter's URL is required for adding the plugin.
To embed the plugin in your page, add a reference to the primary script of the plugin (plugin_liveness.php) that you received from Oz Forensics to the HTML code of the page. web-sdk-root-url is the Web Adapter link you've received from us.
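A minimal sketch of such a host page is below. The exact script URL layout is an assumption (web-sdk-root-url is your Web Adapter link), and the OzLiveness.open() parameters are illustrative:

```html
<!-- Hypothetical host page; replace <web-sdk-root-url> with your Web Adapter URL -->
<script src="https://<web-sdk-root-url>/plugin_liveness.php"></script>
<script>
  // Launch the plugin once the script has loaded (parameters are illustrative)
  OzLiveness.open({
    lang: 'en',
    action: ['video_selfie_blank']
  });
</script>
```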
For Angular and Vue, the script (and files, if you use a version lower than 1.4.0) should be added in the same way. For React apps, use the head of your template's main page to load and initialize the OzLiveness plugin. Please note: if you use <React.StrictMode>, you may experience issues with Web Liveness.
iOS SDK changes
Changed the wording for the head_down gesture: the new wording is “tilt down”.
Added proper focus order for VoiceOver when the antiscam hint is enabled.
Added the public setting extract_action_shot in the Demo Application.
Bug fixes.
Security updates.
Accessibility updates according to WCAG requirements: the SDK hints and UI controls can be voiced.
Improved user experience with head movement gestures.
Minor bug fixes and telemetry updates.
The screen brightness no longer changes when the rear camera is used.
Fixed the video recording issues on some smartphone models.
Security and telemetry updates.
Internal SDK improvements.
Added Xcode 16 support.
Security and telemetry updates.
Security updates.
Bug fixes.
SDK now requires Xcode 15 and higher.
Security updates.
Bug fixes.
Internal SDK improvements.
Internal SDK improvements.
Bug fixes.
Logging updates.
Security updates.
The sample is now available on SwiftUI. Please find it here.
Added a description for the error that occurs when providing an empty string as an ID in the addFolderID method.
Bug fixes.
The messages displayed by the SDK after uploading media have been synchronized with Android.
The bug causing analysis delays that might have occurred for the One Shot gesture has been fixed.
The length of the Selfie gesture is now configurable (affects the video file size).
You can set your own logo instead of Oz logo if your license allows it.
Removed the pause after the Scan gesture.
The code in Readme.md is now up-to-date.
Security and logging updates.
Security updates.
Changed the default behavior in case a localization key is missing: now the English string value is displayed instead of a key.
Fixed some bugs.
Internal licensing improvements.
Implemented the possibility of using a master license that works with any bundle_id.
Fixed the bug with background color flashing.
Bug fixes.
The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is sent to the server in case of the hybrid analysis once the on-device analysis has finished successfully.
The messages for the errors that are retrieved from API are now detailed.
The toFrameGradientColor option in hintAnimationCustomization is now deprecated; please use the hintGradientColor option instead.
Restored iOS 11 support.
If multiple analyses are applied to the folder simultaneously, the system sends them as a group. It means that the “worst” of the results will be taken as the resolution, not the latest. Please refer to this article for details.
For the Liveness analysis, the system now treats the highest score as a quantitative result. The Liveness analysis output is described here.
Updated the Liveness on-device model.
Added the Portuguese (Brazilian) locale.
You can now add a custom or update an existing language pack. The instructions can be found here.
If a media hasn't been uploaded correctly, the system now repeats the upload.
Added a new method to retrieve the telemetry (logging) identifier: getEventSessionId.
The setPermanentAccessToken, configure, and login methods are now deprecated. Please use the setApiConnection method instead.
The setLicense(from path: String) method is now deprecated. Please use the setLicense(licenseSource: LicenseSource) method instead.
Fixed some bugs and improved the SDK work.
Fixed some bugs and improved the SDK algorithms.
Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Improved the on-device models.
Updated the run method.
Added new structures: RequestStatus (analysis state), ResultMedia (analysis result for a single media), and RequestResult (consolidated analysis result for all media).
The updated AnalysisResult structure should now be used instead of OzAnalysisResult.
For the OZMedia object, you can now specify additional tags that are not included in our tags list.
The Selfie video length is now about 0.7 sec, the file size and upload time are reduced.
The hint text width can now exceed the frame width (when using the main camera).
The methods below are no longer supported:

| Removed method | Replacement |
| --- | --- |
| analyse | AnalysisRequest.run |
| addToFolder | uploadMedia |
| documentAnalyse | AnalysisRequest.run |
| uploadAndAnalyse | AnalysisRequest.run |
| runOnDeviceBiometryAnalysis | AnalysisRequest.run |
| runOnDeviceLivenessAnalysis | AnalysisRequest.run |
| addMedia | uploadMedia |
Added the center hint background customization.
Added new face frame forms (Circle, Square).
Added the antiscam widget and its customization. This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes. The purpose of this is to safeguard against scammers who may attempt to deceive an individual into approving a fraudulent transaction.
Synchronized the default customization values with Android.
Added the Spanish locale.
iOS 11 is no longer supported, the minimal required version is 12.
Fixed the issue with the server-based One shot analysis.
Improved the SDK algorithms.
Fixed error handling when uploading a file to API. From this version, an error is raised to the host application if an error occurs during file upload.
Improved the on-device Liveness.
Fixed the animation for sunglasses/mask.
Fixed the bug with the .document analysis.
Updated the descriptions of customization methods and structures.
Updated the TensorFlow version to 2.11.
Fixed several bugs, including the Biometry check failures on some phone models.
Added customization for the hint animation.
Integrated a new model.
Added the uploadMedia method to AnalysisRequest. The addMedia method is now deprecated.
Fixed the combo analysis error.
Added a button to reset the SDK theme and language settings.
Fixed some bugs and localization issues.
Extended the network request timeout to 90 sec.
Added a setting for the animation icon size.
Implemented a range of UI customization options and switched to the new design. To restore the previous settings, please refer to this article.
The run method now works similarly to the one in Android SDK and returns an array of analysis results.
Synchronized the version numbers with Android SDK.
Added a new field to the Analysis structure. The params field is for any additional parameters, for instance, if you need to set extracting the best shot on the server to true. The best shot algorithm chooses the highest-quality frame from a video.
Fixed some localization issues.
Changed the Combo gesture.
Now you can launch the Liveness check to analyze images taken with another SDK.
The Zoom in and Zoom out gestures are no longer supported.
Added a new simplified analysis structure – AnalysisRequest.
Added methods for on-device analysis: runOnDeviceLivenessAnalysis and runOnDeviceBiometryAnalysis.
You can choose the installation version. The standard installation gives access to full functionality. The core version (OzLivenessSDK/Core) installs the SDK without the on-device functionality.
Added a method to upload data to the server and start analyzing it immediately: uploadAndAnalyse.
Improved the licensing process; now you can add a license when initializing the SDK: OZSDK(licenseSources: [LicenseSource], completion: @escaping ((LicenseData?, LicenseError?) -> Void)), where LicenseSource is a path to the physical location of your license and LicenseData contains the license information.
Added the setLicense method to force license adding.
Added the Turkish locale.
Added the Kyrgyz locale.
Added Completion Handler for analysis results.
Added Error User Info to telemetry to show detailed info in case of an analysis error.
Added local on-device analysis.
Added oval and rectangular frames.
Added Xcode 12.5.1+ support.
Added SDK configuration with licenses.
Added the One Shot gesture.
Improved OZVerificationResult: added bestShotURL, which contains the best shot image, and preferredMediaURL, which contains a URL to the best quality video.
When performing a local check, you can now choose a main or back camera.
Authorization sessions extend automatically.
Updated authorization interfaces.
Added the Kazakh locale.
Added license error texts.
You can cancel network requests.
You can specify Bundle for license.
Added analysis parameterization for documentAnalyse.
Fixed building errors (Xcode 12.4 / Cocoapods 1.10.1).
Added license support.
Added Xcode 12 support instead of 11.
Fixed the documentAnalyse error where you had to fill analyseStates to launch the analysis.
Fixed logging.
Security and telemetry updates.
The SDK hints and UI controls can be voiced in accordance to WCAG requirements.
Improved user experience with head movement gestures.
Android: moved the large video compression step to the Liveness screen closure.
Android: fixed the bug when the best shot frame could contain an image with closed eyes.
Android: resolved codec issues on some smartphone models.
Android: fixed the bug when the recorded videos might appear green.
iOS: added Xcode 16 support.
iOS: the screen brightness no longer changes when the rear camera is used.
iOS: fixed the video recording issues on some smartphone models.
The executeLiveness method is now deprecated; please use startLiveness instead.
Updated the code needed to obtain the Liveness results.
Security and telemetry updates.
Added descriptions for the errors that occur when providing an empty string as an ID in the addFolderID (iOS) and setFolderID (Android) methods.
Android: fixed a bug causing an endless spinner to appear if the user switches to another application during the Liveness check.
Android: fixed some smartphone-model-specific bugs.
Security and logging updates.
Android: upgraded the on-device Liveness model.
Android: security updates.
iOS: the messages displayed by the SDK after uploading media have been synchronized with Android.
iOS: the bug causing analysis delays that might have occurred for the One Shot gesture has been fixed.
The length of the Selfie gesture is now configurable (affects the video file size).
Removed the pause after the Scan gesture.
Security and logging updates.
Bug fixes.
Android: if the recorded video is larger than 10 MB, it gets compressed.
Android: updated the on-device Liveness model.
iOS: changed the default behavior in case a localization key is missing: now the English string value is displayed instead of a key.
Fixed some bugs.
Implemented the possibility of using a master license that works with any bundle_id.
Fixed the bug with background color flashing.
Video compression failure on some phone models is now fixed.
First version.
A singleton for Oz SDK.
Initializes OZSDK with the license data. The closure is either license data or LicenseError.
| Parameter | Type | Description |
| --- | --- | --- |
| licenseSources | [LicenseSource] | The source of the license |

Returns: -
Forces the license installation.
| Parameter | Type | Description |
| --- | --- | --- |
| licenseSource | LicenseSource | Source of the license |
Retrieves an access token for a user.
| Parameter | Type | Description |
| --- | --- | --- |
| apiConnection | APIConnection | Authorization parameters |

Returns: the access token or an error.
Retrieves an access token for a user to send telemetry.
| Parameter | Type | Description |
| --- | --- | --- |
| eventsConnection | – | Telemetry authorization parameters |

Returns: the access token or an error.
Checks whether an access token exists.
Parameters: -
Returns: the result – a true or false value.
Deletes the saved access token.
Parameters: -
Returns: -
Creates the Liveness check controller.
| Parameter | Type | Description |
| --- | --- | --- |
| delegate | – | The delegate for Oz Liveness |
| actions | – | Captured actions |
| cameraPosition (optional) | AVCaptureDevice.Position | front – front camera (default); back – rear camera |

Returns: UIViewController or an exception.
Creates the Liveness check controller.
| Parameter | Type | Description |
| --- | --- | --- |
| actions | – | Captured actions |
| – | FaceCaptureCompletion | A type alias used as follows: public typealias FaceCaptureCompletion = (_ results: [OZMedia]?, _ error: OZVerificationStatus?) -> Void |
| cameraPosition (optional) | AVCaptureDevice.Position | front – front camera (default); back – rear camera |

Returns: UIViewController or an exception.
Deletes all videos.
Parameters: -
Returns: -
Retrieves the telemetry session ID.
Parameters: -
Returns: the telemetry session ID (String).
Sets the bundle to look for translations in.
| Parameter | Type | Description |
| --- | --- | --- |
| languageBundle | Bundle | The bundle that contains translations |

Returns: -
Sets the length of the Selfie gesture (in milliseconds).
| Parameter | Type | Description |
| --- | --- | --- |
| selfieLength | Int | The length of the Selfie gesture (in milliseconds). Should be within 500–5000 ms; the default length is 700 |
Generates the payload with media signatures.
| Parameter | Type | Description |
| --- | --- | --- |
| media | – | An array of media files |
| folderMeta | [String] | Additional folder metadata |

Returns: the payload to be sent along with the media files that were used for generation.
SDK locale (if not set, works automatically).
| Parameter | Type | Description |
| --- | --- | --- |
| localizationCode | – | The localization code |
The host to call for Liveness video analysis.
| Parameter | Type | Description |
| --- | --- | --- |
| host | String | Host address |
The holder for attempt counts before the SDK returns an error.

| Parameter | Type | Description |
| --- | --- | --- |
| singleCount | Int | Attempts on a single action/gesture |
| commonCount | Int | Total number of attempts on all actions/gestures if you use a sequence of them |
| faceAlignmentTimeout | Float | Time needed to align the face into the frame |
| uploadMediaSettings | – | Sets the number of attempts and the timeout between them |
The SDK version.
| Parameter | Type | Description |
| --- | --- | --- |
| version | String | Version number |
A delegate for OZSDK.
Gets the Liveness check results.
| Parameter | Type | Description |
| --- | --- | --- |
| results | [OZMedia] | An array of the OZMedia objects |

Returns: -

The error processing method.

| Parameter | Type | Description |
| --- | --- | --- |
| status | – | The error description |

Returns: -
A protocol for performing checks.
Creates the AnalysisRequest instance.
| Parameter | Type | Description |
| --- | --- | --- |
| folderId (optional) | String | The identifier to define when you need to upload media to a certain folder |

Returns: the AnalysisRequest instance.
Adds an analysis to the AnalysisRequest instance.
| Parameter | Type | Description |
| --- | --- | --- |
| analysis | – | A structure containing information on the analyses required |

Returns: -
Uploads media to the server.

| Parameter | Type | Description |
| --- | --- | --- |
| media | – | A media object or an array of media objects to be uploaded |

Returns: -
Adds the folder ID to upload media to a certain folder.
| Parameter | Type | Description |
| --- | --- | --- |
| folderId | String | The folder identifier |

Returns: -
Adds metadata to a folder.
| Parameter | Type | Description |
| --- | --- | --- |
| meta | [String] | An array of metadata as follows: ["meta1": "data1"] |

Returns: -
Runs the analyses.
| Parameter | Type | Description |
| --- | --- | --- |
| statusHandler | – | A callback function: the handler that is executed when the scenario state changes |
| errorHandler | – | A callback function as follows: errorHandler: @escaping ((_ error: Error) -> Void) – the error handler |
| completionHandler | – | A callback function: the handler that is executed when the run method completes |

Returns: the analysis result or an error.
Customization for OzLivenessSDK (use OZSDK.customization).
A set of customization parameters for the toolbar.
| Parameter | Type | Description |
| --- | --- | --- |
| closeButtonIcon | UIImage | An image for the close button |
| closeButtonColor | UIColor | Close button tintColor |
| titleFont | UIFont | Toolbar title text font |
| titleColor | UIColor | Toolbar title text color |
| backgroundColor | UIColor | Toolbar background color |
| titleText | String | Text on the toolbar |
A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.
| Parameter | Type | Description |
| --- | --- | --- |
| textFont | UIFont | Center hint text font |
| textColor | UIColor | Center hint text color |
| backgroundColor | UIColor | Center hint text background |
| verticalPosition | Int | Center hint vertical position from the screen top (in %, 0–100) |
| hideTextBackground | Bool | Hides the text background |
| backgroundCornerRadius | Int | Center hint background frame corner radius |
A set of customization parameters for the hint animation.
| Parameter | Type | Description |
| --- | --- | --- |
| hideAnimation | Bool | A switcher for the hint animation; if true, the animation is hidden |
| animationIconSize | CGFloat | A side size of the animation icon square |
| hintGradientColor | UIColor | The close-to-frame gradient color |
A set of customization parameters for the frame around the user face.
| Parameter | Type | Description |
| --- | --- | --- |
| geometryType | – | The frame type: oval, rectangle, circle, or square |
| cornerRadius | CGFloat | Rectangle corner radius (in dp) |
| strokeFaceNotAlignedColor | UIColor | Frame color when a face is not aligned properly |
| strokeFaceAlignedColor | UIColor | Frame color when a face is aligned properly |
| strokeWidth | CGFloat | Frame stroke width (in dp, 0–20) |
| strokePadding | CGFloat | A padding from the stroke to the face alignment area (in dp, 0–10) |
A set of customization parameters for the background outside the frame.
| Parameter | Type | Description |
| --- | --- | --- |
| backgroundColor | UIColor | Background color |
A set of customization parameters for the SDK version text.
| Parameter | Type | Description |
| --- | --- | --- |
| textFont | UIFont | SDK version text font |
| textColor | UIColor | SDK version text color |
A set of customization parameters for the antiscam message that warns users that their actions are being recorded.

| Parameter | Type | Description |
| --- | --- | --- |
| customizationEnableAntiscam | Bool | Adds the antiscam message |
| customizationAntiscamTextMessage | String | Antiscam message text |
| customizationAntiscamTextFont | UIFont | Antiscam message text font |
| customizationAntiscamTextColor | UIColor | Antiscam message text color |
| customizationAntiscamBackgroundColor | UIColor | Antiscam message text background color |
| customizationAntiscamCornerRadius | CGFloat | Background frame corner radius |
| customizationAntiscamFlashColor | UIColor | Color of the flashing indicator next to the antiscam message |
Logo customization parameters. The custom logo must be allowed by your license.

| Parameter | Type | Description |
| --- | --- | --- |
| image | UIImage | Logo image |
| size | CGSize | Logo size (in dp) |
A source of a license.
| Case | Description |
| --- | --- |
| licenseFilePath | An absolute path to a license (String) |
| licenseFileName | The name of the license file |
The license data.
| Parameter | Type | Description |
| --- | --- | --- |
| appIDS | [String] | An array of bundle IDs |
| expires | TimeInterval | The expiration interval |
| features | Features | License features |
| configs (optional) | ABTestingConfigs | Additional configuration |
Contains action from the captured video.
| Case | Description |
| --- | --- |
| smile | Smile |
| eyes | Blink |
| scanning | Scan |
| selfie | A selfie with face alignment check |
| one_shot | The best shot from the video taken |
| left | Head turned left |
| right | Head turned right |
| down | Head tilted downwards |
| up | Head lifted up |
Contains the locale code according to ISO 639-1.
| Case | Description |
| --- | --- |
| en | English |
| hy | Armenian |
| kk | Kazakh |
| ky | Kyrgyz |
| tr | Turkish |
| es | Spanish |
| pt-BR | Portuguese (Brazilian) |
| custom(String) | Custom language (ISO 639-1 two-letter code) |
Contains all the information on the media captured.
| Parameter | Type | Description |
| --- | --- | --- |
| movement | – | User action type |
| mediaType | – | Type of media |
| metaData | [String] | Metadata, if any, as follows: ["meta1": "data1"] |
| videoURL | URL | URL of the Liveness video |
| bestShotURL | URL | URL of the best shot in PNG |
| preferredMediaURL | URL | URL of the API media container |
| timestamp | Date | Timestamp for the check completion |
The type of media captured.
| Case | Description |
| --- | --- |
| movement | A media with an action |
| documentBack | The back side of the document |
| documentFront | The front side of the document |
Error description. These errors are deprecated and will be deleted in the upcoming releases.
| Case | Description |
| --- | --- |
| userNotProcessed | The Liveness check was not processed |
| failedBecauseUserCancelled | The check was interrupted by the user |
| failedBecauseCameraPermissionDenied | The Liveness check can't be performed: no camera access |
| failedBecauseOfBackgroundMode | The Liveness check can't be performed: background mode |
| failedBecauseOfTimeout | The Liveness check can't be performed: timeout |
| failedBecauseOfAttemptLimit | The Liveness check can't be performed: attempts limit exceeded |
| failedBecausePreparingTimout | The Liveness check can't be performed: face alignment timeout |
| failedBecauseOfLowMemory | The Liveness check can't be performed: no memory left |
Contains information on what media to analyze and what analyses to apply.
| Parameter | Type | Description |
| --- | --- | --- |
| media | – | An array of the OZMedia objects |
| type | – | The type of the analysis |
| mode | – | The mode of the analysis |
| sizeReductionStrategy | – | Defines what type of media is sent to the server in case of the hybrid analysis once the on-device analysis has finished successfully |
| params (optional) | String | Additional parameters |
The type of the analysis.
| Case | Description |
| --- | --- |
| biometry | The algorithm that compares several media and checks whether the people on them are the same person |
| quality | The algorithm that checks whether a person in a video is a real human acting in good faith, not a fake of any kind |
| document | The analysis that recognizes the document and checks whether its fields are correct according to its type |
| blacklist | The analysis that compares a face on a captured media with faces from the pre-made media database |
Currently, the .document analysis can't be performed in the on-device mode.
The mode of the analysis.
| Case | Description |
| --- | --- |
| onDevice | The on-device analysis with no server needed |
| serverBased | The server-based analysis |
| hybrid | The hybrid analysis for Liveness: if the score received from the on-device analysis is too high, the system initiates a server-based analysis as an additional check |
Shows the media processing status.
| Case | Description |
| --- | --- |
| addToFolder | The system is creating a folder and adding files to it |
| addAnalyses | The system is adding analyses |
| waitAnalysisResult | The system is waiting for the result |
Shows the files' uploading status.
| Parameter | Type | Description |
| --- | --- | --- |
| media | – | The object that is currently being uploaded |
| index | Int | The number of this object in the list |
| from | Int | The total number of objects |
| progress | Progress | Object uploading status |
Shows the analysis processing status.
| Parameter | Type | Description |
| --- | --- | --- |
| status | – | Processing analysis status |
| progressStatus | – | Media uploading status |
Describes the analysis result for the single media.
| Parameter | Type | Description |
| --- | --- | --- |
| resolution | – | Consolidated analysis result |
| sourceId | String | Media identifier |
| isOnDevice | Bool | Analysis mode |
| confidenceScore | Float | Resulting score |
| mediaType | String | Media file type: VIDEO / IMAGE / SHOT_SET |
| media | – | Media being analyzed |
| error | AnalysisError (inherits from Error) | Error |
Contains the consolidated analysis results for all media.
| Parameter | Type | Description |
| --- | --- | --- |
| resolution | – | Consolidated analysis result |
| folderId | String | Folder identifier |
| analysisResults | – | A list of analysis results |
Contains the results of the checks performed.
| Parameter | Type | Description |
| --- | --- | --- |
| resolution | – | Analysis resolution |
| type | – | Analysis type |
| mode | – | Analysis mode |
| analysisId | String | Analysis identifier |
| error | AnalysisError (inherits from Error) | Error |
| resultMedia | – | Results of the analysis for single media files |
| confidenceScore | Float | The resulting score |
| serverRawResponse | String | Server response |
The general status for all analyses applied to the folder created.
| Case | Description |
| --- | --- |
| INITIAL | No analyses have been applied yet |
| PROCESSING | The analyses are in progress |
| FAILED | One or more analyses failed due to some error and couldn't be finished |
| FINISHED | The analyses are finished |
| DECLINED | The check failed (e.g., the faces don't match or a spoofing attack was detected) |
| SUCCESS | The check succeeded (e.g., the faces match or liveness is confirmed) |
| OPERATOR_REQUIRED | The result should be additionally checked by a human operator |
Contains the results for single analyses.
| Parameter | Type | Description |
| --- | --- | --- |
| analyseResolutionStatus | – | The analysis status |
| type | – | The analysis type |
| folderID | String | The folder identifier |
| score | Float | The result of the check performed |
Frame shape settings.
| Case | Description |
| --- | --- |
| oval | Oval frame |
| rectangle(cornerRadius: CGFloat) | Rectangular frame (with corner radius) |
| circle | Circular frame |
| square(cornerRadius: CGFloat) | Square frame (with corner radius) |
Possible license errors.
| Case | Description |
| --- | --- |
| licenseFileNotFound | The license is not found |
| licenseParseError | Cannot parse the license file; the license might be invalid |
| licenseBundleError | The bundle_id in the license file doesn't match the bundle_id used |
| licenseExpired | The license is expired |
The authorization type.
| Case | Description |
| --- | --- |
| fromServiceToken | Authorization with a token: host: String, token: String |
| fromCredentials | Authorization with credentials: host: String, login: String, password: String |
Defines the settings for the repeated media upload.
| Parameter | Type | Description |
| --- | --- | --- |
| attemptsCount | Int | Number of attempts for media upload |
| attemptsTimeout | Int | Timeout between attempts |
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
| Case | Description |
| --- | --- |
| uploadOriginal | The original video |
| uploadCompressed | The compressed video |
| uploadBestShot | The best shot taken from the video |
| uploadNothing | Nothing is sent (note that no folder will be created) |
The plugin window is launched with the open(options) method. The full list of OzLiveness.open() parameters is below; a usage sketch follows the list.
options – an object with the following settings:
token – (optional) the auth token;
license – an object containing the license data;
licenseUrl – a string containing the path to the license;
lang – a string containing the identifier of one of the installed language packs;
params – an object with identifiers and additional parameters:
extract_best_shot – true or false: run the best frame choice in the Quality analysis;
action – an array of strings with identifiers of actions to be performed.
Available actions:
photo_id_front – photo of the ID front side;
photo_id_back – photo of the ID back side;
video_selfie_left – turn head to the left;
video_selfie_right – turn head to the right;
video_selfie_down – tilt head downwards;
video_selfie_high – raise head up;
video_selfie_smile – smile;
video_selfie_eyes – blink;
video_selfie_scan – scanning;
video_selfie_blank – no action, simple selfie;
video_selfie_best – a special action to select the best shot from a video and perform analysis on it instead of the full video.
overlay_options – the document's template displaying options:
show_document_pattern: true/false – true by default; displays a template image; if set to false, the image is replaced by a rectangular frame;
on_error – a callback function (with one argument) that is called in case of any error during video capturing and retrieves the error information: an object with the error code, error message, and telemetry ID for logging.
on_close – a callback function (no arguments) that is called after the plugin window is closed (whether manually by the user or automatically after the check is completed).
device_id – (optional) the identifier of the camera that is being used.
disable_adaptive_aspect_ratio (since 1.5.0) – if true, disables the adaptive video aspect ratio, so your video doesn't automatically adjust to the window aspect ratio. The default value is false; by default, the video adjusts to the closest ratio of 4:3, 3:4, 16:9, or 9:16. Please note: smartphones still require portrait orientation to work.
get_user_media_timeout (since 1.5.0) – when Web SDK can't get access to the user camera, after this timeout it displays a hint on how to solve the problem. The default value is 40000 (ms).
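As referenced above, here is a usage sketch; all parameter values are illustrative, and the callback bodies are placeholders:

```javascript
// Illustrative values only; see the parameter list above
OzLiveness.open({
  lang: 'en',
  action: ['video_selfie_blank'],
  params: {
    extract_best_shot: true
  },
  meta: {
    transaction_id: 'your-transaction-id' // hypothetical metadata key
  },
  on_error: function (error) {
    console.log('Error:', error);
  },
  on_complete: function (result) {
    console.log('Result:', result);
  },
  on_close: function () {
    console.log('Plugin window closed');
  }
});
```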
Oz Liveness Web SDK is a module for processing data on clients' devices. With Oz Liveness Web SDK, you can take photos and videos of people via their web browsers and then analyze these media. Most browsers and devices are supported. Available languages: EN, ES, PT-BR, KK.
Please find a sample for Oz Liveness Web SDK here. To make it work, replace <web-adapter-url>
with the Web Adapter URL you've received from us.
For Angular and React, replace https://web-sdk.sandbox.ohio.ozforensics.com
in index.html.
Web SDK requires HTTPS (with SSL encryption) to work; however, at localhost and 127.0.0.1, you can check the resources' availability via HTTP.
Oz Liveness Web SDK consists of two components:
Client side – a JavaScript file that is being loaded within the frontend part of your application. It is called Oz Liveness Web Plugin.
Server side – a separate server module with OZ API. The module is called Oz Liveness Web Adapter.
The integration guides can be found here:
Oz Web SDK can be provided via SaaS, when the server part works on our servers and is maintained by our engineers, and you just use it, or on-premise, when Oz Web Adapter is installed on your servers. Contact us for more details and choose the model that is convenient for you.
Oz Web SDK requires a license to work. To issue a license, we need the domain name of the website where you are going to use our SDK.
This is a guide on how to start with Oz Web SDK:
Integrate the plugin into your page.
If you want to customize the look-and-feel of Oz Web SDK, please refer to this section.
Web Plugin is a plugin called by your web application. It works in a browser context. The Web Plugin communicates with Web Adapter, which, in turn, communicates with Oz API.
From 1.1.0, Oz API Lite works with base64 as an input format and can also return the biometric templates in this format. To enable this option, add Content-Transfer-Encoding = base64 to the request headers.
Use this method to check what versions of components are used (available from 1.1.1).
Call GET /version. There are no request parameters.
Request example: GET localhost/version
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
Use this method to check whether the biometric processor is ready to work.
Call GET /v1/face/pattern/health. There are no request parameters.
Request example: GET localhost/v1/face/pattern/health
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
The method is designed to extract a biometric template from an image.
HTTP request content type: “image/jpeg” or “image/png”.
Call POST /v1/face/pattern/extract.
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns a biometric template.
The content type of the HTTP response is “application/octet-stream”.
If you've passed Content-Transfer-Encoding = base64 in the headers, the template will be in base64 as well.
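A minimal sketch of calling the extract method from JavaScript, assuming API Lite is reachable at http://localhost (the host is a placeholder):

```javascript
// Extract a biometric template from a JPEG image (binary mode)
async function extractTemplate(imageBlob) {
  const response = await fetch('http://localhost/v1/face/pattern/extract', {
    method: 'POST',
    headers: {
      'Content-Type': 'image/jpeg'
      // Add 'Content-Transfer-Encoding': 'base64' here to send/receive base64 instead
    },
    body: imageBlob
  });
  if (!response.ok) throw new Error(`API Lite error: ${response.status}`);
  // The response is an application/octet-stream biometric template
  return await response.blob();
}
```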
The method is designed to compare two biometric templates.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare.
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of comparing the two templates.
HTTP response content type: “application/json”.
The method combines the two methods from above, extract and compare. It extracts a template from an image and compares the resulting biometric template with another biometric template that is also passed in the request.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify.
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of comparing two biometric templates and the biometric template.
The content type of the HTTP response is “multipart/form-data”.
The method also combines the two methods from above, extract and compare. It extracts templates from two images, compares the received biometric templates, and transmits the comparison result as a response.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/extract_and_compare.
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of comparing the two extracted biometric templates.
HTTP response content type: “application/json”.
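A sketch of the extract_and_compare call, assuming the same placeholder host; the field names sample_1 and sample_2 come from the request description below:

```javascript
// Compare the faces on two images in one call
async function extractAndCompare(image1Blob, image2Blob) {
  const form = new FormData();
  form.append('sample_1', image1Blob, 'sample_1.jpg'); // first image
  form.append('sample_2', image2Blob, 'sample_2.jpg'); // second image
  const response = await fetch('http://localhost/v1/face/pattern/extract_and_compare', {
    method: 'POST',
    body: form // the multipart/form-data content type is set automatically
  });
  if (!response.ok) throw new Error(`API Lite error: ${response.status}`);
  return await response.json(); // contains score and decision
}
```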
Use this method to compare one biometric template to N others.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare_n
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
The method combines the extract and compare_n methods. It extracts a biometric template from an image and compares it to N other biometric templates that are passed in the request as a list.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify_n
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
This method also combines the extract and compare_n methods but in another way. It extracts biometric templates from the main image and a list of other images and then compares them in the 1:N mode.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/extract_and_compare_n.
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
Use this method to check whether the liveness processor is ready to work.
Call GET /v1/face/liveness/health. There are no request parameters.
Request example: GET localhost/v1/face/liveness/health
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
The detect method is made to reveal presentation attacks. It detects a face in each image or video (since 1.2.0), sends them for analysis, and returns a result.
The method supports the following content types:
image/jpeg or image/png for an image;
multipart/form-data for images, videos, and archives. You can use payload to add any parameters that affect the analysis.
To run the method, call POST /{version}/face/liveness/detect.
Accepts an image in JPEG or PNG format. No payload attached.
Accepts the multipart/form-data request.
Each media file should have a unique name, e.g., media_key1, media_key2.
The payload parameters should be a JSON placed in the payload field.
Temporary IDs will be deleted once you get the result.
To extract the best shot from your video or archive, in analyses, set extract_best_shot = true (as shown in the request example below). In this case, API Lite will analyze your archives and videos and will return the best shot in the response. It will be a base64 image in analysis->output_images->image_b64.
Additionally, you can change the Liveness threshold. In analyses, set the new threshold in the threshold_spoofing parameter. If the resulting score is higher than this parameter's value, the analysis will end with the DECLINED status. Otherwise, the status will be SUCCESS.
HTTP response content type: “application/json”.
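A sketch of a multipart detect call with the best-shot option, assuming the placeholder host; the media key name and the exact shape of the analyses object inside payload are illustrative:

```javascript
// Liveness detection with the best shot extraction enabled
async function detectLiveness(videoBlob) {
  const form = new FormData();
  form.append('media_key1', videoBlob, 'liveness.mp4'); // each media file needs a unique name
  form.append('payload', JSON.stringify({
    analyses: [{ extract_best_shot: true }] // shape of "analyses" is an assumption
  }));
  const response = await fetch('http://localhost/v1/face/liveness/detect', {
    method: 'POST',
    body: form
  });
  if (!response.ok) throw new Error(`API Lite error: ${response.status}`);
  // On success, the best shot is a base64 image in analysis->output_images->image_b64
  return await response.json();
}
```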
This callback is called periodically during the analysis processing. It retrieves an intermediate result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode.
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, here.
When result_mode is safe, the on_result callback contains the state of the analysis only.
Please note: the options listed below are for testing purposes only. If you require more information than what is available in the safe mode, please follow the approach described here.
For the status value, the callback contains the state of the analysis and, for each of the analysis types, the name of the type, state, and resolution.
The folder value is almost similar to the status value, with the only difference that the folder_id is added.
If result_mode is set to full, you will receive, once the analysis is finished, the full information on the analysis:
everything that you could see in the folder mode;
timestamps;
metadata;
analyses’, company, analysis group IDs;
thresholds;
media info;
and more.
This callback is called after the check is completed. It retrieves the analysis result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode.
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, here.
When result_mode is safe, the on_complete callback contains the state of the analysis only.
Please note: the options listed below are for testing purposes only. If you require more information than what is available in the safe mode, please follow the approach described here.
For the status value, the callback contains the state of the analysis and, for each of the analysis types, the name of the type, state, and resolution.
The folder value is almost similar to the status value, with the only difference that the folder_id is added.
If result_mode is set to full, you receive the full information on the analysis:
everything that you could see in the folder mode;
timestamps;
metadata;
analyses’, company, analysis group IDs;
thresholds;
media info;
and more.
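A sketch of wiring both callbacks, assuming the safe result mode; the exact response shape depends on your Web Adapter configuration:

```javascript
OzLiveness.open({
  action: ['video_selfie_blank'],
  on_result: function (result) {
    // Called periodically while the analyses are processing
    console.log('Intermediate state:', result);
  },
  on_complete: function (result) {
    // Called once, after the check is completed
    console.log('Final state:', result);
    // In the safe mode only the analysis state is available here;
    // fetch the actual resolution from your back end via Oz API instead.
  }
});
```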
Typically, Oz Mobile SDK works in the server-based mode, where the Liveness and Biometry analyses are performed on a server, as shown in the scheme below:
But there's also an option to perform checks without server calls or even an Internet connection: the on-device analyses.
The on-device analyses are faster and more secure, as all data is processed directly on the device and nothing is sent anywhere. In this case, you don't need a server at all, nor do you need the API connection.
However, the API connection might be needed for some additional functions like telemetry or server-side SDK configuration.
The on-device analysis mode is useful when:
you do not collect, store or process personal data;
you need to identify a person quickly regardless of network conditions such as a distant region, inside a building, underground, etc.;
you’re on a tight budget as you can save money on the hardware part.
To launch the on-device check, set the appropriate mode for Android or iOS SDK.
Android:
iOS:
To start recording, use the startActivityForResult method:
actions – a list of actions to be performed while recording the video.
For Fragment, use the code below. LivenessFragment is the representation of the Liveness screen UI.
To obtain the captured video, use the onActivityResult method:
If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.
If a user closes the capturing screen manually, resultCode receives the Activity.RESULT_CANCELED value.
Use GET /api/folders/?meta_data=transaction_id==<your_transaction_id> to find a folder in Oz API from your back end by your unique identifier. Read more about folder metadata here.
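A sketch of such a back-end lookup; the Oz API host is a placeholder, and the access-token header name is an assumption:

```javascript
// Find the folder created for a given transaction by its metadata
async function findFolderByTransactionId(transactionId, accessToken) {
  const url = `https://your-oz-api-host/api/folders/?meta_data=transaction_id==${transactionId}`;
  const response = await fetch(url, {
    headers: { 'X-Forensic-Access-Token': accessToken } // header name is an assumption
  });
  if (!response.ok) throw new Error(`Oz API error: ${response.status}`);
  return await response.json();
}
```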
meta – an object with names of meta fields as keys and their string values as values. It is transferred to Oz API and can be used to obtain analysis results or for searching;
on_submit – a callback function (no arguments) that is called after submitting customer data to the server (unavailable for the capture mode);
on_capture_complete – a callback function (with one argument) that is called after the video is captured and retrieves the information on this video. The example of the response is described here.
on_result – a callback function (with one argument) that is called periodically during the analysis and retrieves an intermediate result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode and is described here.
on_complete – a callback function (with one argument) that is called after the check is completed and retrieves the analysis result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode and is described here.
style – the look-and-feel customization options;
enable_3d_mask – enables the 3D mask as the default face capture behavior. This parameter works only if load_3d_mask in the Web Adapter is set to true; the default value is false.
cameraFacingMode (since 1.4.0) – the parameter that defines which camera to use; possible values: user (front camera), environment (rear camera). This parameter only works if the use_for_liveness option in the Web Adapter configuration file is undefined. If use_for_liveness is set (with any value), cameraFacingMode gets overridden and ignored.
while the analysis is in progress, a response similar to the one for processing;
score (the value of the min_confidence or confidence_spoofing parameters; please refer to the analyses' descriptions for details);
For details, please refer to the Checking Liveness and Face Biometry sections for iOS and Android.
sdkMediaResult – an object with video capturing results for interactions with Oz API (a list of the media objects);
sdkErrorString – a description of the error, if any.
On-Premise
Install our Web SDK. Our engineers will help you install the needed components using the standalone installer or manually. The license will be installed as well; to update it, please refer to this article.
Configure the adapter.
SaaS
This part is fully covered by the Oz Forensics engineers. You get a link for Oz Web Plugin (see step 2).
| Parameter name | Type | Description |
| --- | --- | --- |
| core | String | API Lite core version number |
| tfss | String | TFSS version number |
| models | [String] | An array of model versions; each record contains the model name and model version number |
| Parameter name | Type | Description |
| --- | --- | --- |
| status | Int | 0 – the biometric processor is working correctly; 3 – the biometric processor is inoperative |
| message | String | Message |
| Parameter name | Type | Description |
| --- | --- | --- |
| Not specified* | Stream | Required parameter. The image to extract the biometric template from |

*The “Content-Type” header field must indicate the content type.
| Parameter name | Type | Description |
| --- | --- | --- |
| Not specified* | Stream | A biometric template derived from the image |
| Parameter name | Type | Description |
| --- | --- | --- |
| bio_feature | Stream | Required parameter. The first biometric template |
| bio_template | Stream | Required parameter. The second biometric template |
| Parameter name | Type | Description |
| --- | --- | --- |
| score | Float | The result of comparing the two templates |
| decision | String | Recommended decision based on the score: approved – positive, the faces match; operator_required – additional verification by an operator is required; declined – negative, the faces don't match |
| Parameter name | Type | Description |
| --- | --- | --- |
| sample | Stream | Required parameter. The image to extract the biometric template from |
| bio_template | Stream | Required parameter. The biometric template to compare with |
| Parameter name | Type | Description |
| --- | --- | --- |
| score | Float | The result of comparing the two templates |
| bio_feature | Stream | The biometric template derived from the image |
| Parameter name | Type | Description |
| --- | --- | --- |
| sample_1 | Stream | Required parameter. The first image |
| sample_2 | Stream | Required parameter. The second image |
| Parameter name | Type | Description |
| --- | --- | --- |
| score | Float | The result of comparing the two extracted templates |
| decision | String | Recommended decision based on the score: approved – positive, the faces match; operator_required – additional verification by an operator is required; declined – negative, the faces don't match |
| Parameter name | Type | Description |
| --- | --- | --- |
| template_1 | Stream | Required parameter. The first (main) biometric template |
| templates_n | Stream | A list of N biometric templates. Each template should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header |
| Parameter name | Type | Description |
| --- | --- | --- |
| results | List[JSON] | A list of N comparison results; the Nth result contains the comparison result for the main and Nth templates, with the fields below |
| *filename | String | The filename for the Nth template |
| *score | Float | The result of comparing the main and Nth templates |
| *decision | String | Recommended decision based on the score: approved – positive, the faces match; operator_required – additional verification by an operator is required; declined – negative, the faces don't match |
| Parameter name | Type | Description |
| --- | --- | --- |
| sample_1 | Stream | Required parameter. The main image |
| templates_n | Stream | A list of N biometric templates. Each template should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header |
| Parameter name | Type | Description |
| --- | --- | --- |
| results | List[JSON] | A list of N comparison results; the Nth result contains the comparison result for the template derived from the main image and the Nth template, with the fields below |
| *filename | String | The filename for the Nth template |
| *score | Float | The result of comparing the template derived from the main image and the Nth template |
| *decision | String | Recommended decision based on the score: approved – positive, the faces match; operator_required – additional verification by an operator is required; declined – negative, the faces don't match |
| Parameter name | Type | Description |
| --- | --- | --- |
| sample_1 | Stream | Required parameter. The first (main) image |
| samples_n | Stream | A list of N images. Each image should be passed separately, but the parameter name should be samples_n. You also need to pass the filename in the header |
| Parameter name | Type | Description |
| --- | --- | --- |
| results | List[JSON] | A list of N comparison results; the Nth result contains the comparison result for the main and Nth images, with the fields below |
| *filename | String | The filename for the Nth image |
| *score | Float | The result of comparing the main and Nth images |
| *decision | String | Recommended decision based on the score: approved – positive, the faces match; operator_required – additional verification by an operator is required; declined – negative, the faces don't match |
| HTTP response code | The value of the “code” parameter | Description |
| --- | --- | --- |
| 400 | BPE-002001 | Invalid Content-Type of the HTTP request |
| 400 | BPE-002002 | Invalid HTTP request method |
| 400 | BPE-002003 | Failed to read the biometric sample* |
| 400 | BPE-002004 | Failed to read the biometric template |
| 400 | BPE-002005 | Invalid Content-Type of a part of the multipart HTTP request |
| 400 | BPE-003001 | Failed to retrieve the biometric template |
| 400 | BPE-003002 | The biometric sample* is missing a face |
| 400 | BPE-003003 | More than one person is present on the biometric sample* |
| 500 | BPE-001001 | Internal bioprocessor error |
| 400 | BPE-001002 | TFSS error. Call the biometry health method |
| Parameter name | Type | Description |
| --- | --- | --- |
| status | Int | 0 – the liveness processor is working correctly; 3 – the liveness processor is inoperative |
| message | String | Message |
| HTTP response code | The value of the “code” parameter | Description |
| --- | --- | --- |
| 400 | LDE-002001 | Invalid Content-Type of the HTTP request |
| 400 | LDE-002002 | Invalid HTTP request method |
| 400 | LDE-002004 | Failed to extract the biometric sample* |
| 400 | LDE-002005 | Invalid Content-Type of a part of the multipart HTTP request |
| 500 | LDE-001001 | Liveness detection processor internal error |
| 400 | LDE-001002 | TFSS error. Call the Liveness health method |
To set your own look-and-feel options, use the style section in the OzLiveness.open method. Here is what you can change:
faceFrame – the color of the frame around a face:
faceReady – the frame color when the face is correctly placed within the frame;
faceNotReady – the frame color when the face is placed improperly and can't be analyzed.
centerHint – the text of the hint that is displayed in the center:
textSize – the size of the text;
color – the color of the text;
yPosition – the vertical position measured from the top;
letterSpacing – the spacing between letters;
fontStyle – the style of the font (bold, italic, etc.).
closeButton – the button that closes the plugin:
image – the button image; can be an image in PNG or a dataURL in base64.
backgroundOutsideFrame – the color of the overlay filling (outside the frame):
color – the fill color.
Example (a sketch; the color and size values are illustrative):
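```javascript
OzLiveness.open({
  action: ['video_selfie_blank'],
  style: {
    faceFrame: {
      faceReady: '#00FF00',   // face is placed correctly
      faceNotReady: '#FF0000' // face can't be analyzed yet
    },
    centerHint: {
      textSize: '18px',
      color: '#FFFFFF',
      yPosition: '25%'
    },
    backgroundOutsideFrame: {
      color: 'rgba(0, 0, 0, 0.6)'
    }
  }
});
```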
To force the closing of the plugin window, use the close() method. All requests to the server and callback functions (except on_close) within the current session will be aborted.
Example (a sketch):
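```javascript
// Close the plugin window and abort in-flight requests
OzLiveness.close();
```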
To hide the plugin window without cancelling the requests for analysis results and user callback functions, call the hide() method. Use this method, for instance, if you want to display your own upload indicator after submitting data.
An example of usage (a sketch; the spinner function is a hypothetical part of your application):
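```javascript
OzLiveness.open({
  action: ['video_selfie_blank'],
  on_submit: function () {
    // Data has been submitted; hide the plugin and show your own indicator
    OzLiveness.hide();
    showMyUploadSpinner(); // hypothetical function in your application
  }
});
```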
The add_lang(lang_id, lang_obj) method allows adding a new or customized language pack.
Parameters:
lang_id – a string value that can subsequently be used as the lang parameter for the open() method;
lang_obj – an object that includes identifiers of translation strings as keys and the translation strings themselves as values.
A list of language identifiers:

| Identifier | Language |
| --- | --- |
| en | English |
| es | Spanish |
| pt-br* | Portuguese (Brazilian) |
| kz | Kazakh |

*Formerly pt, changed in 1.3.1.
An example of usage: OzLiveness.add_lang('en', enTranslation), where enTranslation is a JSON object.
To set the SDK language, specify the language identifier in lang when you launch the plugin, as in the sketch below:
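```javascript
// A sketch: launch the plugin with the Spanish locale
OzLiveness.open({
  lang: 'es',
  action: ['video_selfie_blank']
});
```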
You can check which locales are installed in Web SDK using the ozLiveness.get_langs() method. If you have added a locale manually, it will also be shown.
A list of all language identifiers:
The keys oz_action_*_go refer to the appropriate gestures; the oz_tutorial_camera_* keys refer to the hints on how to enable the camera in different browsers. Others refer to the hints for any gesture, info messages, or errors.
Since 1.5.0, if your language pack doesn't include a key, the message for this key is shown in English.
This callback is called when the system encounters any error. It contains the error details and telemetry ID that you can use for further investigation.
To set your own look-and-feel options, use the style section in the OzLiveness.open method. The options are listed below the example.
Main color settings.
| Parameter | Description |
| --- | --- |
| textColorPrimary | Main text color |
| backgroundColorPrimary | Main background color |
| textColorSecondary | Secondary text color |
| backgroundColorSecondary | Secondary background color |
| iconColor | Icons' color |
Main font settings.
| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
Title font settings.
| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
Buttons’ settings.
| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColorPrimary | Main text color |
| backgroundColorPrimary | Main background color |
| textColorSecondary | Secondary text color |
| backgroundColorSecondary | Secondary background color |
| cornerRadius | Button corner radius |
Toolbar settings.
| Parameter | Description |
| --- | --- |
| closeButtonIcon | Close button icon |
| iconColor | Close button icon color |
Center hint settings.
| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColor | Text color |
| backgroundColor | Background color |
| backgroundOpacity | Background opacity |
| backgroundCornerRadius | Frame corner radius |
| verticalPosition | Vertical position |
Hint animation settings.
| Parameter | Description |
| --- | --- |
| hideAnimation | Disable animation |
| hintGradientColor | Gradient color |
| hintGradientOpacity | Gradient opacity |
| animationIconSize | Animation icon size |
Face frame settings.
| Parameter | Description |
| --- | --- |
| geometryType | Frame shape: rectangle or oval |
| cornersRadius | Frame corner radius (for rectangle) |
| strokeDefaultColor | Frame color when a face is not aligned properly |
| strokeFaceInFrameColor | Frame color when a face is aligned properly |
| strokeOpacity | Stroke opacity |
| strokeWidth | Stroke width |
| strokePadding | Padding from stroke |
Document capture frame settings.
| Parameter | Description |
| --- | --- |
| cornersRadius | Frame corner radius |
| templateColor | Document template color |
| templateOpacity | Document template opacity |
Background settings.
| Parameter | Description |
| --- | --- |
| backgroundColor | Background color |
| backgroundOpacity | Background opacity |
Scam protection settings: the antiscam message warns user about their actions being recorded.
| Parameter | Description |
| --- | --- |
| textMessage | Antiscam message text |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColor | Text color |
| textOpacity | Text opacity |
| backgroundColor | Background color |
| backgroundOpacity | Background opacity |
| backgroundCornerRadius | Frame corner radius |
| flashColor | Flashing indicator color |
SDK version text settings.
| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColor | Text color |
| textOpacity | Text opacity |
3D mask settings. The mask has been implemented in 1.2.1.
| Parameter | Description |
| --- | --- |
| maskColor | The color of the mask itself |
| glowColor | The color of the glowing mask shape |
| minAlpha | Minimum mask transparency level (implemented in 1.3.1) |
| maxAlpha | Maximum mask transparency level (implemented in 1.3.1) |
Table of parameters' correspondence:

| Previous design | New design |
| --- | --- |
| doc_color | - |
| face_color_success / faceFrame.faceReady | faceFrameCustomization.strokeFaceInFrameColor |
| face_color_fail / faceFrame.faceNotReady | faceFrameCustomization.strokeDefaultColor |
| centerHint.textSize | centerHintCustomization.textSize |
| centerHint.color | centerHintCustomization.textColor |
| centerHint.yPosition | centerHintCustomization.verticalPosition |
| centerHint.letterSpacing | - |
| centerHint.fontStyle | centerHintCustomization.textStyle |
| closeButton.image | - |
| backgroundOutsideFrame.color | backgroundCustomization.backgroundColor |
Please note: for the plugin to work, your browser version should support JavaScript ES6 and be the one listed below or newer.

| Browser | Minimum version |
| --- | --- |
| Google Chrome (and other browsers based on the Chromium engine) | 56 |
| Mozilla Firefox | 55 |
| Safari | 11 |
| Microsoft Edge* | 17 |
| Opera | 47 |

*Web SDK doesn't work in Internet Explorer compatibility mode due to the lack of important functions.
In most cases, the license is set on the server side (Web Adapter). This article covers the rare case when you use the Web Plugin only.
To generate the license, we need the domain name of the website where you are going to use Oz Forensics Web SDK, for instance, your-website.com. You can also define subdomains.
To find the origin, open the developer console on the page where you are going to embed Oz Web SDK and run window.origin. At localhost / 127.0.0.1, the license can work without this information.
Set the license as shown below:
With license data:
With license path:
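Below is a minimal sketch of both variants; the license and license_url option names are assumptions, so check your Web SDK version for the exact keys (the doc notes that a license set in OzLiveness.open() rewrites the previous one):

```javascript
// With license data (the content of your license file):
OzLiveness.open({
  license: licenseObject, // assumption: option name may differ per SDK version
});

// With license path:
OzLiveness.open({
  license_url: '/path/to/license.json', // assumption: option name may differ per SDK version
});
```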
Check whether the license is updated properly.
Example
Proceed to your website origin and launch Liveness -> Simple selfie.
Once the license is added, the system will check its validity on launch.
Web SDK changes
Simplified the checks that require user to move their head: turning left or right, tilting, or looking down.
Decreased the distance threshold for the head-moving actions: turning left or right, tilting, or looking down.
The application's behavior when the opened dev-tools are detected is now manageable.
You can now configure method signatures to make them trusted via checksum of the modified function.
Changed the wording for the head_down
gesture: the new wording is “tilt down”.
For possible regulatory requirements, updated the Web Adapter configuration file with a new parameter: extract_action_shot. If True, for each gesture, the system saves the corresponding image to display it in the report, e.g., closed eyes for blinking, instead of a random frame for the thumbnail.
Fixed an issue where an arrow incorrectly appeared after capturing head-movement gestures.
Fixed an issue where the oval disappeared when the "Great!" phrase was displayed.
Improved the selection of the best shot.
Security updates.
Security updates.
Resolved the issue where a video could not be generated from a sequence of frames.
The on_complete callback is now called upon folder status change.
Updated instructions for camera access in the Android Chrome and Facebook browsers. New keys:
error_no_camera_access
oz_tutorial_camera_android_chrome_with_screens_title
oz_tutorial_camera_android_chrome_instruction_screen_click_settings
oz_tutorial_camera_android_chrome_instruction_screen_permissions
oz_tutorial_camera_android_chrome_instruction_screen_allow_access
try_again
oz_tutorial_camera_external_browser_button
oz_tutorial_camera_external_browser_manual_open_link
oz_tutorial_camera_external_browser_title
Added the get_langs() method that returns a list of locales available in the installed Web SDK.
Added an error for the case of setting a non-available locale.
Added an error for the case when a necessary resource is missing. New key: unable_load_resource.
Changed texts for the error_connection_lost and error_service_unavailable errors.
Uploaded new Web SDK string files.
The crop function no longer adds borders for images smaller than 512×512.
In case of camera access timeout, we now display a page with instructions for users to enable camera access: default for all browsers and specific for Facebook.
Added several localization records to the Web SDK strings file. New localization keys:
accessing_camera_switch_to_another_browser
error_camera_timeout_instruction
error_camera_timeout_title
error_camera_timeout_android_facebook_instruction
Improved user experience for card printer machines. Users no longer need to move as close to the screen with the face frame.
Added the disable_adaptive_aspect_ratio parameter to the Web Plugin. This parameter switches off the default adjustment of the video aspect ratio to the window.
Implemented the get_user_media_timeout parameter for the Web Plugin: when the SDK can't get access to the user camera, it displays a hint on how to solve the problem after this timeout.
Added several localization records into the Web SDK strings file. New keys:
oz_tutorial_camera_android_edge_browser
oz_tutorial_camera_android_edge_instruction
oz_tutorial_camera_android_edge_title
error_camera_timeout_instruction
error_camera_timeout_title
Improved the localization: when the SDK can't find a translation for a key, it displays the message in English.
You can now distribute the serverless Web SDK via Node Package Manager.
You can switch off the display of API errors in modal windows: set the disable_adapter_errors_on_screen parameter in the configuration file to True.
The mobile browsers now use the rear camera to take photos of documents.
Updated samples.
Fixed the bug with abnormal 3D mask reaction when user needs to repeat a gesture.
Logging and security updates.
Fixed the bug where the warning about incorrect device orientation was not displayed when a mobile user attempted to take a video with their face in landscape orientation.
Some users may have experienced freezes while using WebView. Now, users can tap a button to continue working with the application. The corresponding string has been added to the localization section of the string file. Key: tap_to_continue.
Debugging improvements.
Major security updates: improved protection against virtual cameras and JavaScript tampering.
Improved WebView support:
Added camera access instructions for applications within the generic WebView browsers on Android and iOS. The corresponding events are added to telemetry.
Improved the React Native app integration by adding the webkit-playsinline attribute, thereby fixing the issue of the full-screen camera launch on iOS WebView.
The iframe usage error shown when iframe_allowed = False is now displayed properly.
New localization keys:
oz_tutorial_camera_android_webview_browser
oz_tutorial_camera_android_webview_instruction
oz_tutorial_camera_android_webview_title
You can now use Web SDK for the Black List analysis: to compare the face from your Liveness video with faces from your database. Create a collection (or collections) with these photos via API or Web UI, and add the corresponding ID (or IDs) to the analyses.collection_ids array in the Web Adapter configuration file.
The iframe support is back: set the iframe_allowed parameter in the Web Adapter configuration file to True.
The interval for polling for the analyses' results is now configurable. Change it in the results_polling_interval parameter of the Web Adapter configuration file if necessary.
You can now select the front or back camera via the Web Plugin. In the OzLiveness.open() method, set cameraFacingMode to user for the front camera or environment for the back one. This parameter only works when the use_for_liveness option in the Web Adapter configuration file is not set.
The plugin styles are now added automatically. Please remove <link rel="stylesheet" href="/plugin/ozliveness.css" /> from your page to prevent style conflicts.
Fixed some bugs and updated telemetry.
Improved the protection against injection attacks.
Replaced the code for Brazilian Portuguese from pt to pt-br to match the ISO standard.
Removed the lang_default adapter parameter.
The 3D mask transparency became customizable.
Implemented the possibility of using a master license that works with any domain.
Added the master_license_signature option to the Web Adapter configuration parameters.
Fixed some bugs.
Internal SDK improvements.
To enhance your clients’ experience with Web SDK, we implemented the 3D mask that replaces the oval during face capture. To make it work, set the load_3d_mask parameter in the configuration file settings to true.
Updated telemetry (logging).
Logging updates.
Security updates.
Internal SDK improvements.
Internal SDK improvements.
Fixed some bugs.
Changed the signature of the on_error() callback: it now returns an object with the error code, error message, and telemetry ID for logging.
Added the configuration parameter for the debug mode. If True, the Web SDK enables access to the /debug.php page, which contains information about the current configuration and the current license.
Fixed some bugs and improved logging.
If your device has multiple cameras, you can now choose one when launching the Web Plugin.
Implemented the new design for SDK and demo, including the scam protection option: the antiscam message warns users that their actions are being recorded. Please check the new customization options here.
Added the Portuguese, Spanish, and Kazakh locales.
Added the combo gesture.
Added the progress bar for media upload.
Removed the Zoom in / Zoom out gestures.
On tablets, you can now capture video in landscape orientation.
Removed the lang_allow option from the Web Adapter configuration file.
In the capture architecture, when a virtual camera is detected, the additional_info parameter is inside the from_virtual_camera section.
You can now crop the lossless frame without losing quality.
Fixed face landmarks for the capture architecture.
Improved the recording quality;
Reforged licensing:
added detailed error descriptions;
now you can set the license in JS during the runtime;
when you set a license in OzLiveness.open(), it rewrites the previous license;
the license no longer requires port and protocol;
you can now specify subdomains in the license;
upon the launch of the plugin on a server, the license payload is displayed in the Docker log;
localhost and 127.0.0.1 no longer ask for a license;
The on_capture_complete callback is now available on any architecture: it is called once a video is taken and returns info on actions from the video;
Oz Web Liveness and Oz Web Adapter versions are displayed in the Docker log upon launch;
Deleted the deprecated adapter_version field from order metadata;
Added the parameters to pass the information about the bounding box – landmarks that define where the face in the frame is;
Fixed the Switch camera button in Google Chrome;
Upon the start of Web SDK, the actual configuration parameters are displayed in the Docker log.
Changed the extension of some Oz system files from .bin to .dat.
Additional scripts are now called using the main script's address.
Web SDK can now be installed via static files only (works for the capture type of architecture).
Web SDK can now work with CDN.
Now, you can launch several Oz Liveness plugins on different pages. In this case, you need to specify the path to scripts in the head of these pages.
If you update the Web SDK version from 0.4.0, the license should be updated as well.
Fixed a bug with the shooting screen.
Added licensing (requires origin).
You can now customize the look-and-feel of Web SDK.
Fixed Angular integration.
Fixed the bug where the IMAGE_FOLDER section was missing in the JSON response with the lossless frame enabled.
Fixed issues with the ravenjs library.
The frame for taking a document photo is now customizable.
Implemented security updates.
Metadata now contains the names of all cameras you can use.
Video and zip formats now allow loading a lossless image.
Fixed Best Shot.
Separated the error code and error description in server responses.
If the SDK mode is set in the environment variables (architecture, api_url), it is passed to settings automatically.
In the Lite mode, you can select the best frame for any action.
In the Lite mode, an image sent via API gets the on_complete status only after a successful liveness check.
You can manage CORS using the environment variables (CORS headers are not added by default).
Added the folder value for result_mode: it returns the same value as status, but with folder_id.
Updated encryption: now only metadata required to decrypt an object is encrypted.
Updated data transfer: images are being sent in separate form fields.
Added the camera parameters check.
Enabled a new method for image encryption.
Optimized image transfer format.
Added the use_for_liveness option: mobile devices use the back camera by default; on desktop, flip and oval circling are off. By default, the option is switched off.
Decreased the video length for video_selfie_best (the Selfie gesture) from 1 to 0.2 sec.
Loading scripts is now customizable.
Improved UX.
Added the Kazakh locale.
Added a guide for accessing the camera on a desktop.
Improved logging: plugin_liveness.php requests and the user-agent are now recorded in the server log.
Added the Lite mode.
Added encryption.
Updated libraries.
You can now hide the Oz Forensics logo.
Updated a guide for Facebook, Instagram, Samsung, Opera.
Added handlers for unknown variables and a guide for “unknown” browsers.
Optimized memory usage for a frame.
Added a guide on how to switch cameras on using Android browsers.
APM
Analyses per minute. Please note:
Analysis is a request for Quality (Liveness) or Biometry analysis using a single media.
A single analysis with multiple media counts as separate analyses in terms of APM.
Multiple analysis types on single media (two media for Biometry) count as separate analyses in terms of APM.
PoC
Proof of Concept
Node
A Node is a worker machine. Can be either a virtual or a physical machine.
HA
High availability
K8s
Kubernetes
SC
StorageClass
RWX
ReadWriteMany
Oz API components:
APP is the API front app that receives REST requests, performs preprocessing, and creates tasks for other API components.
Celery is the asynchronous task queue. API has the following celery queues:
Celery-default processes system-wide tasks.
Celery-maintenance processes maintenance tasks.
Celery-tfss processes analysis tasks.
Celery-resolution checks for completion of all nested analyses within a folder and changes folder status.
Celery-preview_convert creates a video preview for media.
Celery-beat is a CronJob for managing maintenance celery tasks.
Celery-Flower is a Celery metrics collector.
Celery-regula (optional) processes document analysis tasks.
Redis is a message broker and result backend for Celery.
RabbitMQ (optional) can be used as a message broker for Celery instead of Redis.
Nginx serves static media files for external HTTP(s) requests.
O2N (optional) processes the Blacklist analysis.
Statistic (optional) collects statistics for Web UI.
Web UI provides the web interface.
BIO-Updater checks for model updates and downloads new models.
Oz BIO (TFSS) runs TensorFlow with AI models and makes decisions for incoming media.
The BIO-Updater and BIO components require access to the following external resources:
The deployment scenario depends on the workload you expect.
Use cases
Testing/Development purposes
Small installations with low number of APM
Typical usage with moderate load
High load with HA and autoscaling
Usage with cloud provider
~ APM
~ analyses per month
~ APM
analyses per month
APM
analyses per month
Environment
Docker
Docker
Kubernetes
HA
No
Partially
Yes
Pros
Requires a minimal amount of computing resources
Low complexity, so no high-qualified engineers are needed on-site
Easy to manage and support
Partially supports HA
Can be scaled up to support higher workload
HA and autoscaling
Observability and manageability
Allows high workload and can be scaled up
Cons
Suitable only for low loads, no high APM
No scaling and high-availability
API HA requires precise balancing
Higher staff qualification requirements
High staff qualification requirements
Additional infrastructure requirements
External resource requirements
PostgreSQL
For Kubernetes deployments:
K8s v1.25+
ingress-nginx
clusterIssuer
kube-metrics
Prometheus
clusterAutoscaler
PostgreSQL
Autoscaling is implemented on the basis of ClusterAutoscaler and must be supported by your infrastructure.
Please find the installation guide here: Docker.
Type of containerization: Docker,
Type of installation: Docker compose,
Autoscaling/HA: none.
Software
Docker 19.03+,
Podman 4.4+,
Python 3.4+.
Storage
Depends on media quality, the type and number of analyses, and the required archive depth.
May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days]. Please refer to this article for media size reference.
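For example, with an average media size of 2 MB, 10,000 analyses per day, and a 365-day archive depth: 2 MB * 2 * 10,000 * 365 ≈ 14.6 TB.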
Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.
Staff qualification:
Basic knowledge of Linux and Docker.
Single node.
Resources:
1 node,
16 CPU/32 RAM.
Two nodes.
Resources:
2 nodes,
16 CPU/32 RAM for the first node; 8 CPU/16 RAM for the second node.
Please find the installation guide here: Docker.
Type of containerization: Docker/Podman,
Type of installation: Docker compose,
Autoscaling/HA: manual scaling; HA is partially supported.
Computational resources
Depending on load, you can change the number of nodes. However, for 5+ nodes, we recommend that you proceed to the High Load section.
From 2 to 4 Docker nodes (see schemes):
2 Nodes:
24 CPU/32 RAM per node.
3 Nodes:
16 CPU/24 RAM per node.
4 Nodes:
8 CPU/16 RAM for two nodes (each),
16 CPU/24 RAM for two nodes (each).
We recommend using external self-managed PostgreSQL database and NFS share.
Software
Docker 19.03+,
Podman 4.4+,
Python 3.4+.
Storage
Depends on media quality, the type and number of analyses, and the required archive depth.
May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days]. Please refer to this article for media size reference.
Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.
Staff qualification:
Advanced knowledge of Linux, Docker, and Postgres.
2 nodes:
3 nodes:
4 nodes:
Please find the installation guide here: Kubernetes.
Type of containerization: Docker containers with Kubernetes orchestration,
Type of installation: Helm charts,
Autoscaling/HA: supports autoscaling; HA for most components.
Computational resources
3-4 nodes. Depending on load, you can change the number of nodes.
16 CPU/32 RAM Nodes for the BIO pods,
8+ CPU/16+ RAM Nodes for all other workload.
We recommend using external self-managed PostgreSQL database.
Requires RWX (ReadWriteMany) StorageClass or NFS share.
Software
Docker 19.03+,
Python 3.4+.
Storage
Depends on media quality, the type and number of analyses, and the required archive depth.
May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days]. Please refer to this article for media size reference.
Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.
Staff qualification:
Advanced knowledge of Linux, Docker, Kubernetes, and Postgres.
Deletes all action videos from file system (iOS 8.4.0 and higher, Android).
Returns
Future<Void>.
Returns the SDK version.
Returns
Future<String>.
Initializes SDK with license sources.
Parameter
Type
Description
licenses
List<String>
A list of licenses
Returns
Case
Text
True
Initialization has completed successfully
False
Initialization error
Authentication via credentials.
Parameter
Type
Description
email
String
User email
password
String
User password
host
String
Server URL
Returns
Case
Text
Success
Nothing (void)
Failed
PlatformException:
code = AUTHENTICATION_FAILED
message = exception details
Authentication via access token.
Parameter
Type
Description
token
String
Access token
host
String
Server URL
Returns
Case
Text
Success
Nothing (void)
Failed
PlatformException:
code = AUTHENTICATION_FAILED
message = exception details
Connection to the telemetry server via credentials.
Parameter
Type
Description
email
String
User email
password
String
User password
host
String
Server URL
Returns
Case
Text
Success
Nothing (void)
Failed
PlatformException:
code = AUTHENTICATION_FAILED
message = exception details
Connection to the telemetry server via access token.
Parameter
Type
Description
token
String
Access token
host
String
Server URL
Returns
Case
Text
Success
Nothing (void)
Failed
PlatformException:
code = AUTHENTICATION_FAILED
message = exception details
Checks whether an access token exists.
Returns
Case
Returns
Token exists
True
Token does not exist
False
Deletes the saved access token.
Returns
Nothing (void).
Returns the list of SDK supported languages.
Returns
List<Locale>.
Starts the Liveness video capturing process.
Parameter
Type
Description
actions
Actions to execute
mainCamera
Boolean
Use main (True) or front (False) camera
Sets the length of the Selfie gesture (in milliseconds).
Parameter
Type
Description
selfieLength
Int
The length of the Selfie gesture (in milliseconds). Should be within 500-5000 ms, the default length is 700
Returns
Error if any.
Launches the analyses.
Parameter
Type
Description
analysis
The list of Analysis structures
uploadMedia
The list of the captured videos
params
Map<String, Any>
Additional parameters
Returns
List<RequestResult>.
Sets the SDK localization.
Parameter
Type
Description
locale
The SDK language
The number of attempts before the SDK returns an error.
Parameter
Type
Description
singleCount
int
Attempts on a single action/gesture
commonCount
int
Total number of attempts on all actions/gestures if you use a sequence of them
Sets the UI customization values for OzLivenessSDK. The values are described in the Customization structures section. Structures can be found in the lib\customization.dart file.
Sets the timeout for the face alignment for actions.
Parameter
Type
Description
timeout
int
Timeout in milliseconds
Add fonts and drawable resources to the application/ios project.
Fonts and images should be placed into related folders:
ozforensics_flutter_plugin\android\src\main\res\drawable
ozforensics_flutter_plugin\android\src\main\res\font
These are defined in the customization.dart file.
Contains the information about customization parameters.
Toolbar customization parameters.
Parameter
Type
Description
closeButtonIcon
String
Close button icon received from plugin
closeButtonColor
String
Color #XXXXXX
titleText
String
Header text
titleFont
String
Text font
titleSize
int
Font size
titleFontStyle
String
Font style
titleColor
String
Color #XXXXXX
titleAlpha
int
Header text opacity
isTitleCentered
bool
Sets the text centered
backgroundColor
String
Header background color #XXXXXX
backgroundAlpha
int
Header background opacity
Center hint customization parameters.
Parameter
Type
Description
textFont
String
Text font
textFontStyle
String
Font style
textColor
String
Color #XXXXXX
textSize
int
Font size
verticalPosition
int
Y position
textAlpha
int
Text opacity
centerBackground
bool
Sets the text centered
Hint animation customization parameters.
Parameter
Type
Description
hideAnimation
bool
Hides the hint animation
animationIconSize
int
Animation icon size in px (40-160)
hintGradientColor
String
Color #XXXXXX
hintGradientOpacity
int
Gradient opacity
Frame around face customization parameters.
Parameter
Type
Description
geometryType
String
Frame shape received from plugin
geometryTypeRadius
int
Corner radius for rectangle
strokeWidth
int
Frame stroke width
strokeFaceNotAlignedColor
String
Color #XXXXXX
strokeFaceAlignedColor
String
Color #XXXXXX
strokeAlpha
int
Stroke opacity
strokePadding
int
Stroke padding
SDK version customization parameters.
Parameter
Type
Description
textFont
String
Text font
textFontStyle
String
Font style
textColor
String
Color #XXXXXX
textSize
int
Font size
textAlpha
int
Text opacity
Background customization parameters.
Parameter
Type
Description
backgroundColor
String
Color #XXXXXX
backgroundAlpha
int
Background opacity
Defined in the models.dart file.
Stores the language information.
Case
Description
en
English
hy
Armenian
kk
Kazakh
ky
Kyrgyz
tr
Turkish
es
Spanish
pt_br
Portuguese (Brazilian)
The type of media captured.
Case
Description
movement
A media with an action
documentBack
The back side of the document
documentFront
The front side of the document
The type of media captured.
Case
Description
documentPhoto
A photo of a document
video
A video
shotSet
A frame archive
Contains an action from the captured video.
Case
Description
blank
A video with no gesture
photoSelfie
A selfie photo
videoSelfieOneShot
A video with the best shot taken
videoSelfieScan
A video with the scanning gesture
videoSelfieEyes
A video with the blink gesture
videoSelfieSmile
A video with the smile gesture
videoSelfieHigh
A video with the lifting head up gesture
videoSelfieDown
A video with the tilting head downwards gesture
videoSelfieRight
A video with the turning head right gesture
videoSelfieLeft
A video with the turning head left gesture
photoIdPortrait
A photo from a document
photoIdBack
A photo of the back side of the document
photoIdFront
A photo of the front side of the document
Stores information about media.
Parameter
Type
Description
Platform
fileType
The type of the file
Android
movement
An action on a media
iOS
mediatype
String
A type of media
iOS
videoPath
String
A path to a video
bestShotPath
String
A path to the best shot in PNG for a video, or the image path for liveness
preferredMediaPath
String
URL of the API media container
photoPath
String
A path to a photo
archivePath
String
A path to an archive
tag
A tag for media
Android
Stores information about the analysis result.
Parameter
Type
Description
Platform
folderId
String
The folder identifier
type
The analysis type
errorCode
int
The error code
Android only
errorMessage
String
The error message
mode
The mode of the analysis
confidenceScore
Double
The resulting score
resolution
The completed analysis' result
status
Boolean
The analysis state:
true
- success;
false
- failed
Stores data about a single analysis.
Parameter
Type
Description
type
The type of the analysis
mode
The mode of the analysis
mediaList
Media to analyze
params
Map<String, String>
Additional analysis parameters
sizeReductionStrategy
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully
Analysis type.
Case
Description
biometry
The algorithm that allows comparing several media and checking whether the people on them are the same person or not
quality
The algorithm that aims to check whether a person in a video is a real human acting in good faith, not a fake of any kind.
Analysis mode.
Case
Description
onDevice
The on-device analysis with no server needed
serverBased
The server-based analysis
hybrid
The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Contains the action from the captured video.
Case
Description
oneShot
The best shot from the video taken
blank
A selfie with face alignment check
scan
Scan
headRight
Head turned right
headLeft
Head turned left
headDown
Head tilted downwards
headUp
Head lifted up
eyeBlink
Blink
smile
Smile
The general status for all analyses applied to the folder created.
Case
Description
failed
One or more analyses failed due to some error and couldn't get finished
declined
The check failed (e.g., faces don't match or some spoofing attack detected)
success
Everything went fine, the check succeeded (e.g., faces match or liveness confirmed)
operatorRequired
The result should be additionally checked by a human operator
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
uploadOriginal
The original video
uploadCompressed
The compressed video
uploadBestShot
The best shot taken from the video
uploadNothing
Nothing is sent (note that no folder will be created)
This is a Map to define the platform-specific resources on the plugin level.
This key is a Map for the close button icon.
Key
Value
Close
Android drawable resource / iOS Pods resource
Arrow
Android drawable resource / iOS Pods resource
This key is a Map containing the data on the uploaded fonts.
Key
Value
Flutter application font name
Android font resource / iOS Pods resource, used to retrieve the font on the plugin level
This key is a Map containing the data on the uploaded font styles.
Key
Value
Flutter application font style name
Name of the style retrieved for the font creation on the plugin level
This key is a Map containing the data on frame shape.
Key
Value
Oval
Oval shape
Rectangle
Rectangular shape
To launch the services, you'll require:
CPU: 16 cores,
RAM: 32 GB,
Disk: 100 GB, SSD,
Linux-compatible OS,
Docker 19.03+ (or Podman 4.4+),
Docker Compose 1.27+ (or podman-compose 1.2.0+, if you use Podman).
For Docker installations with multiple API servers, you'll also require shared volume or NFS.
The package you get consists of the following directories and files:
Put the license file in ./configs/tfss/license.key.
Unzip the file that contains models into the ./data/tfss/models directory.
Before starting system configuration, we recommend running the host readiness check scripts: navigate to the checkers directory and run the pre-checker-all.sh script.
Set the initial passwords and values:
For this configuration, run all services on a single host:
We recommend using PostgreSQL as a container only for testing purposes. For production deployment, use a standalone database.
Create a directory and unzip the distribution package into it. The package contains Docker Compose manifests and directories with the configuration files required for operation.
Put the license file in ./configs/tfss/license.key.
Unzip the file that contains models into the ./data/tfss/models directory.
Before starting system configuration, we recommend running the host readiness check scripts: navigate to the checkers directory and run the pre-checker-all.sh script.
For this configuration, run the TFSS service on a separate host:
Create a directory and unzip the distribution package into it. The package contains Docker Compose manifests and directories with the configuration files required for operation.
Before starting system configuration, we recommend running the host readiness check scripts: navigate to the checkers directory and run the pre-checker-all.sh script.
For this configuration, run all services on a single host:
In this article, you’ll learn how to capture videos and send them through your backend to Oz API.
Here is the data flow for your scenario:
1. Oz Web SDK takes a video and makes it available for the host application as a frame sequence.
2. The host application sends an archive of these frames to your backend.
3. After the necessary preprocessing steps, your backend calls Oz API, which performs all necessary analyses and returns the analyses’ results.
4. Your backend responds back to the host application if needed.
On the server side, Web SDK must be configured to operate in the Capture mode: the architecture parameter must be set to capture in the app_config.json file.
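For reference, a minimal app_config.json fragment (all other parameters omitted):

```json
{
  "architecture": "capture"
}
```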
In your Web app, add a callback to process captured media when opening the Web SDK plugin:
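A minimal sketch of such a callback; on_capture_complete is documented in this article, while the action option value and the handleFrames() helper are illustrative assumptions:

```javascript
OzLiveness.open({
  action: ['video_selfie_blank'], // assumption: gesture list per your scenario
  on_capture_complete: function (result) {
    // result contains best_frame, frame_list, action, additional_info, etc.
    handleFrames(result); // hypothetical helper that forwards frames to your backend
  },
});
```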
The result object structure depends on whether any virtual camera is detected or not.
Here’s the list of variables with descriptions.
Variable
Type
Description
best_frame
String
The best frame, JPEG in the data URL format
best_frame_png
String
The best frame, PNG in the data URL format, it is required for protection against virtual cameras when video is not used
best_frame_bounding_box
Array[Named_parameter: Int]
The coordinates of the bounding box where the face is located in the best frame
best_frame_landmarks
Array[Named_parameter: Array[Int, Int]]
The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the best frame
frame_list
Array[String]
All frames in the data URL format
frame_bounding_box_list
Array[Array[Named_parameter: Int]]
The coordinates of the bounding boxes where the face is located in the corresponding frames
frame_landmarks
Array[Named_parameter: Array[Int, Int]]
The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the corresponding frames
action
String
An action code
additional_info
String
Information about client environment
The video from Oz Web SDK is a frame sequence, so, to send it to Oz API, you’ll need to archive the frames and transmit them as a ZIP file via the POST /api/folders request (check our Postman collections).
You can retrieve the MP4 video from a folder using the /api/folders/{{folder_id}} request with this folder's ID. In the JSON that you receive, look for preview_url in source_media; the preview_url parameter contains the link to the video. From the plugin, MP4 videos are unavailable (only frame sequences are).
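A sketch of retrieving the preview link; it assumes source_media is returned as an array of media objects and that your deployment uses the standard Oz API token header:

```javascript
const res = await fetch(`{{host}}/api/folders/${folderId}`, {
  headers: { 'X-Forensic-Access-Token': token }, // Oz API access token
});
const folder = await res.json();
// preview_url inside source_media items contains the link to the MP4 video
const previewUrls = folder.source_media.map((m) => m.preview_url);
```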
Also, in the POST {{host}}/api/folders request, you need to add the additional_info field. It is required in the capture architecture mode to gather the necessary information about the client environment. Here’s an example of filling in the request’s body:
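A sketch of such a request; the payload form field name is an assumption, while additional_info comes from the plugin result object:

```javascript
const form = new FormData();
form.append('media', zipBlob, 'frames.zip'); // ZIP archive of the captured frames
form.append('payload', JSON.stringify({
  additional_info: result.additional_info, // client environment info, required for capture mode
}));

await fetch('{{host}}/api/folders/', {
  method: 'POST',
  headers: { 'X-Forensic-Access-Token': token }, // Oz API access token
  body: form,
});
```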
Oz API accepts data without the base64 encoding.
Even though the analysis result is available to the host application via Web Plugin callbacks, it is recommended that the application back end receives it directly from Oz API. All decisions of the further process flow should be made on the back end as well. This eliminates any possibility of malicious manipulation with analysis results within the browser context.
To find your folder from the back end, you can follow these steps:
On the front end, add your unique identifier to the folder metadata.
You can add your own key-value pairs to attach user document numbers, phone numbers, or any other textual information. However, ensure that tracking personally identifiable information (PII) complies with relevant regulatory requirements.
Use the on_complete callback of the plugin to be notified when the analysis is done. Once it fires, call your back end and pass the transaction_id value.
On the back end side, find the folder by the identifier you've specified using the Oz API Folder LIST method. To speed up the processing of your request, we recommend adding a time filter as well, as in the sketch below:
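A sketch of the lookup request; the meta_data filter syntax is an assumption, so check the Oz API documentation for the exact format (time.created.min, total_omit, and limit are the query parameters recommended elsewhere in this documentation):

```javascript
const query = new URLSearchParams({
  'meta_data.transaction_id': transactionId, // assumption: your unique identifier filter
  'time.created.min': '2024-01-01T00:00:00Z', // time filter to narrow the search
  total_omit: 'true',
  limit: '10',
});

await fetch(`{{host}}/api/folders/?${query}`, {
  headers: { 'X-Forensic-Access-Token': token },
});
```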
In the response, find the analysis results and folder_id for future reference.
Web Adapter may send analysis results to the Web Plugin with various levels of verbosity. It is recommended that, in production, the level of verbosity is set to minimum.
In the Web Adapter configuration file, set the result_mode parameter to "safe":
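For reference, the corresponding fragment of the Web Adapter configuration file (other parameters omitted):

```json
{
  "result_mode": "safe"
}
```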
A singleton for Oz SDK.
Deletes all action videos from file system.
Parameters
-
Returns
-
Creates an intent to start the Liveness activity.
Returns
-
Utility function to get the SDK error from OnActivityResult's intent.
Returns
Retrieves the SDK license payload.
Parameters
-
Returns
Utility function to get SDK results from OnActivityResult's intent.
Returns
A list of OzAbstractMedia objects.
Initializes SDK with license sources.
Returns
-
Enables logging using the Oz Liveness SDK logging mechanism.
Returns
-
Connection to API.
Connection to the telemetry server.
Deletes the saved token.
Parameters
-
Returns
-
Retrieves the telemetry session ID.
Parameters
-
Returns
The telemetry session ID (String parameter).
Retrieves the SDK version.
Parameters
-
Returns
The SDK version (String parameter).
Generates the payload with media signatures.
Returns
Payload to be sent along with media files that were used for generation.
A class for performing checks.
The analysis launching method.
A builder class for AnalysisRequest.
Creates the AnalysisRequest instance.
Parameters
-
Returns
Adds an analysis to your request.
Returns
Error if any.
Adds a list of analyses to your request. Allows executing several analyses for the same folder on the server side.
Returns
Error if any.
Adds metadata to a folder you create (for the server-based analyses only). You can add key-value pairs as additional information to the folder with the analysis result on the server side.
Returns
Error if any.
Uploads one or more media to a folder.
Returns
Error if any.
Sets a folderId for a previously created folder. The folder should exist on the server side; otherwise, a new folder will be created.
Returns
Error if any.
Configuration for OzLivenessSDK (use OzLivenessSDK.config).
Sets the length of the Selfie gesture (in milliseconds).
Returns
Error if any.
The possibility to enable additional debug info by clicking on the version text.
The number of attempts before the SDK returns an error.
Settings for repeated media upload.
Timeout for face alignment (measured in milliseconds).
Interface implementation to retrieve error by Liveness detection.
Locale to display string resources.
Logging settings.
Uses the main (rear) camera instead of the front camera for liveness detection.
Disables the option that prevents videos from being too short (3 frames or fewer).
Customization for OzLivenessSDK (use OzLivenessSDK.config.customization
).
Hides the status bar and the three buttons at the bottom. The default value is True.
A set of customization parameters for the toolbar.
A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.
A set of customization parameters for the hint animation.
A set of customization parameters for the frame around the user face.
A set of customization parameters for the background outside the frame.
A set of customization parameters for the SDK version text.
A set of customization parameters for the antiscam message that warns users that their actions are being recorded.
Logo customization parameters. Custom logo should be allowed by license.
Contains the action from the captured video.
Contains the extended info about licensing conditions.
A class for the captured media that can be:
A document photo.
A set of shots in an archive.
A Liveness video.
Contains an action from the captured video.
A class for license that can be:
Contains the license ID.
Contains the path to a license.
A class for analysis status that can be:
This status means the analysis is launched.
This status means the media is being uploaded.
The type of the analysis.
Currently, the DOCUMENTS analysis can't be performed in the on-device mode.
The mode of the analysis.
Contains information on what media to analyze and what analyses to apply.
The general status for all analyses applied to the folder created.
Holder for attempt counts before the SDK returns an error.
Contains logging settings.
A class for color that can be (depending on the value received):
Frame shape settings.
Exception class for AnalysisRequest.
Structure that describes media used in AnalysisRequest.
Structure that describes the analysis result for the single media.
Consolidated result for all analyses performed.
Result of the analysis for all media it was applied to.
Defines the authentication method.
Authentication via token.
Authentication via credentials.
Defines the settings for the repeated media upload.
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
The Oz system consists of many interconnected components.
Most of the components require scaling to ensure functionality during an increase in APM. The scheme below represents how the system processes an analysis using supporting software.
This guide explains how to scale components and improve performance for both Docker and Kubernetes deployments.
For K8s deployments, use HPA. In the chart values, you'll find the use_hpa parameter for each component that supports HPA.
BIO is the most resource-consuming component as it performs media analysis processing.
The BIO component may take up to 10 minutes to start (applicable for versions <1.2).
Scaling BIO nodes might be challenging during a rapid increase in APM.
For Docker installations, plan the required number of components to handle peak loads.
For Kubernetes installations, schedule the creation of additional BIO pods to manage demand using cronjobHpaUpdater in values.yaml.
CPU must support avx, avx2, fma, sse4.1, and sse4.2.
If the CPU supports avx512f instructions, you can use a specific BIO image to slightly increase performance.
We recommend using Intel CPUs. AMD CPUs are also supported but require additional configuration.
For better performance, each BIO should reside on a dedicated node with reserved resources.
APM expectations may vary depending on CPU clock speed (GHz).
Recommended: 2.5+ GHz.
For better performance: 3.0+ GHz.
Each BIO instance can handle an unlimited number of simultaneous requests. However, as the number increases, the execution time for each request grows.
The assignment of analyses to BIO instances is managed by celery-tfss workers. Each celery-tfss instance is configured to handle 2 analyses simultaneously, as this provides optimal performance and analysis time. For Kubernetes installations, this behavior can be adjusted using the concurrency parameter in values.yaml. In the default configuration, the number of celery-tfss instances should match the number of BIOs.
The required number of BIOs can be calculated using the formula below, based on the average analysis time and the expected number of analyses:
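N(BIO) = n(APM) × t(analysis) / (60 × C)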
Here,
N(BIO) is the required number of BIO nodes,
n(APM) is the expected number of APM,
t(analysis) is the measured average duration of a single analysis (3 seconds by default),
C is concurrency (2 by default).
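For example, at an expected load of 120 APM with the default values t(analysis) = 3 s and C = 2: N(BIO) = 120 × 3 / (60 × 2) = 3 BIO instances.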
For a large number of requests involving .zip archives (e.g., if you use Web SDK), you might require additional scaling of the celery-preview_convert service.
Oz API is not intended to be a long-term analysis storage system, so you might encounter longer API response times after 10-30 million folders are stored in the database (depending on the database performance). Thus, we recommend archiving or deleting data from API after one year of storage.
For the self-managed database, you'll require the following resources:
API
CPU: 4 cores (up to 8),
RAM: 8 GB (up to 16),
SSD storage.
O2N
CPU: 4 cores (up to 8),
RAM: equal to the database size,
SSD storage.
O2N database requires the Pgvector extension.
Do not create indexes for O2N database as they will reduce accuracy.
For O2N, parallelism is crucial.
Scale up RAM with the growth of your database.
If the number of active connections allows you to stay within the shared_buffers limit, you can increase work_mem.
To increase API performance, consider using this list of indexes:
For high-load environments, achieving the best performance requires precise database tuning. Please contact our support for assistance.
This table represents analysis duration for different gestures.
Below is a list of recommended and non-recommended practices for using Oz API methods.
/authorize
Refresh expired token when possible.
Monitor token expiration.
Use the service token only when appropriate.
Making requests with an expired token.
Creating a new token instead of refreshing an expiring one.
Requesting a new token on each API call.
/api/companies
Create a new named company for your deployment.
Avoid
Using the default system_company
.
/api/users
Create named accounts for users.
Create a separate named service account for each business process.
Sharing accounts between users.
Creating a new user for each analysis request.
/api/healthcheck
Using healthcheck too frequently, as it may overwhelm the system with internal checks.
/api/folders
Always use the time-limiting query parameters: time.created.min and time.created.max.
Use the total_omit=true query parameter.
Use the folder_id query parameter when you know the folder ID.
Use the technical_meta_data query parameter in case you have meta_data set.
Use the limit query parameter when possible.
Using the with_analyzes=true query parameter when it is unnecessary, in requests involving long time periods or unspecified durations.
Using the technical_meta_data query parameter with unspecified additional parameters or time period.
/api/folders
Placing too much data in the meta_data payload.
Set the initial passwords and values as described in .
The – String.
The license payload () – the object that contains the extended info about licensing conditions.
The class instance.
Contains the locale code according to .
The duration of the analysis depends on the gesture used for Liveness. Passive gestures are generally analyzed faster, while active gestures provide higher accuracy but take more time. The longer the gesture takes, the longer the analysis will take.
API performance mainly depends on database and storage performance. Most of the available methods use a database to return results. Thus, to maintain low API response time, we recommend using a high-performance database. Additionally, consider reducing the number of requests for analysis results made via the /api/folders methods.
Parameter
Type
Description
actions
A list of possible actions
Parameter
Type
Description
data
Intent
The object to test
Parameter
Type
Description
data
Intent
The object to test
Parameter
Type
Description
context
Context
The Context class
licenseSources
A list of license references
statusListener
StatusListener
Optional listener to check the license load result
Parameter
Type
Description
tag
String
Message tag
log
String
Message log
Parameter
Type
Description
connection
Connection type
statusListener
StatusListener<String?>
Listener
Parameter
Type
Description
connection
Connection type
statusListener
StatusListener<String?>
Listener
Parameter
Type
Description
media
An array of media files
folderMeta (optional)
[string:any]
Additional folder metadata
Parameter
Type
Description
onStatusChange
A callback function as follows:
onStatusChange(status: AnalysisRequest.AnalysisStatus) { handleStatus() }
The function is executed when the status of the AnalysisRequest changes.
onError
A callback function as follows:
onError(error: OzException) { handleError() }
The function is executed in case of errors.
onSuccess
The function is executed when all the analyses are completed.
Parameter
Type
Description
analysis
A structure for analysis
Parameter
Type
Description
analysis
[Analysis]
A list of Analysis structures
Parameter
Type
Description
key
String
Key for metadata.
value
String
Value for metadata.
Parameter
Type
Description
mediaList
An OzAbstractMedia object or a list of objects.
Parameter
Type
Description
folderID
String
A folder identifier.
Parameter
Type
Description
selfieLength
Int
The length of the Selfie gesture (in milliseconds). Should be within 500-5000 ms, the default length is 700
Parameter
Type
Description
allowDebugVisualization
Boolean
Enables or disables the debug info.
Parameter
Type
Description
attemptsSettings
Sets the number of attempts
Parameter
Type
Description
uploadMediaSettings
Sets the number of attempts and timeout between them
Parameter
Type
Description
faceAlignmentTimeout
Long
A timeout value
Parameter
Type
Description
livenessErrorCallback
ErrorHandler
A callback value
Parameter
Type
Description
localizationCode
A locale code
Parameter
Type
Description
logging
Logging settings
Parameter
Type
Description
useMainCamera
Boolean
True – rear camera, False – front camera
Parameter
Type
Description
disableFramesCountValidation
Boolean
True – validation is off, False – validation is on
Parameter
Type
Description
closeIconRes
Int (@DrawableRes)
An image for the close button
closeIconTint
Close button color
titleTextFont
Int (@FontRes)
Toolbar title text font
titleTextFontStyle
Int (values from android.graphics.Typeface properties, e.g., Typeface.BOLD)
Toolbar title text font style
titleTextSize
Int
Toolbar title text size (in sp, 12-18)
titleTextAlpha
Int
Toolbar title text opacity (in %, 0-100)
titleTextColor
Toolbar title text color
backgroundColor
Toolbar background color
backgroundAlpha
Int
Toolbar background opacity (in %, 0-100)
isTitleCentered
Boolean
Defines whether the text on the toolbar is centered or not
title
String
Text on the toolbar
Parameter
Type
Description
textFont
String
Center hint text font
textStyle
Int (values from android.graphics.Typeface properties, e.g., Typeface.BOLD)
Center hint text style
textSize
Int
Center hint text size (in sp, 12-34)
textColor
Center hint text color
textAlpha
Int
Center hint text opacity (in %, 0-100)
verticalPosition
Int
Center hint vertical position from the screen bottom (in %, 0-100)
backgroundColor
Center hint background color
backgroundOpacity
Int
Center hint background opacity
backgroundCornerRadius
Int
Center hint background frame corner radius (in dp, 0-20)
Parameter
Type
Description
hintGradientColor
Gradient color
hintGradientOpacity
Int
Gradient opacity
animationIconSize
Int
A side size of the animation icon square
hideAnimation
Boolean
A switcher for hint animation; if True, the animation is hidden
Parameter
Type
Description
geometryType
The frame type: oval, rectangle, circle, square
cornerRadius
Int
Rectangle corner radius (in dp, 0-20)
strokeDefaultColor
Frame color when a face is not aligned properly
strokeFaceInFrameColor
Frame color when a face is aligned properly
strokeAlpha
Int
Frame opacity (in %, 0-100)
strokeWidth
Int
Frame stroke width (in dp, 0-20)
strokePadding
Int
A padding from the stroke to the face alignment area (in dp, 0-10)
Parameter
Type
Description
backgroundColor
Background color
backgroundAlpha
Int
Background opacity (in %, 0-100)
Parameter
Type
Description
textFont
Int (@FontRes)
SDK version text font
textSize
Int
SDK version text size (in sp, 12-16)
textColor
SDK version text color
textAlpha
Int
SDK version text opacity (in %, 20-100)
Parameter
Type
Description
textMessage
String
Antiscam message text
textFont
String
Antiscam message text font
textSize
Int
Antiscam message text size (in px, 12-18)
textColor
Antiscam message text color
textAlpha
Int
Antiscam message text opacity (in %, 0-100)
backgroundColor
Antiscam message background color
backgroundOpacity
Int
Antiscam message background opacity
cornerRadius
Int
Background frame corner radius (in px, 0-20)
flashColor
Color of the flashing indicator close to the antiscam message
Parameter
Type
Description
image
Bitmap (@DrawableRes)
Logo image
size
Size
Logo size (in dp)
Case
Description
OneShot
The best shot from the video taken
Blank
A selfie with face alignment check
Scan
Scan
HeadRight
Head turned right
HeadLeft
Head turned left
HeadDown
Head tilted downwards
HeadUp
Head lifted up
EyeBlink
Blink
Smile
Smile
Parameter
Type
Description
expires
Float
The expiration interval
features
Features
License features
appIDS
[String]
An array of bundle IDs
Parameter
Type
Description
tag
A tag for a document photo.
photoPath
String
An absolute path to a photo.
additionalTags (optional)
String
Additional tags if needed (including those not from the OzMediaTag enum).
metaData
Map<String, String>
Media metadata
Parameter
Type
Description
tag
A tag for a shot set
archivePath
String
A path to an archive
additionalTags (optional)
String
Additional tags if needed (including those not from the OzMediaTag enum)
metaData
Map<String, String>
Media metadata
Parameter
Type
Description
tag
A tag for a video
videoPath
String
A path to a video
bestShotPath (optional)
String
URL of the best shot in PNG
preferredMediaPath (optional)
String
URL of the API media container
additionalTags (optional)
String
Additional tags if needed (including those not from the OzMediaTag enum)
metaData
Map<String, String>
Media metadata
Case
Description
Blank
A video with no gesture
PhotoSelfie
A selfie photo
VideoSelfieOneShot
A video with the best shot taken
VideoSelfieScan
A video with the scanning gesture
VideoSelfieEyes
A video with the blink gesture
VideoSelfieSmile
A video with the smile gesture
VideoSelfieHigh
A video with the lifting head up gesture
VideoSelfieDown
A video with the tilting head downwards gesture
VideoSelfieRight
A video with the turning head right gesture
VideoSelfieLeft
A video with the turning head left gesture
PhotoIdPortrait
A photo from a document
PhotoIdBack
A photo of the back side of the document
PhotoIdFront
A photo of the front side of the document
Parameter
Type
Description
id
Int
License ID
Parameter
Type
Description
path
String
An absolute path to a license
Parameter
Type
Description
analysis
Contains information on what media to analyze and what analyses to apply.
Parameter
Type
Description
media
The object that is being uploaded at the moment
index
Int
Number of this object in a list
from
Int
Objects quantity
percentage
Int
Completion percentage
Case
Description
BIOMETRY
The algorithm that allows comparing several media and checking whether the people on them are the same person or not
QUALITY
The algorithm that aims to check whether a person in a video is a real human acting in good faith, not a fake of any kind.
DOCUMENTS
The analysis that aims to recognize the document and check if its fields are correct according to its type.
Case
Description
ON_DEVICE
The on-device analysis with no server needed
SERVER_BASED
The server-based analysis
HYBRID
The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Parameter
Type
Description
type
Type
The type of the analysis
mode
Mode
The mode of the analysis
mediaList
An array of the OzAbstractMedia objects
params (optional)
Map<String, Any>
Additional parameters
sizeReductionStrategy
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully
Case
Description
FAILED
One or more analyses failed due to some error and couldn't get finished
DECLINED
The check failed (e.g., faces don't match or some spoofing attack detected)
SUCCESS
Everything went fine, the check succeeded (e.g., faces match or liveness confirmed)
OPERATOR_REQUIRED
The result should be additionally checked by a human operator
Parameter
Type
Description
singleCount
Int
Attempts on a single action/gesture
commonCount
Int
Total number of attempts on all actions/gestures if you use a sequence of them
Case
Description
EN
English
HY
Armenian
KK
Kazakh
KY
Kyrgyz
TR
Turkish
ES
Spanish
PT-BR
Portuguese (Brazilian)
Parameter
Type
Description
allowDefaultLogging
Boolean
Allows logging to LogCat
allowFileLogging
Boolean
Allows logging to an internal file
journalObserver
StatusListener
An event listener to receive journal events on the application side
Parameter
Type
Description
resId
Int
Link to the color in the Android resource system
Parameter
Type
Description
hex
String
Color hex (e.g., #FFFFFF)
Parameter
Type
Description
color
Int
The Int value of a color in Android
Case
Description
Oval
Oval frame
Rectangle
Rectangular frame
Circle
Circular frame
Square
Square frame
Parameter
Type
Description
apiErrorCode
Int
Error code
message
String
Error message
Parameter
Type
Description
mediaId
String
Media identifier
mediaType
String
Type of the media
originalName
String
Original media name
ozMedia
Media object
tags
List<String>
Tags for media
Parameter
Type
Description
confidenceScore
Float
Resulting score
isOnDevice
Boolean
Mode of the analysis
resolution
Consolidated analysis result
sourceMedia
Source media
type
Type of the analysis
Parameter
Type
Description
analysisResults
List<AnalysisResult>
Analysis result
folderId
String
Folder identifier
resolution
Consolidated analysis result
Parameter
Type
Description
resolution
Consolidated analysis result
type
Type of the analysis
mode
Mode of the analysis
resultMedia
List<ResultMedia>
A list of results of the analyses for single media
confidenceScore
Float
Resulting score
analysisId
String
Analysis identifier
params
@RawValue Map<String, Any>
Additional folder parameters
error
Error if any
serverRawResponse
String
Response from backend
Parameter
Type
Description
host
String
API address
token
String
Access token
Parameter
Type
Description
host
String
API address
username
String
User name
password
String
Password
Parameter
Type
Description
attemptsCount
Int
Number of attempts for media upload
attemptsTimeout
Int
Timeout between attempts
Case
Description
UPLOAD_ORIGINAL
The original video
UPLOAD_COMPRESSED
The compressed video
UPLOAD_BEST_SHOT
The best shot taken from the video
UPLOAD_NOTHING
Nothing is sent (note that no folder will be created)
Error Code
Error Message
Description
ERROR = 3
Error.
An unknown error has happened
ATTEMPTS_EXHAUSTED_ERROR = 4
Error. Attempts exhausted for liveness action.
The number of action attempts is exceeded
VIDEO_RECORD_ERROR = 5
Error by video record.
An error happened during video recording
NO_ACTIONS_ERROR = 6
Error. OzLivenessSDK started without actions.
No actions found in a video
FORCE_CLOSED = 7
Error. Liveness activity is force closed from client application.
A user closed the Liveness screen during video recording
DEVICE_HAS_NO_FRONT_CAMERA = 8
Error. Device has not front camera.
No front camera found
DEVICE_HAS_NO_MAIN_CAMERA = 9
Error. Device has not main camera.
No rear camera found
DEVICE_CAMERA_CONFIGURATION_NOT_SUPPORTED = 10
Error. Device camera configuration is not supported.
Oz Liveness doesn't support the camera configuration of the device
FACE_ALIGNMENT_TIMEOUT = 12
Error. Face alignment timeout in OzLivenessSDK.config.faceAlignmentTimeout milliseconds
Time limit for the face alignment is exceeded
ERROR = 13
The check was interrupted by user
User has closed the screen during the Liveness check.
Gesture
Average analysis time, s
50th percentile, s
video_selfie_best
3.895
3.13
video_selfie_blank
5.491
4.984
video_selfie_down
11.051
8.052
video_selfie_eyes
9.953
7.399
video_selfie_high
10.937
8.112
video_selfie_left
9.713
7.558
video_selfie_right
9.95
7.446
video_selfie_scan
9.425
7.29
video_selfie_smile
10.25
7.621
To install Oz products in Kubernetes, consider using Helm charts:
- Oz API and related components: Helm chart.
  - API 5.2: chart version 0.11.x,
  - API 5.3 (regulatory update for Kazakhstan): chart version 0.12.x.
- Web SDK: Helm chart. Please note that the chart version is not tied to the Web SDK version.
For testing purposes, the database installed and created automatically by the chart is sufficient. However, for production, we strongly recommend using a separate, self-managed database.
Recommended PostgreSQL version: 15.5.
Create a database using the script(s) below.
To increase performance, consider using this list of indexes:
The API and Web SDK charts require an RWX StorageClass (CephFS, EFS, NFS, Longhorn, etc.).
To deploy in Kubernetes, download the chart version you require and adjust the `values.yaml` file. This file specifies the deployment parameters for Oz products.
This example is based on the 0.11.28 chart version.
Adjust the `values.yaml` file, setting the following mandatory parameters before deployment (a sketch of the resulting file follows the list):
- `ozDockerHubCreds`: you'll receive them from an Oz engineer.
- `UserParams`:
  - `URLs`:
    - `apiURL`: the URL for API. It may be internal if you use Web SDK only; for Mobile SDKs, it should be public. Please refer to this article for more information.
  - `DB`: must be set if you use an external PostgreSQL server. For details, please check Database Creation.
    - `use_chart_postgres`: false by default. Enables the internal PostgreSQL server (not recommended for production).
    - `postgresUser`: same as <<USERNAME>>.
    - `postgresHost`: the hostname of your PostgreSQL server.
    - `postgresDB`: same as <<DB_NAME>>.
    - `postgresUserPassword`: same as <<PASSWORD>>.
    - `postgresPort`: 5432 by default.
  - `o2nDB`:
    - `use_chart_o2nDB`: false by default. Enables the internal PostgreSQL server (not recommended for production).
    - `startinit`: true by default. Enables the database init scripts. Set to false after the chart is deployed.
    - `creds`:
      - `postgresHost`: the hostname of your PostgreSQL server with the O2N database.
      - `postgresPort`: 5432 by default.
      - `postgresDB`: same as <<O2N_DB_NAME>>.
      - `postgresUser`: same as <<O2N_USERNAME>>.
      - `postgresUserPassword`: same as <<O2N_PASSWORD>>.
  - `Creds`:
    - `apiAdminLogin`: the login for the new (default) API user; it will be created on the first run.
    - `apiAdminPass`: the password for the default user.
    - `webUILocalAdminLogin`: the local admin login for Web UI. Should differ from `apiAdminLogin`.
    - `webUILocalAdminPass`: the password for `webUILocalAdminLogin`.
  - `BIO`:
    - `licenseKey`: you'll receive it from an Oz engineer or the sales team.
    - `clientToken`: you'll receive it from an Oz engineer.
- `pvc`:
  - `api`:
    - `static`:
      - `storageClassName`: an RWX StorageClass.
      - `size`: the expected size of the PV.
- `Params`:
  - `Global`:
    - `startinits`: false by default. Set to true on the first run; after a successful deployment, change it back to false.
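For orientation, here is a minimal sketch of how these settings map onto `values.yaml`. All values are placeholders, and the exact key layout may differ between chart versions, so treat the `values.yaml` bundled with your chart as authoritative:

```yaml
# Hypothetical sketch: values are placeholders; verify the layout
# against the values.yaml shipped with your chart version.
ozDockerHubCreds: "<registry-credentials-from-oz>"

UserParams:
  URLs:
    apiURL: "https://api.example.com"    # public if Mobile SDKs are used
  DB:
    use_chart_postgres: false            # keep false in production
    postgresHost: "postgres.internal"
    postgresPort: 5432
    postgresDB: "<<DB_NAME>>"
    postgresUser: "<<USERNAME>>"
    postgresUserPassword: "<<PASSWORD>>"
  o2nDB:
    use_chart_o2nDB: false
    startinit: true                      # set to false after the chart is deployed
    creds:
      postgresHost: "postgres.internal"
      postgresPort: 5432
      postgresDB: "<<O2N_DB_NAME>>"
      postgresUser: "<<O2N_USERNAME>>"
      postgresUserPassword: "<<O2N_PASSWORD>>"
  Creds:
    apiAdminLogin: "api-admin@example.com"
    apiAdminPass: "<strong-password>"
    webUILocalAdminLogin: "webui-admin@example.com"  # must differ from apiAdminLogin
    webUILocalAdminPass: "<strong-password>"
  BIO:
    licenseKey: "<license-key-from-oz>"
    clientToken: "<client-token-from-oz>"

pvc:
  api:
    static:
      storageClassName: "cephfs"         # any RWX StorageClass
      size: "50Gi"                       # expected PV size

Params:
  Global:
    startinits: true                     # first run only; revert to false afterwards
```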
To adjust API behavior, you might want to change other parameters. Please refer to the comments in the `values.yaml` file.
BIO is a part of the API chart. Each BIO pod requires a dedicated node. To ensure that BIO resides on dedicated nodes, you can use affinity and tolerations. The BIO behavior can be customized via `Params` -> `global` -> `affinity` in `values.yaml`.
The chart ships with default parameters for this section; check `Params` -> `global` in your chart version's `values.yaml`.
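For illustration only, a dedicated-node setup of this shape could be expressed as follows; the node label, taint key, and values here are hypothetical placeholders, not the chart's actual defaults:

```yaml
# Hypothetical sketch: label/taint names are placeholders, not chart defaults.
Params:
  global:
    affinity:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
            - matchExpressions:
                - key: dedicated          # node label assigning nodes to BIO
                  operator: In
                  values:
                    - bio
    tolerations:
      - key: dedicated                    # matches the taint applied to BIO nodes
        operator: Equal
        value: bio
        effect: NoSchedule
```

This is the standard Kubernetes pattern for node isolation: the affinity pins BIO pods to the labeled nodes, while the matching taint and toleration keep other workloads off them.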
An example of deploying the chart via Helm:
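The invocation below is a hypothetical sketch; the release name, chart location, and namespace are placeholders to adapt to your environment:

```bash
# Hypothetical example: release name, chart path, and namespace are placeholders.
helm upgrade --install oz-api ./oz-api-chart \
  --namespace oz --create-namespace \
  -f values.yaml
```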
Installing Web SDK requires API to be pre-installed; except for specific cases, Web SDK cannot work without API.
For proper deployment, Web SDK requires an API service account. Pre-create a user for Web SDK with the CLIENT type and the `is_service` flag set. Please refer to User Roles for more details.
This example is based on the 1.5.1+onPremise chart version.
Adjust the `values.yaml` file, setting the following mandatory parameters before deployment (a sketch follows the list):
- `ozDockerHubCreds`: you'll receive them from an Oz engineer.
- `UserParams`:
  - `URLs`:
    - `apiURL`: the API URL. It can be an internal API URL.
    - `webSDKURL`: the Web SDK URL that will be used for public access.
  - `Creds`:
    - `AdminLogin`: the login of the user that should be pre-created in API. Do not use the default admin login.
    - `AdminPass`: the password of the pre-created user.
- `PVC`:
  - `persistentStorage`: false by default. Set to true if you use more than one Web SDK pod.
  - `storageClassName`: an RWX StorageClass.
- `Params`:
  - `websdk`:
    - `license`: should contain your Web SDK license. You'll receive it from an Oz engineer or the sales team.
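A minimal sketch of the corresponding `values.yaml`, with placeholder values and a key layout that may differ between chart versions:

```yaml
# Hypothetical sketch: all values are placeholders.
ozDockerHubCreds: "<registry-credentials-from-oz>"

UserParams:
  URLs:
    apiURL: "http://oz-api.oz.svc.cluster.local"  # an internal API URL is acceptable
    webSDKURL: "https://websdk.example.com"       # public-facing Web SDK URL
  Creds:
    AdminLogin: "websdk-service@example.com"      # pre-created CLIENT user with is_service set
    AdminPass: "<strong-password>"

PVC:
  persistentStorage: true        # required when running more than one Web SDK pod
  storageClassName: "cephfs"     # any RWX StorageClass

Params:
  websdk:
    license: "<web-sdk-license>"
```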
To adjust Web SDK behavior, you might want to change other parameters. Please refer to the comments in the `values.yaml` file.
An example of deploying the chart via Helm:
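As with the API chart, this is a hypothetical invocation with placeholder names:

```bash
# Hypothetical example: release name, chart path, and namespace are placeholders.
helm upgrade --install oz-websdk ./oz-websdk-chart \
  --namespace oz \
  -f values.yaml
```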