Oz API is the central component of the system. It provides a RESTful application programming interface to the core functionality of the Liveness and Face Matching analyses, along with many important supplemental features:
Persistence: your media and analyses are stored for future reference unless you explicitly delete them,
Authentication, roles and access management,
Asynchronous analyses,
For more information, please refer to and . To test Oz API, please check the Postman collection .
Under the logical hood, Oz API has the following components:
File storage and database where media, analyses, and other data are stored,
The Oz BIO module that runs neural network models to perform facial biometry magic,
Licensing logic.
The front-end components (Oz Liveness Mobile or Web SDK) connect to Oz API to perform server-side analyses either directly or via the customer's back end.
iOS and Android SDK
iOS and Android SDK are collectively referred to as Mobile SDKs or Native SDKs. They are written in Swift and Kotlin/Java, respectively, and are designed to be integrated into your native mobile application.
Mobile SDKs implement an out-of-the-box, customizable user interface for capturing Liveness video and ensure that the two main objectives are met:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
After the Liveness video is recorded and available to your mobile application, you can run the server-side analysis. You can use the corresponding SDK methods, call the API directly from your mobile application, or pass the media to your back end and interact with Oz API from there.
The basic integration option is described in the .
Mobile SDKs are also capable of On-device Liveness and Face matching. On-device analyses may be a good option in low-risk contexts, or when you don’t want the media to leave the users’ smartphones. Oz API is not required for On-device analyses. To learn how it works, please refer to this .
We recommend using server-based analyses whenever possible, as on-device ones tend to produce less accurate results.
Web SDK
Web Adapter and Web Plugin together constitute Web SDK.
Web SDK is designed to be integrated into your web applications and has the same main goals as the Mobile SDKs:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
Web Adapter needs to be set up on a server side. Web Plugin is called by your web application and works in a browser context. It communicates with Web Adapter, which, in turn, communicates with Oz API.
Web SDK adds two layers of protection against injection attacks:
Collects information about the browser context and camera properties to detect the use of virtual cameras and other injection methods.
Records the liveness video in a format that allows server-side neural networks to search for traces of an injection attack in the video itself.
Check the for the basic integration scenario, and explore the for more details.
Web UI (Web Console)
Web UI is a convenient user interface that makes it easy to explore the stored API data. It relies on the API's authentication and database and does not store any data on its own.
The Web Console has an intuitive interface, yet a user guide is available .
Oz Liveness and Biometry Key Concepts
Oz Forensics specializes in liveness and face matching: we develop products that help you identify your clients remotely and avoid any kind of spoofing or deepfake attack. Oz software helps you add facial recognition to your software systems and products. You can integrate Oz modules in many ways, depending on your needs, and we are constantly improving our components.
Oz Liveness is responsible for recognizing a living person on a video it receives. Oz Liveness can distinguish a real human from their photo, video, mask, or other kinds of spoofing and deepfake attacks. The accuracy of our algorithms is confirmed by BixeLab, an independent biometric testing laboratory.
Here are the confirmation letters from the laboratory:
Oz Liveness is also certified with 100% accuracy by iBeta, a NIST-accredited biometric testing laboratory.
Our liveness technology protects against both injection and presentation attacks.
Injection attack detection is layered. Our SDK examines the user environment (browser, camera, etc.) to detect potential manipulations. Then, a deep neural network comes into play to defend against even the most sophisticated injection attacks.
The presentation attack detection is based on deep neural networks of various architectures, combined with a proprietary ensembling algorithm to achieve optimal performance. The networks consider multiple factors, including reflection, focus, background scene, motion patterns, etc. We offer both passive (no gestures) and active (various gestures) Liveness options, ensuring that your customers enjoy the user experience while delivering accurate results for you. The iBeta test was conducted using passive Liveness, and since then, we have significantly enhanced our networks to better meet the needs of our clients.
Oz Face Matching (Biometry)
Oz Face Matching (Biometry) verifies that the person who performs the check and the document's owner are the same person. Oz Biometry looks through the video, finds the best shot where the person is clearly seen, and compares it with the photo from an ID or another document. The algorithm's accuracy of 99.99% is confirmed by NIST FRVT.
Our biometry technology offers both 1:1 Face Verification and 1:N Face Identification, which are also based on ML algorithms. To train our neural networks, we use our own framework built on state-of-the-art technologies. A large private dataset (over 4.5 million unique faces) with wide representation of ethnic groups, along with additional attributes (predicted race, age, etc.), helps our biometric models provide robust matching scores.
Our face detector works with both photos and videos. It also excels at detecting faces in images of IDs and passports, even when they are rotated or of low quality.
The Oz software combines accuracy in analysis with ease of integration and use. To further simplify the integration process, we have provided a detailed description of all the key concepts of our system in this section. If you're ready to get started, please refer to our integration guides, which provide step-by-step instructions on how to achieve your facial recognition goals quickly and easily.
Oz Forensics adheres to SOC 2® and ISO/IEC 27001:2022 standards.
The webhook feature simplifies retrieving analysis results. Instead of polling after the analyses are launched, add a webhook that will call your endpoint once the results are ready.
When you create a folder, add the webhook endpoint (resolution_endpoint) into the payload section of your request body:
Payload example
{
"resolution_endpoint": "address.com", // use address of your website here
... // other request details - folder etc.
}
You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
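A minimal sketch of the receiving side, assuming a Flask back end; the endpoint path and payload field names here are assumptions for illustration, so check the actual webhook request for the exact layout.
from flask import Flask, request

app = Flask(__name__)

# The route must match the resolution_endpoint you set when creating the folder
@app.route("/oz-webhook", methods=["POST"])
def oz_webhook():
    payload = request.get_json()  # folder and analyses data sent by Oz API
    # field names below are assumptions for illustration
    print(payload.get("folder_id"))
    for analysis in payload.get("analyses", []):
        print(analysis.get("type"), analysis.get("resolution"))
    return "", 200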
Developer Guide
In this section, you will find the description of both the API and SDK components of the Oz Forensics Liveness and Face Biometry system. The API is the back-end component of the system; it is needed for all the system modules to interact with each other. The SDK is the front-end component that is used to:
1) take videos or images which are then processed via API,
2) display results.
We provide two versions of the API.
The full version provides all the functionality of Oz API.
The Lite version is a simple and lightweight version with only the necessary functions included.
Oz API
Oz API is the most important component of the system. It connects all the other components with each other. Oz API:
provides the unified REST API interface to run the Liveness and Biometry analyses,
handles authorization and user permissions management,
Server-Based Liveness
In this section, we have listed the guides for server-based liveness check integrations.
Authentication and Non-Instant Data Handling
In API 6.0, we've implemented new analysis modes: Instant API (non-persistent) and single request.
Flutter
In this section, we explain how to use Oz Flutter SDK for iOS and Android.
Before you start, it is recommended that you install:
Flutter 3.0.0 or higher;
Android SDK 21 or higher;
Oz Mobile SDK (iOS, Android, Flutter)
Oz Mobile SDK stands for the Software Development Kit of the Oz Forensics Liveness and Face Biometry system, providing seamless integration with customers’ mobile apps for login and biometric identification.
Currently, both Android and iOS SDKs work in portrait mode.
Description of the on_error Callback
This callback is called when the system encounters any error. It contains the error details and a telemetry ID that you can use for further investigation.
Integration Quick Start Guides
This section contains the most common cases of integrating the Oz Forensics Liveness and Face Biometry system.
The scenarios can be combined, for example, integrating liveness into both web and mobile applications, or integrating liveness with face matching.
tracks and records requested orders and analyses in the database (for example, in 6.0, the database size for 100,000 orders with analyses is ~4 GB),
archives the inbound media files,
collects telemetry from connected mobile apps,
provides settings for specific device models,
generates reports with analyses' results.
For the latest API methods collection, please refer to our API reference.
In API 6.0, we introduced two new operation modes: Instant API and single request.
In the Instant API mode – also known as non-persistent – no data is stored at any point. You send a request, receive the result, and can be confident that nothing is saved. This mode is ideal for handling sensitive data and helps ensure GDPR compliance. Additionally, it reduces storage requirements on your side.
Single request mode allows you to send all media along with the analysis request in one call and receive the results in the same response. This removes the need for multiple API calls – one is sufficient. However, if needed, you can still use the multi-request mode.
on_error {
"code": "error_code",
"event_session_id": "id_of_telemetry_session_with_error",
"message": "<error description>",
"context": {} // additional information, if any
}
How to Restore the Previous Design after an Update
If you want to revert to the design of previous versions (up to 6.4.2), reset the customization settings of the capture screen and apply the parameters listed below.
// customization parameters for the toolbar
let toolbarCustomization = ToolbarCustomization(
closeButtonColor: .white,
backgroundColor: .black)
// customization parameters for the center hint
let centerHintCustomization = CenterHintCustomization(
verticalPosition: 70)
// customization parameters for the center hint animation
let hintAnimationCustomization = HintAnimationCustomization(
hideAnimation: true)
// customization parameters for the frame around the user face
let faceFrameCustomization = FaceFrameCustomization(
strokeWidth: 6,
strokeFaceAlignedColor: .green,
strokeFaceNotAlignedColor: .red)
// customization parameters for the background outside the frame
let backgroundCustomization = BackgroundCustomization(
backgroundColor: .clear)
OZSDK.customization = OZCustomization(toolbarCustomization: toolbarCustomization,
centerHintCustomization: centerHintCustomization,
hintAnimationCustomization: hintAnimationCustomization,
faceFrameCustomization: faceFrameCustomization,
versionCustomization: versionCustomization, // assumes a VersionCustomization instance defined elsewhere
backgroundCustomization: backgroundCustomization)
SaaS, On-premise, On-device: What to Choose
We offer different usage models for the Oz software to meet your specific needs. You can either use the software as a service from one of our cloud instances or integrate it into your existing infrastructure. Regardless of the usage model you choose, all Oz modules function identically; the choice depends only on your needs.
When to Choose SaaS
With the SaaS model, you can access one of our clouds without having to install our software in your own infrastructure.
Choose SaaS when you want:
A faster start, as you don’t need to procure or allocate hardware within your company and set up a new instance.
Zero infrastructure cost, as the server components are located in the Oz cloud.
Lower maintenance cost, as Oz maintains and upgrades the server components.
When to Choose On-Premise
The on-premise model implies that all the required Oz components are installed within your infrastructure. Choose on-premise for:
Your data not leaving your infrastructure.
Full and detailed control over the configuration.
Minimum hardware requirements:
Oz Biometry / Liveness Server
OS: please check versions
When to Choose On-Device
We also offer on-device Liveness and Face matching. This model is available in the Mobile SDKs.
Consider the on-device option when:
You can’t transmit facial images to any server due to privacy concerns,
The network conditions where you plan to use Oz products are extremely poor.
We recommend using server-based analyses whenever possible, as on-device ones tend to produce less accurate results.
The choice is yours to make, but we're always available to provide assistance.
Quantitative Results
This article describes how to get the analysis scores.
When you perform an analysis, the result you get is a number. For biometry, it reflects the likelihood that the two or more people represented in your media are the same person. For liveness, it shows the likelihood of a deepfake or spoofing attack: that the person in the uploaded media is not a real one. You can get these numbers via the API from a JSON response.
Add this to the build.gradle of the module (VERSION is the version you need to implement. Please refer to Changelog):
for the server-based version only
Please note: this is the default version.
for both server-based and on-device versions
Please note: the resulting file will be larger.
Also, regardless of the mode chosen, add:
Best Shot
The "Best shot" algorithm is intended to choose the most high-quality and well-tuned frame with a face from a video record. This algorithm works as a part of the liveness analysis, so here, we describe only the best shot part.
Please note: historically, some instances are configured to allow Best Shot only for certain gestures.
Processing steps
1. Initiate the analysis similar to , but make sure that extract_best_shot is set to true as shown below:
If you want to use a webhook for response, add it to the payload at this step, as described .
2. Check and interpret the results in the same way as for the regular Liveness analysis.
3. The URL of the best shot is located in the results_media -> output_images -> original_url field of the response.
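For step 3, a hedged Python sketch of pulling that URL out of a completed analysis response; the exact nesting of output_images is an assumption.
def best_shot_urls(analysis: dict) -> list[str]:
    # analysis: JSON of a finished quality analysis with extract_best_shot enabled
    urls = []
    for media in analysis.get("results_media", []):
        for image in media.get("output_images", []):
            urls.append(image["original_url"])
    return urls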
Oz API Lite
What is Oz API Lite, when and how to use it.
API Lite is deprecated and no longer maintained. Its functionality has been added to the full API: see Instant API.
Oz API Lite is a lightweight yet powerful version of Oz API. The Lite version is less resource-demanding, more performant, and easier to work with. The analyses are made within the API Lite image. As Oz API Lite doesn't include any additional services such as statistics or data storage, this version is the one to use when you need high performance.
Examples of methods
To check the Liveness processor, GET /v1/face/liveness/health.
To check the Biometry processor, GET /v1/face/pattern/health.
To perform the liveness check for an image, POST /v1/face/liveness/detect (it takes an image as input and returns an estimate of the spoofing attack probability for this image),
To compare two faces in two images, POST /v1/face/pattern/extract_and_compare (it takes two images as input, derives the biometric templates from these images, and compares them).
To compare an image with a bunch of images, POST /v1/face/pattern/extract_and_compare_n.
For the full list of Oz API Lite methods, please refer to .
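A hedged Python sketch of two of these calls; the host and the multipart field name are assumptions, so please check the Oz API Lite Postman collection for the exact request format.
import requests

API_LITE_URL = "https://api-lite.example.com"  # hypothetical host

# health check of the Liveness processor
print(requests.get(f"{API_LITE_URL}/v1/face/liveness/health").status_code)

# liveness check for a single image; the field name "image" is an assumption
with open("selfie.jpg", "rb") as image:
    response = requests.post(f"{API_LITE_URL}/v1/face/liveness/detect", files={"image": image})
print(response.json())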
Closing or Hiding the Plugin
Closing the Plugin
To force the closing of the plugin window, use the close() method. All requests to the server and all callback functions (except on_close) within the current session will be aborted.
Example:
var session_id = 123;
OzLiveness.open({
// Pass arbitrary metadata by which we can later identify the session in Oz API
meta: {
session_id: session_id
},
// After sending the data, force-close the plugin window and request the result independently
on_submit: function() {
OzLiveness.close();
my_result_function(session_id);
}
});
Hiding the Plugin Window without Cancelling the Callbacks
To hide the plugin window without cancelling the requests for analysis results and user callback functions, call the hide() method. Use this method, for instance, if you want to display your own upload indicator after submitting data.
An example of usage:
Liveness, Face Matching, Collection Checks
This article describes the main types of analyses that Oz software is able to perform.
Liveness checks whether a person in a media is a real human.
Face Matching examines two or more media to identify similarities between the faces depicted in them.
Passive and Active Liveness
Describing how passive and active liveness works.
The objective of the Liveness check is to verify the authenticity and physical presence of an individual in front of the camera. In the passive Liveness check, it is sufficient to capture a user's face while they look into the camera. Conversely, the active Liveness check requires the user to perform an action such as smiling, blinking, or turning their head. While passive Liveness is more user-friendly, active Liveness may be necessary in some situations to confirm that the user is aware of undergoing the Liveness check.
In our Mobile or Web SDKs, you can define what action the user is required to do. You can also combine several actions into a sequence. Actions vary in the following dimensions:
User experience,
Oz API Key Concepts
The Oz API is a comprehensive REST API that enables facial biometrics, allowing for both face matching and liveness checks. This write-up provides an overview of the essential concepts to keep in mind while using the Oz API.
Authentication, Roles, and Access Management
To ensure security, every Oz API call requires an access token in its HTTP headers. To obtain this token, execute the POST /api/authorize/auth method with the login and password provided by us. Pass this token in the X-Forensics-Access-Token
Proprietary Format: OzCapsula Data Container
To improve the overall security level of Oz products, we’ve introduced a new proprietary data exchange format that provides improved data confidentiality and integrity: OzCapsula.
How It Works
The proprietary data exchange format is a container that safely stores and transmits transaction-related media data.
When you capture video using an Oz SDK, your media is placed into the OzCapsula container along with all the required information. The package can be processed only by Oz API due to internal mechanisms, so it is significantly more difficult to access the package contents.
API Error Codes
HTTP Response Codes
Response codes 2XX indicate a successfully processed request (e.g., code 200 for retrieving data, code 201 for adding a new entity, code 204 for deletion, etc.).
Oz API Postman Collections
Download and install the Postman client from this . Then download the JSON file needed:
Oz API Postman collections
Instant API: Non-Persistent Mode
Instant API, or the non-persistent operation mode, was introduced in API 6.0.1. In this mode, we do not save any data anywhere. All data is used only within a request: you send it, receive the response, and nothing gets recorded. This ensures you do not store any sensitive data, which might be crucial for GDPR compliance. It also significantly reduces storage requirements.
To enable this mode, set the OZ_APP_COMPONENTS parameter to stateless when you prepare the config.py file to run the API. Call POST /api/instant/folders/ to send a request without saving any data. Authorization for Instant API should be set up on your side.
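A hedged Python sketch of such a call; the file key and the response layout are assumptions, see the Instant API methods collection for the exact format.
import requests

# one request: media in, results out, nothing stored
with open("liveness.mp4", "rb") as video:
    response = requests.post(
        "https://your-instant-api-host/api/instant/folders/",  # your Instant API instance
        files={"video1": video},
    )
print(response.json())  # analysis results arrive in the same response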
Metadata
Overview
Metadata is any optional data you might need to add to a . In the meta_data section, you can include any information you want, simply by providing any number of fields with their values:
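For example (the field names here are arbitrary; any keys and values of your choice will do):
"meta_data": {
"client_id": "A-12345",
"transaction_id": "tx-0042"
}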
Changelog
API Lite (FaceVer) changes
1.2.3 – Nov., 2024
Fixed a bug where the time_created and folder_id parameters of the Detect method could sometimes be generated incorrectly.
How to Restore the Previous Design after an Update
If you want to revert to the design of previous versions (up to 6.4.2), reset the customization settings of the capture screen and apply the parameters listed below.
Android Localization: Adding a Custom or Updating an Existing Language Pack
Please note: this feature has been implemented in 8.1.0.
To add or update the language pack for Oz Android SDK, please follow these instructions:
The localization record consists of the localization key and its string value, e.g., <string name="about">"About"</string>.
iOS
To start using Oz iOS SDK, follow the steps below.
Embed Oz iOS SDK into your project as described .
Get a trial license for SDK on our or a production license by . We'll need your bundle id. Add the license to your project as described .
Oz API Lite Postman Collection
Download and install the Postman client from this . Then download the JSON file needed:
1.2.0
1.1.1
Connecting SDK to API
To connect SDK to Oz API, specify the API URL and as shown below.
Please note: in your host application, it is recommended that you set the API address on the screen that precedes the liveness check. Setting the API URL initiates a service call to the API, which may cause excessive server load when done at application initialization or startup.
Alternatively, you can use the login and password provided by your Oz Forensics account manager:
By default, logs are saved along with the analyses' data. If you need to keep the logs separate from the analysis data, set up a separate connection for
Capturing Videos
OzCapsula (SDK v8.22 and newer)
Please note: all required data (other than the video) must be packaged into the container before starting the Liveness screen.
Uploading Media
To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked by tags: they describe what's pictured in a media and determine the applicable analyses.
For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into a ZIP archive and apply the tags.
To create a folder and upload media to it, call POST /api/folders/
Description of the on_complete Callback
This callback is called after the check is completed. It retrieves the analysis result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode .
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, .
Description of the on_result Callback
This callback is called periodically while the analysis is being processed. It retrieves an intermediate result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode .
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, .
Web Plugin (Web SDK Frontend part)
Web Plugin is a plugin called by your web application. It works in a browser context. The Web Plugin communicates with Web Adapter, which, in turn, communicates with Oz API.
Please find a sample for Oz Liveness Web SDK . To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.
For the samples below, replace https://web-sdk.sandbox.ohio.ozforensics.com in index.html.
Collection (1:N) Check
How to compare a photo or video with ones from your database.
The collection check algorithm is designed to check for the presence of a person in a database of preloaded photos. A video fragment and/or a photo can be used as the source for comparison.
Prerequisites:
You're
iOS Localization: Adding a Custom or Updating an Existing Language Pack
Please note: this feature has been implemented in 8.1.0.
To add or update the language pack for Oz iOS SDK, use the set(languageBundle: Bundle) method. It tells the SDK that you are going to use a non-standard bundle. In OzLocalizationCode, use the custom language (optional).
The localization record consists of the localization key and its string value, e.g., "about" = "About"
Adding the Plugin to Your Web Page
Requirements
A dedicated Web Adapter in our cloud or the adapter deployed on-premise. The adapter's URL is required for adding the plugin.
No-Server Licensing
In most cases, the license is set on the server side (Web Adapter). This article covers the rare case when you use the Web Plugin only.
To generate the license, we need the domain name of the website where you are going to use Oz Forensics Web SDK, for instance, your-website.com. You can also define subdomains.
Collection looks for resemblances between an individual featured in a media and individuals in a pre-existing photo database.
These analyses are accessible in the Oz API for both SaaS and On-Premise models. Liveness and Face Matching are also offered in the On-Device model. Please visit this page to learn more about the usage models.
Liveness
The Liveness check is important to protect facial recognition from two types of attacks.
A presentation attack, also known as a spoofing attack, is an attempt to deceive a facial recognition system by presenting to the camera a video, photo, or any other type of media that mimics the appearance of a genuine user. These attacks can include the use of realistic masks or makeup.
An injection attack is an attempt to deceive a facial recognition system by replacing the physical camera input with a prerecorded image or video, manipulating the physical camera output before it becomes input to facial recognition, or injecting malicious code. Virtual camera software is the most common tool for injection attacks.
Oz Liveness is able to detect both types of attacks. Any component can detect presentation attacks; for injection attack detection, use Oz Liveness SDK. To learn how to use Oz components to prevent attacks, check our integration quick start guides:
Once the Liveness check is finished, you can check both qualitative and quantitative analysis results.
Results overview
Qualitative results
SUCCESS – everything went fine, the analysis has completed successfully;
DECLINED – the check failed (an attack is detected).
If the analysis hasn't been finished yet, the result can be PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).
If you have analyzed multiple media, the aggregated status will be SUCCESS only if each analysis on each media has finished with the SUCCESS result.
Quantitative results
100% (1) – an attack is detected, the person in the video is not a real living person,
0% (0) – a person in the video is a real living person.
Asking users to perform a gesture, such as smiling or turning their head, is a popular requirement when recording a Liveness video. With Oz Liveness Mobile and Web SDK, you can also request gestures from users. However, our Liveness check relies on other factors, analyzed by neural networks, and does not depend on gestures. For more details, please check Passive and Active Liveness.
The Liveness check can also return the best shot from a video: the best-quality frame, where the face is seen most clearly.
Face Matching
The Biometry algorithm compares several media files and checks whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media files.
Results overview
Qualitative results
SUCCESS – everything went fine, the analysis has completed successfully;
DECLINED – the check failed (faces don't match).
If the analysis hasn't been finished yet, the result can be PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).
Quantitative results
After comparison, the algorithm provides numbers that represent the similarity level. The numbers vary from 100 to 0% (1 to 0), where:
100% (1) – the faces are similar; the media represent the same person,
0% (0) – the faces are not similar and belong to different people.
There are two scores to consider: the minimum and maximum. If you have analyzed two media, these scores will be equal. For three or more media, the similarity score is calculated for each pair. Once calculated, these scores get aggregated and analysis returns the minimum and maximum similarity scores for the media compared. Typically, the minimum score is enough.
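As a plain illustration of this aggregation (not Oz code):
# hypothetical pairwise similarity scores for three media files (three pairs)
pairwise_scores = [0.998, 0.92, 0.95]
min_confidence = min(pairwise_scores)  # 0.92 – typically the value to act on
max_confidence = max(pairwise_scores)  # 0.998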
In Oz API, you can configure one or more face collections. These collections are databases of people depicted in photos. When the Collection analysis is conducted, Oz software compares the face in the photo or video taken with the faces in this pre-made database and shows whether the face exists in the collection.
Results overview
Qualitative results
SUCCESS – everything went fine, the analysis has completed successfully;
DECLINED – the check failed (faces match).
If the analysis hasn't been finished yet, the result can be PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).
Quantitative results
After comparison, the algorithm provides a score that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:
100% (1) – the person in an image or video matches with someone in your database,
0% (0) – the person is not found in the collection.
For additional information, please refer to this article.
Security updates.
1.2.2 – Oct. 17, 2024
Updated models.
1.2.1 – Sept. 05, 2024
The file size for the Liveness detect method is now capped at 15 MB, with a maximum of 10 files per request.
Updated the gesture list for best_shot analysis: it now supports head turns (left and right), tilts (up and down), smiling, and blinking.
{
"analyses": [{
"type": "quality",
"source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // // optional; omit to include all media from the folder
"params" : {
"extract_best_shot": true // the mandatory part for the best shot analysis
}
}]
}
OzLiveness.open({
// When receiving an intermediate result, hide the plugin window and show your own loading indicators
on_result: function(result) {
OzLiveness.hide();
if (result.state === 'processing') {
show_my_loader();
}
},
on_complete: function() {
hide_my_loader();
}
});
OzLivenessSDK.config.customization = UICustomization(
    // customization parameters for the toolbar
    toolbarCustomization = ToolbarCustomization(
        closeIconTint = Color.ColorHex("#FFFFFF"),
        backgroundColor = Color.ColorHex("#000000"),
        backgroundAlpha = 100,
    ),
    // customization parameters for the center hint
    centerHintCustomization = CenterHintCustomization(
        verticalPosition = 70
    ),
    // customization parameters for the hint animation
    hintAnimationCustomization = HintAnimation(
        hideAnimation = true
    ),
    // customization parameters for the frame around the user face
    faceFrameCustomization = FaceFrameCustomization(
        strokeDefaultColor = Color.ColorHex("#EC574B"),
        strokeFaceInFrameColor = Color.ColorHex("#00FF00"),
        strokeWidth = 6,
    ),
    // customization parameters for the background outside the frame
    backgroundCustomization = BackgroundCustomization(
        backgroundAlpha = 100
    ),
)
Response codes 4XX indicate that a request could not be processed correctly because of some client-side data issues (e.g., 404 when addressing a non-existing resource).
Response codes 5XX indicate that an internal server-side error occurred during the request processing (e.g., when the database is temporarily unavailable).
Response Body with Errors
Each error response includes an HTTP code and JSON data with the error description. It has the following structure:
error_code – integer error code;
error_message – text error description;
details – additional error details (the format is specific to each case). Can be empty.
Sample error response:
Error codes:
0 – UNKNOWN. Unknown server error.
1 – NOT ALLOWED. A disallowed method is called. Usually accompanied by the 405 HTTP response status. For example, requesting the PATCH method when only GET/POST are supported.
2 – NOT REALIZED. The method is documented but is not implemented, for either a temporary or a permanent reason.
3 – INVALID STRUCTURE. Incorrect request structure: some required fields are missing, or a format validation error occurred.
4 – INVALID VALUE. Incorrect value of a parameter inside the request body or query.
5 – INVALID TYPE. Invalid data type of a request parameter.
6 – AUTH NOT PROVIDED. Access token not specified.
7 – AUTH INVALID. The access token does not exist in the database.
8 – AUTH EXPIRED. The auth token has expired.
9 – AUTH FORBIDDEN. Access denied for the current user.
10 – NOT EXIST. The requested resource is not found (equivalent to HTTP status code 404).
11 – EXTERNAL SERVICE. Error in an external information system.
12 – DATABASE. Critical database error on the server host.
Please note: as Instant API doesn't store data, it is not intended to work with Blacklist (1:N).
If you use Instant API with Web SDK, set architecture to lite in the Web Adapter configuration. The Web SDK version should be 1.7.14 or above.
Requirements
CPU: 16 cores, 32 threads, base frequency – 2.3 GHz, single-core maximum turbo frequency – 4 GHz.
RAM: 32 GB, DDR 5, Dual Channel.
To evaluate your RPS and RPM and configure your system for optimal performance, please contact us.
Configuration File Parameters
Prior to the launch, prepare a configuration file with the parameters listed below.
Mandatory Parameters
These parameters are crucial to run Instant API.
Installation
Docker
Docker Compose
Instant API Methods
You can find the Instant API methods here or download the collection below.
Metadata is available for most Oz system objects. Here is the list of these objects with the API methods required to add metadata. Please note: you can also add metadata to these objects during their creation.
You can also change or delete metadata. Please refer to our API documentation.
Usage Examples
You may want to use metadata to group folders by a person or lead. For example, if you want to calculate conversion when a single lead makes several Liveness attempts, just add the person/lead identifier to the folder metadata.
Here is how to add a client ID (here, an IIN with a sample value) to a folder object.
In the request body, add:
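{
"meta_data": {
"iin": "123456789012" // a sample value; use your real identifier
}
}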
You can pass an ID of a person in this field and use this ID to combine requests from the same person and count unique persons (same ID = same person, different IDs = different persons). This ID can be a phone number, an IIN, an SSN, or any other kind of unique ID. The ID will be displayed in the report as an additional column.
Another case is security: when you need to process the analyses’ results from your back end but don’t want to do it using the folder ID, add an ID (transaction_id) to the folder and use this ID to search for the required information. This case is thoroughly explained here.
If you store PII in metadata, make sure it complies with the relevant regulatory requirements.
You can also add metadata via SDK to process the information later using API methods. Please refer to the corresponding SDK sections:
OZSDK.setApiConnection(Connection.fromServiceToken(host: "https://sandbox.ohio.ozforensics.com", token: token)) { (token, error) in
}
OZSDK.setApiConnection(Connection.fromCredentials(host: "https://sandbox.ohio.ozforensics.com", login: login, password: p)) { (token, error) in
// Your code to handle error or token
}
Create a controller that will capture videos as follows:
The delegate object must implement the OZLivenessDelegate protocol:
SDK 8.21 and older
Create a controller that will capture videos as follows:
action – a list of user’s actions while capturing the video.
Once the video is captured, the system calls the onOZLivenessResult method:
The method returns the results of the video capture: the [OZMedia] objects. The system uses these objects to perform the checks.
If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.
If a user closes the capturing screen manually, the failedBecauseUserCancelled error appears.
Safe
When result_mode is safe, the on_complete callback contains the state of the analysis only:
Please note: the options listed below are for testing purposes only. If you require more information than what is available in the Safe mode, please follow Security Recommendations.
Status
For the status value, the callback contains the state of the analysis, and for each of the analysis types, the name of the type, state, and resolution.
Folder
The folder value is almost identical to the status value, with the only difference that folder_id is added.
Full
In this case, you receive a detailed response that may contain sensitive data. This mode is deprecated; for security reasons, we recommend using the safe mode.
When result_mode is safe, the on_result callback contains the state of the analysis only:
or
Please note: the options listed below are for testing purposes only. If you require more information than what is available in the Safe mode, please follow Security Recommendations.
Status
For the status value, the callback contains the state of the analysis, and for each of the analysis types, the name of the type, state, and resolution.
or
Folder
The folder value is almost identical to the status value, with the only difference that folder_id is added.
Full
In this case, you receive a detailed response that may contain sensitive data. This mode is deprecated; for security reasons, we recommend using the safe mode.
1. Initiate the analysis: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need analysis_id or folder_id from response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Wait for the resolution_status and resolution fields to change to anything other than PROCESSING, and treat this as the result.
If you want to know which person from your collection matched the media you uploaded, find the collection analysis in the response, check results_media, and retrieve person_id. This is the ID of the person who matched the person in your media. To get information about this person, use GET /api/collections/{{collection_id}}/persons/{{person_id}} with the IDs of your collection and person.
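A hedged Python sketch of that lookup; the response layout is an assumption based on the description above.
import requests

API_URL = "https://api.example.ozforensics.com"    # your Oz API host
headers = {"X-Forensics-Access-Token": "<token>"}  # see Authentication
folder_id = "<folder_id>"
collection_id = "<collection_id>"

folder = requests.get(f"{API_URL}/api/folders/{folder_id}", headers=headers).json()
collection = next(a for a in folder["analyses"] if a["type"] == "collection")
person_id = collection["results_media"][0]["person_id"]

person = requests.get(
    f"{API_URL}/api/collections/{collection_id}/persons/{person_id}",
    headers=headers,
).json()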
To embed the plugin in your page, add a reference to the primary script of the plugin (plugin_liveness.php) that you received from Oz Forensics to the HTML code of the page. web-sdk-root-url is the Web Adapter link you've received from us.
For versions below 1.4.0
Add a reference to the file with styles and to the primary script of the plugin (plugin_liveness.php) that you received from Oz Forensics to the HTML code of the page. web-sdk-root-url is the Web Adapter link you've received from us.
For Angular and Vue, the script (and the style file, if you use a version lower than 1.4.0) should be added in the same way. For React apps, use head at your template's main page to load and initialize the OzLiveness plugin. Please note: if you use <React.StrictMode>, you may experience issues with Web Liveness.
To find the origin, run window.origin in the developer mode on the page you are going to embed Oz Web SDK in. At localhost / 127.0.0.1, the license can work without this information.
Set the license as shown below:
With license data:
With license path:
Check whether the license is updated properly.
Example
Proceed to your website origin and launch Liveness -> Simple selfie.
Once the license is added, the system will check its validity on launch.
OzLivenessSDK.INSTANCE.getConfig().setCustomization(new UICustomization(
    // customization parameters for the toolbar
    new ToolbarCustomization(
        R.drawable.ib_close,
        new Color.ColorHex("#FFFFFF"),
        new Color.ColorHex("#000000"),
        100 // toolbar text opacity (in %)
    ),
    // customization parameters for the center hint
    new CenterHintCustomization(
        70 // vertical position (in %)
    ),
    // customization parameters for the hint animation
    new HintAnimation(
        true // hide animation
    ),
    // customization parameters for the frame around the user face
    new FaceFrameCustomization(
        new Color.ColorHex("#EC574B"),
        new Color.ColorHex("#00FF00"),
        6 // frame stroke width (in dp)
    ),
    // customization parameters for the background outside the frame
    new BackgroundCustomization(
        100 // background opacity (in %)
    )
));
{
"error_code": 0,
"error_message": "Unknown server side error occurred",
"details": null
}
# application components list, values for Instant API: auth,stateless
# auth is for Oz authentication component
OZ_APP_COMPONENTS=stateless
# local storage support enable
OZ_LOCAL_STORAGE_SUPPORT_ENABLE=false
# service tfss host
OZ_SERVICE_TFSS_HOST=http://xxx.xxx.xxx.xxx:xxxx
# allowed hosts
APP_ALLOWED_HOSTS=example-host1.com,example-host2.com
# secret key
OZ_API_SECRET_KEY=long_secret_key
{
"analyses": [{
"type": "collection",
"source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // // optional; omit to include all media from the folder
}]
}
Suitability for review by a human operator or in court.
In most cases, the Selfie action is optimal, but you can choose other actions based on your specific needs. Here is a summary of available actions:
Passive Liveness
Selfie
A short video, around 0.7 sec. Users are not required to do anything.
Recommended for most cases. It offers the best combination of user experience and liveness check accuracy.
One shot
Similar to “Simple selfie” but only one image is chosen instead of the whole video.
Recommended when media size is the most important factor.
Hard for a human, e.g., an operator or a court, to evaluate for spoofing. We recommend avoiding the One shot gesture whenever possible, as it tends to produce less accurate results.
Scan
A 5-second video where the user is asked to follow the on-screen text with their eyes.
Recommended when the longer video is required, e.g., for subsequent review by a human operator or in a court.
Active Liveness
Smile
Blink
Tilt head up
A user is required to complete a particular gesture within 5 seconds.
Use active liveness when you need confirmation that the user is aware of undergoing a Liveness check.
Video length and file size may vary depending on how soon a user completes a gesture.
To recognize the actions from either passive or active Liveness, our algorithms refer to the corresponding tags. These tags indicate the type of action a user performs within a media. For more information, please read the Media Tags article. The detailed information on how the actions, or, in other words, gestures, are called in different Oz Liveness components is here.
header in subsequent Oz API calls.
This article provides comprehensive details on the authentication process. Kindly refer to it for further information.
Furthermore, the Oz API offers distinct user roles, ranging from CLIENT, who can perform checks and access reports but lacks administrative rights (e.g., deleting folders), to ADMIN, who enjoys nearly unrestricted access to all system objects. For additional information, please consult this guide.
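A minimal Python sketch of this flow; the host is a placeholder, and the exact auth request body and response field names are assumptions, so check the authentication article for the exact format.
import requests

API_URL = "https://api.example.ozforensics.com"  # your Oz API host

# the body structure is an assumption; see the authentication article
auth = requests.post(
    f"{API_URL}/api/authorize/auth",
    json={"credentials": {"email": "user@example.com", "password": "<password>"}},
)
token = auth.json()["access_token"]  # field name is an assumption

# pass the token in the X-Forensics-Access-Token header in subsequent calls
folders = requests.get(f"{API_URL}/api/folders/", headers={"X-Forensics-Access-Token": token})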
Persistence
The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check the aggregated result. A folder can contain an unlimited number of media files, and each media file can be a target of several analyses. Also, an analysis can be performed on several media files at once.
Media Types and Tags
Oz API works with photos and videos. A video can be either a regular video container, e.g., MP4 or MOV, or a ZIP archive with a sequence of images (a shot set). Oz API uses the file MIME type to define whether a media file is an image, a video, or a shot set.
It is also important to determine the semantics of the content, e.g., whether an image is a photo of a document or a selfie of a person. This is achieved by using tags. The selection of tags impacts whether specific types of analyses will recognize or ignore particular media files. The most important tags are:
photo_id_front – for the front side of a photo ID
photo_selfie – for a non-document reference photo
video_selfie_blank – for a liveness video recorded beyond Oz Liveness SDK
If a media file is captured using the Oz Liveness SDK, the tags are assigned automatically.
The full list of Oz media tags with their explanation and examples can be found here.
Asynchronous analyses
Since video analysis may take a few seconds, the analyses are performed asynchronously. This implies that you initiate an analysis (/api/folders/{{folder_id}}/analyses/) and then monitor the outcome by polling until processing is complete (/api/analyses/{{analyse_id}} for a single analysis or /api/folders/{{folder_id}}/analyses/ for all the folder’s analyses). Alternatively, there is a webhook option available. To see an example of how to use both the polling and webhook options, please check this guide.
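A hedged Python sketch of the polling flow described above; the host, token, and polling interval are placeholders.
import time
import requests

API_URL = "https://api.example.ozforensics.com"   # your Oz API host
headers = {"X-Forensics-Access-Token": "<token>"}
analysis_id = "<analysis_id>"  # from the POST /api/folders/{{folder_id}}/analyses/ response

while True:
    result = requests.get(f"{API_URL}/api/analyses/{analysis_id}", headers=headers).json()
    if result["resolution_status"] != "PROCESSING":
        break
    time.sleep(1)  # choose an interval that suits your load profile
print(result["resolution"])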
These were the key concepts of Oz API. To gain a deeper understanding of its capabilities, please refer to the Oz API section of our developer guide.
Benefits
Integrity and Authenticity Guaranteed. Each container’s content is verified by the API, ensuring data arriving from the user device is genuine, untampered, and fully intact.
Full Confidentiality of Internal Data. Multi-layered cryptographic protection keeps all data, metadata, and technical details hidden from unauthorized viewing, extraction, or interpretation.
Unified Data Package. All content is stored in a single, secure file, simplifying transmission and enabling consistent, predictable processing.
Built-In Investigation Tools. Every action within the container is logged, giving complete visibility for incident analysis and rapid troubleshooting.
Strict, Built-In Access Control. Only authorized systems can open or use the container, preventing misuse, tampering attempts, or unauthorized integration.
Ready for High-Volume Workflows. The container is designed to scale effortlessly, supporting any number of transmissions without performance or integration issues.
{
"media:tags": { // this section sets the tags for the media files that you upload
// media files are referenced by the keys in a multipart form
"video1": [ // your file key
// a typical set of tags for a passive Liveness video
"video_selfie", // video of a person
"video_selfie_blank", // no gesture used
"orientation_portrait" // video orientation
],
"photo1": [
// a typical set of tags for an ID front side
"photo_id",
"photo_id_front"
]
}
}
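A hedged Python sketch of uploading with these tags; the payload form field name is an assumption, so check the Postman collection for the exact request format.
import json
import requests

API_URL = "https://api.example.ozforensics.com"   # your Oz API host
headers = {"X-Forensics-Access-Token": "<token>"}

payload = {
    "media:tags": {
        "video1": ["video_selfie", "video_selfie_blank", "orientation_portrait"]
    }
}
with open("liveness.mp4", "rb") as video:
    response = requests.post(
        f"{API_URL}/api/folders/",
        headers=headers,
        data={"payload": json.dumps(payload)},  # form field name is an assumption
        files={"video1": video},  # the key must match the one used in media:tags
    )
print(response.json())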
If you don’t set the custom language and bundle, the SDK uses the pre-installed languages only.
If the custom bundle is set (and the language is not), it has priority when checking translations, i.e., the SDK checks for the localization record in the custom bundle's localization file first. If the key is not found in the custom bundle, the standard bundle text for this key is used.
If both custom bundle and language are set, SDK retrieves all the translations from the custom bundle localization file.
A list of keys for iOS:
The keys Action.*.Task refer to the appropriate gestures. Others refer to the hints for any gesture, info messages, or errors.
When new keys appear in new versions, if no translation is provided by your custom bundle localization file, you’ll see the default (English) text.
For commercial use of Oz Forensics products, a license is required. The license is time-limited and defines the software access parameters based on the terms of your agreement.
Once you initialize the mobile SDK, run the Web Plugin, or use Oz BIO, the system checks whether your license is valid. The check runs in the background and has minimal impact on the user experience.
As you can see in the scheme above, the license is required for:
Mobile SDKs for iOS and Android,
Web SDK, which consists of Web Adapter and Web Plugin,
Oz BIO, which is needed for server analyses and is installed for .
For each of the components, you need a separate license bound to that component. Thus, if you use all three components, three licenses are required.
Native SDKs (iOS and Android)
To issue a license for mobile SDK, we require your bundle (application) ID. There are two types of licenses for iOS and Android SDKs: online and offline. Any license type can be applied to any analysis mode: on-device, server-based, or hybrid.
Online License
As its name suggests, an online license requires a stable connection. Once you initialize our SDK with this license, it connects to our license server and retrieves information about license parameters, including counters of transactions or devices, where:
Transaction: increments each time you start a video capture.
Device: increments when our SDK is installed on a new device.
The online license can be transaction-limited, device-limited, or both, according to your agreement.
The main advantages of the online license are:
You don’t need to update your application after the license renewal,
And if you want to add a new bundle ID to the license, there’s also no need to re-issue it. Everything is done on the fly.
The data exchange for the online license is quick, so your users will experience almost no delay compared to the offline license.
Please note that even though on-device analyses don’t need the Internet themselves, you still require a connection for license verification.
Online license is the default option for Mobile SDKs. If you require the offline license, please inform your manager.
Offline License
An offline license is a type of license that can work without the Internet. All license parameters are set in the license file; you just need to add the file to your project. This license type doesn’t have any restrictions on transactions or devices.
The main benefit of the offline license is its autonomy: it functions without a network connection. However, when your license expires and you add a new one, you’ll need to release a new version of your application in Google Play and the App Store. Otherwise, the SDK won’t function.
How to add a license to mobile SDK:
Web SDK
The Web SDK license is similar to the mobile SDK offline license. It can function without a network connection, and the license file contains all the necessary parameters, such as the expiration date. The Web SDK license also has no restrictions on transactions or devices.
The license is bound to the URLs of your domains and/or subdomains. To add the license to your SDK instance, you need to place it in the Web SDK container as described . In rare cases, it is also possible to .
The difference between the Mobile SDK offline license and the Web SDK license is that you don’t need to release a new application version when the Web SDK license is renewed.
On-Premise (Oz BIO or Server License)
For on-premise installations, we offer a dedicated license with a limitation on activations, with each activation representing a separate Oz BIO seat. This license can be online or offline, depending on whether your Oz BIO servers have internet access. The online license is verified through our license server, while for offline licenses, we assist you in within your infrastructure and activating the license.
Trial
For test integration purposes, we provide a free trial license that is sufficient for initial use, such as testing with your datasets to check analysis accuracy. For Mobile SDKs, you can generate a one-month license yourself on our website: . If you would like to integrate with your web application, please contact us to obtain a license, and we will also assist you in configuring your dedicated instance of our Web SDK. With the license, you will receive credentials to access our services.
Once you're ready to move to commercial use, a new production license will be issued. We’ll provide you with new production credentials and assist you with integration and configuration. Our engineers are always available to help.
Our software offers flexible licensing options to meet your specific needs. Whether you prioritize seamless updates or prefer autonomous operation, we have a solution tailored for you. If you have any questions, please contact us.
Liveness
The Liveness detection algorithm is intended to detect a real living person in a media.
You have already uploaded media, marked by the correct tags, into this folder.
For API 4.0.8 and below, please note: the Liveness analysis works with videos and shot sets; images are ignored. If you want to analyze an image, upload it as a shot set (an archive) with a single image and mark it with the video_selfie_blank tag.
Processing Steps
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described .
You'll need analysis_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.
GET api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat the check until the resolution_status and resolution fields change to anything other than PROCESSING, and treat this as a result.
For the Liveness analysis, look for the confidence_spoofing value related to the video you need. It indicates the chance that the person is not a real one.
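A hedged sketch of reading that value from the polling response; the response layout is an assumption.
# analyses: JSON list from GET api/folders/{{folder_id}}/analyses/ after processing finishes
for analysis in analyses:
    if analysis["type"] == "quality":  # the Liveness analysis type
        for media in analysis["results_media"]:
            print(media["confidence_spoofing"])  # closer to 1 means a likely attack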
Biometry (Face Matching)
The Biometry algorithm is intended to compare two or more photos and detect the level of similarity between the faces found. As source media, the algorithm takes photos, videos, and documents (with photos).
You have already uploaded media, marked by the correct tags, into this folder.
Processing steps
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described .
You'll need analysis_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat until the resolution_status and resolution fields change to anything other than PROCESSING, and treat this as a result.
Check the response for the min_confidence value. It is the quantitative result of matching the people in the uploaded media.
Android
To start using Oz Android SDK, follow the steps below.
Embed Oz Android SDK into your project as described here.
Get a trial license for the SDK on our website or a production license by email. We'll need your application ID. Add the license to your project as described here.
Connect SDK to API as described . This step is optional, as this connection is required only when you need to process data on a server.
Capture videos using methods described . You'll send them for analysis afterward.
Analyze media you've taken at the previous step. The process of checking liveness and face biometry is described .
If you want to customize the look-and-feel of Oz Android SDK, please refer to .
Resources
Recommended Android version: 5+ (the newer the smartphone, the faster the analyses).
Recommended versions of components:
We do not support emulators.
Available languages: EN, ES, HY, KK, KY, TR, PT-BR.
To obtain the sample apps source code for the Oz Liveness SDK, proceed to the GitLab repository:
Follow the link below to see a list of SDK methods and properties:
Download the latest build of the demo app .
OzCapsula Data Container
Configuring Oz API
The main change in interaction with the API is that you need to send data with a different content type, as the data container is a binary file: Content-Type = application/octet-stream. We've added support for this content type along with the container functionality.
Also, Instant API now requires client’s private and public keys to function. The paths to these keys should be specified in OZ_JWT_PRIVATE_KEY_PATH and OZ_JWT_PUBLIC_KEY_PATH in the configuration file.
To generate them, use commands as listed below.
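For instance, an RSA key pair can be generated with OpenSSL as follows (a sketch; the key algorithm, size, and file names are assumptions, adjust them to your deployment requirements):
openssl genrsa -out jwt_private.pem 2048
openssl rsa -in jwt_private.pem -pubout -out jwt_public.pem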
Examples
POST api/folders:
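A sketch of uploading a container to the stateful API with curl (host and file name are placeholders):
curl -X POST "https://<your-api-host>/api/folders/" \
  -H "X-Forensic-Access-Token: $ACCESS_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @container.bin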
POST api/instant/folders:
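The Instant API variant looks similar (a sketch; authentication details depend on your Instant API setup):
curl -X POST "https://<your-api-host>/api/instant/folders/" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @container.bin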
Exceptions
Obtaining a Session Token
Before you start with SDK, obtain a session token:
(Optional, only if you use stateful API) Authorize as any non-OPERATOR role.
Call GET {{host}}/api/authorize/session_token.
Example request
Example response
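A sketch of the call and a possible response shape (the host is a placeholder; the exact response fields may differ, session_token is the one you need):
curl "https://<your-api-host>/api/authorize/session_token" \
  -H "X-Forensic-Access-Token: $ACCESS_TOKEN"
{
"session_token": "<short-term-session-token>"
}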
Adding SDK to a Client’s Mobile App
CocoaPods
To integrate OZLivenessSDK into an Xcode project via the CocoaPods dependency manager, add the following code to Podfile:
Version is optional as, by default, the newest version is integrated. However, if necessary, you can find older version numbers in the Changelog.
Since 8.1.0, you can also use a simpler code:
By default, the full version is being installed. It contains both server-based and on-device analysis modes. To install the server-based version only, use the following code:
For 8.1.0 and higher:
SPM
Please note: installation via SPM is available for versions 8.7.0 and above.
Add the following package dependencies via SPM (if you need a guide on adding package dependencies, please refer to the Apple documentation). OzLivenessSDK is mandatory. If you don't need the on-device analyses, skip the OzLivenessSDKOnDevice file.
Manual Installation
You can also add the necessary frameworks to your project manually.
Download the SDK files and add them to your project:
OZLivenessSDK.xcframework,
OZLivenessSDKResources.bundle,
OZLivenessSDKOnDeviceResources.bundle (if you don't need the on-device analyses, skip this file).
Download the TensorFlow framework 2.11.
Make sure that:
both xcframeworks are in Target-Build Phases -> Link Binary With Libraries and Target-General -> Frameworks, Libraries, and Embedded Content;
the bundle file(s) are in Target-Build Phases -> Copy Bundle Resources.
Getting a License for iOS SDK
License
You can generate a trial license here or contact us by email to get a production license. To create the license, your bundle id is required. After you get a license file, there are two ways to add the license to your project.
Rename this file to forensics.license and put it into the project. In this case, you don't need to set the path to the license.
During the runtime: when initializing SDK, use the following method.
or
LicenseSource is the source of the license, and LicenseData is the information about your license. Please note: this method checks whether you already have an active license; if you do, that license won't be replaced with a new one. To force the license replacement, use the setLicense method.
In case of any license errors, the system will run your error handling code as shown above. Otherwise, the system will return information about the license. To check the license data manually, use OZSDK.licenseData.
Possible License Errors
Error message
What to Do
Browser Compatibility
Please note: for the plugin to work, your browser must support JavaScript ES6 and be one of the versions listed below or newer.
Browser
Version
Google Chrome (and other browsers based on the Chromium engine)
56
Mozilla Firefox
55
Safari
11
*Web SDK doesn't work in Internet Explorer compatibility mode due to lack of important functions.
Security Recommendations
Retrieve the analysis response and process it on the back end
Even though the analysis result is available to the host application via Web Plugin callbacks, it is recommended that the application back end receive it directly from Oz API. All decisions about the further process flow should be made on the back end as well. This eliminates any possibility of malicious manipulation of analysis results within the browser context.
To find your folder from the back end, you can follow these steps:
On the front end, add your unique identifier to the folder metadata.
You can add your own key-value pairs to attach user document numbers, phone numbers, or any other textual information. However, ensure that tracking personally identifiable information (PII) complies with relevant regulatory requirements.
Use the on_complete callback of the plugin to be notified when the analysis is done. Once it fires, call your back end and pass the transaction_id value.
On the back end side, find the folder by the identifier you've specified using the Oz API Folder LIST method:
To speed up the processing of your request, we recommend adding the time filter as well:
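A sketch of such a lookup with curl; the query parameter names below (meta_data and time_created.min) are assumptions, please check the Oz API reference for the exact filter syntax:
curl "https://<your-api-host>/api/folders/?meta_data=transaction_id==$TRANSACTION_ID&time_created.min=$SINCE_TIMESTAMP" \
  -H "X-Forensic-Access-Token: $ACCESS_TOKEN"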
Limit the amount of information sent to the Web Plugin from the server
Web Adapter may send analysis results to the Web Plugin with various levels of verbosity. In production, it is recommended to set the verbosity to the minimum.
In the Web Adapter configuration file, set the result_mode parameter to "safe".
Collection (1:N) Management in Oz API
This article describes how to create a collection via the API, how to add persons and photos to this collection, and how to delete them, and the collection itself, if you no longer need it. You can do the same in the Web UI, but this article covers API methods only.
A collection in Oz API is a database of facial photos that are used for comparison with the face from the captured photo or video via the Collection analysis.
Person represents a human in the collection. You can upload several photos for a single person.
How to Issue a Service Token
Here’s a step-by-step guide on how to issue a service token in Oz API 5 and 6.
Step 1
Authorize using your ADMIN account: {{host}}/api/authorize/auth.
Rules of Assigning Analyses
This article covers the default rules of applying analyses.
Analyses in Oz system can be applied in two ways:
manually, for instance, when you choose the Liveness scenario in our demo application;
automatically, when you don’t choose anything and just assign all possible analyses (via API or SDK).
The automatic assignment means that the Oz system decides itself what analyses to apply to media files based on their tag and type.
Authentication
Getting an Access Token
To get an access token, call POST /api/authorize/auth/ with the credentials you've received from us: the request body should contain the email and password. The host address should be the API address (which you've also received from us).
The successful response will return a pair of tokens: access_token and expire_token.
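A sketch of the call (host and credentials are placeholders; the response contains at least the two tokens):
curl -X POST "https://<your-api-host>/api/authorize/auth/" \
  -H "Content-Type: application/json" \
  -d '{"credentials": {"email": "<user_email>", "password": "<user_password>"}}'
# response (shortened):
# {"access_token": "<access_token>", "expire_token": "<expire_token>"}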
Security Recommendations
In 8.8.0, we’ve implemented SSL pinning to protect our clients from MITM attacks. We strongly recommend adding a built-in certificate whitelist to your application to prevent fraud with third-party certificates set as trusted.
What is a MITM attack
A MITM (man-in-the-middle) attack is a type of attack in which a cyber fraudster breaks into the communication between the application and the back end, setting up a proxy to intercept and alter the traffic (e.g., to substitute the video being sent). Typically, these attacks involve the fraudster setting their certificate as trusted on the user's device beforehand.
You can add a list of certificates your application should trust at the moment of connection to Oz API via the optional sslPins field of OzConnection class. As an input, this field takes a list of public certificate key hashes with their expiration dates as shown below:
General Security Recommendations
This article covers common cyberattacks and the steps you can take to stay safe.
Cyberattack Types
The most common cyberattacks can be divided into three groups: injection, integration, and presentation attacks. Below, you'll find some examples for mobile and web SDKs.
Checking Liveness and Face Biometry
If you use our SDK just for capturing videos, omit this step.
Customizing iOS SDK Interface
To customize the Oz Liveness interface, use OZCustomization as shown below. For the description of the customization parameters, please refer to the corresponding section.
Please note: the customization methods should be called before the video capturing ones.
Localization: Adding a Custom Language Pack
The add_lang(lang_id, lang_obj) method allows adding a new or customized language pack.
Parameters:
lang_id: a string value that can be subsequently used as lang parameter for the open() method;
Oz Liveness Web SDK
Oz Liveness Web SDK is a module for processing data on clients' devices. With Oz Liveness Web SDK, you can take photos and videos of people via their web browsers and then analyze these media. Most browsers and devices are supported. Available languages: EN, ES, PT-BR, KK.
Please find a sample for Oz Liveness Web SDK. To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.
For Angular and React, replace https://web-sdk.sandbox.ohio.ozforensics.com in index.html.
Customization Options for Older Versions (before 1.0.1)
To set your own look-and-feel options, use the style section in the Ozliveness.open method. Here is what you can change:
faceFrame – the color of the frame around a face:
pod 'OZLivenessSDK', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios', :tag => 'VERSION'
// for the latest version
pod 'OZLivenessSDK'
// OR, for the specific version
// pod 'OZLivenessSDK', '8.22.0'
The license file is missing. Please check its name and path to the file.
License error. Cannot parse license from (your_URI), invalid format
The license file is somehow damaged. Please email us the file.
License error. Bundle company.application.id is not in the list allowed by license (bundle.id1, bundle.id2)
The bundle (application) identifier you specified is missing in the allowed list. Please check the spelling; if it is correct, you need to get another license for your application.
License error. Current date yyyy-mm-dd hh:mm:ss is later than license expiration date yyyy-mm-dd hh:mm:ss
Your license has expired. Please contact us.
License is not initialized.
You haven't initialized the license. Please add the license to your project as described above.
In the response, find the analysis results and folder_id for future reference.
The collection should be created within a company, so you need your company's company_id as a prerequisite.
If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the company_id you've received.
Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and company_id of your company:
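For example (a sketch; the host is a placeholder, and the field names follow the description above):
curl -X POST "https://<your-api-host>/api/collections/" \
  -H "X-Forensic-Access-Token: $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"alias": "my_collection", "company_id": "<your_company_id>"}'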
In a response, you'll get your new collection identifier: collection_id.
How to Add a Person or a Photo to a Collection
To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using collection_id of the collection needed. In the request body, add a photo or several photos. Mark them with appropriate tags in the payload:
The response will contain the person_id which stands for the person identifier within your collection.
If you want to add the person's name, add it to the request payload as metadata:
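A sketch of such a payload fragment (the metadata key name person_name is an illustrative assumption; use the key names from the Oz API reference):
"meta_data": {
"person_name": "John Doe"
}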
To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ using the appropriate person_id. The request body should be filled in the same way as for POST /api/collections/{{collection_id}}/persons/.
To obtain information on all the persons within the single collection, call GET /api/collections/{{collection_id}}/persons/.
To obtain a list of photos for a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain person_image_id. You'll need this ID, for instance, if you want to delete the photo.
How to Remove a Photo or a Person from a Collection
To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity if it has any related analyses, which means the Collection analysis used this photo for comparison and found a match. To delete such a person, you'll need to delete these analyses using DELETE /api/analyses/{{analysis_id}} with analysis_id of the Collection (1:N) analysis.
To delete all the collection-related analyses, get a list of folders where the Collection analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the analysis_id of the required analysis, and delete the analysis – DELETE /api/analyses/{{analysis_id}}.
To delete a single photo of a person, call DELETE collections/<collection_id>/persons/<person_id>/images/<media_id>/ with collection, person, and image identifiers specified.
How to Delete a Collection
Delete the information on all the persons from this collection as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.
At the Capturing Videos step, you've created a data container with all the required information in it, so now just send it to analysis using the addContainer(container) and run methods.
SDK 8.21 and older
To check liveness and face biometry, you need to upload media to our system and then analyze them.
To interpret the results of analyses, please refer to Types of Analyses.
Below, you'll see an example of performing a check and its description.
To delete media files after the checks are finished, use the cleanTempDirectory method.
Adding Metadata
To add metadata to a folder, use AnalysisRequest.addFolderMeta.
Extracting the Best Shot
In the params field of the Analysis structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.
Using Media from Another SDK
To use a media file that is captured with another SDK (not Oz iOS SDK), specify the path to it in the OzMedia structure (the bestShotURL property):
Adding Media to a Certain Folder
If you want to add your media to the existing folder, use the addFolderId method:
lang_obj: an object that includes identifiers of translation strings as keys and translation strings themselves as values.
A list of language identifiers:
lang_id
Language
en
English
es
Spanish
pt-br
Portuguese (Brazilian)
kz, kk
Kazakh
An example of usage:
OzLiveness.add_lang('en', enTranslation), where enTranslation is a JSON object.
To set the SDK language, when you launch the plugin, specify the language identifier in lang:
You can check which locales are installed in Web SDK: use the ozLiveness.get_langs() method. If you have added a locale manually, it will also be shown.
A list of all language identifiers:
The keys oz_action_*_go refer to the appropriate gestures. oz_tutorial_camera_* – to the hints on how to enable camera in different browsers. Others refer to the hints for any gesture, info messages, or errors.
Since 1.5.0, if your language pack doesn't include a key, the message for this key will be shown in English.
Before 1.5.0
If your language pack doesn't include a key, the translation for this key won't be shown.
faceReady – the frame color when the face is correctly placed within the frame;
faceNotReady – the frame color when the face is placed improperly and can't be analyzed.
centerHint – the text of the hint that is displayed in the center.
textSize – the size of the text;
color – the color of the text;
yPosition – the vertical position measured from top;
letterSpacing – the spacing between letters;
fontStyle – the style of font (bold, italic, etc.).
closeButton – the button that closes the plugin:
image – the button image, can be an image in PNG or dataURL in base64.
backgroundOutsideFrame – the color of the overlay filling (outside the frame):
color – the fill color.
Example:
request body
{
"analyses": [
{
"type": "quality",
"source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // optional; omit to include all media from the folder
...
}
]
}
[
{
// you may have multiple analyses in the list
// pick the one you need by analyse_id or type
"analysis_id": "1111aaaa-11aa-11aa-11aa-111111aaaaaa",
"type": "QUALITY",
"results_media": [
{
// if you have multiple media in one analysis, match score with media by source_video_id/source_shots_set_id
"source_video_id": "1111aaaa-11aa-11aa-11aa-111111aaaaab", // for shots_set media, the key would be source_shots_set_id
"results_data":
{
"confidence_spoofing": 0.05790174 // quantitative score for this media
}
"resolution_status": "SUCCESS", // qualitative resolution (based on all media)
...
]
...
}
...
]
request body
{
"analyses": [{
"type": "biometry",
// optional; omit to include all media from the folder
"source_media": [
"1111aaaa-11aa-11aa-11aa-111111aaaaaa",
"2222bbbb-22bb-22bb-22bb-222222bbbbbb"
]
}]
}
[
{
// you may have multiple analyses in the list
// pick the one you need by analyse_id or type
"analysis_id": "1111aaaa-11aa-11aa-11aa-111111aaaaaa",
"type": "BIOMETRY",
"results_media": [
{
// if you have multiple media in one analysis, match score with media by source_video_id/source_shots_set_id
"source_video_id": "1111aaaa-11aa-11aa-11aa-111111aaaaab", // for shots_set media, the key would be source_shots_set_id
"results_data":
{
"max_confidence": 0.997926354,
"min_confidence": 0.997926354 // quantitative score for this media
}
...
}
],
"resolution_status": "SUCCESS", // qualitative resolution (based on all media)
...
}
...
]
pod 'OZLivenessSDK/Core', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios.git', :tag => 'VERSION'
pod 'OZLivenessSDK/Core'
// OR
// pod 'OZLivenessSDK/Core', '8.22.0'
OZSDK(licenseSources: [.licenseFileName("forensics.license")]) { licenseData, error in
if let error = error {
print(error)
}
}
OZSDK(licenseSources: [.licenseFilePath("path_to_file")]) { licenseData, error in
if let error = error {
print(error)
}
}
OzLiveness.open({
...
meta: {
// the user or lead ID from an external lead generator
// that you can pass to keep track of multiple attempts made by the same user
'end_user_id': '<user_or_lead_id>',
// the unique attempt ID
'transaction_id': '<unique_transaction_id>'
}
});
func onResult(container: DataContainer) {
let analysisRequest = AnalysisRequestBuilder()
analysisRequest.addContainer(container)
analysisRequest.run(
statusHandler: { status in
},
errorHandler: { error in
}
) { result in
}
}
let analysisRequest = AnalysisRequestBuilder()
// create one or more analyses
let analysis = Analysis.init(
media: mediaToAnalyze, // mediaToAnalyze is an array of OzMedia that were captured or otherwise created
type: .quality, // check the analysis types in iOS methods
mode: .serverBased) // or .onDevice if you want the on-device analysis
analysisRequest.uploadMedia(mediaToAnalyze)
analysisRequest.addAnalysis(analysis)
// initiate the analyses
analysisRequest.run(
statusHandler: { state in }, // scenario steps progress handler
errorHandler: { _ in }
) { result in
// receive and handle analyses results here
}
let analysis = Analysis.init(media: mediaToAnalyze, type: .quality, mode: .serverBased)
var folderMeta: [String: Any] = ["key1": "value1"]
analysisRequest.addFolderMeta(folderMeta)
...
This step can be omitted if a company already exists.
As a user must belong to a company, create a company: call {{host}}/api/companies/ with your company name.
Example request
Example response
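A sketch of this call (the host is a placeholder; the body field name is an assumption, and the response will contain the new company_id):
curl -X POST "https://<your-api-host>/api/companies/" \
  -H "X-Forensic-Access-Token: $ADMIN_ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "<your_company_name>"}'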
Step 3
Create a service user. Call {{host}}/api/users/ and write down the user_id that you will get in the response.
Example request
Example response
In API 6.0, the logic of issuing a service token has slightly changed, so here are examples for both API 6 and API 5 (and below).
API 6
In the request body, define user_type as CLIENT_SERVICE.
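A hedged sketch of the request body for API 6 (only user_type is confirmed above; the remaining fields are illustrative assumptions):
{
"email": "service_user@yourcompany.com",
"password": "<password>",
"user_type": "CLIENT_SERVICE",
"company_id": "<your_company_id>"
}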
API 5 and below
Set the is_service flag value to true.
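The API 5 counterpart (again, only the is_service flag is confirmed; the other fields are illustrative):
{
"email": "service_user@yourcompany.com",
"password": "<password>",
"is_service": true,
"company_id": "<your_company_id>"
}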
Step 4
If you need to obtain the service token to use it, for instance, with Web SDK, authorize as ADMIN (same as in Step 1) and call:
API 6: {{host}}/api/authorize/service_token/{user_id} with user_id from the previous step.
API 5 and below: {{host}}/api/authorize/service_token.
Example request
Example response
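A sketch of the API 6 request and response (the HTTP method and the response shape are assumptions; the endpoint is the one listed above):
curl -X POST "https://<your-api-host>/api/authorize/service_token/$USER_ID" \
  -H "X-Forensic-Access-Token: $ADMIN_ACCESS_TOKEN"
# response shape (assumed): {"service_token": "<service_token>"}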
In response, you will get a service token that you can use in any service processes.
For Web SDK, specify this token's value as api_token in the Web Adapter configuration.
If you upload files via the web console, you select the tags needed; if you take a photo or video via Web SDK, the SDK picks the tags automatically. As for the media type, it can be IMAGE (a photo), VIDEO, or SHOTS_SET, where SHOTS_SET is a .zip archive equivalent to a video.
Below, you will find the tag and type requirements for all analyses. If a media file doesn't match the requirements for a certain analysis, it is ignored by the algorithms.
The rules listed below act by default. To change the mapping configuration, please contact us.
This analysis is applied to all media, regardless of the gesture recorded (gesture tags begin with video_selfie).
Important: to process a photo in API 4.0.8 and below, pack it into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored.
If the folder contains fewer than two matching media files, the system will return an error. If there are more than two files, all pairs will be compared, and the system will return the result for the pair with the least similar faces.
This analysis works only when you have a pre-made image database, which is called a collection. The analysis is applied to all media in the folder (or the ones marked as source media).
Best Shot is an addition to the Quality (Liveness) analysis. It requires the appropriate option enabled. The analysis is applied to all media files that can be processed by the Quality analysis.
The Documents analysis is applied to images with tags photo_id_front and photo_id_back (documents), and photo_selfie (selfie). The result will be positive if the system finds the selfie photo and matches it with a photo on one of the valid documents from the following list:
personal ID card
driver license
foreign passport
Tag-Related Errors
Code
Message
Description
202
Could not locate face on source media [media_id]
No face is found in the media being processed, or the source media has a wrong (photo_id_back) and/or missing tag.
202
Biometry. Analysis requires at least 2 media objects to process
The algorithms did not find two appropriate media files for the analysis. This might happen when only a single media file has been sent for the analysis, or a media file is missing a tag.
202
Processing error - did not found any document candidates on image
The Documents analysis can't be finished because the uploaded photo doesn't seem to be a document, or it has wrong (not photo_id_*) and/or missing tags.
access_token is time-limited; the limits depend on the account type:
service accounts – OZ_SESSION_LONGLIVE_TTL (5 years by default),
other accounts – OZ_SESSION_TTL (15 minutes by default).
expire_token is the token you can use to renew your access token if necessary.
Automatic session extension
If the value of expire_date > current date, the value of the current session's expire_date is set to the current date + the time period defined as shown above (depending on the account type).
Token Renewal
To renew access_token and expire_token, call POST /api/authorize/refresh/. Add expire_token to the request body and X-Forensic-Access-Token to the header.
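A sketch of the renewal call (the host is a placeholder; the body matches the example shown further in this article):
curl -X POST "https://<your-api-host>/api/authorize/refresh/" \
  -H "X-Forensic-Access-Token: $ACCESS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"expire_token": "<expire_token>"}'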
In case of success, you'll receive a new pair of access_token and expire_token. The "old" pair will be deleted upon the first authentication with the renewed tokens.
Errors
Error code
Error message
What caused the error
400
Could not locate field for key_pathexpire_token from provided dict data
expire_token hasn't been found in the request body.
401
Session not found
The session with expire_token you have passed doesn't exist.
403
You have not access to refresh this session
The user who makes the request is not the owner of this expire_token session.
Enter your domain address. Once the address is processed, you’ll see a list of your servers.
Click the server address needed to load a list of certificates. Certificate key is in the Pin SHA256 line of the Subject field. Expiration date is shown in the Valid until field.
Certificate number one is your host certificate. Your root certificate is at the very bottom of the list. The others are intermediate. For SSL pinning, any of them fits.
Choosing a certificate
The higher the certificate is on the list, the better the level of protection against theft. Thus, if you use the host certificate to pin in your application, you get the highest security level. However, the lifetime of these certificates is significantly shorter than that of intermediate or root certificates. To keep your application secure, you will need to change your pins as soon as they expire; otherwise, functionality might become unavailable.
As a reasonable balance between safety and the resources needed to maintain it, we recommend using intermediate or even root certificate keys for pinning. While the security level is slightly lower, you won’t need to change these pins as often because these certificates have a much longer lifetime.
Certificate owner
Trust level
Resource requirements (depend on the certificate's lifetime)
Host
Highest
High, but requires the most resources to maintain: keys’ list should be updated at the same time as certificate
Intermediate certificate authority
Above average; the application considers all certificates that have been issued by this authority as trusted
Average
Root certificate authority
Average; the application considers all certificates that have been issued by this authority as trusted, including the intermediate authority-issued certificates
Low
Obtaining Hash and Date
The commands listed in this section have been tested on Ubuntu, but they should work on other Linux-based OS as well.
To obtain the hash, run the following command with your server domain and port:
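A commonly used OpenSSL pipeline for this purpose (a sketch; the exact command in your environment may differ):
openssl s_client -connect yourdomain.com:443 < /dev/null 2>/dev/null \
  | openssl x509 -pubkey -noout \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -binary \
  | openssl enc -base64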
In the response, you'll receive the hash for your SslPin.
To get the certificate’s expiration date, run the next command – again with your server domain and port:
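For example (a sketch; the notAfter line of the output holds the expiration date):
openssl s_client -connect yourdomain.com:443 < /dev/null 2>/dev/null \
  | openssl x509 -noout -enddate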
The date you require will be in the notAfter parameter.
We’ll provide you with the hash and date of our API server certificate.
Injection Attacks
An injection attack on a liveness detection system is an attempt to bypass liveness verification mechanisms by injecting falsified data or modifying the execution logic of the frontend SDK. This type of attack is typically carried out by code injection, tampering with the runtime environment, or replacing camera input components (e.g., intercepting and substituting the video stream in real time, using virtual cameras, or manipulating JavaScript logic in the Web SDK). The goal of such attacks is to trick the liveness system into accepting fake or pre-recorded content as a genuine live interaction with the user.
Examples for mobile SDKs:
Virtual cameras.
File system integrity compromise.
Function hooking & application modification.
Emulators and cloud devices.
For web:
Virtual cameras.
Code injection attacks.
Presentation Attacks
A presentation attack is an attempt to deceive the system by presenting pre-recorded or artificial content that mimics a real user. The goal of such attacks is to pass the liveness check without involving a real, live person. These attacks do not target the SDK directly but rather the biometric models on the backend. They may include:
Photos,
Videos,
3D masks,
Screens of other devices, or
Other media used to create the illusion of live presence.
Other (System Manipulation) Attacks
These attacks include cyber fraudsters manipulating how the liveness detection module is integrated into the application or back end, bypassing or faking the check. Typically, these attacks involve patching the app, injecting hooks, or exploiting weak verification of liveness results. One example is a Man-in-the-Middle (MITM) / SSL interception attack, which is based on substitution or manipulation of captured data during network transmission, typically involving SSL/TLS violations or certificate pinning bypass.
Oz Software Built-In Security Measures
With cyberattacks on the rise, cybersecurity has become crucial and is now our highest priority. We provide protection even from multi-vector complex attacks, ensuring your data is safe at all stages of processing, including media capture, data transmission, and analysis. This protection involves many mechanisms on multiple layers that work together, supporting and reinforcing each other. To name a few:
We do not accept virtual cameras and emulators.
In native SDKs, you can configure SSL pinning and add protection for media files using request payload.
For Web SDK, you can move the decision logic to the back end to avoid data manipulation within the browser context.
As our software aims to be embedded, it includes mechanisms to verify its runtime integrity, but it does not validate the integrity of the host application itself. Ensuring protection of the host application through anti-tampering techniques, code obfuscation, and runtime integrity verification is the responsibility of the host application owner. Without such safeguards, even a secure SDK may become susceptible to manipulation at the application or platform level.
Recommendations for Host Application Protection
Here are some measures we recommend to protect your application.
Consider revising your policies. This might involve:
Creating and using corporate SSL certificates,
Limiting access to unverified sources,
Using SSL proxy,
Controlling connections via SNI / TLS Handshake,
Creating a security policy and adhering to it,
etc.
For mobile applications, use Play Integrity (Android) and App Attest (iOS).
As for our SDKs, we recommend:
Ensuring you always have the latest version of Oz software installed, as almost every release includes security enhancements.
Setting up SSL pinning for Native SDKs.
For more detailed recommendations, please contact us. For us, clients' safety comes first, and we’ll be happy to help.
Web SDK requires HTTPS (with SSL encryption) to work; however, at localhost and 127.0.0.1, you can check the resources' availability via HTTP.
Oz Liveness Web SDK consists of two components:
Client side – a JavaScript file that is being loaded within the frontend part of your application. It is called Oz Liveness Web Plugin.
Server side – a separate server module with OZ API. The module is called Oz Liveness Web Adapter.
The integration guides can be found here:
Oz Web SDK can be provided via SaaS, where the server part works on our servers, is maintained by our engineers, and you just use it, or on-premise, where Oz Web Adapter is installed on your servers. Contact us for more details and to choose the model that is convenient for you.
Oz Web SDK requires a license to work. To issue a license, we need the domain name of the website where you are going to use our SDK.
To work properly, the resolution algorithms need each uploaded media to be marked with special tags. For video and images, the tags are different. They help algorithms to identify what should be in the photo or video and analyze the content.
Tags for Video Files
The following tag types should be specified in the system for video files.
To identify the data type of the video:
video_selfie
To identify the orientation of the video:
The tags listed allow the algorithms to recognize the files as suitable for the Quality (Liveness) and Biometry analyses.
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored by algorithms.
Example of the correct tag set for a video file with the “blink” action:
Tags for Photo Files
The following tag types should be specified in the system for photo files:
A tag for selfies:
photo_selfie – to identify the image type as “selfie”.
Tags for photos/scans of ID cards:
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored by algorithms.
Example of the correct tag set for a “selfie” photo file:
Example of the correct tag set for a photo file with the face side of an ID card:
Example of the correct set of tags for a photo file of the back of an ID card:
Getting a License for Android SDK
You can generate a trial license here or contact us by email to get a production license. To create the license, your applicationId (bundle id) is required.
To pass your license file to the SDK, call the OzLivenessSDK.init method with a list of LicenseSources. Use one of the following:
LicenseSource.LicenseAssetId should contain a path to a license file called forensics.license, which has to be located in the project's res/raw folder.
LicenseSource.LicenseFilePath should contain a file path to the place in the device's storage where the license file is located.
In case of any license errors, the onError function is called. Use it to handle the exception as shown above. Otherwise, the system will return information about license. To check the license data manually, use the getLicensePayload method.
Possible License Errors
Error message
What to Do
Connecting SDK to API
To connect SDK to Oz API, specify the API URL and access token as shown below.
In your host application, it is recommended that you set the API address on the screen that precedes the liveness check. Setting the API URL initiates a service call to the API, which may cause excessive server load if done at application initialization or startup. We recommend calling the setApiConnection method once, for example, in the Application class.
The order of SDK initialization and API connection does not matter, but both methods must be finished successfully before invoking the createStartIntent method.
Alternatively, you can use the login and password provided by your Oz Forensics account manager:
However, the preferred option is authentication via an access token, for security reasons.
By default, logs are saved along with the analyses' data. If you need to keep the logs distinct from the analysis data, set up a separate connection for logging as shown below:
Clearing authorization:
Other Methods
Check for the presence of the saved Oz API access token:
LogOut:
Master License for Android
Master license is the offline license that allows using Mobile SDKs with any bundle_id, unlike the regular licenses. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that.
Your application needs to sign its bundle_id with the private key, and the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.
Generating Keys
This section describes the process of creating your private and public keys.
Creating a Private Key
To create a private key, run the commands below one by one.
You will get these files:
privateKey.der is a private .der key;
privateKey.txt is privateKey.der encoded by base64. The contents of this key will be used as the host app bundle_id signature.
The OpenSSL command specification:
Creating a Public Key
To create a public key, run this command.
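Assuming the privateKey.der generated at the previous step, the command might look like this (a sketch):
openssl pkey -in privateKey.der -inform DER -pubout -out publicKey.pub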
You will get the public key file: publicKey.pub. To get a license, please email us this file. We will email you the license.
SDK Integration
SDK initialization:
For Android 6.0 (API level 23) and older:
Add the implementation 'com.madgag.spongycastle:prov:1.58.0.0' dependency;
Prior to the SDK initializing, create a base64-encoded signature for the host app bundle_id using the private key.
Signature creation example:
Pass the signature as the masterLicenseSignature parameter during the SDK initialization.
If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_id values included in the license, as it does by default without a master license.
Checking Liveness and Face Biometry
If you use our SDK just for capturing videos, omit this step.
OzCapsula (SDK v8.22 and newer)
At the Capturing Videos step, you've created a data container with all the required information in it, so now just send it to analysis using the addContainer(container) and run methods.
SDK 8.21 and older
To check liveness and face biometry, you need to upload media to our system and then analyze them.
To interpret the results of analyses, please refer to Types of Analyses.
Here’s an example of performing a check:
To delete media files after the checks are finished, use the clearActionVideos method.
Adding Metadata
To add metadata to a folder, use the addFolderMeta method.
Extracting the Best Shot
In the params field of the Analysis structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.
Using Media from Another SDK
To use a media file that is captured with another SDK (not Oz Android SDK), specify the path to it in the OzAbstractMedia object:
Adding Media to a Certain Folder
If you want to add your media to the existing folder, use the setFolderId method:
Types of Analyses and What They Check
Here, you'll get acquainted with types of analyses that Oz API provides and will learn how to interpret the output.
Using Oz API, you can perform one of the following analyses:
Biometry,
Quality (Liveness, Best Shot),
Documents,
Collection (1:N).
Statuses in API
This article contains the full description of folders' and analyses' statuses in API.
Field name / status
analyse.state
analyse.resolution_status
folder.resolution_status
system_resolution
How to Integrate Server-Based Liveness into Your Web Application
This guide outlines the steps for integrating the Oz Liveness Web SDK into a customer web application for capturing facial videos and subsequently analyzing them on a server.
The SDK implements the ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results. Under the hood, it communicates with Oz API.
Oz Liveness Web SDK detects both presentation and injection attacks. An injection attack is an attempt to feed pre-recorded video into the system using a virtual camera.
Finally, while the cloud-based service provides the fully-fledged functionality, we also offer an on-premise version with the same functions but no need for sending any data to our cloud.
We recommend starting with the SaaS mode and then reconnecting your web app to the on-premise Web Adapter and Oz API to ensure seamless integration between your front end and back end. With these guidelines in mind, integrating the Oz Liveness Web SDK into your web application can be a simple and straightforward process.
Master License for iOS
Master license is the offline license that allows using Mobile SDKs with any bundle_id, unlike the regular licenses. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that.
Your application needs to sign its bundle_id with the private key, and the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.
Generating Keys
This section describes the process of creating your private and public keys.
Using OzCapsula Data Container in Web SDK
In this mode, the SDK captures media from the user, packs all biometric and technical data into a container (application/octet-stream), and then either:
uploads it directly to Oz API (architecture: normal for stateful API or lite for Instant), or
returns it to your backend for forwarding (capture architecture).
{
"credentials": {
"email": "{{user_email}}", // your login
"password": "{{user_password}}" // your password
}
}
{
"expire_token": "{{expire_token}}"
}
Connection.fromServiceToken(
"your API server host",
"your token",
listOf(
SslPin(
"your hash", // SHA256 key hash in base64
<date> // key expiration date as a UNIX timestamp, UTC time
)
),
)
let pins = [SSLPin.pin(
publicKeyHash: "your hash", // SHA256 key hash in base64
expirationDate: date)] // key expiration date as a UNIX timestamp, UTC time
OZSDK.setApiConnection(.fromServiceToken(
host: "your API server host",
token: "your token",
sslPins: pins)) { (token, error) in
//
}
Install our Web SDK. Our engineers will help you to install the components needed using the standalone installer or manually. The license will be installed as well; to update it, please refer to this article.
This part is fully covered by the Oz Forensics engineers. You get a link for Oz Web Plugin (see step 2).
INITIAL
starting state
starting state
PROCESSING
analyses in progress
analyses in progress
FAILED
system error
system error
system error
system error
FINISHED
finished successfully
-
finished successfully
-
DECLINED
-
check failed
-
check failed
OPERATOR_REQUIRED
-
additional check is needed
-
additional check is needed
SUCCESS
-
check succeeded
-
check succeeded
The details on each status are below.
Analysis State (analyse.state)
This is the state when the analysis is being processed. The values of this state can be:
PROCESSING – the analysis is in progress;
FAILED – the analysis failed due to some error and couldn't get finished;
FINISHED – job's done, the analysis is finished, and you can check the result.
Analysis Result (analyse.resolution_status)
Once the analysis is finished, you'll see one of the following results:
SUCCESS – everything went fine, the check succeeded (e.g., faces match or liveness confirmed);
OPERATOR_REQUIRED (except the Liveness analysis) – the result should be additionally checked by a human operator;
The OPERATOR_REQUIRED status appears only if it is set up in biometry settings.
DECLINED – the check failed (e.g., faces don't match or some spoofing attack detected).
If the analysis hasn't been finished yet, the result inherits a value from analyse.state: PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).
Folder Status (folder.resolution_status)
A folder is an entity that contains media to analyze. If the analyses have not been finished, the stage of processing media is shown in resolution_status:
INITIAL – no analyses applied;
PROCESSING – analyses are in progress;
FAILED – any of the analyses failed due to some error and couldn't get finished;
FINISHED – media in this folder are processed, the analyses are finished.
Folder Result (system_resolution)
Folder result is the consolidated result of all analyses applied to media from this folder. Please note: the folder result is the result of the last-finished group of analyses. If all analyses are finished, the result will be:
SUCCESS – everything went fine, all analyses completed successfully;
OPERATOR_REQUIRED (except the Liveness analysis) – there are no analyses with the DECLINED status, but one or more analyses have been completed with the OPERATOR_REQUIRED status;
DECLINED – one or more analyses have been completed with the DECLINED status.
The analyses you send in a single POST request form a group. The group result is the "worst" result of analyses this group contains: INITIAL > PROCESSING > FAILED > DECLINED > OPERATOR_REQUIRED > SUCCESS, where SUCCESS means all analyses in the group have been completed successfully without any errors.
You will get the public key file: publicKey.pub. To get a license, please email us this file. We will email you the license.
SDK Integration
SDK initialization:
License setting:
Prior to the SDK initializing, create a base64-encoded signature for the host app bundle_id using the private key.
Signature creation example:
Pass the signature as the masterLicenseSignature parameter either during the SDK initialization or license setting.
If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_id values included in the license, as it does by default without a master license.
Configuration Overview
When enabling container mode, the key parameters are:
Parameter
Value
Description
use_wasm_container
true
Enables data container generation
architecture
normal / lite / capture
Defines who sends the data to Oz API (Web SDK or your backend)
api_use_session_token
api / client
Defines who retrieves the session token (Web SDK or your backend)
Session Token
api_use_session_token: "client"
In this mode, your backend obtains the session token before opening the Web SDK.
Steps:
Request a session token from Oz API:
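For example (a sketch; the host is a placeholder):
curl "https://<your-api-host>/api/authorize/session_token" \
  -H "X-Forensic-Access-Token: $ACCESS_TOKEN"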
The response will contain a short-term session_token:
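A possible response shape (the exact set of fields may differ):
{
"session_token": "<short-term-session-token>"
}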
Pass this token to the Web SDK Plugin:
The session token is valid only for a few minutes and must be requested before each capture session.
api_use_session_token: "api"
In this mode, the SDK automatically retrieves the session token directly from Oz API.
You don’t need to request or provide it manually.
Flow for Different Architectures
The flow is different depending on your architecture type.
architecture: "normal"
In this mode, the Web SDK automatically uploads the generated container to Oz API.
You do not need to handle any upload manually.
architecture: "lite"
In this mode, the Web SDK automatically uploads the generated container to Oz API (the lite architecture is used with Instant API).
You do not need to handle any upload manually.
architecture: "capture"
In this mode, Web SDK only captures and packs data, but does not send it to the Oz API.
Your backend is responsible for receiving the container and forwarding it to Oz API.
Flow:
Web SDK performs video capture and calls the on_capture_complete(result, container) callback, where the second argument (container) is a Blob object (application/octet-stream).
You send this blob to your backend.
Your backend sends it to Oz API using an HTTPS POST request.
Example request:
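A hedged sketch of forwarding the received container from your backend (host and file name are placeholders):
curl -X POST "https://<your-api-host>/api/folders/" \
  -H "X-Forensic-Access-Token: $ACCESS_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @container.bin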
The response will be similar to the one from the non-container flow.
private fun getMasterSignature(): String {
Security.insertProviderAt(org.spongycastle.jce.provider.BouncyCastleProvider(), 1)
val privateKeyBase64String = "the string copied from the privateKey.txt file"
// with key example:
// val privateKeyBase64String = "MIIEpAIBAAKCAQEAxnpv02nNR34uNS0yLRK1o7Za2hs4Rr0s1V1/e1JZpCaK8o5/3uGV+qiaTbKqU6x1tTrlXwE2BRzZJLLQdTfBL/rzqVLQC/n+kAmvsqtHMTUqKquSybSTY/zAxqHF3Fk59Cqisr/KQamPh2tmg3Gu61rr9gU1rOglnuqt7FioNMCMvjW7ciPv+jiawLxaPrzNiApLqHVN+xCFh6LLb4YlGRaNUXlOgnoLGWSQEsLwBZFkDJDSLTJheNVn9oa3PXg4OIlJIPlYVKzIDDcSTNKdzM6opkS5d+86yjI1aTKEH3Zs64+QoEuoDfXUxS3TOUFx8P+wfjOR5tYAT+7TRN4ocwIDAQABAoIBAATWJPV05ZCxbXTURh29D/oOToZ0FVn78CS+44Vgy1hprAcfG9SVkK8L/r6X9PiXAkNJTR+Uivly64Oua8//bNC7f8aHgxRXojFmWwayj8iOMBncFnad1N2h4hy1AnpNHlFp3I8Yh1g0RpAZOOVJFucbTxaup9Ev0wLdWyGgQ3ENmRXAyLU5iUDwUSXg59RCBFKcmsMT2GmmJt1BU4P3lL9KVyLBktqeDWR/l5K5y8pPo6K7m9NaOkynpZo+mHVoOTCtmTj5TC/MH9YRHlF15VxQgBbZXuBPxlYoQCsMDEcZlMBWNw3cNR6VBmGiwHIc/tzSHZVsbY0VRCYEbxhCBZkCgYEA+Uz0VYKnIWViQF2Na6LFuqlfljZlkOvdpU4puYTCdlfpKNT3txYzO0T00HHY9YG9k1AW78YxQwsopOXDCmCqMoRqlbn1SBe6v49pVB85fPYU2+L+lftpPlx6Wa0xcgzwOBZonHb4kvp1tWhUH+B5t27gnvRz/rx5jV2EfmWinycCgYEAy8/aklZcgoXWf93N/0EZcfzQo90LfftkKonpzEyxSzqCw7B9fHY68q/j9HoP4xgJXUKbx1Fa8Wccc0DSoXsSiQFrLhnT8pE2s1ZWvPaUqyT5iOZOW6R+giFSLPWEdwm6+BeFoPQQFHf8XH3Z2QoAepPrEPiDoGN1GSIXcCwoe9UCgYEAgoKj4uQsJJKT1ghj0bZ79xVWQigmEbE47qI1u7Zhq1yoZkTfjcykc2HNHBaNszEBks45w7qo7WU5GOJjsdobH6kst0eLvfsWO9STGoPiL6YQE3EJQHFGjmwRbUL7AK7/Tw2EJG0wApn150s/xxRYBAyasPxegTwgEj6j7xu7/78CgYEAxbkI52zG5I0o0fWBcf9ayx2j30SDcJ3gx+/xlBRW74986pGeu48LkwMWV8fO/9YCx6nl7JC9dHI+xIT/kk8OZUGuFBRUbP95nLPHBB0Hj50YRDqBjCBh5qaizSEGeGFFNIfFSKddri3U8nnZTNiKLGCx7E3bjE7QfCh5qoX8ZF0CgYAtsEPTNKWZKA23qTFI+XAg/cVZpbSjvbHDSE8QB6X8iaKJFXbmIC0LV5tQO/KT4sK8g40m2N9JWUnaryTiXClaUGU3KnSlBdkIA+I77VvMKMGSg+uf4OdfJvvcs4hZTqZRdTm3dez8rsUdiW1cX/iI/dJxF4964YIFR65wL+SoRg=="
val sig = Signature.getInstance("SHA512WithRSA")
val keySpec = PKCS8EncodedKeySpec(Base64.decode(privateKeyBase64String, Base64.DEFAULT))
val keyFactory = KeyFactory.getInstance("RSA")
sig.initSign(keyFactory.generatePrivate(keySpec))
sig.update(packageName.toByteArray(Charsets.UTF_8))
return Base64.encodeToString(sig.sign(), Base64.DEFAULT).replace("\n", "")
}
Kotlin
private fun runAnalysis(container: DataContainer?) {
if (container == null) return
AnalysisRequest.Builder()
.addContainer(container)
.build()
.run(
{ result ->
val isSuccess = result.analysisResults.all { it.resolution == Resolution.SUCCESS }
},
{ /* show error */ },
{ /* update status */ },
)
}
analysisCancelable = AnalysisRequest.Builder()
// mediaToAnalyze is an array of OzAbstractMedia that were captured or otherwise created
.addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaToAnalyze))// or ON_DEVICE if you want the on-device analysis
.build()
//initiating the analyses and setting up a listener
.run(object : AnalysisRequest.AnalysisListener {
override fun onStatusChange(status: AnalysisRequest.AnalysisStatus) { handleStatus(status) // or your status handler
}
override fun onSuccess(result: RequestResult) {
handleResults(result) // or your result handler
}
override fun onError(error: OzException) { handleError(error) // or your error handler
}
})
analysisCancelable = new AnalysisRequest.Builder()
// mediaToAnalyze is an array of OzAbstractMedia that were captured or otherwise created
.addAnalysis(new Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaToAnalyze)) // or ON_DEVICE if you want the on-device analysis
.build()
//initiating the analyses and setting up a listener
.run(new AnalysisRequest.AnalysisListener() {
@Override
public void onSuccess(@NonNull RequestResult list) { handleResults(list); } // or your result handler
@Override
public void onError(@NonNull OzException e) { handleError(e); } // or your error handler
@Override
public void onStatusChange(@NonNull AnalysisRequest.AnalysisStatus analysisStatus) { handleStatus(analysisStatus); } // or your status handler
})
.addFolderMeta(
mapOf(
"key1" to "value1",
"key2" to "value2"
)
)
val file = File(context.filesDir, "media.mp4") // use context.getExternalFilesDir(null) instead of context.filesDir for external app storage
val media = OzAbstractMedia.OzVideo(OzMediaTag.VideoSelfieSmile, file.absolutePath)
.setFolderId(folderId)
openssl genpkey -algorithm RSA -outform DER -out privateKey.der -pkeyopt rsa_keygen_bits:2048
# for MacOS
base64 -i privateKey.der -o privateKey.txt
# for Linux
base64 -w 0 privateKey.der > privateKey.txt
The possible results of the analyses are explained here.
Each of the analyses has its threshold that determines the output of these analyses. By default, the threshold for Liveness is 0.5 or 50%, for Collection and Biometry (Face Matching) – 0.85 or 85%.
Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.
Collection: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.
Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
To configure the threshold depending on your needs, please contact us.
For more information on how to read the numbers in analyses' results, please refer to the corresponding section.
Biometry
Purpose
The Biometry algorithm compares several media files and checks whether the people on them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media files (for details, please refer to Rules of Assigning Analyses).
Output
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:
100% (1) – faces are similar, media represent the same person,
0% (0) – faces are not similar and belong to different people.
Quality (Liveness, Best Shot)
Purpose
The Liveness detection (Quality) algorithm aims to check whether a person in a media is a real human acting in good faith, not a fake of any kind.
The Best Shot algorithm selects the best shot from a video (the best-quality frame where the face is seen most clearly). It is an addition to Liveness.
Output
After checking, the analysis shows the chance of a spoofing attack* in percent:
100% (1) – an attack is detected, the person in the video is not a real living person,
0% (0) – a person in the video is a real living person.
*Spoofing in biometry is a kind of scam where a person disguises themselves as another person using software or physical tools like deepfakes, masks, ready-made photos, or fake videos.
Documents
This analysis type is deprecated.
Purpose
The Documents analysis aims to recognize the document and check if its fields are correct according to its type.
Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.
Output
As an output, you'll get a list of document fields with recognition results for each field and a result of checking that can be:
The documents passed the check successfully,
The documents failed to pass the check.
Additionally, the result of Biometry check is displayed.
Collection (1:N)
Purpose
The Collection checking algorithm is used to determine whether the person in a photo or video is present in the database of pre-uploaded images. The person's face is compared with the faces of known swindlers or a list of VIPs, depending on your needs.
Output
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:
100% (1) – the person in an image or video matches with someone in the collection,
0% (0) – the person is not found in the collection.
Tell us the domain names of the pages from which you are going to call Web SDK and an email for admin access, e.g.:
Domain names from which Web SDK will be called:
www.yourbrand.com
www.yourbrand2.com
Email for admin access:
In response, you’ll get URLs and credentials for further integration and usage. When using SaaS API, you get them from us:
For the on-premise Oz API, you need to create a user yourself or ask the team that manages the API; see the user creation instructions. Consider the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a set of credentials similar to what you would get in the SaaS scenario.
2. Obtain a session token from Oz API
Session token is required for OzCapsula functionality.
(Optional, only if you use stateful API) Authorize as any non-OPERATOR role.
Call GET {{host}}/api/authorize/session_token.
Example request
3. Add Web Plugin to your web pages
Add the following tags to your HTML code. Use Web Adapter URL received before:
4. Implement your logic around Web Plugin
Add the code that opens the plugin and handles the results. You'll require a session token from step 2.
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, in the Security Recommendations section.
With these steps, you are done with basic integration of Web SDK into your web application. You will be able to access recorded media and analysis results in Web Console via browser or programmatically via API (please find the instructions here: retrieving an MP4 video, getting analysis results).
The license file is missing. Please check its name and path to the file.
License error. Cannot parse license from (your_URI), invalid format
The license file is somehow damaged. Please email us the file.
License error. Bundle company.application.id is not in the list allowed by license (bundle.id1, bundle.id2)
The bundle (application) identifier you specified is missing in the allowed list. Please check the spelling; if it is correct, you need to get another license for your application.
License error. Current date yyyy-mm-dd hh:mm:ss is later than license expiration date yyyy-mm-dd hh:mm:ss
Your license has expired. Please contact us.
License is not initialized. Call 'OzLivenessSDK.init before using SDK
You haven't initialized the license. Call OzLivenessSDK.init with your license data as explained above.
OzLivenessSDK.INSTANCE.getConfig().setBaseURL(BASE_URL);
OzLivenessSDK.INSTANCE.init(
    context,
    Arrays.asList(
        new LicenseSource.LicenseAssetId(R.raw.forensics),
        new LicenseSource.LicenseFilePath("absolute_path_to_your_license_file")
    ),
    new StatusListener<LicensePayload>() {
        @Override
        public void onStatusChanged(@Nullable String s) {}
        @Override
        public void onSuccess(LicensePayload licensePayload) {
            // check the license payload
        }
        @Override
        public void onError(@NonNull OzException e) {
            // handle the license error
        }
    }
);
Capturing Videos
OzCapsula (SDK v8.22 and newer)
Please note: all required data (other than the video) must be packaged into the container before starting the Liveness screen.
To start recording, use startActivityForResult:
To obtain the captured video, use onActivityResult:
If you use fragment, please refer to the example below. LivenessFragment is the representation of the Liveness screen UI.
SDK 8.21 and older
To start recording, use the startActivityForResult method:
actions – a list of actions (gestures) to perform while recording the video.
For Fragment, use the code below. LivenessFragment is the representation of the Liveness screen UI.
To ensure the license being processed properly, we recommend initializing SDK first, then opening the Liveness screen.
To obtain the captured video, use the onActivityResult method:
sdkMediaResult – an object with video capturing results for interactions with Oz API (a list of OzAbstractMedia objects),
sdkErrorString – a description of an error, if any.
If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.
If a user closes the capturing screen manually, resultCode receives the Activity.RESULT_CANCELED value.
Code example:
Changelog
8.22.0 – Dec. 23, 2025
Changed the way you add our SDK to pubspec.yaml. Please check the installation section.
Android:
Fixed the bug with green videos on some smartphone models.
Fixed occasional SDK crashes in specific cases and / or on specific devices.
Resolved the issue with mediaId appearing null.
iOS:
Fixed the bug with crashes that might happen during the Biometry analysis after taking a reference photo using camera.
Resolved the issue with SDK not returning the license-related callbacks.
8.19.0 – Nov. 24, 2025
Android:
Resolved an issue with a warning that could appear when running a Fragment.
SDK no longer crashes when calling copyPlane.
8.18.1 – Sept. 10, 2025
initSDK in the iOS debugging mode now works properly.
8.18.0 – Aug. 26, 2025
You can now .
Fixed an error in the example code.
The Scan gesture hint is now properly voiced.
8.16.0 – Apr. 30, 2025
Changed the wording for the head_down gesture: the new wording is “tilt down”.
Updated the authorization logic.
Improved voiceover.
8.14.0 – Dec. 17, 2024
Security and telemetry updates.
The SDK hints and UI controls can be voiced in accordance with WCAG requirements.
Improved user experience with head movement gestures.
8.12.0 – Oct. 11, 2024
The executeLiveness method is now deprecated, please use startLiveness instead.
Updated the code needed to obtain the Liveness results.
Security and telemetry updates.
8.8.2 – June 27, 2024
Added descriptions for the errors that occur when providing an empty string as an ID in the addFolderID (iOS) and setFolderID (Android) methods.
Android:
Fixed a bug causing an endless spinner to appear if the user switches to another application during the Liveness check.
8.6.0 – Apr. 15, 2024
Android:
Upgraded the on-device Liveness model.
Security updates.
8.5.0 – Mar. 20, 2024
The length of the Selfie gesture is now configurable (affects the video file size).
Removed the pause after the Scan gesture.
Security and logging updates.
8.4.0 – Jan. 11, 2024
Android: updated the on-device Liveness model.
iOS: changed the default behavior in case a localization key is missing: now the English string value is displayed instead of a key.
Fixed some bugs.
8.3.0 – Nov. 30, 2023
Implemented the possibility of using a master license that works with any bundle_id.
Fixed the bug with background color flashing.
Video compression failure on some phone models is now fixed.
8.2.0 – Nov. 17, 2023
Initial release.
Single Request
Overview and Benefits
In version 6.0.1, we introduced a new feature which allows you to send all required data and receive the analysis result within a single request.
Before 6.0.1, interacting with the API required multiple requests: you had to create a folder and upload media to it, initiate analyses (see Liveness, Biometry, and Blacklist), and then either poll for results or use webhooks for notifications when the result was ready. This flow is still supported, so if you need to send separate requests, you can continue using the existing methods that are listed above.
However, the new API operation mode significantly simplifies the process by allowing you to send a single request and receive the response synchronously. The key benefits are:
Single request for everything – all data is sent in one package, eliminating the risk of data loss.
Synchronous response – no need for polling or webhooks to retrieve results.
High performance – supports up to 36 analyses per minute per instance.
Usage
To use this method, call POST /api/folders/. In the X-Forensic-Access-Token header, pass your access token. Add media files to the request body and define the tags and metadata if needed in the payload part.
Request Example
Response Example
In response, you receive analysis results.
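To illustrate, here is a hedged Kotlin sketch of such a request using OkHttp. The payload structure shown (media:tags with video_selfie_blank) is an assumption based on the tag examples in this documentation; check the API reference for the exact schema:

import java.io.File
import okhttp3.MediaType.Companion.toMediaType
import okhttp3.MultipartBody
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.asRequestBody

// Sends a video and the analysis payload in a single multipart request.
fun createFolderWithAnalyses(host: String, accessToken: String, video: File): String {
    val body = MultipartBody.Builder()
        .setType(MultipartBody.FORM)
        // the media file to be analyzed
        .addFormDataPart("video1", video.name, video.asRequestBody("video/mp4".toMediaType()))
        // hypothetical payload: tags/metadata for the uploaded media
        .addFormDataPart("payload", """{"media:tags": {"video1": ["video_selfie_blank"]}}""")
        .build()
    val request = Request.Builder()
        .url("$host/api/folders/")
        .header("X-Forensic-Access-Token", accessToken)
        .post(body)
        .build()
    // the synchronous response contains the analysis results
    OkHttpClient().newCall(request).execute().use { response -> return response.body!!.string() }
}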
You're done.
Migration to OzCapsula
This guide describes the migration to the latest OzCapsula architecture: the approach where you use an encrypted container to securely transmit data between the frontend and the backend.
Component Version Requirements
Before starting the migration, ensure that all components are updated to the minimum required versions:
OzLiveness.open({
session_token,
lang: 'en',
action: [
// 'photo_id_front', // request photo ID picture
'video_selfie_blank' // request passive liveness video
],
on_complete: function (result) {
// This callback is invoked when the analysis is complete
console.log('on_complete', result);
}
});
Older versions are not compatible with the new capture and analysis flow.
Authentication Changes
All capture sessions now require a session_token issued by the backend.
The token must be obtained before starting video capture.
It is tied to the current session and the specific container.
The token has a limited lifetime.
Migration actions
Implement a backend call to obtain session_token.
Pass the token explicitly when creating the capture screen.
Flow Changes
Before Container (Legacy Flow)
You launch media capture.
Media is captured.
Media along with required data is sent to Oz API (using your backend as an intermediary if needed).
With Container (New Flow)
You request a session_token from the backend.
You put additional data, like metadata (if needed), into the container and launch video capture with the session_token.
SDK captures media, packages it into the container, and returns a Media Container.
The Media Container is sent to Oz API (using your backend as an intermediary if needed).
Migration actions
API
Upgrade to v6.4.1-40 or higher.
Switch the Content-Type of the data you send to application/octet-stream.
For Instant API, obtain private and public keys as described here.
Web SDK
Upgrade to v1.9.2 or higher.
Ensure backend supports session_token.
In the configuration file, set use_wasm_container to true and api_use_session_token to api or client (please refer to for details).
Update initialization to pass token explicitly.
If you use the capture architecture type, ensure that you receive a blob object (application/octet-stream) and send it to your backend and then to us.
Mobile SDKs
Upgrade to v8.22 or higher.
Ensure backend supports session_token.
Implement new interfaces as described below.
Android
1
Launching Capture Screen
2
Delegate
3
Launching Analysis
iOS
1
Launching Capture Screen
2
Delegate
3
Launching Analysis
Migration Checklist
Mobile SDKs are updated to 8.22 or newer.
Added the session_token backend call.
New public interface is implemented: createMediaCaptureScreen with CaptureRequest.
Web SDK is updated to 1.9.2 or newer.
use_wasm_container is set to true.
api_use_session_token is set to api or client.
For best results, migrate all components simultaneously and avoid partial upgrades.
How to Integrate On-Device Liveness into Your Mobile Application
We recommend using server-based analyses whenever possible, as on-device ones tend to produce less accurate results.
This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and performing on-device liveness checks without sending any data to a server.
The SDK implements the ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results.
Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue the 1-month trial license on our website or email us for a long-term license.
Android
1
Add SDK to your project
In the build.gradle of your project, add:
In the build.gradle of the module, add:
2
iOS
1
Add our SDK to your project
Install OZLivenessSDK.
With these steps, you are done with basic integration of Mobile SDKs. The data from the on-device analysis is not transferred anywhere, so please bear in mind that you cannot access it via API or Web Console. However, an internet connection is still required to check the license. Additionally, we recommend that you use our logging service called telemetry, as it helps a lot in investigating the details of attacks. We'll provide you with credentials.
Customizing Android SDK
Configuration
We recommend applying these settings when starting the app.
// connecting to the API server
OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(HOST, TOKEN))
// settings for the number of attempts to detect an action
Add the lines below to the pubspec.yaml of the project you want to add the plugin to.
For 8.22 and above:
For 8.21 and below:
Add the license file (e.g., license.json or forensics.license) to the Flutter application/assets folder. In pubspec.yaml, specify the Flutter asset:
For Android, add the Oz repository to /android/build.gradle, allprojects → repositories section:
For Flutter 8.24.0 and above or Android Gradle plugin 8.0.0 and above, add to android/gradle.properties:
The minimum SDK version should be 21 or higher:
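For reference, a sketch of the standard way to set this in android/app/build.gradle (your project may already define it):

android {
    defaultConfig {
        minSdkVersion 21
    }
}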
For iOS, set the minimum platform to 13 or higher in the Runner → Info → Deployment target → iOS Deployment Target.
In ios/Podfile, comment the use_frameworks! line (#use_frameworks!).
Getting Started with Flutter
Initializing SDK
Initialize SDK by calling the init plugin method. Note that the license file name and path should match the ones specified in pubspec.yaml (e.g., assets/license.json).
Connecting SDK to API
Use the API credentials (login, password, and API URL) that you’ve received from us.
In production, instead of hard-coding the login and password inside the application, it is recommended to get the access token on your backend via the API auth method, then pass it to your application:
By default, logs are saved along with the analyses' data. If you need to keep the logs distinct from the analysis data, set up a separate connection for telemetry as shown below:
or
Capturing Videos
To start recording and obtain the recorded media, use the startLiveness method:
Please note: for versions 8.11 and below, the method name is executeLiveness, and it returns the recorded media.
To obtain the media result, subscribe to livenessResult as shown below:
Checking Liveness and Face Biometry
To run the analyses, execute the code below.
Create the Analysis object:
Execute the formed analysis:
If you need to run an analysis for a particular folder, pass its ID:
The analysisResult list of objects contains the result of the analysis.
If you want to use media captured by another SDK, the code should look like this:
How to Integrate Server-Based Liveness into Your Mobile Application
This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and subsequently analyzing them on the server.
The SDK implements a ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results. The SDK methods for liveness analysis communicate with Oz API under the hood.
Before you begin, make sure you have Oz API credentials. When using SaaS API, you get them from us:
For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the
System Objects
This section describes the objects you can find in the Oz Forensics system.
Objects Hierarchy
System objects in Oz Forensics products are hierarchically structured as shown in the picture below.
At the top level, there is a Company. You can use one copy of Oz API to work with several companies.
val sessionToken: String = getSessionToken()
val captureRequest = CaptureRequest(
listOf(
AnalysisProfile(
Analysis.Type.QUALITY,
listOf(MediaRequest.ActionMedia(OzAction.Blank)),
)
)
)
val intent = OzLivenessSDK.createStartIntent(captureRequest, sessionToken)
startActivityForResult(intent, REQUEST_LIVENESS_CONTAINER)
val intent = OzLivenessSDK.createStartIntent(listOf(OzAction.Blank))
startActivityForResult(intent, REQUEST_LIVENESS_MEDIA)
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == REQUEST_LIVENESS_CONTAINER) {
when (resultCode) {
OzLivenessResultCode.SUCCESS -> runAnalysis(OzLivenessSDK.getContainerFromIntent(data))
OzLivenessResultCode.USER_CLOSED_LIVENESS -> { /* user closed the screen */ }
else -> {
val errorMessage = OzLivenessSDK.getErrorFromIntent(data)
/* show error */
}
}
}
}
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == REQUEST_LIVENESS_MEDIA) {
when (resultCode) {
OzLivenessResultCode.SUCCESS -> runAnalysis(OzLivenessSDK.getResultFromIntent(data))
OzLivenessResultCode.USER_CLOSED_LIVENESS -> { /* user closed the screen */ }
else -> {
val errorMessage = OzLivenessSDK.getErrorFromIntent(data)
/* show error */
}
}
}
}
getSessionToken() { sessionToken in
DispatchQueue.main.async {
do {
let action:OZVerificationMovement = .selfie
let mediaRequest = MediaRequest.action(action)
let profile = AnalysisProfile(mediaList: [mediaRequest],
type: .quality,
params: [:] )
let request = CaptureRequest(analysisProfileList: [profile], cameraPosition: .front)
let ozLivenessVC = try OZSDK.createMediaCaptureScreen(self, request, sessionToken: sessionToken)
self.present(ozLivenessVC, animated: true)
} catch let error {
print(error.localizedDescription)
}
}
}
do {
let ozLivenessVC : UIViewController = try OZSDK.createVerificationVCWithDelegate(self, actions: .selfie)
self.present(ozLivenessVC, animated: true)
} catch let error {
print(error.localizedDescription)
}
OzLiveness.open({
// omit session_token if your SDK version is older than 1.9.2 or you have set api_use_session_token to api
session_token,
lang: 'en',
action: [
'photo_id_front', // request photo ID picture
'video_selfie_blank' // request passive liveness video
],
meta: {
// an ID of user undergoing the check
// add for easier conversion calculation
'end_user_id': '<user_or_lead_id>',
// Your unique identifier that you can use later to find this folder in Oz API
// Optional, yet recommended
'transaction_id': '<your_transaction_id>',
// You can add iin if you plan to group transactions by the person identifier
'iin': '<your_client_iin>',
// Other meta data
'meta_key': 'meta_value',
},
on_error: function (result) {
// error details
console.error('on_error', result);
},
on_complete: function (result) {
// This callback is invoked when the analysis is complete
// It is recommended to commence the transaction on your backend,
// using transaction_id to find the folder in Oz API and get the results
console.log('on_complete', result);
},
on_capture_complete: function (result) {
// Handle captured data here if necessary
console.log('on_capture_complete', result);
}
});
ozsdk: ^8.22.0
– (optional) the auth token;
license – an object containing the license data;
licenseUrl – a string containing the path to the license;
lang – a string containing the identifier of one of the installed language packs;
meta – an object with names of meta fields in keys and their string values in values. Metadata is transferred to Oz API and can be used to obtain analysis results or for searching;
params – an object with identifiers and additional parameters:
extract_best_shot – true or false: whether to run the best frame selection in the Quality analysis;
action – an array of strings with identifiers of actions to be performed.
Available actions:
photo_id_front – photo of the ID front side;
photo_id_back – photo of the ID back side;
video_selfie_left – turn head to the left;
video_selfie_right – turn head to the right;
video_selfie_down – tilt head downwards;
video_selfie_high – raise head up;
video_selfie_smile – smile;
video_selfie_eyes – blink;
video_selfie_scan – scanning;
video_selfie_blank – no action, simple selfie;
video_selfie_best – special action to select the best shot from a video and perform analysis on it instead of the full video.
overlay_options – the options for displaying the document template:
show_document_pattern: true/false – true by default; displays a template image. If set to false, the image is replaced by a rectangular frame;
on_submit – a callback function (no arguments) that is called after submitting customer data to the server (unavailable for the capture mode).
on_capture_complete – a callback function (with one argument) that is called after the video is captured and retrieves the information on this video. An example of the response is described here.
on_result – a callback function (with one argument) that is called periodically during the analysis and retrieves an intermediate result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode configuration parameter and is described here.
on_complete – a callback function (with one argument) that is called after the check is completed and retrieves the analysis result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode configuration parameter and is described here.
on_error – a callback function (with one argument) that is called in case of any error during video capturing and retrieves the error information: an object with the error code, error message, and telemetry ID for logging.
on_close – a callback function (no arguments) that is called after the plugin window is closed (whether manually by the user or automatically after the check is completed).
device_id – (optional) the identifier of the camera being used.
enable_3d_mask – enables the 3D mask as the default face capture behavior. This parameter works only if load_3d_mask in the Web Adapter configuration parameters is set to true; the default value is false.
cameraFacingMode (since 1.4.0) – the parameter that defines which camera to use; possible values: user (front camera), environment (rear camera). This parameter only works if the use_for_liveness option in the Web Adapter configuration file is undefined. If use_for_liveness is set (with any value), cameraFacingMode gets overridden and ignored.
disable_adaptive_aspect_ratio (since 1.5.0) – if true, disables the adaptive video aspect ratio, so your video doesn’t automatically adjust to the window aspect ratio. The default value is false, and by default, the video adjusts to the closest ratio of 4:3, 3:4, 16:9, or 9:16. Please note: smartphones still require the portrait orientation to work.
get_user_media_timeout (since 1.5.0) – when Web SDK can’t get access to the user camera, after this timeout it displays a hint on how to solve the problem. The default value is 40000 (ms).
If the getUserMedia() function hangs, you can manage the SDK behavior using the following parameters (since 1.7.15):
get_user_media_promise_timeout_ms – sets the timeout (in ms) after which SDK throws an error or displays an instruction. This parameter is an object with the following keys: "platform_browser", "browser", "platform", "default" (the priority matches the sequence).
get_user_media_promise_timeout_throw_error – defines whether, after the time period defined in the parameter above, SDK should throw an error (if true) or display a user instruction (if false).
If api_use_session_token = client, the session_token is requested before the session starts and is passed in open.options().
private fun runAnalysis(container: DataContainer?) {
if (container == null) return
AnalysisRequest.Builder()
.addContainer(container)
.build()
.run(
{ result ->
val isSuccess = result.analysisResults.all { it.resolution == Resolution.SUCCESS }
},
{ /* show error */ },
{ /* update status */ },
)
}
private fun runAnalysis(media: List<OzAbstractMedia>?) {
if (media.isNullOrEmpty()) return
AnalysisRequest.Builder()
.addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, media))
.build()
.run(
{ result ->
val isSuccess = result.analysisResults.all { it.resolution == Resolution.SUCCESS }
},
{ /* show error */ },
{ /* update status */ },
)
}
func onResult(container: DataContainer) {
let analysisRequest = AnalysisRequestBuilder()
analysisRequest.addContainer(container)
analysisRequest.run(
statusHandler: { status in
},
errorHandler: { error in
}
) { result in
}
}
func onResult(results: [OZMedia]) {
let analysisRequest = AnalysisRequestBuilder()
let analysis = Analysis(media: results,
type: .quality,
mode: .serverBased,
params: nil)
analysisRequest.addAnalysis(analysis)
analysisRequest.run { status in
} errorHandler: { error in
} completionHandler: { results in
}
}
OzLivenessSDK.INSTANCE.getConfig().setCustomization(new UICustomization(
// customization parameters for the toolbar
new ToolbarCustomization(
R.drawable.ib_close,
new Color.ColorRes(R.color.white),
R.style.Sdk_Text_Primary,
new Color.ColorRes(R.color.white),
R.font.roboto,
Typeface.NORMAL,
100, // toolbar text opacity (in %)
18, // toolbar text size (in sp)
new Color.ColorRes(R.color.black),
60, // toolbar alpha (in %)
"Liveness", // toolbar title
true // center toolbar title
),
// customization parameters for the center hint
new CenterHintCustomization(
R.font.roboto,
new Color.ColorRes(R.color.text_color),
20,
50,
R.style.Sdk_Text_Primary,
new Color.ColorRes(R.color.color_surface),
100, // background opacity
14, // corner radius for background frame
100 // text opacity
),
// customization parameters for the hint animation
new HintAnimation(
new Color.ColorRes(R.color.red), // gradient color
80, // gradient opacity (in %)
120, // the side size of the animation icon square
false // hide animation
),
// customization parameters for the frame around the user face
new FaceFrameCustomization(
GeometryType.RECTANGLE,
10, // frame corner radius (for GeometryType.RECTANGLE)
new Color.ColorRes(R.color.error_red),
new Color.ColorRes(R.color.success_green),
100, // frame stroke alpha (in %)
5, // frame stroke width (in dp)
3 // frame stroke padding (in dp)
),
// customization parameters for the background outside the frame
new BackgroundCustomization(
new Color.ColorRes(R.color.black),
60 // background alpha (in %)
),
// customization parameters for the SDK version text
new VersionTextCustomization(
R.style.Sdk_Text_Primary,
R.font.roboto,
12, // version text size
new Color.ColorRes(R.color.white),
100 // version text alpha
),
// customization parameters for the antiscam protection text
new AntiScamCustomization(
"Recording .. ",
R.font.roboto,
12,
new Color.ColorRes(R.color.text_color),
100,
R.style.Sdk_Text_Primary,
new Color.ColorRes(R.color.color_surface),
100,
14,
new Color.ColorRes(R.color.green)
),
// custom logo parameters
new LogoCustomization(
new Image.Drawable(R.drawable.ic_logo),
new Size(176, 64)
)
)
);
Kotlin
val OZ_LIVENESS_REQUEST_CODE = 1
val intent = OzLivenessSDK.createStartIntent(listOf(OzAction.Blank))
startActivityForResult(intent, OZ_LIVENESS_REQUEST_CODE)
Java
int OZ_LIVENESS_REQUEST_CODE = 1;
Intent intent = OzLivenessSDK.INSTANCE.createStartIntent(Arrays.asList(OzAction.Blank));
startActivityForResult(intent, OZ_LIVENESS_REQUEST_CODE);
To obtain the captured video, use onActivityResult:
The sdkMediaResult object contains the captured videos.
4
Run analyses
To run the analyses, execute the code below. Note that mediaList is an array of objects that were captured (sdkMediaResult) or otherwise created (media you captured on your own).
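As a sketch, the analysis call can mirror the runAnalysis example used elsewhere in this documentation; Analysis.Mode.ON_DEVICE is assumed here since this guide covers on-device checks:

private fun runAnalysis(mediaList: List<OzAbstractMedia>) {
    AnalysisRequest.Builder()
        // ON_DEVICE keeps the analysis local; SERVER_BASED would send the media to Oz API
        .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.ON_DEVICE, mediaList))
        .build()
        .run(
            { result -> /* handle the analysis results */ },
            { /* show error */ },
            { /* update status */ },
        )
}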
To integrate OZLivenessSDK into an Xcode project, add to Podfile:
SPM
Add the following package dependencies via SPM: https://gitlab.com/oz-forensics/oz-mobile-ios-sdk (if you need a guide on adding the package dependencies, please refer to the Apple documentation). OzLivenessSDK is mandatory. Ensure you've added the OzLivenessSDKOnDevice file.
2
Initialize SDK
Rename the license file to forensics.license and put it into the project.
3
Add face recording
Create a controller that will capture videos as follows:
The delegate object must implement OZLivenessDelegate protocol:
4
Run analyses
Use AnalysisRequestBuilder to initiate the Liveness analysis.
. Consider the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.
We also recommend that you use our logging service called telemetry, as it helps a lot in investigating the details of attacks. For Oz API users, the service is enabled by default. For on-premise installations, we'll provide you with credentials.
Oz Liveness Mobile SDK requires a license. License is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue the 1-month trial license on our website or email us for a long-term license.
Android
1
Add SDK to your project
In the build.gradle of your project, add:
In the build.gradle of the module, add:
2
Initialize SDK
Rename the license file to forensics.license and place it into the project's res/raw folder.
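For reference, a Kotlin counterpart of the Java initialization sample shown earlier in this documentation (a sketch; check the license payload and handle errors as needed):

OzLivenessSDK.init(
    context,
    listOf(LicenseSource.LicenseAssetId(R.raw.forensics)),
    object : StatusListener<LicensePayload> {
        override fun onStatusChanged(status: String?) {}
        override fun onSuccess(payload: LicensePayload) { /* check the license payload */ }
        override fun onError(e: OzException) { /* handle the error */ }
    }
)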
3
Connect SDK to Oz API
Use the API credentials (login, password, and API URL) that you’ve received from us.
In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your backend via the API auth method, then pass it to your application:
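A minimal sketch of this approach, where fetchTokenFromBackend is a hypothetical helper that calls your own backend:

// Hypothetical helper: your backend obtains the token via the API auth method.
fun fetchTokenFromBackend(): String = TODO("request the access token from your backend")

fun connectSdkToApi(host: String) {
    val token = fetchTokenFromBackend()
    // OzConnection.fromServiceToken is shown in the configuration example above.
    OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(host, token))
}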
4
Add face recording
To start recording, use startActivityForResult:
To obtain the captured video, use onActivityResult:
To integrate OZLivenessSDK into an Xcode project, add to Podfile:
SPM
Add the following package dependencies via SPM: (if you need a guide on adding the package dependencies, please refer to the ). OzLivenessSDK is mandatory. Skip the OzLivenessSDKOnDevice file.
2
Initialize SDK
Rename the license file to forensics.license and put it into the project.
3
Connect SDK to Oz API
Use API credentials (login, password, and API URL) that you’ve got from us.
In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your back end using the API method, then pass it to your application:
4
Add face recording
Create a controller that will capture videos as follows:
The delegate object must implement the OZLivenessDelegate protocol:
5
Run analyses
Use AnalysisRequestBuilder to initiate the Liveness analysis. The communication with Oz API is under the hood of the run method.
With these steps, you are done with basic integration of Mobile SDKs. You will be able to access recorded media and analysis results in Web Console via browser or programmatically via API.
In developer guides, you can also find instructions for customizing the SDK look-and-feel and access the full list of our Mobile SDK methods. Check out the table below:
The next level is a User. A company can contain any number of users. There are several user roles with different permissions. For more information, refer to User Roles.
When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders, and each folder can contain any number of media files. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed on this page.
Object parameters
Common parameters
time_created (Timestamp) – Object (except user and company) creation time
time_updated (Timestamp) – Object (except user and company) update time
meta_data (Json) – Any user parameters
technical_meta_data (Json) – Module-required parameters; reserved for internal needs
Besides these parameters, each object type has specific ones.
Company
company_id (UUID) – Company ID within the system
name (String) – Company name within the system
User
user_id (UUID) – User ID within the system
user_type (String) – User role within the system
first_name (String) – Name
last_name (String) – Surname
middle_name (String) – Middle name
email (String) – User email = login
password (String) – User password (only required for new users or to change)
Folder
folder_id (UUID) – Folder ID within the system
resolution_status (ResolutionStatus) – The status of the latest analysis
Media
media_id (UUID) – Media ID
original_name (String) – Original filename (the name of the file on the client machine)
original_url (Url) – HTTP link to this file on the API server
tags – Tags assigned to the media
Analysis
analyse_id (UUID) – ID of the analysis
folder_id (UUID) – ID of the folder
type (String) – Analysis type (BIOMETRY / QUALITY / DOCUMENTS)
results_data – The results of the analysis
Using OzCapsula Data Container in Native SDK
OzCapsula data container is a unified file that protects your data end-to-end while ensuring its integrity during transmission. To use it, implement the new methods: AnalysisRequest.addContainer for adding data for analysis and createMediaCaptureScreen(request) for video capture. The latter takes a video and packages it into a data container.
Code Examples
Please follow the links below to access the examples. You can also refer to the illustrative examples shown below the links.
Capturing Video and Description of the on_capture_complete Callback
In this article, you’ll learn how to capture videos and send them through your backend to Oz API.
1. Overview
Here is the data flow for your scenario:
1. Oz Web SDK takes a video and makes it available for the host application as a frame sequence.
2. The host application calls your backend, passing an archive of these frames.
OZSDK(licenseSources: [.licenseFileName("forensics.license")]) { licenseData, error in
if let error = error {
print(error.errorDescription)
}
}
let actions: [OZVerificationMovement] = [.selfie]
let ozLivenessVC: UIViewController = OZSDK.createVerificationVCWithDelegate(delegate, actions: actions)
self.present(ozLivenessVC, animated: true)
let actions: [OZVerificationMovement] = [.selfie]
let ozLivenessVC: UIViewController = OZSDK.createVerificationVCWithDelegate(delegate, actions: actions)
self.present(ozLivenessVC, animated: true)
let analysisRequest = AnalysisRequestBuilder()
let analysis = Analysis.init(
media: mediaToAnalyze,
type: .quality,
mode: .onDevice)
analysisRequest.uploadMedia(mediaToAnalyze)
analysisRequest.addAnalysis(analysis)
analysisRequest.run(
scenarioStateHandler: { state in }, // scenario steps progress handler
uploadProgressHandler: { (progress) in } // file upload progress handler
) { (analysisResults : [OzAnalysisResult], error) in
// receive and handle analyses results here
for result in analysisResults {
print(result.resolution)
print(result.folderID)
}
}
dependencies {
implementation 'com.ozforensics.liveness:full:<version>'
// You can find the version needed in the Android changelog
}
pod 'OZLivenessSDK', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios', :tag => '<version>' // You can find the version needed in iOS changelog
dependencies {
implementation 'com.ozforensics.liveness:full:<version>'
// You can find the version needed in the Android changelog
}
pod 'OZLivenessSDK', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios', :tag => '<version>' // You can find the version needed in iOS changelog
val sessionToken: String = getSessionToken()
val captureRequest = CaptureRequest(
listOf(
AnalysisProfile(
Analysis.Type.QUALITY,
listOf(MediaRequest.ActionMedia(OzAction.Blank)),
)
)
)
val intent = OzLivenessSDK.createStartIntent(captureRequest, sessionToken)
startActivityForResult(intent, REQUEST_LIVENESS_CONTAINER)
Kotlin
private fun runAnalysis(container: DataContainer?) {
if (container == null) return
AnalysisRequest.Builder()
.addContainer(container)
.build()
.run(
{ result ->
val isSuccess = result.analysisResults.all { it.resolution == Resolution.SUCCESS }
},
{ /* show error */ },
{ /* update status */ },
)
}
OZSDK(licenseSources: [.licenseFileName("forensics.license")]) { licenseData, error in
if let error = error {
print(error.errorDescription)
}
}
OZSDK.setApiConnection(Connection.fromCredentials(host: "https://sandbox.ohio.ozforensics.com", login: login, password: p)) { (token, error) in
// Your code to handle error or token
}
OZSDK.setApiConnection(Connection.fromServiceToken(host: "https://sandbox.ohio.ozforensics.com", token: token)) { (token, error) in
}
getSessionToken() { sessionToken in
DispatchQueue.main.async {
do {
let action:OZVerificationMovement = .selfie
let mediaRequest = MediaRequest.action(action)
let profile = AnalysisProfile(mediaList: [mediaRequest],
type: .quality,
params: [:] )
let request = CaptureRequest(analysisProfileList: [profile], cameraPosition: .front)
let ozLivenessVC = try OZSDK.createMediaCaptureScreen(self, request, sessionToken: sessionToken)
self.present(ozLivenessVC, animated: true)
} catch let error {
print(error.localizedDescription)
}
}
}
func onResult(container: DataContainer) {
let analysisRequest = AnalysisRequestBuilder()
analysisRequest.addContainer(container)
analysisRequest.run(
statusHandler: { status in
},
errorHandler: { error in
}
) { result in
}
}
Kotlin
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == REQUEST_LIVENESS_CONTAINER) {
when (resultCode) {
OzLivenessResultCode.SUCCESS -> runAnalysis(OzLivenessSDK.getContainerFromIntent(data))
OzLivenessResultCode.USER_CLOSED_LIVENESS -> { /* user closed the screen */ }
else -> {
val errorMessage = OzLivenessSDK.getErrorFromIntent(data)
/* show error */
}
}
}
}
Please check the methods and properties below. You can also find them in the corresponding sections of the iOS and Android documentation.
addContainer
This method replaces addAnalysis in the AnalysisRequest structure when you use the data container flow.
Input:
OzDataContainer (bytearray[]) – An encrypted file containing media and collateral info; the output of the createMediaCaptureScreen method
createMediaCaptureScreen
Captures a media file with all the information you need and packages it into a data container.
Input:
request (CaptureRequest) – A request for video capture
session_token (String) – Stores additional information to protect against replay attacks
Output:
OzDataContainer (bytearray[]) – An encrypted file containing media and collateral info
public data class CaptureRequest
Defines a request for video capture.
analysisProfileList (List<AnalysisProfile>) – A list of objects that contain information on media and analyses that should be applied to them
folderMeta (optional) (Map<String, Any>) – Additional folder metadata
additionalMediaList (optional) (List<MediaRequest>) – Media files that you need to upload to the server but that are not necessary for analyses
cameraPosition (optional) (String) – front (default) for the front camera, back for the rear camera
public data class AnalysisProfile
Contains information on media files and analyses that should be applied to them.
mediaList (List<MediaRequest>) – A list of media to be analyzed
type (String: Analysis.Type on Android or the corresponding analysis type on iOS) – Analysis type
params (optional) (Map<String, Any>) – Additional analysis parameters
public sealed class MediaRequest
Stores information about a media file.
Please note: you should add actionMedia OR userMedia; these parameters are mutually exclusive.
id (String, UUID v4) – Media ID
actionMedia (OzAction on Android or OZVerificationMovement on iOS) – An action that the user should perform in a video
userMedia (OzAbstractMedia on Android or the corresponding media type on iOS) – An external media file, e.g., a reference or a document photo
Exceptions
session_token_is_empty ("Session token must not be empty") – Session token is mandatory but hasn't been provided.
data_container_internal_failure_1 ("Internal failure occurred while processing the data container") – The device doesn't have enough memory to proceed.
data_container_internal_failure_2, data_container_internal_failure_3, data_container_internal_failure_4 ("Internal failure occurred while processing the data container") – SDK couldn't generate the container; try again.
data_container_internal_failure_1000 ("Internal failure occurred while processing the data container") – Any other error not from the list above.
If, during the video capture, the SDK encounters an error that prevents the user scenario from completing, the data container is deleted.
Should you have any questions, please contact us.
3. After the necessary preprocessing steps, your backend calls Oz API, which performs all necessary analyses and returns the analyses’ results.
4. Your backend responds back to the host application if needed.
2. Implementation
On the server side, Web SDK must be configured to operate in the Capture mode:
The architecture parameter must be set to capture in the app_config.json file.
In your Web app, add a callback to process captured media when opening the Web SDK plugin:
The result object structure depends on whether any virtual camera is detected or not.
No Virtual Camera Detected
Any Virtual Camera Detected
Here’s the list of variables with descriptions.
best_frame (String) – The best frame, JPEG in the data URL format
best_frame_png (String) – The best frame, PNG in the data URL format; it is required for protection against virtual cameras when video is not used
best_frame_bounding_box (Array[Named_parameter: Int]) – The coordinates of the bounding box where the face is located in the best frame
best_frame_landmarks (Array[Named_parameter: Array[Int, Int]]) – The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the best frame
Please note:
The video from Oz Web SDK is a frame sequence, so, to send it to Oz API, you’ll need to archive the frames and transmit them as a ZIP file via the POST /api/folders request (check our Postman collections).
You can retrieve the MP4 video from a folder using the /api/folders/{{folder_id}} request with this folder's ID. In the JSON that you receive, look for the preview_url in source_media. The preview_url parameter contains the link to the video. From the plugin, MP4 videos are unavailable (only as frame sequences).
Also, in the POST {{host}}/api/folders request, you need to add the additional_info field. It is required for the capture architecture mode to gather the necessary information about the client environment. Here’s an example of filling in the request’s body:
Oz API accepts data without the base64 encoding.
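As an illustration, here is a Kotlin sketch of packing the captured frames into a ZIP archive before sending them to POST /api/folders; it assumes the frames arrive as data-URL strings, as in frame_list above:

import java.io.File
import java.io.FileOutputStream
import java.util.Base64
import java.util.zip.ZipEntry
import java.util.zip.ZipOutputStream

// Decodes each data-URL frame and writes it into the archive as a JPEG.
fun packFramesToZip(frameList: List<String>, target: File) {
    ZipOutputStream(FileOutputStream(target)).use { zip ->
        frameList.forEachIndexed { index, dataUrl ->
            val base64 = dataUrl.substringAfter("base64,")
            zip.putNextEntry(ZipEntry("frame_$index.jpg"))
            zip.write(Base64.getDecoder().decode(base64))
            zip.closeEntry()
        }
    }
}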
// capture and pack media
val referentPhoto = MediaRequest.UserMedia(OzAbstractMedia.OzDocumentPhoto(OzMediaTag.Blank, referentPhotoPath))
val blinkVideo = MediaRequest.ActionMedia(OzAction.EyeBlink)
val scanVideo = MediaRequest.ActionMedia(OzAction.Scan)
val intent = OzLivenessSDK.createMediaCaptureScreen(
CaptureRequest(
listOf(
AnalysisProfile(
Analysis.Type.BIOMETRY,
listOf(referentPhoto, scanVideo)
),
AnalysisProfile(
Analysis.Type.QUALITY,
listOf(referentPhoto, scanVideo, blinkVideo)
),
),
),
sessionToken
)
startActivityForResult(intent, REQUEST_CODE_SDK)
// subscription to result
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
super.onActivityResult(requestCode, resultCode, data)
if (requestCode == REQUEST_CODE_SDK) {
when (resultCode) {
OzLivenessResultCode.USER_CLOSED_LIVENESS -> { /* user closed the screen */ }
OzLivenessResultCode.SUCCESS -> {
// result
val container = OzLivenessSDK.getContainerFromIntent(data)
...
}
else -> {
// error
val errorMessage = OzLivenessSDK.getErrorFromIntent(data)
...
}
}
}
}
// launching analyses
AnalysisRequest.Builder()
.addContainer(container)
.build()
.run(
object: AnalysisRequest.AnalysisListener {
override fun onSuccess(result: RequestResult) {
...
}
override fun onError(exception: OzException) {
...
}
}
)
// capture and pack media
let mediaRequest = MediaRequest.actionMedia(.selfie)
let profile = AnalysisProfile(mediaList: [mediaRequest],
type: .quality,
params: ["extract_best_shot" : true])
let request = CaptureRequest(analysisProfileList: [profile], cameraPosition: cameraPosition)
self.ozLivenessVC = try OZSDK.createMediaCaptureScreen(self, request, sessionToken: sessionToken)
self.present(ozLivenessVC, animated: true)
// subscription to result
extension ViewController: LivenessDelegate {
func onError(status: OZVerificationStatus?) {
// error handling
}
func onResult(container: DataContainer) {
let analysisRequest = AnalysisRequestBuilder()
analysisRequest.addContainer(container)
analysisRequest.run(statusHandler: { [weak self] state in
},
errorHandler: { [weak self] error in
// error
}) { result in
// result
}
}
OzLiveness.open({
... // other parameters
on_capture_complete: function(result) {
// Your code to process media/send it to your API, this is STEP #2
}
})
frame_list (Array[String]) – All frames in the data URL format
frame_bounding_box_list (Array[Array[Named_parameter: Int]]) – The coordinates of the bounding boxes where the face is located in the corresponding frames
frame_landmarks (Array[Named_parameter: Array[Int, Int]]) – The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the corresponding frames
action (String) – An action code
additional_info (String) – Information about the client environment
Changelog
API changes
6.4.1-40 – Dec. 24, 2025
Implemented a new proprietary data format: OzCapsula Data Container.
Resolved the issue with occasional false rejections using videos received from Web SDK.
The Collection analysis now ignores images tagged with photo_id_back.
Internal improvements.
6.4.0 – Nov. 24, 2025
Only for SaaS.
Updated API to support upcoming features.
6.3.5 – Nov. 03, 2025
Fixed bugs that could cause the GET /api/folders/ request to return incorrect results.
6.3.4 – Oct. 20, 2025
Updated API to support upcoming features.
Fixed bugs.
6.3.3 – Sept. 29, 2025
Resolved an issue with POST /api/instant/folders/ and POST /api/folders/ returning “500 Internal Server Error” when the video sent is corrupted. The system now returns “400 Bad Request”.
Updated API to support upcoming features.
6.3.0 – Aug. 05, 2025
Updated API 6 to support Kazakhstan regulatory requirements: added the functionality of extracting action shots from videos of a person performing gestures.
You can remove admin rights from a CLIENT ADMIN user and change their role to CLIENT via PATCH /api/users/{{user_id}}.
You can generate a service token for a user with the OPERATOR role.
6.2.5 – June 18, 2025
Optimization and performance updates.
6.2.3 – June 02, 2025
Analyses can now be done in parallel with each other. To enable this feature, add the OZ_ANALYSE_PARALLELED_CHECK_MEDIA_ENABLED parameter to config.py and set it to true (the default value is false).
For the instant mode, authorization can be disabled. Add the OZ_AUTHORIZE_DISABLED_STATELESS parameter to config.py and set it to true (the default value is false) to use the instant API without authorization.
6.0.1 – Apr. 30, 2025
Please note: this version doesn't support the Kazakhstan regulatory requirements.
Optimized storage and database.
Implemented the single request mode, which involves creating a folder and executing analyses in a single request by attaching a part of the analysis in the payload.
Implemented the instant API mode, which works without storing any data locally or in the database. This mode can be used either with or without other API components.
Deprecated endpoints and parameters and their replacements:
expire_date in {{host}}/api/authorize/auth and {{host}}/api/authorize/refresh – replaced by access_token.exp from payload
session_id in {{host}}/api/authorize/auth and {{host}}/api/authorize/refresh – replaced by token_id
5.3.1 – Dec. 24, 2024
Improved the resource efficiency of server-based biometry analysis.
5.3.0 – Nov. 27, 2024
API can now extract action shots from videos of a person performing gestures. This is done to comply with the new Kazakhstan regulatory requirements for biometric identification.
Created a new report template that also complies with the requirements mentioned above.
If action shots are enabled, the thumbnails for the report are generated from them.
5.2.0 – Sept. 06, 2024
Updated the Postman collection. Please see the new collection and at .
Added the new method to check the timezone settings: GET {{host}}/api/config
Added parameters to the GET {{host}}/api/event_sessions method: time_created, time_created.min, time_created.max, time_updated, time_updated.min, time_updated.max, session_id, session_id.exclude, sorting, offset, limit, total_omit.
5.1.1 – July 16, 2024
Security updates.
5.1.0 – Mar. 20, 2024
Face Identification 1:N is now live, significantly increasing the data processing capacity of the Oz API to find matches. Even huge face databases (containing millions of photos and more) are no longer an issue.
The Liveness (QUALITY) analysis now ignores photos tagged with photo_id, photo_id_front, or photo_id_back, preventing these photos from causing the tag-related analysis error.
5.0.1 – July 16, 2024
Security updates.
5.0.0 – Nov. 17, 2023
You can now apply the Liveness (QUALITY) analysis to a single image.
Fixed the bug where the Liveness analysis could finish with the SUCCESS result with no media uploaded.
The default value for the extract_best_shot parameter is now True.
4.0.8-patch1 – July 16, 2024
Security updates.
4.0.8 – May 22, 2023
Set up the autorotation of logs.
Added the CLI command for user deletion.
You can now switch off the video preview generation.
4.0.2 – Sept. 13, 2022
For the sliced video, the system now deletes the unnecessary frames.
Added new methods: GET and POST at media/<media_id>/snapshot/.
Replaced the default report template.
3.33.0
The Photo Expert and KYC modules are now removed.
The endpoint for the user password change is now POST users/user_id/change-password instead of PATCH.
3.32.1
Provided log for the Celery app.
3.32.0
Added filters to the Folder [LIST] request parameters: analyse.time_created, analyse.results_data for the Documents analysis, results_data for the Biometry analysis, results_media_results_data for the QUALITY analysis. To enable filters, set the with_results_media_filter query parameter to True.
3.31.0
Added a new attribute for users – is_active (default True). If is_active == False, any user operation is blocked.
Added a new exception code (1401 with status code 401) for the actions of the blocked users.
3.30.0
Added shots sets preview.
You can now save a shots set archive to a disk (with the original_local_path, original_url attributes).
A new original_info attribute is added to store the md5, size, and mime-type of a shots set.
3.29.0
Added health check at GET api/healthcheck.
3.28.1
Fixed the shots set thumbnail URL.
3.28.0
Now, the first frame of shots set becomes this shots set's thumbnail URL.
3.27.0
Modified the retry policy – the default max count of analysis attempts is increased to 3, and a jitter configuration was introduced.
Changed the callback algorithm.
Refactored and documented the command line tools.
3.25.0
Changed the delete personal information endpoint and method from delete_pi to /pi and from POST to DELETE, respectively.
3.23.1
Improved the delete personal information algorithm.
It is now forbidden to add media to cleaned folders.
3.23.0
Changed the authorize/restore endpoint name from auth to auth_restore.
Added a new tag – video_selfie_oneshot.
3.22.2
Fixed a bug with no error while trying to synchronize empty collections.
If persons are uploaded, the analyse collection TFSS request is sent.
3.22.0
Added the fields_to_check parameter to document analysis (by default, all fields are checked).
Added the double_page_spread parameter to document analysis (True by default).
3.21.3
Fixed collection synchronization.
3.21.0
The authorization token can now be refreshed via expire_token.
3.20.1
Added support for application/x-gzip.
3.20.0
Renamed shots_set.images to shots_set.frames.
3.18.0
Added user sessions API.
Users can now change a folder owner (limited by permissions).
Changed dependencies rules.
3.16.0
Moved oz_collection_binding (the collection synchronization functionality) to oz_core.
3.15.3
Simplified the shots sets functionality. One archive keeps one shot set.
3.15.2
Improved the document sides recognition for the docker version.
3.15.1
Moved the orientation tag check to the liveness check within the Quality analysis.
3.15.0
Added a default report template for Admin and Operator.
3.14.0
Updated the biometric model.
3.13.2
A new ShotsSet object is not created if there are no photos for it.
Updated the data exchange format for the documents' recognition module.
3.13.1
You can’t delete a Collection if there are associated analyses with Collection Persons.
3.13.0
Added time marks to analysis: time_task_send_to_broker, time_task_received, time_task_finished.
3.12.0
Added a new authorization engine. You can now connect with Active Directory by LDAP (settings configuration required).
3.11.0
A new type of media in Folders – "shots_set".
You can’t delete a CollectionPerson if there are analyses associated with it.
3.10.0
Renamed the folder field resolution_suggest to operator_status.
Added a folder text field operator_comment.
3.9.0
Fixed a deletion error: when report author is deleted, their reports get deleted as well.
3.8.1
Client can now view only their own profile.
3.8.0
Client Operator can now edit only their profile.
Client can't delete own folders, media, reports, or analyses anymore.
Client Service can now create Collection Person and read reports within their company.
3.7.1
Client, Client Admin, Client Operator have read access to users profiles only in their company.
A/B testing is now available.
Added support for expiration date header.
3.7.0
Added a new role of Client Operator (like Client Admin without permissions for company and account management).
Client Admin and Client Operator can change the analysis status.
Only Admin and Client Admin (for their company) can create, update and delete operations for Collection and CollectionPerson models from now on.
Changelog
iOS SDK changes
8.23.0 – Feb. 06, 2026
Updated the code sample to align with the OzCapsula functionality.
Security updates.
Fixed the issue with MP4 videos that sometimes could not be played after downloading from SDK.
We now return the correct error for the non-authorized requests.
Fixed the bug with “spontaneous” error 500 that had been caused by too few frames in video. Added the check for the number of frames and more descriptive error messages.
Performance, security, and installation updates.
You can now combine working systems based on asynchronous method or celery worker (local processing, celery processing). Added S3 storage mechanics for each of the combinations.
Implemented security updates.
We no longer support RAR archives.
We no longer support Active Directory. This functionality will be returned in the upcoming releases.
Improved mechanics for calculating analysis time.
Replaced the is_admin and is_service flags for the CLIENT role with new roles: CLIENT ADMIN and CLIENT SERVICE, respectively. Set the roles in user_type.
To issue a service token for a user via {{host}}/api/authorize/service_token/, this user must have the CLIENT SERVICE role. You can also create a token for another user with this role: call {{host}}/api/authorize/service_token/{user_id}.
Removed collection and person attributes from COLLECTION.analysis.
We no longer store separate objects for each frame in SHOTS_SET. If you want to save an image from your video, consider enabling best shot.
If you create a folder using SHOT_SET, the corresponding video will be in media.video_url.
Fixed the bug with CLIENT ADMIN being unable to change passwords for users from their company.
RAR archives are no longer supported.
By default, analyses.results_media.results_data now contain the confidence_spoofing parameter. However, if you need all three parameters for the backward compatibility, it is possible to change the response back to three parameters: confidence_replay, confidence_liveness, and confidence_spoofing.
Updated the default PDF report template.
The name of the PDF report now contains folder_id.
The ADMIN access token is now valid for 5 years.
Added the folder identifier folder_id to the report name.
Fixed bugs and optimized the API work.
The shot set preview now keeps images’ aspect ratio.
ADMIN and OPERATOR receive system_company as a company they belong to.
Added the company_id attribute to User, Folder, Analyse, Media.
Added the Analysis group_id attribute.
Added the system_resolution attribute to Folder and Analysis.
The analysis resolution_status now returns the system_resolution value.
Removed the PATCH method for collections.
Added the resolution_status filter to Folder Analyses [LIST] and analyse.resolution_status filter to Folder [LIST].
Added the audit log for Folder, User, Company.
Improved the company deletion algorithm.
Reforged the blacklist processing logic.
Fixed a few bugs.
Fixed ReportInfo for shots sets.
Refactored modules.
Added the password validation setting (OZ_PASSWORD_POLICY).
Added auth, rest_unauthorized, rps_with_token throttling (use OZ_THROTTLING_RATES in configuration. Off by default).
User permissions are now used to access static files (OZ_USE_PERMISSIONS_FOR_STATIC in configuration, false by default).
Added a new folder endpoint – /delete_pi. It clears all personal information from a folder and analyses related to this folder.
Changed the access_token prolongation policy to fix a bug where the token was prolonged before the expiration permission was checked.
The folder fields operator_status and operator_comment can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.
Only Admin and Client Admin can delete folder, folder media, report template, report template attachments, reports, and analyses (within their company).
Long antiscam messages are now displayed properly.
SDK no longer crashes when Liveness screen is called several times in a row with short intervals.
Resolved the issue with container errors not being returned.
Updated dependencies.
8.22.0 – Dec. 09, 2025
Implemented a new proprietary data format: OzCapsula Data Container.
Resolved the issue with SDK not returning the license-related callbacks.
Enhanced security.
8.21.0 – Nov. 21, 2025
Resolved the issue with SDK sometimes not responding to user actions on some devices.
Updated SDK to support the upcoming security features.
8.20.0 – Oct. 17, 2025
Fixed the bug with crashes that might happen during the Biometry analysis after taking a reference photo using camera.
Enhanced security.
8.19.0 – Sept. 23, 2025
The Scan gesture animation now works properly.
Fixed the bug where SDK didn’t call completion during initialization in debug mode.
Enhanced security.
8.18.2 – Aug. 22, 2025
Addressed an SDK crash that occasionally happened when invoking the license.
8.18.1 – Aug. 06, 2025
We highly recommend updating to this version.
Resolved the issue with integration via Swift UI.
SDK no longer crashes on smartphones that are running low on storage.
Security and telemetry updates.
8.17.0 – June 12, 2025
Security updates.
8.16.2 – Apr. 22, 2025
Xcode updated to version 16 to comply with Apple requirements.
8.16.1 – Apr. 09, 2025
Security updates.
8.16.0 – Mar. 11, 2025
Updated the authorization logic.
Improved voiceover.
SDK now compresses videos if their size exceeds 10 MB.
Head movement gestures are now handled properly.
Security updates.
8.15.0 – Dec. 30, 2024
Changed the wording for the head_down gesture: the new wording is “tilt down”.
Added proper focus order for VoiceOver when the antiscam hint is enabled.
Added the public setting extract_action_shot in the Demo Application.
Bug fixes.
Security updates.
8.14.0 – Dec. 3, 2024
Accessibility updates according to WCAG requirements: the SDK hints and UI controls can be voiced.
Improved user experience with head movement gestures.
Minor bug fixes and telemetry updates.
8.13.0 – Nov. 11, 2024
The screen brightness no longer changes when the rear camera is used.
Fixed the video recording issues on some smartphone models.
Security and telemetry updates.
8.12.2 – Oct. 24, 2024
Internal SDK improvements.
8.12.1 – Sept. 30, 2024
Added Xcode 16 support.
Security and telemetry updates.
8.11.0 – Sept. 10, 2024
Security updates.
8.10.1 – Aug. 23, 2024
Bug fixes.
8.10.0 – Aug. 1, 2024
SDK now requires Xcode 15 and higher.
Security updates.
Bug fixes.
8.9.1 – July 16, 2024
Internal SDK improvements.
8.8.3 – July 11, 2024
Internal SDK improvements.
8.8.2 – June 25, 2024
Bug fixes.
8.8.1 – June 25, 2024
Logging updates.
8.8.0 – June 18, 2024
Security updates.
8.7.0 – May 10, 2024
You can now install iOS SDK via Swift Package Manager.
The sample is now available on SwiftUI. Please find it here.
Added a description for the error that occurs when providing an empty string as an ID in the addFolderID method.
Bug fixes.
8.6.0 – Apr. 10, 2024
The messages displayed by the SDK after uploading media have been synchronized with Android.
The bug causing analysis delays that might have occurred for the One Shot gesture has been fixed.
8.5.0 – Mar. 06, 2024
The length of the Selfie gesture is now configurable (affects the video file size).
You can set your own logo instead of Oz logo if your license allows it.
Removed the pause after the Scan gesture.
The code in is now up-to-date.
Security and logging updates.
8.4.2 – Jan. 24, 2024
Security updates.
8.4.0 – Jan. 09, 2024
Changed the default behavior in case a localization key is missing: now the English string value is displayed instead of a key.
Fixed some bugs.
8.3.3 – Dec. 11, 2023
Internal licensing improvements.
8.3.0 – Nov. 17, 2023
Implemented the possibility of using a master license that works with any bundle_id.
Fixed the bug with background color flashing.
8.2.1 – Nov. 11, 2023
Bug fixes.
8.2.0 – Oct. 30, 2023
The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully.
The messages for the errors that are retrieved from API are now detailed.
The toFrameGradientColor option in hintAnimationCustomization is now deprecated, please use the hintGradientColor option instead.
Restored iOS 11 support.
8.1.1 – Oct. 09, 2023
If multiple analyses are applied to the folder simultaneously, the system sends them as a group. It means that the “worst” of the results will be taken as resolution, not the latest. Please refer to this article for details.
For the Liveness analysis, the system now treats the highest score as a quantitative result. The Liveness analysis output is described here.
8.1.0 – Sept. 07, 2023
Updated the Liveness on-device model.
Added the Portuguese (Brazilian) locale.
You can now add a custom or update an existing language pack. The instructions can be found here.
If a media hasn't been uploaded correctly, the system now repeats the upload.
Added a new method to retrieve the telemetry (logging) identifier: getEventSessionId.
The setPermanentAccessToken, configure and login methods are now deprecated. Please use the setApiConnection method instead.
The setLicense(from path:String) method is now deprecated. Please use the setLicense(licenseSource: LicenseSource) method instead.
Fixed some bugs and improved the SDK work.
8.0.2 – Aug. 15, 2023
Fixed some bugs and improved the SDK algorithms.
8.0.0 – June 27, 2023
Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Improved the on-device models.
Updated the run method.
Added new structures: RequestStatus (analysis state), ResultMedia (analysis result for a single media) and RequestResult (consolidated analysis result for all media).
The updated AnalysisResult structure should be now used instead of OzAnalysisResult.
For the OZMedia object, you can now specify additional tags that are not included into our tags list.
The Selfie video length is now about 0.7 sec; the file size and upload time are reduced.
The hint text width can now exceed the frame width (when using the main camera).
The methods below are no longer supported:
Removed method – Replacement:
analyse – AnalysisRequest.run
addToFolder – uploadMedia
documentAnalyse – AnalysisRequest.run
uploadAndAnalyse – AnalysisRequest.run
runOnDeviceBiometryAnalysis – AnalysisRequest.run
7.3.0 – June 06, 2023
Added the center hint background customization.
Added new face frame forms (Circle, Square).
Added the antiscam widget and its customization. This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes, and safeguards against scammers who may try to trick a person into approving a fraudulent transaction.
Synchronized the default customization values with Android.
Added the Spanish locale.
iOS 11 is no longer supported, the minimal required version is 12.
7.2.1 – May 24, 2023
Fixed the issue with the server-based One shot analysis.
7.2.0 – May 18, 2023
Improved the SDK algorithms.
7.1.6 – May 04, 2023
Fixed error handling when uploading a file to API. From this version, an error is raised to the host application if a file upload fails.
7.1.5 – Apr. 03, 2023
Improved the on-device Liveness.
7.1.4 – Mar. 24, 2023
Fixed the animation for sunglasses/mask.
Fixed the bug with the .document analysis.
Updated the descriptions of customization methods and structures.
7.1.2 – Feb. 21, 2023
Updated the TensorFlow version to 2.11.
Fixed several bugs, including the Biometry check failures on some phone models.
7.1.1 – Feb. 06, 2023
Added customization for the hint animation.
7.1.0 – Jan. 20, 2023
Integrated a new model.
Added the uploadMedia method to AnalysisRequest. The addMedia method is now deprecated.
Fixed the combo analysis error.
Added a button to reset the SDK theme and language settings.
Fixed some bugs and localization issues.
Extended the network request timeout to 90 sec.
Added a setting for the animation icon size.
7.0.0 – Dec. 08, 2022
Implemented a range of UI customization options and switched to the new design. To restore the previous settings, please refer to this article.
Synchronized the version numbers with Android SDK.
Added a new field to the Analysis structure. The params field holds any additional parameters, for instance, setting the best shot extraction on the server to true. The best shot algorithm chooses the highest-quality frame from a video.
Fixed some localization issues.
Changed the Combo gesture.
Now you can launch the Liveness check to analyze images taken with another SDK.
3.0.1
The Zoom in and Zoom out gestures are no longer supported.
3.0.0
Added a new simplified analysis structure – AnalysisRequest.
2.3.0
Added methods of on-device analysis: runOnDeviceLivenessAnalysis and runOnDeviceBiometryAnalysis.
You can choose the installation version. Standard installation gives access to full functionality. The core version (OzLivenessSDK/Core) installs SDK without the on-device functionality.
Added a method to upload data to server and start analyzing it immediately: uploadAndAnalyse.
Improved the licensing process: you can now add a license when initializing SDK via OZSDK(licenseSources: [LicenseSource], completion: @escaping ((LicenseData?, LicenseError?) -> Void)), where LicenseSource is a path to the physical location of your license and LicenseData contains the license information.
Added the setLicense method to force license adding.
2.2.3
Added the Turkish locale.
2.2.1
Added the Kyrgyz locale.
Added Completion Handler for analysis results.
Added Error User Info to telemetry to show detailed info in case of an analysis error.
2.2.0
Added local on-device analysis.
Added oval and rectangular frames.
Added Xcode 12.5.1+ support.
2.1.4
Added SDK configuration with licenses.
2.1.3
Added the One Shot gesture.
Improved OZVerificationResult: added bestShotURL, which contains the best shot image, and preferredMediaURL, which contains a URL to the best quality video.
When performing a local check, you can now choose a main or back camera.
2.1.2
Authorization sessions extend automatically.
Updated authorization interfaces.
2.1.1
Added the Kazakh locale.
Added license error texts.
2.1.0
You can cancel network requests.
You can specify Bundle for license.
Added analysis parameterization for documentAnalyse.
Fixed building errors (Xcode 12.4 / Cocoapods 1.10.1).
2.0.0
Added license support.
Added Xcode 12 support instead of 11.
Fixed the documentAnalyse error where you had to fill analyseStates to launch the analysis.
Implemented a new proprietary data format: OzCapsula Data Container.
You can now customize the face oval using a new CSS class (stroke color, glow effects, etc.). All standard CSS properties are supported.
We now support the kk language code as an alias for kz (Kazakh): OzLiveness.set_lang('kk') and OzLiveness.set_lang('kz') work equally.
The OzLiveness.hide() and show() methods now support a global UI visibility state. You don’t need to call them each time; the state is saved between sessions.
Added new callbacks:
on_media_stream_start is called when getUserMedia is completed successfully,
on_media_stream_stop is called once video is captured.
The ready() promise is now exposed as the plugin initialization event. Added new methods:
OzLiveness.ready(): Promise<void> returns a Promise that resolves after all plugin resources are loaded,
OzLiveness.isReady(): Boolean to check the initialization state.
Fixed the bug with an empty loader being displayed on screen between processing and uploading.
If you enter a wrong language code, SDK now shows a proper error.
The non-English locales are now applied properly on the first open() call.
Enhanced security.
1.8.1 – Oct. 09, 2025
Web SDK no longer returns an error when you call OzLiveness.hide() immediately after launching the plugin.
Improved performance.
Enhanced security.
1.8.0 – Sept. 26, 2025
You can now launch Web Plugin in the windowed mode. Define parent_container in OpenOptions: parent_container: string | HTMLElement (see the sketch after this entry).
Loader transitions are now customizable. Please refer to the settings.
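For example, a minimal sketch of the windowed mode (the container element is hypothetical; parent_container is from the entry above, and OzLiveness is provided by the plugin script):
// Launch Web Plugin inside your own page element instead of full screen.
const container = document.getElementById('liveness-container') as HTMLElement; // hypothetical element
OzLiveness.open({
  parent_container: container,
  on_complete: (result) => console.log('Analysis result:', result),
});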
1.7.15 – Aug. 12, 2025
You can now replace the default loader with your custom one. Please refer to the settings.
The SDK behavior when getting camera access takes too long is now configurable. Please refer to the get_user_media_promise_timeout_* parameters listed .
Security and telemetry updates.
1.7.14-89-1 – July 09, 2025
Breaking changes: we no longer transmit scores in callbacks. Replace confidence_spoofing with 0 for SUCCESS and with 1 for other statuses (see the sketch below).
Fixed the bug with Web SDK being unable to read the image_data_tensor properties.
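A minimal migration sketch for this breaking change (the function name is illustrative):
// Reproduce the former confidence_spoofing value from the analysis status:
// SUCCESS maps to 0, any other status maps to 1.
function toConfidenceSpoofing(status: string): number {
  return status === 'SUCCESS' ? 0 : 1;
}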
1.7.14 – June 13, 2025
Added support for API 6.0.
CORS headers in should now be specified without quotation marks.
Added a new parameter to manage authorization. auth defines whether and what authorization is used:
true (the default value) – authorization is required and is based on the generated key;
user:pass – authorization is required and is based on the user login and password.
1.6.15 – Dec. 27, 2024
Simplified the checks that require the user to move their head: turning left or right, tilting, or looking down.
Decreased the distance threshold for the head movement actions: turning left or right, tilting, or looking down.
The application behavior when open dev tools are detected is now configurable.
1.6.12 – Oct. 18, 2024
Security updates.
Resolved the issue where a video could not be generated from a sequence of frames.
1.6.0 – June 24, 2024
The on_complete callback is now called upon a folder status change.
Updated instructions for camera access in the Android Chrome and Facebook browsers. New keys:
error_no_camera_access,
1.5.3 – May 28, 2024
In case of a camera access timeout, we now display a page with instructions for users to enable camera access: a default one for all browsers and a specific one for Facebook.
Added several localization records to the . New localization keys:
accessing_camera_switch_to_another_browser,
1.5.0 – May 05, 2024
Improved user experience for card printer machines: users no longer need to get that close to the screen with the face frame.
Added the disable_adaptive_aspect_ratio parameter to the Web Plugin. This parameter switches off the default video aspect ratio adjustment to the window.
Implemented the get_user_media_timeout parameter for Web Plugin: when SDK can’t get access to the user camera, after this timeout it displays a hint on how to solve the problem.
1.4.3 – Apr. 15, 2024
Fixed the bug where the warning about incorrect device orientation was not displayed when a mobile user attempted to take a video with their face in landscape orientation.
Some users may have experienced freezes while using WebView. Now, users can tap a button to continue working with the application. The corresponding string has been added to the string file in the . Key: tap_to_continue.
1.4.2 – Mar. 14, 2024
Debugging improvements.
1.4.1 – Feb. 27, 2024
Major security updates: improved protection against virtual cameras and JavaScript tampering.
Improved WebView support:
Added camera access instructions for applications within the generic WebView browsers on Android and iOS. The corresponding events are added to telemetry.
1.4.0 – Feb. 07, 2024
You can now use Web SDK for the analysis: to compare the face from your Liveness video with faces from your database. Create a collection (or collections) with these photos via or , and add the corresponding ID (or IDs) to the analyses.collection_ids array in the Web Adapter configuration file.
The iframe support is back: set the iframe_allowed parameter in the Web Adapter configuration file to True.
1.3.1 – Jan. 12, 2024
Improved the protection against injection attacks.
Changed the language code for Brazilian Portuguese from pt to pt-br to match the ISO standard.
Removed the lang_default adapter parameter.
1.2.2 – Dec. 15, 2023
Internal SDK improvements.
1.2.1 – Nov. 04, 2023
To enhance your clients’ experience with Web SDK, we implemented the 3D-mask that replaces the oval during face capture. To make it work, set the load_3d_mask in to true.
Updated telemetry (logging).
1.1.5 – Oct. 27, 2023
Logging updates.
1.1.4 – Oct. 2023
Security updates.
1.1.3 – Sept. 29, 2023
Internal SDK improvements.
1.1.2 – Sept. 21, 2023
Internal SDK improvements.
1.1.1 – Aug. 29, 2023
Fixed some bugs.
1.1.0 – Aug. 24, 2023
Changed the signature of the on_error() callback: now it returns an object with the error code, error message, and telemetry ID for logging.
Added the configuration parameter for the debug mode. If True, the Web SDK enables access to the /debug.php page, which contains information about the current configuration and the current license.
1.0.2 – July 06, 2023
If your device has multiple cameras, you can now choose one when launching the Web Plugin.
1.0.1 – July 01, 2023
Implemented the new design for SDK and demo, including the scam protection option: the antiscam message warns users that their actions are being recorded. Please check the new customization options .
Added the Portuguese, Spanish, and Kazakh locales.
Added the combo gesture.
0.9.1 – Mar. 01, 2023
In the capture architecture, when a virtual camera is detected, the additional_info parameter is inside the from_virtual_camera section.
You can now crop the lossless frame without losing quality.
0.9.0 – Feb. 20, 2023
Improved the recording quality;
Reforged :
added detailed error descriptions;
0.7.6 – Sept. 27, 2022
Changed the extension of some Oz system files from .bin to .dat.
0.7.5
Additional scripts are now called using the main script's address.
0.7.4
Web SDK can now be installed via static files only (works for the capture type of architecture).
Web SDK can now work with CDN.
Now, you can launch several Oz Liveness plugins on different pages. In this case, you need to specify the path to scripts in the head of these pages.
If you update the Web SDK version from 0.4.0, the license should be updated as well.
0.4.1
Fixed a bug with the shooting screen.
0.4.0
Added licensing (requires origin).
You can now of Web SDK.
0.3.2044
Fixed Angular integration.
0.3.2043
Fixed the bug where the IMAGE_FOLDER section was missed in the JSON response with the lossless frame enabled.
0.3.2042
Fixed issues with the ravenjs library.
0.3.2041
A frame for taking a document photo is now customizable.
0.3.2012
Implemented security updates.
0.3.2009 (0.4.8)
Metadata now contains names of all cameras you can use.
0.3.2005 (0.4.8)
Video and zip formats now allow loading a lossless image.
Fixed Best Shot.
0.3.2004 (0.4.8)
Separated the error code and error description in server responses.
0.3.2001 (0.4.6)
If the SDK mode is set in the environment variables (architecture, api_url), it is passed to the settings automatically.
In the Lite mode, you can select the best frame for any action.
In the Lite mode, an image sent via API gets the
0.3.1999
Added the folder value for result_mode: it returns the same value as status but with folder_id.
0.3.1997 (0.4.5)
Updated encryption: now only metadata required to decrypt an object is encrypted.
Updated data transfer: images are being sent in separate form fields.
Added the camera parameters check.
0.3.1992 (0.4.4)
Enabled a new method for image encryption.
Optimized image transfer format.
0.3.1991
Added the use_for_liveness option: mobile devices use the back camera by default; on desktop, flip and oval circling are off. By default, the option is switched off.
0.3.1990 (0.4.3)
Decreased the video length for video_selfie_best (the Selfie gesture) from 1 to 0.2 sec.
Loading scripts is now customizable.
Improved UX.
0.3.1988 (0.4.2)
Added the Kazakh locale.
Added a guide for accessing the camera on a desktop.
Improved logging: plugin_liveness.php requests and the user-agent are now recorded to the server log.
0.3.1987 (0.4.1)
Added encryption.
Updated libraries.
0.3.1986 (0.3.91)
You can now hide the Oz Forensics logo.
0.3.1984
Updated a guide for Facebook, Instagram, Samsung, Opera.
Added handlers for unknown variables and a guide for “unknown” browsers.
0.3.1983
Optimized memory usage for a frame.
Added a guide on how to switch the camera on in Android browsers.
Improved handling of head movement gestures (up / down).
Resolved an issue with excessive error messages in console.
Updated telemetry.
Enhanced security.
Resolved the issue when OzLiveness was sometimes called later than expected.
Security update.
Fixed the bug with colors being applied incorrectly during 3D mask customization.
Resolved the issue with incorrect mirroring when the use_for_liveness parameter is not set.
The document scan in plugin now works properly.
Improved accessibility: the hints throughout the customer journey (camera access, processing data, uploading data, requesting results) are now properly and completely voiced by screen readers in assertive mode (changes in hints are announced immediately).
Created an endpoint for license verification: [GET] /check_license.php.
Reduced the bundle size.
Fixed the issue with missing analysis details in the on_complete callback when using result_mode: full.
Fixed the issue when the camera switch button might have been missing.
The front camera no longer displays the user's actions as in a mirror image.
Improved error handling.
Improved support for low-performance devices.
Added the closed eyes check to the Scan gesture.
Internal improvements, bug fixes, major telemetry and security updates.
You can now configure method signatures to make them trusted via checksum of the modified function.
Changed the wording for the head_down gesture: the new wording is “tilt down”.
For possible regulatory requirements, updated the Web Adapter configuration file with a new parameter: extract_action_shot. If True, for each gesture, the system saves the corresponding image to display it in the report (e.g., closed eyes for blinking) instead of a random frame for the thumbnail.
Fixed an issue where an arrow incorrectly appeared after capturing head-movement gestures.
Fixed an issue where the oval disappeared when the "Great!" phrase was displayed.
Improved the localization: when SDK can’t find a translation for a key, it displays a message in English.
You can now distribute the serverless Web SDK via Node Package Manager.
You can switch off the display of API errors in modal windows. Set the disable_adapter_errors_on_screen parameter in the configuration file to True.
The mobile browsers now use the rear camera to take document photos.
Updated samples.
Fixed the bug with abnormal 3D mask reaction when user needs to repeat a gesture.
Logging and security updates.
Improved the React Native app integration by adding the webkit-playsinline attribute, thereby fixing the issue of the full-screen camera launch on iOS WebView.
The iframe usage error shown when iframe_allowed = False is now displayed properly.
New localization keys:
oz_tutorial_camera_android_webview_browser
oz_tutorial_camera_android_webview_instruction
oz_tutorial_camera_android_webview_title
The interval for polling for the analyses’ results is now configurable. Change it in the results_polling_interval parameter of the Web Adapter configuration file if necessary.
You can now select the front or back camera via Web Plugin. In the OzLiveness.open() method, set cameraFacingMode to user for the front camera and environment for the back one. This parameter only works when the use_for_liveness option in the Web Adapter configuration file is not set.
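For example (a sketch; OzLiveness is provided by the plugin script, and the values come from the entry above):
// Ask for the back camera; use 'user' for the front one.
// Has no effect when use_for_liveness is set in the Web Adapter configuration.
OzLiveness.open({
  cameraFacingMode: 'environment',
  on_complete: (result) => console.log(result),
});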
The plugin styles are now being added automatically. Please remove <link rel="stylesheet" href="/plugin/ozliveness.css" /> from your page to prevent style conflicts.
Fixed some bugs and updated telemetry.
The 3D mask transparency became customizable.
Implemented the possibility of using a master license that works with any domain.
Added the master_license_signature option into Web Adapter configuration parameters.
Fixed some bugs.
Fixed some bugs and improved logging.
Added the progress bar for media upload.
Removed the Zoom in / Zoom out gestures.
On tablets, you can now capture video in landscape orientation.
Removed the lang_allow option from Web Adapter configuration file.
Fixed face landmarks for the capture architecture.
Updated the licensing logic: now you can set the license in JS during the runtime;
when you set a license in OzLiveness.open(), it rewrites the previous license;
the license no longer requires port and protocol;
you can now specify subdomains in the license;
upon the launch of the plugin on a server, the license payload is displayed in the Docker log;
localhost and 127.0.0.1 no longer ask for a license;
The on_capture_complete callback is now available on any architecture: it is called once a video is taken and returns info on actions from the video;
Oz Web Liveness and Oz Web Adapter versions are displayed in the Docker log upon launch;
Deleted the deprecated adapter_version field from order metadata;
Added the parameters to pass the information about the bounding box – landmarks that define where the face in the frame is;
Fixed the Switch camera button in Google Chrome;
Upon the start of Web SDK, the actual configuration parameters are displayed in the Docker log.
The on_complete callback returns the status only after a successful liveness check.
You can manage CORS using the environment variables (CORS headers are not added by default).
Fixed the bug with green videos appearing on some devices.
Resolved the issue with occasional SDK crashes caused by restrictions of device camera.
Fixed minor bugs.
8.23.0 – Dec. 30, 2025
Fixed the bug with SDK crashes caused by the “Invalid to call at Released state” and “Pending dequeue output buffer request cancelled” errors.
Fixed the bug with "java.util.concurrent.TimeoutException" causing crashes.
Android SDK now passes orientation and media type tags along with the action one.
8.22.1 – Dec. 10, 2025
Fixed a minor bug.
8.22.0 – Dec. 01, 2025
Implemented a new proprietary data format: OzCapsula Data Container.
Fixed occasional SDK crashes in specific cases and/or on specific devices.
Enhanced security.
8.21.0 – Nov. 12, 2025
Improved SDK performance for some devices.
Updated SDK to support the upcoming security features.
8.20.0 – Oct. 10, 2025
Fixed the bug with green videos on some smartphone models.
Resolved the issue with mediaId appearing null.
Enhanced security.
8.19.0 – Sept. 15, 2025
Resolved an issue with a warning that could appear when running Fragment.
SDK no longer crashes when calling copyPlane.
When you choose to send compressed videos for a hybrid analysis, SDK no longer saves the original media along with the compressed one.
8.18.4 – Aug. 29, 2025
To support the 16 KB memory page size, switched from TensorFlow to LiteRT.
8.18.2 – Aug. 7, 2025
We highly recommend updating to this version.
Fixed the bug that caused video duration and file size to increase.
8.18.0 – July 16, 2025
Added support for Google Dynamic Feature Delivery.
Resolved an issue that might have caused crashes when tapping the close button on the Liveness screen.
Fixed a bug where the SDK would crash with "CameraDevice was already closed" exception.
8.17.3 – July 02, 2025
Resolved the issue with OkHttp compatibility.
Fixed the bug with Fragment missing context.
8.17.2 – June 26, 2025
Resolved a camera access issue affecting some mobile device models.
8.17.1 – June 23, 2025
Security updates.
8.17.0 – May 22, 2025
Security updates.
8.16.3 – Apr. 08, 2025
Security updates.
8.16.2 – Mar. 19, 2025
Resolved the issue with possible SDK crashes when closing the Liveness screen.
8.16.1 – Mar. 14, 2025
Security updates.
8.16.0 – Mar. 11, 2025
Updated the authorization logic.
Improved voiceover.
Fixed the issue with SDK lags and the non-responding error that users might have encountered on some devices after completing the video recording.
8.15.6 – Feb. 26, 2025
Security updates.
8.15.5 – Feb. 18, 2025
You can now disable the video validation that was implemented to avoid recording extremely short videos (3 frames or fewer): switch the option off using .
Fixed the bug with green videos on some smartphone models.
Security updates.
8.15.4 – Feb. 11, 2025
Fixed bugs that could have caused crashes on some phone models.
8.15.0 – Dec. 30, 2024
Changed the wording for the head_down gesture: the new wording is “tilt down”.
Added proper focus order for TalkBack when the antiscam hint is enabled.
Added the public setting extract_action_shot in the Demo Application.
8.14.1 – Dec. 5, 2024
Fixed the bug when the recorded videos might appear green.
Resolved codec issues on some smartphone models.
8.14.0 – Dec. 2, 2024
Accessibility updates according to WCAG requirements: the SDK hints and UI controls can be voiced.
Improved user experience with head movement gestures.
Large videos are now compressed when the Liveness screen closes.
8.13.0 – Nov. 12, 2024
Security and telemetry updates.
8.12.4 – Oct. 01, 2024
Security updates.
8.12.2 – Sept. 10, 2024
Security updates.
8.12.0 – Aug. 29, 2024
Security and telemetry updates.
8.11.0 – Aug. 19, 2024
Fixed the RuntimeException error with the server-based Liveness that appeared on some devices.
Security updates.
8.10.0 – July 26, 2024
Security updates.
Bug fixes.
8.9.0 – July 18, 2024
Updated the Android Gradle plugin version to 8.0.0.
Internal SDK improvements.
8.8.3 – July 11, 2024
Internal SDK improvements.
8.8.2 – June 21, 2024
Security updates.
8.8.1 – June 12, 2024
Security updates.
8.8.0 – June 04, 2024
Security updates.
8.7.3 – June 03, 2024
Security updates.
8.7.0 – May 06, 2024
Added a description for the error that occurs when providing an empty string as an ID in the setFolderID method.
Fixed a bug causing an endless spinner to appear if the user switches to another application during the Liveness check.
Fixed some smartphone-model-specific bugs.
8.6.0 – Apr. 05, 2024
Upgraded the on-device Liveness model.
Security updates.
8.5.0 – Feb. 27, 2024
The length of the Selfie gesture is now configurable (affects the video file size).
You can set your own logo instead of the Oz logo if your license allows it.
Removed the pause after the Scan gesture.
8.4.4 – Feb. 06, 2024
Changed the master license validation algorithm.
8.4.3 – Jan. 29, 2024
Downgraded the required compileSdkVersion from 34 to 33.
8.4.2 – Jan. 15, 2024
Security updates.
8.4.0 – Jan. 04, 2024
Updated the on-device Liveness model.
Fixed some bugs.
8.3.3 – Dec. 11, 2023
Internal licensing improvements.
8.3.2 – Nov. 30, 2023
Internal SDK improvements.
8.3.1 – Nov. 24, 2023
Bug fixes.
8.3.0 – Nov. 17, 2023
Implemented the possibility of using a master license that works with any bundle_id.
Video compression failure on some phone models is now fixed.
8.2.1 – Nov. 01, 2023
Bug fixes.
8.2.0 – Oct. 23, 2023
The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully.
The messages for the errors that are retrieved from API are now detailed.
8.1.1 – Oct. 02, 2023
If multiple analyses are applied to the folder simultaneously, the system sends them as a group. It means that the “worst” of the results will be taken as resolution, not the latest. Please refer to for details.
For the Liveness analysis, the system now treats the highest score as a quantitative result. The Liveness analysis output is described .
8.1.0 – Sept. 07, 2023
Updated the Liveness on-device model.
Added the Portuguese (Brazilian) locale.
You can now add a custom or update an existing language pack. The instructions can be found .
8.0.3 – Aug. 24, 2023
Fixed errors.
8.0.2 – July 13, 2023
The SDK now works properly with baseURL set to null.
8.0.1 – June 28, 2023
The dependencies' versions have been brought into line with Kotlin version.
8.0.0 – June 19, 2023
Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Kotlin version requirements lowered to 1.7.21.
Improved the on-device models.
Public interface changes
New entities
7.3.1 – June 07, 2023
Restructured the settings screen.
Added the center hint background customization.
Added new face frame forms (Circle, Square).
Please note: for this version, we updated Kotlin to 1.8.20.
7.2.0 – May 04, 2023
Improved the SDK algorithms.
7.1.4 – Mar. 30, 2023
Updated the model for the on-device analyses.
Fixed the animation for sunglasses/mask.
The oval size for Liveness is now smaller.
7.1.3 – Mar. 03, 2023
Fixed the error with the server-based analyses while using permanentAccessToken for authorization.
7.1.2 – Feb. 22, 2023
Added customization for the .
You can now hide the status bar and system buttons (works with 7.0.0 and higher).
OzLivenessSDK.init now requires context as the first parameter.
7.1.1 – Jan. 16, 2023
Fixed crashes for Android v.6 and below.
Fixed oval positioning for some phone models.
Internal fixes and improvements.
7.1.0 – Dec. 16, 2022
Updated security.
Implemented some internal improvements.
The addMedia method is now deprecated, please use uploadMedia for uploading.
7.0.0 – Nov. 23, 2022
Changed the way of sharing dependencies. Due to security issues, we now share two types of libraries: sdk provides server analysis only, full provides both server and on-device analyses.
UICustomization has been implemented instead of OzCustomization.
Implemented a range of UI customization options and switched to the new design. To restore the previous settings, please refer to .
6.4.2.3
Fixed the bug with freezes that had appeared on some phone models.
SDK now captures videos in 720p.
6.4.1
Synchronized the names of the analysis modes with iOS: SERVER_BASED and ON_DEVICE.
Fixed the bug with displaying of localization settings.
6.4.0
Now you can use Fragment as Liveness screen.
Added a new field to the Analysis structure. The params field holds any additional parameters, for instance, setting the best shot extraction on the server to true. The best shot algorithm chooses the highest-quality frame from a video.
6.3.7
The Zoom in and Zoom out gestures are no longer supported.
6.3.6
Updated the biometry model.
6.3.5
Added a new simplified API – AnalysisRequest. With it, it’s easier to create a request for the media and analysis you need.
6.3.4
Published the on-device module for on-device liveness and biometry analyses. To add this module to your project, use:
To launch these analyses, use runOnDeviceBiometryAnalysis and runOnDeviceLivenessAnalysis methods from the OzLivenessSDK class:
6.3.3
Liveness now goes smoother.
Fixed freezes on Xiaomi devices.
Optimized image converting.
6.3.1
New metadata parameter for OzLivenessSDK.uploadMedia and new OzLivenessSDK.uploadMediaAndAnalyze method to pass this parameter to folders.
6.2.8
Added functions for SDK initialization with LicenseSources: LicenseSource.LicenseAssetId and LicenseSource.LicenseFilePath. Use the OzLivenessSDK.init method to start initialization.
Now you can get the license info upon initialization: val licensePayload = OzLivenessSDK.getLicensePayload().
6.2.4
Added the Kyrgyz locale.
6.2.0
Added local analysis functions.
You can now configure the face frame.
Fixed version number at the Liveness screen.
6.1.0
Added the main camera support.
6.0.1
Added support for configuration from the license.
6.0.0
Added the OneShot gesture.
Added new states for OzAnalysisResult.Resolution.
Added the uploadMediaAndAnalyze method to load a bunch of media to the server at once and send them to analysis immediately.
5.1.0
Access token updates automatically.
Renamed accessToken to permanentAccessToken.
Added R8 rules.
5.0.2
Fixed the oval frame.
Removed the unusable parameters from AnalyseRequest.
Removed default attempt limits.
5.0.0
To customize the configuration options, the config property is added instead of baseURL, accessToken, etc. Use OzConfig.Builder for initialization.
Added license support. Licenses should be installed as raw resources. To pass them to OzConfig, use setLicenseResourceId.
Enhanced security.
Updated Oz Forensics website link.
Enhanced security.
Security and telemetry updates.
Resolved the issue with SDK crashes on some devices that might have occurred because of trying to access non-initialized or closed resources.
Security updates.
Fixed bugs.
Security updates.
Fixed the bug when the best shot frame could contain an image with closed eyes.
Minor bug fixes and telemetry updates.
If the recorded video is larger than 10 MB, it gets compressed.
Security and logging updates.
If a media hasn't been uploaded correctly, the system repeats the upload.
Created a new method to retrieve the telemetry (logging) identifier: getEventSessionId.
The login and auth methods are now deprecated. Use the setAPIConnection method instead.
OzConfig.baseURL and OzConfig.permanentAccessToken are now deprecated.
If a user closes the screen during video capture, the appropriate error is now being handled by SDK.
Fixed some bugs and improved the SDK work.
For some phone models, fixed the fatal device error.
The hint text width can now exceed the frame width (when using the main camera).
Photos taken during the One Shot analysis are now being sent to the server in the original size.
Removed the OzAnalysisResult class. The onSuccess method of AnalysisRequest.run now uses the RequestResult structure instead of List<OzAnalysisResult>.
All exceptions are moved to the com.ozforensics.liveness.sdk.core.exceptions package (See changes below).
Classes related to AnalysisRequest are moved to the com.ozforensics.liveness.sdk.analysis package (See changes below).
The methods below are no longer supported:
AnalysisRequest.Builder.uploadMedia
AnalysisRequest.Type.HYBRID in com.ozforensics.liveness.sdk.analysis.entity
AnalysisError in com.ozforensics.liveness.sdk.analysis.entity
SourceMedia in com.ozforensics.liveness.sdk.analysis.entity
ResultMedia in com.ozforensics.liveness.sdk.analysis.entity
RequestResult in com.ozforensics.liveness.sdk.analysis.entity
Moved entities
NoAnalysisException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions
NoNetworkException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions
TokenException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions
NoMediaInAnalysisException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions
EmptyMediaListException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions
NoSuchMediaException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions
LicenseException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.security.exception
Analysis from com.ozforensics.liveness.sdk.analysis.entity to com.ozforensics.liveness.sdk.core.model
AnalysisRequest from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core
AnalysisListener from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core
AnalysisStatus from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core
AnalysisRequest.Builder from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core
OzException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions
Changed classes
OzLivenessSDK
Removed uploadMediaAndAnalyze
Removed uploadMedia
Removed runOnDeviceBiometryAnalysis
Removed runOnDeviceLivenessAnalysis
AnalysisRequest
Removed build(): AnalysisRequest
AnalysisRequest.Builder
Removed addMedia
Removed onSuccess(result: List<OzAnalysisResult>)
Added onSuccess(result: RequestResult)
Added the antiscam widget and its customization. This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes, and safeguards against scammers who may try to trick a person into approving a fraudulent transaction.
The OzLivenessSDK::init method no longer crashes if a StatusListener parameter is passed.
Changed the scan gesture animation.
OzAnalysisResult now shows the server-based analyses' scores properly.
Fixed initialization issues, displaying of wrong customization settings, authorization failures on Android <7.1.1.
Added the Spanish locale.
OzMedia is renamed to OzAbstractMedia and got subclasses for images and videos.
Fixed camera bugs for some devices.
Configuration became easier: config settings are mutable.
Replaced the context-dependent methods with analogs.
val mediaList: List<OzAbstractMedia> = ...
val biometryAnalysisResult: OzAnalysisResult = OzLivenessSDK.runOnDeviceBiometryAnalysis(mediaList)
val livenessAnalysisResult: OzAnalysisResult = OzLivenessSDK.runOnDeviceLivenessAnalysis(mediaList)
Look-and-Feel Customization
To set your own look-and-feel options, use the style section in the OzLiveness.open method. The options are listed below the example.
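A minimal sketch of such a call (the color and size values are illustrative; the option and sub-option names come from this section and the migration table below):
OzLiveness.open({
  style: {
    // Face frame settings
    faceFrameCustomization: {
      strokeFaceInFrameColor: '#00C853',
      strokeDefaultColor: '#FFFFFF',
    },
    // Center hint settings
    centerHintCustomization: { textColor: '#FFFFFF', textSize: 24 },
    // Background settings
    backgroundCustomization: { backgroundColor: '#000000' },
  },
});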
baseColorCustomization – Main color settings.
baseFontCustomization – Main font settings.
titleFontCustomization – Title font settings.
buttonCustomization – Buttons’ settings.
toolbarCustomization – Toolbar settings.
centerHintCustomization – Center hint settings.
hintAnimation – Hint animation settings.
faceFrameCustomization – Face frame settings.
documentFrameCustomization – Document capture frame settings.
backgroundCustomization – Background settings.
antiscamCustomization – Scam protection settings: the antiscam message warns the user that their actions are being recorded.
versionTextCustomization – SDK version text settings.
maskCustomization – 3D mask settings. The mask has been implemented in 1.2.1.
switchCameraButtonCustomization – Use this setting to hide the Switch Camera button (added in 1.7.14).
loaderSlot – Settings for a custom loader (added in 1.7.15).
loaderTransition – Loader transition settings (added in 1.8.0).
ozliveness_face_stroke – Use this CSS class to customize the oval (added in 1.9.2). All standard CSS properties are supported.
Migrating to the New Design from the Previous Versions (before 1.0.1)
minAlpha – Minimum mask transparency level. Implemented in 1.3.1.
maxAlpha – Maximum mask transparency level. Implemented in 1.3.1.
enableSwitchCameraButton – true (default) shows the Switch Camera button; false hides it.
Loader slot events (Event – Payload – When it is called):
loader:init – {os, browser, platform} – immediately after inserting the slot.
loader:waitingCamera – {os, browser, platform, waitedMs} – every waitedMs ms while waiting for camera access.
loader:cameraReady – no payload – when access is granted and the loader should be hidden.
loader:processing – {phase: 'start' | 'end'} – before / after data preparation.
loaderTransition parameters:
type – animation type: none, fade, slide, scale.
duration – animation length in ms.
easing (optional) – easing: linear, ease-in-out, etc.
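For instance, a fade transition might look like this (a sketch; the values are illustrative):
OzLiveness.open({
  style: {
    // Loader transition settings (added in 1.8.0)
    loaderTransition: {
      type: 'fade',          // none, fade, slide, or scale
      duration: 300,         // animation length in ms
      easing: 'ease-in-out', // optional
    },
  },
});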
Previous design – New design:
doc_color – no equivalent.
face_color_success (faceFrame.faceReady) – faceFrameCustomization.strokeFaceInFrameColor.
face_color_fail (faceFrame.faceNotReady) – faceFrameCustomization.strokeDefaultColor.
centerHint.textSize – centerHintCustomization.textSize.
centerHint.color – centerHintCustomization.textColor.
backgroundColorPrimary – backgroundColor.
User Roles
Each new API user should be assigned a role that defines access restrictions for direct API connections. Set the role in the user_type field when you create a new user.
ADMIN is a system administrator, who has unlimited access to all system objects, but can't change the analyses' statuses;
OPERATOR is a system operator, who can view all system objects and choose the analysis result via the Make Decision button (usually needed if the analysis status is OPERATOR_REQUIRED);
CLIENT is a regular consumer account that can upload media files, run analyses, view results in personal folders, and generate reports for analyses.
can_start_analysis_biometry – an additional flag to allow access to BIOMETRY analyses (enabled by default);
can_start_analysis_quality – an additional flag to allow access to Liveness (QUALITY) analyses (enabled by default);
can_start_analysis_collection – an additional flag to allow access to analyses (enabled by default).
CLIENT ADMIN is a company administrator who can manage their company account and the users within it. Additionally, CLIENT ADMIN can view and edit the data of all users within their company, delete files in folders, add or delete report templates (with or without attachments), the reports themselves, and single analyses, check statistics, and add new blacklist collections.
CLIENT OPERATOR is similar to OPERATOR within their company.
CLIENT SERVICE is a service user account for automatic connection purposes. Authentication with this user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, also by default, the lifetime of a token is extended with each request (parameterized).
For API versions below 6.0
For API 5.3 and below, to create a CLIENT user with admin or service rights, you need to set the corresponding flags to true:
is_admin – if set, the user obtains access to other users' data within this admin's company.
is_service – a flag that marks the user account as a service account for automatic connection purposes. Authentication with this user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, also by default, the lifetime of a token is extended with each request (parameterized).
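A minimal sketch of creating a user with a role via a direct API call; the endpoint path, host, auth header, and the fields other than user_type, is_admin, and is_service are assumptions, so check the Oz API reference for the exact contract:
// Hypothetical user-creation request; user_type carries the role.
async function createOperator(): Promise<void> {
  const response = await fetch('https://your-api-host/users', { // assumed path
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Forensic-Access-Token': '<access token>', // assumed auth header
    },
    body: JSON.stringify({
      email: 'operator@example.com', // illustrative
      user_type: 'CLIENT OPERATOR',
      // For API 5.3 and below, use the flags instead:
      // is_admin: false, is_service: false,
    }),
  });
  console.log(await response.json());
}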
From 1.1.0, Oz API Lite works with base64 as an input format and is also able to return the biometric templates in this format. To enable this option, add Content-Transfer-Encoding = base64 to the request headers.
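For example, a base64 round trip with the extract method might look like this (a sketch; the host and image handling are illustrative, the header is from the paragraph above):
// Send a base64-encoded image and receive the template in base64 as well.
async function extractTemplateBase64(imageBase64: string): Promise<string> {
  const response = await fetch('http://localhost/v1/face/pattern/extract', {
    method: 'POST',
    headers: {
      'Content-Type': 'image/jpeg',
      'Content-Transfer-Encoding': 'base64', // enables base64 input and output
    },
    body: imageBase64,
  });
  return response.text(); // the biometric template, base64-encoded
}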
version – component version check
Use this method to check what versions of components are used (available from 1.1.1).
Call GET /version
Input parameters
-
Request example
GET localhost/version
Successful response
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
Output parameters
Response example
Biometry
health – biometric processor status check
Use this method to check whether the biometric processor is ready to work.
Call GET /v1/face/pattern/health
Input parameters
-
Request example
GET localhost/v1/face/pattern/health
Successful response
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
Output parameters
Response example
extract – the biometric template extraction
The method is designed to extract a biometric template from an image.
Call POST /v1/face/pattern/extract
The name itself is not mandatory for a parameter of the Stream type.
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
Request example
Successful response
In case of success, the method returns a biometric template.
The content type of the HTTP response is “application/octet-stream”.
If you've passed Content-Transfer-Encoding = base64 in headers, the template will be in base64 as well.
Output parameters
*
The name itself is not mandatory for a parameter of the Stream type.
Response example
compare – the comparison of biometric templates
The method is designed to compare two biometric templates.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare
Input parameters
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
Request example
Successful response
In case of success, the method returns the result of comparing the two templates.
HTTP response content type: “application/json”.
Output parameters
Response example
verify – the biometric verification
The method combines the two methods from above, extract and compare. It extracts a template from an image and compares the resulting biometric template with another biometric template that is also passed in the request.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify
Input parameters
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
Request example
Successful response
In case of success, the method returns the result of comparing two biometric templates and the biometric template.
The content type of the HTTP response is “multipart/form-data”.
Output parameters
Response example
extract_and_compare – extracting and comparison of templates derived from two images
The method also combines the two methods from above, extract and compare. It extracts templates from two images, compares the received biometric templates, and transmits the comparison result as a response.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/extract_and_compare
Input parameters
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
Request example
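A sketch of such a request (file handling and host are illustrative):
// Extract templates from two images and compare them in one call.
async function extractAndCompare(img1: Blob, img2: Blob): Promise<unknown> {
  const form = new FormData();
  form.append('sample_1', img1, 'sample_1.jpg'); // the first (main) image
  form.append('sample_2', img2, 'sample_2.jpg'); // the second image
  const response = await fetch(
    'http://localhost/v1/face/pattern/extract_and_compare',
    { method: 'POST', body: form }, // the multipart boundary is set automatically
  );
  return response.json(); // contains score and decision
}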
Successful response
In case of success, the method returns the result of comparing the two extracted biometric templates.
HTTP response content type: “application/json”.
Output parameters
Response example
compare_n – 1:N biometric template comparison
Use this method to compare one biometric template to N others.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare_n
Input parameters
Request example
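A sketch of such a request (file handling and host are illustrative):
// Compare one template to N others.
async function compareN(main: Blob, others: Blob[]): Promise<unknown> {
  const form = new FormData();
  form.append('template_1', main, 'main.template');
  // Each extra template is passed under the same field name, templates_n;
  // the filename identifies it in the results.
  others.forEach((t, i) => form.append('templates_n', t, `template_${i}`));
  const response = await fetch('http://localhost/v1/face/pattern/compare_n', {
    method: 'POST',
    body: form,
  });
  return response.json(); // results: [{ filename, score, decision }, ...]
}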
Successful response
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
Output parameters
Response example
verify_n – 1:N biometric verification
The method combines the extract and compare_n methods. It extracts a biometric template from an image and compares it to N other biometric templates that are passed in the request as a list.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify_n
Input parameters
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
Request example
Successful response
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
Output parameters
Response example
extract_and_compare_n – 1:N template extraction and comparison
This method also combines the extract and compare_n methods but in another way. It extracts biometric templates from the main image and a list of other images and then compares them in the 1:N mode.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/extract_and_compare_n
Input parameters
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
Request example
Successful response
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
Output parameters
Response example
Method errors
HTTP response content type: “application/json”.
*
A biometric sample is an input image.
Liveness
health – checking the status of liveness processor
Use this method to check whether the liveness processor is ready to work.
Call GET /v1/face/liveness/health
Input parameters
None.
Request example
GET localhost/v1/face/liveness/health
Successful response
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
Output parameters
Response example
detect – presentation attack detection
The detect method is designed to reveal presentation attacks. It detects a face in each image or video (since 1.2.0), sends the media for analysis, and returns a result.
The method supports the following content types:
image/jpeg or image/png for an image;
multipart/form-data for images, videos, and archives. You can use payload to add any parameters that affect the analysis.
To run the method, call POST /{version}/face/liveness/detect.
Image
Accepts an image in JPEG or PNG format. No payload attached.
Request example
Successful response example
Multipart/form-data
Accepts the multipart/form-data request.
Each media file should have a unique name, e.g., media_key1, media_key2.
The payload parameters should be a JSON placed in the payload field.
Temporary IDs will be deleted once you get the result.
Request example
Successful response example
Multipart/form-data with Best Shot
To extract the best shot from your video or archive, in analyses, set extract_best_shot = true (as shown in the request example below). In this case, API Lite will analyze your archives and videos, and, in response, will return the best shot. It will be a base64 image in analysis->output_images->image_b64.
Additionally, you can change the Liveness threshold: in analyses, set the new threshold in the threshold_spoofing parameter. If the resulting score is higher than this parameter's value, the analysis ends with the DECLINED status; otherwise, the status is SUCCESS.
Request example
Successful response example
The payload field
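A sketch of such a request (the media field name, host, and threshold value are illustrative; the payload nesting follows the description above):
// Detect liveness on a video, requesting the best shot and a custom threshold.
async function detectLiveness(video: Blob): Promise<unknown> {
  const form = new FormData();
  form.append('media_key1', video, 'liveness.mp4'); // each media needs a unique name
  form.append('payload', JSON.stringify({
    analyses: { extract_best_shot: true, threshold_spoofing: 0.5 },
  }));
  const response = await fetch('http://localhost/v1/face/liveness/detect', {
    method: 'POST',
    body: form,
  });
  return response.json();
}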
Method errors
HTTP response content type: “application/json”.
*
A biometric sample is an input image.
Failed to read the biometric template
400 BPE-002005 – Invalid Content-Type of the multiparted HTTP request part
400 BPE-003001 – Failed to retrieve the biometric template
400 BPE-003002 – The biometric sample* contains no face
400 BPE-003003 – More than one person is present in the biometric sample*
500 BPE-001001 – Internal bioprocessor error
400 BPE-001002 – TFSS error. Call the Biometry health method.
Invalid Content-Type of the multiparted HTTP request part
500 LDE-001001 – Liveness detection processor internal error
400 LDE-001002 – TFSS error. Call the Liveness health method.
core (String) – API Lite core version number.
tfss (String) – TFSS version number.
models ([String]) – an array of model versions; each record contains the model name and model version number.
status (Int) – 0 means the biometric processor is working correctly; 3 means the biometric processor is inoperative.
message (String) – Message.
Not specified* (Stream) – Required parameter. The image to extract the biometric template from. The “Content-Type” header field must indicate the content type.
Not specified* (Stream) – A biometric template derived from an image.
bio_feature (Stream) – Required parameter. The first biometric template.
bio_template (Stream) – Required parameter. The second biometric template.
score (Float) – The result of comparing the two templates.
decision (String) – Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match.
sample (Stream) – Required parameter. The image to extract the biometric template from.
bio_template (Stream) – Required parameter. The biometric template to compare with.
score (Float) – The result of comparing the two templates.
bio_feature (Stream) – The biometric template derived from the image.
sample_1 (Stream) – Required parameter. The first image.
sample_2 (Stream) – Required parameter. The second image.
score (Float) – The result of comparing the two extracted templates.
decision (String) – Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match.
template_1 (Stream) – Required parameter. The first (main) biometric template.
templates_n (Stream) – A list of N biometric templates. Each template should be passed separately, but under the same parameter name, templates_n. You also need to pass the filename in the header.
results (List[JSON]) – A list of N comparison results. The Nth result contains the comparison result for the main and Nth templates. Each result has the following fields:
*filename (String) – A filename for the Nth template.
*score (Float) – The result of comparing the main and Nth templates.
*decision (String) – Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match.
sample_1 (Stream) – Required parameter. The main image.
templates_n (Stream) – A list of N biometric templates. Each template should be passed separately, but under the same parameter name, templates_n. You also need to pass the filename in the header.
results (List[JSON]) – A list of N comparison results. The Nth result contains the comparison result for the template derived from the main image and the Nth template. Each result has the following fields:
*filename (String) – A filename for the Nth template.
*score (Float) – The result of comparing the template derived from the main image and the Nth template.
*decision (String) – Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match.
sample_1 (Stream) – Required parameter. The first (main) image.
samples_n (Stream) – A list of N images. Each image should be passed separately, but under the same parameter name, samples_n. You also need to pass the filename in the header.
results (List[JSON]) – A list of N comparison results. The Nth result contains the comparison result for the main and Nth images. Each result has the following fields:
*filename (String) – A filename for the Nth image.
*score (Float) – The result of comparing the templates derived from the main and Nth images.
*decision (String) – Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match.
Total number of attempts on all actions/gestures if you use a sequence of them
setUICustomization
Sets the UI customization values for OzLivenessSDK. The values are described in the Customization structures section. Structures can be found in the lib/customization.dart file.
setFaceAlignmentTimeout
Sets the timeout for the face alignment for actions.
timeout (int) – Timeout in milliseconds.
Fonts and Other Customized Resources
For iOS
Add fonts and drawable resources to the application/ios project.
For Android
Fonts and images should be placed into related folders:
UICustomization
Contains the information about customization parameters.
ToolbarCustomization
Toolbar customization parameters.
closeButtonIcon (String) – Close button icon received from the plugin.
closeButtonColor (String) – Color #XXXXXX.
titleText (String) – Header text.
titleFont
CenterHintCustomization
Center hint customization parameters.
textFont (String) – Text font.
textFontStyle (String) – Font style.
textColor (String) – Color #XXXXXX.
textSize
HintAnimation
Hint animation customization parameters.
hideAnimation (bool) – Hides the hint animation.
animationIconSize (int) – Animation icon size in px (40–160).
hintGradientColor (String) – Color #XXXXXX.
hintGradientOpacity
FaceFrameCustomization
Frame around face customization parameters.
geometryType (String) – Frame shape received from the plugin.
geometryTypeRadius (int) – Corner radius for the rectangle.
strokeWidth (int) – Frame stroke width.
strokeFaceNotAlignedColor
VersionLabelCustomization
SDK version customization parameters.
textFont (String) – Text font.
textFontStyle (String) – Font style.
textColor (String) – Color #XXXXXX.
textSize
BackgroundCustomization
Background customization parameters.
backgroundColor (String) – Color #XXXXXX.
backgroundAlpha (int) – Background opacity.
Flutter structures
Defined in the models.dart file.
enum Locale
Stores the language information.
en – English
hy – Armenian
kk – Kazakh
ky – Kyrgyz
tr – Turkish
enum MediaType
The type of media captured.
movement – a media with an action
documentBack – the back side of the document
documentFront – the front side of the document
enum FileType
The type of media captured.
documentPhoto – a photo of a document
video – a video
shotSet – a frame archive
enum MediaTag
Contains an action from the captured video.
Case
Description
blank
A video with no gesture
photoSelfie
A selfie photo
videoSelfieOneShot
A video with the best shot taken
videoSelfieScan
A video with the scanning gesture
videoSelfieEyes
A video with the blink gesture
Media
Stores information about media.
Parameter
Type
Description
Platform
fileType
The type of the file
Android
movement
An action on a media
iOS
mediaType
RequestResult
Stores information about the analysis result.
Parameter
Type
Description
Platform
folderId
String
The folder identifier
type
The analysis type
errorCode
Analysis
Stores data about a single analysis.
Parameter
Type
Description
type
The type of the analysis
mode
The mode of the analysis
mediaList
List<Media>
Media to analyze
params
Structures
enum Type
Analysis type.
Case
Description
biometry
The algorithm that compares several media and checks whether they show the same person or not
quality
The algorithm that aims to check whether a person in a video is a real human acting in good faith, not a fake of any kind.
enum Mode
Analysis mode.
Case
Description
onDevice
The on-device analysis with no server needed
serverBased
The server-based analysis
hybrid
The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
enum VerificationAction
Contains the action from the captured video.
Case
Description
oneShot
The best shot from the video taken
blank
A selfie with face alignment check
scan
Scan
headRight
Head turned right
headLeft
Head turned left
enum Resolution
The general status for all analyses applied to the folder created.
Case
Description
failed
One or more analyses failed due to an error and could not be completed
declined
The check failed (e.g., the faces don't match or a spoofing attack was detected)
success
Everything went fine, the check succeeded (e.g., faces match or liveness confirmed)
operatorRequired
The result should be additionally checked by a human operator
enum SizeReductionStrategy
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
uploadOriginal
The original video
uploadCompressed
The compressed video
uploadBestShot
The best shot taken from the video
uploadNothing
Nothing is sent (note that no folder will be created)
Initializes OZSDK with the license data. The closure returns either the license data or an error.
Returns
-
setLicense
Forces the license installation.
setApiConnection
Retrieves an access token for a user.
Returns
The access token or an error.
setEventsConnection
Retrieves an access token for a user to send telemetry.
Returns
The access token or an error.
isLoggedIn
Checks whether an access token exists.
Parameters
-
Returns
The result – true or false.
logout
Deletes the saved access token.
Parameters
-
Returns
-
createVerificationVCWithDelegate
Creates the Liveness check controller.
Returns
UIViewController or an exception.
createVerificationVC
Creates the Liveness check controller.
Returns
UIViewController or an exception.
cleanTempDirectory
Deletes all videos.
Parameters
-
Returns
-
getEventSessionId
Retrieves the telemetry session ID.
Parameters
-
Returns
The telemetry session ID (String parameter).
set
Sets the bundle to look for translations in.
Returns
-
setSelfieLength
Sets the length of the Selfie gesture (in milliseconds).
generateSignedPayload
Generates the payload with media signatures.
Returns
Payload to be sent along with media files that were used for generation.
Properties
localizationCode
SDK locale (if not set, it is detected automatically).
host
The host to call for Liveness video analysis.
attemptSettings
The holder for attempt counts before the SDK returns an error.
version
The SDK version.
OZLivenessDelegate
A delegate for OZSDK.
Methods
onOZLivenessResult
Gets the Liveness check results.
Returns
-
onError
The error processing method.
Returns
-
AnalysisRequest
A protocol for performing checks.
Methods
AnalysisRequestBuilder
Creates the AnalysisRequest instance.
Returns
The AnalysisRequest instance.
addAnalysis
Adds an analysis to the AnalysisRequest instance.
Returns
-
uploadMedia
Uploads media to the server.
Returns
-
addFolderId
Adds the folder ID to upload media to a certain folder.
Returns
-
addFolderMeta
Adds metadata to a folder.
Returns
-
run
Runs the analyses.
Returns
The analysis result or an error.
Customization
Customization for OzLivenessSDK (use OZSDK.customization).
toolbarCustomization
A set of customization parameters for the toolbar.
centerHintCustomization
A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.
hintAnimationCustomization
A set of customization parameters for the hint animation.
faceFrameCustomization
A set of customization parameters for the frame around the user face.
backgroundCustomization
A set of customization parameters for the background outside the frame.
versionCustomization
A set of customization parameters for the SDK version text.
antiscamCustomization
A set of customization parameters for the antiscam message that warns the user that their actions are being recorded.
logoCustomization
Logo customization parameters. Custom logo should be allowed by license.
Variables and Objects
enum LicenseSource
A source of a license.
struct LicenseData
The license data.
enum OzVerificationMovement
Contains the action from the captured video.
enum OZLocalizationCode
Contains the locale code according to ISO 639-1.
struct OZMedia
Contains all the information on the media captured.
enum MediaType
The type of media captured.
enum OZVerificationStatus
Error description. These errors are deprecated and will be deleted in the upcoming releases.
struct Analysis
Contains information on what media to analyze and what analyses to apply.
enum AnalysisType
The type of the analysis.
Currently, the .document analysis can't be performed in the on-device mode.
enum AnalysisMode
The mode of the analysis.
enum ScenarioState
Shows the media processing status.
struct AnalysisStatus
Shows the files' uploading status.
RequestStatus
Shows the analysis processing status.
ResultMedia
Describes the analysis result for the single media.
RequestResult
Contains the consolidated analysis results for all media.
class AnalysisResult
Contains the results of the checks performed.
enum AnalyseResolutionStatus
The general status for all analyses applied to the folder created.
struct AnalyseResolution
Contains the results for single analyses.
enum GeometryType
Frame shape settings.
enum LicenseError
Possible license errors.
enum Connection
The authorization type.
struct UploadMediaSettings
Defines the settings for the repeated media upload.
Parameter
Type
Description
enum SizeReductionStrategy
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
sslPin
Contains information about the pinned (whitelisted) SSL certificates.
Data Container
The methods below apply to the data container feature that has been implemented in 8.22.
addContainer
This method replaces addAnalysis in the AnalysisRequest structure when you use the data container flow.
Input
createMediaCaptureScreen
Captures a media file with all the information you need and packages it into a data container.
Input
Output
public data class CaptureRequest
Defines a request for video capture.
public data class AnalysisProfile
Contains information on media files and analyses that should be applied to them.
public sealed class MediaRequest
Stores information about a media file.
Please note: add either actionMedia or userMedia; these parameters are mutually exclusive.
Sets the number of attempts and timeout between them
Toolbar title text color
backgroundColor
UIColor
Toolbar background color
titleText
String
Text on the toolbar
Center hint vertical position from the screen top (in %, 0-100)
hideTextBackground
Bool
Hides text background
backgroundCornerRadius
Int
Center hint background frame corner radius
Frame color when a face is aligned properly
strokeWidth
CGFloat
Frame stroke width (in dp, 0-20)
strokePadding
CGFloat
A padding from the stroke to the face alignment area (in dp, 0-10)
Antiscam message text color
customizationAntiscamBackgroundColor
UIColor
Antiscam message text background color
customizationAntiscamCornerRadius
CGFloat
Background frame corner radius
customizationAntiscamFlashColor
UIColor
Color of the flashing indicator close to the antiscam message
Additional configuration
Head turned left
right
Head turned right
down
Head tilted downwards
up
Head lifted up
Spanish
pt-BR
Portuguese (Brazilian)
custom(String)
Custom language (language ISO 639-1 code, two letters)
URL of the Liveness video
bestShotURL
URL
URL of the best shot in PNG
preferredMediaURL
URL
URL of the API media container
timestamp
Date
Timestamp for the check completion
The Liveness check can't be performed: attempts limit exceeded
failedBecausePreparingTimout
The Liveness check can't be performed: face alignment timeout
failedBecauseOfLowMemory
The Liveness check can't be performed: no memory left
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully
params (optional)
String
Additional parameters
Object uploading status
Resulting score
mediaType
String
Media file type: VIDEO / IMAGE / SHOT_SET
media
Media that is being analyzed
error
AnalysisError (inherits from Error)
Error
Analysis identifier
error
AnalysisError (inherits from Error)
Error
resultMedia
[ResultMedia]
Results of the analysis for single media files
confidenceScore
Float
The resulting score
serverRawResponse
String
Server response
Everything went fine, the check succeeded (e.g., faces match or liveness confirmed)
OPERATOR_REQUIRED
The result should be additionally checked by a human operator
The algorithm that compares several media and checks whether they show the same person or not
quality
The algorithm that aims to check whether a person in a video is a real human acting in good faith, not a fake of any kind.
document (deprecated)
The analysis that aims to recognize the document and check if its fields are correct according to its type.
blacklist
The analysis that compares a face on a captured media with faces from the pre-made media database.
Case
Description
onDevice
The on-device analysis with no server needed. We recommend using server-based analyses whenever possible, as on-device ones tend to produce less accurate results
serverBased
The server-based analysis
hybrid
The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
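As an illustration of picking a mode, the sketch below builds a hybrid Liveness analysis. The Analysis constructor shape is an assumption based on the class description in this reference; the enum cases are the ones listed above.

```kotlin
// Sketch only: constructor shape assumed, enum cases taken from this reference.
val liveness = Analysis(
    type = Analysis.Type.QUALITY,  // Liveness check
    mode = Analysis.Mode.HYBRID,   // on-device first; server re-check on a suspicious score
    mediaList = capturedMedia,     // media captured by the SDK
)
```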
Case
Description
addToFolder
The system is creating a folder and adding files to this folder
Utility function to get the SDK error from OnActivityResult's intent.
Returns
The error description – String.
getLicensePayload
Retrieves the SDK license payload.
Parameters
-
Returns
The license payload (LicensePayload) – the object that contains the extended info about licensing conditions.
getResultFromIntent
Utility function to get SDK results from OnActivityResult's intent.
Returns
A list of OzAbstractMedia objects.
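A typical Activity flow for these utilities might look as follows; this is a sketch, and getErrorFromIntent stands in as a hypothetical name for the error utility described above.

```kotlin
// Sketch: reading SDK results in onActivityResult.
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == OZ_LIVENESS_REQUEST_CODE) { // hypothetical request code constant
        val media: List<OzAbstractMedia> = OzLivenessSDK.getResultFromIntent(data)
        if (media.isEmpty()) {
            val error = OzLivenessSDK.getErrorFromIntent(data) // hypothetical name
            // surface the error description (String) to the user or logs
        }
    }
}
```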
init
Initializes SDK with license sources.
Returns
-
log
Enables logging using the Oz Liveness SDK logging mechanism.
Returns
-
setApiConnection
Connection to API.
setEventsConnection
Connection to the telemetry server.
logout
Deletes the saved token.
Parameters
-
Returns
-
getEventSessionId
Retrieves the telemetry session ID.
Parameters
-
Returns
The telemetry session ID (String parameter).
version
Retrieves the SDK version.
Parameters
-
Returns
The SDK version (String parameter).
generateSignedPayload
Generates the payload with media signatures.
Returns
Payload to be sent along with media files that were used for generation.
AnalysisRequest
A class for performing checks.
run
The analysis launching method.
class Builder
A builder class for AnalysisRequest.
build
Creates the AnalysisRequest instance.
Parameters
-
Returns
The class instance.
addAnalysis
Adds an analysis to your request.
Returns
Error if any.
addAnalyses
Adds a list of analyses to your request. Allows executing several analyses for the same folder on the server side.
Returns
Error if any.
addFolderMeta
Adds metadata to a folder you create (for the server-based analyses only). You can add a key-value pair as additional information to the folder with the analysis result on the server side.
Returns
Error if any.
uploadMedia
Uploads one or more media to a folder.
Returns
Error if any.
setFolderId
Sets the folderId of a previously created folder. The folder must exist on the server side; otherwise, a new folder will be created.
Returns
Error if any.
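Putting the Builder methods together, a server-based request could be assembled as sketched below; the Analysis constructor shape and the callback payload are assumptions, while the method names come from this reference.

```kotlin
// Sketch: build and run a server-based Liveness analysis.
val request = AnalysisRequest.Builder()
    .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, media))
    .addFolderMeta(mapOf("client_id" to "42")) // optional key-value folder metadata
    .build()
request.run { result ->
    // result: the consolidated RequestResult described below
}
```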
OzConfig
Configuration for OzLivenessSDK (use OzLivenessSDK.config).
setSelfieLength
Sets the length of the Selfie gesture (in milliseconds).
Returns
Error if any.
allowDebugVisualization
Enables additional debug info shown on tapping the version text.
attemptSettings
The number of attempts before the SDK returns an error.
uploadMediaSettings
Settings for repeated media upload.
faceAlignmentTimeout
Timeout for face alignment (measured in milliseconds).
livenessErrorCallback
Interface implementation for receiving errors from the Liveness detection.
localizationCode
Locale to display string resources.
logging
Logging settings.
useMainCamera
Uses the main (rear) camera instead of the front camera for liveness detection.
disableFramesCountValidation
Disables the check that prevents videos from being too short (3 frames or fewer).
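A sketch of setting these fields on OzLivenessSDK.config is shown below; property types and enum cases are assumptions inferred from the descriptions above.

```kotlin
// Sketch: configuring the SDK before starting the Liveness screen.
OzLivenessSDK.config.apply {
    faceAlignmentTimeout = 15_000            // milliseconds
    localizationCode = OzLocalizationCode.EN // locale for string resources (case assumed)
    useMainCamera = false                    // keep the front camera for liveness
}
```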
UICustomization
Customization for OzLivenessSDK (use OzLivenessSDK.config.customization).
hideStatusBar
Hides the status bar and the three buttons at the bottom. The default value is True.
toolbarCustomization
A set of customization parameters for the toolbar.
centerHintCustomization
A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.
hintAnimation
A set of customization parameters for the hint animation.
faceFrameCustomization
A set of customization parameters for the frame around the user face.
backgroundCustomization
A set of customization parameters for the background outside the frame.
versionTextCustomization
A set of customization parameters for the SDK version text.
antiscamCustomization
A set of customization parameters for the antiscam message that warns the user that their actions are being recorded.
logoCustomization
Logo customization parameters. Custom logo should be allowed by license.
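For example, a few of these groups could be adjusted as sketched below; the nested field names follow the parameter tables later in this reference, and the exact property shapes are assumptions.

```kotlin
// Sketch: tweaking UI customization via OzLivenessSDK.config.customization.
OzLivenessSDK.config.customization.apply {
    hideStatusBar = true
    toolbarCustomization.title = "Identity check" // "Text on the toolbar"
    faceFrameCustomization.strokeWidth = 4        // frame stroke width, dp (0-20)
}
```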
Variables and Objects
enum OzAction
Contains the action from the captured video.
class LicensePayload
Contains the extended info about licensing conditions.
sealed class OzAbstractMedia
A class for the captured media that can be:
OzDocumentPhoto
A document photo.
OzShotSet
A set of shots in an archive.
OzVideo
A Liveness video.
enum OzMediaTag
Contains an action from the captured video.
sealed class LicenseSource
A class for a license source that can be:
LicenseAssetId
Contains the license ID.
LicenseFilePath
Contains the path to a license.
class AnalysisStatus
A class for analysis status that can be:
RunningAnalysis
This status means the analysis is launched.
UploadingMedia
This status means the media is being uploaded.
enum Type
The type of the analysis.
Currently, the DOCUMENTS analysis can't be performed in the on-device mode.
enum Mode
The mode of the analysis.
We recommend using server-based analyses whenever possible, as on-device ones tend to produce less accurate results.
class Analysis
Contains information on what media to analyze and what analyses to apply.
enum Resolution
The general status for all analyses applied to the folder created.
class OzAttemptsSettings
Holder for attempt counts before the SDK returns an error.
enum OzLocalizationCode
Contains the locale code according to ISO 639-1.
class OzLogging
Contains logging settings.
sealed class Color
A class for color that can be (depending on the value received):
ColorRes
ColorHex
ColorInt
enum GeometryType
Frame shape settings.
class AnalysisError
Exception class for AnalysisRequest.
class SourceMedia
Structure that describes media used in AnalysisRequest.
class ResultMedia
Structure that describes the analysis result for the single media.
class RequestResult
Consolidated result for all analyses performed.
class AnalysisResult
Result of the analysis for all media it was applied to.
class OzConnection
Defines the authentication method.
OzConnection.fromServiceToken
Authentication via token.
OzConnection.fromCredentials
Authentication via credentials.
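A sketch of wiring authentication with these factories is shown below; the argument order and the callback are assumptions, while the factory names come from this reference.

```kotlin
// Sketch: authenticate against Oz API with credentials (argument order assumed).
OzLivenessSDK.setApiConnection(
    OzConnection.fromCredentials("https://api.example.com", "user@example.com", "secret")
) { /* access token received, or an error */ }
```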
class OzUploadMediaSettings
Defines the settings for the repeated media upload.
enum SizeReductionStrategy
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
sslPin
Contains information about the .
Data Container
The methods below apply to the data container feature that has been implemented in 8.22.
addContainer
This method replaces addAnalysis in the AnalysisRequest structure when you use the data container flow.
Input
createMediaCaptureScreen
Captures a media file with all the information you need and packages it into a data container.
Input
Output
public data class CaptureRequest
Defines a request for video capture.
public data class AnalysisProfile
Contains information on media files and analyses that should be applied to them.
public sealed class MediaRequest
Stores information about a media file.
Please note: add either actionMedia or userMedia; these parameters are mutually exclusive.
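A sketch of the container flow is shown below; only the method names (createMediaCaptureScreen, addContainer) come from this reference, and everything else, including how the container object reaches your code, is an assumption.

```kotlin
// Sketch: capture media into a data container, then run analyses on it.
val intent = OzLivenessSDK.createMediaCaptureScreen(context, CaptureRequest(/* media + analyses */))
startActivityForResult(intent, CONTAINER_REQUEST_CODE) // hypothetical request code
// later, once the container object is available:
AnalysisRequest.Builder()
    .addContainer(container) // replaces addAnalysis in the container flow
    .build()
    .run { /* status updates */ }
```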
Error Description
Toolbar title text font style
titleTextSize
Int
Toolbar title text size (in sp, 12-18)
titleTextAlpha
Int
Toolbar title text opacity (in %, 0-100)
titleTextColor
Toolbar title text color
backgroundColor
Toolbar background color
backgroundAlpha
Int
Toolbar background opacity (in %, 0-100)
isTitleCentered
Boolean
Defines whether the text on the toolbar is centered or not
title
String
Text on the toolbar
Center hint text color
textAlpha
Int
Center hint text opacity (in %, 0-100)
verticalPosition
Int
Center hint vertical position from the screen bottom (in %, 0-100)
backgroundColor
Center hint background color
backgroundOpacity
Int
Center hint background opacity
backgroundCornerRadius
Int
Center hint background frame corner radius (in dp, 0-20)
A switch for the hint animation; if True, the animation is hidden
Frame color when a face is aligned properly
strokeAlpha
Int
Frame opacity (in %, 0-100)
strokeWidth
Int
Frame stroke width (in dp, 0-20)
strokePadding
Int
A padding from the stroke to the face alignment area (in dp, 0-10)
SDK version text opacity (in %, 20-100)
Antiscam message text color
textAlpha
Int
Antiscam message text opacity (in %, 0-100)
backgroundColor
Antiscam message background color
backgroundOpacity
Int
Antiscam message background opacity
cornerRadius
Int
Background frame corner radius (in px, 0-20)
flashColor
Color of the flashing indicator close to the antiscam message
Head tilted downwards
HeadUp
Head lifted up
EyeBlink
Blink
Smile
Smile
Media metadata
Media metadata
URL of the API media container
additionalTags (optional)
String
Additional tags if needed (including those not from the OzMediaTag enum)
metaData
Map<String, String>
Media metadata
A video with the smile gesture
VideoSelfieHigh
A video with the lifting head up gesture
VideoSelfieDown
A video with the tilting head downwards gesture
VideoSelfieRight
A video with the turning head right gesture
VideoSelfieLeft
A video with the turning head left gesture
PhotoIdPortrait
A photo from a document
PhotoIdBack
A photo of the back side of the document
PhotoIdFront
A photo of the front side of the document
Completion percentage
Additional parameters
sizeReductionStrategy
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully
Spanish
PT-BR
Portuguese (Brazilian)
Media object
tags
List<String>
Tags for media
Source media
type
Type of the analysis
A list of results of the analyses for single media
confidenceScore
Float
Resulting score
analysisId
String
Analysis identifier
params
@RawValue Map<String, Any>
Additional folder parameters
error
Error if any
serverRawResponse
String
Response from backend
Whitelisted certificates
No face found in a video
FORCE_CLOSED = 7
Error. Liveness activity is force closed from client application.
A user closed the Liveness screen during video recording
DEVICE_HAS_NO_FRONT_CAMERA = 8
Error. Device has not front camera.
No front camera found
DEVICE_HAS_NO_MAIN_CAMERA = 9
Error. Device has not main camera.
No rear camera found
DEVICE_CAMERA_CONFIGURATION_NOT_SUPPORTED = 10
Error. Device camera configuration is not supported.
Oz Liveness doesn't support the camera configuration of the device
FACE_ALIGNMENT_TIMEOUT = 12
Error. Face alignment timeout in OzLivenessSDK.config.faceAlignmentTimeout milliseconds
Time limit for the face alignment is exceeded
ERROR = 13
The check was interrupted by user
User has closed the screen during the Liveness check.
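When handling these codes in an application, a simple mapping like the sketch below may help; the codes are from this table, while the delivery mechanism (an intent extra with a hypothetical key) and the helper functions are assumptions.

```kotlin
// Sketch: react to SDK error codes (delivery mechanism assumed).
when (data?.getIntExtra("oz_error_code", -1)) {   // hypothetical extra key
    7 -> retryCapture()                           // FORCE_CLOSED: user closed the screen
    12 -> showAlignmentHint()                     // FACE_ALIGNMENT_TIMEOUT
    13 -> { /* check interrupted by the user */ } // ERROR
    else -> { /* camera/configuration errors */ }
}
```

retryCapture and showAlignmentHint are hypothetical application helpers.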
The algorithm that compares several media and checks whether they show the same person or not
QUALITY
The algorithm that aims to check whether a person in a video is a real human acting in good faith, not a fake of any kind.
DOCUMENTS (deprecated)
The analysis that aims to recognize the document and check if its fields are correct according to its type.
Case
Description
ON_DEVICE
The on-device analysis with no server needed
SERVER_BASED
The server-based analysis
HYBRID
The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.