
Oz Knowledge Base


Solution Architecture

This article describes Oz components that can be integrated into your infrastructure in various combinations depending on your needs.

Oz API

Oz API is the central component of the system. It provides a RESTful application programming interface to the core functionality of the Liveness and Face matching analyses, along with many important supplemental features:

  • Persistence: your media and analyses are stored for future reference unless you explicitly delete them,

  • Authentication, roles and access management,

  • Asynchronous analyses,

  • Ability to work with videos as well as images.

Under the logical hood, Oz API has the following components:

  • File storage and database where media, analyses, and other data are stored,

  • The Oz BIO module that runs neural network models to perform facial biometry magic,

  • Licensing logic.

The front-end components (Oz Liveness Mobile or Web SDK) connect to Oz API to perform server-side analyses, either directly or via the customer's back end.

iOS and Android SDK

The iOS and Android SDKs are collectively referred to as Mobile SDKs or Native SDKs. They are written in Swift and Kotlin/Java, respectively, and designed to be integrated into your native mobile application.

Mobile SDKs implement an out-of-the-box, customizable user interface for capturing Liveness video and ensure that two main objectives are met:

  • The capture process is smooth for users,

  • The quality of a video is optimal for the subsequent Liveness analysis.

After the Liveness video is recorded and available to your mobile application, you can run the server-side analysis. You can use the corresponding SDK methods, call the API directly from your mobile application, or pass the media to your back end and interact with Oz API from there.

Web SDK

Web Adapter and Web Plugin together constitute Web SDK.

Web SDK is designed to be integrated into your web applications and has the same main goals as Mobile SDKs:

  • The capture process is smooth for users,

  • The quality of a video is optimal for the subsequent Liveness analysis.

Web Adapter needs to be set up on the server side. Web Plugin is called by your web application and works in the browser context. It communicates with Web Adapter, which, in turn, communicates with Oz API.

Web SDK adds two-layer protection against injection attacks:

  1. It collects information about the browser context and camera properties to detect the usage of virtual cameras or other injection methods.

  2. It records the liveness video in a format that allows server-side neural networks to search for traces of an injection attack in the video itself.

Web UI (Web Console)

Web UI is a convenient user interface that allows you to explore the stored API data in an easy way. It relies on the API's authentication and database and does not store any data on its own.

The typical integration scenarios are described in the Integration Quick Start Guides section.

For more information, please refer to Oz API Key Concepts and the Oz API Developer Guide. To test Oz API, please check the Postman collection here.

The basic integration option is described in the Quick Start Guide.

Mobile SDKs are also capable of On-device Liveness and Face matching. On-device analyses may be a good option in a low-risk context, or when you don't want the media to leave the users' smartphones. Oz API is not required for On-device analyses. To learn how it works, please refer to this Integration Quick Start Guide.

Check the Integration Quick Start Guide for the basic integration scenario, and explore the Web SDK Developer Guide for more details.

Web Console has an intuitive interface, yet the user guide is available here.

Liveness, Face Matching, Black List Checks

This article describes the main types of analyses that Oz software is able to perform.

  • Liveness checks whether a person in a media file is a real human.

  • Face Matching examines two or more media files to identify similarities between the faces depicted in them.

  • Black list looks for resemblances between an individual featured in a media file and individuals in a pre-existing photo database.

Liveness

The Liveness check is important to protect facial recognition from two types of attacks.

A presentation attack, also known as a spoofing attack, is an attempt to deceive a facial recognition system by presenting to the camera a video, photo, or any other type of media that mimics the appearance of a genuine user. Such attacks can include the use of realistic masks or makeup.

An injection attack is an attempt to deceive a facial recognition system by replacing the physical camera input with a prerecorded image or video, by manipulating the physical camera output before it becomes input to the facial recognition system, or by injecting malicious code. Virtual camera software is the most common tool for injection attacks.

Oz Liveness is able to detect both types of attacks. Any component can detect presentation attacks; for injection attack detection, use Oz Liveness SDK. To learn how to use Oz components to prevent attacks, check our integration quick start guides:

  • How to integrate server-based liveness into your web application,

  • How to integrate server-based liveness into your mobile application,

  • How to check your media for liveness without Oz front end.

Once the Liveness check is finished, you can check both qualitative and quantitative analysis results.

Results overview

Qualitative results

  • SUCCESS – everything went fine, the analysis has completed successfully;

  • DECLINED – the check failed (an attack detected).

If the analysis hasn't finished yet, the result can be PROCESSING (the analysis is in progress) or FAILED (the analysis encountered an error and couldn't finish).

If you have analyzed multiple media, the aggregated status will be SUCCESS only if each analysis on each media has finished with the SUCCESS result.
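As a small illustration (not Oz code), the aggregation rule can be expressed like this in Python:

def aggregated_success(statuses):
    # The aggregated status is SUCCESS only if each analysis
    # on each media file has finished with SUCCESS
    return all(status == "SUCCESS" for status in statuses)

# aggregated_success(["SUCCESS", "DECLINED"]) -> False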

Quantitative results

  • 100% (1) – an attack is detected, the person in the video is not a real living person,

  • 0% (0) – a person in the video is a real living person.

The Liveness check can also return the best shot from a video: the best-quality frame, in which the face is visible most clearly.

Face Matching

The Biometry algorithm compares several media files and checks whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media files.

Results overview

Qualitative results

  • SUCCESS – everything went fine, the analysis has completed successfully;

  • DECLINED – the check failed (faces don't match).

If the analysis hasn't finished yet, the result can be PROCESSING (the analysis is in progress) or FAILED (the analysis encountered an error and couldn't finish).

Quantitative results

After comparison, the algorithm provides numbers that represent the similarity level. The numbers vary from 100% to 0% (1 to 0), where:

  • 100% (1) – faces are similar, the media represent the same person,

  • 0% (0) – faces are not similar and belong to different people.

There are two scores to consider: the minimum and the maximum. If you have analyzed two media files, these scores will be equal. For three or more media files, the similarity score is calculated for each pair. Once calculated, these scores are aggregated, and the analysis returns the minimum and maximum similarity scores for the media compared. Typically, the minimum score is enough.
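To make the aggregation concrete, here is an illustrative Python sketch; pairwise_score stands in for the similarity computation and is hypothetical:

import itertools

def min_max_similarity(media, pairwise_score):
    # Compute the similarity score for each pair of media files,
    # then aggregate into the minimum and maximum scores
    scores = [pairwise_score(a, b) for a, b in itertools.combinations(media, 2)]
    return min(scores), max(scores)

# With exactly two media files there is a single pair, so min == max.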

Black List

In Oz API, you can configure one or more black lists, or face collections. These collections are databases of people depicted in photos. When the Black list analysis is conducted, Oz software compares the face in the photo or video taken with the faces in this pre-made database and shows whether the face exists in the collection.

Results overview

Qualitative results

  • SUCCESS – everything went fine, the analysis has completed successfully;

  • DECLINED – the check failed (faces match).

If the analysis hasn't finished yet, the result can be PROCESSING (the analysis is in progress) or FAILED (the analysis encountered an error and couldn't finish).

Quantitative results

After comparison, the algorithm provides a score that represents the similarity level. The number varies from 100% to 0% (1 to 0), where:

  • 100% (1) – the person in an image or video matches with someone in the blacklist database,

  • 0% (0) – the person is not found in the blacklist.

These analyses are accessible in the Oz API for both SaaS and On-Premise models. Liveness and Face Matching are also offered in the On-Device model. Please visit this page to learn more about the usage models.

Asking users to perform a gesture, such as smiling or turning their head, is a popular requirement when recording a Liveness video. With Oz Liveness Mobile and Web SDK, you can also request gestures from users. However, our Liveness check relies on other factors, analyzed by neural networks, and does not depend on gestures. For more details, please check Passive and Active Liveness.

Wondering how to integrate face matching into your processes? Check our integration quick start guides.

For additional information, please refer to this article.

Oz API Key Concepts

The Oz API is a comprehensive REST API that enables facial biometrics, allowing for both face matching and liveness checks. This write-up provides an overview of the essential concepts to keep in mind while using the Oz API.

Authentication, Roles, and Access Management

To ensure security, every Oz API call requires an access token in its HTTP headers. To obtain this token, execute the POST /api/authorize/auth method with the login and password provided by us. Pass this token in the X-Forensic-Access-Token header in subsequent Oz API calls. This article provides comprehensive details on the authentication process; kindly refer to it for further information.

Furthermore, the Oz API offers distinct user roles, ranging from CLIENT, who can perform checks and access reports but lacks administrative rights (e.g., deleting folders), to ADMIN, who enjoys nearly unrestricted access to all system objects. For additional information, please consult this guide.
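For illustration, a minimal sketch of this flow in Python (the host is a placeholder; the endpoint, header, and access_token field are as described in this article):

import requests

API_URL = "https://sandbox.ohio.ozforensics.com"  # placeholder host

# Obtain the access token with the credentials provided by us
response = requests.post(
    f"{API_URL}/api/authorize/auth",
    json={"credentials": {"email": "<login>", "password": "<password>"}},
)
access_token = response.json()["access_token"]

# Pass the token in the X-Forensic-Access-Token header of subsequent calls
headers = {"X-Forensic-Access-Token": access_token}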

Persistence

The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check the aggregated result. A folder can contain an unlimited number of media files, and each media file can be the target of several analyses. An analysis can also span several media files at once.

Media Types and Tags

Oz API works with photos and videos. A video can be either a regular video container, e.g., MP4 or MOV, or a ZIP archive with a sequence of images (a shot set). Oz API uses the file MIME type to determine whether a media file is an image, a video, or a shot set.

It is also important to determine the semantics of the content, e.g., whether an image is a photo of a document or a selfie of a person. This is achieved by using tags. The selection of tags determines whether specific types of analyses will recognize or ignore particular media files. The most important tags are:

  • photo_id_front – for the front side of a photo ID

  • photo_selfie – for a non-document reference photo

  • video_selfie_blank – for a liveness video recorded beyond Oz Liveness SDK

If a media file is captured using the Oz Liveness SDK, the tags are assigned automatically.

The full list of Oz media tags with their explanations and examples can be found here.

Asynchronous analyses

Since video analysis may take a few seconds, the analyses are performed asynchronously. This implies that you initiate an analysis (POST /api/folders/{{folder_id}}/analyses/) and then monitor the outcome by polling until processing is complete (GET /api/analyses/{{analyse_id}} for a single analysis, or GET /api/folders/{{folder_id}}/analyses/ for all of a folder's analyses). Alternatively, there is a webhook option available. To see an example of how to use both the polling and webhook options, please check here.
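A minimal polling loop might look like this in Python (an illustrative sketch; the endpoint and the PROCESSING state are as described above, the host is a placeholder):

import time
import requests

API_URL = "https://sandbox.ohio.ozforensics.com"  # placeholder host

def wait_for_analysis(analysis_id, headers):
    # Poll once a second until the analysis leaves the PROCESSING state
    while True:
        analysis = requests.get(
            f"{API_URL}/api/analyses/{analysis_id}", headers=headers
        ).json()
        if analysis["state"] != "PROCESSING":
            return analysis
        time.sleep(1)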

Oz API vs. Oz API Lite

Liveness and Face Matching can also be provided by the Oz API Lite module. Oz API Lite is conceptually different from Oz API:

  • Fully stateless, no persistence,

  • Extremely easy to scale horizontally,

  • No built-in authentication and access management,

  • Works with single images, not videos.

Oz API Lite is suitable when you want to embed it into your product and/or have extremely high performance requirements (millions of checks per week). For more details, please refer to the Oz API Lite Developer Guide.

These were the key concepts of Oz API. To gain a deeper understanding of its capabilities, please refer to the Oz API section of our developer guide.

SaaS, On-premise, On-device: What to Choose

We offer different usage models for the Oz software to meet your specific needs. You can either use the software as a service from one of our cloud instances or integrate it into your existing infrastructure. Regardless of the usage model you choose, all Oz modules function identically; the choice depends entirely on your needs.

When to Choose SaaS

With the SaaS model, you can access one of our clouds without having to install our software in your own infrastructure.

Choose SaaS when you want:

  • Faster start as you don’t need to procure or allocate hardware within your company and set up a new instance.

  • Zero infrastructure cost as server components are located in Oz cloud.

  • Lower maintenance cost as Oz maintains and upgrades server components.

  • No cross-border data transfer for the regions where Oz has cloud instances.

When to Choose On-Premise

The on-premise model implies that all the Oz components required are installed within your infrastructure. Choose on-premise for:

  • Your data not leaving your infrastructure.

  • Full and detailed control over the configuration.

When to Choose On-Device

We also provide an opportunity of using the on-device Liveness and Face matching. This model is available in Mobile SDKs.

Consider the on-device option when:

  • You can’t transmit facial images to any server due to privacy concerns,

  • The network conditions under which you plan to use Oz products are extremely poor.

The choice is yours to make, but we're always available to provide assistance.

Minimum hardware requirements

Oz Biometry / Liveness Server

  • OS: please check the supported versions with our team

  • CPU: 16 cores

  • RAM: 24 GB

  • Disk: 80 GB

Oz API / Web UI / Web SDK Server

  • OS: please check the supported versions with our team

  • CPU: 8 cores

  • RAM: 16 GB

  • Disk: 300 GB

How to Integrate Server-Based Liveness into Your Web Application

This guide outlines the steps for integrating the Oz Liveness Web SDK into a customer web application for capturing facial videos and subsequently analyzing them on a server.

The SDK implements the ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results. Under the hood, it communicates with Oz API.

Oz Liveness Web SDK detects both presentation and injection attacks. An injection attack is an attempt to feed pre-recorded video into the system using a virtual camera.

Finally, while the cloud-based service provides the full-fledged functionality, we also offer an on-premise version with the same functions but no need to send any data to our cloud. We recommend starting with the SaaS mode and then reconnecting your web app to the on-premise Web Adapter and Oz API to ensure seamless integration between your front end and back end. With these guidelines in mind, integrating the Oz Liveness Web SDK into your web application is a simple and straightforward process.

1. Get your Web Adapter

Tell us the domain names of the pages from which you are going to call Web SDK and an email for admin access, e.g.:

Domain names from which WebSDK will be called:

  1. www.yourbrand.com

  2. www.yourbrand2.com

Email for admin access:

  • j.doe@yourcompany.com

In response, you'll get URLs and credentials for further integration and usage:

Login: j.doe@yourcompany.com

Password: …

API: https://sandbox.ohio.ozforensics.com/

Web Console: https://sandbox.ohio.ozforensics.com

Web Adapter: https://web-sdk.cdn.sandbox.ozforensics.com/your_company_name/

For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the instructions here. Consider the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.

2. Add Web Plugin to your web pages

Add the following tags to your HTML code, using the Web Adapter URL you received before:

<script src="https://<web-adapter-url>/plugin_liveness.php"></script>

3. Implement your logic around Web Plugin

Add the code that opens the plugin and handles the results:

OzLiveness.open({
  lang: 'en',
  action: [
    // 'photo_id_front', // request photo ID picture
    'video_selfie_blank' // request passive liveness video
  ],
  on_complete: function (result) {
    // This callback is invoked when the analysis is complete
    console.log('on_complete', result);
  }
});
With these steps, you are done with the basic integration of Web SDK into your web application. You will be able to access recorded media and analysis results in Web Console via browser or programmatically via API.

Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, here.

In the Web Plugin Developer Guide, you can find instructions for common next steps:

  • Customizing plugin look-and-feel

  • Adding custom language pack

  • Tuning plugin behavior

  • Plugin parameters and callbacks

  • Security recommendations

Please find a sample for Oz Liveness Web SDK here. To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us. For Angular and React, replace https://web-sdk.sandbox.ohio.ozforensics.com in index.html; see the Angular sample and the React sample.

Passive and Active Liveness

This article describes how passive and active Liveness checks work.

The objective of the Liveness check is to verify the authenticity and physical presence of an individual in front of the camera. In the passive Liveness check, it is sufficient to capture a user's face while they look into the camera. Conversely, the active Liveness check requires the user to perform an action such as smiling, blinking, or turning their head. While passive Liveness is more user-friendly, active Liveness may be necessary in some situations to confirm that the user is aware of undergoing the Liveness check.

In our Mobile or Web SDKs, you can define what action the user is required to do. You can also combine several actions into a sequence. Actions vary in the following dimensions:

  • User experience,

  • File size,

  • Liveness check accuracy,

  • Suitability for review by a human operator or in court.

In most cases, the Selfie action is optimal, but you can choose other actions based on your specific needs. Here is a summary of available actions:

Passive Liveness

  • Selfie

A short video, around 0.7 sec. Users are not required to do anything. Recommended for most cases. It offers the best combination of user experience and liveness check accuracy.

  • One shot

Similar to "Selfie", but only one image is chosen instead of the whole video. Recommended when media size is the most important factor. Harder for a human, e.g., an operator or a court, to evaluate for spoofing.

  • Scan

A 5-second video in which the user is asked to follow on-screen text with their eyes. Recommended when a longer video is required, e.g., for subsequent review by a human operator or in court.

Active Liveness

  • Smile

  • Blink

  • Tilt head up

  • Tilt head down

  • Turn head left

  • Turn head right

A user is required to complete a particular gesture within 5 seconds.

Use active liveness when you need a confirmation that the user is aware of undergoing a Liveness check.

Video length and file size may vary depending on how soon a user completes a gesture.

To recognize the actions from either passive or active Liveness, our algorithms refer to the corresponding tags. These tags indicate the type of action a user is performing within a media file. For more information, please read the Media Tags article. Detailed information on how the actions, or, in other words, gestures are called in different Oz Liveness components is here.

How to Check Your Media for Liveness without Oz Front End

Oz API is a rich REST API for facial biometry, where you can do liveness checks and face matching. Oz API features are:

  • Persistence: your media and analyses are stored for future reference unless you explicitly delete them,

  • Ability to work with videos as well as images,

  • Asynchronous analyses,

  • Authentication,

  • Roles and access management.

The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check for the aggregated result.

This step-by-step guide describes how to perform a liveness check with the Oz back end on a facial image or video that you already have: create a folder, upload your media to this folder, initiate the liveness check, and poll for the results.

For better accuracy and user experience, we recommend that you use our Web and/or Native SDKs on your front end for face capturing. Please refer to the relevant guides:

  • How to Integrate Server-Based Liveness into Your Web Application

  • How to Integrate Server-Based Liveness into Your Mobile Application

Before you begin, make sure you have Oz API credentials. When using the SaaS API, you get them from us. For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API; in the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario. You can explore all API methods with the Oz Postman collection.

1. Get the access token

For security reasons, we recommend obtaining the access token instead of using the credentials directly. Call POST /api/authorize/auth with your login and password in the request body:

{
    "credentials": {
        "email": "<login>", // your admin access email
        "password": "<password>" // the password you’ve got from us
     }
}

Obtain access_token from the response and add it to the X-Forensic-Access-Token header of all subsequent requests.

2. Convert a single image to an archive (for API 4.0.8 and below)

Omit this step for API 5.0.0 and above.

With API 4.0.8 and below, Oz API requires a video or an archived sequence of images to perform the liveness check. If you want to analyze a single image, you need to transform it into a ZIP archive; Oz API will treat this archive as a video.

Make sure that you use the corresponding video-related tags later on.

3. Upload media to a folder

To create a folder and upload your media into it, call POST /api/folders/, adding the media you need to the body part of the request.

In the payload field, set the appropriate tags:

{
    "media:tags": {
        "video1": [
            "video_selfie",
            "video_selfie_blank",
            "orientation_portrait"
        ]
    }
}

The successful response will return code 201 and the folder_id you’ll need later on.

4. Initiate the analysis

To launch the analysis, call POST /api/folders/{{folder_id}}/analyses/ with the folder_id from the previous step. In the request body, specify the liveness (quality) check to be launched:

{
    "analyses": [
        {
            "type": "quality"
        }
    ]
}

The method will return an analysis_id that you’ll need at the next step; the results will be available in a short while.

5. Poll for the results

Repeat calling GET /api/analyses/{{analysis_id}} with the analysis_id from the previous step once a second until the state changes from PROCESSING to something else. For a finished analysis:

  • get the qualitative result from resolution (SUCCESS or DECLINED).

  • get the quantitative results from results_media[0].results_data.confidence_spoofing. confidence_spoofing ranges from 0.0 (real person) to 1.0 (spoofing).

Here is the Postman collection for this guide.
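If you prefer code to Postman, here is a minimal end-to-end sketch of these steps in Python. It is illustrative only: the host and file name are placeholders, and the exact response field names (e.g., folder_id, analyse_id) should be verified against the Postman collection.

import time
import requests

API_URL = "https://sandbox.ohio.ozforensics.com"  # placeholder host

# 1. Get the access token
auth = requests.post(
    f"{API_URL}/api/authorize/auth",
    json={"credentials": {"email": "<login>", "password": "<password>"}},
).json()
headers = {"X-Forensic-Access-Token": auth["access_token"]}

# 3. Create a folder and upload the media, with the liveness tags in the payload
payload = '{"media:tags": {"video1": ["video_selfie", "video_selfie_blank", "orientation_portrait"]}}'
with open("selfie.mp4", "rb") as media_file:  # placeholder file
    folder = requests.post(
        f"{API_URL}/api/folders/",
        headers=headers,
        files={"video1": media_file},
        data={"payload": payload},
    ).json()

# 4. Initiate the liveness (quality) analysis
analyses = requests.post(
    f"{API_URL}/api/folders/{folder['folder_id']}/analyses/",
    headers=headers,
    json={"analyses": [{"type": "quality"}]},
).json()
analysis_id = analyses[0]["analyse_id"]  # verify the exact field name in Postman

# 5. Poll once a second until the analysis is finished
while True:
    result = requests.get(f"{API_URL}/api/analyses/{analysis_id}", headers=headers).json()
    if result["state"] != "PROCESSING":
        break
    time.sleep(1)

print(result["resolution"])  # SUCCESS or DECLINED
print(result["results_media"][0]["results_data"]["confidence_spoofing"])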

With these steps completed, you are done with the Liveness check via Oz API. You will be able to access your media and analysis results in Web UI via browser or programmatically via API. Oz API methods can be combined with great flexibility; explore Oz API using the API Developer Guide.

Server-Based Liveness

In this section, we list the guides for the server-based liveness check integrations:

  • How to Integrate Server-Based Liveness into Your Web Application

  • How to Integrate Server-Based Liveness into Your Mobile Application

  • How to Check Your Media for Liveness without Oz Front End

Hybrid Liveness

Since version 8.0.0 of the Android and iOS SDKs, we have offered the hybrid Liveness analysis mode. It is a combination of the on-device and server-based analyses that brings together the benefits of the two modes: if the on-device analysis is uncertain about the presence of a real human, the system initiates the server-based analysis; otherwise, no additional analyses are done.

Benefits of the Hybrid Liveness

  • You need less computational capacity: in the majority of cases, there's no need to apply the server-side analyses, and fewer requests are sent back and forth. On Android, only 8–9% of analyses require an additional server check; on iOS, it's 4.5–5.5%.

  • The accuracy is similar to that of the server-based analysis: if the on-device analysis result is uncertain, the server analysis is launched. We offer the hybrid analysis as one of the default analysis modes, but you can also implement your own hybrid logic by combining the server-based and on-device analyses in your code.

  • Since the 8.3.0 release, you get the analysis result faster, and less data will be transmitted (by up to 10 times): the on-device analysis is enough in the majority of cases so that you don’t need to upload the full video to analyze it on the server, and, therefore, don’t send or receive the additional data. The customer journey gets shorter.

How to Enable Hybrid Liveness

The hybrid analysis has been available in our native (mobile) SDKs since 8.0.0. As mentioned before, the server-based analysis is launched in a minority of cases: if the analysis on the device has finished with a certain answer, there's no need for a second check. This results in fewer server resources being involved.

To enable hybrid mode on Android, when you launch the analysis, set Mode to HYBRID. To do the same on iOS, you need to set mode to hybrid. That's all, as easy as falling off a log.

If you have any questions left, we'll be happy to answer them.

Integration Quick Start Guides

This section contains the most common scenarios for integrating the Oz Forensics Liveness and Face Biometry system.

The scenarios can be combined together, for example, integrating liveness into both web and mobile applications or integrating liveness with face matching.

Server-Based Liveness:

  • How to Integrate Server-Based Liveness into Your Web Application

  • How to Integrate Server-Based Liveness into Your Mobile Application

  • How to Check Your Media for Liveness without Oz Front End

On-Device Liveness:

  • How to Integrate On-Device Liveness into Your Mobile Application

Face Matching:

  • How to Add Face Matching of Liveness Video with a Reference Photo From Your Database

  • How to Add Photo ID Capture and Face Matching to Your Web or Mobile Application


How to Integrate Server-Based Liveness into Your Mobile Application

This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and subsequently analyzing them on the server.

The SDK implements a ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results. The SDK methods for liveness analysis communicate with Oz API under the hood.

Before you begin, make sure you have Oz API credentials. When using the SaaS API, you get them from us:

Login: j.doe@yourcompany.com

Password: …

API: https://sandbox.ohio.ozforensics.com

Web Console: https://sandbox.ohio.ozforensics.com

For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the instructions here. Consider the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.

Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue the 1-month trial license on our website or email us for a long-term license.

We also recommend that you use our logging service called telemetry, as it helps a lot in investigating attacks' details. For Oz API users, the service is enabled by default. For on-premise installations, we'll provide you with credentials.

Android

1. Add SDK to your project

In the build.gradle of your project, add:

allprojects {
    repositories {
        maven { url "https://ozforensics.jfrog.io/artifactory/main" }
    }
}

In the build.gradle of the module, add:

dependencies {
    implementation 'com.ozforensics.liveness:full:<version>'
    // You can find the version needed in the Android changelog
}

2. Initialize SDK

Rename the license file to forensics.license and place it into the project's res/raw folder.

Kotlin:

OzLivenessSDK.init(
    context,
    listOf(LicenseSource.LicenseAssetId(R.raw.forensics))
)
Java:

OzLivenessSDK.INSTANCE.init(
        context,
        Collections.singletonList(new LicenseSource.LicenseAssetId(R.raw.forensics)),
        null
);

3. Connect SDK to Oz API

Use API credentials (login, password, and API URL) that you’ve got from us.

Kotlin:

OzLivenessSDK.setApiConnection(
    OzConnection.fromCredentials(host, username, password),
    statusListener(
        { token -> /* token */ },
        { ex -> /* error */ }
    )
)
Java:

OzLivenessSDK.INSTANCE.setApiConnection(
        OzConnection.Companion.fromCredentials(host, username, password),
        new StatusListener<String>() {
            @Override
            public void onStatusChanged(@Nullable String s) {}
            @Override
            public void onSuccess(String token) { /* token */ }
            @Override
            public void onError(@NonNull OzException e) { /* error */ }
        }
);
In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your back end with the auth API method, then pass it to your application:

Kotlin:

OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(host, token))
Java:

OzLivenessSDK.INSTANCE.setApiConnection(
        OzConnection.Companion.fromServiceToken(host, token), 
        null
);

4. Add face recording

To start recording, use startActivityForResult:

Kotlin:

val OZ_LIVENESS_REQUEST_CODE = 1
val intent = OzLivenessSDK.createStartIntent(listOf(OzAction.Blank))
startActivityForResult(intent, OZ_LIVENESS_REQUEST_CODE)

Java:

int OZ_LIVENESS_REQUEST_CODE = 1;
Intent intent = OzLivenessSDK.INSTANCE.createStartIntent(Collections.singletonList(OzAction.Blank));
startActivityForResult(intent, OZ_LIVENESS_REQUEST_CODE);

To obtain the captured video, use onActivityResult:

Kotlin:

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
        if (requestCode == OZ_LIVENESS_REQUEST_CODE) {
            val sdkMediaResult = OzLivenessSDK.getResultFromIntent(data)
            val sdkErrorString = OzLivenessSDK.getErrorFromIntent(data)
            if (!sdkMediaResult.isNullOrEmpty()) {
                analyzeMedia(sdkMediaResult)
            } else println(sdkErrorString)
        }
    }
Java:

protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == OZ_LIVENESS_REQUEST_CODE) {
        List<OzAbstractMedia> sdkMediaResult = OzLivenessSDK.INSTANCE.getResultFromIntent(data);
        String sdkErrorString = OzLivenessSDK.INSTANCE.getErrorFromIntent(data);
        if (sdkMediaResult != null && !sdkMediaResult.isEmpty()) {
            analyzeMedia(sdkMediaResult);
        } else System.out.println(sdkErrorString);
    }
}

The sdkMediaResult object contains the captured videos.

5. Run analyses

To run the analyses, execute the code below. Mind that mediaList is an array of objects that were captured (sdkMediaResult) or otherwise created (media you captured on your own).

Kotlin:

private fun analyzeMedia(mediaList: List<OzAbstractMedia>) {
    AnalysisRequest.Builder()
        .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaList))
        .build()
        .run(object : AnalysisRequest.AnalysisListener {
            override fun onSuccess(result: List<OzAnalysisResult>) {
                result.forEach { 
                    println(it.resolution.name)
                    println(it.folderId)
                }
            }
            override fun onError(error: OzException) {
                error.printStackTrace()
            }
        })
} 
Java:

private void analyzeMedia(List<OzAbstractMedia> mediaList) {
    new AnalysisRequest.Builder()
            .addAnalysis(new Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaList, Collections.emptyMap()))
            .build()
            .run(new AnalysisRequest.AnalysisListener() {
                @Override public void onStatusChange(@NonNull AnalysisRequest.AnalysisStatus analysisStatus) {}
                @Override
                public void onSuccess(@NonNull List<OzAnalysisResult> list) {
                    for (OzAnalysisResult result: list) {
                        System.out.println(result.getResolution().name());
                        System.out.println(result.getFolderId());
                    }
                }
                @Override
                public void onError(@NonNull OzException e) { e.printStackTrace(); }
    });
}

iOS

1. Add our SDK to your project

Install OZLivenessSDK via CocoaPods. To integrate OZLivenessSDK into an Xcode project, add to your Podfile:

pod 'OZLivenessSDK', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios', :tag => '<version>' // You can find the version needed in the iOS changelog

2. Initialize SDK

Rename the license file to forensics.license and put it into the project.

OZSDK(licenseSources: [.licenseFileName("forensics.license")]) { licenseData, error in
    if let error = error {
        print(error.errorDescription)
    }
}

3. Connect SDK to Oz API

Use API credentials (login, password, and API URL) that you’ve got from us.

OZSDK.setApiConnection(Connection.fromCredentials(host: "https://sandbox.ohio.ozforensics.com", login: login, password: p)) { (token, error) in
    // Your code to handle error or token
}
In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your back end using the auth API method, then pass it to your application:

OZSDK.setApiConnection(Connection.fromServiceToken(host: "https://sandbox.ohio.ozforensics.com", token: token)) { (token, error) in
}

4. Add face recording

Create a controller that will capture videos as follows:

let actions: [OZVerificationMovement] = [.selfie]
let ozLivenessVC: UIViewController = OZSDK.createVerificationVCWithDelegate(delegate, actions: actions) 
self.present(ozLivenessVC, animated: true)

The delegate object must implement the OZLivenessDelegate protocol.

5. Run analyses

Use AnalysisRequestBuilder to initiate the Liveness analysis. The communication with Oz API is under the hood of the run method.

let analysisRequest = AnalysisRequestBuilder()
let analysis = Analysis.init(
    media: mediaToAnalyze,
    type: .quality,
    mode: .serverBased)
analysisRequest.uploadMedia(mediaToAnalyze)
analysisRequest.addAnalysis(analysis)
analysisRequest.run(
    scenarioStateHandler: { state in }, // scenario steps progress handler
    uploadProgressHandler: { (progress) in } // file upload progress handler
) { (analysisResults : [OzAnalysisResult], error) in 
    // receive and handle analyses results here 
    for result in analysisResults {
        print(result.resolution)
        print(result.folderID)
    }
}

With these steps, you are done with basic integration of Mobile SDKs. You will be able to access recorded media and analysis results in Web Console via browser or programmatically via API.

In the developer guides, you can also find instructions for customizing the SDK look-and-feel and the full list of our Mobile SDK methods, along with sample code and demo apps:

  • Android: Android OzLiveness SDK (developer guide), Android source codes, demo app in PlayMarket

  • iOS: iOS OzLiveness SDK (developer guide), iOS source codes, demo app in TestFlight

Oz API methods as well as Mobile and Web SDK methods can be combined with great flexibility. Explore the options available in the Developer Guide section.

On-Device Liveness

In this section, there's a guide for the integration of the on-device liveness check: How to Integrate On-Device Liveness into Your Mobile Application.

Oz Licensing Options

For commercial use of Oz Forensics products, a license is required. The license is time-limited and defines the software access parameters based on the terms of your agreement.

Once you initialize a Mobile SDK, run Web Plugin, or use Oz BIO, the system checks whether your license is valid. The check runs in the background and has minimal impact on the user experience.

The license is required for:

  • Mobile SDKs for iOS and Android,

  • Web SDK, which consists of Web Adapter and Web Plugin,

  • Oz BIO, which is needed for server analyses and is installed for the On-Premise model of use.

For each of the components, you need a separate license that is bound to this component. Thus, if you use all three components, three licenses are required.

Native SDKs (iOS and Android)

To issue a license for mobile SDK, we require your bundle (application) ID. There are two types of licenses for iOS and Android SDKs: online and offline. Any license type can be applied to any analysis mode: on-device, server-based, or hybrid.

Online License

As its name suggests, an online license requires a stable connection. Once you initialize our SDK with this license, it connects to our license server and retrieves information about license parameters, including counters of transactions or devices, where:

  • Transaction: increments each time you start a video capture.

  • Device: increments when our SDK is installed on a new device.

The online license can be transaction-limited, device-limited, or both, according to your agreement.

The main advantages of the online license are:

  • You don’t need to update your application after the license renewal,

  • And if you want to add a new bundle ID to the license, there’s also no need to re-issue it. Everything is done on the fly.

The data exchange for the online license is quick, so your users will experience almost no delay compared to using the offline license.

Please note that even though on-device analyses don’t need the Internet themselves, you still require a connection for license verification.

Online license is the default option for Mobile SDKs. If you require the offline license, please inform your manager.

Offline License

Offline license is a type of license that can work without Internet. All license parameters are set in the license file, and you just need to add the file to your project. This license type doesn’t have any restrictions on transactions or devices.

The main benefit of the offline license is its autonomy: it functions without a network connection. However, when your license expires and you add a new one, you’ll need to release a new version of your application in Google Play and the App Store; otherwise, the SDK won’t function.

How to add a license to mobile SDK: Android, iOS.

Web SDK

The Web SDK license is similar to the mobile SDK offline license. It can function without a network connection, and the license file contains all the necessary parameters, such as the expiration date. The Web SDK license also has no restrictions on transactions or devices.

The difference between the Mobile SDK offline license and the Web SDK license is that you don’t need to release a new application version when the Web SDK license is renewed. The license is bound to the URLs of your domains and/or subdomains. To add the license to your SDK instance, place it in the Web SDK container as described here. In rare cases, it is also possible to add a license via Web Plugin.

On-Premise (Oz BIO or Server License)

For on-premise installations, we offer a dedicated license with a limitation on activations, with each activation representing a separate Oz BIO seat. This license can be online or offline, depending on whether your Oz BIO servers have internet access. The online license is verified through our license server, while for offline licenses, we assist you in setting up an offline license server within your infrastructure and activating the license.

Trial

For test integration purposes, we provide a free trial license that is sufficient for initial use, such as testing with your datasets to check analysis accuracy. For Mobile SDKs, you can generate a one-month license yourself on our website: click here. If you would like to integrate with your web application, please contact us to obtain a license, and we will also assist you in configuring your dedicated instance of our Web SDK. With the license, you will receive credentials to access our services.

Once you're ready to move to commercial use, a new production license will be issued. We’ll provide you with new production credentials and assist you with integration and configuration. Our engineers are always available to help.

Our software offers flexible licensing options to meet your specific needs. Whether you prioritize seamless updates or prefer autonomous operation, we have a solution tailored for you. If you have any questions, please contact us.

Developer Guide

In this section, you will find the descriptions of both the API and SDK components of the Oz Forensics Liveness and face biometric system. API is the back-end component of the system; it is needed for all the system modules to interact with each other. SDK is the front-end component that is used to:

1) take videos or images which are then processed via API,

2) display results.

We provide two versions of API.

With the full version, we provide you with all the functionality of Oz API.

The Lite version is a simple and lightweight version with only the necessary functions included.

The SDK component consists of web SDK and mobile SDK.

Web SDK consists of a plugin that you can embed into your website page and the adapter for this plugin.

Mobile SDK is the SDK for iOS and Android.

How to Add Photo ID Capture and Face Matching to Your Web or Mobile Application

Please note that the Oz Liveness Mobile SDK does not include a user interface for scanning official documents. You may need to explore alternative SDKs that offer that functionality or implement it on your own. Web SDK does include a simple photo ID capture screen.

This guide describes the steps needed to add face matching to your liveness check.

By this time, you should have already implemented liveness video recording and the liveness check. If not, please refer to these guides:

  • Integration of Oz Liveness Web SDK

  • Integration of Oz Liveness Mobile SDK

Adding Photo ID Capture Step to Web SDK

Simply add photo_id_front to the list of actions for the plugin, e.g.,

OzLiveness.open({
  lang: 'en',
  action: [
    'photo_id_front', 
    'video_selfie_blank'
  ],
  ...
});

Adding Face Matching to Android SDK

For the purpose of this guide, it is assumed that your reference photo (e.g., front side of an ID) is stored on the device as reference.jpg.

Modify the code that runs the analysis as follows:

Kotlin:

private fun analyzeMedia(mediaList: List<OzAbstractMedia>) {

    val refFile = File(context.filesDir, "reference.jpg")
    val refMedia = OzAbstractMedia.OzDocumentPhoto(
        OzMediaTag.PhotoIdFront , // OzMediaTag.PhotoSelfie for a non-ID photo
        refFile.absolutePath
    )

    AnalysisRequest.Builder()
        .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaList))
        .addAnalysis(Analysis(Analysis.Type.BIOMETRY, Analysis.Mode.SERVER_BASED, mediaList + refMedia))
        .build()
        .run(object : AnalysisRequest.AnalysisListener {
            override fun onSuccess(result: List<OzAnalysisResult>) {
                result.forEach { 
                    println(it.resolution.name)
                    println(it.folderId)
                }
            }
            override fun onError(error: OzException) {
                error.printStackTrace()
            }
        })
} 
Java:

private void analyzeMedia(List<OzAbstractMedia> mediaList) {
    File refFile = new File(context.getFilesDir(), "reference.jpg");
    OzAbstractMedia refMedia = new OzAbstractMedia.OzDocumentPhoto(
            OzMediaTag.PhotoIdFront , // OzMediaTag.PhotoSelfie for a non-ID photo
            refFile.getAbsolutePath()
    );
    ArrayList<OzAbstractMedia> mediaWithReferencePhoto = new ArrayList<>(mediaList);
    mediaWithReferencePhoto.add(refMedia);
    new AnalysisRequest.Builder()
            .addAnalysis(new Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaList, Collections.emptyMap()))
            .addAnalysis(new Analysis(Analysis.Type.BIOMETRY, Analysis.Mode.SERVER_BASED, mediaWithReferencePhoto, Collections.emptyMap()))
            .build()
            .run(new AnalysisRequest.AnalysisListener() {
                @Override public void onStatusChange(@NonNull AnalysisRequest.AnalysisStatus analysisStatus) {}
                @Override
                public void onSuccess(@NonNull List<OzAnalysisResult> list) {
                    String folderId = list.get(0).getFolderId();
                }
                @Override
                public void onError(@NonNull OzException e) { e.printStackTrace(); }
    });
}

For on-device analyses, you can change the analysis mode from Analysis.Mode.SERVER_BASED to Analysis.Mode.ON_DEVICE.

Adding Face Matching to iOS SDK

For the purpose of this guide, it is assumed that your reference photo (e.g., front side of an ID) is stored on the device as reference.jpg.

Modify the code that runs the analysis as follows:

let imageURL = URL(fileURLWithPath: NSTemporaryDirectory())
    .appendingPathComponent("reference.jpg")

let refMedia = OZMedia.init(movement: .selfie,
                   mediaType: .movement,
                   metaData: nil,
                   videoURL: nil,
                   bestShotURL: imageURL, // must match the constant defined above
                   preferredMediaURL: nil,
                   timestamp: Date())
   
var mediaBiometry = [OZMedia]()
mediaBiometry.append(refMedia)
mediaBiometry.append(contentsOf: mediaToAnalyze)
let analysisRequest = AnalysisRequestBuilder()
let analysisBiometry = Analysis.init(media: mediaBiometry, type: .biometry, mode: .serverBased)
let analysisQuality = Analysis.init(media: mediaToAnalyze, type: .quality, mode: .serverBased)
analysisRequest.addAnalysis(analysisBiometry)
analysisRequest.addAnalysis(analysisQuality)
analysisRequest.uploadMedia(mediaBiometry)
analysisRequest.run(
    scenarioStateHandler: { state in }, // scenario steps progress handler
    uploadProgressHandler: { (progress) in } // file upload progress handler
) { (analysisResults : [OzAnalysisResult], error) in
    // receive and handle analyses results here
    for result in analysisResults {
        print(result.resolution)
        print(result.folderID)
    }
}

For on-device analyses, you can change the analysis mode from mode: .serverBased to mode: .onDevice.

Final notes for all SDKs

You will be able to access your media and analysis results in Web UI via browser or programmatically via API.

Face Matching

In this section, we list the guides for the face matching checks:

  • How to Add Face Matching of Liveness Video with a Reference Photo From Your Database

  • How to Add Photo ID Capture and Face Matching to Your Web or Mobile Application

How to Add Face Matching of Liveness Video with a Reference Photo From Your Database

This guide describes how to match a liveness video with a reference photo of a person that is already stored in your database.

By this time, you should have already implemented liveness video recording and the liveness check. If not, please refer to these guides:

  • Integration of Oz Liveness Web SDK

  • Integration of Oz Liveness Mobile SDK

In this scenario, you upload your reference image to the same folder where you have a liveness video, initiate the BIOMETRY analysis, and poll for the results. However, if you prefer to include a photo ID capture step in your liveness process instead of using a stored photo, refer to another guide in this section.

1. Get folder_id

Given that you already have the liveness video recorded and uploaded, you will be working with the same Oz API folder where your liveness video is. Obtain the folder ID as described below, and pass it to your back end.

  • For a video recorded by Android or iOS SDK, retrieve the folder_id from the analysis’ results as shown below:

Android:

AnalysisRequest.Builder()
        ...
        .run(object : AnalysisRequest.AnalysisListener {
            override fun onSuccess(result: List<OzAnalysisResult>) {
                // save folder_id that is needed for the next step
                val folderId = result.firstOrNull()?.folderId
            }
            ...
        })
Java:

private void analyzeMedia(List<OzAbstractMedia> mediaList) {
    new AnalysisRequest.Builder()
            ...
            .run(new AnalysisRequest.AnalysisListener() {
                @Override public void onStatusChange(@NonNull AnalysisRequest.AnalysisStatus analysisStatus) {}
                @Override
                public void onSuccess(@NonNull List<OzAnalysisResult> list) {
                    String folderId = list.get(0).getFolderId();
                    }
                }
                ...
    });
}

iOS:

analysisRequest.run(
scenarioStateHandler: { state in }, 
uploadProgressHandler: { (progress) in }  
)   { (analysisResults : [OzAnalysisResult], error) in 
        // save folder_id that is needed for the next step
        let folderID = analysisResults.first?.folderID
    }

  • For a video recorded by Web SDK, get the folder_id as described here.

2. Upload your reference photo

Call the POST /api/folders/{{folder_id}}/media/ method, replacing folder_id with the ID you’ve got at the previous step. This will upload your new media to the folder where your ready-made liveness video is located. Set the appropriate tags in the payload field of the request, depending on the nature of the reference photo that you have:

{
  "media:tags": { 
    "photo1": [
        "photo_id", "photo_id_front" // for the front side of an ID
        // OR
        "photo_selfie" // for a non-ID photo
    ]
  }
}

3. Initiate the analysis

To launch the analysis, call POST /api/folders/{{folder_id}}/analyses/ with the folder_id from the previous step. In the request body, specify the biometry check to be launched:

{
    "analyses": [
        {
            "type": "biometry"
        }
    ]
}

4. Poll for the results

Repeat calling GET /api/analyses/{{analysis_id}} with the analysis_id from the previous step once a second until the state changes from PROCESSING to something else. For a finished analysis:

  • get the qualitative result from resolution (SUCCESS or DECLINED).

  • get the quantitative results from analyses.results_data.min_confidence.

Here is the Postman collection for this guide.
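For illustration, steps 2–3 might look like this in Python: a sketch under the same assumptions as the liveness guide (placeholder host and file name; folder_id and headers already obtained; field names to be verified against the Postman collection).

import requests

API_URL = "https://sandbox.ohio.ozforensics.com"  # placeholder host

# 2. Upload the reference photo to the folder with the liveness video
payload = '{"media:tags": {"photo1": ["photo_id", "photo_id_front"]}}'
with open("reference.jpg", "rb") as photo:
    requests.post(
        f"{API_URL}/api/folders/{folder_id}/media/",
        headers=headers,
        files={"photo1": photo},
        data={"payload": payload},
    )

# 3. Initiate the biometry analysis on the folder
analysis = requests.post(
    f"{API_URL}/api/folders/{folder_id}/analyses/",
    headers=headers,
    json={"analyses": [{"type": "biometry"}]},
).json()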

With these steps completed, you are done with adding face matching via Oz API. You will be able to access your media and analysis results in Web UI via browser or programmatically via API. Oz API methods can be combined with great flexibility; explore Oz API using the API Developer Guide.

Authentication

Getting an Access Token

To get an access token, call POST /api/authorize/auth/ with the credentials you've got from us (email and password) in the request body; the host address should be the API address (which you've also got from us):

{
    "credentials": {
        "email": "{{user_email}}", // your login
        "password": "{{user_password}}" // your password
    }
}

The successful response will return a pair of tokens: access_token and expire_token.

access_token is a key that grants you access to system resources. To access a resource, you need to add your access_token to the header:

headers = {'X-Forensic-Access-Token': <access_token>}

access_token is time-limited, the limits depend on the account type.

  • service accounts – OZ_SESSION_LONGLIVE_TTL (5 years by default),

  • other accounts – OZ_SESSION_TTL (15 minutes by default).

expire_token is the token you can use to renew your access token if necessary.

Automatic session extension

If the value of expire_date is greater than the current date, the expire_date of the current session is set to the current date plus the time period defined as shown above (depending on the account type).

Token Renewal

To renew access_token and expire_token, call POST /api/authorize/refresh/. Add expire_token to the request body and X-Forensic-Access-Token to the header.

In case of success, you'll receive a new pair of access_token and expire_token. The "old" pair will be deleted upon the first authentication with the renewed tokens.
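An illustrative renewal sketch in Python (the endpoint and header come from this section; the host is a placeholder, and the name of the request body field is an assumption):

import requests

API_URL = "https://sandbox.ohio.ozforensics.com"  # placeholder host

response = requests.post(
    f"{API_URL}/api/authorize/refresh/",
    headers={"X-Forensic-Access-Token": access_token},
    json={"expire_token": expire_token},  # assumed field name
)
tokens = response.json()
access_token = tokens["access_token"]
expire_token = tokens["expire_token"]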

Errors

How to Integrate On-Device Liveness into Your Mobile Application

This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and performing on-device liveness checks without sending any data to a server.

The SDK implements the ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results.

Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue the 1-month trial license on our website or email us for a long-term license.

Android

1. Add SDK to your project

Add the SDK to your project as described in How to Integrate Server-Based Liveness into Your Mobile Application: add the Oz repository to the build.gradle of your project and the liveness dependency to the build.gradle of the module.

2. Initialize SDK

Rename the license file to forensics.license and place it into the project's res/raw folder.

3. Add face recording

To start recording, use startActivityForResult; to obtain the captured video, use onActivityResult, as shown in the server-based integration guide. The sdkMediaResult object contains the captured videos.

4. Run analyses

To run the analyses, execute the same code as in the server-based integration guide, changing the analysis mode from Analysis.Mode.SERVER_BASED to Analysis.Mode.ON_DEVICE. Mind that mediaList is an array of objects that were captured (sdkMediaResult) or otherwise created (media you captured on your own). Check also the Android source code.

iOS

1. Add our SDK to your project

Install OZLivenessSDK via CocoaPods, adding the pod to your Podfile as described in the server-based integration guide.

2. Initialize SDK

Rename the license file to forensics.license and put it into the project.

3. Add face recording

Create a controller that will capture videos, and make the delegate object implement the OZLivenessDelegate protocol, as shown in the server-based integration guide.

4. Run analyses

Use AnalysisRequestBuilder to initiate the Liveness analysis as in the server-based integration guide, changing the analysis mode from .serverBased to .onDevice. Check also the iOS source code.

With these steps, you are done with basic integration of Mobile SDKs. The data from the on-device analysis is not transferred anywhere, so please bear in mind you cannot access it via API or Web console. However, the internet is still required to check the license. Additionally, we recommend that you use our logging service called telemetry, as it helps a lot in investigating attacks' details. We'll provide you with credentials.

Best Shot

The "Best shot" algorithm is intended to choose the highest-quality, best-tuned frame with a face from a video recording. The algorithm works as a part of the Liveness analysis, so here we describe only the best shot part.

Please note: historically, some instances are configured to allow Best Shot only for certain gestures.

Processing steps

1. Initiate the analysis similar to Liveness, but make sure that extract_best_shot is set to true in the analysis parameters. If you want to use a webhook for the response, add it to the payload at this step, as described here.

2. Check and interpret the results in the same way as for the pure Liveness analysis.

3. The URL to the best shot is located in the results_media -> output_images -> original_url field of the response.

Working with Oz System: Basic Scenarios

In this section, you'll learn how to perform analyses and where to get the numeric results.

Liveness is checking that a person on a video is a real living person.

Biometry compares two or more faces from different media files and shows whether the faces belong to the same person or not.

Best shot is an addition to the Liveness check. The system chooses the best frame from a video and saves it as a picture for later use.

Blacklist checks whether a face on a photo or a video matches with one of the faces in the pre-created database.

The Quantitative Results section explains where and how to find the numeric results of analyses. To learn what each analysis means, please refer to this article.

In API 6.0, we've implemented new analysis modes.

Uploading Media

To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked by tags: they describe what's pictured in a media file and determine the applicable analyses.

To create a folder and upload media to it, call POST /api/folders/

To add files to the existing folder, call POST /api/folders/{{folder_id}}/media/

Add the files to the request body; tags should be specified in the payload.

For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into a ZIP archive and apply the video-related tags.
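Here's an example of the payload for a passive Liveness video and an ID front side photo (an illustrative combination of the tags shown elsewhere in this documentation):

{
    "media:tags": {
        "video1": [
            "video_selfie",
            "video_selfie_blank",
            "orientation_portrait"
        ],
        "photo1": [
            "photo_id",
            "photo_id_front"
        ]
    }
}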


The successful response will return the folder data.

Web Console
API
getting analysis results

Before you begin, make sure you have Oz API credentials. When using SaaS API, you get them :

For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the . Consider the proper user role (CLIENT in most cases or CLIENT ADMIN, if you are going to make SDK work with the pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.

Oz Liveness Mobile SDK requires a license. License is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue the 1-month trial license or for a long-term license.

In production, instead of hard-coding login and password in the application, it is recommended to get access token on your backend with API method then pass it to your application:

Install OZLivenessSDK via . To integrate OZLivenessSDK into an Xcode project, add to Podfile:

In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your back end using the API method, then pass it to your application:

Android source codes

iOS source codes

Android OzLiveness SDK

iOS OzLiveness SDK

in PlayMarket

in TestFlight

Oz BIO, which is needed for server analyses and is installed for .

The license is bound to URLs of your domains and/or subdomains. To add the license to your SDK instance, you need to place it to the Web SDK container as described . In rare cases, it is also possible to .

For on-premise installations, we offer a dedicated license with a limitation on activations, with each activation representing a separate Oz BIO seat. This license can be online or offline, depending on whether your Oz BIO servers have internet access. The online license is verified through our license server, while for offline licenses, we assist you in within your infrastructure and activating the license.

For test integration purposes, we provide a free trial license that is sufficient for initial use, such as testing with your datasets to check analysis accuracy. For Mobile SDKs, you can generate a one-month license yourself on our website: click here. If you would like to integrate with your web application, please contact us to obtain a license, and we will also assist you in configuring your dedicated instance of our Web SDK. With the license, you will receive credentials to access our services.

Check also the Android source code.

Check also the iOS source code.

Oz API methods as well as Mobile and Web SDK methods can be combined with great flexibility. Explore the options available in the Developer Guide section.

However, if you prefer to include a photo ID capture step in your liveness process instead of using a stored photo, then you can refer to another guide in this section.

For a video recorded by Web SDK, get the folder_id as described here.

Call the POST /api/folders/{{folder_id}}/media/ method, replacing the folder_id with the ID you've got in the previous step. This will upload your new media to the folder where your ready-made liveness video is located.

To launch the analysis, call POST /api/folders/{{folder_id}}/analyses/ with the folder_id from the previous step. In the request body, specify the biometry check to be launched, as shown in the sketch below.
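For reference, a minimal request body for this step (the same structure is used in the Biometry guide later in this document; source_media is optional and can be omitted to include all media from the folder):

{
  "analyses": [{
    "type": "biometry"
  }]
}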

Repeat GET /api/analyses/{{analysis_id}} with the analysis_id from the previous step once a second until the state changes from PROCESSING to something else. For a finished analysis, check resolution_status: SUCCESS means the faces match, DECLINED means they don't (see the response sketch below).
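A trimmed sketch of a finished analysis response, based on the Biometry example later in this document:

{
  "analysis_id": "1111aaaa-11aa-11aa-11aa-111111aaaaaa",
  "type": "BIOMETRY",
  "state": "FINISHED",
  "resolution_status": "SUCCESS",
  "results_media": [{
    "results_data": {
      "min_confidence": 0.997926354 // similarity score for the least similar pair
    }
  }]
}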

Oz API methods can be combined with great flexibility. Explore Oz API using the API Developer Guide.

Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue the 1-month trial license on our website or email us for a long-term license.

Install OZLivenessSDK via CocoaPods. To integrate OZLivenessSDK into an Xcode project, add the following to your Podfile:

The "Best shot" algorithm is intended to choose the most high-quality and well-tuned frame with a face from a video record. This algorithm works as a part of the analysis, so here, we describe only the best shot part.

1. Initiate the analysis as described here, but make sure that extract_best_shot is set to true as shown below:

If you want to use a webhook for the response, add it to the payload at this step, as described here.

2. Check and interpret the results in the same way as for the pure Liveness analysis.

To learn what each analysis means, please refer to Types of Analyses.

To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked by tags: they describe what's pictured in a media file and determine the applicable analyses.

For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into a ZIP archive and apply the tags as described in this guide.

{
	"credentials": {
		"email": "{{user_email}}", // your login
		"password": "{{user_password}}" // your password
	}
}
{
    "expire_token": "{{expire_token}}"
}

Error code | Error message | What caused the error
400 | Could not locate field for key_path expire_token from provided dict data | expire_token hasn't been found in the request body.
401 | Session not found | The session with the expire_token you have passed doesn't exist.
403 | You have not access to refresh this session | The user making the request is not the owner of this expire_token session.

allprojects {
    repositories {
        maven { url "https://ozforensics.jfrog.io/artifactory/main" }
    }
}
dependencies {
    implementation 'com.ozforensics.liveness:full:<version>'
    // You can find the version needed in the Android changelog
}
OzLivenessSDK.init(
    context,
    listOf(LicenseSource.LicenseAssetId(R.raw.forensics))
)
OzLivenessSDK.INSTANCE.init(
        context,
        Collections.singletonList(new LicenseSource.LicenseAssetId(R.raw.forensics)),
        null
);
val OZ_LIVENESS_REQUEST_CODE = 1
val intent = OzLivenessSDK.createStartIntent(listOf(OzAction.Blank))
startActivityForResult(intent, OZ_LIVENESS_REQUEST_CODE)
int OZ_LIVENESS_REQUEST_CODE = 1;
Intent intent = OzLivenessSDK.INSTANCE.createStartIntent(Collections.singletonList(OzAction.Blank));
startActivityForResult(intent, OZ_LIVENESS_REQUEST_CODE);
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == OZ_LIVENESS_REQUEST_CODE) {
        val sdkMediaResult = OzLivenessSDK.getResultFromIntent(data)
        val sdkErrorString = OzLivenessSDK.getErrorFromIntent(data)
        if (!sdkMediaResult.isNullOrEmpty()) {
            analyzeMedia(sdkMediaResult)
        } else println(sdkErrorString)
    }
}
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == OZ_LIVENESS_REQUEST_CODE) {
        List<OzAbstractMedia> sdkMediaResult = OzLivenessSDK.INSTANCE.getResultFromIntent(data);
        String sdkErrorString = OzLivenessSDK.INSTANCE.getErrorFromIntent(data);
        if (sdkMediaResult != null && !sdkMediaResult.isEmpty()) {
            analyzeMedia(sdkMediaResult);
        } else System.out.println(sdkErrorString);
    }
}
private fun analyzeMedia(mediaList: List<OzAbstractMedia>) {
    AnalysisRequest.Builder()
        .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.ON_DEVICE, mediaList))
        .build()
        .run(object : AnalysisRequest.AnalysisListener {
            override fun onSuccess(result: List<OzAnalysisResult>) {
                result.forEach { 
                    println(it.resolution.name)
                    println(it.folderId)
                }
            }
            override fun onError(error: OzException) {
                error.printStackTrace()
            }
        })
} 
private void analyzeMedia(List<OzAbstractMedia> mediaList) {
    new AnalysisRequest.Builder()
            .addAnalysis(new Analysis(Analysis.Type.QUALITY, Analysis.Mode.ON_DEVICE, mediaList, Collections.emptyMap()))
            .build()
            .run(new AnalysisRequest.AnalysisListener() {
                @Override public void onStatusChange(@NonNull AnalysisRequest.AnalysisStatus analysisStatus) {}
                @Override
                public void onSuccess(@NonNull List<OzAnalysisResult> list) {
                    for (OzAnalysisResult result: list) {
                        System.out.println(result.getResolution().name());
                        System.out.println(result.getFolderId());
                    }
                }
                @Override
                public void onError(@NonNull OzException e) { e.printStackTrace(); }
    });
}
pod 'OZLivenessSDK', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios', :tag => '<version>' # You can find the version needed in the iOS changelog
OZSDK(licenseSources: [.licenseFileName("forensics.license")]) { licenseData, error in
    if let error = error {
        print(error.errorDescription)
    }
}
let actions: [OZVerificationMovement] = [.selfie]
let ozLivenessVC: UIViewController = OZSDK.createVerificationVCWithDelegate(delegate, actions: actions) 
self.present(ozLivenessVC, animated: true)
let analysisRequest = AnalysisRequestBuilder()
let analysis = Analysis.init(
    media: mediaToAnalyze,
    type: .quality,
    mode: .onDevice)
analysisRequest.uploadMedia(mediaToAnalyze)
analysisRequest.addAnalysis(analysis)
analysisRequest.run(
    scenarioStateHandler: { state in }, // scenario steps progress handler
    uploadProgressHandler: { (progress) in } // file upload progress handler
) { (analysisResults: [OzAnalysisResult], error) in
    // receive and handle analyses results here
    for result in analysisResults {
        print(result.resolution)
        print(result.folderID)
    }
}
request body
{
  "analyses": [{
    "type": "quality",
    "source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // // optional; omit to include all media from the folder
    "params" : {
      "extract_best_shot": true // the mandatory part for the best shot analysis
    }
  }]
}
payload
{
    "media:tags": { // this section sets the tags for the media files that you upload
    // media files are referenced by the keys in a multipart form
        "video1": [ // your file key
        // a typical set of tags for a passive Liveness video
            "video_selfie", // video of a person
            "video_selfie_blank", // no  used
            "orientation_portrait" // video orientation
        ],
        "photo1": [
        // a typical set of tags for an ID front side
            "photo_id",
            "photo_id_front"
        ]
    }
}

Liveness

The Liveness detection algorithm is intended to detect a real living person in a media.

Requirements

Processing Steps

1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/

request body
{  
  "analyses": [
    {
      "type": "quality",
      "source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // optional; omit to include all media from the folder
      ...
    }
  ]
}

You'll need analysis_id or folder_id from the response.

2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:

  • GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.

  • GET api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the folder_id you have from the previous step.

Repeat the check until the resolution_status and resolution fields change to anything other than PROCESSING, and treat this as the result.

For the Liveness Analysis, seek the confidence_spoofing value related to the video you need. It indicates a chance that a person is not a real one.

[
  {
    // you may have multiple analyses in the list
    // pick the one you need by analyse_id or type
    "analysis_id": "1111aaaa-11aa-11aa-11aa-111111aaaaaa",
    "type": "QUALITY",
    "results_media": [
      {
        // if you have multiple media in one analysis, match score with media by source_video_id/source_shots_set_id
        "source_video_id": "1111aaaa-11aa-11aa-11aa-111111aaaaab", // for shots_set media, the key would be source_shots_set_id
        "results_data":
        {
          "confidence_spoofing": 0.05790174 // quantitative score for this media
        }
      }
      ...
    ],
    "resolution_status": "SUCCESS", // qualitative resolution (based on all media)
    ...
  }
  ...
]

Oz API

Oz API is the most important component of the system. It makes sure all other components are connected with each other. Oz API:

  • provides the unified Rest API interface to run the Liveness and Biometry analyses,

  • processes authorization and user permissions management,

  • tracks and records requested orders and analyses to the database (for example, in 6.0, the database size for 100 000 orders with analyses is ~4 GB),

  • archives the inbound media files,

  • collects telemetry from connected mobile apps,

  • provides settings for specific device models,

  • generates reports with analyses' results.

In API 6.0, we introduced two new operation modes: Instant API and single request.

In the Instant API mode – also known as non-persistent – no data is stored at any point. You send a request, receive the result, and can be confident that nothing is saved. This mode is ideal for handling sensitive data and helps ensure GDPR compliance. Additionally, it reduces storage requirements on your side.

Single request mode allows you to send all media along with the analysis request in one call and receive the results in the same response. This removes the need for multiple API calls – one is sufficient. However, if needed, you can still use the multi-request mode.

Please note: the database

Biometry (Face Matching)

The Biometry algorithm is intended to compare two or more photos and detect the level of similarity of the spotted faces. As a source media, the algorithm takes photos, videos, and documents (with photos).

Requirements

Processing steps

1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/

request body
{
  "analyses": [{
    "type": "biometry",
    "source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // // optional; omit to include all media from the folder
  }]
}

You'll need analysis_id or folder_id from the response.

2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:

  • GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.

  • GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.

Repeat until the resolution_status and resolution fields change to anything other than PROCESSING, and treat this as the result.

Check the response for the min_confidence value. It is a quantitative result of matching the people on the media uploaded.

[
  {
    // you may have multiple analyses in the list
    // pick the one you need by analyse_id or type
    "analysis_id": "1111aaaa-11aa-11aa-11aa-111111aaaaaa",
    "type": "BIOMETRY",
    "results_media": [
      {
        // if you have multiple media in one analysis, match score with media by source_video_id/source_shots_set_id 
        "source_video_id": "1111aaaa-11aa-11aa-11aa-111111aaaaab", // for shots_set media, the key would be source_shots_set_id 
        "results_data": 
        {
          "max_confidence": 0.997926354, 
          "min_confidence": 0.997926354 // quantitative score for this media
        }
      ...
    ]
    "resolution_status": "SUCCESS", // qualitative resolution (based on all media)
    ...
  }
  ...
]

Blacklist Check

How to compare a photo or video with ones from your database.

The blacklist check algorithm is designed to check the presence of a person using a database of preloaded photos. A video fragment and/or a photo can be used as a source for comparison.

Prerequisites:

Processing steps:

1. Initiate the analysis: POST /api/folders/{{folder_id}}/analyses/

request body
{
  "analyses": [{
    "type": "collection",
    "source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // // optional; omit to include all media from the folder
  }]
}

You'll need analysis_id or folder_id from the response.

2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:

  • GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.

  • GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.

Wait for the resolution_status and resolution fields to change the status to anything other than PROCESSING and treat this as a result.

If you want to know which person from your collection matched with the media you have uploaded, find the collection analysis in the response, check results_media, and retrieve person_id. This is the ID of the person who matched with the person in your media. To get the information about this person, use GET /api/collections/{{collection_id}}/persons/{{person_id}} with IDs of your collection and person.
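For orientation, a trimmed sketch of where person_id may appear; the exact nesting below is an assumption, so check it against a real response from your API version:

"results_media": [{
  "results_data": {
    "person_id": "2222bbbb-22bb-22bb-22bb-222222bbbbbb" // the matched person in your collection
  }
}]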

Blacklist (Collection) Management in Oz API

Collection in Oz API is a database of facial photos that are compared with the face from the captured photo or video via the Black list analysis.

Person represents a human in the collection. You can upload several photos for a single person.

How to Create a Collection

The collection should be created within a company, so you need your company's company_id as a prerequisite.

If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the company_id you've received.

Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and company_id of your company:

{
  "alias": "blacklist",
  "company_id": "your_company_id"
}

In a response, you'll get your new collection identifier: collection_id.
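A response sketch (abridged; only collection_id is guaranteed by this guide, the rest of the field set is an assumption):

{
  "collection_id": "3333cccc-33cc-33cc-33cc-333333cccccc",
  "alias": "blacklist",
  "company_id": "your_company_id"
}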

How to Add a Person or a Photo to a Collection

{
    "media:tags": {
        "image1": [
            "photo_selfie",
            "orientation_portrait"
        ]
    }
}

The response will contain the person_id which stands for the person identifier within your collection.

If you want to add the person's name, add it to the request payload as metadata:

    "person:meta_data": {
        "person_info": {
            "first_name": "John",
            "middle_name": "Jameson",
            "last_name": "Doe"
        }
    },

To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ using the appropriate person_id. The request body should be filled in the same way as before with POST /api/collections/{{collection_id}}/persons/.

To obtain information on all the persons within the single collection, call GET /api/collections/{{collection_id}}/persons/.

To obtain a list of photos for a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain person_image_id. You'll need this ID, for instance, if you want to delete the photo.

How to Remove a Photo or a Person from a Collection

To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity if it has any related analyses, which means the Black list analysis used this photo for comparison and found a match. To delete such a person, first delete these analyses using DELETE /api/analyses/{{analysis_id}} with the analysis_id of the collection (Black list) analysis.

To delete all the collection-related analyses, get a list of folders where the Black list analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the analysis_id of the required analysis, and delete the analysis – DELETE /api/analyses/{{analysis_id}}.

To delete a single photo of a person, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}}/images/{{media_id}}/ with the collection, person, and image identifiers specified.

How to Delete a Collection

Delete the information on all the persons from this collection as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.

Single Request

Overview and Benefits

In version 6.0.1, we introduced a new feature which allows you to send all required data and receive the analysis result within a single request.

This new API operation mode significantly simplifies the process by allowing you to send a single request and receive the response synchronously. The key benefits are:

  • Single request for everything – all data is sent in one package, eliminating the risk of data loss.

  • Synchronous response – no need for polling or webhooks to retrieve results.

  • High performance – supports up to 36 analyses per minute per instance.

Usage

Response Example

In response, you receive analysis results.

You're done.

Types of Analyses and What They Check

Here, you'll get acquainted with types of analyses that Oz API provides and will learn how to interpret the output.

Using Oz API, you can perform one of the following analyses: Biometry, Quality (Liveness, Best Shot), Documents, or Blacklist.

Each of the analyses has its own threshold that determines the output of these analyses. By default, the threshold for Liveness is 0.5 (50%); for Blacklist and Biometry (Face Matching), it is 0.85 (85%). A sketch of overriding a threshold follows this list.

  • Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.

  • Blacklist: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.

  • Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
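If you need a different trade-off, the threshold can be set per analysis in params; for example, this request body (using threshold_spoofing, which also appears in the Single Request payload example later in this document) raises the Liveness threshold to 0.75:

{
  "analyses": [{
    "type": "quality",
    "params": {
      "threshold_spoofing": 0.75
    }
  }]
}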

Biometry

Purpose

Output

After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:

  • 100% (1) – faces are similar, media represent the same person,

  • 0% (0) – faces are not similar and belong to different people.

Quality (Liveness, Best Shot)

Purpose

The Liveness detection (Quality) algorithm aims to check whether a person in a media is a real human acting in good faith, not a fake of any kind.

The Best Shot algorithm selects the best shot from a video (the best-quality frame where the face is seen most clearly). It is an addition to liveness.

Output

After checking, the analysis shows the chance of a spoofing attack as a percentage.

  • 100% (1) – an attack is detected, the person in the video is not a real living person,

  • 0% (0) – a person in the video is a real living person.

*Spoofing in biometry is a kind of scam where a person disguises themselves as another person using software or physical tools such as deepfakes, masks, ready-made photos, or fake videos.

Documents

Purpose

The Documents analysis aims to recognize the document and check if its fields are correct according to its type.

Output

As an output, you'll get a list of document fields with recognition results for each field and a result of checking that can be:

  • The documents passed the check successfully,

  • The documents failed to pass the check.

Additionally, the result of the Biometry check is displayed.

Blacklist

Purpose

The Blacklist checking algorithm is used to determine whether the person in a photo or video is present in the database of pre-uploaded images. This base can be used as a blacklist or a whitelist. In the former case, the person's face is compared with the faces of known fraudsters; in the latter case, it might be a list of VIPs.

Output

After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:

  • 100% (1) – the person in an image or video matches with someone in the blacklist,

  • 0% (0) – the person is not found in the blacklist.

User Roles

Each of the new API users should obtain a role to define access restrictions for direct API connections. Set the role in the user_type field when you create a new user.

  • ADMIN is a system administrator, who has unlimited access to all system objects, but can't change the analyses' statuses;

  • CLIENT is a regular consumer account, who can upload media files, process analyses, view results in personal folders, generate reports for analyses.

  • CLIENT ADMIN is a company administrator that can manage their company account and users within it. Additionally, CLIENT ADMIN can view and edit data of all users within their company, delete files in folders, add or delete report templates with or without attachments, the reports themselves and single analyses, check statistics, add new blacklist collections.

  • CLIENT OPERATOR is similar to OPERATOR within their company.

  • CLIENT SERVICE is a service user account for automatic connection purposes. Authentication with this user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, also by default, the lifetime of a token is extended with each request (parameterized).

For API versions below 6.0

For API 5.3 and below, to create a CLIENT user with admin or service rights, you need to set the corresponding flags to true:

  • is_admin – if set, the user obtains access to other users' data within this admin's company.

  • is_service is a flag that marks the user account as a service account for automatic connection purposes. Authentication with this user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, also by default, the lifetime of a token is extended with each request (parameterized).

Here's the detailed information on access levels.

Company

Folder

Report template

Report template attachments

Report

Analysis

Collection

Person

Person image

User

System Objects

The description of the objects you can find in the Oz Forensics system.

Objects Hierarchy

System objects in Oz Forensics products are hierarchically structured as shown in the picture below.

On the top level, there is a Company. You can use one copy of Oz API to work with several companies.

Object parameters

Common parameters

Besides these parameters, each object type has specific ones.

Company

User

Folder

Media

Analysis

Rules of Assigning Analyses

This article covers the default rules of applying analyses.

Analyses in Oz system can be applied in two ways:

  • manually, for instance, when you choose the Liveness scenario in our demo application;

  • automatically, when you don’t choose anything and just assign all possible analyses (via API or SDK).

Below, you will find the tag and type requirements for all analyses. If a media file doesn't match the requirements for a certain analysis, it is ignored by the algorithms.

Important: to process a photo in API 4.0.8 and below, pack it into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored.

Quality (Liveness)

This analysis is applied to all media.

Biometry

If the folder contains fewer than two matching media files, the system will return an error. If there are more than two files, all pairs will be compared, and the system will return the result for the pair with the least similar faces.

Blacklist

This analysis works only when you have a pre-made image database, which is called the blacklist. The analysis is applied to all media in the folder (or the ones marked as source media).

Best Shot

Best Shot is an addition to the Quality (Liveness) analysis. It requires the appropriate option enabled. The analysis is applied to all media files that can be processed by the Quality analysis.

Documents

The Documents analysis is applied to images with the tags photo_id_front and photo_id_back (documents) and photo_selfie (selfie). The result will be positive if the system finds the selfie photo and matches it with a photo on one of the valid documents from the following list:

  • personal ID card

  • driver license

  • foreign passport

Tag-Related Errors

Quantitative Results

This article describes how to get the analysis scores.

When you perform an analysis, the result you get is a number. For biometry, it reflects a chance that the two or more people represented in your media are the same person. For liveness, it shows a chance of deepfake or a spoofing attack: that the person in uploaded media is not a real one. You can get these numbers via API from a JSON response.

  1. Make a request to the folder or folder list to get a JSON response. Set the with_analyses parameter to true (see the request sketch after this list).

  2. For the Biometry analysis, check the response for the min_confidence value:

This value is a quantitative result of matching the people on the media uploaded.

4. For the Liveness Analysis, seek the confidence_spoofing value related to the video you need:

This value is a chance that a person is not a real one.

To process a bunch of analysis results, you can parse the appropriate JSON response.
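A request sketch for step 1; the header name is the one used elsewhere in this document:

GET /api/folders/{{folder_id}}/?with_analyses=true
X-Forensic-Access-Token: {{access_token}}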


You're authorized.

You have already created a folder and added your media, marked by correct tags, into this folder. For API 4.0.8 and below, please note: the Liveness analysis works with videos and shots sets; images are ignored. If you want to analyze an image, upload it as a shots set (archive) with a single image and mark it with the video_selfie_blank tag.

If you want to use a webhook for the response, add it to the payload at this step, as described here.

You're authorized.

You have already created a folder and added your media, marked by correct tags, into this folder.

If you want to use a webhook for the response, add it to the payload at this step, as described here.

You're authorized.

You have already created a folder and added your media, marked by correct tags, into this folder.

If you want to use a webhook for the response, add it to the payload at this step, as described here.

This article describes how to create a collection via API, how to add persons and photos to this collection, and how to delete them and the collection itself if you no longer need it. You can do the same in the Web Console, but this article covers API methods only.

To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using the collection_id of the collection needed. In the request body, add a photo or several photos. Mark them with appropriate tags in the payload:

Before 6.0.1, interacting with the API required multiple requests: you had to create a folder and upload media to it, initiate analyses (see Liveness, Biometry, and Blacklist), and then either poll for results or use webhooks for notifications when the result was ready. This flow is still supported, so if you need to send separate requests, you can continue using the existing methods listed above.

To use this method, call POST /api/folders/. In the X-Forensic-Access-Token header, pass your access token. Add media files to the request body and define the tags and metadata, if needed, in the payload part.


The possible results of the analyses are explained here.

To configure the threshold depending on your needs, please contact us.

For more information on how to read the numbers in analyses' results, please refer to Quantitative Results.

The Biometry algorithm allows you to compare several media and check whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media files (for details, please refer to Rules of Assigning Analyses).

Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.

OPERATOR is a system operator, who can view all system objects and choose the analysis result via the Make Decision button (usually needed if the status is OPERATOR_REQUIRED);

can_start_analysis_biometry – an additional flag to allow access to Biometry analyses (enabled by default);

can_start_analysis_quality – an additional flag to allow access to Liveness (QUALITY) analyses (enabled by default);

can_start_analysis_collection – an additional flag to allow access to Black list analyses (enabled by default).

The next level is a User. A company can contain any amount of users. There are several roles of users with different permissions. For more information, refer to User Roles.

When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders. Each folder can contain any amount of media. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed on this page.

The automatic assignment means that the Oz system decides itself what analyses to apply to media files based on their tags and type. If you upload files via the web console, you select the tags needed; if you take a photo or video via Web SDK, the SDK picks the tags automatically. As for the media type, it can be IMAGE (a photo), VIDEO, or SHOTS_SET, where SHOTS_SET is a .zip archive equal to a video.

The rules listed below act by default. To change the mapping configuration, please contact us.

This analysis is applied to all media, regardless of the recorded gesture (gesture tags begin with video_selfie).

Request payload example
{
  // (optional block) folder metadata if needed
  "folder:meta_data": {
    "partner_side_folder_id": "00000000-0000-0000-0000-000000000000",
    "person_info": {
      "first_name": "John",
      "middle_name": "Jameson",
      "last_name": "Doe"
    }
  },
  // (optional block) media metadata if needed
  "media:meta_data": {
    "video1": {
      "foo": "bar"
    }
  },
  "media:tags": {
    "video1": [
      "video_selfie",
      "video_selfie_eyes",
      "orientation_portrait"
    ]
  },
  "analyses": [
    {
      "type": "quality",
      // (optional block) analysis metadata if needed
      "meta_data": {
        "example1": "some_example1"
      },
      // additional parameters
      "params": {
        "threshold_spoofing": 0.5,
        "extract_best_shot": false
      }
    }
  ]
}
{
  "company_id": "00000000-0000-0000-0000-000000000000",
  "time_created": 1744017549.366616,
  "folder_id": "00000000-0000-0000-0000-000000000000",
  "user_id": "00000000-0000-0000-0000-000000000000",
  "resolution_endpoint": null,
  "resolution_status": "FINISHED",
  "resolution_comment": "[]",
  "system_resolution": "SUCCESS",
  ...
  // folder metadata if you've added it
  "meta_data": {
    "partner_side_folder_id": "00000000-0000-0000-0000-000000000000",
    "person_info": {
      "first_name": "John",
      "middle_name": "Jameson",
      "last_name": "Doe"
    }
  },
  "media": [
    {
      "company_id": "00000000-0000-0000-0000-000000000000",
      "folder_id": "00000000-0000-0000-0000-000000000000",
      "folder_time_created": 1744017549.366616,
      "original_name": "00000000-0000-0000-0000-000000000000.mp4",
      "original_url": null,
      "media_id": "00000000-0000-0000-0000-000000000000",
      "media_type": "VIDEO_FOLDER",
      "tags": "video1": [
		"video_selfie",
		"video_selfie_eyes",
		"orientation_portrait"
	]
      "info": {},
      "time_created": 1744017549.368665,
      "time_updated": 1744017549.36867,
	  // media metadata if you've added it
      "meta_data": {
        "foo": "bar"
      },
      "thumb_url": null,
      "image_id": "00000000-0000-0000-0000-000000000000"
    }
  ],
  "time_updated": 1744017549.366629,
  "analyses": [
    {
      "company_id": "00000000-0000-0000-0000-000000000000",
      "group_id": "00000000-0000-0000-0000-000000000000",
      "folder_id": "00000000-0000-0000-0000-000000000000",
      "folder_time_created": 1744017549.366616,
      "analysis_id": "00000000-0000-0000-0000-000000000000",
      "state": "FINISHED",
      "resolution_operator": null,
      "results_media": [
        {
         ...
        }
      ],
      "results_data": null,
	  // analysis metadata if you've added it
      "meta_data": {
        "example1": "some_example1"
      },
      "time_created": 1744017549.369485,
      "time_updated": 1744017550.659305,
      "error_code": null,
      "error_message": null,
      "source_media": [
        {
	 ...
        }
      ],
      "type": "QUALITY",
      "analyse_id": "00000000-0000-0000-0000-000000000000",
      "resolution_status": "SUCCESS",
      "resolution": "SUCCESS"
    }
  ]
}

Company

Role | Create | Read | Update | Delete
ADMIN | + | + | + | +
OPERATOR | - | + | - | -
CLIENT | - | their company data | - | -
CLIENT SERVICE | - | their company data | - | -
CLIENT OPERATOR | - | their company data | - | -
CLIENT ADMIN | - | their company data | their company data | their company data

Folder

Role | Create | Read | Update | Delete
ADMIN | + | + | + | +
OPERATOR | + | + | + | -
CLIENT | their folders | their folders | their folders | -
CLIENT SERVICE | within their company | within their company | within their company | -
CLIENT OPERATOR | within their company | within their company | within their company | -
CLIENT ADMIN | within their company | within their company | within their company | within their company

Report template

Role | Create | Read | Update | Delete
ADMIN | + | + | + | +
OPERATOR | + | + | + | -
CLIENT | - | within their company | - | -
CLIENT SERVICE | - | within their company | - | -
CLIENT OPERATOR | within their company | within their company | within their company | -
CLIENT ADMIN | within their company | within their company | within their company | within their company

Report template attachments

Role | Create | Read | Delete
ADMIN | + | + | +
OPERATOR | + | + | -
CLIENT | - | within their company | -
CLIENT SERVICE | - | within their company | -
CLIENT OPERATOR | within their company | within their company | -
CLIENT ADMIN | within their company | within their company | within their company

Report

Role | Create | Read | Delete
ADMIN | + | + | +
OPERATOR | + | + | -
CLIENT | in their folders | in their folders | -
CLIENT SERVICE | within their company | within their company | -
CLIENT OPERATOR | within their company | within their company | -
CLIENT ADMIN | within their company | within their company | within their company

Analysis

Role | Create | Read | Update | Delete
ADMIN | + | + | + | +
OPERATOR | + | + | + | -
CLIENT | in their folders | in their folders | - | -
CLIENT SERVICE | within their company | within their company | within their company | -
CLIENT OPERATOR | within their company | within their company | within their company | -
CLIENT ADMIN | within their company | within their company | within their company | within their company

Collection

Role | Create | Read | Update | Delete
ADMIN | + | + | + | +
OPERATOR | - | + | - | -
CLIENT | - | within their company | - | -
CLIENT SERVICE | within their company | within their company | - | -
CLIENT OPERATOR | - | within their company | - | -
CLIENT ADMIN | within their company | within their company | within their company | within their company

Person

Role | Create | Read | Delete
ADMIN | + | + | +
OPERATOR | - | + | -
CLIENT | - | within their company | -
CLIENT SERVICE | within their company | within their company | -
CLIENT OPERATOR | - | within their company | -
CLIENT ADMIN | within their company | within their company | within their company

Person image

Role | Create | Read | Delete
ADMIN | + | + | +
OPERATOR | - | + | -
CLIENT | - | within their company | -
CLIENT SERVICE | - | within their company | -
CLIENT OPERATOR | - | within their company | -
CLIENT ADMIN | within their company | within their company | within their company

User

Role | Create | Read | Update | Delete
ADMIN | + | + | + | +
OPERATOR | - | + | their data | -
CLIENT | - | their data | their data | -
CLIENT SERVICE | - | within their company | their data | -
CLIENT OPERATOR | - | within their company | their data | -
CLIENT ADMIN | within their company | within their company | within their company | within their company

Common parameters

Parameter | Type | Description
time_created | Timestamp | Object (except user and company) creation time
time_updated | Timestamp | Object (except user and company) update time
meta_data | Json | Any user parameters
technical_meta_data | Json | Module-required parameters; reserved for internal needs

Company

Parameter | Type | Description
company_id | UUID | Company ID within the system
name | String | Company name within the system

Folder

Parameter | Type | Description
folder_id | UUID | Folder ID within the system
resolution_status | ResolutionStatus | The latest analysis status

Analysis

Parameter | Type | Description
analyse_id | UUID | ID of the analysis
folder_id | UUID | ID of the folder
type | String | Analysis type (BIOMETRY/QUALITY/DOCUMENTS)
results_data | JSON | Results of the analysis

Code | Message | Description
202 | Could not locate face on source media [media_id] | No face is found in the media being processed, or the source media has a wrong (photo_id_back) and/or missing tag.
202 | Biometry. Analyse requires at least 2 media objects to process | The algorithms did not find two appropriate media for the analysis. This might happen when only a single media file has been sent for the analysis, or a media file is missing a tag.
202 | Processing error - did not found any document candidates on image | The Documents analysis can't be finished because the uploaded photo doesn't seem to be a document, or it has wrong (not photo_id_*) and/or missing tags.
5 | Invalid/missed tag values to process quality check | The tags applied can't be processed by the Quality algorithm (most likely, the tags begin with photo_*; for Quality, they should be marked as video_*).
5 | Invalid/missed tag values to process blacklist check | The tags applied can't be processed by the Blacklist algorithm. This might happen when a media file is missing a tag.

"items": 
 [
  {
   "analyses": 
    [
     {
      "analysis_id": "biometry_analysis_id"
      "folder_id": "some_folder_id", 
      "type": "BIOMETRY", 
      "state": "FINISHED", 
      "results_data": 
       {
        "max_confidence": 0.997926354, 
        "min_confidence": 0.997926354
       }
"items": 
 [
  {
   "analyses": 
    [
     {
      "source_media": 
       [
        {
        "media_id": "your_media_id", 
        "media_type": "VIDEO_FOLDER",
        }
       ]
      "results_media": 
       [
        "analysis_id": "liveness_analysis_id",
        "results_data": 
         {
          "confidence_spoofing": 0.55790174
         }

API Error Codes

HTTP Response Codes

  • Response codes 2XX indicate a successfully processed request (e.g., code 200 for retrieving data, code 201 for adding a new entity, code 204 for deletion, etc.).

  • Response codes 4XX indicate that a request could not be processed correctly because of some client-side data issues (e.g., 404 when addressing a non-existing resource).

  • Response codes 5XX indicate that an internal server-side error occurred during the request processing (e.g., when database is temporarily unavailable).

Response Body with Errors

Each response error includes HTTP code and JSON data with error description. It has the following structure:

  • error_code – integer error code;

  • error_message – text error description;

  • details – additional error details (format is specific to each case); can be empty.

Sample error response:

{
    "error_code": 0,
    "error_message": "Unknown server side error occurred",
    "details": null
}

Error codes:

  • 0 – UNKNOWN. Unknown server error.

  • 1 – NOT ALLOWED. An unallowed method is called; usually accompanied by the 405 HTTP response status. For example, requesting the PATCH method while only GET/POST are supported.

  • 2 – NOT REALIZED. The method is documented but is not implemented, for a temporary or permanent reason.

  • 3 – INVALID STRUCTURE. Incorrect request structure: some required fields are missing, or a format validation error occurred.

  • 4 – INVALID VALUE. Incorrect value of a parameter inside the request body or query.

  • 5 – INVALID TYPE. Invalid data type of a request parameter.

  • 6 – AUTH NOT PROVIDED. Access token not specified.

  • 7 – AUTH INVALID. The access token does not exist in the database.

  • 8 – AUTH EXPIRED. The auth token has expired.

  • 9 – AUTH FORBIDDEN. Access denied for the current user.

  • 10 – NOT EXIST. The requested resource is not found (equivalent to HTTP status_code = 404).

  • 11 – EXTERNAL SERVICE. Error in the external information system.

  • 12 – DATABASE. Critical database error on the server host.

Statuses in API

This article contains the full description of folders' and analyses' statuses in API.

Field name / status | analyse.state | analyse.resolution_status | folder.resolution_status | system_resolution
INITIAL | - | - | starting state | starting state
PROCESSING | starting state | starting state | analyses in progress | analyses in progress
FAILED | system error | system error | system error | system error
FINISHED | finished successfully | - | finished successfully | -
DECLINED | - | check failed | - | check failed
OPERATOR_REQUIRED | - | additional check is needed | - | additional check is needed
SUCCESS | - | check succeeded | - | check succeeded

The details on each status are below.

Analysis State (analyse.state)

This is the state when the analysis is being processed. The values of this state can be:

PROCESSING – the analysis is in progress;

FAILED – the analysis failed due to some error and couldn't get finished;

FINISHED – job's done, the analysis is finished, and you can check the result.

Analysis Result (analyse.resolution_status)

Once the analysis is finished, you'll see one of the following results:

SUCCESS – everything went fine, the check succeeded (e.g., faces match or liveness confirmed);

OPERATOR_REQUIRED (except the Liveness analysis) – the result should be additionally checked by a human operator;

The OPERATOR_REQUIRED status appears only if it is set up in biometry settings.

DECLINED – the check failed (e.g., faces don't match or some spoofing attack detected).

If the analysis hasn't been finished yet, the result inherits a value from analyse.state: PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).

Folder Status (folder.resolution_status)

A folder is an entity that contains media to analyze. If the analyses have not been finished, the stage of processing media is shown in resolution_status:

INITIAL – no analyses applied;

PROCESSING – analyses are in progress;

FAILED – any of the analyses failed due to some error and couldn't get finished;

FINISHED – media in this folder are processed, the analyses are finished.

Folder Result (system_resolution)

Folder result is the consolidated result of all analyses applied to media from this folder. Please note: the folder result is the result of the last-finished group of analyses. If all analyses are finished, the result will be:

SUCCESS – everything went fine, all analyses completed successfully;

OPERATOR_REQUIRED (except the Liveness analysis) – there are no analyses with the DECLINED status, but one or more analyses have been completed with the OPERATOR_REQUIRED status;

DECLINED – one or more analyses have been completed with the DECLINED status.

The analyses you send in a single POST request form a group. The group result is the "worst" result of the analyses this group contains, ranked INITIAL > PROCESSING > FAILED > DECLINED > OPERATOR_REQUIRED > SUCCESS, where SUCCESS means all analyses in the group have been completed successfully without any errors. For example, a group of two SUCCESS analyses and one OPERATOR_REQUIRED analysis resolves to OPERATOR_REQUIRED.

Instant API: Non-Persistent Mode

Instant API, or the non-persistent operation mode, was introduced in API 6.0.1. In this mode, we do not save any data anywhere. All data is used only within a request: you send it, get the response, process it, and that's all; nothing gets recorded. This ensures you do not store any sensitive data, which might be crucial for GDPR compliance. It also significantly reduces storage requirements.

To enable this mode, when you prepare the config.py file to run the API, set the OZ_APP_COMPONENTS parameter to stateless. Then, call /api/instant/folders/ to send the request without saving any data.
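For illustration, a minimal sketch of this setting; treat the exact syntax as an assumption and follow the config.py format of your installation:

# config.py
OZ_APP_COMPONENTS = "stateless"  # enables the non-persistent (Instant API) mode

With this in place, requests to /api/instant/folders/ are processed without anything being saved.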


User

Parameter | Type | Description
user_id | UUID | User ID within the system
user_type | String | User role (see User Roles)
first_name | String | Name
last_name | String | Surname
middle_name | String | Middle name
email | String | User email = login
password | String | User password (only required for new users or to change)
can_start_analyze_* | String | Flags that allow the user to run the corresponding analyses
company_id | UUID | Current user company's ID within the system
is_admin | Boolean | Marks a company administrator account (API 5.3 and below)
is_service | Boolean | Marks a service account for automatic connections (API 5.3 and below)

Media

Parameter | Type | Description
media_id | UUID | Media ID
original_name | String | Original filename (how the file was called on the client machine)
original_url | Url | HTTP link to this file on the API server
tags | Array(String) | Tags assigned to the media file

Oz API Postman Collections

Oz API Postman collections

6.0

5.3 and 5.2

5.0

Oz API 5.1.0 works with the same collection.

4.0

3.33

How to Import a Postman Collection:

Launch the client and import Oz API collection for Postman by clicking the Import button:

Click files, locate the JSON needed, and hit Open to add it:

The collection will be imported and will appear in the Postman interface:

Metadata

Overview

…
meta_data:
{
  "field1": "value1",
  "field2": "value2"
}
…

Objects and Methods

Metadata is available for most Oz system objects. Here is the list of these objects with the API methods required to add metadata. Please note: you can also add metadata to these objects during their creation.

Object | API Method
User | PATCH /api/users/{{user_id}}
Folder | PATCH /api/folders/{{folder_id}}/meta_data/
Media | PATCH /api/media/{{media_id}}/meta_data
Analysis | PATCH /api/analyses/{{analyse_id}}/meta_data
Collection | PATCH /api/collections/{{collection_id}}/meta_data/ and, for a person in a collection, PATCH /api/collections/{{collection_id}}/persons/{{person_id}}/meta_data

Usage Examples

You may want to use metadata to group folders by a person or lead. For example, if you want to calculate conversion when a single lead makes several Liveness attempts, just add the person/lead identifier to the folder metadata.

Here is how to add a client ID field, iin, to a folder object.

In the request body, add:

{
  "iin": "123123123"
}
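Combining this body with the method from the table above, the full call might look like this:

PATCH /api/folders/{{folder_id}}/meta_data/
X-Forensic-Access-Token: {{access_token}}

{
  "iin": "123123123"
}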

You can pass an ID of a person in this field, and use this ID to combine requests with the same person and count unique persons (same ID = same person, different IDs = different persons). This ID can be a phone number, an IIN, an SSN, or any other kind of unique ID. The ID will be displayed in the report as an additional column.

If you store PII in metadata, make sure it complies with the relevant regulatory requirements.

You can also add metadata via SDK to process the information later using API methods. Please refer to the corresponding SDK sections.

Oz API Lite

What is Oz API Lite, when and how to use it.

Oz API Lite is the lightweight yet powerful version of Oz API. The Lite version is less resource-demanding, more productive, and easier to work with. The analyses are made within the API Lite image. As Oz API Lite doesn't include any additional services like statistics or data storage, this version is the one to use when you need a high performance.

Examples of methods

Oz Mobile SDK (iOS, Android, Flutter)

Oz Mobile SDK stands for the Software Development Kit of the Oz Forensics Liveness and Face Biometric System, providing seamless integration with customers' mobile apps for login and biometric identification.

Currently, both Android and iOS SDK work in the portrait mode.

API Lite Methods

From 1.1.0, Oz API Lite works with base64 as an input format and is also able to return the biometric templates in this format. To enable this option, add Content-Transfer-Encoding = base64 to the request headers.

version – component version check

Use this method to check what versions of components are used (available from 1.1.1).

Call GET /version

Input parameters

-

Request example

GET localhost/version

Successful response

In case of success, the method returns a message with the following parameters.

HTTP response content type: “application/json”.

Output parameters

Response example

Biometry

health – biometric processor status check

Use this method to check whether the biometric processor is ready to work.

Call GET /v1/face/pattern/health

Input parameters

-

Request example

GET localhost/v1/face/pattern/health

Successful response

In case of success, the method returns a message with the following parameters.

HTTP response content type: “application/json”.

Output parameters

Response example

extract – the biometric template extraction

The method is designed to extract a biometric template from an image.

HTTP request content type: “image/jpeg” or “image/png”

Call POST /v1/face/pattern/extract

Input parameters

*

The name itself is not mandatory for a parameter of the Stream type.

To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

Request example
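Since the original example is not reproduced here, a curl sketch of this call; localhost stands for your API Lite host:

curl -X POST "localhost/v1/face/pattern/extract" \
  -H "Content-Type: image/jpeg" \
  --data-binary @face.jpg \
  --output face.template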

Successful response

In case of success, the method returns a biometric template.

The content type of the HTTP response is “application/octet-stream”.

If you've passed Content-Transfer-Encoding = base64 in headers, the template will be in base64 as well.

Output parameters

*

The name itself is not mandatory for a parameter of the Stream type.

Response example

compare – the comparison of biometric templates

The method is designed to compare two biometric templates.

The content type of the HTTP request is “multipart/form-data”.

Call POST /v1/face/pattern/compare

Input parameters

To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

Request example

Successful response

In case of success, the method returns the result of comparing the two templates.

HTTP response content type: “application/json”.

Output parameters

Response example

verify – the biometric verification

The method combines the two methods from above, extract and compare. It extracts a template from an image and compares the resulting biometric template with another biometric template that is also passed in the request.

The content type of the HTTP request is “multipart/form-data”.

Call POST /v1/face/pattern/verify

Input parameters

To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

Request example

Successful response

In case of success, the method returns the result of comparing two biometric templates and the biometric template.

The content type of the HTTP response is “multipart/form-data”.

Output parameters

Response example

extract_and_compare – extracting and comparison of templates derived from two images

The method also combines the two methods from above, extract and compare. It extracts templates from two images, compares the received biometric templates, and transmits the comparison result as a response.

The content type of the HTTP request is “multipart/form-data”.

Call POST /v1/face/pattern/extract_and_compare

Input parameters

To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

Request example
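The original example is likewise not reproduced here; a curl sketch with hypothetical form field names (sample_1 and sample_2 are placeholders and may differ in your version):

curl -X POST "localhost/v1/face/pattern/extract_and_compare" \
  -F "sample_1=@face1.jpg;type=image/jpeg" \
  -F "sample_2=@face2.jpg;type=image/jpeg"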

Successful response

In case of success, the method returns the result of comparing the two extracted biometric templates.

HTTP response content type: “application/json”.

Output parameters

Response example

compare_n – 1:N biometric template comparison

Use this method to compare one biometric template to N others.

The content type of the HTTP request is “multipart/form-data”.

Call POST /v1/face/pattern/compare_n

Input parameters

Request example

Successful response

In case of success, the method returns the result of the 1:N comparison.

HTTP response content type: “application/json”.

Output parameters

Response example

verify_n – 1:N biometric verification

The method combines the extract and compare_n methods. It extracts a biometric template from an image and compares it to N other biometric templates that are passed in the request as a list.

The content type of the HTTP request is “multipart/form-data”.

Call POST /v1/face/pattern/verify_n

Input parameters

To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

Request example

Successful response

In case of success, the method returns the result of the 1:N comparison.

HTTP response content type: “application/json”.

Output parameters

Response example

extract_and_compare_n – 1:N template extraction and comparison

This method also combines the extract and compare_n methods but in another way. It extracts biometric templates from the main image and a list of other images and then compares them in the 1:N mode.

The content type of the HTTP request is “multipart/form-data”.

Call POST /v1/face/pattern/extract_and_compare_n

Input parameters

To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

Request example

Successful response

In case of success, the method returns the result of the 1:N comparison.

HTTP response content type: “application/json”.

Output parameters

Response example

Method errors

HTTP response content type: “application/json”.

* A biometric sample is an input image.

Liveness

health – checking the status of liveness processor

Use this method to check whether the liveness processor is ready to work.

Call GET /v1/face/liveness/health

Input parameters

  • None.

Request example

GET localhost/v1/face/liveness/health

Successful response

In case of success, the method returns a message with the following parameters.

HTTP response content type: “application/json”.

Output parameters

Response example

detect – presentation attack detection

The detect method reveals presentation attacks. It detects a face in each image or video (videos are supported since 1.2.0), sends the media for analysis, and returns a result.

The method supports the following content types:

  • image/jpeg or image/png for an image;

  • multipart/form-data for images, videos, and archives. You can use payload to add any parameters that affect the analysis.

To run the method, call POST /{version}/face/liveness/detect.

Image

Accepts an image in JPEG or PNG format. No payload attached.
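For illustration, this call from application code might look like the following Kotlin sketch (it uses the third-party OkHttp client; the host and the file name are assumptions, not part of the API):

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.asRequestBody
import java.io.File

fun main() {
    // POST the raw image bytes with the image/jpeg content type
    val request = Request.Builder()
        .url("http://localhost/v1/face/liveness/detect") // assumed host
        .post(File("selfie.jpg").asRequestBody("image/jpeg".toMediaType()))
        .build()
    OkHttpClient().newCall(request).execute().use { response ->
        println(response.body?.string()) // e.g., {"passed": false, "score": 0.999484062}
    }
}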

Request example
Successful response example

Multipart/form-data

Accepts the multipart/form-data request.

  • Each media file should have a unique name, e.g., media_key1, media_key2.

  • The payload parameters should be passed as JSON in the payload field.

Temporary IDs will be deleted once you get the result.

Request example
Successful response example

Multipart/form-data with Best Shot

To extract the best shot from your video or archive, in analyses, set extract_best_shot = true (as shown in the request example below). In this case, API Lite will analyze your archives and videos, and, in response, will return the best shot. It will be a base64 image in analysis->output_images->image_b64.

Additionally, you can change the Liveness threshold. In analyses, set the new threshold in the threshold_spoofing parameter. If the resulting score is higher than this parameter's value, the analysis ends with the DECLINED status. Otherwise, the status is SUCCESS.

Request example
Successful response example
The payload field

Method errors

HTTP response content type: “application/json”.

* A biometric sample is an input image.

Oz API Lite Postman Collection

1.2.0

1.1.1

How to Import a Postman Collection:

Launch the client and import Oz API Lite collection for Postman by clicking the Import button:

Click files, locate the JSON needed, and hit Open to add it:

The collection will be imported and will appear in the Postman interface:

Changelog

API Lite (FaceVer) changes

1.2.3 – Nov., 2024

  • Fixed the bug where the time_created and folder_id parameters of the Detect method could sometimes be generated incorrectly.

  • Security updates.

1.2.2 – Oct. 17, 2024

  • Updated models.

1.2.1 – Sept. 05, 2024

  • The file size for the detect Liveness method is now capped at 15 MB, with a maximum of 10 files per request.

  • Updated the gesture list for best_shot analysis: it now supports head turns (left and right), tilts (up and down), smiling, and blinking.

1.2.0 – July 26, 2024

1.1.1 – Nov. 28, 2022

1.1.0

  • API Lite now accepts base64.

09.2021

  • Improved the biometric model.

  • Added the 1:N mode.

08.2021

  • Added the CORS policy.

  • Published the documentation.

06.2021

  • Improved error messages – made them more detailed.

  • Simplified the Liveness/Detect methods.

04.2021

  • Reworked and improved the core.

  • Added anti-spoofing algorithms.

10.2020

  • Added the extract_and_compare method.

Changelog

API changes

6.0.1 – Apr. 30, 2025

  • Optimized storage and database.

  • You can now combine working systems based on the asynchronous method or a Celery worker (local processing, Celery processing). Added S3 storage mechanics for each of the combinations.

  • Implemented security updates.

  • We no longer support RAR archives.

  • Improved mechanics for calculating analysis time.

  • Replaced the is_admin and is_service flags for the CLIENT role with new roles: CLIENT ADMIN and CLIENT SERVICE, respectively. Set the roles in user_type.

  • To change the user password, call POST /api/users/{{user_id}}/change-password/. PATCH users/{{user_id}}/ no longer works.

  • To issue a service token for a user via {{host}}/api/authorize/service_token/, this user must have the CLIENT SERVICE role. You can also create a token for another user with this role: call {{host}}/api/authorize/service_token/{user_id}.

  • To delete an image of a person from a collection, use DELETE collections/<collection_id>/persons/<person_id>/images/<media_id>/. DELETE images|media/<media_id> is deprecated.

  • image_id, video_id, and shots_set_id are now deprecated. Use media_id instead.

  • analyse_id is now deprecated. Use analysis_id instead.

  • can_start_analyse_biometry, can_start_analyse_collection, can_start_analyse_documents, can_start_analyse_quality are now deprecated. Use can_start_analysis_biometry, can_start_analysis_collection, can_start_analysis_documents, can_start_analysis_quality instead, respectively.

  • Also deprecated:

    • expire_date in {{host}}/api/authorize/auth and {{host}}/api/authorize/refresh (use access_token.exp from payload instead),

    • session_id in {{host}}/api/authorize/auth and {{host}}/api/authorize/refresh (use token_id instead).

  • Removed collection and person attributes from COLLECTION.analysis.

  • We no longer store separate objects for each frame in SHOTS_SET.

5.3.1 – Dec. 24, 2024

  • Improved the resource efficiency of server-based biometry analysis.

5.3.0 – Nov. 27, 2024

  • API can now extract action shots from videos of a person performing gestures. This is done to comply with the new Kazakhstan regulatory requirements for biometric identification. Dependencies with other system components are specified here.

  • Created a new report template that also complies with the requirements mentioned above.

  • If action shots are enabled, the thumbnails for the report are generated from them.

5.2.0 – Sept. 06, 2024

  • Added the new method to check the timezone settings: GET {{host}}/api/config

  • Added parameters to the GET {{host}}/api/event_sessions method:

    • time_created

    • time_created.min

    • time_created.max

    • time_updated

    • time_updated.min

    • time_updated.max

    • session_id

    • session_id.exclude

    • sorting

    • offset

    • limit

    • total_omit

  • If you create a folder using SHOT_SET, the corresponding video will be in media.video_url.

  • Fixed the bug with CLIENT ADMIN being unable to change passwords for users from their company.

5.1.1 – July 16, 2024

  • Security updates.

5.1.0 – Mar. 20, 2024

  • Face Identification 1:N is now live, significantly increasing the data processing capacity of the Oz API to find matches. Even huge face databases (containing millions of photos and more) are no longer an issue.

  • The Liveness (QUALITY) analysis now ignores photos tagged with photo_id, photo_id_front, or photo_id_back, preventing these photos from causing the tag-related analysis error.

5.0.1 – July 16, 2024

  • Security updates.

5.0.0 – Nov. 17, 2023

  • You can now apply the Liveness (QUALITY) analysis to a single image.

  • Fixed the bug where the Liveness analysis could finish with the SUCCESS result with no media uploaded.

  • The default value for the extract_best_shot parameter is now True.

  • RAR archives are no longer supported.

  • By default, analyses.results_media.results_data now contain the confidence_spoofing parameter. However, if you need all three parameters for the backward compatibility, it is possible to change the response back to three parameters: confidence_replay, confidence_liveness, and confidence_spoofing.

  • Updated the default PDF report template.

  • The name of the PDF report now contains folder_id.

4.0.8-patch1 – July 16, 2024

  • Security updates.

4.0.8 – May 22, 2023

  • Set the autorotation of logs.

  • Added the CLI command for user deletion.

  • You can now switch off the video preview generation.

  • The ADMIN access token is now valid for 5 years.

  • Added the folder identifier folder_id to the report name.

  • Fixed bugs and optimized the API work.

4.0.2 – Sept. 13, 2022

  • For the sliced video, the system now deletes the unnecessary frames.

  • Added new methods: GET and POST at media/<media_id>/snapshot/.

  • Replaced the default report template.

  • The shot set preview now keeps images’ aspect ratio.

  • ADMIN and OPERATOR receive system_company as the company they belong to.

  • Added the company_id attribute to User, Folder, Analyse, Media.

  • Added the Analysis group_id attribute.

  • Added the system_resolution attribute to Folder and Analysis.

  • The analysis resolution_status now returns the system_resolution value.

  • Removed the PATCH method for collections.

  • Added the resolution_status filter to Folder Analyses [LIST] and analyse.resolution_status filter to Folder [LIST].

  • Added the audit log for Folder, User, Company.

  • Improved the company deletion algorithm.

  • Reforged the blacklist processing logic.

  • Fixed a few bugs.

3.33.0

  • The Photo Expert and KYC modules are now removed.

  • The endpoint for the user password change is now POST users/user_id/change-password instead of PATCH.

3.32.1

  • Provided log for the Celery app.

3.32.0

  • Added filters to the Folder [LIST] request parameters: analyse.time_created, analyse.results_data for the Documents analysis, results_data for the Biometry analysis, results_media_results_data for the QUALITY analysis. To enable filters, set the with_results_media_filter query parameter to True.

3.31.0

  • Added a new attribute for users – is_active (default True). If is_active == False, any user operation is blocked.

  • Added a new exception code (1401 with status code 401) for the actions of the blocked users.

3.30.0

  • Added shots sets preview.

  • You can now save a shots set archive to a disk (with the original_local_path, original_url attributes).

  • A new original_info attribute is added to store the md5, size, and mime-type of a shots set.

  • Fixed ReportInfo for shots sets.

3.29.0

  • Added health check at GET api/healthcheck.

3.28.1

  • Fixed the shots set thumbnail URL.

3.28.0

  • Now, the first frame of a shots set becomes this shots set's thumbnail URL.

3.27.0

  • Modified the retry policy: the default max count of analysis attempts is increased to 3, and a jitter configuration was introduced.

  • Changed the callback algorithm.

  • Refactored and documented the command line tools.

  • Refactored modules.

3.25.0

  • Changed the delete personal information endpoint and method from delete_pi to /pi and from POST to DELETE, respectively.

3.23.1

  • Improved the delete personal information algorithm.

  • It is now forbidden to add media to cleaned folders.

3.23.0

  • Changed the authorize/restore endpoint name from auth to auth_restore.

  • Added a new tag – video_selfie_oneshot.

  • Added the password validation setting (OZ_PASSWORD_POLICY).

  • Added auth, rest_unauthorized, rps_with_token throttling (use OZ_THROTTLING_RATES in configuration. Off by default).

  • User permissions are now used to access static files (OZ_USE_PERMISSIONS_FOR_STATIC in configuration, false by default).

  • Added a new folder endpoint – /delete_pi. It clears all personal information from a folder and analyses related to this folder.

3.22.2

  • Fixed a bug where no error was raised when trying to synchronize empty collections.

  • If persons are uploaded, the analyse collection TFSS request is sent.

3.22.0

  • Added the fields_to_check parameter to document analysis (by default, all fields are checked).

  • Added the double_page_spread parameter to document analysis (True by default).

3.21.3

  • Fixed collection synchronization.

3.21.0

  • The authorization token can now be refreshed by expire_token.

3.20.1

  • Added support for application/x-gzip.

3.20.0

  • Renamed shots_set.images to shots_set.frames.

3.18.0

  • Added user sessions API.

  • Users can now change a folder owner (limited by permissions).

  • Changed dependencies rules.

  • Changed the access_token prolongation policy to fix the bug of prolongation happening before the expiration permission check.

3.16.0

  • Moved oz_collection_binding (the collection synchronization functionality) to oz_core.

3.15.3

  • Simplified the shots sets functionality. One archive keeps one shots set.

3.15.2

  • Improved the document sides recognition for the docker version.

3.15.1

  • Moved the orientation tag check to liveness at quality analysis.

3.15.0

  • Added a default report template for Admin and Operator.

3.14.0

  • Updated the biometric model.

3.13.2

  • A new ShotsSet object is not created if there are no photos for it.

  • Updated the data exchange format for the documents' recognition module.

3.13.1

  • You can’t delete a Collection if there are analyses associated with its Collection Persons.

3.13.0

  • Added time marks to analysis: time_task_send_to_broker, time_task_received, time_task_finished.

3.12.0

  • Added a new authorization engine. You can now connect with Active Directory by LDAP (settings configuration required).

3.11.0

  • A new type of media in Folders – "shots_set".

  • You can’t delete a CollectionPerson if there are analyses associated with it.

3.10.0

  • Renamed the folder field resolution_suggest to operator_status.

  • Added a folder text field operator_comment.

  • The folder fields operator_status and operator_comment can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.

  • Only Admin and Client Admin can delete folder, folder media, report template, report template attachments, reports, and analyses (within their company).

3.9.0

  • Fixed a deletion error: when a report author is deleted, their reports get deleted as well.

3.8.1

  • Client can now view only their own profile.

3.8.0

  • Client Operator can now edit only their profile.

  • Client can't delete own folders, media, reports, or analyses anymore.

  • Client Service can now create Collection Person and read reports within their company.

3.7.1

  • Client, Client Admin, and Client Operator have read access to user profiles only in their company.

  • A/B testing is now available.

  • Added support for expiration date header.

  • Added document recognition module Standalone/Dockered binding support.

3.7.0

  • Added a new role of Client Operator (like Client Admin without permissions for company and account management).

  • Client Admin and Client Operator can change the analysis status.

  • Only Admin and Client Admin (for their company) can perform create, update, and delete operations for the Collection and CollectionPerson models from now on.

  • Added a check for user permissions to report template when creating a folder report.

  • Collection creation now returns status code 201 instead of 200.

Media Tags

What Are Tags for

To work properly, the resolution algorithms need each uploaded media to be marked with special tags. For video and images, the tags are different. They help algorithms to identify what should be in the photo or video and analyze the content.

Tags for Video Files

The following tag types should be specified in the system for video files.

  • To identify the data type of the video:

    • video_selfie

  • To identify the orientation of the video:

    • orientation_portrait – portrait orientation;

    • orientation_landscape – landscape orientation.

  • To identify the action on the video:

    • video_selfie_left – head turn to the left;

    • video_selfie_right – head turn to the right;

    • video_selfie_down – head tilt downwards;

    • video_selfie_high – head raised up;

    • video_selfie_smile – smile;

    • video_selfie_eyes – blink;

    • video_selfie_scan – scanning;

    • video_selfie_oneshot – a one-frame analysis;

    • video_selfie_blank – no action.

Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored by algorithms.

Example of the correct tag set for a video file with the “blink” action:

Tags for Photo Files

The following tag types should be specified in the system for photo files:

  • A tag for selfies:

    • photo_selfie – to identify the image type as “selfie”.

  • Tags for photos/scans of ID cards:

    • photo_id – to identify the image type as “ID”;

    • photo_id_front – for the photo of the ID front side;

Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored by algorithms.

Example of the correct tag set for a “selfie” photo file:

Example of the correct tag set for a photo file with the face side of an ID card:

Example of the correct set of tags for a photo file of the back of an ID card:

Depends on

Whether this user is an admin or not

Whether this user account is a service one or not

List of tags for this file

Download and install the Postman client from this page. Then download the JSON file needed:

Metadata is any optional data you might need to add to a . In the meta_data section, you can include any information you want, simply by providing any number of fields with their values:
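For instance, a sketch with arbitrary placeholder keys:

{
    "meta_data": {
        "transaction_id": "your_transaction_id",
        "some_key": "some_value"
    }
}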

You can also change or delete metadata. Please refer to our API documentation.

Another case is security: when you need to process the analyses’ result from your back end, but don’t want to perform this using the folder ID. Add an ID (transaction_id) to this folder and use this ID to search for the required information. This case is thoroughly explained here.

To check the Liveness processor, GET /v1/face/liveness/health.

To check the Biometry processor, GET /v1/face/pattern/health.

To perform the liveness check for an image, POST /v1/face/liveness/detect (it takes an image as an input and returns the estimated chance of a spoofing attack in this image).

To compare two faces in two images, POST /v1/face/pattern/extract_and_compare (it takes two images as an input, derives the biometry templates from these images, and compares them).

To compare an image with a bunch of images, POST /v1/face/pattern/extract_and_compare_n.
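As an illustration, the extract_and_compare call above might look like this from Kotlin (a sketch using the third-party OkHttp client; the host and file names are assumptions):

import okhttp3.MediaType.Companion.toMediaType
import okhttp3.MultipartBody
import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.RequestBody.Companion.asRequestBody
import java.io.File

fun main() {
    val jpeg = "image/jpeg".toMediaType()
    // The two images go into the multipart fields sample_1 and sample_2
    val body = MultipartBody.Builder()
        .setType(MultipartBody.FORM)
        .addFormDataPart("sample_1", "face1.jpg", File("face1.jpg").asRequestBody(jpeg))
        .addFormDataPart("sample_2", "face2.jpg", File("face2.jpg").asRequestBody(jpeg))
        .build()
    val request = Request.Builder()
        .url("http://localhost/v1/face/pattern/extract_and_compare") // assumed host
        .post(body)
        .build()
    OkHttpClient().newCall(request).execute().use { response ->
        println(response.body?.string()) // e.g., {"score": 1.0, "decision": "approved"}
    }
}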

For the full list of Oz API Lite methods, please refer to the API Methods page.

Download and install the Postman client from this page. Then download the JSON file needed:

Introduced the new Liveness detect method that can process videos and archives as well.

Added the version check method.

Implemented the single request mode, which involves creating a folder and executing analyses in a single request by attaching a part of the analysis in the payload.

Implemented the instant analysis mode without storing any data locally or in a database. This mode can be used either with or without other API components.

Updated the API reference: https://apidoc.ozforensics.com/.

Updated the Postman collection. Please see the new collection in the Oz API Lite Postman Collection section above.

The tags listed allow the algorithms to recognize the files as suitable for the Quality (Liveness) and Biometry analyses.

photo_id_back – for the photo of the ID back side (ignored for any other analyses like Quality or Biometry).


Parameter name | Type | Description
core | String | API Lite core version number.
tfss | String | TFSS version number.
models | [String] | An array of model versions; each record contains a model name and a model version number.

200 OK
Content-Type: application/json
{
    "core": "core_version",
    "tfss": "tfss_version",
    "models": [
        {
            "name": "model_name",
            "version": "model_version"
        }
    ]
}

Parameter name | Type | Description
status | Int | 0 – the biometric processor is working correctly. 3 – the biometric processor is inoperative.
message | String | Message.

200 OK
Content-Type: application/json
{"status": 0, "message": ""}

Parameter name | Type | Description
Not specified* | Stream | Required parameter. Image to extract the biometric template from. The “Content-Type” header field must indicate the content type.

POST localhost/v1/face/pattern/extract
Content-Type: image/jpeg
{Image byte stream}

Parameter name | Type | Description
Not specified* | Stream | A biometric template derived from the image.

200 OK
Content-Type: application/octet-stream
{Biometric template byte stream}

Parameter name | Type | Description
bio_feature | Stream | Required parameter. First biometric template.
bio_template | Stream | Required parameter. Second biometric template.

POST localhost/v1/face/pattern/compare
Content-Type: multipart/form-data;
boundary=--BOUNDARY--
Content-Length: Message body length
--BOUNDARY--
Content-Disposition: form-data; name="bio_feature"
Content-Type: application/octet-stream
{Biometric template byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="bio_template"
Content-Type: application/octet-stream
{Biometric template byte stream}
--BOUNDARY--

Parameter name | Type | Description
score | Float | The result of comparing the two templates.
decision | String | Recommended solution based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match.

200 OK
Content-Type: application/json
{"score": 1.0, "decision": "approved"}

Parameter name | Type | Description
sample | Stream | Required parameter. Image to extract the biometric template from.
bio_template | Stream | Required parameter. The biometric template to compare with.

POST localhost/v1/face/pattern/verify
Content-Type: multipart/form-data;
boundary=--BOUNDARY--
Content-Length: Message body length
--BOUNDARY--
Content-Disposition: form-data; name="bio_template"
Content-Type: application/octet-stream
{Biometric template byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="sample"
Content-Type: image/jpeg
{Image byte stream}
--BOUNDARY--

Parameter name | Type | Description
score | Float | The result of comparing the two templates.
bio_feature | Stream | Biometric template derived from the image.

200 OK
Content-Type: multipart/form-data;
boundary=--BOUNDARY--
Content-Length: Message body length
--BOUNDARY--
Content-Disposition: form-data; name="score"
Content-Type: application/json
{"score": 1.0}
--BOUNDARY--
Content-Disposition: form-data; name="bio_feature"
Content-Type: application/octet-stream
{Biometric template byte stream}
--BOUNDARY--

Parameter name | Type | Description
sample_1 | Stream | Required parameter. First image.
sample_2 | Stream | Required parameter. Second image.

POST localhost/v1/face/pattern/extract_and_compare
Content-Type: multipart/form-data;
boundary=--BOUNDARY--
Content-Length: Message body length
--BOUNDARY--
Content-Disposition: form-data; name="sample_1"
Content-Type: image/jpeg
{Image byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="sample_2"
Content-Type: image/jpeg
{Image byte stream}
--BOUNDARY--

Parameter name | Type | Description
score | Float | The result of comparing the two extracted templates.
decision | String | Recommended solution based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match.

200 OK
Content-Type: application/json
{"score": 1.0, "decision": "approved"}

Parameter name | Type | Description
template_1 | Stream | This parameter is mandatory. The first (main) biometric template.
templates_n | Stream | A list of N biometric templates. Each of them should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header.

POST localhost/v1/face/pattern/compare_n
Content-Type: multipart/form-data;
boundary=--BOUNDARY--
Content-Length: message body length
--BOUNDARY--
Content-Disposition: form-data; name="template_1"
Content-Type: application/octet-stream
{biometric template byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="templates_n"; filename="1.template"
Content-Type: application/octet-stream
{biometric template byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="templates_n"; filename="2.template"
Content-Type: application/octet-stream
{biometric template byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="templates_n"; filename="3.template"
Content-Type: application/octet-stream
{biometric template byte stream}
--BOUNDARY--

Parameter name | Type | Description
results | List[JSON] | A list of N comparison results. The Nth result contains the comparison result for the main and Nth templates. Each result has the following fields:
*filename | String | A filename for the Nth template.
*score | Float | The result of comparing the main and Nth templates.
*decision | String | Recommended solution based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match.

200 OK
Content-Type: application/json
{"results": [
    {"filename": "1.template", "score": 0.0, "decision": "declined"},
    {"filename": "2.template", "score": 1.0, "decision": "approved"},
    {"filename": "3.template", "score": 0.21, "decision": "declined"}
]}

Parameter name | Type | Description
sample_1 | Stream | This parameter is mandatory. The main image.
templates_n | Stream | A list of N biometric templates. Each of them should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header.

POST localhost/v1/face/pattern/verify_n
Content-Type: multipart/form-data;
boundary=--BOUNDARY--
Content-Length: message body length
--BOUNDARY--
Content-Disposition: form-data; name="sample_1"
Content-Type: image/jpeg
{image byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="templates_n"; filename="1.template"
Content-Type: application/octet-stream
{biometric template byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="templates_n"; filename="2.template"
Content-Type: application/octet-stream
{biometric template byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="templates_n"; filename="3.template"
Content-Type: application/octet-stream
{biometric template byte stream}
--BOUNDARY--

Parameter name | Type | Description
results | List[JSON] | A list of N comparison results. The Nth result contains the comparison result for the template derived from the main image and the Nth template. Each result has the following fields:
*filename | String | A filename for the Nth template.
*score | Float | The result of comparing the template derived from the main image and the Nth template.
*decision | String | Recommended solution based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match.

200 OK
Content-Type: application/json
{"results": [
    {"filename": "1.template", "score": 0.0, "decision": "declined"},
    {"filename": "2.template", "score": 1.0, "decision": "approved"},
    {"filename": "3.template", "score": 0.21, "decision": "declined"}
]}

Parameter name | Type | Description
sample_1 | Stream | This parameter is mandatory. The first (main) image.
samples_n | Stream | A list of N images. Each of them should be passed separately, but the parameter name should be samples_n. You also need to pass the filename in the header.

POST localhost/v1/face/pattern/extract_and_compare_n
Content-Type: multipart/form-data;
boundary=--BOUNDARY--
Content-Length: message body length
--BOUNDARY--
Content-Disposition: form-data; name="sample_1"
Content-Type: image/jpeg
{image byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="samples_n"; filename="1.jpeg"
Content-Type: image/jpeg
{image byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="samples_n"; filename="2.jpeg"
Content-Type: image/jpeg
{image byte stream}
--BOUNDARY--
Content-Disposition: form-data; name="samples_n"; filename="3.jpeg"
Content-Type: image/jpeg
{image byte stream}
--BOUNDARY--

Parameter name | Type | Description
results | List[JSON] | A list of N comparison results. The Nth result contains the comparison result for the main and Nth images. Each result has the following fields:
*filename | String | A filename for the Nth image.
*score | Float | The result of comparing the main and Nth images.
*decision | String | Recommended solution based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match.

200 OK
Content-Type: application/json
{"results": [
    {"filename": "1.jpeg", "score": 0.0, "decision": "declined"},
    {"filename": "2.jpeg", "score": 1.0, "decision": "approved"},
    {"filename": "3.jpeg", "score": 0.21, "decision": "declined"}
]}

HTTP response code | The value of the “code” parameter | Description
400 | BPE-002001 | Invalid Content-Type of HTTP request
400 | BPE-002002 | Invalid HTTP request method
400 | BPE-002003 | Failed to read the biometric sample*
400 | BPE-002004 | Failed to read the biometric template
400 | BPE-002005 | Invalid Content-Type of the multiparted HTTP request part
400 | BPE-003001 | Failed to retrieve the biometric template
400 | BPE-003002 | The biometric sample* is missing a face
400 | BPE-003003 | More than one person is present in the biometric sample*
500 | BPE-001001 | Internal bioprocessor error
400 | BPE-001002 | TFSS error. Call the biometry health method.

Parameter name | Type | Description
status | Int | 0 – the liveness processor is working correctly. 3 – the liveness processor is inoperative.
message | String | Message.

200 OK
Content-Type: application/json
{"status": 0, "message": ""}
POST /v1/face/liveness/detect HTTP/1.1
Host: localhost
Content-Type: image/jpeg
Content-Length: [the size of the message body]
[Image byte stream]
HTTP/1.1 200 OK
Content-Type: application/json
{
  "passed": false,
  "score": 0.999484062
}
POST /v1/face/liveness/detect HTTP/1.1
Host: localhost
Content-Length: [the size of the message body]
Content-Type: multipart/form-data; boundary=--BOUNDARY--

--BOUNDARY--
Content-Disposition: form-data; name="media_key1"; filename="video.mp4"
Content-Type: multipart/form-data; 

[media file byte stream]
--BOUNDARY--
Content-Disposition: form-data; name="payload"

    {
        "folder:meta_data": {
            "partner_side_folder_id": "partner_side_folder_id_if_needed",
            "person_info": {
                "first_name": "John",
                "middle_name": "Jameson",
                "last_name": "Doe"
            }
        },
        "resolution_endpoint": "https://www.your-custom-endpoint.com",
        "media:meta_data": {
            "media_key1": {
                "foo": "bar2"
            }
        },
        "media:tags": {
            "media_key1": [
                "video_selfie",
                "video_selfie_blank"
            ]
        },
        "analyses": [
          {
            "type": "quality",
            "meta_data": {
              "example1": "some_example1"
            },
            "params": {
                "threshold_spoofing": 0.6,
                "extract_best_shot": false
            }
          }
]
    }
--BOUNDARY--
{
    "company_id": null,
    "time_created": 1720180784.769608,
    "folder_id": "folder_id", // temporary ID
    "user_id": null,
    "resolution_endpoint": "https://www.your-custom-endpoint.com",
    "resolution_status": "FINISHED",
    "resolution_comment": "[]",
    "system_resolution": "SUCCESS",
    "resolution_time": null,
    "resolution_author_id": null,
    "resolution_state_hash": null,
    "operator_comment": null,
    "operator_status": null,
    "is_cleared": null,
    "meta_data": {
        "partner_side_folder_id": "partner_side_folder_id_if_needed",
        "person_info": {
            "first_name": "John",
            "middle_name": "Jameson",
            "last_name": "Doe"
        }
    },
    "technical_meta_data": {},
    "time_updated": 1720180787.531983,
    "media": [
        {
            "folder_id": "folder_id", // temporary ID
            "media_id": "video_id", // temporary ID
            "media_type": "VIDEO_FOLDER",
            "info": {
                "thumb": null,
                "video": {
                    "duration": 3.76,
                    "FPS": 22.83,
                    "width": 960,
                    "height": 720,
                    "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                    "size": 6017119,
                    "mime-type": "video/mp4"
                }
            },
            "tags": [
                "video_selfie",
                "video_selfie_blank",
                "orientation_portrait"
            ],
            "original_name": "video-5mb.mp4",
            "original_url": null,
            "company_id": null,
            "technical_meta_data": {},
            "time_created": 1719573752.78253,
            "time_updated": 1720180787.531801,
            "meta_data": {
                "foo4": "bar5"
            },
            "thumb_url": null,
            "folder_time_created": null,
            "video_id": "video_id", // temporary ID
            "video_url": null
        }
    ],
    "analyses": [
        {
            "analyse_id": null,
            "analysis_id": null,
            "folder_id": "folder_id", // temporary ID
            "folder_time_created": null,
            "type": "QUALITY",
            "state": "FINISHED",
            "company_id": null,
            "group_id": null,
            "results_data": null,
            "confs": {
                "threshold_replay": 0.5,
                "extract_best_shot": false,
                "threshold_liveness": 0.5,
                "threshold_spoofing": 0.42
            },
            "error_message": null,
            "error_code": null,
            "resolution_operator": null,
            "technical_meta_data": {},
            "time_created": 1720180784.769944,
            "time_updated": 1720180787.531877,
            "meta_data": {
                "some_key": "some_value"
            },
            "source_media": [
                {
                    "folder_id": "folder_id", // temporary ID
                    "media_id": "video_id", // temporary ID
                    "media_type": "VIDEO_FOLDER",
                    "info": {
                        "thumb": null,
                        "video": {
                            "duration": 3.76,
                            "FPS": 22.83,
                            "width": 960,
                            "height": 720,
                            "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                            "size": 6017119,
                            "mime-type": "video/mp4"
                        }
                    },
                    "tags": [
                        "video_selfie",
                        "video_selfie_blank",
                        "orientation_portrait"
                    ],
                    "original_name": "video-5mb.mp4",
                    "original_url": null,
                    "company_id": null,
                    "technical_meta_data": {},
                    "time_created": 1719573752.78253,
                    "time_updated": 1720180787.531801,
                    "meta_data": {
                        "foo4": "bar5"
                    },
                    "thumb_url": null,
                    "folder_time_created": null,
                    "video_id": "video_id", // temporary ID
                    "video_url": null
                }
            ],
            "results_media": [
                {
                    "company_id": null,
                    "media_association_id": "video_id", // temporary ID
                    "analysis_id": null,
                    "results_data": {
                        "confidence_spoofing": 0.000541269779
                    },
                    "source_media_id": "video_id", // temporary ID
                    "output_images": [],
                    "collection_persons": [],
                    "folder_time_created": null
                }
            ],
            "resolution_status": "SUCCESS",
            "resolution": "SUCCESS"
        }
    ]
}
POST /v1/face/liveness/detect HTTP/1.1
Host: localhost
Content-Length: [the size of the message body]
Content-Type: multipart/form-data; boundary=--BOUNDARY--

--BOUNDARY--
Content-Disposition: form-data; name="media_key1"; filename="video.mp4"
Content-Type: multipart/form-data; 

[media file byte stream]
--BOUNDARY--
Content-Disposition: form-data; name="payload"

    {
        "folder:meta_data": {
            "partner_side_folder_id": "partner_side_folder_id_if_needed",
            "person_info": {
                "first_name": "John",
                "middle_name": "Jameson",
                "last_name": "Doe"
            }
        },
        "resolution_endpoint": "https://www.your-custom-endpoint.com",
        "media:meta_data": {
            "media_key1": {
                "foo": "bar2"
            }
        },
        "media:tags": {
            "media_key1": [
                "video_selfie",
                "video_selfie_blank"
            ]
        },
        "analyses": [
          {
            "type": "quality",
            "meta_data": {
              "example1": "some_example1"
            },
            "params": {
                "threshold_spoofing": 0.6,
                "extract_best_shot": true
            }
          }
]
    }
--BOUNDARY--
{
    "company_id": null,
    "time_created": 1720177371.120899,
    "folder_id": "folder_id", // temporary ID
    "user_id": null,
    "resolution_endpoint": "https://www.your-custom-endpoint.com",
    "resolution_status": "FINISHED",
    "resolution_comment": "[]",
    "system_resolution": "SUCCESS",
    "resolution_time": null,
    "resolution_author_id": null,
    "resolution_state_hash": null,
    "operator_comment": null,
    "operator_status": null,
    "is_cleared": null,
    "meta_data": {
        "partner_side_folder_id": "partner_side_folder_id_if_needed",
        "person_info": {
            "first_name": "John",
            "middle_name": "Jameson",
            "last_name": "Doe"
        }
    },
    "technical_meta_data": {},
    "time_updated": 1720177375.531137,
    "media": [
        {
            "folder_id": "folder_id", // temporary ID
            "media_id": "media_id", // temporary ID
            "media_type": "VIDEO_FOLDER",
            "info": {
                "thumb": null,
                "video": {
                    "duration": 3.76,
                    "FPS": 22.83,
                    "width": 960,
                    "height": 720,
                    "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                    "size": 6017119,
                    "mime-type": "video/mp4"
                }
            },
            "tags": [
                "video_selfie",
                "video_selfie_blank",
                "orientation_portrait"
            ],
            "original_name": "video-5mb.mp4",
            "original_url": null,
            "company_id": null,
            "technical_meta_data": {},
            "time_created": 1719573752.781861,
            "time_updated": 1720177373.772401,
            "meta_data": {
                "foo4": "bar5"
            },
            "thumb_url": null,
            "folder_time_created": null,
            "video_id": "media_id", // temporary ID
            "video_url": null
        }
    ],
    "analyses": [
        {
            "analyse_id": null,
            "analysis_id": null,
            "folder_id": "folder_id", // temporary ID
            "folder_time_created": null,
            "type": "QUALITY",
            "state": "FINISHED",
            "company_id": null,
            "group_id": null,
            "results_data": null,
            "confs": {
                "threshold_replay": 0.5,
                "extract_best_shot": true,
                "threshold_liveness": 0.5,
                "threshold_spoofing": 0.42
            },
            "error_message": null,
            "error_code": null,
            "resolution_operator": null,
            "technical_meta_data": {},
            "time_created": 1720177371.121241,
            "time_updated": 1720177375.531043,
            "meta_data": {
                "some_key": "some_value"
            },
            "source_media": [
                {
                    "folder_id": "folder_id", // temporary ID
                    "media_id": "media_id", // temporary ID
                    "media_type": "VIDEO_FOLDER",
                    "info": {
                        "thumb": null,
                        "video": {
                            "duration": 3.76,
                            "FPS": 22.83,
                            "width": 960,
                            "height": 720,
                            "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                            "size": 6017119,
                            "mime-type": "video/mp4"
                        }
                    },
                    "tags": [
                        "video_selfie",
                        "video_selfie_blank",
                        "orientation_portrait"
                    ],
                    "original_name": "video-5mb.mp4",
                    "original_url": null,
                    "company_id": null,
                    "technical_meta_data": {},
                    "time_created": 1719573752.781861,
                    "time_updated": 1720177373.772401,
                    "meta_data": {
                        "foo4": "bar5"
                    },
                    "thumb_url": null,
                    "folder_time_created": null,
                    "video_id": "media_id", // temporary ID
                    "video_url": null
                }
            ],
            "results_media": [
                {
                    "company_id": null,
                    "media_association_id": "media_id", // temporary ID
                    "analysis_id": null,
                    "results_data": {
                        "confidence_spoofing": 0.000541269779
                    },
                    "source_media_id": "media_id", // temporary ID
                    "output_images": [
                        {
                            "folder_id": "folder_id", // temporary ID
                            "media_id": "media_id", // temporary ID
                            "media_type": "IMAGE_RESULT_ANALYSIS_SINGLE",
                            "info": {
                                "thumb": null,
                                "original": {
                                    "md5": "e6effeceb94e79b8cb204c6652283b57",
                                    "width": 720,
                                    "height": 960,
                                    "size": 145178,
                                    "mime-type": "image/jpeg"
                                }
                            },
                            "tags": [],
                            "original_name": "<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=720x960 at 0x766811DF8E90>",
                            "original_url": null,
                            "company_id": null,
                            "technical_meta_data": {},
                            "time_created": 1719573752.781861,
                            "time_updated": 1719573752.781871,
                            "meta_data": null,
                            "folder_time_created": null,
                            "image_b64": "",
                            "media_association_id": "media_id" // temporary ID
                        }
                    ],
                    "collection_persons": [],
                    "folder_time_created": null
                }
            ],
            "resolution_status": "SUCCESS",
            "resolution": "SUCCESS"
        }
    ]
}
{
    "folder:meta_data": {
        "partner_side_folder_id": "partner_side_folder_id_if_needed",
        "person_info": {
            "first_name": "John",
            "middle_name": "Jameson",
            "last_name": "Doe"
        }   },
    "resolution_endpoint": "https://www.your-custom-endpoint.com",
    "media:meta_data": {
        "media_key1": {
            "foo": "bar2",
            "additional_info": "additional_info" // might affect the score
        },
        "media_key2": {
            "foo2": "bar3"
        },
        "media_key3": {
            "foo4": "bar5"
        }
    },
    "media:tags": {
        "media_key1": [
            "video_selfie",
            "video_selfie_blank",
            "orientation_portrait"
        ],
        "media_key2": [
            "photo_selfie"
        ],
        "media_key3": [
            "video_selfie",
            "video_selfie_blank",
            "orientation_portrait"
        ]
    },
"analyses": [
    {
      "type": "quality",
      "meta_data": {
        "some_key": "some_value"
      },
      "params": {
      	"threshold_spoofing": 0.42, // affects resolution
      	"extract_best_shot":true // analysis will return the best shot
      }
    }
  ]
}

HTTP response code | The value of the “code” parameter | Description
400 | LDE-002001 | Invalid Content-Type of HTTP request
400 | LDE-002002 | Invalid HTTP request method
400 | LDE-002004 | Failed to extract the biometric sample*
400 | LDE-002005 | Invalid Content-Type of the multiparted HTTP request part
500 | LDE-001001 | Liveness detection processor internal error
400 | LDE-001002 | TFSS error. Call the Liveness health method.

"video1": [
  "video_selfie",
  "video_selfie_eyes",
  "orientation_portrait"
]
"photo1": [
  "photo_selfie"
]
"photo1": [
  "photo_id",
  "photo_id_front"
]
"photo1": [
  "photo_id",
  "photo_id_back"
]

How to Restore the Previous Design after an Update

If you want to get back to the previous (up to 6.4.2) versions' design, reset the customization settings of the capture screen and apply the parameters that are listed below.

OzLivenessSDK.config.customization = UICustomization(
    // customization parameters for the toolbar
    toolbarCustomization = ToolbarCustomization(
        closeIconTint = Color.ColorHex("#FFFFFF"),
        backgroundColor = Color.ColorHex("#000000"),
        backgroundAlpha = 100,
    ),
    // customization parameters for the center hint
    centerHintCustomization = CenterHintCustomization(
        verticalPosition = 70
    ),
    // customization parameters for the hint animation
    hintAnimation = HintAnimation(
        hideAnimation = true
    ),
    // customization parameters for the frame around the user face
    faceFrameCustomization = FaceFrameCustomization(
        strokeDefaultColor = Color.ColorHex("#EC574B"),
        strokeFaceInFrameColor = Color.ColorHex("#00FF00"),
        strokeWidth = 6,
    ),
    // customization parameters for the background outside the frame
    backgroundCustomization = BackgroundCustomization(
        backgroundAlpha = 100
    ),
)
OzLivenessSDK.INSTANCE.getConfig().setCustomization(new UICustomization(
    // customization parameters for the toolbar
    new ToolbarCustomization(
        R.drawable.ib_close,
        new Color.ColorHex("#FFFFFF"),
        new Color.ColorHex("#000000"),
        100 // toolbar text opacity (in %)
    ),
    // customization parameters for the center hint
    new CenterHintCustomization(
        70 // vertical position (in %)
    ),
    // customization parameters for the hint animation
    new HintAnimation(
        true // hide animation
    ),
    // customization parameters for the frame around the user face
    new FaceFrameCustomization(
        new Color.ColorHex("#EC574B"),
        new Color.ColorHex("#00FF00"),
        6 // frame stroke width (in dp)
    ),
    // customization parameters for the background outside the frame
    new BackgroundCustomization(
        100 // background opacity (in %)
    )
));
Android

Connecting SDK to API

OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(host, token))
OzLivenessSDK.INSTANCE.setApiConnection(
        OzConnection.Companion.fromServiceToken(host, token), 
        null
);

Please note: in your host application, it is recommended that you set the API address on the screen that precedes the liveness check. Setting the API URL initiates a service call to the API, which may cause excessive server load if done at application initialization or startup.
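For instance, you can set the connection when the preceding screen is created (a sketch in the style of the snippets above; PreLivenessActivity, host, and token are placeholders from your app, and SDK imports are omitted):

class PreLivenessActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // Setting the connection here, right before the liveness screen,
        // avoids the extra service call at application startup
        OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(host, token))
    }
}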

Alternatively, you can use the login and password provided by your Oz Forensics account manager:

OzLivenessSDK.setApiConnection(
    OzConnection.fromCredentials(host, username, password),
    statusListener(
        { token -> /* token */ },
        { ex -> /* error */ }
    )
)
OzLivenessSDK.INSTANCE.setApiConnection(
        OzConnection.Companion.fromCredentials(host, username, password),
        new StatusListener<String>() {
            @Override
            public void onStatusChanged(@Nullable String s) {}
            @Override
            public void onSuccess(String token) { /* token */ }
            @Override
            public void onError(@NonNull OzException e) { /* error */ }
        }
);

However, for security reasons, the preferred option is authentication via an access token.

To send telemetry data to the telemetry server, set the events connection:

OzLivenessSDK.setEventsConnection(
    OzConnection.fromCredentials(
        "https://tm.ozforensics.com/",
        "<your_telemetry_user_eg_tm@company.com>",
        "your_telemetry_password"
    )
)
OzLivenessSDK.setEventsConnection(
        OzConnection.fromCredentials(
                "https://tm.ozforensics.com/",
                "<your_telemetry_user_eg_tm@company.com>",
                "your_telemetry_password"
        )
);

Clearing authorization:

OzLivenessSDK.setApiConnection(null)
OzLivenessSDK.INSTANCE.setApiConnection(null, null);

Other Methods

Check for the presence of the saved Oz API access token:

val isLoggedIn = OzLivenessSDK.isLoggedIn
boolean isLoggedIn = OzLivenessSDK.INSTANCE.isLoggedIn();

LogOut:

OzLivenessSDK.logout()
OzLivenessSDK.INSTANCE.logout();

On-Device Mode

The common way for Oz Mobile SDK to work is the server-based mode, where the Liveness and Biometry analyses are performed on a server, as shown in the scheme below:

But there's also an option to perform checks without server calls or even an Internet connection. This option is called on-device analysis.

On-device analyses are faster and more secure, as all data is processed directly on the device and nothing is sent anywhere. In this case, you don’t need a server at all, nor do you need the API connection.

However, the API connection might be needed for some additional functions like telemetry or server-side SDK configuration.

The on-device analysis mode is useful when:

  • you do not collect, store or process personal data;

  • you need to identify a person quickly regardless of network conditions such as a distant region, inside a building, underground, etc.;

  • you’re on a tight budget as you can save money on the hardware part.

Launching the On-Device Analysis

To launch the on-device check, set the appropriate mode for Android or iOS SDK.

Android:

analysisCancelable = AnalysisRequest.Builder()
    .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.ON_DEVICE, mediaToAnalyze))
analysisCancelable = new AnalysisRequest.Builder()
    .addAnalysis(new Analysis(Analysis.Type.QUALITY, Analysis.Mode.ON_DEVICE, mediaList)) 

iOS:

let analysis = Analysis.init(media: mediaToAnalyze, type: .quality, mode: .onDevice)

Adding SDK to a Project

Add the following URL to the build.gradle of the project:

allprojects {
  repositories {
    maven { url "https://ozforensics.jfrog.io/artifactory/main" }
  }
}

for the server-based version only

Please note: this is the default version.

dependencies {
  implementation 'com.ozforensics.liveness:sdk:VERSION'
}

for both server-based and on-device versions

Please note: the resulting file will be larger.

dependencies {
implementation 'com.ozforensics.liveness:full:VERSION'
}

Also, regardless of the mode chosen, add:

android {
  compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
  }
}

Android

To start using Oz Android SDK, follow the steps below.

Resources

Recommended Android version: 5+ (the newer the smartphone is, the faster the analyses are).

Recommended versions of components:

Component | Recommended version
Gradle | 7.5.1
Kotlin | 1.7.21
AGP | 7.3.1
Java Target Level | 1.8
JDK | 17

We do not support emulators.

Available languages: EN, ES, HY, KK, KY, TR, PT-BR.

To obtain the sample apps source code for the Oz Liveness SDK, proceed to the GitLab repository:

Follow the link below to see a list of SDK methods and properties:

Getting a License for Android SDK

To pass your license file to the SDK, call the OzLivenessSDK.init method with a list of LicenseSources. Use one of the following:

  • LicenseSource.LicenseAssetId should contain a path to a license file called forensics.license, which has to be located in the project's res/raw folder.

  • LicenseSource.LicenseFilePath should contain a file path to the place in the device's storage where the license file is located.

OzLivenessSDK.init(context,
    listOf(
        LicenseSource.LicenseAssetId(R.raw.your_license_name),
        LicenseSource.LicenseFilePath("absolute_path_to_your_license_file")
    ),
    object : StatusListener<LicensePayload> {
        override fun onSuccess(result: LicensePayload) { /*check the license payload*/ }
        override fun onError(error: OzException) { /*handle the exception */ }
    }
  )
OzLivenessSDK.INSTANCE.getConfig().setBaseURL(BASE_URL);
OzLivenessSDK.INSTANCE.init(context,
    Arrays.asList(
        new LicenseSource.LicenseAssetId(R.raw.forensics),
        new LicenseSource.LicenseFilePath("absolute_path_to_your_license_file")
    ),
    new StatusListener<LicensePayload>() {
        @Override public void onStatusChanged(@Nullable String s) {}
        @Override public void onSuccess(LicensePayload licensePayload) { /*check the license payload*/ }
        @Override public void onError(@NonNull OzException e) { /*handle the exception */ }
    }
);

In case of any license errors, the onError function is called. Use it to handle the exception as shown above. Otherwise, the system returns information about the license. To check the license data manually, use the getLicensePayload method.
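For instance (a sketch in the style of the snippets above; how you inspect the payload is up to you):

// read the license data manually after a successful initialization
val payload = OzLivenessSDK.getLicensePayload()
Log.d("OzLicense", payload.toString())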

Possible License Errors

Error message | What to do
License error. License at (your_URI) not found | The license file is missing. Please check its name and the path to the file.
License error. Cannot parse license from (your_URI), invalid format | The license file is somehow damaged. Please email us the file.
License error. Bundle company.application.id is not in the list allowed by license (bundle.id1, bundle.id2) | The bundle (application) identifier you specified is missing from the allowed list. Please check the spelling; if it is correct, you need to get another license for your application.
License error. Current date yyyy-mm-dd hh:mm:ss is later than license expiration date yyyy-mm-dd hh:mm:ss | Your license has expired. Please contact us.
License is not initialized. Call 'OzLivenessSDK.init' before using SDK | You haven't initialized the license. Call OzLivenessSDK.init with your license data as explained above.

Master License for Android

A master license is an offline license that allows using Mobile SDKs with any bundle_id, unlike regular licenses. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that. Your application needs to sign its bundle_id with the private key, and the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.

Generating Keys

This section describes the process of creating your private and public keys.

Creating a Private Key

To create a private key, run the commands below one by one.

openssl genpkey -algorithm RSA -outform DER -out privateKey.der -pkeyopt rsa_keygen_bits:2048
# for MacOS
base64 -i privateKey.der -o privateKey.txt
# for Linux 
base64 -w 0 privateKey.der > privateKey.txt

You will get these files:

  • privateKey.der is a private .der key;

  • privateKey.txt is privateKey.der encoded in base64. Its content will be used to sign the host app bundle_id.

File examples:

Creating a Public Key

To create a public key, run this command.

openssl rsa -pubout -inform DER -in privateKey.der -out publicKey.pub

You will get the public key file: publicKey.pub. To get a license, please email us this file. We will email you the license.

File example:

SDK Integration

SDK initialization:

fun init(
    context: Context,
    licenseSources: List<LicenseSource>,
    masterLicenseSignature: String,
    statusListener: StatusListener<LicensePayload>? = null,
)

For Android 6.0 (API level 23) and older:

  1. Add the implementation 'com.madgag.spongycastle:prov:1.58.0.0' dependency;

  2. Before creating a signature, call Security.insertProviderAt(org.spongycastle.jce.provider.BouncyCastleProvider(), 1)

Prior to initializing the SDK, create a base64-encoded signature for the host app's bundle_id using the private key.

Signature creation example:

private fun getMasterSignature(): String {
    Security.insertProviderAt(org.spongycastle.jce.provider.BouncyCastleProvider(), 1)

    val privateKeyBase64String = "the string copied from the privateKey.txt file"
    // with key example:
    // val privateKeyBase64String = "MIIEpAIBAAKCAQEAxnpv02nNR34uNS0yLRK1o7Za2hs4Rr0s1V1/e1JZpCaK8o5/3uGV+qiaTbKqU6x1tTrlXwE2BRzZJLLQdTfBL/rzqVLQC/n+kAmvsqtHMTUqKquSybSTY/zAxqHF3Fk59Cqisr/KQamPh2tmg3Gu61rr9gU1rOglnuqt7FioNMCMvjW7ciPv+jiawLxaPrzNiApLqHVN+xCFh6LLb4YlGRaNUXlOgnoLGWSQEsLwBZFkDJDSLTJheNVn9oa3PXg4OIlJIPlYVKzIDDcSTNKdzM6opkS5d+86yjI1aTKEH3Zs64+QoEuoDfXUxS3TOUFx8P+wfjOR5tYAT+7TRN4ocwIDAQABAoIBAATWJPV05ZCxbXTURh29D/oOToZ0FVn78CS+44Vgy1hprAcfG9SVkK8L/r6X9PiXAkNJTR+Uivly64Oua8//bNC7f8aHgxRXojFmWwayj8iOMBncFnad1N2h4hy1AnpNHlFp3I8Yh1g0RpAZOOVJFucbTxaup9Ev0wLdWyGgQ3ENmRXAyLU5iUDwUSXg59RCBFKcmsMT2GmmJt1BU4P3lL9KVyLBktqeDWR/l5K5y8pPo6K7m9NaOkynpZo+mHVoOTCtmTj5TC/MH9YRHlF15VxQgBbZXuBPxlYoQCsMDEcZlMBWNw3cNR6VBmGiwHIc/tzSHZVsbY0VRCYEbxhCBZkCgYEA+Uz0VYKnIWViQF2Na6LFuqlfljZlkOvdpU4puYTCdlfpKNT3txYzO0T00HHY9YG9k1AW78YxQwsopOXDCmCqMoRqlbn1SBe6v49pVB85fPYU2+L+lftpPlx6Wa0xcgzwOBZonHb4kvp1tWhUH+B5t27gnvRz/rx5jV2EfmWinycCgYEAy8/aklZcgoXWf93N/0EZcfzQo90LfftkKonpzEyxSzqCw7B9fHY68q/j9HoP4xgJXUKbx1Fa8Wccc0DSoXsSiQFrLhnT8pE2s1ZWvPaUqyT5iOZOW6R+giFSLPWEdwm6+BeFoPQQFHf8XH3Z2QoAepPrEPiDoGN1GSIXcCwoe9UCgYEAgoKj4uQsJJKT1ghj0bZ79xVWQigmEbE47qI1u7Zhq1yoZkTfjcykc2HNHBaNszEBks45w7qo7WU5GOJjsdobH6kst0eLvfsWO9STGoPiL6YQE3EJQHFGjmwRbUL7AK7/Tw2EJG0wApn150s/xxRYBAyasPxegTwgEj6j7xu7/78CgYEAxbkI52zG5I0o0fWBcf9ayx2j30SDcJ3gx+/xlBRW74986pGeu48LkwMWV8fO/9YCx6nl7JC9dHI+xIT/kk8OZUGuFBRUbP95nLPHBB0Hj50YRDqBjCBh5qaizSEGeGFFNIfFSKddri3U8nnZTNiKLGCx7E3bjE7QfCh5qoX8ZF0CgYAtsEPTNKWZKA23qTFI+XAg/cVZpbSjvbHDSE8QB6X8iaKJFXbmIC0LV5tQO/KT4sK8g40m2N9JWUnaryTiXClaUGU3KnSlBdkIA+I77VvMKMGSg+uf4OdfJvvcs4hZTqZRdTm3dez8rsUdiW1cX/iI/dJxF4964YIFR65wL+SoRg=="
    val sig = Signature.getInstance("SHA512WithRSA")
    val keySpec = PKCS8EncodedKeySpec(Base64.decode(privateKeyBase64String, Base64.DEFAULT))
    val keyFactory = KeyFactory.getInstance("RSA")
    sig.initSign(keyFactory.generatePrivate(keySpec))
    sig.update(packageName.toByteArray(Charsets.UTF_8))
    return Base64.encodeToString(sig.sign(), Base64.DEFAULT).replace("\n", "")
}
Signature example for com.ozforensics.liveness.demo

Please note: this signature does not match the pair of keys listed above.

KohJ1rsUgLMzZHpHGAZDK2efHPnMj9tw9VIedBLvyZt0B2JH3SWfJLJ8X6JNz3bR2sce6PR2wdEIFln0r1pUnD+6WBCgexKIHAv7esiRVQZoZOEANDBwDvJVv73H/0qL2LGlhxKzbBg5CxGPClTBQdLo1P+7HsTXHHG/Hf6m3rdu1OUeGXVPoaS2NzE8kiRH6gb8Nhr7PBLTUeMKTeLoiX13hvwjOqhV1ANhgS97T4hC2+ZilZt4RektgRY/+fGmWnOqErNeYuz/WSInfaJS0YEWhJW3gXKPjdCzNGIBIqbxaFSjU46wu/alh2+tBRFnrYFl1dRQVcTlW0VwwZHcug==

Pass the signature as the masterLicenseSignature parameter during the SDK initialization.

If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_id values included in the license, as it does by default without a master license.
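A sketch of initialization with a master license, combining the init signature above with the getMasterSignature() helper shown earlier (the license resource name and the listener bodies are placeholders):

OzLivenessSDK.init(
    context,
    listOf(LicenseSource.LicenseAssetId(R.raw.forensics)),
    getMasterSignature(), // the base64-encoded bundle_id signature created above
    object : StatusListener<LicensePayload> {
        override fun onSuccess(result: LicensePayload) { /* check the license payload */ }
        override fun onError(error: OzException) { /* handle the exception */ }
    }
)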

Capturing Videos

To start recording, use the startActivityForResult method.

Kotlin:

val intent = OzLivenessSDK.createStartIntent(listOf(OzAction.Smile, OzAction.Blank))
startActivityForResult(intent, REQUEST_CODE)

Java:

List<OzAction> actions = Arrays.asList(OzAction.Smile, OzAction.Scan);
Intent intent = OzLivenessSDK.createStartIntent(actions);
startActivityForResult(intent, REQUEST_CODE);
For Fragment (Kotlin):

childFragmentManager.beginTransaction()
    .replace(R.id.content, LivenessFragment.create(actions))
    .commit()
// subscribing to the Fragment result
childFragmentManager.setFragmentResultListener(OzLivenessSDK.Extra.REQUEST_CODE, this) { _, result ->
    when (result.getInt(OzLivenessSDK.Extra.EXTRA_RESULT_CODE)) {
        OzLivenessResultCode.SUCCESS -> { /* start analysis */ }
        else -> { /* show error */ }
    }
}

For Fragment (Java):

getSupportFragmentManager().beginTransaction()
        .replace(R.id.content, LivenessFragment.Companion.create(actions, null, null, false))
        .addToBackStack(null)
        .commit();
// subscribing to the Fragment result
getSupportFragmentManager().setFragmentResultListener(OzLivenessSDK.Extra.REQUEST_CODE, this, (requestKey, result) -> {
    switch (result.getInt(OzLivenessSDK.Extra.EXTRA_RESULT_CODE)) {
        case OzLivenessResultCode.SUCCESS: { /* start analysis */ break; }
        default: { /* show error */ }
    }
});

To obtain the captured video, use the onActivityResult method:

Kotlin:

override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (requestCode == REQUEST_CODE) {
        sdkMediaResult = OzLivenessSDK.getResultFromIntent(data)
        sdkErrorString = OzLivenessSDK.getErrorFromIntent(data)
    }
}

Java:

@Override
protected void onActivityResult(int requestCode, int resultCode, @androidx.annotation.Nullable Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_CODE) {
        List<OzAbstractMedia> sdkMediaResult = OzLivenessSDK.INSTANCE.getResultFromIntent(data);
        String sdkErrorString = OzLivenessSDK.INSTANCE.getErrorFromIntent(data);
    }
}
If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.

If a user closes the capturing screen manually, resultCode receives the Activity.RESULT_CANCELED value.

Code example:

when (resultCode) {
    Activity.RESULT_CANCELED -> { /* the user closed the screen */ }
    OzLivenessResultCode.SUCCESS -> {
        val sdkMediaResult = OzLivenessSDK.getResultFromIntent(data)
        /* success */
    }
    else -> {
        val errorMessage = OzLivenessSDK.getErrorFromIntent(data)
        /* failure */
    }
}

Checking Liveness and Face Biometry

If you use our SDK just for capturing videos, omit this step.

To check liveness and face biometry, you need to upload media to our system and then analyze them.

Here’s an example of performing a check:

Kotlin:

analysisCancelable = AnalysisRequest.Builder()
    // mediaToAnalyze is an array of OzAbstractMedia that were captured or otherwise created
    .addAnalysis(Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaToAnalyze)) // or ON_DEVICE if you want the on-device analysis
    .build()
    // initiating the analyses and setting up a listener
    .run(object : AnalysisRequest.AnalysisListener {
        override fun onStatusChange(status: AnalysisRequest.AnalysisStatus) { handleStatus(status) } // or your status handler
        override fun onSuccess(result: RequestResult) { handleResults(result) } // or your result handler
        override fun onError(error: OzException) { handleError(error) } // or your error handler
    })

Java:

analysisCancelable = new AnalysisRequest.Builder()
        // mediaToAnalyze is an array of OzAbstractMedia that were captured or otherwise created
        .addAnalysis(new Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaToAnalyze)) // or ON_DEVICE if you want the on-device analysis
        .build()
        // initiating the analyses and setting up a listener
        .run(new AnalysisRequest.AnalysisListener() {
            @Override
            public void onSuccess(@NonNull RequestResult list) { handleResults(list); } // or your result handler
            @Override
            public void onError(@NonNull OzException e) { handleError(e); } // or your error handler
            @Override
            public void onStatusChange(@NonNull AnalysisRequest.AnalysisStatus analysisStatus) { handleStatus(analysisStatus); } // or your status handler
        });

To delete media files after the checks are finished, use the clearActionVideos method.
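For example:

// delete the captured action videos from the file system once the analyses are finished
OzLivenessSDK.clearActionVideos()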

Adding Metadata

To add metadata to a folder, use the addFolderMeta method.

Kotlin:

    .addFolderMeta(
        mapOf(
            "key1" to "value1",
            "key2" to "value2"
        )
    )

Java:

    .addFolderMeta(Collections.singletonMap("key", "value"))

Extracting the Best Shot

In the params field of the Analysis structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.

mapOf("extract_best_shot" to true)

Using Media from Another SDK

val file = File(context.filesDir, "media.mp4") // use context.getExternalFilesDir(null) instead of context.filesDir for external app storage
val media = OzAbstractMedia.OzVideo(OzMediaTag.VideoSelfieSmile, file.absolutePath)

Adding Media to a Certain Folder

If you want to add your media to the existing folder, use the setFolderId method:

    .setFolderId(folderId)

iOS

To start using Oz iOS SDK, follow the steps below.

Resources

Minimum iOS version: 11.

Minimum Xcode version: 16.

Available languages: EN, ES, HY, KK, KY, TR, PT-BR.

A sample app source code using the Oz Liveness SDK is located in the GitLab repository:

Follow the link below to see a list of SDK methods and properties:

Android Localization: Adding a Custom or Updating an Existing Language Pack

Please note: this feature has been implemented in 8.1.0.

To add or update the language pack for Oz Android SDK, please follow these instructions:

The localization record consists of the localization key and its string value, e.g., <string name="about">"About"</string>.

  1. Create the file called strings.xml.

  2. Copy the strings from the attached file to your freshly created file.

  3. Redefine the strings you need in the appropriate localization records.

A list of keys for Android:

The keys action_*_go refer to the appropriate gestures. Others refer to the hints for any gesture, info messages, or errors.

When new keys appear with new versions, if no translation is provided in your file, the new strings are shown in English.
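For example, a minimal strings.xml that redefines a single key (the about key is taken from the record example above; all keys you don't redefine keep their default values):

<?xml version="1.0" encoding="utf-8"?>
<resources>
    <!-- redefine only the keys you need -->
    <string name="about">"About us"</string>
</resources>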

Changelog

Android SDK changes

8.16.3 – Apr. 08, 2025

  • Security updates.

8.16.2 – Mar. 19, 2025

  • Resolved the issue with possible SDK crashes when closing the Liveness screen.

8.16.1 – Mar. 14, 2025

  • Security updates.

8.16 – Mar. 11, 2025

  • Updated the authorization logic.

  • Improved voiceover.

  • Fixed the issue with SDK lags and the non-responding error that users might have encountered on some devices after completing the video recording.

  • Resolved the issue with SDK crashes on some devices that might have occurred because of trying to access non-initialized or closed resources.

  • Security updates.

8.15.6 – Feb. 26, 2025

  • Security updates.

8.15.5 – Feb. 18, 2025

  • Fixed the bug with green videos on some smartphone models.

  • Security updates.

8.15.4 – Feb. 11, 2025

  • Fixed bugs that could have caused crashes on some phone models.

8.15.0 – Dec. 30, 2024

  • Changed the wording for the head_down gesture: the new wording is “tilt down”.

  • Added proper focus order for TalkBack when the antiscam hint is enabled.

  • Added the public setting extract_action_shot in the Demo Application.

  • Fixed bugs.

  • Security updates.

8.14.1 – Dec. 5, 2024

  • Fixed the bug when the recorded videos might appear green.

  • Resolved codec issues on some smartphone models.

8.14.0 – Dec. 2, 2024

  • Accessibility updates according to WCAG requirements: the SDK hints and UI controls can be voiced.

  • Improved user experience with head movement gestures.

  • Moved the large video compression step to the Liveness screen closure.

  • Fixed the bug when the best shot frame could contain an image with closed eyes.

  • Minor bug fixes and telemetry updates.

8.13.0 – Nov. 12, 2024

  • Security and telemetry updates.

8.12.4 – Oct. 01, 2024

  • Security updates.

8.12.2 – Sept. 10, 2024

  • Security updates.

8.12.0 – Aug. 29, 2024

  • Security and telemetry updates.

8.11.0 – Aug. 19, 2024

  • Fixed the RuntimeException error with the server-based Liveness that appeared on some devices.

  • Security updates.

8.10.0 – July 26, 2024

  • Security updates.

  • Bug fixes.

8.9.0 – July 18, 2024

  • Updated the Android Gradle plugin version to 8.0.0.

  • Internal SDK improvements.

8.8.3 – July 11, 2024

  • Internal SDK improvements.

8.8.2 – June 21, 2024

  • Security updates.

8.8.1 – June 12, 2024

  • Security updates.

8.8.0 – June 04, 2024

  • Security updates.

8.7.3 – June 03, 2024

  • Security updates.

8.7.0 – May 06, 2024

  • Added a description for the error that occurs when providing an empty string as an ID in the setFolderID method.

  • Fixed a bug causing an endless spinner to appear if the user switches to another application during the Liveness check.

  • Fixed some smartphone-model-specific bugs.

8.6.0 – Apr. 05, 2024

  • Upgraded the on-device Liveness model.

  • Security updates.

8.5.0 – Feb. 27, 2024

  • Removed the pause after the Scan gesture.

  • If the recorded video is larger than 10 MB, it gets compressed.

  • Security and logging updates.

8.4.4 – Feb. 06, 2024

  • Changed the master license validation algorithm.

8.4.3 – Jan. 29, 2024

  • Downgraded the required compileSdkVersion from 34 to 33.

8.4.2 – Jan. 15, 2024

  • Security updates.

8.4.0 – Jan. 04, 2024

  • Updated the on-device Liveness model.

  • Fixed some bugs.

8.3.3 – Dec. 11, 2023

  • Internal licensing improvements.

8.3.2 – Nov. 30, 2023

  • Internal SDK improvements.

8.3.1 – Nov. 24, 2023

  • Bug fixes.

8.3.0 – Nov. 17, 2023

  • Implemented the possibility of using a master license that works with any bundle_id.

  • Video compression failure on some phone models is now fixed.

8.2.1 – Nov. 01, 2023

  • Bug fixes.

8.2.0 – Oct. 23, 2023

  • The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully.

  • The messages for the errors that are retrieved from API are now detailed.

8.1.1 – Oct. 02, 2023

8.1.0 – Sept. 07, 2023

  • Updated the Liveness on-device model.

  • Added the Portuguese (Brazilian) locale.

  • If a media hasn't been uploaded correctly, the system repeats the upload.

  • Created a new method to retrieve the telemetry (logging) identifier: getEventSessionId.

  • The login and auth methods are now deprecated. Use the setAPIConnection method instead.

  • OzConfig.baseURL and OzConfig.permanentAccessToken are now deprecated.

  • If a user closes the screen during video capture, the appropriate error is now being handled by SDK.

  • Fixed some bugs and improved the SDK work.

8.0.3 – Aug. 24, 2023

  • Fixed errors.

8.0.2 – July 13, 2023

  • The SDK now works properly with baseURL set to null.

8.0.1 – June 28, 2023

  • The dependencies' versions have been brought into line with the Kotlin version.

8.0.0 – June 19, 2023

  • Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.

  • Kotlin version requirements lowered to 1.7.21.

  • Improved the on-device models.

  • For some phone models, fixed the fatal device error.

  • The hint text width can now exceed the frame width (when using the main camera).

  • Photos taken during the One Shot analysis are now being sent to the server in the original size.

  • Removed the OzAnalysisResult class. The onSuccess method of AnalysisRequest.run now uses the RequestResult structure instead of List<OzAnalysisResult>.

  • All exceptions are moved to the com.ozforensics.liveness.sdk.core.exceptions package (See changes below).

  • Classes related to AnalysisRequest are moved to the com.ozforensics.liveness.sdk.analysis package (see changes below).

  • The methods below are no longer supported:

Public interface changes

New entities

  • AnalysisRequest.Type.HYBRID in com.ozforensics.liveness.sdk.analysis.entity

  • AnalysisError in com.ozforensics.liveness.sdk.analysis.entity

  • SourceMedia in com.ozforensics.liveness.sdk.analysis.entity

  • ResultMedia in com.ozforensics.liveness.sdk.analysis.entity

  • RequestResult in com.ozforensics.liveness.sdk.analysis.entity

Moved entities

  • NoAnalysisException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions

  • NoNetworkException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions

  • TokenException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions

  • NoMediaInAnalysisException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions

  • EmptyMediaListException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions

  • NoSuchMediaException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions

  • LicenseException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.security.exception

  • Analysis from com.ozforensics.liveness.sdk.analysis.entity to com.ozforensics.liveness.sdk.core.model

  • AnalysisRequest from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core

  • AnalysisListener from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core

  • AnalysisStatus from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core

  • AnalysisRequest.Builder from com.ozforensics.liveness.sdk.analysis to com.ozforensics.liveness.sdk.core

  • OzException from com.ozforensics.liveness.sdk.exceptions to com.ozforensics.liveness.sdk.core.exceptions

Changed classes

OzLivenessSDK

  • Removed uploadMediaAndAnalyze

  • Removed uploadMedia

  • Removed runOnDeviceBiometryAnalysis

  • Removed runOnDeviceLivenessAnalysis

AnalysisRequest

  • Removed build(): AnalysisRequest

AnalysisRequest.Builder

  • Removed addMedia

  • Removed onSuccess(result: List<OzAnalysisResult>)

  • Added onSuccess(result: RequestResult)

7.3.1 – June 07, 2023

  • Restructured the settings screen.

  • Added the center hint background customization.

  • Added new face frame forms (Circle, Square).

  • The OzLivenessSDK::init method no longer crashes when a StatusListener parameter is passed.

  • Changed the scan gesture animation.

Please note: for this version, we updated Kotlin to 1.8.20.

7.2.0 – May 04, 2023

  • Improved the SDK algorithms.

7.1.4 – Mar. 30, 2023

  • Updated the model for the on-device analyses.

  • Fixed the animation for sunglasses/mask.

  • The oval size for Liveness is now smaller.

7.1.3 – Mar. 03, 2023

  • Fixed the error with the server-based analyses while using permanentAccessToken for authorization.

7.1.2 – Feb. 22, 2023

  • You can now hide the status bar and system buttons (works with 7.0.0 and higher).

  • OzLivenessSDK.init now requires context as the first parameter.

  • OzAnalysisResult now shows the server-based analyses' scores properly.

  • Fixed initialization issues, wrong customization settings being displayed, and authorization failures on Android versions below 7.1.1.

7.1.1 – Jan. 16, 2023

  • Fixed crashes for Android v.6 and below.

  • Fixed oval positioning for some phone models.

  • Internal fixes and improvements.

7.1.0 – Dec. 16, 2022

  • Updated security.

  • Implemented some internal improvements.

  • The addMedia method is now deprecated; please use uploadMedia for uploading.

7.0.0 – Nov. 23, 2022

  • Changed the way of sharing dependencies. Due to security issues, we now ship two types of libraries, as shown below: sdk provides the server-based analysis only, full provides both server-based and on-device analyses:

  • UICustomization has been implemented instead of OzCustomization.

  • Added the Spanish locale.

6.4.2.3

  • Fixed the bug with freezes that had appeared on some phone models.

  • SDK now captures videos in 720p.

6.4.1

  • Synchronized the names of the analysis modes with iOS: SERVER_BASED and ON_DEVICE.

  • Fixed the bug with displaying of localization settings.

6.4.0

  • Now you can use a Fragment as the Liveness screen.

  • Added a new field to the Analysis structure. The params field is for any additional parameters, for instance, setting the server-side best shot extraction to true. The best shot algorithm chooses the highest-quality frame from a video.

6.3.7

  • The Zoom in and Zoom out gestures are no longer supported.

6.3.6

  • Updated the biometry model.

6.3.5

  • Added a new simplified API – AnalysisRequest. With it, it’s easier to create a request for the media and analysis you need.

6.3.4

  • Published the on-device module for on-device liveness and biometry analyses. To add this module to your project, use:

To launch these analyses, use runOnDeviceBiometryAnalysis and runOnDeviceLivenessAnalysis methods from the OzLivenessSDK class:

6.3.3

  • Liveness now goes smoother.

  • Fixed freezes on Xiaomi devices.

  • Optimized image converting.

6.3.1

  • New metadata parameter for OzLivenessSDK.uploadMedia and new OzLivenessSDK.uploadMediaAndAnalyze method to pass this parameter to folders.

6.2.8

  • Added functions for SDK initialization with LicenseSources: LicenseSource.LicenseAssetId and LicenseSource.LicenseFilePath. Use the OzLivenessSDK.init method to start initialization.

  • Now you can get the license info upon initialization: val licensePayload = OzLivenessSDK.getLicensePayload().

6.2.4

  • Added the Kyrgyz locale.

6.2.0

  • Added local analysis functions.

  • You can now configure the face frame.

  • Fixed version number at the Liveness screen.

6.1.0

  • Added the main camera support.

6.0.1

  • Added configuration from license support.

6.0.0

  • Added the OneShot gesture.

  • Added new states for OzAnalysisResult.Resolution.

  • Added the uploadMediaAndAnalyze method to load a bunch of media to the server at once and send them to analysis immediately.

  • OzMedia has been renamed to OzAbstractMedia and now has subclasses for images and videos.

  • Fixed camera bugs for some devices.

5.1.0

  • Access token updates automatically.

  • Renamed accessToken to permanentAccessToken.

  • Added R8 rules.

  • Configuration became easier: config settings are mutable.

5.0.2

  • Fixed the oval frame.

  • Removed the unusable parameters from AnalyseRequest.

  • Removed default attempt limits.

5.0.0

  • To customize the configuration options, the config property is added instead of baseURL, accessToken, etc. Use OzConfig.Builder for initialization.

  • Added license support. Licenses should be installed as raw resources. To pass them to OzConfig, use setLicenseResourceId.

  • Replaced the context-dependent methods with analogs.

  • Improved the image analysis.

  • Removed unusable dependencies.

  • Fixed logging.

Getting a License for iOS SDK

License

  1. Rename the license file you received to forensics.license and put it into the project. In this case, you don't need to set the path to the license.

  2. At runtime: when initializing the SDK, use the following method.

or

LicenseSource is the source of the license, and LicenseData is the information about your license. Please note: this method checks whether you have an active license, and if you do, that license won't be replaced with a new one. To force the license replacement, use the setLicense method.

In case of any license errors, the system will use your error handling code as shown above. Otherwise, the system returns information about the license. To check the license data manually, use OZSDK.licenseData.

Possible License Errors

Capturing Videos

Create a controller that will capture videos as follows:

Once video is captured, the system calls the onOZLivenessResult method:

If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.

If a user closes the capturing screen manually, the failedBecauseUserCancelled error appears.

Connecting SDK to API

Please note: in your host application, it is recommended to set the API address on the screen that precedes the liveness check. Setting the API URL initiates a service call to the API, which may cause excessive server load when done at application initialization or startup.

Alternatively, you can use the login and password provided by your Oz Forensics account manager:

Adding SDK to a Client’s Mobile App

CocoaPods

Since 8.1.0, you can also use a simpler code:

By default, the full version is being installed. It contains both server-based and on-device analysis modes. To install the server-based version only, use the following code:

For 8.1.0 and higher:

SPM

Please note: installation via SPM is available for versions 8.7.0 and above.

Manual Installation

You can also add the necessary frameworks to your project manually.

  • OZLivenessSDK.xcframework,

  • OZLivenessSDKResources.bundle,

  • OZLivenessSDKOnDeviceResources.bundle (if you don't need the on-device analyses, skip this file).

  1. Make sure that:

  • both xcframeworks are in Target-Build Phases -> Link Binary With Libraries and Target-General -> Frameworks, Libraries, and Embedded Content;

  • the bundle file(s) are in Target-Build Phases -> Copy Bundle Resources.

Android SDK Methods and Properties

OzLivenessSDK

A singleton for Oz SDK.

clearActionVideos

Deletes all action videos from the file system.

Parameters

-

Returns

-

createStartIntent

Creates an intent to start the Liveness activity.

Returns

-

getErrorFromIntent

Utility function to get the SDK error from OnActivityResult's intent.

Returns

getLicensePayload

Retrieves the SDK license payload.

Parameters

-

Returns

getResultFromIntent

Utility function to get SDK results from OnActivityResult's intent.

Returns

A list of OzAbstractMedia objects.

init

Initializes SDK with license sources.

Returns

-

log

Enables logging using the Oz Liveness SDK logging mechanism.

Returns

-

setApiConnection

Connects the SDK to Oz API.

setEventsConnection

Connects the SDK to the telemetry server.
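A minimal sketch, assuming both methods accept an OzConnection built with the factory methods described below (hosts, tokens, and credentials are placeholders; an optional status listener is omitted):

// connect to Oz API with a service token
OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken("https://your-api-host", "your_access_token"))
// send telemetry to a separate server
OzLivenessSDK.setEventsConnection(OzConnection.fromCredentials("https://your-telemetry-host", "login", "password"))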

logout

Deletes the saved token.

Parameters

-

Returns

-

getEventSessionId

Retrieves the telemetry session ID.

Parameters

-

Returns

The telemetry session ID (String parameter).

version

Retrieves the SDK version.

Parameters

-

Returns

The SDK version (String parameter).

generateSignedPayload

Generates the payload with media signatures.

Returns

Payload to be sent along with media files that were used for generation.

AnalysisRequest

A class for performing checks.

run

The analysis launching method.

class Builder

A builder class for AnalysisRequest.

build

Creates the AnalysisRequest instance.

Parameters

-

Returns

addAnalysis

Adds an analysis to your request.

Returns

Error if any.

addAnalyses

Adds a list of analyses to your request. Allows executing several analyses for the same folder on the server side.

Returns

Error if any.
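A sketch of running two analyses on the same folder, assuming addAnalyses accepts a list of Analysis objects:

val request = AnalysisRequest.Builder()
    .addAnalyses(listOf(
        Analysis(Analysis.Type.QUALITY, Analysis.Mode.SERVER_BASED, mediaToAnalyze),
        Analysis(Analysis.Type.BIOMETRY, Analysis.Mode.SERVER_BASED, mediaToAnalyze)
    ))
    .build()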

addFolderMeta

Adds metadata to a folder you create (for the server-based analyses only). You can add a key-value pair as additional information to the folder with the analysis result on the server side.

Returns

Error if any.

uploadMedia

Uploads one or more media to a folder.

Returns

Error if any.

setFolderId

Sets a folderId for uploading media to a previously created folder. The folder should exist on the server side; otherwise, a new folder will be created.

Returns

Error if any.

OzConfig

Configuration for OzLivenessSDK (use OzLivenessSDK.config).
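A sketch of adjusting a few of the properties listed below, assuming they are mutable fields of OzLivenessSDK.config (config settings are mutable since 5.1.0):

// tune the capture behavior before starting the Liveness screen
OzLivenessSDK.config.faceAlignmentTimeout = 10_000 // ms to wait for face alignment
OzLivenessSDK.config.useMainCamera = false // keep the front camera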

setSelfieLength

Sets the length of the Selfie gesture (in milliseconds).

Returns

Error if any.

allowDebugVisualization

Enables additional debug info, shown by clicking the version text.

attemptSettings

The number of attempts before the SDK returns an error.

uploadMediaSettings

Settings for repeated media upload.

faceAlignmentTimeout

Timeout for face alignment (measured in milliseconds).

livenessErrorCallback

Interface implementation to receive errors from the Liveness detection.

localizationCode

Locale to display string resources.

logging

Logging settings.

useMainCamera

Uses the main (rear) camera instead of the front camera for liveness detection.

disableFramesCountValidation

Disables the validation that prevents videos from being too short (3 frames or fewer).

UICustomization

Customization for OzLivenessSDK (use OzLivenessSDK.config.customization).

hideStatusBar

Hides the status bar and the three buttons at the bottom. The default value is True.

toolbarCustomization

A set of customization parameters for the toolbar.

centerHintCustomization

A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.

hintAnimation

A set of customization parameters for the hint animation.

faceFrameCustomization

A set of customization parameters for the frame around the user face.

backgroundCustomization

A set of customization parameters for the background outside the frame.

versionTextCustomization

A set of customization parameters for the SDK version text.

antiscamCustomization

A set of customization parameters for the antiscam message that warns user about their actions being recorded.

logoCustomization

Logo customization parameters. A custom logo must be allowed by your license.

Variables and Objects

enum OzAction

Contains the action from the captured video.

class LicensePayload

Contains the extended info about licensing conditions.

sealed class OzAbstractMedia

A class for the captured media that can be:

OzDocumentPhoto

A document photo.

OzShotSet

A set of shots in an archive.

OzVideo

A Liveness video.

enum OzMediaTag

Contains an action from the captured video.

sealed class LicenseSource

A class for license that can be:

LicenseAssetId

Contains the license ID.

LicenseFilePath

Contains the path to a license.

class AnalysisStatus

A class for analysis status that can be:

RunningAnalysis

This status means the analysis is launched.

UploadingMedia

This status means the media is being uploaded.

enum Type

The type of the analysis.

Currently, the DOCUMENTS analysis can't be performed in the on-device mode.

enum Mode

The mode of the analysis.

class Analysis

Contains information on what media to analyze and what analyses to apply.

enum Resolution

The general status for all analyses applied to the folder created.

class OzAttemptsSettings

Holds the attempt counts before the SDK returns an error.

enum OzLocalizationCode

class OzLogging

Contains logging settings.

sealed class Color

A class for color that can be (depending on the value received):

ColorRes

ColorHex

ColorInt

enum GeometryType

Frame shape settings.

class AnalysisError

Exception class for AnalysisRequest.

class SourceMedia

Structure that describes media used in AnalysisRequest.

class ResultMedia

Structure that describes the analysis result for the single media.

class RequestResult

Consolidated result for all analyses performed.

class AnalysisResult

Result of the analysis for all media it was applied to.

class OzConnection

Defines the authentication method.

OzConnection.fromServiceToken

Authentication via token.

OzConnection.fromCredentials

Authentication via credentials.

class OzUploadMediaSettings

Defines the settings for the repeated media upload.

enum SizeReductionStrategy

Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
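A sketch of choosing a strategy for a hybrid check, assuming sizeReductionStrategy is an optional field of the Analysis structure (see the 8.2.0 changelog entry above):

val analysis = Analysis(
    Analysis.Type.QUALITY,
    Analysis.Mode.HYBRID,
    mediaToAnalyze,
    sizeReductionStrategy = SizeReductionStrategy.UPLOAD_BEST_SHOT // upload only the best shot after a successful on-device check
)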


Master License for iOS

A master license is an offline license that, unlike regular licenses, allows using the Mobile SDKs with any bundle_id. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that. Your application needs to sign its bundle_id with the private key; the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.

Generating Keys

This section describes the process of creating your private and public keys.

Creating a Private Key

To create a private key, run the commands below one by one.

You will get these files:

  • privateKey.der is a private .der key;

  • privateKey.txt is privateKey.der encoded in base64. Its contents will be used to sign the host app's bundle_id.

File examples:

Creating a Public Key

To create a public key, run this command.

You will get the public key file: publicKey.pub. To get a license, please email us this file. We will email you the license.

File example:

SDK Integration

SDK initialization:

License setting:

Prior to initializing the SDK, create a base64-encoded signature for the host app's bundle_id using the private key.

Signature creation example:

Signature example for com.ozforensics.liveness.demo

NcdfzExUuQcdSxdY3MUVp4rUGnJTfeTGh/EN0hqiDUSadxf9jhaYe3EHmBPb+DafWrpvP6h6ZON3fHZcqqlykpWtVlPNr7ZpchPJXgHSdIOetKjPxYWh8xrt0NUSzm9Mymv1vz7UUdLiPFXDbanAedkg4YOX8uVJjfP8gb+suLdeTE3CHm+fdK5IVMP1SMu0XsBiiWBtVZp15l6tz7fDv+frr5kCr8I/LAHo6pCHBtzsXUiYw6ylrU/PHI1QYuyKK96iTxBt3gU+/MrfHl2DHLief3Fs10m3doJmTPizwWi/8h6fvq+6ZZlV/q4S2uPE3gpQZrZe2u3FuSE8gEgVoA==

Pass the signature as the masterLicenseSignature parameter either during the SDK initialization or license setting.

If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_id values included in the license, as it does by default without a master license.

Checking Liveness and Face Biometry

If you use our SDK just for capturing videos, omit this step.

To check liveness and face biometry, you need to upload media to our system and then analyze them.

Below, you'll see the example of performing a check and its description.

To delete media files after the checks are finished, use the cleanTempDirectory method.

Adding Metadata

To add metadata to a folder, use AnalysisRequest.addFolderMeta.

Extracting the Best Shot

In the params field of the Analysis structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.

Using Media from Another SDK

Adding Media to a Certain Folder

If you want to add your media to the existing folder, use the addFolderId method:

Oz Liveness and Biometry Key Concepts

Oz Forensics specializes in liveness and face matching: we develop products that help you to identify your clients remotely and avoid any kind of spoofing or deepfake attack. Oz software helps you to add facial recognition to your software systems and products. You can integrate Oz modules in many ways depending on your needs. We are constantly improving our components, increasing their quality.

Oz Liveness

  • Oz Liveness is responsible for recognizing a living person on a video it receives. Oz Liveness can distinguish a real human from a photo, video, mask, or other kinds of spoofing and deepfake attacks. The algorithm is certified to the ISO 30107-3 standard by iBeta, a NIST-accredited biometric test laboratory, with 100% accuracy.

Our liveness technology protects both against injection and presentation attacks.

The injection attack detection is layered. Our SDK examines the user's environment (browser, camera, etc.) to detect potential manipulations. Then, a deep neural network comes into play to defend against even the most sophisticated injection attacks.

Oz Face Matching (Biometry)

  • Oz Face Matching (Biometry) aims to identify the person, verifying that the person who performs the check and the document's owner are the same. Oz Biometry looks through the video, finds the best shot where the person is clearly seen, and compares it with the photo from an ID or another document. The algorithm's accuracy is 99.99%, confirmed by NIST FRVT.

Our biometry technology offers both 1:1 Face Verification and 1:N Face Identification, which are also based on ML algorithms. To train our neural networks, we use our own framework built on state-of-the-art technologies. A large private dataset (over 4.5 million unique faces) with a wide representation of ethnic groups, along with other attributes (predicted race, age, etc.), helps our biometric models provide robust matching scores.

Our face detector can work with photos and videos. Also, the face detector excels in detecting faces in images of IDs and passports (which can be rotated or of low quality).

To connect SDK to Oz API, specify the API URL and access token as shown below.

By default, logs are saved along with the analyses' data. If you need to keep the logs distinct from the analysis data, set up the separate connection for telemetry as shown below:

For details, please refer to the Checking Liveness and Face Biometry sections for iOS and Android.

Add this to the build.gradle of the module (VERSION is the version you need to implement. Please refer to Changelog):

Embed Oz Android SDK into your project as described here.

Get a trial license for SDK on our website or a production license by email. We'll need your application id. Add the license to your project as described here.

Connect SDK to API as described here. This step is optional, as this connection is required only when you need to process data on a server. If you use the on-device mode, the data is not transferred anywhere, and no connection is needed.

Capture videos using methods described here. You'll send them for analysis afterward.

Analyze media you've taken at the previous step. The process of checking liveness and face biometry is described here.

If you want to customize the look-and-feel of Oz Android SDK, please refer to this section.

Download the demo app latest build here.

You can generate the trial license here or contact us by email to get a productive license. To create the license, your applicationId (bundle id) is required.

The OpenSSL command specification: https://www.openssl.org/docs/man1.1.1/man1/openssl-pkcs8.html

actions – a list of user actions while recording video.

For Fragment, use the code below. LivenessFragment is the representation of the Liveness screen UI.

sdkMediaResult – an object with video capturing results for interactions with Oz API (a list of the OzAbstractMedia objects),

sdkErrorString – description of errors, if any.

To interpret the results of analyses, please refer to Types of Analyses.

To use a media file that is captured with another SDK (not Oz Android SDK), specify the path to it in OzAbstractMedia:

Embed Oz iOS SDK into your project as described here.

Get a trial license for SDK on our website or a production license by email. We'll need your bundle id. Add the license to your project as described here.

Connect SDK to API as described here. This step is optional, as this connection is required only when you need to process data on a server. If you use the on-device mode, the data is not transferred anywhere, and no connection is needed.

Capture videos by creating the controller as described here. You'll send them for analysis afterwards.

Upload and analyze media you've taken at the previous step. The process of checking liveness and face biometry is described here.

If you want to customize the look-and-feel of Oz iOS SDK, please refer to this section.

Download the demo app latest build here.

Go to the folder for the locale needed, or create a new folder. Proceed to the localization instructions for the details.

You can now disable the video validation that has been implemented to avoid recording of extremely short videos (3 frames or fewer): switch the option off using disableFramesCountValidation.

The length of the Selfie gesture is now adjustable via setSelfieLength (affects the video file size).

You can now display your own logo instead of the Oz logo if your license allows it.

If multiple analyses are applied to the folder simultaneously, the system sends them as a group, meaning the “worst” of the results will be taken as the resolution, not the latest one. Please refer to the corresponding section for details.

For the Liveness analysis, the system now treats the highest score as the quantitative result. The Liveness analysis output is described here.

You can now add a custom or update an existing language pack. The instructions can be found here.

Added the antiscam widget and its customization. This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes. The purpose is to safeguard against scammers who may attempt to deceive an individual into approving a fraudulent transaction.

Added new customization options.

Implemented a range of customization options and switched to the new design. To restore the previous settings, please refer to How to Restore the Previous Design after an Update.

You can generate the trial license on our website or contact us by email to get a productive license. To create the license, your bundle id is required. After you get a license file, there are two ways to add the license to your project.

action – a list of the user's actions while capturing the video.

The method returns the results of video capturing: the [OZMedia] objects. The system uses these objects to perform checks.

To connect SDK to Oz API, specify the API URL and access token as shown below.

By default, logs are saved along with the analyses' data. If you need to keep the logs distinct from the analysis data, set up the separate connection for telemetry as shown below:

To integrate OZLivenessSDK into an Xcode project via the CocoaPods dependency manager, add the following code to Podfile:

Version is optional as, by default, the newest version is integrated. However, if necessary, you can find the older version numbers in Changelog.

Add the following package dependencies via SPM (if you need a guide on adding the package dependencies, please refer to this guide). OzLivenessSDK is mandatory. If you don't need the on-device analyses, skip the OzLivenessSDKOnDevice file.

Download the SDK files from here and add them to your project.

Download the TensorFlow framework 2.11 from here.

The – String.

The license payload () – the object that contains the extended info about licensing conditions.

The class instance.

Contains the locale code according to .

The OpenSSL command specification: https://www.openssl.org/docs/man1.1.1/man1/openssl-pkcs8.html

To interpret the results of analyses, please refer to Types of Analyses.

To use a media file that is captured with another SDK (not Oz iOS SDK), specify the path to it in the OZMedia structure (the bestShotURL property):

The presentation attack detection is based on deep neural networks of various architectures, combined with a proprietary ensembling algorithm to achieve optimal performance. The networks consider multiple factors, including reflection, focus, background scene, motion patterns, etc. We offer both passive (no gestures) and active (various gestures) Liveness modes, ensuring that your customers enjoy the user experience while you receive accurate results. The iBeta test was conducted using passive Liveness, and since then, we have significantly enhanced our networks to better meet the needs of our clients.

The Oz software combines accuracy in analysis with ease of integration and use. To further simplify the integration process, we have provided a detailed description of all the key concepts of our system in this section. If you're ready to get started, please refer to our guides, which provide step-by-step instructions on how to achieve your facial recognition goals quickly and easily.


Removed methods and their replacements (8.0.0):

  • OzLivenessSDK.uploadMediaAndAnalyze – replaced by AnalysisRequest.run

  • OzLivenessSDK.uploadMedia – replaced by AnalysisRequest.Builder.uploadMedia

  • OzLivenessSDK.runOnDeviceBiometryAnalysis – replaced by AnalysisRequest.run

  • OzLivenessSDK.runOnDeviceLivenessAnalysis – replaced by AnalysisRequest.run

  • AnalysisRequest.build(): AnalysisRequest – removed without a replacement

  • AnalysisRequest.Builder.addMedia – replaced by AnalysisRequest.Builder.uploadMedia

Dependency declarations for 7.0.0 (sdk vs. full):

implementation 'com.ozforensics.liveness:full:7.0.0'
implementation 'com.ozforensics.liveness:sdk:7.0.0'

On-device module dependency (6.3.4):

implementation 'com.ozforensics.liveness:on-device:6.3.4'

Launching the on-device analyses (6.3.4):

val mediaList: List<OzAbstractMedia> = ...
val biometryAnalysisResult: OzAnalysisResult = OzLivenessSDK.runOnDeviceBiometryAnalysis(mediaList)
val livenessAnalysisResult: OzAnalysisResult = OzLivenessSDK.runOnDeviceLivenessAnalysis(mediaList)
License initialization for iOS (by file name or by file path):

OZSDK(licenseSources: [.licenseFileName("forensics.license")]) { licenseData, error in
    if let error = error {
        print(error)
    }
}

or

OZSDK(licenseSources: [.licenseFilePath("path_to_file")]) { licenseData, error in
    if let error = error {
        print(error)
    }
}

Error message: License error. License at (your_URI) not found
What to do: The license file is missing. Please check the file name and the path to it.

Error message: License error. Cannot parse license from (your_URI), invalid format
What to do: The license file is damaged. Please email us the file.

Error message: License error. Bundle company.application.id is not in the list allowed by license (bundle.id1, bundle.id2)
What to do: The bundle (application) identifier you specified is missing from the allowed list. Please check the spelling; if it is correct, you need to get another license for your application.

Error message: License error. Current date yyyy-mm-dd hh:mm:ss is later than license expiration date yyyy-mm-dd hh:mm:ss
What to do: Your license has expired. Please contact us.

Error message: License is not initialized.
What to do: You haven't initialized the license. Please add the license to your project as described above.

Creating the Liveness controller and handling its result (iOS):

let actions: [OZVerificationMovement] = [.selfie]
let ozLivenessVC: UIViewController = OZSDK.createVerificationVCWithDelegate(self, actions: actions)
self.present(ozLivenessVC, animated: true)

extension ViewController: OZLivenessDelegate {
    func onError(status: OZVerificationStatus?) {
        // show error
    }
    func onOZLivenessResult(results: [OZMedia]) {
        // proceed to the checks step
    }
}
Connecting iOS SDK to Oz API (with a service token or credentials) and to the telemetry server:

OZSDK.setApiConnection(Connection.fromServiceToken(host: "https://sandbox.ohio.ozforensics.com", token: token)) { (token, error) in
    // Your code to handle error or token
}

OZSDK.setApiConnection(Connection.fromCredentials(host: "https://sandbox.ohio.ozforensics.com", login: login, password: p)) { (token, error) in
    // Your code to handle error or token
}

let eventsConnection = Connection.fromCredentials(host: "https://tm.ozforensics.com/",
                                login: "your_telemetry_user, e.g., tm@company.com",
                                password: "your_telemetry_password")
OZSDK.setEventsConnection(eventsConnection) { (token, error) in
}
CocoaPods declarations:

pod 'OZLivenessSDK', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios', :tag => 'VERSION'

# for the latest version
pod 'OZLivenessSDK'
# OR, for the specific version
# pod 'OZLivenessSDK', '8.10.0'

pod 'OZLivenessSDK/Core', :git => 'https://gitlab.com/oz-forensics/oz-liveness-ios.git', :tag => 'VERSION'

pod 'OZLivenessSDK/Core'
# OR
# pod 'OZLivenessSDK/Core', '8.1.0'

Parameter – Type – Description

  • data – Intent – The object to test

Parameter – Type – Description

  • data – Intent – The object to test

Parameter – Type – Description

  • tag – String – Message tag

  • log – String – Message log

Parameter – Type – Description

  • key – String – Key for metadata

  • value – String – Value for metadata

Parameter – Type – Description

  • folderID – String – A folder identifier

Parameter – Type – Description

  • selfieLength – Int – The length of the Selfie gesture (in milliseconds). Should be within 500-5000 ms; the default length is 700

Parameter – Type – Description

  • allowDebugVisualization – Boolean – Enables or disables the debug info

Parameter – Type – Description

  • faceAlignmentTimeout – Long – A timeout value

Parameter – Type – Description

  • livenessErrorCallback – ErrorHandler – A callback value

Parameter – Type – Description

  • useMainCamera – Boolean – True: rear camera, False: front camera

Parameter – Type – Description

  • disableFramesCountValidation – Boolean – True: validation is off, False: validation is on

Parameter – Type – Description

  • image – Bitmap (@DrawableRes) – Logo image

  • size – Size – Logo size (in dp)

Case – Description

  • OneShot – The best shot from the video taken

  • Blank – A selfie with face alignment check

  • Scan – Scan

  • HeadRight – Head turned right

  • HeadLeft – Head turned left

  • HeadDown – Head tilted downwards

  • HeadUp – Head lifted up

  • EyeBlink – Blink

  • Smile – Smile

Parameter – Type – Description

  • expires – Float – The expiration interval

  • features – Features – License features

  • appIDS – [String] – An array of bundle IDs

Case – Description

  • Blank – A video with no gesture

  • PhotoSelfie – A selfie photo

  • VideoSelfieOneShot – A video with the best shot taken

  • VideoSelfieScan – A video with the scanning gesture

  • VideoSelfieEyes – A video with the blink gesture

  • VideoSelfieSmile – A video with the smile gesture

  • VideoSelfieHigh – A video with the lifting head up gesture

  • VideoSelfieDown – A video with the tilting head downwards gesture

  • VideoSelfieRight – A video with the turning head right gesture

  • VideoSelfieLeft – A video with the turning head left gesture

  • PhotoIdPortrait – A photo from a document

  • PhotoIdBack – A photo of the back side of the document

  • PhotoIdFront – A photo of the front side of the document

Parameter – Type – Description

  • id – Int – License ID

Parameter – Type – Description

  • path – String – An absolute path to a license

Case – Description

  • BIOMETRY – The algorithm that compares several media and checks whether the people on them are the same person or not

  • QUALITY – The algorithm that checks whether a person in a video is a real human acting in good faith, not a fake of any kind

  • DOCUMENTS – The analysis that recognizes the document and checks whether its fields are correct according to its type

Case – Description

  • ON_DEVICE – The on-device analysis with no server needed

  • SERVER_BASED – The server-based analysis

  • HYBRID – The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check

Case – Description

  • FAILED – One or more analyses failed due to some error and couldn't get finished

  • DECLINED – The check failed (e.g., faces don't match or some spoofing attack was detected)

  • SUCCESS – Everything went fine, the check succeeded (e.g., faces match or liveness confirmed)

  • OPERATOR_REQUIRED – The result should be additionally checked by a human operator

Parameter – Type – Description

  • singleCount – Int – Attempts on a single action/gesture

  • commonCount – Int – Total number of attempts on all actions/gestures if you use a sequence of them

Case – Description

  • EN – English

  • HY – Armenian

  • KK – Kazakh

  • KY – Kyrgyz

  • TR – Turkish

  • ES – Spanish

  • PT-BR – Portuguese (Brazilian)

Parameter – Type – Description

  • allowDefaultLogging – Boolean – Allows logging to LogCat

  • allowFileLogging – Boolean – Allows logging to an internal file

  • journalObserver – StatusListener – An event listener to receive journal events on the application side

Parameter – Type – Description

  • resId – Int – Link to the color in the Android resource system

Parameter – Type – Description

  • hex – String – Color hex (e.g., #FFFFFF)

Parameter – Type – Description

  • color – Int – The Int value of a color in Android

Case – Description

  • Oval – Oval frame

  • Rectangle – Rectangular frame

  • Circle – Circular frame

  • Square – Square frame

Parameter – Type – Description

  • apiErrorCode – Int – Error code

  • message – String – Error message

Parameter – Type – Description

  • host – String – API address

  • token – String – Access token

Parameter – Type – Description

  • host – String – API address

  • username – String – User name

  • password – String – Password

Parameter – Type – Description

  • attemptsCount – Int – Number of attempts for media upload

  • attemptsTimeout – Int – Timeout between attempts

Case – Description

  • UPLOAD_ORIGINAL – The original video

  • UPLOAD_COMPRESSED – The compressed video

  • UPLOAD_BEST_SHOT – The best shot taken from the video

  • UPLOAD_NOTHING – Nothing is sent (note that no folder will be created)

Creating a private key (iOS master license):

openssl genpkey -algorithm RSA -outform DER -out privateKey.der -pkeyopt rsa_keygen_bits:2048
# for MacOS
base64 -i privateKey.der -o privateKey.txt
# for Linux
base64 -w 0 privateKey.der > privateKey.txt

Creating a public key:

openssl rsa -pubout -in privateKey.der -out publicKey.pub

iOS SDK initialization and license setting signatures:

OZSDK(licenseSources: [LicenseSource], masterLicenseSignature: String? = nil, completion: @escaping ((LicenseData?, LicenseError?) -> Void))

setLicense(licenseSource: LicenseSource, masterLicenseSignature: String? = nil)
Signature creation example (iOS):

private func getSignature() -> String? {
    let privateKeyBase64String = "the string copied from the privateKey.txt file"
    // with key example:
    // let privateKeyBase64String = "MIIEogIBAAKCAQEAvxpyONpif2AjXiiG8fs9pQkn5C9yfiP0lD95Z0UF84t0Ox1S5U1UuVE5kkTYYGvS2Wm7ykUEGuHhqt/PyOAxrrNkAGz3OcVTsvcqPmwcf4UNdYZmug6EnQ5ok9wxYARS0aYqJUdzUb4dKOYal6WpHZE4yLx08R0zQ5jPkg5asT2u2PLB7JHZNnXwBcvdUonAgocNzdakUbWTNHKMxjwdAvwdIICdIneLZ9nCqe1d0cx7JBIhLzSPu/DVRANF+DOsE9JM8DT/Snnjok2xXzqpxBs1GwqiMJh98KYP78AVRWFuq3qbq0hWpjbq+mWl8xa7UMv8WxVd4PvQkWVYq/ojJwIDAQABAoIBAEvkydXwTMu/N2yOdcEmAP5I25HQkgysZNZ3OtSbYdit2lQbui8cffg23MFNHA125L65Mf4LWK0AZenBhriE6NYzohRVMf28cxgQ9rLhppOyGH1DCgr79wiUj02hVe6G6Qkfj39Ml+yvrs7uS0NMZBQ89yspRNv4t8IxrsWXc8cNQr33fdArlZ021Z12u2wdamaagiFwTa6ZYcQ5OYl3d/xL+oAwf9ywHwRrVM2JksGCxrcLJ7JCOL6lLyjp8rRrIG4iZ1V8YDfUNHmwD4w1fl30H6ejA+Cy5qge7CBZK+hqKr+hOcfBfakfOtgcEbFq2L8DqHoSaTeY6n9wjPJiFrkCgYEA8fc/Cg+d0BN98aON80TNT+XLEZxXMH+8qdmJYe/LB2EckBj1N0en87OEhqMFm9BjYswGf/uO+q2kryEat1n3nejbENN9XaO36ahlXTpQ6gdHO3TuO+hnnUkXeUNgiGYs+1L8Ot6PuNykwL0BZ09U0iBVoawEjTAg9tLNfVW2upsCgYEAyi/75YFT3ApxfSf/9XR74a0aKrfGGlBppcGvxuhVUC2aW2fPEZGXo4qlEhP2tyVTfj78F2p0Fnf3NSyvLZAzSwKo4w8EyZfXn1RI4sM95kzIMhH3Gz8VxCZWKEgr7cKNU5Zhs8un/VFd9Mc0KyZfmVy4VrZ5JumgahBRzSn9zGUCgYA7TTt3/cfRvVU6qbkajBw9nrYcRNLhogzdG+GdzSVXU6eqcVN4DunMwoySatXvEC2rgxF8wGyUZ4ZbHaPsl/ImE3HNN+gb0Qo8C/d719UI5mvA2LGioRzz4XwNTkQUaeZQWlBTJUTYK8t9KVV0um6xaRdTnlMnP0p088lFFILKTQKBgDsR98selKyF1JBXPl2s8YCGfU2bsWIAukz2IG/BcyNgn2czFfkxCxd5qy5z7LGnUxRgPHBu5oml9PBxJKDwLzwsA8GKosBu/00KZ9zwY8ZECn0uaH5qWOacuLE+HK9zFq0kE1lfF65XtlaMWH5+0JFS2HxlBVJMEVTLfcquCPtNAoGAG6ytPm24AS1satPwlKnZODQEg0kc7d5S42jbt4X7lwICY/7gORSdbNS8OmrqYhP/8tDOAUtzPQ20fEt43/VA87bq88BVzoSp4GVQcSL44MzRBQHQwTVkoVnbCXSD9K9gZ71wii+m+8rZZ0EMdiTR3hsRXRuSmw4t8y3CuzlZ9k4="
    guard let data = Data(base64Encoded: privateKeyBase64String, options: [.ignoreUnknownCharacters]) else {
      return nil
    }
     
    let sizeInBits = data.count * 8
    let keyDict: [CFString: Any] = [
      kSecAttrKeyType: kSecAttrKeyTypeRSA,
      kSecAttrKeyClass: kSecAttrKeyClassPrivate,
      kSecAttrKeySizeInBits: NSNumber(value: sizeInBits)
    ]
     
    var error: Unmanaged<CFError>?
    guard let secKey = SecKeyCreateWithData(data as CFData, keyDict as CFDictionary, &error) else {
      return nil
    }
     
    guard let bundleID = Bundle.main.bundleIdentifier else {
      return nil
    }
    guard let signature = SecKeyCreateSignature(secKey,
                          .rsaSignatureMessagePKCS1v15SHA512,
                          Data(bundleID.utf8) as CFData,
                          &error) else {
      return nil
    }
    return (signature as Data).base64EncodedString()
  }
Example of performing a check (iOS; see the Checking Liveness and Face Biometry section above):

let analysisRequest = AnalysisRequestBuilder()
// create one or more analyses
let analysis = Analysis.init(
	media: mediaToAnalyze, // mediaToAnalyze is an array of OzMedia that were captured or otherwise created
	type: .quality, // check the analysis types in iOS methods
	mode: .serverBased) // or .onDevice if you want the on-device analysis
analysisRequest.uploadMedia(mediaToAnalyze)
analysisRequest.addAnalysis(analysis)
// initiate the analyses
analysisRequest.run(
	statusHandler: { state in }, // scenario steps progress handler
	errorHandler: { _ in }  
) { result in
    // receive and handle analyses results here 
}
Adding metadata to a folder (iOS):

let analysis = Analysis.init(media: mediaToAnalyze, type: .quality, mode: .serverBased)
var folderMeta: [String: Any] = ["key1": "value1"]
analysisRequest.addFolderMeta(folderMeta)
...

Extracting the best shot (iOS):

let analysis = Analysis.init(media: mediaToAnalyze, type: .quality, mode: .serverBased, params: ["extract_best_shot": true])
Using media from another SDK (iOS):

let referenceMedia = OZMedia.init(movement: .selfie,
                  mediaType: .movement,
                  metaData: ["meta":"data"],
                  videoURL: nil,
                  bestShotURL: imageUrl,
                  preferredMediaURL: nil,
                  timestamp: Date())
Adding media to a certain folder (iOS):

let analysis = Analysis.init(media: mediaToAnalyze, type: .quality, mode: .serverBased)
analysisRequest.addFolderId(IdRequired)

How to Restore the Previous Design after an Update

If you want to get back to the previous (up to 6.4.2) versions' design, reset the customization settings of the capture screen and apply the parameters that are listed below.

// customization parameters for the toolbar
let toolbarCustomization = ToolbarCustomization(
   closeButtonColor: .white,
   backgroundColor: .black)

// customization parameters for the center hint
let centerHintCustomization = CenterHintCustomization(
   verticalPosition: 70)
   
// customization parameters for the center hint animation
let hintAnimationCustomization = HintAnimationCustomization(
    hideAnimation: true)

// customization parameters for the frame around the user face
let faceFrameCustomization = FaceFrameCustomization(
   strokeWidth: 6,
   strokeFaceAlignedColor: .green,
   strokeFaceNotAlignedColor: .red)

// customization parameters for the background outside the frame
let backgroundCustomization = BackgroundCustomization(
   backgroundColor: .clear)

OZSDK.customization = OZCustomization(toolbarCustomization: toolbarCustomization,
   centerHintCustomization: centerHintCustomization,
   hintAnimationCustomization: hintAnimationCustomization,
   faceFrameCustomization: faceFrameCustomization,
   backgroundCustomization: backgroundCustomization)

Adding the Plugin to Your Web Page

Requirements

A dedicated Web Adapter in our cloud or the adapter deployed on-premise. The adapter's URL is required for adding the plugin.

Processing Steps

To embed the plugin in your page, add a reference to the primary script of the plugin (plugin_liveness.php) that you received from Oz Forensics to the HTML code of the page. <web-sdk-root-url> is the Web Adapter link you've received from us.

<script src="https://<web-sdk-root-url>/plugin_liveness.php"></script>
For versions below 1.4.0

Add a reference to the file with styles and to the primary script of the plugin (plugin_liveness.php) that you received from Oz Forensics to the HTML code of the page.

<link rel="stylesheet" href="https://<web-sdk-root-url>/plugin/ozliveness.css" />
<script src="https://<web-sdk-root-url>/plugin_liveness.php"></script>

For Angular and Vue, the script (and the style file, for versions below 1.4.0) should be added in the same way. For React apps, load and initialize the OzLiveness plugin from the <head> of your template's main page. Please note: if you use <React.StrictMode>, you may experience issues with Web Liveness.

Description of the on_error Callback

This callback is called when the system encounters any error. It contains the error details and the telemetry session ID that you can use for further investigation.

on_error({
    "code": "error_code",
    "event_session_id": "id_of_telemetry_session_with_error",
    "message": "<error description>",
    "context": {}  // additional information, if any
})
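
A minimal handler sketch that logs these fields:

OzLiveness.open({
  on_error: function (error) {
    // code, message, and event_session_id come from the payload described above
    console.error('OzLiveness error', error.code, error.message, error.event_session_id);
  }
});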

Closing or Hiding the Plugin

Closing the Plugin

To force the closing of the plugin window, use the close() method. All requests to the server and all callback functions (except on_close) within the current session will be aborted.

Example:

var session_id = 123;

OzLiveness.open({
  // We transfer the arbitrary meta data, by which we can later identify the session in Oz API
  meta: {
    session_id: session_id 
  },
  // After sending the data, forcibly close the plugin window and independently request the result
  on_submit: function() {
    OzLiveness.close();
    my_result_function(session_id);
  }
});

Hiding the Plugin Window without Cancelling the Callbacks

To hide the plugin window without cancelling the requests for analysis results and user callback functions, call the hide() method. Use this method, for instance, if you want to display your own upload indicator after submitting data.

An example of usage:

OzLiveness.open({
  // When receiving an intermediate result, hide the plugin window and show your own loading indicators
  on_result: function(result) {
    OzLiveness.hide();
    if (result.state === 'processing') {
      show_my_loader();
    }
  },
  on_complete: function() {
    hide_my_loader();
  }
});

Parameter | Type | Description
actions | – | A list of possible actions

Parameter | Type | Description
context | Context | The Context class
licenseSources | – | A list of license references
statusListener | StatusListener | Optional listener to check the license load result

Parameter | Type | Description
connection | – | Connection type
statusListener | StatusListener<String?> | Listener

Parameter | Type | Description
connection | – | Connection type
statusListener | StatusListener<String?> | Listener

Parameter | Type | Description
media | – | An array of media files
folderMeta (optional) | [string:any] | Additional folder metadata

Parameter | Type | Description
onStatusChange | callback | The function is executed when the status of the AnalysisRequest changes
onError | callback: onError(error: OzException) { handleError() } | The function is executed in case of errors
onSuccess | callback | The function is executed when all the analyses are completed

Parameter | Type | Description
analysis | – | A structure for analysis

Parameter | Type | Description
analysis | – | A list of Analysis structures

Parameter | Type | Description
mediaList | – | An OzAbstractMedia object or a list of objects

Parameter | Type | Description
attemptsSettings | – | Sets the number of attempts

Parameter | Type | Description
uploadMediaSettings | – | Sets the number of attempts and timeout between them

Parameter | Type | Description
localizationCode | – | A locale code

Parameter | Type | Description
logging | – | Logging settings

Parameter | Type | Description
closeIconRes | Int (@DrawableRes) | An image for the close button
closeIconTint | – | Close button color
titleTextFont | Int (@FontRes) | Toolbar title text font
titleTextFontStyle | Int (values from android.graphics.Typeface properties, e.g., Typeface.BOLD) | Toolbar title text font style
titleTextSize | Int | Toolbar title text size (in sp, 12-18)
titleTextAlpha | Int | Toolbar title text opacity (in %, 0-100)
titleTextColor | – | Toolbar title text color
backgroundColor | – | Toolbar background color
backgroundAlpha | Int | Toolbar background opacity (in %, 0-100)
isTitleCentered | Boolean | Defines whether the text on the toolbar is centered or not
title | String | Text on the toolbar

Parameter | Type | Description
textFont | String | Center hint text font
textStyle | Int (values from android.graphics.Typeface properties, e.g., Typeface.BOLD) | Center hint text style
textSize | Int | Center hint text size (in sp, 12-34)
textColor | – | Center hint text color
textAlpha | Int | Center hint text opacity (in %, 0-100)
verticalPosition | Int | Center hint vertical position from the screen bottom (in %, 0-100)
backgroundColor | – | Center hint background color
backgroundOpacity | Int | Center hint background opacity
backgroundCornerRadius | Int | Center hint background frame corner radius (in dp, 0-20)

Parameter | Type | Description
hintGradientColor | – | Gradient color
hintGradientOpacity | Int | Gradient opacity
animationIconSize | Int | A side size of the animation icon square
hideAnimation | Boolean | A switcher for hint animation; if True, the animation is hidden

Parameter | Type | Description
geometryType | – | The frame type: oval, rectangle, circle, square
cornerRadius | Int | Rectangle corner radius (in dp, 0-20)
strokeDefaultColor | – | Frame color when a face is not aligned properly
strokeFaceInFrameColor | – | Frame color when a face is aligned properly
strokeAlpha | Int | Frame opacity (in %, 0-100)
strokeWidth | Int | Frame stroke width (in dp, 0-20)
strokePadding | Int | A padding from the stroke to the face alignment area (in dp, 0-10)

Parameter | Type | Description
backgroundColor | – | Background color
backgroundAlpha | Int | Background opacity (in %, 0-100)

Parameter | Type | Description
textFont | Int (@FontRes) | SDK version text font
textSize | Int | SDK version text size (in sp, 12-16)
textColor | – | SDK version text color
textAlpha | Int | SDK version text opacity (in %, 20-100)

Parameter | Type | Description
textMessage | String | Antiscam message text
textFont | String | Antiscam message text font
textSize | Int | Antiscam message text size (in px, 12-18)
textColor | – | Antiscam message text color
textAlpha | Int | Antiscam message text opacity (in %, 0-100)
backgroundColor | – | Antiscam message background color
backgroundOpacity | Int | Antiscam message background opacity
cornerRadius | Int | Background frame corner radius (in px, 0-20)
flashColor | – | Color of the flashing indicator close to the antiscam message

Parameter | Type | Description
tag | – | A tag for a document photo
photoPath | String | An absolute path to a photo
additionalTags (optional) | String | Additional tags if needed (including those not from the OzMediaTag enum)
metaData | Map<String, String> | Media metadata

Parameter | Type | Description
tag | – | A tag for a shot set
archivePath | String | A path to an archive
additionalTags (optional) | String | Additional tags if needed (including those not from the OzMediaTag enum)
metaData | Map<String, String> | Media metadata

Parameter | Type | Description
tag | – | A tag for a video
videoPath | String | A path to a video
bestShotPath (optional) | String | URL of the best shot in PNG
preferredMediaPath (optional) | String | URL of the API media container
additionalTags (optional) | String | Additional tags if needed (including those not from the OzMediaTag enum)
metaData | Map<String, String> | Media metadata

Parameter | Type | Description
analysis | – | Contains information on what media to analyze and what analyses to apply

Parameter | Type | Description
media | – | The object that is being uploaded at the moment
index | Int | Number of this object in a list
from | Int | Objects quantity
percentage | Int | Completion percentage

Parameter | Type | Description
type | Type | The type of the analysis
mode | Mode | The mode of the analysis
mediaList | – | An array of the OzAbstractMedia objects
params (optional) | Map<String, Any> | Additional parameters
sizeReductionStrategy | – | Defines what type of media is sent to the server in case of the hybrid analysis once the on-device analysis has finished successfully

Parameter | Type | Description
mediaId | String | Media identifier
mediaType | String | Type of the media
originalName | String | Original media name
ozMedia | – | Media object
tags | List<String> | Tags for media

Parameter | Type | Description
confidenceScore | Float | Resulting score
isOnDevice | Boolean | Mode of the analysis
resolution | – | Consolidated analysis result
sourceMedia | – | Source media
type | – | Type of the analysis

Parameter | Type | Description
analysisResults | – | Analysis result
folderId | String | Folder identifier
resolution | – | Consolidated analysis result

Parameter | Type | Description
resolution | – | Consolidated analysis result
type | – | Type of the analysis
mode | – | Mode of the analysis
resultMedia | – | A list of results of the analyses for single media
confidenceScore | Float | Resulting score
analysisId | String | Analysis identifier
params | @RawValue Map<String, Any> | Additional folder parameters
error | – | Error, if any
serverRawResponse | String | Response from backend

Error Code | Error Message | Description
ERROR = 3 | Error. | An unknown error has happened
ATTEMPTS_EXHAUSTED_ERROR = 4 | Error. Attempts exhausted for liveness action. | –
VIDEO_RECORD_ERROR = 5 | Error by video record. | An error happened during video recording
NO_ACTIONS_ERROR = 6 | Error. OzLivenessSDK started without actions. | –
FORCE_CLOSED = 7 | Error. Liveness activity is force closed from client application. | A user closed the Liveness screen during video recording
DEVICE_HAS_NO_FRONT_CAMERA = 8 | Error. Device has not front camera. | No front camera found
DEVICE_HAS_NO_MAIN_CAMERA = 9 | Error. Device has not main camera. | No rear camera found
DEVICE_CAMERA_CONFIGURATION_NOT_SUPPORTED = 10 | Error. Device camera configuration is not supported. | Oz Liveness doesn't support the camera configuration of the device
FACE_ALIGNMENT_TIMEOUT = 12 | Error. Face alignment timeout in OzLivenessSDK.config.faceAlignmentTimeout milliseconds. | –
ERROR = 13 | The check was interrupted by user. | The user closed the screen during the Liveness check

Flutter

In this section, we explain how to use Oz Flutter SDK for iOS and Android.

Before you start, it is recommended that you install:

  • Flutter 3.0.0 or higher;

  • Android SDK 21 or higher;

  • Dart 2.18.6 or higher;

  • iOS platform 13 or higher;

  • Xcode.

iOS Localization: Adding a Custom or Updating an Existing Language Pack

Please note: this feature has been implemented in 8.1.0.

To add or update a language pack for Oz iOS SDK, use the set(languageBundle: Bundle) method. It tells the SDK to use a non-standard bundle. In OZLocalizationCode, use the custom language case (optional).

The localization record consists of the localization key and its string value, e.g., "about" = "About".

  • If you don’t set the custom language and bundle, the SDK uses the pre-installed languages only.

  • If the custom bundle is set (and the language is not), it takes priority when checking translations, i.e., the SDK first looks for the localization record in the custom bundle's localization file. If the key is not found in the custom bundle, the standard bundle text for this key is used.

  • If both custom bundle and language are set, SDK retrieves all the translations from the custom bundle localization file.

A list of keys for iOS:

The keys Action.*.Task refer to the appropriate gestures. Others refer to the hints for any gesture, info messages, or errors.

When new keys appear with new versions, if no translation is provided by your custom bundle localization file, you’ll see the default (English) text.
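
A minimal sketch of plugging in a custom bundle via the set(languageBundle:) method and the localizationCode property (both are described in the next section):

OZSDK.set(languageBundle: Bundle.main) // the bundle that contains your .strings overrides
OZSDK.localizationCode = .custom("de") // optional custom language, ISO 639-1 code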

iOS SDK Methods and Properties

OZSDK

A singleton for Oz SDK.

Methods

OZSDK

Parameter | Type | Description
licenseSources | – | The source of the license

Returns: –

setLicense

Forces the license installation.

Parameter | Type | Description
licenseSource | – | Source of the license

setApiConnection

Retrieves an access token for a user.

Parameter | Type | Description
apiConnection | – | Authorization parameters

Returns: The access token or an error.
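
A connection sketch; the Connection cases are listed under enum Connection below, and the completion callback shape is an assumption based on the "Returns" note above:

OZSDK.setApiConnection(Connection.fromServiceToken(host: host, token: serviceToken)) { accessToken, error in
    // keep the access token for later calls; inspect the error if authorization failed
}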

setEventsConnection

Retrieves an access token for a user to send telemetry.

Parameter | Type | Description
eventsConnection | – | Telemetry authorization parameters

Returns: The access token or an error.

isLoggedIn

Checks whether an access token exists.

Parameters: –
Returns: The result – the true or false value.

logout

Deletes the saved access token.

Parameters: –
Returns: –

createVerificationVCWithDelegate

Creates the Liveness check controller.

Parameter | Type | Description
delegate | – | The delegate for Oz Liveness
actions | – | Captured action
cameraPosition (optional) | AVCaptureDevice.Position | front – front camera (default), back – rear camera

Returns: UIViewController or an exception.

createVerificationVC

Creates the Liveness check controller.

Parameter | Type | Description
actions | – | Captured action
FaceCaptureCompletion | type alias | public typealias FaceCaptureCompletion = (_ results: [OZMedia]?, _ error: OZVerificationStatus?) -> Void
cameraPosition (optional) | AVCaptureDevice.Position | front – front camera (default), back – rear camera

Returns: UIViewController or an exception.

cleanTempDirectory

Deletes all videos.

Parameters: –
Returns: –

getEventSessionId

Retrieves the telemetry session ID.

Parameters: –
Returns: The telemetry session ID (String).

set

Sets the bundle to look for translations in.

Parameter | Type | Description
languageBundle | Bundle | The bundle that contains translations

Returns: –

setSelfieLength

Sets the length of the Selfie gesture (in milliseconds).

Parameter | Type | Description
selfieLength | Int | The length of the Selfie gesture (in milliseconds). Should be within 500-5000 ms; the default length is 700

generateSignedPayload

Generates the payload with media signatures.

Parameter | Type | Description
media | – | An array of media files
folderMeta | [String] | Additional folder metadata

Returns: Payload to be sent along with the media files that were used for generation.
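
A call sketch; the parameter labels follow the table above, the rest is assumed:

let payload = OZSDK.generateSignedPayload(media: mediaToAnalyze, folderMeta: ["meta1": "data1"])
// send the payload to your backend together with the same media files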

Properties

localizationCode

SDK locale (if not set, works automatically).

Parameter | Type | Description
localizationCode | – | The localization code

host

The host to call for Liveness video analysis.

Parameter | Type | Description
host | String | Host address

attemptSettings

The holder for attempt counts before the SDK returns an error.

Parameter | Type | Description
singleCount | Int | Attempts on a single action/gesture
commonCount | Int | Total number of attempts on all actions/gestures if you use a sequence of them
faceAlignmentTimeout | Float | Time needed to align the face into the frame
uploadMediaSettings | – | Sets the number of attempts and timeout between them

version

The SDK version.

Parameter | Type | Description
version | String | Version number

OZLivenessDelegate

A delegate for OZSDK.

Methods

onOZLivenessResult

Gets the Liveness check results.

Parameter | Type | Description
results | – | An array of the OzMedia objects

Returns: –

onError

The error processing method.

Parameter | Type | Description
status | – | The error description

Returns: –

AnalysisRequest

A protocol for performing checks.

Methods

AnalysisRequestBuilder

Creates the AnalysisRequest instance.

Parameter | Type | Description
folderId (optional) | String | The identifier to define when you need to upload media to a certain folder

Returns: The AnalysisRequest instance.

addAnalysis

Adds an analysis to the AnalysisRequest instance.

Parameter | Type | Description
analysis | – | A structure containing information on the analyses required

Returns: –

uploadMedia

Uploads media to the server.

Parameter | Type | Description
media | – | Media or an array of media objects to be uploaded

Returns: –

addFolderId

Adds the folder ID to upload media to a certain folder.

Parameter | Type | Description
folderId | String | The folder identifier

Returns: –

addFolderMeta

Adds metadata to a folder.

Parameter | Type | Description
meta | [String] | An array of metadata as follows: ["meta1": "data1"]

Returns: –

run

Runs the analyses.

Parameter | Type | Description
statusHandler | callback | The handler that is executed when the scenario state changes
errorHandler | callback: errorHandler: @escaping ((_ error: Error) -> Void) | Error handler
completionHandler | callback | The handler that is executed when the run method completes

Returns: The analysis result or an error.

Customization

Customization for OzLivenessSDK (use OZSDK.customization).

toolbarCustomization

A set of customization parameters for the toolbar.

Parameter | Type | Description
closeButtonIcon | UIImage | An image for the close button
closeButtonColor | UIColor | Close button tintColor
titleFont | UIFont | Toolbar title text font
titleColor | UIColor | Toolbar title text color
backgroundColor | UIColor | Toolbar background color
titleText | String | Text on the toolbar

centerHintCustomization

A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.

Parameter | Type | Description
textFont | UIFont | Center hint text font
textColor | UIColor | Center hint text color
backgroundColor | UIColor | Center hint text background
verticalPosition | Int | Center hint vertical position from the screen top (in %, 0-100)
hideTextBackground | Bool | Hides the text background
backgroundCornerRadius | Int | Center hint background frame corner radius

hintAnimationCustomization

A set of customization parameters for the hint animation.

Parameter | Type | Description
hideAnimation | Bool | A switcher for hint animation; if True, the animation is hidden
animationIconSize | CGFloat | A side size of the animation icon square
hintGradientColor | UIColor | The close-to-frame gradient color

faceFrameCustomization

A set of customization parameters for the frame around the user's face.

Parameter | Type | Description
geometryType | – | The frame type: oval, rectangle, circle, or square
cornerRadius | CGFloat | Rectangle corner radius (in dp)
strokeFaceNotAlignedColor | UIColor | Frame color when a face is not aligned properly
strokeFaceAlignedColor | UIColor | Frame color when a face is aligned properly
strokeWidth | CGFloat | Frame stroke width (in dp, 0-20)
strokePadding | CGFloat | A padding from the stroke to the face alignment area (in dp, 0-10)

backgroundCustomization

A set of customization parameters for the background outside the frame.

Parameter | Type | Description
backgroundColor | UIColor | Background color

versionCustomization

A set of customization parameters for the SDK version text.

Parameter | Type | Description
textFont | UIFont | SDK version text font
textColor | UIColor | SDK version text color

antiscamCustomization

A set of customization parameters for the antiscam message that warns users that their actions are being recorded.

Parameter | Type | Description
customizationEnableAntiscam | Bool | Adds the antiscam message
customizationAntiscamTextMessage | String | Antiscam message text
customizationAntiscamTextFont | UIFont | Antiscam message text font
customizationAntiscamTextColor | UIColor | Antiscam message text color
customizationAntiscamBackgroundColor | UIColor | Antiscam message text background color
customizationAntiscamCornerRadius | CGFloat | Background frame corner radius
customizationAntiscamFlashColor | UIColor | Color of the flashing indicator close to the antiscam message

logoCustomization

Logo customization parameters. A custom logo must be allowed by the license.

Parameter | Type | Description
image | UIImage | Logo image
size | CGSize | Logo size (in dp)

Variables and Objects

enum LicenseSource

A source of a license.

Case | Description
licenseFilePath | An absolute path to a license (String)
licenseFileName | The name of the license file

struct LicenseData

The license data.

Parameter | Type | Description
appIDS | [String] | An array of bundle IDs
expires | TimeInterval | The expiration interval
features | Features | License features
configs (optional) | ABTestingConfigs | Additional configuration

enum OzVerificationMovement

Contains action from the captured video.

Case | Description
smile | Smile
eyes | Blink
scanning | Scan
selfie | A selfie with face alignment check
one_shot | The best shot from the video taken
left | Head turned left
right | Head turned right
down | Head tilted downwards
up | Head lifted up

enum OZLocalizationCode

Case | Description
en | English
hy | Armenian
kk | Kazakh
ky | Kyrgyz
tr | Turkish
es | Spanish
pt-BR | Portuguese (Brazilian)
custom(String) | Custom language (ISO 639-1 two-letter code)
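
A usage sketch with the localizationCode property described in this section:

OZSDK.localizationCode = .es // one of the pre-installed locales
OZSDK.localizationCode = .custom("de") // or any ISO 639-1 two-letter code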

struct OZMedia

Contains all the information on the media captured.

Parameter | Type | Description
movement | – | User action type
mediaType | – | Type of media
metaData | [String] as follows: ["meta1": "data1"] | Metadata, if any
videoURL | URL | URL of the Liveness video
bestShotURL | URL | URL of the best shot in PNG
preferredMediaURL | URL | URL of the API media container
timestamp | Date | Timestamp for the check completion

enum MediaType

The type of media captured.

Case | Description
movement | A media with an action
documentBack | The back side of the document
documentFront | The front side of the document

enum OZVerificationStatus

Error description. These errors are deprecated and will be deleted in the upcoming releases.

Case | Description
userNotProcessed | The Liveness check was not processed
failedBecauseUserCancelled | The check was interrupted by the user
failedBecauseCameraPermissionDenied | The Liveness check can't be performed: no camera access
failedBecauseOfBackgroundMode | The Liveness check can't be performed: background mode
failedBecauseOfTimeout | The Liveness check can't be performed: timeout
failedBecauseOfAttemptLimit | The Liveness check can't be performed: attempts limit exceeded
failedBecausePreparingTimout | The Liveness check can't be performed: face alignment timeout
failedBecauseOfLowMemory | The Liveness check can't be performed: no memory left

struct Analysis

Contains information on what media to analyze and what analyses to apply.

Parameter | Type | Description
media | – | An array of the OzMedia objects
type | – | The type of the analysis
mode | – | The mode of the analysis
sizeReductionStrategy | – | Defines what type of media is sent to the server in case of the hybrid analysis once the on-device analysis has finished successfully
params (optional) | String | Additional parameters

enum AnalysisType

The type of the analysis.

Case | Description
biometry | The algorithm that compares several media and checks whether the people on them are the same person
quality | The algorithm that checks whether a person in a video is a real human acting in good faith, not a fake of any kind
document | The analysis that recognizes the document and checks whether its fields are correct according to its type
blacklist | The analysis that compares a face on a captured media with faces from the pre-made media database

Currently, the .document analysis can't be performed in the on-device mode.

enum AnalysisMode

The mode of the analysis.

Case | Description
onDevice | The on-device analysis with no server needed
serverBased | The server-based analysis
hybrid | The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check

enum ScenarioState

Shows the media processing status.

Case | Description
addToFolder | The system is creating a folder and adding files to this folder
addAnalyses | The system is adding analyses
waitAnalysisResult | The system is waiting for the result

struct AnalysisStatus

Shows the files' uploading status.

Parameter | Type | Description
media | – | The object that is being uploaded at the moment
index | Int | Number of this object in a list
from | Int | Objects quantity
progress | Progress | Object uploading status

RequestStatus

Shows the analysis processing status.

Parameter | Type | Description
status | – | Processing analysis status
progressStatus | – | Media uploading status

ResultMedia

Describes the analysis result for the single media.

Parameter | Type | Description
resolution | – | Consolidated analysis result
sourceId | String | Media identifier
isOnDevice | Bool | Analysis mode
confidenceScore | Float | Resulting score
mediaType | String | Media file type: VIDEO / IMAGE / SHOT_SET
media | – | Media that is being analyzed
error | AnalysisError (inherits from Error) | Error

RequestResult

Contains the consolidated analysis results for all media.

Parameter | Type | Description
resolution | – | Consolidated analysis result
folderId | String | Folder identifier
analysisResults | – | A list of analysis results

class AnalysisResult

Contains the results of the checks performed.

Parameter | Type | Description
resolution | – | Analysis resolution
type | – | Analysis type
mode | – | Analysis mode
analysisId | String | Analysis identifier
error | AnalysisError (inherits from Error) | Error
resultMedia | – | Results of the analysis for single media files
confidenceScore | Float | The resulting score
serverRawResponse | String | Server response

enum AnalyseResolutionStatus

The general status for all analyses applied to the folder created.

Case | Description
INITIAL | No analyses have been applied yet
PROCESSING | The analyses are in progress
FAILED | One or more analyses failed due to some error and couldn't be finished
FINISHED | The analyses are finished
DECLINED | The check failed (e.g., faces don't match or some spoofing attack detected)
SUCCESS | Everything went fine, the check succeeded (e.g., faces match or liveness confirmed)
OPERATOR_REQUIRED | The result should be additionally checked by a human operator

struct AnalyseResolution

Contains the results for single analyses.

Parameter | Type | Description
analyseResolutionStatus | – | The analysis status
type | – | The analysis type
folderID | String | The folder identifier
score | Float | The result of the check performed

enum GeometryType

Frame shape settings.

Case | Description
oval | Oval frame
rectangle(cornerRadius: CGFloat) | Rectangular frame (with corner radius)
circle | Circular frame
square(cornerRadius: CGFloat) | Square frame (with corner radius)

enum LicenseError

Possible license errors.

Case | Description
licenseFileNotFound | The license is not found
licenseParseError | Cannot parse the license file; the license might be invalid
licenseBundleError | The bundle_id in the license file doesn't match the bundle_id used
licenseExpired | The license is expired

enum Connection

The authorization type.

Case | Description
fromServiceToken | Authorization with a token: host (String), token (String)
fromCredentials | Authorization with credentials: host (String), login (String), password (String)

struct UploadMediaSettings

Defines the settings for the repeated media upload.

Parameter | Type | Description
attemptsCount | Int | Number of attempts for media upload
attemptsTimeout | Int | Timeout between attempts

enum SizeReductionStrategy

Defines what type of media is sent to the server in case of the hybrid analysis once the on-device analysis has finished successfully. By default, the system uploads the compressed video.

Case | Description
uploadOriginal | The original video
uploadCompressed | The compressed video
uploadBestShot | The best shot taken from the video
uploadNothing | Nothing is sent (note that no folder will be created)
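
A sketch of picking a strategy for the hybrid mode; it assumes the Analysis initializer exposes the sizeReductionStrategy field listed in struct Analysis above:

let analysis = Analysis.init(
    media: mediaToAnalyze,
    type: .quality,
    mode: .hybrid,
    sizeReductionStrategy: .uploadBestShot) // upload only the best shot if the on-device check passes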

Changelog

iOS SDK changes

8.16.2 – Apr. 22, 2025

  • Xcode updated to version 16 to comply with Apple requirements.

8.16.1 – Apr. 09, 2025

  • Security updates.

8.16.0 – Mar. 11, 2025

  • Updated the authorization logic.

  • Improved voiceover.

  • SDK now compresses videos if their size exceeds 10 MB.

  • Head movement gestures are now handled properly.

  • Security updates.

8.15.0 – Dec. 30, 2024

  • Changed the wording for the head_down gesture: the new wording is “tilt down”.

  • Added proper focus order for VoiceOver when the antiscam hint is enabled.

  • Added the public setting extract_action_shot in the Demo Application.

  • Bug fixes.

  • Security updates.

8.14.0 – Dec. 3, 2024

  • Accessibility updates according to WCAG requirements: the SDK hints and UI controls can be voiced.

  • Improved user experience with head movement gestures.

  • Minor bug fixes and telemetry updates.

8.13.0 – Nov. 11, 2024

  • The screen brightness no longer changes when the rear camera is used.

  • Fixed the video recording issues on some smartphone models.

  • Security and telemetry updates.

8.12.2 – Oct. 24, 2024

  • Internal SDK improvements.

8.12.1 – Sept. 30, 2024

  • Added Xcode 16 support.

  • Security and telemetry updates.

8.11.0 – Sept. 10, 2024

  • Security updates.

8.10.1 – Aug. 23, 2024

  • Bug fixes.

8.10.0 – Aug. 1, 2024

  • SDK now requires Xcode 15 and higher.

  • Security updates.

  • Bug fixes.

8.9.1 – July 16, 2024

  • Internal SDK improvements.

8.8.3 – July 11, 2024

  • Internal SDK improvements.

8.8.2 – June 25, 2024

  • Bug fixes.

8.8.1 – June 25, 2024

  • Logging updates.

8.8.0 – June 18, 2024

  • Security updates.

8.7.0 – May 10, 2024

  • Added a description for the error that occurs when providing an empty string as an ID in the addFolderID method.

  • Bug fixes.

8.6.0 – Apr. 10, 2024

  • The messages displayed by the SDK after uploading media have been synchronized with Android.

  • The bug causing analysis delays that might have occurred for the One Shot gesture has been fixed.

8.5.0 – Mar. 06, 2024

  • Removed the pause after the Scan gesture.

  • Security and logging updates.

8.4.2 – Jan. 24, 2024

  • Security updates.

8.4.0 – Jan. 09, 2024

  • Changed the default behavior in case a localization key is missing: now the English string value is displayed instead of a key.

  • Fixed some bugs.

8.3.3 – Dec. 11, 2023

  • Internal licensing improvements.

8.3.0 – Nov. 17, 2023

  • Implemented the possibility of using a master license that works with any bundle_id.

  • Fixed the bug with background color flashing.

8.2.1 – Nov. 11, 2023

  • Bug fixes.

8.2.0 – Oct. 30, 2023

  • The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully.

  • The messages for the errors that are retrieved from API are now detailed.

  • The toFrameGradientColor option in hintAnimationCustomization is now deprecated, please use the hintGradientColor option instead.

  • Restored iOS 11 support.

8.1.1 – Oct. 09, 2023

8.1.0 – Sept. 07, 2023

  • Updated the Liveness on-device model.

  • Added the Portuguese (Brazilian) locale.

  • If a media hasn't been uploaded correctly, the system now repeats the upload.

  • Added a new method to retrieve the telemetry (logging) identifier: getEventSessionId.

  • The setPermanentAccessToken, configure and login methods are now deprecated. Please use the setApiConnection method instead.

  • The setLicense(from path:String) method is now deprecated. Please use the setLicense(licenseSource: LicenseSource) method instead.

  • Fixed some bugs and improved the SDK work.

8.0.2 – Aug. 15, 2023

  • Fixed some bugs and improved the SDK algorithms.

8.0.0 – June 27, 2023

  • Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.

  • Improved the on-device models.

  • Updated the run method.

  • Added new structures: RequestStatus (analysis state), ResultMedia (analysis result for a single media) and RequestResult (consolidated analysis result for all media).

  • The updated AnalysisResult structure should now be used instead of OzAnalysisResult.

  • For the OZMedia object, you can now specify additional tags that are not included into our tags list.

  • The Selfie video length is now about 0.7 sec, the file size and upload time are reduced.

  • The hint text width can now exceed the frame width (when using the main camera).

  • The methods below are no longer supported:

Removed method | Replacement
analyse | AnalysisRequest.run
addToFolder | uploadMedia
documentAnalyse | AnalysisRequest.run
uploadAndAnalyse | AnalysisRequest.run
runOnDeviceBiometryAnalysis | AnalysisRequest.run
runOnDeviceLivenessAnalysis | AnalysisRequest.run
addMedia | uploadMedia

7.3.0 – June 06, 2023

  • Added the center hint background customization.

  • Added new face frame forms (Circle, Square).

  • Synchronized the default customization values with Android.

  • Added the Spanish locale.

  • iOS 11 is no longer supported, the minimal required version is 12.

7.2.1 – May 24, 2023

  • Fixed the issue with the server-based One shot analysis.

7.2.0 – May 18, 2023

  • Improved the SDK algorithms.

7.1.6 – May 04, 2023

  • Fixed error handling when uploading a file to API. From this version, an error will be raised to a host application in case of an error during file upload.

7.1.5 – Apr. 03, 2023

  • Improved the on-device Liveness.

7.1.4 – Mar. 24, 2023

  • Fixed the animation for sunglasses/mask.

  • Fixed the bug with the .document analysis.

  • Updated the descriptions of customization methods and structures.

7.1.2 – Feb. 21, 2023

  • Updated the TensorFlow version to 2.11.

  • Fixed several bugs, including the Biometry check failures on some phone models.

7.1.1 – Feb. 06, 2023

  • Added customization for the hint animation.

7.1.0 – Jan. 20, 2023

  • Integrated a new model.

  • Added the uploadMedia method to AnalysisRequest. The addMedia method is now deprecated.

  • Fixed the combo analysis error.

  • Added a button to reset the SDK theme and language settings.

  • Fixed some bugs and localization issues.

  • Extended the network request timeout to 90 sec.

  • Added a setting for the animation icon size.

7.0.0 – Dec. 08, 2022

6.7.0

6.4.0

  • Synchronized the version numbers with Android SDK.

  • Added a new field to the Analysis structure. The params field is for any additional parameters, for instance, if you need to set extracting the best shot on server to true. The best shot algorithm chooses the most high-quality frame from a video.

  • Fixed some localization issues.

  • Changed the Combo gesture.

  • Now you can launch the Liveness check to analyze images taken with another SDK.

3.0.1

  • The Zoom in and Zoom out gestures are no longer supported.

3.0.0

  • Added a new simplified analysis structure – AnalysisRequest.

2.3.0

  • Added methods of on-device analysis: runOnDeviceLivenessAnalysis and runOnDeviceBiometryAnalysis.

  • You can choose the installation version. Standard installation gives access to full functionality. The core version (OzLivenessSDK/Core) installs SDK without the on-device functionality.

  • Added a method to upload data to server and start analyzing it immediately: uploadAndAnalyse.

  • Improved the licensing process, now you can add a license when initializing SDK: OZSDK(licenseSources: [LicenseSource], completion: @escaping ((LicenseData?, LicenseError?) -> Void)), where LicenseSource is a path to physical location of your license, LicenseData contains the license information.

  • Added the setLicense method to force license adding.

2.2.3

  • Added the Turkish locale.

2.2.1

  • Added the Kyrgyz locale.

  • Added Completion Handler for analysis results.

  • Added Error User Info to telemetry to show detailed info in case of an analysis error.

2.2.0

  • Added local on-device analysis.

  • Added oval and rectangular frames.

  • Added Xcode 12.5.1+ support.

2.1.4

  • Added SDK configuration with licenses.

2.1.3

  • Added the One Shot gesture.

  • Improved OZVerificationResult: added bestShotURL, which contains the best shot image, and preferredMediaURL, which contains a URL to the best quality video.

  • When performing a local check, you can now choose a main or back camera.

2.1.2

  • Authorization sessions extend automatically.

  • Updated authorization interfaces.

2.1.1

  • Added the Kazakh locale.

  • Added license error texts.

2.1.0

  • You can cancel network requests.

  • You can specify Bundle for license.

  • Added analysis parameterization documentAnalyse.

  • Fixed building errors (Xcode 12.4 / Cocoapods 1.10.1).

2.0.0

  • Added license support.

  • Added Xcode 12 support instead of 11.

  • Fixed the documentAnalyse error where you had to fill analyseStates to launch the analysis.

  • Fixed logging.

Customizing iOS SDK Interface

Please note: the customization methods should be called before the video capturing ones.

// customization parameters for the toolbar
let toolbarCustomization = ToolbarCustomization(
   closeButtonIcon: UIImage(named: "example"),
   closeButtonColor: .black.withAlphaComponent(0.8),
   titleText: "",
   titleFont: .systemFont(ofSize: 18, weight: .regular),
   titleColor: .gray,
   backgroundColor: .lightGray)

// customization parameters for the center hint
let centerHintCustomization = CenterHintCustomization(
   textColor: .white,
   textFont: .systemFont(ofSize: 22, weight: .regular),
   verticalPosition: 42,
   backgroundColor: UIColor.init(hexRGBA: "1C1C1E8F")!,
   hideTextBackground: false,
   backgroundCornerRadius: 14)
   
// customization parameters for the center hint animation
let hintAnimationCustomization = HintAnimationCustomization(
    hideAnimation: false,
    animationIconSize: 80,
    toFrameGradientColor: UIColor.red)

// customization parameters for the frame around the user face
let faceFrameCustomization = FaceFrameCustomization(
   strokeWidth: 4,
   strokeFaceAlignedColor: .green,
   strokeFaceNotAlignedColor: .red,
   geometryType: .rect(cornerRadius: 10),
   strokePadding: 3)

// customization parameters for the SDK version text
let versionCustomization = VersionLabelCustomization(
   textFont: .systemFont(ofSize: 12, weight: .regular),
   textColor: .gray
)

// customization parameters for the background outside the frame
let backgroundCustomization = BackgroundCustomization(
   backgroundColor: .lightGray
)

// customization parameters for the antiscam protection text
let antiscamCustomization: AntiscamCustomization = AntiscamCustomization(
  customizationEnableAntiscam: false,
  customizationAntiscamTextMessage: "Face recognition",
  customizationAntiscamTextFont: UIFont.systemFont(ofSize: 15, weight: .semibold),
  customizationAntiscamTextColor: UIColor.black,
  customizationAntiscamBackgroundColor: UIColor.init(hexRGBA: "F2F2F7FF")!,
  customizationAntiscamCornerRadius: 18,
  customizationAntiscamFlashColor: UIColor.init(hexRGBA: "FF453AFF")!)

// customization parameters for your logo
// should be allowed by license
let logoCustomization = LogoCustomization(image: UIImage(), size: CGSize(width: 100, height: 100))

OZSDK.customization = Customization(toolbarCustomization: toolbarCustomization,
                   antiscamCustomization: antiscamCustomization,
                   centerHintCustomization: centerHintCustomization,
                   hintAnimationCustomization: hintAnimationCustomization,
                   faceFrameCustomization: faceFrameCustomization,
                   versionCustomization: versionCustomization,
                   backgroundCustomization: backgroundCustomization,
                  logoCustomization: logoCustomization)

Changelog

8.16.0 – Apr. 30, 2025

  • Changed the wording for the head_down gesture: the new wording is “tilt down”.

  • Updated the authorization logic.

  • Improved voiceover.

  • Bug fixes.

  • Security updates.

  • Android: you can now disable video validation that has been implemented to avoid recording extremely short videos (3 frames and less).

  • iOS: SDK now compresses videos if their size exceeds 10 MB.

  • iOS: Head movement gestures are now handled properly.

  • iOS: Xcode updated to version 16 to comply with Apple requirements.

8.14.0 – Dec. 17, 2024

  • Security and telemetry updates.

  • The SDK hints and UI controls can be voiced in accordance with WCAG requirements.

  • Improved user experience with head movement gestures.

  • Android: moved the large video compression step to the Liveness screen closure.

  • Android: fixed the bug when the best shot frame could contain an image with closed eyes.

  • Android: resolved codec issues on some smartphone models.

  • Android: fixed the bug when the recorded videos might appear green.

  • iOS: added Xcode 16 support.

  • iOS: the screen brightness no longer changes when the rear camera is used.

  • iOS: fixed the video recording issues on some smartphone models.

8.12.0 – Oct. 11, 2024

  • The executeLiveness method is now deprecated, please use startLiveness instead.

  • Updated the code needed to obtain the Liveness results.

  • Security and telemetry updates.

8.8.2 – June 27, 2024

  • Added descriptions for the errors that occur when providing an empty string as an ID in the addFolderID (iOS) and setFolderID (Android) methods.

  • Android: fixed a bug causing an endless spinner to appear if the user switches to another application during the Liveness check.

  • Android: fixed some smartphone model-specific bugs.

  • Security and logging updates.

8.6.0 – Apr. 15, 2024

  • Android: upgraded the on-device Liveness model.

  • Android: security updates.

  • iOS: the messages displayed by the SDK after uploading media have been synchronized with Android.

  • iOS: the bug causing analysis delays that might have occurred for the One Shot gesture has been fixed.

8.5.0 – Mar. 20, 2024

  • Removed the pause after the Scan gesture.

  • Security and logging updates.

  • Bug fixes.

  • Android: if the recorded video is larger than 10 MB, it gets compressed.

8.4.0 – Jan. 11, 2024

  • Android: updated the on-device Liveness model.

  • iOS: changed the default behavior in case a localization key is missing: now the English string value is displayed instead of a key.

  • Fixed some bugs.

8.3.0 – Nov. 30, 2023

  • Implemented the possibility of using a master license that works with any bundle_id.

  • Fixed the bug with background color flashing.

  • Video compression failure on some phone models is now fixed.

8.2.0 – Nov. 17, 2023

  • First version.

How to Install and Use Oz Flutter Plugin

Installation and Licensing

Add the lines below to pubspec.yaml of the project you want to add the plugin to (under dependencies):

  ozsdk:
    git:
      url: https://gitlab.com/oz-forensics/oz-mobile-flutter-plugin.git
      ref: '8.8.2'

Add the license file (e.g., license.json or forensics.license) to the Flutter application/assets folder. In pubspec.yaml, specify the Flutter asset:

assets:
  - assets/license.json # the license file name must match the one placed in assets

For Android, add the Oz repository to /android/build.gradle, allprojects → repositories section:

allprojects {
    repositories {
        google()
        mavenCentral()
        maven { url 'https://ozforensics.jfrog.io/artifactory/main' } // repository URL
    }
}

For Flutter 8.24.0 and above or Android Gradle plugin 8.0.0 and above, add to android/gradle.properties:

android.nonTransitiveRClass=false

The minimum SDK version should be 21 or higher:

defaultConfig {
  ...
  minSDKVersion 21
  ...
}

For iOS, set the minimum platform to 13 or higher in the Runner → Info → Deployment target → iOS Deployment Target.

In ios/Podfile, comment the use_frameworks! line (#use_frameworks!).

Getting Started with Flutter

Initializing SDK

Initialize SDK by calling the init plugin method. Note that the license file name and path should match the ones specified in pubspec.yaml (e.g., assets/license.json).

await OZSDK.initSDK([<% license path and license file name %>]);

Connecting SDK to API

Use the API credentials (login, password, and API URL) that you’ve received from us.

await OZSDK.setApiConnectionWithCredentials(<login>, <password>, <host>);

In production, instead of hard-coding the login and password inside the application, it is recommended to get the access token on your backend via the API auth method, then pass it to your application:

await OZSDK.setApiConnectionWithToken(<token>, <host>);

To send telemetry, connect to the telemetry server in the same way, via credentials or a token:

await OZSDK.setEventConnectionWithCredentials(<login>, <password>, <host>);

or

await OZSDK.setEventConnectionWithToken(<token>, <host>);

Capturing Videos

To start recording, use the startLiveness method to obtain the recorded media:

await OZSDK.startLiveness(<actions>, <use_main_camera>);

Parameter | Type | Description
actions | List<VerificationAction> | Actions from the captured video
use_main_camera | Boolean | If True, uses the main camera, otherwise the front one

Please note: for versions 8.11 and below, the method name is executeLiveness, and it returns the recorded media.

To obtain the media result, subscribe to livenessResult as shown below:

class Screen extends StatefulWidget {
  static const route = 'liveness';

  const Screen({super.key});

  @override
  State<Screen> createState() => _ScreenState();
}

class _ScreenState extends State<Screen> {
  late StreamSubscription<List<Media>> _subscription;

  @override
  void initState() {
    super.initState();

    // subscribe to liveness result
    _subscription = OZSDK.livenessResult.listen(
      (List<Media> medias) {
          // media contains liveness media
      },
      onError: (Object error) {
        // handle error, in most cases PlatformException
      },
    );
  }

  @override
  Widget build(BuildContext context) {
    // omitted to shorten the example
  }

  void _startLiveness() async {
    // use startLiveness to start liveness screen
    OZSDK.startLiveness(<list of actions>);
  }

  @override
  void dispose() {
    // cancel subscription
    _subscription.cancel();
    super.dispose();
  }
}

Checking Liveness and Face Biometry

To run the analyses, execute the code below.

Create the Analysis object:

List<Analysis> analysis = [ Analysis(Type.quality, Mode.serverBased, <media>, {}), ];

Execute the formed analysis:

final analysisResult = await OZSDK.analyze(analysis, [], {}) ?? [];

The analysisResult list of objects contains the result of the analysis.

If you want to use media captured by another SDK, the code should look like this:

final media = Media(FileType.documentPhoto, VerificationAction.oneShot, "photo_selfie", null, <path to image>, null, null, "");

The whole code block will look like this:

// replace VerificationAction.blank with your Liveness gesture if needed
final cameraMedia = await OZSDK.executeLiveness([VerificationAction.blank], use_main_camera);

final analysis = [
  Analysis(Type.quality, Mode.serverBased, cameraMedia, {}),
];

final analysisResult = await OZSDK.analyze(analysis, [], {});
To add a biometry check against a reference photo, extend the media list:

// replace VerificationAction.blank with your Liveness gesture if needed
final cameraMedia = await OZSDK.executeLiveness([VerificationAction.blank], use_main_camera);
final biometryMedia = [...cameraMedia];
biometryMedia.add(
  Media(
    FileType.documentPhoto,
    VerificationAction.blank,
    MediaType.movement,
    null,
    <your reference image path>,
    null,
    null,
    MediaTag.photoSelfie,
  ),
);

final analysis = [
  Analysis(Type.quality, Mode.serverBased, cameraMedia, {}),
  Analysis(Type.biometry, Mode.serverBased, biometryMedia, {}),
];

final analysisResult = await OZSDK.analyze(analysis, [], {});

Web Plugin

Web Plugin is a plugin called by your web application. It works in a browser context. The Web Plugin communicates with Web Adapter, which, in turn, communicates with Oz API.

For the samples below, replace https://web-sdk.sandbox.ohio.ozforensics.com in index.html with your own Web Adapter URL.

Launching the Plugin

The plugin window is launched with open(options) method:

OzLiveness.open({
  lang: 'en',
  action: [
    'photo_id_front', // request photo ID picture
    'video_selfie_blank' // request passive liveness video
  ],
  meta: { 
    // Your unique identifier that you can use later to find this folder in Oz API 
    // Optional, yet recommended
    'transaction_id': '<your_transaction_id>',
    // You can add iin if you plan to group transactions by the person identifier 
    'iin': '<your_client_iin>',
    // Other meta data
    'meta_key': 'meta_value',
  },
  on_error: function (result) {
  // error details
  console.error('on_error', result);
  },
  on_complete: function (result) {
    // This callback is invoked when the analysis is complete
    // It is recommended to commence the transaction on your backend, 
    // using transaction_id to find the folder in Oz API and get the results
    console.log('on_complete', result);
  },
  on_capture_complete: function (result) {
    // Handle captured data here if necessary
    console.log('on_capture_complete', result);
  }
});

Parameters

The full list of OzLiveness.open() parameters:

  • options – an object with the following settings:

    • token – (optional) the auth token;

    • license – an object containing the license data;

    • licenseUrl – a string containing the path to the license;

    • lang – a string containing the identifier of one of the installed language packs;

    • params – an object with identifiers and additional parameters:

      • extract_best_shot – true or false: run the best frame choice in the Quality analysis;

    • action – an array of strings with identifiers of actions to be performed. Available actions:

      • photo_id_front – photo of the ID front side;

      • photo_id_back – photo of the ID back side;

      • video_selfie_left – turn head to the left;

      • video_selfie_right – turn head to the right;

      • video_selfie_down – tilt head downwards;

      • video_selfie_high – raise head up;

      • video_selfie_smile – smile;

      • video_selfie_eyes – blink;

      • video_selfie_scan – scanning;

      • video_selfie_blank – no action, simple selfie;

      • video_selfie_best – special action to select the best shot from a video and perform analysis on it instead of the full video.

    • overlay_options – the document's template displaying options:

      • show_document_pattern: true/false – true by default, displays a template image; if set to false, the image is replaced by a rectangular frame;

    • on_error – a callback function (with one argument) that is called in case of any error that happens during video capturing; it retrieves the error information: an object with the error code, error message, and telemetry ID for logging;

    • on_close – a callback function (no arguments) that is called after the plugin window is closed (whether manually by the user or automatically after the check is completed);

    • device_id – (optional) the identifier of the camera being used;

    • disable_adaptive_aspect_ratio (since 1.5.0) – if True, disables the adaptive video aspect ratio, so your video doesn't automatically adjust to the window aspect ratio. The default value is False, and by default, the video adjusts to the closest ratio of 4:3, 3:4, 16:9, or 9:16. Please note: smartphones still require the portrait orientation to work;

    • get_user_media_timeout (since 1.5.0) – when Web SDK can't get access to the user camera, after this timeout it displays a hint on how to solve the problem. The default value is 40000 (ms).
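
A sketch combining several of these options (the Web Adapter host is configured as shown earlier):

OzLiveness.open({
  lang: 'en',
  action: ['video_selfie_best'],
  params: { extract_best_shot: true },           // pick the best frame for the Quality analysis
  overlay_options: { show_document_pattern: false }, // plain frame instead of the document template
  on_complete: function (result) {
    console.log('on_complete', result);
  }
});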

Oz Liveness Web SDK

Oz Liveness Web SDK is a module for processing data on clients' devices. With Oz Liveness Web SDK, you can take photos and videos of people via their web browsers and then analyze these media. Most browsers and devices are supported. Available languages: EN, ES, PT-BR, KK.

For Angular and React, replace https://web-sdk.sandbox.ohio.ozforensics.com in index.html with your own Web Adapter URL.

Web SDK requires HTTPS (with SSL encryption) to work; however, at localhost and 127.0.0.1, you can check the resources' availability via HTTP.

Oz Liveness Web SDK consists of two components: the Web Plugin, which runs in the user's browser, and the Web Adapter, which the plugin communicates with.


Flutter SDK Methods and Properties

clearActionVideos

Deletes all action videos from the file system (iOS 8.4.0 and higher, Android).

Returns: Future<Void>.

getSDKVersion

Returns the SDK version.

Returns: Future<String>.

initSDK

Initializes SDK with license sources.

Parameter | Type | Description
licenses | List<String> | A list of licenses

Returns:
Case | Text
True | Initialization has completed successfully
False | Initialization error

setApiConnectionWithCredentials

Authentication via credentials.

Parameter | Type | Description
email | String | User email
password | String | User password
host | String | Server URL

Returns:
Case | Text
Success | Nothing (void)
Failed | PlatformException with code = AUTHENTICATION_FAILED and message = exception details

setApiConnectionWithToken

Authentication via access token.

Parameter | Type | Description
token | String | Access token
host | String | Server URL

Returns:
Case | Text
Success | Nothing (void)
Failed | PlatformException with code = AUTHENTICATION_FAILED and message = exception details

setEventConnectionWithCredentials

Connection to the telemetry server via credentials.

Parameter | Type | Description
email | String | User email
password | String | User password
host | String | Server URL

Returns:
Case | Text
Success | Nothing (void)
Failed | PlatformException with code = AUTHENTICATION_FAILED and message = exception details

setEventConnectionWithToken

Connection to the telemetry server via access token.

Parameter | Type | Description
token | String | Access token
host | String | Server URL

Returns:
Case | Text
Success | Nothing (void)
Failed | PlatformException with code = AUTHENTICATION_FAILED and message = exception details

isLoggedIn

Checks whether an access token exists.

Returns:
Case | Returns
Token exists | True
Token does not exist | False

logout

Deletes the saved access token.

Returns: Nothing (void).

supportedLanguages

Returns the list of SDK supported languages.

Returns: The list of supported languages.

startLiveness

Starts the Liveness video capturing process.

Parameter | Type | Description
actions | – | Actions to execute
mainCamera | Boolean | Use the main (True) or front (False) camera

setSelfieLength

Sets the length of the Selfie gesture (in milliseconds).

Parameter | Type | Description
selfieLength | Int | The length of the Selfie gesture (in milliseconds). Should be within 500-5000 ms; the default length is 700

Returns: Error, if any.

Analyze

Launches the analyses.

| Parameter | Type | Description |
| --- | --- | --- |
| analysis | List<Analysis> | The list of Analysis structures |
| uploadMedia | List<Media> | The list of the captured videos |
| params | Map<String, Any> | Additional parameters |

Returns

The list of RequestResult structures with the analyses' results.

setLocalization

Sets the SDK localization.

| Parameter | Type | Description |
| --- | --- | --- |
| locale | Locale | The SDK language |

attemptSettings

Sets the number of attempts before the SDK returns an error.

| Parameter | Type | Description |
| --- | --- | --- |
| singleCount | int | Attempts on a single action/gesture |
| commonCount | int | Total number of attempts on all actions/gestures if you use a sequence of them |

setUICustomization

Sets the UI customization values for OzLivenessSDK. The values are described in the Customization structures section; the structures are defined in the lib/customization.dart file.

setfaceAlignmentTimeout

Sets the timeout for the face alignment for actions.

| Parameter | Type | Description |
| --- | --- | --- |
| timeout | int | Timeout in milliseconds |

Fonts and Other Customized Resources

For iOS

Add fonts and drawable resources to the application/ios project.

For Android

Fonts and images should be placed into related folders:

ozforensics_flutter_plugin\android\src\main\res\drawable
ozforensics_flutter_plugin\android\src\main\res\font

Customization structures

These are defined in the customization.dart file.

UICustomization

Contains the information about customization parameters.

ToolbarCustomization

Toolbar customization parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| closeButtonIcon | String | Close button icon received from plugin |
| closeButtonColor | String | Color #XXXXXX |
| titleText | String | Header text |
| titleFont | String | Text font |
| titleSize | int | Font size |
| titleFontStyle | String | Font style |
| titleColor | String | Color #XXXXXX |
| titleAlpha | int | Header text opacity |
| isTitleCentered | bool | Sets the text centered |
| backgroundColor | String | Header background color #XXXXXX |
| backgroundAlpha | int | Header background opacity |

CenterHintCustomization

Center hint customization parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| textFont | String | Text font |
| textFontStyle | String | Font style |
| textColor | String | Color #XXXXXX |
| textSize | int | Font size |
| verticalPosition | int | Y position |
| textAlpha | int | Text opacity |
| centerBackground | bool | Sets the text centered |

HintAnimation

Hint animation customization parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| hideAnimation | bool | Hides the hint animation |
| animationIconSize | int | Animation icon size in px (40–160) |
| hintGradientColor | String | Color #XXXXXX |
| hintGradientOpacity | int | Gradient opacity |

FaceFrameCustomization

Frame around face customization parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| geometryType | String | Frame shape received from plugin |
| geometryTypeRadius | int | Corner radius for rectangle |
| strokeWidth | int | Frame stroke width |
| strokeFaceNotAlignedColor | String | Color #XXXXXX |
| strokeFaceAlignedColor | String | Color #XXXXXX |
| strokeAlpha | int | Stroke opacity |
| strokePadding | int | Stroke padding |

VersionLabelCustomization

SDK version customization parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| textFont | String | Text font |
| textFontStyle | String | Font style |
| textColor | String | Color #XXXXXX |
| textSize | int | Font size |
| textAlpha | int | Text opacity |

BackgroundCustomization

Background customization parameters.

| Parameter | Type | Description |
| --- | --- | --- |
| backgroundColor | String | Color #XXXXXX |
| backgroundAlpha | int | Background opacity |

Flutter structures

Defined in the models.dart file.

enum Locale

Stores the language information.

| Case | Description |
| --- | --- |
| en | English |
| hy | Armenian |
| kk | Kazakh |
| ky | Kyrgyz |
| tr | Turkish |
| es | Spanish |
| pt_br | Portuguese (Brazilian) |

enum MediaType

The type of media captured.

| Case | Description |
| --- | --- |
| movement | A media with an action |
| documentBack | The back side of the document |
| documentFront | The front side of the document |

enum FileType

The type of the captured file.

| Case | Description |
| --- | --- |
| documentPhoto | A photo of a document |
| video | A video |
| shotSet | A frame archive |

enum MediaTag

Contains an action from the captured video.

| Case | Description |
| --- | --- |
| blank | A video with no gesture |
| photoSelfie | A selfie photo |
| videoSelfieOneShot | A video with the best shot taken |
| videoSelfieScan | A video with the scanning gesture |
| videoSelfieEyes | A video with the blink gesture |
| videoSelfieSmile | A video with the smile gesture |
| videoSelfieHigh | A video with the lifting head up gesture |
| videoSelfieDown | A video with the tilting head downwards gesture |
| videoSelfieRight | A video with the turning head right gesture |
| videoSelfieLeft | A video with the turning head left gesture |
| photoIdPortrait | A photo from a document |
| photoIdBack | A photo of the back side of the document |
| photoIdFront | A photo of the front side of the document |

Media

Stores information about media.

| Parameter | Type | Description | Platform |
| --- | --- | --- | --- |
| fileType | FileType | The type of the file | Android |
| movement | VerificationAction | An action on a media | iOS |
| mediatype | String | A type of media | iOS |
| videoPath | String | A path to a video | |
| bestShotPath | String | A path to the best shot in PNG (for video) or to the image (for liveness) | |
| preferredMediaPath | String | URL of the API media container | |
| photoPath | String | A path to a photo | |
| archivePath | String | A path to an archive | |
| tag | MediaTag | A tag for media | Android |

RequestResult

Stores information about the analysis result.

| Parameter | Type | Description | Platform |
| --- | --- | --- | --- |
| folderId | String | The folder identifier | |
| type | Type | The analysis type | |
| errorCode | int | The error code | Android only |
| errorMessage | String | The error message | |
| mode | Mode | The mode of the analysis | |
| confidenceScore | Double | The resulting score | |
| resolution | Resolution | The completed analysis' result | |
| status | Boolean | The analysis state: true – success; false – failed | |

Analysis

Stores data about a single analysis.

| Parameter | Type | Description |
| --- | --- | --- |
| type | Type | The type of the analysis |
| mode | Mode | The mode of the analysis |
| mediaList | List<Media> | Media to analyze |
| params | Map<String, String> | Additional analysis parameters |
| sizeReductionStrategy | SizeReductionStrategy | Defines what type of media is sent to the server in case of the hybrid analysis once the on-device analysis has finished successfully |

Structures

enum Type

Analysis type.

| Case | Description |
| --- | --- |
| biometry | The algorithm that compares several media and checks whether the people on them are the same person |
| quality | The algorithm that checks whether a person in a video is a real human acting in good faith, not a fake of any kind |

enum Mode

Analysis mode.

| Case | Description |
| --- | --- |
| onDevice | The on-device analysis with no server needed |
| serverBased | The server-based analysis |
| hybrid | The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check |

enum VerificationAction

Contains the action from the captured video.

| Case | Description |
| --- | --- |
| oneShot | The best shot from the video taken |
| blank | A selfie with face alignment check |
| scan | Scan |
| headRight | Head turned right |
| headLeft | Head turned left |
| headDown | Head tilted downwards |
| headUp | Head lifted up |
| eyeBlink | Blink |
| smile | Smile |

Resolution

The general status for all analyses applied to the folder created.

| Case | Description |
| --- | --- |
| failed | One or more analyses failed due to some error and couldn't get finished |
| declined | The check failed (e.g., faces don't match or some spoofing attack detected) |
| success | Everything went fine, the check succeeded (e.g., faces match or liveness confirmed) |
| operatorRequired | The result should be additionally checked by a human operator |

enum SizeReductionStrategy

Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.

| Case | Description |
| --- | --- |
| uploadOriginal | The original video |
| uploadCompressed | The compressed video |
| uploadBestShot | The best shot taken from the video |
| uploadNothing | Nothing is sent (note that no folder will be created) |

Customization resources

This is a Map to define the platform-specific resources on the plugin level.

closeButtonIcon

This key is a Map for the close button icon.

| Key | Value |
| --- | --- |
| Close | Android drawable resource / iOS Pods resource |
| Arrow | Android drawable resource / iOS Pods resource |

titleFont

This key is a Map containing the data on the uploaded fonts.

| Key | Value |
| --- | --- |
| Flutter application font name | Android font resource / iOS Pods resource, used to retrieve the font on the plugin level |

titleStyle

This key is a Map containing the data on the uploaded font styles.

| Key | Value |
| --- | --- |
| Flutter application font style name | Name of the style retrieved for the font creation on the plugin level |

faceFrameGeometry

This key is a Map containing the data on the frame shape.

| Key | Value |
| --- | --- |
| Oval | Oval shape |
| Rectangle | Rectangular shape |

Description of the on_complete Callback

Safe

When result_mode is safe, the on_complete callback contains the state of the analysis only:

{
  "state": "finished"
}

Status

For the status value, the callback contains the overall state of the analysis and, for each of the analysis types, the name of the type, its state, and resolution.

{
 "state": "finished",
 "analyses": {
   "quality": {
     "state": "finished",
     "resolution": "success"
   }
 }
}

Folder

The folder value is almost identical to the status value; the only difference is that folder_id is added.

{
 "state": "finished",
 "folder_id": "your_folder_id",
 "analyses": {
   "quality": {
     "state": "finished",
     "resolution": "success"
   }
 }
}

Full

If result_mode is set to full, you receive the full information on the analysis:

  • everything that you could see in the folder mode;

  • timestamps;

  • metadata;

  • analyses’, company, analysis group IDs;

  • thresholds;

  • media info;

  • and more.
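For instance, with result_mode set to status, folder, or full, the host page can branch on the resolution right in the callback. A minimal sketch (field names follow the examples above):

OzLiveness.open({
  on_complete: function (result) {
    var quality = result.analyses && result.analyses.quality;
    if (result.state === 'finished' && quality && quality.resolution === 'success') {
      // folder_id is present in the folder and full modes
      console.log('Liveness passed, folder:', result.folder_id);
    } else {
      console.log('Liveness declined or not finished');
    }
  }
});

Keep in mind the recommendation from the Security Recommendations article: production decisions should be made on your back end, not in the browser.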

Description of the on_result Callback

Safe

When result_mode is safe, the on_result callback contains the state of the analysis only:

{
 "state": "processing"
}

or

{
 "state": "finished"
}

Status

For the status value, the callback contains the overall state of the analysis and, for each of the analysis types, the name of the type, its state, and resolution.

{
 "state": "processing",
 "analyses": {
   "quality": {
     "state": "processing",
     "resolution": ""
   }
 }
}

or

{
 "state": "finished",
 "analyses": {
   "quality": {
     "state": "finished",
     "resolution": "success"
   }
 }
}

Folder

The folder value is almost identical to the status value; the only difference is that folder_id is added.

{
 "state": "processing",
 "folder_id": "your_folder_id",
 "analyses": {
   "quality": {
     "state": "processing",
     "resolution": ""
   }
 }
}

Full

If result_mode is set to full, you will receive:

  • while the analysis is in progress, a response similar to the status response for processing;

  • once the analysis is finished, the full information on the analysis:

    • everything that you could see in the folder mode;

    • timestamps;

    • metadata;

    • analyses’, company, and analysis group IDs;

    • thresholds;

    • media info;

    • and more.
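Because on_result fires periodically, it suits progress indication. A short sketch under the same assumptions (updateProgressUI is a hypothetical helper of your application):

OzLiveness.open({
  on_result: function (result) {
    // 'processing' while the analyses run, 'finished' at the end
    updateProgressUI(result.state === 'processing');
  }
});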

Localization: Adding a Custom Language Pack

The add_lang(lang_id, lang_obj) method allows adding a new or customized language pack.

Parameters:

  • lang_id: a string value that can be subsequently used as lang parameter for the open() method;

  • lang_obj: an object that includes identifiers of translation strings as keys and translation strings themselves as values.

A list of language identifiers:

| lang_id | Language |
| --- | --- |
| en | English |
| es | Spanish |
| pt-br* | Portuguese (Brazilian) |
| kz | Kazakh |

*Formerly pt, changed in 1.3.1.

An example of usage:

OzLiveness.add_lang('en', enTranslation), where enTranslation is a JSON object.

// Editing the button text
OzLiveness.add_lang('en', {
  action_photo_button: 'Take a photo'
});

To set the SDK language, when you launch the plugin, specify the language identifier in lang:

OzLiveness.open({
    lang: 'es', // the identifier of the needed language
    ...
});

You can check which locales are installed in Web SDK using the OzLiveness.get_langs() method. If you have added a locale manually, it will also be shown.

The full list of translation string identifiers is available in the Web SDK string files. The keys oz_action_*_go refer to the appropriate gestures, and oz_tutorial_camera_* – to the hints on how to enable the camera in different browsers. The other keys refer to the hints for any gesture, info messages, or errors.

Since 1.5.0, if your language pack doesn't include a key, the message for this key will be shown in English.

Before 1.5.0

If your language pack doesn't include a key, the translation for this key won't be shown.

Capturing Video and Description of the on_capture_complete Callback

In this article, you’ll learn how to capture videos and send them through your backend to Oz API.

1. Overview

Here is the data flow for your scenario:

1. Oz Web SDK takes a video and makes it available for the host application as a frame sequence.

2. The host application calls your backend, passing an archive of these frames.

3. After the necessary preprocessing steps, your backend calls Oz API, which performs all necessary analyses and returns the analyses’ results.

4. Your backend responds back to the host application if needed.

2. Implementation

On the server side, Web SDK must be configured to operate in the Capture mode: the architecture parameter must be set to capture in the app_config.json file. In your Web app, add a callback to process the captured media when opening the Web SDK:

OzLiveness.open({
  ... // other parameters
  on_capture_complete: function(result) {
         // Your code to process media/send it to your API, this is STEP #2
  }
})

The result object structure depends on whether any virtual camera is detected or not.

No Virtual Camera Detected

{
	"action": <action>,
	"best_frame": <bestframe>,
	"best_frame_png": <bestframe_png>,
	"best_frame_bounding_box": {
		"left": <bestframe_bb_left>,
		"top": <bestframe_bb_top>,
		"right": <bestframe_bb_right>,
		"bottom": <bestframe_bb_bottom>
		},
	"best_frame_landmarks": {
		"left_eye": [bestframe_x_left_eye, bestframe_y_left_eye],
		"right_eye": [bestframe_x_right_eye, bestframe_y_right_eye],
		"nose_base": [bestframe_x_nose_base, bestframe_y_nose_base],
		"mouth_bottom": [bestframe_x_mouth_bottom, bestframe_y_mouth_bottom],
		"left_ear": [bestframe_x_left_ear, bestframe_y_left_ear],
		"right_ear": [bestframe_x_right_ear, bestframe_y_right_ear]
		},
	"frame_list": [<frame1>, <frame2>],
	"frame_bounding_box_list": [
		{
		"left": <frame1_bb_left>,
		"top": <frame1_bb_top>,
		"right": <frame1_bb_right>,
		"bottom": <frame1_bb_bottom>
		},
		{
		"left": <frame2_bb_left>,
		"top": <frame2_bb_top>,
		"right": <frame2_bb_right>,
		"bottom": <frame2_bb_bottom>
		},
	],
	"frame_landmarks": [
		{
		"left_eye": [frame1_x_left_eye, frame1_y_left_eye],
		"right_eye": [frame1_x_right_eye, frame1_y_right_eye],
		"nose_base": [frame1_x_nose_base, frame1_y_nose_base],
		"mouth_bottom": [frame1_x_mouth_bottom, frame1_y_mouth_bottom],
		"left_ear": [frame1_x_left_ear, frame1_y_left_ear],
		"right_ear": [frame1_x_right_ear, frame1_y_right_ear]
		},
		{
		"left_eye": [frame2_x_left_eye, frame2_y_left_eye],
		"right_eye": [frame2_x_right_eye, frame2_y_right_eye],
		"nose_base": [frame2_x_nose_base, frame2_y_nose_base],
		"mouth_bottom": [frame2_x_mouth_bottom, frame2_y_mouth_bottom],
		"left_ear": [frame2_x_left_ear, frame2_y_left_ear],
		"right_ear": [frame2_x_right_ear, frame2_y_right_ear]
		}
	],
"from_virtual_camera": null,
"additional_info": <additional_info>
}

Any Virtual Camera Detected

{
	"action": <action>,
	"best_frame": null,
	"best_frame_png": null,
	"best_frame_bounding_box": null,
	"best_frame_landmarks": null,
	"frame_list": null,
	"frame_bounding_box_list": null,
	"frame_landmarks": null,
	"from_virtual_camera": {
	"additional_info": <additional_info>,
		"best_frame": <bestframe>,
		"best_frame_png": <best_frame_png>,
		"best_frame_bounding_box": {
			"left": <bestframe_bb_left>,
			"top": <bestframe_bb_top>,
			"right": <bestframe_bb_right>,
			"bottom": <bestframe_bb_bottom>
			},
		"best_frame_landmarks": {
			"left_eye": [bestframe_x_left_eye, bestframe_y_left_eye],
			"right_eye": [bestframe_x_right_eye, bestframe_y_right_eye],
			"nose_base": [bestframe_x_nose_base, bestframe_y_nose_base],
			"mouth_bottom": [bestframe_x_mouth_bottom, bestframe_y_mouth_bottom],
			"left_ear": [bestframe_x_left_ear, bestframe_y_left_ear],
			"right_ear": [bestframe_x_right_ear, bestframe_y_right_ear]
			},
		"frame_list": [<frame1>, <frame2>],
		"frame_bounding_box_list": [
			{
			"left": <frame1_bb_left>,
			"top": <frame1_bb_top>,
			"right": <frame1_bb_right>,
			"bottom": <frame1_bb_bottom>
			},
			{
			"left": <frame2_bb_left>,
			"top": <frame2_bb_top>,
			"right": <frame2_bb_right>,
			"bottom": <frame2_bb_bottom>
			},
			],
		"frame_landmarks": [
			{
			"left_eye": [frame1_x_left_eye, frame1_y_left_eye],
			"right_eye": [frame1_x_right_eye, frame1_y_right_eye],
			"nose_base": [frame1_x_nose_base, frame1_y_nose_base],
			"mouth_bottom": [frame1_x_mouth_bottom, frame1_y_mouth_bottom],
			"left_ear": [frame1_x_left_ear, frame1_y_left_ear],
			"right_ear": [frame1_x_right_ear, frame1_y_right_ear]
			},
			{
			"left_eye": [frame2_x_left_eye, frame2_y_left_eye],
			"right_eye": [frame2_x_right_eye, frame2_y_right_eye],
			"nose_base": [frame2_x_nose_base, frame2_y_nose_base],
			"mouth_bottom": [frame2_x_mouth_bottom, frame2_y_mouth_bottom],
			"left_ear": [frame2_x_left_ear, frame2_y_left_ear],
			"right_ear": [frame2_x_right_ear, frame2_y_right_ear]
			}
		]
	}
}

Here’s the list of variables with descriptions.

| Variable | Type | Description |
| --- | --- | --- |
| best_frame | String | The best frame, JPEG in the data URL format |
| best_frame_png | String | The best frame, PNG in the data URL format; required for protection against virtual cameras when video is not used |
| best_frame_bounding_box | Array[Named_parameter: Int] | The coordinates of the bounding box where the face is located in the best frame |
| best_frame_landmarks | Array[Named_parameter: Array[Int, Int]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the best frame |
| frame_list | Array[String] | All frames in the data URL format |
| frame_bounding_box_list | Array[Array[Named_parameter: Int]] | The coordinates of the bounding boxes where the face is located in the corresponding frames |
| frame_landmarks | Array[Named_parameter: Array[Int, Int]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the corresponding frames |
| action | String | An action code |
| additional_info | String | Information about the client environment |

Please note:

  • You can retrieve the MP4 video from a folder using the /api/folders/{{folder_id}} request with this folder's ID. In the JSON that you receive, look for the preview_url in source_media. The preview_url parameter contains the link to the video. From the plugin, MP4 videos are unavailable (only as frame sequences).

  • Also, in the POST {{host}}/api/folders request, you need to add the additional_info field. It is required for the capture architecture mode to gather the necessary information about client environment. Here’s the example of filling in the request’s body:

"VIDEO_FILE_KEY": VIDEO_FILE_ZIP_BINARY
"payload": "{
        "media:meta_data": {
           "VIDEO_FILE_KEY": {
              "additional_info": <additional_info>
              }
           }
}"
  • Oz API accepts data without the base64 encoding.
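The video from Oz Web SDK is a frame sequence, so to send it to Oz API you need to archive the frames and transmit them as a ZIP file via the POST /api/folders request. A client-side sketch of step #2: packing the frames from the on_capture_complete result and sending them to your backend. JSZip and the /upload-liveness endpoint are assumptions here, not part of Web SDK:

// assumes the JSZip library is loaded on the page
function sendFrames(result) {
  var zip = new JSZip();
  (result.frame_list || []).forEach(function (dataUrl, i) {
    // strip the "data:image/jpeg;base64," prefix to get raw base64
    zip.file('frame_' + i + '.jpg', dataUrl.split(',')[1], { base64: true });
  });
  zip.generateAsync({ type: 'blob' }).then(function (blob) {
    var form = new FormData();
    form.append('video', blob, 'frames.zip');
    form.append('additional_info', result.additional_info);
    // hypothetical endpoint on your backend, which then calls Oz API
    return fetch('/upload-liveness', { method: 'POST', body: form });
  });
}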

Security Recommendations

Retrieve the analysis response and process it on the back end

Even though the analysis result is available to the host application via Web Plugin callbacks, it is recommended that the application back end receives it directly from Oz API. All decisions of the further process flow should be made on the back end as well. This eliminates any possibility of malicious manipulation with analysis results within the browser context.

To find your folder from the back end, you can follow these steps:

  1. On the front end, add your unique identifier to the folder metadata.

OzLiveness.open({
  ...
  meta: { 
  // the user or lead ID from an external lead generator 
  // that you can pass to keep track of multiple attempts made by the same user
    'user_id': '<user_or_lead_id>',
  // the unique attempt ID
    'transaction_id': '<unique_transaction_id>'
  }
});

You can add your own key-value pairs to attach user document numbers, phone numbers, or any other textual information. However, ensure that tracking personally identifiable information (PII) complies with relevant regulatory requirements.

  2. Use the on_complete callback of the plugin to be notified when the analysis is done. Once notified, call your back end and pass the transaction_id value.

  3. On the back end side, find the folder by the identifier you've specified using the Oz API Folder LIST method:

    /api/folders/?meta_data=transaction_id==unique_id1&with_analyses=true

    To speed up the processing of your request, we recommend adding the time filter as well:

    /api/folders/?meta_data=transaction_id==unique_id1&with_analyses=true&time_created.min=([CURRENT_TIME]-1hour)
  4. In the response, find the analysis results and folder_id for future reference.
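A back-end sketch of steps 3–4 (Node.js 18+; the API host is a placeholder, and the X-Forensic-Access-Token header is an assumption based on a typical Oz API setup):

async function findFolderByTransaction(transactionId) {
  const url = 'https://your-oz-api-host/api/folders/'
    + '?meta_data=transaction_id==' + encodeURIComponent(transactionId)
    + '&with_analyses=true';
  const resp = await fetch(url, {
    // authentication header name assumed; use your Oz API access token
    headers: { 'X-Forensic-Access-Token': process.env.OZ_API_TOKEN }
  });
  const folders = await resp.json();
  // inspect the analyses and keep folder_id for future reference
  return folders;
}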

Limit the amount of information sent to Web Plugin from the server

Web Adapter may send analysis results to the Web Plugin with various levels of verbosity. It is recommended that, in production, the level of verbosity is set to minimum: in the Web Adapter configuration file, set the result_mode parameter to "safe".

"result_mode": "safe"

Using a Webhook to Get Results

The webhook feature simplifies getting analyses' results. Instead of polling after the analyses are launched, add a webhook that will call your website once the results are ready.

When you create a folder, add the webhook endpoint (resolution_endpoint) into the payload section of your request body:

Payload example
{    
  "resolution_endpoint": "address.com", // use address of your website here
    ... // other request details - folder etc.
}

You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
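For illustration, a minimal Express endpoint that could serve as resolution_endpoint. The handler path and the exact payload field names are assumptions; the payload mirrors the folder and analyses structures described above:

const express = require('express');
const app = express();
app.use(express.json());

// the URL you passed as resolution_endpoint when creating the folder
app.post('/oz-webhook', (req, res) => {
  const { folder_id, analyses } = req.body; // field names assumed
  console.log('Analyses completed for folder:', folder_id, analyses);
  res.sendStatus(200); // acknowledge receipt
});

app.listen(3000);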

Look-and-Feel Customization

To set your own look-and-feel options, use the style section in the OzLiveness.open method. The options are listed below the example.
OzLiveness.open({
  style: {
    baseColorCustomization: {
      textColorPrimary: "#000000",
      backgroundColorPrimary: "#FFFFFF",
      textColorSecondary: "#8E8E93",
      backgroundColorSecondary: "#F2F2F7",
      iconColor: "#00A5BA"
    },
    baseFontCustomization: {
      textFont: "Roboto, sans-serif",
      textSize: "16px",
      textWeight: "400",
      textStyle: "normal"
    },
    titleFontCustomization: {
      textFont: "inherit",
      textSize: "36px",
      textWeight: "500",
      textStyle: "normal"
    },
    buttonCustomization: {
      textFont: "inherit",
      textSize: "14px",
      textWeight: "500",
      textStyle: "normal",
      textColorPrimary: "#FFFFFF",
      backgroundColorPrimary: "#00A5BA",
      textColorSecondary: "#00A5BA",
      backgroundColorSecondary: "#DBF2F5",
      cornerRadius: "10px"
    },
    toolbarCustomization: {
      closeButtonIcon: "cross",
      iconColor: "#707070"
    },
    centerHintCustomization: {
      textFont: "inherit",
      textSize: "24px",
      textWeight: "500",
      textStyle: "normal",
      textColor: "#FFFFFF",
      backgroundColor: "#1C1C1E",
      backgroundOpacity: "56%",
      backgroundCornerRadius: "14px",
      verticalPosition: "38%"
    },
    hintAnimation: {
      hideAnimation: false,
      hintGradientColor: "#00BCD5",
      hintGradientOpacity: "100%",
      animationIconSize: "80px"
    },
    faceFrameCustomization: {
      geometryType: "oval",
      cornersRadius: "0px",
      strokeDefaultColor: "#D51900",
      strokeFaceInFrameColor: "#00BCD5",
      strokeOpacity: "100%",
      strokeWidth: "6px",
      strokePadding: "4px"
    },
    documentFrameCustomization: {
      cornersRadius: "20px",
      templateColor: "#FFFFFF",
      templateOpacity: "100%"
    },
    backgroundCustomization: {
      backgroundColor: "#FFFFFF",
      backgroundOpacity: "88%"
    },
    antiscamCustomization: {
      enableAntiscam: false,
      textMessage: "",
      textFont: "inherit",
      textSize: "14px",
      textWeight: "500",
      textStyle: "normal",
      textColor: "#000000",
      textOpacity: "100%",
      backgroundColor: "#F2F2F7",
      backgroundOpacity: "100%",
      backgroundCornerRadius: "20px",
      flashColor: "#FF453A"
    },
    versionTextCustomization: {
      textFont: "inherit",
      textSize: "16px",
      textWeight: "500",
      textStyle: "normal",
      textColor: "#000000",
      textOpacity: "56%"
    },
    maskCustomization: {
      maskColor: "#008700",
      glowColor: "#000102",
      minAlpha: "30%", // 0 to 1 or 0% to 100%
      maxAlpha: "100%" // 0 to 1 or 0% to 100%
    }
  }
});

baseColorCustomization

Main color settings.
| Parameter | Description |
| --- | --- |
| textColorPrimary | Main text color |
| backgroundColorPrimary | Main background color |
| textColorSecondary | Secondary text color |
| backgroundColorSecondary | Secondary background color |
| iconColor | Icons’ color |

baseFontCustomization

Main font settings.
| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |

titleFontCustomization

Title font settings.
| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |

buttonCustomization

Buttons’ settings.

toolbarCustomization

Toolbar settings.

centerHintCustomization

Center hint settings.

hintAnimation

Hint animation settings.

faceFrameCustomization

Face frame settings.

documentFrameCustomization

Document capture frame settings.

backgroundCustomization

Background settings.

antiscamCustomization

Scam protection settings: the antiscam message warns user about their actions being recorded.

versionTextCustomization

SDK version text settings.

maskCustomization

3D mask settings. The mask has been implemented in 1.2.1.

Migrating to the New Design from the Previous Versions (before 1.0.1)

Table of parameters' correspondence:

Changelog

Web SDK changes

1.7.13 – Apr. 02, 2025

  • Added support for upcoming API 6.0.

  • Improved accessibility: the hints throughout the customer journey (camera access, processing data, uploading data, requesting results) are now properly and completely voiced by screen readers in assertive mode (changes in hints are announced immediately).

  • Created an endpoint for license verification: [GET] /check_license.php.

  • Reduced the bundle size.

  • Fixed the issue with missing analysis details in the on_complete callback when using result_mode: full.

  • Fixed the issue where the camera switch button might be missing.

  • The front camera no longer displays the user's actions as in a mirror image.

  • Improved error handling.

  • Improved support for low-performance devices.

  • Added the closed eyes check to the Scan gesture.

  • Major security and telemetry updates.

  • Bug fixes.

1.6.15 – Dec. 27, 2024

  • Simplified the checks that require the user to move their head: turning left or right, tilting, or looking down.

  • Decreased the distance threshold for the head-moving actions: turning left or right, tilting, or looking down.

  • The application's behavior when the opened dev-tools are detected is now manageable.

  • You can now configure method signatures to make them trusted via checksum of the modified function.

  • Changed the wording for the head_down gesture: the new wording is “tilt down”.

  • Fixed an issue where an arrow incorrectly appeared after capturing head-movement gestures.

  • Fixed an issue where the oval disappeared when the "Great!" phrase was displayed.

  • Security updates.

1.6.12 – Oct. 18, 2024

  • Security updates.

  • Resolved the issue where a video could not be generated from a sequence of frames.

1.6.0 – June 24, 2024

  • The on_complete callback is now called upon folder status change.

  • Updated instructions for camera access in the Android Chrome and Facebook browsers. New keys:

    • error_no_camera_access,

    • oz_tutorial_camera_android_chrome_with_screens_title,

    • oz_tutorial_camera_android_chrome_instruction_screen_click_settings,

    • oz_tutorial_camera_android_chrome_instruction_screen_permissions,

    • oz_tutorial_camera_android_chrome_instruction_screen_allow_access,

    • try_again,

    • oz_tutorial_camera_external_browser_button,

    • oz_tutorial_camera_external_browser_manual_open_link,

    • oz_tutorial_camera_external_browser_title.

  • Added the get_langs() method that returns a list of locales available in the installed Web SDK.

  • Added an error for the case of setting a non-available locale.

  • Added an error for the case when a necessary resource is missing. New key: unable_load_resource.

  • Changed texts for the error_connection_lost and error_service_unavailable errors.

  • Uploaded new Web SDK string files.

  • The crop function no longer adds borders for images smaller than 512×512.

1.5.3 – May 28, 2024

  • In case of camera access timeout, we now display a page with instructions for users to enable camera access: a default one for all browsers and a specific one for Facebook. New localization keys:

    • accessing_camera_switch_to_another_browser,

    • error_camera_timeout_instruction,

    • error_camera_timeout_title,

    • error_camera_timeout_android_facebook_instruction.

1.5.0 – May 05, 2024

  • Improved user experience for card printer machines. Users no longer need to get that close to the screen with the face frame.

  • Added the disable_adaptive_aspect_ratio parameter to the Web Plugin. This parameter switches off the default video aspect ratio adjustment to the window.

  • Implemented the get_user_media_timeout parameter for Web Plugin: when the SDK can’t get access to the user camera, after this timeout it displays a hint on how to solve the problem. New localization keys:

    • oz_tutorial_camera_android_edge_browser

    • oz_tutorial_camera_android_edge_instruction

    • oz_tutorial_camera_android_edge_title

    • error_camera_timeout_instruction

    • error_camera_timeout_title

  • Improved the localization: when SDK can’t find a translation for a key, it displays a message in English.

  • You can now distribute the serverless Web SDK via Node Package Manager.

  • You can switch off the display of API errors in modal windows. Set the disable_adapter_errors_on_screen parameter in the configuration file to True.

  • The mobile browsers now use the rear camera to take the documents’ photos.

  • Updated samples.

  • Fixed the bug with abnormal 3D mask reaction when user needs to repeat a gesture.

  • Logging and security updates.

1.4.3 – Apr. 15, 2024

  • Fixed the bug where the warning about incorrect device orientation was not displayed when a mobile user attempted to take a video with their face in landscape orientation.

1.4.2 – Mar. 14, 2024

  • Debugging improvements.

1.4.1 – Feb. 27, 2024

  • Major security updates: improved protection against virtual cameras and JavaScript tampering.

  • Improved WebView support:

    • Added camera access instructions for applications within the generic WebView browsers on Android and iOS. The corresponding events are added to telemetry.

    • Improved the React Native app integration by adding the webkit-playsinline attribute, thereby fixing the issue of the full-screen camera launch on iOS WebView.

  • The iframe usage error that is shown when iframe_allowed = False is now displayed properly.

  • New localization keys:

    • oz_tutorial_camera_android_webview_browser

    • oz_tutorial_camera_android_webview_instruction

    • oz_tutorial_camera_android_webview_title

1.4.0 – Feb. 07, 2024

  • The iframe support is back: set the iframe_allowed parameter in the Web Adapter configuration file to True.

  • The interval for polling for the analyses’ results is now configurable. Change it in the results_polling_interval parameter of the Web Adapter configuration file if necessary.

  • You can now select the front or back camera via Web Plugin. In the OzLiveness.open() method, set cameraFacingMode to user for the front camera and environment for the back one. This parameter only works when the use_for_liveness option in the Web Adapter configuration file is not set.

  • The plugin styles are now being added automatically. Please remove <link rel="stylesheet" href="/plugin/ozliveness.css" /> from your page to prevent style conflicts.

  • Fixed some bugs and updated telemetry.

1.3.1 – Jan. 12, 2024

  • Improved the protection against injection attacks.

  • Replaced the code for Brazilian Portuguese from pt to pt-br to match the ISO standard.

  • Removed the lang_default adapter parameter.

  • The 3D mask transparency became customizable.

  • Implemented the possibility of using a master license that works with any domain.

  • Added the master_license_signature option into Web Adapter configuration parameters.

  • Fixed some bugs.

1.2.2 – Dec. 15, 2023

  • Internal SDK improvements.

1.2.1 – Nov. 04, 2023

  • Updated telemetry (logging).

1.1.5 – Oct. 27, 2023

  • Logging updates.

1.1.4 – Oct. 2023

  • Security updates.

1.1.3 – Sept. 29, 2023

  • Internal SDK improvements.

1.1.2 – Sept. 21, 2023

  • Internal SDK improvements.

1.1.1 – Aug. 29, 2023

  • Fixed some bugs.

1.1.0 – Aug. 24, 2023

  • Changed the signature of the on_error() callback: now it returns an object with the error code, error message, and telemetry ID for logging.

  • Added the configuration parameter for the debug mode. If True, the Web SDK enables access to the /debug.php page, which contains information about the current configuration and the current license.

  • Fixed some bugs and improved logging.

1.0.2 – July 06, 2023

  • If your device has multiple cameras, you can now choose one when launching the Web Plugin.

1.0.1 – July 01, 2023

  • Added the Portuguese, Spanish, and Kazakh locales.

  • Added the combo gesture.

  • Added the progress bar for media upload.

  • Removed the Zoom in / Zoom out gestures.

  • On tablets, you can now capture video in landscape orientation.

  • Removed the lang_allow option from Web Adapter configuration file.

0.9.1 – Mar. 01, 2023

  • In the capture architecture, when a virtual camera is detected, the additional_info parameter is inside the from_virtual_camera section.

  • You can now crop the lossless frame without losing quality.

  • Fixed face landmarks for the capture architecture.

0.9.0 – Feb. 20, 2023

  • Improved the recording quality;

    • added detailed error descriptions;

    • now you can set the license in JS during the runtime;

    • when you set a license in OzLiveness.open(), it rewrites the previous license;

    • the license no longer requires port and protocol;

    • you can now specify subdomains in the license;

    • upon the launch of the plugin on a server, the license payload is displayed in the Docker log;

    • localhost and 127.0.0.1 no longer ask for a license;

  • The on_capture_complete callback is now available on any architecture: it is called once a video is taken and returns info on actions from the video;

  • Oz Web Liveness and Oz Web Adapter versions are displayed in the Docker log upon launch;

  • Deleted the deprecated adapter_version field from order metadata;

  • Fixed the Switch camera button in Google Chrome;

  • Upon the start of Web SDK, the actual configuration parameters are displayed in the Docker log.

0.7.6 – Sept. 27, 2022

  • Changed the extension of some Oz system files from .bin to .dat.

0.7.5

  • Additional scripts are now called using the main script's address.

0.7.4

  • Web SDK can now be installed via static files only (works for the capture type of architecture).

  • Web SDK can now work with CDN.

  • Now, you can launch several Oz Liveness plugins on different pages. In this case, you need to specify the path to scripts in the <head> of these pages.

If you update the Web SDK version from 0.4.0, the license should be updated as well.

0.4.1

  • Fixed a bug with the shooting screen.

0.4.0

  • Added licensing (requires origin).

0.3.2044

  • Fixed Angular integration.

0.3.2043

  • Fixed the bug where the IMAGE_FOLDER section was missed in the JSON response with the lossless frame enabled.

0.3.2042

  • Fixed issues with the ravenjs library.

0.3.2041

  • The frame for taking a document photo is now customizable.

0.3.2012

  • Implemented security updates.

0.3.2009 (0.4.8)

  • Metadata now contains names of all cameras you can use.

0.3.2005 (0.4.8)

  • Video and zip formats now allow loading a lossless image.

  • Fixed Best Shot.

0.3.2004 (0.4.8)

  • Separated the error code and error description in server responses.

0.3.2001 (0.4.6)

  • If the SDK mode (architecture, api_url) is set in the environment variables, it is passed to the settings automatically.

  • In the Lite mode, you can select the best frame for any action.

  • In the Lite mode, an image sent via API gets the on_complete status only after a successful liveness.

  • You can manage CORS using the environment variables (CORS headers are not added by default).

0.3.1999

  • Added the folder value for result_mode: it returns the same value as status but with folder_id.

0.3.1997 (0.4.5)

  • Updated encryption: now only metadata required to decrypt an object is encrypted.

  • Updated data transfer: images are being sent in separate form fields.

  • Added the camera parameters check.

0.3.1992 (0.4.4)

  • Enabled a new method for image encryption.

  • Optimized image transfer format.

0.3.1991

  • Added the use_for_liveness option: mobile devices use back camera by default, on desktop, flip and oval circling are off. By default, the option is switched off.

0.3.1990 (0.4.3)

  • Decreased the video length for video_selfie_best (the Selfie gesture) from 1 to 0.2 sec.

  • Loading scripts is now customizable.

  • Improved UX.

0.3.1988 (0.4.2)

  • Added the Kazakh locale.

  • Added a guide for accessing the camera on a desktop.

  • Improved logging: plugin_liveness.php requests and the user-agent are now recorded to the server log.

  • Added the Lite mode.

0.3.1987 (0.4.1)

  • Added encryption.

  • Updated libraries.

0.3.1986 (0.3.91)

  • You can now hide the Oz Forensics logo.

0.3.1984

  • Updated a guide for Facebook, Instagram, Samsung, Opera.

  • Added handlers for unknown variables and a guide for “unknown” browsers.

0.3.1983

  • Optimized memory usage for a frame.

  • Added a guide on how to switch cameras on using Android browsers.

Browser Compatibility

Please note: for the plugin to work, your browser must support JavaScript ES6 and be no older than the versions listed below.

*Web SDK doesn't work in the Internet Explorer compatibility mode due to the lack of important functions.

Customization Options for Older Versions (before 1.0.1)

To set your own look-and-feel options, use the style section in the OzLiveness.open method. Here is what you can change:

  • faceFrame – the color of the frame around a face:

    • faceReady – the frame color when the face is correctly placed within the frame;

    • faceNotReady – the frame color when the face is placed improperly and can't be analyzed.

  • centerHint – the text of the hint that is displayed in the center.

    • textSize – the size of the text;

    • color – the color of the text;

    • yPosition – the vertical position measured from top;

    • letterSpacing – the spacing between letters;

    • fontStyle – the style of font (bold, italic, etc.).

  • closeButton – the button that closes the plugin:

    • image – the button image, can be an image in PNG or dataURL in base64.

  • backgroundOutsideFrame – the color of the overlay filling (outside the frame):

    • color – the fill color.

Example:
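A minimal sketch using the options listed above (all values are arbitrary):

OzLiveness.open({
  style: {
    faceFrame: {
      faceReady: '#00FF00',    // face correctly placed within the frame
      faceNotReady: '#FF0000'  // face placed improperly
    },
    centerHint: {
      textSize: '24px',
      color: '#FFFFFF',
      yPosition: '40%',
      letterSpacing: '1px',
      fontStyle: 'bold'
    },
    closeButton: {
      image: 'data:image/png;base64,...' // PNG image or base64 dataURL
    },
    backgroundOutsideFrame: {
      color: 'rgba(0, 0, 0, 0.5)'
    }
  }
});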

No-Server Licensing

In most cases, the license is set on the server side (Web Adapter). This article covers the rarer case when you use Web Plugin only.

To generate the license, we need the domain name of the website where you are going to use Oz Forensics Web SDK, for instance, your-website.com. You can also define subdomains.

To find the origin, open the browser developer console and run window.origin on the page you are going to embed Oz Web SDK in. At localhost / 127.0.0.1, the license can work without this information.

Set the license as shown below:

  • With license data:

  • With license path:
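The code snippets for both options follow the same pattern; a sketch under the assumption that the license is passed to OzLiveness.open() (the exact option names in your plugin version may differ):

// with license data (option name assumed):
OzLiveness.open({
  license: { /* the license object you received from Oz Forensics */ },
  // ... other parameters
});

// with license path (option name assumed):
OzLiveness.open({
  licenseUrl: '/path/to/license.json',
  // ... other parameters
});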

Check whether the license is updated properly.

Example

Proceed to your website origin and launch Liveness -> Simple selfie.

Once the license is added, the system will check its validity on launch.

Installation in Docker

Hardware and Software Requirements

To launch the services, you'll require:

  • CPU: 16 cores,

  • RAM: 32 GB,

  • Disk: 100 GB, SSD,

  • Linux-compatible OS,

  • Docker 19.03+ (or Podman 4.4+),

  • Docker Compose 1.27+ (or podman-compose 1.2.0+, if you use Podman).

For Docker installations with multiple API servers, you'll also require shared volume or NFS.

Distribution Package Contents

The package you get consists of the following directories and files:

Installation

Installing TFSS and the API on the Same Host

  1. Put the license file in ./configs/tfss/license.key.

  2. Unzip the file that contains models into the ./data/tfss/models directory.

  3. Before starting system configuration, we recommend running the host readiness check scripts. Navigate to the checkers directory and run the pre-checker-all.sh script.

  4. Set the initial passwords and values:

Configuration
  • configs\api\config.py

    • Line 15: 'PORT' is the same port as set in line 2 of configs\nginx\default.conf. Needed to set URLs to serve static via nginx.

    • Line 21: 'HOST' is the same name as set in docker-compose for oz-api-nginx container. Needed to set URLs to serve static via nginx.

    • Line 24-28: 'DB_*' are parameters for connecting to PostgreSQL database. Must refer to oz-api-pg name and parameters, that are set in configs\init\init-db.sh, configs\postgres\init.sql

    • Line 33-34: 'TFSS' must refer to oz-tfss container name and port, that are set in the docker-compose.yaml oz-tfss start command.

    • Line 36: Regula. Currently, we support only external Regula.

    • Line 54-57: Redis connection. Change password or Redis container name and port, corresponding to lines 2 and 4 of configs\redis\redis.conf.

    • Line 69: Celery workers' healthcheck list. Remove Celery workers from list, if you have disabled them in docker-compose.yaml.

    • Line 141: O2N. Change o2n name and port, corresponding to docker-compose.yaml.

  • configs\init\init-*.sh

    • The VARS section of each file (lines 4-9) must refer to the PostgreSQL names and ports in docker-compose and the parameters in config.py; it sets up the user and database created during startup.

  • nginx\default.conf

    • Line 2: Listen port. Must be set corresponding to the docker-compose oz-api-nginx port parameter and config.py

    • Lines 27, 43, 48: Service names in redirect. oz-api, oz-statistic. Change if container names are changed in docker-compose.yaml

  • configs\pg-o2n\init.sql

    • Username, database name, password that are pre-created in database.

    • Username in lines 1, 9.

    • Password in line 1.

    • DB name in lines 8, 16.

  • configs\postgres\init.sql

    • Username, database name, password that are pre-created in database.

    • Username in lines 1, 9.

    • Password in line 1.

    • DB name in lines 8, 16.

  • configs\redis\redis.conf

    • Line 2: password for security. Refers to config.py.

    • Line 4: port. Refers to config.py.

  • data\tfss

    • Must have 'models' folder with models.

  • configs\webui\aquireToken.sh

    • Lines 3-6: API parameters. Web UI should point to the oz-api-nginx container; set the name and port the same as for oz-api-nginx.

    • Lines 5-6: login and password must be the same as in configs\init\init-user.sh (if you have created another user manually, you can also use those credentials).

  • configs\o2n\config.env

    • Lines 6-10: pg-o2n parameters. Must be the same as listed in init-o2n.sh and pg-o2n\init.sql.

    • Line 12: password for superuser in PostgreSQL for O2N.

    • Lines 14-16: service parameters for superuser in PostgreSQL.

    • Lines 24-29: database parameters. Must be the same as listed in configs\postgres\init.sql.

    • Line 38: 'APP_ENV'. Use 'local' for HTTP, 'https' for HTTPS.

    • Line 51-57: mail parameters. Set up for 'send password to email' option.

  5. For this configuration, run all services on a single host, e.g., docker-compose up -d with the provided manifest.

We recommend using PostgreSQL as a container only for testing purposes. For the production deployment, it is recommended to use a standalone database.

Installing TFSS and the API on Separate Hosts

TFSS Host

  1. Create a directory and unzip the distribution package into it. The package contains Docker Compose manifests and directories with the configuration files required for operation.

  2. Put the license file in ./configs/tfss/license.key.

  3. Unzip the file that contains models into the ./data/tfss/models directory.

  4. Before starting system configuration, we recommend running the host readiness check scripts. Navigate to the checkers directory and run the pre-checker-all.sh script.

  5. For this configuration, run the TFSS service on a separate host, e.g., docker-compose up -d with the TFSS manifest.

API Host

  1. Create a directory and unzip the distribution package into it. The package contains Docker Compose manifests and directories with the configuration files required for operation.

  2. Before starting the system configuration, we recommend running the host readiness check scripts: navigate to the checkers directory and run the pre-checker-all.sh script.

  3. Set the initial passwords and values as described in the Configuration section above.

  4. For this configuration, run the API services on this host, e.g., docker-compose up -d.

Deployment Architecture

Terms and Definitions

Components' Description

Oz API components:

  • APP is the API front app that receives REST requests, performs preprocessing, and creates tasks for other API components.

  • Celery is the asynchronous task queue. API has the following celery queues:

    • Celery-default processes system-wide tasks.

    • Celery-maintenance processes maintenance tasks.

    • Celery-tfss processes analysis tasks.

    • Celery-resolution checks for completion of all nested analyses within a folder and changes folder status.

    • Celery-preview_convert creates a video preview for media.

    • Celery-beat is a CronJob for managing maintenance celery tasks.

    • Celery-Flower is a Celery metrics collector.

    • Celery-regula (optional) processes document analysis tasks.

  • Redis is a message broker and result backend for Celery.

  • RabbitMQ (optional) can be used as a message broker for Celery instead of Redis.

  • Nginx serves static media files for external HTTP(s) requests.

  • O2N (optional) processes the Blacklist analysis.

BIO-Updater checks for models updates and downloads new models.

Oz BIO (TFSS) runs TensorFlow with AI models and makes decisions for incoming media.

Statistic (optional) provides statistics collection for Web UI.

Web UI provides the web user interface.

The BIO-Updater and BIO components require access to the following external resources:

Deployment Scenarios

The deployment scenario depends on the workload you expect.

Autoscaling is implemented on the basis of ClusterAutoscaler and must be supported by your infrastructure.

Small business or PoC

  • Type of containerization: Docker,

  • Type of installation: Docker compose,

  • Autoscaling/HA: none.

Requirements

Software

  • Docker 19.03+,

  • Podman 4.4+,

  • Python 3.4+.

Storage

  • Depends on media quality, the type and number of analyses, and the required archive depth. May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days].

  • Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.

Staff qualification:

  • Basic knowledge of Linux and Docker.

Deployment

Option 1: single node.

Resources:

  • 1 node,

  • 16 CPU/32 RAM.

Option 2: two nodes.

Resources:

  • 2 nodes,

  • 16 CPU/32 RAM for the first node; 8 CPU/16 RAM for the second node.

Medium Load

  • Type of containerization: Docker/Podman,

  • Type of installation: Docker compose,

  • Autoscaling/HA: manual scaling; HA is partially supported.

Requirements

Computational resources

From 2 to 4 Docker nodes; depending on load, you can change the number of nodes. However, for 5+ nodes, we recommend that you proceed to the High Load section.

    • 2 Nodes:

      • 24 CPU/32 RAM per node.

    • 3 Nodes:

      • 16 CPU/24 RAM per node.

    • 4 Nodes:

      • 8 CPU/16 RAM for two nodes (each),

      • 16 CPU/24 RAM for two nodes (each).

We recommend using external self-managed PostgreSQL database and NFS share.

Software

  • Docker 19.03+,

  • Podman 4.4+,

  • Python 3.4+.

Storage

  • Depends on media quality, the type and number of analyses, and the required archive depth. May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days].

  • Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.

Staff qualification:

  • Advanced knowledge of Linux, Docker, and Postgres.

Deployment

2 nodes:

3 nodes:

4 nodes:

High Load

  • Type of containerization: Docker containers with Kubernetes orchestration,

  • Type of installation: Helm charts,

  • Autoscaling/HA: supports autoscaling; HA for most components.

Requirements

Computational resources

3-4 nodes. Depending on load, you can change the number of nodes.

  • 16 CPU/32 RAM Nodes for the BIO pods,

  • 8+ CPU/16+ RAM Nodes for all other workload.

We recommend using external self-managed PostgreSQL database.

Requires RWX (ReadWriteMany) StorageClass or NFS share.

Software

  • Docker 19.03+,

  • Python 3.4+.

Storage

  • Depends on media quality, the type and number of analyses, and the required archive depth. May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days].

  • Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.

Staff qualification:

  • Advanced knowledge of Linux, Docker, Kubernetes, and Postgres.

Deployment Scheme

[]

onStatusChange(status: AnalysisRequest.) { handleStatus() }

onSuccess(result: ) {

[]

[]

[]

List<>

List<>

The number of action is exceeded

No found in a video

Time limit for the is exceeded

Please find the Flutter repository .

Initializes OZSDK with the license data. The closure is either license data or .

[]

The handler that is executed when the method completes. The closure is either an array of objects or an .

[]

statusHandler: @escaping ((_ status: ) -> Void)

completionHandler: @escaping (_ results : ) -> Void)

Contains the locale code according to .

[]

[]

[]

The sample is now available on SwiftUI. Please find it .

The length of the Selfie gesture is now (affects the video file size).

You can instead of Oz logo if your license allows it.

The code in is now up-to-date.

If multiple analyses are applied to the folder simultaneously, the system sends them as a group. It means that the “worst” of the results will be taken as resolution, not the latest. Please refer to for details.

For the Liveness analysis, the system now treats the highest score as a quantitative result. The Liveness analysis output is described .

You can now add a custom or update an existing language pack. The instructions can be found .

Added the antiscam widget and its . This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes. The purpose of this is to safeguard against scammers who may attempt to deceive an individual into approving a fraudulent transaction.

Implemented a range of options and switched to the new design. To restore the previous settings, please refer to .

The run method now works similar to the one in Android SDK and returns an .

To customize the Oz Liveness interface, use OZCustomization as shown below. For the description of customization parameters, please refer to .

The length of the Selfie gesture is now (affects the video file size).

Please find the Flutter repository .

By default, logs are saved along with the analyses' data. If you need to keep the logs distinct from the analysis data, set up the separate connection for as shown below:

Please find a sample for Oz Liveness Web SDK . To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.

sample

sample

sample

sample

GET /api/folders/?meta_data=transaction_id==<your_transaction_id> to find a folder in Oz API from your backend by your unique identifier.

Read more about .

meta– an object with names of meta fields in keys and their string values in values. is transferred to Oz API and can be used to obtain analysis results or for searching;

on_submit– a callback function (no arguments) that is called after submitting customer data to the server (unavailable for the ).

on_capture_complete – a callback function (with one argument) that is called after the video is captured and retrieves the information on this video. The example of the response is described .

on_result– a callback function (with one argument) that is called periodically during the analysis and retrieves an intermediate result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode and is described .

on_complete– a callback function (with one argument) that is called after the check is completed and retrieves the analysis result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode and is described .

style – .

enable_3d_mask – enables the 3D mask as the default face capture behavior. This parameter works only if load_3d_mask in the Web Adapter is set to true; the default value is false.

cameraFacingMode (since 1.4.0) – the parameter that defines which camera to use; possible values: user (front camera), environment (rear camera). This parameter only works if the use_for_liveness option in the file is undefined. If use_for_liveness is set (with any value), cameraFacingMode gets overridden and ignored.

Please find a sample for Oz Liveness Web SDK . To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.

Client side – a JavaScript file that is being loaded within the frontend part of your application. It is called .

Server side – a separate server module with . The module is called Liveness.

Oz Web SDK can be provided via SaaS, when the server part works on our servers and is maintained by our engineers, and you just use it, or on-premise, when Oz Web Adapter is installed on your servers. for more details and choose the model that is convenient for you.

Oz Web SDK requires a to work. To issue a license, we need the domain name of the website where you are going to use our SDK.

the plugin into your page.

If you want to customize the look-and-feel of Oz Web SDK, please refer to .

List<>.

List<>

List<>

List<>

List<>.

List<>

This callback is called after the check is completed. It retrieves the analysis result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode .

Keep in mind that it is more secure to get your back end responsible for the decision logic. You can find more details including code samples .

Please note: The options listed below are for testing purposes only. If you require more information than what is available in the Safe mode, please follow .

score (the value of the min_confidence or confidence_spoofing parameters; please refer to for details);

This callback is called periodically during the analysis’ processing. It retrieves an intermediate result (unavailable for the capture mode). The result content depends on the Web Adapter result_mode .

Keep in mind that it is more secure to get your back end responsible for the decision logic. You can find more details including code samples .

Please note: the options listed below are for testing purposes only. If you require more information than what is available in the Safe mode, please follow .

while the analysis is in progress, the response similar to the for processing;

The architecture parameter must be to capture in the app_config.json file.

In your Web app, add a callback to process captured media when opening the Web SDK :

The video from Oz Web SDK is a frame sequence, so, to send it to Oz API, you’ll need to archive the frames and transmit them as a ZIP file via the POST /api/folders request (check our).

Web Adapter may send analysis results to the Web Plugin with various levels of verbosity. It is recommended that, in production, the level of verbosity is set to minimum. In the Web Adapter file, set the result_mode parameter to "safe".

For possible regulatory requirements, updated the with a new parameter: extract_action_shot. If True, for each gesture, the system saves the corresponding image to display it in report, e.g., closed eyes for blinking, instead of random frame for thumbnail.

Improved the selection of .

Added several localization records to the . New localization keys:

Added several localization records into the . New keys:

Some users may have experienced freezes while using WebView. Now, users can tap a button to continue working with the application. The corresponding string has been added to the string file in the . Key: tap_to_continue.

You can now use Web SDK for the analysis: to compare the face from your Liveness video with faces from your database. Create a collection (or collections) with these photos via or , and add the corresponding ID (or IDs) to the analyses.collection_ids array in the Web Adapter configuration file.

To enhance your clients’ experience with Web SDK, we implemented the 3D-mask that replaces the oval during face capture. To make it work, set the load_3d_mask in to true.

Implemented the new design for SDK and demo, including the scam protection option: the antiscam message warns user about their actions being recorded. Please check the new customization options .

Reforged :

to pass the information about the bounding box – landmarks that define where the face in the frame is;

You can now of Web SDK.

Browser
Version

Set the initial passwords and values as described in .

Term
Description

Statistic (optional) provides ' collection for Web UI.

Web UI provides the .

Small Business or PoC
Medium Load
High

Please find the installation guide here: .

May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days]. Please refer to for media size reference.

Please find the installation guide here: .

From 2 to 4 Docker nodes (see ):

May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days]. Please refer to for media size reference.

Please find the installation guide here: .

May be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days]. Please refer to for media size reference.

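The full set of the Web SDK look-and-feel customization parameters is shown in the example below: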
OzLiveness.open({
style: {
    baseColorCustomization: {
        textColorPrimary: "#000000",
        backgroundColorPrimary: "#FFFFFF",
        textColorSecondary: "#8E8E93",
        backgroundColorSecondary: "#F2F2F7",
        iconColor: "#00A5BA"
    },
    baseFontCustomization: {
        textFont: "Roboto, sans-serif",
        textSize: "16px",
        textWeight: "400",
        textStyle: "normal"
    },
    titleFontCustomization: {
        textFont: "inherit",
        textSize: "36px",
        textWeight: "500",
        textStyle: "normal"
    },
    buttonCustomization: {
        textFont: "inherit",
        textSize: "14px",
        textWeight: "500",
        textStyle: "normal",
        textColorPrimary: "#FFFFFF",
        backgroundColorPrimary: "#00A5BA",
        textColorSecondary: "#00A5BA",
        backgroundColorSecondary: "#DBF2F5",
        cornerRadius: "10px"
    },
    toolbarCustomization: {
        closeButtonIcon: "cross",
        iconColor: "#707070"
    },
    centerHintCustomization: {
        textFont: "inherit",
        textSize: "24px",
        textWeight: "500",
        textStyle: "normal",
        textColor: "#FFFFFF",
        backgroundColor: "#1C1C1E",
        backgroundOpacity: "56%",
        backgroundCornerRadius: "14px",
        verticalPosition: "38%"
    },
    hintAnimation: {
        hideAnimation: false,
        hintGradientColor: "#00BCD5",
        hintGradientOpacity: "100%",
        animationIconSize: "80px"
    },
    faceFrameCustomization: {
        geometryType: "oval",
        cornersRadius: "0px",
        strokeDefaultColor: "#D51900",
        strokeFaceInFrameColor: "#00BCD5",
        strokeOpacity: "100%",
        strokeWidth: "6px",
        strokePadding: "4px"
    },
    documentFrameCustomization: {
        cornersRadius: "20px",
        templateColor: "#FFFFFF",
        templateOpacity: "100%"
    },
    backgroundCustomization: {
        backgroundColor: "#FFFFFF",
        backgroundOpacity: "88%"
    },
    antiscamCustomization: {
        enableAntiscam: false,
        textMessage: "",
        textFont: "inherit",
        textSize: "14px",
        textWeight: "500",
        textStyle: "normal",
        textColor: "#000000",
        textOpacity: "100%",
        backgroundColor: "#F2F2F7",
        backgroundOpacity: "100%",
        backgroundCornerRadius: "20px",
        flashColor: "#FF453A"
    },
    versionTextCustomization: {
        textFont: "inherit",
        textSize: "16px",
        textWeight: "500",
        textStyle: "normal",
        textColor: "#000000",
        textOpacity: "56%"
    },
    maskCustomization: {
        maskColor: "#008700",
        glowColor: "#000102",
        minAlpha: "30%", // 0 to 1 or 0% to 100%
        maxAlpha: "100%" // 0 to 1 or 0% to 100%
    }

}
});

**baseColorCustomization**

| Parameter | Description |
| --- | --- |
| textColorPrimary | Main text color |
| backgroundColorPrimary | Main background color |
| textColorSecondary | Secondary text color |
| backgroundColorSecondary | Secondary background color |
| iconColor | Icons’ color |

**baseFontCustomization**

| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |

**titleFontCustomization**

| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |

**buttonCustomization**

| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColorPrimary | Main text color |
| backgroundColorPrimary | Main background color |
| textColorSecondary | Secondary text color |
| backgroundColorSecondary | Secondary background color |
| cornerRadius | Button corner radius |

**toolbarCustomization**

| Parameter | Description |
| --- | --- |
| closeButtonIcon | Close button icon |
| iconColor | Close button icon color |

**centerHintCustomization**

| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColor | Text color |
| backgroundColor | Background color |
| backgroundOpacity | Background opacity |
| backgroundCornerRadius | Frame corner radius |
| verticalPosition | Vertical position |

**hintAnimation**

| Parameter | Description |
| --- | --- |
| hideAnimation | Disable animation |
| hintGradientColor | Gradient color |
| hintGradientOpacity | Gradient opacity |
| animationIconSize | Animation icon size |

**faceFrameCustomization**

| Parameter | Description |
| --- | --- |
| geometryType | Frame shape: rectangle or oval |
| cornersRadius | Frame corner radius (for rectangle) |
| strokeDefaultColor | Frame color when a face is not aligned properly |
| strokeFaceInFrameColor | Frame color when a face is aligned properly |
| strokeOpacity | Stroke opacity |
| strokeWidth | Stroke width |
| strokePadding | Padding from stroke |

**documentFrameCustomization**

| Parameter | Description |
| --- | --- |
| cornersRadius | Frame corner radius |
| templateColor | Document template color |
| templateOpacity | Document template opacity |

**backgroundCustomization**

| Parameter | Description |
| --- | --- |
| backgroundColor | Background color |
| backgroundOpacity | Background opacity |

**antiscamCustomization**

| Parameter | Description |
| --- | --- |
| textMessage | Antiscam message text |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColor | Text color |
| textOpacity | Text opacity |
| backgroundColor | Background color |
| backgroundOpacity | Background opacity |
| backgroundCornerRadius | Frame corner radius |
| flashColor | Flashing indicator color |

**versionTextCustomization**

| Parameter | Description |
| --- | --- |
| textFont | Font |
| textSize | Font size |
| textWeight | Font weight |
| textStyle | Font style |
| textColor | Text color |
| textOpacity | Text opacity |

**maskCustomization**

| Parameter | Description |
| --- | --- |
| maskColor | The color of the mask itself |
| glowColor | The color of the glowing mask shape |
| minAlpha | Minimum mask transparency level. Implemented in 1.3.1 |
| maxAlpha | Maximum mask transparency level. Implemented in 1.3.1 |

| Previous design | New design |
| --- | --- |
| doc_color | - |
| face_color_success, faceFrame.faceReady | faceFrameCustomization.strokeFaceInFrameColor |
| face_color_fail, faceFrame.faceNotReady | faceFrameCustomization.strokeDefaultColor |
| centerHint.textSize | centerHintCustomization.textSize |
| centerHint.color | centerHintCustomization.textColor |
| centerHint.yPosition | centerHintCustomization.verticalPosition |
| centerHint.letterSpacing | - |
| centerHint.fontStyle | centerHintCustomization.textStyle |
| closeButton.image | - |
| backgroundOutsideFrame.color | backgroundCustomization.backgroundColor |

| Browser | Version |
| --- | --- |
| Google Chrome (and other browsers based on the Chromium engine) | 56 |
| Mozilla Firefox | 55 |
| Safari | 11 |
| Microsoft Edge* | 17 |
| Opera | 47 |
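The snippet below shows the backward-compatibility customization block alongside the current one: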

OzLiveness.open({
  // ...
  style: {
        // the backward compatibility block
    doc_color: "", 
    face_color_success: "",
    face_color_fail: "", 
    // the current customization block
    faceFrame: {
      faceReady: "",
      faceNotReady: "",
    },
    centerHint: {
      textSize: "",
      color: "",
      yPosition: "",
      letterSpacing: "", 
      fontStyle: "", 
    },
    closeButton: {
      image: "",
    },
    backgroundOutsideFrame: {
      color: "", 
    },
  },
  // ...
});
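With no-server licensing, the license can be passed to the plugin directly as an object: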
OzLiveness.open({
    license: {
        'payload_b64': 'some_payload',
        'signature': 'some_data',
        'enc_public_key': 'some_key'
    },
    ...,
})
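Alternatively, specify the URL from which the plugin loads the license: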
OzLiveness.open({
    licenseUrl: 'https://some_url',
    ...,
})
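The on-premise installation package has the following layout: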
// scripts for preliminary checks of hosts to ensure compliance 
// with software and hardware requirements
|-[checkers]
|   |--pre-checker-all.sh
|   |--pre-checker-api.sh
|   |--pre-checker-bio.sh
// subdirectories with configuration files for the services in use
|-[configs]
|   |--[api]
|   |--[init]
|   |--[nginx]
|   |--[o2n]
|   |--[pg-o2n]
|   |--[postgres]
|   |--[redis]
|   |--[statistic]
|   |--[tfss]
|   |--[webui]
|   |--config.env
// service data and the TFSS models
|-[data]
|   |--api
|   |--pg-o2n
|   |--postgres
|   |--redis
|   |--tfss
// manifest for running all services on a single host
|-docker-compose-all.yml
// manifest for services related only to the API
|-docker-compose-api.yml
// manifest for the TFSS service
|-docker-compose-bio.yml
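Start the services with Docker Compose: use docker-compose-all.yml to run everything on a single host, or bring up the BIO and API services on their respective hosts, running the matching pre-checker script first: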
docker compose --env-file configs/config.env -f docker-compose-all.yml up -d
./pre-checker-bio.sh
docker compose --env-file configs/config.env -f docker-compose-bio.yml up -d
./pre-checker-api.sh
docker compose --env-file configs/config.env -f docker-compose-api.yml up -d

| Term | Description |
| --- | --- |
| APM | Analyses per minute. Please note: an analysis is a request for Quality (Liveness) or Biometry analysis using a single media; a single analysis with multiple media counts as separate analyses in terms of APM; multiple analysis types on a single media (two media for Biometry) count as separate analyses in terms of APM. |
| PoC | Proof of Concept |
| Node | A worker machine; can be either a virtual or a physical machine. |
| HA | High availability |
| K8s | Kubernetes |
| SC | StorageClass |
| RWX | ReadWriteMany |

| | Small Business or PoC | Medium Load | High |
| --- | --- | --- | --- |
| Use cases | • Testing/Development purposes • Small installations with low number of APM • ~ APM • ~ analyses per month | • Typical usage with moderate load • ~ APM • analyses per month | • High load with HA and autoscaling • Usage with cloud provider • APM • analyses per month |
| Environment | Docker | Docker | Kubernetes |
| HA | No | Partially | Yes |
| Pros | • Requires a minimal amount of computing resources • Low complexity, so no highly qualified engineers are needed on-site • Easy to manage and support | • Partially supports HA • Can be scaled up to support higher workload | • HA and autoscaling • Observability and manageability • Allows high workload and can be scaled up |
| Cons | • Suitable only for low loads, no high APM • No scaling and high availability | • API HA requires precise balancing • Higher staff qualification requirements | • High staff qualification requirements • Additional infrastructure requirements |
| External resource requirements | — | • PostgreSQL | • For Kubernetes deployments: K8s v1.25+, ingress-nginx, clusterIssuer, kube-metrics, Prometheus, clusterAutoscaler • PostgreSQL |

  1. On-Premise

  2. SaaS

  • This part is fully covered by the Oz Forensics engineers. You get a link for Oz Web Plugin (see step 2).

Customizing Android SDK

Configuration

We recommend applying these settings when starting the app.

// connecting to the API server
OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(HOST, TOKEN))
// settings for the number of attempts to detect an action
OzLivenessSDK.config.attemptSettings = attemptSettings 
// the possibility to display additional debug information (you can do it by clicking the SDK version number)
OzLivenessSDK.config.allowDebugVisualization = allowDebugVisualization 
// logging settings
OzLivenessSDK.config.logging = ozLogging 
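The same configuration in Java: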
OzConfig config = OzLivenessSDK.INSTANCE.getConfig();
// connecting to the API server
OzLivenessSDK.setApiConnection(OzConnection.fromServiceToken(HOST, TOKEN));
// settings for the number of attempts to detect an action
config.setAttemptSettings(attemptSettings); 
// the possibility to display additional debug information (you can do it by clicking the SDK version number)
config.setAllowDebugVisualization(allowDebugVisualization); 
// logging settings
config.setLogging(ozLogging); 

Interface Customization

OzLivenessSDK.config.customization = UICustomization(
    // customization parameters for the toolbar
    toolbarCustomization = ToolbarCustomization(
        closeIconRes = R.drawable.ib_close,
        closeIconTint = Color.ColorRes(R.color.white),
        titleTextFont = R.font.roboto,
        titleTextSize = 18,
        titleTextAlpha = 100,
        titleTextColor = Color.ColorRes(R.color.white),
        backgroundColor = Color.ColorRes(R.color.black),
        backgroundAlpha = 60,
        isTitleCentered = true,
        title = "Analysis"
    ),
    // customization parameters for the center hint
    centerHintCustomization = CenterHintCustomization(
        textFont = R.font.roboto,
        textColor = Color.ColorRes(R.color.text_color),
        textSize = 20,
        verticalPosition = 50,
        textStyle = R.style.Sdk_Text_Primary,
        backgroundColor = Color.ColorRes(R.color.color_surface),
        backgroundOpacity = 56,
        backgroundCornerRadius = 14,
        textAlpha = 100
    ),
    // customization parameters for the hint animation
    hintAnimation = HintAnimation(
        hintGradientColor = Color.ColorRes(R.color.red),
        hintGradientOpacity = 80,
        animationIconSize = 120,
        hideAnimation = false
    ),
    // customization parameters for the frame around the user face
    faceFrameCustomization = FaceFrameCustomization(
        geometryType = GeometryType.Rectangle(10), // 10 is the corner radius
        strokeDefaultColor = Color.ColorRes(R.color.error_red),
        strokeFaceInFrameColor = Color.ColorRes(R.color.success_green),
        strokeAlpha = 100,
        strokeWidth = 5,
        strokePadding = 3,
    ),
    // customization parameters for the background outside the frame
    backgroundCustomization = BackgroundCustomization(
        backgroundColor = Color.ColorRes(R.color.black),
        backgroundAlpha = 60
    ),
    // customization parameters for the SDK version text
    versionTextCustomization = VersionTextCustomization(
        textFont = R.font.roboto,
        textSize = 12,
        textColor = Color.ColorRes(R.color.white),
        textAlpha = 100,
    ),
    // customization parameters for the antiscam protection text
    antiScamCustomization = AntiScamCustomization(
        textMessage = "",
        textFont = R.font.roboto,
        textSize = 14,
        textColor = Color.ColorRes(R.color.text_color),
        textAlpha = 100,
        backgroundColor = Color.ColorRes(R.color.color_surface),
        backgroundOpacity = 100,
        cornerRadius = 20,
        flashColor = Color.ColorRes(R.color.green)
    ),
    // custom logo parameters
    // should be allowed by license
    logoCustomization = LogoCustomization(
        image = Image.Drawable(R.drawable.ic_logo),
        size = Size(176, 64)
    )
)
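The same customization in Java: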
OzLivenessSDK.INSTANCE.getConfig().setCustomization(new UICustomization(
// customization parameters for the toolbar
new ToolbarCustomization(
    R.drawable.ib_close,
    new Color.ColorRes(R.color.white),
    R.style.Sdk_Text_Primary,
    new Color.ColorRes(R.color.white),
    R.font.roboto,
    Typeface.NORMAL,
    100, // toolbar text opacity (in %)
    18, // toolbar text size (in sp)
    new Color.ColorRes(R.color.black),
    60, // toolbar alpha (in %)
    "Liveness", // toolbar title
    true // center toolbar title
    ),
// customization parameters for the center hint
new CenterHintCustomization(
    R.font.roboto,
    new Color.ColorRes(R.color.text_color),
    20,
    50,
    R.style.Sdk_Text_Primary,
    new Color.ColorRes(R.color.color_surface),
    100, // background opacity
    14, // corner radius for background frame   
    100 // text opacity
    ),
// customization parameters for the hint animation
new HintAnimation(
    new Color.ColorRes(R.color.red), // gradient color
    80, // gradient opacity (in %)
    120, // the side size of the animation icon square
    false // hide animation
    ),
// customization parameters for the frame around the user face
new FaceFrameCustomization(
    GeometryType.RECTANGLE,
    10, // frame corner radius (for GeometryType.RECTANGLE)
    new Color.ColorRes(R.color.error_red), 
    new Color.ColorRes(R.color.success_green),
    100, // frame stroke alpha (in %)
    5, // frame stroke width (in dp)
    3 // frame stroke padding (in dp)
    ),
// customization parameters for the background outside the frame
new BackgroundCustomization(
    new Color.ColorRes(R.color.black),
    60 // background alpha (in %)
    ),
 // customization parameters for the SDK version text
 new VersionTextCustomization(
     R.style.Sdk_Text_Primary,
     R.font.roboto,
     12, // version text size
     new Color.ColorRes(R.color.white),
     100 // version text alpha
     ),
 // customization parameters for the antiscam protection text
 new AntiScamCustomization(
    "Recording .. ",
    R.font.roboto,
    12,
    new Color.ColorRes(R.color.text_color),
    100,
    R.style.Sdk_Text_Primary,
    new Color.ColorRes(R.color.color_surface),
    100,
    14,
    new Color.ColorRes(R.color.green)
    ),
// custom logo parameters
 new LogoCustomization(
    new Image.Drawable(R.drawable.ic_logo),
    new Size(176, 64)
    )
  )
);

By default, the SDK uses the device locale. To switch the locale, use the code below:

OzLivenessSDK.config.localizationCode = OzLivenessSDK.OzLocalizationCode.EN
OzLivenessSDK.INSTANCE.getConfig().setLocalizationCode(OzLivenessSDK.OzLocalizationCode.EN)

Install our Web SDK. Our engineers will help you install the needed components using the standalone installer or manually. The license will be installed as well; to update it, please refer to this article.

Configure the adapter.

To customize the Oz Liveness interface, use UIcustomization as shown below. For the description of customization parameters, please refer to the customization section.

Attached files:

• Oz API Check with No Front End.postman_collection.json (6KB)
• Face Matching with a Reference Photo.postman_collection.json (6KB)
• OZ-Forensic 6.0.1.postman_collection.json (257KB)
• OZ-Forensic 5.2.0-.postman_collection.json (301KB)
• OZ-Forensic 5.0.0.postman_collection.json (299KB)
• OZ-Forensic 4.0.0.postman_collection.json (165KB)
• OZ-Forensic 3.33.0.postman_collection.json (168KB)
• Oz API Lite 1.2.0.json (11KB)
• Oz API Lite 1.1.1.postman_collection.json (8KB)
• privateKey.der (1KB), privateKey.txt (2KB), publicKey.pub (451B)
• Oz_SDK_Android_Strings.zip (15KB)
• privateKey.txt (2KB), private_key.der (1KB), publicKey.pub (451B)
• Oz_SDK_iOS_Strings.zip (12KB)
• on_complete_full.json (100KB)
• Web SDK Strings 1.6.0. EN.zip (23KB)