This article describes Oz components that can be integrated into your infrastructure in various combinations depending on your needs.
The typical integration scenarios are described in the Integration Quick Start Guides section.
Oz API is the central component of the system. It provides a RESTful application programming interface to the core functionality of the Liveness and Face matching analyses, along with many important supplemental features:
Persistence: your media and analyses are stored for future reference unless you explicitly delete them,
Authentication, roles and access management,
Asynchronous analyses,
Ability to work with videos as well as images.
For more information, please refer to Oz API Key Concepts and Oz API Developer Guide. To test Oz API, please check the Postman collection here.
Under the logical hood, Oz API has the following components:
File storage and database where media, analyses, and other data are stored,
The Oz BIO module that runs neural network models to perform facial biometry magic,
Licensing logic.
The front-end components (Oz Liveness Mobile or Web SDK) connect to Oz API to perform server-side analyses, either directly or via the customer's back end.
The iOS and Android SDKs are collectively referred to as Mobile SDKs or Native SDKs. They are written in Swift and Kotlin/Java, respectively, and designed to be integrated into your native mobile application.
Mobile SDKs implement the out-of-the-box customizable user interface for capturing Liveness video and ensure that the two main objectives are met:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
After Liveness video is recorded and available to your mobile application, you can run the server-side analysis. You can use corresponding SDK methods, call the API directly from your mobile application, or pass the media to your backend and interact with Oz API from there.
The basic integration option is described in the Quick Start Guide.
Mobile SDKs are also capable of On-device Liveness and Face matching. On-device analyses may be a good option in low-risk contexts, or when you don’t want the media to leave users’ smartphones. Oz API is not required for On-device analyses. To learn how it works, please refer to this Integration Quick Start Guide.
Web Adapter and Web Plugin together constitute Web SDK.
Web SDK is designed to be integrated into your web applications and has the same main goals as the Mobile SDKs:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
Web Adapter needs to be set up on a server side. Web Plugin is called by your web application and works in a browser context. It communicates with Web Adapter, which, in turn, communicates with Oz API.
Web SDK adds two layers of protection against injection attacks:
It collects information about the browser context and camera properties to detect the use of virtual cameras or other injection methods.
It records the liveness video in a format that allows server-side neural networks to search for traces of an injection attack in the video itself.
Check the Integration Quick Start Guide for the basic integration scenario, and explore the Web SDK Developer Guide for more details.
Web UI is a convenient web application that lets you explore the stored API data in an easy way. It relies on the API's authentication and database and does not store any data on its own.
Web UI has an intuitive interface, but a user guide is also available here.
This article describes the main types of analyses that Oz software is able to perform.
Liveness checks whether a person in a media is a real human.
Face Matching examines two or more media to identify similarities between the faces depicted in them.
Black list looks for resemblances between an individual featured in a media file and the individuals in a pre-existing photo database.
These analyses are accessible in the Oz API for both SaaS and On-Premise models. Liveness and Face Matching are also offered in the On-Device model. Please visit this page to learn more about the usage models.
The Liveness check is important to protect facial recognition from two types of attacks.
A presentation attack, also known as a spoofing attack, is an attempt by an individual to deceive a facial recognition system by presenting to a camera a video, photo, or any other type of media that mimics the appearance of a genuine user. These attacks can include the use of realistic masks or involve digital manipulation of images and videos, such as deepfakes.
An injection attack is an attempt to deceive a facial recognition system by replacing physical camera input with a prerecorded image or video, or by manipulating the physical camera output before it becomes input to the facial recognition system. Virtual camera software is the most common tool for injection attacks.
Oz Liveness is able to detect both types of attacks. Any Oz component can detect presentation attacks; for injection attack detection, use Oz Liveness Web SDK. To learn how to use Oz components to prevent attacks, check our integration quick start guides:
Once the Liveness check is finished, you can check both qualitative and quantitative analysis results.
Asking users to perform a gesture, such as smiling or turning their head, is a popular requirement when recording a Liveness video. With Oz Liveness Mobile and Web SDK, you can also request gestures from users. However, our Liveness check relies on other factors, analyzed by neural networks, and does not depend on gestures. For more details, please check Passive and Active Liveness.
The Liveness check can also return the best shot from a video: a best-quality frame in which the face is most clearly visible.
The Biometry algorithm compares several media files and checks whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media files.
Wonder how to integrate face matching into your processes? Check our integration quick start guides.
In Oz API, you can configure one or more black lists, or face collections. These collections are databases of people depicted in photos. When the Black list analysis is conducted, Oz software compares the face in a newly taken photo or video with the faces in this pre-made database and shows whether the face exists in the collection.
For additional information, please refer to this article.
Oz Forensics specializes in liveness and face matching: we develop products that help you identify your clients remotely and avoid any kind of spoofing or deepfake attack. Oz software helps you add facial recognition to your software systems and products. You can integrate Oz modules in many ways depending on your needs, and we are constantly improving our components and their quality.
Oz Liveness is responsible for recognizing a living person in the video it receives. Oz Liveness can distinguish a real human from a photo, video, mask, or other kind of spoofing or deepfake attack. The algorithm is certified to the ISO/IEC 30107-3 standard by the NIST-accredited iBeta biometric test laboratory with 100% accuracy.
Our liveness technology protects both against injection and presentation attacks.
The injection attack detection is layered. Our SDK examines the user's environment (browser, camera, etc.) to detect potential manipulations. Then, a deep neural network comes into play to defend against even the most sophisticated injection attacks.
The presentation attack detection is based on deep neural networks of various architectures, combined with a proprietary ensembling algorithm to achieve optimal performance. The networks consider multiple factors, including reflection, focus, background scene, motion patterns, etc. We offer both passive (no gestures) and active (various gestures) Liveness options, ensuring that your customers enjoy the user experience while delivering accurate results for you. The iBeta test was conducted using passive Liveness, and since then, we have significantly enhanced our networks to better meet the needs of our clients.
Oz Face Matching (Biometry) aims to identify the person, verifying that the person who performs the check and the document's owner are the same person. Oz Biometry looks through the video, finds the best shot where the person is clearly visible, and compares it with the photo from an ID or another document. The algorithm's accuracy of 99.99% is confirmed by NIST FRVT.
Our biometry technology provides both 1:1 Face Verification and 1:N Face Identification, which are also based on ML algorithms. To train our neural networks, we use our own framework built on state-of-the-art technologies. A large private dataset (over 4.5 million unique faces) with wide representation of ethnic groups, along with other attributes (predicted race, age, etc.), helps our biometric models provide robust matching scores.
Our face detector can work with photos and videos. Also, the face detector excels in detecting faces in images of IDs and passports (which can be rotated or of low quality).
The Oz software combines accuracy in analysis with ease of integration and use. To further simplify the integration process, we have provided a detailed description of all the key concepts of our system in this section. If you're ready to get started, please refer to our integration guides, which provide the step-by-step instructions on how to achieve your facial recognition goals quickly and easily.
Since version 8.0.0 of the Android and iOS SDKs, we have offered the hybrid Liveness analysis mode. It combines the on-device and server-based analyses and sums up the benefits of the two modes. If the on-device analysis is uncertain about the presence of a real human, the system initiates the server-based analysis; otherwise, no additional analyses are done.
You need less computational capacity: in the majority of cases there's no need to run the server-side analyses, and fewer requests are sent back and forth.
The accuracy is similar to that of the server-based analysis: if the on-device analysis result is uncertain, the server analysis is launched. We offer the hybrid analysis as one of the default analysis modes, but you can also implement your own hybrid logic by combining the server-based and on-device analyses in your code.
Since the 8.3.0 release, you get the analysis result faster and transmit up to 10 times less data: the on-device analysis is enough in the majority of cases, so you don't need to upload the full video to analyze it on the server and, therefore, don't send or receive the additional data. The customer journey gets shorter.
The hybrid analysis has been available in our native (mobile) SDKs since 8.0.0. As mentioned before, the server-based analysis is launched only in a minority of cases: if the on-device analysis has finished with a confident answer, there's no need for a second check. This means fewer server resources are involved.
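The hybrid decision logic can be sketched as follows. This is a minimal illustration, not the SDK's internal implementation: the score convention (closer to 1.0 means a real person) and the uncertainty band are placeholders.

```python
def hybrid_liveness(on_device_score: float,
                    run_server_analysis,
                    uncertain_low: float = 0.3,
                    uncertain_high: float = 0.7) -> bool:
    """Hybrid mode sketch: trust a confident on-device result; escalate to
    the server-based analysis only when the on-device score is uncertain.

    The thresholds and score convention are illustrative assumptions,
    not the SDK's actual internal values.
    """
    if on_device_score >= uncertain_high:
        return True                  # confidently a real person: no server call
    if on_device_score <= uncertain_low:
        return False                 # confidently a spoof: no server call
    return run_server_analysis()     # uncertain: fall back to the server
```

Because the server callback runs only in the uncertain band, the server is contacted in a minority of cases, which is exactly where the bandwidth and capacity savings come from.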
This article describes how passive and active Liveness work.
The objective of the Liveness check is to verify the authenticity and physical presence of an individual in front of the camera. In the passive Liveness check, it is sufficient to capture a user's face while they look into the camera. Conversely, the active Liveness check requires the user to perform an action such as smiling, blinking, or turning their head. While passive Liveness is more user-friendly, active Liveness may be necessary in some situations to confirm that the user is aware of undergoing the Liveness check.
In our Mobile or Web SDKs, you can define what action the user is required to do. You can also combine several actions into a sequence. Actions vary in the following dimensions:
User experience,
File size,
Liveness check accuracy,
Suitability for review by a human operator or in court.
In most cases, the Selfie action is optimal, but you can choose other actions based on your specific needs. Here is a summary of available actions:
To enable hybrid mode on Android, set Mode to HYBRID when configuring the analysis. On iOS, set mode to hybrid. That’s all, as easy as falling off a log.
If you have any questions left, we’ll be happy to help.
To recognize the actions from either passive or active Liveness, our algorithms refer to the corresponding tags. These tags indicate the type of action that a user is performing within a media file. For more information, please read the article. The detailed information on how the actions (in other words, gestures) are named in different Oz Liveness components is available here.
| Action | Description |
| --- | --- |
| Simple selfie | A short video, around 0.7 sec. Users are not required to do anything. Recommended for most cases. It offers the best combination of user experience and liveness check accuracy. |
| | Similar to “Simple selfie”, but only one image is chosen instead of the whole video. Recommended when media size is the most important factor. Hard for a human, e.g., an operator or a court, to evaluate for spoofing. |
| | A 5-second video where a user is asked to follow the displayed text with their eyes. Recommended when a longer video is required, e.g., for subsequent review by a human operator or in court. |
| Active liveness gesture | A user is required to complete a particular gesture within 5 seconds. Use active liveness when you need confirmation that the user is aware of undergoing a Liveness check. Video length and file size may vary depending on how soon the user completes the gesture. |
Liveness and Face Matching can also be provided by the Oz API Lite module. Oz API Lite is conceptually different from Oz API.
Fully stateless, no persistence,
Extremely easy to scale horizontally,
No built-in authentication and access management,
Works with single images, not videos.
Oz API Lite is suitable when you want to embed it into your product and/or have extremely high performance requirements (millions of checks per week).
For more details, please refer to the Oz API Lite Developer Guide.
We offer different usage models for the Oz software to meet your specific needs. You can either use the software as a service from one of our cloud instances or integrate it into your existing infrastructure. Regardless of the usage model you choose, all Oz modules function identically. What to pick depends only on your needs.
With the SaaS model, you can access one of our clouds without having to install our software in your own infrastructure.
Choose SaaS when you want:
Faster start as you don’t need to procure or allocate hardware within your company and set up a new instance.
Zero infrastructure cost as server components are located in Oz cloud.
Lower maintenance cost as Oz maintains and upgrades server components.
No cross-border data transfer for the regions where Oz has cloud instances.
The on-premise model implies that all the Oz components required are installed within your infrastructure. Choose on-premise for:
Your data not leaving your infrastructure.
Full and detailed control over the configuration.
We also provide the option of using on-device Liveness and Face matching. This model is available in the Mobile SDKs.
Consider the on-device option when:
You can’t transmit facial images to any server due to privacy concerns,
Network conditions in the places where you plan to use Oz products are extremely poor.
The choice is yours to make, but we're always available to provide assistance.
The Oz API is a comprehensive REST API that enables facial biometrics, allowing for both face matching and liveness checks. This write-up provides an overview of the essential concepts to keep in mind while using the Oz API.
To ensure security, every Oz API call requires an access token in its HTTP headers. To obtain this token, call the POST /api/authorize/auth method with the login and password provided by us. Pass this token in the X-Forensic-Access-Token header in subsequent Oz API calls.
This article provides comprehensive details on the authentication process. Kindly refer to it for further information.
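As an illustration of this flow, here is a minimal Python sketch. The base URL is a placeholder, and the request-body shape of the auth call is an assumption; verify both against the Oz API reference before use.

```python
import json
import urllib.request

BASE_URL = "https://your-oz-api-instance"  # placeholder, not a real endpoint


def authorize(login: str, password: str) -> str:
    """POST /api/authorize/auth and return the access token.

    The JSON body shape below is an assumption for illustration;
    check the Oz API reference for the exact schema.
    """
    body = json.dumps({"credentials": {"email": login, "password": password}})
    req = urllib.request.Request(
        f"{BASE_URL}/api/authorize/auth",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def auth_headers(token: str) -> dict:
    """Headers to attach to every subsequent Oz API call."""
    return {"X-Forensic-Access-Token": token}
```

Once obtained, the same token can be reused for all calls until it expires.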
Furthermore, the Oz API offers distinct user roles, ranging from CLIENT, who can perform checks and access reports but lacks administrative rights, e.g., deleting folders, to ADMIN, who enjoys nearly unrestricted access to all system objects. For additional information, please consult this guide.
The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check for the aggregated result. A folder can contain an unlimited number of media files, and each media file can be a target of several analyses. Analyses can also be performed on several media files at once.
Oz API works with photos and videos. A video can be either a regular video container, e.g., MP4 or MOV, or a ZIP archive with a sequence of images (a shot set). Oz API uses the file MIME type to determine whether a media file is an image, a video, or a shot set.
It is also important to determine the semantics of a content, e.g., if an image is a photo of a document or a selfie of a person. This is achieved by using tags. The selection of tags impacts whether specific types of analyses will recognize or ignore particular media files. The most important tags are:
photo_id_front – for the front side of a photo ID,
photo_selfie – for a non-document reference photo,
video_selfie_blank – for a liveness video recorded outside the Oz Liveness SDK.
If a media file is captured using the Oz Liveness SDK, the tags are assigned automatically.
The full list of Oz media tags with their explanation and examples can be found here.
Since video analysis may take a few seconds, the analyses are performed asynchronously. This means that you initiate an analysis (POST /api/folders/{{folder_id}}/analyses/) and then monitor the outcome by polling until processing is complete (GET /api/analyses/{{analyse_id}} for a single analysis, or GET /api/folders/{{folder_id}}/analyses/ for all of a folder's analyses). Alternatively, there is a webhook option available. To see an example of how to use both the polling and webhook options, please check this guide.
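A minimal polling loop over these endpoints might look like the sketch below; `get_json` is a hypothetical helper standing in for an authenticated HTTP GET that returns the parsed JSON body.

```python
import time

PROCESSING = "PROCESSING"


def wait_for_analysis(get_json, analyse_id: str,
                      interval: float = 1.0, timeout: float = 60.0) -> dict:
    """Poll GET /api/analyses/{analyse_id} until the analysis leaves the
    PROCESSING state, then return the final analysis object.

    `get_json(path)` is an assumed helper: it performs an authenticated
    GET against the Oz API and returns the decoded JSON response.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        analysis = get_json(f"/api/analyses/{analyse_id}")
        if analysis["state"] != PROCESSING:
            return analysis          # finished (successfully or not)
        time.sleep(interval)         # wait before the next poll
    raise TimeoutError(f"analysis {analyse_id} still processing after {timeout}s")
```

In production, prefer the webhook option where possible, since it avoids polling traffic entirely.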
These were the key concepts of Oz API. To gain a deeper understanding of its capabilities, please refer to the Oz API section of our developer guide.
This section contains the most common cases of integrating the Oz Forensics Liveness and Face Biometry system.
The scenarios can be combined together, for example, integrating liveness into both web and mobile applications or integrating liveness with face matching.
In this section, we listed the guides for the server-based liveness check integrations.
In this section, there's a guide for the integration of the on-device liveness check.
This guide outlines the steps for integrating the Oz Liveness Web SDK into a customer web application for capturing facial videos and subsequently analyzing them on a server.
The SDK implements the ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results. Under the hood, it communicates with Oz API.
Oz Liveness Web SDK detects both presentation and injection attacks. An injection attack is an attempt to feed pre-recorded video into the system using a virtual camera.
Finally, while the cloud-based service provides the fully-fledged functionality, we also offer an on-premise version with the same functions but no need for sending any data to our cloud. We recommend starting with the SaaS mode and then reconnecting your web app to the on-premise Web Adapter and Oz API to ensure seamless integration between your front end and back end. With these guidelines in mind, integrating the Oz Liveness Web SDK into your web application can be a simple and straightforward process.
Tell us the domain names of the pages from which you are going to call Web SDK, and an email for admin access, e.g.:
In response, you’ll get URLs and credentials for further integration and usage. When using SaaS API, you get them from us:
For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the guide on user creation via Web Console. Choose the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.
Add the following tags to your HTML code. Use Web Adapter URL received before:
Add the code that opens the plugin and handles the results:
Keep in mind that it is more secure to get your back end responsible for the decision logic. You can find more details including code samples here.
With these steps, you are done with basic integration of Web SDK into your web application. You will be able to access recorded media and analysis results in Web Console via browser or programmatically via API (please find the instructions here: retrieving an MP4 video, getting analysis results).
In the Web Plugin Developer Guide, you can find instructions for common next steps:
Customizing plugin look-and-feel
Adding custom language pack
Tuning plugin behavior
Plugin parameters and callbacks
Security recommendations
Please find a sample for Oz Liveness Web SDK here. To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.
For Angular and React, replace https://web-sdk.sandbox.ohio.ozforensics.com in index.html with your Web Adapter URL.
Oz API is a rich REST API for facial biometry, where you can run liveness checks and face matching. Oz API features are:
Persistence: your media and analyses are stored for future reference unless you explicitly delete them,
Ability to work with videos as well as images,
Asynchronous analyses,
Authentication,
Roles and access management.
The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check for the aggregated result.
This step-by-step guide describes how to perform a liveness check with the Oz back end on a facial image or video that you already have: create a folder, upload your media to this folder, initiate the liveness check, and poll for the results.
For better accuracy and user experience, we recommend that you use our Web and/or Native SDK on your front end for face capturing. Please refer to the relevant guides:
Before you begin, make sure you have Oz API credentials. When using SaaS API, you get them from us:
For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the guide on user creation via Web Console. Choose the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.
You can explore all API methods with Oz Postman collection.
For security reasons, we recommend obtaining an access token instead of using the credentials directly. Call POST /api/authorize/auth with your login and password in the request body.
Get the access_token from the response and add it to the X-Forensic-Access-Token header of all subsequent requests.
This step is not needed for API 5.0.0 and above.
With API 4.0.8 and below, Oz API requires a video or an archived sequence of images in order to perform a liveness check. If you want to analyze a single image, you need to wrap it into a ZIP archive. Oz API will treat this archive as a video.
Make sure that you use corresponding video-related tags later on.
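For example, wrapping a single image into such an archive takes only a few lines of Python; the inner file name here is arbitrary.

```python
import io
import zipfile


def image_to_shotset_zip(image_bytes: bytes, name: str = "frame_0001.jpg") -> bytes:
    """Wrap a single image into a ZIP archive so that Oz API 4.0.8 and
    below treats it as a one-frame video (shot set)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(name, image_bytes)   # store the image inside the archive
    return buf.getvalue()
```

Upload the resulting bytes as you would a video file, and tag it with a video-related tag such as video_selfie_blank.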
To create a folder and upload your media into it, call POST /api/folders/ method, adding the media you need to the body part of the request.
In the payload field, set the appropriate tags:
The successful response will return code 201 and the folder_id you’ll need later on.
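As a hedged sketch of the payload field: the structure below assumes a "media:tags" mapping keyed by the media part name, which is an assumption about the schema; only the tag value comes from this guide. Verify the exact shape against the Oz API reference.

```python
import json


def liveness_upload_payload(media_name: str = "video1") -> str:
    """Build the `payload` field for POST /api/folders/, tagging the
    uploaded media as a liveness video recorded outside the Oz Liveness SDK.

    The "media:tags" key and the media reference name are assumptions
    for illustration; the tag itself (video_selfie_blank) is from this guide.
    """
    return json.dumps({"media:tags": {media_name: ["video_selfie_blank"]}})
```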
To launch the analysis, call POST /api/folders/{{folder_id}}/analyses/ with the folder_id from the previous step. In the request body, specify the liveness check to be launched.
The results will be available in a short while. The method will return analyse_id that you’ll need at the next step.
Repeat calling GET /api/analyses/{{analyse_id}} with the analyse_id from the previous step once a second until the state changes from PROCESSING to something else. For a finished analysis:
get the qualitative result from resolution (SUCCESS or DECLINED),
get the quantitative result from results_media[0].results_data.confidence_spoofing, which ranges from 0.0 (real person) to 1.0 (spoofing).
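Putting the two result fields together, a decision helper might look like the sketch below. The spoofing threshold is illustrative, not an Oz default; pick a value that fits your risk profile.

```python
def liveness_passed(analysis: dict, threshold: float = 0.5) -> bool:
    """Interpret a finished Liveness analysis.

    The qualitative resolution must be SUCCESS, and confidence_spoofing
    (0.0 = real person, 1.0 = spoofing) must stay below the threshold.
    The threshold value here is an illustrative assumption.
    """
    resolution = analysis["resolution"]
    spoofing = analysis["results_media"][0]["results_data"]["confidence_spoofing"]
    return resolution == "SUCCESS" and spoofing < threshold
```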
Here is the Postman collection for this guide.
With these steps completed, you are done with Liveness check via Oz API. You will be able to access your media and analysis results in Web UI via browser or programmatically via API.
Oz API methods can be combined with great flexibility. Explore Oz API using the API Developer Guide.
This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and subsequently analyzing them on the server.
The SDK implements a ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results. The SDK methods for liveness analysis communicate with Oz API under the hood.
Before you begin, make sure you have Oz API credentials. When using SaaS API, you get them from us:
For the on-premise Oz API, you need to create a user yourself or ask your team that manages the API. See the guide on user creation via Web Console. Choose the proper user role (CLIENT in most cases, or CLIENT ADMIN if you are going to make the SDK work with pre-created folders from other API users). In the end, you need to obtain a similar set of credentials as you would get for the SaaS scenario.
Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue a one-month trial license on our website or email us for a long-term license.
In the build.gradle of your project, add:
In the build.gradle of the module, add:
Rename the license file to forensics.license and place it into the project's res/raw folder.
Use API credentials (login, password, and API URL) that you’ve got from us.
In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your back end using the API auth method, then pass it to your application:
To start recording, use startActivityForResult:
To obtain the captured video, use onActivityResult:
The sdkMediaResult object contains the captured videos.
To run the analyses, execute the code below. Note that mediaList is an array of objects that were captured (sdkMediaResult) or otherwise created (media you captured on your own).
Install OZLivenessSDK via CocoaPods. To integrate OZLivenessSDK into an Xcode project, add to Podfile:
Rename the license file to forensics.license and put it into the project.
Use API credentials (login, password, and API URL) that you’ve got from us.
In production, instead of hard-coding the login and password in the application, it is recommended to get an access token on your back end using the API auth method, then pass it to your application:
Create a controller that will capture videos as follows:
The delegate object must implement OZLivenessDelegate protocol:
Use AnalysisRequestBuilder to initiate the Liveness analysis. The communication with Oz API is under the hood of the run method.
With these steps, you are done with basic integration of Mobile SDKs. You will be able to access recorded media and analysis results in Web Console via browser or programmatically via API.
In developer guides, you can also find instructions for customizing the SDK look-and-feel and access the full list of our Mobile SDK methods. Check out the table below:
Domain names from which WebSDK will be called:
www.yourbrand.com
www.yourbrand2.com
Email for admin access:
j.doe@yourcompany.com
Login: j.doe@yourcompany.com
Password: …
API: https://sandbox.ohio.ozforensics.com/
Web Console: https://sandbox.ohio.ozforensics.com
Web Adapter: https://web-sdk.cdn.sandbox.ozforensics.com/your_company_name/
Login: j.doe@yourcompany.com
Password: …
API: https://sandbox.ohio.ozforensics.com
Web Console: https://sandbox.ohio.ozforensics.com
Login: j.doe@yourcompany.com
Password: …
API: https://sandbox.ohio.ozforensics.com
Web Console: https://sandbox.ohio.ozforensics.com
Android sample app source codes
iOS sample app source codes
Android OzLiveness SDK Developer Guide
iOS OzLiveness SDK Developer Guide
Demo app in Google Play
Demo app in TestFlight
In this section, we listed the guides for the face matching checks.
This guide describes how to match a liveness video with a reference photo of a person that is already stored in your database.
However, if you prefer to include a photo ID capture step to your liveness process instead of using a stored photo, then you can refer to another guide in this section.
By this time you should have already implemented liveness video recording and liveness check. If not, please refer to these guides:
In this scenario, you upload your reference image to the same folder where you have a liveness video, initiate the BIOMETRY analysis, and poll for the results.
folder_id
Given that you already have the liveness video recorded and uploaded, you will be working with the same Oz API folder where your liveness video is. Obtain the folder_id as described below, and pass it to your back end.
For a video recorded by Web SDK, get the folder_id as described here.
For a video recorded by Android or iOS SDK, retrieve the folder_id from the analysis results as shown below:
Android:
iOS:
Call the POST /api/folders/{{folder_id}}/media/ method, replacing folder_id with the ID you’ve got in the previous step. This will upload your new media to the folder where your ready-made liveness video is located.
Set the appropriate tags in the payload field of the request, depending on the nature of a reference photo that you have.
To launch the analysis, call POST /api/folders/{{folder_id}}/analyses/ with the folder_id from the previous step. In the request body, specify the biometry check to be launched.
Repeat calling GET /api/analyses/{{analyse_id}} with the analyse_id from the previous step once a second until the state changes from PROCESSING to something else. For a finished analysis:
get the qualitative result from resolution (SUCCESS or DECLINED),
get the quantitative result from analyses.results_data.min_confidence.
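A decision helper for the biometry result might look like the sketch below. The matching threshold is illustrative, not an Oz default; tune it to your own false-accept and false-reject requirements.

```python
def faces_match(analysis: dict, threshold: float = 0.85) -> bool:
    """Interpret a finished BIOMETRY analysis.

    The qualitative resolution must be SUCCESS, and min_confidence
    (the similarity score of the least-similar pair of compared faces)
    must reach the threshold. The threshold here is an illustrative
    assumption, not a documented Oz default.
    """
    resolution = analysis["resolution"]
    min_confidence = analysis["results_data"]["min_confidence"]
    return resolution == "SUCCESS" and min_confidence >= threshold
```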
Here is the Postman collection for this guide.
With these steps completed, you are done with adding face matching via Oz API. You will be able to access your media and analysis results in Web UI via browser or programmatically via API.
Oz API methods can be combined with great flexibility. Explore Oz API using the API Developer Guide.
Please note that the Oz Liveness Mobile SDK does not include a user interface for scanning official documents. You may need to explore alternative SDKs that offer that functionality or implement it on your own. Web SDK does include a simple photo ID capture screen.
This guide describes the steps needed to add face matching to your liveness check.
By this time you should have already implemented liveness video recording and liveness check. If not, please refer to these guides:
Simply add photo_id_front to the list of actions for the plugin, e.g.,
For the purpose of this guide, it is assumed that your reference photo (e.g., front side of an ID) is stored on the device as reference.jpg.
Modify the code that runs the analysis as follows:
For on-device analyses, you can change the analysis mode from Analysis.Mode.SERVER_BASED to Analysis.Mode.ON_DEVICE.
Check also the Android sample app source code.
For the purpose of this guide, it is assumed that your reference photo (e.g., front side of an ID) is stored on the device as reference.jpg.
Modify the code that runs the analysis as follows:
For on-device analyses, you can change the analysis mode from mode: .serverBased to mode: .onDevice.
Check also the iOS sample app source code.
You will be able to access your media and analysis results in Web UI via browser or programmatically via API.
Oz API methods as well as Mobile and Web SDK methods can be combined with great flexibility. Explore the options available in the Developer Guide section.
This guide outlines the steps for integrating the Oz Liveness Mobile SDK into a customer mobile application for capturing facial videos and performing on-device liveness checks without sending any data to a server.
The SDK implements the ready-to-use face capture user interface that is essential for seamless customer experience and accurate liveness results.
Oz Liveness Mobile SDK requires a license. The license is bound to the bundle_id of your application, e.g., com.yourcompany.yourapp. Issue a one-month trial license on our website or email us for a long-term license.
In the build.gradle of your project, add:
In the build.gradle of the module, add:
Rename the license file to forensics.license and place it into the project's res/raw folder.
To start recording, use startActivityForResult:
To obtain the captured video, use onActivityResult:
The sdkMediaResult object contains the captured videos.
To run the analyses, execute the code below. Note that mediaList is an array of objects that were captured (sdkMediaResult) or otherwise created (media you captured on your own).
Install OZLivenessSDK via CocoaPods. To integrate OZLivenessSDK into an Xcode project, add to Podfile:
Rename the license file to forensics.license and put it into the project.
Create a controller that will capture videos as follows:
The delegate object must implement OZLivenessDelegate protocol:
Use AnalysisRequestBuilder to initiate the Liveness analysis.
With these steps, you are done with basic integration of Mobile SDKs. The data from the on-device analysis is not transferred anywhere, so please bear in mind you cannot access it via API or Web console.
In this section, you'll learn how to perform analyses and where to get the numeric results.
Liveness checks that the person in a video is a real living person.
Biometry compares two or more faces from different media files and shows whether the faces belong to the same person or not.
Best shot is an addition to the Liveness check. The system chooses the best frame from a video and saves it as a picture for later use.
Blacklist checks whether a face on a photo or a video matches with one of the faces in the pre-created database.
The Quantitative Results section explains where and how to find the numeric results of analyses.
In this section, you will find the description of both API and SDK components of the Oz Forensics Liveness and face biometric system. API is the backend component of the system; it is needed for all the system modules to interact with each other. SDK is the frontend component that is used to:
1) take videos or images which are then processed via API,
2) display results.
We provide two versions of API.
The full version provides all the functionality of Oz API.
The Lite version is a simple and lightweight version with only the necessary functions included.
The SDK component consists of web SDK and mobile SDK.
Web SDK is a plugin that you can embed into your website page, plus the adapter for this plugin.
Mobile SDK is the SDK for iOS and Android.
Oz API is the most important component of the system. It makes sure all other components are connected with each other. Oz API:
provides the unified REST API interface to run the Liveness and Biometry analyses
processes authorization and user permissions management
tracks and records requested orders and analyses to the database
archives the inbound media files
collects telemetry from connected mobile apps
provides settings for specific device models
generates reports with analyses results
Description of the REST API scheme:
To learn what each analysis means, please refer to .
Android sample app source codes
iOS sample app source codes
Android OzLiveness SDK Developer Guide
iOS OzLiveness SDK Developer Guide
Demo app in PlayMarket
Demo app in TestFlight
The Liveness detection algorithm is intended to detect a real living person in a media.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
For API 4.0.8 and below, please note: the Liveness analysis works with videos and shotsets; images are ignored. If you want to analyze an image, upload it as a shotset (archive) with a single image and mark it with the video_selfie_blank tag.
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need the analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat the check until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.
For the Liveness analysis, look for the confidence_spoofing value related to the video you need. It indicates the chance that the person is not a real one.
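Interpreting the finished analysis can be sketched as below. The confidence_spoofing meaning and the 0.5 default threshold come from these docs; the exact field nesting is illustrative — inspect a real response:

```python
def liveness_verdict(analysis, threshold=0.5):
    """Interpret a finished Liveness analysis: confidence_spoofing is the
    chance the person is NOT real; 0.5 is the documented default threshold.
    The field nesting here is illustrative, not a verified schema."""
    score = analysis["results_data"]["confidence_spoofing"]
    return ("DECLINED" if score >= threshold else "SUCCESS", score)

# Sample finished-analysis object with an illustrative shape
finished = {"state": "FINISHED",
            "results_data": {"confidence_spoofing": 0.02}}
verdict, score = liveness_verdict(finished)
```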
To get an access token, call POST /api/authorize/auth/
with credentials (which you've got from us) containing the email and password needed in the request body. The host address should be the API address (the one you've also got from us).
The successful response will return a pair of tokens: access_token and expire_token.
access_token is a key that grants you access to system resources. To access a resource, you need to add your access_token to the header.
headers = {"X-Forensic-Access-Token": <access_token>}
access_token is time-limited; the limits depend on the account type:
service accounts – OZ_SESSION_LONGLIVE_TTL (5 years by default),
other accounts – OZ_SESSION_TTL (15 minutes by default).
expire_token is the token you can use to renew your access token if necessary.
If the value of expire_date is greater than the current date, the session's expire_date is set to the current date plus the time period defined above (depending on the account type).
To renew access_token and expire_token, call POST /api/authorize/refresh/. Add expire_token to the request body and X-Forensic-Access-Token to the header.
In case of success, you'll receive a new pair of access_token and expire_token. The "old" pair will be deleted upon the first authentication with the renewed tokens.
To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked by tags: they describe what's pictured in a media and determine the applicable analyses.
For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into the ZIP archive and apply the video-related tags.
To create a folder and upload media to it, call POST /api/folders/. To add files to an existing folder, call POST /api/folders/{{folder_id}}/media/. Add the files to the request body; tags should be specified in the payload.
Here's an example of the payload for a passive Liveness video and an ID front side photo.
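A sketch of such a payload is below. The tag names come from the Tags section of these docs (video_selfie_blank marks a no-action, i.e. passive, video); the filenames and the "media:tags" nesting are illustrative, not a verified schema:

```python
import json

# Illustrative payload: tags keyed by the uploaded filenames.
upload_payload = {
    "media:tags": {
        "selfie.mp4": ["video_selfie", "video_selfie_blank",
                       "orientation_portrait"],
        "id_front.jpg": ["photo_id", "photo_id_front"],
    }
}
payload_field = json.dumps(upload_payload)
```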
An example of usage (Postman):
The successful response will return the folder data.
| Error code | Error message | What caused the error |
|---|---|---|
| 400 | Could not locate field for key_path expire_token from provided dict data | expire_token hasn't been found in the request body |
| 401 | Session not found | The session with the expire_token you have passed doesn't exist |
| 403 | You have not access to refresh this session | The user making the request is not the owner of this expire_token session |
The Biometry algorithm is intended to compare two or more photos and detect the level of similarity of the spotted faces. As a source media, the algorithm takes photos, videos, and documents (with photos).
You'll need the analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.
Check the response for the min_confidence value. It is the quantitative result of matching the people in the uploaded media.
The "Best shot" algorithm is intended to choose the most high-quality and well-tuned frame with a face from a video record. This algorithm works as a part of the analysis, so here, we describe only the best shot part.
1. Initiate the analysis similarly to the Liveness check, but make sure that "extract_best_shot" is set to True as shown below:
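An illustrative request body is sketched below. The extract_best_shot flag is documented; the surrounding shape (a list of analysis objects with a "type" field) is an assumption — check the Postman collection:

```python
# Illustrative body for POST /api/folders/{{folder_id}}/analyses/
analyses_body = [
    {
        "type": "QUALITY",          # the Liveness analysis
        "extract_best_shot": True,  # save the best frame as a picture
    }
]
```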
If you want to use a webhook for the response, add it to the payload at this step, as described here.
2. Check and interpret the results in the same way as for the pure Liveness analysis.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for the response, add it to the payload at this step, as described here.
How to compare a photo or video with ones from your database.
The blacklist check algorithm is designed to check the presence of a person using a database of preloaded photos. A video fragment and/or a photo can be used as a source for comparison.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
1. Initiate the analysis: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need the analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Wait for the resolution_status and resolution fields to change to any status other than PROCESSING, and treat this as the result.
If you want to know which person from your collection matched the media you have uploaded, find the collection analysis in the response, check results_media, and retrieve person_id. This is the ID of the person who matched the person in your media. To get information about this person, use GET /api/collections/{{collection_id}}/persons/{{person_id}} with the IDs of your collection and person.
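Pulling the matched person IDs out of a response can be sketched as below. The COLLECTION type, results_media, and person_id names come from these docs; the exact nesting is illustrative — inspect a real response:

```python
def matched_person_ids(folder_analyses):
    """Pick the collection (blacklist) analysis out of a folder's analyses
    and collect the matched person_id values from results_media."""
    ids = []
    for analysis in folder_analyses:
        if analysis.get("type") == "COLLECTION":
            for media in analysis.get("results_media", []):
                person_id = media.get("person_id")
                if person_id:
                    ids.append(person_id)
    return ids

# Sample response fragment with an illustrative shape
sample = [{"type": "COLLECTION",
           "results_media": [{"person_id": "hypothetical-person-uuid"}]}]
matches = matched_person_ids(sample)
# Then fetch details with GET /api/collections/{{collection_id}}/persons/{{person_id}}
```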
This article describes how to get the analysis scores.
When you perform an analysis, the result you get is a number. For biometry, it reflects the chance that the two or more people represented in your media are the same person. For liveness, it shows the chance of a deepfake or a spoofing attack, i.e., that the person in the uploaded media is not a real one. You can get these numbers via API from a JSON response.
Make a request to the folder or folder list to get a JSON response. Set the with_analyses parameter to True.
For the Biometry analysis, check the response for the min_confidence value:
This value is a quantitative result of matching the people on the media uploaded.
For the Liveness analysis, look for the confidence_spoofing value related to the video you need:
This value is a chance that a person is not a real one.
To process a bunch of analysis results, you can parse the appropriate JSON response.
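A parsing sketch is below. The analysis types and score field names come from these docs; the exact paths within the folder JSON are illustrative — inspect a real response:

```python
def extract_scores(folder_json):
    """Collect the quantitative score of each analysis from a folder
    response requested with with_analyses=True."""
    scores = {}
    for analysis in folder_json.get("analyses", []):
        data = analysis.get("results_data", {})
        if analysis.get("type") == "BIOMETRY":
            scores["biometry"] = data.get("min_confidence")
        elif analysis.get("type") == "QUALITY":
            scores["liveness"] = data.get("confidence_spoofing")
    return scores

# Sample response fragment with an illustrative shape
sample = {"analyses": [
    {"type": "BIOMETRY", "results_data": {"min_confidence": 0.91}},
    {"type": "QUALITY", "results_data": {"confidence_spoofing": 0.03}},
]}
scores = extract_scores(sample)
```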
This article describes how to create a collection via API, how to add persons and photos to this collection, and how to delete them and the collection itself if you no longer need it. You can do the same in the Web console, but this article covers API methods only.
Collection in Oz API is a database of facial photos that are used to compare with the face from the captured photo or video via the Black list analysis.
Person represents a human in the collection. You can upload several photos for a single person.
The collection should be created within a company, so you need your company's company_id as a prerequisite.
If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the company_id you've received.
Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and the company_id of your company:
In the response, you'll get your new collection identifier: collection_id.
To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using the collection_id of the collection needed. In the request body, add a photo or several photos. Mark them with appropriate tags in the payload:
The response will contain the person_id, which stands for the person identifier within your collection.
If you want to add a name of the person, in the request payload, add it as metadata:
To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ using the appropriate person_id. The request body should be filled in as you did before with POST /api/collections/{{collection_id}}/persons/.
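The collection workflow so far can be sketched as small request-body builders. The endpoints are from these docs; the field names in the bodies, the filename, and the photo_selfie tag choice are assumptions — verify against the Postman collection:

```python
import json

def create_collection_body(alias, company_id):
    """Body for POST /api/collections/ -- an alias plus your company_id.
    The exact field names are an assumption."""
    return {"alias": alias, "company_id": company_id}

def person_photos_payload(filename):
    """`payload` form field for POST /api/collections/{{collection_id}}/persons/.
    The photo_selfie tag and the nesting are assumptions."""
    return json.dumps({"media:tags": {filename: ["photo_selfie"]}})

body = create_collection_body("blacklist-main", "hypothetical-company-uuid")
```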
To obtain information on all the persons within a single collection, call GET /api/collections/{{collection_id}}/persons/.
To obtain a list of photos for a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain person_image_id. You'll need this ID, for instance, if you want to delete the photo.
To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity if it has any related analyses, which means the Black list analysis used this photo for comparison and found a match. To delete such a person, first delete these analyses using DELETE /api/analyses/{{analyse_id}} with the analyse_id of the collection (Black list) analysis.
To delete all the collection-related analyses, get a list of folders where the Black list analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the analyse_id of the required analysis, and delete the analysis – DELETE /api/analyses/{{analyse_id}}.
To delete a single photo of a person, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}}/images/{{person_image_id}} with the collection, person, and image identifiers specified.
Delete the information on all the persons from this collection as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.
The webhook feature simplifies getting analyses' results. Instead of polling after the analyses are launched, add a webhook that will call your website once the results are ready.
When you create a folder, add the webhook endpoint (resolution_endpoint) into the payload section of your request body:
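A minimal sketch of such a payload is below. The resolution_endpoint key is documented here; the URL is a placeholder and any other folder-creation fields are omitted:

```python
import json

# Folder-creation payload carrying only the webhook endpoint
folder_payload = {
    "resolution_endpoint": "https://your-backend.example.com/oz/webhook"
}
payload_field = json.dumps(folder_payload)
```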
You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
Here, you'll get acquainted with types of analyses that Oz API provides and will learn how to interpret the output.
Using Oz API, you can perform one of the following analyses:
The possible results of the analyses are explained here.
Each of the analyses has its threshold that determines the output of these analyses. By default, the threshold for Liveness is 0.5 or 50%, for Blacklist and Biometry (Face Matching) – 0.85 or 85%.
Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.
Blacklist: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.
Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
To configure the threshold depending on your needs, please contact us.
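The default thresholds above can be expressed as a small helper, assuming the QUALITY/BIOMETRY/COLLECTION type names used elsewhere in these docs:

```python
def is_match(analysis_type, score):
    """Apply the documented default thresholds: 0.5 for Liveness (QUALITY),
    0.85 for Biometry and Blacklist (COLLECTION). True means the score
    crossed the threshold: an attack for Liveness, a face match otherwise."""
    threshold = 0.5 if analysis_type == "QUALITY" else 0.85
    return score >= threshold

attack_detected = is_match("QUALITY", 0.62)   # True: treated as an attack
faces_match = is_match("BIOMETRY", 0.80)      # False: below 0.85
```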
For more information on how to read the numbers in analyses' results, please refer to Quantitative Results.
The Biometry algorithm compares several media and checks whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media (for details, please refer to Rules of Assigning Analyses).
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100% to 0% (1 to 0), where:
100% (1) – the faces are similar, the media represent the same person,
0% (0) – the faces are not similar and belong to different people.
The Liveness detection (Quality) algorithm aims to check whether a person in a media is a real human acting in good faith, not a fake of any kind.
The Best Shot algorithm picks the best shot from a video (the best-quality frame where the face is most clearly visible). It is an addition to liveness.
After checking, the analysis shows the chance of a spoofing attack as a percentage:
100% (1) – an attack is detected, the person in the video is not a real living person,
0% (0) – the person in the video is a real living person.
*Spoofing in biometry is a kind of scam where a person disguises themselves as another person using program and non-program tools such as deepfakes, masks, ready-made photos, or fake videos.
The Documents analysis aims to recognize the document and check if its fields are correct according to its type.
Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.
As an output, you'll get a list of document fields with recognition results for each field and a result of checking that can be:
The documents passed the check successfully,
The documents failed to pass the check.
Additionally, the result of Biometry check is displayed.
The Blacklist checking algorithm is used to determine whether the person on a photo or video is present in the database of pre-uploaded images. This base can be used as a blacklist or whitelist. In the former case, the person's face is being compared with the faces of known swindlers; in the latter case, it might be a list of VIPs.
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:
100% (1) – the person in an image or video matches with someone in the blacklist,
0% (0) – the person is not found in the blacklist.
The description of the objects you can find in Oz Forensics system.
System objects in Oz Forensics products are hierarchically structured as shown in the picture below.
On the top level, there is a Company. You can use one copy of Oz API to work with several companies.
The next level is a User. A company can contain any number of users. There are several roles of users with different permissions. For more information, refer to User Roles.
When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders, and each folder can contain any number of media. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed on this page.
Besides these parameters, each object type has specific ones.
Each of the new API users obtains a role to define access restrictions for direct API connections.
Every role is combined with the flags is_admin and is_service, which impose additional restrictions.
is_service is a flag that marks the user account as a service account for automatic connection purposes. Authentication as this user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, by default, the lifetime of a token is extended with each request (parameterized).
ADMIN is a system administrator. Has unlimited access to all system objects, but can't change the analyses' statuses;
OPERATOR is a system operator. Can view all system objects and choose the analysis result via the Make Decision button (usually needed if the status is OPERATOR_REQUIRED);
CLIENT is a regular consumer account. Can upload media files, process analyses, view results in personal folders, and generate reports for analyses.
is_admin – if set, the user obtains access to other users' data within this admin's company;
can_start_analyse_biometry – an additional flag to allow access to BIOMETRY analyses (enabled by default);
can_start_analyse_quality – an additional flag to allow access to LIVENESS (QUALITY) analyses (enabled by default).
CLIENT ADMIN is a company administrator who can manage their company account and the users within it. Additionally, CLIENT ADMIN can view and edit the data of all users within their company, delete files in folders, add or delete report templates (with or without attachments), the reports themselves, and single analyses, check statistics, and add new blacklist collections. The role is present in Web UI only. Outside Web UI, CLIENT ADMIN is replaced by the CLIENT role with the is_admin flag set to true.
CLIENT OPERATOR is similar to OPERATOR within their company.
Here's the detailed information on access levels.
This article contains the full description of folders' and analyses' statuses in API.
The details on each status are below.
This is the state when the analysis is being processed. The values of this state can be:
PROCESSING – the analysis is in progress;
FAILED – the analysis failed due to some error and couldn't get finished;
FINISHED – job's done, the analysis is finished, and you can check the result.
Once the analysis is finished, you'll see one of the following results:
SUCCESS – everything went fine, the check succeeded (e.g., faces match or liveness confirmed);
OPERATOR_REQUIRED (except the Liveness analysis) – the result should be additionally checked by a human operator;
The OPERATOR_REQUIRED status appears only if it is set up in the biometry settings.
DECLINED – the check failed (e.g., faces don't match or some spoofing attack detected).
If the analysis hasn't been finished yet, the result inherits a value from analyse.state: PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).
A folder is an entity that contains media to analyze. If the analyses have not been finished, the stage of processing media is shown in resolution_status:
INITIAL – no analyses applied;
PROCESSING – analyses are in progress;
FAILED – any of the analyses failed due to some error and couldn't get finished;
FINISHED – media in this folder are processed, the analyses are finished.
Folder result is the consolidated result of all analyses applied to media from this folder. Please note: the folder result is the result of the last-finished group of analyses. If all analyses are finished, the result will be:
SUCCESS – everything went fine, all analyses completed successfully;
OPERATOR_REQUIRED (except the Liveness analysis) – there are no analyses with the DECLINED status, but one or more analyses have been completed with the OPERATOR_REQUIRED status;
DECLINED – one or more analyses have been completed with the DECLINED status.
The analyses you send in a single POST request form a group. The group result is the "worst" result of the analyses this group contains: INITIAL > PROCESSING > FAILED > DECLINED > OPERATOR_REQUIRED > SUCCESS, where SUCCESS means all analyses in the group have been completed successfully without any errors.
To work properly, the resolution algorithms need each uploaded media to be marked with special tags. For video and images, the tags are different. They help algorithms to identify what should be in the photo or video and analyze the content.
The following tag types should be specified in the system for video files.
To identify the data type of the video:
video_selfie.
To identify the orientation of the video:
orientation_portrait – portrait orientation;
orientation_landscape – landscape orientation.
To identify the action on the video:
video_selfie_left – head turn to the left;
video_selfie_right – head turn to the right;
video_selfie_down – head tilt downwards;
video_selfie_high – head raise up;
video_selfie_smile – smile;
video_selfie_eyes – blink;
video_selfie_scan – scanning;
video_selfie_oneshot – a one-frame analysis;
video_selfie_blank – no action.
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_* tags. Otherwise, it will be ignored by the algorithms.
Example of the correct tag set for a video file with the “blink” action:
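A sketch of such a tag set follows. The tag names are from this page (video_selfie_eyes is the blink action); whether the original example also carried an orientation tag is an assumption:

```python
# Illustrative tag set for a video with the "blink" action
blink_tags = ["video_selfie", "video_selfie_eyes", "orientation_portrait"]
```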
The following tag types should be specified in the system for photo files:
A tag for selfies:
photo_selfie – to identify the image type as “selfie”.
Tags for photos/scans of ID cards:
photo_id – to identify the image type as “ID”;
photo_id_front – for the photo of the ID front side;
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_* tags. Otherwise, it will be ignored by the algorithms.
Example of the correct tag set for a “selfie” photo file:
Example of the correct tag set for a photo file with the face side of an ID card:
Example of the correct set of tags for a photo file of the back of an ID card:
Metadata is available for most Oz system objects. Here is the list of these objects with the API methods required to add metadata. Please note: you can also add metadata to these objects during their creation.
You may want to use metadata to group folders by a person or lead. For example, if you want to calculate conversion when a single lead makes several Liveness attempts, just add the person/lead identifier to the folder metadata.
Here is how to add the client ID iin to a folder object.
In the request body, add:
You can pass an ID of a person in this field, and use this ID to combine requests with the same person and count unique persons (same ID = same person, different IDs = different persons). This ID can be a phone number, an IIN, an SSN, or any other kind of unique ID. The ID will be displayed in the report as an additional column.
If you store PII in metadata, make sure it complies with the relevant regulatory requirements.
You can also add metadata via SDK to process the information later using API methods. Please refer to the corresponding SDK sections:
This article covers the default rules of applying analyses.
Analyses in Oz system can be applied in two ways:
manually, for instance, when you choose the Liveness scenario in our demo application;
automatically, when you don’t choose anything and just assign all possible analyses (via API or SDK).
Below, you will find the tags and type requirements for all analyses. If a media doesn’t match the requirements for the certain analysis, this media is ignored by algorithms.
Important: to process a photo in API 4.0.8 and below, pack it into a .zip archive, apply the SHOTS_SET type, and mark it with video_* tags. Otherwise, it will be ignored.
This analysis is applied to all media.
If the folder contains less than two matching media files, the system will return an error. If there are more than two files, then all pairs will be compared, and the system will return a result for the pair with the least similar faces.
This analysis works only when you have a pre-made image database, which is called the blacklist. The analysis is applied to all media in the folder (or the ones marked as source media).
Best Shot is an addition to the Quality (Liveness) analysis. It requires the appropriate option enabled. The analysis is applied to all media files that can be processed by the Quality analysis.
The Documents analysis is applied to images with the tags photo_id_front and photo_id_back (documents), and photo_selfie (selfie). The result will be positive if the system finds the selfie photo and matches it with a photo on one of the valid documents from the following list:
personal ID card
driver license
foreign passport
| Field name / status | analyse.state | analyse.resolution_status | folder.resolution_status | system_resolution |
|---|---|---|---|---|
The tags listed allow the algorithms to recognize the files as suitable for the Quality (Liveness) and Biometry analyses.
photo_id_back – for the photo of the ID back side (ignored by other analyses such as Biometry or Quality).
Metadata is any optional data you might need to add to a system object. In the meta_data section, you can include any information you want, simply by providing any number of fields with their values:
You can also change or delete metadata. Please refer to our .
Another case is security: when you need to process the analyses' result from your back end but don't want to do it using the folder ID. Add an ID (transaction_id) to this folder and use this ID to search for the required information. This case is thoroughly explained here.
The automatic assignment means that the Oz system decides itself which analyses to apply to media files based on their tags and type. If you upload files via the web console, you select the tags needed; if you take a photo or video via Web SDK, the SDK picks the tags automatically. As for the media type, it can be IMAGE (a photo) / VIDEO / SHOTS_SET, where SHOTS_SET is a .zip archive equivalent to a video.
The rules listed below act by default. To change the mapping configuration, please contact us.
This analysis is applied to all media, regardless of the action recorded (gesture tags begin with video_selfie).
| Parameter | Type | Description |
|---|---|---|
| time_created | Timestamp | Object (except user and company) creation time |
| time_updated | Timestamp | Object (except user and company) update time |
| meta_data | Json | Any user parameters |
| technical_meta_data | Json | Module-required parameters; reserved for internal needs |
| Parameter | Type | Description |
|---|---|---|
| company_id | UUID | Company ID within the system |
| name | String | Company name within the system |
| Parameter | Type | Description |
|---|---|---|
| user_id | UUID | User ID within the system |
| user_type | String | |
| first_name | String | Name |
| last_name | String | Surname |
| middle_name | String | Middle name |
| email | String | User email = login |
| password | String | User password (only required for new users or to change) |
| can_start_analyze_* | String | Depends on user roles |
| company_id | UUID | Current user company’s ID within the system |
| is_admin | Boolean | Whether this user is an admin or not |
| is_service | Boolean | Whether this user account is service or not |
| Parameter | Type | Description |
|---|---|---|
| folder_id | UUID | Folder ID within the system |
| resolution_status | ResolutionStatus | The latest analysis status |
Parameter | Type | Description |
---|---|---|
media_id | UUID | Media ID |
original_name | String | Original filename (the file's name on the client machine) |
original_url | Url | HTTP link to this file on the API server |
tags | Array(String) | List of tags for this file |
Parameter | Type | Description |
---|---|---|
analyse_id | UUID | ID of the analysis |
folder_id | UUID | ID of the folder |
type | String | Analysis type (BIOMETRY/QUALITY/DOCUMENTS) |
results_data | JSON | Results of the analysis |
Role | Create | Read | Update | Delete |
---|---|---|---|---|
ADMIN | + | + | + | + |
OPERATOR | - | + | - | - |
CLIENT | - | their company data | - | - |
CLIENT SERVICE | - | their company data | - | - |
CLIENT OPERATOR | - | their company data | - | - |
CLIENT ADMIN | - | their company data | their company data | their company data |
Role | Create | Read | Update | Delete |
---|---|---|---|---|
ADMIN | + | + | + | + |
OPERATOR | + | + | + | - |
CLIENT | their folders | their folders | their folders | - |
CLIENT SERVICE | within their company | within their company | within their company | - |
CLIENT OPERATOR | within their company | within their company | within their company | - |
CLIENT ADMIN | within their company | within their company | within their company | within their company |
Role | Create | Read | Update | Delete |
---|---|---|---|---|
ADMIN | + | + | + | + |
OPERATOR | + | + | + | - |
CLIENT | - | within their company | - | - |
CLIENT SERVICE | - | within their company | - | - |
CLIENT OPERATOR | within their company | within their company | within their company | - |
CLIENT ADMIN | within their company | within their company | within their company | within their company |
Role | Create | Read | Delete |
---|---|---|---|
ADMIN | + | + | + |
OPERATOR | + | + | - |
CLIENT | - | within their company | - |
CLIENT SERVICE | - | within their company | - |
CLIENT OPERATOR | within their company | within their company | - |
CLIENT ADMIN | within their company | within their company | within their company |
Role | Create | Read | Delete |
---|---|---|---|
ADMIN | + | + | + |
OPERATOR | + | + | - |
CLIENT | in their folders | in their folders | - |
CLIENT SERVICE | within their company | within their company | - |
CLIENT OPERATOR | within their company | within their company | - |
CLIENT ADMIN | within their company | within their company | within their company |
Role | Create | Read | Update | Delete |
---|---|---|---|---|
ADMIN | + | + | + | + |
OPERATOR | + | + | + | - |
CLIENT | in their folders | in their folders | - | - |
CLIENT SERVICE | within their company | within their company | within their company | - |
CLIENT OPERATOR | within their company | within their company | within their company | - |
CLIENT ADMIN | within their company | within their company | within their company | within their company |
Role | Create | Read | Update | Delete |
---|---|---|---|---|
ADMIN | + | + | + | + |
OPERATOR | - | + | - | - |
CLIENT | - | within their company | - | - |
CLIENT SERVICE | within their company | within their company | - | - |
CLIENT OPERATOR | - | within their company | - | - |
CLIENT ADMIN | within their company | within their company | within their company | within their company |
Role | Create | Read | Delete |
---|---|---|---|
ADMIN | + | + | + |
OPERATOR | - | + | - |
CLIENT | - | within their company | - |
CLIENT SERVICE | within their company | within their company | - |
CLIENT OPERATOR | - | within their company | - |
CLIENT ADMIN | within their company | within their company | within their company |
Role | Create | Read | Delete |
---|---|---|---|
ADMIN | + | + | + |
OPERATOR | - | + | - |
CLIENT | - | within their company | - |
CLIENT SERVICE | - | within their company | - |
CLIENT OPERATOR | - | within their company | - |
CLIENT ADMIN | within their company | within their company | within their company |
Role | Create | Read | Update | Delete |
---|---|---|---|---|
ADMIN | + | + | + | + |
OPERATOR | - | + | their data | - |
CLIENT | - | their data | their data | - |
CLIENT SERVICE | - | within their company | their data | - |
CLIENT OPERATOR | - | within their company | their data | - |
CLIENT ADMIN | within their company | within their company | within their company | within their company |
INITIAL | - | - | starting state | starting state |
PROCESSING | starting state | starting state | analyses in progress | analyses in progress |
FAILED | system error | system error | system error | system error |
FINISHED | finished successfully | - | finished successfully | - |
DECLINED | - | check failed | - | check failed |
OPERATOR_REQUIRED | - | additional check is needed | - | additional check is needed |
SUCCESS | - | check succeeded | - | check succeeded |
Object | API Method |
User |
|
Folder |
|
Media |
|
Analysis |
|
Collection |
and, for a person in a collection,
|
Code | Message | Description |
202 | Could not locate face on source media [media_id] | No face is found in the media that is being processed, or the source media has wrong ( |
202 | Biometry. Analyse requires at least 2 media objects to process | The algorithms did not find the two appropriate media for analysis. This might happen when only a single media has been sent for the analysis, or a media is missing a tag. |
202 | Processing error - did not found any document candidates on image | The Documents analysis can't be finished because the photo uploaded seems not to be a document, or it has wrong (not |
5 | Invalid/missed tag values to process quality check | The tags applied can't be processed by the Quality algorithm (most likely, the tags begin from |
5 | Invalid/missed tag values to process blacklist check | The tags applied can't be processed by the Blacklist algorithm. This might happen when a media is missing a tag. |
Response codes 2XX indicate a successfully processed request (e.g., code 200 for retrieving data, code 201 for adding a new entity, code 204 for deletion, etc.).
Response codes 4XX indicate that a request could not be processed correctly because of some client-side data issues (e.g., 404 when addressing a non-existing resource).
Response codes 5XX indicate that an internal server-side error occurred during the request processing (e.g., when database is temporarily unavailable).
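The three ranges above can be sketched as a small helper (illustrative only, not part of the API):

```python
def status_class(code: int) -> str:
    """Map an HTTP status code to the response classes described above."""
    if 200 <= code < 300:
        return "success"        # e.g. 200 OK, 201 Created, 204 No Content
    if 400 <= code < 500:
        return "client error"   # e.g. 404 for a non-existing resource
    if 500 <= code < 600:
        return "server error"   # e.g. database temporarily unavailable
    return "other"

print(status_class(201))  # success
print(status_class(404))  # client error
```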
Each error response includes an HTTP code and JSON data with the error description. The JSON has the following structure:
error_code
– integer error code;
error_message
– text error description;
details
– additional error details (format is specified to each case). Can be empty.
Sample error response:
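For illustration, an error response of this shape might look like the following (the values are invented, not taken from a real server):

```python
import json

# Hypothetical error payload following the structure described above:
# error_code, error_message, and optional details.
raw = '{"error_code": 4, "error_message": "INVALID VALUE", "details": {}}'

err = json.loads(raw)
print(err["error_code"], err["error_message"])
```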
Error codes:
0 – UNKNOWN
Unknown server error.
1 - NOT ALLOWED
An unallowed method was called; usually accompanied by the 405 HTTP response status. For example, requesting the PATCH method when only GET/POST are supported.
2 - NOT REALIZED
The method is documented but not implemented, for temporary or permanent reasons.
3 - INVALID STRUCTURE
Incorrect request structure: required fields are missing or a format validation error occurred.
4 - INVALID VALUE
Incorrect value of the parameter inside request body or query.
5 - INVALID TYPE
Invalid data type of a request parameter.
6 - AUTH NOT PROVIDED
Access token not specified.
7 - AUTH INVALID
The access token does not exist in the database.
8 - AUTH EXPIRED
Auth token is expired.
9 - AUTH FORBIDDEN
Access denied for the current user.
10 - NOT EXIST
The requested resource is not found (equivalent to HTTP status code 404).
11 - EXTERNAL SERVICE
Error in the external information system.
12 – DATABASE
Critical database error on the server host.
What is Oz API Lite, when and how to use it.
Oz API Lite is the lightweight yet powerful version of Oz API. The Lite version is less resource-demanding, faster, and easier to work with. The analyses are performed within the API Lite image. As Oz API Lite doesn't include supplementary services such as statistics or data storage, choose this version when you need high performance.
To check the Liveness processor, call GET /v1/face/liveness/health
.
To check the Biometry processor, call GET /v1/face/pattern/health
.
To perform the liveness check for an image, call POST /v1/face/liveness/detect
: it takes an image as input and returns an estimate of the probability of a spoofing attack in this image.
To compare two faces in two images, call POST /v1/face/pattern/extract_and_compare
: it takes two images as input, derives the biometric templates from them, and compares the templates.
To compare an image with multiple other images, call POST /v1/face/pattern/extract_and_compare_n
.
For the full list of Oz API Lite methods, please refer to API Methods.
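As a sketch, a liveness request to a locally deployed API Lite could be prepared with the Python standard library as follows (localhost is a placeholder host; the request is only constructed here, not sent):

```python
import urllib.request

# Placeholder address; substitute your Oz API Lite host.
url = "http://localhost/v1/face/liveness/detect"

# Only the request skeleton is built; a real call would attach
# the image bytes as the body before sending.
req = urllib.request.Request(
    url,
    method="POST",
    headers={"Content-Type": "image/jpeg"},
)
print(req.get_method(), req.full_url)
```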
API changes
Face Identification 1:N is now live, significantly increasing the data processing capacity of the Oz API to find matches. Even huge face databases (containing millions of photos and more) are no longer an issue.
The Liveness (QUALITY) analysis now ignores photos tagged with photo_id
, photo_id_front
, or photo_id_back
, preventing these photos from causing the tag-related analysis error.
You can now apply the Liveness (QUALITY) analysis to a single image.
Fixed the bug where the Liveness analysis could finish with the SUCCESS result with no media uploaded.
The default value for the extract_best_shot
parameter is now True.
RAR archives are no longer supported.
By default, analyses.results_media.results_data
now contains the confidence_spoofing
parameter. However, if you need all three parameters for backward compatibility, you can change the response back to three parameters: confidence_replay
, confidence_liveness
, and confidence_spoofing
.
Updated the default PDF report template.
The name of the PDF report now contains folder_id
.
Enabled the autorotation of logs.
Added the CLI command for user deletion.
You can now switch off the video preview generation.
The ADMIN access token is now valid for 5 years.
Added the folder identifier folder_id
to the report name.
Fixed bugs and optimized the API work.
For the sliced video, the system now deletes the unnecessary frames.
Added new methods: GET and POST at media/<media_id>/snapshot/
Replaced the default report template.
The shots set preview now keeps the images’ aspect ratio.
ADMIN and OPERATOR receive system_company
as the company they belong to.
Added the company_id
attribute to User, Folder, Analyse, and Media.
Added the Analysis group_id
attribute.
Added the system_resolution
attribute to Folder and Analysis.
The analysis resolution_status
now returns the system_resolution
value.
Removed the PATCH method for collections.
Added the resolution_status
filter to Folder Analyses [LIST] and the analyse.resolution_status
filter to Folder [LIST].
Added the audit log for Folder, User, Company.
Improved the company deletion algorithm.
Reforged the blacklist processing logic.
Fixed a few bugs.
The Photo Expert and KYC modules are now removed.
The endpoint for changing a user password is now POST users/user_id/change-password
instead of PATCH
.
Provided log for the Celery app.
Added filters to the Folder [LIST] request parameters: analyse.time_created
, analyse.results_data
for the Documents analysis, results_data
for the Biometry analysis, and results_media_results_data
for the QUALITY analysis. To enable these filters, set the with_results_media_filter
query parameter to True
.
Added a new attribute for users – is_active
(default True
). If is_active == False
, any user operation is blocked.
Added a new exception code (1401 with status code 401) for the actions of the blocked users.
Added shots sets preview.
You can now save a shots set archive to disk (with the original_local_path
and original_url
attributes).
A new original_info attribute is added to store the md5, size, and mime-type of a shots set.
Fixed ReportInfo
for shots sets.
Added health check at GET api/healthcheck.
Fixed the shots set thumbnail URL.
The first frame of a shots set now becomes that shots set's thumbnail.
Modified the retry policy: the default maximum number of analysis attempts is increased to 3, and a jitter configuration was introduced.
Changed the callback algorithm.
Refactored and documented the command line tools.
Refactored modules.
Changed the delete-personal-information endpoint from delete_pi
to /pi
and the method from POST to DELETE.
Improved the delete personal information algorithm.
It is now forbidden to add media to cleaned folders.
Changed the authorize/restore endpoint name from auth
to auth_restore
.
Added a new tag – video_selfie_oneshot
.
Added the password validation setting (OZ_PASSWORD_POLICY
).
Added auth
, rest_unauthorized
, and rps_with_token
throttling (use OZ_THROTTLING_RATES
in configuration; off by default).
User permissions are now used to access static files (OZ_USE_PERMISSIONS_FOR_STATIC
in configuration, false by default).
Added a new folder endpoint – /delete_pi
. It clears all personal information from a folder and analyses related to this folder.
Fixed a bug where no error was raised when trying to synchronize empty collections.
If persons are uploaded, the analyse collection TFSS request is sent.
Added the fields_to_check
parameter to the document analysis (by default, all fields are checked).
Added the double_page_spread
parameter to the document analysis (True
by default).
Fixed collection synchronization.
The authorization token can now be refreshed via expire_token
.
Added support for application/x-gzip.
Renamed shots_set.images to shots_set.frames.
Added user sessions API.
Users can now change a folder owner (limited by permissions).
Changed dependencies rules.
Changed the access_token prolongation policy to fix a bug where the token was prolonged before the expiration permission was checked.
Moved oz_collection_binding
(collection synchronization functionality) to oz_core.
Simplified the shots sets functionality. One archive keeps one shot set.
Improved the document sides recognition for the docker version.
Moved the orientation tag check to liveness at quality analysis.
Added a default report template for Admin and Operator.
Updated the biometric model.
A new ShotsSet object is not created if there are no photos for it.
Updated the data exchange format for the documents' recognition module.
You can’t delete a Collection if analyses are associated with its Collection Persons.
Added time marks to analysis: time_task_send_to_broker
, time_task_received
, time_task_finished.
Added a new authorization engine. You can now connect with Active Directory by LDAP (settings configuration required).
A new type of media in Folders – "shots_set".
You can’t delete a CollectionPerson if there are analyses associated with it.
Renamed the folder field “resolution_suggest” to “operator_status”.
Added a folder text field “operator_comment”.
The folder fields “operator_status” and “operator_comment” can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.
Only Admin and Client Admin can delete folder, folder media, report template, report template attachments, reports, and analyses (within their company).
Fixed a deletion error: when a report author is deleted, their reports are deleted as well.
Client can now view only their own profile.
Client Operator can now edit only their profile.
Client can't delete own folders, media, reports, or analyses anymore.
Client Service can now create Collection Person and read reports within their company.
Client, Client Admin, Client Operator have read access to users profiles only in their company.
A/B testing is now available.
Added support for expiration date header.
Added document recognition module Standalone/Dockered binding support.
Added a new role of Client Operator (like Client Admin without permissions for company and account management).
Client Admin and Client Operator can change the analysis status.
Only Admin and Client Admin (for their company) can create, update and delete operations for Collection and CollectionPerson models from now on.
Added a check for user permissions to report template when creating a folder report.
Collection creation now returns status code 201 instead of 200.
Download and install the Postman client from this page. Then download the JSON file needed:
5.0
Oz API 5.1.0 works with the same collection.
Launch the client and import Oz API collection for Postman by clicking the Import button:
Click files, locate the JSON needed, and hit Open to add it:
The collection will be imported and will appear in the Postman interface:
Oz Mobile SDK stands for the Software Developer’s Kit of the Oz Forensics Liveness and Face Biometric System, providing seamless integration with customers’ mobile apps for login and biometric identification.
Currently, both Android and iOS SDK work in the portrait mode.
API Lite (FaceVer) changes
API Lite now accepts base64
Improved the biometric model
Added the 1:N mode
Added the CORS policy
Published the documentation
Improved error messages – made them more detailed
Simplified the Liveness/Detect methods
Reworked and improved the core
Added anti-spoofing algorithms
Added the extract_and_compare method
Download and install the Postman client from this page. Then download the JSON file needed:
Added the
From 1.1.0, Oz API Lite works with base64 as an input format and is also able to return the biometric templates in this format. To enable this option, add Content-Transfer-Encoding = base64
to the request headers.
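For instance, preparing a base64 body and the corresponding header in Python might look like this (the file name in the comment is a placeholder):

```python
import base64

# Stand-in for real image bytes; in practice, read your file, e.g.:
# data = open("selfie.jpg", "rb").read()
data = b"\xff\xd8\xff\xe0 not a real JPEG"

body = base64.b64encode(data)  # base64-encoded request body
headers = {
    "Content-Type": "image/jpeg",
    "Content-Transfer-Encoding": "base64",  # switches API Lite to base64 mode
}

# The server decodes the body back to the original bytes:
print(base64.b64decode(body) == data)
```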
Use this method to check what versions of components are used (available from 1.1.1).
Call GET /version
-
GET localhost/version
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
Use this method to check whether the biometric processor is ready to work.
Call GET /v1/face/pattern/health
-
GET localhost/v1/face/pattern/health
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
The method is designed to extract a biometric template from an image.
HTTP request content type: “image/jpeg” or “image/png”
Call POST /v1/face/pattern/extract
To transfer data in base64, add Content-Transfer-Encoding = base64
to the request headers.
In case of success, the method returns a biometric template.
The content type of the HTTP response is “application/octet-stream”.
If you've passed Content-Transfer-Encoding = base64
in headers, the template will be in base64 as well.
The method is designed to compare two biometric templates.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare
To transfer data in base64, add Content-Transfer-Encoding = base64
to the request headers.
In case of success, the method returns the result of comparing the two templates.
HTTP response content type: “application/json”.
The method combines the two methods from above, extract and compare. It extracts a template from an image and compares the resulting biometric template with another biometric template that is also passed in the request.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify
To transfer data in base64, add Content-Transfer-Encoding = base64
to the request headers.
In case of success, the method returns the result of comparing two biometric templates and the biometric template.
The content type of the HTTP response is “multipart/form-data”.
The method also combines the two methods from above, extract and compare. It extracts templates from two images, compares the received biometric templates, and transmits the comparison result as a response.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/extract_and_compare
To transfer data in base64, add Content-Transfer-Encoding = base64
to the request headers.
In case of success, the method returns the result of comparing the two extracted biometric templates.
HTTP response content type: “application/json”.
Use this method to compare one biometric template to N others.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare_n
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
The method combines the extract and compare_n methods. It extracts a biometric template from an image and compares it to N other biometric templates that are passed in the request as a list.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify_n
To transfer data in base64, add Content-Transfer-Encoding = base64
to the request headers.
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
This method also combines the extract and compare_n methods but in another way. It extracts biometric templates from the main image and a list of other images and then compares them in the 1:N mode.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/
extract_and_compare_n
To transfer data in base64, add Content-Transfer-Encoding = base64
to the request headers.
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
Use this method to check whether the liveness processor is ready to work.
Call GET /v1/face/liveness/health
None.
GET localhost/v1/face/liveness/health
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
The method is designed to detect presentation attacks on images.
HTTP request content type: “image/jpeg” or “image/png”
Call POST /v1/face/liveness/detect
To transfer data in base64, add Content-Transfer-Encoding = base64
to the request headers.
In case of success, the method returns an estimate of the presence of a presentation attack in the image.
HTTP response content type: “application/json”.
To start using Oz Android SDK, follow the steps below.
Recommended Android version: 5+ (the newer the smartphone is, the faster the analyses are).
Recommended versions of components:
We do not support emulators.
Available languages: EN, ES, HY, KK, KY, TR, PT-BR.
To obtain the sample apps source code for the Oz Liveness SDK, proceed to the GitLab repository:
Follow the link below to see a list of SDK methods and properties:
Oz Mobile SDK typically works in the server-based mode, where the Liveness and Biometry analyses are performed on a server, as shown in the scheme below:
But there's also an option to perform checks without server calls or even an Internet connection: the on-device analyses.
On-device analyses are faster and more secure, as all data is processed directly on the device and nothing is sent anywhere. In this case, you don’t need a server at all, nor do you need the API connection.
However, the API connection might still be needed for some additional functions like telemetry or server-side SDK configuration.
The on-device analysis mode is useful when:
you do not collect, store or process personal data;
you need to identify a person quickly regardless of network conditions such as a distant region, inside a building, underground, etc.;
you’re on a tight budget as you can save money on the hardware part.
To launch the on-device check, set the appropriate mode
for Android or iOS SDK.
Android:
iOS:
To pass your license file to the SDK, call the OzLivenessSDK.init
method with a list of LicenseSources
. Use one of the following:
LicenseSource.LicenseAssetId
should contain a path to a license file called forensics.license
, which has to be located in the project's res/raw folder.
LicenseSource.LicenseFilePath
should contain a file path to the place in the device's storage where the license file is located.
In case of any license errors, the onError
function is called. Use it to handle the exception as shown above. Otherwise, the system returns information about the license. To check the license data manually, use the getLicensePayload
method.
A master license is an offline license that allows using the Mobile SDKs with any bundle_id
, unlike regular licenses. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that.
Your application needs to sign its bundle_id
with the private key, and the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.
This section describes the process of creating your private and public keys.
To create a private key, run the commands below one by one.
You will get these files:
privateKey.der is a private .der key;
privateKey.txt is privateKey.der encoded in base64. Its contents will be used as the host app bundle_id signature.
File examples:
To create a public key, run this command.
You will get the public key file: publicKey.pub. To get a license, please email us this file.
File example:
SDK initialization:
For Android 6.0 (API level 23) and older:
Add the implementation 'com.madgag.spongycastle:prov:1.58.0.0'
dependency;
Before creating a signature, call Security.insertProviderAt(org.spongycastle.jce.provider.BouncyCastleProvider(), 1)
Prior to initializing the SDK, create a base64-encoded signature for the host app bundle_id
using the private key.
Signature creation example:
Pass the signature as the masterLicenseSignature
parameter during the SDK initialization.
If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_id
values included in the license, as it does by default without a master license.
Embed Oz Android SDK into your project as described .
Get a trial license for SDK on our or a production license by . We'll need your application id
. Add the license to your project as described .
Connect SDK to API as described . This step is optional, as this connection is required only when you need to process data on a server. If you use the , the data is not transferred anywhere, and no connection is needed.
Capture videos using methods described . You'll send them for analysis afterward.
Analyze media you've taken at the previous step. The process of checking liveness and face biometry is described .
If you want to customize the look-and-feel of Oz Android SDK, please refer to .
Download the demo app latest build .
For details, please refer to the Checking Liveness and Face Biometry sections for and .
You can generate the trial license or contact us to get a production license. To create the license, your applicationId
(bundle id
) is required.
The OpenSSL command specification:
Parameter name | Type | Description |
---|---|---|
core | String | API Lite core version number. |
tfss | String | TFSS version number. |
models | [String] | An array of model versions; each record contains the model name and model version number. |
Parameter name | Type | Description |
---|---|---|
status | Int | 0 – the biometric processor is working correctly; 3 – the biometric processor is inoperative. |
message | String | Message. |
Parameter name | Type | Description |
---|---|---|
Not specified* | Stream | Required parameter. Image to extract the biometric template. |

*The “Content-Type” header field must indicate the content type.
Parameter name | Type | Description |
---|---|---|
Not specified* | Stream | A biometric template derived from an image |
Parameter name | Type | Description |
---|---|---|
bio_feature | Stream | Required parameter. First biometric template. |
bio_template | Stream | Required parameter. Second biometric template. |
Parameter name | Type | Description |
---|---|---|
score | Float | The result of comparing two templates |
decision | String | Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match. |
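The decision value is computed server-side from the score. A hedged sketch of how such a mapping could work (the 0.9/0.5 thresholds are invented for illustration and are not the values Oz API Lite actually uses):

```python
def decision_from_score(score: float,
                        approve_at: float = 0.9,
                        decline_at: float = 0.5) -> str:
    """Map a comparison score to the decision values described above.

    The approve_at/decline_at thresholds are illustrative assumptions only.
    """
    if score >= approve_at:
        return "approved"
    if score < decline_at:
        return "declined"
    return "operator_required"

print(decision_from_score(0.95))  # approved
```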
Parameter name | Type | Description |
---|---|---|
sample | Stream | Required parameter. Image to extract the biometric template. |
bio_template | Stream | Required parameter. The biometric template to compare with. |
Parameter name | Type | Description |
---|---|---|
score | Float | The result of comparing two templates |
bio_feature | Stream | Biometric template derived from the image |
Parameter name | Type | Description |
---|---|---|
sample_1 | Stream | Required parameter. First image. |
sample_2 | Stream | Required parameter. Second image. |
Parameter name | Type | Description |
---|---|---|
score | Float | The result of comparing the two extracted templates. |
decision | String | Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match. |
Parameter name | Type | Description |
---|---|---|
template_1 | Stream | This parameter is mandatory. The first (main) biometric template |
templates_n | Stream | A list of N biometric templates. Each of them should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header. |
Parameter name | Type | Description |
---|---|---|
results | List[JSON] | A list of N comparison results. The Nth result contains the comparison result for the main and Nth templates. The result has the fields as follows: |
*filename | String | A filename for the Nth template. |
*score | Float | The result of comparing the main and Nth templates. |
*decision | String | Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match. |
Parameter name | Type | Description |
---|---|---|
sample_1 | Stream | This parameter is mandatory. The main image. |
templates_n | Stream | A list of N biometric templates. Each of them should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header. |
Parameter name | Type | Description |
---|---|---|
results | List[JSON] | A list of N comparison results. The Nth result contains the comparison result for the template derived from the main image and the Nth template. The result has the fields as follows: |
*filename | String | A filename for the Nth template. |
*score | Float | The result of comparing the template derived from the main image and the Nth template. |
*decision | String | Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match. |
Parameter name | Type | Description |
---|---|---|
sample_1 | Stream | This parameter is mandatory. The first (main) image. |
samples_n | Stream | A list of N images. Each of them should be passed separately, but the parameter name should be samples_n. You also need to pass the filename in the header. |
Parameter name | Type | Description |
---|---|---|
results | List[JSON] | A list of N comparison results. The Nth result contains the comparison result for the main and Nth images. The result has the fields as follows: |
*filename | String | A filename for the Nth image. |
*score | Float | The result of comparing the main and Nth images. |
*decision | String | Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative, the faces don't match. |
HTTP response code | The value of the “code” parameter | Description |
---|---|---|
400 | BPE-002001 | Invalid Content-Type of HTTP request |
400 | BPE-002002 | Invalid HTTP request method |
400 | BPE-002003 | Failed to read the biometric sample* |
400 | BPE-002004 | Failed to read the biometric template |
400 | BPE-002005 | Invalid Content-Type of the multiparted HTTP request part |
400 | BPE-003001 | Failed to retrieve the biometric template |
400 | BPE-003002 | The biometric sample* is missing a face |
400 | BPE-003003 | More than one person is present in the biometric sample* |
500 | BPE-001001 | Internal bioprocessor error |
400 | BPE-001002 | TFSS error. Call the biometry health method. |
Parameter name | Type | Description |
---|---|---|
status | Int | 0 – the liveness processor is working correctly; 3 – the liveness processor is inoperative. |
message | String | Message. |
Parameter name | Type | Description |
---|---|---|
Not specified* | Stream | Required parameter. Picture. |

*The “Content-Type” header field must indicate the content type.
Parameter name | Type | Description |
---|---|---|
score | Float | Evaluation of the presence of a presentation attack in the image on a scale from 0 (no signs of an attack) to 1 (maximum confidence in the presence of an attack). |
passed | Boolean | Recommended solution based on the score: True – there is no presentation attack on the image; False – the image contains a presentation attack. |
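The passed flag is derived from score on the server. A sketch of the relationship (the cut-off value is an assumption for illustration, not the server's actual threshold):

```python
def liveness_passed(score: float, threshold: float = 0.5) -> bool:
    """True when the spoofing-attack estimate is below the cut-off.

    0.5 is an illustrative threshold, not the value Oz API Lite uses.
    """
    return score < threshold

print(liveness_passed(0.02))  # low score: no signs of an attack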
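The passed flag is derived from score on the server. A sketch of the relationship (the cut-off value is an assumption for illustration, not the server's actual threshold):

```python
def liveness_passed(score: float, threshold: float = 0.5) -> bool:
    """True when the spoofing-attack estimate is below the cut-off.

    0.5 is an illustrative threshold, not the value Oz API Lite uses.
    """
    return score < threshold

print(liveness_passed(0.02))  # low score: no signs of an attack
```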
HTTP response code | The value of the “code” parameter | Description |
---|---|---|
400 | LDE-002001 | Invalid Content-Type of HTTP request |
400 | LDE-002002 | Invalid HTTP request method |
400 | LDE-002004 | Failed to extract the biometric sample* |
400 | LDE-002005 | Invalid Content-Type of the multiparted HTTP request part |
500 | LDE-001001 | Liveness detection processor internal error |
400 | LDE-001002 | TFSS error. Call the Liveness health method. |
Component | Version |
---|---|
Gradle | 7.5.1 |
Kotlin | 1.7.21 |
AGP | 7.3.1 |
Java Target Level | 1.8 |
JDK | 17 |
| Error message | Description |
|---|---|
| License error. License at (your_URI) not found | The license file is missing. Please check its name and the path to the file. |
| License error. Cannot parse license from (your_URI), invalid format | The license file is damaged. Please email us the file. |
| License error. Bundle company.application.id is not in the list allowed by license (bundle.id1, bundle.id2) | The bundle (application) identifier you specified is not in the allowed list. Please check the spelling; if it is correct, you need to obtain another license for your application. |
| License error. Current date yyyy-mm-dd hh:mm:ss is later than license expiration date yyyy-mm-dd hh:mm:ss | Your license has expired. Please contact us. |
| License is not initialized. Call 'OzLivenessSDK.init before using SDK | You haven't initialized the license. Call `OzLivenessSDK.init` before using the SDK. |
Add the following URL to the `build.gradle` file of the project:

Add this to the `build.gradle` file of the module (VERSION is the version you need to implement; please refer to Changelog):
Please note: this is the default version.
Please note: the resulting file will be larger.
Also, regardless of the mode chosen, add:
To connect the SDK to Oz API, specify the API URL and access token as shown below.
Please note: in your host application, it is recommended to set the API address on the screen that precedes the liveness check. Setting the API URL triggers a service call to the API, which may cause excessive server load if done at application initialization or startup.
Alternatively, you can use the login and password provided by your Oz Forensics account manager:
However, for security reasons, the preferred option is authentication via an access token.
For telemetry, set the separate connection as shown below:
Clearing authorization:
Check for the presence of the saved Oz API access token:
LogOut:
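The authorization lifecycle described above (connect with a token, check for a saved token, clear the authorization) can be modeled with a minimal token store. This is a standalone illustration of the flow, not the actual Oz SDK classes or method names:

```kotlin
// Standalone model of the documented lifecycle; AuthSession is a
// hypothetical class, not an Oz SDK type.
class AuthSession {
    private var accessToken: String? = null

    fun authorize(token: String) { accessToken = token }  // connect with a token
    fun hasSavedToken(): Boolean = accessToken != null    // check for a saved token
    fun logOut() { accessToken = null }                   // clear authorization
}
```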
To start recording, use the `startActivityForResult` method:
`actions` – a list of user actions to perform while recording the video.
For Fragment, use the code below. `LivenessFragment` is the Fragment representation of the Liveness screen UI.
To obtain the captured video, use the `onActivityResult` method:
`sdkMediaResult` – an object with the video capturing results for interaction with Oz API (a list of `OzAbstractMedia` objects),
`sdkErrorString` – a description of errors, if any.
If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.
If a user closes the capturing screen manually, `resultCode` receives the `Activity.RESULT_CANCELED` value.
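A result handler can branch on the standard Android result codes. The sketch below is self-contained: the constants mirror the values of `Activity.RESULT_OK` (-1) and `Activity.RESULT_CANCELED` (0), and the helper name is illustrative:

```kotlin
// Standard Android result-code values, inlined so the sketch is self-contained.
val RESULT_OK = -1        // Activity.RESULT_OK
val RESULT_CANCELED = 0   // Activity.RESULT_CANCELED

fun describeCaptureResult(resultCode: Int): String = when (resultCode) {
    RESULT_OK -> "video captured"
    RESULT_CANCELED -> "user closed the capturing screen"
    else -> "unexpected result code: $resultCode"
}
```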
Code example:
If you use our SDK just for capturing videos, omit this step.
To check liveness and face biometry, you need to upload media to our system and then analyze them.
To interpret the results of analyses, please refer to Types of Analyses.
Here’s an example of performing a check:
To delete media files after the checks are finished, use the `clearActionVideos` method.
To add metadata to a folder, use the `addFolderMeta` method.
In the `params` field of the `Analysis` structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.
To use a media file captured with another SDK (not the Oz Android SDK), specify the path to it in `OzAbstractMedia`:
If you want to add your media to an existing folder, use the `setFolderId` method:
We recommend applying these settings when starting the app.
To customize the Oz Liveness interface, use `UICustomization` as shown below. For the description of the customization parameters, please refer to Android SDK Methods and Properties.