In this section, you'll learn how to perform analyses and where to get the numeric results.
Liveness checks that the person in a video is a real, living person.
Biometry compares two or more faces from different media files and shows whether they belong to the same person.
Best shot is an addition to the Liveness check: the system chooses the best frame from a video and saves it as a picture for later use.
Blacklist checks whether a face in a photo or video matches one of the faces in a pre-created database.
The Quantitative Results section explains where and how to find the numeric results of analyses.
To learn what each analysis means, please refer to Types of Analyses.
Oz API is the most important component of the system. It makes sure all other components are connected with each other. Oz API:
provides the unified Rest API interface to run the Liveness and Biometry analyses
processes authorization and user permissions management
tracks and records requested orders and analyses to the database
archives the inbound media files
collects telemetry from connected mobile apps
provides settings for specific device models
generates reports with analyses results
Description of the Rest API scheme:
In this section, you will find the description of both the API and SDK components of the Oz Forensics Liveness and face biometry system. API is the backend component of the system; it is needed for all the system modules to interact with each other. SDK is the frontend component that is used to:
1) capture videos or images, which are then processed via API;
2) display results.
We provide two versions of the API. The full version provides all the functionality of Oz API. The Lite version is a simple and lightweight version with only the necessary functions included.
The SDK component consists of Web SDK and Mobile SDK. Web SDK is a plugin that you can embed into your website page, plus the adapter for this plugin. Mobile SDK is the SDK for iOS and Android.
To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked with tags: they describe what's pictured in the media and determine the applicable analyses.
For API 4.0.8 and below, please note: if you want to upload a photo for a subsequent Liveness analysis, put it into a ZIP archive and apply the video-related tags.
To create a folder and upload media to it, call POST /api/folders/
To add files to the existing folder, call POST /api/folders/{{folder_id}}/media/
Add the files to the request body; tags should be specified in the payload.
Here's an example of the payload for a passive Liveness video and an ID front-side photo.
An example of usage (Postman):
The successful response will return the folder data.
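As an illustration of the steps above, here is a hedged sketch of such a tag payload. The media key names and the overall wrapper structure are assumptions; the tag names themselves come from the media tags section of this document.

```python
import json

# Hypothetical sketch of the tag payload sent with POST /api/folders/
# (or POST /api/folders/{{folder_id}}/media/). The "media:*" keys and the
# wrapper shape are assumptions; the tags are documented in the tags section.
payload = {
    "media:video1": {  # a passive Liveness video
        "tags": ["video_selfie", "video_selfie_blank", "orientation_portrait"]
    },
    "media:photo1": {  # the front side of an ID document
        "tags": ["photo_id_front"]
    },
}
print(json.dumps(payload, indent=2))
```

The files themselves go into the request body as multipart attachments, while a structure like this travels in the payload.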
The Liveness detection algorithm is intended to detect a real living person in a media.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
For API 4.0.8 and below, please note: the Liveness analysis works with videos and shots sets; images are ignored. If you want to analyze an image, upload it as a shots set (an archive) with a single image and mark it with the video_selfie_blank tag.
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for the response, add it to the payload at this step, as described here.
You'll need `analyse_id` or `folder_id` from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the `analyse_id` you have from the previous step.
GET /api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the `folder_id` you have from the previous step.
Repeat the check until the `resolution_status` and `resolution` fields change to any status other than PROCESSING, and treat this as the result.
For the Liveness analysis, look for the `confidence_spoofing` value related to the video you need. It indicates the chance that the person is not a real one.
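The polling loop from step 2 can be sketched as follows. The `fetch` callable stands in for your HTTP client's call to GET /api/analyses/{{analyse_id}}; the terminal status values are taken from the statuses section of this document.

```python
import time

def poll_analysis(fetch, interval=2.0, max_attempts=30):
    """Call fetch() (e.g., a wrapper around GET /api/analyses/{analyse_id})
    until resolution_status leaves PROCESSING, then return the response."""
    for _ in range(max_attempts):
        analysis = fetch()
        if analysis.get("resolution_status") != "PROCESSING":
            return analysis
        time.sleep(interval)
    raise TimeoutError("analysis did not finish within the polling budget")
```

With a webhook configured, you skip this loop entirely and act on the callback instead.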
The "Best shot" algorithm chooses the highest-quality, best-aligned frame with a face from a video recording. The algorithm works as a part of the Liveness analysis, so here we describe only the best shot part.
1. Initiate the analysis as you would for Liveness, but make sure that `extract_best_shot` is set to `True`.
If you want to use a webhook for the response, add it to the payload at this step, as described above.
2. Check and interpret the results in the same way as for the pure Liveness analysis.
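A hedged sketch of the analysis request body with best-shot extraction enabled; only the `extract_best_shot` parameter name comes from the text above, while the surrounding schema (a list of analysis objects with a `type` field) is an assumption.

```python
import json

# Assumed shape of the POST /api/folders/{{folder_id}}/analyses/ body;
# only extract_best_shot is taken from this documentation.
body = [{"type": "QUALITY", "extract_best_shot": True}]
print(json.dumps(body))
```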
How to compare a photo or video with ones from your database.
The blacklist check algorithm is designed to check the presence of a person using a database of preloaded photos. A video fragment and/or a photo can be used as a source for comparison.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
1. Initiate the analysis: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for the response, add it to the payload at this step, as described here.
You'll need `analyse_id` or `folder_id` from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the `analyse_id` you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the `folder_id` you have from the previous step.
Wait for the `resolution_status` and `resolution` fields to change to any status other than PROCESSING, and treat this as the result.
If you want to know which person from your collection matched the media you uploaded, find the collection analysis in the response, check `results_media`, and retrieve `person_id`. This is the ID of the person who matched the person in your media. To get information about this person, use GET /api/collections/{{collection_id}}/persons/{{person_id}} with the IDs of your collection and person.
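Extracting the matched `person_id` values from a folder's analyses can be sketched like this; the exact nesting of `results_media` and `results_data` assumed here is an illustration based on the field names above.

```python
def matched_person_ids(analyses):
    """Collect person_id values reported by the collection (blacklist)
    analysis. Assumed layout: each analysis has a type, a results_media
    list, and per-media results_data."""
    ids = []
    for analysis in analyses:
        if analysis.get("type") != "COLLECTION":
            continue
        for media in analysis.get("results_media", []):
            person_id = (media.get("results_data") or {}).get("person_id")
            if person_id:
                ids.append(person_id)
    return ids
```

Each returned ID can then be passed to GET /api/collections/{{collection_id}}/persons/{{person_id}}.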
This article describes how to create a collection via API, how to add persons and photos to this collection and how to delete them and the collection itself if you no longer need it. You can do the same in Web console, but this article covers API methods only.
A collection in Oz API is a database of facial photos that are compared with the face from the captured photo or video via the Blacklist analysis.
A person represents a human in the collection. You can upload several photos for a single person.
The collection should be created within a company, so you need your company's `company_id` as a prerequisite.
If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the `company_id` you've received.
Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and the `company_id` of your company:
In the response, you'll get your new collection's identifier: `collection_id`.
To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using the `collection_id` of the collection needed. In the request body, add one or several photos. Mark them with the appropriate tags in the payload:
The response will contain the `person_id`, which is the person's identifier within your collection.
If you want to add the person's name, include it in the request payload as metadata:
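A hedged sketch of such a payload: the `meta_data` key comes from the system objects section of this document, while the photo key, tag placement, and the name field are assumptions.

```python
import json

# Assumed payload for POST /api/collections/{{collection_id}}/persons/:
# one tagged photo plus the person's name stored as metadata.
payload = {
    "photo1": {"tags": ["photo_selfie"]},
    "meta_data": {"name": "Jane Doe"},  # hypothetical metadata content
}
print(json.dumps(payload, indent=2))
```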
To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ using the appropriate `person_id`. Fill in the request body as you did before with POST /api/collections/{{collection_id}}/persons/.
To obtain information on all the persons within a single collection, call GET /api/collections/{{collection_id}}/persons/.
To obtain a list of photos for a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain `person_image_id`. You'll need this ID, for instance, if you want to delete the photo.
To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity if it has any related analyses, meaning the Blacklist analysis used one of their photos for comparison and found a match. To delete such a person, first delete these analyses using DELETE /api/analyses/{{analyse_id}} with the `analyse_id` of the collection (Blacklist) analysis.
To delete all the collection-related analyses, get a list of folders where the Blacklist analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the `analyse_id` of the required analysis and delete the analysis: DELETE /api/analyses/{{analyse_id}}.
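The cleanup flow above can be sketched as a loop over folders. The three callables stand in for your HTTP client's calls to the endpoints named in the text; the `analyses` key in the folder detail is an assumption.

```python
def delete_collection_analyses(list_folders, get_folder, delete_analysis):
    """Delete every COLLECTION (blacklist) analysis.

    list_folders()      -> GET /api/folders/?analyse.type=COLLECTION
    get_folder(id)      -> GET /api/folders/{folder_id}/
    delete_analysis(id) -> DELETE /api/analyses/{analyse_id}
    """
    deleted = []
    for folder in list_folders():
        detail = get_folder(folder["folder_id"])
        for analysis in detail.get("analyses", []):
            if analysis.get("type") == "COLLECTION":
                delete_analysis(analysis["analyse_id"])
                deleted.append(analysis["analyse_id"])
    return deleted
```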
To delete a single photo of a person, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}}/images/{{person_image_id}} with the collection, person, and image identifiers specified.
Delete the information on all the persons from the collection as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.
The webhook feature simplifies getting analyses' results. Instead of polling after the analyses are launched, add a webhook that calls your endpoint once the results are ready.
When you create a folder, add the webhook endpoint (`resolution_endpoint`) to the payload section of your request body:
You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
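A minimal sketch of the folder-creation payload with a webhook endpoint: `resolution_endpoint` is the key named above, while the URL and the surrounding structure are placeholders.

```python
import json

# The webhook endpoint goes into the payload section of the
# folder-creation request body. The URL here is a placeholder.
payload = {"resolution_endpoint": "https://example.com/oz-results-hook"}
print(json.dumps(payload))
```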
The description of the objects you can find in Oz Forensics system.
System objects in Oz Forensics products are hierarchically structured as shown in the picture below.
At the top level, there is a Company. You can use one copy of Oz API to work with several companies.
The next level is a User. A company can contain any number of users. There are several user roles with different permissions. For more information, refer to User Roles.
When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders, and each folder can contain any number of media. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed on this page.
Besides these parameters, each object type has specific ones.
| Parameter | Type | Description |
|---|---|---|
| `time_created` | Timestamp | Object (except user and company) creation time |
| `time_updated` | Timestamp | Object (except user and company) update time |
| `meta_data` | JSON | Any user parameters |
| `technical_meta_data` | JSON | Module-required parameters; reserved for internal needs |
| Parameter | Type | Description |
|---|---|---|
| `company_id` | UUID | Company ID within the system |
| `name` | String | Company name within the system |
| Parameter | Type | Description |
|---|---|---|
| `user_id` | UUID | User ID within the system |
| `user_type` | String | |
| `first_name` | String | Name |
| `last_name` | String | Surname |
| `middle_name` | String | Middle name |
| `email` | String | User email = login |
| `password` | String | User password (only required for new users or to change) |
| `can_start_analyze_*` | String | Depends on user roles |
| `company_id` | UUID | Current user company's ID within the system |
| `is_admin` | Boolean | Whether this user is an admin or not |
| `is_service` | Boolean | Whether this user account is a service account or not |
| Parameter | Type | Description |
|---|---|---|
| `folder_id` | UUID | Folder ID within the system |
| `resolution_status` | ResolutionStatus | The status of the latest analysis |
| Parameter | Type | Description |
|---|---|---|
| `media_id` | UUID | Media ID |
| `original_name` | String | Original filename (as the file was named on the client machine) |
| `original_url` | Url | HTTP link to this file on the API server |
| `tags` | Array(String) | List of tags for this file |
| Parameter | Type | Description |
|---|---|---|
| `analyse_id` | UUID | ID of the analysis |
| `folder_id` | UUID | ID of the folder |
| `type` | String | Analysis type (BIOMETRY / QUALITY / DOCUMENTS) |
| `results_data` | JSON | Results of the analysis |
This article contains the full description of folders' and analyses' statuses in API.
The details on each status are below.
This is the state while the analysis is being processed. The possible values are:
`PROCESSING` – the analysis is in progress;
`FAILED` – the analysis failed due to some error and couldn't be finished;
`FINISHED` – the analysis is finished, and you can check the result.
Once the analysis is finished, you'll see one of the following results:
`SUCCESS` – everything went fine, the check succeeded (e.g., the faces match or liveness is confirmed);
`OPERATOR_REQUIRED` (except the Liveness analysis) – the result should be additionally checked by a human operator. The OPERATOR_REQUIRED status appears only if it is set up in the biometry settings;
`DECLINED` – the check failed (e.g., the faces don't match or a spoofing attack was detected).
If the analysis hasn't finished yet, the result inherits its value from `analyse.state`: PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't be finished).
A folder is an entity that contains media to analyze. While the analyses are unfinished, the processing stage is shown in `resolution_status`:
`INITIAL` – no analyses applied;
`PROCESSING` – analyses are in progress;
`FAILED` – one of the analyses failed due to some error and couldn't be finished;
`FINISHED` – the media in this folder are processed, and the analyses are finished.
The folder result is the consolidated result of all analyses applied to media in this folder. Please note: the folder result is the result of the last-finished group of analyses. If all analyses are finished, the result will be:
`SUCCESS` – everything went fine, all analyses completed successfully;
`OPERATOR_REQUIRED` (except the Liveness analysis) – there are no analyses with the DECLINED status, but one or more analyses have been completed with the OPERATOR_REQUIRED status;
`DECLINED` – one or more analyses have been completed with the DECLINED status.
The analyses you send in a single POST request form a group. The group result is the "worst" result of the analyses the group contains: INITIAL > PROCESSING > FAILED > DECLINED > OPERATOR_REQUIRED > SUCCESS, where SUCCESS means all analyses in the group have been completed successfully without any errors.
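The "worst result wins" rule can be expressed directly over the ordering given above:

```python
# Ordering from the text: earlier entries are "worse".
PRIORITY = ["INITIAL", "PROCESSING", "FAILED", "DECLINED",
            "OPERATOR_REQUIRED", "SUCCESS"]

def group_result(statuses):
    """Return the group result: the worst status among the group's analyses."""
    return min(statuses, key=PRIORITY.index)
```

For example, a group containing SUCCESS and OPERATOR_REQUIRED resolves to OPERATOR_REQUIRED.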
Each new API user obtains a role that defines access restrictions for direct API connections.
Every role is combined with the `is_admin` and `is_service` flags, which impose additional restrictions.
`is_service` is a flag that marks the user account as a service account for automatic connections. Authentication of such a user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, by default, a token's lifetime is extended with each request (parameterized).
`ADMIN` is a system administrator. This role has unlimited access to all system objects but can't change the analyses' statuses.
`CLIENT` is a regular consumer account. This role can upload media files, run analyses, view results in personal folders, and generate reports for analyses.
`is_admin` – if set, the user obtains access to other users' data within this admin's company.
`CLIENT ADMIN` is a company administrator who can manage their company account and the users within it. Additionally, CLIENT ADMIN can view and edit the data of all users within their company; delete files in folders; add or delete report templates (with or without attachments), the reports themselves, and single analyses; check statistics; and add new blacklist collections. The role is present in the Web UI only. Outside the Web UI, CLIENT ADMIN is replaced by the CLIENT role with the `is_admin` flag set to `true`.
`CLIENT OPERATOR` is similar to OPERATOR but limited to their company.
Here's the detailed information on access levels.
`OPERATOR` is a system operator. This role can view all system objects and choose the analysis result via the Make Decision button (usually needed if the status is OPERATOR_REQUIRED).
`can_start_analyse_biometry` – an additional flag to allow access to Biometry analyses (enabled by default);
`can_start_analyse_quality` – an additional flag to allow access to Liveness (QUALITY) analyses (enabled by default).

| Field name / status | analyse.state | analyse.resolution_status | folder.resolution_status | system_resolution |
|---|---|---|---|---|
| INITIAL | - | - | starting state | starting state |
| PROCESSING | starting state | starting state | analyses in progress | analyses in progress |
| FAILED | system error | system error | system error | system error |
| FINISHED | finished successfully | - | finished successfully | - |
| DECLINED | - | check failed | - | check failed |
| OPERATOR_REQUIRED | - | additional check is needed | - | additional check is needed |
| SUCCESS | - | check succeeded | - | check succeeded |
| Role | Create | Read | Update | Delete |
|---|---|---|---|---|
| ADMIN | + | + | + | + |
| OPERATOR | - | + | - | - |
| CLIENT | - | their company data | - | - |
| CLIENT SERVICE | - | their company data | - | - |
| CLIENT OPERATOR | - | their company data | - | - |
| CLIENT ADMIN | - | their company data | their company data | their company data |

| Role | Create | Read | Update | Delete |
|---|---|---|---|---|
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | their folders | their folders | their folders | - |
| CLIENT SERVICE | within their company | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
|---|---|---|---|---|
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | - | within their company | - | - |
| CLIENT SERVICE | - | within their company | - | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Delete |
|---|---|---|---|
| ADMIN | + | + | + |
| OPERATOR | + | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | - | within their company | - |
| CLIENT OPERATOR | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Delete |
|---|---|---|---|
| ADMIN | + | + | + |
| OPERATOR | + | + | - |
| CLIENT | in their folders | in their folders | - |
| CLIENT SERVICE | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
|---|---|---|---|---|
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | in their folders | in their folders | - | - |
| CLIENT SERVICE | within their company | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
|---|---|---|---|---|
| ADMIN | + | + | + | + |
| OPERATOR | - | + | - | - |
| CLIENT | - | within their company | - | - |
| CLIENT SERVICE | within their company | within their company | - | - |
| CLIENT OPERATOR | - | within their company | - | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |

| Role | Create | Read | Delete |
|---|---|---|---|
| ADMIN | + | + | + |
| OPERATOR | - | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | within their company | within their company | - |
| CLIENT OPERATOR | - | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Delete |
|---|---|---|---|
| ADMIN | + | + | + |
| OPERATOR | - | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | - | within their company | - |
| CLIENT OPERATOR | - | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |

| Role | Create | Read | Update | Delete |
|---|---|---|---|---|
| ADMIN | + | + | + | + |
| OPERATOR | - | + | their data | - |
| CLIENT | - | their data | their data | - |
| CLIENT SERVICE | - | within their company | their data | - |
| CLIENT OPERATOR | - | within their company | their data | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |
Here, you'll get acquainted with types of analyses that Oz API provides and will learn how to interpret the output.
Using Oz API, you can perform one of the following analyses:
The possible results of the analyses are explained here.
Each analysis has its own threshold that determines the output. By default, the threshold for Liveness is 0.5 (50%); for Blacklist and Biometry (Face Matching), it is 0.85 (85%).
Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.
Blacklist: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.
Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
To configure the threshold depending on your needs, please contact us.
For more information on how to read the numbers in analyses' results, please refer to Quantitative Results.
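The default thresholds can be applied as in this sketch; the analysis-type keys used here are illustrative, while the threshold values and their interpretation come from the text above.

```python
# Default thresholds from the text: 0.5 for Liveness (Quality),
# 0.85 for Biometry and Blacklist. Keys are illustrative.
DEFAULT_THRESHOLDS = {"QUALITY": 0.5, "BIOMETRY": 0.85, "COLLECTION": 0.85}

def interpret(analysis_type, score, thresholds=DEFAULT_THRESHOLDS):
    """For Quality, a score at or above the threshold means a suspected
    attack; for Biometry/Blacklist it means the faces match."""
    above = score >= thresholds[analysis_type]
    if analysis_type == "QUALITY":
        return "attack suspected" if above else "live person"
    return "match" if above else "no match"
```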
The Biometry algorithm compares several media and checks whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media (for details, please refer to Rules of Assigning Analyses).
After comparison, the algorithm provides a number that represents the similarity level. It ranges from 100% to 0% (1 to 0), where:
100% (1) – the faces are similar; the media represent the same person;
0% (0) – the faces are not similar and belong to different people.
The Liveness detection (Quality) algorithm checks whether the person in a media file is a real human acting in good faith, not a fake of any kind.
The Best Shot algorithm picks the best shot from a video (the best-quality frame where the face is seen most clearly). It is an addition to Liveness.
After the check, the analysis shows the chance of a spoofing attack* as a percentage:
100% (1) – an attack is detected; the person in the video is not a real living person;
0% (0) – the person in the video is a real living person.
*Spoofing in biometrics is a kind of scam in which a person impersonates someone else using software or physical tools such as deepfakes, masks, ready-made photos, or fake videos.
The Documents analysis aims to recognize the document and check if its fields are correct according to its type.
Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.
As an output, you'll get a list of document fields with recognition results for each field, along with a check result that can be:
The documents passed the check successfully,
The documents failed to pass the check.
Additionally, the result of Biometry check is displayed.
The Blacklist checking algorithm determines whether the person in a photo or video is present in a database of pre-uploaded images. This database can be used as a blacklist or a whitelist. In the former case, the person's face is compared with the faces of known swindlers; in the latter case, it might be a list of VIP clients.
After comparison, the algorithm provides a number that represents the similarity level. It ranges from 100% to 0% (1 to 0), where:
100% (1) – the person in the image or video matches someone in the blacklist,
0% (0) – the person is not found in the blacklist.
To work properly, the resolution algorithms need each uploaded media file to be marked with special tags. The tags differ for videos and images. They help the algorithms identify what should be in the photo or video and analyze the content.
The following tag types should be specified in the system for video files.
To identify the data type of the video:
`video_selfie`.
To identify the orientation of the video:
`orientation_portrait` – portrait orientation;
`orientation_landscape` – landscape orientation.
To identify the action in the video:
`video_selfie_left` – head turn to the left;
`video_selfie_right` – head turn to the right;
`video_selfie_down` – head tilt downwards;
`video_selfie_high` – head raise up;
`video_selfie_smile` – smile;
`video_selfie_eyes` – blink;
`video_selfie_scan` – scanning;
`video_selfie_oneshot` – a one-frame analysis;
`video_selfie_blank` – no action.
The tags listed allow the algorithms to recognize the files as suitable for the Quality (Liveness) and Biometry analyses.
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with the video_* tags. Otherwise, the algorithms will ignore it.
Example of the correct tag set for a video file with the “blink” action:
The following tag types should be specified in the system for photo files:
A tag for selfies:
`photo_selfie` – to identify the image type as "selfie".
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with the video_* tags. Otherwise, the algorithms will ignore it.
Example of the correct tag set for a “selfie” photo file:
Example of the correct tag set for a photo file with the face side of an ID card:
Example of the correct set of tags for a photo file of the back of an ID card:
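Illustrative tag sets for the examples above. The individual tag names appear in this document (the `photo_id_front`/`photo_id_back` tags are mentioned in the changelog below), but treat the exact combinations per media type as assumptions.

```python
import json

# Hypothetical tag sets; the individual tag names come from this document,
# but the exact combinations per media type are assumptions.
tag_sets = {
    "video_with_blink": ["video_selfie", "video_selfie_eyes", "orientation_portrait"],
    "selfie_photo": ["photo_selfie"],
    "id_front_photo": ["photo_id_front"],
    "id_back_photo": ["photo_id_back"],
}
print(json.dumps(tag_sets, indent=2))
```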
API changes
Face Identification 1:N is now live, significantly increasing the data processing capacity of the Oz API to find matches. Even huge face databases (containing millions of photos and more) are no longer an issue.
The Liveness (QUALITY) analysis now ignores photos tagged with `photo_id`, `photo_id_front`, or `photo_id_back`, preventing these photos from causing the tag-related analysis error.
You can now apply the Liveness (QUALITY) analysis to a single image.
Fixed the bug where the Liveness analysis could finish with the SUCCESS result when no media had been uploaded.
The default value of the `extract_best_shot` parameter is now `True`.
RAR archives are no longer supported.
By default, `analyses.results_media.results_data` now contains the `confidence_spoofing` parameter. However, if you need all three parameters for backward compatibility, the response can be changed back to three parameters: `confidence_replay`, `confidence_liveness`, and `confidence_spoofing`.
Updated the default PDF report template.
The name of the PDF report now contains the `folder_id`.
Enabled log autorotation.
Added the CLI command for user deletion.
You can now switch off the video preview generation.
The ADMIN access token is now valid for 5 years.
Added the folder identifier `folder_id` to the report name.
Fixed bugs and optimized API performance.
For the sliced video, the system now deletes the unnecessary frames.
Added new methods: GET and POST at media/<media_id>/snapshot/.
Replaced the default report template.
The shot set preview now keeps images’ aspect ratio.
ADMIN and OPERATOR now receive `system_company` as the company they belong to.
Added the `company_id` attribute to User, Folder, Analyse, and Media.
Added the `group_id` attribute to Analysis.
Added the `system_resolution` attribute to Folder and Analysis.
The analysis `resolution_status` now returns the `system_resolution` value.
Removed the PATCH method for collections.
Added the `resolution_status` filter to Folder Analyses [LIST] and the `analyse.resolution_status` filter to Folder [LIST].
Added the audit log for Folder, User, Company.
Improved the company deletion algorithm.
Reforged the blacklist processing logic.
Fixed a few bugs.
The Photo Expert and KYC modules are now removed.
The endpoint for the user password change is now POST users/user_id/change-password instead of PATCH.
Provided log for the Celery app.
Added filters to the Folder [LIST] request parameters: `analyse.time_created`; `analyse.results_data` for the Documents analysis; `results_data` for the Biometry analysis; `results_media_results_data` for the QUALITY analysis. To enable these filters, set the `with_results_media_filter` query parameter to `True`.
Added a new user attribute, `is_active` (default `True`). If `is_active == False`, any operation by that user is blocked.
Added a new exception code (1401 with status code 401) for the actions of the blocked users.
Added shots sets preview.
You can now save a shots set archive to disk (with the `original_local_path` and `original_url` attributes).
A new `original_info` attribute stores the md5, size, and MIME type of a shots set.
Fixed `ReportInfo` for shots sets.
Added health check at GET api/healthcheck.
Fixed the shots set thumbnail URL: the first frame of a shots set now becomes its thumbnail.
Modified the retry policy: the default maximum number of analysis attempts is increased to 3, and jitter configuration is introduced.
Changed the callback algorithm.
Refactored and documented the command line tools.
Refactored modules.
Changed the delete-personal-information endpoint from `delete_pi` to `/pi` and its method from POST to DELETE.
Improved the personal information deletion algorithm.
It is now forbidden to add media to cleaned folders.
Changed the authorize/restore endpoint name from `auth` to `auth_restore`.
Added a new tag: `video_selfie_oneshot`.
Added the password validation setting (`OZ_PASSWORD_POLICY`).
Added `auth`, `rest_unauthorized`, and `rps_with_token` throttling (use `OZ_THROTTLING_RATES` in the configuration; off by default).
User permissions are now used to access static files (`OZ_USE_PERMISSIONS_FOR_STATIC` in the configuration; false by default).
Added a new folder endpoint: `/delete_pi`. It clears all personal information from a folder and the analyses related to this folder.
Fixed a bug where no error was raised when trying to synchronize empty collections.
If persons are uploaded, the analyse collection TFSS request is sent.
Added the `fields_to_check` parameter to the document analysis (by default, all fields are checked).
Added the `double_page_spread` parameter to the document analysis (`True` by default).
Fixed collection synchronization.
An authorization token can now be refreshed by `expire_token`.
Added support for application/x-gzip.
Renamed shots_set.images to shots_set.frames.
Added user sessions API.
Users can now change a folder owner (limited by permissions).
Changed dependencies rules.
Changed the access_token prolongation policy to fix a bug where the token was prolonged before the expiration permission was checked.
Moved `oz_collection_binding` (the collection synchronization functionality) to oz_core.
Simplified the shots sets functionality. One archive keeps one shot set.
Improved the document sides recognition for the docker version.
Moved the orientation tag check to liveness at quality analysis.
Added a default report template for Admin and Operator.
Updated the biometric model.
A new ShotsSet object is not created if there are no photos for it.
Updated the data exchange format for the documents' recognition module.
You can’t delete a Collection if there are analyses associated with its Collection Persons.
Added time marks to analysis: time_task_send_to_broker, time_task_received, time_task_finished.
Added a new authorization engine. You can now connect with Active Directory by LDAP (settings configuration required).
A new type of media in Folders – "shots_set".
You can’t delete a CollectionPerson if there are analyses associated with it.
Renamed the folder field “resolution_suggest” to “operator_status”.
Added a folder text field “operator_comment”.
The folder fields “operator_status” and “operator_comment” can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.
Only Admin and Client Admin can delete folders, folder media, report templates, report template attachments, reports, and analyses (within their company).
Fixed a deletion error: when a report author is deleted, their reports get deleted as well.
Client can now view only their own profile.
Client Operator can now edit only their own profile.
Client can't delete own folders, media, reports, or analyses anymore.
Client Service can now create Collection Person and read reports within their company.
Client, Client Admin, and Client Operator have read access to user profiles only within their company.
A/B testing is now available.
Added support for expiration date header.
Added document recognition module Standalone/Dockered binding support.
Added a new role of Client Operator (like Client Admin without permissions for company and account management).
Client Admin and Client Operator can change the analysis status.
Only Admin and Client Admin (for their company) can create, update and delete operations for Collection and CollectionPerson models from now on.
Added a check for user permissions to report template when creating a folder report.
Collection creation now returns status code 201 instead of 200.
This article covers the default rules of applying analyses.
Analyses in Oz system can be applied in two ways:
manually, for instance, when you choose the Liveness scenario in our demo application;
automatically, when you don’t choose anything and just assign all possible analyses (via API or SDK).
The automatic assignment means that the Oz system decides itself what analyses to apply to media files based on their tags and type. If you upload files via the web console, you select the tags needed; if you take a photo or video via Web SDK, the SDK picks the tags automatically. As for the media type, it can be IMAGE (a photo), VIDEO, or SHOTS_SET, where SHOTS_SET is a .zip archive equivalent to a video.
Below, you will find the tag and type requirements for all analyses. If a media file doesn’t match the requirements for a certain analysis, it is ignored by the algorithms.
The rules listed below act by default. To change the mapping configuration, please contact us.
This analysis is applied to all media, regardless of the gesture recorded (gesture tags begin with video_selfie).
Important: to process a photo in API 4.0.8 and below, pack it into a .zip archive, apply the SHOTS_SET type, and mark it with a video_* tag. Otherwise, it will be ignored.
This analysis is applied to all media.
If the folder contains fewer than two matching media files, the system will return an error. If there are more than two files, all pairs are compared, and the system returns the result for the pair with the least similar faces.
This analysis works only when you have a pre-made image database, which is called the blacklist. The analysis is applied to all media in the folder (or the ones marked as source media).
Best Shot is an addition to the Quality (Liveness) analysis. It requires the appropriate option enabled. The analysis is applied to all media files that can be processed by the Quality analysis.
The Documents analysis is applied to images with the tags photo_id_front and photo_id_back (documents) and photo_selfie (selfie). The result will be positive if the system finds the selfie photo and matches it with a photo on one of the valid documents from the following list:
personal ID card
driver license
foreign passport
Metadata is available for most Oz system objects. Here is the list of these objects with the API methods required to add metadata. Please note: you can also add metadata to these objects during their creation.
You may want to use metadata to group folders by a person or lead. For example, if you want to calculate conversion when a single lead makes several Liveness attempts, just add the person/lead identifier to the folder metadata.
Here is how to add the client ID iin to a folder object.
In the request body, add:
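A minimal sketch of such a request body, using the meta_data section described below (the field name iin comes from this example; the value shown is illustrative):

```json
{
  "meta_data": {
    "iin": "123456789012"
  }
}
```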
You can pass an ID of a person in this field, and use this ID to combine requests with the same person and count unique persons (same ID = same person, different IDs = different persons). This ID can be a phone number, an IIN, an SSN, or any other kind of unique ID. The ID will be displayed in the report as an additional column.
If you store PII in metadata, make sure it complies with the relevant regulatory requirements.
You can also add metadata via SDK to process the information later using API methods. Please refer to the corresponding SDK sections:
Response codes 2XX indicate a successfully processed request (e.g., code 200 for retrieving data, code 201 for adding a new entity, code 204 for deletion, etc.).
Response codes 4XX indicate that a request could not be processed correctly because of some client-side data issues (e.g., 404 when addressing a non-existing resource).
Response codes 5XX indicate that an internal server-side error occurred during the request processing (e.g., when database is temporarily unavailable).
Each error response includes an HTTP code and JSON data with an error description. The JSON has the following structure:
error_code – an integer error code;
error_message – a text error description;
details – additional error details (the format is specific to each case); can be empty.
Sample error response:
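A sketch of what an error response might look like, based on the structure above (the values are illustrative):

```json
{
  "error_code": 4,
  "error_message": "INVALID VALUE",
  "details": {}
}
```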
Error codes:
0 – UNKNOWN
Unknown server error.
1 - NOT ALLOWED
An unallowed method is called; usually accompanied by the 405 HTTP response status. For example, requesting the PATCH method when only GET/POST are supported.
2 - NOT REALIZED
The method is documented but not implemented, for a temporary or permanent reason.
3 - INVALID STRUCTURE
Incorrect structure of request. Some required fields missing or a format validation error occurred.
4 - INVALID VALUE
Incorrect value of the parameter inside request body or query.
5 - INVALID TYPE
The invalid data type of the request parameter.
6 - AUTH NOT PROVIDED
Access token not specified.
7 - AUTH INVALID
The access token does not exist in the database.
8 - AUTH EXPIRED
Auth token is expired.
9 - AUTH FORBIDDEN
Access denied for the current user.
10 - NOT EXIST
The requested resource is not found (equivalent to HTTP status_code = 404).
11 - EXTERNAL SERVICE
Error in the external information system.
12 – DATABASE
Critical database error on the server host.
5.0
Oz API 5.1.0 works with the same collection.
Launch the client and import Oz API collection for Postman by clicking the Import button:
Click files, locate the JSON needed, and hit Open to add it:
The collection will be imported and will appear in the Postman interface:
Metadata is any optional data you might need to add to a system object. In the meta_data section, you can include any information you want, simply by providing any number of fields with their values:
You can also change or delete metadata. Please refer to our API documentation.
Another case is security: when you need to process the analyses’ results from your back end but don’t want to do it using the folder ID. Add an ID (transaction_id) to this folder and use this ID to search for the required information. This case is thoroughly explained here.
Download and install the Postman client from this page. Then download the JSON file needed:
Code | Message | Description |
202 | Could not locate face on source media [media_id] | No face is found in the media being processed, or the source media has a wrong (e.g., photo_id_back) and/or missing tag. |
202 | Biometry. Analyse requires at least 2 media objects to process | The algorithms did not find two appropriate media for the analysis. This might happen when only a single media file has been sent, or a media file is missing a tag. |
202 | Processing error - did not found any document candidates on image | The Documents analysis can't be finished because the photo uploaded doesn't seem to be a document, or it has wrong (not photo_id_*) and/or missing tags. |
5 | Invalid/missed tag values to process quality check | The tags applied can't be processed by the Quality algorithm (most likely, the tags begin with photo_*; for Quality, they should be marked as video_*). |
5 | Invalid/missed tag values to process blacklist check | The tags applied can't be processed by the Blacklist algorithm. This might happen when a media file is missing a tag. |
Object | API Method |
User | |
Folder | |
Media | |
Analysis | |
Collection | and, for a person in a collection, |
Download and install the Postman client from this page. Then download the JSON file needed:
Launch the client and import Oz API Lite collection for Postman by clicking the Import button:
Click files, locate the JSON needed, and hit Open to add it:
The collection will be imported and will appear in the Postman interface:
Oz Mobile SDK typically works in the server-based mode, where the Liveness and Biometry analyses are performed on a server, as shown in the scheme below:
But there's also an option to perform checks without server calls or even an Internet connection: the on-device analyses.
The on-device analyses are faster and more secure, as all data is processed directly on the device and nothing is sent anywhere. In this case, you don’t need a server or an API connection at all.
However, the API connection might still be needed for some additional functions such as telemetry or server-side SDK configuration.
The on-device analysis mode is useful when:
you do not collect, store or process personal data;
you need to identify a person quickly regardless of network conditions such as a distant region, inside a building, underground, etc.;
you’re on a tight budget as you can save money on the hardware part.
To launch the on-device check, set the appropriate mode for the Android or iOS SDK.
Android:
iOS:
For details, please refer to the Checking Liveness and Face Biometry sections for iOS and Android.
Oz Mobile SDK stands for the Software Development Kit of the Oz Forensics Liveness and Face Biometric System, providing seamless integration with customers’ mobile apps for login and biometric identification.
Currently, both the Android and iOS SDKs work in portrait mode.
Master license is an offline license that allows using the Mobile SDKs with any bundle_id, unlike regular licenses. To get a master license, create a pair of keys as shown below. Email us the public key, and we will email you the master license shortly after that.
Your application needs to sign its bundle_id with the private key, and the Mobile SDK checks the signature using the public key from the master license. Master licenses are time-limited.
This section describes the process of creating your private and public keys.
To create a private key, run the commands below one by one.
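The exact commands are not reproduced here; a plausible OpenSSL sequence that produces the two files described below (the RSA algorithm and 2048-bit key size are assumptions) might look like this:

```shell
# Generate an RSA private key in binary DER format (algorithm and size assumed)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -outform DER -out privateKey.der

# Encode the DER key as a single-line base64 string; these contents are later
# used to sign the host app bundle_id
openssl base64 -A -in privateKey.der -out privateKey.txt
```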
You will get these files:
privateKey.der is a private key in the .der format;
privateKey.txt is privateKey.der encoded in base64. Its contents will be used as the host app bundle_id signature.
File examples:
To create a public key, run this command.
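The command itself is not reproduced here; assuming the private key was generated in DER format as above, a plausible OpenSSL command to derive the public key file is:

```shell
# Derive the public key (publicKey.pub) from the DER-encoded private key
openssl rsa -inform DER -in privateKey.der -pubout -out publicKey.pub
```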
You will get the public key file: publicKey.pub. To get a license, please email us this file. We will email you the license.
File example:
SDK initialization:
For Android 6.0 (API level 23) and older:
Add the implementation 'com.madgag.spongycastle:prov:1.58.0.0' dependency;
Before creating a signature, call Security.insertProviderAt(org.spongycastle.jce.provider.BouncyCastleProvider(), 1)
Prior to initializing the SDK, create a base64-encoded signature for the host app bundle_id using the private key.
Signature creation example:
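The original example is not reproduced here. A rough self-contained JVM sketch of the signing step follows; the digest algorithm (SHA256withRSA), the class name, and the bundle_id are assumptions, and a real app would load the key from privateKey.txt instead of generating one:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;

public class MasterLicenseSignatureSketch {
    // Signs the bundle_id and returns the base64-encoded signature string.
    public static String sign(PrivateKey privateKey, String bundleId) throws Exception {
        Signature signer = Signature.getInstance("SHA256withRSA"); // assumed algorithm
        signer.initSign(privateKey);
        signer.update(bundleId.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(signer.sign());
    }

    public static void main(String[] args) throws Exception {
        // Stand-in key pair; a real app would instead decode the base64 key
        // from privateKey.txt via PKCS8EncodedKeySpec and KeyFactory.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(2048);
        PrivateKey privateKey = kpg.generateKeyPair().getPrivate();

        // Hypothetical host app bundle_id; the result would be passed as the
        // masterLicenseSignature parameter during SDK initialization.
        String masterLicenseSignature = sign(privateKey, "com.example.app");
        System.out.println(masterLicenseSignature);
    }
}
```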
Pass the signature as the masterLicenseSignature parameter during the SDK initialization.
If the signature is invalid, the initialization continues as usual: the SDK checks the list of bundle_ids included in the license, as it does by default without a master license.
The OpenSSL command specification:
If you want to get back to the previous (up to 6.4.2) versions' design, reset the customization settings of the capture screen and apply the parameters that are listed below.
If you use our SDK just for capturing videos, omit this step.
To check liveness and face biometry, you need to upload media to our system and then analyze them.
Here’s an example of performing a check:
To delete media files after the checks are finished, use the clearActionVideos method.
To add metadata to a folder, use the addFolderMeta method.
In the params field of the Analysis structure, you can pass any additional parameters (key + value), for instance, to extract the best shot on the server side.
If you want to add your media to an existing folder, use the setFolderId method:
To interpret the results of analyses, please refer to .
To use a media file that is captured with another SDK (not Oz Android SDK), specify the path to it in :
To customize the Oz Liveness interface, use UICustomization as shown below. For the description of the customization parameters, please refer to .
To start using Oz iOS SDK, follow the steps below.
Embed Oz iOS SDK into your project as described here.
Connect SDK to API as described here. This step is optional, as this connection is required only when you need to process data on a server. If you use the on-device mode, the data is not transferred anywhere, and no connection is needed.
Capture videos by creating the controller as described here. You'll send them for analysis afterwards.
Upload and analyze media you've taken at the previous step. The process of checking liveness and face biometry is described here.
If you want to customize the look-and-feel of Oz iOS SDK, please refer to this section.
Recommended iOS version: 12 and higher.
Recommended Xcode version: 12.5 and higher.
Available languages: EN, ES, HY, KK, KY, TR, PT-BR.
A sample app source code using the Oz Liveness SDK is located in the GitLab repository:
Follow the link below to see a list of SDK methods and properties:
Download the demo app latest build here.
Android SDK changes
Added a description for the error that occurs when providing an empty string as an ID in the setFolderID method.
Fixed a bug causing an endless spinner to appear if the user switches to another application during the Liveness check.
Fixed some smartphone-model-specific bugs.
Upgraded the on-device Liveness model.
Security updates.
The length of the Selfie gesture is now configurable (affects the video file size).
You can set your own logo instead of Oz logo if your license allows it.
Removed the pause after the Scan gesture.
If the recorded video is larger than 10 MB, it gets compressed.
Security and logging updates.
Changed the master license validation algorithm.
Downgraded the required compileSdkVersion from 34 to 33.
Security updates.
Updated the on-device Liveness model.
Fixed some bugs.
Internal licensing improvements.
Internal SDK improvements.
Bug fixes.
Implemented the possibility of using a master license that works with any bundle_id.
Video compression failure on some phone models is now fixed.
Bug fixes.
The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is sent to the server in case of the hybrid analysis once the on-device analysis has finished successfully.
The messages for the errors that are retrieved from API are now detailed.
If multiple analyses are applied to the folder simultaneously, the system sends them as a group, which means the “worst” of the results is taken as the resolution, not the latest one. Please refer to this article for details.
For the Liveness analysis, the system now treats the highest score as a quantitative result. The Liveness analysis output is described here.
Updated the Liveness on-device model.
Added the Portuguese (Brazilian) locale.
You can now add a custom or update an existing language pack. The instructions can be found here.
If a media hasn't been uploaded correctly, the system repeats the upload.
Created a new method to retrieve the telemetry (logging) identifier: getEventSessionId.
The login and auth methods are now deprecated. Use the setAPIConnection method instead.
OzConfig.baseURL and OzConfig.permanentAccessToken are now deprecated.
If a user closes the screen during video capture, the appropriate error is now being handled by SDK.
Fixed some bugs and improved the SDK work.
Fixed errors.
The SDK now works properly with baseURL set to null.
The dependencies' versions have been brought into line with Kotlin version.
Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Kotlin version requirements lowered to 1.7.21.
Improved the on-device models.
For some phone models, fixed the fatal device error.
The hint text width can now exceed the frame width (when using the main camera).
Photos taken during the One Shot analysis are now being sent to the server in the original size.
Removed the OzAnalysisResult class. The onSuccess method of AnalysisRequest.run now uses the RequestResult structure instead of List<OzAnalysisResult>.
All exceptions are moved to the com.ozforensics.liveness.sdk.core.exceptions package (see changes below).
Classes related to AnalysisRequest are moved to the com.ozforensics.liveness.sdk.analysis package (see changes below).
The methods below are no longer supported:
Restructured the settings screen.
Added the center hint background customization.
Added new face frame forms (Circle, Square).
Added the antiscam widget and its customization. This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes. The purpose of this is to safeguard against scammers who may attempt to deceive an individual into approving a fraudulent transaction.
The OzLivenessSDK::init method no longer crashes if a StatusListener parameter is passed.
Changed the scan gesture animation.
Please note: for this version, we updated Kotlin to 1.8.20.
Improved the SDK algorithms.
Updated the model for the on-device analyses.
Fixed the animation for sunglasses/mask.
The oval size for Liveness is now smaller.
Fixed the error with the server-based analyses when using permanentAccessToken for authorization.
Added customization for the hint animation.
You can now hide the status bar and system buttons (works with 7.0.0 and higher).
OzLivenessSDK.init now requires context as the first parameter.
OzAnalysisResult now shows the server-based analyses' scores properly.
Fixed initialization issues, displaying of wrong customization settings, authorization failures on Android <7.1.1.
Fixed crashes for Android <6.
Fixed oval positioning for some phone models.
Internal fixes and improvements.
Updated security.
Implemented some internal improvements.
The addMedia method is now deprecated; please use uploadMedia for uploading.
Changed the way of sharing dependencies. Due to security issues, we now share two types of libraries as shown below: sdk provides the server analyses only, while full provides both server and on-device analyses:
UICustomization has been implemented instead of OzCustomization.
Implemented a range of UI customization options and switched to the new design. To restore the previous settings, please refer to this article.
Added the Spanish locale.
Fixed the bug with freezes that had appeared on some phone models.
SDK now captures videos in 720p.
Synchronized the names of the analysis modes with iOS: SERVER_BASED and ON_DEVICE.
Fixed the bug with displaying of localization settings.
Now you can use Fragment as Liveness screen.
Added a new field to the Analysis structure. The params field is for any additional parameters, for instance, if you need to set extracting the best shot on the server to true. The best shot algorithm chooses the highest-quality frame from a video.
The Zoom in and Zoom out gestures are no longer supported.
Updated the biometry model.
Added a new simplified API – AnalysisRequest. With it, it’s easier to create a request for the media and analysis you need.
Published the on-device module for on-device liveness and biometry analyses. To add this module to your project, use:
To launch these analyses, use the runOnDeviceBiometryAnalysis and runOnDeviceLivenessAnalysis methods from the OzLivenessSDK class:
Liveness goes smoother.
Fixed freezes on Xiaomi devices.
Optimized image converting.
New metadata parameter for OzLivenessSDK.uploadMedia and a new OzLivenessSDK.uploadMediaAndAnalyze method to pass this parameter to folders.
Added functions for SDK initialization with license sources: LicenseSource.LicenseAssetId and LicenseSource.LicenseFilePath. Use the OzLivenessSDK.init method to start the initialization.
Now you can get the license info upon initialization: val licensePayload = OzLivenessSDK.getLicensePayload().
Added the Kyrgyz locale.
Added local analysis functions.
You can now configure the face frame.
Fixed version number at the Liveness screen.
Added the main camera support.
Added configuration from license support.
Added the OneShot gesture.
Added new states for OzAnalysisResult.Resolution.
Added the uploadMediaAndAnalyze method to load a batch of media to the server at once and send them for analysis immediately.
OzMedia is renamed to OzAbstractMedia and got subclasses for images and videos.
Fixed camera bugs for some devices.
Access token updates automatically.
Renamed accessToken to permanentAccessToken.
Added R8 rules.
Configuration became easier: config settings are mutable.
Fixed the oval frame.
Removed the unusable parameters from AnalyseRequest.
Removed default attempt limits.
To customize the configuration options, the config property is added instead of baseURL, accessToken, etc. Use OzConfig.Builder for initialization.
Added license support. Licenses should be installed as raw resources. To pass them to OzConfig, use setLicenseResourceId.
Replaced the context-dependent methods with analogs.
Improved the image analysis.
Removed unusable dependencies.
Fixed logging.
A singleton for Oz SDK.
Deletes all action videos from file system.
Parameters
-
Returns
-
Creates an intent to start the Liveness activity.
Returns
-
Utility function to get the SDK error from OnActivityResult's intent.
Returns
Retrieves the SDK license payload.
Parameters
-
Returns
Utility function to get SDK results from OnActivityResult's intent.
Returns
A list of OzAbstractMedia objects.
Initializes SDK with license sources.
Returns
-
Enables logging using the Oz Liveness SDK logging mechanism.
Returns
-
Connection to API.
Connection to the telemetry server.
Deletes the saved token.
Parameters
-
Returns
-
Retrieves the telemetry session ID.
Parameters
-
Returns
The telemetry session ID (String parameter).
Retrieves the SDK version.
Parameters
-
Returns
The SDK version (String parameter).
A class for performing checks.
The analysis launching method.
A builder class for AnalysisRequest.
Creates the AnalysisRequest instance.
Parameters
-
Returns
Adds an analysis to your request.
Returns
Error if any.
Adds a list of analyses to your request. Allows executing several analyses for the same folder on the server side.
Returns
Error if any.
Adds metadata to a folder you create (for the server-based analyses only). You can add a pair key-value as additional information to the folder with the analysis result on the server side.
Returns
Error if any.
Uploads one or more media to a folder.
Returns
Error if any.
For the previously created folder, sets a folderId. The folder should exist on the server side. Otherwise, a new folder will be created.
Returns
Error if any.
Configuration for OzLivenessSDK (use OzLivenessSDK.config).
Sets the length of the Selfie gesture (in milliseconds).
Returns
Error if any.
Enables showing additional debug info by clicking on the version text.
The number of attempts before SDK returns error.
Settings for repeated media upload.
Timeout for face alignment (measured in milliseconds).
Interface implementation to retrieve error by Liveness detection.
Locale to display string resources.
Logging settings.
Uses the main (rear) camera instead of the front camera for liveness detection.
Customization for OzLivenessSDK (use OzLivenessSDK.config.customization).
Hides the status bar and the three buttons at the bottom. The default value is True.
A set of customization parameters for the toolbar.
A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.
A set of customization parameters for the hint animation.
A set of customization parameters for the frame around the user face.
A set of customization parameters for the background outside the frame.
A set of customization parameters for the SDK version text.
A set of customization parameters for the antiscam message that warns user about their actions being recorded.
Logo customization parameters. Custom logo should be allowed by license.
Contains the action from the captured video.
Contains the extended info about licensing conditions.
A class for the captured media that can be:
A document photo.
A set of shots in an archive.
A Liveness video.
Contains an action from the captured video.
A class for license that can be:
Contains the license ID.
Contains the path to a license.
A class for analysis status that can be:
This status means the analysis is launched.
This status means the media is being uploaded.
The type of the analysis.
Currently, the DOCUMENTS analysis can't be performed in the on-device mode.
The mode of the analysis.
Contains information on what media to analyze and what analyses to apply.
The general status for all analyses applied to the folder created.
Holder for attempts counts before SDK returns error.
Contains logging settings.
A class for color that can be (depending on the value received):
Frame shape settings.
Exception class for AnalysisRequest.
Structure that describes media used in AnalysisRequest.
Structure that describes the analysis result for the single media.
Consolidated result for all analyses performed.
Result of the analysis for all media it was applied to.
Defines the authentication method.
Authentication via token.
Authentication via credentials.
Defines the settings for the repeated media upload.
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
The – String.
The license payload () – the object that contains the extended info about licensing conditions.
The class instance.
Contains the locale code according to .
Removed method
Replacement
OzLivenessSDK.uploadMediaAndAnalyze
AnalysisRequest.run
OzLivenessSDK.uploadMedia
AnalysisRequest.Builder.uploadMedia
OzLivenessSDK.runOnDeviceBiometryAnalysis
AnalysisRequest.run
OzLivenessSDK.runOnDeviceLivenessAnalysis
AnalysisRequest.run
AnalysisRequest.build(): AnalysisRequest
-
AnalysisRequest.Builder.addMedia
AnalysisRequest.Builder.uploadMedia
Parameter | Type | Description |
data | Intent | The object to test |
Parameter | Type | Description |
data | Intent | The object to test |
Parameter | Type | Description |
tag | String | Message tag |
log | String | Message log |
Parameter | Type | Description |
key | String | Key for metadata. |
value | String | Value for metadata. |
Parameter | Type | Description |
folderID | String | A folder identifier. |
Parameter | Type | Description |
selfieLength | Int | The length of the Selfie gesture (in milliseconds). Should be within 500-5000 ms, the default length is 700 |
Parameter | Type | Description |
allowDebugVisualization | Boolean | Enables or disables the debug info. |
Parameter | Type | Description |
faceAlignmentTimeout | Long | A timeout value |
Parameter | Type | Description |
livenessErrorCallback | ErrorHandler | A callback value |
Parameter | Type | Description |
useMainCamera | Boolean | Switches cameras |
Parameter | Type | Description |
image | Bitmap (@DrawableRes) | Logo image |
size | Size | Logo size (in dp) |
Case | Description |
OneShot | The best shot from the video taken |
Blank | A selfie with face alignment check |
Scan | Scan |
HeadRight | Head turned right |
HeadLeft | Head turned left |
HeadDown | Head tilted downwards |
HeadUp | Head lifted up |
EyeBlink | Blink |
Smile | Smile |
Parameter | Type | Description |
expires | Float | The expiration interval |
features | Features | License features |
appIDS | [String] | An array of bundle IDs |
Case | Description |
Blank | A video with no gesture |
PhotoSelfie | A selfie photo |
VideoSelfieOneShot | A video with the best shot taken |
VideoSelfieScan | A video with the scanning gesture |
VideoSelfieEyes | A video with the blink gesture |
VideoSelfieSmile | A video with the smile gesture |
VideoSelfieHigh | A video with the lifting head up gesture |
VideoSelfieDown | A video with the tilting head downwards gesture |
VideoSelfieRight | A video with the turning head right gesture |
VideoSelfieLeft | A video with the turning head left gesture |
PhotoIdPortrait | A photo from a document |
PhotoIdBack | A photo of the back side of the document |
PhotoIdFront | A photo of the front side of the document |
Parameter | Type | Description |
id | Int | License ID |
Parameter | Type | Description |
path | String | An absolute path to a license |
Case | Description |
BIOMETRY | The algorithm that allows comparing several media and check if the people on them are the same person or not |
QUALITY | The algorithm that aims to check whether a person in a video is a real human acting in good faith, not a fake of any kind. |
DOCUMENTS | The analysis that aims to recognize the document and check if its fields are correct according to its type. |
Case | Description |
ON_DEVICE | The on-device analysis with no server needed |
SERVER_BASED | The server-based analysis |
HYBRID | The hybrid analysis for Liveness: if the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check. |
Case | Description |
FAILED | One or more analyses failed due to some error and couldn't get finished |
DECLINED | The check failed (e.g., faces don't match or some spoofing attack detected) |
SUCCESS | Everything went fine, the check succeeded (e.g., faces match or liveness confirmed) |
OPERATOR_REQUIRED | The result should be additionally checked by a human operator |
Parameter | Type | Description |
singleCount | Int | Attempts on a single action/gesture |
commonCount | Int | Total number of attempts on all actions/gestures if you use a sequence of them |
Case | Description |
EN | English |
HY | Armenian |
KK | Kazakh |
KY | Kyrgyz |
TR | Turkish |
ES | Spanish |
PT-BR | Portuguese (Brazilian) |
Parameter | Type | Description |
allowDefaultLogging | Boolean | Allows logging to LogCat |
allowFileLogging | Boolean | Allows logging to an internal file |
journalObserver | StatusListener | An event listener to receive journal events on the application side |
Parameter | Type | Description |
resId | Int | Link to the color in the Android resource system |
Parameter | Type | Description |
hex | String | Color hex (e.g., #FFFFFF) |
Parameter | Type | Description |
color | Int | The Int value of a color in Android |
Case | Description |
Oval | Oval frame |
Rectangle | Rectangular frame |
Circle | Circular frame |
Square | Square frame |
Parameter | Type | Description |
apiErrorCode | Int | Error code |
message | String | Error message |
Parameter | Type | Description |
host | String | API address |
token | String | Access token |
Parameter | Type | Description |
host | String | API address |
username | String | User name |
password | String | Password |
Parameter | Type | Description |
attemptsCount | Int | Number of attempts for media upload |
attemptsTimeout | Int | Timeout between attempts |
Case | Description |
UPLOAD_ORIGINAL | The original video |
UPLOAD_COMPRESSED | The compressed video |
UPLOAD_BEST_SHOT | The best shot taken from the video |
UPLOAD_NOTHING | Nothing is sent (note that no folder will be created) |
Create a controller that will capture videos as follows:
action – a list of the user's actions while capturing the video.
Once the video is captured, the system calls the onOZLivenessResult method:
The method returns the results of video capturing: the OZMedia objects. The system uses these objects to perform checks.
If you use our SDK just for capturing videos, omit the Checking Liveness and Face Biometry step.
If a user closes the capturing screen manually, the failedBecauseUserCancelled error appears.
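The capture flow above can be sketched as follows. This is pseudocode in Kotlin syntax: the OzLivenessResultHandler interface name, the startVideoCapture entry point, and the gesture values are assumptions for illustration; only onOZLivenessResult and OZMedia come from the text.

```kotlin
// Pseudocode sketch: OzLivenessResultHandler, startVideoCapture, and the
// OzAction values are illustrative assumptions, not the exact SDK API.
class LivenessHost : OzLivenessResultHandler {

    fun startCapture() {
        val actions = listOf(OzAction.Smile, OzAction.Scan) // action – the user's gestures
        OzLivenessSDK.startVideoCapture(this, actions)
    }

    override fun onOZLivenessResult(media: List<OZMedia>) {
        // media – the captured OZMedia objects that the later checks consume;
        // if the user closes the screen manually, the SDK reports
        // the failedBecauseUserCancelled error instead
    }
}
```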
Parameter | Type | Description |
actions | A list of possible actions |
Parameter | Type | Description |
context | Context | The Context class |
licenseSources | A list of license references |
statusListener | StatusListener | Optional listener to check the license load result |
Parameter | Type | Description |
connection | Connection type |
statusListener | StatusListener<String?> | Listener |
Parameter | Type | Description |
connection | Connection type |
statusListener | StatusListener<String?> | Listener |
Parameter | Type | Description |
onStatusChange | A callback function as follows: | The function is executed when the status of the AnalysisRequest changes. |
onError | A callback function as follows:
| The function is executed in case of errors. |
onSuccess | A callback function as follows:
| The function is executed when all the analyses are completed. |
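Wiring the three callbacks above together might look like this (pseudocode in Kotlin syntax; the run signature and the helper functions are illustrative, only the callback names come from the table):

```kotlin
// Pseudocode sketch: run's exact signature and the helpers
// (showProgress, showError, handleResult) are illustrative.
analysisRequest.run(
    onStatusChange = { status -> showProgress(status) }, // AnalysisRequest status changed
    onError = { error -> showError(error.message) },     // an error occurred
    onSuccess = { result -> handleResult(result) }       // all analyses completed
)
```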
Parameter | Type | Description |
analysis | A structure for analysis |
Parameter | Type | Description |
analysis | A list of Analysis structures |
Parameter | Type | Description |
mediaList | An OzAbstractMedia object or a list of objects. |
Parameter | Type | Description |
attemptsSettings | Sets the number of attempts |
Parameter | Type | Description |
uploadMediaSettings | Sets the number of attempts and timeout between them |
Parameter | Type | Description |
localizationCode | A locale code |
Parameter | Type | Description |
logging | Logging settings |
Parameter | Type | Description |
closeIconRes | Int (@DrawableRes) | An image for the close button |
closeIconTint | Close button color |
titleTextFont | Int (@FontRes) | Toolbar title text font |
titleTextFontStyle | Int (values from android.graphics.Typeface properties, e.g., Typeface.BOLD) | Toolbar title text font style |
titleTextSize | Int | Toolbar title text size (in sp, 12-18) |
titleTextAlpha | Int | Toolbar title text opacity (in %, 0-100) |
titleTextColor | Toolbar title text color |
backgroundColor | Toolbar background color |
backgroundAlpha | Int | Toolbar background opacity (in %, 0-100) |
isTitleCentered | Boolean | Defines whether the text on the toolbar is centered or not |
title | String | Text on the toolbar |
Parameter | Type | Description |
textFont | String | Center hint text font |
textStyle | Int (values from android.graphics.Typeface properties, e.g.,Typeface.BOLD) | Center hint text style |
textSize | Int | Center hint text size (in sp, 12-34) |
textColor | Center hint text color |
textAlpha | Int | Center hint text opacity (in %, 0-100) |
verticalPosition | Int | Center hint vertical position from the screen bottom (in %, 0-100) |
backgroundColor | Center hint background color |
backgroundOpacity | Int | Center hint background opacity |
backgroundCornerRadius | Int | Center hint background frame corner radius (in dp, 0-20) |
Parameter | Type | Description |
hintGradientColor | Gradient color |
hintGradientOpacity | Int | Gradient opacity |
animationIconSize | Int | A side size of the animation icon square |
hideAnimation | Boolean | A switcher for the hint animation: when true, the animation is hidden |
Parameter | Type | Description |
geometryType | The frame type: oval, rectangle, circle, square |
cornerRadius | Int | Rectangle corner radius (in dp, 0-20) |
strokeDefaultColor | Frame color when a face is not aligned properly |
strokeFaceInFrameColor | Frame color when a face is aligned properly |
strokeAlpha | Int | Frame opacity (in %, 0-100) |
strokeWidth | Int | Frame stroke width (in dp, 0-20) |
strokePadding | Int | A padding from the stroke to the face alignment area (in dp, 0-10) |
Parameter | Type | Description |
backgroundColor | Background color |
backgroundAlpha | Int | Background opacity (in %, 0-100) |
Parameter | Type | Description |
textFont | Int (@FontRes) | SDK version text font |
textSize | Int | SDK version text size (in sp, 12-16) |
textColor | SDK version text color |
textAlpha | Int | SDK version text opacity (in %, 20-100) |
Parameter | Type | Description |
textMessage | String | Antiscam message text |
textFont | String | Antiscam message text font |
textSize | Int | Antiscam message text size (in px, 12-18) |
textColor | Antiscam message text color |
textAlpha | Int | Antiscam message text opacity (in %, 0-100) |
backgroundColor | Antiscam message background color |
backgroundOpacity | Int | Antiscam message background opacity |
cornerRadius | Int | Background frame corner radius (in px, 0-20) |
flashColor | Color of the flashing indicator close to the antiscam message |
Parameter | Type | Description |
tag | A tag for a document photo. |
photoPath | String | An absolute path to a photo. |
additionalTags (optional) | String | Additional tags if needed (including those not from the OzMediaTag enum). |
Parameter | Type | Description |
tag | A tag for a shot set |
archivePath | String | A path to an archive |
additionalTags (optional) | String | Additional tags if needed (including those not from the OzMediaTag enum) |
Parameter | Type | Description |
tag | A tag for a video |
videoPath | String | A path to a video |
bestShotPath (optional) | String | URL of the best shot in PNG |
preferredMediaPath (optional) | String | URL of the API media container |
additionalTags (optional) | String | Additional tags if needed (including those not from the OzMediaTag enum) |
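Putting the media tables above together, constructing a video media object might look like this. A sketch only: the OzAbstractMedia.OzVideo class name and the tag value are assumptions; the parameter names follow the tables.

```kotlin
// Sketch: class and enum names are illustrative assumptions,
// parameter names follow the tables above.
val video = OzAbstractMedia.OzVideo(
    tag = OzMediaTag.VideoSelfieBlank,              // a tag for a video (illustrative value)
    videoPath = "/storage/emulated/0/liveness.mp4", // a path to a video
    additionalTags = "loan_application"             // optional; may be outside the OzMediaTag enum
)
```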
Parameter | Type | Description |
analysis | Contains information on what media to analyze and what analyses to apply. |
Parameter | Type | Description |
media | The object that is being uploaded at the moment |
index | Int | The index of this object in the list |
from | Int | The total number of objects |
percentage | Int | Completion percentage |
Parameter | Type | Description |
type | Type | The type of the analysis |
mode | Mode | The mode of the analysis |
mediaList | An array of the OzAbstractMedia objects |
params (optional) | Map<String, Any> | Additional parameters |
sizeReductionStrategy | Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully |
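A hedged sketch of composing the Analysis structure from the fields above (Kotlin syntax; the enum spellings and the params key are illustrative assumptions, not the exact SDK API):

```kotlin
// Sketch: enum spellings and the "extract_best_shot" key are assumptions.
val analysis = Analysis(
    type = Analysis.Type.QUALITY,                  // Liveness
    mode = Analysis.Mode.HYBRID,                   // on-device first, server as an extra check
    mediaList = listOf(capturedVideo),             // OzAbstractMedia objects
    params = mapOf("extract_best_shot" to true),   // optional additional parameters
    sizeReductionStrategy = SizeReductionStrategy.UPLOAD_BEST_SHOT
)
```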
Parameter | Type | Description |
mediaId | String | Media identifier |
mediaType | String | Type of the media |
originalName | String | Original media name |
ozMedia | Media object |
tags | List<String> | Tags for media |
Parameter | Type | Description |
confidenceScore | Float | Resulting score |
isOnDevice | Boolean | Mode of the analysis |
resolution | Consolidated analysis result |
sourceMedia | Source media |
type | Type of the analysis |
Parameter | Type | Description |
analysisResults | Analysis result |
folderId | String | Folder identifier |
resolution | Consolidated analysis result |
Parameter | Type | Description |
resolution | Consolidated analysis result |
type | Type of the analysis |
mode | Mode of the analysis |
resultMedia | A list of results of the analyses for single media |
confidenceScore | Float | Resulting score |
analysisId | String | Analysis identifier |
params | @RawValue Map<String, Any> | Additional folder parameters |
error | Error if any |
serverRawResponse | String | Response from backend |
Error Code | Error Message | Description |
ERROR = 3 | Error. | An unknown error has happened |
ATTEMPTS_EXHAUSTED_ERROR = 4 | Error. Attempts exhausted for liveness action. | The user has used up all attempts for the Liveness action |
VIDEO_RECORD_ERROR = 5 | Error by video record. | An error happened during video recording |
NO_ACTIONS_ERROR = 6 | Error. OzLivenessSDK started without actions. | The SDK was launched without any actions specified |
FORCE_CLOSED = 7 | Error. Liveness activity is force closed from client application. | A user closed the Liveness screen during video recording |
DEVICE_HAS_NO_FRONT_CAMERA = 8 | Error. Device has not front camera. | No front camera found |
DEVICE_HAS_NO_MAIN_CAMERA = 9 | Error. Device has not main camera. | No rear camera found |
DEVICE_CAMERA_CONFIGURATION_NOT_SUPPORTED = 10 | Error. Device camera configuration is not supported. | Oz Liveness doesn't support the camera configuration of the device |
FACE_ALIGNMENT_TIMEOUT = 12 | Error. Face alignment timeout in OzLivenessSDK.config.faceAlignmentTimeout milliseconds | The user failed to align their face within the configured timeout |
ERROR = 13 | The check was interrupted by user | The user closed the screen during the Liveness check |
iOS SDK changes
The messages displayed by the SDK after uploading media have been synchronized with Android.
The bug causing analysis delays that might have occurred for the One Shot gesture has been fixed.
Removed the pause after the Scan gesture.
Security and logging updates.
Security updates.
Changed the default behavior in case a localization key is missing: now the English string value is displayed instead of a key.
Fixed some bugs.
Internal licensing improvements.
Implemented the possibility of using a master license that works with any bundle_id.
Fixed the bug with background color flashing.
Bug fixes.
The Analysis structure now contains the sizeReductionStrategy field. This field defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully.
The messages for the errors that are retrieved from API are now detailed.
The toFrameGradientColor option in hintAnimationCustomization is now deprecated; please use the hintGradientColor option instead.
Got back the iOS 11 support.
Updated the Liveness on-device model.
Added the Portuguese (Brazilian) locale.
If a media hasn't been uploaded correctly, the system now repeats the upload.
Added a new method to retrieve the telemetry (logging) identifier: getEventSessionId.
The setPermanentAccessToken, configure, and login methods are now deprecated. Please use the setApiConnection method instead.
The setLicense(from path:String) method is now deprecated. Please use the setLicense(licenseSource: LicenseSource) method instead.
Fixed some bugs and improved the SDK work.
Fixed some bugs and improved the SDK algorithms.
Added the new analysis mode – hybrid (Liveness only). If the score received from an on-device analysis is too high, the system initiates a server-based analysis as an additional check.
Improved the on-device models.
Updated the run method.
Added new structures: RequestStatus (analysis state), ResultMedia (analysis result for a single media), and RequestResult (consolidated analysis result for all media).
The updated AnalysisResult structure should now be used instead of OzAnalysisResult.
For the OZMedia object, you can now specify additional tags that are not included in our tags list.
The Selfie video length is now about 0.7 sec, the file size and upload time are reduced.
The hint text width can now exceed the frame width (when using the main camera).
The methods below are no longer supported:
Added the center hint background customization.
Added new face frame forms (Circle, Square).
Synchronized the default customization values with Android.
Added the Spanish locale.
iOS 11 is no longer supported, the minimal required version is 12.
Fixed the issue with the server-based One shot analysis.
Improved the SDK algorithms.
Fixed error handling when uploading a file to API. From this version on, an error is raised to the host application if a file upload fails.
Improved the on-device Liveness.
Fixed the animation for sunglasses/mask.
Fixed the bug with the .document analysis.
Updated the descriptions of customization methods and structures.
Updated the TensorFlow version to 2.11.
Fixed several bugs, including the Biometry check failures on some phone models.
Added customization for the hint animation.
Integrated a new model.
Added the uploadMedia method to AnalysisRequest. The addMedia method is now deprecated.
Fixed the combo analysis error.
Added a button to reset the SDK theme and language settings.
Fixed some bugs and localization issues.
Extended the network request timeout to 90 sec.
Added a setting for the animation icon size.
Synchronized the version numbers with Android SDK.
Added a new field to the Analysis structure. The params field is for any additional parameters, for instance, if you need to set extracting the best shot on the server to true. The best shot algorithm chooses the highest-quality frame from a video.
Fixed some localization issues.
Changed the Combo gesture.
Now you can launch the Liveness check to analyze images taken with another SDK.
The Zoom in and Zoom out gestures are no longer supported.
Added a new simplified analysis structure: AnalysisRequest.
Added methods for on-device analysis: runOnDeviceLivenessAnalysis and runOnDeviceBiometryAnalysis.
You can choose the installation version. The standard installation gives access to the full functionality; the core version (OzLivenessSDK/Core) installs the SDK without the on-device functionality.
Added a method to upload data to the server and start analyzing it immediately: uploadAndAnalyse.
Improved the licensing process: you can now add a license when initializing the SDK via OZSDK(licenseSources: [LicenseSource], completion: @escaping ((LicenseData?, LicenseError?) -> Void)), where LicenseSource is a path to the physical location of your license and LicenseData contains the license information.
Added the setLicense method to force license adding.
Added the Turkish locale
Added the Kyrgyz locale
Added Completion Handler for analysis results.
Added Error User Info to telemetry to show detailed info in case of an analysis error.
Added local on-device analysis.
Added oval and rectangular frames.
Added Xcode 12.5.1+ support.
Added SDK configuration with licenses.
Added the One Shot gesture.
Improved OZVerificationResult: added bestShotURL, which contains the best shot image, and preferredMediaURL, which contains a URL to the best quality video.
When performing a local check, you can now choose a main or back camera.
Authorization sessions extend automatically
Updated authorization interfaces.
Added the Kazakh locale
Added license error texts
You can cancel network requests
You can specify Bundle for license
Added analysis parameterization for documentAnalyse.
Fixed building errors (Xcode 12.4 / Cocoapods 1.10.1)
Added license support
Added Xcode 12 support instead of 11.
Fixed the documentAnalyse error where you had to fill analyseStates to launch the analysis.
Fixed logging
The number of actions is exceeded
No found in a video
Time limit for the is exceeded
The length of the Selfie gesture is now (affects the video file size).
You can instead of Oz logo if your license allows it.
The code in is now up-to-date.
If multiple analyses are applied to the folder simultaneously, the system sends them as a group. It means that the “worst” of the results will be taken as the resolution, not the latest one. Please refer to for details.
For the Liveness analysis, the system now treats the highest score as a quantitative result. The Liveness analysis output is described .
You can now add a custom or update an existing language pack. The instructions can be found .
Added the antiscam widget and its . This feature allows you to alert your customers that the video recording is being conducted, for instance, for loan application purposes. The purpose of this is to safeguard against scammers who may attempt to deceive an individual into approving a fraudulent transaction.
Implemented a range of options and switched to the new design. To restore the previous settings, please refer to .
The run method now works similarly to the one in Android SDK and returns an .
Removed method | Replacement |
analyse | AnalysisRequest.run |
addToFolder | uploadMedia |
documentAnalyse | AnalysisRequest.run |
uploadAndAnalyse | AnalysisRequest.run |
runOnDeviceBiometryAnalysis | AnalysisRequest.run |
runOnDeviceLivenessAnalysis | AnalysisRequest.run |
addMedia | uploadMedia |
A singleton for Oz SDK.
Initializes OZSDK with the license data. The closure is either license data or LicenseError.
Returns
-
Forces the license installation.
Retrieves an access token for a user.
Returns
The access token or an error.
Retrieves an access token for a user to send telemetry.
Returns
The access token or an error.
Checks whether an access token exists.
Parameters
-
Returns
The result: the true or false value.
Deletes the saved access token
Parameters
-
Returns
-
Creates the Liveness check controller.
Returns
UIViewController or an exception.
Creates the Liveness check controller.
Returns
UIViewController or an exception.
Deletes all videos.
Parameters
-
Returns
-
Retrieves the telemetry session ID.
Parameters
-
Returns
The telemetry session ID (String parameter).
Sets the bundle to look for translations in.
Parameters
Returns
-
Sets the length of the Selfie gesture (in milliseconds).
SDK locale (if not set, works automatically).
The host to call for Liveness video analysis.
The holder for attempts counts before SDK returns error.
The SDK version.
A delegate for OZSDK.
Gets the Liveness check results.
Returns
-
The error processing method.
Returns
-
A protocol for performing checks.
Creates the AnalysisRequest instance.
Returns
The AnalysisRequest instance.
Adds an analysis to the AnalysisRequest instance.
Returns
-
Uploads media to the server.
Returns
-
Adds the folder ID to upload media to a certain folder.
Returns
-
Adds metadata to a folder.
Returns
-
Runs the analyses.
Returns
The analysis result or an error.
Customization for OzLivenessSDK (use OZSDK.customization).
A set of customization parameters for the toolbar.
A set of customization parameters for the center hint that guides a user through the process of taking an image of themselves.
A set of customization parameters for the hint animation.
A set of customization parameters for the frame around the user face.
A set of customization parameters for the background outside the frame.
A set of customization parameters for the SDK version text.
A set of customization parameters for the antiscam message that warns user about their actions being recorded.
Logo customization parameters. Custom logo should be allowed by license.
A source of a license.
The license data.
Contains action from the captured video.
Contains the locale code according to ISO 639-1.
Contains all the information on the media captured.
The type of media captured.
Error description.
Contains information on what media to analyze and what analyses to apply.
The type of the analysis.
Currently, the .document analysis can't be performed in the on-device mode.
The mode of the analysis.
Shows the media processing status.
Shows the files' uploading status.
Shows the analysis processing status.
Describes the analysis result for the single media.
Contains the consolidated analysis results for all media.
Contains the results of the checks performed.
The general status for all analyses applied to the folder created.
Contains the results for single analyses.
Frame shape settings.
Possible license errors.
The authorization type.
Defines the settings for the repeated media upload.
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
Add the lines below in pubspec.yaml of the project you want to add the plugin to.
Add the license file (e.g., license.json or forensics.license) to the Flutter application/assets folder. In pubspec.yaml, specify the Flutter asset:
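For reference, the asset declaration in pubspec.yaml might look like this (assuming the license file is named license.json):

```yaml
flutter:
  assets:
    - assets/license.json
```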
For Android, add the Oz repository to /android/build.gradle, allprojects → repositories section:
The minimum SDK version should be 21 or higher:
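In android/app/build.gradle this typically means the following (a minimal fragment; the rest of the android block is omitted):

```groovy
android {
    defaultConfig {
        minSdkVersion 21 // required by the Oz plugin
    }
}
```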
For iOS, set the minimum platform to 13 or higher in the Runner → Info → Deployment target → iOS Deployment Target.
In ios/Podfile, comment out the use_frameworks! line (#use_frameworks!).
Initialize the SDK by calling the init plugin method. Note that the license file name and path should match the ones specified in pubspec.yaml (e.g., assets/license.json).
Use the API credentials (login, password, and API URL) that you’ve received from us.
In production, instead of hard-coding the login and password inside the application, it is recommended to get the access token on your backend via the API auth method, then pass it to your application:
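A sketch of the recommended setup, assuming your backend exposes an endpoint that returns the token it obtained via the API auth method. The /oz-token path and the plain-text response are placeholders, not part of the Oz API:

```kotlin
// Placeholder endpoint and response format: substitute your backend's
// real route and schema. The backend itself calls the Oz API auth method.
fun fetchAccessToken(backendUrl: String): String {
    val connection = java.net.URL("$backendUrl/oz-token")
        .openConnection() as java.net.HttpURLConnection
    connection.requestMethod = "GET"
    return connection.inputStream.bufferedReader().use { it.readText().trim() }
}
// The returned token is then passed to the application instead of credentials.
```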
To start recording, use the executeLiveness method to obtain the recorded media:
The media object contains the captured media data.
To run the analyses, execute the code below.
Create the Analysis object:
Execute the formed analysis:
The analysisResult list of objects contains the result of the analysis.
If you want to use media captured by another SDK, the code should look like this:
Deletes all action videos from file system (iOS 8.4.0 and higher, Android).
Returns
Future<Void>.
Returns the SDK version.
Returns
Future<String>.
Initializes SDK with license sources.
Returns
Authentication via credentials.
Returns
Authentication via access token.
Returns
Connection to the telemetry server via credentials.
Returns
Connection to the telemetry server via access token.
Returns
Checks whether an access token exists.
Returns
Deletes the saved access token.
Returns
Nothing (void).
Returns the list of SDK supported languages.
Returns
Starts the Liveness video capturing process.
Returns
Sets the length of the Selfie gesture (in milliseconds).
Returns
Error if any.
Launches the analyses.
Returns
Sets the SDK localization.
The number of attempts before SDK returns error.
Sets the UI customization values for OzLivenessSDK. The values are described in the Customization structures section. Structures can be found in the lib\customization.dart file.
Loads the customized resources’ parameters from the iOS/Android customization and passes them to the Flutter application level.
Returns
The JSON object with the customization resources' parameters.
Sets the timeout for the face alignment for actions.
Add fonts and drawable resources to the application/ios project.
Fonts and images should be placed into related folders:
ozforensics_flutter_plugin\android\src\main\res\drawable
ozforensics_flutter_plugin\android\src\main\res\font
The loadCustomizationResources
method delivers the customization data to the Flutter application layer. This structure can be extended according to customers’ demands.
These are defined in the customization.dart file.
Contains the information about customization parameters.
Toolbar customization parameters.
Center hint customization parameters.
Hint animation customization parameters.
Frame around face customization parameters.
SDK version customization parameters.
Background customization parameters.
Defined in the models.dart file.
Stores the language information.
The type of media captured.
The type of media captured.
Contains an action from the captured video.
Stores information about media.
Stores information about the analysis result.
Stores data about a single analysis.
Analysis type.
Analysis mode.
Contains the action from the captured video.
The general status for all analyses applied to the folder created.
Defines what type of media is being sent to the server in case of the hybrid analysis once the on-device analysis is finished successfully. By default, the system uploads the compressed video.
Loaded by the loadCustomizationResources method. This is a Map to define the platform-specific resources on the plugin level.
This key is a Map for the close button icon.
This key is a Map containing the data on the uploaded fonts.
This key is a Map containing the data on the uploaded font styles.
This key is a Map containing the data on frame shape.
Oz Liveness Web SDK is a module for processing data on clients' devices. With Oz Liveness Web SDK, you can take photos and videos of people via their web browsers and then analyze these media. Most browsers and devices are supported. Available languages: EN, ES, PT-BR, KK.
For Angular and React, replace https://web-sdk.sandbox.ohio.ozforensics.com
in index.html.
Web SDK uses HTTPS to work; however, it is possible to use HTTP at localhost and 127.0.0.1.
Oz Liveness Web SDK consists of two components:
The integration guides can be found here:
This is a guide on how to start with Oz Web SDK:
Parameter | Type | Description |
---|---|---|
List<>.
List<>.
List<>.
Please find a sample for Oz Liveness Web SDK . To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.
Client side – a JavaScript file that is loaded within the frontend part of your application. It is called .
Server side – a separate server module with . The module is called Liveness.
Oz Web SDK can be provided via SaaS, where the server part runs on our servers, is maintained by our engineers, and you just use it, or on-premise, where Oz Web Adapter is installed on your servers. for more details and choose the model that is convenient for you.
Oz Web SDK requires a to work. To issue a license, we need the domain name of the website where you are going to use our SDK.
the plugin into your page.
If you want to customize the look-and-feel of Oz Web SDK, please refer to .