Oz API is the core component of the system: it connects all the other components with each other. Oz API:
provides the unified Rest API interface to run the Liveness and Biometry analyses
processes authorization and user permissions management
tracks and records requested orders and analyses to the database
archives the inbound media files
collects telemetry from connected mobile apps
provides settings for specific device models
generates reports with analyses results
Description of the Rest API scheme:
To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked with tags: they describe what's pictured in the media and determine the applicable analyses.
For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into the ZIP archive and apply the video-related tags.
To create a folder and upload media to it, call POST /api/folders/
To add files to an existing folder, call POST /api/folders/{{folder_id}}/media/
Add the files to the request body; tags should be specified in the payload.
Here's an example of the payload for a passive Liveness video and an ID front side photo.
The successful response will return the folder data.
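The folder creation step above can be sketched as follows. This is a minimal illustration only: the `media:tags` key and the file field names (`video1`, `photo1`) are assumptions to verify against the Oz API reference for your version.

```python
import json

# Build the "payload" part of the multipart request for POST /api/folders/.
# It maps each uploaded file field to its tags: here, a passive Liveness
# video and an ID front side photo. Field names are illustrative.
def build_upload_payload() -> str:
    payload = {
        "media:tags": {
            "video1": ["video_selfie", "video_selfie_blank"],  # passive Liveness video
            "photo1": ["photo_id_front"],                      # ID front side photo
        }
    }
    return json.dumps(payload)

# The files themselves go into the multipart body under the same keys
# ("video1", "photo1"); the JSON above travels in the "payload" field.
```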
The Liveness detection algorithm is intended to detect a real living person in a media.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
For API 4.0.8 and below, please note: the Liveness analysis works with videos and shotsets; images are ignored. If you want to analyze an image, upload it as a shotset (archive) with a single image and mark it with the video_selfie_blank tag.
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for the response, add it to the payload at this step, as described here.
You'll need the analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat the check until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.
For the Liveness analysis, seek the confidence_spoofing value related to the video you need. It indicates the chance that the person is not a real one.
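The polling step can be sketched as follows. Here `fetch_analysis` is a placeholder for your own HTTP call to GET /api/analyses/{{analyse_id}}; it is not part of Oz API itself.

```python
import time

# A minimal polling sketch: keep requesting the analysis until its
# resolution_status leaves PROCESSING, then return the result.
def poll_analysis(fetch_analysis, analyse_id, interval=1.0, max_attempts=60):
    for _ in range(max_attempts):
        result = fetch_analysis(analyse_id)  # your GET /api/analyses/{{analyse_id}} call
        if result.get("resolution_status") != "PROCESSING":
            return result
        time.sleep(interval)
    raise TimeoutError("analysis did not finish in time")
```

With a webhook configured you would skip this loop entirely and wait for the callback instead.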
The Biometry algorithm is intended to compare two or more photos and determine how similar the detected faces are. As source media, the algorithm takes photos, videos, and documents (with photos).
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need the analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Repeat until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.
Check the response for the min_confidence value: it is the quantitative result of matching the people in the uploaded media.
This article describes how to create a collection via API, how to add persons and photos to this collection and how to delete them and the collection itself if you no longer need it. You can do the same in Web console, but this article covers API methods only.
Collection in Oz API is a database of facial photos that are used for comparison with the face from the captured photo or video via the Black list analysis.
Person represents a human in the collection. You can upload several photos for a single person.
The collection should be created within a company, so you need your company's company_id as a prerequisite.
If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the company_id you've received.
Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and the company_id of your company:
In the response, you'll get your new collection identifier: collection_id.
To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using the collection_id of the collection needed. In the request body, add a photo or several photos. Mark them with appropriate tags in the payload:
The response will contain the person_id, which is the person identifier within your collection.
If you want to add a name of the person, in the request payload, add it as metadata:
To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ with the appropriate person_id. Fill the request body as you did before with POST /api/collections/{{collection_id}}/persons/.
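The request bodies above can be sketched as follows. The exact field names (`alias`, `media:tags`, `meta_data`) are assumptions for illustration; check them against the Oz API reference for your version.

```python
import json

# POST /api/collections/ -- create a collection within your company.
# "<company_id>" is a placeholder for the ID you retrieved earlier.
collection_body = json.dumps({
    "alias": "blacklist_main",
    "company_id": "<company_id>",
})

# POST /api/collections/{{collection_id}}/persons/ -- add a person with a
# tagged photo and an optional name stored as metadata.
person_payload = json.dumps({
    "media:tags": {"photo1": ["photo_selfie"]},
    "meta_data": {"person_name": "John Doe"},
})
```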
To obtain information on all the persons within a single collection, call GET /api/collections/{{collection_id}}/persons/.
To obtain a list of photos for a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain person_image_id. You'll need this ID, for instance, if you want to delete the photo.
To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity if it has any related analyses, which means the Black list analysis used this person's photo for comparison and found a match. To delete such a person, you'll first need to delete these analyses using DELETE /api/analyses/{{analyse_id}} with the analyse_id of the collection (Black list) analysis.
To delete all the collection-related analyses, get a list of folders where the Black list analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the analyse_id of the required analysis and delete the analysis: DELETE /api/analyses/{{analyse_id}}.
To delete a single photo of a person, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}}/images/{{person_image_id}} with the collection, person, and image identifiers specified.
To delete a collection, first delete the information on all the persons from it as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.
The description of the objects you can find in the Oz Forensics system.
System objects in Oz Forensics products are hierarchically structured as shown in the picture below.
On the top level, there is a Company. You can use one copy of Oz API to work with several companies.
The next level is a User. A company can contain any number of users. There are several user roles with different permissions. For more information, refer to User Roles.
When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders, and each folder can contain any number of media files. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed on this page.
| Parameter | Type | Description |
| --- | --- | --- |
| time_created | Timestamp | Object (except user and company) creation time |
| time_updated | Timestamp | Object (except user and company) update time |
| meta_data | JSON | Any user parameters |
| technical_meta_data | JSON | Module-required parameters; reserved for internal needs |
Besides these parameters, each object type has specific ones.
| Parameter | Type | Description |
| --- | --- | --- |
| company_id | UUID | Company ID within the system |
| name | String | Company name within the system |
| Parameter | Type | Description |
| --- | --- | --- |
| user_id | UUID | User ID within the system |
| user_type | String | User role within the system (see User Roles) |
| first_name | String | Name |
| last_name | String | Surname |
| middle_name | String | Middle name |
| email | String | User email = login |
| password | String | User password (only required for new users or to change) |
| can_start_analyze_* | String | Additional flags that allow access to the corresponding analyses (enabled by default) |
| company_id | UUID | Current user company's ID within the system |
| is_admin | Boolean | Whether this user is an admin or not |
| is_service | Boolean | Whether this user account is a service account or not |
| Parameter | Type | Description |
| --- | --- | --- |
| folder_id | UUID | Folder ID within the system |
| resolution_status | ResolutionStatus | The latest analysis status |
| Parameter | Type | Description |
| --- | --- | --- |
| media_id | UUID | Media ID |
| original_name | String | Original filename (how the file was called on the client machine) |
| original_url | Url | HTTP link to this file on the API server |
| tags | Array(String) | List of tags for this file |
| Parameter | Type | Description |
| --- | --- | --- |
| analyse_id | UUID | ID of the analysis |
| folder_id | UUID | ID of the folder |
| type | String | Analysis type (BIOMETRY / QUALITY / DOCUMENTS) |
| results_data | JSON | Results of the analysis |
Here, you'll get acquainted with types of analyses that Oz API provides and will learn how to interpret the output.
Using Oz API, you can perform one of the following analyses:
Each analysis has its own threshold that determines how its output is interpreted. By default, the threshold for Liveness is 0.5 (50%); for Blacklist and Biometry (Face Matching), it is 0.85 (85%).
Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.
Blacklist: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.
Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
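The threshold rules above can be summarized in code. This is a sketch of the interpretation logic only, not a call to Oz API; the default values come from the paragraph above.

```python
# Default thresholds, as described above. Scores are in [0, 1].
DEFAULT_THRESHOLDS = {"liveness": 0.5, "biometry": 0.85, "blacklist": 0.85}

def interpret(analysis: str, score: float) -> str:
    threshold = DEFAULT_THRESHOLDS[analysis]
    if analysis == "liveness":
        # For Liveness (Quality), the score is the spoofing chance:
        # at or above the threshold the result is treated as an attack.
        return "attack" if score >= threshold else "real person"
    # For Biometry and Blacklist, the score is the similarity level:
    # at or above the threshold the faces are considered matching.
    return "match" if score >= threshold else "no match"
```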
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100% to 0% (1 to 0), where:
100% (1) – the faces are similar, the media represent the same person,
0% (0) – the faces are not similar and belong to different people.
The Liveness detection (Quality) algorithm aims to check whether a person in a media is a real human acting in good faith, not a fake of any kind.
The Best Shot algorithm checks for the best shot from a video (a best-quality frame where the face is seen the most properly). It is an addition to liveness.
After the check, the analysis shows the chance of a spoofing attack as a percentage:
100% (1) – an attack is detected, the person in the video is not a real living person,
0% (0) – a person in the video is a real living person.
*Spoofing in biometry is a kind of scam where a person disguises themselves as another person using software or physical tools such as deepfakes, masks, ready-made photos, or fake videos.
The Documents analysis aims to recognize the document and check if its fields are correct according to its type.
As an output, you'll get a list of document fields with recognition results for each field and a result of checking that can be:
The documents passed the check successfully,
The documents failed to pass the check.
Additionally, the result of Biometry check is displayed.
The Blacklist checking algorithm is used to determine whether the person in a photo or video is present in the database of pre-uploaded images. This database can be used as a blacklist or a whitelist. In the former case, the person's face is compared with the faces of known fraudsters; in the latter case, it might be a list of VIPs.
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:
100% (1) – the person in an image or video matches with someone in the blacklist,
0% (0) – the person is not found in the blacklist.
This article covers the default rules of applying analyses.
Analyses in Oz system can be applied in two ways:
manually, for instance, when you choose the Liveness scenario in our demo application;
automatically, when you don’t choose anything and just assign all possible analyses (via API or SDK).
Below, you will find the tags and type requirements for all analyses. If a media file doesn't match the requirements for a certain analysis, it is ignored by the algorithms.
Important: to process a photo in API 4.0.8 and below, pack it into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored.
This analysis is applied to all media.
If the folder contains fewer than two matching media files, the system will return an error. If there are more than two files, all pairs are compared, and the system returns the result for the pair with the least similar faces.
This analysis works only when you have a pre-made image database, which is called the blacklist. The analysis is applied to all media in the folder (or the ones marked as source media).
Best Shot is an addition to the Quality (Liveness) analysis. It requires the appropriate option enabled. The analysis is applied to all media files that can be processed by the Quality analysis.
The Documents analysis is applied to images with the tags photo_id_front and photo_id_back (documents) and photo_selfie (selfie). The result will be positive if the system finds the selfie photo and matches it with the photo on one of the valid documents from the following list:
personal ID card
driver license
foreign passport
Each new API user obtains a role that defines access restrictions for direct API connections.
Every role is combined with the flags is_admin and is_service, which impose additional restrictions.
is_service is a flag that marks the user account as a service account for automatic connection purposes. Authentication of such a user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized) and, by default, the lifetime of a token is extended with each request (parameterized).
ADMIN is a system administrator. Has unlimited access to all system objects, but can't change the analyses' statuses;
CLIENT is a regular consumer account. Can upload media files, process analyses, view results in personal folders, and generate reports for analyses.
is_admin – if set, the user obtains access to other users' data within this admin's company.
CLIENT ADMIN is a company administrator who can manage their company account and the users within it. Additionally, CLIENT ADMIN can view and edit the data of all users within their company, delete files in folders, add or delete report templates (with or without attachments), the reports themselves, and single analyses, check statistics, and add new blacklist collections. The role is present in Web UI only. Outside Web UI, CLIENT ADMIN is replaced by the CLIENT role with the is_admin flag set to true.
CLIENT OPERATOR is similar to OPERATOR, but within their company.
Here's the detailed information on access levels.
The possible results of the analyses are explained here.
To configure the threshold depending on your needs, please contact us.
For more information on how to read the numbers in the analyses' results, please refer to the Quantitative Results section.
The Biometry algorithm allows comparing several media and checking whether the people on them are the same person or not. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media (for details, please refer to the rules of assigning analyses).
Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.
The automatic assignment means that the Oz system decides itself what analyses to apply to media files, based on their tags and type. If you upload files via the web console, you select the tags needed; if you take a photo or video via Web SDK, the SDK picks the tags automatically. As for the media type, it can be IMAGE (a photo), VIDEO, or SHOTS_SET, where SHOTS_SET is a .zip archive equivalent to a video.
The rules listed below act by default. To change the mapping configuration, please contact us.
This analysis is applied to all media, regardless of the recorded gesture (gesture tags begin with video_selfie).
OPERATOR is a system operator. Can view all system objects and choose the analysis result via the Make Decision button (usually needed if the status is OPERATOR_REQUIRED);
can_start_analyse_biometry – an additional flag to allow access to the Biometry analyses (enabled by default);
can_start_analyse_quality – an additional flag to allow access to the Liveness (QUALITY) analyses (enabled by default);
| Code | Message | Description |
| --- | --- | --- |
| 202 | Could not locate face on source media [media_id] | No face is found in the media being processed, or the source media has a wrong (photo_id_back) and/or missing tag. |
| 202 | Biometry. Analyse requires at least 2 media objects to process | The algorithms did not find two appropriate media for the analysis. This might happen when only a single media has been sent for the analysis, or a media is missing a tag. |
| 202 | Processing error - did not found any document candidates on image | The Documents analysis can't be finished because the uploaded photo doesn't seem to be a document, or it has wrong (not photo_id_*) and/or missing tags. |
| 5 | Invalid/missed tag values to process quality check | The applied tags can't be processed by the Quality algorithm (most likely, the tags begin with photo_*; for Quality, they should be video_*). |
| 5 | Invalid/missed tag values to process blacklist check | The applied tags can't be processed by the Blacklist algorithm. This might happen when a media is missing a tag. |
| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | - | + | - | - |
| CLIENT | - | their company data | - | - |
| CLIENT SERVICE | - | their company data | - | - |
| CLIENT OPERATOR | - | their company data | - | - |
| CLIENT ADMIN | - | their company data | their company data | their company data |
| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | their folders | their folders | their folders | - |
| CLIENT SERVICE | within their company | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |
| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | - | within their company | - | - |
| CLIENT SERVICE | - | within their company | - | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |
| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | + | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | - | within their company | - |
| CLIENT OPERATOR | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |
| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | + | + | - |
| CLIENT | in their folders | in their folders | - |
| CLIENT SERVICE | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |
| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | + | + | + | - |
| CLIENT | in their folders | in their folders | - | - |
| CLIENT SERVICE | within their company | within their company | within their company | - |
| CLIENT OPERATOR | within their company | within their company | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |
| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | - | + | - | - |
| CLIENT | - | within their company | - | - |
| CLIENT SERVICE | within their company | within their company | - | - |
| CLIENT OPERATOR | - | within their company | - | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |
| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | - | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | within their company | within their company | - |
| CLIENT OPERATOR | - | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |
| Role | Create | Read | Delete |
| --- | --- | --- | --- |
| ADMIN | + | + | + |
| OPERATOR | - | + | - |
| CLIENT | - | within their company | - |
| CLIENT SERVICE | - | within their company | - |
| CLIENT OPERATOR | - | within their company | - |
| CLIENT ADMIN | within their company | within their company | within their company |
| Role | Create | Read | Update | Delete |
| --- | --- | --- | --- | --- |
| ADMIN | + | + | + | + |
| OPERATOR | - | + | their data | - |
| CLIENT | - | their data | their data | - |
| CLIENT SERVICE | - | within their company | their data | - |
| CLIENT OPERATOR | - | within their company | their data | - |
| CLIENT ADMIN | within their company | within their company | within their company | within their company |
This article contains the full description of folders' and analyses' statuses in API.
| Status | Analysis state | Analysis resolution | Folder resolution_status | Folder resolution |
| --- | --- | --- | --- | --- |
| INITIAL | - | - | starting state | starting state |
| PROCESSING | starting state | starting state | analyses in progress | analyses in progress |
| FAILED | system error | system error | system error | system error |
| FINISHED | finished successfully | - | finished successfully | - |
| DECLINED | - | check failed | - | check failed |
| OPERATOR_REQUIRED | - | additional check is needed | - | additional check is needed |
| SUCCESS | - | check succeeded | - | check succeeded |
The details on each status are below.
This is the state when the analysis is being processed. The values of this state can be:
PROCESSING – the analysis is in progress;
FAILED – the analysis failed due to some error and couldn't get finished;
FINISHED – job's done: the analysis is finished, and you can check the result.
Once the analysis is finished, you'll see one of the following results:
SUCCESS – everything went fine, the check succeeded (e.g., faces match or liveness confirmed);
OPERATOR_REQUIRED (except the Liveness analysis) – the result should be additionally checked by a human operator; the OPERATOR_REQUIRED status appears only if it is set up in the biometry settings;
DECLINED – the check failed (e.g., faces don't match or a spoofing attack is detected).
If the analysis hasn't been finished yet, the result inherits a value from analyse.state: PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).
A folder is an entity that contains media to analyze. If the analyses have not been finished, the stage of processing media is shown in resolution_status:
INITIAL – no analyses applied;
PROCESSING – analyses are in progress;
FAILED – any of the analyses failed due to some error and couldn't get finished;
FINISHED – media in this folder are processed, the analyses are finished.
Folder result is the consolidated result of all analyses applied to media from this folder. Please note: the folder result is the result of the last-finished group of analyses. If all analyses are finished, the result will be:
SUCCESS – everything went fine, all analyses completed successfully;
OPERATOR_REQUIRED (except the Liveness analysis) – there are no analyses with the DECLINED status, but one or more analyses have been completed with the OPERATOR_REQUIRED status;
DECLINED – one or more analyses have been completed with the DECLINED status.
The analyses you send in a single POST request form a group. The group result is the "worst" result of the analyses this group contains: INITIAL > PROCESSING > FAILED > DECLINED > OPERATOR_REQUIRED > SUCCESS, where SUCCESS means all analyses in the group have been completed successfully without any errors.
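The "worst result wins" ordering can be expressed as a small helper, sketched here for illustration:

```python
# Severity ordering from the description above: the left-most status present
# in a group is the "worst" and becomes the group result.
SEVERITY = ["INITIAL", "PROCESSING", "FAILED", "DECLINED", "OPERATOR_REQUIRED", "SUCCESS"]

def group_result(statuses):
    # Pick the status with the smallest severity index (the worst one).
    return min(statuses, key=SEVERITY.index)
```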
Metadata is any optional data you might need to add to a system object. In the meta_data section, you can include any information you want, simply by providing any number of fields with their values:
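For example, a meta_data section might look like this (the field names below are invented for illustration; by design, you can use any fields you like):

```python
import json

# An example meta_data section with arbitrary user-defined fields.
body = json.dumps({
    "meta_data": {
        "client_id": "A-1038",   # illustrative field
        "channel": "mobile",     # illustrative field
        "attempt": 2,            # illustrative field
    }
})
```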
Metadata is available for most Oz system objects. Here is the list of these objects with the API methods required to add metadata. Please note: you can also add metadata to these objects during their creation.
| Object | API Method |
| --- | --- |
| User | PATCH /api/users/{{user_id}} |
| Folder | PATCH /api/folders/{{folder_id}}/meta_data/ |
| Media | PATCH /api/media/{{media_id}}/meta_data |
| Analysis | PATCH /api/analyses/{{analyse_id}}/meta_data |
| Collection | PATCH /api/collections/{{collection_id}}/meta_data/ and, for a person in a collection, PATCH /api/collections/{{collection_id}}/persons/{{person_id}}/meta_data |
You can also change or delete metadata. Please refer to our API documentation.
You may want to use metadata to group folders by a person or lead. For example, if you want to calculate conversion when a single lead makes several Liveness attempts, just add the person/lead identifier to the folder metadata.
Here is how to add the client ID (iin) to a folder object. In the request body, add:
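A sketch of this request body (the IIN value is a placeholder):

```python
import json

# Body for PATCH /api/folders/{{folder_id}}/meta_data/ -- stores the
# client's IIN in the folder metadata. "123456789012" is a placeholder.
folder_meta_body = json.dumps({"meta_data": {"iin": "123456789012"}})
```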
You can pass an ID of a person in this field, and use this ID to combine requests with the same person and count unique persons (same ID = same person, different IDs = different persons). This ID can be a phone number, an IIN, an SSN, or any other kind of unique ID. The ID will be displayed in the report as an additional column.
Another case is security: when you need to process the analyses' result from your back end, but don't want to perform this using the folder ID. Add an ID (transaction_id) to this folder and use this ID to search for the required information. This case is thoroughly explained here.
If you store PII in metadata, make sure it complies with the relevant regulatory requirements.
You can also add metadata via SDK to process the information later using API methods. Please refer to the corresponding SDK sections:
What is Oz API Lite, when and how to use it.
Oz API Lite is a lightweight yet powerful version of Oz API. The Lite version is less resource-demanding, faster, and easier to work with. The analyses are made within the API Lite image. As Oz API Lite doesn't include any additional services like statistics or data storage, this version is the one to use when you need high performance.
To check the Liveness processor, call GET /v1/face/liveness/health.
To check the Biometry processor, call GET /v1/face/pattern/health.
To perform the liveness check for an image, call POST /v1/face/liveness/detect (it takes an image as an input and returns the estimated chance of a spoofing attack in this image).
To compare two faces in two images, call POST /v1/face/pattern/extract_and_compare (it takes two images as an input, derives the biometric templates from these images, and compares them).
To compare an image with a set of images, call POST /v1/face/pattern/extract_and_compare_n.
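Interpreting a comparison response can be sketched as follows. The `score` field name is an assumption for illustration; check the API Lite reference for the exact response schema of your version.

```python
# Sketch: decide whether two faces match, given a parsed
# POST /v1/face/pattern/extract_and_compare response.
def faces_match(response: dict, threshold: float = 0.85) -> bool:
    # Assumes the service returns a similarity score in [0, 1]
    # under a "score" key (illustrative name).
    return response.get("score", 0.0) >= threshold
```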
For the full list of Oz API Lite methods, please refer to API Methods.
How to compare a photo or video with ones from your database.
The blacklist check algorithm is designed to check whether a person is present in a database of preloaded photos. A video fragment and/or a photo can be used as the source for comparison.
You're authorized.
You have already created a folder and added your media marked by correct tags into this folder.
1. Initiate the analysis: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for response, add it to the payload at this step, as described here.
You'll need the analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you have from the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.
Wait for the resolution_status and resolution fields to change to any status other than PROCESSING, and treat this as the result.
If you want to know which person from your collection matched the media you have uploaded, find the collection analysis in the response, check results_media, and retrieve person_id. This is the ID of the person who matched the person in your media. To get information about this person, use GET /api/collections/{{collection_id}}/persons/{{person_id}} with the IDs of your collection and person.
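The lookup above can be sketched as follows. The nesting mirrors the description (a COLLECTION-type analysis with person_id inside results_media), but treat the exact JSON paths as assumptions to verify against your actual response.

```python
# Sketch: collect matched person IDs from a folder's analyses list.
def matched_person_ids(analyses):
    ids = []
    for analysis in analyses:
        # The Black list analysis has type COLLECTION.
        if analysis.get("type") != "COLLECTION":
            continue
        for media in analysis.get("results_media", []):
            person_id = media.get("person_id")
            if person_id:
                ids.append(person_id)
    return ids
```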
The "Best shot" algorithm is intended to choose the highest-quality, best-posed frame with a face from a video record. This algorithm works as a part of the Liveness analysis, so here we describe only the best shot part.
Please note: historically, some instances are configured to allow Best Shot only for certain gestures.
1. Initiate the analysis similarly to Liveness, but make sure that "extract_best_shot" is set to True, as shown below:
If you want to use a webhook for response, add it to the payload at this step, as described here.
2. Check and interpret results in the same way as for the pure Liveness analysis.
3. The URL of the best shot is located in results_media -> output_images -> original_url of the response.
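A sketch of the analysis payload with the option enabled. The surrounding structure (an "analyses" list with "type" and "extract_best_shot" fields) is an illustrative assumption; only the "extract_best_shot": True requirement comes from the description above.

```python
import json

# POST /api/folders/{{folder_id}}/analyses/ -- Liveness (Quality) analysis
# with the best shot extraction enabled.
analysis_body = json.dumps({
    "analyses": [
        {"type": "QUALITY", "extract_best_shot": True}
    ]
})
```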
This article describes how to get the analysis scores.
When you perform an analysis, the result you get is a number. For biometry, it reflects the chance that the two or more people represented in your media are the same person. For liveness, it shows the chance of a deepfake or a spoofing attack, i.e., that the person in the uploaded media is not a real one. You can get these numbers via API from a JSON response.
Make a request to the folder or the folder list to get a JSON response, with the with_analyses parameter set to True.
For the Biometry analysis, check the response for the min_confidence value:
This value is a quantitative result of matching the people in the uploaded media.
For the Liveness analysis, seek the confidence_spoofing value related to the video you need:
This value is the chance that the person is not a real one.
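Picking both scores out of a folder response can be sketched as follows. The exact nesting of results_data is an assumption; adjust the paths to your actual JSON.

```python
# Sketch: extract the numeric scores from a folder's analyses list
# (requested with with_analyses=True).
def extract_scores(analyses):
    scores = {}
    for analysis in analyses:
        data = analysis.get("results_data", {})
        if analysis.get("type") == "BIOMETRY":
            scores["min_confidence"] = data.get("min_confidence")
        elif analysis.get("type") == "QUALITY":
            scores["confidence_spoofing"] = data.get("confidence_spoofing")
    return scores
```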
To process a bunch of analysis results, you can parse the appropriate JSON response.
Download and install the Postman client from this page. Then download the JSON file needed:
Oz API 5.1.0 works with the same collection.
Launch the client and import Oz API collection for Postman by clicking the Import button:
Click files, locate the JSON needed, and hit Open to add it:
The collection will be imported and will appear in the Postman interface:
In this section, you'll learn how to perform analyses and where to get the numeric results.
Liveness checks that a person in a video is a real living person.
Biometry compares two or more faces from different media files and shows whether the faces belong to the same person or not.
Best shot is an addition to the Liveness check. The system chooses the best frame from a video and saves it as a picture for later use.
Blacklist checks whether a face on a photo or a video matches with one of the faces in the pre-created database.
The Quantitative Results section explains where and how to find the numeric results of analyses.
To learn what each analysis means, please refer to Types of Analyses.
The webhook feature simplifies getting analyses' results. Instead of polling after the analyses are launched, add a webhook that will call your website once the results are ready.
When you create a folder, add the webhook endpoint (resolution_endpoint) into the payload section of your request body:
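For example (a sketch only; the endpoint URL is a placeholder, and the exact placement of resolution_endpoint inside "payload" should be verified against your API reference):

```python
import json

# Folder creation payload with a webhook endpoint: Oz API will call this
# URL once the folder's analyses are completed.
folder_body = json.dumps({
    "payload": {
        "resolution_endpoint": "https://example.com/oz-webhook"
    }
})
```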
You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
To get an access token, call POST /api/authorize/auth/ with the credentials you've got from us (email and password) in the request body. The host address should be the API address (the one you've also got from us).
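A minimal sketch of the authorization call body; the "credentials" wrapper and field names are assumptions to verify against your Oz API reference:

```python
import json

# Build the body for POST /api/authorize/auth/.
def build_auth_request(email: str, password: str):
    headers = {"Content-Type": "application/json"}
    body = json.dumps({"credentials": {"email": email, "password": password}})
    return headers, body
```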
The successful response will return a pair of tokens: access_token and expire_token.
access_token is a key that grants you access to system resources. To access a resource, you need to add your access_token to the header.
headers = {'X-Forensic-Access-Token': <access_token>}
access_token is time-limited; the limits depend on the account type:
service accounts – OZ_SESSION_LONGLIVE_TTL (5 years by default),
other accounts – OZ_SESSION_TTL (15 minutes by default).
expire_token is the token you can use to renew your access token if necessary.
If the value of expire_date is greater than the current date, the session's expire_date is set to the current date plus the time period defined above (depending on the account type).
To renew access_token and expire_token, call POST /api/authorize/refresh/.
Add expire_token to the request body and X-Forensic-Access-Token to the header.
In case of success, you'll receive a new pair of access_token and expire_token. The old pair will be deleted upon the first authentication with the renewed tokens.
| Error code | Error message | What caused the error |
| --- | --- | --- |
| 400 | Could not locate field for key_path expire_token from provided dict data | expire_token wasn't found in the request body. |
| 401 | Session not found | The session with the expire_token you have passed doesn't exist. |
| 403 | You have not access to refresh this session | The user making the request is not the owner of this expire_token session. |
Download and install the Postman client from this page. Then download the JSON file needed:
Launch the client and import Oz API Lite collection for Postman by clicking the Import button:
Click files, locate the JSON needed, and hit Open to add it:
The collection will be imported and will appear in the Postman interface:
From 1.1.0, Oz API Lite works with base64 as an input format and can also return biometric templates in this format. To enable this option, add Content-Transfer-Encoding = base64 to the request headers.
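A minimal sketch of preparing a base64 request body with the required header; the image bytes are placeholders.

```python
import base64

def base64_request(raw: bytes):
    """Base64-encode a media file for API Lite 1.1.0+.

    The Content-Transfer-Encoding header tells the server that the
    body (and the returned template) should be treated as base64.
    """
    headers = {"Content-Transfer-Encoding": "base64"}
    body = base64.b64encode(raw).decode("ascii")
    return body, headers

body, headers = base64_request(b"\xff\xd8fake-jpeg-bytes")  # placeholder bytes
```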
Use this method to check what versions of components are used (available from 1.1.1).
Call GET /version, for example:
GET localhost/version
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| core | String | API Lite core version number. |
| tfss | String | TFSS version number. |
| models | [String] | An array of model versions; each record contains a model name and model version number. |
Use this method to check whether the biometric processor is ready to work.
Call GET /v1/face/pattern/health, for example:
GET localhost/v1/face/pattern/health
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| status | Int | 0 – the biometric processor is working correctly; 3 – the biometric processor is inoperative. |
| message | String | Message. |
The method is designed to extract a biometric template from an image.
HTTP request content type: “image/jpeg” or “image/png”.
Call POST /v1/face/pattern/extract

| Parameter name | Type | Description |
| --- | --- | --- |
| Not specified* | Stream | Required parameter. Image to extract the biometric template from. |

*The “Content-Type” header field must indicate the content type.
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns a biometric template.
The content type of the HTTP response is “application/octet-stream”.
If you've passed Content-Transfer-Encoding = base64 in the headers, the template will be in base64 as well.
| Parameter name | Type | Description |
| --- | --- | --- |
| Not specified | Stream | A biometric template derived from the image. |
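The extract call can be sketched as a plain HTTP request: the raw image is the whole body and Content-Type names its format. The host and image bytes are placeholders; the request is built but not sent here.

```python
from urllib.request import Request

def build_extract_request(host: str, image: bytes,
                          content_type: str = "image/jpeg") -> Request:
    """Prepare POST /v1/face/pattern/extract.

    On success the server answers with the binary template
    (application/octet-stream) as the response body.
    """
    return Request(
        url=f"{host}/v1/face/pattern/extract",
        data=image,
        headers={"Content-Type": content_type},
        method="POST",
    )

req = build_extract_request("http://localhost", b"<jpeg bytes>")
# urllib.request.urlopen(req) would perform the call against a live server.
```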
The method is designed to compare two biometric templates.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare
| Parameter name | Type | Description |
| --- | --- | --- |
| bio_feature | Stream | Required parameter. First biometric template. |
| bio_template | Stream | Required parameter. Second biometric template. |
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of comparing the two templates.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| score | Float | The result of comparing the two templates. |
| decision | String | Recommended decision based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match. |
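Client code only needs to branch on the decision field returned by the server; the three values are the ones documented above, while the action strings below are illustrative.

```python
def interpret(result: dict) -> str:
    """Map a compare response to a follow-up action (illustrative)."""
    actions = {
        "approved": "faces match",
        "operator_required": "send to manual review",
        "declined": "faces do not match",
    }
    return actions[result["decision"]]
```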
The method combines the two methods from above, extract and compare. It extracts a template from an image and compares the resulting biometric template with another biometric template that is also passed in the request.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify
| Parameter name | Type | Description |
| --- | --- | --- |
| sample | Stream | Required parameter. Image to extract the biometric template from. |
| bio_template | Stream | Required parameter. The biometric template to compare with. |
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of comparing two biometric templates and the biometric template.
The content type of the HTTP response is “multipart/form-data”.
| Parameter name | Type | Description |
| --- | --- | --- |
| score | Float | The result of comparing the two templates. |
| bio_feature | Stream | Biometric template derived from the image. |
The method also combines the two methods from above, extract and compare. It extracts templates from two images, compares the received biometric templates, and transmits the comparison result as a response.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/extract_and_compare
| Parameter name | Type | Description |
| --- | --- | --- |
| sample_1 | Stream | Required parameter. First image. |
| sample_2 | Stream | Required parameter. Second image. |
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of comparing the two extracted biometric templates.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| score | Float | The result of comparing the two extracted templates. |
| decision | String | Recommended decision based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match. |
Use this method to compare one biometric template to N others.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/compare_n
| Parameter name | Type | Description |
| --- | --- | --- |
| template_1 | Stream | Required parameter. The first (main) biometric template. |
| templates_n | Stream | A list of N biometric templates. Each is passed as a separate part, all under the parameter name templates_n; pass the filename in the part header. |
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| results | List[JSON] | A list of N comparison results; the Nth result contains the comparison result for the main and Nth templates. Each result has the following fields: |
| *filename | String | The filename of the Nth template. |
| *score | Float | The result of comparing the main and Nth templates. |
| *decision | String | Recommended decision based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match. |
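The multipart layout described above (one template_1 part, then one part per candidate, all named templates_n, each with its own filename) can be sketched as a list of parts in the (field, (filename, bytes)) convention used by common HTTP clients. The helper name and sample data are illustrative.

```python
def compare_n_parts(main: bytes, candidates: dict) -> list:
    """Build the multipart parts for POST /v1/face/pattern/compare_n.

    Every candidate template is a separate part sharing the field name
    templates_n; the per-part filename is how results are keyed in the
    response (results[i].filename).
    """
    parts = [("template_1", ("template_1", main))]
    for filename, blob in candidates.items():
        parts.append(("templates_n", (filename, blob)))
    return parts

parts = compare_n_parts(b"\x00main", {"alice.bin": b"\x01", "bob.bin": b"\x02"})
```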
The method combines the extract and compare_n methods. It extracts a biometric template from an image and compares it to N other biometric templates that are passed in the request as a list.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/verify_n
| Parameter name | Type | Description |
| --- | --- | --- |
| sample_1 | Stream | Required parameter. The main image. |
| templates_n | Stream | A list of N biometric templates. Each is passed as a separate part, all under the parameter name templates_n; pass the filename in the part header. |
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| results | List[JSON] | A list of N comparison results; the Nth result contains the comparison result for the template derived from the main image and the Nth template. Each result has the following fields: |
| *filename | String | The filename of the Nth template. |
| *score | Float | The result of comparing the template derived from the main image and the Nth template. |
| *decision | String | Recommended decision based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match. |
This method also combines the extract and compare_n methods but in another way. It extracts biometric templates from the main image and a list of other images and then compares them in the 1:N mode.
The content type of the HTTP request is “multipart/form-data”.
Call POST /v1/face/pattern/extract_and_compare_n
| Parameter name | Type | Description |
| --- | --- | --- |
| sample_1 | Stream | Required parameter. The first (main) image. |
| samples_n | Stream | A list of N images. Each is passed as a separate part, all under the parameter name samples_n; pass the filename in the part header. |
To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.
In case of success, the method returns the result of the 1:N comparison.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| results | List[JSON] | A list of N comparison results; the Nth result contains the comparison result for the main and Nth images. Each result has the following fields: |
| *filename | String | The filename of the Nth image. |
| *score | Float | The result of comparing the main and Nth images. |
| *decision | String | Recommended decision based on the score: approved – positive result, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match. |
HTTP response content type: “application/json”.

| HTTP response code | The value of the “code” parameter | Description |
| --- | --- | --- |
| 400 | BPE-002001 | Invalid Content-Type of the HTTP request |
| 400 | BPE-002002 | Invalid HTTP request method |
| 400 | BPE-002003 | Failed to read the biometric sample* |
| 400 | BPE-002004 | Failed to read the biometric template |
| 400 | BPE-002005 | Invalid Content-Type of the multiparted HTTP request part |
| 400 | BPE-003001 | Failed to retrieve the biometric template |
| 400 | BPE-003002 | The biometric sample* is missing a face |
| 400 | BPE-003003 | More than one person is present on the biometric sample* |
| 500 | BPE-001001 | Internal bioprocessor error |
| 400 | BPE-001002 | TFSS error. Call the biometry health method. |
Use this method to check whether the liveness processor is ready to work.
Call GET /v1/face/liveness/health. No parameters are required. For example:
GET localhost/v1/face/liveness/health
In case of success, the method returns a message with the following parameters.
HTTP response content type: “application/json”.
| Parameter name | Type | Description |
| --- | --- | --- |
| status | Int | 0 – the liveness processor is working correctly; 3 – the liveness processor is inoperative. |
| message | String | Message. |
The detect method is made to reveal presentation attacks. It detects a face in each image or video (since 1.2.0), sends them for analysis, and returns a result.
The method supports the following content types:
image/jpeg or image/png for an image;
multipart/form-data for images, videos, and archives. You can use payload to add any parameters that affect the analysis.
To run the method, call POST /{version}/face/liveness/detect.
Accepts an image in JPEG or PNG format. No payload attached.
Accepts the multipart/form-data request. Each media file should have a unique name, e.g., media_key1, media_key2. The payload parameters should be a JSON placed in the payload field.
Temporary IDs will be deleted once you get the result.
To extract the best shot from your video or archive, in analyses, set extract_best_shot = true (as shown in the request example below). In this case, API Lite will analyze your archives and videos and return the best shot in the response, as a base64 image in analysis->output_images->image_b64.
Additionally, you can change the Liveness threshold: in analyses, set the new threshold in the threshold_spoofing parameter. If the resulting score is higher than this value, the analysis ends with the DECLINED status; otherwise, the status is SUCCESS.
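The payload form field for detect can be sketched as follows. The parameter names extract_best_shot and threshold_spoofing come from the text above; the exact nesting of analyses inside the payload JSON is an assumption.

```python
import json

def detect_payload(extract_best_shot=True, threshold_spoofing=None):
    """Build the payload JSON for POST /{version}/face/liveness/detect.

    threshold_spoofing overrides the default liveness decision threshold;
    when omitted, the server default is used.
    """
    analyses = {"extract_best_shot": extract_best_shot}
    if threshold_spoofing is not None:
        analyses["threshold_spoofing"] = threshold_spoofing
    return json.dumps({"analyses": analyses})
```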
HTTP response content type: “application/json”.

| HTTP response code | The value of the “code” parameter | Description |
| --- | --- | --- |
| 400 | LDE-002001 | Invalid Content-Type of the HTTP request |
| 400 | LDE-002002 | Invalid HTTP request method |
| 400 | LDE-002004 | Failed to extract the biometric sample* |
| 400 | LDE-002005 | Invalid Content-Type of the multiparted HTTP request part |
| 500 | LDE-001001 | Liveness detection processor internal error |
| 400 | LDE-001002 | TFSS error. Call the Liveness health method. |
API Lite (FaceVer) changes
Fixed the bug with the time_created and folder_id parameters of the detect method that sometimes might have been generated incorrectly.
Security updates.
Updated models.
The file size for the detect Liveness method is now capped at 15 MB, with a maximum of 10 files per request.
Updated the gesture list for the best_shot analysis: it now supports head turns (left and right), tilts (up and down), smiling, and blinking.
API Lite now accepts base64.
Improved the biometric model.
Added the 1:N mode.
Added the CORS policy.
Published the documentation.
Improved error messages – made them more detailed.
Simplified the Liveness/Detect methods.
Reworked and improved the core.
Added anti-spoofing algorithms.
Added the extract_and_compare method.
Response codes 2XX indicate a successfully processed request (e.g., code 200 for retrieving data, code 201 for adding a new entity, code 204 for deletion, etc.).
Response codes 4XX indicate that a request could not be processed correctly because of some client-side data issues (e.g., 404 when addressing a non-existing resource).
Response codes 5XX indicate that an internal server-side error occurred during the request processing (e.g., when the database is temporarily unavailable).
Each error response includes an HTTP code and JSON data with the error description. It has the following structure:
error_code – integer error code;
error_message – text error description;
details – additional error details (the format is specific to each case); can be empty.
Sample error response:
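The original sample is not reproduced here; this reconstruction uses the three documented fields, with illustrative values only.

```python
import json

# An error body with this structure accompanies every 4XX/5XX response;
# the concrete values below are made up for illustration.
sample = json.loads(
    '{"error_code": 3, "error_message": "INVALID STRUCTURE", "details": {}}'
)
```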
Error codes:
0 – UNKNOWN
Unknown server error.
1 – NOT ALLOWED
An unallowed method is called. Usually accompanied by the 405 HTTP response status. For example, requesting the PATCH method when only GET/POST are supported.
2 – NOT REALIZED
The method is documented but is not implemented, for a temporary or permanent reason.
3 – INVALID STRUCTURE
Incorrect request structure: some required fields are missing, or a format validation error occurred.
4 – INVALID VALUE
Incorrect value of a parameter inside the request body or query.
5 – INVALID TYPE
Invalid data type of a request parameter.
6 - AUTH NOT PROVIDED
Access token not specified.
7 - AUTH INVALID
The access token does not exist in the database.
8 - AUTH EXPIRED
Auth token is expired.
9 - AUTH FORBIDDEN
Access denied for the current user.
10 – NOT EXIST
The requested resource is not found (equivalent to HTTP status_code = 404).
11 - EXTERNAL SERVICE
Error in the external information system.
12 – DATABASE
Critical database error on the server host.
To work properly, the resolution algorithms need each uploaded media file to be marked with special tags. The tags differ for videos and images. They help the algorithms identify what should be in the photo or video and analyze the content.
The following tag types should be specified in the system for video files.
To identify the data type of the video:
video_selfie
To identify the orientation of the video:
orientation_portrait
– portrait orientation;
orientation_landscape
– landscape orientation.
To identify the action on the video:
video_selfie_left
– head turn to the left;
video_selfie_right
– head turn to the right;
video_selfie_down
– head tilt downwards;
video_selfie_high
– head raise up;
video_selfie_smile
– smile;
video_selfie_eyes
– blink;
video_selfie_scan
– scanning;
video_selfie_oneshot
– a one-frame analysis;
video_selfie_blank
– no action.
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with the video_* tags. Otherwise, it will be ignored by the algorithms.
Example of the correct tag set for a video file with the “blink” action:
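A plausible tag set for such a video, as a sketch: the tag names are the ones documented above, while the surrounding structure (a simple list) is illustrative.

```python
# Portrait selfie video where the person blinks: one data-type tag,
# one orientation tag, one action tag.
blink_video_tags = [
    "video_selfie",
    "video_selfie_eyes",
    "orientation_portrait",
]
```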
The following tag types should be specified in the system for photo files:
A tag for selfies:
photo_selfie
– to identify the image type as “selfie”.
Tags for photos/scans of ID cards:
photo_id
– to identify the image type as “ID”;
photo_id_front
– for the photo of the ID front side;
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with the video_* tags. Otherwise, it will be ignored by the algorithms.
Example of the correct tag set for a “selfie” photo file:
Example of the correct tag set for a photo file with the front side of an ID card:
Example of the correct set of tags for a photo file of the back of an ID card:
API changes
Improved the resource efficiency of server-based biometry analysis.
API can now extract action shots from videos of a person performing gestures. This is done to comply with the new Kazakhstan regulatory requirements for biometric identification. Dependencies with other system components are specified here.
Created a new report template that also complies with the requirements mentioned above.
If action shots are enabled, the thumbnails for the report are generated from them.
Added the new method to check the timezone settings: GET {{host}}/api/config
Added parameters to the GET {{host}}/api/event_sessions
method:
time_created
time_created.min
time_created.max
time_updated
time_updated.min
time_updated.max
session_id
session_id.exclude
sorting
offset
limit
total_omit
If you create a folder using SHOT_SET, the corresponding video will be in media.video_url.
Fixed the bug with CLIENT ADMIN being unable to change passwords for users from their company.
Security updates.
Face Identification 1:N is now live, significantly increasing the data processing capacity of the Oz API to find matches. Even huge face databases (containing millions of photos and more) are no longer an issue.
The Liveness (QUALITY) analysis now ignores photos tagged with photo_id, photo_id_front, or photo_id_back, preventing these photos from causing the tag-related analysis error.
Security updates.
You can now apply the Liveness (QUALITY) analysis to a single image.
Fixed the bug where the Liveness analysis could finish with the SUCCESS result with no media uploaded.
The default value for the extract_best_shot parameter is now True.
RAR archives are no longer supported.
By default, analyses.results_media.results_data now contains the confidence_spoofing parameter. However, if you need all three parameters for backward compatibility, you can change the response back to three parameters: confidence_replay, confidence_liveness, and confidence_spoofing.
Updated the default PDF report template.
The name of the PDF report now contains folder_id.
Security updates.
Set the autorotation of logs.
Added the CLI command for user deletion.
You can now switch off the video preview generation.
The ADMIN access token is now valid for 5 years.
Added the folder identifier folder_id to the report name.
Fixed bugs and optimized the API work.
For the sliced video, the system now deletes the unnecessary frames.
Added new methods: GET and POST at media/<media_id>/snapshot/.
Replaced the default report template.
The shot set preview now keeps images’ aspect ratio.
ADMIN and OPERATOR receive system_company as the company they belong to.
Added the company_id attribute to User, Folder, Analyse, and Media.
Added the Analysis group_id attribute.
Added the system_resolution attribute to Folder and Analysis.
The analysis resolution_status now returns the system_resolution value.
Removed the PATCH method for collections.
Added the resolution_status filter to Folder Analyses [LIST] and the analyse.resolution_status filter to Folder [LIST].
Added the audit log for Folder, User, Company.
Improved the company deletion algorithm.
Reforged the blacklist processing logic.
Fixed a few bugs.
The Photo Expert and KYC modules are now removed.
The endpoint for the user password change is now POST users/user_id/change-password instead of PATCH.
Provided log for the Celery app.
Added filters to the Folder [LIST] request parameters: analyse.time_created, analyse.results_data for the Documents analysis, results_data for the Biometry analysis, results_media_results_data for the QUALITY analysis. To enable filters, set the with_results_media_filter query parameter to True.
Added a new attribute for users – is_active (default True). If is_active == False, any user operation is blocked.
Added a new exception code (1401 with status code 401) for the actions of the blocked users.
Added shots sets preview.
You can now save a shots set archive to a disk (with the original_local_path and original_url attributes).
A new original_info attribute is added to store the md5, size, and mime type of a shots set.
Fixed ReportInfo for shots sets.
Added health check at GET api/healthcheck.
Fixed the shots set thumbnail URL. Now, the first frame of a shots set becomes its thumbnail.
Modified the retry policy: the default maximum number of analysis attempts is increased to 3, and a jitter configuration was introduced.
Changed the callback algorithm.
Refactored and documented the command line tools.
Refactored modules.
Changed the delete-personal-information endpoint and method from delete_pi to /pi and from POST to DELETE, respectively.
Improved the delete personal information algorithm.
It is now forbidden to add media to cleaned folders.
Changed the authorize/restore endpoint name from auth to auth_restore.
Added a new tag – video_selfie_oneshot.
Added the password validation setting (OZ_PASSWORD_POLICY).
Added auth, rest_unauthorized, and rps_with_token throttling (use OZ_THROTTLING_RATES in the configuration; off by default).
User permissions are now used to access static files (OZ_USE_PERMISSIONS_FOR_STATIC in the configuration, false by default).
Added a new folder endpoint – /delete_pi. It clears all personal information from a folder and the analyses related to this folder.
Fixed a bug where no error was raised when trying to synchronize empty collections.
If persons are uploaded, the analyse collection TFSS request is sent.
Added the fields_to_check parameter to the document analysis (by default, all fields are checked).
Added the double_page_spread parameter to the document analysis (True by default).
Fixed collection synchronization.
The authorization token can now be refreshed with expire_token.
Added support for application/x-gzip.
Renamed shots_set.images to shots_set.frames.
Added user sessions API.
Users can now change a folder owner (limited by permissions).
Changed dependencies rules.
Changed the access_token prolongation policy to fix a bug where the token was prolonged before the expiration permission was checked.
Moved oz_collection_binding (the collection synchronization functionality) to oz_core.
Simplified the shots sets functionality. One archive keeps one shot set.
Improved the document sides recognition for the docker version.
Moved the orientation tag check to liveness at quality analysis.
Added a default report template for Admin and Operator.
Updated the biometric model.
A new ShotsSet object is not created if there are no photos for it.
Updated the data exchange format for the documents' recognition module.
You can’t delete a Collection if there are associated analyses with Collection Persons.
Added time marks to the analysis: time_task_send_to_broker, time_task_received, time_task_finished.
Added a new authorization engine. You can now connect to Active Directory via LDAP (settings configuration required).
A new type of media in Folders – "shots_set".
You can’t delete a CollectionPerson if there are analyses associated with it.
Renamed the folder field resolution_suggest to operator_status.
Added a folder text field operator_comment.
The folder fields operator_status and operator_comment can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.
Only Admin and Client Admin can delete folder, folder media, report template, report template attachments, reports, and analyses (within their company).
Fixed a deletion error: when a report author is deleted, their reports are now deleted as well.
Client can now view only their own profile.
Client Operator can now edit only their profile.
Client can't delete own folders, media, reports, or analyses anymore.
Client Service can now create Collection Person and read reports within their company.
Client, Client Admin, and Client Operator have read access to user profiles only within their company.
A/B testing is now available.
Added support for expiration date header.
Added document recognition module Standalone/Dockered binding support.
Added a new role of Client Operator (like Client Admin without permissions for company and account management).
Client Admin and Client Operator can change the analysis status.
Only Admin and Client Admin (for their company) can create, update and delete operations for Collection and CollectionPerson models from now on.
Added a check for user permissions to report template when creating a folder report.
Collection creation now returns status code 201 instead of 200.
Introduced the new that can process videos and archives as well.
Added the .
The tags listed allow the algorithms to recognize the files as suitable for the (Liveness) and analyses.
photo_id_back – for the photo of the ID back side (ignored for any other analyses like or ).
Updated the Postman collection. Please see the new collection and at .