In this section, you'll learn how to perform analyses and where to get the numeric results.
Liveness checks that the person in a video is a real living person.
Biometry compares two or more faces from different media files and shows whether they belong to the same person.
Best shot is an addition to the Liveness check: the system chooses the best frame from a video and saves it as a picture for later use.
Blacklist checks whether a face in a photo or a video matches one of the faces in a pre-created database.
The Quantitative Results section explains where and how to find the numeric results of analyses.
To learn what each analysis means, please refer to Types of Analyses.
In this section, you will find the description of both API and SDK components of Oz Forensics Liveness and face biometric system. API is the backend component of the system, it is needed for all the system modules to interact with each other. SDK is the frontend component that is used to:
1) take videos or images which are then processed via API,
2) display results.
We provide two versions of the API.
The full version provides all the functionality of Oz API.
The Lite version is simple and lightweight, with only the necessary functions included.
The SDK component consists of web SDK and mobile SDK.
Web SDK is a plugin that you can embed into your website page, along with the adapter for this plugin.
Mobile SDK is the SDK for iOS and Android.
To get an access token, call POST /api/authorize/auth/ with your credentials (which you've got from us): the email and password go in the request body. The host address should be the API address (the one you've also got from us).
The successful response will return a pair of tokens: access_token and expire_token.
access_token is a key that grants you access to system resources. To access a resource, you need to add your access_token to the header.
headers = {'X-Forensic-Access-Token': <access_token>}
access_token is time-limited; the limit depends on the account type:
service accounts – OZ_SESSION_LONGLIVE_TTL (5 years by default),
other accounts – OZ_SESSION_TTL (15 minutes by default).
expire_token is the token you can use to renew your access token if necessary. If expire_date > current date, the current session's expire_date is set to the current date plus the time period defined above (depending on the account type).
To renew access_token and expire_token, call POST /api/authorize/refresh/. Add expire_token to the request body and X-Forensic-Access-Token to the header.
In case of success, you'll receive a new pair of access_token and expire_token. The old pair will be deleted upon the first authentication with the renewed tokens.
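As a sketch, the authorization flow above can be wrapped in a couple of Python helpers. The host value and the exact shape of the credentials body are assumptions; check them against the credentials and API address you received.

```python
import json
import urllib.request

API_HOST = "https://api.example.com"  # placeholder: use the API address you received


def forensic_headers(access_token):
    # Every authorized Oz API call carries the token in this header.
    return {"X-Forensic-Access-Token": access_token}


def authorize(email, password):
    # POST /api/authorize/auth/ with the email and password in the body;
    # returns the (access_token, expire_token) pair from the response.
    # The "credentials" wrapper is an assumption about the body shape.
    body = json.dumps({"credentials": {"email": email, "password": password}})
    req = urllib.request.Request(
        API_HOST + "/api/authorize/auth/",
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        tokens = json.load(resp)
    return tokens["access_token"], tokens["expire_token"]
```

The returned header dict can then be passed to every subsequent request.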
The Liveness detection algorithm is intended to detect a real living person in a media.
It is assumed that you're authorized in Oz API and have already uploaded media marked with the correct tags into the folder.
For API 4.0.8 and below, please note: the Liveness analysis works with videos and shotsets; images are ignored. If you want to analyze an image, upload it as a shotset (an archive) with a single image and mark it with the video_selfie_blank tag.
1. Launch the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for the response, add it to the payload at this step, as described in the Webhook section.
You'll need analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you've got at the previous step.
GET /api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the folder_id you've got at the previous step.
Repeat the check until the resolution_status and resolution fields change their status to anything other than PROCESSING, and treat this as a result.
For the Liveness analysis, look for the confidence_spoofing value related to the video you need. It indicates the chance that the person is not a real one.
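The polling loop above could be sketched as follows. The response layout (a results_media list with a per-media results_data object holding confidence_spoofing) is inferred from this article and may differ on your instance.

```python
import json
import time
import urllib.request


def is_finished(analysis):
    # Keep polling while resolution_status is still PROCESSING.
    return analysis.get("resolution_status") != "PROCESSING"


def liveness_scores(analysis):
    # confidence_spoofing per media: the chance the person is NOT real.
    return [m["results_data"]["confidence_spoofing"]
            for m in analysis.get("results_media", [])]


def poll_analysis(host, analyse_id, headers, delay=2.0):
    # GET /api/analyses/{analyse_id} until the analysis leaves PROCESSING.
    url = "%s/api/analyses/%s" % (host, analyse_id)
    while True:
        req = urllib.request.Request(url, headers=headers)
        with urllib.request.urlopen(req) as resp:
            analysis = json.load(resp)
        if is_finished(analysis):
            return analysis
        time.sleep(delay)
```

In production you would also cap the number of polling attempts rather than loop forever.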
The Biometry algorithm is intended to compare two or more photos and detect the level of similarity of the spotted faces. As source media, the algorithm accepts photos, videos, and documents (with photos).
It is assumed that you're authorized in Oz API and have already uploaded media marked with the correct tags into the folder.
1. Launch the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for the response, add it to the payload at this step, as described in the Webhook section.
You'll need analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you've got at the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you've got at the previous step.
Repeat until the resolution_status and resolution fields change their status to anything other than PROCESSING, and treat this as a result.
Check the response for the min_confidence value. It is the quantitative result of matching the people in the uploaded media.
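Once polling has finished, the score could be read like this; the field path (results_data at the analysis level) is an assumption based on this article, and the sample response is hypothetical.

```python
def biometry_score(analysis):
    # min_confidence: similarity of the least similar pair of faces,
    # from 0 (different people) to 1 (same person).
    return analysis["results_data"]["min_confidence"]


# Hypothetical finished response:
sample = {"resolution_status": "FINISHED",
          "results_data": {"min_confidence": 0.91}}
print(biometry_score(sample))  # → 0.91
```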
The "Best shot" algorithm is intended to choose the highest-quality, best-posed frame with a face from a video record. This algorithm works as a part of the Liveness analysis, so here we describe only the best shot part.
Please note: historically, some instances are configured to allow Best Shot only for certain gestures.
1. Launch the analysis similarly to the Liveness analysis, but make sure that the "extract_best_shot" parameter is set to True.
If you want to use a webhook for the response, add it to the payload at this step, as described in the Webhook section.
2. Check and interpret the results in the same way as for the pure Liveness analysis.
3. The URL of the best shot is located in results_media -> output_images -> original_url in the response.
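Following that path, a small extraction helper might look like this; treating both results_media and output_images as lists is an assumption about the response shape.

```python
def best_shot_url(analysis):
    # Follows the path from this article:
    # results_media -> output_images -> original_url.
    media = analysis["results_media"][0]
    return media["output_images"][0]["original_url"]
```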
How to compare a photo or video with ones from your database.
The blacklist check algorithm is designed to check whether a person is present in a database of preloaded photos. A video fragment and/or a photo can be used as a source for comparison.
It is assumed that you're authorized in Oz API and have already uploaded media marked with the correct tags into the folder.
1. Launch the analysis: POST /api/folders/{{folder_id}}/analyses/
If you want to use a webhook for the response, add it to the payload at this step, as described in the Webhook section.
You'll need analyse_id or folder_id from the response.
2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:
GET /api/analyses/{{analyse_id}} – for the analyse_id you've got at the previous step.
GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you've got at the previous step.
Wait for the resolution_status and resolution fields to change their status to anything other than PROCESSING, and treat this as a result.
If you want to know which person from your collection matched the media you've uploaded, find the collection analysis in the response, check results_media, and retrieve person_id. This is the ID of the person who matched the person in your media. To get information about this person, call GET /api/collections/{{collection_id}}/persons/{{person_id}} with the IDs of your collection and person.
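A sketch of pulling the matched person_id values out of a finished collection analysis; the exact nesting (person_id inside each media's results_data) is an assumption inferred from this article.

```python
def matched_person_ids(collection_analysis):
    # Collect person_id values from results_media of the collection
    # (Black list) analysis response.
    ids = []
    for media in collection_analysis.get("results_media", []):
        person = media.get("results_data", {}).get("person_id")
        if person:
            ids.append(person)
    return ids
```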
To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked by tags: they describe what's pictured in a media file and determine the applicable analyses.
For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into the ZIP archive and apply the tags.
To create a folder and upload media to it, call POST /api/folders/
To add files to an existing folder, call POST /api/folders/{{folder_id}}/media/
Add the files to the request body; the tags should be specified in the payload.
Here's an example of the payload for a passive Liveness video and an ID front side photo.
An example of usage (Postman):
The successful response will return the folder data.
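As an illustration of such a tag payload: the media keys (video1, photo1) and the media:tags field name are assumptions — match them to the file part names of your multipart request — while the tag values come from the tag lists in this document.

```python
import json

# Tags for a passive Liveness video and the front side of an ID:
payload = {
    "media:tags": {
        "video1": ["video_selfie", "video_selfie_blank", "orientation_portrait"],
        "photo1": ["photo_id", "photo_id_front"],
    }
}

print(json.dumps(payload, indent=2))
```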
Possible errors (code, message, cause):
400 "Could not locate field for key_path expire_token from provided dict data" – expire_token wasn't found in the request body.
401 "Session not found" – the session with the expire_token you've passed doesn't exist.
403 "You have not access to refresh this session" – the user making the request is not the owner of this expire_token session.
This article describes how to create a collection via API, how to add persons and photos to this collection and how to delete them and the collection itself if you no longer need it. You can do the same in Web console, but this article covers API methods only.
A collection in Oz API is a database of facial photos that are used for comparison with the face from the captured photo or video via the Black list analysis.
Person represents a human in the collection. You can upload several photos for a single person.
The collection should be created within a company, so you need your company's company_id as a prerequisite.
If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the company_id you've received.
Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and the company_id of your company.
In the response, you'll get your new collection identifier: collection_id.
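A minimal sketch of the request body for POST /api/collections/; the "alias" field name is an assumption, and the company_id value is a placeholder.

```python
# Hypothetical request body for creating a collection:
new_collection = {
    "alias": "blacklist_main",
    "company_id": "<your company_id>",
}
```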
To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using the collection_id of the collection needed. In the request body, add one or several photos and mark them with the appropriate tags in the payload.
The response will contain the person_id, which stands for the person identifier within your collection.
If you want to add the person's name, add it to the request payload as metadata.
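A hypothetical payload for adding a person with a photo and a name; both field names (media:tags, person:metadata) are assumptions about the payload shape, so verify them against your instance.

```python
# Hypothetical payload for POST /api/collections/{collection_id}/persons/:
payload = {
    "media:tags": {"photo1": ["photo_selfie"]},
    "person:metadata": {"name": "John Doe"},
}
```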
To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ with the appropriate person_id. Fill in the request body as you did before with POST /api/collections/{{collection_id}}/persons/.
To obtain information on all the persons within a single collection, call GET /api/collections/{{collection_id}}/persons/.
To obtain a list of photos of a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain person_image_id. You'll need this ID, for instance, if you want to delete a photo.
To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity that has any related analyses, which means the Black list analysis used this person's photos for comparison and found a match. To delete such a person, first delete these analyses using DELETE /api/analyses/{{analyse_id}} with the analyse_id of the collection (Black list) analysis.
To delete all the collection-related analyses, get a list of folders where the Black list analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the analyse_id of the required analysis and delete the analysis: DELETE /api/analyses/{{analyse_id}}.
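A helper for the cleanup loop above might pick out the Black list analysis IDs from each folder response; the "analyses" list and its field names are assumptions about the folder JSON.

```python
def collection_analyse_ids(folder):
    # IDs of Black list (COLLECTION) analyses in one folder response.
    return [a["analyse_id"]
            for a in folder.get("analyses", [])
            if a.get("type") == "COLLECTION"]
```

You would call this for every folder returned by GET /api/folders/?analyse.type=COLLECTION, then issue a DELETE per ID.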
To delete a single photo of a person, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}}/images/{{person_image_id}} with the collection, person, and image identifiers specified.
Delete the information on all the persons from the collection as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.
This article describes how to get the analysis scores.
When you perform an analysis, the result you get is a number. For biometry, it reflects a chance that the two or more people represented in your media are the same person. For liveness, it shows a chance of deepfake or a spoofing attack: that the person in uploaded media is not a real one. You can get these numbers via API from a JSON response.
Make a request to the folder or folder list to get a JSON response, with the with_analyses parameter set to True.
For the Biometry analysis, check the response for the min_confidence value. This value is the quantitative result of matching the people in the uploaded media.
For the Liveness analysis, look for the confidence_spoofing value related to the video you need. This value is the chance that the person is not a real one.
To process a bunch of analysis results, you can parse the appropriate JSON response.
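For example, a batch-processing sketch could walk one folder's JSON and collect both scores; the field names and nesting follow the descriptions in this article and may differ on your instance.

```python
def folder_scores(folder):
    # Collect quantitative results from a folder JSON fetched with
    # with_analyses=True.
    scores = {}
    for analysis in folder.get("analyses", []):
        if analysis.get("type") == "BIOMETRY":
            scores["min_confidence"] = analysis["results_data"]["min_confidence"]
        elif analysis.get("type") == "QUALITY":
            scores["confidence_spoofing"] = [
                m["results_data"]["confidence_spoofing"]
                for m in analysis.get("results_media", [])
            ]
    return scores
```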
Here, you'll get acquainted with types of analyses that Oz API provides and will learn how to interpret the output.
Using Oz API, you can perform one of the following analyses:
The possible results of the analyses are explained here.
Each of the analyses has its own threshold that determines the output of these analyses. By default, the threshold for Liveness is 0.5 (50%); for Blacklist and Biometry (Face Matching), it is 0.85 (85%).
Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.
Blacklist: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.
Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
To configure the threshold depending on your needs, please contact us.
For more information on how to read the numbers in analyses' results, please refer to Quantitative Results.
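The threshold logic above can be sketched as two simple checks; the numbers are the defaults stated in this article, and your instance may be configured differently.

```python
# Default thresholds from this article:
LIVENESS_THRESHOLD = 0.50     # Quality (Liveness)
FACE_MATCH_THRESHOLD = 0.85   # Biometry and Blacklist


def is_attack(confidence_spoofing):
    # Quality: a score at or above the threshold means an attack is detected.
    return confidence_spoofing >= LIVENESS_THRESHOLD


def is_match(score):
    # Biometry/Blacklist: a score at or above the threshold means
    # the faces are considered the same person / found in the database.
    return score >= FACE_MATCH_THRESHOLD
```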
The Biometry algorithm compares several media and checks whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media files (for details, please refer to Rules of Assigning Analyses).
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:
100% (1) – the faces are similar, the media represent the same person,
0% (0) – the faces are not similar and belong to different people.
The Liveness detection (Quality) algorithm aims to check whether a person in a media is a real human acting in good faith, not a fake of any kind.
The Best Shot algorithm picks the best shot from a video (the best-quality frame where the face is seen most clearly). It is an addition to Liveness.
After the check, the analysis shows the chance of a spoofing attack as a percentage.
100% (1) – an attack is detected, the person in the video is not a real living person,
0% (0) – a person in the video is a real living person.
*Spoofing in biometry is a kind of scam where a person disguises themselves as another person using program and non-program tools such as deepfakes, masks, ready-made photos, or fake videos.
The Documents analysis aims to recognize the document and check if its fields are correct according to its type.
Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.
As an output, you'll get a list of document fields with recognition results for each field and a result of checking that can be:
The documents passed the check successfully,
The documents failed to pass the check.
Additionally, the result of Biometry check is displayed.
The Blacklist check algorithm is used to determine whether the person in a photo or video is present in a database of pre-uploaded images. This database can be used as a blacklist or a whitelist. In the former case, the person's face is compared with the faces of known swindlers; in the latter case, it might be a list of VIPs.
After comparison, the algorithm provides a number that represents the similarity level. The number varies from 100 to 0% (1 to 0), where:
100% (1) – the person in an image or video matches with someone in the blacklist,
0% (0) – the person is not found in the blacklist.
To work properly, the resolution algorithms need each uploaded media file to be marked with special tags. The tags differ for videos and images. They help the algorithms identify what should be in the photo or video and analyze the content.
The following tag types should be specified in the system for video files.
To identify the data type of the video:
video_selfie
To identify the orientation of the video:
orientation_portrait
– portrait orientation;
orientation_landscape
– landscape orientation.
To identify the action on the video:
video_selfie_left
– head turn to the left;
video_selfie_right
– head turn to the right;
video_selfie_down
– head tilt downwards;
video_selfie_high
– head raise up;
video_selfie_smile
– smile;
video_selfie_eyes
– blink;
video_selfie_scan
– scanning;
video_selfie_oneshot
– a one-frame analysis;
video_selfie_blank
– no action.
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_* tags. Otherwise, it will be ignored by the algorithms.
Example of the correct tag set for a video file with the “blink” action:
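As a sketch, such a tag set could be assembled from the tags listed above; how the list is packaged into the request payload depends on your upload method.

```python
# Tag set for a portrait video with the "blink" action:
tags = ["video_selfie", "video_selfie_eyes", "orientation_portrait"]
```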
The following tag types should be specified in the system for photo files:
A tag for selfies:
photo_selfie
– to identify the image type as “selfie”.
Tags for photos/scans of ID cards:
photo_id
– to identify the image type as “ID”;
photo_id_front
– for the photo of the ID front side;
Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_* tags. Otherwise, it will be ignored by the algorithms.
Example of the correct tag set for a “selfie” photo file:
Example of the correct tag set for a photo file with the face side of an ID card:
Example of the correct set of tags for a photo file of the back of an ID card:
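Sketches for the three examples above, built from the photo tags listed in this section. Note that photo_id_back is an assumption by analogy with photo_id_front, since the back-side tag is not spelled out above.

```python
# Plausible tag sets for photo files:
selfie_tags = ["photo_selfie"]
id_front_tags = ["photo_id", "photo_id_front"]
id_back_tags = ["photo_id", "photo_id_back"]  # photo_id_back is an assumption
```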
Oz API is the most important component of the system: it connects all other components with each other. Oz API:
provides the unified REST API interface to run the Liveness and Biometry analyses
processes authorization and user permissions management
tracks and records requested orders and analyses to the database
archives the inbound media files
collects telemetry from connected mobile apps
provides settings for specific device models
generates reports with analyses results
Description of the REST API scheme:
The webhook feature simplifies getting the analyses' results. Instead of polling after the analyses are launched, add a webhook that will call your service once the results are ready.
When you create a folder, add the webhook endpoint (resolution_endpoint) to the payload section of your request body.
You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
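A minimal sketch of a folder-creation payload carrying the webhook endpoint; the URL is a placeholder, and any other payload fields (tags, analysis parameters) are omitted here.

```python
# Hypothetical payload fragment with a webhook endpoint:
payload = {
    "resolution_endpoint": "https://your-service.example.com/oz-webhook",
}
```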
The description of the objects you can find in Oz Forensics system.
System objects in Oz Forensics products are structured hierarchically as described below.
On the top level, there is a Company. You can use one copy of Oz API to work with several companies.
The next level is a User. A company can contain any amount of users. There are several roles of users with different permissions. For more information, refer to User Roles.
When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders, and each folder can contain any number of media files. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed on