
Oz API

Oz API is the central component of the system; it connects all the other components together. Oz API:

  • provides the unified Rest API interface to run the Liveness and Biometry analyses,

  • processes authorization and user permissions management,

  • tracks and records requested orders and analyses to the database (for example, in 6.0, the database size for 100 000 orders with analyses is ~4 GB),

  • archives the inbound media files,

  • collects telemetry from connected mobile apps,

  • provides settings for specific device models,

  • generates reports with analyses' results.

For the latest API methods collection, please refer to our API reference.

In API 6.0, we introduced two new operation modes: Instant API and single request.

In the Instant API mode – also known as non-persistent – no data is stored at any point. You send a request, receive the result, and can be confident that nothing is saved. This mode is ideal for handling sensitive data and helps ensure GDPR compliance. Additionally, it reduces storage requirements on your side.

Single request mode allows you to send all media along with the analysis request in one call and receive the results in the same response. This removes the need for multiple API calls – one is sufficient. However, if needed, you can still use the multi-request mode.


Basic Scenarios

Liveness checks that the person in a video is a real, living person.

Liveness

Biometry compares two or more faces from different media files and shows whether the faces belong to the same person or not.

Biometry (Face Matching)

Best shot is an addition to the Liveness check. The system chooses the best frame from a video and saves it as a picture for later use.

Best Shot

Blacklist checks whether a face in a photo or a video matches one of the faces in a pre-created database.

Blacklist (1:N) Check

Best Shot

The "Best shot" algorithm is intended to choose the highest-quality frame with a face from a video recording. This algorithm works as a part of the Liveness analysis, so here, we describe only the best shot part.

Please note: historically, some instances are configured to allow Best Shot only for certain gestures.

Authentication and Non-Instant Data Handling

In API 6.0, we've implemented new analysis modes:

Authentication
Uploading Media
Quantitative Results
Using a Webhook to Get Results
Single Request
Instant API: Non-Persistent Mode

Processing steps

1. Initiate the analysis similar to Liveness, but make sure that extract_best_shot is set to true as shown below:

If you want to use a webhook for response, add it to the payload at this step, as described here.

2. Check and interpret results in the same way as for the pure Liveness analysis.

3. The URL to the best shot is located in the results_media -> output_images -> original_url response.

liveness
request body
{
  "analyses": [{
    "type": "quality",
    "source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // optional; omit to include all media from the folder
    "params" : {
      "extract_best_shot": true // the mandatory part for the best shot analysis
    }
  }]
}
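Once the analysis finishes, the best shot URL can be pulled from the response. Here is a minimal sketch (not the official client): the helper walks the results_media -> output_images -> original_url path described above, and the sample response fragment is illustrative.

```python
def best_shot_url(analysis):
    """Return the first best-shot URL found in an analysis response, or None.

    Follows the results_media -> output_images -> original_url path;
    the exact response shape may vary per deployment.
    """
    for media in analysis.get("results_media", []):
        for image in media.get("output_images", []):
            url = image.get("original_url")
            if url:
                return url
    return None

# Illustrative response fragment (not a real server response).
sample = {
    "type": "QUALITY",
    "results_media": [
        {"output_images": [{"original_url": "https://host/best_shot.jpg"}]}
    ],
}
print(best_shot_url(sample))  # https://host/best_shot.jpg
```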

Liveness

The Liveness detection algorithm is intended to detect a real living person in a media file.

Requirements

  1. You're authorized.

  2. You have already created a folder and added your media, marked with the correct tags, to this folder. For API 4.0.8 and below, please note: the Liveness analysis works with videos and shotsets; images are ignored. If you want to analyze an image, upload it as a shotset (archive) with a single image and mark it with the video_selfie_blank tag.

Processing Steps

1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/

If you want to use a webhook for response, add it to the payload at this step, as described in Using a Webhook to Get Results.

You'll need analysis_id or folder_id from the response.

2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:

  • GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.

  • GET /api/folders/{{folder_id}}/analyses/ – for all analyses performed on media in the folder with the folder_id you have from the previous step.

Repeat the check until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.

For the Liveness analysis, look for the confidence_spoofing value related to the video you need. It indicates the probability that the person is not real.
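The polling loop described above can be sketched as follows. This is an illustrative sketch, not the official client: `fetch_analysis` stands in for your HTTP GET to /api/analyses/{{analysis_id}} with the X-Forensic-Access-Token header.

```python
import time

PENDING = {"PROCESSING"}

def poll_analysis(fetch_analysis, analysis_id, interval=2.0, timeout=120.0):
    """Poll until resolution_status leaves PROCESSING; return the final analysis JSON.

    fetch_analysis(analysis_id) is a stand-in for your HTTP call to
    GET /api/analyses/{{analysis_id}} (an assumption, not an official helper).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        analysis = fetch_analysis(analysis_id)
        if analysis.get("resolution_status") not in PENDING:
            return analysis
        time.sleep(interval)
    raise TimeoutError(f"analysis {analysis_id} still PROCESSING after {timeout}s")
```

For Liveness, read confidence_spoofing from the returned results once the status is final.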

Using a Webhook to Get Results

The webhook feature simplifies getting analyses' results. Instead of polling after the analyses are launched, add a webhook that will call your website once the results are ready.

When you create a folder, add the webhook endpoint (resolution_endpoint) into the payload section of your request body:

Postman
Payload example
{
  "resolution_endpoint": "address.com", // use the address of your webhook endpoint here
  ... // other request details – folder etc.
}

You'll receive a notification each time the analyses are completed for this folder. The webhook request will contain information about the folder and its corresponding analyses.
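A minimal receiver for such notifications might look like this. It is a sketch, assuming the webhook POSTs the folder/analyses JSON described above; the exact payload shape may differ per deployment.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def completed_analyses(payload):
    """Pick finished analyses out of a webhook payload (shape assumed, see note)."""
    return [a for a in payload.get("analyses", [])
            if a.get("resolution_status") not in (None, "PROCESSING")]

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        for analysis in completed_analyses(payload):
            print(analysis.get("type"), analysis.get("resolution_status"))
        self.send_response(200)
        self.end_headers()

# To actually listen on port 8000, run:
# HTTPServer(("", 8000), WebhookHandler).serve_forever()
```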

Blacklist (Collection) Management in Oz API

This article describes how to create a collection via API, how to add persons and photos to it, and how to delete them, along with the collection itself, if you no longer need it. You can do the same in the Web Console, but this article covers API methods only.

A collection in Oz API is a database of facial photos that are compared with the face from the captured photo or video via the Blacklist analysis.

A person represents a human in the collection. You can upload several photos for a single person.

Instant API: Non-Persistent Mode

Instant API, or the non-persistent operation mode, was introduced in API 6.0.1. In this mode, we do not save any data anywhere. All data is used only within a request: you send it, receive the response, and that's all; nothing gets recorded. This ensures you do not store any sensitive data, which might be crucial for GDPR compliance. It also significantly reduces storage requirements.

To enable this mode, set the OZ_APP_COMPONENTS parameter to stateless when you prepare the config.py file to run the API. Call POST /api/instant/folders/ to send the request without saving any data. Authorization for Instant API should be set up on your side.

Please note: as Instant API doesn't store data, it is not intended to work with Blacklist (1:N).

Quantitative Results

This article describes how to get the analysis scores.

When you perform an analysis, the result you get is a number. For biometry, it reflects the probability that the two or more people represented in your media are the same person. For liveness, it shows the probability of a deepfake or a spoofing attack: that the person in the uploaded media is not real. You can get these numbers via API from a JSON response.

  1. Authorize.

  2. Make a request to the folder or folder list to get a JSON response. Set the with_analyses parameter to true.

Uploading Media

To launch one or more analyses for your media files, you need to create a folder via Oz API (or use an existing folder) and put the files into this folder. Each file should be marked with tags: they describe what's pictured in a media file and determine the applicable analyses.

For API 4.0.8 and below, please note: if you want to upload a photo for the subsequent Liveness analysis, put it into a ZIP archive and apply the video-related tags.

To create a folder and upload media to it, call POST /api/folders/

To add files to the existing folder, call POST /api/folders/{{folder_id}}/media/
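As a sketch, the payload part of such an upload can be assembled like this. The file keys and tags are illustrative, and the actual multipart HTTP call with the files themselves is left out.

```python
import json

def upload_payload(tags_by_key):
    """Build the payload part that maps each multipart file key to its tags."""
    return {"media:tags": tags_by_key}

# Example: a passive Liveness video plus an ID front-side photo.
payload = upload_payload({
    "video1": ["video_selfie", "video_selfie_blank", "orientation_portrait"],
    "photo1": ["photo_id", "photo_id_front"],
})
# Send this JSON as the payload part of POST /api/folders/ (or
# POST /api/folders/{{folder_id}}/media/), alongside the files
# attached under the keys video1 and photo1.
print(json.dumps(payload))
```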

Oz API Lite

What is Oz API Lite, when and how to use it.

API Lite is deprecated and no longer maintained. Its functionality has been added to the full API: see Instant API.

Oz API Lite is the lightweight version of Oz API. The Lite version is less resource-demanding, offers higher throughput, and is easier to work with. The analyses are made within the API Lite image. As Oz API Lite doesn't include any additional services like statistics or data storage, this version is the one to use when you need high performance.

Oz API Postman Collections

Download and install the Postman client from this page. Then download the JSON file needed:

Oz API Postman collections

6.0

Oz API Lite Postman Collection

Download and install the Postman client from this page. Then download the JSON file needed:

1.2.0

1.1.1

Examples of methods

To check the Liveness processor, call GET /v1/face/liveness/health.

To check the Biometry processor, call GET /v1/face/pattern/health.

To perform the liveness check for an image, call POST /v1/face/liveness/detect (it takes an image as an input and displays the evaluation of the spoofing attack chance in this image).

To compare two faces in two images, call POST /v1/face/pattern/extract_and_compare (it takes two images as an input, derives the biometry templates from these images, and compares them).

To compare an image with a bunch of images, call POST /v1/face/pattern/extract_and_compare_n.

For the full list of Oz API Lite methods, please refer to API Methods.

How to Create a Collection

The collection should be created within a company, so you need your company's company_id as a prerequisite.

If you don't know your ID, call GET /api/companies/?search_text=test, replacing "test" with your company name or a part of it. Save the company_id you've received.

Now, create a collection via POST /api/collections/. In the request body, specify the alias for your collection and company_id of your company:

In a response, you'll get your new collection identifier: collection_id.

How to Add a Person or a Photo to a Collection

To add a new person to your collection, call POST /api/collections/{{collection_id}}/persons/, using the collection_id of the collection needed. In the request body, add one or several photos. Mark them with appropriate tags in the payload:

The response will contain the person_id which stands for the person identifier within your collection.

If you want to add the person's name, add it to the request payload as metadata:

To add more photos of the same person, call POST {{host}}/api/collections/{{collection_id}}/persons/{{person_id}}/images/ using the appropriate person_id. The request body should be filled as you did it before with POST /api/collections/{{collection_id}}/persons/.

To obtain information on all the persons within the single collection, call GET /api/collections/{{collection_id}}/persons/.

To obtain a list of photos for a single person, call GET /api/collections/{{collection_id}}/persons/{{person_id}}/images/. For each photo, the response will contain person_image_id. You'll need this ID, for instance, if you want to delete the photo.

How to Remove a Photo or a Person from a Collection

To delete a person with all their photos, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}} with the appropriate collection and person identifiers. All the photos will be deleted automatically. However, you can't delete a person entity if it has any related analyses, which means the Blacklist analysis used a photo of this person for comparison and found a match. To delete such a person, first delete these analyses using DELETE /api/analyses/{{analysis_id}} with the analysis_id of the collection (Blacklist) analysis.

To delete all the collection-related analyses, get a list of folders where the Blacklist analysis has been used: call GET /api/folders/?analyse.type=COLLECTION. For each folder from this list (GET /api/folders/{{folder_id}}/), find the analysis_id of the required analysis and delete the analysis: DELETE /api/analyses/{{analysis_id}}.

To delete a single photo of a person, call DELETE /api/collections/{{collection_id}}/persons/{{person_id}}/images/{{media_id}}/ with the collection, person, and image identifiers specified.

How to Delete a Collection

Delete the information on all the persons from this collection as described above, then call DELETE /api/collections/{{collection_id}}/ to delete the remaining collection data.
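The request bodies for this flow can be sketched as follows. This is an illustrative sketch: the identifiers are placeholders, and the payload shapes follow the examples in this article.

```python
# Sketch of the collection-management request bodies; all IDs are placeholders.

def create_collection_body(alias, company_id):
    # Body for POST /api/collections/
    return {"alias": alias, "company_id": company_id}

def add_person_payload(file_key, tags, person_info=None):
    # Payload part for POST /api/collections/{{collection_id}}/persons/;
    # the photo itself is attached under file_key in the multipart form.
    payload = {"media:tags": {file_key: tags}}
    if person_info:
        payload["person:meta_data"] = {"person_info": person_info}
    return payload

body = create_collection_body("blacklist", "your_company_id")
payload = add_person_payload(
    "image1",
    ["photo_selfie", "orientation_portrait"],
    {"first_name": "John", "last_name": "Doe"},
)
```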

If you use Instant API with Web SDK, in Web Adapter configuration, set architecture to lite. The version of Web SDK should be 1.7.14 or above.

Requirements

  • CPU: 16 cores, 32 threads, base frequency – 2.3 GHz, single-core maximum turbo frequency – 4 GHz.

  • RAM: 32 GB, DDR5, Dual Channel.

To evaluate your RPS and RPM and configure your system for optimal performance, please contact us.

Configuration File Parameters

Prior to the launch, prepare a configuration file with the parameters listed below.

Mandatory Parameters

These parameters are crucial to run Instant API.

Installation

Docker

Docker Compose

Instant API Methods

You can find the Instant API methods here or download the collection below.

  • OZ-Forensic Instant 6.0.0.postman_collection.json (10KB)

3. For the Biometry analysis, check the response for the min_confidence value:

This value is a quantitative result of matching the people on the media uploaded.

4. For the Liveness Analysis, seek the confidence_spoofing value related to the video you need:

This value is the probability that the person is not real.

To process a bunch of analysis results, you can parse the appropriate JSON response.
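Such parsing can be sketched as follows. The sample fragment mirrors the response excerpts in this article; the helper assumes the scores live in results_data (for liveness they may instead sit under results_media, as the excerpts show).

```python
def scores(items):
    """Collect min_confidence (biometry) and confidence_spoofing (liveness) per analysis."""
    out = []
    for item in items:
        for analysis in item.get("analyses", []):
            data = analysis.get("results_data") or {}
            out.append({
                "type": analysis.get("type"),
                "min_confidence": data.get("min_confidence"),
                "confidence_spoofing": data.get("confidence_spoofing"),
            })
    return out

# Illustrative fragment based on the excerpts above.
sample_items = [{
    "analyses": [
        {"type": "BIOMETRY",
         "results_data": {"max_confidence": 0.997926354, "min_confidence": 0.997926354}},
        {"type": "LIVENESS",
         "results_data": {"confidence_spoofing": 0.55790174}},
    ]
}]
```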

{
  "items": [
    {
      "analyses": [
        {
          "analysis_id": "biometry_analysis_id",
          "folder_id": "some_folder_id",
          "type": "BIOMETRY",
          "state": "FINISHED",
          "results_data": {
            "max_confidence": 0.997926354,
            "min_confidence": 0.997926354
          },
          ...
        }
      ]
    }
  ]
}
{
  "items": [
    {
      "analyses": [
        {
          "analysis_id": "liveness_analysis_id",
          "source_media": [
            {
              "media_id": "your_media_id",
              "media_type": "VIDEO_FOLDER"
            }
          ],
          "results_media": [
            {
              "results_data": {
                "confidence_spoofing": 0.55790174
              },
              ...
            }
          ]
        }
      ]
    }
  ]
}
request body
{  
  "analyses": [
    {
      "type": "quality",
      "source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"], // optional; omit to include all media from the folder
      ...
    }
  ]
}
[
  {
    // you may have multiple analyses in the list
    // pick the one you need by analyse_id or type
    "analysis_id": "1111aaaa-11aa-11aa-11aa-111111aaaaaa",
    "type": "QUALITY",
    "results_media": [
      {
        // if you have multiple media in one analysis, match score with media by source_video_id/source_shots_set_id 
        "source_video_id": "1111aaaa-11aa-11aa-11aa-111111aaaaab", // for shots_set media, the key would be source_shots_set_id 
        "results_data": 
        {
          "confidence_spoofing": 0.05790174 // quantitative score for this media
        }
      },
      ...
    ],
    "resolution_status": "SUCCESS", // qualitative resolution (based on all media)
    ...
  }
  ...
]
{
  "alias": "blacklist",
  "company_id": "your_company_id"
}
{
    "media:tags": {
        "image1": [
            "photo_selfie",
            "orientation_portrait"
        ]
    }
}
{
    "person:meta_data": {
        "person_info": {
            "first_name": "John",
            "middle_name": "Jameson",
            "last_name": "Doe"
        }
    },
    ...
}
# application components list, values for Instant API: auth,stateless
# auth is for Oz authentication component
OZ_APP_COMPONENTS=stateless
# local storage support enable
OZ_LOCAL_STORAGE_SUPPORT_ENABLE=false
# service tfss host
OZ_SERVICE_TFSS_HOST=http://xxx.xxx.xxx.xxx:xxxx
# allowed hosts
APP_ALLOWED_HOSTS=example-host1.com,example-host2.com
# secret key
OZ_API_SECRET_KEY=long_secret_key
CONTAINER_NAME=<container name>
DEPLOY_INSTANT_PORT_EXT=<external port>
INSTANT_IMAGE=<provided image name>
ADDITIONAL_PARAMS="-e LICENSE_KEY=<your license key>"

docker run -d --name $CONTAINER_NAME \
      $ADDITIONAL_PARAMS \
      -p ${DEPLOY_INSTANT_PORT_EXT}:8080 \
      $INSTANT_IMAGE
services:
  oz-api-instant:
    image: <provided image>
    container_name: oz-api-instant
    environment:
        - LICENSE_KEY=<your license key>
        # - TF_ENABLE_ONEDNN_OPTS=1 # In some cases, especially for AMD CPUs, set to 0
        # - API_LISTEN_PORT=8080
        # - LOG_LEVEL=info # ['critical', 'error', 'warning', 'info', 'debug', 'trace']
    restart: always
    ports:
      - 8080:8080

Add the files to the request body; tags should be specified in the payload.

Here's an example of the payload for a passive Liveness video and an ID front-side photo.

An example of usage (Postman):

The successful response will return the folder data.

tags
video-related
payload
{
    "media:tags": { // this section sets the tags for the media files that you upload
    // media files are referenced by the keys in a multipart form
        "video1": [ // your file key
        // a typical set of tags for a passive Liveness video
            "video_selfie", // video of a person
            "video_selfie_blank", // no gesture used
            "orientation_portrait" // video orientation
        ],
        "photo1": [
        // a typical set of tags for an ID front side
            "photo_id",
            "photo_id_front"
        ]
    }
}
6.0 Instant API

5.3 and 5.2

5.0

Oz API 5.1.0 works with the same collection.

4.0

3.33

How to Import a Postman Collection:

Launch the client and import Oz API collection for Postman by clicking the Import button:

Click files, locate the JSON needed, and hit Open to add it:

The collection will be imported and will appear in the Postman interface:

  • OZ-Forensic 6.0.0.postman_collection.json (256KB)
  • OZ-Forensic Instant 6.0.0.postman_collection.json (10KB)
  • OZ-Forensic 5.2.0-.postman_collection.json (301KB)
  • OZ-Forensic 5.0.0.postman_collection.json (299KB)
  • OZ-Forensic 4.0.0.postman_collection.json (165KB)
  • OZ-Forensic 3.33.0.postman_collection.json (168KB)
How to Import a Postman Collection:

Launch the client and import Oz API Lite collection for Postman by clicking the Import button:

Click files, locate the JSON needed, and hit Open to add it:

The collection will be imported and will appear in the Postman interface:

  • Oz API Lite 1.2.0.json (11KB)
  • Oz API Lite 1.1.1.postman_collection.json (8KB)

Blacklist (1:N) Check

How to compare a photo or video with ones from your database.

The blacklist check algorithm is designed to check the presence of a person using a database of preloaded photos. A video fragment and/or a photo can be used as a source for comparison.

Prerequisites:

  1. You're authorized.

  2. You have already created a folder and added your media, marked with the correct tags, to this folder.

Processing steps:

1. Initiate the analysis: POST /api/folders/{{folder_id}}/analyses/

If you want to use a webhook for response, add it to the payload at this step, as described in Using a Webhook to Get Results.

You'll need analysis_id or folder_id from the response.

2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:

  • GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.

  • GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.

Wait for the resolution_status and resolution fields to change the status to anything other than PROCESSING and treat this as a result.

If you want to know which person from your collection matched with the media you have uploaded, find the collection analysis in the response, check results_media, and retrieve person_id. This is the ID of the person who matched with the person in your media. To get the information about this person, use GET /api/collections/{{collection_id}}/persons/{{person_id}} with IDs of your collection and person.
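Retrieving the matched person can be sketched like this. It is an illustrative sketch: the helper assumes person_id appears on (or inside results_data of) each results_media entry, and the sample fragment is not a real server response.

```python
def matched_person_ids(collection_analysis):
    """Collect person_id values from a finished Blacklist (collection) analysis.

    person_id is looked up both on the media entry and in its results_data,
    since the exact location may vary per deployment (an assumption).
    """
    ids = []
    for media in collection_analysis.get("results_media", []):
        person_id = media.get("person_id") or (media.get("results_data") or {}).get("person_id")
        if person_id:
            ids.append(person_id)
    return ids

# Illustrative fragment of a collection analysis.
sample = {
    "type": "COLLECTION",
    "results_media": [{"results_data": {"person_id": "p-123"}}],
}
```

Each returned ID can then be resolved via GET /api/collections/{{collection_id}}/persons/{{person_id}}.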

Biometry (Face Matching)

The Biometry algorithm is intended to compare two or more photos and detect the level of similarity of the spotted faces. As a source media, the algorithm takes photos, videos, and documents (with photos).

Requirements

  1. You're authorized.

  2. You have already created a folder and added your media, marked with the correct tags, to this folder.

Processing steps

1. Initiate the analysis for the folder: POST /api/folders/{{folder_id}}/analyses/

If you want to use a webhook for response, add it to the payload at this step, as described in Using a Webhook to Get Results.

You'll need analysis_id or folder_id from the response.

2. If you use a webhook, just wait for it to return the information needed. Otherwise, initiate polling:

  • GET /api/analyses/{{analysis_id}} – for the analysis_id you have from the previous step.

  • GET /api/folders/{{folder_id}} – for all analyses performed on media in the folder with the folder_id you have from the previous step.

Repeat until the resolution_status and resolution fields change to any status other than PROCESSING, and treat this as the result.

Check the response for the min_confidence value. It is a quantitative result of matching the people on the media uploaded.
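Interpreting the score can be sketched as follows. The 0.85 threshold is an arbitrary illustration, not an official recommendation, and the sample fragment is illustrative.

```python
def same_person(analysis, threshold=0.85):
    """Return True when min_confidence meets the threshold.

    The 0.85 default is an arbitrary illustration; choose a threshold
    that fits your own risk profile.
    """
    data = analysis.get("results_data") or {}
    score = data.get("min_confidence")
    return score is not None and score >= threshold

# Illustrative finished-analysis fragment.
finished = {
    "type": "BIOMETRY",
    "state": "FINISHED",
    "results_data": {"max_confidence": 0.997926354, "min_confidence": 0.997926354},
}
print(same_person(finished))  # True
```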

Single Request

Overview and Benefits

In version 6.0.1, we introduced a feature that allows you to send all required data and receive the analysis result within a single request.

Before 6.0.1, interacting with the API required multiple requests: you had to create a folder and upload media to it, initiate analyses (see Liveness, Biometry, and Blacklist), and then either poll for results or use webhooks for notifications when the result was ready. This flow is still supported, so if you need to send separate requests, you can continue using the existing methods listed above.

However, the new API operation mode significantly simplifies the process by allowing you to send a single request and receive the response synchronously. The key benefits are:

Changelog

API Lite (FaceVer) changes

1.2.3 – Nov., 2024

  • Fixed the bug with the time_created and folder_id parameters of the Detect method that sometimes might have been generated incorrectly.

  • Security updates.

API Error Codes

HTTP Response Codes

  • Response codes 2XX indicate a successfully processed request (e.g., code 200 for retrieving data, code 201 for adding a new entity, code 204 for deletion, etc.).

1.2.2 – Oct. 17, 2024

  • Updated models.

1.2.1 – Sept. 05, 2024

  • The file size for the detect Liveness method is now capped at 15 MB, with a maximum of 10 files per request.

  • Updated the gesture list for best_shot analysis: it now supports head turns (left and right), tilts (up and down), smiling, and blinking.

1.2.0 – July 26, 2024

  • Introduced the new Liveness detect method that can process videos and archives as well.

1.1.1 – Nov. 28, 2022

  • Added the version check method.

1.1.0

  • API Lite now accepts base64.

09.2021

  • Improved the biometric model.

  • Added the 1:N mode.

08.2021

  • Added the CORS policy.

  • Published the documentation.

06.2021

  • Improved error messages – made them more detailed.

  • Simplified the Liveness/Detect methods.

04.2021

  • Reworked and improved the core.

  • Added anti-spoofing algorithms.

10.2020

  • Added the extract_and_compare method.


  • Single request for everything – all data is sent in one package, eliminating the risk of data loss.

  • Synchronous response – no need for polling or webhooks to retrieve results.

  • High performance – supports up to 36 analyses per minute per instance.

Usage

To use this method, call POST /api/folders/. In the X-Forensic-Access-Token header, pass your access token. Add media files to the request body and define the tags and metadata, if needed, in the payload part.

Request Example

Response Example

In response, you receive analysis results.

You're done.

  • Response codes 4XX indicate that a request could not be processed correctly because of some client-side data issues (e.g., 404 when addressing a non-existing resource).

  • Response codes 5XX indicate that an internal server-side error occurred during the request processing (e.g., when the database is temporarily unavailable).

Response Body with Errors

Each error response includes an HTTP code and JSON data with an error description. It has the following structure:

  • error_code – integer error code;

  • error_message – text error description;

  • details – additional error details (the format is specific to each case). Can be empty.

Sample error response:

Error codes:

  • 0 – UNKNOWN Unknown server error.

  • 1 – NOT ALLOWED An unallowed method is called. Usually accompanied by the 405 HTTP response status; for example, calling the PATCH method when only GET/POST are supported.

  • 2 – NOT REALIZED The method is documented but not implemented, for a temporary or permanent reason.

  • 3 – INVALID STRUCTURE Incorrect request structure: some required fields are missing, or a format validation error occurred.

  • 4 – INVALID VALUE Incorrect value of a parameter inside the request body or query.

  • 5 – INVALID TYPE Invalid data type of a request parameter.

  • 6 – AUTH NOT PROVIDED Access token not specified.

  • 7 – AUTH INVALID The access token does not exist in the database.

  • 8 – AUTH EXPIRED The auth token is expired.

  • 9 – AUTH FORBIDDEN Access denied for the current user.

  • 10 – NOT EXIST The requested resource is not found (alternative to HTTP status code 404).

  • 11 – EXTERNAL SERVICE Error in the external information system.

  • 12 – DATABASE Critical database error on the server host.
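For reference, the codes above can be mapped in client code like this; the names follow the list above, and the sample body mirrors the sample error response in this article.

```python
# Map of Oz API error codes to their names, per the list above.
ERROR_NAMES = {
    0: "UNKNOWN", 1: "NOT ALLOWED", 2: "NOT REALIZED", 3: "INVALID STRUCTURE",
    4: "INVALID VALUE", 5: "INVALID TYPE", 6: "AUTH NOT PROVIDED",
    7: "AUTH INVALID", 8: "AUTH EXPIRED", 9: "AUTH FORBIDDEN",
    10: "NOT EXIST", 11: "EXTERNAL SERVICE", 12: "DATABASE",
}

def describe_error(body):
    """Format an error body (error_code / error_message / details) for logging."""
    name = ERROR_NAMES.get(body.get("error_code"), "UNRECOGNIZED")
    return f"{name}: {body.get('error_message')}"

sample = {"error_code": 0,
          "error_message": "Unknown server side error occurred",
          "details": None}
print(describe_error(sample))  # UNKNOWN: Unknown server side error occurred
```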

    request body
    {
      "analyses": [{
        "type": "collection",
        "source_media": ["1111aaaa-11aa-11aa-11aa-111111aaaaaa"] // optional; omit to include all media from the folder
      }]
    }
    request body
    {
      "analyses": [{
        "type": "biometry",
        // optional; omit to include all media from the folder
        "source_media": [
          "1111aaaa-11aa-11aa-11aa-111111aaaaaa", 
          "2222bbbb-22bb-22bb-22bb-222222bbbbbb" 
          ]
      }]
    }
    [
      {
        // you may have multiple analyses in the list
        // pick the one you need by analyse_id or type
        "analysis_id": "1111aaaa-11aa-11aa-11aa-111111aaaaaa",
        "type": "BIOMETRY",
        "results_media": [
          {
            // if you have multiple media in one analysis, match score with media by source_video_id/source_shots_set_id 
            "source_video_id": "1111aaaa-11aa-11aa-11aa-111111aaaaab", // for shots_set media, the key would be source_shots_set_id 
            "results_data": 
            {
              "max_confidence": 0.997926354, 
              "min_confidence": 0.997926354 // quantitative score for this media
            }
          },
          ...
        ],
        "resolution_status": "SUCCESS", // qualitative resolution (based on all media)
        ...
      }
      ...
    ]
    {
      // (optional block) folder metadata if needed
      "folder:meta_data": {
        "partner_side_folder_id": "00000000-0000-0000-0000-000000000000",
        "person_info": {
          "first_name": "John",
          "middle_name": "Jameson",
          "last_name": "Doe"
        }
      },
      // (optional block) media metadata if needed
      "media:meta_data": {
        "video1": {
          "foo": "bar"
        }
      },
      "media:tags": {
        "video1": [
          "video_selfie",
          "video_selfie_eyes",
          "orientation_portrait"
        ]
      },
      "analyses": [
        {
          "type": "quality",
          // (optional block) analysis metadata if needed
          "meta_data": {
            "example1": "some_example1"
          },
          // additional parameters
          "params": {
            "threshold_spoofing": 0.5,
            "extract_best_shot": false
          }
        }
      ]
    }
    {
      "company_id": "00000000-0000-0000-0000-000000000000",
      "time_created": 1744017549.366616,
      "folder_id": "00000000-0000-0000-0000-000000000000",
      "user_id": "00000000-0000-0000-0000-000000000000",
      "resolution_endpoint": null,
      "resolution_status": "FINISHED",
      "resolution_comment": "[]",
      "system_resolution": "SUCCESS",
      ...
      // folder metadata if you've added it
      "meta_data": {
        "partner_side_folder_id": "00000000-0000-0000-0000-000000000000",
        "person_info": {
          "first_name": "John",
          "middle_name": "Jameson",
          "last_name": "Doe"
        }
      },
      "media": [
        {
          "company_id": "00000000-0000-0000-0000-000000000000",
          "folder_id": "00000000-0000-0000-0000-000000000000",
          "folder_time_created": 1744017549.366616,
          "original_name": "00000000-0000-0000-0000-000000000000.mp4",
          "original_url": null,
          "media_id": "00000000-0000-0000-0000-000000000000",
          "media_type": "VIDEO_FOLDER",
          "tags": [
            "video_selfie",
            "video_selfie_eyes",
            "orientation_portrait"
          ],
          "info": {},
          "time_created": 1744017549.368665,
          "time_updated": 1744017549.36867,
    	  // media metadata if you've added it
          "meta_data": {
            "foo": "bar"
          },
          "thumb_url": null,
          "image_id": "00000000-0000-0000-0000-000000000000"
        }
      ],
      "time_updated": 1744017549.366629,
      "analyses": [
        {
          "company_id": "00000000-0000-0000-0000-000000000000",
          "group_id": "00000000-0000-0000-0000-000000000000",
          "folder_id": "00000000-0000-0000-0000-000000000000",
          "folder_time_created": 1744017549.366616,
          "analysis_id": "00000000-0000-0000-0000-000000000000",
          "state": "FINISHED",
          "resolution_operator": null,
          "results_media": [
            {
             ...
            }
          ],
          "results_data": null,
    	  // analysis metadata if you've added it
          "meta_data": {
            "example1": "some_example1"
          },
          "time_created": 1744017549.369485,
          "time_updated": 1744017550.659305,
          "error_code": null,
          "error_message": null,
          "source_media": [
            {
    	 ...
            }
          ],
          "type": "QUALITY",
          "analyse_id": "00000000-0000-0000-0000-000000000000",
          "resolution_status": "SUCCESS",
          "resolution": "SUCCESS"
        }
      ]
    }
    {
        "error_code": 0,
        "error_message": "Unknown server side error occurred",
        "details": null
    }

    How to Issue a Service Token

    Here’s a step-by-step guide on how to issue a service token in Oz API 5 and 6.


    Step 1

    Authorize using your ADMIN account: {{host}}/api/authorize/auth.

    Example request

    Example response


    Step 2 (optional)

    This step can be omitted if a company already exists.

    As a user must belong to a company, create a company: call {{host}}/api/companies/


    Step 3

    Create a service user. Call {{host}}/api/users/ and write down user_id that you will get in response.


    Step 4

    If you need to obtain the service token to use it, for instance, with Web SDK, authorize as ADMIN (same as in Step 1) and call:

    • API 6: {{host}}/api/authorize/service_token/{user_id} with user_id

    Authentication

    Getting an Access Token

    To get an access token, call POST /api/authorize/auth/ with the credentials you've received from us; the request body must contain the email and password. The host address should be the API address (which you've also received from us).

    A successful response returns a pair of tokens: access_token and expire_token.

    access_token is a key that grants you access to system resources. To access a resource, you need to add your access_token to the header.

    headers = {"X-Forensic-Access-Token": <access_token>}

    access_token is time-limited; the limit depends on the account type:

    • service accounts – OZ_SESSION_LONGLIVE_TTL (5 years by default),

    • other accounts – OZ_SESSION_TTL (15 minutes by default).

    expire_token is the token you can use to renew your access token if necessary.
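    The request body and header shapes described above can be sketched in Python (a minimal illustration; the host and credentials are placeholders, not real values):

```python
API_HOST = "https://<your-api-host>"  # the API address you've received from us

def build_auth_request(email: str, password: str) -> dict:
    """Request body for POST /api/authorize/auth/."""
    return {"credentials": {"email": email, "password": password}}

def build_auth_headers(access_token: str) -> dict:
    """Every authorized request carries the token in this header."""
    return {"X-Forensic-Access-Token": access_token}

body = build_auth_request("user@example.com", "secret")
headers = build_auth_headers("<access_token>")
```

    Send `body` to the auth endpoint with any HTTP client, then reuse `headers` for all subsequent calls.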

    Automatic session extension

    If the value of expire_date > current date, the current session's expire_date is set to current date + the time period defined above (depending on the account type).
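    The extension rule can be illustrated with a small sketch (TTL constants per the defaults above; this is an illustration, not actual Oz API code):

```python
OZ_SESSION_TTL = 15 * 60                          # other accounts: 15 minutes
OZ_SESSION_LONGLIVE_TTL = 5 * 365 * 24 * 60 * 60  # service accounts: ~5 years

def extend_session(expire_date: float, now: float, is_service: bool) -> float:
    """Return the new expire_date after a successful request."""
    ttl = OZ_SESSION_LONGLIVE_TTL if is_service else OZ_SESSION_TTL
    if expire_date > now:   # session still valid -> prolong it
        return now + ttl
    return expire_date      # expired sessions are not extended
```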

    Token Renewal

    To renew access_token and expire_token, call POST /api/authorize/refresh/. Add expire_token to the request body and X-Forensic-Access-Token to the header.

    In case of success, you'll receive a new pair of access_token and expire_token. The "old" pair will be deleted upon the first authentication with the renewed tokens.

    Errors

    Metadata

    Overview

    Metadata is any optional data you might need to add to a system object. In the meta_data section, you can include any information you want, simply by providing any number of fields with their values:

    Objects and Methods

    Metadata is available for most Oz system objects. Here is the list of these objects with the API methods required to add metadata. Please note: you can also add metadata to these objects during their creation.

    You can also change or delete metadata. Please refer to our API documentation.

    Usage Examples

    You may want to use metadata to group folders by a person or lead. For example, if you want to calculate conversion when a single lead makes several Liveness attempts, just add the person/lead identifier to the folder metadata.

    Here is how to add a client ID (the iin field) to a folder object.

    In the request body, add:

    You can pass an ID of a person in this field, and use this ID to combine requests with the same person and count unique persons (same ID = same person, different IDs = different persons). This ID can be a phone number, an IIN, an SSN, or any other kind of unique ID. The ID will be displayed in the report as an additional column.
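    On your side, counting unique persons then reduces to grouping folders by that metadata field. A sketch (folder objects are assumed to carry the meta_data dict returned by the API):

```python
def group_folders_by_meta(folders: list, key: str) -> dict:
    """Group folder objects by a metadata field, e.g. a person/lead ID."""
    groups: dict = {}
    for folder in folders:
        value = folder.get("meta_data", {}).get(key)
        groups.setdefault(value, []).append(folder)
    return groups

folders = [
    {"folder_id": "f1", "meta_data": {"iin": "123123123"}},
    {"folder_id": "f2", "meta_data": {"iin": "123123123"}},
    {"folder_id": "f3", "meta_data": {"iin": "555"}},
]
by_person = group_folders_by_meta(folders, "iin")
print(len(by_person["123123123"]))  # 2 Liveness attempts by the same person
```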

    Another case is security: when you need to process the analyses’ result from your back end, but don’t want to perform this using the folder ID. Add an ID (transaction_id) to this folder and use this ID to search for the required information. This case is thoroughly explained .

    If you store PII in metadata, make sure it complies with the relevant regulatory requirements.

    You can also add metadata via SDK to process the information later using API methods. Please refer to the corresponding SDK sections: iOS, Android, Web.

    Media Tags

    What Are Tags for

    To work properly, the resolution algorithms need each uploaded media to be marked with special tags. For video and images, the tags are different. They help algorithms to identify what should be in the photo or video and analyze the content.

    Tags for Video Files

    The following tag types should be specified in the system for video files.

    • To identify the data type of the video:

      • video_selfie

    • To identify the orientation of the video:

      • orientation_portrait – portrait orientation;

      • orientation_landscape – landscape orientation.

    • To identify the action on the video:

      • video_selfie_left – head turn to the left;

      • video_selfie_right – head turn to the right;

      • video_selfie_down – head tilt downwards;

      • video_selfie_high – head raised up;

      • video_selfie_smile – smile;

      • video_selfie_eyes – blink;

      • video_selfie_scan – scanning;

      • video_selfie_oneshot – a one-frame analysis;

      • video_selfie_blank – no action.

    The tags listed allow the algorithms to recognize the files as suitable for the Quality (Liveness) and Biometry analyses.

    Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored by algorithms.

    Example of the correct tag set for a video file with the “blink” action:

    Tags for Photo Files

    The following tag types should be specified in the system for photo files:

    • A tag for selfies:

      • photo_selfie – to identify the image type as “selfie”.

    • Tags for photos/scans of ID cards:

      • photo_id – to identify the image type as “ID”;

      • photo_id_front – for the photo of the ID front side;

      • photo_id_back – for the photo of the ID back side (ignored by other analyses such as Quality or Biometry).

    Important: in API 4.0.8 and below, to launch the Quality analysis for a photo, pack the image into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored by algorithms.

    Example of the correct tag set for a “selfie” photo file:

    Example of the correct tag set for a photo file with the face side of an ID card:

    Example of the correct set of tags for a photo file of the back of an ID card:

    Types of Analyses and What They Check

    Here, you'll get acquainted with types of analyses that Oz API provides and will learn how to interpret the output.

    Using Oz API, you can perform one of the following analyses:

    • Biometry,

    • Quality (Liveness, Best Shot),

    • Documents (deprecated),

    • Blacklist.

    Statuses in API

    This article contains the full description of folders' and analyses' statuses in API.

    Field name / status | analyse.state | analyse.resolution_status | folder.resolution_status | system_resolution
    INITIAL | - | - | starting state | starting state
    PROCESSING | starting state | starting state | analyses in progress | analyses in progress
    FAILED | system error | system error | system error | system error
    FINISHED | finished successfully | - | finished successfully | -
    DECLINED | - | check failed | - | check failed
    OPERATOR_REQUIRED | - | additional check is needed | - | additional check is needed
    SUCCESS | - | check succeeded | - | check succeeded

    The details on each status are below.

    Analysis State (analyse.state)

    This field shows the processing state of the analysis. Its possible values are:

    PROCESSING – the analysis is in progress;

    FAILED – the analysis failed due to some error and couldn't get finished;

    FINISHED – job's done, the analysis is finished, and you can check the result.

    Analysis Result (analyse.resolution_status)

    Once the analysis is finished, you'll see one of the following results:

    SUCCESS – everything went fine, the check succeeded (e.g., faces match or liveness confirmed);

    OPERATOR_REQUIRED (except the Liveness analysis) – the result should be additionally checked by a human operator;

    The OPERATOR_REQUIRED status appears only if it is set up in biometry settings.

    DECLINED – the check failed (e.g., faces don't match or some spoofing attack detected).

    If the analysis hasn't been finished yet, the result inherits a value from analyse.state: PROCESSING (the analysis is in progress) / FAILED (the analysis failed due to some error and couldn't get finished).

    Folder Status (folder.resolution_status)

    A folder is an entity that contains media to analyze. If the analyses have not been finished, the stage of processing media is shown in resolution_status:

    INITIAL – no analyses applied;

    PROCESSING – analyses are in progress;

    FAILED – any of the analyses failed due to some error and couldn't get finished;

    FINISHED – media in this folder are processed, the analyses are finished.

    Folder Result (system_resolution)

    Folder result is the consolidated result of all analyses applied to media from this folder. Please note: the folder result is the result of the last-finished group of analyses. If all analyses are finished, the result will be:

    SUCCESS – everything went fine, all analyses completed successfully;

    OPERATOR_REQUIRED (except the Liveness analysis) – there are no analyses with the DECLINED status, but one or more analyses have been completed with the OPERATOR_REQUIRED status;

    DECLINED – one or more analyses have been completed with the DECLINED status.

    The analyses you send in a single POST request form a group. The group result is the "worst" result of analyses this group contains: INITIAL > PROCESSING > FAILED > DECLINED > OPERATOR_REQUIRED > SUCCESS, where SUCCESS means all analyses in the group have been completed successfully without any errors.
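    The "worst result wins" rule can be expressed as a small sketch (an illustration of the ordering above, not Oz API code):

```python
# Severity order, worst first, per the rule above
SEVERITY = ["INITIAL", "PROCESSING", "FAILED", "DECLINED", "OPERATOR_REQUIRED", "SUCCESS"]

def group_result(statuses: list) -> str:
    """Consolidated result of a group of analyses: the worst status wins."""
    return min(statuses, key=SEVERITY.index)

print(group_result(["SUCCESS", "OPERATOR_REQUIRED", "SUCCESS"]))  # OPERATOR_REQUIRED
print(group_result(["DECLINED", "SUCCESS"]))                      # DECLINED
```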


    Possible errors of the token renewal request:

    Error code | Error message | What caused the error
    400 | Could not locate field for key_path expire_token from provided dict data | expire_token hasn't been found in the request body.
    401 | Session not found | The session with the expire_token you passed doesn't exist.
    403 | You have not access to refresh this session | The user making the request is not the owner of this expire_token session.

    Object | API Method
    User | PATCH /api/users/{{user_id}}
    Folder | PATCH /api/folders/{{folder_id}}/meta_data/
    Media | PATCH /api/media/{{media_id}}/meta_data
    Analysis | PATCH /api/analyses/{{analyse_id}}/meta_data
    Collection | PATCH /api/collections/{{collection_id}}/meta_data/ and, for a person in a collection, PATCH /api/collections/{{collection_id}}/persons/{{person_id}}/meta_data




    Example request

    Example response

    Example request

    Example response

    Since the logic of issuing a service token changed slightly in API 6.0, here are examples for both API 6 and API 5 (and below).

    API 6

    In the request body, define user_type as CLIENT_SERVICE.

    API 5 and below

    Set the is_service flag value to true.


    Example request

    Example response

    In response, you will get a service token that you can use in any service processes.

    For Web SDK, specify this token’s value as api_token in the Web Adapter configuration file.

    {
        "expire_token": "{{expire_token}}"
    }
    {
      "iin": "123123123"
    }
    "video1": [
      "video_selfie",
      "video_selfie_eyes",
      "orientation_portrait"
    ]
    "photo1": [
      "photo_selfie"
    ]
    "photo1": [
      "photo_id",
      "photo_id_front"
    ]
    "photo1": [
      "photo_id",
      "photo_id_back"
    ]
    curl -L 'https://{{host}}/api/authorize/auth' \
    -H 'Content-Type: application/json' \
    --data-raw '{
        "credentials": {
            "email": "[email protected]",
            "password": "your_admin_password"
        }
    }'
    {
      …
        "user": {
            "user_type": "ADMIN",
      …
        },
        "access_token": "<token>",
        …
    }
    curl -L 'https://{{host}}/api/companies/' \
    -H 'X-Forensic-Access-Token: token_id' \
    -H 'Content-Type: application/json' \
    -d '{ "name": "your_company_name" }'
    {
        "company_id": "company_id",
        "name": "your_company_name",
        "in_deletion": false,
        "technical_meta_data": {}
    }
    curl -L 'https://{{host}}/api/users/' \
    -H 'X-Forensic-Access-Token: token_id' \
    -H 'Content-Type: application/json' \
    --data-raw '{
      "credentials": {
        "email": "<[email protected]>",
        "password": "<your_service_user_password>"
      },
      "profile": {
        "company_id": "company_id",
        // the next line is for API 6
        "user_type": "CLIENT_SERVICE",
        "first_name": "first_name",
        "last_name": "last_name",
        "middle_name": "",
        "is_admin": false,
        // the next line is for API 5 and below
        "is_service": true,
        "can_start_analyse_biometry": true,
        "can_start_analyse_collection": true,
        "can_start_analyse_documents": true,
        "can_start_analyse_quality": true
      }
    }
    {
        "user_id": "user_id",
        "user_type": "CLIENT_SERVICE",
         …
        "is_active": true,
         …
        "is_service": true
    }
    {
      "credentials": {
        "email": "<[email protected]>",
        "password": "your_client_service_user_password"
      },
      "profile": {
        "company_id": "{{company_id}}",
        "user_type": "CLIENT_SERVICE",
        "first_name": "john",
        "last_name": "doe",
        "middle_name": "",
        "can_start_analysis_biometry": true,
        "can_start_analysis_collection": true,
        "can_start_analysis_documents": true,
        "can_start_analysis_quality": true
      }
    }
    
    {
        "credentials": {
            "email": "[email protected]",
            "password": "your_client_service_user_password"
        },
        "profile": {
            "company_id": "{{company_id}}",
            "user_type": "CLIENT",
            "first_name": "john",
            "last_name": "doe",
            "middle_name": "",
            "is_admin": false,
            "is_service": true,
            "can_start_analyse_biometry": true,
            "can_start_analyse_collection": true,
            "can_start_analyse_documents": true,
            "can_start_analyse_quality": true
        }
    }
    curl -L 'https://{{host}}/api/authorize/service_token/{{user_id}}' \
    -H 'X-Forensic-Access-Token: token_id' \
    -H 'Content-Type: application/json'
    {
        "token_id": "token_id",
        "user_id": "user_id",
        "access_token": "service_token",
        "expire_date": 1904659888.282587,
        "session_id": 0
    }
    The possible results of the analyses are explained here.

    Each of the analyses has its threshold that determines the output of these analyses. By default, the threshold for Liveness is 0.5 or 50%, for Blacklist and Biometry (Face Matching) – 0.85 or 85%.

    • Biometry: if the final score is equal to or above the threshold, the faces on the analyzed media are considered similar.

    • Blacklist: if the final score is equal to or above the threshold, the face on the analyzed media matches with one of the faces in the database.

    • Quality: if the final score is equal to or above the threshold, the result is interpreted as an attack.
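    With the default thresholds above, score interpretation can be sketched as follows (a simplified illustration; the actual thresholds are configurable):

```python
DEFAULT_THRESHOLDS = {"biometry": 0.85, "blacklist": 0.85, "liveness": 0.5}

def crosses_threshold(analysis: str, score: float) -> bool:
    """True when the score reaches the analysis threshold.

    For biometry/blacklist this means the faces match;
    for liveness (Quality) it means an attack is detected.
    """
    return score >= DEFAULT_THRESHOLDS[analysis]

print(crosses_threshold("biometry", 0.9))   # True – faces considered similar
print(crosses_threshold("liveness", 0.4))   # False – no attack detected
```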

    To configure the threshold depending on your needs, please contact us.

    For more information on how to read the numbers in analyses' results, please refer to Quantitative Results.

    Biometry

    Purpose

    The Biometry algorithm compares two or more media files and checks whether they show the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media files (for details, please refer to Rules of Assigning Analyses).

    Output

    After comparison, the algorithm returns a number that represents the similarity level. It varies from 100% to 0% (1 to 0), where:

    • 100% (1) – the faces are similar; the media represent the same person,

    • 0% (0) – the faces are not similar and belong to different people.

    Quality (Liveness, Best Shot)

    Purpose

    The Liveness detection (Quality) algorithm aims to check whether a person in a media is a real human acting in good faith, not a fake of any kind.

    The Best Shot algorithm selects the best shot from a video – the best-quality frame where the face is most clearly visible. It is an addition to Liveness.

    Output

    After checking, the analysis shows the probability of a spoofing attack* as a percentage:

    • 100% (1) – an attack is detected; the person in the video is not a real living person,

    • 0% (0) – the person in the video is a real living person.

    *Spoofing in biometrics is a type of fraud in which a person impersonates someone else using software or physical tools such as deepfakes, masks, ready-made photos, or fake videos.

    Documents

    This analysis type is deprecated.

    Purpose

    The Documents analysis aims to recognize the document and check if its fields are correct according to its type.

    Oz API uses a third-party OCR analysis service provided by our partner. If you want to change this service to another one, please contact us.

    Output

    As an output, you'll get a list of document fields with recognition results for each field and a result of checking that can be:

    • The documents passed the check successfully,

    • The documents failed to pass the check.

    Additionally, the result of the Biometry check is displayed.

    Blacklist

    Purpose

    The Blacklist checking algorithm determines whether the person in a photo or video is present in a database of pre-uploaded images. This database can be used as a blacklist or a whitelist: in the former case, the person's face is compared with the faces of known swindlers; in the latter, it might be a list of VIPs.

    Output

    After comparison, the algorithm returns a number that represents the similarity level. It varies from 100% to 0% (1 to 0), where:

    • 100% (1) – the person in an image or video matches with someone in the blacklist,

    • 0% (0) – the person is not found in the blacklist.


    Below, you will find the tag and type requirements for all analyses. If a media file doesn't match the requirements for a certain analysis, it is ignored by the algorithms.

    The rules listed below act by default. To change the mapping configuration, please contact us.

    Quality (Liveness)

    This analysis is applied to all media, regardless of the gesture recorded (gesture tags begin with video_selfie).

    Important: to process a photo in API 4.0.8 and below, pack it into a .zip archive, apply the SHOTS_SET type, and mark it with video_*. Otherwise, it will be ignored.
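    The default rule can be sketched as a simple tag check (an illustration, not the actual mapping code):

```python
def qualifies_for_quality(media: dict) -> bool:
    """Quality (Liveness) accepts media carrying a video_selfie* gesture tag."""
    return any(tag.startswith("video_selfie") for tag in media.get("tags", []))

print(qualifies_for_quality({"tags": ["video_selfie", "video_selfie_eyes"]}))  # True
print(qualifies_for_quality({"tags": ["photo_id", "photo_id_front"]}))         # False
```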

    Biometry

    This analysis is applied to all media.

    If the folder contains fewer than two matching media files, the system returns an error. If there are more than two files, all pairs are compared, and the system returns the result for the pair with the least similar faces.
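    The pairwise rule can be sketched as follows (similarity is a placeholder for the actual face-matching score):

```python
from itertools import combinations

def folder_biometry_score(media_ids: list, similarity) -> float:
    """All pairs are compared; the least similar pair determines the result."""
    if len(media_ids) < 2:
        raise ValueError("Biometry requires at least 2 media objects")
    return min(similarity(a, b) for a, b in combinations(media_ids, 2))

# toy pairwise scores for three media files
scores = {("m1", "m2"): 0.98, ("m1", "m3"): 0.91, ("m2", "m3"): 0.95}
result = folder_biometry_score(["m1", "m2", "m3"], lambda a, b: scores[(a, b)])
print(result)  # 0.91 – the least similar pair decides
```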

    Blacklist

    This analysis works only when you have a pre-made image database, which is called the blacklist. The analysis is applied to all media in the folder (or the ones marked as source media).

    Best Shot

    Best Shot is an addition to the Quality (Liveness) analysis. It requires the appropriate option to be enabled. The analysis is applied to all media files that can be processed by the Quality analysis.

    Documents

    This analysis type is deprecated.

    The Documents analysis is applied to images with tags photo_id_front and photo_id_back (documents), and photo_selfie (selfie). The result will be positive if the system finds the selfie photo and matches it with a photo on one of the valid documents from the following list:

    • personal ID card

    • driver license

    • foreign passport

    Tag-Related Errors

    Code | Message | Description
    202 | Could not locate face on source media [media_id] | No face was found in the media being processed, or the source media has a wrong (photo_id_back) and/or missing tag.
    202 | Biometry. Analysis requires at least 2 media objects to process | The algorithms did not find two appropriate media files for the analysis. This might happen when only a single media file has been sent, or a media file is missing a tag.
    202 | Processing error - did not found any document candidates on image | The Documents analysis can't be finished because the uploaded photo doesn't appear to be a document, or it has wrong (not photo_id_*) and/or missing tags.
    5 | Invalid/missed tag values to process quality check | The tags applied can't be processed by the Quality algorithm (most likely, the tags begin with photo_*; for Quality, media should be tagged with video_*).
    5 | Invalid/missed tag values to process blacklist check | The tags applied can't be processed by the Blacklist algorithm. This might happen when a media file is missing a tag.


    System Objects

    The description of the objects you can find in Oz Forensics system.

    Objects Hierarchy

    System objects on Oz Forensics products are hierarchically structured as shown in the picture below.

    On the top level, there is a Company. You can use one copy of Oz API to work with several companies.

    The next level is a User. A company can contain any number of users. There are several user roles with different permissions. For more information, refer to User Roles.

    When a user requests an analysis (or analyses), a new folder is created. This folder contains media. One user can create any number of folders, and each folder can contain any number of media files. A user applies analyses to one or more media within a folder. The rules of assigning analyses are described here. The media quality requirements are listed on this page.

    Object parameters

    Common parameters

    Parameter | Type | Description
    time_created | Timestamp | Object (except user and company) creation time
    time_updated | Timestamp | Object (except user and company) update time
    meta_data | Json | Any user parameters
    technical_meta_data | Json | Module-required parameters; reserved for internal needs

    Besides these parameters, each object type has specific ones.

    Company

    Parameter | Type | Description
    company_id | UUID | Company ID within the system
    name | String | Company name within the system

    User

    Parameter | Type | Description
    user_id | UUID | User ID within the system
    user_type | String | One of the user roles
    first_name | String | Name
    last_name | String | Surname
    middle_name | String | Middle name
    email | String | User email = login
    password | String | User password (only required for new users or to change)
    can_start_analyze_* | String | Depends on user roles
    company_id | UUID | Current user company’s ID within the system
    is_admin | Boolean | Whether this user is an admin or not
    is_service | Boolean | Whether this user account is a service account or not

    Folder

    Parameter | Type | Description
    folder_id | UUID | Folder ID within the system
    resolution_status | ResolutionStatus | The latest analysis status

    Media

    Parameter | Type | Description
    media_id | UUID | Media ID
    original_name | String | Original filename (how the file was called on the client machine)
    original_url | Url | HTTP link to this file on the API server
    tags | Array(String) | List of tags for this file

    Analysis

    Parameter | Type | Description
    analyse_id | UUID | ID of the analysis
    folder_id | UUID | ID of the folder
    type | String | Analysis type (BIOMETRY/QUALITY/DOCUMENTS)
    results_data | Json | Results of the analysis

    Changelog

    API changes

    6.4.0 – Nov. 24, 2025

    Only for SaaS.

    • Updated API to support upcoming features.

    6.3.5 – Nov. 03, 2025

    • Fixed bugs that could cause the GET /api/folders/ request to return incorrect results.

    6.3.4 – Oct. 20, 2025

    • Updated API to support upcoming features.

    • Fixed bugs.

    6.3.3 – Sept. 29, 2025

    • Resolved an issue with POST /api/instant/folders/ and POST /api/folders/ returning “500 Internal Server Error” when the video sent is corrupted. Now the system returns “400 Bad Request”.

    • Updated API to support upcoming features.

    6.3.0 – Aug 05, 2025

    • Updated API 6 to support Kazakhstan regulatory requirements: added the functionality of extracting action shots from videos of a person performing gestures.

    • You can remove admin rights from a CLIENT ADMIN user and change their role to CLIENT via PATCH /api/users/{{user_id}}.

    • You can generate a service token for a user with the OPERATOR role.

    6.2.5 – June 18, 2025

    • Optimization and performance updates.

    6.2.3 – June 02, 2025

    • Analyses can now be done in parallel with each other. To enable this feature, add the OZ_ANALYSE_PARALLELED_CHECK_MEDIA_ENABLED parameter to config.py and set it to true (the default value is false).

    • For the instant mode, authorization can be disabled. Add the OZ_AUTHORIZE_DISABLED_STATELESS parameter to config.py and set it to true (the default value is false).
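    Assuming Python-style values in config.py, the corresponding fragment might look like this (parameter names per the entries above; both features shown enabled):

```python
# config.py (fragment)
OZ_ANALYSE_PARALLELED_CHECK_MEDIA_ENABLED = True  # parallel analyses (default: False)
OZ_AUTHORIZE_DISABLED_STATELESS = True            # no auth in instant mode (default: False)
```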

    6.0.1 – Apr. 30, 2025

    Please note: this version doesn't support the Kazakhstan regulatory requirements.

    • Optimized storage and database.

    • Implemented the single request mode, which involves creating a folder and executing analyses in a single request by attaching the analysis part to the payload.

    • Implemented the Instant API mode, which runs analyses without storing any data locally or in the database. This mode can be used either with or without other API components.

    • You can now combine working schemes based on the asynchronous method or a Celery worker (local processing, Celery processing). Added S3 storage mechanics for each of the combinations.


    5.3.1 – Dec. 24, 2024

    • Improved the resource efficiency of server-based biometry analysis.

    5.3.0 – Nov. 27, 2024

    • API can now extract action shots from videos of a person performing gestures. This is done to comply with the new Kazakhstan regulatory requirements for biometric identification.

    • Created a new report template that also complies with the requirements mentioned above.

    • If action shots are enabled, the thumbnails for the report are generated from them.

    5.2.0 – Sept. 06, 2024

    • Updated the Postman collection. Please see the new collection.

    • Added a new method to check the timezone settings: GET {{host}}/api/config.

    • Added new parameters to the GET {{host}}/api/event_sessions method.

    5.1.1 – July 16, 2024

    • Security updates.

    5.1.0 – Mar. 20, 2024

    • Face Identification 1:N is now live, significantly increasing the data processing capacity of the Oz API to find matches. Even huge face databases (containing millions of photos and more) are no longer an issue.

    • The Liveness (QUALITY) analysis now ignores photos tagged with photo_id, photo_id_front, or photo_id_back, preventing these photos from causing the tag-related analysis error.

    5.0.1 – July 16, 2024

    • Security updates.

    5.0.0 – Nov. 17, 2023

    • You can now apply the Liveness (QUALITY) analysis to a single image.

    • Fixed the bug where the Liveness analysis could finish with the SUCCESS result with no media uploaded.

    • The default value for the extract_best_shot parameter is now True.

    4.0.8-patch1 – July 16, 2024

    • Security updates.

    4.0.8 – May 22, 2023

    • Set the autorotation of logs.

    • Added the CLI command for user deletion.

    • You can now switch off the video preview generation.

    • The ADMIN access token is now valid for 5 years.

    4.0.2 – Sept. 13, 2022

    • For the sliced video, the system now deletes the unnecessary frames.

    • Added new methods: GET and POST at media/<media_id>/snapshot/.

    • Replaced the default report template.

    3.33.0

    • The Photo Expert and KYC modules are now removed.

    • The endpoint for the user password change is now POST users/user_id/change-password instead of PATCH.

    3.32.1

    • Provided log for the Celery app.

    3.32.0

    • Added filters to the Folder [LIST] request parameters: analyse.time_created, analyse.results_data for the Documents analysis, results_data for the Biometry analysis, results_media_results_data for the QUALITY analysis. To enable filters, set the with_results_media_filter query parameter to True.

    3.31.0

    • Added a new attribute for users – is_active (default True). If is_active == False, any user operation is blocked.

    • Added a new exception code (1401 with status code 401) for the actions of the blocked users.

    3.30.0

    • Added shots sets preview.

    • You can now save a shots set archive to a disk (with the original_local_path, original_url attributes).

    • A new original_info attribute is added to store the md5, size, and mime-type of a shots set.

    3.29.0

    • Added health check at GET api/healthcheck.

    3.28.1

    • Fixed the shots set thumbnail URL.

    3.28.0

    • Now, the first frame of shots set becomes this shots set's thumbnail URL.

    3.27.0

    • Modified the retry policy: the default maximum number of analysis attempts is increased to 3, and a jitter configuration is introduced.

    • Changed the callback algorithm.

    • Refactored and documented the command line tools.

    • Refactored modules.

    3.25.0

    • Changed the delete personal information endpoint and method from delete_pi to /pi and from POST to DELETE, respectively.

    3.23.1

    • Improved the delete personal information algorithm.

    • It is now forbidden to add media to cleaned folders.

    3.23.0

    • Changed the authorize/restore endpoint name from auth to auth_restore.

    • Added a new tag – video_selfie_oneshot.

    • Added the password validation setting (OZ_PASSWORD_POLICY).

    3.22.2

    • Fixed a bug with no error while trying to synchronize empty collections.

    • If persons are uploaded, the analyse collection TFSS request is sent.

    3.22.0

    • Added the fields_to_check parameter to document analysis (by default, all fields are checked).

    • Added the double_page_spread parameter to document analysis (True by default).

    3.21.3

    • Fixed collection synchronization.

    3.21.0

    • Authorization token can be now refreshed by expire_token.

    3.20.1

    • Added support for application/x-gzip.

    3.20.0

    • Renamed shots_set.images to shots_set.frames.

    3.18.0

    • Added user sessions API.

    • Users can now change a folder owner (limited by permissions).

    • Changed dependencies rules.

    • Changed the access_token prolongation policy to fix bug of prolongation before checking the expiration permission.

    3.16.0

    • Moved oz_collection_binding (the collection synchronization functionality) to oz_core.

    3.15.3

    • Simplified the shots sets functionality. One archive keeps one shot set.

    3.15.2

    • Improved the document sides recognition for the docker version.

    3.15.1

    • Moved the orientation tag check to liveness at quality analysis.

    3.15.0

    • Added a default report template for Admin and Operator.

    3.14.0

    • Updated the biometric model.

    3.13.2

    • A new ShotsSet object is not created if there are no photos for it.

    • Updated the data exchange format for the documents' recognition module.

    3.13.1

    • You can’t delete a Collection if there are associated analyses with Collection Persons.

    3.13.0

    • Added time marks to analysis: time_task_send_to_broker, time_task_received, time_task_finished.

    3.12.0

    • Added a new authorization engine. You can now connect with Active Directory via LDAP (settings configuration required).

    3.11.0

    • A new type of media in Folders – "shots_set".

    • You can’t delete a CollectionPerson if there are analyses associated with it.

    3.10.0

    • Renamed the folder field resolution_suggest to operator_status.

    • Added a folder text field operator_comment.

    • The folder fields operator_status and operator_comment can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.

    3.9.0

    • Fixed a deletion error: when a report author is deleted, their reports are deleted as well.

    3.8.1

    • Client can now view only their own profile.

    3.8.0

    • Client Operator can now edit only their profile.

    • Client can't delete own folders, media, reports, or analyses anymore.

    • Client Service can now create Collection Person and read reports within their company.

    3.7.1

    • Client, Client Admin, Client Operator have read access to users profiles only in their company.

    • A/B testing is now available.

    • Added support for expiration date header.

    • Added document recognition module Standalone/Dockered binding support.

    3.7.0

    • Added a new role of Client Operator (like Client Admin without permissions for company and account management).

    • Client Admin and Client Operator can change the analysis status.

    • Only Admin and Client Admin (for their company) can perform create, update, and delete operations for the Collection and CollectionPerson models from now on.

    Security updates.
    • Added the option to use the instant API without authorization.
  • Fixed the issue with MP4 videos that sometimes could not be played after downloading from SDK.

  • We now return the correct error for the non-authorized requests.

  • Fixed the bug with a “spontaneous” error 500 caused by too few frames in a video. Added a check for the number of frames and more descriptive error messages.

  • Performance, security, and installation updates.

  • Implemented security updates.

  • We no longer support RAR archives.

  • We no longer support Active Directory. This functionality will be returned in the upcoming releases.

  • Improved mechanics for calculating analysis time.

  • Replaced the is_admin and is_service flags for the CLIENT role with new roles: CLIENT ADMIN and CLIENT SERVICE, respectively. Set the roles in user_type.

  • To issue a service token for a user via {{host}}/api/authorize/service_token/, this user must have the CLIENT SERVICE role. You can also create a token for another user with this role: call {{host}}/api/authorize/service_token/{user_id}.

  • Removed collection and person attributes from COLLECTION.analysis.

  • We no longer store separate objects for each frame in SHOTS_SET. If you want to save an image from your video, consider enabling best shot.

  • We no longer support Podman for installation.

  • Updated the API reference: Oz API 6.0.

  • Changed endpoints and parameters:

  • can_start_analyse_documents → can_start_analysis_documents

  • can_start_analyse_quality → can_start_analysis_quality

  • expire_date in {{host}}/api/authorize/auth and {{host}}/api/authorize/refresh → access_token.exp from payload

  • session_id in {{host}}/api/authorize/auth and {{host}}/api/authorize/refresh → token_id

    time_created

  • time_created.min

  • time_created.max

  • time_updated

  • time_updated.min

  • time_updated.max

  • session_id

  • session_id.exclude

  • sorting

  • offset

  • limit

  • total_omit

  • If you create a folder using SHOT_SET, the corresponding video will be in media.video_url.

  • Fixed the bug with CLIENT ADMIN being unable to change passwords for users from their company.

  • RAR archives are no longer supported.
  • By default, analyses.results_media.results_data now contain the confidence_spoofing parameter. However, if you need all three parameters for the backward compatibility, it is possible to change the response back to three parameters: confidence_replay, confidence_liveness, and confidence_spoofing.

  • Updated the default PDF report template.

  • The name of the PDF report now contains the folder identifier folder_id.

  • Fixed bugs and optimized the API work.

  • The shot set preview now keeps images’ aspect ratio.
  • ADMIN and OPERATOR receive system_company as a company they belong to.

  • Added the company_id attribute to User, Folder, Analyse, Media.

  • Added the Analysis group_id attribute.

  • Added the system_resolution attribute to Folder and Analysis.

  • The analysis resolution_status now returns the system_resolution value.

  • Removed the PATCH method for collections.

  • Added the resolution_status filter to Folder Analyses [LIST] and analyse.resolution_status filter to Folder [LIST].

  • Added the audit log for Folder, User, Company.

  • Improved the company deletion algorithm.

  • Reworked the blacklist processing logic.

  • Fixed a few bugs.

  • Fixed ReportInfo for shots sets.

  • Added auth, rest_unauthorized, and rps_with_token throttling (configured via OZ_THROTTLING_RATES; off by default).

  • User permissions are now used to access static files (OZ_USE_PERMISSIONS_FOR_STATIC in configuration, false by default).

  • Added a new folder endpoint – /delete_pi. It clears all personal information from a folder and analyses related to this folder.

  • can be edited only by Admin, Operator, Client Service, Client Operator, and Client Admin.
  • Only Admin and Client Admin can delete folder, folder media, report template, report template attachments, reports, and analyses (within their company).

  • Added a check for user permissions to report template when creating a folder report.
  • Collection creation now returns status code 201 instead of 200.

  • PATCH users/{{user_id}}/ (to change a user password) → POST /api/users/{{user_id}}/change-password

  • DELETE images|media/<media_id> (to delete an image of a person from a collection) → DELETE collections/<collection_id>/persons/<person_id>/images/<media_id>/

  • image_id, video_id, and shots_set_id → media_id

  • analyse_id → analysis_id

  • can_start_analyse_biometry → can_start_analysis_biometry

  • can_start_analyse_collection → can_start_analysis_collection

    API reference: https://apidoc.ozforensics.com/

    User Roles

    Each new API user must be assigned a role that defines access restrictions for direct API connections. Set the role in the user_type field when you create a new user.
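    As an illustration, the role could be set in a user-creation request body like this. The email and password field names are placeholders for the sketch; only user_type and the role values come from this page:

```python
import json

# Hypothetical request body for creating a user. Only user_type is taken
# from this documentation; the other fields are illustrative placeholders.
new_user = {
    "email": "client@example.com",
    "password": "********",
    # One of: ADMIN, OPERATOR, CLIENT, CLIENT ADMIN, CLIENT OPERATOR, CLIENT SERVICE
    "user_type": "CLIENT",
}

body = json.dumps(new_user)  # serialized JSON body for the create-user call
```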

    • ADMIN is a system administrator, who has unlimited access to all system objects, but can't change the analyses' statuses;

    • OPERATOR is a system operator, who can view all system objects and choose the analysis result via the Make Decision button (usually needed if the status is OPERATOR_REQUIRED);

    • CLIENT is a regular consumer account that can upload media files, process analyses, view results in personal folders, and generate reports for analyses.

      • can_start_analysis_biometry – an additional flag to allow access to BIOMETRY analyses (enabled by default);

      • can_start_analysis_quality – an additional flag to allow access to QUALITY analyses (enabled by default);

    • CLIENT ADMIN is a company administrator who can manage their company account and users within it. Additionally, CLIENT ADMIN can view and edit the data of all users within their company, delete files in folders, add or delete report templates (with or without attachments), delete the reports themselves and single analyses, check statistics, and add new blacklist collections.

    • CLIENT OPERATOR is similar to OPERATOR within their company.

    • CLIENT SERVICE is a service user account for automatic connection purposes. Authentication with this user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, also by default, the token lifetime is extended with each request (parameterized).

    For API versions below 6.0

    For API 5.3 and below, to create a CLIENT user with admin or service rights, you need to set the corresponding flags to true:

    • is_admin – if set, the user obtains access to other users' data within this admin's company.

    • is_service

    Here's the detailed information on access levels.

    Company

    Folder

    Report template

    Report template attachments

    Report

    Analysis

    Collection

    Person

    Person image

    User

  • can_start_analysis_collection – an additional flag to allow access to BLACK LIST analyses (enabled by default).

  • is_service – a flag that marks the user account as a service account for automatic connection purposes. Authentication with this user creates a long-lived access token (5 years by default). The token lifetime for regular users is 15 minutes by default (parameterized), and, also by default, the token lifetime is extended with each request (parameterized).

    CLIENT

    -

    their company data

    -

    -

    CLIENT SERVICE

    -

    their company data

    -

    -

    CLIENT OPERATOR

    -

    their company data

    -

    -

    CLIENT ADMIN

    -

    their company data

    their company data

    their company data

    CLIENT

    their folders

    their folders

    their folders

    -

    CLIENT SERVICE

    within their company

    within their company

    within their company

    -

    CLIENT OPERATOR

    within their company

    within their company

    within their company

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    within their company

    CLIENT

    -

    within their company

    -

    -

    CLIENT SERVICE

    -

    within their company

    -

    -

    CLIENT OPERATOR

    within their company

    within their company

    within their company

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    within their company

    within their company

    -

    CLIENT SERVICE

    -

    within their company

    -

    CLIENT OPERATOR

    within their company

    within their company

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    in their folders

    -

    CLIENT SERVICE

    within their company

    within their company

    -

    CLIENT OPERATOR

    within their company

    within their company

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    CLIENT

    in their folders

    in their folders

    -

    -

    CLIENT SERVICE

    within their company

    within their company

    within their company

    -

    CLIENT OPERATOR

    within their company

    within their company

    within their company

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    within their company

    CLIENT

    -

    within their company

    -

    -

    CLIENT SERVICE

    within their company

    within their company

    -

    -

    CLIENT OPERATOR

    -

    within their company

    -

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    within their company

    within their company

    -

    CLIENT SERVICE

    within their company

    within their company

    -

    CLIENT OPERATOR

    -

    within their company

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    within their company

    -

    CLIENT SERVICE

    -

    within their company

    -

    CLIENT OPERATOR

    -

    within their company

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    CLIENT

    -

    their data

    their data

    -

    CLIENT SERVICE

    -

    within their company

    their data

    -

    CLIENT OPERATOR

    -

    within their company

    their data

    -

    CLIENT ADMIN

    within their company

    within their company

    within their company

    within their company

    Create

    Read

    Update

    Delete

    ADMIN

    +

    +

    +

    +

    OPERATOR

    -

    +

    -

    Create

    Read

    Update

    Delete

    ADMIN

    +

    +

    +

    +

    OPERATOR

    +

    +

    +

    Create

    Read

    Update

    Delete

    ADMIN

    +

    +

    +

    +

    OPERATOR

    +

    +

    +

    Create

    Read

    Delete

    ADMIN

    +

    +

    +

    OPERATOR

    +

    +

    -

    CLIENT

    Create

    Read

    Delete

    ADMIN

    +

    +

    +

    OPERATOR

    +

    +

    -

    CLIENT

    Create

    Read

    Update

    Delete

    ADMIN

    +

    +

    +

    +

    OPERATOR

    +

    +

    +

    Create

    Read

    Update

    Delete

    ADMIN

    +

    +

    +

    +

    OPERATOR

    -

    +

    -

    Create

    Read

    Delete

    ADMIN

    +

    +

    +

    OPERATOR

    -

    +

    -

    CLIENT

    Create

    Read

    Delete

    ADMIN

    +

    +

    +

    OPERATOR

    -

    +

    -

    CLIENT

    Create

    Read

    Update

    Delete

    ADMIN

    +

    +

    +

    +

    OPERATOR

    -

    +

    their data

    BIOMETRY
    LIVENESS

    -

    -

    -

    -

    in their folders

    -

    -

    -

    -

    -

    API Lite Methods

    From 1.1.0, Oz API Lite works with base64 as an input format and is also able to return the biometric templates in this format. To enable this option, add Content-Transfer-Encoding = base64 to the request headers.
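    A minimal sketch of preparing such a base64 request in Python; the image bytes are a placeholder, and only the Content-Transfer-Encoding header itself comes from the documentation above:

```python
import base64

# Stand-in image bytes; in practice, read them from a JPEG or PNG file.
image_bytes = b"\xff\xd8\xff\xe0 placeholder jpeg bytes"

# Encode the body as base64 and signal it via the header so API Lite
# accepts base64 input and returns templates in base64 as well.
body = base64.b64encode(image_bytes)
headers = {
    "Content-Type": "image/jpeg",
    "Content-Transfer-Encoding": "base64",
}
```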

    version – component version check

    Use this method to check what versions of components are used (available from 1.1.1).

    Call GET /version

    Input parameters

    -

    Request example

    GET localhost/version

    Successful response

    In case of success, the method returns a message with the following parameters.

    HTTP response content type: “application/json”.

    Output parameters

    Response example

    Biometry

    health – biometric processor status check

    Use this method to check whether the biometric processor is ready to work.

    Call GET /v1/face/pattern/health

    Input parameters

    -

    Request example

    GET localhost/v1/face/pattern/health

    Successful response

    In case of success, the method returns a message with the following parameters.

    HTTP response content type: “application/json”.

    Output parameters

    Response example
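    The health response can be interpreted with a small helper; this sketch assumes the documented JSON shape, where status 0 means the processor is working and 3 means it is inoperative:

```python
import json

def processor_ready(response_text):
    """Return True when the processor reports status 0 (working correctly)."""
    return json.loads(response_text).get("status") == 0

ok = processor_ready('{"status": 0, "message": ""}')        # healthy processor
down = processor_ready('{"status": 3, "message": "down"}')  # inoperative processor
```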

    extract – the biometric template extraction

    The method is designed to extract a biometric template from an image.

    HTTP request content type: “image/jpeg” or “image/png”

    Call POST /v1/face/pattern/extract

    Input parameters

    * The name itself is not mandatory for a parameter of the Stream type.

    To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

    Request example

    Successful response

    In case of success, the method returns a biometric template.

    The content type of the HTTP response is “application/octet-stream”.

    If you've passed Content-Transfer-Encoding = base64 in headers, the template will be in base64 as well.

    Output parameters

    * The name itself is not mandatory for a parameter of the Stream type.

    Response example
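    The extract call described above can be sketched with Python's standard library; the host is a placeholder, and the request is only constructed here, not sent:

```python
import urllib.request

# Stand-in for real JPEG bytes read from a file.
image_bytes = b"\xff\xd8\xff\xe0 placeholder jpeg bytes"

# POST the raw image bytes; the response body would be the
# application/octet-stream biometric template.
req = urllib.request.Request(
    "http://localhost/v1/face/pattern/extract",
    data=image_bytes,
    headers={"Content-Type": "image/jpeg"},
    method="POST",
)
```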

    compare – the comparison of biometric templates

    The method is designed to compare two biometric templates.

    The content type of the HTTP request is “multipart/form-data”.

    Call POST /v1/face/pattern/compare

    Input parameters

    To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

    Request example

    Successful response

    In case of success, the method returns the result of comparing the two templates.

    HTTP response content type: “application/json”.

    Output parameters

    Response example

    verify – the biometric verification

    The method combines the two methods from above, extract and compare. It extracts a template from an image and compares the resulting biometric template with another biometric template that is also passed in the request.

    The content type of the HTTP request is “multipart/form-data”.

    Call POST /v1/face/pattern/verify

    Input parameters

    To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

    Request example

    Successful response

    In case of success, the method returns the result of comparing two biometric templates and the biometric template.

    The content type of the HTTP response is “multipart/form-data”.

    Output parameters

    Response example

    extract_and_compare – extraction and comparison of templates derived from two images

    The method also combines the two methods from above, extract and compare. It extracts templates from two images, compares the received biometric templates, and transmits the comparison result as a response.

    The content type of the HTTP request is “multipart/form-data”.

    Call POST /v1/face/pattern/extract_and_compare

    Input parameters

    To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

    Request example

    Successful response

    In case of success, the method returns the result of comparing the two extracted biometric templates.

    HTTP response content type: “application/json”.

    Output parameters

    Response example

    compare_n – 1:N biometric template comparison

    Use this method to compare one biometric template to N others.

    The content type of the HTTP request is “multipart/form-data”.

    Call POST /v1/face/pattern/compare_n

    Input parameters

    Request example

    Successful response

    In case of success, the method returns the result of the 1:N comparison.

    HTTP response content type: “application/json”.

    Output parameters

    Response example
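    The repeated templates_n parts with per-part filenames can be assembled by hand, as a sketch; the boundary string and template bytes below are placeholders:

```python
BOUNDARY = "ozapilite-boundary"  # placeholder boundary string

def multipart_compare_n(template_1, templates_n):
    """Assemble a multipart/form-data body for POST /v1/face/pattern/compare_n.

    templates_n is a list of (filename, template_bytes) pairs; every part
    reuses the field name "templates_n" and carries its filename, as the
    method requires.
    """
    body = bytearray()

    def add_part(disposition, payload):
        body.extend(b"--" + BOUNDARY.encode() + b"\r\n")
        body.extend(disposition.encode() + b"\r\n")
        body.extend(b"Content-Type: application/octet-stream\r\n\r\n")
        body.extend(payload + b"\r\n")

    add_part('Content-Disposition: form-data; name="template_1"', template_1)
    for filename, data in templates_n:
        add_part(
            'Content-Disposition: form-data; name="templates_n"; filename="%s"' % filename,
            data,
        )
    body.extend(b"--" + BOUNDARY.encode() + b"--\r\n")
    return bytes(body)

body = multipart_compare_n(
    b"main-template-bytes",
    [("1.template", b"t1"), ("2.template", b"t2")],
)
headers = {"Content-Type": "multipart/form-data; boundary=" + BOUNDARY}
```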

    verify_n – 1:N biometric verification

    The method combines the extract and compare_n methods. It extracts a biometric template from an image and compares it to N other biometric templates that are passed in the request as a list.

    The content type of the HTTP request is “multipart/form-data”.

    Call POST /v1/face/pattern/verify_n

    Input parameters

    To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

    Request example

    Successful response

    In case of success, the method returns the result of the 1:N comparison.

    HTTP response content type: “application/json”.

    Output parameters

    Response example

    extract_and_compare_n – 1:N template extraction and comparison

    This method also combines the extract and compare_n methods but in another way. It extracts biometric templates from the main image and a list of other images and then compares them in the 1:N mode.

    The content type of the HTTP request is “multipart/form-data”.

    Call POST /v1/face/pattern/extract_and_compare_n

    Input parameters

    To transfer data in base64, add Content-Transfer-Encoding = base64 to the request headers.

    Request example

    Successful response

    In case of success, the method returns the result of the 1:N comparison.

    HTTP response content type: “application/json”.

    Output parameters

    Response example

    Method errors

    HTTP response content type: “application/json”.

    * A biometric sample is an input image.

    Liveness

    health – checking the status of liveness processor

    Use this method to check whether the liveness processor is ready to work.

    Call GET /v1/face/liveness/health

    Input parameters

    • None.

    Request example

    GET localhost/v1/face/liveness/health

    Successful response

    In case of success, the method returns a message with the following parameters.

    HTTP response content type: “application/json”.

    Output parameters

    Response example

    detect – presentation attack detection

    The detect method is designed to reveal presentation attacks. It detects a face in each image or video (since 1.2.0), sends them for analysis, and returns a result.

    The method supports the following content types:

    • image/jpeg or image/png for an image;

    • multipart/form-data for images, videos, and archives. You can use payload to add any parameters that affect the analysis.

    To run the method, call POST /{version}/face/liveness/detect.

    Image

    Accepts an image in JPEG or PNG format. No payload attached.

    Request example
    Successful response example

    Multipart/form-data

    Accepts the multipart/form-data request.

    • Each media file should have a unique name, e.g., media_key1, media_key2.

    • The payload parameters should be a JSON placed in the payload field.

    Temporary IDs will be deleted once you get the result.

    Request example
    Successful response example

    Multipart/form-data with Best Shot

    To extract the best shot from your video or archive, in analyses, set extract_best_shot = true (as shown in the request example below). In this case, API Lite will analyze your archives and videos, and, in response, will return the best shot. It will be a base64 image in analysis->output_images->image_b64.

    Additionally, you can change the Liveness threshold: in analyses, set the new threshold in the threshold_spoofing parameter. If the resulting score is higher than this parameter's value, the analysis will end with the DECLINED status. Otherwise, the status will be SUCCESS.
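    The best-shot flag and the threshold rule above can be sketched as follows. The exact nesting of the payload JSON under an analyses key is an assumption based on the wording “in analyses”, and the 0.5 threshold value is arbitrary:

```python
import json

# Hypothetical content of the multipart `payload` field: enable best shot
# extraction and set a custom Liveness threshold.
payload = {
    "analyses": {
        "extract_best_shot": True,
        "threshold_spoofing": 0.5,  # arbitrary example value
    }
}
payload_json = json.dumps(payload)

def liveness_status(score, threshold_spoofing):
    """Documented rule: score above the threshold -> DECLINED, else SUCCESS."""
    return "DECLINED" if score > threshold_spoofing else "SUCCESS"
```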

    Request example
    Successful response example
    The payload field

    Method errors

    HTTP response content type: “application/json”.

    * A biometric sample is an input image.

    Recommended solution based on the score.

    approved – positive. The faces match.

    operator_required – additional operator verification is required.

    declined – negative result. The faces don't match.

    Recommended solution based on the score.

    approved – positive. The faces match.

    operator_required – additional operator verification is required.

    declined – negative result. The faces don't match.

    Recommended solution based on the score.

    approved – positive. The faces match.

    operator_required – additional operator verification is required.

    declined – negative result. The faces don't match.

    400 BPE-002004 – Failed to read the biometric template

    400 BPE-002005 – Invalid Content-Type of the multiparted HTTP request part

    400 BPE-003001 – Failed to retrieve the biometric template

    400 BPE-003002 – The biometric sample* is missing a face

    400 BPE-003003 – More than one person is present on the biometric sample*

    500 BPE-001001 – Internal bioprocessor error

    400 BPE-001002 – TFSS error. Call the biometry health method.

    400 LDE-002005 – Invalid Content-Type of the multiparted HTTP request part

    500 LDE-001001 – Liveness detection processor internal error

    400 LDE-001002 – TFSS error. Call the Liveness health method.

    version output parameters:

    • core (String) – API Lite core version number.

    • tfss (String) – TFSS version number.

    • models ([String]) – An array of model versions; each record contains the model name and model version number.

    biometry health output parameters:

    • status (Int) – 0: the biometric processor is working correctly; 3: the biometric processor is inoperative.

    • message (String) – Message.

    extract input parameters:

    • Not specified* (Stream) – Required parameter. Image to extract the biometric template from. The “Content-Type” header field must indicate the content type.

    extract output parameters:

    • Not specified* (Stream) – A biometric template derived from an image.

    compare input parameters:

    • bio_feature (Stream) – Required parameter. First biometric template.

    • bio_template (Stream) – Required parameter. Second biometric template.

    compare output parameters:

    • score (Float) – The result of comparing the two templates.

    • decision (String) – Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match.

    verify input parameters:

    • sample (Stream) – Required parameter. Image to extract the biometric template from.

    • bio_template (Stream) – Required parameter. The biometric template to compare with.

    verify output parameters:

    • score (Float) – The result of comparing the two templates.

    • bio_feature (Stream) – Biometric template derived from the image.

    extract_and_compare input parameters:

    • sample_1 (Stream) – Required parameter. First image.

    • sample_2 (Stream) – Required parameter. Second image.

    extract_and_compare output parameters:

    • score (Float) – The result of comparing the two extracted templates.

    • decision (String) – Recommended solution based on the score: approved – positive, the faces match; operator_required – additional operator verification is required; declined – negative result, the faces don't match.

    compare_n input parameters:

    • template_1 (Stream) – This parameter is mandatory. The first (main) biometric template.

    • templates_n (Stream) – A list of N biometric templates. Each of them should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header.

    compare_n output parameters:

    • results (List[JSON]) – A list of N comparison results. The Nth result contains the comparison result for the main and Nth templates. Each result has the following fields:

      • filename (String) – The filename for the Nth template.

      • score (Float) – The result of comparing the main and Nth templates.

      • decision (String) – The recommended decision based on the score.

    verify_n input parameters:

    • sample_1 (Stream) – This parameter is mandatory. The main image.

    • templates_n (Stream) – A list of N biometric templates. Each of them should be passed separately, but the parameter name should be templates_n. You also need to pass the filename in the header.

    verify_n output parameters:

    • results (List[JSON]) – A list of N comparison results. The Nth result contains the comparison result for the template derived from the main image and the Nth template. Each result has the following fields:

      • filename (String) – The filename for the Nth template.

      • score (Float) – The result of comparing the template derived from the main image and the Nth template.

      • decision (String) – The recommended decision based on the score.

    extract_and_compare_n input parameters:

    • sample_1 (Stream) – This parameter is mandatory. The first (main) image.

    • samples_n (Stream) – A list of N images. Each of them should be passed separately, but the parameter name should be samples_n. You also need to pass the filename in the header.

    extract_and_compare_n output parameters:

    • results (List[JSON]) – A list of N comparison results. The Nth result contains the comparison result for the main and Nth images. Each result has the following fields:

      • filename (String) – The filename for the Nth image.

      • score (Float) – The result of comparing the main and Nth images.

      • decision (String) – The recommended decision based on the score.

    HTTP response code, the “code” parameter value, and description:

    400 BPE-002001 – Invalid Content-Type of HTTP request

    400 BPE-002002 – Invalid HTTP request method

    400 BPE-002003 – Failed to read the biometric sample*

    liveness health output parameters:

    • status (Int) – 0: the liveness processor is working correctly; 3: the liveness processor is inoperative.

    • message (String) – Message.

    HTTP response code, the “code” parameter value, and description:

    400 LDE-002001 – Invalid Content-Type of HTTP request

    400 LDE-002002 – Invalid HTTP request method

    400 LDE-002004 – Failed to extract the biometric sample*


    200 OK
    Content-Type: application/json
    {
        "core": "core_version",
        "tfss": "tfss_version",
        "models": [
            {
                "name": "model_name",
                "version": "model_version"
            }
        ]
    }
    200 OK
    Content-Type: application/json
    {"status": 0, "message": ""}
    POST localhost/v1/face/pattern/extract
    Content-Type: image/jpeg
    {Image byte stream}
    200 OK
    Content-Type: application/octet-stream
    {Biometric template byte stream}
    POST localhost/v1/face/pattern/compare
    Content-Type: multipart/form-data;
    boundary=--BOUNDARY--
    Content-Length: Message body length
    --BOUNDARY--
    Content-Disposition: form-data; name="bio_feature"
    Content-Type: application/octet-stream
    {Biometric template byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="bio_template"
    Content-Type: application/octet-stream
    {Biometric template byte stream}
    --BOUNDARY--
    200 OK
    Content-Type: application/json
    {"score": 1.0, "decision": "approved"}
    POST localhost/v1/face/pattern/verify
    Content-Type: multipart/form-data;
    boundary=--BOUNDARY--
    Content-Length: Message body length
    --BOUNDARY--
    Content-Disposition: form-data; name="bio_template"
    Content-Type: application/octet-stream
    {Biometric template byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="sample"
    Content-Type: image/jpeg
    {Image byte stream}
    --BOUNDARY--
    200 OK
    Content-Type: multipart/form-data;
    boundary=--BOUNDARY--
    Content-Length: Message body length
    --BOUNDARY--
    Content-Disposition: form-data; name="score"
    Content-Type: application/json
    {"score": 1.0}
    --BOUNDARY--
    Content-Disposition: form-data; name="bio_feature"
    Content-Type: application/octet-stream
    {Biometric template byte stream}
    --BOUNDARY--
    POST localhost/v1/face/pattern/extract_and_compare
    Content-Type: multipart/form-data;
    boundary=--BOUNDARY--
    Content-Length: Message body length
    --BOUNDARY--
    Content-Disposition: form-data; name="sample_1"
    Content-Type: image/jpeg
    {Image byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="sample_2"
    Content-Type: image/jpeg
    {Image byte stream}
    --BOUNDARY--
    200 OK
    Content-Type: application/json
    {"score": 1.0, "decision": "approved"}
    POST localhost/v1/face/pattern/compare_n
    Content-Type: multipart/form-data;
    boundary=--BOUNDARY--
    Content-Length: message body length
    --BOUNDARY--
    Content-Disposition: form-data; name="template_1"
    Content-Type: application/octet-stream
    {biometric template byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="templates_n"; filename="1.template"
    Content-Type: application/octet-stream
    {biometric template byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="templates_n"; filename="2.template"
    Content-Type: application/octet-stream
    {biometric template byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="templates_n"; filename="3.template"
    Content-Type: application/octet-stream
    {biometric template byte stream}
    --BOUNDARY--
    200 OK
    Content-Type: application/json
    {"results": [
        {"filename": "1.template", "score": 0.0, "decision": "declined"},
        {"filename": "2.template", "score": 1.0, "decision": "approved"},
        {"filename": "3.template", "score": 0.21, "decision": "declined"}
    ]}
    POST localhost/v1/face/pattern/verify_n
    Content-Type: multipart/form-data;
    boundary=--BOUNDARY--
    Content-Length: message body length
    --BOUNDARY--
    Content-Disposition: form-data; name="sample_1"
    Content-Type: image/jpeg
    {image byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="templates_n"; filename="1.template"
    Content-Type: application/octet-stream
    {biometric template byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="templates_n"; filename="2.template"
    Content-Type: application/octet-stream
    {biometric template byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="templates_n"; filename="3.template"
    Content-Type: application/octet-stream
    {biometric template byte stream}
    --BOUNDARY--
    200 OK
    Content-Type: application/json
    {"results": [
        {"filename": "1.template", "score": 0.0, "decision": "declined"},
        {"filename": "2.template", "score": 1.0, "decision": "approved"},
        {"filename": "3.template", "score": 0.21, "decision": "declined"}
    ]}
    POST localhost/v1/face/pattern/extract_and_compare_n
    Content-Type: multipart/form-data;
    boundary=--BOUNDARY--
    Content-Length: message body length
    --BOUNDARY--
    Content-Disposition: form-data; name="sample_1"
    Content-Type: image/jpeg
    {image byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="samples_n"; filename="1.jpeg"
    Content-Type: image/jpeg
    {image byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="samples_n"; filename="2.jpeg"
    Content-Type: image/jpeg
    {image byte stream}
    --BOUNDARY--
    Content-Disposition: form-data; name="samples_n"; filename="3.jpeg"
    Content-Type: image/jpeg
    {image byte stream}
    --BOUNDARY--
    200 OK
    Content-Type: application/json
    {"results": [
        {"filename": "1.jpeg", "score": 0.0, "decision": "declined"},
        {"filename": "2.jpeg", "score": 1.0, "decision": "approved"},
        {"filename": "3.jpeg", "score": 0.21, "decision": "declined"}
    ]}
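The `compare_n`, `verify_n`, and `extract_and_compare_n` responses all share the `results` array shape shown above. A client might reduce such an array to the single best approved match; this is a minimal sketch, and the helper name `best_approved` is illustrative:

```python
# Sample payload copied from the compare_n response above.
response = {"results": [
    {"filename": "1.template", "score": 0.0, "decision": "declined"},
    {"filename": "2.template", "score": 1.0, "decision": "approved"},
    {"filename": "3.template", "score": 0.21, "decision": "declined"},
]}

def best_approved(results):
    """Return the highest-scoring entry with decision "approved", or None."""
    approved = [r for r in results if r["decision"] == "approved"]
    return max(approved, key=lambda r: r["score"], default=None)

match = best_approved(response["results"])
```

Filtering on `decision` first (rather than on `score` alone) keeps the server-side thresholding authoritative on the client.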
    200 OK
    Content-Type: application/json
    {"status": 0, "message": ""}
    POST /v1/face/liveness/detect HTTP/1.1
    Host: localhost
    Content-Type: image/jpeg
    Content-Length: [the size of the message body]
    [Image byte stream]
    HTTP/1.1 200 OK
    Content-Type: application/json
    {
      "passed": false,
      "score": 0.999484062
    }
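The single-image call above returns a small JSON body with a boolean `passed` flag and a `score`. A thin client wrapper might decode it like this; the function name is illustrative, and the sample body is copied verbatim from the response above:

```python
import json

def parse_liveness_response(raw: bytes):
    """Decode the JSON body of /v1/face/liveness/detect.

    Returns (passed, score): the server-side decision flag and the
    accompanying score, surfaced together for logging and auditing.
    """
    data = json.loads(raw)
    return data["passed"], data["score"]

# Body copied from the sample response above.
passed, score = parse_liveness_response(b'{"passed": false, "score": 0.999484062}')
```

Relying on the `passed` flag rather than re-thresholding `score` client-side keeps the decision consistent with the server's configured thresholds.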
    POST /v1/face/liveness/detect HTTP/1.1
    Host: localhost
    Content-Length: [the size of the message body]
    Content-Type: multipart/form-data; boundary=--BOUNDARY--
    
    --BOUNDARY--
    Content-Disposition: form-data; name="media_key1"; filename="video.mp4"
    Content-Type: video/mp4
    
    [media file byte stream]
    --BOUNDARY--
    Content-Disposition: form-data; name="payload"
    
        {
            "folder:meta_data": {
                "partner_side_folder_id": "partner_side_folder_id_if_needed",
                "person_info": {
                    "first_name": "John",
                    "middle_name": "Jameson",
                    "last_name": "Doe"
                }
            },
            "resolution_endpoint": "https://www.your-custom-endpoint.com",
            "media:meta_data": {
                "media_key1": {
                    "foo": "bar2"
                }
            },
            "media:tags": {
                "media_key1": [
                    "video_selfie",
                    "video_selfie_blank"
                ]
            },
            "analyses": [
              {
                "type": "quality",
                "meta_data": {
                  "example1": "some_example1"
                },
                "params": {
                    "threshold_spoofing": 0.6,
                    "extract_best_shot": false
                }
              }
            ]
        }
    --BOUNDARY--
    {
        "company_id": null,
        "time_created": 1720180784.769608,
        "folder_id": "folder_id", // temporary ID
        "user_id": null,
        "resolution_endpoint": "https://www.your-custom-endpoint.com",
        "resolution_status": "FINISHED",
        "resolution_comment": "[]",
        "system_resolution": "SUCCESS",
        "resolution_time": null,
        "resolution_author_id": null,
        "resolution_state_hash": null,
        "operator_comment": null,
        "operator_status": null,
        "is_cleared": null,
        "meta_data": {
            "partner_side_folder_id": "partner_side_folder_id_if_needed",
            "person_info": {
                "first_name": "John",
                "middle_name": "Jameson",
                "last_name": "Doe"
            }
        },
        "technical_meta_data": {},
        "time_updated": 1720180787.531983,
        "media": [
            {
                "folder_id": "folder_id", // temporary ID
                "media_id": "video_id", // temporary ID
                "media_type": "VIDEO_FOLDER",
                "info": {
                    "thumb": null,
                    "video": {
                        "duration": 3.76,
                        "FPS": 22.83,
                        "width": 960,
                        "height": 720,
                        "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                        "size": 6017119,
                        "mime-type": "video/mp4"
                    }
                },
                "tags": [
                    "video_selfie",
                    "video_selfie_blank",
                    "orientation_portrait"
                ],
                "original_name": "video-5mb.mp4",
                "original_url": null,
                "company_id": null,
                "technical_meta_data": {},
                "time_created": 1719573752.78253,
                "time_updated": 1720180787.531801,
                "meta_data": {
                    "foo4": "bar5"
                },
                "thumb_url": null,
                "folder_time_created": null,
                "video_id": "video_id", // temporary ID
                "video_url": null
            }
        ],
        "analyses": [
            {
                "analyse_id": null,
                "analysis_id": null,
                "folder_id": "folder_id", // temporary ID
                "folder_time_created": null,
                "type": "QUALITY",
                "state": "FINISHED",
                "company_id": null,
                "group_id": null,
                "results_data": null,
                "confs": {
                    "threshold_replay": 0.5,
                    "extract_best_shot": false,
                    "threshold_liveness": 0.5,
                    "threshold_spoofing": 0.42
                },
                "error_message": null,
                "error_code": null,
                "resolution_operator": null,
                "technical_meta_data": {},
                "time_created": 1720180784.769944,
                "time_updated": 1720180787.531877,
                "meta_data": {
                    "some_key": "some_value"
                },
                "source_media": [
                    {
                        "folder_id": "folder_id", // temporary ID
                        "media_id": "video_id", // temporary ID
                        "media_type": "VIDEO_FOLDER",
                        "info": {
                            "thumb": null,
                            "video": {
                                "duration": 3.76,
                                "FPS": 22.83,
                                "width": 960,
                                "height": 720,
                                "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                                "size": 6017119,
                                "mime-type": "video/mp4"
                            }
                        },
                        "tags": [
                            "video_selfie",
                            "video_selfie_blank",
                            "orientation_portrait"
                        ],
                        "original_name": "video-5mb.mp4",
                        "original_url": null,
                        "company_id": null,
                        "technical_meta_data": {},
                        "time_created": 1719573752.78253,
                        "time_updated": 1720180787.531801,
                        "meta_data": {
                            "foo4": "bar5"
                        },
                        "thumb_url": null,
                        "folder_time_created": null,
                        "video_id": "video_id", // temporary ID
                        "video_url": null
                    }
                ],
                "results_media": [
                    {
                        "company_id": null,
                        "media_association_id": "video_id", // temporary ID
                        "analysis_id": null,
                        "results_data": {
                            "confidence_spoofing": 0.000541269779
                        },
                        "source_media_id": "video_id", // temporary ID
                        "output_images": [],
                        "collection_persons": [],
                        "folder_time_created": null
                    }
                ],
                "resolution_status": "SUCCESS",
                "resolution": "SUCCESS"
            }
        ]
    }
    POST /v1/face/liveness/detect HTTP/1.1
    Host: localhost
    Content-Length: [the size of the message body]
    Content-Type: multipart/form-data; boundary=--BOUNDARY--
    
    --BOUNDARY--
    Content-Disposition: form-data; name="media_key1"; filename="video.mp4"
    Content-Type: video/mp4
    
    [media file byte stream]
    --BOUNDARY--
    Content-Disposition: form-data; name="payload"
    
        {
            "folder:meta_data": {
                "partner_side_folder_id": "partner_side_folder_id_if_needed",
                "person_info": {
                    "first_name": "John",
                    "middle_name": "Jameson",
                    "last_name": "Doe"
                }
            },
            "resolution_endpoint": "https://www.your-custom-endpoint.com",
            "media:meta_data": {
                "media_key1": {
                    "foo": "bar2"
                }
            },
            "media:tags": {
                "media_key1": [
                    "video_selfie",
                    "video_selfie_blank"
                ]
            },
            "analyses": [
              {
                "type": "quality",
                "meta_data": {
                  "example1": "some_example1"
                },
                "params": {
                    "threshold_spoofing": 0.6,
                    "extract_best_shot": true
                }
              }
            ]
        }
    --BOUNDARY--
    {
        "company_id": null,
        "time_created": 1720177371.120899,
        "folder_id": "folder_id", // temporary ID
        "user_id": null,
        "resolution_endpoint": "https://www.your-custom-endpoint.com",
        "resolution_status": "FINISHED",
        "resolution_comment": "[]",
        "system_resolution": "SUCCESS",
        "resolution_time": null,
        "resolution_author_id": null,
        "resolution_state_hash": null,
        "operator_comment": null,
        "operator_status": null,
        "is_cleared": null,
        "meta_data": {
            "partner_side_folder_id": "partner_side_folder_id_if_needed",
            "person_info": {
                "first_name": "John",
                "middle_name": "Jameson",
                "last_name": "Doe"
            }
        },
        "technical_meta_data": {},
        "time_updated": 1720177375.531137,
        "media": [
            {
                "folder_id": "folder_id", // temporary ID
                "media_id": "media_id", // temporary ID
                "media_type": "VIDEO_FOLDER",
                "info": {
                    "thumb": null,
                    "video": {
                        "duration": 3.76,
                        "FPS": 22.83,
                        "width": 960,
                        "height": 720,
                        "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                        "size": 6017119,
                        "mime-type": "video/mp4"
                    }
                },
                "tags": [
                    "video_selfie",
                    "video_selfie_blank",
                    "orientation_portrait"
                ],
                "original_name": "video-5mb.mp4",
                "original_url": null,
                "company_id": null,
                "technical_meta_data": {},
                "time_created": 1719573752.781861,
                "time_updated": 1720177373.772401,
                "meta_data": {
                    "foo4": "bar5"
                },
                "thumb_url": null,
                "folder_time_created": null,
                "video_id": "media_id", // temporary ID
                "video_url": null
            }
        ],
        "analyses": [
            {
                "analyse_id": null,
                "analysis_id": null,
                "folder_id": "folder_id", // temporary ID
                "folder_time_created": null,
                "type": "QUALITY",
                "state": "FINISHED",
                "company_id": null,
                "group_id": null,
                "results_data": null,
                "confs": {
                    "threshold_replay": 0.5,
                    "extract_best_shot": true,
                    "threshold_liveness": 0.5,
                    "threshold_spoofing": 0.42
                },
                "error_message": null,
                "error_code": null,
                "resolution_operator": null,
                "technical_meta_data": {},
                "time_created": 1720177371.121241,
                "time_updated": 1720177375.531043,
                "meta_data": {
                    "some_key": "some_value"
                },
                "source_media": [
                    {
                        "folder_id": "folder_id", // temporary ID
                        "media_id": "media_id", // temporary ID
                        "media_type": "VIDEO_FOLDER",
                        "info": {
                            "thumb": null,
                            "video": {
                                "duration": 3.76,
                                "FPS": 22.83,
                                "width": 960,
                                "height": 720,
                                "md5": "8879b4fa9ee7add77aceb8d7d5d7b92d",
                                "size": 6017119,
                                "mime-type": "video/mp4"
                            }
                        },
                        "tags": [
                            "video_selfie",
                            "video_selfie_blank",
                            "orientation_portrait"
                        ],
                        "original_name": "video-5mb.mp4",
                        "original_url": null,
                        "company_id": null,
                        "technical_meta_data": {},
                        "time_created": 1719573752.781861,
                        "time_updated": 1720177373.772401,
                        "meta_data": {
                            "foo4": "bar5"
                        },
                        "thumb_url": null,
                        "folder_time_created": null,
                        "video_id": "media_id", // temporary ID
                        "video_url": null
                    }
                ],
                "results_media": [
                    {
                        "company_id": null,
                        "media_association_id": "media_id", // temporary ID
                        "analysis_id": null,
                        "results_data": {
                            "confidence_spoofing": 0.000541269779
                        },
                        "source_media_id": "media_id", // temporary ID
                        "output_images": [
                            {
                                "folder_id": "folder_id", // temporary ID
                                "media_id": "media_id", // temporary ID
                                "media_type": "IMAGE_RESULT_ANALYSIS_SINGLE",
                                "info": {
                                    "thumb": null,
                                    "original": {
                                        "md5": "e6effeceb94e79b8cb204c6652283b57",
                                        "width": 720,
                                        "height": 960,
                                        "size": 145178,
                                        "mime-type": "image/jpeg"
                                    }
                                },
                                "tags": [],
                                "original_name": "<PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=720x960 at 0x766811DF8E90>",
                                "original_url": null,
                                "company_id": null,
                                "technical_meta_data": {},
                                "time_created": 1719573752.781861,
                                "time_updated": 1719573752.781871,
                                "meta_data": null,
                                "folder_time_created": null,
                                "image_b64": "",
                                "media_association_id": "media_id" // temporary ID
                            }
                        ],
                        "collection_persons": [],
                        "folder_time_created": null
                    }
                ],
                "resolution_status": "SUCCESS",
                "resolution": "SUCCESS"
            }
        ]
    }
    {
        "folder:meta_data": {
            "partner_side_folder_id": "partner_side_folder_id_if_needed",
            "person_info": {
                "first_name": "John",
                "middle_name": "Jameson",
                "last_name": "Doe"
            }
        },
        "resolution_endpoint": "https://www.your-custom-endpoint.com",
        "media:meta_data": {
            "media_key1": {
                "foo": "bar2",
                "additional_info": "additional_info" // might affect the score
            },
            "media_key2": {
                "foo2": "bar3"
            },
            "media_key3": {
                "foo4": "bar5"
            }
        },
        "media:tags": {
            "media_key1": [
                "video_selfie",
                "video_selfie_blank",
                "orientation_portrait"
            ],
            "media_key2": [
                "photo_selfie"
            ],
            "media_key3": [
                "video_selfie",
                "video_selfie_blank",
                "orientation_portrait"
            ]
        },
        "analyses": [
            {
                "type": "quality",
                "meta_data": {
                    "some_key": "some_value"
                },
                "params": {
                    "threshold_spoofing": 0.42, // affects resolution
                    "extract_best_shot": true // analysis will return the best shot
                }
            }
        ]
    }
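Because the annotated payload above carries `//` comments, it is not strict JSON; a client would build the equivalent structure as a plain dict and serialize it into the `payload` part of the multipart request. A minimal sketch using the illustrative field values from the example:

```python
import json

payload = {
    "folder:meta_data": {
        "partner_side_folder_id": "partner_side_folder_id_if_needed",
        "person_info": {"first_name": "John", "middle_name": "Jameson", "last_name": "Doe"},
    },
    "resolution_endpoint": "https://www.your-custom-endpoint.com",
    "media:tags": {"media_key1": ["video_selfie", "video_selfie_blank"]},
    "analyses": [
        {
            "type": "quality",
            "meta_data": {"some_key": "some_value"},
            # threshold_spoofing affects the resolution;
            # extract_best_shot asks the analysis to return the best shot.
            "params": {"threshold_spoofing": 0.42, "extract_best_shot": True},
        }
    ],
}

# The serialized form goes into the "payload" form-data part of the request.
payload_json = json.dumps(payload)
```

The media keys referenced under `media:tags` (here `media_key1`) must match the form-data part names of the attached media files.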