This callback is called periodically during the analysis processing. It retrieves an intermediate result (unavailable for the `capture` mode). The result content depends on the Web Adapter `result_mode` configuration parameter.
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, here.
When `result_mode` is `safe`, the `on_result` callback contains the state of the analysis only:
For the `status` value, the callback contains the state of the analysis and, for each of the analysis types, the name of the type, its state, and resolution.
The `folder` value is almost identical to the `status` value, with one difference: the `folder_id` is added.
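As a purely illustrative sketch (the exact field names come from your Web Adapter version and are assumptions here, not the authoritative schema), the three non-`full` modes could be compared like this:

```javascript
// Illustrative shapes only -- NOT the authoritative Oz API schema.
// In "safe" mode the callback receives only the overall state:
const safeResult = { state: 'processing' }; // or 'finished'

// In "status" mode, each analysis type is reported with its own
// state and resolution (field names are assumptions):
const statusResult = {
  state: 'finished',
  analyses: [
    { type: 'quality', state: 'finished', resolution: 'SUCCESS' }
  ]
};

// In "folder" mode, the same data is extended with folder_id:
const folderResult = Object.assign({ folder_id: '<folder-uuid>' }, statusResult);
```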
If `result_mode` is set to `full`, you will receive either:
- while the analysis is in progress, a response similar to the `status` response for processing;
- once the analysis is finished, the full information on the analysis:
  - everything that you could see in the `folder` mode;
  - timestamps;
  - metadata;
  - analyses', company, and analysis group IDs;
  - thresholds;
  - media info;
  - and more.
The plugin window is launched with the `open(options)` method:
The full list of `OzLiveness.open()` parameters:
- `options` – an object with the following settings:
  - `token` – (optional) the auth token;
  - `license` – an object containing the license data;
  - `licenseUrl` – a string containing the path to the license;
  - `lang` – a string containing the identifier of one of the installed language packs;
  - `params` – an object with identifiers and additional parameters:
    - `extract_best_shot` – `true` or `false`: run the best frame choice in the Quality analysis;
    - `action` – an array of strings with identifiers of actions to be performed. Available actions:
      - `photo_id_front` – photo of the ID front side;
      - `photo_id_back` – photo of the ID back side;
      - `video_selfie_left` – turn head to the left;
      - `video_selfie_right` – turn head to the right;
      - `video_selfie_down` – tilt head downwards;
      - `video_selfie_high` – raise head up;
      - `video_selfie_smile` – smile;
      - `video_selfie_eyes` – blink;
      - `video_selfie_scan` – scanning;
      - `video_selfie_blank` – no action, simple selfie;
      - `video_selfie_best` – a special action to select the best shot from a video and perform analysis on it instead of the full video.
  - `overlay_options` – the document's template displaying options:
    - `show_document_pattern`: `true`/`false` – `true` by default displays a template image; if set to `false`, the image is replaced by a rectangular frame;
  - `on_error` – a callback function (one argument) that is called when any error happens during video capture; it retrieves the error information: an object with the error code, error message, and telemetry ID for logging;
  - `on_close` – a callback function (no arguments) that is called after the plugin window is closed (whether manually by the user or automatically after the check is completed);
  - `device_id` – (optional) the identifier of the camera to use;
  - `disable_adaptive_aspect_ratio` (since 1.5.0) – if `true`, disables the adaptive video aspect ratio, so your video doesn't automatically adjust to the window aspect ratio. The default value is `false`; by default, the video adjusts to the closest of the 4:3, 3:4, 16:9, or 9:16 ratios. Please note: smartphones still require portrait orientation to work;
  - `get_user_media_timeout` (since 1.5.0) – when the Web SDK can't get access to the user's camera, it displays a hint on how to solve the problem after this timeout. The default value is 40000 (ms);
  - `meta` – an object with names of meta fields as keys and their string values as values. It is transferred to Oz API and can be used to obtain analysis results or for searching; for example, use `GET /api/folders/?meta_data=transaction_id==<your_transaction_id>` to find a folder in Oz API from your backend by your unique identifier;
  - `on_submit` – a callback function (no arguments) that is called after the customer data is submitted to the server (unavailable for the `capture` mode);
  - `on_capture_complete` – a callback function (one argument) that is called after the video is captured; it retrieves the information on this video. An example of the response is described below;
  - `on_result` – a callback function (one argument) that is called periodically during the analysis; it retrieves an intermediate result (unavailable for the `capture` mode). The result content depends on the Web Adapter `result_mode` and is described above;
  - `on_complete` – a callback function (one argument) that is called after the check is completed; it retrieves the analysis result (unavailable for the `capture` mode). The result content depends on the Web Adapter `result_mode` and is described below;
  - `style` – ;
  - `cameraFacingMode` (since 1.4.0) – the parameter that defines which camera to use; possible values: `user` (front camera) and `environment` (rear camera). This parameter only works if the `use_for_liveness` option in the configuration file is undefined. If `use_for_liveness` is set (to any value), `cameraFacingMode` is overridden and ignored.
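Putting these parameters together, a minimal launch might look like the following sketch. The token value and the chosen action are placeholders, and the callbacks just log; `OzLiveness` is assumed to be loaded by the plugin script on the page:

```javascript
// Sketch of a plugin launch; all values are placeholders, not defaults.
const options = {
  token: '<auth-token>',            // optional auth token
  lang: 'en',                       // installed language pack identifier
  params: {
    extract_best_shot: true,        // run the best frame choice in Quality
    action: ['video_selfie_blank']  // a single "simple selfie" action
  },
  on_error: (error) => console.error('capture error:', error),
  on_close: () => console.log('plugin window closed'),
  on_complete: (result) => console.log('analysis result:', result)
};

// Launch the plugin window (guarded so the sketch is safe to run
// outside a page where the OzLiveness script is present):
if (typeof OzLiveness !== 'undefined') {
  OzLiveness.open(options);
}
```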
This callback is called after the check is completed. It retrieves the analysis result (unavailable for the `capture` mode). The result content depends on the Web Adapter `result_mode` configuration parameter.
Keep in mind that it is more secure to make your back end responsible for the decision logic. You can find more details, including code samples, here.
When `result_mode` is `safe`, the `on_complete` callback contains the state of the analysis only:
For the `status` value, the callback contains the state of the analysis and, for each of the analysis types, the name of the type, its state, and resolution.
The `folder` value is almost identical to the `status` value, with one difference: the `folder_id` is added.
If `result_mode` is set to `full`, you receive the full information on the analysis:
- everything that you could see in the `folder` mode;
- score (the value of the `min_confidence` or `confidence_spoofing` parameter; please refer to this article for details);
- timestamps;
- metadata;
- analyses', company, and analysis group IDs;
- thresholds;
- media info;
- and more.
In this article, you’ll learn how to capture videos and send them through your backend to Oz API.
Here is the data flow for your scenario:
1. Oz Web SDK takes a video and makes it available for the host application as a frame sequence.
2. The host application calls your backend, passing an archive of these frames.
3. After the necessary preprocessing steps, your backend calls Oz API, which performs all necessary analyses and returns the analyses’ results.
4. Your backend responds back to the host application if needed.
On the server side, the Web SDK must be configured to operate in the `capture` mode: the `architecture` parameter must be set to `capture` in the app_config.json file.
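A minimal `app_config.json` fragment for this mode might look as follows (only the `architecture` key is shown; your real config keeps its other fields as they are):

```json
{
  "architecture": "capture"
}
```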
In your Web app, add a callback to process captured media when opening the Web SDK plugin:
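A sketch of such a call, assuming the field names from the variable table below (`frame_list`, `best_frame`); the `sendFramesToBackend` helper is hypothetical and stands for your own upload code:

```javascript
// Sketch: open the plugin in capture mode and collect the frames.
function handleCapture(result) {
  // result.frame_list holds all frames as data URLs;
  // result.best_frame holds the best frame (JPEG data URL).
  const frames = result.frame_list || [];
  console.log('captured frames:', frames.length);
  // sendFramesToBackend(frames); // hypothetical: zip and POST to your backend
  return frames.length;
}

// Guarded so the sketch is safe outside a page with the plugin script:
if (typeof OzLiveness !== 'undefined') {
  OzLiveness.open({
    params: { extract_best_shot: true },
    on_capture_complete: handleCapture
  });
}
```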
The result object structure depends on whether any virtual camera is detected or not.
Here’s the list of variables with descriptions.
The video from Oz Web SDK is a frame sequence, so, to send it to Oz API, you'll need to archive the frames and transmit them as a ZIP file via the `POST /api/folders` request (check our Postman collections).
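For instance, in Node each data-URL frame can be decoded to binary before archiving. The actual zipping and upload are left as comments because the ZIP library and API credentials depend on your setup:

```javascript
// Decode a data-URL frame (as produced by the Web SDK) into a Buffer.
function dataUrlToBuffer(dataUrl) {
  // Strip the "data:image/...;base64," prefix and decode the payload.
  const base64 = dataUrl.split(',')[1];
  return Buffer.from(base64, 'base64');
}

// Hypothetical upload flow:
// 1. decode every entry of frame_list with dataUrlToBuffer();
// 2. pack the buffers into a ZIP archive (e.g. with a ZIP library of your choice);
// 3. POST the archive as multipart form data to {{host}}/api/folders,
//    together with the additional_info field required in capture mode.
```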
You can retrieve the MP4 video from a folder using the `/api/folders/{{folder_id}}` request with this folder's ID. In the JSON that you receive, look for `preview_url` in `source_media`; the `preview_url` parameter contains the link to the video. MP4 videos are unavailable from the plugin itself (only frame sequences).
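Assuming the response shape described above (`preview_url` entries inside `source_media`; everything else about the folder JSON is an assumption), collecting the links might look like this:

```javascript
// Collect video links from a folder response.
function getPreviewUrls(folderJson) {
  return (folderJson.source_media || [])
    .map((media) => media.preview_url)
    .filter(Boolean); // drop entries without a preview_url
}

// Usage with a /api/folders/{{folder_id}} response:
// const urls = getPreviewUrls(await response.json());
```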
Also, in the `POST {{host}}/api/folders` request, you need to add the `additional_info` field. It is required in the `capture` architecture mode to gather the necessary information about the client environment. Here's an example of filling in the request's body:
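The original sample body is not reproduced here, so the fragment below is only a hedged illustration: per the variable table, `additional_info` is a string carrying client-environment details, and the keys inside the serialized string are assumptions:

```json
{
  "additional_info": "{\"user_agent\": \"Mozilla/5.0 ...\", \"platform\": \"Win32\"}"
}
```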
Oz API accepts data without the base64 encoding.
| Variable | Type | Description |
| --- | --- | --- |
| `best_frame` | String | The best frame, JPEG in the data URL format |
| `best_frame_png` | String | The best frame, PNG in the data URL format; required for protection against virtual cameras when video is not used |
| `best_frame_bounding_box` | Array[Named_parameter: Int] | The coordinates of the bounding box where the face is located in the best frame |
| `best_frame_landmarks` | Array[Named_parameter: Array[Int, Int]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the best frame |
| `frame_list` | Array[String] | All frames in the data URL format |
| `frame_bounding_box_list` | Array[Array[Named_parameter: Int]] | The coordinates of the bounding boxes where the face is located in the corresponding frames |
| `frame_landmarks` | Array[Named_parameter: Array[Int, Int]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the corresponding frames |
| `action` | String | An action code |
| `additional_info` | String | Information about the client environment |