Capturing Video and Description of the on_capture_complete Callback

In this article, you’ll learn how to capture videos and send them through your backend to Oz API.


1. Overview

Here is the data flow for your scenario:

1. Oz Web SDK takes a video and makes it available for the host application as a frame sequence.

2. The host application calls your backend, passing an archive of these frames.

3. After the necessary preprocessing, your backend calls Oz API, which performs the required analyses and returns their results.

4. Your backend responds to the host application if needed.

2. Implementation

On the server side, the Web SDK must be configured to operate in the capture mode:

OZLiveness.open({
  ... // other parameters
  on_capture_complete: function(result) {
    // Your code to process the media or send it to your backend (step 2 of the data flow)
  }
})

The structure of the result object depends on whether a virtual camera is detected.

No Virtual Camera Detected

{
  "action": <action>,
  "best_frame": <bestframe>,
  "best_frame_png": <bestframe_png>,
  "best_frame_bounding_box": {
    "left": <bestframe_bb_left>,
    "top": <bestframe_bb_top>,
    "right": <bestframe_bb_right>,
    "bottom": <bestframe_bb_bottom>
  },
  "best_frame_landmarks": {
    "left_eye": [bestframe_x_left_eye, bestframe_y_left_eye],
    "right_eye": [bestframe_x_right_eye, bestframe_y_right_eye],
    "nose_base": [bestframe_x_nose_base, bestframe_y_nose_base],
    "mouth_bottom": [bestframe_x_mouth_bottom, bestframe_y_mouth_bottom],
    "left_ear": [bestframe_x_left_ear, bestframe_y_left_ear],
    "right_ear": [bestframe_x_right_ear, bestframe_y_right_ear]
  },
  "frame_list": [<frame1>, <frame2>],
  "frame_bounding_box_list": [
    {
      "left": <frame1_bb_left>,
      "top": <frame1_bb_top>,
      "right": <frame1_bb_right>,
      "bottom": <frame1_bb_bottom>
    },
    {
      "left": <frame2_bb_left>,
      "top": <frame2_bb_top>,
      "right": <frame2_bb_right>,
      "bottom": <frame2_bb_bottom>
    }
  ],
  "frame_landmarks": [
    {
      "left_eye": [frame1_x_left_eye, frame1_y_left_eye],
      "right_eye": [frame1_x_right_eye, frame1_y_right_eye],
      "nose_base": [frame1_x_nose_base, frame1_y_nose_base],
      "mouth_bottom": [frame1_x_mouth_bottom, frame1_y_mouth_bottom],
      "left_ear": [frame1_x_left_ear, frame1_y_left_ear],
      "right_ear": [frame1_x_right_ear, frame1_y_right_ear]
    },
    {
      "left_eye": [frame2_x_left_eye, frame2_y_left_eye],
      "right_eye": [frame2_x_right_eye, frame2_y_right_eye],
      "nose_base": [frame2_x_nose_base, frame2_y_nose_base],
      "mouth_bottom": [frame2_x_mouth_bottom, frame2_y_mouth_bottom],
      "left_ear": [frame2_x_left_ear, frame2_y_left_ear],
      "right_ear": [frame2_x_right_ear, frame2_y_right_ear]
    }
  ],
  "from_virtual_camera": null,
  "additional_info": <additional_info>
}

Any Virtual Camera Detected

{
  "action": <action>,
  "best_frame": null,
  "best_frame_png": null,
  "best_frame_bounding_box": null,
  "best_frame_landmarks": null,
  "frame_list": null,
  "frame_bounding_box_list": null,
  "frame_landmarks": null,
  "from_virtual_camera": {
    "additional_info": <additional_info>,
    "best_frame": <bestframe>,
    "best_frame_png": <best_frame_png>,
    "best_frame_bounding_box": {
      "left": <bestframe_bb_left>,
      "top": <bestframe_bb_top>,
      "right": <bestframe_bb_right>,
      "bottom": <bestframe_bb_bottom>
    },
    "best_frame_landmarks": {
      "left_eye": [bestframe_x_left_eye, bestframe_y_left_eye],
      "right_eye": [bestframe_x_right_eye, bestframe_y_right_eye],
      "nose_base": [bestframe_x_nose_base, bestframe_y_nose_base],
      "mouth_bottom": [bestframe_x_mouth_bottom, bestframe_y_mouth_bottom],
      "left_ear": [bestframe_x_left_ear, bestframe_y_left_ear],
      "right_ear": [bestframe_x_right_ear, bestframe_y_right_ear]
    },
    "frame_list": [<frame1>, <frame2>],
    "frame_bounding_box_list": [
      {
        "left": <frame1_bb_left>,
        "top": <frame1_bb_top>,
        "right": <frame1_bb_right>,
        "bottom": <frame1_bb_bottom>
      },
      {
        "left": <frame2_bb_left>,
        "top": <frame2_bb_top>,
        "right": <frame2_bb_right>,
        "bottom": <frame2_bb_bottom>
      }
    ],
    "frame_landmarks": [
      {
        "left_eye": [frame1_x_left_eye, frame1_y_left_eye],
        "right_eye": [frame1_x_right_eye, frame1_y_right_eye],
        "nose_base": [frame1_x_nose_base, frame1_y_nose_base],
        "mouth_bottom": [frame1_x_mouth_bottom, frame1_y_mouth_bottom],
        "left_ear": [frame1_x_left_ear, frame1_y_left_ear],
        "right_ear": [frame1_x_right_ear, frame1_y_right_ear]
      },
      {
        "left_eye": [frame2_x_left_eye, frame2_y_left_eye],
        "right_eye": [frame2_x_right_eye, frame2_y_right_eye],
        "nose_base": [frame2_x_nose_base, frame2_y_nose_base],
        "mouth_bottom": [frame2_x_mouth_bottom, frame2_y_mouth_bottom],
        "left_ear": [frame2_x_left_ear, frame2_y_left_ear],
        "right_ear": [frame2_x_right_ear, frame2_y_right_ear]
      }
    ]
  }
}

Here is the list of variables with their descriptions.

| Variable | Type | Description |
| --- | --- | --- |
| best_frame | String | The best frame, JPEG in the data URL format |
| best_frame_png | String | The best frame, PNG in the data URL format; required for protection against virtual cameras when video is not used |
| best_frame_bounding_box | Array[Named_parameter: Int] | The coordinates of the bounding box where the face is located in the best frame |
| best_frame_landmarks | Array[Named_parameter: Array[Int, Int]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the best frame |
| frame_list | Array[String] | All frames in the data URL format |
| frame_bounding_box_list | Array[Array[Named_parameter: Int]] | The coordinates of the bounding boxes where the face is located in the corresponding frames |
| frame_landmarks | Array[Named_parameter: Array[Int, Int]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the corresponding frames |
| action | String | An action code |
| additional_info | String | Information about the client environment |
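Because the same media fields appear either at the top level (no virtual camera) or nested under from_virtual_camera (virtual camera detected), it can be convenient to normalize the result before further processing. The sketch below illustrates this; getCaptureData is a hypothetical helper name, not part of the SDK.

```javascript
// Hypothetical helper: returns the object that actually carries the frames,
// regardless of whether a virtual camera was detected.
function getCaptureData(result) {
  // When a virtual camera is detected, the top-level media fields are null
  // and the data lives under result.from_virtual_camera instead.
  return result.from_virtual_camera ?? result;
}

// No virtual camera: fields are at the top level, from_virtual_camera is null.
const plain = { action: "<action>", frame_list: ["data:image/jpeg;base64,..."], from_virtual_camera: null };
console.log(getCaptureData(plain) === plain); // true

// Virtual camera detected: fields are nested under from_virtual_camera.
const virtual = { action: "<action>", frame_list: null, from_virtual_camera: { frame_list: ["data:image/jpeg;base64,..."] } };
console.log(getCaptureData(virtual).frame_list.length); // 1
```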

Please note:

  • You can retrieve the MP4 video from a folder using the /api/folders/{{folder_id}} request with this folder's ID. In the JSON response, look for preview_url in source_media: this parameter contains the link to the video. MP4 videos are not available directly from the plugin, only frame sequences are.

  • Also, in the POST {{host}}/api/folders request, you need to add the additional_info field. It is required in the capture architecture mode to gather the necessary information about the client environment. Here is an example of the request body:

"VIDEO_FILE_KEY": VIDEO_FILE_ZIP_BINARY
"payload": "{
    "media:meta_data": {
        "VIDEO_FILE_KEY": {
            "additional_info": <additional_info>
        }
    }
}"
  • Oz API accepts data without the base64 encoding.
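As a sketch of the backend side of the note above, the following Node.js fragment (Node 18+, which ships fetch, FormData, and Blob as globals) builds the multipart body for the POST {{host}}/api/folders request: the ZIP of frames under a file key and the payload field carrying additional_info. The function name, the "video1" key, and the host/token names in the usage comment are placeholders, not prescribed by Oz API.

```javascript
// Sketch only: builds the multipart body for POST /api/folders as described
// in the note above. buildFolderForm and videoFileKey are placeholder names.
function buildFolderForm(videoFileKey, zipBytes, additionalInfo) {
  // The payload field is a JSON string attaching additional_info
  // to the uploaded media, mirroring the request-body example above.
  const payload = JSON.stringify({
    "media:meta_data": {
      [videoFileKey]: { "additional_info": additionalInfo },
    },
  });
  const form = new FormData();
  // The ZIP archive of captured frames, attached under the chosen file key.
  form.append(videoFileKey, new Blob([zipBytes], { type: "application/zip" }), "frames.zip");
  form.append("payload", payload);
  return form;
}

// Usage (OZ_API_HOST and accessToken are assumed to come from your setup;
// the auth header name is an assumption, check your Oz API credentials):
// const res = await fetch(`${OZ_API_HOST}/api/folders`, {
//   method: "POST",
//   headers: { "X-Forensic-Access-Token": accessToken },
//   body: buildFolderForm("video1", zipBytes, additionalInfo),
// });
```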

The architecture parameter must be set to capture in the app_config.json file.

In your Web app, add a callback to process the captured media when opening the Web SDK plugin, as shown in the on_capture_complete snippet above.

The video from Oz Web SDK is a frame sequence, so, to send it to Oz API, you'll need to archive the frames and transmit them as a ZIP file via the POST /api/folders request (check our Postman collections).
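The archiving step can be sketched as follows. The frames arrive as data URLs, so their base64 content must be extracted before it can be written into a ZIP archive; the helper names below are illustrative, and the zipping itself is shown with the third-party JSZip library as an assumed choice, not something bundled with the SDK.

```javascript
// Illustrative helper: strips the "data:<mime>;base64," prefix from a
// data-URL frame, leaving the raw base64 content.
function dataUrlToBase64(dataUrl) {
  const comma = dataUrl.indexOf(",");
  if (comma === -1) throw new Error("not a data URL");
  return dataUrl.slice(comma + 1);
}

// Turns the callback's frame list into { name, base64 } entries,
// ready to be written into an archive one by one.
function framesToZipEntries(frameList) {
  return frameList.map((frame, i) => ({
    name: `frame_${i}.jpeg`,
    base64: dataUrlToBase64(frame),
  }));
}

// Usage with JSZip (an assumed dependency):
// const zip = new JSZip();
// for (const e of framesToZipEntries(result.frame_list)) {
//   zip.file(e.name, e.base64, { base64: true });
// }
// const blob = await zip.generateAsync({ type: "blob" });
// // ...then POST the blob to your backend (step 2 of the data flow).
```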