# Capturing Video and Description of the on\_capture\_complete Callback

## 1.  Overview <a href="#h.azejp0s1y9bu" id="h.azejp0s1y9bu"></a>

Here is the data flow for your scenario:

1\.   Oz Web SDK takes a video and makes it available for the host application as a **frame sequence**.

2\.   The host application sends an archive of these frames to your backend.

3\.   After the necessary preprocessing steps, your backend calls Oz API, which performs all necessary analyses and returns the analyses’ results.

4\.   Your backend responds to the host application if needed.

<figure><img src="https://2532558063-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F5g6dgsxRbyrCvB0uAf8f%2Fuploads%2FDnBcWZDTtNT0rog0Xh2d%2Fcomponents-public-webplugin-capturemode.svg?alt=media&#x26;token=b5b57096-0a58-4c39-a12a-96fc0c0f884d" alt=""><figcaption></figcaption></figure>

## 2.  Implementation

On the server side, Web SDK must be configured to operate in the `Capture` mode: the `architecture` parameter must be [set](https://doc.ozforensics.com/oz-knowledge/guides/administrator-guide/web-adapter/configuration-file-settings) to `capture` in the app\_config.json file.

In your Web app, add a callback to process captured media when opening the Web SDK [plugin](https://doc.ozforensics.com/oz-knowledge/guides/developer-guide/sdk/oz-liveness-websdk/web-plugin):

```javascript
OzLiveness.open({
  ... // other parameters
  on_capture_complete: function(result) {
    // Your code to process media/send it to your API (step 2 of the flow above)
  }
})
```
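The frames delivered to the callback are base64-encoded data URLs. If your host application needs raw bytes before archiving them, a minimal decoding helper could look like the sketch below; `dataUrlToBytes` is a hypothetical name, not part of the SDK:

```javascript
// Hypothetical helper: convert a data-URL frame (as found in frame_list or
// best_frame) into a Uint8Array of raw image bytes, suitable for zipping.
function dataUrlToBytes(dataUrl) {
  // Strip the "data:image/...;base64," prefix, keeping only the payload.
  const base64 = dataUrl.slice(dataUrl.indexOf(',') + 1);
  const binary = atob(base64); // decode base64 to a binary string
  const bytes = new Uint8Array(binary.length);
  for (let i = 0; i < binary.length; i++) {
    bytes[i] = binary.charCodeAt(i);
  }
  return bytes;
}
```

These byte arrays can then be packed into a ZIP archive with any client-side archiving library before the upload in step 2.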

The structure of the `result` object depends on whether a virtual camera is detected.

### No Virtual Camera Detected

```json
{
	"action": <action>,
	"best_frame": <bestframe>,
	"best_frame_png": <bestframe_png>,
	"best_frame_bounding_box": {
		"left": <bestframe_bb_left>,
		"top": <bestframe_bb_top>,
		"right": <bestframe_bb_right>,
		"bottom": <bestframe_bb_bottom>
		},
	"best_frame_landmarks": {
		"left_eye": [bestframe_x_left_eye, bestframe_y_left_eye],
		"right_eye": [bestframe_x_right_eye, bestframe_y_right_eye],
		"nose_base": [bestframe_x_nose_base, bestframe_y_nose_base],
		"mouth_bottom": [bestframe_x_mouth_bottom, bestframe_y_mouth_bottom],
		"left_ear": [bestframe_x_left_ear, bestframe_y_left_ear],
		"right_ear": [bestframe_x_right_ear, bestframe_y_right_ear]
		},
	"frame_list": [<frame1>, <frame2>],
	"frame_bounding_box_list": [
		{
		"left": <frame1_bb_left>,
		"top": <frame1_bb_top>,
		"right": <frame1_bb_right>,
		"bottom": <frame1_bb_bottom>
		},
		{
		"left": <frame2_bb_left>,
		"top": <frame2_bb_top>,
		"right": <frame2_bb_right>,
		"bottom": <frame2_bb_bottom>
		},
	],
	"frame_landmarks": [
		{
		"left_eye": [frame1_x_left_eye, frame1_y_left_eye],
		"right_eye": [frame1_x_right_eye, frame1_y_right_eye],
		"nose_base": [frame1_x_nose_base, frame1_y_nose_base],
		"mouth_bottom": [frame1_x_mouth_bottom, frame1_y_mouth_bottom],
		"left_ear": [frame1_x_left_ear, frame1_y_left_ear],
		"right_ear": [frame1_x_right_ear, frame1_y_right_ear]
		},
		{
		"left_eye": [frame2_x_left_eye, frame2_y_left_eye],
		"right_eye": [frame2_x_right_eye, frame2_y_right_eye],
		"nose_base": [frame2_x_nose_base, frame2_y_nose_base],
		"mouth_bottom": [frame2_x_mouth_bottom, frame2_y_mouth_bottom],
		"left_ear": [frame2_x_left_ear, frame2_y_left_ear],
		"right_ear": [frame2_x_right_ear, frame2_y_right_ear]
		}
	],
	"from_virtual_camera": null,
	"additional_info": <additional_info>
}
```

### Virtual Camera Detected

```json
{
	"action": <action>,
	"best_frame": null,
	"best_frame_png": null,
	"best_frame_bounding_box": null,
	"best_frame_landmarks": null,
	"frame_list": null,
	"frame_bounding_box_list": null,
	"frame_landmarks": null,
	"from_virtual_camera": {
		"additional_info": <additional_info>,
		"best_frame": <bestframe>,
		"best_frame_png": <bestframe_png>,
		"best_frame_bounding_box": {
			"left": <bestframe_bb_left>,
			"top": <bestframe_bb_top>,
			"right": <bestframe_bb_right>,
			"bottom": <bestframe_bb_bottom>
			},
		"best_frame_landmarks": {
			"left_eye": [bestframe_x_left_eye, bestframe_y_left_eye],
			"right_eye": [bestframe_x_right_eye, bestframe_y_right_eye],
			"nose_base": [bestframe_x_nose_base, bestframe_y_nose_base],
			"mouth_bottom": [bestframe_x_mouth_bottom, bestframe_y_mouth_bottom],
			"left_ear": [bestframe_x_left_ear, bestframe_y_left_ear],
			"right_ear": [bestframe_x_right_ear, bestframe_y_right_ear]
			},
		"frame_list": [<frame1>, <frame2>],
		"frame_bounding_box_list": [
			{
			"left": <frame1_bb_left>,
			"top": <frame1_bb_top>,
			"right": <frame1_bb_right>,
			"bottom": <frame1_bb_bottom>
			},
			{
			"left": <frame2_bb_left>,
			"top": <frame2_bb_top>,
			"right": <frame2_bb_right>,
			"bottom": <frame2_bb_bottom>
			},
		],
		"frame_landmarks": [
			{
			"left_eye": [frame1_x_left_eye, frame1_y_left_eye],
			"right_eye": [frame1_x_right_eye, frame1_y_right_eye],
			"nose_base": [frame1_x_nose_base, frame1_y_nose_base],
			"mouth_bottom": [frame1_x_mouth_bottom, frame1_y_mouth_bottom],
			"left_ear": [frame1_x_left_ear, frame1_y_left_ear],
			"right_ear": [frame1_x_right_ear, frame1_y_right_ear]
			},
			{
			"left_eye": [frame2_x_left_eye, frame2_y_left_eye],
			"right_eye": [frame2_x_right_eye, frame2_y_right_eye],
			"nose_base": [frame2_x_nose_base, frame2_y_nose_base],
			"mouth_bottom": [frame2_x_mouth_bottom, frame2_y_mouth_bottom],
			"left_ear": [frame2_x_left_ear, frame2_y_left_ear],
			"right_ear": [frame2_x_right_ear, frame2_y_right_ear]
			}
		]
	}
}
```

Here’s the list of variables with descriptions.

| Variable                   | Type                                       | Description                                                                                                               |
| -------------------------- | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------- |
| `best_frame`               | String                                     | The best frame, JPEG in the data URL format                                                                               |
| `best_frame_png`           | String                                     | The best frame, PNG in the data URL format; required for protection against virtual cameras when video is not used        |
| `best_frame_bounding_box`  | Array\[Named\_parameter: Int]              | The coordinates of the bounding box where the face is located in the best frame                                           |
| `best_frame_landmarks`     | Array\[Named\_parameter: Array\[Int, Int]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the best frame           |
| `frame_list`               | Array\[String]                             | All frames in the data URL format                                                                                         |
| `frame_bounding_box_list`  | Array\[Array\[Named\_parameter: Int]]      | The coordinates of the bounding boxes where the face is located in the corresponding frames                               |
| `frame_landmarks`          | Array\[Array\[Named\_parameter: Array\[Int, Int]]] | The coordinates of the face landmarks (left eye, right eye, nose, mouth, left ear, right ear) in the corresponding frames |
| `action`                   | String                                     | An action code                                                                                                            |
| `additional_info`          | String                                     | Information about client environment                                                                                      |
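Because the media fields move under `from_virtual_camera` when a virtual camera is detected, downstream code can normalize the result once instead of branching everywhere. A sketch of such a helper (`extractCapture` is a hypothetical name, not part of the SDK) based on the two structures above:

```javascript
// Hypothetical helper: flatten the on_capture_complete result into one shape.
// When a virtual camera is detected, the top-level media fields are null and
// the actual data lives under from_virtual_camera (including additional_info).
function extractCapture(result) {
  const source = result.from_virtual_camera ?? result;
  return {
    action: result.action,
    virtualCamera: result.from_virtual_camera !== null,
    bestFrame: source.best_frame,
    frames: source.frame_list,
    additionalInfo: source.additional_info ?? result.additional_info,
  };
}
```

Your backend can then decide, e.g., to reject captures where `virtualCamera` is `true`, or to forward the frames to Oz API either way.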

### Please note:

* The video from Oz Web SDK is a frame sequence, so, to send it to Oz API, you’ll need to archive the frames and transmit them as a ZIP file via the POST `/api/folders` request (check our [Postman collections](https://doc.ozforensics.com/oz-knowledge/guides/developer-guide/api/oz-api/postman-collections)).
* You can retrieve the **MP4 video** from a folder using the `/api/folders/{{folder_id}}` request with this folder's ID. In the JSON that you receive, look for `preview_url` in `source_media`; the `preview_url` parameter contains the link to the video. The plugin itself does not expose MP4 videos, only frame sequences.
* Also, in the POST `{{host}}/api/folders` request, you need to add the `additional_info` field. It is required for the `capture` architecture mode to gather the necessary information about the client environment. Here’s an example of the request body:

```json
"VIDEO_FILE_KEY": VIDEO_FILE_ZIP_BINARY,
"payload": "{
    "media:meta_data": {
        "VIDEO_FILE_KEY": {
            "additional_info": <additional_info>
        }
    }
}"
```

* Oz API accepts data without base64 encoding.
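Putting the notes above together, the `payload` field of the POST `/api/folders` request can be assembled as in the sketch below. The helper name `buildFoldersPayload` and the file key `video1` are illustrative assumptions; authentication and the actual upload are only outlined in comments:

```javascript
// Hypothetical helper: build the "payload" JSON string that accompanies the
// ZIP of frames in POST /api/folders. The capture architecture requires
// media:meta_data with additional_info for the uploaded media key.
function buildFoldersPayload(fileKey, additionalInfo) {
  return JSON.stringify({
    'media:meta_data': {
      [fileKey]: { additional_info: additionalInfo },
    },
  });
}

// Illustrative usage (browser) — field name and auth header are assumptions:
// const form = new FormData();
// form.append('video1', zipBlob, 'frames.zip');
// form.append('payload', buildFoldersPayload('video1', result.additional_info));
// await fetch(OZ_API_URL + '/api/folders', { method: 'POST', body: form });
```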

