To launch the services, you'll require:
- CPU: 16 cores,
- RAM: 32 GB,
- Disk: 100 GB SSD,
- Linux-compatible OS,
- Docker 19.03+ (or Podman 4.4+),
- Docker Compose 1.27+ (or podman-compose 1.2.0+ if you use Podman).
For Docker installations with multiple API servers, you'll also require a shared volume or NFS.
The package you get consists of the following directories and files:
Put the license file in ./configs/tfss/license.key.
Unzip the file that contains models into the ./data/tfss/models directory.
Before starting system configuration, we recommend running the host readiness check scripts. Navigate to the checkers directory and run the pre-checker-all.sh script.
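For instance, from the root of the unpacked distribution package:

```bash
cd checkers
./pre-checker-all.sh
```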
Set the initial passwords and values:
For this configuration, run all services on a single host:
We recommend using PostgreSQL as a container only for testing purposes. For production deployments, use a standalone database.
Create a directory and unzip the distribution package into it. The package contains Docker Compose manifests and directories with the configuration files required for operation.
Put the license file in ./configs/tfss/license.key.
Unzip the file that contains models into the ./data/tfss/models directory.
Before starting system configuration, we recommend running the host readiness check scripts. Navigate to the checkers directory and run the pre-checker-all.sh script.
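Once the configuration is in place, bringing the stack up is a single Compose command; a minimal sketch (the manifests ship with the package):

```bash
# Start all services on the single host; with Podman, podman-compose accepts the same arguments.
docker-compose up -d
# Verify that the containers are running.
docker-compose ps
```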
For this configuration, run the TFSS service on a separate host:
Create a directory and unzip the distribution package into it. The package contains Docker Compose manifests and directories with the configuration files required for operation.
Before starting system configuration, we recommend running the host readiness check scripts. Navigate to the checkers directory and run the pre-checker-all.sh script.
Set the initial passwords and values as described in step 4 of the single-host installation.
The Oz system consists of many interconnected components.
Most of the components require scaling to ensure functionality during an increase in APM. The scheme below represents how the system processes an analysis using supporting software.
This guide explains how to scale components and improve performance for both Docker and Kubernetes deployments.
For K8s deployments, use HPA. In the chart values, you'll find the use_hpa parameter for each component that supports HPA.
BIO is the most resource-consuming component as it performs media analysis processing.
The BIO component may take up to 10 minutes to start (applicable for versions <1.2).
Scaling BIO nodes might be challenging during a rapid increase in APM.
For Docker installations, plan the required number of components to handle peak loads.
For Kubernetes installations, schedule the creation of additional BIO pods to manage demand using cronjobHpaUpdater in values.yaml.
CPU must support avx, avx2, fma, sse4.1, and sse4.2.
If your CPU supports avx512f instructions, you can use a specific BIO image to slightly increase performance.
We recommend using Intel CPUs. AMD CPUs are also supported but require additional configuration.
For better performance, each BIO should reside on a dedicated node with reserved resources.
APM expectations may vary depending on CPU clock speed (GHz).
Recommended: 2.5+ GHz.
For better performance: 3.0+ GHz.
Each BIO instance can handle an unlimited number of simultaneous requests. However, as the number increases, the execution time for each request grows.
The assignment of analyses to BIO instances is managed by celery-tfss workers. Each celery-tfss instance is configured to handle 2 analyses simultaneously, as this provides optimal performance and analysis time.
For Kubernetes installations, this behavior can be adjusted using the concurrency parameter in values.yaml. In the default configuration, the number of celery-tfss instances should match the number of BIOs.
The required number of BIOs can be calculated using the formula below, based on the average analysis time and the expected number of analyses:
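Assuming each BIO instance handles C analyses concurrently and capacity scales linearly, the calculation (rounded up) is:

$$N(BIO) = \left\lceil \frac{n(APM) \times t(analysis)}{60 \times C} \right\rceil$$

For example, at n(APM) = 40 with the defaults t(analysis) = 3 s and C = 2: N(BIO) = ⌈(40 × 3) / (60 × 2)⌉ = 1.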
Here,
- N(BIO) is the required number of BIO nodes,
- n(APM) is the expected number of APM,
- t(analysis) is the measured average duration of a single analysis (3 seconds by default),
- C is concurrency (2 by default).
For a large number of requests involving .zip archives (e.g., if you use Web SDK), you might require additional scaling of the celery-preview_convert service.
Oz API is not intended to be a long-term analysis storage system, so you might encounter longer API response times once 10-30 million folders are stored in the database (depending on database performance). Thus, we recommend archiving or deleting data from the API after one year of storage.
For the self-managed database, you'll require the following resources:
API:
- CPU: 4 cores (up to 8),
- RAM: 8 GB (up to 16),
- SSD storage.
O2N:
- CPU: 4 cores (up to 8),
- RAM: equal to the database size,
- SSD storage.
The O2N database requires the pgvector extension.
Do not create indexes for the O2N database, as they will reduce accuracy.
For O2N, parallelism is crucial.
Scale up RAM with the growth of your database.
If the number of active connections allows you to stay within the shared_buffers limit, you can increase work_mem, as illustrated below.
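A hedged illustration of that adjustment (the value is an example, not a sizing recommendation; size it against shared_buffers and your connection count):

```bash
# Raise work_mem cluster-wide and reload the PostgreSQL configuration.
psql -U postgres -c "ALTER SYSTEM SET work_mem = '64MB';"
psql -U postgres -c "SELECT pg_reload_conf();"
```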
To increase API performance, consider using this list of indexes:
For high-load environments, achieving the best performance requires precise database tuning. Please contact our support for assistance.
This table represents analysis duration for different gestures.
Below is a list of recommended and non-recommended practices for using Oz API methods; an example request follows the list.

/authorize
Do:
- Refresh an expired token when possible.
- Monitor token expiration.
- Use a service token only when appropriate.
Avoid:
- Making requests with an expired token.
- Creating a new token instead of refreshing an expiring one.
- Requesting a new token on each API call.

/api/companies
Do:
- Create a new named company for your deployment.
Avoid:
- Using the default system_company.

/api/users
Do:
- Create named accounts for users.
- Create a separate named service account for each business process.
Avoid:
- Sharing accounts between users.
- Creating a new user for each analysis request.

/api/healthcheck
Avoid:
- Using healthcheck too frequently, as it may overwhelm the system with internal checks.

/api/folders
Do:
- Always use the time-limiting query parameters: time.created.min and time.created.max.
- Use the total_omit=true query parameter.
- Use the folder_id query parameter when you know the folder ID.
- Use the technical_meta_data query parameter in case you have meta_data set.
- Use the limit query parameter when possible.
Avoid:
- Using the with_analyzes=true query parameter when it is unnecessary, in requests involving long time periods or unspecified durations.
- Using the technical_meta_data query parameter with unspecified additional parameters or time period.

/api/folders
Avoid:
- Placing too much data in the meta_data payload.
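As an illustration of the practices above, a folder listing request might look like this (the host and the X-Forensic-Access-Token header are assumptions; check your deployment's auth scheme):

```bash
# List folders for a bounded time window, omitting the expensive total count.
curl -G "https://api.your-website.com/api/folders" \
  -H "X-Forensic-Access-Token: <ACCESS_TOKEN>" \
  --data-urlencode "time.created.min=2024-01-01T00:00:00Z" \
  --data-urlencode "time.created.max=2024-01-02T00:00:00Z" \
  --data-urlencode "total_omit=true" \
  --data-urlencode "limit=100"
```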
By default, all API methods are published without restrictions, which may pose security threats. For accessing API methods from the Internet, we recommend enabling limitations on a WAF, border L7 balancer, etc.
If you use Web SDK only, you don't need to publish API methods on the Internet.
The information below is relevant for Oz API 5.2.
For Oz API with Mobile SDK, make sure these methods are accessible from the Internet:
You may need to extend this list depending on how Oz API has been integrated into your infrastructure.
For Web SDK, make sure these methods are accessible from the Internet. Your Web SDK URL is the Web Adapter URL you have received from us.
How to install an offline licensing server
Oz Forensics provides flexible licensing rules for all products. To support licensing for servers without an Internet connection, you need to install an offline license server.
The installation is complete. For the next steps, use the Web interface on port 8090.
Open the default page in your Web browser (http://server:8090) and authorize with your login and password.
Go to the Settings section.
The easiest way to activate the license server is via the Internet. Enter the license key and click ACTIVATE.
If your server isn't connected to the Internet, use the offline activation option. Click the SWITCH TO OFFLINE ACTIVATION link.
Enter the license key and hit NEXT.
Click DOWNLOAD REQUEST FILE.
Enter the activation code in the offline activation response window and hit ACTIVATE.
Add an extra command-line parameter with the LICENSE_KEY environment variable containing the license server host address and port.
Alternatively, create the license.key file with the license server host and port, and bind this file to your container:
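A minimal sketch of both options, assuming a license server at license-host:8090 and a placeholder image name:

```bash
# Option 1: pass the license server address via the LICENSE_KEY environment variable.
docker run -d -e LICENSE_KEY=license-host:8090 <licensed-component-image>

# Option 2: create license.key with the license server address and bind it into
# the container (the in-container path is an assumption).
echo "license-host:8090" > license.key
docker run -d -v "$(pwd)/license.key:/license.key" <licensed-component-image>
```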
These instructions and dashboard are for Helm chart 0.10.20 and API 5.1.
For monitoring, we use Prometheus and Grafana.
Our charts already contain custom resources for Prometheus: serviceMonitor. By default, serviceMonitor is disabled; to enable it, set enable: true in prometheus_exporters → serviceMonitor.
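For instance, via a values override at deployment time (release and chart names are placeholders):

```bash
# Enable the ServiceMonitor custom resource in the chart values.
helm upgrade --install oz-api ./oz-api -f values.yaml \
  --set prometheus_exporters.serviceMonitor.enable=true
```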
To ensure that Prometheus Operator spots the parameters, add the corresponding Namespace or serviceMonitor itself to the spec section as shown below:
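A sketch of pointing the Prometheus custom resource at a specific namespace (the resource name, namespace, and label value are assumptions):

```bash
kubectl patch prometheus k8s -n monitoring --type merge \
  -p '{"spec":{"serviceMonitorNamespaceSelector":{"matchLabels":{"kubernetes.io/metadata.name":"oz-api"}}}}'
```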
If you don't specify the parameters, all Namespaces will be added:
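With empty selectors, Prometheus discovers ServiceMonitors everywhere (same assumptions as above):

```bash
kubectl patch prometheus k8s -n monitoring --type merge \
  -p '{"spec":{"serviceMonitorSelector":{},"serviceMonitorNamespaceSelector":{}}}'
```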
If everything is correct, and serviceMonitor has been added to the cluster, you'll see the corresponding custom resource in Custom Resources → monitoring.coreos.com → Service Monitor.
Make sure that Service Monitor contains the expected resources.
You can also check these resources in Prometheus itself. Proceed to Status and select Targets in the drop-down menu; the Oz exporters should appear in the target list.
Proceed to our repository and find the branch that matches your chart version.
Download Oz_dashboard_client.json.
Open Grafana and, in the Home menu, select Dashboards.
Click New and choose Import from the drop-down menu.
Select Upload dashboard JSON file and locate the Oz_dashboard_client.json file you've downloaded. Change the filename or directory if needed, but this is optional.
Add the prometheus data source to obtain metrics.
Click Import and save the dashboard.
namespace is a label of the namespace from :tensorflow:cc:saved_model:load_latency{clustername="$cluster"},
quantile is a quantile value for tables that require it. Possible values: 0.95, 0.5, 0.90, 0.99, 1.
Below, you'll find the description of component metrics and tables, along with pre-configured alerts.
The duration of the analysis depends on the gesture used for Liveness. Some gestures are analyzed faster, while others provide higher accuracy but take more time. The longer the gesture takes, the longer the analysis will take.
API performance mainly depends on database and storage performance. Most of the available API methods use a database to return results. Thus, to maintain low API response time, we recommend using a high-performance database. Additionally, to reduce the number of requests for analysis results via the /api/folders methods, please consider the recommended practices for /api/folders.
You can install the software on any Linux-based non-virtual server. Contact us to get the files needed and follow the steps below.
You'll get the offline_activation_request.txt file. Send it to us to receive the activation code.
If necessary, install Prometheus into your cluster.
For more details on how to work with Service Monitor in Prometheus, please refer to the Prometheus Operator documentation.
You can customize the alerts according to your needs. Please proceed to our repository to find the alert files.
| Gesture | Average analysis time, s | 50th percentile, s |
| --- | --- | --- |
| video_selfie_best | 3.895 | 3.13 |
| video_selfie_blank | 5.491 | 4.984 |
| video_selfie_down | 11.051 | 8.052 |
| video_selfie_eyes | 9.953 | 7.399 |
| video_selfie_high | 10.937 | 8.112 |
| video_selfie_left | 9.713 | 7.558 |
| video_selfie_right | 9.95 | 7.446 |
| video_selfie_scan | 9.425 | 7.29 |
| video_selfie_smile | 10.25 | 7.621 |
| Metric | Description |
| --- | --- |
| flower_events_total | Total number of tasks |
| flower_task_runtime_seconds_sum | Sum of task completion durations |
| histogram_quantile($quantile, sum(rate(flower_task_runtime_seconds_bucket[1m])) by (le)) | Quantile for task execution based on the quantile variable value |
| flower_events_total{type="task-failed"} | With the type="task-failed" label, displays the number of failed tasks |

| Panel | Description | What to watch for |
| --- | --- | --- |
| Liveness task rate | Average time of all Celery requests for Liveness | task-received should be roughly equal to task-succeeded; there shouldn't be any task-failed |
| Liveness task duration | Quantile for task execution | The 0.95 quantile should be 8 seconds or less |
| Succeeded vs failed tasks rate | Total number of Celery requests | task-received should be roughly equal to task-succeeded; there shouldn't be any task-failed |
| All tasks duration (AVG) | Average time of all Celery requests for all models | The durations should be 6 seconds or less |
| Queue size | Message queue in Redis | Queue size and the number of unacked messages |

| Metric | Description |
| --- | --- |
| redis_up | 1 means Redis is working, 0 – service is down |
| redis_commands_total | Total number of commands in Redis |
| redis_commands_duration_seconds_total | Redis process duration |
| redis_key_size | Redis (as a message broker) queue size |
| redis_key_size{key="unacked"} | With the key="unacked" label, displays the number of tasks that are being processed by Redis |

| Panel | Description | What to watch for |
| --- | --- | --- |
| Command rate | Number of requests per second | Nothing, just to stay informed |
| Commands duration | Average and maximum command execution duration | AVG < 15µs, MAX < 1ms |
| Connected clients | Number of connected clients | Shouldn't be 0 |

| Metric | Description |
| --- | --- |
| :tensorflow:serving:request_count | Total number of requests to TFSS |
| :tensorflow:serving:request_latency_bucket | Histogram of request processing time |
| :tensorflow:serving:request_latency_sum | Sum of processing durations for each request |
| :tensorflow:serving:request_latency_count | Total number of requests |
| :tensorflow:cc:saved_model:load_attempt_count | Loaded models |

| Panel | Description | What to watch for |
| --- | --- | --- |
| Model request rate | Number of requests to TFSS per second and per minute | Nothing, just to stay informed |
| Model latency ($quantile-quantile) | 0.95 quantile of the TFSS request processing time; you can set the quantile value in the $quantile variable | Nothing, just to stay informed |
| Model latency (AVG) | Average request processing time | Nothing, just to stay informed |
| HTTP probe success | The result of the built-in blackbox check; Blackbox sends requests to verify that TFSS works properly | Should be 1 |

| Metric | Description |
| --- | --- |
| nginx_up | 1 means Nginx is working, 0 – service is down |
| nginx_connections_accepted | Number of connections accepted by Nginx |
| nginx_connections_handled | Number of connections handled by Nginx |
| nginx_connections_active | Number of active Nginx connections |

| Panel | Description | What to watch for |
| --- | --- | --- |
| Request rate | Total number of requests to Nginx | Nothing, just to stay informed |
| Active connections | Connection states | Shouldn't be any pending connections |
| Processed connections rate | Processing success rate | Numbers of accepted and handled connections should be equal |

| Alert | Description |
| --- | --- |
| absent(kube_pod_container_status_ready{container="oz-api", namespace="api-prod"}) | Indicates that there are no ready API containers |
Please note: this page covers the on-premise model of usage only.
If you use Oz Web SDK via SaaS, please contact our engineers.
Our engineers will help you to install Oz Web SDK using the standalone installer (requires your technical personnel to take part) or manually (everything is done by us). Once installed, the adapter part generates the plugin files: a file with styles (ozliveness.css) and the primary script of the plugin (plugin_liveness.php).
This part covers the license update process: the initial license is installed during the SDK installation, and a new license can be installed in the same way.
To generate the license, we need the domain name of the website where you are going to use Oz Forensics Web SDK, for instance, your-website.com. You can also define subdomains.
To find the origin, run window.origin in the developer mode on the page you are going to embed Oz Web SDK in. At localhost / 127.0.0.1, the license can work without this information.
1. Unzip the file received.
2. Copy the JSON license file to the host where you've deployed the container from the ozforensics/oz-webliveness-dev:latest image.
Example
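A plausible form of this copy command, built from the values explained below:

```bash
scp -i ~/ozforensics/keys/id_rsa-test-hostname-vm \
  0000aaaa-00aa-00aa-00aa-00000aaaaa.WebSDK_your_website.2022-10-11.json \
  user@hostname:/opt/oz/web-sdk
```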
where:
- -i ~/ozforensics/keys/id_rsa-test-hostname-vm is the path to the SSH key of your host,
- 0000aaaa-00aa-00aa-00aa-00000aaaaa.WebSDK_your_website.2022-10-11.json is the JSON license file,
- user is the username on the host,
- hostname is the host alias,
- /opt/oz/web-sdk is the directory where you've deployed the Web SDK container.
3. Replace the license file.
Example
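A hypothetical sketch (the target license file name depends on your installation):

```bash
# Overwrite the existing license with the newly copied one.
mv /opt/oz/web-sdk/0000aaaa-00aa-00aa-00aa-00000aaaaa.WebSDK_your_website.2022-10-11.json \
   /opt/oz/web-sdk/license.json
```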
4. Restart the Web SDK container for the new license to be applied.
Example
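For instance, with the container name explained below:

```bash
docker restart web-sdk
```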
where web-sdk is the name of the container you've deployed from the ozforensics/oz-webliveness-dev:latest image.
Once the license is added, the system will check its validity on launch.
| Error | Description |
| --- | --- |
| License error. License at <> not found | Cannot find the license file |
| License error. Cannot parse license from <>, invalid format | The license content is invalid: e.g., incorrect JSON |
| License error. Current date is later than license expiration date | Your license has expired and needs renewal |
| License error. Origin is not in the list permitted by license | Your domain or subdomain name can't be found in the list of allowed URLs |
WA_CORS_ORIGINS defines what sources are allowed to make requests. There's no default value. Please bear in mind that if you don't set this value, the CORS headers will be switched off, and no such headers will be added within the Web SDK container.
WA_CORS_METHODS (optional) – HTTP methods allowed to use. If the variable is not set, it gets the default value, which is 'GET, POST, OPTIONS'. If CORS headers are switched off entirely, any method is accepted.
WA_CORS_HEADERS (optional) – HTTP headers allowed to use. If the variable is not set, it gets the default value, which is 'DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type'. If CORS headers are switched off entirely, any header is accepted.
An example of using environment variables for server configuration:
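A sketch using the image named earlier (values are placeholders for your deployment):

```bash
docker run -d --name web-sdk \
  -e WA_CORS_ORIGINS="https://your-website.com" \
  -e WA_CORS_METHODS="GET, POST, OPTIONS" \
  ozforensics/oz-webliveness-dev:latest
```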
This part covers the contents of the /core/app_config.json file.
- use_for_liveness – the option is used when bank managers are taking clients' videos. If the option is set to true, mobile devices use the back camera by default, and on desktop, flip and oval circling are off.
- preinit – this optional parameter switches on the preliminary loading of scripts and face detection initialization. This is needed to reduce the plugin loading time. The default value is off, which means all the scripts are loaded after calling OzLiveness.open(). The script value means that the scripts will be loaded before the plugin launches. The full value enables preliminary loading of scripts and face detection initialization.
- architecture – this optional parameter is used to choose the architecture for Web SDK. The default value is normal.
- api_url – Oz API server address, a text parameter.
- api_token – Oz API access token, a text parameter.
- api_use_token – a parameter to specify the source of the Oz API access token for the system. Possible values: config, client.
  - If the parameter value is client, an Oz API access token is expected to be derived from the JS plug-in.
  - If you specify config, a token will be retrieved from the api_token parameter of the Oz Liveness Web Adapter configuration file.
- video_actions_list – block of video file tags used in the system, a text array (see the current tag list).
- photo_actions_list – block of photo file tags used in the system, a text array (see the current tag list).
- actions_default_importance – this parameter specifies whether override with an actions array from a web plug-in is allowed when launching an analysis. Possible values: true, false.
  - If true, the Adapter will use the actions array from the configuration file.
  - If false, the Adapter will use the actions array forwarded from your browser with the use of the open(options) method.
- actions_default – the actions array. Options include:
  - video_count – the number of transmitted video files, a numeric parameter;
  - photo_front – whether there is a front-side page in the document. Possible values: true, false;
  - photo_back – whether there is a back-side page in the document. Possible values: true, false.
- analyses – a block for configuring the launch of analyses. Options include:
  - quality – launch of Oz Liveness analysis. Possible values: true, false;
  - biometry – launch of Oz Biometry analysis. Possible values: true, false;
  - documents – launch of Oz Text analysis. Possible values: true, false;
  - collection_ids (since 1.4.0) – an array of the identifiers of collections for the Black List analysis.
- extract_best_shot – a parameter that specifies if a direct link to the best shot (bestshot) extracted from the video should be appended to the analysis result. Possible values: true, false. If true, the response will contain a link to the extracted image in output_images (results_media).
- result_mode – a parameter that specifies the contents of the server response with verification results. Possible values:
  - safe – only the state of analyses is returned (completed or not yet);
  - status – results of completed analyses are returned;
  - folder – same as status but with the folder identifier added;
  - full – the full Oz API response on the analyses is returned in the JSON format.
- result_codes – a block of response codes with annotations.
- delete_old_sessions – true or false; whether you want to delete old sessions.
- delete_old_sessions_offset_minutes – old sessions deletion time offset (in minutes).
- video_required_actions_list – the array of required actions.
- save_lossless_frame – if true, saves the original frame without compression.
- video_file_format – optional; here you can choose the video file format. This video file is passed to the API. Possible values: zip (recommended) and mov (less secure). If you need to retrieve your captured video in the MP4 format, please find the instructions here.
- debug (since 1.1.0) – if true, enables access to the /debug.php page, which contains information about the current configuration and the current license.
- load_3d_mask (since 1.2.1) – if true, loads the model to process the video taken with the 3D mask functionality. The default value is false, which means that the model is not used and the 3D mask is unavailable (the enable_3d_mask parameter is ignored).
- enable_3d_mask (since 1.2.1) – enables the 3D mask as the default face capture behavior. This parameter works only if load_3d_mask is set to true; the default value is false.
- master_license_signature (since 1.3.1) – the parameter for the master license signature; the default value is null.
- results_polling_interval (since 1.4.0) – the interval for polling for the analyses' results in ms; the default value is 1000.
- get_user_media_timeout (since 1.5.0) – defines the camera access timeout in seconds; after this timeout, a hint on how to solve the problem is displayed (default for all browsers and android_facebook for Facebook).
- disable_adapter_errors_on_screen (since 1.5.0) – if true, disables the display of errors in modal windows, allowing you to view them solely using the on_error callback. The default value is false.
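For orientation, a minimal illustrative fragment of /core/app_config.json assembled from the parameters above (run inside the Web Adapter container; values are examples, not defaults):

```bash
cat > /core/app_config.json <<'EOF'
{
  "api_url": "https://api.your-website.com",
  "api_token": "<API_TOKEN>",
  "api_use_token": "config",
  "result_mode": "status",
  "extract_best_shot": true,
  "video_file_format": "zip"
}
EOF
```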
To install Oz products via Kubernetes, consider using Helm charts.
Oz API and related components: Helm chart.
- API 5.2: chart version 0.11.x,
- API 5.3 (regulatory update for Kazakhstan): chart version 0.12.x.
Web SDK: Helm chart. Please note: the version of the chart is not tied to the Web SDK version.
For testing purposes, the database installed and created automatically by the chart is sufficient. However, for production, we strongly recommend using a separate, self-managed database.
Recommended PostgreSQL version: 15.5.
Create a database using the script(s) below.
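A hedged sketch of what those scripts set up (the concrete names here are hypothetical; the comments map them to the <<...>> placeholders referenced below, and the authoritative scripts ship with the chart):

```bash
psql -h your-postgres-host -U postgres <<'SQL'
CREATE USER oz_api WITH PASSWORD 'change-me';   -- <<USERNAME>> / <<PASSWORD>>
CREATE DATABASE oz_api_db OWNER oz_api;         -- <<DB_NAME>>
CREATE USER oz_o2n WITH PASSWORD 'change-me';   -- <<O2N_USERNAME>> / <<O2N_PASSWORD>>
CREATE DATABASE oz_o2n_db OWNER oz_o2n;         -- <<O2N_DB_NAME>>
SQL
# The O2N database requires the pgvector extension:
psql -h your-postgres-host -U postgres -d oz_o2n_db \
  -c 'CREATE EXTENSION IF NOT EXISTS vector;'
```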
To increase performance, consider using this list of indexes:
API and Web SDK charts require RWX SC (CephFS, EFS, NFS, Longhorn, etc.).
To deploy in Kubernetes, download the chart version you require and adjust the values.yaml file. This file specifies parameters for deployment of Oz products.
This example is based on the 0.11.28 chart version.
Adjust the values.yaml file, setting the following mandatory parameters before deployment:
- ozDockerHubCreds: you'll receive them from Oz Engineer.
- UserParams:
  - URLs:
    - apiURL: URL for API. May be internal if you use Web SDK only. For Mobile SDKs, it should be public. Please refer to this article for more information.
  - DB: must be set if you use an external PostgreSQL server. For details, please check Database Creation.
    - use_chart_postgres: false by default. Enables the internal PostgreSQL server (not recommended for production).
    - postgresUser: same as <<USERNAME>>.
    - postgresHost: the hostname of your PostgreSQL server.
    - postgresDB: same as <<DB_NAME>>.
    - postgresUserPassword: same as <<PASSWORD>>.
    - postgresPort: 5432 by default.
  - o2nDB:
    - use_chart_o2nDB: false by default. Enables the internal PostgreSQL server (not recommended for production).
    - startinit: true by default. Enables database init scripts. Set to false after the chart is deployed.
    - creds:
      - postgresHost: the hostname of your PostgreSQL server with the O2N database.
      - postgresPort: 5432 by default.
      - postgresDB: same as <<O2N_DB_NAME>>.
      - postgresUser: same as <<O2N_USERNAME>>.
      - postgresUserPassword: same as <<O2N_PASSWORD>>.
  - Creds:
    - apiAdminLogin: login for the new (default) API user. Will be created on the first run.
    - apiAdminPass: password for the default user.
    - webUILocalAdminLogin: local Admin for Web UI. Should differ from apiAdminLogin.
    - webUILocalAdminPass: password for webUILocalAdminLogin.
  - BIO:
    - licenseKey: you'll receive it from Oz Engineer / Sales.
    - clientToken: you'll receive it from Oz Engineer.
  - pvc:
    - api:
      - static:
        - storageClassName: RWX StorageClass.
        - size: expected size for the PV.
- Params:
  - Global:
    - startinits: false by default. Set to true on the first run; then, after successful deployment, change back to false.
To adjust API behavior, you might want to change other parameters. Please refer to comments in the values.yaml file.
BIO is a part of the API chart. The BIO pods require separate nodes for each pod. To ensure BIO resides on dedicated nodes, you can use affinity and tolerations.
The BIO behavior can be customized via Params → global → affinity in values.yaml.
The default affinity parameters are listed in the values.yaml file.
The example of chart deployment via Helm:
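A minimal sketch (release name, chart path, and namespace are assumptions):

```bash
helm upgrade --install oz-api ./oz-api -n oz-api --create-namespace -f values.yaml
```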
Installation of Web SDK requires API to be pre-installed. Except for specific cases, Web SDK cannot work without API.
For proper deployment, Web SDK requires an API service account. Pre-create a user for Web SDK with the CLIENT type and the is_service flag set. Please refer to User Roles for more details.
This example is based on the 1.5.1+onPremise chart version.
Adjust the values.yaml file, setting the following mandatory parameters before deployment:
- ozDockerHubCreds: you'll receive them from Oz Engineer.
- UserParams:
  - URLs:
    - apiURL: API URL. Can be an internal API URL.
    - webSDKURL: the Web SDK URL that will be used for public access.
  - Creds:
    - AdminLogin: login of the user that should be pre-created in API. Do not use the default admin login.
    - AdminPass: password of the pre-created user.
  - PVC:
    - persistentStorage: false by default. Set to true if you use more than 1 Web SDK pod.
    - storageClassName: RWX StorageClass.
- Params:
  - websdk:
    - license: should contain your Web SDK license. You'll receive it from Oz Engineer / Sales.
To adjust Web SDK behavior, you might want to change other parameters. Please refer to comments in the values.yaml file.
The example of chart deployment via Helm:
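A minimal sketch (release name, chart path, and namespace are assumptions):

```bash
helm upgrade --install web-sdk ./web-sdk -n web-sdk --create-namespace -f values.yaml
```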
Some configuration file parameters can be changed without directly editing the /core/app_config.json file. You can do it using environment variables.
WA_ARCHITECTURE – redefines the architecture parameter.
WA_API_URL – redefines the api_url parameter.
Here's an example of using environment variables for the dockered Web SDK in the Lite mode:
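A sketch with placeholder values (the lite architecture value is an assumption):

```bash
docker run -d --name web-sdk \
  -e WA_ARCHITECTURE=lite \
  -e WA_API_URL=https://api.your-website.com \
  ozforensics/oz-webliveness-dev:latest
```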
The configuration settings are contained in the config.py file. Its location depends on the installation method:
- host machine or Docker container oz-api: /opt/gateway/configs,
- standalone installer: /var/lib/docker/volumes/api_oz-api-config-vol/_data.
All incoming media files are saved in the local directory mounted to one of the possible endpoints, depending on the installation method:
- host server or Docker container: /opt/gateway/static,
- standalone installer: /var/lib/docker/volumes/api_oz-api-static-vol/_data,
- any path specified via configuration.
In most integration cases, the media files can be accessed on the web using direct links to randomly generated filenames.
To access the media, specify their external host name or IP address, port, and connection protocol in the configuration file.
Please note: this page covers the on-premise model of usage only.
If you use Oz Web SDK via SaaS, please contact our engineers.
Oz Liveness WEB Adapter is set up via changes in the configuration file stored at the Oz Liveness WEB Adapter server: /core/app_config.json
For Angular and React, replace https://web-sdk.sandbox.ozforensics.com
in index.html.
Oz API components:
APP is the API front app that receives REST requests, performs preprocessing, and creates tasks for other API components.
Celery is the asynchronous task queue. API has the following celery queues:
Celery-default processes system-wide tasks.
Celery-maintenance processes maintenance tasks.
Celery-tfss processes analysis tasks.
Celery-resolution checks for completion of all nested analyses within a folder and changes folder status.
Celery-preview_convert creates a video preview for media.
Celery-beat is a CronJob for managing maintenance celery tasks.
Celery-Flower is a Celery metrics collector.
Celery-regula (optional) processes document analysis tasks.
Redis is a message broker and result backend for Celery.
RabbitMQ (optional) can be used as a message broker for Celery instead of Redis.
Nginx serves static media files for external HTTP(s) requests.
O2N (optional) processes the Blacklist analysis.
BIO-Updater checks for model updates and downloads new models.
Oz BIO (TFSS) runs TensorFlow with AI models and makes decisions for incoming media.
The BIO-Updater and BIO components require access to the following external resources:
The deployment scenario depends on the workload you expect.
Autoscaling is implemented on the basis of ClusterAutoscaler and must be supported by your infrastructure.
Type of containerization: Docker,
Type of installation: Docker Compose,
Autoscaling/HA: none.
Software
Docker 19.03+ or Podman 4.4+,
Python 3.4+.
Storage
Depends on media quality, the type and number of analyses, and the required archive depth.
Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.
Staff qualification:
Basic knowledge of Linux and Docker.
Single node.
Resources:
1 node,
16 CPU/32 RAM.
Two nodes.
Resources:
2 nodes,
16 CPU/32 RAM for the first node; 8 CPU/16 RAM for the second node.
Type of containerization: Docker/Podman,
Type of installation: Docker Compose,
Autoscaling/HA: manual scaling; HA is partially supported.
Computational resources
Depending on load, you can change the number of nodes. However, for 5+ nodes, we recommend that you proceed to the High Load section.
2 Nodes:
24 CPU/32 RAM per node.
3 Nodes:
16 CPU/24 RAM per node.
4 Nodes:
8 CPU/16 RAM for two nodes (each),
16 CPU/24 RAM for two nodes (each).
We recommend using an external self-managed PostgreSQL database and an NFS share.
Software
Docker 19.03+ or Podman 4.4+,
Python 3.4+.
Storage
Depends on media quality, the type and number of analyses, and the required archive depth.
Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.
Staff qualification:
Advanced knowledge of Linux, Docker, and Postgres.
Type of containerization: Docker containers with Kubernetes orchestration,
Type of installation: Helm charts,
Autoscaling/HA: supports autoscaling; HA for most components.
Computational resources
3-4 nodes. Depending on load, you can change the number of nodes.
- 16 CPU/32 RAM nodes for the BIO pods,
- 8+ CPU/16+ RAM nodes for all other workloads.
We recommend using an external self-managed PostgreSQL database.
Requires an RWX (ReadWriteMany) StorageClass or NFS share.
Software
Docker 19.03+,
Python 3.4+.
Storage
Depends on media quality, the type and number of analyses, and the required archive depth.
Each analysis request performs read and write operations on the storage. Any additional latency in these operations will impact the analysis time.
Staff qualification:
Advanced knowledge of Linux, Docker, Kubernetes, and Postgres.
Please find a sample page for Oz Liveness Web SDK in our documentation. To make it work, replace <web-adapter-url> with the Web Adapter URL you've received from us.
Statistic (optional) provides statistics collection for Web UI.
Web UI provides the user interface.
Please find the installation guide in the corresponding installation section.
Storage may be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days].
Please find the installation guide in the corresponding installation section.
From 2 to 4 Docker nodes (see the computational resources above).
Storage may be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days].
Please find the installation guide in the corresponding installation section.
Storage may be calculated as: [average media size] * 2 * [analyses per day] * [archive depth in days].
| Term | Definition |
| --- | --- |
| APM | Analyses per minute. Please note: an analysis is a request for Quality (Liveness) or Biometry analysis using a single media; a single analysis with multiple media counts as separate analyses in terms of APM; multiple analysis types on a single media (two media for Biometry) count as separate analyses in terms of APM. |
| PoC | Proof of Concept |
| Node | A worker machine; can be either a virtual or a physical machine. |
| HA | High availability |
| K8s | Kubernetes |
| SC | StorageClass |
| RWX | ReadWriteMany |
| | Docker, single node | Docker, multiple nodes | Kubernetes |
| --- | --- | --- | --- |
| Use cases | Testing/development purposes; small installations with a low number of APM | Typical usage with moderate load | High load with HA and autoscaling; usage with a cloud provider |
| Approximate load | ~ APM; ~ analyses per month | ~ APM; ~ analyses per month | ~ APM; ~ analyses per month |
| Environment | Docker | Docker | Kubernetes |
| HA | No | Partially | Yes |
| Pros | Requires a minimal amount of computing resources; low complexity, so no highly qualified engineers are needed on-site; easy to manage and support | Partially supports HA; can be scaled up to support a higher workload | HA and autoscaling; observability and manageability; allows high workload and can be scaled up |
| Cons | Suitable only for low loads, no high APM; no scaling or high availability | API HA requires precise balancing; higher staff qualification requirements | High staff qualification requirements; additional infrastructure requirements |
| External resource requirements | PostgreSQL | PostgreSQL | K8s v1.25+, ingress-nginx, clusterIssuer, kube-metrics, Prometheus, clusterAutoscaler, PostgreSQL |