The Oz system consists of many interconnected components.
Most of the components require scaling to remain functional during an increase in load. The scheme below shows how the system processes an analysis using the supporting software.
This guide explains how to scale components and improve performance for both Docker and Kubernetes deployments.
For Kubernetes (K8s) deployments, use the Horizontal Pod Autoscaler (HPA). In the chart values, you'll find the use_hpa parameter for each component that supports HPA.
Scaling Oz BIO and Celery-TFSS
BIO is the most resource-consuming component as it performs media analysis processing.
The BIO component may take up to 10 minutes to start (applies to versions below 1.2).
Scaling BIO nodes might be challenging during a rapid increase in APM (analyses per minute).
For Docker installations, plan the required number of components to handle peak loads.
For Kubernetes installations, schedule the creation of additional BIO pods to manage demand using cronjobHpaUpdater in values.yaml.
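To make the two scheduling mechanisms concrete, below is a sketch of the relevant values.yaml fields. Only use_hpa and cronjobHpaUpdater are named in this guide; all other keys, the nesting, and the schedule format are illustrative assumptions, not the actual chart schema, so check your chart's values.yaml for the real structure.

```yaml
bio:
  use_hpa: true            # enable the Horizontal Pod Autoscaler for this component
  # hypothetical HPA bounds; actual key names may differ in your chart
  minReplicas: 2
  maxReplicas: 10

# cronjobHpaUpdater raises the replica floor ahead of a known peak and
# lowers it afterwards; the schedule format below is an assumption
cronjobHpaUpdater:
  enabled: true
  schedules:
    - cron: "0 8 * * *"    # before the morning peak
      minReplicas: 8
    - cron: "0 20 * * *"   # after the peak subsides
      minReplicas: 2
```

The idea is that HPA reacts to observed load, while the cron-based updater pre-provisions BIO pods for load you can predict, avoiding the slow BIO startup during a spike.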
Node requirements
CPU must support avx, avx2, fma, sse4.1, and sse4.2.
If the CPU supports avx512f instructions, you can use a specific BIO image to slightly increase performance.
We recommend using Intel CPUs. AMD CPUs are also supported but require additional configuration.
For better performance, each BIO should reside on a dedicated node with reserved resources.
APM expectations may vary depending on the CPU clock speed (GHz).
Recommended: 2.5+ GHz.
For better performance: 3.0+ GHz.
Scaling
Each BIO instance can accept an unlimited number of simultaneous requests; however, as the number of concurrent requests grows, the execution time of each request increases.
The assignment of analyses to BIO instances is managed by celery-tfss workers. Each celery-tfss instance is configured to handle 2 analyses simultaneously, as this provides optimal performance and analysis time.
For Kubernetes installations, this behavior can be adjusted using the concurrency parameter in values.yaml. In the default configuration, the number of celery-tfss instances should match the number of BIOs.
The required number of BIOs can be calculated using the formula below, based on the average analysis time and the expected number of analyses (round the result up to the nearest integer):

N(BIO) = n(APM) × t(analysis) / (60 × C)

Here,
N(BIO) is the required number of BIO nodes,
n(APM) is the expected number of APM (analyses per minute),
t(analysis) is the measured average duration of a single analysis in seconds (3 seconds by default),
C is concurrency (2 by default).
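Using the variable definitions above, a single BIO served at concurrency C completes C × 60 / t(analysis) analyses per minute, so the fleet size is the expected APM divided by that per-node throughput, rounded up. A small sketch:

```python
import math

def required_bio_nodes(apm: float, analysis_seconds: float = 3.0,
                       concurrency: int = 2) -> int:
    """Estimate N(BIO) from the expected load.

    One BIO with the given celery-tfss concurrency sustains
    concurrency * 60 / analysis_seconds analyses per minute.
    """
    per_node_apm = concurrency * 60 / analysis_seconds
    return math.ceil(apm / per_node_apm)

# With the defaults (t = 3 s, C = 2) one BIO handles 40 analyses per minute:
print(required_bio_nodes(400))   # -> 10 BIO nodes for 400 APM
print(required_bio_nodes(1000))  # -> 25
```

Note that slower gestures (see the gesture performance table below in this guide) raise the average analysis time, so measure t(analysis) on your real traffic mix rather than relying on the 3-second default.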
For a large number of requests involving .zip archives (e.g., if you use Web SDK), you might require additional scaling of the celery-preview_convert service.
Database
Oz API is not intended to be a long-term analysis storage system, so you might encounter longer API response times once 10-30 million folders are stored in the database (depending on database performance). We therefore recommend archiving or deleting data from the API after one year of storage.
For the self-managed database, you'll require the following resources:
API
CPU: 4 cores (up to 8),
RAM: 8 GB (up to 16),
SSD storage.
O2N
CPU: 4 cores (up to 8),
RAM: equal to the database size,
SSD storage.
PostgreSQL database parameters
max_connections: 2000 (may vary depending on number of API calls),
shared_buffers: 2 GB (one quarter of the available RAM),
effective_cache_size: 6 GB (RAM – 2 GB),
maintenance_work_mem: 512 MB,
checkpoint_completion_target: 0.9,
wal_buffers: 16 MB,
default_statistics_target: 100,
random_page_cost: 1.1,
effective_io_concurrency: 200,
work_mem: 16 MB,
min_wal_size: 1 GB,
max_wal_size: 4 GB,
max_worker_processes: 4 (equal to the number of CPUs),
max_parallel_workers_per_gather: 2 (number of CPUs divided by 2),
max_parallel_workers: 4 (equal to the number of CPUs),
max_parallel_maintenance_workers: 2 (number of CPUs divided by 2),
If the database runs in Docker, set shm_size: '1gb'.
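Several of the values above are expressed relative to the hardware, so they need to be recalculated when you size a larger instance. The sketch below encodes those ratios; the shared_buffers ratio is taken as one quarter of RAM, which matches the listed 2 GB on the 8 GB API database instance and is a common PostgreSQL starting point, but that ratio is an assumption, and the output is a starting point rather than a tuned configuration.

```python
def postgres_sizing(ram_gb: int, cpus: int) -> dict:
    """Scale the guide's relative PostgreSQL settings to given hardware."""
    return {
        "shared_buffers_gb": ram_gb // 4,              # assumed: one quarter of RAM
        "effective_cache_size_gb": ram_gb - 2,         # RAM minus 2 GB
        "max_worker_processes": cpus,                  # equal to the number of CPUs
        "max_parallel_workers": cpus,                  # equal to the number of CPUs
        "max_parallel_workers_per_gather": cpus // 2,  # CPUs divided by 2
        "max_parallel_maintenance_workers": cpus // 2, # CPUs divided by 2
    }

# The guide's baseline: 8 GB RAM, 4 CPU cores
print(postgres_sizing(8, 4))
```

Fixed values such as max_connections, wal sizes, and work_mem are left out here because they depend on the number of API calls and active connections rather than on the instance size alone.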
O2N database requires the Pgvector extension.
Do not create indexes for O2N database as they will reduce accuracy.
For O2N, parallelism is crucial.
Scale up RAM with the growth of your database.
If the number of active connections allows you to stay within the shared_buffers limit, you can increase work_mem.
To increase API performance, consider using this list of indexes:
Gateway indexes
CREATE INDEX CONCURRENTLY gw_analyse_abstract_company_id_idx ON public.gw_analyse_abstract USING btree (company_id);
CREATE INDEX CONCURRENTLY gw_analyse_abstract_folder_id ON public.gw_analyse_abstract USING btree (folder_id);
CREATE INDEX CONCURRENTLY gw_analyse_abstract_time_updated_state_idx ON public.gw_analyse_abstract USING btree (time_updated) WHERE (state='PROCESSING'::analyse_state);
CREATE INDEX CONCURRENTLY idx_gw_analyse_abstract_state_time_updated ON public.gw_analyse_abstract USING btree (state, time_updated);
CREATE INDEX CONCURRENTLY gw_analyse_collection_collection_id_idx ON public.gw_analyse_collection USING btree (collection_id);
CREATE INDEX CONCURRENTLY gw_analyse_collection_person_media_association_person_id_idx ON public.gw_analyse_collection_person_media_association USING btree (person_id);
CREATE INDEX CONCURRENTLY gw_analyse_cpma_media_association_id_idx ON public.gw_analyse_collection_person_media_association USING btree (media_association_id);
CREATE INDEX CONCURRENTLY gw_analyse_result_group_image_analyse_id_idx ON public.gw_analyse_result_group_image USING btree (analyse_id);
CREATE INDEX CONCURRENTLY gw_analyse_result_group_image_image_id_idx ON public.gw_analyse_result_group_image USING btree (image_id);
CREATE INDEX CONCURRENTLY gw_analyse_result_image_media_association_id_idx ON public.gw_analyse_result_image USING btree (media_association_id);
CREATE INDEX CONCURRENTLY gw_analyse_source_media_association_analyse_id_idx ON public.gw_analyse_source_media_association USING btree (analyse_id);
CREATE INDEX CONCURRENTLY gw_analyse_source_media_association_source_image_id_idx ON public.gw_analyse_source_media_association USING btree (source_image_id);
CREATE INDEX CONCURRENTLY gw_analyse_source_media_association_source_shots_set_id_idx ON public.gw_analyse_source_media_association USING btree (source_shots_set_id);
CREATE INDEX CONCURRENTLY gw_analyse_source_media_association_source_video_id_idx ON public.gw_analyse_source_media_association USING btree (source_video_id);
CREATE INDEX CONCURRENTLY gw_collection_company_id_idx ON public.gw_collection USING btree (company_id);
CREATE INDEX CONCURRENTLY gw_collection_creator_id_idx ON public.gw_collection USING btree (creator_id);
CREATE INDEX CONCURRENTLY gw_collection_person_collection_id_idx ON public.gw_collection_person USING btree (collection_id);
CREATE INDEX CONCURRENTLY gw_collection_person_image_person_id_idx ON public.gw_collection_person_image USING btree (person_id);
CREATE INDEX CONCURRENTLY gw_folder_company_id_idx ON public.gw_folder USING btree (company_id) WHERE (NOT is_archive);
CREATE INDEX CONCURRENTLY gw_folder_event_session_id_idx ON public.gw_folder USING btree (((meta_data ->>'event_session_id'::text))) WHERE (NOT is_archive);
CREATE INDEX CONCURRENTLY gw_folder_resolution_author_id_idx ON public.gw_folder USING btree (resolution_author_id);
CREATE INDEX CONCURRENTLY gw_folder_resolution_comment_fts_idx ON public.gw_folder USING gin (to_tsvector('english'::regconfig, (resolution_comment)::text));
CREATE INDEX CONCURRENTLY gw_folder_time_created_folder_id_idx ON public.gw_folder USING btree (time_created DESC, folder_id DESC);
CREATE INDEX CONCURRENTLY gw_folder_time_created_folder_id_nonarchive_idx ON public.gw_folder USING btree (time_created DESC, folder_id DESC) WHERE (is_archive = false);
CREATE INDEX CONCURRENTLY gw_folder_time_created_idx ON public.gw_folder USING btree (time_created DESC);
CREATE INDEX CONCURRENTLY gw_folder_time_created_nonarchive_idx ON public.gw_folder USING btree (time_created DESC) WHERE (is_archive = false);
CREATE INDEX CONCURRENTLY gw_folder_user_id_idx ON public.gw_folder USING btree (user_id);
CREATE INDEX CONCURRENTLY idx_gw_folder_transaction_id ON public.gw_folder USING btree (company_id, ((meta_data ->>'transaction_id'::text))) WHERE (NOT is_archive);
CREATE INDEX CONCURRENTLY idx_gw_folder_user_id_time_created ON public.gw_folder USING btree (user_id, time_created);
CREATE INDEX CONCURRENTLY gw_folder_image_folder_id_idx ON public.gw_folder_image USING btree (folder_id);
CREATE INDEX CONCURRENTLY gw_folder_report_author_id_idx ON public.gw_folder_report USING btree (author_id);
CREATE INDEX CONCURRENTLY gw_folder_report_folder_id_idx ON public.gw_folder_report USING btree (folder_id);
CREATE INDEX CONCURRENTLY gw_folder_report_report_template_id_idx ON public.gw_folder_report USING btree (report_template_id);
CREATE INDEX CONCURRENTLY idx_gw_folder_shots_set_folder_id ON public.gw_folder_shots_set USING btree (folder_id);
CREATE INDEX CONCURRENTLY gw_folder_video_folder_id_idx ON public.gw_folder_video USING btree (folder_id);
CREATE INDEX CONCURRENTLY gw_logging_event_record_session_id_idx ON public.gw_logging_event_record USING btree (session_id);
CREATE INDEX CONCURRENTLY gw_logging_event_record_time_created_idx ON public.gw_logging_event_record USING btree (time_created);
CREATE INDEX CONCURRENTLY gw_logging_event_record_timemark_idx ON public.gw_logging_event_record USING btree (timemark);
CREATE INDEX CONCURRENTLY gw_logging_event_session_time_created_idx ON public.gw_logging_event_session USING btree (time_created);
CREATE INDEX CONCURRENTLY gw_logging_event_session_user_id_idx ON public.gw_logging_event_session USING btree (user_id);
CREATE INDEX CONCURRENTLY gw_logging_event_session_time_updated_idx ON gw_logging_event_session(time_updated);
CREATE INDEX CONCURRENTLY gw_report_template_company_id_idx ON public.gw_report_template USING btree (company_id);
CREATE INDEX CONCURRENTLY gw_report_template_name_company_id_idx ON public.gw_report_template USING btree (name, company_id);
CREATE INDEX CONCURRENTLY gw_report_template_attachment_filename_report_template_id_idx ON public.gw_report_template_attachment USING btree (filename, report_template_id);
CREATE INDEX CONCURRENTLY gw_report_template_attachment_report_template_id_idx ON public.gw_report_template_attachment USING btree (report_template_id);
CREATE INDEX CONCURRENTLY gw_role_abstract_user_company_id_idx ON public.gw_role_abstract_user USING btree (company_id);
CREATE INDEX CONCURRENTLY gw_role_email_restore_code_user_id_idx ON public.gw_role_email_restore_code USING btree (user_id);
CREATE INDEX CONCURRENTLY gw_role_session_old_access_token_idx ON public.gw_role_session USING btree (old_access_token);
CREATE INDEX CONCURRENTLY gw_role_session_user_id_idx ON public.gw_role_session USING btree (user_id);
CREATE INDEX CONCURRENTLY gw_role_user_photo_user_id_idx ON public.gw_role_user_photo USING btree (user_id);
CREATE INDEX CONCURRENTLY gw_shots_set_frame_shots_set_id_idx ON public.gw_shots_set_frame USING btree (shots_set_id);
CREATE INDEX CONCURRENTLY gw_utils_audit_actor_id_idx ON public.gw_utils_audit USING btree (actor_id);
CREATE INDEX CONCURRENTLY gw_utils_media_company_id_idx ON public.gw_utils_media USING btree (company_id);
For high-load environments, achieving the best performance requires precise database tuning. Please contact our support for assistance.
Gesture Performance
The duration of the analysis depends on the gesture used for Liveness. Passive Liveness gestures are generally analyzed faster, while Active Liveness gestures provide higher accuracy but take more time: the longer a gesture takes to perform, the longer its analysis takes.
This table shows the analysis duration for different gestures, in seconds.

| Gesture | Average analysis time, s | 50th percentile, s |
| --- | --- | --- |
| video_selfie_best | 3.895 | 3.13 |
| video_selfie_blank | 5.491 | 4.984 |
| video_selfie_down | 11.051 | 8.052 |
| video_selfie_eyes | 9.953 | 7.399 |
| video_selfie_high | 10.937 | 8.112 |
| video_selfie_left | 9.713 | 7.558 |
| video_selfie_right | 9.95 | 7.446 |
| video_selfie_scan | 9.425 | 7.29 |
| video_selfie_smile | 10.25 | 7.621 |
API Methods Performance
API performance mainly depends on database and storage performance. Most of the available API methods query the database to return results, so to maintain low API response times, we recommend using a high-performance database. Additionally, to reduce the number of requests for analysis results via the /api/folders methods, consider using webhook callbacks.
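To illustrate the callback approach, below is a minimal sketch of the receiving side: instead of repeatedly polling GET /api/folders, you expose an endpoint and let the platform push the result when an analysis finishes. The payload field names used here (folder_id, resolution) are hypothetical illustrations, not the documented webhook schema, so consult your Oz API configuration for the actual format.

```python
import json

def handle_callback(body: bytes) -> str:
    """Parse a webhook callback body and return the folder ID it refers to.

    NOTE: the fields "folder_id" and "resolution" are hypothetical; replace
    them with the actual keys from your webhook payload.
    """
    payload = json.loads(body)
    folder_id = payload["folder_id"]
    resolution = payload.get("resolution")
    # Persist the result here instead of polling GET /api/folders for it.
    print(f"folder {folder_id} finished with resolution {resolution}")
    return folder_id
```

In production this function would sit behind an HTTP handler; the point is that each analysis then costs one inbound request instead of a polling loop against the API and its database.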
Below is a list of recommended and non-recommended practices for using Oz API methods.
/authorize
Do
Refresh expired token when possible.
Monitor token expiration.
Use a service token only when appropriate.
Avoid
Making requests with an expired token.
Creating a new token instead of refreshing an expiring one.
Requesting a new token on each API call.
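The do/avoid items above amount to caching the token and refreshing it shortly before expiry instead of calling /authorize on every request. A minimal sketch of that pattern follows; the fetch and refresh callables stand in for the actual /authorize calls, and their (token, lifetime_seconds) return shape is an assumption for illustration, not the Oz API client interface.

```python
import time

class TokenManager:
    """Cache an access token; refresh it before expiry, never per call."""

    def __init__(self, fetch, refresh, margin: float = 60.0):
        self._fetch = fetch        # () -> (token, lifetime_seconds); hypothetical
        self._refresh = refresh    # (old_token) -> (token, lifetime_seconds)
        self._margin = margin      # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self) -> str:
        now = time.monotonic()
        if self._token is None:
            self._token, ttl = self._fetch()
            self._expires_at = now + ttl
        elif now >= self._expires_at - self._margin:
            # Refresh the expiring token rather than creating a new one.
            self._token, ttl = self._refresh(self._token)
            self._expires_at = now + ttl
        return self._token
```

Every API call then goes through get(), which usually returns the cached token and only occasionally performs a refresh, keeping /authorize traffic near zero.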
/api/companies
Do
Create a new named company for your deployment.
Avoid
Using the default system_company.
/api/users
Do
Create named accounts for users.
Create a separate named service account for each business process.
Avoid
Sharing accounts between users.
Creating a new user for each analysis request.
/api/healthcheck
Avoid
Using healthcheck too frequently, as it may overwhelm the system with internal checks.
GET /api/folders
Do
Always use the time-limiting query parameters: time.created.min and time.created.max.
Use the total_omit=true query parameter.
Use the folder_id query parameter when you know the folder ID.
Use the technical_meta_data query parameter if you have meta_data set.
Use the limit query parameter when possible.
Avoid
Using the with_analyzes=true query parameter when it is unnecessary, especially in requests covering long or unspecified time periods.
Using the technical_meta_data query parameter without specifying additional parameters or a time period.
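The do/avoid items for GET /api/folders can be captured in a small query-string builder that always applies the safe defaults. The parameter names come from the list above; the timestamp value format is an assumption for illustration, so use whatever format your Oz API deployment expects for time.created.min/max.

```python
from urllib.parse import urlencode

def folders_query(time_min: str, time_max: str, limit: int = 100,
                  with_analyzes: bool = False) -> str:
    """Build a GET /api/folders query string following the practices above:
    always bound the request in time, omit the expensive total count, cap the
    page size, and request analyses only when actually needed."""
    params = {
        "time.created.min": time_min,
        "time.created.max": time_max,
        "total_omit": "true",
        "limit": limit,
    }
    if with_analyzes:
        params["with_analyzes"] = "true"
    return "/api/folders?" + urlencode(params)
```

Routing all folder listing through a helper like this makes it hard to accidentally issue the unbounded, with_analyzes=true requests that the "Avoid" list warns about.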