Since version 8.0.0 of our Android and iOS SDKs, we have offered the hybrid Liveness analysis mode. It is a combination of the on-device and server-based analyses that sums up the benefits of these two modes: if the on-device analysis is uncertain about real human presence, the system initiates the server-based analysis; otherwise, no additional analysis is done.
You need less computational capacity: in the majority of cases, there's no need to apply the server-side analysis, and fewer requests are sent back and forth. On Android, only 8-9% of analyses require an additional server check; on iOS, it's 4.5-5.5%.
The accuracy is similar to the one of the server-based analysis: if the on-device analysis result is uncertain, the server analysis is launched. We offer the hybrid analysis as one of the default analysis modes, but you can also implement your own logic of hybrid analysis by combining the server-based and on-device analyses in your code.
Since the 8.3.0 release, you get the analysis result faster, and less data will be transmitted (by up to 10 times): the on-device analysis is enough in the majority of cases so that you don’t need to upload the full video to analyze it on the server, and, therefore, don’t send or receive the additional data. The customer journey gets shorter.
The hybrid analysis has been available in our native (mobile) SDKs since 8.0.0. As we mentioned before, the server-based analysis is launched in the minority of cases: if the analysis on your device has finished with a confident answer, there's no need for a second check. This results in fewer server resources being involved.
To enable hybrid mode on Android, set Mode to HYBRID when you launch the analysis. To do the same on iOS, set mode to hybrid. That's all, as easy as falling off a log.
If you have any questions left, we’ll be happy to answer them.
This article describes the main types of analyses that Oz software is able to perform.
Liveness checks whether a person in a media is a real human.
Face Matching examines two or more media to identify similarities between the faces depicted in them.
Black list looks for resemblances between an individual featured in a media and individuals in a pre-existing photo database.
These analyses are accessible in the Oz API for both SaaS and On-Premise models. Liveness and Face Matching are also offered in the On-Device model. Please visit this page to learn more about the usage models.
The Liveness check is important to protect facial recognition from two types of attacks.
A presentation attack, also known as a spoofing attack, is an attempt to deceive a facial recognition system by presenting to a camera a video, photo, or any other type of media that mimics the appearance of a genuine user. These attacks can include the use of realistic masks or involve digital manipulation of images and videos, such as deepfakes.
An injection attack is an attempt to deceive a facial recognition system by replacing physical camera input with a prerecorded image or video, or by manipulating physical camera output before it becomes input to a facial recognition system. Virtual camera software is the most common tool for injection attacks.
Oz Liveness is able to detect both types of attacks. Any component can detect presentation attacks, and for injection attack detection, use Oz Liveness Web SDK. To learn about how to use Oz components to prevent attacks, check our integration quick start guides:
Once the Liveness check is finished, you can check both qualitative and quantitative analysis results.
Asking users to perform a gesture, such as smiling or turning their head, is a popular requirement when recording a Liveness video. With Oz Liveness Mobile and Web SDK, you can also request gestures from users. However, our Liveness check relies on other factors, analyzed by neural networks, and does not depend on gestures. For more details, please check Passive and Active Liveness.
The Liveness check can also return the best shot from a video: a best-quality frame where the face is most clearly visible.
The Biometry algorithm compares several media and checks whether the people in them are the same person. As sources, you can use images, videos, and scans of documents (with a photo). To perform the analysis, the algorithm requires at least two media.
Wondering how to integrate face matching into your processes? Check our integration quick start guides.
In Oz API, you can configure one or more black lists, or face collections. These collections are databases of people depicted in photos. When the Black list analysis is conducted, Oz software compares the face in a photo or video taken with the faces in this pre-made database and shows whether the face exists in a collection.
For additional information, please refer to this article.
We offer different usage models for the Oz software to meet your specific needs: you can either utilize the software as a service from one of our cloud instances or integrate it into your existing infrastructure. Regardless of the usage model you choose, all Oz modules function identically. The choice depends only on your needs.
With the SaaS model, you can access one of our clouds without having to install our software in your own infrastructure.
Choose SaaS when you want:
Faster start as you don’t need to procure or allocate hardware within your company and set up a new instance.
Zero infrastructure cost as server components are located in the Oz cloud.
Lower maintenance cost as Oz maintains and upgrades server components.
No cross-border data transfer for the regions where Oz has cloud instances.
The on-premise model implies that all the Oz components required are installed within your infrastructure. Choose on-premise for:
Your data not leaving your infrastructure.
Full and detailed control over the configuration.
We also provide the option of using on-device Liveness and Face matching. This model is available in Mobile SDKs.
Consider the on-device option when:
You can't transmit facial images to any server due to privacy concerns,
The network conditions in which you plan to use Oz products are extremely poor.
The choice is yours to make, but we're always available to provide assistance.
Oz Forensics specializes in liveness and face matching: we develop products that help you identify your clients remotely and avoid any kind of spoofing or deepfake attack. Oz software helps you add facial recognition to your software systems and products. You can integrate Oz modules in many ways depending on your needs. We are constantly improving our components and increasing their quality.
Oz Liveness is responsible for recognizing a living person in a video it receives. Oz Liveness can distinguish a real human from their photo, video, mask, or other kinds of spoofing and deepfake attacks. The algorithm is certified against the ISO/IEC 30107-3 standard by the NIST-accredited iBeta biometric test laboratory with 100% accuracy.
Our liveness technology protects both against injection and presentation attacks.
The injection attack detection is layered. Our SDK examines the user environment (browser, camera, etc.) to detect potential manipulations. Then, the deep neural network comes into play to defend against even the most sophisticated injection attacks.
The presentation attack detection is based on deep neural networks of various architectures, combined with a proprietary ensembling algorithm to achieve optimal performance. The networks consider multiple factors, including reflection, focus, background scene, motion patterns, etc. We offer both passive (no gestures) and active (various gestures) Liveness options, ensuring that your customers enjoy the user experience while delivering accurate results for you. The iBeta test was conducted using passive Liveness, and since then, we have significantly enhanced our networks to better meet the needs of our clients.
Oz Face Matching (Biometry) aims to identify the person, verifying that the person who performs the check and the document's owner are the same person. Oz Biometry looks through the video, finds the best shot where the person is clearly seen, and compares it with the photo from an ID or another document. The algorithm's accuracy is 99.99%, as confirmed by NIST FRVT.
Our biometry technology offers both 1:1 Face Verification and 1:N Face Identification, which are also based on ML algorithms. To train our neural networks, we use our own framework based on state-of-the-art technologies. A large private dataset (over 4.5 million unique faces) with wide representation of ethnic groups, as well as the use of other attributes (predicted race, age, etc.), helps our biometric models provide robust matching scores.
Our face detector can work with photos and videos. Also, the face detector excels in detecting faces in images of IDs and passports (which can be rotated or of low quality).
The Oz software combines accuracy in analysis with ease of integration and use. To further simplify the integration process, we have provided a detailed description of all the key concepts of our system in this section. If you're ready to get started, please refer to our integration guides, which provide the step-by-step instructions on how to achieve your facial recognition goals quickly and easily.
Describing how passive and active liveness works.
The objective of the Liveness check is to verify the authenticity and physical presence of an individual in front of the camera. In the passive Liveness check, it is sufficient to capture a user's face while they look into the camera. Conversely, the active Liveness check requires the user to perform an action such as smiling, blinking, or turning their head. While passive Liveness is more user-friendly, active Liveness may be necessary in some situations to confirm that the user is aware of undergoing the Liveness check.
In our Mobile or Web SDKs, you can define what action the user is required to do. You can also combine several actions into a sequence. Actions vary in the following dimensions:
User experience,
File size,
Liveness check accuracy,
Suitability for review by a human operator or in court.
In most cases, the Selfie action is optimal, but you can choose other actions based on your specific needs. Here is a summary of available actions:
To recognize the actions from either passive or active Liveness, our algorithms refer to the corresponding tags. These tags indicate the type of action that a user is performing within a media. For more information, please read the Media Tags article. Detailed information on how the actions (in other words, gestures) are named in different Oz Liveness components can be found here.
The Oz API is a comprehensive REST API that enables facial biometrics, allowing for both face matching and liveness checks. This write-up provides an overview of the essential concepts that one should keep in mind while using the Oz API.
To ensure security, every Oz API call requires an access token in its HTTP headers. To obtain this token, execute the POST /api/authorize/auth method with the login and password provided by us. Pass this token in the X-Forensics-Access-Token header in subsequent Oz API calls.
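As a rough sketch, the token flow above might look like this in Python. The endpoint path and the X-Forensics-Access-Token header come from the text; the request payload keys and the "access_token" response field are assumptions, so check the authentication article for the exact shapes your instance expects.

```python
import json
import urllib.request

def get_token(base_url: str, login: str, password: str) -> str:
    """Obtain an access token via POST /api/authorize/auth.
    The payload keys and the response field name are hypothetical placeholders."""
    payload = json.dumps({"credentials": {"email": login, "password": password}}).encode()
    req = urllib.request.Request(
        base_url + "/api/authorize/auth",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]  # assumption: field name

def auth_headers(token: str) -> dict:
    """Build the headers required by every subsequent Oz API call."""
    return {"X-Forensics-Access-Token": token}
```

Once you have the token, pass the result of auth_headers() with every request until the token expires.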
This article provides comprehensive details on the authentication process. Kindly refer to it for further information.
Furthermore, the Oz API offers distinct user roles, ranging from CLIENT, who can perform checks and access reports but lacks administrative rights (e.g., deleting folders), to ADMIN, who enjoys nearly unrestricted access to all system objects. For additional information, please consult this guide.
The unit of work in Oz API is a folder: you can upload interrelated media to a folder, run analyses on them, and check the aggregated result. A folder can contain an unlimited number of media, and each medium can be a target of several analyses. A single analysis can also be performed on a set of media at once.
Oz API works with photos and videos. A video can be either a regular video container, e.g., MP4 or MOV, or a ZIP archive with a sequence of images (a shot set). Oz API uses the file mime type to determine whether a media file is an image, a video, or a shot set.
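For illustration only, here is a sketch of how a client could predict which of the three media kinds Oz API will infer from a file's mime type, following the rule described above. The exact server-side mapping may differ; this simply mirrors the image/video/ZIP distinction the text names.

```python
import mimetypes

def media_kind(filename: str) -> str:
    """Guess the media kind (image, video, shot set) from the file's mime type.
    Illustrative only; the server applies its own mapping."""
    mime, _ = mimetypes.guess_type(filename)
    if mime is None:
        return "unknown"
    if mime.startswith("image/"):
        return "image"
    if mime.startswith("video/"):
        return "video"
    if mime == "application/zip":
        return "shot_set"  # a ZIP archive with a sequence of frames
    return "unknown"
```

For example, media_kind("selfie.jpg") classifies as an image, while a .zip of frames classifies as a shot set.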
It is also important to determine the semantics of a content, e.g., if an image is a photo of a document or a selfie of a person. This is achieved by using tags. The selection of tags impacts whether specific types of analyses will recognize or ignore particular media files. The most important tags are:
photo_id_front – for the front side of a photo ID
photo_selfie – for a non-document reference photo
video_selfie_blank – for a liveness video recorded outside the Oz Liveness SDK
If a media file is captured using the Oz Liveness SDK, the tags are assigned automatically.
The full list of Oz media tags with their explanation and examples can be found here.
Since video analysis may take a few seconds, the analyses are performed asynchronously. This means that you initiate an analysis (/api/folders/{{folder_id}}/analyses/) and then monitor the outcome by polling until processing is complete (/api/analyses/{{analyse_id}} for a single analysis or /api/folders/{{folder_id}}/analyses/ for all of a folder's analyses). Alternatively, there is a webhook option available. To see an example of how to use both the polling and webhook options, please check this guide.
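The polling pattern can be sketched as follows. Here fetch_state stands in for a GET on /api/analyses/{{analyse_id}}; the state names ("PROCESSING", "FINISHED") are assumptions, so consult the Oz API reference for the exact values your instance returns.

```python
import time
from typing import Callable

def wait_for_analysis(fetch_state: Callable[[], str],
                      interval_s: float = 1.0,
                      timeout_s: float = 60.0) -> str:
    """Poll until the analysis leaves the in-progress state or we time out.
    fetch_state is any callable that returns the current analysis state,
    e.g. a wrapper around GET /api/analyses/{{analyse_id}}."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        state = fetch_state()
        if state != "PROCESSING":  # assumption: in-progress state name
            return state
        time.sleep(interval_s)
    raise TimeoutError("analysis did not finish in time")
```

In production you would plug in a real HTTP call; in tests you can pass a stub that returns a canned sequence of states.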
These were the key concepts of Oz API. To gain a deeper understanding of its capabilities, please refer to the Oz API section of our developer guide.
Liveness and Face Matching can also be provided by the Oz API Lite module. Oz API Lite is conceptually different from Oz API:
Fully stateless, no persistence,
Extremely easy to scale horizontally,
No built-in authentication and access management,
Works with single images, not videos.
Oz API Lite is suitable when you want to embed it into your product and/or have extremely high performance requirements (millions of checks per week).
For more details, please refer to the Oz API Lite Developer Guide.
This article describes Oz components that can be integrated into your infrastructure in various combinations depending on your needs.
The typical integration scenarios are described in the Integration Quick Start Guides section.
Oz API is the central component of the system. It provides a RESTful application programming interface to the core functionality of Liveness and Face matching analyses, along with many important supplemental features:
Persistence: your media and analyses are stored for future reference unless you explicitly delete them,
Authentication, roles and access management,
Asynchronous analyses,
Ability to work with videos as well as images.
For more information, please refer to Oz API Key Concepts and Oz API Developer Guide. To test Oz API, please check the Postman collection here.
Under the logical hood, Oz API has the following components:
File storage and database where media, analyses, and other data are stored,
The Oz BIO module that runs neural network models to perform facial biometry magic,
Licensing logic.
The front-end components (Oz Liveness Mobile or Web SDK) connect to Oz API to perform server-side analyses either directly or via customer's back end.
The iOS and Android SDKs are collectively referred to as Mobile SDKs or Native SDKs. They are written in Swift and Kotlin/Java, respectively, and designed to be integrated into your native mobile application.
Mobile SDKs implement the out-of-the-box customizable user interface for capturing Liveness video and ensure that the two main objectives are met:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
After the Liveness video is recorded and available to your mobile application, you can run the server-side analysis. You can use the corresponding SDK methods, call the API directly from your mobile application, or pass the media to your back end and interact with Oz API from there.
The basic integration option is described in the Quick Start Guide.
Mobile SDKs are also capable of On-device Liveness and Face matching. On-device analyses may be a good option in low-risk contexts, or when you don't want the media to leave users' smartphones. Oz API is not required for On-device analyses. To learn how it works, please refer to this Integration Quick Start Guide.
Web Adapter and Web Plugin together constitute Web SDK.
Web SDK is designed to be integrated into your web applications and has the same main goals as Mobile SDKs:
The capture process is smooth for users,
The quality of a video is optimal for the subsequent Liveness analysis.
Web Adapter needs to be set up on a server side. Web Plugin is called by your web application and works in a browser context. It communicates with Web Adapter, which, in turn, communicates with Oz API.
Web SDK adds two-layer protection against injection attacks:
Collects information about browser context and camera properties to detect usage of virtual cameras or other injection methods.
Records liveness video in a format that allows server-side neural networks to search for traces of injection attack in the video itself.
Check the Integration Quick Start Guide for the basic integration scenario, and explore the Web SDK Developer Guide for more details.
Web UI is a convenient user interface that allows you to explore the stored API data in an easy way. It relies on API authentication and database and does not store any data on its own.
The Web console has an intuitive interface; nevertheless, a user guide is available here.
Selfie
A short video, around 0.7 sec. Users are not required to do anything. Recommended for most cases. It offers the best combination of user experience and liveness check accuracy.
One shot
Similar to “Simple selfie” but only one image is chosen instead of the whole video. Recommended when media size is the most important factor. Harder for a human, e.g., an operator or a court, to evaluate for spoofing.
Scan
A 5-second video where a user is asked to follow on-screen text with their eyes. Recommended when a longer video is required, e.g., for subsequent review by a human operator or in court.
Smile
Blink
Tilt head up
Tilt head down
Turn head left
Turn head right
A user is required to complete a particular gesture within 5 seconds.
Use active liveness when you need a confirmation that the user is aware of undergoing a Liveness check.
Video length and file size may vary depending on how soon a user completes a gesture.