Version: always use the latest available version.
The Liveness Check Agent is designed to ensure that the person behind the screen is a live person and is present during the process, without needing to leave the chat. It is recommended to use this agent when you only need to validate a person’s liveness. This agent can be used in any country.
This is not considered a complete biometric process.

Prerequisites

✅ Checklist to start without blockers

  • The user must be able to record video from the chat (camera permissions must be enabled on the device).
  • Recommended: define the handling flow if you enable Human in the loop (who reviews, response times, and what happens if there is no response).

What steps does the user complete?

1. Read instructions: The user sees a message in the chat with the steps to follow for recording a video, along with an image showing instructions for a proper recording.
2. Record a video selfie: The user records a short video in which they must say a unique numeric sequence (OTP) out loud.
3. Constant communication: The user receives messages indicating that their video is being analyzed: “We are verifying your video. This will only take a few seconds.” If the analysis is successful, they will see “Your liveness check was successful.” If the analysis encounters errors: “We had a problem analyzing the video. Please try the process again.”

How is the agent composed?

The agent contains tools that analyze the video submitted by the user:
  1. OTP generation: generates the numeric sequence that the user must say when recording the video.
  2. Speech to text: converts the audio to text to compare whether it matches the issued code.
  3. OTP validation: validates that the mentioned code is correct.
  4. Passive liveness validation: validates the liveness of the image extracted from the video submitted by the user. This process identifies and neutralizes deepfakes or any impersonation attempt, guaranteeing the presence of a live person’s face.
  5. Lipsync validation (optional): validates that the audio belongs to the video, comparing lip movement in the video with the audio to confirm they come from the same source.
  6. Human in the Loop (enterprise only): if the process fails, you can opt for Human in the Loop (HIL), where the case will be reviewed by a human operator.
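The OTP generation and validation steps above can be sketched roughly as follows. This is a minimal illustration under assumptions, not the agent's actual implementation; the function names are hypothetical, and the real speech-to-text comparison is done by the agent's tools.

```python
import secrets

def generate_otp(length: int = 4) -> str:
    """Generate a random numeric sequence of the given length
    (the documented otpLength input allows 3-6 digits)."""
    return "".join(secrets.choice("0123456789") for _ in range(length))

def validate_otp(transcription: str, expected: str) -> bool:
    """Compare the digits found in the speech-to-text transcription
    against the issued code, ignoring spaces, words, and punctuation."""
    spoken_digits = "".join(ch for ch in transcription if ch.isdigit())
    return spoken_digits == expected
```

For example, a transcription like "the code is 4 7 1 2" would validate against the issued code "4712", while any mismatch in the spoken digits would fail.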

Connection from Marketplace

1. Access the platform: Go to apps.jelou.ai. On the home screen you will see Brain Studio and Connect. In Brain Studio, select Marketplace.
2. Find the agent in Marketplace: In Marketplace you will see three tabs: Catalog, Installed, and Developers. Click Developers and use the search bar to find Liveness check agent, then click the download icon to install it.
3. Confirm the installation: If the installation completed successfully, you will see the Installed tag in green.
4. Open the agent in Brain: Go to Brain. In the sidebar you will see your installed skills and tools. Search for Liveness check agent and click to open it: the flow will load in your canvas.
5. Configure the version and inputs: Click the input node, open Advanced settings (at the bottom), and select the agent version. If you will use the default values, you do not need to modify any inputs. If you are going to change an input or customize messages, review the Configuration section.
6. Configure the outputs: This agent has 5 error outputs and 1 success output. You can direct each output to:
  • A text input with a custom message.
  • Connect (only if your organization has this module).
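Directing each output could be sketched in your own orchestration code as a simple mapping from outcome to response. This is only an illustration: the outcome names below are hypothetical, and the real output identifiers depend on how the node is configured in Brain.

```python
# Hypothetical mapping of agent outcomes to the message shown to the user.
# The real output names and count depend on the node's configuration.
HANDLERS = {
    "success": "Your liveness check was successful.",
    "otp_error": "The numeric sequence you mentioned is not correct.",
    "no_face": "We could not detect any face in the video.",
    "low_quality": "The video quality was too low to process.",
    "max_retries": "Maximum number of attempts exceeded.",
}

def route_output(outcome: str) -> str:
    """Return the message for a given outcome, falling back to the
    generic error message documented for the agent."""
    return HANDLERS.get(
        outcome,
        "We had a problem analyzing the video. Please try the process again.",
    )
```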
7. Run tests: With the configuration ready, run tests using a skill or a prior agent as a “precursor” to trigger the flow.

Configuration

Language (string, default: "Es")
Variable: language. Defines the language of the experience. Available values: Es (Spanish), En (English).

Max. liveness check attempts (number, default: 3)
Variable: retries. Maximum number of liveness check attempts. Available values: 1-5.

Notification email for exceeded retries (string)
Variable: customerServiceEmail. Email to be notified when the maximum number of attempts is exceeded.

Enable Liveness Introduction Video (boolean, default: false)
Variable: enableIntroVideo. Determines whether the introduction video should be shown to the user before starting the process.

URL to display liveness introduction media (string)
Variable: introMediaUrl. The URL of an introduction video or image shown to the user during the process.

OTP code length in Chat (number, default: 4)
Variable: otpLength. Number of digits in the OTP code. Available values: 3-6.

Maximum OTP code duration (number, default: 1)
Variable: otpDuration. Maximum validity of the OTP code, in minutes. Available values: 1-10.

Enable Human in the loop (boolean, default: false)
Variable: enableHumanInLoop. Indicates whether the review process with a human agent should be activated.

Enable LipSync (boolean, default: false)
Variable: enableLipSync. Enables LipSync to compare lip movement with the audio.

Max. agent attempts (string, default: "Unlimited")
Variable: retriesAgent. Maximum number of agent attempts before blocking the user. Available values: Unlimited, 1, 2, 3.

User block duration (string, default: "1 day")
Variable: blockingInHours. Block duration when the agent attempt limit is exceeded. Available values: 1 day, 1 week, 1 month.
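How retriesAgent and blockingInHours interact can be sketched as follows. This is illustrative only, under the assumption that exceeding the attempt limit blocks the user for the configured duration; the agent's internal logic is not exposed.

```python
from datetime import datetime, timedelta

# Documented blockingInHours values mapped to durations
# (the "1 month" duration of 30 days is an assumption).
BLOCK_DURATIONS = {
    "1 day": timedelta(days=1),
    "1 week": timedelta(weeks=1),
    "1 month": timedelta(days=30),
}

def check_attempt(attempts_used: int, retries_agent: str,
                  blocking: str, now: datetime):
    """Return the block-until timestamp once the agent attempt limit is
    exceeded, or None while the user may keep trying."""
    if retries_agent == "Unlimited":
        return None
    if attempts_used >= int(retries_agent):
        return now + BLOCK_DURATIONS[blocking]
    return None
```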
Enable custom messages (boolean, default: false)
Variable: enableCustomMessages. Enables custom messages.

Variable: speechToTextMessages. Custom messages for audio transcription errors. JSON format:
{
  "no_text": {
    "description": "When you recorded the video selfie we could not capture the sound of your voice. You may be in a very noisy place, or you sent a video without sound.",
    "advice": "Try again, making sure the video selfie has sound."
  },
  "error_http": {
    "description": "We're sorry, there was an error in the audio transcription.",
    "advice": "Try again, making sure the video selfie has sound."
  },
  "error_code": {
    "description": "We're sorry, there was an error in the audio transcription.",
    "advice": "Try again, making sure the video selfie has sound."
  }
}
Variable: OTPValidationMessages. Custom messages for OTP validation errors. JSON format:
{
  "client_error": {
    "description": "⚠️ The numeric sequence you mentioned is *[transcription]*, and it is not correct.",
    "advice": "Try again, making sure the numeric sequence is correct."
  },
  "OTP_ERROR": {
    "description": "⚠️ We're sorry, you exceeded the time limit for us to validate your video selfie.",
    "advice": "Try again, making sure the numeric sequence is correct."
  },
  "CODE_ERROR": {
    "description": "We're sorry, an unexpected error occurred in the service.",
    "advice": "Try again, making sure the numeric sequence is correct."
  },
  "HTTP_ERROR": {
    "description": "We're sorry, an unexpected error occurred in the service.",
    "advice": "Try again, making sure the numeric sequence is correct."
  }
}
Variable: passiveLivenessMessages. Custom messages for passive liveness errors. JSON format:
{
  "resp_face": {
    "description": "We're sorry! We analyzed your video and *could not detect any face*.",
    "advice": "Try again, making sure the video selfie has a visible face."
  },
  "more_than_one_face": {
    "description": "We're sorry! We analyzed your video and *detected more than one face*.",
    "advice": "Try again, making sure the video selfie has only one visible face."
  },
  "face_far": {
    "description": "We're sorry! We analyzed your video and found that *your face was a little far from the camera*.",
    "advice": "Try again, making sure the video selfie has your face facing the camera."
  },
  "face_close": {
    "description": "We're sorry! We analyzed your video and it appears *you moved your face too close to the camera and a complete image could not be captured*.",
    "advice": "Try again, making sure the video selfie has your face facing the camera."
  },
  "face_cut_covered": {
    "description": "We're sorry! We analyzed your video and *it seems your face was covered or you were not well centered to the camera*.",
    "advice": "Try again, making sure the video selfie has your face facing the camera."
  },
  "eyes_closed": {
    "description": "We're sorry! We analyzed your video and could not detect your eyes well, it seems *you were wearing glasses or your eyes were slightly closed*.",
    "advice": "Try again, making sure the video selfie has your eyes open."
  },
  "low_quality": {
    "description": "We're sorry! The video we received is low quality and we could not process it correctly.",
    "advice": "Try again, making sure the video selfie has adequate quality."
  },
  "code_error": {
    "description": "We're sorry, an unexpected error occurred in the service.",
    "advice": "Try again, making sure the video selfie has adequate quality."
  },
  "error_http": {
    "description": "We're sorry, an unexpected error occurred in the service.",
    "advice": "Try again, making sure the video selfie has adequate quality."
  },
  "error_general": {
    "description": "We're sorry, an unexpected error occurred in the service.",
    "advice": "Try again, making sure the video selfie has adequate quality."
  }
}
Enable instructions (boolean, default: true)
Variable: enableInstructions. Instructions in an image are enabled by default.
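Putting the inputs together, a complete configuration using the documented variable names might look like this. The values are examples, not recommendations, and the email address is hypothetical.

```python
# Example input values for the Liveness Check Agent, using the
# documented variable names. Defaults are kept except where noted.
liveness_config = {
    "language": "Es",                  # Es or En
    "retries": 3,                      # 1-5 liveness check attempts
    "customerServiceEmail": "ops@example.com",  # hypothetical address
    "enableIntroVideo": False,
    "introMediaUrl": "",               # empty: use the default media
    "otpLength": 4,                    # 3-6 digits
    "otpDuration": 1,                  # 1-10 minutes
    "enableHumanInLoop": False,        # enterprise only
    "enableLipSync": False,
    "retriesAgent": "Unlimited",       # Unlimited, 1, 2, or 3
    "blockingInHours": "1 day",        # 1 day, 1 week, 1 month
    "enableCustomMessages": False,
    "enableInstructions": True,
}
```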

Frequently asked questions

Is this considered a complete facial biometric process?
No, the liveness check process is only one “stage” of the biometric process, so it is not considered facial biometrics.

What information does the user need to provide?
None. The only input is the video selfie that the user records with their phone.

Can I show the user an instruction video before the recording?
Yes, you can enable the instruction video with the input Enable Liveness Introduction Video (true). Instructions in an image are enabled by default.

Can I use my own instruction media?
Yes, you can add your own link in the input URL to display liveness introduction media; this disables the default image or video.

What happens if the user abandons the process?
The process closes after 1 hour without activity in the chat. If the user returns to the chat, they must start the flow from the beginning.