Schedule a 15-minute appointment with a client advisor today.

Veyetals

Our Technology

Veyetals is powered by artificial intelligence, image processing, cloud computing, and photoplethysmography.

How It Works

Lighting Check

Veyetals relies on video imaging, using environmental lighting reflected off human skin, so readings should be taken in a naturally well-lit room.
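For illustration, here is a minimal sketch of such a lighting check in Python with OpenCV. The brightness threshold and the webcam index are assumptions for the example, not Veyetals' actual parameters:

```python
import cv2
import numpy as np

MIN_MEAN_BRIGHTNESS = 80  # assumed threshold on a 0-255 scale

def lighting_ok(frame: np.ndarray) -> bool:
    """Return True if the frame is bright enough for a reliable reading."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return gray.mean() >= MIN_MEAN_BRIGHTNESS

cap = cv2.VideoCapture(0)  # assumed: default camera at index 0
ok, frame = cap.read()
cap.release()
if ok and not lighting_ok(frame):
    print("Please move to a brighter, evenly lit room.")
```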

Video Calibration

A short, optimized video of the user's face is recorded on the device; it is used for vitals analysis and deleted immediately afterwards.

Environment Adjustments

If needed, the AI applies environmental adjustments and video corrections before analysis.
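One common correction of this kind is removing slow illumination drift from the colour traces. The sketch below normalizes each channel against its running mean; the window length is an illustrative assumption, and Veyetals' actual corrections are not public:

```python
import numpy as np

def remove_illumination_drift(rgb_trace: np.ndarray, window: int = 30) -> np.ndarray:
    """rgb_trace: (num_frames, 3) mean RGB values of the skin region per frame."""
    kernel = np.ones(window) / window
    corrected = np.empty_like(rgb_trace, dtype=float)
    for c in range(3):
        # Divide each colour trace by its running mean to cancel slow lighting changes.
        running_mean = np.convolve(rgb_trace[:, c], kernel, mode="same")
        corrected[:, c] = rgb_trace[:, c] / np.maximum(running_mean, 1e-8)
    return corrected  # drift-free traces, centred around 1.0
```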

Skin Analysis

Our unique AI algorithm identifies the facial skin and obtains a plethysmographic signal from changes in the light it reflects.
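As an illustration of this step, the sketch below uses OpenCV's bundled Haar face detector as a stand-in for Veyetals' proprietary skin-identification AI, and averages the colour of the detected region in each frame:

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def mean_skin_rgb(frame: np.ndarray) -> np.ndarray | None:
    """Return the mean (R, G, B) of the detected face region, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = frame[y:y + h, x:x + w]
    b, g, r = roi.reshape(-1, 3).mean(axis=0)  # OpenCV stores pixels as BGR
    return np.array([r, g, b])
```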

Data Extraction & Calculation

The AI uses remote photoplethysmography (rPPG) technology to analyze the facial skin and detect biometrics: Heart Rate, Heart Rate Variability, Oxygen Saturation, Blood Pressure, and Mental Stress.
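To make the calculation step concrete, here is a sketch of heart-rate estimation from a plethysmographic trace: band-pass the signal to a plausible pulse range, then take the dominant frequency. The 0.7-4 Hz band (42-240 bpm) is a common choice in the rPPG literature, not a published Veyetals parameter:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(signal: np.ndarray, fps: float) -> float:
    """signal: 1-D plethysmographic trace sampled at fps frames per second."""
    # Keep only frequencies in the plausible human pulse range.
    b, a = butter(3, [0.7, 4.0], btype="bandpass", fs=fps)
    filtered = filtfilt(b, a, signal - signal.mean())
    # The strongest spectral peak inside the band is taken as the pulse.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0  # beats per minute
```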

Results Generation

In just under a minute, your results are generated with 90-95% accuracy and shown to you. Results are anonymized on our end to ensure your privacy and security.

Remote Photoplethysmography (rPPG)

Veyetals uses remote photoplethysmography (rPPG) to detect vital signs in a completely contactless way, without any external devices.

rPPG uses the contrast between specular and diffuse reflection to measure variations in the RGB (red, green, blue) light reflected from human skin. Specular reflection is the mirror-like light that bounces directly off the skin's surface and carries no pulse information. Diffuse reflection, on the other hand, is the light that returns after absorption and scattering in the skin tissue, and it fluctuates with changes in blood volume.

Via: Algorithmic principles of remote-PPG (Wang et al.).
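That paper introduces the POS ("plane orthogonal to skin") projection, which suppresses the specular component and recovers the pulse-carrying diffuse signal. Below is a compact sketch of POS (not necessarily the exact algorithm Veyetals ships); the input is a (num_frames, 3) trace of mean RGB values, and the ~1.6 s window follows the paper's suggestion:

```python
import numpy as np

def pos_pulse_signal(rgb: np.ndarray, fps: float) -> np.ndarray:
    """Recover a pulse signal from a (num_frames, 3) mean-RGB trace."""
    n = len(rgb)
    win = int(1.6 * fps)
    P = np.array([[0.0, 1.0, -1.0], [-2.0, 1.0, 1.0]])  # plane orthogonal to skin tone
    h = np.zeros(n)
    for t in range(n - win + 1):
        block = rgb[t:t + win]
        cn = block / block.mean(axis=0)   # temporal normalization
        s = cn @ P.T                      # project onto the plane
        # Alpha-tuned combination of the two projected signals.
        p = s[:, 0] + (s[:, 0].std() / (s[:, 1].std() + 1e-8)) * s[:, 1]
        h[t:t + win] += p - p.mean()      # overlap-add into the output
    return h
```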

The rPPG process has four main components: skin pixel selection, signal extraction, signal filtering, and output calculation.

First, the face and facial features are detected through video imaging via a smartphone or webcam camera. Then, the average of each pixel colour (red, green, blue) across the face is measured over time. Subsequently, the AI filters out noise from head motion and generates our offered vital signs. The results are then calculated and shown to you with 90-95% accuracy (our AI is constantly learning, improving on this number!).
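Tying the four components together, here is a sketch of an end-to-end pipeline that reuses the helper functions from the earlier sketches (mean_skin_rgb, remove_illumination_drift, pos_pulse_signal, estimate_heart_rate). It mirrors the flow described above, not Veyetals' actual implementation:

```python
import cv2
import numpy as np

def heart_rate_from_video(path: str) -> float:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # assume 30 fps if metadata is missing
    trace = []
    while True:                              # 1. skin pixel selection, per frame
        ok, frame = cap.read()
        if not ok:
            break
        rgb = mean_skin_rgb(frame)
        if rgb is not None:
            trace.append(rgb)
    cap.release()
    rgb_trace = np.array(trace)              # 2. signal extraction over time
    corrected = remove_illumination_drift(rgb_trace)
    pulse = pos_pulse_signal(corrected, fps) # 3. signal filtering
    return estimate_heart_rate(pulse, fps)   # 4. output calculation
```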