Drive a MetaHuman from any backend over a single TCP socket. PCM audio in; lip-sync, emotion, and full-body gestures out — sequenced through one deterministic state machine.
01 Your backend speaks TCP
Send PCM and JSON over a single socket — no SDK lock-in. A minimal client sketch follows these steps.
02 Decode → route → blend
The plugin parses each message, dispatches it to lip-sync or the animator on the game thread, and blends the result safely.
03 MetaHuman performs
Face + body update at 30 FPS, in-engine. Drop-in for Unreal projects, no Blueprint edits.
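For a concrete picture of step 01, here is a minimal Python client sketch. Everything protocol-specific in it is an assumption for illustration: the port (7777), newline-delimited JSON for control messages, a 4-byte length prefix for PCM chunks, and the field names. Adapt it to the plugin's actual wire format.

```python
import json
import socket
import struct

HOST, PORT = "127.0.0.1", 7777  # placeholder address; configure to match the plugin


def send_json(sock: socket.socket, payload: dict) -> None:
    """Send one control message as newline-delimited UTF-8 JSON (assumed framing)."""
    sock.sendall(json.dumps(payload).encode("utf-8") + b"\n")


def send_pcm(sock: socket.socket, pcm: bytes) -> None:
    """Send one raw PCM chunk with a 4-byte big-endian length prefix (assumed framing)."""
    sock.sendall(struct.pack(">I", len(pcm)) + pcm)


with socket.create_connection((HOST, PORT)) as sock:
    # Hypothetical handshake: announce the audio format before streaming.
    send_json(sock, {"type": "audio_begin", "sample_rate": 16000,
                     "channels": 1, "format": "pcm_s16le"})
    with open("line01.pcm", "rb") as f:  # pre-decoded 16-bit mono PCM
        while chunk := f.read(4096):
            send_pcm(sock, chunk)
    send_json(sock, {"type": "audio_end"})
```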
UE 5.3+
Tested on 5.3 / 5.4 / 5.5
MetaHuman
Face + body, all rigs
Cross-platform
Win · Mac · Linux client
Fab Marketplace
Commercial license · coming soon
Expression system
8 emotions × 22 microexpressions.
Set the primary emotion blend with intensity 0–1. Layer microexpressions on top — eye narrowing, lip tightening, brow raise, blink rate. The state machine handles cross-fades and recovery.
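From the backend, setting that blend could be a single JSON message over the same socket as the audio. The schema here (a "type" tag, an emotion name, an intensity) is hypothetical, shown only to make the flow concrete:

```python
import json
import socket

# Hypothetical message schema; field names, port, and newline framing are
# assumptions for illustration, not the plugin's documented format.
msg = {"type": "emotion", "emotion": "joy", "intensity": 0.78}

with socket.create_connection(("127.0.0.1", 7777)) as sock:
    sock.sendall(json.dumps(msg).encode("utf-8") + b"\n")
```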
Active blend · Joy 0.78 · cross-fade 240 ms
Neutral 0.42 · Joy 0.78 · Sadness 0.10 · Anger 0.05 · Fear 0.18 · Surprise 0.55 · Disgust 0.02 · Trust 0.66
Microexpressions · 22 channels
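Layering individual microexpression channels might look like the sketch below; the channel names and value ranges are placeholders, not the plugin's actual 22-channel list:

```python
import json
import socket

# Hypothetical layering message: per-channel weights applied on top of the
# active emotion blend. Channel names and ranges are illustrative only.
msg = {
    "type": "microexpression",
    "channels": {
        "eye_narrow": 0.4,   # 0–1 weight (assumed)
        "lip_tighten": 0.2,
        "brow_raise": 0.6,
        "blink_rate": 1.5,   # e.g. a multiplier on idle blink cadence (assumed)
    },
}

with socket.create_connection(("127.0.0.1", 7777)) as sock:
    sock.sendall(json.dumps(msg).encode("utf-8") + b"\n")
```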
Body gestures
Full-body montages, not just talking heads.
Trigger named gestures by string. Sequence them, blend with talking-head animation, and override gaze targets mid-performance.
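Triggering a gesture and overriding gaze from the backend might look like this; the gesture name, gaze fields, and framing are again placeholders rather than the plugin's documented schema:

```python
import json
import socket


def send_json(sock: socket.socket, payload: dict) -> None:
    # Assumed framing: newline-delimited UTF-8 JSON over the shared socket.
    sock.sendall(json.dumps(payload).encode("utf-8") + b"\n")


with socket.create_connection(("127.0.0.1", 7777)) as sock:
    # Hypothetical gesture trigger: a named full-body montage, queued to play
    # after whatever is currently blending.
    send_json(sock, {"type": "gesture", "name": "wave_big", "queue": True})
    # Hypothetical gaze override: a world-space target held for two seconds
    # while the gesture and lip-sync keep running.
    send_json(sock, {"type": "gaze", "target": [120.0, 40.0, 165.0], "hold_s": 2.0})
```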