Learn how to film, light, and frame yourself to create a realistic digital twin avatar in HeyGen, using simple gear and best practices for quality.
Creating a great digital twin avatar in HeyGen isn’t about fancy gear or Hollywood-level production. It’s about a few simple choices: where you record, how you frame yourself, and how you move and speak on camera.
In this post, we turn Adam Halper and Nik Nolte’s Bootcamp session into a practical guide you can follow step by step, whether you’re brand new to avatars or ready to level up your existing one.
You’ll learn:
- Why you’d want an avatar in the first place
- How to set up your space, camera, and lighting
- What to do (and not do) with your hands, face, and eyes
- How to create multiple “looks” from the same person
- How HeyGen helps you check and improve your footage
Why create a digital twin avatar in HeyGen?
With a HeyGen avatar, creating video becomes as simple as:
- Type (or paste) a script
- Click generate
Instead of re-recording yourself every time, your avatar:
- Looks, moves, and sounds like you
- Can wear different outfits and appear in different environments
- Can deliver content at scale without you being on camera each time
That means you can:
- Scale content for courses, onboarding, marketing, and client updates
- Remove the friction of “recording days”
- Update and improve scripts without re-filming everything
The only real difference between your avatar and the ones in HeyGen’s launch videos and public library is the quality of the footage you upload to train it.
Safety first: You control your likeness
HeyGen takes identity safety seriously. A few key points from the session:
- You must consent to creating your digital twin
- The process is designed to prevent misuse of other people’s identities
- Content is monitored using both automated systems and human review
- Harmful or abusive content is blocked by policy
Bottom line: you control your avatar and how it is used.
Start with your use case
Before you hit record, ask:
“What will I use this avatar for?”
Your answer should influence where and how you film.
Examples:
- Client communication: Film in your office or a professional workspace
- Teaching and courses: Film where you’d typically present or in a clean, neutral environment
- Social media content: Film in a casual or on-brand space that matches your style
If you can’t film in the exact environment (e.g. you want your avatar “in Tokyo” but you aren’t flying to Tokyo), the next best move is:
- Film in front of a solid, single-color wall or green screen
- Make sure your clothing, hair, and skin contrast clearly with the background
This makes background removal much cleaner, so you can later drop your avatar into any scene you like.
Recording options: Webcam vs camera vs phone
HeyGen gives you two main ways to record avatar footage:
- Live webcam capture
- Fastest way to get started
- Guided flow with a script you can read
- Great for a first “test” avatar or a casual look
- Upload from camera or smartphone
- Best for highest quality looks
- Use your phone in landscape on a tripod, or a DSLR / mirrorless camera
- Upload directly to HeyGen
How long should you record?
- Minimum: 30 seconds per look
- Recommended: Up to 2 minutes per look
If you plan to:
- Deliver long speeches or lectures: Record closer to 2 minutes to give the model more examples of your natural motion
- Create short social or ad-style clips: 30–60 seconds is usually enough
Framing: How close should you be to the camera?
Framing is one of the most important quality factors.
You want:
- Your face to occupy at least ~50% of the frame
- Your body position to stay stable relative to the camera
Good framing looks like:
- Head and upper torso visible
- Your face centered
- Enough room for natural hand movement, but not so far that your face is tiny
Avoid:
- Being too far away (small face → less pixel detail → weaker resemblance and lip sync)
- Constantly moving towards and away from the camera so your face size keeps changing
- Standing off to one side of the frame without a good reason
If you move in the recording, the camera should move with you. What you don’t want is your face growing and shrinking in the frame.
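The "face at least ~50% of the frame" guideline is easy to sanity-check numerically. Here is a minimal sketch, assuming you already have a face bounding box from any face detector, and interpreting the guideline as face height relative to frame height (the function names and the 0.5 threshold are illustrative, not a HeyGen spec):

```python
def face_fraction(face_w, face_h, frame_w, frame_h):
    """Fraction of the frame height that the face occupies.

    Height is used rather than area: a face filling roughly half
    the frame height reads as a head-and-upper-torso shot.
    """
    if frame_w <= 0 or frame_h <= 0:
        raise ValueError("frame dimensions must be positive")
    return face_h / frame_h

def framing_ok(face_w, face_h, frame_w, frame_h, min_fraction=0.5):
    """True if the face is large enough per the ~50% guideline."""
    return face_fraction(face_w, face_h, frame_w, frame_h) >= min_fraction
```

Running this on a few detector boxes across your clip also catches the "face growing and shrinking" problem: if the fraction varies a lot between frames, your distance to the camera is drifting.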
Lighting: Soft, even, and predictable
Lighting can make or break your avatar.
Aim for:
- Soft, even light across your face (no harsh shadows)
- No overexposure (no bright hotspots on your forehead, nose, or cheeks)
- A setup where both sides of your face are lit clearly and evenly
You can:
- Use a ring light or two softbox lights at roughly 45° angles
- Use a window as a key light, but be careful with changing daylight
Caution with natural light:
- Moving clouds = changing brightness and color
- Over a 1–2 minute recording, that can introduce flicker and light shifts in your avatar
If you want repeatable results, a simple, inexpensive lighting kit is often more reliable than sunlight.
Audio: Clear and quiet wins
Your avatar is audio-driven. That means the audio you use drives:
- Lip sync
- Expressions
- Overall realism
When recording training footage:
- Record in a quiet room
- Make sure your voice is clearly the loudest thing the mic hears
- A bit of ambient sound (like distant traffic or a soft room tone) is fine
- Avoid loud fans, echoey rooms, or overpowering background noise
For best results later, use clear, expressive voiceovers in your generated videos too. A dull, flat voice will lead to a dull, flat avatar.
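"Your voice is clearly the loudest thing the mic hears" can be checked before you upload. A rough sketch: compare the RMS level of a speaking clip against a few seconds of room tone recorded in the same spot. The 20 dB threshold below is an illustrative assumption, not a HeyGen requirement:

```python
import math

def rms(samples):
    """Root-mean-square level of audio samples (floats in [-1, 1])."""
    if not samples:
        raise ValueError("empty sample list")
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def snr_db(speech, room_tone):
    """Speech-to-ambient ratio in decibels.

    `speech` is a clip of you talking; `room_tone` is a few seconds
    of "silence" recorded in the same room with the same mic.
    """
    return 20 * math.log10(rms(speech) / rms(room_tone))

def voice_dominates(speech, room_tone, min_db=20.0):
    """Illustrative check: voice should sit well above the room tone."""
    return snr_db(speech, room_tone) >= min_db
```

A bit of soft room tone will still pass this check; a loud fan or heavy echo will push the ratio down, which is exactly the situation to fix before recording your real takes.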
Hands, gestures, and body language
Your body language in the training video heavily influences your avatar’s behavior.
A few big rules from Nik’s demos:
Avoid very specific gestures
If you do a very clear, symbolic gesture in your input footage (for example, thumbs up, counting on your fingers, pointing), the model learns that as part of your default motion.
That can cause problems later: while your avatar is delivering a serious topic, it may randomly throw a thumbs up or “number 1” gesture.
So for base training footage:
- Use generic, natural movement, not symbolic gestures
- Avoid overly playful or “meme” style moves unless that’s truly your use case
Avoid constant, exaggerated motion
If your whole recording is big, fast, frantic gestures:
- The avatar can look hectic or unnatural
- It may feel “too much” for most professional content
On the other hand, if your recording is very stiff and expressionless:
- The avatar will look flat and monotone
The sweet spot is:
- Natural, relaxed posture
- Subtle hand movements
- Occasional emphasis with your hands, then return to a neutral “anchor” position (hands gently together, relaxed at your sides, or lightly resting in front of you)
Eye contact and teleprompters
Eye contact is a huge part of realism.
Common issues:
- Reading from a script placed too low (eyes look down instead of at the camera)
- Teleprompter text too big or too wide, forcing your eyes to sweep left and right
Better:
- Position your teleprompter (or on-screen text) as close to the lens as possible
- Keep text narrow and small enough that your eyes move minimally
- Practice a few takes so you can glance and then come back to the lens
If your eyes drift too much in your training footage, the avatar will likely do the same.
HeyGen does offer AI eye contact correction, but for the most realistic look, it’s still best to film with proper eye contact from the start.
Creating multiple looks from the same person
You’re not limited to one avatar look.
You can record:
- A more formal look (blazer, neutral background) for business presentations
- A casual look (t-shirt, home office) for social content or community updates
- A seated look vs a standing look
- Different hair styles, outfits, and environments
Adam and Nik recommend:
- Recording multiple 30+ second clips in the same session (with different hand positions and expressions)
- Uploading them as separate “looks” for the same person
That way, in AI Studio you can choose the avatar look that best matches the script and context.
Using green screen and background removal
If you want maximum flexibility across different environments, a green screen or solid background is a great option.
Two approaches:
- Record on a green screen / solid wall and upload as is
- Use HeyGen’s background removal to cut yourself out
- Drop your avatar onto any background or slide deck
- Key out the background yourself before uploading
- Do the chroma key (green screen removal) in your own editor
- Add a few backgrounds yourself
- Upload that as your avatar footage
Both work. The key is good contrast between you and the background, and consistent lighting.
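At its core, the chroma-key step classifies each pixel as "background green" or "subject". A minimal per-pixel sketch shows why contrast matters (real editors work in better colour spaces and feather edges; the thresholds here are purely illustrative):

```python
def is_background_green(pixel, dominance=40, min_green=100):
    """Classify one RGB pixel as chroma-key background.

    A pixel counts as background when green clearly dominates both
    red and blue (by the `dominance` margin) and is bright enough.
    Thresholds are illustrative; tune them to your footage.
    """
    r, g, b = pixel
    return g >= min_green and (g - r) >= dominance and (g - b) >= dominance

def background_mask(image):
    """image: list of rows of (r, g, b) tuples.

    Returns a parallel grid of booleans:
    True = background (keyed out), False = subject (kept).
    """
    return [[is_background_green(px) for px in row] for row in image]
```

This is also why the contrast advice above matters: greenish clothing or a green reflection on your skin lands on the wrong side of the threshold and gets keyed out along with the wall.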
Technical specs: Resolution and frame rate
You do not need cinema-level specs, but a few guidelines help:
- Resolution: 1080p or 4K works best
- Orientation: Record in 16:9 landscape; you can crop to portrait later in AI Studio
- Frame rate:
- 30 fps is perfect
- 60 fps is fine, but HeyGen won’t output 60 fps, so it doesn’t boost avatar quality
More important than higher fps is stability:
- Put your camera or phone on a tripod
- Don’t adjust or move the camera mid-recording
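The spec guidelines above can be bundled into a quick pre-upload check. This is a sketch encoding the rules of thumb from this post (1080p or better, 16:9 landscape, 30 fps preferred), not an official HeyGen validator:

```python
def check_upload_specs(width, height, fps):
    """Return a list of warnings for avatar-footage specs.

    Encodes the guidelines from this post: 1080p or better,
    16:9 landscape, 30 fps preferred (60 fps accepted but
    not reflected in the output).
    """
    warnings = []
    if height < 1080:
        warnings.append(f"{width}x{height}: below 1080p, expect softer detail")
    if width <= height:
        warnings.append("portrait/square orientation: record 16:9 landscape")
    elif abs(width / height - 16 / 9) > 0.01:
        warnings.append("aspect ratio is not 16:9")
    if fps not in (30, 60):
        warnings.append(f"{fps} fps: use 30 fps (60 is fine but adds nothing)")
    return warnings
```

You can read the real width, height, and frame rate off your clip with any media inspector (e.g. ffprobe) and feed them in before uploading.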
How HeyGen helps you check and improve your footage
You are not left guessing whether your footage is good enough.
HeyGen includes a video score that:
- Automatically evaluates your upload for visual and audio quality
- Flags specific issues (e.g. lighting, framing, noise)
- Points to exactly where (time stamps) problems show up
Other built-in helpers:
- Background removal for swapping environments or placing your avatar over slides
- AI eye contact to correct small gaze issues
- Support for top character-consistent image models (e.g. Nano, Flux) when you want to generate new photo-based looks of your avatar
One important note from Adam:
Photo-based looks are fantastic and very realistic to strangers, but video-based looks (trained on real motion) are still the most accurate and lifelike to people who know you well.
Use:
- Video looks when you want maximum authenticity to your real self
- Photo-based looks when you need fast, flexible variations (different settings, props, outfits)
Where to start inside HeyGen
If you are brand new and just want to try:
- Log into HeyGen
- Go to create avatar
- Choose start with video
- Either:
- Record with your webcam using the guided script, or
- Upload a prerecorded clip from your phone or camera
- Let HeyGen process it into an avatar
- Click create in studio, add a script, and generate your first avatar video
From there, you can experiment with different looks, backgrounds, and use cases as you get more comfortable.