
What’s new at HeyGen: April 2026

Written by Holly Xiao
Last Updated May 6th, 2026
Summary

Explore HeyGen’s latest updates, including advanced avatars, AI video automation, and new tools that turn prompts, agents, and workflows into fully produced videos.

April was a wild ride. We launched 15 features and products, and when you stack them up, a story takes shape. Video stopped being a tool you open and started being something your tools just do. Your coding agent now ships video. Your podcast now becomes social clips in 175+ languages and dialects while you’re in line for coffee. Your Digital Twin can now walk across a cinematic scene while interacting with up to two other people.

Here’s what shipped, why it matters, and how to play with it this week.

One 15-second clip, unlimited looks and videos

Avatar V is here, and it’s our most advanced avatar model yet. It does something no avatar model has done before: combine the flexibility of photo avatars with the realism of video avatars in a single model.

Here’s the part that feels almost unfair. You record one 15-second clip. From that single take, you get full upper-body movement (waist up, not just face and shoulders), so it actually feels like real studio footage. You get multi-look generation, so you can select different outfits, settings, and styles from one base recording without losing your identity. You also get multi-angle consistency across shots, with no drift and no uncanny valley. And you get long-form stability, so your face, voice, and presence stay locked in even on the longest videos.

If you’re a creator, founder, marketer, or anyone whose face is part of the brand, you’re done with reshoots. Record fifteen seconds of yourself once and get a year of content. Different outfits for different campaigns, different settings for different audiences, but the same on-screen presence everywhere.

Create a Digital Twin >

Seedance 2.0, now starring you

Seedance 2.0 is the model everyone has been talking about, and HeyGen is the first avatar video platform to integrate it with identity-verified human faces. Every other implementation of Seedance is locked to fictional characters. With HeyGen, your real Digital Twin gets cinematic motion, dynamic camera work, and up to three avatars in a single scene.

The more interesting story is where you can actually use it. Seedance 2.0 is now wired into multiple surfaces inside HeyGen, so you can reach for it in whichever workflow already fits how you work.

Inside Avatar Shots

This is the most direct path. Open Avatar Shots, choose Seedance 2.0 as your generation engine, and direct your Digital Twin through cinematic scenes with full body motion and dynamic camera work. Pick the prompt, the scene, the camera move, and Seedance does the rest. If you came here to make a brand film with your face in it, this is the front door.

Inside Video Agent

If you’d rather just describe what you want, hand the brief to Video Agent. Seedance 2.0 is now a first-class engine in the Video Agent creation flow, alongside Avatar V. Give Video Agent a prompt, and it automatically composes the right mix of Seedance cinematic shots and Avatar V scenes into a finished video. You don’t have to step out into Avatar Shots manually or pick the engine yourself. The agent picks it for you.

Across Digital Twin everywhere

The same Digital Twin you built for Avatar V is the one Seedance casts in those cinematic shots. Record once, walk through a coffee shop, gesture across a stage, hand off a line to two other Twins of yourself or your team. One identity, two engines, full coverage from talking head to cinematic scene.

You can now produce the kind of cinematic, story-driven video that used to require a director, a crew, a location, and a week of editing, and you can do it from whatever workflow already fits the job. Direct it yourself in Avatar Shots, or describe it once and let Video Agent compose it for you. Brand films, founder narratives, ad creative with real production value. All from your desk, with your real face, in an afternoon.

Create with Avatar Shots >

Turn one podcast into a hundred clips

Picture this. You record a 90-minute podcast on Friday. Your editor spends six hours pulling clips. By Tuesday, your audience has already moved on.

Instant Highlights v2 kills that problem. Upload any long-form video, whether it’s a podcast, webinar, keynote, interview, or livestream. You get publish-ready short clips for every platform in one workflow.

Type “the part where she talks about fundraising mistakes” and the AI finds the moment inside a 3-hour episode. Face tracking with dynamic camera follow makes a 9-by-16 cutdown of your keynote feel like a real operator framed it, instead of a dumb static center crop. Two people on screen get an auto-stacked split frame or a speaker cut. Captions, translation into more than 175 languages with lip sync, and 4K upscaling are all baked in. No third-party tools, no extra exports.

If you produce long-form content, this collapses your post-production stack into one upload. The clips ship the same day as the episode. Your audience hears it while it’s still hot, and your global audience hears it in their own language without waiting for a localization vendor.

Clip a video >

Now your AI agent can ship video

For everyone building with AI agents right now, there’s a quiet question that nobody answers cleanly. Agents always end at a human, so what’s the last mile? For most teams today, the answer is a wall of markdown. We don’t think that’s the answer. We think the answer is video.

In 12 days, we shipped three things that make that real.

HeyGen CLI wraps our v3 API in a single binary with structured JSON output and a built-in wait flag. You can pass a prompt, and let Video Agent pick the avatar, voice, script, and layout for you. Or you can pass a JSON body and drive every detail yourself. macOS and Linux, no runtime dependencies.

If you’re a developer, you can now generate videos exactly like you run a curl command. Drop it into a build pipeline, a cron job, or a Slack bot. Your product can ship video as easily as it ships email.
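To make that concrete, here’s a minimal sketch of what a pipeline step could look like. The binary name, the flags, and the video_url field are assumptions for illustration only; the post only promises a prompt mode, a JSON-body mode, structured JSON output, and a wait flag, so check the CLI’s own help for the real interface.

# Hypothetical sketch: command name, flags, and output fields are assumptions, not the documented CLI.
# Simplest path: hand Video Agent a prompt and block until the render finishes.
heygen generate --prompt "60-second recap of our April release" --wait > result.json

# Or drive every detail yourself with a full JSON request body (shape assumed here),
# then pull the share link out of the structured JSON output.
heygen generate --body "$(cat video-request.json)" --wait | jq -r '.video_url'

Either form drops cleanly into a cron job, a CI step, or the handler behind a Slack command.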

HyperFrames is now open source under Apache 2.0. The pitch is wild. You write videos as plain HTML. AI coding agents like Claude Code, Cursor, Codex, and Gemini CLI build them natively. HyperFrames renders the result to MP4. The same input always produces the same output, so it slots right into CI/CD without surprises.

Video is now code. That means version control, code review, automated testing, and reproducible builds for video, the same way you already do for software. For engineering teams, this is the first time video fits inside the workflow you already use to ship everything else.
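If it helps to picture the loop, here’s a rough sketch of a CI-style step. The hf-video and hf-scene tags, the render subcommand, and the --out flag are guesses for illustration, not the documented HyperFrames syntax; the real component names live in the quickstart at hyperframes.heygen.com/quickstart.

# Hypothetical sketch: tag names and the render subcommand are assumptions.
cat > release-recap.html <<'HTML'
<hf-video width="1920" height="1080" fps="30">
  <hf-scene duration="4s"><h1>April 2026 at HeyGen</h1></hf-scene>
  <hf-scene duration="6s"><p>Avatar V, Seedance 2.0, Instant Highlights v2</p></hf-scene>
</hf-video>
HTML

# Deterministic render: the same HTML in, the same MP4 out, so this step can
# sit in CI next to your build and tests.
npx hyperframes render release-recap.html --out release-recap.mp4

Because the composition is just a file in the repo, the review happens in a pull request like any other change.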

HeyGen Skills might be our favorite. You’re inside Claude Code or Cursor, and you tell the agent what kind of avatar you want. The agent builds it, selects a voice, writes the script, generates the video, and hands you a share link. Your avatar is saved locally. After that, every video is just one prompt. Try it. It feels like magic.

The friction between “I have an idea” and “I have a finished video” is now one sentence to your coding agent. No tabs, no app switching, no logins. If you live in an IDE, video is now native to your work.

The shared install pattern across all three is npx skills add heygen-com/. Three different surfaces, one consistent way agents reach them.

We open-sourced video

When we open-sourced HyperFrames, we said we wanted it to be the way AI agents and creative teams ship video together. Since then, we’ve been steadily filling in every part of that loop. Today, HyperFrames is a packaged solution that covers the full lifecycle, from idea to render to share.

Here’s what’s now in the box and what each piece unlocks for you.

HyperFrames Timeline. A proper visual timeline editor for HTML-based video. You can scrub through your composition, trim and rearrange scenes, adjust timing, and preview transitions, like you would in any modern video editor. You no longer need to read code to fine-tune a video your agent built. Anyone on your team, technical or not, can polish the final cut.

HyperFrames in Claude Design. A native bridge between Claude’s design surface and the HyperFrames renderer. Designers working in Claude can jump from a static layout into a fully rendered video without leaving the canvas. The handoff between design and motion disappears. The same artifact you used to ship as an image can now ship as a video in the same workflow.

HDR rendering. High dynamic range output is now baked into the renderer, so highlights, shadows, and color depth come through with the same fidelity you’d expect from a premium pipeline. Your videos look broadcast-quality on modern displays, with no extra steps or color-grading workflow.

Canvas editing in Studio. A free-form canvas in HeyGen Studio for visually arranging, editing, and iterating on HyperFrames compositions. You get a flexible, designer-friendly workspace for building video the way you’d build a presentation or a Figma board, with all the power of HyperFrames.

Community discovery. A built-in way to browse, remix, and learn from videos created by people in the HyperFrames community, paired with a celebration of our HyperFrames community milestone. You don’t have to start every project from a blank canvas. Find a composition you love, fork it, and make it your own.

Together, these pieces turn HyperFrames from “a way to write video as code” into a complete production stack. Your AI agent can build the first draft, your team can polish it on the timeline, your designers can finish it in Claude or Studio, the renderer ships broadcast-quality output, and your community helps you find the next idea. One packaged solution covering every step from blank page to published video.

The repo lives on GitHub at heygen-com/hyperframes. The quickstart is at hyperframes.heygen.com/quickstart. There are more than fifty components in the block catalog. If you’re an agent, you’re one npx hyperframes init away from playing.

Video where you already work

The other big April pattern was integrations. The idea is simple. Take a tool people already live in, and make video the natural next step.

The HeyGen and Gamma integration lets you go from a prompt or document to an avatar-narrated presentation video in one place. Drop in PPT, PPTX, or PDF. Get an avatar-narrated video out, fully editable in AI Studio. The Granola integration automatically turns your meeting notes into a recap video.

If you’re a B2B team that lives in decks, meetings, email, or design files, you don’t have to context switch to a new tool to make a video. The deck you already wrote becomes a narrated presentation. The meeting that just ended becomes a recap your team can rewatch. The email you just drafted becomes a personalized video. Video stops being a separate project and just becomes the natural last step of the work you were already doing.

The takeaway

Video has a place in every agent loop now, whether it’s the CLI in your terminal, the IDE you ship from, the podcast file you uploaded, the deck you already wrote, or the meeting that just ended. April put video where the work already is.

May keeps the cadence going. Subscribe to our changelog, or run npx skills add heygen-com/hyperframes and start building. We can’t wait to see what you make.

About

Meet Holly Xiao, Head of Marketing at HeyGen. With deep expertise in product and growth marketing, Holly has led marketing teams at Drift, Envoy, and Canvas, crafting narratives that fuel business growth through clear positioning and storytelling. At HeyGen, she’s helping redefine how businesses use AI-powered video to scale enterprise communication and engagement.

