Open source models are crazy. Alibaba just dropped Wan 2.2 Animate, a new model that can animate any character based on a simple source image. It handles facial expressions and body movements like no other model. 10 insane examples: (sound on)
WAN 2.2 Animate - AI Character Animation and Video Character Replacement
WAN 2.2 Animate is Alibaba's breakthrough 14-billion-parameter AI model, focused on two core tasks: character animation and character replacement in video. Built on an advanced Mixture-of-Experts (MoE) architecture and released as open source under the Apache 2.0 license, it excels in two key capabilities: animating static character photos from reference motion, and seamlessly replacing target characters in existing videos. With strong temporal stability, faithful preservation of natural expressions, and consistent lighting adaptation, WAN 2.2 Animate delivers cinematic-quality character animation and replacement results.
Wan 2.2 Animate YouTube Videos
Watch demos and tutorials showcasing Wan 2.2 Animate's powerful AI character-animation and video-transformation capabilities
WAN 2.2 Animate Highlights on X
See what people are saying about WAN 2.2 Animate on X (Twitter)
Wan 2.2 Animate is CRAZY and it actually excels at 3 things from my tests: 1. Lip syncing (so far the best open source I have seen, beating Runway Act2) 2. Consistent lighting & shadows with color tone replication when you swap a character 3. It keeps the replacement character…
Wan 2.2 Animate Lip syncing Test. Definitely way better than Runway Act2 in my opinion. Takes about 8 minutes for a HD video to be processed at 720p vertical (reels style)
You've probably seen viral character swaps from Wan 2.2 Animate. But now you can swap the character AND background using a reference video + new image. I turned myself into a YouTuber in Paris. How it works 👇
New tool to swap characters in a video: Wan 2.2 Animate Spent a few hours testing it out this weekend and have some thoughts on strengths + weaknesses. It's particularly strong at videos like this where you need to replicate lip sync and body movement. Other tips ⬇️
Wan 2.2 Animate is actually Crazy!! You can replace characters from a simple source image. No need for a first frame anymore. It handles facial expressions and body movements like no other model I have ever seen. It is open source and free to use, that's the crazy part!
pretty much the end for dancing influencers lol… wan 2.2 animate can not only copy crazy camera moves, but it literally mimics body moves and.. even facial expressions accurately like honestly… can you even tell which one’s the real video?
omg... it's over... Hollywood has officially been left behind you can swap out any actor from any film in one click using Higgsfield’s Wan Replace, it even works with intense camera motion now, anyone can achieve Hollywood level motion control using AI here’s how to do it:
Just tried Wan2.2-Animate... and HOLY SMOKES, it's PERFECT! 🤯 @Alibaba_Wan
What used to take hours in After Effects now takes just ONE prompt. Nano Banana, Seedream 4, Wan 2.2, Runway Aleph et al are pioneering instruction-based editing -- collapsing complex VFX pipelines into a single, implicit step. Here's everything you need to know in 10 mins:
What Is WAN 2.2 Animate
Alibaba's advanced 14-billion-parameter character-animation AI model
WAN 2.2 Animate activates 14 billion parameters within a 27-billion-parameter MoE architecture, bringing static character photos to life through its dual-mode capability.
WAN 2.2 Animate Core Features
A look at this character-animation tool's leading technical strengths and creative capabilities
14B-Parameter MoE Architecture
A cutting-edge Mixture-of-Experts design with 27 billion total parameters activates only 14 billion per inference step, keeping inference efficient and GPU memory usage low while delivering top-tier animation quality.
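The idea behind activating only part of the model per step can be illustrated with a generic top-k MoE routing sketch. This is not Wan 2.2's actual architecture or code, just a minimal, hypothetical example of how a gate selects a few experts so most parameters stay idle on any given input:

```python
import numpy as np

def topk_moe_forward(x, experts, gate_w, k=2):
    """Route input x through only the top-k experts by gate score.

    experts: list of (weight, bias) pairs, each acting as a simple linear expert.
    gate_w:  gating matrix mapping x to one logit per expert.
    """
    logits = x @ gate_w                       # one score per expert
    top = np.argsort(logits)[-k:]             # indices of the k highest-scoring experts
    # softmax over only the selected experts' logits
    w = np.exp(logits[top] - logits[top].max())
    w /= w.sum()
    # weighted sum of the active experts' outputs; all other experts are skipped
    return sum(wi * (x @ We + be)
               for wi, (We, be) in zip(w, (experts[i] for i in top)))

# Toy usage: 4 experts, but only 2 run per forward pass.
rng = np.random.default_rng(0)
d = 4
experts = [(rng.standard_normal((d, d)), rng.standard_normal(d)) for _ in range(4)]
gate_w = rng.standard_normal((d, 4))
y = topk_moe_forward(rng.standard_normal(d), experts, gate_w, k=2)
```

With k experts active out of n, the compute and activated-parameter count scale with k rather than n, which is the same trade-off the 14B-active / 27B-total figure describes.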
Dual Animation Modes
Two creative modes are supported: animation mode brings static photos to life from reference motion, while replacement mode seamlessly swaps the character in a video, fully preserving the original scene's lighting and motion dynamics.
Holistic Motion Replication
Accurately captures and transfers full-body movement and facial expressions from the reference video, producing natural, lifelike character animation that faithfully reproduces subtle hand gestures and emotional expression.
High Temporal Stability
Advanced frame-consistency algorithms keep long animation sequences smooth and flicker-free, maintaining coherent character appearance and fluid motion even in demanding scenes with complex movement.
Natural Expression Preservation
Intelligent facial-feature tracking maintains authentic emotional expression and micro-movements, delivering lifelike character animation that preserves the full detail of human expressions, including eye movement and lip sync.
Adaptive Lighting Integration
When replacing a character, the model automatically analyzes and matches the original scene's lighting for seamless visual blending: color tone, shadows, and highlights all stay consistent with the source video's environment.
Apache 2.0 Open Source
Released under the permissive Apache 2.0 license, allowing full commercial use, modification, and distribution. Complete model weights and training recipes are available for custom development and research applications.
Flexible Resolution Support
Generates animation at multiple resolutions, including 480p and 720p options, with videos up to 120 seconds long, suiting the output-quality requirements of diverse creative scenarios and platforms.
How to Use WAN 2.2 Animate
A step-by-step guide to creating character animations with WAN 2.2 Animate
Upload a clear character photo in PNG/JPG format and a reference video in MP4/MOV/AVI format. Match the composition and aspect ratio between inputs for the best animation results.