FACSvatar
Notes 2023-01-12
- I wanted to simplify the quick start and smooth some rough edges before releasing version v0.4.0, but life held me up and I never merged this into the master/main branch. This version is better than v0.3.4 in every aspect, so I'm merging it after all in its current state.
- Version v0.5.0 is on its way though (no release date yet). Small spoilers:
- Python version 3.10+
- Support for Blender 3.3 LTS (2023-06-17: updated the FACSvatar-Blender add-on to support Blender LTS v3.3 & v3.6)
- Will have a proper GUI (written in Vue3, a JavaScript framework) that communicates with Python.
FACSvatar is an Open Source Modular Framework for Real-Time FACS-Based Facial Animation
Or in plain English:
Track facial expressions with any software and visualize that data on any avatar in real-time, powered by the FACS representation. No more need to modify your avatar to support your tracking software. All written in your favorite programming language, on any OS, and across machines.
Muscle image source.
- Facial Action Coding System (FACS): A description of how muscle groups in the human face contract/relax to make any facial configuration possible (learn more).
- Action Unit (AU): The strength of contraction of a single muscle group.
- Modular: Software and OS independent. You only need to know what data goes in and what comes out.
- Extendable: Write your code, add a ZeroMQ message socket, and let it talk to other modules (see the sketch below this list).
- Real-time: Create lively avatars that respond to your user.
- Machine/Deep Learning: Input/output "data-fied" facial configurations, i.e. facial expressions as plain numeric AU values.
(Demo video: https://www.youtube.com/watch?v=J2FvrIl-ypU)
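To make "module" concrete, here is a minimal sketch of what a FACSvatar-style module could look like, assuming the framework's ZeroMQ PUB/SUB pattern with JSON payloads. The ports, address, and AU message shape below are illustrative assumptions, not FACSvatar's actual defaults:

```python
# Minimal sketch of a FACSvatar-style module (assumptions: ZeroMQ PUB/SUB
# with JSON payloads; ports, address, and message shape are illustrative).
import json
import zmq

ctx = zmq.Context()

# Receive AU data from an upstream module (e.g. a tracker bridge)
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5571")       # hypothetical upstream address
sub.setsockopt_string(zmq.SUBSCRIBE, "")  # no topic filter: receive everything

# Publish the (possibly transformed) data for downstream modules (e.g. Unity3D)
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5572")                  # hypothetical downstream port

while True:
    data = json.loads(sub.recv_string())  # e.g. {"au": {"AU06": 0.8, "AU12": 0.95}}
    # ...any processing here: smoothing, blending, ML inference...
    pub.send_string(json.dumps(data))
```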
Message to:
- Animators: Copy facial expressions from a video/webcam to your avatar.
- Affective Computing: Enable Human-Agent Interaction (HAI) by feeding your human analysis into an ML model, outputting FACS values, and having your Embodied Conversational Agent (ECA) display them (toy sketch after this list).
- Psychologists: Create stimuli with the same facial configurations across avatars of different sex, age and ethnicity.
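As a toy illustration of the ML angle above: because AUs are just named intensities, a facial configuration is a fixed-length numeric vector that any ML library can consume or produce. The AU subset and values here are made up for the example:

```python
# Toy example: AU intensities as an ML feature vector.
# The AU subset and values are illustrative, not from FACSvatar.
import numpy as np

AU_NAMES = ["AU01", "AU02", "AU04", "AU06", "AU12"]  # small subset of FACS AUs

def aus_to_vector(au_dict):
    """Order an {AU name: intensity} dict into a fixed-length feature vector."""
    return np.array([au_dict.get(name, 0.0) for name in AU_NAMES])

frame = {"AU06": 0.8, "AU12": 0.95}  # cheek raiser + lip corner puller: a smile
x = aus_to_vector(frame)             # -> array([0.  , 0.  , 0.  , 0.8 , 0.95])
```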
FACSvatar is already operable with:

- Tracking software:
  - OpenFace: Extract facial AUs from videos/webcam.
- Visualization software:
  - Blender 2.80+ (MB-Lab 3D avatar generation)
  - Unity3D
  - FACSHuman
- Modules for additional data processing, allowing m trackers-to-n avatars (see the modules folder; a routing sketch follows this list)
- ZeroMQ: This framework's glue, allowing modules to communicate with each other.
- Containerization with Docker to run FACSvatar modules everywhere.
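A sketch of how m-to-n routing can work under the PUB/SUB assumption above: ZeroMQ subscriptions are prefix filters, so an avatar-side module can choose which tracker streams drive it. The topic names and the "topic then JSON in one string" convention are hypothetical:

```python
# Hypothetical m-trackers-to-n-avatars routing via ZeroMQ topic filters.
# Assumes publishers prefix each message string with a topic; names are made up.
import zmq

ctx = zmq.Context()
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5571")            # hypothetical bridge address

# This avatar module only animates from tracker A's stream:
sub.setsockopt_string(zmq.SUBSCRIBE, "tracker_a")

msg = sub.recv_string()                        # e.g. 'tracker_a {"au": {"AU06": 0.8}}'
topic, payload = msg.split(" ", 1)             # split topic from the JSON payload
```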
Disclaimers: This is an open-source project, hopefully flexible enough for your facial animation needs. This is not software supported by a company / commercially, but by users like you. If you need some new capability, you will likely have to code it yourself (or ask/hire someone), but questions for guidance are always welcome (make a GitHub issue)! For commercial usage, please check the license page. Read more about FACSvatar's limitations (TODO doc link).
Read the Docs: https://facsvatar.readthedocs.io/
Please cite the following paper when using this framework in a paper:
- DOI: https://doi.org/10.1145/3267851.3267918
- ISBN: 978-1-4503-6013-5/18/11
- COMPLETE re-write of the documentation: Check it out!
- Python modules:
- Standardization pass over all modules / code clean-up
- Consistency fix: ROUTER / DEALER sockets use JSON formatted data
- Docstring per class and function
- Logger instead of print() statements
- Debug option to enable the logger (see the sketch after this list)
- File structure for proper import of modules / pip?
- Use config file (in addition to command line arguments) + config filepath argument
- Easy run: Docker container per module + Docker Compose
- Demo video
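As a hedged sketch of what the planned logger-plus-debug-flag items above might look like in a module (the --debug flag and logger name are assumptions for illustration, not existing FACSvatar options):

```python
# Sketch of the planned "logger instead of print()" pattern.
# The --debug flag and logger name are assumptions for illustration.
import argparse
import logging

parser = argparse.ArgumentParser()
parser.add_argument("--debug", action="store_true", help="enable debug logging")
args = parser.parse_args()

logging.basicConfig(level=logging.DEBUG if args.debug else logging.INFO)
logger = logging.getLogger("facsvatar.module")

logger.info("module started")             # replaces a bare print()
logger.debug("config: %s", vars(args))    # only visible when --debug is passed
```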
FACSvatar is tested on Ubuntu and Windows, but should also work on macOS.
This quickstart has 2 parts:
- Start FACSvatar modules using Docker - modules in containers (see here for Python instructions)
- Visualize in Unity3D or Blender
Downloads - Go to the release page of this GitHub repo and download:

- (Real-time only) openface_2.1.0_zeromq.zip
  - Unzip it and execute download_models.sh or .ps1 to download the trained models
- Windows 7 / 8 / 10 Home version <2004: unity_FACSvatar_standalone_docker-ip.zip
- Windows 10 Home v2004+ / Pro / Enterprise / Education: unity_FACSvatar_standalone.zip
- Windows / Linux / Mac: Unity3D editor (documentation)
- Source code (zip / tar.gz), or clone this repository with git clone https://github.com/NumesSanguis/FACSvatar.git, or press the green "Clone or Download" button on this page --> Download ZIP
Docker Install - Lets you execute applications without worrying about OS or programming language.
- General Docker instructions
- Docker Toolbox for Windows 7/8/10 Home version <2004
- Docker for Windows 10 Home v2004+
- Docker for Windows 10 Pro, Enterprise or Education
- Ubuntu: Docker and docker-compose, plus run sudo usermod -a -G docker $USER
Docker Modules - Open a terminal (W7/8: cmd.exe / W10: PowerShell), navigate to the folder FACSvatar/modules, then execute:

- docker-compose pull (downloads the FACSvatar Docker containers)
- docker-compose up (starts the downloaded Docker containers)
See visualization engine instructions
- Open a 2nd terminal in the folder FACSvatar/modules and execute: docker-compose exec facsvatar_facsfromcsv bash
- Inside the Docker container, start the facial animation with: python main.py --pub_ip facsvatar_bridge
- Navigate inside the folder openface_x.x.x_zeromq
- (Windows 7/8/10 Home version <2004 only) Get the Docker machine IP by opening a 2nd terminal and executing: docker-machine ip (likely 192.168.99.100)
- (Windows 7/8/10 Home version <2004 only) Open config.xml, change <IP>127.0.0.1</IP> to <IP>machine ip from the previous step</IP> (e.g. <IP>192.168.99.100</IP>), then save and close.
- Double-click OpenFaceOffline.exe -> menu: File -> Open Webcam
Tested on version: 2018.2.20f1

- Open the folder unity_FACSvatar as a project with Unity3D
- Press play (now it's waiting for facial data)

OR (Windows-only TODO):

- Navigate inside the unzipped folder unity_FACSvatar_standalone(_docker-ip) and double-click unity_FACSvatar.exe

Extra: Use the numbers 0, 1, 2 on your keyboard to change the camera.
Follow the instructions here: https://github.com/NumesSanguis/FACSvatar-Blender
See the quickstart video (:warning: note that the Blender script part is outdated (from 15:15) due to the new FACSvatar Blender add-on):