Meet Jan Jambon's Digital Twin

The challenge

To prove that Flanders is at the forefront of innovation, Flanders Investment & Trade asked our venture nocomputer to build a digital twin of none other than our Minister-President, Jan Jambon, capable of interacting, entirely unscripted, with the real Jambon on stage at the Flanders International Business Awards. To make the twin sound, look, move, and behave like Minister Jambon, we built a technology pipeline that combined several cutting-edge technologies, including voice cloning, GPT-steered dialogue, and a 3D model that was his spitting image.

3D model

The cornerstone of any digital twin is undoubtedly its visual representation: the 3D model of the replicated individual. As a basis for the model, our team captured several images of Jambon, in both profile and front view. Our 3D modeler Kurt then got to work, first creating the base mesh, a rough 3D model of Jambon's face, using only seven pictures as reference for a hyper-realistic likeness. To refine the digital twin, Kurt relied on two essential tools: Blender combined with MetaHuman, Epic Games' digital-human framework for Unreal Engine. These are currently among the best tools for producing hyper-realistic digital twins.

The process started in Blender, where Kurt mapped out the key facial features. He then moved on to texture painting and sculpting, adding Jan Jambon's distinctive characteristics to the 3D model. To finish the model, Kurt replaced the generic MetaHuman hair sets with custom assets, specifically groomed in Blender to match Jan's hairstyle. After exporting the hair set to Unreal Engine and fitting it to the digital twin, Kurt refined it further and applied physics so the hair would move realistically.

Voice cloning

Once we had the 3D model, it was time to bring it to life by making it talk. The first step? Cloning Minister Jambon's voice. In the past, cloning a voice required complex, custom AI models, especially for Dutch speakers, and such custom models could not respond in real time, which our use case demanded. So we decided to use ElevenLabs, a tool that can clone voices and synthesize speech in real time. The limitation was that the clone had to speak English, as the tool did not yet support Flemish. Luckily, during development of this project, ElevenLabs released Dutch voice cloning, much sooner than we had expected. This release allowed us to change the concept completely and build a Dutch-speaking, multilingual clone.

So we got to work and gathered the necessary audio input by cleaning up recordings from the Flemish television show De Zevende Dag, supplemented with data we captured in a travelling studio set up in the minister's office. Once we had our data, we cloned Jan's voice and polished it with some fine-tuning. The result? Stunningly accurate. All that remained was a brain to come up with the right things to say.
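For illustration, here is a minimal sketch of how a cloned voice might be queried through the ElevenLabs text-to-speech REST API. The voice ID, API key, and voice settings are placeholder assumptions, not the project's actual values; `eleven_multilingual_v2` is ElevenLabs' multilingual model, which supports Dutch.

```python
# Sketch: assembling a request for the ElevenLabs TTS endpoint.
# Voice ID, API key, and settings below are illustrative placeholders.

ELEVENLABS_TTS_URL = "https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"

def build_tts_request(voice_id: str, api_key: str, text: str):
    """Assemble URL, headers, and JSON body for a text-to-speech call."""
    url = ELEVENLABS_TTS_URL.format(voice_id=voice_id)
    headers = {"xi-api-key": api_key, "Content-Type": "application/json"}
    body = {
        "text": text,
        "model_id": "eleven_multilingual_v2",  # multilingual model, handles Dutch
        "voice_settings": {"stability": 0.5, "similarity_boost": 0.8},
    }
    return url, headers, body

# In production, the body would be POSTed (e.g. with requests.post) and the
# returned audio bytes streamed onward to the renderer.
url, headers, body = build_tts_request(
    "JAN_VOICE_ID", "API_KEY", "Goedemiddag allemaal!"
)
```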

AI-powered, real-time conversations

For the clone to hold real-time conversations, we added GPT to our pipeline. This way, the digital twin would not just look and move like Jambon but also act like him. We prompted GPT with guidelines on what Jambon would likely say and how he would behave, ensuring the AI's responses aligned with his personality.

In this case, careful prompting was critical: because we cloned a key political figure, we had to make sure the clone never said anything (politically) inappropriate. We tested this thoroughly, and GPT never strayed from its objective. It was truly unwavering, thanks to some strong prompt engineering.
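To give an idea of this kind of guardrail prompting, here is a hypothetical sketch. The persona text and rules are illustrative stand-ins, not the prompts actually used for the project; the system message pins the persona and the behavioral boundaries before any user input is seen.

```python
# Illustrative guardrail prompt for a persona-constrained chat pipeline.
# The wording is a hypothetical example, not the project's real prompt.

GUARDRAIL_PROMPT = (
    "You are a digital twin of Flemish Minister-President Jan Jambon. "
    "Stay in character, answer in the language of the question, and "
    "politely deflect any request for political statements, predictions, "
    "or commitments that the real minister has not publicly made."
)

def build_messages(history: list, user_input: str) -> list:
    """Prepend the guardrail system prompt to the running conversation."""
    return [
        {"role": "system", "content": GUARDRAIL_PROMPT},
        *history,
        {"role": "user", "content": user_input},
    ]

messages = build_messages([], "Wat vindt u van de begroting?")
# These messages would then be sent to a chat-completion endpoint, e.g.
# client.chat.completions.create(model="gpt-4", messages=messages)
```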

The finishing touches

To make it all come to life, we had to add expressions and movement. For natural body movement, we used motion capture: a team member wore a mocap suit and mimicked movements, which were then mapped onto our digital Jambon. This way, he could walk on stage (on screen, of course) and look at the audience naturally. We also added some recurring idle movements so he would never appear static.

Facial expressions

For the facial expressions, we used cutting-edge AI technology: NVIDIA's Audio2Face. This program analyzes the voice and generates corresponding facial expressions and lip movements. To get the emotions just right, we tweaked the generated expressions by hand. This ensured our digital twin didn't just sound real but actually felt real.

The takeaway

Well, twinning is winning, and this twin definitely won. With this sophisticated tech pipeline, we brought the MetaHuman to life at the Flanders International Business Awards for an unscripted on-stage conversation with the real Jan Jambon. Beyond the stage performance, all attendees could visit our booth and have an entirely unscripted conversation with the twin themselves.

Since our booth was in a large open space with a lot of background noise, we equipped the installation with headphones, a microphone, and a push-to-talk button so that guests could ask their burning questions and listen to "Jan's" answers. It illustrates how technology, implemented well, can enhance human interaction in an impactful way.
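The booth setup described above can be sketched as a simple push-to-talk loop. All four helper functions here are hypothetical stand-ins for the real capture, speech-to-text, GPT, and voice-cloning components, passed in as callables so the wiring is clear without guessing at any vendor API.

```python
# Illustrative push-to-talk loop for a booth installation. The helpers
# (record_while_pressed, transcribe, generate_reply, speak) are hypothetical
# stand-ins for the actual mic capture, STT, GPT, and TTS components.

def booth_interaction(record_while_pressed, transcribe, generate_reply, speak):
    """One button-press cycle: capture audio, answer, play it back."""
    audio = record_while_pressed()      # mic is gated by the button press
    question = transcribe(audio)        # speech-to-text on the captured audio
    answer = generate_reply(question)   # GPT with the persona guardrail prompt
    speak(answer)                       # cloned voice played over headphones
    return question, answer

# Stub demo wiring, replacing each component with a trivial lambda:
q, a = booth_interaction(
    lambda: b"\x00\x01",
    lambda audio: "Wat doet FIT?",
    lambda question: "FIT helpt Vlaamse bedrijven internationaal groeien.",
    lambda text: None,
)
```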
