The Genius supercomputer
An AI showcase application for the new KU Leuven GPU supercomputer called Genius.
- artificial intelligence
- computer vision
- deep learning
- machine learning
Unleashing the Genius
Some time ago we got word that KU Leuven ICTS was looking for an application to demo the AI capabilities of their new GPU supercomputer called Genius. And who better to build such an application than the newly minted Deep Learning division of an AI prototyping company? That’s right! Brainjar, aiming to bring the iterative and rapid development practices of Craftworkz to cutting-edge deep learning projects, worked with the people at ICTS to create a demo that would showcase the power of their new GPU powerhouse.
The Genius supercomputer
But first, a bit more on Genius: You see, Genius is different from your average, run-of-the-mill supercomputer because it makes use of GPUs rather than CPUs. GPUs, or Graphics Processing Units, were originally designed to handle the large number of parallel computations required to render computer graphics. A single GPU core is slower than a CPU core, but a GPU packs thousands of cores and can run many computations simultaneously, whereas a CPU handles only a few at a time.
This makes GPUs ideally suited for tasks that involve a lot of parallel computation. And one of the main industries (besides video gaming and cryptocurrency mining) that makes eager use of this is artificial intelligence.
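As a rough analogy (running on an ordinary CPU with NumPy, not an actual GPU), compare processing an array one element at a time with applying one bulk operation to the whole array at once. The bulk operation is the spirit of how a GPU spreads identical work across thousands of cores:

```python
import numpy as np

x = np.arange(1_000_000, dtype=np.float64)

# "Serial" style: one element at a time, like a single core working through a queue.
serial = [v * 2.0 + 1.0 for v in x[:5]]

# "Parallel" style: the same operation applied to every element in one bulk call,
# which is how GPU-friendly workloads are typically expressed.
parallel = x * 2.0 + 1.0

print(serial)        # the first five results, computed one by one
print(parallel[:5])  # the same five results from the bulk operation
```

Both paths produce identical numbers; the difference is that the bulk form exposes all one million computations at once, which is exactly the shape of work a GPU accelerates.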
Artificial intelligence, or more precisely, the new and exciting AI subdomain called Deep Learning, works by “training” so-called Neural Networks to perform tasks. How it works is out of scope for this article, but if you're interested, check out our full blog post.
Based on the requirements set out by KU Leuven, we came up with the following idea: a real-time neural style transfer application that lets people see the world rendered in the style of one of six famous paintings. You can check this article if you are interested in the nitty-gritty technical details.
The keyword here is real-time. Neural style transfer itself is already quite feasible, but it typically works on still images, and the process takes a few minutes depending on how big the image is. Even when using our heavy-duty workstation (built to handle our semi-large Neural Networks) and an algorithm called “Fast Neural Style Transfer”, it still takes between 1.7 and 2 seconds for a single image.
This is where Genius really shines: because a single node needs only 0.006 seconds to process a single frame, we can process video at over 160 frames per second, enough to satisfy even the most demanding framerate snobs.
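The back-of-the-envelope arithmetic behind those numbers is just the inverse of the per-frame latency. This sketch restates the figures quoted above (the variable names are ours):

```python
# Per-frame latencies in seconds, taken from the figures quoted above.
workstation_seconds_per_frame = 1.7    # "Fast Neural Style Transfer" on our workstation
genius_seconds_per_frame = 0.006       # a single Genius node

# Throughput is the reciprocal of latency: frames per second = 1 / seconds per frame.
workstation_fps = 1 / workstation_seconds_per_frame
genius_fps = 1 / genius_seconds_per_frame

print(f"Workstation: {workstation_fps:.2f} fps")  # well under 1 fps
print(f"Genius node: {genius_fps:.0f} fps")       # over 160 fps
```

So a single Genius node is roughly 280 times faster per frame than the workstation, which is what turns a slideshow into smooth real-time video.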