The first experiment in our Generative AI Playground was all about turning incomprehensible structured data into captivating, human-readable reports. In our second experiment, we wanted to explore the possibilities of function calling, a groundbreaking GPT feature we already discussed in our whitepaper, The Generative AI Playbook.
In this article, we test the limits of this technology by building an AI assistant that can answer complex, nuanced questions about movies. We chose this use case because a few years ago, long before ChatGPT was a thing, we tried to build such a chatbot, and even though the results were impressive at the time, traditional chatbot technologies simply cannot compete here, especially on questions that require multiple layers of data retrieval and interpretation. We’ll come back to that later; first, let’s cover the theory behind function calling before diving into the specific use case.
Function calling allows AI models to interact with external tools and APIs in a structured manner. Developers describe functions to GPT, and the model can output a JSON object containing the arguments needed to call those functions. The process is dynamic and adaptive: the model decides which function to execute, with which parameters, and at which point in the conversation. The result of each function call is fed back to the language model, which can then decide to invoke another function with new parameters based on those results, and so on. This iterative loop continues until enough information has been gathered to answer the user's complex question.
Simply put, we can get the model to both generate text and execute specific tasks by calling predefined functions. For example, it can convert a user query like “What’s the weather like in Brussels?” into a function call that fetches real-time weather data, and then use that data to formulate an informed response.
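To make this concrete, here is a minimal sketch of what such a function description and call look like with the OpenAI Python SDK. The `get_current_weather` function, its parameters and the model name are our own illustrative assumptions, not part of the Moviebot itself.

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# The tool description we hand to the model. The function itself lives in our
# own code; "get_current_weather" and its parameters are purely illustrative.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name, e.g. Brussels"},
                },
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",  # any function-calling-capable model works here
    messages=[{"role": "user", "content": "What's the weather like in Brussels?"}],
    tools=tools,
)

# Instead of answering in plain text, the model returns a structured call,
# e.g. get_current_weather(city="Brussels"), which our own code then executes.
tool_call = response.choices[0].message.tool_calls[0]
print(tool_call.function.name, tool_call.function.arguments)
```

The key point is that the model never calls anything itself: it only emits the name and arguments, and our code decides whether and how to run the function and what to send back.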
This may seem very technical — and it is — but we promise, we’ll illustrate how it works with a clear use case: our Moviebot. At the end of this article, we’ll also offer some inspiration on how you could apply this technology in other ways.
To test the possibilities of function calling, we decided to create an AI assistant capable of handling any movie-related question. Picking movies as the core of this assistant may seem random, but it isn’t. Why movies? The answer lies in the complexity and richness of movie data: it goes far beyond titles and release dates, covering genres, directors, actors, and more. You can cross-reference all these data points and ask questions that require precise data retrieval and a nuanced understanding of what is being asked. That made movies the perfect playground for this experiment.
Our approach was to integrate GPT with The Movie Database (TMDB) API, a rich source of movie data. This integration allowed us to create a range of functions that could query the TMDB database based on different parameters, such as genre, release year, cast, and crew.
First, we implemented these functions in Python and described them in a standardized Swagger format, ensuring that each function's purpose, inputs, and outputs were clearly defined. This structured approach allowed GPT to understand and effectively use these functions to answer complex queries about movies. For instance, when asking GPT for “an English-language animated movie, released in 1994, featuring Rowan Atkinson”, the chain of function calls should return “The Lion King”.
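As an illustration, this is roughly what one of those Python functions and its machine-readable description could look like. It is a simplified sketch: the real Moviebot has more functions and richer schemas, and names like `discover_movies` are our own choice, although TMDB's `/discover/movie` endpoint and its filter parameters are real.

```python
import os
import requests

TMDB_BASE = "https://api.themoviedb.org/3"
TMDB_KEY = os.environ["TMDB_API_KEY"]  # your own TMDB API key

def discover_movies(genre_id=None, release_year=None, cast_id=None, language=None):
    """Query TMDB's /discover/movie endpoint with optional filters."""
    params = {"api_key": TMDB_KEY}
    if genre_id is not None:
        params["with_genres"] = genre_id             # e.g. 16 = Animation
    if release_year is not None:
        params["primary_release_year"] = release_year
    if cast_id is not None:
        params["with_cast"] = cast_id                # TMDB person id of an actor
    if language is not None:
        params["with_original_language"] = language  # ISO 639-1 code, e.g. "en"
    response = requests.get(f"{TMDB_BASE}/discover/movie", params=params)
    response.raise_for_status()
    return response.json()["results"]

# The matching description handed to GPT, so it knows when and how to call it.
discover_movies_schema = {
    "name": "discover_movies",
    "description": "Find movies filtered by genre, release year, cast member or original language",
    "parameters": {
        "type": "object",
        "properties": {
            "genre_id": {"type": "integer", "description": "TMDB genre id, e.g. 16 for Animation"},
            "release_year": {"type": "integer", "description": "Year of first release"},
            "cast_id": {"type": "integer", "description": "TMDB person id of an actor in the movie"},
            "language": {"type": "string", "description": "ISO 639-1 original language code, e.g. 'en'"},
        },
    },
}
```

The schema is what GPT actually “sees”; the Python function is only executed by our own code once the model has asked for it.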
Moreover, we configured our chatbot so that it would only answer questions related to movies. The result? One of the greatest cinephiles of all time: Oscar, the Moviebot.
So, let’s break down how our Moviebot works exactly by posing it a challenging question: “Name a popular drama released after summer 2022, directed by the same person who directed Inception.”
Lots of parameters to consider, even for a human, so let’s see how the Moviebot handles it, fully autonomously:
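While we can’t show Oscar’s exact internals here, the iterative loop behind a question like this could look roughly like the sketch below. It reuses the `client`, `tools` and `discover_movies` ideas from the earlier snippets and assumes an additional, hypothetical `search_person` helper for looking up who directed Inception; the real prompts and function set may differ.

```python
import json

# Hypothetical registry of the Python functions the model is allowed to call.
# search_person and discover_movies are the TMDB helpers sketched above.
AVAILABLE_FUNCTIONS = {
    "search_person": search_person,
    "discover_movies": discover_movies,
}

messages = [
    {"role": "system", "content": "You are Oscar, a movie assistant. Only answer questions about movies."},
    {"role": "user", "content": "Name a popular drama released after summer 2022, "
                                "directed by the same person who directed Inception."},
]

while True:
    response = client.chat.completions.create(
        model="gpt-4o", messages=messages, tools=tools  # tools = the movie function schemas
    )
    message = response.choices[0].message
    if not message.tool_calls:        # no more lookups needed: this is the final answer
        print(message.content)
        break
    messages.append(message)          # keep the model's tool request in the history
    for call in message.tool_calls:   # e.g. first find Inception's director,
        args = json.loads(call.function.arguments)  # then discover recent dramas by that person
        result = AVAILABLE_FUNCTIONS[call.function.name](**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result),
        })
```

Each pass through the loop gives the model one more piece of the puzzle, until it has enough information to answer in plain text.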
The answer? “Oppenheimer”, a drama that fits all the criteria, proving that our Moviebot can handle complex queries all by itself.
The potential of function calling extends far beyond movies, of course. The ability to translate natural language queries into structured API calls can revolutionize how we interact with data across various fields. Think of the Rabbit R1, which is basically one big implementation of function calling specifically for consumer electronics. Of course, it’s interesting to see how businesses could use this technology as well, so let’s dive into some business use cases:
This list is just the tip of the iceberg. The possibilities are endless, limited only by your imagination and business needs. By partnering with experts in generative AI, you can quickly integrate function calling into your existing systems and significantly reduce your time-to-market. We would argue that function calling is not just a technical update; it's a business enabler that opens up new possibilities for automation. Businesses that understand its potential and integrate it effectively can get ahead of the pack.
Now that we’ve got to the bottom of function calling, we’re ready for our third experiment in our playground… Coming soon.
Written by
Daphné Vermeiren