
Generative AI is magical, but bending the rules will hurt your business

Generative AI is very powerful, but it can be harmful if deployed wrongly. And while thousands have urged a six-month pause on the creation of ‘giant’ AIs, the fear of missing out is pushing companies big and small towards reckless practices.

Written by
Jan Vanalphen

A claim widely made by people and companies on LinkedIn is that integrating GPT models into software applications is as easy as shooting fish in a barrel.

Magic GPT

This is a misleading claim. So mind the gap. 

The claims are based on this chain of thought: because OpenAI’s highly capable GPT models are accessible to anyone through a simple API key, you can now use that powerful capability to underpin your own awe-inspiring use case, or frankly any use case at all, without any actual engineering pedigree involved.

It's a kind of magic: you can just plug it in, prompt it with a simple question, et voilà! That is why so many companies nowadays are selling AI services without having actual AI engineers on their payrolls, advertising that AI deployment is now merely a UX design challenge to be solved.

I understand where this is coming from: generative AI is indeed forcing designers to deal with new UX paradigms. But to remove engineering skills from the equation altogether is misleading and, frankly, outright perilous.

Here is the gap.

A Black Box shaped hole

GPT models generate responses by predicting the most likely next word in a sequence of text. They do this by assigning probabilities to each possible next word, based on patterns learned from the training data.
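The mechanism can be sketched in a few lines of Python. The word list and probabilities below are entirely hypothetical, stand-ins for what a real model computes over tens of thousands of subword tokens; the point is simply that generation means sampling from a probability distribution, so the output is never fully deterministic.

```python
import random

# Toy sketch of next-word prediction. Illustrative only: a real LLM
# computes these probabilities with a neural network over subword tokens,
# and the numbers below are made up.
next_word_probs = {
    "loan": 0.62,
    "mortgage": 0.21,
    "request": 0.12,
    "heist": 0.05,  # unlikely, but never impossible: the model can surprise you
}

def sample_next_word(probs: dict) -> str:
    """Sample the next word in proportion to its assigned probability."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Greedy decoding always picks the single most likely word:
greedy = max(next_word_probs, key=next_word_probs.get)
print(greedy)  # -> loan
```

Note that even the 5% word gets picked from time to time when sampling, which is exactly the unpredictability the next section is about.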

These internal workings of LLMs are what we call ‘black boxes’: while we can observe a deep learning model’s inputs and outputs, it is difficult to understand how the model arrived at a particular result or prediction.

Because of this lack of transparency and interpretability, you cannot control how a deep learning model assigns probabilities to word predictions. That makes the output very unpredictable, and potentially dangerous if the predicted word sequence is harmful to your business.

Anyone who has been integrating LLMs for natural language processing tasks can confirm that the average rate of correct answers from GPT models is too low for enterprise-level applications. So if you want to build an LLM-based application, you will need to guide the model to generate responses that align with desired and reliable outcomes.

Assert control over text generation with Prompt Pipelines

To assert decent levels of control over LLMs, you need to define a domain-specific contextual reference within your prompt and install post-processing mechanisms. Doing all this at scale requires automation of the prompt creation and post-generation process. This type of automation is called a prompt pipeline.

A prompt pipeline is a sequence of steps that typically involves pre-processing the input prompt, generating candidate responses with an LLM, and post-processing those responses to select the most appropriate one. Built well, it becomes a sophisticated system that can generate high-quality responses to user inputs in natural language.

The pre-processing step may involve tokenising the input, classifying intents or normalising the text. The LLM then generates candidate responses based on the input prompt, which are ranked or filtered on criteria such as relevance, coherence or fluency. Finally, the post-processing step selects the best response from the candidate set and formats it for output based on business rules. This may involve additional text generation, rephrasing or editing to ensure that the response is appropriate and understandable to the user.
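As a concrete illustration, here is a minimal sketch of those three stages in Python. Everything here is an assumption for the sake of the example: `generate` stands in for a hypothetical LLM client, and the banned-terms filter and length-based ranking are deliberately naive placeholders for real relevance, coherence and fluency scoring.

```python
import re

BANNED_TERMS = {"guarantee", "legal advice"}  # hypothetical business rules

def preprocess(user_input: str, context: str) -> str:
    """Normalise the text and inject a domain-specific contextual reference."""
    cleaned = re.sub(r"\s+", " ", user_input).strip()
    return f"Context: {context}\nQuestion: {cleaned}\nAnswer:"

def rank(candidates: list) -> list:
    """Drop candidates that break business rules, then prefer longer, more
    complete answers (a naive stand-in for real relevance scoring)."""
    allowed = [c for c in candidates
               if not any(term in c.lower() for term in BANNED_TERMS)]
    return sorted(allowed, key=len, reverse=True)

def postprocess(candidate: str) -> str:
    """Format the selected response for output."""
    text = candidate.strip()
    text = text[0].upper() + text[1:] if text else text
    return text if text.endswith(".") else text + "."

def run_pipeline(user_input, context, generate, n=3):
    prompt = preprocess(user_input, context)
    candidates = generate(prompt, n)      # the LLM call (assumed interface)
    ranked = rank(candidates)
    if not ranked:                        # fall back when nothing passes the rules
        return "Sorry, I cannot help you with that."
    return postprocess(ranked[0])

# Usage with a fake generator standing in for the model:
def fake_generate(prompt, n):
    return [
        "we guarantee a full refund",
        "you can request a refund within 30 days via the support portal",
        "refunds are possible",
    ]

print(run_pipeline("Can I get a refund?", "Refund policy FAQ", fake_generate))
# -> You can request a refund within 30 days via the support portal.
```

Note the fallback path: when every candidate violates a business rule, the pipeline refuses rather than ships a harmful answer, which is precisely the control a raw API call does not give you.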

Walk the walk, don't just talk the talk

A prompt pipeline is an essential component of an LLM-based application. Without one, the model is bound to generate irrelevant or inappropriate responses, leading to a poor user experience and potentially harmful outcomes.

While designers may have a general understanding of these concepts, they typically lack the technical expertise needed to build a robust prompt pipeline. Seasoned AI engineers, in contrast, have the knowledge and experience required to design and implement a prompt pipeline that can handle various types of inputs and generate appropriate responses.

Additionally, prompt pipelines must be able to learn and adapt over time to improve their accuracy and effectiveness. This requires not only technical expertise but also experience in managing and training machine learning models.

The future of AI is multidisciplinary

Some now claim that an AI project is a challenge for non-technical experts to solve, but the reality is far more nuanced. Yes, generative AI is forcing designers to deal with new UX paradigms, and it’s true that UX design has always been woefully undervalued in AI-based software development, rightly calling for deeper design involvement. However, to remove engineering skills from the equation altogether is misleading and dangerous.

The future of AI is one where designers and engineers coexist and work to build safe and magical stuff together.
