Generative AI can support researchers and creatives in many areas. However, AI competence is essential for its profitable professional use. This article deals with the question: What is AI competence and how can it be acquired?
Our last blog article from September 2023, on “AI in market research and innovation”, still carried the label “experiment”. The term “integration” would now be more accurate, as generative AI has become a natural tool for us in both the research and the creation process: ChatGPT, Whisper, Midjourney, Firefly, Flux, Runway, Krea, Luma, Suno, and certainly a few more. At the same time, we see that many myths, exaggerated expectations and a lot of bullshit are still circulating around the topic. We are therefore calling for a little more AI competence – and in this article we want to ask what constitutes it and how it can be achieved.
The term “AI competence” is a direct analogy to the long-established concept of “media competence”. Media competence is what we demand when dealing with social media and fake news, what we want to teach our children and what we sometimes admire – or sometimes criticize – in young people. It does not mean skepticism, fear or withdrawal from social media, but – on the contrary – an adult, reflective, critical attitude towards the media: questioning sources, thinking before spreading dubious posts, and learning how algorithms work and what intentions and interests lie behind certain messages.
AI competence can be understood analogously: critically examining which tools can be used where, which are useful and effective, and which are not or call for an extra dose of caution. It makes it possible to use the new tools profitably – and it protects against uncritically buying into narratives about AI. However, it is not yet as developed as we believe our media competence to be. For many, AI is still uncharted territory, and this makes us (still) susceptible to stories and promises that do not stand up to reality and are sold to us partly out of tangible financial interests, partly out of inexperience, and partly for understandable psychological reasons. The attitude opposite to AI competence we would like to call the AI hype.
So what constitutes AI competence? From our experience — that is the perspective of psychological research and creation — there are essentially three simple things that equip us for the professional use of generative AI:

Basic knowledge of how generative AI works and the psychology of dealing with AI
Of course, we have all engaged with this. We have all heard or read about the “stochastic parrot”. However, it is easy to lose sight of what exactly this means. The so-called “hallucinations” of ChatGPT or the production of stereotypes in image-generating programs are not unwanted side effects that will eventually disappear, but a constituent part of the design of generative AI. The output is never based on any kind of understanding of the task; it is always the result of selecting tokens according to probabilities derived from the training data. Without this controlled, probability-based randomness (more or less variable depending on the “temperature”), it would not be “generative” AI, the answers would not be fluent and plausible, and the model would be useless for our purposes.
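For readers who like to see the principle rather than just read about it: the selection mechanism can be sketched in a few lines. This is a deliberately toy illustration – the vocabulary and the scores are invented, and a real model works over tens of thousands of tokens – but it shows how “temperature” controls the randomness of the choice.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Pick the next token from invented model scores ("logits").

    Low temperature sharpens the distribution (near-deterministic output),
    high temperature flattens it (more varied, more 'creative' output).
    """
    # Rescale the scores by the temperature.
    scaled = {tok: score / temperature for tok, score in logits.items()}
    # Softmax: turn scores into probabilities (subtract max for stability).
    m = max(scaled.values())
    exps = {tok: math.exp(s - m) for tok, s in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # The "controlled randomness": draw a token according to its probability.
    r = random.random()
    cumulative = 0.0
    for tok, p in probs.items():
        cumulative += p
        if r < cumulative:
            return tok
    return tok  # fallback for floating-point edge cases
```

At a very low temperature the highest-scoring token is chosen almost every time; at a high temperature even unlikely tokens get their turn. Nowhere in this loop is there any “understanding” of the task – only weighted dice.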
This shows quite precisely where the strengths and weaknesses of generative AI lie. It is an invaluable tool for writing a social media post, because the strengths of controlled randomness can be fully exploited there. In research, the same applies (albeit to a lesser extent) to transcription, summarizing report volumes and many other tasks. It is less suitable, however, for evaluating an in-depth interview, because the focus there is on understanding rather than on probabilities and apparent plausibility, and atmospheric and interpersonal dynamics matter more than the text produced in the interview. It is excellent for producing test material — images or claims — provided the image motifs are not too specific and complex, and provided the image idea comes from you.
It is also important to realize that, given our evolutionary background, we humans are wired in such a way that we are almost automatically fooled by this principle. Because we are human, we instinctively experience AI text productions as “expressions” of a quasi-living being, especially when they come across as fluent and plausible.
AI competence then protects against uncritically “believing” everything ChatGPT produces, for example. It makes us cautious, say, about using the controlled-random principle as a substitute for interviewing people; “synthetic data” is the keyword here. Even if the synthetic “test subjects” are fed with material from real surveys, the principle of controlled randomness and probability applies again when this basis is “extended”. What emerges therefore seems highly plausible to us, and we humans like to fall for the plausible and obvious (also because, as humans, we love meaningful stories, which is what storytelling lives on). But isn’t it precisely surprise, the exceeding of the obvious, that makes research exciting, indeed its very reason for existence? In psychological research in particular, the gold of knowledge usually lies in the deviation, in the unpredicted. That is not about the last 5% of additional knowledge, but usually about the core of the matter.
The same applies to interactive AI avatars or AI personas when they are used in research as a substitute for generating or enriching insights, or for insights-based idea development. If they are to be more than a “talking database” of existing studies, i.e. if they are to play to the strengths of controlled randomness (e.g. through the connection to a trained LLM), they will simply “enrich” the data from the training material in a plausible way. (Plausibility, by the way, is also the basis of the Turing test: does the model’s output sound “like a human”?) You can do this, but you should be aware that it will be difficult to judge how well they simulate reality, even – or especially – when they sound plausible. In any case, it is better not to base important strategic decisions on them.
Part of the logic of the hype is the constant reference to the future: “in the future” or “soon” this or that will be possible (AI probably has this in common with nuclear fusion). In any case, there is currently nothing in sight to suggest that completely new forms of generative AI will overcome the fundamental limitations of the probability principle. The improvements in the first year were rapid. Since then, things have leveled off, and it is impossible to predict whether the threat of AI incest (AI increasingly learning from AI-generated content, sometimes called “model collapse”) will intensify the quality problem that some industries are currently creating for themselves.

Critical evaluation of the added value of generative AI from the perspective of a specific profession
If I want to assess how well generative AI can support me in my professional processes, I don’t just need to familiarize myself with how AI works and its strengths and weaknesses. It is almost more important that I am familiar with the processes that AI is supposed to replace or support! That sounds banal, but it’s not. If I’m not a psychological researcher, I can’t assess how and where it can add value in psychological research. If I’m not a professional creative, I can’t assess whether and how AI can help me create an ad or a commercial.
This point is not trivial either, because quite a few AI experts preach the use of AI even though they understand nothing about the processes for which they want to sell it as a savior. For example, representatives from marketing or market research use the principle of controlled randomness to create arbitrary advertisements that they consider somehow chic and “creative” – without knowing what matters in the creation process. Designers or film producers usually want to produce a very specific result: a very specific mood, a very specific surface composition, a specific facial expression, and so on. Or they have design problems to solve, such as fitting three important messages into a single image that should nevertheless be coherent and not overloaded. An advertising medium should have an effect on the right target group, make the company’s corporate design recognizable, achieve strategic goals and be well received by customers. AI competence, as outlined here, protects against such overreaching fantasies of omnipotence. We wouldn’t even think of explaining to a chemist how she should use AI.

Practical use of generative AI in concrete and real projects
The third and probably most important aspect of AI competence is practice. Just do it: not merely trying things out or “playing around”, but actually using AI in real projects in your own profession (see point 2). Of course, you first have to test a lot of tools and, if necessary, get training in one or another – and given the large number of tools, this can cost money in the long run. Practical, professional use, however, quickly separates the wheat from the chaff.
It would go beyond the scope of this blog post to list all the experiences we have had with AI tools in research and creation. We would like to illustrate the point with the example of film production. And since we have already given our readers enough text at this point, and because it is customary in the field of creation to show results instead of writing about them, we refer to “Eddi’s Journey” (switch on the sound!):
The film presents some of our AI use cases and at the same time “shows” what is possible with AI in film production – and where “conventional” professional programs deliver better results. Eddi himself was created with a 3D program: none of the AI tools could cope with his simple geometric shape, as processing the three spherical segments that make up Eddi would require some kind of design recognition. The Luma software, on the other hand, is unbeatable when it comes to morphing a figure. Here, the probability principle by which one pixel follows another can create beautiful effects.
Other special effects could be generated faster and better with Adobe After Effects. The music is partly AI-generated: from minute 01:08, Suno takes over and “extends” a track of our own. Some of the live-action scenes are AI-generated; for others, it was easier to use a scene from a stock database.
In any case, we are optimistic that AI competence will continue to grow and will soon replace the hype phase for good. Not out of disillusionment – on the contrary: only with a competent attitude can the wonderful possibilities of generative AI be fully exploited and used profitably in companies. 🙂