Are we market researchers, innovation developers and designers still needed at all, or is AI taking over our jobs because it can do them better? Or, conversely, is AI of no real use to us in our professional processes? Or is the best arrangement a happy win/win team? We explored these questions in an experiment.
The discussion about artificial intelligence (AI) ranges from hyped expectations to practical use cases to disappointment. We have been using AI in our projects for about a year and have gained a lot of experience, but we wanted to know exactly where AI can support our work effectively and where relying on HI (human intelligence) is the better choice.
What does “our work” mean? Together with our clients, we develop innovations, prototypes or design manuals based on psychological-creative research. To do this, we work in an interdisciplinary way using the InsightArt mindset we developed ourselves. We think research and innovation together — both merge into one process: innovation-led research and research-led innovation. However, the findings from our experiment can certainly also be applied, at least in part, to research and development processes of a different nature.
As a reference for the experiment, we used several of our projects carried out purely with HI, in "manual work", in which we went through the entire R&D process: psychological research, innovation development, communication and design creation. We then had different generative AIs perform each step: ChatGPT, Neuroflash, Midjourney and Adobe Firefly were used. We performed the tasks in different settings and compared each with the results from the HI projects:
- AI as researcher:
The AI was instructed to provide us with research results on the respective research topic, without being fed any information beforehand
- AI as respondent:
Here, the AI was asked the same questions we asked human respondents in psychological interviews
- AI as uninformed idea developer:
The AI was asked to develop ideas without knowledge of research findings
- AI as a free idea developer:
The AI received information from us beforehand from the research, e.g., persona descriptions, and was asked to freely develop ideas based on this information
- AI as a participant in a creation workshop:
The AI went through a strategic creation process based on our workshop concepts with procedures and techniques developed specifically for the task
- AI as designer:
The AI received a creative brief from the research findings and was asked to develop design ideas for communication and execute them (ChatGPT writes the instructions — the prompt — for the image generation AI Midjourney)
In each setting, the prompting was systematically varied (see box below for details of the test settings).
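Such systematic variation can be thought of as a simple grid of test settings. The sketch below is purely illustrative: the three dimensions (instruction role, amount of background info, temperature) follow the description in the box at the end of this article, but the concrete variant names and temperature values are invented examples, not the ones actually used in the experiment.

```python
from itertools import product

# Illustrative dimensions of the prompt variation (invented examples,
# not the actual values used in the experiment).
instructions = ["researcher", "respondent", "idea_developer"]
background_info = ["none", "personas_only", "full_study_findings"]
temperatures = [0.2, 0.7, 1.0]

# Every combination of instruction x background info x temperature
# becomes one test setting whose output is compared with the HI results.
settings = [
    {"instruction": i, "background": b, "temperature": t}
    for i, b, t in product(instructions, background_info, temperatures)
]

print(len(settings))  # 3 * 3 * 3 = 27 settings
```

The point of the grid is simply that every prompt is run under comparable, repeatable conditions, so differences in output can be traced back to a single varied factor.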
AI as a researcher
Do we still need to do our own research? The results that ChatGPT spat out, line by line and in bullet points, weren't that far off from our own. Had the AI perhaps been trained on our research results?
At second glance, however, it became apparent that merely listing the needs and motives for why someone would buy insurance for his or her cell phone, or book a health vacation, didn't help us. It was not apparent which of the needs is essential and which is perhaps just a sub-item of another need. Compared to our results, the underlying psychological core was completely missing, which, moreover, can only be determined from the context of the zeitgeist.
Nor was it possible to identify the psychological structure that is essential for us: What are the conflicting motives? What are the dynamics and turning points at which approval of the product turns into opposition? Also missing were atmospheric qualities and a fit with the cultural context. All of these fundamental "insights" for understanding through psychological analysis could not be found in the bullet point list.
Conclusion: plausible and not entirely wrong, but too superficial. An internet search on the topic would probably have given us similar results. We often do such a preliminary investigation to get into a topic before writing the Guideline for the interviews with humans. Doing this internet search ourselves takes much more time, but has the advantage that we know what sources the information comes from. ChatGPT simply picks the most likely answers from somewhere. If the AI has been fed a lot of reputable data from existing studies on the topic, the results are good and can replace a time-consuming preliminary search of our own. In innovation research, however, it often happens that one researches a completely new topic for which there is not yet much reliable knowledge.
AI as respondent
In this setting, the shortcomings were apparent on first reading, somehow between the lines. We heard something similar from the respondents in a study on AI journalism that we conducted last year: "Somehow … so loveless" was one response, or "This was definitely written by someone who wasn't at the event himself, so it's copied from hearsay". An example: ChatGPT was given the findings on a type of vacationer and asked, "Put yourself in the shoes of the 'Dissatisfied' and describe how you feel about life." The answer:
“My attitude toward life is characterized by a consistent discrepancy between current life circumstances and personal expectations or desires. It manifests itself in a feeling of incompleteness, a permanent dissatisfaction with external factors. There is a desire for change, but often without a clear path to its realization.”
All true. But: Nobody talks in such a stilted way!
Compared with the answers from the real conversations on the same question, we noticed that the human respondents tended to use more metaphors, often thought back and forth, and rephrased what they had said before, as if they were physically and sensually reflecting on their feelings at the very moment of describing them. This prompted us to give ChatGPT the same task again, but this time to ask for the use of metaphors.
Here's part of the result. Only part, because ChatGPT simply wouldn't stop …
“My life feeling is like I’m constantly walking in shoes that are too tight, while navigating a forest of disapproving statues that constantly raise their fingers. My interior resembles an abandoned library, where dust and shadows overlay once-vibrant stories. The days pass like endless train cars that I view from the platform of a train station without ever setting foot inside…”
If you would like to read an interesting book about the complex human achievement of finding the right metaphors, analogies or idioms, we recommend Douglas Hofstadter and Emmanuel Sander's "Surfaces and Essences: Analogy as the Fuel and Fire of Thinking". ChatGPT manages some metaphors quite well, because it can convincingly simulate having that fuel and fire of thinking. But with longer texts one notices: it has not experienced, perceived or acted itself; it has only recombined and varied word creations, and one senses this, somehow, between the lines.
Conclusion: AI is a good simulator, but it is not good for authentic descriptions of experiences, actions and feelings, because a computer program has no body, no senses and no feelings, even if it can often feign them well. This helps us in the psychological analysis of AIs, but not in that of human experience and behavior.
AI as an uninformed idea developer
As a benchmark for this task, we used a list of 30 ideas, described in detail, that we had developed for a client on a specific topic. These were already sorted into: small improvements, big improvements, and disruptive ideas. ChatGPT mainly named ideas that already existed on the market (we had also done extensive background research), and one or two ideas corresponded to our small improvements, e.g.: "Personal connectivity advisor: each customer receives a personal advisor who helps find the best offer based on the individual usage profile." Verivox says hello.
In addition, ideas came up that we hadn't listed because they contradicted the research findings, such as: "Integrated virtual reality communication: instead of a simple video call, customers could put on VR glasses during the customer consultation." After all, telephone, email and the like were invented precisely so that you don't have to talk face to face with people, such as an agent, with whom you don't want a personal relationship. Even the video call had already proved critical in our research; in three dimensions it would likely be even more so.
Conclusion: not much help, because the ideas created are mostly obvious and conventional, or they miss the needs of the consumers.
AI as a free idea developer
In this task, the AI received the results of the real psychological study in advance and was supposed to develop ideas on this basis, i.e., similar source material to what we ourselves had when developing ideas. For the most part, the ideas no longer missed the needs, but they often took up the information from the study in a rather one-dimensional way, e.g., for the stressed health vacationer: "Forest yoga and meditation for relaxation: Special platforms or clearings in the forest reserved for yoga and meditation sessions." The forests around spas are already half-cleared: that's how many such yoga platforms already exist there.
Conclusion: the ideas are too obvious and show little creative elaboration. Often they simply restate an insight as an idea. What was interesting for us here was that feeding the AI results from the study apparently limits its creativity, similar to what we know about human participants in creation workshops. We therefore addressed this problem in the next setting.
AI as a participant in a creation workshop
Humans are creatures of habit and rely on familiar thinking patterns. This also has great advantages, because relying on the tried and true is safer and usually faster; routines are cognitively economical. However, it prevents outside-the-box thinking. For this we use procedures for so-called "creative destruction" in creation workshops. One such method, developed by us and successfully tested in many workshops, is our "Nimmo" technique. We played this through with ChatGPT, first on a fictional example: improve a washing machine by first listing the advantages of an apple compared to a washing machine and then transferring them back to the washing machine, thereby arriving at new ideas. For verification, we created such a list ourselves using HI.
ChatGPT's first reaction was a positive surprise: while human participants are often irritated at first, and some consider it nonsense to compare apples with washing machines, ChatGPT was immediately willing to cooperate. Here an advantage of the AI becomes apparent: it has no limiting thinking patterns, as long as you don't give it any, as we did with the research results in the previous setting. At first, the ideas for the benefits of the apple were still relatively obvious and covered only a portion of the ideas on our list. But when we asked the AI to look for more far-fetched ideas, it easily surpassed our list.
However, the second part of the task, developing ideas for a better washing machine from the advantages of the apple, drifted too far into the fanciful and lacked a connection to reality, e.g.: "Mysticism and folklore (apple in legends): A washing machine developed in collaboration with storytellers and designers to integrate stories and myths into its function and design, so that every wash is a journey to another world." We then asked the AI tools to come up with more practical ideas. Technically practical ideas seem to suit them better; this one, for example, was not bad:
“Energy and water efficiency: using advanced sensors and algorithms, the machine can optimize water and energy consumption based on the current load, saving resources and reducing operating costs.”
As a next step, we again took a real-world project as a reference, where the AI already knew the findings. The strategic approach using creative techniques worked well in this case as well. The knowledge of the findings no longer had a limiting effect.
Conclusion: AI proves to be a good participant for creation workshops. It does not need to be "creatively destroyed", but is willing to participate immediately. However, the creation strategy, i.e., the steps in which it should proceed, must be given to it. Knowledge of the research findings then no longer has a limiting effect, and not only conventional but quite fanciful ideas are created. Regarding the Nimmo technique, ChatGPT couldn't help but remark: "Interesting challenge! So we use the advantages of the apple as inspiration to develop a better washing machine or laundry washing process", including a wink smiley. Here, of course, it is simulating again, because software cannot actually find anything interesting.
AI as designer
The AI already knew the study insights about the health vacation and was supposed to develop an image idea on this basis. It made a genuine effort to visualize the findings from the study, producing, for example, this result (already shortened here):
“A person (perhaps in back view, so that the target group can identify with the person) is lying on a comfortable lounger surrounded by soft cushions, and a drink or a book could lie next to him, symbolic of relaxation and ‘letting yourself be pampered.’ Next to the lounger could be a small table with information brochures or books on various ‘alternatives for a different everyday life’. In the center field: a path or way leading away from the lounger to a hidden, only half-visible door or gate — this symbolizes the possibility of diving into the depths of one’s own self. In the background, an atmospherically dense and haunting landscape. Conceivable would be a forest enveloped by a gentle mist, with tall, old trees, a mysteriously shimmering lake and a few implied silhouettes of people, which emphasizes ‘being alone or in a small community.’”
In theory everything is included, everything has been thought of. But not only was Midjourney completely overwhelmed by the realization; every human designer would have been too, and so would the viewer, because with all the things that are supposed to appear in a single picture here, it is more like a hidden object challenge. In addition, the cushions do not fit atmospherically with the misty, mysterious forest, and a mystical gate into the underworld of the soul in the middle of the landscape simply cannot be reconciled, in look and feel, with a pile of information brochures.
Other image ideas and the subsequent executions by Midjourney turned out less crowded and inconsistent, but they were rather stereotypical and not very creative or original, just like those already seen in a thousand ads for health resorts. Or they didn't quite fit the needs, especially when rather subliminal image messages were meant to hint at something only lightly or portray subtle nuances, as in the case of an expectantly joyful yet hesitant facial expression.
Incidentally, the task of developing ideas for our cartoon "Wissbert, the market researcher", which is published in the marketing magazine "Planung & Analyse", proved similarly difficult for the AI: as unintentionally funny as the earnestly meant metaphorical descriptions were that ChatGPT presented as a "test person", the ideas for stories that were actually supposed to be humorous were just as unfunny. They were, at best, trying to be funny, and often in such a way that the scene could not even be converted into a cartoon image.
Conclusion: in the development of design ideas, the same deficit appears as with the metaphors. The AI lacks sensory imagination. It basically puzzles together correctly understood pieces of information without a feeling for aesthetic coherence and without any idea of how the whole thing is supposed to fit into a single image, which also has to accommodate the logo and other picture elements competing for attention.
At most, Midjourney and Firefly offer support as a kind of "assistant graphic artist" for the realization of one's own picture idea. At present, however, it is usually still necessary to edit the pictures generated by the AI in Photoshop, or to have the AI generate the individual picture elements such as protagonists, background or objects and then compose them into a coherent picture in Photoshop yourself. It remains to be seen whether future generations of image-generating AI, such as DALL·E 3, which was announced at the time of publication of this article, will be better able to generate such complex compositions via text prompt. As is well known, a picture says (and needs) more than 1000 words.
Humans and AI as a Win/Win Team
The little experiment did indeed give us more clarity. Summarized, the result is actually quite simple.
Positive — and we will use this more often in a specific way in the future:
👍 Research and brainstorming
The large pool of knowledge and the AI's ability to quickly connect information and prepare it in the required form can support a preliminary internet search, as long as the topic is not a completely new product. One should remain skeptical about whether the information is correct, because the sources are unknown. Preliminary internet search and topical brainstorming help to create the Guideline for conversations with real people.
👍 Fanciful collection of ideas
The AI also proves to be a valuable participant in a creation workshop. It is open to any task, no matter how unusual, in the development of ideas and can produce very fanciful creations. However, you have to give it a well thought-out strategy to elicit its expertise in coming up with unusual yet insight-based ideas.
👍 Image generation following your instructions
The same applies to generating your own (i.e., human) image ideas. Especially in Design Guide exploration, you need a variety of different image motifs as test and trigger material. AI makes you a little less dependent on stock photos, saves you some time and allows you to realize one or another image idea that would otherwise be much more time-consuming to assemble completely yourself in Photoshop. However, it can no more take over the systematic selection of pictures on the basis of the research findings than it can take over the development of the image ideas themselves.
Limits show up clearly in all abilities that require lived, physical and sensory experience and intentionality. Here one should be cautious and not fall for the (sometimes good) simulation the tools provide:
Human input is essential if you really want to know how people experience, imagine and feel something (and that is what we want in psychological research), and if you want to fundamentally and deeply understand why people behave the way they do. The deficit that AI ultimately only simulates human characteristics is particularly evident in creating design ideas, but likewise in the lack of reality in creating product ideas: a washing machine made of bamboo may meet the criteria of sustainability, but it will certainly fail due to some laws of physics.
The AI also doesn't notice whether something is coherent, e.g., whether respondents express themselves in a somehow strange way and whether this is a hint to dig deeper in order to get to the bottom of what is actually behind a verbalized need. It does not know what makes a good idea and does not recognize whether an idea is actually a solution to a problem. Here it also lacks important human qualities: a vision, the vague notion of an ingenious solution, and the euphorically urgent drive to seek it.
The AI has no intention of finding a good solution. It lacks the joy and pride of having jointly grown beyond your limits as a team, followed by the passion to keep at it and push the idea further. Would you want to hand that over to the AI? It wouldn't make much sense anyway, because you won't get far with a mere collection of ideas: there is still a long way to go before you have a good and coherent innovation that also evokes enthusiasm among consumers. But AI can be a valuable companion along the way, providing inspiration, helping to overcome barriers to thinking, and accelerating the process without compromising quality.
Additional information about the test setting
For the generation of texts, ChatGPT based on GPT-3.5 and (mainly) GPT-4 from the company OpenAI was used
The prompts for ChatGPT (written text instructions) were systematically varied in terms of instruction, the amount of background info given, and "temperature": the temperature (values between 0 and 1) influences the randomness, and thus the perceived "emotionality", of the response
For image generation, Midjourney version 5.2 and Adobe Firefly were used: Adobe Firefly has recently become part of Adobe Photoshop
Neuroflash was also used for both text and image generation: Neuroflash is a German-language AI-powered software for automatic text and image generation that offers additional features such as automated creation of social media posts
The reference projects used for comparison were current projects from the media, telecommunications and tourism sectors
The experiment was conducted in September 2023
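For readers unfamiliar with the "temperature" parameter mentioned above, its effect can be illustrated with a small, self-contained sketch: temperature rescales the model's next-word probabilities before sampling, so low values make the most likely continuation dominate (predictable, "sober" output), while values near 1 leave the distribution broad. The toy probabilities below are invented purely for illustration.

```python
import math

def apply_temperature(probs, temperature):
    """Rescale a probability distribution by a sampling temperature.

    Low temperature sharpens the distribution (more predictable output);
    temperature 1.0 leaves it unchanged.
    """
    logits = [math.log(p) / temperature for p in probs]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-word distribution (invented for illustration)
probs = [0.6, 0.3, 0.1]

cold = apply_temperature(probs, 0.2)  # near-deterministic: top word dominates
warm = apply_temperature(probs, 1.0)  # unchanged: full original spread

print(cold[0])  # the top probability grows well beyond its original 0.6
```

This is why low-temperature answers read as flat and repetitive, while higher settings produce the more varied (and occasionally rambling) responses described in the experiment.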
This text was written in German without the help of AI. The translation into English was done automatically with the help of AI and not checked by native speakers. The cover image (collage) may contain small traces of AI (and a lot of hand work in Photoshop). The chapter dividers are 100% human made and free of synthetic ingredients.