Conference Insights: An interview with Ray Poynter at ESOMAR's The Art & Science of Innovation
The Society of Innovators at Purdue Northwest partnered with ESOMAR's The Art & Science of Innovation, held in Chicago June 15-17, 2024. The international conference serves as a platform for insights professionals from various industries to come together and share ideas, tools, and methodologies that drive the future of market research and innovation.
In an interview with Ray Poynter, president of ESOMAR, we discussed this year's event, which placed a significant emphasis on the role of artificial intelligence (AI) in driving new ideas and evaluating innovations. Ray highlighted the importance of understanding and leveraging AI tools to democratize innovation and insights generation. He also touched on the evolving nature of AI, its impact on the research industry, and the future of education in adapting to these technological changes.
Can you please introduce yourself?
I am Ray Poynter. I’ve spent 47 years in the insights research industry and currently serve as president of ESOMAR, a global association for market research and insights. There are domestic insights associations in several countries, but we add the international dimension. What can we learn across cultures from different countries? If you want to do something in five countries, what do you have to change to make it work?
What is ESOMAR’s The Art & Science of Innovation?
We hosted the first event in Singapore, and this is the second in the series. It really focuses on two elements. One is pure innovation. We've had some great presentations from people whose job is coming up with new ideas, new products, and new services. The second element is AI. It is an innovation creation tool, but it can also be a fantastic aid in assessing innovation. The event really brings these two elements together to discuss how our industry is going to use AI to create ideas and evaluate those ideas.
AI is the obvious theme across all industries right now. With everyone jumping on the AI bandwagon, how can leaders in your industry effectively use AI as a tool or accelerant for their work?
A lot of research work has proven theory around it. With AI, we have less theory and new ways of messing things up. We're still experimenting. Something will work and we'll do it again. Maybe it will work again, or maybe it won't work the next time. People are also trying to find out how we can do it safely. How can we avoid things like what happened with Google, where you say, "Show me some Vikings," and it decides to use its diversity guardrail to show African American Vikings? That was diverse, but it was wrong.
AI is democratizing who can do this work, but it takes away the guardrails as well. Collectively, we're still getting our heads around how we can do things that are diverse and are safe.
You’ve got a long history in this industry and with AI. Can you talk about the evolution of AI that you have seen?
One of the things that makes talk about AI problematic is that as soon as the technology works, we often stop calling it AI. Siri and Alexa are phenomenal AI products. The ability to transcribe a conversation into text has been around for a while now. That is astonishing AI.
Large language models (LLMs) have jumped us again into the next stage because now you don’t need a computer programmer to access it. You can start creating tools pretty much out of the box. That’s immediately created some interesting problems, but it’s also opened up fantastic new ways of doing business. To give you one example, I had a whole bunch of PDF files, and I wanted them as text. So, I uploaded the first one to ChatGPT and said, “convert this into a Word document and remove all the formatting.” It did that and then I uploaded another one and said, “do the same.” My instruction was just telling it to do the same. That is a change in the way we talk to computers and the way that we access AI. This is all part of an enormous change we will see happen over the next few years.
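Ray did this interactively in the ChatGPT interface. As a purely illustrative sketch of the same idea, the workflow could also be scripted against an LLM API, where keeping a running conversation is what lets the second request simply say "do the same." The libraries, model name, file names, and prompts below are assumptions for the sketch, not part of Ray's actual workflow.

```python
# Illustrative sketch only: reproduces the "convert this PDF, then do the same
# for the next one" idea via an LLM API rather than the ChatGPT interface.
# Assumes the `pypdf` and `openai` packages and an OPENAI_API_KEY; the model
# name and file names are placeholders.
from pypdf import PdfReader
from openai import OpenAI

client = OpenAI()

def pdf_to_raw_text(path: str) -> str:
    """Pull the raw text out of a PDF so it can be handed to the model."""
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)

# A running message list is what lets the second request simply say "do the same":
# the model sees the earlier instruction as conversational context.
messages = [{"role": "system",
             "content": "You clean up documents: strip formatting, keep the plain text."}]

for i, path in enumerate(["report1.pdf", "report2.pdf"]):
    instruction = ("Convert this into plain text and remove all the formatting."
                   if i == 0 else "Do the same for this one.")
    messages.append({"role": "user",
                     "content": f"{instruction}\n\n{pdf_to_raw_text(path)}"})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    print(f"--- {path} ---\n{text[:200]}...")
```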
What are some effective uses of AI you are seeing in the insights industry?
One effective use we're seeing is the generation of new ideas. SCAMPER is one example of an established creative problem-solving technique that encourages people to think differently and come up with innovative solutions. When you tell ChatGPT to use this method, it comes up with strange and different ideas. AI just doesn't have the same boxes that we have.
And you can then prompt it with additional contexts. "Okay, now I want you to think like an African American teenager in New York." How might it now respond with SCAMPER? Or "Now I want you to think about a white couple who've retired from Detroit to live in Florida." How are they going to generate ideas? So, we can ask these AI tools to behave like people who are not like us, because most innovation professionals are white, middle-class university graduates. That's a really narrow group.
This is helping us come up with new sets of ideas. While people say computers can't be creative, AI's background is wider than ours. Its circle is bigger.
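As a minimal sketch of the kind of persona-framed SCAMPER prompting Ray describes, one could send the same SCAMPER brief to an LLM under different persona framings and compare the resulting idea sets. The model name, personas, and product brief here are illustrative assumptions, not taken from the interview.

```python
# Minimal sketch of persona-framed SCAMPER prompting; model name, personas,
# and product brief are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

BRIEF = "a reusable water bottle aimed at commuters"
PERSONAS = [
    "an African American teenager in New York",
    "a white couple who have retired from Detroit to live in Florida",
]

def scamper_ideas(persona: str, brief: str) -> str:
    """Ask the model to run the SCAMPER checklist while role-playing a persona."""
    prompt = (
        f"Think like {persona}. Apply the SCAMPER method (Substitute, Combine, "
        f"Adapt, Modify, Put to another use, Eliminate, Reverse) to {brief}. "
        "Give one idea per SCAMPER step, in that persona's voice."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

for persona in PERSONAS:
    print(f"=== {persona} ===\n{scamper_ideas(persona, BRIEF)}\n")
```

Comparing the two outputs side by side is the point of the exercise: the same checklist, run through different assumed perspectives, surfaces ideas a narrow innovation team might not reach on its own.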
How should we be thinking about getting our current workforce up to speed on AI technology and introducing it to younger generations?
A lot of this should have been in education for a long time. We've had query database systems since the 1980s, and most people were bad at using them. This should have been taught in schools, and as Google became more common, we should have taught how to ask good questions and use logic models, but we didn't.
Now that has morphed into prompts, and we're seeing qualitative researchers being better than quantitative researchers at using these new tools. You need to talk to LLMs like you are talking to a person. The way you ask the question changes the answer.
We need to go back and look at the Socratic method and what Plato said about it, because that was very much how teaching was done. You would keep asking a question until somebody arrived at the right point. It was a very effective way of building knowledge. Rhetoric as a subject needs to make a comeback, because now we're calling it prompt engineering.
But this is an evolving skill. Five years from now, the way we ask questions to AI will be much more flexible, but we still need to make sure that the machine algorithms can home in on what we really want. This means the absence of ambiguity when you want ambiguity to be avoided, and the presence of ambiguity when you want it to be part of the question.
If you could step into a leadership role in a K-12 education system, what are some changes you would consider?
Historically, there has often been one way of getting to the right answer, and that is changing. One thing that would be really fun is to take a whole set of SAT tests and prompt AI with "here are the exam questions, how are you going to solve them?" I would then get the class together and compare the answers it came up with. Who got the answer right? Maybe just three of them, but how did they get it right? Maybe two did it one way and one did it another way. That means there are two ways of getting the right answer. Students need to understand that for the rest of their lives there are going to be multiple ways of getting to the right answer.
Some of the clever people I see are now talking about unlearning. Memorization has been a key element in education. British kids used to learn the kings and queens in order; as soon as we had encyclopedias and Wikipedia, you no longer needed to learn that. We need to develop an awareness of what we don't need to learn. One of my big issues is when somebody says, "We will always need" or "We will never do." "Always" and "never" are the sorts of words that you shouldn't use in this world.
This is just the second Art & Science of Innovation event for ESOMAR. What are you taking away from this?
It is really good to see how global this event is and how many people are here. It is also reassuring that we're seeing companies like Kimberly-Clark and Procter & Gamble here. These big brands are out there exploring and experimenting with new things.
ESOMAR hosts have been very intentional about introducing speakers with a whole-person perspective, using fun facts and mentioning their hobbies. Why is that important?
I think it is particularly important that when you get into a “sciencey” area, you need to remember it is about people. For us, this is how do people use technology? How do people create innovations for people? We’re in a people business, not a tech business.