If you look at industry surveys about the adoption of generative AI, many marketers are dipping a toe in the water, or a foot. But easy access to these productivity tools means adoption can be rather ad hoc and fragmented.

For example, Adobe's 2024 trends survey found only a quarter of client-side senior executives had already conducted skill-building programmes on genAI. And yet new Marketing Week data shows almost half of marketers are using AI for market research, and more than two in five for audience segmentation and creative testing.

Anecdotally, I'm sure we have all spoken to people, within marketing and without, who have cracked on with using new tools to create imagery and copy, and advised others to do the same. Is there any problem with having an open mind and keeping abreast of new technology? Isn't this what any good marketer should do?

Is it benefiting the wider organisation?

The danger to look out for is when ad hoc experiments deliver small local benefits while missing the opportunity to capture larger benefits for the wider organisation.

I spoke to one experienced data and analytics leader earlier this year who used the analogy of the early days of CRM systems in B2B businesses, and the difficulty in ensuring adoption by salespeople.

"For 10-plus years, businesses said: manage your contacts and key sales processes in CRM. They never did that properly. And then suddenly they realised they can manage contacts on their smartphones. They can find a customer, they can use WhatsApp, they can do their own small campaign within minutes. They create contacts, groups of customers…

"So, basically the company has not got its hands on this customer data asset. It's on people's devices. It's fragmented. It's lost.

"If they had strategised properly on a CRM system and found ways to get that data in, they would be very rich with customer data. The same could happen in AI."

The risks of free rein

The potential downsides of allowing a salesperson, marketer or customer service associate to add large language models (LLMs) to their workflow have been well documented. CEOs may perceive genAI to be a threat to both compliance and competitiveness, with issues such as hallucinations and privacy breaches often front of mind.

One might argue, in the case of ad hoc use of LLMs, that productivity tools are, by their nature, dispersed, and that employees already need to take personal responsibility for their use of everything from Google search and social media to enterprise software.

A useful way to think about this is that AI amplifies existing capability rather than substituting for missing expertise. The value of a generative AI tool is directly tied to the user's ability to evaluate its output.

Expectations for genAI are sky-high, with two-thirds of senior executives in Adobe's recent survey saying they are optimistic the technology will deliver business transformation across analytics, content, customer service and sales. But without strategic oversight, those expectations are likely to go unmet.