Last month at Cranfield School of Management, I spoke to a group of the UK's leading Key Account Managers about AI. Not the usual "AI will change everything" talk, but a practical look at how this technology is actually being used in KAM today.
The Cranfield KAM Forum Winter Summit brings together experienced practitioners to tackle real challenges in Key Account Management. We spent the afternoon exploring AI tools, running live demonstrations, and discussing what works (and what doesn't) when applying this technology to customer relationships.
This article captures what we learned - both from the presentation and the candid discussions that followed. It's a practical guide based on real experience, not theory.
The Current State of AI in Business
We're witnessing something extraordinary - the ability to compress the entirety of human knowledge into a system small enough to fit on a thumb drive and run on a moderately powerful computer. As discussed at the Summit, "It's mind-blowing... that's like Star Trek, that's living in the future. All of this knowledge that's everywhere fits onto one of these."
However, having access to this knowledge is only half the equation. We need effective ways to interact with and extract value from these systems. That's why we're currently in what could be called "The Age of the Prompt" - learning the art of communicating effectively with these vast knowledge systems. It's about how we talk to the machine that makes the difference.
Given the uncertain trajectory of AI development and its potential impact on Key Account Management, I've developed "The Key Account Manager's Guide to Prompting" (available on my Substack). This guide consolidates my experience and research into practical methods for interacting with Large Language Models. It's not just about writing better prompts - it's about developing a systematic approach to extracting value from these powerful tools in a KAM context. From simple interaction techniques to advanced chain-of-thought prompting, the guide provides frameworks that help Key Account Managers leverage AI effectively in their daily work.
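To give a flavour of the kind of framework the guide covers, a structured prompt can be as simple as a reusable template following a role/context/task/format pattern. The sketch below is illustrative only - the field names and account details are made up, not taken from the guide:

```python
# Illustrative sketch of a structured prompt template for KAM tasks.
# The ROLE / CONTEXT / TASK / FORMAT pattern is one common way to
# structure prompts; the field names here are examples, not a standard.

KAM_PROMPT_TEMPLATE = """\
ROLE: You are an experienced Key Account Manager.
CONTEXT: {account_context}
TASK: {task}
FORMAT: Respond with {output_format}.
Think through the problem step by step before answering."""

def build_prompt(account_context: str, task: str, output_format: str) -> str:
    """Fill the template with account-specific details."""
    return KAM_PROMPT_TEMPLATE.format(
        account_context=account_context,
        task=task,
        output_format=output_format,
    )

# Hypothetical account scenario for illustration.
prompt = build_prompt(
    account_context="Global manufacturing client, contract renewal due in Q2.",
    task="Draft three talking points for a price-increase conversation.",
    output_format="a numbered list",
)
print(prompt)
```

Templates like this make prompts consistent across a team and easy to refine over time, rather than reinventing the wording for every interaction.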
I am proposing that this is just the beginning of a much larger evolution through several distinct phases:
The Age of the Prompt (Current): Learning to communicate effectively with AI systems
The Age of the Agent: Where AI operates autonomously on routine tasks
The Age of the Collaborator: AI actively partnering in decision-making
The Age of the Advisor: Strategic guidance based on complex pattern analysis
The Age of the Architect: Autonomous system design and management
Understanding Today's AI Landscape
From my work with various AI systems, I'm noticing several important trends. The tools are evolving rapidly, but what's more interesting is how they're being used in practice.
Ecosystem Development
Each major AI provider is developing what I'd call a "walled garden" approach. For instance, you can now write and execute code entirely within Claude or ChatGPT's environment. While this creates powerful integrated experiences, it also raises questions about vendor lock-in - something Key Account Managers are quite familiar with from their own customer relationships.
The Multi-Modal Revolution
We're seeing a significant shift toward multi-modal capabilities - AIs that can see, hear, speak, and process information in various formats. During the Summit demonstration, we experimented with Advanced Voice Mode, which reportedly costs about $9/hour via the API. While the interaction wasn't perfect (as our live demo showed!), it hints at where we're heading - toward more natural, conversation-like interactions with these systems.
The Context Window Race
One of the most exciting developments I'm seeing is the rapid expansion of context windows - essentially how much information these systems can process at once. We've moved from:
GPT-4's 32K tokens
Claude 3 Opus's 200K tokens
Gemini 1.5 Pro's 1 million tokens
This isn't just about handling more data - it's about the ability to analyse entire customer histories, contract portfolios, or market reports in a single interaction. When you combine this with retrieval-augmented generation (RAG), you can create systems that maintain accuracy while working with vast amounts of company-specific information.
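The core idea behind RAG is simple: retrieve the most relevant snippets from your own document store, then prepend them to the prompt so the model answers from company-specific information rather than its general training data. The toy sketch below illustrates the pattern - real systems use embedding similarity and a vector database rather than keyword overlap, and the account snippets are invented for illustration:

```python
# Toy sketch of retrieval-augmented generation (RAG): rank stored
# documents against a query, keep the best matches, and build a prompt
# that grounds the model's answer in that retrieved context.
# Keyword overlap stands in for real embedding-based similarity search.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    query_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

# Hypothetical account snippets for illustration only.
documents = [
    "Acme contract renewal is due in March with a 5% uplift clause.",
    "Beta Corp raised a support ticket about delivery delays.",
    "Acme quarterly review flagged interest in the premium tier.",
]

question = "When is the Acme contract renewal due?"
context = retrieve(question, documents)
prompt = (
    "Answer using only this context:\n"
    + "\n".join(context)
    + "\n\nQuestion: " + question
)
print(prompt)
```

In production the retrieval step would query a vector store over your CRM notes, contracts, and reports, but the shape of the final prompt - retrieved context followed by the question - is the same.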
Cost and Accessibility
A word about cost - while we're seeing rumours of premium services potentially reaching $2,000/month (particularly for ChatGPT), there's also a democratisation happening. Many powerful tools are available for $20-30 per month, and open-source alternatives are rapidly catching up to proprietary solutions.
During the Summit, I demonstrated how you could build a simple wordle clone game web application in 45 minutes using basic prompts and tools like Bolt.new, costing only about $7 in tokens.
It seems the barrier to writing low-level code is disappearing: if we can think of a simple web application, we can build it. This helps companies save on SaaS licences for tools they'd otherwise have to purchase, and could mean businesses in those spaces need to rethink their value propositions.
Mrs Nasty from Procurement
One of the Summit's most engaging demonstrations was our live experiment (no previous prompting or training of the system took place) with ChatGPT's Advanced Voice Mode, rumoured to be priced at $9/hour via API.
We decided to tackle a scenario every KAM dreads - negotiating a price increase with a challenging procurement officer.
The interaction went something like this:
"Hey ChatGPT, we're standing in front of 50 of the UK's best key account managers at Cranfield School of Management. Let's play a role-play game where you're Mrs Nasty from procurement, and I'm a key account manager asking for a price increase."
The AI readily accepted the role, responding with a suitably stern, "I hear you want to discuss a price increase. Convince me why I should even consider it." What followed was an interesting back-and-forth that had the room both chuckling and taking notes.
When I tried flattery ("I'm taking you out to lunch, Mrs. Nasty"), the AI stayed firmly in character, responding with a dismissive "Let's stick to business. Flattery won't influence my decision." Only when I started talking about demonstrating value and ROI did the conversation become more productive.
While there were moments where the interaction felt slightly mechanical - with noticeable delays between responses and occasional non-sequiturs - the potential was clear. The system maintained character, remembered context, and provided relevant pushback to negotiation attempts.
What made this demonstration particularly valuable was:
The ability to practice difficult conversations without real-world consequences
The AI's consistency in maintaining a challenging personality
The immediate feedback through responses
The opportunity to try different approaches in a safe environment
The limitations were also instructive:
The slight delay in responses disrupted natural conversation flow
Some responses felt formulaic
The system occasionally lost the thread of complex arguments
Despite these limitations, the demonstration sparked a lively discussion about potential applications in KAM training and preparation. Several attendees noted that even with its current limitations, such a tool could be valuable for:
Practicing difficult conversations
Testing different negotiation strategies
Training junior KAMs in a safe environment
Preparing for specific customer interactions
Developing and refining value propositions
Creating a Podcast
During the Summit, we showcased one of Google's most promising new tools - NotebookLM (currently in beta and free). What makes this demonstration particularly compelling is how it changes the way we can interact with our own business documents and customer data.
Here's what we did live at the Summit:
Uploaded the conference slide deck into NotebookLM
Asked it to analyse the content
Demonstrated how it could generate a natural conversation about the material
The system created a podcast-style discussion about the content. Without any specific instruction, it generated a conversational exchange that went something like this:
"A deep dive today, we're looking at AI and key account management." "Yes, pretty interesting stuff." "We have some really interesting source material for this one - a presentation from Richard Brooks."
We uploaded the podcast to Soundcloud and you can listen to it here:
The remarkable part? If you heard this on the radio, I don't think you'd know it wasn't two people having a genuine conversation in a recording studio. The system naturally created a back-and-forth dialogue that felt authentic and engaging.
Practical Applications
This capability opens up several powerful use cases for KAMs:
Analysing lengthy customer reports or proposals
Digesting meeting transcripts
Understanding complex RFPs
Processing quarterly business reviews
Analysing competitor information
Key Benefits
What makes NotebookLM particularly valuable for KAMs is:
The ability to maintain source attribution (it tells you where information came from)
The capacity to handle large documents
Natural conversation-style interaction with the content
The ability to ask follow-up questions about specific details
As one participant noted during the demonstration: "I actually used this the other day... I asked it to give me 'Why would you join a professional association?' It took a very dry 600-word document and turned it into 41 minutes of two people having a conversation."
Important Considerations
While this tool is powerful, we discussed some important caveats:
Currently in beta, so features may change
While Google promises not to use uploaded content to train their models, standard data privacy considerations apply
Best used for non-confidential or properly redacted documents
Excellent for learning and preparation, but should not replace actual customer interaction
Strategic Implications for KAM Organisations
A cornerstone of our discussion at the Summit centred around Malcolm McDonald's classic strategic framework. As one of the foundational thinkers in strategic marketing and key account management, his insight about strategy versus tactics becomes particularly relevant in the AI era.
Strategy vs Tactics in an AI World
Looking at McDonald's framework - a matrix plotting strategy effectiveness against tactical efficiency:
What makes this framework particularly powerful in the context of AI is that artificial intelligence can dramatically amplify both strategic and tactical execution. As I discussed at the Summit, AI can effectively "10x" both axes of this matrix, meaning:
If you're implementing an ineffective strategy efficiently with AI, you'll fail even faster
If you're implementing an effective strategy with AI-enhanced tactics, you'll potentially see unprecedented success
The key message here is clear: Don't be very good at implementing a bad strategy. This becomes even more crucial in the AI era because these tools can make us remarkably efficient at executing whatever strategy we choose - good or bad.
Importance of Domain Knowledge
In this context, domain knowledge becomes more critical than ever. While AI can provide remarkable tactical efficiency, several capabilities remain fundamentally human:
Developing effective strategies
Understanding complex customer needs
Navigating organisational relationships
Making nuanced strategic decisions
Each of these requires deep domain expertise and experience.
Practical Implementation Guidelines
While at first glance this appears to be a complex undertaking, there's a clear path forward for organisations wanting to implement AI in their KAM programmes.
Here's how I suggest approaching it:
Start with What Works Today
Based on my experience, I recommend starting with a basic stack of proven tools:
For Daily KAM Tasks (~$20/month each):
Claude 3.5 for human-like writing
ChatGPT as a general LLM tool
Gemini for research and large content creation
Perplexity for market and customer research (replacing traditional search)
For Visual Content:
Adobe Firefly for professional imagery
Canva with integrated AI for marketing collateral
Grok 2 for photorealistic images
For Voice and Conversation:
ChatGPT's Advanced Voice Mode for practice and training
NotebookLM (currently free in beta) for document analysis
Implementation Strategy
Rather than a full-scale rollout, I recommend what I call the "lighthouse project" approach:
Start Small
Choose one specific use case (like proposal writing or customer research)
Select a small team of early adopters
Set clear metrics for success
Build Competency
Develop prompt libraries specific to your business
Create standardised workflows
Document best practices and failures
Scale Gradually
Expand to other teams based on early successes
Adjust policies and procedures based on learning
Maintain control over sensitive data
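One lightweight way to begin the "prompt library" step is a simple, versioned collection of templates the team can review and extend together. The sketch below is a minimal illustration - the template names and wording are invented, not a recommended taxonomy:

```python
# Minimal prompt library: versioned templates the team can standardise
# on, review, and extend. All names and templates are illustrative.

PROMPT_LIBRARY = {
    "proposal_summary": {
        "version": "1.0",
        "template": "Summarise this proposal for an executive audience:\n{text}",
    },
    "meeting_followup": {
        "version": "1.1",
        "template": "Draft a follow-up email covering these action points:\n{text}",
    },
}

def get_prompt(name: str, **fields) -> str:
    """Look up a template by name and fill in its fields."""
    entry = PROMPT_LIBRARY[name]
    return entry["template"].format(**fields)

print(get_prompt("proposal_summary", text="Three-year managed service deal."))
```

Keeping the library in version control gives you the documentation of best practices and failures for free: every template change is recorded, reviewable, and reversible.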
Self-Hosting LLMs
For Key Account Managers working in secure environments - whether that's defence contracting, financial services, or highly regulated industries - self-hosting Large Language Models offers a compelling alternative to cloud-based solutions. This approach provides enhanced security and frees you from dependency on third-party providers.
Running your own LLM is simpler than you might think. With today's technology, you can run a powerful model like Llama 3.2 on a decent desktop computer. All you need is a machine with a good amount of VRAM (around 24GB), a decent GPU, and about 240GB of storage. Software like Ollama makes managing these models surprisingly straightforward.
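Once Ollama is installed and a model is pulled, your applications talk to it over a local REST API (it listens on localhost:11434 by default). The sketch below builds a request for Ollama's /api/generate endpoint; the actual network call is shown commented out so the snippet runs without Ollama installed, and the prompt text is illustrative:

```python
import json

# Sketch of querying a locally hosted model via Ollama's REST API.
# The payload keys (model, prompt, stream) match the /api/generate
# endpoint; the prompt content here is purely illustrative.

def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request(
    model="llama3.2",
    prompt="Summarise the key risks in this account plan.",
)
print(json.dumps(payload))

# With Ollama running locally, the call would look like:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

Because everything runs against localhost, the prompt and the model's response never leave the machine - which is the whole point for regulated environments.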
The benefits of self-hosting extend far beyond security. When you run your own model, your data never leaves your premises, giving you complete control over information flow and ensuring compliance with even the strictest security protocols. You're also freed from the constraints of external providers - no surprise price increases, no token limits, and no dependency on internet connectivity.
The open-source community has made significant strides in this area. Models like Llama 3.2 from Meta, Mistral AI's offerings, and others now approach the capabilities of proprietary solutions. During the Summit, we demonstrated running Llama 3.2 locally, showing how a KAM could analyse customer data and generate responses without any internet connection. The entire system fit on a high-end laptop and ran at around 20-30 tokens per second.
There are, of course, practical considerations to weigh. While self-hosting eliminates ongoing subscription costs, it requires an initial investment in hardware and technical expertise. Performance may not always match cloud providers' speed, being limited by local hardware capabilities. However, for many organisations these trade-offs are well worth the benefits of data sovereignty and independence.
Getting started doesn't have to be overwhelming. Many organisations begin with a smaller model, such as one of Llama 3.2's lighter variants, testing on a single machine to validate use cases and build core competencies within the team. As comfort and requirements grow, the system can be scaled up gradually, integrating with existing systems and expanding to serve more users.
This approach particularly resonated with KAMs working in defence and financial sectors, where data security is paramount. As one participant at the Summit noted, "This could be a game-changer for us - all the benefits of AI without the compliance headaches."
Many organisations successfully use a hybrid approach, keeping sensitive operations on local models while using cloud services for general tasks. The key is matching the approach to your specific needs and security requirements.
Learning Through Experimentation
The best way to understand AI's potential in KAM is to experiment with it, helping you become familiar with the technology and discover new ways to approach challenges.
For example, our experiment in creating "The Key Account Management Blues."
During the Summit, we demonstrated how AI tools could be combined creatively:
Used Claude to help craft lyrics that captured common KAM challenges
Generated album artwork with AI imaging tools
Created the musical arrangement using AI composition tools
Total time: 90 minutes
Total cost: About $35
While this might seem like just a fun experiment, it demonstrated several important points:
How different AI tools can work together
The speed of content creation
The ability to iterate quickly
The importance of human guidance in the creative process
This kind of experimentation helps build confidence in using AI tools and reveals unexpected applications. At Cranfield, we've found that hands-on projects like this help people move from understanding AI theoretically to seeing practical applications in their work.
As AI continues to evolve, the ability to experiment and adapt will become increasingly important. The key is to start small, stay curious, and maintain a balance between embracing new capabilities and maintaining the human elements that make KAM successful.