Client Engagement in AI Consulting Projects Hits Different
Having now completed several AI consulting projects, I wanted to reflect on my experience and on the factors that differentiate successful engagements from the rest.
AI projects are not like other technology consulting projects. Not only are they more technically complex, but many AI applications (such as those built on large language models) work by learning latent associations across billions of features and parameters. Unlike traditional analytical methods, these deep learning approaches create a sort of “black box,” which often makes it impossible to see why a model is generating a certain value or making a particular recommendation.
As a result, AI projects require extra layers of delivery communication, stakeholder engagement, and iterative design. Clients should be aware of how much additional human effort may be required to make engagements successful.
A different type of communication – contextual.
Consistent and clear communication is critical to the success of any technology or analytical consulting project. But AI projects require even more, for a couple of main reasons. First, AI technologies are more specialized, so clients often have less in-house AI expertise with which to efficiently parse the various project delivery documents. In a typical analytical consulting project, the client may have a team of analysts who can digest the delivery and begin integrating it into their workflow; most companies don’t yet have specialized AI teams that can do the same. Second, AI technologies themselves can be harder to unpack analytically due to their black-box nature. Traditional business analytics may use methods such as multiple regression to make a prediction, which has the added benefit of being relatively easy to interpret. Simply looking at a regression output allows you to say things like, “On average, when X goes up 1 unit, Y goes up 0.5 units,” or “Here’s how much X impacts Y while controlling for Z.” In contrast, generative AI projects often use neural networks that assess the impact of millions of observable and unseen features, making the output much less conducive to a clear business conversation.
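To make the interpretability contrast concrete, here is a minimal sketch (with synthetic, made-up data) of the kind of regression readout described above. The data and the 0.5 coefficient are purely illustrative, not from any real engagement:

```python
import numpy as np

# Synthetic data, for illustration only: Y truly moves 0.5 units per unit of X.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 0.5 * x + rng.normal(scale=0.1, size=200)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = coef

# The slope is directly interpretable: "when X goes up 1 unit,
# Y goes up roughly 0.5 units on average." No black box involved.
print(round(slope, 2))
```

A neural network trained on the same data would make similar predictions, but there is no single coefficient you can point to in a steering-committee meeting, which is exactly the communication gap described above.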
So, how do you communicate such findings to the client? Often, you don’t.
Instead, you communicate 1) early and 2) often about things the client can actually make sense of: the input data, model constraints, known features, missing data, and so on. That way, even if it’s unclear why the model is generating a particular output, the client has confidence that all the relevant business considerations are being accounted for. I call this contextual communication because, unlike in traditional consulting projects, the focus is less on the analytical output itself and more on the context around that analytical framework. This is especially important if your project delivery doesn’t include live testing, where the client can measure the project’s potential impact with their usual KPIs.
On a side note: AI projects also involve significant investments (a $500,000 sticker price is on the low side) and high expectations. You therefore need more regular communication so the client can stay informed about progress and keep their (probably lofty) expectations aligned.
Use a human-centered design.
Nearly every consulting firm has an impressive-sounding human-centered design framework that they implement, and they’re never shy to remind you of it. The goal of the framework is to ensure that the technology aligns with an organization’s specific needs, goals, and motivations by engaging key stakeholders from the outset. AI is a powerful tool, but if it’s ultimately going to serve people, you need to involve those people early and often. Doing so yields valuable insights that shape the development of the AI solution and ensures the final product is intuitive, user-friendly, and meets the real-world requirements of its users.
Design iteratively: the 4x rule.
Finally, let’s delve into iterative design. AI projects are complex and dynamic, requiring a flexible and adaptive approach. Iterative design, with its cycles of prototyping, testing, and refining, is the engine that drives success and the way we make sure there’s end-of-project alignment. Each iteration is an opportunity to test and validate assumptions, ensuring potential problems are identified and addressed before they become critical. The process is all about continuous improvement: by regularly testing and refining the AI solution, we can incrementally enhance its performance, usability, and value, so that the final product keeps growing and improving over time.
Iterative design isn't just a nice-to-have in AI projects; it's as essential as caffeine in an analyst's bloodstream. The industry standard of three milestone presentations per project? Toss it out the window. In AI, you need to showcase progress, gather feedback, and pivot faster than you’re used to. I'm talking about bi-weekly demo sessions at a minimum, with the flexibility to call impromptu sessions when breakthroughs (or breakdowns) occur. Another baseline metric I like: for every 1 hour of knowledge transfer sessions, you should have 3-5 hours (roughly 4x) of iteration spaced evenly throughout the lifecycle of the project.
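For the back-of-envelope planners, the 4x rule above can be sketched as simple arithmetic. The numbers in the example (10 hours of knowledge transfer, an 8-sprint project) are hypothetical, and the 4x multiplier is just the midpoint of the 3-5x range:

```python
def iteration_budget(kt_hours: float, multiplier: float = 4.0) -> float:
    """Total iteration hours implied by the 4x rule:
    roughly 4 hours of iteration per hour of knowledge transfer."""
    return kt_hours * multiplier

def per_sprint_hours(kt_hours: float, sprints: int, multiplier: float = 4.0) -> float:
    """Iteration hours per sprint, if spread evenly across the project."""
    return iteration_budget(kt_hours, multiplier) / sprints

# Hypothetical example: 10 KT hours across an 8-sprint project.
total = iteration_budget(10)          # 40.0 iteration hours total
per_sprint = per_sprint_hours(10, 8)  # 5.0 iteration hours per sprint
print(total, per_sprint)
```

The point of spreading the budget evenly, rather than back-loading it, is that feedback arrives while there is still time to act on it.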
Many consulting projects have a “deliverables” phase in which knowledge is transferred and the final documents and artifacts are handed over to the client. We went into our first few AI engagements with this framework in mind, and it didn’t work. Clients wanted to see more documentation, and sooner, to verify that the contextual information around the “black box” was accurate. After some bumps in the road, we adjusted our delivery process to instead use “living” documents that we shared with clients early and could modify and adjust throughout the entire project. In short, AI made the “deliverables” phase of our projects feel antiquated.
Engagement expectations: setting the standard.
In traditional consulting, you might get away with a monthly steering committee meeting. In AI, that's a recipe for disaster. You need weekly check-ins with key decision-makers, and not just the yes-men. Bring in the skeptics, the end-users, the people who'll actually have to live with this AI system day in and day out. Their insights are worth their weight in Bitcoin.
Here's a hard truth: if you're not uncomfortable with the amount of client interaction in your AI project, you're probably not doing it right. It should feel excessive. It should feel like you're oversharing. Because in the world of AI, there's no such thing as too much communication.
Let me leave you with this: AI projects aren't just about deploying clever algorithms. They're about shepherding organizations through a fundamental shift in how they operate. And that, my friends, requires more hand-holding, more tough conversations, and more iterations than any other type of consulting work I've encountered. So, if you're embarking on your first AI project, buckle up. Expect to spend as much time talking as you do coding. In the end, the success of your AI project won't be measured by the sophistication of your algorithms, but by how well your solution aligns with real human needs. And that alignment? It's built on a foundation of relentless, sometimes uncomfortable, always essential communication.
See you at the next one,
Dr. Pete