
Mind the Gap: AI and Enterprise Architecture – Pitfalls and Pathways

By Mark Cooper

I’ve been thinking about corporate AI from the perspective of an enterprise architect (EA). I spent several years as an enterprise architect, leading some of the largest and most complex projects in my company’s history. So, what is the relationship between the enterprise architect and AI? Is the enterprise architect also the agentic AI architect (or just AI architect)? Perhaps in some organizations, but I would assert that an AI architect is more analogous to an application architect. The focus is on the design and implementation of the AI system: its technologies, frameworks, infrastructure, development, monitoring, enhancement, and maintenance.

The enterprise architect considers AI and how it fits into the enterprise ecosystem from a broader perspective. At that level, AI is simply another component. A nice shiny new tool in your EA toolbox. Much of what enterprise architects already know and practice is still applicable in an AI-enabled enterprise. The principles are the same.

Nevertheless, it is clear, even in these early stages of corporate AI adoption, that there are certain challenges that an enterprise architect may face when integrating AI into an application ecosystem. The purpose of this column is to highlight some of these pitfalls and recommend better pathways forward.

Let’s start at the beginning. Oftentimes our first mistake is the most damaging.

Pitfall: Start with the sole objective of delivering AI.

Yes, companies want to take advantage of the new technology. Everybody else is. FOMO is a reasonable, valid, and widespread concern, but you have to ask yourself why you want to use the new technology. What do you want to accomplish? If your primary objective is simply to implement AI agents or an agentic AI something-or-other so that you can say that you have delivered an AI something-or-other, then it is more likely than not that you have already failed. Welcome to the 95%. It doesn’t matter if the solution is appropriate for the problem or not. It doesn’t matter if the problem is too big or too small. Too risky or barely impactful. As long as we can check the AI box for the shareholders, we’re good. We’re successful because we say we’re successful. Chicken lunches for everybody!

Pathway: Maintain a clear line of sight from the AI deliverable to a business objective and its tangible business value.

Let’s take as a given that you want, need, or have been mandated to deliver an AI something-or-other. OK. Then move quickly – as in instantaneously – to this follow-up. You need to be able to articulate what you plan to improve for your customers, clients, employees, and/or partners. Management might care to some extent about the shiny new technology, but what they really want are more customers, or more revenue, or fewer, better, or more efficient employees. If you can’t explain how your project will accomplish one of those objectives, they’re not going to care and they will eventually abandon your project.

In addition, and perhaps more importantly, by establishing the connection between your project and business outcomes at the beginning, you will be able to articulate the benefit of your project, and of AI, to the company. IT has a long and embarrassing history of over-promising and under-delivering. No fluffy or imaginary or maybe-you-can-see-them-if-you-squint benefits.

Pitfall: Over-focus on business value.

Wait a minute! I thought you just said that we need to be focused on business value. We need to cut headcount and increase revenue. Now! Yes, you do need to be focused on business value. Yes, I said that you need to maintain line of sight from your AI-focused project to business outcomes.

You just don’t have to recoup all of your past and future investments at the very beginning, all at once.

As I write this, a baseball game is on the TV across the room. What just happened seems eerily on point. Ninth inning. The team at bat is losing. Two outs. Two strikes. The pitch. The batter swings. Hard. And misses. Game over. The slow-motion replay shows a close-up of the batter swinging with all his might. A grimace on his face. He had hit a dozen home runs this season. Tried for another. He had four times as many strikeouts as home runs. Tally one more in the strikeout column.

Too often we get fixated on hitting the home run. We then act surprised when more often we strike out. Remember that for every overhyped customer-facing or revenue-generating application you read about in the trades, nineteen more walked dejectedly out of the batter’s box.

Pathway: Be realistic in your expectations, especially with what you sell to management.

You and your company are probably still learning. Still experimenting. That’s good. Start small. Start internal.

One tip that I’ve seen a few times is to ask around and find out how your employees are using AI themselves. Until fairly recently, companies have been reluctant to dip their toes into AI, leaving many employees to play with it on their own time (and their own dime). Leverage their creativity and their experience. Consider whether the problems that they’re solving could be rolled out to a broader audience.

Start with a proof of concept. That’s fine. Lean into the experience you get through this exercise, but keep the business-benefit line of sight. You might even come out with something useful at the end of it. Keep in mind, though, that the gap between prototype and production is often unexpectedly wide. Something that works in a lab might not work so well in the wild. (That’s why I prefer proof of concept to prototype or pilot.)

Be clear-eyed about the costs and the benefits, and especially the risks. In a recent article I gave some examples of problems that occurred in AI systems. I wonder how thoroughly the risks were considered in those cases.

Pitfall: Asking AI to generate too much of the application all at once.

I demonstrated how ChatGPT could be used to write a computer program that solves a certain constraint satisfaction problem. I then iteratively added features.

Aside: This is sometimes referred to as Vibe Coding, and it is currently a very hot topic of debate amongst those who comment on application development and AI. Some are enthusiastically in favor and some are enthusiastically against. I’m not going to comment one way or another (at least not right now) about its usefulness or effectiveness, especially in a corporate setting.

Each time I asked ChatGPT to add a new feature, the entire program was regenerated. Since previous versions remained within its context, it probably wasn’t starting totally from scratch, but there were some unexpected changes from version to version.

Several researchers have experimented with generating entire systems using AI, including testing, bug fixes, and CI/CD pipeline. One significant result was that fixing a bug in one place and regenerating the code sometimes spontaneously created bugs in other places. This is consistent with my experience.

There seems to be an evolving fantasy where we can just say, “Build me a customer management system” or whatever, and the AI will just make it happen. Vendor hype is cranked to eleven. This should raise the hackles of any enterprise architect. That’s not how it works.

Pathway: Enterprise architecture 101 – functional decomposition and isolation.

Have we forgotten the basics of system design? Modularity and loose coupling are standard architecture principles. Functional decomposition is a standard Enterprise Architecture practice. We don’t write entire systems all at once.

AI is no different.

Break the solution down into its constituent components. Focus on each individual component. One service, one task. One AI Agent, one task. A change in one component shouldn’t impact any other.
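To make the principle concrete, here is a minimal sketch in Python: each step is its own single-purpose component, composed by a thin orchestrator, so a change to one step never touches another. The step names, the stubbed classifier, and the toy pipeline are illustrative assumptions, not a prescription.

```python
from typing import Callable

# Each component does exactly one thing and knows nothing about the others.
def extract(order: dict) -> str:
    """Pull the free-text portion of an order."""
    return order["notes"]

def classify(text: str) -> str:
    """Stand-in for an AI agent whose single task is classification.
    In a real system this would call a model; here it is a stub."""
    return "complaint" if "broken" in text.lower() else "routine"

def route(label: str) -> str:
    """Deterministic business rule -- no AI needed for this step."""
    return {"complaint": "support-queue", "routine": "fulfillment"}[label]

def pipeline(order: dict, steps: list[Callable]) -> str:
    """Thin orchestrator: feed each step's output to the next."""
    value = order
    for step in steps:
        value = step(value)
    return value

print(pipeline({"notes": "Arrived broken, please help"},
               [extract, classify, route]))
# support-queue
```

The same shape holds whether a step is a plain function, a service call, or an AI agent; the orchestrator neither knows nor cares, which is exactly the isolation we want.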

But how do we determine what kind of components to use? This may be the most important consideration of all for an enterprise architect working in an organization that is just starting its AI journey.

Pitfall: Believing you have to implement everything (or at least everything new) with AI.

It’s easy to become enamored with this new technology, especially when it appears to be so easy to implement new applications. Just say what you want and the AI spits it out.

But for as easy as it appears on the surface, a whole ecosystem exists around agentic AI, AI agents, and generative AI just to support the technology itself. That’s in addition to what you want to actually implement with it. You need the large language model, of course. You need prompt engineering, interfaces, collaboration models, feedback mechanisms, and more. You need to have processes to make sure that the AI does what it’s supposed to do, doesn’t hallucinate, manages problems when (not if) they occur, and makes necessary adjustments.

It’s a lot. It can get expensive. And it’s often unnecessary.

Pathway: Keep it simple.

Again, decomposing a planned system into constituent components is a typical enterprise architect activity. One of the key decisions that an enterprise architect working in the AI space will then make is the selection of the most appropriate implementation approach for each component. Each must be critically evaluated. I’ve talked about this before. More often than not, this will be the simplest approach. You would think that this would be axiomatic, but when we’re all losing our minds over AI, that sometimes goes out the window.

You wouldn’t want to load all of your millions of customer transactions into ChatGPT and then ask it to produce invoices. Databases are best when data completeness and precision are required. Is the workflow consistent or the decision process rigid? Standard applications, services, orchestration, and tools will work for that. Is the workflow fluid or are interpretation or judgment required? AI may be appropriate.

IBM describes the situations where the use of agentic AI can be most effective: “Agentic AI architecture should be composed of components that address the core factors of an agency: Intentionality (planning), forethought, self-reactiveness, and self-reflectiveness. These factors provide autonomy to AI agents so that they can set goals, plan, monitor their performance and reflect to reach their specific goal.”

Ask first: Do I really need to use AI, or will something simpler suffice?

Pitfall: Conflate accomplishing the task and interacting with the system.

I like the natural-language interfaces. My vision for almost as long as I’ve been involved in data and analytics has been to interact with analytical systems with a Google-like interface. Just ask the question and get your answer. The capabilities of today’s LLMs far exceed that vision.

This approach has tremendous potential for democratizing analytics, combining the interpretive power of LLMs with the data storage and retrieval precision of databases and other repositories.

That said, interface selection doesn’t need to dictate the entire implementation.

Pathway: Separate accomplishing the task from its interface.

Put simply: just because you want to be able to ask natural-language questions doesn’t mean that you have to load all of your company’s data into the large language model. It is possible to interact through an LLM that generates database queries. You want to know your top ten most profitable customers? Use the LLM to generate a query that interrogates the database and returns those top ten most profitable customers. You could even use the LLM to present the results in natural language.
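The pattern can be sketched in a few lines of Python: the LLM’s only job is to turn the question into SQL, and the database does the retrieval. The schema, the sample data, and the generated query here are all illustrative, and the LLM call is stubbed out rather than a real model invocation.

```python
import sqlite3

def llm_generate_sql(question: str) -> str:
    """Stand-in for an LLM call that translates a natural-language
    question into SQL. In a real system this would prompt a model,
    constrained to the known schema."""
    # Hard-coded for illustration; assume the question asked for the
    # top ten most profitable customers.
    return ("SELECT name, profit FROM customers "
            "ORDER BY profit DESC LIMIT 10")

def answer(question: str, conn: sqlite3.Connection) -> list[tuple]:
    sql = llm_generate_sql(question)
    # Guard: only read-only queries may reach the database.
    if not sql.lstrip().upper().startswith("SELECT"):
        raise ValueError("Generated query is not read-only")
    return conn.execute(sql).fetchall()

# Demo with an in-memory database and a handful of rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, profit REAL)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Acme", 120.0), ("Globex", 340.5), ("Initech", 88.2)],
)
top = answer("Who are my top ten most profitable customers?", conn)
print(top[0])  # ('Globex', 340.5)
```

Note that the LLM never sees the customer data; it sees only the question and the schema, while the database does what databases do best.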

Use a database for what a database is best used for. Use applications and services for what they are best used for. Use tools for what they are best used for. And use AI for what it’s best used for. It’s not hard, but it does require evaluation, consideration, and judgment.

Pitfall: Believing that the AI will do what you want it to do.

I envision AI agents (and generative AI and agentic AI) like the mischievous genie of legend. You make what seems to be a simple wish, but it gets interpreted in the most unexpected and undesirable way. “I wish I could understand what my dog is saying.” All right. Poof! You’re a dog.

Most of our interactions with LLMs are relatively low-stakes. Summarizing Zoom calls and emails. Creating presentation pictures and TikTok Stormtrooper videos. Small-scale corporate experiments and proofs of concept. Stuff like that. But increasingly companies are looking to deploy externally-facing applications that interact not just with employees, but with customers, suppliers, partners, regulators, and the general public. Again, bad things can happen when AI behaves unexpectedly and undesirably.

Pathway: Spend as much time thinking about what could go wrong as you do on what you want to go right.

Get creative. Two of my favorite quotes. One is old, the other is new. First: You cannot ever make anything foolproof because fools are so ingenious. Second: You can’t build AI guardrails high enough.

Vendors are increasingly releasing guardrail frameworks that prevent certain LLM missteps. Obvious ones involve preventing the leakage of personally identifying information or inflammatory language. If you ask ChatGPT to share what it knows about someone or an address or anything like that, it will respectfully decline. Guardrails can also filter content based on context. Want to know about the different types of bombs dropped during World War II? OK. Want to know how to build a bomb? Not OK.

You will also have restrictions based on your own company’s policies and business needs. It’s an extension of the data protection standards and requirements already incorporated into your applications and databases. In addition to protecting the data, you also have to consider the inputs, the outputs, and the content generated.
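As a toy illustration of an output-side guardrail, here is a redaction filter that scrubs email addresses and phone-number-like strings from generated text before it reaches a user. Real guardrail frameworks are far more sophisticated; the patterns and placeholders here are assumptions for the sketch, not production-grade PII detection.

```python
import re

# Illustrative patterns for two common categories of sensitive output.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    """Output guardrail: replace sensitive matches with placeholders."""
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    text = PHONE.sub("[PHONE REDACTED]", text)
    return text

raw = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(redact(raw))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

In practice this kind of filter would sit alongside input-side checks (for prompt injection) and policy checks specific to your business, applied at the boundary between the model and everything else.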

As an enterprise architect, you will (or should) have the final word on what guardrails will be implemented and where, and you would be well served to understand the risks that must be considered: bias, toxic or destructive output, sensitive information, hallucinations, and prompt injection to name a few.

I would be interested to hear from other enterprise architects about your experiences introducing AI into enterprise workflows and applications. What have you found to be similar and what have you found to be unique? Let’s continue the conversation.
