Your Employees Are Already Using AI. You Just Don’t Have a Strategy For It Yet.

Shadow AI risks

A submission by Kevin Ziegler, Fractional CTO and strategic technology advisor

Why the next phase of AI leadership in mid-market companies isn’t about adoption. It’s about visibility.

A few weeks ago I sat across the table from the VP of Operations at a mid-sized manufacturer. About a hundred employees, second-generation family business, well-run. Halfway through our conversation he opened his laptop with a small grin and showed me what he’d been working on. He had a local language model running on his machine. He was using a commercial AI assistant for his own daily work. He was experimenting with prompts, comparing outputs, and quietly building a personal point of view about where this technology could take his operation.

His CEO did not know any of this.

This is the conversation I am having over and over again with leaders of growth-stage SMBs and mid-market companies. The story is always the same. Somewhere inside the organization, employees have already started using AI. Sometimes it is sanctioned; more often it is not. The sales rep is drafting proposals in ChatGPT. The operations manager is summarizing customer tickets in Claude. The finance lead is asking Copilot to clean up a spreadsheet. The marketing coordinator is generating first drafts of newsletter copy in whatever tool a friend recommended. None of this shows up in the IT roadmap. None of it is governed. And in most cases, leadership has only the faintest sense that any of it is happening.

If you are a CEO or operating leader reading this, your instinct is probably to do one of two things. Clamp down with a policy and a list of approved tools. Or, if you are more cautious, hire a consultant to build you an AI strategy from scratch. Both instincts are wrong, or at least incomplete. Here is what I would offer instead.

 

Your employees have already done the hardest part.

AI adoption inside mid-market companies is not a future project. It has already begun, and it has begun without you. That sounds alarming, but it is actually a gift if you know what to do with it.

Think about what your employees have already accomplished. They identified real problems in their day-to-day work that AI could help with. They evaluated tools, often against each other. They validated that the technology actually works on their specific tasks, in their specific context, with their specific data. They built personal proficiency through trial and error. In other words, they did the discovery, the proof of concept, and the hands-on training that a top-down AI initiative would have spent six figures and nine months trying to replicate.

Most leadership teams I work with do not see it this way. They see a control problem. I see a head start.

 

The risk isn’t that they’re using it. It’s that you can’t see what they’re doing.

I speak regularly on AI ethics, including at a prestigious data science institute, and I want to be clear that I am not naive about the risks. They are real. Customer data ending up in consumer-grade tools. Confidential financial information pasted into a chatbot whose terms of service nobody read. Source code shared with services that may train on it. Inconsistent or biased outputs being used to make decisions that affect employees, customers, or vendors. Hallucinated content presented to clients as fact. These are not theoretical concerns.

But here is the part I want you to sit with. The risk is not that your employees are using AI. The risk is that you have no idea what they are using it for, what data is moving through it, or what would happen to your operation if those tools changed, raised their prices, or disappeared tomorrow.

That reframe matters because it changes what governance actually means. Governance is not a list of banned tools. It is visibility. It is the ability to answer three questions on any given Tuesday: Who in my company is using AI? What are they using it for? And what data is involved? If you cannot answer those three questions today, no policy you write will protect you, because you do not yet know what you are policing.

 

What to do instead of clamping down.

If the goal is to channel the AI energy already running through your company into something that compounds for the business, here is the practical sequence I recommend to my clients.

Run a non-punitive AI inventory.

Ask your team, in a way that does not punish honesty, what AI tools they are using and what they are using them for. Make clear that you are not trying to take anything away. You are trying to learn. The first time a leadership team does this, they are usually surprised by both the volume and the creativity of what is happening. That surprise is the value. You cannot govern what you cannot see, and you cannot strategize around capabilities you did not know your team had.

Separate the use cases from the tools.

Once you have an inventory, sort it. There is a difference between a salesperson using AI to draft a cold email and a finance leader pasting payroll data into a public chatbot. The first is a productivity gain you should formalize. The second is a data exposure you should redirect, ideally to a sanctioned tool that does the same job inside an enterprise agreement. The mistake is treating all AI usage as a single risk category. It is not. Sort by data sensitivity and by business impact, then act accordingly.
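If it helps to make the triage concrete, here is a minimal sketch of that sorting logic in Python. The entries, rating scales, thresholds, and bucket names are all hypothetical illustrations, not a prescribed framework; the point is the two axes, not the code.

```python
# Hypothetical sketch: triage an AI-usage inventory along two axes,
# data sensitivity and business impact. Scales and cutoffs are invented.
from dataclasses import dataclass

@dataclass
class UseCase:
    owner: str
    description: str
    data_sensitivity: int  # 1 = public data, 5 = regulated/confidential
    business_impact: int   # 1 = marginal convenience, 5 = core workflow

def triage(use_case: UseCase) -> str:
    """Sort a use case into an action bucket. Sensitivity is checked first."""
    if use_case.data_sensitivity >= 4:
        return "redirect"   # move to a sanctioned, contracted tool
    if use_case.business_impact >= 4:
        return "formalize"  # fund, document, and standardize it
    return "monitor"        # allow it, revisit at the next inventory

inventory = [
    UseCase("sales rep", "drafting cold emails", 1, 4),
    UseCase("finance lead", "pasting payroll data into a chatbot", 5, 3),
    UseCase("marketing coordinator", "newsletter first drafts", 1, 2),
]

for uc in inventory:
    print(f"{uc.owner}: {triage(uc)}")
```

Run against the three example entries above, the salesperson's email drafting lands in "formalize," the payroll paste lands in "redirect," and the newsletter drafts land in "monitor," which mirrors the distinction in the paragraph: same technology, very different handling.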

Standardize on a small number of sanctioned tools.

You probably do not need an AI procurement strategy with twenty vendors. For most mid-market companies, two or three tools cover the vast majority of legitimate use cases. Pick them deliberately, with attention to data handling, contractual protections, and integration with the systems you already use. Then make those tools easy to access. The fastest way to drive shadow AI usage underground is to make the sanctioned path harder than the unsanctioned one.

Write a usage policy that respects your employees’ intelligence.

A good AI usage policy is short, plainly written, and focused on principles rather than tool lists. It tells people what kinds of data they cannot put into AI tools, what they need to disclose when AI was meaningfully involved in their work, and where to go when they want to try something new. It treats your employees as adults who can make judgment calls inside a clear framework. It does not try to anticipate every scenario, because it cannot, and trying makes the document so long no one reads it.

Connect the dots to your roadmap.

This is the part most companies skip. The use cases your employees have already validated should feed directly into your strategic technology roadmap. If three different people in your operations team are using AI to summarize customer interactions, that is not a curiosity. That is a signal that your company has a documented appetite and proven workflow for AI-assisted customer intelligence, and that should influence what you build, buy, or invest in next year. Bottom-up signals should shape top-down strategy. In most companies they don’t, because nobody is listening for them.

 

The leadership shift.

The fractional CTO work I do increasingly looks less like building strategies in a vacuum and more like helping leaders see what is already happening in their own organizations. The companies that will get the most out of AI over the next three years are not the ones with the boldest vision decks. They are the ones whose leadership teams have done the unglamorous work of looking, listening, and channeling the energy that is already flowing through the building.

That manufacturer’s VP of Operations was not a problem to be managed. He was the most valuable AI asset in the company, and his CEO did not know it. Once we put that on the table, the conversation changed. The question stopped being how to get the company started with AI. It became how to honor the work this team had already done and make it count for the business.

If you are leading a company right now and you are not sure what AI is already happening inside your walls, that uncertainty is your starting point. Not a policy. Not a strategy deck. A conversation.

Start there. The rest gets a lot easier.