The Rise of "Shadow AI"
What happens when there's no policy
Have you ever used ChatGPT, Claude, or Gemini to complete a task at work and then felt awkward when others praised your contribution to the project? How about to edit an email or complete an assignment? Perhaps to “calculate” what reciprocal tariffs to apply to a trading partner? In the absence of formal policy, it can be difficult to know what is your work, what belongs to AI, and what is a healthy combination of the two. Welcome to the world of “Shadow AI” - helpful, efficient tools that deliver short-term productivity boosts but may carry longer-term risks for your organization or your personal credibility. But first…
Rick’s AI Express
Of note in the past week:
deep research mode is now available in many of the free AI models, including Claude, Gemini, ChatGPT, Perplexity, and DeepSeek. This advance suggests that prompt engineering will likely matter less in the future, given how these LLMs analyze and unpack your chat request on their own.
it’s very interesting to see HOW people are using AI in their daily lives in 2025, and how that has shifted since 2024 - for example, the growing use of AI for therapy and companionship.
there is a significant rise in the use of AI to create deepfake pornography, to spread disinformation, and to prey on people via social media scams - which is probably why we need MORE regulation, not less. The EU continues to lead the way on thoughtful AI policy.
and lastly, a recent study shows that people are using AI regularly in their daily lives, and often doing so inappropriately. A nice segue to today’s topic!
I know what you’re thinking - what’s the harm in using AI to massage an email reply to a customer, to improve your marketing pitch, or to look for cost savings in your organization? Three years ago we wouldn’t be having this conversation; we would still be fighting the plagiarism battle - the unauthorized use of another person’s words or ideas. And to the credit of most organizations, there are clear policies in place against plagiarism: Turnitin at schools, with failure or suspension as the stick; shame and forced resignation for business or university leaders caught plagiarizing; firing or demotion for employees or team members who were caught. So what has changed with the widespread availability and fast adoption of AI?
The short answer is that most organizations play catch-up when technology changes, because tech moves fast and organizations move slowly (DOGE might be the infamous exception to this rule!). There has always been a gap between innovation and policy - we often build the highway before deciding what the speed limit should be: traffic is already moving before the signs go up! And the same is true with technology - Facebook, TikTok, Twitter/X, and Instagram exploded in use before schools, workplaces, or governments had any real policies about privacy, harassment, misinformation, or screen time. We’re pretty good at Monday-morning quarterbacking - lots of “woulda, coulda, shoulda” once the game is over.
I’m going to define Shadow AI (with the help of Claude AI) as the unauthorized or undisclosed use of artificial intelligence tools in professional and educational environments, where individuals independently or covertly adopt AI tools to enhance their productivity and output without formal organizational policies, oversight, or transparency. I can hear the protests already: what’s the big deal about using AI for a task if there’s a net benefit and no injured party; in other words, “no harm, no foul”? Or in the words of that Canadian rocker Kim Mitchell:
Might as well go for soda
Nobody hurts and nobody cries
Might as well go for soda
Nobody drowns and nobody dies.
There are several problems with this approach of “move fast and break things” or “ask for forgiveness, not permission”, and they have less to do with efficiency or productivity and more to do with unintended personal, organizational, or customer consequences - privacy, compliance, confidentiality; but also more immediate hazards, like the loss of personal agency, skill atrophy, and mistrust when AI is the only tool in your arsenal. Let’s break this down a bit more (again, with thanks to Claude - note the disclosure of “Shadow AI” use):
On a personal level, the use of Shadow AI:
can lead to deterioration of fundamental skills like writing, critical thinking, and problem-solving.
creates worry about whether or not to disclose AI use, and how colleagues or supervisors might judge your contributions if they only knew.
can cause reputational damage if your AI use is discovered and perceived as misleading or inappropriate.
impairs your ability to work independently or collectively.
can lead to loss of trust (or your job) if you are not careful to check AI outputs for errors or hallucinations.
On a corporate (business/school/non-profit/government) level, the use of Shadow AI:
creates inconsistent results and standards when varying levels of AI are used across teams.
creates significant security and privacy risks when sensitive organizational or customer data is shared with AI companies.
raises issues about copyright or patentability due to the nature of AI-generated results.
may inadvertently violate industry regulations, copyright laws, or ethical standards without proper oversight.
may cause critical organizational processes to become dependent on unofficial AI tools without risk assessment or contingency planning.
may reduce the development and documentation of institutional expertise as employees outsource thinking to AI.
may lead to a loss of authentic human connections with clients or stakeholders when communications are heavily AI-mediated.
This is NOT TO SAY that you can never use AI in your work (as I do, regularly); rather, it means that either you need to disclose the unofficial use of AI tools, OR your organization needs a policy that clearly outlines how AI may be used when dealing with certain types of personal, corporate, or client information. The goal is to have standards or rules that are easy for all users and customers to understand, and that allow workers to use AI tools for specific tasks or processes, rather than simply banning AI (which leads to MORE Shadow AI use!). Think of this as something similar to the policies that every organization has on dishonesty, plagiarism, health and safety, or discrimination/bullying/sexual harassment. You might consider adapting and adding the following to your workplace policies, to encourage appropriate and honest use of AI - with thanks to Claude for consolidating my random thoughts.
AI Usage Policy Framework
Purpose Statement
This policy establishes guidelines for using artificial intelligence tools in our organization to balance innovation and productivity with accountability, security, and ethical considerations.
Core Principles
Transparency: Be open about when and how AI tools are used in your work.
Accountability: You remain responsible for all work you submit, including AI-assisted content.
Security: Never input sensitive, confidential, or proprietary information into external AI tools.
Quality Control: Verify AI outputs for accuracy, appropriateness, and alignment with organizational standards.
Attribution: Clearly disclose AI contributions when presenting work to colleagues, supervisors, or external parties.
Acceptable Use Guidelines
Permitted Uses: AI tools may be used for drafting routine communications, generating ideas, formatting documents, simple data analysis, and editing for clarity.
Restricted Uses: Seek approval before using AI for customer-facing content, strategic decisions, financial analyses, or content representing organizational positions.
Prohibited Uses: Without appropriate review, do not use AI to create legally binding documents, make final decisions affecting stakeholders, or generate content that requires specialized expertise.
Disclosure Requirements
Include a brief note on AI assistance in document metadata or acknowledgments when appropriate.
In educational settings, follow instructor guidelines on permitted AI use for assignments.
During performance reviews, provide an honest assessment of which skills you've developed personally versus tasks where you've leveraged AI assistance.
Implementation
Departments may adapt these guidelines to their specific needs with leadership approval.
The organization will provide approved AI tools and training on their appropriate use.
This policy will be reviewed quarterly to adapt to evolving technology and organizational needs.
Enforcement
Policy violations will be addressed according to existing disciplinary procedures, with emphasis on education rather than punishment for first-time or minor infractions.
Developing policies around Shadow AI use is primarily about TRUST. Are employees (or students, or leaders) using AI to short-circuit or bypass the hard work that is typically required to be successful in your organization? Is there consistency in the way that AI use is encouraged or prohibited, or is it a moving target that depends on who used AI, for what, and with what results? Is it okay to use unsanctioned AI tools if you end up landing a big account or saving a customer money? Do the results or consequences dictate what is acceptable in your organization? If your answer is yes, it might be time to revisit your vision and mission statements!
The final word on this topic: don’t let user action (or your inaction) dictate your organization’s policies on AI use. Ben Franklin is credited with saying that “failure to plan is planning to fail”; Neil Peart sang, “if you choose not to decide, you still have made a choice”. Let’s get ahead of Shadow AI use and create a framework that allows all stakeholders to trust that each group has the others’ best interests at heart!
That’s all for now,
Cheers,
-Rick

