Financial Services Firm
Content Governance
Building tomorrow's intelligence on yesterday's mess


The call of 2025
"We want to implement AI for our internal content. ChatGPT/Claude/Copilot, automated tagging, the whole thing. Can you help?"
This was a Fortune 500 insurance company. They had thousands of documents for insurance advisors. No working search. Little metadata, or conflicting tags depending on the department. Everything in PDFs. Some dating back to 2003.
"Sure," I said. "But first, let me guess: advisors can't find anything, your best content gets zero views, and everyone just emails each other the same five forms over and over?"
Long pause.
"How did you know?"
Because everyone wants AI to solve their content problems immediately. Nobody wants to do the unglamorous prep work that makes AI possible. It's like asking a chef to make dinner when your kitchen is just a room full of unopened Amazon boxes and a cat.
What we uncovered
Their platform was where good content went to retire:
4000+ pieces of content (90% PDFs, naturally)
Zero search functionality (removed in an "upgrade" years ago)
75% missing metadata (the rest was wrong)
40% broken links (pointing to servers that no longer existed)
95% operational use (same five forms, everything else ignored)
But here's what made me take the project: they weren't asking for a prettier website. They wanted to build something that could actually learn and help their advisors. They just didn't know that AI needs context and structure to work.
You can't just point ChatGPT at a folder of PDFs and say "make this smart." Trust me, I tried. I try every year. The results are... creative.


Building the invisible foundation
We spent eight weeks doing the work nobody sees but AI desperately needs.
Step 1: The content reality check
We evaluated thousands of pieces of content. Not skimmed. Actually evaluated. We used AI-assisted classification to chip away at the volume and surface patterns, then scrubbed everything again against criteria that would matter to machines (a sketch of those checks follows the table):
| What we measured | Why AI needs it | The ugly truth |
|---|---|---|
| Metadata completeness | AI needs to know what things are | 75% had none |
| Content structure | AI needs patterns to learn from | PDFs. All PDFs. |
| Taxonomy alignment | AI needs categories that make sense | "Misc" was the biggest folder |
| Quality scoring | AI needs to know what "good" looks like | Zero content scored an A |
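To make "criteria that would matter to machines" concrete, here is a minimal sketch of that kind of check in Python. The required fields, the "Misc" test, and the record shape are illustrative assumptions, not the client's actual schema.

```python
# Minimal sketch of a machine-facing audit check.
# Required fields and record shape are illustrative assumptions, not the client's schema.

REQUIRED_METADATA = ["title", "audience", "product_line", "content_type", "owner", "review_date"]

def audit_record(record: dict) -> dict:
    """Return simple audit flags for one content record."""
    metadata = record.get("metadata", {})
    missing = [name for name in REQUIRED_METADATA if not metadata.get(name)]
    return {
        "id": record.get("id"),
        "metadata_completeness": 1 - len(missing) / len(REQUIRED_METADATA),
        "missing_fields": missing,
        # PDFs are opaque to most downstream AI tooling without extra processing.
        "needs_restructuring": record.get("format", "").lower() == "pdf",
        # A category of "Misc" tells a machine (and a human) nothing.
        "taxonomy_aligned": metadata.get("category") not in (None, "", "Misc"),
    }

sample = {"id": "form-0042", "format": "pdf",
          "metadata": {"title": "Disability claim form", "category": "Misc"}}
print(audit_record(sample))
```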
The hard part? The people part.
Their advisors were brilliant people drowning in garbage information architecture. One advisor told me: "Sometimes you scroll endlessly to find a form." Another just gave up: "I keep my own folder of the five things I actually use."
After we interviewed 15 advisors, the issue turned out to be something nobody expected.
The problem wasn't that the content was badly organized. The problem was that three completely different types of advisors were trying to use the same system.

The patterns were striking. Legacy advisors had admins to look things up for them; they needed notifications, not navigation. Overwhelmed navigators were teams of one who needed filtering, not more features. Digital pioneers knew the tools (and were the meatspace version of the search engine for others) and needed guidance, not access.
|  | Legacy | Overwhelmed Navigator | Digital Pioneer |
|---|---|---|---|
| Trigger (what they need) | Client requests form | Product comparison for prospect | Sales pitch preparation |
| What they actually do | Check saved files → email MGA → use outdated form | Search SRC → get frustrated → call wholesaler | Browse randomly → feel overwhelmed → ask mentor |
| What they wish existed | "Just tell me when forms update" | "Show me what's changed" | "Tell me what I actually need" |
Step 2: Creating AI training data (without calling it that)
Here's the secret: AI doesn't need perfect content. It needs coherent patterns. So we built them:
The taxonomy was the first place to establish context and structure.
Simple, right? But now AI can understand queries like:
"Show me disability claim forms"
"What do advisors need for new accounts?"
"Find everything about critical illness for clients"
We didn't just organize content. We taught it how to be found.
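As a rough illustration of what "taught it how to be found" means in practice, here is a minimal sketch of a faceted content record and a naive lookup. The facet names and sample records are hypothetical, not the client's taxonomy.

```python
from dataclasses import dataclass, field

# Hypothetical facets for illustration; the real taxonomy belongs to the client.
@dataclass
class ContentRecord:
    title: str
    content_type: str   # e.g. "form", "guide", "rate sheet"
    product_line: str   # e.g. "disability", "critical illness"
    audience: str       # e.g. "advisor", "client"
    task: str           # e.g. "claims", "new account", "sales"
    tags: list[str] = field(default_factory=list)

def matches(record: ContentRecord, **facets: str) -> bool:
    """Naive faceted lookup: every requested facet must match the record."""
    return all(getattr(record, key, None) == value for key, value in facets.items())

catalog = [
    ContentRecord("Disability claim form", "form", "disability", "advisor", "claims"),
    ContentRecord("Critical illness client brochure", "guide", "critical illness", "client", "sales"),
]

# "Show me disability claim forms"
print([r.title for r in catalog if matches(r, product_line="disability", content_type="form")])
```

Once every item carries the same small set of facets, the same record can back filtered search, automated tagging, and eventually retrieval for an AI assistant.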
Step 3: The prompt library nobody knew they needed
Everyone thinks AI just magically knows how to write like your brand or your audience. Spoiler: it kind of, well, doesn't.
We built an initial set of 9+ prompt templates for different scenarios:
Regulatory updates (lawyer-approved language)
Advisor communications (conversational but compliant)
Client materials (simple but not patronizing)
Each prompt had:
Context instructions
Tone guidelines
Compliance boundaries
Quality checkpoints
This isn't sexy work. But it's the difference between AI that sounds like a robot reading regulations and AI that actually helps humans. It's the difference between guessing and learning.
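For a sense of what those four parts look like when assembled, here is a minimal sketch of one template. The wording, placeholders, and checkpoints are illustrative assumptions; the real templates used compliance-approved language.

```python
# Sketch of a prompt template with the four parts described above:
# context, tone, compliance boundaries, quality checkpoints.
# All wording is illustrative, not the client's approved language.

TEMPLATE = """\
Context: You are drafting an update for licensed insurance advisors about {topic}.
Source material: {source_excerpt}

Tone: Conversational but precise. Plain language, no hype, no promises of returns.

Compliance boundaries:
- Do not give individualized financial or legal advice.
- Do not invent product features, rates, or effective dates.
- Flag anything that requires legal review with [NEEDS LEGAL REVIEW].

Quality checkpoints (answer before finalizing):
1. Is every factual claim traceable to the source material?
2. Would an advisor know what to do next after reading this?
"""

def build_prompt(topic: str, source_excerpt: str) -> str:
    return TEMPLATE.format(topic=topic, source_excerpt=source_excerpt)

print(build_prompt("updated disability claim forms",
                   "Form DI-100 replaces DI-99 effective June 1."))
```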
The framework that made it scalable
We didn't just fix their content. We built systems that could fix themselves:
The content scorecard
Every piece of content was graded on three dimensions:
Content quality (45%) - Is it actually good?
Technical health (30%) - Can systems use it?
Business impact (25%) - Does it matter?
But here's the clever part: we built it as an algorithm (a sketch follows this list). Now they can:
Auto-score new content on upload
Flag problems before they publish
Track quality over time
Train AI on what "good" looks like
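Here is a minimal sketch of that scoring algorithm, assuming the three sub-scores arrive as 0-100 values from upstream checks; the letter-grade cutoffs are illustrative, not the client's thresholds.

```python
# Sketch of the three-dimension scorecard as a weighted score.
# Sub-scores are assumed to be 0-100 values from upstream checks;
# the letter-grade cutoffs are illustrative.

WEIGHTS = {"content_quality": 0.45, "technical_health": 0.30, "business_impact": 0.25}

def score_content(scores: dict[str, float]) -> tuple[float, str]:
    total = sum(scores[dim] * weight for dim, weight in WEIGHTS.items())
    grade = "A" if total >= 90 else "B" if total >= 75 else "C" if total >= 60 else "D"
    return round(total, 1), grade

# Auto-score on upload, flag before publish, track over time.
print(score_content({"content_quality": 70, "technical_health": 50, "business_impact": 80}))
```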
The governance that doesn't suck
Most governance is where good ideas go to die in committee. We made it simple:
Three questions for every piece of content:
Can AI categorize this automatically? (If no, fix the taxonomy)
Would you trust AI to summarize this? (If no, fix the structure)
Could AI find this based on user intent? (If no, fix the metadata)
That's it. No 47-page governance document. Just three questions that the governance committee could use as starters to keep AI honest. Partnering with legal and compliance teams, I established:
Governance Principles:
Transparency: All AI-generated content clearly labeled
Accountability: Human review required for customer-facing content
Fairness: Bias detection and mitigation processes
Privacy: No PII in training data or prompts
Security: Encrypted prompt storage and access controls
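As a sketch of how principles like these can become an automated pre-publish gate rather than a policy PDF: the field names and PII patterns below are illustrative assumptions, and anything real would go through legal and compliance review.

```python
import re

# Illustrative pre-publish checks derived from the governance principles above.
# Field names and patterns are assumptions, not the client's actual rules.

PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # SSN-like number
    re.compile(r"\b\d{16}\b"),                  # bare card-number-like string
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),     # email address
]

def governance_gate(item: dict) -> list[str]:
    """Return a list of blocking issues; an empty list means the item may proceed."""
    issues = []
    if item.get("ai_generated") and not item.get("ai_label_shown"):
        issues.append("Transparency: AI-generated content must be clearly labeled.")
    if item.get("customer_facing") and not item.get("human_reviewed"):
        issues.append("Accountability: customer-facing content requires human review.")
    if any(p.search(item.get("body", "")) for p in PII_PATTERNS):
        issues.append("Privacy: possible PII found; remove before using in prompts or training.")
    return issues

draft = {"ai_generated": True, "ai_label_shown": False, "customer_facing": True,
         "human_reviewed": False, "body": "Contact jane.doe@example.com for the new form."}
print(governance_gate(draft))
```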
Then, to move decisions along, we defined benchmarks for what "good" looks like.
The numbers that mattered
Before our work:
Finding a form: 10–15 minutes of scrolling
Content updates: Email blast into the void
New advisor onboarding: "Ask Jennifer, who knows where everything is"
After building the foundation:
AI-ready taxonomy: 100% of content categorized
Prompt library: 12+ templates ready to deploy
Quality baseline: Every piece scored and prioritized
Metadata framework: Ready for automated generation
"You didn't just organize our content. You made it possible for us to stop being a PDF library and start being an intelligent platform. The AI implementation that would have failed spectacularly six months ago is now actually possible."
What I learned about AI readiness
AI without structure is just expensive chaos. You can't sprinkle AI on a broken content system and expect magic. The magic comes from doing the boring work first.
Governance for AI is different. Traditional governance asks "Is this approved?" AI governance asks "Can a machine understand this?" Different game, different rules.
Prompt engineering is content strategy. Those templates we built? They are not just instructions for AI. They are codified expertise that scales. It's what content strategists have always done, just for a different audience.
The foundation is the innovation. Everyone wants the flashy AI demo. But the real innovation is building a foundation so solid that AI actually works. That's the competitive advantage.
The plot twist
CUT TO: SIX MONTHS LATER
THEM:
"We still haven’t started on the AI implementation."
ME:
"Okay. And the good news?"
THEM:
"Actually... the foundation? Our advisors can actually find things now. Support tickets dropped 40%. The governance framework caught 200+ issues before they went live. The taxonomy made our filtered search actually work. We might not even need AI the way we originally thought. But when we do implement it, and give it the right work, it will actually work."
That's the thing about good foundations: they pay dividends even before you build on them.
Sometimes the best AI strategy is using AI to make your content so coherent that you don't desperately need AI to fix it later. You want AI because it can make something good even better, not because it's your only hope of making something terrible slightly less terrible.
Project details
We delivered:
Complete content audit with prioritization framework
Dependency matrix for phased implementation
Content governance recommendations
Information architecture for the rebuild
Product roadmap
Content evaluated: 4,000+ pieces
Prompt templates created: 12+
Time until AI-ready: 0 (ready now)
Read more stories:



Toyota dealer training transformation
How we brought car dealerships into the digital age and killed the three-ring binder



Gotta Serve 'Em All: Redesigning Pokémon.com
Building a digital home for kids, parents, and 30-something collectors



Can you rewrite 13.8 billion years of history?
See how a strategic redesign of content made an edutech platform come to life.
