Scaling Myself with AI
How We Built Internal Prompt Assistants at Evidnt
TL;DR: We used AI to scale sales and client communication at Evidnt by turning our internal knowledge into prompt-driven assistants tailored to each team.
The result? Consistency, speed, and confidence across every client interaction—without months of training. Here’s how we did it, what we learned, and how others can try it too.
1. The Problem We Wanted to Solve
As the founder and primary salesperson at Evidnt, I found myself repeating the same narratives: why we built the platform, how our measurement works, what makes us different, and how to upsell our services. I had spent years crafting those answers.
But as we grew, I needed my team to articulate it just as clearly, without relying on months of shadowing or my presence in every conversation. The challenge: How do you scale your voice, your logic, and your insights so that even the most junior hire can sound like a seasoned expert?
Training alone wasn’t enough. We needed an internal system that could bring our knowledge to life, on demand, in-context, and on-brand.
2. Why Using AI Internally Is a No-Brainer (But Not Plug-and-Play)
AI can instantly access, retrieve, and synthesize complex knowledge. For internal enablement, sales, customer success, and onboarding, it’s a natural fit. But implementing it thoughtfully requires more than just “add GPT.” Here’s what we learned building ours:
a) Prompt Rigor
Prompts aren’t one-and-done. Early on, we saw hallucinations—incorrect data, made-up numbers, overly confident claims. Over time, we learned to treat prompts like code: test them, version them, stress-test them under different inputs. This dramatically reduced errors and made responses more reliable.
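To make that concrete, here is a rough sketch of what "treating prompts like code" can look like in Python. The prompt text, the `run_prompt` stub, and the golden cases are hypothetical placeholders, not our production setup.

```python
# prompt_tests.py - a minimal, hypothetical prompt regression harness.
# Prompts are versioned like code, and each version is checked against
# "golden" inputs with simple assertions before it ships.

PROMPTS = {
    "pitch_followup_v3": (
        "You are a sales assistant. Write a concise follow-up email. "
        "Never invent metrics; only use numbers provided in the input."
    ),
}

GOLDEN_CASES = [
    # (prompt_id, user_input, substrings the output must contain)
    ("pitch_followup_v3",
     "Brand: Acme Coffee. Goal: schedule a measurement demo.",
     ["Acme Coffee", "demo"]),
]

def run_prompt(prompt_id: str, user_input: str) -> str:
    """Plug in your LLM call here; stubbed out for the sketch."""
    raise NotImplementedError

def test_prompts():
    failures = []
    for prompt_id, user_input, must_contain in GOLDEN_CASES:
        output = run_prompt(prompt_id, user_input)
        for needle in must_contain:
            if needle.lower() not in output.lower():
                failures.append((prompt_id, needle))
    assert not failures, f"Prompt regressions: {failures}"
```

Running a small suite like this whenever a prompt changes catches regressions the same way unit tests catch code bugs.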
b) Security Considerations
As a smaller company, we didn’t encounter the same compliance challenges as larger enterprises, but we were careful not to upload any confidential information to public LLMs. Instead, we vetted all content and used tools that ensured secure data handling. This remains a major concern for bigger organizations, so it's important to review terms of service and data-flow policies thoroughly.
c) Prompt Sensitivity
LLMs are surprisingly finicky. Too much instruction? The output becomes rigid and robotic. Too little? You get generic, unhelpful fluff. We had to strike a balance—clear goals with room for the model to reason. We continually tracked output quality across various use cases.
d) Dealing with Jargon and Style
I’m not a fan of AI’s default tone: the overuse of emojis, the unnecessary padding, the vague corporate jargon. To fix this, we gave the model example answers and a tone-of-voice guide, which helped us keep responses clear, confident, and concise.
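In practice, that meant putting a short tone guide and a style example in front of every task. The guide text and the Q&A below are illustrative stand-ins, not our actual materials.

```python
# Hypothetical sketch: a short tone-of-voice guide plus one fictional example
# answer, prepended to every system prompt so responses share the same style.
TONE_GUIDE = """
Voice: confident, plain-spoken, concise.
Avoid: emojis, filler phrases, vague corporate jargon.
Prefer: short sentences, concrete numbers when provided, one clear next step.
"""

# Fictional Q&A used purely to demonstrate the style we expect.
EXAMPLE_ANSWER = """
Q: What makes your measurement different?
A: We tie campaigns to outcomes you can verify, and we show our work.
"""

def build_system_prompt(task_instructions: str) -> str:
    """Combine the tone guide, a style example, and the task at hand."""
    return (
        f"{TONE_GUIDE}\n"
        f"Example of the style we expect:\n{EXAMPLE_ANSWER}\n"
        f"{task_instructions}"
    )
```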
e) Feedback Loops
A common trap: assuming it “just works.” We made it a practice to review assistant outputs weekly, track accuracy, and collect team feedback. These feedback loops helped the assistants become more aligned with real-world use and enabled us to adapt as our messaging evolved.
3. How We Did It: A Practical Blueprint
We began by compiling our internal sales and account management materials, including FAQs, onboarding documents, pitch decks, and process explanations. These became the foundation of our assistants.
Here’s how we operationalized it:
Step 1: Create a Knowledge Base
We embedded our materials into a searchable store using tools like OpenAI’s Assistants API. If you’re working with just 10-20 docs, there’s no need for complex infrastructure; you can use OpenAI’s file upload directly. For larger use cases, third-party libraries such as LangChain or LlamaIndex are good options.
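For the simple end of that spectrum, the setup looks roughly like this with the OpenAI Python SDK. Note that the Assistants and vector store endpoints live under `client.beta` in the SDK versions we tried and may move between releases, and the file names and instructions here are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload an internal doc so the assistant can search it.
faq_file = client.files.create(
    file=open("sales_faq.pdf", "rb"),
    purpose="assistants",
)

# Put uploaded files into a vector store for retrieval.
vector_store = client.beta.vector_stores.create(name="Internal sales docs")
client.beta.vector_stores.files.create(
    vector_store_id=vector_store.id,
    file_id=faq_file.id,
)

# Create an assistant that answers from those documents.
assistant = client.beta.assistants.create(
    name="Internal Sales Assistant",
    model="gpt-4o",
    instructions=(
        "Answer questions about our platform using the attached documents. "
        "If the answer is not in the documents, say so instead of guessing."
    ),
    tools=[{"type": "file_search"}],
    tool_resources={"file_search": {"vector_store_ids": [vector_store.id]}},
)
```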
Step 2: Define Roles and Use Cases
We mapped assistants to specific roles:
Sales: outbound messaging, follow-ups, brand positioning
Account Management: onboarding templates, campaign reporting, upsell framing
Strategy and Planning: explaining methodologies, articulating performance logic
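One lightweight way to encode that mapping is a plain config that pairs each role and use case with its own instruction block. The entries below are illustrative placeholders, not our actual prompts.

```python
# Hypothetical role -> use case -> instruction scaffold. Each entry becomes
# the task portion of the system prompt, layered on the shared tone guide.
ASSISTANT_CONFIG = {
    "sales": {
        "outbound_message": "Draft a short outbound note positioning ...",
        "follow_up": "Write a follow-up email referencing the last meeting ...",
    },
    "account_management": {
        "onboarding": "Produce an onboarding checklist for the new client ...",
        "upsell_framing": "Frame an upsell recommendation based on results ...",
    },
    "strategy": {
        "methodology": "Explain our measurement methodology in plain terms ...",
    },
}

def get_instructions(role: str, use_case: str) -> str:
    """Look up the instruction block for a given role and use case."""
    return ASSISTANT_CONFIG[role][use_case]
```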
Step 3: Build the Interface
We developed a simple internal React app. Team members choose a use case (for example, "follow-up email after first pitch"), enter a few details such as brand, product, and campaign goals, and the assistant generates a message that aligns with our tone, value proposition, and playbook. Users can also upload campaign data or email follow-ups and ask specific questions, receiving clear responses in our consistent tone.
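Behind a form like that, the core of the app is just prompt assembly plus one API call. Here is a minimal sketch, assuming the Chat Completions API and the hypothetical `build_system_prompt` and `get_instructions` helpers from the earlier sketches.

```python
from openai import OpenAI

client = OpenAI()

def generate_message(role: str, use_case: str, brand: str, product: str, goal: str) -> str:
    """Assemble a prompt from the selected use case and form fields, then generate."""
    system_prompt = build_system_prompt(get_instructions(role, use_case))
    user_prompt = (
        f"Brand: {brand}\n"
        f"Product: {product}\n"
        f"Campaign goal: {goal}\n"
        "Write the message described above."
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content
```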
Step 4: Iterate and Improve
We reviewed weekly logs, marked outputs as strong or weak, and refined prompts accordingly. Eventually, assistants could even recommend upsells or handle nuanced client objections—pulling from actual case studies and results.
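The review loop itself doesn’t need much tooling beyond an append-only log you can scan each week. A sketch of the kind of record that works well; the field names are illustrative.

```python
import json
from datetime import datetime, timezone

LOG_PATH = "assistant_log.jsonl"

def log_output(use_case: str, prompt_version: str, output: str, rating: str, notes: str = "") -> None:
    """Append one reviewed output; rating is 'strong' or 'weak'."""
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "use_case": use_case,
        "prompt_version": prompt_version,
        "output": output,
        "rating": rating,
        "notes": notes,
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def weak_rate_by_prompt() -> dict:
    """Weekly review helper: share of 'weak' ratings per prompt version."""
    counts: dict = {}
    with open(LOG_PATH, encoding="utf-8") as f:
        for line in f:
            r = json.loads(line)
            total, weak = counts.get(r["prompt_version"], (0, 0))
            counts[r["prompt_version"]] = (total + 1, weak + (r["rating"] == "weak"))
    return {version: weak / total for version, (total, weak) in counts.items()}
```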
4. What You Can Try
If you want to explore this for your team, start simple:
Gather your key materials: Sales decks, FAQs, customer docs, onboarding guides
Use OpenAI’s Assistants API: You can upload files and prompt the model to reference them
Create specific use cases: Don’t try to solve everything—pick three common communication moments (e.g., intro email, explaining product value, handling objections)
Define your tone and structure: Give the model a few example responses with the style you want
Build a basic front end: Even Streamlit or Google Sheets + API calls can be enough to start (see the sketch after this list)
Review outputs regularly: Ask your team for feedback, and refine prompts based on their input
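For the front end, a few lines of Streamlit really are enough to start. This sketch assumes a `generate_message`-style helper like the one outlined in Step 3, living in a hypothetical `assistant` module.

```python
# app.py - run with: streamlit run app.py
import streamlit as st

from assistant import generate_message  # hypothetical module wrapping the Step 3 helper

st.title("Internal Prompt Assistant")

use_case = st.selectbox("Use case", ["Intro email", "Explain product value", "Handle objection"])
brand = st.text_input("Brand")
goal = st.text_area("What should this message achieve?")

if st.button("Generate"):
    draft = generate_message(role="sales", use_case=use_case, brand=brand, product="", goal=goal)
    st.text_area("Draft", draft, height=300)
```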
5. Final Thoughts
This isn’t about replacing people. It’s about scaling the best parts of your team—your voice, your logic, your know-how—so that everyone can perform at their highest level faster.
It took me months to train new hires to speak the way I wanted. With AI, we got them 80% of the way there in a week.
This approach worked for us at Evidnt. If you’re thinking about how to scale your own sales, CX, or strategy teams—I’m happy to share more or help you think through how this could work in your org.