A Practical RevOps Audit Checklist
Most RevOps problems show up in small, annoying ways. A lead goes missing. A report doesn’t match what sales thinks happened. Someone asks a simple question and no one trusts the answer.
An audit is just a structured way to look at those moments and figure out why they keep happening.
This checklist focuses on what you can see, trace, and verify.
What the system is actually being used for
Start with behavior, not intent.
Check
What people complain about in Slack or meetings.
What gets manually tracked in spreadsheets “just in case.”
What questions leadership asks repeatedly because the answer is unclear.
Real examples
A VP asks how many inbound leads converted last quarter and gets three different numbers.
Sales managers keep their own pipeline spreadsheet because they don’t trust the CRM.
Marketing exports leads weekly to double-check follow-up.
What usually breaks
The system was built for a past stage of the company.
Teams rely on it differently, so expectations don’t match.
No one ever agreed on what the system needs to answer now.
What good looks like
Everyone agrees on what the system should reliably tell them.
It is clear which questions the system can answer and which it cannot.
Manual tracking exists only as a temporary workaround, not a permanent crutch.
If you can’t name the top three questions the system needs to answer, everything else will feel noisy.
How a lead really moves from start to finish
Ignore diagrams. Follow an actual lead.
Check
Pick five recent leads.
Look at when they were created.
Track every status change.
Note when a person took action.
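If your CRM can export a status-history file, the trace above can be scripted instead of clicked through. A minimal sketch, assuming a hypothetical export with columns lead_id, changed_at (ISO 8601), old_status, new_status, and changed_by; your system's field names will differ:

```python
# Replay one lead's status history from a hypothetical CRM export.
# Assumed columns: lead_id, changed_at (ISO 8601), old_status, new_status, changed_by.
import csv
from datetime import datetime

def trace_lead(rows, lead_id):
    """Return each status change for one lead, with days elapsed since the previous change."""
    history = sorted((r for r in rows if r["lead_id"] == lead_id),
                     key=lambda r: r["changed_at"])
    out, previous = [], None
    for r in history:
        ts = datetime.fromisoformat(r["changed_at"])
        gap_days = (ts - previous).days if previous else 0
        out.append((r["changed_at"], gap_days, r["old_status"], r["new_status"], r["changed_by"]))
        previous = ts
    return out

# Usage against a real export might look like:
# with open("status_history.csv") as f:
#     for step in trace_lead(list(csv.DictReader(f)), "L-00123"):
#         print(step)
```

Running this over five leads makes the gaps and unexplained jumps visible at a glance, which is the whole point of the trace.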
Real examples
A lead enters from a webinar, sits unassigned for two days, then gets marked “contacted” without an email logged.
A lead is marked qualified, then disqualified, then re-qualified with no notes.
A lead converts to an opportunity weeks after the first meeting because no one updated the record.
What usually breaks
Statuses mean different things to different people.
Automation updates fields without context.
Sales updates happen after the fact, if at all.
What good looks like
You can explain why each status change happened.
Timing between steps makes sense.
There are fewer steps, not more.
If you can’t explain what happened to a lead without guessing, the funnel is lying to you.
How leads get assigned and followed up
This is where most revenue leaks quietly.
Check
Look at how inbound leads are assigned today.
Measure time from creation to first action.
Look for reassignment or inactivity.
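Time to first action is easy to compute once you have an export. A minimal sketch, assuming a hypothetical file with one row per lead and columns lead_id, created_at, and first_action_at (empty when no one ever touched the lead); adjust to whatever your CRM actually exports:

```python
# Compute time-to-first-touch from a hypothetical per-lead export.
# Assumed columns: lead_id, created_at, first_action_at (ISO 8601, empty if untouched).
from datetime import datetime
from statistics import median

def first_touch_hours(leads):
    """Return (median hours from creation to first action, lead_ids never touched)."""
    hours, untouched = [], []
    for lead in leads:
        if not lead["first_action_at"]:
            untouched.append(lead["lead_id"])
            continue
        delta = (datetime.fromisoformat(lead["first_action_at"])
                 - datetime.fromisoformat(lead["created_at"]))
        hours.append(delta.total_seconds() / 3600)
    return (median(hours) if hours else None), untouched
```

The untouched list is usually the more interesting output: those are the leads that leaked.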
Real examples
Leads route by territory rules no one has touched in a year.
SDRs cherry-pick leads and leave the rest untouched.
Leads get reassigned when reps are out, then forgotten.
What usually breaks
Assignment logic grows more complex over time.
Exceptions become the norm.
No one reviews whether routing still makes sense.
What good looks like
Most leads route automatically without manual fixes.
First action happens quickly and consistently.
When routing fails, it is obvious and easy to diagnose.
If sales doesn’t trust lead assignment, they stop paying attention to inbound altogether.
CRM data people actually depend on
Look at what gets filled out under pressure.
Check
Open recent opportunity records.
Look at required fields.
Check notes, stages, and close dates.
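Placeholder junk in required fields is also measurable. A minimal sketch, assuming opportunity records exported as dicts and a hand-picked junk list; both the field names and the junk values here are illustrative, not a standard:

```python
# Flag required fields commonly filled with placeholder junk.
# Records are hypothetical dicts from a CRM export; JUNK is a hand-picked starter list.
JUNK = {"", "n/a", "na", "none", "tbd", "-", ".", "x", "asdf"}

def junk_rate(records, required_fields):
    """Return, per required field, the fraction of records holding a junk value."""
    rates = {}
    for field in required_fields:
        junk = sum(1 for r in records
                   if str(r.get(field, "")).strip().lower() in JUNK)
        rates[field] = junk / len(records) if records else 0.0
    return rates
```

A field where half the values are junk is a strong candidate for being un-required, or removed.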
Real examples
Required fields filled with “N/A” or random characters.
Close dates pushed out every week without explanation.
Fields that exist only because someone asked for them once.
What usually breaks
Too many required fields slow people down.
Fields stay required long after they stop being useful.
Cleanup is no one’s job.
What good looks like
Required fields clearly support reporting or process.
Records are usable without being perfect.
Bad data can be corrected without jumping through hoops.
If people rush through data entry, the system will always lag reality.
Reports people trust when making decisions
Do not start with dashboards. Start with meetings.
Check
What reports are pulled up in forecast calls.
What numbers get questioned.
What reports are ignored entirely.
Real examples
Marketing and sales argue over conversion rates every month.
Pipeline numbers change depending on who pulls the report.
A dashboard exists but no one opens it.
What usually breaks
Metrics lack clear definitions.
Reports are built once and never revisited.
Different tools calculate the same metric differently.
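The definition problem is concrete: two defensible formulas for "conversion rate" give different numbers from the same leads. A minimal sketch with hypothetical lead dicts (created_at, converted flag, converted_at), showing a cohort definition next to a point-in-time definition:

```python
# Two reasonable "conversion rate" definitions that disagree on the same data.
# Leads are hypothetical dicts: created_at, converted (bool), converted_at ("" if never).
def rate_by_created(leads, month):
    """Conversions among leads *created* in the month (cohort definition)."""
    cohort = [l for l in leads if l["created_at"].startswith(month)]
    return sum(l["converted"] for l in cohort) / len(cohort)

def rate_by_converted(leads, month):
    """Leads *converted* in the month over leads created in it (point-in-time definition)."""
    created = [l for l in leads if l["created_at"].startswith(month)]
    converted = [l for l in leads if l["converted_at"].startswith(month)]
    return len(converted) / len(created)
```

Neither formula is wrong; the failure is that two tools each pick one silently. Writing down which one you use ends the monthly argument.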
What good looks like
A small set of reports used consistently.
Clear definitions written down somewhere obvious.
Fewer surprises during reviews.
If a report causes debate instead of decisions, it needs work.
Tools and the gaps between them
Most problems live between tools, not inside them.
Check
List the tools used by marketing, sales, and success.
Note what data moves between them.
Identify manual imports or exports.
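One way to find sync gaps is to diff the same leads across two tool exports. A minimal sketch, assuming both tools can export leads as dicts keyed by a shared field such as email; the key and field names are assumptions:

```python
# Compare the same leads across two hypothetical tool exports to spot sync gaps:
# leads missing from the destination, and fields populated at the source but empty there.
def sync_gaps(source_rows, dest_rows, key="email"):
    """Return (keys missing from destination, per-field count of dropped values)."""
    dest = {r[key]: r for r in dest_rows}
    missing, dropped_fields = [], {}
    for row in source_rows:
        other = dest.get(row[key])
        if other is None:
            missing.append(row[key])
            continue
        for field, value in row.items():
            if value and not other.get(field):
                dropped_fields[field] = dropped_fields.get(field, 0) + 1
    return missing, dropped_fields
```

If the dropped-field counts are high for one field, that is usually a half-configured integration mapping, not random noise.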
Real examples
Leads sync from marketing automation but miss key fields.
Opportunities update in the CRM but not in reporting tools.
Someone runs a weekly CSV export to “fix” things.
What usually breaks
Tools added quickly to solve immediate problems.
Integrations partially configured and forgotten.
No one owns the system as a whole.
What good looks like
Each tool has a clear reason for existing.
Data movement is understood.
Manual fixes are rare and temporary.
More tools rarely solve system problems.
How people learn and work around the system
Watch what new hires struggle with.
Check
How new team members are onboarded.
Where they ask questions.
What steps they skip or avoid.
Real examples
New reps ask how to log activity because docs are outdated.
People rely on shadowing instead of documentation.
Changes are shared verbally and forgotten.
What usually breaks
Documentation falls out of date quickly.
Processes differ by manager or team.
Changes are not communicated clearly.
What good looks like
Short, current docs tied to real workflows.
Consistent expectations across teams.
Changes communicated clearly and early.
If onboarding depends on who you sit next to, the system isn’t ready.
Deciding what to fix first
The audit only matters if it leads to action.
Check
Which issues showed up repeatedly.
Which ones cause the most rework or confusion.
What can realistically be fixed next.
Real examples
Lead routing breaks weekly but no one owns it.
Reporting is noisy because definitions are unclear.
Data cleanup never happens because it feels too big.
What usually breaks
Teams try to fix everything at once.
Problems get turned into vague initiatives.
Momentum fades after the review.
What good looks like
One or two concrete fixes.
Clear ownership.
A plan to check if the fix worked.
Progress comes from finishing small things.
Closing
A useful RevOps audit gives you a clear picture of how leads move through the system and where the process slows down or breaks.
If you can trace a lead end to end, trust the numbers you look at, and point to the next fix, this checklist has done its job.