“AI readiness” has become an empty term. Every consultancy has an AI maturity assessment. Every SaaS vendor has a readiness quiz. Most of them measure the wrong things — strategic alignment, change management capacity, executive buy-in. The actual readiness question for a service business shipping an AI build is much narrower and much more practical.
This is the readiness checklist OFO Collective runs internally during the audit week of every 30-day trial. Twelve checks. Each one is either green, amber, or red. The colour distribution tells us whether the build is going to ship cleanly, ship with friction, or stall.
If you run operations at an Australian agency and you are evaluating whether to engage an AI build partner, the checklist below is the honest version of what they should be checking before they take your money.
The 12 checks
Stack readiness
1. Does the team already use a CRM that holds a consistent customer record?
Green: HubSpot, Salesforce, Pipedrive — set up properly, with records updated within 48 hours of the customer interaction. Amber: A CRM exists but parts of the team route around it. Red: No CRM, or one that exists in name only. AI builds without a clean source-of-truth record are going to drift.
2. Is the AI provider stack already accessible (Anthropic, OpenAI, or both)?
Green: API access set up, billing in place, someone on the team has a working knowledge of prompt engineering. Amber: Provider account exists but nobody has used the API in production. Red: No provider access yet. Add a week to the build timeline.
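If you want to turn this check into something runnable, a one-file smoke test against each provider account settles it in a minute. This is a minimal sketch, assuming the standard Anthropic and OpenAI Python SDKs with API keys in the environment; the model IDs are placeholders, so swap in whatever your account has access to.

```python
# Smoke test for check 2: confirms provider API access and billing work.
# Assumes ANTHROPIC_API_KEY / OPENAI_API_KEY are set in the environment.
# Model IDs are placeholders; substitute whatever your account can use.
import os


def check_anthropic() -> str:
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY
    resp = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=16,
        messages=[{"role": "user", "content": "Reply with: ok"}],
    )
    return resp.content[0].text


def check_openai() -> str:
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model ID
        max_tokens=16,
        messages=[{"role": "user", "content": "Reply with: ok"}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    checks = [
        ("Anthropic", "ANTHROPIC_API_KEY", check_anthropic),
        ("OpenAI", "OPENAI_API_KEY", check_openai),
    ]
    for name, env_var, fn in checks:
        if not os.environ.get(env_var):
            print(f"{name}: no API key in environment (red)")
            continue
        try:
            fn()
            print(f"{name}: live response received (green)")
        except Exception as exc:  # auth, billing, or network failure
            print(f"{name}: call failed: {exc} (amber/red)")
```

If this script returns a live response, the only remaining question for green is whether someone on the team has actually used the API in production.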
3. Is there a workflow orchestration layer in place (n8n, Make, Zapier)?
Green: One of the three is in production use with three to five live workflows. Amber: One exists but is barely used. Red: All process orchestration runs through manual handoffs.
Data readiness
4. Are the workflows you want to automate documented somewhere?
Green: Written SOPs, video walkthroughs, or at minimum a step-by-step description in someone’s head that they can articulate in 15 minutes. Amber: Tribal knowledge across two or three people that has never been written down. Red: One person knows the workflow, that person is busy, and they cannot describe it without doing it.
5. Is there historical data the AI can train on or reference?
Green: 6+ months of structured data in the CRM, finance system, or product database; this operator-specific signal is what AI models work best with. Amber: Some historical data exists but is fragmented across systems. Red: New business or no record-keeping. AI builds without operator-specific training signal end up generic.
6. Are integrations between your core systems either already live or known to exist?
Green: CRM → email, CRM → finance, CRM → analytics, all working today. Amber: Some integrations live, others rumoured to be possible. Red: Tools are siloed. Data moves between them by export/import.
Team readiness
7. Is there a senior operator who will own the system after handover?
Green: A named person, in the business today, who can spend 2 to 4 hours per week on the system after launch. Amber: A junior operator who would need to grow into the role. Red: No internal owner identified. The system will atrophy.
8. Will the agency’s senior decision-maker (founder, CEO, head of growth) be in the room for the audit week?
Green: At least 3 sessions in week one with the decision-maker present. Amber: Decision-maker attends kick-off and sign-off only. Red: Decision-maker delegates the whole engagement to a director. The build will still ship, but adoption will stall.
9. Is there a culture of “shipping something rough and iterating” or a culture of “shipping when perfect”?
Green: The team has shipped imperfect things and iterated. AI builds require this temperament. Amber: Mixed — the team prefers polished launches but tolerates iteration. Red: A perfectionist culture. AI builds reach 80% fast and then need a longer tail of refinement. Perfectionist teams stall in the tail.
Decision-making readiness
10. Can the agency commit to a fixed scope by end of audit week?
Green: The decision-maker can lock scope in writing on Friday of week one. Amber: Scope locked the following Monday or Tuesday. Red: Scope drifts past the first week. Every day of drift compresses the build window.
11. Is the budget for the build pre-approved?
Green: The four- to five-figure trial cost is signed off before the engagement starts, not pending procurement. Amber: Budget is verbally agreed but needs sign-off. Red: Budget is contingent on the audit deliverable. This kills the trial model.
12. Is the team prepared to change a workflow if the audit shows a different bottleneck than they expected?
Green: Yes. The team is hiring OFO Collective for outcome, not for a specific build. Amber: Strong preference for the specific build the team came in wanting. Red: Hard requirement on a specific build, regardless of what the audit finds. This is fine if the audit confirms the build — but if it does not, the engagement stalls.
Reading the score
The honest scoring:
- 9+ green: Ready to ship. The build will move fast and adoption will be clean.
- 6 to 8 green: Ready to ship with friction. Expect 2 to 4 weeks of post-build iteration on the items that came in amber.
- 3 to 5 green: Build slowly. Pick a small, well-defined first system and use the trial as a forcing function for the readiness work that has not happened yet.
- Fewer than 3 green: Pause. Spend a month on readiness work before engaging an AI build partner. Otherwise you are paying an agency to do work your team should do internally first.
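If you would rather run the checklist as a script than a spreadsheet, the banding above reduces to a counting exercise. A minimal sketch: the thresholds mirror the list, and everything else (the function name, the example tally) is illustrative rather than OFO tooling.

```python
# Scores the 12 checks into the four bands described above.
from collections import Counter

BANDS = [
    (9, "Ready to ship. Fast build, clean adoption."),
    (6, "Ready to ship with friction. Expect 2 to 4 weeks of iteration."),
    (3, "Build slowly. Small first system; trial as forcing function."),
    (0, "Pause. Do a month of readiness work before engaging a partner."),
]


def readiness_band(colours: list[str]) -> str:
    """colours: 'green', 'amber', or 'red' for each of the 12 checks."""
    assert len(colours) == 12, "the checklist has exactly 12 checks"
    greens = Counter(colours)["green"]
    for threshold, verdict in BANDS:
        if greens >= threshold:
            return f"{greens} green: {verdict}"
    return "unreachable"  # the 0 threshold always matches


# Example: 7 green, 3 amber, 2 red lands in the friction band.
print(readiness_band(["green"] * 7 + ["amber"] * 3 + ["red"] * 2))
```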
The agencies we have shipped cleanly with sat at 9+ green. The ones that stalled mid-engagement sat at 5 green or below, and could have been told so in advance if either side had run this check.
What to do if you score badly
Three high-leverage moves close most readiness gaps in 4 to 6 weeks of internal work.
Document the workflow. The single highest-leverage readiness move is writing down how the workflow you want to automate actually works today. Most agencies cannot do this without observing themselves for two weeks. The act of writing it down also surfaces the operator-side fixes that should happen before AI gets involved.
Clean the CRM. Run an export, look at the field-completion rates, audit the duplicate records, fix the ones that matter. AI builds amplify CRM quality — clean data in, clean systems out; dirty data in, fast garbage out.
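For the CRM export itself, two numbers are worth pulling first: field-completion rates and duplicate counts. A minimal sketch using pandas, assuming the export lands as a CSV; the file name and column names (`email`, `last_contacted`) are illustrative and will differ per CRM.

```python
# Quick CRM-export audit: field completion, duplicates, and staleness.
# Column names below are illustrative; match them to your CRM's export.
import pandas as pd

df = pd.read_csv("crm_export.csv")

# 1. Completion rate per field: share of rows with a non-empty value.
completion = df.notna().mean().sort_values()
print("Field completion rates (worst first):")
print(completion.to_string(float_format="{:.0%}".format))

# 2. Likely duplicates: the same normalised email on multiple records.
emails = df["email"].dropna().str.strip().str.lower()
dupes = emails[emails.duplicated(keep=False)]
print(f"\n{dupes.nunique()} email addresses appear on more than one record")

# 3. Staleness: records untouched for 6+ months (check 5's window).
last = pd.to_datetime(df["last_contacted"], errors="coerce")
stale = (last < pd.Timestamp.now() - pd.DateOffset(months=6)).sum()
print(f"{stale} records have had no contact in 6+ months")
```

Fixing the worst fields this surfaces is usually a week of internal work, and it pays off on every downstream system.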
Identify the system owner. Name the person who will own the AI system after handover. Get their commitment to 2 to 4 hours per week of support work. If you cannot name someone, the build is going to atrophy regardless of how well it ships.
After these three moves, most agencies go from a 5-green score to an 8 or 9 within four to six weeks. That is when the trial economics start working.
The build OFO Collective ships from a 9+ green start
When the readiness checklist comes back at 9 or 10 green, the 30-day trial reliably ships. We have run a version of this build across real estate, marketing agencies, DOOH operators, and adjacent service businesses, with consistent results — 80% to 98% time reduction on the targeted workflow, inside 30 days, at a fixed price.
The case studies are at /case-studies. The trial detail is at /trial. If you have run yourself through the 12 checks above and want to talk through what a build would look like for your stack, book a call.