Confidential Executive Briefing
Building Orbit365 (next-gen platform), sunsetting CAPs (legacy), maintaining client obligations, and pursuing ISO certification, all with a lean team. Every role is stretched thin.
The AI capability gap is closing fast. Competitors who move first gain a compounding advantage: agents get smarter every week, knowledge accumulates, capability accelerates. The window to lead is now.
An AI assistant handles 75% of customer chats and does the work of 700 full-time employees. Resolution time dropped from 11 minutes to 2.
CEO Jack Dorsey announced mass layoffs to restructure around AI. Company pivoting to AI-first operations across all divisions.
Workforce reductions tied to automation and AI-driven productivity. Follows years of restructuring toward AI-augmented operations.
One of the largest AI-driven workforce reductions in logistics. AI handling routing, planning, and operational decisions previously done by humans.
Quietly reduced thousands of positions as AI agents handle customer success, technical support, and code generation previously done by teams.
Harvard Business Review surveyed 1,006 global executives: AI is behind increasing layoffs, driven by anticipation of AI's impact, not just current performance.
The companies that survive the next decade will be the ones that figured out how to make 50 people as productive as 500, using AI as the multiplier.
Our competitors in roadside assistance, fleet management, and B2B services are evaluating and deploying AI right now. Companies that wait 12–18 months will find themselves competing against organisations that are faster, leaner, and more capable, with the same or smaller headcount. The compounding advantage is real: every month of deployment makes the gap harder to close.
Open-source framework. Runs on our hardware. Our data stays on our machines. No cloud dependency. No data leakage. Enterprise API terms with model providers: our data is never used for training.
Digital employees, not software subscriptions.
The Account Manager gave Echo vague, natural-language instructions, the way you'd brief a colleague, not a computer: "Here's last month's report, here's the raw data, make it look like this." Echo asked clarifying questions, iterated through 11 versions of the classifier based on real-time feedback, and learned the business rules by doing the work alongside the team member. The AI didn't just automate a task; it learned a process that was only in one person's head and made it repeatable, auditable, and scalable. Next month, it runs automatically.
Accelerate Orbit365 delivery. Code review, generation, architecture design. Reduce reliance on external contractors.
Impact: 30–50% faster feature delivery, reduced contractor spend
ISO 27001 certification program. Policy lifecycle management. Audit readiness. 49 policies already at v02.
Impact: Months of consultant work done in weeks. Continuous compliance.
VGA monthly report: previously 2–3 days of manual work, now generated programmatically in minutes.
Impact: ~30 hours/month saved per client report. Scalable to all clients.
Pipeline development, data integrity auditing, reporting frameworks. Automating the grunt work of data normalisation and validation.
Impact: Real-time dashboards, data trust, faster decisions
AWS Connect migration support. Call analytics, agent performance, workflow automation. Quality monitoring at scale.
Impact: Lower cost per call, higher CSAT, 24/7 AI triage
Customer journey mapping, white-label design, OrbitLink UX. AI-driven A/B testing and optimisation.
Impact: Better client retention, differentiated product offering
Every task an agent completes makes it better at the next one. Knowledge compounds. Capability accelerates.
Month 1 is the weakest it will ever be. By month 6, the ROI is undeniable.
AI assists staff with specific tasks. Staff learn what AI can do. Low risk, high learning.
AI shadows processes. Agents learn workflows, business rules, and exceptions from staff.
AI handles routine work end-to-end. Staff shift to oversight, exceptions, and quality.
Staff become AI directors. One person manages 5-10 AI agents. Output multiplied 10x.
Every role in the organisation is about to change (not disappear, but evolve). Here's what that looks like for real people:
💡 The job doesn't disappear. The boring parts do.
People get promoted from "task doer" to "AI director", and their output goes from 1x to 10x.
Echo, Sage, Archer operational. Copal shadowing. Infrastructure deployed (3 Mac Minis). Teams integration live. ISO policy program underway. VGA reporting automated. AI agents shadow existing staff workflows, learn business rules.
Ava (Data/BI) + Leo (Engineering) + Tess (Delivery) + Knox (Ops) + Piper (Sales) + Iris (EA) come online. Mission Control live dashboard for exec visibility. Agents begin handling full workflows. Staff transition from doing to overseeing.
Agents collaborate on complex, cross-functional projects. Orbit365 engineering support at scale. ISO 27001 certification submission. Operational automation across contact centre, provider management, client reporting.
365 operates as an AI-augmented company. Predictive analytics. Multi-client automation at scale. Contact centre AI triage. Every team member directs AI agents as force multipliers.
Prompts and data are sent to cloud AI providers via API. Enterprise terms prevent training on our data, but prompts are still processed on the providers' servers.
Mitigation: Enterprise API agreements. No PII in prompts. Data classification policy. Audit which data categories agents can access.
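The "no PII in prompts" control can be enforced mechanically with a pre-flight filter that runs before any prompt leaves our machines. A minimal sketch, assuming regex-based redaction; the patterns and placeholder labels are illustrative, not the production data classification policy:

```python
import re

# Hypothetical pre-flight filter: strip common PII patterns from a
# prompt before it is sent to any cloud model API. The patterns below
# (email, AU mobile/landline) are examples, not an exhaustive policy.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+61|0)[23478]\d{8}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a PII pattern with a placeholder tag."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt
```

In practice this sits in front of the API client, so agents never see raw contact details in outbound prompts.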
Agents can take autonomous actions (file changes, API calls, sending messages). LLMs can hallucinate, producing confident but incorrect outputs.
Mitigation: Authority matrix, human review gates, no autonomous production deployments, safety boundaries in agent config.
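The authority matrix and human review gates can be expressed as a simple lookup that defaults to review rather than autonomy. A hypothetical sketch; the action names and approval levels are invented for illustration, not the actual agent config:

```python
# Illustrative authority matrix: each action class maps to the approval
# level it requires. Anything not explicitly listed falls back to
# human review rather than running silently.
AUTHORITY = {
    "read_file": "autonomous",
    "draft_message": "autonomous",
    "send_message": "human_review",
    "deploy_production": "forbidden",
}

def gate(action: str) -> str:
    """Return the approval level required before an agent may act."""
    return AUTHORITY.get(action, "human_review")
```

The key design choice is the default: unknown actions require a human, so adding a new tool never silently widens an agent's authority.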
Currently, only the CTO has deep knowledge of the AI infrastructure. If unavailable, agent management capability is limited.
Mitigation: Documentation, runbooks, Mission Control dashboard for exec visibility. Phase 2 adds additional trained operators.
Heavy dependence on AI model providers. Pricing changes or service disruptions could impact operations.
Mitigation: Multi-model routing (3 providers). OpenClaw is model-agnostic. Can switch providers without rebuilding.
Staff may fear AI is replacing them. Morale impact if not communicated well.
Mitigation: Clear "augmentation not replacement" messaging. Involve staff in AI training. Upskill into AI orchestration roles.
OpenClaw is open-source. The project could be abandoned, or security vulnerabilities could be discovered.
Mitigation: Active community, rapid development, MIT licensed. Can fork and maintain independently if needed.
Agents currently store memory in flat files. As context grows, recall becomes unreliable: agents occasionally "forget" important context or approved contacts.
Working on: Migrating from file-based memory to a structured database. Knowledge graph evaluation underway. This is a known limitation of all current AI agent frameworks.
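The migration from flat files to a structured store could start as small as a keyed SQLite table, which turns recall into a lookup instead of a scan over growing text. A sketch under that assumption; the schema and agent names are illustrative:

```python
import sqlite3

def open_memory(path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) a keyed memory store for agents."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS memory ("
        "agent TEXT, key TEXT, value TEXT, "
        "PRIMARY KEY (agent, key))"
    )
    return conn

def remember(conn: sqlite3.Connection, agent: str, key: str, value: str) -> None:
    # Upsert: later writes overwrite earlier ones for the same key.
    conn.execute(
        "INSERT INTO memory VALUES (?, ?, ?) "
        "ON CONFLICT(agent, key) DO UPDATE SET value = excluded.value",
        (agent, key, value),
    )

def recall(conn: sqlite3.Connection, agent: str, key: str):
    row = conn.execute(
        "SELECT value FROM memory WHERE agent = ? AND key = ?",
        (agent, key),
    ).fetchone()
    return row[0] if row else None
```

Keyed storage also makes memory auditable: you can list exactly what an agent "knows" per key, which flat files cannot offer.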
Microsoft Teams integration has message delivery issues β occasional message leakage between sessions and dropped messages. The bot framework has limitations.
Working on: Evaluating alternative platforms (Slack, custom web interface). Teams' Bot Framework wasn't designed for persistent AI agents.
When an AI provider's API hits rate limits, runs out of credits, or experiences an outage, agents silently stop working β no notification, no graceful degradation.
Working on: Credit monitoring dashboard, automatic failover between providers (Anthropic → OpenAI → Gemini), proactive alerts before credits run out.
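The planned failover behaviour is simple to sketch: try providers in order, record each failure, and raise loudly if everything is down rather than stopping silently. The provider call signatures here are placeholders, not a real SDK:

```python
# Sketch of provider failover. `providers` is an ordered list of
# (name, call) pairs, where `call` takes a prompt and returns a reply
# or raises on rate limits, outages, or exhausted credits.
def call_with_failover(prompt, providers):
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:
            # Record the failure and fall through to the next provider.
            errors.append(f"{name}: {exc}")
    # Loud failure instead of silent stoppage: the caller (or an alert
    # hook) sees exactly which providers failed and why.
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

The accumulated error list is what feeds the proactive alerting: a single provider failure is a routing event, a full failure is a page.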
AI API costs can spike unpredictably with heavy usage. Current spend tracking is manual β no automated alerting when approaching budget thresholds.
Working on: Per-user/per-agent cost tracking, monthly credit allocations, smart model routing (use cheaper models for simple tasks), budget alerts.
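The per-agent budget alerting described above can be sketched as a small spend tracker. The $1,000/month figure mirrors the proposed allocation; the 80% alert threshold is an assumption for illustration:

```python
# Illustrative per-agent spend tracker with a budget alert threshold.
class SpendTracker:
    def __init__(self, monthly_budget_aud: float = 1000.0,
                 alert_ratio: float = 0.8):
        self.budget = monthly_budget_aud
        self.alert_ratio = alert_ratio
        self.spend: dict[str, float] = {}

    def record(self, agent: str, cost_aud: float) -> bool:
        """Add a cost entry; return True once the agent has crossed
        the alert threshold and someone should be notified."""
        self.spend[agent] = self.spend.get(agent, 0.0) + cost_aud
        return self.spend[agent] >= self.budget * self.alert_ratio
```

Wired into the API client, `record` replaces the current manual spreadsheet check with an alert that fires before the allocation is exhausted.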
Anthropic API costs alone: ~$2,500 AUD in under one month, with only two active users (CTO + one Account Manager). This includes Echo, Sage, and Archer running on Claude Sonnet/Opus across coding, compliance, reporting, and communication tasks.
| Model | Chip | Memory | Storage | Price (AUD) | Best For |
|---|---|---|---|---|---|
| Mac Mini | M4 Pro 12-core | 24GB | 512GB | $2,499 | Standard agent host |
| Mac Mini | M4 Pro 14-core | 48GB | 512GB | $3,299 | Heavy workload / orchestrator |
| Mac Mini | M4 Pro 14-core | 48GB | 1TB | $3,699 | Multi-agent + local models |
| Mac Studio | M4 Max 14-core / 32-core GPU | 36GB | 512GB | $3,499 | Intensive workloads |
| Mac Studio | M4 Max 16-core / 40-core GPU | 64GB | 1TB | $6,599 | Multi-agent orchestrator + local LLMs |
| Mac Studio | M4 Max 16-core / 40-core GPU | 128GB | 2TB | $9,999 | Enterprise-grade / running large local models |
Proposed budget: $1,000 AUD AI credits per employee per month as a starting allocation. Adjust based on actual usage patterns.
| Provider | Model | Input / Output (USD) | Use Case |
|---|---|---|---|
| Anthropic | Claude Opus 4 | $15 / $75 | Complex reasoning, architecture |
| Anthropic | Claude Sonnet 4 | $3 / $15 | General tasks, drafting |
| Anthropic | Claude Haiku 3.5 | $0.80 / $4 | Fast, lightweight tasks |
| OpenAI | GPT-4o | $2.50 / $10 | Alternative for diverse tasks |
| Google | Gemini 2.5 Flash | $0.15 / $0.60 | Bulk processing, cost-sensitive |
| Google | Gemini 2.5 Pro | $1.25 / $10 | Deep research, long context |
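As a sanity check on the rates above (quoted per million tokens), per-request cost is input tokens times the input rate plus output tokens times the output rate:

```python
# Per-request cost from per-million-token rates, as in the table above.
def request_cost_usd(in_tokens: int, out_tokens: int,
                     in_rate: float, out_rate: float) -> float:
    return (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

# e.g. a 10k-token prompt with a 2k-token reply on Claude Sonnet 4
# ($3 in / $15 out): (10,000 * 3 + 2,000 * 15) / 1,000,000 = $0.06
```

At these rates, the ~$2,500/month figure implies tens of millions of tokens processed, which is why model routing (cheap models for simple tasks) matters.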
Recommendation: Physical Mac Minis for primary fleet now. Evaluate AWS migration once we have 6+ months of operational data and clear scaling needs.
Per year for 5 equivalent FTEs (AUD)
Total all-in (hardware + API + services)
| Platform | Type | Data Location | Models | Cost | Multi-Agent | Best For |
|---|---|---|---|---|---|---|
| OpenClaw | Self-hosted | Your hardware | Any (Anthropic, OpenAI, Google, Ollama) | Free + API costs | Full mesh | Full control, multi-agent orchestration |
| Claude Cowork | Hybrid | Local + Cloud | Claude only | $20–100/mo | Single agent | Individual knowledge workers |
| Perplexity Computer | Cloud VM | Perplexity cloud | Multi | $20/mo | Parallel | Research, lightweight automation |
| Microsoft Copilot | Cloud | Azure | GPT-4o | $30/user/mo | Embedded | M365 users |
| Google Agentspace | Cloud | Google Cloud | Gemini | Enterprise $$$ | Limited | Google Workspace enterprises |
| CrewAI / LangGraph | Framework | Your infra | Any | Free + dev time | Custom | Dev teams building custom pipelines |
AI agents don't need perfect processes. They replicate what humans do, even messy, inefficient workflows. You don't need to re-engineer the process first.
Teach an agent to do it the broken way → get immediate value → optimise later.
A common misconception: AI means building a custom application for every task. That's the old model.
The new model: an AI agent does the work directly, the same way a human would. It opens the spreadsheet, runs the analysis, writes the report, sends the email. No dev hours. No months of requirements gathering. No project plan.
Instead of spending $50K building an app to generate reports, you spend $50 teaching an agent to generate reports. Tomorrow.
The companies that thrive won't have the most employees. They'll have the best people, amplified by AI. One brilliant operator managing five AI agents will outperform a team of twenty doing it the old way.
AI becomes a board-level strategic asset. Investor narrative shifts to "AI-augmented operations." Competitive positioning requires AI fluency.
Operations measured by "AI augmentation ratio", not just headcount. Process design starts with "what can an agent do?" before "who do we hire?"
AI enables new service offerings and pricing models. Scale without proportional cost growth. AI-augmented capability becomes the sales story.
Endorsing H.E.L.I.X means recognising AI agents as a strategic capability for 365, not a side experiment. It gets a board-visible program name, quarterly reviews, and executive sponsorship. Staff know this is a real direction, not a tech hobby.
Endorsement gives us the green light to introduce AI agents to the broader team. Staff will know what the agents are, what they do, and how their roles will evolve. Transparency prevents fear.
Approving Q2 scale-up means budget for: additional hardware ($10–15K one-time), increased API credits (~$3–5K/month), and time allocation for the CTO to operationalise the program.
The exec team formally acknowledges the risks outlined in Section 7, including the active operational issues, and accepts them with the mitigations proposed. This goes into the risk register.
Do we formally endorse H.E.L.I.X as a strategic program with executive sponsorship?
Do we accept the risk register (Section 7) and agree to the proposed mitigations?
Do we approve the Q2 rollout plan and associated budget?
Resolve active operational issues (memory, Teams reliability, API failover). Complete runbooks. Get Mission Control dashboard live for exec visibility.
Present H.E.L.I.X to all staff. "Meet your AI colleagues." Training sessions on how to interact with agents. Set expectations around roles evolving.
Bring Knox (Ops), Piper (Sales), and Iris (EA) online. Each shadows a human counterpart for 2 weeks before taking on tasks independently.
First 90-day review. Quantitative assessment: time saved, cost per task, agent utilisation, error rates. Adjust strategy based on data.
Awareness – GEM understands scope, progress, and strategic potential
Risk Acknowledgement – Formal register entry for AI program risks
Endorsement – Green light to continue and communicate to staff
90-Day Review – Checkpoint agreed for quantitative progress assessment