
How We Use AI Agents to Run a Digital Marketing Agency
In early 2025, I made a decision that most agency owners thought was either brilliant or reckless: I deployed AI agents to run core operations at Sphere Agency. Not as an experiment or a side project — as fundamental infrastructure. The kind of thing you build your daily operations around.
A year later, I can tell you it was the best operational decision I've made. But I'd be lying if I said the path was smooth. This is the honest story of how we got here, what worked, what didn't, and where we're going next.
My name is JP, and I'm the founder of Sphere Agency, a digital marketing agency based in Bangkok. We do everything from Google Ads and Meta campaigns to Shopee marketplace optimization and SEO for clients across Thailand and APAC.
Why I Started Looking at AI Agents

The trigger was simple: I was drowning in operational overhead.
Running a digital marketing agency means managing campaigns across Google Ads, Meta, TikTok, Shopee, Lazada — often for multiple clients simultaneously. Every morning started the same way: log into five dashboards, check spend pacing, look for anomalies, compile numbers into reports, and flag issues before they became expensive problems.
By the time I'd done all the monitoring and reporting, half the day was gone. The strategic work — the thinking, the creative direction, the client conversations that actually drive growth — got squeezed into whatever time was left.
I'd tried the usual solutions: marketing automation tools, dashboard aggregators, scheduled reports. They helped, but they were fundamentally reactive. They could show me what happened. They couldn't tell me why it happened, what to do about it, or — crucially — catch problems while they were still small.
Then I discovered OpenClaw — an open-source platform for running autonomous AI agents. Not chatbots. Not assistants that wait for you to ask. Agents that run 24/7, monitor your systems, reason about what they find, and take action. The difference sounded theoretical until I actually deployed one. Then it became visceral.
What Our Three AI Agents Actually Do
Today, we run three specialized AI agents on OpenClaw, each focused on a different domain of our agency operations:
Agent 1: Marketing Performance
This is our most active agent and the first one we deployed. Its responsibilities include:
24/7 campaign monitoring across Google Ads, Meta Ads, TikTok Ads, Shopee Ads, and Lazada
Daily performance reporting — every morning, I get a structured summary of all active campaigns with key metrics, trends, and anomalies flagged
Anomaly detection — if a campaign's CPC spikes 30% overnight or ROAS drops below target, I know within minutes, not hours
Optimization recommendations — the agent doesn't just report problems, it analyzes root causes and proposes specific fixes
Competitive monitoring — tracking competitor ad activity and flagging strategic shifts
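To make the anomaly detection concrete, here is a minimal sketch of the kind of threshold check involved. The metric names, thresholds, and data shapes are illustrative only, not our production configuration:

```python
# Minimal sketch of an overnight anomaly check. Metric names ("cpc", "roas")
# and thresholds are illustrative, not production values.

def detect_anomalies(today, yesterday, roas_target=3.0, cpc_spike_pct=30.0):
    """Compare today's metrics to yesterday's and return flagged issues."""
    alerts = []

    # CPC spike: flag if cost-per-click rose more than cpc_spike_pct overnight
    cpc_change = (today["cpc"] - yesterday["cpc"]) / yesterday["cpc"] * 100
    if cpc_change > cpc_spike_pct:
        alerts.append(
            f"CPC spiked {cpc_change:.0f}% "
            f"(from {yesterday['cpc']:.2f} to {today['cpc']:.2f})"
        )

    # ROAS below target: flag any campaign underperforming its goal
    if today["roas"] < roas_target:
        alerts.append(f"ROAS {today['roas']:.2f} is below target {roas_target:.2f}")

    return alerts
```

In practice a check like this runs per campaign on a schedule, and each alert is posted to Discord with the surrounding context and a recommended action.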
The impact here was immediate. Before this agent, catching a budget overspend or a sudden performance drop required me to be actively checking dashboards. Now, issues come to me — on Discord, with context and a recommended action. My response time went from hours to minutes.
Agent 2: Agency Operations
The second agent handles the business side — the operational work that keeps an agency running but doesn't directly generate revenue:
Knowledge management — maintaining our internal wiki, documenting processes, keeping institutional knowledge organized
Scheduling and coordination — managing calendars, preparing meeting briefs, tracking action items
Research tasks — competitive analysis, market research, platform policy updates
Process automation — routine tasks that follow predictable patterns but previously required human attention
This agent is less glamorous than the marketing one, but honestly, it might save more time. The operational overhead of running an agency is enormous and largely invisible until you automate it away.
Agent 3: Revenue Intelligence
The third agent focuses on revenue-related analysis and opportunities:
Revenue tracking and forecasting — monitoring agency revenue streams and projecting trends
Market opportunity analysis — identifying emerging platforms, new ad formats, and market shifts that could benefit our clients
Client health scoring — flagging accounts that might need extra attention before problems arise
Financial analysis — tracking margins, identifying efficiency opportunities, and supporting budgeting decisions
What Real Results Look Like
I'm going to share specific outcomes, but let me be upfront about what I can and can't share. Client data is confidential — so I'll use anonymized examples and focus on operational improvements rather than specific client metrics.
24/7 Monitoring Changed Everything
Within the first month, our marketing agent caught a budget pacing issue for a client's Google Ads campaign at 11 PM on a Saturday. The campaign was on track to overspend its monthly budget by 35% due to a seasonal traffic spike. The agent flagged it, recommended a bid adjustment, and I approved the fix from my phone in under two minutes.
Under our old system, we wouldn't have caught it until Monday morning. That single catch saved the client roughly ฿45,000 in wasted spend.
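The pacing math behind a catch like that is simple: project month-end spend from the pace so far and compare it to the budget. Here is a sketch, with a naive linear projection and illustrative numbers:

```python
# Sketch of a budget-pacing projection: linearly extrapolate month-to-date
# spend to the end of the month. The linear model is a simplification; a
# seasonal spike would push the projection even higher.
from datetime import date
import calendar

def projected_overspend_pct(spend_to_date, monthly_budget, today=None):
    """Return the projected month-end overspend as a percentage of budget."""
    today = today or date.today()
    days_in_month = calendar.monthrange(today.year, today.month)[1]
    daily_pace = spend_to_date / today.day          # average spend per day so far
    projected = daily_pace * days_in_month          # naive month-end projection
    return (projected / monthly_budget - 1) * 100   # positive => overspend
```

When the projection exceeds a tolerance (say, a few percent over budget), the agent raises an alert with a recommended budget or bid adjustment.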
Reporting Went from Hours to Minutes
Our morning performance reports used to take 1.5–2 hours to compile manually — logging into each platform, pulling numbers, formatting them for each client. Now they're generated automatically before 7 AM, and they're more comprehensive than what we produced manually. They include trend analysis, period-over-period comparisons, and anomaly flags that a human reviewer might miss when tired or rushed.
Proactive Optimization Became Real
One pattern we discovered: our marketing agent started identifying creative fatigue cycles — the predictable performance decline when ad audiences have seen the same creative too many times. It would flag CTR decay trends 3–5 days before they would have hit our internal alert thresholds, giving us time to prepare fresh creative instead of scrambling reactively.
Over a quarter, this pattern alone improved average campaign ROAS by an estimated 8–12% across accounts where we implemented the agent's recommendations consistently.
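The early-warning idea is to look at the trend in CTR, not just its absolute level. A sketch of that kind of check, assuming a daily CTR series per ad set (the window and slope threshold are illustrative):

```python
# Sketch of early CTR-decay detection: fit a least-squares slope to the last
# few days of CTR and flag a sustained relative decline before the absolute
# CTR alert threshold is reached. Window and threshold are illustrative.

def ctr_decay_flag(daily_ctr, window=7, decay_threshold=-0.02):
    """Flag if CTR is declining faster than decay_threshold (relative per day)."""
    recent = daily_ctr[-window:]
    n = len(recent)
    x_mean = (n - 1) / 2
    y_mean = sum(recent) / n
    # Ordinary least-squares slope of CTR against day index
    num = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(recent))
    den = sum((x - x_mean) ** 2 for x in range(n))
    slope = num / den
    # Normalize by mean CTR so the threshold is a relative decline per day
    return (slope / y_mean) < decay_threshold
```

A steady slide of a couple of percent per day trips this flag days before CTR itself crosses a hard alert line, which is what buys the time to prepare fresh creative.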
Client Satisfaction Improved
The most unexpected benefit was client perception. When you can tell a client about a problem before they notice it — with a diagnosis and a fix already in progress — trust grows fast. Several clients have commented that our responsiveness feels different from other agencies they've worked with. It is. It's an AI agent watching their account at 3 AM.
What Went Wrong (Because Plenty Did)

This wouldn't be an honest account without the failures:
The Accuracy Problem
Early on, we had incidents where agents reported metrics that were wrong. Not wildly wrong — but wrong enough to matter. An incorrect ROAS figure in a client-facing report, an API data pull that returned stale data, a percentage calculation that used the wrong baseline.
This was our biggest crisis point. In marketing, a wrong number doesn't just look bad — it can lead to bad budget decisions. We had to build rigorous verification protocols: every number needs a source citation, every report gets a self-check before sending, every correction gets logged and reviewed.
The verification overhead felt like it defeated the purpose of automation. But over time, the error rate dropped dramatically. Now our agent-generated reports are actually more reliable than our old manual reports, because the verification is systematic rather than dependent on whether the human reviewer was having a good day.
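The core of the self-check is mechanical: any derived metric in a report must recompute from its cited source values before the report is allowed to send. A sketch of that idea, with illustrative field names:

```python
# Sketch of a report self-check: a derived metric (here ROAS) must recompute
# from its cited source values within a tolerance, or the report is blocked.
# Field names and tolerance are illustrative.

def verify_report_line(line, tolerance=0.01):
    """Recompute ROAS from cited revenue/spend; return (ok, recomputed value)."""
    recomputed = line["source"]["revenue"] / line["source"]["spend"]
    ok = abs(recomputed - line["reported_roas"]) <= tolerance
    return ok, recomputed
```

Every number that fails this check stops the report and gets logged for review, which is how the error rate fell over time.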
The Autonomy Balance
We gave agents too much autonomy too quickly. An agent that can adjust bids and pause campaigns is powerful — but if it makes a wrong call, the financial impact is immediate. We learned to implement approval workflows: the agent recommends, I approve. For routine optimizations we've validated multiple times, the agent can act independently. For anything involving budget changes above a threshold, it asks first.
Finding the right autonomy level is an ongoing calibration, not a one-time setup.
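The approval gate itself can be very simple. A sketch of the routing logic, where the action types, currency, and threshold are illustrative:

```python
# Sketch of the autonomy gate: validated routine actions execute directly;
# budget changes above a threshold are queued for human approval.
# Action names and the threshold are illustrative, not our real policy.

AUTO_APPROVED_ACTIONS = {"rotate_creative", "pause_underperforming_ad"}
BUDGET_CHANGE_THRESHOLD = 5_000  # THB; larger changes need human sign-off

def route_action(action):
    """Return 'execute' or 'ask_human' for a proposed agent action."""
    if action["type"] in AUTO_APPROVED_ACTIONS:
        return "execute"
    if action["type"] == "budget_change" and abs(action["delta"]) <= BUDGET_CHANGE_THRESHOLD:
        return "execute"
    # Default to asking: anything unrecognized or over-threshold escalates
    return "ask_human"
```

The calibration work is mostly about which actions earn a place in the auto-approved set, and how high the threshold can safely go.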
The Context Window Problem
AI agents don't have perfect memory. They work within context windows — essentially, how much information they can hold in their "working memory" at once. For complex, multi-client analysis, this was initially a limitation. We had to design our workflows around it: clear documentation, structured memory files, specific operating procedures that help agents maintain continuity across sessions.
The "AI Wrote This" Problem
Early agent-generated reports had a distinctly AI tone — generic, hedging, full of unnecessary caveats. Clients didn't want a report that read like a diplomatic press release. They wanted insights that sounded like they came from a senior strategist. We spent significant time refining the agents' communication style to match agency standards. The breakthrough was realizing we needed to treat agent persona development as seriously as we'd treat onboarding a new team member.
What I'd Tell Other Agency Owners
If you're considering AI agents for your agency, here's what I wish someone had told me:
Start with monitoring, not acting. Let your agent watch and report for at least a month before giving it any ability to make changes. Build trust through accuracy.
Invest in verification systems early. The speed of automation is worthless if the data isn't reliable. Build your QC processes before you build your workflows.
Don't try to automate everything at once. Pick one painful, time-consuming task and nail it. Then expand.
Treat agent setup like hiring. Write clear job descriptions (what to monitor, how to report, when to escalate). Create SOPs. Define boundaries. An agent is only as good as its instructions.
Plan for failure. Agents will make mistakes. Have a process for catching, logging, and learning from errors. The agents get better — but only if you build the feedback loop.
What's Next for Sphere Agency
We're now looking at several expansions:
Deeper platform integration — connecting agents to more data sources for richer cross-platform analysis
Client-facing agents — giving select clients direct access to monitoring agents for real-time campaign visibility
Creative analysis — using agents to analyze ad creative performance patterns and predict which elements will perform best
Helping other businesses adopt AI agents — our experience setting up and managing agents is now a service we offer to clients
The agency model is changing. The agencies that thrive in 2026 and beyond won't be the ones with the most people — they'll be the ones that combine human expertise with AI capability most effectively. We're building that model at Sphere, and we're seeing it work every day.
Want to See Our Work or Talk to Us?
If you're curious about what an AI-powered agency actually delivers, check out our portfolio. If you want to discuss how AI agents could work for your business — whether you want us to manage your marketing or help you build your own agents — get in touch.
And if you want to learn more about the team and philosophy behind Sphere Agency, visit our about page.
