Growth Is Entering a New Paradigm
After eight years running user growth at ByteDance, TikTok, Kuaishou, and miHoYo, I've come to believe the most expensive line item in growth isn't media spend. It's the institutional knowledge that vanishes every time someone leaves.
I recently came across a detail that stopped me: Anthropic's growth marketing was run by a single non-technical operator for ten straight months. Using Claude Code, one person covered nearly all growth execution. Granted, Claude's product-led growth engine was already strong. But even after discounting for that, the fact that one of the fastest-growing AI companies in the world chose to organize its growth function this way deserves serious attention. A year ago, most people would have found this hard to believe. For me, it wasn't surprising at all — because we're building the product that makes this possible, and every week brings new proof points.
The golden age I grew up in
Eight years ago, I was running growth strategy at Toutiao (ByteDance's flagship news app). Our team operated like assembly-line workers inside an app factory, sprinting across product lines and shipping dozens of live experiments every week. We pulled our own data, validated our own results, and our days revolved around two things.
First, building a centralized growth platform — abstracting the scattered capabilities of ad buying, creative production, push notifications, and referral loops into reusable, platform-level tools and playbooks. Second, acting as growth Business Partners: embedding into Toutiao, Xigua, Huoshan, and TikTok to deploy those tools, then feeding validated learnings back into the platform.
Simple in theory. Brick by brick in practice. Attribution had to be verified case by case. Anti-fraud logic required constant iteration. Experiment velocity had to keep climbing. But results came easily in those days: our operations team could reliably steer millions of dollars in daily spend, and a well-designed strategy could move 7-day or 14-day retention by several percentage points. As programmatic tools matured, most decisions about which creatives to scale and which accounts to shut down could be handled by rule-based logic.
-
For a while, I believed this was the endgame for growth:
smart people, good systems, relentless iteration.
I carried that belief from Toutiao to TikTok, then to Kuaishou, and later to miHoYo for the launches of Genshin Impact and Honkai: Star Rail. The products changed. The methods shifted at the margins. But the underlying assumption never wavered — find the best people, give them the best tools, iterate fast, test faster.
A growth team's ceiling was always defined by the judgment and stamina of its operators. I ran this model across different products and markets for nearly eight years.
The old paradigm is breaking down
The deeper I went across different business contexts, the more often I hit walls. The technical landscape and traffic structures were evolving, and last-generation methods simply stopped working on next-generation problems.
For the past decade, the growth industry has operated on a widely accepted playbook: hire experienced operators, trust their judgment, and run massive volumes of experiments to find local optima. Whoever understood platform mechanics best, whoever had the sharpest creative instinct, whoever grasped attribution most deeply — they could pull ROI numbers that others couldn't touch. Platforms weren't yet intelligent enough to commoditize those skills, and information asymmetry was wide enough that a few sharp individuals could generate real competitive advantage through intuition alone.
Paradigm Shift
Meta's Advantage+, Google's Performance Max, TikTok's GMV Max — they're all doing the same thing: reclaiming decisions that used to depend on human expertise and folding them into platform algorithms. Budget allocation, traffic distribution, audience targeting — more of these levers are being pulled by machines, not people.
I've watched this firsthand. In the early days, a skilled media buyer on my team could manually tune targeting, bids, and creative combinations to squeeze multiples of performance out of the same campaign. That "feel" was a genuine edge. But over the past two or three years, the feedback I hear from buyers has shifted dramatically: "It's harder to scale. Budgets drift the moment you increase them. Manual adjustments barely register. The algorithm needs one to two weeks of learning. Creative fatigue is just the new normal."
As platform algorithms mature, the space for human operators to influence outcomes is being systematically compressed. The basis of competition in growth has fundamentally changed: it used to be about who has better judgment and faster hands. Now it's about who has better systems and tighter feedback loops.
Yet the vast majority of growth teams are still organized to compete on the old axis.
Growth's most expensive hidden cost
Over eight years, whether at TikTok, Kuaishou, or miHoYo, I found myself doing essentially the same thing each time: rebuilding attribution infrastructure, creative pipelines, media-buying systems, and A/B testing frameworks from scratch. Every time, it required significant headcount investment. Every time, I accumulated real operational knowledge across different markets and verticals.
But in hindsight, almost none of that knowledge actually survived inside the organization. It lived in the heads of a handful of individuals.
-
A bidding strategy that took an analyst months
to develop disappears the day they resign.
Lessons a media-buying team learned the hard way get re-learned from scratch by their replacements. On the surface, organizations keep investing resources. In reality, countless companies are paying tuition on the same lessons, over and over again.
This isn't a management failure at any particular company. It's a structural flaw in the operating model itself. As long as expertise is stored in individual brains, it can never be truly inherited, reused, or compounded.
I once believed that comprehensive systems, detailed documentation, and careful handoffs could solve this problem. I was wrong. Because what's actually valuable isn't "knowing what was done" — it's "knowing why that particular call was made at that particular moment." That kind of judgment doesn't survive in SOPs. Even if you write it down, the context has already shifted by the time someone reads it.
AI isn't changing efficiency. It's changing who does the work.
If expertise has to be extracted from human minds before it can be reused, then perhaps the more fundamental move is to stop relying on individual expertise for execution in the first place.
Over the past year, AI tools have flooded the market. The question most people ask is: "Is AI going to replace us?" I don't think that's the right question. What's actually happening is subtler and more consequential: in an expanding range of contexts, the entity performing the work is shifting from humans to agents. This isn't an efficiency upgrade. It's a structural change in how organizations operate.
Consider a growth team managing eight figures in monthly ad spend. Historically, this required a dozen or more people collaborating — media buyers monitoring accounts, designers producing creatives, analysts pulling reports, strategy leads designing experiments, plus external agency coordination. The daily rhythm was highly repetitive: check dashboards in the morning, adjust campaigns during the day, monitor spend at night. Rinse and repeat.
Looking back at my own experience leading these teams, the moments that actually required human judgment were surprisingly rare. Most of the time was consumed by data wrangling, rule execution, and manual repetition — followed by waiting for results, running a retrospective, and starting the next cycle.
The New Model
The same scope of work can now be handled by a leaner team working alongside a fleet of agents. The agents pull data, run diagnostics, adjust budgets, generate creative variants, and execute tests — continuously, without the bottleneck of human bandwidth. Humans define objectives, set guardrails, handle edge cases, and calibrate judgment at critical decision points — with the option to engage full "autopilot" mode. Each person is no longer managing a handful of ad campaigns. They're overseeing an autonomous execution system — and their effective scope of work has expanded by an order of magnitude.
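To make the division of labor concrete, here's a minimal sketch in Python. Everything in it is illustrative: the names (Guardrails, agent_cycle) and the thresholds are invented for this post, not our actual schema. What matters is the shape of the loop: the agent acts freely inside human-defined limits, and control returns to a human only when a guardrail is breached.

from dataclasses import dataclass

@dataclass
class Metrics:
    cpi: float          # cost per install, USD
    spend_today: float  # spend so far today, USD
    d7_roas: float      # day-7 return on ad spend

@dataclass
class Guardrails:
    """Human-defined limits; the agent acts freely inside them."""
    max_daily_spend: float       # hard spend ceiling, USD
    target_cpi: float            # CPI the agent optimizes toward
    max_cpi_drift: float = 0.25  # escalate if CPI runs >25% over target

def agent_cycle(m: Metrics, rails: Guardrails) -> str:
    """One pass of the loop: observe, act within guardrails, or hand off."""
    if m.cpi > rails.target_cpi * (1 + rails.max_cpi_drift):
        return "escalate: CPI drifted outside guardrails, needs a human"
    if m.spend_today >= rails.max_daily_spend:
        return "hold: daily spend ceiling reached"
    if m.d7_roas >= 1.0:
        return "scale: raise budget 10%"  # routine move, no human needed
    return "hold: keep rotating creatives"

# The loop runs continuously; a human only ever sees the escalations.
rails = Guardrails(max_daily_spend=50_000, target_cpi=3.50)
print(agent_cycle(Metrics(cpi=4.80, spend_today=12_000, d7_roas=1.3), rails))
# -> escalate: CPI drifted outside guardrails, needs a human

Run this across hundreds of campaigns and the human's inbox contains only the escalations. That is what overseeing an autonomous execution system looks like in practice.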
The future of growth teams: orchestration over execution
In this new model, the core competency of a growth team is no longer execution speed. It's agent orchestration.
What does orchestration look like in practice? Take a concrete scenario: a gaming company wants to blitz the North American market for a new title. Under the old model, the growth lead runs a kickoff meeting to set targets, an analyst builds budget and payback models, media buyers split budgets across channels and accounts based on experience, everyone monitors data manually, and you hold a weekly retrospective. The entire process is bottlenecked by the experience and feel of a few key operators.
An orchestration-first organization approaches this differently. The growth lead decomposes the "blitz" objective into a task structure that agents can execute: market prioritization logic, acceptable CPI ranges, creative test directions to cover, conditions that trigger budget increases, and signals that mean it's time to pull back. Agents execute against that rule set. Humans only intervene at anomaly signals and pivotal decision points.
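As an illustration of what that decomposition can look like (the structure and field names below are invented for this post, not a real config), the "blitz" objective becomes a declarative rule set the agents execute against:

# Hypothetical example: a "blitz North America" objective decomposed
# into rules an agent can execute. All names and numbers are made up.
launch_plan = {
    "market_priority": ["US", "CA"],    # where to push first
    "cpi_range_usd": {                  # acceptable CPI band per market
        "US": (2.00, 4.50),
        "CA": (1.50, 3.50),
    },
    "creative_tests": [                 # directions to cover, not finished assets
        "gameplay_capture",
        "character_story",
        "ugc_style",
    ],
    "scale_up_when": {                  # conditions that trigger budget increases
        "d1_retention": ">= 0.40",
        "cpi_vs_floor": "<= 1.2x",
        "stable_days": 3,
    },
    "pull_back_when": {                 # signals that it's time to retreat
        "cpi_vs_ceiling": "> 1.0x",
        "d7_roas": "< 0.8",
    },
}

The point of forcing the plan into this form is that it's explicit, versionable, and auditable, in a way that a few key operators' feel never can be.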
There's no need to wait for the weekly meeting: agents monitor performance data and business updates in real time. Retrospectives change too. Instead of only reviewing "what we tried," the team audits "where in the decision chain the system can improve." The next time the organization enters a similar market, the system's accumulated learning is the new starting point. Not a memory in someone's head, but an iterable, callable asset in the system itself.
-
The real moat for next-generation growth organizations
isn't automated execution. It's explainable, systematic decision-making.
Because agents don't accept vague instructions. You have to translate "feel" into explicit logic. And that translation process — that act of making the implicit explicit — is precisely how individual capability becomes organizational capability.
A positive-sum game
Growth is just the entry point. Tens of millions of people worldwide work in digital marketing and growth. A disproportionate share of their time goes to checking dashboards, adjusting bids, swapping creatives, and monitoring for anomalies. Many of these people are talented, well-educated, and creative — yet their intelligence and imagination have been trapped at the execution layer for years. That's an enormous waste of human potential.
At the same time, growth capability is distributed extremely unevenly. Large companies maintain growth teams of hundreds with deep institutional playbooks. A five-person startup has nothing. When growth execution is systematized through agents, good products become easier to surface. Competition shifts back to product value, not budget and headcount.
Take it one step further: a system-driven growth engine, because it runs on data feedback loops rather than individual intuition, is inherently more precise. Users see more relevant content. Advertisers spend less. Platform ecosystems get healthier. That's a positive-sum outcome.
And if the growth domain can prove out the "80% agent, 20% human judgment" model, the same paradigm will extend to a much broader set of knowledge work.
What I'm building
Before starting this company, I spent a long time thinking about a single question: is it possible to take every business judgment I've made over the past eight years — every mistake, every hard-won insight, every feedback loop — and store it in a system that can continue learning on its own? If that system could take in new information and autonomously iterate on its own decisions, that would be something close to a breakthrough. It would mean institutional memory no longer has to decay.
With today's foundation model capabilities, I believe that window has opened.
The product I'm designing doesn't bolt AI features onto a traditional workflow. It's not a smarter assistant you prompt for suggestions. From the ground up, it treats AI as a manageable, collaborative, auditable unit of execution — one that runs continuously in live business environments, learns from real-world signals, and iterates when problems arise.
The Question We're Answering
Can an agentic system autonomously orchestrate a full-funnel growth workflow, from diagnostic analysis and strategy formulation to creative development, media buying, and optimization, while delivering outcomes that are more reliable, consistent, and interpretable than those of most human-led teams?
When that becomes possible, human time and attention get redirected to where they actually matter: understanding users, defining value, and building great products. Not pulling data, tweaking bids, iterating on copy, and cycling through creatives — work that agents will do better.
User growth is undergoing a transition from a labor-intensive discipline to an agent-collaborative one. This won't happen all at once. It's happening right now, one proof point at a time.
-
GrowthGPT is our answer to the question:
what does growth look like when agents do the execution?
This isn't a story about AI replacing people. It's about what becomes possible when the best people are no longer stuck doing the work that machines can do better. The teams that figure this out first won't just be more efficient — they'll be playing a different game entirely.
See it in action.
Watch GrowthGPT run a live campaign optimization — from diagnosis to execution.