How to Get a 10% Reply Rate and 0% Close
Why your "winning" experiments are secretly failing (and what to measure instead)
Everyone wants 10% reply rates overnight. But they’re optimizing for the wrong metrics.
Last week, three different founders told me the same story.
They ran “experiments” that failed. Pulled the plug after a week. Went back to what “worked.”
Except what worked was actually failing…they just couldn’t see it yet.
This is the state of experimentation in 2025: We’ve replaced the scientific method with anxiety-driven guesswork. We’re swimming in data but drowning in impatience. And we’re optimizing for metrics that actively mislead us about what’s actually working.
Keep scrolling to see real-world examples…
The Three Experiments That Prove We’ve Forgotten How to Test
Experiment #1: The LinkedIn DM Paradox
Chris Cozzolino recently shared what looked like a massive win. He tested LinkedIn DMs, changed ONE variable, and his response rate jumped from 12% to 25% in just two weeks.
The comments section exploded with praise.
“Game-changer!”
“What was the variable?”
“Teaching this to my team tomorrow!”
But two questions nobody was asking:
Which audience had higher conversion rates down funnel?
What if that 12% response rate led to 50% higher SQL conversion and better close rates, while the 25% response rate generated exactly 0 closed/won deals?
This isn’t hypothetical. I’ve seen this pattern play out dozens of times. The message that gets the highest reply rate often attracts the wrong buyers…people who are curious but not serious, responsive but not qualified.
You can’t eat reply rates. You can only eat revenue.
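To see why, put hypothetical numbers on the full funnel. This is a minimal sketch with invented SQL rates, close rates, and a $10K average deal (Chris shared none of these), comparing revenue per 1,000 DMs:

```python
# Illustrative only: hypothetical funnel numbers, not Chris's actual data.
# Revenue per 1,000 DMs = replies * SQL rate * close rate * avg deal size.

def revenue_per_1000_dms(reply_rate, sql_rate, close_rate, avg_deal=10_000):
    replies = 1000 * reply_rate
    sqls = replies * sql_rate
    closed = sqls * close_rate
    return closed * avg_deal

# Variant A: the "losing" 12% reply rate, but serious buyers reply.
a = revenue_per_1000_dms(reply_rate=0.12, sql_rate=0.30, close_rate=0.25)

# Variant B: the "winning" 25% reply rate, but curious tire-kickers reply.
b = revenue_per_1000_dms(reply_rate=0.25, sql_rate=0.10, close_rate=0.10)

print(f"A: ${a:,.0f} per 1,000 DMs")  # A: $90,000
print(f"B: ${b:,.0f} per 1,000 DMs")  # B: $25,000
```

Under these (invented) numbers, double the reply rate produces less than a third of the revenue. The “winner” loses.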
Experiment #2: The Client Who Almost Quit Too Soon
Last month, a client called an emergency meeting.
“We A/B tested your approach for a week. Lower connection rates. Fewer meetings booked. We’re going back to what worked.”
I convinced them to give it 30 more days.
One month later: “Your approach generated 4X the reply rate and higher SQL conversion.”
Same test. Same data. Different timeline.
The only thing that changed was patience.
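There’s a statistical reason the 30-day verdict reversed the 7-day one. At a week’s volume, a real lift is usually indistinguishable from noise. Here’s a rough sketch using a standard two-proportion z-test, with assumed send volumes and reply rates:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Standard two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Week one: 150 sends per arm (assumed volume), 6% vs ~9% reply rate.
print(two_proportion_z(9, 150, 14, 150))   # ~1.08 -> not significant (<1.96)

# Thirty days: 600 sends per arm, same underlying rates.
print(two_proportion_z(36, 600, 56, 600))  # ~2.17 -> significant
```

Same true effect; four times the data turns “inconclusive” into “clear.” Killing a test on day seven isn’t reading the data. It’s reading the noise.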
Experiment #3: The 3% That Changed Everything
La Growth Machine analyzed thousands of LinkedIn outbound campaigns across their entire client base. They tested two approaches:
Test A: No connection note, then follow-up DM
Test B: Transparent sales note, then follow-up DM
The result? Transparent notes got 3% fewer connections but DOUBLE the conversions.
3% fewer connections. 100% more revenue.
Most teams would have killed Test B on day three.
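The arithmetic is worth spelling out. La Growth Machine reported the relative changes, not absolute rates, so the baseline here is assumed:

```python
invites = 1000
# Assumed baseline: 30% of cold invites accept with no note.
connects_a = invites * 0.30
connects_b = connects_a * 0.97   # "3% fewer connections"

conv_a, conv_b = 0.02, 0.04      # "double the conversions" (assumed baseline)

print(connects_a * conv_a)  # Test A: 6.0 conversions
print(connects_b * conv_b)  # Test B: ~11.6 conversions (~1.94x)
```

A 3% haircut at the top of the funnel barely dents a 2X lift at the bottom.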
The Monday-to-Friday Lifecycle of Modern “Experimentation”
Here’s how every “experiment” dies in 2025:
Monday: Launch test with enthusiasm. Send Slack message: “Trying something new! 🚀”
Tuesday: Check metrics every 30 minutes. Refresh dashboard. Refresh again. Starting to sweat.
Wednesday: Panic sets in. Numbers aren’t improving. Boss asks for update. You mumble something about “giving it time.”
Thursday: Revert everything. “The data was clear…it wasn’t working.”
Friday: Post on LinkedIn about “failing fast” and “learning from experiments.”
That’s not experimentation. That’s fear. That’s anxiety cosplaying as strategy.
What Tiffany Gonzalez Knows About Real Testing
At a recent Sprouts.ai webinar, Tiffany Gonzalez (who’s scaled revenue at Microsoft, Amazon, and Coupang) dropped this gem:
“Don’t look for proof. Look for data that shows you’re wrong.”
She learned this from a scientist friend, and it revolutionized how she approaches GTM.
Most teams run experiments like this:
Launch with a hypothesis they desperately want to be true
Cherry-pick data that confirms success
Ignore contradictory signals
Scale prematurely
Blame “execution” when it fails
Tiffany does the opposite:
Launch experiment
Actively hunt for failure signals
If you can’t find failure after 4 weeks, you’re probably right
Scale carefully, ready to reverse
Her framework comes from Jeff Bezos’s “one-way door” principle: irreversible one-way-door decisions deserve extreme caution, but most GTM decisions are reversible two-way doors. The worst outcome? You admit you were wrong and try something else.
But we treat every test like it’s irreversible, like our careers depend on being right immediately.
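One way to put Tiffany’s framework into practice is to pre-register your kill criteria before launch, so Wednesday’s panic can’t quietly rewrite them. A minimal sketch; the thresholds and field names are illustrative, not hers:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Experiment:
    name: str
    start: date
    min_weeks: int = 4              # Tiffany's window: hunt for failure ~4 weeks
    min_sends: int = 500            # don't judge below this sample size (assumed)
    kill_reply_rate: float = 0.02   # pre-registered failure threshold (assumed)

    def verdict(self, today: date, sends: int, replies: int) -> str:
        # Too early or too little data: keep hunting for failure signals.
        if today < self.start + timedelta(weeks=self.min_weeks) or sends < self.min_sends:
            return "keep running (too early to judge)"
        if replies / sends < self.kill_reply_rate:
            return "kill: pre-registered failure threshold hit"
        return "scale carefully, ready to reverse"

exp = Experiment("transparent sales note", start=date(2025, 3, 3))
print(exp.verdict(date(2025, 3, 12), sends=220, replies=4))  # too early to judge
print(exp.verdict(date(2025, 4, 7), sends=900, replies=81))  # scale carefully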
Stop Waiting for ‘Perfect’
Tiffany operates at 70-80% confidence. Most teams won’t move without 99% certainty.
Here’s what she knows that they don’t: Getting from 80% to 99% confidence is exponentially harder and rarely changes the outcome.
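“Exponentially harder” is directionally right: the sample you need grows with the square of the z-score behind your confidence level. A quick illustration, estimating a ~10% reply rate to within ±2 points:

```python
from statistics import NormalDist

def n_required(confidence, margin=0.02, p=0.10):
    """Sends needed to estimate a ~10% reply rate within +/-2 points."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return (z / margin) ** 2 * p * (1 - p)

for c in (0.70, 0.80, 0.95, 0.99):
    print(f"{c:.0%} confidence -> ~{n_required(c):,.0f} sends")
# 70% -> ~242, 80% -> ~370, 95% -> ~864, 99% -> ~1,493
```

Going from 80% to 99% roughly quadruples the data you need, usually for the same decision you would have made anyway.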
It’s like poker. You don’t need to know every card in the deck. You need:
Directional confidence
The ability to read patterns
The courage to fold when wrong
The patience to play multiple hands
Your GTM strategy isn’t a PhD thesis requiring peer review. It’s a series of educated bets requiring iteration.
What Real Experimentation Looks Like: A RevCast Case Study
RevCast, an early-stage SaaS company, recently showed what actual experimentation looks like.
Instead of testing random tactics, they:
Step 1: Used ChatGPT to create an exhaustive list of every possible buying signal (competitor content engagement, profile views, event attendance, job postings, and dozens more).
Step 2: Tested messaging for each signal type, tied to the critical business outcomes their best customers cared about (reducing commission overspend, spending less time on board slides, etc.).
Step 3: Ran tests for THREE MONTHS. Not three days. Three months.
Step 4: Validated three signals as true buying triggers:
Following their CEO on LinkedIn for 30+ days
CRO hired within the last 90 days
Completing Pavilion’s CRO School
Step 5: Doubled down on only those three signals and their proven messaging.
Result: 4X ROI in the first two months after validation.
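The interesting part is Step 4: deciding which signals are real buying triggers and which just got lucky. One reasonable way to make that call (my sketch, not RevCast’s actual method) is to rank signals by a conservative lower bound on their conversion rate rather than the raw rate:

```python
import math

def wilson_lower(conversions, n, z=1.96):
    """Lower bound of the 95% Wilson score interval for a conversion rate."""
    if n == 0:
        return 0.0
    p = conversions / n
    denom = 1 + z**2 / n
    center = p + z**2 / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - spread) / denom

# Hypothetical three-month results per signal: (contacted, meetings booked).
signals = {
    "follows CEO 30+ days": (180, 27),
    "CRO hired <90 days":   (140, 18),
    "Pavilion CRO School":  (60, 9),
    "viewed pricing page":  (12, 3),   # high raw rate, tiny sample
}

for name, (n, conv) in sorted(signals.items(),
                              key=lambda kv: -wilson_lower(kv[1][1], kv[1][0])):
    print(f"{name}: raw {conv/n:.1%}, lower bound {wilson_lower(conv, n):.1%}")
```

The small-sample signal looks best on raw rate and worst once you account for how little evidence sits behind it. That’s the same trap reply-rate worship falls into.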