Case Study

How Did Static Image Testing Lift Our NC ROAS From 1.2 to 1.9?

TL;DR: A DTC food brand running Meta ads was stuck at a 1.2 new customer ROAS with stale creatives. We tested 10 static image variations across two formats (4x5 and 9x16), changing only visuals and on-image text. Within four weeks, NC ROAS climbed to 1.9. No video. No copy-length experiments. Just strategic visual variation on static images.

What Happens When You Stop Running the Same Stale Creatives?

We had a problem that looked like a budget problem.

A DTC food brand running five figures monthly on Meta was watching their new customer ROAS flatten at 1.2. The team assumed it was audience saturation. But when we audited the account, the real issue was simpler: they were running the same three static images. Recycled, reused, exhausted.

Their frequency had crept above 4.0. Their creative refresh rate was zero.

Meta's own research shows that at four repeated exposures, the likelihood of a conversion drops by about 45%. This brand had already blown past that threshold.

So we built an experiment. We created 10 new static image variations and tested them against the existing creatives.

Why Did We Focus on Creative Instead of Targeting?

Here's the thing about paid advertising in 2025 and 2026: the targeting is largely solved. Meta's Advantage+ audience tools handle most of the heavy lifting.

What's not solved is creative.

Google's research found that creative quality accounts for up to 70% of ad performance. Not 50%. Not 60%. Seventy percent.

Meta's algorithm rewards diversity. The platform recommends running multiple conceptually distinct creatives per campaign. When brands introduce enough creative variation, they unlock algorithmic advantages: better exploration of user segments, faster learning phase completion, and lower cost-per-action.

This brand was running three static images across their entire spend. We didn't touch their targeting. We went straight at creative.

How Did We Structure 10 Static Image Variations?

We mapped out a creative strategy based on what we knew about their customer. The brand sells specialty food products. Their buyer is health-conscious, cares about ingredient quality, and shops online. Biggest silent objection: "Is this worth the premium?"

We built 10 static image variations across two format sizes:

Formats: 4x5 (feed-optimized) and 9x16 (Stories and Reels placements). Every variation was produced in both sizes. No video. No carousels. Just static images with on-image text.

Visual approaches: Product close-ups with ingredient callouts, lifestyle shots showing the product in use, before-and-after recipe visuals, customer quote overlays, comparison graphics against generic alternatives, and bold claim visuals with supporting stats.

On-image text angles: Quality proof ("Stone-milled, single-origin"), price justification ("Premium ingredients, everyday price"), social proof ("Join 10K+ home bakers"), transformation ("From pantry to table in 15 minutes"), and ingredient transparency ("Three ingredients. That's it.").

Each variation paired a distinct visual approach with a specific on-image text angle. No copy-length experiments. No body text variations. The hypothesis was that the image and its on-image text would do the heavy lifting inside the feed.
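
For readers who want the structure rather than prose, here's a minimal sketch of the test grid in Python. The specific pairings below are illustrative placeholders, not the brand's actual assignments; what matters is the shape: ten distinct visual-plus-text concepts, each rendered in both crops.

```python
# Illustrative sketch of the test grid. The pairings are hypothetical
# placeholders; the real ten were hand-picked for this brand.
from itertools import product

# Ten conceptually distinct (visual approach, on-image text angle) pairings
pairings = [
    ("product_closeup", "ingredient_transparency"),
    ("product_closeup", "quality_proof"),
    ("lifestyle", "transformation"),
    ("lifestyle", "social_proof"),
    ("before_after", "transformation"),
    ("customer_quote", "social_proof"),
    ("comparison", "price_justification"),
    ("comparison", "quality_proof"),
    ("bold_claim", "price_justification"),
    ("bold_claim", "ingredient_transparency"),
]
formats = ["4x5", "9x16"]  # feed crop vs. Stories/Reels crop

# Every pairing is rendered in both formats: 10 concepts -> 20 files.
creatives = [
    {"visual": v, "text_angle": t, "format": f}
    for (v, t), f in product(pairings, formats)
]
assert len(creatives) == 20
```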

Within a few days of launch, each variation had enough spend for meaningful click and conversion data.

What Actually Won?

Across the 10 variations, new customer ROAS ranged from 1.2 on the worst performer to 1.9 on the best. The gap came down entirely to visual and messaging choices on static images.

The top performer: a 4x5 product close-up with ingredient transparency messaging. Clean visual. Bold on-image text calling out the ingredient list. It tapped into the MAHA movement and the broader consumer trend toward healthier, real food. That cultural tailwind gave the creative an edge: it wasn't just selling a product, it was aligning with a shift buyers already felt.

The winning creatives weren't the most elaborate. They were the most specific. They addressed a real concern (ingredient quality and value) and gave the viewer enough information in the image itself to decide whether to engage.

What Does This Reveal About Static Image Testing?

Three lessons emerged, each one something most teams get wrong.

Creative fatigue hits faster than you think. At four exposures, conversions drop roughly 45%. This brand had been running the same three images for months. Introducing 10 fresh variations brought frequency back down and gave the algorithm new signals to work with. CPA dropped meaningfully within the first week.
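
As a rough illustration of how you might operationalize that threshold, here's a minimal sketch assuming frequency is computed the standard way (impressions divided by reach). The ad set names and numbers are hypothetical, not pulled from this account.

```python
# Minimal refresh-flag sketch. Names and numbers are hypothetical.
FATIGUE_THRESHOLD = 4.0  # the ~4-exposure mark cited above

def needs_refresh(impressions: int, reach: int) -> bool:
    """Return True once average exposures per person pass the threshold."""
    if reach == 0:
        return False
    return impressions / reach > FATIGUE_THRESHOLD

ad_sets = [
    {"name": "stale_closeup", "impressions": 450_000, "reach": 100_000},
    {"name": "fresh_variant", "impressions": 120_000, "reach": 80_000},
]
for ad in ad_sets:
    freq = ad["impressions"] / ad["reach"]
    flag = "refresh" if needs_refresh(ad["impressions"], ad["reach"]) else "ok"
    print(f"{ad['name']}: frequency {freq:.1f} -> {flag}")
```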

Format fit matters even for static. The 4x5 format consistently outperformed 9x16 in feed placements. But 9x16 dominated in Stories and Reels placements. Same image concept, different crop, different performance. Running both formats for every variation captured value across all placements without extra creative effort.
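
Here's a sketch of the kind of breakdown behind that observation, with made-up numbers for illustration: aggregate spend and revenue by placement and format, then compare ROAS within each placement.

```python
# Hypothetical numbers for illustration only: compare ROAS by format
# within each placement to see the crop/placement fit described above.
from collections import defaultdict

rows = [  # (placement, format, spend, revenue)
    ("feed", "4x5", 1000, 1900),
    ("feed", "9x16", 1000, 1400),
    ("stories_reels", "4x5", 500, 600),
    ("stories_reels", "9x16", 500, 950),
]

totals = defaultdict(lambda: [0.0, 0.0])
for placement, fmt, spend, revenue in rows:
    totals[(placement, fmt)][0] += spend
    totals[(placement, fmt)][1] += revenue

for (placement, fmt), (spend, revenue) in sorted(totals.items()):
    print(f"{placement:13s} {fmt:5s} ROAS {revenue / spend:.2f}")
```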

Specificity beats polish. The ingredient transparency image with bold on-image text outperformed every lifestyle variation we tested. Why? Because it answered the buyer's real question and rode a cultural wave: the MAHA-driven push toward real, transparent food. Most ad teams default to aspirational visuals. But a buyer scrolling Meta at 9pm wants to know "What's in this?" and "Is it worth it?", not see a curated kitchen scene.

How Did Results Scale After Week One?

Once we identified the top three performers, we built secondary iterations. The winning ingredient-transparency image got variations with different text angles, different background colors, and different featured SKUs.

NC ROAS climbed from 1.2 at baseline to 1.5 by week two. By week four, it reached 1.9. No budget increase. No new formats. No video production. Just iterating on what the data said was working.

The current benchmark for Meta ad ROAS sits around 2.19:1 across all industries. For new customer acquisition specifically, hitting 1.9 from a 1.2 baseline represents a 58% improvement, and it came entirely from creative changes.
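
For anyone checking the math, the 58% figure is plain relative-lift arithmetic:

```python
# Relative lift of final NC ROAS over baseline
baseline, final = 1.2, 1.9
lift = (final - baseline) / baseline
print(f"{lift:.0%}")  # -> 58%
```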

Why Don't More Teams Test Static Images This Way?

Testing 10 static variations sounds simple. It is. What's hard is the strategic thinking that makes each variation meaningfully different.

The real barrier is that most teams skip the strategy layer. They go straight to the design tool and produce surface-level variations: same message, different color. Same angle, different font. That's not testing. That's decorating.

Before you can test smart variations, you need to know who your customer actually is, what they're afraid of, what stage they're in, and how your positioning differs from competitors. That work takes process, and process is rare: only 43% of content teams describe their workflows as standardized and consistently efficient.

When you test 10 variations built on distinct strategic angles, even static images can move your ROAS significantly.

Frequently Asked Questions

Do I need video to improve my Meta ad performance?

No. This entire test used only static images with on-image text. Video can work, but static creative testing is faster to produce, easier to iterate, and still highly effective. Start with static, prove the angles, then consider video for top performers.

How many variations should I test if my budget is smaller?

Even 5 to 7 variations will surface meaningful differences. The key is making each variation conceptually distinct, not just visually different. Test different messages, different objections, different proof points.

Should I test 4x5 or 9x16 first?

Both. Produce every creative in both formats. 4x5 wins in feed placements. 9x16 wins in Stories and Reels. Running both ensures you capture value across all Meta placements with minimal extra production effort.

What if my best-performing image looks "ugly" or feels off-brand?

Run it anyway. Your taste is not your customer's taste. The ingredient-transparency image that won our test wasn't the most visually polished. It was the most clear and specific. The algorithm and your customer care about message relevance, not aesthetics.

Sources

  1. Analytics at Meta, "Creative Fatigue: How Advertisers Can Improve Performance by Managing Repeated Exposures," Medium, 2023 https://medium.com/@AnalyticsAtMeta/creative-fatigue-how-advertisers-can-improve-performance-by-managing-repeated-exposures-e76a0ea1084d
  2. Celtra, "Creative Quality: The Not-So-Secret Sauce for Boosting Ad Performance" (citing Google Media Lab) https://celtra.com/blog/creative-quality-the-not-so-secret-sauce-for-boosting-ad-performance/
  3. TrendTrack, "What is the Average ROAS for Facebook Ads in 2025?" https://www.trendtrack.io/blog-post/what-is-the-average-roas-for-facebook-ads
  4. Canto, "State of Digital Content Report." https://www.canto.com/blog/state-of-digital-content/
