Ad Testing in the Age of AI: How to Run Experiments That Still Matter in 2025

04 July 2025 | AI

AI-led automation is reshaping how we run digital campaigns. Tools like Google’s Performance Max and Meta’s Advantage+ promise greater efficiency, but they’re also making structured ad testing harder than ever.

With less visibility, fewer levers, and more black-box decision-making, many marketers are asking the same question: is creative testing still possible?

The answer is yes, but it requires a shift in mindset, strategy, and execution. This article explores how to run ad tests that still deliver insights in an AI-driven landscape, and how advertisers can continue making data-led decisions even as automation increases.

Creative still matters – even if the platforms don’t tell you why

AI helps determine where your ad shows and who it reaches – but the message, look, and feel still play a major role in performance.

This means:

  • Stronger creative inputs deliver better results. Even in fully automated campaigns, click-through rate (CTR), conversion rate (CVR) and return on ad spend (ROAS) are shaped by message and design.
  • Testing is how you learn what works. Without structured tests, advertisers risk flying blind, especially when platform reporting is limited.
  • AI needs human strategy. Platforms don’t understand your brand nuance, audience segments or campaign goals. Humans still lead on messaging and meaning.

Despite the black-box nature of modern platforms, advertisers can – and should – still test creative in a way that delivers value.

What makes ad testing more difficult in 2025?

Structured A/B testing isn’t as straightforward as it once was. Automation has made platforms more efficient, but also more opaque.

Key challenges:

  • Limited visibility in Performance Max and Advantage+
    Asset-level reporting is minimal or non-existent, especially when multiple formats and messages are grouped together.
  • Reduced control over delivery
    AI decides which asset to show, when, and to whom, often without disclosing why it made that choice.
  • Fewer campaign levers to isolate tests
    Platforms encourage campaign consolidation, making it harder to separate variables like creative, audience, or placement.
  • Overloaded creatives = noisy results
    Dynamic creative tools can mix headlines, body text and images in a way that hides what’s actually driving performance.

The result? Advertisers must rethink how they structure tests, isolate learnings, and measure success.

How to adapt ad testing for automated campaigns

While AI has made traditional A/B testing more complex, there are still smart ways to test creative and extract learnings. Here are some of the most effective methods:

1. Test asset groups inside PMax and Meta manually

Group creative by theme, tone or product focus. This gives the algorithm distinct inputs while still enabling you to tag and compare performance via external tools like GA4.
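
As a rough illustration (the group names, themes and URLs below are hypothetical), a plan as simple as this keeps each asset group on a single theme, so whatever shows up later in GA4 can be traced back to a theme rather than an arbitrary mix of assets:

```python
# Hypothetical asset-group plan: one theme per group, named so the theme
# is still recognisable later in GA4 or Looker Studio
asset_groups = {
    "ag_value_msg": {
        "theme": "value / price-led messaging",
        "headlines": ["Save more every day", "Prices you'll actually like"],
        "landing_page": "https://example.com/offer?utm_content=value_msg",
    },
    "ag_urgency_msg": {
        "theme": "urgency / limited-time messaging",
        "headlines": ["Ends Sunday", "Last chance to save"],
        "landing_page": "https://example.com/offer?utm_content=urgency_msg",
    },
}

for name, group in asset_groups.items():
    print(f"{name}: {group['theme']} -> {group['landing_page']}")
```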

2. Use naming conventions and UTM parameters to track performance

Since the platforms surface little or no asset-level data, track it yourself through consistent naming and URL structures. This enables you to analyse performance in Looker Studio or other reporting environments.
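
For example, a small helper like the sketch below keeps UTM tagging consistent across every variant. The source, campaign and creative names are placeholders, and the parameter scheme is just one possible convention:

```python
from urllib.parse import urlencode, urlparse, urlunparse

def tag_url(base_url, campaign, theme, variant):
    """Append consistent UTM parameters so each creative theme and variant
    can be identified later in GA4 or Looker Studio."""
    utm = {
        "utm_source": "meta",                 # or "google"
        "utm_medium": "paid_social",
        "utm_campaign": campaign,             # e.g. "2025_q3_launch"
        "utm_content": f"{theme}-{variant}",  # e.g. "value_msg-a"
    }
    parts = urlparse(base_url)
    query = parts.query + ("&" if parts.query else "") + urlencode(utm)
    return urlunparse(parts._replace(query=query))

# The same convention applied to two hypothetical creative variants
print(tag_url("https://example.com/offer", "2025_q3_launch", "value_msg", "a"))
print(tag_url("https://example.com/offer", "2025_q3_launch", "value_msg", "b"))
```

The exact scheme matters far less than applying it consistently, so that utm_content can be split back into theme and variant at reporting time.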

3. Run split tests via geo or time-based segmentation

If you can’t test inside a campaign, try isolating a variable by location or date. For example, run version A in NSW and version B in VIC over the same period, or alternate creatives weekly.
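
The comparison itself can stay simple. Here is a minimal sketch, assuming you can export clicks and conversions per region (the numbers below are made up), using a standard two-proportion z-test on the two conversion rates:

```python
from math import sqrt
from statistics import NormalDist

# Illustrative numbers only: clicks and conversions for each geo/creative split
clicks_a, conv_a = 12000, 240   # version A in NSW
clicks_b, conv_b = 11500, 276   # version B in VIC

p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
p_pool = (conv_a + conv_b) / (clicks_a + clicks_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / clicks_a + 1 / clicks_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test

print(f"CVR A: {p_a:.2%}, CVR B: {p_b:.2%}, p-value: {p_value:.3f}")
```

Geo splits are rarely perfectly clean (regional demand can differ for reasons unrelated to creative), so treat the result as directional and, where possible, repeat the test with the creatives swapped between regions.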

4. Use non-automated formats for controlled testing

Before launching into Advantage+, Performance Max or Google’s new AI Max, use standard Meta campaigns or Google Search campaigns to validate creative. These environments provide more transparency and control.

5. Monitor engagement metrics alongside conversions

Metrics like CTR, hold time, scroll depth, and post interactions give directional insight – especially useful when conversion data is delayed or obscured.

Meta-specific ad testing: keep it simple and structured

Meta’s Advantage+ creative tools are powerful, but often too broad to deliver meaningful insights unless you take control.

Tips for testing on Meta:

  • Limit dynamic creative variations – stick to 1–2 key variables to isolate impact.
  • Use parallel ad sets for side-by-side testing of copy or visuals.
  • Avoid testing multiple messages in one ad – this makes it impossible to understand what worked.
  • Use engagement data to learn – likes, shares, comments, and view-through metrics help identify standout creative.

Even with automation, structured testing is still possible – it just requires a shift from campaign-led testing to concept-led testing.

What advertisers should do next

Ad testing isn’t about fighting AI – it’s about working alongside it. Here’s how advertisers can continue to gather creative insights in 2025:

  • Reframe how you think about testing: focus on themes, concepts, and user intent, not individual headlines or button colours.
  • Control what you can: while AI controls delivery, you can still decide how assets are grouped and which audiences you serve.
  • Track beyond the platform: use UTM parameters and GA4 to build your own insight pipeline.
  • Don’t test everything at once: isolate a single creative variable to ensure learnings are clear.
  • Document your tests: keep a simple log of what was tested, where, and what you learned – this builds long-term creative intelligence (see the sketch below).
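
A shared spreadsheet is usually enough, but even a few lines of code can keep the log consistent. The sketch below is one possible structure, with hypothetical fields and an illustrative entry:

```python
import csv
import os
from datetime import date

LOG_FILE = "creative_test_log.csv"
FIELDS = ["date", "platform", "campaign", "variable_tested", "variants", "winner", "learning"]

# Illustrative entry only; the field names and values are hypothetical
entry = {
    "date": date(2025, 7, 4).isoformat(),
    "platform": "Meta",
    "campaign": "2025_q3_launch",
    "variable_tested": "headline message",
    "variants": "value_msg vs urgency_msg",
    "winner": "value_msg",
    "learning": "Value-led copy out-performed urgency framing on CTR and CVR",
}

write_header = not os.path.exists(LOG_FILE)  # only add a header to a brand-new file
with open(LOG_FILE, "a", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    if write_header:
        writer.writeheader()
    writer.writerow(entry)
```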

Final thoughts

AI-driven platforms aren’t going anywhere, but that doesn’t mean we give up on testing. In fact, as automation increases, so does the importance of strategic creative input.

Advertisers who invest in smart, structured testing will continue to learn what resonates, scale what works, and stay ahead of both the algorithm and the competition.

Ready to test smarter?

Looking for creative insights that go beyond the usual platform data? At ADMATIC, “test and learn” isn’t just a mindset – it’s our method. From planning experiments to interpreting results, we help brands turn testing into real, scalable growth. Get in touch with ADMATIC’s digital experts today.