I’ve been tracking something interesting this week that’s going to matter quite a bit for anyone selling products online. While everyone’s been talking about AI models getting smarter, what’s actually happening behind the scenes is that these systems are getting dramatically better at understanding images, videos, and all the messy product data we throw at them. And that changes things for Google Ads in ways worth paying attention to.
Let me walk you through what’s happening and what it means for your business going forward.
Google’s Gemini 3 Launches With Serious Visual Intelligence
Google just released Gemini 3, their latest AI model, and whilst the technical benchmarks are impressive, here’s what actually matters: this thing can properly understand images and video in context now. Not just “that’s a blue shirt” but understanding product features, quality indicators, and how items relate to search intent.
Why does this matter for your Google Ads? Because Performance Max campaigns – which most of you are probably running – rely entirely on Google’s AI to match your products to searches. When that AI gets significantly better at understanding what your products actually are and who should see them, your campaigns work differently.
Here’s the practical bit: if you’ve been running Performance Max and felt like Google sometimes shows your products to completely wrong searches, that should improve. But – and this is important – only if your product data is actually good. The better the AI gets at understanding images and context, the more it punishes poor product photography, vague titles, and incomplete descriptions. Your product feed quality just became more important, not less.
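If you want a quick way to sanity-check this, here's a rough sketch of the kind of feed audit I mean. It's a minimal example, assuming a tab-separated Merchant Center style export with id, title, description and image_link columns – adjust the column names and thresholds to whatever your feed actually uses.

```python
import csv

# Minimal feed audit sketch: flags products with short titles, thin
# descriptions, or no image link. Column names and thresholds are
# assumptions – map them to your own feed export.
MIN_TITLE_LENGTH = 30        # very short titles tend to be vague
MIN_DESCRIPTION_LENGTH = 160

def audit_feed(path: str) -> list[dict]:
    issues = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            problems = []
            if len(row.get("title", "").strip()) < MIN_TITLE_LENGTH:
                problems.append("title too short / vague")
            if len(row.get("description", "").strip()) < MIN_DESCRIPTION_LENGTH:
                problems.append("description thin or missing")
            if not row.get("image_link", "").strip():
                problems.append("no image link")
            if problems:
                issues.append({"id": row.get("id", "?"), "problems": problems})
    return issues

if __name__ == "__main__":
    for item in audit_feed("products.tsv"):
        print(item["id"], "->", ", ".join(item["problems"]))
```

It won't tell you whether your photography is any good, but it will surface the obvious gaps before Google's AI does.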
Peak Season Reality Check: What Actually Works Right Now
I’ve been looking at peak season performance data, and there’s something interesting happening with how Google’s AI handles high-intent shopping searches versus browse behaviour. During peak periods like we’re heading into now, search intent shifts dramatically – people move from browsing to buying mode faster.
What I’m seeing is that Performance Max campaigns need tighter conversion tracking during these periods. When someone’s ready to buy, the AI should know immediately that the conversion happened and what the value was. Delayed conversion data during peak season means the AI is optimising on yesterday’s behaviour whilst today’s customers are acting completely differently.
Plain and simple: if your conversion tracking has any lag (even a few hours), you’re giving Google’s AI old information during the exact time when customer behaviour is changing fastest. That’s worth checking before you hit your busy period.
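One way to check is to compare when orders actually happened with when your tracking reported them. The sketch below assumes a hypothetical CSV export with order_id, order_time and conversion_sent_time columns (ISO 8601 timestamps) pulled from your own order and tracking logs – the point is the lag calculation, not the exact format.

```python
import csv
from datetime import datetime, timedelta

# Rough lag check: flags orders where the conversion was reported
# noticeably after the purchase. Column names are assumptions.
LAG_WARNING = timedelta(hours=1)

def conversion_lags(path: str) -> None:
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            ordered = datetime.fromisoformat(row["order_time"])
            sent = datetime.fromisoformat(row["conversion_sent_time"])
            lag = sent - ordered
            if lag > LAG_WARNING:
                print(f"{row['order_id']}: conversion reported {lag} after the order")

if __name__ == "__main__":
    conversion_lags("conversion_log.csv")
```

If that script prints a long list of multi-hour lags, that's the thing to fix before peak season, not during it.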
The Budget Question Everyone’s Asking
Here’s what happens during peak season that catches people out: Google’s AI wants to spend more because it’s seeing more opportunities. But it can’t tell the difference between “genuine increased demand” and “everyone’s shopping around comparing prices”. You need to know which one you’re seeing.
If your conversion rate holds steady or improves whilst volume increases, that’s real demand – let the AI spend. If your traffic’s up but conversion rate drops, that’s browse behaviour, and you might want to hold budgets steadier. The AI can’t make that strategic decision for you.
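If you want to make that check mechanical, here's roughly the logic with made-up numbers – compare a baseline week to the current week and look at conversion rate alongside volume.

```python
# Sketch of the "real demand or just browsing?" check described above.
# Numbers are illustrative – plug in your own sessions and conversions
# for a baseline week and the current peak week.
def demand_signal(base_sessions, base_conversions, peak_sessions, peak_conversions):
    base_cr = base_conversions / base_sessions
    peak_cr = peak_conversions / peak_sessions
    if peak_sessions > base_sessions and peak_cr >= base_cr:
        return "Real demand: volume up, conversion rate holding – let the AI spend."
    if peak_sessions > base_sessions and peak_cr < base_cr:
        return "Browse behaviour: traffic up but conversion rate down – hold budgets steady."
    return "No clear volume increase – keep budgets as they are."

# Example: sessions jump from 10,000 to 16,000, but conversion rate
# slips from 2.4% to 1.8% – that's browsing, not buying.
print(demand_signal(10_000, 240, 16_000, 288))
```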
The AI Infrastructure Arms Race (And Why You Should Care)
Microsoft, NVIDIA, and Anthropic just formed an alliance around AI computing infrastructure. On the surface, this looks like big tech companies doing big tech things. But there’s a practical angle here.
Google Ads runs on Google’s AI infrastructure. Microsoft Ads runs on Microsoft’s. As these companies invest billions into making their AI systems faster and more capable, the advertising platforms built on top of them get better too. What we’re seeing is a fairly significant acceleration in how quickly these platforms can process signals and make decisions.
For your campaigns, this means the gap between “something happened on your website” and “Google’s AI adjusted your bids based on it” keeps shrinking. Real-time optimisation is becoming actually real-time. Which again brings me back to data quality – faster AI decisions based on poor data just means poor decisions happen faster.
Retail Media Gets Smarter (Finally)
There’s an interesting piece about how major brands are handling AI-powered retail media, and it highlights something I’ve been thinking about: the lines between traditional Google Shopping, retail media platforms (Amazon, eBay, etc.), and AI search are blurring fast.
If you’re selling on multiple platforms – your own Shopify store, Amazon, eBay – you’re probably running ads on each platform separately. What’s changing is that customers now start their journey in AI search tools, then bounce between platforms. Google’s AI needs to understand that your customer might research on Google, check reviews on Amazon, then buy from your website.
The practical implication? Cross-platform tracking and attribution are becoming essential, not optional. If you’re only measuring last-click conversions on your website, you’re missing how the AI-powered customer journey actually works. And if you’re not feeding that fuller picture back to Google’s AI, it can’t optimise properly.
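As a rough way to see how much of this applies to you, here's a small sketch that counts multi-platform journeys from a hypothetical touchpoint export. The order_id and touchpoints column names are assumptions – you'd build that export from whatever journey data you actually have.

```python
import csv
from collections import Counter

# Counts how many converting customers touched more than one channel
# before buying. Assumes a hypothetical CSV with an "order_id" column
# and a "touchpoints" column like "google_shopping,amazon,website".
def journey_summary(path: str) -> Counter:
    summary = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            platforms = {p.strip() for p in row["touchpoints"].split(",") if p.strip()}
            summary["multi_platform" if len(platforms) > 1 else "single_platform"] += 1
    return summary

if __name__ == "__main__":
    counts = journey_summary("journeys.csv")
    total = sum(counts.values()) or 1
    print(f"{counts['multi_platform'] / total:.0%} of converting journeys crossed platforms")
```

Even a crude number here tells you whether last-click reporting is hiding a meaningful chunk of your customer journeys.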
What This Means for Product Data Strategy
Here’s where this all connects: whether someone finds you through Google Shopping, AI search, or retail media, they’re comparing products across platforms instantly. Your product titles, descriptions, images, and pricing need to be consistent and compelling everywhere.
I’m seeing more situations where Google’s AI deprioritises products with weak data even if the bid is competitive, because it predicts the click won’t convert. It’s making quality judgements about your product presentation before anyone even sees your ad. That’s new behaviour worth understanding.
The Grok Wildcard
Can’t ignore that Elon Musk’s xAI launched Grok 4.1 this week with significantly reduced “hallucination” rates – meaning it invents false information less often. Whilst Grok isn’t directly part of advertising platforms yet, here’s why it matters: X (Twitter) has advertising, and if Grok gets integrated into ad targeting and creative generation, that’s another AI-powered platform to understand.
More broadly, we’re entering a phase where every major platform has its own AI model powering its advertising. Google has Gemini, Microsoft has their models, Meta has Llama, and now X has Grok. Each one works slightly differently, prioritises different signals, and requires different optimisation approaches.
The days of one-size-fits-all campaign strategy across platforms are ending. What works on Google’s AI won’t necessarily work on Microsoft’s or Meta’s.
What You Should Actually Do
Right, let’s make this practical. Based on what’s happening with AI and Google Ads right now:
- Audit your product feed quality – As AI gets better at understanding products, poor data becomes more obviously poor. Check your titles, descriptions, and images across all products.
- Verify your conversion tracking speed – Make sure conversions are reporting to Google as quickly as possible, especially heading into busy periods.
- Review your Performance Max asset groups – With better visual understanding, your image and video quality matters more. Weak creative will get filtered out faster.
- Check cross-platform consistency – If you sell on multiple platforms, make sure your product presentation is strong everywhere, not just your main channel (see the consistency check sketch after this list).
- Watch your conversion rate trends – Don’t just look at volume during peak season. If conversion rate drops whilst traffic increases, that’s a signal to adjust strategy.
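On the cross-platform consistency point, here's the kind of quick check I mean – a sketch that compares titles and prices between two feed exports matched on SKU. The file names and the sku/title/price columns are assumptions; map them to whatever your Shopify and marketplace exports actually use.

```python
import csv

# Consistency check sketch: flags products whose title or price differs
# between two feed exports matched on a shared "sku" column.
def load_feed(path: str) -> dict:
    with open(path, newline="", encoding="utf-8") as f:
        return {row["sku"]: row for row in csv.DictReader(f)}

def compare_feeds(primary_path: str, other_path: str, label: str) -> None:
    primary, other = load_feed(primary_path), load_feed(other_path)
    for sku, row in primary.items():
        match = other.get(sku)
        if match is None:
            print(f"{sku}: missing on {label}")
        elif row["title"].strip() != match["title"].strip():
            print(f"{sku}: title differs on {label}")
        elif row["price"].strip() != match["price"].strip():
            print(f"{sku}: price differs on {label}")

if __name__ == "__main__":
    compare_feeds("website_feed.csv", "amazon_feed.csv", "Amazon")
```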
Looking Forward
The pattern I’m watching is clear: AI systems are getting dramatically better at understanding visual content and context, which changes how they evaluate and serve ads. This isn’t some future development – it’s happening now in your campaigns.
The businesses that will do well in this environment are the ones treating their product data and creative assets as seriously as they treat their bids and budgets. Google’s AI can only work with what you give it, and what you give it just became significantly more important.
I’ll be keeping tabs on how these AI improvements actually show up in campaign performance over the coming weeks. Peak season will be an interesting test of whether these new models actually deliver better results or just sound impressive in press releases.
More next week as we head deeper into the busy period. If you’re seeing anything unusual in your campaigns, that’s worth investigating now rather than mid-December when everyone’s scrambling.