🏛️ Official Updates

Microsoft Open-Sources Industry-Leading Embedding Model

Microsoft open-sources Harrier, its industry-leading embedding model that ranks first on the multilingual MTEB-v2 benchmark with a score of 74.3.

This new model series delivers state-of-the-art performance for modern AI systems, supporting over 100 languages with a 32k context window. Harrier improves grounding quality through large-scale contrastive pre-training on 2 billion data examples and fine-tuning on 10 million high-quality pairs. The model outperforms both open-source competitors and proprietary solutions like OpenAI’s text-embedding-3-large and Amazon’s Titan. Harrier-OSS-v1-270m achieves 66.5% accuracy while the 27B variant reaches 74.27%, demonstrating superior efficiency across model sizes. This advancement strengthens the full grounding pipeline that AI agents depend on for memory, ranking, and orchestration in the agentic web era.
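To make the "memory, ranking, and orchestration" claim concrete, here is a minimal sketch of the ranking step any embedding model powers: score candidate passages against a query embedding by cosine similarity and return the best match. The tiny vectors and the `rank_passages` helper are illustrative stand-ins, not Harrier's actual API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_passages(query_vec, passages):
    """Rank (text, vector) passages by similarity to the query embedding."""
    scored = [(cosine(query_vec, vec), text) for text, vec in passages]
    return [text for score, text in sorted(scored, reverse=True)]

# Toy 3-dimensional "embeddings" standing in for real model output.
passages = [
    ("pricing page", [0.1, 0.9, 0.0]),
    ("api reference", [0.9, 0.1, 0.1]),
    ("careers page",  [0.0, 0.2, 0.9]),
]
print(rank_passages([0.95, 0.05, 0.1], passages)[0])  # most relevant first
```

In a real grounding pipeline the vectors come from the embedding model and the ranked passages feed the generator; the similarity-and-sort step stays the same.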

🔗 Microsoft Bing Blog


🤖 GEO·SEO Highlights

Your Owned Content Is Losing To A Stranger’s Reddit Comment

Your owned content is losing ground to community-driven platforms, with Reddit comments now cited more frequently than brand-owned pages in AI-generated answers.

I analyzed the data: Reddit is the most cited domain in Google AI Overviews, with citations growing 450% between March and June 2025. Community consensus has become the primary signal AI systems trust for product recommendations and decision-making. The architecture is clear – AI models weigh distributed human validation (upvotes, accepted answers) more heavily than single authoritative sources. Your owned content needs authentic community presence or risks becoming invisible in AI search results.

🔗 Search Engine Journal


ChatGPT Now Crawls 3.6x More Than Googlebot: What 24M Requests Reveal

ChatGPT now crawls 3.6x more than Googlebot according to our analysis of 24 million requests across 69 websites.

We found OpenAI’s ChatGPT-User crawler made 133,361 requests while Googlebot made only 37,426 during the 55-day study period. AI crawlers combined accounted for 213,477 requests versus 59,353 for traditional search crawlers. OpenAI operates two distinct crawlers – ChatGPT-User for real-time retrieval and GPTBot for training – and most sites don’t differentiate between them in their robots.txt. AI crawlers delivered faster response times (8-21ms) with near-perfect success rates compared to Googlebot’s 84ms and 96.3% success rate. The data confirms AI crawling surged 15x in 2025, making it essential to update your SEO strategy to account for this shift in crawler dominance.
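Since the two OpenAI crawlers serve different purposes, a site that wants citations in ChatGPT answers without contributing training data can address each one separately in robots.txt. A sketch (confirm the current user-agent tokens against OpenAI's crawler documentation before deploying):

```
# Allow real-time retrieval so ChatGPT can fetch and cite your pages
User-agent: ChatGPT-User
Allow: /

# Block the training crawler if you do not want content used for training
User-agent: GPTBot
Disallow: /

# Traditional search crawling is unaffected
User-agent: Googlebot
Allow: /
```

Sites that leave only a blanket rule treat retrieval and training identically, which is exactly the differentiation gap the study found.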

🔗 Search Engine Journal


Agentic search: How AI agents will decide which brands get found

Agentic search AI is transforming how brands are discovered and evaluated, with AI agents now researching, comparing, and taking action on behalf of users before humans are involved.

We’re seeing 1,300% growth in agentic web traffic in 2025, and these AI systems are making complex multi-step decisions about which brands to recommend and which to filter out entirely. Using the example of an Austin venue competing for visibility, the article shows that a brand must pass through multiple evaluation layers – from basic discovery in simple queries to trust-based validation when agents take delegated actions. The key is understanding that traditional SEO alone won’t suffice; you need to ensure AI agents can find accurate information about your brand, validate it through independent sources, and trust you enough to recommend or act on your behalf. This means optimizing your content for AI comprehension, building authoritative third-party validation, and ensuring your brand information is consistent and actionable across all platforms where agents might research you.

🔗 Semrush Blog


ChatGPT traffic analysis: Insights from 17 months of clickstream data

ChatGPT traffic analysis reveals that outbound referral traffic from ChatGPT to other websites grew 206% in 2025, according to 17 months of clickstream data.

The analysis shows that ChatGPT’s total traffic plateaued around November 2025 at approximately 1 billion monthly visits, but referral traffic continued to expand. Over 30% of all referral traffic from ChatGPT goes to just 10 domains, with Google alone receiving more than 20% of all referrals. The data also indicates that ChatGPT enables its search feature on only 34.5% of queries as of February 2026, down from 46% in late 2024, meaning most responses still rely on training data alone. Users are asking more prompts per session, with average queries per session jumping 50% in the last four months of the study period after 12 months of flat engagement.

🔗 Semrush Blog


Brand Bias in Prompts: An Experiment

We tested 300 prompts to measure how brand bias in prompts affects AI visibility, finding that brand mentions in prompts dramatically increase brand mentions in outputs.

Our experiment revealed that 100% of brand prompts returned brand mentions, while only 53% of non-brand prompts did, with soft-brand prompts falling in between. On average, brand prompts generated 14.5 brand mentions per response, compared to just 0.79 for non-brand prompts. The data shows that simply including your brand name in a prompt almost guarantees brand visibility in AI responses, but be careful – it also increases competitor mentions. This brand bias in prompts experiment demonstrates why SEOs must track brand, soft-brand, and non-brand prompts separately when measuring AI visibility.
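The separate tracking the article recommends can be sketched as a small script that buckets prompts by category and averages brand mentions per response. The categories, toy responses, and regex are illustrative, not the authors' methodology:

```python
import re
from collections import defaultdict

def count_mentions(text, brand):
    """Count case-insensitive whole-word mentions of a brand in a response."""
    return len(re.findall(rf"\b{re.escape(brand)}\b", text, re.IGNORECASE))

def summarize(results, brand):
    """Average brand mentions per response for each prompt category."""
    buckets = defaultdict(list)
    for category, response in results:
        buckets[category].append(count_mentions(response, brand))
    return {cat: sum(v) / len(v) for cat, v in buckets.items()}

# Toy (category, AI response) pairs standing in for real test output.
results = [
    ("brand",      "Acme is great. Acme leads the market."),
    ("soft-brand", "Tools like Acme exist."),
    ("non-brand",  "Several vendors compete here."),
]
print(summarize(results, "Acme"))
```

Run the same summary against competitor names to see the competitor-mention side effect the experiment warns about.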

🔗 Moz Blog


Reddit Brand Strategy for AI Search — Whiteboard Friday

Victory Umurhurhu’s Whiteboard Friday explains how to build a Reddit brand strategy for AI search.

With Reddit’s $60M Google AI deal and OpenAI partnership, the platform now surfaces content in search results and ChatGPT. The strategy focuses on three modes: insight-gathering (monitoring 10-20 subreddits, understanding rules, using AI answers tools), contextual contribution (building credibility by engaging in existing conversations), and conversation catalyst (starting your own discussions). Since 90% of audiences seek authentic brands before purchasing, Reddit’s 166M daily users across 100K communities offer unique trust-building opportunities through real human conversations.

🔗 Moz Blog


Google’s CEO Predicts Search Will Become An AI Agent Manager

Google’s CEO Sundar Pichai predicts search will evolve into an AI agent manager, fundamentally changing how we interact with information.

In a recent interview, Pichai explained that traditional search queries will transform into “agentic search” where AI systems handle tasks rather than simply returning results. He envisions search becoming an orchestration layer that manages multiple AI agents working simultaneously to complete complex, long-running tasks. This shift means search will no longer be about finding web pages but about getting things done through AI intermediaries. The pace of AI development makes long-term predictions difficult, but Pichai emphasizes the importance of embracing this expansionary moment rather than viewing it as a zero-sum game. Websites weren’t mentioned in the discussion, but the implication is clear: search is moving beyond simple information retrieval toward becoming a sophisticated task manager powered by AI agents.

🔗 Search Engine Journal


GEO Was Invented On Sand Hill Road

GEO was invented on Sand Hill Road, not in the trenches of SEO practice, and this article lays bare how venture capital manufactured a category to sell their portfolio tools.

I’ve watched this cycle before—shiny new acronyms arrive, FOMO spreads, and professionals repackage their services to match the latest buzzword. In May 2025, Andreessen Horowitz published a blog post declaring that “SEO is slowly losing its dominance” and promoting three GEO tools—Profound, Goodie, and Daydream—where a16z happens to be an investor. Ten months later, an engagement farmer repackaged that post as a “leaked 34-page memo” with invented stats, and the SEO community ran with it without checking the source. The real problem isn’t the VC or the farmer—it’s SEO professionals who market this fear to clients, rebranding their services under GEO to avoid looking “legacy,” even though they can’t define what GEO actually measures. If the only way you can sell your expertise is by adopting unverified terminology every eighteen months, the issue isn’t the label—it’s the confidence you’re selling.

🔗 Search Engine Journal


Why Product Feeds Shouldn’t Be The Most Ignored SEO System In Ecommerce

Product feeds are no longer just a backend tool for paid ads—they are now central to how ecommerce brands show up in organic search, shopping results, and AI-driven discovery.

In this article, I explain why SEO teams must treat product feeds as a core search infrastructure, not an afterthought. By optimizing feeds with semantic query mapping, logical taxonomy, structured data, and ongoing analysis, brands can dramatically improve visibility and click-through rates. I share real examples of how SEO-driven feed changes—like keyword-rich titles and accurate categorization—directly boost rankings. I also highlight common mistakes, such as inconsistent pricing and missing GTINs, that cause disapprovals and lost sales. Ultimately, the more context and quality you put into your product feeds, the more likely your products will be recommended in both traditional and AI-powered search results.
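As one concrete illustration of the fixes called out above (keyword-rich title, GTIN present, price consistent with the landing page, clear categorization), a hypothetical Merchant-style feed item might look like this; the values are placeholders, so check attribute names against Google's product data specification:

```xml
<item>
  <g:id>SKU-1042</g:id>
  <!-- Keyword-rich, descriptive title instead of an internal product code -->
  <g:title>Women's Trail Running Shoes - Waterproof, Size 8</g:title>
  <!-- Missing GTINs are a common cause of disapprovals -->
  <g:gtin>00012345678905</g:gtin>
  <!-- Must match the price shown on the landing page -->
  <g:price>89.99 USD</g:price>
  <g:google_product_category>Apparel &amp; Accessories &gt; Shoes</g:google_product_category>
  <g:availability>in_stock</g:availability>
</item>
```

Treating each attribute as searchable metadata, rather than an ads-only field, is the mindset shift the article argues for.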

🔗 Search Engine Journal


How To Turn AI Search Visibility Data Into a GEO Strategy That Closes Citation Gaps [Webinar]

I’m excited to share insights from a recent webinar on how to turn AI search visibility data into a GEO strategy that closes citation gaps.

The session, featuring Sam Garg from Writesonic, revealed key findings from analyzing 500M+ AI conversations to understand what drives citations in ChatGPT, Perplexity, and Gemini. The data shows which content types and sources actually get cited, and how this differs from traditional ranking logic. I learned about a practical GEO prioritization framework that helps focus efforts on what moves the needle for specific citation gaps. The webinar also introduced an execution model powered by AI agents, including free open-source tools that can be deployed immediately to automate GEO tasks at scale. This approach gives SEO teams both the diagnostic framework and execution playbook needed to close AI search visibility gaps and make the case for AI search investment internally.

🔗 Search Engine Journal


How Consumers Navigate High-Stakes Purchases In AI Mode

A new study reveals how consumers navigate high-stakes purchases in AI Mode, with 74% accepting AI-generated shortlists without external verification. The research tracked 185 purchase tasks across 48 participants, showing that AI Mode fundamentally changes how people research expensive products like laptops, TVs, and insurance.

When using traditional search, 56% of shoppers built their own product lists by clicking through multiple sources. In AI Mode, only 8 out of 147 tasks showed this behavior. Instead, 88% of users took the AI’s recommendations outright, with 64% clicking nothing during their entire purchase journey.

The shift is most pronounced in insurance shopping, where participants delegated heavily to AI recommendations. Even in categories requiring specific physical constraints like appliances, users accepted AI suggestions and only visited external sites to confirm details they’d already chosen.

This creates a new reality for brands: your visibility now depends on appearing in AI-generated shortlists rather than traditional search rankings. The study identifies three key factors that determine whether your brand makes the cut in AI Mode recommendations.

The behavioral change is significant enough that we must adapt our SEO strategies. Users aren’t comparing across sources anymore – they’re accepting AI-curated lists and moving forward. This means optimizing for AI recommendation algorithms becomes as critical as traditional search optimization.

🔗 Search Engine Journal


Are AI Overviews Stealing Your Clicks? How Paid Search Teams Are Adapting to the Answer Engine Era

AI Overviews are stealing clicks, forcing paid search teams to adapt quickly.

I see user behavior shifting as search engines prioritize AI-generated summaries over traditional blue links. Advertisers are responding by refining their strategies to maintain visibility in this evolving landscape. We must focus on optimizing content for AI interpretation while maintaining strong paid search performance. The key is balancing traditional search tactics with new approaches that account for AI-driven results. I recommend implementing structured data, enhancing content quality, and monitoring AI overview performance metrics closely. Teams that adapt their paid search strategies now will maintain competitive advantage as AI continues transforming search behavior.
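As a concrete instance of the structured-data recommendation above, a JSON-LD FAQ block marks up question-and-answer content so AI systems can extract it cleanly; the question and answer text here are placeholders:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "Do AI Overviews reduce paid search clicks?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "Early data suggests AI summaries can satisfy queries before users reach ads, so advertisers are testing structured, answer-ready content alongside paid campaigns."
    }
  }]
}
```

Embed the block in a `<script type="application/ld+json">` tag on the relevant page and validate it with a structured-data testing tool before relying on it.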

🔗 Neil Patel


AEO strategy for SaaS: 6 tactics that convert prospects into trials

An effective AEO strategy for SaaS focuses on optimizing for the AI-driven search engines that now influence 56% of software buyers during discovery. Early-stage visibility is critical for making vendor shortlists.

I recommend prioritizing tactics that clearly define your product category and use cases, publish explanatory content with direct answers to common questions, maintain consistent terminology across all platforms, and structure content for easy extraction by AI systems.

These approaches ensure your brand appears in AI-generated answers during both early learning phases and evaluation stages, increasing the likelihood of being cited when buyers compare options and make trial decisions.

🔗 HubSpot Marketing


Zero-click searches and the future of your marketing funnel

Zero-click searches are reshaping how marketers measure success, with AI-powered results now satisfying user queries directly on the search results page.

I’ve found that about 80% of consumers rely on zero-click results for at least 40% of their searches, while organic web traffic drops by 15% to 20%. These AI Overviews, featured snippets, and People Also Ask boxes may reduce website visits, but they build brand recognition when your content appears in them. The key is adapting your SEO strategy to focus on answer engine optimization – creating content that earns citations and summaries in these rich results. Using HubSpot’s free AEO grader can help you track your visibility in AI search engines and improve your performance.

🔗 HubSpot Marketing


13 Enterprise GEO Agencies Driving Generative Search Success

This article identifies 13 enterprise GEO agencies driving generative search success in 2026, providing specific data on their strengths, client portfolios, and GEO methodologies.

The agencies help enterprises secure AI search visibility through technical optimization, content strategy, and measurable ROI approaches. I found Siege Media’s BlueprintIQ and DataFlywheel tools particularly noteworthy for maintaining content freshness and authority signals across generative platforms. The article delivers concrete facts about how AI platforms now shape buyer discovery, with early results showing 10.7% increases in homepage traffic driven by AI Overviews and LLMs. For enterprises competing in an AI-first search world, these agencies offer proven frameworks to appear in generative answers and expand SEO into GEO through structural content strengthening, entity authority building, and citation earning.

🔗 Siege Media


7 Best Healthcare SEO Agencies for Healthcare Providers, Health Tech, and Pharma

Siege Media’s analysis identifies the 7 best healthcare SEO agencies that combine clinical expertise with proven SEO strategies to help healthcare organizations attract patients, build trust, and drive revenue growth.

The top agencies include Siege Media, First Page Sage, Cardinal Digital Marketing, Intrepy Healthcare Marketing, Coalition Technologies, Thrive Internet Marketing Agency, and k2md Health. Each specializes in different aspects of healthcare marketing, from premium content creation to multi-location healthcare groups and health tech expertise.

I recommend evaluating these agencies based on your specific needs – whether you need B2B lead generation, physician practice marketing, or health tech optimization. The article provides detailed comparisons of their services, specialties, and notable clients to help you make an informed decision.

🔗 Siege Media


HELP – Site Has Been Hacked – Rankings GONE

When a website is hacked, rankings can plummet overnight, leaving site owners scrambling for solutions. This summary lays out concrete steps to recover from a hack and restore SEO rankings.

I recommend taking immediate action by first identifying the source of the hack. Look for unusual files, database changes, or suspicious user accounts. Once the vulnerability is found, patch it promptly to prevent further damage.
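The first step, spotting unusual files, can be partly automated. A minimal sketch, assuming a Unix-style web root and that injected code usually lives in recently changed server-side files; the extensions and time window are my assumptions, not a universal rule:

```python
import os
import time

def recently_modified(root, days=7, exts=(".php", ".js", ".htaccess")):
    """List files under root changed within the last `days` days.

    Freshly modified server-side files are a common fingerprint of
    injected code after a compromise.
    """
    cutoff = time.time() - days * 86400
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if name.endswith(exts) and os.path.getmtime(path) > cutoff:
                hits.append(path)
    return sorted(hits)

# Example: recently_modified("/var/www/html", days=14)
```

Treat the output as a triage list to diff against a clean backup, not as proof of compromise; legitimate deployments also touch these files.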

Next, clean up the site by removing any malicious code or content injected by the hackers. This may involve restoring from a clean backup or manually deleting the harmful elements.

After cleaning the site, submit a reconsideration request to Google if your site was penalized due to the hack. Be transparent about the steps taken to resolve the issue and prevent future incidents.

Finally, monitor your site closely for any signs of recurring attacks and implement stronger security measures to safeguard against future hacks.

By following these steps, site owners can recover from a hack and restore their SEO rankings. Remember, quick and decisive action is key to minimizing the impact of a hack on your site’s performance.

🔗 Reddit r/SEO


Trust In AI Search Could Drop With Ads, Survey Shows

A recent Ipsos survey shows 63% of U.S. adults say ads in AI search results would reduce their trust, with 27% strongly agreeing.

Only 24% disagree, indicating a significant trust challenge for AI search platforms introducing ads. I recommend advertisers closely monitor user sentiment and test ad placements carefully to avoid eroding trust in AI search experiences. Early data from ChatGPT’s ad pilot shows click-through rates around 0.91%, much lower than Google Search’s 6.4%, suggesting users may respond cautiously to ads in AI search. As both Google and OpenAI expand ad inventory, balancing monetization with user trust will be critical for long-term success in AI search.

🔗 Search Engine Journal


Google CEO Says AI Could ‘Break Pretty Much All Software’

Google CEO Sundar Pichai warns AI could break pretty much all software, exposing widespread vulnerabilities and accelerating zero-day exploits.

In a recent podcast, Pichai said AI models are already discovering software flaws faster than ever, potentially driving down black-market exploit prices. Google’s Threat Intelligence Group tracked 90 zero-day exploits in 2025, with nearly half targeting enterprise software. AI is accelerating both attack and defense, shrinking the window for patching vulnerabilities. This means every website—including WordPress plugins, server configs, and third-party scripts—faces heightened risk. As AI-driven exploit discovery speeds up, maintaining current patches and auditing dependencies becomes critical for security.

🔗 Search Engine Journal


Google Explains Why It Doesn’t Matter That Websites Are Getting Larger

Google explains why page weight is not a reliable metric for SEO success. In a recent podcast, Google’s Gary Illyes and Martin Splitt clarified that the increasing size of websites should not be automatically considered a negative factor.

The discussion highlighted that page size measurements vary significantly depending on what is being measured. While Googlebot has a 2MB HTML limit, this represents only one aspect of page weight. When considering total page size including images, CSS, and JavaScript, the conversation shifts to user experience rather than crawler efficiency.

A key insight from the podcast is that compression technology, such as Brotli, means the actual data transferred over networks can be significantly smaller than what users ultimately download. This creates ambiguity in defining true page size – is it the compressed data sent or the decompressed data stored on user devices?

The experts emphasized that large page sizes are not inherently inefficient. For example, a 15MB HTML document containing mostly useful content is acceptable, whereas a smaller page with minimal content but excessive markup might be less efficient. This introduces the concept of content-to-markup ratio as a more meaningful metric than raw size.
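The content-to-markup ratio can be estimated with a short script. This heuristic, visible text characters divided by total HTML length, is my sketch of the podcast's idea, not a metric Google publishes:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text so it can be compared to total markup size."""
    def __init__(self):
        super().__init__()
        self.chars = 0

    def handle_data(self, data):
        self.chars += len(data.strip())

def content_to_markup_ratio(html):
    """Fraction of the HTML document that is visible text content."""
    parser = TextExtractor()
    parser.feed(html)
    return parser.chars / len(html) if html else 0.0

lean = "<p>Useful content here</p>"
bloated = "<div><div><div><span>" + "x" * 5 + "</span></div></div></div>"
print(round(content_to_markup_ratio(lean), 2))
print(round(content_to_markup_ratio(bloated), 2))
```

By this measure the lean page scores far higher than the div-heavy one despite being smaller in absolute bytes, which is the podcast's point: efficiency matters more than raw size.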

Additionally, much of a page’s weight comes from elements users never see, such as structured data for search engines. This structural reality of the web means pages serve multiple purposes beyond human readability.

The primary takeaway for publishers and SEOs is to focus less on absolute page weight numbers and more on the value and efficiency of the content being delivered.

🔗 Search Engine Journal



I am Wonfull, an SEO & GEO expert driving next-gen organic growth. I recently scaled a Middle Eastern media project's organic traffic by 10x in 6 months. As an AI builder, I created seo-audit (delivers a 92-point SEO diagnostic report in 1 minute) and am developing GEOWriter to automate content pipelines via agentic workflows.
