Your comprehensive, no-nonsense guide to ensuring your website is technically flawless in the eyes of Google — built specifically for senior marketing leaders who need results, not rhetoric.
As a London CMO, you already know that organic search is the single most scalable acquisition channel your brand owns. But in 2026, the gap between companies that invest in technical SEO and those that merely talk about content has become a chasm. Algorithm updates are relentless. AI Overviews have reshaped the SERP landscape. And Core Web Vitals — particularly the Interaction to Next Paint (INP) metric — have become non-negotiable ranking signals.
This isn't a surface-level tips article. This is a 2,500-word deep dive into the four pillars of a modern technical SEO audit: Crawlability, Rendering, Core Web Vitals, and Structured Data (Schema). Whether you're briefing your in-house team or evaluating an SEO Agency partner, this checklist will give you the vocabulary, the priorities, and the action items you need.
Bookmark this page. Share it with your dev team. Let's get into it.
Table of Contents
- Pillar 1: Crawlability & Indexation
- Pillar 2: Rendering & JavaScript SEO
- Pillar 3: Core Web Vitals (with INP Focus)
- Pillar 4: Structured Data & Schema Markup
- Bringing It All Together: Your 2026 Action Plan
Pillar 1: Crawlability & Indexation
Before Google can rank your pages, it has to find them and choose to index them. In an era where Google is increasingly selective about what it indexes — a trend that accelerated sharply after the March 2025 core update — ensuring pristine crawlability is foundational. Every wasted crawl is a missed opportunity.
The Crawlability Checklist
✅ Audit your robots.txt file thoroughly. Ensure you are not inadvertently blocking critical resources — CSS files, JavaScript bundles, image directories, or API endpoints that Googlebot needs to fully render your pages. Use Search Console's robots.txt report (the standalone robots.txt tester was retired in 2023) or Screaming Frog's validator to confirm directives are behaving as intended. In 2026, a surprising number of enterprise sites still block /api/ routes that feed client-side rendered content.
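As an illustration, a minimal robots.txt that keeps rendering-critical paths crawlable while excluding genuinely private areas might look like this. Every path and the domain below are hypothetical; audit your own site's resource requests before copying any directive:

```txt
# Keep rendering-critical resources crawlable (all paths illustrative)
User-agent: *
Allow: /assets/css/
Allow: /assets/js/
Allow: /api/content/
Disallow: /admin/
Disallow: /cart/

Sitemap: https://www.example.com/sitemap.xml
```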
✅ Submit and maintain a clean XML sitemap. Your sitemap should only contain canonical, indexable, 200-status URLs. Remove any pages returning 3xx, 4xx, or 5xx responses. If your site exceeds 50,000 URLs, implement a sitemap index file. For London-based businesses with location pages, ensure every local landing page is explicitly included.
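The inclusion rules above can be expressed as a simple filter over a crawl export: a URL belongs in the sitemap only if it returns 200, is indexable, and is its own canonical. This is a minimal sketch; the field names (`status`, `noindex`, `canonical`) are illustrative and will differ by crawler:

```javascript
// Filter crawl-audit records down to sitemap-eligible URLs.
function sitemapEligible(pages) {
  return pages
    .filter(p => p.status === 200)     // drop 3xx/4xx/5xx
    .filter(p => !p.noindex)           // drop non-indexable pages
    .filter(p => p.canonical === p.url) // drop canonicalised-away duplicates
    .map(p => p.url);
}

const audit = [
  { url: 'https://example.com/', status: 200, noindex: false, canonical: 'https://example.com/' },
  { url: 'https://example.com/old', status: 301, noindex: false, canonical: 'https://example.com/new' },
  { url: 'https://example.com/internal', status: 200, noindex: true, canonical: 'https://example.com/internal' },
];

console.log(sitemapEligible(audit)); // → ['https://example.com/']
```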
✅ Implement self-referencing canonical tags on every indexable page. Canonical tags remain the single most important signal for telling Google which version of a URL to index. Audit for conflicting signals: a page should not have a canonical tag pointing elsewhere while simultaneously appearing in the sitemap. Consistency is everything.
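In markup, a self-referencing canonical is a single tag in the `<head>`. The URL below is hypothetical; the key point is that it must match the sitemap entry exactly, including protocol, subdomain, and trailing slash:

```html
<head>
  <!-- Self-referencing canonical: points at this page's own preferred URL -->
  <link rel="canonical" href="https://www.example.com/services/technical-seo/" />
</head>
```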
✅ Resolve crawl budget waste from parameter-based URLs. If your site generates duplicate or near-duplicate pages via URL parameters (e.g., ?sort=price, ?colour=blue, ?utm_source=...), handle them with canonical tags and targeted robots.txt disallow rules (Google retired Search Console's URL Parameters tool in 2022). For e-commerce brands especially, faceted navigation can silently consume thousands of crawl slots.
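The normalisation logic is straightforward to sketch: strip the sort, filter, and tracking parameters from a faceted URL to recover the canonical form. The parameter list below is illustrative; derive yours from a crawl of your own site:

```javascript
// Parameters that never change the page's indexable content (illustrative).
const STRIP_PARAMS = ['sort', 'colour', 'utm_source', 'utm_medium', 'utm_campaign'];

// Normalise a faceted-navigation URL to its canonical form.
function canonicalUrl(rawUrl) {
  const url = new URL(rawUrl);
  for (const param of STRIP_PARAMS) url.searchParams.delete(param);
  return url.toString();
}

console.log(canonicalUrl('https://example.com/shoes?colour=blue&sort=price'));
// → 'https://example.com/shoes'
console.log(canonicalUrl('https://example.com/shoes?page=2&utm_source=newsletter'));
// → 'https://example.com/shoes?page=2'  (meaningful parameters survive)
```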
✅ Flatten your site architecture where possible. No critical page should be more than three clicks from your homepage. Conduct a crawl depth analysis with tools like Screaming Frog or Sitebulb. Pages buried at depth 5+ are statistically less likely to be crawled frequently and less likely to rank.
✅ Eliminate redirect chains and loops. Every redirect chain adds latency and dilutes crawl efficiency. Audit all 301 and 302 redirects and collapse chains into single-hop redirects. Pay particular attention to HTTP-to-HTTPS and non-www-to-www redirect stacking, which is still endemic on legacy sites.
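The single-hop rule can be expressed as a small resolver over your redirect map: follow each chain to its final destination, then point every source straight at it. A minimal sketch, assuming a Map of source → target URLs exported from your crawler (all URLs illustrative):

```javascript
// Collapse redirect chains so every source points at the final destination.
function collapseChains(redirects) {
  const resolve = (url, seen = new Set()) => {
    if (!redirects.has(url) || seen.has(url)) return url; // end of chain, or loop guard
    seen.add(url);
    return resolve(redirects.get(url), seen);
  };
  return new Map([...redirects.keys()].map(src => [src, resolve(src)]));
}

// Classic two-hop stack: HTTP → HTTPS → www.
const chains = new Map([
  ['http://example.com/', 'https://example.com/'],
  ['https://example.com/', 'https://www.example.com/'],
]);

const fixed = collapseChains(chains);
console.log(fixed.get('http://example.com/')); // → 'https://www.example.com/'
```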
✅ Monitor index coverage reports weekly. Google Search Console's index coverage (now called "Pages" in the Indexing section) should be reviewed on a weekly cadence. Watch for sudden spikes in "Crawled – currently not indexed" or "Discovered – currently not indexed" — both are early warning signals of quality or crawlability issues.
✅ Implement proper internal linking governance. Internal links are how you communicate page importance to Google. Audit for orphan pages (pages with zero internal links), and ensure your highest-value commercial pages receive contextual links from relevant content. This is where content strategy and technical SEO converge.
Pillar 2: Rendering & JavaScript SEO
The modern web runs on JavaScript. React, Next.js, Vue, Nuxt, Angular — frameworks that deliver outstanding user experiences but can create catastrophic SEO blind spots if not implemented correctly. Google's rendering capabilities have improved enormously, but improved does not mean infallible. In 2026, rendering issues remain one of the most common — and most expensive — technical SEO failures.
The Rendering Checklist
✅ Determine your rendering strategy and document it. Are you using Client-Side Rendering (CSR), Server-Side Rendering (SSR), Static Site Generation (SSG), or Incremental Static Regeneration (ISR)? Each has different SEO implications. For most commercial websites, SSR or SSG is strongly preferred because content is available in the initial HTML response without requiring JavaScript execution.
✅ Test rendered output with Google's URL Inspection Tool. For every critical page template (homepage, product page, category page, blog post, location page), use the "Live Test" feature in Search Console to view the rendered HTML that Googlebot actually sees. Compare this to what a user sees in the browser. Any discrepancy is a red flag.
✅ Ensure critical content is present in the initial HTML payload. View your page source (not the DOM via DevTools, but the actual source via curl or "View Page Source"). If your H1, primary copy, product descriptions, or key navigation links are absent from the raw HTML, you are dependent on Google's renderer — a dependency that introduces risk and delay.
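This check is easy to automate as a smoke test over the raw response body. In practice you would fetch each template with curl or your HTTP client of choice; the HTML string and required snippets below are illustrative:

```javascript
// Does the pre-JavaScript HTML payload contain every required snippet?
function rawHtmlHasCriticalContent(html, requiredSnippets) {
  return requiredSnippets.every(snippet => html.includes(snippet));
}

// Stand-in for the body returned by `curl -s <url>` (illustrative).
const serverHtml = `
  <html><body>
    <h1>Technical SEO Services in London</h1>
    <nav><a href="/services/">Services</a></nav>
  </body></html>`;

console.log(rawHtmlHasCriticalContent(serverHtml, ['<h1>', 'href="/services/"'])); // → true
console.log(rawHtmlHasCriticalContent(serverHtml, ['<h2>Pricing</h2>'])); // → false
```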
✅ Audit for JavaScript-dependent meta tags. Title tags, meta descriptions, canonical tags, hreflang annotations, and structured data should all be present in the server-rendered HTML. While Google can process JavaScript-injected meta tags, there is a well-documented render queue delay that can result in indexing issues, particularly for large sites.
✅ Avoid lazy-loading above-the-fold content. If your hero section, H1, or primary content block is behind a lazy-load trigger (intersection observer or scroll-based), Googlebot may not see it during initial render. Reserve lazy loading for below-the-fold images and secondary content modules.
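The distinction looks like this in markup (image paths are hypothetical): the hero loads eagerly at high priority, while below-the-fold imagery uses native lazy loading:

```html
<!-- Hero / LCP image: eager, high priority, dimensions reserved -->
<img src="/hero.avif" fetchpriority="high" width="1200" height="600"
     alt="London skyline at dusk">

<!-- Below-the-fold image: native lazy loading is safe here -->
<img src="/footer-map.avif" loading="lazy" width="600" height="400"
     alt="Map of our London office">
```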
✅ Handle client-side routing with care. Single Page Applications (SPAs) that use pushState for navigation must ensure that each route returns a unique, fully-rendered HTML response when accessed directly. Test this by disabling JavaScript and navigating to key URLs — if you see a blank page or a loading spinner, you have a rendering problem.
✅ Monitor JavaScript console errors in Googlebot's render. The URL Inspection Tool shows JavaScript errors encountered during rendering. Any error that prevents content from loading — a failed API call, a CORS issue, an undefined variable — can result in partial or empty renders. Treat these errors as P1 bugs.
✅ Evaluate the need for dynamic rendering as a stopgap. If your JavaScript framework cannot be refactored for SSR in the near term, dynamic rendering (serving pre-rendered HTML to bots while serving the JS app to users) remains a workable interim solution, though Google now explicitly describes it as a workaround rather than a recommendation. Tools like Prerender.io can bridge the gap while your engineering team plans a proper migration; note that Google's Rendertron is no longer actively maintained.
For a deeper look at how we approach rendering challenges across frameworks, explore our Technical Capabilities.
Pillar 3: Core Web Vitals (with INP Focus)
Google's Core Web Vitals have matured significantly since their introduction. In 2026, the three metrics that matter are Largest Contentful Paint (LCP), Cumulative Layout Shift (CLS), and — critically — Interaction to Next Paint (INP), which officially replaced First Input Delay (FID) in March 2024. INP is the metric most London brands are failing, and it's the one most likely to differentiate your rankings from competitors.
Understanding INP
INP measures the latency of all interactions a user makes with your page throughout its entire lifecycle — clicks, taps, and keyboard inputs — and reports the worst interaction (with some statistical smoothing). Unlike FID, which only measured the first interaction, INP captures the full picture of your page's responsiveness. A good INP score is under 200 milliseconds.
The Core Web Vitals Checklist
✅ Measure field data, not just lab data. Lab tools (Lighthouse, PageSpeed Insights in lab mode) are useful for diagnostics, but Google uses field data (real-user metrics from the Chrome User Experience Report) for ranking. Monitor your CrUX data via Search Console's Core Web Vitals report, or use a RUM solution like web-vitals.js, SpeedCurve, or Vercel Analytics.
✅ Optimise LCP to under 2.5 seconds. The most common LCP culprits in 2026 are unoptimised hero images, render-blocking CSS, slow server response times (TTFB), and third-party scripts that delay critical resource loading. Ensure your LCP element (usually a hero image or H1 text block) renders within 2.5 seconds at the 75th percentile of page loads — that is the threshold Google's field data uses.
✅ Serve images in next-gen formats with responsive sizing. Use AVIF as your primary image format with WebP as a fallback. Implement srcset and sizes attributes on all <img> elements. For your LCP image specifically, use fetchpriority="high" and avoid lazy-loading it. Preload it if necessary with <link rel="preload" as="image">.
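Put together, the pattern looks roughly like this. All file paths are hypothetical, and this is a simplified sketch — if your LCP image is responsive, the preload should also carry imagesrcset/imagesizes so the browser preloads the same candidate it will render:

```html
<!-- In <head>: preload the LCP image at high priority -->
<link rel="preload" as="image" href="/hero-1200.avif" fetchpriority="high">

<!-- AVIF first, WebP fallback, responsive candidates -->
<picture>
  <source type="image/avif" srcset="/hero-600.avif 600w, /hero-1200.avif 1200w" sizes="100vw">
  <source type="image/webp" srcset="/hero-600.webp 600w, /hero-1200.webp 1200w" sizes="100vw">
  <img src="/hero-1200.webp" width="1200" height="600"
       fetchpriority="high" alt="Hero image">
</picture>
```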
✅ Eliminate layout shifts caused by dynamic content injection. CLS failures most often stem from ads loading without reserved space, images without explicit width and height attributes, web fonts causing FOIT/FOUT, and dynamically injected banners or cookie consent modals. Use aspect-ratio in CSS and font-display: optional or font-display: swap with preloaded fonts.
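A few lines of CSS cover the most common offenders. Class names, the font name, and the reserved dimensions below are illustrative:

```css
/* Keep images in their reserved box (pairs with width/height attributes) */
img, video { max-width: 100%; height: auto; }

/* Reserve space for ad slots so late-loading creatives cannot shift copy */
.ad-slot { aspect-ratio: 16 / 9; min-height: 250px; }

/* Preloaded web font: swap avoids invisible text during load */
@font-face {
  font-family: "BrandSans";
  src: url("/fonts/brand-sans.woff2") format("woff2");
  font-display: swap;
}
```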
✅ Audit and optimise event handlers for INP. This is where most brands fall down. Every click handler, form interaction, dropdown toggle, and accordion expansion contributes to INP. Use Chrome DevTools' Performance panel to identify long tasks (>50ms) that block the main thread during interactions. Break long tasks into smaller chunks using requestAnimationFrame, setTimeout(0), or the scheduler.yield() API.
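The chunking pattern can be sketched in a few lines: process work in batches and yield to the main thread between batches so queued interactions get handled. scheduler.yield() currently ships in Chromium browsers; setTimeout(0) is the portable fallback. Function and variable names here are illustrative:

```javascript
// Hand control back to the main thread so pending input can run.
function yieldToMain() {
  if (globalThis.scheduler?.yield) return globalThis.scheduler.yield();
  return new Promise(resolve => setTimeout(resolve, 0));
}

// Process a long list in chunks instead of one blocking task.
async function processInChunks(items, handle, chunkSize = 50) {
  for (let i = 0; i < items.length; i += chunkSize) {
    items.slice(i, i + chunkSize).forEach(handle);
    await yieldToMain(); // interactions queued during this chunk run now
  }
}

// Usage, e.g. inside a filter-change handler:
//   await processInChunks(products, renderProductCard, 25);
```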
✅ Reduce main thread blocking from third-party scripts. Tag managers, analytics suites, chat widgets, A/B testing tools, and ad scripts are the most common sources of main thread congestion. Audit every third-party script for its impact on INP. Defer non-critical scripts, load them asynchronously, or move them to web workers where possible. Consider using Partytown to offload third-party scripts from the main thread entirely.
✅ Implement content-visibility: auto for off-screen content. This CSS property tells the browser to skip rendering of off-screen elements until they are needed, significantly reducing initial rendering cost and improving both LCP and INP on content-heavy pages.
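In CSS this is two declarations per section. The class name and the 600px size estimate are illustrative; the intrinsic size reserves space so scroll position stays stable while rendering is skipped:

```css
/* Skip rendering work for sections until they approach the viewport */
.below-fold-section {
  content-visibility: auto;
  contain-intrinsic-size: auto 600px; /* estimated height placeholder */
}
```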
✅ Optimise your server response time (TTFB). A TTFB above 800ms undermines every front-end optimisation you make. Ensure your hosting infrastructure — whether it's Vercel, AWS, Cloudflare, or a traditional setup — delivers sub-600ms TTFB for London users. Use a CDN with edge caching, implement stale-while-revalidate caching headers, and optimise database queries.
✅ Monitor INP regressions in CI/CD pipelines. Integrate Lighthouse CI or web-vitals monitoring into your deployment pipeline. Set performance budgets that fail builds if INP exceeds your threshold. Prevention is infinitely cheaper than remediation.
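One caveat when wiring this up: Lighthouse produces lab data, and INP is a field metric, so CI budgets typically assert on Total Blocking Time as the closest lab proxy alongside LCP and CLS. A sketch of a lighthouserc.json with illustrative URL and thresholds:

```json
{
  "ci": {
    "collect": { "url": ["https://www.example.com/"], "numberOfRuns": 3 },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "total-blocking-time": ["error", { "maxNumericValue": 200 }],
        "largest-contentful-paint": ["error", { "maxNumericValue": 2500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }]
      }
    }
  }
}
```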
Pillar 4: Structured Data & Schema Markup
Structured data has evolved from a "nice to have" to a competitive weapon. In 2026, with AI Overviews dominating the top of SERPs for informational queries, structured data is one of the primary mechanisms through which Google understands entity relationships, validates factual claims, and selects content for rich results. For London CMOs competing in saturated markets — financial services, legal, property, SaaS — schema markup is a strategic differentiator.
The Schema Checklist
✅ Implement Organization schema on your homepage. This is your foundational entity markup (note that the schema.org type uses the American spelling, Organization). Include your company name, logo, URL, social media profiles, contact information, and sameAs links to authoritative profiles (LinkedIn company page, Companies House listing, Wikipedia page if applicable). This helps Google build a robust Knowledge Graph entity for your brand.
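As JSON-LD in your homepage's `<head>`, the foundation looks like this. Every value below, including the company name and Companies House number, is a placeholder to replace with your own:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Agency Ltd",
  "url": "https://www.example.com/",
  "logo": "https://www.example.com/logo.png",
  "sameAs": [
    "https://www.linkedin.com/company/example-agency",
    "https://find-and-update.company-information.service.gov.uk/company/00000000"
  ]
}
</script>
```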
✅ Deploy LocalBusiness schema for every physical location. If your London business has one or more offices, each should have its own LocalBusiness (or more specific subtype like ProfessionalService) markup with accurate NAP (Name, Address, Phone), geo coordinates, openingHours, and a link to the corresponding Google Business Profile.
✅ Add Article or BlogPosting schema to all editorial content. Every blog post and insights article should carry Article or BlogPosting schema with headline, datePublished, dateModified, author (linked to a Person entity with their own markup), and image. The dateModified field is particularly important in 2026, as Google increasingly uses freshness signals to determine content relevance.