
The web has evolved far beyond static pages and simple loading metrics. In an era where generative AI models are actively evaluating, summarising, and surfacing content based on quality signals, the responsiveness of your interface is no longer a nice-to-have — it is a foundational ranking factor. Interaction to Next Paint (INP) sits at the heart of this evolution, representing a fundamental shift in how we measure, optimise, and ultimately engineer the user experiences that both humans and machines reward.
This guide provides a comprehensive, deeply technical exploration of INP — from its conceptual underpinnings to advanced optimisation strategies, with particular emphasis on the often-overlooked devastation caused by third-party scripts. Whether you are a frontend engineer, a technical SEO strategist, or a search engineering leader, this is your blueprint for mastering the metric that defines the interactive web.
INP Unveiled: Why Interaction to Next Paint is the New Frontier in Core Web Vitals
In March 2024, Google officially replaced First Input Delay (FID) with Interaction to Next Paint (INP) as the responsiveness metric within Core Web Vitals. This was not a cosmetic rebrand. It was a philosophical transformation in how we define and measure interactivity.
From First Impression to Full Conversation
FID measured only the first interaction a user had with a page — specifically, the delay between that first input event and the moment the browser could begin processing its event handler. It ignored every subsequent interaction entirely. A page could score perfectly on FID while delivering an agonisingly sluggish experience on the tenth, twentieth, or hundredth click.
INP corrects this blind spot comprehensively. It observes every discrete interaction throughout the full lifecycle of a page visit — clicks, taps, and keyboard inputs — and reports a value representative of the worst interactions observed. Specifically, INP selects the interaction with the highest latency, with an adjustment for pages with many interactions (it approximates the 98th percentile to reduce the impact of true outliers).
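As a rough sketch of that selection rule, the following helper picks the reported value from a set of interaction latencies. It is a simplification of the documented "ignore one worst interaction per 50" adjustment, not the exact Chromium implementation:

```javascript
// Simplified INP candidate selection: take the worst interaction,
// but skip one outlier for every 50 interactions observed.
function estimateINP(latencies) {
  if (latencies.length === 0) return null;
  const sorted = [...latencies].sort((a, b) => b - a); // worst first
  const skip = Math.min(Math.floor(latencies.length / 50), sorted.length - 1);
  return sorted[skip];
}
```

For a page with only a handful of interactions, this is simply the single worst latency; on a long-lived page with hundreds of interactions, a lone extreme outlier no longer dominates the reported value.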
An interaction's latency, as measured by INP, encompasses three distinct phases:
- Input Delay: The time between the user's physical input and the moment the browser begins executing the associated event handler. This delay is almost always caused by other tasks occupying the main thread.
- Processing Duration: The total time spent executing the event handler callbacks themselves (including pointerdown, pointerup, click, and related events).
- Presentation Delay: The time from when the event handlers complete until the browser renders the next frame reflecting the visual update.
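These three phases can be derived directly from a PerformanceEventTiming entry. The sketch below uses the field names defined by the Event Timing API:

```javascript
// Derive INP's three phases from a PerformanceEventTiming-shaped entry.
// All values are in milliseconds relative to the page's time origin.
function breakdown(entry) {
  return {
    inputDelay: entry.processingStart - entry.startTime,
    processingDuration: entry.processingEnd - entry.processingStart,
    // duration spans input to next paint, so what remains after
    // processingEnd is the presentation delay
    presentationDelay: entry.startTime + entry.duration - entry.processingEnd
  };
}
```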
The INP score thresholds are clear:
- Good: ≤ 200 milliseconds
- Needs Improvement: > 200 and ≤ 500 milliseconds
- Poor: > 500 milliseconds
Why AI Search Engines Care About Responsiveness
The significance of INP extends well beyond Google's Core Web Vitals programme. As generative AI search engines — Google's AI Overviews, Perplexity, and emerging competitors — increasingly evaluate which sources to cite and surface, they are building sophisticated quality models that incorporate user experience signals.
The logic is straightforward: if users consistently bounce from, abandon interactions on, or express frustration with a given source, that source is less likely to be considered authoritative or trustworthy. AI models trained on engagement data, Chrome User Experience Report (CrUX) signals, and behavioural patterns inherently learn that responsive sites correlate with satisfied users. A high INP value is, effectively, a signal of friction — and friction erodes the trust that AI systems are designed to surface.
The shift from "Did the page load?" to "Can the user actually use the page?" represents the maturation of web performance measurement from a technical curiosity into a strategic imperative.
Diagnosing the Lag: Advanced Tools and Techniques for INP Measurement & Auditing
Optimising INP begins with precise, actionable measurement. The metric's complexity — spanning thousands of potential interactions across diverse devices and network conditions — demands a multi-layered diagnostic approach combining field data, lab simulation, and custom instrumentation.
Field Data: The Ground Truth
Field data captures real user interactions on real devices under real conditions. It is the definitive source for understanding your actual INP performance.
Chrome User Experience Report (CrUX)
CrUX aggregates anonymised performance data from opted-in Chrome users across millions of origins and URLs. It is the dataset that directly informs Google's Core Web Vitals assessment for ranking purposes.
- Access origin-level and URL-level INP data via the CrUX Dashboard, the CrUX API, or BigQuery.
- CrUX reports the 75th percentile (p75) INP value; for a "Good" assessment this p75 value must be at or below 200ms, meaning at least 75% of user experiences meet that threshold.
- Segment by form factor (mobile vs. desktop) to identify device-specific issues — mobile INP is almost always worse due to less powerful processors and more complex touch event handling.
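Origin-level INP can be pulled programmatically from the CrUX API. The sketch below follows the public API's request and response shape; the API key is a placeholder you would provision in Google Cloud:

```javascript
// Build the request body for the CrUX API's records:queryRecord endpoint.
function buildCruxRequest(origin, formFactor) {
  return {
    origin,                                      // e.g. 'https://example.com'
    formFactor,                                  // 'PHONE' or 'DESKTOP'
    metrics: ['interaction_to_next_paint']
  };
}

// Fetch the p75 INP value for an origin (API_KEY is a placeholder).
async function fetchCruxINP(origin, apiKey, formFactor = 'PHONE') {
  const res = await fetch(
    `https://chromeuxreport.googleapis.com/v1/records:queryRecord?key=${apiKey}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(buildCruxRequest(origin, formFactor))
    }
  );
  const data = await res.json();
  // The p75 value lives under record.metrics[...].percentiles.p75
  return data.record?.metrics?.interaction_to_next_paint?.percentiles?.p75;
}
```

Calling fetchCruxINP for both form factors makes the mobile-versus-desktop gap described above immediately visible.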
Google Search Console (Core Web Vitals Report)
Search Console surfaces CrUX data contextualised for your property, grouping URLs by similar performance characteristics. Use it to:
- Identify clusters of URLs with "Poor" or "Needs Improvement" INP.
- Track improvements over time after deploying optimisations.
- Correlate INP status with search performance metrics (impressions, clicks, position).
Real User Monitoring (RUM) Solutions
For granular, interaction-level diagnostics, deploy a RUM solution that captures INP with full attribution. Tools such as web-vitals.js (Google's official library), SpeedCurve, Sentry Performance, Datadog RUM, or custom implementations provide:
- Per-interaction latency breakdowns (input delay, processing duration, presentation delay).
- Attribution to specific DOM elements and event types.
- Segmentation by page type, user cohort, geography, device class, and more.
A critical implementation detail: use the web-vitals library's onINP function with the reportAllChanges option during development to capture every interaction candidate, not just the final INP value:
```javascript
// Requires the attribution build of Google's web-vitals library:
// npm install web-vitals
import { onINP } from 'web-vitals/attribution';

onINP((metric) => {
  const entry = metric.attribution;
  console.log('INP Target:', entry.interactionTarget);
  console.log('INP Type:', entry.interactionType);
  console.log('Input Delay:', entry.inputDelay);
  console.log('Processing Duration:', entry.processingDuration);
  console.log('Presentation Delay:', entry.presentationDelay);

  // Send to your analytics endpoint
  navigator.sendBeacon('/analytics/inp', JSON.stringify({
    value: metric.value,
    target: entry.interactionTarget,
    type: entry.interactionType,
    inputDelay: entry.inputDelay,
    processingDuration: entry.processingDuration,
    presentationDelay: entry.presentationDelay,
    url: window.location.href,
    timestamp: Date.now()
  }));
}, { reportAllChanges: true });
```
Lab Data: Controlled Diagnostics
Lab tools simulate interactions in controlled environments. While they cannot produce a true INP score (INP requires real user interactions), they are indispensable for diagnosing the causes of poor INP.
Chrome DevTools Performance Panel
The Performance panel is the single most powerful tool for INP root cause analysis. Record a trace, interact with the page, and examine:
- Long Tasks (>50ms) on the main thread, highlighted with red corner flags.
- Event handlers associated with specific interactions — click into the "Interactions" track to see input delay, processing time, and presentation delay visualised.
- Call stacks within long tasks to pinpoint exactly which scripts and functions are responsible.
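Where the flame chart alone is not enough, the Long Animation Frames API (available in recent Chrome releases) exposes per-script attribution programmatically. The sketch below assumes LoAF support and aggregates script execution time by source; the aggregation helper is pure, so it can also run over exported trace data:

```javascript
// Aggregate execution time per script source from Long Animation Frame
// entries (each entry carries a scripts array with sourceURL and duration).
function summarizeLoAF(loafEntries) {
  const byScript = new Map();
  for (const entry of loafEntries) {
    for (const script of entry.scripts || []) {
      const key = script.sourceURL || script.invoker || 'inline/unknown';
      byScript.set(key, (byScript.get(key) || 0) + script.duration);
    }
  }
  return byScript;
}

// In the browser, feed live entries into the summary:
if (typeof window !== 'undefined' && 'PerformanceObserver' in window) {
  new PerformanceObserver((list) => {
    const totals = summarizeLoAF(list.getEntries());
    for (const [src, ms] of totals) console.log(src, Math.round(ms), 'ms');
  }).observe({ type: 'long-animation-frame', buffered: true });
}
```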
Lighthouse
Lighthouse cannot measure INP directly in its default navigation mode, since no user interactions occur during the audit. However, its "Avoid long main-thread tasks" and "Minimise main-thread work" diagnostics identify the code paths that most commonly cause poor INP in the field, and Lighthouse user flows in timespan mode can report INP for scripted interactions.
The Third-Party Script Problem: A Deep Diagnostic Guide
In our extensive performance auditing work, third-party scripts consistently emerge as the single largest contributor to poor INP across commercial websites. Analytics platforms, ad tag managers, chat widgets, consent management platforms (CMPs), social media embeds, and A/B testing tools collectively impose an enormous burden on the main thread — often dwarfing the execution cost of the site's own first-party code.
Isolating Third-Party Impact in Chrome DevTools
Here is a systematic methodology for identifying and quantifying third-party script contributions to INP:
Step 1: Record a Performance Trace with Interaction
- Open Chrome DevTools → Performance tab.
- Enable "Screenshots" and "Web Vitals" lanes.
- Click the record button, then perform the specific interaction that field data has flagged as problematic (e.g., clicking a navigation menu, tapping an "Add to Cart" button).
- Stop the recording.
Step 2: Identify the Interaction in the Timeline
In the "Interactions" track, locate the interaction event. It will display as a labelled bar showing the total interaction duration. Click it to see the three-phase breakdown:
- If Input Delay is the dominant component, look for long tasks preceding the interaction on the main thread.
- If Processing Duration dominates, examine the event handler call stack.
- If Presentation Delay is high, investigate layout and paint costs.
Step 3: Attribute Long Tasks to Third-Party Origins
Zoom into the long tasks surrounding your interaction in the "Main" thread flame chart. Each task's call stack reveals the script files responsible. Third-party scripts are typically identifiable by their domain in the source URL — look for domains such as:
- googletagmanager.com, google-analytics.com (analytics)
- doubleclick.net, googlesyndication.com, amazon-adsystem.com (advertising)
- cdn.cookielaw.org, consent.cookiebot.com (consent management)
- widget.intercom.io, js.driftt.com (chat widgets)
- connect.facebook.net, platform.twitter.com (social embeds)
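When post-processing exported traces or RUM payloads, a small classifier can automate this attribution. The domain patterns below are illustrative and non-exhaustive; extend them with the vendors your site actually loads:

```javascript
// Map a script URL to a third-party vendor category (illustrative list).
const VENDOR_PATTERNS = [
  [/googletagmanager\.com|google-analytics\.com/, 'analytics'],
  [/doubleclick\.net|googlesyndication\.com|amazon-adsystem\.com/, 'advertising'],
  [/cookielaw\.org|cookiebot\.com/, 'consent management'],
  [/intercom\.io|driftt\.com/, 'chat widgets'],
  [/facebook\.net|platform\.twitter\.com/, 'social embeds']
];

function classifyScript(url) {
  try {
    const host = new URL(url).hostname;
    for (const [pattern, category] of VENDOR_PATTERNS) {
      if (pattern.test(host)) return category;
    }
    return 'other third-party';
  } catch {
    return 'unknown'; // inline scripts or unparseable sources
  }
}
```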
Step 4: Quantify the Cost
For each identified third-party origin, sum the total main thread execution time during the critical window around your interaction. DevTools' "Bottom-Up" and "Call Tree" tabs (with "Group by Domain" filtering) enable this quantification. We routinely encounter scenarios such as:
- A CMP executing 180ms of JavaScript on every page load, directly blocking the first user interaction.
- Google Tag Manager containers firing 12+ tags synchronously, creating a 400ms+ long task chain.
- Chat widget initialisation consuming 250ms of main thread time within the first 3 seconds.
Implementing a Custom Third-Party Script Performance Monitor
Beyond ad hoc DevTools analysis, implement continuous monitoring to detect third-party script regressions in production. Here is a custom performance observer pattern that identifies long tasks and attributes them to third-party origins:
```javascript
// Third-Party Script INP Impact Monitor
(function initThirdPartyMonitor() {
  'use strict';

  const FIRST_PARTY_ORIGINS = [
    location.origin,
    'https://cdn.yourdomain.com',
    'https://assets.yourdomain.com'
  ];

  const thirdPartyMetrics = new Map();

  function isThirdParty(url) {
    if (!url) return false;
    try {
      // Exact origin comparison avoids prefix-matching pitfalls
      // (e.g. "https://cdn.yourdomain.com.evil.com")
      return !FIRST_PARTY_ORIGINS.includes(new URL(url).origin);
    } catch {
      return false; // treat unparseable sources as first-party (conservative)
    }
  }

  // Monitor Long Tasks via PerformanceObserver
  if ('PerformanceObserver' in window && 'PerformanceLongTaskTiming' in window) {
    const longTaskObserver = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // Note: long task attribution is coarse — containerSrc is only
        // populated for certain container types (e.g. cross-origin iframes)
        if (entry.attribution && entry.attribution.length > 0) {
          for (const attr of entry.attribution) {
            const containerSrc = attr.containerSrc || 'unknown';
            if (isThirdParty(containerSrc)) {
              const existing = thirdPartyMetrics.get(containerSrc) || {
                totalDuration: 0,
                taskCount: 0,
                maxDuration: 0
              };
              existing.totalDuration += entry.duration;
              existing.taskCount += 1;
              existing.maxDuration = Math.max(existing.maxDuration, entry.duration);
              thirdPartyMetrics.set(containerSrc, existing);
            }
          }
        }
      }
    });
    longTaskObserver.observe({ type: 'longtask', buffered: true });
  }

  // Monitor Event Timing for interaction-level attribution
  if ('PerformanceObserver' in window) {
    const eventObserver = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // Only care about slow interactions (>100ms as an early warning)
        if (entry.duration > 100 && entry.interactionId > 0) {
          const inputDelay = entry.processingStart - entry.startTime;
          const processingDuration = entry.processingEnd - entry.processingStart;
          const presentationDelay = entry.startTime + entry.duration - entry.processingEnd;

          // Report the interaction together with third-party context
          const report = {
            eventType: entry.name,
            target: entry.target?.tagName || 'unknown',
            duration: entry.duration,
            inputDelay: Math.round(inputDelay),
            processingDuration: Math.round(processingDuration),
            presentationDelay: Math.round(presentationDelay),
            thirdPartyBudget: Object.fromEntries(thirdPartyMetrics),
            timestamp: Date.now(),
            url: location.href
          };

          // Send to monitoring endpoint
          if (navigator.sendBeacon) {
            navigator.sendBeacon(
              '/api/perf/third-party-inp',
              JSON.stringify(report)
            );
          }
        }
      }
    });
    eventObserver.observe({ type: 'event', durationThreshold: 100, buffered: true });
  }

  // Periodic summary reporting
  function reportSummary() {
    if (thirdPartyMetrics.size === 0) return;
    const summary = {};
    for (const [source, metrics] of thirdPartyMetrics) {
      summary[source] = {
        totalMainThreadMs: Math.round(metrics.totalDuration),
        longTasks: metrics.taskCount,
        longestTaskMs: Math.round(metrics.maxDuration)
      };
    }
    if (navigator.sendBeacon) {
      navigator.sendBeacon(
        '/api/perf/third-party-summary',
        JSON.stringify({ summary, url: location.href, timestamp: Date.now() })
      );
    }
  }

  // Report when the page is backgrounded (user navigating away)
  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') {
      reportSummary();
    }
  });
})();
```
This monitor provides continuous, production-level visibility into which third-party scripts are consuming main thread time and how that consumption correlates with slow interactions — data that is otherwise invisible in aggregate CrUX reporting.
Mitigation Strategies for Third-Party Script Impact
Once you have identified the offending third-party scripts, apply these targeted mitigation strategies:
1. Facade Pattern for Non-Critical Widgets
Replace eagerly-loaded third-party widgets (chat, video embeds, social feeds) with lightweight "facade" elements that load the full widget only upon user interaction:
```javascript
// Chat widget facade - loads Intercom only when the user clicks.
// The widget URL and APP_ID below are placeholders for your own snippet.
class ChatFacade extends HTMLElement {
  connectedCallback() {
    this.innerHTML = `<button type="button">Chat with us</button>`;
    this.addEventListener('click', () => {
      const s = document.createElement('script');
      s.src = 'https://widget.intercom.io/widget/APP_ID'; // APP_ID placeholder
      s.async = true;
      document.head.appendChild(s);
    }, { once: true });
  }
}
customElements.define('chat-facade', ChatFacade);
```