No content strategy or link-building campaign will rescue a site with broken foundations. This is where we always start.
We have never taken on a client and jumped straight into content production or link building. Every single engagement at Sample Digital Lab begins with a comprehensive technical SEO audit — no exceptions. The reason is simple: if search engines cannot crawl, render and index your pages properly, nothing else you do will have the impact it should.
We have seen businesses spend tens of thousands of pounds on content marketing while their most important pages were blocked by a misconfigured robots.txt file. We have seen redesigns tank rankings overnight because nobody checked whether the migration plan handled redirects properly. These are not edge cases — they are depressingly common.
A technical audit gives you a clear picture of your site's health before you invest in anything else. It identifies the issues that are silently limiting your organic performance and provides a prioritised roadmap for fixing them. Think of it as a structural survey before you renovate a house — you would not start knocking down walls without knowing whether the foundations are sound.
The first thing we examine is whether Google can actually find and access your pages. This sounds basic, but it is remarkable how often crawl and indexation issues are the root cause of poor organic performance. We start by running a full site crawl using Screaming Frog or Sitebulb to map out every URL on the site and identify any crawl errors, broken links, redirect chains or orphaned pages.
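The heart of any crawler — commercial tools included — is extracting and resolving the links on each fetched page. As a minimal illustrative sketch (the `extract_links` helper is our own invention, not part of any tool mentioned above), resolving relative hrefs against the page URL and discarding fragments is what lets a crawl map every reachable URL:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urldefrag

class LinkExtractor(HTMLParser):
    """Collects absolute, fragment-free hrefs from anchor tags."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = set()

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative URLs and drop #fragments so each
                    # page is counted once in the crawl frontier.
                    absolute, _ = urldefrag(urljoin(self.base_url, value))
                    self.links.add(absolute)

def extract_links(html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links
```

Feeding each crawled page through this and queueing unseen URLs is, in essence, how orphaned pages are found: any URL that exists (in the sitemap, say) but never appears in the collected link sets has nothing pointing at it.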
We review your robots.txt file to ensure it is not inadvertently blocking important pages or directories. A common mistake we see is staging environments left accessible with noindex directives that get carried over to production, or entire subdirectories blocked during development and never unblocked after launch. We also check your XML sitemap to confirm it includes all indexable pages, excludes pages you do not want indexed, and returns a 200 status code.
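Checks like this are easy to script. A small sketch using Python's standard-library robots.txt parser (the `is_crawlable` wrapper is our own naming) shows how to verify that key URLs are not blocked for Googlebot:

```python
from urllib.robotparser import RobotFileParser

def is_crawlable(robots_txt, url, user_agent="Googlebot"):
    """Check whether a robots.txt body allows a given URL to be fetched."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)
```

Running every URL from the sitemap through a check like this catches the classic mistake of a development-era `Disallow` rule surviving into production.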
Next, we compare the pages in your sitemap against what Google has actually indexed using the Page indexing report (formerly Index Coverage) in Google Search Console. If there is a significant gap between pages submitted and pages indexed, something is wrong — and we need to find out whether it is a crawl budget issue, a content quality signal or a technical barrier. We also review the report's breakdown of why pages are not indexed to identify pages Google has chosen to exclude and understand the reasons.
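The gap analysis itself is mechanical once you have both lists. A sketch, assuming a standard XML sitemap and an exported list of indexed URLs (the function names here are illustrative):

```python
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Extract every <loc> value from an XML sitemap."""
    root = ET.fromstring(xml_text)
    return {loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc")}

def indexation_gap(xml_text, indexed_urls):
    """URLs submitted in the sitemap but absent from the indexed set."""
    return sitemap_urls(xml_text) - set(indexed_urls)
```

Each URL in the resulting gap then needs an explanation — blocked, low quality, duplicate or simply not yet crawled.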
Finally, we check your site's crawl budget allocation. For smaller sites (under 10,000 pages), crawl budget is rarely a concern. For larger sites — particularly e-commerce or publisher sites — we analyse server log files to understand how Googlebot is spending its crawl budget and whether important pages are being deprioritised in favour of low-value URLs like filtered navigation or paginated archives.
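Log-file analysis boils down to filtering requests by user agent and aggregating by path. A minimal sketch for combined-format access logs (the regex and `googlebot_hits_by_section` helper are illustrative; real analysis should also verify Googlebot by reverse DNS, since the user-agent string can be spoofed):

```python
import re
from collections import Counter

# Matches the request, status and trailing user-agent of a combined-format log line.
LOG_RE = re.compile(r'"\w+ (?P<path>\S+) HTTP/[^"]+" \d{3} \S+.*"(?P<agent>[^"]*)"$')

def googlebot_hits_by_section(log_lines):
    """Count Googlebot requests per top-level path section."""
    counts = Counter()
    for line in log_lines:
        match = LOG_RE.search(line)
        if match and "Googlebot" in match.group("agent"):
            path = match.group("path")
            section = "/" + path.lstrip("/").split("/", 1)[0]
            counts[section] += 1
    return counts
```

If `/filter` or `/page/47`-style sections dominate the counts while revenue-driving sections barely appear, crawl budget is being wasted.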
Clean, descriptive URL structures are a foundational element of good technical SEO. We audit every URL pattern on the site, checking for issues like excessive URL parameters, session IDs appended to URLs, uppercase/lowercase inconsistencies and unnecessarily deep nesting. A URL like /products/category/subcategory/item-name is fine; /products?cat=12&subcat=45&id=789&session=abc123 is not.
Canonical tags are one of the most frequently misconfigured elements we encounter. We check every page to ensure the canonical tag points to the correct URL — the version you want Google to treat as the primary. Common problems include self-referencing canonicals that point to URLs with trailing slashes when the live URL does not have one, canonical tags pointing to non-indexable pages, and pages with no canonical tag at all. We also verify that canonical tags are consistent across HTTP/HTTPS and www/non-www versions.
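The trailing-slash and parameter mismatches described above are exactly the kind of thing a script catches more reliably than a spot check. A sketch (the `canonical_mismatch` helper and its normalisation rules are our own simplification — a production check would also handle protocol-relative hrefs and HTTP-header canonicals):

```python
from html.parser import HTMLParser
from urllib.parse import urlsplit, urlunsplit

class CanonicalFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

def normalise(url):
    """Lowercase scheme/host and strip a trailing slash for comparison."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/"
    return urlunsplit((parts.scheme.lower(), parts.netloc.lower(), path, parts.query, ""))

def canonical_mismatch(html, live_url):
    """Return (live, canonical) if the page canonicalises elsewhere, else None."""
    finder = CanonicalFinder()
    finder.feed(html)
    if finder.canonical is None:
        return (live_url, None)  # a missing canonical is itself a finding
    if normalise(finder.canonical) != normalise(live_url):
        return (live_url, finder.canonical)
    return None
```

Run across a full crawl, this surfaces every page whose canonical points somewhere other than itself — each of which then needs a human decision: intentional consolidation or misconfiguration.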
We pay particular attention to faceted navigation and parameter-based URLs on e-commerce sites. Without proper canonical implementation, a single product page can exist at dozens of different URLs based on the filtering path a user takes to reach it. This creates massive duplicate content issues and wastes crawl budget. We ensure that canonical tags, robots.txt rules for parameterised URLs and (where appropriate) noindex directives work together to consolidate authority on the correct URLs — Search Console's old URL Parameters tool, which once handled this, was retired in 2022.
Page speed has been a confirmed Google ranking factor since 2010 for desktop searches (and since 2018 for mobile), and with the introduction of Core Web Vitals as ranking signals in 2021, it has become even more important. We audit site speed at both the page level and the site-wide level, using a combination of Google PageSpeed Insights, Lighthouse, Chrome DevTools and real-user monitoring data from the Chrome User Experience Report (CrUX).
We focus on the three Core Web Vitals metrics. Largest Contentful Paint (LCP) measures how quickly the main content of a page loads — the target is under 2.5 seconds. Interaction to Next Paint (INP) measures responsiveness to user interactions — the target is under 200 milliseconds. Cumulative Layout Shift (CLS) measures visual stability — the target is under 0.1. We check these metrics for both mobile and desktop, with mobile being the priority since Google uses mobile-first indexing.
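The thresholds above can be encoded directly, which makes pass/fail checks against CrUX or lab data trivial to automate. A sketch (the `failing_vitals` helper is our own naming; the thresholds are Google's published "good" boundaries):

```python
# Core Web Vitals "good" thresholds as published by Google.
THRESHOLDS = {
    "lcp_ms": 2500,   # Largest Contentful Paint, milliseconds
    "inp_ms": 200,    # Interaction to Next Paint, milliseconds
    "cls": 0.1,       # Cumulative Layout Shift, unitless
}

def failing_vitals(measurements):
    """Return the metrics that miss Google's 'good' thresholds."""
    return sorted(
        metric for metric, limit in THRESHOLDS.items()
        if measurements.get(metric, 0) > limit
    )
```

Fed with the 75th-percentile field values per page template, this gives a quick failing-metrics list to prioritise.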
Beyond the Core Web Vitals, we audit Time to First Byte (TTFB) to assess server response times, check for render-blocking resources that delay initial page load, review image optimisation (format, compression, lazy loading, correct sizing), analyse JavaScript execution and identify any third-party scripts that are degrading performance. We also check whether the site uses a CDN and whether caching headers are properly configured.
Speed issues are rarely caused by a single problem. More often, it is a combination of unoptimised images, excessive JavaScript bundles, no browser caching, server-side rendering delays and too many third-party tracking scripts all stacking up. We document every issue with its performance impact and provide specific, actionable recommendations — not just 'make the site faster'.
Since Google switched to mobile-first indexing, the mobile version of your site is what Google crawls and indexes by default. If your mobile experience is poor, it does not matter how good your desktop site looks — Google is not looking at it. We test every key page type on actual mobile devices, not just responsive design emulators, to identify usability issues that simulators often miss.
We check for common mobile issues including text that requires zooming to read, tap targets that are too small or too close together, content that overflows the viewport horizontally, interstitials or pop-ups that obscure content (which can trigger a ranking penalty), and forms that are cumbersome to complete on a mobile device. We also verify that the site uses responsive design rather than a separate mobile URL configuration, which simplifies SEO management considerably.
We run every key template through Lighthouse's mobile audits and review the mobile data in Search Console's Core Web Vitals report — Google retired the standalone Mobile-Friendly Test and the Mobile Usability report in late 2023, so these are now the primary tooling. If the site uses AMP (Accelerated Mobile Pages), we validate the AMP markup separately — though increasingly we advise clients that AMP is no longer necessary now that Core Web Vitals have replaced the AMP requirement for the Top Stories carousel.
Structured data helps Google understand the content and context of your pages, and it enables rich results (previously called rich snippets) that can significantly increase your click-through rate in search results. We audit all existing schema markup using Google's Rich Results Test and the Schema Markup Validator, checking for errors, warnings and missed opportunities.
At a minimum, we expect to see Organisation schema on the homepage, LocalBusiness schema (if applicable), BreadcrumbList schema for navigation, Article or BlogPosting schema on editorial content and Product schema (with reviews, price and availability) on e-commerce product pages. For service businesses, we also recommend FAQPage schema on relevant pages — this can generate expandable FAQ results directly in the SERPs, taking up significantly more screen real estate.
We check that the structured data is implemented correctly — using JSON-LD (Google's preferred format) rather than Microdata or RDFa — and that the data is consistent with what appears on the page. Google has become increasingly strict about schema spam, where sites mark up content that is not actually visible to users. We ensure all structured data is accurate, properly nested and reflects the genuine content of the page.
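Verifying that JSON-LD blocks are present and complete can be scripted as part of a crawl. A sketch that extracts JSON-LD and checks Product blocks for a minimal field set (the extractor class, helper name and `REQUIRED_PRODUCT_FIELDS` subset are illustrative — Google's actual required and recommended properties are longer and documented per rich-result type):

```python
import json
from html.parser import HTMLParser

class JsonLdExtractor(HTMLParser):
    """Pulls parsed JSON-LD out of <script type="application/ld+json"> tags."""
    def __init__(self):
        super().__init__()
        self.in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.in_jsonld = True

    def handle_data(self, data):
        if self.in_jsonld:
            self.blocks.append(json.loads(data))

    def handle_endtag(self, tag):
        if tag == "script":
            self.in_jsonld = False

REQUIRED_PRODUCT_FIELDS = {"name", "offers"}  # a minimal, illustrative subset

def product_schema_issues(html):
    """Return the missing fields for each Product block found in the page."""
    extractor = JsonLdExtractor()
    extractor.feed(html)
    issues = []
    for block in extractor.blocks:
        if block.get("@type") == "Product":
            missing = REQUIRED_PRODUCT_FIELDS - set(block)
            if missing:
                issues.append(sorted(missing))
    return issues
```

The same pattern extends to Article, BreadcrumbList and FAQPage blocks — and, crucially, to cross-checking the marked-up values against what is actually rendered on the page.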
HTTPS has been a ranking signal since 2014, and any site still serving pages over HTTP is at a disadvantage. But simply installing an SSL certificate is not enough — we frequently encounter sites where the HTTPS migration was done incompletely, creating mixed content warnings, redirect loops or security certificate errors that damage both rankings and user trust.
We verify that all pages are served over HTTPS with a valid, unexpired SSL certificate. We check for mixed content issues — pages served over HTTPS that load resources (images, scripts, stylesheets) over HTTP. We confirm that HTTP-to-HTTPS redirects are implemented correctly using 301 redirects (not 302s) and that there are no redirect chains. We also check that internal links, canonical tags and sitemap URLs all reference the HTTPS versions.
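Mixed content is straightforward to detect statically: any resource reference beginning `http://` on an HTTPS page is a finding. A sketch (the scanner class and tag/attribute map are our own simplification — it ignores CSS `url()` references and inline styles, which a full audit would also cover):

```python
from html.parser import HTMLParser

# Tags whose resource attribute can trigger mixed-content warnings.
RESOURCE_ATTRS = {"img": "src", "script": "src", "link": "href", "iframe": "src"}

class MixedContentScanner(HTMLParser):
    """Flags resources loaded over plain HTTP on an HTTPS page."""
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        attr = RESOURCE_ATTRS.get(tag)
        if attr:
            value = dict(attrs).get(attr, "")
            if value.startswith("http://"):
                self.insecure.append(value)

def find_mixed_content(html):
    scanner = MixedContentScanner()
    scanner.feed(html)
    return scanner.insecure
```

Any hit here either needs the resource rewritten to HTTPS or removed — browsers will block or warn on it, and it undermines the secure-site signal.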
Beyond HTTPS, we review basic security headers including Strict-Transport-Security (HSTS), X-Content-Type-Options, X-Frame-Options and Content-Security-Policy. While these are not direct ranking factors, they protect against common web vulnerabilities and contribute to the overall trustworthiness signals that Google considers when evaluating site quality.
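A header review is a simple presence check against a fetched response. A sketch (the `missing_security_headers` helper is our own naming; note that header names are case-insensitive per the HTTP specification, so the comparison must be too):

```python
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Content-Security-Policy",
]

def missing_security_headers(response_headers):
    """Case-insensitive check for recommended security headers."""
    present = {name.lower() for name in response_headers}
    return [name for name in EXPECTED_HEADERS if name.lower() not in present]
```

Presence alone is not the whole story — a `Content-Security-Policy` of `default-src *` passes this check while protecting nothing — but it catches the common case of headers never being configured at all.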
Internal linking is one of the most underutilised levers in SEO. It controls how link equity flows through your site, helps Google understand which pages are most important and establishes topical relationships between content. We audit your internal linking structure to identify orphaned pages (pages with no internal links pointing to them), pages that are buried too deep in the site architecture (more than 3–4 clicks from the homepage) and missed opportunities to link between related content.
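Both click depth and orphan detection fall out of a single breadth-first search over the internal link graph produced by a crawl. A sketch (the `click_depths` helper is illustrative; the graph maps each page to the pages it links to):

```python
from collections import deque

def click_depths(link_graph, homepage):
    """BFS from the homepage; pages unreachable via internal links are orphans."""
    depths = {homepage: 0}
    queue = deque([homepage])
    while queue:
        page = queue.popleft()
        for target in link_graph.get(page, []):
            if target not in depths:
                depths[target] = depths[page] + 1
                queue.append(target)
    # Any known page never reached from the homepage has no internal path to it.
    all_pages = set(link_graph) | {t for targets in link_graph.values() for t in targets}
    orphans = all_pages - set(depths)
    return depths, orphans
```

Pages with a depth above 3–4 are candidates for better internal linking; orphans need either links pointing at them or a decision to retire them.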
We analyse the distribution of internal links across your site to ensure your most commercially important pages are receiving the most internal link equity. A common problem we see is websites where the blog receives far more internal links than the service or product pages that actually drive revenue. We map out the ideal linking hierarchy and recommend changes to your navigation, sidebar, footer and in-content links to better support your SEO objectives.
We also check for broken internal links, excessive use of nofollow on internal links (which wastes link equity), anchor text distribution and whether your site uses a flat or hierarchical architecture. For content-heavy sites, we recommend implementing topic clusters with a pillar page at the centre and supporting content linking back to it — this structure helps Google understand topical authority and improves rankings across the entire cluster.
A typical technical audit for a medium-sized site will surface anywhere from 30 to 150 individual issues. Trying to fix everything at once is neither practical nor necessary. We prioritise every finding using a simple impact-versus-effort framework: issues that have a high potential impact on rankings and are relatively easy to fix go to the top of the list.
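The impact-versus-effort framework can be expressed as a simple ranking. A sketch, assuming each finding is scored 1–5 on both axes (the scoring scale and `prioritise` helper are our own illustration, not a formal methodology):

```python
def prioritise(issues):
    """Sort audit findings by impact-to-effort ratio, highest first.

    Each issue is a dict with 'name', 'impact' and 'effort', scored 1-5.
    """
    return sorted(issues, key=lambda i: i["impact"] / i["effort"], reverse=True)
```

High-impact, low-effort fixes surface at the top; expensive refactors with modest upside sink to the bottom of the roadmap.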
Critical issues — such as large sections of the site blocked from crawling, widespread canonical errors or security vulnerabilities — are flagged for immediate action. High-impact items like Core Web Vitals failures, missing structured data and broken redirect chains are scheduled for the first 30 days. Medium and lower-priority items are placed into a phased roadmap that typically spans 60–90 days.
We deliver the audit as a structured document with every issue categorised, explained in plain English (not just a list of Screaming Frog error codes) and accompanied by a specific recommendation for how to fix it. We also provide a summary for stakeholders who do not need the technical detail but need to understand what the issues are costing the business. The goal is not just to find problems — it is to create a clear, actionable plan that your development team can execute against.
Every audit also includes a re-crawl benchmark so we can measure the impact of the fixes once they are implemented. We typically run a follow-up audit 8–12 weeks after the initial fixes go live to verify improvements, catch any regressions and update the priority list based on current data.
Every engagement starts with a comprehensive technical audit. We will identify what is holding your organic performance back and give you a clear, prioritised plan to fix it.