Link Analyzer

Extract and analyze all links from HTML content. Classify internal vs external, check nofollow and noopener attributes, and identify SEO and security issues.

A link analyzer is a specialized diagnostic process used in Search Engine Optimization (SEO) to extract, categorize, and evaluate the hyperlinks present on a specific webpage or across an entire domain. Because search engines like Google treat hyperlinks as fundamental votes of confidence and pathways for discovering content, analyzing these links is essential for understanding how a website communicates its structure, authority, and relevance to search algorithms. By mastering link analysis, website owners can identify toxic external links, optimize the flow of internal ranking power, and strategically structure their digital properties to achieve maximum visibility in search engine results.

What It Is and Why It Matters

At its core, a link analyzer is a system designed to parse the HTML code of a webpage, locate every instance of a hyperlink, and extract critical metadata about where that link points and how it is constructed. The internet is fundamentally a massive, interconnected web of documents, and hyperlinks are the threads that bind these documents together. Search engine crawlers—the automated bots that index the internet—rely entirely on these links to navigate from one page to another. If a page has no links pointing to it, it is virtually invisible to search engines. Therefore, understanding exactly what links exist on your website, where they lead, and what instructions they give to search engines is the bedrock of technical SEO.

The necessity of link analysis stems from the concept of "link equity," historically referred to as "link juice." When a highly authoritative webpage links to another page, it passes a portion of its authority to the destination page. This transfer of authority is what allows new or deeply buried pages to rank in search results. However, this flow of equity must be carefully managed. If a website links out to hundreds of low-quality, spammy websites, it dilutes its own authority and signals to search engines that it resides in a bad neighborhood of the internet. Conversely, if a website fails to link its own pages together efficiently, it starves its own content of necessary ranking power.

A link analyzer solves the problem of visibility and control. Without analyzing your links, you are flying blind. You might have broken links that lead users and search engine bots to dead ends (404 errors), wasting valuable "crawl budget"—the limited amount of time a search engine allocates to scanning your site. You might be inadvertently passing your hard-earned link equity to competing websites, or you might have "orphan pages" on your own site that are entirely disconnected from your main navigation. By systematically extracting and categorizing every link, a link analyzer allows webmasters, marketers, and developers to audit their site architecture, repair broken pathways, and ensure that every link serves a strategic purpose in improving the site's overall search engine performance.

The history of link analysis is inextricably tied to the invention of the World Wide Web and the subsequent evolution of search engines. In 1989, Tim Berners-Lee invented the World Wide Web at CERN, creating the fundamental concept of hypertext—documents connected by clickable links. However, in the early days of the web, search engines like AltaVista and Yahoo! Directory ranked pages primarily based on on-page factors, such as keyword density. If you wanted to rank for "buy shoes," you simply typed "buy shoes" on your webpage a hundred times. This led to massive manipulation and poor search results, as the algorithms had no way to determine the actual quality or authority of a document.

The paradigm shifted permanently in 1996 when Larry Page and Sergey Brin, two PhD students at Stanford University, began working on a search engine project called "BackRub." They theorized that the web could be modeled as a massive graph, where pages were nodes and hyperlinks were edges. They hypothesized that a link from Page A to Page B should be viewed as a "vote" of confidence by Page A for Page B. Furthermore, a vote from a highly authoritative page should carry more weight than a vote from an obscure page. This theory culminated in the publication of the PageRank algorithm in 1998, which became the foundation of Google. Overnight, link analysis became the most important discipline in digital marketing, as understanding and acquiring links was the only way to dominate search results.

As SEO practitioners realized the power of links, a massive industry of link spam emerged. Webmasters built automated software to drop millions of links in blog comments and forums to artificially inflate their PageRank. To combat this, Google, Yahoo, and Microsoft introduced the rel="nofollow" link attribute in early 2005. This allowed webmasters to link to a page without passing link equity, effectively telling the search engine, "I am linking to this, but I do not vouch for it." The evolution continued with Google's devastating "Penguin" algorithm update on April 24, 2012, which began actively penalizing websites that possessed unnatural or spammy inbound link profiles. Most recently, in September 2019, Google evolved link analysis again by introducing the rel="sponsored" and rel="ugc" (User Generated Content) attributes, requiring even more granular analysis of how websites classify their outbound connections.

Key Concepts and Terminology

To fully grasp link analysis, one must understand the specific vocabulary used to describe the anatomy and function of hyperlinks. The foundation is the HTML Anchor Tag, represented as <a> in website code. This tag is what creates a hyperlink. Within this tag, the href attribute (Hypertext Reference) dictates the destination URL of the link. For example, in the code <a href="https://example.com">Click Here</a>, the href dictates where the browser will navigate when the user clicks.

Anchor Text is the visible, clickable text of the hyperlink—in the previous example, "Click Here." Search engines use anchor text as highly relevant context about the destination page. If a link's anchor text is "best running shoes," the search engine assumes the destination page is about running shoes. Internal Links are hyperlinks that point to another page on the exact same domain (e.g., your homepage linking to your "About Us" page). External Links (or Outbound Links) point from your domain to a completely different domain. Conversely, Backlinks (or Inbound Links) are external links from other websites pointing inward to your domain.
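The internal-vs-external distinction above is easy to automate. Below is a minimal Python sketch (the function name and the `www.`-stripping rule are our own choices) that resolves a relative `href` against the page it appears on and compares hostnames:

```python
from urllib.parse import urljoin, urlparse

def classify_link(page_url: str, href: str) -> str:
    """Classify a link as 'internal' or 'external' relative to the page it appears on."""
    # Resolve relative hrefs ("/about", "contact.html") against the current page URL.
    absolute = urljoin(page_url, href)
    # Treat "www.example.com" and "example.com" as the same site (a common convention).
    page_host = urlparse(page_url).netloc.lower().removeprefix("www.")
    link_host = urlparse(absolute).netloc.lower().removeprefix("www.")
    return "internal" if link_host == page_host else "external"
```

For example, `classify_link("https://example.com/blog/", "/about")` is internal, while a link to a different hostname is external.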

The concept of Link Equity (colloquially known as Link Juice) refers to the ranking power that flows through a link. When an authoritative page links to another page, a portion of its authority flows through that link. Dofollow Links are standard links that allow this equity to flow freely. Nofollow Links, designated by the rel="nofollow" attribute, instruct search engines not to pass link equity to the destination. Crawl Depth (or Click Depth) measures how many clicks it takes to reach a specific page starting from the homepage. A page with a crawl depth of 1 is linked directly from the homepage, while a page with a crawl depth of 15 is buried deep within the site architecture. Finally, an Orphan Page is a webpage that exists on a server but has zero internal links pointing to it, making it nearly impossible for users or search engine crawlers to find naturally.
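Crawl depth and orphan pages both fall out of a breadth-first walk of the internal link graph. The sketch below (hypothetical page names; the adjacency list is assumed to have been extracted already) computes the click depth of every reachable page and lists the orphans left over:

```python
from collections import deque

def crawl_depths(links: dict[str, list[str]], home: str) -> dict[str, int]:
    """Breadth-first search from the homepage: depth = minimum clicks to reach a page."""
    depths = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depths:          # first visit is the shortest path in BFS
                depths[target] = depths[page] + 1
                queue.append(target)
    return depths

site = {
    "/": ["/shop", "/about"],
    "/shop": ["/shop/shoes"],
    "/shop/shoes": [],
    "/about": [],
    "/old-landing-page": [],   # exists on the server, but nothing links to it
}
depths = crawl_depths(site, "/")
orphans = [page for page in site if page not in depths]  # unreachable from "/"
```

Here `/shop/shoes` has a crawl depth of 2, and `/old-landing-page` surfaces as an orphan.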

How It Works — Step by Step (with PageRank Math)

A link analyzer operates through a systematic process of crawling, parsing, and calculating. Step one is the HTTP Request. The analyzer sends a request to a server to fetch the raw HTML document of a target URL. Step two is DOM Parsing (Document Object Model Parsing). The analyzer's engine reads the HTML text and isolates every single <a> tag. Step three is Attribute Extraction. For every <a> tag found, the analyzer extracts the destination URL from the href attribute, records the visible anchor text, and checks for any rel attributes like nofollow. Step four is Status Code Verification. The analyzer pings every extracted URL to see if it responds with a 200 OK (success), a 301 (permanent redirect), or a 404 (not found). Step five is the calculation of equity flow, which models how authority is distributed among the extracted links.
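Steps two and three — DOM parsing and attribute extraction — can be sketched with Python's standard-library HTML parser. This is a simplified illustration rather than a production crawler: it collects each link's `href`, `rel` attribute, and anchor text:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href, rel, and anchor text from every <a> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._current = None  # the link currently open, so we can capture its anchor text

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)
            self._current = {"href": a.get("href"), "rel": a.get("rel", ""), "text": ""}

    def handle_data(self, data):
        if self._current is not None:
            self._current["text"] += data

    def handle_endtag(self, tag):
        if tag == "a" and self._current is not None:
            self.links.append(self._current)
            self._current = None

extractor = LinkExtractor()
extractor.feed('<p><a href="/about">About Us</a> <a href="https://x.com" rel="nofollow">X</a></p>')
```

After the `feed()` call, `extractor.links` holds one record per anchor tag, ready for classification and status-code verification.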

To understand how a link analyzer evaluates the flow of authority, we must look at the mathematical foundation of link analysis: the original PageRank formula. The simplified formula is: $PR(A) = (1-d) + d \times \sum (PR(T_i)/C(T_i))$

Where:

  • $PR(A)$ is the PageRank of the target Page A.
  • $d$ is the damping factor, usually set at $0.85$. This represents the probability that a random web surfer will continue clicking links (85% chance) versus getting bored and starting over (15% chance).
  • $T_i$ represents the pages linking to Page A.
  • $PR(T_i)$ is the PageRank of the linking page.
  • $C(T_i)$ is the total number of outbound links on the linking page.

Worked Example: Imagine a microscopic internet consisting of exactly three pages: Page A, Page B, and Page C.

  • Page A has 2 outbound links: pointing to B and C. ($C(A) = 2$)
  • Page B has 1 outbound link: pointing to C. ($C(B) = 1$)
  • Page C has 1 outbound link: pointing to A. ($C(C) = 1$)

Assume we are running the first iteration of the algorithm, and every page starts with a base PageRank of $1.0$.

Let's calculate the new PageRank for Page A. We look at which pages link to Page A. Only Page C links to A.

$PR(A) = (1 - 0.85) + 0.85 \times [PR(C) / C(C)]$
$PR(A) = 0.15 + 0.85 \times (1.0 / 1)$
$PR(A) = 1.00$

Now, let's calculate Page B. Only Page A links to B.

$PR(B) = 0.15 + 0.85 \times [PR(A) / C(A)]$
$PR(B) = 0.15 + 0.85 \times (1.0 / 2)$
$PR(B) = 0.15 + 0.425 = 0.575$

Now, let's calculate Page C. Both Page A and Page B link to C.

$PR(C) = 0.15 + 0.85 \times [(PR(A) / C(A)) + (PR(B) / C(B))]$
$PR(C) = 0.15 + 0.85 \times [(1.0 / 2) + (1.0 / 1)]$
$PR(C) = 0.15 + 0.85 \times 1.5$
$PR(C) = 0.15 + 1.275 = 1.425$

In this closed system, Page C emerges as the most authoritative page because it receives equity from both A and B, and B doesn't dilute its vote because it only links to C. A link analyzer performs this exact type of mathematical modeling, but scaled up across millions of URLs, to help you determine which pages on your site hold the most concentrated ranking power.
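The three-page worked example above can be checked in a few lines of Python. The sketch implements one pass of the simplified formula; the graph encoding (inbound lists plus outbound counts) is our own choice of representation:

```python
def pagerank_iteration(inbound, outdegree, pr, d=0.85):
    """One pass of the simplified formula: PR(A) = (1-d) + d * sum(PR(T)/C(T))."""
    return {
        page: (1 - d) + d * sum(pr[t] / outdegree[t] for t in sources)
        for page, sources in inbound.items()
    }

inbound = {"A": ["C"], "B": ["A"], "C": ["A", "B"]}   # who links TO each page
outdegree = {"A": 2, "B": 1, "C": 1}                  # C(T): outbound link counts
pr = {"A": 1.0, "B": 1.0, "C": 1.0}                   # every page starts at 1.0
pr = pagerank_iteration(inbound, outdegree, pr)
# First iteration reproduces the worked example: A = 1.00, B = 0.575, C = 1.425
```

Running the function repeatedly until the values stop changing is how the full algorithm converges on stable scores.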

Types, Variations, and Methods

Link analysis is not a monolith; it is divided into distinct methodologies depending on the goal of the SEO practitioner. The first type is Internal Link Auditing. This method focuses exclusively on the links connecting a single domain's pages. The goal here is to map the site architecture, ensure link equity flows efficiently from high-authority pages (like the homepage) to deeper product or article pages, and identify technical errors like 404s or redirect loops. Internal link analysis is completely within the control of the webmaster and is often the fastest way to achieve search ranking improvements.

The second type is Outbound Link Analysis. This involves scanning a domain to catalog every link that points to an external website. Search engines judge a website based on the company it keeps. If an outbound link analyzer reveals that a site is linking to gambling, pharmaceutical, or adult websites without using the nofollow attribute, it indicates to the webmaster that their site may have been hacked, or that their editorial standards are signaling poor quality to search engines. Outbound analysis is crucial for maintaining a pristine digital reputation and preventing the bleeding of link equity to irrelevant domains.

The third type is Backlink Analysis (or Competitor Link Analysis). Unlike internal and outbound analysis, which look at links on your site, backlink analysis looks at links on other websites pointing to your site. Because you cannot easily crawl the entire internet yourself, this method relies on massive third-party databases that crawl the web continuously. Practitioners use backlink analysis to discover who is linking to them, measure the authority of those referring domains, and audit for toxic or spammy links that could trigger a Google penalty. Furthermore, by running a backlink analysis on a competitor's domain, a marketer can reverse-engineer their SEO strategy, discovering exactly which websites they need to acquire links from in order to compete.

Real-World Examples and Applications

Consider a tangible scenario involving an e-commerce website selling outdoor gear, boasting a catalog of 50,000 distinct SKUs (Stock Keeping Units). The site's organic traffic has plateaued. An SEO manager runs a comprehensive internal link analyzer across the domain and discovers a massive structural flaw: out of the 50,000 product pages, 12,000 are "orphan pages." These products were uploaded to the database and exist in the XML sitemap, but due to a bug in the category pagination, no internal category pages actually link to them. Because they have zero internal links, search engine crawlers rarely visit them, and they receive zero link equity from the homepage. By using the link analyzer's data, the developers fix the pagination, instantly connecting these 12,000 pages to the site graph. Within weeks, search engines index the previously hidden products, resulting in a $45,000 monthly increase in organic revenue.

In another application, imagine a B2B SaaS (Software as a Service) company that recently acquired a smaller competitor for $2.5 million. The acquiring company wants to merge the competitor's website into their own to absorb its SEO authority. However, before setting up 301 redirects, they perform a deep backlink analysis on the acquired domain. The analyzer reveals that 40% of the acquired domain's inbound links come from toxic, exact-match anchor text directories built by a black-hat SEO agency five years prior. If the SaaS company simply redirected the entire domain, they would pass that toxic link history to their pristine main site, risking a severe algorithmic penalty. Using this intelligence, they isolate the toxic links, submit a "Disavow" file to Google to nullify them, and only redirect the pages with high-quality, natural backlinks, safely preserving the SEO value of their multi-million dollar acquisition.

The Anatomy of Rel Attributes (Nofollow, Noopener, Noreferrer)

When a link analyzer parses HTML, the rel (relationship) attribute is one of the most critical pieces of data it extracts. The rel attribute defines the relationship between the linking page and the linked page. The most famous is rel="nofollow". Introduced in 2005 to combat blog comment spam, this attribute explicitly tells search engines: "Do not follow this link, and do not pass any link equity to the destination." If a high-authority news site links to a local business with a nofollow tag, the local business may receive referral traffic from readers clicking the link, but they will receive zero direct SEO ranking benefit from the news site's authority.

In 2019, Google expanded the nofollow ecosystem by introducing two new granular attributes. rel="sponsored" was created specifically for links that are part of advertisements, sponsorships, or affiliate agreements. If money or compensation changed hands for the link, it must use this tag. rel="ugc" stands for User Generated Content, and is designed for links placed within forum posts, blog comments, or user profiles. Both of these act similarly to nofollow in that they restrict the flow of link equity, but they provide search engines with better context about why the link is restricted. A sophisticated link analyzer will categorize all three of these attributes separately to give webmasters an accurate view of their outbound link profile.

Beyond SEO, link analyzers also check for security and privacy attributes like noopener and noreferrer. When a link is set to open in a new browser tab using target="_blank", it creates a security vulnerability. The newly opened tab gains partial access to the original tab's window object via JavaScript, allowing malicious sites to redirect the original tab to a phishing page. Adding rel="noopener" severs this connection, protecting the user. Similarly, rel="noreferrer" instructs the browser to hide the origin URL from the destination site's analytics. If Site A links to Site B with noreferrer, Site B's Google Analytics will show the traffic as "Direct" rather than a referral from Site A. While these do not directly impact SEO link equity, they are vital components of modern web development that a thorough link analyzer must audit.
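A crude audit of the noopener rule can be sketched with a regular expression, though a real analyzer would use a proper HTML parser (this naive pattern misses single-quoted attributes, whitespace variations, and tags split across lines). The function flags any anchor tag that opens a new tab without severing the opener connection:

```python
import re

def unsafe_blank_links(html: str) -> list[str]:
    """Flag <a> tags that use target="_blank" without rel containing "noopener"."""
    flagged = []
    for tag in re.findall(r"<a\b[^>]*>", html, flags=re.IGNORECASE):
        if 'target="_blank"' in tag and "noopener" not in tag:
            flagged.append(tag)
    return flagged
```

Feeding it a page with one protected and one unprotected new-tab link returns only the unprotected tag.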

Common Mistakes and Misconceptions

One of the most pervasive misconceptions among beginners using link analyzers is the belief in "PageRank Sculpting." Prior to 2009, SEOs believed that if a page had 10 outbound links, and they added a nofollow attribute to 5 of them, the remaining 5 dofollow links would receive double the link equity. Google explicitly changed their algorithm to stop this practice. Today, if a page has 10 links, the total link equity is divided by 10, regardless of whether some are nofollowed. The nofollowed links simply drop their share of the equity into a black hole. Therefore, adding nofollow to internal links (like your Privacy Policy or Contact page) to "funnel" equity to product pages is a massive mistake that actually wastes your website's total authority.

Another critical mistake is over-optimizing anchor text. When beginners realize that anchor text influences rankings, their immediate instinct is to use exact-match keywords for every internal link. If they want a page to rank for "cheap car insurance," they will ensure that every single internal link pointing to that page uses the exact phrase "cheap car insurance." Link analyzers easily flag this unnatural pattern. Search engines view 100% exact-match anchor text as highly manipulative and spammy. Natural link profiles feature a diverse mix of exact match, partial match (e.g., "affordable insurance for your car"), branded terms, and generic text (e.g., "click here" or "read more").

A third common pitfall is ignoring broken outbound links. Webmasters often obsess over fixing broken internal links but ignore the links pointing to external sites that have since shut down or moved. If your website contains hundreds of links pointing to 404 error pages on other domains, it signals to search engines that your content is outdated, unmaintained, and provides a poor user experience. This phenomenon, known as "link rot," slowly degrades the perceived quality of your site. Regular outbound link analysis is required to identify and replace these dead external references.

Best Practices and Expert Strategies

Expert SEO practitioners use link analyzers not just for error checking, but for strategic site architecture design. The gold standard for internal linking is the "Hub-and-Spoke" or "Silo" model. In this strategy, a central, highly authoritative "Hub" page (e.g., a massive guide on "Digital Marketing") links out to dozens of specific "Spoke" pages (e.g., "Email Marketing Basics," "Social Media Strategy"). Crucially, all the Spoke pages link back to the Hub, and to each other. When a link analyzer visualizes this structure, it should look like a tightly clustered web. This traps link equity within a specific topical cluster, signaling immense topical authority to search engines and ensuring that if one spoke page earns a powerful external backlink, that equity flows immediately to the hub and the other related spokes.

Managing "Crawl Budget" is another expert strategy derived from link analysis. Search engines do not have infinite resources; they allocate a specific amount of time to crawl your site based on its size and authority. If a link analyzer reveals that you have 50,000 dynamically generated URL parameters (e.g., ?sort=price&color=red) that all contain duplicate content, you are wasting your crawl budget. Experts use link analysis to identify these infinite crawling traps and implement strategic robots.txt disallows or rel="canonical" tags. This forces search engine bots to spend their limited time crawling your valuable, revenue-generating pages rather than getting stuck in endless loops of faceted navigation.
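A simple heuristic for spotting parameter-driven crawl traps: parse each URL's query string and flag URLs composed entirely of faceted-navigation parameters. The parameter list below is purely illustrative; every site's wasteful parameters differ:

```python
from urllib.parse import urlparse, parse_qs

# Illustrative set of parameters that typically generate near-duplicate views.
WASTEFUL_PARAMS = {"sort", "color", "sessionid", "filter"}

def wastes_crawl_budget(url: str) -> bool:
    """Heuristic: True if the query string consists only of faceted-navigation params."""
    params = parse_qs(urlparse(url).query)
    return bool(params) and all(name.lower() in WASTEFUL_PARAMS for name in params)
```

A URL like `?sort=price&color=red` gets flagged, while the clean category URL and a genuine search query pass through untouched.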

Finally, experts aggressively hunt and eradicate "Redirect Chains." A redirect chain occurs when Page A redirects to Page B, which redirects to Page C, which finally redirects to Page D. Every step in a redirect chain causes a slight loss of link equity (historically estimated at around 15% per jump) and significantly slows down page loading times. Furthermore, search engine crawlers abandon excessively long chains — Google's documentation caps Googlebot at 10 redirect hops — meaning the final destination page may never get indexed. A link analyzer easily exposes these chains, allowing the webmaster to update the original link on Page A to point directly to Page D, bypassing the middle steps entirely and preserving as much link equity as possible.
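Flattening a chain is straightforward once the analyzer has mapped each redirecting URL to its target. A minimal sketch (the hop limit and loop guard are our own defensive choices):

```python
def flatten_redirects(redirects: dict[str, str], url: str, max_hops: int = 5):
    """Follow a chain of known redirects to its final destination, guarding against loops."""
    seen = [url]
    while url in redirects:
        url = redirects[url]
        if url in seen or len(seen) > max_hops:
            return None  # redirect loop or chain too long: a crawler would give up here
        seen.append(url)
    return url

chain = {"/a": "/b", "/b": "/c", "/c": "/d"}
# flatten_redirects(chain, "/a") resolves the whole A -> B -> C -> D chain to "/d",
# telling the webmaster to point the original link straight at "/d"
```

The same function returns None on a loop (A redirects to B, B back to A), which a link analyzer would surface as a critical error.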

Edge Cases, Limitations, and Pitfalls

While link analyzers are powerful, they are bound by the technical limitations of how the modern web is constructed. The most significant edge case involves Client-Side Rendering (CSR) and JavaScript frameworks like React, Angular, or Vue.js. Traditional link analyzers rely on parsing the raw HTML returned by the server. However, many modern websites serve a nearly blank HTML document and use JavaScript to dynamically generate the content—and the links—in the user's browser. A standard HTML link analyzer will see zero links on these pages. To accurately analyze a JavaScript-heavy site, practitioners must use advanced link analyzers equipped with "Headless Browsers" (like Puppeteer or Playwright) that actually render the JavaScript before extracting the DOM. This process is dramatically slower and requires vastly more computing power.

Another limitation is the "Infinite Space" pitfall, commonly found in calendar applications or dynamic date filters. If a website has a sidebar calendar where every "Next Month" button is a crawlable <a> tag, a link analyzer will follow it. It will click to next month, then the next, generating URLs for the year 2025, 2030, 2050, and into infinity. Without careful configuration and strict depth limits, a link analyzer will get trapped in this infinite space, running until it runs out of memory or crashes the user's computer. Practitioners must be acutely aware of dynamic URL generation and use inclusion/exclusion rules to prevent the analyzer from crawling useless, auto-generated pathways.

Finally, link analyzers struggle with links embedded within <iframe> tags or unstructured data like PDFs and images. An iframe is essentially a window showing another website within your website. While users can click links inside the iframe, those links belong to the embedded site, not the host site. Beginners often confuse iframe content with their own DOM. Furthermore, while search engines can extract links from PDF documents, many basic link analyzers only parse HTML, meaning valuable internal linking structures hidden within downloadable resources are completely ignored in the final equity calculations.

Industry Standards and Benchmarks

To understand whether the data returned by a link analyzer is "good" or "bad," professionals rely on established industry benchmarks. Regarding HTTP status codes, the standard is strict: a technically sound website should have less than a 2% rate of 404 (Not Found) errors among its internal links. Ideally, 100% of internal links should return a 200 OK status. While 301 redirects are acceptable, they should make up less than 10% of internal link pathways to ensure fast crawler navigation and optimal equity flow.

For internal site architecture, the widely accepted benchmark for "Crawl Depth" is the 3-Click Rule. Industry standards dictate that any important, revenue-generating page should be reachable within 3 clicks from the homepage. If a link analyzer reveals that key product pages have a crawl depth of 5, 6, or 7, the site architecture is considered poor and requires flattening. Regarding the sheer volume of links, historically, Google recommended keeping outbound links to under 100 per page to avoid overwhelming the crawler. While this hard limit was officially deprecated, modern UX and SEO standards suggest keeping the total number of links (including navigation, footer, and body links) below 200-300 per page to ensure that the link equity isn't diluted into microscopically small fractions.

In the realm of backlink analysis and anchor text, standards are highly dependent on the industry, but general benchmarks exist to avoid penalty. A natural, healthy backlink profile typically consists of 50% to 70% Branded or URL anchor text (e.g., "Nike," "nike.com," "Nike Inc."). Exact match commercial anchor text (e.g., "buy cheap running shoes") should rarely exceed 1% to 5% of the total backlink profile. If a competitor link analyzer shows a website with 40% exact match anchor text, it is almost certainly engaging in black-hat link building and is at a high risk of an algorithmic penalty from Google's spam detection systems.
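Checking a backlink profile against these anchor-text benchmarks reduces to counting category shares. A minimal sketch, assuming the anchors have already been labeled by category (the labels and sample data are illustrative):

```python
from collections import Counter

def anchor_profile(anchors: list[tuple[str, str]]) -> dict[str, float]:
    """Percentage share of each anchor-text category ('branded', 'exact', 'generic', ...)."""
    counts = Counter(category for category, _text in anchors)
    total = sum(counts.values())
    return {category: round(100 * n / total, 1) for category, n in counts.items()}

backlinks = [
    ("branded", "Nike"), ("branded", "nike.com"), ("generic", "click here"),
    ("exact", "buy cheap running shoes"), ("branded", "Nike Inc."),
]
profile = anchor_profile(backlinks)
# An 'exact' share far above the 1-5% benchmark would be flagged as a penalty risk
```

On this tiny sample the branded share is 60% (within the healthy 50–70% band), but the exact-match share is 20% — far above the 1–5% benchmark, which an auditor would flag.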

Comparisons with Alternatives

While dedicated link analyzers (both cloud-based crawlers and desktop software) are the standard tools for this work, there are alternative methods for understanding a website's link structure, each with distinct pros and cons. The most notable alternative is Server Log File Analysis. When a search engine bot like Googlebot crawls a website, it leaves a record in the server's log files. By analyzing these logs, you can see exactly which links Google is actually following, rather than just the links a third-party tool thinks are there. Log file analysis provides 100% ground truth about search engine behavior. However, it is highly technical, requires server access that many marketers lack, and only shows you what Google has already done, whereas a link analyzer allows you to proactively test changes before Google sees them.

Another alternative is relying solely on Google Search Console (GSC). GSC provides a free "Links" report showing top internally linked pages and external backlinks. The primary advantage of GSC is that the data comes directly from the source—Google itself. If GSC says you have a backlink, you definitely have it. However, the severe limitation of GSC is that it heavily aggregates and samples the data. It will only show a maximum of 1,000 rows of data, making it completely useless for comprehensive audits of large enterprise sites with millions of URLs. Furthermore, GSC does not provide crawl depth metrics, anchor text categorization for internal links, or status code reporting for outbound links.

Ultimately, choosing between these methods depends on the scale of the problem. For a 10-page local business site, GSC is sufficient. For diagnosing complex indexation issues, Log File Analysis is unparalleled. But for the vast majority of SEO tasks—auditing architecture, calculating equity flow, fixing broken pathways, and analyzing competitors—a dedicated web crawler and link analyzer remains the most versatile, comprehensive, and actionable method available to digital marketing professionals.

Frequently Asked Questions

What is the difference between a Dofollow and a Nofollow link? A dofollow link is a standard hyperlink that allows search engine crawlers to follow the path and passes "link equity" (ranking power) from the source page to the destination page. A nofollow link contains the HTML attribute rel="nofollow", which explicitly instructs search engines not to pass any link equity to the destination. While users can click both types of links normally, nofollow links are used for untrusted content, advertisements, or comment sections to prevent the manipulation of search rankings.

How many internal links are too many for one webpage? While search engines no longer enforce a strict "100 links per page" rule, having too many links dilutes the link equity passed to each destination. If a page has 10 links, each receives roughly 1/10th of the available equity; if it has 1,000 links, each receives 1/1,000th. Best practices suggest keeping total links (including navigation and footers) under 200 to 300 per page to ensure a good user experience and to keep the equity flow concentrated on your most important pages.

What is an orphan page and why is it bad for SEO? An orphan page is a webpage that exists on your server and can be accessed via its direct URL, but has absolutely zero internal links pointing to it from anywhere else on your website. This is terrible for SEO because search engine crawlers rely on links to discover content; without links, the page will likely never be indexed. Furthermore, even if it is in your XML sitemap, the lack of internal links means the page receives zero link equity, making it nearly impossible for the page to rank for competitive keywords.

Can I use a link analyzer to find out why my competitor is outranking me? Yes, this is one of the most powerful use cases for backlink analysis. By running a link analyzer on a competitor's domain, you can extract their entire backlink profile. You can see exactly which authoritative websites are linking to them, what anchor text is being used, and which of their pages attract the most links. You can then use this intelligence to replicate their strategy, reaching out to those same websites to acquire links for your own domain.

Why does my link analyzer show a 301 redirect as an error or warning? A 301 redirect is a permanent redirection from one URL to another, and while it is the correct way to handle moved content, it is not ideal for internal site architecture. Every time a crawler has to pass through a 301 redirect, it slightly slows down the page load time and historically causes a small loss of link equity. Link analyzers flag internal 301s so that webmasters can update the original link to point directly to the final destination, ensuring the cleanest, fastest, and most powerful link architecture possible.

Do outbound links to other websites help my SEO? Yes, but indirectly. Linking out to highly authoritative, relevant sources (like government sites, academic journals, or industry leaders) helps search engines understand the context and factual basis of your content. It signals that your page is a well-researched hub of information. However, linking out to low-quality, spammy, or irrelevant websites will actively harm your SEO, as it signals that your website is part of a low-quality neighborhood on the web.

How often should I run a link analysis on my website? The frequency depends entirely on the size of your site and how often it is updated. For a massive e-commerce site or a news publisher adding hundreds of pages daily, an internal link audit should be run weekly or even continuously via automated cloud crawlers. For a standard corporate website or medium-sized blog, a comprehensive manual link analysis should be conducted at least once a quarter to catch broken links, fix redirect chains, and ensure newly added content is properly integrated into the site architecture.
