Depth-first crawling

Depth-first crawling prioritizes links on the same domain over links that lead to other domains. In this program, external links are ignored entirely; only paths on the same domain and relative links are followed.

In this example, unique paths are stored in a slice and printed all together at the end. Any errors encountered during the crawl are ignored. Errors often occur due to malformed links, and we don't want the whole program to exit on an error like that.

Instead of trying to parse URLs manually with string functions, the url.Parse() function from the standard net/url package is used. It does the work of splitting the host from the path.

When crawling, any query strings and fragments are ignored to reduce duplicates.
