Web Crawler

Functionalities:

  • Crawls the web starting from a seed URL
  • Crawls only up to a given maximum depth
  • Leverages concurrency to crawl URLs in parallel (a sketch follows this list)
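The three points above combine naturally in Go: a recursive crawl function that stops past the maximum depth, a shared visited set to avoid re-fetching pages, and one goroutine per discovered link. Below is a minimal sketch, assuming links are extracted with the golang.org/x/net/html parser; all names here are illustrative, not the repo's actual code.

```go
// Minimal sketch of a concurrent, depth-limited crawler.
// Assumes golang.org/x/net/html for link extraction (go get golang.org/x/net/html).
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"sync"

	"golang.org/x/net/html"
)

// crawl fetches a page, prints it, and recurses into its links concurrently
// until maxDepth is exceeded. visited guards against fetching a URL twice.
func crawl(page string, depth, maxDepth int, visited *sync.Map, wg *sync.WaitGroup) {
	defer wg.Done()
	if depth > maxDepth {
		return
	}
	if _, seen := visited.LoadOrStore(page, true); seen {
		return
	}
	fmt.Printf("depth %d: %s\n", depth, page)

	links, err := extractLinks(page)
	if err != nil {
		return
	}
	for _, link := range links {
		wg.Add(1)
		go crawl(link, depth+1, maxDepth, visited, wg) // one goroutine per link
	}
}

// extractLinks fetches a page and returns the absolute URLs of its <a href> targets.
func extractLinks(page string) ([]string, error) {
	resp, err := http.Get(page)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	base, err := url.Parse(page)
	if err != nil {
		return nil, err
	}
	doc, err := html.Parse(resp.Body)
	if err != nil {
		return nil, err
	}

	var links []string
	var walk func(*html.Node)
	walk = func(n *html.Node) {
		if n.Type == html.ElementNode && n.Data == "a" {
			for _, attr := range n.Attr {
				if attr.Key == "href" {
					// Resolve relative links against the page URL.
					if ref, err := url.Parse(attr.Val); err == nil {
						links = append(links, base.ResolveReference(ref).String())
					}
				}
			}
		}
		for c := n.FirstChild; c != nil; c = c.NextSibling {
			walk(c)
		}
	}
	walk(doc)
	return links, nil
}

func main() {
	var wg sync.WaitGroup
	visited := &sync.Map{}
	wg.Add(1)
	go crawl("https://example.com", 0, 2, visited, &wg)
	wg.Wait()
}
```

The sync.Map plus WaitGroup combination keeps the sketch short; a production crawler would also bound the number of in-flight goroutines and respect robots.txt.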

Future Enhancements

  • Extend the content parser to support all media types
  • Parse pages concurrently
  • Persist the crawled data

Usage

go run main.go <flags>

Flags:

url      seed URL to start crawling from
depth    maximum depth to crawl for a URL
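If the flags are wired up with Go's standard flag package (an assumption; the repo may parse them differently), the entry point could look like this sketch, with illustrative defaults:

```go
// Sketch of parsing the url and depth flags with Go's standard flag package.
// Flag names follow the README; the default values are illustrative assumptions.
package main

import (
	"flag"
	"fmt"
)

func main() {
	seedURL := flag.String("url", "https://example.com", "seed URL to start crawling from")
	maxDepth := flag.Int("depth", 2, "maximum depth to crawl for a URL")
	flag.Parse()

	fmt.Printf("crawling %s up to depth %d\n", *seedURL, *maxDepth)
	// ...kick off the crawl from *seedURL here...
}
```

Under that assumption, an invocation would look like:

go run main.go -url=https://example.com -depth=2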
