| Apache StormCrawler | |
|---|---|
| Developer | Apache Software Foundation |
| Initial release | September 11, 2014 |
| Stable release | |
| Written in | Java |
| Type | Web crawler |
| License | Apache License |
| Website | stormcrawler |
| Repository | |
Apache StormCrawler is an open-source collection of resources for building low-latency, scalable web crawlers on Apache Storm. It is provided under the Apache License and is written mostly in Java.
StormCrawler is modular and consists of a core module, which provides the basic building blocks of a web crawler such as fetching, parsing, and URL filtering. Beyond the core components, the project also provides external resources, such as spouts and bolts for Elasticsearch and Apache Solr, or a ParserBolt that uses Apache Tika to parse various document formats.
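The components described above are typically wired together as an Apache Storm topology, which can be declared in Storm's Flux YAML format. The following is an illustrative sketch only; the class names, URL, and component IDs are assumptions for illustration and are not taken from this article.

```yaml
# Hypothetical minimal StormCrawler topology in Storm's Flux format.
# Class names below follow the project's historical package naming
# and are assumptions, not verified against a specific release.
name: "crawler"
spouts:
  - id: "spout"
    className: "com.digitalpebble.stormcrawler.spout.MemorySpout"
    parallelism: 1
    constructorArgs:
      - ["https://stormcrawler.apache.org/"]   # seed URL (example)
bolts:
  - id: "fetcher"
    className: "com.digitalpebble.stormcrawler.bolt.FetcherBolt"
    parallelism: 1
  - id: "parser"
    className: "com.digitalpebble.stormcrawler.bolt.JSoupParserBolt"
    parallelism: 1
streams:
  - from: "spout"        # seed URLs flow into the fetcher
    to: "fetcher"
    grouping:
      type: SHUFFLE
  - from: "fetcher"      # fetched content flows into the parser
    to: "parser"
    grouping:
      type: LOCAL_OR_SHUFFLE
```

In a real deployment, the in-memory spout would usually be replaced by one of the external spouts (for example, one backed by Elasticsearch or Apache Solr) so that the crawl frontier persists across restarts.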
The project is used by various organisations,[2] notably Common Crawl,[3] for generating a large and publicly available dataset of news.
Linux.com published a Q&A with the author of StormCrawler in October 2016.[4] InfoQ ran a similar interview in December 2016.[5] A benchmark comparing it with Apache Nutch was published on dzone.com in January 2017.[6]
Several research papers have mentioned the use of StormCrawler.
The project Wiki contains a list of videos and slides available online.[10]