.\" Automatically generated by Pod::Man 4.14 (Pod::Simple 3.40) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" Set up some character translations and predefined strings. \*(-- will .\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left .\" double quote, and \*(R" will give a right double quote. \*(C+ will .\" give a nicer C++. Capital omega is used to do unbreakable dashes and .\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, .\" nothing in troff, for use with C<>. .tr \(*W- .ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' .ie n \{\ . ds -- \(*W- . ds PI pi . if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch . if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch . ds L" "" . ds R" "" . ds C` "" . ds C' "" 'br\} .el\{\ . ds -- \|\(em\| . ds PI \(*p . ds L" `` . ds R" '' . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" ======================================================================== .\" .IX Title "Scrappy 3pm" .TH Scrappy 3pm "2021-01-07" "perl v5.32.0" "User Contributed Perl Documentation" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH "NAME" Scrappy \- The All Powerful Web Spidering, Scraping, Creeping Crawling Framework .SH "VERSION" .IX Header "VERSION" version 0.94112090 .SH "SYNOPSIS" .IX Header "SYNOPSIS" .Vb 2 \& #!/usr/bin/perl \& use Scrappy; \& \& my $scraper = Scrappy\->new; \& \& $scraper\->crawl(\*(Aqhttp://search.cpan.org/recent\*(Aq, \& \*(Aq/recent\*(Aq => { \& \*(Aq#cpansearch li a\*(Aq => sub { \& print $_[1]\->{href}, "\en"; \& } \& } \& ); .Ve .PP And now manually, ... without crawl, the above is similar to the following ... .PP .Vb 2 \& #!/usr/bin/perl \& use Scrappy; \& \& my $scraper = Scrappy\->new; \& \& if ($scraper\->get($url)\->page_loaded) { \& $scraper\->select(\*(Aq#cpansearch li a\*(Aq)\->each(sub{ \& print shift\->{href}, "\en"; \& }); \& } .Ve .SH "DESCRIPTION" .IX Header "DESCRIPTION" Scrappy is an easy (and hopefully fun) way of scraping, spidering, and/or harvesting information from web pages, web services, and more. Scrappy is a feature rich, flexible, intelligent web automation tool. .PP Scrappy (pronounced Scrap+Pee) == 'Scraper Happy' or 'Happy Scraper'; If you like you may call it Scrapy (pronounced Scrape+Pee) although Python has a web scraping framework by that name and this module is not a port of that one. .SS "\s-1FEATURES\s0" .IX Subsection "FEATURES" Scrappy provides a framework containing all the tools necessary to create a simple yet powerful web scraper. 
At its core, Scrappy loads an array of features for access control, event
logging, session handling, url matching, web request and response handling,
proxy management, web scraping, and downloading.
.PP
Furthermore, Scrappy provides a simple Moose-based plugin system that allows
Scrappy to be easily extended.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->control;    # Scrappy::Scraper::Control (access control)
\& $scraper\->parser;     # Scrappy::Scraper::Parser (web scraper)
\& $scraper\->user_agent; # Scrappy::Scraper::UserAgent (user\-agent tools)
\& $scraper\->logger;     # Scrappy::Logger (event logger)
\& $scraper\->queue;      # Scrappy::Queue (flow control for loops)
\& $scraper\->session;    # Scrappy::Session (session management)
.Ve
.PP
Please see the \s-1METHODS\s0 section for a more in-depth look at all Scrappy
functionality.
.SS "\s-1ATTRIBUTES\s0"
.IX Subsection "ATTRIBUTES"
The following is a list of object attributes available with every Scrappy
instance; each attribute returns an instance of the class it represents.
.PP
\fIcontent\fR
.IX Subsection "content"
.PP
The content attribute holds the HTTP::Response object of the current request.
Returns undef if no page has been successfully fetched.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->content;
.Ve
.PP
\fIcontrol\fR
.IX Subsection "control"
.PP
The control attribute holds the Scrappy::Scraper::Control object which is used
to provide access control to the scraper.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->control;
\&
\& ... $scraper\->control\->restrict(\*(Aqgoogle.com\*(Aq);
\& ... $scraper\->control\->allow(\*(Aqcpan.org\*(Aq);
\& ... if $scraper\->control\->is_allowed($url);
.Ve
.PP
\fIdebug\fR
.IX Subsection "debug"
.PP
The debug attribute holds a boolean which controls whether event logs are
captured.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->debug(1);
.Ve
.PP
\fIlogger\fR
.IX Subsection "logger"
.PP
The logger attribute holds the Scrappy::Logger object which is used to provide
event logging capabilities to the scraper.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->logger;
.Ve
.PP
\fIparser\fR
.IX Subsection "parser"
.PP
The parser attribute holds the Scrappy::Scraper::Parser object which is used to
scrape \s-1HTML\s0 data from the specified source material.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->parser;
.Ve
.PP
\fIplugins\fR
.IX Subsection "plugins"
.PP
The plugins attribute holds the Scrappy::Plugin object which is an interface
used to load plugins.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->plugins;
.Ve
.PP
\fIqueue\fR
.IX Subsection "queue"
.PP
The queue attribute holds the Scrappy::Queue object which is used to provide
flow-control for the standard loop approach to crawling.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->queue;
.Ve
.PP
\fIsession\fR
.IX Subsection "session"
.PP
The session attribute holds the Scrappy::Session object which is used to
provide session support and persistent data across executions.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->session;
.Ve
.PP
\fIuser_agent\fR
.IX Subsection "user_agent"
.PP
The user_agent attribute holds the Scrappy::Scraper::UserAgent object which is
used to set and manipulate the user-agent header of the scraper.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->user_agent;
.Ve
.PP
\fIworker\fR
.IX Subsection "worker"
.PP
The worker attribute holds the WWW::Mechanize object which is used to navigate
web pages and provide request and response header information.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->worker;
.Ve
.SH "METHODS"
.IX Header "METHODS"
.SS "back"
.IX Subsection "back"
The back method is the equivalent of hitting the \*(L"back\*(R" button in a
browser; it returns to the previous page (response) and returns that page's
\s-1URL.\s0 It will not backtrack beyond the first request.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->get(...);
\& ...
\& $scraper\->get(...);
\& ...
\& my $last_url = $scraper\->back;
.Ve
.SS "cookies"
.IX Subsection "cookies"
The cookies method returns an HTTP::Cookies object. Note! Cookies can be made
persistent by enabling session-support. Session-support is enabled by simply
specifying a file to be used.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->session\->write(\*(Aqsession.yml\*(Aq); # enable session support
\& $scraper\->get(...);
\& my $cookies = $scraper\->cookies;
.Ve
.SS "crawl"
.IX Subsection "crawl"
The crawl method is very useful when you want to crawl an entire website, or at
least part of it; it automates the tasks of creating a queue, fetching and
parsing \s-1HTML\s0 pages, and establishing simple flow-control. See the
\s-1SYNOPSIS\s0 for a simplified example; the following is a more complex
example.
.PP
.Vb 1
\& my $scrappy = Scrappy\->new;
\&
\& $scrappy\->crawl(\*(Aqhttp://search.cpan.org/recent\*(Aq,
\&     \*(Aq/recent\*(Aq => {
\&         \*(Aq#cpansearch li a\*(Aq => sub {
\&             my ($self, $item) = @_;
\&             # follow all recent modules from search.cpan.org
\&             $self\->queue\->add($item\->{href});
\&         }
\&     },
\&     \*(Aq/~:author/:name\-:version/\*(Aq => {
\&         \*(Aqbody\*(Aq => sub {
\&             my ($self, $item, $args) = @_;
\&
\&             my $reviews = $self
\&                 \->select(\*(Aq.box table tr\*(Aq)\->focus(3)\->select(\*(Aqtd.cell small a\*(Aq)
\&                 \->data\->[0]\->{text};
\&
\&             $reviews = $reviews =~ /\ed+ Reviews/ ?
\&                 $reviews : \*(Aq0 reviews\*(Aq;
\&
\&             print "found $args\->{name} version $args\->{version} ".
\&                 "[$reviews] by $args\->{author}\en";
\&         }
\&     }
\& );
.Ve
.SS "domain"
.IX Subsection "domain"
The domain method returns the domain host of the current page. Local pages,
e.g. file:///this/that/the_other, will return undef.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->get(\*(Aqhttp://www.google.com\*(Aq);
\& print $scraper\->domain; # prints www.google.com
.Ve
.SS "download"
.IX Subsection "download"
The download method is passed a \s-1URL,\s0 a download directory path and,
optionally, a file path; it follows the link and stores the response contents
in the specified file without leaving the current page. Basically, it downloads
the contents of the request (especially when the request pushes a file
download). If a file path is not specified, Scrappy will attempt to name the
file automatically, resorting to a random 6\-character string only if all else
fails, then returns to the originating page.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& my $requested_url = \*(Aq...\*(Aq;
\&
\& $scraper\->download($requested_url, \*(Aq/tmp\*(Aq);
\&
\& # supply your own file name
\& $scraper\->download($requested_url, \*(Aq/tmp\*(Aq, \*(Aqsomefile.txt\*(Aq);
.Ve
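.PP
Because download stores the response without leaving the current page, it
combines naturally with the select method (described below) for fetching every
matching link from a page. The following is a short sketch using only calls
documented on this page; the \s-1URL\s0 and the /tmp target directory are
placeholders.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->get(\*(Aqhttp://www.example.com/reports\*(Aq);
\&
\& # follow every link ending in .pdf and store it under /tmp,
\& # returning to the originating page after each download
\& foreach my $link (@{ $scraper\->select(\*(Aqa\*(Aq)\->data }) {
\&     next unless $link\->{href} && $link\->{href} =~ /\e.pdf$/i;
\&     $scraper\->download($link\->{href}, \*(Aq/tmp\*(Aq);
\& }
.Ve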
.SS "dumper"
.IX Subsection "dumper"
The dumper method is a convenience feature that passes the supplied object to
Data::Dumper, which in turn returns a stringified representation of that
object/data\-structure.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& my $requested_url = \*(Aq...\*(Aq;
\&
\& $scraper\->get($requested_url);
\&
\& my $data = $scraper\->select(\*(Aq//a[@href]\*(Aq)\->data;
\&
\& # print out the scraped data
\& print $scraper\->dumper($data);
.Ve
.SS "form"
.IX Subsection "form"
The form method is used to submit a form on the current page.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->form(fields => {
\&     username => \*(Aqmrmagoo\*(Aq,
\&     password => \*(Aqfoobarbaz\*(Aq
\& });
\&
\& # or more specifically, for pages with multiple forms
\&
\& $scraper\->form(form_name => \*(Aqlogin_form\*(Aq, fields => {
\&     username => \*(Aqmrmagoo\*(Aq,
\&     password => \*(Aqfoobarbaz\*(Aq
\& });
\&
\& $scraper\->form(form_number => 1, fields => {
\&     username => \*(Aqmrmagoo\*(Aq,
\&     password => \*(Aqfoobarbaz\*(Aq
\& });
.Ve
.SS "get"
.IX Subsection "get"
The get method takes a \s-1URL\s0 or \s-1URI\s0 object, fetches a web page and
returns the Scrappy object.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& if ($scraper\->get($new_url)\->page_loaded) {
\&     ...
\& }
\&
\& # $scraper\->content has the HTTP::Response object
.Ve
.SS "log"
.IX Subsection "log"
The log method logs an event with the event logger.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->debug(1); # unnecessary, on by default
\& $scraper\->logger\->verbose(1); # more detailed log
\&
\& $scraper\->log(\*(Aqerror\*(Aq, \*(AqSomething bad happened\*(Aq);
\&
\& ...
\&
\& $scraper\->log(\*(Aqinfo\*(Aq, \*(AqSomething happened\*(Aq);
\& $scraper\->log(\*(Aqwarn\*(Aq, \*(AqSomething strange happened\*(Aq);
\& $scraper\->log(\*(Aqcoolness\*(Aq, \*(AqSomething cool happened\*(Aq);
.Ve
.PP
Note! Event logs are always recorded but are never written to a file unless
explicitly told to do so using the following:
.PP
.Vb 1
\& $scraper\->logger\->write(\*(Aqlog.yml\*(Aq);
.Ve
.SS "page_content_type"
.IX Subsection "page_content_type"
The page_content_type method returns the content_type of the current page.
.PP
.Vb 3
\& my $scraper = Scrappy\->new;
\& $scraper\->get(\*(Aqhttp://www.google.com/\*(Aq);
\& print $scraper\->page_content_type; # prints text/html
.Ve
.SS "page_data"
.IX Subsection "page_data"
The page_data method returns the \s-1HTML\s0 content of the current page.
Additionally, when passed a string of \s-1HTML\s0 markup, this method updates
the content of the current page with that data and returns the modified
content.
.PP
.Vb 3
\& my $scraper = Scrappy\->new;
\& $scraper\->get(...);
\& my $html = $scraper\->page_data;
.Ve
.SS "page_ishtml"
.IX Subsection "page_ishtml"
The page_ishtml method returns true/false based on whether our content is
\s-1HTML,\s0 according to the \s-1HTTP\s0 headers.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->get($requested_url);
\& if ($scraper\->page_ishtml) {
\&     ...
\& }
.Ve
.SS "page_loaded"
.IX Subsection "page_loaded"
The page_loaded method returns true/false based on whether the last request was
successful.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->get($requested_url);
\& if ($scraper\->page_loaded) {
\&     ...
\& }
.Ve
.SS "page_match"
.IX Subsection "page_match"
The page_match method checks the passed-in \s-1URL\s0 (or the \s-1URL\s0 of the
current page if left empty) against the \s-1URL\s0 pattern (route) defined. If
the \s-1URL\s0 is a match, it will return the parameters of that match, much in
the same way a modern web application framework processes \s-1URL\s0 routes.
.PP
.Vb 1
\& my $url = \*(Aqhttp://somesite.com/tags/awesomeness\*(Aq;
\&
\& ...
\&
\& my $scraper = Scrappy\->new;
\&
\& # match against the current page
\& my $this = $scraper\->page_match(\*(Aq/tags/:tag\*(Aq);
\& if ($this) {
\&     print $this\->{\*(Aqtag\*(Aq};
\&     # ... prints awesomeness
\& }
\&
\& .. or ..
\&
\& # match against a passed url
\& my $this = $scraper\->page_match(\*(Aq/tags/:tag\*(Aq, $url, {
\&     host => \*(Aqsomesite.com\*(Aq
\& });
\&
\& if ($this) {
\&     print "This is the ", $this\->{tag}, " page";
\&     # ... prints this is the awesomeness page
\& }
.Ve
.SS "page_reload"
.IX Subsection "page_reload"
The page_reload method acts like the refresh button in a browser; it simply
repeats the current request.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->get(...);
\& ...
\& $scraper\->page_reload;
.Ve
.SS "page_status"
.IX Subsection "page_status"
The page_status method returns the 3\-digit \s-1HTTP\s0 status code of the
response.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->get(...);
\&
\& if ($scraper\->page_status == 200) {
\&     ...
\& }
.Ve
.SS "page_text"
.IX Subsection "page_text"
The page_text method returns a text representation of the last page, with all
\s-1HTML\s0 markup stripped.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->get(...);
\&
\& my $text = $scraper\->page_text;
.Ve
.SS "page_title"
.IX Subsection "page_title"
The page_title method returns the content of the title tag if the current page
is \s-1HTML,\s0 otherwise returns undef.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->get(\*(Aqhttp://www.google.com/\*(Aq);
\&
\& my $title = $scraper\->page_title;
\& print $title; # prints Google
.Ve
.SS "pause"
.IX Subsection "pause"
This method sets breaks between your requests in an attempt to simulate human
interaction.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->pause(20);
\&
\& $scraper\->get($request_1);
\& $scraper\->get($request_2);
\& $scraper\->get($request_3);
.Ve
.PP
Given the above example, there will be a 20\-second break between each request
made (get, post, request, etc.). You can also specify a range for the pause
method to select from at random ...
.PP
.Vb 1
\& $scraper\->pause(5,20);
\&
\& $scraper\->get($request_1);
\& $scraper\->get($request_2);
\&
\& # reset/turn it off
\& $scraper\->pause(0);
\&
\& print "I slept for ", ($scraper\->pause), " seconds";
.Ve
.PP
Note! The download method is exempt from any automatic pausing.
.SS "plugin"
.IX Subsection "plugin"
The plugin method allows you to load one or more plugins. Using the appropriate
case is recommended but not necessary. See Scrappy::Plugin for more
information.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->plugin(\*(Aqfoo_bar\*(Aq);  # will load Scrappy::Plugin::FooBar
\& $scraper\->plugin(\*(Aqfoo\-bar\*(Aq);  # will load Scrappy::Plugin::Foo::Bar
\& $scraper\->plugin(\*(AqFoo::Bar\*(Aq); # will load Scrappy::Plugin::Foo::Bar
\&
\& # more practically
\& $scraper\->plugin(\*(Aqwhois\*(Aq, \*(Aqspammer_check\*(Aq);
\&
\& ... somewhere in code
\&
\& my $var = $scraper\->plugin_method();
\&
\& # example using core plugin Scrappy::Plugin::RandomProxy
\&
\& my $s = Scrappy\->new;
\&
\& $s\->plugin(\*(Aqrandom_proxy\*(Aq);
\& $s\->use_random_proxy;
\&
\& $s\->get(...);
.Ve
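.PP
The following is a minimal sketch of what a custom plugin might look like. It
assumes that a plugin is simply a Moose role in the Scrappy::Plugin namespace
whose methods become available on the scraper once loaded; the package and
method names below are hypothetical, and Scrappy::Plugin remains the
authoritative reference for the plugin interface.
.PP
.Vb 1
\& package Scrappy::Plugin::LinkCounter;
\&
\& # assumption: plugins are Moose roles applied to the scraper instance
\& use Moose::Role;
\&
\& # count the anchor tags on the current page using the documented
\& # select method
\& sub link_count {
\&     my $self = shift;
\&     return scalar @{ $self\->select(\*(Aqa\*(Aq)\->data };
\& }
\&
\& 1;
.Ve
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\& $scraper\->get(\*(Aqhttp://search.cpan.org/recent\*(Aq);
\&
\& $scraper\->plugin(\*(Aqlink_counter\*(Aq); # loads Scrappy::Plugin::LinkCounter
\& print $scraper\->link_count, "\en";
.Ve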
.SS "post"
.IX Subsection "post"
The post method takes a \s-1URL,\s0 a hashref of key/value pairs, and optionally
an array of key/value pairs, and posts that data to the specified \s-1URL,\s0
then returns an HTTP::Response object.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->post($requested_url, {
\&     input_a => \*(Aqvalue_a\*(Aq,
\&     input_b => \*(Aqvalue_b\*(Aq
\& });
\&
\& # w/additional headers
\& my %headers = (\*(AqContent\-Type\*(Aq => \*(Aqmultipart/form\-data\*(Aq);
\& $scraper\->post($requested_url, {
\&     input_a => \*(Aqvalue_a\*(Aq,
\&     input_b => \*(Aqvalue_b\*(Aq
\& }, %headers);
.Ve
.PP
Note! The most common post headers for content-type are
application/x\-www\-form\-urlencoded and multipart/form\-data.
.SS "proxy"
.IX Subsection "proxy"
The proxy method will set the proxy for the next request to be tunneled
through.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->proxy(\*(Aqhttp\*(Aq, \*(Aqhttp://proxy1.example.com:8000/\*(Aq);
\& $scraper\->get($requested_url);
\&
\& $scraper\->proxy(\*(Aqhttp\*(Aq, \*(Aqftp\*(Aq, \*(Aqhttp://proxy2.example.com:8000/\*(Aq);
\& $scraper\->get($requested_url);
\&
\& # best practice when using proxies
\&
\& use Try::Tiny;
\&
\& my $proxie = Scrappy\->new;
\&
\& $proxie\->proxy(\*(Aqhttp\*(Aq, \*(Aqhttp://proxy.example.com:8000/\*(Aq);
\&
\& try {
\&     $proxie\->get($requested_url);
\& } catch {
\&     die "Proxy failed\en";
\& };
.Ve
.PP
Note! When using a proxy to perform requests, be aware that if they fail, your
program will die unless you wrap your code in an eval statement or use a
try/catch mechanism. In the example above we use Try::Tiny to trap any errors
that might occur when using a proxy.
.SS "request_denied"
.IX Subsection "request_denied"
The request_denied method is a simple shortcut to determine if the page you
requested got loaded or redirected. This method is very useful on systems that
require authentication and redirect if not authorized. This method returns a
boolean: 1 if the current page doesn't match the requested page.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->get($url_to_dashboard);
\&
\& if ($scraper\->request_denied) {
\&     # do login, again
\& }
\& else {
\&     # resume ...
\& }
.Ve
.SS "response"
.IX Subsection "response"
The response method returns the HTTP::Response object of the current page.
.PP
.Vb 3
\& my $scraper = Scrappy\->new;
\& $scraper\->get(...);
\& my $res = $scraper\->response;
.Ve
.SS "select"
.IX Subsection "select"
The select method takes \s-1XPATH\s0 or \s-1CSS\s0 selectors and returns a
Scrappy::Scraper::Parser object which contains the matching elements.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& # return a list of links
\& my $list = $scraper\->select(\*(Aq#profile li a\*(Aq)\->data; # see Scrappy::Scraper::Parser
\&
\& foreach my $link (@{$list}) {
\&     print $link\->{href}, "\en";
\& }
\&
\& # Zoom in on specific chunks of html code using the following ...
\& my $list = $scraper
\&     \->select(\*(Aq#container table tr\*(Aq) # select all rows
\&     \->focus(4)                        # focus on the 5th row
\&     \->select(\*(Aqdiv div\*(Aq)\->data;
\&
\& # The code above selects the div > div inside of the 5th tr in #container table
\& # Access attributes html, text and other attributes as follows ...
\&
\& $element = $scraper\->select(\*(Aqtable\*(Aq)\->data\->[0];
\& $element\->{html};        # HTML representation of the table
\& $element\->{text};        # Table stripped of all HTML
\& $element\->{cellpadding}; # cellpadding
\& $element\->{height};      # ...
.Ve
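.PP
The Scrappy::Scraper::Parser object returned by select also provides the each
iterator used in the \s-1SYNOPSIS\s0 above, which calls the supplied routine
once for every matching element. A short sketch, roughly equivalent to looping
over the data arrayref:
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\& $scraper\->get(\*(Aqhttp://search.cpan.org/recent\*(Aq);
\&
\& # print the href attribute of every matching element
\& $scraper\->select(\*(Aq#cpansearch li a\*(Aq)\->each(sub {
\&     print shift\->{href}, "\en";
\& });
.Ve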
.SS "stash"
.IX Subsection "stash"
The stash method sets a stash (shared) variable or returns a reference to the
entire stash object.
.PP
.Vb 2
\& my $scraper = Scrappy\->new;
\& $scraper\->stash(age => 31);
\&
\& print \*(Aqstash access works\*(Aq
\&     if $scraper\->stash(\*(Aqage\*(Aq) == $scraper\->stash\->{age};
\&
\& my @array = (1..20);
\& $scraper\->stash(integers => [@array]);
.Ve
.SS "store"
.IX Subsection "store"
The store method stores the contents of the current page into the specified
file. If the content-type does not begin with 'text', the content is saved as
binary data.
.PP
.Vb 1
\& my $scraper = Scrappy\->new;
\&
\& $scraper\->get($requested_url);
\& $scraper\->store(\*(Aq/tmp/foo.html\*(Aq);
.Ve
.SS "url"
.IX Subsection "url"
The url method returns the complete \s-1URL\s0 for the current page.
.PP
.Vb 3
\& my $scraper = Scrappy\->new;
\& $scraper\->get(\*(Aqhttp://www.google.com/\*(Aq);
\& print $scraper\->url; # prints http://www.google.com/
.Ve
.SH "AUTHOR"
.IX Header "AUTHOR"
Al Newkirk
.SH "COPYRIGHT AND LICENSE"
.IX Header "COPYRIGHT AND LICENSE"
This software is copyright (c) 2010 by awncorp.
.PP
This is free software; you can redistribute it and/or modify it under the same
terms as the Perl 5 programming language system itself.