.\" Automatically generated by Pod::Man 4.14 (Pod::Simple 3.42) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" Set up some character translations and predefined strings. \*(-- will .\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left .\" double quote, and \*(R" will give a right double quote. \*(C+ will .\" give a nicer C++. Capital omega is used to do unbreakable dashes and .\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, .\" nothing in troff, for use with C<>. .tr \(*W- .ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' .ie n \{\ . ds -- \(*W- . ds PI pi . if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch . if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch . ds L" "" . ds R" "" . ds C` "" . ds C' "" 'br\} .el\{\ . ds -- \|\(em\| . ds PI \(*p . ds L" `` . ds R" '' . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" ======================================================================== .\" .IX Title "HTML::WikiConverter 3pm" .TH HTML::WikiConverter 3pm "2022-06-14" "perl v5.34.0" "User Contributed Perl Documentation" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH "NAME" HTML::WikiConverter \- Convert HTML to wiki markup .SH "SYNOPSIS" .IX Header "SYNOPSIS" .Vb 3 \& use HTML::WikiConverter; \& my $wc = new HTML::WikiConverter( dialect => \*(AqMediaWiki\*(Aq ); \& print $wc\->html2wiki( html => \*(Aqtext\*(Aq ), "\en\en"; \& \& # A more complete example \& \& my $html = qq( \&

\&    <p><i>Italic</i>, <b>bold</b>, <span style="font-weight:bold">also bold</span>, etc.</p>
\&  );
\&
\&  my @dialects = HTML::WikiConverter\->available_dialects;
\&  foreach my $dialect ( @dialects ) {
\&    my $wc = new HTML::WikiConverter( dialect => $dialect );
\&    my $wiki = $wc\->html2wiki( html => $html );
\&    printf "The %s dialect gives:\en\en%s\en\en", $dialect, $wiki;
\&  }
.Ve
.SH "DESCRIPTION"
.IX Header "DESCRIPTION"
\&\f(CW\*(C`HTML::WikiConverter\*(C'\fR is an \s-1HTML\s0 to wiki converter. It can convert
\&\s-1HTML\s0 source into a variety of wiki markups, called wiki
\&\*(L"dialects\*(R". The following dialects are supported:
.PP
.Vb 10
\&  DokuWiki
\&  Kwiki
\&  MediaWiki
\&  MoinMoin
\&  Oddmuse
\&  PbWiki
\&  PhpWiki
\&  PmWiki
\&  SlipSlap
\&  TikiWiki
\&  UseMod
\&  WakkaWiki
\&  WikkaWiki
.Ve
.PP
Note that while dialects usually produce satisfactory wiki markup, not
all features of all dialects are supported. Consult individual
dialects' documentation for details of supported features. Suggestions
for improvements, especially in the form of patches, are very much
appreciated.
.PP
Since version 0.50, all dialects have been distributed separately from
HTML::WikiConverter. Please install the individual dialect packages as
needed.
.SH "METHODS"
.IX Header "METHODS"
.SS "new"
.IX Subsection "new"
.Vb 1
\&  my $wc = new HTML::WikiConverter( dialect => $dialect, %attrs );
.Ve
.PP
Returns a converter for the specified wiki dialect. Croaks if
\&\f(CW$dialect\fR is not provided or its dialect module is not installed on
your system. Additional attributes may be specified in \f(CW%attrs\fR; see
\&\*(L"\s-1ATTRIBUTES\*(R"\s0 for a complete list.
.SS "html2wiki"
.IX Subsection "html2wiki"
.Vb 4
\&  $wiki = $wc\->html2wiki( $html, %attrs );
\&  $wiki = $wc\->html2wiki( html => $html, %attrs );
\&  $wiki = $wc\->html2wiki( file => $file, %attrs );
\&  $wiki = $wc\->html2wiki( uri => $uri, %attrs );
.Ve
.PP
Converts \s-1HTML\s0 source to wiki markup for the current dialect. Accepts
either an \s-1HTML\s0 string \f(CW$html\fR, a file \f(CW$file\fR, or a \s-1URI\s0 \f(CW$uri\fR to
read from.
.PP
Attributes assigned in \f(CW%attrs\fR (see \*(L"\s-1ATTRIBUTES\*(R"\s0) will augment or
override previously assigned attributes for the duration of the
\&\f(CW\*(C`html2wiki()\*(C'\fR call.
.SS "elem_search_lineage"
.IX Subsection "elem_search_lineage"
.Vb 1
\&  my $ancestor = $wc\->elem_search_lineage( $node, \e%rules );
.Ve
.PP
Searches the lineage of \f(CW$node\fR and returns the first ancestor node
that has rules matching those specified in \f(CW%rules\fR, or \f(CW\*(C`undef\*(C'\fR if
no matching node is found.
.PP
For example, to find out whether \f(CW$node\fR has an ancestor with rules
matching \f(CW\*(C`{ block => 1 }\*(C'\fR, one could use:
.PP
.Vb 3
\&  if( $wc\->elem_search_lineage( $node, { block => 1 } ) ) {
\&    # do something
\&  }
.Ve
.SS "given_html"
.IX Subsection "given_html"
.Vb 1
\&  my $html = $wc\->given_html;
.Ve
.PP
Returns the \s-1HTML\s0 passed to or fetched (i.e., from a file or \s-1URI\s0) by
the last \f(CW\*(C`html2wiki()\*(C'\fR method call. Useful for debugging.
.SS "parsed_html"
.IX Subsection "parsed_html"
.Vb 1
\&  my $parsed_html = $wc\->parsed_html;
.Ve
.PP
Returns a string containing the post-processed \s-1HTML\s0 from the last
\&\f(CW\*(C`html2wiki\*(C'\fR call. Post-processing includes parsing by
HTML::TreeBuilder, \s-1CSS\s0 normalization by HTML::WikiConverter::Normalizer,
and calls to the \f(CW\*(C`preprocess\*(C'\fR and \f(CW\*(C`preprocess_tree\*(C'\fR dialect methods.
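.PP
For example, when debugging a conversion it can help to compare the
input \s-1HTML,\s0 the post-processed tree, and the final markup. The
following is a minimal sketch; the exact post-processed \s-1HTML\s0 and wiki
output will vary with the dialect module and the HTML::TreeBuilder
version installed:
.PP
.Vb 8
\&  use HTML::WikiConverter;
\&
\&  my $wc = new HTML::WikiConverter( dialect => \*(AqMediaWiki\*(Aq );
\&  my $wiki = $wc\->html2wiki( html => \*(Aq<b>bold</b> and <i>italic</i>\*(Aq );
\&
\&  print $wc\->given_html, "\en";   # the HTML exactly as passed in
\&  print $wc\->parsed_html, "\en";  # the HTML after parsing and CSS normalization
\&  print $wiki, "\en";             # the resulting wiki markup
.Ve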
.SS "available_dialects"
.IX Subsection "available_dialects"
.Vb 1
\&  my @dialects = HTML::WikiConverter\->available_dialects;
.Ve
.PP
Returns a list of all available dialects by searching the directories
in \f(CW@INC\fR for \f(CW\*(C`HTML::WikiConverter::\*(C'\fR modules.
.SS "rules_for_tag"
.IX Subsection "rules_for_tag"
.Vb 1
\&  my $rules = $wc\->rules_for_tag( $tag );
.Ve
.PP
Returns the rules that will be used for converting elements of the
given tag. Follows \f(CW\*(C`alias\*(C'\fR references. Note that the rules used for
a particular tag may depend on the current set of attributes being
used.
.SH "ATTRIBUTES"
.IX Header "ATTRIBUTES"
You may configure \f(CW\*(C`HTML::WikiConverter\*(C'\fR using a number of
attributes. These may be passed as arguments to the \f(CW\*(C`new\*(C'\fR
constructor, or can be called as object methods on an H::WC object.
.PP
Some dialects allow other attributes in addition to those below, and
may override the attributes' default values. Consult the dialect's
documentation for details.
.SS "base_uri"
.IX Subsection "base_uri"
\&\s-1URI\s0 to use for converting relative URIs to absolute ones. This
effectively ensures that the \f(CW\*(C`src\*(C'\fR and \f(CW\*(C`href\*(C'\fR attributes of image
and anchor tags, respectively, are absolute before converting the
\&\s-1HTML\s0 to wiki markup, which is necessary for wiki dialects that handle
internal and external links separately. Relative URIs are only
converted to absolute ones if the \f(CW\*(C`base_uri\*(C'\fR argument is
present. Defaults to \f(CW\*(C`undef\*(C'\fR.
.SS "dialect"
.IX Subsection "dialect"
(Required) Dialect to use for converting \s-1HTML\s0 into wiki markup. See
the \*(L"\s-1DESCRIPTION\*(R"\s0 section above for a list of dialects. \f(CW\*(C`new()\*(C'\fR
will fail if the dialect given is not installed on your system. Use
\&\f(CW\*(C`available_dialects()\*(C'\fR to list installed dialects.
.SS "encoding"
.IX Subsection "encoding"
Specifies the encoding used by the \s-1HTML\s0 to be converted. Also
determines the encoding of the wiki markup returned by the
\&\f(CW\*(C`html2wiki\*(C'\fR method. Defaults to \f(CW"utf8"\fR.
.SS "escape_entities"
.IX Subsection "escape_entities"
Passing \f(CW\*(C`escape_entities\*(C'\fR a true value uses HTML::Entities to
encode potentially unsafe '<', '>', and '&' characters. Defaults to
true.
.SS "p_strict"
.IX Subsection "p_strict"
Boolean indicating whether HTML::TreeBuilder will use strict handling
of paragraph tags when parsing \s-1HTML\s0 input. (This corresponds to the
\&\f(CW\*(C`p_strict\*(C'\fR method in the HTML::TreeBuilder module.) Enabled by
default.
.SS "passthrough_naked_tags"
.IX Subsection "passthrough_naked_tags"
Boolean indicating whether tags with no attributes (\*(L"naked\*(R" tags)
should be removed and replaced with their content. By default, this
only applies to non-semantic tags such as <span>, <div>, etc., but
does not apply to semantic tags such as <b>, <em>
, etc. To override this behavior and specify the tags that should be
considered for passthrough, provide this attribute with a reference to
an array of tag names. Defaults to false, but you'll probably want to
enable it.
.SS "preprocess"
.IX Subsection "preprocess"
Code reference that gets invoked after \s-1HTML\s0 is parsed but before it is
converted into wiki markup. The callback is passed two arguments: the
\&\f(CW\*(C`HTML::WikiConverter\*(C'\fR object and an HTML::Element pointing to the
root node of the \s-1HTML\s0 tree created by HTML::TreeBuilder.
.SS "slurp"
.IX Subsection "slurp"
Boolean that, if enabled, bypasses \f(CW\*(C`HTML::Parser\*(C'\fR's incremental
parsing of \s-1HTML\s0 files (thus \fIslurping\fR each file in all at once). If
File::Slurp is installed, its \f(CW\*(C`read_file()\*(C'\fR function will be used to
perform slurping; otherwise, a common Perl idiom will be used
instead. This option is only used if you call \f(CW\*(C`html2wiki()\*(C'\fR with the
\&\f(CW\*(C`file\*(C'\fR argument.
.SS "strip_empty_tags"
.IX Subsection "strip_empty_tags"
Strips elements containing no content (unless those elements
legitimately contain no content, as is the case for \f(CW\*(C`br\*(C'\fR and
\&\f(CW\*(C`img\*(C'\fR tags, for example). Defaults to false.
.SS "strip_tags"
.IX Subsection "strip_tags"
A reference to an array of tags to be removed from the \s-1HTML\s0 input
prior to conversion to wiki markup. Tag names are the same as those
used in HTML::Element. Defaults to \f(CW\*(C`[ \*(Aq~comment\*(Aq, \*(Aqhead\*(Aq,
\&\*(Aqscript\*(Aq, \*(Aqstyle\*(Aq ]\*(C'\fR.
.SS "user_agent"
.IX Subsection "user_agent"
Specifies the LWP::UserAgent object to be used when fetching the
\&\s-1URI\s0 passed to \f(CW\*(C`html2wiki()\*(C'\fR. If unspecified and \f(CW\*(C`html2wiki()\*(C'\fR is
passed a \s-1URI,\s0 a default user agent will be created.
.SS "wiki_uri"
.IX Subsection "wiki_uri"
Takes a \s-1URI,\s0 regular expression, or coderef (or a reference to an
array of elements of these types) used to determine which links are to
wiki pages: a link whose \f(CW\*(C`href\*(C'\fR parameter matches \f(CW\*(C`wiki_uri\*(C'\fR will
be treated as a link to a wiki page. In addition, \f(CW\*(C`wiki_uri\*(C'\fR will be
used to extract the title of the wiki page. The way this is done
depends on whether \f(CW\*(C`wiki_uri\*(C'\fR has been set to a string, a regexp, or
a coderef. The default is \f(CW\*(C`undef\*(C'\fR, meaning that all links will be
treated as external links by default.
.PP
If \f(CW\*(C`wiki_uri\*(C'\fR is a string, it is interpreted as a \s-1URI\s0 template, and
it will be assumed that URIs to wiki pages are created by joining
\&\f(CW\*(C`wiki_uri\*(C'\fR with the wiki page title. For example, the English
Wikipedia might use \f(CW"http://en.wikipedia.org/wiki/"\fR as the value of
\&\f(CW\*(C`wiki_uri\*(C'\fR. Ward's wiki might use \f(CW"http://c2.com/cgi/wiki?"\fR. These
examples use an absolute \f(CW\*(C`wiki_uri\*(C'\fR, but a relative \s-1URI\s0 can be used
as well; an absolute \s-1URI\s0 will be created based on the value of
\&\f(CW\*(C`base_uri\*(C'\fR. For example, the Wikipedia example above can be
rewritten using a \f(CW\*(C`base_uri\*(C'\fR of \f(CW"http://en.wikipedia.org"\fR and a
\&\f(CW\*(C`wiki_uri\*(C'\fR of \f(CW"/wiki/"\fR.
.PP
\&\f(CW\*(C`wiki_uri\*(C'\fR can also be a regexp that matches URIs to wiki pages and
extracts the page title from them.
For example, the English Wikipedia might use
\&\f(CW\*(C`qr~http://en\e.wikipedia\e.org/w/index\e.php\e?title\e=([^&]+)~\*(C'\fR.
.PP
\&\f(CW\*(C`wiki_uri\*(C'\fR can also be a coderef that takes the current
\&\f(CW\*(C`HTML::WikiConverter\*(C'\fR object and a \s-1URI\s0 object. It should return the
title of the wiki page extracted from the \s-1URI,\s0 or \f(CW\*(C`undef\*(C'\fR if the
\&\s-1URI\s0 doesn't represent a link to a wiki page.
.PP
As mentioned above, the \f(CW\*(C`wiki_uri\*(C'\fR attribute can either take a
single URI/regexp/coderef element or it may be assigned a reference to
an array of any number of these elements. This is useful for wikis
that have different ways of creating links to wiki pages. For example,
the English Wikipedia might use:
.PP
.Vb 7
\&  my $wc = new HTML::WikiConverter(
\&    dialect  => \*(AqMediaWiki\*(Aq,
\&    wiki_uri => [
\&      \*(Aqhttp://en.wikipedia.org/wiki/\*(Aq,
\&      sub { pop\->query_param(\*(Aqtitle\*(Aq) }  # requires URI::QueryParam
\&    ]
\&  );
.Ve
.SS "wrap_in_html"
.IX Subsection "wrap_in_html"
Helps HTML::TreeBuilder parse \s-1HTML\s0 fragments by wrapping \s-1HTML\s0 in
\&\f(CW\*(C`<html>\*(C'\fR and \f(CW\*(C`</html>\*(C'\fR before passing it through
\&\f(CW\*(C`html2wiki\*(C'\fR. Boolean, enabled by default.
.SH "ADDING A DIALECT"
.IX Header "ADDING A DIALECT"
Consult HTML::WikiConverter::Dialects for documentation on how to
write your own dialect module for \f(CW\*(C`HTML::WikiConverter\*(C'\fR. Or if
you're not up to the task, drop me an email and I'll have a go at it
when I get a spare moment.
.SH "SEE ALSO"
.IX Header "SEE ALSO"
HTML::Tree, Convert::Wiki
.SH "AUTHOR"
.IX Header "AUTHOR"
David J. Iberri
.SH "BUGS"
.IX Header "BUGS"
Please report any bugs or feature requests to
\&\f(CW\*(C`bug\-html\-wikiconverter at rt.cpan.org\*(C'\fR, or through the rt.cpan.org
web interface. I will be notified, and then you'll automatically be
notified of progress on your bug as I make changes.
.SH "SUPPORT"
.IX Header "SUPPORT"
You can find documentation for this module with the perldoc command.
.PP
.Vb 1
\&  perldoc HTML::WikiConverter
.Ve
.PP
You can also look for information at:
.IP "\(bu" 4
AnnoCPAN: Annotated \s-1CPAN\s0 documentation
.IP "\(bu" 4
\&\s-1CPAN\s0 Ratings
.IP "\(bu" 4
\&\s-1RT: CPAN\s0's request tracker
.IP "\(bu" 4
Search \s-1CPAN\s0
.SH "ACKNOWLEDGEMENTS"
.IX Header "ACKNOWLEDGEMENTS"
Thanks to Tatsuhiko Miyagawa for suggesting Bundle::HTMLWikiConverter
as well as providing code for the \f(CW\*(C`available_dialects()\*(C'\fR class
method.
.PP
My thanks also go to Martin Kudlvasr for catching (and fixing!) a bug
in the logic of how \s-1HTML\s0 files were processed.
.PP
Big thanks to Dave Schaefer for the PbWiki dialect and for the idea
behind the new \f(CW\*(C`attributes()\*(C'\fR implementation.
.SH "COPYRIGHT & LICENSE"
.IX Header "COPYRIGHT & LICENSE"
Copyright (c) David J. Iberri, all rights reserved.
.PP
This program is free software; you can redistribute it and/or modify
it under the same terms as Perl itself.