.TH "URLEXTRACTOR" "1" "February 27, 2021" "Version 0.2.0" "User Commands"
.SH NAME
urlextractor \- information gathering and website reconnaissance
.SH SYNOPSIS
.B urlextractor
[\fIURL\fR]
.SH DESCRIPTION
.B urlextractor
gathers information about the specified URL and prints it to standard
output. It collects the following:
.IP \(bu 2
IP and hosting information, such as city and country (using FreegeoIP)
.IP \(bu 2
DNS servers (using dig)
.IP \(bu 2
ASN, network range and ISP name (using RISwhois)
.IP \(bu 2
Load balancer test
.IP \(bu 2
Whois for abuse mail (using Spamcop)
.IP \(bu 2
PAC (Proxy Auto Configuration) file
.IP \(bu 2
Compares hashes to diff code
.IP \(bu 2
robots.txt (recursively looking for hidden entries)
.IP \(bu 2
Source code (looking for passwords and users)
.IP \(bu 2
External links (frames from other websites)
.IP \(bu 2
Directory FUZZ (like Dirbuster and Wfuzz, using the Dirbuster directory list)
.IP \(bu 2
URLvoid API \- checks Google page rank, Alexa rank and possible blacklists
.IP \(bu 2
Provides useful links to other websites to correlate with the IP/ASN
.IP \(bu 2
Option to open ALL results in the browser at the end
.SH FILES
At runtime,
.B urlextractor
checks whether the directory \fB$HOME/.urlextractor\fR exists; if it
does not, the directory is created. This behaviour was added on Debian
systems to provide a better user experience.
.TP
.BR $HOME/.urlextractor/config
The configuration file used to customize default program settings. After
the directory \fB$HOME/.urlextractor\fR is created, a default
configuration file is copied from the package examples directory
\fB/usr/share/doc/urlextractor/examples/config\fR so that
.B urlextractor
works out of the box. For more information about the configuration, see
the example file.
.TP
.BR $HOME/.urlextractor/log.csv
Stores the scanned sites for future reference.
.SH AUTHOR
Eduardo Schultze (2016).
.SH NOTES
This manual page was written by Josue Ortega for the Debian project (and
may be used by others).
.SH LICENSE
The MIT License (MIT)
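The first-run setup described in the FILES section can be sketched in POSIX shell. The paths below come from this page; the script itself is an illustration of the described behaviour, not the program's actual code:

```shell
#!/bin/sh
# Sketch (an assumption, not urlextractor's real implementation) of the
# first-run setup described in FILES: create the per-user directory if
# missing, then seed it with the packaged example configuration.
CONF_DIR="${HOME}/.urlextractor"
CONF_FILE="${CONF_DIR}/config"
EXAMPLE_CONF="/usr/share/doc/urlextractor/examples/config"

# Create $HOME/.urlextractor when it does not exist.
if [ ! -d "$CONF_DIR" ]; then
    mkdir -p "$CONF_DIR"
fi

# Copy the default configuration only if none is present yet and the
# packaged example is installed.
if [ ! -f "$CONF_FILE" ] && [ -f "$EXAMPLE_CONF" ]; then
    cp "$EXAMPLE_CONF" "$CONF_FILE"
fi
```

Subsequent runs leave an existing directory and configuration untouched, so user customizations in \fB$HOME/.urlextractor/config\fR are preserved.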