
Command line user interface


There are a lot of options, customizations and tweaks you can use, but don't let yourself be overwhelmed. This guide will walk you through each and every one of them and teach you how to use them in order to make your scans as efficient as possible.

If you intend to scan big and complex sites it's best that you read through this guide and evaluate all available options.

Quickstart

Help

In order to see everything Arachni has to offer execute:

arachni -h

Control screen

To see a control screen via which you can inspect an issue summary and perform actions like pausing/resuming, aborting, suspending etc., press Enter while the scan is running.

NOTE: This functionality is not available on MS Windows.

Examples

You can simply run Arachni like so:

arachni http://example.com

which will load all checks, the plugins under /plugins/defaults and audit all forms, links and cookies.

In the following example, all checks will be run against http://example.com, auditing links/forms/cookies and following subdomains while also printing verbose messages.

The results of the scan will be saved in the file example.com.afr.

arachni --output-verbose --scope-include-subdomains http://example.com --report-save-path=example.com.afr

The Arachni Framework Report (.afr) file can later be used to create reports in several formats, like so:

arachni_reporter example.com.afr --reporter=html:outfile=my_report.html.zip

To see all available reporter components run:

arachni_reporter --reporters-list

You can make check loading easier by using wildcards (*) and exclusions (-).

To load all xss checks using a wildcard:

arachni http://example.net --checks=xss*

To load all active checks using a wildcard:

arachni http://example.net --checks=active/*

To exclude only the csrf check:

arachni http://example.net --checks=*,-csrf

Or you can mix and match; to run everything but the xss checks:

arachni http://example.net --checks=*,-xss*

More resources

For more resources you can consult the articles in the knowledge base.

Command reference

-h, --help

Command Line Interface help output

--version

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Outputs the Arachni banner and version information.

--authorized-by

Expects: string

Default: disabled

Multiple invocations?: no

The string passed to this option will be used as the value for the From HTTP request header field. The option value should be the e-mail address of the person who authorized the scan.

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Disables the keyboard listener for the scan status screen, in order to allow the system to be run in the background using &.

--output-verbose

Expects: <n/a>

Default: disabled

Multiple invocations?: no

When verbose messages are enabled, Arachni will give you detailed information about what's going on during the whole process.

Let's give this a try:

arachni --audit-forms --checks=xss http://testfire.net/ --scope-page-limit=1

This will load the XSS checks and audit all the forms in http://testfire.net/.

Verbose mode disabled

Observe that there's no --output-verbose flag in the previous run.

Don't worry about the rest of the parameters right now.

Quick note:

Arachni's output messages are classified into several categories, each of them prefixed with a different colored symbol:

  • [*] are status messages.
  • [~] are informational messages.
  • [+] are success messages.
  • [v] are verbose messages.
  • [!] are debug messages.
  • [-] are error messages.

I won't bother with coloring during the examples.

Arachni - Web Application Security Scanner Framework v1.0
   Author: Tasos "Zapotek" Laskos <[email protected]>

           (With the support of the community and the Arachni Team.)

   Website:       http://arachni-scanner.com
   Documentation: http://arachni-scanner.com/wiki


 [*] Initializing...
 [*] Waiting for plugins to settle...
 [*] BrowserCluster: Initializing 6 browsers...
 [*] BrowserCluster: Initialization completed with 6 browsers in the pool.

 [*] [HTTP: 200] http://testfire.net/
 [~] Identified as: windows, iis, asp, aspx
 [~] Analysis resulted in 0 usable paths.
 [~] DOM depth: 0 (Limit: 10)
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [*] XSS: Submitting form with original values for txtSearch at 'http://testfire.net/search.aspx'.
 [*] XSS: Submitting form with sample values for txtSearch at 'http://testfire.net/search.aspx'.
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [*] Harvesting HTTP responses...
 [~] Depending on server responsiveness and network conditions this may take a while.
 [*] XSS: Analyzing response #2...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [*] XSS: Analyzing response #3...
 [*] XSS: Analyzing response #4...
 [*] XSS: Analyzing response #5...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [*] XSS: Analyzing response #6...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx

Verbose mode enabled

Observe the extra information in this run.

[v] messages are verbose messages.

$ arachni --audit-forms --checks=xss http://testfire.net/ --scope-page-limit=1 --output-verbose
Arachni - Web Application Security Scanner Framework v1.0
   Author: Tasos "Zapotek" Laskos <[email protected]>

           (With the support of the community and the Arachni Team.)

   Website:       http://arachni-scanner.com
   Documentation: http://arachni-scanner.com/wiki


 [*] Initializing...
 [*] Waiting for plugins to settle...
 [*] BrowserCluster: Initializing 6 browsers...
 [*] BrowserCluster: Initialization completed with 6 browsers in the pool.

 [*] [HTTP: 200] http://testfire.net/
 [~] Identified as: windows, iis, asp, aspx
 [~] Analysis resulted in 0 usable paths.
 [~] DOM depth: 0 (Limit: 10)
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [v] XSS: --> With: "<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>" -> "arachni_text<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>"
 [*] XSS: Submitting form with original values for txtSearch at 'http://testfire.net/search.aspx'.
 [v] XSS: --> With: nil -> ""
 [*] XSS: Submitting form with sample values for txtSearch at 'http://testfire.net/search.aspx'.
 [v] XSS: --> With: nil -> ""
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [v] XSS: --> With: "()\"&%1'-;<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>'" -> "arachni_text()\"&%1'-;<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>'"
 [*] XSS: Auditing form input 'txtSearch' pointing to: 'http://testfire.net/search.aspx'
 [v] XSS: --> With: "--><some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/><!--" -> "arachni_text--><some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/><!--"
 [*] Harvesting HTTP responses...
 [~] Depending on server responsiveness and network conditions this may take a while.
 [*] XSS: Analyzing response #2...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [v] XSS: Injected:  "arachni_text<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>"
 [v] XSS: Proof:     <some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>
 [v] XSS: Request:
GET /search.aspx?txtSearch=arachni_text%3Csome_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714%2F%3E HTTP/1.1
Host: testfire.net
Accept-Encoding: gzip, deflate
User-Agent: Arachni/v1.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: ASP.NET_SessionId=e4h4wy45jmb5vkrg0wl1rj45;amSessionId=15420499882


 [*] XSS: Analyzing response #3...
 [*] XSS: Analyzing response #4...
 [*] XSS: Analyzing response #6...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [v] XSS: Injected:  "arachni_text--><some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/><!--"
 [v] XSS: Proof:     <some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>
 [v] XSS: Request:
GET /search.aspx?txtSearch=arachni_text--%3E%3Csome_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714%2F%3E%3C%21-- HTTP/1.1
Host: testfire.net
Accept-Encoding: gzip, deflate
User-Agent: Arachni/v1.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: ASP.NET_SessionId=e4h4wy45jmb5vkrg0wl1rj45;amSessionId=15420499882


 [*] XSS: Analyzing response #5...
 [~] XSS: Response is tainted, looking for proof of vulnerability.
 [+] XSS: In form input 'txtSearch' with action http://testfire.net/search.aspx
 [v] XSS: Injected:  "arachni_text()\"&%1'-;<some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>'"
 [v] XSS: Proof:     <some_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714/>
 [v] XSS: Request:
GET /search.aspx?txtSearch=arachni_text%28%29%22%26%251%27-%3B%3Csome_dangerous_input_b2816f222dd9fce0ce8f0cda12aaf714%2F%3E%27 HTTP/1.1
Host: testfire.net
Accept-Encoding: gzip, deflate
User-Agent: Arachni/v1.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Cookie: ASP.NET_SessionId=e4h4wy45jmb5vkrg0wl1rj45;amSessionId=15420499882

--output-debug

Expects: integer

Default: 1

Multiple invocations?: no

When this flag is enabled, the system will output a lot of messages detailing what's happening internally. The level of detail can be specified as an integer between 1 and 4.

If you don't want to be flooded by annoying and obscure messages, you can pipe debugging output to a separate file when running Arachni using:

arachni http://example.com --output-debug 2> debug.log

--output-only-positives

Expects: <n/a>

Default: disabled

Multiple invocations?: no

This will suppress all messages except for the ones denoting success -- usually regarding the discovery of some issue.

pattern refers to a valid Ruby regular expression, written without the enclosing / delimiters; patterns are therefore case-sensitive and single-line.

Examples:

  • exclude-me: Excludes any string that includes the exclude-me substring.
  • exclude.*me: Excludes any string that includes exclude, followed by any content, and then me.
  • \/gallery\/winter\/: Excludes any string that includes the /gallery/winter/ substring -- slashes need to be escaped.

--scope-include-pattern

Expects: pattern

Default: disabled

Multiple invocations?: yes

Restricts the scope of the scan to resources whose URL matches the pattern.
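
For example, to restrict a scan to a hypothetical blog section of a site (the /blog/ path is only an illustration, with the slashes escaped as described above):

arachni http://example.com --scope-include-pattern='\/blog\/'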

--scope-include-subdomains

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Allow the system to include subdomains in the scan.

--scope-exclude-pattern

Expects: pattern

Default: disabled

Multiple invocations?: yes

Excludes resources whose URL matches the pattern.

--scope-exclude-content-pattern

Expects: pattern

Default: disabled

Multiple invocations?: yes

Excludes pages whose content matches the pattern.

--scope-exclude-binaries

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Excludes pages with binary content.

Note: Binary content can confuse passive checks that perform pattern matching.

--scope-redundant-path-pattern

Expects: pattern:integer

Default: disabled

Multiple invocations?: yes

This option expects a pattern and a counter, like so: --scope-redundant-path-pattern='calendar.php:3'

This will cause URLs that contain calendar.php to be crawled only 3 times.

This option is useful when scanning websites that have a lot of redundant pages like a photo gallery or a dynamically generated calendar.

--scope-auto-redundant

Expects: integer

Default: disabled (with a value of 10 if none has been specified)

Multiple invocations?: no

This option limits how many resources with identical URL query parameter names should be included in the scan.

This can prevent infinite loops caused by pages like photo galleries or catalogues.

With --scope-auto-redundant=2 and given the following list of URLs:

http://example.com/?stuff=1
http://example.com/?stuff=2
http://example.com/?stuff=other-stuff
http://example.com/?stuff=blah
http://example.com/?stuff=blah&stuff2=1
http://example.com/?stuff=blah&stuff2=2
http://example.com/?stuff=blah2&stuff2=bloo
http://example.com/path.php?stuff=blah&stuff2=1

Only the following will be included:

http://example.com/?stuff=1
http://example.com/?stuff=2
http://example.com/?stuff=blah&stuff2=1
http://example.com/?stuff=blah&stuff2=2
http://example.com/path.php?stuff=blah&stuff2=1

--scope-directory-depth-limit

Expects: integer

Default: infinite

Multiple invocations?: no

This option limits how deep into the site structure the scan should go.

--scope-page-limit

Expects: integer

Default: infinite

Multiple invocations?: no

This option limits how many pages should be included in the scan.

--scope-extend-paths

Expects: filepath

Default: disabled

Multiple invocations?: yes

Allows you to extend the scope of the scan by seeding the system with the paths contained within the given file.

Note: The file must contain one path per line.
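
For example, assuming a hypothetical extra_paths.txt file containing:

/hidden/admin.php
/old/backup/

you could seed the crawler with those paths like so:

arachni http://example.com --scope-extend-paths=extra_paths.txt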

--scope-restrict-paths

Expects: filepath

Default: disabled

Multiple invocations?: yes

Uses the paths contained within the given file instead of performing a crawl.

Note: The file must contain one path per line.

--scope-url-rewrite

Expects: pattern:substitution

Default: disabled

Multiple invocations?: yes

This option expects a pattern and a substitution, like so: --scope-url-rewrite='articles/[\w-]+/(\d+):articles.php?id=\1'

The above will rewrite the URL http://example.com/articles/some-stuff/23 as http://example.com/articles.php?id=23.

--scope-dom-depth-limit

Expects: integer

Default: 5

Multiple invocations?: no

This option limits how deep into each page's DOM structure the scan should go.

Note: DOM levels are counted as stacked interactions with the page's interface.

--scope-dom-event-limit

Expects: integer

Default: infinite

Multiple invocations?: no

This option limits the amount of events to be triggered for each page DOM snapshot.

--scope-https-only

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Forces the system to only follow HTTPS URLs.

Note: The target URL must be an HTTPS one as well.

--audit-links

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Enable auditing of links.

--audit-forms

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Enable auditing of forms.

--audit-cookies

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Enable auditing of cookies.
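
These element switches are usually combined; for example, to audit links, forms and cookies in one scan:

arachni http://example.com --audit-links --audit-forms --audit-cookies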

--audit-cookies-extensively

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled the system will submit all links and forms of the page along with the cookie permutations.

Warning: Will severely increase the scan-time.

--audit-headers

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Audit HTTP request headers.

Note: Header audits use brute force. Almost all valid HTTP request headers will be audited even if there's no indication that the web app uses them.

Warning: Enabling this option will result in increased requests, maybe by an order of magnitude.

--audit-link-template

Expects: pattern

Default: disabled

Multiple invocations?: yes

This option allows you to extract and audit inputs from generic paths based on a specified template, in the form of a Ruby regular expression using named groups.

To extract the input1 and input2 inputs from: http://example.com/input1/value1/input2/value2

Use: input1/(?<input1>\w+)/input2/(?<input2>\w+)
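
As a full command line, using the same illustrative template:

arachni http://example.com --audit-link-template='input1/(?<input1>\w+)/input2/(?<input2>\w+)'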

--audit-jsons

Expects: <n/a>

Default: enabled

Multiple invocations?: no

Enable auditing of JSON inputs extracted from browser or proxy requests.

--audit-xmls

Expects: <n/a>

Default: enabled

Multiple invocations?: no

Enable auditing of XML inputs extracted from browser or proxy requests.

--audit-ui-inputs

Expects: <n/a>

Default: enabled

Multiple invocations?: no

Enable auditing of orphan user interface inputs (like <input> elements not belonging to any form) which submit their data via DOM event callbacks.

--audit-ui-forms

Expects: <n/a>

Default: enabled

Multiple invocations?: no

Enable auditing of input and button groups which don't belong to any form, but are instead associated via JavaScript code and submitted via DOM event callbacks.

--audit-parameter-names

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, the system will inject payloads into parameter names instead of just values.

--audit-with-extra-parameter

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, the system will add an extra parameter to all vectors and audit it as well.

--audit-with-both-methods

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, the system will submit all elements using both GET and POST HTTP request methods.

Warning: Will severely increase the scan-time.

--audit-exclude-vector

Expects: pattern

Default: disabled

Multiple invocations?: yes

Don't audit input vectors whose name matches the pattern.

--audit-include-vector

Expects: pattern

Default: disabled

Multiple invocations?: yes

Only audit input vectors whose name matches the pattern.
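
For example, to audit only vectors whose names start with a hypothetical user_ prefix while skipping anything that looks like an anti-CSRF token (both patterns are only illustrative):

arachni http://example.com --audit-include-vector='^user_' --audit-exclude-vector=token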

--http-user-agent

Expects: string

Default: "Arachni/<version>"

Multiple invocations?: no

Specify a value for the User-Agent request header field.

--http-request-concurrency

Expects: integer

Default: 20

Multiple invocations?: no

Sets the maximum amount of requests to be active at any given time; this usually directly translates to the amount of open connections.

Note: If your scan seems unresponsive, try lowering the limit to ease the server's burden.

Warning: Given enough bandwidth and a high enough concurrency setting, the scan could cause a DoS. Be careful not to set this option too high; don't kill your server.

--http-request-timeout

Expects: integer (milliseconds)

Default: 10000

Multiple invocations?: no

Limit how long the client should wait for a response from the server.
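
For a slow or fragile server you might lower the concurrency and raise the timeout; the values below are only illustrative:

arachni http://example.com --http-request-concurrency=10 --http-request-timeout=30000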

--http-request-redirect-limit

Expects: integer

Default: 5

Multiple invocations?: no

Limits the amount of redirects the client should follow for each request.

--http-request-queue-size

Expects: integer

Default: 100

Multiple invocations?: no

Maximum amount of requests to keep in the client queue.

Note: More means better scheduling and better performance, less means less RAM consumption.

--http-request-header

Expects: string

Default: disabled

Multiple invocations?: yes

Allows you to specify custom request headers in the form of key-value pairs.

--http-request-header='field_name=field value'

--http-response-max-size

Expects: integer

Default: 500000

Multiple invocations?: no

Limits the size of response bodies the client accepts. Essentially, the client will not download bodies of responses which have a Content-Length larger than the specified value.

--http-cookie-jar

Expects: filepath

Default: disabled

Multiple invocations?: no

Arachni allows you to pass your own cookies in the form of a Netscape cookie-jar file. If you want to audit restricted parts of a website that are accessible only to logged in users you should pass the session cookies to Arachni.

There are a number of ways to do that; I've found that Firebug's cookie-export feature works best.

Note: If you don't feel comfortable setting your own cookie-jar, you can use the proxy or autologin plugins to login to the web application.
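
Assuming the exported cookies live in a hypothetical cookies.txt file in Netscape format:

arachni http://example.com --http-cookie-jar=cookies.txt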

--http-cookie-string

Expects: string

Default: disabled

Multiple invocations?: no

Cookies, in the format of a Set-Cookie response header, to be sent to the web application.

--http-cookie-string='my_cookie=my_value; Path=/, other_cookie=other_value; Path=/test'

By default, the HTTP authentication type is detected automatically; all that is necessary is specifying the username and password.

The only situation where that's not the case is when using Kerberos. In that case, a ticket needs to be acquired via kinit and neither a username nor a password needs to be specified in the scan configuration.

If you are using the official packages, this can be accomplished like so:

./bin/arachni_shell -c 'kinit [email protected]'

After acquiring the Kerberos ticket, you can perform the scan without any extra authentication configuration.

--http-authentication-username

Expects: string

Default: disabled

Multiple invocations?: no

Username to use for HTTP authentication.

--http-authentication-password

Expects: string

Default: disabled

Multiple invocations?: no

Password to use for HTTP authentication.
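
The two options are used together; the credentials below are placeholders:

arachni http://example.com --http-authentication-username=admin --http-authentication-password=secret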

--http-authentication-type

Expects: string

Default: auto

Multiple invocations?: no

HTTP authentication type to use, available types are:

  • auto -- Default
  • basic
  • digest
  • digest_ie
  • negotiate
  • ntlm

--http-proxy

Expects: server:port

Default: disabled

Multiple invocations?: no

Sets a proxy server for the client.

--http-proxy-authentication

Expects: username:password

Default: disabled

Multiple invocations?: no

Sets authentication credentials for the specified proxy server.

--http-proxy-type

Expects: http, http_1_0, socks4, socks5, socks4a

Default: auto

Multiple invocations?: no

Sets the protocol for the specified proxy server.
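
For example, to route the scan through a local intercepting proxy (address, port, type and credentials below are only placeholders):

arachni http://example.com --http-proxy=127.0.0.1:8080 --http-proxy-type=http --http-proxy-authentication=user:pass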

--http-ssl-verify-peer

Expects: n/a

Default: disabled

Multiple invocations?: no

Verify SSL peer.

--http-ssl-verify-host

Expects: n/a

Default: disabled

Multiple invocations?: no

Verify SSL host.

--http-ssl-certificate

Expects: filepath

Default: disabled

Multiple invocations?: no

SSL certificate to use.

--http-ssl-certificate-type

Expects: pem,der

Default: pem

Multiple invocations?: no

SSL certificate type.

--http-ssl-key

Expects: filepath

Default: disabled

Multiple invocations?: no

SSL private key to use.

--http-ssl-key-type

Expects: pem,der

Default: pem

Multiple invocations?: no

SSL private key type.

--http-ssl-key-password

Expects: string

Default: disabled

Multiple invocations?: no

Password for the SSL private key.

--http-ssl-ca

Expects: filepath

Default: disabled

Multiple invocations?: no

File holding one or more certificates with which to verify the peer.

--http-ssl-ca-directory

Expects: path

Default: disabled

Multiple invocations?: no

Directory holding multiple certificate files with which to verify the peer.

--http-ssl-version

Expects: TLSv1,TLSv1_0,TLSv1_1,TLSv1_2,SSLv2,SSLv3

Default: auto

Multiple invocations?: no

SSL version to use.

--input-value

Expects: pattern:value

Default: disabled

Multiple invocations?: yes

Sets a value for inputs whose name matches the pattern.
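
For example, to fill any input whose name matches email with a placeholder address and any input whose name matches age with a number (patterns and values here are purely illustrative):

arachni http://example.com --input-value='email:[email protected]' --input-value='age:30'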

--input-values-file

Expects: filepath

Default: disabled

Multiple invocations?: no

YAML file containing a Hash object with regular expressions, to match against input names, as keys and input values as values.
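
A minimal sketch of such a file (a hypothetical values.yml, with regular expressions as keys and the values to use as values):

email: [email protected]
user: jsmith
age: '30'

It could then be loaded with:

arachni http://example.com --input-values-file=values.yml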

--input-without-defaults

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, system default values won't be used.

--input-force

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Forces the system to fill-in even non-empty inputs.

--checks-list

Expects: pattern

Default: disabled

Multiple invocations?: yes

Lists all available checks.

If an option has been provided, it will be treated as a pattern and be used to filter the displayed checks.

--checks

Expects: string,string

Default: * (all)

Multiple invocations?: no

Loads the given checks, by name.

You can specify the checks to load as comma separated values (without spaces) or * to load all. You can prevent checks from being loaded by prefixing their name with a dash (-).

Note: Checks are referenced by their filename without the .rb extension, use --checks-list to see all.

As CSV:

arachni --checks=xss,sqli,path_traversal http://example.com/

All:

arachni http://example.com/

Excluding checks:

arachni --checks=*,-backup_files,-xss http://example.com/

The above will load all checks except for the backup_files and xss ones.

--plugins-list

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Lists all available plugins.

--plugin

Expects: string

Default: disabled

Multiple invocations?: yes

Loads a plugin by name and configures it with the given options.

Note: Plugins are referenced by their filename without the .rb extension, use --plugins-list to see all.

Excluding the logout URL and running the autologin plugin to automatically log in to the web application:

arachni http://testfire.net --scope-page-limit=1 --checks=xss \
    --plugin=autologin:url=http://testfire.net/bank/login.aspx,parameters='uid=jsmith&passw=Demo1234',check='Sign Off|MY ACCOUNT' \
    --scope-exclude-pattern logout

--platforms-list

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Lists all available platforms.

--platforms-no-fingerprinting

Expects: <n/a>

Default: disabled

Multiple invocations?: no

Disables platform fingerprinting and results in all audit payloads being sent to the webapp.

--platforms

Expects: string,string,...

Default: auto

Multiple invocations?: no

Explicitly sets the platforms for the remote web application. You can use this to help the system be more efficient in its scan.
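
For example, if you already know the target runs on the stack identified in the earlier output (Windows, IIS, ASPX), you could declare it up front; use --platforms-list to see the valid shortnames:

arachni http://example.com --platforms=windows,iis,aspx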

--session-check-url

Expects: string

Default: disabled

Multiple invocations?: no

Requires: session-check-pattern

The URL passed to this option will be used to verify that the system is still logged in to the web application.

If the HTTP response body of that URL matches the session-check-pattern, the system is considered to still be logged in.

--session-check-pattern

Expects: string

Default: disabled

Multiple invocations?: no

Requires: session-check-url

A pattern used against the body of the session-check-url to verify that the system is still logged in to the web application.

A positive match should indicate that the system is logged in.
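
Combined with a cookie jar or a login plugin, a hypothetical logged-in check could look like this (the URL and pattern below are placeholders):

arachni http://example.com --http-cookie-jar=cookies.txt --session-check-url=http://example.com/account --session-check-pattern='Sign Off'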

--profile-save-filepath

Expects: filepath

Default: disabled

Multiple invocations?: no

This option allows you to save your current running configuration, all the options passed to Arachni, to an Arachni Framework Profile (.afp) file.

--profile-load-filepath

Expects: filepath

Default: disabled

Multiple invocations?: no

This option allows you to load and run a saved profile.

Note: This option does not impede your ability to specify more options or resave the profile.
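
For example, you could save a configuration once and reuse it for later scans (example.afp is a hypothetical filename):

arachni http://example.com --audit-forms --checks=xss* --profile-save-filepath=example.afp
arachni http://example.com --profile-load-filepath=example.afp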

--browser-cluster-local-storage

Expects: filepath

Default: disabled

Multiple invocations?: no

Populates the browsers' local storage from the JSON data found in the specified file.
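
A minimal sketch, assuming a hypothetical local_storage.json containing the key/value pairs to preload:

{ "token": "abcd1234", "theme": "dark" }

It could then be loaded with:

arachni http://example.com --browser-cluster-local-storage=local_storage.json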

--browser-cluster-wait-for-element

Expects: PATTERN:CSS

Default: disabled

Multiple invocations?: yes

Wait for element matching the CSS selector to appear when visiting a page whose URL matches the PATTERN.

Note: There is no special timeout setting for this operation; the global browser cluster job timeout option will be enforced.

To wait for an element with an ID attribute of myElement to appear when visiting a page whose URL includes the string withElement (like: http://example.com/blah#withElement):

--browser-cluster-wait-for-element='withElement:#myElement'

Sometimes it is necessary to wait for an element on a page whose URL does not include a distinguishing string. This is common with client-side MVC frameworks when the seed URL includes no route in the fragment section.

In this case, in order to wait for an element with an ID attribute of myElement when the URL has no hash (#) part:

--browser-cluster-wait-for-element='^((?!#).)*$:#myElement'

--browser-cluster-pool-size

Expects: integer

Default: 6

Multiple invocations?: no

Amount of browser workers (processes) to maintain in the pool.

--browser-cluster-job-timeout

Expects: integer

Default: 10

Multiple invocations?: no

Maximum allowed time for each job, measured in seconds.

--browser-cluster-worker-time-to-live

Expects: integer

Default: 100

Multiple invocations?: no

Amount of jobs each worker should process before having its process respawned.

Note: Mainly used to prevent individual browser processes from accumulating too much RAM.

--browser-cluster-ignore-images

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, the browsers will not load any images.

--browser-cluster-screen-width

Expects: integer

Default: 1600

Multiple invocations?: no

Sets the browsers' screen width.

Note: Can be used to test responsive and mobile applications.

--browser-cluster-screen-height

Expects: integer

Default: 1200

Multiple invocations?: no

Sets the browsers' screen height.

Note: Can be used to test responsive and mobile applications.
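
For example, to approximate a small mobile viewport (the dimensions are only illustrative):

arachni http://example.com --browser-cluster-screen-width=375 --browser-cluster-screen-height=667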

--report-save-path

Expects: string

Default: .

Multiple invocations?: no

Directory or file path where to store the scan report.

Note: You can use the generated file to create reports in several formats with the arachni_reporter executable.

--snapshot-save-path

Expects: string

Default: .

Multiple invocations?: no

Directory or file path where to store the snapshot of a suspended scan.

Note: You can use the generated file to resume the scan with the arachni_restore executable.

--timeout

Expects: hours:minutes:seconds

Default: infinite

Multiple invocations?: no

Maximum amount of time to allow the scan to run.

--timeout-suspend

Expects: <n/a>

Default: disabled

Multiple invocations?: no

If enabled, the scan will be suspended when the --timeout is reached, instead of being aborted.
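
For example, to let a scan run for at most two and a half hours and suspend, rather than abort, when the limit is hit:

arachni http://example.com --timeout=02:30:00 --timeout-suspend

For reference, the full help output of the arachni executable follows.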

$ arachni -h
Arachni - Web Application Security Scanner Framework v1.4
   Author: Tasos "Zapotek" Laskos <[email protected]>

           (With the support of the community and the Arachni Team.)

   Website:       http://arachni-scanner.com
   Documentation: http://arachni-scanner.com/wiki


Usage: ./bin/arachni [options] URL

Generic
  -h, --help                  Output this message.
                               
      --version               Show version information.
                               
      --authorized-by EMAIL_ADDRESS
                              E-mail address of the person who authorized the scan.
                                (It'll make it easier on the sys-admins during log reviews.)
                                (Will be used as a value for the 'From' HTTP request header.)
                               

Output
      --output-verbose        Show verbose output.
                               
      --output-debug [LEVEL 1-4]
                              Show debugging information.
                               
      --output-only-positives Only output positive results.
                               

Scope
      --scope-include-pattern PATTERN
                              Only include resources whose path/action matches PATTERN.
                                (Can be used multiple times.)
                               
      --scope-include-subdomains
                              Follow links to subdomains.
                                (Default: false)
                               
      --scope-exclude-pattern PATTERN
                              Exclude resources whose path/action matches PATTERN.
                                (Can be used multiple times.)
                               
      --scope-exclude-file-extensions EXTENSION,EXTENSION2,..
                              Exclude resources with the specified extensions.
                               
      --scope-exclude-content-pattern PATTERN
                              Exclude pages whose content matches PATTERN.
                                (Can be used multiple times.)
                               
      --scope-exclude-binaries
                              Exclude non text-based pages.
                                (Binary content can confuse passive checks that perform pattern matching.)
                               
      --scope-redundant-path-pattern PATTERN:LIMIT
                              Limit crawl on redundant pages like galleries or catalogs.
                                (URLs matching PATTERN will be crawled LIMIT amount of times.)
                                (Can be used multiple times.)
                               
      --scope-auto-redundant [LIMIT]
                              Only follow URLs with identical query parameter names LIMIT amount of times.
                                (Default: 10)
                               
      --scope-directory-depth-limit LIMIT
                              Directory depth limit.
                                (Default: inf)
                                (How deep Arachni should go into the site structure.)
                               
      --scope-page-limit LIMIT
                              How many pages to crawl and audit.
                                (Default: inf)
                               
      --scope-extend-paths FILE
                              Add the paths in FILE to the ones discovered by the crawler.
                                (Can be used multiple times.)
                               
      --scope-restrict-paths FILE
                              Use the paths in FILE instead of crawling.
                                (Can be used multiple times.)
                               
      --scope-url-rewrite PATTERN:SUBSTITUTION
                              Rewrite URLs based on the given PATTERN and SUBSTITUTION.
                                To convert:  http://example.com/articles/some-stuff/23 to http://example.com/articles.php?id=23
                                Use:         articles/[\w-]+/(\d+):articles.php?id=\1
                               
      --scope-dom-depth-limit LIMIT
                              How deep to go into the DOM tree of each page, for pages with JavaScript code.
                                (Default: 5)
                                (Setting it to '0' will disable browser analysis.)
                               
      --scope-https-only      Forces the system to only follow HTTPS URLs.
                                (Default: false)
                               

Audit
      --audit-links           Audit links.
                               
      --audit-forms           Audit forms.
                               
      --audit-cookies         Audit cookies.
                               
      --audit-cookies-extensively
                              Submit all links and forms of the page along with the cookie permutations.
                                (*WARNING*: This will severely increase the scan-time.)
                               
      --audit-headers         Audit headers.
                               
      --audit-link-template TEMPLATE
                              Regular expression with named captures to use to extract input information from generic paths.
                                To extract the 'input1' and 'input2' inputs from:
                                  http://example.com/input1/value1/input2/value2
                                Use:
                                  input1/(?<input1>\w+)/input2/(?<input2>\w+)
                                (Can be used multiple times.)
                               
      --audit-jsons           Audit JSON request inputs.
                               
      --audit-xmls            Audit XML request inputs.
                               
      --audit-ui-inputs       Audit orphan Input elements with events.
                               
      --audit-ui-forms        Audit UI Forms.
                                Input and button groups that do not belong to a parent <form> element.
                               
      --audit-parameter-names Inject payloads into parameter names.
                               
      --audit-with-raw-payloads
                              Inject payloads with and without HTTP encoding.
                               
      --audit-with-extra-parameter
                              Inject payloads into extra element parameters.
                               
      --audit-with-both-methods
                              Audit elements with both GET and POST requests.
                                (*WARNING*: This will severely increase the scan-time.)
                               
      --audit-exclude-vector PATTERN
                              Exclude input vectorS whose name matches PATTERN.
                                (Can be used multiple times.)
                               
      --audit-include-vector PATTERN
                              Include only input vectors whose name matches PATTERN.
                                (Can be used multiple times.)
                               

Input
      --input-value PATTERN:VALUE
                              PATTERN to match against input names and VALUE to use for them.
                                (Can be used multiple times.)
                               
      --input-values-file FILE
                              YAML file containing a Hash object with regular expressions, to match against input names, as keys and input values as values.
                               
      --input-without-defaults
                              Do not use the system default input values.
                               
      --input-force           Fill-in even non-empty inputs.
                               

HTTP
      --http-user-agent USER_AGENT
                              Value for the 'User-Agent' HTTP request header.
                                (Default: Arachni/v2.0dev)
                               
      --http-request-concurrency MAX_CONCURRENCY
                              Maximum HTTP request concurrency.
                                (Default: 20)
                                (Be careful not to kill your server.)
                                (*NOTE*: If your scan seems unresponsive try lowering the limit.)
                               
      --http-request-timeout TIMEOUT
                              HTTP request timeout in milliseconds.
                                (Default: 10000)
                               
      --http-request-redirect-limit LIMIT
                              Maximum amount of redirects to follow for each HTTP request.
                                (Default: 5)
                               
      --http-request-queue-size QUEUE_SIZE
                              Maximum amount of requests to keep in the queue.
                                Bigger size means better scheduling and better performance,
                                smaller means less RAM consumption.
                                (Default: 100)
                               
      --http-request-header NAME=VALUE
                              Specify custom headers to be included in the HTTP requests.
                                (Can be used multiple times.)
                               
      --http-response-max-size LIMIT
                              Do not download response bodies larger than the specified LIMIT, in bytes.
                                (Default: 500000)
                               
      --http-cookie-jar COOKIE_JAR_FILE
                              Netscape-styled HTTP cookiejar file.
                               
      --http-cookie-string COOKIE
                              Cookie representation as an 'Cookie' HTTP request header.
                               
      --http-authentication-username USERNAME
                              Username for HTTP authentication.
                               
      --http-authentication-password PASSWORD
                              Password for HTTP authentication.
                               
      --http-proxy ADDRESS:PORT
                              Proxy to use.
                               
      --http-proxy-authentication USERNAME:PASSWORD
                              Proxy authentication credentials.
                               
      --http-proxy-type http,http_1_0,socks4,socks4a,socks5,socks5h
                              Proxy type.
                                (Default: auto)
                               
      --http-ssl-verify-peer  Verify SSL peer.
                                (Default: false)
                               
      --http-ssl-verify-host  Verify SSL host.
                                (Default: false)
                               
      --http-ssl-certificate PATH
                              SSL certificate to use.
                               
      --http-ssl-certificate-type pem,der
                              SSL certificate type.
                               
      --http-ssl-key PATH     SSL private key to use.
                               
      --http-ssl-key-type pem,der
                              SSL key type.
                               
      --http-ssl-key-password PASSWORD
                              Password for the SSL private key.
                               
      --http-ssl-ca PATH      File holding one or more certificates with which to verify the peer.
                               
      --http-ssl-ca-directory PATH
                              Directory holding multiple certificate files with which to verify the peer.
                               
      --http-ssl-version TLSv1,TLSv1_0,TLSv1_1,TLSv1_2,SSLv2,SSLv3
                              SSL version to use.
                               

Checks
      --checks-list [GLOB]    List available checks based on the provided glob.
                                (If no glob is provided all checks will be listed.)
                               
      --checks CHECK,CHECK2,...
                              Comma separated list of checks to load.
                                    Checks are referenced by their filename without the '.rb' extension, use '--checks-list' to list all.
                                    Use '*' as a check name to load all checks or as a wildcard, like so:
                                        xss*   to load all XSS checks
                                        sql_injection*  to load all SQL injection checks
                                        etc.
                                
                                    You can exclude checks by prefixing their name with a minus sign:
                                        --checks=*,-backup_files,-xss
                                    The above will load all checks except for the 'backup_files' and 'xss' checks.
                                
                                    Or mix and match:
                                        -xss*   to unload all XSS checks.
                               

Plugins
      --plugins-list [GLOB]   List available plugins based on the provided glob.
                                (If no glob is provided all plugins will be listed.)
                               
      --plugin 'PLUGIN:OPTION=VALUE,OPTION2=VALUE2'
                              PLUGIN is the name of the plugin as displayed by '--plugins-list'.
                                (Plugins are referenced by their filename without the '.rb' extension, use '--plugins-list' to list all.)
                                (Can be used multiple times.)
                               

Platforms
      --platforms-list        List available platforms.
                               
      --platforms-no-fingerprinting
                              Disable platform fingerprinting.
                                (By default, the system will try to identify the deployed server-side platforms automatically
                                in order to avoid sending irrelevant payloads.)
                               
      --platforms PLATFORM,PLATFORM2,...
                              Comma separated list of platforms (by shortname) to audit.
                                (The given platforms will be used *in addition* to fingerprinting. In order to restrict the audit to
                                these platforms enable the '--platforms-no-fingerprinting' option.)
                               

Session
      --session-check-url URL URL to use to verify that the scanner is still logged in to the web application.
                                (Requires 'session-check-pattern'.)
                               
      --session-check-pattern PATTERN
                              Pattern used against the body of the 'session-check-url' to verify that the scanner is still logged in to the web application.
                                (Requires 'session-check-url'.)
                               

Profiles
      --profile-save-filepath FILEPATH
                              Save the current configuration profile/options to FILEPATH.
                               
      --profile-load-filepath FILEPATH
                              Load a configuration profile from FILEPATH.
                               

Browser cluster
      --browser-cluster-local-storage FILE
                              Sets the browsers' local storage using the JSON data in FILE.
                               
      --browser-cluster-wait-for-element PATTERN:CSS
                              Wait for element matching CSS to appear when visiting a page whose URL matches the PATTERN.
                               
      --browser-cluster-pool-size SIZE
                              Amount of browser workers to keep in the pool and put to work.
                                (Default: 6)
                               
      --browser-cluster-job-timeout SECONDS
                              Maximum allowed time for each job.
                                (Default: 25)
                               
      --browser-cluster-worker-time-to-live LIMIT
                              Re-spawn the browser of each worker every LIMIT jobs.
                                (Default: 100)
                               
      --browser-cluster-ignore-images
                              Do not load images.
                               
      --browser-cluster-screen-width
                              Browser screen width.
                                (Default: 1600)
                               
      --browser-cluster-screen-height
                              Browser screen height.
                                (Default: 1200)
                               

Report
      --report-save-path PATH Directory or file path where to store the scan report.
                                You can use the generated file to create reports in several formats with the 'arachni_reporter' executable.
                               

Snapshot
      --snapshot-save-path PATH
                              Directory or file path where to store the snapshot of a suspended scan.
                                You can use the generated file to resume the scan with the 'arachni_restore' executable.
                               

Timeout
      --timeout HOURS:MINUTES:SECONDS
                              Stop the scan after the given duration is exceeded.
                               
      --timeout-suspend       Suspend after the timeout.
                                You can use the generated file to resume the scan with the 'arachni_restore' executable.