Last time I explained how sloppy representations can cause various vulnerabilities. While doing some research for that post I stumbled across NUL byte injection bugs in two projects. Because both have been fixed now, I feel like I can freely talk about them with a clear conscience.

These projects are Chicken Scheme and the C implementation of Ruby. The difference in the way these systems deal with NUL bytes clearly shows the importance of handling security issues in a structural way. We'll also see the importance of truly grokking the problem when implementing a fix.

A quick recap

Remember that C uses NUL bytes to delimit strings. Many other languages store the length of the string instead. In these languages, NUL bytes can occur inside strings. This can cause unintended reinterpretation when strings cross the language border into C.

In my previous post I already pointed out how Chicken automatically prevents this reinterpretation in its foreign function interface (FFI). You just describe to Scheme that your C function accepts a string, and it will take care of the rest:

(define my-length (foreign-lambda int "strlen" c-string))

;; Prints 12:
(print (my-length "hello, there"))

;; Raises an exception, showing the following message:
;; Error: (##sys#make-c-string) cannot represent string with NUL
;;   bytes as C string: "hello\x00there"
(print (my-length "hello\x00there"))

The FFI's feature of automatically checking for NUL bytes in strings before passing them on to C was only added in late 2010 (Chicken 4.6.0). However, because everything uses this interface, this mismatch could easily be fixed, in a central location, securing all existing programs in one fell swoop.

Now, you may be thinking "well, that's nothing special; it's good engineering practice that there must be a single point of truth, and that you Don't Repeat Yourself". And you'd be right! In fact, this is a key insight: solid engineering is a prerequisite to secure engineering. It can prevent security bugs from happening, and help to fix them quickly once they are discovered. A core tenet of "structural security" is that without structure, there can be no security.

When smugness backfires

To drive home the point, let's take a look at what I discovered while writing my previous blog post. After describing Chicken's Right Way solution and feeling all smug about it, I noticed an embarrassing problem: for various reasons (some good, others less so), there are places in Chicken where C functions are called without going through the FFI. Some of these contained hand-rolled string conversions!

It turns out that we overlooked these places when first introducing the NUL byte checks, and as a consequence several critical procedures (standard R5RS ones like with-input-from-file) were left vulnerable to exactly this bug:

;; This program outputs "yes" twice in Chicken versions before 4.8.0
(with-output-to-file "foo\x00bar" (lambda () (print "hai")))
(print (if (file-exists? "foo") "yes" "no"))
(print (if (file-exists? "foo\x00bar") "yes" "no"))

To me, this just validates the importance of approaching security measures in a structural rather than an ad-hoc way; the bug was only in those parts of the code that didn't use the FFI. Deviation from a rule is where bugs are often found!

We fixed this as thoroughly as possible, especially given the at times awkward structure of the Chicken code. We commented every special situation extensively, assigned a new error type, C_ASCIIZ_REPRESENTATION_ERROR, for this particular error, and added regression tests for each class of affected functionality (string-to-number conversion, file port creation, process creation, environment access, and low-level messaging). There's definitely room for improvement, and I hope to one day reduce the special cases to the bare minimum. Documenting the special cases makes it easier to avoid introducing new problems, and easier to find them again when refactoring. The tests help there too, of course.

When you run the above program in a Chicken version with the fix, it behaves as expected:

 Error: cannot represent string with NUL bytes as C string: "foo\x00bar"

Another approach

The Ruby situation is a little more complicated. Instead of an FFI it exposes a C API, so things work the other way around: you write C code that interfaces "up" into Ruby. It has a StringValueCStr() macro, which is documented as follows (sic):

 You can also use the macro named StringValueCStr(). This is just
 like StringValuePtr(), but always add nul character at the end of
 the result. If the result contains nul character, this macro causes
 the ArgumentError exception.

However, this isn't consistently used in Ruby's own standard library:

File.open("foo\0bar", "w") { |f| f.puts "hai" }
puts File.exists?("foo")
puts File.exists?("foo\0bar")

In Ruby 1.9.3p194 and earlier, this shows the following output, indicating that File.open is vulnerable (it truncated the name and created "foo") even though File.exists? performs the check:

 true
 test.rb:4:in `exists?': string contains null byte (ArgumentError)
         from test.rb:4:in `<main>'

It turns out that internally, Ruby strings are stored with a length, but also get a NUL byte tacked onto the end, to prevent copying when calling C functions. This performance hack undermines the safety of Ruby to C string conversions, and is the direct cause of these inconsistencies. True, there is a safe function that extracts the string while checking for NUL bytes, but there are also various ways to bypass this, and if you accidentally use the wrong macro to extract the (raw) string, your code won't break. Of course, this is only true for benign inputs...

The complexity of Ruby's implementation makes it hard to ensure that it's safe everywhere. Indeed, the various places where strings are passed to C all do it differently. For example, the ENV hash for manipulating the POSIX environment has its own hand-rolled test for NUL, which you can easily verify; it produces a different error message than the one exists? gave us earlier:

irb(main):001:0> ENV["foo\0bar"] = "test"
ArgumentError: bad environment variable name

There is no reason this couldn't just use StringValueCStr(). So, even though Ruby has this safe macro, which provides a mechanism to check for poisoned NUL bytes in strings, it's rarely used by Ruby's own internals. This could be fixed just like Chicken; here too, the best way to do that would be to generalize and eliminate all special cases. Simpler code is easier to secure.

A fundamental misunderstanding

When I reported the bug in the File class to the Ruby project, they quickly had a fix, but unfortunately they seemed uninterested in going through Ruby's entire code to fix all string conversions (quoting from private e-mail conversation):

 > I agree that this looks like a good place to fix the File/IO
 > class, but there are many other places where strings are passed to C.
 > Are all of those secured?
 All path names should be converted with "to_path" method if possible.
 If any methods don't obey the rule, it is another bug.  Please let us
 know if you find such case.

In retrospect, there is the possibility that I didn't quite make myself clear enough. Perhaps this person thought I was referring to other path strings in the code. However, to me it sounds a lot like they made the same conceptual mistake that the PHP team made when they "fixed" NUL injections.

The PHP solution was to add a special "p" flag for converting path strings. This check happens for all PHP functions declared in C (via zend_parse_parameters()). Notice, by the way, that this is a new flag: there are probably tons of PHP extensions out there which aren't using it yet. And who can verify that the PHP developers managed to find all the strings in PHP which represent paths?

The PHP team completely missed the point here. Their fix means that path arguments aren't allowed to have embedded NUL bytes, but other string arguments are not checked at all. They missed the fact that this isn't just a path issue: as I described before, it's a fundamental mismatch at the language boundary, where strings are translated from the host language to C. However, there seems to be a widespread belief that NUL bytes can only be exploited in path strings.

I'm not entirely sure why this is, but I can guess. First off, "poisoned NUL byte" attacks have been popularized by a 1999 Phrack article. This article shows a few attacks, but only the path examples are really convincing. Of course, another reason is that injecting NUL bytes in path strings really is the most obvious and practical way to exploit web scripts.

Recently, however, different NUL byte attacks have been documented. For example, they can be used to truncate LDAP and SQL queries and to bypass regular expression filters on SQL input, though you could argue these are all examples of failure to escape correctly. I found a more convincing example in the (excellent!) book The Tangled Web: it contains a one-sentence warning about using HTML sanitization C libraries from other languages. NUL bytes can also sometimes be used to hide attacks from log files.

However, the most impressive recent exploit is without a doubt this common vulnerability in SSL certificate verification systems. In an attack, an embedded NUL byte causes a certificate to be accepted for "www.paypal.com", when the CN (Common Name) section (that is, the server's hostname) actually contains the value "www.paypal.com\0.thoughtcrime.org". Certificate authorities generally just accepted this as a valid subdomain of "thoughtcrime.org", ignoring the NUL byte. Client programs (like web browsers) tended to use C string comparison functions, which stop at the NUL byte. Luckily, this was widely reported, and has been fixed in most programs.

I believe that NUL byte mishandling represents a big and mostly untapped source of vulnerabilities. High-level languages are gaining popularity over C for client-side programs, but many crucial libraries are still written in C. This combination means that the problem will grow unless this is structurally fixed in language implementations.
