Didier Stevens

Thursday 29 January 2009

Quickpost: Vigenère Is Beta-Only

Filed under: Encryption,Forensics,Quickpost,Windows 7 — Didier Stevens @ 8:41

I asked Steve Riley if he had inside information on the move from ROT13 to Vigenère for the UserAssist keys. It’s part of the beta program, to test upgrades. The final versions of Windows 7 and Windows 2008 R2 will use ROT13 for the UserAssist keys, as every Windows version since Windows 2000 has.

The binary format of the UserAssist keys has also changed; I’ve decoded most of it.

Here’s Steve’s complete answer, published with permission:

We used ROT-13 to obfuscate UserAssist for its historical Usenet purpose — not to try to secure the data, but to express that the data shouldn’t be tampered with. Sort of like claiming “Don’t peek and definitely don’t modify unless you’re prepared to deal with the consequences. You’ve been warned.” There are times, like this one, where simple obfuscation is technically justified. ROT-13 was never an encryption scheme, everyone fully expects everyone else to recognize ROT-13 on sight, and some people even developed the ability to read it directly. ROT-13 was an easy and inexpensive way to invoke an implicit social contract.

As you know, UserAssist stores the info about your most frequently used applications for display on the Start menu. (Basic principles at http://blogs.msdn.com/oldnewthing/archive/2007/06/11/3215739.aspx.) The data isn’t confidential and doesn’t need to be encrypted — after all, opening the Start menu displays it. However, its stored format is subject to change, and we don’t want applications or people unintentionally changing it. So we ROT-13ed it, in a geeky attempt to convey exactly the same message that ROT-13 signified on Usenet.

In Windows 7 we made some changes to the way the MFU list is maintained and to the data’s storage format in UserAssist. When you upgrade from a previous version of Windows, we clear the MFU list and start anew. We don’t want old data to carry forward into this key. Changing the encoding from ROT-13 to Vigenère makes it easier for us to test that we’re getting the behavior we want — it’s obvious if old data carries over, because ROT-13ed data makes no sense to Vigenère. This is very useful in pre-release builds while we’re shaking the bugs out.

However, there’s no such benefit to using Vigenère in the final release — it doesn’t convey the same message as ROT-13, and since it’s key-based, it’s easy to mistake Vigenère for true encryption. Therefore, in the final release of Windows 7, we’ll revert to using ROT-13 for UserAssist.

Hope this helps clarify the issue. Feel free to post my email on your blog, too. Incidentally, we have plenty of real crypto in Windows 7 — check out the performance improvements to our AES implementation, for example.
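As an aside: the ROT13 step Steve refers to is a single call in Python. The sample value name below, HRZR_EHACNGU, is the encoded form of UEME_RUNPATH from the pre-Windows 7 UserAssist format:

```python
import codecs

def rot13(s):
    # ROT13 maps A<->N, B<->O, ...; digits and punctuation are untouched,
    # and applying it twice returns the original string.
    return codecs.encode(s, "rot13")
```

For example, rot13("HRZR_EHACNGU") gives "UEME_RUNPATH", and applying rot13 to the result gives back the encoded name.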

Quickpost info

Sunday 18 January 2009

Quickpost: Windows 7 Beta: ROT13 Replaced With Vigenère? Great Joke!

Filed under: Encryption,Entertainment,Forensics,Quickpost,Windows 7 — Didier Stevens @ 23:17

Remember that the UserAssist keys are encrypted with ROT13?

In Windows 7 Beta, not anymore! Weak ROT13 crypto has been replaced with “stronger” Vigenère crypto!

The Vigenère key I found through some basic cryptanalysis is BWHQNKTEZYFSLMRGXADUJOPIVC.
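For the curious, here is a minimal sketch of such a Vigenère encoder/decoder in Python, assuming the textbook scheme where each key letter acts as a Caesar shift on A–Z and non-letters pass through unchanged (the beta's exact handling of digits and GUID characters may differ):

```python
# Classic Vigenère over A-Z/a-z; non-letters pass through unchanged.
# Assumption: this is the textbook scheme; the Windows 7 Beta variant
# for UserAssist value names may treat non-letters differently.
KEY = "BWHQNKTEZYFSLMRGXADUJOPIVC"

def vigenere(text, key=KEY, decrypt=False):
    out = []
    for i, c in enumerate(text):
        if not c.isalpha():
            out.append(c)  # keep digits, braces, separators as-is
            continue
        base = ord('A') if c.isupper() else ord('a')
        shift = ord(key[i % len(key)]) - ord('A')
        if decrypt:
            shift = -shift
        out.append(chr((ord(c) - base + shift) % 26 + base))
    return ''.join(out)
```

Since encryption and decryption are inverses, a round trip returns the original string.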

To the Microsoft developer who designed this: great joke! You really made me laugh. Seriously. 8-)

And I thought Easter Eggs were banned in Microsoft products. Maybe you don’t think of it as an Easter Egg, but as a programmer, I do. ;-)


Quickpost info

Saturday 17 January 2009

Playing With Authenticode and MD5 Collisions

Filed under: Encryption,Hacking — Didier Stevens @ 15:11

Back when I researched Microsoft’s code signing mechanism (Authenticode), I noticed it still supported MD5, but that the signtool uses SHA1 by default.

You can extract the signature from one program and inject it into another, but that signature will not be valid for the second program: the cryptographic hash of the parts of the program covered by the Authenticode signature differs between the two programs, so the signature check fails. By default, Microsoft’s code signing uses SHA1 to hash the program, and finding a SHA1 collision is still too difficult. But Authenticode also supports MD5, and generating MD5 collisions has become feasible under the right circumstances.

If both programs have the same MD5 Authenticode hash, the signature can be copied from program A to program B and it will remain valid. Here is the procedure I followed to achieve this.

I start from the goodevil program used on this MD5 Collision site. goodevil is a schizophrenic program: it contains both a good and an evil part, and decides which one to execute depending on some data it carries. This data is different in both programs, and it is also what gives both programs the same MD5 hash.

The MD5 collision procedure explained on Peter Selinger’s page will generate 2 different programs, good and evil, with the same MD5 hash. But this is not what I need. I need 2 different programs that generate the same MD5 hash for the byte sequences taken into account by the Authenticode signature. For a simple PE file, the PE Checksum (4 bytes) and the pointer to the digital signature (8 bytes) are not taken into account (complete details here). That shouldn’t be a surprise, because signing a PE file changes these values.

So let’s remove these bytes from PE file goodevil.exe, and call it goodevil.exe.stripped. The hash for goodevil.exe.stripped is the same as the Authenticode hash for goodevil.exe.
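The stripping step can be sketched in Python. A caveat: this sketch assumes a PE32 file and hardcodes the standard offsets (CheckSum at offset 64 of the optional header, the certificate-table data-directory entry at offset 96 + 4×8); it illustrates the idea and is not the tool used above:

```python
import struct

def authenticode_strippable_ranges(data):
    """Offsets of the PE CheckSum (4 bytes) and the certificate-table
    data-directory entry (8 bytes): the fields Authenticode skips.
    Assumes a PE32 file; PE32+ uses different optional-header offsets."""
    pe = struct.unpack_from('<I', data, 0x3C)[0]   # e_lfanew
    assert data[pe:pe + 4] == b'PE\x00\x00'
    opt = pe + 24                                  # optional header start
    checksum = opt + 64                            # CheckSum field
    certdir = opt + 96 + 4 * 8                     # data directory entry 4
    return [(checksum, 4), (certdir, 8)]

def strip_for_hashing(data):
    """Remove the skipped fields, emulating the Authenticode hash input."""
    data = bytearray(data)
    # Delete from the end backwards so earlier offsets stay valid.
    for off, size in sorted(authenticode_strippable_ranges(data), reverse=True):
        del data[off:off + size]
    return bytes(data)
```

Hashing the output of strip_for_hashing with MD5 would then correspond to the Authenticode MD5 of the original file (under the PE32 assumption above).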



Now I can compute an MD5 collision for goodevil.exe.stripped, as explained on Peter Selinger’s page. (I could also have modified the MD5 collision programs to skip these fields, but because this is just a one-shot demo, I decided not to.)

After about an hour, I have 2 new files, good.exe.stripped and evil.exe.stripped, both with the same MD5 hash. I transform them back to standard-compliant PE files by adding the checksum and pointer bytes I removed (giving me good.exe and evil.exe). Now the MD5 hashes are different again, but not the Authenticode MD5 hashes (Authenticode disregards the PE checksum and the signature pointer when calculating the hash).

OK, now I sign good.exe with my own cert. But there’s a little change in the code signing procedure I explained in this other post.

One difference is that I select custom signing:


This allows me to select MD5 hashing instead of the default SHA1:


OK, now good.signed.exe is signed:



The signature is valid, and of course, the program still works:


Let’s summarize what we have: two programs with different behavior (good.exe and evil.exe), both with the same MD5 Authenticode hash; one with a valid Authenticode signature (good.signed.exe), the other without a signature.

Now I extract the signature of good.signed.exe and add it to evil.exe, saving it with the name evil.signed.exe. I use my digital signature tool disitool for this:

disitool.py copy good.signed.exe evil.exe evil.signed.exe


This transfers the signature from good.signed.exe to evil.signed.exe. Under normal circumstances, the transferred signature would be invalid, because the second program is different and has a different hash. But this is an exceptional situation: both programs have the same Authenticode hash. Hence the signature for evil.signed.exe is also valid:


evil.signed.exe executes without problem, but does something different from good.signed.exe:


This demonstrates that MD5 is also broken for Authenticode code signing, and that you shouldn’t use it. But that’s not a real problem in practice, because Authenticode uses SHA1 by default (I had to run the signtool in wizard mode and explicitly select MD5 hashing). In command-line mode (for batching or makefiles), the signtool provides no option to select the hashing algorithm: it’s always SHA1. And yes, SHA1 is also showing some cracks, but for Authenticode you have no other choice.

You can download the demo programs and code signing cert here.

Monday 12 January 2009

A Hardware Tip for Fuzzing Embedded Devices

Filed under: Hardware,WiFi — Didier Stevens @ 21:22

Phidgets are hardware interfaces that let your computer interact with the environment. In this first blogpost of a new series, I explain how to automatically power-cycle a crashed embedded device.

I’ve been playing with Phidgets over the holiday season. Phidgets are inexpensive hardware interfaces for your computer. You connect them via USB, thus extending your machine with digital inputs/outputs and analogue inputs.

There are several aspects of the API software that I like:

  • it’s available for Windows, Linux and Mac
  • the Linux version is open source (in a later post, I’ll show it running on my nslu2)
  • there’s support for many programming languages, even Python
  • input changes can trigger events (avoids polling loops)

One problem with automated fuzzing of embedded devices (for example a WiFi AP) is that you have to power-cycle the device when it crashes. And that’s a problem when you let the fuzzer run unattended (e.g. overnight). So it would be handy to have your fuzzer power-cycle the device each time it detects that the device has become unresponsive.
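The detect-and-reboot loop can be sketched as follows. send_testcase, is_responsive, and power_cycle are hypothetical hooks your fuzzer would provide; power_cycle would toggle the relay described below:

```python
import time

def fuzz_with_watchdog(send_testcase, is_responsive, power_cycle,
                       testcases, reboot_wait=30):
    """Drive the target with test cases; whenever it stops responding,
    power-cycle it and wait for it to reboot before continuing.
    All three callables are hypothetical hooks, not a real fuzzer API."""
    for case in testcases:
        send_testcase(case)
        if not is_responsive():
            power_cycle()            # e.g. pulse the relay on the Phidget
            time.sleep(reboot_wait)  # give the device time to boot
```

is_responsive could be as simple as an ICMP ping or a TCP connect to the device's management port.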

This Phidget Interface Kit with 4 relays lets you do this. Connect the power supply of the embedded device through the NC (Normally Closed) contact of a relay. This way, the unpowered relay lets current flow and the embedded device is fed. To power-cycle the device, activate the relay for a second or two: this opens the circuit and shuts down the embedded device.

Activating a relay for a second is very easy with the Phidgets software. Here is a Python example for an Interface Kit:

    import time
    import Phidgets.Devices.InterfaceKit

    oInterfaceKit = Phidgets.Devices.InterfaceKit.InterfaceKit()
    oInterfaceKit.openPhidget()             # connect over USB
    oInterfaceKit.waitForAttachment(10000)  # wait up to 10 seconds

    oInterfaceKit.setOutputState(0, True)   # relay on: NC contact opens
    time.sleep(1)                           # device is without power
    oInterfaceKit.setOutputState(0, False)  # relay off: power restored


setOutputState is the actual command used to control the relay on output 0; the other statements open the interface and wait for it to attach.

Before OSes took full control over the input and output ports, a popular solution was to connect a relay to a Centronics printer port and control the output of the port directly from your program. But nowadays, OSes like Windows take full control over the Centronics port (if your machine still has one…), making it much harder to control from user software.

Phidgets were used (but not hurt) for my TweetXmasTree:


Tuesday 6 January 2009

Quickpost: Running BackTrack 3 on a Eee PC

Filed under: Eee PC,Quickpost — Didier Stevens @ 20:44

I want to run the BackTrack 3 Live CD on my new Eee PC 901. Here is how I configured an SD card to boot the BackTrack 3 USB version (not the same as installing the BackTrack 3 distro on an SSD or SD card).

  • start Windows XP on Eee PC
  • download the BackTrack 3 USB version
  • use unetbootin to install the BackTrack 3 iso file to the SD card
  • copy 901_net_gfx.lzm to the \BT3\optional directory on the SD card (details and download here)
  • edit file /boot/syslinux/syslinux.cfg on the SD card to add these lines after the LABEL… line:
KERNEL /boot/vmlinuz
APPEND vga=785 initrd=/boot/initrd.gz ramdisk_size=6666 root=/dev/ram0 rw load=901_net_gfx autoexec=startx

The following step is only needed if you want to change the keyboard layout. I use a Belgian keyboard, so I want the default KDE keyboard layout to be BE, not US. We will update the kxkbrc file in the root.lzm compressed directory. sda1 is the SD card I booted from; adapt according to your configuration.

  • Boot BackTrack 3 from the SD card and get a root shell
  • cp /mnt/sda1/BT3/base/root.lzm .
  • mv /mnt/sda1/BT3/base/root.lzm /mnt/sda1/BT3/base/root.lzm.original
  • mkdir newroot
  • lzm2dir root.lzm newroot
  • edit file newroot/root/.kde/share/config/kxkbrc
  • edit the LayoutList property and move the be value to the beginning of the comma-separated list, like this:
    • LayoutList=be,us,ch,br,cz,fr,de,it,pl,sk,gb,dk,de
  • save file kxkbrc
  • dir2lzm newroot root.lzm
  • mv root.lzm /mnt/sda1/BT3/base

You can use the same procedure to edit other (config) files, or add files like your favorite utilities.

Quickpost info

Monday 5 January 2009

Howto: Add a Digital Signature to an Office Document

Filed under: Encryption — Didier Stevens @ 21:19

Starting with Microsoft Office 2003, digital signatures can be added to Office documents (e.g. a spreadsheet). Macros can also be digitally signed. So if you have that special spreadsheet macro to execute, but your Excel configuration requires macros to be signed, this howto is what you’re looking for ;-) .

The first step is to import our PKCS12 file. Then we’ll take a look at signing the document (an Excel spreadsheet).


Adding a digital signature is done with Tools / Options…






These actions added our digital signature:


Of course, when we change the spreadsheet, we have to sign it again:


The title bar warns us when we open a signed spreadsheet:


The word “unverified” in the title is a reminder that we have to check the signature (via Tools / Options…) to make sure the spreadsheet wasn’t tampered with:


Now let’s sign a macro:






A default Office 2003 install requires macros to be signed:


Sunday 4 January 2009

Howto: Add a Digital Signature to a PDF File

Filed under: Encryption,PDF — Didier Stevens @ 21:47

After signing an executable and a Mozilla add-on, let’s sign a PDF document with our certificate.

I didn’t manage to use a free tool to sign a PDF document with a certificate, so we’ll use a trial version of Adobe Acrobat Professional.

Our PKCS12 file (with keys and certificates) is imported into our certificate store, so let’s open a PDF document and sign it:



After signing, we notice one issue: our identity is not yet trusted:


Let’s trust ourselves:





After validation, Adobe Acrobat tells us our signature is valid:


Thursday 1 January 2009

Howto: Add a Digital Signature to a Firefox Add-on

Filed under: Encryption — Didier Stevens @ 22:02

After signing a Windows executable with our own certificate, let’s sign an XPI file.

There is a nice Firefox add-on we’ll use to achieve this: Key Manager. But the subordinate CA certificate we created earlier is not suited to sign XPI files, because it doesn’t state explicitly that it can be used for code signing. We have to create a new one with an extendedKeyUsage property for code signing.

First we need to create a config file with the extended key usage, eku.cnf:
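A minimal eku.cnf matching the -extensions eku_codesigning option used below could look like this (the section name is inferred from that OpenSSL flag; treat the exact contents as an assumption):

```ini
[eku_codesigning]
extendedKeyUsage = codeSigning
```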


Then we issue the next OpenSSL commands to create a new certificate and PKCS12 file:

openssl x509 -req -days 730 -in ia.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out ia2.crt -extfile eku.cnf -extensions eku_codesigning

openssl pkcs12 -export -out ia2.p12 -inkey ia.key -in ia2.crt -chain -CAfile ca.crt

Now we use the Key Manager add-on to import the PKCS12 file (this can also be done with the Firefox options manager):







After importing the certificates and keys, we need to enable the root CA certificate for code signing:



Now that this is done, we’re ready to sign an XPI file. As an example, I’m taking my WhoAmI Firefox add-on:




When installing this signed Firefox add-on, we get to see the identity of the signer:


For an unsigned add-on, it says “Author not verified”:


If we don’t trust the root CA for code signing (or the root CA certificate is missing), we can’t install the add-on!


So it doesn’t make sense to sign a Firefox add-on with your own self-signed certificate if you plan to make it public (e.g. publish it on the Mozilla add-ons site). Users will not be able to install your add-on unless they have imported and approved your root CA certificate.
