Heartbleed: A Scrambled Egg with Lots of Ham

The sensational headline news this week was the “Heartbleed” security flaw (CVE-2014-0160), covered by most mainstream and tech sites. It's an old bug, accidentally introduced and only discovered recently [1]. The report sent IT professionals scrambling to fix their systems.

At first glance the bug looks benign enough, with the chances of stealing passwords or SSL keys rather slim. However, as with any security flaw, someone determined (and clever) enough to exploit it may walk away with a pile of useful data. Whether they can use the leaked data to steal client information or build a phishing site is unclear. Just the thought of the potential leak scares the daylights out of everyone! It's also proof that the marketing behind this bug was very effective.

Regardless, the actions that need to be taken are as follows:

  1. Check with Qualys SSL Analyzer to determine if your site is vulnerable.
  2. If vulnerable, upgrade OpenSSL to version 1.0.1g, or alternatively recompile OpenSSL without the “heartbeat” option (-DOPENSSL_NO_HEARTBEATS). A command-line sketch follows this list.
  3. Recompile or restart the web server to reload the latest OpenSSL libraries.
  4. Test the site(s) with the Qualys SSL Analyzer again, and verify the site is still functional.
  5. With the new OpenSSL in place, generate a new SSL key and re-key the certificate. Install the new key and certificate on the web server(s).
  6. Urge users to change their passwords, which they occasionally have to do anyway. This step is tricky, considering the PR scare that admitting the site was vulnerable will generate. However, the notification is the responsible thing to do.
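
Here is a rough command-line sketch of steps 2 through 5 on a typical Linux server. The package manager commands, service name, and key/CSR file names are assumptions; adapt them to your environment.

openssl version                     # confirm which OpenSSL build is installed
yum update openssl                  # or: apt-get install --only-upgrade openssl
service httpd restart               # reload the patched library into the web server
openssl genrsa -out example.com.key 2048                     # fresh private key
openssl req -new -key example.com.key -out example.com.csr   # CSR to re-key the certificate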

When the dust settles, we can look back and use this as an important reminder of how fragile the Internet is. Customers should stay cautious about their data being transmitted over the Internet, no matter how securely a company claims it is kept.

  [1] Introduced in 2011 and discovered in February 2014.

Will Windows 8 Save the PC Business?

Predictions are in already: Windows 8 will be irrelevant. The clues seem to support the suspicion: the masses are already happy with Windows 7, and enterprises have made a substantial investment upgrading to it. Another migration in 2012 is just too soon.

But putting all that aside, PC manufacturers need to support Windows 8 because it's the platform that will finally integrate desktop PCs with tablets [1], especially in an enterprise environment. There's also a good list of new features that will entice some users to upgrade. Plus, there are millions of new PCs and laptops to sell every year.

Windows 8 is still relevant and it will save the PC business.

  [1] As demonstrated at Microsoft's Build conference in 2011.

Listing Memory Usage by Process

A question often asked of me: “Which processes are using up too much memory?” I generally use top to figure it out manually, but there's a better way: the Solaris pmap command gives a good estimate of each process's memory usage. Brandon Hutchinson has a shell script that produces a nice report; I modified it a little to include a column for the process owner.

#!/bin/sh
# List each process's total memory usage (per pmap), sorted by size.
/usr/bin/printf "%-6s %-9s %-13s %s\n" "PID" "Total" "User" "Command"
/usr/bin/printf "%-6s %-9s %-13s %s\n" "---" "-----" "----" "-------"
for PID in `/usr/bin/ps -ef  | /usr/bin/awk '$2 ~ /[0-9]+/ { print $2 }'`
do
   USER=`/usr/bin/ps -o user -p $PID | /usr/bin/tail -1`
   CMD=`/usr/bin/ps -o comm -p $PID | /usr/bin/tail -1`
   # Avoid "pmap: cannot examine 0: system process"-type errors
   # by redirecting STDERR to /dev/null
   TOTAL=`/usr/bin/pmap $PID 2>/dev/null | /usr/bin/tail -1 | \
   /usr/bin/awk '{ print $2 }'`
   [ -n "$TOTAL" ] && /usr/bin/printf "%-6s %-9s %-13s %s\n" "$PID" "$TOTAL" "$USER" "$CMD"
done | /usr/bin/sort -rn -k2

Note: this script needs to run as “root” so pmap has permission to examine every process.
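
The script keys off the last line of pmap's output, whose second field is the total address space mapped by the process; that's what the tail/awk pipeline extracts. On Solaris the summary line looks something like this (exact spacing varies by release):

# /usr/bin/pmap 694 | /usr/bin/tail -1
 total   25240K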

Output looks something like this:

PID    Total     User      Command
---    -----     ----      -------
694    25240K    root      /opt/RICHPse/bin/se.sparcv9.5.9
696    5208K     root      /usr/dt/bin/dtlogin
613    4992K     root      /opt/CA/BABcmagt/caagentd
326    4512K     smmsp     /usr/lib/sendmail
260    4440K     root      /usr/sbin/syslogd
269    2440K     root      /usr/sbin/cron
196    2360K     root      /usr/sbin/keyserv
193    2352K     root      /usr/sbin/rpcbind
103    2336K     root      /usr/lib/sysevent/syseventd
235    2224K     root      /usr/lib/nfs/lockd
206    2184K     root      /usr/lib/netsvc/yp/ypbind

Samba and Windows 7

Windows 7 ships with upgraded security defaults, which can break connections to legacy services that worked fine under Windows XP. One of them is Samba on Unix.

Fortunately, there's a solution. The GUI steps are below; a scriptable registry equivalent follows the list:

  1. Open Control Panel.
  2. Choose Administrative Tools.
  3. Click Local Security Policy.
  4. Under Local Policies > Security Options:
    1. Change “Network security: LAN Manager authentication level” to “Send LM & NTLM responses”.
    2. Under “Network security: Minimum session security for NTLM SSP based (including secure RPC) clients”, clear “Require 128-bit encryption” so no minimum security is required.
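
For rolling the change out to many machines, the same two policies can be set directly in the registry. Below is a sketch using reg.exe from an elevated command prompt, assuming the standard registry mappings for these policies (a reboot or re-logon may be needed, and note this deliberately lowers security):

REM LAN Manager authentication level 0 = "Send LM & NTLM responses"
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 0 /f
REM Clear the NTLM SSP client minimums, dropping the 128-bit encryption requirement
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" /v NtlmMinClientSec /t REG_DWORD /d 0 /f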


How To Build A Web App

Developing a good web application is a tricky job, and deploying one that gains people's acceptance can be a big challenge. The good ones that come to mind are Twitter, Pandora, and Tumblr.

Is there a recipe for building a good web application?

I watched an excellent presentation by Fred Wilson, a venture capitalist who has invested in several successful companies, summarizing the basic rules of building a great web app:

  1. Fast
  2. Instantly Useful
  3. Unique Style
  4. Less and simple
  5. Programmable (i.e., APIs)
  6. Personal
  7. REST – REpresentational State Transfer (i.e., unique URLs)
  8. SEO – Search Engine Optimization
  9. Clean Design
  10. Playful

These guidelines are definitely a good start for new companies. They're also useful for established companies that want to redefine their products.

The presentation, Fred Wilson's “The 10 Golden Principles of Successful Web Apps,” is well worth watching.

The Importance of Page Loading Time


Customers are fickle when checking out a company's web site. Unless they're desperate, people browsing a site tend to move quickly from one page to another. Their attention span is short, their time is valuable, and they don't want to spend it waiting for a web page to load.

Companies have spent substantial amounts of money to improve page loading times. Improvements include upgrading Internet connectivity, buying faster servers, reducing web applications' memory footprints, or investing in a content delivery network.

What are other important reasons to improve web performance?

  • Increased traffic from natural business growth or advertising campaigns.
  • Snappy response times required by modern browser techniques such as AJAX.
  • Google's plan to factor page load times into search rankings.
  • Increased use of video via embedded Flash and, eventually, HTML5.

There is a cheaper way to improve web site performance: optimize the content. That means reducing the use of heavy graphics, Flash files, and client-side JavaScript, and shrinking HTML and CSS file sizes. It may seem contradictory, but ultimately content dictates page loading times, and trimming it improves the web browsing experience.
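
A cheap way to tell whether content optimization is paying off is to measure a page's transfer size and total load time before and after the changes. Here's a quick sketch with curl; the URL is a placeholder:

curl -s -o /dev/null -w "bytes: %{size_download}  seconds: %{time_total}\n" http://www.example.com/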

Operating System and Web Apps Hacking

A lot can be learned from a hacker (albeit a convicted one). Here are some of his thoughts on OS and web application security:

Securing a system:

I keep my services to a minimum, and I keep them updated. On my Linux box I use custom kernel hardening patches to make memory corruption bugs pretty hard to exploit. OpenSSH is firewalled and only accepts a connection from your ip if you visit a custom port-knocking page on my webserver. Basically the only service listening is apache, without PHP.

On my desktop and laptop I don’t have any services listening at all.

Public computers:

… I try to avoid public computers. If I really have to log in from an untrusted terminal I use otp authentication.

Modern websites' security:

Not very secure. SQL-injections are everywhere.

Discovering SQL injection vulnerabilities:

I don’t know of any specific papers, SQL injection is such a simple concept so you can pick it up in a matter of hours. The best method of finding them manually is to simply insert ‘ and ” union select(..” at random in parameters and see if things break.
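
To make the quoted technique concrete, here's a hypothetical probe; the URL and parameter are made up. Injecting a stray quote into a parameter and spotting a database error in the response is the classic first sign of an injection flaw:

curl -s "http://www.example.com/product.php?id=1'" | grep -i "sql syntax"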

Local source disclosure vulnerabilities:

Yes, sure. You can do a lot with config.php + phpMyAdmin.

What to do on a hacked machine:

1) Find a custom admin interface.

2) Get read access to a db from an SQL-injection.

3) Find tables corresponding to the custom admin interface.

4) Crack the admin password.

5) Log in and upload a new picture, containing PHP.

6) Exploit buggy custom cron-scripts that delete directories in /tmp once a day.

7) Wait for exploit to trigger..

8) Infect a binary on an NFS-share.

9) Wait for someone to use the binary..

10) Enjoy access to the main servers.

Something like that 😉

Operating System:

Personally I use Linux. I don’t consider Linux especially secure, just look at the number of local kernel root vulns found in the last year. I do however know that this is because there are so many people auditing Linux every day. I’d rather use an OS that has a few serious public vulns each year than one where the vulns are still there but aren’t found.

If you make a new operating system, how long it takes for someone to exploit vulnerabilities depends on how secure your code is and how much someone would want to exploit it. A local root vulnerability in QNX isn't as “popular” as one in Linux, so more people are looking at Linux.

Tools used:

Exploits, network scanners, rootkits, google (perhaps the best network scanner).

And a voice recorder. They are essential when hacking banks.

More security holes:

Yes, I’ve written exploits for most types of bugs. Buffer overflows, format strings, int overflows. I have discovered some holes myself. Nowadays the most popular thing to audit is webapps. The age of remote root holes in popular ftpds is gone.

Government computers:

Personally I think that there are government agencies in the US, China, Russia etc. that have already backdoored each other to hell and back.

Stopping a hacker from coming in:

In short, if you have a network that is connected to the internet and someone wants to get in, they will eventually get in. If you are running the latest versions of all possible software you might think you are safe. But what if someone comes along with a 0day, or someone hacks the home computer of one of your administrators?

Tracing a hacker:

I got too comfortable with my setup and thought I was untraceable. It turns out that, given enough incentive, some people will analyze router logs from all over the world for months until they find you.

PHP:

Make sure whatever PHP software you are using is always up to date. PHP stuff has a tendency to be written very poorly. Install some custom hardening patches like Grsec.

… I’m a big fan of Python. It’s much easier to write insecure software in PHP than in Python.

Security Industry:

I think it has become less about knowledge and innovation and more about hype. Extreme hype. Everyone wants to make money off their name. Bugs become a commodity that is sold to companies that charge subscription fees for advance notice, etc.

Personally I am a blackhat. I loathe the cesspool of inflated egos that is the computer security industry. Therefore, I would never ever advise them to become “whitehats”. As for a more rewarding way to use their skill and curiosity, I can’t think of a good answer. Hacking into computers is simply the most rewarding experience I have ever had. I don’t see it as a problem if you are hacking big companies or governments for the sake of adventure, you are not out to hurt people.

Just make sure not to make money from your hacking, be it selling out to the security industry or selling botnet-stuff to russians. Both will destroy your passion.

Sysadmins:

I understand that ultimately some admin will have to take care of cleaning up after the breach, but it’s a part of their job. If one of the main reasons not to hack is that some administrator, whose job it is to maintain the servers, has to do his job.. I just don’t see that as a very compelling reason not to hack.

Hacking:

The incentive was the thrill of breaking into something that could sometimes have taken over a month of preparation. Looking at information that you weren’t supposed to be looking at. I suppose it’s the same feeling you get when solving any complex problem. It’s better than sex. I mostly worked alone, and I was not hired for anything.

Operating System hacking preference:

I almost exclusively hacked *NIX machines. Mostly Linux and Solaris, but also a lot of IRIX/HP-(S)UX/AIX. I would however definitely say that it is easier to hack a Windows PC, given their history of remote “root” vulnerabilities in default services.

OpenBSD is not secure at all. At least they changed the text on their front page to “Only two remote holes in the default install, in a heck of a long time!”. There's a reason Theo de Raadt has been hacked a number of times; his ego is enormously inflated. OpenBSD is 10 years behind grsec, for example.

The most important part is the anti-exploitation techniques like ASLR, PIE, etc. What I meant to say was that GRsec has always been in the forefront when it comes to those. GRSec, RBAC and SELinux also have MAC capabilities but these are extremely rarely used correctly and to their full extent, since they are so hard to configure right.

IE6 Still Lingering

W3Schools puts IE6 usage at about 15% as of May 2009, and IE6 is still in active use in enterprise environments.

Usage is steadily dropping because of the wide acceptance of Firefox, and because corporations are proactively upgrading to IE7 or IE8. The number will change dramatically once enterprises adopt Windows 7 as the new standard for productivity machines.

Some websites have already taken steps to prevent IE6 from loading their site.  I can only applaud their efforts.

IE6 Denial Image

URL Rewrite Examples

One of the most common webmaster tasks is working with the Apache mod_rewrite module. It's a flexible and efficient way to redirect URLs, useful for fixing non-functional URLs, moving domain names, or renaming directories.

Below is a list of some frequently used mod_rewrite rules.

Note the [R=301] flags on the rules, which issue a 301 Permanent Redirect. This is commonly used to preserve the SEO ranking of an older site that has moved to a new one.

Simple redirect:

RewriteRule ^/sub/dir/home.html$ /sub/dir2/page.html [R=301,L]

Redirect http://domain.com to http://www.domain.com. This is especially useful when an SSL certificate is registered to the www.domain.com name. Note that the rule captures the request path, and mod_rewrite carries the query string over automatically:

RewriteCond     %{HTTP_HOST}    ^domain\.com$      [NC]
RewriteRule     ^(.*)$          http://www.domain.com$1      [R=301,L]

To capture multiple path segments and pass them along as query-string arguments, use the following.

RewriteRule ^([^/]*)/([^/]*)/([^/]*)$  /sub/program.jsp?arg1=$1&arg2=$2&arg3=$3 [L]

For redirects based on the URL's query string, match against QUERY_STRING. Note that the destination URL may contain spaces if it is enclosed in quotes.

RewriteCond %{QUERY_STRING} ^id=2234$
RewriteRule ^/sub/dir/product.html$ "/sub/dir3/description.html?prodid=vac pro" [L,R=301]

Redirects can also be conditional. For example, redirect everything except requests for one particular page.

RewriteCond %{REQUEST_URI} !/sub/dir/important.html$
RewriteRule ^/sub/dir/.*$ /main/dir/home.html [L,R=301]

With the above rule, the original URL may have carried a query string. To strip it, add “?” to the end of the RewriteRule target. For example:

RewriteCond %{REQUEST_URI} !/sub/dir/important.html$
RewriteRule ^/sub/dir/.*$ /main/dir/home.html? [L,R=301]
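
A quick way to verify any of these rules after reloading the configuration is to request the old URL with curl and inspect the status line and Location header (domain.com is the placeholder used throughout):

curl -sI "http://www.domain.com/sub/dir/anything.html?x=1" | egrep -i "^HTTP/|^Location:"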

There are many more examples out there, and writing a comprehensive mod_rewrite guide is a full-time job, so this list will continue to grow.

Photo Credit: Luke Seeley

Custom 404 Page Using JBOSS

Having a custom “page not found” (404) page is an important modification for any website. It enhances the user experience by presenting an easy-to-understand message.

Setting up a user-friendly error page is simple with the Apache web server. Just add a line to httpd.conf pointing at a static HTML document:

ErrorDocument 404 /the404_page.html

With a JBOSS (or Tomcat-like Java container) application server, it's slightly trickier: the error page has to be configured per web application. The change is made in the application's web.xml file, with these entries:

<web-app>

  <error-page>
    <error-code>404</error-code>
    <location>/the404_page.html</location>
  </error-page>

</web-app>

For the root directory, modify the web.xml in the ./deploy/jboss-web.deployer/ROOT.war/WEB-INF directory.

When testing this setup in Firefox and Opera, the custom 404 page shows up properly.

However, Internet Explorer shows a “The webpage cannot be found” message instead. This is an IE feature that substitutes Microsoft's version of a “friendlier” error message. In this case, we want to disable it so the custom 404 page shows up. It can be done via Internet Options -> Advanced tab:

Option in IE to Suppress Custom 404 Error Page

Update: Microsoft's Help & Support site states that if the 404 error page is larger than 512 bytes, IE will not show the friendly message. So the custom page must be padded beyond 512 bytes; a simple one-liner isn't enough.
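
A quick sanity check with curl, pointed at a URL that doesn't exist on your site, shows both the status code and the size of the error page actually served; the size should exceed 512 bytes:

curl -s -o /dev/null -w "%{http_code} %{size_download} bytes\n" http://www.example.com/no-such-page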

Now that the applications are set up to serve a custom error page, it's worth studying examples of beautiful 404 page designs to further improve the user experience.