Tuesday, November 18, 2008

Airport Limericks

This is what happens when you arrive at the gate too early.

There once was a skycap named Christine,
Who was wanted at gate fifteen.
A wheelchair was needed,
So an elderly passenger could be seated.
Their experience flying Delta was exceeded.

There once was a passenger named Nick,
Who was wanted at the gate mighty quick.
They had checked his baggage,
Now the aircraft is attracting rabbits.
Evidently it's not advised to pack cabbage.

Friday, October 10, 2008

PHP Profiling with webgrind

I've returned from a blogging hiatus. Read: not feeling so lazy today. Profiling PHP scripts has always been one of those things I know should be done, but there never seemed to be time to set it up. Today I put off a lot of work to focus on getting profiling working in my dev setup. The environment consists of OS X 10.4, PHP 5.1.6, Apache 2.2.4, Xdebug 2.0.3 and webgrind.

First things first - webgrind requires the json PHP extension to be enabled to work. As of PHP 5.2.0 json is enabled by default, and if you happen to be so fortunate as to be running that version or later, your setup tasks are reduced significantly. However I'm running PHP 5.1.6 because that's the setup in the production environment I get paid for. Someday I plan to have multiple PHP versions running on different apache ports, but not today. This means the json extension has to be installed and configured in PHP manually.

Now, I spent the latter part of the day getting json working in my MAMP setup, and tried so many different configuration options along the way that I've lost track of them. Let me just give you some pointers if it's still not working for you after you do a

pecl install json

on the command line.

Check your paths to php, phpize and php-config. Make sure that if you issue a phpize command, the right version of phpize is going to execute. I had a conflict because OS X comes with PHP 4.4.8 installed at /usr/bin/php, and I had compiled and installed PHP 5.1.6 at /usr/local/bin/php. However /usr/bin was in my path and /usr/local/bin was not. So unless you want to specify the absolute path to php every time, replace /usr/bin/php with a link to /usr/local/bin/php (same for the other PHP executables).
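Here's the gist of it on my machine (the paths and version strings are from my setup - adjust for yours, and repeat the link step for phpize and php-config):

```
computer$ which php
/usr/bin/php
computer$ php -v
PHP 4.4.8 ...
computer# mv /usr/bin/php /usr/bin/php4
computer# ln -s /usr/local/bin/php /usr/bin/php
computer# php -v
PHP 5.1.6 ...
```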

Check that the json extension is installed by the command:
php -m

You will see output similar to:
[PHP Modules]
bz2
ctype
curl
date
dom
gd
hash
iconv
json
libxml
mbstring
mysql
openssl
pcre
PDO
posix
Reflection
session
SimpleXML
soap
SPL
standard
tokenizer
wddx
xdebug
xml
xmlreader
xmlwriter
xsl
zlib

[Zend Modules]
Xdebug


If it's not listed it's not installed. Alternatively you can load a page on your localhost that calls phpinfo() and see if json is listed as one of the extensions.

Check php.ini. There should be a line

extension=json.so

The json.so file should be located in the directory shown by the command:
php-config --extension-dir

You also have to tell Xdebug to profile. Add these lines to php.ini if not present:

xdebug.profiler_enable=1
xdebug.profiler_enable_trigger=1

xdebug.profiler_enable_trigger allows you to instruct Xdebug to profile a page on demand by adding XDEBUG_PROFILE=1 to the query string of the URL (e.g. http://localhost/index.php?XDEBUG_PROFILE=1).

Restart apache after making changes to php.ini.

By default Xdebug will output profiling information in files named like cachegrind.out.<pid>.
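If you want to control where the files land and how they are named, these php.ini settings do it (to the best of my knowledge these are also the Xdebug 2.0 defaults):

```
xdebug.profiler_output_dir = /tmp
xdebug.profiler_output_name = cachegrind.out.%p
```

The %p expands to the process id, so each Apache child writes its own file.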

webgrind parses these cachegrind.out files into human-readable performance info. To install webgrind, download it from http://code.google.com/p/webgrind/ and uncompress it into your apache htdocs directory (i.e. http://localhost/webgrind/). The initial view should look like this:

[screenshot: webgrind's initial view]
Select one of the cachegrind files from the selection list and click the update button. You should get some pretty performance statistics similar to this:

[screenshot: webgrind's performance statistics]
Congratulations if you get this far. You've fought the good fight, pecl be damned. There is essentially no documentation for webgrind, so you are kind of on your own as to what to do from here. Play with the interface for a few minutes though and it will start to make sense. Hopefully you can find some code that could be improved, and you can be a hero.


Useful links:
webgrind http://code.google.com/p/webgrind/
Xdebug http://www.xdebug.org/
PECL http://pecl.php.net/

Webgrind: A Web Frontend for Xdebug
http://jokke.dk/blog/2008/04/webgrind_a_web_frontend_for_xdebug

Wednesday, August 20, 2008

screen

I've been using screen for about two weeks and it's working out quite nicely for me. Before, I would have between four and six terminal windows open, constantly hitting Command+~ to cycle between them. The effort expended to learn its commands was not extreme, and worth it. The switch to screen from multiple terminal windows was partially fueled by the inherent 'leet-ness of it, and also by several recently blogged-about tutorials:

http://www.kuro5hin.org/story/2004/3/9/16838/14935

http://www.redhatmagazine.com/2007/09/27/a-guide-to-gnu-screen/
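For the curious, here are the handful of commands and keystrokes that replaced my six terminal windows (these are the default bindings; the session name "work" is just my habit):

```
screen -S work        # start a new named session
#   Ctrl-a c          create a new window
#   Ctrl-a n / p      next / previous window
#   Ctrl-a "          pick a window from a list
#   Ctrl-a d          detach, leaving everything running
screen -r work        # reattach later, even from another login
```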

So if you're the type of person who likes to rock out on the command line I recommend you give it a go. What have you got to lose?

Thursday, July 17, 2008

Blame it on sendmail

A new day, a new challenge: automated emails from some webservers recently just stopped arriving. The webservers are (almost) identically configured, load balanced, and hosting the same sites. You should also know that mail destined for other domains was getting through just fine (new user registration confirmations), but mail going to our own domain was not (support form emails, internal notifications).

The first place to look was maillog:

Jul 17 10:15:35 www3 sendmail[25808]: m6HHCt7S025798: to=, ctladdr= (500/404), delay=00:02:20, xdelay=00:02:20, mailer=esmtp, pri=120342, relay=mail.example.com. [10.0.0.22], dsn=4.0.0, stat=Deferred: Connection timed out with mail.example.com.

That seemed strange, because mail.example.com is on the same LAN as www3 and is pingable. I could telnet to port 25 of mail.example.com and get a message through that way. After hitting some dead ends (and wrong ends) messing with sendmail configuration files, the problem presented itself to me: it was a DNS problem. Now, I had explicitly defined the mail server with its internal IP address in /etc/hosts, but it seems sendmail was ignoring /etc/hosts, consulting a name server, and getting the (external) IP address of said mail server. The mail server is not reachable by its external IP from hosts on the internal network, hence the timeouts.

A little modification of /etc/resolv.conf to add an internal name server - plus a restart of the sendmail service, though I don't know if that was absolutely necessary - and messages started flowing. Inboxes started exploding with queued mail from the last five days. I'm sure there is a way to configure sendmail to use /etc/hosts, but frankly editing sendmail configuration files scares me, so I'm leaving well enough alone.
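For the record, the change amounted to this (the name server address here is made up; yours will differ):

```
# /etc/resolv.conf - list an internal name server first
nameserver 10.0.0.2

# then kick sendmail for good measure:
computer# service sendmail restart
```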

Thursday, July 10, 2008

Websites that make you pick your country must die!

Plug in GeoIP or something - for fsck's sake man, it's not rocket science.

Die canon.com die!

eat my bizzalls fedex.com!

ups.com you have no chance to survive!

suck it intel.com (also sites with flash intros must be destroyed!) - ditto linksys.com

Wednesday, July 9, 2008

MySQL DNS Problem

The call came in a little after 3AM - the site is down. I stumbled down to the office and, dollars to doughnuts, bringing up the site in a browser resulted in a spartan yet blood pressure-raising "Server Error" message. My first inkling was to suspect a problem connecting to the database server, and that was confirmed - "Too many connections". Mysqld wouldn't politely shut down, so it had to be killed. At this point you could copy and paste this blog post here, as the exact same thing was happening - mysqladmin processlist was exploding with "unauthenticated user" messages. Reassuringly, the client IP addresses were all the web servers', so I don't believe it was some sort of denial of service attack. I followed the advice in said blog and added entries in /etc/hosts for each of the web servers. Restarting mysqld after doing so brought everything back to a working state - whew! Why this happened at this particular moment is still a mystery though, and a little troubling, as who knows when it may happen again.
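The /etc/hosts additions looked something like this (names and IPs are illustrative, not our real ones). Another option I ran across, if all your GRANTs use IP addresses rather than hostnames, is skip-name-resolve, which tells mysqld not to do reverse DNS lookups at all:

```
# /etc/hosts on the database server - one line per web server
10.0.0.11   www1
10.0.0.12   www2
10.0.0.13   www3

# or, in my.cnf under [mysqld]:
# skip-name-resolve
```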

Coincidentally earlier that day CERT issued a warning that some DNS implementations are vulnerable to cache poisoning. In the back of my mind I feared a world wide DNS exploit was in effect and our poor mysql server was a victim. So far it seems that was not the case.


Further reading
MySQL DNS Details
http://hackmysql.com/dns

MySQL unauthenticated login pile-up
http://rackerhacker.com/2007/08/16/mysql-unauthenticated-login-pile-up/

Multiple DNS implementations vulnerable to cache poisoning
http://www.kb.cert.org/vuls/id/800113

Stalled MySQL Logins
http://www.paperplanes.de/archives/2008/5/20/stalled_mysql_logins/

Bug #2814 multiple connections, database locking up.
http://bugs.mysql.com/bug.php?id=2814

Wednesday, June 18, 2008

Today's MySQL Drama

Here's the situation, homies. I had recently reinstalled OS 10.4 due to the problems I had with Tiger on my G4 Powerbook. I had a MySQL server running and needed to get it back online to get some work done. So I pulled down the latest binary release for Mac (5.0.27 as of this writing). Installation went off with nary a hitch, and after getting the datadir correct I was able to make a client connection to the server. Enter the problem - I could show databases/tables, but any query would return:

ERROR 1017 (HY000): Can't find file: './db/table.frm' (errno: 13)

Thankfully this post led me to the solution - permissions, d'oh!

computer$ perror 13
OS error code 13: Permission denied

Sure enough, the MYD, MYI and frm files of the database were owned by root with permission 640. Changing ownership to the mysql user brought the warm fuzzy. perror is your friend in the mix.
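The fix, for the record (the datadir path is from the stock Mac binary install - adjust to wherever yours lives):

```
computer$ cd /usr/local/mysql
computer$ sudo chown -R mysql data/
```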

Tuesday, June 17, 2008

Compiling Apache on Mac OS X 10.4

Compiling apache 2.2.4 on Mac OS X 10.4 (Tiger) caused some complaints. I used all the default configure options, so:

computer# cd /usr/local/src/httpd-2.2.4
computer# ./configure
... bunch of output...
checking for chosen layout... apr
checking for gcc... gcc
checking for C compiler default output file name...
configure: error: C compiler cannot create executables
See `config.log' for more details.
configure failed for srclib/apr
computer#

Of course config.log did not have any specific information about the C compiler and why the build failed. Several people have posted this same problem on various apache forums. The definitive answer for Mac users is to install the Xcode tools from the OS X installer DVD (they aren't installed with the OS. You have to open the Xcode Tools folder on the DVD and run XcodeTools.mpkg). Yes, it was that easy. Now apache got through configure, make and make install. The only other thing I did was to make a backup of the apache that comes with OS X (version 1.3.41) in case I ever need to use that version for some reason, and to make a link to my fresh install:

computer# mv /usr/sbin/httpd /usr/sbin/httpd_1.3
computer# ln -s /usr/local/apache2/bin/httpd /usr/sbin/httpd
computer# httpd -v
Server version: Apache/2.2.4 (Unix)
Server built: Jun 17 2008 14:43:30
computer#

Friday, June 13, 2008

Enable root on Mac

Update: Enabling the root user in Snow Leopard (OS X 10.6) has changed somewhat. See http://snowleopardtips.net/tips/enable-root-account-in-snow-leopard.html for how to do it.

I can understand why Apple probably intentionally put this feature in a pretty obscure place. But after having to prepend sudo to every command while getting a MySQL server instance up and running, I had had enough. To enable root so you can su in the terminal:

Launch the NetInfo Manager utility found in /Applications/Utilities

Security > Enable Root User

You will get a warning that the root password is blank, followed by a new password dialog box. After you set up a password you're good to go.
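If you'd rather not leave the terminal, I believe Tiger also ships a dsenableroot command that does the same thing - it prompts for your password and the new root password. I haven't tried it myself:

```
computer$ dsenableroot
```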

Tuesday, June 10, 2008

Browser Frustration of the Day

Page up/page down keys don't scroll the page up/down when the focus is in a text input.

Friday, June 6, 2008

Fun with pound and HTTPS

Recently it came to our attention that a page loaded over HTTPS was being flagged as a security risk because its included images, css and javascript files were being loaded over HTTP. A developer had recently made some user interface improvements on the page, so naturally I thought he had coded the new images to load over HTTP, causing the SSL warning. After much grepping through includes and header files, the cause turned out to be an incorrectly set base href tag in the header. We were using PHP's getenv function to check whether the ssl variable was set - and thus setting the base href appropriately. Upon examination of the output of phpinfo() for a page loaded over HTTPS, the ssl environment variable was not present. The lack of the variable seemed odd, so I checked the output of phpinfo() on our QA machine - it appeared there - and production and QA share the same code base. Hmmmmm.

Now, what I haven't told you is that our production web servers are load balanced by pound. Some googling revealed that pound decrypts HTTPS requests before dispatching them to a backend www server (so that's why you define the SSL certificate in pound.cfg. BTW - the .pem file pound wants to see is the concatenation of the site key + the site certificate + the intermediate certificate. I lost a day trying to find this info.). Ok, mystery solved. Now then, how to detect on the www servers that the original request was via HTTPS and not HTTP? More googling... turns out you can insert an HTTP header into the request pound sends to a backend server. Add these lines to pound.cfg:

AddHeader "X-Forwarded-Proto: https"

HeadRemove "X-Forwarded-Proto"
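In context, the relevant chunk of pound.cfg looks something like this (addresses, ports and the .pem path are made up for illustration; as I understand it, the HeadRemove is there so a client can't smuggle in its own X-Forwarded-Proto header):

```
ListenHTTPS
    Address 0.0.0.0
    Port    443
    # key + site certificate + intermediate certificate, concatenated
    Cert    "/etc/pound/site.pem"
    # strip any client-supplied copy, then add our own
    HeadRemove "X-Forwarded-Proto"
    AddHeader  "X-Forwarded-Proto: https"
    Service
        BackEnd
            Address 10.0.0.11
            Port    80
        End
    End
End
```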

Now we can check for X-Forwarded-Proto in the $_SERVER scope. Wait, there's more - PHP prefixes and uppercases header names, so it shows up as HTTP_X_FORWARDED_PROTO:

$request_type = isset($_SERVER['HTTP_X_FORWARDED_PROTO']) ? 'HTTPS' : 'HTTP';

And that's it, problem solved.