Archive for June, 2009

Fun (Problems) With WordPress and 301 Redirect

Today I wanted to take a few minutes and update the rest of the links on the site to this nice new blog setup. It didn’t take long to change all the links on the pages to the new URL, but I wanted to do more.

I was trying to see how I could redirect the old static blog page to WordPress.  I quickly found some tips which said NOT to use HTML-based redirects, as they have been too abused by spammers.  Instead, I decided to follow the tips for using .htaccess, since my site is hosted on Linux with Apache.

The example for redirecting URLs from mydomain.com to www.mydomain.com worked immediately and nicely; however, the instructions I found for using “Redirect 301 oldpage full-newpage-URL” did not work.  My browsers quickly reported “too many redirects” errors.

As it turns out, my WordPress settings were still redirecting to mydomain.com/wordpress (without the “www.”). So I did have an infinite loop of redirects.

A quick change on the “Settings” page, and the “WordPress address (URL)” and “Blog address (URL)” were both using “www.” in the URL and working fine with the .htaccess rules.
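For anyone trying the same thing, here is a minimal sketch of what the two .htaccess rules can look like. The domain and the old page name are stand-ins for my real ones, and the non-www rewrite assumes mod_rewrite is available:

    # Send mydomain.com to www.mydomain.com (requires mod_rewrite)
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^mydomain\.com$ [NC]
    RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,L]

    # Send the old static blog page to the new WordPress address
    Redirect 301 /blog.html http://www.mydomain.com/wordpress/

Note that the 301 target uses the “www.” form of the address, to match the WordPress settings described above.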

Of course, now I’m way beyond “a few minutes”.


A Fleet of Blog Posts

Someone may ask, “Why are all these posts so close together?”  The answer is that they weren’t written that way, just posted that way.

I’ve had many of these sitting in local files for a while, and I’m getting them posted now that I’ve moved to a WordPress blog setup.  My old setup was all static pages and clumsy.

So, I’m catching up on many things I’ve written or half-written.  You should see more soon.

FileMaker Server Performance Part 3: Re-Thinking Backups

There’s a problem with specifying a single external hard drive as the only backup destination: if that drive fails, there is no backup until somebody notices and replaces the drive.

Previously, I set all my backup schedules to copy from Drive1 (which holds the databases) to a folder on Drive2.  Other scripts would run periodically to synchronize that folder on Drive2 to Drive3 and Drive4.  But that made Drive2 a single point of failure for the whole setup.

Now, each external hard drive is specified separately, in turn, in the server schedules, and we back up to the boot drive as well. I’m pretty sure somebody will notice if the boot drive fails.
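To make that concrete, the schedule destinations now look something like this (the drive and folder names are illustrative, not the real ones; Drive1 is still where the live databases sit):

    Hourly backup   ->  filemac:/Drive2/FMBackups/Hourly/
    Daily backup A  ->  filemac:/Drive3/FMBackups/Daily/
    Daily backup B  ->  filemac:/Drive4/FMBackups/Daily/
    Nightly backup  ->  filemac:/BootDrive/FMBackups/Nightly/

If any one of those drives dies, the other schedules keep producing backups on their own.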

The schedules do send e-mail updates, but somebody does have to read them every day. And FileMaker Server 9 doesn’t help by sending an email on every execution, whether it succeeds or fails.  People start to ignore messages of success.

So, my updated outlook: I’ll go ahead and take a performance penalty for backing up to the boot drive at least a few times per day.


FileMaker Server Performance Part Two

The case of the Mac mini mentioned in part one made me think a bit more about drive operations and backups. When the users first reported slow performance, one of them would go look at the server and find it in the middle of a backup operation. That backup was not one of FileMaker’s; it was an Automator script copying the databases in the backup folder to external backup drives. (It was triggered hourly by a product named “Proxi”.)

It quickly got all the blame since, when the user quit it, performance returned to normal.  So, if that simple copy operation was too much for the drive to handle while also doing FileMaker operations, perhaps FileMaker’s own backup operation is also too much.  (Just much less visible than the Automator script.)

Thinking again about the number of operations a drive can perform, and how many we’re asking it to do, we take a new look at backups. So, what might be in a backup operation? In a simple version, one might think of disk-read operations to read the entire set of database files and a set of disk-write operations to write those files to a destination.  If the source files and the destination files are on the same drive, that drive must do all that work while also doing the work of serving a FileMaker database.

So, how do we keep the number of disk operations on the database drive to an absolute minimum? Simple: we have all backups go directly to another hard drive. It doesn’t have to be an external drive; it can be another internal drive. No matter what you do, you won’t be able to take away the read operations required to do a backup. But at least the write operations can go to another drive.
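To put rough, purely illustrative numbers on it: suppose the database set is 4 GB and the drive can sustain about 40 MB per second while it is also seeking around to answer FileMaker requests. A same-drive backup means roughly 4 GB of reads plus 4 GB of writes, about 200 seconds of raw transfer, with the heads constantly jumping between the source files, the destination files, and whatever blocks FileMaker needs right now. Send the writes to a second drive, and the database drive is left with only the 4 GB of reads.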

In the case of our Mac mini, we cut the operations down by backing up directly to an external hard drive.


Getting More Performance from FileMaker Server

I thought I should take a moment and jot down some ideas and tests I’ve done lately to help some support customers improve their FileMaker Server performance. I did not create the FileMaker solutions used by these customers, but I do support their Macs, including the ones used as FileMaker Servers. (If any of this helps you with your server, I’d love to hear about it.)

The first piece of advice that FileMaker offers is that we should run FileMaker Server by itself on a computer. That is good advice, but why is it good advice? And what if you don’t have the option? Well, here is a little of my thinking on that.

In one case I’ve worked on lately, there is a massive Intel-based X-Serve with eight cores, 10 gigs of RAM, and a dual-drive mirrored RAID (okay, a software RAID), running multiple services, including file services and FileMaker Server. In the other case, there is a Mac mini acting as a dedicated FileMaker Server. Both of these were getting multiple reports of slow performance (as in taking many times longer than normal to perform a common operation of the solution).

In the case of the X-Serve, we have way more than enough processing power to handle everything asked of it, and enough RAM (can you ever have too much RAM?). The 100-megabit network is an obvious candidate for the bottleneck, but one computer is connected by FireWire at 800 megabits. It normally gets much better FileMaker performance than the others, yet its user reports slow performance whenever everyone else does. Since the FireWire connection is not shared with anything else, this pretty much points to the hard drives as the common bottleneck.

Taking another look at the X-Serve RAID drives, we see that the system is booted from the RAID, FileMaker is being served from the RAID, and file sharing is being served from the RAID. (Other services are also being hosted from the RAID, but are not using a significant amount of resources.) File services quickly becomes a big culprit when we learn that some large files are being opened, closed, and copied during the day. Of course, the system has things it needs to do, like opening and closing applications, swapping out virtual memory, and so forth.

In the case of the Mac mini, we see that the hard drive serving up FileMaker is the internal 2.5-inch (laptop) drive: a drive which is great for low power consumption and low heat, but not great for performance.

In the case of both of these machines, I started to think about the ability of the drives to perform a certain number of operations per second. When you think about it that way, you realize that all these services are competing with each other for operations each second. If they all hit at once, we have an overwhelmed drive, and slow performance. (More complicated, but realistic, is the performance penalty for switching from read-mode to write-mode and back.)

So, how can we help FileMaker Server get the maximum number of operations per second from a drive? Give it a drive to itself, so there is no competition.

In both cases we decided to try a new hard drive. For the X-Serve, we moved the FileMaker databases to the unused 80 gig hard drive which shipped with the X-Serve. For the Mac mini, we moved the FileMaker databases to an external FireWire drive.

The first problem involved with this was specifying where FileMaker Server should find its databases. For FileMaker Server version 9.0v2, it was just about impossible; it has an extremely difficult time accepting paths other than its own defaults. FileMaker Server version 9.0v3, however, accepted them with much less headache. You still must know exactly how to specify one (“filemac:/VolumeName/folder/path/”): you must begin with “filemac:”, then a slash, then the volume’s name (not to be confused with a Unix-style path, which begins with “/Volumes/”).  Finally, you must end the path with a slash.
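Putting that together, with a made-up volume name of “FMData”, the form FileMaker Server accepted is the first line below; the second, Unix-style form is the one it will not take:

    filemac:/FMData/FileMakerData/Databases/
    /Volumes/FMData/FileMakerData/Databases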

You must also know to set permissions on the target folder to allow reading and writing for the owner account “fmserver” and/or for the group “fmsadmin”. In the case of an external drive, you can use the Finder’s “Get Info” dialog to “ignore permissions” on the drive. (But that makes it read-write for everyone.)
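If you would rather not open the whole drive up like that, a couple of Terminal commands can set the ownership and permissions on just the database folder instead (the path here is the same made-up example as above):

    sudo chown -R fmserver:fmsadmin "/Volumes/FMData/FileMakerData/Databases"
    sudo chmod -R 770 "/Volumes/FMData/FileMakerData/Databases"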

As a cheap way to step up performance a little more, we looked at partitioning the drives so that the databases would stay on the outer tracks, which are supposed to give higher performance. Since these are Macs, we used Disk Utility to partition the drives. The first partition (the top one in the list) is the one on the outermost tracks. Giving it around 20 percent of the drive seemed right for the current needs and future expansion of the database files. (Your mileage may vary.)

To check myself on this, I ran the benchmark tool in “Drive Genius” from ProSoft Engineering.  It can’t really say whether there will be any difference in FileMaker’s end performance, but some of the benchmarks did show faster speeds on the outer-tracks partition. That was at least encouraging.

With all the pieces in place, we worked carefully to move the files.  In the “Database Server” configuration page, we selected the “Default Folders” tab, turned on “Use additional database folder”, and carefully specified our path to the new drive partition’s database folder (which had nothing in it yet).  We ran a backup schedule in FileMaker, closed the database files, then shut down the server.  Once it was shut down, we made a direct copy of the files in the Finder as a manual backup. We then copied the files to their new home drive and moved the originals to a new folder so FileMaker could not accidentally reopen them later.

At this point we restarted the FileMaker server and there were our databases, up and running. (Wipe sweaty brow.)

At this point, it is useful to point out that we went back and changed the name of the database folder on the new drive to reflect its location. Since the admin console still shows the default “Databases” folder, it isn’t good to name the new folder “Databases”. We named the one for the X-Serve “Databases80Gig”. All the databases and the sub-folders with databases show up just as they did under the default databases folder, but now under “Databases80Gig”.

Now, since we don’t have any real tools to see if our performance is truly better, we will just go with the anecdotal evidence that in both cases, the users are complaining less about the servers being slow. (They’re complaining less to me.)  It seems fair to say things are closer to their expectations for performance.


Favorite Quotes 2

“There are a million ways to lose a work day, but not even a single way to get one back.”
– Tom DeMarco and Timothy Lister, “Peopleware: Productive Projects and Teams”


Favorite Quotes 1

“Potential is nice, but you don’t get kinetic until some work is done.” – Ellen Sellers, science teacher, speaking about energy, but making us think all the same


A Technique for Finding iPhone App Crashes at objc_msgSend

(if this helps you, even to save a little time, please leave a comment)

Recently, I was working on an iPhone application for a client when I ran headlong into a crash in objc_msgSend.  It was a hair-pulling bit of frustration for me, since for a long time it seemed nearly impossible to debug.  Every time I encountered it, this was all I saw on the stack:

Debugger Stack for objc_msgSend problem

I saw several posts online which gave me more than a few hints.  It seemed very clear that I had over-released something, but all the zombies and other debugging flags didn’t really make it clear which thing was getting over-released, or where.  I could tell what it was, but not much else, until I struck on this idea while taking a walk: if the crashes are always related to the release of an autorelease pool, then I need to control the autorelease pool myself. I can’t wait for this _NSFireDelayedPerform thing to decide to release it. (Steve Maguire’s “Writing Solid Code” has been one of my favorite industry books for a long time.)

So in essence, I started bracketing all my suspect code with an autorelease pool I created and released myself, something like this (the method name below is just a stand-in for whatever code was under suspicion):
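    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    // ... the code under suspicion goes here; usually I wrapped a single
    // call to a routine or a message I wanted to check ...
    [self doSuspectedWork];   // placeholder name, not a real method

    [pool release];   // if something inside over-released an object, the
                      // crash now happens right here instead of later in
                      // _NSFireDelayedPerform's pool drain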

Sure enough, my debugger stacks went from looking like the one above to looking like this:

Debugger Stack With Manual Pool Release

That’s a LOT better. Now I know where to look.  Further application of layers of pools narrowed down the target pretty quickly.

In the end, I found that my mistake was an array allocation in an initializer. The call itself wasn’t wrong; however, I never actually filled the allocated array entries, and when the array was released later, it caused my crash.  Changing the code to allocate the array later, when I had items to fill it, fixed the problem.

There is one little problem with this technique: you have to be careful how you bracket things with the pool-release calls.  If you release a pool and then try to use something that was autoreleased into that pool, you may be using something that has already been released.  The easiest way I found to use the technique was to put the pool calls around a call to a routine or a message.  When I had to use them inside a block of code, I had to be careful that the allocations and releases that used the pool all fell inside the bracketing calls.

I hope this helps someone else find a solution.  If it did, please leave a comment.
