
Network Storage – Getting the most from your filesystem

Published by Mike Solinap
on March 10, 2011

In my last blog entry, I mentioned that I would discuss how to roll your own network attached storage device. At first this might sound trivial: take any commodity PC hardware, throw a large disk in there, install Linux, configure NFS, done. Not so fast. There are numerous considerations that must be taken into account to build a secure, reliable server that performs well.

This week I’ll be focusing on what I believe to be one of the most important considerations when building a network attached storage server — the filesystem.

Most modern filesystems have enough features to suit our needs. A system administrator would typically want to be able to do the following:

  • Easily resize the filesystem
  • Reliably recover the filesystem in the event of a system crash
  • Keep filesystem performance at a consistent level
  • Not worry about disk fragmentation
  • Maximize usable disk space

As network consultants, we provide network management services for clients who need IT infrastructure solutions. At one of our clients, however, we came across a special set of requirements. The client captures network data on the order of 20 gigabytes per day. This data then gets parsed and inserted into a postgres database. At 20GB per day, the storage requirements implied by their retention period are huge. This presents two problems. First, network captures are highly compressible; if the filesystem could store these captures in a compressed state transparently, users would not need to spend time compressing them separately. Second, with such a large database, how can a consistent backup be taken in a reasonable amount of time?

Luckily, ZFS came to our rescue. ZFS is a filesystem developed by Sun, but unfortunately, due to a conflict between the GPL and CDDL licenses, a Linux kernel-based ZFS port has not been released yet. Some progress has been made by the http://zfsonlinux.org/ project, but I'm not sure it's production ready yet. Some of ZFS's most powerful features include:

  • Storage pools (Similar to LVM)
  • Transparent compression (lzjb and gzip)
  • Snapshots
  • Deduplication
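To give a sense of how these features are enabled in practice, here is a sketch of the basic administration commands. The pool name, dataset names, and Solaris-style device names below are hypothetical; adjust them for your hardware and platform.

```shell
# Create a storage pool named "tank" from eight drives in a raidz2
# configuration (survives the loss of any two drives)
zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 \
    c0t4d0 c0t5d0 c0t6d0 c0t7d0

# Create a dataset for the network captures and enable
# transparent gzip compression on it
zfs create tank/captures
zfs set compression=gzip tank/captures

# Take a point-in-time snapshot of the dataset
zfs snapshot tank/captures@2011-03-10

# List snapshots and the space they consume
zfs list -t snapshot
```

Compression applies only to data written after the property is set, so it is best to enable it when the dataset is created.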

The snapshot feature played an important part in backing up the large postgres database. Previously, the only way to get all data files into a consistent state was to shut down the database completely and then copy the files off to another server or to tape. With several terabytes of data, however, this would mean hours of downtime. With snapshots, on the other hand, the database remains running and all files are captured in a consistent state. To the database, a restore from a snapshot looks like a crash, and it will use crash recovery to come back online. Depending on how your application handles transactions, this might not be acceptable.
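Assuming the entire database cluster (data directory and write-ahead log) lives on a single ZFS dataset — a requirement for the snapshot to be crash-consistent — a backup along these lines is possible. The pool, dataset, and host names here are hypothetical:

```shell
# Snapshot the dataset holding the postgres data directory.
# ZFS snapshots are atomic, so every file is captured at the
# same instant while the database keeps running.
zfs snapshot tank/pgdata@nightly

# Stream the snapshot to another machine for safekeeping,
# without any database downtime
zfs send tank/pgdata@nightly | ssh backuphost zfs receive backuppool/pgdata

# Clean up old snapshots once they are no longer needed
zfs destroy tank/pgdata@nightly
```

If the WAL were on a separate dataset, the two snapshots would not be taken at the same instant and crash recovery could not be relied upon.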

The transparent compression feature was equally important. A 3U server that we had available supported eight 3.5″ drives, for a total of 16TB of raw capacity. With network captures as the main data source, the client could expect upwards of 25TB of usable compressed space. With 3TB drives becoming more common, the amount of potential space available in a 3U footprint keeps growing.
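ZFS reports the achieved ratio per dataset, so a figure like the one above can be verified directly; the roughly 25TB of usable space from 16TB raw corresponds to a compressratio of about 1.5x. The dataset name is again hypothetical:

```shell
# Check the compression setting and the ratio actually achieved
# on the capture dataset
zfs get compression,compressratio,used tank/captures
```

This is worth monitoring over time, since the ratio depends entirely on how compressible the incoming data is.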

Unfortunately, these “free” features do come at a price. For instance, if you are primarily a Linux shop, then running FreeBSD or OpenSolaris to get ZFS may not be feasible. Also, to take advantage of transparent compression, you will need a more powerful file server than would typically be required, since compression trades CPU time for disk space. But if you can live with these limitations, ZFS provides a wealth of benefits.

Subscribe to our blog to keep informed on server storage solutions and other areas of IT Infrastructure.

Michael Solinap
Sr. Systems Integrator, SPK
