November 17, 2016


As an individual and as a small business owner, I generally try to buy the cheapest storage that will do the job at any given time.  I simply don’t have the luxury to massively overbuy, amortize that purchase over three or five years, and then do it all over again.  The enterprise approach to IT just can’t work for me here.

I buy what storage I need, when I need it.  The storage fills up relatively quickly, and then I buy more.  Sound familiar?  Sometimes, even enterprises end up doing this.  Unfortunately for me, this approach has two major problems.

The first problem is that of storage Tetris.  Because no individual device has access to the totality of my storage capacity or performance capabilities, I need to worry about where I put workloads, backups, and so forth.  Maybe that’s fine if you work in a company with unlimited resources, but every minute I waste on storage analysis, workload sizing, and the like is a minute I could spend doing something that actually makes me money.

The second problem I face is that storage devices, like me, age.  As I age, I notice the passage of years less and less.  A storage device that seems “new” to me can easily be three or five years old.  How many years can I get out of a storage device these days?

This last problem is a complicated beast.  In the enterprise, IT teams are used to throwing away perfectly good hardware simply because an arbitrary aging point has been reached or vendor support has run out.  I don’t have that luxury.

I can’t afford to throw out storage devices and buy new ones unless I can consolidate multiple units into a single one, and that simply isn’t possible.  Affordable drive capacities have not advanced much in the past few years.  Today’s storage units really aren’t that much cheaper per GB than three-year-old or even five-year-old devices.  The cost of performance may have dropped dramatically with the price of flash, but the cost of capacity is still ruinous.

[Image: Trevor’s infrastructure before ioFABRIC]

The end result is that my lab has a number of different devices that provide storage.  Some go fast, some hold lots, and some nights I dream terrible dreams of LUNs and storage Tetris.  Backups take forever.  It’s generally miserable.

For all the wonderful types of storage I have available to me, managing that storage is still a pain.

Vicinity

ioFABRIC offered me the opportunity to test their software, including one of their prototype physical appliances.  The combination is starting to change not only how I implement the storage I have, but how I think about storage.

I used to work with storage as individual islands.  I would have to assign workloads to individual devices, juggle LUNs for each workload, and generally spend more time on storage than I’d ideally like.

Vicinity allows me to add all my storage to a single storage fabric.  I don’t have to fuss about backups, because the policy levels handle that.  I don’t have to worry about what should or shouldn’t be kicked up to the cloud, because the QoS handles that.

Storage used to be a thing I managed.  It was a multi-tentacled nightmare that stood in the way of the simple requirement to get real work done.  Now it’s just a resource that I request as needed, and return to the fabric when I’m done.

If I need to remove a device from the fabric – for example, because I need to do bare metal testing against an NVMe node – then I simply remove it.  The fabric detects the loss of capacity and reconverges in order to meet data resiliency requirements.  When I’m done, I return the device to the fabric, and the fabric absorbs it again.  In the case of the NVMe and high-RAM nodes, the fabric quickly promotes blocks back onto their storage, speeding up the whole fabric.
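
To make that reconvergence behaviour concrete, here’s a minimal sketch in Python of how a fabric might re-replicate blocks when a device leaves.  The names and logic are my own invention for illustration, not anything from ioFABRIC; Vicinity’s actual placement engine is opaque to me and certainly far more sophisticated.

```python
# Hypothetical sketch only: these names and this logic are mine, not
# ioFABRIC Vicinity's.  It illustrates the idea of "reconverging" to meet
# data resiliency requirements after a device leaves the fabric.

MIN_COPIES = 2  # assumed policy: every block lives on at least two devices

def remove_device(fabric, device):
    """Remove a device, then re-replicate any block left under MIN_COPIES."""
    departing_blocks = fabric.pop(device)  # blocks held on the leaving device
    for block in departing_blocks:
        holders = [d for d, blocks in fabric.items() if block in blocks]
        while len(holders) < MIN_COPIES:
            # Copy the block to the emptiest device that doesn't hold it yet.
            candidates = [d for d in fabric if block not in fabric[d]]
            if not candidates:
                raise RuntimeError(f"cannot restore resiliency for {block}")
            target = min(candidates, key=lambda d: len(fabric[d]))
            fabric[target].add(block)
            holders.append(target)

# Three devices, every block already stored twice.
fabric = {
    "nvme-node": {"b1", "b2"},
    "nas-1":     {"b1", "b3"},
    "nas-2":     {"b2", "b3"},
}
remove_device(fabric, "nvme-node")  # pull the NVMe node for bare metal testing
print(fabric)  # b1 and b2 were re-copied; everything is back at two copies
```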

[Image: Trevor’s infrastructure after ioFABRIC]

What has really blown my mind, and saved me time, is LUN ingestion.  Anything that was iSCSI on my network can be ingested into the fabric without having to transfer files around.  This was absolutely critical for migration into the fabric, but it’s also useful for ongoing operations.

Sometimes I’ll work on software on an isolated unit, or a customer SAN full of workloads will arrive for analysis.  Instead of pointing traditional backup software at it and taking hours to migrate, I can be up and running in minutes.

Ingesting a LUN into Vicinity means assigning it a profile.  All profiles have a minimum data resiliency of two copies, so the very act of ingesting the LUN makes that data highly available.  If I pick the right policy, regular snapshots of that data are taken and kicked up to the cloud.  A few mouse clicks replace hours of work.
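
To give a feel for what a profile carries, here’s a small hypothetical sketch.  The field names are mine, not Vicinity’s actual schema; they just capture the policy knobs described above: a two-copy minimum, a snapshot cadence, and whether snapshots get kicked up to the cloud.

```python
# Hypothetical sketch only: these field names are illustrative, not
# ioFABRIC Vicinity's actual profile schema.
from dataclasses import dataclass

@dataclass
class StorageProfile:
    name: str
    min_copies: int = 2                # every profile enforces two copies minimum
    snapshot_interval_hours: int = 24  # how often snapshots are taken
    snapshots_to_cloud: bool = False   # whether snapshots are pushed to the cloud

    def __post_init__(self):
        if self.min_copies < 2:
            raise ValueError("all profiles require at least two copies")

# Assigning a profile at ingest time is what makes the LUN's data highly
# available and, with the right policy, snapshotted off to the cloud.
customer_san = StorageProfile(
    name="ingested-customer-san",
    snapshot_interval_hours=4,
    snapshots_to_cloud=True,
)
```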

I operate a high-data-churn environment.  Storage units are constantly entering and leaving the lab, and I am always changing up my workloads.  Vicinity simply saves me time.  ioFABRIC has even promised NFS export and ingestion to match the capabilities they offer with iSCSI, ridding me of the need to ever work with LUNs again.

I can’t speak for anyone else, but I find all of the above rather noteworthy.

For tips and tricks on how to do this in your own environment, watch the Stressed by Storage Mess? webinar I did with a panel of IT experts and the ioFABRIC CTO.  It offers a more technical discussion of my experiences, with great questions and answers from the CTO.
