March 19, 2018


ioFABRIC Vicinity users know that they configure performance, capacity, protection, and cost objectives for applications, and those objectives drive automation across the storage infrastructure – an order of magnitude simpler and faster than individually configuring performance media and capacity on each storage silo. We will take a closer look at how and why ioFABRIC makes this happen.

Treat all storage resources as one

First, Vicinity analyzes the performance of individual disks and other resources, then pools all of this storage across sites and clouds and presents it in a single management console.
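To picture what that pool looks like, here is a rough sketch in Python – the class and field names are illustrative only, not Vicinity's internal model:

```python
from dataclasses import dataclass

# Illustrative only: a toy model of a pooled, heterogeneous inventory of
# profiled resources spanning sites and clouds.
@dataclass
class StorageResource:
    name: str          # e.g. "site-a-ssd-01" or "cloud-object-1"
    protocol: str      # "block", "file", or "object"
    site: str          # data center, branch office, or cloud region
    iops: int          # profiled performance of the individual device
    capacity_gb: int
    cost_per_gb: float

# Everything lands in one pool regardless of protocol or location.
pool = [
    StorageResource("site-a-ssd-01", "block", "site-a", 80_000, 2_000, 0.25),
    StorageResource("site-b-nas-01", "file", "site-b", 5_000, 50_000, 0.05),
    StorageResource("cloud-object-1", "object", "aws-us-east", 1_000, 500_000, 0.02),
]
```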

It doesn’t matter if the resource speaks a Block, File, or Object protocol – heterogeneity is a given and Vicinity abstracts everything, presenting a Block and File interface based on service levels.

Industry-unique service level objectives

Service levels determine the appropriate set of capacity and performance resources for each application, where data should be placed or migrated among those resources, where storage service access points should sit, and which storage is the most cost-effective for providing that service level.
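As a hedged illustration – the objective names and values below are hypothetical, not Vicinity's actual configuration syntax – per-application service levels amount to a small set of targets like these:

```python
# Hypothetical per-application objectives; the product exposes these through
# its management console rather than code.
objectives = {
    "billing-db": {
        "min_iops": 50_000,
        "max_latency_ms": 1,
        "protection_copies": 2,
        "max_cost_per_gb": 0.30,
    },
    "archive-share": {
        "min_iops": 500,
        "max_latency_ms": 20,
        "protection_copies": 1,
        "max_cost_per_gb": 0.05,
    },
}
```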

Manage even the fastest data growth

Growth always leads to change. Vicinity intelligently adapts and maintains services behind the scenes as compute and data needs grow, and workloads increase.

Use Objectives to automate storage selection

Vicinity automatically selects the best set of storage resources for an application and migrates active and stale data across those resources, automating storage placement.

For example, stale data requires far less performance than active data. Once Vicinity determines a volume's data is stale, that data is auto-migrated to a storage location and resource that is still fast enough to meet the appropriate performance objectives – at the least cost.

If Vicinity sees an application is not meeting its performance objectives, it migrates the active data to faster resources such as available flash storage – and if two flash devices can each meet the performance objectives, it selects the least costly of the two.
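In spirit, the selection rule boils down to "the cheapest resource that still meets the objective." A minimal sketch, with made-up names and numbers rather than the actual placement engine:

```python
def pick_resource(candidates, min_iops):
    """Return the least costly candidate that still meets the IOPS objective.

    Illustrative logic only; the real placement engine also weighs latency,
    protection, and locality.
    """
    eligible = [r for r in candidates if r["iops"] >= min_iops]
    if not eligible:
        return None  # nothing qualifies; add resources or relax the objective
    return min(eligible, key=lambda r: r["cost_per_gb"])

# Two flash devices both meet the objective; the cheaper one wins.
flash_a = {"name": "flash-a", "iops": 100_000, "cost_per_gb": 0.40}
flash_b = {"name": "flash-b", "iops": 120_000, "cost_per_gb": 0.30}
print(pick_resource([flash_a, flash_b], min_iops=50_000)["name"])  # flash-b
```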

How performance works

Disks across the infrastructure that satisfy the application's performance objectives are selected for membership in the volume. Application performance is monitored at the disk level, and if Vicinity detects that an application is not meeting its performance objectives, it responds with one or more actions: adding more disks to the volume, reallocating active data, removing stale data, or reducing the amount of data on a disk.
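Conceptually, that feedback loop looks something like the sketch below – the function and argument names are hypothetical, and inside Vicinity this monitoring runs continuously rather than on demand:

```python
def corrective_actions(observed_iops, target_iops, has_stale_data, can_add_disks):
    """Hypothetical sketch of the responses described above, in rough order
    of preference. Not Vicinity's actual decision code."""
    if observed_iops >= target_iops:
        return []  # objectives met; nothing to do
    actions = []
    if has_stale_data:
        actions.append("demote stale data to free up fast media")
    if can_add_disks:
        actions.append("add more disks to the volume")
    actions.append("reallocate active data onto faster resources")
    actions.append("reduce the amount of data on the busiest disk")
    return actions

print(corrective_actions(8_000, 20_000, has_stale_data=True, can_add_disks=False))
```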

A key factor that determines the total IOPS and bandwidth of a storage service is how the application uses data access points (i.e., iSCSI targets for block traffic).

Because Vicinity presents a shared pool, increasing the number of access points linearly increases available IOPS and bandwidth performance, whereas latency is reduced by placing data closer to each access point or on faster media such as flash.
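As a back-of-the-envelope illustration – the per-target figure below is made up – the scaling is simple multiplication:

```python
# Hypothetical: assume each iSCSI access point can sustain about 25,000 IOPS.
per_access_point_iops = 25_000

for access_points in (1, 2, 4):
    total = access_points * per_access_point_iops
    print(f"{access_points} access point(s) -> ~{total:,} IOPS available")
# 1 access point(s) -> ~25,000 IOPS available
# 2 access point(s) -> ~50,000 IOPS available
# 4 access point(s) -> ~100,000 IOPS available
```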

Vicinity actively monitors data placement, and if a problem is detected, it auto-migrates data closer to newly created access points to maintain proper performance.

Make your storage go faster

These features that intelligently monitor and adjust storage to match application demands come in handy during regular operations and also when new flash resources are added.

Rather than configuring and provisioning a new flash array, simply connect it and allow Vicinity to profile the device, add it to the pool, and start using it while applications remain live and running. Vicinity does this by auto-migrating active data to faster storage based on each application's needs, so performance-starved workloads immediately benefit from new flash resources.

To get the best performance, add flash drives directly to an existing Vicinity node – in effect creating your own ‘distributed’ all-flash array capable of providing up to 1 million IOPS per node (at 1ms).

How capacity works

So that's performance. Capacity pooling works very much the same way, except that stale data is identified and moved to slower or cheaper hardware.

All cloud, SAN, NAS, or direct-attached storage capacity is collected and presented in the management console.

By constantly shifting data to the appropriate resource, Vicinity eliminates the hands-on labor of scaling out or up.
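A hedged sketch of the idea – the threshold and tier names are illustrative, not product code – is that data untouched for long enough drifts down to the cheapest tier that can hold it:

```python
from datetime import datetime, timedelta

def choose_tier(last_access, now=None, stale_after_days=30):
    """Toy tiering rule: recently used data stays on fast media,
    data that has gone stale moves to bulk or cloud capacity."""
    now = now or datetime.now()
    if now - last_access <= timedelta(days=stale_after_days):
        return "flash / fast tier"
    return "bulk or cloud capacity tier"

print(choose_tier(datetime.now() - timedelta(days=2)))   # flash / fast tier
print(choose_tier(datetime.now() - timedelta(days=90)))  # bulk or cloud capacity tier
```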

Incorporate cloud

Capacity scaling must take cloud resources into account. Vicinity treats cloud storage capacity both as a general resource and, uniquely, as an overflow volume for when on-site storage is full.

With the overflow feature, data can auto-migrate to the cloud when other available resources are full. Once more on-premises capacity is added, Vicinity can bring data back to keep cloud storage costs low.
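A minimal sketch of that overflow behavior, with invented thresholds and names:

```python
def place_new_data(onprem_free_gb, write_size_gb, reserve_gb=100):
    """Toy overflow rule: write on premises while there is headroom, spill to
    cloud when local capacity is effectively full, and repatriate later once
    more on-premises capacity is added."""
    if onprem_free_gb - write_size_gb >= reserve_gb:
        return "on-premises pool"
    return "cloud overflow volume"

print(place_new_data(onprem_free_gb=5_000, write_size_gb=200))  # on-premises pool
print(place_new_data(onprem_free_gb=250, write_size_gb=200))    # cloud overflow volume
```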

With this feature there is never a risk of running out of capacity.

And of course, capacity and performance can be scaled independently: if you want more capacity, add bulk capacity storage anywhere; if you want more performance, add high-performance storage anywhere, but ideally near the performance-hungry compute.

Manage your capacity and performance without complexity

Performance and capacity management becomes increasingly complex as infrastructure grows – but Vicinity handles it for you. Humans focus on the high-level objectives that keep applications running, while Vicinity automates the complex implementation details that make it happen, freeing you up for more important tasks.

Want the more technical details on how Vicinity does this? Download our Capacity and Performance Tech Brief and get the full story.
