Data Efficiency at Scale

The initial wave of data efficiency features for primary storage focuses on silos of information organized around individual file systems. The deduplication and compression features some vendors provide are limited by the scalability of the underlying file systems; in effect, the file systems themselves become silos of optimized data. For example, NetApp deduplication can't scale beyond 100 TB, because that's the size limit of its WAFL file system.
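
To make the "silo" point concrete, here is a minimal, illustrative sketch (not any vendor's actual implementation) of block-level deduplication where the fingerprint index is scoped to a single file system. The class and volume names are hypothetical; the point is that identical blocks written to two different file systems are stored twice, because neither index can see the other's fingerprints.

```python
import hashlib

class FileSystemDedupIndex:
    """Toy per-file-system deduplication index (illustrative only)."""

    def __init__(self, name, block_size=4096):
        self.name = name
        self.block_size = block_size
        self.index = {}   # block fingerprint -> block id (scoped to this file system)
        self.blocks = []  # unique blocks actually stored

    def write(self, data: bytes) -> list:
        """Split data into fixed-size blocks and store only blocks not seen before."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.index:
                self.index[fp] = len(self.blocks)
                self.blocks.append(block)
            refs.append(self.index[fp])
        return refs

# Two separate file systems: the same payload is deduplicated within each,
# but duplicated across them, because the dedup domain ends at the file system.
fs_a = FileSystemDedupIndex("vol_a")
fs_b = FileSystemDedupIndex("vol_b")
payload = b"x" * 16384
fs_a.write(payload)
fs_b.write(payload)
print(len(fs_a.blocks), len(fs_b.blocks))  # 1 unique block stored in each silo
```

The same scoping is what ties the dedup domain to the file system's size ceiling: the optimization can never span more data than the file system itself can hold.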

Continue reading at Virtualization Journal



This post is brought to you as part of the VMware Software Defined Enterprise, which provides resources and solutions for the software defined world. Join the conversation on Facebook and Twitter.
