Chart: which one is easier to understand?


As I was reading the following InformationWeek report, I saw this chart, and I realized it didn’t give me a good sense of how much data was in each type of storage. Did people have a lot of data in iSCSI pools, or a little? I found it hard to decipher.

Before:

Original Chart

After:

Area chart

Now you can split the chart at the 50% mark and see that there is very little data in iSCSI and a good deal residing in DAS. What’s your take? Which is easier for you?
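For the curious, here is the kind of transformation a 100% stacked area chart is built on. The numbers below are made up purely for illustration (they are not the survey’s actual data): normalize each year to percentages, then stack the bands cumulatively so each technology’s share is easy to read off the chart.

```python
# Hypothetical share-of-storage numbers (NOT the survey's real data),
# just to show the math behind a 100% stacked area chart.
years = ["2007", "2008", "2009"]
raw = {                        # arbitrary illustrative units per storage type
    "DAS":    [50, 45, 40],
    "NAS":    [20, 22, 24],
    "FC SAN": [25, 25, 24],
    "iSCSI":  [5,  8,  12],
}

# Normalize each year to percentages of that year's total.
totals = [sum(series[i] for series in raw.values()) for i in range(len(years))]
pct = {k: [100 * v[i] / totals[i] for i in range(len(years))]
       for k, v in raw.items()}

# Cumulative stacking: each entry is the upper boundary of that band,
# which is what an area chart actually draws.
stacked, running = {}, [0.0] * len(years)
for name, series in pct.items():
    running = [running[i] + series[i] for i in range(len(years))]
    stacked[name] = list(running)

print({k: [round(x, 1) for x in v] for k, v in stacked.items()})
```

Feeding the `stacked` boundaries to any charting tool gives the area chart above; the thickness of each band is its percentage share, which is exactly what the original bar chart made hard to see.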

Ethernet Reformed: iSCSI, FCoE, DataCenter Ethernet


I don’t post about technology very often, but here goes. Time to chime in with some industry-related info.

There is a new battle in the storage space: FCoE (Fibre Channel over Ethernet) vs. iSCSI (Internet SCSI). iSCSI is growing and gaining market share against the FC incumbents, direct-attached storage, and network-attached storage.

Here is an excerpt from Doug Rainbolt’s response to Chris Mellor’s piece on the death of iSCSI.

To argue that FCoE is superior because of 10GbE and superior delivery mechanisms built around DCE, is dubious at best. It is way too early to say with confidence what the future of FCoE will be. We’re seeing support for 10GbE today with iSCSI so making a GbE iSCSI comparison to 10GbE FCoE is silly. When DCE is resting upon a noisy environment, how efficient will it be? Can it even operate? The point is that we need to compare the technologies in real world environments, from end to end. I personally think, as I’ve indicated earlier, that both technologies will co-exist for some time. Hopefully, the market can be left to decide which solutions solve real problems.

Doug, I have to agree with you, the market should decide which solutions to choose to solve their problems.  FCoE solves an important problem:  vendors providing FC infrastructure are losing revenues to IP SAN solutions and they need to compensate.  😀

There are a lot of competing ideas and ways to take advantage of a unified IP infrastructure. There is a place and an application for each of these protocols, and hopefully we can come up with a way to make datacenter management easier, whether that means a new protocol, friendlier APIs, or standards-based management tools. At the end of the day, customers are looking for ways to extend their investments, deploy new technologies quickly, and, well, get down to business. If these protocols help, so be it!

I don’t intend to discuss the virtues and applications of iSCSI here, but as it goes in technology, formats come and go all the time. In the end I hope the end users can choose the winner, and not the industry.

De-Dupe: is it SAN ready?


I tend to keep my posts around my job duties, not my industry, but I came across this fun post on Data Domain’s blog, so I thought it was time to chime in. (No comments on their blog. :D)

The gist of the post? Large vendors spent a few years pretending deduplication was unnecessary, and it came back to haunt them later. Now it is a key requirement for their customers. I think dedupe really illustrates the huge changes in the storage industry over the past five years or so. There has been a host of disruptive technologies: iSCSI, clustered block storage, tiered storage, snapshots, disk-to-disk backup, and deduplication, all of which have changed the conversation on building storage infrastructure.
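For readers new to the idea, here is a minimal sketch of block-level deduplication: chop the data into fixed-size blocks, key each block by its SHA-256 digest, and store each unique block only once. Real products add variable-length chunking, compression, and persistent indexes; this only shows the core trick.

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep only unique ones,
    indexed by SHA-256 digest (the basic idea behind block-level dedupe)."""
    store = {}    # digest -> block bytes (unique blocks only)
    recipe = []   # ordered digests needed to rebuild the original stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        recipe.append(digest)
    return store, recipe

def reassemble(store, recipe):
    return b"".join(store[d] for d in recipe)

# Highly repetitive input (think nightly full backups) dedupes very well.
data = b"A" * 4096 * 100 + b"B" * 4096 * 2
store, recipe = dedupe(data)
assert reassemble(store, recipe) == data
print(f"{len(data)} bytes stored as {len(store)} unique 4 KB blocks")
```

The 102-block stream collapses to 2 stored blocks, which is why backup workloads were the first place dedupe paid off.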

Before, companies bought a huge system up front: you anticipated your future needs, and it cost big bucks. If you chose wrong at the beginning, it would haunt you a few years down the road. Today, overall capacity is growing more quickly: instead of doubling every 36 months, needs double every 9 months. Smaller organizations have the same needs as large ones, but smaller budgets.
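A quick back-of-the-envelope check on what that faster doubling means over the same three-year window:

```python
# Capacity growth over 36 months under the two doubling rates.
months = 36
old = 2 ** (months / 36)   # doubling every 36 months -> 2x in 3 years
new = 2 ** (months / 9)    # doubling every 9 months  -> 16x in 3 years
print(old, new)            # 2.0 16.0
```

An 8x difference in required capacity over one purchasing cycle is exactly why guessing your needs up front stopped working.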

Today’s storage goals: scale easily and inexpensively, use storage resources efficiently, simplify storage management, and pay as you grow, not up front. Newer vendors have heard this message loud and clear and are developing accordingly. Most larger vendors are still pushing big-iron storage systems, and this has left a lot of space for innovative vendors to flourish and grow, since they developed according to the new reality.

I am looking forward to the next disruptive storage technology: we have a lot of data to deal with these days.

Back to my post title. So does de-dupe make sense for your SAN? Do SANs need to be smarter? Is the next phase in storage consolidation centralizing all host-based disk functions in the network? Clearly the industry has been back and forth on this: intelligent switches have come and gone, and EMC’s Invista is looking a little too much like Windows Vista. Maybe SANs are on their way to being replaced by DANs (data area networks), where the network (and the storage arrays) will have insight into the data and act accordingly. Is this what users want?