Why do we need SANs any more when virtualised application servers, virtualised storage controllers and virtualised storage can all co-exist in a single set of racks?
Storage area networks (SANs) came into being so that many separate physical servers could each access a central storage facility at block level. Each server saw its own LUN (logical unit number) chunk of storage as being directly attached to it, even though that storage was actually reached across a channel: the Fibre Channel fabric tying the SAN together.
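To make the block-level point concrete, here is a minimal sketch, assuming a Linux server on which a Fibre Channel or iSCSI LUN has surfaced as a local block device; the device path /dev/sdb is hypothetical and the script needs root to run. The application simply reads bytes at an offset, exactly as it would from a local disk; the fabric in between is invisible at this layer.

# Minimal sketch: block-level access to a SAN LUN from the host's point of view.
# Assumes Linux, with the LUN presented as the hypothetical device /dev/sdb.
import os

DEVICE = "/dev/sdb"      # hypothetical device node backed by a SAN LUN
BLOCK_SIZE = 4096        # read one 4 KiB block

fd = os.open(DEVICE, os.O_RDONLY)
try:
    os.lseek(fd, 0, os.SEEK_SET)       # seek to the first block
    block = os.read(fd, BLOCK_SIZE)    # read it, exactly as for a local SATA/SAS disk
    print(f"Read {len(block)} bytes from {DEVICE}")
finally:
    os.close(fd)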
Nowadays our servers are vastly more powerful: they have multiple processors, typically x86 ones, spread across several sockets, with each processor having many cores and each core capable of running several threads. These processor engines are managed and allocated to applications by a hypervisor such as VMware's ESX. Each core runs an application inside an O/S wrapper such as Windows or Linux, and the application itself can be multi-threaded.
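As a rough illustration of that sockets-times-cores-times-threads arithmetic, here is a small sketch that counts them on a Linux host by parsing /proc/cpuinfo; the field names used are the standard x86 Linux ones, and the script is an assumption-laden example rather than a general-purpose tool.

# Minimal sketch, assuming a Linux x86 host: count sockets, cores and
# hardware threads by parsing /proc/cpuinfo.
sockets, cores, threads = set(), set(), 0
entry = {}

def tally(e):
    global threads
    threads += 1                                        # one logical CPU per entry
    sockets.add(e.get("physical id", "0"))              # socket the core sits in
    cores.add((e.get("physical id", "0"), e.get("core id", "0")))

with open("/proc/cpuinfo") as f:
    for line in f:
        if ":" in line:
            key, _, value = line.partition(":")
            entry[key.strip()] = value.strip()
        elif entry:                                     # blank line ends one CPU entry
            tally(entry)
            entry = {}
if entry:                                               # file may not end with a blank line
    tally(entry)

print(f"{len(sockets)} socket(s), {len(cores)} core(s), {threads} hardware thread(s)")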
Blade systems such as those from HP can cram more than a hundred cores into a collection of 1U rack shelves. These servers need to communicate with client systems, with storage facilities, and with the wider world via networking.
Local client access is a given: Ethernet rules. Storage access has been provided by a Fibre Channel SAN, with smaller enterprises using a cheaper, less complex and less scalable iSCSI SAN, or perhaps a filer or two. Filers have been remarkably successful at providing VMware storage; witness NetApp's wonderful business results over the past few quarters.
We now have mid-range storage that swings both ways: unified storage providing both file and block access. That looks as if it makes the virtualised storage choice more difficult, but in fact a reunification could come about by using the storage array's controller engines to run applications themselves.
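From a host's point of view, "unified" simply means the same array answers over two personalities. The sketch below, assuming the array exposes an NFS share mounted at the hypothetical path /mnt/unified_share and a LUN surfaced as the hypothetical device /dev/sdc, shows that only the access method differs: on the file side the array owns the file system layout, on the block side the host does.

# Minimal sketch of file-plus-block access to one unified array, assuming
# Linux with an NFS mount at /mnt/unified_share and a LUN at /dev/sdc
# (both paths hypothetical; reading the raw device needs root).
import os

NFS_FILE = "/mnt/unified_share/vm_note.txt"   # file access: the NAS side of the array
BLOCK_DEV = "/dev/sdc"                        # block access: the SAN side of the array

# File path: write via the array's NAS personality (the array owns the file system).
with open(NFS_FILE, "wb") as f:
    f.write(b"stored via NFS\n")

# Block path: read a sector via the array's SAN personality (the host owns the layout).
fd = os.open(BLOCK_DEV, os.O_RDONLY)
try:
    first_sector = os.read(fd, 512)
finally:
    os.close(fd)

print(f"Wrote a file over NFS and read {len(first_sector)} bytes from the LUN")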