Regardless of vendor, the common architecture of a storage array typically consists of the following components:
- Front-End Ports
- Processors (CPU)
- Cache Memory
- Backend
- Storage Disks
Front-End Ports: These ports connect to the storage network and allow hosts to access and use exported storage resources. Front-end ports are usually FC or Ethernet (iSCSI, FCoE, NFS, SMB). Hosts that wish to use shared resources from the storage array must connect to the network via the same protocol as the storage array. So if you want to access block LUNs over Fibre Channel, you need a Fibre Channel host bus adapter (HBA) installed in the server.
Processors/Controllers: These are usually Intel CPUs; they run the array firmware and control the front-end ports and the I/O that flows in and out over them. They are also referred to as controllers.
Cache Memory: This is used to increase array performance, and in mechanical disk-based arrays it is absolutely critical to obtaining decent performance. Without cache, it is practically impossible to get reasonable performance from spinning disks.
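To make the role of cache concrete, here is a minimal sketch (not any vendor's actual implementation) of a least-recently-used read cache: a cache hit returns data immediately, while a miss falls through to a slow backend disk read. The `backend_read` callback is a hypothetical stand-in for the array's disk I/O path.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache sketch: a hit avoids a slow backend
    disk read entirely, which is why cache is critical on HDD arrays."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block address -> cached data

    def read(self, address, backend_read):
        if address in self.blocks:            # cache hit: fast path
            self.blocks.move_to_end(address)  # mark as recently used
            return self.blocks[address]
        data = backend_read(address)          # cache miss: slow disk I/O
        self.blocks[address] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)   # evict least recently used
        return data
```

A second read of the same block never touches the disk, which is exactly the effect the array cache provides.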
Backend: This may contain additional CPUs and the ports that connect to the drives, which make up the major part of the backend. Sometimes the same CPUs that control the front end also control the backend.
Ports and Connectivity
Servers in a data center communicate with a storage array through the ports on its front end, often referred to as front-end ports. The number and type of front-end ports depend on the type and size of the storage array. Large enterprise arrays can have hundreds of front-end ports, and depending on the type of array, these ports can use the FC, SAS, FCoE, or iSCSI protocols.
Servers can be connected to a storage array in several ways. Depending on the type of connection used (DAS, SAN, NAS), it is possible to have multiple paths between a host/server and the storage array.
Direct Attached Storage
In this type of connectivity, hosts connect directly to the storage array without any SAN switch. With direct attached storage, there can only be a one-to-one mapping of hosts to storage ports. For example, if the storage array has 6 ports, only 6 hosts can be directly attached to it. If multipathing is required, only 3 servers can be connected directly, because each server then uses 2 ports for the connection.
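The port arithmetic above can be sketched in a couple of lines. The function name is illustrative, not from any real tool; it just captures the rule that in DAS every host consumes its storage ports exclusively.

```python
def max_direct_attached_hosts(array_ports, paths_per_host=1):
    """In DAS, each host exclusively consumes `paths_per_host` storage
    ports, so the array's port count caps how many hosts can attach."""
    return array_ports // paths_per_host
```

With 6 array ports, a single path per host allows 6 hosts, while 2 paths per host allows only 3.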
SAN Attached Storage
In this type of connection, a SAN switch sits between the server and the storage array, allowing multiple servers to share the same storage port. The SAN switch is responsible for routing data access to and from the hosts and the storage.
It is essential that each server connecting to storage has at least two ports to connect to the storage network, so that if one fails or is otherwise disconnected, the other can still be used to access storage. Ideally these ports should be on separate PCIe cards rather than on the same card. Host-based multipath I/O software controls how data is routed or load balanced across these multiple paths and also deals with failed and flapping paths.
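As a rough illustration of what multipath I/O software does (a simplified sketch, not how any real driver such as Linux DM-Multipath is implemented), the following round-robin selector load balances I/O across paths and skips paths that have been marked failed:

```python
class MultipathIO:
    """Sketch of host-based multipath I/O: round-robin load balancing
    across available paths, with failover around failed paths."""

    def __init__(self, paths):
        self.paths = list(paths)   # e.g. one path per HBA port
        self.failed = set()
        self._next = 0

    def mark_failed(self, path):
        self.failed.add(path)

    def mark_restored(self, path):
        self.failed.discard(path)

    def select_path(self):
        """Return the next healthy path, skipping failed ones."""
        for _ in range(len(self.paths)):
            path = self.paths[self._next]
            self._next = (self._next + 1) % len(self.paths)
            if path not in self.failed:
                return path
        raise IOError("all paths to the storage array are down")
```

Real multipath drivers add policies beyond round-robin (failover-only, least queue depth, service time) and probe flapping paths before restoring them, but the core idea is the same: the host keeps doing I/O as long as at least one path survives.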
We will look at these techniques in more detail in future posts.
Previous: 2.4 What is a Storage Array