What is a Server or Compute System?
A compute system, or server, is a computing device that combines hardware, firmware, and system software to run business applications. Examples of compute systems include physical servers, desktops, laptops, and mobile devices. In a data center, applications are typically deployed on compute clusters for high availability and for balancing computing workloads. A compute cluster is a group of two or more compute systems (servers) that function together, sharing certain network and storage resources, and are logically viewed as a single system.
The compute systems used in building data centers are typically classified into three categories:
- Tower compute systems (servers)
- Rack-mounted compute systems (servers)
- Blade compute systems (servers)
A Rack-mounted Compute System, also known as a rack server, is a compute system designed to be mounted inside a frame called a “rack”. A rack is a standardized enclosure containing multiple mounting slots called “bays”, each of which holds a server in place with the help of screws. A single rack contains multiple servers stacked vertically in bays, thereby simplifying network cabling, consolidating network equipment, and reducing floor space usage. Each rack server has its own power supply and cooling unit. Typically, a console is mounted on a rack to enable administrators to manage all the servers in the rack.
A Blade Compute System, also known as a blade server, is an electronic circuit board containing only core processing components, such as processor(s), memory, integrated network controllers, storage drive, and essential I/O cards and ports. Each blade server is a self-contained compute system and is typically dedicated to a single application.
A blade server is housed in a slot inside a blade enclosure (or chassis), which holds multiple blades and provides integrated power supply, cooling, networking, and management functions. The blade enclosure enables interconnection of the blades through a high-speed bus and also provides connectivity to external storage systems. The modular design of the blade servers makes them smaller, which minimizes the floor space requirements, increases the compute system density and scalability, and provides better energy efficiency as compared to the tower and the rack servers.
Physical Components of a Compute System (Server)
A compute system’s hardware consists of processor(s), memory, internal storage, and I/O devices.
Processor: A processor, also known as a Central Processing Unit (CPU), is an integrated circuit (IC) that executes the instructions of a software program by performing fundamental arithmetical, logical, and input/output operations. A common processor/instruction set architecture is the x86 architecture with 32-bit and 64-bit processing capabilities. Modern processors have multiple cores (independent processing units), each capable of functioning as an individual processor.
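As a minimal sketch of how this looks from software, the following Python snippet (standard library only) asks the operating system for the processor architecture and the number of logical cores it exposes:

```python
import os
import platform

# Processor/instruction set architecture reported by the OS (e.g. 'x86_64' on a 64-bit x86 system).
print("Architecture:", platform.machine())

# Number of logical processor cores visible to the OS; on a multi-core
# processor this is greater than one.
print("Logical cores:", os.cpu_count())
```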
Random-Access Memory (RAM): The RAM, or main memory, is an IC that serves as volatile data storage internal to a compute system. The RAM is directly accessible by the processor, and holds the software programs being executed and the data used by the processor.
Read-Only Memory (ROM): A ROM is a type of non-volatile semiconductor memory from which data can only be read but not written to. It contains the boot firmware that enables a compute system to start, power management firmware, and other device-specific firmware.
Chipset: A chipset is a collection of microchips on a motherboard designed to perform specific functions. It manages processor access to the RAM and the GPU, and also connects the processor to different peripheral ports, such as USB ports.
Secondary storage: Secondary storage is a persistent storage device, such as a hard disk drive or a solid state drive, on which the OS and the application software are installed. The processor cannot directly access secondary storage. The desired applications and data are loaded from the secondary storage on to the RAM to enable the processor to access them.
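To illustrate that last point, here is a minimal sketch (the file name app.cfg is hypothetical): reading a file copies its contents from secondary storage into a RAM-resident object that the processor can then work on directly.

```python
# Hypothetical example: 'app.cfg' lives on secondary storage (e.g. an SSD).
# The read() call copies its contents into a RAM-resident bytes object,
# which the processor can then access directly.
with open("app.cfg", "rb") as f:
    config_bytes = f.read()   # data now resides in main memory (RAM)

print(len(config_bytes), "bytes loaded into memory")
```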
Also Read: Types of Storage Devices used in Storage Arrays
Based on business and performance requirements, cost, and the expected rate of growth, an organization has to make several important decisions about the compute system hardware to be deployed in a data center. These decisions include the number of compute systems to deploy; the number, type, and speed of processors; the amount of RAM required; the motherboard’s RAM capacity; the number and type of expansion slots on the motherboard; the number and type of I/O cards; and the installation and configuration effort.
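A back-of-the-envelope sizing calculation often drives the first of these decisions. The sketch below is purely illustrative; all of the workload and per-server figures are assumptions, not vendor guidance.

```python
import math

# Illustrative workload requirements (assumptions, not real sizing data)
required_vcpus = 320      # total virtual CPUs the applications need
required_ram_gb = 1280    # total RAM the applications need

# Illustrative capacity of one candidate rack server (assumption)
cores_per_server = 32
ram_gb_per_server = 256

servers_for_cpu = math.ceil(required_vcpus / cores_per_server)
servers_for_ram = math.ceil(required_ram_gb / ram_gb_per_server)

# Deploy enough servers to satisfy the tighter constraint, plus one spare
# for high availability in a compute cluster.
servers_needed = max(servers_for_cpu, servers_for_ram) + 1
print("Servers to deploy:", servers_needed)
```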
Logical Components of a Compute System (Server)
The operating system (OS): The OS is software that acts as an intermediary between the user of a compute system and the hardware. It controls and manages the hardware and software on a compute system. The OS manages hardware functions and application execution, and provides a user interface (UI) for users to operate the compute system. An OS also provides networking and basic security for the access and usage of all managed resources. It also performs basic storage management tasks while managing other underlying components, such as device drivers, the logical volume manager, and the file system. In addition, an OS provides high-level Application Programming Interfaces (APIs) that enable programs to request services.
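As a small sketch of programs requesting services through such APIs, the standard-library calls below are each serviced by the operating system on the program’s behalf:

```python
import os
import platform

# Each call below is a request to the OS for information or a service.
print("OS:", platform.system(), platform.release())   # OS identification
print("Process ID:", os.getpid())                      # identity of this process
print("Current directory:", os.getcwd())               # file system state
```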
Virtual memory: The amount of physical memory (RAM) in a compute system determines both the size and the number of applications that can run on it. Memory virtualization presents physical memory to applications as a single logical collection of contiguous memory locations called virtual memory. While executing applications, the processor generates logical addresses (virtual addresses) that map into the virtual memory. The memory management unit of the processor then maps each virtual address to a physical address. An OS utility known as the virtual memory manager (VMM) manages the virtual memory and the allocation of physical memory to virtual memory.
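The translation itself is simple arithmetic. The sketch below illustrates the idea with a hypothetical page table and an assumed 4 KiB page size: a virtual address is split into a page number and an offset, and the page number is looked up to find the physical frame.

```python
PAGE_SIZE = 4096  # assume 4 KiB pages, a common x86 page size

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 7, 1: 3, 2: 11}

def translate(virtual_address: int) -> int:
    """Map a virtual address to a physical address, as an MMU would."""
    page_number = virtual_address // PAGE_SIZE
    offset = virtual_address % PAGE_SIZE
    frame_number = page_table[page_number]   # missing entry would mean a page fault
    return frame_number * PAGE_SIZE + offset

# Virtual address 8200 lies in page 2 at offset 8, so it maps to frame 11.
print(translate(8200))   # -> 11 * 4096 + 8 = 45064
```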
Logical volume manager (LVM): The evolution of LVMs enabled dynamic extension of file system capacity and efficient storage management. The LVM provides optimized storage access and simplifies storage resource management. It hides details about the physical disk and the location of data on the disk, and it enables administrators to change the storage allocation even while an application is running.
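As a hedged example of that online extension on a Linux system using the LVM2 command-line tools, the sketch below grows a logical volume and its ext4 file system while it remains mounted; the volume name is hypothetical, and the exact commands depend on the distribution and file system in use.

```python
import subprocess

# Hypothetical device name; on a real system this comes from 'lvs' output.
logical_volume = "/dev/vg_data/lv_app"

# Grow the logical volume by 10 GiB (LVM2 'lvextend'), then grow the ext4
# file system on it to use the new space, while it stays mounted.
subprocess.run(["lvextend", "-L", "+10G", logical_volume], check=True)
subprocess.run(["resize2fs", logical_volume], check=True)
```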
File & File System: A file is a collection of related records or data stored as a single named unit in contiguous logical address space. Files are of different types, such as text, executable, image, audio/video, binary, library, and archive. A file system is an OS component that controls and manages the storage and retrieval of files in a compute system.
File systems may be broadly classified as disk-based, network-based, and virtual file systems.
Disk-based File System: A disk-based file system manages the files stored on storage devices such as solid-state drives, disk drives, and optical drives. Examples of disk-based file systems are Microsoft NT File System (NTFS), Apple Hierarchical File System (HFS) Plus, Extended File System family for Linux, Oracle ZFS, and Universal Disk Format (UDF).
Network-based File System: A network-based file system uses networking to allow file system access between compute systems. Network-based file systems may use either the client-server model, or may be distributed/clustered. In the client-server model, the file system resides on a server and is accessed by clients over the network. The client-server model allows clients to mount the remote file systems from the server. NFS for the UNIX environment and CIFS for the Windows environment are two standard client-server file sharing protocols. A distributed/clustered file system is a file system that is simultaneously mounted on multiple compute systems (or nodes) in a cluster. It allows the nodes in the cluster to share and concurrently access the same storage device. Clustered file systems provide features like location-independent addressing and redundancy. A clustered file system may also spread data across multiple storage nodes, for redundancy and/or performance. Examples of network-based file systems are Microsoft Distributed File System (DFS), Hadoop Distributed File System (HDFS), VMware Virtual Machine File System (VMFS), Red Hat GlusterFS, and Red Hat CephFS.
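A minimal sketch of the client side of the client-server model is shown below: an NFS export is mounted from a server and then accessed through ordinary file operations. The server name, export path, and mount point are hypothetical, and mounting requires root privileges on Linux.

```python
import subprocess

# Hypothetical NFS server and export; requires root privileges on Linux.
subprocess.run(
    ["mount", "-t", "nfs", "nfs-server01:/exports/projects", "/mnt/projects"],
    check=True,
)

# Once mounted, the remote file system is accessed like a local one.
with open("/mnt/projects/readme.txt") as f:
    print(f.read())
```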
Also Read: File Based Storage Systems (NAS) Overview
Virtual file system: A virtual file system is a memory-based file system that enables compute systems to transparently access different types of file systems on local and network storage devices. It provides an abstraction layer that allows applications to access different types of file systems in a uniform way. It bridges the differences between the file systems of different operating systems, without applications needing to know the type of file system they are accessing. Examples of virtual file systems are the Linux Virtual File System (VFS) and Oracle CacheFS.
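The sketch below illustrates that uniform-access idea: the same open()/read() code works whether the path sits on a local disk file system, a network mount, or a memory-based file system such as Linux procfs. The paths are illustrative and depend on the system.

```python
# The same open()/read() interface works regardless of the underlying
# file system, because the OS's virtual file system layer hides the details.
paths = [
    "/etc/hostname",              # local disk-based file system (e.g. ext4)
    "/mnt/projects/readme.txt",   # network-based file system (e.g. NFS mount)
    "/proc/meminfo",              # memory-based file system (Linux procfs)
]

for path in paths:
    try:
        with open(path) as f:
            print(path, "->", f.readline().strip())
    except OSError as exc:
        print(path, "->", exc)
```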