Sunday, 24 June 2018

HPE Superdome X

 


What is the HPE Superdome X?
The Superdome X is an enterprise-level x86 server designed to support mission-critical workloads that require maximum scalability and reliability. It is intended to run the most resource-intensive business processing, decision support, virtualization, and database workloads, including SQL Server, SAP, and Oracle. The Integrity Superdome X consists of a single compute enclosure containing one to eight BL920s Gen8 or Gen9 blades as well as interconnect modules, manageability modules, fans, power supplies, and an integrated LCD Insight Display. The Insight Display can be used for basic enclosure maintenance and shows the overall enclosure health. The compute enclosure supports four crossbar fabric modules (XFMs), which provide the crossbar fabric that carries data between blades.

To service any internal compute enclosure component, complete the following steps in order:
1. Power off the partition.
2. Power off all XFMs.
3. Disconnect the power cables from the lower power supplies.
4. Disconnect the power cables from the upper power supplies.

Each BL920s server blade contains two x86 processors and up to 48 DIMMs. Integrity Superdome X supports multiple nPartitions of 2, 4, 6, 8, 12, or 16 sockets (1, 2, 3, 4, 6, or 8 blades). Each nPartition must include blades of the same type, but the system can include nPartitions with different blade types.
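
As a rough illustration (a minimal Python sketch, not HPE tooling), the valid nPartition sizes follow directly from the two sockets and 48 DIMMs per BL920s blade described above:

SOCKETS_PER_BLADE = 2
DIMMS_PER_BLADE = 48
VALID_BLADE_COUNTS = [1, 2, 3, 4, 6, 8]

for blades in VALID_BLADE_COUNTS:
    # 1, 2, 3, 4, 6, or 8 blades -> 2, 4, 6, 8, 12, or 16 sockets
    print(f"{blades} blade(s) -> {blades * SOCKETS_PER_BLADE} sockets, "
          f"up to {blades * DIMMS_PER_BLADE} DIMMs")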

Integrity Superdome X provides I/O through mezzanine cards and FlexLOMs on individual server blades. Each BL920s blade has two FlexLOM (FLB) slots and three mezzanine slots.

The Integrity Superdome X compute enclosure supports two power input modules, using either single-phase or three-phase power cords. Connecting two AC sources to each power input module provides 2N redundancy for AC input and DC output of the power supplies. There are 12 power supplies per Integrity Superdome X compute enclosure: six installed in the upper section of the enclosure and six installed in the lower section.





Isn’t the Superdome X an Itanium server?
No. HPE markets a separate server called the Integrity Superdome 2 that is built around the Itanium chip and runs HP-UX. The HPE Superdome X is an x86 server that uses the Intel Xeon E7 v3 processor family and runs SLES, RHEL, Microsoft Windows Server 2012 R2, VMware vSphere, and CentOS. It will also be certified for Windows Server 2016 when Microsoft releases it.
What is the maximum scalability of the Superdome X?
The HPE Superdome X provides extreme scalability. In its maximum configuration it can support up to 16 sockets and 288 cores. You can configure the Superdome X with one to eight scalable BL920 Gen9 x86 blades. The maximum memory capacity is 3 TB per blade for a total of 24 TB of RAM for a fully configured Superdome X server.  SQL Server 2016 can scale to consume all of these cores and with Windows Server 2016, scalability will be up to 640 cores.
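
A minimal Python sketch of that arithmetic, assuming the largest 18-core Xeon E7 v3 SKU and the 3 TB of RAM per blade stated above:

CORES_PER_SOCKET = 18        # assumed top-bin Xeon E7 v3 part
SOCKETS_PER_BLADE = 2
MAX_BLADES = 8
RAM_PER_BLADE_TB = 3

max_sockets = MAX_BLADES * SOCKETS_PER_BLADE    # 16 sockets
max_cores = max_sockets * CORES_PER_SOCKET      # 288 cores
max_ram_tb = MAX_BLADES * RAM_PER_BLADE_TB      # 24 TB
print(max_sockets, max_cores, max_ram_tb)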

What availability features does the Superdome X have?
The HPE Superdome X is designed to provide five nines (99.999 percent) of availability. All key Superdome X hardware components are redundant and hot-swappable, including power supplies, fans, and I/O switches. The Superdome X uses a “firmware first” architecture that contains errors in the firmware before any corrupted data can reach the OS. In addition, the built-in Error Analysis Engine (EAE) constantly analyzes all possible hardware faults, predicts errors, and can automatically initiate recovery actions without any operator intervention.

What are nPars?
The Superdome X supports multiple hardware partitions that are called nPars. Each nPar partition can be completely electrically isolated from the other partitions. Using the HPE Superdome X nPar technology, you can effectively run multiple diverse workloads on the same server system and those workloads will not interfere with one another. For instance, you can run an instance of the SQL Server relational database in one partition and SQL Server Analysis Services and Reporting Services in another partition. Even though these workloads have very different characteristics, they would be completely isolated from one another just as if they were running on separate systems.


Which virtualization technologies are supported with Superdome X?
Superdome X is certified for Hyper-V, VMware vSphere, and KVM/RHEV virtualization.


HPE Integrity Superdome X Management
•    Superdome Onboard Administrator (OA)—see and manage the entire Superdome X system
•    iLO Management—remote access to the individual servers
•    HPE Smart Update Manager—firmware management and system updates.
•    HPE Insight Remote Support (7.x) software—24x7 remote monitoring, automated case creation, diagnosis, notifications, and connectivity to HPE Support.
•    HPE Insight Online and the mobile dashboard—monitor device health and alerts, contract and warranties, or service credits.



IBM XIV 2810/12-114


--42U 19" standard rack
--1 ATS, 3 UPSs, 15 modules, 12 drives per module, 1U management console (Chabuka); modules 4-5-6-7-8-9 are interface modules.
--Connect a laptop to the laptop port with DHCP enabled; the IP received is 14.10.202.1 (the laptop port acts as a DHCP server with IP 14.10.202.250).
-- The TA tool is required for initial configuration, code load, and various maintenance activities. Guided procedure login: technician/????????.
-- Logical configuration is done through the XIV GUI: admin/adminadmin or technician.
-- XCLI commands: state_list, monitor_redist, help, event_list, component_phaseout component=<component_id>,
--component_list filter=notok, component_test component=1:Module:10
-- fc_port_list, fc_connectivity_list logged_in=yes
--servicecenter.xiv.ibm.com for remote management; connect and disconnect with support_center_connect and support_center_disconnect.
--/dev/sda: CF configuration and root filesystem
--/dev/sdb: 37 GB from each disk, SX - Traces/Events/Cores
--/dev/sdc: 60 GB, 1 volume per Interface Module (x6)
-- Boot-up time is about 4 minutes.
-- Upgrade from 10.0 to 10.1 is disruptive.
-- Upgrades since 10.1 are concurrent; the I/O cutover time for hosts is < 13 seconds.
--With 1 TB drives: 180 TB raw - (12 x 1 TB + 3 x 1 TB + 6.8 TB for SX), mirrored (/2) => 79 TB usable.
--With 2 TB drives: 360 TB raw - (12 x 2 TB + 3 x 2 TB + 8 TB for SX), mirrored (/2) => 161 TB usable.
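
A minimal Python sketch of that usable-capacity arithmetic (assuming 15 modules x 12 drives = 180 drives; the deductions are the ones listed in the notes above):

def usable_tb(drive_tb, sx_tb):
    raw = 180 * drive_tb                               # 180 drives in the frame
    deductions = 12 * drive_tb + 3 * drive_tb + sx_tb  # spares + SX system area
    return (raw - deductions) / 2                      # mirroring halves what remains

print(usable_tb(1, 6.8))   # ~79 TB with 1 TB drives
print(usable_tb(2, 8))     # ~161 TB with 2 TB drives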
-- Data is broken into 1 MB partitions and mirrored so that the two copies of each partition are stored on separate modules.
-- Each logical volume is created from partitions spread across all drives; in the event of a failure, entire modules are rebuilt, and only used capacity is rebuilt.
-- All DDMs take part in the rebuild.
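
A toy Python sketch of that placement property (illustrative only, not IBM's actual distribution algorithm): every 1 MB partition gets a primary and a mirror copy on two different modules.

import random

MODULES = list(range(1, 16))                 # 15 modules in the frame

def place_partition(partition_id):
    rng = random.Random(partition_id)        # deterministic per partition
    primary = rng.choice(MODULES)
    mirror = rng.choice([m for m in MODULES if m != primary])
    return primary, mirror

print(place_partition(42))                   # e.g. (primary_module, mirror_module)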
-- storage pool  ==> volume
-- host connection ==> host ==> map volume to host
--The storage space of the IBM XIV Storage System is partitioned into storage pools, where each volume belongs to a specific storage pool.
Storage pools provide improved management and regulation of storage space.
--The size of a storage pool ranges from 17 GB to 80654 GB. The size of a storage pool can be increased, limited only by the free space on the system.
--The size of a storage pool can be decreased, limited only by the space consumed by its volumes and clones.
-- Volumes can be moved between storage pools as long as there is enough free space in the target storage pool.
-- All of the above storage pool transactions are pure accounting transactions and do not impose any data copying from one storage pool to another.
-- A volume can belong to only one storage pool and one consistency group. All volumes in a consistency group belong to the same storage pool. A volume can have multiple clones; a clone is a point-in-time copy of a volume.
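
A toy Python model (not XIV code) of those pool accounting rules: growth is bounded by free system space, shrinking is bounded by the space the pool's volumes consume, and moving a volume is a pure accounting change with no data copied.

class Pool:
    def __init__(self, size_gb):
        self.size_gb = size_gb
        self.volumes = {}                         # volume name -> size in GB

    def used_gb(self):
        return sum(self.volumes.values())

    def resize(self, new_size_gb, system_free_gb):
        if new_size_gb - self.size_gb > system_free_gb:
            raise ValueError("not enough free space on the system")
        if new_size_gb < self.used_gb():
            raise ValueError("pool would be smaller than its volumes and clones")
        self.size_gb = new_size_gb

def move_volume(name, src, dst):
    size = src.volumes.pop(name)
    if dst.size_gb - dst.used_gb() < size:        # needs free space in the target pool
        src.volumes[name] = size
        raise ValueError("not enough free space in target pool")
    dst.volumes[name] = size                      # accounting only, no data copied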
-- XIV queue depth 1400 per port and 256 per volume.

IOPS = Queue Depth / Latency
Throughput = IOPS * IO Size = (Queue Depth / Latency) * IO Size
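
A worked Python example of these formulas; the values (queue depth 64, 1 ms latency, 16 KB I/Os) are illustrative assumptions:

queue_depth = 64
latency_s = 0.001                     # 1 ms average latency (assumed)
io_size_mb = 16 / 1024                # 16 KB I/Os (assumed)

iops = queue_depth / latency_s        # 64,000 IOPS
throughput_mb_s = iops * io_size_mb   # 1,000 MB/s
print(iops, throughput_mb_s)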

-- Host queue depth: a minimum queue depth of 64 should be used. Performance can be improved with higher values, depending on relative workload levels and content.
-- Multipathing: AIX MPIO - currently only the default Path Control Module is supported, in active/passive mode.
-- Linux Device Mapper - requires RPMs to be installed and a kernel recompile.
-- Solaris native MPxIO; Windows DSM to support XIV; VMware native active/standby.
-- It is not advisable to use two protocols (FCP/iSCSI) to access the same volume; this might be used to migrate a host from FC to iSCSI. To access different volumes
from the same host through different protocols, use separate host definitions.
-- Supports traditional (SCSI-2) and persistent (SCSI-3) reserves.
--Reserves can be displayed and cleared using XCLI commands:
reservation_list - list volume reservations
reservation_key_list - list reservation keys
reservation_clear - clear reservations of a volume
-- In multi-host environments, reserves can be used to block volume access from other hosts while the volume is updated from the reserving host.
A problem exists when the reserving host crashes while the reserve is still outstanding. The customer can use the above commands to analyze the situation and resolve the problem by clearing the reservation.
-- An SSR should never use these commands, as the risk of damaging data integrity is very high.