Sunday, 18 February 2018


Remote Management System





  • Intel uses RMM2 (Remote Management Module 2)
  • Dell uses DRAC (Dell Remote Access Controller)
  • Sun (now Oracle) uses ILOM (Integrated Lights Out Manager)
  • IBM uses IMM (Integrated Management Module)
  • HP uses iLO (Integrated Lights-Out)

  • Pros and cons of each of these coming up in a later post....

    Bye...
    Dell Technologies - VxRail 


    - Jointly developed by Dell EMC and VMware

    - Built on Dell EMC PowerEdge 14th-generation servers with VMware vSphere and vSAN
    - VxRail Appliances are built using a distributed-cluster architecture consisting of modular blocks that scale linearly as the system grows, from as few as 3 nodes to as many as 64 nodes. Nodes are available in different form factors, with single-node appliances for specific use cases: E (entry-level), P (performance-optimized), V (VDI-optimized with GPUs), and S (storage-optimized, supporting high-capacity HDDs).


    - All appliance models support either 10GbE or 1GbE networking. 10Gb Ethernet is required for all-flash configurations and for environments that will scale beyond 8 nodes. Additional ports are available, allowing the customer to carry more VM-network traffic.
    - Supports both scale-up and scale-out expansion


    The number of Ethernet switch ports required depends on the VxRail model and whether it is configured for hybrid storage or for all flash. An all-flash system requires two 10GbE ports per node, and hybrid systems use either two 10GbE ports or four 1GbE ports per node. On 1GbE networks, the 10GbE ports auto-negotiate down to 1GbE. Additional network connectivity can be added with extra NICs; these additional PCIe NICs are not configured by VxRail management, but can be used by the customer for non-VxRail traffic, primarily VM traffic, and are managed through vCenter. Network traffic is segregated using switch-based VLAN technology and vSphere Network I/O Control (NIOC). Four types of network traffic exist in a VxRail cluster (a rough port/VLAN sketch follows the list below):
    Management - Management traffic is used for connecting to the VMware vCenter web client, VxRail Manager, and other management interfaces, and for communication between the management components and the ESXi nodes in the cluster. Either the default VLAN or a specific management VLAN is used for this traffic.
    vSAN - Data access for read and write activity, as well as optimization and data rebuilds, is performed over the vSAN network. Low network latency is critical for this traffic, and a dedicated VLAN isolates it.
    vMotion - VMware vMotion allows virtual-machine mobility between nodes. A separate VLAN is used to isolate this traffic.
    Virtual Machine - Users access virtual machines and the services they provide over the VM network(s). At least one VM VLAN is configured when the system is initially set up, and others may be defined as required.
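    For illustration, here is a small Python sketch of how these traffic types and the per-node port counts fit together. The VLAN IDs and the helper function are purely hypothetical, not values or tooling mandated by VxRail.

    # Hypothetical sketch of the four VxRail traffic types and per-node port needs.
    # VLAN IDs and counts are illustrative only.
    TRAFFIC_VLANS = {
        "management": 100,   # vCenter web client, VxRail Manager, ESXi management
        "vsan": 200,         # reads/writes, rebuilds and optimization (latency sensitive)
        "vmotion": 300,      # VM mobility between nodes
        "vm": [400],         # one or more VM networks, defined at initial configuration
    }

    def ports_per_node(all_flash: bool, use_10gbe: bool) -> int:
        """Switch ports consumed by one node: all-flash requires 10GbE;
        hybrid uses two 10GbE or four 1GbE ports."""
        if all_flash and not use_10gbe:
            raise ValueError("All-flash VxRail configurations require 10GbE networking")
        return 2 if use_10gbe else 4

    nodes = 4
    print(f"Hybrid cluster on 1GbE: {nodes * ports_per_node(False, False)} switch ports")
    print(f"All-flash on 10GbE: {nodes * ports_per_node(True, True)} switch ports")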

    VxRail Manager - The VxRail management platform; it is the appliance hardware lifecycle-management and serviceability interface for VxRail clusters. Starting with VxRail 4.7, a plugin for vCenter allows all of these activities to be performed from within vCenter.

    vSphere - vCenter and ESXi; vSAN software-defined storage (at least one SSD is required for vSAN)
    After the hardware and network configuration is complete, access the VxRail cluster at the default IP address 192.168.10.200 and follow the step-by-step procedure to configure VxRail. This can also be automated by supplying all input values in a JSON file; the JSON file can be generated from the VxRail PEQ (pre-engagement questionnaire). A rough sketch of such a file is shown below.
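    As an illustration only, the snippet below generates a JSON input file of the kind described above. The field names and structure are hypothetical placeholders; the real schema comes from the PEQ tooling and differs between VxRail releases.

    # Hypothetical sketch of building a VxRail first-run JSON input file.
    # Field names are illustrative only, not the actual VxRail schema.
    import json

    first_run = {
        "global": {"cluster_name": "vxrail-cluster01", "dns_domain": "lab.local"},
        "network": {
            "management_vlan": 100,
            "vsan_vlan": 200,
            "vmotion_vlan": 300,
            "vm_vlans": [400],
        },
        "vcenter": {"hostname": "vcenter01", "ip": "192.168.10.201"},
        "hosts": [
            {"hostname": f"esxi0{i}", "management_ip": f"192.168.10.1{i}"}
            for i in range(1, 4)  # minimum of three nodes
        ],
    }

    with open("vxrail-first-run.json", "w") as fh:
        json.dump(first_run, fh, indent=2)
    print("Wrote vxrail-first-run.json")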

    Initial screen in the browser at 192.168.10.200



    One of the screens during configuration
     


    VxRail Cluster Initialized



      


    ESRS - EMC Secure Remote Services. Useful log locations:
    /var/log/VMware/marvin/tomcat/log/marvin.log
    http://vxrail-ip/stats/log
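    A minimal sketch for pulling warnings and errors out of marvin.log, assuming it is run directly on the VxRail Manager VM; the ERROR/WARN keywords are an assumption about the log format.

    # Scan the VxRail Manager (marvin) log for lines that look like problems.
    from pathlib import Path

    LOG = Path("/var/log/VMware/marvin/tomcat/log/marvin.log")

    def recent_problems(max_lines: int = 20) -> None:
        if not LOG.exists():
            print(f"{LOG} not found (run this on the VxRail Manager VM)")
            return
        hits = [line.rstrip()
                for line in LOG.read_text(errors="replace").splitlines()
                if "ERROR" in line or "WARN" in line]
        for line in hits[-max_lines:]:
            print(line)

    recent_problems()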

    Bye...

    NetApp E-Series

    

    Controller State:

     
    Optimal - The remaining controller marks the internal state of its alternate as "Not Present".
    Quiesced - The remaining controller marks the internal state of its alternate as "Not Present". No I/O requests are processed until the state is no longer Quiesced.
    Service Mode - The remaining controller marks the internal state of its alternate as "Not Present". No I/O requests are processed until the state is no longer Service Mode.
    Suspended - The remaining controller becomes Online.
    Lockdown - The remaining controller remains in the Lockdown state.
    Offline - The remaining controller is released from reset, enters the Service Mode state, and proceeds with Start-of-Day (SOD) processing.
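    The same list, summarized as a hypothetical Python lookup table (not E-Series code), mapping each state to what the remaining controller does:

    # Illustrative summary of the controller-state list above.
    REMAINING_CONTROLLER_BEHAVIOR = {
        "Optimal":      "Marks its alternate's internal state as 'Not Present'.",
        "Quiesced":     "Marks alternate 'Not Present'; I/O held until no longer quiesced.",
        "Service Mode": "Marks alternate 'Not Present'; I/O held until service mode ends.",
        "Suspended":    "Remaining controller becomes Online.",
        "Lockdown":     "Remaining controller stays in the Lockdown state.",
        "Offline":      "Released from reset, enters Service Mode, runs Start-of-Day (SOD).",
    }

    for state, behavior in REMAINING_CONTROLLER_BEHAVIOR.items():
        print(f"{state:12} -> {behavior}")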


    Dynamic Disk Pools:
     
    These were initially called CRUSH (Controlled Replication Under Scalable Hashing).

    A stripe with 3 segments: segment 1 on drive 1, segment 2 on drive 2, segment 3 on drive 3.

    A volume with 3 pieces: piece 1 on drive 1, piece 2 on drive 2, piece 3 on drive 3.

    Piece 1 contains all of the volume's segments on drive 1, piece 2 contains all of its segments on drive 2, and so on.

    Segments combine to form stripes; pieces combine to form the volume.
     
    Stripes are broken into segments. All of a volume's segments residing on a single drive are collectively called a piece. Each piece is written to one disk of the RAID group. The sketch below illustrates these terms.
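    An illustrative sketch (not E-Series code) of how these terms relate, assuming a toy volume of 4 stripes across 3 drives:

    # A segment is the per-drive unit of one stripe; a piece is all of the
    # volume's segments that land on one drive.
    NUM_DRIVES = 3
    NUM_STRIPES = 4

    # stripes[s][d] -> segment d of stripe s, placed on drive d
    stripes = [[f"stripe{s}-seg{d}" for d in range(NUM_DRIVES)]
               for s in range(NUM_STRIPES)]

    # pieces[d] -> every segment of this volume that resides on drive d
    pieces = {d: [stripes[s][d] for s in range(NUM_STRIPES)]
              for d in range(NUM_DRIVES)}

    for d, segs in pieces.items():
        print(f"drive {d + 1} piece: {segs}")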
     
    In DDP, a C-stripe (also called a D-stripe) is 5GB (4GB data and 1GB parity).
    The per-drive allocation unit is the C-piece (or D-piece).
    A C-stripe always has 10 pieces, irrespective of the number of disks in the dynamic pool.
    Each RAID 6 stripe holds 1MB of data (8+2 with a 128KB segment size).
    A C-stripe contains 4096 traditional RAID 6 stripes.
    No drive contains two C-pieces from the same C-stripe. Each C-piece is 512MB in size. The arithmetic is worked through in the sketch below.
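    The numbers above can be checked with a little arithmetic; this sketch uses binary units (1MB = 1024KB):

    # Worked arithmetic for the C-stripe / C-piece sizes quoted above.
    SEGMENT_KB = 128                      # RAID 6 segment size
    DATA_SEGMENTS, PARITY_SEGMENTS = 8, 2
    RAID6_STRIPES_PER_CSTRIPE = 4096
    PIECES_PER_CSTRIPE = 10               # one piece per segment column (8+2)

    raid6_data_mb = DATA_SEGMENTS * SEGMENT_KB / 1024                       # 1MB data per RAID 6 stripe
    raid6_total_mb = (DATA_SEGMENTS + PARITY_SEGMENTS) * SEGMENT_KB / 1024  # 1.25MB on disk

    cstripe_data_gb = RAID6_STRIPES_PER_CSTRIPE * raid6_data_mb / 1024      # 4GB data
    cstripe_total_gb = RAID6_STRIPES_PER_CSTRIPE * raid6_total_mb / 1024    # 5GB incl. parity
    cpiece_mb = cstripe_total_gb * 1024 / PIECES_PER_CSTRIPE                # 512MB per C-piece

    print(f"RAID 6 stripe: {raid6_data_mb:.2f}MB data, {raid6_total_mb:.2f}MB on disk")
    print(f"C-stripe: {cstripe_data_gb:.0f}GB data, {cstripe_total_gb:.0f}GB total")
    print(f"C-piece: {cpiece_mb:.0f}MB")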
    Preservation Capacity
    When disk pools are created, a certain amount of capacity is preserved for emergency use. This capacity is expressed as a number of disks in the management software, but the actual reserved space is spread across the entire pool of disks. The default amount of preserved capacity is based on the number of disks in the pool.
    Preservation capacity can range from the equivalent of 0 to 10 disks and is active by default.
    Dynamic Pools - Thin Provisioned Volumes (TPV)
    4GB minimum repository size; expansions must be in 4GB increments.
    Virtual capacity can be specified between 32MB and 63TB.
    Provisioned (repository) capacity can be between 4GB and 64TB. The provisioned-capacity quota limits automatic expansion of the repository; when expansion mode is manual, the quota equals the provisioned capacity.
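    A hypothetical sanity check of those limits; the helper below is illustrative only and not a SANtricity API:

    # Validate thin-volume (TPV) parameters against the limits listed above.
    GB = 1024  # work in MB

    def check_tpv(virtual_mb: int, provisioned_mb: int) -> None:
        if not (32 <= virtual_mb <= 63 * 1024 * GB):          # 32MB to 63TB virtual capacity
            raise ValueError("virtual capacity must be between 32MB and 63TB")
        if not (4 * GB <= provisioned_mb <= 64 * 1024 * GB):   # 4GB to 64TB repository
            raise ValueError("provisioned capacity must be between 4GB and 64TB")
        if provisioned_mb % (4 * GB):                          # 4GB increments
            raise ValueError("provisioned capacity must be a multiple of 4GB")

    check_tpv(virtual_mb=10 * 1024 * GB, provisioned_mb=8 * GB)  # 10TB virtual on an 8GB repository
    print("TPV parameters OK")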
     
     
    DS5300 - 7.3x - 10.73

    MD3260 - DE6600 (60 drives, 4U) / DE5600 (24 drives, 2U) / DE1600 (12 drives, 2U) - 6Gbps ESM
    DE460C (60 drives, 4U) / DE224C (24 drives, 2U) / DE212C (12 drives, 2U) - 12Gbps IOM
    MD3460 - E2760 80.20.x - 11.xx
    E2800, E5700 - 8.4x - 11.84 (embedded web user interface)

    DACstore - All configuration information is stored in DACstore. DACstore is stored on all drives but is invisible to hosts and users. The capacity reserved for DACstore is subtracted from the usable capacity of a volume group. DACstore resides on the innermost portion of the disk drives; reads and writes to the innermost tracks are slower, so the faster outer tracks are reserved for customer data.
    Bye...