Thursday, 17 October 2019

SAN Switch Configuration, VLAN Basics


  • Emulex - HBAnyware - OneCommand Manager

  • QLogic - SANsurfer - QConvergeConsole

  • Multipath policies: Failover Only, Round Robin, Round Robin with Subset, Least Queue Depth, Weighted Paths, Least Blocks.

  • Device login process in SAN

    • FLOGI - Fabric Login - Between an N_Port (device) and an F_Port (switch). The N_Port sends a FLOGI frame to the fabric login server, which in turn assigns the port a 24-bit FC address (8-bit domain, 8-bit area (port), 8-bit device number within that area).

    •  PLOGI - Port Login - Between N_Port and N_Port. The initiator N_Port sends a PLOGI request to the target N_Port; if the target accepts, a session is established. *(Does this mean both N_Ports are in the same zone?)

    •  PRLI - Process Login - Between two N_Ports, to exchange ULP (upper-layer protocol, such as SCSI) related parameters.
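The 24-bit address assigned at FLOGI can be unpacked with simple bit arithmetic. A minimal Python sketch (field layout follows the notes above; the example FCID is invented):

```python
def decode_fcid(fcid: int):
    """Split a 24-bit FC address into its domain/area/device fields."""
    domain = (fcid >> 16) & 0xFF  # 8-bit domain: the switch in the fabric
    area = (fcid >> 8) & 0xFF     # 8-bit area: the switch port
    device = fcid & 0xFF          # 8-bit device number within that area
    return domain, area, device

# Example: FCID 0x010200 -> domain 1, area (port) 2, device 0
print(decode_fcid(0x010200))  # (1, 2, 0)
```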

Brocade


Perform the initial configuration
Upgrade the firmware, if needed - firmwaredownload
Assign the domain ID - configure
Assign port speeds - portcfgspeed
Verify that the configuration is correct - switchshow
Verify that there is host and storage connectivity - switchshow, portloginshow
Create a configuration - cfgcreate
Create FC zones - zonecreate
Add zones to the cfg - cfgadd
Save the configuration and back it up - cfgsave
Obtain technical support - supportshow, supportsave
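Putting the zoning commands above together, a hedged Fabric OS sketch (alias names and WWPNs are made up for illustration):

```
alicreate "host1", "10:00:00:00:c9:12:34:56"
alicreate "stor1", "50:0a:09:81:12:34:56:78"
zonecreate "zone_host1_stor1", "host1; stor1"
cfgcreate "cfg1", "zone_host1_stor1"
cfgenable "cfg1"
cfgsave
```

cfgadd "cfg1", "zone2" would add further zones to the existing configuration; cfgenable activates it fabric-wide and cfgsave commits it to flash.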

 

Adaptive Networking

Adaptive Networking (AN) is a family of technologies that allow flexible control of traffic movement within the fabric to deliver application-aware management of fabric resources. Applications may be used with multiple protocols and multiple classes of service. Adaptive Networking includes the following features:

• Ingress Rate Limiting—Allows the ingress bandwidth of a port to be throttled to a rate lower than negotiated with the SAN node. This could be very useful for enterprises offering stepped levels of service and enforcing SLAs.

• Quality of Service (QoS)—Enables zones with high, medium, and low priorities within a fabric on a zone-by-zone basis. This can be very useful for prioritizing array replication over MANs and WANs over less critical traffic.

• Traffic Isolation Zones (TIZ)—Defines paths through a fabric for some or all nodes. Failover allows a non-preferred path to be used if the preferred path fails. TIZs use failover by default; if failover is disabled, traffic stops when the preferred path fails. TIZ can be used to manually map out traffic flows within a fabric based on application, priority, or topology.

The HPE Power Pack+ Software Bundle includes:

• Fabric Vision

• Extended Fabric

• ISL Trunking

Fabric Vision

Fabric Vision technology provides a breakthrough hardware and software solution that helps simplify monitoring, maximize network availability, and dramatically reduce costs. Featuring innovative monitoring, management, and diagnostic capabilities, Fabric Vision technology enables administrators to avoid problems before they impact operations, helping their organizations meet SLAs. Fabric Vision includes:

• Monitoring and Alerting Policy Suite (MAPS)—A policy-based monitoring tool with pre-built rules and automation that simplifies fabric-wide threshold configuration and monitoring.

• Configuration and Operational Monitoring Policy Automation Services Suite (COMPASS)—Simplifies deployment, safeguards consistency, and increases operational efficiencies of larger environments with automated switch and fabric configuration services. Administrators can configure a template or adopt an existing configuration to seamlessly deploy a configuration across the fabric.

• ClearLink Diagnostics—Helps ensure optical and signal integrity for Fibre Channel optics and cables, simplifying deployment and support of high-performance fabrics. ClearLink Diagnostic Port (D_Port) is an advanced capability of Fibre Channel platforms.

• Flow Vision—A comprehensive tool that enables administrators to identify, monitor, and analyze specific application data flows in order to simplify troubleshooting, maximize performance, and avoid congestion without using taps to help ensure optimized performance.

• Health and performance dashboard—A single customizable screen displayed in HPE Management Portal that contains all critical SAN information for convenient review and analysis.

Extended Fabric

Extended Fabric is an optional license that extends all of the scalability, reliability, and performance benefits of Fibre Channel Storage Area Networks (SANs) beyond the native 10 km distance specified by the Fibre Channel standard.

ISL Trunking

For high performance enhanced trunking, this optional license logically groups up to eight 32 Gbps SFP+ ports per ISL trunk to provide a high bandwidth trunk between two switches. Each 32 Gb switch requires its own license. The switch operating system views the trunk as a single, high bandwidth resource (up to 256 Gbps) when routing connections between 32 Gb switches. Connections are load-balanced across the individual links, which comprise the logical trunk group.

HPE SANnav Management Software

HPE SANnav Management Software is the next-generation SAN management application suite for HPE B-series SAN environments. The software consists of:

• SANnav Management Portal Software—A next-generation SAN management application with a simple browser-based user interface (UI) that streamlines common workflows, such as configuration, zoning, deployment, troubleshooting, and reporting.

• SANnav Global View Software—Helps administrators visualize the health, performance, and inventory of multiple SANnav Management Portal instances at data centers across the globe, or a single multi-tenant data center using a simple, intelligent dashboard.

SANnav Management Portal and SANnav Global View Software not only transform SAN telemetry data into useful insights, such as health and performance scores, but also enable administrators to quickly associate real-time data with historical metrics and logs for in-depth analysis. This can help with spotting trends, establishing baselines, and identifying any behavioral changes over time.

HPE SANnav Management Software is available as a term-license for a one-, three-, or five-year period as both physical and electronic License-to-Use (LTU). The software supports 8 Gb, 16 Gb, and 32 Gb FC switches and directors.



Cisco

To bring a Cisco switch into production, you follow these steps. You perform the initial configuration. If necessary, you upgrade the firmware. You create the VSAN, assign the domain ID, assign port speeds, verify configuration and connectivity, create FC zones, and save the configuration.
Power on the switch and set up the management interface, as directed by the switch documentation. You specify the IP address, subnet mask, gateway address, the switch host name, and the administrator password. On most Cisco switches, the default user name and password are “admin” and “password.” Some newer switches have no default password; they require you to enter a strong password the first time that the switch is powered on. It is a best practice to use the same version of Cisco FabricWare on all the switches in a fabric. To identify the current FabricWare version, use the show version command. If the version is not the correct version, you can load an image from an FTP or SCP server. For details about the relevant commands, see your Cisco documentation. Before starting a firmware upgrade, to verify images and system impact, use the show install all impact command. After verification, to install the firmware, use the install all command. After installation, to verify the upgrade, use the show install all status command.
Cisco FC switches enable the creation of VSANs. VSANs enable one switch to be partitioned into multiple, fully independent virtual switches, potentially creating multiple fabrics within one physical switch. Each Cisco switch must have at least one active VSAN. The default VSAN, vsan 1, should not be used for production traffic.
To create a VSAN, perform the following steps. First, enter configuration mode by typing config t. Second, enter VSAN database configuration mode by typing vsan database. Third, create the VSAN by typing vsan vid, where vid is an unused whole number between 2 and 4093. A VSAN can be activated only if it has at least one physical port assigned to it. Assign ports by typing vsan vid interface fc slot/port, where vid is the ID of the VSAN that you created and slot/port is the physical port to be assigned to the VSAN. Repeat the activation step for each interface that is to be added to the VSAN.
Because each VSAN functions as an independent switch, each VSAN must have a unique domain ID. Cisco domain IDs can be either static or preferred. If the domain ID is static, the VSAN cannot join the fabric with any other ID. This restriction guarantees a consistent and known ID. However, if the required ID is not available, the VSAN is isolated. That is, the VSAN is not a member of any fabric. If the domain ID is preferred, the VSAN attempts to use a specified ID to join the fabric. However, if the specified ID is not available, the VSAN uses a different ID. This flexibility guarantees that the VSAN joins the fabric. However, this flexibility can result in an unforeseen domain ID. It is recommended to have static domain IDs. Assign a domain ID by using the fcdomain command and specifying either the static or the preferred keyword.
After you assign the domain ID, you assign port speeds. After each device has been connected and the switch has performed automatic speed negotiation, you should assign the port speed. Reassigning helps to avoid future negotiation errors and prolonged fabric rebuild times. To assign port speeds on a Cisco switch, use the following commands. To enter configuration mode, use the config t command. To specify an interface, use the interface command. Finally, to specify the speed, use the switchport speed command. The speed is specified in megabits per second.
To verify VSAN creation and port assignments, use the show vsan membership command. To ensure that the domain ID is correct for each VSAN, use the show fcdomain command. Finally, to verify port speed configuration, use the show interface brief command. The Operating Speed column should display 1G, 2G, 4G, or 8G (not “auto”).

After you verify the switch configuration, to verify that host and storage ports are online, use the show flogi database command. Then verify that the WWPNs are correct. On the storage system, use the management software or CLI commands. On a host, verify that the WWPNs are correct by using the operating system or HBA software.
NPIV is a feature that enables multiple logical ports to exist on a single physical FC port. This feature is required for clustered Data ONTAP and for Dell Compellent virtual port mode when the FC protocol is used. The NPIV feature is disabled by default on Cisco FC switches. To enable the feature, use the feature npiv command. To verify that an NPIV login has occurred (for example, from a logical interface on a Data ONTAP cluster, virtual port mode on a Dell Compellent SC Series array, vSphere when an RDM is assigned to a VM, or a BladeCenter with a pass-through switch), use the show flogi database command.
After you verify connectivity, you create the FC zones. WWPNs can be used to create zones, but the use of aliases simplifies administration. To create aliases for host and storage ports, use the device-alias command. Next, for each VSAN, use the zone and member commands to group the aliases into zones. The example illustrates the required syntax.
After you create FC zones, you create and activate zone sets. First, to create the zone set, use the zoneset command. Next, assign member zones to the zone set. Finally, to activate the zone set, use the zoneset activate command.
Now that the Cisco switch is configured and operational, to commit the configuration to NVRAM, use the copy running-config startup-config command. To copy the configuration to or from an FTP or SCP server for backup and restore, you can also use the copy command. For detailed information, see the Cisco documentation.
If you need assistance, contact the switch vendor that sold the switch or contact the Cisco Technical Assistance Center. To gather support data, use the show tech-support details command.

config t
vsan database
vsan vid ==> vsan 2
vsan 2 interface fc2/1
vsan 2 interface fc2/2
vsan 2 interface fc2/3
config t

fcdomain domain 10 static vsan 2

config t

interface fc 2/1
switchport speed x ==> switchport speed 1000/2000/4000/8000
show vsan membership
show fcdomain
show interface brief
show flogi database
N_Port ID Virtualization (NPIV) enables multiple logical FC ports to exist on a single physical FC port.
feature npiv
show flogi database
config t
device-alias database
device-alias name aliname pwwn => device-alias name lif1 pwwn 20:1b:00:a0:98:13:d5:d4
exit
device-alias commit
zone name myzone vsan 2
member device-alias lif1
exit
config t
zoneset name zoneset1 vsan 2
member myzone
exit
zoneset activate name zoneset1 vsan 2
copy running-config startup-config
show tech-support details
show version
show install all impact
install all
show install all status
VSANs enable one switch to be partitioned into multiple virtual SANs, potentially creating multiple fabrics on a single physical switch. VSAN 1 is the default and should not be used for production traffic.

VLAN IEEE 802.1Q  dot1q


To increase security and reduce contention for bandwidth, iSCSI is typically separated from other TCP/IP traffic, such as email and web access. The separation can be accomplished in several supported ways; no method is correct for all environments. Direct connections are inexpensive, secure, and simple to implement, but they have limited scalability, distance coverage, and high-availability (HA) options. Dedicated switched networks guarantee isolation and bandwidth, are more scalable, and can use bandwidth aggregation; however, their hardware costs are higher than those of direct-attached configurations, and their administration is more complex. Shared or mixed networks should use VLANs to increase security and to separate iSCSI traffic from general TCP/IP traffic. This configuration is more expensive and complex to implement, but it is highly secure and highly scalable, uses existing infrastructure, and has more HA options.
VLANs improve flexibility by enabling subnets to be physically dispersed, and they improve reliability by isolating problems. They reduce problem-resolution time by limiting the problem space. They also reduce the number of available paths to a LUN. This benefit is important in non-multipathed environments or configurations where the multipathing software supports only a limited number of paths. VLAN IDs should be in the 2-4094 range; VLAN 1 is typically reserved as the administrative VLAN.
There are two types of VLANs: static and dynamic. Static VLANs are port based. The switch and switch port are used to define the VLAN and its members. Static VLANs are highly secure because MAC spoofing cannot breach them. In some environments, static VLANs are easier to manage because tracking of MAC addresses is not required. Static VLANs are conceptually similar to the port zoning that is used in FC environments. A benefit of a static VLAN is that there is no need to change the VLAN configuration if a network interface card, or NIC, is replaced. Dynamic VLANs are based on MAC addresses. The VLAN is defined by specifying the MAC addresses of the members that are to be included. Dynamic VLANs provide increased flexibility at the cost of the increased complexity that MAC address management requires. Dynamic VLANs are conceptually similar to the worldwide name zoning, or WWN zoning, that is used in FC environments. A benefit of a dynamic VLAN is that devices can be moved without changing the VLAN configuration. As with other topology options, each kind of VLAN, static and dynamic, is best for certain situations, but not all situations. The requirements of the environment and the current implementation determine which type of VLAN is appropriate.
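A static (port-based) VLAN of the kind described above maps onto Cisco IOS configuration roughly as follows; the VLAN IDs and interface names are illustrative, not from the notes:

```
vlan 100
 name iscsi
interface GigabitEthernet0/1
 switchport mode access           ! static VLAN: membership defined by the port
 switchport access vlan 100
interface GigabitEthernet0/24
 switchport trunk encapsulation dot1q
 switchport mode trunk            ! 802.1Q tagged uplink carrying multiple VLANs
 switchport trunk allowed vlan 100,200
```

Access ports carry untagged frames for one VLAN; the dot1q trunk carries tagged frames for several VLANs between switches.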

IEEE 802.3AD - Link Aggregation

In iSCSI environments, port aggregation is frequently used to increase bandwidth. Port aggregation technologies are named in various ways. The IEEE standard is 802.3ad. However, port aggregation is also known as trunking, EtherChannel, network interface card (NIC) teaming, and so on. In Data ONTAP systems, port aggregation is implemented by creating interface groups. Three types of interface groups are available: single-mode, static multimode, and dynamic multimode. Single-mode interface groups have one active interface. The other interfaces are on standby, ready to take over if the active interface fails. Single-mode interface groups do not require switch support or switch configuration. Multimode interface groups enable multiple interfaces to be active simultaneously and balance loads across the interfaces. Static multimode interface groups are slightly more flexible, with fewer requirements. However, they lack one important feature: they can detect link failure but cannot detect higher-layer data errors. Dynamic multimode links detect loss of data flow in addition to link failure and can renegotiate dynamically. In Data ONTAP systems, dynamic multimode interface groups are implemented with Link Aggregation Control Protocol (LACP) and require switches that support LACP. Both multimode options require switch support, and the switch must be configured to the same multimode option as the Data ONTAP system. Port aggregation is not supported on the E-Series or EF-Series storage systems.
iSCSI switch configuration is straightforward. Be sure to use the IMT (Interoperability Matrix Tool) to qualify all components before you begin the configuration. After you qualify all components and perform the initial configuration steps, you enable jumbo frames, verify host and storage connectivity, and configure the VLAN. VLAN configuration details vary widely, depending on the desired mode and the switch software.
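As a sketch, creating a dynamic multimode (LACP) interface group on clustered Data ONTAP looks something like the following; the node and port names are assumptions, and the attached switch ports must be configured for LACP as well:

```
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0c
network port ifgrp add-port -node node1 -ifgrp a0a -port e0d
network port ifgrp show -node node1
```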

RUN THE SETUP PROGRAM
Setup programs vary by switch. Power on the switch and run the setup program, as prescribed by the switch documentation. You assign the network information, including the IP address, subnet mask, and gateway address. You also assign the host name and the IP domains.
In many iSCSI environments, the implementation of jumbo frames improves performance and decreases the CPU load on the hosts. Data ONTAP and SANtricity storage operating systems support jumbo frames on all Gigabit Ethernet ports. If your switches and hosts support jumbo frames, you enable jumbo frames by increasing the maximum transmission unit (MTU). The most common setting is 9,000 bytes. Larger frames can be used, but an MTU greater than 12,000 bytes causes the cyclic redundancy check (CRC) mechanism to lose its effectiveness. Therefore, MTUs greater than 12,000 are not recommended. Different devices can have different MTU limits. Be sure that all devices in your environment have the same MTU setting. To enable jumbo frames on a Cisco switch, you follow these steps. To enter configuration mode, you use the config t command. To set the system-wide MTU, you then use the system mtu jumbo 9000 command. The larger MTU has no effect on the devices and the connections that use smaller frames.
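The benefit of jumbo frames can be approximated by comparing per-frame header overhead. A rough Python calculation, assuming 18 bytes of Ethernet framing plus 20-byte IP and 20-byte TCP headers (no options):

```python
def payload_efficiency(mtu, ip_tcp_headers=40, ethernet_overhead=18):
    """Fraction of on-the-wire bytes that carry TCP payload."""
    payload = mtu - ip_tcp_headers   # bytes left for data in each frame
    wire = mtu + ethernet_overhead   # bytes actually transmitted per frame
    return payload / wire

print(round(payload_efficiency(1500), 3))  # standard frames: 0.962
print(round(payload_efficiency(9000), 3))  # jumbo frames:    0.994
```

Fewer, larger frames also mean fewer interrupts and checksums per byte, which is where the host CPU savings come from.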
After you complete the basic configuration tasks, you connect the host and the storage system to the switch and verify connectivity. On the storage system, use the management software or CLI commands. On the switch, use the management software or CLI commands. On a host, use the software initiator. On a host you can also use the host bus adapter, or HBA, management utility.


Reference:

FCoE enables organizations to transport LAN and FC SAN storage traffic on a single, unified Ethernet cable.

FCoE is enabled by an enhanced 10Gb Ethernet proposed standard commonly referred to as Data Center Bridging (DCB) or Converged Enhanced Ethernet (CEE).

Tunneling protocols, such as FCIP and iFCP, use IP to transmit FC traffic over long distances, but FCoE is a layer-2 encapsulation protocol that uses Ethernet physical transport to transmit FC data.

Monday, 30 September 2019

Back to Basics - Open System Interconnect - FC Layers - Infrastructure Services BIND

Upper layers - Application, Presentation, and Session - all upper-layer tasks are performed in the application or browser itself.
Application Layer - The top layer of the OSI model; it has many protocols, such as HTTP, FTP, Telnet, and SMTP, that applications use to communicate with the lower layers of the OSI model.

Presentation Layer - Translates data between formats, for example data received from the application layer in ASCII that needs to be translated into EBCDIC.
·         ASCII -> EBCDIC
·         Compression
·         Encryption (SSL)

Session Layer - Connection establishment is done using APIs (NetBIOS). Performs authentication and authorization. Manages the session by keeping track of the order and type of data received. It terminates connections when they are no longer required.

Lower layers - Transport, Network, Data Link, Physical
Transport Layer - Performs segmentation, flow control (the sending and receiving devices communicate with each other to agree on the speed of data flow), and error control (checksum). TCP is connection-oriented for reliable communication and requires acknowledgements. UDP is connectionless and does not require acknowledgements, which is why it is faster than the reliable TCP protocol. Each segment of data has a port number to identify the originating application and a sequence number for proper ordering of data segments.

Network Layer - Segments received from the Transport layer are placed (encapsulated) in packets with source and destination IP addresses. Performs routing and checks for suitable paths using routing protocols:
·         RIP - interior gateway protocol used in intra-domain routing, i.e. routing within a single autonomous system. Distance-vector: chooses the route with the minimum hop count; the maximum supported hop count is 15. Runs over UDP port 520.
·         OSPF - interior gateway protocol used in intra-domain routing within a single autonomous system (a network controlled by a single entity). Link-state: chooses the shortest path. Runs directly over IP as protocol number 89.
·         BGP - exterior gateway protocol used in inter-domain routing, i.e. routing between multiple autonomous systems. Path-vector: based on best-path selection. Runs over TCP port 179.
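RIP's hop-count behaviour can be sketched as a tiny distance-vector (Bellman-Ford) computation in Python; the router names and topology below are invented for illustration:

```python
INFINITY = 16  # RIP treats 16 hops as unreachable

def distance_vector(links, source):
    """Hop counts from source, every link costing 1 (RIP-style metric)."""
    nodes = {n for link in links for n in link}
    dist = {n: INFINITY for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):      # repeated relaxation, the way
        for a, b in links:               # routers exchange their vectors
            dist[b] = min(dist[b], dist[a] + 1)
            dist[a] = min(dist[a], dist[b] + 1)
    return dist

links = [("R1", "R2"), ("R2", "R3"), ("R1", "R3"), ("R3", "R4")]
print(distance_vector(links, "R1"))  # R1:0, R2:1, R3:1, R4:2
```

OSPF would instead flood full link-state information and run shortest-path-first on each router; this sketch only shows the hop-count metric that RIP converges on.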

Data Link Layer - Packets received are appended with the source and destination MAC addresses (12 hexadecimal characters). Frames reach the device using the destination MAC address. At the destination, the MAC address is removed and the IP address in the packet is used to deliver the data to the correct host.

Physical Layer - Transfers frames as binary and converts them into electrical signals (copper), light (fiber), or radio waves (wireless), depending on the media used.

The receiving side performs all of these operations in reverse, and eventually the application on the local system communicates with the application on the remote system.

The OSI model is theoretical; in practice, the TCP/IP layer model is used. The good thing is that it is similar to the OSI model. The upper three layers of OSI (Application, Presentation, and Session) are merged into a single Application layer. In the TCP/IP layer model there are five layers: Application, Transport (segments), Network (packets), Data Link (frames), and Physical.

Like the OSI model, Fibre Channel has a layered structure. It has 5 layers (numbered starting from 0), in contrast with the 7 layers of OSI.


FC-4: Defines application interfaces that can execute over Fibre Channel. It maps upper-layer protocols onto FC; for example, SCSI-FCP allows SCSI commands to use the FC infrastructure via FCP. Similarly there are IP (FC-IP), FICON (FC-SB-2), and FC-TAPE (FCP-2). A newer mapping, FC-NVMe, allows NVMe to use FC and is generally called NVMe over FC.

FC-3: Performs advanced features such as striping (transferring one data unit across multiple links) and hunt groups (mapping multiple ports to a single node).

FC-2: Divides data into frames, performs flow control, determines how much data needs to be sent, and defines classes of service. A frame payload is up to 2112 bytes; a sequence is a set of one or more related frames transmitted unidirectionally from one N_Port to another; an exchange is a set of nonconcurrent sequences for a single operation.

FC-1: Performs encoding/decoding - 8b/10b (1/2/4/8 Gb) and 64b/66b (10/16/32 Gb).

FC-0: Defines the physical links in the system - cables, SFPs.
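The difference between the two FC-1 line codes is just their overhead, which is easy to check:

```python
def line_code_efficiency(data_bits, line_bits):
    """Fraction of transmitted bits that carry data."""
    return data_bits / line_bits

# 8b/10b (1/2/4/8 Gb FC): 10 bits on the wire per 8 data bits -> 20% overhead
print(round(line_code_efficiency(8, 10), 2))   # 0.8
# 64b/66b (10/16/32 Gb FC): 66 bits per 64 data bits -> ~3% overhead
print(round(line_code_efficiency(64, 66), 2))  # 0.97
```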

Socket types LGA (Land Grid Array), PGA (Pin Grid Array), BGA (Ball Grid Array) - the way a CPU interfaces with the socket on a motherboard. LGA is used on Intel sockets, with the pins as part of the socket. AMD's AM4 solution, PGA, has the pins on the processor, and these fit into holes on the socket. AMD's Threadripper CPUs also use LGA sockets. A BGA socket is one in which the processor is permanently soldered to the motherboard, typically on a laptop.


MBR - EFI
MBR has a partition table that indicates where the partitions are located on the disk drive, and with this particular partition style, only volumes up to 2TB (2,048GB) are supported. An MBR drive can have up to four primary partitions or can have three primary partitions and one extended partition that can be divided into unlimited logical drives.
Windows Server 2012 R2 can only boot off an MBR disk unless it is based on the Extensible Firmware Interface (EFI); then it can boot from GPT. An Itanium server is an example of an EFI-based system. GPT is not constrained by the same limitations as MBR. In fact, a GPT disk drive can support volumes of up to 18 EB (18,874,368 TB) and 128 partitions. As a result, GPT is recommended for disks larger than 2 TB or disks used on Itanium-based computers.
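The 2 TB MBR ceiling follows directly from the on-disk format: partition sizes are stored as 32-bit sector counts. With the usual 512-byte sectors:

```python
SECTOR_BYTES = 512
MAX_SECTORS = 2**32                  # MBR stores LBA counts in 32-bit fields

max_volume = MAX_SECTORS * SECTOR_BYTES
print(max_volume // 2**40, "TiB")    # 2 TiB
```

GPT uses 64-bit LBA fields instead, which is why its limit is effectively out of reach for current drives.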

 

Basic - Dynamic

Windows Server 2012 R2 supports two types of disk configurations: basic and dynamic. Basic disks are divided into partitions and can be used with previous versions of Windows. Dynamic disks are divided into volumes and can be used with Windows 2000 Server and newer releases. When a disk is initialized, it is automatically created as a basic disk, but when a new fault-tolerant (RAID) volume set is created, the disks in the set are converted to dynamic disks. Fault-tolerance features and the ability to modify disks without having to reboot the server are what distinguish dynamic disks from basic disks.  A basic disk can simply be converted to a dynamic disk without loss of data. When a basic disk is converted, the partitions are automatically changed to the appropriate volumes. However, converting a dynamic disk back to a basic disk is not as simple. First, all the data on the dynamic disk must be backed up or moved. Then, all the volumes on the dynamic disk have to be deleted. The dynamic disk can then be converted to a basic disk. Partitions and logical drives can be created, and the data can be restored.

WDS - Windows Deployment Service
WSUS - Windows Server Update Service
IANA - Internet Assigned Numbers Authority (governing body maintaining IP addresses)
ICANN - Internet Corporation for Assigned Names and Numbers (governing body maintaining the DNS service). ICANN assigns control of each TLD to one or more organizations. In turn, an organization delegates portions of the DNS namespace to other organizations. For example, for example.com, the registrar has delegated control over the example.com node in the DNS tree while retaining control of the .com TLD. Within the example.com portion, DNS hosts and records can be created. example.com can be further divided into subdomains such as ksa.example.com and india.example.com. Each domain and subdomain is associated with DNS name servers; every node in the DNS can have one or more servers that give authoritative answers to queries about that domain. At the root of the domain namespace are the root servers.

Given are 3 excerpts from the MCSA Windows 2012 R2 Sybex study guide showing name resolution and reverse resolution


From MCSA Certificate Exam sybex publication





  1. The client sends a recursive query for www.india.example.com to the local DNS server; if the local server finds no corresponding zone or cached answer, it
  2. forwards the request to the root servers. A root name server has authority for the root domain and returns the IP address of a name server for the .com top-level domain.
  3. The local server sends the request for www.india.example.com to the received IP.
  4. The name server for .com returns the IP address of the name server authoritative for example.com.
  5. The local DNS server sends the resolution request to the server authoritative for example.com, which in turn provides the address of the server authoritative for india.example.com; that server resolves www.india.example.com.
  6. The local DNS server provides the IP to the client.

A DNS zone is a portion of the DNS namespace over which a specific DNS server has authority. Within zones there are resource records that define hosts and other types of information that make up the database for the zone.
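The delegation walk in the steps above can be sketched as a toy iterative resolver in Python; the server names and the final address are invented for illustration:

```python
# Each "server" knows only the next zone cut below it (a referral),
# except the last, which holds the final A record.
DELEGATIONS = {
    "root":                 {"com.": "ns.com"},
    "ns.com":               {"example.com.": "ns.example.com"},
    "ns.example.com":       {"india.example.com.": "ns.india.example.com"},
    "ns.india.example.com": {"www.india.example.com.": "203.0.113.10"},
}

def resolve(name):
    """Follow referrals from the root down to an authoritative answer."""
    server = "root"
    while True:
        for zone, target in DELEGATIONS[server].items():
            if name == zone:
                return target      # authoritative answer (A record)
            if name.endswith(zone):
                server = target    # referral to a more specific server
                break

print(resolve("www.india.example.com."))  # 203.0.113.10
```

A real resolver also caches each referral, which is why subsequent queries for the same zone skip the root and TLD steps.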

Bye...

Monday, 23 September 2019

Miscellaneous - My reference  blogs coming later...


- 1 VPC (virtual private cloud) has 1 VGW (virtual private gateway). 1 VPN has 1 CGW (customer gateway - logical AWS entity associated with the customer-premises router).
- 1 VGW and 1 CGW can have n VPNs.
- Well-Architected Framework - Performance Efficiency (automatically scaling resources up when there is more demand and down when demand is less), Reliability (multi-AZ), Operational Excellence (Systems Manager as a single place to maintain the entire infrastructure, CloudWatch for performance monitoring), Cost Optimization (pay-as-you-use model, big discounts for reserved instances, the Serverless Application Model helps developers run code without provisioning any instance, saving cost and time), Security (comply with PCI DSS, HIPAA, ISO 27001; Inspector for performing security checks for CVEs; CloudTrail and Config (non-repudiation); Macie, based on AI, able to find personally identifiable information (PII) in S3).
- Authorization (IAM), Authentication (IAM), Accounting (Trusted Advisor)
- Confidentiality (IAM, encryption with KMS/CloudHSM), Integrity, Availability (AZs)
- RDS: commercial - Oracle and SQL Server; open source - MariaDB, MySQL, PostgreSQL; AWS-native - Aurora, based on MySQL and PostgreSQL.
- RAM (Resource Access Manager) - sharing of resources by using a shareable subnet. No need for VPC peering.
- The Cloud Development Kit (CDK) from AWS provides a framework for developers to perform resource provisioning using programming languages supported by the CDK (Java, Node.js, Ruby, etc.); YAML - CloudFormation.
 

Thursday, 19 September 2019

Miscellaneous - My reference 

NTNX ABS (Pro), AFS (Ultimate)
- Time Stream - snapshots - local replication.
- Async replication uses proprietary technology that replicates between Nutanix clusters. Replication snapshots.
- Stretch cluster with sync replication. With AOS 5.1, a witness node performs automatic failover.
- NGT (Nutanix Guest Tools) can be installed on a VM to perform additional functions, similar to VMware Tools.
- The Nutanix Cloud Connect feature enables you to configure AWS as a remote site for virtual machine backup (an AWS EC2 m1.xlarge instance and 30 TB of S3 bucket).
- Protection strategies: 1-to-1, 1-to-many, many-to-1, many-to-many.
- Prism is built in to every Acropolis cluster, localized 1-to-1. Prism Central (license required) is a VM that can manage multiple clusters (Standard edition). Prism Central Pro adds operational insight, capacity planning, and performance monitoring.
- nCLI configures the Nutanix cluster (log in on a system where nCLI is installed, or on a CVM). aCLI manages the Acropolis portion of the Nutanix environment (log in on a CVM): host, network, snapshot, and VM.
- The CVM has 2 networks: a private one (192.168.5.2) and another connected to the host management public IP.
- Pulse -> Nutanix cloud-based analytics engine, similar to (callhome/ESRS -> CloudIQ).
- Stargate - entry gate for NFS/SMB/iSCSI.
- Cassandra - metadata about the data. RF2 - 3 nodes; RF3 - 5 nodes.

- Prism - management GUI.
- Zeus (Zookeeper DB) - configuration database. RF2 - 3 nodes; RF3 - 5 nodes.
- Curator - decision-maker algorithm (MapReduce).
- Acropolis: Starter, Pro, Ultimate. If multiple licenses are applied, features default to those of the lowest license.
- Reclaim licenses before destroying a cluster, for reuse.
  • Nutanix - STIG (Security Technical Implementation Guide) - standards required by the DoD to provide certification.