Friday, 25 October 2019

Miscellaneous - AWS Quick Notes for Reference

  • EC2 - Elastic Compute Cloud
    • Instance family mnemonic - fight dr mc px z au
      • F - FPGA - genomic research, financial analysis
      • I - IOPS / high-speed storage - databases, I/O-heavy applications
      • G - Graphics - 3D modeling
      • H - High disk throughput
      • T - t2.micro and friends - general purpose, free tier
      • D - Dense storage
      • R - RAM (memory optimized)
      • M - Main general purpose
      • C - Compute intensive (more CPU)
      • P - Graphics/GPU
      • X - Extreme memory
      • Z - Extreme memory and extreme CPU
      • A - ARM-based architecture
      • U - Bare-metal servers
  • IAM - Users, Groups, Roles, Policies
  • Virtual Private Cloud (VPC) - 5 per region per account (default limit); public subnet (has a route to the Internet Gateway) and private subnet (only the default local route); NAT Gateway (AWS managed, highly available, up to 10 Gbps) or NAT instance (an AMI you manage); NACL (stateless - inbound and outbound rules must each be specified); Security Group (stateful); site-to-site VPN; Direct Connect via a DC provider (1 or 2 per region)
  • EBS - General Purpose SSD (up to 10K IOPS at 3 IOPS/GB, bootable); Provisioned IOPS SSD (for >10K IOPS); Throughput Optimized HDD (throughput intensive); Cold HDD (capacity oriented); Magnetic (bootable)
  • The logical-to-physical mapping of Availability Zones differs from one account to another
  • SQS - the first AWS service; message based, pull model, 14-day maximum retention. SWF (Simple Workflow Service) - task based, 1-year maximum retention. SNS - push model. SES - email notifications only. (See the CLI sketch after this list.)
  • CloudWatch (performance metrics), CloudTrail (API call logging), Config (configuration consistency/compliance). CloudTrail logs are delivered to S3, where Athena can query them with SQL. CloudWatch Logs are retained indefinitely by default and can also be exported to S3.
  • Systems Manager works for both cloud and on-premises servers
  • Migrations Tools to AWS
    • Server Migration Service (SMS) - migrates VMware or Hyper-V instances (as OVF) from on premises to the cloud, block by block.
    • Database Migration Service (DMS) - runs on an EC2 instance and migrates on-premises databases to the cloud.
    • Storage Gateway - deployed on premises as a VM; lets on-premises systems use cloud storage as NFS/SMB (file gateway, backed by S3), iSCSI block volumes (volume gateway, EBS), or a VTL (tape gateway, backed by S3/Glacier).
    • Snowball (50 TB/80 TB), Snowball Edge (100 TB plus compute power), Snowmobile (exabyte-scale data).

  • CI/CD is a methodology in which developers store code in a shared repository and collaborate, and the tasks of building the code and deploying the application are fully automated and orchestrated. In AWS this maps to CodePipeline, which ties together CodeCommit (a Git-based repository for storing code, keeping versions, and collaborating with other programmers), CodeBuild (kicks in as soon as a commit happens), and CodeDeploy (deploys the built application either as a rolling upgrade or Blue/Green - in Blue/Green the old and new applications run in parallel with more traffic weighted to the new one, and once the new application is proven in the field the old one is removed). The whole flow from commit to build to deployment is automated.
  • NoSQL - key-value stores are good for large volumes of data and do not need normalization (1NF, 2NF, 3NF). The RDBMS consistency model is ACID (Atomicity, Consistency, Isolation, Durability); the NoSQL consistency model is BASE (Basically Available, Soft state, Eventual consistency).
  • AWS - ISO 27001, HIPAA (USA), and PCI DSS compliance. KMS is the entry-level, symmetric key management service. CloudHSM runs on a dedicated host, is FIPS 140-2 Level 3 compliant, and supports symmetric and asymmetric keys. (See the KMS sketch below.)
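
A couple of minimal AWS CLI sketches to go with the notes above; the queue URL, topic ARN, key alias, and file names are hypothetical placeholders, and binary handling may differ slightly between AWS CLI v1 and v2.

# SQS is pull based: the consumer must poll the queue
aws sqs send-message --queue-url https://sqs.us-east-1.amazonaws.com/111111111111/my-queue --message-body "hello"
aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/111111111111/my-queue --max-number-of-messages 1

# SNS is push based: subscribers receive the message without polling
aws sns publish --topic-arn arn:aws:sns:us-east-1:111111111111:my-topic --message "hello"

# KMS: encrypt a small payload with a symmetric CMK, then decrypt it again
aws kms encrypt --key-id alias/my-key --plaintext fileb://secret.txt --query CiphertextBlob --output text > secret.b64
base64 --decode secret.b64 > secret.enc
aws kms decrypt --ciphertext-blob fileb://secret.enc --query Plaintext --output text | base64 --decode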

    What's in the name 

 


Bye...

Thursday, 17 October 2019

SAN Switch Configuration, VLAN Basics


  • Emulex - HBAnywhere - OneCommand manager

  • QLogic - SANsurfer - QConvergeConsole

  • Multipath policies: Failover Only, Round Robin, Round Robin with Subset, Least Queue Depth, Weighted Paths, Least Blocks.

  • Device login process in SAN (the switch commands after this list show how to view the resulting logins)

    • FLOGI - Fabric Login - between an N_Port (device) and an F_Port (switch). The N_Port sends a FLOGI frame to the fabric login server, which in turn assigns the port a 24-bit FC address (8-bit domain, 8-bit area/port, 8-bit device number within that area).

    • PLOGI - Port Login - between N_Port and N_Port. The initiator N_Port sends a PLOGI request to the target N_Port; if the target accepts, a session is established. *(Does this mean both N_Ports are in the same zone?)

    • PRLI - Process Login - between two N_Ports, to exchange upper layer protocol (ULP, e.g. SCSI) related parameters.
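
The commands below are one way to inspect the results of these logins from the switch side (assuming typical Brocade FOS and Cisco NX-OS/FabricWare syntax):

switchshow - Brocade: port state and logged-in devices
portloginshow - Brocade: port login (FLOGI) details
nsshow - Brocade: local name server entries
show flogi database - Cisco: fabric logins per interface
show fcns database - Cisco: name server entries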

Brocade


Perform the initial configuration
Upgrade the firmware, if needed - firmwaredownload
Assign the domain ID - configure
Assign port speeds - portcfgspeed
Verify that the configuration is correct - switchshow
Verify that there is host and storage connectivity - switchshow, portloginshow
Create the zoning configuration - cfgcreate
Create FC zones - zonecreate (example syntax below)
Add zones to the cfg - cfgadd
Save the configuration and back it up - cfgsave
Obtain technical support data - supportshow, supportsave
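
A minimal Brocade FOS zoning sketch for the steps above, assuming WWPN-based zoning; the alias names and WWPNs are hypothetical:

alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:cc"
alicreate "host2_hba0", "10:00:00:00:c9:aa:bb:cd"
alicreate "array1_ctl0a", "50:06:01:60:11:22:33:44"
zonecreate "z_host1_array1", "host1_hba0; array1_ctl0a"
zonecreate "z_host2_array1", "host2_hba0; array1_ctl0a"
cfgcreate "cfg_prod", "z_host1_array1"
cfgadd "cfg_prod", "z_host2_array1"
cfgsave
cfgenable "cfg_prod"

Note that cfgsave only saves the defined configuration; cfgenable is what activates it across the fabric.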

 

Adaptive Networking

Adaptive Networking (AN) is a family of technologies that allow flexible control of traffic movement within the fabric to deliver application-aware management of fabric resources. Applications may be used with multiple protocols and multiple classes of service. Adaptive Networking includes the following features:

• Ingress Rate Limiting—Allows the ingress bandwidth of a port to be throttled to a rate lower than negotiated with the SAN node. This could be very useful for enterprises offering stepped levels of service and enforcing SLAs.

• Quality of Service (QoS)—Enables zones with high, medium, and low priorities within a fabric on a zone-by-zone basis. This can be very useful for prioritizing array replication over MANs and WANs ahead of less critical traffic. (See the zone-naming sketch after this list.)

• Traffic Isolation Zones (TIZ)—Defines paths through a fabric for some or all nodes. Failover allows a non-preferred path to be used if the preferred path fails. TIZs use failover by default; failover can be disabled, but then traffic stops if the preferred path fails. TIZ can be used to manually map out traffic flows within a fabric based on application, priority, or topology.
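
In Brocade FOS, QoS priority is normally assigned through the zone-name prefix (QOSH_ for high, QOSL_ for low); a hypothetical sketch, reusing the alias names from the zoning example above:

zonecreate "QOSH_z_host1_array1", "host1_hba0; array1_ctl0a"
cfgadd "cfg_prod", "QOSH_z_host1_array1"
cfgenable "cfg_prod"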

The HPE Power Pack+ Software Bundle includes:

• Fabric Vision

• Extended Fabric

• ISL Trunking

Fabric Vision

Fabric Vision technology provides a breakthrough hardware and software solution that helps simplify monitoring, maximize network availability, and dramatically reduce costs. Featuring innovative monitoring, management, and diagnostic capabilities, Fabric Vision technology enables administrators to avoid problems before they impact operations, helping their organizations meet SLAs. Fabric Vision includes:

• Monitoring and Alerting Policy Suite (MAPS)—A policy-based monitoring tool with pre-built rules and automation that simplifies fabric-wide threshold configuration and monitoring.

• Configuration and Operational Monitoring Policy Automation Services Suite (COMPASS)—Simplifies deployment, safeguards consistency, and increases operational efficiencies of larger environments with automated switch and fabric configuration services. Administrators can configure a template or adopt an existing configuration to seamlessly deploy a configuration across the fabric.

• ClearLink Diagnostics—Helps ensure optical and signal integrity for Fibre Channel optics and cables, simplifying deployment and support of high-performance fabrics. ClearLink Diagnostic Port (D_Port) is an advanced capability of Fibre Channel platforms.

• Flow Vision—A comprehensive tool that enables administrators to identify, monitor, and analyze specific application data flows in order to simplify troubleshooting, maximize performance, and avoid congestion, without using taps, to help ensure optimized performance.

• Health and performance dashboard—A single customizable screen displayed in HPE Management Portal that contains all critical SAN information for convenient review and analysis.

Extended Fabric

Extended Fabric is an optional license that extends all of the scalability, reliability, and performance benefits of Fibre Channel Storage Area Networks (SANs) beyond the native 10 km distance specified by the Fibre Channel standard.

ISL Trunking

For high performance enhanced trunking, this optional license logically groups up to eight 32 Gbps SFP+ ports per ISL trunk to provide a high bandwidth trunk between two switches. Each 32 Gb switch requires its own license. The switch operating system views the trunk as a single, high bandwidth resource (up to 256 Gbps) when routing connections between 32 Gb switches. Connections are load-balanced across the individual links, which comprise the logical trunk group.

HPE SANnav Management Software

HPE SANnav Management Software is the next-generation SAN management application suite for HPE B-series SAN environments. The software consists of:

• SANnav Management Portal Software—A next-generation SAN management application with a simple browser-based user interface (UI) that streamlines common workflows, such as configuration, zoning, deployment, troubleshooting, and reporting.

• SANnav Global View Software—Helps administrators visualize the health, performance, and inventory of multiple SANnav Management Portal instances at data centers across the globe, or a single multi-tenant data center using a simple, intelligent dashboard.

SANnav Management Portal and SANnav Global View Software not only transform SAN telemetry data into useful insights, such as health and performance scores, but also enable administrators to quickly associate real-time data with historical metrics and logs for in-depth analysis. This can help with spotting trends, establishing baselines, and identifying any behavioral changes over time.

HPE SANnav Management Software is available as a term-license for a one-, three-, or five-year period as both physical and electronic License-to-Use (LTU). The software supports 8 Gb, 16 Gb, and 32 Gb FC switches and directors.



Cisco

To bring a Cisco switch into production, you follow these steps. You perform the initial configuration. If necessary, you upgrade the firmware. You create the VSAN, assign the domain ID, assign port speeds, verify configuration and connectivity, create FC zones, and save the configuration.
Power on the switch and set up the management interface, as directed by the switch documentation. You specify the IP address, subnet mask, gateway address, the name of the switch host, and administrator password. On most Cisco switches, the default user name and password are “admin” and “password.” Some newer switches have no default password. The newer switches require you to enter a strong password the first time that the switch is powered on. It is a best practice to use the same version of Cisco FabricWare on all the switches in a fabric. To identify the current FabricWare version, use the show version command. If the version is not the correct version, you can load an image from an FTP or SCP server. For details about the relevant commands, see your Cisco documentation. Before starting a firmware upgrade, to verify the images and the system impact, use the show install all impact command. After verification, to install the firmware, use the install all command. After installation, to verify the upgrade, use the show install all status command.
Cisco FC switches enable the creation of VSANs. VSANs enable one switch to be partitioned into multiple, fully independent virtual switches, potentially creating multiple fabrics within one physical switch. Each Cisco switch must have at least one active VSAN. The default VSAN, vsan 1, should not be used for production traffic.
To create a VSAN, perform the following steps. First, enter configuration mode by typing config t. Second, enter VSAN database configuration mode by typing vsan database. Third, create the VSAN by typing vsan vid, where vid is an unused whole number between 2 and 4093. A VSAN can be activated only if it has at least one physical port assigned to it. Assign ports by typing vsan vid interface fc slot/port, where vid is the ID of the VSAN that you created and slot/port is the physical port to be assigned to the VSAN. Repeat the activation step for each interface that is to be added to the VSAN.
Because each VSAN functions as an independent switch, each VSAN must have a unique domain ID. Cisco domain IDs can be either static or preferred. If the domain ID is static, the VSAN cannot join the fabric with any other ID. This restriction guarantees a consistent and known ID. However, if the required ID is not available, the VSAN is isolated. That is, the VSAN is not a member of any fabric. If the domain ID is preferred, the VSAN attempts to use a specified ID to join the fabric. However, if the specified ID is not available, the VSAN uses a different ID. This flexibility guarantees that the VSAN joins the fabric. However, this flexibility can result in an unforeseen domain ID. It is recommended to have static domain IDs. Assign a domain ID by using the fcdomain command and specifying either the static or the preferred keyword.
After you assign the domain ID, you assign port speeds. After each device has been connected and the switch has performed automatic speed negotiation, you should assign the port speed. Reassigning helps to avoid future negotiation errors and prolonged fabric rebuild times. To assign port speeds on a Cisco switch, use the following commands. To enter configuration mode, use the config t command. To specify an interface, use the interface command. Finally, to specify the speed, use the switchport speed command. The speed is specified in megabits per second.
To verify VSAN creation and port assignments, use the show vsan membership command. To ensure that the domain ID is correct for each VSAN, use the show fcdomain command. Finally, to verify port speed configuration, use the show interface brief command. The Operating Speed column should display 1G, 2G, 4G, or 8G (not “auto”).

After you verify the switch configuration, to verify that host and storage ports are online, use the show flogi database command. Then verify that the WWPNs are correct. On the storage system, use the management software or CLI commands. On a host, verify that the WWPNs are correct by using the operating system or HBA software.
NPIV is a feature that enables multiple logical ports to exist on a single physical FC port. This feature is required for clustered Data ONTAP and for Dell Compellent in virtual port mode when using the FC protocol. The NPIV feature is disabled by default on Cisco FC switches. To enable the feature, use the feature npiv command. To verify that an NPIV login has occurred (for example, from a logical interface on a Data ONTAP cluster, a Dell Compellent SC series array in virtual port mode, vSphere when an RDM is assigned to a VM, or a BladeCenter with a pass-through switch), use the show flogi database command.
After you verify connectivity, you create the FC zones. WWPNs can be used to create zones, but the use of aliases simplifies administration. To create aliases for host and storage ports, use the device-alias command. Next, for each VSAN, use the zone and member commands to group the aliases into zones. The example illustrates the required syntax.
After you create FC zones, you create and activate zone sets. First, to create the zone set, use the zoneset command. Next, assign member zones to the zone set. Finally, to activate the zone set, use the zoneset activate command.
Now that the Cisco switch is configured and operational, to commit the configuration to NVRAM, use the copy running-config startup-config command. To copy the configuration to or from an FTP or SCP server for backup and restore, you can also use the copy command. For detailed information, see the Cisco documentation.
If you need assistance, contact the switch vendor that sold the switch or contact the Cisco Technical Assistance Center. To gather support data, use the show tech-support details command.

config t
vsan database
vsan vid ==> vsan 2
vsan 2 interface fc2/1
vsan 2 interface fc2/2
vsan 2 interface fc2/3
config t

fcdomain domain 10 static vsan 2

config t

interface fc 2/1
switchport speed x ==> switchport speed 1000/2000/4000/8000
show vsan membership
show fcdomain
show interface brief
show flogi database
N_Port ID Virtualization (NPIV) enables multiple logical FC ports to exist on a single physical FC port.
feature npiv
show flogi database
config t
device-alias database
device-alias name aliname pwwn => device-alias name lif1 pwwn 20:1b:00:a0:98:13:d5:d4
exit
device-alias commit
zone name myzone vsan 2
member device-alias lif1
exit
config t
zoneset name zoneset1 vsan 2
member myzone
exit
zoneset activate name zoneset1 vsan 2
copy running-config startup-config
show tech-support details
show version
show install all impact system <system-image> kickstart <kickstart-image>
install all system <system-image> kickstart <kickstart-image>
show install all status
VSANs enable one switch to be partitioned into multiple virtual switches, potentially creating multiple fabrics on a single physical switch. VSAN 1 is the default and should not be used for production traffic.

VLAN - IEEE 802.1Q (dot1q)


To increase security and reduce contention for bandwidth, iSCSI is typically separated from other TCP/IP traffic, such as email and web access. The separation can be accomplished in several supported ways. No method is correct for all environments. Direct connections are inexpensive, secure, and simple to implement. However, they have limited scalability, limited distance coverage, and limited high-availability, or HA, options. Dedicated switched networks guarantee isolation and bandwidth. Dedicated switched networks are also more scalable, and dedicated switched networks can use bandwidth aggregation. The hardware costs of dedicated switched networks are higher than the costs that are associated with direct-attached configurations. The administration of dedicated switched networks is more complex. Shared or mixed networks should use VLANs to increase security and to separate iSCSI traffic from general TCP/IP traffic. This configuration is more expensive and complex to implement but this configuration is highly secure and highly scalable. This configuration also uses existing infrastructure and has more HA options.
VLANs improve flexibility by enabling subnets to be physically dispersed, and they improve reliability by isolating problems. They reduce problem-resolution time by limiting the problem space. They also reduce the number of available paths to a LUN. This benefit is important in non-multipathed environments or configurations where the multipathing software supports only a limited number of paths. VLAN IDs should be in the 2-4094 range; VLAN 1 is typically reserved as the default/admin VLAN.
There are two types of VLANs: static and dynamic. Static VLANs are port based. The switch and switch port are used to define the VLAN and its members. Static VLANs are highly secure because MAC spoofing cannot breach them. In some environments, static VLANs are easier to manage because tracking of MAC addresses is not required. Static VLANs are conceptually similar to the port zoning that is used in FC environments. A benefit of a static VLAN is that there is no need to change the VLAN configuration if a network interface card, or NIC, is replaced. Dynamic VLANs are based on MAC addresses. The VLAN is defined by specifying the MAC addresses of the members that are to be included. Dynamic VLANs provide increased flexibility at the cost of the increased complexity that MAC address management requires. Dynamic VLANs are conceptually similar to the worldwide name zoning, or WWN zoning, that is used in FC environments. A benefit of a dynamic VLAN is that devices can be moved without changing the VLAN configuration. As with other topology options, each kind of VLAN, static and dynamic, is best for certain situations, but not all situations. The requirements of the environment and the current implementation determine which type of VLAN is appropriate.
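
As a minimal sketch of the static (port-based) approach, assuming Cisco IOS-style syntax; the VLAN ID and interface name are hypothetical:

config t
vlan 100
name iscsi_vlan
exit
interface GigabitEthernet1/0/10
switchport mode access
switchport access vlan 100
end

Whatever host is plugged into that port becomes a member of VLAN 100 regardless of its MAC address, which is why static VLANs are resistant to MAC spoofing.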

IEEE 802.3AD - Link Aggregation

In iSCSI environments, port aggregation is frequently used to increase bandwidth. Port aggregation technologies are named in various ways. The IEEE standard is 802.3ad. However, port aggregation is also known as trunking, EtherChannel, network interface card teaming or NIC teaming, and so on. In Data ONTAP systems, port aggregation is implemented by creating interface groups. Three types of interface groups are available: single-mode, static multimode, and dynamic multimode. Single-mode interface groups have one active interface. The other interfaces are on standby, ready to take over if the active interface fails. Single-mode interface groups do not require switch support or switch configuration. Multimode interface groups enable multiple interfaces to be active simultaneously and balance loads across the interfaces. Static multimode interface groups are slightly more flexible, with fewer requirements. However, they lack one important feature: they cannot detect higher-layer data errors, although static multimode interface groups can detect link failure. Dynamic multimode links detect loss of data flow in addition to link failure and can renegotiate dynamically. In Data ONTAP systems, dynamic multimode interface groups are implemented with Link Aggregation Control Protocol, or LACP, and they require switches that support LACP. Both multimode options require switch support. The switch must be configured to the same multimode option as the Data ONTAP system. Port aggregation is not supported on the E-Series or EF-Series storage systems.
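
A sketch of what a dynamic multimode (LACP) interface group might look like, assuming clustered Data ONTAP CLI syntax and an IOS-style switch; node, port, and channel-group names are hypothetical:

network port ifgrp create -node node01 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node01 -ifgrp a0a -port e0c
network port ifgrp add-port -node node01 -ifgrp a0a -port e0d

On the switch side, the matching ports would be placed in a port channel with LACP active:

interface range GigabitEthernet1/0/1 - 2
channel-group 10 mode active
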
iSCSI switch configuration is straightforward. Be sure to use the IMT (Interoperability Matrix Tool) to qualify all components before you begin the configuration. After you qualify all components and perform the initial configuration steps, you enable jumbo frames, verify host and storage connectivity, and configure the VLAN. VLAN configuration details vary widely, depending on the desired mode and the switch software.

RUN THE SETUP PROGRAM
Setup programs vary by switch. Power on the switch and run the setup program, as prescribed by the switch documentation. You assign the network information, including the IP address, subnet mask, and gateway address. You also assign the host name and the IP domains.
In many iSCSI environments, the implementation of jumbo frames improves performance and decreases the CPU load on the hosts. Data ONTAP and SANtricity storage operating systems support jumbo frames on all Gigabit Ethernet ports. If your switches and hosts support jumbo frames, you enable jumbo frames by increasing the maximum transmission unit (MTU). The most common setting is 9,000 bytes. Larger frames can be used, but an MTU greater than 12,000 bytes causes the cyclic redundancy check, or CRC, mechanism to lose its effectiveness. Therefore, MTUs greater than 12,000 are not recommended. Different devices can have different MTU limits. Be sure that all devices in your environment have the same MTU setting. To enable jumbo frames on a Cisco switch, you follow these steps. To enter configuration mode, you use the config t command. To set the system-wide MTU, you then use the system mtu jumbo 9000 command. The larger MTU has no effect on the devices and the connections that use smaller frames.
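
The same steps as a short command sketch, assuming Catalyst-style IOS syntax (a reload may be needed before the new system MTU takes effect):

config t
system mtu jumbo 9000
end
show system mtu
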
After you complete the basic configuration tasks, you connect the host and the storage system to the switch and verify connectivity. On the storage system, use the management software or CLI commands. On the switch, use the management software or CLI commands. On a host, use the software initiator. On a host you can also use the host bus adapter, or HBA, management utility.


Reference:

FCoE enables organizations to transport LAN and FC SAN storage traffic on a single, unified Ethernet cable.

FCoE is enabled by an enhanced 10Gb Ethernet proposed standard commonly referred to as Data Center Bridging (DCB) or Converged Enhanced Ethernet (CEE).

Tunneling protocols, such as FCIP and iFCP, use IP to transmit FC traffic over long distances, but FCoE is a layer-2 encapsulation protocol that uses Ethernet physical transport to transmit FC data.