Friday, 12 November 2021

 Virtualization Trends

Compute:

Compute virtualization started with mainframes and their virtualizing OS z/VM, which inspired the PowerVM hypervisor on IBM Power systems. The same idea was applied to Intel/AMD systems by VMware, KVM, Hyper-V, and many others, with varying degrees of virtualization: type 1 (bare-metal) hypervisors, and type 2 hypervisors such as VMware Workstation and VirtualBox. The progression has run from physical systems, to virtual machines, to Windows/Linux containers, to workload partitions (WPARs in AIX), to Docker containers.

Moving to virtualization has significantly improved resource utilization, provisioning time, and footprint; these benefits show up as reductions in CAPEX and OPEX. Moving from VMs to containers has revolutionized the pace of deployment and made delivery of services easier. Virtual machines are no longer new to most IT professionals, and the use of containers has been growing rapidly.

A container is a self-contained environment capable of running a complete application, well isolated from other containers. Some containers share the kernel of a single host, while others are stripped-down virtual machines (Linux Containers and Windows Hyper-V Containers).

Windows Containers are containers on Windows that share the same Windows kernel; a single host can run multiple Windows containers. The problem is that this model is not suitable for multitenancy: a single problematic or resource-hungry container can affect all other containers running on that host. Even though there are ways to cap resource utilization, security is still a concern because the containers share the same OS. Requires Windows Server 2016 or Windows 10 Anniversary Update, plus the dockerd daemon and docker.exe.

Windows Hyper-V Containers are stripped-down versions of Hyper-V VMs, but these containers are not visible in Hyper-V Manager; you can, however, see the processes they use in Task Manager. These containers are completely isolated and well suited to multitenant environments. Requires Windows Server 2016 / Windows 10 Anniversary Update (version 1607 or later) with the Hyper-V role. Can be managed using Docker Desktop for Windows, which creates a Docker VM in Hyper-V.

LXD/LXC (Linux Containers) are the Linux-side analogue of Hyper-V containers; they sit between a full VM and a Docker container.

Docker is an open-source tool that enables containerization of applications using a simple text Dockerfile (something like infrastructure as code). You create the application, write a Dockerfile with the specification, and build a Docker image.
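A minimal sketch of that workflow (the image name and file contents are illustrative, not from any particular project):

cat > Dockerfile <<'EOF'
# start from the official nginx image on Docker Hub
FROM nginx:alpine
# copy application content into the web root
COPY index.html /usr/share/nginx/html/
EOF
docker build -t mysite:1.0 .          # build an image from the Dockerfile
docker run -d -p 8080:80 mysite:1.0   # run it, mapping host port 8080 to container port 80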

Docker images can be kept on hub.docker.com in public or private repositories. Some of the commands are listed below.

Docker uses union filesystem layers to build Docker images. This is why you can see several layers being pulled from Docker Hub; one stacks onto another, building the final image.

By default, Docker will try to pull the image with the latest tag, but we can also download an older, more specific version of an image we are interested in using different tags.
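For example (the version tag is illustrative; available tags vary by image):

docker pull nginx        # pulls nginx:latest by default
docker pull nginx:1.21   # pulls a specific release by tag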

docker pull nginx
docker run -d -p 80:80 nginx
docker stop <containerid>
docker login
docker info
docker version
docker inspect <containerid>
docker pull microsoft/nanoserver
docker pull microsoft/windowsservercore
docker images
docker ps
docker ps -a
docker network ls
docker run -it microsoft/nanoserver cmd
docker run -it microsoft/nanoserver cmd /c "echo hello world"
docker run --rm -it microsoft/nanoserver
docker run -d -p 8000:8000 microsoft/iis:nanoserver
docker save httpd -o httpd.tar
docker rmi httpd
docker load -i httpd.tar
docker tag httpd:latest vijayraj/httpd:latest
docker logs 5e3
docker run -d httpd (run in the background)
docker exec -i 5e3 ls -l /
docker exec -it 5e3 (interactive, with a tty)

docker rm -f $(docker ps -qa)   # force-remove all containers, running and stopped

Docker integrates well with GitHub in a CI/CD model: as soon as the Docker code is updated, the image is built and pushed to hub.docker.com for end users.
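A hedged sketch of that publish step from a build host (reusing the vijayraj/httpd repository tagged above):

docker build -t vijayraj/httpd:latest .   # rebuild the image from the updated Dockerfile
docker login                              # authenticate to Docker Hub
docker push vijayraj/httpd:latest         # publish the image for end users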

Kubernetes and Docker Swarm are used to orchestrate Docker containers. Kubernetes' default container runtime has been Docker, but K8s can work with other runtimes as well.
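A small illustrative kubectl session (the deployment name and image are placeholders):

kubectl create deployment web --image=nginx   # run containers under Kubernetes management
kubectl scale deployment web --replicas=3     # scale out to three pods
kubectl get pods -o wide                      # see where the pods were scheduled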


Docker Swarm vs Kubernetes


Storage: --- coming soon


Network: --- coming soon



Friday, 27 August 2021

 My reference

https://www.capitolinetraining.com/data-centre-certification-who-can-certify-which-data-centre-standard/

Play with Docker (play-with-docker.com)

A Guide to the Kubernetes Networking Model - Kevin Sookocheff

The Ultimate Guide To Using Calico, Flannel, Weave and Cilium - Platform9

Flannel - L2; for simple and small setups where network monitoring and security features are not required

Calico - L3; rich network features along with security. Suitable for big environments

K8s limits: 5,000 nodes per cluster, 100 pods per node, 150,000 pods per cluster, 300,000 containers per cluster.



An Introduction To SAP S/4 Simple Logistics – ITPFED

TAP vs SPAN | Garland Technology


Kali Linux - Menus


• Information Gathering: Collecting data about the target network and its structure, identifying computers, their operating systems, and the services that they run. Identifying potentially sensitive parts of the information system. Extracting all sorts of listings from running directory services.
• Vulnerability Analysis: Quickly testing whether a local or remote system is affected by a number of known vulnerabilities or insecure configurations. Vulnerability scanners use databases containing thousands of signatures to identify potential vulnerabilities. 
• Web Application Analysis: Identifying misconfigurations and security weaknesses in web applications. It is crucial to identify and mitigate these issues given that the public availability of these applications makes them ideal targets for attackers.
• Database Assessment: From SQL injection to attacking credentials, database attacks are a very common vector for attackers. Tools that test for attack vectors ranging from SQL injection to data extraction and analysis can be found here.
• Password Attacks: Authentication systems are always a go-to attack vector. Many useful tools can be found here, from online password attack tools to offline attacks against the encryption or hashing systems.
• Wireless Attacks: The pervasive nature of wireless networks means that they will always be a commonly attacked vector. With its wide range of support for multiple wireless cards, Kali is an obvious choice for attacks against multiple types of wireless networks.
• Reverse Engineering: Reverse engineering is an activity with many purposes. In support of offensive activities, it is one of the primary methods for vulnerability identification and exploit development. On the defensive side, it is used to analyze malware employed in targeted attacks. In this capacity, the goal is to identify the capabilities of a given piece of tradecraft.
• Exploitation Tools: Exploiting, or taking advantage of a (formerly identified) vulnerability, allows you to gain control of a remote machine (or device). This access can then be used for further privilege escalation attacks, either locally on the compromised machine, or on other machines accessible on its local network. This category contains a number of tools and utilities that simplify the process of writing your own exploits.
• Sniffing & Spoofing: Gaining access to the data as they travel across the network is often advantageous for an attacker. Here you can find spoofing tools that allow you to impersonate a legitimate user as well as sniffing tools that allow you to capture and analyze data right off the wire. When used together, these tools can be very powerful.
• Post Exploitation: Once you have gained access to a system, you will often want to maintain that level of access or extend control by laterally moving across the network. Tools that assist in these goals are found here.
• Forensics: Forensic Linux live boot environments have been very popular for years now. Kali contains a large number of popular Linux-based forensic tools allowing you to do everything from initial triage, to data imaging, to full analysis and case management.
• Reporting Tools: A penetration test is only complete once the findings have been reported. This category contains tools to help collate the data collected from information-gathering tools, discover non-obvious relationships, and bring everything together in various reports.
• Social Engineering Tools: When the technical side is well-secured, there is often the possibility of exploiting human behavior as an attack vector. Given the right influence, people can frequently be induced to take actions that compromise the security of the environment. Did the USB key that the secretary just plugged in contain a harmless PDF? Or was it also a Trojan horse that installed a backdoor? Was the banking website the accountant just logged into the expected website or a perfect copy used for phishing purposes? This category contains tools that aid in these types of attacks.
• System Services: This category contains tools that allow you to start and stop applications that run in the background as system services.

Thursday, 19 August 2021

 My reference - PostgreSQL

CentOS 7

sudo yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-7-x86_64/pgdg-redhat-repo-latest.noarch.rpm

sudo yum install postgresql12-server postgresql12

# /usr/pgsql-12/bin/postgresql-12-setup initdb

# systemctl enable --now postgresql-12.service 

su - postgres

psql -c "alter user postgres with password 'password'"

[root@vm11 ~]# vi /var/lib/pgsql/12/data/pg_hba.conf

host all all 0.0.0.0/0 md5

host all all 192.168.18.0/24 md5
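For reference, the fields in those pg_hba.conf lines are (shown as pg_hba.conf comments):

# TYPE  DATABASE  USER  ADDRESS          METHOD
# host  all       all   192.168.18.0/24  md5    (md5 = password authentication)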

[root@vm11 ~]# vi "/var/lib/pgsql/12/data/postgresql.conf"^C

listen_addresses = '192.168.10.10' 

or

listen_addresses = "*"

sudo rpm -i https://ftp.postgresql.org/pub/pgadmin/pgadmin4/yum/pgadmin4-redhat-repo-2-1.noarch.rpm

yum install pgadmin4

/usr/pgadmin4/bin/setup-web.sh

systemctl enable --now httpd

http://192.168.82.170/pgadmin4

su - postgres

psql

create database lokus;

\c lokus;

select version();
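To verify remote access end to end, something like this from another machine (reusing the listen address and database created above; it prompts for the password set earlier):

psql -h 192.168.10.10 -U postgres -d lokus -c "select version();"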

Wednesday, 21 July 2021

Ethical Hacking - Penetration Testing -- My Reference

Whenever possible, choose a modern framework to build your apps, always use the built-in security features, and make sure you keep it up-to-date.

...


The TCP/IP network model includes the Link, Internet, Transport, and Application layers. The Link layer of the TCP/IP architecture encompasses the Physical and Data Link layers from the OSI model, while the Application layer takes in the Session, Presentation, and Application layers from the OSI model.

Most firewalls perform packet filtering based on the information in the packet headers. A stateful firewall allows outgoing traffic to any destination and then permits the return traffic for those established connections back into the network. Newer firewalls can perform deep packet inspection. Next-generation firewalls (UTMs) include antivirus functionality and are somewhat capable of inspecting layer 7 traffic; a firewall specialized for web traffic is known as a WAF (Web Application Firewall), and there are dedicated WAF offerings such as AWS WAF, Azure WAF, Cloudflare WAF, and Imperva WAF. Further, an organization can apply Defense in Depth (multiple layers of firewalls at the perimeter and at the DMZ), while Defense in Breadth includes awareness, performing vulnerability assessments, monitoring traffic, and deploying honeypots (intentionally vulnerable systems kept in the network to discover threat vectors, the vulnerabilities exploited, and the attacker's motive). Similar to a honeypot, a honeynet can also be deployed to find attack vectors and attackers.

CIA - Confidentiality means keeping the information secret; integrity means data should not change when it is not expected to change; availability means data should be available when it is expected to be available.



Methodology of Ethical Hacking

  • Reconnaissance and Footprinting
Footprinting and reconnaissance are important activities in an overall methodology when it comes to ethical hacking. This is the stage where you collect a lot of data that you will use later. It’s also a stage where you don’t want to make a lot of noise. You’re sneaking up on someone to scare them. If you make noise, you get no startled reaction from your target. The same is true here. You want to do your work ahead of time so when you get started, you aren’t registering a lot of activity that could get you detected and the next thing you know, you’ve been locked out. 
There are multiple sources of intelligence that are freely and openly available. Using sources like the SEC’s EDGAR database, you can obtain information about companies. Job sites can also provide you with details about a company that may be useful later on. This includes the technology they may be using, since when they hire people, they generally provide job requirements, including technology vendors like Microsoft, Red Hat, Cisco, and Palo Alto Networks. You can also read reviews from employees that may have useful data. Social network sites can be used to gather information about people, as well as companies. The people sometimes post information about their jobs or the company in general. Social engineering is a common attack technique, and this attack technique can be more successful if you know how to target individuals. You can also gather information about the companies themselves. Often, companies will have a social network presence. Additionally, there are sites like LinkedIn that are focused on businesses and business interactions.

The Internet requires data to function. This includes who has been allocated addresses. These allocations go through RIRs. You can use tools like whois to gather some of this data from the RIRs. The program whois can also be used to gather registration details about domains. This would include addresses, phone numbers, and contact information. Not all domains will have this information, since registrars will sometimes hide the details of the registrant. What you'll get is the registrar's contact information and address and not the organization's. What you will also get out of this is the registered name servers (NSs) for the domain. These are the NSs considered to be authoritative for the domain. DNS contains a lot of detail about a company if you can find it. The quickest way to get all of the hosts is to do a zone transfer, but it would be unlikely for zone transfers to be allowed. Instead, you may have to resort to something like brute forcing DNS requests to guess possible hostnames. From this, you will get hostnames that can become targets. You will also get IP addresses. These IP addresses can be run through whois to get the network block and the owner of the block. You may get ranges of IP addresses that belong to the company from doing that. Many attacks take place through web interfaces. While gathering details about a web server or the technologies used in a website isn't entirely passive, meaning using third-party sources, making common requests to a web server shouldn't be noticed much. As long as you aren't sending too many requests all at once or sending something unusual, you'll get lost in the noise of all the other regular requests. Google hacking is a technique you can use to narrow your search and get results that are focused and relevant. Google hacking makes use of keywords like site, inurl, intext, filetype, and link to get very specific results. One of the most useful is the site keyword, which means you are searching only within one domain, so you are only looking at results within the organization you are testing for. If you need help identifying search terms that may help you identify vulnerabilities, you can use the Google Hacking Database.

People and businesses are often using devices that have a network connection but don't have traditional means for users to interact with them. This often means these devices can be vulnerable to attack. Collectively, these devices are called the Internet of Things (IoT). There are sites like Shodan that can be used to identify these embedded devices. Shodan will provide a lot of details about a device, including the IP address and where the IP address is located. You should be able to narrow down whether the device belongs to your target company using a site like Shodan.
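A few typical commands for this phase (the domain, name server, and addresses are placeholders):

whois example.com                       # registrar, contacts, authoritative name servers
whois 203.0.113.10                      # map an IP address back to its network block and owner
dig @ns1.example.com example.com AXFR   # attempt a zone transfer (usually refused)
dig mail.example.com +short             # resolve a guessed hostname to an IP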
  • Scanning and Enumeration

Scanning provides a wealth of information that is necessary to move forward with testing and evaluation. There are different types of scanning, however. As we scan, we want to identify ports that are open. The purpose of identifying open ports isn't just to get a list of ports; ultimately, we want to identify the services or applications that are listening on the open ports. In order to identify these open ports, use a port scanner. The most commonly used port scanner is nmap.

nmap can be used for more than just identifying open ports. It can also be used to identify application versions by grabbing banners, and nmap can identify the operating system running on the target. While there are other port scanners available, including masscan, which is used for high-speed scanning, nmap is the only port scanner that has a scripting engine built into it. The scripting engine for nmap is based on the programming language Lua, but nmap provides libraries that give access to the information nmap has, so you can write scripts to better identify services and also perform tests such as identifying vulnerabilities. When writing scripts for the nmap scripting engine (NSE), you register ports with nmap so nmap knows to call your script when it finds the registered port open.

While nmap is commonly a command-line program, there is also a GUI that acts as a front end for nmap. Zenmap is a program that will call nmap based on a command specified, but it will also parse the results, presenting them in different ways. It lists all hosts that nmap identified along with a topology view of the network based on what nmap finds. Zenmap also provides canned scan types like an intense scan; selecting a scan type fills in the needed command-line parameters.

fping -aeg 192.168.86.0/24                      # ping sweep: show alive hosts (-a) with elapsed time (-e) for a generated range (-g)
nmap -sS 192.168.1.20                           # TCP SYN (half-open) scan
nmap -sT -p 80,443 192.168.1.0/24               # TCP connect scan of ports 80 and 443 across a subnet
nmap -sU -T4 192.168.86.32                      # UDP scan with aggressive timing
nmap -sX 192.168.86.32                          # Xmas scan (FIN, PSH, and URG flags set)
nmap -sV 192.168.86.32                          # probe open ports for service and version information
nmap -O 192.168.1.20                            # operating system detection
zenmap                                          # GUI front end for nmap
masscan --rate=100000 -p80,443 192.168.1.0/24   # high-speed scan of ports 80 and 443

Vulnerability scanners will not only look for vulnerabilities; they will also generally perform port scanning as part of looking for and identifying open ports. There are a number of vulnerability scanners available commercially. Very few vulnerability scanners are open source. One, based on one that is now commercial, is OpenVAS. OpenVAS was forked from the last open-source version of Nessus. One of the challenges of vulnerability scanners is the vast amount of work it takes to maintain them and keep them up to date, which is perhaps a primary reason why there are very few open-source scanners. Vulnerability scanners, like OpenVAS and Nessus, use plug-ins to perform tests. They probe the targeted host to observe its behavior in order to identify potential vulnerabilities. Not all identified vulnerabilities are real, however. Vulnerabilities that are identified by scanners but aren't real are called false positives; a false negative is a vulnerability that did exist but wasn't identified. Vulnerability scanners are far from infallible because of the way they work, so it may require manual work to validate their findings. In order to test vulnerabilities and also perform scans, we may need to do something other than relying on the operating system to build packets. There are multiple tools that can craft packets, such as hping. This is a tool that can be used for scanning but also to create packets using command-line switches. If you would prefer not to use the command line, you can use a tool like packETH. packETH presents all of the headers at layers 2 through 4. We can also create a payload to go in the packet. packETH also lets you extract packets from a PCAP and then make changes to them. We can send individual packets to a target, at either layer 2 or layer 3, or we could send streams. Using these crafted packets, we can get responses from the target that may provide the necessary information.
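A hedged hping3 sketch for hand-crafted probes (the address and ports are placeholders):

hping3 -S -p 80 -c 3 192.168.1.20      # three TCP SYN probes to port 80
hping3 --udp -p 53 -c 3 192.168.1.20   # three crafted UDP packets to port 53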

Target networks will likely have firewalls and an IDS installed. We will probably want to use techniques to evade those devices, since they will likely prevent us from doing the job. There are multiple ways to evade security technologies, including encryption/encoding, causing the operator to go screen blind, or sending malformed messages to the target.

  • Gaining Access
    • After gaining access, a black hat hacker may exploit the vulnerability further (for example, for privilege escalation)
  • Maintaining Access
  • Covering Tracks



White hat, black hat, and gray hat:

 White hat hackers are people who always do their work for good. 

Black hat hackers, probably not surprisingly, are people who do bad things, generally actions that are against the law. 

Gray hat hackers, though, fall in the middle. They are working for good, but they are using the techniques of black hat hackers.

Enumeration is the process of gathering a lot of information further up the network stack  than just IP addresses and ports. At this point, we are moving up to the Application layer. We’re looking for things like usernames, where we can find them, and network shares and any other footholds we may be able to gather. In order to accomplish this enumeration work, there are a number of protocols and tools that we can use. The first is nmap, because we need to go beyond just identifying open ports. We need to identify the services that are in use, including the software being used. One feature of nmap that is very useful, especially in these circumstances, is its scripting capability. This includes, especially, all the scripts that are built into nmap. When it comes to nmap, there are scripts that can be used not only to probe services for additional details but to take advantage of the many enumeration capabilities. One of the protocols we can spend time looking at is the SMB protocol. Nmap includes a number of scripts that will probe systems that use SMB. This includes identifying shares that may be open as well as potentially users and other management-related information that can be accessed using SMB.
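For instance, nmap's SMB scripts can enumerate shares and users in one pass (the target address is a placeholder):

nmap -p445 --script smb-enum-shares,smb-enum-users 192.168.1.20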

SMB relies on RPCs. NFS, a file sharing protocol developed by Sun Microsystems, also uses RPC. We can use nmap to enumerate RPC services, since these services register dynamically with a mapping or registry service. Probing the RPC server will provide details about the programs and ports that are exporting RPC functionality. If the program is written in Java, it will use RMI instead of the portmap or SunRPC protocol. Another program you can use across a number of protocols for enumeration is Metasploit. Metasploit comes with lots of modules that will enumerate shares and users over SMB, services using SunRPC, and a number of other protocols. If there is information that can be enumerated, Metasploit probably has a module that can be run. This includes modules that will enumerate users on mail servers over SMTP. You can also enumerate information using SNMP. Of course, when it comes to SNMP, you can also use tools like snmpwalk.
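A quick snmpwalk sketch (the community string and target are placeholders; "public" is a common default):

snmpwalk -v 2c -c public 192.168.1.20 system   # walk the system subtree over SNMP v2c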

While Metasploit can be used across a lot of different protocols to look for different pieces of useful information, it is not the only tool you can use. There are built-in tools for gathering information from SMB, for example. You're more likely to find those tools on Windows systems, but you can also find tools on Linux systems, especially if you have Samba installed. Samba is a software package that implements the SMB protocol on Unix-like systems. There are also a lot of open-source tools that can be used for different protocols. If you are okay with using Linux, Kali Linux is a distribution that includes hundreds of security-related tools. As you are performing this enumeration, you should be taking notes so you have references going forward. One advantage to using Metasploit, not to oversell this software, is that Metasploit uses a database back end, which stores a lot of information automatically. This is certainly true of services and ports but also of usernames that have been identified. This is not to say that Metasploit can be used to store every aspect of your engagement, but you can refer to details later by querying the Metasploit database as needed.

Hacking

Once you have all of your information gathered about your target—networks, systems, email addresses, etc.—the next step is to work on exploiting the vulnerabilities. There are multiple reasons for exploiting the identified vulnerabilities that have nothing to do with simply proving you can. The ultimate goal is to improve the security posture and preparedness for the organization you are working with. This means the reason you are exploiting vulnerabilities is to demonstrate that they are there and not just false positives. Additionally, you are exploiting vulnerabilities in order to gather additional information to exploit more vulnerabilities to demonstrate that they are there. Ultimately, once you have finished your testing and identified vulnerabilities, you can report them back to your employer or client. In order to exploit vulnerabilities, you need a way to search for exploits rather than being expected to write all the exploits yourself. There are online repositories of exploits, such as exploit-db.com. Some of these online repositories are safer than others. You could, for example, go to the TOR network and also look for exploits there. However, there are several potential problems with this. What you get there may not be safe, especially if you are grabbing binaries or source code you don’t understand. If you prefer to keep exploit repositories local, such as if you aren’t always sure if you will have Internet access, you can grab the exploit-db repository. There is also a search script, called searchsploit, that will help you identify exploits that match possible vulnerabilities. 
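With the local exploit-db copy installed (as on Kali), searchsploit lookups are simple (the search term is illustrative):

searchsploit vsftpd 2.3.4   # keyword search of the local exploit-db index
searchsploit -x <EDB-ID>    # examine a specific entry found by the search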

Once you have exploited a system, there are several steps you would consider taking that would not only emulate what an attacker would do but also give you continued access to the compromised system and also to other systems in the network. For example, you can grab passwords from the system you have compromised. These passwords may be used on other systems once you have cracked the passwords using a tool like John the Ripper or rainbow tables. In a Windows domain, you will certainly find that usernames are probably usable across multiple systems, and often, local administrator passwords are used across systems as well. You may find there are networks that you can’t get to from the outside. You can pivot to those other networks once you have compromised a system. What you are doing is using the compromised system as a router. You will adjust your routing table to push traffic through the connection you have to the remote system. That system will then forward the packets out to other systems on the network. In order to accomplish many tasks, you will need to have administrative privileges. This may require privilege escalation. This could be done by Meterpreter automatically, but more than likely you will need to make use of a local vulnerability that will give you administrative privileges once the vulnerability has been exploited. Metasploit can help with that, but you may also need to find other exploits that you run locally. One thing you may need elevated privileges for is to maintain persistence, meaning you can always get back into the system when you want to. There are ways to do it without elevated privileges, but getting administrative rights is always helpful. You’ll also want to cover your tracks to avoid detection. This may include wiping or manipulating logs. This is another place where elevated privileges are useful. There are a number of ways to hide files on the compromised system. This will help with casual observation, for sure. Really hiding files and processes may require a rootkit. You can also manipulate time stamps on files, which may be necessary if you are altering any system provided files.
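A sketch of the offline password cracking mentioned above (the wordlist path is the Kali default; the hash file is a placeholder):

john --wordlist=/usr/share/wordlists/rockyou.txt hashes.txt   # dictionary attack on dumped hashes
john --show hashes.txt                                        # display any cracked passwords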

Malware is a serious problem across the information technology industry. It affects businesses and people alike. Malware authors generally have the upper hand. If they want to be on top of things, they can always be ahead of antivirus programs. This means they can get the malware into systems before anyone can protect against it. They usually get a leg up with new malware, and they can also keep modifying their malware to make it even harder to detect. This is where malware analysis comes in. There are two types of analysis. The first is static analysis, which comes from looking at the properties of the malware as well as the actual code. You can look at the composition of the file, including the number and size of the sections of a portable executable (PE) file. Static analysis can also potentially tell you whether you have a packed executable or not. One way of knowing this is looking at the entry point of the application. The entry point is determined at compile time, based on the address space provided to the program. This address can be labeled and not just be an address. If you see an entry point named something like UPX, you will know that it has been packed, because UPX is a common packer. Disassemblers are useful for looking at the code and also looking at some properties of the executable. The other type of analysis is dynamic, which means running the program and looking at what the malware does. This must be done with care because you don't want to infect your own machine, nor do you want to expose other systems on your network to potential infection. Running malware inside of virtual machines can help here, though some malware will know it is running inside of a virtual machine and not fully deploy. You can use a sandboxed environment to run the program, and there are sandboxes that will automate the analysis for you. One of these is Cuckoo Sandbox, which can be installed on your own hardware, or there are some openly available installations online. Using a debugger can help with dynamic inspection because you can control the run of the program. You may, in the course of your testing of your target organization, want to make use of malware. Because you are behaving ethically, you don't want to use actual malware, whose purpose is really malicious. You may, though, want to install backdoors or test whether operations staff will detect the existence of malware. You can do this by writing your own, taking into consideration your target platform and architecture as well as other requirements. Python is a popular programming language, but Windows systems may not have a Python interpreter installed like a macOS or Linux system would. Windows would have PowerShell installed, though. A compiled program, written using a language like C, would generally work because all elements of the program can be compiled in rather than relying on libraries to be installed. Malware will often have infrastructure associated with it. You may have heard of botnets. Botnets are collections of infected systems that have command and control systems that are used to tell the bot endpoints what they should be doing. Even if there isn't an entire network of command and control systems, there may be at least one system available for infected systems to connect back out to so as to allow remote control from an attacker. This is done because firewalls generally allow outbound traffic where inbound traffic will get blocked. Therefore, connections are best initiated from inside the network. When connections are initiated from the inside, there has to be a server for the inside system to connect to. Even if there aren't a lot of systems, this is still infrastructure.

Sniffing can be an important skill to have because of the many tactics that can rely on information gathered from sniffing. Sniffing is another word for capturing packets, which is the process of gathering all messages that pass by the network interface, grabbing them at the Data Link layer, and passing all the messages up to an application that is capable of displaying the messages captured. While it's called packet capturing, it's really frames that are being grabbed, since the data is taken at the Data Link layer with the layer 2 headers intact. The protocol data unit (PDU) at layer 2 is a frame; the PDU at layer 3 is a packet. If the packet-capture software is discarding the layer 2 information, then it really is a packet capture. There is a lot of software that can be used to capture packets across varied platforms. The program tcpdump has been around since the late 1980s and was standardized in the late 1990s. It is generally available across multiple operating systems, especially Unix-like operating systems. On Windows, you can get a port of tcpdump called windump. The behavior is the same, but the source code is different in order to take into account the way Windows interacts with its network hardware. If you are looking for a program you can use with the same name across multiple platforms, you can use tshark. This is a command-line program that comes with the Wireshark package. It also has the advantage of giving you the capability of printing only the fields you indicate. Wireshark is a GUI-based program that can perform not only packet capture but also packet analysis. There may be other programs and utilities you can use to analyze packet captures, but Wireshark has to be about the best you can get, especially for the money: it's freely available and packed with functionality. Wireshark knows about dozens if not hundreds of protocols. It does protocol decoding and can identify issues with protocols. It will call attention to those issues by coloring the frames in the packet capture and also coloring the lines in the protocol decode. Wireshark will provide expert information that you can look at all at once from the Analyze menu. There is also a Statistics menu that provides a number of different ways to look at the data. This includes a protocol hierarchy, showing how the protocols break down in the packet capture. You can also look at the packet capture from the perspective of endpoints. In the different statistics views, you can see packet and byte counts. It can be challenging to get packets to the device where you are trying to capture them. One way to do this is to mirror ports on a switch. This is sometimes called port spanning because, as mentioned previously, Cisco calls the functionality SPAN. You may not have access to the switch, though. You can also perform spoofing attacks, such as ARP spoofing. ARP spoofing is when a system sends gratuitous ARP responses, which are then cached on other systems on the network. ARP spoofing can be used to get packets to your system for capture. However, you can also use ARP spoofing as a starting point to do DNS spoofing if what you really want to do is redirect requests to other IP addresses. You can also use ARP spoofing to redirect web requests to the sslstrip plug-in. The program that does all this is Ettercap, though there are other programs that can do ARP spoofing. DNS spoofing is also a possible way to redirect traffic to an attacker. This may be done by intercepting DNS requests and responding to them faster than the legitimate DNS server. In the case of DNS, the first answer wins, and sometimes DNS clients will accept answers even from IP addresses they didn't send the request to, because DNS servers may respond from a different IP address than the one the request went to.
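A short capture-and-inspect sketch with the tools above (the interface name and filter are placeholders):

tcpdump -i eth0 -w web.pcap port 80                # capture HTTP traffic to a file
tshark -r web.pcap -T fields -e ip.src -e ip.dst   # print only selected fields from the capture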

Social engineering is a skill that has been around for probably as long as there have been humans who have wants and desires. Once you know the principles of social engineering, you can start to recognize places where you are being manipulated. The principles of social engineering are reciprocity, commitment, social proof, authority, liking, and scarcity. These key principles can be used to manipulate people into performing acts they may not otherwise be inclined to perform. This can include giving you access to systems or information. It could also be giving up credentials that you may need to perform other attacks. When you are preparing a social engineering attack, pretexting is important. Pretexting is creating the story you are going to be telling. You can see examples of pretexting in your email and even in scam phone calls. The 419 scam is common and a good example of  pretexting. You are being told a story, and it’s a story you are inclined to buy into, likely because of the promise of millions of dollars. Who, after all, wouldn’t like to have enough money that they could quit their job and go live on a beach in Tahiti? Pretexting is just the story you are going to tell your targets. A good pretext has all the angles covered so you aren’t fumbling for an answer when your target asks a question or raises an objection. You have it all figured out. This can require research, especially if you are going after employees within an organization who may have been given training to protect themselves and the company against social engineering attacks. There are many forms social engineering can take. Four of these are vishing, phishing, smishing, and impersonation. Vishing is trying to gather information over the phone; phishing is the process of gathering information through fraud, though commonly it’s thought to include email as the delivery means; smishing is using text messages; and impersonation is pretending to be someone else. While some of the forms of social engineering will include impersonation as a component, when we talk about impersonation as a social engineering vector, we’re talking about impersonating someone else in order to gain physical access to a building or facility. Gaining physical access to a facility may be an important element in a penetration test or red-team effort. You can impersonate someone else, but there are multiple protections that may make that difficult. Many buildings today are protected by requiring a badge swipe to demonstrate that you are who you say you are and that you have legitimate access to the building. This can be avoided through the use of tailgating. Tailgating means following someone else through the door after they have opened it with their badge. A man trap can protect against this sort of entrance, as can a revolving door that only makes a quarter turn for a swiped badge. Biometrics can also be used to verify identity. Once identity has been demonstrated, access can be granted as defined. Websites can be used as vectors for social engineering attacks. This is another area where impersonation can be useful. It’s trivial to set up a clone of an existing website by just copying all of the resources from that site to another location. Once there, you can add additional components, including “malicious” software, which may give you remote access to the target system. You can use typosquatting attacks, using a domain name that is very similar to a real domain name with a common typo as part of the name. 
You can also use watering hole attacks, which is where a commonly visited site is compromised so when users come to the site, they may get infected. Wireless network access is common today, especially with so many devices that can’t support wired network access. This is another area where you can run a social engineering attack. You can set up a rogue access point with an enticing name to get people to connect and give up information. You could also use an existing SSID, jamming access to the authentic access point. This means legitimate users can be forced to attempt authentication against your access point and you can gather credentials or any other information, since once they are connected, all of their traffic will be passing through your system. While these attacks can be done manually, it can be easier to automate a lot of them.

There are some good tools to help. They include wifiphisher, which can automate the creation of a rogue access point, and the Social-Engineer Toolkit (SET), which can automate social engineering attacks. SET uses Metasploit and its payloads and modules to support these attacks.

Wireless Security

Wireless networks are common today, especially as mobile devices become more ubiquitous. There are two common types of wireless networks. The first is an ad hoc network, where the stations organize themselves into the network. A station is an endpoint in a wireless network. If there is an access point that the stations connect to, it's an infrastructure network. When stations connect to an access point, they have to authenticate. There may be open authentication, which means anyone can join the network without needing to know a password. Modern wireless networks use WPA for encryption, and WPA networks use a four-stage handshake to authenticate the station and also derive the encryption key. WEP was the first attempt at encrypting wireless communications. Unfortunately, the initialization vector, a random value meant to seed the encryption, wasn't actually random and could be guessed, leading to key leakage. The second pass was meant as a stopgap, and that was WPA. Eventually, the final replacement for WEP came out, called WPA2. WPA and WPA2 support both Personal and Enterprise authentication. Personal authentication uses a pre-shared key. Enterprise authentication can support username and password authentication.

Wireless networks can be attacked. In networks that use WEP, the encryption key could be derived if enough frames were collected from the wireless network. Common sniffing applications like Wireshark can be used to sniff wireless networks. The wireless interface needs to be put into monitor mode to collect the radio headers. Wireshark may be capable of doing this on its own, depending on the network interface. You can also use the airmon-ng program from the aircrack-ng suite, and aircrack-ng itself to try to crack the password used for the network. One way to gather enough data is to use a deauthentication attack. This attack sends messages to clients telling them they have been deauthorized on the network. The attacking system spoofs the messages to appear as though they are coming from the access point. Simultaneously, the attacker may attempt to jam the access point so it doesn't receive, and more important respond to, the station. The attacker may continuously send these deauthentication messages to force the victim station to keep sending messages trying to authenticate against the network. Evil twin attacks are where the attacker establishes a network that appears to be a legitimate network with the same SSID. Stations would attempt to authenticate against the rogue access point. The rogue access point could then collect any authentication credentials, and once the station had passed those along, the evil twin would allow the connection, gathering traffic from the station once the station was connected. Another attack is the key reinstallation attack. This is an attack where handshake traffic is replayed in order to force a station to use a key that's already known to the attacker, meaning traffic can be decrypted. Of course, 802.11 is not the only wireless communications protocol. Bluetooth is often used. There are several Bluetooth attacks, though some of them may be outdated and won't work on modern devices. But there are plenty of legacy devices that use older Bluetooth implementations where these attacks can work. Bluejacking is using Bluetooth to send a message to a target system. Bluesnarfing is collecting data from a target system using Bluetooth. Bluebugging is using Bluetooth to connect to a target system and then initiating a call out from the target system so conversations can be bugged. Mobile devices commonly use wireless protocols like 802.11 and Bluetooth. There are ways to inject malicious applications onto mobile devices like tablets and smartphones. This may be done using a third-party app store that users may be convinced to add as an option to their device. Mobile devices are also vulnerable to the same sorts of attacks as desktops, like phishing, malware, and programming errors that can open the door for data leakage.
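A hedged aircrack-ng workflow for the deauthentication-and-capture attack described above (the interface, channel, MAC address, and wordlist are placeholders):

airmon-ng start wlan0                                        # put the interface into monitor mode
airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w cap wlan0mon   # capture frames for one access point
aireplay-ng --deauth 10 -a AA:BB:CC:DD:EE:FF wlan0mon        # force stations to reauthenticate
aircrack-ng -w /usr/share/wordlists/rockyou.txt cap-01.cap   # offline dictionary attack on the handshake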


Attack and Defense 

Old-style attacks would have used vulnerabilities in listening services. They may even have used vulnerabilities in the implementation of the network stack. This is not where modern attacks are taking place. As often as not, they happen through social engineering. Attacks often happen at the Application layer, and since web applications are such a common avenue for users to interact with businesses, web applications are a good target for attackers. There are several common attacks against web applications. These attacks can allow the attackers to gain access to data or even to the underlying system. XML External Entity Processing can allow the use of XML to gain access to underlying system functions and files, for instance. SQL injection attacks can not only allow attackers access to the data stored in the database, they can also be used to gain access to the underlying operating system in some cases. Not all attacks are about the server infrastructure, though. A cross-site scripting attack is about gaining access to something the user has. This may include not only data on the user’s system, but also data stored on sites the user has access to. Session identification information, stored in cookies, may be used to gain access to other systems where the user has privileges. This may be online retailers or even banking sites. These session tokens could be used to steal from the user. Web applications can be protected through a number of means when they are developed. First, all input should be validated by each function, even if the expectation is it comes from a trusted source. Additionally, nothing should be passed directly from the user to any subsystem. Such actions could lead to attacks like command injection. When it comes to Application-layer exploitation, attackers are looking to inject their own code into the memory space belonging to the application. The point is to control the flow of the application by manipulating the instruction pointer, telling the processor where to get its instructions to execute. Buffer overflows can be used to push instructions and return addresses onto the stack, where data for the application is stored. If an attacker can push instructions onto the stack, the program can be manipulated into executing those instructions, sometimes called arbitrary code execution. Another attack involves the memory structure called the heap, where dynamic data is stored. Heap spraying involves injecting code from the attacker into the heap. Once it’s there, the attacker could cause the program to execute instructions on the heap. Once attackers are in the environment, they will look to move laterally, to gain access to other systems. This may involve privilege escalation and more reconnaissance so they can gain access to systems where there may be more data. It has long been a well-respected strategy to use a defense in depth approach to network protection. This is sometimes called a castle defense, focused around building a lot of walls to make it more difficult or cumbersome for attackers to gain access to networks. This ignores modern adversaries, who are organized and well funded and will take as much time as necessary to gain access to a prized target. A few additional hurdles won’t be a deterrent, and if there are not detection strategies and controls in place, slowing the attacker down won’t provide much other than a short reprieve from the breach. 
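To make the SQL injection idea above concrete, a small hedged sketch (the query, URL, and parameter are invented for illustration; sqlmap is a commonly used tester not mentioned above):

# a login form that builds:  SELECT * FROM users WHERE user='$u' AND pass='$p'
# is bypassed by the username:  admin' --
# which yields:  SELECT * FROM users WHERE user='admin' --' AND pass='...'
sqlmap -u "http://target.example/item.php?id=1" --dbs   # automated injection test of the id parameter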
Defense in breadth is a way to alleviate some of the concerns related to defense in depth by broadening the scope of understanding how modern attackers function. A defense in breadth approach would look at the entire network stack, providing controls where possible to protect against attack rather than assuming more firewalls will keep attackers out. A newer approach to network architecture and design is becoming common. It’s called defensible network architecture and it is based on the understanding that social engineering attacks are common. It also takes into account the importance of visibility and response, since breaches are common and may not be possible to avoid. Logging is essential to detection, and when a lot of logs are collected, a system to manage them is essential, such as a SIEM system.

Cryptography

Encryption is an important concept because privacy is so important. This is especially the case when attackers are looking for any advantage they can get. If they can intercept messages that are not encrypted, they may be able to make use of the contents of the message. Users will sometimes make the mistake of believing that messages sent to other users within an enterprise are safe because they remain inside the enterprise. These messages are not safe because they can be intercepted and used. The same can be true of disk-based encryption. You can't assume that a disk that has been encrypted is safe. Once someone has authenticated as a legitimate user, the disk is unencrypted. This means if an attacker can gain authenticated access, even by introducing malware that is run as the primary user, the disk is wide open to the attacker.

There are two types of encryption when you think about the end result. The first is substitution, where one character is substituted for another character. This is common with encryption schemes like a rotation cipher and the Vigenère cipher. The second type is a transformation cipher. This is where the unencrypted message, or plain text, is not replaced a character at a time but the entire message is transformed, generally through a mathematical process. This transformation may be done with fixed-length chunks of the message, which is a block cipher. It may also be done byte by byte, which is how a stream cipher works. With a block cipher, the data size is expected to be a multiple of the block size. The final block may need to be padded to get it to the right size.

Key management is essential. An important element of that can be key creation. You could use pre-shared keys, which could be learned or intercepted while they are being shared. If you don't use a pre-shared key, the key could be derived. This may be done using the Diffie-Hellman key exchange protocol. Using a common starting point, both parties in the process add a value and pass it to the other party. Once the value has been added to the shared key, you end up with both sides having the common value plus the random value from side A plus the random value from side B. Both sides have the same key and can begin sending encrypted messages.

This process could be used for symmetric keys, where the same key is used for both encryption and decryption. The Advanced Encryption Standard (AES) is a common symmetric key encryption algorithm. AES supports key lengths of 128, 192, and 256 bits. You might also use an asymmetric key algorithm, where different keys are used for encryption and decryption. This is sometimes referred to as public key cryptography. A common approach is to use a hybrid cryptosystem, where public key cryptography is used to share a session key, which is a symmetric key used to encrypt messages within the session.

Certificates, defined by X.509, a subset of the X.500 digital directory standard, are used to store public key information. This includes information about the identity of the certificate holder so verification of the certificate can happen. Certificates may be managed using a CA, which is a trusted third party that verifies the identity of the certificate holder. A CA is not the only way to verify identity, though. PGP uses a web of trust model, where individual users validate identity by signing the public keys of people they know.

A MAC is used to ensure that messages haven't been altered. This is generally a cryptographic hash, which is an algorithm that generates a fixed-length digest value from variable-length data. This can be used not only for message authentication but also for verifying that files have not been tampered with or corrupted.
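A few illustrative OpenSSL commands for the concepts above (file names and key are placeholders; the -pbkdf2 option assumes OpenSSL 1.1.1 or later):

openssl enc -aes-256-cbc -pbkdf2 -in msg.txt -out msg.enc   # symmetric (AES) encryption of a file
openssl dgst -sha256 msg.txt                                # fixed-length cryptographic hash
openssl dgst -sha256 -hmac "secretkey" msg.txt              # HMAC, a hash-based MAC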



attack lifecycle


Data classification is an essential activity. It helps to identify all data resources as well as

prioritize their sensitivity or importance. This action is needed in order to implement a

security model. You would be unable to implement Biba, for instance, if you didn’t know

sensitivity or priority, since Biba needs to know who can read up and who can write down.

The Biba security model is about data integrity. The same is true for the Clark-Wilson

integrity model. Other models, like the Bell-LaPadula model, aren’t as concerned with

integrity as they are with confidentiality. Integrity ensures that data isn’t altered or corrupted

by unauthorized users. Confidentiality ensures that data isn’t seen by those who are

not authorized. Security models are necessary for implementing mandatory access controls.

Applications are designed. As a result, they generally follow known architecture or

design patterns. Native applications that are stand-alone have no external architecture but

may have an internal architecture. This means, since they are stand-alone, they don’t rely

on other systems or services. A common application architecture, though, is the n-tier,

or multitier, architecture. The multitier architecture is composed of the Presentation,

Application, Business Logic, and Data Access layers. Sometimes the Application and

Business Logic layers are consolidated to just handle logic for the application, based on

business requirements. This would be a three-tier application. This is an implementation of

a MVC application design.

Web applications will generally use a multitier architecture. The browser is the presentation

layer (view). There is likely an application server, whether it’s running Java, .NET,

PHP, or some other language, that handles business and application logic (controller).

Finally, there is probably a data store, perhaps in the form of a database, that is the data

access layer (model).

Modern applications are still using multitier architectures, but they are also often broken

up into functions or services. When an application is viewed or designed this way, it

is said to have a service-oriented architecture (SOA). This means the overall application is

broken up into services and the services interact with one another. It provides modularity

so any service can be replaced with another service with the same input/output specifications

without altering the rest of the application. Recently, this approach has been adapted

into a microservice architecture. Microservices are further enabled through the use of containers

like Docker or Kubernetes.

Sometimes, these containers are implemented through the use of a cloud provider.

Traditional application architectures can also be implemented using a cloud provider, and

you could end up with a hybrid approach where pieces of your application are on your

premises while others are implemented using a cloud provider. Cloud providers are also

beginning to expose application developers to new ways of considering their application

design. This includes such things as serverless functions. The functions are connected in

order to create the overall application, but there is no server underneath that an attacker

could gain access to. Similarly, the use of containers has sometimes led to automated infrastructure,

so containers and virtual machines are built up and torn down on demand. An

attacker who gained access to such an environment might have to keep starting over when

the system they had access to suddenly went away, including any files that had been put in

place by the attacker.

Often, applications need to store data. It may be temporary data or it may be persistent

data. Traditionally, application data has been stored in a relational database accessed

using SQL. Modern applications are moving away from this approach and starting to

use NoSQL databases, which may use semi-structured documents or key-value associative

arrays.

Businesses in general need to think about a security architecture. This is not related to application or even network design or architecture. Instead, it is a set of data and methodologies that guide the overall implementation of security within the organization. NIST recommends the Five Functions of Identify, Protect, Detect, Respond, and Recover as a way of guiding the organization, both organizationally for staffing and in terms of how it evaluates information security and any potential risks to the business.

NIST is not the only organization that has security recommendations. ISO 27001 is another set of recommendations, for information security management systems; it recommends Plan, Do, Check, and Act. There is also the attack life cycle, which identifies the phases an adversary works through to gain access to critical business systems or data: initial recon, initial compromise, establish foothold, escalate privileges, internal recon, move laterally, maintain persistence, and complete mission.



------

  1. Threats, Attacks and Vulnerabilities
  2. Architecture and Design
  3. Implementation
  4. Operations and Incident Response
  5. Governance, Risk and Compliance
Control Objectives for Information and Related Technology (COBIT). COBIT is a documented set of best IT security practices crafted by the Information Systems Audit and Control Association (ISACA).

----

  • PCI DSS – Payment Card Industry Data Security Standard
  • COBIT – Control Objectives for Information and Related Technology
  • SOX – Sarbanes-Oxley Public Company Accounting Reform and Investor Protection Act
  • GLBA – Gramm-Leach-Bliley Act
  • FISMA – Federal Information Security Management Act
  • NERC – North American Electric Reliability Corporation
  • GSX – Government Secure Extranet
  • HIPAA – Health Insurance Portability and Accountability Act


InfoSec

  1. Security Governance Through Principles and Policies
  2. Personnel Security and Risk Management Concepts
  3. Business Continuity Planning
  4. Laws, Regulations and Compliance
  5. Protecting Security of Assets
  6. Cryptography and Symmetric Key Algorithms
  7. PKI and Cryptographic Applications
  8. Principles of Security Models, Design and Capabilities
  9. Security Vulnerabilities, Threats and Countermeasures
  10. Physical Security Requirements
  11. Secure Network Architecture and Securing Network Components
  12. Secure Communications and Network Attacks
  13. Managing Identity and Authentication
  14. Controlling and Monitoring Access
  15. Security Assessment and Testing
  16. Managing Security Operations
  17. Preventing and Responding to Incidents
  18. Disaster Recovery Planning
  19. Investigations and Ethics
  20. Software Development Security
  21. Malicious Code and Application Attacks

Security Governance Through Principles and Policies

Security governance, management, and principles are inherent elements of a security policy. They define the basic parameters needed for a secure environment, and the goals and objectives that security designers and implementers should achieve. The primary goals and objectives of security are contained in the CIA Triad (Confidentiality, Integrity and Availability). These three principles are considered the most important in the realm of security. Confidentiality is the principle that objects are not disclosed to unauthorized subjects. Integrity is the principle that objects retain their veracity and are not modified by unauthorized subjects. Availability is the principle that authorized subjects are granted timely and uninterrupted access to objects.

Other security-related principles that should be considered and addressed when designing a security policy and implementing a security solution are Authentication, Authorization, Accountability, Non-Repudiation, Auditing and Identification.

Senior managers are the ones who define the security policy. Security professionals are responsible for implementing it, and users are responsible for complying with it. The person assigned the Data Owner role is responsible for classifying data (for example Unclassified, Confidential, Secret and Top Secret in the government scheme). The Data Custodian is responsible for maintaining the secure environment and backing up data. The Auditor is responsible for making sure the secure environment is properly protecting assets.

A formalized security policy structure consists of policies, standards, baselines, guidelines, and procedures. These individual documents are essential elements of the design and implementation of security in any environment.

When a secure environment is changed, loopholes, overlaps, missing objects, and oversights can lead to new vulnerabilities. Systematically managing change addresses this, and involves extensive logging, auditing, and monitoring of activities related to security controls.

Threat modeling is the security process by which potential threats are identified, categorized, and analyzed. It can be performed proactively during design and development, or reactively once a product has been deployed. The process identifies the potential harm, the probability of occurrence, the priority of concern, and the means to eradicate or reduce the threat. Threat modeling methodologies include STRIDE (Spoofing - Authenticity, Tampering - Integrity, Repudiation - Non-repudiation, Information Disclosure - Confidentiality, Denial of Service - Availability, Elevation of Privilege - Authorization), DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability), PASTA (Process for Attack Simulation and Threat Analysis), Trike, and VAST.

Integrating cyber security risk management with supply chain, acquisition and business practices is a means to ensure a more robust and successful security strategy in organizations of all sizes. When purchases are made without security considerations, the risks inherent in those products remain throughout their deployment life span.

Security management is based on three types of plans: strategic, tactical and operational. A strategic plan is a long-term plan that is fairly stable. A tactical plan is a midterm plan developed to provide more details on accomplishing the goals set forth in the strategic plan. Operational plans are short-term, highly detailed plans based on the strategic and tactical plans.

Personnel Security and Risk Management Concepts

The process of identifying, evaluating, and preventing or reducing risks is known as risk management. The primary goal of risk management is to reduce the risk to an acceptable level.

Total risk is the amount of risk an organization would face if no safeguards were implemented: threats * vulnerabilities * asset value = total risk. Residual risk is the risk that management has chosen to accept rather than mitigate. The difference between total risk and residual risk is the controls gap, which is the amount of risk reduced by implementing safeguards: total risk - controls gap = residual risk.
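
A quick worked example of these formulas in Python, with made-up numbers:

    # Hypothetical inputs for the risk formulas above.
    threats, vulnerabilities, asset_value = 5, 3, 100_000

    total_risk = threats * vulnerabilities * asset_value   # 1,500,000
    controls_gap = 1_200_000        # risk reduced by the safeguards we deploy
    residual_risk = total_risk - controls_gap              # 300,000 accepted

    print(total_risk, residual_risk)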

The six steps of the risk management framework are Categorize, Select, Implement, Assess, Authorize and Monitor

Business Continuity Planning

(ISC)² – International Information System Security Certification Consortium


Laws, Regulations and Compliance

Criminal law protects society against acts that violate the basic principles we believe in. Violations of criminal law are prosecuted by federal and state governments. Civil law provides the framework for the transaction of business between people and organizations. Violations of civil law are brought to court and argued by the two affected parties. Administrative law is used by government agencies to effectively carry out their day-to-day business.

Copyrights protect original works of authorship, such as books, articles, poems and songs. Trademarks are names, slogans, and logos that identify a company, product or service. Patents provide protection to the creators of new inventions. Trade secret law protects the operating secrets of a firm.

Contractual license agreements are written agreements between a software vendor and user. Shrink-wrap agreements are written on software packaging and take effect when a user opens the package. Click-wrap agreements are included in a package but require the user to accept the terms during the software installation process.

Protecting Security of Assets

Data Owners are responsible for defining data and asset classifications and ensuring that data and systems are properly marked. They also define the requirements to protect data at different classifications, such as encryption at rest or in transit. PII (Personally Identifiable Information) and PHI (Protected Health Information) must be protected under many laws and regulations. Information should be properly marked, handled, stored and destroyed. Backup media must be protected and secured, and media must be sanitized when sensitive information is no longer needed (a degausser for HDDs; Blancco or other purpose-built software for SSDs; OS commands such as sdelete on Windows and shred on Linux). GDPR (General Data Protection Regulation) mandates protection of privacy data.
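
For illustration, roughly what an overwrite tool like shred does, sketched in Python; note that overwriting is only dependable for magnetic disks, which is one reason SSDs call for the dedicated tools mentioned above:

    import os

    def wipe(path, passes=3):
        # Overwrite the file's bytes before unlinking it, like shred does.
        # SSD wear-levelling can leave stale copies behind, so this is a
        # sketch for magnetic media, not a guarantee for flash.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))   # overwrite with random bytes
                f.flush()
                os.fsync(f.fileno())        # push this pass to the device

        os.remove(path)

    # wipe("secret.txt")   # hypothetical file name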

Cryptography and Symmetric Key Algorithms

Symmetric key cryptography uses the same key to encrypt and decrypt data; it is fast but not very scalable. Asymmetric key cryptography uses different keys to encrypt and decrypt data (a public and a private key); it is slower but more scalable. DES uses a 56-bit key, and 3DES an effective 112-bit/168-bit key. AES is the standard now and is more secure than the older DES. AES uses the Rijndael algorithm, with key strengths of 128, 192 or 256 bits.
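
A minimal symmetric-encryption sketch using AES-256-GCM via the third-party cryptography package (pip install cryptography); this is one common way to use AES from Python, not the only one:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # same key encrypts and decrypts
    nonce = os.urandom(12)                      # must be unique per message

    ct = AESGCM(key).encrypt(nonce, b"secret data", None)
    pt = AESGCM(key).decrypt(nonce, ct, None)
    assert pt == b"secret data"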

Work factor is a way to measure the strength of a cryptography system: the amount of time and effort required to break an encrypted message. It is directly proportional to the strength of the system; the work factor to break AES is far greater than the work factor to break DES.
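
A back-of-envelope comparison in Python, assuming a hypothetical attacker who can test 10^12 keys per second:

    rate = 10**12                        # assumed keys tested per second
    seconds_per_year = 60 * 60 * 24 * 365

    des_years = 2**56 / rate / seconds_per_year    # ~0.0023 years (about a day)
    aes_years = 2**128 / rate / seconds_per_year   # ~1.08e19 years

    print(f"DES-56:  {des_years:.4f} years")
    print(f"AES-128: {aes_years:.2e} years")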

Split knowledge means the information required to perform an operation is divided among multiple users, so no single person can perform it alone.
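
One simple way to implement split knowledge is a two-part XOR split of a key, sketched below; neither share alone reveals anything about the secret, and both holders must cooperate to recover it:

    import os

    secret = os.urandom(16)       # e.g. a key to protect
    share1 = os.urandom(16)       # given to user A
    share2 = bytes(a ^ b for a, b in zip(secret, share1))   # given to user B

    recovered = bytes(a ^ b for a, b in zip(share1, share2))
    assert recovered == secret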


PKI and Cryptographic Applications

Cryptography can be used to secure email (using PGP and S/MIME), web communications (using SSL and TLS), peer-to-peer and gateway-to-gateway networking (using IPsec and ISAKMP), and wireless communications (using WPA and WPA2).
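
For example, a minimal TLS client using Python's standard ssl module; the default context verifies the server's certificate chain and hostname:

    import socket, ssl

    ctx = ssl.create_default_context()
    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())                 # e.g. TLSv1.3
            print(tls.getpeercert()["subject"])  # who the CA vouched for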

Public Key Cryptosystem: RSA 

The Secure Hash Algorithms (SHA), SHA-1 and SHA-2, make up the government standard message digest functions.

In the public key infrastructure, certificate authorities (CAs) generate digital certificates containing the public keys of system users. Users then distribute these certificates to people with whom they want to communicate. Certificate recipients verify a certificate using the CA’s public key.

To digitally sign a message, first use a hashing function to generate a message digest. Then encrypt the digest with your private key. To verify the digital signature on a message, decrypt the signature with the sender’s public key and then compare the message digest to one you generate yourself. If they match, the message is authentic.
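
A sketch of that hash-then-sign flow using the third-party cryptography package; PKCS#1 v1.5 padding and a throwaway key are chosen here just for illustration:

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    msg = b"pay bob 10 dollars"

    # Sign: digest the message, then encrypt the digest with the private key.
    sig = priv.sign(msg, padding.PKCS1v15(), hashes.SHA256())

    # Verify with the public key; raises InvalidSignature on mismatch.
    priv.public_key().verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())
    print("signature verified")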

The Digital Signature Standard uses the SHA-1, SHA-2, and SHA-3 message digest functions along with one of three encryption algorithms: the Digital Signature Algorithm (DSA); the Rivest, Shamir, Adleman (RSA) algorithm; or the Elliptic Curve DSA (ECDSA) algorithm.

Principles of Security Models, Design and Capabilities

Certification is the technical evaluation of each part of a computer system to assess its security standards. Accreditation is the formal acceptance of a certified system. The entire evaluation and accreditation process depends on standard evaluation criteria, and several such criteria exist for evaluating computer security systems. One of them is TCSEC, used in the US and also called the Orange Book, which provides criteria to evaluate the security of a system. ITSEC is an alternative used in European countries.

A subject is a user who makes a request to access an object. An object is a resource that the user wants to access. Security controls use access rules to limit the access by a subject to an object.

Security Vulnerabilities, Threats and Countermeasures

Security Architecture and Engineering assesses and mitigates the vulnerabilities of security systems, web-based systems, mobile systems, embedded systems, and various other systems at the client, server, cloud, IoT, database, and industrial level.



... To be continued

Bye...

Saturday, 26 June 2021

 My Reference - 2021

Predictive analytic cloud platform

HPE - HPE InfoSight, Dell EMC - CloudIQ, NetApp - ActiveIQ


Data Collection and Analysis

HPE - HPE Assessment Foundry - Collects data from Windows/Linux/Hyper-V/Failover Cluster/HPE Storage

Dell - Live Optics - Collect data from Windows/PowerEdge/Dell EMC storage

Brocade - SANHealth