Saturday, 26 June 2021

 My Reference - 2021

Predictive analytic cloud platform

HPE - HPE InfoSight, Dell EMC - CloudIQ, NetApp - ActiveIQ


Data Collection and Analysis

HPE - HPE Assessment Foundry - Collects data from Windows/Linux/Hyper-V/Failover Cluster/HPE Storage

Dell - Live Optics - Collect data from Windows/PowerEdge/Dell EMC storage

Brocade - SANHealth

Sunday, 10 May 2020

Oracle DB installation on ASM(automatic storage management)


ASM-RAC installation requires installing Oracle Grid Infrastructure before installing the database.

Installation of Grid Infrastructure on CentOS Linux release 7.6.1810 (Core)

Download the installation files from Oracle; for this test we have used Oracle 12c Release 2:

linuxx64_12201_database.zip and linuxx64_12201_grid_home.zip

yum install kmod-oracleasm
yum install oracleasm-support
# only one version of oracleasmlib is needed; install whichever matches your environment
wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.12-1.el6.x86_64.rpm
wget http://download.oracle.com/otn_software/asmlib/oracleasmlib-2.0.4-1.el6.x86_64.rpm
rpm -ivh oracleasmlib-2.0.12-1.el6.x86_64.rpm

groupadd -g 54327 asmdba; groupadd -g 54328 asmoper; groupadd -g 54329 asmadmin
# id oracle
uid=1000(oracle) gid=1000(oracle) groups=1000(oracle),10(wheel),1001(oinstall),54327(asmdba),54329(asmadmin),54331(backupdba),54333(dgdba),54334(kmdba),54335(asmoper)
[root@centos02 ~]# id grid
uid=1001(grid) gid=1001(oinstall) groups=1001(oinstall),1000(oracle),54327(asmdba),54329(asmadmin),54331(backupdba),54335(asmoper),54336(racdba)

Create multiple 2 GB disks and partition each one with fdisk (e.g. fdisk /dev/sdd):
press n, accept all defaults, then w to write the partition table.
Repeat for every drive (a scripted version is sketched below).
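The same single-partition layout can be created non-interactively; a minimal sketch, assuming the ASM candidate disks are /dev/sdb, /dev/sdc and /dev/sdd (adjust the list to your system):

for d in /dev/sdb /dev/sdc /dev/sdd; do
    echo -e "n\np\n1\n\n\nw" | fdisk "$d"    # new primary partition 1 spanning the disk, then write
done
partprobe    # re-read the partition tables without a reboot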

oracleasm configure -i
oracleasm init
# each ASM disk label must point to its own partition; adjust device names to your layout
oracleasm createdisk crs1 /dev/sdb1
oracleasm createdisk crs2 /dev/sdc1
oracleasm createdisk crs3 /dev/sdd1

oracleasm createdisk data1 /dev/sdc1
oracleasm createdisk fra1 /dev/sdd1
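To confirm ASMLib can see the labelled disks (scandisks and listdisks are standard oracleasm subcommands):

oracleasm scandisks
oracleasm listdisks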

[root@centos02 disks]# pwd
/dev/oracleasm/disks
[root@centos02 disks]# ls -ltr
total 0
brw-rw----. 1 grid oinstall 8, 33 May 10 02:04 DATA1
brw-rw----. 1 grid oinstall 8, 49 May 10 02:04 FRA1
brw-rw----. 1 grid oinstall 8, 81 May 10 04:46 CRS3
brw-rw----. 1 grid oinstall 8, 65 May 10 04:46 CRS2
brw-rw----. 1 grid oinstall 8, 17 May 10 04:46 CRS1
[root@centos02 disks]#


[root@centos02 dev]# mkdir -p /u01/app/oracle/product/12.2.0/db_home
[root@centos02 dev]# chown -R oracle:oinstall /u01
[root@centos02 dev]# mkdir -p /u01/app/grid/12.2.0/grid_home
[root@centos02 dev]# chown -R grid:oinstall /u01/app/grid
[root@centos02 dev]# chmod -R 775 /u01

Edit .bash_profile for grid user

vi /home/grid/.bash_profile
if [ -f ~/.bashrc ] ; then
. ~/.bashrc
fi
ORACLE_SID=+ASM; export ORACLE_SID
ORACLE_BASE=/u01/app/grid; export ORACLE_BASE
ORACLE_HOME=/u01/app/grid/12.2.0/grid_home; export ORACLE_HOME
JAVA_HOME=/usr/bin/java ; export JAVA_HOME
TNS_ADMIN=$ORACLE_HOME/network/admin; export TNS_ADMIN
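The profile above does not put the grid binaries on PATH; a line like the following (mirroring what the oracle user's profile does later in these notes) makes sqlplus, asmca and crsctl usable without full paths:

PATH=$PATH:$ORACLE_HOME/bin; export PATH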

Unzip the grid image into the grid home /u01/app/grid/12.2.0/grid_home and run the installer:
[grid@centos02 grid_home]$ unzip linuxx64_12201_grid_home.zip
[grid@centos02 grid_home]$ ./gridSetup.sh

Follow the wizard. Step 10 shows a summary of the selected options.


[grid@centos02 grid_home]$ ps -ef | grep -i pmon
grid      1426     1  0 04:08 ?        00:00:00 asm_pmon_+ASM
[grid@centos02 grid_home]$ asmca


[grid@centos02 grid_home]$ sqlplus sys as sysasm;

SQL*Plus: Release 12.2.0.1.0 Production on Sun May 10 05:41:10 2020

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password:
Connected to an idle instance.

SQL> startup
ASM instance started

Total System Global Area 1140850688 bytes
Fixed Size                  8629704 bytes
Variable Size            1107055160 bytes
ASM Cache                  25165824 bytes
ASM diskgroups mounted
SQL> select instance_name, status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
+ASM             STARTED


Troubleshooting
------------------------------------------------------------------------------------------------------------------------

ORA-12547: TNS:lost contact
chmod 6751 $ORACLE_HOME/bin/oracle
chmod 6751 $GRID_HOME/bin/oracle

The above permissions (setuid/setgid on the oracle binaries) are required for the disk groups to be visible in DBCA. Since the suid and sgid bits are needed, it is better to change the mount options of the filesystem holding the Oracle homes and remove nosuid.
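A sketch of removing nosuid, assuming /u01 is a separate filesystem mounted with that option (device and filesystem type are placeholders; match them to your /etc/fstab):

# /etc/fstab - drop nosuid from the mount options of the Oracle filesystem, e.g.
# /dev/mapper/vg01-u01   /u01   xfs   defaults   0 0
mount -o remount /u01
mount | grep u01     # confirm nosuid is no longer listed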

If Oracle High Availability Services does not start, check the resource status and start it:
crsctl status resource -t
crsctl start has

--------------------------------------------------------------------------------------------------------------------------


su - oracle

[oracle@centos02 ~]$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

ORACLE_HOSTNAME=centos02; export ORACLE_HOSTNAME
ORACLE_SID=orcl; export ORACLE_SID
ORACLE_HOME=/u01/app/oracle/product/12.2.0/db_home; export ORACLE_HOME
ORACLE_BASE=/u01/app/oracle; export ORACLE_BASE

PATH=$PATH:$HOME/.local/bin:$HOME/bin:$ORACLE_HOME/bin
export PATH

cd /u01/app/oracle/product/12.2.0/
unzip linuxx64_12201_database.zip
cd database
./runInstaller




Once DB creation is complete, connect as the oracle user:

$sqlplus sys as sysdba

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/ORCL/DATAFILE/system.256.1040061529
+DATA/ORCL/DATAFILE/sysaux.257.1040061691
+DATA/ORCL/DATAFILE/undotbs1.258.1040061771
+DATA/ORCL/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/system.266.1040061997
+DATA/ORCL/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/sysaux.265.1040061997
+DATA/ORCL/DATAFILE/users.259.1040061779
+DATA/ORCL/4700A987085B3DFAE05387E5E50A8C7B/DATAFILE/undotbs1.267.1040061997
+DATA/ORCL/A54DDDE3191E4A19E0530A97A8C039F7/DATAFILE/system.271.1040063257
+DATA/ORCL/A54DDDE3191E4A19E0530A97A8C039F7/DATAFILE/sysaux.272.1040063257
+DATA/ORCL/A54DDDE3191E4A19E0530A97A8C039F7/DATAFILE/undotbs1.270.1040063257
+DATA/ORCL/A54DDDE3191E4A19E0530A97A8C039F7/DATAFILE/users.274.1040063397

11 rows selected.

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
+DATA/ORCL/CONTROLFILE/current.260.1040061903
+FRA/ORCL/CONTROLFILE/current.256.1040061903

SQL> select name, open_mode from v$database;

NAME      OPEN_MODE
--------- --------------------
ORCL      READ WRITE

SQL> select instance_name, status from v$instance;

INSTANCE_NAME    STATUS
---------------- ------------
orcl             OPEN


Bye...

Tuesday, 28 April 2020

Kernel Build, Installation from Source, Disk Encryption

Kernel Build from Source


Download the latest stable kernel source from kernel.org



# unxz linux-5.6.7.tar.xz
# tar xvf linux-5.6.7.tar
# cd linux-5.6.7
# yum group install "Development Tools"
# yum install openssl-devel ncurses-devel bison flex elfutils-libelf-devel

for Debian use
# apt-get install build-essential libncurses-dev bison flex libssl-dev libelf-dev



# cp -v /boot/config-$(uname -r) .config
# make menuconfig
Make the required selections and save .config



# make
Compiling the sources takes a long time (make -j $(nproc) can use all CPU cores).
After compilation, install the kernel modules:

# make modules_install
After the modules are installed, install the new kernel itself:


# make install

The following files appear in /boot:
initramfs-5.6.7.img, System.map-5.6.7, vmlinuz-5.6.7

update /boot/grub2/grub.cfg using
# grub2-mkconfig -o /boot/grub2/grub.cfg
# grubby --set-default /boot/vmlinuz-5.6.7
# grubby --default-index
# grubby --default-kernel
# grubby --info=ALL

for Debian use
# update-initramfs -c -k 5.6.7
# update-grub

#reboot
-------------------------------------------------------------------------------
CentOS - mkinitrd is a wrapper which calls dracut to generate the initramfs image
Debian - mkinitramfs generates the initramfs image

 mkinitrd -f -v /boot/initramfs-$(uname -r).img $(uname -r)
dracut foo.img

mkinitramfs -o /tmp/initramfs-$(uname -r).img
# the image is typically gzip-compressed; decompress before extracting with cpio
zcat /tmp/initramfs-$(uname -r).img | cpio -idv
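On dracut-based systems the image contents can also be listed without extracting it (lsinitrd ships with dracut):

lsinitrd /boot/initramfs-$(uname -r).img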

udevadm monitor

-------------------------------------------------------------------------------



Installing  software from source



Download the source tarball, extract it, and change into the directory (the example below uses Python 3.8.0).
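A minimal download-and-extract sketch for the Python 3.8.0 example used below (the URL follows python.org's standard layout):

wget https://www.python.org/ftp/python/3.8.0/Python-3.8.0.tgz
tar xzf Python-3.8.0.tgz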

cd Python-3.8.0
./configure --enable-optimizations
make -j 8                # compile; adjust -j to the number of CPU cores on the host
sudo make altinstall     # altinstall installs alongside the existing version without overwriting it
python3.8 --version

for debian
/etc/apt/sources.list
example
deb http://archive.debian.org/debian/ wheezy main non-free contrib
deb http://archive.debian.org/debian/   wheezy  main    non-free        contrib
# Line commented out by installer because it failed to verify:
deb http://archive.debian.org/debian-security/ wheezy/updates main non-free contrib

update-alternatives --config x-session-manager
dpkg-reconfigure gdm


Disk Encryption in Linux
cryptsetup -y -v luksFormat /dev/sdb
cryptsetup luksOpen /dev/sdb backup
ls -la /dev/mapper/backup
cryptsetup -v status backup
dd if=/dev/zero of=/dev/mapper/backup
mkfs.ext4 /dev/mapper/backup
mkdir /backup
mount /dev/mapper/backup /backup/
umount /backup
cryptsetup  luksClose backup
cryptsetup luksOpen /dev/sdb backup
Enter passphrase for /dev/sdb:
mount /dev/mapper/backup /backup/
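To unlock and mount the volume automatically at boot, /etc/crypttab and /etc/fstab entries can be added; a minimal sketch (passphrase prompted at boot, /dev/sdb assumed as above):

# /etc/crypttab
backup   /dev/sdb   none   luks
# /etc/fstab
/dev/mapper/backup   /backup   ext4   defaults   0 2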

Bye...

Sunday, 24 November 2019

NIS - NFS - autofs - bind - router, apache vhosts, Cert - iSCSI Reference

NIS - NFS  - BIND

yum install ypserv / apt install ypserv
yum install rpcbind / apt install rpcbind
ypdomainname nis-server  / apt install nis-server

# cat /etc/sysconfig/network
# Created by cloud-init on instance boot automatically, do not edit.
#
NETWORKING=yes
NISDOMAIN=nis-server
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

172.16.0.12     nis-server
172.16.0.14     nis-client


systemctl start rpcbind ypserv ypxfrd yppasswdd
systemctl enable rpcbind ypserv ypxfrd yppasswdd
/usr/lib64/yp/ypinit -m (for Ubuntu /usr/lib/yp/ypinit -m)
useradd -g 1024 -u 1024 testuser01


yum install nfs-utils
# cat /etc/exports
/home   172.16.0.0/28(rw,no_root_squash)
[root@ip-172-16-0-12 ~]# showmount -e
Export list for ip-172-16-0-12.ec2.internal:
/home 172.16.0.0/28

firewall-cmd --add-service=nfs --permanent
firewall-cmd --add-service={nfs3,mountd,rpc-bind} --permanent
firewall-cmd --reload

systemctl start rpcbind nfs-server
systemctl enable rpcbind nfs-server
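If /etc/exports changes later, the export list can be refreshed without restarting the server (exportfs is part of nfs-utils):

exportfs -rav    # re-export everything in /etc/exports
exportfs -v      # show current exports and options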


NIS and NFS Client


[root@nis-client ~]# yum install ypbind rpcbind nfs-utils
[root@nis-client ~]# ypdomainname nis-server
[root@nis-client ~]# echo "172.16.0.12 nis-server" >> /etc/sysconfig/network
[root@nis-client ~]# echo "172.16.0.12 nis-server" >> /etc/hosts
[root@nis-client ~]# echo "172.16.0.9 nis-client" >> /etc/hosts

[root@nis-client ~]# authconfig --enablenis --nisdomain=nis-server --nisserver=nis-server --enablemkhomedir --update
[root@nis-client ~]# systemctl start rpcbind ypbind
[root@nis-client ~]# systemctl enable rpcbind ypbind
ypwhich
[root@nis-client /]# mount nis-server:/home /home
[root@nis-client /]# vi /etc/fstab

reference: https://www.server-world.info/en

Automount using autofs

root@debian01:/photos# apt-get install autofs
root@debian01:~# showmount -e server1
Export list for server1:
/movies 172.18.14.0/24
/photos 172.18.14.0/24
/users  172.18.14.0/24,192.168.10.0/24

mkdir /nfs
vim /etc/auto.master
/nfs /etc/auto.photos

vim /etc/auto.photos
photos server1:/photos
movies server1:/movies

service autofs start

root@debian01:~# df -h
Filesystem                 Size  Used Avail Use% Mounted on
rootfs                     4.5G  3.7G  596M  87% /
udev                        10M     0   10M   0% /dev
tmpfs                      208M  604K  207M   1% /run
/dev/mapper/debian01-root  4.5G  3.7G  596M  87% /
tmpfs                      5.0M     0  5.0M   0% /run/lock
tmpfs                      415M  224K  415M   1% /run/shm
/dev/sda1                  228M   32M  185M  15% /boot
/dev/sr0                   1.1G  1.1G     0 100% /media/cdrom0
root@debian01:~# ls /nfs
movies  photos
root@debian01:~# ls /nfs/movies/
movie1.mpeg  movie2.mpeg
root@debian01:~# ls /nfs/photos
photo1.jpg  photo2.jpg
root@debian01:~# df -h
Filesystem                 Size  Used Avail Use% Mounted on
rootfs                     4.5G  3.7G  596M  87% /
udev                        10M     0   10M   0% /dev
tmpfs                      208M  604K  207M   1% /run
/dev/mapper/debian01-root  4.5G  3.7G  596M  87% /
tmpfs                      5.0M     0  5.0M   0% /run/lock
tmpfs                      415M  224K  415M   1% /run/shm
/dev/sda1                  228M   32M  185M  15% /boot
/dev/sr0                   1.1G  1.1G     0 100% /media/cdrom0
server1:/movies            3.5G  1.4G  2.2G  39% /nfs/movies
server1:/photos            3.5G  1.4G  2.2G  39% /nfs/photos
root@debian01:~#



BIND

yum install bind
[root@rac1 ~]# cat /etc/named.conf
//
// named.conf
//
// Provided by Red Hat bind package to configure the ISC BIND named(8) DNS
// server as a caching only nameserver (as a localhost DNS resolver only).
//
// See /usr/share/doc/bind*/sample/ for example named configuration files.
//

options {
        listen-on port 53 { 127.0.0.1;192.168.4.21; };
        listen-on-v6 port 53 { ::1; };
        directory       "/var/named";
        dump-file       "/var/named/data/cache_dump.db";
        statistics-file "/var/named/data/named_stats.txt";
        memstatistics-file "/var/named/data/named_mem_stats.txt";
        allow-query     { localhost;192.168.4.0/24;192.168.5.0/24; };
        recursion yes;

        dnssec-enable yes;
        dnssec-validation yes;

        /* Path to ISC DLV key */
        bindkeys-file "/etc/named.iscdlv.key";

        managed-keys-directory "/var/named/dynamic";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
        type hint;
        file "named.ca";
};

zone "testlab.com" IN {
type master;
file "forward.testlab.com";
allow-update { none; };
};
zone "4.168.192.in-addr-arpa" IN {
type master;
file "reverse.testlab.com";
allow-update { none; };
};


include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

[root@rac1 ~]# cat /var/named/forward.testlab.com
$TTL 86400
@       IN      SOA     rac1.testlab.com.       root.testlab.com. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          rac1.testlab.com.
@      IN  A           192.168.4.21

rac1-vip         IN  A           192.168.4.31

rac1-priv        IN  A           192.168.5.21
rac2             IN  A           192.168.4.22
rac2-vip         IN  A           192.168.4.32
rac2-priv        IN  A           192.168.5.22
scan             IN  A           192.168.5.26
scan             IN  A           192.168.5.27
scan             IN  A           192.168.5.28
rac1            IN      A       192.168.4.21
[root@rac1 ~]# cat /var/named/reverse.testlab.com
$TTL 86400
@       IN      SOA     rac1.testlab.com.       root.testlab.com. (
        2011071001  ;Serial
        3600        ;Refresh
        1800        ;Retry
        604800      ;Expire
        86400       ;Minimum TTL
)
@       IN  NS          rac1.testlab.com.
@    IN  A              192.168.1.21
rac1    IN      A       192.168.1.21


10      IN      PTR     rac1-vip

20      IN      PTR     rac1-priv
30      IN      PTR     rac2
40      IN      PTR     rac2-vip
50      IN      PTR     rac2-priv
60      IN      PTR     scan
61      IN      PTR     scan
62      IN      PTR     scan
[root@serv1 named]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="dhcp"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="eth0"
UUID="bab7d116-b33a-43ff-b27e-bf2a1bd1dce4"
DEVICE="eth0"
ONBOOT="yes"
IPV6_PRIVACY="no"
# this is required to stop the dhclient script from overwriting /etc/resolv.conf
PEERDNS=no

[root@serv1 named]# cat /etc/resolv.conf
; generated by /usr/sbin/dhclient-script
search testlab.com mshome.net
nameserver 192.168.1.10
nameserver 172.18.14.4

------ Troubleshooting

named-checkconf /etc/named.conf
named-checkconf -z /etc/named.conf
named-checkzone testlab.com /var/named/forward.testlab.com
named-checkzone testlab.com /var/named/reverse.testlab.com

service named restart
systemctl restart named
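Resolution can then be spot-checked with dig, pointing at the listen-on address from named.conf above (dig is provided by bind-utils):

dig @192.168.4.21 rac1.testlab.com
dig @192.168.4.21 -x 192.168.4.22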
-------------------





[root@server1 ~]# named-checkconf -z
zone logic.com/IN: loaded serial 2011071001
zone 10.10.168.192.in.addr-arpa/IN: loaded serial 2011071001
zone logic1.com/IN: loaded serial 2011071001
zone 11.10.168.192.in.addr-arpa/IN: loaded serial 2011071001
zone logic20.com/IN: loaded serial 2011071001
zone 12.11.168.192.in.addr-arpa/IN: loaded serial 2011071001
zone logic21.com/IN: loaded serial 2011071001
zone logic22.com/IN: loaded serial 2011071001
zone localhost.localdomain/IN: loaded serial 0
zone localhost/IN: loaded serial 0
zone 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa/IN: loaded serial 0
zone 1.0.0.127.in-addr.arpa/IN: loaded serial 0
zone 0.in-addr.arpa/IN: loaded serial 0


[root@client2 conf.d]# httpd -S
VirtualHost configuration:
*:80                   is a NameVirtualHost
         default server client2.logic20.com (/etc/httpd/conf.d/logic20.conf:1)
         port 80 namevhost client2.logic20.com (/etc/httpd/conf.d/logic20.conf:1)
                 alias logic20.com
         port 80 namevhost client2.logic21.com (/etc/httpd/conf.d/logic21.conf:1)
                 alias logic21.com
         port 80 namevhost client2.logic22.com (/etc/httpd/conf.d/logic22.conf:1)
                 alias logic22.com
ServerRoot: "/etc/httpd"
Main DocumentRoot: "/var/www/html"
Main ErrorLog: "/etc/httpd/logs/error_log"
Mutex authn-socache: using_defaults
Mutex default: dir="/run/httpd/" mechanism=default
Mutex mpm-accept: using_defaults
Mutex authdigest-opaque: using_defaults
Mutex proxy-balancer-shm: using_defaults
Mutex rewrite-map: using_defaults
Mutex authdigest-client: using_defaults
Mutex proxy: using_defaults
PidFile: "/run/httpd/httpd.pid"
Define: _RH_HAS_HTTPPROTOCOLOPTIONS
Define: DUMP_VHOSTS
Define: DUMP_RUN_CFG
User: name="apache" id=48
Group: name="apache" id=48

elinks logic.com   -- on server1
elinks logic1.com  -- on client1
elinks logic20.com -- on client2
elinks logic21.com -- on client2
elinks logic22.com -- on client2

Router

Client
ifconfig enp0s8 10.0.0.1 netmask 255.255.255.0
sudo route add default gw 10.0.0.254
/etc/resolv.conf
nameserver 8.8.8.8

Server
ifconfig enp0s8 10.0.0.254 netmask 255.255.255.0
iptables -L -n
--enable masquerading
sudo iptables --table nat --append POSTROUTING --out-interface enp0s3 -j MASQUERADE
--enable forwarding of traffic arriving on the internal interface
sudo iptables --append FORWARD --in-interface enp0s8 -j ACCEPT
--enable ip forwarding in the kernel via sysctl

sudo sysctl -w net.ipv4.ip_forward=1
iptables-save
sudo sh -c "iptables-save > /etc/iptables.rules"
iptables-restore < /etc/iptables.rules
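To keep forwarding enabled across reboots (standard sysctl persistence; the iptables rules themselves are restored from /etc/iptables.rules as above):

echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p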


for firewalld
# firewall-cmd --direct --permanent --add-rule ipv4 nat POSTROUTING 0 -o eth0 -j MASQUERADE
# firewall-cmd --direct --permanent  --add-rule ipv4 filter FORWARD 0 -i eth1 -o eth0 -j ACCEPT
# firewall-cmd --direct  --permanent --add-rule ipv4 filter FORWARD 0 -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
# firewall-cmd --reload


[root@server1 ~]# firewall-cmd --permanent --add-service http
success
[root@server1 ~]# firewall-cmd --permanent --add-service  dns
success
[root@server1 ~]# firewall-cmd --permanent --add-service  nfs


eth0 is connected to the internet (outbound traffic)
eth1 is connected to the internal network (incoming traffic from the internal network)

2: eth0: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:54:4e:3b brd ff:ff:ff:ff:ff:ff
    inet 172.18.14.51/28 brd 172.18.14.63 scope global noprefixroute eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe54:4e3b/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:15:5d:54:4e:3c brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.10/24 brd 192.168.10.255 scope global noprefixroute eth1
       valid_lft forever preferred_lft forever
    inet6 fe80::215:5dff:fe54:4e3c/64 scope link
       valid_lft forever preferred_lft forever
[root@server1 ~]# ip rout show
default via 172.18.14.49 dev eth0 proto static metric 100
default via 172.18.14.51 dev eth1 proto static metric 101
169.254.0.0/16 dev eth1 scope link metric 1003
172.18.14.48/28 dev eth0 proto kernel scope link src 172.18.14.51 metric 100
172.18.14.51 dev eth1 proto static scope link metric 101
192.168.10.0/24 dev eth1 proto kernel scope link src 192.168.10.10 metric 101

ip route add 192.168.11.0/24 via 192.168.10.10 dev eth1

Multiple sites on single host

Add in /etc/httpd/conf/httpd.conf
IncludeOptional sites-enabled/*.conf

mkdir /etc/httpd/sites-available /etc/httpd/sites-enabled
cd /etc/httpd/sites-available

cat site1.conf
<VirtualHost *:80>
        ServerName site1
        ServerAlias site1
        DocumentRoot    /var/www/site1/
</VirtualHost>

cat site2.conf
<VirtualHost *:80>
        ServerName site2
        ServerAlias site2
        DocumentRoot    /var/www/site2/
</VirtualHost>

ln -s /etc/httpd/sites-available/site1.conf /etc/httpd/sites-enabled/site1.conf
ln -s /etc/httpd/sites-available/site2.conf /etc/httpd/sites-enabled/site2.conf

mkdir /var/www/site1 /var/www/site2
cat > /var/www/site1/index.html
site1

cat > /var/www/site2/index.html
site2

chown -R apache:apache /var/www/site1
chown -R apache:apache /var/www/site2

systemctl restart httpd
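Each name-based site can be checked without touching DNS by overriding the Host header with curl (site1/site2 are the example names from the configs above):

curl -H "Host: site1" http://localhost/
curl -H "Host: site2" http://localhost/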


Apache Virtual Host - Self Signed Certificate - TLS


yum install openssl mod_ssl
cd /etc/pki/tls/certs
The virtual host is logic5.logic1.com.
Generate a private key:
openssl genpkey -algorithm rsa -pkeyopt rsa_keygen_bits:2048 -out logic5.logic1.com.key
Generate a certificate signing request (for a CA-signed certificate, the CSR is what gets sent to the certificate authority):
[root@server1 certs]#  openssl req -new -key logic5.logic1.com.key -out logic5.logic1.com.csr


Generate a self-signed certificate from the key and CSR:

[root@server1 certs]# openssl x509 -req -days 365 -signkey logic5.logic1.com.key -in logic5.logic1.com.csr -out logic5.logic1.com.crt

[root@server1 certs]# pwd
/etc/pki/tls/certs
[root@server1 certs]# ls -lt
total 28
-rw-r--r--. 1 root root 1277 May 21 06:15 logic5.logic1.com.crt
-rw-r--r--. 1 root root 1074 May 21 06:10 logic5.logic1.com.csr
-rw-r--r--. 1 root root 1704 May 21 06:02 logic5.logic1.com.key
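The generated certificate can be inspected before wiring it into Apache (standard openssl x509 usage):

openssl x509 -in logic5.logic1.com.crt -noout -text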

[root@server1 certs]# openssl s_client -connect logic5.logic1.com:443 -state

Edit /etc/httpd/conf.d/ssl.conf and add a virtual host entry at the end:

<VirtualHost *:443>
SSLEngine on
DocumentRoot "/var/www/logic5"
ServerName logic5.logic1.com:443
SSLCertificateFile /etc/pki/tls/certs/logic5.logic1.com.crt
SSLCertificateKeyFile /etc/pki/tls/certs/logic5.logic1.com.key
</VirtualHost>


check syntax

[root@server1 ~]# httpd -t
Syntax OK
[root@server1 ~]# httpd -S
VirtualHost configuration:
*:443                  is a NameVirtualHost
         default server server1.logic1.com (/etc/httpd/conf.d/ssl.conf:56)
         port 443 namevhost server1.logic1.com (/etc/httpd/conf.d/ssl.conf:56)
         port 443 namevhost logic5.logic1.com (/etc/httpd/conf.d/ssl.conf:217)
*:80                   is a NameVirtualHost
         default server server1.logic1.com (/etc/httpd/sites-enabled/logic2.conf:1)
         port 80 namevhost server1.logic1.com (/etc/httpd/sites-enabled/logic2.conf:1)
                 alias logic1.com
         port 80 namevhost logic1.com (/etc/httpd/sites-enabled/logic3.conf:1)
                 alias logic3.com
         port 80 namevhost logic5.logic1.com (/etc/httpd/sites-enabled/logic5.conf:1)
                 alias logic5.com
         port 80 namevhost server1.logic1.com (/etc/httpd/sites-available/logic2.conf:1)
                 alias logic1.com
         port 80 namevhost logic1.com (/etc/httpd/sites-available/logic3.conf:1)
                 alias logic3.com
         port 80 namevhost logic5.logic1.com (/etc/httpd/sites-available/logic5.conf:1)


Browse to https://logic5.logic1.com and inspect the certificate.



cat /etc/apt/sources.list

deb http://archive.debian.org/debian/ wheezy  main contrib
apt-get install xfsprogs

change display manager in debian
update-alternatives --config x-session-manager
dpkg-reconfigure gdm3


iSCSI
Clients
yum install iscsi-initiator-utils
iscsiadm --mode discovery --type sendtargets --portal x.x.x.x
iscsiadm --mode node --targetname iqn.2016-09.com.rhel:target --portal x.x.x.x --login
lsblk
Servers
yum install targetcli
lsblk
targetcli> create the block backstore, the iSCSI target, the LUNs and the ACLs; use the cd command inside targetcli to move between objects (see the sketch below)
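A minimal targetcli session sketch, assuming /dev/sdb is exported as one LUN; the target and initiator IQNs are placeholders:

targetcli
/backstores/block create name=disk1 dev=/dev/sdb
/iscsi create iqn.2016-09.com.rhel:target1
/iscsi/iqn.2016-09.com.rhel:target1/tpg1/luns create /backstores/block/disk1
/iscsi/iqn.2016-09.com.rhel:target1/tpg1/acls create iqn.2016-09.com.rhel:client1
exit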
 




Friday, 25 October 2019

Miscellaneous - AWS Quick Notes for Reference

  • EC2 elastic compute cloud
    • Mnemonic - fight dr mc px z au
      • FPGA - Genomic Research, Financial analysis
      • IOPS - Database/ Applications
      • Graphics - 3D modeling
      • High disk throughput
      • t2.micro - general purpose - free tier
      • density
      • ram
      • m main general purpose
      • c compute intensive more CPU
      • p graphics
      • x Extreme Memory
      • z Extreme Memory and Extreme CPU
      • A ARM based architecture
      • U Bare Metal Servers
  • IAM - User Group Role, Policies
  • Virtual Private Cloud (VPC) - 5 per account, public subnets (route via the Internet Gateway) and private subnets (default route only), NAT gateway (10 Gb, HA, AWS managed) / NAT instance (AMI), NACL (stateless, inbound and outbound rules must be specified explicitly), Security Group (stateful), site-to-site VPN, Direct Connect via a DC provider (1 or 2 per region)
  • EBS general purpose(10K IOPS(3 IOPS / GB)) bootable, provisioned IOPS (>10K IOPS), HDD Throughput intensive, HDD capacity oriented, magnetic HDD (bootable) 
  • Logical to physical mapping of AZ is different for different account
  • SQS first service, Message based, 14 days maximum retention, Pull. SWF(simple workflow service) Task based, 1yr maximum retention. SNS push. SES for only email notification.
  • CloudWatch (performance), CloudTrail(API calls log), config(consistency in the configuration) logs for CloudTrail go to S3 where Athena can be used to retrieve logs using SQL. CloudWatch logs also go to S3 and they are retained indefinitely.
  • System manager for cloud and on premise
  • Migrations Tools to AWS
    • Server Migration Service (SMS, OVF based) - migrates instances on VMware or Hyper-V (block migration) from on premises to the cloud.
    • Database Migration Service (DMS) - runs as an instance on EC2 and migrates an on-premises DB to the cloud.
    • Storage Gateway - runs on premises as a VM and lets on-premises systems use cloud storage as NFS/SMB (S3, gateway configured for file), iSCSI (EBS, gateway configured for block) or VTL (S3/Glacier, gateway configured for VTL).
    • Snowball 50 TB / 80 TB. Snowball Edge 100 TB with compute power. Snowmobile for exabytes of data.

  • CI/CD is a methodology that lets developers store code in a repository and collaborate with others; compiling the code and deploying the application is fully automated and each task is orchestrated. In AWS this maps to CodePipeline, which consists of CodeCommit (based on git; the repository for code, keeping different versions and enabling collaboration), CodeBuild (kicks in as soon as a commit happens) and CodeDeploy (after the application is built it is deployed either as a rolling upgrade or Blue/Green; in Blue/Green the old and new applications run in parallel with more weight gradually shifted to the new one, and once the new application is proven in the field the old one is removed). The whole process, from commit to build to deployment, is fully automated.
  • NoSQL - key/value, good for large volumes of data, no need for normalization (1NF, 2NF, 3NF). The RDBMS consistency model is ACID (Atomic, Consistent, Isolated, Durable) and the NoSQL consistency model is BASE (Basic availability, Soft state, Eventual consistency).
  • AWS - ISO 27001 compliance, HIPAA compliance (USA), PCI DSS compliance. KMS is the entry-level symmetric key management service. CloudHSM runs on a dedicated host, FIPS 140-2 Level 3 compliant, symmetric/asymmetric keys.


 


Bye...