Linux Fundamentals
Rather than using a single partition, it is a good idea to create at least separate partitions for /boot and swap.
Putting /boot on its own partition is a good idea.
It is better to use GPT instead of MBR, especially when you are working with large drives (MBR tops out at 2 TiB).
Don't write directly to whole-disk block device files such as /dev/sda or /dev/sdb; bad things can happen, because a raw write ignores the partitions that exist on the device.
Device files are accessed via the normal file interfaces.
Block device files represent devices like disks; character device files represent devices like a terminal.
Filesystems :
XFS is a journaled filesystem that uses inodes, supports direct I/O, and has a 64-bit address space.
It also supports delayed allocation, and it is well worth using for big data and databases because it scales well.
With ext4 we can customize the inode size, between 128 and 4096 bytes.
# mke2fs -t ext4 /dev/sdc1
If you want to specify the inode size, use:
# mke2fs -t ext4 -I 4096 /dev/sdc1
Note : the defaults for the mke2fs command are defined in the /etc/mke2fs.conf file.
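To confirm what a filesystem was actually created with, the superblock can be queried; for example, a quick check of the inode size of the filesystem made above:
# tune2fs -l /dev/sdc1 | grep -i "inode size"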
ext3 is ext2 plus a journal.
ext4 has a journal and a bunch more options (extents, larger volumes, delayed allocation).
Mounting :
We use the mount command to mount partitions.
For mounting automatically at every boot we use the /etc/fstab file.
vi /etc/fstab
/dev/sdc1 /testmount ext4 rw,discard 0 0
Now let us see how to mount the partition using its UUID.
Using the blkid command, we can get the UUID of the partition.
We can also use lsblk -fs, which displays the UUID info in tree format.
In the /etc/fstab file, an entry using a UUID looks like:
UUID=<uuid> mountpoint fstype fsoptions dumpvalue fsckvalue
UUIDs don't change across reboots, unlike device names such as /dev/sdc1, which can.
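A minimal sketch tying the two together (the UUID shown is a hypothetical placeholder):
# blkid /dev/sdc1 (prints something like /dev/sdc1: UUID="..." TYPE="ext4")
UUID=0a1b2c3d-1111-2222-3333-444455556666 /testmount ext4 rw,discard 0 0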
Unmounting :
If a mount point will not unmount, use the lsof and fuser commands to check which PIDs are keeping it busy.
# lsof /testmount
# fuser -cuv /testmount (-c: treat the argument as a mount point, -u: show the owning user, -v: verbose)
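If the blocking processes can safely be stopped, fuser can also kill them (use with care; this sends SIGKILL):
# fuser -ck /testmount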
Lazy Unmount :
If an important process is still running on the partition and we want the unmount to complete only after that process finishes, lazy unmount comes into the picture.
# umount -l /testmount/
It detaches the filesystem from the directory tree immediately, does everything it can now, and once the filesystem is no longer in use at all, it finishes the rest of the unmount.
Superblock :
Each Unix filesystem has at least one superblock.
Accessing any file in a filesystem normally requires access to the superblock.
You can't even mount the filesystem if you can't read the superblock.
Linux normally keeps a copy of each mounted filesystem's superblock in memory for faster, more efficient access.
To see superblock information on an ext-based filesystem, run dumpe2fs:
# dumpe2fs /dev/sdc1
# dumpe2fs /dev/sdc1 | grep -i superblock
The superblock contains the metadata of the filesystem as a whole.
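ext filesystems also keep backup copies of the superblock spread across the disk. A sketch for when the primary superblock is damaged (32768 is a typical backup location for 4 KiB-block filesystems; verify the locations for your own filesystem first):
# dumpe2fs /dev/sdc1 | grep -i backup (lists where the backup superblocks live)
# e2fsck -b 32768 /dev/sdc1 (repair using a backup superblock)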
inode :
Stands for index node.
Every single object on a filesystem (file, directory, device node, ...) has its own inode.
An inode contains the metadata of a particular file: owner, permissions, timestamps, size, and pointers to its data blocks, but not the file name.
The number of inodes in a filesystem puts a direct limit on the number of files it can hold.
# ls -i /etc
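To see the metadata stored in a single file's inode, the stat command is handy:
# stat /etc/fstab (shows the inode number, size, block count, permissions, owner, and timestamps)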
Labels :
We can't always use the UUID in the fstab file; it is 128 bits long and awkward to type.
Labels are a simpler alternative.
To see a filesystem label we use the blkid command.
# blkid
# e2label /dev/sdc1 "label test" (set a label on an existing filesystem)
# mke2fs -t ext4 -L userlabel /dev/sdc1 (set a label at filesystem creation time)
In fstab we then specify the filesystem as:
# vi /etc/fstab
LABEL=userlabel /testmount ext4 rw,discard 0 0
(fstab fields are whitespace-separated, so avoid spaces in labels, or escape them as \040.)
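Labels also work directly on the command line; for example, with the userlabel filesystem created above:
# mount -L userlabel /testmount
# findfs LABEL=userlabel (prints the device that carries this label)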
fsck :
fsck is a front end: on ext filesystems it calls e2fsck.
We must not run fsck on mounted filesystems.
A dirty unmount can happen after a system crash or an unclean shutdown.
In that case, the Linux system will automatically run fsck during the boot process.
# fsck /dev/sdc1
We can provide various options to fsck:
-a automatically repairs problems without asking.
-V enables verbose mode.
-n answers "no" to all questions (report problems but change nothing).
df and du :
# df -h (disk usage per filesystem, human-readable)
# df -ih (inode usage instead of block usage)
# df -hT (also show the filesystem type)
# du -h /etc/fstab (space used by a single file)
# du -h /etc/ (space used by every subdirectory)
# du -hcs /etc/ (summary only, with a grand total)
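A common combination (GNU du and sort) to find the biggest subdirectories:
# du -h --max-depth=1 /etc | sort -h | tail -5 (the five largest entries, largest last)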
Advanced LVM :
Device-Mapper : a kernel-based framework for advanced block storage management.
It maps block storage devices onto other block storage devices.
Device Mapper is made up of three layers :
Target devices - the underlying block devices, e.g. /dev/sdb, /dev/sdc
Mapping layer - the device-mapper itself, applying a mapping target such as linear, striped, or dm-multipath
Mapped devices - the resulting virtual devices, e.g. /dev/mapper/dm-1
Growing and Shrinking Logical Volumes :
# lvs (list logical volumes)
# vgs (list volume groups)
# pvs (list physical volumes)
# pvresize /dev/sdb1 (resize a PV to match its underlying device)
# vgextend datavg /dev/sdd1 (add another PV to the volume group)
Growing can be done online:
# lvextend -L +1G /dev/datavg/lv3
# resize2fs /dev/datavg/lv3 (grow the ext filesystem to fill the enlarged LV)
Shrinking must be done offline:
# umount /mnt/everest
# e2fsck -f /dev/datavg/lv3 (force a filesystem check before resizing)
# resize2fs /dev/datavg/lv3 1G (shrink the filesystem first)
# lvreduce -L -2G /dev/datavg/lv3 (then shrink the LV; never reduce it below the new filesystem size)
# e2fsck -f /dev/datavg/lv3
# mount /mnt/everest
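As a shortcut when growing, lvextend's -r (--resizefs) flag resizes the filesystem in the same step, so the separate resize2fs call is not needed; a sketch:
# lvextend -r -L +1G /dev/datavg/lv3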
LVM Snapshots :
LVM snapshots are not a replacement for backups.
We can take snapshots of LVM volumes only.
They are space-efficient and give point-in-time (PiT) recovery.
# lvcreate -L 10M -s -n snap1 /dev/datavg/lv3 (create a 10 MB snapshot of lv3 named snap1)
# lvs
(It will list the snap volume as well)
# lvdisplay /dev/datavg/snap1
(You can observe that the snapshot is active.)
# lvdisplay /dev/datavg/lv3
(You can observe that this volume has an active snapshot.)
For a few lost files, it is recommended to mount the snapshot volume and copy the files back to the original location instead of merging the snapshot with the original.
For a large number of files, merging the snapshot back is the practical option.
# lvconvert --merge /dev/datavg/snap1 (merge the snapshot into the original volume if we lost files from the original volume)
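A minimal sketch of the copy-back approach, assuming a mount point /mnt/snap and a lost file named important.conf (both hypothetical):
# mkdir /mnt/snap
# mount -o ro /dev/datavg/snap1 /mnt/snap
# cp -a /mnt/snap/important.conf /mnt/everest/
# umount /mnt/snap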
LVM Migration :
Migration means moving an LV from one set of storage devices to another.
There are two ways to do an LVM migration:
1) LVM mirroring
2) the pvmove command
LVM mirroring is the more complicated of the two.
# dmsetup deps /dev/vg_data/lv_sap (check which physical devices back the LV before the move)
# pvmove -n lv_sap /dev/sdb1 /dev/sdc1 (move lv_sap's extents from sdb1 to sdc1)
# dmsetup deps /dev/vg_data/lv_sap (verify the LV now sits on the new device)
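A fuller migration sketch using pvmove, assuming /dev/sdc1 is a new, empty PV being brought into vg_data (device names are illustrative):
# vgextend vg_data /dev/sdc1 (add the new PV to the VG first)
# pvmove /dev/sdb1 /dev/sdc1 (move all allocated extents off the old PV)
# vgreduce vg_data /dev/sdb1 (drop the old, now-empty PV from the VG)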
Backup and Recovering the LVM :
How do we back up our LVM configuration ?
LVM metadata is automatically backed up and archived any time we change the configuration.
By default, metadata backups go to /etc/lvm/backup and archives go to /etc/lvm/archive.
The backup is the latest copy of the LVM metadata.
# ls -l /etc/lvm/backup
(The entire configuration is saved in plain text files.)
Archives are older copies; every previous backup gets archived to /etc/lvm/archive.
# ls -l /etc/lvm/archive
Let's create a new LV and check its file in /etc/lvm/backup:
# lvcreate -L +99M -n lv_java vg_data
# ls -l /etc/lvm/backup
# vi /etc/lvm/backup/vg_data
# lvs
# ls /etc/lvm/archive
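Before restoring, vgcfgrestore can list the available backup and archive files for a VG along with their descriptions:
# vgcfgrestore -l vg_data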
So let's restore the VG to the older version, prior to the creation of lv_java:
# vgcfgrestore -f /etc/lvm/archive/vg_data_0037.vg vg_data
# lvs
Now the newly created LV (lv_java) is gone.
To get the newly created LV back:
# vgcfgrestore -f /etc/lvm/backup/vg_data vg_data (the latest modified configuration is always in /etc/lvm/backup)
Now the newly created LV is back.
# lvs
RAID : Software RAID
Redundant Array of Independent Disks.
RAID can be implemented in either software or hardware.
In this case we are doing RAID on Linux (software RAID).
Software RAID is very flexible and consumes only a few CPU cycles.
It is implemented by the MD (multiple devices) driver.
It is managed with mdadm.
It is sometimes called mdraid.
From RHEL 6 onwards, the OS itself can be installed on software RAID.
# ls -l /dev/ | grep sd
We need a partition type that supports mdraid (on MBR disks, type fd, "Linux raid autodetect").
To examine whether the existing disks/partitions already carry RAID metadata:
# mdadm --examine /dev/sdb /dev/sdc
# mdadm --examine /dev/sdb1 /dev/sdc1
# mdadm --create /dev/md0 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
(For the RAID level we can use numbers as well, like 0, 1, 5, ...)
To check the RAID details after creation use the command
# mdadm --detail /dev/md0
Let's create a filesystem on the new RAID device:
# mkfs.ext4 /dev/md0
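To actually use the array, mount it like any other filesystem; a sketch with a hypothetical mount point:
# mkdir /mnt/raid1
# mount /dev/md0 /mnt/raid1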
Growing the RAID Array :
Create a new partition /dev/sdd1 on the new disk /dev/sdd.
Now we add this device to the existing RAID:
# mdadm --manage /dev/md0 --add /dev/sdd1
# mdadm --detail /dev/md0
It will show up as a spare drive.
Let's grow the RAID from 2 devices to 3:
# mdadm --grow --raid-devices=3 /dev/md0
# cat /proc/mdstat
# mdadm --detail /dev/md0
Let's mark a drive as faulty:
# mdadm --manage /dev/md0 --fail /dev/sdb1
Now /dev/sdb1 is faulty.
Let's remove the /dev/sdb1 device from the RAID:
# mdadm --manage /dev/md0 --remove /dev/sdb1
Now reduce the RAID from 3 devices back to 2:
# mdadm --grow /dev/md0 --raid-devices=2
Dealing with the Failed Drives :
# mdadm --detail /dev/md0
/etc/mdadm.conf :
Once we create an md RAID, it is our job to make sure the existing RAID configuration gets saved to the /etc/mdadm.conf file.
Once created, the mdraid is persistent: its metadata lives in the md superblocks on the member disks.
# mdadm --detail /dev/md0
While booting, the arrays are assembled by scanning the disks' md superblocks rather than by reading the md RAID config file.
To create an mdadm.conf file:
# mdadm --detail --scan --verbose >> /etc/mdadm.conf
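To verify that the config file works, one sketch (only with the filesystem unmounted) is to stop the array and reassemble it from the file:
# mdadm --stop /dev/md0
# mdadm --assemble --scan (reassembles the arrays listed in /etc/mdadm.conf)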
Creating a RAID 5 Array :
# ls -l /dev/ | grep -i sd
# mdadm --examine /dev/sdb1 /dev/sdc1 /dev/sdd1
(All of these are newly added disks.)
# mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
# mdadm --detail /dev/md1
# mkfs.ext4 /dev/md1
______________________________________________________________________________
NFS ( Network File System ) :
The server exports a filesystem and the client mounts it.
NFS versions prior to 4 used the portmapper service, which makes them very difficult to run behind a firewall.
From RHEL 6 onwards, NFS uses rpcbind rather than portmap to control the mapping of RPC programs to TCP and UDP sockets.
NFSv3
No Authentication
No Encryption
No Data integrity checking
No ACLs
NFSv4
Mandated Security
Authentication
Encryption
Data Integrity checking
Supports ACLs
As a network-based filesystem, NFS sits on top of RPC, which by default uses TCP/IP.
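A minimal sketch of an NFSv4 export and mount, assuming a server directory /export/data, a client mount point /mnt/data, and the 192.168.1.0/24 network (all hypothetical):
On the server :
# vi /etc/exports
/export/data 192.168.1.0/24(rw,sync,no_subtree_check)
# exportfs -ra (re-export everything listed in /etc/exports)
# systemctl enable --now nfs-server
On the client :
# mount -t nfs4 server1.example.vm:/export/data /mnt/data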
Using systemd and systemctl to manage services :
# ps -fp 1
(ps with full info for the process with PID 1)
Here process ID 1 is systemd.
In earlier versions of Linux, we used to see init as process ID 1.
But in recent distributions it is systemd, and we can manage our services through it.
# systemctl status crond.service
# systemctl cat crond.service
(prints the unit file(s) that define the service.)
# systemctl stop crond.service
# systemctl status crond.service
(the service is now stopped, but it is still in the enabled state.)
# systemctl start crond.service
# systemctl disable crond.service
# systemctl enable crond.service
# systemctl mask crond.service (masking a service prevents it from being started at all, even manually)
So the next time we want to start the service, we need to unmask it first:
# systemctl unmask crond.service
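Two quick checks that go along with the commands above:
# systemctl is-active crond.service (is the service running right now ?)
# systemctl is-enabled crond.service (will it start at boot ? also reports masked)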
________________________________________________________________________________
Configure FTP Server :
# yum install vsftpd -y
# systemctl enable vsftpd
# systemctl start vsftpd
# netstat -ntulp (list listening TCP/UDP ports and the owning processes)
Config file : /etc/vsftpd/vsftpd.conf
Configure ftp to allow only anonymous connections :
# vi /etc/vsftpd/vsftpd.conf
anonymous_enable=YES (allow anonymous logins)
local_enable=NO (no logins with local system accounts)
write_enable=NO (no writing to the system or uploading)
local_umask=022
dirmessage_enable=YES (show per-directory messages)
xferlog_enable=YES (keep a transfer log so we can see what's going on)
connect_from_port_20=YES
xferlog_std_format=YES
listen=YES (listen on IPv4)
listen_ipv6=NO (do not listen on IPv6; listen and listen_ipv6 must not both be YES)
pam_service_name=vsftpd
userlist_enable=YES
tcp_wrappers=YES
anon_world_readable_only=YES (anonymous users may only download world-readable files)
# systemctl restart vsftpd
# netstat -ntl
Now port 21 is open on the IPv4 address.
Now you can access the FTP site using the server's IP address.
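A quick anonymous test from another machine (the IP is a hypothetical placeholder):
# curl ftp://192.168.1.10/ (anonymous directory listing of the FTP root)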
Creating an FTP YUM repository :
Mount the DVD on /mnt :
# mount /dev/sr0 /mnt
# df -h
# mkdir /var/ftp/pub/centos72
(here we can also use rsync or cp to copy the complete contents of the DVD)
# cd /mnt
# find . | cpio -pmd /var/ftp/pub/centos72 (copy the DVD tree, preserving permissions and directory structure)
# cd
# eject /mnt
# df -h
# ls /var/ftp/pub/centos72
Connecting to the YUM repository :
Now point the client at the FTP YUM repository.
On the client machine :
# cd /etc/yum.repos.d/
# ls
# mv * /root/ (move the existing repo files out of the way)
# vim ftp.repo
[ftpc7]
name=FTP CentOS 7.2
baseurl=ftp://server1.example.vm/pub/centos72
enabled=1
gpgcheck=0
# yum clean all
# yum install bash-completion (test an install from the new repo)
# yum repolist
______________________________________________________________________________