1, install the http server and make it start automatically at boot,
yum install -y httpd
/etc/init.d/httpd start
chkconfig httpd on
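a quick way to confirm the service is running and enabled (standard RHEL 5/6 commands),
service httpd status
chkconfig --list httpd   # expect 2:on 3:on 4:on 5:on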
2, create a logical volume named lv0 in the volume group vg0, with a PE size of 8M and a size of 40 PEs; format it as ext4, mount it on /data, and make it mount automatically at boot,
fdisk /dev/sda
inside fdisk, create a new partition and set its type to 0x8e (Linux LVM), as sketched below; then make the kernel pick up the new partition table with partx,
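a sketch of the keystrokes inside fdisk, assuming the new partition comes out as /dev/sda4 and that +400M leaves room for 40 x 8M extents,
n       # new partition
p       # primary
4       # partition number (an assumption; take the next free one)
+400M   # size, after accepting the default starting cylinder
t       # change the partition type
4       # select partition 4
8e      # 8e = Linux LVM
w       # write the table and exit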
partx -a /dev/sda
create pv,
pvcreate /dev/sda4
create the vg with a PE size of 8M,
vgcreate -s 8M vg0 /dev/sda4
create the lv named lv0, with a size of 40 PEs,
lvcreate -l 40 -n lv0 vg0
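optionally confirm the PE size and extent count before formatting,
vgdisplay vg0 | grep "PE Size"               # expect 8.00 MiB
lvdisplay /dev/vg0/lv0 | grep "Current LE"   # expect 40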
format the volume as ext4; it is better to use the device-mapper path,
mkfs -t ext4 /dev/mapper/vg0-lv0
mount it on /data; if the /data directory does not exist, create it first,
mkdir -p /data
mount /dev/mapper/vg0-lv0 /data
add the auto-mount entry to /etc/fstab,
/dev/mapper/vg0-lv0 /data ext4 defaults 0 0
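to prove the fstab entry works without rebooting,
umount /data
mount -a        # remounts everything listed in fstab
df -h /data     # /dev/mapper/vg0-lv0 should show up again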
another way to create a logical volume: name it engineering, put it in the volume group development with 20 extents at 32M per extent, format it as vfat, and mount it automatically on /mnt/engineering,
fdisk /dev/sda
p       # check the status of the partitions
n       # create a new partition
p       # primary partition
+800M   # size as +800M
w       # save
use the command system-config-lvm to enter the UI; create the volume group development with a physical extent size of 32M, select the logical view, and create the logical volume named engineering. The UI does not offer the vfat format, so the logical volume has to be formatted from the command line.
mkfs.vfat /dev/development/engineering
mkdir /mnt/engineering
vim /etc/fstab
add one line to auto mount,
/dev/development/engineering /mnt/engineering vfat defaults 0 0
to mount all,
mount -a
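the same volume group and logical volume can also be created without the UI; a sketch, assuming the new 800M partition came out as /dev/sda5 (adjust to whatever fdisk actually created),
pvcreate /dev/sda5
vgcreate -s 32M development /dev/sda5
lvcreate -l 20 -n engineering development   # 20 x 32M = 640M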
3, find the lines in /etc/testfile that contain “abcde”, dump them into /tmp/testfile, and keep them in the same order as in /etc/testfile,
to see the content of /etc/testfile,
cat /etc/testfile
search /etc/testfile with grep; use the plain string abcde ([abcde] would match any single one of those letters), and drop -n, since line numbers would end up in the output file,
grep abcde /etc/testfile > /tmp/testfile
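grep keeps matches in input order, so no extra sorting is needed; a quick sanity check once both files exist,
grep abcde /etc/testfile | diff - /tmp/testfile && echo identical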
4, configure SSH so that clients within the example.com domain can remotely access your machine, while clients in the remote.test domain cannot ssh to your system,
modify /etc/hosts.allow to add one line,
sshd: .example.com
modify /etc/hosts.deny to add one line,
sshd: .remote.test
make sshd start automatically at boot,
chkconfig sshd on
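tcp_wrappers rules take effect immediately, no restart needed; they can be sanity-checked with tcpdmatch (the client host names here are made up),
tcpdmatch sshd station1.example.com   # should end with: access: granted
tcpdmatch sshd station1.remote.test   # should end with: access: denied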
5, Export your /common directory via NFS to the example.com domain only
modify /etc/exports, add one line,
/common *.example.com(ro,sync)
restart nfs and make it start automatically at boot,
/etc/init.d/nfs restart
chkconfig nfs on
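before testing remotely, the export can be confirmed locally, and later edits to /etc/exports can be picked up without a full restart,
showmount -e localhost   # should list /common
exportfs -ra             # re-read /etc/exports after any edit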
test the result from the instructor machine; make sure the firewall lets the nfs service through, otherwise the problem will look like this,
[root@instructor ~]# showmount -e server3.example.com
clnt_create: RPC: Port mapper failure - Unable to receive: errno 113 (No route to host)
enable the firewall to let nfs through; there are two ways to do this,
a, let eth0 accept all incoming traffic,
iptables -I INPUT 4 -i eth0 -j ACCEPT
service iptables save
b, let the firewall allow only the specific ports through,
modify the nfs configuration file so that mountd uses a fixed port,
vim /etc/sysconfig/nfs
uncomment this line
MOUNTD_PORT=892
restart the nfs service
service nfs restart
add three lines to the iptables rules in /etc/sysconfig/iptables,
-A INPUT -p tcp -m state --state NEW -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m state --state NEW -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 892 -j ACCEPT
restart the iptables,
service iptables restart
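before retesting, confirm the rules actually loaded,
iptables -L INPUT -n --line-numbers | grep -E '111|892'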
test on the instructor computer again,
[root@instructor ~]# showmount -e server3.example.com
Export list for server3.example.com:
/home *
/common 192.168.0.0/24