
Wednesday, 7 September 2022

Privilege escalation with sudo


A normal user in Linux can be allowed to execute privileged commands by granting sudo rights.

The sudoers configuration file is /etc/sudoers. Editing this file directly is not advisable; either use visudo to edit it or create a drop-in file under /etc/sudoers.d.

I would suggest using drop-in files under /etc/sudoers.d, which makes it easier to manage privileges for multiple users and allows more granular access control.

Syntax - Who Where = (Runas-Who:group) What_Commands

Who - the user you want to grant sudo rights/privileges to.

Where - the host(s) on which the user is allowed to run the listed commands.

Runas-Who:group - the user (and group) the granted commands will run as.

What_Commands - the commands the user is allowed to execute.
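For illustration, here is how the fields map onto an actual sudoers line (the user name and command below are placeholders chosen for this example, not taken from the post):

#  Who    Where     Runas-Who:group    What_Commands
   alice  ALL     = (root:root)        /usr/bin/systemctl restart httpd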



User shiva is a normal user here, and he is now allowed to list the yum repositories on the server without providing the root password.
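The exact entry granting this is not shown in the post, but it would look roughly like the sketch below, assuming the rule uses NOPASSWD and the yum binary at /usr/bin/yum:

shiva ALL = (ALL:ALL) NOPASSWD: /usr/bin/yum repolist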

The NOPASSWD directive may be a security issue, as it does not ask for a password when elevating privileges to execute the command.
To overcome this security issue, you can use the line "Defaults !targetpw" instead - this makes sudo ask for the invoking user's own password when elevating privileges.

Example: 

# visudo -f /etc/sudoers.d/shiva
Defaults !targetpw
shiva ALL = (ALL:ALL) /usr/bin/yum repolist


You can also enforce asking for the root user's password when elevating privileges:

Defaults targetpw


Best practice:

Edit sudoers files with visudo, which validates the content on exit; if anything is wrong with the format, it shows an error and lets you re-edit the file to fix the issue.

While editing a sudoers file, keep two sessions open with root rights and leave the top command running in one of them. After applying the sudoers configuration, confirm that sudo works without problems in a new session; only then terminate top and close that session. This safety net helps if the sudoers file went wrong and you are locked out of root.

This situation can occur when the sudoers file is broken and direct root login is not allowed.

To validate a sudoers file you created, run visudo -cf <file>, which checks whether it is formatted correctly.
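For example, checking the drop-in file created earlier (the expected output is shown as a guide; exact wording may vary by sudo version):

# visudo -cf /etc/sudoers.d/shiva
/etc/sudoers.d/shiva: parsed OK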

Wednesday, 4 August 2021

Setup NFS Server on CentOS 8


Server and Client OS = CentOS Linux release 8.4.2105
NFS = Network File System

NFS Server Hostname = nfsserver.jk.com
IP Address  = 192.168.111.129/24
NFS Client Hostname = nfsclient.jk.com
IP Address  = 192.168.111.128/24
NFS Directory = /nfsdata
NFS mount point = /mnt/data


NFS Server setup

Install nfs-utils on the server.
[root@nfsserver nfsdata]# yum install nfs-utils

Enable the nfs-server service so it starts on reboot.
[root@nfsserver nfsdata]# systemctl enable nfs-server

Start the nfs-server service.
[root@nfsserver nfsdata]# systemctl start nfs-server

Check status of nfs-server 
[root@nfsserver nfsdata]# systemctl status nfs-server
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Wed 2021-08-04 21:31:04 IST; 3s ago
  Process: 2042 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 2040 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 2039 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 2067 ExecStart=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssprox>
  Process: 2056 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
  Process: 2055 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 2067 (code=exited, status=0/SUCCESS)

Aug 04 21:31:04 nfsserver.jk.com systemd[1]: Starting NFS server and services...
Aug 04 21:31:04 nfsserver.jk.com systemd[1]: Started NFS server and services.
[root@nfsserver nfsdata]#


Firewall requirements
Services to be allowed = rpc-bind, nfs

[root@nfsserver /]# firewall-cmd --permanent --add-service=rpc-bind
success
[root@nfsserver /]# firewall-cmd --permanent --add-service=nfs
success
[root@nfsserver /]# firewall-cmd --reload
success
[root@nfsserver /]# firewall-cmd --list-all
public
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client mountd nfs rpc-bind ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@nfsserver /]#
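The output above also lists mountd. NFSv4 clients only need the nfs and rpc-bind services, but if NFSv3 clients will connect, mountd most likely has to be allowed as well, for example:

[root@nfsserver /]# firewall-cmd --permanent --add-service=mountd
[root@nfsserver /]# firewall-cmd --reload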

Create a new directory on the server that will be shared with the client.

[root@nfsserver /]# mkdir nfsdata
[root@nfsserver /]#

Add the NFS export details to the exports file (/etc/exports).

[root@nfsserver /]# cat /etc/exports
/nfsdata        192.168.111.0/24(rw,no_root_squash,sync)
[root@nfsserver /]#
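For reference, the export options used above mean roughly the following (a commented copy of the same line; tighten the options to suit your security needs):

/nfsdata        192.168.111.0/24(rw,no_root_squash,sync)
# rw             - clients can read and write to the share
# no_root_squash - root on the client is treated as root on the share (less secure than the default root_squash)
# sync           - the server commits writes to disk before replying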

Export the shares defined in the exports file.

[root@nfsserver /]# exportfs -rv
exporting 192.168.111.0/24:/nfsdata
[root@nfsserver /]#


Mount the NFS directory on the client machine

Run showmount from the client machine to verify that the exported directory is visible.

[root@nfsclient ~]# showmount -e 192.168.111.129
Export list for 192.168.111.129:
/nfsdata 192.168.111.0/24
[root@nfsclient ~]#
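Before mounting, make sure the client has the nfs-utils package (showmount working above implies it is already installed) and create the mount point directory, assuming /mnt/data does not exist yet:

[root@nfsclient ~]# yum install nfs-utils
[root@nfsclient ~]# mkdir -p /mnt/data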


[root@nfsclient ~]# mount -t nfs 192.168.111.129:/nfsdata /mnt/data
[root@nfsclient ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
192.168.111.129:/nfsdata   37G  1.7G   36G   5% /mnt/data

Create a test file under the NFS mount on the client.
[root@nfsclient ~]# cd /mnt/data/
[root@nfsclient data]# touch testnew
[root@nfsclient data]# ls
testnew
[root@nfsclient data]# echo "This is a new test file" > testnew
[root@nfsclient data]# cat testnew
This is a new test file
[root@nfsclient data]#

Make the mount point persistent across reboots.

To ensure the mount point comes up after a reboot, add it to /etc/fstab as below:

[root@nfsclient ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jul 31 05:53:32 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl_test-root /                       xfs     defaults        0 0
UUID=1e6b0524-90a5-49d8-aff0-466bac22a3b4 /boot                   xfs     defaults        0 0
/dev/mapper/cl_test-swap none                    swap    defaults        0 0
192.168.111.129:/nfsdata        /mnt/data       nfs     defaults        0 0
[root@nfsclient ~]#
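Optionally, network filesystems are often mounted with the _netdev option so the system waits for the network before attempting the mount; a variant of the same fstab line would be:

192.168.111.129:/nfsdata        /mnt/data       nfs     defaults,_netdev        0 0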

Let's unmount it and validate that it gets mounted automatically.

[root@nfsclient ~]# umount /mnt/data

Now issue mount -a to see if it mounts automatically.

[root@nfsclient ~]# mount -a
[root@nfsclient ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
192.168.111.129:/nfsdata   37G  1.7G   36G   5% /mnt/data
[root@nfsclient ~]#


Verify that the server shows the newly created file and its content.

[root@nfsserver nfsdata]# cat testnew
This is a new test file
[root@nfsserver nfsdata]# pwd
/nfsdata
[root@nfsserver nfsdata]#





Friday, 18 June 2021

Docker installation issues



This document is intended to cover Docker Engine installation issues.


Scenario 1: Installing Docker CE on CentOS 8.4 Server with GUI
Installing Docker CE on CentOS 8.4 should be pretty simple using the script from https://get.docker.com/
However, you should take care of a few prerequisites to ensure a smooth installation of Docker CE.

Version Involved:
Linux : CentOS Linux release 8.4.2105
Docker : Docker version 20.10.7, build f0df350

The CentOS 8.4 Server with GUI installation ships with containerd-related packages by default, and these conflict with the Docker CE installation. Before proceeding with the installation, we must remove the conflicting packages.
 
Packages to be removed: runc, container-selinux

#yum remove runc
#yum remove container-selinux


Now we can install Docker CE using the Docker installation script:

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
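Once the script completes, a quick sanity check (assuming you also want Docker to start on boot) would be something like:

systemctl enable --now docker
docker run hello-world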