Wednesday, 11 August 2021

Extend Logical volume using LVM



I am going to extend an already created XFS file system by 1 GB using LVM.
Currently I have no free space in the PV / VG / LV to extend the file system.
So I will first add a new 1 GB disk to the VM and then extend the VG / LV.
Adding a disk and creating a partition is explained here.

Let's continue with our work and extend the PV first.
The current PV, VG and LV output is shown below, and you can see I have no free space for now.

1.1 Current Physical volume

1.2 Current Physical volume free space detail

1.3 Current VG details

1.4 Current LV details
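If the screenshots above are not visible, the same information can be gathered with the standard LVM reporting commands. A minimal sketch, assuming the names used later in this post (VG vgdata, LV lvdata):

# Summary of physical volumes, volume groups and logical volumes
pvs
vgs
lvs
# Detailed view of free extents in the VG and the current LV size
vgdisplay vgdata
lvdisplay /dev/vgdata/lvdata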

I have finished adding the disk and made it visible on the server by scanning the SCSI hosts.
Let's proceed with creating a new PV using the newly added disk.
Command to create a PV - pvcreate
Syntax : pvcreate disk_name
disk_name = /dev/sdc1

1.5 pv create
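For reference, the pvcreate step looks like this; a sketch assuming the new partition is /dev/sdc1 as noted above:

# Initialise the new partition as an LVM physical volume
pvcreate /dev/sdc1
# Confirm it is now listed with ~1 GiB of free space and no VG assigned yet
pvs /dev/sdc1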

Now we can extend our VG, vgdata, with the newly created PV as shown below.
Syntax: vgextend vgname pvname
vgname = vgdata
pvname = /dev/sdc1

1.6 vg extend
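The vgextend step, as a sketch with the names used in this post (vgdata, /dev/sdc1):

# Add the new PV to the existing volume group
vgextend vgdata /dev/sdc1
# VFree in the vgs output should now show roughly 1 GiB of free space
vgs vgdata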



1.7 new vg details


1.8 current lv details

Now that we have free space in the VG, we can extend the LV.
Syntax : lvextend -L +size lvname
size = 1020MiB 
lvname = /dev/vgdata/lvdata

1.9 extend lv
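The lvextend step, as a sketch with the values above:

# Grow the LV by 1020 MiB using the free space added to the VG
lvextend -L +1020MiB /dev/vgdata/lvdata

As a side note, lvextend also accepts -r (--resizefs), which grows the filesystem in the same step, so the separate xfs_growfs run shown below would not be needed.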

You can see the newly extended size in the lvs and lvdisplay output below.

1.10 new lv

The LV size is extended; however, this will not be visible to the file system yet. We have to resize / grow the file system. I have an XFS file system, so I will use xfs_growfs; if you have an older file system like ext4/3/2 you may use resize2fs.

Command: xfs_growfs
Syntax : xfs_growfs lvname/devicename
lvname = /dev/mapper/vgdata-lvdata

1.11 xfs grow
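A sketch of the grow step, assuming the filesystem is still mounted at /datanew from the previous post (xfs_growfs is normally pointed at the mount point of a mounted XFS filesystem):

# Grow the mounted XFS filesystem to fill the resized LV
xfs_growfs /datanew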

Once you have executed xfs_growfs on the file system, you can see the increased size in the df -h output and start using the newly available space.

1.12 Extended xfs filesystem
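A quick verification, again assuming the /datanew mount point:

# The filesystem size should now be about 1 GiB larger
df -h /datanew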


This is the biggest advantage of LVM: you can grow/extend your file system space online without any downtime.
LVM has many more uses, such as cluster support, RAID features, and backup/recovery.

Tuesday, 10 August 2021

Creating File system with Linux LVM


Adding new disk to Linux vm and creating LVM File system

I am planning to perform the following actions:
Add a 1 GB disk to a Linux VM from VMware Workstation.
Create an LVM volume on the added 1 GB disk.
Mount it as an additional data disk on the Linux VM.

Environment description
Guest VM Name: nfs server 
Guest OS: CentOS Linux release 8.4.2105
VMware® Workstation 16 Pro 16.1.2 build-17966106

Current VM hardware:

Guest Linux Details:

[root@nfsserver ~]# date
Tue Aug 10 14:46:35 IST 2021
[root@nfsserver ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
[root@nfsserver ~]# cat /etc/redhat-release
CentOS Linux release 8.4.2105
[root@nfsserver ~]# hostname
nfsserver.jk.com
[root@nfsserver ~]#


Adding the new disk in Workstation.
From the Workstation VM -> go to Settings.
From Virtual Machine Settings, click ADD at the bottom.



I am going to use a SCSI disk, so I can add disks without powering off the VM.

I am creating a new virtual disk, so a new .vmdk file will be created. You can also choose to use existing virtual disk files.


I am creating a 1 GB disk and storing it as a single file.
You can also choose to store it as multiple files, but a single file gives better performance, and maintaining a multi-file disk becomes complex.

You can opt to save this disk file in a separate location or with the VM, and name it however you like. I am going with the default name and location, which is with the VM.




Once completed, click OK and proceed.


We have now successfully attached the 1 GB disk to the VM; next we need to log in to the guest and make it visible inside the guest OS so we can use it.

Log in to the server and scan the SCSI host adapters to refresh and list the attached devices.
If you added the disk while the VM was powered off, you will not need to scan, as the bus is scanned when the server powers on.

Scanning SCSI Disk from Linux vm

My server has 3 host adapters, host{0,1,2}, so I will scan all 3 to see which adapter has the new disk.
Host adapters are listed under /sys/class/scsi_host/.

[root@nfsserver ~]# ls /sys/class/scsi_host/
host0  host1  host2
[root@nfsserver ~]#

I will execute the commands below to scan for the newly added disk:
echo "- - -" > /sys/class/scsi_host/host0/scan
echo "- - -" > /sys/class/scsi_host/host1/scan
echo "- - -" > /sys/class/scsi_host/host2/scan



For me the new disk was attached to host2. Once the scan is done, you can see the newly added disk with the lsblk command.

Partitioning the Disk

I am using fdisk to create the disk partition table and mark the partition as LVM, as I am familiar with it.
You can also use parted.

Execute fdisk -l /dev/sdb to see the details of the disk.
Here you can see there is no partition table created yet and the disk is 1 GiB in size, which is what I added from Workstation.

[root@nfsserver scsi_host]# fdisk -l /dev/sdb
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[root@nfsserver scsi_host]#

Execute fdisk /dev/sdb to create the partition table.
Once in the fdisk console:
To create a GPT disk label, issue "g"
To print the partition table, issue "p"
To create a new partition, issue "n"
You will be asked for the partition number, first sector and last sector. I am accepting the defaults:
Partition number (1-128, default 1):
First sector (2048-2097118, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-2097118, default 2097118):

To change the partition type, issue "t"
To mark the partition as Linux LVM, enter type "31" -- this may vary; on older CentOS releases with a DOS/MBR label the type was 8e
To save the partition table and exit the console, issue "w"

[root@nfsserver ~]# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help):  g
Created a new GPT disklabel (GUID: 696462B2-DD16-384D-8745-D4E2514F3B7F).

Command (m for help): p
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 696462B2-DD16-384D-8745-D4E2514F3B7F

Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-2097118, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-2097118, default 2097118):

Created a new partition 1 of type 'Linux filesystem' and of size 1023 MiB.

Command (m for help): p
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 696462B2-DD16-384D-8745-D4E2514F3B7F

Device     Start     End Sectors  Size Type
/dev/sdb1   2048 2097118 2095071 1023M Linux filesystem

Command (m for help): 

Command (m for help): t
Selected partition 1
Partition type (type L to list all types): 31
Changed type of partition 'Linux filesystem' to 'Linux LVM'.

Command (m for help): p
Disk /dev/sdb: 1 GiB, 1073741824 bytes, 2097152 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 696462B2-DD16-384D-8745-D4E2514F3B7F

Device     Start     End Sectors  Size Type
/dev/sdb1   2048 2097118 2095071 1023M Linux LVM

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

[root@nfsserver ~]#

Issue lsblk to see the partition created

[root@nfsserver ~]# lsblk
NAME             MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                8:0    0   40G  0 disk
├─sda1             8:1    0    1G  0 part /boot
└─sda2             8:2    0   39G  0 part
  ├─cl_test-root 253:0    0   37G  0 lvm  /
  └─cl_test-swap 253:1    0    2G  0 lvm  [SWAP]
sdb                8:16   0    1G  0 disk
└─sdb1             8:17   0 1023M  0 part
sr0               11:0    1  723M  0 rom
[root@nfsserver ~]#

Since you don't have a filesystem mounted yet, df -h will not list the newly created partition.

[root@nfsserver ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
[root@nfsserver ~]#

Creating Logical volume 

First we need to create a physical volume. The command syntax is:
Syntax: pvcreate partition_name
partition_name=/dev/sdb1

Once created, you can list it using the pvdisplay command.
[root@nfsserver ~]# pvcreate /dev/sdb1
  Physical volume "/dev/sdb1" successfully created.
[root@nfsserver ~]#

[root@nfsserver ~]# pvdisplay /dev/sdb1
  "/dev/sdb1" is a new physical volume of "1022.98 MiB"
  --- NEW Physical volume ---
  PV Name               /dev/sdb1
  VG Name
  PV Size               1022.98 MiB
  Allocatable           NO
  PE Size               0
  Total PE              0
  Free PE               0
  Allocated PE          0
  PV UUID               WZrfHl-fR33-W1sX-La0W-ZDRH-gXQS-HEYwTk

[root@nfsserver ~]#

Once the PV is created, we can create a volume group on top of it.
Syntax: vgcreate vg_name pv_name
vg_name=vgdata
pv_name=/dev/sdb1

[root@nfsserver ~]# vgcreate vgdata /dev/sdb1
  Volume group "vgdata" successfully created
[root@nfsserver ~]#

[root@nfsserver ~]# vgdisplay vgdata
  --- Volume group ---
  VG Name               vgdata
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1020.00 MiB
  PE Size               4.00 MiB
  Total PE              255
  Alloc PE / Size       0 / 0
  Free  PE / Size       255 / 1020.00 MiB
  VG UUID               1ErkXI-EhVp-224p-KHT7-7opk-fjlV-N9X4HD

[root@nfsserver ~]#

Now that we have a PV and a VG on top of it, let's create an LV.
syntax: lvcreate --size size --name lv_name vg_name
lv_name=lvdata
vg_name=vgdata
size=1020MiB

[root@nfsserver ~]# lvcreate --size 1020MiB --name lvdata vgdata
  Logical volume "lvdata" created.
[root@nfsserver ~]#  lvdisplay /dev/vgdata/lvdata
  --- Logical volume ---
  LV Path                /dev/vgdata/lvdata
  LV Name                lvdata
  VG Name                vgdata
  LV UUID                vPPTZN-MTPO-AiAP-Osfg-yzls-lRwR-Zw7wot
  LV Write Access        read/write
  LV Creation host, time nfsserver.jk.com, 2021-08-10 15:12:19 +0530
  LV Status              available
  # open                 0
  LV Size                1020.00 MiB
  Current LE             255
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:2

[root@nfsserver ~]#

Once we have created the LV, we can create whatever filesystem we require on it, such as ext2/3/4 or XFS.

I am using XFS filesystem.
syntax: mkfs.xfs lv_name
lv_name=  /dev/vgdata/lvdata

[root@nfsserver ~]# mkfs.xfs /dev/vgdata/lvdata
meta-data=/dev/vgdata/lvdata     isize=512    agcount=4, agsize=65280 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=261120, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=1566, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@nfsserver ~]#

Mounting LV
Create a new directory where you want to mount this newly created filesystem.
Directory name = /datanew
Mount syntax: mount lv_name directory_name
lv_name = /dev/vgdata/lvdata
directory_name = /datanew


[root@nfsserver ~]# mkdir /datanew
[root@nfsserver ~]# mount /dev/vgdata/lvdata /datanew/
[root@nfsserver ~]#

Once mounted, you can see the filesystem with df -h.

[root@nfsserver ~]# df -h /datanew/
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgdata-lvdata 1014M   40M  975M   4% /datanew
[root@nfsserver ~]#

Set the mount to be persistent after reboot.
Add the mount entry to /etc/fstab.
You can provide the LV name or its UUID in the fstab file.
I prefer to use the UUID.

Syntax: UUID="ID" mount_directory filesystem_type defaults 0 0
mount_directory=/datanew
UUID=cd8f30ed-d938-4deb-8eec-0e72ed4c9810
You can get the UUID of an LV using blkid as below.
 
[root@nfsserver ~]# blkid /dev/mapper/vgdata-lvdata
/dev/mapper/vgdata-lvdata: UUID="cd8f30ed-d938-4deb-8eec-0e72ed4c9810" BLOCK_SIZE="512" TYPE="xfs"
[root@nfsserver ~]#


[root@nfsserver ~]# cat /etc/fstab | grep datanew
UUID=cd8f30ed-d938-4deb-8eec-0e72ed4c9810       /datanew        xfs     defaults        0 0
[root@nfsserver ~]#
[root@nfsserver ~]# mount -a
[root@nfsserver ~]# df -h /datanew/
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vgdata-lvdata 1014M   40M  975M   4% /datanew
[root@nfsserver ~]#

We have successfully added a 1 GB disk and mounted it as an XFS filesystem with LVM.
Next time I will extend this LVM disk by another 1 GB in another blog post.




Wednesday, 4 August 2021

Setup NFS Server on CentOS 8


Server and Client OS = CentOS Linux release 8.4.2105
NFS = Network File System

NFS Server Hostname = nfsserver.jk.com
IP Address  = 192.168.111.129/24
NFS Client Hostname = nfsclient.jk.com
IP Address  = 192.168.111.128/24
NFS Directory = /nfsdata
NFS mount point = /mnt/data


NFS Server setup

Install nfs-utils on the server.
[root@nfsserver nfsdata]# yum install nfs-utils

Enable the nfs-server service so it starts automatically on boot.
[root@nfsserver nfsdata]# systemctl enable nfs-server

Starting nfs server service
[root@nfsserver nfsdata]# systemctl start nfs-server

Check status of nfs-server 
[root@nfsserver nfsdata]# systemctl status nfs-server
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Wed 2021-08-04 21:31:04 IST; 3s ago
  Process: 2042 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 2040 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 2039 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 2067 ExecStart=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssprox>
  Process: 2056 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
  Process: 2055 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 2067 (code=exited, status=0/SUCCESS)

Aug 04 21:31:04 nfsserver.jk.com systemd[1]: Starting NFS server and services...
Aug 04 21:31:04 nfsserver.jk.com systemd[1]: Started NFS server and services.
[root@nfsserver nfsdata]#


Firewall requirements
Services to be allowed = rpc-bind, nfs

[root@nfsserver /]# firewall-cmd --permanent --add-service=rpc-bind
success
[root@nfsserver /]# firewall-cmd --permanent --add-service=nfs
success
[root@nfsserver /]# firewall-cmd --reload
success
[root@nfsserver /]# firewall-cmd --list-all
public
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client mountd nfs rpc-bind ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@nfsserver /]#

Create a new directory on the server that will be shared with the client.

[root@nfsserver /]# mkdir nfsdata
[root@nfsserver /]#

Add the NFS export details to the exports file.

[root@nfsserver /]# cat /etc/exports
/nfsdata        192.168.111.0/24(rw,no_root_squash,sync)
[root@nfsserver /]#

Export the shares defined in the exports file.

[root@nfsserver /]# exportfs -rv
exporting 192.168.111.0/24:/nfsdata
[root@nfsserver /]#


Mount the NFS directory on the client machine

Run showmount from the client to check that the exported directory is visible, then create the mount point directory (/mnt/data) if it does not already exist and mount the share.

[root@nfsclient ~]# showmount -e 192.168.111.129
Export list for 192.168.111.129:
/nfsdata 192.168.111.0/24
[root@nfsclient ~]#


[root@nfsclient ~]# mount -t nfs 192.168.111.129:/nfsdata /mnt/data
[root@nfsclient ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
192.168.111.129:/nfsdata   37G  1.7G   36G   5% /mnt/data

Create a test file under the NFS mount on the client.
[root@nfsclient ~]# cd /mnt/data/
[root@nfsclient data]# touch testnew
[root@nfsclient data]# ls
testnew
[root@nfsclient data]# echo "This is a new test file" > testnew
[root@nfsclient data]# cat testnew
This is a new test file
[root@nfsclient data]#

Make the mount point persistent across reboots.

To ensure the mount point comes up after a reboot, add it to fstab as below:

[root@nfsclient ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jul 31 05:53:32 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl_test-root /                       xfs     defaults        0 0
UUID=1e6b0524-90a5-49d8-aff0-466bac22a3b4 /boot                   xfs     defaults        0 0
/dev/mapper/cl_test-swap none                    swap    defaults        0 0
192.168.111.129:/nfsdata        /mnt/data       nfs     defaults        0 0
[root@nfsclient ~]#

Let's unmount it and validate that it gets mounted automatically.

[root@nfsclient ~]# umount /mnt/data

Now issue mount -a to see if it mounts automatically.

[root@nfsclient ~]# mount -a
[root@nfsclient ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
192.168.111.129:/nfsdata   37G  1.7G   36G   5% /mnt/data
[root@nfsclient ~]#


Verify that the server shows the newly created file and its content.

[root@nfsserver nfsdata]# cat testnew
This is a new test file
[root@nfsserver nfsdata]# pwd
/nfsdata
[root@nfsserver nfsdata]#





Wednesday, 30 June 2021

Docker image push

To push an image to Docker Hub, follow the steps below.

Use the docker container ls command to list the details of running containers.

 [root@jkcli ~]# docker container ls
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                                   NAMES
cb72f5d90279   httpd     "httpd-foreground"   21 minutes ago   Up 21 minutes   0.0.0.0:8080->80/tcp, :::8080->80/tcp   jkapache
[root@jkcli ~]#

Use the docker container commit command to commit the container as an image with a tag.

[root@jkcli ~]# docker container commit cb72f5d90279 jk-httpd:test
sha256:8daf5db17d6fdb0fd6ceea7e4d6ed38d22c3fdad6375f479ed6fc7986eec4a61
[root@jkcli ~]#

You can now list the images and see that the created image is available.

[root@jkcli ~]# docker image ls
REPOSITORY   TAG       IMAGE ID       CREATED              SIZE
jk-httpd     test      8daf5db17d6f   About a minute ago   189MB
httpd        latest    30287d899656   7 days ago           138MB
nginx        latest    d1a364dc548d   5 weeks ago          133MB
[root@jkcli ~]#


To upload your newly created image to Docker Hub, you first have to log in with your Docker Hub credentials. To do this, use the docker login command.

[root@jkcli ~]# docker login

Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.

Username: karthick04

Password: XXXXX

Now we can tag the image against the repository created in Docker Hub. Here I have already created a repository, karthick04/jkrepo.

Command : docker tag source_image:tag target_image:tag

[root@jkcli ~]# docker tag jk-httpd:test karthick04/jkrepo:version1
[root@jkcli ~]# docker image ls
REPOSITORY          TAG        IMAGE ID       CREATED         SIZE
karthick04/jkrepo   version1   8daf5db17d6f   9 minutes ago   189MB
jk-httpd            test       8daf5db17d6f   9 minutes ago   189MB
httpd               latest     30287d899656   7 days ago      138MB
nginx               latest     d1a364dc548d   5 weeks ago     133MB
[root@jkcli ~]#

Finally, to push the image to Docker Hub, use the docker image push command.

[root@jkcli ~]# docker image push karthick04/jkrepo:version1
The push refers to repository [docker.io/karthick04/jkrepo]
c54811c7708d: Pushed
dfd488a286c9: Pushed
15176fdb9a61: Pushed
61172cb5065c: Pushed
9fbbeddcc4e4: Pushed
764055ebc9a7: Pushed
version1: digest: sha256:c6271d65a6b6a4b6274adcbd51b0ddd5dab1b94b8112ccdbd7d407770bdb238b size: 1578
[root@jkcli ~]#


We have successfully pushed the image to Docker Hub; it is now visible on Docker Hub at

https://hub.docker.com/r/karthick04/jkrepo

You can also pull the same image using docker pull karthick04/jkrepo:version1


[root@jkcli ~]# docker pull karthick04/jkrepo:version1
version1: Pulling from karthick04/jkrepo
b4d181a07f80: Already exists
4b72f5187e6e: Already exists
12b2c44d04b2: Already exists
ef481fc2a03a: Already exists
d483d945fcab: Already exists
6b6a1b4ae105: Pull complete
Digest: sha256:c6271d65a6b6a4b6274adcbd51b0ddd5dab1b94b8112ccdbd7d407770bdb238b
Status: Downloaded newer image for karthick04/jkrepo:version1
docker.io/karthick04/jkrepo:version1
[root@jkcli ~]# docker image ls
REPOSITORY          TAG        IMAGE ID       CREATED          SIZE
karthick04/jkrepo   version1   8daf5db17d6f   26 minutes ago   189MB
httpd               latest     30287d899656   7 days ago       138MB
nginx               latest     d1a364dc548d   5 weeks ago      133MB
[root@jkcli ~]#


Bash into a container



If you have created a container in detached mode and want a bash shell inside it, you can follow this.

The docker exec command can be used to execute a command in a container that is already running.

Use docker ps to get the name of the existing container


[root@jkcli ~]# docker ps
CONTAINER ID   IMAGE     COMMAND              CREATED          STATUS          PORTS                                   NAMES
cb72f5d90279   httpd     "httpd-foreground"   17 minutes ago   Up 17 minutes   0.0.0.0:8080->80/tcp, :::8080->80/tcp   jkapache
[root@jkcli ~]#


Use the command docker exec -it <container name> /bin/bash to get a bash shell inside the container.

[root@jkcli ~]# docker exec -it cb72f5d90279 /bin/bash
root@cb72f5d90279:/usr/local/apache2#


Common beginner mistakes with Docker containers

Container run/re-run

When you create a container and then exit it, the container only stops; it is still there in a stopped/Exited state. You cannot run a new container with the same name until the old one is removed.

Running httpd container

[root@jkcli ~]# docker container run --name jkapache -p 8080:80 httpd
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
b4d181a07f80: Pull complete
4b72f5187e6e: Pull complete
12b2c44d04b2: Pull complete
ef481fc2a03a: Pull complete
d483d945fcab: Pull complete

Later, when I exit this container and try to run a container with the same name, it gives me the error below.

[root@jkcli ~]# docker container run --name jkapache -p 8080:80 httpd -d

docker: Error response from daemon: Conflict. The container name "/jkapache" is already in use by container "28c5bc35be99e59baf5ce076bdfad683188b64d2e47d7483e8cfc86a764f832c". You have to remove (or rename) that container to be able to reuse that name.

see 'docker run --help'.

However, if you check with the "docker container ls" command, it will show no containers running.

[root@jkcli ~]# docker container ls
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
[root@jkcli ~]# 


This is because the docker container ls command shows only containers that are running at that point in time.
We have to use -a to list all containers.


[root@jkcli ~]# docker container ls -a
CONTAINER ID   IMAGE     COMMAND                  CREATED         STATUS                          PORTS     NAMES
28c5bc35be99   httpd     "httpd-foreground"       2 minutes ago   Exited (0) About a minute ago             jkapache
1cb4b275776a   nginx     "/docker-entrypoint.…"   10 days ago     Exited (0) 10 days ago                    jk
[root@jkcli ~]#


Remove the docker container
Now remove these stopped containers, and then we can try running the container again as below:


[root@jkcli ~]# docker container ls -a
CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS                      PORTS     NAMES
28c5bc35be99   httpd     "httpd-foreground"       15 minutes ago   Exited (0) 14 minutes ago             jkapache
1cb4b275776a   nginx     "/docker-entrypoint.…"   10 days ago      Exited (0) 10 days ago                jk
[root@jkcli ~]# docker container rm 28c5bc35be99
28c5bc35be99

Running the docker container with detach
[root@jkcli ~]# docker container run -d --name jkapache -p 8080:80 httpd
cb72f5d90279fcdef02b3715a04cec20043c23f10a8e548ac7a0cf4789640788
[root@jkcli ~]# docker container ls
CONTAINER ID   IMAGE     COMMAND              CREATED         STATUS         PORTS                                   NAMES
cb72f5d90279   httpd     "httpd-foreground"   4 minutes ago   Up 4 minutes   0.0.0.0:8080->80/tcp, :::8080->80/tcp   jkapache
[root@jkcli ~]#


Friday, 18 June 2021

Docker installation issues



 
This document is intended to cover Docker Engine installation issues.


Scenario 1: Installing Docker CE on CentOS 8.4 Server with GUI
Installing Docker CE on CentOS 8.4 should be pretty simple using the script from https://get.docker.com/
However, you should take care of some prerequisites to perform a smooth installation of Docker CE.

Version Involved:
Linux : CentOS Linux release 8.4.2105
Docker : Docker version 20.10.7, build f0df350

The CentOS 8.4 Server with GUI install is found to ship with containerd-related packages by default, which conflict with the Docker CE installation. Before proceeding with the installation, we must remove these conflicting packages.
 
Packages to be removed : runc, container-selinux

#yum remove runc
#yum remove container-selinux


Now we can install Docker CE successfully using the Docker installation script:

curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
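
Once the script completes, a minimal verification sketch (assuming a systemd-based host):

# Start Docker now and enable it at boot
systemctl enable --now docker
# Confirm the installed client version
docker --version
# Optional smoke test
docker run --rm hello-world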