Saturday, 9 August 2025

Linux - Passwordless SSH Login

In this blog I will show how to enable passwordless SSH access using SSH key-based authentication. This method uses a public/private key pair to authenticate without needing a password.

1. Generate an SSH Key Pair on Your Local Machine (Node A)

(Node A) ssh-keygen -t rsa

It will present two prompts. If you want to keep the defaults (the default key location and an empty passphrase), simply hit Enter twice.

  • Press Enter to accept the default location (~/.ssh/id_rsa)
  • Leave the passphrase empty for true passwordless login
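If you script this step, the same defaults can be supplied non-interactively. A minimal sketch (the filename id_rsa_demo is just an example, chosen so an existing default key is not overwritten):

```shell
# Non-interactive equivalent of accepting the two prompts:
# -f sets the key path, -N "" sets an empty passphrase, -q keeps output quiet.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keygen -t rsa -f ~/.ssh/id_rsa_demo -N "" -q
ls -l ~/.ssh/id_rsa_demo ~/.ssh/id_rsa_demo.pub
```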

2. Copy the Public Key to the Remote Server (Node B)

ssh-copy-id username@nodeb

  • This appends your public key to the remote server’s ~/.ssh/authorized_keys
  • You’ll be prompted for the remote user’s password once

If the ssh-copy-id command isn’t installed, use:

cat ~/.ssh/id_rsa.pub | ssh username@nodeb "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

 3. Set Correct Permissions on Remote Server

ssh username@nodeb "chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"
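sshd is strict about these modes and will ignore keys in files that are group or world accessible. A quick local sketch (on a throwaway directory, so nothing real is touched) showing how to verify the bits with stat:

```shell
# Sketch: verify the permission bits SSH expects (700 on the dir, 600 on the file),
# demonstrated on a temporary directory rather than a real ~/.ssh.
demo=$(mktemp -d)
mkdir -p "$demo/.ssh" && chmod 700 "$demo/.ssh"
touch "$demo/.ssh/authorized_keys" && chmod 600 "$demo/.ssh/authorized_keys"
stat -c '%a %n' "$demo/.ssh" "$demo/.ssh/authorized_keys"
```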

 4. Test Passwordless Login

ssh username@nodeb

You should now log in without being prompted for a password.

Friday, 30 May 2025

Process State in Linux


What is a Process in Linux?
     A process is an instance of a running program that has its own memory, execution state, and system resources. The Linux operating system uses the process scheduler to manage multiple processes, ensuring fair execution and resource allocation.

Understanding process states helps with system performance monitoring (identifying CPU-heavy tasks), debugging (checking for unresponsive or zombie processes), and optimizing resource allocation (ensuring efficient memory usage).

Every process in Linux goes through different states, which can be viewed using the ps command:
ps aux
(OR)
ps -eo pid,state,cmd

Where pid is the process identifier
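Before looking at individual PIDs, a quick per-state summary of the whole system can be handy:

```shell
# Count how many processes are currently in each state.
# The state column is a single letter: R, S, D, T, Z, etc.
ps -eo state= | cut -c1 | sort | uniq -c | sort -rn
```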

Different Process states are
  1. Running (`R`)
  2. Sleeping (`S` or `D`)
    • Interruptible Sleep (`S`)
    • Uninterruptible Sleep (`D`)
  3. Stopped (`T`)
  4. Zombie (`Z`)
  5. Dead (`X`)
Now, let’s explore these process states.

Running (`R`)

The process is actively executing or is ready to execute.
If the CPU is available, the process moves forward.
Processes in this state are visible in the task manager or top command.

Sleeping (`S` or `D`)

There are two types of sleeping states:
  • Interruptible Sleep (`S`)
            The process is waiting for an event or resource (such as user input or a network response).
            It can be woken up if the necessary resource becomes available.
  • Uninterruptible Sleep (`D`)
            The process is waiting for I/O operations (like disk read/write).
            It cannot be interrupted by signals until it finishes the I/O task.

Stopped (`T`)

    The process is paused and does not execute. A user can stop a process using 
Ctrl + Z 
(OR)
kill -STOP <pid>

It can be resumed using,
fg 
(OR)
 kill -CONT <pid>

Zombie (`Z`)

    The process has completed execution, but its parent process has not yet read its exit status (reaped it).
The system still holds the process ID (PID) but frees its other resources. Zombie processes do not consume CPU or memory, but too many of them indicate that parent processes are not reaping their children.

To list all zombie processes, we can use one of the below commands (matching states that start with Z, since ps may append modifiers such as Z+):
ps aux | awk '$8 ~ /^Z/ {print $2,$11}'
(OR)
ps -eo pid,ppid,state,cmd | awk '$3 ~ /^Z/ {print $0}'
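To see how the state filter behaves without needing a real zombie on the system, the same awk expression can be run over sample output (the PIDs and commands below are made up for illustration):

```shell
# Feed hypothetical "ps -eo pid,ppid,state,cmd" output through the zombie filter.
ps_sample='  PID  PPID S CMD
  412     1 S /usr/sbin/sshd
  501   412 Z [worker] <defunct>
  502   412 S worker --active'
echo "$ps_sample" | awk '$3 ~ /^Z/ {print $1, $2, $4}'
```

Only the line whose state column starts with Z is printed, along with its PID, PPID, and command.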

Dead (`X`)

    The process has fully terminated, and its ID is released. It no longer exists in the process table.

Examples of what certain process states indicate about the system:

    If a process is stuck in uninterruptible sleep (`D`), it may be waiting on slow disk operations.
Too many zombie processes (`Z`) might indicate improper child process management.

Managing Process States in Linux

Processes can be managed with the following commands in Linux:

Terminate a process
kill -9 <pid>

Resume a stopped process 
kill -CONT <pid>
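The stop/continue cycle can be demonstrated safely on a throwaway sleep process:

```shell
# Start a background process, stop it, confirm the T state, then resume it.
sleep 60 &
pid=$!
kill -STOP "$pid"
sleep 0.2                      # give the kernel a moment to update the state
ps -o state= -p "$pid"         # T (stopped)
kill -CONT "$pid"
sleep 0.2
ps -o state= -p "$pid"         # S (sleeping again)
kill -9 "$pid"                 # clean up the demo process
```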



Understanding basics and organized work culture is key to efficient Troubleshooting!!!
Happy Troubleshooting!!!

Thursday, 29 May 2025

TCP connection states


Transmission Control Protocol (TCP) states – In IT networking, when we stream, browse, or copy data, there is always a connection between two or more systems, and these connections move through different TCP states. Once we understand them clearly, it becomes easier to see what and where a bottleneck or issue can be, and to improve performance and security.

TCP operates in the transport layer of the OSI model and ensures reliable network communication.

What are packets ?

     Packets are smaller chunks/blocks of data transferred over the network. Each packet travels over the network independently, and the packets are reassembled once they all reach the destination to reconstruct the original data.

Packets contain three main components - a header, a payload, and a trailer.

For every TCP connection in the networking world, these are the possible states:
  1. LISTEN
  2. SYN-SENT
  3. SYN-RECEIVED
  4. ESTABLISHED
  5. FIN-WAIT-1
  6. CLOSE-WAIT
  7. FIN-WAIT-2
  8. LAST-ACK
  9. TIME-WAIT
  10. CLOSED
Let’s explore the different TCP connection states, which define how a connection is established, maintained, and eventually closed.

LISTEN

     This is the state where the server passively waits for incoming connection requests. It listens for SYN (synchronization) packets from clients that wish to establish a connection.

SYN-SENT

     This state occurs when a client initiates a connection by sending a SYN packet. The client waits for an acknowledgment from the server to move forward.

SYN-RECEIVED

     The server receives the SYN packet and responds with both a SYN and an ACK (acknowledgment).  This confirms that the server is ready for the handshake.

ESTABLISHED

     Once both sides exchange SYN and ACK packets, the connection is fully established. This is where data transmission occurs. Applications can send and receive packets smoothly.

FIN-WAIT-1

     When one side wants to close the connection, it sends a FIN (finish) packet. It waits for an acknowledgment from the other side.

CLOSE-WAIT

     The receiving party acknowledges the FIN packet and enters CLOSE-WAIT. It may still send remaining data before fully closing the connection.

FIN-WAIT-2

     The initiating party receives the ACK and waits for the second FIN packet from the other end.

LAST-ACK

     The second party sends a FIN and waits for the final acknowledgment.

TIME-WAIT

     The connection enters TIME-WAIT, ensuring the last ACK was received. This prevents old duplicate packets from interfering with new connections.

CLOSED

     Finally, the connection is fully terminated, and all resources are freed.

Here is a real-world example of a user browsing a website.

     Imagine you’re opening a website in your browser. Here’s how TCP connection states apply:
  • LISTEN – The web server is waiting for incoming connections.
  • SYN-SENT – Your browser sends a SYN packet to the server, requesting a connection.
  • SYN-RECEIVED – The server responds with a SYN-ACK, acknowledging the request.
  • ESTABLISHED – Your browser sends an ACK, completing the handshake. Now, data (webpage content) can be exchanged.
  • FIN-WAIT-1 & FIN-WAIT-2 – When you close the tab, your browser sends a FIN packet to terminate the connection.
  • TIME-WAIT & CLOSED – The server acknowledges the termination, ensuring no stray packets interfere with future connections.
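On a Linux system these states can be observed live with the ss utility; a quick sketch that summarizes current TCP connections by state (note that ss abbreviates some names, e.g. ESTAB for ESTABLISHED):

```shell
# Summarize current TCP connections by state (LISTEN, ESTAB, TIME-WAIT, ...).
# -t TCP, -a all sockets, -n numeric output; skip the header line.
ss -tan | awk 'NR > 1 {print $1}' | sort | uniq -c | sort -rn
```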


Understanding basics and organized work culture is key to efficient Troubleshooting!!!
Happy Troubleshooting!!!

Sunday, 6 October 2024

Installing kali linux image with podman


1. Install podman using yum

#yum install podman



To check the version of the installed podman, you can use “-v” or “--version”

# podman -v

(or)

#podman --version



To list containers, including stopped ones, use the below command

# podman ps -a


We can also check the same with podman container ls, and list already downloaded images with podman image ls.

# podman container ls

#podman image ls


To search whether a particular image is available in the configured repositories:

# podman search kali-rolling


To download the image from the repository:

# podman pull kali-rolling


To see if the image is downloaded.

# podman image ls


 To run the container

# podman run --tty --interactive kali-rolling

This command will connect you to a terminal inside the Kali Linux container. From this shell you can execute Kali Linux commands.

To exit the container shell, type exit at the prompt.

Once you exit, the container will also be stopped. To start it again, use the below command.

# podman start container_name


To attach to the running container, you can use the podman attach command as below,

# podman attach container_name




Monday, 15 January 2024

Symbolic link



Symbolic links are files that point to another file or directory. They are also called symlinks or soft links. A symlink is simply a pointer to the original path. As system administrators, we need a good understanding of symlinks and how they work. In this blog we will simulate different situations around symbolic links to see how things behave.


Create symlink

Symlink can be created using ln command as shown below, 

#ln -s original_directory_path symlink_filename

1.1 Create symlink


We can identify whether a file is a regular file or a symlink by looking at the "ls -ltr" output, where an "l" in the first position of the permissions column denotes a link.

1.2 List symlink

We can also compare inodes: a symlink has its own inode, different from the original's (unlike a hard link, which shares the target's inode). Listing with "ls -iL" dereferences the symlink and shows the target's inode.

1.3 Inode for files
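A quick sketch comparing inodes on a throwaway directory (orig and link are arbitrary names used only for this demo):

```shell
# A symlink gets its own inode; dereferencing it (-L) shows the target's inode.
work=$(mktemp -d) && cd "$work"
mkdir orig
ln -s orig link
stat -c '%i %n' orig            # inode of the original directory
stat -c '%i %n' link            # inode of the symlink itself (different)
stat -L -c '%i %n' link         # inode of the target (same as orig)
```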

When we create or change anything inside the symlink, it is reflected in the source/original directory.
Let's create and delete files inside the symlink to demonstrate this.

1.4 Create files under symlink

1.5 Delete files under symlink

Now it is quite clear that the contents seen through the symlink are the same as the original directory, and any change is visible through both paths.

Let's now take it further to change the symlink file itself and see what will happen.

Remove symlink

We can remove a symlink using the "rm" or "unlink" command. However, when using rm, ensure you do not add a trailing /, as that would remove the contents instead of the symlink itself.

1.6 Remove symlink with rm command

1.7 Remove symlink with unlink command

Notice that when we add a trailing / with the unlink command, it throws an error saying "not a directory"; however, with rm it would delete the contents. It is always safer to use the unlink command to delete a symlink rather than rm, to avoid accidentally removing the original files.

1.8 Remove original file


However, when we remove the original file that the symlink points to, the symlink breaks, as shown above.
A symlink can also live on a different filesystem; in this example I have created the symlink on a different mount, /data, whereas the source directory exists under /home.

1.9 Filesystem structure used in this blog


Update symlink

What if we rename the original directory? Let's explore that now.

When we rename the original directory, the symlink becomes unusable: the referenced path no longer exists because it has been moved elsewhere. However, all the files are still available in the renamed directory, and to access them we can go directly to that new path.
 
1.10 Rename original directory


Now to fix this, we need to update the symlink: simply run the ln command with the force option to replace the existing symlink with a new one pointing to the new destination.

#ln -sf /original_directory symbolic_link_file 

1.11 Update symlink path
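The whole break-and-repair cycle can be sketched end to end (assuming GNU ln; -n is added so ln does not follow the link if the old target still resolves to a directory):

```shell
# Break a directory symlink by renaming its target, then repoint it.
work=$(mktemp -d) && cd "$work"
mkdir original_directory
ln -s original_directory symbolic_link_file
mv original_directory renamed_directory      # the symlink is now dangling
[ ! -e symbolic_link_file ] && echo "link is broken"
ln -sfn renamed_directory symbolic_link_file # repoint: force + no-dereference
readlink symbolic_link_file                  # now points at renamed_directory
```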


Now everything is back to normal.

Thursday, 4 January 2024

Extend LVM without partition table



This blog is a continuation of my previous blog, Create LVM without partition table.

In this scenario, we will extend the size of disk on VM instead of adding new disk.

Log in to the VM and check whether the increased size is visible. We can do this by rescanning the block device as shown below,

#echo 1 > /sys/class/scsi_device/2:0:1:0/device/rescan
(where 2:0:1:0 is the SCSI interface of the device; it can be identified by matching the LUN number we extended in the VM portal against the lsscsi output)

#echo 1 > /sys/class/block/sdb/device/rescan
(where sdb is the block device)

Now we will be able to see the new increased size for the disk using fdisk -l /dev/sdb or lsblk command. 

1.1 Disk Rescan

Next we can resize the PV directly, since we have not created a partition table.

Command to resize as below,
#pvresize /dev/sdb

1.2 Resize PV


Once we resize the PV, we can see the changes using pvs or pvdisplay command,

The VG will automatically reflect the available free space; we do not have to extend anything, as we are using the same PV instead of adding a new one.

We can see the VG status using vgs or vgdisplay.

1.3 vgs output

Let's proceed to resize the LV directly using the below command,

#lvresize -l +100%FREE lv_name

1.4 LV resize


Now we can see the lv has been resized with lvs or lvdisplay command.

Now the LV has required space, filesystem can be grown using below command, 

#xfs_growfs /mount_point_name

1.5 Resize Filesystem

Once the filesystem is extended, we can see the change using df -h and its usable now. 

1.6 Resized mount

We have increased the disk space without downtime and without a partition table.

Caution: When we use partitionless LVM, we also need to be careful later when adding more disks, because the old disk used for LVM will also show no partition in fdisk or parted output. So it is recommended to use the pvdisplay or pvs command to validate and confirm a disk's usage before touching it.



Wednesday, 3 January 2024

Create LVM without partition table



Earlier I had written a blog to create LVM using partition table Create File system with Linux LVM

In this blog, I will show you how to create LVM without a partition table. This is useful when you want to use a full disk for the volume instead of a portion of the disk.
This procedure applies to data disks that you add to your system, rather than the OS disk/partition. It is also very helpful later when you need to extend the same disk instead of adding a new one in a virtual/cloud environment: LVM without a partition table is easy to handle, as you can directly resize your PV and LV instead of destroying and recreating the disk and partition table.

Let's start by scanning for the disk after adding a new disk to the VM. We covered how to re-scan the SCSI bus in the earlier blog - Create File system with Linux LVM

To rescan for the new disk - my setup has the new disk on the host2 adapter.
#echo "- - -" > /sys/class/scsi_host/host2/scan

1.1 Scanning SCSI Host adapter

We now have a new disk on the VM, /dev/sdb. Let's create a physical volume directly on this disk instead of creating a partition table with parted or fdisk.

#pvcreate /dev/sdb
Syntax - pvcreate disk_name/partition_name

1.2 Create PV
Now you can see we have a physical volume (PV) created /dev/sdb as shown above,

To create Volume Group(VG) on this new PV, you need to use vgcreate command, 

#vgcreate data_vg /dev/sdb
Where data_vg is the volume group name. 
Syntax - vgcreate VG_new PV1 PV2 ...

1.3 Create VG

An LV can be created on top of this VG using the lvcreate command,

Syntax - lvcreate -l +100%FREE -n LV_name vg_name

1.4 Create LV

From lsblk you can see that the LVM is created successfully.

1.5 lsblk output


Now you can create Filesystem to this Logical volume and mount it as per your requirement. 

I have formatted with XFS and mounted to /data mount point. 

1.6 XFS Format


1.7 Mount FS

I will write procedure to extend same disk and LV without having downtime in another blog.  

Sunday, 16 July 2023

Linux User SSH Access Control

Technology growth is rapid; implementing solutions without proper restrictions can cause data leaks and cyberattacks. The first line of defence for any solution is its users. Granting granular, role-based privileges is one of the critical tasks that should be implemented.

SSH (Secure Shell) is a protocol that works on port 22. In Linux, users connect remotely over SSH to administer and manage daily activities on the server. There are multiple ways of securing user access. In this blog I will describe how to restrict SSH access to a server.

Create a group in Linux that will contain all the users who will be allowed SSH access to the server.

#groupadd ssh-users

You can also specify a group ID for this group and keep it consistent across your environment, to manage it at scale with config automation tools like Ansible in the future.

#groupadd -g 3000 ssh-users

Create user and add to the ssh-users group 

# useradd -c "SSH Allowed user" -m -d /home/karthick -s /bin/bash -G ssh-users karthick

where,

-c is a comment to help an administrator identify the user.

-m creates the user's home directory if it does not exist.

-d specifies the user's home directory.

-s is the login shell the user will use.

-G adds the user to the listed secondary groups. (Note: the -a option to append groups belongs to usermod, not useradd.)

SSH Restriction

Edit the sshd_config file under /etc/ssh (/etc/ssh/sshd_config) and add the following line at the end.

AllowGroups ssh-users

Restart the sshd service. 

#systemctl restart sshd

Now, SSH will be denied to all users except those added to the ssh-users group.
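Before restarting sshd, it is worth confirming group membership so nobody gets locked out. A small sketch (in_group is a hypothetical helper name, and karthick is the example user from above):

```shell
# Check whether a user belongs to a given group before relying on AllowGroups.
in_group() {
    id -nG "$1" | tr ' ' '\n' | grep -qx "$2"
}

if in_group karthick ssh-users; then
    echo "karthick will keep SSH access"
else
    echo "karthick would be locked out - fix group membership first"
fi
```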

Ansible Automation

I have written a sample ansible play to automate these tasks in my git - SSH-Restriction



Best Practices: 

Never allow privileged/admin accounts direct remote SSH access.

Always grant SSH access to a normal user, then allow sudo escalation or su to the privileged user with a password.

References:

https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html

OS: 

RHEL, SUSE Linux, Centos

Saturday, 1 July 2023

Azure Fence agent in Azure China for Pacemaker cluster


Microsoft has provided extensive documents for Pacemaker cluster on azure. They are efficient and easy to follow. 

On Azure China, many things are different in terms of infrastructure connectivity; we see a lot of added restrictions and security. So in this blog I will cover a small topic that is not covered in the global Azure documentation. This topic is specific to government/China Azure instances.

For a Pacemaker cluster to operate as expected, we need a STONITH device, as you all know. On Azure, the fencing agent (fence_azure_arm) provides this feature.

The fencing agent is written in Python and is granted Azure service principal rights or MSI (Managed Service Identity) roles to perform fence operations.

Ideally this agent communicates with Azure AD to authenticate and then performs stop/start/restart operations on a VM. For global Azure, all of this communication goes through management.azure.com; in China, however, this API endpoint is different, and the fence agent must be explicitly configured with the cloud API it needs to talk to.

While configuring the fence agent as a cluster resource, you need to add an additional option called cloud with the value china. This ensures the fence agent communicates with the correct API and comes online.

The following is an example of creating the fencing device for a Pacemaker cluster on SUSE Linux:

# replace the quoted placeholders with your subscription ID and the resource group of the VM

sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm params msi=true subscriptionId="subscription ID" resourceGroup="resource group"  cloud=china pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 op monitor interval=3600 timeout=120

Hope this topic is useful for you !!
- JK

Wednesday, 7 September 2022

Privilege escalation with sudo


Normal user in Linux can be allowed to execute privileged commands with sudo rights.

The sudoers configuration file is /etc/sudoers; however, editing this file directly is not advisable. Either use visudo to edit it, or create a drop-in template file under /etc/sudoers.d.

I would suggest using template files under /etc/sudoers.d, which makes it easier to manage multiple users' privileges and is efficient for granular access.

Syntax - Who Where = (Runas-Who:group) What_Commands

Who - the user you want to grant sudo rights/privileges to.

Where - the hosts on which the user may run the mentioned commands.

Runas-Who:group - the user (and group) the granted commands will run as.

What_Commands - finally, the commands you want to allow the user to execute.



User shiva is a normal user, and he is now allowed to list the repositories on the server without providing the root password.

The NOPASSWD directive may be a security issue, as it will not ask for any password when elevating privileges to execute the command.
To avoid this, you may use the line "Defaults !targetpw" - this asks for the invoking user's own password when elevating privileges (which is also sudo's default behavior).

Example: 

# visudo -f /etc/sudoers.d/shiva
defaults !targetpw
shiva ALL = (ALL:ALL) /usr/sbin/yum repolist


You can also enforce asking for the root user's password when elevating privileges:

"Defaults targetpw"


Best practice:

Edit the sudoers file with visudo, which validates the content on exit; if anything is wrong in the format it shows an error and lets you fix the file.

While editing a sudoers template, keep two sessions open with root rights and leave one session running the top command. After applying the sudoers config, make sure sudo works without any problem in a new session, and only then terminate the top session. This is helpful if the sudoers file went wrong and you are locked out of root.

This situation can occur when you have wrong sudo file and there is no root user login allowed directly. 

To validate a sudoers file you created, run visudo -cf <file> to check that it is formatted correctly.

Wednesday, 15 June 2022

Linux - HA Cluster Maintenance




What is a cluster ? 

A group or collection of servers providing the same service/application.

In this blog we will see how to put a cluster into maintenance mode and how to release it from maintenance mode.

What is Maintenance mode ? 

Maintenance mode is when the server is not available to serve customers because it is undergoing a maintenance task.

With HA clustering enabled, if we need to perform an activity on cluster-managed pools/servers, we need to enable cluster maintenance mode so that we do not see any unexpected behavior from the system.

To enable cluster maintenance mode, the following command is used.

For SUSE Clusters 
crm configure property maintenance-mode=true

For Red Hat Clusters
pcs property set maintenance-mode=true

When maintenance mode is set, you can start/stop resources or update their configuration; during this period the cluster will not interfere with these resources.

Once the required activity is completed, the following command is used to release the cluster from maintenance mode.


For SUSE Clusters 
crm configure property maintenance-mode=false

For Red Hat Clusters
pcs property set maintenance-mode=false

Sunday, 12 June 2022

Creating and modifying user in Linux


    By default, when you install a Linux operating system, a user is created with full sudo privileges. However, whenever you create a user after installing the OS, you need to provision the required privileges for that user. In this blog I will show you a few commands with useradd and usermod.

    The following user karthick was created when the OS was installed, and this user was added to the wheel group to get sudo privileges. User karthick can run privileged commands with his password.

    To create a normal user on the command line, providing the user's home directory and UID, we use -d to specify the home directory and -u for the UID.

#useradd -d <path_to_users_home_directory> -u <UID> <username>

UID is a unique identifier number used to identify the user. 
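Once a user exists, the UID and home directory can be read back from the account database; a quick sketch using getent (root is used here as an example account that exists on every system):

```shell
# Look up a user's UID and home directory from the account database.
getent passwd root | awk -F: '{print "uid=" $3, "home=" $6}'
```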


    This user does not have the privilege to execute root-level commands. However, if we need to provide root access, we add the user to the wheel group using the usermod command:

#usermod -aG wheel <username>


Adding a user to the wheel group gives that user complete root access.

    If you do not want the user to have complete root access and instead want to provide granular or role-based access, we can provision that through a sudoers template: create a template/drop-in file under the /etc/sudoers.d directory.

We will see about sudoers template file in much more detail in upcoming blogs. 


Wednesday, 10 November 2021

zmprov commands in zimbra


zmprov commands are used for provisioning in zimbra.

syntax - zmprov command argument


ga → Get account – gets the current value

ma → Modify account – modifies the current value to the specified value.

gcf → Get configuration – gets the current configuration.

mcf → Modify configuration – modifies the current configuration to the specified value.

 

1. To Change display name for a single user

$zmprov ga user@domain.com | grep displayName

displayName: User Name

$zmprov ma user@domain.com displayName "User Name1"

$zmprov ga user@domain.com | grep displayName

displayName: User Name1

$zmprov ga user@domain.com | grep zimbraPrefFromDisplay

2. To Modify Mail attachment size  

Check current value

zmprov gcf zimbraMtaMaxMessageSize

zmprov gcf zimbraFileUploadMaxSize

zmprov gcf zimbraMailContentMaxSize

Modify size

zmprov mcf zimbraMtaMaxMessageSize 20971520

zmprov mcf zimbraFileUploadMaxSize 20971520

zmprov mcf zimbraMailContentMaxSize 52428800

3. To get & modify the mail forwarding address

zmprov ga user@domain.com zimbraPrefMailForwardingAddress

zmprov ma user@domain.com zimbraPrefMailForwardingAddress forward@domain.com

Monday, 8 November 2021

Install Zimbra on Centos 8


Zimbra is an open source email collaboration suite. In this blog let's concentrate on how to install a single-server setup of Zimbra.

Zimbra can be installed as a multi-server setup which we will visit in later blogs. 
The following are the prerequisites to install Zimbra.
  1. Set a Fully Qualified Hostname (FQDN) = mail.jkzimbra.com
  2. Update the hosts file entry for the host in the format - <ip> <FQHN> <HN>
  3. Configure a static IP = 192.168.142.128
  4. For now we can stop and disable firewalld, iptables, and SELinux. In later blogs we can go through how to configure these alongside Zimbra.
  5. Install and set up DNS with an A and an MX record. Here we will use the dnsmasq software to accomplish this.
  6. Update the system to the latest patch level with yum update.
  7. Download the Zimbra setup from the zimbra.com site and place it on the server.
  8. It is better to have a separate LVM for /opt, as the whole Zimbra setup and configuration will reside at this mount point.
CentOS version - CentOS Linux release 8.4.2105
Zimbra Version - zcs-8.8.15_GA_3953

Step 1: Perform yum update 
Step 2: Set the hostname as fqdn with hostnamectl as shown in below image.


Step 3: Update host file to reflect as below with FQDN


Step 4:Set a static IP 


Step 5: Stop and disable Firewalld and Selinux


Step 6: Install dnsmasq and configure dnsmasq.conf as below
# yum install dnsmasq
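The configuration screenshot is not reproduced here; as an illustrative sketch (values taken from the prerequisites above: domain jkzimbra.com, mail host mail.jkzimbra.com, IP 192.168.142.128), a minimal /etc/dnsmasq.conf could look like:

```
# /etc/dnsmasq.conf (sketch; hostnames and IP are the example values above)
domain=jkzimbra.com
# A record for the mail host
address=/mail.jkzimbra.com/192.168.142.128
# MX record: mail for jkzimbra.com is handled by mail.jkzimbra.com, preference 10
mx-host=jkzimbra.com,mail.jkzimbra.com,10
```

After restarting dnsmasq, the MX query described below (e.g. with dig or host against this server) should return mail.jkzimbra.com.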

Start the dnsmasq service - systemctl start dnsmasq
Query the MX record; it should return the MX record as shown below, with the server's FQDN and the correct IP address.

Note: If you are not getting the proper result, you may need to update /etc/resolv.conf with the correct DNS server (the one running this dnsmasq setup) - here it is the same Zimbra server, so I have used 192.168.142.128


Step 7: Download zimbra setup and move it to the server in /tmp or /home.
zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz

Extract the setup file as below

# tar -xzvf zcs-8.8.15_GA_3953.RHEL8_64.20200629025823.tgz



Now let's start with the actual install and configuration of Zimbra.

Run the installer from inside the extracted zcs* directory:
./install.sh

Installation will start as below,





Answer "Y" where required 




You will receive a DNS error; correct the domain name as you have configured it in DNS. This error appears because we configured the MX record for the domain name, not the hostname. You have to change the domain name here: answer yes and then provide the proper domain name.


You will be seeing the configuration menu as below,


Set the password for the admin user, and view the LDAP, Postfix, and other users' passwords using the appropriate menu numbers.
You can also update the passwords here.


To go to the previous menu, type "r" and hit Enter.


Once you have configured all required options, apply the configuration: type "a" and hit Enter.
It will ask to save the config file; type yes or hit Enter to accept.
Finally, for the system to start applying the given config, type yes at the prompt and hit Enter.


Sit back and relax; the setup will run for some time and finish as shown below.
The Zimbra services will start now.


Post setup tasks

Set up the SSH keys. To populate them, as the zimbra user (su - zimbra), type zmupdateauthkeys and press Enter. The keys are updated in /opt/zimbra/.ssh/authorized_keys.


Enabling the server statistics display: for server statistics to appear in the administration console, the syslog configuration files must be modified.

As root, type /opt/zimbra/libexec/zmsyslogsetup. This enables the server to display statistics.


Let's check the Zimbra service status by logging in as the zimbra user.


zmcontrol status will give you the status of all services; all should be running. Here dnscache is not running because we have configured dnsmasq on the same server, which conflicts with Zimbra's dnscache, so you can ignore it.
You could have skipped installing dnscache by answering "N" during setup when it asks, or you can configure dnsmasq on a separate server.



Now that we are all set, let's open the Zimbra web client, where we can send and receive emails.

Note: we can send mail to outside domains now, but we will not be able to receive emails until we publish our domain/MX record to global DNS - you can contact your domain provider (e.g. GoDaddy) to do that.

Webmail link - https://ipaddress

Note: you will get an SSL warning; accept and proceed, as the server is still using a self-signed certificate.


The admin account is already created during installation; that is where you will receive all Zimbra service-related emails. Let's log in and see how the webmail looks.



Yay!! We got some emails. Remember these are not external emails; we will only get internal email until we set up public DNS.

Let's see what the admin console looks like. The admin console opens on port 7071.
Admin console link - https://ipaddress:7071



Ignore the failed server status; it is shown for dnscache, which we are not using now.


That's all for now - we still have a lot more on Zimbra, which we will get through in later blogs.