
Saturday, 9 August 2025

Linux - Passwordless SSH Login

In this blog I will show how to enable passwordless SSH access using SSH key-based authentication. This method uses a public/private key pair to authenticate without needing a password.

1. Generate SSH Key Pair on Your Local Machine (Node A)

(Node A) ssh-keygen -t rsa

It will show two prompts as below; if you want to keep the defaults (standard key location and an empty passphrase), simply hit Enter at each prompt.

  • Press Enter to accept the default location (~/.ssh/id_rsa)
  • Leave the passphrase empty for true password less login

2. Copy the Public Key to the Remote Server (Node B)

ssh-copy-id username@nodeb

  • This appends your public key to the remote server’s ~/.ssh/authorized_keys
  • You’ll be prompted for the remote user’s password once

If the ssh-copy-id command isn’t installed, use:

cat ~/.ssh/id_rsa.pub | ssh username@nodeb "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys"

 3. Set Correct Permissions on Remote Server

ssh username@nodeb "chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"

 4. Test Passwordless Login

ssh username@nodeb

You should now log in without being prompted for a password.
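
To confirm the login really is non-interactive (for example before relying on it in scripts), you can run a quick check like the one below; username@nodeb is the same example host used above, and BatchMode=yes makes ssh fail instead of falling back to a password prompt:

ssh -o BatchMode=yes username@nodeb hostname --> prints the remote hostname if key-based login works, errors out otherwise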

Friday, 30 May 2025

Process State in Linux


What is a Process in Linux?
     A process is an instance of a running program that has its own memory, execution state, and system resources. The Linux operating system uses the process scheduler to manage multiple processes, ensuring fair execution and resource allocation.

Understanding this process definition helps with system performance monitoring (identifying CPU-heavy tasks), debugging issues (checking for unresponsive or zombie processes), and optimizing resource allocation (ensuring efficient memory usage).

Every process in Linux goes through different states, which can be viewed using the ps command:
ps aux
(OR)
ps -eo pid,state,cmd

Where pid is the process identifier
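
For a quick summary of how many processes are currently in each state, the state column can be aggregated (a small sketch; the exact counts will vary from system to system):

ps -eo state= | sort | uniq -c --> prints a count per state letter (R, S, D, T, Z)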

The different process states are:
  1. Running (`R`)
  2. Sleeping (`S` or `D`)
    • Interruptible Sleep (`S`)
    • Uninterruptible Sleep (`D`)
  3. Stopped (`T`)
  4. Zombie (`Z`)
  5. Dead (`X`)
Now, let's explore these process states.

Running (`R`)

The process is actively executing or is ready to execute.
If the CPU is available, the process moves forward.
Processes in this state are visible in the output of the top command.

Sleeping (`S` or `D`)

There are two types of sleeping states:
  • Interruptible Sleep (`S`)
            The process is waiting for an event or resource (such as user input or a network response).
            It can be woken up if the necessary resource becomes available.
  • Uninterruptible Sleep (`D`)
            The process is waiting for I/O operations (like disk read/write).
            It cannot be interrupted by signals until it finishes the I/O task.

Stopped (`T`)

    The process is paused and does not execute. A user can stop a process using 
Ctrl + Z 
(OR)
kill -STOP <pid>

It can be resumed using,
fg 
(OR)
 kill -CONT <pid>
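
A small end-to-end sketch of stopping and resuming a background process (using a harmless sleep as the example):

sleep 600 &
pid=$!
kill -STOP "$pid" --> process moves to the stopped (T) state
ps -o pid,state,cmd -p "$pid" --> the state column should show T
kill -CONT "$pid" --> process resumes (back to S/R)
kill "$pid" --> clean up the example process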

Zombie (`Z`)

    The process has completed execution, but its parent process has not acknowledged its termination.
The system still holds the process ID (PID) but frees its resources. Zombie processes do not consume CPU or memory, but too many zombies indicate system inefficiency.

To list all zombie processes, we can use the commands below
ps aux | awk '$8 ~ /^Z/ {print $2,$11}'
(OR)
ps -eo pid,ppid,state,cmd | awk '{if ($3=="Z") print $0}'
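
If you want to see a zombie in the listing above, a short-lived one can be created safely (a sketch; the zombie is reaped once its parent, the 30-second sleep, exits):

(sleep 1 & exec sleep 30) & --> the inner sleep exits after 1 second, but its parent (now sleep 30) never calls wait()
sleep 2
ps -eo pid,ppid,state,cmd | awk '$3=="Z"' --> shows the defunct sleep process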

Dead (`X`)

    The process has fully terminated, and its ID is released. It no longer exists in the process table.

Examples of what different process states can indicate on a system:

    If a process is stuck in uninterruptible sleep (`D`), it may be waiting on slow disk operations.
Too many zombie processes (`Z`) might indicate improper child process management.

Managing Process States in Linux

Processes can be managed with the following commands in Linux,

Terminate a process
kill -9 <pid>

Resume a stopped process 
kill -CONT <pid>



Understanding the basics and keeping an organized way of working are key to efficient troubleshooting!!!
Happy Troubleshooting!!!

Monday, 15 January 2024

Symbolic link



Symbolic links are files that point to another file or directory. A symbolic link is also called a symlink or soft link; it is simply a pointer to the original file or directory. As a system administrator, you need a good understanding of symlinks and how they work. In this blog we will simulate different situations around symbolic links and see how things behave.


Create symlink

Symlink can be created using ln command as shown below, 

#ln -s original_directory_path symlink_filename

1.1 Create symlink


We can identify whether a file is a regular file or a symlink by looking at the "ls -ltr" output, where an "l" in the first position of the permissions column denotes a link.

1.2 List symlink
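
As a small sketch with hypothetical paths (the actual paths in the screenshots above will differ), creating and listing a symlink looks like this:

ln -s /home/user/reports /data/reports_link
ls -ld /data/reports_link --> the leading "l" in the permissions marks it as a link, and the arrow shows its target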

We can also compare inodes with "ls -li": the symlink has its own inode, different from the original directory's inode, because it is a separate file that only stores the path to its target (hard links, by contrast, share the same inode).

1.3 Inode for files

When we create or change anything inside the symlink, it is reflected in the source/original directory.
Let's create and delete files inside the symlink to demonstrate this.

1.4 Create files under symlink

1.5 Delete files under symlink

Now it is clear that the contents seen through the symlink are the same as the original directory, and any change is reflected in both places.

Let's now take it further to change the symlink file itself and see what will happen.

Remove symlink

To remove a symlink we can use the "rm" or "unlink" command. However, when using the rm command, ensure you do not add a trailing /, as (especially with -r) it then operates on the contents of the original directory instead of removing the symlink file.

1.6 Remove symlink with rm command

1.7 Remove symlink with unlink command

Notice that when we add a trailing / with the unlink command it throws an error saying "not a directory", whereas with rm it can end up deleting the contents. It is always safer to use the unlink command to delete a symlink rather than rm, to avoid accidentally removing the original files.
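
A quick sketch with the same hypothetical symlink as earlier (/data/reports_link):

unlink /data/reports_link --> safe: removes only the symlink
rm /data/reports_link --> also removes only the symlink (no trailing slash, no -r)
rm -r /data/reports_link/ --> dangerous: the trailing slash follows the link and deletes the contents of the original directory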

1.8 Remove original file


However, when we remove the original file or directory that the symlink points to, the symlink breaks, as shown above.
A symlink can also be created on a different filesystem; in this example I created the symlink on a separate mount, /data, whereas the source directory lives under /home.

1.9 Filesystem structure used in this blog


Update symlink

What if we renamed the original directory, let's explore now. 

When we rename the original directory, the symlink becomes unusable because the referenced path no longer exists - it has been moved elsewhere. However, all the files are still available in the moved directory, and to access them we can go directly to it using the new path.
 
1.10 Rename original directory


To fix this, we need to update the symlink: simply re-run the ln command with the force option, which removes the existing symlink and creates a new one pointing to the new destination.

#ln -sf /original_directory symbolic_link_file 

1.11 Update symlink path


Now everything is back to normal.

Thursday, 4 January 2024

Extend LVM without partition table



This blog is a continuation of my previous blog Create LVM without partition table.

In this scenario, we will extend the size of an existing disk on the VM instead of adding a new disk.

Log in to the VM and check if the increased size is visible. We can do this by rescanning the block device as shown below,

#echo 1 > /sys/class/scsi_device/2:0:1:0/device/rescan --> where 2:0:1:0 is the SCSI address of the connected device; it can be identified by matching the LUN number we extended in the VM portal against the lsscsi output.

#echo 1 > /sys/class/block/sdb/device/rescan --> Where sdb is the block device.

Now we will be able to see the new increased size for the disk using fdisk -l /dev/sdb or lsblk command. 

1.1 Disk Rescan
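
If you are not sure which SCSI address maps to the disk, every SCSI disk can be rescanned in one go (a sketch; the sysfs paths are the usual ones but may differ on your distribution):

for dev in /sys/class/block/sd*/device/rescan; do echo 1 > "$dev"; done
lsblk /dev/sdb --> the increased size should now be visible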

Next we can resize the PV since we have not used or created a partition table. 

Command to resize as below,
#pvresize /dev/sdb

1.2 Resize PV


Once we resize the PV, we can see the changes using pvs or pvdisplay command,

The VG will automatically reflect the free space; we do not have to extend anything, as we are reusing the same PV instead of adding a new one.

We can see the VG status using vgs or vgdisplay.

1.3 vgs output

Let's proceed with resizing the LV directly using the command below,

#lvresize -l +100%FREE lv_name

1.4 LV resize


Now we can see the lv has been resized with lvs or lvdisplay command.

Now that the LV has the required space, the filesystem can be grown using the command below,

#xfs_growfs /mount_point_name

1.5 Resize Filesystem
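
As a side note (a sketch, assuming a reasonably recent lvm2 and the example names data_vg/data_lv), the LV resize and the filesystem grow can also be done in a single step with the -r option:

lvresize -r -l +100%FREE /dev/data_vg/data_lv --> grows the LV and then runs the matching filesystem resize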

Once the filesystem is extended, we can see the change using df -h, and the space is usable now.

1.6 Resized mount

We have increased the disk space without downtime and without a partition table.

Caution: When we use partitionless LVM, we also need to be careful later when adding more disks, because the old disk used for LVM will also show as having no partition in fdisk or parted output. So it is recommended to use the pvdisplay or pvs command to validate and confirm its usage.



Wednesday, 3 January 2024

Create LVM without partition table



Earlier I had written a blog on creating LVM using a partition table: Create File system with Linux LVM

In this blog, I will show you how to create LVM without using a partition table. This is useful when you want to use a full disk for the volume instead of a portion of the disk.
This procedure is relevant for data disks that you add to your system rather than the OS disk/partition. It is also very helpful later on when you need to extend the same disk instead of adding a new disk in a virtual/cloud environment: LVM without a partition table is easy to handle, because you can directly resize your PV and LV instead of destroying and recreating the disk and partition table.

Let's start by adding a new disk to the VM and making it visible. We covered how to re-scan the SCSI bus in the earlier blog - Create File system with Linux LVM

To rescan for the new disk - my setup has the new disk on the host2 adapter.
#echo "- - -" > /sys/class/scsi_host/host2/scan

1.1 Scanning SCSI Host adapter

We now have the new disk that we added on the VM, /dev/sdb. Let's create this disk as a physical volume directly instead of creating a partition table with parted or fdisk.

#pvcreate /dev/sdb
Syntax - pvcreate disk_name/partition_name

1.2 Create PV
Now you can see that a physical volume (PV) has been created on /dev/sdb, as shown above.

To create a Volume Group (VG) on this new PV, use the vgcreate command,

#vgcreate data_vg /dev/sdb
Where data_vg is the volume group name. 
Syntax - vgcreate VG_new PV1 PV2 ...

1.3 Create VG

An LV can be created on top of this VG using the lvcreate command,

Syntax - lvcreate -l +100%FREE -n LV_name vg_name

1.4 Create LV

From lsblk you can see that the LV has been created successfully.

1.5 lsblk output


Now you can create a filesystem on this logical volume and mount it as per your requirement.

I have formatted it with XFS and mounted it at the /data mount point.

1.6 XFS Format


1.7 Mount FS
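
For completeness, a sketch of that formatting and mounting step with example names (data_vg as created above, data_lv as an assumed LV name):

mkfs.xfs /dev/data_vg/data_lv --> create the XFS filesystem on the LV
mkdir -p /data --> create the mount point
mount /dev/data_vg/data_lv /data
df -h /data --> confirm the filesystem is mounted and usable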

I will write up the procedure to extend the same disk and LV without downtime in another blog.

Sunday, 16 July 2023

Linux User SSH Access Control

Technology growth is rapid, and implementing solutions without proper restrictions can lead to data leaks and cyberattacks. The first line of defence for any solution is its users. Granting granular privileges and role-based access is one of the critical tasks that should be implemented.

SSH (Secure Shell) is a protocol that works on port 22. In Linux, users connect remotely over SSH to administer servers and manage daily activities. There are multiple ways of securing user access; in this blog I will describe how to restrict SSH access to a server.

Create a Group in Linux that will contain all the users who will be provided SSH access to the server. 

#groupadd ssh-users

You can also specify a group ID for this group and keep it consistent across your environment, which makes it easier to manage at scale later with configuration automation tools like Ansible.

#groupadd -g 3000 ssh-users

Create user and add to the ssh-users group 

# useradd -c "SSH Allowed user" -m -d /home/karthick -s /bin/bash -G ssh-users karthick

where, 

-c is a comment to help an administrator identify the user or give a brief description of the user.

-d sets the home directory of the user.

-s sets the login shell that the user will use.

-G adds the user to the listed supplementary group(s).

(For an existing user, use usermod -aG ssh-users <user> instead, where -a appends the group without removing existing memberships.)
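
A quick check that the user landed in the right group:

id karthick --> should list ssh-users among the groups
getent group ssh-users --> should list karthick as a member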

SSH Restriction

Edit the sshd_config file under /etc/ssh (i.e. /etc/ssh/sshd_config) and add the following line at the end.

AllowGroups ssh-users
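
Before restarting sshd, it is worth validating the configuration so a typo does not lock you out of remote access:

sshd -t --> exits silently when the configuration is valid, prints the error otherwise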

Restart the sshd service. 

#systemctl restart sshd

Now SSH access is denied to all users except those who are members of the ssh-users group.

Ansible Automation

I have written a sample Ansible play to automate these tasks in my Git repo - SSH-Restriction



Best Practices: 

Never allow privileged/admin accounts direct remote SSH access.

Always provide SSH access to a normal user and then allow sudo escalation or su to a privileged user with a password.

References:

https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_handlers.html

OS: 

RHEL, SUSE Linux, CentOS

Saturday, 1 July 2023

Azure Fence agent in Azure China for Pacemaker cluster


Microsoft provides extensive documentation for Pacemaker clusters on Azure. It is thorough and easy to follow.

On Azure China, many things are different in terms of infrastructure connectivity; there are a lot of additional restrictions and security controls. So in this blog I will cover a small topic that is not covered in the global Azure documentation. It is specific to government/China Azure instances.

For a Pacemaker cluster to operate as expected, we need a STONITH device, as you all know. On Azure there is a fencing agent (fence_azure_arm) that provides this feature.

The fencing agent is written in Python and is granted Azure service principal rights or MSI (managed service identity) roles to perform fence operations.

This agent communicates with Azure AD to authenticate and then performs the stop/start/restart operation on a VM. For global Azure, all of this communication goes through management.azure.com; in China, however, this API endpoint is different, and the fence agent must be told explicitly which cloud API it needs to communicate with.

While configuring the fence agent as a cluster resource, you need to add an additional option called cloud with the value china. This ensures the fence agent communicates with the correct API and comes online.

The following is an example of creating the fencing device for a Pacemaker cluster on SUSE Linux.

# replace the subscription ID and resource group placeholders with those of your VM

sudo crm configure primitive rsc_st_azure stonith:fence_azure_arm params msi=true subscriptionId="subscription ID" resourceGroup="resource group"  cloud=china pcmk_monitor_retries=4 pcmk_action_limit=3 power_timeout=240 pcmk_reboot_timeout=900 pcmk_delay_max=15 op monitor interval=3600 timeout=120
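
After the resource is created, a quick way to confirm the fence agent has come online (a sketch using crmsh):

sudo crm resource status rsc_st_azure
sudo crm status --> the stonith resource should show as Started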



Hope this topic is useful for you !!
- JK

Wednesday, 7 September 2022

Privilege escalation with sudo


A normal user in Linux can be allowed to execute privileged commands with sudo rights.

The sudoers configuration file is /etc/sudoers; however, editing this file directly is not advisable. Either use visudo to edit it, or create a template file under /etc/sudoers.d.

I would suggest using a template file under /etc/sudoers.d, which makes it easier to manage multiple users' privileges and is more efficient for granular access.

Syntax - Who Where = (Runas-Who:group) What_Commands

Who - the user you want to give sudo rights/privileges to.

Where - on which server(s) the user is allowed to execute the mentioned commands.

Runas-Who:group - as which user and group the granted commands will be executed.

What_commands - finally, the commands the user is allowed to execute.



User shiva is a normal user and is now allowed to list the repositories on the server without providing the root password.
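
A sketch of the sudoers entry granting this kind of access (using the same example user and command) could look like:

shiva ALL = (ALL) NOPASSWD: /usr/bin/yum repolist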

The NOPASSWD directive can be a security issue, as it does not ask for any password when elevating privileges to execute the command.
To overcome this, you can use the line "Defaults !targetpw" - this makes sudo ask for the invoking user's own password when elevating privileges.

Example: 

# visudo -f /etc/sudoers.d/shiva
Defaults !targetpw
shiva ALL = (ALL:ALL) /usr/bin/yum repolist
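
To verify what a user has been granted, the applicable rules can be listed (run as root):

sudo -l -U shiva --> lists the sudo rules that apply to shiva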


You can also enforce asking for the root user's password when elevating privileges:

Defaults targetpw


Best practice:

Edit the sudoers file with visudo, which validates the content when you exit; if anything is wrong with the format it shows an error so you can fix the file before saving.

While editing a sudoers template, keep two sessions open with root rights and leave one of them running the top command. After applying the sudoers config, make sure the sudo command works without any problem in a new session; only then terminate the top command and its session. This is a lifesaver if the sudoers file goes wrong and you are locked out of root.

That situation can occur when the sudoers file is broken and direct root login is not allowed.

To validate a sudoers file you created, run visudo -cf <file> (for example visudo -cf /etc/sudoers.d/shiva); it reports whether the file is formatted correctly.

Wednesday, 15 June 2022

Linux - HA Cluster Maintenance




What is a cluster ? 

A group or collection of servers providing the same service/application.

In this blog we will see about setting cluster into maintenance mode and releasing cluster out of maintenance mode. 

What is Maintenance mode ? 

Maintenance mode is when a server is not available to serve customers because it is undergoing a maintenance task.

With an HA cluster enabled, if we need to perform some activity on the cluster-managed pools/servers, we need to enable cluster maintenance mode first so that we do not see any unexpected behaviour from the cluster.

To enable cluster maintenance mode, the following commands are used.

For SUSE Clusters 
crm configure property maintenance-mode=true

For Redhat Cluster
pcs property set maintenance-mode=true

When maintenance mode is enabled, you can start/stop resources or perform any config update on the cluster-managed resources; during this period the cluster will not interfere with them.
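
To double-check that maintenance mode is really in effect before starting the activity (a small sketch):

For SUSE Clusters
crm configure show | grep maintenance-mode

For Redhat Cluster
pcs property show maintenance-mode --> on newer pcs versions the subcommand is "pcs property config"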

Once the required activity is completed, the following commands are used to release the cluster from maintenance mode.


For SUSE Clusters 
crm configure property maintenance-mode=false

For Redhat Cluster
pcs property set maintenance-mode=false

Sunday, 24 October 2021

Basic Linux commands

Basic Linux commands and their uses


The following are basic Linux commands; getting familiar with them will help a beginner in Linux administration.

Each command has an extensive manual page. We can review the man page using the syntax below,

#man <command_name>


ssh - connect to another server over network
syntax: ssh server_name

ls - list files and directories.
syntax: ls

pwd - print present working directory 
syntax: pwd

cat - read contents of file without editing the file
syntax: cat file_name

vi - view/edit contents of a file.
syntax: vi file_name

rm - remove / delete a file 
syntax: rm file_name

rmdir - remove / delete an empty directory (use rm -r to delete a directory along with its contents)
syntax: rmdir directory_name

cp - copy a file from one location to another 
syntax: cp source_file destination_path

cd - change directory to given path
syntax: cd destination_path

grep - find a particular pattern in given file
syntax: grep test filename.txt

yum - install / remove / update software packages.
syntax: yum install package_name
            yum remove package_name
            yum update package_name

Wednesday, 4 August 2021

Setup NFS Server on Centos 8


Server and Client OS = CentOS Linux release 8.4.2105
NFS = Network File System

NFS Server Hostname = nfsserver.jk.com
IP Address  = 192.168.111.129/24
NFS Client Hostname = nfsclient.jk.com
IP Address  = 192.168.111.128/24
NFS Directory = /nfsdata
NFS mount point = /mnt/data


NFS Server setup

Install nfs-utils on server. 
[root@nfsserver nfsdata]# yum install nfs-utils

To make nfs-server service enabled on reboot 
[root@nfsserver nfsdata]# systemctl enable nfs-server

Starting nfs server service
[root@nfsserver nfsdata]# systemctl start nfs-server

Check status of nfs-server 
[root@nfsserver nfsdata]# systemctl status nfs-server
● nfs-server.service - NFS server and services
   Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
  Drop-In: /run/systemd/generator/nfs-server.service.d
           └─order-with-mounts.conf
   Active: active (exited) since Wed 2021-08-04 21:31:04 IST; 3s ago
  Process: 2042 ExecStopPost=/usr/sbin/exportfs -f (code=exited, status=0/SUCCESS)
  Process: 2040 ExecStopPost=/usr/sbin/exportfs -au (code=exited, status=0/SUCCESS)
  Process: 2039 ExecStop=/usr/sbin/rpc.nfsd 0 (code=exited, status=0/SUCCESS)
  Process: 2067 ExecStart=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssprox>
  Process: 2056 ExecStart=/usr/sbin/rpc.nfsd (code=exited, status=0/SUCCESS)
  Process: 2055 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
 Main PID: 2067 (code=exited, status=0/SUCCESS)

Aug 04 21:31:04 nfsserver.jk.com systemd[1]: Starting NFS server and services...
Aug 04 21:31:04 nfsserver.jk.com systemd[1]: Started NFS server and services.
[root@nfsserver nfsdata]#


Firewall requirements
Services to be allowed = rpc-bind, nfs

[root@nfsserver /]# firewall-cmd --permanent --add-service=rpc-bind
success
[root@nfsserver /]# firewall-cmd --permanent --add-service=nfs
success
[root@nfsserver /]# firewall-cmd --reload
success
[root@nfsserver /]# firewall-cmd --list-all
public
  target: default
  icmp-block-inversion: no
  interfaces:
  sources:
  services: cockpit dhcpv6-client mountd nfs rpc-bind ssh
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@nfsserver /]#

Create a new directory on the server that will be shared with the client.

[root@nfsserver /]# mkdir nfsdata
[root@nfsserver /]#

Add nfs export details to the exports file. 

[root@nfsserver /]# cat /etc/exports
/nfsdata        192.168.111.0/24(rw,no_root_squash,sync)
[root@nfsserver /]#

Export the mounts created on exports file.

[root@nfsserver /]# exportfs -rv
exporting 192.168.111.0/24:/nfsdata
[root@nfsserver /]#


Mount nfs directory on Client machine

Run showmount from the client machine to check that the exported directory is visible.

[root@nfsclient ~]# showmount -e 192.168.111.129
Export list for 192.168.111.129:
/nfsdata 192.168.111.0/24
[root@nfsclient ~]#


[root@nfsclient ~]# mount -t nfs 192.168.111.129:/nfsdata /mnt/data
[root@nfsclient ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
192.168.111.129:/nfsdata   37G  1.7G   36G   5% /mnt/data

Create a test file under the nfs mount in client
[root@nfsclient ~]# cd /mnt/data/
[root@nfsclient data]# touch testnew
[root@nfsclient data]# ls
testnew
[root@nfsclient data]# echo "This is a new test file" > testnew
[root@nfsclient data]# cat testnew
This is a new test file
[root@nfsclient data]#

Make mount point persistent over reboot.

To ensure the mount point comes up after a reboot, add it to fstab as below,

[root@nfsclient ~]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Sat Jul 31 05:53:32 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/cl_test-root /                       xfs     defaults        0 0
UUID=1e6b0524-90a5-49d8-aff0-466bac22a3b4 /boot                   xfs     defaults        0 0
/dev/mapper/cl_test-swap none                    swap    defaults        0 0
192.168.111.129:/nfsdata        /mnt/data       nfs     defaults        0 0
[root@nfsclient ~]#
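
As a side note, on some setups it helps to add the _netdev mount option to that line, which marks the filesystem as network-dependent so it is only mounted once the network is up:

192.168.111.129:/nfsdata        /mnt/data       nfs     defaults,_netdev        0 0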

Let's unmount and validate that it gets mounted automatically.

[root@nfsclient ~]# umount /mnt/data

Now issue mount -a to see if it mounts automatically.

[root@nfsclient ~]# mount -a
[root@nfsclient ~]# df -h
Filesystem                Size  Used Avail Use% Mounted on
devtmpfs                  877M     0  877M   0% /dev
tmpfs                     896M     0  896M   0% /dev/shm
tmpfs                     896M  8.7M  887M   1% /run
tmpfs                     896M     0  896M   0% /sys/fs/cgroup
/dev/mapper/cl_test-root   37G  1.7G   36G   5% /
/dev/sda1                1014M  195M  820M  20% /boot
tmpfs                     180M     0  180M   0% /run/user/0
192.168.111.129:/nfsdata   37G  1.7G   36G   5% /mnt/data
[root@nfsclient ~]#


Verify that the server shows the newly created file and its content.

[root@nfsserver nfsdata]# cat testnew
This is a new test file
[root@nfsserver nfsdata]# pwd
/nfsdata
[root@nfsserver nfsdata]#





Monday, 10 October 2016

Blocking USB Storage drives in Linux



Blocking usb stick

STEP 1: Create and edit a file "no-usb.conf" as below (on current systems, files in /etc/modprobe.d must end in .conf to be read).

#vi /etc/modprobe.d/no-usb.conf

STEP 2: Add the following line to the file created above.
install usb-storage /bin/true

STEP 3: Save and exit using ESC :wq

STEP 4: Reboot the computer and try inserting a USB mass storage drive.

USB blocked
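
To confirm the block is in place (a quick sketch):

lsmod | grep usb_storage --> should return nothing after plugging in a USB stick
modprobe usb-storage --> appears to succeed, but /bin/true runs instead of loading the module
lsmod | grep usb_storage --> still nothing, so USB storage stays blocked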


To unblock:

Remove the added line from the "no-usb.conf" file. The next time a USB storage drive is inserted, the module loads normally again (a reboot also works).



Operating System - OS



Every machine/piece of hardware is made user-friendly with the help of many codes/programs. These programs are collectively called software. The operating system is a collection of software without which most hardware/machines cannot operate.

There are many operating systems in use around the world. It is my perception that whenever there is technology growth/change, it is because of the development of new code/programs/software.

So far I have come across a few operating systems (OS). Many of us are most familiar with Windows - the so-called Microsoft Windows OS.

Microsoft OS versions.

MS-DOS
Windows 98
Windows XP
Windows Vista
Windows 7
Windows 8
Windows 8.1
Windows 10.

Unix

macOS (Mac OS X)
HP-UX
IBM AIX
Oracle Solaris

Linux

RedHat Enterprise Linux (RHEL)
Centos
Ubuntu/Kubuntu/Lubuntu
Fedora
Suse Enterprise Linux
Linux Mint
and several others

Mobile - OS

Android
IOS
Windows

An operating system may be defined in many other ways. To keep it very simple, we can understand the operating system as the layer that interprets our instructions into machine-understandable code so that the machine can operate effectively and efficiently.