Linux processes

A process is a running program within the operating system. A process contains code that can be interpreted by the system processor. Because Linux and Windows use different executable formats, a program that runs on one OS cannot run on the other. A file stored on the hard drive must first be loaded into RAM before its instructions can be executed. The kernel is responsible for managing system resources and for giving the processes loaded into RAM control over the CPU. We've previously discussed the Linux boot process, so please check out that article before proceeding further. Modern operating systems are multi-tasking, which means they can run multiple programs at the same time. The kernel maintains a process table in which it stores information for each process, such as:

  • PID or process ID – a number that uniquely identifies a process; its upper limit is set by the kernel (/proc/sys/kernel/pid_max, 32768 by default).
  • memory space – the kernel is responsible for dividing RAM between processes so they do not interfere with each other.
  • UID, GID – the user and group IDs determine the permissions with which a process is executed.
  • the parent process
  • the terminal to which the process is connected.
A process can spawn other processes within the system; the spawning process is called the parent process and the spawned processes are known as subprocesses or child processes. If you don't know by now, in Linux the init process is the first process executed when the OS boots: it always has PID 1 and all other processes are spawned under it. When a process finishes its execution, all resources that were allocated to it are returned to the resource pool and the exit code of the process is recorded in the process table. If a process finished its operation successfully, the process table stores the value 0. Once the parent process reads and acknowledges the status code, the subprocess entry is deleted from the table.
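As a quick illustration of exit codes, here is a minimal shell example (any command works; ls is used for convenience):

# A successful command sets the exit code to 0
ls /tmp
echo $?            # prints 0

# A failing command sets a non-zero exit code (GNU ls uses 2 for serious errors)
ls /nonexistent-directory
echo $?            # prints 2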

When the parent process terminates before the child process finishes its execution, the subprocess becomes an orphan process and is adopted by init, which reaps it when it finishes its execution.
A process can be in one of the following states:
Running – the process is running or is waiting to be scheduled on the CPU
Waiting – the process is waiting for an event to occur or for certain resources to become available. There are two types of waiting processes:

  1. interruptible – may be interrupted by signals
  2. uninterruptible – cannot be interrupted and usually depend on hardware conditions (typically waiting for I/O)

Stopped – the process has been stopped
Zombie – a process that has finished its execution but has not yet been removed from the process table because its parent has not read its exit status.
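A quick way to spot zombie processes (a minimal sketch using standard ps options):

# List processes whose state column starts with Z (zombie / defunct)
ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/'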

Commands used to interact with System processes:

Use the ps -el command to view the currently running processes:

View Linux running processes

PID – process ID
S – State
PPID – Parent process ID
TIME – how long the process has used the CPU
COMMAND – command executed by the process
There are multiple options that can be used with the ps command. Check its man page or use the --help parameter to view the available options. I also use the ps -aux command often.

The ps command supports a lot of parameters that can help you when trying to get process information on a Linux operating system. I personally prefer using the -faux parameters because they display a tree-like structure with full process information:

ps -faux

Linux process information

Another useful command that can be used to gather Linux process information is the top command. You can use top to view running processes, kill a specific process, view the resources used by each process, sort processes by specific criteria (CPU, memory, PID, etc.) and much more. I think top is one of the best commands you can use to view overall system usage, and it's available by default in most Linux distributions.

gather Linux process information
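top can also be run non-interactively, which is useful for scripts or for capturing a quick snapshot of system usage; a minimal sketch (the -o sort option assumes a recent procps-ng version of top):

# Batch mode, a single iteration, keep only the first 15 lines of output
top -b -n 1 | head -n 15

# Same snapshot, but sorted by memory usage instead of CPU
top -b -n 1 -o %MEM | head -n 15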

 

htop and atop are two enhanced versions of the top command. They are not available by default on most distributions, but you can easily install them by using the following command: yum install atop htop. I like htop a lot because it offers an interactive interface that can easily be used to manipulate processes:

htop command 

lsof (list open files) is a command that's useful when you want to view the files opened by a certain process. lsof can be used with the -p parameter to list the files opened by a specific PID:

List opened files in Linux
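For example, assuming an sshd process is running, you could list its open files like this (the PID below is hypothetical):

# Find the PID of the process by name
pidof -s sshd            # e.g. prints 1234 (hypothetical PID)

# List all files opened by that PID
lsof -p 1234

# Or combine the two steps
lsof -p "$(pidof -s sshd)"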

pstree is used to view a structured tree of all the processes running on the OS (not to be confused with the tree command, which displays directory structures). I prefer using pstree with the -Ap options:
pstree -Ap

pstree command in Linux

If you are using a Linux OS on a daily basis then you are probably familiar with most of the information presented in this article. I've written it for those that are just beginning their Linux journey and want to learn new things about it. Processes play an important role within an OS, so knowing how to interact with them is a must-have skill. There may be other information that needs to be mentioned here, so please post a comment if you have anything to add. I wish you all the best and stay tuned for the following articles.

Tuning and troubleshooting file systems

In this article I'll show you how to troubleshoot and find useful information about your server's file system. There are a lot of useful commands available on Linux distributions, and some may have more capabilities than the ones presented in this article. Also note that I'm not able to cover every command's parameters and features, so I'll just give an overview of the utilities available on Linux distributions. For this example I'll be using a CentOS machine.

Getting information about the file system:

The lsblk command can be used to display block devices; it offers a tree-like structure that can be easily interpreted. With this command you can find all sorts of useful information such as disks, partitions, sizes, device types and so on:

lsblk

List block devices in Linux

Another useful command is blkid, which can be used to view the UUID and the file system type of the system's block devices:


list Linux block devices ID

To find out detailed information about a specific partition, use the dumpe2fs command. It can be used with the -h parameter to display only the superblock information:

dumpe2fs -h /dev/sda1


Get partition information on Linux machines

With the tune2fs command you can change many of the parameters displayed by dumpe2fs. There are a lot of parameters supported by this tool, and I'll let you explore its man page (man tune2fs). Note that it's not very common to have to change file system parameters because the default values work in most scenarios. I recommend playing with these parameters only if you know what you're doing, or in a testing environment.
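As a small illustration (on a test file system only), tune2fs can adjust tunables such as the maximum mount count between checks or the percentage of blocks reserved for root; the device name below is just an example:

# Force a file system check after every 30 mounts
tune2fs -c 30 /dev/sdb1

# Reduce the blocks reserved for the root user from the default 5% to 1%
tune2fs -m 1 /dev/sdb1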

Another useful tool for troubleshooting file system performance is debugfs. Note that it's best not to use this tool on a mounted file system:

[root@localhost /boot/grub]# debugfs /dev/sda2
debugfs 1.41.12 (17-May-2010)
debugfs:

Type help to view the available commands within the debugfs utility. You can get output similar to dumpe2fs by typing stats or show_super_stats. Note that debugfs can be used to inspect and change the state of ext2, ext3 and ext4 file systems.

Checking inode information:

Inode information can be viewed by using the df command with the -ih parameters:


Get inode information on Linux
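For reference, the command behind the output above is simply:

# Show inode usage (-i) in human-readable form (-h) for all mounted file systems
df -ih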

 

The debugfs command supports multiple options with which you can check and modify inode information. With lsdel you can list deleted inodes, which you can later restore if needed:

lsdel

Check deleted inodes in Linux

Then you can execute undelete inode_number file to undelete an inode. This will actually restore the file if its respective inode is found.
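A minimal sketch of the recovery workflow (the device and inode number below are just examples; the file system should ideally be unmounted first):

# Open the file system read-write with debugfs (example device)
debugfs -w /dev/sda1

# At the debugfs prompt:
#   lsdel                          <- list deleted inodes
#   undel <12345> recovered_file   <- restore inode 12345 and link it to a file name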

You can dump all deleted inodes to a file much faster by using the following command:

echo lsdel | debugfs /dev/sda1 > deletedinodes

With the logdump -i inode_number command you can view the information related to a specific inode. This option dumps the file system journal entries that refer to that inode.

The dump_inode and cat commands can be used within debugfs to dump an inode's contents to a file or to standard output.
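For instance, inside a debugfs session (the inode number and output path are just examples):

# At the debugfs prompt:
#   dump <12> /tmp/recovered_file   <- write the contents of inode 12 to a file
#   cat <12>                        <- print the contents of inode 12 to standard output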

Note that the commands presented above work only with ext2, ext3 and ext4 file systems. If you are using another file system such as XFS, you have to use its dedicated tools to perform troubleshooting operations.

 

Running Docker containers

In this short article I'll discuss how you can run containers with Docker.

Docker offers an easy way to search and run container images on your servers. You can simply execute docker search name to search for a particular application, just like in the following example: docker search httpd


docker apache image

As you can see from the image above, we’ve searched for apache images that are available on the docker hub website. On this website you can find all sorts of images that are built by different contributors to the docker community. You can create an account yourself and build/upload/download docker images from the hub. For this article I’m going to use images directly from the hub.

For the following example I'm going to search for a centos image which I'm going to run on my current host. Once you've executed docker search centos, you can run the following command to launch the container: docker run docker.io/centos

Because we've simply used the run command, the docker image will be downloaded but no actual output will be shown. With the run command we can also append a command to be executed inside the container, just like in the following example:

docker run docker.io/centos ls -al /


docker centos image

This is the container's file system; as you can see, it's completely isolated from your machine's file system. Whatever docker image you run, that container will only execute in your open terminal, so it will close if you exit the terminal or end the process. Docker also supports a detached mode, in which containers run in the background. This can easily be achieved by executing the docker run command with the -d parameter:

docker run -d docker.io/centos

I've not chosen the best image for this example because the container will execute and then exit immediately. In this case I can run the following command to make sure that my container will not end its process once it's started: docker run -d docker.io/centos sleep infinity

You can visualize the running containers by executing the docker ps command:


docker containers

Note that a unique container ID has been automatically assigned to our newly created container so any future interaction with the container will be made by referencing its ID.

As you can see, because I haven't specified a name, Docker has automatically assigned a randomly generated name to the container. We can change this behavior and start a container with a custom name. For now let's stop our container by running docker stop ID, just like in the following example: docker stop 424c8d3a61ce

We can verify again that the container has been stopped:


how to check docker processes

To assign a custom name to a container, use the docker run command with the --name parameter: docker run -d --name ittrainingday docker.io/centos sleep infinity

This is very useful if you are running multiple containers from the same docker image so make sure you assign a unique name to each container instance.

You can start/stop/pause/restart docker containers whenever needed simply by using the docker action container_name_or_id command. Note that the docker command supports multiple actions and parameters, so make sure to check its man page or, much easier, use docker --help.
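For example, using the container name assigned earlier:

# Stop, start and restart the container by name instead of by ID
docker stop ittrainingday
docker start ittrainingday
docker restart ittrainingday

# List all containers, including stopped ones
docker ps -a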

There are many other features available in Docker; I'll try to cover some of them in future articles. Since I'm still at the beginning with this technology, any input from you is more than helpful. So please post any comments/questions in the dedicated section and I'll try to respond as soon as possible. I wish you all the best!

Getting Started with Docker

Hello,

In this article I'll show you how to install Docker on a CentOS 7 machine and make it ready for your future container deployments. If you are not familiar with containers, you should know that this technology allows you to create multiple user-space instances on a Linux machine. Simply put, you can run multiple applications that each have their own environment in terms of processes, user space and file system (hence the "container" name). Each container is isolated from the others. The only thing that's shared between containers is the Linux kernel, since containers run on top of the OS. Containers have a lower resource footprint than virtual machines, so many containers can run on a single server. By comparison, with VMs you need to spawn multiple guests that each have their own operating system on top of which you run applications. Since each container is isolated, you can run multiple applications that listen on the same port on a single server.

To install Docker on CentOS 7, run the yum install docker command:


how to install docker

You can verify the status of the docker service by typing systemctl status docker.service:


verify docker service status

As you can see, the docker service is stopped right now, so we need to start it by typing systemctl start docker.service. You can then check the status of the service again to make sure it has started successfully:


docker on centos7

What’s left to do is to enable the automatic startup of the docker service. To achieve this result use the systemctl enable docker command:


how to enable docker service

As you can see from the image, a symlink is created in the multi-user.target.wants directory pointing to the docker service unit file. Because our machine runs in multi-user mode by default, the docker service is added to this location. You can verify the default target by typing systemctl get-default or by checking where the /etc/systemd/system/default.target symlink points:


system default target
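A quick way to confirm both the symlink and the default target from the command line (the exact paths may vary slightly between distributions):

# Confirm that docker.service is wanted by the multi-user target
ls -l /etc/systemd/system/multi-user.target.wants/docker.service

# Display the default target
systemctl get-default
ls -l /etc/systemd/system/default.target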

Now execute docker run hello-world to make sure that the installation has been completed successfully:


how to run docker containers

To view system-wide docker information type docker info. You can also verify docker version with the docker version command:


how to verify docker version

That's about it for this first Docker article. Once you have everything installed and configured, you can proceed further with Docker containers and images. Stay tuned for the following articles from IT training day.

FTP authentication using MySQL backend

We've learned by now how to install and configure an FTP server using pure-ftpd. We've created a local username and managed to log in to our FTP server. In this article we will extend the authentication setup by adding a MySQL back-end. I will not focus on installing and configuring the FTP server because that part has already been covered in the previous article. We will start directly by installing and configuring our MySQL server and then proceed with the configuration of our authentication mechanism.
If you are using the official CentOS repository, type yum install mysql mysql-server and wait for the installation to complete:

Now we’ll need to configure the local firewall to allow MySQL port (3306) on incoming and outgoing connections:
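A minimal sketch of the rules (these use iptables directly, as on CentOS 6; adjust them to your environment):

# Allow incoming connections to the MySQL port
iptables -A INPUT -p tcp --dport 3306 -j ACCEPT

# Allow the corresponding outgoing traffic
iptables -A OUTPUT -p tcp --sport 3306 -j ACCEPT

# Persist the rules across reboots
service iptables save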
You can verify if the rules were created successfully by typing iptables -L:
We can now start mysql daemon by typing /etc/init.d/mysqld start
The mysqld service must start automatically each time the server is restarted; type chkconfig mysqld on to set the startup mode to automatic on the default runlevels:
Execute /usr/bin/mysql_secure_installation and follow all instructions in the wizard. The settings configured here will secure your MySQL server:
Now that a password has been set for the root user, type mysql -u root -p and press Enter. You will be prompted to type the root password:
You can now execute: SELECT User, Host, Password FROM mysql.user; and view all users within your MySQL server:

We will create a new database for our FTP server and then we’ll set permissions for a newly created user to the database. Type CREATE DATABASE ftpserver; to create the database and type show databases; afterwards to view the newly created database:

To create our database username, type the following command:
INSERT INTO mysql.user (User,Host,Password) VALUES('ftpuser','localhost',PASSWORD('1qaz@WSX'));

Once you’ve created the user type FLUSH PRIVILEGES;
The permissions on our ftpserver database can be added using the following command:

GRANT ALL PRIVILEGES ON ftpserver.* TO ftpuser@localhost; Permissions can be viewed by typing SHOW GRANTS FOR ftpuser@localhost;

Execute FLUSH PRIVILEGES; again. This command has the following role (from MySQL.com):
  • "PRIVILEGES
    Reloads the privileges from the grant tables in the mysql database.
    The server caches information in memory as a result of GRANT and CREATE USER statements. This memory is not released by the corresponding REVOKE and DROP USER statements, so for a server that executes many instances of the statements that cause caching, there will be an increase in memory use. This cached memory can be freed with FLUSH PRIVILEGES.”
We'll need to create the tables for our database. To select the database, type USE ftpserver; Now we'll need to populate our database using the following commands (taken from the pure-ftpd website):

CREATE TABLE users (
  User VARCHAR(16) BINARY NOT NULL,
  Password VARCHAR(64) BINARY NOT NULL,
  Uid INT(11) NOT NULL default '-1',
  Gid INT(11) NOT NULL default '-1',
  Dir VARCHAR(128) BINARY NOT NULL,
  PRIMARY KEY  (User)
);

You can verify that the fields were created successfully by typing describe users;

Navigate to /etc/pure-ftpd and open pureftpd-mysql.conf using a text editor. You will need to make sure the following directives are set in the configuration file:
#MYSQLServer     127.0.0.1
#MYSQLPort       3306
MYSQLSocket     /tmp/mysql.sock
MYSQLUser       ftpuser
MYSQLPassword   1qaz@WSX
MYSQLDatabase   ftpserver
MYSQLCrypt      md5
MYSQLGetPW      SELECT Password FROM users WHERE User="\L"
MYSQLGetUID     SELECT Uid FROM users WHERE User="\L"
MYSQLGetGID     SELECT Gid FROM users WHERE User="\L"
MYSQLGetDir     SELECT Dir FROM users WHERE User="\L"
We'll need to add an FTP user to our database by executing the following command:
INSERT INTO `users` (`User`, `Password`, `Uid`, `Gid`, `Dir`) VALUES ('danftp', md5('1qaz@WSX'), '1002', '1003', '/home/danftp');
To verify that the user was created successfully, type SELECT * FROM users;
The MySQL configuration is done; now we need to modify the pure-ftpd configuration file. Navigate to /etc/pure-ftpd and open pure-ftpd.conf with vim:
Add the following line MySQLConfigFile               /etc/pure-ftpd/pureftpd-mysql.conf
and comment out the line: # UnixAuthentication            yes
Now we just need to restart the FTP server daemon by typing /etc/init.d/pure-ftpd restart and we should be able to connect using our MySQL user:
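A quick way to test the new authentication from the server itself (assuming the ftp client package is installed):

# Connect to the local FTP server and log in with the MySQL-backed account
ftp localhost
# Name: danftp
# Password: 1qaz@WSX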

We've successfully configured our FTP server to support MySQL back-end authentication. If anything in this article is unclear, please leave a comment and I will respond as soon as possible. Don't forget to enjoy your day and stay tuned for the following articles from IT training day.

Extracting data from RPM packages

You can download RPM packages locally using yum's downloadonly plugin. Use the following command to install this feature:

yum install yum-plugin-downloadonly

The package can then be downloaded using the following command:

yum install --downloadonly --downloaddir=/root/downloads python

Another method of downloading RPM packages locally is the yumdownloader command; you'll need to install the yum-utils package to enable it:

yum install yum-utils

Then you can use yumdownloader with the --destdir option to download an RPM package to a desired location:

yumdownloader python --destdir /root/downloads/

Remember that if an RPM package has several dependencies, you can download them as well by using the yumdownloader command with the --resolve option, just like in the following example:

yumdownloader mysql-server --destdir /root/downloads/ --resolve

I've downloaded quite a few RPM packages because I've pulled in all of the mysql-server dependencies. For training purposes I'll clean up all RPM packages except the python one:

find /root/downloads/ -type f -not -name "*python*" | xargs rm -f

Now that we've managed to download an RPM package to our workstation, it's time to extract its cpio archive by using the rpm2cpio command:

rpm2cpio python-2.6.6-64.el6.x86_64.rpm > python-2.6.6-64.el6.x86_64.cpio

Once the .cpio archive has been created, use the cpio command to extract its content:

cpio -i --make-directories < python-2.6.6-64.el6.x86_64.cpio


cpio command in Linux
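As a side note, the two steps can be combined into a single pipeline, which avoids creating the intermediate .cpio file:

# Extract the RPM payload directly: -i extract, -d create directories,
# -m preserve modification times, -v list the files as they are extracted
rpm2cpio python-2.6.6-64.el6.x86_64.rpm | cpio -idmv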

Once the archive has been extracted, use the tree command to view its structure: tree -d usr/


tree command in Linux

More about Linux Firewall

A firewall is a system responsible for filtering the network packets that pass through it. Simply put, a firewall is used to block incoming/outgoing packets that could harm network resources. Generally, there are three criteria by which a firewall can filter network packets:

  • source and/or destination IP address
  • source and/or destination port
  • network interface on which packets are received
A firewall can filter packets in three situations, as follows:
  • when it receives packets
  • when it sends packets
  • when it routes packets to other destinations
In Linux, these three situations are handled by separate rule lists known as chains. The most common firewall software found on a Linux machine is iptables, which is installed by default on CentOS systems.
The firewall chains are: INPUT, FORWARD and OUTPUT. You can visualize them by typing iptables -L
A firewall can either accept or reject a packet; this behavior is determined by the rule's target (action). Each chain has a default policy, which can be seen in the picture above; by default, all chains ACCEPT packets. You can change the policy of a chain by typing the following command:
iptables -P INPUT DROP 
This command sets the chain's default policy to DROP all incoming packets.
If you want to block incoming packets from a certain host, type the following:
iptables -A INPUT -s 10.10.1.5 -j DROP
All incoming packets from the above host will be dropped.
Now let’s say you want to block only port 25 from this machine. To achieve this result, type the following:
iptables -A INPUT -s 10.10.1.5 -p TCP --destination-port 25 -j DROP
For testing purposes let's also add a rule which blocks all traffic from the 192.168.1.0/24 network, and rules that block port 53, both UDP and TCP, for the 10.20.0.0/16 network:
iptables -A INPUT -s 192.168.1.0/24 -j DROP
iptables -A INPUT -s 10.20.0.0/16 -p tcp --dport 53 -j DROP
iptables -A INPUT -s 10.20.0.0/16 -p udp --dport 53 -j DROP
You can also block certain traffic for specific destination machines just like in the following example:
iptables -A INPUT -s 172.16.5.10 -d 10.10.5.8 -p tcp --destination-port 22 -j DROP
My INPUT chain looks like this now:
Note that the kernel reads each firewall chain from top to bottom. If a rule matches a packet, the firewall applies that rule's target and stops processing the rest of the chain. You should always have a DROP rule at the bottom of each chain so that, if no rule above it matches, the default behavior is to block the traffic. This way you ensure that only trusted traffic is accepted by the firewall, which is also why I suggest setting the default policy of each chain to DROP.
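A common pattern that follows this advice is to accept return traffic for connections that are already established and to finish the chain with an explicit DROP; a minimal sketch:

# Accept packets belonging to connections that were already allowed
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Drop everything that did not match an earlier rule
iptables -A INPUT -j DROP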
To delete a certain rule type iptables -D INPUT rule_number
To insert a rule on top of a chain type:  iptables -I INPUT -s 10.20.0.0/16 -j ACCEPT
You can also specify a certain insertion position by typing: iptables -I INPUT 3 -s 10.20.0.0/16 -j ACCEPT

It may get a bit tricky to edit firewall rules if you have many entries in each chain, which is why you should use iptables -nL --line-numbers to visualize the rules. This command prints a number at the beginning of each rule, making it easier to add, remove or insert rules:

I'll delete rules 6, 7 and 8 by typing the following command (the rules are deleted from the highest number to the lowest because rule numbers shift up after each deletion):

for i in {8..6}; do iptables -D INPUT $i; done

 
We've discussed firewall filtering based on source and destination IP addresses and on source and destination port numbers, but there are many more options and features available with iptables. I recommend reading the man page for this program because you may discover more interesting things about it that can be very helpful.