Introduction to Ansible : Part 3

Hi there! It's been a long time since I posted a continuation of the Ansible series. If you have not read parts one and two, they can be found here.

Now let's quickly define our objectives.

  1. Remotely manage (start/stop) the JBoss server with the help of a custom systemd unit file, which is in turn invoked remotely through Ansible.
  2. Deploy a WAR file on a JBoss application server running on a remote machine.
  3. Undeploy a WAR file from the remote JBoss server.

As explained in the previous article, we can define static variables and defaults inside the roles/webserver/defaults/main.yml file. So let's define our default variables. Place these in that file (on the Ansible server machine):

name: test
Username: DBuser
Password: DBPass
driver: <drivername>
connection_url: <connection_url>
pool_name: <pool_name>
JNDI_HOME: <jndi_home>
filter: "<user-name>{{ Username }}</user-name>"
deployname: warfile.war
deploy_path: /path/to/deployment/scanner/path
DeploySrcTemp: /remotepath/to/deployment
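As a quick illustration of how such defaults get used (this is only a sketch, not one of the tasks we write below; the task and the remote_src option are my own example), any of these variables can be referenced elsewhere in the role with Jinja2 syntax:

- name: copy the war file into the deployment scanner directory
  copy:
    src: "{{ DeploySrcTemp }}"
    dest: "{{ deploy_path }}/{{ deployname }}"
    remote_src: yes

Here remote_src: yes tells the copy module that the source path already lives on the managed machine.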

Now let's create a systemd service to control the JBoss service. Let's name it JbossControl.service, with the following content. For a tutorial on systemd, refer to my previous article:

 

[Unit]
Description=JBoss Application Server control script
After=network.target

[Service]
Type=idle
ExecStart=/path/to/jboss/bin/standalone.sh
ExecStop=/path/to/jboss/bin/jboss-cli.sh --connect command=:shutdown
TimeoutStartSec=400
TimeoutStopSec=400

[Install]
WantedBy=multi-user.target

Save the file as roles/webserver/templates/JbossControl.service on the Ansible server machine.

We now have all the resources required for managing a JBoss application server remotely.

Now, on our Ansible server machine, we can update the roles/webserver/tasks/main.yml file with the following content.

---
- name: copy jboss service file to systemd folder
  copy:
    src: "/path/to/ansible/roles/webserver/templates/JbossControl.service"
    dest: /etc/systemd/system/JbossControl.service
  tags: [copyService]

- name: start JBoss service
  systemd:
    name: JbossControl
    state: started
  tags: [StartJboss]

- name: stop JBoss service
  systemd:
    name: JbossControl
    state: stopped
  tags: [StopJboss]
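One detail worth adding here (an optional extra task, not part of the original listing above): systemd only picks up a newly copied unit file after a daemon reload, and Ansible's systemd module can trigger that for us:

- name: reload systemd so it picks up the new unit file
  systemd:
    daemon_reload: yes
  tags: [copyService]

Placing it under the same copyService tag means the reload runs whenever the unit file is (re)copied.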

 

As we can see in the tasks file, we have used Ansible's systemd module to manage systemd services and targets, and the copy module to copy the unit file from the Ansible server machine to the remote system. Now we can create a play.yml in the project root directory with the following content:

- hosts: RemoteMachines
  sudo: yes
  roles:
    - webserver

Note that on newer Ansible versions, become: yes is the replacement for the deprecated sudo: yes keyword. Now we can invoke this playbook to start or stop JBoss on the remote machines as follows:

To start the JBoss application server on the remote machines defined in the RemoteMachines group of the inventory:

> ansible-playbook -i hosts play.yml --tags "copyService,StartJboss"

To stop the JBoss application server on those machines:

> ansible-playbook -i hosts play.yml --tags "StopJboss"

Here, the --tags option tells ansible-playbook which tasks from the role to run. If we do not specify any tags, every task in the role runs and returns its result.
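To see which tags a playbook exposes without running anything (a handy extra, not covered in the original text), ansible-playbook supports --list-tags:

> ansible-playbook play.yml --list-tags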

We have successfully completed the first objective of managing a remote JBoss application server! In the next tutorial we will see how to deploy and undeploy the WAR file remotely, which in turn involves managing data sources and configuration files. Stay tuned!

 

Introduction to Ansible : Part 2

In the previous post, we gave a basic introduction to Ansible and its advantages. In this post, let's build the skeleton of the project hierarchy we will follow. I will explain what each part of this structure does and how it benefits us.

Before we dive into the structure: Ansible uses YAML syntax inside its files, called playbooks (a playbook is simply a file with a set of tasks bundled together). If you are not comfortable with YAML syntax, or just want to skim through it, please refer to the Ansible documentation; it has the best explanation out there: http://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html

Now let's create our first basic Ansible playbook.

Create a file named test.yml inside the project folder and paste in this content:

- hosts: remoteMachines
  tasks:
    - shell: echo "Hi!"

Save the file. Before running it, append an ansible_user entry to each host under [remoteMachines] in the hosts file we created in the previous tutorial, like this:

192.168.1.112 ansible_user=sshuser1

192.168.1.113 ansible_user=sshuser2

This setting specifies the user that will be used to SSH into the remote host (by default, Ansible uses the current user of the Ansible server machine).

 

To run the playbook we just created, use the following command:

> ansible-playbook -i hosts test.yml

This runs echo "Hi!" on both remote machines (add -v if you want to see the command output in the playbook run). In the playbook we just created, hosts specifies the hosts we want to run the tasks on, and the single task uses the shell module to run the echo "Hi!" command.
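If we want the playbook output itself to print the echoed text (an optional variation, not required for the rest of the series), the result can be registered and shown with the debug module:

- hosts: remoteMachines
  tasks:
    - shell: echo "Hi!"
      register: greeting
    - debug:
        var: greeting.stdout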

Now let's assume a scenario in which we have many common tasks that need to run in multiple playbooks. To address this, we will follow a specific structure, and to reuse tasks spread across multiple playbooks we will use a feature called "roles".

Inside the project folder, create a folder called "roles". Inside the roles directory, create another folder named after the role we want to use (e.g. webserver).

Inside the webserver folder we will create these folders and files, giving the following hierarchy:

roles > webserver > defaults > main.yml

roles > webserver > tasks > main.yml

roles > webserver > handlers > main.yml

roles > webserver > templates

Inside the defaults folder, main.yml holds variables and their default values as "name: value" pairs. These are used extensively in playbooks and templates, and exist to increase reusability.

Inside the tasks folder, main.yml holds the set of tasks associated with the role.

Inside the handlers folder, main.yml holds the handlers associated with the tasks. Handlers are similar to tasks; the only difference is that handlers run only when notified by another task.
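For instance (a hypothetical illustration, not part of the role we build in this series), a task in tasks/main.yml can notify a handler defined in handlers/main.yml:

# tasks/main.yml
- name: copy apache config
  copy:
    src: httpd.conf
    dest: /etc/httpd/conf/httpd.conf
  notify: restart apache

# handlers/main.yml
- name: restart apache
  service:
    name: httpd
    state: restarted

The handler runs only if the copy task actually changed the file.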

Inside the templates folder, we keep template files: configuration files whose changing values are placeholders that get replaced, typically with values from the defaults folder. They are processed by the Jinja2 template language.
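As a rough sketch (the file name and variable here are made up purely for illustration), a template such as templates/app.conf.j2 might contain a placeholder:

listen_port={{ http_port }}

with a default in defaults/main.yml:

http_port: 8080

and get rendered on the remote machine by a task using the template module:

- name: render app config from template
  template:
    src: app.conf.j2
    dest: /etc/app/app.conf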

Now, to run this role, create a file named play.yml inside the project root folder, place the following content in it and save:

- hosts: remoteMachines
  roles:
    - webserver

As we can see, the different types of functionality are kept separate, and the concept of roles lets us reuse tasks across playbooks.

In the next tutorial, we will use this hierarchy to write a functional role and execute the playbook. Thank you!

 

Introduction to Ansible : Part 1

In one of my recent projects I had a specific task: automate JBoss application server deployments. Since the server was running on an Ubuntu machine, I thought of automating the whole process. I had several options and chose Ansible, as it satisfied all my requirements with little server administration overhead.

In this article we will look at the basics of Ansible; we will cover more advanced features in later tutorials.

What is Ansible?

Ansible is an infrastructure automation tool, widely used for automating the day-to-day management of remote servers.

Why Ansible?

It manages multiple remote Linux hosts over SSH, in parallel.

It also supports Windows environments using PowerShell (over WinRM).

It is easy to use and works out of the box, with no extra tools to install on the managed (slave) machines; it relies on SSH for control.

It uses the concept of desired state: the current state is compared with the desired state of the task at hand, and if they already match, the task reports the existing state without making any change.
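As a quick illustration of this idempotence (an ad-hoc example using the module, inventory file and group names that are introduced later in this article), creating a directory twice only changes the system once:

> ansible remoteMachines -i hosts -m file -a "path=/tmp/demo state=directory"

The first run reports "changed": true; running the exact same command again reports "changed": false, because the directory already matches the desired state.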

You can read more about Ansible at https://www.ansible.com

The main goal of this tutorial: have a centralized server with Ansible installed that controls remote (slave) machines to run or accomplish specific tasks.

Let's now set up Ansible on the server machine.

How to install Ansible?

You can install Ansible as a Python pip package.

> pip install ansible

Once done, create a test project folder and, inside it, create an inventory file named "hosts". This file defines the various groups of machines we would like to manage remotely (use your favourite text editor and save it with the content below).

Content of the hosts file:

[myMachine]

127.0.0.1

[remoteMachines]

192.168.1.112

192.168.1.113

As we can see, there are two groups, labelled myMachine and remoteMachines, for administering different sets of machines: myMachine controls my local machine, whereas remoteMachines controls the other machines in the network.

Note: as a prerequisite, the server machine should be configured for passwordless (key-based) SSH to the slave machines, to avoid specifying credentials at runtime.

Now let's run a simple task against myMachine (that is, localhost):

> ansible -i hosts myMachine --connection=local -m ping

 

Here, the -i parameter specifies the inventory file name, in this case "hosts". myMachine is the group label of the machines the task should run against; every machine under that label runs the task. --connection=local tells Ansible to run the command on localhost rather than over SSH. The -m argument specifies the module to use; here we are using the ping module (we will see more modules in upcoming tutorials).

A typical response should be:

127.0.0.1 | success >> {
    "changed": false,
    "ping": "pong"
}
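Other modules can be run ad hoc in the same way (an optional extra example using the same inventory); the -a argument passes arguments to the module:

> ansible -i hosts myMachine --connection=local -m shell -a "uptime"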

If you have completed this successfully, congratulations! You have run your first Ansible task. In upcoming tutorials, we will look at more granular ways of running tasks using the concept of roles.

 

Introduction to iptables and configuring Linux firewall rules

In this article, we will see an introduction to iptables and how to write basic packet-filtering rules, along with a glimpse of how iptables interacts with netfilter hooks to carry out its functionality.

iptables is a Linux firewall administration utility that handles packet filtering and NAT (Network Address Translation).

iptables acts as an interface for the user to carry out packet filtering, packet mangling and NAT. Its operation relies on netfilter hooks: 'netfilter' is essentially a series of hooks at various points in the kernel network protocol stack. Whenever a packet arrives at (or leaves) an interface, kernel modules that have registered at these hooks are triggered according to the priority they specified.

There are five hooks in total that a program can register with to manipulate or inspect a packet:

  1. NF_IP_PRE_ROUTING: triggered as soon as a packet enters the network stack, before any routing decision about its destination has been made.
  2. NF_IP_FORWARD: triggered only when the routing decision says the destination is not the local system and the packet is to be forwarded elsewhere.
  3. NF_IP_LOCAL_IN: triggered when the local system is the destination.
  4. NF_IP_LOCAL_OUT: triggered when the packet originates from the local system and is about to leave through a network interface.
  5. NF_IP_POST_ROUTING: triggered after the forward or local-out hooks, just before the packet leaves the network interface.

Note: each packet entering or leaving a network interface is stored as an instance of the sk_buff structure, and the hooks operate on this sk_buff structure when filtering packets.

iptables organizes its rules into tables defined by functionality. These tables consist of chains, and the chains in turn consist of rules to be matched.

Different tables:

  1. Filter (default)
  2. NAT
  3. Mangle
  4. Raw
  5. security

Different chains :

INPUT, OUTPUT, FORWARD, PREROUTING, POSTROUTING.

As mentioned earlier, the chains of these tables are evaluated at the five hooks, in sequence according to the priorities defined for them.
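For reference (a summary of the relationship described above), each built-in chain corresponds to one hook:

PREROUTING  -> NF_IP_PRE_ROUTING
INPUT       -> NF_IP_LOCAL_IN
FORWARD     -> NF_IP_FORWARD
OUTPUT      -> NF_IP_LOCAL_OUT
POSTROUTING -> NF_IP_POST_ROUTING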

Now let's focus on defining rules for the filter table. This table has three chains: INPUT, OUTPUT and FORWARD.

Each of these chains can have rules that are applied to the packets in question. If none of the rules match a packet, the default policy is applied. You can set the default policy of each chain as follows:

iptables -t filter -P INPUT DROP

iptables -t filter -P OUTPUT ACCEPT

iptables -t filter -P FORWARD DROP

 

> The first command sets the default policy (-P) of the INPUT chain to DROP, so a packet is dropped by default if it does not match any rule in the chain.

> The second command sets the default policy of the OUTPUT chain to ACCEPT, so a packet is accepted by default if it does not match any rule in the chain.

> The third command sets the default policy of the FORWARD chain to DROP. Note that a chain policy can only be ACCEPT or DROP; REJECT (which sends an error response back) is only valid as the target of an individual rule.
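If we want forwarded packets that match no rule to be actively rejected rather than silently dropped, we can instead append a catch-all rule at the end of the chain (a standard pattern, shown here as an aside):

iptables -t filter -A FORWARD -j REJECT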

Now we can append custom rules to each of the chains.

ex 1: iptables -t filter -A INPUT -p tcp -s barriersec.com --dport 80 -j ACCEPT

Here, this rule says: in the filter table (-t), append (-A) a rule to the INPUT chain for the TCP protocol (-p), matching packets whose source (-s) is barriersec.com and whose destination port (--dport) is 80, and jump (-j) to the ACCEPT target, i.e. accept the packet.

ex 2: iptables -t filter -A OUTPUT -p tcp -d barriersec.com --sport 80 -j DROP

Here, this rule says: in the filter table (-t), append (-A) a rule to the OUTPUT chain for the TCP protocol (-p), matching packets whose destination (-d) is barriersec.com and whose source port (--sport) is 80, and jump (-j) to the DROP target, i.e. drop the packet.

We can view the list of rules in each table using:

iptables -t [table name] -L
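It is often useful (an extra tip beyond the listing above) to show rule positions and delete a rule by its number:

iptables -t filter -L INPUT -n --line-numbers
iptables -t filter -D INPUT 2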

We can make use of conntrack to track the state of each TCP/UDP connection.

ex: iptables -A INPUT -p tcp --dport 80 -s barriersec.com -m conntrack --ctstate NEW -j ACCEPT

Here, we are using the conntrack module to accept the packet only if the connection state is NEW (the first packet of a new connection).

Furthermore, this feature helps decide which packet belongs to which session/connection.

The only downside is that as the volume of traffic grows, the cost of maintaining connection state becomes very high.

We can also enable logging and use it in a specific rule, as shown below:

iptables -A INPUT -s barriersec.com -j LOG --log-level 4 --log-prefix "/Admin"

This effectively enables logging for the incoming connections matching this rule in the chain.
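The LOG target writes matching packets to the kernel log, so (assuming a typical systemd-based distribution) the entries can be inspected with:

journalctl -k | grep "/Admin"

or, equivalently, with dmesg.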

 

For detailed information about the netfilter architecture, visit https://www.netfilter.org/

Introduction to Initialization systems and creating your first Systemd service

In this post, we will see what an initialization system is, and finally we will create a service for one of the most popular modern initialization systems, systemd.

What is an Initialization system?

The init system daemon is the first process spawned by the kernel when a server boots. It carries out major initialization and management of services that are crucial for normal operation, and it manages services across different runlevels: for example, spawning a getty login shell on startup. There can be custom services too, such as starting the 'ssh' service on startup.

Note: the initialization system has process ID = 1 and parent process ID = 0.

There are many initialization systems available. A few of the popular ones are SysVinit, Upstart and systemd.

SysVinit is one of the simplest init systems; it mainly handles static events, such as loading all services once the system boots. It effectively lacks handling of dynamic events such as pluggable USB devices, hotplug events and external drives. SysVinit manages the services required for the default runlevel specified (typically runlevel 5, i.e. graphical mode). The scripts inside the folders /etc/rc.d/rcX.d, where X = 0..6, specify which service scripts run for a particular runlevel at startup.

Note: you can change the runlevel of a running system using 'init <runlevel>'.

Example: 'init 0' effectively halts (shuts down) the system.

Upstart is another initialization system, designed to handle dynamic events instead of a predefined sequence of service activation. It uses the concept of jobs, which initialize and monitor services.

systemd is the most recent replacement for the older init daemons, and the most complex of them all. Its specialty is that it can start services in parallel. Above all, systemd manages 'units'.

There are several types of unit files; here we focus mainly on two of them: service unit files and target unit files.

Service unit files are used to manage individual services, while target unit files group collections of units together and collectively take the place of runlevels.
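To see which targets exist on a running system, and which units a particular target pulls in (standard systemctl commands, added here as an aside):

systemctl list-units --type=target
systemctl list-dependencies multi-user.target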

Unit files shipped by the system and its packages reside under the '/lib/systemd/system' directory, whereas custom user-created unit files reside under the '/etc/systemd/system' directory.

Main service/target handling commands :

Start a service / target file :  systemctl start unitname.service
or systemctl start unitname.target

ex : systemctl start apache2.service

Stop a service / target file :  systemctl stop unitname.service
or systemctl stop unitname.target

ex :  systemctl stop apache2.service

Status of a service / target file :  systemctl status unitfile.service

ex : systemctl status apache2.service

Restart a service unit file: systemctl restart unitfile.service

ex : systemctl restart apache2.service

Enable a service on start-up: systemctl enable unitfile.service

ex : systemctl enable ssh.service

Disable a service on start-up: systemctl disable unitfile.service

ex : systemctl disable ssh.service

Note: the .service extension can be omitted; it is assumed by default.

Now let's create a custom service unit file and walk through the procedure needed to enable it.

  1. Create a file with the .service extension (here, myservice.service) with the following content.

[Unit]
Description=Trial service

[Service]
Type=oneshot
ExecStart=/bin/sh -c "uptime"
ExecStop=/bin/sh -c "uname -a"
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

 

The content above creates a unit with the required unit description and the service attributes. 'Type' specifies whether the service is oneshot, forking, simple, notify or idle; here we have used oneshot, which means the service executes an action and exits immediately, so we also set 'RemainAfterExit' so that the unit is still considered active after the command finishes. 'ExecStart' holds the command to execute when the unit is started, and 'ExecStop' holds the command to execute when the unit is stopped. Finally, the WantedBy attribute specifies the target that should pull in this service when it is enabled.

2. Move the myservice.service unit file to the /etc/systemd/system directory, and finally start the service:

systemctl start myservice.service
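If systemd has not yet picked up the newly added file, running systemctl daemon-reload first makes it visible (a standard step, not spelled out in the original walkthrough). The state of the unit and the output of its commands, as captured in the screenshots below, can be checked with:

systemctl status myservice.service
journalctl -u myservice.service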

The following screenshots capture the status of the service once started and then stopped.

img (a): status after starting the service. In the systemctl/journal logs we can see that the command defined in the ExecStart attribute executed successfully and its output is displayed in the logs.

 

img (b): status after stopping the service. In the systemctl/journal logs we can see that the command defined in the ExecStop attribute executed successfully and its output is displayed in the logs.

We have successfully created a simple service file and started it. If we want it to start at boot, we can use the enable command described above; this effectively creates a symbolic link with the same service name in the multi-user.target.wants directory.

Note: we can also make use of other attributes such as ExecStartPre, ExecStopPost, etc.

Detailed documentation can be found on the systemd official website: https://www.freedesktop.org/wiki/Software/systemd/