Enable OpsCenter User Login


Enable OpsCenter web user authentication by adding the following lines to /etc/opscenter/opscenterd.conf:

    [authentication]
    enabled = True

Restart the opscenterd service afterwards:

    service opscenterd restart

The default admin username is admin, and the default password is admin. Log in to the OpsCenter web UI at:

    http://opscenter-host:8888/
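If you prefer to script the change, here is a minimal sketch (assuming the default config path above) that appends the block and restarts OpsCenter:

    # Append the [authentication] block and restart OpsCenter (config path assumed to be the default)
    sudo sh -c 'printf "[authentication]\nenabled = True\n" >> /etc/opscenter/opscenterd.conf'
    sudo service opscenterd restart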
   



Understanding Amazon EBS



Amazon EBS (Elastic Block Store)
================================
                EBS is a virtual hard drive that you can attach to and detach from an EC2 instance, much like plugging a hard disk in and out of a physical system. Behind the scenes, an EBS volume lives on dedicated, isolated raw storage that is connected to the EC2 instance over the network. Amazon bills EBS based on provisioned storage and I/O operations. Its main advantage is high availability; one of its limitations is that a volume can only be attached to an instance in the same Availability Zone.
                         
                We can take a snapshot of an EBS volume, and the snapshot is stored in S3. If we terminate the instance an EBS volume is attached to, only the instance is removed, not the volume: the network link to the EBS storage is dropped, but the volume still exists and can be attached to another instance, which is also why Amazon keeps billing for it :) since it is persistent raw storage. An EBS volume can range from 1 GB to 1000 GB. It is best to stop the application before taking a snapshot, thereby reducing the risk of data corruption. A snapshot backs up only the data on the volume at the time it is taken. We can copy a snapshot to another region or another account, or make it public, because S3 is publicly accessible.
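As a rough illustration of those operations, here is a hedged AWS CLI sketch; the volume, snapshot, and instance IDs below are hypothetical placeholders:

                # Create a snapshot of a volume
                aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-maintenance backup"

                # Copy the snapshot to another region (run against the destination region)
                aws ec2 copy-snapshot --source-region us-east-1 --source-snapshot-id snap-0123456789abcdef0 --region us-west-2

                # Attach the volume to another instance in the same Availability Zone
                aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf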
                                               

Understanding Amazon EC2



Amazon EC2 (Elastic Compute Cloud)
==================================


Amazon provides resizable compute capacity in its cloud infrastructure; it is like creating a virtual node in a cloud environment. We can spin up instances based on our infrastructure requirements, and users can create, resize, shut down, and delete instances on demand. Amazon provides pre-configured machine images known as AMIs; an AMI is a preconfigured template of an operating system and application image, and we can spin up EC2 instances from it. We can also create an AMI from an existing EC2 instance by creating a machine image.


        We can launch an EC2 instance from Amazon's AMI catalog or from our own pre-configured AMI, and place it in different Availability Zones and regions based on our requirements. Instances can be scaled vertically and horizontally, and can run inside or outside a VPC. By default we are limited to running a total of 20 On-Demand or Reserved Instances and 20 Spot Instances per region, and the exact limit can differ by region. We can also attach EBS volumes to an instance.
  
              Instance firewall rules are configured in security groups: to open a port for a particular instance, select the security group attached to the instance and add the port to its inbound or outbound rules. While creating an instance we can attach an existing key pair or create a new one; Amazon stores the public key and the user keeps the private key. A security group created in one region is not available in another region, so if we want to migrate from one region to another we can export the security group configuration to the other region.
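For illustration, here is a hedged AWS CLI sketch of launching an instance and opening a port in its security group; the AMI ID, key pair name, security group ID, and CIDR range are placeholders:

              # Launch an instance from a (hypothetical) AMI with an existing key pair and security group
              aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t2.micro \
                  --key-name my-keypair --security-group-ids sg-0123456789abcdef0

              # Open inbound port 22 on the security group attached to the instance
              aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
                  --protocol tcp --port 22 --cidr 203.0.113.0/24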

Configure SSH Connection to AWS CodeCommit

                                                                      







Step 1:  Generate an SSH key pair on your local system.

            $ ssh-keygen

            Generating public/private rsa key pair.
            Enter file in which to save the key (/root/.ssh/id_rsa): <enter a file name, e.g. /root/.ssh/codecommit_rsa>
            Enter passphrase (empty for no passphrase): <enter a passphrase>
            Enter same passphrase again: <re-enter the passphrase>


Step 2:  Copy the contents of the SSH public key you just created.

             $ cat ~/.ssh/codecommit_rsa.pub

Step 3:  Log in to the AWS IAM console >> choose Users >> choose your IAM user >> choose Security Credentials >> choose Upload SSH Key.


              Paste the contents of the SSH public key into the "Upload SSH Key" section.


Step 4:  Copy and save the SSH key ID that is generated after uploading the public key.

Step 5:  Create a config file under ~/.ssh on your local system and put the SSH key ID in the User field, as shown below.

              $ nano ~/.ssh/config


              Host git-codecommit.*.amazonaws.com
                  User APKAEIBAERJR2EXAMPLE
                  IdentityFile ~/.ssh/codecommit_rsa


Step 6:  Change the config file permissions.

              $ chmod 600 ~/.ssh/config

Step 7:  Run the command below to test the SSH configuration.

             $ ssh git-codecommit.us-east-1.amazonaws.com
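
Once the test succeeds, you can clone a repository over SSH. The repository name below is a hypothetical placeholder; substitute your own CodeCommit repository:

             $ git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/MyDemoRepo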

Install OpsCenter & Configure It for an Existing DataStax Cassandra Cluster


                                                               


Follow the steps below to install OpsCenter. Don't forget to sign up for a DataStax Academy account, which we will need during the installation process.


Step 1:  Add Datastax repository file

             sudo echo "deb https://login_email_Address:login_Password@debian.datastax.com/enterprise stable main" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list


Step 2:  Add the Datastax repository key

             sudo curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -

Step 3:  Install the package
           
             sudo apt-get update
             sudo apt-get install opscenter



Step 4:  Start OpsCenter
           
              sudo service opscenterd start


Step 5:  Browse to OpsCenter using the URL below
      
              http://hostaddress:8888/

 
Follow the steps below to configure OpsCenter for an existing DataStax Cassandra cluster.

Step 1:  Update address.yaml on each of your Cassandra cluster nodes.

             sudo echo "stomp_interface: <reachable_opscenterd_ip>" | sudo tee -a /var/lib/datastax-agent/conf/address.yaml

             sudo echo "use_ssl: 1" | sudo tee -a /var/lib/datastax-agent/conf/address.yaml


Step 2:  Start the DataStax agent service

             sudo service datastax-agent start
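
To confirm that the agent has connected to opscenterd, you can watch the agent log; the path below assumes a default package install, so adjust it if your layout differs.

             sudo tail -f /var/log/datastax-agent/agent.log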

Upgrading to DataStax Cassandra 5.0


                                                                       

Follow the instructions below to upgrade DataStax Enterprise from version 4.8 to 5.0. First take a backup of cassandra.yaml and dse.yaml. Don't forget to sign up for a DataStax Academy account, which we will need during the installation process.




Step 1:  Snapshot all keyspaces as needed.
          
             nodetool snapshot <keyspace_name>

Step 2:  Back up the keyspace snapshots

Step 3:  Run nodetool drain to flush the commit log of the old installation.
            
             sudo nodetool -h hostname drain

Step 4:  Stop node services
         
            sudo service dse stop
            sudo service datastax-agent stop


Step 5:  Back up the configuration files.
   
            cp -r /etc/dse/dse.yaml /home
            cp -r /etc/dse/cassandra/cassandra.yaml /home
            cp -r /etc/dse/cassandra/cassandra-topology.properties /home
            cp -r /etc/dse/cassandra/cassandra-env.sh /home



Step 6:  Make sure the latest Java is installed (1.8.0_40 minimum), or install it

             sudo java -version

Step 7:  Add Datastax repository file
   
      sudo echo "deb https://login_email_Address:login_Password@debian.datastax.com/enterprise stable main" | sudo tee -a /etc/apt/sources.list.d/datastax.sources.list


Step 8:  Add the Datastax repository key
      
            sudo curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -

Step 9:  Install the package

            sudo apt-get update
            sudo apt-get install dse-full

Step 10:  Configure the new version using the configuration files we backed up earlier.
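
                One way to do this is to diff the newly installed defaults against the backups taken in Step 5 and merge your settings back in; a minimal sketch, assuming the backups are still in /home:

                diff /home/cassandra.yaml /etc/dse/cassandra/cassandra.yaml
                diff /home/dse.yaml /etc/dse/dse.yaml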

Step 11:  Start node services

             sudo service dse start
             sudo service datastax-agent start

              Make sure that everything is running fine by checking the log:

             tail -f -n 1000 /var/log/cassandra/system.log

Step 12:  Upgrade the SSTables
        
             sudo nodetool upgradesstables

Step 13:  Check node status
   
               sudo nodetool status

Step 14:  Repeat the upgrade on each node in the cluster

Step 15:  If your upgrade includes DSE Search nodes:

                After all nodes are updated to 5.0, comment out all shard_transport_options instances in each dse.yaml file.
                Restart the nodes to fully shut down the old shard transport.

DataStax Cassandra Cluster Implementation in Debian


                                                                   
Let's check how to set up Cassandra clustering on 3 Debian nodes. I am using the ClusterSSH tool to connect to all nodes at the same time. First create a login account at DataStax Academy, which we will need during the installation process.

Step 1:  Connect to all 3 nodes using clusterssh
         
              sudo cssh -l username node1ip node2ip node3ip

Step 2:  Make sure Java is installed, or install it
         
             sudo java -version

Step 3:  Add Datastax repository file
        
             sudo echo "deb https://login_email_Address:login_Password@debian.datastax.com/enterprise stable main" | sudo tee -a /etc/apt/sources.list.d/ datastax.sources.list

Step 4:  Add the Datastax repository key
        
             sudo curl -L https://debian.datastax.com/debian/repo_key | sudo apt-key add -

Step 5:  Install the package
        
             sudo apt-get update
             sudo apt-get install dse-full

Step 6:  Stop DSE service
     
             sudo service dse stop

Step 7:  Delete all files from the Cassandra data directory for a fresh cluster (see the sketch below)

             The default data directory is /var/lib/cassandra/
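
             A hedged one-liner for this step, assuming the default directory layout above and nodes that hold no data you want to keep:

             sudo rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*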

Step 8:  Change the following in each cassandra.yaml file; an example snippet follows below

             The default path of cassandra.yaml is /etc/dse/cassandra/cassandra.yaml


            cluster_name: [some_cluster_name]
            listen_address: [IP of each machine] (use :r !hostname -I in vim to update every machine at once if you are using ClusterSSH)
            rpc_address: [IP of each machine] (same trick as above)
            seeds: [comma-separated IPs used to identify the cluster; any two or three, do not include all IPs]
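
            For example, on a node whose IP is 10.0.0.11 (all names and addresses here are hypothetical), the relevant parts of cassandra.yaml might look like this; note that in the actual file the seed list lives under seed_provider:

            cluster_name: 'DemoCluster'
            listen_address: 10.0.0.11
            rpc_address: 10.0.0.11
            seed_provider:
                - class_name: org.apache.cassandra.locator.SimpleSeedProvider
                  parameters:
                      - seeds: "10.0.0.11,10.0.0.12"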

Step 9:  Run DSE service
        
             sudo service dse start

Step 10:  Make sure that everything is running fine by checking the log
        
               sudo tail -f -n 1000 /var/log/cassandra/system.log

              Check cluster status
         
              sudo nodetool status

Amazon EC2 (Elastic Compute Cloud)

                                                                            


Amazon EC2 (Elastic Compute Cloud)
==================================

Amazon provides resizable compute capacity in the cloud, where users can create an instance and resize it according to their business requirements. Amazon provides preconfigured Amazon Machine Image templates, which you can find at the following link:

https://aws.amazon.com/marketplace/ref=mkt_ste_amis_redirect?b_k=291

Amazon Machine Image (AMI) is a pre-configured operating system image that is used to create an EC2 instance within the Amazon cloud environment.

          You can deploy a new instance as per your requirements, or re-launch an instance from an AMI you have already taken. You can launch an instance in multiple locations, assign a fixed (Elastic) IP address in place of a dynamic one, and attach persistent block storage from the Elastic Block Store (EBS) section. EC2 gives you complete control over your compute resources. You can take advantage of the free Amazon EC2 tier for testing purposes; check the link below for the free tier details.

http://aws.amazon.com/free/

             Firewall rules are configured in the Security Groups section, where rules for incoming or outgoing connections and ports can be defined. A security group is a set of firewall rules that you configure to restrict access to your deployed Amazon Web Services infrastructure. Security groups exist only within a single Amazon EC2 region: a security group configured in the US-WEST region is not available in the APAC region. In case you want to migrate AWS resources from one region to another, you can also export the security group configuration.
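
             As a rough illustration, here is a hedged AWS CLI sketch of creating a security group and adding an inbound rule; the group name, VPC ID, security group ID, and CIDR range are placeholders:

             # Create a security group in a (hypothetical) VPC
             aws ec2 create-security-group --group-name web-sg --description "web servers" --vpc-id vpc-0123456789abcdef0

             # Allow inbound HTTP from a specific address range, using the GroupId returned above
             aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 80 --cidr 203.0.113.0/24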

     

Configuring the Nagios Client: NRPE Plugin

                                                                           
We have discussed how to "Install Nagios Core on Amazon Linux Instance". Please check the link for reference: http://linuxhotcoffee.blogspot.in/2016/03/installing-nagios-core-on-amazon-linux.html

My current infrastructure details are given below (OS & private IP):

Monitoring Server : Amazon Linux Instance (10.10.1.100)
Client Server : Ubuntu (10.10.1.10)


Create the user nagios and install the packages mentioned below

sudo useradd -m nagios
sudo apt-get install libssl-dev openssl xinetd build-essential


Now let's configure the monitoring server. Install NRPE on the monitoring server. You can download the latest version of NRPE from the official download page; the current latest version is NRPE 2.15. Download the latest release using wget:

cd ~
wget http://downloads.sourceforge.net/project/nagios/nrpe-2.x/nrpe-2.15/nrpe-2.15.tar.gz
wget http://nagios-plugins.org/download/nagios-plugins-2.1.1.tar.gz


Extract the tar files

tar -xvzf  nrpe-2.15.tar.gz
tar xzf nagios-plugins-2.1.1.tar.gz


Change to the Nagios plugins source directory

cd nagios-plugins-2.1.1/

Compile and install the plugins

./configure --with-nagios-user=nagios --with-nagios-group=nagios --with-openssl
make
make install


Change to the NRPE source directory

cd nrpe-2.15/

Compile and install NRPE

./configure --with-ssl=/usr/bin/openssl --with-ssl-lib=/usr/lib/x86_64-linux-gnu
make all
make install-plugin
make install-daemon
make install-daemon-config
make install-xinetd

Now modify the "only_from"  line in  /etc/xinetd.d/nrpe

vi /etc/xinetd.d/nrpe
only_from = 127.0.0.1 10.10.1.100

Restart the xinetd service

service xinetd restart

Configuration on the Nagios monitoring server is complete; now we can move on to configuring the client server.

Install the Nagios plugins and NRPE on the client server

apt-get install nagios-plugins nagios-nrpe-server


Configure the monitoring server IP in the client's nrpe.cfg file

vi /etc/nagios/nrpe.cfg

Find the allowed_hosts line in the file and set it as follows:

allowed_hosts=127.0.0.1,10.10.1.100


Restart the NRPE service

service nagios-nrpe-server restart
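
To verify the setup end to end, you can run check_nrpe from the monitoring server against the client and then define a service that uses it. The install path, host name, and check command below are assumptions based on a default source install, so adjust them to your layout:

# On the monitoring server: test the NRPE connection to the client (should print the NRPE version)
/usr/local/nagios/libexec/check_nrpe -H 10.10.1.10

# Example command and service definitions for the Nagios configuration (host name is hypothetical)
define command {
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}

define service {
    use                     generic-service
    host_name               ubuntu-client
    service_description     Current Load
    check_command           check_nrpe!check_load
}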