Posts

Configure Nginx - PHP 5.6.x on Mac OS Sierra...

Install Homebrew :

    Run this command to install Homebrew at the system level:

$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

    Run the commands below and check if anything needs to be fixed.

$ sudo chown -R $(whoami) /usr/local
$ brew doctor
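
    If there is nothing left to fix, brew doctor usually just reports:

Your system is ready to brew.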

Install nginx :

    Command to install nginx via brew.

$ brew install nginx

Install PHP 5.6 :

     To install PHP 5.6.x, run the install script from http://php-osx.liip.ch :

$ curl -s http://php-osx.liip.ch/install.sh | bash -s 5.6 

Enable the new PHP 5.6 system-wide :

$ nano ~/.bash_profile

       Enter:

             export PATH="/usr/local/php5/bin:/usr/local/bin:$PATH"
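
       Reload the profile so the new PATH takes effect and confirm which PHP gets picked up (a quick sanity check; the path above assumes the default php-osx install location):

$ source ~/.bash_profile
$ php -v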

Disable Apple's built-in (old) php-fpm version :

        sudo mv /usr/sbin/php-fpm /usr/sbin/php-fpm.old

Create a symlink for the new version of php-fpm :

        sudo ln -s /usr/local/php5/sbin/php-fpm /usr/local/bin/
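
With the symlink in place, php-fpm and nginx can be started. This is a minimal sketch, assuming the default php-osx and brew install paths (brew's stock nginx.conf listens on port 8080):

$ php-fpm -v       # should now report PHP 5.6.x
$ sudo php-fpm -D  # run php-fpm as a daemon with its default config
$ nginx            # start nginx on port 8080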







AWS CodeCommit & Git on OSX fatal unable to access returned error: 403

You had set up the AWS CLI tools with AWS CodeCommit support and integrated everything with Git using the credential.helper config line. It was all working correctly; then you tried to clone or push and got an error similar to:



*** fatal: unable to access 'https://git-codecommit.us-east-1.amazonaws.com/v1/repos/YOUR_REPO/': The requested URL returned error: 403 ***


Solution 1 :

Run this in the terminal :

security delete-internet-password -l git-codecommit.us-east-1.amazonaws.com

This is not a permanent solution, though; the clock is just ticking down to the next credential reset on the AWS side.
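
If the 403 keeps coming back after every credential rotation, it is also worth confirming that Git is really using the AWS credential helper for CodeCommit rather than the keychain; a sketch of the standard setup from the AWS CLI docs:

$ git config --global --get-all credential.helper
$ git config --global credential.helper '!aws codecommit credential-helper $@'
$ git config --global credential.UseHttpPath true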

Solution 2 :

As per Amazon Docs:
        1) Open the Keychain Access utility. (You can use Finder to locate it.)
        2) Search for git-codecommit.us-east-1.amazonaws.com. Highlight the row, open the context menu or right-click it, and then choose Get Info.
        3) Choose the Access Control tab.
        4) In Always allow access by these applications, choose git-credential-osxkeychain, and then choose the minus sign…

Configure Nagios plugin "check_logfiles" for scanning log files

I was looking for a Nagios plugin that could filter a specified log file and notify me whenever a warning or error appeared in the log. I finally got my hands on check_logfiles, created by Gerhard Lausser. check_logfiles is used to scan the lines of a file for regular expressions.

Install the plugin on the client system : 

$ wget https://labs.consol.de/assets/downloads/nagios/check_logfiles-3.8.0.2.tar.gz

$ tar xvzf check_logfiles-3.8.0.2.tar.gz

$ cd check_logfiles-3.8.0.2/

$ ./configure

$ make

$ make install

The check_logfiles executable gets installed in /usr/local/nagios/libexec.


Configure Check Pattern :

$ nano  /etc/nagios-plugins/config/check_log.cfg

@searches = (
  {
    tag => 'Error',
    logfile => '/var/logs/error.log',
    criticalpatterns => 'Error',
  }
);

Here I am filtering the pattern "Error" from the log file specified in the logfile parameter.
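
Before wiring it into NRPE, the plugin can be run by hand to verify the pattern actually matches; the config path is the one created above:

$ /usr/local/nagios/libexec/check_logfiles -f /etc/nagios-plugins/config/check_log.cfg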


Configure nrpe.cfg on the client system :

$ nano /etc/nagio…

Datastax Error : Error initializing cluster data

I was getting the following error in OpsCenter continuously, and it would get resolved automatically after some time.

Error initializing cluster data: The request to /APP_Live/keyspaces?ksfields=column_families%2Creplica_placement_strategy%2Cstrategy_options%2Cis_system%2Cdurable_writes%2Cskip_repair%2Cuser_types%2Cuser_functions%2Cuser_aggregates&cffields=solr_core%2Ccreate_query%2Cis_in_memory%2Ctiers timed out after 10 seconds.. If you continue to see this error message, you can workaround this timeout by setting [ui].default_api_timeout to a value larger than 10 in opscenterd.conf and restarting opscenterd. Note that this is a workaround and you should also contact DataStax Support to follow up.

Solution :
The workaround for this timeout is to set [ui].default_api_timeout to a value larger than 10 in opscenterd.conf and restart opscenterd.
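
A sketch of the relevant opscenterd.conf section (60 is just an example; any value larger than 10 will do):

[ui]
default_api_timeout = 60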

Datastax Error : Cannot start node if snitch's data center (dc1) differs from previous data center (dc2)

This error occurs when a node tries to start and finds that its snitch information differs from the previous data center. In my case, I was using DseSimpleSnitch, which names the data center based on the workload type. Previously I had enabled Solr in the DSE default configuration file ( /etc/default/dse ), so the data center name was Solr. Now I tried to start DSE with Graph enabled, which changed the data center name to SearchGraph, the default workload-type name.

CassandraDaemon.java:698 - Cannot start node if snitch's data center (Solr) differs from previous data center (SearchGraph). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true

Solution :

1) In the last line of cassandra-env.sh, set the JVM opts as given below :
JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
2) If it starts successfully, execute:

nodetool repair
nodetool cleanup
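
The ignore_dc flag is only meant to get the node past the mismatch; once the node is healthy, the data center it now reports can be double-checked before removing the flag from cassandra-env.sh again:

$ nodetool status | grep -i datacenter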

Datastax Error : Cassandra - Saved cluster name Test Cluster != configured name

While changing an existing cluster name in DataStax, I got a name-mismatch exception in the log file. The process I had followed was simply to change the cluster name in the cassandra.yaml configuration file.

$ nano /etc/dse/cassandra/cassandra.yaml

cluster_name: 'Test Cluster1'








Solution : 

Change the cluster name to "New Cluster Name" :

$ cqlsh `hostname -I`

cqlsh> UPDATE system.local SET cluster_name = 'New Cluster Name' where key='local';
cqlsh> exit;

$ nodetool flush system

Stop the cluster :

$ sudo service dse stop;sudo service datastax-agent stop

Edit the file :

$ sudo vi /etc/dse/cassandra/cassandra.yaml
cluster_name: 'New Cluster Name'

Start the cluster :

$ sudo service dse start;sudo service datastax-agent start

Check the installation log :

 $ cat /var/log/cassandra/output.log
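
To confirm the rename took effect, the stored value can also be read back with cqlsh (using -e to run a single statement):

$ cqlsh `hostname -I` -e "SELECT cluster_name FROM system.local;"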


Reference : https://support.datastax.com/hc/en-us/articles/205289825-Change-Cluster-Name-

Datastax Administration Commands (Part 6)

Start DSE service : sudo service dse start
Stop DSE service : sudo service dse stop

Configuration file locations :
           /etc/dse/cassandra/cassandra.yaml
           /etc/default/dse
           /etc/dse/cassandra/cassandra-env.sh
           /etc/dse/cassandra/cassandra-rackdc.properties

## Show information about the cluster
        nodetool status

## Show information about nodes and ring
        nodetool ring

## Repair one or more tables
         nodetool repair

## Cleans up keyspaces and partition keys no longer belonging to a node.
         nodetool cleanup

## Flushes one or more tables from the memtable
         nodetool flush

## List files from backup data directories :
        find /mnt/cassandra/data/*/*/backups/* -type f -ls

## Remove files from backup data directories :
        find /mnt/cassandra/data/*/*/backups/* -type f -delete

## Retrieve the list of tokens associated with each node's IP:
      nodetool ring | grep ip_address_of_node | awk '{print $NF ","}&…