<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Dev &#8211; Johnny Morano&#039;s Tech Articles</title>
	<atom:link href="https://jmorano.moretrix.com/tag/dev/feed/" rel="self" type="application/rss+xml" />
	<link>https://jmorano.moretrix.com</link>
	<description>Ramblings of an old-fashioned space cowboy</description>
	<lastBuildDate>Wed, 20 Apr 2022 07:18:55 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.2</generator>

<image>
	<url>https://jmorano.moretrix.com/wp-content/uploads/2022/04/cropped-jmorano_emblem-32x32.png</url>
	<title>Dev &#8211; Johnny Morano&#039;s Tech Articles</title>
	<link>https://jmorano.moretrix.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Perl script to monitor the rate of logs</title>
		<link>https://jmorano.moretrix.com/2022/04/perl-script-to-monitor-the-rate-of-logs/</link>
					<comments>https://jmorano.moretrix.com/2022/04/perl-script-to-monitor-the-rate-of-logs/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Thu, 07 Apr 2022 12:39:50 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[IPTables]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Logging]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1399</guid>

					<description><![CDATA[In a previous article (IPTables Logging in JSON with NFLOG and ulogd2) we learned how to log certain&#8230;]]></description>
										<content:encoded><![CDATA[
<p>In a previous article (<a href="https://jmorano.moretrix.com/2022/03/logging-in-iptables-with-nflog-and-ulogd2/" data-type="post" data-id="1308">IPTables Logging in JSON with NFLOG and ulogd2</a>) we learned how to log certain IPTables rules to JSON log files.</p>



<p>Monitoring logs in real time on the command line can be very useful when debugging the rules themselves or when analyzing certain issues. Rather than just looking at the logs, in some situations it is useful to track the rate of the log messages. A self-written Perl script is handy here because it offers flexibility in:</p>



<ul class="wp-block-list"><li>parsing logs</li><li>formatting the output (with colors or tables or &#8230;)</li><li>calculating statistics</li><li>&#8230;</li></ul>



<p>The following Perl script uses a few modules which need to be present:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use IO::Async::Timer::Periodic;
use IO::Async::Loop;
use Time::HiRes qw/time/;
use Term::ANSIColor qw(:constants);
use Getopt::Long;</pre>



<p>The first two modules can be installed on Debian systems with:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">apt install libio-async-perl</pre>



<p>The others are part of the normal Perl packages and do not require any extra installation.</p>



<p>Next, the script sets up a polling mechanism that reads from standard input at fixed intervals and calculates the rate of each unique log line. The default polling interval is 2 seconds, but it can be changed via a command-line parameter:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">my $last_poll_time = time;

my $poll_rate = 2;
GetOptions (
    'p|pollrate=i' => \$poll_rate,
);

my $loop = IO::Async::Loop->new;
my $timer = IO::Async::Timer::Periodic->new(
   interval => $poll_rate,
   on_tick  => \&amp;log_rate
);

$timer->start;
$loop->add( $timer );
$loop->run;</pre>



<p>Finally, the script defines a subroutine called <code>log_rate</code>, which reads from standard input (or a file) at each poll interval. It is important that the incoming log lines do not contain unique data such as timestamps: the lines must be as generic as possible so that identical events can be counted together.</p>



<p>Example:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">tail -qf /var/log/ulog/blocked_detailed.json /var/log/ulog/blocked.json /var/log/ulog/passed.json  | jq -r --unbuffered '."oob.prefix"' 
blocked: invalid state
blocked: invalid state
blocked: invalid state
blocked: invalid state
blocked: invalid state
action=blocked
action=blocked
action=blocked
action=blocked
action=blocked
action=passed
action=passed
action=passed
action=passed</pre>



<p>The code snippet for <code>log_rate</code> could contain:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="python" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">sub log_rate {
    local $SIG{ALRM} = sub { die time, " time exceeded to read STDIN\n" };

    alarm($poll_rate);
    my $h;
    eval {
        local $| = 1;
        while (my $line = &lt;>) {
            chomp($line);
            $h->{$line}++;
        }
    };
    alarm(0);

    return unless keys %$h;

    my $delta_time = time - $last_poll_time;
    print DARK WHITE . sprintf("%d: ", time) . RESET;
    print( BOLD WHITE . $_ ." [" . GREEN . sprintf("%.2f/s", $h->{$_}/$delta_time) . BOLD WHITE "] | " . RESET) foreach keys %$h; 
    print "\n";

    $last_poll_time = time;
}</pre>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 2</mark> starts by declaring a handler for the <code>ALRM</code> signal. This handler is called when the <code>alarm</code> timeout is reached (see further below).</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 4</mark> sets the <code>alarm</code> timeout in seconds: if everything below <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">line 4</mark> (until the next <code>alarm</code> call) takes longer than the defined timeout, the <code>ALRM</code> signal handler declared at <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">line 2</mark> is called, which stops the code execution with a <code>die</code> (which, outside an <code>eval</code>, would terminate the script with a non-zero exit status).</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 5</mark> declares the variable that will hold a hash reference, used further down to temporarily store the counts of unique log lines.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 6</mark> to <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">12</mark> define an <code>eval</code> block. The <code>eval</code> catches the <code>die</code> triggered by the <code>ALRM</code> handler, so the script itself is not terminated. Inside the block, standard input is read with the diamond operator (<code>&lt;></code>) and each unique line is counted in the <code>$h</code> hash reference.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 13</mark>, right after the <code>eval</code> block, sets the <code>alarm</code> timeout back to 0, which disables it. This way, only the execution of the <code>eval</code> block is subject to the timeout.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 15</mark> ensures that rates are only printed to the screen when log lines were actually collected in the temporary hash reference <code>$h</code>.</p>
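<p>The timeout pattern is easiest to see in isolation. The following is a minimal standalone sketch (not part of the original script): the <code>ALRM</code> handler dies, the surrounding <code>eval</code> catches the <code>die</code>, and execution simply continues after the block.</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict; use warnings;

# Minimal sketch of the alarm/eval pattern used in log_rate()
my $timed_out = 0;
$SIG{ALRM} = sub { die "timeout\n" };

alarm(1);        # arm a 1-second timer
eval {
    sleep 5;     # simulates a blocking read on standard input
};
alarm(0);        # disarm: only the eval block is under the timeout

$timed_out = 1 if $@ eq "timeout\n";
print $timed_out ? "read timed out\n" : "read finished\n";</pre>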



<p>The rest of the code will take care of printing the discovered log lines with their rates to the screen. Colors from <code>Term::ANSIColor</code> are used to make the output more vivid.</p>
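<p>As a small illustration (with made-up values, not taken from the script), the coloring boils down to wrapping each field in color constants and closing with <code>RESET</code>:</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict; use warnings;
use Term::ANSIColor qw(:constants);

# Label in bold white, rate in green; RESET restores default attributes
my ($line, $rate) = ('action=blocked', 2.5);
print BOLD, WHITE, $line, RESET, ' [', GREEN, sprintf('%.2f/s', $rate), RESET, "] |\n";</pre>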



<p>Example output:</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="911" height="285" src="https://jmorano.moretrix.com/wp-content/uploads/2022/04/Screenshot-from-2022-04-06-14-14-00.png" alt="" class="wp-image-1405" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/04/Screenshot-from-2022-04-06-14-14-00.png 911w, https://jmorano.moretrix.com/wp-content/uploads/2022/04/Screenshot-from-2022-04-06-14-14-00-300x94.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/04/Screenshot-from-2022-04-06-14-14-00-768x240.png 768w" sizes="(max-width: 911px) 100vw, 911px" /></figure>



<p>The full version of the script can be found at: <a href="https://github.com/insani4c/perl_tools/tree/master/log_rate" target="_blank" rel="noreferrer noopener">https://github.com/insani4c/perl_tools/tree/master/log_rate</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/perl-script-to-monitor-the-rate-of-logs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Perl: Create schema backups in PostgreSQL</title>
		<link>https://jmorano.moretrix.com/2014/08/perl-create-schema-backups-in-postgresql/</link>
					<comments>https://jmorano.moretrix.com/2014/08/perl-create-schema-backups-in-postgresql/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Fri, 22 Aug 2014 09:09:20 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Postgresql]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1114</guid>

					<description><![CDATA[At my recent job, I was asked to create a backup procedure, which would dump a PostgreSQL schema&#8230;]]></description>
										<content:encoded><![CDATA[
<p>At my current job, I was asked to create a backup procedure that dumps a PostgreSQL schema to a compressed file and can create both weekly and daily backups.<br />Each backup had to be a full backup, and the number of daily and weekly backups to keep had to be defined through thresholds.</p>



<p>The PostgreSQL tool used for these backups is <code>pg_dump</code>, and I have used Perl to glue all the interesting stuff together.</p>



<p>The script will basically go through the following steps:</p>



<ul class="wp-block-list"><li>Check the backup path for the required directories (and if not, create them)</li><li>Rotate old backups based on thresholds</li><li>Create a new backup</li></ul>



<p>The script shown below is just an example and probably needs to be adapted to your own needs. It works for me and the environment it was created in.</p>



<p>First things first.<br />The script uses the following Perl modules:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use DateTime;
use Getopt::Long;
use Pod::Usage;
use YAML qw/LoadFile/;
use File::Path qw/make_path/;
use File::Copy;
use Data::Dumper;
use POSIX qw/setuid/;
</pre>



<p>A YAML configuration file is used to provide the script with essential information. An example configuration file looks like the following:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">thresholds:
    daily: 7
    weekly: 4

backup_path: /data/backup/schema_backups

database: my_db

daily_to_weekly_pattern: sunday

schemas:
    - my_cool_schema
    - my_not_so_cool_schema
</pre>



<p>Remember: YAML is sensitive about tabs!</p>
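<p>For reference, <code>LoadFile</code> turns the YAML above into a plain Perl structure equivalent to the following hand-written sketch:</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict; use warnings;

# What LoadFile returns for the example configuration above:
# a hash reference mirroring the YAML document.
my $cfg = {
    thresholds  => { daily => 7, weekly => 4 },
    backup_path => '/data/backup/schema_backups',
    database    => 'my_db',
    daily_to_weekly_pattern => 'sunday',
    schemas     => [ 'my_cool_schema', 'my_not_so_cool_schema' ],
};

print "keeping $cfg->{thresholds}{daily} daily and $cfg->{thresholds}{weekly} weekly backups\n";</pre>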



<p>Command line arguments are set up in the script by using Getopt::Long.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">my ($help, $cfg_file, $schema, $verbose, $debug);
# Check command line arguments
GetOptions(
    "help"     =&gt; \$help,
    "verbose"  =&gt; \$verbose,
    "debug"    =&gt; \$debug,
    "cfg=s"    =&gt; \$cfg_file,
    "schema=s" =&gt; \$schema,
);
pod2usage(1) if $help;
</pre>



<p>The script needs to run as the &#8216;postgres&#8217; user. Should it be executed by another user (for instance root), then the script will try to switch to the &#8216;postgres&#8217; user.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">my $user = getpwuid($&lt;);   # username of the real user ID
unless ($user eq 'postgres') {
    p_info("Script $0 needs to run as 'postgres', switching user...");
    setuid(scalar getpwnam 'postgres');
}</pre>



<p>Next we load the configuration file and check whether a schema name was supplied on the command line. If one was given, it overrides the schema names set in the configuration, and only that one schema is backed up.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">my $cfg;
if(defined $cfg_file){
    if( -f $cfg_file ){
        p_info("Loading configuration file '$cfg_file'");
        $cfg = LoadFile($cfg_file);
    }
    else {
        die "No such configuration file '$cfg_file'\n";
    }
}

$cfg-&gt;{schemas} = [$schema] if defined $schema;
</pre>



<p>And now we are ready for the main loop of the script:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">foreach my $s (@{ $cfg-&gt;{schemas} }){
    check_current_backups($s);
    create_backup($s);
}
</pre>



<p>For each schema, we will first check if the required directories are in place and otherwise create them. Afterwards we will check those directories for older backups.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">sub check_current_backups {
    my($schema) = @_;

    check_directory_structure($schema);
    check_backups('daily', $schema);
    check_backups('weekly', $schema);
}

sub check_directory_structure {
    my($schema) = @_;

    foreach my $period (qw/daily weekly/){
        my $_path = return_backup_path($period, $schema);
        p_info("Checking path '$_path'");
        unless(-d $_path){
            make_path($_path);
            p_info("Created path '$_path'");
        }
    }
}

# check if older backups need rotation / deletion
sub check_backups {
    my($period, $schema) = @_;

    my $path = return_backup_path($period, $schema);

    my @files = glob("$path/*");
    my @sorted = sort { get_date($b) &lt;=&gt; get_date($a) } @files;

    if(scalar @sorted &gt;= $cfg-&gt;{thresholds}{$period}){
        p_info("Rotating backups for period '$period'");
        rotate_backups($period, \@sorted);
    }
}
</pre>



<p>The rotation of the backups works as follows: if the daily threshold has been reached (for instance, 7 daily backups), those files are nominated for rotation or deletion.</p>



<p>The rotation itself is custom-designed for my current job. Each backup filename has the lowercase day name (monday, tuesday, &#8230;) appended. Backup files matching a certain pattern (in my situation &#8216;sunday&#8217;) are moved into the &#8216;weekly&#8217; backup path; other old files are deleted.</p>
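<p>A small sketch with made-up filenames shows how the pattern decides a file's fate:</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict; use warnings;

# Hypothetical backup filenames; files matching daily_to_weekly_pattern
# ('sunday' here) are promoted to weekly storage, the rest are deleted.
my $pattern = 'sunday';
my @old_backups = (
    '20140810120000_sunday.dump.sql.gz',
    '20140811120000_monday.dump.sql.gz',
);

my (@weekly, @deleted);
foreach my $file (@old_backups) {
    if ($file =~ /$pattern/) {
        push @weekly, $file;
        print "move to weekly: $file\n";
    }
    else {
        push @deleted, $file;
        print "delete: $file\n";
    }
}</pre>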



<p>Since rotation is done before a backup is created, we delete one more file than the threshold requires (because a new backup file is going to be created a few lines further on).</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">sub rotate_backups {
    my($period, $files) = @_;

    p_debug("All Files: ".Dumper($files));
    p_debug("$period threshold: ".$cfg-&gt;{thresholds}{$period});

    # make a true copy
    my (@to_move_files) = (@{ $files });
    # @files contains all backup files, with the youngest as element 0 and the
    # oldest backup as last element.
    # @to_move_files is a slice of @files, starting from the position threshold - 1,
    # until the end of the array. Those files will be either rotated or removed.
    @to_move_files = @to_move_files[ $cfg-&gt;{thresholds}{$period} - 1 .. $#to_move_files ];
    p_debug("TO MOVE FILES: ".Dumper(\@to_move_files));

    if($period eq 'daily'){
        foreach my $file (@to_move_files){
            # move backups to weekly ($schema must be in scope here, see the full script)
            if($file =~ /$cfg-&gt;{daily_to_weekly_pattern}/){
                p_info("Moving daily backup '$file' to weekly");
                # keep only the filename, $file holds the full daily path
                (my $name = $file) =~ s{^.*/}{};
                move($file, return_backup_path('weekly', $schema) . '/' . $name);
            }
            else {
                p_info("Removing backup '$file'");
                unlink($file);
            }
        }
    }

    if($period eq 'weekly'){
        foreach my $file (@to_move_files){
            # remove files
            p_info("Removing backup '$file'");
            unlink($file);
        }
    }
}
</pre>



<p>At this point, the required directory structure has been checked and is present, and older backup files have been rotated or deleted.<br />Finally, we can create the actual backup:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">sub create_backup {
    my($schema) = @_;

    p_info("Creating backup for schema '$schema', database: " . $cfg-&gt;{database});
    my $now = DateTime-&gt;now;
    my $path = return_backup_path('daily', $schema)
                . '/' . $now-&gt;ymd('') . $now-&gt;hms('')
                . '_' . lc($now-&gt;day_name)
                . '.dump.sql';

    # Create the dump file
    my $dump_output = do{
        local $/;
        open my $c, '-|', "pg_dump -v -n $schema -f $path $cfg-&gt;{database} 2&gt;&amp;1"
            or die "pg_dump for '$schema' failed: $!";
        &lt;$c&gt;;
    };
    p_debug('pg_dump output: ', $dump_output);

    # GZIP the dump file
    my $gzip_output = do{
        local $/;
        open my $c, '-|', "gzip $path 2&gt;&amp;1"
            or die "gzip for '$path' failed: $!";
        &lt;$c&gt;;
    };
    p_debug('gzip output: ', $gzip_output);

    # change the permissions
    chmod 0660, "$path.gz";

    p_info("Created backup for schema '$schema' in '$path.gz'");
}
</pre>



<p>The backup is created by issuing <code>pg_dump</code> for that schema, which produces a plain-text SQL file. This file is compressed with <code>gzip</code>, and afterwards the file permissions are changed to 0660. Since the backup file is created by the <code>postgres</code> user, only the <code>postgres</code> user and its group will have access to this file.</p>



<p>The full script and configuration file can be found at <a title="Github Repository" href="https://github.com/insani4c/perl_tools/tree/master/backup_schema" target="_blank" rel="noopener">https://github.com/insani4c/perl_tools/tree/master/backup_schema</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2014/08/perl-create-schema-backups-in-postgresql/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Monitor running processes with Perl</title>
		<link>https://jmorano.moretrix.com/2014/05/monitor-running-processes-with-perl/</link>
					<comments>https://jmorano.moretrix.com/2014/05/monitor-running-processes-with-perl/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Thu, 15 May 2014 12:33:22 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[CPAN]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1081</guid>

					<description><![CDATA[Update: This article is updated thanks to Colin Keith his excellent comment. I was extremely inspired by it&#8230;]]></description>
										<content:encoded><![CDATA[
<p><strong>Update:</strong> This article has been updated thanks to Colin Keith&#8217;s excellent comment. I was extremely inspired by it.</p>



<p>Maintaining a large number of servers cannot be done without proper programming skills. Every good system administrator must therefore make sure they know how to automate their daily work.</p>



<p>Although many programming languages exist, most people write code in only one. I happen to like Perl.</p>



<p>In this blog post, I am going to show how to create a script that can be deployed on all the Linux servers you maintain, to check for certain running services.</p>



<p>Of course, a tool such as Nagios together with NRPE and a configured event handler could also be used, but lately I was often in the situation that the <code>nrpe</code> daemon crashed, Nagios was spewing a lot of errors and the event handler&#8230; well, since nrpe was down, it couldn&#8217;t connect or do anything. So why rely on a remotely triggered action when a simple local script can be used?</p>



<p>The following script checks a default list of services, which can be extended or overwritten via a configuration file. For each service, a regular expression is used to check for running processes, and of course a startup command needs to be defined. And that is all the script will and should do.</p>



<p>The script uses three CPAN modules:</p>



<ul class="wp-block-list"><li><a title="Proc::ProcessTable" href="http://search.cpan.org/~jwb/Proc-ProcessTable-0.50/ProcessTable.pm">Proc::ProcessTable</a></li><li><a title="YAML" href="http://search.cpan.org/~ingy/YAML-0.90/lib/YAML.pm">YAML</a></li><li><a title="File::Slurp" href="http://search.cpan.org/~uri/File-Slurp-9999.19/lib/File/Slurp.pm">File::Slurp</a></li></ul>



<p>The first one is used to get a full listing of all running processes, the second one provides a means for using configuration files, and the third one is used to read PID files.</p>



<p>So let&#8217;s start our script:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/env perl
use strict; use warnings;
use utf8;

use Proc::ProcessTable;
use YAML qw/LoadFile/;
use File::Slurp;

# Default set of processes to watch
my %default_services = (
    'NRPE' =&gt; {
        'cmd'     =&gt; '/etc/init.d/nagios-nrpe-server restart',
        're'      =&gt; '/usr/sbin/nrpe -c /etc/nagios/nrpe.cfg -d',
        'pidfile' =&gt; '/var/tmp/nagios-nrpe-server.pid',
    },
    'Freshclam' =&gt; {
        'cmd'     =&gt; '/etc/init.d/clamav-freshclam restart',
        're'      =&gt; '/usr/bin/freshclam -d --quiet',
        'pidfile' =&gt; '/var/tmp/clamav-freshclam.pid',
    },
    'Syslog-NG' =&gt; {
        'cmd'     =&gt; '/etc/init.d/syslog-ng restart',
        're'      =&gt; '/usr/sbin/syslog-ng -p /var/run/syslog-ng.pid',
        'pidfile' =&gt; '/var/run/syslog-ng.pid',
    },
    'VMToolsD' =&gt; {
        'cmd'     =&gt; '/etc/init.d/vmware-tools restart',
        're'      =&gt; '/usr/sbin/vmtoolsd',
        'pidfile' =&gt; '/var/tmp/vmtoolsd.pid',
    },
    'Munin-Node' =&gt; {
        'cmd'     =&gt; '/etc/init.d/munin-node restart',
        're'      =&gt; '/usr/sbin/munin-node',
        'pidfile' =&gt; '/var/tmp/munin-node.pid',
    },
);

my (%services) = (%default_services);
</pre>



<p>Until now, no rocket science. We load the required modules, we defined our default services that need to be checked.</p>



<p>Next, check if there is a configuration file on disk. The script looks for the hard-coded path <code>/etc/default/watchdog.yaml</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Check if there is a local config file and if yes, load them in the services hash
if( -f '/etc/default/watchdog.yaml' ){
    my $local_config = LoadFile '/etc/default/watchdog.yaml';

    %services = (%default_services, %{ $local_config->{services} });
}
</pre>



<p>The last Perl statement allows one or more (or even all) of the default services to be overridden.</p>
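<p>The merge semantics are worth spelling out: in a list assignment like <code>(%default_services, %local)</code>, later keys win, so a same-named entry from the config file replaces the default while all other defaults survive. A tiny sketch with made-up entries:</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict; use warnings;

# Later list elements win on duplicate keys: the local entry for NRPE
# replaces the default, Munin-Node keeps its default definition.
my %default_services = (
    'NRPE'       =&gt; { cmd =&gt; '/etc/init.d/nagios-nrpe-server restart' },
    'Munin-Node' =&gt; { cmd =&gt; '/etc/init.d/munin-node restart' },
);
my %local = (
    'NRPE' =&gt; { cmd =&gt; '/usr/local/bin/restart-nrpe' },   # hypothetical override
);

my %services = (%default_services, %local);
print "$services{'NRPE'}{cmd}\n";            # the override wins
print scalar(keys %services), " services\n"; # still 2 services</pre>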



<p>Now let&#8217;s see if these processes are actually running. The following code was hugely inspired by Colin Keith&#8217;s comment below. I have combined his examples together with my code.</p>



<p>Let&#8217;s first have a look at the code:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># Get current process table
my $processes = Proc::ProcessTable-&gt;new;
my %procs;
my %matched_procs;
foreach my $p (@{ $processes-&gt;table }){
    $procs{ $p-&gt;{pid} } = $p-&gt;{cmndline};
    foreach my $s (keys %services){
        if($p-&gt;{cmndline} =~ m#$services{$s}-&gt;{re}#){
            $matched_procs{$s}++;
            last;
        }
    }
}

# Search the process table for not running services
foreach my $service ( keys %services ) {
    if(exists($services{$service}-&gt;{pidfile}) &amp;&amp; -f $services{$service}-&gt;{pidfile} ) {
        my $pid = read_file( glob($services{$service}-&gt;{pidfile}) );

        # If we get a pid ensure that it is running, and that we can signal it
        $pid &amp;&amp; exists($procs{$pid}) &amp;&amp; kill(0, $pid) &amp;&amp; next;

        # Remove the stale PID file because no running process for this PID file
        unlink( $services{$service}-&gt;{pidfile} );
    }
    else {
        # check if the configured process regex matches
        if( exists($matched_procs{$service}) ){
            # process is running but has no PID file
            next;
        }
    }

    # Execute the service command
    system( $services{$service}-&gt;{'cmd'} );

    # Check the exit code of the service command
    if ($? == -1) {
        print "Failed to restart '$service' with '$services{$service}-&gt;{cmd}': $!\n";
    }
    elsif ($? &amp; 127) {
        printf "Restart of '$service' died with signal %d, %s coredump\n", ($? &amp; 127), ($? &amp; 128) ? 'with':'without';
    }
    else {
        printf "Process '$service' successfully restarted, exit status: %d\n", $? &gt;&gt; 8;
    }
}
</pre>



<p>Line 2 retrieves the current process table. We save that information in two hashes with a little less detail, because we actually only need the PID and the command line of each process.</p>



<p>At line 16 we start looping through the services we have defined in the <code>%services</code> hash.<br />Inspired by Colin&#8217;s post, we first check whether a PID file is configured for the service and still present on disk. If it is, we verify whether the PID stored in the PID file exists in the process list, which we have stored in <code>%procs</code>. This happens in lines 18-21.<br />At line 21, if the process is still running and the PID matches, we move on to the next service (the <code>&amp;&amp; next</code> part).<br />If the process is no longer running but the PID file was still in the defined path, it is removed at line 24.</p>



<p>Otherwise, if no PID file was found or none was configured, we check the process list against the regular expression defined for that service. We have already built a hash, <code>%matched_procs</code>, between lines 7 and 10, which we use for this check. If the service exists in that hash, we skip it and check the next one.</p>



<p>Now, if there was no PID file or the PID file was removed at line 24, the process is started again. This happens at line 35.<br />I have executed it with the <code>system</code> function since I want the output of this command to go directly to STDOUT. And of course, the last thing to do is to check whether the process started up correctly by checking its exit code.</p>
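<p>Decoding <code>$?</code> can be shown in isolation (a standalone sketch; <code>$^X</code> is the path of the running perl binary, used here to spawn a child with a known exit status):</p>

<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict; use warnings;

# Run a child that exits with status 3, then decode $? the same way
# the watchdog does after system().
system($^X, '-e', 'exit 3');

if ($? == -1) {
    print "failed to execute: $!\n";
}
elsif ($? % 128) {   # low 7 bits hold the signal number, if any
    printf "died with signal %d\n", $? % 128;
}
else {
    printf "exited with status %d\n", $? &gt;&gt; 8;   # prints 3
}</pre>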



<p>Now save the script as, for instance, &#8216;watchdog.pl&#8217; and configure it in a cron job.<br />Example:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">*/5 * * * * root /usr/local/bin/watchdog.pl
</pre>



<p>And here&#8217;s an example of the configuration file:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">services:
    Exim-Mailserver:
        cmd: /etc/init.d/exim4 restart
        re: /usr/sbin/exim4 -bd -q30m
    Ossec-Agent:
        cmd: /etc/init.d/ossec restart
        re: !!perl/regexp '(?:ossec-agentd|ossec-logcollector|ossec-syscheckd)'

</pre>



<p>Link to script source code: <a href="https://github.com/insani4c/perl_tools/tree/master/watchdog" target="_blank" rel="noopener">https://github.com/insani4c/perl_tools/tree/master/watchdog</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2014/05/monitor-running-processes-with-perl/feed/</wfw:commentRss>
			<slash:comments>15</slash:comments>
		
		
			</item>
		<item>
		<title>Postgresql: Monitor sequence scans with Perl</title>
		<link>https://jmorano.moretrix.com/2014/02/postgresql-monitor-sequence-scans-perl/</link>
					<comments>https://jmorano.moretrix.com/2014/02/postgresql-monitor-sequence-scans-perl/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Wed, 12 Feb 2014 07:33:26 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Postgresql]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1065</guid>

					<description><![CDATA[Not using indexes or huge tables without indexes, can have a very negative impact on the duration of&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Not using indexes, or querying huge tables that lack them, can have a very negative impact on the duration of an SQL query. The query planner will then decide to perform a sequence scan, which means the query goes through the table sequentially to search for the required data. When the table is only 100 rows big, you will probably not even notice that it is making sequence scans, but if your table is 1,000,000 rows big or more, you can probably add indexes to make searches considerably faster.</p>
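

<p>A quick way to see the planner pick a sequence scan is <code>EXPLAIN</code>; the table and column names here are hypothetical:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">EXPLAIN SELECT * FROM my_big_table WHERE some_column = 'foo';
-- A plan node such as "Seq Scan on my_big_table" means the whole
-- table is read; an index on some_column would avoid that.
</pre>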



<p>In the example script we will be using a <em>Storable</em> state file, and we will store the statistics as a JSON object in the PostgreSQL database.</p>



<p>First let&#8217;s take a look at the query we will be executing:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">SELECT schemaname, relname, seq_tup_read 
FROM pg_stat_all_tables 
WHERE seq_tup_read &amp;gt; '0' 
      AND relname NOT LIKE 'pg_%'
ORDER BY seq_tup_read desc
</pre>



<p>As you can see, PostgreSQL keeps all the information we need about our tables in one place, the <em>pg_stat_all_tables</em> view. Its <em>seq_tup_read</em> column contains the number of tuples read through sequence scans, which is exactly the information we need.</p>



<p>Just reading out this information is not enough, because the counters accumulate from the startup of your PostgreSQL database onwards. Since production databases aren&#8217;t restarted (that often), we will have to compare this information with previously collected values (hence the <em>Storable</em> state file).<br />Our plan is to run the script in a cronjob, every 5 minutes.</p>
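

<p>The comparison boils down to subtracting the previously stored counter from the current one. A standalone sketch with made-up numbers:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict;
use warnings;

# Hypothetical counters: the value saved in the state file last run,
# and the value pg_stat_all_tables reports now
my %last = ('public:orders', 1000);
my %now  = ('public:orders', 1250);

for my $rel (sort keys %now) {
    # If the relation is new, fall back to its current value (delta 0)
    my $increase = $now{$rel} - ($last{$rel} // $now{$rel});
    printf "%s read %d extra tuples since the last run\n", $rel, $increase;
}
</pre>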



<p>The statistics are also stored as a JSON object in a database, so that we can build a web interface on top of them at a later stage. It also means we keep a history of these statistics.</p>



<p>Furthermore, the script will <em>setuid</em> to postgres (like <em>su &#8211; postgres</em> on the command line), so that it can connect to the PostgreSQL UNIX socket file.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict;
use warnings;
use utf8;

use DBI;
use DateTime;
use POSIX qw/setuid/;
use Text::ASCIITable;
use JSON;
use Storable qw/nstore retrieve/;

my $db   = 'mydatabase';
if(scalar @ARGV){
    $db = shift @ARGV;
}

my $host = '/var/run/postgresql';
my $user = 'postgres';
my $pass = undef;

my $state_db   = 'database_statistics';
my $state_host = '192.168.1.1';
my $state_user = 'skeletor';
my $state_pass = 'he-manisawhimp';

my $state_file = '/var/tmp/sequence_read.state';

# suid to postgres
setuid(scalar getpwnam 'postgres');

# define and open up the state file
my $state = {};
$state = retrieve $state_file if -f $state_file;

my $now      = DateTime-&amp;gt;now;

# Connect to the database which we want to monitor
my $dbh = DBI-&amp;gt;connect("dbi:Pg:dbname=$db;host=$host", $user, $pass) 
                or die "Could not connect to database: $!\n";

# Connect to the database that will be used to store the statistics
my $state_dbh = DBI-&amp;gt;connect("dbi:Pg:dbname=$state_db;host=$state_host", $state_user, $state_pass) 
                or die "Could not connect to the State database '$state_db': $!\n";

my $sql = &amp;lt;&amp;lt;EOF;
SELECT schemaname, relname, seq_tup_read 
FROM pg_stat_all_tables 
WHERE seq_tup_read &amp;gt; '0' 
      AND relname NOT LIKE 'pg_%'
ORDER BY seq_tup_read desc
EOF

# Get the statistics
my $results = $dbh-&amp;gt;selectall_arrayref( $sql, undef);

# Store the statistics as a JSON object in the second database
eval {
    $state_dbh-&amp;gt;do('INSERT INTO mydbschema.seq_tup_read (data) VALUES(?)', undef, encode_json($results));
};
if($@){
    print "Insert into state-db failed: $@\n";
}

# Prepare a nice ASCII table for output
my $t = Text::ASCIITable-&amp;gt;new({ headingText =&amp;gt; 'Seq Tup Read ' . $now-&amp;gt;ymd('-')     . ' ' . $now-&amp;gt;hms(':')});
$t-&amp;gt;setCols('Schema Name','Relation Name ', 'Seq Tup Read', 'Increase (delta)');

my $row_count = 0;
foreach my $r (@{$results}){
    last if $row_count &amp;gt; 25;

    my (@values) = (@{$r});
    my ($increase, $delta) = (0, 0);
    # Calculate the increase and its delta
    if(defined $state-&amp;gt;{last}{$r-&amp;gt;[0].':'.$r-&amp;gt;[1]}{seq_tup_read}){
        $increase = $r-&amp;gt;[2] - $state-&amp;gt;{last}{$r-&amp;gt;[0].':'.$r-&amp;gt;[1]}{seq_tup_read};
        $delta    = $increase / $state-&amp;gt;{last}{$r-&amp;gt;[0].':'.$r-&amp;gt;[1]}{seq_tup_read} * 100;
        my $str = sprintf '%.0f (%.4f %%)', $increase, $delta;
        push @values, ($str);
    }
    else {
        push @values, '0 (0%)';
    }
    # Store this information for the next run of the script
    $state-&amp;gt;{last}{$r-&amp;gt;[0].':'.$r-&amp;gt;[1]}{seq_tup_read} = $r-&amp;gt;[2];
    $state-&amp;gt;{last}{$r-&amp;gt;[0].':'.$r-&amp;gt;[1]}{delta}        = $delta;
    $state-&amp;gt;{last}{$r-&amp;gt;[0].':'.$r-&amp;gt;[1]}{increase}     = $increase;

    # Only add the information to ASCII output table if there was an increase
    next unless $increase &amp;gt; 0;
    $t-&amp;gt;addRow(@values);
    $row_count++;
}
# Print out the ASCII table
print $t;

nstore $state, $state_file;

</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2014/02/postgresql-monitor-sequence-scans-perl/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Postgresql: Monitor unused indexes</title>
		<link>https://jmorano.moretrix.com/2014/02/postgresql-monitor-unused-indexes/</link>
					<comments>https://jmorano.moretrix.com/2014/02/postgresql-monitor-unused-indexes/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 11 Feb 2014 09:09:08 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Postgresql]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1057</guid>

					<description><![CDATA[Working on large database systems, with many tables and many indexes, it is easy to loose the overview&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Working on large database systems, with many tables and many indexes, it is easy to lose the overview of what is actually being used and what is just consuming disk space.<br />If indexes are not closely monitored, they can end up wasting disk space and, moreover, consuming unnecessary CPU cycles on every write.</p>



<p>Statistics about indexes can be easily retrieved from the PostgreSQL database system. All required information is stored in two tables:</p>



<ul class="wp-block-list"><li>pg_stat_user_indexes</li><li>pg_index</li></ul>



<p>When joining these two tables, interesting information can be read in the following columns:</p>



<ul class="wp-block-list"><li>idx_scan: the number of times the query planner used this index for an &#8216;Index Scan&#8217;</li><li>idx_tup_read: how many tuples have been read by using the index</li><li>idx_tup_fetch: how many tuples have been fetched by using the index</li></ul>



<p>A neat function called <em>pg_relation_size()</em> allows us to fetch the on-disk size of a relation, in this case the index.</p>
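

<p>For example, the size of a single index can be checked directly; the index name here is hypothetical:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">SELECT pg_size_pretty(pg_relation_size('my_index'));
</pre>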



<p>Based on this information, the monitoring query will be built up as follows:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">SELECT 
    relid::regclass AS table, 
    indexrelid::regclass AS index, 
    pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS index_size, 
    idx_tup_read, 
    idx_tup_fetch, 
    idx_scan
FROM 
    pg_stat_user_indexes 
    JOIN pg_index USING (indexrelid) 
WHERE 
    idx_scan = 0 
    AND indisunique IS FALSE
</pre>



<p>Now, all we need to do is write a script which stores this information in some kind of file and periodically reports on the statistics.</p>



<p>First of all we will need a configuration file, which contains the database credentials.<br />I&#8217;ve chosen YAML because it is so versatile.</p>



<p>It will contain two important sets of information:</p>



<ul class="wp-block-list"><li>The database credentials</li><li>The path to the state file</li></ul>



<p>Example:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dsn: "dbi:Pg:host=/var/run/postgresql;database=testdb"
user: postgres
pass:
state_file: /var/tmp/monitor_unused_indexes.state
</pre>



<p>As you can see, we will be connecting to the PostgreSQL database by using its UNIX socket.</p>



<p>The script will use <em>Text::ASCIITable</em> to output the statistics in a nice table. <em>Storable</em> is used to save our statistics to disk.</p>



<p>In the script below, we check whether an index has gone unused for a span of 30 days. If so, the script reports that index to STDOUT.<br />To that end, we store a score and a timestamp for each unused index in the state file.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use DBI;
use Storable qw/nstore retrieve/;
use YAML qw/LoadFile/;
use POSIX qw/setuid/;
use Getopt::Long;
use DateTime;
use Text::ASCIITable;

my $cfg_file = './monitor_unused_indexes.yaml';
my $verbose = 0;
GetOptions("cfg=s" =&amp;gt; \$cfg_file,
           "verbose|v" =&amp;gt; \$verbose, 
        );

my $sql = &amp;lt;&amp;lt;EOS;
SELECT 
    relid::regclass AS table, 
    indexrelid::regclass AS index, 
    pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS index_size, 
    idx_tup_read, 
    idx_tup_fetch, 
    idx_scan
FROM 
    pg_stat_user_indexes 
    JOIN pg_index USING (indexrelid) 
WHERE 
    idx_scan = 0 
    AND indisunique IS FALSE
EOS

my ($cfg) = LoadFile($cfg_file);

# setuid to postgres, or whatever user is configured in the config.yaml file
setuid(scalar getpwnam $cfg-&amp;gt;{user});

# Connect to the database
my $dbh = DBI-&amp;gt;connect($cfg-&amp;gt;{dsn}, $cfg-&amp;gt;{user}, $cfg-&amp;gt;{pass}) 
            or die "Could not connect to database: $! (DBI ERROR: ".$DBI::errstr.")\n";

my $state;
if(-f $cfg-&amp;gt;{state_file}){
    $state = retrieve $cfg-&amp;gt;{state_file};
}

# Fetch the statistics
my $results = $dbh-&amp;gt;selectall_arrayref( $sql, undef );

my $now_dt   = DateTime-&amp;gt;now;

# Initialize the ASCII table
my $t = Text::ASCIITable-&amp;gt;new({ headingText =&amp;gt; 'INDEX STATISTICS'});
$t-&amp;gt;setCols(qw/Table Index Index_Size idx_tup_read idx_tup_fetch idx_scan/);

# Analyze the results
foreach my $r (@$results){
    if($verbose){
        $t-&amp;gt;addRow(@{$r});
    }
    # Only update the state file if --verbose was not specified.
    # This way the script can be check manually with --verbose many times and executed for instance
    # from a cronjob once a day without --verbose
    else {
        if(defined $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}){
            my $first_dt = DateTime-&amp;gt;from_epoch( epoch =&amp;gt; $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}{first_hit} );
            if($first_dt-&amp;gt;add(days =&amp;gt; $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}{score})-&amp;gt;day == $now_dt-&amp;gt;day ) {
                $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}{score}++;
            }
            else {
                $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}{score}     = 1;
                $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}{first_hit} = $now_dt-&amp;gt;epoch;
            }
        }
        else {
            $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}{score}     = 1;
            $state-&amp;gt;{unused_indexes}{$r-&amp;gt;[1]}{first_hit} = $now_dt-&amp;gt;epoch;
        }
    }
}

# Print out the statistics table, if --verbose was specified
print $t if $verbose; 

# Store the statistics to disk in a state file
nstore $state, $cfg-&amp;gt;{state_file};

foreach my $idx (keys %{ $state-&amp;gt;{unused_indexes} }){
    my $first_dt = DateTime-&amp;gt;from_epoch( epoch =&amp;gt; $state-&amp;gt;{unused_indexes}{$idx}{first_hit} );
    if( $first_dt-&amp;gt;add(days =&amp;gt; 30) &amp;lt;= $now_dt ){
        my $line = "Index: $idx ready for deletion";
        $line .= " (score: " . $state-&amp;gt;{unused_indexes}{$idx}{score};
        $line .= " | first_hit: " . DateTime-&amp;gt;from_epoch(epoch =&amp;gt; $state-&amp;gt;{unused_indexes}{$idx}{first_hit})-&amp;gt;ymd . ")";

        print $line."\n" if $verbose;
    }
}
</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2014/02/postgresql-monitor-unused-indexes/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Postgresql 9.3: Creating an index on a JSON attribute</title>
		<link>https://jmorano.moretrix.com/2013/12/postgresql-9-3-creating-index-json-attribute/</link>
					<comments>https://jmorano.moretrix.com/2013/12/postgresql-9-3-creating-index-json-attribute/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Fri, 27 Dec 2013 10:28:25 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[Web]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Postgresql]]></category>
		<category><![CDATA[SQL]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1036</guid>

					<description><![CDATA[Recently I&#8217;ve discovered some very interesting new features in the PostgreSQL 9.3 database.First of all, a new data&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Recently I&#8217;ve discovered some very interesting new features in the PostgreSQL 9.3 database.<br />First of all, a new data type has been introduced: <a title="Datatype JSON" href="http://www.postgresql.org/docs/9.3/static/datatype-json.html" target="_blank" rel="noopener">JSON</a>. Together with this new data type, <a title="JSON Functions" href="http://www.postgresql.org/docs/9.3/static/functions-json.html" target="_blank" rel="noopener">new functions</a> were also introduced.</p>



<p>These new features simplify, for instance, saving web forms in your PostgreSQL database. Or actually any kind of dynamic data, such as Perl hashes. Plus, thanks to the new JSON functions, this data can easily be searched and indexed.</p>



<p>Let&#8217;s start with creating a test table.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">CREATE SEQUENCE data_seq    
    START WITH 1    
    INCREMENT BY 1    
    NO MINVALUE    
    NO MAXVALUE    
    CACHE 1;

CREATE TABLE data (    
    id bigint DEFAULT nextval('data_seq'::regclass) NOT NULL,
    form_name TEXT,
    form_data JSON
);
</pre>



<p>I&#8217;ve inserted 100k rows of test data into this table with a very simple Perl script.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/perl
use strict;
use DBI;
use AnyEvent;
use AnyEvent::Util;
$AnyEvent::Util::MAX_FORKS = 25;

print "Inserting test data...\n";
my $cv = AnyEvent-&amp;gt;condvar;
$cv-&amp;gt;begin;
foreach my $d (0..100000){
    $cv-&amp;gt;begin;
    fork_call {
        my($d) = @_;
        my $name = do{local $/; open my $c, '-|', 'pwgen -B -s -c1 64'; &amp;lt;$c&amp;gt;};
        chomp($name);
        my $dbh = DBI-&amp;gt;connect("dbi:Pg:host=/var/run/postgresql;dbname=test;port=5432",'postgres', undef);
        $dbh-&amp;gt;do(qq{insert into data (form_name,form_data) VALUES('test_form', '{"c":{"d":"ddddd"},"name":"$name","b":"bbbbb", "count":$d}')});
        $dbh-&amp;gt;disconnect;
        return $d;
    } $d,
    sub {
        my ($count) = @_;
        print "$d ";
        $cv-&amp;gt;end;
    }
} 
$cv-&amp;gt;end;
$cv-&amp;gt;recv;
print "\n\nDone\n";
</pre>



<p>Now let&#8217;s assume that the JSON data we are going to insert (or have inserted) always contains the attribute field &#8216;name&#8217;. On this attribute we will create the following database index:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">CREATE INDEX ON data USING btree (form_name, json_extract_path_text(form_data,'name'));
</pre>



<p>The above statement creates a multi-column index: the first column is the plain <em>form_name</em> column, the second is the expression <em>json_extract_path_text(form_data, 'name')</em>.</p>



<p>Now let&#8217;s make our first test.<br />The first test will not use the index we created previously, because the <em>-&amp;gt;&amp;gt;</em> operator is not the expression the index was built on.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">EXPLAIN ANALYZE VERBOSE SELECT * FROM data WHERE form_name = 'test_form' AND form_data-&amp;gt;&amp;gt;'name' = 'cbcO5twuPnAYJ1VLV6gsEv9zWs2AbQxQ9PoALLr2w6Rwpr2PtoQHCCK0hyOMuIME';
                                                                             QUERY PLAN                                                                              
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Seq Scan on data  (cost=0.00..4337.28 rows=500 width=102) (actual time=28.608..129.945 rows=1 loops=1)
   Filter: ((data.form_name = 'test_form'::text) AND ((data.form_data -&amp;gt;&amp;gt; 'name'::text) = 'cbcO5twuPnAYJ1VLV6gsEv9zWs2AbQxQ9PoALLr2w6Rwpr2PtoQHCCK0hyOMuIME'::text))
   Rows Removed by Filter: 100000
 Total runtime: 129.968 ms
(5 rows)

</pre>



<p>130ms for searching through 100k rows is actually quite OK.</p>



<p>Now let&#8217;s see how we can speed up this query by using the index we&#8217;ve created.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">EXPLAIN ANALYZE VERBOSE SELECT * FROM data WHERE form_name = 'test_form' AND json_extract_path_text(form_data,'name') = 'cbcO5twuPnAYJ1VLV6gsEv9zWs2AbQxQ9PoALLr2w6Rwpr2PtoQHCCK0hyOMuIME';
                                                                             QUERY PLAN                                                                                                
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Index Scan using data_form_name_json_extract_path_text_idx on data  (cost=0.42..8.44 rows=1 width=102) (actual time=0.056..0.057 rows=1 loops=1)
   Index Cond: ((data.form_name = 'test_form'::text) AND (json_extract_path_text(data.form_data, VARIADIC '{name}'::text[]) = 'cbcO5twuPnAYJ1VLV6gsEv9zWs2AbQxQ9PoALLr2w6Rwpr2PtoQHCCK0hyOMuIME'::text))
 Total runtime: 0.084 ms
(4 rows)

</pre>



<p>0.084ms! That is roughly 1,500 times faster! What makes this index extremely interesting is that it has been created on just one attribute of the JSON data and not on the entire JSON document. This keeps the index small, so it can stay in your database&#8217;s memory longer.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2013/12/postgresql-9-3-creating-index-json-attribute/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Datatables and Perl (and a little bit of jQuery)</title>
		<link>https://jmorano.moretrix.com/2013/10/datatables-perl-and-bit-jquery/</link>
					<comments>https://jmorano.moretrix.com/2013/10/datatables-perl-and-bit-jquery/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Wed, 09 Oct 2013 14:05:45 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[Web]]></category>
		<category><![CDATA[Ajax]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[HTML]]></category>
		<category><![CDATA[JavaScript]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Web 2.0]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1014</guid>

					<description><![CDATA[Recently I&#8217;ve stumbled on a pretty cool OpenSource project called &#8221;datatables&#8221; (http://datatables.net/), which allows to easily create tables&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Recently I&#8217;ve stumbled on a pretty cool OpenSource project called &#8221;datatables&#8221; (<a title="Datatables" href="http://datatables.net/" target="_blank" rel="noopener">http://datatables.net/</a>), which makes it easy to create HTML tables that can be:</p>



<ul class="wp-block-list"><li>sorted</li><li>searched</li><li>paginated</li><li>scrolled infinitely</li><li>themed</li><li>&#8230;</li></ul>



<p>And most importantly: it&#8217;s free! I&#8217;ve always wanted to create an infinitely scrolling table, and now it&#8217;s just too easy:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">$(document).ready( function() {
    oTable = $('#ip_data').dataTable( {
        "bProcessing":     true,
        "bServerSide":     true,
        "bPaginate":       true,  
        "bScrollInfinite": true,
        "bScrollCollapse": true,
        "sScrollY":        "200px",
        "sAjaxSource":     "get_ip_data.pl",
    } );
} );
</pre>



<p>And that&#8217;s it! Well, OK, you need to include the JavaScript and CSS files of the DataTables project of course, and you need to create the table in HTML.</p>



<p>For instance:</p>



<figure id="ip_data" class="wp-block-table"><table><thead><tr><th>IP</th><th>Country</th><th>City</th><th>Latitude</th><th>Longitude</th></tr></thead><tbody><tr><td>Loading data from server</td></tr></tbody><tfoot><tr><th>IP</th><th>Country</th><th>City</th><th>Latitude</th><th>Longitude</th></tr></tfoot></table></figure>






<p>And then you will need a Perl script that provides the data for the table.<br />The example below allows you to:</p>



<ul class="wp-block-list"><li>search the tables</li><li>scroll infinitely</li><li>sort on the columns</li></ul>



<p>It also supplies DataTables with the total number of rows in the database table.</p>



<p>The following script will be saved as &#8221;get_ip_data.pl&#8221;:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/perl
use strict; use warnings;
use DBI;
use JSON;
use CGI;

my @columns = qw/ip country_name city latitude longitude/;

my $q = CGI-&amp;gt;new;
my $db = DBI-&amp;gt;connect("dbi:mysql:host=localhost;db=testdb", 'testuser', 'xxxsecret');

my $params = $q-&amp;gt;Vars;

# Get the total count of rows in the table
my $sql_count = "select count(id) from geo_data";
my $count = $db-&amp;gt;selectrow_arrayref($sql_count)-&amp;gt;[0];

# Start building up the database query
my @values;
my $sql = "select ip,country_name,city,latitude,longitude from geo_data";

# if a search parameter was supplied in the AJAX call, build the WHERE part in the SQL statement
if( $params-&amp;gt;{sSearch} ){
    $sql .= ' WHERE ';
    $sql .= 'ip LIKE ? OR country_name LIKE ? or city LIKE ? or latitude LIKE ? or longitude LIKE ?';
    push @values, ('%'.$params-&amp;gt;{sSearch}.'%','%'.$params-&amp;gt;{sSearch}.'%','%'.$params-&amp;gt;{sSearch}.'%','%'.$params-&amp;gt;{sSearch}.'%','%'.$params-&amp;gt;{sSearch}.'%');
}

# if a sorting parameter was supplied in the AJAX call, build up the ORDER BY part in the SQL statement
if( $params-&amp;gt;{iSortingCols} ){
    $sql .= ' ORDER BY';
    foreach my $c (0 .. ( $params-&amp;gt;{iSortingCols} -1 )){
        $sql .= ' ' . $columns[ $params-&amp;gt;{"iSortCol_$c"} ] . ' ' . $params-&amp;gt;{"sSortDir_$c"};
        $sql .= ','
    }
    $sql =~ s/,$//;
}

# Limit the output and also allow to paginate or scroll infinitely
$sql .= " LIMIT ? OFFSET ?";
push @values, (($params-&amp;gt;{iDisplayLength} &amp;gt; 0 ? $params-&amp;gt;{iDisplayLength} : 25), ( $params-&amp;gt;{iDisplayStart} // 0));

# Fetch the data from the database
my $data = $db-&amp;gt;selectall_arrayref($sql, { Slice =&amp;gt; [] }, @values);

# Return the JSON object
print $q-&amp;gt;header('application/json');
my $json = encode_json({ aaData =&amp;gt; $data, iTotalRecords =&amp;gt; $count, iTotalDisplayRecords =&amp;gt; $count, sEcho =&amp;gt; int($params-&amp;gt;{sEcho}) });
print $json;
</pre>



<p>An example can be found over here: <a title="Charon Map" href="http://www.moretrix.com/~insaniac/map/map.pl" target="_blank" rel="noopener">http://www.moretrix.com/~insaniac/map/map.pl</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2013/10/datatables-perl-and-bit-jquery/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Google GeoChart, JSON and Perl</title>
		<link>https://jmorano.moretrix.com/2013/10/google-geochart-json-perl/</link>
					<comments>https://jmorano.moretrix.com/2013/10/google-geochart-json-perl/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Wed, 09 Oct 2013 09:33:48 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[Web]]></category>
		<category><![CDATA[Ajax]]></category>
		<category><![CDATA[API]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[JavaScript]]></category>
		<category><![CDATA[jQuery]]></category>
		<category><![CDATA[MySQL]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1006</guid>

					<description><![CDATA[The Google API GeoChart Map (https://developers.google.com/chart/interactive/docs/gallery/geochart) is pretty nice widget to generate nice maps based on certain values.&#8230;]]></description>
										<content:encoded><![CDATA[
<p>The Google API GeoChart Map (<a title="Google GeoChart Map" href="https://developers.google.com/chart/interactive/docs/gallery/geochart" target="_blank" rel="noopener">https://developers.google.com/chart/interactive/docs/gallery/geochart</a>) is a pretty nice widget for generating maps based on certain values. It has quite a lot of features and it is very easy to use.</p>



<p>Before we look at the Google API for GeoChart, let&#8217;s first set up a script which will get data out of a database and return it in a JSON formatted object.<br />In this example we will use Perl and three Perl modules:</p>



<ul class="wp-block-list"><li>DBI</li><li>JSON</li><li>CGI</li></ul>



<p>When converting database values to a JSON object (or text string), it is very important that all data is properly type-cast.<br />In the following example you will see how we do this:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">$_-&amp;gt;[1] = int($_-&amp;gt;[1]) foreach @$data;
</pre>



<p>This snippet turns our INTEGER values into real integers. The DBI module returns every value as a plain string (that&#8217;s just how DBI works), so without the cast the JSON encoder would quote the numbers.</p>
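

<p>The effect is easy to see with the core <em>JSON::PP</em> module; this is a small standalone illustration, not part of the script below:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict;
use warnings;
use JSON::PP qw(encode_json);

# DBI hands numeric columns back as strings, so without a cast
# the JSON encoder quotes them
my @row = ('Belgium', '42');
print encode_json(\@row), "\n";    # ["Belgium","42"]

# After int(), the value is encoded as a JSON number
$row[1] = int $row[1];
print encode_json(\@row), "\n";    # ["Belgium",42]
</pre>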



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/perl
use strict; use warnings;
use DBI;
use JSON;
use CGI;

my $q = CGI-&amp;gt;new;
my $db = DBI-&amp;gt;connect("dbi:mysql:host=localhost;db=testdb", 'testuser', 'xxxsecret');

my $sql = "SELECT country_name, count(id) as total from geo_data group by country_name";
my $data = $db-&amp;gt;selectall_arrayref($sql);
$_-&amp;gt;[1] = int($_-&amp;gt;[1]) foreach @$data;
unshift(@$data, ['Country', 'Attacks']);

print $q-&amp;gt;header('application/json');
my $json = encode_json($data);
print $json;
</pre>



<p>The above Perl script will be saved as &#8221;get_countries_data.pl&#8221;.</p>



<p>In the JavaScript example below, we will use the Google API for the GeoChart Map and <a title="jQuery" href="http://www.jquery.com/" target="_blank" rel="noopener">jQuery</a> for making the AJAX call to our Perl script. Since the Perl script already provides the data in JSON format, we do not need to convert or parse it.<br />Furthermore, the JavaScript code is pretty straightforward and based on the example found at https://developers.google.com/chart/interactive/docs/gallery/geochart, except for the AJAX part.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="js" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">    google.load('visualization', '1', {packages: ['geochart']});
    google.setOnLoadCallback(drawVisualization);

    function drawVisualization() {
        var options = {
            height: '500',
            width: '1200',
            colorAxis: {minValue: 0,  colors: ['#FFC26B', 
                                               '#FFAF3B', 
                                               '#FF9700', 
                                               '#C1852F', 
                                               '#A86400']},
            datalessRegionColor: '#FAFAFA',
            backgroundColor: '#F4EFE7',
        };
    
        $.ajax({
            type: 'POST',
            url: "get_countries_data.pl",
            dataType: "json",
            async: false,
            success: function(json_data) {
                var data = google.visualization.arrayToDataTable(json_data);
                var chart = new google.visualization.GeoChart(
                                   document.getElementById('visualization') );

                chart.draw(data, options);
            }
        });
    }</pre>



<p>An example of this setup can be found at <a title="Charon Map" href="http://www.moretrix.com/~insaniac/map/map.pl" target="_blank" rel="noopener">http://www.moretrix.com/~insaniac/map/map.pl</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2013/10/google-geochart-json-perl/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Secure Password Generator in Perl</title>
		<link>https://jmorano.moretrix.com/2013/08/secure-password-generator-perl/</link>
					<comments>https://jmorano.moretrix.com/2013/08/secure-password-generator-perl/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 13 Aug 2013 13:27:18 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[Crypto]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Security]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=953</guid>

					<description><![CDATA[A secure and very random password generator module written in Perl.It can be used to generate passwords or&#8230;]]></description>
										<content:encoded><![CDATA[
<p>A secure and very random password generator module written in Perl.<br />It can be used to generate passwords or unique strings which can be used in all sorts of operations.</p>



<p>The default character set is alpha-numerical based, but can be set to any kind of character list.</p>



<p>The complete handling and generating is implemented in a module, which exports one function: &#8216;<code>generate_password</code>&#8217;.<br />This function takes two optional arguments:</p>



<ul class="wp-block-list"><li>a length</li><li>a character list</li></ul>



<p>The entropy is generated with <code>Bytes::Random::Secure</code> and random numbers are generated with <code>Math::Random::ISAAC</code>.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">package MORETRIX::Password;
#===============================================================================
#  DESCRIPTION: A password generator module
#     REVISION: $Id: Password.pm 71 2013-07-02 12:28:42Z jmorano $
#===============================================================================

use strict;
use warnings;
use Digest;
use Exporter qw/import/;
use Time::HiRes qw/time/;
use Bytes::Random::Secure;
use Math::Random::ISAAC;

our @EXPORT    = qw/generate_password/;
our @EXPORT_OK = qw/generate_password/;

my $random_state;
my $CHARLIST = q{abcdefghijklmnopqrstuvwxyz0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ!"$%&amp;/\()=?{}[]*+#;:.,-_&lt;>|^~'};

# Generate a cryptographic safe random password
# default length: 12
#
sub generate_password {
    my ($length, $charlist) = @_;
    $length   //= 12;
    $charlist //= $CHARLIST;

    my @temp_passwords;
    foreach my $loop ( 0 .. int(myrand(100)) ){
        my $password = '';
        while (length($password) &lt; $length) {
            $password .= substr($charlist, (int(myrand(length($charlist)))), 1);
        }
        push @temp_passwords, $password;
    }

    return $temp_passwords[int(myrand(scalar @temp_passwords))];
}

sub random_number {
    my ($seed) = @_;

    my $r = Math::Random::ISAAC->new($seed);
    return $r->rand(); # float in [0, 1)
}

sub mysrand{
    my $seed = shift || (time ^ $$ ^ int(random_number(time)) ^ int(random_number(2048 ^ 128)));
    $random_state = {
        digest  => new Digest ("SHA-512"),
        counter => 0,
        waiting => [],
        prev    => $seed
    };
}

sub myrand{
    my $range = shift || 1.0;
    mysrand() unless defined $random_state;

    if (! @{$random_state->{waiting}}){
        $random_state->{digest}->reset();
        $random_state->{digest}->add( generate_entropy(4096) .
                                     $random_state->{counter}++ .
                                     $random_state->{prev});
        $random_state->{prev} = $random_state->{digest}->digest();
        my @ints = unpack("L*", $random_state->{prev}); # 32 bit unsigned integers
        $random_state->{waiting} = \@ints;
    }
    my $int = shift @{$random_state->{waiting}};
    return $range * $int / 2**32;
}

sub generate_entropy {
    my ($length) = @_;

    $length //= 1024;

    my $random = Bytes::Random::Secure->new( NonBlocking => 1, Bits => 4096 );
    return $random->string_from($CHARLIST, $length);
}

1;</pre>



<p>Example script:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/env perl

use strict;
use warnings;
use utf8;
use MORETRIX::Password;

my $length = shift @ARGV;
$length //= 32;

print generate_password($length) . "\n";
print generate_password($length) . "\n";
print generate_password($length) . "\n";
print generate_password($length) . "\n";
print generate_password($length) . "\n";</pre>



<p>References:</p>



<ul class="wp-block-list"><li>http://wellington.pm.org/archive/200704/randomness/#slide0</li></ul>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2013/08/secure-password-generator-perl/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 9.2 Master &#8211; Slave Monitoring</title>
		<link>https://jmorano.moretrix.com/2013/08/postgresql-9-2-master-slave-monitoring/</link>
					<comments>https://jmorano.moretrix.com/2013/08/postgresql-9-2-master-slave-monitoring/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 13 Aug 2013 13:07:04 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Bash]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Nagios]]></category>
		<category><![CDATA[Postgresql]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=943</guid>

					<description><![CDATA[Nagios plugin script written in Bash to check the master-slave replication in PostgreSQL (tested on PostgreSQL 9.2.4) (executed&#8230;]]></description>
										<content:encoded><![CDATA[<p>A Nagios plugin script written in Bash to check master&#8211;slave replication in PostgreSQL (tested on PostgreSQL 9.2.4); it is executed on the slave.<br />
The script reports how many bytes the slave server is behind, and how many seconds ago the last replay of data occurred.</p>
<p>The script must be executed as &#8216;postgres&#8217; user.</p>
<pre class="brush:bash">
#!/bin/bash

# $Id: check_slave_replication.sh 3421 2013-08-09 07:52:44Z jmorano $

STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
 
## Master (p_) and Slave (s_) DB Server Information	
export s_host=$1
export s_port=$2
export p_db=$3
export p_host=$4
export p_port=$5
 
export psql=/opt/postgresql/bin/psql
export bc=/usr/bin/bc
 
## Limits
export  critical_limit=83886080 # 5 * 16MB, size of 5 WAL files
export   warning_limit=16777216 # 16 MB, size of 1 WAL file
 
master_lag=$($psql -U postgres -h$p_host -p$p_port -A -t -c "SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset" $p_db)
slave_lag=$($psql -U postgres  -h$s_host -p$s_port -A -t -c "SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS receive" $p_db)
replay_lag=$($psql -U postgres -h$s_host -p$s_port -A -t -c "SELECT pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0') AS replay" $p_db)
replay_timediff=$($psql -U postgres -h$s_host -p$s_port -A -t -c "SELECT NOW() - pg_last_xact_replay_timestamp() AS replication_delay" $p_db)
 
if [[ -z $master_lag || -z $slave_lag || -z $replay_lag ]]; then
    echo "CRITICAL: Stream has no value to compare (is replication configured or connectivity problem?)"
    exit $STATE_CRITICAL
else
    if [[ $master_lag -eq $slave_lag && $master_lag -eq $replay_lag && $slave_lag -eq $replay_lag ]] ; then
        echo "OK: Stream: MASTER:$master_lag Slave:$slave_lag Replay:$replay_lag"
        exit $STATE_OK
    else
        if [[ $master_lag -eq $slave_lag ]] ; then
            if [[ $master_lag -ne $replay_lag ]] ; then
                if [ $(bc <<< $master_lag-$replay_lag) -lt $warning_limit ]; then
                    echo "OK: Stream: MASTER:$master_lag Replay:$replay_lag :: REPLAY BEHIND"
                    exit $STATE_OK
                else
                    echo "WARNING: Stream: MASTER:$master_lag Replay:$replay_lag :: REPLAY $(bc <<< $master_lag-$replay_lag)bytes BEHIND (${replay_timediff}seconds)"
                    exit $STATE_WARNING
                fi
            fi
        else
            if [ $(bc <<< $master_lag-$slave_lag) -gt $critical_limit ]; then
                echo "CRITICAL: Stream: MASTER:$master_lag Slave:$slave_lag :: STREAM BEYOND CRITICAL LIMIT ($(bc <<< $master_lag-$slave_lag)bytes)"
                exit $STATE_CRITICAL
            else
                if [ $(bc <<< $master_lag-$slave_lag) -lt $warning_limit ]; then
                    echo "OK: Stream: MASTER:$master_lag Slave:$slave_lag Replay:$replay_lag :: STREAM BEHIND"
                    exit $STATE_OK
                else
                    echo "WARNING: Stream: MASTER:$master_lag Slave:$slave_lag :: STREAM BEYOND WARNING LIMIT ($(bc <<< $master_lag-$slave_lag)bytes)"
                    exit $STATE_WARNING
                fi
            fi
        fi
        echo "UNKNOWN: Stream: MASTER: $master_lag Slave: $slave_lag Replay: $replay_lag"
        exit $STATE_UNKNOWN
    fi
fi
</pre>
<p>Possible outputs:</p>
<pre class="brush:bash">
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
WARNING: Stream: MASTER:1907958306184 Replay:1907878056888 :: REPLAY 80249296bytes BEHIND (00:03:14.056747seconds)
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
OK: Stream: MASTER:2055690128376 Slave:2055690143144 Replay:2055690193744 :: STREAM BEHIND
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
OK: Stream: MASTER:2055690497120 Replay:2055690497328 :: REPLAY BEHIND
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
OK: Stream: MASTER:2055691704672 Slave:2055691704672 Replay:2055691704672
</pre>
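<p>To hook the plugin into Nagios, a command definition along these lines can be used. This is illustrative only: the paths, host names and the <code>sudo</code> setup are placeholders for your own environment, since the script must run as the &#8216;postgres&#8217; user:</p>

```
# commands.cfg (illustrative -- adjust paths and arguments to your setup)
define command {
    command_name  check_slave_replication
    command_line  /usr/bin/sudo -u postgres /usr/lib/nagios/plugins/check_slave_replication.sh $HOSTADDRESS$ 5432 $ARG1$ $ARG2$ 5432
}

# services.cfg: $ARG1$ is the database, $ARG2$ the master host
define service {
    use                  generic-service
    host_name            pg-slave
    service_description  PostgreSQL replication lag
    check_command        check_slave_replication!live!192.168.0.1
}
```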
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2013/08/postgresql-9-2-master-slave-monitoring/feed/</wfw:commentRss>
			<slash:comments>14</slash:comments>
		
		
			</item>
	</channel>
</rss>
