<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Monitoring &#8211; Johnny Morano&#039;s Tech Articles</title>
	<atom:link href="https://jmorano.moretrix.com/tag/monitoring/feed/" rel="self" type="application/rss+xml" />
	<link>https://jmorano.moretrix.com</link>
	<description>Ramblings of an old-fashioned space cowboy</description>
	<lastBuildDate>Tue, 22 Nov 2022 07:21:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.2</generator>

<image>
	<url>https://jmorano.moretrix.com/wp-content/uploads/2022/04/cropped-jmorano_emblem-32x32.png</url>
	<title>Monitoring &#8211; Johnny Morano&#039;s Tech Articles</title>
	<link>https://jmorano.moretrix.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>A monitoring solution with Docker</title>
		<link>https://jmorano.moretrix.com/2022/11/a-monitoring-solution-with-docker/</link>
					<comments>https://jmorano.moretrix.com/2022/11/a-monitoring-solution-with-docker/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 22 Nov 2022 07:21:51 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Grafana]]></category>
		<category><![CDATA[Loki]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Prometheus]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1587</guid>

					<description><![CDATA[Docker Compose is a great way to set up small test environments locally or remotely. It allows to&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Docker Compose is a great way to set up small test environments, locally or remotely. It allows you to define your infrastructure as code and does not require any prerequisite or post-deployment tasks.</p>



<p>The Docker installation is well documented at <a rel="noreferrer noopener" href="https://docs.docker.com/get-docker/" target="_blank">https://docs.docker.com/get-docker/</a> and is well supported on the most popular operating systems. The installation itself will not be covered in this article. If you want to get familiar with the details of Docker, start with the documentation at <a href="https://docs.docker.com/get-started/" target="_blank" rel="noreferrer noopener">https://docs.docker.com/get-started/</a>.</p>



<p>All code used in this article is available at: <a rel="noreferrer noopener" href="https://github.com/insani4c/docker-monitoring-stack" target="_blank">https://github.com/insani4c/docker-monitoring-stack</a>. </p>



<p>In this article, we will see how to set up a monitoring solution based on:</p>



<ul class="wp-block-list">
<li><a href="https://prometheus.io/" data-type="URL" data-id="https://prometheus.io/" target="_blank" rel="noreferrer noopener">Prometheus</a></li>



<li><a href="https://prometheus.io/docs/guides/node-exporter/" data-type="URL" data-id="https://prometheus.io/docs/guides/node-exporter/" target="_blank" rel="noreferrer noopener">Prometheus Node Exporter</a></li>



<li><a href="https://github.com/prometheus/blackbox_exporter" data-type="URL" data-id="https://github.com/prometheus/blackbox_exporter" target="_blank" rel="noreferrer noopener">Prometheus Blackbox Exporter</a></li>



<li><a href="https://github.com/prometheus/snmp_exporter" data-type="URL" data-id="https://github.com/prometheus/snmp_exporter" target="_blank" rel="noreferrer noopener">Prometheus SNMP Exporter</a></li>



<li><a href="https://grafana.com/oss/loki/" data-type="URL" data-id="https://grafana.com/oss/loki/" target="_blank" rel="noreferrer noopener">Loki</a></li>



<li><a href="https://grafana.com/docs/loki/latest/clients/promtail/" data-type="URL" data-id="https://grafana.com/docs/loki/latest/clients/promtail/" target="_blank" rel="noreferrer noopener">Promtail</a></li>



<li><a href="https://grafana.com/" data-type="URL" data-id="https://grafana.com/" target="_blank" rel="noreferrer noopener">Grafana</a></li>
</ul>



<p>To monitor the deployed containers, we will also deploy <a rel="noreferrer noopener" href="https://github.com/google/cadvisor" data-type="URL" data-id="https://github.com/google/cadvisor" target="_blank">Google&#8217;s cAdvisor</a> container, which provides some interesting statistics and details for our Prometheus/Grafana setup.</p>



<p>The Docker Compose file, called <code data-enlighter-language="generic" class="EnlighterJSRAW">docker-compose.yml</code>, contains all the information of the infrastructure such as:</p>



<ul class="wp-block-list">
<li>network information</li>



<li>volumes</li>



<li>services (the containers)</li>



<li>&#8230;</li>
</ul>



<p>Let&#8217;s start from the top of the file.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">version: '3.8'

name: docmon

volumes:
  grafana-data: {}
  alertmanager-data: {}
  prometheus-data: {}
  loki-data: {}
</pre>



<p>First, at <code data-enlighter-language="generic" class="EnlighterJSRAW">line 1</code>, the Docker Compose file format version is specified, which defines which specifications are allowed. At <code data-enlighter-language="generic" class="EnlighterJSRAW">line 3</code>, a name for the container group or stack is set. Finally, starting at <code data-enlighter-language="generic" class="EnlighterJSRAW">line 5</code>, data volumes (think <em>disks</em>) are defined, which will be used by the containers. These are persistent data volumes which will be reused unless the container has been completely removed.</p>
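

<p>As a side note, a named volume can also be declared with explicit driver options, for instance to back it with a specific directory on the hypervisor. The fragment below is a hypothetical variation (the path is made up), not part of this stack:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">volumes:
  grafana-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /srv/docker/grafana-data
</pre>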



<p>Next, we will define the services in the <code data-enlighter-language="generic" class="EnlighterJSRAW">docker-compose.yml</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="cadvisor and prometheus" data-enlighter-group="docker-compose.yml">services:
  cadvisor:
    image: 'gcr.io/cadvisor/cadvisor:latest'
    container_name: cadvisor
    restart: always
    mem_limit: 512m
    mem_reservation: 32m
    # ports: 
    #   - '8880:8080'
    volumes:
      - '/:/rootfs:ro'
      - '/var/run:/var/run:ro'
      - '/sys:/sys:ro'
      - '/var/lib/docker/:/var/lib/docker:ro'
      - '/dev/disk/:/dev/disk:ro'
    privileged: true
    devices: 
      - '/dev/kmsg:/dev/kmsg'

  prometheus:
    image: 'prom/prometheus:latest'
    container_name: prometheus
    restart: always
    mem_limit: 2048m
    mem_reservation: 256m
    cpus: 2
    # ports:
    #   - '9090:9090'
    volumes:
      - '$PROMETHEUS_HOME/config:/etc/prometheus'
      - 'prometheus-data:/prometheus'
    extra_hosts:
      myrouter: 192.168.1.1
      myswitch: 192.168.1.10
    depends_on:
      - cadvisor
</pre>



<p>Containers are defined as <code data-enlighter-language="generic" class="EnlighterJSRAW">services</code>. Each <code data-enlighter-language="generic" class="EnlighterJSRAW">service</code> will require at least:</p>



<ul class="wp-block-list">
<li>a service name (example <code data-enlighter-language="generic" class="EnlighterJSRAW">line 2</code> and <code data-enlighter-language="generic" class="EnlighterJSRAW">line 20</code>)</li>



<li>an <code data-enlighter-language="generic" class="EnlighterJSRAW">image</code> definition</li>
</ul>



<p>All other options are optional or required by specific images. </p>



<p>The first image or container defined in the above example is <code data-enlighter-language="generic" class="EnlighterJSRAW">cadvisor</code>. This service provides statistics about Docker and the deployed containers to Prometheus. To provide this information, the container must have read access to certain file paths and sockets on the hypervisor (read: the server where the Docker containers will be running). These are listed in the <code data-enlighter-language="generic" class="EnlighterJSRAW">volumes</code> section of the container: directory paths on the hypervisor are mounted into the container, with the <code data-enlighter-language="generic" class="EnlighterJSRAW">readonly</code> (<code data-enlighter-language="generic" class="EnlighterJSRAW">:ro</code>) flag so that the container can&#8217;t make any changes to them.</p>



<p>Furthermore, the definition gives the container access to a <code data-enlighter-language="generic" class="EnlighterJSRAW">device</code> (to read kernel messages), sets <code data-enlighter-language="generic" class="EnlighterJSRAW">memory</code> and <code data-enlighter-language="generic" class="EnlighterJSRAW">cpu</code> limits, and runs the container in <code data-enlighter-language="generic" class="EnlighterJSRAW">privileged</code> mode. The <code data-enlighter-language="generic" class="EnlighterJSRAW">ports</code> section has been commented out, as there is no real need to expose ports or make them available outside the Docker ecosystem. In our example, only Prometheus must be able to connect to it, and since Prometheus will be deployed as a container on the same network, we don&#8217;t need external access to the container&#8217;s web service to read the metrics or see the statistics.</p>



<p>The next container defined is called <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus</code>. For this container, <code data-enlighter-language="generic" class="EnlighterJSRAW">volumes</code> will be mounted to provide the Prometheus configuration files and to store the data on the volume called <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus-data</code>. It also defines an <code data-enlighter-language="generic" class="EnlighterJSRAW">extra_hosts</code> section. These are entries that would typically be defined in an <code data-enlighter-language="generic" class="EnlighterJSRAW">/etc/hosts</code> file, which Docker does not read from the hypervisor. Instead of deploying or mounting the hypervisor&#8217;s <code data-enlighter-language="generic" class="EnlighterJSRAW">hosts</code> file, extra host mappings can be handed to the container, as done in the <code data-enlighter-language="generic" class="EnlighterJSRAW">extra_hosts</code> section above.</p>
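

<p>The Prometheus configuration itself lives in <code data-enlighter-language="generic" class="EnlighterJSRAW">$PROMETHEUS_HOME/config</code> and is not shown here. As an illustrative sketch only (the job names and scrape interval are assumptions, not the repository&#8217;s actual config), a minimal <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus.yml</code> for this stack could look like:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">global:
  scrape_interval: 30s

scrape_configs:
  # container service names resolve via Docker's internal DNS
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']

  - job_name: node
    static_configs:
      - targets: ['hypervisor:9100']
</pre>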



<p>At the end of the <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus</code> container definition, a <code data-enlighter-language="generic" class="EnlighterJSRAW">depends_on</code> section is configured, which means that the <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus</code> container won&#8217;t be started until the containers listed in that section have been started.</p>
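

<p>Note that, by default, <code data-enlighter-language="generic" class="EnlighterJSRAW">depends_on</code> only waits for the dependency container to be started, not for the service inside it to be ready. If stricter ordering is ever needed, the long form combined with a healthcheck can be used. The fragment below is an illustrative sketch, not part of this stack, and assumes the cadvisor image ships <code data-enlighter-language="generic" class="EnlighterJSRAW">wget</code> and serves <code data-enlighter-language="generic" class="EnlighterJSRAW">/healthz</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">  cadvisor:
    healthcheck:
      test: ['CMD', 'wget', '-q', '--spider', 'http://localhost:8080/healthz']
      interval: 30s
      retries: 3

  prometheus:
    depends_on:
      cadvisor:
        condition: service_healthy
</pre>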



<p>Next, we will define all other containers (see the second tab in the above code block, titled <code data-enlighter-language="generic" class="EnlighterJSRAW">the rest</code>).</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="the rest" data-enlighter-group="docker-compose.yml">  hypervisor:
    image: 'prom/node-exporter:latest'
    container_name: hypervisor
    mem_limit: 128m
    mem_reservation: 32m
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
      - '/proc:/host/proc:ro'
      - '/sys:/host/sys:ro'
    command:
      - '--path.rootfs=/host'
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
      - '--collector.systemd'
      - '--collector.cgroups'
    depends_on:
      - cadvisor

  prom_snmp:
    image: 'prom/snmp-exporter:latest'
    container_name: prom_snmp
    restart: always
    mem_limit: 128m
    mem_reservation: 32m
    # ports: 
    #   - '9116:9116'
    volumes:
      - '$PROMSNMP_HOME/config:/etc/snmp_exporter'
    extra_hosts:
      myrouter: 192.168.1.1
      myswitch: 192.168.1.10
    depends_on:
      - cadvisor
      - prometheus

  alertmanager:
    image: 'prom/alertmanager:latest'
    container_name: alertmanager
    restart: always
    mem_limit: 256m
    mem_reservation: 32m 
    # ports:
    #   - 9093:9093
    volumes:
      - '$ALERTMANAGER_HOME/config/alertmanager.yml:/etc/alertmanager/config.yml'
      - 'alertmanager-data:/alertmanager'
    command:
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
    depends_on:
      - cadvisor
      - prometheus

  loki:
    image: 'grafana/loki:latest'
    container_name: loki
    restart: always
    mem_limit: 32768m
    mem_reservation: 8192m
    cpus: 6 
    ports:
      - '3100:3100'
    volumes:
      - '$LOKI_HOME/config:/etc/loki'
      - 'loki-data:/loki'
    depends_on:
      - cadvisor
      - prometheus
      - alertmanager

  blackbox_exporter:
    image: 'prom/blackbox-exporter:latest'
    container_name: blackbox_exporter
    restart: always
    mem_limit: 128m
    mem_reservation: 32m
    dns:
      - 8.8.8.8
      - 8.8.4.4
    # ports:
    #   - 9115:9115
    volumes:
      - '$BLACKBOXEXPORTER_HOME/config:/etc/blackboxexporter/'
    command:
      - '--config.file=/etc/blackboxexporter/config.yml'
    depends_on:
      - cadvisor
      - prometheus

  promtail:
    image: grafana/promtail:latest
    container_name: promtail
    restart: always
    mem_limit: 256m
    mem_reservation: 64m
    volumes:
      - $PROMTAIL_HOME/config:/etc/promtail/
      # to read container labels and logs
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
      - '/var/lib/docker/containers:/var/lib/docker/containers:ro'
      - '/var/log/ulog:/var/log/ulog/:ro'
    depends_on:
      - cadvisor
      - loki

  grafana:
    image: 'grafana/grafana:latest'
    container_name: grafana
    restart: always
    mem_limit: 2048m
    mem_reservation: 256m
    ports:
      - '3000:3000'
    volumes:
      - '$GRAFANA_HOME/config:/etc/grafana'
      - 'grafana-data:/var/lib/grafana'
      - '$GRAFANA_HOME/dashboards:/var/lib/grafana/dashboards'
    depends_on:
      - cadvisor
      - prometheus
      - loki
      - alertmanager
</pre>



<p>The rest of the code will deploy:</p>



<ul class="wp-block-list">
<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">hypervisor</code>, which is actually the Prometheus node-exporter for the hypervisor.</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">prom_snmp</code>, which will retrieve SNMP statistics</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">blackbox_exporter</code>, which mainly checks webservers and their SSL certificates</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">promtail</code>, which collects logs and log statistics from the hypervisor</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">loki</code>, which stores and indexes the logs sent to it by <code data-enlighter-language="generic" class="EnlighterJSRAW">promtail</code> (either the container above, or an instance running on some external server)</li>
</ul>
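

<p>The SNMP and blackbox exporters are probe-style exporters: Prometheus asks the exporter to probe a third-party target, which requires the usual relabelling in the Prometheus scrape configuration. Below is a hedged sketch of such a scrape job (the module name and target URL are assumptions):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">scrape_configs:
  - job_name: blackbox_http
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://jmorano.moretrix.com
    relabel_configs:
      # move the probed URL into the 'target' URL parameter
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      # scrape the exporter container itself
      - target_label: __address__
        replacement: blackbox_exporter:9115
</pre>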



<p>Finally, the last container deployed is the <code data-enlighter-language="generic" class="EnlighterJSRAW">grafana</code> container. Besides its normal configuration file <code data-enlighter-language="generic" class="EnlighterJSRAW">grafana.ini</code>, the Docker container will also automatically provision datasources and dashboards (via the <code data-enlighter-language="generic" class="EnlighterJSRAW">provisioning</code> sub directory in the <code data-enlighter-language="generic" class="EnlighterJSRAW">config</code> directory), so that no manual post-deployment tasks are required once the containers are running.</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="354" height="244" src="https://jmorano.moretrix.com/wp-content/uploads/2022/11/Screenshot-from-2022-11-22-07-52-18.png" alt="" class="wp-image-1595" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/11/Screenshot-from-2022-11-22-07-52-18.png 354w, https://jmorano.moretrix.com/wp-content/uploads/2022/11/Screenshot-from-2022-11-22-07-52-18-300x207.png 300w" sizes="(max-width: 354px) 100vw, 354px" /><figcaption class="wp-element-caption">The grafana files</figcaption></figure>



<p>The datasources can be preconfigured in a YAML file called <code data-enlighter-language="generic" class="EnlighterJSRAW">default.yaml</code>, stored in the <code data-enlighter-language="generic" class="EnlighterJSRAW">provisioning/datasources/</code> sub directory.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">apiVersion: 1

datasources:
 - name: Alertmanager
   type: alertmanager 
   access: proxy
   orgId: 1
   url: http://alertmanager:9093
   version: 1
   editable: false
   isDefault: false
   uid: DS_ALERTMANAGER
   jsonData:
    implementation: prometheus
 - name: Prometheus
   type: prometheus
   access: proxy
   orgId: 1
   url: http://prometheus:9090
   version: 1
   editable: false
   isDefault: true
   uid: DS_PROMETHEUS
   jsonData:
    alertmanagerUid: DS_ALERTMANAGER
    manageAlerts: true
    prometheusType: Prometheus
    prometheusVersion: 2.39.1
 - name: Loki
   type: loki 
   access: proxy
   orgId: 1
   url: http://loki:3100
   version: 1
   editable: false
   isDefault: false
   uid: DS_LOKI
   jsonData:
    alertmanagerUid: DS_ALERTMANAGER
    manageAlerts: true
</pre>



<p>The same goes for the dashboards we want deployed automatically:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">apiVersion: 1

providers:
 - name: 'default'
   orgId: 1
   folder: 'Custom'
   folderUid: ''
   type: file
   options:
     path: /var/lib/grafana/dashboards
</pre>



<p>Finally, if the Docker host has multiple network interfaces (for instance because it is a hosted server, or has both internal and external IP addresses), you might want to limit access to the containers to specific networks only.</p>



<p>Below is a <code data-enlighter-language="generic" class="EnlighterJSRAW">netfilter</code> example, which only allows traffic coming from <code data-enlighter-language="generic" class="EnlighterJSRAW">192.168.1.0/24</code> on the network interface <code data-enlighter-language="generic" class="EnlighterJSRAW">enp35s0</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">iptables -I DOCKER-USER -i enp35s0 ! -s 192.168.1.0/24 -m conntrack --ctdir ORIGINAL -j DROP</pre>



<p>The chain <code data-enlighter-language="generic" class="EnlighterJSRAW">DOCKER-USER</code> is not flushed by Docker and can thus be created in a general firewall script or <code data-enlighter-language="generic" class="EnlighterJSRAW">netfilter</code> configuration, even at boot time:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">-N DOCKER-USER
-I DOCKER-USER -i enp35s0 ! -s 192.168.1.0/24 -m conntrack --ctdir ORIGINAL -j DROP</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/11/a-monitoring-solution-with-docker/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Read the HAProxy UNIX socket file using Perl</title>
		<link>https://jmorano.moretrix.com/2022/04/read-the-haproxy-unix-socket-file-using-perl/</link>
					<comments>https://jmorano.moretrix.com/2022/04/read-the-haproxy-unix-socket-file-using-perl/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Mon, 25 Apr 2022 10:52:45 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[HAProxy]]></category>
		<category><![CDATA[Monitoring]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1515</guid>

					<description><![CDATA[HAProxy provides a socket file which can be used to do maintenance (enable/ disable backends, retrieve information and&#8230;]]></description>
										<content:encoded><![CDATA[
<p><a rel="noreferrer noopener" href="http://www.haproxy.org/" data-type="URL" data-id="http://www.haproxy.org/" target="_blank">HAProxy</a> provides a <a href="http://docs.haproxy.org/2.5/management.html#9.3" data-type="URL" data-id="http://docs.haproxy.org/2.5/management.html#9.3" target="_blank" rel="noreferrer noopener">socket file</a> which can be used to do maintenance (enable/disable backends, retrieve information and statistics, &#8230;).</p>
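

<p>For completeness, the socket has to be enabled in the HAProxy configuration first. A minimal sketch of the <code data-enlighter-language="generic" class="EnlighterJSRAW">global</code> section (the socket path and permissions here are assumptions; adapt them to your setup):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">global
    stats socket /run/haproxy/admin.sock mode 660 level admin
    stats timeout 30s
</pre>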



<p>The statistics part contains quite some interesting information for monitoring and alerting.</p>



<p>The below Perl code snippet will loop over a <code>glob</code> of socket files (for instance, when you have multiple HAProxy configurations running as separate processes) and print the values returned by the &#8220;<code>show info</code>&#8221; command.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use IO::Socket::UNIX;

foreach my $socket_file (glob("/run/haproxy/*.sock")){
    print "- Reading socket: $socket_file\n";
    my $client = IO::Socket::UNIX->new(
        Type => SOCK_STREAM(),
        Peer => $socket_file,
    );

    print "- show info\n";
    print $client "show info\n";
    my $header = &lt;$client>;
    chomp($header);

    $header =~ s/^#\s+//;
    my @keys = split ',', $header;
    print "- header:$header\n";

    while (my $line = &lt;$client>){
        next unless $line =~ /^.+/;

        chomp($line);
        my @values = split ',', $line;
        print " - Got $line\n";
        print "   $keys[$_]: ".($values[$_]//'')."\n" foreach 0..$#keys;
    }

    close $client;
}</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/read-the-haproxy-unix-socket-file-using-perl/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>A Prometheus Exporter framework written in Perl</title>
		<link>https://jmorano.moretrix.com/2022/04/a-prometheus-exporter-framework-written-in-perl/</link>
					<comments>https://jmorano.moretrix.com/2022/04/a-prometheus-exporter-framework-written-in-perl/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Mon, 25 Apr 2022 09:45:51 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[Prometheus]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1513</guid>

					<description><![CDATA[I released a small project I wrote a while ago, to create quick Prometheus exporters in Perl for&#8230;]]></description>
										<content:encoded><![CDATA[
<p>I released a small project I wrote a while ago, to create quick Prometheus exporters in Perl for providing some custom data. The project itself can be found at <a rel="noreferrer noopener" href="https://github.com/insani4c/prometheus-exporter" target="_blank">https://github.com/insani4c/prometheus-exporter</a>. Back then I decided not to use <a rel="noreferrer noopener" href="https://metacpan.org/pod/Net::Prometheus" target="_blank">Net::Prometheus</a> as I wanted to use <a rel="noreferrer noopener" href="https://metacpan.org/pod/HTTP::Daemon" data-type="URL" data-id="https://metacpan.org/pod/HTTP::Daemon" target="_blank">HTTP::Daemon</a> with <a rel="noreferrer noopener" href="https://metacpan.org/pod/threads" data-type="URL" data-id="https://metacpan.org/pod/threads" target="_blank">threads</a> and not <a href="https://metacpan.org/pod/Plack" data-type="URL" data-id="https://metacpan.org/pod/Plack" target="_blank" rel="noreferrer noopener">Plack</a>.</p>



<p>A small example of how to use the framework:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use Prometheus::Exporter;

my $exporter = Prometheus::Exporter->new({
    listen_port => 9090, 
    listen_addr => "127.0.0.1", 
    max_threads => 5,
});

$exporter->register_metrics({
    test_metric        => {type => "gauge",     desc => "A test metric"},
    test_metric_labels => {type => "gauge",     desc => "A test metric", labels => ["code=42", "code=99"]},
    test_counter       => {type => "counter",   desc => "A test metric"},
    test_histogram     => {type => "histogram", buckets => ['0.3', '0.6', '1.2', '+Inf']},
});

# state used inside the collector callback below
my $test_counter = 0;
my %histo_buckets;

$exporter->register_collector(sub {
    my $timeout = int(rand(5));
    sleep $timeout;

    $exporter->get_metric("test_metric")->value(rand(100));
    $exporter->get_metric("test_metric_labels")->value([rand(42), rand(99)]);

    $test_counter += int(rand(20));
    $exporter->get_metric("test_counter")->value($test_counter);

    $histo_buckets{"0.3"}  += rand(20);
    $histo_buckets{"0.6"}  += $histo_buckets{"0.3"} + rand(20);
    $histo_buckets{"1.2"}  += $histo_buckets{"0.6"} + rand(20);
    $histo_buckets{"+Inf"} += $histo_buckets{"1.2"} + rand(20);
    my $histo_sum = 2.0 * $histo_buckets{"+Inf"};
    my $histo_count = $histo_buckets{"+Inf"};
    $exporter->get_metric("test_histogram")->value(\%histo_buckets, $histo_sum, $histo_count);
});

$exporter->run;
</pre>



<p>The framework will start a small HTTP daemon once <code>run()</code> is called and will handle all client requests by using <code>threads</code>. On each request, the framework will call the <code>subroutine</code> or <code>coderef</code> defined at <code>register_collector()</code>. Currently, that coderef must store the observed values by using the construct seen in the above example, by calling the <code>value()</code> method on the registered metric objects.</p>
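

<p>When scraped, the exporter answers in the standard Prometheus text exposition format. For the gauge and counter registered above, a response would look roughly like this (the values shown are of course random):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># HELP test_metric A test metric
# TYPE test_metric gauge
test_metric 42.17
# HELP test_counter A test metric
# TYPE test_counter counter
test_counter 135
</pre>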



<p>Currently, the histogram implementation does not yet support labels.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/a-prometheus-exporter-framework-written-in-perl/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>IPTables Logs in Loki and Grafana (with Promtail)</title>
		<link>https://jmorano.moretrix.com/2022/04/iptables-logs-in-loki-and-grafana-with-promtail/</link>
					<comments>https://jmorano.moretrix.com/2022/04/iptables-logs-in-loki-and-grafana-with-promtail/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Fri, 01 Apr 2022 08:00:00 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Grafana]]></category>
		<category><![CDATA[IPTables]]></category>
		<category><![CDATA[Loki]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Promtail]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1310</guid>

					<description><![CDATA[In the previous article (Logging in IPTables with NFLog and ulogd2) rules were created to log certain IPTables&#8230;]]></description>
										<content:encoded><![CDATA[
<p>In the previous article (<a href="https://jmorano.moretrix.com/2022/03/logging-in-iptables-with-nflog-and-ulogd2/" data-type="URL" data-id="https://jmorano.moretrix.com/2022/03/logging-in-iptables-with-nflog-and-ulogd2/">Logging in IPTables with NFLog and ulogd2</a>) rules were created to log certain IPTables rules with the use of <code>NFLOG</code> and <code>ulogd2</code> to a file in JSON format.</p>



<p>With Promtail (<a rel="noreferrer noopener" href="https://grafana.com/docs/loki/latest/clients/promtail/" data-type="URL" data-id="https://grafana.com/docs/loki/latest/clients/promtail/" target="_blank">https://grafana.com/docs/loki/latest/clients/promtail/</a>), the above created log files can be sent to <a rel="noreferrer noopener" href="https://grafana.com/docs/loki/latest/" data-type="URL" data-id="https://grafana.com/docs/loki/latest/" target="_blank">Loki</a> so that they can finally be displayed in <a rel="noreferrer noopener" href="https://grafana.com/grafana/" data-type="URL" data-id="https://grafana.com/grafana/" target="_blank">Grafana</a>.</p>



<p>The installation of Loki and Grafana is not covered in this article. The installation of Promtail is documented at <a rel="noreferrer noopener" href="https://grafana.com/docs/loki/latest/clients/promtail/installation/" target="_blank">https://grafana.com/docs/loki/latest/clients/promtail/installation/</a>.</p>



<p>Once Promtail is installed, create the following configuration file at <code>/etc/promtail-local-config.yaml</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /var/tmp/promtail_positions.yaml

clients:
  - url: http://loki_server:3100/loki/api/v1/push

scrape_configs:
    - job_name: iptableslogsjson
      static_configs:
      - targets:
          - localhost
        labels:
          instance: myhostname01
          job: iptableslogsjson
          __path__: /var/log/ulog/*json
      pipeline_stages:
      - json:
          expressions:
            timestamp: timestamp
            prefix: '"oob.prefix"'
            src: src_ip
            dst: dest_ip
      - labels:
          timestamp:
          prefix:
          src:
          dst:</pre>



<p>With the above configuration, Promtail will create 4 extra labels per log line:</p>



<ul class="wp-block-list"><li><code>timestamp</code>: the logged timestamp</li><li><code>prefix</code>: the NFLOG prefix string</li><li><code>src</code>: the source IP address</li><li><code>dst</code>: the destination IP address</li></ul>
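

<p>To sketch what the <code>json</code> pipeline stage extracts, here is a small shell experiment with a made-up log line. The field names (<code>src_ip</code>, <code>dest_ip</code>, <code>oob.prefix</code>) follow the expressions above; the sample values and the <code>sed</code>-based extraction are only an illustration of what Promtail does internally:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">line='{"timestamp":"2022-11-22T07:21:51Z","oob.prefix":"FW-DROP","src_ip":"10.0.0.5","dest_ip":"192.168.1.10"}'

# Pull a single JSON string field out of the line, as the json stage would
extract() { printf '%s' "$line" | sed -n "s/.*\"$1\":\"\([^\"]*\)\".*/\1/p"; }

echo "prefix=$(extract 'oob.prefix')"   # prints: prefix=FW-DROP
echo "src=$(extract src_ip)"            # prints: src=10.0.0.5
echo "dst=$(extract dest_ip)"           # prints: dst=192.168.1.10</pre>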



<p>Once the logs are arriving in Loki, and Loki has been configured as a datasource in Grafana, graphs can be created using <a href="https://grafana.com/docs/loki/latest/logql/" data-type="URL" data-id="https://grafana.com/docs/loki/latest/logql/" target="_blank" rel="noreferrer noopener">LogQL</a>.</p>



<p>Example:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">sum(rate({job="iptableslogsjson"} [$__interval])) by (prefix)</pre>



<figure class="wp-block-image size-full"><img decoding="async" width="916" height="296" src="https://jmorano.moretrix.com/wp-content/uploads/2022/03/Screenshot-from-2022-03-30-15-29-02.png" alt="" class="wp-image-1311" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/03/Screenshot-from-2022-03-30-15-29-02.png 916w, https://jmorano.moretrix.com/wp-content/uploads/2022/03/Screenshot-from-2022-03-30-15-29-02-300x97.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/03/Screenshot-from-2022-03-30-15-29-02-768x248.png 768w" sizes="(max-width: 916px) 100vw, 916px" /></figure>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/iptables-logs-in-loki-and-grafana-with-promtail/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Postgresql: Monitor sequence scans with Perl</title>
		<link>https://jmorano.moretrix.com/2014/02/postgresql-monitor-sequence-scans-perl/</link>
					<comments>https://jmorano.moretrix.com/2014/02/postgresql-monitor-sequence-scans-perl/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Wed, 12 Feb 2014 07:33:26 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Postgresql]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1065</guid>

					<description><![CDATA[Not using indexes or huge tables without indexes, can have a very negative impact on the duration of&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Not using indexes, or querying huge tables that lack indexes, can have a very negative impact on the duration of a SQL query. The query planner will decide to perform a sequence scan, which means that the query will go through the table sequentially to search for the required data. When the table is only 100 rows big, you will probably not even notice the sequence scans, but if your table is 1,000,000 rows big or more, you can probably optimize it with indexes to get faster searches.</p>



<p>In the example script we will use a <em>Storable</em> state file, and we will store the statistics as a JSON object in a PostgreSQL database.</p>



<p>First let&#8217;s take a look at the query we will be executing:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">SELECT schemaname, relname, seq_tup_read 
FROM pg_stat_all_tables 
WHERE seq_tup_read &gt; '0' 
      AND relname NOT LIKE 'pg_%'
ORDER BY seq_tup_read desc
</pre>



<p>As you can see, PostgreSQL keeps all the information we need about our tables in a single system view, called <em>pg_stat_all_tables</em>. This view has a column called <em>seq_tup_read</em>, which contains the information we need.</p>



<p>Just reading out this information is not going to be enough, because the counters accumulate since the startup of your PostgreSQL database. Since production databases aren&#8217;t restarted (that often), we will have to compare this information with the values from a previous run (hence the <em>Storable</em> state file).<br />Our plan is to run the script from a cronjob, every 5 minutes.</p>
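

<p>A matching crontab entry could look like this (run as root so that the script can <em>setuid</em> to postgres; the script path and filename are just examples):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># /etc/cron.d/seq_tup_read
# m    h dom mon dow user command
*/5    * *   *   *   root /usr/local/bin/seq_tup_read.pl mydatabase</pre>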



<p>The statistics are also stored as a JSON object in a second database, so that we can build a web interface for the statistics at a later stage, and so that we keep a history of them.</p>



<p>Furthermore, the script will <em>setuid</em> to postgres (similar to <em>su - postgres</em> on the command line), so that it can connect to the PostgreSQL UNIX socket.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="perl" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/env perl
use strict;
use warnings;
use utf8;

use DBI;
use DateTime;
use Storable qw/nstore retrieve/;
use POSIX qw/setuid/;
use Text::ASCIITable;
use JSON;

my $db   = 'mydatabase';
if(scalar @ARGV){
    $db = shift @ARGV;
}

my $host = '/var/run/postgresql';
my $user = 'postgres';
my $pass = undef;

my $state_db   = 'database_statistics';
my $state_host = '192.168.1.1';
my $state_user = 'skeletor';
my $state_pass = 'he-manisawhimp';

my $state_file = '/var/tmp/sequence_read.state';

# suid to postgres
setuid(scalar getpwnam 'postgres');

# Load the state file from the previous run, if it exists
my $state = {};
$state = retrieve $state_file if -f $state_file;

my $now = DateTime-&gt;now;

# Connect to the database which we want to monitor
my $dbh = DBI-&gt;connect("dbi:Pg:dbname=$db;host=$host", $user, $pass)
                or die "Could not connect to database: $!\n";

# Connect to the database that will be used to store the statistics
my $state_dbh = DBI-&gt;connect("dbi:Pg:dbname=$state_db;host=$state_host", $state_user, $state_pass)
                or die "Could not connect to the State database '$state_db': $!\n";

my $sql = &lt;&lt;EOF;
SELECT schemaname, relname, seq_tup_read
FROM pg_stat_all_tables
WHERE seq_tup_read &gt; '0'
      AND relname NOT LIKE 'pg_%'
ORDER BY seq_tup_read desc
EOF

# Get the statistics
my $results = $dbh-&gt;selectall_arrayref( $sql, undef );

# Store the statistics as a JSON object in the second database
eval {
    $state_dbh-&gt;do('INSERT INTO mydbschema.seq_tup_read (data) VALUES(?)', undef, encode_json($results));
};
if($@){
    print "Insert into state-db failed: $@\n";
}

# Prepare a nice ASCII table for output
my $t = Text::ASCIITable-&gt;new({ headingText =&gt; 'Seq Tup Read ' . $now-&gt;ymd('-') . ' ' . $now-&gt;hms(':') });
$t-&gt;setCols('Schema Name', 'Relation Name', 'Seq Tup Read', 'Increase (delta)');

my $row_count = 0;
foreach my $r (@{$results}){
    last if $row_count &gt; 25;

    my (@values) = (@{$r});
    my ($increase, $delta) = (0, 0);
    # Calculate the increase and its delta
    if(defined $state-&gt;{last}{$r-&gt;[0].':'.$r-&gt;[1]}{seq_tup_read}){
        $increase = $r-&gt;[2] - $state-&gt;{last}{$r-&gt;[0].':'.$r-&gt;[1]}{seq_tup_read};
        $delta    = $increase / $state-&gt;{last}{$r-&gt;[0].':'.$r-&gt;[1]}{seq_tup_read} * 100;
        my $str = sprintf '%.0f (%.4f %%)', $increase, $delta;
        push @values, ($str);
    }
    else {
        push @values, '0 (0%)';
    }
    # Store this information for the next run of the script
    $state-&gt;{last}{$r-&gt;[0].':'.$r-&gt;[1]}{seq_tup_read} = $r-&gt;[2];
    $state-&gt;{last}{$r-&gt;[0].':'.$r-&gt;[1]}{delta}        = $delta;
    $state-&gt;{last}{$r-&gt;[0].':'.$r-&gt;[1]}{increase}     = $increase;

    # Only add the information to the ASCII output table if there was an increase
    next unless $increase &gt; 0;
    $t-&gt;addRow(@values);
    $row_count++;
}
# Print out the ASCII table
print $t;

# Save the state for the next run
nstore $state, $state_file;
</pre>
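

<p>The increase and delta calculation in the loop above is plain percentage arithmetic. As a quick sketch in shell, with invented counter values (the Perl script prints the delta with four decimals; this sketch sticks to integer maths):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">prev=1000000   # seq_tup_read stored during the previous run
curr=1012345   # seq_tup_read returned by the current query

increase=$(( curr - prev ))
# express the delta in basis points to stay within integer arithmetic
bp=$(( increase * 10000 / prev ))
printf '%d (%d.%02d %%)\n' "$increase" $(( bp / 100 )) $(( bp % 100 ))   # prints: 12345 (1.23 %)</pre>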
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2014/02/postgresql-monitor-sequence-scans-perl/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>Postgresql: Monitor unused indexes</title>
		<link>https://jmorano.moretrix.com/2014/02/postgresql-monitor-unused-indexes/</link>
					<comments>https://jmorano.moretrix.com/2014/02/postgresql-monitor-unused-indexes/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 11 Feb 2014 09:09:08 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Development]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Postgresql]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=1057</guid>

					<description><![CDATA[Working on large database systems, with many tables and many indexes, it is easy to loose the overview&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Working on large database systems, with many tables and many indexes, it is easy to lose the overview of what is actually being used and what is just consuming unwanted disk space.<br />If indexes are not closely monitored, they can end up occupying undesired space and, moreover, consuming unnecessary CPU cycles.</p>



<p>Statistics about indexes can be easily retrieved from the PostgreSQL database system. All required information is stored in two system relations:</p>



<ul class="wp-block-list"><li>pg_stat_user_indexes</li><li>pg_index</li></ul>



<p>When joining these two relations, interesting information can be read in the following columns:</p>



<ul class="wp-block-list"><li>idx_scan: the number of times the query planner used this index for an &#8216;Index Scan&#8217;</li><li>idx_tup_read: how many tuples have been read by using the index</li><li>idx_tup_fetch: how many tuples have been fetched by using the index</li></ul>



<p>A neat function called <em>pg_relation_size()</em> returns the on-disk size of a relation, in this case the index.</p>



<p>Based on this information, the monitoring query will be built up as follows:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="sql" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">SELECT 
    relid::regclass AS table, 
    indexrelid::regclass AS index, 
    pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS index_size, 
    idx_tup_read, 
    idx_tup_fetch, 
    idx_scan
FROM 
    pg_stat_user_indexes 
    JOIN pg_index USING (indexrelid) 
WHERE 
    idx_scan = 0 
    AND indisunique IS FALSE
</pre>



<p>Now, all we need to do is write a script which stores this information in some kind of file and periodically reports on the statistics.</p>



<p>First of all, we need a configuration file which contains the database credentials.<br />I&#8217;ve chosen YAML because it is so versatile.</p>



<p>It will contain two important sets of information:</p>



<ul class="wp-block-list"><li>The database credentials</li><li>The path to the state file</li></ul>



<p>Example:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">dsn: "dbi:Pg:host=/var/run/postgresql;database=testdb"
user: postgres
pass:
state_file: /var/tmp/monitor_unused_indexes.state
</pre>



<p>As you can see, we will connect to the PostgreSQL database by using its UNIX socket.</p>



<p>The script will use <em>Text::ASCIITable</em> to output the statistics in a nice table. <em>Storable</em> is used to save our statistics to disk.</p>



<p>In the script below, we check whether an index has been unused over a timespan of 30 days. If so, the script reports this index on STDOUT.<br />To that end, we store a score and a timestamp for each unused index in the state file.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="perl" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use DBI;
use Storable qw/nstore retrieve/;
use YAML qw/LoadFile/;
use POSIX qw/setuid/;
use Getopt::Long;
use DateTime;
use Text::ASCIITable;

my $cfg_file = './monitor_unused_indexes.yaml';
my $verbose = 0;
GetOptions("cfg=s" =&gt; \$cfg_file,
           "verbose|v" =&gt; \$verbose,
        );

my $sql = &lt;&lt;EOS;
SELECT 
    relid::regclass AS table, 
    indexrelid::regclass AS index, 
    pg_size_pretty(pg_relation_size(indexrelid::regclass)) AS index_size, 
    idx_tup_read, 
    idx_tup_fetch, 
    idx_scan
FROM 
    pg_stat_user_indexes 
    JOIN pg_index USING (indexrelid) 
WHERE 
    idx_scan = 0 
    AND indisunique IS FALSE
EOS

my ($cfg) = LoadFile($cfg_file);

# suid to postgres, or whatever user is configured in the config.yaml file
setuid(scalar getpwnam $cfg-&gt;{user});

# Connect to the database
my $dbh = DBI-&gt;connect($cfg-&gt;{dsn}, $cfg-&gt;{user}, $cfg-&gt;{pass})
            or die "Could not connect to database: $! (DBI ERROR: ".$DBI::errstr.")\n";

my $state;
if(-f $cfg-&gt;{state_file}){
    $state = retrieve $cfg-&gt;{state_file};
}

# Fetch the statistics
my $results = $dbh-&gt;selectall_arrayref( $sql, undef );

my $now_dt = DateTime-&gt;now;

# Initialize the ASCII table
my $t = Text::ASCIITable-&gt;new({ headingText =&gt; 'INDEX STATISTICS' });
$t-&gt;setCols(qw/Table Index Index_Size idx_tup_read idx_tup_fetch idx_scan/);

# Analyze the results
foreach my $r (@$results){
    if($verbose){
        $t-&gt;addRow(@{$r});
    }
    # Only update the state file if --verbose was not specified.
    # This way the script can be checked manually with --verbose many times,
    # and executed for instance from a cronjob once a day without --verbose
    else {
        if(defined $state-&gt;{unused_indexes}{$r-&gt;[1]}){
            my $first_dt = DateTime-&gt;from_epoch( epoch =&gt; $state-&gt;{unused_indexes}{$r-&gt;[1]}{first_hit} );
            if($first_dt-&gt;add(days =&gt; $state-&gt;{unused_indexes}{$r-&gt;[1]}{score})-&gt;day == $now_dt-&gt;day ) {
                $state-&gt;{unused_indexes}{$r-&gt;[1]}{score}++;
            }
            else {
                $state-&gt;{unused_indexes}{$r-&gt;[1]}{score}     = 1;
                $state-&gt;{unused_indexes}{$r-&gt;[1]}{first_hit} = $now_dt-&gt;epoch;
            }
        }
        else {
            $state-&gt;{unused_indexes}{$r-&gt;[1]}{score}     = 1;
            $state-&gt;{unused_indexes}{$r-&gt;[1]}{first_hit} = $now_dt-&gt;epoch;
        }
    }
}

# Print out the statistics table, if --verbose was specified
print $t if $verbose;

# Store the statistics to disk in a state file
nstore $state, $cfg-&gt;{state_file};

foreach my $idx (keys %{ $state-&gt;{unused_indexes} }){
    my $first_dt = DateTime-&gt;from_epoch( epoch =&gt; $state-&gt;{unused_indexes}{$idx}{first_hit} );
    if( $first_dt-&gt;add(days =&gt; 30) &lt;= $now_dt ){
        my $line = "Index: $idx ready for deletion";
        $line .= " (score:" . $state-&gt;{unused_indexes}{$idx}{score};
        $line .= " | first_hit:" . DateTime-&gt;from_epoch(epoch =&gt; $state-&gt;{unused_indexes}{$idx}{first_hit})-&gt;ymd . ")";

        print $line."\n" if $verbose;
    }
}
</pre>
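

<p>The 30-day comparison at the end of the script boils down to epoch arithmetic (<em>DateTime</em> also copes with varying month lengths; the shell sketch below uses a fixed 30 days and invented timestamps):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">first_hit=1391500000                    # epoch stored when the index was first seen unused
now=$(( first_hit + 31 * 24 * 3600 ))   # pretend 31 days have passed since then

if [ $(( now - first_hit )) -ge $(( 30 * 24 * 3600 )) ]; then
    echo "ready for deletion"
fi</pre>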
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2014/02/postgresql-monitor-unused-indexes/feed/</wfw:commentRss>
			<slash:comments>3</slash:comments>
		
		
			</item>
		<item>
		<title>PostgreSQL 9.2 Master &#8211; Slave Monitoring</title>
		<link>https://jmorano.moretrix.com/2013/08/postgresql-9-2-master-slave-monitoring/</link>
					<comments>https://jmorano.moretrix.com/2013/08/postgresql-9-2-master-slave-monitoring/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 13 Aug 2013 13:07:04 +0000</pubDate>
				<category><![CDATA[Blog]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Bash]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[Dev]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Nagios]]></category>
		<category><![CDATA[Postgresql]]></category>
		<guid isPermaLink="false">http://jmorano.moretrix.com/?p=943</guid>

					<description><![CDATA[Nagios plugin script written in Bash to check the master-slave replication in PostgreSQL (tested on PostgreSQL 9.2.4) (executed&#8230;]]></description>
					<content:encoded><![CDATA[<p>A Nagios plugin script written in Bash to check master-slave replication in PostgreSQL (tested on PostgreSQL 9.2.4, executed on the slave).<br />
The script reports how many bytes the slave server is behind, and how many seconds ago the last replay of data occurred.</p>
<p>The script must be executed as the &#8216;postgres&#8217; user.</p>
<pre class="brush:bash">
#!/bin/bash

# $Id: check_slave_replication.sh 3421 2013-08-09 07:52:44Z jmorano $

STATE_OK=0
STATE_WARNING=1
STATE_CRITICAL=2
STATE_UNKNOWN=3
 
## Master (p_) and Slave (s_) DB Server Information	
export s_host=$1
export s_port=$2
export p_db=$3
export p_host=$4
export p_port=$5
 
export psql=/opt/postgresql/bin/psql
export bc=/usr/bin/bc
 
## Limits
export  critical_limit=83886080 # 5 * 16MB, size of 5 WAL files
export   warning_limit=16777216 # 16 MB, size of 1 WAL file
 
master_lag=$($psql -U postgres -h$p_host -p$p_port -A -t -c "SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/0') AS offset" $p_db)
slave_lag=$($psql -U postgres  -h$s_host -p$s_port -A -t -c "SELECT pg_xlog_location_diff(pg_last_xlog_receive_location(), '0/0') AS receive" $p_db)
replay_lag=$($psql -U postgres -h$s_host -p$s_port -A -t -c "SELECT pg_xlog_location_diff(pg_last_xlog_replay_location(), '0/0') AS replay" $p_db)
replay_timediff=$($psql -U postgres -h$s_host -p$s_port -A -t -c "SELECT NOW() - pg_last_xact_replay_timestamp() AS replication_delay" $p_db)
 
if [[ -z "$master_lag" || -z "$slave_lag" || -z "$replay_lag" ]]; then
    echo "CRITICAL: Stream has no value to compare (is replication configured or connectivity problem?)"
    exit $STATE_CRITICAL
else
    if [[ $master_lag -eq $slave_lag && $master_lag -eq $replay_lag && $slave_lag -eq $replay_lag ]] ; then
        echo "OK: Stream: MASTER:$master_lag Slave:$slave_lag Replay:$replay_lag"
        exit $STATE_OK
    else
        if [[ $master_lag -eq $slave_lag ]] ; then
            if [[ $master_lag -ne $replay_lag ]] ; then
                if [ $(bc <<< $master_lag-$replay_lag) -lt $warning_limit ]; then
                    echo "OK: Stream: MASTER:$master_lag Replay:$replay_lag :: REPLAY BEHIND"
                    exit $STATE_OK
                else
                    echo "WARNING: Stream: MASTER:$master_lag Replay:$replay_lag :: REPLAY $(bc <<< $master_lag-$replay_lag)bytes BEHIND (${replay_timediff}seconds)"
                    exit $STATE_WARNING
                fi
            fi
        else
            if [ $(bc <<< $master_lag-$slave_lag) -gt $critical_limit ]; then
                echo "CRITICAL: Stream: MASTER:$master_lag Slave:$slave_lag :: STREAM BEYOND CRITICAL LIMIT ($(bc <<< $master_lag-$slave_lag)bytes)"
                exit $STATE_CRITICAL
            else
                if [ $(bc <<< $master_lag-$slave_lag) -lt $warning_limit ]; then
                    echo "OK: Stream: MASTER:$master_lag Slave:$slave_lag Replay:$replay_lag :: STREAM BEHIND"
                    exit $STATE_OK
                else
                    echo "WARNING: Stream: MASTER:$master_lag Slave:$slave_lag :: STREAM BEYOND WARNING LIMIT ($(bc <<< $master_lag-$slave_lag)bytes)"
                    exit $STATE_WARNING
                fi
            fi
        fi
        echo "UNKNOWN: Stream: MASTER: $master_lag Slave: $slave_lag Replay: $replay_lag"
        exit $STATE_UNKNOWN
    fi
fi
</pre>
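<p>The warning and critical limits are derived from the default WAL segment size of 16 MB:</p>
<pre class="brush:bash">
wal_segment=$(( 16 * 1024 * 1024 ))    # default WAL segment size: 16 MB
warning_limit=$wal_segment             # one WAL file behind
critical_limit=$(( 5 * wal_segment ))  # five WAL files behind
echo "warning=$warning_limit critical=$critical_limit"
</pre>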
<p>Possible outputs:</p>
<pre class="brush:bash">
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
WARNING: Stream: MASTER:1907958306184 Replay:1907878056888 :: REPLAY 80249296bytes BEHIND (00:03:14.056747seconds)
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
OK: Stream: MASTER:2055690128376 Slave:2055690143144 Replay:2055690193744 :: STREAM BEHIND
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
OK: Stream: MASTER:2055690497120 Replay:2055690497328 :: REPLAY BEHIND
$ bash check_slave_replication.sh 192.168.0.1 5432 live 192.168.0.2 5432
OK: Stream: MASTER:2055691704672 Slave:2055691704672 Replay:2055691704672
</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2013/08/postgresql-9-2-master-slave-monitoring/feed/</wfw:commentRss>
			<slash:comments>14</slash:comments>
		
		
			</item>
	</channel>
</rss>
