<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Automation &#8211; Johnny Morano&#039;s Tech Articles</title>
	<atom:link href="https://jmorano.moretrix.com/category/automation/feed/" rel="self" type="application/rss+xml" />
	<link>https://jmorano.moretrix.com</link>
	<description>Ramblings of an old-fashioned space cowboy</description>
	<lastBuildDate>Tue, 22 Nov 2022 07:21:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.6.2</generator>

<image>
	<url>https://jmorano.moretrix.com/wp-content/uploads/2022/04/cropped-jmorano_emblem-32x32.png</url>
	<title>Automation &#8211; Johnny Morano&#039;s Tech Articles</title>
	<link>https://jmorano.moretrix.com</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>A monitoring solution with Docker</title>
		<link>https://jmorano.moretrix.com/2022/11/a-monitoring-solution-with-docker/</link>
					<comments>https://jmorano.moretrix.com/2022/11/a-monitoring-solution-with-docker/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 22 Nov 2022 07:21:51 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Docker]]></category>
		<category><![CDATA[Grafana]]></category>
		<category><![CDATA[Loki]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Prometheus]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1587</guid>

					<description><![CDATA[Docker Compose is a great way to set up small test environments locally or remotely. It allows to&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Docker Compose is a great way to set up small test environments, locally or remotely. It allows you to define your infrastructure as code and requires no prerequisite or post-deployment tasks.</p>



<p>The Docker installation is well documented at <a rel="noreferrer noopener" href="https://docs.docker.com/get-docker/" target="_blank">https://docs.docker.com/get-docker/</a> and is well supported on the most popular operating systems. The installation itself will not be covered in this article. If you want to get familiar with the details of Docker, start with the documentation at <a href="https://docs.docker.com/get-started/" target="_blank" rel="noreferrer noopener">https://docs.docker.com/get-started/</a>.</p>



<p>All code used in this article is available at: <a rel="noreferrer noopener" href="https://github.com/insani4c/docker-monitoring-stack" target="_blank">https://github.com/insani4c/docker-monitoring-stack</a>. </p>



<p>In this article, we will see how to set up a monitoring solution based on:</p>



<ul class="wp-block-list">
<li><a href="https://prometheus.io/" data-type="URL" data-id="https://prometheus.io/" target="_blank" rel="noreferrer noopener">Prometheus</a></li>



<li><a href="https://prometheus.io/docs/guides/node-exporter/" data-type="URL" data-id="https://prometheus.io/docs/guides/node-exporter/" target="_blank" rel="noreferrer noopener">Prometheus Node Exporter</a></li>



<li><a href="https://github.com/prometheus/blackbox_exporter" data-type="URL" data-id="https://github.com/prometheus/blackbox_exporter" target="_blank" rel="noreferrer noopener">Prometheus Blackbox Exporter</a></li>



<li><a href="https://github.com/prometheus/snmp_exporter" data-type="URL" data-id="https://github.com/prometheus/snmp_exporter" target="_blank" rel="noreferrer noopener">Prometheus SNMP Exporter</a></li>



<li><a href="https://grafana.com/oss/loki/" data-type="URL" data-id="https://grafana.com/oss/loki/" target="_blank" rel="noreferrer noopener">Loki</a></li>



<li><a href="https://grafana.com/docs/loki/latest/clients/promtail/" data-type="URL" data-id="https://grafana.com/docs/loki/latest/clients/promtail/" target="_blank" rel="noreferrer noopener">Promtail</a></li>



<li><a href="https://grafana.com/" data-type="URL" data-id="https://grafana.com/" target="_blank" rel="noreferrer noopener">Grafana</a></li>
</ul>



<p>To monitor the deployed containers, we will also deploy <a rel="noreferrer noopener" href="https://github.com/google/cadvisor" data-type="URL" data-id="https://github.com/google/cadvisor" target="_blank">Google&#8217;s cAdvisor</a> container, which provides some interesting statistics and details for our Prometheus/ Grafana setup.</p>



<p>The Docker Compose file, called <code data-enlighter-language="generic" class="EnlighterJSRAW">docker-compose.yml</code>, contains all the information about the infrastructure, such as:</p>



<ul class="wp-block-list">
<li>network information</li>



<li>volumes</li>



<li>services (the containers)</li>



<li>&#8230;</li>
</ul>



<p>Let&#8217;s start from the top of the file.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">version: '3.8'

name: docmon

volumes:
  grafana-data: {}
  alertmanager-data: {}
  prometheus-data: {}
  loki-data: {}
</pre>



<p>First, at <code data-enlighter-language="generic" class="EnlighterJSRAW">line 1</code>, the Docker Compose file format version is specified, which defines which features are allowed. At <code data-enlighter-language="generic" class="EnlighterJSRAW">line 3</code>, a name for the container group or stack is set. And finally, starting from <code data-enlighter-language="generic" class="EnlighterJSRAW">line 5</code>, data volumes (think <em>disks</em>) are defined, which will be used by the containers. These are persistent data volumes which will be reused unless the container has been completely removed.</p>
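<p>Because these are named volumes, they survive container re-creation. They can be listed and inspected with the Docker CLI; with the stack name <code data-enlighter-language="generic" class="EnlighterJSRAW">docmon</code> defined above, Compose prefixes the volume names accordingly:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># list all volumes known to Docker
docker volume ls

# show details (mountpoint, driver) of the Grafana data volume
docker volume inspect docmon_grafana-data</pre>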



<p>Next, we will define the services in the <code data-enlighter-language="generic" class="EnlighterJSRAW">docker-compose.yml</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="cadvisor and pronetheus" data-enlighter-group="docker-compose.yml">services:
  cadvisor:
    image: 'gcr.io/cadvisor/cadvisor:latest'
    container_name: cadvisor
    restart: always
    mem_limit: 512m
    mem_reservation: 32m
    # ports: 
    #   - '8880:8080'
    volumes:
      - '/:/rootfs:ro'
      - '/var/run:/var/run:ro'
      - '/sys:/sys:ro'
      - '/var/lib/docker/:/var/lib/docker:ro'
      - '/dev/disk/:/dev/disk:ro'
    privileged: true
    devices: 
      - '/dev/kmsg:/dev/kmsg'

  prometheus:
    image: 'prom/prometheus:latest'
    container_name: prometheus
    restart: always
    mem_limit: 2048m
    mem_reservation: 256m
    cpus: 2
    # ports:
    #   - '9090:9090'
    volumes:
      - '$PROMETHEUS_HOME/config:/etc/prometheus'
      - 'prometheus-data:/prometheus'
    extra_hosts:
      myrouter: 192.168.1.1
      myswitch: 192.168.1.10
    depends_on:
      - cadvisor
</pre>



<p>Containers are defined as <code data-enlighter-language="generic" class="EnlighterJSRAW">services</code>. Each <code data-enlighter-language="generic" class="EnlighterJSRAW">service</code> will require at least:</p>



<ul class="wp-block-list">
<li>a service name (for example, <code data-enlighter-language="generic" class="EnlighterJSRAW">line 2</code> and <code data-enlighter-language="generic" class="EnlighterJSRAW">line 20</code>)</li>



<li>an <code data-enlighter-language="generic" class="EnlighterJSRAW">image</code> definition</li>
</ul>



<p>All other options are optional or required by specific images. </p>



<p>The first image or container defined in the above example is <code data-enlighter-language="generic" class="EnlighterJSRAW">cadvisor</code>. This service provides statistics about Docker and the deployed containers to Prometheus. To be able to provide this information, the container must have read access to certain file paths and sockets on the hypervisor (read: the server where the Docker containers will be running). These are listed in the <code data-enlighter-language="generic" class="EnlighterJSRAW">volumes</code> section of the container. Here, directory paths on the hypervisor are bind-mounted into the container with the <code data-enlighter-language="generic" class="EnlighterJSRAW">readonly</code> (<code data-enlighter-language="generic" class="EnlighterJSRAW">:ro</code>) flag, so that the container can&#8217;t make any changes to them.</p>



<p>Furthermore, the definition grants access to a <code data-enlighter-language="generic" class="EnlighterJSRAW">device</code> (to read kernel messages), sets <code data-enlighter-language="generic" class="EnlighterJSRAW">memory</code> limits and runs the container in <code data-enlighter-language="generic" class="EnlighterJSRAW">privileged</code> mode. The <code data-enlighter-language="generic" class="EnlighterJSRAW">ports</code> section has been commented out, as there is no real need to expose the ports, or make them available outside the Docker ecosystem. In our example, only Prometheus must be able to connect to it, and since Prometheus will be deployed as a container in the same stack, we don&#8217;t need external access to the web service running on the container to read out the metrics or see the statistics.</p>



<p>The next container defined is called <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus</code>. For this container, <code data-enlighter-language="generic" class="EnlighterJSRAW">volumes</code> are mounted to provide the Prometheus configuration files and to store the data on the volume called <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus-data</code>. It also defines <code data-enlighter-language="generic" class="EnlighterJSRAW">extra_hosts</code> entries. These are entries that are typically defined in an <code data-enlighter-language="generic" class="EnlighterJSRAW">/etc/hosts</code> file, which Docker does not read from the hypervisor. Instead of deploying or mounting the hypervisor&#8217;s <code data-enlighter-language="generic" class="EnlighterJSRAW">hosts</code> file, extra host mappings can be handed to the container in the <code data-enlighter-language="generic" class="EnlighterJSRAW">extra_hosts</code> section, as shown above.</p>



<p>At the end of the <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus</code> container definition, a <code data-enlighter-language="generic" class="EnlighterJSRAW">depends_on</code> section is configured, which means that the <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus</code> container won&#8217;t be started until the containers listed in that section have been started.</p>
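<p>The Prometheus configuration itself lives in <code data-enlighter-language="generic" class="EnlighterJSRAW">$PROMETHEUS_HOME/config</code>, mounted as <code data-enlighter-language="generic" class="EnlighterJSRAW">/etc/prometheus</code> in the container. As a hypothetical minimal example (not taken from the repository), a <code data-enlighter-language="generic" class="EnlighterJSRAW">prometheus.yml</code> scraping the containers defined in this article could look like this; the service names resolve via the Docker network&#8217;s internal DNS:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">global:
  scrape_interval: 15s

scrape_configs:
  # container statistics from cadvisor
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']

  # host metrics from the node exporter container
  - job_name: hypervisor
    static_configs:
      - targets: ['hypervisor:9100']</pre>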



<p>Next we will define all other containers (shown in the following code block, titled <code data-enlighter-language="generic" class="EnlighterJSRAW">the rest</code>, which appears as a second tab in the grouped code view).</p>



<pre class="EnlighterJSRAW" data-enlighter-language="dockerfile" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="the rest" data-enlighter-group="docker-compose.yml">  hypervisor:
    image: 'prom/node-exporter:latest'
    container_name: hypervisor
    mem_limit: 128m
    mem_reservation: 32m
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
      - '/proc:/host/proc:ro'
      - '/sys:/host/sys:ro'
    command:
      - '--path.rootfs=/host'
      - '--path.procfs=/host/proc'
      - '--path.sysfs=/host/sys'
      - '--collector.filesystem.mount-points-exclude=^/(sys|proc|dev|host|etc)($$|/)'
      - '--collector.systemd'
      - '--collector.cgroups'
    depends_on:
      - cadvisor

  prom_snmp:
    image: 'prom/snmp-exporter:latest'
    container_name: prom_snmp
    restart: always
    mem_limit: 128m
    mem_reservation: 32m
    # ports: 
    #   - '9116:9116'
    volumes:
      - '$PROMSNMP_HOME/config:/etc/snmp_exporter'
    extra_hosts:
      myrouter: 192.168.1.1
      myswitch: 192.168.1.10
    depends_on:
      - cadvisor
      - prometheus

  alertmanager:
    image: 'prom/alertmanager:latest'
    container_name: alertmanager
    restart: always
    mem_limit: 256m
    mem_reservation: 32m 
    # ports:
    #   - 9093:9093
    volumes:
      - '$ALERTMANAGER_HOME/config/alertmanager.yml:/etc/alertmanager/config.yml'
      - 'alertmanager-data:/alertmanager'
    command:
      - '--config.file=/etc/alertmanager/config.yml'
      - '--storage.path=/alertmanager'
    depends_on:
      - cadvisor
      - prometheus

  loki:
    image: 'grafana/loki:latest'
    container_name: loki
    restart: always
    mem_limit: 32768m
    mem_reservation: 8192m
    cpus: 6 
    ports:
      - '3100:3100'
    volumes:
      - '$LOKI_HOME/config:/etc/loki'
      - 'loki-data:/loki'
    depends_on:
      - cadvisor
      - prometheus
      - alertmanager

  blackbox_exporter:
    image: 'prom/blackbox-exporter:latest'
    container_name: blackbox_exporter
    restart: always
    mem_limit: 128m
    mem_reservation: 32m
    dns:
      - 8.8.8.8
      - 8.8.4.4
    # ports:
    #   - 9115:9115
    volumes:
      - '$BLACKBOXEXPORTER_HOME/config:/etc/blackboxexporter/'
    command:
      - '--config.file=/etc/blackboxexporter/config.yml'
    depends_on:
      - cadvisor
      - prometheus

  promtail:
    image: grafana/promtail:latest
    container_name: promtail
    restart: always
    mem_limit: 256m
    mem_reservation: 64m
    volumes:
      - $PROMTAIL_HOME/config:/etc/promtail/
      # to read container labels and logs
      - '/var/run/docker.sock:/var/run/docker.sock:ro'
      - '/var/lib/docker/containers:/var/lib/docker/containers:ro'
      - '/var/log/ulog:/var/log/ulog/:ro'
    depends_on:
      - cadvisor
      - loki

  grafana:
    image: 'grafana/grafana:latest'
    container_name: grafana
    restart: always
    mem_limit: 2048m
    mem_reservation: 256m
    ports:
      - '3000:3000'
    volumes:
      - '$GRAFANA_HOME/config:/etc/grafana'
      - 'grafana-data:/var/lib/grafana'
      - '$GRAFANA_HOME/dashboards:/var/lib/grafana/dashboards'
    depends_on:
      - cadvisor
      - prometheus
      - loki
      - alertmanager
</pre>



<p>The rest of the code will deploy:</p>



<ul class="wp-block-list">
<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">hypervisor</code>, which is actually the Prometheus node-exporter for the hypervisor.</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">prom_snmp</code>, which will retrieve SNMP statistics</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">blackbox_exporter</code>, which mainly checks webservers and their SSL certificates</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">promtail</code>, which collects logs and log statistics from the hypervisor</li>



<li>a container called <code data-enlighter-language="generic" class="EnlighterJSRAW">loki</code>, which stores and indexes logs sent to it by <code data-enlighter-language="generic" class="EnlighterJSRAW">promtail</code> (either the container above or an instance running on some external server)</li>
</ul>
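<p>To illustrate why <code data-enlighter-language="generic" class="EnlighterJSRAW">promtail</code> mounts the Docker socket and the container log directories, here is a minimal <code data-enlighter-language="generic" class="EnlighterJSRAW">promtail</code> configuration sketch that ships container logs and the <code data-enlighter-language="generic" class="EnlighterJSRAW">ulog</code> files to Loki. Paths and labels are illustrative assumptions, not taken from the repository:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  # push logs to the loki container on the compose network
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  # discover running containers via the mounted Docker socket
  - job_name: docker
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
    relabel_configs:
      - source_labels: ['__meta_docker_container_name']
        target_label: 'container'

  # tail the ulog files mounted from the hypervisor
  - job_name: ulog
    static_configs:
      - targets: ['localhost']
        labels:
          job: ulog
          __path__: /var/log/ulog/*.log</pre>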



<p>Finally, the last container deployed is the <code data-enlighter-language="generic" class="EnlighterJSRAW">grafana</code> container. Besides its normal configuration file <code data-enlighter-language="generic" class="EnlighterJSRAW">grafana.ini</code>, the Docker container will also automatically provision (via the <code data-enlighter-language="generic" class="EnlighterJSRAW">provisioning</code> sub directory in the <code data-enlighter-language="generic" class="EnlighterJSRAW">config</code> directory) datasources and dashboards, so that no manual post-deployment tasks are required once the containers are running.</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="354" height="244" src="https://jmorano.moretrix.com/wp-content/uploads/2022/11/Screenshot-from-2022-11-22-07-52-18.png" alt="" class="wp-image-1595" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/11/Screenshot-from-2022-11-22-07-52-18.png 354w, https://jmorano.moretrix.com/wp-content/uploads/2022/11/Screenshot-from-2022-11-22-07-52-18-300x207.png 300w" sizes="(max-width: 354px) 100vw, 354px" /><figcaption class="wp-element-caption">The grafana files</figcaption></figure>



<p>The datasources can be preconfigured in a YAML file called <code data-enlighter-language="generic" class="EnlighterJSRAW">default.yaml</code>, stored in the <code data-enlighter-language="generic" class="EnlighterJSRAW">provisioning/datasources/</code> sub directory.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">apiVersion: 1

datasources:
 - name: Alertmanager
   type: alertmanager 
   access: proxy
   orgId: 1
   url: http://alertmanager:9093
   version: 1
   editable: false
   isDefault: false
   uid: DS_ALERTMANAGER
   jsonData:
    implementation: prometheus
 - name: Prometheus
   type: prometheus
   access: proxy
   orgId: 1
   url: http://prometheus:9090
   version: 1
   editable: false
   isDefault: true
   uid: DS_PROMETHEUS
   jsonData:
    alertmanagerUid: DS_ALERTMANAGER
    manageAlerts: true
    prometheusType: Prometheus
    prometheusVersion: 2.39.1
 - name: Loki
   type: loki 
   access: proxy
   orgId: 1
   url: http://loki:3100
   version: 1
   editable: false
   isDefault: false
   uid: DS_LOKI
   jsonData:
    alertmanagerUid: DS_ALERTMANAGER
    manageAlerts: true
</pre>



<p>The same goes for any dashboards we want to have deployed automatically:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">apiVersion: 1

providers:
 - name: 'default'
   orgId: 1
   folder: 'Custom'
   folderUid: ''
   type: file
   options:
     path: /var/lib/grafana/dashboards
</pre>
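<p>The compose file references environment variables such as <code data-enlighter-language="generic" class="EnlighterJSRAW">$PROMETHEUS_HOME</code> and <code data-enlighter-language="generic" class="EnlighterJSRAW">$GRAFANA_HOME</code>. One way to provide them is a <code data-enlighter-language="generic" class="EnlighterJSRAW">.env</code> file next to <code data-enlighter-language="generic" class="EnlighterJSRAW">docker-compose.yml</code>, which Docker Compose reads automatically. With everything in place, the whole stack can then be started in one go; the paths below are placeholders for your own setup:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># .env (example placeholder paths, adjust to your setup)
PROMETHEUS_HOME=/opt/docmon/prometheus
ALERTMANAGER_HOME=/opt/docmon/alertmanager
PROMSNMP_HOME=/opt/docmon/snmp_exporter
BLACKBOXEXPORTER_HOME=/opt/docmon/blackbox_exporter
LOKI_HOME=/opt/docmon/loki
PROMTAIL_HOME=/opt/docmon/promtail
GRAFANA_HOME=/opt/docmon/grafana

# start the stack in the background and verify the containers
docker compose up -d
docker compose ps</pre>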



<p>Finally, if the Docker host has multiple network interfaces (for instance, a hosted server with both internal and external IP addresses), you might want to limit access to the containers to specific networks only.</p>



<p>Below is a <code data-enlighter-language="generic" class="EnlighterJSRAW">netfilter</code> example, which drops traffic arriving on the network interface <code data-enlighter-language="generic" class="EnlighterJSRAW">enp35s0</code> unless it originates from <code data-enlighter-language="generic" class="EnlighterJSRAW">192.168.1.0/24</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">iptables -I DOCKER-USER -i enp35s0 ! -s 192.168.1.0/24 -m conntrack --ctdir ORIGINAL -j DROP</pre>



<p>The chain <code data-enlighter-language="generic" class="EnlighterJSRAW">DOCKER-USER</code> is not flushed by Docker and thus can be created in a general firewall script or <code data-enlighter-language="generic" class="EnlighterJSRAW">netfilter</code> configuration, even at boot time:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">-N DOCKER-USER
-I DOCKER-USER -i enp35s0 ! -s 192.168.1.0/24 -m conntrack --ctdir ORIGINAL -j DROP</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/11/a-monitoring-solution-with-docker/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Jenkins to manage a libvirt infrastructure with Terraform</title>
		<link>https://jmorano.moretrix.com/2022/08/jenkins-to-manage-azure-infrastructure-with-terraform/</link>
					<comments>https://jmorano.moretrix.com/2022/08/jenkins-to-manage-azure-infrastructure-with-terraform/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Thu, 18 Aug 2022 10:53:59 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[Debian]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Groovy]]></category>
		<category><![CDATA[Jenkins]]></category>
		<category><![CDATA[Libvirt]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1546</guid>

					<description><![CDATA[Jenkins is an open source automation server which provides hundreds of plugins to build, deploy and automate projects.&#8230;]]></description>
										<content:encoded><![CDATA[
<p><a href="https://www.jenkins.io/" data-type="URL" data-id="https://www.jenkins.io/" target="_blank" rel="noreferrer noopener">Jenkins</a> is an open source automation server which provides hundreds of plugins to build, deploy and automate projects.</p>



<p><a href="https://www.terraform.io/" data-type="URL" data-id="https://www.terraform.io/" target="_blank" rel="noreferrer noopener">Terraform</a> codifies cloud, virtualization and many other APIs into declarative configuration files, allowing you to define infrastructure as code.</p>



<p>The combination of both is an excellent platform to manage resources in the <a rel="noreferrer noopener" href="https://azure.microsoft.com/" data-type="URL" data-id="https://azure.microsoft.com/" target="_blank">Microsoft Azure</a> cloud in an automated or semi-automated way.</p>



<h2 class="wp-block-heading">Installation</h2>



<p>Let&#8217;s start by installing Jenkins and Terraform. The official documentation at <a rel="noreferrer noopener" href="https://www.jenkins.io/doc/book/installing/linux/#debianubuntu" target="_blank">https://www.jenkins.io/doc/book/installing/linux/#debianubuntu</a> describes how to install the Jenkins binaries on a Debian/ Ubuntu system, so we won&#8217;t go into further detail on that.</p>



<p>Once the required packages are installed as described above, go to <a rel="noreferrer noopener" href="http://localhost:8080/" target="_blank">http://localhost:8080/</a> to finalize the Jenkins setup. After installing the essential plugins required to run Jenkins, you should be directed to the following screen:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="957" height="887" src="https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-30-11-52-08.png" alt="" class="wp-image-1552" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-30-11-52-08.png 957w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-30-11-52-08-300x278.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-30-11-52-08-768x712.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-30-11-52-08-380x352.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-30-11-52-08-550x510.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-30-11-52-08-800x741.png 800w" sizes="(max-width: 957px) 100vw, 957px" /><figcaption>Home screen</figcaption></figure>



<p>Next, install the required plugins.</p>



<p>Go to the main dashboard and then click on &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Manage Jenkins</mark>&#8221; -&gt; &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Manage Plugins</mark>&#8220;. In the &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Available</mark>&#8221; tab, search for:</p>



<ul class="wp-block-list"><li>Azure Credentials</li><li>Terraform</li><li>AnsiColor</li><li>Git plugin</li></ul>



<p>Check the checkbox of the required plugins and install them. Jenkins will restart at the end of the installation.</p>



<h2 class="wp-block-heading">Pipeline Setup</h2>



<p>After all plugins have been installed, the next step will be to add a &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">New Item</mark>&#8221; (see the menu on the right side).</p>



<figure class="wp-block-image size-full"><img decoding="async" width="943" height="918" src="https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-31-13-01-45.png" alt="" class="wp-image-1555" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-31-13-01-45.png 943w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-31-13-01-45-300x292.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-31-13-01-45-768x748.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-31-13-01-45-380x370.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-31-13-01-45-550x535.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/07/Screenshot-from-2022-07-31-13-01-45-800x779.png 800w" sizes="(max-width: 943px) 100vw, 943px" /><figcaption>Add a new item</figcaption></figure>



<p>Start by entering a valid name for the job and choose the type &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Pipeline</mark>&#8220;. Hit &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">OK</mark>&#8221; to continue to the next screen.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="613" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55-1024x613.png" alt="" class="wp-image-1558" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55-1024x613.png 1024w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55-300x179.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55-768x459.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55-380x227.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55-550x329.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55-800x479.png 800w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-00-55.png 1110w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Enter a description for the pipeline, scroll down to the &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Pipeline</mark>&#8221; section and, under &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Definition</mark>&#8220;, choose the option &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Pipeline script from SCM</mark>&#8220;. Choose the SCM you want to use and fill in the URL of the code repository containing the <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Jenkinsfile pipeline file</mark>.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="790" height="775" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-01-37.png" alt="" class="wp-image-1560" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-01-37.png 790w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-01-37-300x294.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-01-37-768x753.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-01-37-380x373.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-01-37-550x540.png 550w" sizes="(max-width: 790px) 100vw, 790px" /></figure>



<p>A bit further down it is possible to configure the SCM branch name to work on and the actual filename of the Jenkinsfile, which in most cases will be called just &#8220;<code>Jenkinsfile</code>&#8220;.</p>
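<p>For reference, a minimal declarative <code>Jenkinsfile</code> for a Terraform workflow might look as follows. This is only a sketch: it assumes the <code>terraform</code> binary is available on the agent and that cloud credentials are configured separately (for instance via the Azure Credentials plugin); the stage names are illustrative:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="Jenkinsfile" data-enlighter-group="">pipeline {
    agent any

    options {
        // colorized console output, provided by the AnsiColor plugin
        ansiColor('xterm')
    }

    stages {
        stage('Init') {
            steps {
                sh 'terraform init -input=false'
            }
        }
        stage('Plan') {
            steps {
                // write the plan to a file so Apply uses exactly this plan
                sh 'terraform plan -input=false -out=tfplan'
            }
        }
        stage('Approve') {
            steps {
                // pause the pipeline until a human approves the plan
                input message: 'Apply this Terraform plan?'
            }
        }
        stage('Apply') {
            steps {
                sh 'terraform apply -input=false tfplan'
            }
        }
    }
}</pre>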



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="777" height="732" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-02-13.png" alt="" class="wp-image-1559" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-02-13.png 777w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-02-13-300x283.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-02-13-768x724.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-02-13-380x358.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-02-13-550x518.png 550w" sizes="(max-width: 777px) 100vw, 777px" /></figure>



<p>Once all options have been set, press &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Save</mark>&#8221; to save the configuration.</p>



<p>Depending on whether the pipeline was created with or without parameters, either click on &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Build</mark>&#8221; or &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Build with Parameters</mark>&#8221; to start the pipeline.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="557" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-1024x557.png" alt="" class="wp-image-1563" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-1024x557.png 1024w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-300x163.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-768x418.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-380x207.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-550x299.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-800x436.png 800w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837-1160x631.png 1160w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect837.png 1453w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Each build will appear in the &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Build History</mark>&#8221; and contains links to, for instance, the &#8220;<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">Console Output</mark>&#8220;, which holds the text output of all stages, steps and plugins executed by the pipeline.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="366" height="268" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-33-57.png" alt="" class="wp-image-1565" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-33-57.png 366w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-33-57-300x220.png 300w" sizes="(max-width: 366px) 100vw, 366px" /></figure>



<p>Each build/run contains information such as:</p>



<ul class="wp-block-list"><li>Who started the pipeline</li><li>The current SCM repository versions (git commit hash)</li><li>An SCM log of the changes since the last build (git commit messages)</li><li>Who approved the pipeline (if applicable)</li><li>The console output of the plugins/commands executed</li></ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="588" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-1024x588.png" alt="" class="wp-image-1567" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-1024x588.png 1024w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-300x172.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-768x441.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-1536x882.png 1536w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-2048x1175.png 2048w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-380x218.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-550x316.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-800x459.png 800w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838-1160x666.png 1160w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/rect838.png 2084w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>The pipeline dashboard page contains a stage view of past pipeline runs, broken down by stage and execution status.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="514" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-1024x514.png" alt="" class="wp-image-1561" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-1024x514.png 1024w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-300x150.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-768x385.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-380x191.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-550x276.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-800x401.png 800w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56-1160x582.png 1160w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-12-56.png 1212w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading">Pipeline definitions</h2>



<p>Finally, let&#8217;s have a look at how this pipeline file is actually built.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="880" height="606" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-49-39.png" alt="" class="wp-image-1568" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-49-39.png 880w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-49-39-300x207.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-49-39-768x529.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-49-39-380x262.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-49-39-550x379.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-49-39-800x551.png 800w" sizes="(max-width: 880px) 100vw, 880px" /><figcaption>Jenkinsfile</figcaption></figure>



<p>The above Jenkinsfile is divided into four sections:</p>



<ul class="wp-block-list"><li>the agent section</li><li>the tools section</li><li>the environment section</li><li>the stages section</li></ul>



<p>The first three sections set build options such as:</p>



<ul class="wp-block-list"><li>on which agent to run the pipeline (in this case, on &#8220;<code>any</code>&#8221; available node)</li><li>which tools are required to be present on the build node</li><li>which environment (shell) parameters should be set</li></ul>



<p>The <code>stages</code> section contains the different stages and steps the pipeline is supposed to run.</p>
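


<p>As a minimal sketch, a declarative Jenkinsfile with these four sections could look like the following (the tool and credential names are illustrative and assume a Terraform tool installer and a matching credential are configured on the Jenkins instance):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">pipeline {
    agent any

    tools {
        terraform 'terraform-latest'
    }

    environment {
        ARM_SUBSCRIPTION_ID = credentials('azure-subscription-id')
    }

    stages {
        stage('Plan') {
            steps {
                sh 'terraform init'
                sh 'terraform plan'
            }
        }
    }
}</pre>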



<p>Each <code>stage</code> contains several steps, one per line:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="905" height="387" src="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-56-24.png" alt="" class="wp-image-1569" srcset="https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-56-24.png 905w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-56-24-300x128.png 300w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-56-24-768x328.png 768w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-56-24-380x162.png 380w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-56-24-550x235.png 550w, https://jmorano.moretrix.com/wp-content/uploads/2022/08/Screenshot-from-2022-08-17-09-56-24-800x342.png 800w" sizes="(max-width: 905px) 100vw, 905px" /></figure>



<p>In the above example, all steps are defined inside the &#8216;steps { }&#8217; block. Steps can be wrapped (enclosed between curly brackets); in the above example, <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-red-color">line 15</mark> calls the <code>ansiColor()</code> plugin to display a colorized output of the steps it contains.</p>



<p>Next, the plugin <code>dir()</code> wraps the rest of the steps, which means that all enclosed steps will be executed in a specific directory.</p>



<p>In that specific directory, three plugins will be executed:</p>



<ul class="wp-block-list"><li><code>git</code>: line 17 in the above example will <code>git clone</code> the <code>main</code> branch of the supplied GitHub URL</li><li><code>echo</code>: line 19 will output some text</li><li><code>sh</code>: lines 20 &#8211; 23 define the shell commands to be executed</li></ul>
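


<p>Put together, a stage of that shape could look like the following sketch (the repository URL, directory name and shell commands are illustrative):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">stage('Checkout and plan') {
    steps {
        ansiColor('xterm') {
            dir('terraform_code') {
                git branch: 'main', url: 'https://github.com/example/example-repo.git'
                echo 'Running terraform...'
                sh '''
                    terraform init
                    terraform plan
                '''
            }
        }
    }
}</pre>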



<p>A complete example can be found at: <a href="https://github.com/insani4c/jenkins-terraform-datacenter-example/blob/main/JenkinsFile" target="_blank" rel="noreferrer noopener">https://github.com/insani4c/jenkins-terraform-datacenter-example/blob/main/JenkinsFile</a> </p>



<p></p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/08/jenkins-to-manage-azure-infrastructure-with-terraform/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Use multiple Azure subscriptions in Terraform modules</title>
		<link>https://jmorano.moretrix.com/2022/04/use-multiple-azure-subscriptions-in-terraform-modules/</link>
					<comments>https://jmorano.moretrix.com/2022/04/use-multiple-azure-subscriptions-in-terraform-modules/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Sat, 30 Apr 2022 12:37:27 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[azurerm]]></category>
		<category><![CDATA[DevOps]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1523</guid>

					<description><![CDATA[Due to billing or organizational structures, certain parts of the infrastructure could be divided over several Azure subscriptions.&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Due to billing or organizational structures, certain parts of the infrastructure could be divided over several Azure subscriptions. From an infrastructure management point of view, however, it might be interesting to manage the resources in those multiple subscriptions in one Terraform playbook.</p>



<p>In the <code>required_providers</code> section, the <code>configuration_aliases</code> parameter must be configured first (usually in the <code>main.tf</code> file). This parameter must contain the same name (or names, as the parameter takes a list of strings) as the <code>alias</code> parameter further below in the second <code>provider</code> section. Each <code>provider</code> section can have its own configuration parameters, such as the <code>subscription_id</code>.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.4.0"
      configuration_aliases = [ azurerm.other-sub ]
    }
  }
}

provider "azurerm" {
  subscription_id = var.subscription_id
  features {}
}

provider "azurerm" {
  alias           = "other-sub"
  subscription_id = var.other_subscription_id
  features {}
}
</pre>
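


<p>The two subscription IDs referenced above come from input variables, which could be declared as follows (variable names as used above, descriptions illustrative):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">variable "subscription_id" {
  type        = string
  description = "The default Azure subscription ID"
}

variable "other_subscription_id" {
  type        = string
  description = "The second Azure subscription ID"
}</pre>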



<p>When calling a module, each <code>provider</code> that has an <code>alias</code> and is required by the module must be specified in the <code>providers</code> parameter. The key is the name configured in the <code>configuration_aliases</code> in the <code>required_providers</code> section of the module (see below); the value is the name of the provider <code>alias</code> in the playbook.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">module "diagnostic_sa_private_endpoint" {
  source = "../terraform_modules/private_endpoint/"
  providers = {
      azurerm.other-sub = azurerm.other-sub
  }

  for_each = { for snet in data.azurerm_subnet.subnets : snet.id => snet }
  ...
}</pre>



<p>The module itself (in <code>../terraform_modules/private_endpoint/</code>) however must again define the alias in a &#8220;<code>required_providers</code>&#8221; section. This means that a file must be included in the directory <code>../terraform_modules/private_endpoint</code> containing:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
      configuration_aliases = [ azurerm.other-sub ]
    }
  }
}
</pre>



<p>Finally, all Azure resources which require the other Azure subscription must include a &#8220;<code>provider</code>&#8221; line similar to:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">data "azurerm_private_dns_zone" "dns_zone" {
  provider            = azurerm.other-sub
  name                = var.private_dns_zone_name
}</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/use-multiple-azure-subscriptions-in-terraform-modules/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>A Prometheus Exporter framework written in Perl</title>
		<link>https://jmorano.moretrix.com/2022/04/a-prometheus-exporter-framework-written-in-perl/</link>
					<comments>https://jmorano.moretrix.com/2022/04/a-prometheus-exporter-framework-written-in-perl/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Mon, 25 Apr 2022 09:45:51 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[Prometheus]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1513</guid>

					<description><![CDATA[I released a small project I wrote a while ago, to create quick Prometheus exporters in Perl for&#8230;]]></description>
										<content:encoded><![CDATA[
<p>I released a small project I wrote a while ago, to create quick Prometheus exporters in Perl for providing some custom data. The project itself can be found at <a rel="noreferrer noopener" href="https://github.com/insani4c/prometheus-exporter" target="_blank">https://github.com/insani4c/prometheus-exporter</a>. Back then I decided not to use <a rel="noreferrer noopener" href="https://metacpan.org/pod/Net::Prometheus" target="_blank">Net::Prometheus</a> as I wanted to use <a rel="noreferrer noopener" href="https://metacpan.org/pod/HTTP::Daemon" data-type="URL" data-id="https://metacpan.org/pod/HTTP::Daemon" target="_blank">HTTP::Daemon</a> with <a rel="noreferrer noopener" href="https://metacpan.org/pod/threads" data-type="URL" data-id="https://metacpan.org/pod/threads" target="_blank">threads</a> and not <a href="https://metacpan.org/pod/Plack" data-type="URL" data-id="https://metacpan.org/pod/Plack" target="_blank" rel="noreferrer noopener">Plack</a>.</p>



<p>A small example of how to use the framework:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use strict;
use warnings;

use Prometheus::Exporter;

# State shared with the collector callback below
my $test_counter  = 0;
my %histo_buckets = ("0.3" => 0, "0.6" => 0, "1.2" => 0, "+Inf" => 0);

my $exporter = Prometheus::Exporter->new({
    listen_port => 9090, 
    listen_addr => "127.0.0.1", 
    max_threads => 5,
});

$exporter->register_metrics({
    test_metric        => {type => "gauge",     desc => "A test metric"},
    test_metric_labels => {type => "gauge",     desc => "A test metric", labels => ["code=42", "code=99"]},
    test_counter       => {type => "counter",   desc => "A test metric"},
    test_histogram     => {type => "histogram", buckets => ['0.3', '0.6', '1.2', '+Inf']},
});

$exporter->register_collector(sub {
    my $timeout = int(rand(5));
    sleep $timeout;

    $exporter->get_metric("test_metric")->value(rand(100));
    $exporter->get_metric("test_metric_labels")->value([rand(42), rand(99)]);

    $test_counter += int(rand(20));
    $exporter->get_metric("test_counter")->value($test_counter);

    $histo_buckets{"0.3"}  += rand(20);
    $histo_buckets{"0.6"}  += $histo_buckets{"0.3"} + rand(20);
    $histo_buckets{"1.2"}  += $histo_buckets{"0.6"} + rand(20);
    $histo_buckets{"+Inf"} += $histo_buckets{"1.2"} + rand(20);
    my $histo_sum = 2.0 * $histo_buckets{"+Inf"};
    my $histo_count = $histo_buckets{"+Inf"};
    $exporter->get_metric("test_histogram")->value(\%histo_buckets, $histo_sum, $histo_count);
});

$exporter->run;
</pre>



<p>The framework will start a small HTTP daemon once <code>run()</code> is called and will handle all client requests using <code>threads</code>. On each request, the framework will call the <code>subroutine</code> or <code>coderef</code> registered with <code>register_collector()</code>. Currently, that coderef must store the observed values by calling the <code>value()</code> method on the registered metric objects, as in the above example.</p>
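


<p>When the exporter is scraped (on port 9090 with the settings above), the registered metrics are rendered in the usual Prometheus exposition format, roughly like this (values are illustrative):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># HELP test_metric A test metric
# TYPE test_metric gauge
test_metric 57.3
# HELP test_counter A test metric
# TYPE test_counter counter
test_counter 1234</pre>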



<p>Also, the histogram implementation does not yet support labels.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/a-prometheus-exporter-framework-written-in-perl/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Managing LDAP passwords with Perl</title>
		<link>https://jmorano.moretrix.com/2022/04/managing-ldap-passwords-with-perl/</link>
					<comments>https://jmorano.moretrix.com/2022/04/managing-ldap-passwords-with-perl/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Mon, 25 Apr 2022 09:30:40 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Perl]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[OpenLDAP]]></category>
		<category><![CDATA[SysAdmin]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1511</guid>

					<description><![CDATA[OpenLDAP Software is an open source implementation of the Lightweight Directory Access Protocol. Many graphical interfaces are available&#8230;]]></description>
										<content:encoded><![CDATA[
<p><a href="https://openldap.org/" data-type="URL" data-id="https://openldap.org/" target="_blank" rel="noreferrer noopener">OpenLDAP</a> Software is an open source implementation of the Lightweight Directory Access Protocol.</p>



<p>Many graphical interfaces are available for managing user accounts in OpenLDAP like PHPLDAPAdmin (<a rel="noreferrer noopener" href="http://phpldapadmin.sourceforge.net/wiki/index.php/Main_Page" target="_blank">http://phpldapadmin.sourceforge.net/wiki/index.php/Main_Page</a>) or LAM (<a rel="noreferrer noopener" href="https://www.ldap-account-manager.org/lamcms/" target="_blank">https://www.ldap-account-manager.org/lamcms/</a>).</p>



<p>Generating a bulk amount of accounts with automation, or just managing user details with a simple script, however, allows much more flexibility and can be even quicker.</p>



<p>LDAP passwords can be stored or changed by using an LDIF file. This LDIF file needs 3 required lines:</p>



<ol class="wp-block-list"><li>The &#8220;<code>dn</code>&#8221; you are about to change</li><li>the &#8220;<code>changetype</code>&#8221; set to &#8220;<code>modify</code>&#8220;</li><li>A &#8220;<code>replace</code>&#8221; line containing the field you want to change (in our case, since we are changing the password, this will be &#8220;<code>userPassword</code>&#8220;)</li></ol>



<p>Your LDAP password can be stored either in clear-text (which is not advisable) or as a salted <code>SHA-hash</code> (<code>SSHA</code>). The hash must have the salt appended at the end and must be <code>base64</code> encoded.</p>
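


<p>To illustrate that structure, the sketch below verifies a password against an <code>{SSHA}</code> value by splitting the decoded blob back into digest and salt (this assumes the default SHA-1 digest, which is 20 bytes long):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use Digest::SHA qw(sha1);
use MIME::Base64;

sub check_ssha {
    my ($password, $ssha) = @_;
    $ssha =~ s/^\{SSHA\}//;
    my $decoded = decode_base64($ssha);
    # The first 20 bytes are the SHA-1 digest, the rest is the salt
    my $digest = substr($decoded, 0, 20);
    my $salt   = substr($decoded, 20);
    return sha1($password . $salt) eq $digest;
}</pre>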



<p>The code snippet below will call a subroutine called <code>generate_password()</code> which comes from a previous article (<a href="https://jmorano.moretrix.com/2013/08/secure-password-generator-perl/" data-type="post" data-id="953">Secure Password Generator in Perl</a>).</p>



<p>At the end of the script, it will print out the LDIF file content, which needs to be saved to <code>change.ldif</code>. Lastly, it will print the <code>ldapmodify</code> command to make the actual change. You will need to know the <code>admin</code> password for this. Alternatively, you could also make this change using your own <code>dn</code> for authentication.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">use Digest::SHA;
use MIME::Base64;

my $random_password = generate_password(24);
my $random_salt     = generate_password(3);

my $ctx = Digest::SHA->new;
$ctx->add($random_password);
$ctx->add($random_salt);
my $hashedPasswd = encode_base64($ctx->digest . $random_salt, '');

print "password: $random_password\n";
print "salt: $random_salt\n";
print &lt;&lt;EOF;
# LDIF
dn: uid=user1,ou=users,dc=shihai-corp,dc=at
changetype: modify
replace: userPassword
userPassword: {SSHA}$hashedPasswd
EOF

print "\n";
print q{LDAP cmd: ldapmodify -H "ldap://ldap_server01" -Z -x -W -D "cn=ldapadmin,ou=admins,dc=shihai-corp,dc=at" -f change.ldif} . "\n\n"</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/managing-ldap-passwords-with-perl/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Deploy a PostgreSQL database with an initial schema using Ansible</title>
		<link>https://jmorano.moretrix.com/2022/04/deploy-a-postgresql-database-with-an-initial-schema-using-ansible/</link>
					<comments>https://jmorano.moretrix.com/2022/04/deploy-a-postgresql-database-with-an-initial-schema-using-ansible/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Sat, 09 Apr 2022 08:46:24 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Database]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Ansible]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Postgresql]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1479</guid>

					<description><![CDATA[Ansible is a great automation tool to manage operating systems, but also to manage database like PostgreSQL. Many&#8230;]]></description>
										<content:encoded><![CDATA[
<p><a rel="noreferrer noopener" href="https://www.ansible.com/" data-type="URL" data-id="https://www.ansible.com/" target="_blank">Ansible</a> is a great automation tool to manage operating systems, but also to manage databases like <a rel="noreferrer noopener" href="https://postgresql.org" data-type="URL" data-id="https://postgresql.org" target="_blank">PostgreSQL</a>. Many <a rel="noreferrer noopener" href="https://docs.ansible.com/ansible/latest/collections/community/postgresql/index.html" data-type="URL" data-id="https://docs.ansible.com/ansible/latest/collections/community/postgresql/index.html" target="_blank">Ansible modules</a> are available to create playbooks which execute various database administration tasks.</p>



<p>In this article we will have a closer look at how to ensure that:</p>



<ul class="wp-block-list"><li>a default database has been created</li><li>a set of configured extensions have been installed</li><li>an initial database schema has been deployed</li></ul>



<p>The user that will log on to the remote host using Ansible (and <code>SSH</code>) must be able to become the <code>postgres</code> user using <code>sudo</code> without a password (in this example; this can of course be changed).</p>



<p>The tasks below are split into two big tasks, both containing a block. <a rel="noreferrer noopener" href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_blocks.html" data-type="URL" data-id="https://docs.ansible.com/ansible/latest/user_guide/playbooks_blocks.html" target="_blank">Blocks</a> allow you to group tasks which need the same variables (for instance the same tags, become variables, &#8230;).</p>



<p>The first block will ensure that the default database has been created, based on a YAML variable called <code>{{ db_name }}</code>. This variable must be set in either a <code>group_vars</code> file, <code>host_vars</code> file or supplied at the command line.</p>



<p>In the same block, Ansible will also ensure that the <code>pg_stat_statements</code> extension is enabled and installed in the above database. <a href="https://www.postgresql.org/docs/current/pgstatstatements.html" data-type="URL" data-id="https://www.postgresql.org/docs/current/pgstatstatements.html" target="_blank" rel="noreferrer noopener">pg_stat_statements</a> is a very useful extension to debug or display statistics regarding the SQL statements executed on that specific database.</p>



<p>At the end of the first block, a task was added to loop over a YAML variable called <code>{{ shared_preload_libraries }}</code>. The variable is supposed to be a comma-separated string and should contain all extra extensions that need to be enabled in the above database. Shared preload extensions also need to be configured in the <code>postgresql.conf</code> file, which is not covered in this article.</p>
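


<p>For reference, the variables used in the tasks below could be set in a <code>group_vars</code> file such as (names taken from the tasks, values illustrative):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group=""># group_vars/databases.yml
db_name: appdb
shared_preload_libraries: "pg_cron,pg_partman"</pre>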



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">- name: Default Database
  tags: default_database
  vars:
    ansible_become_user: postgres
    ansible_become_method: sudo
    ansible_become_pass: null
  block:
    - name: Ensure required database
      postgresql_db:
        name: "{{ db_name }}"
        encoding: UTF-8

    - name: Ensure pg_stat_statements extension
      postgresql_ext:
        name: "pg_stat_statements"
        db: "{{ db_name }}"
        schema: public
        state: present

    - name: Ensure shared_preload extensions
      postgresql_ext:
        name: "{{ item }}"
        db: "{{ db_name }}"
        state: present
      loop: "{{ shared_preload_libraries.split(',') }}"
      loop_control:
        label: " {{ item }}"

- name: Ensure initial database schema
  tags: default_database
  block:
    - name: Schema definition file
      template:
        src: "{{ db_name }}/schema_definition.sql.j2"
        dest: /var/lib/postgresql/initial_schema_definition.sql
        owner: postgres
        group: postgres
        mode: 0600
      when: ( role_path + "/templates/" + db_name + "/schema_definition.sql.j2" ) is file
      register: initial_schema

    - name: Apply the schema definition
      ignore_errors: True
      vars:
        ansible_become_user: postgres
        ansible_become_method: sudo
        ansible_become_pass: null
      community.postgresql.postgresql_query:
        db: "{{ db_name }}"
        path_to_script: /var/lib/postgresql/initial_schema_definition.sql
        encoding: UTF-8
        as_single_query: yes
      when: initial_schema.changed

  rescue:
    - debug:
        msg: "No schema definition found for {{db_name}}, skipping..."
</pre>



<p>The second block of tasks will ensure that an initial database schema is deployed. This block consists of two tasks only. </p>



<p>The first task will upload the schema definition SQL-based file to a specific path on the remote host.</p>



<p>The second task will try to execute that SQL file, if it was changed by the first task. This means of course that if the file was changed manually on the remote host (or even removed), Ansible <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">will update that file again</mark> in the first task and it will deploy the full file again in the second task! This might overwrite or break your database.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/deploy-a-postgresql-database-with-an-initial-schema-using-ansible/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Using Ansible to finalize Hashicorp Packer images</title>
		<link>https://jmorano.moretrix.com/2022/04/using-ansible-to-finalize-hashicorp-packer-images/</link>
					<comments>https://jmorano.moretrix.com/2022/04/using-ansible-to-finalize-hashicorp-packer-images/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Fri, 08 Apr 2022 09:23:30 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[Ansible]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Hashicorp]]></category>
		<category><![CDATA[Packer]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1430</guid>

					<description><![CDATA[Ansible provides a more flexible way to fine-tune Hashicorp Packer images compared to cloud-init. Playbooks can be executed&#8230;]]></description>
										<content:encoded><![CDATA[
<p><a rel="noreferrer noopener" href="https://www.ansible.com/" data-type="URL" data-id="https://www.ansible.com/" target="_blank">Ansible</a> provides a more flexible way to fine-tune <a rel="noreferrer noopener" href="https://www.packer.io/" data-type="URL" data-id="https://www.packer.io/" target="_blank">Hashicorp Packer</a> images compared to <a rel="noreferrer noopener" href="https://cloudinit.readthedocs.io/en/latest/" data-type="URL" data-id="https://cloudinit.readthedocs.io/en/latest/" target="_blank">cloud-init</a>. Playbooks can be executed once the guest image building is ready and boots up for the first time. This makes it possible to create different types of Packer images based on different playbooks.</p>



<p>In this article, Packer images will be created for <a href="https://azure.microsoft.com/en-us/" data-type="URL" data-id="https://azure.microsoft.com/en-us/" target="_blank" rel="noreferrer noopener">Azure</a> using the <a href="https://www.packer.io/plugins/builders/azure/arm" data-type="URL" data-id="https://www.packer.io/plugins/builders/azure/arm" target="_blank" rel="noreferrer noopener">azure-arm</a> build type. The images will use an <a href="https://ubuntu.com/" data-type="URL" data-id="https://ubuntu.com/" target="_blank" rel="noreferrer noopener">Ubuntu</a> image available on Azure as the base image.</p>



<p>Let&#8217;s consider the following head of a Packer <a rel="noreferrer noopener" href="https://www.packer.io/docs/templates" data-type="URL" data-id="https://www.packer.io/docs/templates" target="_blank">template</a> file:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="json" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">{  
  "builders": [{
      "type": "azure-arm",

      "use_azure_cli_auth": true,
      "subscription_id": "{{ user `subscription_id` }}",

      "managed_image_resource_group_name": "{{user `resource_group`}}",
      "managed_image_name": "{{user `managed_image_name`}}",

      "os_type": "{{user `basevm_os_type`}}",
      "image_publisher": "{{user `basevm_publisher`}}",
      "image_offer": "{{user `basevm_offer`}}",
      "image_sku": "{{user `basevm_sku`}}",

      "virtual_network_name": "{{user `virtual_network_name`}}",
      "virtual_network_resource_group_name": "{{user `virtual_network_resource_group_name`}}",
      "virtual_network_subnet_name": "{{user `virtual_network_subnet_name`}}",

      "azure_tags": {
        "dept": "DevOps",
        "task": "My custom base image"
      },

      "location": "{{user `location`}}",
      "vm_size": "{{user `vm_size`}}",

      "os_disk_size_gb": "{{user `os_disk_size_gb`}}",

      "shared_image_gallery_destination": {
        "subscription": "{{ user `subscription_id` }}",
        "resource_group": "{{user `resource_group`}}",
        "gallery_name": "{{user `gallery_name`}}",
        "image_name": "{{user `base_image`}}",
        "image_version": "{{user `base_image_version`}}",
        "replication_regions": "{{user `replication_regions`}}"
      },
      "shared_image_gallery_timeout": "2h1m1s"
    }],
</pre>



<p>All the variables surrounded by <code>{{user `` }}</code> are template variables, and must be configured in a separate file, for example <code>my_base_image_vars.json</code>, which will be included later on when invoking the Packer <a href="https://www.packer.io/docs/commands/build" data-type="URL" data-id="https://www.packer.io/docs/commands/build" target="_blank" rel="noreferrer noopener">build</a> command. The final image will be stored in <a href="https://azure.microsoft.com/" data-type="URL" data-id="https://azure.microsoft.com/" target="_blank" rel="noreferrer noopener">Azure</a>, in a <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/shared-image-galleries" data-type="URL" data-id="https://docs.microsoft.com/en-us/azure/virtual-machines/shared-image-galleries" target="_blank" rel="noreferrer noopener">Shared Image Gallery</a>.</p>
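


<p>Such a variable file could look like the following sketch (all values are illustrative placeholders):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="json" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">{
  "subscription_id": "00000000-0000-0000-0000-000000000000",
  "resource_group": "rg-packer-images",
  "managed_image_name": "my-base-image",
  "basevm_os_type": "Linux",
  "basevm_publisher": "Canonical",
  "basevm_offer": "0001-com-ubuntu-server-jammy",
  "basevm_sku": "22_04-lts",
  "location": "westeurope",
  "vm_size": "Standard_B2s"
}</pre>



<p>It can then be supplied to the build with for instance: <code>packer build -var-file my_base_image_vars.json my_template.json</code>.</p>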



<p>Packer, however, allows you to configure extra <a rel="noreferrer noopener" href="https://www.packer.io/docs/provisioners" data-type="URL" data-id="https://www.packer.io/docs/provisioners" target="_blank">provisioners</a>, which are executed after the initial virtual machine has been created and before the final image is captured.</p>



<p>One of those provisioner types is <a rel="noreferrer noopener" href="https://www.packer.io/plugins/provisioners/ansible/ansible-local" data-type="URL" data-id="https://www.packer.io/plugins/provisioners/ansible/ansible-local" target="_blank">ansible-local</a>. This provisioner executes Ansible playbooks (and roles) directly on the guest machine, once the virtual machine has booted.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">    "provisioners": [
      {
        "type": "shell",
        "inline_shebang": "/bin/sh -x",
        "execute_command": "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'",
        "script": "presetup.sh"
      },
      {
        "type": "ansible-local",
        "playbook_file": "./ansible/default_setup.yml",
        "playbook_dir": "./ansible",
        "role_paths": ["./ansible/defaults"],
        "clean_staging_directory": true,
        "staging_directory": "/tmp/packer-provisioner-ansible-local",
        "extra_arguments" : [ "--extra-vars", "ansible_python_interpreter=/usr/bin/python3" ]
      },
</pre>



<p>The first provisioner called is the <a rel="noreferrer noopener" href="https://www.packer.io/docs/provisioners/shell" data-type="URL" data-id="https://www.packer.io/docs/provisioners/shell" target="_blank">shell</a> provisioner. It uploads and executes the script <code>presetup.sh</code>. This script could, for instance, ensure that the Ansible package is installed before the next provisioner is called.</p>
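<p>As a sketch, <code>presetup.sh</code> could look like this on a Debian/Ubuntu base image (the package names and repository state are assumptions, not part of the original article):</p>

```shell
#!/bin/sh
# presetup.sh -- make sure Ansible is present before the
# ansible-local provisioner runs (Debian/Ubuntu assumed)
set -e
export DEBIAN_FRONTEND=noninteractive
apt-get update -q
apt-get install -y -q ansible python3
```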



<p>The second provisioner is of the <a rel="noreferrer noopener" href="https://www.packer.io/plugins/provisioners/ansible/ansible-local" data-type="URL" data-id="https://www.packer.io/plugins/provisioners/ansible/ansible-local" target="_blank">ansible-local</a> type. This provisioner requires a few parameters to be configured:</p>



<ul class="wp-block-list"><li><code>playbook_file</code>: the Ansible playbook which will be executed</li><li><code>playbook_dir</code>: the base directory of the Ansible playbooks, roles, static_files, &#8230;</li><li><code>role_paths</code>: the paths of the roles called in <code>playbook_file</code> (when required)</li></ul>



<p>The above example will upload all files in the <code>./ansible</code> directory (parameter <code>playbook_dir</code>) to the new guest virtual machine, to the directory configured in <code>staging_directory</code>, and will call the Ansible playbook <code>default_setup.yml</code>.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">---
- name: Default post installation
  hosts: all
  connection: local
  become: yes
  pre_tasks:
    - name: Import the Microsoft signing key into apt
      apt_key:
        url: "https://packages.microsoft.com/keys/microsoft.asc"
        state: present

    - name: Add the Azure CLI software repository
      apt_repository:
        repo: "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ {{ansible_distribution_release}} main"
        filename: azure-cli
        state: present

  roles:
    - defaults
</pre>



<p>The above Ansible playbook uses the parameter <code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">connection: local</mark></code>. This is essential and must be included in every playbook executed by the ansible-local provisioner, as the playbook runs directly on the guest machine itself.</p>



<p>First, the playbook executes two tasks which ensure that the <a href="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt" data-type="URL" data-id="https://docs.microsoft.com/en-us/cli/azure/install-azure-cli-linux?pivots=apt" target="_blank" rel="noreferrer noopener">Microsoft Azure-CLI APT repository</a> is installed and properly set up.</p>



<p>Afterwards, it calls the <code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">defaults</mark></code> Ansible <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html" data-type="URL" data-id="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html" target="_blank" rel="noreferrer noopener">role</a>. This role is set up as a typical <a href="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html" data-type="URL" data-id="https://docs.ansible.com/ansible/latest/user_guide/playbooks_reuse_roles.html" target="_blank" rel="noreferrer noopener">Ansible role</a>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">ansible/defaults:
tasks  templates  vars

ansible/defaults/tasks:
main.yml

ansible/defaults/templates:
modprobe_blacklist.j2  policy-rc.d.conf.j2  policy-rc.d.j2  securetty.j2  sysctl_base.j2

ansible/defaults/vars:
main.yml
</pre>



<p>The <code>main.yml</code> in the <code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">tasks</mark></code> sub-directory configures the required post-installation steps.</p>



<p>Example:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">---
- name: Ensure default packages
  apt:
    name: "{{ default_packages }}"
    state: latest

- name: Ensure latest version of all packages
  apt:
    upgrade: dist
    force_apt_get: yes
    dpkg_options: 'force-confold,force-confdef'
    autoremove: yes
    autoclean: yes

- name: Update python alternatives (for Log Analytics)
  shell: |
    update-alternatives --remove-all python
    update-alternatives --install /usr/bin/python python /usr/bin/python2 1

- name: Forward Syslog-NG logs to Log Analytics
  ansible.builtin.copy:
    src: static_files/syslog-ng-lad.conf
    dest: /etc/syslog-ng/conf.d/syslog-ng-lad.conf
    owner: root
    group: root
    mode: '0644'

...</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/using-ansible-to-finalize-hashicorp-packer-images/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Import configuration from Hiera or a Git repository with YAML files into Terraform</title>
		<link>https://jmorano.moretrix.com/2022/04/import-configuration-from-hiera-or-a-git-repository-with-yaml-files-into-terraform/</link>
					<comments>https://jmorano.moretrix.com/2022/04/import-configuration-from-hiera-or-a-git-repository-with-yaml-files-into-terraform/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Tue, 05 Apr 2022 11:26:43 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Git]]></category>
		<category><![CDATA[Hashicorp]]></category>
		<category><![CDATA[YAML]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1393</guid>

					<description><![CDATA[De-duplication of configuration information is key when managing large environments which use different types of automation (Terraform, Jenkins,&#8230;]]></description>
										<content:encoded><![CDATA[
<p>De-duplication of configuration information is key when managing large environments that use different types of automation (<a rel="noreferrer noopener" href="https://www.terraform.io/" data-type="URL" data-id="https://www.terraform.io/" target="_blank">Terraform</a>, <a rel="noreferrer noopener" href="https://www.jenkins.io/" data-type="URL" data-id="https://www.jenkins.io/" target="_blank">Jenkins</a>, <a rel="noreferrer noopener" href="https://www.ansible.com/" data-type="URL" data-id="https://www.ansible.com/" target="_blank">Ansible</a>, scripts executed as <a rel="noreferrer noopener" href="https://systemd.io/" data-type="URL" data-id="https://systemd.io/" target="_blank">Systemd</a> timers, <a rel="noreferrer noopener" href="https://puppet.com/" data-type="URL" data-id="https://puppet.com/" target="_blank">Puppet</a>&#8230;). Although many different configuration management tools exist (RDBMS, <a rel="noreferrer noopener" href="https://www.consul.io/" data-type="URL" data-id="https://www.consul.io/" target="_blank">Consul</a>, &#8230;), one of the easiest to use is <a href="https://puppet.com/docs/puppet/7/hiera_intro.html" data-type="URL" data-id="https://puppet.com/docs/puppet/7/hiera_intro.html" target="_blank" rel="noreferrer noopener">Hiera</a>, or simply a plain <a rel="noreferrer noopener" href="https://git-scm.com/" data-type="URL" data-id="https://git-scm.com/" target="_blank">Git</a> repository with <code><a rel="noreferrer noopener" href="https://yaml.org/" data-type="URL" data-id="https://yaml.org/" target="_blank">YAML</a></code> files organized in a hierarchy (which is essentially what Hiera is).</p>



<p> The YAML configuration hierarchy could be defined as the following file structure:</p>



<ul class="wp-block-list"><li><code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">common.yaml</mark></code>: default settings, regardless of role or host. These can be overridden by all the levels below.</li><li><code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">my_environment01.yaml</mark></code>: environment-specific configuration (examples: development, staging, production, amsterdam, az01, az04, &#8230;). These can be overridden by all the levels below.</li><li><code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">common/roles/some_server_role.yaml</mark></code>: a server role, or type definition, which contains role-specific configuration parameters. The roles can implement an extra hierarchy, for instance:<br /><ul><li><code>debian::databases::postgres</code></li><li><code>debian::databases::postgres::timescale</code></li><li><code>debian::databases::postgres::timescale::prometheus</code></li><li><code>debian::loadbalancer::internal</code></li><li><code>debian::application::request_processor</code><br /><br />The hierarchy levels are separated by <code>::</code> in the above example, and each level is inherited accordingly, with its own YAML file.<br />These can be overridden by all the levels below.</li></ul></li><li><code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">my_environment01/roles/some_server_role.yaml</mark></code>: overrides role configuration parameters per environment.<br />These can still be overridden at host level below.</li><li><code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">my_environment01/hosts/my_hostname01.yaml</mark></code>: sets host-specific configuration parameters. This file is always required and should contain at least the IP address of the node and the server role string.</li></ul>



<p>Let&#8217;s take the following example: the host <code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-cyan-blue-color">vmazdbprm01</mark></code> has the role <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color"><code>debian::databases::postgres::timescale::prometheus</code> </mark>and is deployed in the environment <code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-green-cyan-color">my_cool_location01</mark></code>. The configuration management should search for parameters in the following file locations (first verifying that each file path exists):</p>



<ol class="wp-block-list"><li><code>common.yaml</code></li><li><code>my_cool_location01.yaml</code></li><li><code>common/roles/debian.yaml</code></li><li><code>common/roles/debian::databases.yaml</code></li><li><code>common/roles/debian::databases::postgres.yaml</code></li><li><code>common/roles/debian::databases::postgres::timescale.yaml</code></li><li><code>common/roles/debian::databases::postgres::timescale::prometheus.yaml</code></li><li><code>my_cool_location01/roles/debian.yaml</code></li><li><code>my_cool_location01/roles/debian::databases.yaml</code></li><li><code>my_cool_location01/roles/debian::databases::postgres.yaml</code></li><li><code>my_cool_location01/roles/debian::databases::postgres::timescale.yaml</code></li><li><code>my_cool_location01/roles/debian::databases::postgres::timescale::prometheus.yaml</code></li><li><code>my_cool_location01/hosts/vmazdbprm01.yaml</code></li></ol>



<p>This means that any code which wants to implement the above configuration management needs to verify whether each of the above <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">13 files</mark> exists, from top to bottom, and if so, load the YAML file accordingly.</p>
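<p>The lookup order above can be sketched in a few lines of Python (the function name and layout are hypothetical, not taken from any tool mentioned here):</p>

```python
# Build the ordered list of candidate YAML files for a node,
# least specific first, exactly as described above.
def hierarchy_paths(environment, host, role):
    segments = role.split("::")
    # "debian", "debian::databases", ... up to the full role string
    prefixes = ["::".join(segments[:i + 1]) for i in range(len(segments))]

    paths = ["common.yaml", f"{environment}.yaml"]
    paths += [f"common/roles/{p}.yaml" for p in prefixes]
    paths += [f"{environment}/roles/{p}.yaml" for p in prefixes]
    paths.append(f"{environment}/hosts/{host}.yaml")
    return paths

role = "debian::databases::postgres::timescale::prometheus"
print(hierarchy_paths("my_cool_location01", "vmazdbprm01", role))
```

<p>Each existing file is then loaded and merged, with later (more specific) files overriding earlier ones.</p>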



<p>Terraform is an open-source infrastructure as code software tool that provides a consistent CLI workflow to manage hundreds of cloud services.</p>



<p>Implementing the above <code>YAML</code> hierarchy in Terraform, could be done as follows:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="yaml" data-enlighter-theme="" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">locals {
  host_cfg             = yamldecode(fileexists("cfgmgmt/${var.environment}/hosts/${var.node}.yaml") ? file("cfgmgmt/${var.environment}/hosts/${var.node}.yaml") : "{server_role: debian}")

  roles_list           = split("::", local.host_cfg.server_role)
  all_roles_list       = [ for index in range(length(local.roles_list)): join("::",slice(local.roles_list, 0, index + 1))  ]

  common_cfg           = yamldecode(fileexists("cfgmgmt/common.yaml") ? file("cfgmgmt/common.yaml") : "{}")
  common_role_cfg_list = [ for file in local.all_roles_list:
      yamldecode(fileexists("cfgmgmt/common/roles/${file}.yaml") ? file("cfgmgmt/common/roles/${file}.yaml") : "{}" )]
    
  env_cfg              = yamldecode(fileexists("cfgmgmt/${var.environment}.yaml") ? file("cfgmgmt/${var.environment}.yaml") : "{}")
  env_role_cfg_list    = [ for file in local.all_roles_list:
      yamldecode(fileexists("cfgmgmt/${var.environment}/roles/${file}.yaml") ? file("cfgmgmt/${var.environment}/roles/${file}.yaml") : "{}") ]
        
  common_role_cfg_map  = merge(local.common_role_cfg_list...)
  env_role_cfg_map     = merge(local.env_role_cfg_list...)

  cfg                  = merge(local.common_cfg, local.env_cfg, local.common_role_cfg_map, local.env_role_cfg_map, local.host_cfg)
}</pre>



<p>Let&#8217;s have a look what actually happens in the above code.</p>



<p>All YAML files are stored in a Git/ Hiera repository, accessible in the sub-directory <code><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">cfgmgmt</mark></code>.</p>



<p>The code declares &#8220;<code>local</code>&#8221; variables in a &#8220;<code><a rel="noreferrer noopener" href="https://www.terraform.io/language/values/locals" data-type="URL" data-id="https://www.terraform.io/language/values/locals" target="_blank">locals</a></code>&#8221; block, starting at <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">line 1</mark>.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 2</mark> checks whether a file called <code>cfgmgmt/${var.environment}/hosts/${var.node}.yaml</code> exists and, if <code>true</code>, loads the <code>YAML</code> content as a map into the local variable <code>host_cfg</code>. If the file doesn&#8217;t exist, a default <code>YAML</code> document is loaded instead. In practice, each node/host must have such a file, as it should contain at least configuration data such as:</p>



<ul class="wp-block-list"><li>unique node host name</li><li>IP address</li><li>server role</li><li>(optionally) VLAN/ subnet configuration</li><li>&#8230;</li></ul>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 4</mark> splits the server role string, stored in <code>local.host_cfg.server_role</code>, into a list, to build the server role hierarchy further below.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 5 </mark>creates the list of all parent server roles which need to be imported too. Example: if the server role is set to <code>debian::databases::postgres::timescale::prometheus</code>, the list <code>all_roles_list</code> will contain the following elements:</p>



<ul class="wp-block-list"><li><code>debian</code></li><li><code>debian::databases</code></li><li><code>debian::databases::postgres</code></li><li><code>debian::databases::postgres::timescale</code></li><li><code>debian::databases::postgres::timescale::prometheus</code></li></ul>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 7</mark> loads the YAML content of <code>common.yaml</code>, if it exists.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 8</mark> loops over the <code>all_roles_list</code> elements, created on <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">line 5</mark>, and will load the YAML content of the server roles (if the file exists) into a list element. The result is a list called <code>common_role_cfg_list</code>.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 11</mark> loads the general environment configuration YAML content (if it exists) into the local variable <code>env_cfg</code>.</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Line 12 </mark>does the same thing as<mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color"> line 8</mark>, but for environment-specific roles (for instance, when certain server roles have environment-specific configuration parameters).</p>



<p><mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">Lines 15</mark> and <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">16</mark> merge the elements of the server role lists (which are <code><a rel="noreferrer noopener" href="https://www.terraform.io/language/values/variables#map" data-type="URL" data-id="https://www.terraform.io/language/values/variables#map" target="_blank">maps</a></code> of YAML data) into one big map, in list order, so that keys defined later can override earlier ones. The expansion notation <code>...</code> is explained at <a rel="noreferrer noopener" href="https://www.terraform.io/language/expressions/function-calls#expanding-function-arguments" target="_blank">https://www.terraform.io/language/expressions/function-calls#expanding-function-arguments</a>.</p>



<p>Finally on <mark style="background-color:rgba(0, 0, 0, 0)" class="has-inline-color has-vivid-red-color">line 18</mark>, a local variable called <code>cfg</code> will be created, which merges the values of:</p>



<ul class="wp-block-list"><li><code>local.common_cfg</code></li><li><code>local.env_cfg</code></li><li><code>local.common_role_cfg_map</code></li><li><code>local.env_role_cfg_map</code></li><li><code>local.host_cfg</code></li></ul>
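<p>Terraform&#8217;s <code>merge()</code> gives later arguments precedence, so host-level values win over role, environment and common defaults. The same behavior, illustrated with Python dictionaries (the keys and values here are made up purely for the example):</p>

```python
# merge(common, env, host): later dictionaries override earlier keys
common_cfg = {"ntp_server": "pool.ntp.org", "dns_servers": ["1.1.1.1"]}
env_cfg    = {"dns_servers": ["10.0.0.53"]}
host_cfg   = {"ip_address": "10.1.2.10"}

cfg = {**common_cfg, **env_cfg, **host_cfg}
print(cfg)
```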



<p>By providing the environment name and the host/node name to the above code (as <code>var.environment</code> and <code>var.node</code>), all required configuration parameters can be loaded per node in Terraform. And since the files live in a plain Git repository, the same information can be consumed by any other automation tool, provided each tool implements the same hierarchy logic.</p>
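<p>As a usage sketch (the variable declarations and values are assumed, not shown in the article), the environment and node could be supplied on the command line:</p>

```shell
terraform plan -var 'environment=my_cool_location01' -var 'node=vmazdbprm01'
```

<p>Any resource can then read merged settings from <code>local.cfg</code>, for example <code>lookup(local.cfg, "vm_size", "Standard_B2s")</code> to fall back to a default when no level of the hierarchy defines the key.</p>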
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/04/import-configuration-from-hiera-or-a-git-repository-with-yaml-files-into-terraform/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Terraform and libvirt nodes</title>
		<link>https://jmorano.moretrix.com/2022/03/terraform-and-libvirtd-nodes/</link>
					<comments>https://jmorano.moretrix.com/2022/03/terraform-and-libvirtd-nodes/#comments</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Wed, 30 Mar 2022 09:45:11 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Hashicorp]]></category>
		<category><![CDATA[Hetzner]]></category>
		<category><![CDATA[KVM]]></category>
		<category><![CDATA[Libvirt]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Qemu]]></category>
		<category><![CDATA[Ubuntu]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1302</guid>

					<description><![CDATA[Libvirt (libvirtd) nodes (based on KVM and Qemu) are a great and cheap (read: free) alternative of deploying&#8230;]]></description>
										<content:encoded><![CDATA[
<p>Libvirt (<a href="https://libvirt.org/manpages/libvirtd.html" data-type="URL" data-id="https://libvirt.org/manpages/libvirtd.html" target="_blank" rel="noreferrer noopener">libvirtd</a>) nodes (based on KVM and Qemu) are a great and cheap (read: free) alternative for deploying virtual nodes in a cloud. All that is required is a server acting as a hypervisor; for this article we chose a <a href="https://www.hetzner.com/" data-type="URL" data-id="https://www.hetzner.com/" target="_blank" rel="noreferrer noopener">Hetzner</a> server installed with <a href="https://ubuntu.com/" data-type="URL" data-id="https://ubuntu.com/" target="_blank" rel="noreferrer noopener">Ubuntu Linux</a> 20.04 LTS.</p>



<p>After the default installation of Ubuntu 20.04 LTS, the following packages are required to get started as a hypervisor:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">apt install qemu-kvm libvirt-daemon bridge-utils virtinst libvirt-daemon-system virt-top libguestfs-tools libosinfo-bin qemu-system virt-manager qemu pm-utils</pre>



<p>Once these are installed, the <code>vhost_net</code> module needs to be pre-loaded via <code>/etc/modules</code>:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="shell" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">echo vhost_net | tee -a /etc/modules
modprobe vhost_net</pre>



<p>The hypervisor is now ready to start creating and deploying virtual machines. </p>
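<p>A few quick checks (using tools from the packages installed above) can confirm the hypervisor is functional:</p>

```shell
lsmod | grep vhost_net      # kernel module loaded?
virt-host-validate qemu     # validates KVM device nodes, cgroups, ...
virsh list --all            # is libvirtd answering?
```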



<p>In this article, <a rel="noreferrer noopener" href="https://www.terraform.io/" data-type="URL" data-id="https://www.terraform.io/" target="_blank">Terraform</a> will be used to manage the virtual machines in libvirtd. All example code snippets are available on GitHub, at <a href="https://github.com/insani4c/terraform-libvirt" target="_blank" rel="noreferrer noopener">https://github.com/insani4c/terraform-libvirt</a></p>



<p>Terraform has an excellent provider (<a href="https://registry.terraform.io/providers/dmacvicar/libvirt/latest" data-type="URL" data-id="https://registry.terraform.io/providers/dmacvicar/libvirt/latest" target="_blank" rel="noreferrer noopener">dmacvicar/libvirt</a>) to manage the libvirt nodes, which needs to be loaded and initialized:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">terraform {
  required_providers {
    libvirt = {
      source = "dmacvicar/libvirt"
    }
  }
}

provider "libvirt" {
  uri = "qemu+ssh://root@192.168.1.1/system"
}</pre>



<p>In the above code snippet, Terraform ensures the libvirt provider is loaded and configures it to connect as the root user to the host with IP address <code>192.168.1.1</code> (this requires that the SSH public key of the user executing Terraform is installed on the remote server).</p>



<p>Next, the network in which the virtual nodes will be deployed is defined.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">resource "libvirt_network" "my_network" {
  name = "my_net"

  mode = "nat"

  addresses = ["10.1.2.0/24"]
  domain    = var.dns_domain

  autostart = true

  dhcp {
    enabled = false
  }

  dns {
    enabled = true

    local_only = false
    forwarders { address = "127.0.0.53" }

    hosts  {
        hostname = "host01"
        ip = "10.1.2.10"
      }
    hosts {
        hostname = "host02"
        ip = "10.1.2.20"
      }
  }  
}</pre>



<p>The above code ensures that a network of type <code>NAT</code> (internal IP addresses, reachable from the hypervisor only) with the subnet <code>10.1.2.0/24</code> is created. It disables <code>DHCP</code> and enables a <code>DNS</code> setup (the package <code>dnsmasq</code> must be installed) with two predefined hosts, <code>host01</code> and <code>host02</code>.</p>



<p>Up next is the definition of the storage pools and volumes required for the virtual machines.</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">resource "libvirt_pool" "default" {
  name = "default"
  type = "dir"
  path = "/data/vms/cluster_storage"
}

resource "libvirt_volume" "local_install_image" {
  name   = var.local_install_image
  pool   = libvirt_pool.default.name
  source = var.os_img_url
  format = "qcow2"
}</pre>



<p>The above defines the <code>libvirt_pool</code>, which basically configures the on-disk path for storing all sorts of volumes. Next it defines a volume called &#8220;<code>local_install_image</code>&#8221;, which will be used to set up the virtual machine, as it contains the &#8220;cloud image&#8221; for the installation. This volume requires two variables:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">variable "os_img_url" {
  description = "URL to the OS image"
  default     = "https://cloud-images.ubuntu.com/focal/current/focal-server-cloudimg-amd64.img"
}

variable "local_install_image" {
    description = "The name of the local install image"
    default     = "base-os-ubuntu-focal.qcow2"
}</pre>
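<p>To complete the picture, a virtual machine could then be defined roughly as follows. This is a sketch against the dmacvicar/libvirt provider; the names, sizes and IP address are assumptions, and cloud-init wiring is left out:</p>

```hcl
resource "libvirt_volume" "host01_root" {
  name           = "host01-root.qcow2"
  pool           = libvirt_pool.default.name
  base_volume_id = libvirt_volume.local_install_image.id
  size           = 20 * 1024 * 1024 * 1024  # 20 GiB, in bytes
}

resource "libvirt_domain" "host01" {
  name   = "host01"
  memory = 2048
  vcpu   = 2

  network_interface {
    network_id = libvirt_network.my_network.id
    hostname   = "host01"
    addresses  = ["10.1.2.10"]
  }

  disk {
    volume_id = libvirt_volume.host01_root.id
  }

  console {
    type        = "pty"
    target_port = "0"
  }
}
```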


]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/03/terraform-and-libvirtd-nodes/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>Terraform: Create a map of subnet IDs in Azure</title>
		<link>https://jmorano.moretrix.com/2022/03/terraform-create-a-map-of-subnet-ids-in-azure/</link>
					<comments>https://jmorano.moretrix.com/2022/03/terraform-create-a-map-of-subnet-ids-in-azure/#respond</comments>
		
		<dc:creator><![CDATA[Johnny Morano]]></dc:creator>
		<pubDate>Fri, 04 Mar 2022 11:27:16 +0000</pubDate>
				<category><![CDATA[Automation]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[Terraform]]></category>
		<category><![CDATA[Azure]]></category>
		<category><![CDATA[azurerm]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Hashicorp]]></category>
		<guid isPermaLink="false">https://jmorano.moretrix.com/?p=1295</guid>

					<description><![CDATA[The subnets accessor in the azurerm_virtual_network Terraform data source returns a list of subnet names only. In most&#8230;]]></description>
										<content:encoded><![CDATA[
<p>The <code>subnets</code> accessor in the <code>azurerm_virtual_network</code> Terraform data source returns a list of subnet names only. In most cases, however, you will need one or more subnet IDs, for instance when deploying virtual machines. Instead of creating a new <code>data</code> source (for a possibly small list of subnets) for each virtual machine you want to deploy, creating a <code>locals</code> map, which can be looked up afterwards, is going to be faster on the <code>apply</code> run.</p>



<p>Create a list of the existing virtual network subnets:</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">data "azurerm_subnet" "subnets" {
  count = length(data.azurerm_virtual_network.my_vnet.subnets)

  name                 = data.azurerm_virtual_network.my_vnet.subnets[count.index]
  virtual_network_name = var.vnet_name
  resource_group_name  = var.resource_group
}

locals {
  subnets = tomap({
      for snet in data.azurerm_subnet.subnets: snet.name => snet.id
  })
}</pre>



<p>In the above example, we first loop over all subnet names, returned by <code>data.azurerm_virtual_network.my_vnet.subnets</code>, to create a list of Azure virtual network subnet objects.</p>



<p>Afterwards we create a <code>locals</code> map called <code>subnets</code>, which contains mapping like &#8220;subnet name points to subnet ID&#8221;.</p>



<p>Finally, when creating Azure network interfaces with an IP configuration, you can easily look up the correct subnet ID based on the subnet name (which you might have configured per virtual machine):</p>



<pre class="EnlighterJSRAW" data-enlighter-language="generic" data-enlighter-theme="monokai" data-enlighter-highlight="" data-enlighter-linenumbers="" data-enlighter-lineoffset="" data-enlighter-title="" data-enlighter-group="">resource "azurerm_network_interface" "my_nic" {
...
  ip_configuration {
    ...
    subnet_id = lookup(local.subnets, "my_subnet_name")
  }
...</pre>
]]></content:encoded>
					
					<wfw:commentRss>https://jmorano.moretrix.com/2022/03/terraform-create-a-map-of-subnet-ids-in-azure/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
