Ansible is a great automation tool for managing operating systems, but also for managing databases like PostgreSQL. Many Ansible modules are available to build playbooks that execute various database administration tasks.
In this article we will take a closer look at how to ensure that:
- a default database has been created
- a set of configured extensions have been installed
- an initial database schema has been deployed
The user that Ansible uses to log on to the remote host (over SSH) must be able to become the postgres user using sudo without a password (in this example; this can of course be changed).
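On most Linux distributions this can be arranged with a small sudoers drop-in. A minimal sketch, assuming the connecting user is called ansible (the user name is just an example, adjust it to your environment):

# /etc/sudoers.d/ansible -- allow the Ansible user to become postgres without a password
ansible ALL=(postgres) NOPASSWD: ALL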
The tasks below are split into two big tasks, each containing a block. Blocks allow you to group tasks that share the same settings (for instance the same tags, become variables, …).
The first block will ensure that the default database has been created, based on a YAML variable called {{ db_name }}. This variable must be set in a group_vars file, a host_vars file, or supplied on the command line.
In the same block, Ansible will also ensure that the pg_stat_statements extension is installed and enabled in the above database. pg_stat_statements is a very useful extension for debugging or displaying statistics about the SQL statements executed on that specific database.
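Once the extension is active, the collected statistics can be queried directly in the database. A small example (as of PostgreSQL 13 the timing columns are called total_exec_time and mean_exec_time; older releases use total_time and mean_time):

-- the five most expensive statements by total execution time
SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;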
At the end of the first block, a task was added to loop over a YAML variable called {{ shared_preload_libraries }}. The variable is supposed to be a comma-separated string and should contain all extra extensions that need to be enabled in the above database. These extensions also need to be listed in the shared_preload_libraries setting in the postgresql.conf file, which is not covered in this article.
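As an example, the relevant variables could be defined in a group_vars file like this (the database and extension names below are placeholders, pick whatever fits your environment):

# group_vars/databases.yml -- hypothetical example
db_name: appdb
shared_preload_libraries: "pg_prewarm,pgaudit"

On the PostgreSQL side, the corresponding setting in postgresql.conf would then contain at least the following (note that pg_stat_statements also needs to be preloaded to collect statistics, and that a server restart is required for this setting to take effect):

shared_preload_libraries = 'pg_stat_statements,pg_prewarm,pgaudit'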
- name: Default Database
  tags: default_database
  vars:
    ansible_become_user: postgres
    ansible_become_method: sudo
    ansible_become_pass: null
  block:
    - name: Ensure required database
      postgresql_db:
        name: "{{ db_name }}"
        encoding: UTF-8

    - name: Ensure pg_stat_statements extension
      postgresql_ext:
        name: "pg_stat_statements"
        db: "{{ db_name }}"
        schema: public
        state: present

    - name: Ensure shared_preload extensions
      postgresql_ext:
        name: "{{ item }}"
        db: "{{ db_name }}"
        state: present
      loop: "{{ shared_preload_libraries.split(',') }}"
      loop_control:
        label: "{{ item }}"

- name: Ensure initial database schema
  tags: default_database
  block:
    - name: Schema definition file
      template:
        src: "{{ db_name }}/schema_definition.sql.j2"
        dest: /var/lib/postgresql/initial_schema_definition.sql
        owner: postgres
        group: postgres
        mode: 0600
      when: ( role_path + "/templates/" + db_name + "/schema_definition.sql.j2" ) is file
      register: initial_schema

    - name: Apply the schema definition
      ignore_errors: True
      vars:
        ansible_become_user: postgres
        ansible_become_method: sudo
        ansible_become_pass: null
      community.postgresql.postgresql_query:
        db: "{{ db_name }}"
        path_to_script: /var/lib/postgresql/initial_schema_definition.sql
        encoding: UTF-8
        as_single_query: yes
      when: initial_schema.changed
  rescue:
    - debug:
        msg: "No schema definition found for {{ db_name }}, skipping..."
The second block of tasks will ensure that an initial database schema is deployed. This block consists of two tasks only.
The first task will upload the SQL-based schema definition file to a specific path on the remote host.
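A minimal sketch of such a template, stored in the role under templates/{{ db_name }}/schema_definition.sql.j2; the schema, table, and column names below are purely illustrative:

-- schema_definition.sql.j2 -- hypothetical example
-- IF NOT EXISTS keeps re-applying the file reasonably safe
CREATE SCHEMA IF NOT EXISTS app;

CREATE TABLE IF NOT EXISTS app.customers (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name       text NOT NULL,
    created_at timestamptz NOT NULL DEFAULT now()
);

-- Jinja2 expressions are rendered by the template module, for instance:
COMMENT ON DATABASE {{ db_name }} IS 'managed by Ansible';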
The second task will try to execute that SQL file, but only if it was changed by the first task. This of course means that if the file was changed manually on the remote host (or even removed), Ansible will upload the file again in the first task and apply the full file again in the second task! This might overwrite or break your database.
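Assuming the tasks above are part of a role that is included in a playbook called site.yml (the playbook name and the database name here are just examples), everything can be applied like this:

ansible-playbook site.yml --tags default_database -e db_name=appdb

Writing the schema file with IF NOT EXISTS statements, as in the template sketch above, at least reduces the risk that such an unintended re-deployment does any harm.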