So, I've created a whole playbook out of this request. The playbook:
- Checks whether the URLs return status code 200
- Measures the response time from the host to the server
- Sends a Slack message when a check fails (sketched after the playbook below)
- Ships the logs to an Elasticsearch server (also sketched below)
You could set up a cron job to run the playbook every x minutes (plain cron cannot schedule at sub-minute intervals).
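For example, a crontab entry that runs it every five minutes could look like this (the playbook path is an assumption, adjust it to wherever you save the file):

    # run the health-check playbook every five minutes
    */5 * * * * ansible-playbook /etc/ansible/health_check.yml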
To answer this specific question:
- hosts: localhost
  vars:
    sites:
      - https://google.com
    # file_path and start_time are used below but were never defined;
    # these defaults are assumptions, adjust them to your environment.
    file_path: /tmp/url_health.csv
    start_time: "{{ ansible_date_time.iso8601 }}"  # needs fact gathering (on by default)
  tasks:
    - name: verify that all URLs respond properly
      uri:
        url: "{{ item }}"
        timeout: 10
      with_items: "{{ sites }}"
      register: health_check
      ignore_errors: yes

    # From the curl docs: time_starttransfer is the time, in seconds, it took
    # from the start until the first byte was just about to be transferred.
    # This includes time_pretransfer and also the time the server needed to
    # calculate the result.
    - name: measure response times of the sites via cURL's time_starttransfer metric
      shell: 'curl {{ item }} -s -o /dev/null -w "%{time_starttransfer}\n"'
      register: curl_time
      ignore_errors: yes
      args:
        warn: false  # suppress the "consider using get_url or uri" hint on older Ansible
      with_items: "{{ sites }}"

    - name: write url and status code to file
      lineinfile:
        line: "{{ start_time }}, {{ item.url }}, {{ item.status }}"
        insertafter: EOF
        dest: "{{ file_path }}"
        create: yes  # lineinfile fails on a missing file without this
      with_items: "{{ health_check.results }}"

    - name: append response times to the file
      lineinfile:
        path: "{{ file_path }}"
        backrefs: yes
        regexp: "^(.*{{ start_time }}, .*{{ item.item | replace('https://', '') | replace('http://', '') }}.*)"
        line: '\1, {{ item.stdout }}'
      with_items: "{{ curl_time.results }}"