Ansible/inventory/group_vars/kube-master.yml
James Tombleson d14c2aaa2c
Dev (#9)
* docker tests are looking good; NFS is able to connect and containers can talk to each other.

* Added pihole support for a new vm

* pihole is not working yet via docker.  Installed it by hand without ansible for now.

* added some docker-related tasks and am working on collins now to see how to use it.

* forgot to push some changes... kube didn't work out as it adds too much overhead for what I need.

* added two roles to help working with backup and restore of docker volume data.
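
A backup role for docker volume data usually boils down to tarring the volume through a throwaway container; the task below is only a sketch under that assumption, and the backup_volume_name / backup_dest_dir variables are made-up names, not taken from the actual roles.

- name: Archive a named docker volume to a tarball (volume/path variables are assumed)
  ansible.builtin.command:
    argv:
      - docker
      - run
      - --rm
      - -v
      - "{{ backup_volume_name }}:/data:ro"
      - -v
      - "{{ backup_dest_dir }}:/backup"
      - alpine
      - tar
      - czf
      - "/backup/{{ backup_volume_name }}.tar.gz"
      - -C
      - /data
      - .

The restore role would be the same pattern in reverse: mount the volume read-write and extract the tarball into it.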

* did some cleanup on old roles.

* pushing for axw testing

* moving to requirements.yml; adding cron jobs for maintenance.
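
A maintenance cron job can be managed with the ansible.builtin.cron module; the schedule and command below are illustrative only, not the actual jobs in this repo.

- name: Schedule a weekly maintenance job (example schedule and command)
  ansible.builtin.cron:
    name: weekly-maintenance
    user: root
    weekday: "0"
    hour: "3"
    minute: "0"
    job: "apt-get update && apt-get -y autoremove"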

* roles are being moved out of this repo and are handled by requirements.yml going forward. Dev roles are still in the repo, but if they stick around a new repo will be made for them.
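
In practice that means a requirements.yml roughly like the sketch below (the role and collection entries are placeholders), installed with ansible-galaxy role install -r requirements.yml and, for the collections section, ansible-galaxy collection install -r requirements.yml.

---
roles:
  - name: example.docker                                  # placeholder role name
    src: https://github.com/example/ansible-role-docker   # placeholder source repo
    scm: git
    version: main
collections:
  - name: community.general                               # example collection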

* Made a bunch of changes

* fixed a problem

* Added a playbook to deploy grafana and added prometheus role to monitor things.

* Updated cron to test

* Updated cron to test

* Updated cron

* updated discord_webhook and now testing if cron will pick up the changes.
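
Without the discord_webhook role in front of me, the notification step presumably reduces to a uri call against a Discord webhook along these lines; discord_webhook_url is an assumed variable name.

- name: Post a status message to a Discord webhook (URL variable is assumed)
  ansible.builtin.uri:
    url: "{{ discord_webhook_url }}"
    method: POST
    body_format: json
    body:
      content: "Cron run finished on {{ inventory_hostname }}"
    status_code: [200, 204]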

* Fixed plex backup for now.

* docker updates and working on nginx

* pushing pending changes that need to go live for cron testing

* fixed debug roles and updated discord test

* fixed debug roles and updated discord test

* Disabling test cron

* it's been a while... I am not sure what I have done anymore, but it's time to push my changes.

* added newsbot configs, added them to jenkins, and started to migrate to collections.

* Updated inventory to support the network changes

* jenkinsfile is now working in my local setup.

* node2 is unhealthy and has been removed from the inventory. I was doing something to this box months ago, but now I don't remember what it was.

* updated images and adding them to jenkins for testing

* removed the old image files and moved to my public image

* Jenkins will now inform discord of jobs. Added post tasks. Added mediaserver common.

* updated the backend update job and added a jenkins pipeline to handle it for me.

* updated the backup job again

* Updated all the jenkins jobs. Added a jenkins newsbot backup job. Adjusted newsbot plays to add backup and redeploy jobs.

* updated the newsbot backup playbook to manage older backup files as needed.

* Added debug message to report in CI what version is getting deployed.

* I did something stupid and this device is not letting me log in for now.

* removing the twitter source for now, as I found a bandwidth-related bug and the fix won't get pushed for a bit

* Adding a bunch of changes, some is cleanup and some are adds

* updated the images

* updated the kube common playbook

* Started to work on ceph but stopped due to hardware resources, updated common, added monit, and started to work on a playbook to handle my ssh access.

* Added a role to deploy monit to my servers. It still needs some more updates before it's ready.

* Here is my work on ceph; it might go away, but I am not sure yet.

* Starting to migrate my common playbook to a role, not done yet.

* updated kube and inventory

* updated gitignore
2022-01-28 16:22:11 -08:00


---
# Inventory vars for the 'kube-master' host
kubernetes_role: master
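# Each monit_hosts entry below describes one HTTP health check for the monit
# role to render: 'when' holds the check definition (port, protocol, request
# path and optional basic-auth credentials) and 'then' the failure action,
# which here runs the Discord alert script instead of raising a normal monit
# alert. (Structure as inferred from the data; the consuming role defines the
# exact schema.)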
monit_hosts:
  - name: jenkins
    group: kube
    address: 192.168.1.247
    when:
      - http:
          enabled: true
          username: ''
          password: ''
          port: 80
          protocol: http
          request: '/login'
        then:
          alert: false
          exec: "{{ monit_discord_alert_script }}"
          restart: false
  - name: pihole
    group: kube
    address: 192.168.1.248
    when:
      - http:
          enabled: true
          username: ''
          password: ''
          port: 80
          protocol: http
          request: '/'
        then:
          alert: false
          exec: "{{ monit_discord_alert_script }}"
          restart: false
  - name: nextcloud
    group: kube
    address: 192.168.1.249
    when:
      - http:
          enabled: true
          username: ''
          password: ''
          port: 80
          protocol: http
          request: '/'
        then:
          alert: false
          exec: "{{ monit_discord_alert_script }}"
          restart: false
  - name: search
    group: kube
    address: 192.168.1.251
    when:
      - http:
          enabled: true
          protocol: http
          username: ''
          password: ''
          port: 80
          request: '/'
        then:
          alert: false
          exec: "{{ monit_discord_alert_script }}"
          restart: false
  - name: get
    group: kube
    address: 192.168.1.252
    when:
      - http:
          enabled: true
          username: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            63653338356435333664323436633063663132623530356162653130313435363761613633623266
            3237623031353935626131346461303034373433366136640a323436613831646432356566626564
            31653733346164383363373238343534613662613636346334646539636134386365656334333638
            3037626533363965630a373537363563373566613237663635363132353563656262363939316635
            3565
          password: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            32383461323230323435386635316166353461316237356138666335363734333338353131303536
            3032383231323461336565303231316338666436313361630a343332383163333932363734653734
            62653266623764333335663335623162616235323232653936663166393436633734303363373662
            6330363538616166320a353063653863613862373834303331666138333836313530313132613962
            3034
          port: 80
          protocol: http
          request: '/'
        then:
          alert: false
          exec: "{{ monit_discord_alert_script }}"
          restart: false
  - name: son
    group: kube
    address: 192.168.1.253
    when:
      - http:
          enabled: true
          username: ''
          password: ''
          port: 80
          protocol: http
          request: '/'
        then:
          alert: false
          exec: "{{ monit_discord_alert_script }}"
          restart: false
  - name: registry
    group: kube
    address: 192.168.1.250
    when:
      - http:
          enabled: true
          username: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            63653338356435333664323436633063663132623530356162653130313435363761613633623266
            3237623031353935626131346461303034373433366136640a323436613831646432356566626564
            31653733346164383363373238343534613662613636346334646539636134386365656334333638
            3037626533363965630a373537363563373566613237663635363132353563656262363939316635
            3565
          password: !vault |
            $ANSIBLE_VAULT;1.1;AES256
            32383461323230323435386635316166353461316237356138666335363734333338353131303536
            3032383231323461336565303231316338666436313361630a343332383163333932363734653734
            62653266623764333335663335623162616235323232653936663166393436633734303363373662
            6330363538616166320a353063653863613862373834303331666138333836313530313132613962
            3034
          port: 443
          protocol: https
          request: '/v2'
        then:
          alert: false
          exec: "{{ monit_discord_alert_script }}"
          restart: false
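
# The vaulted username/password values above can be regenerated with
# ansible-vault; for example (the secret and key name are placeholders):
#   ansible-vault encrypt_string 'example-secret' --name 'password'
# and the resulting block pasted in under the matching key.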