# Ansible/inventory/inv.yaml
# Last modified: 2022-01-28 16:22:11 -08:00
all:
  children:
    linux-all:
      hosts:
      children:
        kube:
        kube-fs:
        docker:
        jenkins:
        ceph:
    docker:
      hosts:
        192.168.1.243:
        192.168.1.244:
        192.168.1.226:
    mediaserver:
      children:
        #192.168.1.243:
        #192.168.1.244:
        mediaserver-front:
        #mediaserver-back:
    mediaserver-back:
      hosts:
        192.168.1.244:
    mediaserver-front:
      hosts:
        192.168.1.226:
    newsbot:
      hosts:
        192.168.1.244:
    duckdns:
      hosts:
        192.168.1.244:
    pihole:
      hosts:
        192.168.1.223:
    jenkins:
      hosts:
        192.168.1.246:
    ceph:
      children:
        ceph-primary:
        ceph-node:
    ceph-primary:
      hosts:
        #fs01.k8s.home.local:
        192.168.1.222:
      vars:
        ceph_primary: true
    ceph-node:
      hosts:
        #fs02.k8s.home.local:
        192.168.1.225:
      vars:
        ceph_primary: false
    kube:
      children:
        kube-master:
        kube-node:
    kube-master:
      hosts:
        # master.k8s.home.local:
        192.168.1.221: # master
    kube-node:
      hosts:
        #node01.k8s.home.local:
        #node02.k8s.home.local:
        #node03.k8s.home.local:
        192.168.1.223: # node01
        # 192.168.1.224: # node02
        # 192.168.1.226: # node03
        # 192.168.1.225: # node04
    kube_media_node:
      hosts:
        192.168.1.223:
    kube-fs:
      hosts:
        fs01.k8s.home.local:
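
# The group tree above can be sanity-checked before running any playbook.
# A minimal sketch (as comments so this file stays valid YAML), assuming
# ansible-core is installed on the control node:
#
#   ansible-inventory -i inv.yaml --graph     # render the resolved group/host tree
#   ansible linux-all -i inv.yaml -m ping     # ad-hoc connectivity check against all Linux hosts
#
# Note that a host may appear in several groups (e.g. 192.168.1.244 is in
# docker, mediaserver-back, newsbot, and duckdns); Ansible merges group vars
# for such hosts, so ansible-inventory --host 192.168.1.244 shows the result.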