Merge pull request 'added missing posts' (#6) from features/migrate-missing-posts into main

Reviewed-on: #6
This commit is contained in:
jtom38 2023-11-28 18:54:57 -08:00
commit 152469ffed
5 changed files with 243 additions and 0 deletions


@@ -0,0 +1,49 @@
---
title: "dvb Alpha Release"
date: 2022-12-28T21:21:30-08:00
draft: false
tags: [golang, dvb]
---
So, last month one of my Linux servers failed and I was unable to recover the VM. I was lucky that the data I cared about was stored on a mobile device! Of my containers, the one running my webdav instance was the one I didn't want to lose any data from. With that scare, I wanted to address my backup solution, or lack thereof. So one thing I wanted to do was build a new tool to back up my Docker data.
I had a few requirements for this process:
- Be able to mount the volumes on the container.
- Tar the data and append a date value of when it was made.
- Be able to start/stop the container.
- Post to Discord so I can monitor the jobs.
- Be able to extend the process later.
This all started as a PowerShell (pwsh) script, but I started to run into small issues. I like pwsh where it can be used, but when I started to build out my logic, things quickly became very messy. Overall, I like to organize my code in logical ways. Pwsh wants you to make modules, or you have to dot-source files into your script. This gets messy, quickly. I also didn't want to have to install pwsh on all my servers.
So I took a look at rebuilding the process in Go. This was my first attempt at getting Go to interface with a CLI, but overall it works rather well. When working with Docker, you can pass a format template to the `docker container inspect` command so it returns a subset of data. I adjusted the statement I would send to get just the block that has the container status. With the data coming in as JSON, I can quickly convert it into Go structs so I can manipulate the data as needed.
One thing that did come up from using the CLI as my primary interface: holding the main block of my CLI command in a const was great! I have not used const-like values much in other languages, but man, Go has really changed my tune about them.
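A minimal sketch of that flow, assuming the standard `--format '{{json .State}}'` flag on the Docker CLI; the struct and helper names (`containerState`, `parseState`) are my own illustration, not dvb's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// inspectFormat is the Go template passed to `docker container inspect`
// so it returns just the state block instead of the full document.
const inspectFormat = `{{json .State}}`

// containerState mirrors the fields of the state block we care about.
type containerState struct {
	Status  string `json:"Status"`
	Running bool   `json:"Running"`
}

// parseState converts the JSON emitted by the inspect command into a struct.
func parseState(raw string) (containerState, error) {
	var s containerState
	err := json.Unmarshal([]byte(strings.TrimSpace(raw)), &s)
	return s, err
}

// inspectContainer shells out to the Docker CLI and parses the result.
func inspectContainer(name string) (containerState, error) {
	out, err := exec.Command("docker", "container", "inspect", "--format", inspectFormat, name).Output()
	if err != nil {
		return containerState{}, err
	}
	return parseState(string(out))
}

func main() {
	// Fall back to sample output when the Docker CLI is not available.
	if _, err := exec.LookPath("docker"); err != nil {
		s, _ := parseState(`{"Status":"running","Running":true}`)
		fmt.Println(s.Status)
		return
	}
	s, err := inspectContainer("webdav")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(s.Status)
}
```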
You might ask yourself, why not use the Docker API? I didn't want to change anything on the servers, and I wanted to keep the API locked down as is. So this got me playing with the [script](https://github.com/bitfield/script) package. I wanted to run some tests to see if I could make Go work as a scripting language. For the most part, it can. My tool just grew from there.
So, with this alpha release, the tool does some basic things for me.
- Backup multiple containers
- Move them to my NFS server
- Ensure that I only have so many files on the server
- Send notifications to Discord
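The "only keep so many files" step could look something like this sketch; `pruneList` and the sort-by-filename approach are my own illustration, not the tool's actual logic, and it assumes an ISO date suffix in the filenames (e.g. `webdav-2022-12-28.tar.gz`) so a plain string sort orders oldest to newest:

```go
package main

import (
	"fmt"
	"sort"
)

// pruneList returns the backups to delete so only `keep` of the newest
// remain. It relies on a sortable date suffix in the filename.
func pruneList(files []string, keep int) []string {
	if len(files) <= keep {
		return nil
	}
	sorted := append([]string(nil), files...)
	sort.Strings(sorted) // oldest first thanks to the date suffix
	return sorted[:len(sorted)-keep]
}

func main() {
	files := []string{
		"webdav-2022-12-26.tar.gz",
		"webdav-2022-12-28.tar.gz",
		"webdav-2022-12-27.tar.gz",
	}
	fmt.Println(pruneList(files, 2)) // prints [webdav-2022-12-26.tar.gz]
}
```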
Given it's a basic tool, it does what I want for now. I do have some more plans for it, though.
- Run the job as an always-on service.
- Set up internal cron triggers.
- Back up data outside of Docker
- Postgres running on a VM
- Move data to an FTP server. (thanks nas!)
- Sometimes NFS mounts stop working and I need to remount the drives
- Maybe other notification services.
- Set up a systemd service
I don't have plans for this to replace everything backup-related. It does what it needs to and stays out of the way.
If you want to check out the repo, it can be found [here](https://github.com/jtom38/dvb)!
Thanks for reading!
If you want to leave a comment, go here [Comments (github)](https://github.com/jtom38/jtom38.github.io/issues/3)


@@ -0,0 +1,18 @@
---
title: "Is it time to Go?"
date: 2023-02-25T21:13:30-08:00
draft: false
tags: [golang, dotnet]
---
I will have been working with Go for a year next month. This last year I have learned so much about the language and come to love how simple and flexible it is. I see a great future for it within the web ecosystem. It's very easy to build up a new web service in Go that will work on any system. My problem with it is more around the lack of companies using it.
I got my start in the software world learning C# (8 years ago or so). That worked very well for me as I was able to find a job working on some C# automation. This was a great learning opportunity that gave me some professional experience. Now my background is more in infrastructure so this has hampered my ability to move over to software full-time.
My team at work has been using Go to build our first automation API and some CLI tools. This has been a blast to work on and build up, but we ran into some issues. We use Azure for our cloud solution, and we ran into issues with the public packages we could find. Microsoft does not have great support for Go right now; as the language ages, maybe support will improve. But this resulted in us writing a bunch of API packages to interface with Azure. As fun as this is, it's a bunch of boilerplate code and also slows down our delivery. Plus, if Microsoft deprecates a feature and the package does not support it, we have to roll our own or hope our PR gets approved upstream. We also found that some of the packages have gone stale with no updates in a year or more.
Our other development teams already write .NET and given the tools we use, I chose to move our development focus over to .NET. It makes me sad to do that but I wanted to make sure others can use the tools we make rather than have them thrown away given no one else writes Go.
With that change at work, I am also going to shift my focus back to C# for my projects. I already know how to use it so it's not that big of a shift. The tools like DVB will stay with Go, but Newsbot will be rewritten in C# to help my learning and add it to the portfolio.
I am sad to be changing my focus, but as it is, I am thinking about learning Rust next year. Who knows what the future holds, but I have always worked with Microsoft products at work and open source at home. I am always looking to learn and continue forward!


@@ -0,0 +1,42 @@
---
title: "Hosting Static Sites"
date: 2023-05-28T21:00:00-08:00
draft: false
tags: [azure, hugo, hosting]
---
I have always stayed away from posting anything on the internet. So when it came to getting some of my sites hosted online, well, I hit some pitfalls. Hopefully you can avoid some of the issues I ran into.
## History
I am always looking for the next free thing when it comes to my projects. This has resulted in me hosting things internally only or not at all. I got my domain so I can start to detach more from the big sites and do things on my own a bit. But now, I had to figure out how to build a site.
Building the site was not that hard; [hugo](https://gohugo.io/) made this site much easier to build. I am not looking for anything complex with a bunch of JavaScript overhead, so a static site generator was perfect! At first, I was able to get something online, at least locally, and it looked good. So it was time to deploy it.
## Github
I started by attempting to deploy on [github.io](https://pages.github.com/). This comes packed in with the repository, and all I needed to do was make a couple of changes to deploy the site. I got that set up, and then the site was available at jtom38.github.io. Well, that's nice and all, but I wanted it to route to my domain. This is when things started to fall apart.
Based on the documentation, this process should be nothing more than adding a record to my DNS and setting up the CNAME record. Well, it worked somewhat. One page would work, but none of the subpages would load correctly. I then put the project down for a while as time was limited.
I started to work on a custom hugo theme [cookbook](https://github.com/jtom38/hugo-cookbook) for my new family cookbook. I did learn more about the inner workings of Hugo and that was great! I did get that site running just on [github.io](https://pages.github.com/) but never made it back to my site (this one).
## Digital Ocean
Later on, I thought about it and figured I needed to look at a hosted solution for the sites. I tried Digital Ocean because it offers something like five free static sites per account. I thought that would be great! I went through the getting-started process and stopped within one night. The problem is I use the go.mod file in my hugo projects, and the go.mod file is much better than using a git submodule for Hugo.
The problem is Digital Ocean attempts to detect what your project is by reviewing the repo. Sure, go for it. When I attempted to tell it that it was a Hugo project, it would never build correctly. I attempted to manually tell it what the project was, and then it wanted to give me a monthly bill. No, this is a static website, so give me the free stuff!
Needless to say, this never worked out and it was abandoned.
## Azure Static Web Apps
I am now back home after a trip and I wanted to get these sites all hosted in a location that was not just my home network. I can make a docker image or an Apache server, but I don't have a static IP at home and that was going to be another problem. I do have a solution for the lack of static IP, but more on that in a later post. :)
At one point in time, I did use an AWS S3 bucket to host a website. It was fine, it worked; I am just not an AWS guy. I learned Azure first for work and that's where I stayed. So I got a new Azure subscription set up, and I was going to host the site in an Azure Storage Account in a public blob bucket. Then I remembered that Microsoft has a free tier for Azure Static Web Apps. I did some digging and figured, why not try it? The Hugo documentation pointed this way, so let me give you my thoughts.
I am now running this site on the Azure service, and my initial impressions of Azure Static Web Apps are good! I created a new free resource and gave it access to my repo. Azure/GitHub loaded a new CI workflow to deploy to the Azure resource, and within 5 minutes the site was online. The theme was loaded and the links looked good. I was happy!
Getting DNS set up was also very simple. On the left-hand side, go to Custom Domains to get started. Tell it what the domain will be (example: www.yourdomainhere.com), copy the value given, and create a new DNS record. Once the record has been made, go back to Azure and confirm. It will check the records and enable the traffic.
Overall this was very painless and I am glad this is another thing off my to-do list. Now back to some of my other projects that I might have been neglecting.


@@ -0,0 +1,66 @@
---
title: "Self-Hosting Services"
date: 2023-05-30T00:00:00-08:00
draft: false
tags: [on-prem, docker, hosting, kubernetes, unraid]
---
I have always been running something out of the house. This started with just Plex, and it has slowly grown as I have grown as a developer and run my own apps in the house. It can be a fun hobby, but man, when something breaks, it might stay broken for a bit.
This year I am going to be performing some home upgrades to some of my servers. When I say servers, I just mean older desktops. I have been running various services for myself on three desktops. I have hit the limit of what I can do with the hardware, given I am lacking drives and RAM.
For the last three years, I have been trying out new things to learn and at this point, I am going to share my thoughts on things. Before this point in time, I just had a single computer, running Windows and [Plex](https://www.plex.tv). Nothing super fancy, oh, and maybe a Minecraft server once in a while. But the scope was about to change.
## Kubernetes (k8s)
Kubernetes was the hot new thing on the market, like five years ago. It's fantastic if you need to scale out and not worry about vendor lock-in. You can do some very impressive things with it, and people have write-ups all over the place on this. Let's take a look at it in the context of on-prem deployments and some of the struggles I ran into.
Installing Kubernetes was a small trial on its own. I think I used an Ansible playbook to build the initial cluster; it uses kubeadm at the core, and overall this worked. Getting one node to be the master and the others to be agents did have its issues. Based on your requirements you can run everything on one node, but that's not recommended. At a minimum, you would want three agents.
So I picked up two other desktops, installed [Proxmox](https://www.proxmox.com/en/), and set up two bigger VMs. This worked, but the can of worms was just getting opened.
Networking in Kubernetes assumes you are running in the cloud. If you are not, the default configuration just never works. The solution I found was [MetalLB](https://metallb.org/) to expose IPs for the pods. This did work once the initial configuration was done. Basic pods got on the network and I was able to talk to them.
Data storage was another fun one. Yet again, Kubernetes assumes you are running in the cloud. So you want to run on-prem? Try an NFS mount into the container. This did work for most applications. But did your application use SQLite? Oh, you are in for a rough time. SQLite does not play well over NFS due to how database locking works ([SQLite and NFS](https://www.sqlite.org/faq.html#q5)). This would show up in the application as random lockups as locks were dropped and retaken. Needless to say, you should move to a full database like [Postgres](https://www.postgresql.org/), but not all applications support this.
So I would then have to host the application via a local binding to make sure the database would not lock up. So because of this, I threw out the big advantage of Kubernetes, high availability. So, that stinks but things are working so that's good, I guess.
My goal was to use NFS so I would have much more data storage available to each pod, but with that dream dead, several of my applications started to eat into the agent's disk space. The agents are made to be thin so what's the difference between this and just running [Docker](https://www.docker.com/)? Not much at this point.
One thing I did take a look at to avoid using the local binding was [Ceph](https://ceph.com/en/) to create distributed storage pools that k8s would be able to connect to. Yet again, this required rescaling my VMs with what little disk space I had per desktop and attempting to get the Ceph agents to talk to each other. I did not continue forward with this due to my hardware constraints. This might be better if you have the space for it, but not for my needs.
I did take a look at getting ISCSI working but my NAS did not support it. I would have needed to make another VM just to handle ISCSI and then drop that into an NFS mount to my NAS. Overall, sounds like jank so I did not move forward with this.
Because of the local disk space problems, I would need to rebuild agents and rejoin them to the cluster. Now I had a new issue: I was not able to get the master node to give up the binding to the old agent. This resulted in a zombie agent that I could not get rid of. I just left it because it didn't break anything, but it was annoying to look at.
I did not perform any k8s updates to the cluster because of the disk space problems and the zombie agent. Things just started to fall apart more and more. If I had provisioned each agent with 16 GB of RAM and 512 GB of drive space I might have been OK, but these are old desktops; I don't have that kind of hardware available. Never mind that some of the pods would enter reboot loops or throw new issues that I could not recover from. The work to keep k8s happy was not worth the squeeze anymore. Time to go back to Docker and get some of my time back from this nightmare.
Overall, Kubernetes is great, but make sure you have the hardware to support it. When I look at the numbers in Azure too, it's expensive to run for something that requires a fair amount of time to keep happy. If you don't need Kubernetes and are just running small containers, then don't use it. Save yourself the cost and headache and just use Docker/Podman.
`Note: Things could have improved; the notes given here reflect 2020/2021 and my personal experience.`
## Back to Docker
Now that Kubernetes was dead to me at home, I started to deploy my containers back to Docker, and man, things got much easier to maintain. I converted my k8s deployments over to [docker compose](https://docs.docker.com/compose/) yaml files and stored them in a git repo. Each project had a make file that I could use to make some tasks easier per app/project.
Overall, I did make some good progress, but disk space was always a problem. Because I was running VMs inside Proxmox, I always had to provision a chunk of disk space per VM. This made moving things around harder, and expanding the disks per VM was a small problem on its own. I did lose a VM trying to expand its guest disks.
At this point, the services started to run much smoother for me, and I got more time back so I could return to working on code projects. I did add some new services, like a webdav instance, to break away from Google Drive and put that data back on my hardware. This has been nice to have, and I would recommend running a webdav instance; it's simple and to the point in what it does. I did try bigger services like [NextCloud](https://nextcloud.com/), but it's PHP-driven, I never got things to work just right, and I heard nothing but pain about migrations, so I did a hard pass.
One thing I do miss is the deployment process for pushing changes to Kubernetes. Docker lacks this, and it's just an on-prem problem. I know [Dokku](https://dokku.com/) exists to make on-prem feel like Heroku, but I have not been ready to take that plunge.
I want something like [ArgoCD](https://argo-cd.readthedocs.io/) for Docker, but to my knowledge, this does not exist. I have my thoughts on how something like this could be made and even maybe over multiple Docker hosts. But, this would not be an easy problem to solve and is it even worth it?
## Unraid
Jump forward to April 2023, I am looking to expand my servers but knowing that I would be running [JBOD](https://www.techradar.com/features/what-is-raid-and-jbod) on desktop class hardware did not sound like a great idea. I needed a way to have some raid but also make the server management easier. I don't want to be in the business of maintaining servers at work and home after all.
I have known about [Unraid](https://unraid.net/) for a while now, but it's never been something I wanted to dip my toes into. But now that my servers are less complex and Unraid has Docker support, honestly, this might work as a cheaper replacement for Proxmox. The reason I had Proxmox was to have a backdoor into the server if any of the VMs broke. The last thing I wanted was to break out a crash cart (monitor and keyboard) to connect to the server to do anything. Proxmox gives me a web UI to manage things; Unraid has the same.
Unraid also gives me the flexibility to expand the array without much issue. Adding a new disk in Proxmox would have been a small task on its own, never mind that I would have to shift data around to make things work.
I also wanted to get some of my RAM back. Each VM that is provisioned is a full Linux server so RAM has to be allocated and maintenance needs to be performed on the server. I want to focus on my software, not being a systems administrator so anything to get away from this is a big plus.
I know [TrueNAS](https://www.truenas.com/) is a competing product to Unraid, but at the end of the day, my time is money. TrueNAS might be fantastic, but I am sick of messing around with Linux I just want things to work. Spending some money on Unraid is a drop in the bucket for my long-term sanity as long as things work. For some, saving money is more important but at this point in my life, I have other things to do other than worry about my servers. When something goes offline it's offline for weeks before I can get back to it. I don't want that anymore.
I will be starting my trial of Unraid soon and I will report back my thoughts at a later time.


@@ -0,0 +1,68 @@
---
title: "Unraid Upgrade"
date: 2023-06-18T00:00:00-08:00
draft: false
tags: [on-prem, docker, hosting, unraid]
---
- [Install Process](#install-process)
- [Setting up the array](#setting-up-the-array)
- [Cache Disks](#cache-disks)
- [Docker Applications](#docker-applications)
- [Cost](#cost)
- [Closing thoughts](#closing-thoughts)
- [Links](#links)
Last month I brought up some of my self-hosting journey. With that, I said that I was going to move over to [Unraid](https://unraid.net/). My first server has been upgraded and most of my services have been moved. So let's talk about Unraid.
## Install Process
One thing I did like about Unraid was that it runs from a USB flash drive. This is great because Proxmox ran from my hard drive and thus needed a partition just to run the operating system. By using the flash drive, I can allocate both of my drives to data storage.
The first flash drive that I attempted did not work. Unraid checks the serial number of the drive to make sure it's valid. This is used for the licensing process from my understanding. No worries, it was one of my oldest flash drives so I was trying to give it another job. I switched over to a newer drive, flashed the image, and Unraid booted right up.
Again, because the software does not get installed on the internal drives, you will need to make sure your BIOS is configured to always boot from USB. So I removed all the other boot options from the server and left USB as the only option. I did have to enable UEFI to make that work, but that caused no problems for Unraid.
## Setting up the array
Once the server is online, you can connect to it with a web browser and start configuring it. At this point, I configured my drives (two 8TB disks): one is parity and the other is a data disk. One thing to note: make sure you buy at least one large disk. Once the array is set up, you can only install drives up to the size of the largest in the array. So for me, I can only go up to 8TB disks. As this server might get another disk later on, I am good with 8TB usable for an application server.
Once the array started to build, it took about 10 hours to complete. I suspect this would be much faster with SSDs, but I am not going to throw that much money at this project. As the array is building, you might be able to start playing with the server, but I would say let it finish building to avoid any issues.
I might get an SSD to use as a cache disk to offload read-write to the drives.
## Cache Disks
Unraid supports a cache disk. From my understanding, this is the device that takes the initial data writes to avoid extra writes to the spinning disks (HDDs). I don't have one yet, but it is something I am thinking about for later. One reason why is I noticed the drives heat up with usage; the parity drive heats up the most as it syncs all the data changes.
Adding a cache disk would take the brunt of all the disk changes, and then Unraid would write them back to the array. As I don't have one yet, I am unsure if this feature only works for the network shares or if I can use it with my Docker applications. If it works with the Docker applications, then I think it would remove some stress from my disks and be worth the money.
## Docker Applications
One of the big reasons why I went with Unraid over [TrueNAS](https://www.truenas.com/) was because Unraid uses Docker. TrueNAS uses BSD jails, and honestly, I did not want to go learn jails or figure out how to get my applications to work in a jail. Docker has made bundling applications so much easier that I did not see the value in moving to BSD jails.
Installing a Docker application is very simple. The Unraid community has templates to help bootstrap the container deployment, and I was able to get most of my applications moved over within the first day of operation. Yet again, I suspect it would take longer to get up and running in jails, but I don't have experience with that.
I have some applications that I have built and they have also been very easy to migrate over. I have been running into some permissions errors trying to write to some config files, but I think that's more Linux permissions and I can figure some of that out by looking at some of the community templates.
## Cost
Unraid is not free. So with that, I know most people will just not look at this. But as someone who has been playing with Linux and also does his fair share of system administration, it's worth the money.
- Up to 6 disks = $59
- Up to 12 disks = $89
- Unlimited disks = $129
This is a one-time purchase per system, and if you want to bump up to a higher tier for more disks, you just pay the difference, from my understanding. There is no feature-tier system like you might have seen with Microsoft products; the only difference is the number of disks you can run. All features are available at every tier.
At the time of writing this, I am still using the trial instance and it's been rock solid. My device has not needed to be rebooted once things have been set up.
## Closing thoughts
My overall feelings right now are very positive, to the point that I am going to work on getting another Unraid setup going. This one will be smaller in scope and will run some of my internal services. I will use something like [Syncthing](https://syncthing.net/) to keep the data synced between nodes, and then I can also plan on an external node for external backup. But that will take some more time.
## Links
- [Unraid](https://unraid.net/)
- [TrueNAS](https://www.truenas.com/)
- [Syncthing](https://syncthing.net/)