First, I apologize for the brevity of this post. I hadn’t planned on writing one, but this turned into a lot of steps that are not put together in one place anywhere else. These are more or less notes to jog my memory if I ever need to do this again. If I do, I will definitely write a more detailed post.
So I have been setting up a new monitoring solution (Prometheus/Alertmanager/Grafana) and I wanted to relay alerts through Gmail. This wasn’t as straightforward a process as I had hoped.
I wanted to do things as securely as possible, so I have it locked down by IP address and I created a "Service Account" email account just for relaying mail. Even so, there were a couple of gotchas.
First, you need to log into webmail and accept the End User License Agreement. This hung me up for a spell.
Second, you need to enable two-factor authentication so you can create app passwords. Rightfully, Google treats the SMTP agent as an unauthorized device if you attempt to use the same password you use to log into the web client.
Lastly, I have been experiencing some TLS/SSL issues communicating with Postfix locally. For now Alertmanager is connecting to the local Postfix without TLS; Postfix does connect to Google securely to relay the mail, though. This was acceptable for us at this time.
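For reference, the relevant Postfix main.cf settings for relaying through Gmail look roughly like this. This is a sketch from memory; the hostnames and paths are the stock ones, so adjust to your environment:

```
# /etc/postfix/main.cf (fragment) - relay all mail through Gmail over TLS
relayhost = [smtp.gmail.com]:587
smtp_tls_security_level = encrypt
smtp_tls_CAfile = /etc/pki/tls/certs/ca-bundle.crt
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
# maps the relayhost to the service account and its app password
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
```

The /etc/postfix/sasl_passwd file holds a line in the form `[smtp.gmail.com]:587 serviceaccount@example.com:app-password` and gets compiled with `postmap /etc/postfix/sasl_passwd` before reloading Postfix.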
I also had to modify the startup scripts for Prometheus and Alertmanager to make use of the NGINX reverse proxy, which gives me authentication along with a Let’s Encrypt certificate that encrypts everything. This in turn required a change to Prometheus for metrics collection and to the Grafana data source so I could graph the collected metrics.
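The NGINX side is roughly one server block per service, along these lines. This is a sketch rather than my exact config; the ports are the Prometheus/Alertmanager defaults and the hostname, htpasswd file, and certificate paths are placeholders:

```
# /etc/nginx/conf.d/prometheus.conf (sketch)
server {
    listen 443 ssl;
    server_name prometheus.example.com;

    # Let's Encrypt certificate issued by certbot
    ssl_certificate     /etc/letsencrypt/live/prometheus.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/prometheus.example.com/privkey.pem;

    location / {
        # basic auth in front of the otherwise-open Prometheus UI
        auth_basic           "Prometheus";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:9090;   # 9093 for Alertmanager
        proxy_set_header     Host $host;
    }
}
```

If you proxy under a sub-path rather than a dedicated hostname, Prometheus also needs `--web.external-url` set in its startup flags so links resolve correctly.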
Again, I apologize for the vagueness here; this is a basic guideline to kick-start things for me.
The Helios4 Open Source & Open Hardware Network Attached Storage (NAS) box is doing a second production run. I was fortunate enough to get one of these during the first run, and I have to say I am impressed. I fitted mine out with 4x 10TB drives and have since been dumping everything to it. It is very nice to know that I have a backup target where I know I will have enough space. If you are interested, here is a link to get one of your own.
In my new position, one of the first tasks was to rework the MongoDB installations so the application can automatically fail over to a new master in the event of a failure. Taking stock of things was a bit scary, as things were out of date and needed some care. I created new nodes with an updated OS and updated MongoDB, fixed a couple of mount options with this new build, and then got around to working on the backups. Backups were dumping to a filesystem they shouldn’t have been, and I started thinking: wouldn’t it be cool if I could just dump straight to the end destination, which in our case happens to be S3?
So after a bit of trial and error I found this to work:
This all assumes you have awscli installed and have the proper IAM roles/permissions to write to your desired S3 bucket.
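What I ended up with is essentially mongodump writing an archive to stdout and awscli streaming it straight into the bucket, with no intermediate file on disk. The bucket name and filename pattern here are placeholders:

```shell
#!/bin/sh
# Stream a mongodump archive directly to S3.
# Assumes awscli is installed and the host has s3:PutObject on the bucket.
BUCKET="s3://my-backup-bucket"            # placeholder bucket name
ARCHIVE="mongodb-$(date +%F).archive"     # e.g. mongodb-2018-03-14.archive
mongodump --archive | aws s3 cp - "${BUCKET}/${ARCHIVE}"
```

The `-` argument tells `aws s3 cp` to read from stdin, which is what lets the dump go straight to S3 inside the existing backup process.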
While testing the restore, I set up a local VM and followed my install guide, and when running the restore command I kept getting:
mongorestore failed no reachable servers
Because I followed my setup guide, this included setting up replication in the mongod.conf file. Apparently having replication configured keeps you from being able to restore, which makes sense. After removing that, I was able to run the restore command after downloading the .archive from S3:
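With replication commented out of mongod.conf, the restore itself is just the reverse of the backup. Again, the bucket name and filename are placeholders:

```shell
#!/bin/sh
# Download the archive from S3 and feed it to mongorestore.
# Assumes awscli is installed with read access to the bucket.
BUCKET="s3://my-backup-bucket"          # placeholder bucket name
ARCHIVE="mongodb-2018-03-14.archive"    # placeholder filename
aws s3 cp "${BUCKET}/${ARCHIVE}" .
mongorestore --archive="${ARCHIVE}"
```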
So in the wake of the Spectre/Meltdown fiasco, I wanted to update all my servers and found that something wasn’t working quite right with my repo generation. Well, I nuked everything and ran across this post describing the proper way to delete repos:
So this was frustrating. Attempting to follow a couple of different “how to install Spacewalk” guides got me nowhere. I did find one post that was very good and worked, with a couple of caveats I found during my trials to get it installed. This is the post I followed.
Have the hostname set in /etc/sysconfig/network: HOSTNAME=server.example.com (it doesn’t hurt to have this correct in /etc/hosts and /etc/hostname either). I overlooked this, and it is a stupid mistake. Always set your hostname.
This was the bigger one… c3p0, which provides JDBC DataSources/resource pools, was upgraded and its path was moved. Downgrading to the previous version fixes things. Here is what I did:
yum downgrade c3p0-0.9.1.2-2.jpp5.noarch
So I have this up and running now. I am continuing to follow the individual’s guide on getting the software channels created and set up. I also need to go through my notes; I have some scripts I used in a previous life on SUSE Manager for cutting new channels. Hopefully they will point me in the right direction for managing this pig once it is working.
While continuing with this project, I found the following site had some helpful information regarding Spacewalk channels and client installs:
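For the client side, registration boils down to installing the client packages and pointing rhnreg_ks at the server. The server name and activation key below are placeholders, not my actual values:

```shell
#!/bin/sh
# Register a CentOS client against a Spacewalk server.
# The rhn-* packages come from the Spacewalk client repo, which must be
# enabled first; run as root.
yum -y install rhn-client-tools rhn-check rhn-setup rhnsd

# Activation key and server URL are placeholders for this sketch.
rhnreg_ks --serverUrl=https://spacewalk.example.com/XMLRPC \
          --activationkey=1-centos7-base \
          --force
```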
Well, I am finally getting to it. I have wanted for a while now to update my home network so I can get segmented lab network(s) separated from my house stuff. I have a router that is getting a little long in the tooth, so I am going to give pfSense a shot. I also got my hands on a couple of managed switches to give me the ability to run the various VLANs.
I have quite a bit of stuff at the house. It seems I am always trying something new or kicking the tires on some software package or stack to see how it works and whether it makes sense to bring it up at the office.
I am going to be implementing a lot of stuff, but for me it really comes down to:
Do something fun
More importantly, see if I can still do all of the things.
Like I said, I will be implementing several VLANs (LAN, SAN, OPT3, DMZ, DMZ2) to give me some options and to make sure I am able to firewall and route everything properly.
If there are any up-and-coming admins, developers, or anyone wanting to do more with computers, I can’t recommend enough doing something similar. My past couple of interviews have specifically asked what my home network looks like and what I am doing with it. It doesn’t take much to get the basics in place and try a bunch of different stuff.
I have to say I am impressed with the capabilities of pfSense and right now would recommend it to anyone wanting to learn some different networking concepts. The documentation is pretty good (from what I have seen so far) and the community seems pretty active.
Completely optional, but I would also suggest using something like Openfiler or FreeNAS in your design. Maybe not to start with, but it can easily be added later. This will introduce storage concepts and give you a basic understanding of how to use storage in your designs. Yes, it isn’t EMC or NetApp, but it covers the concepts and lets you test some of the theory of using attached storage in your designs.
Well, the install of KVM just finished; time to figure out network bonding and VLAN tagging.
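The rough shape of what I am after, in RHEL/CentOS-style ifcfg files. The interface names, bond mode, VLAN ID, and addresses are placeholders for my eventual setup, not a finished config:

```
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond itself
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- one physical slave (repeat for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.10 -- tagged VLAN 10 riding the bond
DEVICE=bond0.10
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.10.2
NETMASK=255.255.255.0
```

The 802.3ad mode requires matching LACP configuration on the managed switch; each VLAN gets its own `bond0.<id>` file with the tag taken from the device name.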
It has been a little while since I posted anything, so a quick update. I have wrapped up coaching for my kids, I am one month into a new job, and spring is here, so I am trying to get lots done outside. The new job I think is going well (boss, if you read this and disagree, come see me). One of the biggest reasons I took the new position is their use of AWS (Amazon Web Services). That being said, I have little experience with AWS, so what’s a guy to do but go and learn it!

I have used Udemy for learning new things, which has been hit or miss in the past. While looking for AWS-related material I stumbled across A Cloud Guru’s courses, and so far I haven’t been disappointed. I am actually really impressed. The courses have been very informative and easy to follow. They have enabled me to have better discussions with those I report to, and they are beginning to give me insight into the power of AWS and how it can and will be able to solve some of the problems we are facing. To say I am excited to begin working on providing solutions and taking things to a new level is an understatement. I am anticipating taking the AWS Certified Solutions Architect Associate and then the SysOps Administrator Associate certifications, but we will see how fast I get bogged down with issues. I am currently 15% through the Solutions Architect course work, so I hope I can keep up my momentum.
This was a wonderful trip, being able to spend some time outdoors with my son and see him with his peers doing new and exciting things. I also learned a couple of new things and got to really challenge myself as well. Each day we had a couple of classes on various topics, from Animal Signs to Winter Survival to Pioneer Life to a night hike.
One of the highlights of the trip was the ropes course. Now, if you have never done this before it might look like no big deal, but… let’s just say it was challenging.
As a chaperone, I was asked if I could assist the kids up on the ropes course. I am here to help, so I agreed. Well, I got the zip line. The. Last. Obstacle. So what does that mean? I got to go first. And let me tell you, there is no better motivation than having 17 kids watching you, the adult, work through this thing 30′ in the air, walking on a 3/4″ steel cable.
The best thing about this was my son down below sending words of encouragement all the way. Needless to say, I made it through and really went out of my comfort zone to get it done.
I have to say “thank you” again to all the staff at Eagle Bluff. From our liaison Grant, to the teachers, to the cooks and cleaning crew (their motto is “We love mud”), everyone was awesome. The facility is also great; lots of really cool and interesting things to see everywhere you go.
To close this out, as a parent, if you ever get the opportunity to go and do this with your kids, do it. You won’t ever regret it.
First post! I am going to attempt to post updates to this site on things I am working on, pitfalls encountered, and hopefully how I finished what I started. I am not entirely sure what all will go here to start with. At present it will be mostly computer-related items, attempting to document installs or break-fix items so I can find them again. I have text documents spread out all over the place, and, well, it is time to get organized.
So, I just finished getting SSL working on this new site using Let’s Encrypt and certbot. I had to hack /etc/hosts to point back to the internal IP, as I was getting hit with hairpinning, but once I got that sorted, certbot worked as expected. I had to make a couple of manual configuration changes to the Apache virtual host file to point at the correct certificate files, and I added the following to enhance security, disabling insecure protocols and cipher suites and adding security headers. I got the A+ at https://www.ssllabs.com/ssltest/analyze.html?d=andrewkrull.com, so I am going to go with “good for now” on that front. Security is an ongoing thing, so I will be taking another look at this in the next couple of days.
SSLProtocol all -SSLv2 -SSLv3
Header set Strict-Transport-Security "max-age=63072000; includeSubDomains; preload"
Header set X-Frame-Options DENY
Header set X-Content-Type-Options nosniff
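One thing the directives above do not show is the cipher restriction itself; the usual pair of Apache mod_ssl directives for locking down cipher suites looks like this (the cipher string here is an example of a common restrictive list, not necessarily the exact one I used):

```
SSLHonorCipherOrder on
SSLCipherSuite HIGH:!aNULL:!eNULL:!MD5:!3DES:!RC4
```

`SSLHonorCipherOrder on` makes the server, not the client, pick the preferred cipher, and the exclusion list drops the suites SSL Labs flags as weak.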