May 8

Writing directly to S3 from mongodump

In my new position, one of my first tasks was to rework the MongoDB installations so the application could automatically fail over to a new master in the event of a failure. Taking stock of things was a bit scary, as things were out of date and needed some care. I created new nodes with an updated OS and an updated MongoDB, fixed a couple of mount options with this new build, and then got around to working on the backups. Backups were dumping to a file system they shouldn’t have been, and I started thinking: wouldn’t it be cool if I could just dump to the end destination within the current process, which happens to be S3?

So after a bit of trial and error I found this works:

mongodump --host <hostname> --db <database_name> --gzip --archive | aws s3 cp - s3://Your_S3_bucket/<database_name>.archive

This all assumes you have awscli installed and have the proper IAM roles/permissions to write to your desired S3 bucket.
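One thing worth guarding against (my own addition, not something I hit in the original run): in a plain pipeline the shell only reports the exit status of the last command, so a failed mongodump can still look like a successful upload. A wrapper script along these lines, with placeholder host/database/bucket names, catches that:

```shell
#!/usr/bin/env bash
# Hypothetical wrapper around the mongodump-to-S3 pipeline.
# "set -o pipefail" makes the pipeline fail if mongodump fails,
# not just if the aws upload fails.
set -euo pipefail

HOST="db01.example.com"         # placeholder hostname
DB="mydatabase"                 # placeholder database name
BUCKET="s3://Your_S3_bucket"    # placeholder bucket
STAMP="$(date +%Y%m%d-%H%M%S)"  # timestamp so older backups aren't overwritten

# Dump straight to S3; gzip-compressed archive streamed over stdout.
mongodump --host "$HOST" --db "$DB" --gzip --archive \
  | aws s3 cp - "$BUCKET/$DB-$STAMP.archive"

echo "Backup of $DB uploaded as $DB-$STAMP.archive"
```

The timestamp in the key also means each run keeps its own copy instead of clobbering the previous archive.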

While testing the restore, I set up a local VM and followed my install guide, and when running the restore command I kept getting:

mongorestore failed no reachable servers

Because I followed my setup guide, this included setting up replication in the mongod.conf file. Apparently configuring replication causes you to not be able to restore, which makes sense. After removing that, I was able to run the restore command, having downloaded the .archive from S3:

mongorestore --gzip --archive=<database_name>.archive
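Looking back, the download step isn't strictly necessary either; the same pipe trick should work in reverse, since mongorestore reads the archive from stdin when --archive is given no filename. A sketch, using the same placeholder bucket:

```shell
# Stream the archive straight from S3 into mongorestore,
# avoiding a temporary copy on local disk.
aws s3 cp s3://Your_S3_bucket/<database_name>.archive - \
  | mongorestore --gzip --archive
```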

Well this job is now done, on to the next thing(s).

January 10

Managing Spacewalk Repos

So in the wake of the Spectre/Meltdown fiasco I wanted to update all my servers, and I found that something wasn’t working quite right with my repo generation. Well, I nuked everything and ran across this post describing the proper way of deleting repos:

Proper way to delete channel/repository/packages in Spacewalk

Directly from David’s site I ran the following:

spacecmd softwarechannel_delete _channel_name_
spacecmd repo_delete _reponame_name_
spacecmd package_removeorphans
spacewalk-data-fsck -r -S -C -O

This definitely got some long-deleted items really gone. My earlier mistake was running spacewalk-data-fsck without those flags, which meant it wasn’t actually deleting the files.

May 23

Installing Spacewalk

So this was frustrating. Attempting to follow a couple of different “how to install Spacewalk” guides got me nowhere. I did find one post that was very good and worked, with a couple of caveats I found during my trials to get this installed. This is the post I followed.


  1. Have the hostname set in /etc/sysconfig/network (it doesn’t hurt to have this correct in /etc/hosts and /etc/hostname either). I overlooked this, and it is a stupid mistake. Always set your hostname.
  2. This was the bigger one: c3p0, which provides JDBC DataSources/resource pools, was upgraded and its path was moved. Downgrading to the previous version fixes things. Here is what I did:
    yum downgrade c3p0-
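To keep a later yum update from silently pulling the broken c3p0 build back in, the versionlock plugin can pin the package after the downgrade. This is my own addition, not part of the guide I followed:

```shell
# Install the versionlock plugin, then pin c3p0 at its
# currently installed (downgraded) version so a routine
# "yum update" won't upgrade it again.
yum install -y yum-plugin-versionlock
yum versionlock add c3p0
yum versionlock list   # verify the lock is in place
```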

So I have this up and running now. I am continuing to follow the individual’s guide on getting the software channels created and set up. I also need to go through my notes; I have some scripts I used in a previous life on SuSE Manager for cutting new channels. Hopefully they will get me headed in the right direction for managing this pig once it is working.

While continuing with this project I found the following site had some helpful information regarding Spacewalk channels and client installs:

Follow up sites:

How To Manage Spacewalk Channels And Repositories

Install And Register Spacewalk Client To The Spacewalk Server

This is becoming a stream of consciousness, and I will need to work out a better way of doing this, but here is another tidbit for password recovery (yes, it already happened):

Try using the command line tool:

satpasswd <username>

Addition of the EPEL repo:

epel channel
Channel Name: centos-7-epel-x86_64
GPG key URL:
GPG key ID: 352C64E5
GPG key Fingerprint: 91E9 7D7C 4A5E 96F1 7F3E 888F 6A2F AEA2 352C 64E5

epel repo
Repository Label: centos-7-epel-x86_64
Repository URL:
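For next time, the channel and repo above could probably be created from the command line with spacecmd instead of clicking through the UI. A sketch, with the flags from memory (worth double-checking against spacecmd's built-in help before trusting them), and the repo URL left as a placeholder:

```shell
# Create the EPEL software channel and repository, link them,
# then kick off a sync. Flags are from memory; verify with
# "spacecmd <command> --help" before running.
spacecmd -- softwarechannel_create -n centos-7-epel-x86_64 \
  -l centos-7-epel-x86_64 -a x86_64
spacecmd -- repo_create -n centos-7-epel-x86_64 -u <epel_repo_url>
spacecmd -- softwarechannel_addrepo centos-7-epel-x86_64 centos-7-epel-x86_64
spacewalk-repo-sync --channel centos-7-epel-x86_64
```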


May 20

Home Network Update

Well, I am finally getting to it. I have wanted for a while now to update my home network so I can get segmented lab network(s) off my house stuff. I have a router that is getting a little long in the tooth, so I am going to give pfSense a shot. I also got my hands on a couple of managed switches to give me the capability to run the various VLANs.

I have quite a bit of stuff at the house. It seems I am always trying something new or kicking the tires on some software package or stack to see how to make it work and whether it makes sense to bring up at the office.

I am going to be implementing a lot of stuff, but for me it really comes down to:

  1. Do something fun
  2. More importantly, see if I can still do all of the things.

Like I said, I will be implementing a handful of VLANs (LAN, SAN, OPT3, DMZ, DMZ2) to give me some options and to make sure I am able to firewall and route everything properly.

For any up-and-coming admins, developers, or anyone wanting to do more with computers, I can’t recommend doing something similar strongly enough. My past couple of interviews have specifically asked what my home network looks like and what I am doing with it. It doesn’t take too much to get the basics in place and try a bunch of different stuff.

I have to say I am impressed with the capabilities of pfSense and right now would recommend it to anyone wanting to learn some different networking concepts. The documentation is pretty good (from what I have seen so far) and the community seems pretty active.

Completely optional, but I would also suggest using something like Openfiler or FreeNAS in your design. Maybe not to start with, but it can easily be added later. This will introduce storage concepts and give you a basic understanding of how to use storage in your designs. Yes, it isn’t EMC or NetApp, but it covers the concepts and lets you test some of the theory of using attached storage in your designs.

Well, the KVM install just finished; time to figure out network bonding and VLAN tagging.
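As a starting point for that, here is the rough shape of a bonded, VLAN-tagged bridge on a CentOS/RHEL-style KVM host using the classic ifcfg files. Interface names, VLAN ID, and bonding mode are all placeholders for my setup, not a finished config:

```shell
# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bond itself
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=802.3ad miimon=100"   # LACP; the switch ports must match
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-em1 -- one physical slave
# (repeat for each NIC in the bond, e.g. em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-bond0.10 -- VLAN 10 on the bond,
# handed off to a bridge so KVM guests can attach to it
DEVICE=bond0.10
VLAN=yes
BRIDGE=br10
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-br10 -- bridge for VLAN 10 guests
DEVICE=br10
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
```

The same bond0.N/brN pair gets repeated for each tagged VLAN the guests need.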