We use kerb-sts to authenticate developers to the AWS API and CLI. kerb-sts is cross-platform and reuses the Kerberos tickets generated when developers log in to their Active Directory domain-joined workstations anyway. Building on Kerberos makes it easy to track user identity across the environment.
Last week I ran into a rare instance where kerb-sts stopped functioning. Something had changed in our environment that I could not easily pin down, which left me in a bind. I eventually found and fixed the problem, but the experience made it clear that I needed a deeper understanding of Kerberos, ADFS, and AWS STS, so I wrote a tool in Go that attempts to perform this authentication.
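Roughly, the flow is: authenticate to ADFS with a Kerberos ticket, receive a SAML assertion back, and exchange it for temporary credentials via sts:AssumeRoleWithSAML. One concrete piece of that flow is pulling the candidate role/provider ARN pairs out of the assertion. Here is a sketch in Go, assuming the attribute name AWS documents for SAML federation; the struct names and the sample assertion are mine, not gkerb-sts's actual code:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"strings"
)

// Minimal structures for the parts of a SAML assertion we care about.
// Field tags match SAML 2.0 local element names; namespaces are ignored.
type samlResponse struct {
	Attributes []samlAttribute `xml:"Assertion>AttributeStatement>Attribute"`
}

type samlAttribute struct {
	Name   string   `xml:"Name,attr"`
	Values []string `xml:"AttributeValue"`
}

// rolePair is one role the user may assume, as advertised by the IdP.
type rolePair struct {
	RoleARN     string
	ProviderARN string
}

// awsRoles extracts the AWS role/provider ARN pairs from a SAML assertion.
// AWS federation publishes them under this attribute name, as a
// comma-separated "role-arn,provider-arn" value (order may be reversed).
func awsRoles(assertionXML []byte) ([]rolePair, error) {
	var resp samlResponse
	if err := xml.Unmarshal(assertionXML, &resp); err != nil {
		return nil, err
	}
	var roles []rolePair
	for _, attr := range resp.Attributes {
		if attr.Name != "https://aws.amazon.com/SAML/Attributes/Role" {
			continue
		}
		for _, v := range attr.Values {
			parts := strings.Split(v, ",")
			if len(parts) != 2 {
				continue
			}
			p := rolePair{RoleARN: parts[0], ProviderARN: parts[1]}
			// Some IdPs emit "provider,role"; normalize on the role ARN.
			if strings.Contains(parts[0], ":saml-provider/") {
				p = rolePair{RoleARN: parts[1], ProviderARN: parts[0]}
			}
			roles = append(roles, p)
		}
	}
	return roles, nil
}

func main() {
	// Fabricated sample assertion, trimmed to the relevant elements.
	sample := []byte(`<Response xmlns="urn:oasis:names:tc:SAML:2.0:protocol">
	  <Assertion xmlns="urn:oasis:names:tc:SAML:2.0:assertion">
	    <AttributeStatement>
	      <Attribute Name="https://aws.amazon.com/SAML/Attributes/Role">
	        <AttributeValue>arn:aws:iam::123456789012:role/Dev,arn:aws:iam::123456789012:saml-provider/ADFS</AttributeValue>
	      </Attribute>
	    </AttributeStatement>
	  </Assertion>
	</Response>`)
	roles, err := awsRoles(sample)
	if err != nil {
		panic(err)
	}
	for _, r := range roles {
		fmt.Println(r.RoleARN, r.ProviderARN)
	}
}
```

Each extracted pair is what you would hand to the STS AssumeRoleWithSAML call, along with the still-base64-encoded assertion itself.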
If you want to cut to the chase and see the code, head on over to gkerb-sts to take a look.
While deploying a containerized application I made my first foray into Docker scratch images. The application is written in Go and uses CGO to talk to SQLite databases, which posed a small complication: CGO produces a dynamically linked binary by default, and a scratch image has no libc or dynamic loader to satisfy it.
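One common shape of the workaround is a multi-stage build that links the CGO binary statically so it carries its own libc and SQLite code. The image tags and flags below are illustrative, not necessarily the post's exact solution:

```dockerfile
# Build stage: a full Go toolchain plus gcc/musl headers for CGO.
FROM golang:alpine AS build
RUN apk add --no-cache gcc musl-dev
WORKDIR /src
COPY . .
# Static linking is the key step; without it the binary will not start
# inside scratch (no /lib, no ld-linux).
RUN CGO_ENABLED=1 go build \
      -ldflags '-linkmode external -extldflags "-static"' \
      -o /app .

# Final stage: nothing but the binary.
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```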
While trying to get NC Talk to work I upgraded my Nextcloud server to 15.02 and got sidetracked troubleshooting an interesting issue: why are my logins now being silently redirected to HTTP instead of HTTPS? I might not have noticed this as quickly if I hadn't disabled HTTP on the box years ago.
Edit: Updated on Feb 7 with new information
I've signed up to take the Penetration Testing with Kali Linux course from Offensive Security and want to make a few notes for other would-be course takers on the registration process.
I found some time this week to upgrade my laptop to Ubuntu 18.04 (from 16.04). To ensure I could still 'go back' if necessary, I installed a second hard drive to set up a dual-boot configuration. There's only one problem with this approach: the Ubuntu 18.04 GUI installer doesn't let users set up a second encrypted Ubuntu installation side-by-side with an existing one, even if the target is a new disk.
This set me down a path of adventure and discovery!
We're investigating Kubernetes network overlays at work and I am spinning up sample environments to try things out. One that stands out so far is Cilium, due to the fine-grained access controls it can enforce. Cilium has instructions for deploying on Minikube, but it took some finagling for me to be successful with my configuration (Ubuntu 18.04 Server running Minikube 'local' without Vagrant).
To cut to the chase, skip to the end for a script that deploys everything in order.
I've had an idea kicking around in the back of my mind for the last few months: a Thunderbird extension that checks whether an email sender's domain was recently registered and alerts me if so. Given the poor state of Thunderbird add-on documentation, it is a real struggle to get started with anything beyond the most basic 'hello world' extension. This time I decided to double down and fight my way through to a working (alpha-quality) plugin that accomplishes my design.
If you are thinking about developing an extension for Thunderbird 60 and would like some pointers, read on for my choppy journey through Thunderbird extension development. Hopefully one or more of these pointers will save you time.
While setting up a new private Docker image registry with certificates signed by an internal certificate authority this week, we ran into an issue getting our Docker nodes to communicate with it:
Error response from daemon: Get https://private.registry.tld/v2/: x509: certificate signed by unknown authority
Following Docker's guidance on self-signed certificates did not directly address the issue.
I was asked to help troubleshoot a NodeJS project recently where the team was having trouble connecting securely (via HTTPS/TLS) to an Elasticsearch instance. They would get back an error about 'self signed certificate in certificate chain'. On further examination, we were able to come up with a client configuration for the elasticsearch library that addressed the issue.
Yesterday marked a first for me: I had to restore a few objects from a large S3 bucket that had been backed up to Glacier. Along the way I learned a few things:
- Objects transitioned to Glacier permanently retain the GLACIER storage class, even after a restore
- If your S3 objects were replicated across an AWS account boundary, you might not have 'full control' of your objects (though AWS will gladly keep charging you to store them)
- The AWS CLI is unhelpful when it comes to recursively copying objects restored from Glacier
The objects can be restored and downloaded; it just takes some specific knowledge.
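The first and third points interact: because a restored object still reports the GLACIER storage class, a recursive copy has to look at the x-amz-restore header from a HEAD request to tell whether a temporary restored copy exists (the AWS CLI's escape hatch for this is `aws s3 cp --force-glacier-transfer`). A sketch in Go of that per-object check, assuming the header values documented for S3's HEAD Object; the function and state names are my own:

```go
package main

import (
	"fmt"
	"strings"
)

// restoreState classifies an object from its StorageClass and the
// x-amz-restore header returned by a HEAD request. Documented header forms:
//   ongoing-request="true"
//   ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"
func restoreState(storageClass, restoreHeader string) string {
	if storageClass != "GLACIER" {
		return "downloadable" // regular S3 object, nothing to do
	}
	switch {
	case restoreHeader == "":
		return "needs-restore" // no restore ever requested
	case strings.Contains(restoreHeader, `ongoing-request="true"`):
		return "restore-in-progress" // wait for the restore to finish
	default:
		// A restored copy exists until expiry-date. The storage class
		// still reads GLACIER, which is why a plain recursive copy
		// skips it unless told otherwise.
		return "downloadable"
	}
}

func main() {
	fmt.Println(restoreState("STANDARD", ""))
	fmt.Println(restoreState("GLACIER", ""))
	fmt.Println(restoreState("GLACIER", `ongoing-request="true"`))
	fmt.Println(restoreState("GLACIER",
		`ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"`))
}
```

A wrapper would issue a RestoreObject request for every "needs-restore" key, poll the "restore-in-progress" ones, and only then copy the "downloadable" set.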