Why you should add a Cypress test to your CI/CD pipeline
Cypress is a framework for testing web applications, primarily used for end-to-end testing. In an end-to-end test, you exercise your application as a whole, just as a real user would: by interacting with GUI components, without any mocked components. The goal of this post is to convince you to add a single, simple Cypress test to your CI/CD pipeline.
In end-to-end tests with Cypress you can cover a lot of aspects of your application with very little test code. Below is basically the simplest test you can write with Cypress.
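A sketch of such a minimal test could look like the following (the URL and the expected text are placeholders for illustration, not taken from the original post):

```typescript
// A sketch of a very small Cypress end-to-end test.
// The URL and the expected heading text are placeholders.
describe('smoke test', () => {
  it('loads the start page', () => {
    cy.visit('https://example.com');             // open the page like a real user would
    cy.contains('Welcome').should('be.visible'); // assert that the expected content rendered
  });
});
```

Even a test this small verifies that the page is served, that the frontend code runs without uncaught exceptions, and that the expected content actually shows up.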
Better Terminal Experience for Git and Kubernetes
When working on a project that uses git, it is essential to check out the correct branch. You don’t want to realize after a few hours of work that you worked on the wrong branch and now have a few messy merges and rebases in front of you. Of course, the easiest way to show the current branch is to run git status, but that requires an action from you. Instead, I recommend including the git branch as part of your prompt string. The prompt string is the string that marks your command line and is set by the shell variable PS1.
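As a rough sketch (assuming bash; the exact layout of the prompt is just an example), this could look something like the following in your ~/.bashrc:

```bash
# Append the current git branch (if any) to the prompt.
# Assumes bash, and git 2.22+ for `git branch --show-current`.
parse_git_branch() {
  local branch
  branch=$(git branch --show-current 2>/dev/null)
  [ -n "$branch" ] && printf ' (%s)' "$branch"
}
export PS1='\u@\h:\w$(parse_git_branch)\$ '
```

With this in place, the prompt always shows which branch you are on, e.g. user@host:~/repo (main)$, without you having to run any command.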
Introducing Workflow Timer
Workflow-timer is a GitHub Action that measures the duration of a workflow and compares it to historical runs.
The purpose of this action is to make the developer aware of when feedback loops get longer. Let’s say that you are running unit tests as part of your current workflow. If merging your changes (your PR) would increase the time it takes to run the unit tests by 50%, your changes probably have unwanted side effects. It’s about creating awareness for the developer.
Using managed identities and role-based access control is great!
In a project I work on, we use Azure App Service for hosting an ASP.NET application. All external configuration used by the application is stored in an Azure App Configuration store. I recently updated how the application authenticates against the App Configuration store, and I think it worked out pretty well.
Prior to the change, we used connection strings (i.e., a string containing the endpoint, username, and password) for authentication. The main drawback of this is that we have to manage the credentials ourselves. We must provide the connection string to the application in some way (e.g., set it in a CI/CD pipeline after deploying the application), and if the connection string is compromised, we must regenerate it and make sure that the application is provided with the new one.
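To illustrate the difference, here is a rough sketch using the JavaScript/TypeScript Azure SDK (the application in the post is ASP.NET, but the idea is the same; the endpoint and setting name are placeholders): with a managed identity, the application only needs the non-secret endpoint of the store, and what it may read is governed by role-based access control instead of a credential we distribute ourselves.

```typescript
import { AppConfigurationClient } from "@azure/app-configuration";
import { DefaultAzureCredential } from "@azure/identity";

// Before: authenticate with a connection string that we have to distribute and rotate ourselves.
// const client = new AppConfigurationClient(process.env.APP_CONFIG_CONNECTION_STRING!);

// After: authenticate with the managed identity of the hosting environment.
// Only the (non-secret) endpoint is needed; RBAC decides what the identity may read.
const client = new AppConfigurationClient(
  "https://my-config-store.azconfig.io", // placeholder endpoint
  new DefaultAzureCredential()
);

// Read a setting as usual; the credential handling is transparent to the caller.
const setting = await client.getConfigurationSetting({ key: "SomeSetting" });
console.log(setting.value);
```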
Long feedback loops invite context switching
As a backend developer, there is one thing in particular that I’m jealous of when it comes to frontend development: the short feedback loop, that is, the time and effort it takes to get feedback on your changes. Enable hot or live reloading while developing, and the results of your changes are in front of you instantly, which is super nice. This is how it should be, even when you are developing the backend, infrastructure, CI/CD, or whatever. A long feedback loop can have a lot of hidden consequences, and it makes you ineffective. You and your team should spend time making sure that your feedback loop is short and kept short.
Introducing Frontman
Frontman is a very lightweight NGINX reverse proxy that is deployed using Docker. Its purpose is to act as the entry point to your server: it routes incoming traffic to one of many Docker-based applications running on the same server, based on the hostname of the incoming request.
The rationale behind this is that it enables you to host as many services as you want on the same server, while keeping only ports 80 and 443 open to the outside world.
How I've automated the setup of my virtual server
Lately I’ve been looking for a good way of hosting some personal projects. I wanted something relatively cheap that I could use to host multiple services. A colleague of mine has for a long time used a single virtual server (more specifically, an EC2 instance in AWS) where he runs multiple services inside Docker containers. To enable access to each individual service, an NGINX reverse proxy forwards traffic to the services. I decided to try the same approach.
Trying out AWS Timestream
AWS recently (last year) released their new serverless database focused purely on time series data, Amazon Timestream. On their product page, AWS describes the database as follows:
“Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases.”
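To get a feel for it, here is a rough sketch of what writing a single event could look like with the AWS SDK for JavaScript/TypeScript (the region, database, table, and dimension names are placeholders):

```typescript
import {
  TimestreamWriteClient,
  WriteRecordsCommand,
} from "@aws-sdk/client-timestream-write";

// Write one time series record. All names below are placeholders for illustration.
const client = new TimestreamWriteClient({ region: "eu-west-1" });

await client.send(
  new WriteRecordsCommand({
    DatabaseName: "iot-demo",
    TableName: "sensor-readings",
    Records: [
      {
        Dimensions: [{ Name: "deviceId", Value: "sensor-42" }],
        MeasureName: "temperature",
        MeasureValue: "21.5",
        MeasureValueType: "DOUBLE",
        Time: Date.now().toString(), // milliseconds since epoch by default
      },
    ],
  })
);
```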
Minimize Java Lambda Cold Start Times
If you have ever run Java inside a Lambda function on AWS, you will have noticed the quite significant cold start times that come with spinning up the JVM environment. In this post, I will discuss some different tricks you can use to minimize these cold start times.
The problem with cold starts arises when there is no “warm” Lambda available to handle an incoming request, which usually happens whenever an endpoint experiences a large and sudden spike in traffic. The most common scenario is probably when an endpoint goes from having had no traffic at all for a while (and thus having no warm Lambdas ready) to suddenly having one or more incoming requests to serve.
Migrating data between DynamoDB tables
When setting up a new DynamoDB table, an important decision is what primary key to use. However, it’s not uncommon to lack the full picture up front, which can make it hard to get this decision right beforehand. While the official AWS documentation states that “you shouldn’t start designing your schema for DynamoDB until you know the questions it will need to answer”, you often need to experiment to discover what those questions are. Luckily, there is an easy approach to migrating to a new key schema, which we will describe in this blog post.
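As a rough sketch of the core idea (using the AWS SDK for JavaScript/TypeScript; the table names and the new composite key are hypothetical), the migration boils down to scanning the old table and writing every item into a new table that has the key schema you actually want:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import {
  DynamoDBDocumentClient,
  ScanCommand,
  PutCommand,
} from "@aws-sdk/lib-dynamodb";

// Copy all items from the old table into a new table with the desired key schema.
// Table names and the derived key attributes are placeholders for illustration.
const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

let lastEvaluatedKey: Record<string, any> | undefined;
do {
  // Scan the old table one page at a time.
  const page = await ddb.send(
    new ScanCommand({ TableName: "orders-old", ExclusiveStartKey: lastEvaluatedKey })
  );

  for (const item of page.Items ?? []) {
    // Write each item to the new table, reshaping attributes so that they
    // match the new (hypothetical) composite primary key.
    await ddb.send(
      new PutCommand({
        TableName: "orders-new",
        Item: { ...item, pk: `CUSTOMER#${item.customerId}`, sk: `ORDER#${item.orderId}` },
      })
    );
  }

  lastEvaluatedKey = page.LastEvaluatedKey;
} while (lastEvaluatedKey);
```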