Case Study – HH Angus

HH Angus is a Canadian consulting engineering company that is constantly pushing the boundaries of innovation. They’ve been in business for over 100 years and have 400+ employees across six offices in Canada and the US.

This is a deeper technical look at a B2B SaaS product that we built for them to help them expand their offerings and better serve their customers. Until recently, HH Angus had been a pure services company. They were now crossing over into being a hybrid product / service company with their own SaaS product.

Over many years working with their commercial real estate clients, HH Angus identified that building owners didn’t have the right tools and technologies to properly manage the telecommunication systems that were in place in their buildings.

Things like managing requests from contractors who needed access to a building’s telecom equipment, managing license agreements and contracts between building owners and telecom companies, storing and delivering 3D images of telecom equipment areas, and reporting/analytics across an entire portfolio of buildings were just a few examples of building owners’ needs that were not being met.

Seeing that there was no software solution currently available in the market, HH Angus decided to embark on the mission of solving this challenge themselves.

HH Angus approached our team after going through a discovery process with a large vendor. The result of that first iteration was a rough plan and a hefty price estimate to build. That price estimate was too high for HH Angus, prompting them to get our team to iterate on the plan, offer suggestions on architecture, plan the project, and create an estimate. Our plan was given the thumbs up and our budget was feasible so HH Angus gave us the green light to proceed.

HH Angus was somewhat flexible in terms of technology selection, so we laid out a high-level tech stack for them. For the front end, we decided on a selection of popular tools in the Vue ecosystem (TypeScript, Vue, Pinia, Vite, Tailwind). On the back end, we used TypeScript, Node.js, AWS Lambda and MongoDB. For automated testing, we chose Cucumber.js. For DevOps, we went with AWS, Atlas and CDKTF. We’ll dig deeper into some of these technologies below.

The majority of our previous frontend work had been based on JavaScript, Vue 2 and Vuex. We took this greenfield project as an opportunity to modernize our stack.

For this round of development we chose TypeScript, Vue 3, Pinia, Vite and the Tailwind UI component library. All of these worked seamlessly together and were a pleasure to create with.

Our frontend team opted for Vue 3’s Composition API, as it eliminates boilerplate code and offers a flexible way to organize a component’s TypeScript and HTML.

For state management, the choice was between Pinia and Vuex. Given the relatively small scope of this project, we opted for the newer and lighter-weight Pinia. The primary architectural difference from Vuex is that Pinia has you create discrete stores for your different data types instead of Vuex’s hierarchical submodules. The other significant difference is that Pinia only has state, getters and actions, leaving out mutations altogether.

Vite (French for “fast”) certainly lives up to its name. After years of waiting and watching Webpack’s dev server start up, it’s a bit hard to believe how quickly Vite is running and ready to use.

Tailwind and Tailwind UI were new to our team on this project. We were slow out of the gate, but the investment in time and energy to learn this paradigm was worth the effort, as it contributed to improvements in developer productivity, consistency and quality.

For authentication, we wired in AWS Cognito by picking and choosing from AWS’s Amplify components for Vue.

Unless future clients have a strong preference for React, the feeling our team was left with after working with this stack for several months is that it is modern, easy to learn, efficient to use, and worthy of continued use on new projects.

Now let’s take a look at the backend in some more detail.

We started by defining our models using Mongoose schemas. One challenge when using Mongoose with TypeScript is that you normally need to duplicate your Mongoose schemas as TypeScript interfaces. One approach to keeping things “DRY” is to use a library called typegoose, which generates both the Mongoose schemas and the TypeScript interfaces from a single typegoose schema; however, this requires learning and using yet another bespoke syntax, and it doesn’t support the full Mongoose feature set. So instead we chose a lesser-known library called mongoose-tsgen, which generates all of your TypeScript interfaces from your Mongoose schemas. If you haven’t used mongoose-tsgen before, we highly recommend it.

Next we got to work on the REST API. The API was divided into resource types which largely aligned with our Mongoose models. Each resource in the API, such as Users, had a “handler” function: a standard AWS Lambda function that receives the API request from AWS API Gateway and does some initial processing. In our architecture, the handler then passes the request on to a specific endpoint function to process the rest of the request. Endpoint functions perform security and validation checks specific to that endpoint and then, if everything looks good, call service functions. In a data-driven app, the job of the service functions is generally to execute business logic for a specific resource type, query databases, and integrate with 3rd-party systems. There are often multiple levels of abstraction within the service layer, where a higher-level service function calls multiple lower-level service functions to complete a desired task.
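To make the layering concrete, here is a minimal, dependency-free sketch of the pattern. The resource, function names and data are illustrative rather than the project’s actual code, and the event type is a pared-down stand-in for API Gateway’s event shape:

```typescript
// Pared-down stand-ins for the AWS API Gateway request/response shapes.
interface ApiEvent {
  httpMethod: string;
  pathParameters: Record<string, string> | null;
}

interface ApiResponse {
  statusCode: number;
  body: string;
}

// Service layer: business logic and data access for one resource type.
const userService = {
  async findById(id: string): Promise<{ id: string; name: string } | null> {
    // In the real app this would query MongoDB via Mongoose.
    const users = [{ id: "1", name: "Ada" }];
    return users.find((u) => u.id === id) ?? null;
  },
};

// Endpoint layer: per-endpoint validation, then delegate to services.
async function getUserEndpoint(event: ApiEvent): Promise<ApiResponse> {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: "id required" }) };
  }
  const user = await userService.findById(id);
  return user
    ? { statusCode: 200, body: JSON.stringify(user) }
    : { statusCode: 404, body: JSON.stringify({ error: "not found" }) };
}

// Handler layer: the Lambda entry point, routing to endpoint functions.
async function usersHandler(event: ApiEvent): Promise<ApiResponse> {
  if (event.httpMethod === "GET") return getUserEndpoint(event);
  return { statusCode: 405, body: JSON.stringify({ error: "method not allowed" }) };
}
```

The payoff of this split is that endpoint functions stay small and focused on validation, while service functions remain reusable across endpoints.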

One notable aspect of this project was the requirement for a sophisticated Role-Based Access Control (RBAC) system in which each role is assigned one or more permissions and each user is assigned one or more roles. In this case, there were 14 roles and 70 permissions. Each permission was fairly complex, such as: “Ability to read basic fields of a document if the user is an employee of company X and that document is for a tenant of company X”. So two users making the exact same API call would get different results back (different rows as well as different fields) depending on their roles and permissions. To accommodate this, we programmatically built up complex MongoDB query objects from multiple nested $and and $or boolean query fragments, based on which permissions applied to each user.
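A simplified sketch of that query-building idea is below. The permission names and scopes are invented for illustration (the real permission set was far larger), but the shape of the output is the point: each applicable permission contributes a fragment, the fragments are OR’d together, and the result is AND’d with the request’s own filter.

```typescript
type QueryFragment = Record<string, unknown>;

interface Permission {
  action: string;
  // Returns the filter this permission allows for a given user.
  scope: (user: { companyId: string }) => QueryFragment;
}

// Hypothetical permissions, loosely modeled on the example above.
const readOwnCompanyDocs: Permission = {
  action: "document:read",
  scope: (user) => ({ companyId: user.companyId }),
};

const readTenantDocs: Permission = {
  action: "document:read",
  scope: (user) => ({ "tenant.landlordCompanyId": user.companyId }),
};

function buildQuery(
  user: { companyId: string },
  permissions: Permission[],
  action: string,
  baseFilter: QueryFragment,
): QueryFragment {
  const fragments = permissions
    .filter((p) => p.action === action)
    .map((p) => p.scope(user));
  if (fragments.length === 0) {
    // No applicable permission: match nothing.
    return { _id: { $exists: false } };
  }
  // A row is visible if ANY permission grants it, AND it matches the
  // request's own filter.
  return { $and: [baseFilter, { $or: fragments }] };
}
```

Passing the resulting object straight to a Mongoose `find()` call lets MongoDB enforce row-level visibility, rather than filtering results in application code.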

To make sure everything was working as expected, the key component of our automated testing approach was Cucumber.js.

We really like the ability to define test scenarios in plain English using the Given-When-Then syntax, so that they can be read, understood and approved by all stakeholders (developers, testers, project managers and our client). The TypeScript implementation of the scenarios can then be neatly “hidden” away in step definition files that only the developers ever see. The test scenarios also make great living documentation.

In particular, Cucumber came in very handy for testing the complex RBAC security system we had built. We wrote test scenarios to cover every possible role that could call each API endpoint. Each test scenario was initialized by granting the test user specific permissions. Then the API call would be made and we’d check the response to see if we got back the expected data.
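The flow of one such scenario can be sketched as plain functions. In the real suite these steps live in Cucumber.js step-definition files bound to Gherkin scenario text; everything below, including the permission name and the in-memory data, is purely illustrative.

```typescript
// Shared scenario state ("World" in Cucumber terminology).
interface World {
  permissions: string[];
  response?: { statusCode: number; rows: string[] };
}

// Given the test user has the "document:read:own-company" permission
function givenUserHasPermission(world: World, permission: string): void {
  world.permissions.push(permission);
}

// When the user lists documents via the API
function whenUserListsDocuments(world: World): void {
  // Stand-in for the real API call: filter a fixed data set by permission.
  const allDocs = [
    { id: "a", company: "own" },
    { id: "b", company: "other" },
  ];
  const visible = world.permissions.includes("document:read:own-company")
    ? allDocs.filter((d) => d.company === "own")
    : [];
  world.response = { statusCode: 200, rows: visible.map((d) => d.id) };
}

// Then the response contains only the rows this role may see
function thenResponseRowsAre(world: World, expected: string[]): boolean {
  return JSON.stringify(world.response?.rows) === JSON.stringify(expected);
}
```

Repeating this Given-When-Then cycle once per role and endpoint gave us systematic coverage of the permission matrix.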

Last but not least, let’s take a look at DevOps.

We started by creating separate AWS accounts for development, staging and production following the “account-per-environment” strategy.

HH Angus was already using Terraform to manage their infrastructure on other projects, so we continued down this path but decided to introduce CDK for Terraform (CDKTF) so that we could write the Infrastructure-as-Code (IaC) code in TypeScript and have it generate the underlying Terraform code for us. Using CDKTF allowed us all the benefits of having access to a full programming language that we were familiar with while writing our IaC code.
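As a sketch of what this looks like in practice, a minimal CDKTF program might resemble the following. The stack name, region, resources and bucket names here are illustrative, not the project’s actual infrastructure, and the provider import paths should be checked against the CDKTF prebuilt provider docs for your version:

```typescript
import { App, TerraformStack } from "cdktf";
import { Construct } from "constructs";
import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";

class ApiStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    new AwsProvider(this, "aws", { region: "ca-central-1" });

    // Having a full programming language means loops, helpers and types
    // in your IaC code, instead of repeating HCL blocks:
    for (const env of ["dev", "staging"]) {
      new S3Bucket(this, `assets-${env}`, {
        bucket: `example-assets-${env}`, // hypothetical bucket names
      });
    }
  }
}

const app = new App();
new ApiStack(app, "api");
app.synth();
```

Running `cdktf synth` on a program like this generates the underlying Terraform configuration, and `cdktf deploy` applies it.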

Having said that, one area of friction when using CDKTF is that the auto-generated documentation is very minimal and doesn’t provide examples. So when using a new Terraform resource or module, we usually found ourselves spending most of our time reading the Terraform docs rather than the CDKTF docs, then converting to the CDKTF equivalent in our heads.

Another sneaky gotcha to watch out for is that CDKTF converts Terraform’s snake_case to TypeScript’s camelCase, but only for the top level of a nested JSON structure. So if a Terraform resource has nested configuration, those nested keys still need to be written in snake_case, leading to a mix of snake_case and camelCase in the same JSON object in CDKTF code. Thankfully this “Frankencode” didn’t appear too often.
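For example (a hypothetical resource and attribute names, purely to illustrate the casing mix described above):

```typescript
new SomeResource(this, "example", {
  bucketName: "my-bucket", // top level: converted to camelCase
  lifecycleRule: [
    {
      enabled: true,
      noncurrent_version_expiration: { days: 30 }, // nested: still snake_case
    },
  ],
});
```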

Overall we’re still quite happy with CDKTF and continue to use it in new projects.

One area that turned out to be more of a struggle than we expected was AWS Lambda.

This technology was chosen by a previous consultant that our client had worked with and it was our first time using Lambda. Now that we’re more familiar with it, we likely would have chosen a more conventional approach as Lambda has some drawbacks that make it less suitable for a typical website backend. Even so, we were able to successfully overcome these challenges for our client.

The initial challenge was simply getting the Lambda functions to build at a respectable speed. We started out using AWS SAM to configure and build our Lambda functions. However, we found that it was taking ~3 mins to build each Lambda, which was clearly not going to work. We Googled around and found a number of other developers struggling with slow build times who were also using the same combo of SAM + Node.js + TypeScript.

We started digging around for a better solution and eventually landed on Webpack with the aws-sam-webpack-plugin. This allowed us to replace the SAM builds with Webpack builds, which got the build time down to about 15 seconds. Much better!
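Our configuration followed the general shape of the plugin’s documented setup. The sketch below is an outline under assumptions (paths, loaders and options should be verified against the aws-sam-webpack-plugin docs rather than taken verbatim):

```typescript
// webpack.config.ts (sketch): the plugin reads template.yaml and wires up
// one Webpack entry per Lambda function.
import path from "path";
import AwsSamPlugin from "aws-sam-webpack-plugin";

const awsSamPlugin = new AwsSamPlugin();

export default {
  // One entry point per Lambda, derived from the SAM template.
  entry: () => awsSamPlugin.entry(),
  output: {
    filename: (chunkData: { chunk: { name: string } }) =>
      awsSamPlugin.filename(chunkData),
    libraryTarget: "commonjs2",
    path: path.resolve(".aws-sam/build"),
  },
  devtool: "source-map",
  resolve: { extensions: [".ts", ".js"] },
  target: "node",
  module: { rules: [{ test: /\.tsx?$/, loader: "ts-loader" }] },
  mode: process.env.NODE_ENV === "production" ? "production" : "development",
  plugins: [awsSamPlugin],
};
```

With this in place, `sam deploy` can pick up the Webpack output from the usual `.aws-sam/build` directory.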

Once we had Lambdas up and running, we were faced with the next key issue: cold start latency. This often added five seconds to the API response time for the unlucky user whose request required a cold start. There were solutions to this, but none of them were great. Pre-warming Lambdas came with its own issues and complexity. And paying for “provisioned concurrency” is essentially paying to keep your (serverless) Lambdas running 24/7, which sounds, and costs, a heck of a lot like a server.

We’ve since used Lambda for other async tasks that weren’t overly sensitive to cold-start time, and it has been great in those situations. But for serving up an API, we think a more “traditional” approach of Express with Docker on AWS ECS/Fargate would have been simpler and more efficient to run.

Once we got past the Lambda issues, it was smoother sailing with AWS. We set up our VPC, added CI/CD with CodeBuild and CodePipeline, and configured other services such as Route 53, CloudFront, CloudWatch, EventBridge, Parameter Store, SES, etc.

By the end of the project, HH Angus was very pleased with the finished system and had already received very positive feedback on it from potential partners and customers. We’ve since moved on to working on more projects with them and look forward to continuing to help them along their journey as a trusted software engineering partner.

Scott Bentley

Manager, Software Development

HH Angus & Associates Consulting Engineers Ltd.

Toronto, ON

“I came to know Justin through a nature retreat of sorts on Manitoulin Island. The best I can describe this location is that it is a wonderland for tinkerers and innovators and lovers of nature and community.

Having experienced this place and the culture Justin has fostered there, I knew that it would be a pleasure to work with him and this opportunity presented itself a couple of years later.

It came about that HH Angus, the company where I work, needed to develop a scalable multi-tenanted SaaS application on Amazon Web Services. We found various professional development services to be excessively priced and/or uninspiring to work with, and so I thought to approach Justin and his Steel Toad group to see what they could offer.

We have been working together for about two years now and have completed multiple projects with great success. The members of the Steel Toad group have changed over this time, but the culture of ingenuity, collaboration and warm professionalism has not. I am continuously impressed by the willingness to go above and beyond and to find ways to make a project work both in technology and budget.

I would absolutely recommend Steel Toad to anyone looking to develop software solutions both large and small. Justin and his team can help whether you know exactly what you want or need some ideas and innovative energy to figure that out.”

Reach Us

All our team members are based in the US or Canada.