
· 18 min read
Yoni Goldberg

Intro: A sweet pattern that got lost in time

When was the last time you introduced a new pattern to your code? The use-case pattern is a great candidate: it's powerful, sweet, easy to implement, and can strategically elevate your backend code quality in a short time.

The term 'use case' means many different things in our industry. It's used by product folks to describe a user journey and mentioned by various famous architecture books to describe vague high-level concepts. This article focuses on its practical application at the code level, emphasizing its surprising merits and how to implement it correctly.

Technically, the use-case code sits between the controller (e.g., API routes) and the business-logic services (those that calculate or save data). It is called by the controller and describes, in high-level words and in a simple manner, the flow that is about to happen. Doing so increases code readability and navigability, pushes complexity toward the edges, improves observability, and brings three other merits that are shown below with examples.

But before we delve into its mechanics, let's first touch on a common problem it aims to address and see some code that calls for trouble.

The problem: too many details, too soon

Imagine a developer, returning to a codebase she hasn't touched in months, tasked with fixing a bug in the 'new orders flow'—specifically, an issue with price calculation in an electronic shop app.

Her journey begins promisingly smooth:

- 🤗 Testing - She starts her journey with the automated tests to learn about the flow from an outside-in approach. The testing code is short and standard, as it should be:

test("When adding an order with 100$ product, then the price charge should be 100$ ", async () => {
// ....
})

- 🤗 Controller - She moves to skim through the implementation and starts from the API routes. Unsurprisingly, the Controller code is straightforward:

app.post("/api/order", async (req: Request, res: Response) => {
const newOrder = req.body;
await orderService.addOrder(newOrder); // 👈 This is where the real-work is done
res.status(200).json({ message: "Order created successfully" });
});

Smooth sailing thus far, almost zero complexity. Typically, the controller now hands off to a Service where the real implementation begins, so she navigates into the order service to find where and how to fix that pricing bug.

- 😲 The service - Suddenly! She is thrown into hundreds of lines of code (at best) with tons of details. She encounters classes with intricate states, inheritance hierarchies, a dependency injection framework that wires all the dependent services, and other boilerplate code. Here is a sneak peek from a real-world service, already simplified for brevity. Read it, feel it:

let DBRepository;

export class OrderService extends ServiceBase<OrderDto> {
async addOrder(orderRequest: OrderRequest): Promise<Order> {
try {
ensureDBRepositoryInitialized();
const { openTelemetry, monitoring, secretManager, priceService, userService } =
dependencyInjection.getVariousServices();
logger.info("Add order flow starts now", orderRequest);
openTelemetry.sendEvent("new order", orderRequest);

const validationRules = await getFromConfigSystem("order-validation-rules");
const validatedOrder = validateOrder(orderRequest, validationRules);
if (!validatedOrder) {
throw new Error("Invalid order");
}
this.base.startTransaction();
const user = await userService.getUserInfo(validatedOrder.customerId);
if (!user) {
const savedOrder = await tryAddUserWithLegacySystem(validatedOrder);
return savedOrder;
}
// And it goes on and on until the pricing module is mentioned
}

So many details and things to learn upfront. Which of them are crucial for her to learn right now, before dealing with her task? And how can she find where that pricing module is?

She is not happy. Right off the bat, she must acquaint herself with a handful of product and technical narratives. She just fell off the complexity cliff: from a zero-complexity controller straight into a 1000-piece puzzle, many of whose pieces are unrelated to her task.

The use-case pattern

In a perfect world, she would love first to get a high-level brief of the involved steps so she can understand the whole flow, and from this comfort standpoint choose where to deepen her journey. This is what this pattern is all about.

The use-case is a file with a single function that is called by the API controller to orchestrate the various implementation services. It's merely a simple function that enumerates and calls the code that does the actual job:

A use-case code example
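
To make this concrete, here is a minimal sketch of what such a file might look like (the step functions and import paths are assumptions for illustration, mirroring the examples later in this article):

// add-order-use-case.ts - a hedged sketch of a use-case file
import { OrderRequest } from "./types"; // assumed path
import { validateAndCoerceOrder, calculateOrderPricing } from "./services/order-logic"; // assumed paths
import { insertOrder } from "./repositories/order-repository";
import { sendSuccessEmailToCustomer } from "./services/email";

export async function addOrderUseCase(orderRequest: OrderRequest) {
  // 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const orderWithPricing = calculateOrderPricing(validatedOrder);
  const savedOrder = await insertOrder(orderWithPricing);
  await sendSuccessEmailToCustomer(savedOrder);
  return savedOrder;
}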

Each interaction with the system—whether it's posting a new comment, requesting user deletion, or any other action—is managed by a dedicated use-case function. Each use-case constitutes multiple 'steps' - function calls that fulfill the desired flow.

By design, it's short and flat: no if/else, no try-catch, no algorithms, just plain calls to functions. This way, it tells the story in the simplest manner. Note how it doesn't share too many details, yet tells enough for one to understand 'WHAT' is happening and 'WHO' is doing it, but not 'HOW'.

But why is this minimalistic approach so crucial?

The merits

1. A navigation index

When seeking a specific book in the local library, the visitor doesn't have to skim through all the shelves to find a topic of interest. A library, like any other information system, uses a navigational system, wayfinding signage, to highlight the path to a specific information area.

Library catalog The library catalog redirects the reader to the area of interest

Similarly, in software development, when a developer needs to address a particular issue—such as fixing a bug in pricing calculations—the 'use case' acts like a navigational tool within the application. It serves as a hitchhiker's guide, or the yellow pages, pinpointing exactly where to find the necessary piece of code. While other organizational strategies like modularization and folder structures offer ways to manage code, the 'use case' approach provides a more focused and precise index: it shows only the relevant areas (and not 50 unrelated modules), it tells precisely when each module is used, what the specific entry point is, and which exact parameters are passed.

2. Deferred and spread complexity

When the code reader's journey starts at the level of implementation services, she is immediately bombarded with intricate details. This immersion exposes her to both product and technical complexities right from the start. Typically, like in our example, the code first uses a dependency injection system to instantiate some classes, checks for nulls in the state, and fetches values from the distributed config system - all before even starting on the primary task. This is called accidental complexity. Tackling complexity is one of the finest arts of app design; as the code planner, you can't simply eliminate complexity, but you can at least reduce the chances of someone meeting it.

Imagine your application as a tree where branches represent functions and the fruits are pockets of embedded complexity, some of which are poisoned (i.e., unnecessary complexities). Your objective is to structure this tree so that navigating through it exposes the visitor to as few poisoned fruits as possible:

The blocking-complexity tree The accidental-complexity tree: A visitor aiming to reach a specific leaf must navigate through all the intervening poisoned fruits.

This is where the 'Use Case' approach shines: it puts high-level product steps and minimal technical details at the outset, acting as a navigation system that simplifies access to the various parts of the application. With this navigation tool, she can easily ignore steps that are unrelated to her work and avoid the poisoned fruits. A true strategic design win.

The spread-complexity tree The spread-complexity tree: Complexity is pushed to the periphery, allowing the reader to navigate directly to the essential fruits only.

3. A practical workflow that promotes efficiency

When embarking on a new coding flow, where do you start? After digesting the requirements and setting up some initial API routes and high-level component tests, the next logical step might be less obvious. Here’s a strategy: begin with a use-case. This approach promotes an outside-in workflow that not only streamlines development but also exposes potential risks early on.

While drafting a new use-case, you essentially map out the various steps of the process. Each step is a call to some service or repository function, sometimes before it even exists. Effortlessly and spontaneously, these steps become your TODO list, a live document that tells not only what should be implemented but also where risky gotchas hide. Take, for instance, this straightforward use-case for adding an order:

export async function addOrderUseCase(orderRequest: OrderRequest) {
const validatedOrder = validateAndCoerceOrder(orderRequest);
const orderWithPricing = calculateOrderPricing(validatedOrder);
const purchasingCustomer = await assertCustomerExists(orderWithPricing.customerId);
const savedOrder = await insertOrder(orderWithPricing);
await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email);
}

This structured approach allows you to preemptively tackle potential implementation hurdles:

- sendSuccessEmailToCustomer - What if you lack a necessary email service token from the Ops team? Sometimes, this demands approval and might take more than a week (believe me, I know). Acting now, before spending 3 days on coding, can make a big difference.

- calculateOrderPricing - Reminds you to confirm pricing details with the product team—ideally before they're out of office, avoiding delays that could impact your delivery timeline.

- assertCustomerExists - This call goes to an external Microservice that belongs to the User Management team. Did they already provide an OpenAPI specification of their routes? Check your Slack now; if they didn't yet, asking early can prevent this from becoming a roadblock later.

Not only does this high-level thinking highlight your tasks and risks, it's also an optimal spot to start the design from:

4. The optimal design viewpoint

Early on, when initiating a use-case, the developer defines the various types, function signatures, and their initial skeleton return data. This naturally evolves into an effective design drill where the overall flow is decomposed into small units that actually fit together. Sketching this out surfaces early on where puzzle pieces don't fit, while considering the underlying technologies. Here is an example: once I sketched a use-case and initially came up with these steps:

await sendSuccessEmailToCustomer(savedOrder, purchasingCustomer.email, orderId);
const savedOrder = await insertOrder(orderWithPricing);

Going with my initial use-case above, an email is sent before the order is saved. Soon enough the compiler yelled at me: the email function signature is not satisfied, an 'Order Id' parameter is needed, but to obtain one the order must be saved to the DB first. I tried to change the order of the steps; unfortunately, it turned out that my ORM does not return the ID of saved entities. I'm stuck, my design struggles, but at least this is realized before spending days on details. Unlike designing with paper and UML, designing with a use-case brings no overhead. Moreover, unlike high-level diagrams detached from implementation realities, use-case design is grounded in the actual constraints of the technology being used.

5. Better coverage reports

Say you have 82.35% test code coverage: are you happy and confident enough to deploy? I'd suggest that anyone below 100% must first clarify which code exactly is not covered by tests. Is it some nitty-gritty niche code, or actually critical business operations that are not fully tested? Typically, answering this requires scrutinizing the coverage of every app file, a daunting task.

Use-cases simplify the coverage digest: when looking directly at the use-cases folder, one gets 'features coverage', a unique look into which user features and steps lack testing:

Use case coverage The use-cases folder test coverage report, some use-cases are only partially tested

See how the code above has excellent overall coverage, 82.35%. But what about the remaining 17.65% of the code? Looking at the report triggers a red flag: the 'payment-use-case' is barely tested. This flow is where revenues are generated, a critical financial process which, as it turns out, has very low test coverage. This significant observation calls for immediate action. Use-case coverage thus not only helps in understanding which parts of your application are tested but also prioritizes testing efforts based on business criticality rather than mere technical functionality.
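
Producing this focused view is cheap. A minimal sketch, assuming Jest and an assumed folder layout, could look like this:

// jest.config.js - a hedged sketch: a dedicated 'features coverage' view
module.exports = {
  collectCoverage: true,
  // 👇 Assumed folder layout; point this at wherever your use-cases live
  collectCoverageFrom: ["src/**/use-cases/**/*.ts"],
  coverageReporters: ["text", "html"],
};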

6. Practical domain-driven code

The influential book "Domain-Driven Design" advocates for "committing the team to relentlessly exercise the domain language in all communications within the team and in the code." This principle asserts that aligning code closely with product narratives fosters a common language among diverse stakeholders (e.g., product, team-leads, frontend, backend). While this sounds sensible, this advice is also a little vague - how and where should this happen?

Use-cases bring this idea down to earth: the use-case files are named after user journeys in the system (e.g., purchase-new-goods), the use-case code itself naturally describes the flow in a product language. For instance, if employees commonly use the term 'cut' at the water cooler to refer to a price reduction, the corresponding use-case should employ a function named 'calculatePriceCut'. This naming convention not only reinforces the domain language but also enhances mutual understanding across the team.

7. Consistent observability

I bet you've encountered the situation where you turn the log level to 'Debug' (or any other verbose mode) and get a gazillion, overwhelming, unbearable amount of log statements. Chances are you've also met the opposite: setting the logger level to 'Info' and finding almost zero logging for the specific route you're looking into. It's hard to formalize among team members when exactly each type of logging should be invoked; the result is typically inconsistent and lacking observability.

Use-cases can drive trustworthy and consistent monitoring by taking advantage of the already-produced use-case steps. Since the precious work of breaking down the flow into meaningful steps was already done (e.g., send-email, charge-credit-card), each step can produce the desired level of logging. For example, one team's approach might be to emit logger.info on use-case start and use-case end, and then have each step emit logger.debug. Whatever the chosen level is, use-case steps bring consistency and automation. Logging aside, the same can be applied to any other observability technique, like OpenTelemetry, to produce custom spans for every flow step.

The implementation, though, demands some thinking: cluttering every step with a log statement is both verbose and depends on manual human work:

// ❗️Verbose use case
export async function addOrderUseCase(orderRequest: OrderRequest): Promise<Order> {
logger.info("Add order use case - Adding order starts now", orderRequest);
const validatedOrder = validateAndCoerceOrder(orderRequest);
logger.debug("Add order use case - The order was validated", validatedOrder);
const orderWithPricing = calculateOrderPricing(validatedOrder);
logger.debug("Add order use case - The order pricing was decided", validatedOrder);
const purchasingCustomer = await assertCustomerHasEnoughBalance(orderWithPricing);
logger.debug("Add order use case - Verified the user balance already", purchasingCustomer);
const returnOrder = mapFromRepositoryToDto(purchasingCustomer as unknown as OrderRecord);
logger.info("Add order use case - About to return result", returnOrder);
return returnOrder;
}

One way around this is to create a step wrapper function that makes each step observable. This wrapper function gets called for every step:

import { openTelemetry } from "@opentelemetry";
async function runUseCaseStep(stepName, stepFunction) {
logger.debug(`Use case step ${stepName} starts now`);
// Create Open Telemetry custom span
openTelemetry.startSpan(stepName);
return await stepFunction();
}

Now the use-case gets automated and consistent transparency:

export async function addOrderUseCase(orderRequest: OrderRequest) {
// 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
const validatedOrder = await runUseCaseStep("Validation", validateAndCoerceOrder.bind(null, orderRequest));
const orderWithPricing = await runUseCaseStep("Calculate price", calculateOrderPricing.bind(null, validatedOrder));
await runUseCaseStep("Send email", sendSuccessEmailToCustomer.bind(null, orderWithPricing));
}

The code is a little simplified; in a real-world wrapper you'll have to add a try-catch and cover other corner cases, but it makes the point: each step is a meaningful milestone in the user's journey that gets automated and consistent observability.
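
For illustration, a hedged sketch of a slightly more defensive wrapper (the error-handling policy here is an assumption, adapt it to your own conventions):

async function runUseCaseStep(stepName, stepFunction) {
  logger.debug(`Use case step ${stepName} starts now`);
  // Create Open Telemetry custom span, same as in the simplified version above
  openTelemetry.startSpan(stepName);
  try {
    return await stepFunction();
  } catch (error) {
    // One consistent place to record which step failed before re-throwing
    logger.error(`Use case step ${stepName} failed`, error);
    throw error;
  } finally {
    logger.debug(`Use case step ${stepName} has finished`);
  }
}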

Implementation best practices

1. Dead-simple 'no code'

Since use-cases are mostly about zero complexity, use no code constructs other than flat calls to functions. No if/else, no switch, no try/catch, nothing, only a simple list of steps. A while ago I decided to allow just one if/else in a use-case:

export async function addOrderUseCase(orderRequest: OrderRequest) {
const validatedOrder = validateAndCoerceOrder(orderRequest);
const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
if (purchasingCustomer.isPremium) {//❗️
sendEmailToPremiumCustomer(purchasingCustomer);
// This easily will grow with time to multiple if/else
}
}

A month later, when I revisited the code above, there were already three nested if/elses. A year from now, the function above will host typical imperative code with many nested branches. Avoid this slippery road by drawing a very strict border: put the conditions inside the step functions:

export async function addOrderUseCase(orderRequest: OrderRequest) {
const validatedOrder = validateAndCoerceOrder(orderRequest);
const purchasingCustomer = await assertCustomerHasEnoughBalance(validatedOrder);
await sendEmailIfPremiumCustomer(purchasingCustomer); //🙂
}

2. Find the right level of specificity

The finest art of a great use-case is finding the right level of detail. At this early stage, the reader is like a traveler who uses a map to get some sense of the area or to find a specific road, definitely not to learn about every road in the country. On the other hand, a good map doesn't show only the main highway and nothing else. For example, the following use-case is too short and vague:

export async function addOrderUseCase(orderRequest: OrderRequest) {
const validatedOrder = validateAndCoerceOrder(orderRequest);
const finalOrderToSave = await applyAllBusinessLogic(validatedOrder);//🤔
await insertOrder(finalOrderToSave);
}

The code above doesn't tell a story, nor does it eliminate any paths from the journey. Conversely, the following code does a better job of telling the story briefly:

export async function addOrderUseCase(orderRequest: OrderRequest) {
const validatedOrder = validateAndCoerceOrder(orderRequest);
const pricedOrder = await calculatePrice(validatedOrder);
const purchasingCustomer = await assertCustomerHasEnoughBalance(pricedOrder);
const orderWithShippingInstructions = await addShippingInfo(pricedOrder, purchasingCustomer);
await insertOrder(orderWithShippingInstructions);
}

Things get a little more challenging when dealing with long flows. What if there are a handful of important steps, say 20? What if multiple use-cases share a lot of repetition and common steps? Consider the case where 'admin approval' is a multi-step process invoked by a handful of different use-cases. When facing this, consider breaking the flow down into multiple use-cases where one is allowed to call the other, as sketched below.
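
A hedged sketch of that idea: the shared steps live in their own smaller use-case that bigger flows invoke (all names here are hypothetical):

// admin-approval-use-case.ts - a shared, smaller use-case (hypothetical)
export async function approveByAdminUseCase(order: Order) {
  const approvalRequest = await createApprovalRequest(order);
  const decision = await waitForAdminDecision(approvalRequest);
  await notifyRequester(decision);
  return decision;
}

// add-large-order-use-case.ts - a bigger flow that reuses it
export async function addLargeOrderUseCase(orderRequest: OrderRequest) {
  const validatedOrder = validateAndCoerceOrder(orderRequest);
  const approval = await approveByAdminUseCase(validatedOrder); // 👈 One use-case calling another
  await insertOrder({ ...validatedOrder, approvalId: approval.id });
}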

3. When you have no choice, control the DB transaction from the use-case

What if step 2 and step 5 both deal with data and must be atomic (fail or succeed together)? Typically you'll handle this with DB transactions, but since each step is discrete, how can a transaction be shared among the coupled steps?

If the steps take place one after the other, it makes sense to let the downstream service/repository handle them together and abstract the transaction from the use-case. What if the atomic steps are not consecutive? In this case, though not ideal, there is no escape from making the use-case acquainted with a transaction object:

export async function addOrderUseCase(orderRequest: OrderRequest) {
// 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
const transaction = Repository.startTransaction();
const purchasingCustomer = await assertCustomerHasEnoughBalance(orderRequest, transaction);
const orderWithPricing = calculateOrderPricing(purchasingCustomer);
const savedOrder = await insertOrder(orderWithPricing, transaction);
const returnOrder = mapFromRepositoryToDto(savedOrder);
Repository.commitTransaction(transaction);
return returnOrder;
}

4. Aggregate small use-cases in a single file

A use-case file is created per user flow that is triggered from an API route. This model makes sense for significant flows, but what about small operations like getting an order by id? A 'get-order-by-id' use-case is likely to have one line of code; it seems like unnecessary overhead to create a use-case file for every small request. In this case, consider aggregating multiple operations under a single conceptual use-case file. In the example below, all the order queries co-live in the query-orders use-case file:

// query-orders-use-cases.ts
export async function getOrder(id) {
// 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
const result = await orderRepository.getOrderByID(id);
return result;
}

export async function getAllOrders(criteria) {
// 🖼 This is a use case - the story of the flow. Only simple, flat and high-level code is allowed
const result = await orderRepository.queryOrders(criteria);
return result;
}

Closing: Easy to start, use everywhere

If you find it valuable, you'll also get a great return on a modest investment: no fancy tooling is needed, and the learning time is close to zero (in fact, you just read one of the longest articles on this matter...). There is also no need to refactor a whole system; you can implement it gradually, per feature.

Once you become accustomed to using it, you'll find that this technique extends well beyond API routes. It's equally beneficial for message queue subscriptions and scheduled jobs. Backend aside, use it as the facade of every module or library - the code that is called by the entry file and orchestrates the internals. The same idea can be applied in the frontend as well: declare the core actors at the component's top level. Without implementation details, just put references to the component's event handlers and hooks - now the reader knows about the key events that will drive this component.
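
For instance, a hedged sketch of the message-queue case, assuming a generic queue consumer API (the client and queue name are hypothetical):

// The queue subscriber stays as thin as an API controller and delegates to the use-case
messageQueueClient.consume("orders.new", async (message) => {
  await addOrderUseCase(JSON.parse(message.body)); // 👈 Same flow, different entry point
});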

You might think this all sounds remarkably straightforward—and it is. My apologies, this article wasn't about cutting-edge technologies; neither did it cover shiny new dev tooling or AI-based rocket science. In a land where complexity is the key enemy, simple ideas can be more impactful than sophisticated tooling, and the use-case is a powerful and sweet pattern that is meant to live in every piece of software.

· 13 min read
Yoni Goldberg

What's special about this article?

As a testing consultant, I have read tons of testing articles throughout the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I landed on an article that was shockingly good and could genuinely improve your test-writing skills. I've cherry-picked these outstanding articles for you and added my abstract alongside each. Half of these articles relate directly to JavaScript/Node.js; the other half covers ubiquitous testing concepts that are applicable in every language

Why did I find these articles to be outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing', not the commonly known 'TDD-ish' stuff but rather modern concepts and tooling

Too busy to read them all? Look for the articles decorated with a medal 🏅; these are true masterpieces that you don't want to miss

Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal

Here they are, 10 outstanding testing articles:


📄 1. 'Selective Unit Testing – Costs and Benefits'

✍️ Author: Steve Sanderson

🔖 Abstract: We have all found ourselves at least once in the ongoing and flammable discussion of 'unit' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context; "Always write unit tests against functions" and "Write mostly integration tests" are the kind of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:

If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any

The author also provides a 2x2 model to visualize when the attractiveness of unit tests is high or low

When unit shines

Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering the high-level user flows first (a.k.a. the testing diamond). Later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of unit tests in various scenarios

👓 Read time: 9 min (1850 words)

🔗 Link: https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/


📄 2. 'Testing implementation details' (JavaScript example)

✍️ Author: Kent C Dodds

🔖 Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Putting aside the effort of testing so many details, going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway is ubiquitous to any kind of testing

"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:

  1. Can break when you refactor application code. False negatives
  2. May not fail when you break application code. False positives"

p.s. This author has another outstanding post about a modern testing strategy; check out that one as well - 'Write tests. Not too many. Mostly integration'

👓 Read time: 13 min (2600 words)

🔗 Link: https://kentcdodds.com/blog/testing-implementation-details


📄 3. 'Testing Microservices, the sane way'

🏅 This is a masterpiece

✍️ Author: Cindy Sridharan

🔖 Abstract: This one is the entire Microservices and modern distributed testing bible packed into a single long article that is also super engaging. I remember when I came across it four years ago, wintertime; I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link and paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward years later, this is a major part of my work, and I enjoy every moment

This paper starts by explaining why E2E tests, unit tests, and exploratory QA fall short in a distributed environment - and not only this, why no kind of coded test alone is enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism

I coined the term “step-up testing”, the general idea being to test at one layer above what’s generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:

When unit shines

Beyond its main scope, whatever type of system you are dealing with, this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable

👓 Read time: > 2 hours (10,500 words with many links)

🔗 Link: https://copyconstruct.medium.com/testing-microservices-the-sane-way-9bb31d158c16


📄 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)

✍️ Author: Ryan Jones

🔖 Abstract: The one recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world

This tutorial was chosen over a handful of alternatives because it's well-written and relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn: the test anatomy syntax, the test runner CLI, assertions, and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)

👓 Read time: 16 min (3000 words)

🔗 Link: https://medium.com/serverlessguru/how-to-unit-test-with-nodejs-76967019ba56


📄 5. 'Unit test fetish'

✍️ Author: Martin Sústrik

🔖 Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests in principle; rather, it highlights when and where unit tests fall short and other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author comes with a sound analogy for this: 'If you are painting a house, you want to start with the biggest brush at hand and spare the tiny brush for the end, to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint the entire house using the finest Chinese calligraphy brush...'

👓 Read time: 5 min (1000 words)

🔗 Link: https://250bpm.com/blog:40/


📄 6. 'Mocking is a Code Smell' (JavaScript examples)

✍️ Author: Eric Elliott

🔖 Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests - not only because mocking is an overhead in test writing, but also because mocks hint that something might be wrong. In other words, mocking is not necessarily wrong and in need of an immediate fix, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:

"Mocking is required when our decomposition strategy has failed"

The author goes through a variety of techniques for designing more autonomous units, like using pure functions by isolating side effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and some more
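
To give a quick taste of the idea (this sketch is mine, not from the article): once the interesting logic is a pure function, the test needs no mocks at all, and the side effects stay in a thin shell. All names below are hypothetical:

// Pure function: no I/O, nothing to mock
export function applyDiscount(total: number, isPremium: boolean): number {
  return isPremium ? total * 0.9 : total;
}

// The thin, impure shell keeps the side effects (DB, payment) at the edges
export async function chargeOrder(orderId: string) {
  const order = await orderRepository.getById(orderId);
  const finalPrice = applyDiscount(order.total, order.customer.isPremium);
  await paymentGateway.charge(order.customer, finalPrice);
}

// The interesting logic is now testable without any test double
test("premium customers get 10% off", () => {
  expect(applyDiscount(100, true)).toBe(90);
});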

The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt

👓 Read time: 32 min (6,300 words)

🔗 Link: https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a


📄 7. 'Why Good Developers Write Bad Unit Tests'

🏅 This is a masterpiece

✍️ Author: Michael Lynch

🔖 Abstract: I love this one so much. The author shows how, unexpectedly, it is sometimes the good developers, with their great intentions, who write bad tests:

Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the “rules” they learned in production code without examining whether they’re appropriate for tests. As a result, they build skyscrapers at the beach

Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking, and how to keep it simple. In one part, he demonstrates how thoughtfully violating the DRY principle allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this on your team, you now know what to do

👓 Read time: 11 min (2,200 words)

🔗 Link: https://mtlynch.io/good-developers-bad-tests/


📄 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)

✍️ Author: Vitali Zaidman

🔖 Abstract: This paper is unique here as it doesn't cover a single topic; rather, it's a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, open-source visual regression tools might encourage you to dip your toes in that water, to name a few examples.

"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."

The author was also kind enough to list pros/cons next to most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites, and more

👓 Read time: 37 min (7,400 words)

🔗 Link: https://medium.com/welldone-software/an-overview-of-javascript-testing-7ce7298b9870


📄 9. Testing in Production, the safe way

✍️ Author: Cindy Sridharan

🔖 Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing on production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release, and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare, and more. More than anything else, it illustrates a holistic testing workflow that builds confidence cumulatively, from the developer's machine until the new version is serving users in production

I’m more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.

It’s still better than having nothing - but “works in staging” is only one step better than “works on my machine”.

Testing in production

👓 Read time: 54 min (10,725 words)

🔗 Link: https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1


📄 10. 'Please don't mock me' (JavaScript examples, from JSConf)

🏅 This is a masterpiece

✍️ Author: Justin Searls

🔖 Abstract: This fantastic YouTube talk deals with the Achilles heel of testing: where exactly to mock. The dilemma of where to end the test scope, what should be mocked and what shouldn't, is presumably the most strategic test design decision. Consider, for example, module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail within hours. In his talk, Justin says:

"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"

Then he goes on to tackle many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one
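
A hedged, hypothetical illustration of the trap described above (not from the talk itself; module and function names are made up):

// Module B changed its signature from getUser(id) to getUser({ id }),
// but module A still calls the old shape - and its test mocks B, so nothing fails
test("module A returns the user's name", async () => {
  const getUserMock = jest.fn().mockResolvedValue({ name: "Jane" }); // 👈 A frozen, outdated contract
  const result = await moduleA.describeUser("user-1", { getUser: getUserMock });
  expect(result).toBe("Jane"); // ✅ Passes forever, while production fails within hours
});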

👓 Watch time: 39 min

🔗 Link: https://www.youtube.com/watch?v=x8sKpJwq6lY&list=PL1CRgzydk3vzk5nMZNLTODfMartQQzInE&index=148


📄 Shameless plug: my articles

Here are a few articles that I wrote. Obviously, I don't 'recommend' my own craft, just modestly checking whether they appeal to you. Together, these articles gained 25,000 GitHub stars; maybe you'll find one of them useful?

🎁 Bonus: Some other great testing content

These articles are also great, some are highly popular:

p.s. Last reminder, less than 48 hours left for my online course 🎁 special launch offer

· 21 min read
Yoni Goldberg
Raz Luvaton

Where the dead-bodies are covered

This post is about tests that are easy to write, typically 5-8 lines, that cover dark and dangerous corners of our applications, but are often overlooked

Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests resemble production and the user flows 99% of the way, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js

But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness route, nor the integrity of the OpenAPI spec against the actual routes' schema, to name just a few examples. There are many dead bodies covered beyond business logic, things that sometimes go even beyond bugs and concern application downtime

The hidden corners

Here are a handful of examples that might open your mind to a whole new class of risks and tests

July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com

Test Examples

🧟‍♀️ The zombie process test

👉What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potentially catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect on the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, will keep forwarding traffic to it, and won't create alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it all work? Only a test can tell!

📝 Code

Code under test, api.js:

// A common express server initialization
const express = require('express');
const routes = require('./routes'); // the module that defines all routes
let expressApp;

const startWebServer = () => {
return new Promise((resolve, reject) => {
try {
// A typical Express setup
expressApp = express();
routes.defineRoutes(expressApp); // a function that defines all routes
expressApp.listen(process.env.WEB_SERVER_PORT, () => resolve(expressApp)); // resolve once listening
} catch (error) {
//log here, fire a metric, maybe even retry and finally:
process.exit();
}
});
};

The test:

const api = require('./entry-points/api'); // our api starter that exposes 'startWebServer' function
const routes = require('./routes'); // the routes module that gets stubbed below
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
// Arrange
const processExitListener = sinon.stub(process, 'exit');
// 👇 Choose a function that is part of the initialization phase and make it fail
sinon
.stub(routes, 'defineRoutes')
.throws(new Error('Cant initialize connection'));

// Act
await api.startWebServer();

// Assert
expect(processExitListener.called).toBe(true);
});

👀 The observability test

👉What & why - For many, testing errors means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause, and other properties. This info can save the day when dealing with production incidents. On top of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support these monitoring needs, the code must also fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger. Good enough? No! The ops user doesn't care about JavaScript class names but about the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:

📝 Code

test('When an exception is thrown during a request, then the logger reports the mandatory fields', async () => {
//Arrange
const orderToAdd = {
userId: 1,
productId: 2,
status: 'approved',
};
const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
sinon
.stub(OrderRepository.prototype, 'addOrder')
.rejects(new AppError('saving-failed', 'Order could not be saved', 500));
const loggerDouble = sinon.stub(logger, 'error');

//Act
await axiosAPIClient.post('/order', orderToAdd);

//Assert
expect(loggerDouble).toHaveBeenCalledWith({
name: 'saving-failed',
status: 500,
stack: expect.any(String),
message: expect.any(String),
});
expect(
metricsExporterDouble).toHaveBeenCalledWith('error', {
errorName: 'saving-failed',
})
});

👽 The 'unexpected visitor' test - when an uncaught exception meets our code

👉What & why - A typical error flow test falsely assumes two conditions: a valid error object was thrown, and it was caught. Neither is guaranteed; let's focus on the 2nd assumption: it's common for certain errors to be left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forgot to set someEventEmitter.on('error', ...), to name a few examples. These errors will find their way to the global process.on('uncaughtException') handler - hopefully, if your code subscribed to it. How do you simulate this scenario in a test? Naively, you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch-22: if you are familiar with such an area, you are likely to fix it and ensure its errors are caught. What do we do then? We can use to our benefit the fact that JavaScript is 'borderless': if some object can emit an event, we as its subscribers can make it emit this event ourselves. Here's an example:


📝 Code

test('When an unhandled exception is thrown, then process stays alive and the error is logged', async () => {
//Arrange
const loggerDouble = sinon.stub(logger, 'error');
const processExitListener = sinon.stub(process, 'exit');
const errorToThrow = new Error('An error that wont be caught 😳');

//Act
process.emit('uncaughtException', errorToThrow); //👈 Where the magic is

// Assert
expect(processExitListener.called).toBe(false);
expect(loggerDouble).toHaveBeenCalledWith(errorToThrow);
});

🕵🏼 The 'hidden effect' test - when the code should not mutate at all

👉What & so what - In common scenarios, the code under test should stop early, like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle for testing the HTTP response only - got back HTTP 400? Great, the validation/authorization probably works. Or does it? The test trusts the code too much; a valid response doesn't guarantee that the code behind it behaved as designed. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: if the DB is purged before/after every test, then just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later, hoping to get no records. This is how it looks:

📝 Code

it('When adding an invalid order, then it returns 400 and NOT retrievable', async () => {
//Arrange
const orderToAdd = {
userId: 1,
mode: 'draft',
externalIdentifier: uuid(), //no existing record has this value
};

//Act
const { status: addingHTTPStatus } = await axiosAPIClient.post(
'/order',
orderToAdd
);

//Assert
const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
`/order/externalIdentifier/${orderToAdd.externalIdentifier}`
); // Trying to get the order that should have failed
expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
addingHTTPStatus: 400,
fetchingHTTPStatus: 404,
});
// 👆 Check that no such record exists
});

🧨 The 'overdoing' test - when the code should mutate but it's doing too much

👉What & why - This is how a typical data-oriented test looks: first you add some records, then you approach the code under test, and finally you assert what happened to those specific records. So far, so good. There is one caveat here though: since the test narrows its focus to specific records, it ignores whether other records were unnecessarily affected. This can be really bad; here's a short real-life story that happened to one of my customers: some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All tests passed since they focused on a specific record that was correctly updated; they just ignored the others. How would you test and prevent this? Here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:

📝 Code

test('When deleting an existing order, Then it should NOT be retrievable', async () => {
// Arrange
const orderToDelete = {
userId: 1,
productId: 2,
};
const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
.id; // We will delete this soon
const orderNotToBeDeleted = orderToDelete;
const notDeletedOrder = (
await axiosAPIClient.post('/order', orderNotToBeDeleted)
).data.id; // We will not delete this

// Act
await axiosAPIClient.delete(`/order/${deletedOrder}`);

// Assert
const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
`/order/${deletedOrder}`
);
const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
`/order/${notDeletedOrder}`
);
expect(getNotDeletedOrderStatus).toBe(200);
expect(getDeletedOrderStatus).toBe(404);
});

🕰 The 'slow collaborator' test - when the other HTTP service times out

👉What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate other scenarios). This mandates using some mocking tool to act like the remote service, for example, tools like nock or wiremock. These tools are great, but some use them naively and check mainly that calls to the outside were indeed made. What if the other service is not available in production, or is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save such a transaction, your code should do its best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts, and other 'chaotic' scenarios. The question left is how to simulate a slow response without having slow tests. You may use fake timers and trick the system into believing that a few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, and nock will realize that the delay is higher than the HTTP client timeout and throw a timeout event immediately without waiting

📝 Code

// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
//Arrange
const clock = sinon.useFakeTimers();
config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
nock(`${config.userServiceURL}/user/`)
.get('/1', () => clock.tick(2000)) // Reply delay is bigger than configured timeout 👆
.reply(200);
const loggerDouble = sinon.stub(logger, 'error');
const orderToAdd = {
userId: 1,
productId: 2,
mode: 'approved',
};

//Act
// 👇try to add new order which should fail due to User service not available
const response = await axiosAPIClient.post('/order', orderToAdd);

//Assert
// 👇At least our code does its best given this situation
expect(response.status).toBe(503);
expect(loggerDouble.lastCall.firstArg).toMatchObject({
name: 'user-service-not-available',
stack: expect.any(String),
message: expect.any(String),
});
});
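
And a hedged sketch of the second option mentioned above, using nock's .delay (the exact timeout numbers are assumptions):

test('When users service times out, then return 503 (option 2 with nock delay)', async () => {
  //Arrange
  config.HTTPCallTimeout = 1000; // Outgoing HTTP calls give up after one second
  nock(`${config.userServiceURL}/user/`)
    .get('/1')
    .delay(2000) // 👈 Longer than the client timeout, so the request fails fast with a timeout
    .reply(200);
  const orderToAdd = { userId: 1, productId: 2, mode: 'approved' };

  //Act
  const response = await axiosAPIClient.post('/order', orderToAdd);

  //Assert
  expect(response.status).toBe(503);
});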

💊 The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation

👉What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and approach the logic layer directly. Yes, it makes things easier but leaves a class of risks uncovered. For example, what if the logic part throws an error or the message schema is invalid, but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depending on the type of queue that you're using). When this happens, the message enters a loop where it is served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason, this syndrome is called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers, like you probably do when testing against APIs. Unfortunately, this is not as easy as testing with a DB because message queues are flaky; here is why

When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge a queue), to name a few challenges that you won't find when dealing with a real DB

Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can easily code one yourself. No worries, I'm not in favour of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. With this setup, the test becomes short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch rejects, duplicated messages, and, in our example, the poisoned message scenario (using RabbitMQ):

📝 Code

  1. Create a fake message queue that does almost nothing but record calls, see full example here
class FakeMessageQueueProvider extends EventEmitter {
// Implement here

publish(message) {}

consume(queueName, callback) {}
}
  2. Make your message queue client accept a real or fake provider
class MessageQueueClient extends EventEmitter {
// Pass to it a fake or real message queue
constructor(customMessageQueueProvider) {}

publish(message) {}

consume(queueName, callback) {}

// Simple implementation can be found here:
// https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
  3. Expose a convenient function that tells when certain calls were made
class MessageQueueClient extends EventEmitter {
publish(message) {}

consume(queueName, callback) {}

// 👇
waitForEvent(eventName: 'publish' | 'consume' | 'acknowledge' | 'reject', howManyTimes: number) : Promise
}
  4. The test is now short, flat and expressive 👇
const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
// Arrange
const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
const messageQueueClient = new MessageQueueClient(
new FakeMessageQueueProvider()
);
// Subscribe to new messages and passing the handler function
messageQueueClient.consume('orders.new', newOrderService.addOrder);

// Act
await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
// Now all the layers of the app will get stretched 👆, including logic and message queue libraries

// Assert
await messageQueueClient.waitFor('reject', { howManyTimes: 1 });
// 👆 This tells us that eventually our code asked the message queue client to reject this poisoned message
});

📝Full code example - is here
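For readers who want a sense of what those ~50 lines might look like, here is a simplified, hypothetical sketch of both classes in one file. The linked repository mimics the real RabbitMQ/amqplib channel API instead, so treat the names and event flow below as illustrative only:

const { EventEmitter } = require('events');

// A minimal fake provider: no broker, no network. It stores the consumer's handler and,
// when a message is published, invokes it and reports the outcome as an event
class FakeMessageQueueProvider extends EventEmitter {
  async consume(queueName, onMessageCallback) {
    this.onMessageCallback = onMessageCallback;
    this.emit('consume', { queueName });
  }

  async publish(queueName, message) {
    this.emit('publish', { queueName, message });
    if (!this.onMessageCallback) return;
    // Deliver asynchronously (like a real broker would), so the test can register
    // its waitFor listener before the acknowledge/reject event fires
    setImmediate(async () => {
      try {
        await this.onMessageCallback(message);
        this.emit('acknowledge', { queueName, message });
      } catch (error) {
        // The handler threw (e.g., invalid schema) - the poisoned-message test awaits this
        this.emit('reject', { queueName, message });
      }
    });
  }
}

// The client delegates to whichever provider it received (real or fake) and re-emits
// its events, so the tests can await them via waitFor
class MessageQueueClient extends EventEmitter {
  constructor(messageQueueProvider) {
    super();
    this.provider = messageQueueProvider;
    ['publish', 'consume', 'acknowledge', 'reject'].forEach((eventName) =>
      this.provider.on(eventName, (eventData) => this.emit(eventName, eventData))
    );
  }

  publish(queueName, message) {
    return this.provider.publish(queueName, message);
  }

  consume(queueName, onMessageCallback) {
    return this.provider.consume(queueName, onMessageCallback);
  }

  // Resolves once the given event was emitted 'howManyTimes' times
  waitFor(eventName, { howManyTimes = 1 } = {}) {
    return new Promise((resolve) => {
      let seenSoFar = 0;
      const onEvent = (eventData) => {
        seenSoFar += 1;
        if (seenSoFar >= howManyTimes) {
          this.removeListener(eventName, onEvent);
          resolve(eventData);
        }
      };
      this.on(eventName, onEvent);
    });
  }
}

module.exports = { FakeMessageQueueProvider, MessageQueueClient };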

📦 Test the package as a consumer

👉What & why - When publishing a library to npm, all your tests might easily pass BUT... the same functionality will fail on the end-user's computer. How come? Tests are executed against the local developer files, but the end-user is only exposed to the artifacts that were built. See the mismatch here? After running the tests, the package files are transpiled (I'm looking at you, Babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files

📝 Code

Consider the following scenario: you're developing a library, and you wrote this code:

// index.js
export * from './calculate.js';

// calculate.js 👈
export function calculate() {
  return 1;
}

Then some tests:

import { calculate } from './index.js';

test('should return 1', () => {
  expect(calculate()).toBe(1);
});

All tests pass 🎊

Finally configure the package.json:

{
  // ....
  "files": [
    "index.js"
  ]
}

See, 100% coverage, all tests pass locally and in the CI ✅ - it just won't work in production 👹. Why? Because you forgot to include calculate.js in the package.json files array 👆

What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like Verdaccio, let the tests install it and run against the published code. Sounds troublesome? Judge for yourself 👇

📝 Code

// global-setup.js
// ('setupVerdaccio', 'exec', 'packagePath' and 'consumerPath' are helpers/variables from the full example linked below)

// 1. Setup the in-memory NPM registry, one function and that's it! 🔥
await setupVerdaccio();

// 2. Build our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Install it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath

// 5. Test the package 🚀
test("should succeed", async () => {
  const { fn1 } = await import('my-package');

  expect(fn1()).toEqual(1);
});

📝Full code example - is here
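One wiring detail worth noting: the snippet above assumes a hook that runs once before the whole suite. With jest, for example, this is done via the globalSetup option (a minimal sketch; the file name below is an assumption matching the comment above):

// jest.config.js
module.exports = {
  // Runs once before all test files: starts Verdaccio, builds, publishes and installs the package
  globalSetup: './global-setup.js',
};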

What else this technique can be useful for?

  • Testing the different peer-dependency versions you support - let's say your package supports React 16 to 18, you can now test that
  • You want to test ESM and CJS consumers
  • If you have a CLI application, you can test it like your users do
  • Making sure all the voodoo magic in that Babel file is working as expected

🗞 The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug

👉What & so what - I'm quite confident that almost no team tests their OpenAPI correctness. "It's just documentation", "we generate it automatically based on the code" are the typical beliefs behind this. Let me show you how this auto-generated documentation can be wrong and lead not only to frustration but also to a bug. In production.

Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws - this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, a production bug -> the client crashes and shows an ugly, unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).

The following sweet technique is based on libraries (for jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked off. It's a pity that these libs can't also assert against the incoming requests, to tell you that your tests use the API wrong. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports plugins/hooks/interceptors, by putting this assertion in a single place that will apply to all the tests:

📝 Code

Code under test, an API that throws a new error status

if (doesOrderCouponAlreadyExist) {
  throw new AppError('duplicated-coupon', { httpStatus: 409 });
}

The OpenAPI doesn't document HTTP status '409' - no framework knows to update the OpenAPI doc based on thrown exceptions

"responses": {
"200": {
"description": "successful",
}
,
"400": {
"description": "Invalid ID",
"content": {}
},// No 409 in this list😲👈
}

The test code

const path = require('path');
const { v4: uuid } = require('uuid');
const jestOpenAPI = require('jest-openapi');
jestOpenAPI(path.join(__dirname, '../openapi.json')); // jest-openapi expects an absolute path

test('When an order with a duplicated coupon is added, then a 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  // 'axiosAPIClient' is the preconfigured HTTP client from the test setup
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice 👇
  const receivedResponse = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This 👆 will throw if the API response, body or status, differs from what is stated in the OpenAPI
});

Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches

beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this 👆, add nothing to the tests - each will fail if the response deviates from the docs
    return response;
  });
});

Even more ideas

  • Test readiness and health routes
  • Test message queue connection failures
  • Test JWT and JWKS failures
  • Test security-related things like CSRF tokens
  • Test your HTTP client retry mechanism (very easy with nock - see the sketch after this list)
  • Test that the DB migration succeeds and that the new code can work with old record formats
  • Test DB connection disconnects
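To illustrate the retry idea above, here is a minimal, hypothetical sketch using nock with jest - 'orderClient' is an imaginary module under test that retries failed calls, not a library from the article:

const nock = require('nock');
const orderClient = require('../libs/order-client'); // hypothetical client with a built-in retry

test('When the first call fails with 503, then the client retries and succeeds', async () => {
  // Arrange - each nock interceptor is consumed once: the first reply fails, the second succeeds
  nock('http://orders.service')
    .get('/orders/1')
    .reply(503)
    .get('/orders/1')
    .reply(200, { id: 1, status: 'approved' });

  // Act
  const receivedOrder = await orderClient.getOrder(1);

  // Assert - the retry kicked in and the caller got the successful response
  expect(receivedOrder).toMatchObject({ id: 1, status: 'approved' });
  expect(nock.isDone()).toBe(true); // both interceptors were consumed -> two calls were made
});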

It's not just a list of ideas, it's a whole new mindset

The examples above were not meant to be merely a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler, all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early, on your machine. I like calling this 'production-oriented development'

My new online testing course - If you're intrigued by beyond-the-basics testing patterns, consider my online course which was just launched and is 🎁 on sale for 30 days (July 2023)

· 2 min read
Yoni Goldberg
Raz Luvaton
Daniel Gluskin
Michael Salomon

Where is our focus now?

We work in two parallel paths: enriching the supported best practices to make the code more production-ready, and at the same time enhancing the existing code based on community feedback

What's new?

Request-level store

Every request now has its own store of variables; you may assign information at the request level so that every piece of code called from this specific request has access to these variables - for example, for storing the user permissions. One special variable that is stored is the 'request-id', a unique UUID per request (also called correlation-id). The logger automatically emits it with every log entry. We use the built-in AsyncLocalStorage for this task
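For illustration, here is a rough sketch of how such a request-level store can be wired with Node's built-in AsyncLocalStorage (the names and the Express middleware below are illustrative, not the exact Practica code):

const express = require('express');
const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const requestContext = new AsyncLocalStorage();
const app = express();

// Open a dedicated store per incoming request; everything called from 'next' sees it
app.use((req, res, next) => {
  const store = new Map();
  store.set('requestId', req.headers['x-request-id'] || randomUUID());
  requestContext.run(store, next);
});

// Deep inside the code, no need to pass the request object around
app.get('/order', (req, res) => {
  const requestId = requestContext.getStore()?.get('requestId');
  console.log(`Handling the order flow, request-id: ${requestId}`); // a real logger would append this automatically
  res.json({ ok: true });
});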

Hardened .dockerfile

Although a Dockerfile may contain only 10 lines, it's easy and common to include 20 mistakes in this short artifact. For example, .npmrc secrets are commonly leaked, vulnerable base images are used, and other typical mistakes are made. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines

Additional ORM option: Prisma

Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice using the flag: --orm=prisma

Why did we add it to our tools basket, and why is Sequelize still the default? We summarized all of our thoughts and data in this blog post

Many small enhancements

More than 10 PRs were merged with CLI experience improvements, bug fixes, code pattern enhancements and more

Where do I start?

Definitely follow the getting started guide first, and then read the 'coding with Practica' guide to realize its full power and genuine value. We will be thankful to receive your feedback

· 24 min read
Yoni Goldberg

Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?

Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?

Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his wrist. He smiles and waves all around to say hello while people nearby stare admirably. You get a little closer, and then, shockingly, it's hard to ignore a bold, dark stain on his white shirt. What a dissonance - suddenly all of that glamour is stained

Suit with stain

Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior offerings. One of these areas is the ORM space: "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are words commonly heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the resulting developer experience and the level of maintenance just don't feel delightful - some may even say mediocre. At least so I believed before writing this article...

From time to time, a shiny new ORM is launched, and there is hope. Then it's soon realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived, surrounded by glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectable frameworks and... it raised $40,000,000 (40 million) to build the next-generation ORM. Is it the 'Ferrari' of ORMs we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?

In Practica.js (the Node.js starter based off Node.js best practices, with 83,000 stars), we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox

This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Simply because they are not as popular yet, and maturity is a critical trait of ORMs

Ready to explore how good Prisma is and whether you should throw away your current tools?

· 22 min read
Yoni Goldberg

Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?

In his book 'Atomic Habits', the author James Clear states that:

"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst

We copy-paste, mentally and physically, things that we are used to, but these things are not necessarily right anymore. Like animals that shed their shells or skin to adapt to a new reality, the Node.js community should constantly gauge its existing patterns, discuss them and change

Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.

Are those disruptive thoughts necessarily correct? I'm not sure. There is one thing I'm sure about though - for Node.js to live longer, we need to encourage critique, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit

Animals and frameworks shed their skin

The True Crab's exoskeleton is hard and inflexible; it must shed its restrictive exoskeleton to grow and reveal a new, roomier shell

TOC - Patterns to reconsider

  1. Dotenv
  2. Calling a service from a controller
  3. Nest.js dependency injection for all classes
  4. Passport.js
  5. Supertest
  6. Fastify utility decoration
  7. Logging from a catch clause
  8. Morgan logger
  9. NODE_ENV

· 2 min read
Yoni Goldberg

🥳 We're thrilled to launch the very first version of Practica.js.

What is Practica in one paragraph

Although Node.js has great frameworks 💚, they were never meant to be production-ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate example code that demonstrates a full workflow, from API to DB, packed with good practices. For example, we include a hardened Dockerfile, an N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All the decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work on the popular guide: Node.js Best Practices.

Your developer experience would look as follows: generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also comes with tons of optimizations - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.

90-second video

How to get started

To get up to speed quickly, read our getting started guide.