
6 posts tagged with "node.js"


· 13 min read
Yoni Goldberg

What's special about this article?​

As a testing consultant, I have read tons of testing articles over the years. The majority are nice-to-read, casual pieces of content that are not always worth your precious time. Once in a while, not very often, I land on an article that is shockingly good and could genuinely improve your test writing skills. I've cherry-picked these outstanding articles for you and added my abstract alongside each. Half of these articles relate directly to JavaScript/Node.js; the other half covers ubiquitous testing concepts that are applicable in every language

Why did I find these articles outstanding? First, the writing quality is excellent. Second, they deal with the 'new world of testing': not the commonly known 'TDD-ish' stuff, but rather modern concepts and tooling

Too busy to read them all? Look for the articles decorated with a medal πŸ… - these are true masterpieces that you don't want to miss

Before we start: If you haven't heard, I launched my comprehensive Node.js testing course a week ago (curriculum here). There are less than 48 hours left for the 🎁 special launch deal

Here they are, 10 outstanding testing articles:


πŸ“„ 1. 'Selective Unit Testing – Costs and Benefits'

✍️ Author: Steve Sanderson

πŸ”– Abstract: We have all found ourselves at least once in the ongoing and flammable discussion about 'units' vs 'integration'. This article delves into a greater level of specificity and discusses WHEN unit tests shine by considering the costs of writing these tests under various scenarios. Many treat their testing strategy as a static model - a testing technique they always apply regardless of the context. "Always write unit tests against functions" and "Write mostly integration tests" are the kind of arguments often heard. Conversely, this article suggests that the attractiveness of unit tests should be evaluated based on the costs and benefits per module. The article classifies multiple scenarios where the net value of unit tests is high or low, for example:

If your code is basically obvious – so at a glance you can see exactly what it does – then additional design and verification (e.g., through unit testing) yields extremely minimal benefit, if any

The author also presents a 2x2 model to visualize when the attractiveness of unit tests is high or low

When unit shines

Side note, not part of the article: Personally, I (Yoni) always start with component tests, outside-in, covering the high-level user flows first (a.k.a. the testing diamond). Later, once I have functions, I add unit tests based on their net value. This article helped me a lot in classifying and evaluating the benefits of unit tests in various scenarios

πŸ‘“ Read time: 9 min (1850 words)

πŸ”— Link: https://blog.stevensanderson.com/2009/11/04/selective-unit-testing-costs-and-benefits/


πŸ“„ 2. 'Testing implementation details' (JavaScript example)​

✍️ Author: Kent C Dodds

πŸ”– Abstract: The author outlines, with a code example, the unavoidable tragic fate of a tester who asserts on implementation details. Put aside the effort of testing so many details - going this route always ends with 'false positives' and 'false negatives' that cloud the tests' reliability. The article illustrates this with a frontend code example, but the takeaway is ubiquitous to any kind of testing

"There are two distinct reasons that it's important to avoid testing implementation details. Tests which test implementation details:

  1. Can break when you refactor application code. False negatives
  2. May not fail when you break application code. False positives"

p.s. This author has another outstanding post about a modern testing strategy; check out 'Write tests. Not too many. Mostly integration' as well

πŸ‘“ Read time: 13 min (2600 words)

πŸ”— Link: https://kentcdodds.com/blog/testing-implementation-details


πŸ“„ 3. 'Testing Microservices, the sane way'​

πŸ… This is a masterpiece

✍️ Author: Cindy Sridharan

πŸ”– Abstract: This one is the entire Microservices and distributed modern testing bible, packed into a single long article that is also super engaging. I remember when I came across it four years ago, winter time: I spent an hour every day under my blanket before sleep with a smile spread over my face. I clicked on every link and paused after every paragraph to think - a whole new world was opening in front of me. In fact, it was so fascinating that it made me want to specialize in this domain. Fast forward, years later, this is a major part of my work and I enjoy every moment

This paper starts by explaining why E2E tests, unit tests and exploratory QA fall short in a distributed environment - and beyond that, why any single kind of coded test won't be enough and a rich toolbox of techniques is needed. It goes through a handful of modern testing techniques that are unfamiliar to most developers. One of its key parts deals with what should be the canonical developer's testing technique: the author advocates for "big unit tests" (i.e., component tests) as they strike a great balance between developer comfort and realism

I coined the term "step-up testing", the general idea being to test at one layer above what's generally advocated for. Under this model, unit tests would look more like integration tests (by treating I/O as a part of the unit under test within a bounded context), integration testing would look more like testing against real production, and testing in production looks more like, well, monitoring and exploration. The restructured test pyramid (test funnel?) for distributed systems would look like the following:

The restructured test pyramid (test funnel) for distributed systems

Beyond its main scope, whatever type of system you are dealing with, this article will broaden your perspective on testing and expose you to many new ideas that are highly applicable

πŸ‘“ Read time: > 2 hours (10,500 words with many links)

πŸ”— Link: https://copyconstruct.medium.com/testing-microservices-the-sane-way-9bb31d158c16


πŸ“„ 4. 'How to Unit Test with Node.js?' (JavaScript examples, for beginners)​

✍️ Author: Ryan Jones

πŸ”– Abstract: One single recommendation for beginners: every other article on this list covers advanced testing. This article, and only this one, is meant for testing newbies who are looking to take their first practical steps in this world

This tutorial was chosen from a handful of alternatives because it's well-written and also relatively comprehensive. It covers the first-steps 'kata' that a beginner should learn about first: the test anatomy syntax, the test runner's CLI, assertions and asynchronous tests. It goes without saying that this knowledge won't be sufficient for covering a real-world app with tests, but it gets you safely to the next phase. My personal advice: after reading this one, your next step is learning about test doubles (mocking)
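
For orientation only, here is the kind of 'first steps' content such a tutorial covers - a minimal sketch using Jest, with a hypothetical module under test (not code taken from the article itself):

// Basic test anatomy: a describe block, a synchronous Arrange-Act-Assert test, and an async test
const { add, fetchGreeting } = require('./math-utils'); // hypothetical module under test

describe('add', () => {
  test('returns the sum of two numbers', () => {
    // Arrange
    const first = 2;
    const second = 3;

    // Act
    const result = add(first, second);

    // Assert
    expect(result).toBe(5);
  });
});

test('resolves with a greeting (asynchronous test)', async () => {
  await expect(fetchGreeting('Yoni')).resolves.toBe('Hello Yoni');
});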

πŸ‘“ Read time: 16 min (3000 words)

πŸ”— Link: https://medium.com/serverlessguru/how-to-unit-test-with-nodejs-76967019ba56


πŸ“„ 5. 'Unit test fetish'​

✍️ Author: Martin Sústrik

πŸ”– Abstract: The article opens with 'I hear that people feel an uncontrollable urge to write unit tests nowadays. If you are one of those affected, spare a few minutes and consider these reasons for NOT writing unit tests'. Despite these words, the article is not against unit tests as a principle; it rather highlights when and where unit tests fall short. In these cases, other techniques should be considered. Here is an example: unit tests inherently have a lower return on investment, and the author offers a sound analogy for this: 'If you are painting a house, you want to start with a biggest brush at hand and spare the tiny brush for the end to deal with fine details. If you begin your QA work with unit tests, you are essentially trying to paint entire house using the finest Chinese calligraphy brush...'

πŸ‘“ Read time: 5 min (1000 words)

πŸ”— Link: https://250bpm.com/blog:40/


πŸ“„ 6. 'Mocking is a Code Smell' (JavaScript examples)​

✍️ Author: Eric Elliott

πŸ”– Abstract: Most of the articles here belong to the 'modern wave of testing'; here is something more 'classic', appealing to TDD lovers or just anyone who needs to write unit tests. This article is about HOW to reduce the amount of mocking (test doubles) in your tests. Not only because mocking is an overhead in test writing, but also because mocks hint that something might be wrong. In other words, mocking is not necessarily wrong and something to fix right away, but a lot of mocking is a sign of something not ideal. Consider a module that inherits from many others, or a chatty one that collaborates with a handful of other modules to do its job - testing and changing this structure is a burden:

"Mocking is required when our decomposition strategy has failed"

The author goes through a variety of techniques to design more autonomous units, like using pure functions by isolating side-effects from the rest of the program logic, using pub/sub, isolating I/O, composing units with patterns like monadic composition, and some more
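
To make the 'isolate side-effects' idea concrete, here is a tiny hedged sketch of my own (not taken from the article): the calculation is a pure function that needs no mocks, and the I/O is pushed to a thin outer layer:

// Pure logic - trivially testable with zero mocking
function calculateOrderTotal(items, taxRate) {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0);
  return subtotal * (1 + taxRate);
}

// Impure shell - the only place that touches I/O (hypothetical repository object)
async function saveOrder(orderRepository, items, taxRate) {
  const total = calculateOrderTotal(items, taxRate);
  return orderRepository.save({ items, total });
}

// The unit test of the logic needs no test doubles at all
test('When calculating an order total, then tax is included', () => {
  expect(calculateOrderTotal([{ price: 100, quantity: 2 }], 0.1)).toBeCloseTo(220);
});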

The overall article tone is balanced. In some parts, it encourages functional programming and techniques that are far from the mainstream - consider reading these few parts with a grain of salt

πŸ‘“ Read time: 32 min (6,300 words)

πŸ”— Link: https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a


πŸ“„ 7. 'Why Good Developers Write Bad Unit Tests'​

πŸ… This is a masterpiece

✍️ Author: Michael Lynch

πŸ”– Abstract: I love this one so much. The author shows how, unexpectedly, it is sometimes the good developers with their great intentions who write bad tests:

Too often, software developers approach unit testing with the same flawed thinking... They mechanically apply all the "rules" they learned in production code without examining whether they're appropriate for tests. As a result, they build skyscrapers at the beach

Concrete code examples show how test readability deteriorates once we apply 'skyscraper' thinking and how to keep it simple. In one part, he demonstrates how thoughtfully violating the DRY principle allows the reader to stay within the test while still keeping the code maintainable. This article alone, in 11 minutes, can greatly improve the tests of developers who tend to write sophisticated tests. If you have someone like this in your team, you now know what to do

πŸ‘“ Read time: 11 min (2,200 words)

πŸ”— Link: https://mtlynch.io/good-developers-bad-tests/


πŸ“„ 8. 'An Overview of JavaScript Testing in 2022' (JavaScript examples)​

✍️ Author: Vitali Zaidman

πŸ”– Abstract: This paper is unique here as it doesn't cover a single topic but is rather a rundown of (almost) all JavaScript testing tools. This allows you to enrich the toolbox in your mind and have more screwdrivers for more types of screws. For example, knowing that there are IDE extensions that show coverage information right within the code might help you boost test adoption in the team, if needed. Knowing that there are solid, free, open-source visual regression tools might encourage you to dip your toes in this water, to name a few examples.

"We reviewed the most trending testing strategies and tools in the web development community and hopefully made it easier for you to test your sites. In the end, the best decisions regarding application architecture today are made by understanding general patterns that are trending in the very active community of developers, and combining them with your own experience and the characteristics of your application."

The author was also kind enough to include pros/cons alongside most tools so the reader can quickly get a sense of how the various options stack up against each other. The article covers categories like assertion libraries, test runners, code coverage tools, visual regression tools, E2E suites and more

πŸ‘“ Read time: 37 min (7,400 words)

πŸ”— Link: https://medium.com/welldone-software/an-overview-of-javascript-testing-7ce7298b9870


πŸ“„ 9. 'Testing in Production, the safe way'

✍️ Author: Cindy Sridharan

πŸ”– Abstract: 'Testing in production' is a provocative term that sounds like a risky and careless approach of testing on production instead of verifying the delivery beforehand (yet another case of bad testing terminology). In practice, testing in production doesn't replace coding-time testing; it just adds an additional layer of confidence by safely testing in 3 more phases: deployment, release and post-release. This comprehensive article covers dozens of techniques, some unusual, like traffic shadowing, tap compare and more. More than anything else, it illustrates a holistic testing workflow, building confidence cumulatively from the developer machine until the new version is serving users in production

I'm more and more convinced that staging environments are like mocks - at best a pale imitation of the genuine article and the worst form of confirmation bias.

It's still better than having nothing - but "works in staging" is only one step better than "works on my machine".

Testing in production

πŸ‘“ Read time: 54 min (10,725 words)

πŸ”— Link: https://copyconstruct.medium.com/testing-in-production-the-safe-way-18ca102d0ef1


πŸ“„ 10. 'Please don't mock me' (JavaScript examples, from JSConf)​

πŸ… This is a masterpiece

✍️ Author: Justin Searls

πŸ”– Abstract: This fantastic YouTube talk deals with the Achilles' heel of testing: where exactly to mock. The dilemma of where to end the test scope - what should be mocked and what shouldn't - is presumably the most strategic test design decision. Consider, for example, module A which interacts with module B. If you isolate A by mocking B, A's tests will always pass, even when B's interface has changed and A's code didn't follow. This makes A's tests highly stable but... production will fail within hours. In his talk, Justin says:

"A test that never fails is a bad test because it doesn't tell you anything. Design tests to fail"

Then he goes on to tackle many other interesting mocking crossroads, with beautiful visuals and tons of insights. Please don't miss this one

πŸ‘“ Watch time: 39 min

πŸ”— Link: https://www.youtube.com/watch?v=x8sKpJwq6lY&list=PL1CRgzydk3vzk5nMZNLTODfMartQQzInE&index=148


πŸ“„ Shameless plug: my articles​

Here are a few articles that I wrote. Obviously, I don't 'recommend' my own craft; I'm just modestly checking whether they appeal to you. Together, these articles gained 25,000 GitHub stars - maybe you'll find one of them useful?

🎁 Bonus: Some other great testing content​

These articles are also great, some are highly popular:

p.s. Last reminder: less than 48 hours are left for my online course's 🎁 special launch offer

· 21 min read
Yoni Goldberg
Raz Luvaton

Where the dead-bodies are covered​

This post is about tests that are easy to write, typically 5-8 lines, that cover dark and dangerous corners of our applications, but are often overlooked

Some context first: How do we test a modern backend? With the testing diamond, of course, by putting the focus on component/integration tests that cover all the layers, including a real DB. With this approach, our tests resemble production and the user flows 99% of the way, while the development experience is almost as good as with unit tests. Sweet. If this topic is of interest, we've also written a guide with 50 best practices for integration tests in Node.js

But there is a pitfall: most developers write only semi-happy test cases that are focused on the core user flows, like invalid inputs, CRUD operations, various application states, etc. This is indeed the bread and butter, a great start, but a whole area is left uncovered. For example, typical tests don't simulate an unhandled promise rejection that leads to a process crash, nor do they simulate the webserver bootstrap phase that might fail and leave the process idle, or HTTP calls to external services that often end with timeouts and retries. They typically don't cover the health and readiness routes, nor the integrity of the OpenAPI spec against the actual routes schema, to name just a few examples. There are many dead bodies buried beyond the business logic - things that sometimes are not even bugs but rather concern application downtime

The hidden corners

Here are a handful of examples that might open your mind to a whole new class of risks and tests

July 2023: My testing course was launched: I've just released a comprehensive testing course that I've been working on for two years. 🎁 It's now on sale, but only for the month of July. Check it out at testjavascript.com

Test Examples​

πŸ§Ÿβ€β™€οΈ The zombie process test​

πŸ‘‰What & so what? - In all of your tests, you assume that the app has already started successfully, lacking a test against the initialization flow. This is a pity because this phase hides some potential catastrophic failures: First, initialization failures are frequent - many bad things can happen here, like a DB connection failure or a new version that crashes during deployment. For this reason, runtime platforms (like Kubernetes and others) encourage components to signal when they are ready (see readiness probe). Errors at this stage also have a dramatic effect over the app health - if the initialization fails and the process stays alive, it becomes a 'zombie process'. In this scenario, the runtime platform won't realize that something went bad, forward traffic to it and avoid creating alternative instances. Besides exiting gracefully, you may want to consider logging, firing a metric, and adjusting your /readiness route. Does it work? only test can tell!

πŸ“ Code

Code under test, api.js:

// A common Express server initialization
const express = require('express');
const routes = require('./routes'); // illustrative path - the module that exposes defineRoutes()

let expressApp;

const startWebServer = () => {
  return new Promise((resolve, reject) => {
    try {
      // A typical Express setup
      expressApp = express();
      routes.defineRoutes(expressApp); // a function that defines all routes
      const connection = expressApp.listen(process.env.WEB_SERVER_PORT, () => {
        resolve(connection.address()); // Signal readiness only once the server is actually listening
      });
    } catch (error) {
      // log here, fire a metric, maybe even retry and finally:
      process.exit();
      reject(error); // reached only when process.exit is stubbed (e.g., in tests)
    }
  });
};

module.exports = { startWebServer };

The test:

const api = require('./entry-points/api'); // our api starter that exposes the 'startWebServer' function
const routes = require('./routes'); // illustrative path - the same routes module that api.js uses
const sinon = require('sinon'); // a mocking library

test('When an error happens during the startup phase, then the process exits', async () => {
  // Arrange
  const processExitListener = sinon.stub(process, 'exit');
  // πŸ‘‡ Choose a function that is part of the initialization phase and make it fail
  sinon
    .stub(routes, 'defineRoutes')
    .throws(new Error('Cant initialize connection'));

  // Act
  await api.startWebServer().catch(() => {}); // the startup failure is expected here

  // Assert
  expect(processExitListener.called).toBe(true);
});

πŸ‘€ The observability test​

πŸ‘‰What & why - For many, testing error means checking the exception type or the API response. This leaves one of the most essential parts uncovered - making the error correctly observable. In plain words, ensuring that it's being logged correctly and exposed to the monitoring system. It might sound like an internal thing, implementation testing, but actually, it goes directly to a user. Yes, not the end-user, but rather another important one - the ops user who is on-call. What are the expectations of this user? At the very basic level, when a production issue arises, she must see detailed log entries, including stack trace, cause and other properties. This info can save the day when dealing with production incidents. On to of this, in many systems, monitoring is managed separately to conclude about the overall system state using cumulative heuristics (e.g., an increase in the number of errors over the last 3 hours). To support this monitoring needs, the code also must fire error metrics. Even tests that do try to cover these needs take a naive approach by checking that the logger function was called - but hey, does it include the right data? Some write better tests that check the error type that was passed to the logger, good enough? No! The ops user doesn't care about the JavaScript class names but the JSON data that is sent out. The following test focuses on the specific properties that are being made observable:

πŸ“ Code

test('When an exception is thrown during a request, then the logger reports the mandatory fields', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    status: 'approved',
  };
  const metricsExporterDouble = sinon.stub(metricsExporter, 'fireMetric');
  sinon
    .stub(OrderRepository.prototype, 'addOrder')
    .rejects(new AppError('saving-failed', 'Order could not be saved', 500));
  const loggerDouble = sinon.stub(logger, 'error');

  // Act
  await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(loggerDouble.lastCall.firstArg).toMatchObject({
    name: 'saving-failed',
    status: 500,
    stack: expect.any(String),
    message: expect.any(String),
  });
  expect(
    metricsExporterDouble.calledWith('error', { errorName: 'saving-failed' })
  ).toBe(true);
});

πŸ‘½ The 'unexpected visitor' test - when an uncaught exception meets our code​

πŸ‘‰What & why - A typical error flow test falsely assumes two conditions: A valid error object was thrown, and it was caught. Neither is guaranteed, let's focus on the 2nd assumption: it's common for certain errors to left uncaught. The error might get thrown before your framework error handler is ready, some npm libraries can throw surprisingly from different stacks using timer functions, or you just forget to set someEventEmitter.on('error', ...). To name a few examples. These errors will find their way to the global process.on('uncaughtException') handler, hopefully if your code subscribed. How do you simulate this scenario in a test? naively you may locate a code area that is not wrapped with try-catch and stub it to throw during the test. But here's a catch22: if you are familiar with such area - you are likely to fix it and ensure its errors are caught. What do we do then? we can bring to our benefit the fact the JavaScript is 'borderless', if some object can emit an event, we as its subscribers can make it emit this event ourselves, here's an example:


πŸ“ Code

test('When an unhandled exception is thrown, then the process stays alive and the error is logged', async () => {
  // Arrange
  const loggerDouble = sinon.stub(logger, 'error');
  const processExitListener = sinon.stub(process, 'exit');
  const errorToThrow = new Error('An error that wont be caught 😳');

  // Act
  process.emit('uncaughtException', errorToThrow); // πŸ‘ˆ Where the magic is

  // Assert
  expect(processExitListener.called).toBe(false);
  expect(loggerDouble.lastCall.firstArg).toBe(errorToThrow);
});

πŸ•΅πŸΌ The 'hidden effect' test - when the code should not mutate at all​

πŸ‘‰What & so what - In common scenarios, the code under test should stop early like when the incoming payload is invalid or a user doesn't have sufficient credits to perform an operation. In these cases, no DB records should be mutated. Most tests out there in the wild settle with testing the HTTP response only - got back HTTP 400? great, the validation/authorization probably work. Or does it? The test trusts the code too much, a valid response doesn't guarantee that the code behind behaved as design. Maybe a new record was added although the user has no permissions? Clearly you need to test this, but how would you test that a record was NOT added? There are two options here: If the DB is purged before/after every test, than just try to perform an invalid operation and check that the DB is empty afterward. If you're not cleaning the DB often (like me, but that's another discussion), the payload must contain some unique and queryable value that you can query later and hope to get no records. This is how it looks like:

πŸ“ Code

it('When adding an invalid order, then it returns 400 and is NOT retrievable', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    mode: 'draft',
    externalIdentifier: uuid(), // no existing record has this value
  };

  // Act
  const { status: addingHTTPStatus } = await axiosAPIClient.post(
    '/order',
    orderToAdd
  );

  // Assert
  const { status: fetchingHTTPStatus } = await axiosAPIClient.get(
    `/order/externalIdentifier/${orderToAdd.externalIdentifier}`
  ); // Trying to get the order that should have failed
  expect({ addingHTTPStatus, fetchingHTTPStatus }).toMatchObject({
    addingHTTPStatus: 400,
    fetchingHTTPStatus: 404,
  });
  // πŸ‘† Check that no such record exists
});

🧨 The 'overdoing' test - when the code should mutate but it's doing too much​

πŸ‘‰What & why - This is how a typical data-oriented test looks like: first you add some records, then approach the code under test, and finally assert what happens to these specific records. So far, so good. There is one caveat here though: since the test narrows it focus to specific records, it ignores whether other record were unnecessarily affected. This can be really bad, here's a short real-life story that happened to my customer: Some data access code changed and incorporated a bug that updates ALL the system users instead of just one. All test pass since they focused on a specific record which positively updated, they just ignored the others. How would you test and prevent? here is a nice trick that I was taught by my friend Gil Tayar: in the first phase of the test, besides the main records, add one or more 'control' records that should not get mutated during the test. Then, run the code under test, and besides the main assertion, check also that the control records were not affected:

πŸ“ Code

test('When deleting an existing order, Then it should NOT be retrievable', async () => {
  // Arrange
  const orderToDelete = {
    userId: 1,
    productId: 2,
  };
  const deletedOrder = (await axiosAPIClient.post('/order', orderToDelete)).data
    .id; // We will delete this soon
  const orderNotToBeDeleted = orderToDelete;
  const notDeletedOrder = (
    await axiosAPIClient.post('/order', orderNotToBeDeleted)
  ).data.id; // We will not delete this

  // Act
  await axiosAPIClient.delete(`/order/${deletedOrder}`);

  // Assert
  const { status: getDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${deletedOrder}`
  );
  const { status: getNotDeletedOrderStatus } = await axiosAPIClient.get(
    `/order/${notDeletedOrder}`
  );
  expect(getNotDeletedOrderStatus).toBe(200);
  expect(getDeletedOrderStatus).toBe(404);
});

πŸ•° The 'slow collaborator' test - when the other HTTP service times out​

πŸ‘‰What & why - When your code approaches other services/microservices via HTTP, savvy testers minimize end-to-end tests because these tests lean toward happy paths (it's harder to simulate scenarios). This mandates using some mocking tool to act like the remote service, for example, using tools like nock or wiremock. These tools are great, only some are using them naively and check mainly that calls outside were indeed made. What if the other service is not available in production, what if it is slower and times out occasionally (one of the biggest risks of Microservices)? While you can't wholly save this transaction, your code should do the best given the situation and retry, or at least log and return the right status to the caller. All the network mocking tools allow simulating delays, timeouts and other 'chaotic' scenarios. Question left is how to simulate slow response without having slow tests? You may use fake timers and trick the system into believing as few seconds passed in a single tick. If you're using nock, it offers an interesting feature to simulate timeouts quickly: the .delay function simulates slow responses, then nock will realize immediately if the delay is higher than the HTTP client timeout and throw a timeout event immediately without waiting

πŸ“ Code

// In this example, our code accepts new Orders and while processing them approaches the Users Microservice
test('When users service times out, then return 503 (option 1 with fake timers)', async () => {
//Arrange
const clock = sinon.useFakeTimers();
config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
nock(`${config.userServiceURL}/user/`)
.get('/1', () => clock.tick(2000)) // Reply delay is bigger than configured timeout πŸ‘†
.reply(200);
const loggerDouble = sinon.stub(logger, 'error');
const orderToAdd = {
userId: 1,
productId: 2,
mode: 'approved',
};

//Act
// πŸ‘‡try to add new order which should fail due to User service not available
const response = await axiosAPIClient.post('/order', orderToAdd);

//Assert
// πŸ‘‡At least our code does its best given this situation
expect(response.status).toBe(503);
expect(loggerDouble.lastCall.firstArg).toMatchObject({
name: 'user-service-not-available',
stack: expect.any(String),
message: expect.any(String),
});
});
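
For completeness, here is a hedged sketch of the second option mentioned above - letting nock's .delay simulate the slow response. It assumes the same config, nock and axiosAPIClient helpers as in the test above:

test('When users service times out, then return 503 (option 2 with nock delay)', async () => {
  // Arrange
  config.HTTPCallTimeout = 1000; // Set a timeout for outgoing HTTP calls
  nock(`${config.userServiceURL}/user/`)
    .get('/1')
    .delay(2000) // πŸ‘ˆ Bigger than the client timeout, so the client aborts with a timeout error
    .reply(200);
  const orderToAdd = {
    userId: 1,
    productId: 2,
    mode: 'approved',
  };

  // Act
  const response = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(response.status).toBe(503);
});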

πŸ’Š The 'poisoned message' test - when the message consumer gets an invalid payload that might put it in stagnation​

πŸ‘‰What & so what - When testing flows that start or end in a queue, I bet you're going to bypass the message queue layer, where the code and libraries consume a queue, and you approach the logic layer directly. Yes, it makes things easier but leaves a class of uncovered risks. For example, what if the logic part throws an error or the message schema is invalid but the message queue consumer fails to translate this exception into a proper message queue action? For example, the consumer code might fail to reject the message or increment the number of attempts (depends on the type of queue that you're using). When this happens, the message will enter a loop where it always served again and again. Since this will apply to many messages, things can get really bad as the queue gets highly saturated. For this reason this syndrome was called the 'poisoned message'. To mitigate this risk, the tests' scope must include all the layers like how you probably do when testing against APIs. Unfortunately, this is not as easy as testing with DB because message queues are flaky, here is why

When testing with real queues, things get curiouser and curiouser: tests from different processes will steal messages from each other, purging queues is harder than you might think (e.g., SQS demands 60 seconds to purge queues), to name a few challenges that you won't find when dealing with a real DB

Here is a strategy that works for many teams and holds a small compromise - use a fake in-memory message queue. By 'fake' I mean something simplistic that acts like a stub/spy and does nothing but tell when certain calls are made (e.g., consume, delete, publish). You might find reputable fakes/stubs for your own message queue, like this one for SQS, or you can code one easily yourself. No worries, I'm not in favour of maintaining testing infrastructure myself; this proposed component is extremely simple and unlikely to surpass 50 lines of code (see example below). On top of this, whether using a real or fake queue, one more thing is needed: create a convenient interface that tells the test when certain things happened, like when a message was acknowledged/deleted or a new message was published. Without this, the test never knows when certain events happened and leans toward quirky techniques like polling. With this setup, the test will be short and flat, and you can easily simulate common message queue scenarios like out-of-order messages, batch reject, duplicated messages and, in our example, the poisoned message scenario (using RabbitMQ):

πŸ“ Code

  1. Create a fake message queue that does almost nothing but record calls, see full example here

class FakeMessageQueueProvider extends EventEmitter {
  // Implement here

  publish(message) {}

  consume(queueName, callback) {}
}
  2. Make your message queue client accept a real or fake provider

class MessageQueueClient extends EventEmitter {
  // Pass to it a fake or real message queue
  constructor(customMessageQueueProvider) {}

  publish(message) {}

  consume(queueName, callback) {}

  // Simple implementation can be found here:
  // https://github.com/testjavascript/nodejs-integration-tests-best-practices/blob/master/example-application/libraries/fake-message-queue-provider.js
}
  3. Expose a convenient function that tells when certain calls were made

class MessageQueueClient extends EventEmitter {
  publish(message) {}

  consume(queueName, callback) {}

  // πŸ‘‡
  waitFor(eventName: 'publish' | 'consume' | 'acknowledge' | 'reject', howManyTimes: number): Promise
}
  4. The test is now short, flat and expressive πŸ‘‡

const FakeMessageQueueProvider = require('./libs/fake-message-queue-provider');
const MessageQueueClient = require('./libs/message-queue-client');
const newOrderService = require('./domain/newOrderService');

test('When a poisoned message arrives, then it is being rejected back', async () => {
  // Arrange
  const messageWithInvalidSchema = { nonExistingProperty: 'invalid❌' };
  const messageQueueClient = new MessageQueueClient(
    new FakeMessageQueueProvider()
  );
  // Subscribe to new messages and pass the handler function
  messageQueueClient.consume('orders.new', newOrderService.addOrder);

  // Act
  await messageQueueClient.publish('orders.new', messageWithInvalidSchema);
  // Now all the layers of the app will get stretched πŸ‘†, including logic and message queue libraries

  // Assert
  await messageQueueClient.waitFor('reject', 1);
  // πŸ‘† This tells us that eventually our code asked the message queue client to reject this poisoned message
});

πŸ“Full code example - is here

πŸ“¦ Test the package as a consumer​

πŸ‘‰What & why - When publishing a library to npm, easily all your tests might pass BUT... the same functionality will fail over the end-user's computer. How come? tests are executed against the local developer files, but the end-user is only exposed to artifacts that were built. See the mismatch here? after running the tests, the package files are transpiled (I'm looking at you babel users), zipped and packed. If a single file is excluded due to .npmignore or a polyfill is not added correctly, the published code will lack mandatory files

πŸ“ Code

Consider the following scenario: you're developing a library, and you wrote this code:

// index.js
export * from './calculate.js';

// calculate.js πŸ‘ˆ
export function calculate() {
  return 1;
}

Then some tests:

import { calculate } from './index.js';

test('should return 1', () => {
  expect(calculate()).toBe(1);
});

βœ… All tests pass 🎊

Finally configure the package.json:

{
  // ....
  "files": [
    "index.js"
  ]
}

See, 100% coverage, all tests pass locally and in the CI βœ…; it just won't work in production πŸ‘Ή. Why? Because you forgot to include calculate.js in the package.json files array πŸ‘†

What can we do instead? We can test the library as its end-users do. How? Publish the package to a local registry like Verdaccio, and let the tests install and approach the published code. Sounds troublesome? Judge for yourself πŸ‘‡

πŸ“ Code

// global-setup.js

// 1. Setup the in-memory NPM registry, one function that's it! πŸ”₯
await setupVerdaccio();

// 2. Building our package
await exec('npm', ['run', 'build'], {
  cwd: packagePath,
});

// 3. Publish it to the in-memory registry
await exec('npm', ['publish', '--registry=http://localhost:4873'], {
  cwd: packagePath,
});

// 4. Installing it in the consumer directory
await exec('npm', ['install', 'my-package', '--registry=http://localhost:4873'], {
  cwd: consumerPath,
});

// Test file in the consumerPath

// 5. Test the package πŸš€
test("should succeed", async () => {
  const { fn1 } = await import('my-package');

  expect(fn1()).toEqual(1);
});

πŸ“Full code example - is here

What else can this technique be useful for?

  • Testing different versions of a peer dependency you support - let's say your package supports React 16 to 18, you can now test that
  • You want to test ESM and CJS consumers
  • If you have a CLI application, you can test it like your users do
  • Making sure all the voodoo magic in that Babel file is working as expected

πŸ—ž The 'broken contract' test - when the code is great but its corresponding OpenAPI docs lead to a production bug

πŸ‘‰What & so what - Quite confidently I'm sure that almost no team test their OpenAPI correctness. "It's just documentation", "we generate it automatically based on code" are typical belief found for this reason. Let me show you how this auto generated documentation can be wrong and lead not only to frustration but also to a bug. In production.

Consider the following scenario: you're requested to return an HTTP error status code if an order is duplicated, but you forget to update the OpenAPI specification with this new HTTP status response. While some frameworks can update the docs with new fields, none can realize which errors your code throws; this labour is always manual. On the other side of the line, the API client is doing everything just right, going by the spec that you published, adding orders with some duplication because the docs don't forbid doing so. Then, BOOM, a production bug -> the client crashes and shows an ugly unknown error message to the user. This type of failure is called the 'contract' problem: two parties interact, each has code that works perfectly, they just operate under different specs and assumptions. While there are fancy, sophisticated and exhaustive solutions to this challenge (e.g., PACT), there are also leaner approaches that get you covered easily and quickly (at the price of covering fewer risks).

The following sweet technique is based on libraries (jest, mocha) that listen to all network responses, compare the payload against the OpenAPI document, and if any deviation is found - make the test fail with a descriptive error. With this new weapon in your toolbox and almost zero effort, another risk is ticked. It's a pity that these libs can't also assert against the incoming requests to tell you that your tests use the API wrong. One small caveat and an elegant solution: these libraries dictate putting an assertion statement in every test - expect(response).toSatisfyApiSpec() - a bit tedious and reliant on human discipline. You can do better if your HTTP client supports a plugin/hook/interceptor by putting this assertion in a single place that will apply to all the tests:

πŸ“ Code

Code under test, an API that throws a new error status

if (doesOrderCouponAlreadyExist) {
  throw new AppError('duplicated-coupon', { httpStatus: 409 });
}

The OpenAPI doesn't document HTTP status '409', no framework knows to update the OpenAPI doc based on thrown exceptions

"responses": {
"200": {
"description": "successful",
}
,
"400": {
"description": "Invalid ID",
"content": {}
},// No 409 in this listπŸ˜²πŸ‘ˆ
}

The test code

const jestOpenAPI = require('jest-openapi');
jestOpenAPI('../openapi.json');

test('When an order with a duplicated coupon is added, then a 409 error should get returned', async () => {
  // Arrange
  const orderToAdd = {
    userId: 1,
    productId: 2,
    couponId: uuid(),
  };
  await axiosAPIClient.post('/order', orderToAdd);

  // Act
  // We're adding the same coupon twice πŸ‘‡
  const receivedResponse = await axiosAPIClient.post('/order', orderToAdd);

  // Assert
  expect(receivedResponse.status).toBe(409);
  expect(receivedResponse).toSatisfyApiSpec();
  // This πŸ‘† will throw if the API response, body or status, differs from what is stated in the OpenAPI
});

Trick: If your HTTP client supports any kind of plugin/hook/interceptor, put the following code in 'beforeAll'. This covers all the tests against OpenAPI mismatches

beforeAll(() => {
  axios.interceptors.response.use((response) => {
    expect(response).toSatisfyApiSpec();
    // With this πŸ‘†, add nothing to the tests - each will fail if the response deviates from the docs
    return response;
  });
});

Even more ideas​

  • Test readiness and health routes
  • Test message queue connection failures
  • Test JWT and JWKS failures
  • Test security-related things like CSRF tokens
  • Test your HTTP client retry mechanism (very easy with nock - see the sketch after this list)
  • Test that the DB migration succeed and the new code can work with old records format
  • Test DB connection disconnects
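
To illustrate one item from the list above, the HTTP client retry mechanism, here is a hedged sketch; getUserFromUsersService is a hypothetical helper that wraps the HTTP call with a retry policy - adapt the names and URL to your own code:

const nock = require('nock');
const { getUserFromUsersService } = require('./user-service-client'); // hypothetical module under test

test('When the users service fails once, then the client retries and succeeds', async () => {
  // Arrange - the first call fails, the retry receives a valid response
  nock('http://users-service')
    .get('/user/1')
    .reply(503)
    .get('/user/1')
    .reply(200, { id: 1, name: 'Yoni' });

  // Act
  const receivedUser = await getUserFromUsersService(1);

  // Assert - the failure was absorbed by the retry mechanism
  expect(receivedUser).toMatchObject({ id: 1 });
});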

It's not just ideas, it's a whole new mindset

The examples above were not meant only to be a checklist of 'don't forget' test cases, but rather a fresh mindset on what tests could cover for you. Modern tests are not just about functions or user flows, but about any risk that might visit your production. This is doable only with component/integration tests, never with unit or end-to-end tests. Why? Because unlike unit tests, you need all the parts to play together (e.g., the DB migration file, the DAL layer and the error handler all together). Unlike E2E, you have the power to simulate in-process scenarios that demand some tweaking and mocking. Component tests allow you to include many production moving parts early on your machine. I like calling this 'production-oriented development'

My new online testing course - If you're intrigued by beyond-the-basics testing patterns, consider my online course, which was just launched and is 🎁 on sale for 30 days (July 2023)

· 2 min read
Yoni Goldberg
Raz Luvaton
Daniel Gluskin
Michael Salomon

Where is our focus now?​

We are working on two parallel paths: enriching the supported best practices to make the code more production-ready, and enhancing the existing code based on community feedback

What's new?​

Request-level store​

Every request now has its own store of variables; you may assign information at the request level so that every piece of code called from this specific request has access to these variables. For example, for storing the user permissions. One special variable that is stored is 'request-id', which is a unique UUID per request (also called correlation-id). The logger automatically emits this with every log entry. We use Node's built-in AsyncLocalStorage for this task
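
Here is a minimal sketch of the mechanism (not Practica's exact implementation): an Express middleware opens an AsyncLocalStorage context per request, and the logger reads the request-id from that store on every entry:

const { AsyncLocalStorage } = require('async_hooks');
const { randomUUID } = require('crypto');

const requestContext = new AsyncLocalStorage();

// Express middleware: every piece of code invoked from this request sees the same store
function requestContextMiddleware(req, res, next) {
  const store = new Map([['requestId', req.headers['x-request-id'] || randomUUID()]]);
  requestContext.run(store, () => next());
}

// The logger enriches every entry with the request-id, when a context exists
function logInfo(message) {
  const store = requestContext.getStore();
  console.log(JSON.stringify({ message, requestId: store?.get('requestId') }));
}

module.exports = { requestContextMiddleware, logInfo };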

Hardened .dockerfile​

Although a Dockerfile may contain just 10 lines, it is easy and common to include 20 mistakes in this short artifact. For example, .npmrc secrets are commonly leaked, a vulnerable base image is used, and other typical mistakes. Our Dockerfile follows the best practices from this article and already applies 90% of the guidelines

Additional ORM option: Prisma​

Prisma is an emerging ORM with great type-safety support and awesome DX. We will keep Sequelize as our default ORM, while Prisma will be an optional choice using the flag: --orm=prisma

Why did we add it to our tools basket, and why is Sequelize still the default? We summarized all of our thoughts and data in this blog post

Many small enhancements​

More than 10 PRs were merged with CLI experience improvements, bug fixes, code pattern enhancements and more

Where do I start?​

Definitely follow the getting started guide first, and then read the 'coding with Practica' guide to realize its full power and genuine value. We would be thankful to receive your feedback

· 24 min read
Yoni Goldberg

Intro - Why discuss yet another ORM (or the man who had a stain on his fancy suit)?

Betteridge's law of headlines suggests that a 'headline that ends in a question mark can be answered by the word NO'. Will this article follow this rule?

Imagine an elegant businessman (or woman) walking into a building, wearing a fancy tuxedo and a luxury watch wrapped around his palm. He smiles and waves all over to say hello while people around stare admiringly. You get a little closer, and then, shockingly, while standing nearby, it's hard to ignore a bold, dark stain on his white shirt. What a dissonance: suddenly all of that glamour is stained

Suit with a stain

Like this businessman, Node is highly capable and popular, and yet, in certain areas, its offering basket is stained with inferior options. One of these areas is the ORM space: "I wish we had something like (Java) Hibernate or (.NET) Entity Framework" are common words heard from Node developers. What about existing mature ORMs like TypeORM and Sequelize? We owe so much to these maintainers, and yet, the resulting developer experience and the level of maintenance just don't feel delightful - some may say even mediocre. At least so I believed before writing this article...

From time to time, a shiny new ORM is launched, and there is hope. Then it is soon realized that these new emerging projects are more of the same, if they survive at all. Until one day, Prisma ORM arrived, surrounded by glamour: it's gaining tons of attention all over, producing fantastic content, being used by respectable frameworks and... raised $40,000,000 (40 million) to build the next-generation ORM. Is it the 'Ferrari' of ORMs we've been waiting for? Is it a game changer? If you're the 'no ORM for me' type, will this one make you convert your religion?

In Practica.js (the Node.js starter based on Node.js best practices with 83,000 stars) we aim to make the best decisions for our users. The Prisma hype made us stop for a second, evaluate its unique offering and conclude whether we should upgrade our toolbox.

This article is certainly not an 'ORM 101' but rather a spotlight on specific dimensions in which Prisma aims to shine or struggles. It's compared against the two most popular Node.js ORMs - TypeORM and Sequelize. Why not others? Why weren't other promising contenders like MikroORM covered? Simply because they are not as popular yet, and maturity is a critical trait of ORMs

Ready to explore how good Prisma is and whether you should throw away your current tools?

· 22 min read
Yoni Goldberg

Node.js is maturing. Many patterns and frameworks were embraced - it's my belief that developers' productivity dramatically increased in the past years. One downside of maturity is habits - we now reuse existing techniques more often. How is this a problem?

In his book 'Atomic Habits', the author James Clear states that:

"Mastery is created by habits. However, sometimes when we're on auto-pilot performing habits, we tend to slip up... Just being we are gaining experience through performing the habits does not mean that we are improving. We actually go backwards on the improvement scale with most habits that turn into auto-pilot". In other words, practice makes perfect, and bad practices make things worst

We copy-paste mentally and physically things that we are used to, but these things are not necessarily right anymore. Like animals who shed their shells or skin to adapt to a new reality, so the Node.js community should constantly gauge its existing patterns, discuss and change

Luckily, unlike other languages that are more committed to specific design paradigms (Java, Ruby) - Node is a house of many ideas. In this community, I feel safe to question some of our good-old tooling and patterns. The list below contains my personal beliefs, which are brought with reasoning and examples.

Are those disruptive thoughts surely correct? I'm not sure. There is one thing I'm sure about though: for Node.js to live longer, we need to encourage critique, focus our loyalty on innovation, and keep the discussion going. The outcome of this discussion is not "don't use this tool!" but rather becoming familiar with other techniques that, under some circumstances, might be a better fit

Animals and frameworks shed their skin

The True Crab's exoskeleton is hard and inflexible; it must shed its restrictive exoskeleton to grow and reveal the new, roomier shell

TOC - Patterns to reconsider​

  1. Dotenv
  2. Calling a service from a controller
  3. Nest.js dependency injection for all classes
  4. Passport.js
  5. Supertest
  6. Fastify utility decoration
  7. Logging from a catch clause
  8. Morgan logger
  9. NODE_ENV

· 2 min read
Yoni Goldberg

πŸ₯³ We're thrilled to launch the very first version of Practica.js.

What is Practica in one paragraph

Although Node.js has great frameworks πŸ’š, they were never meant to be production ready immediately. Practica.js aims to bridge the gap. Based on your preferred framework, we generate some example code that demonstrates a full workflow, from API to DB, that is packed with good practices. For example, we include a hardened dockerfile, N-Tier folder structure, great testing templates, and more. This saves a great deal of time and can prevent painful mistakes. All decisions made are neatly and thoughtfully documented. We strive to keep things as simple and standard as possible and base our work off the popular guide: Node.js Best Practices.

Your developer experience would look as follows: generate our starter using the CLI and get an example Node.js solution. This solution is a typical Monorepo setup with an example Microservice and libraries. All is based on super-popular libraries that we merely stitch together. It also includes tons of optimizations - linters, libraries, Monorepo configuration, tests and much more. Inside the example Microservice you'll find an example flow, from API to DB. Based on this, you can modify the entity and DB fields and build your app.

90 seconds video​

How to get started​

To get up to speed quickly, read our getting started guide.