MongoDB Atlas SDK: A Modern Toolkit
Lately, I’ve been diving into the MongoDB Atlas SDK, and it’s clear that this tool isn’t just about simplifying interactions with Atlas: it’s about reimagining the developer experience across multiple languages. Whether you’re a JavaScript junkie or a polyglot juggling Go, Java, and C#, the Atlas SDK aims to be an intuitive, powerful addition to your toolkit.
In this post, I’ll break down some of the core features of the Atlas SDK, share some hands-on experiences, and extend my exploration with examples in Go, Java, and C#. If you’ve ever wished that managing your clusters and configurations could be more straightforward and less “boilerplate heavy,” keep reading.
A Quick Recap: What the Atlas SDK Brings to the Table
At its heart, the MongoDB Atlas SDK abstracts the underlying Atlas API, making it easier to work with managed clusters, deployments, and security configurations. Here are a few standout features:
- Intuitive API: The SDK feels natural, following patterns that resonate with MongoDB’s broader ecosystem. It’s almost always nicer to call into a set of SDK libraries than to write and maintain an entire layer of your own for managing calls to an API tier.
- Robust Functionality: It covers everything from cluster management to advanced security settings.
- Modern Practices: Asynchronous and promise-based (or equivalent in your language of choice), the SDK fits snugly into today’s development paradigms.
- Streamlined Setup: Detailed documentation and easy configuration mean you can spend more time coding and less time wrestling with setup.
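To make the “SDK instead of a hand-rolled API layer” point concrete, here is a minimal hypothetical sketch. Nothing below is the actual Atlas SDK surface; the class, method, and URL are illustrative stand-ins for the request plumbing an SDK absorbs for you:

```javascript
// Hypothetical sketch: the kind of request plumbing an SDK absorbs.
// None of these names come from the real Atlas SDK; `httpGet` is injected
// so the example stays self-contained and testable.
class TinyAtlasClient {
  constructor(httpGet, baseUrl = "https://example.invalid/api") {
    this.httpGet = httpGet;
    this.baseUrl = baseUrl;
  }

  // One intuitive method call instead of hand-building URLs, auth headers,
  // retries, and response parsing at every call site.
  async listClusters(projectId) {
    const body = await this.httpGet(`${this.baseUrl}/groups/${projectId}/clusters`);
    return JSON.parse(body).results;
  }
}

// Stand-in for an authenticated HTTPS call to a cloud API.
const fakeHttpGet = async (url) =>
  JSON.stringify({ results: [{ name: "Cluster0", state: "IDLE", requestedUrl: url }] });

const client = new TinyAtlasClient(fakeHttpGet);
client.listClusters("my-project-id").then((clusters) => {
  console.log(clusters[0].name); // Cluster0
});
```

In real code the injected `httpGet` would be an authenticated call to the Atlas Admin API; injecting it here keeps the sketch self-contained and makes the point that the SDK collapses all of this into one method call.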
Optimizing Merchandise Ordering, Tracking, and Sales with Barcode and QR Code Scanning
In today’s fast-paced retail and supply chain environments, efficiency is king. The ability to order, track, and sell merchandise without delay or error isn’t just an advantage anymore—it’s a necessity. One of the unsung heroes of this operation? The humble barcode scanner. Whether scanning traditional barcodes or QR codes, this technology has evolved into a critical tool for optimizing the entire flow of goods. From ensuring accurate stock levels to streamlining point-of-sale (POS) systems, barcode scanning is a quiet powerhouse driving modern retail and inventory management.
Here’s how barcode and QR code scanning are revolutionizing merchandise handling—and how you can integrate it into your web-based application with a Scandit demo.
Streamlined Inventory Management
Let’s start at the warehouse or backroom. Efficient ordering hinges on having a clear, real-time understanding of what’s available in stock. Each time merchandise arrives, barcode scanning at intake ensures that the product is logged accurately into the inventory system. This eliminates manual data entry errors and ensures that the stock count is current. Need to know how many units of that best-selling product you have on hand? Just a quick scan, and it’s logged into the system with perfect accuracy.
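As a sketch of what that intake step can look like in a web app (the function and data here are illustrative; a real integration would receive the code from a scanning SDK such as Scandit and persist to a database), the core of it is incrementing an on-hand count keyed by barcode:

```javascript
// Illustrative sketch of an intake scan handler: each successful scan
// increments the on-hand count for that barcode. A real system would
// persist to a database and receive codes from a scanner SDK callback.
const inventory = new Map();

function recordIntakeScan(barcode, quantity = 1) {
  if (!barcode) throw new Error("Empty scan result");
  const current = inventory.get(barcode) ?? 0;
  inventory.set(barcode, current + quantity);
  return inventory.get(barcode);
}

// Two cases of the same SKU arrive, then a single unit:
recordIntakeScan("0012345678905", 12);
recordIntakeScan("0012345678905", 12);
console.log(recordIntakeScan("0012345678905")); // 25
```

Because the stock count is updated at the moment of the scan rather than in a later manual entry pass, the count the POS and ordering systems see is always current.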
DataLoader for GraphQL Implementations
A popular library used in GraphQL implementations is called DataLoader, and in many ways the name is descriptive of its purpose. As described in the GitHub repo for the Node.js implementation:
“DataLoader is a generic utility to be used as part of your application’s data fetching layer to provide a simplified and consistent API over various remote data sources such as databases or web services via batching and caching.”
DataLoader solves the N+1 problem that otherwise requires a resolver to make multiple individual requests to a database (or other data source, i.e. another API), resulting in inefficient and slow data retrieval.
A DataLoader serves as a batching and caching layer, combining multiple requests into a single request. It groups identical requests together and executes them more efficiently, thus minimizing the number of database or API round trips.
DataLoader Operation:
- Create a new instance of DataLoader, specifying a batch loading function. This function would define how to load the data for a given set of keys.
- The resolver iterates through the collection and instead of fetching the related data adds the keys for the data to be fetched to the DataLoader instance.
- The DataLoader collects the keys and for multiple keys, deduplicates the request and executes.
- Once the batch is executed DataLoader returns the results associating them with their respective keys.
- The resolver can then access the response data and resolve the field or relationships as needed.
DataLoader also caches the results of the previous requests so if the same key is requested again DataLoader retrieves from cache instead of making another request. This caching further improves performance and reduces redundant fetching.
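The batching and caching described above can be sketched without the library itself. This is a simplified stand-in, not the real `dataloader` internals, but it shows the mechanism: collect keys during one tick, dedupe through a promise cache, and flush a single batch:

```javascript
// Simplified stand-in for DataLoader's core idea (not the real internals):
// collect keys during one tick, dedupe via a promise cache, then execute
// a single batch function for all pending keys.
class TinyLoader {
  constructor(batchFn) {
    this.batchFn = batchFn; // async (keys) => values, same order as keys
    this.cache = new Map(); // key -> promise (dedupes and caches)
    this.queue = [];        // pending { key, resolve, reject }
  }

  load(key) {
    if (this.cache.has(key)) return this.cache.get(key); // cached or in-flight
    const promise = new Promise((resolve, reject) => {
      this.queue.push({ key, resolve, reject });
      // The first key of a batch schedules a single flush.
      if (this.queue.length === 1) queueMicrotask(() => this.flush());
    });
    this.cache.set(key, promise);
    return promise;
  }

  flush() {
    const batch = this.queue;
    this.queue = [];
    this.batchFn(batch.map((item) => item.key))
      .then((values) => batch.forEach((item, i) => item.resolve(values[i])))
      .catch((err) => batch.forEach((item) => item.reject(err)));
  }
}

// Three loads, two distinct keys: the batch function runs once with [1, 2].
let batchCalls = 0;
const loader = new TinyLoader(async (keys) => {
  batchCalls += 1;
  return keys.map((k) => k * 10);
});

Promise.all([loader.load(1), loader.load(2), loader.load(1)]).then((results) => {
  console.log(batchCalls, results); // 1 [ 10, 20, 10 ]
});
```

The real library adds per-request cache scoping, cache key functions, priming, and error handling per key, but the batching window and promise cache are the heart of it.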
DataLoader Implementation Examples
JavaScript & Node.js
The following is a basic implementation of DataLoader for GraphQL using Apollo Server.
```javascript
const { ApolloServer, gql } = require("apollo-server");
// The dataloader package exports the class directly.
const DataLoader = require("dataloader");

// Simulated data source
const db = {
  users: [
    { id: 1, name: "John" },
    { id: 2, name: "Jane" },
  ],
  posts: [
    { id: 1, userId: 1, title: "Post 1" },
    { id: 2, userId: 2, title: "Post 2" },
    { id: 3, userId: 1, title: "Post 3" },
  ],
};

// Simulated asynchronous batch loading function
const batchPostsByUserIds = async (userIds) => {
  console.log("Fetching posts for user ids:", userIds);
  const posts = db.posts.filter((post) => userIds.includes(post.userId));
  return userIds.map((userId) => posts.filter((post) => post.userId === userId));
};

// Create a DataLoader instance
const postsLoader = new DataLoader(batchPostsByUserIds);

const resolvers = {
  Query: {
    getUserById: (_, { id }) => {
      // GraphQL ID arguments arrive as strings, so coerce before comparing.
      return db.users.find((user) => user.id === Number(id));
    },
  },
  User: {
    posts: (user) => {
      // Use DataLoader to load posts for the user
      return postsLoader.load(user.id);
    },
  },
};

// Define the GraphQL schema
const typeDefs = gql`
  type User {
    id: ID!
    name: String!
    posts: [Post]
  }

  type Post {
    id: ID!
    title: String!
  }

  type Query {
    getUserById(id: ID!): User
  }
`;

// Create Apollo Server instance
const server = new ApolloServer({ typeDefs, resolvers });

// Start the server
server.listen().then(({ url }) => {
  console.log(`Server running at ${url}`);
});
```
In this example I created a DataLoader instance `postsLoader` using the `DataLoader` class from the `dataloader` package. I define a batch loading function `batchPostsByUserIds` that takes an array of user IDs and retrieves the corresponding posts for each user from the `db.posts` array. The function returns an array of arrays, where each sub-array contains the posts for a specific user.
In the `User` resolver I use the `load` method of DataLoader to load the posts for a user. The `load` method handles batching and caching behind the scenes, ensuring that redundant requests are minimized and results are cached for subsequent requests.
When the GraphQL server receives a query for the `posts` field of a `User`, the DataLoader automatically batches the requests for multiple users and executes the batch loading function to retrieve the posts.
This example demonstrates a very basic implementation of DataLoader in a GraphQL server. In a real-world scenario there would of course be a number of additional capabilities and implementation details that you’d need to work on for your particular situation.
Spring Boot Java Implementation
To further the range of examples, the following is a Spring Boot implementation.
First add the dependencies.
```xml
<dependencies>
  <!-- GraphQL for Spring Boot -->
  <dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-spring-boot-starter</artifactId>
    <version>5.0.2</version>
  </dependency>
  <!-- DataLoader -->
  <dependency>
    <groupId>org.dataloader</groupId>
    <artifactId>dataloader</artifactId>
    <version>3.4.0</version>
  </dependency>
</dependencies>
```
Next create the components and configure DataLoader.
```java
import com.graphql.spring.boot.context.GraphQLContext;
import graphql.servlet.context.DefaultGraphQLServletContext;
import org.dataloader.BatchLoader;
import org.dataloader.DataLoader;
import org.dataloader.DataLoaderRegistry;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.stream.Collectors;

@SpringBootApplication
public class DataLoaderExampleApplication {

    // Simulated data source
    private static class Db {
        List<User> users = List.of(
                new User(1, "John"),
                new User(2, "Jane")
        );
        List<Post> posts = List.of(
                new Post(1, 1, "Post 1"),
                new Post(2, 2, "Post 2"),
                new Post(3, 1, "Post 3")
        );
    }

    // User class
    private static class User {
        private final int id;
        private final String name;

        User(int id, String name) {
            this.id = id;
            this.name = name;
        }

        int getId() { return id; }
        String getName() { return name; }
    }

    // Post class
    private static class Post {
        private final int id;
        private final int userId;
        private final String title;

        Post(int id, int userId, String title) {
            this.id = id;
            this.userId = userId;
            this.title = title;
        }

        int getId() { return id; }
        int getUserId() { return userId; }
        String getTitle() { return title; }
    }

    // DataLoader batch loading function
    private static class BatchPostsByUserIds implements BatchLoader<Integer, List<Post>> {
        private final Db db;

        BatchPostsByUserIds(Db db) { this.db = db; }

        @Override
        public CompletionStage<List<List<Post>>> load(List<Integer> userIds) {
            System.out.println("Fetching posts for user ids: " + userIds);
            List<List<Post>> result = userIds.stream()
                    .map(userId -> db.posts.stream()
                            .filter(post -> post.getUserId() == userId)
                            .collect(Collectors.toList()))
                    .collect(Collectors.toList());
            return CompletableFuture.completedFuture(result);
        }
    }

    // GraphQL resolver
    private static class UserResolver implements GraphQLResolver<User> {
        private final DataLoader<Integer, List<Post>> postsDataLoader;

        UserResolver(DataLoader<Integer, List<Post>> postsDataLoader) {
            this.postsDataLoader = postsDataLoader;
        }

        List<Post> getPosts(User user) {
            return postsDataLoader.load(user.getId()).join();
        }
    }

    // GraphQL configuration
    @Bean
    public GraphQLSchemaProvider graphQLSchemaProvider() {
        return (graphQLSchemaBuilder, environment) -> {
            // Define the GraphQL schema.
            // Post must be declared before User references it.
            GraphQLObjectType postObjectType = GraphQLObjectType.newObject()
                    .name("Post")
                    .field(field -> field.name("id").type(Scalars.GraphQLInt))
                    .field(field -> field.name("title").type(Scalars.GraphQLString))
                    .build();

            GraphQLObjectType userObjectType = GraphQLObjectType.newObject()
                    .name("User")
                    .field(field -> field.name("id").type(Scalars.GraphQLInt))
                    .field(field -> field.name("name").type(Scalars.GraphQLString))
                    .field(field -> field.name("posts").type(new GraphQLList(postObjectType)))
                    .build();

            GraphQLObjectType queryObjectType = GraphQLObjectType.newObject()
                    .name("Query")
                    .field(field -> field.name("getUserById")
                            .type(userObjectType)
                            .argument(arg -> arg.name("id").type(Scalars.GraphQLInt))
                            .dataFetcher(env -> {
                                // Retrieve the requested user ID
                                int userId = env.getArgument("id");
                                // Fetch the user by ID from the data source
                                Db db = new Db();
                                return db.users.stream()
                                        .filter(user -> user.getId() == userId)
                                        .findFirst()
                                        .orElse(null);
                            }))
                    .build();

            return graphQLSchemaBuilder.query(queryObjectType).build();
        };
    }

    // DataLoader registry bean
    @Bean
    public DataLoaderRegistry dataLoaderRegistry() {
        DataLoaderRegistry dataLoaderRegistry = new DataLoaderRegistry();
        Db db = new Db();
        dataLoaderRegistry.register("postsDataLoader",
                DataLoader.newDataLoader(new BatchPostsByUserIds(db)));
        return dataLoaderRegistry;
    }

    // GraphQL context builder
    @Bean
    public GraphQLContext.Builder graphQLContextBuilder(DataLoaderRegistry dataLoaderRegistry) {
        return new GraphQLContext.Builder().dataLoaderRegistry(dataLoaderRegistry);
    }

    public static void main(String[] args) {
        SpringApplication.run(DataLoaderExampleApplication.class, args);
    }
}
```
In this example I define the `Db` class as a simulated data source with `users` and `posts` lists. I create a `BatchPostsByUserIds` class that implements the `BatchLoader` interface from DataLoader for batch loading of posts based on user IDs.
The `UserResolver` class is a GraphQL resolver that uses the `postsDataLoader` to load posts for a specific user.
For the configuration I define the schema using `GraphQLSchemaProvider`, create `GraphQLObjectType` definitions for `User` and `Post`, and define a `Query` object type with a resolver for the `getUserById` field.
The `dataLoaderRegistry` bean registers the `postsDataLoader` with the DataLoader registry.
This implementation will efficiently batch and cache requests for loading posts based on user IDs.
References
- GitHub repository: graphql/dataloader
- Using DataLoader in GraphQL
- GraphQL.NET’s implementation of DataLoader
- Strawberry DataLoader (.NET)
- GraphQL for Spring Java DataLoaders
Other GraphQL Standards, Practices, Patterns, & Related Posts
Single Responsibility Principle for GraphQL API Resolvers
The Single Responsibility Principle (SRP) states that a class or module should have only one reason to change. It emphasizes the importance of keeping modules or components focused on a single task, reducing their complexity, and increasing maintainability.
In GraphQL API development, maintaining code quality and scalability is of utmost importance. A powerful principle that can help achieve these goals when developing your API’s resolvers is the Single Responsibility Principle (SRP). I’m not always a die-hard when it comes to SRP – there are always situations that may call for skipping out on some of the concept – but in general it helps tremendously over time.
By adhering to the SRP, coders can more easily avoid the pitfalls of large monolithic resolvers that end up doing spurious processing outside of their intended scope. Let’s explore some of the SRP, and I’ll provide three practical examples of how to implement some simple SRP use with GraphQL.
When applying SRP in GraphQL the aim is to ensure that each resolver handles a specific data type or field, thereby avoiding scope bloat and convoluted resolvers that handle unrelated responsibilities.
- User Resolvers:
- Imagine a scenario where a GraphQL schema includes a User type with fields like id, name, email, and posts. Instead of writing a single resolver for the User type that fetches and processes all of the data we can adopt SRP by creating separate resolvers for each field. For instance we would have resolvers.getUserById to fetch user details, resolvers.getUserName to retrieve the respective user’s name, and a resolvers.getUserPosts to fetch the user’s posts. In doing so we keep each resolver focused on a specific field and in turn keep the codebase simplified.
- Product Resolvers:
- Another example might be a product object within an e-commerce application. It would contain fields like id, name, price, and reviews. With SRP we’d have resolvers for resolvers.getProductById, resolvers.getProductName, resolvers.getProductPrice, and resolvers.getProductReviews. The naming, guided by SRP, describes what each of these functions does and what one can expect in response. This, again, makes the codebase dramatically easier to maintain over time.
- Comment Resolvers:
- Last example, imagine a blog, with a comment type consisting of id, content, and author. This would break out to resolvers.getCommentContent, resolvers.getCommentAuthor, and resolvers.getCommentById. This adheres to SRP and keeps things simple, just like the previous two examples.
Prerequisite: the examples below assume the `apollo-server` and `graphql` packages are installed and available.
User Resolvers Example
A more thorough working example of the user resolvers described above would look something like this. I’ve included the data in a variable to act as the database; the premise that a real implementation would have an underlying database behind it should be obvious.
```javascript
// Assuming you have a database or data source that provides user information
const db = {
  users: [
    { id: 1, name: "John Doe", email: "john@example.com", posts: [1, 2] },
    { id: 2, name: "Jane Smith", email: "jane@example.com", posts: [3] },
  ],
  posts: [
    { id: 1, title: "GraphQL Basics", content: "Introduction to GraphQL" },
    { id: 2, title: "GraphQL Advanced", content: "Advanced GraphQL techniques" },
    { id: 3, title: "GraphQL Best Practices", content: "Tips for GraphQL development" },
  ],
};

const resolvers = {
  Query: {
    getUserById: (_, { id }) => {
      // GraphQL ID arguments arrive as strings, so coerce before comparing.
      return db.users.find((user) => user.id === Number(id));
    },
  },
  User: {
    name: (user) => {
      return user.name;
    },
    posts: (user) => {
      return db.posts.filter((post) => user.posts.includes(post.id));
    },
  },
};

// Assuming you have a GraphQL server setup with Apollo Server
const { ApolloServer, gql } = require("apollo-server");

const typeDefs = gql`
  type User {
    id: ID!
    name: String!
    email: String!
    posts: [Post!]!
  }

  type Post {
    id: ID!
    title: String!
    content: String!
  }

  type Query {
    getUserById(id: ID!): User
  }
`;

const server = new ApolloServer({ typeDefs, resolvers });

server.listen().then(({ url }) => {
  console.log(`Server running at ${url}`);
});
```
In this code we have the `db` object acting as our database that we’ll interact with. The `resolvers` and the GraphQL schema are included inline to show the relationship of the data and how it would look in GraphQL. For simplicity, I’m also using Apollo Server to build this example.
Product Resolvers Example
I’ve also included an example of the product resolvers. It is very similar, but has some minor nuance to show how it would be coded up. For this example, to draw more context, I’ve added an authors table/entity and respective fields for authors as related to reviews.
```javascript
// Assuming you have a database or data source that provides product, review, and author information
const db = {
  products: [
    { id: 1, name: "Product 1", price: 19.99, reviews: [1, 2] },
    { id: 2, name: "Product 2", price: 29.99, reviews: [3] },
  ],
  reviews: [
    { id: 1, rating: 4, comment: "Great product!", authorId: 1 },
    { id: 2, rating: 5, comment: "Excellent quality!", authorId: 2 },
    { id: 3, rating: 3, comment: "Average product.", authorId: 1 },
  ],
  authors: [
    { id: 1, name: "John Doe", karmaPoints: 100, details: "Product enthusiast" },
    { id: 2, name: "Jane Smith", karmaPoints: 150, details: "Tech lover" },
  ],
};

const resolvers = {
  Query: {
    getProductById: (_, { id }) => {
      // GraphQL ID arguments arrive as strings, so coerce before comparing.
      return db.products.find((product) => product.id === Number(id));
    },
  },
  Product: {
    name: (product) => {
      return product.name;
    },
    price: (product) => {
      return product.price;
    },
    reviews: (product) => {
      return db.reviews.filter((review) => product.reviews.includes(review.id));
    },
  },
  Review: {
    author: (review) => {
      return db.authors.find((author) => author.id === review.authorId);
    },
  },
};

// Assuming you have a GraphQL server setup with Apollo Server
const { ApolloServer, gql } = require("apollo-server");

const typeDefs = gql`
  type Product {
    id: ID!
    name: String!
    price: Float!
    reviews: [Review!]!
  }

  type Review {
    id: ID!
    rating: Int!
    comment: String!
    author: Author!
  }

  type Author {
    id: ID!
    name: String!
    karmaPoints: Int!
    details: String!
  }

  type Query {
    getProductById(id: ID!): Product
  }
`;

const server = new ApolloServer({ typeDefs, resolvers });

server.listen().then(({ url }) => {
  console.log(`Server running at ${url}`);
});
```
Comment Resolvers Example
Starting right off with this example, here’s what the code would look like.
```javascript
// Assuming you have a database or data source that provides comment and author information
const db = {
  comments: [
    { id: 1, content: "Great post!", authorId: 1 },
    { id: 2, content: "Nice article!", authorId: 2 },
  ],
  authors: [
    { id: 1, name: "John Doe", karmaPoints: 100, details: "Product enthusiast" },
    { id: 2, name: "Jane Smith", karmaPoints: 150, details: "Tech lover" },
  ],
};

const resolvers = {
  Query: {
    getCommentById: (_, { id }) => {
      // GraphQL ID arguments arrive as strings, so coerce before comparing.
      return db.comments.find((comment) => comment.id === Number(id));
    },
  },
  Comment: {
    content: (comment) => {
      return comment.content;
    },
    author: (comment) => {
      return db.authors.find((author) => author.id === comment.authorId);
    },
  },
};

// Assuming you have a GraphQL server setup with Apollo Server
const { ApolloServer, gql } = require("apollo-server");

const typeDefs = gql`
  type Comment {
    id: ID!
    content: String!
    author: Author!
  }

  type Author {
    id: ID!
    name: String!
    karmaPoints: Int!
    details: String!
  }

  type Query {
    getCommentById(id: ID!): Comment
  }
`;

const server = new ApolloServer({ typeDefs, resolvers });

server.listen().then(({ url }) => {
  console.log(`Server running at ${url}`);
});
```
The Single Responsibility Principle (SRP), as demonstrated in the JavaScript code examples above, applies naturally to GraphQL resolvers. The SRP advocates keeping code modules focused on a specific data type or field, avoiding large and monolithic resolvers that handle multiple unrelated responsibilities. By adhering to the SRP, software developers can build software that is modular, maintainable, and easier to understand. Dividing functionality into smaller, well-defined units enhances code reusability, improves testability, and promotes better collaboration among team members. Embracing the SRP helps create codebases that are more scalable, extensible, and adaptable to changing requirements, ultimately leading to higher-quality software solutions.
Could not resolve “@popperjs/core”
I keep getting this error on running dev, though it doesn’t appear I’m getting it in production.
I run:

```shell
npm run dev
```

Then everything appears to be OK, with the standard message like this from Vite:
```
vite v2.9.9 dev server running at:

> Local: http://localhost:3000/
> Network: use `--host` to expose

ready in 740ms.
```
But then, once I navigate to the site to check things out.
```
X [ERROR] Could not resolve "@popperjs/core"

    node_modules/bootstrap/dist/js/bootstrap.esm.js:6:24:
      6 │ import * as Popper from '@popperjs/core';
        ╵                         ~~~~~~~~~~~~~~~~

  You can mark the path "@popperjs/core" as external to exclude it from the
  bundle, which will remove this error.
```
…and this error annoyingly crops up.
```
11:36:23 PM [vite] error while updating dependencies:
Error: Build failed with 1 error:
node_modules/bootstrap/dist/js/bootstrap.esm.js:6:24: ERROR: Could not resolve "@popperjs/core"
    at failureErrorWithLog (C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:1603:15)
    at C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:1249:28
    at runOnEndCallbacks (C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:1034:63)
    at buildResponseToResult (C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:1247:7)
    at C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:1356:14
    at C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:666:9
    at handleIncomingPacket (C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:763:9)
    at Socket.readFromStdout (C:\Users\Adron Hall\Codez\estuary\node_modules\esbuild\lib\main.js:632:7)
    at Socket.emit (events.js:315:20)
    at addChunk (_stream_readable.js:309:12)
Vite Error, /node_modules/.vite/deps/pinia.js?v=72977742 optimized info should be defined
Vite Error, /node_modules/.vite/deps/bootstrap.js?v=14c3224a optimized info should be defined
Vite Error, /node_modules/.vite/deps/pinia.js?v=72977742 optimized info should be defined
Vite Error, /node_modules/.vite/deps/pinia.js?v=72977742 optimized info should be defined (x2)
...
```
…and on and on and on go these errors. For whatever reason, even though `npm install` has been run just to get to the point of running `npm run dev`, there needs to be a subsequent, specifically executed `npm install @popperjs/core` to install this particular dependency that throws the error.
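As a possible alternative to installing the package directly, Vite can be pointed at the transitive dependency through its documented `optimizeDeps.include` option. This is a sketch; your `vite.config.js` will differ, and explicitly installing `@popperjs/core` remains the surer fix:

```javascript
// vite.config.js — hint Vite's dependency pre-bundling (esbuild) at the
// transitive dependency it failed to resolve. This assumes @popperjs/core
// is actually present in node_modules.
import { defineConfig } from "vite";

export default defineConfig({
  optimizeDeps: {
    include: ["@popperjs/core"],
  },
});
```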
So that’s it, that’s your fix. Cheers!