Mongoose and MongoDB patterns — schema design, validation, indexes, connections, pagination, and transactions
Patterns, gotchas, and performance guidance for MongoDB with Mongoose. Ordered by impact.
Mongoose pre('save') and post('save') hooks only run on document.save() and Model.create(). They do NOT run on updateOne(), findOneAndUpdate(), findByIdAndUpdate(), or any update operation.
// This pre-save hook recalculates the total:
orderSchema.pre('save', function () {
this.totalCents = this.items.reduce((sum, i) => sum + i.priceCents * i.quantity, 0);
});
// BUG: This update SKIPS the pre-save hook — totalCents is now stale
await Order.findByIdAndUpdate(orderId, { $push: { items: newItem } });

// Option A: fetch-then-save so middleware runs
const order = await Order.findById(orderId);
order.items.push(newItem);
await order.save(); // pre('save') fires, totalCents recalculated
// Option B: register a separate pre-hook for update operations
orderSchema.pre('findOneAndUpdate', async function () {
const update = this.getUpdate() as any;
if (update.$push?.items || update.$set?.items) {
const doc = await this.model.findOne(this.getFilter());
const newItems = update.$push?.items
? [...doc.items, update.$push.items]
: update.$set.items;
const total = newItems.reduce((s: number, i: any) => s + i.priceCents * i.quantity, 0);
this.set({ totalCents: total });
}
});

Rule: If a model has pre/post save hooks with important logic, either use save() or register equivalent hooks for findOneAndUpdate / updateOne / updateMany.
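The recalculation inside the hook is plain arithmetic, so one way to keep the save path and the update path from drifting apart is to extract it into a shared helper that both hooks call. A minimal sketch (the name computeTotalCents is illustrative, not from the Mongoose API):

```typescript
interface OrderItem {
  priceCents: number;
  quantity: number;
}

// Same reduce as the pre('save') hook above, extracted so both the
// save hook and the findOneAndUpdate hook can call it, and so it can
// be unit-tested without a database.
function computeTotalCents(items: OrderItem[]): number {
  return items.reduce((sum, i) => sum + i.priceCents * i.quantity, 0);
}
```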
runValidators: true on Updates

By default, Mongoose update operations (findByIdAndUpdate, updateOne, etc.) do NOT run schema validators. This allows invalid data into the database silently.
const userSchema = new Schema({
email: { type: String, required: true, match: /^[^\s@]+@[^\s@]+\.[^\s@]+$/ },
role: { type: String, enum: ['admin', 'user', 'moderator'] },
});
// BUG: This succeeds even though 'superadmin' is not in the enum
await User.findByIdAndUpdate(userId, { role: 'superadmin' });

// Fix: pass runValidators: true
await User.findByIdAndUpdate(
userId,
{ role: 'superadmin' },
{ new: true, runValidators: true } // throws ValidationError for invalid enum
);
// Or set it globally so you never forget:
mongoose.set('runValidators', true);

Rule: Always pass { runValidators: true } on every update call, or set the global default.
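If you can't set the flag globally, a low-tech safeguard is a tiny options helper that every update call goes through, so the safe defaults are on unless explicitly overridden. A sketch under that assumption (withValidation is a hypothetical name, not a Mongoose API):

```typescript
// Hypothetical helper: caller options are spread last, so they can
// still override the defaults when needed.
function withValidation<T extends object>(options?: T) {
  return { runValidators: true, new: true, ...(options ?? {}) };
}

// Usage (sketch): User.findByIdAndUpdate(id, update, withValidation());
```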
.lean() for Read-Only Queries

Mongoose documents carry change tracking, getters/setters, and method overhead. For read-only operations (API responses, reports, exports), use .lean() to return plain JavaScript objects. This is 5-10x faster and uses significantly less memory.
// Returns full Mongoose documents with change tracking overhead
app.get('/api/products', async (req, res) => {
const products = await Product.find({ active: true });
res.json(products); // toJSON() called on each document — slow
});

app.get('/api/products', async (req, res) => {
const products = await Product.find({ active: true }).lean();
res.json(products); // Already plain objects — fast
});
// NOTE: .lean() documents don't have .save(), virtuals, or instance methods.
// Only use .lean() when you don't need to modify and save the document.

Rule: Default to .lean() for any query whose results are read-only (API responses, templates, reports). Only omit .lean() when you need to call .save() or use instance methods.
Create compound indexes that match your query filter + sort patterns. Field order matters — put equality filters first, then sort fields.
const orderSchema = new Schema({
customerName: { type: String, required: true, trim: true, maxlength: 100 },
status: {
type: String,
required: true,
enum: ['received', 'preparing', 'ready', 'picked_up', 'cancelled'],
default: 'received',
},
items: [{
menuItemId: { type: Schema.Types.ObjectId, ref: 'MenuItem', required: true },
size: { type: String, required: true, enum: ['small', 'medium', 'large'] },
quantity: { type: Number, required: true, min: 1, max: 20 },
priceCents: { type: Number, required: true, min: 0 },
}],
totalCents: { type: Number, required: true, min: 0 },
}, {
timestamps: true, // Auto createdAt + updatedAt — never manage these manually
});
// Compound index: queries filtering by status and sorting by createdAt use this
orderSchema.index({ status: 1, createdAt: -1 });
// Unique index to prevent duplicates
userSchema.index({ email: 1 }, { unique: true });
// TTL index — auto-delete documents after 30 days (e.g., sessions, logs)
sessionSchema.index({ createdAt: 1 }, { expireAfterSeconds: 30 * 24 * 60 * 60 });

// BUG: Race condition — two requests can create duplicate emails
const existing = await User.findOne({ email });
if (!existing) {
await User.create({ email, name }); // duplicate possible between check and create
}

userSchema.index({ email: 1 }, { unique: true });
// Now duplicates throw a MongoServerError with code 11000
try {
await User.create({ email, name });
} catch (err: any) {
if (err.code === 11000) {
throw new ConflictError('Email already exists');
}
throw err;
}

Schema quality checklist:

- timestamps: true on all schemas — auto-manages createdAt/updatedAt
- required: true on all non-optional fields
- enum for finite value sets — validates at schema level
- trim: true on string fields — strips whitespace
- unique indexes for uniqueness constraints — not just application-level checks

// BAD: Separate collection for address — always fetched with user, never standalone
const addressSchema = new Schema({ street: String, city: String, zip: String });
const Address = mongoose.model('Address', addressSchema);
const userSchema = new Schema({
name: String,
address: { type: Schema.Types.ObjectId, ref: 'Address' }, // unnecessary ref
});
// Requires an extra query (populate) every time you fetch a user
const user = await User.findById(id).populate('address');

const userSchema = new Schema({
name: { type: String, required: true },
address: {
street: { type: String, required: true },
city: { type: String, required: true },
zip: { type: String, required: true, match: /^\d{5}(-\d{4})?$/ },
},
});
// Single query, no populate needed
const user = await User.findById(id);

Rule: If the child data has no meaning without the parent and is bounded in size, embed it. If it's shared, unbounded, or independently queried, use a reference.
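The zip field in the embedded schema above shows format validation at the schema level. The same pattern can be checked in isolation (the regex below is copied from the schema; isValidZip is just an illustrative wrapper):

```typescript
// ZIP pattern from the embedded address schema: exactly 5 digits,
// optionally followed by a hyphen and 4 more digits (ZIP+4)
const ZIP_PATTERN = /^\d{5}(-\d{4})?$/;

function isValidZip(zip: string): boolean {
  return ZIP_PATTERN.test(zip);
}
```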
// BAD: no pool options, no error handlers, no shutdown handling
mongoose.connect('mongodb://localhost:27017/myapp');
// App runs, connection drops silently, requests hang

import mongoose from 'mongoose';
const MONGODB_URI = process.env.MONGODB_URI || 'mongodb://localhost:27017/myapp';
await mongoose.connect(MONGODB_URI, {
maxPoolSize: 10, // connection pool size
serverSelectionTimeoutMS: 5000, // fail fast if no server
socketTimeoutMS: 45000, // close sockets after 45s inactivity
});
mongoose.connection.on('error', (err) => {
console.error('MongoDB connection error:', err);
});
mongoose.connection.on('disconnected', () => {
console.warn('MongoDB disconnected — attempting reconnect');
});
// Graceful shutdown — close connection before process exits
for (const signal of ['SIGTERM', 'SIGINT'] as const) {
process.on(signal, async () => {
await mongoose.connection.close();
process.exit(0);
});
}

Rule: Always configure maxPoolSize, handle error/disconnect events, use environment variables for the URI, and close the connection on process shutdown.
skip(N) scans and discards N documents — O(N) performance that degrades as pages increase. Use cursor-based pagination for large collections.
// Page 1000 skips 999,000 documents — extremely slow
const page = parseInt(req.query.page) || 1;
const results = await Product.find()
.sort({ createdAt: -1 })
.skip((page - 1) * 20)
.limit(20);

const limit = 20;
const cursor = req.query.cursor; // last document's _id from previous page
const query: any = {};
if (cursor) {
query._id = { $lt: new mongoose.Types.ObjectId(cursor) };
}
const results = await Product.find(query)
.sort({ _id: -1 })
.limit(limit + 1) // fetch one extra to check if there's a next page
.lean();
const hasNextPage = results.length > limit;
if (hasNextPage) results.pop();
res.json({
data: results,
nextCursor: hasNextPage ? results[results.length - 1]._id : null,
});

Rule: Use skip/limit only for small collections or early pages (< 1000 skipped docs). For large datasets, use cursor-based pagination with an indexed field.
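The cursor mechanics (fetch limit + 1, pop the extra, emit nextCursor) are independent of Mongoose, so they can be sketched over any array already sorted descending; paginate below is a hypothetical helper, with string comparison standing in for the ObjectId $lt filter:

```typescript
interface Page<T> {
  data: T[];
  nextCursor: string | null;
}

// Pure sketch of the limit+1 pattern over an array sorted descending
// by id (stands in for .sort({ _id: -1 }))
function paginate<T extends { id: string }>(
  sorted: T[],
  limit: number,
  cursor?: string
): Page<T> {
  // Emulates the { _id: { $lt: cursor } } filter
  const filtered = cursor ? sorted.filter((d) => d.id < cursor) : sorted;
  const results = filtered.slice(0, limit + 1); // fetch one extra
  const hasNextPage = results.length > limit;
  if (hasNextPage) results.pop();
  return {
    data: results,
    nextCursor: hasNextPage ? results[results.length - 1].id : null,
  };
}
```

Each response hands the client the last id it saw; the next request filters strictly below that id, so no documents are scanned and discarded the way skip(N) does.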
MongoDB transactions require a replica set (standalone servers don't support them). Use a session with withTransaction() for multi-document atomic operations.
// BUG: If the second operation fails, the first is not rolled back
await Account.updateOne({ _id: fromId }, { $inc: { balance: -amount } });
await Account.updateOne({ _id: toId }, { $inc: { balance: amount } });

const session = await mongoose.startSession();
try {
await session.withTransaction(async () => {
await Account.updateOne(
{ _id: fromId },
{ $inc: { balance: -amount } },
{ session }
);
await Account.updateOne(
{ _id: toId },
{ $inc: { balance: amount } },
{ session }
);
});
} finally {
await session.endSession();
}

Rule: Use transactions (startSession + withTransaction) for any operation that must atomically update multiple documents. Always pass { session } to every operation inside the transaction and call endSession() in a finally block.
Checklist:

- timestamps: true on all schemas
- required: true on non-optional fields
- enum for finite value sets
- trim: true on string fields
- .lean() on all read-only queries
- runValidators: true on all update operations (or set globally)
- save() used when pre/post save middleware must run (not updateOne/findByIdAndUpdate)
- maxPoolSize, error handlers, and graceful shutdown configured for the connection
- Transactions (startSession + withTransaction) for multi-document atomicity
- { session } passed to every operation inside a transaction