I Replaced MongoDB with a Single Postgres Table

Written by Shayan Taslim

Look, I need to show you something that might hurt your feelings:

CREATE TABLE MongoDB (
  _id  UUID PRIMARY KEY,
  data JSONB
);

Before you @ me in the comments, hear me out. What if I told you that 90% of your NoSQL use cases could be handled by this one weird Postgres trick that database admins don’t want you to know about?

The Problem: Why We Think We Need NoSQL

We’ve all been there. It’s 2 AM, you’re knee-deep in a schema migration, and you’re questioning every life choice that led you to this moment. Your product manager just asked for “just one more field” and now you’re writing migration scripts like it’s 2009.

“Maybe I should just use MongoDB,” you whisper to yourself. “Schema flexibility! No migrations! Document storage!”

But here’s the thing: You probably don’t need MongoDB. You need JSONB.

Enter JSONB: The Hero We Don’t Deserve

JSONB isn’t just JSON slapped into a Postgres column. Oh no, my friend. It’s JSON’s cooler, faster, more attractive older sibling who went to the gym and learned how to use indexes.

Here’s what makes JSONB special:

  • Binary storage format (the B stands for Binary, not 🐝)
  • GIN indexes that make queries stupid fast
  • Native operators that would make a JavaScript developer weep with joy
  • Full SQL power combined with NoSQL flexibility

It’s like MongoDB and Postgres had a baby, and that baby grew up to be a superhero.
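
If you want to see the "binary" part for yourself, here's a minimal check you can paste into psql: jsonb parses the value once into a binary form (dropping insignificant whitespace, deduplicating keys, ordering keys), while plain json stores the input text verbatim.

-- json keeps the text exactly as written; jsonb normalizes it
SELECT '{"b": 2,  "a": 1, "a": 3}'::json  AS as_json,
       '{"b": 2,  "a": 1, "a": 3}'::jsonb AS as_jsonb;
-- as_json:  {"b": 2,  "a": 1, "a": 3}
-- as_jsonb: {"a": 3, "b": 2}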

Mind-Blowing Features Most Devs Don’t Know About

The Operators That Will Change Your Life

-- The containment operator @>
-- "Does this JSON contain this structure?"
SELECT * FROM users
WHERE preferences @> '{"theme": "dark"}';

-- The existence operator ?
-- "Does this key exist?"
SELECT * FROM products
WHERE attributes ? 'wireless';

-- The arrow operators -> and ->>
-- -> returns JSON, ->> returns text
SELECT
  data->>'name' AS name,
  data->'address'->>'city' AS city
FROM users;

-- The path operator #>
-- Navigate deep into nested JSON
SELECT * FROM events
WHERE data #> '{user,settings,notifications}' = 'true';

Indexing Specific JSON Paths (Wait, What?)

This is where things get spicy. You can create indexes on specific paths within your JSON:

-- Index a specific field
CREATE INDEX idx_user_email ON users ((data->>'email'));

-- Index for existence queries
CREATE INDEX idx_attributes ON products
USING GIN (attributes);

-- Index for containment queries
CREATE INDEX idx_preferences ON users
USING GIN (preferences);

Now your JSON queries are faster than your coworker who claims they “don’t need indexes because MongoDB handles it.”
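
One gotcha worth spelling out: an expression index only helps when the query repeats the exact same expression. Roughly, these are the query shapes the three indexes above can serve (table and column names taken from the snippets above); run them under EXPLAIN to confirm the planner actually picks the index on your data.

-- Matches the expression index idx_user_email (same expression: data->>'email')
SELECT * FROM users WHERE data->>'email' = 'john@example.com';

-- Matches the GIN index on attributes (existence operator)
SELECT * FROM products WHERE attributes ? 'wireless';

-- Matches the GIN index on preferences (containment operator)
SELECT * FROM users WHERE preferences @> '{"theme": "dark"}';

-- Check the plan, e.g.:
EXPLAIN SELECT * FROM users WHERE preferences @> '{"theme": "dark"}';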

Full-Text Search Inside JSON 🤯

Hold onto your keyboards:

-- Add full-text search to JSON fields
CREATE INDEX idx_content_search ON articles
USING GIN (to_tsvector('english', data->>'content'));

-- Search like a boss
SELECT * FROM articles
WHERE to_tsvector('english', data->>'content') @@ plainto_tsquery('postgres jsonb amazing');

Real Code Examples (The Meat)

Let’s start with something practical. Say you’re building a SaaS product (like UserJot - shameless plug for my feedback management tool) and you need to store user preferences:

-- The hybrid approach: structured + flexible
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email TEXT NOT NULL UNIQUE,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  preferences JSONB DEFAULT '{}',
  metadata JSONB DEFAULT '{}'
);

-- Insert a user with preferences
INSERT INTO users (email, preferences)
VALUES (
  'john@example.com',
  '{
    "theme": "dark",
    "notifications": {
      "email": true,
      "push": false,
      "frequency": "daily"
    },
    "features": {
      "beta": true,
      "advancedAnalytics": false
    }
  }'
);

-- Query users who have dark theme AND email notifications
SELECT email
FROM users
WHERE preferences @> '{"theme": "dark", "notifications": {"email": true}}';

-- Update nested preferences
UPDATE users
SET preferences = jsonb_set(
  preferences,
  '{notifications,push}',
  'true'
)
WHERE email = 'john@example.com';

The Event Log Pattern (Chef’s Kiss)

This is where JSONB absolutely shines:

CREATE TABLE events (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  event_type TEXT NOT NULL,
  user_id UUID,
  occurred_at TIMESTAMPTZ DEFAULT NOW(),
  data JSONB NOT NULL
);

-- Index for fast event type + data queries
CREATE INDEX idx_events_type_data ON events (event_type)
WHERE event_type IN ('purchase', 'signup', 'feedback');

CREATE INDEX idx_events_data ON events
USING GIN (data);

-- Insert different event types with different schemas
-- (user_id is a UUID column, so the values need to be real UUIDs)
INSERT INTO events (event_type, user_id, data)
VALUES
('signup', '11111111-1111-1111-1111-111111111111', '{
  "source": "google",
  "campaign": "summer-2024",
  "referrer": "blog-post"
}'),
('purchase', '11111111-1111-1111-1111-111111111111', '{
  "items": [
    {"sku": "PROD-1", "quantity": 2, "price": 49.99},
    {"sku": "PROD-2", "quantity": 1, "price": 19.99}
  ],
  "discount": "SUMMER20",
  "total": 99.97
}'),
('feedback', '11111111-1111-1111-1111-111111111111', '{
  "type": "feature_request",
  "title": "Add dark mode",
  "priority": "high",
  "tags": ["ui", "accessibility"]
}');

-- Find all purchases with a specific discount
SELECT * FROM events
WHERE event_type = 'purchase'
AND data @> '{"discount": "SUMMER20"}';

-- Calculate total revenue from events
SELECT SUM((data->>'total')::NUMERIC) AS total_revenue
FROM events
WHERE event_type = 'purchase'
AND occurred_at >= NOW() - INTERVAL '30 days';

Product Catalog with Dynamic Attributes

This is the example that makes MongoDB developers question everything:

CREATE TABLE products (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  name TEXT NOT NULL,
  price NUMERIC(10,2) NOT NULL,
  attributes JSONB DEFAULT '{}'
);

-- Insert products with completely different attributes
INSERT INTO products (name, price, attributes)
VALUES
('iPhone 15', 999.00, '{
  "brand": "Apple",
  "storage": "256GB",
  "color": "Blue",
  "5g": true,
  "screen": {
    "size": "6.1 inches",
    "type": "OLED",
    "resolution": "2532x1170"
  }
}'),
('Nike Air Max', 120.00, '{
  "brand": "Nike",
  "size": "10",
  "color": "Black/White",
  "material": "Mesh",
  "style": "Running"
}'),
('The Pragmatic Programmer', 39.99, '{
  "author": "David Thomas",
  "isbn": "978-0135957059",
  "pages": 352,
  "publisher": "Addison-Wesley",
  "edition": "2nd"
}');

-- Find all products with 5G
SELECT name, price
FROM products
WHERE attributes @> '{"5g": true}';

-- Find products by brand
SELECT * FROM products
WHERE attributes->>'brand' = 'Apple';

-- Complex query: Find all products with screens larger than 6 inches
-- (the stored size is text like "6.1 inches", so strip the unit before casting)
SELECT name, attributes->'screen'->>'size' AS screen_size
FROM products
WHERE split_part(attributes->'screen'->>'size', ' ', 1)::FLOAT > 6.0;

When JSONB Absolutely Destroys (Use Cases)

Here’s where you should absolutely use JSONB:

  1. User Preferences/Settings: Every user wants different things. Don’t create 50 boolean columns.

  2. Event Logs: Different events = different data. JSONB handles it like a champ.

  3. Product Catalogs: Books have ISBNs, shoes have sizes, phones have screen resolutions. One schema to rule them all.

  4. API Response Caching: Store that third-party API response without parsing it (there’s a sketch of this right after this list).

  5. Form Submissions: Especially when you’re building something like UserJot where user feedback can have custom fields.

  6. Feature Flags & Configuration:

CREATE TABLE feature_flags (
  key TEXT PRIMARY KEY,
  config JSONB
);

INSERT INTO feature_flags
VALUES ('new_dashboard', '{
  "enabled": true,
  "rollout_percentage": 25,
  "whitelist_users": ["user-123", "user-456"],
  "blacklist_countries": ["XX"],
  "start_date": "2024-01-01",
  "end_date": null
}');
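
And for the API-response-caching case (number 4 above), a minimal sketch might look like the following. The table, column names, and example payload are all hypothetical; the point is just that the raw response lands in a JSONB column next to a cache key and an expiry.

CREATE TABLE api_cache (
  cache_key  TEXT PRIMARY KEY,        -- e.g. the request URL or a hash of it
  response   JSONB NOT NULL,          -- the third-party payload, stored as-is
  fetched_at TIMESTAMPTZ DEFAULT NOW(),
  expires_at TIMESTAMPTZ NOT NULL
);

-- Upsert the latest response for a key
INSERT INTO api_cache (cache_key, response, expires_at)
VALUES ('github:/repos/postgres/postgres', '{"stargazers_count": 12345}', NOW() + INTERVAL '10 minutes')
ON CONFLICT (cache_key) DO UPDATE
SET response = EXCLUDED.response,
    fetched_at = NOW(),
    expires_at = EXCLUDED.expires_at;

-- Read a fresh entry, and still query inside the cached payload
SELECT response->>'stargazers_count' AS stars
FROM api_cache
WHERE cache_key = 'github:/repos/postgres/postgres'
AND expires_at > NOW();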

The Plot Twist: When You Still Need Real Columns

Let’s be real for a second. JSONB isn’t always the answer. Here’s when you should use regular columns:

  • Foreign Keys: You can’t reference JSONB fields in foreign key constraints
  • Heavy Aggregations: SUM, AVG, COUNT on JSONB fields are slower
  • Frequent Updates: Updating a single JSONB field rewrites the entire JSON
  • Type Safety: When you REALLY need that data to be an integer

The secret sauce? A hybrid approach:

CREATE TABLE orders (
  id UUID PRIMARY KEY,
  user_id UUID REFERENCES users(id),   -- Real FK
  total NUMERIC(10,2) NOT NULL,        -- For fast aggregations
  status TEXT NOT NULL,                -- For indexed lookups
  created_at TIMESTAMPTZ DEFAULT NOW(),
  line_items JSONB,                    -- Flexible item details
  metadata JSONB                       -- Everything else
);
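
The payoff of the hybrid layout is that the hot paths stay on real, typed columns while the long tail stays flexible. A couple of illustrative queries against that orders table (the gift_wrap key in metadata is made up for the example):

-- Fast, typed aggregation on a real column, simple filter on created_at
SELECT status, SUM(total) AS revenue
FROM orders
WHERE created_at >= NOW() - INTERVAL '30 days'
GROUP BY status;

-- ...while still digging into the flexible part when you need to
SELECT id, item->>'sku' AS sku, (item->>'quantity')::INT AS quantity
FROM orders,
     jsonb_array_elements(line_items) AS item
WHERE metadata @> '{"gift_wrap": true}';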

The Grand Finale: Migration Strategy

Here’s how to migrate from MongoDB to Postgres/JSONB:

# Pseudo-code for the brave
import psycopg2
from psycopg2.extras import Json
from pymongo import MongoClient

# Connect to both
mongo = MongoClient('mongodb://localhost:27017/')
postgres = psycopg2.connect("postgresql://...")
cursor = postgres.cursor()

# Migrate with style
for doc in mongo.mydb.mycollection.find():
    doc_id = str(doc.pop('_id'))  # keep Mongo's _id as the Postgres key
    cursor.execute(
        "INSERT INTO my_table (id, data) VALUES (%s, %s)",
        (doc_id, Json(doc))
    )

postgres.commit()
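
One optional follow-up once the data is in: if a handful of fields turn out to be hot, you don’t have to re-migrate, just promote them. A hedged sketch, assuming Postgres 12+ and the my_table layout above (the email field is only an example):

-- Promote a frequently-queried field to a real, indexable column
ALTER TABLE my_table
  ADD COLUMN email TEXT GENERATED ALWAYS AS (data->>'email') STORED;

CREATE INDEX idx_my_table_email ON my_table (email);

-- Queries now get a typed column, while the document stays the source of truth
SELECT id, email FROM my_table WHERE email = 'john@example.com';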

Try This One Query and Tell Me It’s Not Magic

Here’s your homework. Create this table and run this query:

-- Create a table
CREATE TABLE magic (
  id SERIAL PRIMARY KEY,
  data JSONB
);

-- Insert nested, complex data
INSERT INTO magic (data)
VALUES
('{"user": {"name": "Alice", "scores": [10, 20, 30], "preferences": {"level": "expert"}}}'),
('{"user": {"name": "Bob", "scores": [5, 15, 25], "preferences": {"level": "beginner"}}}');

-- Mind-blowing query: Find users with average score > 15 AND expert level
SELECT
  data->'user'->>'name' AS name,
  (SELECT AVG(value::INT)
   FROM jsonb_array_elements_text(data->'user'->'scores') AS value) AS avg_score
FROM magic
WHERE data @> '{"user": {"preferences": {"level": "expert"}}}'
AND (
  SELECT AVG(value::INT)
  FROM jsonb_array_elements_text(data->'user'->'scores') AS value
) > 15;

If that doesn’t make you reconsider your MongoDB addiction, I don’t know what will.

Bonus: The Ultimate JSONB Cheat Sheet

-- Operators
@>  -- Contains
<@  -- Is contained by
?   -- Key exists
?|  -- Any key exists
?&  -- All keys exist
||  -- Concatenate
-   -- Delete key/element
#-  -- Delete at path

-- Functions
jsonb_set()          -- Update value at path
jsonb_insert()       -- Insert value at path
jsonb_strip_nulls()  -- Remove null values
jsonb_pretty()       -- Format for humans
jsonb_agg()          -- Aggregate into array
jsonb_object_agg()   -- Aggregate into object

-- Performance tips
1. Use GIN indexes for @> and ? operators
2. Use btree indexes for ->> on specific fields
3. Partial indexes for common queries
4. Don't nest more than 3-4 levels deep
5. Keep JSONB documents under 1MB
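
And since a cheat sheet only sticks once you’ve seen the operators in action, here are a few one-liners you can paste into psql to try them (all literals, no tables required):

-- || concatenates; on key conflicts the right-hand value wins
SELECT '{"a": 1}'::jsonb || '{"b": 2, "a": 9}'::jsonb;       -- {"a": 9, "b": 2}

-- - deletes a key, #- deletes at a path
SELECT '{"a": 1, "b": 2}'::jsonb - 'b';                      -- {"a": 1}
SELECT '{"a": {"b": 1, "c": 2}}'::jsonb #- '{a,b}';          -- {"a": {"c": 2}}

-- jsonb_set updates at a path, jsonb_pretty formats for humans
SELECT jsonb_pretty(jsonb_set('{"a": {"b": 1}}'::jsonb, '{a,b}', '42'));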

The Real Talk

Look, I’m not saying MongoDB is bad. It has its place. But before you reach for a separate NoSQL database, ask yourself: Could JSONB do this?

9 times out of 10, the answer is yes. And you get to keep:

  • ACID transactions
  • Joins when you need them
  • Your existing Postgres knowledge
  • One less database to manage
  • Money in your pocket (Postgres is free!)

At UserJot, we use JSONB extensively for storing user feedback metadata, custom fields, and integration configurations. It gives us MongoDB-like flexibility with Postgres reliability. Best of both worlds.

Now go forth and @> all the things! Drop a comment with your wildest JSONB use case. I’ll be here, answering questions and probably making more bad database jokes.


P.S. - That MongoDB table at the beginning? It actually works. I’m not saying you should use it, but… you could. 😈

P.P.S. - If you’re collecting user feedback and want something better than a JSONB column (though honestly, JSONB would work), check out UserJot. We built it with lots of JSONB magic under the hood.

