
7 JSON Tricks That Will Save You Hours

Toova

Every JavaScript developer uses JSON.parse and JSON.stringify dozens of times a week. But most stop at the basics — parsing API responses and serializing objects to strings. The API has more to offer, and some of the less-used features solve problems that would otherwise require third-party libraries or hours of debugging.

This guide covers seven tricks that go beyond the defaults. Each one is immediately applicable to real codebases. No abstractions, no invented examples — these are patterns that come up in production systems.

You can verify and explore any of the code examples in this article using the Toova JSON Formatter, which validates and pretty-prints JSON entirely in your browser.

1. Deep Clone with JSON Round-Trip (and Its Limits)

The oldest trick in the JavaScript playbook: use JSON.parse(JSON.stringify(obj)) to create a deep clone of a plain object.

const original = { a: 1, b: { c: 2 } };

// Naive approach — deeply nested objects are still shared
const shallowCopy = { ...original }; // b still points to same object

// JSON deep clone — creates a completely independent copy
const deepClone = JSON.parse(JSON.stringify(original));

deepClone.b.c = 99;
console.log(original.b.c); // 2 — original untouched

This works because serializing to a string and parsing back creates an entirely new object tree with no shared references. It is fast, requires no dependencies, and has been available since ES5.

The modern alternative for in-memory use is structuredClone():

// Modern alternative: structuredClone() — handles more types
// Supported in Node.js 17+ and all evergreen browsers
const clone = structuredClone(original);

Know the limits of the JSON approach before relying on it:

// JSON.parse/stringify CANNOT handle:
const broken = {
  date: new Date(),      // becomes a string — loses Date prototype
  fn: () => 'hello',    // dropped silently
  undef: undefined,     // dropped silently
  inf: Infinity,        // becomes null
  map: new Map(),       // becomes {}
  cycle: null,          // circular refs throw
};

If your object only contains plain data — strings, numbers, booleans, arrays, and nested plain objects — the JSON round-trip is safe and fast. For anything else, prefer structuredClone() or a dedicated library.
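The difference is easy to demonstrate on an object that carries a Date and a Map (a small sketch; requires structuredClone, i.e. Node.js 17+ or a modern browser):

```javascript
// Compare the JSON round-trip and structuredClone on non-plain data
const record = {
  when: new Date('2024-01-15T00:00:00Z'),
  counts: new Map([['a', 1]]),
};

const jsonClone = JSON.parse(JSON.stringify(record));
console.log(jsonClone.when instanceof Date); // false: now an ISO string
console.log(jsonClone.counts);               // {}: Map content lost

const structClone = structuredClone(record);
console.log(structClone.when instanceof Date); // true
console.log(structClone.counts.get('a'));      // 1
```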

2. Custom Replacer for Filtering and Masking

Most developers know that JSON.stringify takes a second argument, but rarely use it. That second argument is the replacer: either an array of keys to include, or a function that controls exactly how each value is serialized.

Array replacer — whitelist specific keys:

const user = {
  id: 'u_001',
  name: 'Alice',
  password: 'hunter2',        // must not appear in logs
  creditCard: '4111111111111111', // must not appear in logs
  role: 'admin',
};

// Replacer as array: only include these keys
JSON.stringify(user, ['id', 'name', 'role']);
// '{"id":"u_001","name":"Alice","role":"admin"}'

Function replacer — transform or redact values:

// Replacer as function: full control over key/value
const masked = JSON.stringify(user, (key, value) => {
  if (key === 'password' || key === 'creditCard') return '[REDACTED]';
  return value; // everything else, including the root (key === ''), passes through
});
// '{"id":"u_001","name":"Alice","password":"[REDACTED]","creditCard":"[REDACTED]","role":"admin"}'

A more sophisticated version masks values based on their shape rather than their key name:

// Replacer for type-based masking
const sanitize = (key, value) => {
  if (typeof value === 'string' && value.match(/^4[0-9]{15}$/)) {
    return '****-****-****-' + value.slice(-4);
  }
  return value;
};

This technique is indispensable for logging middleware: you want structured logs with full object context, but certain fields must never reach a log aggregator. The replacer lets you handle this at the serialization boundary rather than scattering redaction logic across the codebase.
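As a sketch of what that boundary might look like, here is a minimal logging helper; toLogLine and SENSITIVE_KEYS are illustrative names, not part of any logging library:

```javascript
// Hypothetical logging helper: all redaction lives in one place,
// applied at the moment objects are serialized for the log
const SENSITIVE_KEYS = new Set(['password', 'creditCard', 'token']);

function toLogLine(obj) {
  return JSON.stringify(obj, (key, value) =>
    SENSITIVE_KEYS.has(key) ? '[REDACTED]' : value
  );
}

console.log(toLogLine({ id: 'u_001', password: 'hunter2' }));
// {"id":"u_001","password":"[REDACTED]"}
```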

3. Sort Keys Deterministically

Object key order in JavaScript is insertion order (for string keys). Two objects with the same keys but created in different orders produce different JSON strings, which breaks naive equality checks, cache keys, and content hashes.

function sortedStringify(obj) {
  return JSON.stringify(obj, Object.keys(obj).sort());
}

const a = { z: 1, a: 2, m: 3 };
const b = { a: 2, m: 3, z: 1 };

sortedStringify(a) === sortedStringify(b); // true — key order normalized

For nested objects, apply the sorting recursively instead; note that an array replacer acts as a key whitelist at every depth, so the shortcut above would silently drop keys that appear only in nested objects:

// Recursive key sorting for nested objects
function sortKeys(value) {
  if (Array.isArray(value)) return value.map(sortKeys);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.keys(value).sort().map((k) => [k, sortKeys(value[k])])
    );
  }
  return value;
}

const sorted = JSON.stringify(sortKeys(deepNested));

Sorted JSON is essential when you are:

  • Generating cache keys from request bodies
  • Computing checksums or signatures over JSON payloads
  • Comparing API responses in tests regardless of field order
  • Storing configuration objects where order should not affect equality

After sorting and pretty-printing, use the Text Diff tool to compare two normalized JSON strings and see exactly which values changed between versions.
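As a concrete sketch of the cache-key use case (cacheKey is an illustrative name, and the sortKeys helper from above is repeated to keep the snippet self-contained; in production you would typically hash the resulting string rather than use it directly):

```javascript
// Deterministic cache key: two equivalent request bodies
// produce byte-identical JSON regardless of key order
function sortKeys(value) {
  if (Array.isArray(value)) return value.map(sortKeys);
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.keys(value).sort().map((k) => [k, sortKeys(value[k])])
    );
  }
  return value;
}

function cacheKey(body) {
  return JSON.stringify(sortKeys(body));
}

cacheKey({ page: 2, filter: { max: 10, min: 1 } }) ===
  cacheKey({ filter: { min: 1, max: 10 }, page: 2 }); // true
```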

4. Handling BigInt Without Losing Precision

JavaScript's Number type can safely represent integers up to 2⁵³ − 1. For IDs generated by distributed systems, financial amounts in minor units, or timestamps in nanoseconds, this is not enough. BigInt covers arbitrary-precision integers, but JSON.stringify does not know what to do with them.

const data = {
  amount: 9007199254740993n, // larger than Number.MAX_SAFE_INTEGER
};

// This throws: TypeError: Do not know how to serialize a BigInt
JSON.stringify(data); // ERROR

The standard workarounds:

// Solution 1: Convert to string with a replacer
JSON.stringify(data, (key, value) =>
  typeof value === 'bigint' ? value.toString() : value
);
// '{"amount":"9007199254740993"}'

// Solution 2: toJSON() on the BigInt prototype (monkey-patch — use with caution)
BigInt.prototype.toJSON = function () { return this.toString(); };
JSON.stringify(data); // '{"amount":"9007199254740993"}'

On the parsing side, a reviver function can restore BigInt values from their string representation:

// Reviver to restore BigInt on parse
const revived = JSON.parse('{"amount":"9007199254740993"}', (key, value) => {
  if (key === 'amount') return BigInt(value);
  return value;
});
console.log(typeof revived.amount); // 'bigint'

Keep the BigInt-to-string conversion in a shared serialization layer so it is applied consistently. Letting BigInts leak through to ad hoc JSON.stringify calls at the edges of a codebase leads to unpredictable errors that are hard to trace.
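One sketch of such a shared layer, pairing the replacer and reviver; the trailing-"n" sentinel is an illustrative convention, not a standard, and would misfire on real strings that happen to match it:

```javascript
// Shared serialization layer: BigInts round-trip as strings
// with an "n" suffix marking them for the reviver
function encode(obj) {
  return JSON.stringify(obj, (key, value) =>
    typeof value === 'bigint' ? value.toString() + 'n' : value
  );
}

function decode(text) {
  return JSON.parse(text, (key, value) =>
    typeof value === 'string' && /^-?\d+n$/.test(value)
      ? BigInt(value.slice(0, -1))
      : value
  );
}

const roundTripped = decode(encode({ amount: 9007199254740993n }));
console.log(typeof roundTripped.amount); // 'bigint'
```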

5. Circular Reference Detection

A circular reference occurs when an object contains a reference to itself or to an ancestor in the object graph. Circular references are more common than you might think: event emitters, DOM nodes, React fiber nodes, and ORM entities all frequently have back-references.

// Circular reference example
const obj = { name: 'node' };
obj.self = obj; // obj references itself

JSON.stringify(obj); // throws: TypeError: Converting circular structure to JSON

Handle it with a custom replacer that tracks visited objects:

// Manual circular reference handler
function safeStringify(obj) {
  const seen = new WeakSet();
  return JSON.stringify(obj, (key, value) => {
    if (typeof value === 'object' && value !== null) {
      if (seen.has(value)) return '[Circular]';
      seen.add(value);
    }
    return value;
  });
}

safeStringify(obj); // '{"name":"node","self":"[Circular]"}'

The WeakSet is the right data structure here: it holds object references without preventing garbage collection, and lookup is O(1). This pattern also works in logging middleware where you want errors to degrade gracefully rather than throwing during serialization.

A common variation: instead of marking circular references as [Circular], replace them with a path string like [Circular: $.config.parent] to make the reference location explicit during debugging.
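A sketch of that variation, tracking each object's path in a WeakMap keyed by object (the replacer must be a regular function so that `this` refers to the holder object):

```javascript
// Variation: report where the circular reference points,
// not just that one exists
function safeStringifyWithPath(obj) {
  const paths = new WeakMap([[obj, '$']]);
  return JSON.stringify(obj, function (key, value) {
    if (typeof value === 'object' && value !== null && key !== '') {
      // Seen before: emit the path where it was first serialized
      if (paths.has(value)) return '[Circular: ' + paths.get(value) + ']';
      paths.set(value, (paths.get(this) ?? '$') + '.' + key);
    }
    return value;
  });
}

const node = { name: 'node' };
node.self = node;
safeStringifyWithPath(node); // '{"name":"node","self":"[Circular: $]"}'
```

Like the WeakSet version above, this sketch also flags repeated non-circular references; a stricter implementation would track only the current ancestor chain.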

6. Pretty-Print Arrays on a Single Line

The third argument to JSON.stringify is space, the indentation. Passing 2 or 4 expands everything onto multiple lines, which is great for objects but can be verbose for arrays of primitives like tags, IDs, or coordinates.

const mixed = {
  title: 'Report',
  tags: ['json', 'api', 'debug'],
  config: { indent: 2, sortKeys: true },
  count: 42,
};

// Default: everything multi-line
JSON.stringify(mixed, null, 2);
// {
//   "title": "Report",
//   "tags": [
//     "json",
//     "api",
//     "debug"
//   ],
//   "config": {
//     "indent": 2,
//     "sortKeys": true
//   },
//   "count": 42
// }

You can post-process the output to collapse simple arrays onto a single line:

// Post-process the pretty-printed output, collapsing arrays that
// contain only primitives onto a single line
function prettyMixed(obj) {
  const raw = JSON.stringify(obj, null, 2);
  return raw.replace(
    /\[\n\s+([\s\S]*?)\n\s+\]/g,
    (match, inner) => {
      const items = inner.split(',\n').map((s) => s.trim());
      if (items.every((s) => !/^[{\[]/.test(s))) {
        return '[' + items.join(', ') + ']';
      }
      return match;
    }
  );
}

The result: objects remain multi-line for readability, arrays of primitives collapse to one line for compactness. This is the format used in many configuration files and structured log outputs where humans and machines both need to read the same data. The JSON Formatter and JSON to YAML converter handle this collapsing automatically.

7. Streaming Large JSON

Loading a large JSON file entirely into memory before parsing is one of the most common performance mistakes in JSON processing pipelines. A 100 MB JSON response allocates at least 100 MB of heap for the raw string, then a second allocation for the parsed object tree. For files above a few megabytes, a streaming parser processes the data incrementally instead.

On Node.js with a streaming library:

// JSON.parse has no streaming mode, so use a library such as
// @streamparser/json (shown here) or stream-json
import { createReadStream } from 'fs';
import { JSONParser } from '@streamparser/json';

const jsonParser = new JSONParser();
jsonParser.onValue = ({ value, key, parent }) => {
  if (key === 'id') {
    console.log('Found id:', value);
  }
};

// JSONParser is not a Node stream; feed it chunks as they arrive
createReadStream('large.json').on('data', (chunk) => jsonParser.write(chunk));

In the browser using the Fetch Streams API:

// Browser-side streaming with the Fetch + JSON stream decoder:
const response = await fetch('/api/large-data');
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  // Process complete JSON objects from buffer...
}

Streaming is most useful in batch processing pipelines, large export endpoints, and log analysis scripts. For typical API responses under 1 MB, the standard JSON.parse is fast enough and far simpler. The threshold where streaming starts to pay off in practice is around 5–10 MB, depending on the complexity of the parsed structure and the target environment's heap size.

For exploring the structure of large JSON responses before you build a streaming parser, paste a sample into the JSON Formatter to see which paths contain the data you need, then convert to CSV or YAML for further analysis.

Putting It Together

These seven techniques cover the most impactful corners of the JSON API:

  • Deep clone via round-trip — fast for plain data, know the limits
  • Replacer — filter keys and mask sensitive values at the serialization boundary
  • Key sorting — deterministic output for cache keys, signatures, and test assertions
  • BigInt handling — stringify to string, revive on parse
  • Circular reference detection — WeakSet-based replacer for safe serialization
  • Pretty-print with compact arrays — human-readable output without excess whitespace
  • Streaming large JSON — incremental parsing for large files and API responses

For full documentation of JSON.stringify's replacer and space arguments, see the MDN JSON.stringify reference. For the modern deep clone alternative, see the structuredClone() documentation, which covers all supported types and edge cases.