Stop Abusing localStorage: The Complete Guide to Browser Storage in 2026

localStorage, sessionStorage, IndexedDB, Cache API, Origin Private File System, cookies — six browser storage APIs that each solve a different problem. Most developers use one for everything. Here’s when each one is the right choice.


localStorage is the most abused API in frontend development. It’s synchronous, string-only, limited to 5–10MB, blocks the main thread on every read and write, and was never designed for structured data, files, binary blobs, or caching network responses. Yet it ends up holding authentication tokens, user preferences, API response caches, feature flags, shopping carts, and sometimes entire application states.

The browser offers six distinct storage APIs. Each one exists because a different set of requirements demanded it. Using the wrong one doesn’t just add technical debt — it causes real user-facing problems: blocked UI threads, data loss on logout, storage quota errors in production, and security vulnerabilities that wouldn’t exist with the right tool.

This is the complete, honest guide to all six.


The Storage Landscape at a Glance

Storage API           Size Limit       Sync/Async    Persists Past Tab?    Good For
─────────────────────────────────────────────────────────────────────────────────────
localStorage          5–10 MB          Sync          Yes (per origin)      Preferences, small settings
sessionStorage        5–10 MB          Sync          No (tab only)         Temporary form state, session data
IndexedDB             Quota-managed    Async         Yes (per origin)      Structured data, large datasets
Cache API             Quota-managed    Async         Yes (per origin)      Network response caching (PWA)
Origin Private FS     Quota-managed    Async         Yes (per origin)      Files, binary blobs, SQLite via WASM
Cookies               4 KB per cookie  Sync          Configurable          Server communication, auth sessions

localStorage: What It’s Actually For

What it is

localStorage is a synchronous key-value store that persists data across browser sessions. Data survives tab closes, browser restarts, and page reloads. Items are scoped to the origin (scheme + hostname + port) and serialised as strings.

The synchronous penalty

This is the most important thing to understand about localStorage. Every getItem, setItem, and removeItem call runs on the main thread and blocks it. For small strings this is negligible. For large JSON payloads serialised on every write — or read on every page load — it introduces measurable jank.

// This blocks the main thread while parsing
const cart = JSON.parse(localStorage.getItem('cart') || '[]')

// This blocks the main thread while serialising
localStorage.setItem('cart', JSON.stringify(largeCartObject))
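To put a rough number on that penalty, here is a sketch timing just the JSON round-trip (the cart shape and record count are invented for illustration; a real localStorage write adds synchronous storage I/O on top of this):

```typescript
// Time the serialise + parse cycle that wraps every localStorage read/write
// of structured data. Numbers vary by machine; this is illustrative only.
const largeCart = Array.from({ length: 50_000 }, (_, i) => ({
  id: i,
  sku: `SKU-${i}`,
  qty: (i % 5) + 1,
  price: 9.99,
}))

const t0 = performance.now()
const serialised = JSON.stringify(largeCart)
const parsed = JSON.parse(serialised) as typeof largeCart
const t1 = performance.now()

console.log(`${(serialised.length / 1024).toFixed(0)} KB round-tripped in ${(t1 - t0).toFixed(1)} ms`)
```

Whatever the exact figure on your hardware, every millisecond of it is spent on the main thread if run there.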

What it’s actually good for

// ✓ Theme preference — tiny string, reads once on load
localStorage.setItem('theme', 'dark')
const theme = localStorage.getItem('theme') // 'dark'

// ✓ UI preferences — small, string-safe settings
localStorage.setItem('sidebar-collapsed', 'true')
localStorage.setItem('locale', 'en-IN')
localStorage.setItem('font-size', 'large')

// ✓ Feature flag overrides — small strings
localStorage.setItem('feature:new-dashboard', 'true')

What it’s not for

// ✗ Large API response caches — use Cache API or IndexedDB
localStorage.setItem('products', JSON.stringify(allProducts))  // could be megabytes

// ✗ Auth tokens — use HttpOnly cookies (localStorage is readable by XSS)
localStorage.setItem('access_token', token)

// ✗ Binary data — use IndexedDB or OPFS
localStorage.setItem('avatar', btoa(binaryData))  // base64 wastes 33% space

// ✗ Sensitive user data — localStorage is not encrypted, not scoped to sessions
localStorage.setItem('credit_card_last4', '4242')
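The 33% figure quoted above follows directly from how base64 works: every 3 input bytes become 4 output characters. A quick sketch:

```typescript
// base64 maps each 3-byte group to 4 characters, a 4/3 size increase.
// localStorage then stores that string as UTF-16, doubling the cost again.
const bytes = new Uint8Array(3 * 1024)                          // 3 KB of binary data
const base64 = btoa(String.fromCharCode(...Array.from(bytes)))

console.log(bytes.length)    // 3072
console.log(base64.length)   // 4096
```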

A safe localStorage wrapper with TypeScript

// utils/storage.ts
function getItem<T>(key: string, fallback: T): T {
  try {
    const item = localStorage.getItem(key)
    if (item === null) return fallback
    return JSON.parse(item) as T
  } catch {
    return fallback
  }
}

function setItem<T>(key: string, value: T): void {
  try {
    localStorage.setItem(key, JSON.stringify(value))
  } catch (err) {
    // QuotaExceededError — handle gracefully
    console.warn('localStorage write failed:', err)
  }
}

function removeItem(key: string): void {
  localStorage.removeItem(key)
}

export const storage = { getItem, setItem, removeItem }

// Usage
storage.setItem<'light' | 'dark'>('theme', 'dark')
const theme = storage.getItem<'light' | 'dark'>('theme', 'light')

sessionStorage: The Tab-Scoped Sibling

What it is

sessionStorage has the same API as localStorage with one critical difference: data is scoped to the browser tab. It survives reloads and same-tab navigation, is lost when the tab closes, and, crucially, it is not shared between tabs (a duplicated tab starts with a copy of the data, after which the two diverge).

sessionStorage.setItem('checkout-step', '2')
sessionStorage.setItem('draft-form-data', JSON.stringify(formState))

When sessionStorage is correct

// ✓ Multi-step form progress — should be lost if the user abandons
sessionStorage.setItem('signup-step', JSON.stringify({ step: 3, data: formData }))

// ✓ Scroll position memory within a session
sessionStorage.setItem(`scroll:${pathname}`, String(window.scrollY))

// ✓ Wizard/onboarding state — intentionally non-persistent
sessionStorage.setItem('onboarding-complete', 'false')

// ✓ Temporary search filters — not worth persisting
sessionStorage.setItem('search-filters', JSON.stringify(activeFilters))

The tab isolation trap

The most common sessionStorage mistake is expecting it to be shared between tabs. It isn’t.

// Tab 1: user adds item to cart
sessionStorage.setItem('cart', JSON.stringify([item]))

// Tab 2: completely independent sessionStorage
sessionStorage.getItem('cart')  // null — nothing here

If data needs to be shared between tabs, use localStorage or a BroadcastChannel.


IndexedDB: The Right Answer for Structured Data

What it is

IndexedDB is a full-featured, asynchronous, transactional database built into the browser. It stores JavaScript objects (not just strings), supports indexes and cursors, handles multiple concurrent connections, and its quota is managed by the browser based on available disk space — typically gigabytes, not megabytes.

It is not a key-value store. It is a proper object database with structured queries.

The raw API is painful — use a wrapper

// The raw IndexedDB API is verbose and callback-based
const request = indexedDB.open('my-db', 1)
request.onupgradeneeded = (event) => {
  const db = event.target.result
  const store = db.createObjectStore('users', { keyPath: 'id' })
  store.createIndex('email', 'email', { unique: true })
}
request.onsuccess = (event) => {
  const db = event.target.result
  const tx = db.transaction('users', 'readwrite')
  const store = tx.objectStore('users')
  store.add({ id: 1, name: 'Taylor', email: 'taylor@example.com' })
}

Use idb — a tiny wrapper by Jake Archibald that makes IndexedDB feel like a modern async API:

npm install idb
import { openDB, type DBSchema } from 'idb'

interface AppDB extends DBSchema {
  users: {
    key:   number
    value: { id: number; name: string; email: string; role: string }
    indexes: { 'by-email': string }
  }
  posts: {
    key:   number
    value: { id: number; title: string; content: string; authorId: number; createdAt: Date }
    indexes: { 'by-author': number }
  }
  drafts: {
    key:   string
    value: { id: string; content: string; savedAt: Date }
  }
}

const dbPromise = openDB<AppDB>('my-app', 1, {
  upgrade(db) {
    // Users store
    const userStore = db.createObjectStore('users', { keyPath: 'id' })
    userStore.createIndex('by-email', 'email', { unique: true })

    // Posts store
    const postStore = db.createObjectStore('posts', { keyPath: 'id' })
    postStore.createIndex('by-author', 'authorId')

    // Drafts store — string key (UUID)
    db.createObjectStore('drafts', { keyPath: 'id' })
  },
})

// Reads
async function getUserByEmail(email: string) {
  const db = await dbPromise
  return db.getFromIndex('users', 'by-email', email)
}

async function getPostsByAuthor(authorId: number) {
  const db = await dbPromise
  return db.getAllFromIndex('posts', 'by-author', authorId)
}

// Writes
async function saveUser(user: AppDB['users']['value']) {
  const db = await dbPromise
  return db.put('users', user)
}

async function saveDraft(id: string, content: string) {
  const db = await dbPromise
  return db.put('drafts', { id, content, savedAt: new Date() })
}

// Delete
async function deleteDraft(id: string) {
  const db = await dbPromise
  return db.delete('drafts', id)
}

When IndexedDB is the right choice

// ✓ Offline-capable app data — articles, products, contacts
await db.put('articles', fetchedArticle)  // read when offline

// ✓ Large structured datasets — hundreds to thousands of records
for (const p of products) await db.put('products', p)  // JSON.stringify to localStorage = quota error

// ✓ Draft saving with autosave — rich text, form state, document editors
await db.put('drafts', { id, content, savedAt: new Date() })

// ✓ Client-side search indexes — full text, tags, categories
await db.put('search-index', { term, documentIds })

// ✓ Synced data with conflict resolution — offline-first apps
// Store the local state, queue of pending changes, and sync timestamps

Transactions: The Critical Concept

IndexedDB operations happen within transactions. A transaction that completes successfully commits atomically. If anything fails, the entire transaction rolls back — no partial writes.

// Assumes user records carry a numeric `points` field (not part of the
// earlier AppDB schema; shown purely to illustrate transactions)
async function transferPoints(fromUserId: number, toUserId: number, points: number) {
  const db = await dbPromise

  // Both updates happen in a single transaction — either both succeed or neither does
  const tx = db.transaction('users', 'readwrite')

  const [from, to] = await Promise.all([
    tx.store.get(fromUserId),
    tx.store.get(toUserId),
  ])

  if (!from || !to || from.points < points) {
    tx.abort()
    throw new Error('Transfer failed')
  }

  await Promise.all([
    tx.store.put({ ...from, points: from.points - points }),
    tx.store.put({ ...to,   points: to.points + points }),
  ])

  await tx.done
}

Cache API: Network Response Caching

What it is

The Cache API is a key-value store where keys are Request objects and values are Response objects. It was designed specifically for service workers to cache network responses — HTML, CSS, JavaScript, images, and API responses — enabling offline functionality and instant cache-first loading.

// In a service worker
const CACHE_NAME = 'my-app-v2'

// Cache assets on install
self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then(cache =>
      cache.addAll([
        '/',
        '/index.html',
        '/main.css',
        '/app.js',
        '/fonts/inter.woff2',
      ])
    )
  )
})

// Serve from cache with network fallback
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(cached => {
      if (cached) return cached  // instant cache hit

      return fetch(event.request).then(response => {
        // Cache successful network responses for next time
        if (response.ok) {
          const clone = response.clone()
          caches.open(CACHE_NAME).then(cache => cache.put(event.request, clone))
        }
        return response
      })
    })
  )
})

Cache Strategies

// Cache First — best for versioned assets (fonts, images, JS bundles)
async function cacheFirst(request) {
  const cached = await caches.match(request)
  if (cached) return cached
  const response = await fetch(request)
  const cache = await caches.open(CACHE_NAME)
  cache.put(request, response.clone())
  return response
}

// Network First — best for API responses (fresh data preferred)
async function networkFirst(request, timeoutMs = 3000) {
  const controller = new AbortController()
  const timeout    = setTimeout(() => controller.abort(), timeoutMs)

  try {
    const response = await fetch(request, { signal: controller.signal })
    clearTimeout(timeout)
    const cache = await caches.open(CACHE_NAME)
    cache.put(request, response.clone())
    return response
  } catch {
    clearTimeout(timeout)
    const cached = await caches.match(request)
    if (cached) return cached
    throw new Error('No network and no cache for: ' + request.url)
  }
}

// Stale While Revalidate — best for non-critical resources (avatars, secondary content)
async function staleWhileRevalidate(request) {
  const cache  = await caches.open(CACHE_NAME)
  const cached = await cache.match(request)

  // Fetch in background to update cache — even if we return cached version
  const fetchPromise = fetch(request).then(response => {
    cache.put(request, response.clone())
    return response
  })

  return cached || fetchPromise  // return cached immediately, update quietly
}

Cache API Outside Service Workers

The Cache API is also accessible from the main thread — useful for precaching resources or managing cache entries programmatically:

// Main thread — precache a set of API responses
async function precacheUserData(userId) {
  const cache = await caches.open('user-data-v1')
  const endpoints = [
    `/api/users/${userId}`,
    `/api/users/${userId}/preferences`,
    `/api/users/${userId}/recent-activity`,
  ]

  await Promise.all(
    endpoints.map(async url => {
      const response = await fetch(url)
      if (response.ok) await cache.put(url, response)
    })
  )
}

Origin Private File System (OPFS): File Storage in the Browser

What it is

The Origin Private File System is a sandboxed file system API, available since 2023 and now well-supported in all major browsers. Unlike the public file system APIs that require user permission prompts, OPFS provides a private, origin-scoped file system that your web app can read and write freely.

OPFS is ideal for large binary files, databases that need file-level access (like SQLite), video/audio processing pipelines, and any use case where you need actual file semantics rather than a key-value store.

// Get the OPFS root directory
const root = await navigator.storage.getDirectory()

// Create/open a file
const fileHandle = await root.getFileHandle('user-data.json', { create: true })

// Write to a file using a writable stream
const writable = await fileHandle.createWritable()
await writable.write(JSON.stringify({ name: 'Taylor', preferences: { theme: 'dark' } }))
await writable.close()

// Read the file
const file    = await fileHandle.getFile()
const content = await file.text()
const data    = JSON.parse(content)

Synchronous Access in Web Workers

OPFS exposes synchronous reads and writes through sync access handles, available only inside dedicated Web Workers, enabling high-performance file I/O without per-operation async overhead. Opening the handle is still asynchronous; only the reads and writes after that are synchronous:

// worker.js: sync access handles inside a dedicated Web Worker
const root       = await navigator.storage.getDirectory()
const fileHandle = await root.getFileHandle('large-dataset.bin', { create: true })

// Only available in dedicated workers; grants exclusive access to the file
const accessHandle = await fileHandle.createSyncAccessHandle()

// Synchronous read/write from here on, no await needed
const buffer = new ArrayBuffer(accessHandle.getSize())
accessHandle.read(buffer, { at: 0 })
// process buffer...
accessHandle.close()

SQLite in the Browser with OPFS

OPFS is the storage backend for running SQLite in the browser via WebAssembly. The official SQLite WASM build and wrappers like wa-sqlite use OPFS to persist databases as files (the older sql.js is in-memory only and does not persist to OPFS). A sketch using the official @sqlite.org/sqlite-wasm package:

// worker.js: the OPFS backend requires a worker context (and, for the
// SharedArrayBuffer-based VFS, COOP/COEP response headers)
import sqlite3InitModule from '@sqlite.org/sqlite-wasm'

const sqlite3 = await sqlite3InitModule()

// Backed by an OPFS file: survives reloads and browser restarts
const db = new sqlite3.oo1.OpfsDb('/my-app.sqlite3')

const results = db.exec({
  sql: `
    SELECT * FROM products
    WHERE category = 'electronics'
    ORDER BY price ASC
    LIMIT 20
  `,
  returnValue: 'resultRows',
})

When OPFS is the right choice

  • Storing generated PDFs, exports, or files that users will download
  • Audio/video processing — write processed chunks to a file
  • Running SQLite via WASM for complex local queries
  • Large binary datasets (medical imaging, 3D models, geospatial data)
  • Applications that need actual file operations, not just key-value storage
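For the audio/video pipeline case, processed output is typically written in fixed-size chunks. A minimal sketch of the chunking logic (pure logic only; in the browser each chunk would be handed to the file's writable stream, and the 1 MB chunk size is an arbitrary choice):

```typescript
// Split a large buffer into fixed-size chunks for sequential file writes
function* chunks(buffer: ArrayBuffer, chunkSize = 1 << 20): Generator<ArrayBuffer> {
  for (let offset = 0; offset < buffer.byteLength; offset += chunkSize) {
    yield buffer.slice(offset, offset + chunkSize)  // slice copies the range
  }
}

const parts = Array.from(chunks(new ArrayBuffer(2.5 * 1024 * 1024)))
console.log(parts.length)  // 3 (two full 1 MB chunks plus a 0.5 MB tail)
```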

Cookies: Still Relevant in 2026

What they actually are for

Cookies were designed for one thing: sending small pieces of state from the browser to the server on every request. They are the mechanism by which HTTP, a stateless protocol, gains session awareness.

Every HTTP request to the origin automatically includes the matching, non-expired cookies (subject to their Path, Domain and SameSite rules)
→ Server receives the session ID
→ Server looks up the session
→ Server knows who the user is

This is what cookies are for.

HttpOnly Cookies: The Right Way to Handle Auth Tokens

Authentication tokens stored in localStorage are accessible to any JavaScript running on the page — including injected XSS scripts. HttpOnly cookies are completely inaccessible to JavaScript. They exist only in the browser’s cookie jar and are sent automatically with every request.

Set-Cookie: session_token=abc123; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=86400
// ✗ Vulnerable — XSS can steal this token
localStorage.setItem('access_token', token)

// ✓ Secure — JavaScript cannot read HttpOnly cookies
// The server sets the cookie with Set-Cookie response header
// The browser sends it automatically with every request
// Your JavaScript code never touches the token at all
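On the server, that Set-Cookie header is just a semicolon-joined attribute list. A sketch of building one (the sid name and one-day Max-Age are illustrative choices, not a standard):

```typescript
// Build a hardened session Set-Cookie header value
function sessionCookie(token: string, maxAgeSeconds: number): string {
  return [
    `sid=${encodeURIComponent(token)}`,  // hypothetical cookie name
    'HttpOnly',                          // invisible to JavaScript
    'Secure',                            // HTTPS only
    'SameSite=Strict',                   // maximum CSRF protection
    'Path=/',
    `Max-Age=${maxAgeSeconds}`,
  ].join('; ')
}

console.log(sessionCookie('abc123', 86_400))
// sid=abc123; HttpOnly; Secure; SameSite=Strict; Path=/; Max-Age=86400
```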

Cookie Attributes That Matter in 2026

// Setting cookies from JavaScript (non-sensitive data only)
document.cookie = [
  'user-timezone=Asia/Kolkata',
  'Secure',              // HTTPS only
  'SameSite=Lax',        // Sent on same-site navigation, blocked on cross-site
  'Path=/',
  'Max-Age=31536000',    // 1 year
].join('; ')

HttpOnly    → Cannot be read by JavaScript (use for auth tokens)
Secure      → Only sent over HTTPS
SameSite=Strict → Only sent on same-site requests (maximum CSRF protection)
SameSite=Lax    → Sent on top-level navigation (good default)
SameSite=None   → Sent on all requests, requires Secure (for cross-site embeds)
Max-Age     → Seconds until expiry (preferred over Expires)
Domain      → Can be scoped to subdomain
Path        → Cookie only sent for requests to this path

The Cookie API in 2026: CookieStore

The modern CookieStore API replaces the painful document.cookie string parsing with an async, Promise-based interface:

// The old way — string parsing nightmare
const getCookie = (name) => {
  const row = document.cookie.split('; ').find(r => r.startsWith(`${name}=`))
  return row?.slice(name.length + 1)  // don't split on '=': cookie values may contain it
}

// The modern way — CookieStore API (well-supported in 2026)
const cookie = await cookieStore.get('user-timezone')
console.log(cookie?.value)  // 'Asia/Kolkata'

// Set a cookie
await cookieStore.set({
  name:     'user-timezone',
  value:    'Asia/Kolkata',
  expires:  Date.now() + (365 * 24 * 60 * 60 * 1000),
  sameSite: 'lax',
})

// Delete a cookie
await cookieStore.delete('user-timezone')

// Listen for cookie changes
cookieStore.addEventListener('change', (event) => {
  for (const cookie of event.changed) {
    console.log('Changed:', cookie.name, cookie.value)
  }
})

Storage Quota and Eviction: The Production Problem Nobody Plans For

The browser doesn’t give you unlimited storage. The quota varies by browser and available disk space, but the more important issue is eviction — the browser can delete your stored data without warning under storage pressure.

Checking Available Quota

// Check how much storage you're using and what's available
const estimate = await navigator.storage.estimate()

console.log({
  quota:   estimate.quota,    // total available bytes
  usage:   estimate.usage,    // bytes currently used
  percent: (estimate.usage / estimate.quota * 100).toFixed(1) + '%',
})
// { quota: 107374182400, usage: 2048576, percent: '0.0%' }
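One practical use of the estimate is a threshold check before large writes. A sketch with the browser call factored out so the decision logic is plain (the 80% threshold is an arbitrary choice):

```typescript
// Decide whether storage is under pressure, given the values that
// navigator.storage.estimate() returns in the browser
function isUnderPressure(usage: number, quota: number, threshold = 0.8): boolean {
  if (quota === 0) return false  // quota unknown; assume fine
  return usage / quota >= threshold
}

console.log(isUnderPressure(2_048_576, 107_374_182_400))  // false
console.log(isUnderPressure(90, 100))                     // true
```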

Requesting Persistent Storage

By default, browser storage (IndexedDB, Cache API, OPFS) can be evicted under disk pressure. Request persistent storage to prevent eviction:

// Request that the browser not evict this origin's data
const persisted = await navigator.storage.persist()

if (persisted) {
  console.log('Storage will not be evicted without user action')
} else {
  console.log('Storage may be evicted under disk pressure')
  // Warn the user or reduce what you store
}

// Check if persistence is already granted
const isPersisted = await navigator.storage.persisted()

Storage persistence is automatically granted if the site is installed as a PWA, if the user has added it to their home screen, or if the user has interacted extensively with the site.


The Decision Framework

What are you storing?
│
├── User preferences (theme, locale, font size, layout)
│   └── localStorage — small strings, persists correctly
│
├── Temporary session state (form wizard, checkout step, filters)
│   └── sessionStorage — lost when tab closes, which is correct
│
├── Authentication / session token
│   └── HttpOnly cookie — set by server, never readable by JS
│
├── Structured data (contacts, articles, orders, search results)
│   ├── Small (< 100 records, < 1MB) → localStorage (with JSON)
│   └── Large, indexed, or complex → IndexedDB (use idb wrapper)
│
├── Network response cache (HTML, JS, CSS, API responses for offline)
│   └── Cache API (inside a service worker)
│
├── Binary data (images, PDFs, audio, video, SQLite database)
│   └── Origin Private File System (OPFS)
│
└── Data the server needs on every request (session, A/B variant, locale)
    └── Cookie (SameSite=Lax, Secure; HttpOnly for auth)

Security: The Mistakes That Matter

Never store sensitive data in localStorage or sessionStorage

Both are accessible to any JavaScript on the page. XSS attacks directly read localStorage. Sensitive data — tokens, PII, payment information — belongs in HttpOnly cookies or never in the browser at all.

Token storage is not the only XSS risk

Even if your auth token is in an HttpOnly cookie, an XSS attacker can still make authenticated API requests from the victim’s browser. The defence against this is a Content-Security-Policy that prevents injected scripts from running, combined with CSRF tokens for state-changing operations.
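As an illustration (not a prescription), a policy along these lines blocks inline and third-party script execution entirely:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; object-src 'none'; base-uri 'none'; frame-ancestors 'none'
```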

Be careful with postMessage and shared storage

BroadcastChannel and localStorage events are broadcast to all tabs of the same origin. If any tab is compromised, cross-tab communication channels can be used to spread the attack. Validate all incoming postMessage events and don’t trust data from cross-tab channels blindly.

Origin isolation

All of these APIs are scoped to the origin (scheme + host + port). https://app.example.com and https://api.example.com have separate storage — they cannot read each other’s localStorage, IndexedDB, or cookies (unless the cookie has a Domain=.example.com attribute).
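Origin equality is exact string comparison of scheme + host + port, which the URL API exposes directly:

```typescript
// Two URLs share browser storage only if their origins match exactly
function sameOrigin(a: string, b: string): boolean {
  return new URL(a).origin === new URL(b).origin
}

console.log(sameOrigin('https://app.example.com/x', 'https://app.example.com/y'))  // true
console.log(sameOrigin('https://app.example.com', 'https://api.example.com'))      // false
console.log(sameOrigin('https://example.com', 'https://example.com:8443'))         // false: port differs
```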


Quick Reference

API               Limit            Sync/Async                Survives Tab Close    Best For
───────────────────────────────────────────────────────────────────────────────────────────────
localStorage      5–10 MB          Sync                      Yes                   Preferences, small settings
sessionStorage    5–10 MB          Sync                      No                    Temporary session state
IndexedDB         Quota            Async                     Yes                   Structured data, offline apps
Cache API         Quota            Async                     Yes                   Network caching, PWA assets
OPFS              Quota            Async (sync in Worker)    Yes                   Files, binary data, SQLite
Cookies           4 KB/cookie      Sync                      Configurable          Server auth, small server-read state

Final Thoughts

Every browser storage API exists because there was a real use case that couldn’t be served well by the options that already existed. localStorage is not a general-purpose storage API — it’s a small, synchronous key-value store for preferences. IndexedDB is not unnecessarily complex — it’s a database, because some problems need a database. The Cache API is not just for service workers — it’s the right answer for any network response you want to reuse. OPFS is not exotic — it’s the obvious choice once you’re working with actual files or need SQLite.

The rule is simple: match the storage API to the data’s nature. Small string setting → localStorage. Tab-scoped session state → sessionStorage. Structured records → IndexedDB. Cached network responses → Cache API. Files and binary data → OPFS. Server-visible auth state → HttpOnly cookie.

Use the right tool. The browser ships them all for free.
