VIEWs should be the cleanest abstraction SQL, and therefore Postgres, has on offer. I love the concept. The promise of decoupling logical intent from physical storage is perfect on paper. In practice, few things in the database world trigger such a heated debate or carry as much historical baggage. VIEWs mix big promises with false hopes, and the promises rarely survive contact with production.

The appeal is straightforward. Abstract "active customer" once and reuse it everywhere. Every query, report and dashboard uses the same definition. The "active customer" then becomes the foundation of a "customer orders" view, which in turn powers an operational "customer summary" view.

-- layer 1: who counts as an active customer?
CREATE VIEW active_customers AS
SELECT c.*
FROM customers c
WHERE c.deleted_at IS NULL
  AND c.status = 'active'
  AND c.last_login_at > now() - interval '90 days';

-- layer 2: active customers with their recent orders
CREATE VIEW customer_orders AS
SELECT
    ac.*,
    o.id         AS order_id,
    o.total_cents,
    o.created_at AS ordered_at,
    o.status     AS order_status
FROM active_customers ac
LEFT JOIN orders o ON o.customer_id = ac.id
WHERE o.created_at > now() - interval '12 months'
   OR o.created_at IS NULL;

-- layer 3: one row per customer, ready for the dashboard
CREATE VIEW customer_summary AS
SELECT
    co.id,
    co.email,
    co.name,
    COUNT(co.order_id)                                   AS orders_12mo,
    COALESCE(SUM(co.total_cents), 0)                     AS revenue_12mo_cents,
    MAX(co.ordered_at)                                   AS last_order_at,
    COUNT(*) FILTER (WHERE co.order_status = 'refunded') AS refunds_12mo
FROM customer_orders co
GROUP BY co.id, co.email, co.name;

Each layer has one job. "Active customer" is defined exactly once - if marketing changes the ninety-day rule tomorrow, it is one line in one place, and the dashboard query collapses to SELECT * FROM customer_summary WHERE id = $1.

VIEWs also have the potential to be a real security boundary and a real consistency guarantee. Three hand-written queries mean three places where a predicate can be forgotten, and three subtly different result sets a quarter later. With one definition, you have one result set.
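Concretely, the security-boundary version of this means granting on the view and never on the table. A minimal sketch, assuming a reporting role that is not part of the schema above:

-- the role and grants are illustrative
CREATE ROLE reporting NOLOGIN;
REVOKE ALL ON customers FROM reporting;
GRANT SELECT ON active_customers TO reporting;

SET ROLE reporting;
SELECT count(*) FROM customers;         -- ERROR: permission denied for table customers
SELECT count(*) FROM active_customers;  -- fine: the view runs with its owner's rights
RESET ROLE;

By default a view executes with the view owner's privileges, so the role never needs access to customers at all; setting security_invoker = true (PostgreSQL 15+) flips that behaviour.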

For simple views, there is no performance penalty either. PostgreSQL inlines them directly into the calling query, so the planner sees through the view and plans as if you had written the underlying SQL by hand. That mechanism, and the cases where it stops working, has a deep-dive of its own. This article assumes it as background and focuses on the other side: what views are made of, and what happens when the schema underneath them moves.

VIEWs are just like tables

The trouble starts with the syntax.

SELECT * FROM customer_summary reads exactly like SELECT * FROM customers. The catalog reinforces it: pg_class.relkind = 'v' instead of 'r', but everything else (column lists, grants, comments, even \d output) looks the same. So developers reach for them the same way: joining them, nesting them, wrapping them in other views without thinking about it.

A table is a heap with indexes. A view is a stored parse tree that gets expanded into your query at planning time. Most of the time the planner is clever enough that the difference does not matter: simple views are inlined, predicates are pushed down, the resulting plan is what you would have written by hand.
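You can check the inlining from the outside. A hedged sketch: exact plans depend on your data and indexes, but for a view this simple both statements should produce the same plan.

EXPLAIN SELECT * FROM active_customers WHERE id = 42;

EXPLAIN
SELECT *
FROM customers
WHERE deleted_at IS NULL
  AND status = 'active'
  AND last_login_at > now() - interval '90 days'
  AND id = 42;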

The nested view spiral

In practice, what you get is the dependency tree from the introduction (customer_summary on top of customer_orders on top of active_customers) and a new engineer who has no way to know that SELECT * FROM customer_summary WHERE id = $1 expands into a three-layer rewrite, a LEFT JOIN against twelve months of orders, and a GROUP BY standing between the id predicate and the base tables.

Nothing in the query, the schema browser, or the ORM signals this. You only learn the shape of the tree by reading every definition top to bottom, which is exactly the work the abstraction was meant to save you.

You cannot reason about the performance of a query against a view without reading the view, the views it depends on, and the tables underneath.

Views are rewrite rules

A view is a macro. When you reference one, Postgres pastes its body into your query before the planner runs. There is no stored result, no view "object" the executor consults. The pg_class row you see when you query the catalog is an empty shell: it holds the name, the column list, and the grants, but no definition.

The definition itself, the SELECT you wrote, is stored separately in pg_rewrite:

SELECT ev_class::regclass, ev_type, is_instead
FROM pg_rewrite
WHERE ev_class = 'active_customers'::regclass;
     ev_class     | ev_type | is_instead
------------------+---------+------------
 active_customers | 1       | t

That row is the view. ev_type = 1 is SELECT, is_instead = t means "replace, do not augment" (the pg_class shell has no rows to read, so substitution is the only option that makes sense). Between the parser and the planner Postgres runs a stage called the rewriter, and the rewriter's job is to walk the parse tree, find references to relations that have rules, and substitute the rule body in place. This way a query like SELECT * FROM active_customers WHERE id = $1 arrives at the planner already rewritten:

-- what the planner actually sees
SELECT *
FROM (
    SELECT c.id, c.email, c.name, c.status,
           c.last_login_at, c.deleted_at
    FROM customers c
    WHERE c.deleted_at IS NULL
      AND c.status = 'active'
      AND c.last_login_at > now() - interval '90 days'
) active_customers
WHERE id = $1;

Notice that c.* is already expanded into an explicit column list. That happened at CREATE VIEW time and was frozen into the stored parse tree; the rewriter does not re-expand it. From here the planner can usually flatten the subquery into the outer query through a pass called subquery pull-up, leaving id = $1 as a predicate next to the others so an index on customers.id is reachable.

That flattening is what makes simple views free. When it bails out (because the view body has LIMIT, DISTINCT, an aggregate, a set operation, or a few other shapes) the subquery stays put and outer predicates cannot move past it. The full list of planner barriers and how to spot them in EXPLAIN lives in the inlining post.
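LIMIT is the easiest barrier to demonstrate, because pushing a predicate past it would change the answer. A sketch, with newest_customers as an illustrative view:

CREATE VIEW newest_customers AS
SELECT id, email, last_login_at
FROM customers
ORDER BY last_login_at DESC
LIMIT 100;

-- id = 42 cannot move below the LIMIT: filtering first would change
-- which 100 rows survive, so the planner computes the 100 rows, then filters
EXPLAIN SELECT * FROM newest_customers WHERE id = 42;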

Historically Postgres exposed this same machinery as user-facing CREATE RULE, on the theory that arbitrary query rewriting was a general-purpose feature. It mostly was not. Rules are effectively deprecated outside of views; triggers do the same jobs without the surprises. The rule infrastructure survives because views need it.

Once you hold that model, every awkward thing about views falls out of it. Two in particular drive the rest of this article.

Columns are referenced by attribute number, not name. The stored parse tree does not remember that it reads customers.email; it remembers that it reads attribute 2 of relation 16385. Rename the column and the view keeps working, because the OID and the position are unchanged. Drop a column in the middle and Postgres refuses upfront, because the stored attribute numbers would no longer line up. Every dependency error later in this article traces back to this.
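A rename makes the mechanism visible. This sketch assumes a scratch database, since the rename obviously sticks:

ALTER TABLE customers RENAME COLUMN last_login_at TO last_seen_at;

SELECT pg_get_viewdef('active_customers', true);
-- the decompiled body now reads last_seen_at; nothing in pg_rewrite changed,
-- the deparser simply prints the current name of the stored attribute number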

The body is expanded once per reference. Every mention of the view in a query produces an independent copy of its parse tree in the rewritten plan. For a stable expression that is invisible. For a volatile one like random() or clock_timestamp(), each copy evaluates separately:

CREATE VIEW v_rand AS
SELECT id, random() AS r FROM customers;

SELECT a.id, a.r AS r_a, b.r AS r_b
FROM v_rand a
JOIN v_rand b ON a.id = b.id
LIMIT 3;
 id |         r_a          |         r_b
----+----------------------+---------------------
  1 |     0.84043639656488 |  0.0833458769902089
  2 | 0.009846241116064247 |  0.6574000469586228
  3 |  0.14667469313524628 | 0.21862693208148087

Same view, same row, two different r values. The rewriter expanded v_rand twice, so the rewritten query contains two independent random() calls, and the planner has no reason to share them. You might hit this expecting a.r = b.r and end up rewriting the view to push the volatile call outside.

This is also why CREATE OR REPLACE VIEW can only append columns to the end of the column list. Existing attribute numbers must stay stable for every dependent, so anything that would shift them is forbidden.
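Try anything else and the append-only rule bites immediately. Swapping two existing columns, for instance:

CREATE OR REPLACE VIEW active_customers AS
SELECT c.id, c.name, c.email, c.status, c.last_login_at, c.deleted_at
FROM customers c
WHERE c.deleted_at IS NULL
  AND c.status = 'active'
  AND c.last_login_at > now() - interval '90 days';
-- ERROR:  cannot change name of view column "email" to "name"

The error talks about names because that is how the mismatch surfaces: position 2 used to be email and would now be name, and the stored attribute numbers cannot move.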

A useful side effect of the same mechanism: views are immune to the search-path attacks that bite SECURITY DEFINER functions. Identifiers are resolved to OIDs at CREATE VIEW time, so a later SET search_path cannot redirect a view to a different table. Functions resolve identifiers per call against the caller's search_path unless you pin it; views never have that exposure.
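A sketch of the exposure views avoid; the evil schema is purely illustrative:

CREATE SCHEMA evil;
CREATE TABLE evil.customers (LIKE public.customers INCLUDING ALL);

SET search_path = evil, public;
SELECT count(*) FROM active_customers;
-- still counts public.customers: the stored parse tree references the
-- table's OID, resolved once at CREATE VIEW time, so search_path is
-- never consulted again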

Writable views, the half-kept promise

Funnily enough, I was not aware writable views existed until earlier this year. It came up during a hallway conversation with Ervin Weber and Jesper St John at PgDay Nordic 2026 in Helsinki. Twelve years writing PostgreSQL and the auto-updatable rules had simply never crossed my desk.

The original promise was symmetry: if a view looks like a table for reads, it should look like one for writes. Postgres delivers this for the easy case (a view over a single base table, no joins, no aggregates, no DISTINCT) and calls them auto-updatable. You can INSERT, UPDATE, and DELETE through them, and the rewrite rule translates the operation against the underlying table.

INSERT INTO active_customers (email, name)
VALUES ('alice@example.com', 'Alice');
-- becomes: INSERT INTO customers (email, name, status, last_login_at, deleted_at)
--          VALUES ('alice@example.com', 'Alice', DEFAULT, DEFAULT, DEFAULT);

Add a join, an aggregate, a GROUP BY, or anything else outside the auto-updatable rules and the write path disappears. SELECT still works, but INSERT errors, and you have to wire up INSTEAD OF triggers by hand. So customer_orders and customer_summary are read-only by accident, not by design, and what counts as auto-updatable is not visible in the view definition itself. You find out at write time, in production.
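The failure is at least loud. The exact wording varies a little by version, but the shape is:

INSERT INTO customer_summary (id, email) VALUES (1, 'x@example.com');
-- ERROR:  cannot insert into view "customer_summary"
-- DETAIL:  Views containing GROUP BY are not automatically updatable.
-- HINT:  To enable inserting into the view, provide an INSTEAD OF INSERT
--        trigger or an unconditional ON INSERT DO INSTEAD rule.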

WITH CHECK OPTION rejects any write that would produce a row the view cannot see, with LOCAL checking only this view's predicate and CASCADED (the default) checking every underlying view up the chain. It is how you hand an application a writable, scoped slice of a table without trusting it to enforce the scope itself.
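A sketch of what that looks like on the layer-1 view; the column list must match the original expansion exactly for CREATE OR REPLACE to be accepted:

CREATE OR REPLACE VIEW active_customers AS
SELECT c.id, c.email, c.name, c.status, c.last_login_at, c.deleted_at
FROM customers c
WHERE c.deleted_at IS NULL
  AND c.status = 'active'
  AND c.last_login_at > now() - interval '90 days'
WITH CASCADED CHECK OPTION;

UPDATE active_customers SET status = 'disabled' WHERE id = 42;
-- ERROR:  new row violates check option for view "active_customers"
-- the updated row would fail the status = 'active' predicate, so the view
-- refuses to produce a row it could no longer see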

The architect's stigma

When I first got into databases, views were the thing senior people warned you off. "Don't use views in production", "views are slow", "views hide the real query". You learned to write the SQL inline, even when the same predicate appeared in twenty places, because the alternative was disapproval at the next code review.

In Oracle shops the warning had specific roots. The cost-based optimiser of that era struggled to merge predicates through views with non-trivial projections, joins, or set operations, so a clean three-layer view could expand into a plan that scanned everything and filtered late. The shop-wide answer was almost always materialised views or PL/SQL packages that encapsulated every access path; a simple view sat awkwardly between the two, not cached like a matview, not encapsulated like a package, and with optimiser surprises on top.

Postgres carried the same baggage by association. Nested-view spirals were real, rule-system surprises were real, and the optimiser of fifteen years ago was less capable than today's. The rule of thumb hardened into doctrine and outlived its conditions.

The improvements are real; the reputation has not caught up. Reach for a view in a code review today and someone will still object on principle. Worth knowing before your PR hits review.

What happens when you change a table

Everything covered so far stays invisible while the schema underneath holds still. View bodies live in pg_rewrite, columns are pinned to attribute numbers, types are frozen at CREATE VIEW time, and the planner just expands and runs.

The moment someone tries to change a column the view tree references, every one of those mechanics surfaces at once. Suppose someone needs to drop the name column from customers:

ALTER TABLE customers DROP COLUMN name;
ERROR:  cannot drop column name of table customers because other objects depend on it
DETAIL:  view active_customers depends on column name of table customers
HINT:  Use DROP ... CASCADE to drop the dependent objects too.

active_customers references name (the c.* was expanded at view-creation time), customer_orders inherits the dependency through ac.*, and customer_summary is another layer down. The whole tree is frozen on one column.

It is not just dropping. Try widening a type:

ALTER TABLE customers ALTER COLUMN email TYPE TEXT;
ERROR:  cannot alter type of a column used by a view or rule
DETAIL:  rule _RETURN on view active_customers depends on column "email"

The same wall blocks any structural change to a column a view references: drop, rename, type change. Even widening VARCHAR(255) to TEXT, a change that loses no data and adds no constraint, is forbidden because the view's stored definition references the old type OID.

The hint suggests CASCADE:

ALTER TABLE customers DROP COLUMN name CASCADE;
NOTICE:  drop cascades to 3 other objects
DETAIL:  drop cascades to view active_customers
drop cascades to view customer_orders
drop cascades to view customer_summary

All three views are dropped, not modified, along with their grants, RLS policies, and any other dependent objects. On a schema with dozens of interconnected views, CASCADE is a nightmare.

Never use CASCADE on production views without a full recreation script ready.

CASCADE doesn't modify views, it drops them. All GRANTs, row-level security policies, and downstream dependencies disappear with them. There is no undo.

So the manual path is: save every view definition, drop them in reverse dependency order, alter the table, recreate them in forward dependency order, reapply all grants. For three views this is tedious. For thirty views across multiple schemas, it is a full migration project.

-- 1. save definitions 
SELECT pg_get_viewdef('customer_summary', true);
SELECT pg_get_viewdef('customer_orders', true);
SELECT pg_get_viewdef('active_customers', true);

-- 2. drop in leaf-first order
DROP VIEW customer_summary;
DROP VIEW customer_orders;
DROP VIEW active_customers;

-- 3. now you can alter the table
ALTER TABLE customers DROP COLUMN name;

-- 4. recreate in correct order
CREATE VIEW active_customers AS
SELECT id, email, status, last_login_at, deleted_at
FROM customers
WHERE deleted_at IS NULL
  AND status = 'active'
  AND last_login_at > now() - interval '90 days';

CREATE VIEW customer_orders AS ...;
CREATE VIEW customer_summary AS ...;

-- 5. reapply grants and so on
-- ...

The SELECT * trap

You might think SELECT * saves you from column-level dependencies. It does not. It makes things worse.

CREATE VIEW all_customers AS
SELECT * FROM customers;

This looks flexible. PostgreSQL expands SELECT * at view creation time and freezes the result. Check what the database actually stored:

SELECT pg_get_viewdef('all_customers'::regclass, true);
 SELECT id,
    email,
    name,
    status,
    last_login_at,
    deleted_at
   FROM customers;

The * was expanded to the columns that existed when the view was created. Now add a column to the base table:

ALTER TABLE customers ADD COLUMN phone TEXT;
SELECT * FROM all_customers;

The phone column is not there. The view still returns the original columns. To pick up new columns, you have to CREATE OR REPLACE VIEW all_customers AS SELECT * FROM customers, which re-expands the * against the current table definition.
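The fix is a one-liner, and it stays within the append-only rule because the new column lands at the end:

CREATE OR REPLACE VIEW all_customers AS
SELECT * FROM customers;
-- the * re-expands against the current table definition; phone is appended
-- after the original six columns, which is the only change OR REPLACE allows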

Drop a column that was in the original expansion and the migration is blocked the same way it would be with an explicit column list, except now the dependency is hidden in the catalog instead of visible in the view body. SELECT * gives the illusion of flexibility while creating the same rigid coupling.

This is actually documented behaviour and follows the SQL standard. But it surprises nearly everyone the first time they encounter it, and it is the source of many "my view is missing columns" bug reports.

Always use explicit column lists in views. At least then the dependency is visible and the breakage is predictable.

The dependency wall, the type-OID coupling, the frozen SELECT * expansion: all of it is the same trade-off, applied consistently. Rigidity is the feature, not a bug.

Where this isn't a PostgreSQL bug

The rigidity is a design choice, not a defect. Other databases made different choices, and none of them are free.

Oracle marks dependent views as INVALID rather than blocking the DDL. The views are recompiled automatically on next access. If they still work, great; if not, you get an error at query time. This sounds better until you realize what it means in practice: you can deploy a migration, get a clean exit code, and not discover that a critical reporting view is broken until a user hits it on Monday morning. The breakage moves from migration time, where you are paying attention, to runtime, where you may not be. For teams that value deployment confidence, this is arguably worse.

SQL Server has sp_refreshview, which recompiles a view's metadata against the current table definitions. You can alter a table and then refresh the dependent views to pick up the changes. But sp_refreshview works on one view at a time. There is no built-in way to refresh an entire dependency chain in the right order. And if a view references a dropped column, sp_refreshview fails; it does not remove the reference for you. You still have to manually edit and recreate the view. It is a convenience, not a solution.

PostgreSQL chose compile-time safety: no surprise INVALID views in production, no silently wrong results from a lazy recompilation papering over a structural change. The cost is manual dependency management on every schema change, and it is high enough to push experienced teams away from views entirely.

The saving grace: transactional DDL

Before the workarounds, there is one PostgreSQL feature that materially changes the risk calculus: DDL is transactional. The entire drop-alter-recreate-regrant sequence can live inside a single BEGIN/COMMIT:

BEGIN;
DROP VIEW customer_summary;
DROP VIEW customer_orders;
DROP VIEW active_customers;

ALTER TABLE customers DROP COLUMN name;

CREATE VIEW active_customers AS ...;
CREATE VIEW customer_orders AS ...;
CREATE VIEW customer_summary AS ...;

GRANT SELECT ON customer_summary TO reporting;
COMMIT;

If any statement raises (a typo in a recreated view, a missing grant, an unexpected dependency) the whole migration rolls back and the database is exactly as it was. Oracle and SQL Server cannot do this for most DDL; their CREATE VIEW auto-commits, so a partial failure leaves you with a half-migrated schema and no ROLLBACK to lean on. PostgreSQL's "you have to drop and recreate everything" pain is real, but the recovery story is much better than Oracle's or SQL Server's.

This does not solve the locking problem. DROP VIEW and CREATE VIEW take AccessExclusiveLock on the view; ALTER TABLE takes one on the table. For the duration of the transaction, anything that touches those objects waits behind it. A long view-rebuild transaction on a busy system is a stop-the-world window. Keep the transaction tight: prepare the new SQL outside the transaction, hold no other locks, and do not run it during peak traffic. Transactional DDL gives you safety, not concurrency.

The workarounds

None of these solve the fundamental problem; they make it manageable.

Avoid views entirely. Replace views with application-level query builders or ORM scopes. You lose the shared abstraction: when three services need the same definition of "active customer," each one implements it independently. Definitions drift. For teams with full control over a small codebase, this is a reasonable trade. For shared data platforms, it is not.

Script the recreation. Use pg_get_viewdef() to extract definitions, then write migration scripts that drop and recreate views around schema changes. The problems: no impact analysis (you have to figure out the dependency order yourself), no grant preservation (view definitions do not include permissions), and it does not scale past a handful of views. Migration frameworks will not help here either: they run your SQL files in order, but figuring out what SQL to write is entirely on you.

Version your views. Instead of modifying views in place, create active_customers_v2 alongside active_customers_v1. Migrate consumers one at a time. Drop the old version when nothing uses it. This works well when views are consumed by multiple independent teams, since you cannot force everyone to migrate in the same deployment window. The trade-off is naming discipline and version proliferation: if customer_orders_v1 depends on active_customers_v1, you now need customer_orders_v2 too.

Schema-based versioning (CREATE SCHEMA api_v2; CREATE VIEW api_v2.active_customers AS ...) is cleaner than suffixes for public-facing APIs. It gives you a natural namespace and lets you SET search_path to switch versions.
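A sketch of the schema-per-version pattern; the names and the tightened rule are illustrative:

CREATE SCHEMA api_v2;

CREATE VIEW api_v2.active_customers AS
SELECT c.id, c.email, c.status
FROM customers c
WHERE c.deleted_at IS NULL
  AND c.status = 'active'
  AND c.last_login_at > now() - interval '30 days';  -- v2 tightens the window

-- consumers opt in per session or per role, without rewriting a single query
SET search_path = api_v2, public;
SELECT * FROM active_customers;  -- now resolves to api_v2.active_customers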

Query pg_depend yourself. PostgreSQL tracks all object dependencies in the pg_depend system catalog. The catch: views do not depend on tables directly. The rewrite rule that implements the view does, so every traversal goes pg_depend -> pg_rewrite -> pg_class. Direct view dependents of customers:

SELECT
    depns.nspname AS dependent_schema,
    depc.relname  AS dependent_view,
    COALESCE(
        (SELECT a.attname FROM pg_attribute a
         WHERE a.attrelid = d.refobjid AND a.attnum = d.refobjsubid),
        '*'
    ) AS source_column
FROM pg_depend d
JOIN pg_rewrite r       ON r.oid = d.objid
JOIN pg_class depc      ON depc.oid = r.ev_class
JOIN pg_namespace depns ON depns.oid = depc.relnamespace
WHERE d.refobjid    = 'customers'::regclass
  AND d.classid     = 'pg_rewrite'::regclass
  AND d.refclassid  = 'pg_class'::regclass
  AND depc.relkind IN ('v', 'm')
  AND depc.oid     <> 'customers'::regclass
ORDER BY depns.nspname, depc.relname;

Useful, but only one hop. customer_summary depends on customers transitively through two intermediate views, which this query misses. A recursive CTE walks the chain:

WITH RECURSIVE view_deps AS (
    -- direct dependents of the target table
    SELECT DISTINCT
        depc.oid                              AS view_oid,
        depns.nspname || '.' || depc.relname  AS view_name,
        1                                     AS depth
    FROM pg_depend d
    JOIN pg_rewrite r       ON r.oid = d.objid
    JOIN pg_class depc      ON depc.oid = r.ev_class
    JOIN pg_namespace depns ON depns.oid = depc.relnamespace
    WHERE d.refobjid = 'customers'::regclass
      AND d.classid  = 'pg_rewrite'::regclass
      AND depc.relkind IN ('v', 'm')

    UNION

    -- views depending on views we already found
    SELECT depc.oid,
           depns.nspname || '.' || depc.relname,
           vd.depth + 1
    FROM view_deps vd
    JOIN pg_depend d        ON d.refobjid = vd.view_oid
    JOIN pg_rewrite r       ON r.oid = d.objid
    JOIN pg_class depc      ON depc.oid = r.ev_class
    JOIN pg_namespace depns ON depns.oid = depc.relnamespace
    WHERE d.classid  = 'pg_rewrite'::regclass
      AND depc.relkind IN ('v', 'm')
      AND depc.oid <> vd.view_oid  -- skip the view's own _RETURN rule
)
SELECT view_name, MIN(depth) AS depth
FROM view_deps
GROUP BY view_name
ORDER BY depth, view_name;

The depc.oid <> vd.view_oid filter matters: each view's _RETURN rule has a self-dependency entry that without the filter sends the walk back to the view it just visited.

This works, but at this point you are writing a dependency-analysis tool. Add topological sort, grant capture, matview handling, RLS-policy preservation, and you have a small product.

Materialized views have it worse. Everything above applies, plus you lose the cached result set on DROP. After recreating, REFRESH rebuilds from scratch: minutes to hours on large datasets, with no CONCURRENTLY option until a unique index exists. For matviews that back dashboards, this means downtime.
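A sketch of the unique-index requirement; customer_summary_mat is an illustrative matview over the summary view:

CREATE MATERIALIZED VIEW customer_summary_mat AS
SELECT * FROM customer_summary;

-- without this, only the blocking refresh is available
CREATE UNIQUE INDEX ON customer_summary_mat (id);

REFRESH MATERIALIZED VIEW customer_summary_mat;               -- locks out readers
REFRESH MATERIALIZED VIEW CONCURRENTLY customer_summary_mat;  -- readers keep working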

Prior art hiding in pg_dump

The mechanism the manual workaround keeps reinventing (rewriting a view's body while preserving its identity, grants, and dependents) already exists inside PostgreSQL. It just lives in pg_dump, used only when needed.

Most of the time pg_dump does the obvious thing: builds a dependency graph, topologically sorts it, and emits each view in base-first order with a normal CREATE VIEW. Dump our four-view chain and that is exactly what you get.

The interesting case is when the sort fails. Cycles are uncommon but real: a view body that calls a function whose own body references the view back, a trigger on a base table that reads through a view reading the same table, cross-referencing matviews with RLS policies pointing back at views. When pg_dump hits one, it falls back to the placeholder view trick: emit one of the views early as a stub with the right column list and types but a dummy body, then come back with CREATE OR REPLACE VIEW to install the real definition once the rest of the cycle exists.

-- emitted early, as a placeholder
CREATE VIEW customer_summary AS
SELECT
    NULL::integer AS id,
    NULL::text    AS email,
    NULL::text    AS name,
    NULL::bigint  AS orders_12mo,
    NULL::bigint  AS revenue_12mo_cents;

-- emitted later, once dependencies exist
CREATE OR REPLACE VIEW customer_summary AS
SELECT co.id, co.email, co.name,
       COUNT(co.order_id)               AS orders_12mo,
       COALESCE(SUM(co.total_cents), 0) AS revenue_12mo_cents
FROM customer_orders co
GROUP BY co.id, co.email, co.name;

Grants, comments, and policies attach to the stub and survive the rewrite because the OID never changes. CREATE OR REPLACE VIEW mutates the same pg_class row in place. PostgreSQL has been doing this on every pg_dump --schema-only for years. The mechanism is there; it is simply not exposed as user-facing DDL.

Before you reach for a VIEW

Views are worth using. The abstraction is real: shared definitions, clean layering, column-level security, and no runtime cost when the planner can inline them. The trouble is that none of the rigidity is visible from the outside. A view looks like a table in the catalog, in \d, in the ORM, and in every query that touches it. The teardown cost only appears the first time someone tries to drop a column, widen a type, or CASCADE their way out of a migration on a Friday afternoon.

So before you reach for one, hold the trade-off in mind:

  • SELECT * in a view body is a trap. It freezes the column list at creation time, hides the dependency in the catalog, and still blocks the same DDL an explicit list would. Always write the columns out.
  • Every layer multiplies the teardown cost. A three-deep view chain means three drops, three recreates, three sets of grants and policies to reapply, in the right order, for any structural change to a column at the bottom. Keep dependency trees shallow, and be honest about whether the abstraction is paying for itself.
  • CASCADE is not a fix. It drops dependent views along with their grants, RLS policies, and downstream dependencies, with no undo. Never run it in production without a recreation script ready.

When the migration finally has to happen, two things make the cycle survivable. Wrap the whole drop-alter-recreate-regrant sequence in a transaction so a typo or missed grant rolls back cleanly, and keep that transaction tight — AccessExclusiveLock blocks everything else for its duration. Then map the dependency graph before you cut: pg_depend joined through pg_rewrite, walked with a recursive CTE, tells you what actually breaks. Run that query before the migration, not after the incident.

The pain traces back to one missing primitive. ALTER VIEW today handles renames, owner and schema changes, column defaults, and options like security_barrier, but nothing structural. CREATE OR REPLACE VIEW can append columns at the end and nothing else. A real ALTER VIEW DROP COLUMN, ADD COLUMN, ALTER COLUMN TYPE is what would make views safe to evolve. The catalog and the in-place-rewrite mechanism are already there, as pg_dump's placeholder trick demonstrates; the user-facing DDL is what is missing.

Even without it, views are still worth using. Just don't pretend they're tables.