Table of Contents

You may have heard that entropy (and the “fundamental law of entropostatics”1) is fundamentally an “epistemic” concept, having to do with the beliefs of an observer, rather than an “ontic” one. That its place is alongside statistics, information theory, computability theory and logic, decision theory, games and economics — rather than mechanics, relativity, field theory and chemistry.

This understanding can itself be a source of confusion2, e.g.

  1. “So if I know the exact positions and momenta of every particle in this pot of boiling water, its temperature is zero?” “Yep!” “… and yet, when I touch it, it scalds.”
  2. How does observation not decrease entropy — it literally gives you new information!
  3. OK, “hot water separating into cold water + electricity” is unlikely — but if you do observe that, hasn’t entropy increased? Does this not contradict the highly principled arguments for the fundamental law of entropostatics based on Liouville’s theorem?
  4. Does the fundamental law just state “entropy tends to increase” or “entropy is literally maximized”? Liouville’s theorem etc. only seems to imply the former, while e.g. Jaynes’ MaxEnt stuff is the latter.
  5. If the increase/maximization in entropy is entirely a matter of the observer’s computational limitations, how can it say anything predictive at all about real physical behaviour, e.g. “free energy minimization”, “Landauer’s principle”, etc.?
  6. Similarly, when we justify the Maxwell-Boltzmann distribution by saying it is the correct distribution a Bayesian ought to have, knowing just the energy … how does this happen to also match the actual distribution of particle velocities in the gas? Does “the universe” perform Bayesian inference too?

  1. Preliminaries: homomorphisms and the second law
  2. Preliminaries: MaxEnt as Bayesian inference
  3. Constraints as observations
  4. Optimization, temperature and free energy
  5. Thermodynamics of computation and bounded rationality
  6. Thermodynamics and economics
  7. “And yet it scalds”
  8. Active inference

References, in some sensible order


1. Preliminaries: Homomorphisms and the second law

One informal way to think of homomorphisms in math is that they are maps that do not “create information out of thin air”. Isomorphisms further do not destroy information. The terminal object (e.g. the trivial group, the singleton topological space, or the trivial vector space) is the “highest-entropy state”, where all distinctions disappear and reaching it is heat death.

  • Take, for instance, the group homomorphism \(\phi:\mathbb{Z}^+\to\mathbb{Z}_{4}^+\). Before \(\phi\) was applied, “1” and “5” were distinguished: 2 + 3 = 5 was correct, but 2 + 3 = 1 was wrong. Upon applying this homomorphism, this information disappears — however, no new information has been created, that is: no true indistinctions (equalities) have become false.
  • Similarly in topology, “indistinction” is “arbitrary closeness”. Wiggle-room (aka “open sets”) is information; it cannot be created from nothing. If a set or sequence comes arbitrarily close to a point, it will remain arbitrarily close to that point after any continuous transformation.
  • There is no information-theoretical formalization of “indistinction” on these structures, because this notion is more general than information theory. In the category of measurable spaces, two points in the sample space are indistinct if they are not distinguished by any measurable set — and measurable functions are not allowed to create measurable sets out of nothing.
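The first bullet can be made concrete in a few lines. A minimal sketch in Python (the name `phi` and the ranges checked are illustrative choices):

```python
# A concrete sketch of "homomorphisms don't create information out of thin
# air", using the quotient map phi: Z -> Z_4 from the bullet above.
# phi collapses distinctions (1 and 5 become equal) but never invents new
# ones: every true equality before the map is still true after it.

def phi(n: int) -> int:
    """The quotient homomorphism Z -> Z_4."""
    return n % 4

# phi is a homomorphism: it commutes with addition.
for a in range(-10, 10):
    for b in range(-10, 10):
        assert phi(a + b) == (phi(a) + phi(b)) % 4

# Information is destroyed: a distinction that existed in Z disappears.
assert 2 + 3 == 5 and 2 + 3 != 1   # in Z, "5" and "1" are distinct
assert phi(5) == phi(1)            # in Z_4, that distinction is gone

# But no information is created: if m == n in Z, then phi(m) == phi(n)
# in Z_4 -- trivially, since phi is a function. "Creating information"
# would mean some true equality becoming false after the map.
```
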

(there is also an alternate, maybe dual/opposite analogy I can make based on presentations — here, the highest-entropy state is the “free object”, e.g. a discrete topological space or free group, and each constraint (e.g. \(a^5=1\)) is information — morphisms are “observations”. In this picture we see knowledge as encoded by identities rather than distinctions — we may express our knowledge as a presentation like \(\langle X_1,\dots X_n\mid X_3=4,X_2-X_1=2\rangle\) — and morphisms cannot be concretely understood as functions on sets, but rather show a tree of possible outcomes, like maybe you believe in Everett branches or whatever.)
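The presentation \(\langle X_1,\dots X_n\mid X_3=4,X_2-X_1=2\rangle\) can be toy-modeled by brute force; a sketch in Python (the number of variables and their ranges are arbitrary choices made for illustration):

```python
# A toy version of "constraints are information": start from the "free"
# state space (no identities imposed, maximum entropy) and add the
# identities X3 = 4 and X2 - X1 = 2 from the presentation above. Each
# constraint filters the possible worlds, so entropy measured as
# log2(number of remaining worlds) can only drop.
from itertools import product
from math import log2

n = 3              # three variables X1, X2, X3 (illustrative)
values = range(8)  # each ranging over {0, ..., 7} (illustrative)
worlds = list(product(values, repeat=n))

constraints = [
    lambda w: w[2] == 4,         # X3 = 4
    lambda w: w[1] - w[0] == 2,  # X2 - X1 = 2
]

print(f"free object: {len(worlds)} worlds, entropy {log2(len(worlds)):.1f} bits")
for i, c in enumerate(constraints, 1):
    worlds = [w for w in worlds if c(w)]
    print(f"after constraint {i}: {len(worlds)} worlds, "
          f"entropy {log2(len(worlds)):.2f} bits")
# 512 worlds -> 64 worlds -> 6 worlds: each identity is information.
```
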

In general if you postulate:

  • … you live on some object in a category
  • … time-evolution is governed by some automorphism \(H\)
  • … you, the observer, have beliefs about your universe and keep forgetting some information (“coarse-grains the phase space”) — i.e. your subjective phase space is also an object in that category, which undergoes homomorphisms

Then the second law is just the tautology: time-evolution followed by forgetting is a homomorphism, and homomorphisms never create distinctions, so the information you retain about the world can only shrink, i.e. entropy is non-decreasing.
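This tautology can be exhibited numerically. In the sketch below (microstate count, the particular permutation, and the forgetting map are all arbitrary choices), the automorphism \(H\) is a permutation of microstates, which preserves distinctions exactly, while each forgetting step is a generally non-injective map, which can only destroy them:

```python
# The second law as a tautology about images: an automorphism H (a
# permutation of microstates) preserves the number of distinguishable
# states exactly, while each "forgetting" step g (an arbitrary,
# generally non-injective map) can only shrink it. In the terminology
# above, fewer surviving distinctions = higher entropy, so entropy is
# non-decreasing -- by construction, not by empirical accident.
import random

random.seed(0)
N = 64
states = set(range(N))                       # initially all distinctions

H = list(range(N))
random.shuffle(H)                            # a random automorphism (bijection)
g = [random.randrange(N) for _ in range(N)]  # a random forgetting map

distinctions = [len(states)]
for _ in range(10):
    states = {H[s] for s in states}   # time-evolution: |states| unchanged
    states = {g[s] for s in states}   # coarse-graining: |states| can only drop
    distinctions.append(len(states))

# Monotonicity is guaranteed: the image of a set is never larger than the set.
assert all(a >= b for a, b in zip(distinctions, distinctions[1:]))
print(distinctions)
```
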

2. Preliminaries: MaxEnt as Bayesian inference

MaxEnt as Bayesian inference.
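As a placeholder illustration of the MaxEnt machinery itself: the entropy-maximizing distribution subject to a mean-energy constraint \(\mathbb{E}[E]=U\) is the Gibbs form \(p_i \propto e^{-\beta E_i}\). A minimal numeric sketch in Python (the energy levels and target mean are made-up numbers; \(\beta\) is found by bisection rather than a general-purpose optimizer):

```python
# MaxEnt under a single constraint: maximize Shannon entropy subject to
# sum(p_i * E_i) = U. The solution is p_i = exp(-beta * E_i) / Z; since
# mean energy is strictly decreasing in beta, we can solve for beta by
# bisection.
import math

E = [0.0, 1.0, 2.0, 3.0]   # illustrative energy levels
U = 1.0                    # illustrative target mean energy

def gibbs(beta):
    w = [math.exp(-beta * e) for e in E]
    Z = sum(w)
    return [x / Z for x in w]

def mean_energy(beta):
    return sum(p * e for p, e in zip(gibbs(beta), E))

lo, hi = -50.0, 50.0       # mean_energy(lo) ~ max(E), mean_energy(hi) ~ min(E)
for _ in range(200):
    mid = (lo + hi) / 2
    if mean_energy(mid) > U:
        lo = mid           # mean too high: raise beta (cool down)
    else:
        hi = mid
beta = (lo + hi) / 2
p = gibbs(beta)

entropy = -sum(q * math.log(q) for q in p)
print(f"beta = {beta:.4f}, p = {[round(q, 4) for q in p]}, S = {entropy:.4f}")
```
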


3. Boltzma

Footnotes:

1

aka “second law of thermodynamics”

2

:)

Author: Abhimanyu Pallavi Sudhir

Created: 2025-04-26 Sat 00:18