Agent Latents
1. Agent latents
The key questions seem to be:
- Do we want the “latent agents” to be precisely-defined trading strategies from a simple class of strategies — or should they be fuzzy (e.g. defined by prompts)?
- How do we handle “slightly different” agents? E.g. both agents are bullish on crypto, but one agent is intelligently so while another is batshit crazy? How do we decide what a “uniform prior” is?
- How do we interpret investments in agents as probabilities?
- How do we deal with “adversarial attacks” – e.g. if there is a “bullish on crypto agent”, what if someone just generates questions like “will Satoshi Nakamoto be elected president?” or “will WW-III be fought over bitcoin?”
2. Liquidity flowing
Approaches:
- Agent latents
- Mutual information
- Gated probabilities
3. Mutual information
Recall that the LMSR subsidy parameter \(\beta\) on a prediction market for \(X\) is the “price of information on \(X\)”, in dollars-per-bit (well, actually dollars-per-nat) — i.e. if your subjective belief is \(\mathbf{p}\) and the market belief is \(\mathbf{p}_0\), then your expected return (according to your beliefs) is \(\beta(H(\mathbf{p}_0)-H(\mathbf{p}))\) (more precisely, \(\beta\,D_{\mathrm{KL}}(\mathbf{p}\,\Vert\,\mathbf{p}_0)\), which equals this entropy difference in expectation when \(\mathbf{p}\) comes from conditioning the market belief on new information). In particular, the price you should be willing to pay for some other piece of information \(Y\) is proportional to the mutual information: \(\beta(H(X)-H(X\mid Y))=\beta\,I(X;Y)\).
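To make the units concrete, here is a minimal numerical sketch (the subsidy \(\beta\) and the joint distribution are made-up values, and the helper functions are illustrative, not an established API):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p, where=p > 0, out=np.zeros_like(p)))

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), in nats, from a joint distribution."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())

beta = 100.0                 # LMSR subsidy, dollars per nat (hypothetical)
p0 = np.array([0.5, 0.5])    # market belief on X
p = np.array([0.8, 0.2])     # trader's subjective belief on X

# Expected profit of moving the market from p0 to p, believing p:
expected_return = beta * np.sum(p * np.log(p / p0))  # beta * KL(p || p0)

# Dollar value of information Y about X, from a hypothetical joint P(X, Y):
joint = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
value_of_Y = beta * mutual_information(joint)
```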
This gives us a way to “update” the liquidity of a market based on the liquidities of other related markets. Perhaps the “inherent” value of information on \(Y\) is \(\beta_Y H(Y)\), but learning \(Y\) also gives information on \(X\), so its total value is \(\beta_Y H(Y) + \beta_X I(X;Y)\).
This can be extended to any number of markets: say you have \(n\) markets \(X_1,\dots,X_n\) with liquidities \(\beta_1,\dots,\beta_n\) (writing \(I(X)\) for the self-information \(H(X)\)). If you know the mutual information between these markets, you can get the updated liquidities as:
\[\beta' = \mathbf{I}\beta\]
Where:
\[\mathbf{I} =\begin{bmatrix} 1 & \frac{I(X_2;X_1)}{I(X_1)} & \frac{I(X_3;X_1)}{I(X_1)} & \dots & \frac{I(X_n;X_1)}{I(X_1)} \\ \frac{I(X_1;X_2)}{I(X_2)} & 1 & \frac{I(X_3;X_2)}{I(X_2)} & \dots & \frac{I(X_n;X_2)}{I(X_2)} \\ \frac{I(X_1;X_3)}{I(X_3)} & \frac{I(X_2;X_3)}{I(X_3)} & 1 & \dots & \frac{I(X_n;X_3)}{I(X_3)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{I(X_1;X_n)}{I(X_n)} & \frac{I(X_2;X_n)}{I(X_n)} & \frac{I(X_3;X_n)}{I(X_n)} & \dots & 1 \end{bmatrix}\]
Right?
Wrong. Usually in economics when two goods are circularly interdependent, you don’t just apply this transformation once, but recursively until an equilibrium is reached. I.e. it’s not that \(\beta'_Y=\beta_Y+\beta_X\frac{I(X;Y)}{I(Y)}\), but rather:
\[\beta'_Y=\beta_Y+\beta'_X\frac{I(X;Y)}{I(Y)}\] \[\beta'_X=\beta_X+\beta'_Y\frac{I(Y;X)}{I(X)}\]
Which can be solved as a system of linear equations. More generally we have:
\[\beta'_XI(X)=\beta_XI(X)+\sum_{Y\ne X}\beta'_YI(X;Y)\]
(note that mutual information is symmetric i.e. \(I(X;Y)=I(Y;X)\)). This gives us:
\[\mathbf{J}\beta'=\mathbf{D}\beta\]
Where \(\mathbf{D}=\operatorname{diag}(I(X_1),\dots,I(X_n))\) and:
\[\mathbf{J} = \begin{bmatrix} I(X_1) & -I(X_1;X_2) & -I(X_1;X_3) & \dots & -I(X_1;X_n) \\ -I(X_2;X_1) & I(X_2) & -I(X_2;X_3) & \dots & -I(X_2;X_n) \\ -I(X_3;X_1) & -I(X_3;X_2) & I(X_3) & \dots & -I(X_3;X_n) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -I(X_n;X_1) & -I(X_n;X_2) & -I(X_n;X_3) & \dots & I(X_n) \end{bmatrix} \]
Which is the correct expression. Given the vector of “inherent liquidities” \(\beta\), you can calculate the corrected liquidities \(\beta'\) by solving this linear system (equivalently, inverting \(\mathbf{J}\)).
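The equilibrium can be computed with one linear solve. A minimal sketch with made-up informations and liquidities (note that each row of the recursion is scaled by \(I(X_i)\), so the right-hand side is \(I(X_i)\beta_i\)):

```python
import numpy as np

# Hypothetical numbers for three markets, all informations in nats:
H = np.array([0.69, 0.69, 0.56])            # self-informations I(X_i)
M = np.array([[0.00, 0.10, 0.02],
              [0.10, 0.00, 0.05],
              [0.02, 0.05, 0.00]])           # mutual informations I(X_i; X_j)

beta = np.array([100.0, 50.0, 20.0])         # inherent liquidities

# J has I(X_i) on the diagonal and -I(X_i; X_j) off it.
J = np.diag(H) - M

# Solve J beta' = diag(I(X_i)) beta for the corrected liquidities.
beta_prime = np.linalg.solve(J, H * beta)
```

As long as no question is redundant, each row of \(\mathbf{J}\) is strictly diagonally dominant with nonpositive off-diagonal entries, so \(\mathbf{J}^{-1}\) is entrywise nonnegative and every corrected liquidity \(\beta'_i\) comes out at least as large as \(\beta_i\).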
Then there is the question of invertibility. I believe (though have not proven) that the matrix will be invertible as long as there is no “redundancy” in the questions.
(This can be seen in the two-question case: the matrix is singular if and only if \(I(X_1)I(X_2)=I(X_1;X_2)^2\), which, since \(I(X_1;X_2)\le I(X_1)\) and \(I(X_1;X_2)\le I(X_2)\), can only occur when \(I(X_1)=I(X_2)=I(X_1;X_2)\) — i.e. when the two questions carry exactly the same information.)
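A quick numeric check of the two-question case (the information values here are made up):

```python
import numpy as np

h = np.log(2)  # self-information of a fair binary question, in nats

# X_2 a copy of X_1: I(X_1) = I(X_2) = I(X_1;X_2) = ln 2 -> singular J.
det_redundant = np.linalg.det(np.array([[h, -h],
                                        [-h, h]]))

# Partial dependence: I(X_1;X_2) = 0.2 < ln 2 -> invertible J.
det_ok = np.linalg.det(np.array([[h, -0.2],
                                 [-0.2, h]]))
```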
Note: to get these mutual informations, you need markets on each conditional question:
\[\begin{align} I(X;Y) &= H(Y) - H(Y\mid X) \\ &= - \sum_y {P(Y=y)\log P(Y=y)} + \sum_{x}P(X=x)\sum_{y}P(Y=y\mid X=x)\log P(Y=y\mid X=x)\end{align}\]
where the marginal \(P(Y=y)=\sum_x P(X=x)P(Y=y\mid X=x)\) is itself computed from the market prices.
For the conditional questions you do not have markets for, it is fair to just assume the mutual information is 0.
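Putting the identity above into code: given a market on \(X\) and a conditional market on \(Y\) for each outcome of \(X\), the mutual information follows directly (prices are hypothetical, and `mi_from_markets` is an illustrative helper, not an established API):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats."""
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log(p, where=p > 0, out=np.zeros_like(p)))

def mi_from_markets(p_x, p_y_given_x):
    """I(X;Y) = H(Y) - H(Y|X), from a market on X and markets on Y given X=x."""
    p_x = np.asarray(p_x, dtype=float)
    p_y_given_x = np.asarray(p_y_given_x, dtype=float)  # row x: prices of Y | X=x
    p_y = p_x @ p_y_given_x                             # marginal P(Y=y)
    h_y_given_x = sum(px * entropy(row) for px, row in zip(p_x, p_y_given_x))
    return entropy(p_y) - h_y_given_x

# Hypothetical prices: a market on binary X, plus two conditional markets on Y.
mi = mi_from_markets([0.6, 0.4], [[0.9, 0.1],
                                  [0.3, 0.7]])
```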