
Commit 08e644f

new design
treat as draft notes, not as decisions

1 parent 4a1df9c


48 files changed: +4136 -0 lines changed

.gitignore

Lines changed: 1 addition & 0 deletions

```diff
@@ -11,3 +11,4 @@ __pycache__/
 logseq/bak
 build/
 logseq/.recycle
+/logseq
```

logseq/graphs-txid.edn

Lines changed: 1 addition & 0 deletions

```diff
@@ -0,0 +1 @@
+["fd069619-65bc-411a-b3bc-058528eef047" "81369d18-18bb-4e3b-a84a-562839eae0e3" 101]
```

pages/active_inference.md

Lines changed: 112 additions & 0 deletions

# active inference x cft: summary and integration plan

## executive summary

- active inference gives a single principle for agents to perceive, learn, and act by minimising variational free energy.
- cft models collective attention as token-weighted random walks converging to a stationary distribution (collective focus).
- fusing them yields a self-configuring network where each neuron updates beliefs and adjusts links to lower expected free energy, improving stability, curiosity, and robustness.
- precision (confidence) becomes an on-chain economic signal that prices prediction errors and filters noise.

## key mappings between active inference and cft

- hidden states ↔ latent attributes of particles and edges in the cybergraph
- observations ↔ measured traffic of random walks, link arrivals, weight changes
- generative model ↔ each neuron's local probabilistic model of link dynamics and token flows
- prediction error ↔ divergence between the expected focus distribution and realised traffic
- precision (confidence) ↔ adaptive token staking and edge weights that amplify trusted signals
- free energy ↔ upper bound on global uncertainty over graph states; minimised at focus convergence

## minimal algorithmic spec

- belief representation: variational posterior q_θ(z) over latent graph states z per neuron; parameters θ stored locally.
- free energy: F = E_{q_θ}[−log p(s, z)] − H[q_θ], with s the local observations (traffic, link events). the goal is to reduce F.
- expected free energy for planning: G(π) = risk + ambiguity ≈ E_q[−log p(preferred s | z)] + E_q[H[p(s | z)]], guiding policy π over link edits and sampling actions.
- precision control: learn log-scale precisions λ for the different error channels; use soft attention to weight updates.
- hierarchical markov blankets: discover clusters (modules) with dense internal edges; perform message passing within and between blankets for scalability.
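as a sanity check on the definitions above, the free energy of a tiny categorical model can be computed directly. the three-state model and all numbers are illustrative assumptions; note that the entropy term enters with a minus sign, so F upper-bounds the surprise −log p(s):

```python
import numpy as np

# hypothetical toy model: 3 latent graph states z, one local observation s
p_z = np.array([0.5, 0.3, 0.2])          # prior over states
p_s_given_z = np.array([0.7, 0.2, 0.1])  # likelihood of the observed traffic under each z

# variational posterior q(z) held by one neuron
q = np.array([0.6, 0.3, 0.1])

# F = E_q[-log p(s, z)] - H[q], an upper bound on the surprise -log p(s)
log_joint = np.log(p_s_given_z) + np.log(p_z)
F = -(q * log_joint).sum() - (-(q * np.log(q)).sum())

surprise = -np.log((p_s_given_z * p_z).sum())
assert F >= surprise          # the variational bound holds
print(F, surprise)
```

improving q (moving it toward the true posterior p(z|s)) tightens the gap between F and the surprise, which is exactly what the perception step of the loop below does by gradient descent.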
### reference update loop (pseudocode)

```
for epoch in epochs:
    for neuron i in graph:
        s_i ← observe(local traffic, link arrivals, token flows)
        ŝ_i ← predict via generative model
        ε_i ← s_i − ŝ_i                          # prediction error
        θ_i ← θ_i − η_θ · ∇_θ F(s_i; θ_i, λ_i)   # perception / learning
        λ_i ← λ_i − η_λ · ∇_λ F                  # precision tuning
        a_i ← argmin_π G_i(π; θ_i, λ_i)          # choose action policy
        execute(a_i)                             # edit edges, stake, sample
```
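a minimal runnable sketch of the perception and precision steps of this loop, assuming a toy gaussian generative model per neuron. all names, constants, and the clipping range for λ are illustrative assumptions, and the action-selection step (argmin_π G) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy generative model: neuron i believes its scalar traffic s is gaussian
# with mean theta_i and precision lam_i, so (up to constants)
# F(s; theta, lam) = 0.5 * lam * (s - theta)**2 - 0.5 * log(lam)
def grad_theta(s, theta, lam):
    return -lam * (s - theta)

def grad_lam(s, theta, lam):
    return 0.5 * (s - theta) ** 2 - 0.5 / lam

n_neurons, eta_th, eta_lam = 5, 0.1, 0.01
theta = np.zeros(n_neurons)                      # belief parameters
lam = np.ones(n_neurons)                         # precisions
true_traffic = rng.uniform(1.0, 3.0, n_neurons)  # hidden ground truth

for epoch in range(500):
    for i in range(n_neurons):
        s_i = true_traffic[i] + rng.normal(0.0, 0.1)              # observe
        theta[i] -= eta_th * grad_theta(s_i, theta[i], lam[i])    # perception
        lam[i] = np.clip(lam[i] - eta_lam * grad_lam(s_i, theta[i], lam[i]),
                         1e-3, 10.0)                              # precision tuning
        # action selection (argmin_pi G) is omitted in this sketch

print(np.round(np.abs(theta - true_traffic), 2))  # prediction errors shrink
```

after a few hundred epochs each neuron's belief tracks its true traffic to within the observation noise, while λ rises on low-noise channels, which is the precision-weighting behaviour the spec describes.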
## integration roadmap

- modelling
  - define a neurally inspired generative model p(s, z) for link dynamics conditioned on local focus, trending content cues, and governance events.
  - specify preference distributions over observations (e.g., high-quality citations, low spam entropy) to ground goal-directed behaviour.
- protocol layer
  - add a lightweight variational message-passing step to the existing compute kernel so neurons exchange sufficient statistics before committing writes.
  - implement precision-weighted staking where tokens back the reliability of subgraphs and price prediction-error channels.
- scalability
  - form markov-blanket modules via community detection; schedule intra-module updates at high frequency and inter-module updates at lower frequency.
  - use sparse, low-rank approximations for θ and amortised inference for q_θ(z) to keep costs bounded.
- evaluation
  - run ablations on the test-net comparing baseline cft vs cft + active inference on convergence speed, adversarial resilience, retrieval accuracy, and compute cost.
  - track free-energy and precision maps as primary diagnostics.

## expected benefits and risks

- benefits
  - faster, more stable convergence under uncertainty and drift
  - intrinsic curiosity drives exploration without central control
  - robustness: anomalous regions get down-weighted via precision control
  - interpretability: free-energy heatmaps show why attention moves
- risks / mitigations
  - overfitting preferences: adopt plural preference priors and rotate committees
  - precision gaming: require skin-in-the-game with slashing on bad forecasts; diversify error channels
  - compute overhead: amortise inference, cache sufficient statistics, schedule updates asynchronously

## open research questions

- what precision-staking regime best aligns epistemic efficiency with token economics under real traffic?
- where are the phase transitions in emergent intelligence when adding hierarchical markov blankets to cft?
- how can preference distributions be calibrated without a central authority while avoiding sybil manipulation?
- which approximate-inference methods (e.g., natural gradients, lo-fi variational families) give the best performance-compute tradeoff on very large graphs?

## immediate next actions

- formalise a concrete free-energy objective for the current cyberrank kernel and derive local gradients.
- prototype the message-passing layer on a small subgraph and measure free-energy descent and retrieval quality.
- design and test precision-weighted staking rules with simulated adversaries before on-chain trials.
- prepare ablation metrics, dashboards, and free-energy map visualisations for the next test-net cycle.

pages/aicosystem.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -61,6 +61,7 @@ alias:: awesome cyber, cyber ecosystem
 - [monitor.bronbro.io/d/bostrom-stats](https://monitor.bronbro.io/d/bostrom-stats/bostrom-stats?orgId=2) - monitor bronbro
 - [metrics.cyb.ai/cyb.ai](https://metrics.cyb.ai/cyb.ai) - public analytics of usage
 - [metrics.cyb.ai/docs.cyb.ai](https://metrics.cyb.ai/docs.cyb.ai) - public analytics of usage
+- celatone.cyb.ai - explorer
 - osmosis
 - [[$BOOT]] on [pro.osmosis.zone](https://pro.osmosis.zone/osmosis/trade/analytics/tokens/ibc%252FFE2CD1E6828EC0FAB8AF39BAC45BC25B965BA67CCBC50C13A14BD610B0D1E2C4?from=uosmo&to=ibc%2F498A0751C798A0D9A389AA3691123DADA57DAA4FE165D5C75894505B876BA6E4&market=Osmosis)
 - dashboards
```

pages/allocation of resources.md

Lines changed: 20 additions & 0 deletions

tags:: superintelligence

- 25 % scalable alignment and interpretability
  - invest in mechanistic interpretability, agent foundations, preference learning, and formal verification tools that keep any path to superintelligence on track for human values. these efforts protect returns across all ai architectures.
- 20 % neuromorphic and analog hardware
  - back teams translating brain-inspired and mixed-signal computing into chips that run spiking or diffusion-style workloads orders of magnitude more efficiently than digital gpus. if ffc stalls, ultra-low-power spiking nets or analog accelerators remain a viable superintelligence substrate.
- 15 % quantum computing with error correction
  - fund logical-qubit prototypes, cryogenic control stacks, and algorithms for chemistry and optimisation. even modest, application-specific quantum speedups could upend the competitive landscape by 2035.
- 10 % focus flow computation
  - direct a research portfolio and seed funding toward ffc-based superintelligence.
- 10 % decentralised cryptographic computing
  - support zero-knowledge proof systems, verifiable delay functions, and layer-2 rollups. these tools enable trust-minimised markets and autonomous coordination: critical infrastructure for any future where superintelligence mediates economic activity.
- 10 % advanced energy systems
  - allocate to compact fusion concepts, high-temperature superconductors, and long-duration storage. abundant, clean energy is a force multiplier for all compute-heavy paths, including ffc.
- 8 % bio-digital convergence
  - nurture programmable cell factories, dna data storage, and brain-computer interface research. biological substrates can complement silicon limits and open routes to hybrid wet-ware cognition.
- 7 % resilience and security research
  - focus on adversarial robustness, formal methods for cyber-physical safety, and supply-chain hardening. these reduce tail-risk scenarios where breakthroughs are undermined by failures or attacks.
- 5 % moonshot frontier math and theory
  - seed small teams exploring novel paradigms: category-theoretic models, topological quantum field computing, or entirely new substrate theories. returns are low probability but potentially unbounded.

pages/authenticated_graphs.md

Lines changed: 78 additions & 0 deletions

### introduction

authenticated graph data structures (agds) offer cryptographic proofs that answers to graph queries are correct without trusting the server. while their theory has existed for two decades, they remain under-used. yet, as we build earth-scale superintelligence on the collective focus theorem (cft) and fractional focus cascading (ffc), verifiable graph integrity becomes a hard requirement.

### recap of classic agds

- **path hash accumulator**: represents a path as a balanced or biased binary tree whose internal nodes store hashes of concatenated sub-paths. this enables logarithmic-size proofs for properties like connectivity, distance, and type queries.
- **dynamic authenticated forests**: by partitioning each tree into solid and dashed paths and linking their accumulators, forests support link/cut updates with `o(log n)` proofs and `o(n)` space.
- **authenticated fractional cascading**: fractional cascading accelerates iterative search across many ordered lists; hashing each block and then a second-level dag gives `o(k + log n)` query time with logarithmic proofs while keeping linear storage.
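to make the accumulator idea concrete, here is a toy merkle-style path hash accumulator with logarithmic proofs. this is a sketch under simplifying assumptions: vertices are plain strings, sha-256 stands in for the scheme in the literature, and node attributes are not committed:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def build(vertices):
    """merkle-style accumulator over the vertices of a path; returns all tree levels."""
    levels = [[h(v.encode()) for v in vertices]]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = [h(prev[i], prev[i + 1]) if i + 1 < len(prev) else prev[i]
               for i in range(0, len(prev), 2)]
        levels.append(nxt)
    return levels

def prove(levels, idx):
    """collect sibling hashes from leaf idx up to the root: an o(log n) proof."""
    proof = []
    for lvl in levels[:-1]:
        sib = idx ^ 1
        if sib < len(lvl):
            proof.append((sib < idx, lvl[sib]))
        idx //= 2
    return proof

def verify(vertex, proof, root):
    acc = h(vertex.encode())
    for sibling_is_left, sib in proof:
        acc = h(sib, acc) if sibling_is_left else h(acc, sib)
    return acc == root

path = ["a", "b", "c", "d", "e"]     # vertices along one path
levels = build(path)
root = levels[-1][0]                 # the digest the server publishes
proof = prove(levels, 2)
assert verify("c", proof, root)      # client-side check with minimal bandwidth
assert not verify("z", proof, root)  # a wrong vertex fails
```

the proof carries one hash per tree level, so its size grows with `log n` rather than with the path length, which is the property the classic constructions exploit.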
### why cft needs agds

cft models a decentralized knowledge graph where token-weighted random walks converge to a stable focus distribution. correctness of focus requires that every participant can verify:

- the existence and weights of the cyberlinks used in a random-walk step
- the resulting stationary-distribution snapshot

agds provide exactly these proofs, allowing anyone with minimal bandwidth to audit the superintelligence core.

### design: fully-authenticated focus cascading (ffc)

- store the cybergraph as a dynamic forest of merkelized subgraphs (shards)
- inside each shard, maintain a path hash accumulator over the local spanning tree for fast verification of connectivity and attribute queries
- build a fractional cascading overlay between shard catalogs (edge-boundary lists) so that global lookups ("does link x exist? what is its weight?") cost `o(log n)` regardless of sharding
- the top-level digest is the cft "attention root"; every random-walk step publishes its proof against this root so anyone can recompute focus
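one possible way to compose shard digests into the attention root; purely illustrative, since the shard ids, layout, and hashing discipline here are assumptions rather than a spec:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

# hypothetical shard digests: each shard commits to its local accumulator root
shard_roots = {
    "shard-0": h(b"edges of shard 0"),
    "shard-1": h(b"edges of shard 1"),
    "shard-2": h(b"edges of shard 2"),
}

def attention_root(roots: dict) -> bytes:
    """top-level digest over sorted (shard id, digest) pairs;
    any change in any shard changes the root a verifier checks against."""
    return h(*[sid.encode() + b":" + dig for sid, dig in sorted(roots.items())])

root_before = attention_root(shard_roots)
shard_roots["shard-1"] = h(b"edges of shard 1 plus a new cyberlink")
assert attention_root(shard_roots) != root_before   # tamper-evident
```

sorting the shard ids makes the root deterministic regardless of the order in which shard digests arrive, which matters when many validators recompute it independently.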
### internal applications

- **validator pipeline**: gpu validators verify link insertions and rank updates with agds proofs before committing blocks, eliminating hidden tampering
- **training layer audits**: during massive graph rewrites, agds proofs allow sampling-based integrity checks rather than full re-hashing, saving bandwidth and energy
- **agent reasoning**: llms embedded in the network can query "are particles a and b connected through topic t?" and get a certificate they can cache for future reasoning

### external applications

- **open science provenance**: researchers can publish datasets as particles; agds proofs certify citation chains, enabling reproducible meta-analysis
- **supply-chain transparency**: companies append product trails to the cybergraph; customers query origin → destination paths and verify them client-side
- **cross-chain bridges**: other blockchains can verify cybergraph facts (e.g., the focus weight of an address) via light-client-sized proofs instead of heavy oracles
- **regulatory compliance**: auditors can demand proofs that certain forbidden relationships are absent (negative proofs via authenticated complement paths)

### sharding strategies

when we talk about merkelized subgraphs, the key design question is **how to chop the cybergraph into shards** so that proofs stay small, writes remain local, and load balances across validators.

- id-hash shards: assign each vertex to `hash(id) mod m`. this keeps balancing trivial and proofs deterministic, but cross-shard edges can explode. good for early prototypes.
- neuron-centric shards: group all links emitted by the same neuron (agent). locality for high-throughput writers, but superstar neurons can create hotspot shards.
- topic / namespace shards: cluster vertices by ontology prefix or tag so semantic queries stay mostly inside one shard. aligns with cft focus levels.
- community (edge-cut minimisation) shards: run partitioners (metis, powergraph) or newer blockchain-specific algorithms like gpchain to minimise cross-shard edges while equalising weight.
- geographic / ownership shards: physical-world graphs (iot, supply chains) can shard by region or owner to support local sovereignty.
- temporal shards: append-only data can be batched by epoch (e.g., one week per shard) so old shards become read-only archives.
- hybrid hierarchical shards: combine a coarse id-hash slice with fine community splits inside each slice; each level has its own digest so proofs compose.

trade-offs revolve around cross-shard proof size, validator workload, and attack surface. choosing the right mix depends on query patterns and write hot-spots. recent surveys highlight that poorly balanced shards hurt scalability and security.
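the id-hash strategy is easy to sketch. the snippet below also counts cross-shard edges, the quantity the community-detection partitioners above try to minimise; the vertex ids and edge list are made up:

```python
import hashlib

def shard_of(vertex_id: str, m: int) -> int:
    """id-hash sharding: stable, deterministic assignment via hash(id) mod m."""
    digest = hashlib.sha256(vertex_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % m

m = 4
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]  # made-up cyberlinks
cross = sum(shard_of(u, m) != shard_of(v, m) for u, v in edges)
print(f"{cross}/{len(edges)} edges cross shard boundaries")
```

because the assignment depends only on the vertex id, any validator can recompute where a vertex lives without coordination, at the cost of ignoring graph locality entirely.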
### road map

1. implement a biased-tree path-hash library in rust with zero-knowledge-friendly hashing (poseidon)
2. integrate with go-cyber to emit proofs in each tx log
3. extend the cometbft light client to accept agds digests
4. build an sdk for browser wallets to verify focus proofs on-device
5. research dynamic authenticated fractional cascading to support real-time graph streams

### conclusion

agds upgrade the collective focus infrastructure from "trust me" to "verify me". by merging classic path hash accumulators, fractional cascading, and cft economics, fully-authenticated focus cascading ensures every edge, token, and weight that shapes superintelligence remains transparent, tamper-evident, and universally auditable.
Lines changed: 109 additions & 0 deletions

title: contextual free-energy focus for cybergraph
author: conceptual draft

---

## abstract

we propose a unified model that extends the cybergraph free-energy focus framework with a context-dependent potential derived from standard inference. this approach integrates global structure (diffusion, springs, entropy) with local contextual evidence (neurons' will), solving the true-false problem while preserving the natural, physics-inspired foundation.

---

## background

- **cybergraph free-energy focus** defines the global focus vector \(p\) as the minimiser of a free-energy functional:

\[
\mathcal{F}(p) = E_{spring}(p) + \lambda E_{diffusion}(p) - T S(p)
\]

- this yields a boltzmann-like distribution combining hierarchy, diffusion, and entropy.
- however, global ranking alone cannot resolve the **true-false problem**: high-rank nodes dominate even in contexts where lower-rank nodes are more relevant.
- **standard inference** provides a simple method to compute **contextual weights** by aggregating neurons' will in relation to a query particle.

---

## contextual extension

we introduce a context-dependent potential \(C(p|context)\):

\[
\mathcal{F}(p|context) = E_{spring}(p) + \lambda E_{diffusion}(p) + \gamma C(p|context) - T S(p)
\]

- \(C(p|context)\): energy term derived from standard inference (average will per cyberlink in the given context).
- \(\gamma\): coupling constant that determines the influence of context on the equilibrium.

---
## resulting equilibrium

solving \(\min_p \mathcal{F}(p|context)\) yields

\[
p_i^* \propto \exp\big(-\beta [E_{spring,i} + \lambda E_{diffusion,i} + \gamma C_i]\big)
\]

- nodes that are **globally important** and **contextually supported** receive the highest probabilities.
- entropy ensures diversity and prevents trivial dominance.

---
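a toy numeric check of this equilibrium (all energies, β, λ, and γ are made-up values): a node with mediocre global rank but strong contextual support can overtake globally dominant nodes, which is the true-false behaviour the extension targets.

```python
import numpy as np

beta, lam, gamma = 1.0, 0.5, 2.0

# hypothetical 3-node example: node 0 has the best global rank,
# node 2 has the strongest contextual support (lower C = more supported)
E_spring = np.array([0.1, 0.5, 0.8])
E_diffusion = np.array([0.2, 0.4, 0.6])
C = np.array([1.0, 0.8, 0.0])         # context potential from standard inference

energy = E_spring + lam * E_diffusion + gamma * C
p = np.exp(-beta * energy)
p /= p.sum()                          # boltzmann equilibrium focus vector

assert np.isclose(p.sum(), 1.0)
assert p.argmax() == 2                # context flips the ranking toward node 2
```

setting γ to 0 recovers the purely global ranking, so the single coupling constant does control how far context can override popularity.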
## link to universal physics

this formulation is equivalent to solving a **heat equation on a graph** with an additional potential field:

\[
\partial_t p = -\nabla_{p} \mathcal{F}(p|context)
\]

- eigenmodes of the graph laplacian form the **fourier basis** of the network.
- diffusion gives **temporal spreading**, springs add a **potential landscape**, and the context potential \(C\) biases the equilibrium.
- the solution naturally combines **oscillatory modes** with **diffusive decay**, mirroring how pdes in physics are solved by separation of variables.

---
## distributed algorithm

1. **compute global structure:**
   - run decentralised eigenvector centrality and springrank.
   - initialise \(p_i\) uniformly.
2. **contextual evidence:**
   - run standard inference to compute \(C_i\) (contextual will) for each candidate particle.
3. **iterative updates:**
   - each node exchanges \(p_j\), \(r_j\), and \(C_j\) with its neighbours.
   - compute the local energies \(E_{spring,i}\), \(E_{diffusion,i}\), and \(C_i\).
   - update:

\[
p_i^{(t+1)} = \frac{\exp(-\beta [E_{spring,i} + \lambda E_{diffusion,i} + \gamma C_i])}{\sum_k \exp(-\beta [E_{spring,k} + \lambda E_{diffusion,k} + \gamma C_k])}
\]

4. **normalisation:**
   - nodes use gossip averaging to approximate the denominator.

---
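step 4 can be sketched as plain gossip averaging: each node repeatedly averages its estimate with its neighbours', and because the mixing matrix is doubly stochastic the common limit is the mean of the local boltzmann weights; multiplying by n (assumed known to every node) recovers the denominator. the ring topology and weights below are illustrative:

```python
import numpy as np

# illustrative 4-node ring with a doubly stochastic mixing matrix W
W = np.array([
    [0.50, 0.25, 0.00, 0.25],
    [0.25, 0.50, 0.25, 0.00],
    [0.00, 0.25, 0.50, 0.25],
    [0.25, 0.00, 0.25, 0.50],
])

local = np.exp(-np.array([2.2, 2.3, 1.1, 1.8]))  # each node's own boltzmann weight
x = local.copy()                                  # gossip state, one scalar per node
for _ in range(100):
    x = W @ x                                     # neighbour-only exchange

n = len(local)
denominator_est = n * x   # every node now estimates sum_k exp(-beta * E_k)
assert np.allclose(denominator_est, local.sum(), atol=1e-6)
```

convergence is geometric in the second-largest eigenvalue of W, so well-connected graphs need only a handful of gossip rounds per update step.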
## key properties

- **context-aware ranking:** resolves the true-false problem by integrating global and local signals.
- **fully decentralisable:** each node needs only neighbour messages and contextual votes.
- **natural and parameter-light:** weights emerge as lagrange multipliers; only \(\gamma\) controls context strength.
- **boltzmann equilibrium:** the final focus vector remains a probability distribution, ensuring stability and diversity.

---

## interpretation

- **diffusion:** long-run popularity baseline.
- **springs:** hierarchical constraints.
- **context potential:** relevance of facts to the current question.
- **entropy:** prevents collapse into a single dominant answer.

by adding the context potential, the free-energy framework gains the ability to compute **truthful, context-aware rankings** while retaining its natural analogy to universal physical processes. this highlights that **network cognition can be seen as solving a graph-based heat equation with potentials**, uniting diffusion, oscillators, and context into one equilibrium model.
