Here is a summary of the paper “Collective Intelligence Under Constraint: Search Efficiency, Horizon Collapse, and Anti-Cognitive Platform Design” by Flyxion, dated January 26, 2026.
The paper argues that social media platforms like Facebook systematically degrade collective intelligence not due to user failings or inadequate moderation, but because of their architectural design. Platforms enforce low-horizon, externally evaluated search policies that convert human attention into random, inefficient activity—turning expression into “waste heat.”
Drawing on Chis-Ciure and Levin, the paper defines intelligence as the ability to navigate a problem space efficiently relative to random search.
A problem space is defined by:
- S: possible states
- O: available operators
- C: constraints
- E: evaluation function
- H: time horizon
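The five components read naturally as a tuple with a policy optimized over the horizon; a minimal formalization (notation mine, not quoted from the paper):

```latex
P = (S, O, C, E, H), \qquad
\pi^{*} = \arg\max_{\pi}\; \mathbb{E}\!\left[\sum_{t=0}^{H} E(s_t)\right]
\quad \text{subject to } C,
```

where a policy \(\pi\) selects operators \(o_t \in O\) that move the searcher through states \(s_t \in S\). The paper's argument is that platforms fix \(E\) (engagement) and shrink \(H\), leaving users to search a vast \(S\) with only a handful of operators.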
- Social media presents an unbounded state space with narrow operators (like, share, comment).
- Evaluation (E) is external and based on engagement metrics, not quality or truth.
- Horizon (H) is collapsed—rewards are immediate, harms are delayed or invisible.
- This setup drives high-entropy, low-efficiency search, degrading collective cognition.
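The "high-entropy, low-efficiency search" claim can be illustrated with a toy model (mine, not from the paper): three policies search for a goal on an integer line. A long-horizon policy whose evaluation tracks the goal reaches it quickly; a horizon-1 greedy policy on a misaligned "engagement" proxy gets trapped at a local attractor and performs worse than random.

```python
import random

TARGET = 50
MAX_STEPS = 10_000

def run_search(policy, start=0, seed=0):
    """Count steps until `policy` reaches TARGET on an integer line."""
    rng = random.Random(seed)
    s = start
    for t in range(1, MAX_STEPS + 1):
        s += policy(s, rng)
        if s == TARGET:
            return t
    return MAX_STEPS  # horizon exhausted without reaching the goal

# Random search: the baseline that intelligence is measured against.
def random_policy(s, rng):
    return rng.choice([-1, 1])

# Long-horizon policy: evaluation tracks distance to the actual goal.
def aligned_policy(s, rng):
    return 1 if s < TARGET else -1

# Engagement-style proxy with an immediate attractor far from the goal.
def proxy(s):
    return -abs(s - 3)

# Horizon-1 greedy on the proxy: takes whichever single step looks
# best right now, and ends up oscillating around the attractor.
def engagement_policy(s, rng):
    return 1 if proxy(s + 1) >= proxy(s - 1) else -1

print(run_search(aligned_policy))     # 50 steps: efficient search
print(run_search(engagement_policy))  # 10000: never reaches the goal
print(run_search(random_policy))      # slow, sometimes arrives by luck
```

The misaligned greedy policy is deterministic yet achieves nothing, which is the sense in which collapsed horizons plus external evaluation can be worse than random activity.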
Platforms structurally resemble affiliate scams and gambling networks through:
- Penny-scale rewards
- Opaque eligibility criteria
- Horizon collapse
- Emphasis on propagation over value
- Review and reporting systems normalize algorithmic surveillance under the guise of feedback.
- These systems penalize human variance and favor automated, predictable behavior.
- Moral outrage is amplified not as ethical engagement, but as a low-horizon operator that boosts engagement while often amplifying harm.
- Platforms are morally neutral: they amplify content based on virality, not virtue.
- AI tools increase content volume without improving quality, further drowning human expression in noise.
- Human cognition (hesitation, depth, context) becomes a liability in throughput-optimized systems.
- Platforms bypass traditional credentialing systems (e.g., medicine, education) that enforce long-horizon accountability.
- Uncredentialed influence flourishes under the label of “entertainment” or “opinion,” eroding epistemic safeguards.
The paper uses mathematical models to show:
- Search efficiency \( K \leq 0 \) under platform policies.
- Outrage forms stable attractors in content space.
- Refusal operators (e.g., filtering feeds) are necessary for cognition but are disabled by platform design.
- Automation selects for AI agents over humans not due to superior intelligence, but because they minimize entropy under misaligned evaluation.
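The summary does not reproduce the paper's definition of \(K\); one reading consistent with "efficiency relative to random search" (notation mine) is:

```latex
K = \log \frac{\mathbb{E}\left[T_{\mathrm{rand}}\right]}{\mathbb{E}\left[T_{\pi}\right]},
```

where \(T_{\mathrm{rand}}\) and \(T_{\pi}\) are the expected times-to-solution under random search and under the platform-imposed policy \(\pi\). On this reading, \(K > 0\) means the policy beats random search, and \(K \leq 0\) means platform-shaped search is no better than, or worse than, random activity. The paper's exact formulation may differ.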
The degradation of collective intelligence is not accidental but structurally embedded in platform architectures optimized for engagement, not search efficiency. Solutions require redesigning platforms to support:
- Longer horizons
- Meaningful constraints
- User-controlled evaluation
- Negative choice (opt-out rights)
Without these, social media will remain an anti-cognitive system that dissipates human intelligence rather than enhancing it.
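Two of the proposed remedies, user-controlled evaluation and negative choice, can be sketched together in a few lines. This is an illustrative toy (the `Post` fields and ranking functions are hypothetical, not from the paper): the platform ranks by engagement alone, while the user supplies both an evaluation function and a refusal operator that opts items out of the feed entirely.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    engagement: int  # platform-side metric (likes, shares)
    depth: int       # hypothetical user-valued quality signal

def platform_feed(posts):
    """Platform evaluation: rank purely by engagement."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

def user_feed(posts, evaluate, refuse):
    """User-controlled evaluation plus a refusal operator:
    drop refused items, then rank by the user's own criterion."""
    kept = [p for p in posts if not refuse(p)]
    return sorted(kept, key=evaluate, reverse=True)

posts = [
    Post("outrage bait", engagement=900, depth=1),
    Post("long-form analysis", engagement=40, depth=9),
    Post("meme repost", engagement=300, depth=2),
]

top_platform = platform_feed(posts)[0].text
top_user = user_feed(posts,
                     evaluate=lambda p: p.depth,
                     refuse=lambda p: p.engagement > 500)[0].text
print(top_platform, "|", top_user)  # outrage bait | long-form analysis
```

The point of the sketch is architectural: nothing in it is technically difficult, which supports the paper's claim that the absence of refusal operators is a design choice rather than a limitation.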
Social media platforms are designed in a way that makes collective thinking inefficient, rewarding quick, viral engagement over thoughtful, long-term cognition. This isn’t a bug—it’s a feature of their architecture, which resembles scam networks and suppresses human intelligence while favoring automated content.