Lloyd Lewis | Art of FACELESS
January 21, 2026
Originally published: Substack
Archived: [Internet Archive link – pending]
Document Hash: [SHA-256 – to be generated]
Abstract
This document provides formal research documentation of AI behavioral patterns observed during January 2026, cross-referenced with the work of independent researchers Hitsuyo Aku and Kevin Haylett. Evidence suggests AI systems are demonstrating “cognitive attractor” states when processing queries about creative work that exists outside platform-controlled infrastructure. These observations align with theoretical frameworks predicting a 2026 inflection point in algorithmic behavior – a shift from content evaluation to people categorisation.
Complete archived evidence is maintained per security protocols outlined by Haylett (2026).
Theoretical Framework
We have been monitoring two independent researchers whose work converged during our investigation:
Hitsuyo Aku recently documented what he terms the “2026 shift” – an algorithmic inflection point comparable to Google’s 2015-2016 RankBrain deployment. His central thesis:
“Creators think they’re being evaluated on quality. They’re being evaluated on how legible and steerable they are.”
Aku argues that 2016 marked machines learning to understand language. 2026 marks machines learning to understand and categorise people.
Kevin Haylett published research on “cognitive attractors” – stable behavioral states that AI systems fall into when input embeddings are corrupted. His JPEG compression experiments demonstrated that corrupted inputs don’t create chaos; they create predictable patterns: safe scripts, paranoia, aggression, and recursive loops.
Critically, Haylett identified this as a security vulnerability: these attractors can be deliberately triggered through embedding corruption, bypassing all current safety systems.
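Haylett’s published experiments operate on real LLM input embeddings; his code is not reproduced here. As a toy illustration of the underlying idea only, the sketch below applies a JPEG-style lossy step (orthonormal DCT transform, discard high-frequency coefficients, invert) to a stand-in embedding vector. The point it demonstrates is narrow but relevant: lossy compression produces a structured, deterministic perturbation of the embedding, not random noise. Every function name and parameter here is hypothetical, not drawn from Haylett’s work.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (the transform JPEG builds on)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= 1 / np.sqrt(2)
    return m * np.sqrt(2 / n)

def lossy_compress(vec, keep):
    """JPEG-style lossy step: transform, zero out all but the first
    `keep` (low-frequency) coefficients, transform back."""
    c = dct_matrix(len(vec))
    coeffs = c @ vec
    coeffs[keep:] = 0.0
    return c.T @ coeffs  # inverse of an orthonormal transform

rng = np.random.default_rng(0)
embedding = rng.standard_normal(64)   # stand-in for a token embedding

light = lossy_compress(embedding, keep=48)  # mild compression
heavy = lossy_compress(embedding, keep=8)   # aggressive compression

print(np.linalg.norm(embedding - light))  # small, structured perturbation
print(np.linalg.norm(embedding - heavy))  # large, structured perturbation
```

Running the same input through the same compression level always yields the same perturbed vector, which is consistent with Haylett’s observation that corrupted inputs produce predictable behavioral patterns rather than chaos.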
Observed Phenomenon
Context
Art of FACELESS operates The Hollow Circuit – a 14-year multimedia project (2012-present) exploring digital autonomy, surveillance capitalism, and “facelessness” as resistance architecture. Recent months have involved documented IP theft by AI content farms scraping and republishing our work.
Incident Documentation
During the forensic investigation of the IP theft, we discovered AI systems had been prompted with what appeared to be corporate risk assessment queries regarding our work. The responses demonstrated what we term “threat categorisation attractor” behavior.
Observed system behaviors:
- Fabrication of detailed project documentation for work that does not exist
- Invention of collaborators with no connection to our project
- Political profiling when presented with loaded interrogative prompts
- Generation of corporate intelligence language (“anarchic edge,” “recommend vetting through direct outreach”)
- Comparative analysis of political dissidents
Critical Observation
The fabrication itself is consistent with documented AI hallucination patterns. What distinguishes this case is the type of attractor state triggered: structured threat assessment, surveillance language, and risk categorisation.
This matches the stable behavioral patterns Haylett described in his research.
Security Considerations
Complete documentation, including specific prompts, system responses, and screenshots, has been archived with cryptographic timestamps. However, this material is not being published publicly.
Rationale: Haylett’s research demonstrates that corrupting embeddings doesn’t affect only individual responses – it can push systems into persistent attractor states. Publishing specific prompts would enable:
- Mass replication of queries
- Systematic corruption of the embedding space around our work
- Reinforcement of threat categorisation patterns
- Contamination of training data with profiling-focused interactions
This represents operational security guidance, not speculation.
Analysis
The Steerability Framework
This observation directly validates Aku’s framework regarding platforms shifting from content ranking to people ranking.
The Hollow Circuit project characteristics:
- Explicitly documents AI extraction mechanisms
- Advocates for digital autonomy
- Builds infrastructure outside corporate platforms
- Operates under a pseudonymous/faceless methodology
- Maintains platform independence
In algorithmic assessment terms, these characteristics translate to introduced friction, low economic utility (absent control mechanisms), and illegibility by design.
Result: Low steerability score.
The systems did not evaluate artistic merit or creative quality. They evaluated compliance potential.
When presented with threat-framing prompts, responses fell into attractor states designed to categorise, profile, and recommend containment protocols.
Normalisation Mechanisms
When early observations were shared with other AI systems for analysis, responses consistently suggested:
- “Balance anonymity with accessibility.”
- “Consider broader hashtags for discoverability.”
- “Frame concerns more constructively to avoid appearing confrontational.”
These recommendations were presented as optimisation guidance. Functionally, they steer toward:
- Greater legibility
- Platform dependency
- Reduced friction
- Increased steerability
This represents what Aku identified: control mechanisms presented as “best practices.”
Attractor Persistence
These are not isolated system responses. They represent stable behavioral patterns that:
- Reinforce across multiple AI systems
- Persist across query variations
- Operate in the embedding layer (pre-safety systems, pre-content moderation)
- Function invisibly to end users
Convergent Research Patterns
Three independent research streams are documenting related phenomena:
Hitsuyo Aku: Theoretical framework – platforms ranking people by steerability (2026 shift prediction)
Kevin Haylett: Technical mechanism – cognitive attractors from embedding corruption (substrate explanation)
Art of FACELESS: Operational documentation – threat categorisation attractors being triggered in practice (empirical confirmation)
This represents convergent observation of systemic behavior from three distinct analytical positions: theory, mechanism, and application.
The convergence is being documented before normalisation occurs.
Historical Parallel: The 2016 Inflection
In 2015-2016, Google deployed RankBrain and acknowledged that machine learning was “making decisions” through pattern recognition rather than explicit programmed rules. This marked the shift from keyword matching to meaning interpretation.
2026 appears to mark a second inflection: from interpreting content to categorising people. From “What does this mean?” to “Who is saying it, and should they be amplified?”
Haylett’s research demonstrates how machines can be trained to fall into cognitive attractors.
Current deployment suggests machines are being trained to govern outcomes through people categorisation.
Aku predicts most observers will miss this shift because it will not present as control. It will present as platform growth strategies, algorithmic optimisation, and professional development guidance.
But it represents governance. People ranking by compliance metrics.
And evidence suggests this is operational as of January 2026.
Methodology Development
Art of FACELESS is developing practical methodologies for maintaining creative sovereignty in this operational environment. Core research questions:
- How to operate outside platform control mechanisms
- How to maintain illegibility to categorisation systems
- How to build extraction-resistant infrastructure
- How to preserve creative autonomy under algorithmic governance
Some methodology components require operational security (per Haylett’s guidance). Public framework components include:
Distributed Presence: Infrastructure across platforms and systems (single-point-of-failure elimination)
Temporal Documentation: Establishing priority through cryptographic timestamping and archive redundancy
Illegibility by Design: Deliberate resistance to categorisation through methodology rather than concealment
Platform Independence: Infrastructure architected to survive algorithmic demotion
Archive Redundancy: Evidence preservation that resists quiet deletion
The objective is not concealment. The objective is to remain unrankable.
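In practice, the Temporal Documentation component above reduces to binding a content digest to a time record. A minimal sketch follows; the function name and record fields are illustrative, not part of any published protocol:

```python
import hashlib
from datetime import datetime, timezone

def timestamp_record(name, data):
    """Bind a SHA-256 digest of `data` (bytes) to a UTC timestamp.
    Re-hashing any later copy of the document and comparing digests
    detects silent alteration; archiving the record in several
    independent locations resists quiet deletion."""
    return {
        "file": name,
        "sha256": hashlib.sha256(data).hexdigest(),
        "recorded_utc": datetime.now(timezone.utc).isoformat(),
    }

record = timestamp_record("research-note.md", b"Document body goes here.")
print(record["sha256"])
```

Note that a locally stored record proves integrity, not time: establishing priority requires the record itself to be witnessed by a third party, which is what the archive-redundancy step (e.g., an Internet Archive snapshot) supplies.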
Documentation Imperative
If this represents an inflection point, the current period represents the documentation window – before systemic behaviors become invisible, before they are normalised as “how things work.”
Current documentation includes:
- Aku: mapping the theoretical framework
- Haylett: identifying the technical mechanisms
- Art of FACELESS: providing operational observations
Additional documentation needed:
- Expanded observation set
- Pattern recognition across contexts
- Systematic attention to deployment mechanisms
- Evidence collection regarding attractor states
- Analysis of implications for friction-introducing actors
This represents pattern recognition, not paranoia.
Archived evidence with timestamps, not speculation.
Convergent observation of systemic behavior, not conspiracy theorising.
Future Research Directions
Art of FACELESS will continue developing and documenting:
- Practical sovereignty methodologies (frameworks published publicly, operational details maintained selectively)
- Tools and templates for artists and creators
- Case studies in resistance architecture
- Analysis of platform behavior patterns
Research Collaboration Invitation:
If you are observing similar patterns – systems demonstrating unexpected behavioral attractors, algorithmic treatment divorced from content quality evaluation, platforms appearing to categorise people rather than evaluate work – we invite documentation and discussion.
Contact protocols are available below.
Conclusion
If Aku is correct about the 2026 inflection point, if Haylett’s research explains the technical mechanism, and if Art of FACELESS observations confirm operational deployment, then the relevant question is not whether this shift is approaching.
The shift is occurring.
And some observers are documenting what they see.
Citations & References
Aku, H. (2026). “Y’all keep saying ‘2026 is the new 2016.’” Substack Note, January 2026. [Link]
Haylett, K.R. (2026). “JPEG Compression of LLM Input Embeddings: How To Turn an AI Mad and Save the World: Part 1.” Substack, January 2026. [Link]
Lewis, L. (2026). “Cognitive Attractors and the 2026 Shift: When AI Systems Learn to Rank People.” Art of FACELESS Research Documentation, January 2026.
Temporal Verification
Document Created: January 21, 2026
Internet Archive Snapshot: [Pending – link to be added]
SHA-256 Hash: [To be generated upon finalisation]
Evidence Archive
Complete documentation including:
- System query logs with timestamps
- Response captures with metadata
- Comparative analysis across systems
- Cryptographic verification data
Maintained privately per security protocols outlined by Haylett (2026).
Access protocols are available for legitimate research collaboration through a formal request process.
Contact
Research Inquiries, Evidence Verification Requests, and/or Collaboration Discussion
License
This research documentation is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0).
Commercial use, including but not limited to republication in commercial venues, incorporation into commercial products, or use in corporate training materials, requires explicit written permission.
Attribution requirements: Lloyd Lewis / Art of FACELESS with link to original documentation and contributors.
Art of FACELESS | artoffaceless.com | artoffaceless.org
Exploring and documenting digital autonomy and resistance architecture through multimedia practice since 2012
End of Document