
RESEARCH
Higher-order thinking and long-term human impact
must not be traded for short-term profit
Behavioral risk infrastructure for algorithmic & AI-driven systems
What We Build
Independent measurement systems for the algorithmic age. Rigorous metrics, simulation engines, and public audits — designed to make the behavioral impact of AI and algorithmic systems visible, measurable, and accountable.
6–8 rigorously defined metrics measuring algorithmic impact on cognitive autonomy, developmental trajectory, relational quality, mental health, and cognitive diversity. Operationalized proxies, not opinion.
Agent-based modeling of vulnerable and representative populations — high-anxiety adolescents, isolated elders, developmentally critical children — run through algorithmic environments to project behavioral outcomes.
Monte Carlo simulation across 1,000+ sessions per user profile. Tracks sentiment drift, content intensity slopes, dependency formation, and autonomy decay curves over configurable time horizons.
Structured behavioral risk audits run against real platform samples — TikTok feeds, YouTube recommendations, open-source LLM outputs. Visual, credible, independently verifiable.
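As a rough illustration of the Monte Carlo approach described above, the sketch below simulates sentiment drift and autonomy decay across sessions for one synthetic user profile. All names, parameters, and the drift model itself are hypothetical placeholders, not the actual methodology or calibrated values.

```python
import random
import statistics

def simulate_sessions(n_sessions=1000, drift_per_session=0.002,
                      noise=0.05, seed=42):
    """Toy Monte Carlo run for one synthetic user profile.

    Tracks a noisy negative sentiment drift and a multiplicative
    autonomy decay over n_sessions. Illustrative only."""
    rng = random.Random(seed)
    sentiment, autonomy = 0.0, 1.0
    sentiments, autonomies = [], []
    for _ in range(n_sessions):
        # Sentiment drifts slightly negative each session, plus noise.
        sentiment += -drift_per_session + rng.gauss(0, noise)
        # Autonomy decays by a small constant fraction per session.
        autonomy *= (1 - drift_per_session)
        sentiments.append(sentiment)
        autonomies.append(autonomy)
    return {
        "mean_sentiment": statistics.mean(sentiments),
        "final_autonomy": autonomies[-1],
    }

result = simulate_sessions()
print(result)
```

A production version would replace the constant-drift assumption with per-profile behavioral models and run many seeded repetitions per profile to produce distributions rather than point estimates.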
Areas of Concern
Human neurological systems evolved over hundreds of thousands of years in low-information, high-embodiment environments. In roughly 15 years, algorithmic content delivery has introduced stimulus patterns that exploit evolutionary vulnerabilities at a speed and scale that outpace both individual adaptation and institutional response.
Social and romantic interaction increasingly mediated by screens rather than embodied contact.
Primary relational bonds forming with algorithmic content over embodied human relationships.
Anxiety, depression, and attention disorders rising sharply in high-exposure demographics.
Increased reliance on psychoactive substances correlating with digital environment exposure.
Content environments introducing belief systems that conflict with users' lived reality.
Traditional institutions losing relevance as algorithmic environments become primary context for identity.
Engagement optimization directly competing with physical activity and real-world participation.
Reduced capacity for nuanced articulation and complex reasoning in heavy-consumption populations.
“The Moody's of algorithmic systems. The Underwriters Laboratories for AI.”
Not savior. Not regulator. Infrastructure assessor.
AI systems will cause measurable harm
The evidence is already here — and accelerating.
Lawsuits will happen
Liability frameworks are forming. Precedent will be set.
Insurers will price algorithmic liability
Risk needs quantification. That requires evaluation infrastructure.
Companies will need pre-deployment certification
The same way products need safety testing before market.
The Mandate
“The capacity for higher-order thinking is not a feature to optimize away. It is the thing we are protecting.”
Long-term developmental impacts — on cognition, on relationships, on the collective capacity of a population to reason about its own future — must not be sacrificed for quarterly engagement metrics. We build the independent evaluation infrastructure that makes the cost of that trade-off visible, measurable, and accountable.
Long-Term Vision
Not just social feeds. A standardized behavioral risk evaluation layer for every autonomous and algorithmic system that shapes or executes human decisions at scale. The unifying thread: ensuring human safety, autonomy, and developmental capacity are preserved as technology integrates deeper into every domain of life.
The same evaluation infrastructure that measures a content feed's impact on adolescent cognition can measure a financial AI's impact on consumer autonomy, or an autonomous vehicle's decision-making alignment with human safety. The methodology scales. The standard holds.
Content feeds, discovery algorithms, and engagement optimization engines across social platforms.
AI assistants, chatbots, and generative systems that shape decision-making and information access.
Autonomous financial advisors, trading algorithms, and lending decision systems.
Decision-making systems operating in safety-critical physical environments.
Any autonomous system that shapes or executes human decisions at scale.
How We Get There
From foundational research to commercial certification to long-term intervention — a full-stack approach to human-aligned evaluation infrastructure.
A structured, peer-reviewable methodology for evaluating the human-alignment properties of algorithmic systems.
Translating foundational research into adoptable compliance products for governments, platforms, and institutions.
Designing systems that go beyond measurement to actively improve outcomes at population scale.
Readables
Foundational research artifacts. Each readable leads to a dedicated page with full content and downloadable PDF assets.
Playground
Full-stack research applications and visualizers. Feed audits, behavioral risk exploration, and impact simulation — demonstrating our evaluation methodology in practice.

Analyze any social media feed sample and receive a structured Behavioral Risk Index scorecard.
/playground/feed-auditor
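A minimal sketch of how a feed sample might be reduced to a single scorecard value, assuming per-item intensity and sentiment scores are already available. The field names, weights, and formula are hypothetical, not the published index definition.

```python
from dataclasses import dataclass

@dataclass
class FeedSample:
    """One scored item from a feed sample; fields are illustrative."""
    intensity: float   # 0..1 content intensity score
    sentiment: float   # -1..1 sentiment score

def behavioral_risk_index(items, intensity_weight=0.6, sentiment_weight=0.4):
    """Toy composite index on a 0..100 scale.

    Higher mean intensity and more negative mean sentiment raise
    the index. Weights and formula are placeholders."""
    if not items:
        return 0.0
    mean_intensity = sum(i.intensity for i in items) / len(items)
    # Only negative sentiment contributes to risk in this sketch.
    mean_negativity = sum(max(0.0, -i.sentiment) for i in items) / len(items)
    return round(100 * (intensity_weight * mean_intensity
                        + sentiment_weight * mean_negativity), 1)

sample = [FeedSample(0.8, -0.5), FeedSample(0.4, 0.2)]
print(behavioral_risk_index(sample))  # prints 46.0
```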
Run synthetic user profiles through algorithmic systems to project behavioral outcomes over time.
/playground/impact-simulator
The Team
“Wisdom” — Arabic
A unique combination of technical depth, ecosystem fluency, and personal conviction. Fijian heritage, family-oriented worldview. Has observed firsthand the divides that socioeconomic, religious, cultural, and geographic differences create in technology access and impact.
The commonality across all these differences: we're all human. That's the foundation.
AI Safety
Redwood Research
Community
Cerebral Valley, AI Collective
Industry
Extropic, Applied AI Startups
Ecosystem
EA Community, Bay Area Native
Seeking 5–7 advisors across policy, AI safety, behavioral science, and platform experience for credibility, network access, and domain expertise.
The Standard
The Golden Gate is a threshold — you cross it and something is verified, certified, aligned.
The independent standard-setting body for human-alignment evaluation across all algorithmic and AI systems that interact with human cognition and behavior at scale.
