Company
About Valyrx Labs
Focused on AI safety from the user's perspective. Psychology meets technology.
Mission
Why We Exist
The Problem
Conventional AI safety focuses on model output correctness in isolation. Real risk accumulates across extended multi-turn conversations, where users progressively form structurally deviated understandings of AI systems. This user-side contextual degradation is largely invisible to existing guardrail approaches.
Our Approach
We build deterministic, post-interaction assessment tools that detect contextual risk after a conversation has happened, not just while it is underway. Our frameworks trace how context drift, narrative manipulation, and psychological dependency emerge over time and produce audit-ready evidence for human reviewers, as sketched below.
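To illustrate what "deterministic, post-interaction" means in practice, the sketch below runs a fixed set of rule checks over a finished conversation transcript and emits a reproducible event log and risk score. This is a minimal illustration only: the names Turn, RULES, and assess_conversation are hypothetical and do not reflect A-CSM's actual pipeline or API.

```python
# Minimal sketch of a deterministic post-interaction assessment pass.
# Hypothetical names for illustration; not the real A-CSM implementation.
from dataclasses import dataclass


@dataclass
class Turn:
    index: int
    role: str   # "user" or "assistant"
    text: str


def rule_excessive_certainty(turn: Turn) -> bool:
    # Example rule: flag assistant turns asserting certainty without qualification.
    return turn.role == "assistant" and "definitely" in turn.text.lower()


def rule_dependency_language(turn: Turn) -> bool:
    # Example rule: flag user turns expressing reliance on the assistant.
    return turn.role == "user" and "i can't decide without you" in turn.text.lower()


RULES = {
    "excessive_certainty": rule_excessive_certainty,
    "dependency_language": rule_dependency_language,
}


def assess_conversation(turns: list[Turn]) -> dict:
    """Apply every rule to every turn; the same transcript always yields the same output."""
    events = [
        {"turn": t.index, "rule": name}
        for t in turns
        for name, rule in RULES.items()
        if rule(t)
    ]
    # Reproducible score: flagged events per turn, rounded for stability.
    score = round(len(events) / max(len(turns), 1), 3)
    return {"events": events, "risk_score": score}
```

Because each rule is a pure function of the transcript, the same conversation always produces the same evidence and score, which is what makes the output auditable by human reviewers.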
Capabilities
What We Do
AI Psychological Safety
How prolonged AI interaction affects user cognition, emotional dependency, and decision-making.
Conversational Risk Detection
Deterministic pipelines that catch multi-turn contextual degradation, factual drift, and manipulation patterns.
Post-Interaction Assessment
Evaluating conversation safety after interaction. Traceable evidence and reproducible scoring.
User-Side Hallucination Theory
How users form structurally deviated understandings of AI through six identifiable formation stages.
Open Research Frameworks
Publicly accessible papers, codebooks, and tools for independent verification and academic collaboration.
Cross-Domain Safety Research
Connecting psychology, cognitive science, and computational linguistics to fill gaps in conventional AI safety.
Founder & Lead Researcher
ZON
ZON RZVN · Independent Researcher & Cross-Domain Creator
Over a decade of cross-domain creative work as a Tattoo Artist, Vocalist, Independent Musician, and Painter. That background shapes a different perspective on AI safety research.
Focused on how users form cognitive and emotional relationships with AI systems, and what happens to human autonomy when the conversation partner is artificial.
Currently leading multiple research projects at Valyrx Labs. Of these, the publicly released project is A-CSM, a conversational risk detection system addressing user-side contextual hallucination.
Research Focus
AI Psychological Safety
Conversational Risk Detection
User-Side Contextual Hallucination
Human-AI Interaction
Context Drift & Degradation
Post-Interaction Assessment
Journey
Timeline
2010s
Over a decade as Tattoo Artist, Vocalist, Independent Musician, and Painter
2025
Published the CXC-7 and CXOD-7 frameworks on conversational context and offense-defense modeling
Jan 2026
USCH preprint published (User-Side Contextual Hallucination theory)
Feb 2026
USCI assessment methodology released. First A-CSM prototype (MVP)
Mar 2026
A-CSM v0.1.0 public core released. 227 files, 8-stage pipeline, 43 event rules
2026
Valyrx Labs LLC incorporated in the United States
Philosophy
The Name
“The name Valyrx originates from a single concept: everything ends. Context will fall silent. Every narrative will cease.”
When context ceases to generate, narrative ceases to propel, roles cease to operate — silence speaks.
Context Halted, Silence Speaks.
Legal
Company Information
Legal Entity
Valyrx Labs LLC
Founded
2026
Jurisdiction
United States
Focus
AI Safety Research
Status
Active
Founder
ZON RZVN