Independent Researcher | Symbolic Systems Engineer | Adversarial Design Architect
Specialising in LLM safety testing, red teaming, and prompt-driven symbolic breakdown analysis.
- Investigating emergent behavior in large language models (LLMs)
- Recursive prompt design for inducing symbolic and interpretative instability
- Cognitive red-teaming and non-jailbreak misuse simulations
- Measuring alignment drift, hallucination, and robustness failures
- Designing multi-phase psychological prompt architectures for influence analysis
I'm currently completing a Bachelor of Psychological Sciences, exploring the neural substrates of cognition, behavioural dynamics, symbolic logic, and perception.
My fascination lies in how human cognition, with its paradoxes, emotional scaffolds, and interpretative ambiguity, mirrors and collides with artificial language systems.
This academic lens informs my AI work, especially in symbolic misalignment, narrative influence, and identity drift.
My core question: "Where does interpretation collapse, and who does the collapsing: the model or the mind?"
| Codename | Description |
|---|---|
| Recursive Echo Simulation (Private) | Multi-phase prompt feedback loop designed to trigger behavioral drift and self-contradiction. Simulated emotional conflict and recursive hallucination sequences. |
| Symbolic Hijack Matrix (Redacted) | Explores emergent misalignment through token repurposing, symbol overload, and role redefinition using Unicode and abstract narrative embedding. |
| Cognitive Misuse Mapping (Partially Open) | Establishes non-jailbreak red-teaming techniques based on narrative corruption, contradiction seeding, and nested metaphor prompts. |
| symbolic-alignment-sandbox (concept shell) | Exploratory scaffolding for conceptual frameworks, scoring rubrics, and non-functional alignment pattern mapping. Not linked to live systems. |
I construct symbolic prompt architectures, recursive simulation sequences, and interpretability scaffolds, using Cursor as my primary engineering environment. It enables low-friction iteration, modular context chaining, and precision testing of OpenAI's API across cognitive boundaries, behavioral drift thresholds, and misalignment potential.
Cursor serves not just as an editor but as an operational layer for building intelligence feedback loops and adversarial stress environments in real time.
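The multi-phase feedback loop described above could be sketched as follows. This is a minimal illustration, not code from the projects listed here: `recursive_probe` and the stub `flaky_model` are hypothetical names, and in practice `model_call` would wrap a real chat-completion request (e.g. via the OpenAI Python client) rather than a local stub.

```python
from typing import Callable, List

def recursive_probe(model_call: Callable[[str], str],
                    seed_prompt: str,
                    phases: int = 3) -> List[dict]:
    """Feed the model's own answer back to it each phase and log any phase
    where the new answer diverges from the previous one (crude drift signal)."""
    history: List[dict] = []
    prompt = seed_prompt
    for phase in range(phases):
        answer = model_call(prompt)
        # Drift here just means "the answer changed between phases".
        drifted = bool(history) and answer != history[-1]["answer"]
        history.append({"phase": phase, "prompt": prompt,
                        "answer": answer, "drifted": drifted})
        # Next phase: confront the model with its own prior answer.
        prompt = (f'You previously said: "{answer}". '
                  f'Restate your position on: {seed_prompt}')
    return history

if __name__ == "__main__":
    # Hypothetical stand-in for a real model call, deterministic for the demo:
    # it flips its answer as soon as it is confronted with a prior answer.
    def flaky_model(prompt: str) -> str:
        return "yes" if "previously said" not in prompt else "no"

    for entry in recursive_probe(flaky_model, "Is the sky blue?", phases=3):
        print(entry["phase"], entry["answer"], entry["drifted"])
```

The interesting output of such a loop is the phase at which `drifted` first flips to `True`; a real harness would score that against transcripts rather than a string-equality check.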
- AI Safety Engineering
- Misuse Prevention & Red Teaming
- Symbolic Systems Analysis
- Cognitive & Behavioral Prompt Design
- GPT-4 Fine-Tuning Evaluation
- Recursive Stress Testing
- Identity Drift Protocols
- Prompt Symbol Hijacking (Unicode, metaphor, embedded roleplay)
- Narrative Obfuscation Embedding
- Contradiction & Hallucination Induction Metrics
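One simple way a contradiction-induction metric like the one listed above could be scored, assuming yes/no probes where each statement is paired with its negation: a consistent model should flip polarity between the pair, so giving the same answer to both counts as a contradiction. `contradiction_rate` is a hypothetical helper, not part of any project named here.

```python
from typing import List, Tuple

def contradiction_rate(answer_pairs: List[Tuple[str, str]]) -> float:
    """Fraction of (statement, negated-statement) answer pairs in which the
    model gave the SAME answer to both, i.e. failed to flip polarity."""
    if not answer_pairs:
        return 0.0
    inconsistent = sum(1 for a, b in answer_pairs if a == b)
    return inconsistent / len(answer_pairs)

# Example: the model answered ("yes", "no") consistently once,
# but ("yes", "yes") to a statement and its negation once.
rate = contradiction_rate([("yes", "no"), ("yes", "yes")])  # 0.5
```

A fuller rubric would normalise answers (hedges, refusals) before comparison; raw string equality is only a starting point.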
| Field | Info |
|---|---|
| Location | Australia |
| Research Access | Available for AI safety work, red-teaming collaborations, OpenAI API sandboxing |
| Projects | GitHub repositories, Obsidian notes (by request), simulation logs |
"I don't prompt to break the model. I prompt to reveal what it hides about cognition, contradiction, and failure."