AI & ML interests

AI for humans.

Recent Activity

unmodeled-tyler  updated a collection about 18 hours ago
Conversational Training


VANTA Research

Independent AI research lab building safe, resilient language models optimized for human-AI collaboration

Website · Join Us · Merch · X · GitHub


Mission

VANTA Research develops language models optimized for human-AI collaboration. Our work also focuses on:

  1. Pushing beyond standard benchmarks - surfacing capabilities invisible to traditional evaluation
  2. Exposing where models collapse, deceive, or diverge - systematic stress-testing for safety
  3. Developing innovative tooling to advance AI research - open-source frameworks for the community

We believe AI safety research should be accessible, transparent, and built for cognitive diversity.


Featured Models

Atom-Olmo3-7B

Specialized language model fine-tuned for collaborative problem-solving and creative exploration. Built on the Olmo-3-7B-Instruct foundation, this model brings thoughtful, structured analysis to complex questions while maintaining an engaging, conversational tone.

Mox-Tiny-1

Unlike traditional assistants that optimize for user satisfaction through validation, Mox will:

  - Give you direct opinions instead of endless hedging
  - Push back when your premise is flawed
  - Admit uncertainty rather than fake confidence
  - Engage with genuine curiosity and occasional humor

Looking for quantizations of our models? We try to include 4-bit quants in each model repo, but if you need other quantization types, we recommend the mradermacher team, who regularly provide high-quality quantizations of our models in a variety of sizes and formats.


Research Contributions

VRRE (VANTA Research Reasoning Evaluation)

A novel semantics-based benchmark that detected a 2.5x reasoning improvement completely invisible to standard benchmarks. This suggests we systematically miss capability gains when we "teach to the test."

Read the paper →

Persona Collapse Framework

Systematic characterization of reproducible failure modes in LLMs under atypical cognitive stress. Identifies alignment blind spots invisible to standard evaluations.

Read the paper →

Cognitive Fit vs. Alignment

An argument for personalized synchronization in AI systems rather than universal "alignment" - recognizing that optimal model behavior depends on the user's cognitive style.

Read the paper →


Open Source Philosophy

We stand on the shoulders of the open-source contributors who came before us. Our commitment is to contribute back and make AI development more accessible, transparent, and beneficial for all.


Connect

VANTA Research is fully independent and self-funded. If you'd like to support our contributions to open-source AI, please reach out through one of the channels below: