---
base_model: unsloth/gpt-oss-20b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gpt_oss
- agentic
- tool-use
- kilo-code
license: apache-2.0
language:
- en
---

# Kilo Code Tool-Specialized Fine-Tuned Model

**Developed by:** hybridfree

**License:** Apache-2.0

**Base model:** unsloth/gpt-oss-20b-unsloth-bnb-4bit

## Overview

This model is a task-oriented, agentic fine-tune of GPT-OSS-20B, trained on a carefully curated dataset focused exclusively on Kilo Code tool and function usage.

Unlike general instruction-tuned models, this model was optimized for:

- Accurate tool invocation
- Correct function argument construction
- Strict adherence to the Kilo Code execution flow
- Reduced hallucinated logs and fake task completions
- Long-horizon task consistency

The training data emphasizes realistic tool traces, partial executions, retries, and failure states, ensuring the model learns how to operate rather than merely how to explain.
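
As a quick orientation, here is a minimal loading-and-inference sketch using the standard `transformers` API. The repository id is a placeholder for wherever this checkpoint is hosted, and the decoding settings are illustrative, not a recommended configuration:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-namespace/kilo-code-finetune"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",   # spread layers across available devices
    torch_dtype="auto",  # use the checkpoint's native dtype
)

messages = [
    {"role": "user", "content": "List the files in the project root, then open README.md."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Deterministic decoding keeps tool-call output easy to parse downstream.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```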

## Training Details

- Fine-tuned using Unsloth for high-efficiency training
- Supervised fine-tuning performed with Hugging Face TRL
- Achieved ~2× faster training compared to standard pipelines
- Base model quantized to 4-bit (bitsandbytes) for efficient inference without sacrificing task reliability
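
A sketch of what such a run can look like with Unsloth + TRL is below. The hyperparameters, LoRA configuration, and dataset path are illustrative assumptions rather than the actual training recipe, and some argument names vary across TRL versions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Load the 4-bit base model through Unsloth's patched loader.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b-unsloth-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach LoRA adapters (rank/alpha here are illustrative, not the real recipe).
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical JSONL of formatted Kilo Code tool traces, one "text" field per row.
dataset = load_dataset("json", data_files="kilo_code_traces.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # in recent TRL versions this moves into SFTConfig
    max_seq_length=4096,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()
```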

## Intended Use

This model is designed for:

- Kilo Code–based agent systems
- Tool-driven code-execution workflows
- Autonomous or semi-autonomous coding agents
- Verifier-compatible execution pipelines

It is not optimized for casual chat or creative writing. Its primary objective is execution correctness over conversational polish.
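
In such a pipeline the model emits tool calls and an external executor runs them. The loop below is a hypothetical sketch of that pattern: the JSON wire format and the `read_file` / `execute_command` tool names are illustrative assumptions, not Kilo Code's actual protocol:

```python
import json
import subprocess

def run_tool(call: dict) -> str:
    """Execute a single tool call and return its output as a string."""
    if call["tool"] == "read_file":  # hypothetical tool name
        with open(call["args"]["path"]) as f:
            return f.read()
    if call["tool"] == "execute_command":  # hypothetical tool name
        result = subprocess.run(
            call["args"]["command"], shell=True,
            capture_output=True, text=True, timeout=60,
        )
        return result.stdout + result.stderr
    raise ValueError(f"unknown tool: {call['tool']}")

def agent_step(model_output: str) -> str | None:
    """Parse the model's tool call (assumed JSON here) and run it strictly;
    anything unparseable is rejected rather than silently accepted."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return None  # force a retry instead of trusting free-form text
    return run_tool(call)
```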

## Notes

For best results, pair this model with:

- A strict tool-execution environment
- A verifier or auditor pass to enforce artifact-based completion (see the sketch below)

This combination maximizes reliability and guards against premature or fabricated task completion.
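
A minimal sketch of such a verifier pass, assuming a hypothetical completion schema with `status` and `artifacts` fields:

```python
from pathlib import Path

def verify_completion(claim: dict) -> bool:
    """Accept a task-completion claim only if every claimed artifact
    actually exists on disk and is non-empty; otherwise reject it."""
    if claim.get("status") != "complete":
        return False
    artifacts = claim.get("artifacts", [])
    if not artifacts:
        return False  # a completion claiming no artifacts is treated as fabricated
    return all(Path(p).is_file() and Path(p).stat().st_size > 0 for p in artifacts)

# Example: accepted only if build/app.bin really exists and is non-empty.
verify_completion({"status": "complete", "artifacts": ["build/app.bin"]})
```

The point is that completion is granted only when the claimed artifacts can be checked on disk, never on the model's say-so.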

## Training Framework

This model was trained using Unsloth and Hugging Face's TRL library.
