# Llama 3 8B - PokerBench SFT (GGUF)
GGUF quantized versions of YiPz/llama3-8b-pokerbench-sft.
## Available Files
| File | Size | Description |
|---|---|---|
| `llama3-8b-pokerbench-sft-q4_k_m.gguf` | ~4.5 GB | Recommended - good quality/size balance |
| `llama3-8b-pokerbench-sft-q8_0.gguf` | ~8.5 GB | Higher quality |
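Quants like these are typically produced with llama.cpp's conversion and quantization tools. The commands below are a minimal sketch, assuming a local llama.cpp checkout and build; the exact commands used for these files are not documented here, and the local paths are illustrative.

```bash
# Sketch only: assumes a local llama.cpp checkout/build; paths are illustrative.
# 1) Convert the Hugging Face checkpoint to an f16 GGUF
python convert_hf_to_gguf.py ./llama3-8b-pokerbench-sft \
  --outfile ./llama3-8b-pokerbench-sft-f16.gguf

# 2) Quantize to the variants listed above
./llama-quantize ./llama3-8b-pokerbench-sft-f16.gguf ./llama3-8b-pokerbench-sft-q4_k_m.gguf Q4_K_M
./llama-quantize ./llama3-8b-pokerbench-sft-f16.gguf ./llama3-8b-pokerbench-sft-q8_0.gguf Q8_0
```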
## Usage with Ollama
```bash
# Download
huggingface-cli download YiPz/llama3-8b-pokerbench-sft-gguf llama3-8b-pokerbench-sft-q4_k_m.gguf --local-dir ./

# Create Modelfile
cat > Modelfile << 'EOF'
FROM ./llama3-8b-pokerbench-sft-q4_k_m.gguf
PARAMETER temperature 0.1
SYSTEM "You are an expert poker player. Respond with your action in <action></action> tags."
EOF

# Create and run
ollama create pokerbench -f Modelfile
ollama run pokerbench "Your scenario..."
```
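The GGUF file can also be run directly with llama.cpp's `llama-cli` instead of Ollama. The snippet below is a minimal sketch, assuming llama.cpp is built locally and `llama-cli` is in the current directory; it simply prepends the same system instruction used in the Modelfile above to the prompt.

```bash
# Minimal sketch: assumes a local llama.cpp build with llama-cli available.
# -e processes the \n escapes in the prompt string.
./llama-cli -m ./llama3-8b-pokerbench-sft-q4_k_m.gguf \
  --temp 0.1 -n 128 -e \
  -p "You are an expert poker player. Respond with your action in <action></action> tags.\n\nYour scenario..."
```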
## License

Subject to the Llama 3 license.
## Model tree for YiPz/llama3-8b-pokerbench-sft-gguf

- Base model: meta-llama/Llama-3.1-8B
- Finetuned: meta-llama/Llama-3.1-8B-Instruct
- Finetuned: YiPz/llama3-8b-pokerbench-sft