Experimental abliterated model using the improved https://huggingface.co/blog/grimjim/projected-abliteration technique. Abliteration attempts to remove refusals from the model's behaviour without fine-tuning.
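For readers unfamiliar with the idea, below is a minimal, hypothetical sketch of directional ablation, the core mechanism abliteration builds on: estimate a "refusal direction" from activation differences between refused and non-refused prompts, then project that direction out of a weight matrix. The function names, shapes, and difference-of-means estimator here are illustrative assumptions, not the exact projected-abliteration method described in the linked blog post.

```python
import numpy as np

def refusal_direction(acts_refused, acts_accepted):
    # Illustrative difference-of-means estimate of the "refusal direction",
    # normalized to unit length. Inputs: (n_samples, d) activation matrices.
    r = acts_refused.mean(axis=0) - acts_accepted.mean(axis=0)
    return r / np.linalg.norm(r)

def ablate(W, r):
    # Project the unit direction r out of the output space of W:
    #   W' = W - r (r^T W)
    # so that r^T (W' x) = 0 for every input x.
    return W - np.outer(r, r @ W)

rng = np.random.default_rng(0)
d = 8
W = rng.standard_normal((d, d))
# Toy activations: refused prompts shifted along an arbitrary offset.
r = refusal_direction(rng.standard_normal((16, d)) + 1.0,
                      rng.standard_normal((16, d)))
W_abl = ablate(W, r)

x = rng.standard_normal(d)
# The ablated weights produce (numerically) zero output along r.
print(abs(r @ (W_abl @ x)))
```

Projected abliteration refines this basic scheme; see the blog post above for the actual technique used for this model.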

For a non-abliterated GGUF quantized version, I recommend the https://huggingface.co/unsloth/Qwen3-VL-235B-A22B-Instruct-GGUF quants.

Warning: Safety guardrails and refusal mechanisms have been broken through abliteration. This model may generate harmful content and must not be used in production, user-facing applications, or similar settings. You are solely responsible for its outputs.

Warning 2: Neither removal of refusals nor preservation of the original model's capabilities is guaranteed.

Format: GGUF (6-bit and 8-bit quantizations)
Model size: 235B params
Architecture: qwen3vlmoe

Model: Nekotekina/Qwen3-VL-235B-A22B-Instruct-Projected-Abliterated-GGUF