# Spam Detection using DistilBERT

This model is a fine-tuned `distilbert-base-uncased` transformer for binary spam classification (spam vs. ham).
## Labels

- 0 → Ham
- 1 → Spam
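If the checkpoint's config does not already carry these names, they can be attached so downstream tools report "SPAM"/"HAM" instead of the generic "LABEL_0"/"LABEL_1". A minimal sketch (whether this checkpoint already sets `id2label` is an assumption):

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "<your-username>/spam-detection-distilbert"
)

# Attach human-readable names to the two class indices; assumes the
# checkpoint may not already define these standard config fields.
model.config.id2label = {0: "HAM", 1: "SPAM"}
model.config.label2id = {"HAM": 0, "SPAM": 1}
```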
## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the fine-tuned checkpoint and its tokenizer from the Hub.
tokenizer = AutoTokenizer.from_pretrained("<your-username>/spam-detection-distilbert")
model = AutoModelForSequenceClassification.from_pretrained("<your-username>/spam-detection-distilbert")

# Tokenize a single message, padding/truncating to 128 tokens.
inputs = tokenizer(
    "You won a free iPhone!",
    return_tensors="pt",
    truncation=True,
    padding="max_length",
    max_length=128,
)

# Run inference without tracking gradients.
with torch.no_grad():
    outputs = model(**inputs)

# The class with the highest logit is the prediction (1 = spam, 0 = ham).
prediction = torch.argmax(outputs.logits, dim=1).item()
print("SPAM" if prediction == 1 else "HAM")
```
## GitHub Repository

Code for training and inference is available here:
https://github.com/revanthreddy0906/spam-detection-distilbert.git
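For orientation, the sketch below shows how such a fine-tune is commonly set up with the `Trainer` API. The dataset (`sms_spam`) and all hyperparameters here are illustrative placeholders, not the repository's actual settings; see the GitHub link above for the real training code.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Hypothetical dataset choice; the actual training data lives in the
# linked GitHub repository and may differ.
dataset = load_dataset("sms_spam", split="train").train_test_split(test_size=0.2)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # "sms" is the text column of the sms_spam dataset; its "label"
    # column (0 = ham, 1 = spam) is kept as-is.
    return tokenizer(batch["sms"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Placeholder hyperparameters, not the repository's actual settings.
args = TrainingArguments(
    output_dir="spam-detection-distilbert",
    num_train_epochs=2,
    per_device_train_batch_size=16,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized["train"])
trainer.train()
```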
## Base model

distilbert/distilbert-base-uncased