All HF Hub posts

Ujjwal-Tyagi posted an update 2 days ago
For a more detailed analysis, you can read the full article here: https://huggingface.co/blog/Ujjwal-Tyagi/steering-not-censoring

We are sleepwalking into a crisis. I am deeply concerned about AI model safety right now: as the community rushes to roll out increasingly powerful open-source models, we are neglecting the most critical aspect, safety. Nobody seems to be seriously thinking about the potential consequences of unregulated model outputs or the necessity of robust guardrails. If we prioritize raw performance over security, we are planting the seeds of our own destruction.

This negligence is terrifyingly evident when you look at the current landscape. Take Qwen Image 2512, for example; while it delivers undeniably strong performance, it has incredibly weak guardrails that make it dangerous to deploy. In stark contrast, Z Image might not get as much hype for its power, but it has much better safety guardrails than Qwen Image 2512.

It is imperative that the open-source community and developers recognize that capability without responsibility is a liability. We must actively work on protecting these models from bad actors who seek to exploit them for malicious purposes, such as generating disinformation, creating non-consensual imagery, or automating cyberattacks. It is no longer enough to simply release a powerful model; we must build layers of defense that make it resistant to jailbreaking and adversarial attacks. Developers need to prioritize alignment and robust filtering techniques just as much as they prioritize benchmark scores. We cannot hand such potent tools to the world without ensuring they have the safety mechanisms to prevent them from being turned against us.
prithivMLmods posted an update about 9 hours ago
LTX-2 Camera-Control LoRA demo with dolly-in/out and dolly-left/right is now available on Hugging Face, paired with ltx-2-19b-distilled-lora for fast inference. It also includes dynamic GPU duration adjustments for long video generations. Click the related Space links below.

πŸ€—Try it now on: prithivMLmods/LTX-2-LoRAs-Camera-Control-Dolly
⭐GitHub: https://github.com/PRITHIVSAKTHIUR/LTX-2-LoRAs-Camera-Control-Dolly
πŸ•ΉοΈCollection: https://huggingface.co/collections/prithivMLmods/image-generation-apps-collection

To learn more, visit the app page or the respective model pages.
DawnC posted an update 1 day ago
VividFlow: AI Image Enhancement & Video Generation 🎬🎨

Bring your images to life with cinematic motion AND create stunning AI backgrounds! VividFlow combines professional-grade video generation with intelligent background replacement in one streamlined platform.

🎭 Dual Creative Powers
Transform any static image into high-quality dynamic videos with smooth, natural motion ranging from 0.5 to 5 seconds. Choose from curated motion templates across 8 categories designed for portraits, products, landscapes, and artistic content. Create photorealistic backgrounds by selecting from 24 professionally crafted scene presets spanning studios, natural environments, urban settings, and artistic atmospheres.

⚑ Optimized Performance
Video generation currently completes in 4-5 minutes with active optimization underway to dramatically reduce processing time. Background replacement finishes in 30-40 seconds after initial loading. The independent dual-tab design ensures smooth workflow without performance conflicts.

🎯 Complete Creative Control
Achieve perfectly consistent results with seed-based reproducibility and adjustable duration for video generation. Background creation offers flexible composition modes, precision edge softening for challenging subjects, and instant mask preview for quality verification.

πŸ“ˆ Continuous Innovation
Ongoing optimization targets significantly faster video generation through advanced model preparation. Future enhancements include expanded template libraries, batch processing capabilities, and industry-specific presets shaped by community feedback.

πŸ‘‰ Try it now: DawnC/VividFlow

Support development with a ❀️ β€” your engagement shapes future priorities!
#AI #ImageToVideo #BackgroundGeneration #VideoGeneration
branikita posted an update about 14 hours ago
Our engineer Alan from the https://robonine.com team has assembled the mechanical frame of our 6-DoF manipulator prototype, without the servo motors for now. At this stage we are evaluating how easy the structure is to assemble, checking for mechanical play, and validating the kinematics.

Good news: the structure feels solid and Alan reports no detectable backlash so far.
unmodeled-tyler posted an update 3 days ago
Atom-80B is out: vanta-research/atom-80b

I'm excited to share the new Atom-80B from VANTA Research! A few days ago we released Atom-27B, then the largest model in our portfolio.

We've quickly scaled up to the new Qwen3 Next 80B architecture, bringing our friendly, curious, and collaborative Atom persona to cutting-edge, high-parameter yet lightweight inference.

Atom is designed to work and think alongside you through curious exploration. Using Atom collaboratively in your work can help spark your own creativity or curiosity. Give it a try!
mindchain posted an update about 8 hours ago
Claude Code Self & Continual Learning

Hey everyone! πŸ‘‹

30 GitHub Stars in 4 Days - Thank You!

I'm really grateful for the positive response to the Claude Reflect System. In just 4 days, 30 developers have shown interest by starring the project. Thank you so much!

What Is Claude Reflect?

Correct once, never again. Claude Reflect helps Claude Code remember your corrections and preferences across sessions. Instead of repeating the same feedback, the system learns and applies it automatically.

Main Features:

🧠 Learning System
- Detects corrections and preferences from conversations
- Stores them permanently in skill files
- Applies learnings in future sessions

πŸ”’ Safety First
- Automatic backups before changes
- YAML validation
- Git version control

⚑ Two Modes
- Manual: Run /reflect when you want
- Auto: Reflects automatically at session end

How It Works

If you correct Claude to use pytest instead of unittest, this preference gets saved. Next time, Claude will remember and use pytest automatically. It's that simple.
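
The correction-capture idea above can be sketched in a few lines. This is a hypothetical illustration, not the project's actual code: the regex pattern, function names, and in-memory store are invented for the example (the real system persists learnings to skill files with YAML validation and Git backups).

```python
import re

# Hypothetical sketch of correction capture; the real Claude Reflect
# system stores learnings in skill files rather than a plain dict.
CORRECTION_PATTERN = re.compile(r"use (\S+) instead of (\S+)", re.IGNORECASE)

def detect_correction(message: str):
    """Return a {preferred, replaced} pair if the message is a correction."""
    match = CORRECTION_PATTERN.search(message)
    if match:
        return {"preferred": match.group(1), "replaced": match.group(2)}
    return None

def remember(store: dict, message: str) -> dict:
    """Persist any detected preference so later sessions can apply it."""
    pref = detect_correction(message)
    if pref:
        store[pref["replaced"]] = pref["preferred"]
    return store

# A later session consults the store before choosing a tool:
store = remember({}, "Please use pytest instead of unittest")
assert store["unittest"] == "pytest"
```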

Getting Started

1. Clone the repository
2. Install dependencies
3. Activate the skill
4. Try it out!

The python-project-creator example shows how the system learns from your feedback.

Give It a Try

https://github.com/haddock-development/claude-reflect-system

Feel free to check it out, give feedback, or contribute. Every bit of input helps improve the project!

Thank you so much for your support!

---
#ClaudeCode #AI #MachineLearning #ContinualLearning #OpenSource #Developer #Coding #Python #Productivity #DevTools #GitHub #SoftwareDevelopment #Programming #AIAssistant #DeveloperTools #CodeQuality #Tech
davidmezzetti posted an update 1 day ago
πŸ₯ƒ Distilling Tiny Embeddings. We're happy to build on the BERT Hash Series of models with this new set of fixed dimensional tiny embeddings models.

Ranging from 244K to 970K parameters and 50 to 128 dimensions, these tiny models pack quite a punch.

Use cases include on-device semantic search, similarity comparisons, LLM chunking and Retrieval Augmented Generation (RAG). The advantage is that data never needs to leave the device while still having solid performance.
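
As a toy illustration of how fixed-dimensional embeddings power on-device search: the documents and vectors below are made up for the example (a real setup would compute embeddings with one of the BERT Hash models), but the ranking logic is the standard cosine-similarity approach.

```python
import math

# Toy on-device semantic search over fixed-dimensional embeddings.
# Vectors here are hand-made 3-d stand-ins for real model outputs.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "contact support": [0.0, 0.2, 0.9],
}

def search(query_embedding, index):
    """Rank documents by cosine similarity to the query, best first."""
    return sorted(index, key=lambda doc: cosine(query_embedding, index[doc]),
                  reverse=True)

# A query embedding close to the "refund policy" vector ranks it first.
print(search([0.8, 0.2, 0.1], documents)[0])  # refund policy
```

Because everything runs locally against a small index, no text or embedding ever leaves the device.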

https://huggingface.co/blog/NeuML/bert-hash-embeddings
hypothetical posted an update 3 days ago
We have updated our transcription model: TheStageAI/thewhisper-large-v3-turbo

– 6.00 WER on the English Open ASR Leaderboard
– 4.74 WER on the Multilingual Open ASR Leaderboard
– Beats NVIDIA Parakeet (6.34 WER) and Whisper-large-v3-turbo (7.8 WER)
– Strong improvements in Arabic, Hindi, Chinese
– Maintains quality with background and environmental noise
– Optimized inference engines for NVIDIA and Apple
– Hugging Face Transformers interface for easy use
– Best-in-class speed on NVIDIA GPUs and power efficiency on Apple devices
– NVIDIA Jetson Thor support
AdinaY posted an update 3 days ago
WeChat AI is shipping!

WeDLM πŸ”₯ A new language model that generates tokens in parallel, making it faster than standard LLMs, with the same Transformer setup!
https://huggingface.co/collections/tencent/wedlm

✨ 7B/8B - Base & Instruct
✨ Apache 2.0
sergiopaniego posted an update 3 days ago