All HF Hub posts

ginipick 
posted an update about 22 hours ago
🚀 FLUX Workflow Canvas

Welcome to Workflow Canvas, your ultimate AI-driven platform for crafting stunning design concepts and intricate workflow diagrams that empower your business! 🤖✨

ginigen/Workflow-Canvas

Features
Product Design 🛠️
Transform your ideas into reality with sleek, industrial product designs that blend modern aesthetics with advanced technology.

Mindmap 🧠
Generate vibrant, educational mind maps that outline your strategies and processes in a clear, visually engaging layout.

Mockup 📱
Quickly prototype intuitive app interfaces and web designs using clean, hand-drawn wireframes that capture your vision.

Infographic 📊
Build polished, data-rich infographics that communicate complex corporate metrics and trends with style and clarity.

Diagram 📈
Illustrate comprehensive, end-to-end business workflows—from market analysis to implementation—with detailed and organized diagrams.

Flowchart 🔄
Design easy-to-follow, hand-drawn style flowcharts that map out your operational processes using vibrant colors and minimalistic icons.

How It Works
Set Your Parameters:
Customize your creative process by adjusting the seed, dimensions, inference steps, and guidance scale through the intuitive sidebar.

Choose Your Visual Style:
Explore our diverse range of tabs—from Product Design and Mindmap to Flowchart—each tailored to a unique creative output.

Get Inspired:
Dive into our rich library of example prompts featuring detailed lists and tree structures to instantly populate your design ideas.

Generate Your Masterpiece:
Click the “Generate” button and watch as your ideas come to life in beautifully rendered images! 🎨

Experience the fusion of art and technology with Workflow Canvas – where your business ideas transform into dynamic, visual masterpieces. Get started today and revolutionize the way you design! 🚀
Jaward 
posted an update 2 days ago
Finally, here it is: a faster, custom, scalable GRPO trainer for smaller models with < 500M params. It can train on a CPU with 8 GB of RAM, and also supports GPU for sanity's sake (including support for vLLM + Flash Attention). It uses SmolLM2-135M/360M-Instruct as the reference and base models. Experience your own “aha” moment 🐳 on 8 GB of RAM.
Code: https://github.com/Jaykef/ai-algorithms/blob/main/smollm2_360M_135M_grpo_gsm8k.ipynb
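The core trick that makes GRPO cheap enough for an 8 GB CPU box is that it needs no learned value model: each completion's advantage is computed relative to the other completions sampled for the same prompt. A minimal sketch of that step (illustrative only; the function name is hypothetical and this is not the notebook's code):

```python
# Minimal sketch of GRPO's group-relative advantage computation.
# For each prompt, several completions are sampled and rewarded;
# each completion's advantage is its reward normalized against
# the statistics of its own group (no value network required).

def group_relative_advantages(rewards, eps=1e-8):
    """rewards: scalar rewards for the completions of ONE prompt."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + eps) for r in rewards]

# Example: 4 sampled answers to a GSM8K question, reward 1 if correct.
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
```

Correct answers get positive advantages and incorrect ones negative, purely from within-group comparison, which is why the reference and base models above are the only networks that need to fit in memory.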
prithivMLmods 
posted an update 2 days ago
The last week of Impression Craft Arts and sketches from strangerzonehf 🎨🧑🏻‍🎨

- Collection : strangerzonehf/Flux-Ultimate-LoRA-Collection

Adapters:
+ Ld-Art : strangerzonehf/Ld-Art
+ Animeopix-Flux : strangerzonehf/Animeopix-Flux
+ Flux-Super-Paint-LoRA : strangerzonehf/Flux-Super-Paint-LoRA
+ CinematicShot-Pics-Flux : strangerzonehf/cinematicShot-Pics-Flux
+ Oil-Wall-Art-Flux : strangerzonehf/Oil-Wall-Art-Flux
+ Pixelo-Flux : strangerzonehf/Pixelo-Flux
+ Abstract-Shattered : strangerzonehf/Abstract-Shattered
+ Neon-Impressionism-Flux : strangerzonehf/Neon-Impressionism-Flux
+ NewG-Art : strangerzonehf/NewG-Art

🪧Demo : prithivMLmods/FLUX-LoRA-DLC
🤗Page : https://huggingface.co/strangerzonehf
tianchez 
posted an update 3 days ago
Introducing VLM-R1!

GRPO has helped DeepSeek R1 to learn reasoning. Can it also help VLMs perform stronger for general computer vision tasks?

The answer is YES, and it generalizes better than SFT. We trained Qwen 2.5 VL 3B on RefCOCO (a visual grounding task) and evaluated on RefCOCO Val and RefGTA (an out-of-distribution task).
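For a visual grounding task like RefCOCO, a natural GRPO reward is how well the predicted bounding box overlaps the ground truth. A minimal sketch of such an IoU-based reward (illustrative; VLM-R1's actual reward function may differ):

```python
# Sketch of an intersection-over-union (IoU) reward for GRPO on
# visual grounding: the model's predicted box is scored against
# the ground-truth box, giving a graded (not just 0/1) signal.

def iou(box_a, box_b):
    """Boxes as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A prediction shifted half a box to the right of the ground truth:
reward = iou((0, 0, 10, 10), (5, 0, 15, 10))
```

Because the reward is computed from the output alone, it needs no learned reward model, which is what makes RL on grounding tasks feasible at the 3B scale.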

https://github.com/om-ai-lab/VLM-R1
clem 
posted an update 1 day ago
We crossed 1B+ tokens routed to our inference provider partners on HF, a feature we released just a few days ago.

Just getting started, of course, but early users seem to like it, and we're always happy to partner with cool startups in the ecosystem.

Have you been using any integration and how can we make it better?

https://huggingface.co/blog/inference-providers
schuler 
posted an update 2 days ago
🔮 GPT-3 implemented in pure Free Pascal!
https://github.com/joaopauloschuler/gpt-3-for-pascal

This implementation follows the GPT-3 Small architecture from the landmark paper "Language Models are Few-Shot Learners":
┌─────────────────────────┐
│     Input Layer       │
├─────────────────────────┤
│ Token & Positional    │
│     Embedding         │
├─────────────────────────┤
│   12x Transformer     │
│      Blocks           │
│  - 12 heads           │
│  - 768 hidden dims    │
│  - 3072 intermediate  │
├─────────────────────────┤
│   Output Layer        │
└─────────────────────────┘

Clean Pascal Implementation
for CntLayer := 1 to {Layers=}12 do
begin
  Result.AddTransformerBlockCAI(
    {Heads=}12, 
    {intermediate dimensions=}4*768, 
    {NoForward=}true, 
    {HasNorm=}true, 
    false
  );
end;
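The dimensions in the diagram pin down the model size almost exactly. A back-of-the-envelope count (in Python for brevity; the vocab size of 50,257 and context length of 2,048 are assumptions taken from the GPT-2/GPT-3 papers, not from the Pascal code):

```python
# Parameter count for the GPT-3 Small shape above:
# 12 layers, 12 heads, 768 hidden dims, 3072 intermediate dims.

d, d_ff, layers = 768, 3072, 12
vocab, ctx = 50257, 2048          # assumed GPT-2/GPT-3 values

attn = 3 * d * d + 3 * d          # fused Q/K/V projection + bias
attn += d * d + d                 # attention output projection + bias
mlp = d * d_ff + d_ff             # MLP up-projection + bias
mlp += d_ff * d + d               # MLP down-projection + bias
norms = 2 * 2 * d                 # two layernorms (scale + shift each)
per_block = attn + mlp + norms

total = layers * per_block + vocab * d + ctx * d + 2 * d  # + final norm
print(f"{total:,} parameters")
```

The result lands at roughly 125M parameters, matching the "GPT-3 Small" row of the paper's model table.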

jasoncorkill 
posted an update 1 day ago
This dataset was collected in roughly 4 hours using the Rapidata Python API, showcasing how quickly large-scale annotations can be performed with the right tooling!

All that for less than the cost of a single hour of a typical ML engineer's time in Zurich!

The new dataset contains ~22,000 human annotations evaluating AI-generated videos along several dimensions, such as Prompt-Video Alignment, Word-for-Word Prompt Alignment, Style, Speed of Time Flow, and Quality of Physics.

Rapidata/text-2-video-Rich-Human-Feedback
burtenshaw 
posted an update 1 day ago
NEW COURSE! We’re cooking hard on Hugging Face courses, and it’s not just agents. The NLP course is getting the same treatment with a new chapter on Supervised Fine-Tuning!

👉 Follow to get more updates https://huggingface.co/nlp-course

The new SFT chapter will guide you through these topics:

1️⃣ Chat Templates: Master the art of structuring AI conversations for consistent and helpful responses.

2️⃣ Supervised Fine-Tuning (SFT): Learn the core techniques to adapt pre-trained models to your specific outputs.

3️⃣ Low Rank Adaptation (LoRA): Discover efficient fine-tuning methods that save memory and resources.

4️⃣ Evaluation: Measure your model's performance and ensure top-notch results.

This is the first update in a series, so follow along if you’re upskilling in AI.
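To give a flavor of the first topic: a chat template simply turns a list of role-tagged messages into the single string the model was trained on. A minimal sketch in the ChatML style used by many instruct models (illustrative only; in practice the template lives in the tokenizer, e.g. `tokenizer.apply_chat_template` in transformers):

```python
# Minimal sketch of what a chat template does: flatten role-tagged
# messages into one training/inference string. The <|im_start|> /
# <|im_end|> markers mimic the ChatML convention; real models may
# use different special tokens defined by their tokenizer.

def apply_chat_template(messages, add_generation_prompt=True):
    text = ""
    for m in messages:
        text += f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n"
    if add_generation_prompt:
        # Leave the assistant turn open so the model completes it.
        text += "<|im_start|>assistant\n"
    return text

prompt = apply_chat_template([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is SFT?"},
])
```

Getting this structure right (and consistent between training and inference) is exactly why the chapter treats templates before fine-tuning itself.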
tegridydev 
posted an update 2 days ago
WTF is Fine-Tuning? (intro4devs)

Fine-tuning your LLM is like min-maxing your ARPG hero so you can push high-level dungeons and get the most out of your build/gear... Makes sense, right? 😃

Here's a cheat sheet for devs (but open to anyone!)

---

TL;DR

- Full Fine-Tuning: Max performance, high resource needs, best reliability.
- PEFT: Efficient, cost-effective, mainstream, enhanced by AutoML.
- Instruction Fine-Tuning: Ideal for command-following AI, often combined with RLHF and CoT.
- RAFT: Best for fact-grounded models with dynamic retrieval.
- RLHF: Produces ethical, high-quality conversational AI, but expensive.

Choose wisely and match your approach to your task, budget, and deployment constraints.
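To make the PEFT entry concrete: LoRA, the most common PEFT method, freezes the full weight matrix W and trains only a low-rank pair B and A, so the effective weight becomes W + BA. A minimal sketch of the idea and the parameter savings (illustrative numbers; not from the article):

```python
# Sketch of the LoRA idea: instead of updating a full weight matrix
# W (d_out x d_in), train a low-rank pair B (d_out x r) and
# A (r x d_in). Only B and A receive gradients.

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*Y)] for row in X]

d_out, d_in, r = 768, 768, 8          # hypothetical layer shape, rank 8
full_params = d_out * d_in            # trainable params in full FT
lora_params = d_out * r + r * d_in    # trainable params with LoRA
savings = full_params / lora_params   # how many times fewer

# Tiny numeric example of merging the update into W (scaling = 1):
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[0.5], [0.5]]
A = [[1.0, 1.0]]
delta = matmul(B, A)
W_merged = [[w + dw for w, dw in zip(rw, rd)] for rw, rd in zip(W, delta)]
```

At rank 8 on a 768x768 layer, that is 48x fewer trainable parameters, which is why PEFT sits in the "efficient, cost-effective, mainstream" slot of the cheat sheet.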

I just posted the full extended article here
if you want to continue reading >>>

https://huggingface.co/blog/tegridydev/fine-tuning-dev-intro-2025
louisbrulenaudet 
posted an update 2 days ago
I am pleased to introduce my first project built upon Hugging Face’s smolagents framework, integrated with Alpaca for financial market analysis automation 🦙🤗

The project implements technical indicators such as the Relative Strength Index (RSI) and Bollinger Bands to provide momentum and volatility analysis. Market data is retrieved through the Alpaca API, enabling access to historical price information across various timeframes.
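The RSI mentioned above reduces to a short computation over closing prices. A minimal sketch using simple averages of gains and losses (illustrative only; the project pulls real prices from the Alpaca API, and Wilder's original RSI uses smoothed rather than simple averages):

```python
# Sketch of the Relative Strength Index (RSI): the ratio of average
# gains to average losses over a lookback window, mapped to 0..100.
# Uses simple averages for clarity; Wilder's RSI smooths them.

def rsi(closes, period=14):
    gains, losses = [], []
    for prev, cur in zip(closes, closes[1:]):
        change = cur - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    avg_gain = sum(gains[-period:]) / period
    avg_loss = sum(losses[-period:]) / period
    if avg_loss == 0:
        return 100.0          # no losses in the window: max reading
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# Monotonically rising prices give the maximum (overbought) reading.
value = rsi([float(p) for p in range(1, 20)])
```

Readings above ~70 are conventionally read as overbought and below ~30 as oversold, which is the kind of signal the agent feeds into its momentum analysis.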

AI-powered insights are generated using Hugging Face’s inference API, facilitating the analysis of market trends through natural language processing with DuckDuckGo search integration for real-time sentiment analysis based on financial news 🦆

Link to the GitHub project: https://github.com/louisbrulenaudet/agentic-market-tool