🎓 B.Tech CSE (AI & Data Science – IBM Collaboration)
💡 Passionate about AI, local intelligence, multimodal learning, and full-stack development.
🔬 Building the bridge between offline AI systems and real-world usability.
I’m a developer who believes that true intelligence doesn’t always need the cloud.
I love combining AI, Web, and IoT to build self-contained intelligent systems that can think, respond, and visualize — completely offline.
From creating multimodal retrieval systems to designing real-time IoT dashboards, my focus has always been on autonomy, optimization, and user experience.
I enjoy solving complex challenges involving data processing, local LLM inference, automation, and interface design.
An advanced offline AI assistant capable of understanding and reasoning over documents, images, and audio — without internet.
- Offline Multimodal RAG (text, image, audio)
- Local inference using Meta Llama 3 (1B, 3B, 8B) via Llama.cpp
- FAISS + SentenceTransformer for semantic vector retrieval
- Whisper for speech-to-text transcription
- Pytesseract OCR for image understanding
- Flask Backend + Modern Web UI (HTML, CSS, JS)
- Citation Transparency, Voice Input, Dark/Light UI
- Real-time GPU/CPU/RAM Monitoring
Tech Stack:
Python · Flask · FAISS · SentenceTransformer · Whisper · Pytesseract · Llama.cpp · HTML · CSS · JavaScript
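The semantic-retrieval step behind the pipeline above can be sketched without the heavy dependencies. In the real project, SentenceTransformer produces the embeddings and FAISS holds the index; here NumPy stands in for both so the core idea (nearest-neighbor search over normalized vectors) runs anywhere:

```python
# Sketch of the semantic-retrieval step of the RAG pipeline.
# SentenceTransformer embeddings + a FAISS index do this in the real
# project; NumPy cosine similarity stands in here for illustration.
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 2):
    """Return indices of the k most similar documents by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine scores, one per document
    return np.argsort(scores)[::-1][:k]  # best matches first

# Toy 4-dim "embeddings" for three documents and a query.
docs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
query = np.array([1.0, 0.05, 0.0, 0.0])
print(top_k(query, docs))  # indices of the two nearest documents
```

The retrieved chunk indices are what the citation layer maps back to source documents.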
Performance:
- Query latency: 2.8 s (text) / 6.4 s (multimodal)
- GPU Utilization: 72%
- Accuracy: 95% · Citation Precision: 100%
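The real-time GPU/CPU/RAM monitoring listed above can be sketched with `psutil` for the CPU/RAM side; GPU stats would need a vendor tool such as `pynvml`, which is omitted here:

```python
# Minimal resource-monitor sketch using psutil (CPU/RAM only).
# GPU utilization would come from a vendor library like pynvml.
import psutil

def resource_snapshot() -> dict:
    """Return current CPU and RAM utilization as percentages."""
    mem = psutil.virtual_memory()
    return {
        "cpu_percent": psutil.cpu_percent(interval=0.1),
        "ram_percent": mem.percent,
        "ram_used_mb": mem.used // (1024 * 1024),
    }

print(resource_snapshot())
```

Polling this from a background thread and pushing the snapshot to the web UI gives the live dashboard effect.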
- Predicts mental health conditions using IBM Watson Studio & SPSS Modeler 18.5
- Flask-based web interface with a predictive API endpoint
- Uses an ML pipeline for real-time emotional insights
Tech: Flask · IBM Watson · SPSS Modeler · Python
- Built using ESP32-WROOM-32, DHT22, Flame Sensor, OLED Display, and Buzzer
- Features a 3-stage boot animation, weather visuals, and real-time fire alerts
- Converts GIF animations into frame sequences for the OLED display
Purpose: Continuous environment monitoring for high-risk zones
Tech: C/C++ · Arduino IDE · ESP32 · DHT22 · OLED · Flame Sensor
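The GIF-to-frame conversion step can be sketched on the desktop side with Pillow: each frame is resized to the OLED resolution and packed into a 1-bit bitmap. Pushing the bytes to the display is firmware-side C/C++ and not shown here; the 128×64 resolution is an assumption for a typical SSD1306 panel:

```python
# Sketch of GIF -> OLED frame conversion using Pillow.
# Each frame becomes a packed 1-bit bitmap (128*64/8 = 1024 bytes).
from PIL import Image, ImageSequence

OLED_W, OLED_H = 128, 64  # assumed SSD1306-style resolution

def gif_to_frames(path: str) -> list[bytes]:
    """Return one packed 1-bit bitmap per GIF frame."""
    frames = []
    with Image.open(path) as gif:
        for frame in ImageSequence.Iterator(gif):
            mono = frame.convert("L").resize((OLED_W, OLED_H)).convert("1")
            frames.append(mono.tobytes())
    return frames
```

The resulting byte arrays can be dumped as a C header and played back frame-by-frame by the ESP32 firmware.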
- Custom web platform to control ESP-based relays & sensors in real time
- Built to overcome Blynk's 10K-message limit
- Designed for unlimited local control requests with a responsive web UI
Tech: HTML · CSS · JavaScript · Flask · ESP32
- Integrated the Saavn.dev API to fetch and play real songs
- Fully custom UI (no templates) with dynamic playlists & song cards
Tech: HTML · CSS · JavaScript · API Integration
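The API-integration layer can be sketched as two small helpers: one builds the search URL, the other flattens a response into what the song cards need. The endpoint path and JSON field names below are assumptions for illustration, not the documented Saavn.dev schema:

```python
# Sketch of the API-integration layer. The endpoint and response
# shape are assumed for illustration; check the Saavn.dev docs.
from urllib.parse import urlencode

BASE = "https://saavn.dev/api/search/songs"  # assumed endpoint

def search_url(query: str) -> str:
    """Build a song-search URL for the given query text."""
    return f"{BASE}?{urlencode({'query': query})}"

def parse_songs(payload: dict) -> list[dict]:
    """Flatten an assumed response into {title, url} dicts for song cards."""
    return [
        {"title": s.get("name", ""), "url": s.get("downloadUrl", "")}
        for s in payload.get("data", {}).get("results", [])
    ]
```

In the actual player this logic lives in JavaScript `fetch` calls; the shape of the code is the same.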
- Takes "From" & "To" inputs and visualizes the shortest path
- Built on graph algorithms (Dijkstra/DFS)
- Option to view all nodes and paths, like a mini map system
Tech: Python · Tkinter / NetworkX (conceptual base)
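The shortest-path core of the "From"/"To" lookup can be sketched with Dijkstra's algorithm over an adjacency dict; the toy map below is an illustrative stand-in for the real node data:

```python
# Sketch of the shortest-path core: Dijkstra over an adjacency dict.
import heapq

def dijkstra(graph: dict, start: str, goal: str):
    """Return (cost, path) for the cheapest start->goal route, or (inf, [])."""
    pq = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

city = {  # toy map; real node names come from user input
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(dijkstra(city, "A", "D"))  # → (8, ['A', 'C', 'B', 'D'])
```

The returned path list is what the visualization layer draws edge by edge.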
Languages: Python · JavaScript · C++
Frameworks: Flask · React (learning)
AI/ML: Llama.cpp · FAISS · Whisper · SentenceTransformers · IBM Watson
IoT: ESP32 · DHT22 · Flame Sensor · OLED Display
Databases: SQLite · JSON-based storage
Core Strengths: AI Integration · System Optimization · Web + IoT Fusion · Offline Deployment
I’m driven to create autonomous, multimodal AI systems that run entirely offline — bridging the gap between human-like intelligence and local computation.
In the future, I aim to design next-gen AI frameworks, offline copilots, and self-learning assistants that operate independently yet responsibly.
My goal isn’t just to use AI — it’s to make AI truly yours, right on your device.