Project
SourceSurf
building
SourceSurf is a project discovery and curation platform designed to serve as the central hub for a multi-service open-source ecosystem. It tackles the fragmented state of high-quality repository discovery by aggregating data streams from specialized microservices. The platform features a robust webhook architecture that ingests enriched metadata—including real-time star counts, fork activity, and GSoC history—directly into a structured PostgreSQL database.
Key technical achievements include designing an asynchronous data pipeline to handle high-concurrency ingestion from external scrapers and implementing a flexible schema that standardizes diverse metadata formats. SourceSurf provides developers with a unified dashboard for exploring curated, trending projects, backed by a scalable Node.js/Express backend and a type-safe TypeScript implementation that ensures data integrity across the entire discovery workflow.
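The flexible-schema idea can be sketched as a small normalizer: payloads arriving over the webhook from different scrapers are mapped onto one canonical record before being written to PostgreSQL. This is a minimal illustration only; the field names and canonical shape below are hypothetical, not SourceSurf's actual schema.

```typescript
// Hypothetical canonical shape for an ingested repository record.
interface RepoRecord {
  fullName: string;
  stars: number;
  forks: number;
  gsocYears: number[];
}

// Normalize a loosely-typed webhook payload into a RepoRecord,
// tolerating the different field spellings scrapers might send.
function normalizeRepoPayload(raw: Record<string, unknown>): RepoRecord {
  const fullName = String(raw.full_name ?? raw.fullName ?? "");
  if (!fullName.includes("/")) {
    throw new Error(`invalid repository name: ${fullName}`);
  }
  return {
    fullName,
    stars: Number(raw.stargazers_count ?? raw.stars ?? 0),
    forks: Number(raw.forks_count ?? raw.forks ?? 0),
    gsocYears: Array.isArray(raw.gsoc_years) ? raw.gsoc_years.map(Number) : [],
  };
}
```

Normalizing at the ingestion boundary keeps the rest of the pipeline working against one TypeScript type, which is where the end-to-end data-integrity guarantee comes from.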
Project
Quorum
building
Quorum is a comprehensive full-stack Q&A platform tailored specifically for the developer community, facilitating technical knowledge sharing and collaborative problem-solving. Built with a unified TypeScript stack, the application features a React-based frontend and a robust Express backend. The platform implements complex threaded discussions, real-time interaction, and a developer-centric UI optimized for code snippet sharing and technical discourse.
Technically, Quorum solves the challenge of maintaining end-to-end type safety between the client and server through shared TypeScript interfaces, significantly reducing integration bugs and runtime errors. The system manages a relational database of questions, answers, and user profiles, utilizing optimized PostgreSQL queries to handle nested discussion trees and high-traffic user interactions, providing a seamless and responsive experience for developers.
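The two ideas above can be sketched together: an interface shared between client and server, plus the assembly of a nested discussion tree from the flat rows a relational query returns. Names here are illustrative assumptions, not Quorum's real types.

```typescript
// A shared interface like this would live in a package imported by both
// the React client and the Express server (names are illustrative).
interface AnswerNode {
  id: number;
  parentId: number | null;
  body: string;
  replies: AnswerNode[];
}

// Build a nested discussion tree from the flat rows a SQL query returns.
function buildThread(rows: Omit<AnswerNode, "replies">[]): AnswerNode[] {
  const byId = new Map<number, AnswerNode>();
  for (const row of rows) byId.set(row.id, { ...row, replies: [] });
  const roots: AnswerNode[] = [];
  for (const node of byId.values()) {
    const parent = node.parentId === null ? undefined : byId.get(node.parentId);
    if (parent) parent.replies.push(node);
    else roots.push(node);
  }
  return roots;
}
```

Because both sides compile against the same `AnswerNode`, a renamed column surfaces as a compile error rather than a runtime integration bug.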
Project
OSS Trends Scraper
building
The OSS Trends Scraper is a high-performance Python microservice engineered to automate the discovery and enrichment of trending GitHub repositories. Built with FastAPI and an entirely asynchronous pipeline, it utilizes BeautifulSoup4 and httpx for high-concurrency web scraping of the GitHub Trending pages. Once a repository is discovered, the service enriches the record by interfacing with the official GitHub REST API to fetch deep metadata, including owner profiles and precise activity metrics.
To solve the challenge of GitHub's aggressive rate limiting, the service implements an intelligent scheduling and batching system using APScheduler. Enriched data is 'upserted' into a PostgreSQL database (hosted via Supabase) using asyncpg to ensure non-blocking I/O. The microservice then automatically synchronizes this data with the SourceSurf backend via secure webhooks, providing a continuous stream of fresh, high-quality open-source leads while staying within API quotas and keeping resource usage low.
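The batching idea behind the rate-limit handling is simple to sketch: size each batch so one scheduled run fits inside the API budget. The real service does this in Python with APScheduler; the sketch below (in TypeScript, for consistency with the other examples here) uses illustrative numbers, not the service's actual limits.

```typescript
// Split pending repositories into batches sized to fit a rate budget:
// with `limit` API calls per scheduling window and `costPerRepo` calls
// needed to enrich one repository, each batch holds
// floor(limit / costPerRepo) repos, so a run never exceeds the quota.
function planBatches<T>(repos: T[], limit: number, costPerRepo: number): T[][] {
  const perBatch = Math.max(1, Math.floor(limit / costPerRepo));
  const batches: T[][] = [];
  for (let i = 0; i < repos.length; i += perBatch) {
    batches.push(repos.slice(i, i + perBatch));
  }
  return batches;
}
```

A scheduler then drains one batch per window, which keeps enrichment continuous without ever tripping the limiter.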
Project
Price Pulse
building
Price Pulse is a modern real-time price tracking application built on Next.js, designed to monitor and analyze price fluctuations across various e-commerce platforms. The application leverages Server-Side Rendering (SSR) and Edge-compatible middleware to deliver high-performance data fetching and request filtering. It provides users with automated monitoring of product URLs, visual price history charts, and instant notifications for significant price drops.
One of the primary technical challenges solved was the reliable extraction of data from dynamic e-commerce sites that rely heavily on client-side JavaScript. This was achieved through a robust background scraping layer that periodically validates product data and stores historical 'pulses' in a MongoDB database. The frontend utilizes custom React hooks and TypeScript-based charting libraries to transform raw historical data into actionable trend analysis, helping users make data-driven purchasing decisions.
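Turning stored 'pulses' into drop notifications comes down to comparing the latest observation against a baseline. A minimal sketch, with an illustrative baseline (mean of prior pulses) and threshold that are assumptions rather than Price Pulse's actual rules:

```typescript
// A stored price "pulse": one observed price at one point in time.
interface Pulse {
  price: number;
  at: Date;
}

// Flag a significant drop: the latest price sits at least `threshold`
// (e.g. 0.1 for 10%) below the average of the preceding history.
function isSignificantDrop(history: Pulse[], threshold: number): boolean {
  if (history.length < 2) return false;
  const latest = history[history.length - 1].price;
  const prior = history.slice(0, -1);
  const avg = prior.reduce((sum, p) => sum + p.price, 0) / prior.length;
  return (avg - latest) / avg >= threshold;
}
```

The same rolling history that feeds the charts feeds this check, so notifications and trend visuals stay consistent with each other.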
Project
Plant Disease Prediction
building
This project is an advanced agricultural AI system—the Smart Crop Stress Advisor—that predicts plant health status from IoT sensor data and provides actionable field recommendations. Unlike standard classification models, this system utilizes a Hybrid Machine Learning pipeline (combining Random Forest and SVM) to generate a 0-100 risk triage score. It processes 11 distinct sensor features, including soil moisture, chlorophyll levels, and nitrogen content, to provide a comprehensive view of crop health.
A critical technical hurdle addressed was the 'Black Box' nature of AI in agriculture. By integrating SHAP (Explainable AI), the system identifies the specific sensor drivers behind a stress prediction, allowing farmers to understand *why* a plant is at risk. The solution is deployed as a dockerized Streamlit application capable of both single-plant diagnosis and high-volume batch triage from CSV data, bridging the gap between machine learning research and practical field operations.
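The hybrid-pipeline idea of blending two models into one 0-100 triage score can be sketched with simple arithmetic. The deployed system is Python-based (Random Forest + SVM with SHAP); the weights and sigmoid calibration below are illustrative assumptions, shown in TypeScript for consistency with the other examples here.

```typescript
// Blend two model outputs into a single 0-100 triage score: a Random
// Forest class probability and an SVM decision margin squashed through
// a sigmoid into a pseudo-probability. The 0.5/0.5 weighting is an
// illustrative assumption, not the trained system's calibration.
function triageScore(rfProb: number, svmMargin: number, wRf = 0.5): number {
  const svmProb = 1 / (1 + Math.exp(-svmMargin)); // margin -> [0, 1]
  const blended = wRf * rfProb + (1 - wRf) * svmProb;
  return Math.round(blended * 100); // scale to a 0-100 risk score
}
```

A continuous score like this supports triage (sort the batch, treat the worst first) in a way a hard healthy/diseased label cannot.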
Project
Drizzle Auth API
building
Drizzle Auth API is a production-ready authentication backend designed with a primary focus on security and immediate session control. Built with Express 5 and Drizzle ORM, it implements a database-backed session strategy that avoids the common pitfalls of stateless JWTs. By storing sessions in a PostgreSQL table and issuing HTTP-only, Secure cookies, the API keeps session tokens out of reach of XSS attacks and, combined with SameSite cookie settings, mitigates CSRF, while allowing immediate, server-side session revocation.
The project demonstrates the power of Drizzle ORM for type-safe database interactions, ensuring that the SQL schema and TypeScript interfaces remain perfectly synchronized. The authentication flow includes secure registration with password hashing, login, logout with instant session destruction, and protected profile management. This backend serves as a high-security foundation for applications requiring strict session management and end-to-end type safety across the database and API layers.
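The revocation guarantee falls out of the session check itself: every authenticated request consults the sessions table, so deleting or revoking a row logs the user out on their very next request. A minimal sketch of that check, with hypothetical column names rather than the project's actual Drizzle schema:

```typescript
// Shape of a session row as it might come back from the sessions table
// (column names illustrative, not the project's real Drizzle schema).
interface SessionRow {
  id: string;
  userId: number;
  expiresAt: Date;
  revokedAt: Date | null;
}

// A database-backed session is valid only if the row exists, has not
// expired, and has not been revoked; this is the server-side control
// that stateless JWTs cannot offer.
function isSessionValid(row: SessionRow | undefined, now: Date): boolean {
  if (!row) return false;
  if (row.revokedAt !== null) return false;
  return row.expiresAt.getTime() > now.getTime();
}
```

In the real flow this check would run in Express middleware after looking up the cookie's session ID via Drizzle, with the compiler enforcing that the schema and this interface agree.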