Projects
These are not generic summaries. Each flagship project below is grounded in the code and plan documents that already exist in this workspace. Planned work is labeled as planned. Paused work is labeled as paused.
A scheduled reporting app that turns ad-platform performance data into readable, AI-generated insight reports with web, email, and PDF delivery paths.
Problem
The challenge was not just to fetch metrics. It was to make the output useful to a person reading the report, which meant turning noisy numbers into a clear summary without losing the selected context.
Build
I built the project in Next.js with typed report configuration, scheduling, HTML and PDF output, and a generation flow that supports both manual runs and recurring delivery.
AI usage
This is the most direct product-facing AI example in the portfolio. The app collects selected metrics and raw report data, builds a constrained prompt, sends it through the OpenAI API, and returns an AI summary that is then sanitized and embedded into the finished report output.
Tradeoffs
The reporting flow keeps working even if the AI step fails. The code falls back to a deterministic basic summary so AI improves the output, but does not block the product from delivering a report.
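A minimal sketch of that fallback shape, assuming hypothetical names and metric fields (the real app's types, prompt, and sanitization differ):

```typescript
// Hypothetical report shape for illustration only.
interface ReportData {
  campaign: string;
  impressions: number;
  clicks: number;
}

// Stand-in for the OpenAI call; it may throw on timeouts or API errors.
type AiSummarizer = (data: ReportData) => Promise<string>;

// Deterministic fallback so the report always ships; AI only improves it.
function basicSummary(data: ReportData): string {
  const ctr = data.impressions > 0 ? (data.clicks / data.impressions) * 100 : 0;
  return `${data.campaign}: ${data.impressions} impressions, ` +
    `${data.clicks} clicks (${ctr.toFixed(2)}% CTR).`;
}

// If the AI step fails for any reason, fall back instead of blocking delivery.
async function buildSummary(data: ReportData, ai: AiSummarizer): Promise<string> {
  try {
    return await ai(data);
  } catch {
    return basicSummary(data);
  }
}
```

The key design choice is that the catch branch returns a complete, readable summary rather than an error, so a failed AI call degrades quality instead of availability.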
An internal transportation workflow app focused on centralizing trips, invoicing, payments, admin controls, and future intake and deployment planning.
Problem
The underlying workflow was fragmented across spreadsheets and role-specific handoffs. The goal was to move invoicing, accounting, trip tracking, and approvals into a single application instead of scattered operational memory.
Build
The project combines Node, Express, MongoDB, Electron, and a role-aware UI to manage trips, print outputs, payment flows, brand variants, and business rules in one system.
AI usage
AI is part of how the project is managed and scaled. The plans directory records phased architecture work for backup and restore, future on-prem deployment, public intake design, PO workflow hardening, and admin controls. It shows AI being used to keep a complicated internal product moving with traceable decisions.
Tradeoffs
Some of the most important future work is intentionally still documented rather than presented as finished. Public intake, parts of the deployment path, and certain operational phases are clearly marked as planned so the story stays accurate.
A Linux desktop configuration repo that evolved into a reproducible restore kit with custom Waybar modules, popup tooling, and scoped backup management.
Problem
I wanted more than a personalized Linux desktop. I wanted a setup I could understand deeply, customize aggressively, and still restore on a fresh machine without starting over from scratch.
Build
The repo tracks Hyprland, Waybar, GTK, autostart entries, custom Python and shell helpers, and a CachyOS bootstrap flow that restores packages, config targets, and selected editor and shell assets.
AI usage
This repo also reflects how I blend AI into systems work. It includes a Waybar AI-usage monitor for Claude and Codex usage, extensive plan records, and tooling that treats AI workflows as something observable and operational instead of vague magic.
Tradeoffs
The restore flow is intentionally selective. Secrets, browser profiles, auth tokens, and heavyweight machine-local data are excluded so the Git-backed bootstrap stays reproducible without becoming reckless.
A private AI voice assistant built around dockerized local models, streaming speech, and native always-on clients that turn Ollama into something you can actually talk to.
Problem
The real problem was making a local-first voice assistant feel useful in real time. That meant wake-word detection, microphone capture, transcription, response generation, speech output, and device access all had to work together without pushing every interaction through a hosted API.
Build
I separated the voice-focused code into `AI_Voice_Assistant`, keeping the Docker backend and the native runtime together: FastAPI, Ollama, Redis, SearXNG, Whisper, Piper, wake-word tooling, and Windows, macOS, and Linux shell clients that stream audio into a shared assistant service.
AI usage
AI is the product here. Ollama handles the local language models, Whisper handles authoritative speech-to-text, Piper handles text-to-speech, and the runtime coordinates wake detection, streaming voice turns, interruption handling, and the decision of when local AI should speak versus act.
Tradeoffs
The biggest lesson was that privacy and local inference are not enough on their own. The assistant only matters if it responds quickly, so the design leans on warm model management, streaming audio, concise spoken replies, and targeted latency reduction instead of treating local LLM usage as the finish line.
Other builds / workflow systems
These are intentionally lighter than the flagship stories. They are here to show range: one is a paused local AI-coder experiment, one is the bootstrapper I use to shape AI around my workflow, and one is a smaller business-software proof point.
A local AI coding assistant experiment with multi-model candidate generation, judge synthesis, planner and executor flows, and a browser-facing control surface.
Why it belongs here
I intentionally paused this because the project started demanding more than implementation alone. To make it truly strong, it needed deeper benchmarking, evaluation, and training work, and I chose to prioritize other products and systems projects instead.
Evidence
A portable repo scaffold that fine-tunes AI assistance around my process by giving agents shared rules, append-only plans, and implementation history.
Why it belongs here
It adds light process overhead, but the trade is worth it on bigger projects because the extra structure reduces context loss and makes AI help more consistent over time.
Evidence
An Electron and Node.js maintenance-tracking app for bus repair workflows, costs, and shared operational visibility.
Why it belongs here
This is a lighter portfolio entry because the repo evidence is smaller and the strongest narrative value is breadth rather than a deep architecture story.
Evidence