Tiago Fortunato
Product Engineer
Shipping AI-powered products end-to-end with LLMs, RAG, and modern web stacks. Founder of Odys, a live multi-tenant SaaS. Builder of production RAG and Vision AI systems on FastAPI.
About
Who I am
Product Engineer based in Berlin, shipping AI-powered products end-to-end with LLMs, RAG, and modern web stacks. I build real production systems, not prototypes, and care about whether what I build actually solves the problem.
I shipped Odys, a live multi-tenant SaaS scheduling platform for Brazilian freelance professionals, as the sole developer. Full Next.js + TypeScript stack on Supabase, Stripe billing, self-hosted WhatsApp messaging via Evolution API on Railway, automated reminder flows via Supabase pg_cron, error monitoring with Sentry, and CI/CD on GitHub Actions.
I also built a production RAG career chatbot with hybrid retrieval (semantic + BM25 with Reciprocal Rank Fusion), custom section-aware chunking, streaming SSE responses via Groq (Llama 3.1), and a RAGAs evaluation pipeline, self-hosted on AWS EC2 with Docker, Nginx, and Let's Encrypt HTTPS. Alongside it, I built an Inspection Management API with autonomous Vision AI classification on FastAPI, JWT auth, a comprehensive Pytest suite, structured LLM output via LangChain, and full observability through LangSmith.
MSc Software Engineering (Berlin, 2026). Background in Mechanical Engineering. Daily user of Claude Code and agentic AI development workflows: my edge is shipping fast and owning outcomes from idea to production. Trilingual: Portuguese (native), English (fluent), German (B2.2). Open to Product Engineer, AI Engineer, Solutions Engineer, or Founding Engineer roles.
Skills
What I work with
Projects
What I've built
WhatsApp-first scheduling SaaS for Brazilian freelance professionals (therapists, personal trainers, salons). Clients book online, receive automatic WhatsApp reminders 24h before, and professionals manage everything from a dashboard. Live product at odys.com.br, currently in active validation with industry professionals before scaling marketing.
- WhatsApp-native reminders via self-hosted Evolution API v2 on Railway, triggered by Supabase pg_cron (not a third-party bot)
- Multi-tenant architecture: each professional gets a public booking page at /p/[slug] with fully isolated data and configurable availability
- Stripe subscriptions + PIX with webhook handling; Supabase Auth; Drizzle ORM on PostgreSQL; resolved PgBouncer transaction mode compatibility
- Rate limiting with Upstash Redis; error monitoring with Sentry; transactional email via Resend
- Deployed on Vercel (app) + Railway (WhatsApp API Docker container); CI/CD via GitHub Actions
Production RAG chatbot where recruiters and hiring managers ask questions about my background, projects, and skills, and get answers grounded in a curated knowledge base. Self-hosted on AWS EC2.
- Hybrid retrieval: semantic search + BM25 fused with Reciprocal Rank Fusion (RRF) for higher precision than either method alone
- Custom section-aware chunking via font-size analysis: keeps semantically related content together for better retrieval quality
- Streaming SSE responses via Groq (Llama 3.1) with conversation history and suggested follow-up questions
- RAGAs evaluation pipeline measuring faithfulness, answer relevance, context precision, and context recall
- FastAPI backend with LangChain and ChromaDB vector store; vanilla JS frontend; containerized with Docker
- Self-hosted on AWS EC2 with Nginx reverse proxy and Let's Encrypt HTTPS at chatbot.tifortunato.com
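The RRF fusion at the heart of the hybrid retriever fits in a few lines. This is a minimal sketch rather than the chatbot's actual code; the document IDs are illustrative, and k=60 is the conventional damping constant from the original RRF formulation:

```python
def rrf_fuse(rankings, k=60):
    """Fuse multiple ranked result lists with Reciprocal Rank Fusion.

    rankings: list of ranked lists of document IDs (best first).
    k: damping constant; 60 is the commonly used default.
    """
    scores = {}
    for ranked in rankings:
        for rank, doc_id in enumerate(ranked, start=1):
            # Each list contributes 1/(k + rank); documents ranked well
            # by several retrievers accumulate the highest fused score.
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: semantic and BM25 retrievers disagree on ordering.
semantic = ["doc_a", "doc_b", "doc_c"]
bm25 = ["doc_b", "doc_d", "doc_a"]
print(rrf_fuse([semantic, bm25]))  # doc_b wins: ranked well by both
```

Because RRF only uses ranks, not raw scores, it needs no score normalization between the semantic and BM25 retrievers, which is what makes the fusion robust.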
Production REST API for infrastructure inspections with autonomous Vision AI classification. Upload a photo of road damage and the system classifies severity, damage type, and generates an explainable rationale, all running in the background.
- Vision AI classification via Groq SDK: images are compressed client-side, sent to the API, and classified autonomously using background tasks
- Explainable AI (XAI): every classification includes a human-readable rationale field so decisions are transparent and auditable
- LangSmith integration for full LLM observability: trace every prompt, response, and latency in production
- Structured LLM output enforcement with LangChain for consistent severity and damage type responses
- JWT auth with admin role, per-user data isolation, status lifecycle, rate limiting
- Comprehensive Pytest suite covering all endpoints; CI/CD via GitHub Actions; Docker; frontend dashboard on Vercel with image upload and AI status badges
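The structured-output contract can be sketched as a Pydantic schema of the kind LangChain enforces at the model boundary. Field names and values here are illustrative assumptions, not the API's actual schema:

```python
from enum import Enum
from pydantic import BaseModel, Field

class Severity(str, Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

class DamageClassification(BaseModel):
    damage_type: str = Field(description="e.g. 'pothole', 'alligator crack'")
    severity: Severity
    rationale: str = Field(description="Human-readable explanation (XAI)")

# A raw LLM response is validated against the schema before being stored,
# so malformed output fails loudly instead of polluting the database.
raw = {"damage_type": "pothole", "severity": "high",
       "rationale": "Deep, wide cavity in the wheel path."}
result = DamageClassification.model_validate(raw)
print(result.severity.value)  # high
```

Constraining `severity` to an enum is what makes downstream filtering and dashboard badges reliable: the model can phrase its rationale freely, but the machine-readable fields cannot drift.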
Two-stage hybrid pipeline combining YOLOv8-based object detection with a rule-based expert system to detect road surface damage and automatically prioritize maintenance actions. Trained and evaluated on the RDD2022 benchmark dataset.
- Conducted 4 controlled experiments (EXP1 to EXP4) varying model capacity, input resolution, and training length. Best config (YOLOv8s): mAP50 0.663, Precision 0.694, Recall 0.604
- Built a rule-based expert system with geometric filtering, continuous severity scoring, and quantile-based prioritization (LOW / MEDIUM / HIGH)
- Post-processing reduced noisy detections by 31.2% (8304 → 5711) while preserving structurally relevant defects
- Designed for interpretability: every prioritization decision traceable to explicit rules, with no black boxes
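The quantile-based prioritization step can be sketched as follows. The 50th/80th percentile cut points are illustrative assumptions, not the thesis pipeline's exact thresholds:

```python
def prioritize(severity_scores):
    """Bucket continuous severity scores into LOW/MEDIUM/HIGH by quantile.

    Thresholds are taken from the batch itself (50th and 80th percentiles),
    so the labels adapt to the score distribution of each inspection run.
    """
    ordered = sorted(severity_scores)

    def quantile(q):
        # Simple index-based quantile; adequate for a sketch.
        idx = min(int(q * len(ordered)), len(ordered) - 1)
        return ordered[idx]

    q50, q80 = quantile(0.5), quantile(0.8)
    labels = []
    for s in severity_scores:
        if s >= q80:
            labels.append("HIGH")
        elif s >= q50:
            labels.append("MEDIUM")
        else:
            labels.append("LOW")
    return labels

scores = [0.12, 0.35, 0.41, 0.58, 0.77, 0.93]
print(prioritize(scores))
```

Every label traces back to an explicit threshold comparison, which is the interpretability property the rule-based system is built around.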
End-to-end ML pipeline analyzing vehicle sensor data (temperature, vibration, oil pressure, RPM) to predict failures before they occur.
- Multi-stage pipeline: data generation → ETL → EDA → ML → dashboard
- Random Forest classifier with imbalanced data handling (99.9% accuracy)
- Live Streamlit dashboard for real-time vehicle condition monitoring
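The imbalanced-data handling can be sketched with scikit-learn's `class_weight="balanced"` on synthetic sensor data; the actual pipeline's features and weighting strategy may differ:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Synthetic sensor readings: temperature, vibration, oil pressure, RPM.
X = rng.normal(size=(1000, 4))
# Rare "failure" class (~2% of samples), keyed off extreme vibration.
y = (X[:, 1] > 2.0).astype(int)

# class_weight="balanced" reweights the rare failure class so the model
# is penalized for simply predicting "healthy" for everything.
clf = RandomForestClassifier(n_estimators=50, class_weight="balanced",
                             random_state=0)
clf.fit(X, y)
print(clf.predict(X[:5]).shape)
```

With a ~99.9%-majority class, raw accuracy alone is misleading, which is why reweighting (and metrics like recall on the failure class) matter in this kind of pipeline.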
Recommendation engine using matrix multiplication and Python multithreading to simulate scalable user-product scoring, with performance benchmarking between sequential and parallel execution.
- Implemented S = U × P matrix scoring with score normalization
- Compared sequential vs. multithreaded performance with visual output
- Demonstrated threading tradeoffs: thread-management overhead dominates on small inputs, while gains only appear as the matrices grow
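The row-partitioned S = U × P scoring looks roughly like this. Pure-Python threads share the GIL, so this sketch demonstrates the partitioning scheme and its overhead rather than a real speedup; the factor values are illustrative:

```python
import threading

def matmul_rows(U, P, S, rows):
    """Fill in the given rows of S = U x P (pure-Python inner products)."""
    n_items = len(P[0])
    k = len(P)
    for i in rows:
        for j in range(n_items):
            S[i][j] = sum(U[i][t] * P[t][j] for t in range(k))

def score(U, P, n_threads=2):
    # Each thread owns a disjoint stripe of rows, so no locking is needed.
    S = [[0.0] * len(P[0]) for _ in U]
    chunks = [range(t, len(U), n_threads) for t in range(n_threads)]
    threads = [threading.Thread(target=matmul_rows, args=(U, P, S, c))
               for c in chunks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return S

U = [[1.0, 0.0], [0.5, 0.5]]   # user latent factors
P = [[0.2, 0.8], [0.6, 0.4]]   # product latent factors
print(score(U, P))
```

Partitioning by row keeps the threads write-disjoint, which is the simplest way to make the parallel version correct; the benchmark then measures whether the thread overhead is ever repaid at this workload size.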
Experience
What I've done
Education
Academic background
Thesis: Computer Vision Object Detection (YOLO-based pipeline).
Languages
Communication
Contact
Let's talk
Open to Product Engineer, AI Engineer, Solutions Engineer, or Founding Engineer roles in Berlin and remote. If you're building AI-powered products and need someone who ships fast and owns outcomes from idea to production, let's talk.