Why every PM needs to understand AI pipelines, not just prompts.
I built CartCoach AI. It's an AI-powered grocery assistant that uses Claude to score products against your fitness goals in real time.
There’s a dangerous myth in product management right now. People think that understanding AI means knowing how to write good prompts. It doesn’t. Prompting is to AI what writing SQL queries is to data engineering. A useful skill, but nowhere near the full picture.
When I built CartCoach AI, I learned this the hard way. The app scans grocery products and tells you whether they’re good for your fitness goal. Lose fat, gain muscle, go anti-inflammatory, whatever. Claude API powers the recommendations. But here’s the thing. The prompt that generates the recommendation is maybe 15 lines of code. The system around it? That’s 95% of the work.
## The prompt is the tip of the iceberg
When a user scans a barcode in CartCoach, here’s what actually happens before Claude ever sees a prompt:
1. The barcode hits our API, which queries OpenFoodFacts for nutrition data
2. If that fails, we fall back to a curated demo product database
3. The product gets run through a category-aware health scoring algorithm (0 to 100) that weights protein density, sugar, fibre, processing level, and fat quality differently based on what kind of food it is
4. We pull the user’s current fitness goal and macro targets, calculated from their biometrics using the Mifflin-St Jeor equation
5. We pull their current cart contents so Claude has context on what they’ve already picked up
6. Only then does the prompt get assembled and sent to Claude
7. Claude’s response gets parsed into structured output: a recommendation message and a swap suggestion
8. If Claude fails (rate limit, timeout, no API key), the rule-based fallback engine kicks in and generates a recommendation using pattern matching
That’s 8 steps. The prompt is step 6. A product manager who only understands prompting would have built steps 6 and 7 and called it a day. They would have missed the health scoring, the cart context, the graceful degradation, and the fallback engine, which, by the way, delivers about 80% of the AI’s value with zero external dependencies.
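Stitched together, the eight steps above might look something like the sketch below. This is a minimal illustration, not the actual CartCoach code: every function name, field name, and the stand-in scoring formula is hypothetical.

```python
# Minimal sketch of the scan pipeline. All names and the toy scoring
# formula are illustrative assumptions, not CartCoach's real code.

def handle_scan(barcode, user, cart, primary_db, fallback_db, ask_claude):
    # Steps 1-2: product lookup with a fallback source.
    product = primary_db.get(barcode) or fallback_db.get(barcode)
    if product is None:
        return {"error": "product not found"}

    # Step 3: local health score (0-100); crude stand-in for the
    # category-aware algorithm.
    score = max(0, min(100, 100 - 2 * product["sugar_g"] + 3 * product["protein_g"]))

    # Steps 4-5: assemble user context for the model.
    context = {"goal": user["goal"], "cart": [p["name"] for p in cart]}

    # Steps 6-8: try Claude; on any failure, fall back to a local rule.
    try:
        return {"source": "claude", "message": ask_claude(product, score, context)}
    except Exception:
        verdict = "good pick" if score >= 60 else "consider a swap"
        return {"source": "rules", "message": f"{product['name']}: {verdict}"}


# Usage with an in-memory demo database and a deliberately failing AI call:
demo_db = {"123": {"name": "Greek yogurt", "sugar_g": 4, "protein_g": 10}}

def broken_claude(*args):
    raise TimeoutError("simulated outage")

result = handle_scan("123", {"goal": "gain muscle"}, [], {}, demo_db, broken_claude)
print(result["source"], "->", result["message"])
```

The point of the shape: the `try`/`except` around the model call is what makes step 8 a first-class part of the pipeline rather than an afterthought.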
## Graceful degradation is a product decision, not a technical one
The single most important architectural decision in CartCoach wasn’t which model to use. It was deciding that the app must work fully without an API key.
This sounds like an engineering constraint. It’s not. It’s a product strategy decision. If your AI product breaks when the API is down, you don’t have a product. You have a wrapper. Users don’t care why something failed. They care that it did.
So we built a rule-based recommendation engine that runs entirely locally. It detects high sugar, low protein, excessive processing, and poor fat quality. It has pre-written swap suggestions for common products. It’s not as good as Claude, but it’s always available, always fast, and always free.
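A rule-based engine like that can be sketched in a few dozen lines. The thresholds, the NOVA-group cutoff, and the swap suggestions below are illustrative placeholders, not CartCoach's actual values:

```python
# Hedged sketch of a rule-based fallback engine. Thresholds and swap
# suggestions are illustrative, not CartCoach's actual values.

SWAPS = {
    "soda": "sparkling water with lime",
    "white bread": "wholegrain bread",
    "flavoured yogurt": "plain Greek yogurt",
}

def rule_based_recommendation(product):
    """Flag common nutrition problems and suggest a pre-written swap."""
    flags = []
    if product.get("sugar_g", 0) > 15:
        flags.append("high sugar")
    if product.get("protein_g", 0) < 5:
        flags.append("low protein")
    if product.get("nova_group", 1) >= 4:      # NOVA group 4 = ultra-processed
        flags.append("ultra-processed")
    if product.get("saturated_fat_g", 0) > 5:
        flags.append("high saturated fat")

    swap = SWAPS.get(product.get("category", ""))
    if not flags:
        return {"verdict": "looks fine", "swap": None}
    return {"verdict": "Watch out: " + ", ".join(flags), "swap": swap}


rec = rule_based_recommendation(
    {"name": "Cola", "category": "soda", "sugar_g": 27, "protein_g": 0, "nova_group": 4}
)
print(rec["verdict"])
print(rec["swap"])
```

It runs in microseconds, costs nothing, and never times out, which is exactly the trade described above: worse than Claude, but always there.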
The best AI products are the ones that work without AI. The AI just makes them better.
## Context injection is where the real intelligence lives
Here’s what most people get wrong about AI recommendations. They treat every request as independent. User scans a product, AI evaluates it in isolation, done.
But recommendations aren’t independent. If you’ve already got chicken breast and brown rice in your cart, and you scan a protein shake, the recommendation should be different than if your cart was empty. The protein shake might be great for someone starting their shop, but redundant for someone who’s already hit their protein target.
This is context injection. Feeding the model not just the product data, but the user’s goal, their biometrics-derived targets, and their current cart state. It’s the difference between a generic nutrition label reader and a personalised shopping assistant.
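Concretely, that assembly step might look like the sketch below, with the Mifflin-St Jeor equation feeding the user's targets. The function names, prompt wording, and field names are all hypothetical:

```python
# Illustrative context-injection sketch. Names and prompt wording are
# hypothetical; the prompt shape is the point, not the exact text.

def mifflin_st_jeor_bmr(weight_kg, height_cm, age, sex):
    """Basal metabolic rate: 10w + 6.25h - 5a, +5 for men / -161 for women."""
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age
    return base + (5 if sex == "male" else -161)

def build_prompt(product, user, cart):
    bmr = mifflin_st_jeor_bmr(user["weight_kg"], user["height_cm"],
                              user["age"], user["sex"])
    cart_names = ", ".join(item["name"] for item in cart) or "empty"
    return (
        f"User goal: {user['goal']}. Estimated BMR: {bmr:.0f} kcal.\n"
        f"Cart so far: {cart_names}.\n"
        f"Scanned product: {product['name']} "
        f"({product['protein_g']}g protein, {product['sugar_g']}g sugar).\n"
        "In 1-2 sentences, say whether this fits the goal and suggest a "
        "swap if it doesn't."
    )

prompt = build_prompt(
    {"name": "Protein shake", "protein_g": 25, "sugar_g": 3},
    {"goal": "gain muscle", "weight_kg": 80, "height_cm": 180, "age": 30, "sex": "male"},
    [{"name": "chicken breast"}, {"name": "brown rice"}],
)
print(prompt)
```

With the cart line present, the model can notice that protein is already well covered; with `Cart so far: empty.` the same product reads very differently.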
## What PMs should actually learn about AI
If you’re a product manager building with AI, here’s what matters more than prompt engineering.
**Data pipeline design.** Where does your data come from? How do you validate it? What happens when a source fails? In CartCoach, we have a triple fallback for product data: OpenFoodFacts API, demo database, manual entry. Each has different latency, accuracy, and coverage characteristics. Understanding these trade-offs is a PM skill, not an engineering one.
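A fallback chain like that can be expressed generically: try each source in order, take the first hit, and surface manual entry as the last resort. The source functions below are stand-ins, not real API clients:

```python
# Sketch of a triple-fallback lookup chain; the source functions here are
# hypothetical stand-ins, not real API clients.

def first_hit(barcode, sources):
    """Try each (name, fetch) source in order; return the first non-None result."""
    for name, fetch in sources:
        result = fetch(barcode)
        if result is not None:
            return name, result
    return "manual", None  # last resort: ask the user to enter it by hand

sources = [
    ("openfoodfacts", lambda b: None),  # simulate an API miss or outage
    ("demo_db", lambda b: {"name": "Oat milk"} if b == "456" else None),
]
origin, product = first_hit("456", sources)
print(origin, product)
```

Keeping the chain ordered by accuracy rather than latency is itself a product call: you accept a slower first hop because its data is better.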
**Evaluation frameworks.** How do you know if your AI output is good? In CartCoach, we built a health scoring algorithm that runs independently of Claude. This gives us a baseline. If Claude’s recommendation contradicts the health score, something’s wrong. You need ground truth before you can evaluate AI output.
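One cheap way to operationalise that baseline check: flag any response whose tone points the opposite way from the local score. The keyword lists and thresholds below are illustrative, not a production sentiment classifier:

```python
# Hedged sketch of a score-vs-model consistency check. Keyword lists and
# score thresholds are illustrative assumptions.

POSITIVE = ("great", "good", "fits", "solid")
NEGATIVE = ("avoid", "swap", "skip", "too much")

def contradicts(health_score, recommendation):
    """Flag cases where the local score and the model's verdict disagree."""
    text = recommendation.lower()
    sounds_positive = any(w in text for w in POSITIVE)
    sounds_negative = any(w in text for w in NEGATIVE)
    if health_score >= 70 and sounds_negative and not sounds_positive:
        return True   # healthy product, negative-sounding recommendation
    if health_score <= 30 and sounds_positive and not sounds_negative:
        return True   # unhealthy product, positive-sounding recommendation
    return False

print(contradicts(85, "Avoid this, too much sugar."))
print(contradicts(20, "Great pick for your goal!"))
print(contradicts(85, "Great pick, fits your goal."))
```

Contradictions don't automatically mean the model is wrong, but they're the cases worth logging and reviewing first.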
**Fallback strategies.** What happens when the AI fails? This isn’t an edge case. It’s a certainty. APIs go down, rate limits hit, latency spikes. Your fallback strategy defines your minimum viable experience. In CartCoach, the rule-based engine isn’t a backup. It’s a product in its own right.
**Context management.** What information does the model need to give a good answer? More importantly, what information should you not send? Context windows aren’t infinite, and every token costs money. We send goal, cart state, and product data. Not the user’s full purchase history, not every product in the database, not their life story.
**Cost modelling.** Every API call costs money. At scale, this adds up. CartCoach uses Claude Sonnet with a 300-token max response. That’s a deliberate choice. Longer responses cost more and don’t add value. The recommendation needs to be 1-2 sentences, not a paragraph. Understanding the cost-quality trade-off is essential for any PM shipping AI features.
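A back-of-the-envelope model makes the 300-token cap concrete. The per-million-token prices and traffic numbers below are illustrative assumptions, not published pricing:

```python
# Back-of-the-envelope cost model. Prices and traffic figures are
# illustrative assumptions, not published pricing.

def monthly_cost(scans_per_day, input_tokens, output_tokens,
                 usd_per_m_input, usd_per_m_output):
    per_call = (input_tokens * usd_per_m_input
                + output_tokens * usd_per_m_output) / 1_000_000
    return scans_per_day * 30 * per_call

# 10,000 scans/day, ~800 input tokens of injected context, and an assumed
# $3/M input and $15/M output price. Compare a 300-token cap to a 1,000-token one:
cost_300 = monthly_cost(10_000, 800, 300, 3.0, 15.0)
cost_1000 = monthly_cost(10_000, 800, 1000, 3.0, 15.0)
print(f"${cost_300:,.0f}/mo vs ${cost_1000:,.0f}/mo")
```

Under these assumptions the output cap alone is the difference of thousands of dollars a month, for text users wouldn't read anyway.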
## The bottom line
The AI gold rush has created a generation of product managers who think “AI-powered” means “we added an API call to GPT.” It doesn’t. The best AI products are systems. Carefully designed pipelines where the model is one component among many.
If you’re building with AI, spend 20% of your time on the prompt and 80% on everything around it. The pipeline is the product. The prompt is just the cherry on top.