Let me explore the current codebase to understand the existing structure before planning.
Now let me look at the remaining pages and the sidebar component:
Now let me research face-api.js for browser-based facial expression detection, which is the best lightweight option for this:
Good. face-api.js is the right choice - it's a lightweight library whose expression model is only ~310KB, and it recognizes seven facial expressions (happy, sad, angry, disgusted, fearful, surprised, neutral) directly in the browser on top of TensorFlow.js. Now let me check the UI components available:
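The core of the page will be reducing face-api.js's per-expression confidence scores to a single dominant mood. A minimal sketch of that reduction, assuming the seven labels FaceExpressionNet emits (`dominantExpression` is an illustrative helper, not part of the library):

```typescript
// face-api.js returns one confidence score (0..1) per expression; the page
// reduces that map to a single dominant mood. In the component itself the
// scores would come from a call like:
//   const result = await faceapi
//     .detectSingleFace(video, new faceapi.TinyFaceDetectorOptions())
//     .withFaceExpressions();
//   const scores = result?.expressions;
type Expression =
  | "happy" | "sad" | "angry" | "disgusted"
  | "fearful" | "surprised" | "neutral";

type ExpressionScores = Record<Expression, number>;

function dominantExpression(
  scores: ExpressionScores
): { mood: Expression; confidence: number } {
  // Walk every label and keep the highest-scoring one.
  let mood: Expression = "neutral";
  let confidence = -Infinity;
  for (const [key, value] of Object.entries(scores) as [Expression, number][]) {
    if (value > confidence) {
      mood = key;
      confidence = value;
    }
  }
  return { mood, confidence };
}
```

The same helper can drive both the bounding-box label and the mood-saving logic later on.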
Good, I have a complete picture. Now let me write the plan:
I have full context on all the files. Let me set up the todo list and start building.
First, let me add the dependency and create the utility files:
Now let me create the mood store and activity suggestions utilities in parallel:
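A minimal sketch of the mood store, assuming a single localStorage key holding a JSON array. The entry shape and key name here are illustrative, not the exact ones used in the app; a Storage-like interface is accepted so the store can also run outside the browser:

```typescript
// Each saved detection: the dominant label, its score, and when it happened.
interface MoodEntry {
  mood: string;        // dominant expression label, e.g. "happy"
  confidence: number;  // 0..1 score from the expression model
  timestamp: number;   // Date.now() at detection time
}

// Subset of the Web Storage API the store needs.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const MOOD_KEY = "mindguard-mood-history"; // assumed key name

function loadMoods(storage: StorageLike): MoodEntry[] {
  const raw = storage.getItem(MOOD_KEY);
  return raw ? (JSON.parse(raw) as MoodEntry[]) : [];
}

function saveMood(storage: StorageLike, entry: MoodEntry): MoodEntry[] {
  // Append to the existing history and write the whole array back.
  const moods = loadMoods(storage);
  moods.push(entry);
  storage.setItem(MOOD_KEY, JSON.stringify(moods));
  return moods;
}
```

In the page itself, `window.localStorage` would be passed as the storage argument, keeping all history on-device.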
Good - utilities created. Now let me move on to building the main Live Detection page component:
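The activity-suggestions utility created above can be sketched as a static lookup from mood label to curated wellness activities. The activities and category names below are illustrative placeholders, not the app's actual content:

```typescript
// One curated suggestion: a title plus a wellness category.
type Mood =
  | "happy" | "sad" | "angry" | "disgusted"
  | "fearful" | "surprised" | "neutral";

interface Activity {
  title: string;
  category: "meditation" | "breathing" | "exercise" | "journaling";
}

// Static map; the real utility would hold 3-4 entries per mood.
const SUGGESTIONS: Record<Mood, Activity[]> = {
  sad: [
    { title: "5-minute guided meditation", category: "meditation" },
    { title: "Gratitude journaling prompt", category: "journaling" },
    { title: "Short walk outside", category: "exercise" },
  ],
  angry: [
    { title: "Box breathing (4-4-4-4)", category: "breathing" },
    { title: "Progressive muscle relaxation", category: "meditation" },
    { title: "Write out the trigger", category: "journaling" },
  ],
  // Remaining moods follow the same shape (abbreviated here).
  happy: [{ title: "Savoring journal entry", category: "journaling" }],
  disgusted: [{ title: "Grounding breath cycle", category: "breathing" }],
  fearful: [{ title: "4-7-8 breathing", category: "breathing" }],
  surprised: [{ title: "Mindful check-in", category: "meditation" }],
  neutral: [{ title: "Two-minute body scan", category: "meditation" }],
};

function suggestionsFor(mood: Mood): Activity[] {
  // Fall back to the neutral set if a mood ever lacks entries.
  return SUGGESTIONS[mood] ?? SUGGESTIONS.neutral;
}
```

The page can then render `suggestionsFor(detectedMood)` as interactive cards after each detection.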
Now let me build the full Live Detection page:
Now let me wire everything into the navigation:
Now update the main page.tsx to wire in the LiveDetectionPage component:
The live camera mood detection feature is now fully integrated into MindGuard. Here's what was built:
Live Detection Page -- uses face-api.js with the TinyFaceDetector and FaceExpressionNet models (loaded from a CDN) to perform real-time facial expression analysis directly in the browser:
- The camera feed displays a bounding-box overlay with the dominant expression label, while 7 expression confidence bars update every 1.5 seconds.
- Moods above 50% confidence are automatically saved to localStorage every ~10 seconds, and each detection triggers personalized activity suggestions (3-4 curated wellness activities per mood -- meditation, breathing, exercise, journaling, etc.) displayed as interactive cards.
- The page also includes a mood distribution bar chart aggregating all historical detections and a scrollable history table with relative timestamps, plus a privacy-first banner making clear that no data leaves the device.
- The feature is wired into the sidebar as "Live Detection" (second nav item, with a ScanFace icon) and is reachable from the navigation context.
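The save-gating described above (persist only detections above 50% confidence, at most once every ~10 seconds) can be sketched as a pure predicate. The function name and the way the component tracks `lastSavedAt` (e.g. in a React ref) are assumptions for illustration:

```typescript
const MIN_CONFIDENCE = 0.5;       // only save moods above 50% confidence
const SAVE_INTERVAL_MS = 10_000;  // throttle saves to roughly every 10s

function shouldSaveMood(
  confidence: number,
  lastSavedAt: number | null, // null before the first save
  now: number
): boolean {
  if (confidence <= MIN_CONFIDENCE) return false;
  // Skip if a save happened within the throttle window.
  if (lastSavedAt !== null && now - lastSavedAt < SAVE_INTERVAL_MS) return false;
  return true;
}
```

Keeping this as a pure function makes the throttling easy to unit-test apart from the camera loop.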