Double the output parameters:

```typescript
switch (mode) {
  case "lightning":
    maxTokens = 2048  // Small for lightning mode
    break
  case "standard":
    maxTokens = 8192  // Medium for standard
    break
  case "research":
    maxTokens = 16384 // Larger for research
    break
}
```
I'll double the output token limits for each processing mode as requested. Let me first read the current API route to check the existing token settings.
I've doubled the output token limits for all processing modes: lightning from 1024 to 2048 tokens, standard from 4096 to 8192, and research from 8192 to 16384. This allows significantly longer and more comprehensive content generation in every mode while preserving the relative performance characteristics of each processing level.
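For context, here is a minimal sketch of how the per-mode limit could be wired into the Gemini call, assuming the route uses the official @google/generative-ai SDK; the model name and the `maxTokensFor` helper are illustrative assumptions, not taken from the actual file:

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

// Hypothetical helper: maps a processing mode to its (doubled) output token limit.
function maxTokensFor(mode: "lightning" | "standard" | "research"): number {
  switch (mode) {
    case "lightning": return 2048;
    case "standard":  return 8192;
    case "research":  return 16384;
  }
}

// Illustrative wiring: the limit is applied via generationConfig.maxOutputTokens.
async function generate(mode: "lightning" | "standard" | "research", prompt: string) {
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
  const model = genAI.getGenerativeModel({
    model: "gemini-1.5-flash", // assumed model name; the route may use a different one
    generationConfig: { maxOutputTokens: maxTokensFor(mode) },
  });
  const result = await model.generateContent(prompt);
  return result.response.text();
}
```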
Replace file: app/api/gemini/route.ts
I'll read the text attachment and replace the API route file as requested.
Now I'll read the current API route file and replace it with the updated comprehensive version.
I've replaced the API route with a comprehensive new version that includes an advanced system prompt with structured templates for lightning, standard, and research modes. The new implementation features intelligent model fallback with retry logic, enhanced content preprocessing, timeout handling, and detailed error responses with user-friendly suggestions. The system now uses a sophisticated prompt engineering approach with zero-hallucination principles and structured output formatting.
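As an illustration of the fallback-with-retry and timeout pattern described above, here is a minimal sketch of a Next.js route handler; the model names, retry budget, timeout value, and error messages are assumptions for the example rather than the actual contents of route.ts:

```typescript
import { NextResponse } from "next/server";
import { GoogleGenerativeAI } from "@google/generative-ai";

// Assumed fallback order; the real route may list different models.
const MODEL_CANDIDATES = ["gemini-1.5-pro", "gemini-1.5-flash"];
const REQUEST_TIMEOUT_MS = 30_000; // illustrative timeout
const MAX_RETRIES_PER_MODEL = 2;   // illustrative retry budget

// Reject if the model call takes longer than the timeout.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms)
    ),
  ]);
}

export async function POST(request: Request) {
  const { prompt, maxTokens } = await request.json();
  const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);

  let lastError: unknown;
  // Try each candidate model in order, retrying transient failures.
  for (const modelName of MODEL_CANDIDATES) {
    const model = genAI.getGenerativeModel({
      model: modelName,
      generationConfig: { maxOutputTokens: maxTokens },
    });
    for (let attempt = 1; attempt <= MAX_RETRIES_PER_MODEL; attempt++) {
      try {
        const result = await withTimeout(
          model.generateContent(prompt),
          REQUEST_TIMEOUT_MS
        );
        return NextResponse.json({ text: result.response.text(), model: modelName });
      } catch (err) {
        lastError = err;
      }
    }
  }

  // All models and retries exhausted: return a detailed, user-friendly error.
  return NextResponse.json(
    {
      error: "Generation failed for all candidate models.",
      detail: String(lastError),
      suggestion: "Try again shortly, or switch to a lighter processing mode.",
    },
    { status: 502 }
  );
}
```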