v2.73.0
4 months ago by Craig Adam
New Features
- Smart Context Length Management: Automatically switches to larger context models when token limits are exceeded, preventing infinite retry loops and reducing API costs
- Proactive Model Selection: Estimates token usage and selects an appropriate model before execution to avoid context length errors
- New AI Model Support: Added the Kimi K2 model with Groq provider integration and the GPT-4.1 nano model
- AI Agent Analytics Dashboard: Added a comprehensive analytics component with expandable chart cards and flexible viewing modes for monitoring AI agent runs
- AST-based React Component Linter: Enhanced the test harness with component linting for React state patterns and improved code quality checks
- PayloadManager Operations: Added support for replaceElements and DELETE marker functionality, with enhanced array update behavior
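The proactive model selection described above amounts to estimating the prompt's token count up front and choosing a model whose context window fits, rather than retrying after a context-length error. A minimal sketch, assuming a simple characters-per-token heuristic; the model names, limits, and `pickModel` helper are hypothetical illustrations, not the actual MemberJunction API:

```typescript
// Sketch of proactive, context-length-aware model selection.
interface ModelSpec {
  name: string;
  maxInputTokens: number;
}

// Rough heuristic: ~4 characters per token for English text (an assumption,
// not the real tokenizer).
function estimateTokens(prompt: string): number {
  return Math.ceil(prompt.length / 4);
}

// Candidates sorted by ascending context window; names are illustrative only.
const models: ModelSpec[] = [
  { name: "small-ctx", maxInputTokens: 8_000 },
  { name: "medium-ctx", maxInputTokens: 32_000 },
  { name: "large-ctx", maxInputTokens: 128_000 },
];

function pickModel(prompt: string, candidates: ModelSpec[]): ModelSpec {
  const needed = estimateTokens(prompt);
  // Leave headroom so the estimate plus the response fits the window.
  const withHeadroom = needed * 1.2;
  const fit = candidates.find((m) => m.maxInputTokens >= withHeadroom);
  if (!fit) {
    throw new Error(`Prompt (~${needed} tokens) exceeds every model's context window`);
  }
  return fit; // smallest model that fits, so cost stays low
}
```

Selecting the smallest sufficient model keeps costs down in the common case while still escalating to the large-context model only when the estimate requires it.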
 
Improvements
- Enhanced AI Agent Timeline: Optimized loading performance and fixed sub-agent expansion to properly display children
- Improved Loop Agent Resilience: Better handling of scenarios where the AI completes work via payload change requests without setting the taskComplete flag
- Better JSON Repair System: Updated prompts and added a new system prompt for JSON repair with GPT-4.1 nano integration
- React/Angular Integration: Fixed memory leaks and added redundant state update detection to prevent infinite loops
- AI Agent Form Layout: Improved form organization with better panel structure and optimized data loading using batched queries
- Metadata Sync Efficiency: Only update lastModified timestamps when records actually change, reducing unnecessary writes
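The metadata-sync optimization above comes down to comparing incoming values against the stored record before touching the timestamp. A hedged sketch of that shape; the `SyncRecord` type and `syncRecord` function are illustrative assumptions, not MemberJunction's actual sync code:

```typescript
// Sketch: move lastModified only when a field actually differs, so a no-op
// sync pass does not churn timestamps across the whole metadata set.
interface SyncRecord {
  fields: { [key: string]: string };
  lastModified: Date;
}

function syncRecord(
  stored: SyncRecord,
  incoming: { [key: string]: string },
  now: Date
): boolean {
  // Compare across the union of keys so added and removed fields both count.
  const keys = new Set([...Object.keys(stored.fields), ...Object.keys(incoming)]);
  let changed = false;
  for (const key of keys) {
    if (stored.fields[key] !== incoming[key]) {
      changed = true;
      break;
    }
  }
  if (changed) {
    stored.fields = { ...incoming };
    stored.lastModified = now; // timestamp moves only on a real change
  }
  return changed;
}
```

Skipping the write entirely for unchanged records is what avoids the cascade of spurious "modified" rows a naive sync would produce.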
 
Bug Fixes
- BaseAgent Payload Preservation: Fixed a critical bug where a sub-agent failure during StartingPayloadValidation would lose the entire parent payload
- Entity Save Inconsistencies: Replaced thrown errors with warning logs for entity save inconsistencies, allowing operations to continue while preserving error tracking
- PayloadManager Delete Operations: Fixed the delete+add operation order and enhanced type definitions for better reliability
- AI Model Token Limits: Updated maximum input token counts for various AI models to reflect actual provider capabilities
- Response Format Handling: Fixed default responseFormat issues in the AI Prompt Runner for better compatibility
- Test Harness Improvements: Prevented duplicate function registration errors and enhanced integration with real MJ utilities instead of mocks
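The PayloadManager delete+add ordering fix above is the classic two-phase merge: apply all deletions first, then adds and replacements, so an add can never be clobbered by a delete targeting the same key. A minimal sketch under those assumptions; the `__DELETE__` sentinel, `Item` type, and `mergeArray` function are hypothetical, not the package's real API:

```typescript
// Sketch of a delete-then-add array merge.
const DELETE_MARKER = "__DELETE__"; // assumed sentinel, not the real marker

interface Item {
  id: string;
  value: string;
}

function mergeArray(current: Item[], updates: Item[]): Item[] {
  // Phase 1: apply every deletion before any add or replace.
  const deletedIds = new Set(
    updates.filter((u) => u.value === DELETE_MARKER).map((u) => u.id)
  );
  const result = current.filter((item) => !deletedIds.has(item.id));

  // Phase 2: apply adds/replacements now that deletions are complete.
  for (const u of updates) {
    if (u.value === DELETE_MARKER) continue;
    const idx = result.findIndex((item) => item.id === u.id);
    if (idx >= 0) {
      result[idx] = u; // replace an existing element in place
    } else {
      result.push(u); // append a new element
    }
  }
  return result;
}
```

With a single interleaved pass, a delete appearing after an add for the same id would silently drop the new element; the two-phase order makes delete-then-re-add safe regardless of how the updates are sequenced.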
 