Scenario Parameter Configuration
Please fill in your AI inference scenario parameters. The system will automatically calculate memory requirements and recommend suitable server configurations.
Inference Model Configuration
Model Name #1
Auxiliary Model Configuration
Business Load Configuration
System Configuration
Fraction of GPU memory reserved for the key-value (KV) cache used during inference
Memory consumed by the CUDA runtime, drivers, and other system components
💡 Calculation Logic Explanation
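The exact formula the tool applies is not shown here, but a typical estimate combines model weights, a KV-cache reservation, and fixed system overhead. Below is a minimal sketch under those assumptions; the function and parameter names (`params_b`, `kv_cache_ratio`, `system_overhead_gb`) are illustrative, not the tool's actual field names.

```python
def estimate_total_memory_gb(
    params_b: float,               # model size in billions of parameters
    precision_bytes: int = 2,      # 2 for FP16/BF16, 1 for INT8
    kv_cache_ratio: float = 0.3,   # fraction of total memory reserved for KV cache
    system_overhead_gb: float = 3.0,  # CUDA runtime, drivers, etc. (assumed fixed)
) -> float:
    """Rough total-memory estimate for serving one model.

    Assumes the KV-cache reservation is expressed as a fraction of total
    memory, so: total = (weights + overhead) / (1 - kv_cache_ratio).
    """
    # Model weights: parameter count x bytes per parameter, in GiB.
    weights_gb = params_b * 1e9 * precision_bytes / 1024**3
    return (weights_gb + system_overhead_gb) / (1 - kv_cache_ratio)


# Example: a 7B model in FP16 -> roughly 23 GB total under these assumptions.
print(f"{estimate_total_memory_gb(7):.1f} GB")
```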
Calculation Results
Memory requirements and server recommendations based on your configuration parameters.
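How the tool maps a memory requirement onto specific hardware is likewise not specified here. One plausible approach, sketched below with a purely hypothetical GPU catalog, is to pick the smallest configuration whose total memory covers the estimate.

```python
# Hypothetical catalog; real recommendations depend on the tool's own data.
GPU_CONFIGS = [
    ("1x 24 GB GPU", 24),
    ("1x 80 GB GPU", 80),
    ("2x 80 GB GPUs", 160),
    ("4x 80 GB GPUs", 320),
    ("8x 80 GB GPUs", 640),
]

def recommend(required_gb: float) -> str:
    # Return the smallest configuration with enough total memory.
    for name, capacity_gb in GPU_CONFIGS:
        if capacity_gb >= required_gb:
            return name
    return "requirement exceeds the largest listed configuration"

print(recommend(23.0))   # -> "1x 24 GB GPU"
print(recommend(190.0))  # -> "4x 80 GB GPUs"
```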
Waiting for Calculation
Configure the scenario parameters, then click the "Calculate Memory Requirements" button