The Aetina SuperEdge platform is an advanced, scalable, AI-accelerated compute platform certified under the NGC-Ready™ program through a series of security, scalability, functionality, and performance validations. It can be expanded with the NVIDIA Tesla T4 accelerator, delivering up to 35X faster machine learning training and 10X faster deep learning training than CPU-only systems. Backed by comprehensive AI software and tools from the NGC catalog, it gives developers the confidence to pull and run the latest AI software containers for faster time-to-solution. The platform leverages core-to-edge software services, together with Aetina-designed value-added software services, to deliver enterprise-grade software support. This hardware and software integration enables businesses to accelerate AI infrastructure deployment and manage fleets of edge devices quickly and securely, making real-time inference and real-time decision making possible.
The Aetina SuperEdge AIS-D422-A1 is an NGC-Ready™ system, tested and validated by NVIDIA to fully support NVIDIA NGC software for deep learning, machine learning, and AI inference frameworks, enabling enterprises to deploy and run GPU-optimized AI solutions.
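As an illustration of the workflow described above, pulling and running a GPU-optimized framework container from the NGC catalog on an NGC-Ready system typically looks like the following sketch. The container tag shown is an example, not a version validated for this specific product; it assumes Docker and the NVIDIA Container Toolkit are installed on the host.

```shell
# Pull a GPU-optimized TensorFlow container from the NGC catalog
# (tag is illustrative; check the NGC catalog for current releases)
docker pull nvcr.io/nvidia/tensorflow:23.08-tf2-py3

# Run it interactively with access to all host GPUs
# (requires the NVIDIA Container Toolkit on the host)
docker run --gpus all -it --rm nvcr.io/nvidia/tensorflow:23.08-tf2-py3
```

The same pattern applies to other NGC framework containers (PyTorch, Triton Inference Server, and so on); only the repository path and tag change.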
Extreme hardware integration, fully compliant with NVIDIA NGC-Ready validation.
Expandable design supporting leading GPU accelerators for higher AI compute performance.
Thorough management services, enhancing device management and AI deployment at the edge.
High-performance, low-power operation for micro data centers and edge computing across every industry.
The Aetina SuperEdge platform is a Multi-Access Edge Computing (MEC) platform that integrates high-performance Intel Xeon Scalable server processors with multi-GPU accelerators in a hardware configuration optimized for running AI workloads and inference at the edge.
It offers a broad range of hardware options, including higher GPU performance from T4 to A100 GPUs, ECC RDIMM and ECC LRDIMM memory, and Mellanox networking solutions, with the Aetina Intelligent Management (AIM) software initially bundled on an Innodisk NVMe SSD to dramatically simplify AI development at the edge.