Partnered with Mellanox, we can provide the full range of Mellanox-based solutions for your business. We specialise in HPC, Data Center, Artificial Intelligence / Machine Learning, Cloud, Storage, Media and Entertainment, and Telecom.
You can add any Mellanox adapter card to your configuration using the DiGiCOR configurator, or make an enquiry right now to get the latest pricing on any Mellanox products you require for your business.
Mellanox LinkX cables and transceivers make 100Gb/s deployments as easy and as universal as 10Gb/s links. Because Mellanox offers one of the industry's broadest portfolios of 10, 25, 40, 50, 100, 200 and 400Gb/s Direct Attach Copper cables (DACs), Copper Splitter cables, Active Optical Cables (AOCs) and Transceivers, every data center reach from 0.5m to 10km is supported. To maximize system performance, Mellanox tests every product in an end-to-end environment, assuring a Bit Error Rate (BER) of less than 1E-15, which is 1,000x better than many competitors.
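To put that figure in perspective, here is a rough back-of-the-envelope calculation (our own arithmetic, not a Mellanox specification) of how often a bit error would occur on a fully saturated 100Gb/s link at that BER:

```python
# Rough arithmetic: mean time between bit errors on a saturated link.
ber = 1e-15          # bit error rate (errors per bit transmitted)
line_rate = 100e9    # 100 Gb/s, in bits per second

bits_per_error = 1 / ber                       # bits sent per error, on average
seconds_per_error = bits_per_error / line_rate
hours_per_error = seconds_per_error / 3600

print(f"~{hours_per_error:.2f} hours between errors on average")  # ~2.78 hours

# At a BER of 1e-12 (1,000x worse), the same link would see
# an error roughly every 10 seconds instead.
```

In other words, at BER 1E-15 a saturated 100Gb/s link averages one bit error every few hours, while a link that is three orders of magnitude worse sees errors every few seconds.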
Mellanox LinkX Ethernet DACs, AOCs and optical transceivers meet or exceed the IEEE 802.3 industry standards for 1G, 10G, 25G, 50G and 100G products.
Mellanox LinkX cables and transceivers are designed to maximize the performance of High-Performance Computing networks, which require high-bandwidth, low-latency connections between compute nodes and switch nodes. To assure superior system performance, Mellanox tests every product in an end-to-end environment to a Bit Error Rate of less than 1E-15, which is 1,000x better than competitors.
DACs and AOCs enable all links to be exactly the same length, which is important in HPC applications where signal time-of-flight is a key design concern. DACs are available up to 7m. AOCs are available in lowest-cost OM2 fiber for lengths under 30m, and in OM3/OM4 multimode fiber up to 100m, at data rates of QDR (40G), FDR10 (40G), FDR (56G), EDR (100G) and, new in 2017, HDR100 (100G) and HDR (200G).
Intelligent ConnectX-6 adapter cards, the newest additions to the Mellanox Smart Interconnect suite and supporting Co-Design and In-Network Compute, bring new acceleration engines for maximizing High-Performance Computing, Machine Learning, Web 2.0, Cloud, Data Analytics and Telecommunications platforms.
ConnectX-6 with Virtual Protocol Interconnect® supports two ports of 200Gb/s InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and 215 million messages per second, enabling the highest performance and most flexible solution for the most demanding data center applications.
Intelligent ConnectX-5 adapter cards, the newest additions to the Mellanox Smart Interconnect suite and supporting Co-Design and In-Network Compute, introduce new acceleration engines for maximizing High-Performance Computing, Web 2.0, Cloud, Data Analytics and Storage platforms.
ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet connectivity, sub-600 nanosecond latency, and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing the highest-performance and most flexible solution for the most demanding applications and markets.
ConnectX-4 adapters with Virtual Protocol Interconnect (VPI), support EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, and provide the highest performance and most flexible solution for high-performance, Web 2.0, Cloud, data analytics, database, and storage platforms.
ConnectX-4 adapters provide an unmatched combination of 100Gb/s bandwidth in a single port, the lowest available latency, 150 million messages per second and application hardware offloads, addressing both today's and the next generation's compute and storage data center demands.
ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity with hardware offload engines to Overlay Networks ("Tunneling"), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.
Mellanox's industry-leading ConnectX-3 InfiniBand adapters provide the highest-performing and most flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s throughput across the PCI Express 3.0 host bus, enables transaction latency of less than 1μs, and can deliver more than 90M MPI messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. ConnectX-3 maximizes network efficiency, making it ideal for performance-demanding data center applications.
Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest-performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services and high-performance embedded I/O applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
As the world’s most advanced cloud SmartNIC, ConnectX-6 Dx provides up to two ports of 25, 50 or 100Gb/s, or a single port of 200Gb/s Ethernet connectivity, powered by 50Gb/s PAM4 SerDes technology and PCIe 4.0 host connectivity. ConnectX-6 Dx continues along Mellanox’s innovation path in scalable cloud fabrics, delivering unparalleled performance and efficiency at every scale. ConnectX-6 Dx’s innovative hardware offload engines, including IPsec and TLS inline data-in-motion encryption, are ideal for enabling secure network connectivity in modern data-center environments.
ConnectX-6 Dx’s extensive SmartNIC portfolio offers cards in several form factors, feeds and speeds, including low-profile PCIe, OCP 2.0 and OCP 3.0-compliant cards with advanced options for Mellanox Multi-Host® and Mellanox Socket Direct® configurations, enabling customers to choose the best fit for their needs.
ConnectX-6 is the world's first 200Gb/s Ethernet network adapter card, offering world-leading performance, smart offloads and In-Network Computing, leading to the highest return on investment for Cloud, Web 2.0, Big Data, Storage and Machine Learning applications.
ConnectX-6 EN provides two ports of 200Gb/s for Ethernet connectivity and 215 million messages per second, enabling the highest performance and most flexible solution for the most demanding data center applications.
Intelligent ConnectX-5 Ethernet (EN) adapter cards offer new acceleration engines that optimize the performance of Web 2.0, cloud, data analytics, High-Performance Computing and storage platforms. ConnectX-5 supports two ports of 100Gb/s Ethernet connectivity and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing a high-performance and cost-effective solution for a wide range of applications and markets.
ConnectX-4 Lx EN provides an unmatched combination of 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet, Ethernet stateless offload engines, Overlay Networks, and GPUDirect® Technology.
ConnectX-4 Lx offers the most cost-effective Ethernet adapter solution for 10, 25, 40 and 50Gb/s Ethernet speeds, enabling seamless networking, clustering and storage. The adapter reduces application runtime, and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.
Advanced Streaming Telemetry Technology
The virtual switch is responsible for connecting Virtual Machines in a virtual network. At scale, and with increased data center traffic, this type of switching consumes precious CPU cycles.
One of the most commonly used virtual switching software solutions is Open vSwitch (OVS), which is targeted at multi-server virtualization deployments. Whether running in kernel mode or on top of DPDK, virtual switches consume significant CPU resources and limit the packet throughput available to applications.
Mellanox ASAP2 - Accelerated Switch and Packet Processing® solution combines the performance and efficiency of server/storage networking hardware with the flexibility of virtual switching software. ASAP2 offers up to 10 times better performance than non-offloaded OVS solutions, delivering software-defined networks with the highest total infrastructure efficiency, deployment flexibility and operational simplicity.
Starting from ConnectX®-5 NICs, Mellanox supports accelerated virtual switching in server NIC hardware through the ASAP2 feature.
While accelerating the data plane, ASAP2 keeps the SDN control plane intact, staying completely transparent to applications and maintaining flexibility and ease of deployment.
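As an illustrative sketch, enabling OVS hardware offload on an ASAP2-capable ConnectX-5 (or later) NIC typically involves the steps below. The PCI address is a placeholder, and the exact OVS service name varies by distribution:

```shell
# Sketch only: enable OVS hardware offload on an ASAP2-capable NIC.
# Assumes a ConnectX-5 or later at PCI address 0000:03:00.0 (placeholder).

# 1. Put the NIC's embedded switch (e-switch) into switchdev mode
#    so that datapath flows can be offloaded to hardware.
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# 2. Tell Open vSwitch to offload flows to the NIC.
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true

# 3. Restart OVS so the setting takes effect
#    (the service may be named "openvswitch-switch" on Debian/Ubuntu).
systemctl restart openvswitch

# 4. Verify: flows handled by the NIC are reported as offloaded.
ovs-appctl dpctl/dump-flows type=offloaded
```

In practice, SR-IOV virtual functions are usually created before switching to switchdev mode; consult the NIC driver and distribution documentation for the full procedure.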
Copyright © 1997-2019 Digicor. All rights reserved.