Who we are
Partnered with Mellanox, we can provide the full range of Mellanox-based solutions for your business. We specialise in HPC, Data Center, Artificial Intelligence / Machine Learning, Cloud, Storage, and Media and Entertainment.
You can add any Mellanox adapter card to your configuration using the DiGiCOR configurator, or make an enquiry right now to get the latest pricing for any Mellanox products you require for your business.
InfiniBand Switch Systems Brochure
Mellanox LinkX cables and transceivers make 100Gb/s deployments as easy and as universal as 10Gb/s deployments. Because Mellanox offers one of the industry's broadest portfolios of 10, 25, 40, 50, 100, 200 and 400Gb/s Direct Attach Copper cables (DACs), Copper Splitter cables, Active Optical Cables (AOCs) and Transceivers, data center reach from 0.5m to 10km is supported. To maximize system performance, Mellanox tests every product in an end-to-end environment, assuring a Bit Error Rate (BER) of less than 1E-15, better than many competitors achieve.
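For context, a BER of 1E-15 translates into an expected time between single-bit errors with simple arithmetic. The following is a sketch: the 100Gb/s line rate is taken from the text above, and the calculation assumes errors arrive at exactly the tested rate.

```python
# Mean time between bit errors on a link running at a given line rate,
# assuming errors occur at exactly the stated Bit Error Rate (BER):
# one error is expected for every (1 / BER) bits transmitted.
def seconds_between_bit_errors(line_rate_gbps: float, ber: float) -> float:
    bits_per_second = line_rate_gbps * 1e9
    return (1.0 / ber) / bits_per_second

# At 100Gb/s with a BER of 1E-15, a single bit error is expected
# roughly every 10,000 seconds, i.e. just under three hours.
print(seconds_between_bit_errors(100, 1e-15))
```

In other words, even at full 100Gb/s line rate, a link tested to 1E-15 is expected to flip a bit only a handful of times per day.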
Mellanox LinkX Ethernet DACs, AOCs and optical transceivers meet or exceed all of the relevant IEEE 802.3 standards for 1G, 10G, 25G, 50G and 100G products. Learn more about Ethernet technology and download the free LinkX eBook.
Mellanox data center cables and transceivers
Mellanox LinkX cables and transceivers are designed to maximize the performance of High-Performance Computing networks, which require high-bandwidth, low-latency connections between compute nodes and switch nodes. To assure superior system performance, Mellanox tests every product in an end-to-end environment, verifying a Bit Error Rate of less than 1E-15, which is 1,000x better than competitors.
DACs and AOCs enable all links to be the exact same length, an important design consideration in HPC applications where signal time-of-flight matters. DACs are available up to 7m. AOCs are available in lowest-cost OM2 fiber for lengths under 30m, and in OM3/OM4 multimode fiber up to 100m, at data rates of QDR (40G), FDR10 (40G), FDR (56G), EDR (100G) and, new in 2017, HDR100 (100G) and HDR (200G).
Download the Cable Management Guidelines and FAQs Application Note
Intelligent ConnectX-6 adapter cards, the newest additions to the Mellanox Smart Interconnect suite
supporting Co-Design and In-Network Compute, bring new acceleration engines for maximizing High Performance Computing, Machine Learning, Web 2.0, Cloud, Data Analytics and Telecommunications platforms.
ConnectX-6 with Virtual Protocol Interconnect® supports two ports of 200Gb/s InfiniBand and Ethernet
connectivity, sub-600 nanosecond latency, and 215 million messages per second, enabling the highest
performance and most flexible solution for the most demanding data center applications.
Intelligent ConnectX-5 adapter cards, the newest additions to the Mellanox Smart Interconnect suite
supporting Co-Design and In-Network Compute, introduce new acceleration engines for maximizing High
Performance, Web 2.0, Cloud, Data Analytics and Storage platforms.
ConnectX-5 with Virtual Protocol Interconnect® supports two ports of 100Gb/s InfiniBand and Ethernet
connectivity, sub-600 nanosecond latency, and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing the highest performance and most flexible solution for the most demanding data center applications.
ConnectX-4 adapters with Virtual Protocol Interconnect (VPI) support EDR 100Gb/s InfiniBand and
Ethernet connectivity, and provide the highest performance and most flexible solution for
Web 2.0, Cloud, data analytics, database, and storage platforms.
ConnectX-4 adapters provide an unmatched combination of 100Gb/s bandwidth in a single port, the lowest available latency, 150 million messages per second and application hardware offloads, addressing both today's and the next generation's compute and storage data center demands.
ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI), supporting InfiniBand and Ethernet connectivity with hardware offload engines for Overlay Networks ("Tunneling"), provide the highest performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high performance computing.
Mellanox's industry-leading ConnectX-3 InfiniBand adapters provide the highest performing and most flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s throughput across the PCI Express 3.0 host bus, enables the fastest transaction latency, less than 1usec, and can deliver more than 90M MPI messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. ConnectX-3 maximizes network efficiency.
Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
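The PCIe 2.0 versus 3.0 comparison follows from lane-rate arithmetic. Below is a sketch using the standard per-lane transfer rates and encoding overheads from the PCIe specifications; the numbers are generic PCIe figures, not Mellanox-specific data.

```python
# Effective one-direction PCIe bandwidth: per-lane raw transfer rate
# (GT/s) times line-encoding efficiency, times the number of lanes.
def pcie_bandwidth_gbps(gt_per_s: float, encoding_efficiency: float, lanes: int) -> float:
    return gt_per_s * encoding_efficiency * lanes

# PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding (80% efficient);
# PCIe 3.0 runs 8 GT/s per lane with 128b/130b (~98.5% efficient).
gen2_x16 = pcie_bandwidth_gbps(5.0, 8 / 10, 16)    # 64 Gb/s
gen3_x16 = pcie_bandwidth_gbps(8.0, 128 / 130, 16) # ~126 Gb/s
print(gen2_x16, gen3_x16)
```

This is why a PCIe 3.0 x16 slot (roughly 126 Gb/s effective) can feed two FDR ports at more than 100Gb/s aggregate, while a PCIe 2.0 x16 slot tops out at about 64 Gb/s.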
Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services and high-performance embedded applications will achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.
Mellanox Ethernet Card Brochure
As the world’s most advanced cloud SmartNIC, ConnectX-6 Dx provides up to two ports of 25, 50 or 100Gb/s, or a single port of 200Gb/s, Ethernet connectivity, powered by 50Gb/s PAM4 SerDes technology and PCIe 4.0 host connectivity. ConnectX-6 Dx continues along Mellanox’s innovation path in scalable cloud fabrics, providing unparalleled performance and efficiency at every scale. ConnectX-6 Dx’s innovative hardware offload engines, including IPsec and TLS inline data-in-motion encryption, are ideal for enabling secure network connectivity in modern data-center environments.
ConnectX-6 Dx’s extensive SmartNIC portfolio offers cards in several form factors, feeds and speeds, including low-profile PCIe, OCP 2.0 and OCP 3.0-compliant cards with advanced options for Mellanox Multi-Host® and Mellanox Socket Direct® configurations, enabling customers to choose the best fit for their needs.
ConnectX-6 is the world's first 200Gb/s Ethernet network adapter card, offering world-leading performance, smart offloads and In-Network Computing, leading to the highest return on investment for Cloud, Web 2.0, Big Data, Storage and Machine Learning applications.
ConnectX-6 EN provides two ports of 200Gb/s Ethernet connectivity and 215 million messages per second, enabling the highest performance and most flexible solution for the most demanding data center applications.
Intelligent ConnectX-5 Ethernet (EN) adapter cards offer new acceleration engines that optimize Web 2.0, cloud, data analytics, high performance and storage platforms.
ConnectX-5 supports two ports of 100Gb/s Ethernet connectivity and a very high message rate, plus PCIe switch and NVMe over Fabrics offloads, providing a high-performance and cost-effective solution for a wide range of applications and markets.
ConnectX-4 Lx EN provides an unmatched combination of 10, 25, 40, and 50GbE bandwidth, sub-microsecond latency, and a 75 million packets per second message rate. It includes native hardware support for RDMA over Converged Ethernet (RoCE), Ethernet stateless offload engines, Overlay Networks, and GPUDirect® Technology.
ConnectX-4 Lx offers the most cost-effective Ethernet adapter solution for 10, 25, 40 and 50Gb/s speeds, enabling seamless networking, clustering, or storage. The adapter reduces application runtime, and offers the flexibility and scalability to make infrastructure run as efficiently and productively as possible.
Advanced Streaming Telemetry Technology
The virtual switch is responsible for connecting Virtual Machines in a virtual network. At scale, and with
increased data center traffic, this type of switching consumes precious CPU cycles.
One of the most commonly used virtual switching software solutions is Open vSwitch (OVS), which is targeted at multi-server virtualization deployments. Whether running in kernel mode or on top of DPDK, virtual switching in software still consumes host CPU cycles.
The Mellanox ASAP2 - Accelerated Switch and Packet Processing® solution combines the performance and efficiency of server/storage networking hardware with the flexibility of virtual switching software. ASAP2 offers significantly better performance than non-offloaded OVS solutions, delivering software-defined networks with the highest total infrastructure efficiency, deployment flexibility and operational simplicity.
Starting from ConnectX®-5 NICs, Mellanox supports accelerated virtual switching in server NIC hardware via the ASAP2 feature.
While accelerating the data plane, ASAP2 keeps the SDN control plane intact, thus staying completely transparent to applications and maintaining flexibility and ease of deployment.
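As a rough sketch of what enabling OVS hardware offload looks like on a Linux host: the PCI address below is a placeholder, and the exact service name and steps vary by distribution and driver version.

```shell
# Put the ConnectX NIC's embedded switch into switchdev mode
# (0000:03:00.0 is a placeholder PCI address for the adapter).
devlink dev eswitch set pci/0000:03:00.0 mode switchdev

# Tell Open vSwitch to offload datapath flows to the NIC hardware,
# then restart OVS so the setting takes effect.
ovs-vsctl set Open_vSwitch . other_config:hw-offload=true
systemctl restart openvswitch-switch
```

Once offload is active, OVS programs matched flows into the NIC's embedded switch, so steady-state traffic between VMs bypasses the host CPU entirely.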