HP MicroServer N40L Wiki

Integrated NIC

lspci -k output:

03:00.0 Ethernet controller: Broadcom Limited NetXtreme BCM5723 Gigabit Ethernet PCIe (rev 10)
	Subsystem: Hewlett-Packard Company NC107i Integrated PCI Express Gigabit Server Adapter
	Kernel driver in use: tg3
	Kernel modules: tg3

Features:

  • No support for jumbo frames; the maximum MTU is 1500 (see the check below)
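
A quick way to confirm the limit from a shell (enp3s0 is an example interface name, check yours with ip -br link; the exact error text and the maxmtu field depend on your kernel and iproute2 versions):

# attempting to raise the MTU past the device maximum fails
$ ip link set dev enp3s0 mtu 9000
Error: mtu greater than device maximum.
# detailed link output reports the hardware maximum
$ ip -d link show enp3s0 | grep -o 'maxmtu [0-9]*'
maxmtu 1500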

1G adapters

A cheap option for users wanting to boost the Microserver's standard 1 Gbps networking is the Intel EXPI9301CTBLK (Gigabit CT Desktop Adapter).
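
If the aim is more aggregate bandwidth rather than a faster single link, the integrated NIC and the add-in card can be bonded. A minimal iproute2 sketch, assuming example interface names enp3s0 (integrated) and enp2s0 (Intel) and an LACP-capable switch for 802.3ad mode:

# interface names are examples; adjust to your system
$ ip link add bond0 type bond mode 802.3ad
# interfaces must be down before being enslaved to the bond
$ ip link set enp3s0 down && ip link set enp3s0 master bond0
$ ip link set enp2s0 down && ip link set enp2s0 master bond0
$ ip link set bond0 up

Note that a bond balances flows across links, so a single TCP stream still tops out at 1 Gbps; the gain shows up with multiple clients or streams.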

10G adapters

Intel X520-T2

For 10 GbE, the Intel X520-T2 server adapter works. Initial iperf tests show throughput of 3.5-4 Gbps, though this could probably be improved with some tuning, since the CPU is not pegged on either the server or the client end (a sample test is shown below). I can't comment on whether the adapter's low-profile bracket fits, since I only had a full-size bracket, which I removed to get the card into the N40L.
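
For reference, a minimal iperf3 run of the kind described above, with 192.168.1.10 standing in for the N40L's address; -P 4 opens four parallel streams, which often lifts aggregate throughput when a single stream can't fill a 10G link:

# on the N40L (server side)
$ iperf3 -s
# on the client (192.168.1.10 is a placeholder address)
$ iperf3 -c 192.168.1.10 -P 4 -t 30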

Mellanox ConnectX EN

Part number: MNPA19-XTR.

Works fine with the following (negotiated speed can be verified as shown after this list):

  • 1G SFP modules
  • 10G SFP+ modules
  • 3 m Mellanox MCP2100-X003B copper DAC cable, revision A1
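
A quick ethtool check confirms the link came up at 10G (enp2s0 matches the lspci and dmesg output below; adjust to your interface name):

$ ethtool enp2s0 | grep Speed
	Speed: 10000Mb/s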

lspci -k output:

02:00.0 Ethernet controller: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s] (rev b0)
	Subsystem: Mellanox Technologies MT26448 [ConnectX EN 10GigE, PCIe 2.0 5GT/s]
	Kernel driver in use: mlx4_core
	Kernel modules: mlx4_core

dmesg output:

[    2.372079] mlx4_core: Mellanox ConnectX core driver v4.0-0
[    2.372099] mlx4_core: Initializing 0000:02:00.0
[    4.765927] mlx4_core 0000:02:00.0: PCIe link speed is 5.0GT/s, device supports 5.0GT/s
[    4.765929] mlx4_core 0000:02:00.0: PCIe link width is x8, device supports x8
[    4.790887] mlx4_en: Mellanox ConnectX HCA Ethernet driver v4.0-0
[    4.791043] mlx4_en 0000:02:00.0: Activating port:1
[    4.791168] mlx4_en: 0000:02:00.0: Port 1: enabling only PFC DCB ops
[    4.793064] mlx4_en: 0000:02:00.0: Port 1: Using 6 TX rings
[    4.793065] mlx4_en: 0000:02:00.0: Port 1: Using 4 RX rings
[    4.793190] mlx4_en: 0000:02:00.0: Port 1: Initializing port
[    4.794634] <mlx4_ib> mlx4_ib_add: mlx4_ib: Mellanox ConnectX InfiniBand driver v4.0-0
[    4.794756] mlx4_core 0000:02:00.0 enp2s0: renamed from eth0
[    4.794834] <mlx4_ib> mlx4_ib_add: counter index 1 for port 1 allocated 1
[    6.570537] mlx4_en: enp2s0: Steering Mode 1
[  861.434410] mlx4_core 0000:02:00.0: MLX4_CMD_MAD_IFC Get Module info attr(ff60) port(1) i2c_addr(50) offset(0) size(2): Response Mad Status(21c) - operation not supported for this port (the port is of type CX4 or internal)

Features:

  • The card has no I²C DOM/DDM support, so SFP Digital Optical Monitoring is unavailable (see the dmesg error above; reproduced below)
  • Jumbo frames work fine with MTU 9000 (also shown below)
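
Both points are easy to reproduce from a shell (enp2s0 as above; the exact ethtool error wording may vary by version):

# DOM/DDM readout is refused, matching the dmesg error above
$ ethtool -m enp2s0
Cannot get module EEPROM information: Operation not supported
# jumbo frames, by contrast, work
$ ip link set dev enp2s0 mtu 9000
$ ip link show enp2s0 | grep -o 'mtu [0-9]*'
mtu 9000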

External posts

Post about the various Intel-based cards (generally better compatibility)

UK readers: this card appears to be very popular; see the reviews

OCAU Stanza

Many posts about fitting an HP NC360T dual-port Intel-based card in the PCIe x1 slot:

Post 2737
