Compatible Boards

The following development boards are compatible with the FPGA Drive FMC Gen4 and can support at least one SSD. If you know of a board that is not listed here and you would like to know if it is compatible, please contact us.

Note that we don’t currently have example designs for all of these carrier boards. For a list of carrier boards for which we do have example designs, please refer to the list of supported carriers in the reference design documentation.

Series-7 boards

| Carrier | FMC | Ref design | PCIe | SSD 1 | SSD 2 |
|---|---|---|---|---|---|
| AMD Xilinx KC705 Kintex-7 Development board | HPC | | Gen2 | 4-lanes | Not supported |
| AMD Xilinx KC705 Kintex-7 Development board | LPC | | Gen2 | 1-lane [1] | Not supported [1] |
| AMD Xilinx VC707 Virtex-7 Development board | HPC1 | | Gen2 | 4-lanes | 4-lanes |
| AMD Xilinx VC707 Virtex-7 Development board | HPC2 | | Gen2 | 4-lanes | 4-lanes |
| AMD Xilinx VC709 Virtex-7 Development board | HPC | | Gen3 | 4-lanes | 4-lanes |
| AMD Xilinx ZC706 Zynq-7000 Development board | HPC | | Gen2 | 4-lanes | Not supported [2] |
| AMD Xilinx ZC706 Zynq-7000 Development board | LPC | | Gen2 | 1-lane [1] | Not supported [1] |
| Avnet PicoZed FMC Carrier Card V2 Zynq-7000 Development Board | LPC | | Gen2 | 1-lane [1] | Not supported [1] |

UltraScale boards

| Carrier | FMC | Ref design | PCIe | SSD 1 | SSD 2 |
|---|---|---|---|---|---|
| AMD Xilinx KCU105 Kintex UltraScale Development board | HPC | | Gen3 | 4-lanes | 4-lanes |
| AMD Xilinx KCU105 Kintex UltraScale Development board | LPC | | Gen3 | 1-lane [1] | Not supported [1] |
| AMD Xilinx VCU108 Virtex UltraScale Development board | HPC0 | | Gen3 | 4-lanes | 4-lanes |
| AMD Xilinx VCU108 Virtex UltraScale Development board | HPC1 | | Gen3 | 4-lanes | 4-lanes |

Zynq UltraScale+ boards

| Carrier | FMC | Ref design | PCIe | SSD 1 | SSD 2 |
|---|---|---|---|---|---|
| AMD Xilinx ZCU104 Zynq UltraScale+ Development board | LPC | | Gen3 | 1-lane [1] | Not supported [1] |
| AMD Xilinx ZCU102 Zynq UltraScale+ Development board | HPC0 | | Gen3 | 4-lanes [3] | 4-lanes [3] |
| AMD Xilinx ZCU102 Zynq UltraScale+ Development board | HPC1 | | Gen3 | 4-lanes [3] | 4-lanes [3] |
| AMD Xilinx ZCU106 Zynq UltraScale+ Development board | HPC0 | | Gen3 | 4-lanes | 4-lanes |
| AMD Xilinx ZCU106 Zynq UltraScale+ Development board | HPC1 | | Gen3 | 1-lane | Not supported |
| AMD Xilinx ZCU111 Zynq UltraScale+ Development board | FMC+ | | Gen3 | 4-lanes | 4-lanes |
| AMD Xilinx ZCU208 Zynq UltraScale+ Development board | FMC+ | | Gen3 | 4-lanes | 4-lanes |
| Avnet UltraZed EV Carrier Zynq UltraScale+ Development board | HPC | | Gen3 | 4-lanes | 4-lanes |
| Trenz UltraITX+ Baseboard Zynq UltraScale+ Development board | HPC | | Gen3 | 4-lanes [3] | 4-lanes [3] |

Virtex UltraScale+ boards

| Carrier | FMC | Ref design | PCIe | SSD 1 | SSD 2 |
|---|---|---|---|---|---|
| AMD Xilinx VCU118 Virtex UltraScale+ Development board | HPC | | Gen3 | Not supported | Not supported |
| AMD Xilinx VCU118 Virtex UltraScale+ Development board | FMC+ | | Gen3 | 4-lanes | 4-lanes |

Versal boards

| Carrier | FMC | Ref design | PCIe | SSD 1 | SSD 2 |
|---|---|---|---|---|---|
| AMD Xilinx VCK190 Versal AI Core Development board | FMC+1 | | Gen4 | 4-lanes | 4-lanes |
| AMD Xilinx VCK190 Versal AI Core Development board | FMC+2 | | Gen4 | 4-lanes | 4-lanes |
| AMD Xilinx VEK280 Versal AI Edge Development board | FMC+ | Coming soon | Gen4 | 4-lanes | 4-lanes |
| AMD Xilinx VHK158 Versal HBM Series Development board | FMC+ | Coming soon | Gen4 | 4-lanes | Not supported [4] |
| AMD Xilinx VMK180 Versal Prime Series Development board | FMC+1 | | Gen4 | 4-lanes | 4-lanes |
| AMD Xilinx VMK180 Versal Prime Series Development board | FMC+2 | | Gen4 | 4-lanes | 4-lanes |
| AMD Xilinx VPK120 Versal Premium Series Development board | FMC+ | Coming soon | Gen4 | 4-lanes | Not supported [4] |
| AMD Xilinx VPK180 Versal Premium Series Development board | FMC+ | Coming soon | Gen4 | 4-lanes | Not supported [4] |

Compatibility requirements

If you need to determine the compatibility of a development board that is not listed here, or you are designing a carrier board to mate with the FPGA Drive FMC Gen4, you can check your board against the list of requirements below.

VADJ

The carrier board must have the ability to supply a VADJ voltage between 1.2VDC and 3.3VDC.
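In practice this is a simple range-overlap check: the carrier passes if any VADJ setting it can supply falls inside the 1.2-3.3 VDC window. A minimal sketch in Python (the function name and range semantics are illustrative, not part of any official tool):

```python
def vadj_compatible(carrier_min_v: float, carrier_max_v: float) -> bool:
    """Return True if the carrier can supply at least one VADJ setting
    inside the 1.2-3.3 VDC window required by the FPGA Drive FMC Gen4."""
    REQUIRED_MIN, REQUIRED_MAX = 1.2, 3.3
    # Compatible if the two ranges overlap at all.
    return carrier_min_v <= REQUIRED_MAX and carrier_max_v >= REQUIRED_MIN

# Example: a carrier with a fixed 1.8 VDC VADJ rail is compatible.
print(vadj_compatible(1.8, 1.8))  # True
```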

Gigabit transceivers

The FPGA or MPSoC device must have gigabit transceivers, and they must be routed to the FMC connector. To support both SSDs, all of the FMC data pairs DP0-DP7 must be connected to transceivers on the FPGA. In AMD Xilinx devices, the transceivers are typically grouped into quads of four. Ideally, each SSD should be connected to a single quad, and the lane ordering should match the MGT ordering shown in the tables below:

Quad 1

The first quad should be connected to SSD A (SSD1) as follows:

| FPGA pins | PCIe lane | FMC pins | FMC name | Net name |
|---|---|---|---|---|
| MGT_RXP/N0 | 0 | C6/C7 | DP0_M2C_P/N | SSDA2FPGA_0_P/N |
| MGT_TXP/N0 | 0 | C2/C3 | DP0_C2M_P/N | FPGA2SSDA_0_P/N |
| MGT_RXP/N1 | 1 | A2/A3 | DP1_M2C_P/N | SSDA2FPGA_1_P/N |
| MGT_TXP/N1 | 1 | A22/A23 | DP1_C2M_P/N | FPGA2SSDA_1_P/N |
| MGT_RXP/N2 | 2 | A6/A7 | DP2_M2C_P/N | SSDA2FPGA_2_P/N |
| MGT_TXP/N2 | 2 | A26/A27 | DP2_C2M_P/N | FPGA2SSDA_2_P/N |
| MGT_RXP/N3 | 3 | A10/A11 | DP3_M2C_P/N | SSDA2FPGA_3_P/N |
| MGT_TXP/N3 | 3 | A30/A31 | DP3_C2M_P/N | FPGA2SSDA_3_P/N |

The clock reference for this SSD (FMC pins GBTCLK0_M2C_P/N) should be connected to MGTREFCLK0P/N or MGTREFCLK1P/N of this quad.

Quad 2

The second quad should be connected to SSD B (SSD2) as follows:

| FPGA pins | PCIe lane | FMC pins | FMC name | Net name |
|---|---|---|---|---|
| MGT_RXP/N0 | 0 | A14/A15 | DP4_M2C_P/N | SSDB2FPGA_0_P/N |
| MGT_TXP/N0 | 0 | A34/A35 | DP4_C2M_P/N | FPGA2SSDB_0_P/N |
| MGT_RXP/N1 | 1 | A18/A19 | DP5_M2C_P/N | SSDB2FPGA_1_P/N |
| MGT_TXP/N1 | 1 | A38/A39 | DP5_C2M_P/N | FPGA2SSDB_1_P/N |
| MGT_RXP/N2 | 2 | B16/B17 | DP6_M2C_P/N | SSDB2FPGA_2_P/N |
| MGT_TXP/N2 | 2 | B36/B37 | DP6_C2M_P/N | FPGA2SSDB_2_P/N |
| MGT_RXP/N3 | 3 | B12/B13 | DP7_M2C_P/N | SSDB2FPGA_3_P/N |
| MGT_TXP/N3 | 3 | B32/B33 | DP7_C2M_P/N | FPGA2SSDB_3_P/N |

The clock reference for this SSD (FMC pins GBTCLK1_M2C_P/N) should be connected to MGTREFCLK0P/N or MGTREFCLK1P/N of this quad.
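When auditing a candidate carrier's schematic, the lane assignments in the two quad tables above can be encoded as data and checked programmatically. The sketch below is illustrative (the dictionary layout and function names are our own); the pin pairs are taken directly from the tables:

```python
# FMC data-pair assignments per SSD, from the Quad 1 and Quad 2 tables.
# Each entry maps a PCIe lane to its (M2C rx pins, C2M tx pins) on the FMC.
SSD_LANES = {
    "SSD_A": {
        0: ("C6/C7", "C2/C3"),
        1: ("A2/A3", "A22/A23"),
        2: ("A6/A7", "A26/A27"),
        3: ("A10/A11", "A30/A31"),
    },
    "SSD_B": {
        0: ("A14/A15", "A34/A35"),
        1: ("A18/A19", "A38/A39"),
        2: ("B16/B17", "B36/B37"),
        3: ("B12/B13", "B32/B33"),
    },
}

def lanes_supported(connected_fmc_pins):
    """Return, per SSD, the PCIe lanes whose rx and tx pairs are both wired."""
    wired = set(connected_fmc_pins)
    return {
        ssd: [lane for lane, (rx, tx) in lanes.items() if rx in wired and tx in wired]
        for ssd, lanes in SSD_LANES.items()
    }

# Example: an LPC carrier typically only wires DP0, so only SSD A lane 0 works.
print(lanes_supported(["C6/C7", "C2/C3"]))  # {'SSD_A': [0], 'SSD_B': []}
```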

Required I/O

The following I/O pins should be connected to the FPGA as they are required by the mezzanine card:

| FMC pin | FMC name | Net | Description |
|---|---|---|---|
| G6 | LA00_P_CC | PERST_A | PCIe reset for SSD1 (active high) |
| G7 | LA00_N_CC | PEDET_A | PCIe detect for SSD1 |
| H10 | LA04_P | PERST_B | PCIe reset for SSD2 (active high) |
| H11 | LA04_N | PEDET_B | PCIe detect for SSD2 |
| H13 | LA07_P | DISABLE_SSD2_PWR | Disable switching regulator for SSD2 (0 = enable, 1 = disable) |
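These side-band pins can be folded into the same kind of schematic audit. A minimal sketch (the data structure and function name are our own; the pin assignments come from the table above):

```python
# Side-band I/O required by the mezzanine card, from the table above.
# Maps FMC pin -> (FMC name, net name).
REQUIRED_IO = {
    "G6":  ("LA00_P_CC", "PERST_A"),
    "G7":  ("LA00_N_CC", "PEDET_A"),
    "H10": ("LA04_P",    "PERST_B"),
    "H11": ("LA04_N",    "PEDET_B"),
    "H13": ("LA07_P",    "DISABLE_SSD2_PWR"),
}

def missing_io(connected_fmc_pins):
    """Return the required FMC pins that a carrier does not connect."""
    return sorted(set(REQUIRED_IO) - set(connected_fmc_pins))

# Example: a carrier wiring everything except H13 is flagged.
print(missing_io(["G6", "G7", "H10", "H11"]))  # ['H13']
```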

[1] LPC connectors can only support 1-lane PCIe.

[2] Zynq-7000 devices only have one PCIe block.

[3] This board's device does not have integrated PCIe blocks, but it can be used with 3rd-party IP to implement the required PCIe root complex.

[4] The VHK158, VPK120 and VPK180 boards have enough PCIe blocks and GTs to support both M.2 slots; however, one of the PCIe blocks is located on the opposite side of the device from the relevant GTs, making routing a challenge. For this reason, we do not support the use of the second M.2 slot on these boards.