by support »
Hello,
Xillybus works on top of Xilinx's PCIe block, so the question is not whether Xillybus' IP core supports lane downsizing, but whether the PCIe block does.
The PCIe spec requires the two link partners to negotiate the largest lane width both of them support. As far as I've seen so far, all of Xilinx's PCIe blocks have worked properly with the number of lanes available, even when the number of connected lanes was lower than the number the block supports. Which isn't a surprise, as they wouldn't be PCIe compliant otherwise.
So the short answer is: yes, I would expect that simply inserting a 16x -> 1x PCIe riser will just work at 1x, with no need to change anything in the FPGA design.
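On a Linux host, one way to confirm what width the link actually trained at is `sudo lspci -vv` on the device; the negotiated width is reported in the LnkSta line. A minimal sketch of extracting it, using a hard-coded sample LnkSta line (hypothetical output for illustration) instead of a live device:

```shell
# Sample LnkSta line as lspci -vv would print it (hypothetical values):
lnksta='LnkSta: Speed 2.5GT/s, Width x1, TrErr- Train- SlotClk+ DLActive+'

# Pull out the negotiated lane width:
width=$(printf '%s\n' "$lnksta" | grep -o 'Width x[0-9]*')
echo "$width"   # prints "Width x1" if the link trained at one lane
```

On a real system you would pipe `sudo lspci -vv -s <bus:dev.fn>` into the same grep; compare LnkSta (negotiated) against LnkCap (advertised capability) to see whether the link came up narrower than the endpoint supports.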
However, for a permanent solution this is a waste of logic resources and power. Should you want to reduce the lane count in the design, please refer to section 4.5 of the Getting Started guide for Xilinx:
http://xillybus.com/downloads/doc/xillybus_getting_started_xilinx.pdf
Regards,
Eli