by support »
Hi,
To begin with, there happens to be an out-of-the-box bundle for Kintex with 1x lane. It goes up to 400 MB/s in both directions, if your host supports Gen2 PCIe, or else it's 200 MB/s. If that's good enough for you, just drop a note to Xillybus' support email and you'll get your copy right away.
Moving from x8 to a lower lane count is somewhat tricky, mostly because reducing the lane width probably involves reducing the application clock (known as bus_clk in Xillybus' user signal interface). This, in turn, requires changing the constraints and some other settings, as I'll detail next. Anyhow, the suggested way to get on top of things is to generate the example design for the x8 PCIe core, and then again for the desired lane width. Please change only the lane width and the application clock frequency in the PCIe endpoint block. Nothing else.
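If you prefer doing this from Vivado's Tcl console, generating the two example designs goes something like the sketch below. The IP instance name (pcie_7x_0) and the directories are my assumptions; substitute whatever your project actually contains.

```tcl
# Sketch only -- pcie_7x_0 and the directory names are made up;
# use the actual IP instance name from your project.

# Generate the example design for the IP as currently configured (x8):
open_example_project -force -dir ./example_x8 [get_ips pcie_7x_0]

# Then re-customize the IP (lane width and application clock only,
# nothing else), and generate a second example design to compare with:
# open_example_project -force -dir ./example_x4 [get_ips pcie_7x_0]
```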
So the first thing to do is to see how the timing and placement constraints differ between the two example designs, track down the related lines in Xillybus' constraint file, and make the matching changes. This doesn't require an exact understanding of what the constraints mean; it's rather a matter of sticking to the patterns.
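To give a feel for the kind of pattern involved (everything below is invented for illustration; copy the real constraint names, cells and sites from the two example designs' XDC files):

```tcl
## Purely hypothetical illustration -- do not copy these values.
## An x8 example design typically pins one GT transceiver per lane,
## along the lines of:
# set_property LOC GTXE2_CHANNEL_X0Y7 [get_cells {... pipe_lane[0] ...}]
#   ... and so on through pipe_lane[7] ...
## In the x4 design, only lanes 0-3 are placed, possibly at different
## GT channel sites. The corresponding lines in Xillybus' constraint
## file should follow the x4 design's pattern, with the surplus lane
## constraints dropped.
```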
The nontrivial change is editing the instantiation parameters of pipe_clock in xillybus.v (most likely PCIE_USERCLKn_FREQ and PCIE_LANE), so that it generates internal clocks that suit the new lane width. The easiest way to do this is to synthesize the example project with the desired lane width, and take the values that appear in the synthesis report for the instantiation of the pipe_clock module. Trying to deduce the correct numbers from the sources is a lost battle.
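For orientation, such an edit might look like the sketch below. The parameter names follow Xilinx' 7-Series PCIe example design; the module name prefix depends on the core's version, and the values shown are placeholders only, to be copied from the synthesis report as said above.

```verilog
// Hedged sketch -- the values are illustrative, not correct for any
// particular configuration. Take the real ones from the synthesis
// report of the example design for your target lane width.

pcie_7x_v3_0_pipe_clock #(
   .PCIE_ASYNC_EN      ("FALSE"),
   .PCIE_TXBUF_EN      ("FALSE"),
   .PCIE_LANE          (4),  // was 8 in the x8 design
   .PCIE_LINK_SPEED    (2),
   .PCIE_REFCLK_FREQ   (0),
   .PCIE_USERCLK1_FREQ (4),  // encodes the user clock frequency;
   .PCIE_USERCLK2_FREQ (4)   // copy both values from the report
  ) pipe_clock_i
  ( /* port connections unchanged */ );
```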
And lastly, fix the widths of the signals that carry the connections to the physical PCIe pads, in particular in the top level (Xillydemo?) module.
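Concretely, this boils down to narrowing the pad-facing vectors, along these lines (the signal names are my guess at the convention in the top-level module; verify against your own sources):

```verilog
// Going from x8 to x4: one wire pair per lane, in each direction.
//
// Before (x8):
//   output [7:0] pcie_tx_p, pcie_tx_n,
//   input  [7:0] pcie_rx_p, pcie_rx_n,
//
// After (x4):
     output [3:0] pcie_tx_p, pcie_tx_n,
     input  [3:0] pcie_rx_p, pcie_rx_n,
```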
Either way, custom IP cores from the IP Core Factory can be used. The only thing to note is that you may not get the warnings when requesting too much bandwidth, since the web application assumes 800 MB/s.
Regards,
Eli