
Root to Root PCIe

by Guest
Hello,

Excellent information, thanks.

The scenario I have is a Freescale processor (Root Complex) and a Xilinx FPGA (Endpoint). The Freescale processor just sends out TLPs to read and write memory on the FPGA side. However, I'm trying to simulate the FPGA with a Linux machine, and the problem is that the Linux machine acts as a Root as well. My ideas are to use some sort of non-transparent bridge between the two Roots, or some sort of multi-root switch.

Another problem is that I can't install many drivers on the Freescale processor to accommodate many hardware modules. I would like to just send TLPs directly from the Freescale into addressable space on the Linux machine. Have you ever heard of this type of scenario being attempted? If so, could you please let me know, or if you have any other ideas for me, that would be great. Thank you.
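For illustration, this is roughly the kind of packet I mean: a minimal C sketch of the three header DWORDs of a 32-bit Memory Write TLP. The field packing follows the PCIe base specification, but the function and constant names are just my own.

Code:
#include <stdint.h>

/* Fmt/Type byte of a 3DW memory TLP header (PCIe base spec):
 * Fmt=010b + Type=00000b is a Memory Write, Fmt=000b a Memory Read. */
#define TLP_FMT_TYPE_MWR32  0x40u
#define TLP_FMT_TYPE_MRD32  0x00u

/* Build the three header DWORDs of a 32-bit Memory Write TLP.
 * req_id: requester bus/device/function, tag: transaction tag,
 * len_dw: payload length in DWORDs, addr: DWORD-aligned target address. */
static void build_mwr32_header(uint32_t hdr[3], uint16_t req_id,
                               uint8_t tag, uint16_t len_dw, uint32_t addr)
{
    uint32_t last_be  = (len_dw > 1) ? 0xFu : 0x0u; /* must be 0 for 1-DW TLPs */
    uint32_t first_be = 0xFu;                       /* all payload bytes valid */

    hdr[0] = ((uint32_t)TLP_FMT_TYPE_MWR32 << 24) | (len_dw & 0x3FFu);
    hdr[1] = ((uint32_t)req_id << 16) | ((uint32_t)tag << 8)
             | (last_be << 4) | first_be;
    hdr[2] = addr & ~0x3u;  /* Address[31:2]; low two bits reserved */
}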

Felix

Re: Root to Root PCIe

by support
Hello,

Personally, I don't have any experience with connecting processors to each other via a PCIe link. You may want to look at the NTB (Non-Transparent Bridge) found on some Intel processors, and possibly also at PCIe switches that support a non-transparent mode; Avago (formerly PLX) has a few of these.
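To give a feel for what the non-transparent path looks like from software: once the bridge's translation window has been configured, that window appears as an ordinary BAR on the local side, and reads and writes to it come out as memory TLPs on the peer's side. The sketch below mmap()s such a BAR through Linux' sysfs. Note that the device path and BAR number are placeholders, and the window setup itself is device-specific and assumed to be already done.

Code:
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder path: the real bus address and BAR number depend
     * on the actual NTB / non-transparent switch port in use. */
    const char *bar = "/sys/bus/pci/devices/0000:01:00.0/resource2";

    int fd = open(bar, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    volatile uint32_t *win = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, 0);
    if (win == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    win[0] = 0xdeadbeef;   /* emerges as a Memory Write TLP on the far side */
    uint32_t v = win[1];   /* emerges as a Memory Read TLP */
    printf("read back: 0x%08x\n", v);

    munmap((void *)win, 4096);
    close(fd);
    return 0;
}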

The question is why you want to connect one processor to another. If the goal is to generate traffic, I know that some of Avago's PCIe switches support that, though I don't know how easy it would be in practice.

All in all, my take on this is that the easiest way to create arbitrary packets is from an FPGA. You may have a soft processor running on the FPGA (MicroBlaze / Nios) that generates the packet data, or even a hard processor combo (Zynq / Cyclone V SoC). All of these can run Linux and supply a TCP/IP connection. In short, if I were to pick an arbitrary PCIe packet generator for experiments, I would use an FPGA as the starting point.
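For example, this is roughly what the soft-CPU side could look like if the FPGA fabric does the actual TLP framing behind a small register interface. The register map below is completely made up, not from any real core; the point is only to show how little software the embedded side needs.

Code:
#include <stdint.h>

/* Hypothetical register map for a custom TLP-generator peripheral
 * attached to a soft CPU (MicroBlaze / Nios). All offsets and the
 * base address are placeholders for whatever the real design uses. */
#define GEN_BASE      0x43C00000u  /* placeholder AXI base address */
#define GEN_ADDR      (*(volatile uint32_t *)(GEN_BASE + 0x0))
#define GEN_DATA      (*(volatile uint32_t *)(GEN_BASE + 0x4))
#define GEN_CTRL      (*(volatile uint32_t *)(GEN_BASE + 0x8))
#define GEN_CTRL_MWR  0x1u         /* "send one Memory Write TLP" */

static void send_one_write(uint32_t target_addr, uint32_t value)
{
    GEN_ADDR = target_addr;  /* address field of the outgoing TLP */
    GEN_DATA = value;        /* single-DWORD payload */
    GEN_CTRL = GEN_CTRL_MWR; /* fabric logic builds and transmits the TLP */
}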

Regards,
Eli