Xillybus to DDR3 RAM to GTX Transceivers

Questions and discussions about the Xillybus IP core and drivers

Xillybus to DDR3 RAM to GTX Transceivers

Postby Guest » Tue Nov 01, 2016 3:42 pm

Hello,

I'm trying to implement a project which reads a 3 Gb binary file from a PC and outputs the bit stream over the high-speed GTX pins in one second -- so, a sort of one-second function generator. My approach so far has been to dump the file to the DDR3 RAM on my ML605 board, and then interface from there to the GTX. I tried using XPS, Microblaze and the UART serial interface, but loading the whole 3 Gb file at a kilobaud rate took far too long. I was considering Ethernet, since XPS has built-in Microblaze cores I could use -- but now I'm thinking packet loss would be a problem.

So, I'm now looking at PCIe. I've used Xillybus before for a much simpler application, but I really only have experience using the RAM MIG in the XPS / SDK flow, where high-level functions take care of all the MIG interfacing for you. Is there an example design with Xillybus integrated into a Microblaze system, or one which shows how to write from the PC to onboard RAM?

Thank you.
Guest
 

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby support » Tue Nov 01, 2016 5:58 pm

Hello,

It's much simpler than you think.

3 Gb/s equals 375 MB/s. Xillybus' limit for ML605 is ~400 MB/s, so we have a small margin.

So all you have to do is use xillybus_write_32: connect one side of a dual-clock FIFO to the Xillybus IP core, and the other side to the logic that feeds the GTX, which drains data at its own pace.

Now you're left with writing data to the file descriptor at 375 MB/s. It would be very difficult to read data from a disk that fast, but you can write a simple program on the host which first reads the data into a large array in memory, and only then writes it from that array to /dev/xillybus_write_32. Just make sure the array doesn't get swapped out to disk.

This way, you can actually hold out much longer than a second, depending on how much RAM you have on the host. And there's no need to hassle with DDR memories or Microblaze.
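As a sketch of the "read into a large array first" step on a Linux host (the function name and error handling here are illustrative, not part of any Xillybus API); mlock() addresses the concern about the array being swapped out:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Load an entire file into a malloc'd buffer and pin it with mlock()
   so it can't be swapped out to disk mid-run. Returns NULL on failure;
   *len_out receives the number of bytes loaded. */
static char *load_file_pinned(const char *path, size_t *len_out)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;

    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    rewind(f);

    char *buf = malloc((size_t)len);
    if (!buf || fread(buf, 1, (size_t)len, f) != (size_t)len) {
        free(buf);
        fclose(f);
        return NULL;
    }
    fclose(f);

    /* mlock() may fail without CAP_IPC_LOCK or a raised RLIMIT_MEMLOCK;
       the data is still usable, just not pinned. */
    if (mlock(buf, (size_t)len) != 0)
        perror("mlock (continuing unpinned)");

    *len_out = (size_t)len;
    return buf;
}
```

The buffer is then streamed to the device with plain write() calls on a descriptor opened on /dev/xillybus_write_32.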

Regards,
Eli
support
 
Posts: 623
Joined: Tue Apr 24, 2012 3:46 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby mwayne » Tue Nov 01, 2016 6:18 pm

Hi,

I registered for the forums and am the original poster.

Ok well I'll give this a try. That sounds very doable and helpful, thank you!
Do you think a solid state drive could read that fast?
mwayne
 
Posts: 8
Joined: Tue Nov 01, 2016 3:51 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby support » Tue Nov 01, 2016 6:21 pm

There's no need for a fast drive, and it might be quite difficult to maintain that speed consistently, even with solid-state drives (although they can be fast on reads).

That's why I suggested reading the data into RAM first.

Regards,
Eli
support
 
Posts: 623
Joined: Tue Apr 24, 2012 3:46 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby mwayne » Thu Nov 03, 2016 3:58 pm

Hi,

I have been working on this a bit and have a quick question.

I am familiar with the xillydemo for the ML605, but I have just now started making my own cores at the IP Core Factory on the website. My intended design has two devices, both writing from the PC to the FPGA. Device 1 is intended to read a large file (from RAM) and interact with the GTX transceivers to output the bit stream at 3 Gb/s. For that, I just went with the default Data Acquisition / Playback (10 ms) profile and entered 400 MB/s as the intended bandwidth. I would also like the option to mask each 32-bit word with another (to introduce predictable errors in the bitstream), but that value is set only between runs and doesn't change often. For this I have set up another device in the IP Core Factory, and will just write one 32-bit value to it whenever I need to change the bit mask.

I don't want this second device to take up too many resources, so I am trying to specify its requirements on the website. Its bandwidth is extremely low: I will probably write to it once every couple of minutes. Is choosing 0.000001 MB/s as the bandwidth going to make the tool misbehave? I have also selected the 'short message transport' option, but the others (command and status / general purpose) also seem feasible. Is there an optimal one here?
mwayne
 
Posts: 8
Joined: Tue Nov 01, 2016 3:51 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby support » Thu Nov 03, 2016 6:23 pm

Hello,

The bandwidths stated at the IP Core Factory are used to calculate the amount of resources (logic and DMA buffers) to allocate for each stream, but it's not as if there's a cake of bandwidth divided between the streams when the core is configured. So you may exceed the limit of 400 MB/s in the specification with no problem. What counts is not exceeding that bandwidth in the de-facto use: while data is actually being transported, the overall cake of bandwidth is divided between the streams that have data to transmit, pretty much regardless of their settings in most cases.

So for that narrow stream, just pick 1 MB/s or 10 MB/s; it doesn't matter. Ignore the warning you'll probably get about overcommitting bandwidth.
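For the occasional mask update, the host-side write is trivial. A sketch, assuming the mask stream's device file is named /dev/xillybus_mask_32 on a Linux host (the actual name depends on what was chosen at the IP Core Factory, and the function name is illustrative):

```c
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>

/* Write a single 32-bit word to the (hypothetical) mask device.
   For a 32-bit Xillybus stream, writes should be whole 32-bit words.
   Returns 0 on success, -1 on error. */
static int send_mask(const char *dev, uint32_t mask)
{
    int fd = open(dev, O_WRONLY);
    if (fd < 0)
        return -1;
    ssize_t n = write(fd, &mask, sizeof(mask));
    close(fd);
    return (n == (ssize_t)sizeof(mask)) ? 0 : -1;
}
```

Called as, e.g., send_mask("/dev/xillybus_mask_32", 0xA5A5A5A5) whenever the mask changes.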

Regards,
Eli
support
 
Posts: 623
Joined: Tue Apr 24, 2012 3:46 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby mwayne » Thu Jan 19, 2017 6:21 pm

Hello again. I took a break from this project for a while, but I have begun again and believe I am almost done.

To recap: the goal of the project is to make a 3 Gb/s function generator by reading a binary file, transmitting it over PCIe to my ML605 board, and then sending that bit pattern out over the high-speed GTX transceivers. I have successfully set up the GTX transceivers and made my custom IP core, and when using the example 'streamwrite.c' file in the windowspack, I can see the ASCII values of the keys I press output over the GTX transceiver. This is great! Now I just need the input to be the file.

Currently I want to read in one 3 Gb file and just have it repeat over and over at the output. You had previously suggested dumping that file directly to RAM and then writing it to the FPGA.

When reading the host programming guide, it says:

Precautions should however be taken to avoid a shortage of kernel RAM. Xillybus' IP Core Factory's automatic memory allocation ("autoset internals") algorithm is designed not to consume more than 50% of the relevant memory pool, i.e. 512 MB, based upon the assumption that a modern PC has more than 1 GB of RAM installed. It's probably safe to go as high as 75% as well, which can be done by setting the buffer sizes manually.


So, when creating my custom IP core I chose the 'autoset internals' option, so the DMA buffer size is chosen automatically, right? When I go into Control Panel and right-click / Properties on the Xillybus generic FPGA item, it says my Memory Range is 0x0000 0000 F710 0000 to 0x0000 0000 F710 007F. Does that mean I only have 0x7F of DMA buffer space allocated? I was planning on just writing to that address in my C code, but this doesn't seem right.

If I want to read this large file into the RAM space that Xillybus has allocated for it, do I just declare a large char [] array as normal, read the file into that, and then use the _write function on my //./xillybus_ device, and Xillybus handles everything else internally?

Thank you!
mwayne
 
Posts: 8
Joined: Tue Nov 01, 2016 3:51 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby support » Thu Jan 19, 2017 8:06 pm

Hello,
mwayne wrote:If I want to read this large file into the RAM space that Xillybus has allocated for it, do I just declare a large char [] array as normal, read the file into that, and then use the _write function on my //./xillybus_ device, and Xillybus handles everything else internally?

Yes, exactly. There's no need to get down to the technicalities.

But since you did:
(1) "Autoset internals" is indeed the preferred choice. It protects you against the issue you cited.
(2) The segment you found in the Control Panel (128 bytes long) is the memory-mapped register segment (the PCI BAR range). This is where the driver writes for setting hardware registers. It has nothing to do with the DMA memory, and you would have had no luck writing to it anyhow, since the application has no access to this region (your program would get a kick in the bottom, exactly as with a null pointer).

In short, plain _write(). That's the whole story.
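On the host, that whole story amounts to something like this sketch, shown with POSIX calls (the windowspack's _open()/_write() on a \\.\ device name work similarly; the function name is illustrative):

```c
#include <fcntl.h>
#include <unistd.h>

/* Write an entire buffer to an already-open file descriptor, looping
   because write() is allowed to accept fewer bytes than requested.
   Returns 0 on success, -1 on a write error. */
static int send_buffer(int fd, const char *buf, size_t len)
{
    while (len > 0) {
        ssize_t n = write(fd, buf, len);
        if (n < 0)
            return -1;          /* inspect errno for the reason */
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}
```

A caller fills a large char array from the data file with fread(), opens the Xillybus device write-only, and calls send_buffer() once; the driver handles the DMA internally.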

Regards,
Eli
support
 
Posts: 623
Joined: Tue Apr 24, 2012 3:46 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby mwayne » Mon Jan 23, 2017 3:02 pm

Awesome, thank you. Things seem to be working well.

My (hopefully) last question deals with repeating the file. Is there an optimal way to interface with the core for having the FPGA output the data file cyclically? I.e., write the same 3 Gb to the FPGA repeatedly?

Currently I'm just doing something like

bytesread = _read(file, buf, sizeof(buf));

while (1)
{
    _write(fd, buf, bytesread);  /* fd is the handle opened on //./xillybus_stream */
}

Am I likely to encounter problems at the end of one cycle and the beginning of the next as the large file is written to Xillybus, or will I be OK since the file contents are already stored in an array? Would writing in smaller chunks be better?
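For reference, here is one way I could structure the replay loop with explicit chunking and short-write handling (POSIX calls shown; the chunk size is an arbitrary illustration, and the function name is my own):

```c
#include <fcntl.h>
#include <unistd.h>

#define CHUNK (128 * 1024)  /* arbitrary chunk size for illustration */

/* Replay a memory buffer to fd `cycles` times, writing in chunks and
   handling short writes. Returns 0 on success, -1 on a write error. */
static int replay_buffer(int fd, const char *buf, size_t len, int cycles)
{
    for (int c = 0; c < cycles; c++) {
        size_t done = 0;
        while (done < len) {
            size_t want = len - done;
            if (want > CHUNK)
                want = CHUNK;
            ssize_t n = write(fd, buf + done, want);
            if (n < 0)
                return -1;
            done += (size_t)n;
        }
    }
    return 0;
}
```

For endless replay the for loop would become while (1); from the host's point of view it's all one continuous byte stream, since the data is already sitting in the array.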
mwayne
 
Posts: 8
Joined: Tue Nov 01, 2016 3:51 pm

Re: Xillybus to DDR3 RAM to GTX Transceivers

Postby mwayne » Mon Jan 23, 2017 3:49 pm

Ah, another question. :)

How does _close behave when closing a Xillybus handle? Is it necessary / good programming practice to close a Xillybus device after use?
mwayne
 
Posts: 8
Joined: Tue Nov 01, 2016 3:51 pm
