Multiple DMA buffers


Re: Multiple DMA buffers

Post by support »

Thanks. :)

Xillybus started off somewhere in 2010.

Eli

Re: Multiple DMA buffers

Post by Guest »

:oops:

Now I get it. So the device files are created according to the settings read from the device. It's really cool. How many years have you been working on this stuff? I am really glad I found you :mrgreen:

Thanks a lot

Re: Multiple DMA buffers

Post by support »

Ehm, well, that was 50% correct... :P

The web interface does indeed generate an IP core, but the DMA buffers are on the host. There's very little room for memory on an FPGA.

The device is queried for its configuration when the driver loads, which, among other things, includes the DMA buffer settings. The configuration needs to be agreed upon by the driver and the device, so the design choice was to put all settings in the device and let the driver learn them during the initial setup.
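Conceptually, it's as if the driver did something like this at load time (a purely hypothetical sketch; the register offsets, field names and values below are invented for illustration, not Xillybus' actual layout):

    /* Hypothetical sketch: at load time, read the stream configuration
     * out of the device instead of hardcoding it in the driver. */
    #include <stdint.h>
    #include <stdio.h>

    struct stream_cfg {
        uint32_t num_buffers;   /* how many DMA buffers to allocate */
        uint32_t buffer_size;   /* size of each buffer, in bytes    */
    };

    /* Stand-in for a memory-mapped register read from the device. */
    static uint32_t read_device_reg(int offset)
    {
        static const uint32_t fake_regs[] = { 16, 131072 };  /* 16 x 128 KiB */
        return fake_regs[offset];
    }

    int main(void)
    {
        struct stream_cfg cfg;

        /* "Probe": learn the configuration from the device itself. */
        cfg.num_buffers = read_device_reg(0);
        cfg.buffer_size = read_device_reg(1);

        printf("allocating %u DMA buffers of %u bytes each\n",
               (unsigned) cfg.num_buffers, (unsigned) cfg.buffer_size);
        return 0;
    }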

I hope it's clear now. :)

Eli

Re: Multiple DMA buffers

Post by Guest »

Oh, I think I had a huge misunderstanding. :o

I thought the web API was creating a device driver for me. It's actually an IP core generator, and we are talking about DMA buffers in the device! Now everything makes much more sense.

Thanks Eli

Re: Multiple DMA buffers

Post by support »

Hello,

Well, the 10 ms figure defines "buffering" -- it's not latency, but how much data, at the given expected data rate, the DMA buffers are supposed to absorb.
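For instance (using the figures from your question as an example), at an expected data rate of 200 MB/s, 10 ms of buffering corresponds to roughly 200 MB/s × 0.01 s = 2 MB of DMA buffer space in total.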

The latency imposed by Xillybus itself is negligible compared with the delays caused by the operating system. For example, if a read() system call is made and there is no data for delivery, because no DMA buffer has been completely filled yet, the driver fetches the data from a partially filled DMA buffer, after telling the FPGA to abandon that buffer and start with the next one. This little handshake involves a brief sleep in the driver (until the FPGA has confirmed that the buffer is OK for use by the host), but it's measured in microseconds.
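The practical consequence on the userspace side is that read() may legitimately return fewer bytes than requested when the driver hands over a partially filled buffer, so a robust reader simply loops. A small sketch (the device file name is taken from Xillybus' demo bundle; substitute your own stream):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        int fd = open("/dev/xillybus_read_32", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        size_t got = 0;
        while (got < sizeof(buf)) {
            ssize_t rc = read(fd, buf + got, sizeof(buf) - got);
            if (rc < 0) {
                perror("read");
                return 1;
            }
            if (rc == 0)          /* EOF: the stream was closed */
                break;
            got += rc;            /* partial reads are normal, keep going */
        }

        printf("received %zu bytes\n", got);
        close(fd);
        return 0;
    }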

So there is no tradeoff here. Just pick the figures that match your needs.

Regards,
Eli

Re: Multiple DMA buffers

Post by Guest »

Thanks Eli,

I think I got my answer to this question. However, now I have a more general question. In the web API I can choose a desired latency (like 10 ms) or a desired bandwidth (like 200 MB/s). I know this question is very broad, but can you briefly mention how these trade-offs are handled? For example, can I have minimum latency and maximum bandwidth at the same time? (I don't think so...) What do you change in the device driver code to address these requirements and balance the trade-off? Or how do you even fix the latency in the device driver? Isn't that supposed to be architecture dependent? :roll:

Thanks

Re: Multiple DMA buffers

Post by support »

Hi,

It's quite common to have more than one data buffer when sending data from one entity to another. The most popular solution is double-buffering, where one side writes to one buffer while the other reads from the second one.
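For illustration, here's a minimal, self-contained C sketch of the double-buffering idea (not Xillybus' actual code; a real system runs the two sides concurrently, while this toy version alternates them in one thread):

    #include <stdio.h>
    #include <string.h>

    #define BUF_SIZE 8

    static char buffers[2][BUF_SIZE];

    static void produce(char *buf, int round)   /* stands in for the DMA writer */
    {
        memset(buf, 'A' + (round % 26), BUF_SIZE);
    }

    static void consume(const char *buf)        /* stands in for the reader */
    {
        printf("drained: %.*s\n", BUF_SIZE, buf);
    }

    int main(void)
    {
        int fill = 0;                           /* index of the buffer being filled */

        produce(buffers[fill], 0);              /* prime the first buffer */

        for (int round = 1; round < 5; round++) {
            fill = 1 - fill;                    /* swap: fill the other buffer... */
            produce(buffers[fill], round);
            consume(buffers[1 - fill]);         /* ...while draining the full one */
        }
        consume(buffers[fill]);                 /* drain the last buffer */
        return 0;
    }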

One could use a single buffer and do the writing in a cyclic manner. This is, however, less convenient with DMA, because some host architectures require explicit cache synchronization, which is done in chunks of typically 16, 32 or 64 bytes. So it's simpler to pass a buffer with a fixed size from side to side.

The protocol used by Xillybus is based upon handing over buffers with a fixed size. These buffers may be partially filled, so extra information is conveyed to indicate how many bytes are valid in each buffer. Without getting too deep into the details, things are arranged so that the buffers are always handed over completely filled under heavy data load, and possibly partially filled when the data rate is low.
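A rough sketch of what such a handover record could look like in C (the struct and field names are invented for illustration; this is not Xillybus' actual protocol):

    #include <stdint.h>
    #include <stdio.h>

    #define BUF_SIZE 4096   /* fixed size, a multiple of the cache line */

    struct dma_buf {
        uint8_t  data[BUF_SIZE];   /* fixed-size payload                     */
        uint32_t valid_bytes;      /* how much of data[] the producer filled */
    };

    static void hand_over(const struct dma_buf *b)
    {
        /* Under heavy load valid_bytes == BUF_SIZE; at low data rates
         * the buffer may be handed over only partially filled. */
        printf("consuming %u of %d bytes\n",
               (unsigned) b->valid_bytes, BUF_SIZE);
    }

    int main(void)
    {
        struct dma_buf full    = { .valid_bytes = BUF_SIZE };  /* heavy load    */
        struct dma_buf partial = { .valid_bytes = 100 };       /* low data rate */

        hand_over(&full);
        hand_over(&partial);
        return 0;
    }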

All in all, the purpose of multiple buffering wasn't to improve performance, but to make the implementation simple and robust.

I hope this sheds some light.

Regards,
Eli

Multiple DMA buffers

Post by Guest »

Hello,

I was wondering why Xillybus uses multiple DMA buffers in the host's RAM. Why would we need more than one buffer for a single stream, and how are they managed? I don't understand how throughput can be boosted by multiple buffers. I'd appreciate it if someone could give me some insight.

Thanks
