by support »
Hi,
it's quite common to have more than one data buffer when sending data from one entity to another. The most popular solution is double-buffering, where one side writes to one buffer while the other side reads from the second one.
One could use a single buffer and do the writing in a cyclic manner. However, this is less convenient with DMA, because some host architectures require explicit cache synchronization, which is performed in chunks of typically 16, 32 or 64 bytes. It's therefore simpler to pass a fixed-size buffer from side to side.
The protocol used by Xillybus is based upon handing over fixed-size buffers. These buffers may be partially filled, so extra information is conveyed to indicate how many bytes in each buffer are valid. Without getting too deep into the details, things are arranged so that the buffers are passed over completely filled when there's a heavy data load, and possibly partially filled when the data rate is low.
All in all, the purpose of multiple buffering wasn't to improve performance, but to make the implementation simple and robust.
I hope this sheds some light.
Regards,
Eli