Guidance for using the ACP port on the Zynq SoC


Guidance for using the ACP port on the Zynq SoC

Postby Guest »

I have some questions about setup of your IP core with the ACP port on the Zynq.

1. Is it always recommended to use the ACP port for Xillybus if possible? My understanding is that, depending on the application and the transfer sizes of the data chunks, the HP ports may be a better fit in some cases. My application is streaming data in and out of the processor on asynchronous streams. Total bandwidth over all streams is roughly 100 MB/s.

2. Is it required/recommended to check the box in the Vivado ps7 IP configuration that ties the AxUSER pins high for cache-coherent transfers? Does the IP core handle setting the other signals needed, or is any custom glue logic required? (dma-coherent is set in the device tree.) There has been some discussion on the Xilinx forums about what this checkbox really does, and we are scratching our heads over a few memory issues with Xillybus that we don't see on x86 platforms. Just wanted to check.

https://forums.xilinx.com/t5/Embedded-L ... d-p/595844

3. We have 1 GB of DDR memory for the PS on our hardware. The address table in Vivado defaults to showing 4 regions and 512 MB for low DDR. Since we have the ACP port hooked directly to the core, without any AXI IP in between, I believe the settings in the address table for the ACP don't really matter, since there is no crossbar doing address translation. Is this correct? If not, do you have guidance on what to use in the address table?

4. Is the interrupt input to the PS7 rising-edge triggered or level sensitive? Vivado is coming up with level sensitive, but I see in your device tree example you have it marked as rising edge. Can you clarify? Again, I "think" that Vivado can come up with whatever it wants here, and all that really matters for Linux is that the device tree is correct?

5. I guess this is not an ACP port question, but while I am at it, I will ask about the latest patch I saw you committed to the kernel in February. It looks like there is no reason to run out and patch our kernel, but is there any expected impact of this fix on the Zynq/ARM architecture? We are running a 4.1 kernel from the Yocto Project at the moment.

Thanks for any information you can provide.

Kevin
Guest
 

Re: Guidance for using the ACP port on the Zynq SoC

Postby support »

Hello,

Let's take 'em one by one.
Guest wrote:1. Is it always recommended to use the ACP port for Xillybus if possible? My understanding is that, depending on the application and the transfer sizes of the data chunks, the HP ports may be a better fit in some cases. My application is streaming data in and out of the processor on asynchronous streams. Total bandwidth over all streams is roughly 100 MB/s.


The motivation for using the ACP port is not needing to explicitly flush or synchronize the cache with the RAM. Doing this on ARM processors is extremely slow: in fact, without the ACP, a transfer of 200 MB/s consumes 100% CPU for that process. Using an HP port is reasonable if you need the ACP port for something else and intend to use Xillybus for really slow transactions. The "dma-coherent" attribute needs to be removed from the device tree accordingly, of course.
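
To put a concrete face on "that process": on the host side a Xillybus stream is just a device file, so your streaming application is essentially a loop of plain read() or write() calls, and in the non-coherent case the cache maintenance cost hides behind those calls. A minimal sketch of a blocking reader (the device file name below is the one from the demo bundle; substitute the stream names your IP core was actually generated with):

Code:

/* Minimal sketch of a blocking reader on a Xillybus stream.
   "xillybus_read_32" is the demo bundle's device file name; adjust it
   to the streams your own IP core exposes. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
  char buf[65536];
  int fd = open("/dev/xillybus_read_32", O_RDONLY);

  if (fd < 0) {
    perror("Failed to open device file");
    exit(1);
  }

  while (1) {
    ssize_t rc = read(fd, buf, sizeof(buf));

    if (rc < 0) {
      perror("read() failed");
      exit(1);
    }
    if (rc == 0)
      break; /* EOF: the stream was closed on the FPGA side */

    /* rc bytes arrived from the FPGA; process them here. A short read
       (less than sizeof(buf)) is perfectly normal for a stream. */
    fwrite(buf, 1, rc, stdout);
  }

  close(fd);
  return 0;
}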

Guest wrote:2. Is it required/recommended to check the box in the Vivado ps7 IP configuration that ties the AxUSER pins high for cache-coherent transfers? Does the IP core handle setting the other signals needed, or is any custom glue logic required? (dma-coherent is set in the device tree.) There has been some discussion on the Xilinx forums about what this checkbox really does, and we are scratching our heads over a few memory issues with Xillybus that we don't see on x86 platforms. Just wanted to check.

The recommendation is, of course, not to touch anything from the out-of-the-box setting. :)

So yes, the "Tie off AxUSER signals to always enable coherency" checkbox should be checked (on Vivado versions that have that checkbox). I have a faint memory of it working without that as well, but I'm not sure. What I am sure about is that the relevant pins are set correctly with this checkbox set.
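
For context on why this coherency business matters at all: when the port isn't coherent, a driver has to bracket every transfer with the kernel's DMA sync calls, and that's where the ARM CPU time goes. The snippet below is just a generic illustration of the Linux DMA API, not Xillybus' actual driver code; with the ACP coherent and "dma-coherent" in the device tree, these calls effectively become no-ops.

Code:

/* Generic illustration of the Linux DMA API, not Xillybus' driver code.
   On a non-coherent master (e.g. an HP port without the ACP), each
   transfer needs explicit cache maintenance around it. */
#include <linux/dma-mapping.h>

static void receive_from_fpga(struct device *dev, void *buf,
                              dma_addr_t dma_handle, size_t len)
{
  /* Hand the buffer to the device: dirty cache lines covering it must
     not be written back over the incoming DMA data later. */
  dma_sync_single_for_device(dev, dma_handle, len, DMA_FROM_DEVICE);

  /* ... the FPGA performs the DMA write here ... */

  /* Hand the buffer back to the CPU: invalidate the cache so the CPU
     sees the freshly written data, not stale lines. */
  dma_sync_single_for_cpu(dev, dma_handle, len, DMA_FROM_DEVICE);

  /* Now it's safe for the CPU to read buf[]. */
}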

Guest wrote:3. We have 1 GB of DDR memory for the PS on our hardware. The address table in Vivado defaults to showing 4 regions and 512 MB for low DDR. Since we have the ACP port hooked directly to the core, without any AXI IP in between, I believe the settings in the address table for the ACP don't really matter, since there is no crossbar doing address translation. Is this correct? If not, do you have guidance on what to use in the address table?


The PS definitions in the Vivado project aren't set very carefully, as it's the XPS project's definitions that were used to create the FSBL. I can't see any reason why the address range given in Vivado would matter: as far as I can see, there is no address translation done by any AXI element in the PL region, and as you correctly pointed out, the ACP port is connected directly to Xillybus' IP core.

As a matter of fact, I don't think the address range matters at all, as I can't recall any way it would affect the FSBL. So setting the correct address range in the device tree should be enough. Maybe it's used by PetaLinux's automatic setup.

Guest wrote:4. Is the interrupt input to the PS7 rising-edge triggered or level sensitive? Vivado is coming up with level sensitive, but I see in your device tree example you have it marked as rising edge. Can you clarify? Again, I "think" that Vivado can come up with whatever it wants here, and all that really matters for Linux is that the device tree is correct?

Linux's interrupt subsystem (the GIC driver) sets the interrupt type by writing to the relevant register, according to the device tree. So yes, it's what's written in the device tree that counts. Rising edge, in this case.

Guest wrote:5. I guess this is not an ACP port question, but while I am at it, I will ask about the latest patch I saw you committed to the kernel in February. It looks like there is no reason to run out and patch our kernel, but is there any expected impact of this fix on the Zynq/ARM architecture? We are running a 4.1 kernel from the Yocto Project at the moment.

The patch in question (https://lkml.org/lkml/2016/2/24/145) might have a very slight positive impact. If you're happy with how things are, there's no hurry to patch your own system, but it's generally advisable to do so.

Regards,
Eli
support
 
