Timing fail when creating a big RAM

Questions and discussions about the Xillybus IP core and drivers

Timing fail when creating a big RAM

Postby Guest »

Hi Eli,

So far, everything with Xillybus has been working superbly. However, when I increase the depth of the `demoarray` inside xillydemo to 16K, timing fails during bitstream generation: the `TPWS` of `usr_clk1` becomes negative. I am using the customized core with a 32-bit seekable address RAM, so the RAM depth can in principle go up to 2^32, far beyond 16K.

Is this related to the fanout of `bus_clk`?

I am not sure what this means or how to address it. Could you please give me a clue?

Best,
Chongxi

Re: Timing fail when creating a big RAM

Postby support »

Hello,

It seems like you're failing to meet timing on your logic design. This has nothing to do with bus_clk's fanout, as the FPGA has dedicated wires for clocks that are designed for a huge fanout.

It's more likely that something in your own logic design isn't written so it can meet the clock frequency (I presume it's 250 MHz). So odds are that the problem is not directly related to Xillybus, but just a common timing closure problem. Common, but difficult: Writing fast logic (or working around the requirement for writing such) is one of the tricky things about FPGA design.
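For example, one common first-aid technique is to add pipeline registers across a long combinational path, trading a cycle of latency for a shorter critical path. A minimal sketch of the idea, with all names invented for illustration (this is not code from xillydemo):

```verilog
// Hypothetical sketch: breaking a long combinational adder chain
// into two shorter stages with a pipeline register in between.
module pipelined_sum (
  input clk,
  input [31:0] a, b, c, d,
  output reg [31:0] sum
);
  reg [31:0] ab, cd; // intermediate registers cut the path in half

  always @(posedge clk) begin
    ab  <= a + b;   // stage 1
    cd  <= c + d;   // stage 1
    sum <= ab + cd; // stage 2: result appears one clock cycle later
  end
endmodule
```

Whether something like this applies depends on what the timing report says your critical path actually is.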

Regards,
Eli

Re: Timing fail when creating a big RAM

Postby Guest »

Yes, I use `bus_clk` from xillydemo to drive every submodule; that clock is 250 MHz. Merely increasing the seekable RAM size leads to the timing failure.

If I reduce the RAM size from 16K x 4 bytes to 512 x 4 bytes, the problem is gone. In the original xillydemo, the `demoarray` is 32 x 1 byte.

Best,
Chongxi

Re: Timing fail when creating a big RAM

Postby support »

Hello,

The question is what happens with those signals. It's quite typical that increasing the amount of logic turns a design that meets timing to one that fails timing.

Regards,
Eli

Re: Timing fail when creating a big RAM

Postby Guest »

Hi Eli,

After two days of trying everything, I came across a simple solution:

(* ram_style = "block" *)
reg signed [31:0] mem_thr [0:1023]; // reg [wordsize-1:0] name [0:arraysize-1]: 1024 words of 32 bits

// Store the thresholds; the attribute above forces block RAM instead of distributed RAM
always @(posedge clk) begin
    if (mem_thr_32_wren)
        mem_thr[mem_thr_32_addr] <= mem_thr_32_din;
    if (mem_thr_32_rden)
        mem_thr_32_dout <= mem_thr[mem_thr_32_addr];
end

`(* ram_style = "block" *)` is the cure. I don't quite understand why, but timing now passes, and on the `device` page the area in use has decreased dramatically, because there is no longer a massive amount of distributed RAM.

Is this a general rule: if you need a large array, use block RAM instead of distributed RAM?

Best,
Chongxi

Re: Timing fail when creating a big RAM

Postby support »

Hi,

In retrospect, it makes sense: a large array should be implemented as block RAM and not as distributed RAM; otherwise you get a lot of logic, which in turn tangles the wiring and ends up with poor timing. Not good enough for 250 MHz, that is.

The question is why the synthesizer chose distributed RAM there in the first place. Block RAM should have been the default for an array of that size.
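One detail worth checking in this context: block RAM on Xilinx FPGAs supports only a synchronous (clocked) read. If a design reads the array combinationally, the synthesizer has no choice but to use distributed RAM, regardless of size. A hedged sketch contrasting the two styles, with all names invented for illustration:

```verilog
// Hypothetical illustration, not taken from xillydemo.
module read_styles (
  input clk,
  input [9:0] addr,
  input [31:0] din,
  input wren,
  output [31:0] dout_async,
  output reg [31:0] dout_sync
);
  reg [31:0] mem [0:1023];

  always @(posedge clk)
    if (wren)
      mem[addr] <= din;

  // Asynchronous read: forces distributed RAM (LUTRAM),
  // since block RAM has no combinational read path.
  assign dout_async = mem[addr];

  // Synchronous read: allows the synthesizer to infer block RAM.
  always @(posedge clk)
    dout_sync <= mem[addr];
endmodule
```

If something in the design (or an old tool version's heuristics) pushed the array toward the asynchronous pattern, that would explain the distributed RAM choice.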

I suggest getting acquainted with timing closure techniques, and in particular learn to read the timing reports that detail the critical timing paths. These reports often give a hint on what needs to be fixed (but are unfortunately somewhat misleading sometimes).

So all I can say is welcome aboard to the world of FPGA, with all its peculiarities. It's a prestigious field for good reasons. :)

Regards,
Eli
