On Jun 9, 10:18 pm, john <jprovide...@yahoo.com> wrote:
> On Jun 9, 5:42 pm, Matthew Hicks <mdhic...@uiuc.edu> wrote:
> > rickman wrote:
> > > > Why do you think there is no delay in the real part?  No matter where
> > > > you measure it, there will be some delay.
> > >
> > > True, but the exact delay is unknown until place and route. I prefer
> > > to leave it as a delta delay for the description and let synthesis
> > > fill in the real value for STA or backanno.
> > >
> > > -- Mike Treseler
> >
> > Why?  What does it hurt to have a short stand-in delay in the functional
> > simulation model.  It has already been pointed out that it makes debugging
> > a little easier, especially for the less trained.
> >
> > ---Matthew Hicks
>
> Why does it hurt?  Supposing you've got a bad clocking scheme that
> would show a problem in unit simulation if the part was modeled like
> all the other behavioral code?  The delay could mask a potential
> problem that could be a nightmare to debug in the FPGA.
>
> John Providenza

I'm not following. What sort of problem would this delay mask? What bad
clocking scheme would be hidden by a signal delay?

Rick

Article: 141176
On Jun 10, 3:27 am, Allan Herriman <allanherri...@hotmail.com> wrote:
> On Tue, 09 Jun 2009 13:49:40 -0500, maxascent wrote:
> >> Why do you think there is no delay in the real part?
> >
> > I never said I thought there was no delay in the real part. But if you
> > look at their datasheet 100ps doesn't relate to anything. If they want
> > to put delays in why don't they put the actual delays from their
> > datasheet? It is then obvious for anyone looking at a simulation what
> > the meaning of the delay is.
> >
> > Jon
>
> They're adding a nominal delay.  But what value should they use?  If it's
> too small (e.g. 10 fs) it will be converted to 0 if it's less than the
> simulator resolution, and this may cause functional errors (as it is no
> longer able to work around the delta delay problem (if my understanding
> of these delays is correct)).  If it's too large (e.g. 1 ns) it will be
> greater than half a clock period if the clock is over 500MHz, and this
> may cause functional errors.
>
> Simulator resolutions are always(?) powers of ten times 1 fs.  100 ps is
> the only sensible delay to use for this model - it works with the largest
> range of simulator resolution settings while not being so large that it
> causes simulation errors.
>
> That it doesn't relate to any actual delay from the datasheet is
> irrelevant.
>
> Regards,
> Allan

I'm unclear about what you are saying about a short delay interfering
with the use of delta delays. Delta delays have 0 ns resolution; in
fact, they are in essence "outside" of time delays. All delta delays
must be resolved before a time delay is considered. Are you saying
that with the delay truncated to 0 ns, it is really no delay at all
and so this signal would be processed with a delta delay instead?
Even if true, how would that create a problem?

Rick

Article: 141177
Antti.Lukats@googlemail.com wrote:
> if you as ram is specified as 15ns type
> then do not hope it attaches to soc system with 15ns cycle time

Of course, the pin and trace capacitance as well as the setup & hold of
the other pins must be taken into account. Anyway, the available access
times are close enough to the cycle times of the pipelines, so I
modified my design to make it completely "synchronous". It is less
troublesome...

> Antti

yg
--
http://ygdes.com / http://yasep.org

Article: 141178
Hi,

I have the Cortex-M1 HAL folder with driver software for Actel
peripherals. Should I use the same HAL folder for Core8051s? If not,
please send me the link to download the HAL for it.

The following is what I found in the Actel documentation:

"Hardware Abstraction Layers
The HALs enable the software drivers to be used without modification
with Cortex-M1, CoreMP7 and Core8051s and are available for FREE. The
software driver's interaction with the hardware platform is done
through the HAL. This isolates the driver's implementation from the
hardware platform variations. A driver implementation typically
interacts with the hardware peripheral it is controlling through
sequences of register reads and register writes. The implementation of
the HAL translates the read and write requests into the bus
transactions relevant to the hardware platform. This enables
programmers to seamlessly reuse code even when the hardware platform
changes. The Cortex-M1 HAL is included with each individual software
driver that is available for download."

Article: 141179
On Jun 10, 4:01 am, whygee <why...@yg.yg> wrote:
> Hello,
>
> I've been busy lately, trying to understand how to interface
> asynchronous SRAMs (like IDT 71V016, CY7C10xx or other 16-bit wide and
> fast parts in TSOP-II).
>
> I have found some descriptions of multicycle methods, using FSMs, but
> this does not fit my target because my circuits already run at
> "nominal speed" (8 to 15ns cycles, depending on the SRAM chip). So I am
> trying to find how SRAM reads and writes can be done in one cycle, with
> an FPGA that can't (or shouldn't) go faster. (Yes, I use Actel's
> ProASIC and I'm fine.)
>
> I have found (through the examination of timing diagrams in several
> datasheets) that I can design a stateless async SRAM interface with
> this behaviour:
>
> Read:
>   on the clock's rising edge:
>      latch the address bus's value,
>      Output Enable = 1, WriteEnable = 1,
>      and keep the data bus floating
>   after 1/3 of the clock cycle:
>      Output Enable = 0
>   on the next clock rising edge:
>      latch the data bus input.
>
> Write:
>   on the clock rising edge:
>      latch the address bus, OE = 1, WriteEnable = 1,
>      keep the data bus floating
>   after 1/3 of the clock cycle:
>      WriteEnable = 0
>   after 1/2 clock cycle (falling edge):
>      latch the data output and drive the output buffer
>
> It's fine for me because it can be done by correctly wiring latches to
> the proper control/data/clock signals and it should work. Now comes the
> big question:
>
> How would I generate the 1/3 clock cycle signal?
>
> * I don't want to use a 3x clock because the design is already fast,
>   and even though the PLL can output 350MHz, I'm not sure that the
>   logic and routing will follow (so making a 3-state FSM is eventually
>   possible but not realistic, too uncertain). A dual-edged FSM with a
>   1.5x clock would be another improbable chimera...
>
> * I have seen that the PLL can generate a 5ns (max) delayed clock based
>   on the main clock, so it's fine for 15ns-rated SRAMs (I could set the
>   delay to 2.6ns for 8ns parts), but what happens if 20ns or slower
>   SRAMs are to be used? (I have 70ns chips for example, but I don't
>   want to make an FSM just for a few slow parts.) Also, I would like to
>   keep/reserve the PLL outputs for other purposes.
>
> * In all the datasheets I have found, it is implicitly necessary to
>   have this 1/3 delay:
>    - if shorter, there is a driver conflict on the data bus if the
>      previous cycle was a read (data remain present up to about 1/3 of
>      the next clock cycle)
>    - if longer, the data setup time is not respected and reliability
>      suffers.
>
> Any idea?
> And are my assumptions valid?
> (for those who have already designed this kind of circuitry)
>
> yg
> -- http://ygdes.com / http://yasep.org

Is there any way you can have "bursty" accesses, ie, a bunch of reads
in a row? In that case, there is no chance of bus contention once the
burst starts. All you need to do is change the address and a bit later
you can latch the new data. Writes are trickier since you will need to
set/clr the sram enable signal, but again, if you can group writes
together, you can avoid data bus contention.

John Providenza
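For readers who want to see the sequence described above in RTL form,
here is a minimal VHDL sketch of a one-access-per-cycle async SRAM
controller in the spirit of that scheme. It is an illustration only: the
entity name, port widths and the strobe placement are assumptions, and
for simplicity the strobes are gated with the low half of the system
clock (the "assert after 50%" variant) rather than at the 1/3-cycle
point; a PLL phase-shifted enable would replace "not clk" in a real
design, and gating I/O strobes directly with the clock has skew
implications that would need checking.

  library ieee;
  use ieee.std_logic_1164.all;

  entity sram_1cycle_sketch is
    port (
      clk       : in    std_logic;                      -- one SRAM access per clk cycle
      req       : in    std_logic;                      -- start an access this cycle
      wr        : in    std_logic;                      -- '1' = write, '0' = read
      addr_in   : in    std_logic_vector(17 downto 0);
      wdata_in  : in    std_logic_vector(15 downto 0);
      rdata     : out   std_logic_vector(15 downto 0);  -- valid one cycle after a read
      sram_a    : out   std_logic_vector(17 downto 0);
      sram_d    : inout std_logic_vector(15 downto 0);
      sram_oe_n : out   std_logic;                      -- active-low output enable
      sram_we_n : out   std_logic                       -- active-low write enable
    );
  end entity;

  architecture rtl of sram_1cycle_sketch is
    signal doing_rd, doing_wr : std_logic := '0';
    signal wdata_r            : std_logic_vector(15 downto 0);
    signal strobe_window      : std_logic;
  begin
    -- Launch the access on the clock edge: register the address and write
    -- data, note whether it is a read or a write, and capture the data bus
    -- for the read that was started one cycle earlier.
    process (clk)
    begin
      if rising_edge(clk) then
        sram_a   <= addr_in;
        wdata_r  <= wdata_in;
        doing_rd <= req and not wr;
        doing_wr <= req and wr;
        rdata    <= sram_d;
      end if;
    end process;

    -- Strobe window: simplified here to the low half of clk. A PLL
    -- phase-shifted enable (or a 2x-clock phase counter) would give the
    -- 1/3-cycle or quarter-cycle placement discussed in the thread.
    strobe_window <= not clk;

    -- Both strobes idle high at the start of the cycle, so a previous read
    -- can release the bus before anything new is driven.
    sram_oe_n <= '0' when (doing_rd = '1' and strobe_window = '1') else '1';
    sram_we_n <= '0' when (doing_wr = '1' and strobe_window = '1') else '1';

    -- Drive write data only while WE is active; tri-state otherwise.
    sram_d <= wdata_r when (doing_wr = '1' and strobe_window = '1') else (others => 'Z');
  end architecture;

The key point is simply that both strobes are deasserted at the clock
edge that launches a new access, so a preceding read gets the first part
of the cycle to release the data bus before the FPGA drives it for a
write.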
Antti wrote:
> > if you as ram is specified as 15ns type
> > then do not hope it attaches to soc system with 15ns cycle time
> > Antti

Now I remember that this is possible, but only when consecutive reads
are performed. Beware of the propagation delays, but it's not impossible
to get cycle time = access time. However, when writes are done, this
does not hold anymore. OK, I stop here :-)

My only "real world example" for the study is the "FoxVHDL" (superseded
by the "Colibri") board of ACME Systems ( http://colibri.acmesystems.it/ ).
It has two 12ns SRAMs with a 64MHz clock, so there is 3.625ns of margin.
I've been unable to understand how/why the memory interface of the VGA
framebuffer works, though.

Now that I look at this number (I thought that it was only 3ns, rather
than almost 4), it appears that the memory cycle could be split into 4
even sub-cycles, requiring only a local 2x clock increase. I like this
idea :-) With 8ns parts, that would enable 90MHz (roughly) pipeline
frequencies (with good layout and decoupling and...). And with 15ns
parts, that makes a nice, round 50MHz frequency. This fits well with my
stock of 25MHz oscillators and 15ns SRAMs. Does it sound right to others?

Jacko wrote:
> Paying attention to the r/w line and that it can be raised at the end
> of a cycle not just after 70% of the cycle, and realizing this is the
> main issue for meeting the cycle time constraints.

I realized it yesterday, right, but my working hypothesis was to lower
it at 30% and raise it at 100%, so the data bus has less chance (or
duration) of contention. SRAMs seem to take more time than I expected to
release the data bus. And when writing to the SRAM, the data setup is
long (70% of the access time) and the hold time is often 0; this favours
the "late WE raise" rather than your version. What do you think?

> Keeping /OE low for
> read and write (as r/w overrides the output buffer, check the specs).

I've seen that too, even though it is not often clearly mentioned. It
surprised me... But I stick to the safe side and only lower /OE when
needed, to avoid contentions.

> Raising r/w early is no problem as data the same, lowering it early
> maybe is what you need?

Maybe, but I didn't want to play too much with the specs.

> The drive shorting rail to rail would be a
> power loss issue so maybe delay the data output of the processor
> slightly...

"power matters" (to paraphrase one FPGA vendor ;-P)
Currently, I'm just trying to apply what I have known for years (at
last), but that I couldn't test before. Later, I'll probably have power
constraints and I don't want to redesign and retest memory controllers.
So I take contention seriously.

> the rising edge will be countered by the addressing delay.

Can you elaborate?

> The timing may be very possible if you do interleave every write with
> a read, but as I said the timing will be tight.

I can't count on such a systematic interleave. This access pattern is
rare, and not often found in CPUs or graphic framebuffers, for example.

It sounds like if I add 33% of margin to the SRAM access time, things
fall into place and I can just use a PLL to generate a 2x frequency, or
a 90° out-of-phase control signal. When the input clock changes, the
phase will not change, contrary to a fixed-delay approach. I'll recheck
all the timings and datasheets...

--
http://ygdes.com / http://yasep.org

Article: 141181
On Wed, 10 Jun 2009 06:44:47 -0700 (PDT), rickman wrote:

> I'm unclear about what you are saying about a short delay interfering
> with the use of delta delays. Delta delays have 0 ns resolution; in
> fact, they are in essence "outside" of time delays. All delta delays
> must be resolved before a time delay is considered. Are you saying
> that with the delay truncated to 0 ns, it is really no delay at all
> and so this signal would be processed with a delta delay instead?
> Even if true, how would that create a problem?

RTL simulation of registered logic works only because delta delay
models the clock-to-output delay of registers, so that each register
sees its clock edge happening at least one delta cycle before the
upstream registers' outputs have changed their value. Conceptually,
the clock arrives at every flop truly simultaneously - in the exact
same delta - but data changes happen at least one delta after the
clock edge.

Things like clock gating can introduce one or more delta delays in a
clock path. In real hardware, the non-zero clock-to-output delay of
upstream logic means that you still have plenty of hold time on the
flop that has a gated clock. But in simulation, you lose your one
delta of hold time and you can get bizarre results.

  process (clock)
  begin
    if rising_edge (clock) then
      Q_early <= D;
    end if;
  end process;

  clock_delayed <= clock;  -- delta delay

  process (clock_delayed)
  begin
    if rising_edge (clock_delayed) then
      Q_bad <= Q_early;  -- hold time problem here
    end if;
  end process;

Adding any non-zero delay to this assignment:

  Q_early <= D after 100 ps;

will fix the problem. But if that delay is smaller than the simulator
resolution, then the delay will be rounded down to 0 ns and thus lapses
back to being only one delta again. (In VHDL, a 0 ns delay is exactly
the same as one delta delay; "Q <= D after 0 ns" is identical to
"Q <= D".)

This is a pernicious problem when trying to do functional (zero-delay)
simulation of FPGA designs that use DCMs or whatever. It's extremely
hard to guarantee that a multiplied clock's edges line up with the
source clock's edges to the exact delta, but that's what you need to do
if you are to model RTL transfer in both directions between source
clock and multiplied clock domains.

Verilog allows you to hack your way around this problem by using
blocking assignment in combinational logic (and, in particular, in
clock gating/management models) whilst using nonblocking assignment in
clocked logic. Nonblocking assignment works pretty much like VHDL delta
delay, but blocking assignment introduces "less delta" (ugh!!!). In
VHDL, the only reliable way I know to deal with it is to introduce some
non-zero delay in every data path; that's not something I like to do,
because it's very easy to do it on some signals and not on others.

The RTL abstraction, which says that everything happens atomically in
zero time on a clock edge, is a wonderful tool - but it has its
limitations.
--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are
not the views of Doulos Ltd., unless specifically stated.

Article: 141182
Could I download (with XMD) an executable.elf application to more than
one board? I use the USB Xilinx cable.

Article: 141183
Hi,

john wrote:
> Is there any way you can have "bursty" accesses, ie, a bunch of reads
> in a row?

In the present case, well, no :-/
It's for microcontroller applications: one FPGA, one SRAM chip and some
electronic interfaces (that kind of thing, then each system is
customised for a specific target). The usage pattern would be an average
of 2 or 3 reads (may be more, may be less) and 1 write.

For higher performance, I can use synchronous SRAMs that have 4-word
bursts, from 66MHz to 250MHz... but it takes more room, it is a bit
harder to route and consumes more power... SSRAM looks more adapted (and
in fact, it is indeed designed) as L2 cache, where reads and writes are
bursted. I'll see this later.

> In that case, there is no chance of bus contention once the burst
> starts. All you need to do is change the address and a bit later you
> can latch the new data. Writes are trickier since you will need to
> set/clr the sram enable signal, but again if you can group writes
> together, you can avoid data bus contention.

right, but then, how does one manage the bus turnaround?
it looks like a cycle or sub-cycle must be inserted, which affects the
rest of the system.

> John Providenza

yg
--
http://ygdes.com / http://yasep.org

Article: 141184
"Pablo" <pbantunez@gmail.com> wrote in message news:75f0b159-5d1d-45e8-9f99-826c0488e5e9@r13g2000vbr.googlegroups.com... > Could I download (with XMD) an executable.elf Application in more than > one board?. I use USB Xilinx Cable. I believe Xilinx tools can't distinguish between 2 USB cables. You can however use 2 cables if one is USB and another parallel... /MikhailArticle: 141185
On 10 jun, 19:24, "MM" <mb...@yahoo.com> wrote:
> "Pablo" <pbantu...@gmail.com> wrote in message
> news:75f0b159-5d1d-45e8-9f99-826c0488e5e9@r13g2000vbr.googlegroups.com...
>
> > Could I download (with XMD) an executable.elf application to more
> > than one board? I use the USB Xilinx cable.
>
> I believe Xilinx tools can't distinguish between 2 USB cables. You can
> however use 2 cables if one is USB and another parallel...
>
> /Mikhail

And two boards at the same time with the same executable.elf???

Article: 141186
I receive this message when I build "Software Applications". I am not
sure, but I think that the hardware is not right, although I don't
receive any error message when I build the bitstream.

  /cygdrive/c/Windows/Temp//ccDJflt9.s: Assembler messages:
  /cygdrive/c/Windows/Temp//ccDJflt9.s:72: Error: register expected, but saw 'rfsli'
  /cygdrive/c/Windows/Temp//ccDJflt9.s:72: Warning: ignoring operands: rfsli
  make: *** [mb0_thread/executable.elf] Error 1
  Done!

my best regards

Article: 141187
> And two boards at the same time with the same executable.elf???

It probably doesn't matter whether you are trying to load the same file
or not. But I guess you could probably do it by connecting your boards
in a single JTAG chain...

/Mikhail

Article: 141188
Hi,

I am looking at the FFT core description from Xilinx and I do not
understand the function of the switches in the diagram. There are two
switches in the FFT LogiCore diagram, on both sides of the butterfly.
One switch connects to the output of the RAM block, while the other
switch connects to the RAM input. I think the data input and output of
the RAM could connect directly to the butterfly, because the different
RAM contents are selected by addresses. What are the switches used for?
Can you help me with that? Thank you very much.

Article: 141189
On Jun 10, 2:26 pm, fl <rxjw...@gmail.com> wrote:
> Hi,
> I am looking at the FFT core description from Xilinx and I do not
> understand the function of the switches in the diagram. There are two
> switches in the FFT LogiCore diagram, on both sides of the butterfly.
> One switch connects to the output of the RAM block, while the other
> switch connects to the RAM input. I think the data input and output of
> the RAM could connect directly to the butterfly, because the different
> RAM contents are selected by addresses. What are the switches used for?
> Can you help me with that? Thank you very much.

Hi,

Please post the Xilinx web address of the FFT core description you are
looking at and I may be able to help you.

Weng

Article: 141190
Hi,

I wanted to send some data from my computer to the Xilinx ML401 board
using the USB port. My restriction is that I cannot use the MicroBlaze
package to do the USB interfacing. I was using the USB3300 card to
connect the ML401 to the USB port of my computer but I did not get a
VID or PID upon connection. I wanted to know if I need some kind of
VHDL code/core initially running on the ML401 that activates the
USB3300 card, which will enumerate the VID and PID and set it up for
further operations. Also, is this type of core or code available online
someplace? How much effort/time will it take to make such a code/core
from scratch?

Thanks.
Zubin.

Article: 141191
Hi all,

I'm having the following problem: I'm trying to interface a XUP
Virtex-II Pro Development System with a CMOS camera, and the output
voltage on the board's expansion connectors is 2.5V for logic '1', no
matter what IOSTANDARD is specified in the ucf file. I understand this
is because the Vcco for the I/O bank is 2.5V. (However, the schematic
shows that for the I/O banks that relate to the expansion connectors,
Vcco = 3.3V.) My question is whether it is possible to change the Vcco
supply voltage by any sw/hw means.

Thank you in advance.

Article: 141192
> On May 25, 1:08 pm, Pablo <pbantu...@gmail.com> wrote:
> > > I use the MPMC to map memory ports to DDR2 external memory. This
> > > mechanism has been successfully tested with up to 7 microblazes.
> > >
> > > /Per
> >
> > Firstly, thanks a lot.
> >
> > Secondly, I would be grateful if you could tell me if you used XUP
> > Board and the version of Xilinx Platform Studio. Do you know anything
> > else of this type of design?
> >
> > again, my best regards
>
> I've built multi-processors with both mb and ppc using the Xilinx
> ML401, ML403, ML505, ML506 demo boards. This works fine under multiple
> development environments (virtual pcs with ISE/EDK v9.2 and v10.1).
>
> IMHO the Xilinx docs/tutorials for multi-processors are so thin that
> they are not particularly useful. We have a fairly detailed appnote
> showing how to design/build/test MPSoC using our high-level tools for
> registered users at www.codetronix.com. Go to Downloads>AppNotes>MPSoC.
> This shows how to start from a single-processor BSP-wizard-generated
> xps system, replicate necessary hw structures, and download elf files
> -- it may provide some clues for you.
>
> /Per

Hi,

I am new to this site but I found the topic very helpful for me. I just
need to know if DDR memory is the only memory that can store MPSoC
systems (up to 8 MicroBlaze systems) on the ML403 board. Can we use the
SRAM too? And how much would it hold (up to how many MicroBlaze
processors)? And in case I am using the FPGA's BRAMs as cache memories,
would that affect how many MicroBlaze processors could be added?

Thanks a lot,
N

Article: 141193
"recoder" <kurtulmehtap@gmail.com> wrote in message news:002ad0a8-0cbf-42d3-ab66-40ec0d72b307@x5g2000yqk.googlegroups.com... > We have implemented a high speed qpsk demodulator in a FPGA > demodulator board. > Until now we fed the I and Q inputs from another board by wire. > Now we are looking for an IF board that can take a 70 Mhz RF signal > and output the I and Q signals to be fed to our FPGA board. > Can anybody recommend one? > Thanx in advance There are several vendors of ADC+FPGA boards out there with a variety of form factors. They primarily target the military and other specialty signal processing markets, so they aren't generally cheap ($10k and up). Our company has had good results with boards from ICS, Ltd. (now part of GE Fanuc) and Pentek. I think Transtech has a family of FPGA boards with a variety of daughter cards for I/O and ADC. Depending on your host platform, a PCI, PMC, or PCI-Express board may be a good choice. Many of these support multiple channels of ~200 MHz ADC's (what you need for 70 MHz RF unless you want to deal with balancing quadrature sampling) feeding something like a V4 or V5 FPGA (significant part of the $$$). I wouldn't recommend rolling your own interface board as getting the signal timing on all those ADC bits is a bit tricky. That's why we tend to buy boards that have the ADC and FPGA integrated together. -MartyArticle: 141194
On Jun 10, 5:28 pm, "zubinkumar" <zubinku...@ufl.edu> wrote:
> Hi,
>
> I wanted to send some data from my computer to the Xilinx ML401 board
> using the USB port. My restriction is that I cannot use the MicroBlaze
> package to do the USB interfacing. I was using the USB3300 card to
> connect the ML401 to the USB port of my computer but I did not get a
> VID or PID upon connection. I wanted to know if I need some kind of
> VHDL code/core initially running on the ML401 that activates the
> USB3300 card, which will enumerate the VID and PID and set it up for
> further operations. Also, is this type of core or code available
> online someplace? How much effort/time will it take to make such a
> code/core from scratch?
>
> Thanks.
> Zubin.

A USB3300 is a PHY. You will need to add the bit-stuffer,
serializer/deserializer, framer, FIFO and host interface. Then you get
to write the host and device drivers.

Good luck with that,
AL

Article: 141195
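To make that list a little more concrete, here is what the smallest of
those blocks - the bit-stuffer - looks like. USB inserts a '0' after six
consecutive '1' bits so the receiver can keep bit lock; everything else
(NRZI encoding, CRCs, framing, the ULPI interface to the USB3300 and the
host controller proper) is far larger. This is an illustrative VHDL
sketch only; the entity name, ports and handshake are assumptions, not
anything from the USB3300 or ULPI documentation.

  library ieee;
  use ieee.std_logic_1164.all;
  use ieee.numeric_std.all;

  entity usb_bitstuff_sketch is
    port (
      clk      : in  std_logic;   -- bit clock
      rst      : in  std_logic;   -- synchronous reset
      din      : in  std_logic;   -- next payload bit to transmit
      din_take : out std_logic;   -- '1' when din is consumed this cycle
      dout     : out std_logic    -- bit stream with stuffed zeros inserted
    );
  end entity;

  architecture rtl of usb_bitstuff_sketch is
    signal ones_run : unsigned(2 downto 0) := (others => '0');  -- consecutive '1's sent
    signal stuffing : std_logic;
  begin
    -- Stuff a '0' in the cycle after six consecutive '1's have been sent.
    stuffing <= '1' when ones_run = 6 else '0';
    din_take <= not stuffing and not rst;  -- payload stalls while a '0' is stuffed

    process (clk)
    begin
      if rising_edge(clk) then
        if rst = '1' then
          ones_run <= (others => '0');
          dout     <= '0';
        elsif stuffing = '1' then
          dout     <= '0';                 -- the stuffed bit
          ones_run <= (others => '0');
        else
          dout     <= din;
          if din = '1' then
            ones_run <= ones_run + 1;
          else
            ones_run <= (others => '0');
          end if;
        end if;
      end if;
    end process;
  end architecture;

The point is mostly one of scale: this piece is a dozen lines, while the
serial interface engine and host controller around it are what make
"roll your own USB" a big job.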
On Jun 10, 5:29 pm, "naim32" <engineer_n...@yahoo.com> wrote:
> > On May 25, 1:08 pm, Pablo <pbantu...@gmail.com> wrote:
> > > > I use the MPMC to map memory ports to DDR2 external memory. This
> > > > mechanism has been successfully tested with up to 7 microblazes.
> > > >
> > > > /Per
> > >
> > > Firstly, thanks a lot.
> > >
> > > Secondly, I would be grateful if you could tell me if you used XUP
> > > Board and the version of Xilinx Platform Studio. Do you know
> > > anything else of this type of design?
> > >
> > > again, my best regards
> >
> > I've built multi-processors with both mb and ppc using the Xilinx
> > ML401, ML403, ML505, ML506 demo boards. This works fine under
> > multiple development environments (virtual pcs with ISE/EDK v9.2 and
> > v10.1).
> >
> > IMHO the Xilinx docs/tutorials for multi-processors are so thin that
> > they are not particularly useful. We have a fairly detailed appnote
> > showing how to design/build/test MPSoC using our high-level tools for
> > registered users at www.codetronix.com. Go to
> > Downloads>AppNotes>MPSoC. This shows how to start from a
> > single-processor BSP-wizard-generated xps system, replicate necessary
> > hw structures, and download elf files -- it may provide some clues
> > for you.
> >
> > /Per
>
> Hi,
>
> I am new to this site but I found the topic very helpful for me. I just
> need to know if DDR memory is the only memory that can store MPSoC
> systems (up to 8 MicroBlaze systems) on the ML403 board. Can we use the
> SRAM too? And how much would it hold (up to how many MicroBlaze
> processors)? And in case I am using the FPGA's BRAMs as cache memories,
> would that affect how many MicroBlaze processors could be added?
>
> Thanks a lot,
> N

No external memories are required by a MB since its elf and stack/heap
can be stored in local BRAM. On large fpgas you can probably squeeze in
at least a dozen MB. If the elf or stack/heap is so large that it
doesn't fit in BRAM, then you can also use external DDR2, SRAM or Flash.
Using BRAM for other purposes simply reduces the amount available for
MB(s).

The DDR2 uses the Xilinx multiport memory controller (MPMC, described in
DS643) with a maximum of 8 ports per bank, while the SRAM (or Flash)
uses the Xilinx multi-channel external memory controller (XPS MCH EMC,
described in DS575) with a maximum of 4 ports. On the Xilinx MLxxx
boards there is only 1 bank of DDR2, SRAM or Flash, while other boards
may have multiple banks.

For higher throughput and higher energy efficiency, it makes sense to
move functionality from sw targets to hw targets.

/Per

Article: 141196
FPGA timing closure is a time-consuming task and sometimes a source of
heated debates between team members.

My question to this group is: what is a safe margin in static timing
analysis within which a design will still work? Specifically, if a
critical net in a Xilinx Virtex-5 -2 chip running at 250 MHz misses
timing by 100 ps, is it still OK?

I understand that there are factors like temperature, voltages, jitter,
etc. to consider, and it's always better to meet timing. But I'm
interested in how much the timing can be violated if a design is
running under "normal" conditions in a lab.

Article: 141197
By looking at the slice structure in FPGA Editor it seems possible to
route O5 and O6 to different FFs in the same slice.

The surest way to know is to create a small design with a LUT (or
CFGLUT5) and two FF primitives, synthesize it, and try to reroute it in
FPGA Editor. Or constrain that design into a single slice.

- outputlogic
-- visit http://OutputLogic.com : tools that improve productivity --

Article: 141198
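For anyone who wants to try that experiment, a small VHDL test design
along these lines would do. It is a sketch only: the entity name, port
names and the INIT value are arbitrary placeholders, and the LUT6_2 and
FDRE components are instantiated from the UNISIM library as usual.

  library ieee;
  use ieee.std_logic_1164.all;
  library unisim;
  use unisim.vcomponents.all;

  -- One LUT6_2 used as two 5-input functions (I5 tied high), with O5 and
  -- O6 each feeding its own flip-flop. Synthesize, then check in FPGA
  -- Editor (or constrain it to one slice) to see whether both registers
  -- pack into the same slice as the LUT.
  entity o5_o6_reg_test is
    port (
      clk : in  std_logic;
      din : in  std_logic_vector(4 downto 0);
      q5  : out std_logic;
      q6  : out std_logic
    );
  end entity;

  architecture rtl of o5_o6_reg_test is
    signal o5_s, o6_s : std_logic;
  begin
    lut : LUT6_2
      generic map (INIT => X"6996966996696996")  -- arbitrary pair of functions
      port map (O5 => o5_s, O6 => o6_s,
                I0 => din(0), I1 => din(1), I2 => din(2),
                I3 => din(3), I4 => din(4), I5 => '1');

    ff5 : FDRE
      generic map (INIT => '0')
      port map (Q => q5, C => clk, CE => '1', D => o5_s, R => '0');

    ff6 : FDRE
      generic map (INIT => '0')
      port map (Q => q6, C => clk, CE => '1', D => o6_s, R => '0');
  end architecture;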
"OutputLogic" <evgenist@gmail.com> wrote in message news:451a59d0-06c9-4a12-ab71-e25d6259d430@p21g2000prn.googlegroups.com... > FPGA timing closure is a time-consuming task and sometimes a source of > heated debates between team members. > My question to this group is what is a safe margin in static timing > analysis in which a design will still work. > Specifically, if a critical net in Xilinx Virtex5 -2 chip running at > 250 MHz misses timing by 100ps is it still ok. > I understand that there are factors like temperature, voltages, > jitter, etc. to consider, and it's always better to meet timing. But > I'm interested in how much the timing can be violated if a design is > running under "normal" conditions in a lab. I am not sure by how much you can safely stretch it, but 100 ps has never given me a problem in a lab. Occasionally I would try designs with timing errors up to 5 times as much, but then I don't feel very confident and don't know whether timing is to blame when things don't work as expected :) It might be worth noting that I haven't worked with V5 yet... /MikhailArticle: 141199
On Jun 8, 6:11 pm, john <jprovide...@yahoo.com> wrote:
> On Jun 7, 11:56 pm, Venkat <venkat.ja...@gmail.com> wrote:
> > Hi all,
> >
> > I am wondering if it is possible to have both the O6 and O5 outputs
> > of the LUT registered with the flip-flops within the slice when I use
> > the LUT to implement two 5-input functions.
> >
> > My understanding is no, but I could not find a confirmation in the
> > User Guide. Can anyone clarify?
> >
> > Thanks in advance,
> > Venkat.
>
> I believe you can't do this in V5.  I just attended a seminar on V6 and
> one of the new features they mentioned was adding a 2nd flip-flop to
> the slice so that both the outputs could be registered.
>
> John P

Something interesting at that seminar?

Antti