Go to www.xilinx.com and search for LFSR; there is a PDF doc there with a table of all the LFSR feedback tap positions for an N-bit LFSR. 64 bits is among them. I have empirically proven using C-code that the 16-bit and 32-bit are indeed maximal length. I would expect I could trust the others too. Regards, Spock "Subbu Meiyappan" <msubbu@cisco.com> wrote in message news:993161117.666970@sj-nntpcache-3... > Pick up any Error Control Coding / Algebraic coding book (for example Lin > and Costello or Wicker) and their > appendices usually have the minimal polynomials for different lengths. > > HTH, > Subbu > "Dave Feustel" <dfeustel@mindspring.com> wrote in message > news:9ec4nt$ipc$1@slb6.atl.mindspring.net... > > I'm looking for taps for 64-bit linear feedback shift registers. > > Can someone post either values for such taps or a source > > of information for generating the taps? > > > > Thanks, > > > > Dave Feustel > > Fort Wayne, Indiana > > > > > >Article: 32353
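For illustration, here is a minimal Verilog sketch of such a maximal-length LFSR in its 16-bit form. The XNOR feedback style and the tap positions (16, 15, 13, 4) follow the Xilinx table as commonly cited, so verify them against the PDF before relying on them; the module and signal names are just placeholders.

    // 16-bit maximal-length LFSR sketch; taps 16,15,13,4 from the Xilinx table.
    module lfsr16 (clk, q);
        input         clk;
        output [15:0] q;
        reg    [15:0] sr;   // a power-up state of all-zeros is legal with XNOR
                            // feedback; only the all-ones state locks up
        assign q = sr;
        always @(posedge clk)
            sr <= {sr[14:0], ~(sr[15] ^ sr[14] ^ sr[12] ^ sr[3])};  // XNOR of taps
    endmodule

Clocking it for 2^16 - 1 cycles should return it to its starting state, which is essentially the empirical check described above.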
I came across this problem about a month ago. The workaround I found was to stuff the primitives in different sub-modules (I use Verilog) and use the directive /* synthesis syn_hier = "hard" */ in the sub-module declaration. Hope it helps, Rotem. Ray Andraka <ray@andraka.com> wrote in message news:<3B32271D.7B7A6B17@andraka.com>... > Synplicity 6.2 is "optimizing" an instantiated design. In particular, I > > have a design with instantiated Xilinx primitives including a carry > chain. The synthesis is apparently flattening the xilinx primitives and > > doing its own optimization on them, which results in a multiplexer > placed between the xorcy and the flip-flop. When I instantiate > primitives, I don't expect to see them remapped. I didn't see this > happening in 5.3.1. The instantiated primitives have a syn_black_box > attribute on them. This new 'feature' makes Synplicity useless for the > > designs I am doing, and actually does damage for the large base I have > already fielded. > > Has anyone else seen this? Any fixes? Rotem Gazit MystiCom LTD mailto:rotemg@mysticom.com http://www.mysticom.com/Article: 32354
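A minimal sketch of that workaround, assuming Synplify's comment-attribute syntax; the module, ports and body below are only placeholders for the instantiated carry-chain primitives.

    // Wrap the instantiated primitives in their own module and mark the
    // boundary "hard" so the synthesizer does not flatten or re-optimize
    // across it.
    module carry_block (a, b, clk, sum) /* synthesis syn_hier = "hard" */;
        input  [7:0] a, b;
        input        clk;
        output [8:0] sum;
        reg    [8:0] sum;
        // ... instantiated MUXCY / XORCY / flip-flop primitives go here ...
        always @(posedge clk)
            sum <= a + b;   // placeholder for the instantiated logic
    endmodule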
Does FPGA Express do register balancing? Synplicity calls this "retiming". The idea is to take a large block of combinatorial code and move registers into it to speed up the clock speed. Then you would define a multiplier (for example) as a large combinatorial block and simply add registers to the output until it meets your timing requirement. I have seen this used in ASIC code, but I have not seen it work for FPGA's. I have been trying to get Synplify Pro 6.2.4, which supports retiming, to do this with no success. When I add more than a couple of registers to the output, it turns them into an SRL and makes it *slower*. I am using Xilinx XCV1000E, Synplify Pro 6.2.4, Xilinx Alliance 3.3.08i, Win2K, Netscape 4.75, Red Hat Linux 5.2, Flash 5, Acrobat 4.0 Alan Nishioka alann@accom.comArticle: 32356
Mike, No counter flip-flop will ever create glitches by itself. Even a ripple-counter has no glitching flip-flops. Only decoded outputs can glitch. You need a total of four flip-flops, and in Spartan2 ( i.e. Virtex ) you have those four in one CLB. With the look-up tables you can create any synchronous 4-bit counter sequence you can imagine ( there are 15-factorial different choices, ) but I suggest you stick with binary, and then take the most significant Q as your 2 MHz output. It is delayed by one clock-to-Q from the 32 MHz clock, i.e. 1.4 ns. It's easy. Peter Alfke, Xilinx Applications ========================================= mjd001 wrote: > Thanks to both of you for the feed-back. The idea here was to create a T-FF > that drove the clock output instead of a binary counter which I felt may > create glitches when switching values. I investigated a Johnson counter also > but this would eat m/2 FF's (m being count length) and I was concerned with > resources at the time since I was targeting a XC2S15. Being the lone wolf on > FPGA design projects I have few experienced people to bounce ideas off of. > Thank you again. > > Mike D. > > "Falk Brunner" <Falk.Brunner@gmx.de> wrote in message > news:3B338E5A.492E41D4@gmx.de... > > mjd001 schrieb: > > > > > > Hi, > > > I am currently testing a VHDL design that derives a 2.048 MHz clock > from > > > the 32.768 Mhz source. Below is a snippet of the code I am using. the > part > > > > [VHDL stuff] > > > > Why dont you simply use the 3rd bit of your counter as your 2.048 MHz > > clock? > > The second process is useless. > > > > -- > > MFG > > Falk > > > >Article: 32357
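A minimal Verilog sketch of that suggestion, assuming a 32.768 MHz input and a plain 4-bit binary counter whose most significant bit becomes the divided output (names are illustrative):

    // Divide-by-16: count[3] toggles at 32.768 MHz / 16 = 2.048 MHz,
    // delayed by one clock-to-Q from clk.
    module div16 (clk, clk_div);
        input  clk;        // 32.768 MHz in
        output clk_div;    // 2.048 MHz out
        reg [3:0] count;
        always @(posedge clk)
            count <= count + 1;
        assign clk_div = count[3];
    endmodule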
mjd001 schrieb: > > Thanks to both of you for the feed-back. The idea here was to create a T-FF > that drove the clock output instead of a binary counter which I felt may > create glitches when switching values. I investigated a Johnson counter also > but this would eat m/2 FF's (m being count length) and I was concerned with > resources at the time since I was targeting a XC2S15. Being the lone wolf on Ahh, when using the XC2S15, just use a DLL, it can divide a clock "for free" and even phase align the divided clock to the original clock. -- MFG FalkArticle: 32358
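A sketch of the DLL alternative, assuming the Spartan-II/Virtex CLKDLL primitive with its CLKDV output set to divide by 16 (32.768 MHz / 16 = 2.048 MHz). The buffer and feedback wiring follows the usual Xilinx template, and the instance and signal names are made up; check the libraries guide for the exact instantiation and the legal CLKDV_DIVIDE values.

    // Clock division in the DLL; CLKDV is phase-aligned to CLKIN.
    module div16_dll (clk_pad, rst, clk_div, locked);
        input  clk_pad, rst;
        output clk_div, locked;
        wire clk_in, clk0, clk0_buf, clkdv;

        IBUFG u_ibufg (.I(clk_pad), .O(clk_in));

        CLKDLL u_dll (
            .CLKIN (clk_in),
            .CLKFB (clk0_buf),      // feedback from the buffered CLK0
            .RST   (rst),
            .CLK0  (clk0),
            .CLK90 (), .CLK180 (), .CLK270 (), .CLK2X (),
            .CLKDV (clkdv),         // CLKIN divided by CLKDV_DIVIDE
            .LOCKED(locked)
        );
        defparam u_dll.CLKDV_DIVIDE = 16.0;   // divide-by-16 for 2.048 MHz

        BUFG u_bufg0  (.I(clk0),  .O(clk0_buf));
        BUFG u_bufgdv (.I(clkdv), .O(clk_div));
    endmodule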
"Keith R. Williams" wrote: > > In article <3B34C985.CD55786A@akamail.com>, dclark@akamail.com says... > > "Keith R. Williams" wrote: > > > > > > In article <3B32F4AA.7025F2D1@algor.co.uk>, rick@algor.co.uk says... > > > > > > > > Why not go Linux instead ? > > > > > > Tools, mainly. > > > > This is one area that has recently improved dramatically. Synplicity, > > Cadence, Modelsim, Synopsis and many other smaller tool suppliers, all > > provide Linux tools now. If you haven't done so recently, take a look > > through the news and notes at: > > > > http://www.polybus.com/xilinx_on_linux.html > > > > The one weak link is of course the FPGA vendor tools, but several of > > these run just fine under Wine, including xilinx. > > Yes, I see Xilinx dragging their feet all the time. I was somewhat > surprised to see Synplicity porting to Linux. I don't think they're > quite there yet (Amplify?). My guess is that everyone is more afraid > of support costs than they are of porting software. > > Have you looked at the cost of a Linux licenses? I know the Win > Licenses are far cheaper than the various Unix licenses. Yes, most are still discriminating against Linux and Unix. A long time major gripe. If you get the exact same product for Windoze or Linux/Unix, the price is sometimes similar these days. But they generally offer a much cheaper version, exclusively for Windows, typically node locked. Very annoying. Synplicity Pro - only floating licenses - US$15K - available now according to the press release. http://www.synplicity.com/about/pressreleases/SYB-104final.html Modelsim - only Modelsim SE - US$20K - available since last year. Works great on Linux (I use this one). http://www.model.com/products/datasheets/ModelSimSE.htm Synopsis - Design Compiler and other tools. It is not clear whether they are actually available yet, and I don't see prices. http://www.synopsis.com/news/announce/press2000/linux_pr.html Cadence - NCSim and other tools - US$5K (I think, the Cadence web site is rather hard to use) http://www.cadence.com/company/pr/04_17_00linux.html Here are some other Linux simulators: Green Mountain VHDL Compiler about $600 from Green Mountain Computing Systems. A nice looking VHDL Studio, with extra features such as graphical state diagram entry, is also available. BlueHDL about $1.5K from Blue Pacific Computing. A free student version is also available. Pathway about $4K from FTL Systems. A $95 student version is also available. Riviera from Aldec. DuaneArticle: 32359
Duane Clark <dclark@akamail.com> writes: > Synopsis - Design Compiler and other tools. It is not clear whether they > are actually available yet, and I don't see prices. > http://www.synopsis.com/news/announce/press2000/linux_pr.html I use Synopsys Design Compiler, PrimeTime, and VCS under Linux. The latter has been available for several years. > Cadence - NCSim and other tools - US$5K (I think, the Cadence web site > is rather hard to use) > http://www.cadence.com/company/pr/04_17_00linux.html I use Cadence Verilog XL under Linux - works great. I have simulations running for weeks. I can't use Windows since it has to be booted all the time. A reboot is an accepted part of the installation procedure on Windows. Install a small utility program, and you have to reboot - three weeks worth of simulation has to be halted. Petter -- ________________________________________________________________________ Petter Gustad 8'h2B | (~8'h2B) - Hamlet in Verilog http://gustad.comArticle: 32360
We are targeting a Xilinx XC4000 part and need to push some registers into the IOBs. We like to keep our designs vendor independent, so we prefer not to instantiate special features in the Verilog, such as IOB FFs. I checked on the Xilinx web site and found that there is a directive to designate a register to be implemented in the IOB. The directive is /* synthesis IOB=TRUE */, and is placed in the reg declaration. This is claimed to work with Synplify and Xilinx Alliance. When this was tried, the FFs were still in the CLBs. Does anyone have experience with what we are trying to do? Should we give up and instantiate the IOB FFs? -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design URL http://www.arius.com 4 King Ave 301-682-7772 Voice Frederick, MD 21701-3110 301-682-7666 FAXArticle: 32361
Hi, I am using Synopsys Test Compiler to insert scan in my design. The design is around 500k gates and contains as many as 15 scan chains. The problem is that there are too many scan input pins, so we have to share the bidirectional functional pins with the 15 scan test input pins, but we cannot make the ATPG work with that constraint. Will somebody please tell me how to use bidirectional pins as scan test input pins? Otherwise, a pointer to any reference would be greatly appreciated. Thank you, BenArticle: 32362
Hi, I am having a hard time relating the parameters: Minimum period, Maximum combinatorial path delay, Maximum net delay indicated by the Alliance tools to the true clock speed of my implementation. I have an example indicated below. ---------------------------------------------- Could anyone help me in interpreting the following results for determining which of the 2 multiplier implementations can operate at a higher clock rate? I used Synplify and the Alliance 3.3 with the latest update (version 8). Both implementations are targeted for the Virtex-II xc2v1000-5 chip. -They are 12-bit by 12-bit signed parallel multipliers using the CORE Generator. -Input and output are registered. - Maximum pipelining was selected. Now to the differences: Implementation I: Selected the Virtex-II hardware multiplier Implementation II: Selected the LUTs for the multiplier implementation. After synthesis (the only constraint selected in Synplify was a clock speed of 50 MHz), followed by running the Alliance Tools, these are the Timing summary results I obtained. Implementation I (i.e. using built-in multiplier) Minimum period: 7.034ns (Maximum frequency: 142.167 MHz) Maximum combinatorial path delay: 8.179ns Maximum net delay: 3.330ns Implementation II Minimum period: 4.617ns (Maximum frequency: 216.591 MHz) Maximum combinatorial path delay: 8.123ns Maximum net delay: 5.963ns DavidArticle: 32363
Thanks to both of you for the feed-back. The idea here was to create a T-FF that drove the clock output instead of a binary counter which I felt may create glitches when switching values. I investigated a Johnson counter also but this would eat m/2 FF's (m being count length) and I was concerned with resources at the time since I was targeting a XC2S15. Being the lone wolf on FPGA design projects I have few experienced people to bounce ideas off of. Thank you again. Mike D. "Falk Brunner" <Falk.Brunner@gmx.de> wrote in message news:3B338E5A.492E41D4@gmx.de... > mjd001 schrieb: > > > > Hi, > > I am currently testing a VHDL design that derives a 2.048 MHz clock from > > the 32.768 Mhz source. Below is a snippet of the code I am using. the part > > [VHDL stuff] > > Why dont you simply use the 3rd bit of your counter as your 2.048 MHz > clock? > The second process is useless. > > -- > MFG > Falk > >Article: 32365
I have a feeling that you mix up two things: Pipelining breaks up long combinatorial paths and inserts a flip-flop, so that the clock can run faster. If you do this indiscriminately, you may get into a situation where two branches of logic must meet, but they have different numbers of pipeline stages. You must compensate for this and "balance" the number of flip-flops, so that the branches have the same number of pipeline delays. It's obvious if you think about it. And SRL16s are ideally suited to implement several cascaded flip-flops. Just cascading flip-flops at the end of a combinatorial chain is meaningless, unless it is needed for balancing purposes. You got the higher speed from breaking up the combinatorial chains... Peter Alfke, Xilinx Applications ================================ Alan Nishioka wrote: > Does FPGA Express do register balancing? > > Synplicity calls this "retiming". The idea is to take a large block of > combinatorial code and move registers into it to speed up the clock > speed. > > Then you would define a multiplier (for example) as a large > combinatorial block and simply add registers to the output until it > meets your timing requirement. > > I have seen this used in ASIC code, but I have not seen it work for > FPGA's. > > I have been trying to get Synplify Pro 6.2.4, which supports retiming, > to do this with no success. When I add more than a couple of registers > to the output, it turns them into an SRL and makes it *slower*. > > I am using Xilinx XCV1000E, Synplify Pro 6.2.4, Xilinx Alliance 3.3.08i, > Win2K, Netscape 4.75, Red Hat Linux 5.2, Flash 5, Acrobat 4.0 > > Alan Nishioka > alann@accom.comArticle: 32366
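A small Verilog sketch of the balancing idea, assuming one branch has already been pipelined by three stages upstream so the other branch needs a matching three-register delay (which the tools can map onto an SRL16 per bit); widths and names are illustrative.

    // Branch A already carries three pipeline stages; branch B gets a
    // 3-deep shift register so both arrive at the adder with equal latency.
    module balance3 (clk, a_piped, b_raw, sum);
        input        clk;
        input  [7:0] a_piped;    // already delayed by 3 clocks upstream
        input  [7:0] b_raw;
        output [8:0] sum;
        reg    [7:0] b_d1, b_d2, b_d3;
        reg    [8:0] sum;
        always @(posedge clk) begin
            b_d1 <= b_raw;
            b_d2 <= b_d1;
            b_d3 <= b_d2;
            sum  <= a_piped + b_d3;
        end
    endmodule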
Rick Collins wrote: > We are targeting a Xilinx XC4000 part and need to push some registers > into the IOBs. We like to keep our designs vendor independant, so we > prefer not to instantiate special features in the Verilog, such as IOB > FFs. > > I checked on the Xilinx web site and found that there is a directive to > designate a register to be implemented in the IOB. The directive is /* > synthesis IOB=TRUE */, and is placed in the reg declaration. This is > claimed to work with Synplify and Xilinx Alliance. When this was tried, > the FFs were still in the CLBs. > > Does anyone have experience with what we are trying to do? Should we > give up and instantiate the IOB FFs? > > -- > > Rick "rickman" Collins > > rick.collins@XYarius.com > Ignore the reply address. To email me use the above address with the XY > removed. > > Arius - A Signal Processing Solutions Company > Specializing in DSP and FPGA design URL http://www.arius.com > 4 King Ave 301-682-7772 Voice > Frederick, MD 21701-3110 301-682-7666 FAX I think the correct syntax is via a Xilinx properties attribute: /*synthesis xc_props = "IOB=TRUE"*/ You then need to check that this makes it into the .ncf file produced by Synplify and that this file is visible to NGDBUILD. There are 2 alternatives to this: o Put the IOB=TRUE commands in your .ucf file e.g. INST "foo_reg" IOB=TRUE; o Set the ``-pr'' pack registers flag to MAP. [this is what I do]. This sets up a general requirement to put FFs in the IOB if at all possible, -pr i for inputs, -pr o for outputs, -pr b for both. Now if there are any you don't want packed you can selectively turn them off with IOB=FALSE commands in the UCF. None of this will work if the way the registers have been synthesised is un-packable. An example of an unpackable register would be one driving an output pin whose output gets fed back into the rest of the logic. The synth attribute above won't stop this happening since Synplify just passes it through, you have to add the attribute /*synthesis syn_useioff=1*/ for this. In general IMO its better to keep these attributes in a separate .sdc constraints file with a separate one for each technology. The only things you have to put in at source code level are the directives like ``syn_keep'', ``syn_preserve'' etc. and these are all (?) technology independent.Article: 32367
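A minimal sketch of the reg-level attribute placement described above; the module and signal names are illustrative, and the UCF line (INST "dout*" IOB=TRUE;) or MAP's -pr switch are equivalent alternatives.

    module iob_regs (clk, din, dout);
        input        clk;
        input  [7:0] din;
        output [7:0] dout;
        // xc_props passes IOB=TRUE through to the .ncf for the Xilinx tools;
        // add syn_useioff as described above if Synplify re-buffers the output.
        reg    [7:0] dout /* synthesis xc_props = "IOB=TRUE" */;
        always @(posedge clk)
            dout <= din;   // dout never feeds back into the core, so it stays packable
    endmodule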
In article <BMSY6.9405$bR5.2269268@typhoon2.gnilink.net>, "Lewin A.R.W. Edwards" <larwe@larwe.com> wrote: >> ...an interface whereby other devices can read or write >to >> the chip, without having to know *a priori* the details of the hardware >> interface... > >So if I read this correctly, you are trying to create an abstraction layer, >with the goal that at some future time you might replace the SSFDC-based >system with some other removable storage. It's possible that in future we might replace the SSFDC system. But that's not really the goal - we expect to retain the SSFDC system for some time. It's the other devices connected to the system that might change frequently. Indeed, they might actually change out from underneath a working device, so to speak. So what's really important is that we don't have an interface that expects a specific "hard-wiring" from the Flash card to the device it's ultimately expected to communicate to, nor one where the data format is hardware-specific to the device being communicated with. > >> From the POV of the CPLD, we can view the device as raw flash blocks. We >don't >> have to implement SSFDC compliance, although we could, but in any case it >> would be the other device, not the CPLD, which would handle that layer. > >Whoa! On one hand you want to totally abstract the storage layer from the >other devices. On the other hand, you are happy to do the logical and >physical layer format management in the external device? That work is all >highly SSFDC-specific, so your abstraction is destroyed. Not exactly. We could choose to implement SSFDC compliance, in which case in order to read the same information off a particular SSFDC medium, the devices connected to it would then have, by our own requirements, a native capability to support SSFDC. But then you could go and connect a device without SSFDC compliance, and it could still use the same medium - it just wouldn't be able to read the data - the chip would be a blank slate - and it wouldn't use the same interfacing methods. As long as you stuck with SSFDC devices, the same medium would stay readable, but when you switched to a different type of device, it would destroy the data from the standpoint of the SSFDC devices, kind of like reformatting a hard drive, at a lower level. >Or are you only seeking to abstract the signal timing, so that SSFDC can be >tied to a variety of different host buses? Definitely this much. > >> >Unfortunately your posting sort of reads like this: "I want >recommendations .. >> I agree that my posts have a tendency to sound that way. ... > >Remember, I was being humorous there, or at the very least wry :).. > >> things in general terms first so as minimize the degree people answer >coming >> in with preconceptions... > >The flip side of that coin is that if you say "I am solving some >non-academic problem XYZ", you're a tad more likely to get people responding >who have done something directly substitutable for what you want. If you say >"We intend to do X, and the problem we've hit is how to do sub-task X sub >5", someone else might say "Well, try doing Y instead". I wrestle with 2 competing problems in communication, which makes posing questions in the way you recommend, which I can agree would be preferable, very difficult. First of all, I have trouble with overspecifying. 
I can easily manage to detail the problem in such exacting terms that people don't see a solution, precisely because the problem has been phrased in a language that embodies the source of the problem into the description. Secondly, I rarely think in terms of well-defined implementations. Suppose you came to me and asked me for a design for a speaker. Well, most people, I'm sure, would be perfectly happy determining what woofers, tweeters, midranges, etc. they needed, crossover designs, cabinet materials, etc. (subject to expected price range). I'd approach it from a totally different perspective. My thinking would be "I'm looking for a device that can generate sound. What would be the optimal way of doing so? Well, I would want something that didn't introduce extra harmonics into the system. I'd want something that has absolutely flat frequency response. I want something that can respond instantaneously to changes in signal level. I want something that has the widest possible dynamic range...etc." If a reasonably thorough examination of that problem suggested that the conventional speaker methods really did work best, I'd use them without wasting too much time. But if other possibilities suggested themselves, I'd want to explore those first. The final result might bear virtually no resemblance to more typical designs, but it would probably sound far better at any given price point. So the combination of seeing beyond the possibilities of conventional architecture and its limitations and giving too many details when phrasing a question in a specific way usually means that when I do ask such questions I get either "that's impossible (or impractical)", or "I don't understand". I'll admit that in the current question I've asked I don't seem to be doing any better, but then what's the alternative? >> >* what is the reason for avoiding a microcontroller? >> >> Redundant parts, extra programming,... > >I'm still confused on a couple of points, sorry. > >1. "extra programming" - your CPLD/FPGA has to be programmed anyway. So >using a microcontroller instead is no loss from that standpoint. A microcontroller typically requires a greater number of lines of object code to do the same task, since it's general-purpose. That's what I mean by extra programming. >2. "serial execution" - The SSFDC card can only do one thing at a time, so >this doesn't seem to be an issue. But there's no reason we couldn't have its *controller* doing multiple things at once. >3. "isolate the other devices" - as I read your earlier paragraphs, the >whole point of this exercise is to isolate/abstract the memory subsystem >from the rest of the system. I'll retract that last comment. It was a badly chosen description of an issue I've not defined well enough to state concisely. >>... Can you outline where the compelling benefits to a microcontroller >> might be? > >That very much depends; I still can't understand what you're trying to >design :) I'm probably misunderstanding, but it seems that you're trying to >satisfy mutually exclusive objectives. I'm willing to bet that this is poor communication on my part rather than the task I've set because time and again I've asked for answers to problems that turned out in the end to be perfectly solvable which in the interim got a lot of "you're trying to achieve the impossible" responses. > If you just want to abstract the >signal timing from the host...then that's going to be easy to do in >your FPGA, trivial in fact, and I don't see why you need to outsource that >IP. 
Mostly because I need to solve about a hundred other difficult problems, and if this one issue turns out to be easy to solve, that's one extra design chore I've not had to tackle, and if it's hard, finding something that does the task or gives me ideas will lessen the burden. Alex Rast arast@qwest.net arast@inficom.comArticle: 32368
I wrote: > > through, you have to add the attribute > > /*synthesis syn_useioff=1*/ > > for this. > This should work but doesn't always. For example if the FF concerned is a couple of levels down in the hierarchy. I haven't yet found a way of telling Synplify to obey the Xilinx IOB packing rules for an arbitrary FF. It gets much worse when you are trying to pack an input and an output FF into the same IOB. I consider this a bug but, like some other recent discussions on Synplify, I've more-or-less given up trying to make them understand this. Ray A. would understand this since I was basically told to make my design style compatible with the attribute rather than Synplify modifying ``syn_useioff'' to do the right thing.Article: 32369
Ben wrote: > > Hi, > > I am using synopsys test compiler to insert scan in my design. > The design is around 500kgate-big and contains as many as 15 scan chains. > The problem is that there are too many scan input pins that we have to share > the bidirectional functional pins and the 15 scan test input pins, but we > cannot make the atpg work with that constraint. > > Will somebody please tell me how to use bidirectional pins as scan test > input pins? IMHO there are two possibilities to use bidir ports as scan in: - Assure that all test patterns have the correct level in the FF connected to the output enable (can't remember the tc command). That reduces test coverage. - Gate all the output enable pins with the test enable port in such a way that, when test enable is asserted, all desired ports have the correct direction. I prefer the second. > Otherwise indication of any referece would be greatly appreciated. I could offer my diploma thesis "Design for Test (DFT) and Testability of a Multi-Million Gate ASIC", but I didn't discuss muxing mission mode pins in detail. (You'll get a copy by mail, if you like) Hope this helps Patrick Schulz -- Patrick R. Schulz (mailto:schulz@uni-mannheim.de) University of Mannheim - Dep. of Computer Architecture 68161 Mannheim - GERMANY / http://ra.ti.uni-mannheim.de Phone: +49-621-181-2720 Fax: +49-621-181-2713Article: 32370
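A minimal Verilog sketch of that second option: force a shared bidirectional pad into input mode whenever test mode is enabled, so the tester can drive it as a scan input. Names are illustrative.

    module bidir_scan_share (pad, test_en, func_oe, func_out, pad_in);
        inout  pad;
        input  test_en;     // asserted during scan/test mode
        input  func_oe;     // mission-mode output enable
        input  func_out;
        output pad_in;
        wire   oe = func_oe & ~test_en;       // test mode overrides the functional OE
        assign pad    = oe ? func_out : 1'bz;
        assign pad_in = pad;                  // feeds both mission logic and the scan-in
    endmodule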
Alan Nishioka wrote: > > Does FPGA Express do register balancing? As far as I can tell, FPGA Express does not perform retiming; only FPGA Compiler II (more expensive) does. > > Synplicity calls this "retiming". The idea is to take a large block of > combinatorial code and move registers into it to speed up the clock > speed. > > Then you would define a multiplier (for example) as a large > combinatorial block and simply add registers to the output until it > meets your timing requirement. > > I have seen this used in ASIC code, but I have not seen it work for > FPGA's. > > I have been trying to get Synplify Pro 6.2.4, which supports retiming, > to do this with no success. When I add more than a couple of registers > to the output, it turns them into an SRL and makes it *slower*. One of the limitations of such a transformation is determining the initial state of the "retimed registers", hence if you add several registers at the end of the combinational path, make sure that they don't need a reset signal and that they all share the same clock enable signal. > > I am using Xilinx XCV1000E, Synplify Pro 6.2.4, Xilinx Alliance 3.3.08i, > Win2K, Netscape 4.75, Red Hat Linux 5.2, Flash 5, Acrobat 4.0 > > Alan Nishioka > alann@accom.comArticle: 32371
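A Verilog sketch of the coding pattern discussed in this thread: a purely combinatorial multiplier followed by output registers that have no reset and share one clock enable, which leaves a retiming/balancing pass free to push them back into the multiplier. Widths and names are illustrative.

    module mult_retime (clk, ce, a, b, p);
        input         clk, ce;
        input  [11:0] a, b;
        output [23:0] p;
        reg    [23:0] p_d1, p_d2, p;    // no reset, one shared clock enable
        always @(posedge clk)
            if (ce) begin
                p_d1 <= a * b;          // the large combinatorial block
                p_d2 <= p_d1;
                p    <= p_d2;
            end
    endmodule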
David, We cannot tell what the 'true' clock speed can be from this report. The design will need to be constrained further. Currently, a good guess will be the 'Minimum period' number.

Traveling a bit deeper... Here are a few of the fields you might see while perusing the Timing Summary section of a timing report:

Minimum period - The fastest period that the slowest clock can be run for paths that travel between two synchronous elements (flops, RAMs, latches) which both reside _inside_ the FPGA. This generally takes the largest period from all the paths covered by FROM:TO and PERIOD constraints. This does not include any paths that start or end at a pad.

Minimum input arrival time before clock - The largest input setup time. This reports the longest path for all the OFFSET IN BEFORE constraints.

Minimum output required time after clock - The largest clock-to-pad time for all the paths constrained by an OFFSET OUT AFTER constraint.

Maximum combinatorial path delay - The delay for the longest purely combinatorial path through the FPGA covered by constraints. These paths will generally begin at a pad and end at a pad without being clocked. Usually constrained by a FROM:TO constraint.

Maximum net delay - The longest delay for a single net that was specifically constrained. You might see these when you apply a MAXDELAY constraint to a net.

Notice that these summaries only report on paths that were covered by timing constraints. Thus these numbers are only valid if all the timing-critical paths are constrained. This can be seen by the 'Constraints cover' section of the summary. I would recommend comparing this number to a report generated with the 'Analyze Against Auto-Generated Design Constraints' command in Timing Analyzer (or the -a option in trce) to see if you are in the same ballpark.

Ok, now how do we know how fast we can run the clock? Of course the worst-case clock speed of an FPGA will be the longest path between two synchronous elements in your system... These paths will include (with the summaries they correspond to): all paths that start at a synchronous element in an upstream chip and end on an input flop of the FPGA (Minimum input + upstream delay); all paths between synchronous elements inside the FPGA (Minimum period); all paths that start at a synchronous element in the FPGA and end in a downstream device (Maximum output + downstream delay); and all synchronous paths in the system which use the purely combinatorial paths from the FPGA (Maximum combinatorial).

Thus we would need all these numbers to have a good idea of clock speed. Unfortunately, there are no input or output lines in your summary. You might want to constrain these paths with offsets. More information can be found at: http://www.xilinx.com/support/techsup/journals/timing/index.htm. For now, without knowing anything about the chips that your FPGA is interfacing to, your best guess at a clock speed is most likely the 'Minimum period'.

I hope this helps, -Dylan Xilinx Applications

David Nyarko wrote: > Hi, I am having a hard time relating the parameters: Minimum period, Maximum combinatorial path delay, Maximum net delay indicated by the Alliance tools to the true clock speed of my implementation. > I have an example indicated below. > ---------------------------------------------- > Could anyone help me in interpreting the following results for determining which of the 2 multiplier implementations can operate at a higher clock rate. > I used Synplify and the Alliance 3.3 with the latest update (version 8). Both implementations are targeted for the Virtex-II xc2v1000-5 chip. > -They are 12-bit by 12-bit signed parallel multipliers using the CORE Generator. > -Input and output are registered. > - Maximum pipelining was selected. > Now to the differences: > Implementation I: Selected the Virtex-II hardware multiplier > Implementation II: Selected the LUTs for the multiplier implementation. > After synthesis (the only constraint selected in Synplify was a clock speed of 50 MHz), followed by running the Alliance Tools, these are the Timing summary results I obtained. > Implementation I (i.e. using built-in multiplier): Minimum period: 7.034ns (Maximum frequency: 142.167 MHz); Maximum combinatorial path delay: 8.179ns; Maximum net delay: 3.330ns > Implementation II: Minimum period: 4.617ns (Maximum frequency: 216.591 MHz); Maximum combinatorial path delay: 8.123ns; Maximum net delay: 5.963ns > David Article: 32373
Problem solved - see document Simulation-332.html in SolvNet. Andreas Andreas Purde wrote: > Hi, > > in my installation of Synopsys Simulation v2000.12 (sparcOS5) vhdlan > can't find the design-ware libs. The settings in .synopsys_vss_setup > seem to be correct but if I take a look in the directories where the > settings point to I only see broken links. > > Normally I would ask Synopsys about that - but they don't want > universities to do that - so if you have you any ideas... > > Thanks, > > AndreasArticle: 32374
I am having a problem with an XC95108 programmable part. The part has worked in the past without problem, but with the following date codes, the part doesn't function properly. With date codes 0017, 0033, & 0037 I am receiving varying degrees of failure. With the 0017 I have one output that doesn't function and the output oscillates. With the other two I cannot generate any outputs or read inputs. The parts verify using both the JTAG in-circuit programmer and a Data I/O external programmer. With date code 0001 the board functions without error. All of the above stated problems occur with multiple samples of each date code; the problem is repeatable with the various date codes. I am new to the programmable device world, so any help in troubleshooting this problem would be greatly appreciated. Dan Briggs Opex Corporation dbriggs@opex.com