>You can design your state machines so that every state
>leads eventually to a valid state. Prevents permanent lock up.

Right. One needs to make sure that if an input to a state machine is
async, the states are arranged so that only *one* bit changes as a
result of that input. E.g. an async input fed into a state machine
whose particular state might be

001101

should cause the next state to be something like

011101 or
001100

etc., i.e. only 1 bit changes.

How this applies to one-hot state machines I don't know. Perhaps it
doesn't matter. The problem with *those* is that there is a huge
number of possible illegal states, so one cannot exercise the other
option for dealing with them, which is to define all possible states
and exit any illegal state to a useful state explicitly.

Peter.
Return address is invalid to help stop junk mail.
E-mail replies to z80@digiXYZserve.com but remove the XYZ.

Article: 8301
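To make the one-bit-change idea in the post above concrete, here is a
minimal Verilog sketch. It is not from the post; the machine, its
encoding and the signal async_in are invented for illustration. The
only transition that depends on the asynchronous input is between two
states whose codes differ in exactly one bit, so however late async_in
changes, the register can only end up in one of those two legal states:

   // Minimal sketch: the async-dependent transition is between WAIT
   // (3'b001) and RUN (3'b011), which differ only in bit 1.
   // All names and the encoding are illustrative.
   module gray_async_fsm (
       input  wire clk,
       input  wire rst,
       input  wire async_in,          // asynchronous external input
       output reg  [2:0] state
   );
       localparam IDLE = 3'b000,
                  WAIT = 3'b001,      // waits for async_in
                  RUN  = 3'b011,      // differs from WAIT in one bit
                  DONE = 3'b010;

       always @(posedge clk or posedge rst) begin
           if (rst)
               state <= IDLE;
           else
               case (state)
                   IDLE: state <= WAIT;
                   WAIT: state <= async_in ? RUN : WAIT; // one-bit change
                   RUN:  state <= DONE;
                   DONE: state <= IDLE;
                   default: state <= IDLE;  // recover from illegal codes
               endcase
       end
   endmodule

Note that the whole sequence happens to be Gray-coded here, but only the
transition sampled from async_in strictly needs to be.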
In article <34897326.357866354@news.netcomuk.co.uk>, Peter <z80@ds.com> wrote:
>I know the advantages of one-hot state machines w.r.t. the peculiar
>features/limitations of FPGAs but I would never use them otherwise. I
>also do a lot of 22V10-based state machines, and of course there one
>does an encoded state machine. I have just done a really complex
>design in a 22V10 which *could* run at 110MHz and draw 30mA. In fact
>it runs at 1MHz (and draws ~0.5mA, in a Philips P5Z22V10, $4) and this
>just shows that one-hot designs are certainly not always appropriate,
>because the FPGA might be run slowly. The only other reason for using
>one-hot designs is that one can feasibly do the design as a schematic
>whereas only a masochist would be doing an encoded design in schematic
>form :)

Another advantage is that it is likely that one-hot state machines will
take up less space in the FPGA. Sure they will use more flip-flops, but
they don't waste space on all those decoders and encoders.

Actually is this still true for FPGAs like the XC4000 series, where you
have built-in ram/rom? Maybe I have to rethink that- most of my
experience is with the 3000 series.
--
/* jhallen@world.std.com (192.74.137.5) */               /* Joseph H. Allen */
int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
+r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+q*2
]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

Article: 8302
Peter wrote in message <34897326.357866354@news.netcomuk.co.uk>...
> [snip]
>I know the advantages of one-hot state machines w.r.t. the peculiar
>features/limitations of FPGAs but I would never use them otherwise. I
>also do a lot of 22V10-based state machines, and of course there one
>does an encoded state machine. I have just done a really complex
>design in a 22V10 which *could* run at 110MHz and draw 30mA. In fact
>it runs at 1MHz (and draws ~0.5mA, in a Philips P5Z22V10, $4) and this
>just shows that one-hot designs are certainly not always appropriate,
>because the FPGA might be run slowly. The only other reason for using
>one-hot designs is that one can feasibly do the design as a schematic
>whereas only a masochist would be doing an encoded design in schematic
>form :)
[snip]

Actually, the choice of one-hot vs. encoded has much more to do with
the target architecture. Sum-of-products devices, like a 22V10 and
CPLDs, have logic blocks with wide fan-in but very few flip-flops.
Consequently, encoding the state vector into a few flip-flops and then
decoding it again in each macrocell's wide combinatorial function makes
a lot of sense. An encoded state machine is well matched to the CPLD's
architecture.

Now consider an FPGA. It has lots of flip-flops and the logic blocks
have a much narrower fan-in. For FPGAs, it often makes sense to use one
flip-flop per state (one-hot encoding), in order to reduce the logic
fan-in to each block. Also, it generally means that the output from
each state flip-flop only needs to be routed to a few other logic
blocks. One-hot encoding also eliminates the large amount of wide
fan-in decoding logic required for highly-encoded state machines.

Is one-hot encoding best for all FPGA designs? No, it still depends on
the application. However, for most applications, a one-hot
implementation results in higher performance and fewer required
resources, including reduced routing. So, even if performance is not a
key factor, the reduced resources are a benefit.

One-hot encoding for CPLD devices is generally not a good idea because
it will usually consume more resources, not fewer, and probably will
not provide any better performance. The only time one-hot is useful in
CPLD designs is when you run into function block fan-in problems or
product term limitations.

-----------------------------------------------------------
Steven K. Knapp
OptiMagic, Inc. -- "Great Designs Happen 'OptiMagic'-ally"
E-mail: sknapp@optimagic.com
Web:    http://www.optimagic.com
-----------------------------------------------------------

Article: 8303
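As a rough illustration of the FPGA-friendly style described in the
post above, here is a minimal one-hot sketch in Verilog. It is not from
the post; the machine, its four states and the signal names are
invented. Each state bit is its own flip-flop, and the next-state
equation for each bit only looks at one or two neighbouring state bits
plus one input, which is the narrow fan-in being discussed:

   // One-hot version of a simple 4-state machine: four flip-flops, but
   // the next-state logic for each bit has a narrow fan-in.
   // Names are illustrative.
   module onehot_fsm (
       input  wire clk,
       input  wire rst,
       input  wire start,
       input  wire done,
       output reg  [3:0] state    // exactly one bit set in normal use
   );
       localparam IDLE = 0, LOAD = 1, RUN = 2, FLUSH = 3;  // bit indices

       always @(posedge clk or posedge rst) begin
           if (rst)
               state <= 4'b0001;                // reset into IDLE (bit 0)
           else begin
               state[IDLE]  <= (state[IDLE] & ~start) | state[FLUSH];
               state[LOAD]  <=  state[IDLE] &  start;
               state[RUN]   <=  state[LOAD] | (state[RUN] & ~done);
               state[FLUSH] <=  state[RUN]  &  done;
           end
       end
   endmodule

An encoded version of the same machine would need only two flip-flops,
but each next-state equation would then depend on the whole state
vector plus the inputs, which is exactly the wide fan-in that suits a
22V10 or CPLD better than an FPGA.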
Peter <z80@ds.com> wrote in article <348a762b.358639496@news.netcomuk.co.uk>...
>
> >You can design your state machines so that every state
> >leads eventually to a valid state. Prevents permanent lock up.

True.

> Right. One needs to make sure that if an input to a state machine is
> async, the states are arranged so that only *one* bit changes as a
> result of that input.

Also true, but not the same thing.

> E.g. an async input fed into a state machine whose particular state
> might be
>
> 001101
>
> should cause the next state to be something like
>
> 011101 or
> 001100
>
> etc, i.e. only 1 bit changes.
>
> How this applies to one-hot state machines I don't know. Perhaps it
> doesn't matter. The problem with *those* is that there is a huge
> number of possible illegal states, so one cannot exercise the other
> option for dealing with them, which is to define all possible states
> and exit any illegal state to a useful state explicitly.

Well, that's the problem. A state change in a one-hot system requires
*two* FFs to change...the current state FF turns off and the new state
FF turns on. If a synchronization failure occurs, we could end up with
two active states...or none! Either way, you've got a problem.

- Michael

Article: 8304
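The usual remedy for the hazard described in the post above, although
none of the posts spell it out, is to pass the asynchronous input
through a two-flip-flop synchroniser, so that the one-hot (or any
other) state register only ever sees a signal that changes
synchronously with its clock. A minimal Verilog sketch with invented
names; it greatly reduces, but can never completely eliminate, the
chance of a metastable value reaching the state logic:

   // Two-flip-flop synchronizer: async_in is sampled into a chain of
   // two registers; only sync_in, which changes synchronously, feeds
   // the state machine. Names are illustrative; this reduces (does not
   // eliminate) the metastability risk.
   module sync2 (
       input  wire clk,
       input  wire async_in,
       output reg  sync_in
   );
       reg meta;                      // first stage may go metastable

       always @(posedge clk) begin
           meta    <= async_in;
           sync_in <= meta;           // second stage gives it a clock to settle
       end
   endmodule

The price is one clock of extra latency on the input, which is the same
trade-off mentioned elsewhere in this thread.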
jim granville wrote:

> .....For this, you could do useful, and indicative calculations
>
> 1) Power based on ALL CLOCK nodes, at Fc, and with ALL FF's ClkEnable
> =FALSE. This is a MINIMUM number, and also encourages Chip vendors to
> make the lowest power sync-disabled instance Registers.
> This number INCLUDES the internal power lost in a FF, that is
> clocking, but whose output is NOT changing.

You can read exactly that number in the 1996 Xilinx data book, page
13-12. But that is not the lowest number. Xilinx XC4000 devices
distribute the clock first to the center of the chip and then
horizontally across (I call that the backbone, and it takes a tiny
portion of the above-mentioned power). Then the "vertical Longlines"
that send the clock to the CLBs are connected selectively, only when
needed. That saves a lot of capacitance and thus power. Then the clock
input is only connected when needed, which can save a lot of power
again.

The data book is overly pessimistic and assumes that all flip-flops are
driven.

Here is some new data for the XC4036XL:

The clock backbone consumes 0.53 mW/MHz.
That is the clock buffer driving the horizontal backbone, but none of
the vertical clock Longlines. This is the lowest meaningful value.
Add 0.1 mW for every additional unloaded vertical clock Longline.

A fully routed but unloaded complete clock distribution consumes
4 mW/MHz.
That means the whole "ribcage" with one vertical clock Longline per
column, but no flip-flops connected to it.

A clock driving all 2500 flip-flops consumes 25 mW/MHz.
This is the extreme case where the clock drives every flip-flop in
sight. That's what we claim as clock power on the present page 13-12,
although it really is an unrealistic worst-worst case.

> 2) Power based on ALL CLOCK nodes, at Fc, and ALL FF's toggling.
> This is a MAXIMUM number - not a real world one, but still useful,
> as it shows the power cost of Activity.

Again, see the 1996 Xilinx data book, page 1-12, left column. It gives
you the power consumed per flip-flop. Actually two values: minimal
interconnect means 0.1 mW per million transitions per second, max
interconnect loading (9 wires per output) gives twice that value. All
numbers on that page assume Vcc = 5 V. For 3.3 V just cut the power in
half.

> A straight line ratio between the two can be used, to get a better
> real chip fit.

It can be better than that, as explained above.

> 3) And, if you like, one in between, where the user can specify a
> BLOCK to enable / disable.
>
> I would be happy with 1) and 2)

I promise to make you even happier and publish all these numbers for
all XC4000XL devices "real soon". But it is still YOU who has to figure
out how many flip-flops change on each clock tick. And I think "12.5%"
is a gross oversimplification.

Peter Alfke, Xilinx Applications.
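To see how the figures above combine, here is a rough worked estimate
of my own, not from the post. Assume a hypothetical XC4036XL design
clocked at 40 MHz, with the full clock distribution routed and an
average of 250 flip-flops toggling on each clock edge, minimal
interconnect loading:

   Clock distribution:   4 mW/MHz x 40 MHz            =  160 mW
   Toggling flip-flops:  250 x 0.1 mW/MHz x 40 MHz    = 1000 mW
                                                Total ~= 1.16 W

The per-flip-flop figure comes from the 5 V data-book page quoted
above; the post says to cut that part roughly in half at 3.3 V. The
hard part, as the post stresses, is estimating how many flip-flops
really change on each clock tick.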
Article: 8305
The quickest way to do this is just to burn a part, and measure current
with an ammeter. If it draws too much current, then find another
vendor.

Also - my experience is that much (if not most) of the current
consumption on SRAM parts comes from the I/O pins. That means that you
have to model what the parts are connected to, as well.

"Erik de Castro Lopo" <e.de.castro@fairlightesp.com.au> wrote:

>Hi all (and particularly Xilinx),
>I'm currently doing a design with XC4000XL and XC9500
>parts and have been asked to estimate power consumption
>of the finished design.
>Xilinx on their web site does have an app note on how
>to estimate design power consumption, and that's how
>I'll be doing it. However I also thought wouldn't it be
>great if part of the Xilinx software could estimate
>power consumption on a completed design, given the
>clock frequencies and a given load capacitance for
>the outputs. It would also be really useful if it
>could estimate a probable error.
>What do the other readers of this group think?
>Erik

Article: 8306
OK, let me have a second go :)

>> >You can design your state machines so that every state
>> >leads eventually to a valid state. Prevents permanent lock up.
>
>True.

I assume this is simply defining *all* possible states in the state
machine source. The problem is that if one does get into an invalid
state, it takes at least 1 extra clock to get out of it. One may not
have that much time. Conversely, if one does have the extra time, then
the state machine is probably using up more states than necessary for
its normal business.

>> Right. One needs to make sure that if an input to a state machine is
>> async, the states are arranged so that only *one* bit changes as a
>> result of that input.
>
>Also true, but not the same thing.

It is a better solution, IMHO, than the first suggestion. The problem
is that a gray-code sequence can waste states, so more D-types are
required. But this does avoid the need to synchronise the inputs and
have 1 clock delay there. Which in any case would not avoid
metastability *completely* - nothing can.

>Well, that's the problem. A state change in a one-hot
>system requires *two* FFs to change...the current state FF
>turns off and the new state FF turns on. If a
>synchronization failure occurs, we could end up with two
>active states...or none! Either way, you've got a problem.

Learnt something here :)

Peter.
Return address is invalid to help stop junk mail.
E-mail replies to z80@digiXYZserve.com but remove the XYZ.

Article: 8307
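For the first option discussed above, defining every possible state so
the machine can always climb back to a legal one, the usual HDL idiom
is a default branch that sends any unused or corrupted code to a safe
state on the next clock. A minimal Verilog sketch with invented names;
as noted in the post, recovery still costs at least one extra clock:

   // Encoded FSM with full coverage: any of the four unused 3-bit
   // codes falls into the default branch and recovers to IDLE on the
   // next clock. Names and encoding are illustrative.
   module safe_fsm (
       input  wire clk,
       input  wire rst,
       input  wire go,
       output reg  [2:0] state
   );
       localparam IDLE = 3'd0, SETUP = 3'd1, XFER = 3'd2, DONE = 3'd3;

       always @(posedge clk or posedge rst) begin
           if (rst)
               state <= IDLE;
           else
               case (state)
                   IDLE:    state <= go ? SETUP : IDLE;
                   SETUP:   state <= XFER;
                   XFER:    state <= DONE;
                   DONE:    state <= IDLE;
                   default: state <= IDLE;   // 3'd4..3'd7: illegal, recover
               endcase
       end
   endmodule

Be aware that some synthesis tools optimise the unreachable default
branch away unless asked for a "safe" state machine implementation, so
it is worth checking the synthesised result.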
>Another advantage is that it is likely that one-hot state machines
>will take up less space in the FPGA. Sure they will use more
>flip-flops, but they don't waste space on all those decoders and
>encoders.
>
>Actually is this still true for FPGAs like the XC4000 series, where
>you have built-in ram/rom? Maybe I have to rethink that- most of my
>experience is with the 3000 series.

It may or may not be true, depending on how wide a decode is required.
An XC3k can decode 5 bits (IIRC - not used one for >1 year) in one CLB.
Any more than that, one has to cascade 3 CLBs, and this is where the
waste starts. Another FPGA would have a real advantage only if it could
decode 6 or 7 bits in parallel.

I know one can also do wide decodes using long lines and open-drain
wire-OR stuff, but that isn't very fast and I don't know how one would
tell the P&R program to implement wide decodes in such a way.

Peter.
Return address is invalid to help stop junk mail.
E-mail replies to z80@digiXYZserve.com but remove the XYZ.

Article: 8308
May I also suggest that a tool which could very easily (I mean with
minimal up-front work) predict the dynamic Icc of an FPGA might
discourage some users from using FPGAs at all, and push them towards an
ASIC instead?

Peter.
Return address is invalid to help stop junk mail.
E-mail replies to z80@digiXYZserve.com but remove the XYZ.

Article: 8309
Sam Goldwasser wrote:

> In article <01bd0136$748e8180$d080accf@default> "Richard B. Katz" <stellare_nospam@erols.com> writes:
>
> RE: demonstration circuit for metastability.
>
> > you might want to put two pots in, coarse and fine delay.
>
> Yes, and the fine delay better be able to resolve much much less than
> a nanosecond!

Surely, the noise on the power supply affects the recovery time. I
would think a noisy environment would help quite a bit. Do people spec
these times with noisy or clean power?

To be clear, the rejection of the power supply noise by the amps is
probably quite poor at high frequencies. Noisy power will push the
metastable state one way or the other. As a matter of fact, I would
think that the right amount of 1GHz noise could guarantee recovery
within 1.5ns.

Chuck

Article: 8310
Chuck Parsons wrote:

> Sam Goldwasser wrote:
>
> > In article <01bd0136$748e8180$d080accf@default> "Richard B. Katz" <stellare_nospam@erols.com> writes:
> >
> > RE: demonstration circuit for metastability.
> >
> > > you might want to put two pots in, coarse and fine delay.
> >
> > Yes, and the fine delay better be able to resolve much much less
> > than a nanosecond!
>
> Surely, the noise on the power supply affects the recovery time. I
> would think a noisy environment would help quite a bit. Do people
> spec these times with noisy or clean power?
>
> To be clear, the rejection of the power supply noise by the amps is
> probably quite poor at high frequencies. Noisy power will push the
> metastable state one way or the other. As a matter of fact, I would
> think that the right amount of 1GHz noise could guarantee recovery
> within 1.5ns
>
> Chuck

As an afterthought, I would think that designing an inverter or two
into the flip-flop after the clock to cause a short delay, and AC
coupling these outputs over the flip-flop transistors with the right
area of metal, could guarantee pushing the flip-flop out of the
metastable region in only a few hundred picoseconds. You just need to
make the coupling large enough to push a metastable state one way or
the other, but not large enough to transition a valid state, and
delayed enough so that you avoid a race condition.

This should work except for those metastable events which start to
recover just before a noise pulse. Those pulses may be just large
enough that the noise pulse throws the system back into metastability.
A subsequent pulse takes care of that.

I realize that, at least if we assume full linearity of EM and avoid
quantum mechanics, we can't avoid some voltage/time region that still
gives a metastable output. But surely techniques such as this must be
able to produce orders of magnitude decreases for every cycle of the
noise. Perhaps this type of effect, and smaller gate capacitance which
increases high frequency noise, is the real reason flip-flops have
shown such marked improvement in this way?

Article: 8311
>To be clear, the rejection of the power supply noise by the amps is
>probably quite poor at high frequencies. Noisy power will push the
>metastable state one way or the other. As a matter of fact, I would
>think that the right amount of 1GHz noise could guarantee recovery
>within 1.5ns

I think this is an excellent idea. It is only thermal noise, after all,
which ensures eventual recovery from a metastable state. As to the
practicalities of actually producing an on-chip noise generator, that's
another matter :) But this method could be useful in certain very
unusual situations.

Peter.
Return address is invalid to help stop junk mail.
E-mail replies to z80@digiXYZserve.com but remove the XYZ.

Article: 8312
Chuck Parsons (chuck@CatenaryScientific.com) wrote:

> As an afterthought, I would think that designing an inverter or two
> into the flip-flop after the clock to cause a short delay, and AC
> coupling these outputs over the flip-flop transistors with the right
> area of metal, could guarantee pushing the flip-flop out of the
> metastable region in only a few hundred picoseconds. You just need to
> make the coupling large enough to push a metastable state one way or
> the other, but not large enough to transition a valid state, and
> delayed enough so that you avoid a race condition.

> This should work except for those metastable events which start to
> recover just before a noise pulse. Those pulses may be just large
> enough that the noise pulse throws the system back into
> metastability. A subsequent pulse takes care of that.

> Perhaps this type of effect, and smaller gate capacitance which
> increases high frequency noise, is the real reason flip-flops have
> shown such marked improvement in this way?

That still won't work. A FF might be on its way to stability, when a
pulse moves it back to the metastable point, at which time another
pulse moves it out, only to have another pulse possibly move it back to
the metastable point. You can play that game all day long, and I don't
think you have really changed the probability of metastability all that
much.

GREG

Article: 8313
Philip Freidin (fliptron@netcom.com) wrote:
: schmitt triggers go metastable too.

Not something I was aware of, but I can understand how this can be
true.

I would suspect that this is even less well known than DFF
metastability. What sort of test circuit can demonstrate schmitt
trigger metastability?

Graeme Gill.

Article: 8314
You can download for free the working demo versions of our VHDL Simulator and Synthesis tools. They are seamlessly integrated together, which really makes life simpler during design. And the software's price is quite a bit lower than other leading vendors who don't even offer integrated VHDL Simulation!! The demo software is a design-size limited version of the product. This Demonstration Edition software allows you to evaluate the features for simulation and synthesis by running small design examples (of up to 70 statement lines per VHDL source file). If you require higher design capacity for a complete evaluation, please contact APS for information about our product evaluation program. -- __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ Richard Schwarz, President EDA & Engineering Tools Associated Professional Systems (APS) http://www.associatedpro.com 3003 Latrobe Court richard@associatedpro.com Abingdon, Maryland 21009 Phone: 410.569.5897 Fax:410.661.2760 __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/ __/Article: 8315
Hi,

Look at http://www.xilinx.com/products/logicore/alliance/tblpart.htm.
Xilinx has a list there of different companies that make PCI cores:
- Phoenix Technology/Virtual Chips
- Logic Innovations
- Eureka Technology Inc.

S.Gailhard avec un h wrote:
>
> Hello,
>
> I am looking for PCI cores (FPGA or ASIC) and
> HDL models (VHDL or Verilog) of the PCI bus.
> Do you know where I can find information about that.
>
> --
> Best regards.
>
> Stephane.
>

Article: 8316
Hi,

A lot of memory vendors are coming out with SDRAMs with something
called an SSTL (Stub Series-Terminated Logic) interface.

Does anyone know if any FPGA vendor has support for this SSTL
interface?

Does anyone know where information on SSTL can be found?

Article: 8317
Can anybody help with this problem: There is a bug in the M1 release at
the point where the ABEL BLIF format is converted to an EDIF netlist.
The Xilinx workaround is to return to the old PALASM design flow when
using ABEL. The solution note on Xilinx's Web site states that this
only works for top-level designs. Anybody know why this is so? Is there
a work-around for the work-around?

The reason this is important is that, as far as I can see, there is no
way of persuading the M1 tools to use a specific clock buffer when
targeting an XC4K device with ABEL as the HDL source. This means that
to do this (& other device specific things) I need to instantiate my
ABEL module as a macrocell in a piece of top-level VHDL source. Failing
this I will have to start learning to do the full monty in VHDL - what
a tedious and long-winded language it is. I think I'd rather do some
PERL hacking to an XNF/EDIF netlist. [Why oh why didn't they include a
Verilog compiler at least as an option!!]

BTW: Can anyone tell me where I can get a downloadable/printable manual
for the Aldec simulator that comes with Foundation M1? And I suppose I
had better get a copy of the Metamor VHDL manual as well (sigh).

--
_________________________________________________________________________
Dr. Richard Filipkiewicz          phone: +44 171 700 3301
Algorithmics Ltd.                 fax:   +44 171 700 3400
3 Drayton Park                    email: rick@algor.co.uk
London N5 1NU
England

Article: 8318
Graeme Gill wrote:
>
> Philip Freidin (fliptron@netcom.com) wrote:
> : schmitt triggers go metastable too.
>
> Not something I was aware of, but I can understand how this
> can be true.
>
> I would suspect that this is even less well known than
> DFF metastability.

You can sort of convince yourself of the logical necessity that ST's
(Schmitt triggers) have a mt (metastable) operation thus: If ST's had
no mt operation, they could be used to solve the flip flop mt problem.
Since this hasn't happened, ST's must have a mt problem.

Also, if you look at an ST just right, you realize that it is a memory
element analogous to an unclocked RS flip flop, so again, if all ff's
have mt problems, then ST's, being just another type of ff, must have a
mt problem.

Opinions expressed herein are my own and may not represent those of my
employer.

Article: 8319
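To illustrate the unclocked-memory analogy in the post above, here is a
structural Verilog sketch of a cross-coupled NAND latch (module and
signal names are invented). The regenerative feedback loop is what
makes it a storage element, and releasing both active-low inputs at
nearly the same instant is the classic way to provoke a metastable
output; a Schmitt trigger hides an equivalent positive-feedback loop
inside its hysteresis:

   // Cross-coupled NAND latch: the feedback loop that makes an
   // unclocked memory element (and hence a metastability candidate).
   // Active-low set/reset; names are illustrative.
   module rs_latch_nand (
       input  wire set_n,
       input  wire rst_n,
       output wire q,
       output wire q_n
   );
       nand u1 (q,   set_n, q_n);   // feedback from q_n
       nand u2 (q_n, rst_n, q);     // feedback from q
   endmodule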
Victor Levandovsky wrote: > I wish to simulate a project with external connections > between pin in Altera`s MAX+Plus II. for a simple connection between two pins on an Altera device, MAX+plus II has a useful feature. it's located under the Assign menu item as "Connected Pins...". if you wish to simulate two Altera designs connected together, make sure you compile each design in the same directory first. then make a 'top-level' design that includes the other two and connects them up using AHDL or schematic capture. before you compile this top-level design, make sure you click on "Processing" and select the "Linked SNF Extractor". this will link together the two previously-compiled designs so you can simulate. guyArticle: 8320
On Mon, 08 Dec 1997 12:43:47 +0000, Rick Filipkewicz <rick@xxxxz.co.uk> wrote: >Can anybody help with this problem: There is a bug in the M1 release at >the point where the ABEL BLIF format is converted to an EDIF netlist. what's the bug? evan (ems@riverside-machines.com)Article: 8321
Lars wrote in message <348BC3BB.557D@sn.no>...
>Hi,
>A lot of memory vendors are coming out with SDRAMs with something
>called an SSTL (Stub Series-Terminated Logic) interface.
>Does anyone know if any FPGA vendor has support for this SSTL
>interface?

So far, the only one that I know that has announced it (i.e.--not
available today but within the next six months) is Xilinx on their
upcoming Virtex series. See http://www.xilinx.com/spot/virtexspot.htm.

>Does anyone know where information on SSTL can be found?

JEDEC spec. (Adobe Acrobat format):
http://www.eia.org/jedec/DOWNLOAD/freeStd/jesd8-xx/JESD8-8.PDF

Integrated System Design article:
http://www.eedesign.com/Editorial/1997/ASICColumn9702.html

Electronic Design article:
http://www.penton.com/ed/resource/archives/apr1497/0414132.htm

-----------------------------------------------------------
Steven K. Knapp
OptiMagic, Inc. -- "Great Designs Happen 'OptiMagic'-ally"
E-mail: sknapp@optimagic.com
Web:    http://www.optimagic.com
-----------------------------------------------------------

Article: 8322
I don't know how hard it will be to demonstrate but what you need is a
circuit that can generate an adjustable width (energy) pulse with its
resting state in the hysteresis band of the Schmitt Trigger.

[ASCII waveform sketch in the original post: a narrow pulse whose
baseline rests between the 1V and 2V hysteresis thresholds.]

At some critical energy below the minimum pulse width spec for the
device, the output will get stuck in the metastable region for an
indeterminate amount of time. Again, I don't know how hard this would
be to demonstrate with simple test equipment.

--- sam : Sci.Electronics.Repair FAQ: http://www.repairfaq.org/
Lasers: http://www.geocities.com/CapeCanaveral/Lab/3931/lasersam.htm
Usually latest (ASCII): http://www.pacwest.net/byron13/sammenu.htm

In article <66g0k7$6v5@wallaby.digideas.com.au> graeme@wallaby.digideas.com.au (Graeme Gill) writes:

   Philip Freidin (fliptron@netcom.com) wrote:
   : schmitt triggers go metastable too.

   Not something I was aware of, but I can understand how this
   can be true.

   I would suspect that this is even less well known than
   DFF metastability. What sort of test circuit can demonstrate
   schmitt trigger metastability?

   Graeme Gill.

Article: 8323
Craig,

Viewlogic has a DOS-based program called "altran" that will make this
conversion. If your primary library has the "primary" alias, you can
run:

   altran -l primary xc3000=xc4000

This will change all of the components taken from the xc3000 library
and match them to components found in the xc4000 library. This will
prevent any complications with macros, and will remove all references
to the xc3000 family.

The final viewdraw.ini, if using M1, should look something like this
(as shown in the Viewlogic Interface / Tutorial Guide online document):

   DIR [p] . (primary)
   DIR [rm] c:\xilinx\viewlog\data\xc4000   (xc4000)
   DIR [rm] c:\xilinx\viewlog\data\simprims (simprims)
   DIR [rm] c:\xilinx\viewlog\data\builtin  (builtin)
   DIR [rm] c:\xilinx\viewlog\data\xbuiltin (xbuiltin)

If components used in the design exist in the 3k but not the 4k (there
aren't many) you will have to replace them by hand. They will appear as
empty white boxes in the schematic; run Check -p <design> to find them.

If your original design uses the pre-unified library (x3000 instead of
xc3000), the conversion must be done by hand.

thanks,
david.

David Dye
Xilinx Product Applications

Craig Slorach wrote:

> Hi,
>
> We have an exisiting application on a 3000 series FPGA which was
> originally written using the old ViewLogic 4.1.3a (running under
> DOS). We now need to take the schematics etc. and edit them to run on
> a 4000 series device (we have access to M1 and ViewLogic WorkView
> Office).
>
> Can anyone advise on the simplest route to doing this ? (ie. is there
> an easy way to convert the schematics to 4000 blocks etc.)
>
> As we have a problem with our mail server at the time, please could
> you e-mail any responses to: c.cossar@elec.gla.ac.uk
>
> Best regards
>
> Calum Cossar

Article: 8324
Jonathan Bromley <jsebromley@brookes.ac.uk> wrote:

>I think you proved beyond reasonable doubt something that many of us
>already suspected: Verilog beats VHDL by a couple of lengths if you are
>measuring designer productivity on small, timing-critical designs. The
>open question (much harder to answer, I fear) is this: on a big project
>where design re-use, integration of many designers' work, and ease of
>maintenance are the important criteria, does VHDL deliver the
>advantages it was designed to provide?

I'm not sure how to objectively answer that question because on large
projects you have so many variables. Here's a few that I can think of
right off the top of my head:

Designer Experience:
How experienced is the team of designers as far as the actual hardware
being designed? (That is, has the team ever designed an ATM before or
are they going off of specs?) What's the mix of experiences on the
design team? Have they ever designed ASICs, Standard Cell, Full Custom
chips before? What about designing for power, testability, speed?

EDA Experience:
What's the mix of EDA experiences on the team? Have they used any
simulation language before now? How about synthesis? ATPG? P&R tools?
Are they at an advanced scripting level or do they still believe what
their EDA vendor is telling them? What type of support is the team
getting from various EDA vendors?

Team / Interpersonal Experience:
What are the personal dynamics of the design team? Is the boss (or some
select group) always "right"? Does the team work together or are they a
pack of lone wolves? Is risk taking encouraged? Are failed risks
punished? Does the team have software engineers trying to design
hardware?

Fab / Foundry Experience:
Is this a new process? Are the libraries even tested? How much beta
testing are customers really doing?

And these are just a few concerns that would be very hard to factor out
in testing large projects, Jonathan! I wouldn't even attempt to try to
deal with the political aspects that influence large projects and the
choice of using Verilog or VHDL in them!

>I promise I won't whinge on about this any more!

No, please do. A good discussion usually brings out a few barbs!

- John Cooley
  Part Time EDA Consumer Advocate
  Full Time ASIC, FPGA & EDA Design Consultant

===========================================================================
 Trapped trying to figure out a Synopsys bug?  Want to hear how 5459 other
 users dealt with it?  Then join the E-Mail Synopsys Users Group (ESNUG)!

      !!!     "It's not a BUG,             jcooley@world.std.com
     /o o\  /  it's a FEATURE!"               (508) 429-4357
    (  >  )
     \ - /   - John Cooley, EDA & ASIC Design Consultant in Synopsys,
     _] [_       Verilog, VHDL and numerous Design Methodologies.

 Holliston Poor Farm, P.O. Box 6222, Holliston, MA 01746-6222
 Legal Disclaimer: "As always, anything said here is only opinion."