In article <uo9u2yjy4mw.fsf@netcom2.netcom.com>, Tom Lane <tgl@netcom.com> wrote:

> 2. What you actually want is not a DCT but a quantized DCT. Any
> multiply that you can push to the DCT-coefficient end of the operation
> can be eliminated by folding its constant into the coefficient
> quantization factors that you're going to have to multiply or divide by
> anyway. If you follow this road you find that you only need 5 or so
> multiplies per 1-D DCT inside the "DCT box" itself. There are accuracy
> issues to contend with if you are using fixed-point math, but overall
> this approach yields the fastest practical DCT algorithms that I know
> about.

Call me an idiot, Tom, but it's not clear to me quite what you're getting at here.

The data flow simply following the spec looks something like

  loop
    parse an AC coeff
    multiply it by quantization_scale*matrix[which AC Coeff]
  end loop
  IDCT this block, which equals
    loop over rows and columns
      do a bunch of coeff[i]*some_fixed_point_value + more of the same

You seem to be proposing that this be replaced with

  loop
    parse an AC coeff
  end loop
  IDCT this block, which equals
    loop over rows and columns
      do a bunch of coeff[i]
        *quantization_scale*matrix[which AC Coeff]*some_fixed_point_value
        + more of the same

So how does this in and of itself reduce the number of multiplies? Is your assumption that quantization_scale changes infrequently (heck, with JPEG I don't know---maybe it's a one-off for the entire image) so that at the start, before parsing, we calculate a matrix of size [64][11] which equals quantization_scale*matrix[0..63]*theSpecificIDCTCoeff[0..10]?

Maynard

Article: 14076
Will somebody give the details on how to pre-simulate the netlist generated with the SYNOPSYS FPGA compiler? I.e., I've used Altera's flex10k as the target library, and now try to presim with flex10k's ftgs. In fact, I don't even know the differences between ftgs, ftsm, etc. What I did after synthesis was adding the two lines

  library flex10k_ftgs;
  use flex10k_ftgs.all;

for each entity in the top netlist, and specifying flex10k_ftgs in the .synopsys_vss.setup file. The result from running vhdldbx was a great many WARNINGs about components not being instantiated because they are unbound. For example, U147 - ATBL_1, which was instantiated during synthesis, is unbound. What's wrong? Do I have to configure all the instances, which, I'm sure, is not the right way?

Article: 14077
I have pretty strong opinions about this. Hopefully when you read this you will try to understand my points, not pick apart my choice of words:

1. If you are a digital designer and you do not understand metastability, you cannot do reliable digital design (except by luck, or as a synchronous subsystem only).

2. If you do not understand exactly what metastable behavior will do to your design, your design should be considered "unsafe" in that it *might* do something unpredictable.

3. If you understand how often your design might fail due to metastability, and the nature of the failure, and you are satisfied with it, you have done your job.

>Fact is, I am designing relatively simple uP boards. I even
>have to write the software for it. My designs aren't 100%
>reliable, mostly because I haven't removed all software
>flaws yet. I can understand that one can get excited about
>metastability, but *I* couldn't care less. The article I
>have read, showed an example where an improvement was
>suggested, putting 2 d-flipflops in cascade, resulting in 1
>occurence of metastability once every 2 million years.
>Although very interesting, I am not even considering that
>advice for my future designs, because I like to see this
>metastability-thing at least once in my life.

What was the failure rate *before* the second flip-flop was added? The whole point is that the second flip-flop makes the failure rate tolerable (in most situations). My rule of thumb is that if the metastable failure rate is better than the failure rate for the worst part in the system, then I'm in the right ballpark. If your software crashes once a week (and you work for Microsoft so that is OK with you <g>), then your metastable requirements have opened up nicely.

The important point is that every digital designer *must* consider metastability in the context in which the design is being done. It should never be totally overlooked.
bruce

"Having spent untold hours analyzing and measuring metastable behavior, I can assure you that it is (today) a highly overrated problem. You can almost ignore it." (Peter Alfke, 12/28/98)

"Having spent untold hours debugging digital designs, I can assure you that metastable behavior is a real problem, and every digital designer had better understand it." (Bruce Nepple, 12/31/98)

Article: 14078
handleym@ricochet.net (Maynard Handley) writes:

> Tom Lane <tgl@netcom.com> wrote:
>> 2. What you actually want is not a DCT but a quantized DCT. Any
>> multiply that you can push to the DCT-coefficient end of the operation
>> can be eliminated by folding its constant into the coefficient
>> quantization factors that you're going to have to multiply or divide by
>> anyway. If you follow this road you find that you only need 5 or so
>> multiplies per 1-D DCT inside the "DCT box" itself.

> Call me an idiot, Tom, but it's not clear to me quite what you're getting
> at here.

> The data flow simply following the spec looks something like
>   loop
>     parse an AC coeff
>     multiply it by quantization_scale*matrix[which AC Coeff]
>   end loop
>   IDCT this block which equals
>   loop over rows and columns
>     do a bunch of coeff[i]*some_fixed_point_value + more of the same

> You seem to be proposing that this be replaced with
>   loop
>     parse an AC coeff
>   end loop
>   IDCT this block which equals
>   loop over rows and columns
>     do a bunch of coeff[i]
>       *quantization_scale*matrix[which AC Coeff]*some_fixed_point_value
>       + more of the same

> So how does this in and of itself reduce the number of multiplies?

No, that's not what I'm getting at. What you do is adjust the quantization multipliers (matrix[n] in your pseudocode above) by multiplying them by DCT-determined constants before you start the decoding pass. The pseudocode still looks much as in your first example --- but the number of multiplies needed per 1-D DCT drops from 11 or so to 5 or so (something that can't be seen in your pseudocode, since it glosses over the contents of the DCT step). JPEG doesn't have quantization_scale, btw, unless you are implementing some of the Part 3 extensions.
> Is your assumption that quantization_scale changes infrequently (heck with
> JPEG I don't know---maybe it's a one-off for the entire image) so that at
> the start before parsing we calculate a matrix of size [64][11] which equals
> quantization_scale*matrix[0..63]*theSpecificIDCTCoeff[0..10]?

I'm not following what the second subscript is supposed to represent there, but it's not needed in what I'm thinking of.

regards, tom lane
organizer, Independent JPEG Group

Article: 14079
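The folding Tom describes can be sketched in a few lines. This is only an illustration: the function names are made up, and the scale factors shown are the ones used by the AAN-style scaled DCT in the IJG code (any scaled DCT has its own analogous set). A scaled 8-point IDCT leaves each input coefficient k expecting a known constant s[k]; those constants are multiplied into the dequantization table once per image, so the per-block loop still pays only one multiply per nonzero coefficient while the DCT box itself loses most of its internal multiplies.

```python
import math

def aan_scale_factors():
    """Per-coefficient scale factors of an AAN-style scaled 8-point DCT:
    s[0] = 1, s[k] = sqrt(2) * cos(k*pi/16) for k = 1..7."""
    return [1.0] + [math.sqrt(2) * math.cos(k * math.pi / 16) for k in range(1, 8)]

def fold_quant_table(quant):
    """Premultiply an 8x8 quantization table by the row and column DCT
    scale factors, so dequantization absorbs the DCT's scaling.
    The /8 folds in the 2-D transform's overall normalization; exactly
    where that factor lives varies between implementations."""
    s = aan_scale_factors()
    return [[quant[i][j] * s[i] * s[j] / 8.0 for j in range(8)]
            for i in range(8)]

# Decoding a block then needs only
#   dequantized[i][j] = parsed_coeff[i][j] * folded[i][j]
# followed by the scaled IDCT, whose internal multiplies drop from
# ~11 to ~5 per 1-D pass because its scaling is already paid for.
```

The folding runs once per quantization table, not per block, which is why it is effectively free.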
Please go to: http://www.ti.com/sc/docs/psheets/abstract/apps/sdya006.htm and all will be revealed! bruceArticle: 14080
Peter Alfke wrote: > Luckily, most asynchronous interfaces operate at a much > slower pace. Peter, I must admit I am surprised to hear you say this. I guess you've succumbed to the 'most designs' philosophy that had Xilinx management convinced to not support schematic simulation for Virtex. I suppose if you consider that "most" FPGA designs are just random logic to consolidate a bunch of TTL into one chip then OK. When dealing with high performance DSP designs such as found in radar and digital communications, it is fairly common to have an async interface running data rates in the 50-100 MHz range, often between clock domains with similar clock rates. -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 14081
The symmetry in a 1-D DCT can be exploited to reduce the multiplies. Additionally, if you look at the cos terms, you can pair terms so that you get a sin and a cos; then, by using a CORDIC rotator, you get two terms at a time without an explicit multiplication. By taking advantage of the sin and cos plus symmetry, an 8-point DCT can be done with 6 CORDIC rotations and a bunch of butterfly operations, each consisting of an add and a subtract. For high-speed (completely pipelined) hardware, the resulting complexity (number of CLBs) is about the same as a distributed arithmetic approach. The CORDIC approach appears offhand to have a smaller error than the straightforward multipliers too, because the number of truncations is reduced in the process.

Maynard Handley wrote:

> In article <uo9u2yjy4mw.fsf@netcom2.netcom.com>, Tom Lane <tgl@netcom.com>
> wrote:
>
> > 2. What you actually want is not a DCT but a quantized DCT. Any
> > multiply that you can push to the DCT-coefficient end of the operation
> > can be eliminated by folding its constant into the coefficient
> > quantization factors that you're going to have to multiply or divide by
> > anyway. If you follow this road you find that you only need 5 or so
> > multiplies per 1-D DCT inside the "DCT box" itself. There are accuracy
> > issues to contend with if you are using fixed-point math, but overall
> > this approach yields the fastest practical DCT algorithms that I know
> > about.
>
> Call me an idiot, Tom, but it's not clear to me quite what you're getting
> at here.
> The data flow simply following the spec looks something like
>   loop
>     parse an AC coeff
>     multiply it by quantization_scale*matrix[which AC Coeff]
>   end loop
>   IDCT this block which equals
>   loop over rows and columns
>     do a bunch of coeff[i]*some_fixed_point_value + more of the same
>
> You seem to be proposing that this be replaced with
>   loop
>     parse an AC coeff
>   end loop
>   IDCT this block which equals
>   loop over rows and columns
>     do a bunch of coeff[i]
>       *quantization_scale*matrix[which AC Coeff]*some_fixed_point_value
>       + more of the same
>
> So how does this in and of itself reduce the number of multiplies?
>
> Is your assumption that quantization_scale changes infrequently (heck with
> JPEG I don't know---maybe it's a one-off for the entire image) so that at
> the start before parsing we calculate a matrix of size [64][11] which equals
> quantization_scale*matrix[0..63]*theSpecificIDCTCoeff[0..10]?
>
> Maynard

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930  Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka

Article: 14082
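The CORDIC rotation Ray describes above can be sketched in software. This is a minimal floating-point illustration (the function name is mine; a hardware rotator does the same thing in fixed point, with actual bit shifts): each iteration only adds or subtracts a shifted copy of the other coordinate, and after compensating the fixed CORDIC gain, the output pair carries both the cos and sin products of the rotation at once, with no explicit multiplier in the iteration itself.

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate (x, y) by `angle` radians using only add/subtract and
    halvings (the software analogue of hardware shifts).  Returns
    approximately (x*cos(a) - y*sin(a), x*sin(a) + y*cos(a))."""
    # Elementary angles atan(2^-i) and the accumulated rotator gain.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0          # steer residual angle to 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x / gain, y / gain                # compensate the CORDIC gain

# Feeding in (c, 0) yields (c*cos(a), c*sin(a)): one rotation produces
# the sin and cos products together, which is how paired DCT terms can
# share a single rotation instead of two multiplies.
```

In a pipelined FPGA implementation the gain compensation is typically folded into the quantization constants discussed earlier, so it costs nothing per sample.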
> Surely, if you can reliably detect the m-state, you can force it
> whichever way you want in a defined time, if possible short enough not
> to be of any significance?

I don't think so. You just pushed things around. Consider what happens if the FF is late but about to switch to a 1, and your window says "bad" and tries to fix it by resetting the FF. (If your extra circuit wasn't there, everything would be fine.) You might get a runt pulse out of your circuit. After that, all bets are off. (Unless you wait long enough, but then you would have been better off to start waiting when the clock ticked.)

I'm assuming that you gate your fix-up logic with, say, the middle 1/3 of the cycle. You have to do something like that or your circuit will go off every time the FF tries to switch.

--
These are my opinions, not necessarily my employer's.

Article: 14083
> I don't know if it's possible to construct a device that guarantees a
> metastability time limit. I am not convinced by the hand-waving
> arguments around here that say it's impossible -- but the maths is not
> simple. A real device does not have one or two continuous state
> parameters, but infinitely many (think of the internal energy flows).

This whole mess might make more sense if you put on an analog hat rather than your digital one. Both Mead and Conway and Glasser and Dobberpuhl have good graphs.

Consider the internal voltage inside the FF. If it is near the metastability point it will drift exponentially to high or low. The closer to the metastability point it starts, the longer it takes to get to a valid signal level.

When the clock ticks, that internal voltage gets set depending upon the input voltage. (Or maybe the input voltage a few ns before or after the clock.) If you change the input voltage just a bit early or late, the internal voltage will be close to metastable and it will take a long time to recover to a clean output.

One of the standard ways to show that metastability is unavoidable is to note that the input voltage and the internal voltage are continuous functions. You also have to prove that any bi-stable device is going to have similar characteristics.

Johnson and Graham have a very good section in their High-Speed Digital Design: A Handbook of Black Magic. They have a graph of clock-out time as a function of input setup/hold time. Ugly. They also have a setup for teasing out metastability and a lot of really ugly scope pictures. It is a wonderful book. Everybody interested in fast designs should get a copy.

----
Thanks for suggesting Mead and Conway.

--
These are my opinions, not necessarily my employer's.

Article: 14084
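The analog picture sketched above can be put into a toy model. The time constant and threshold below are made-up numbers, chosen only to show the shape of the behavior: a latch near its balance point regenerates exponentially, v(t) = v0 * e^(t/tau), so the time to reach a valid logic level grows like the log of 1/|v0| and has no finite bound as v0 approaches zero.

```python
import math

def resolution_time(v0, v_valid=1.0, tau=0.5):
    """Time (ns) for a latch that sampled v0 volts away from its
    metastable balance point to regenerate to a valid level, under
    the model v(t) = v0 * exp(t / tau).  tau and v_valid here are
    purely illustrative values, not any real device's numbers."""
    return tau * math.log(v_valid / abs(v0))

# The closer the sampled voltage lands to the balance point, the
# longer the output dawdles at an invalid level: 100 mV resolves in
# about tau*ln(10), 1 uV takes tau*ln(1e6), and the time diverges as
# v0 -> 0.  This is why no circuit can *guarantee* a maximum settling
# time, only make long settlings exponentially rare.
```

The same logarithm is what turns up, inverted, in the exponential e^(K2*t) factor of the MTBF formulas used elsewhere in this thread.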
hi tim,

a few questions, comments.

good evening,

rk

============================

Tim Hubberstey wrote:

Peter Alfke wrote:
> [snip]
> I was just pointing out that the best of modern CMOS
> flip-flops have reduced the likelihood of metastability to a
> point where we can be more relaxed about it.
> But it still makes for a spirited discussion...
>
> Peter Alfke
> Xilinx Apps.

I've heard these sentiments echoed in several posts in this thread and I'm afraid I have to disagree.

i basically agree with peter. many situations a decade ago which drove a design are now easily handled, as long as the design is reasonable (proper syncing) and adequate settling time is provided. for instance, a 100 kHz request to memory on a 2.25 MHz clock is easy with modern technologies. now, the case you describe is a bit different, with about 13 nsec for the clock->clock delay.

================================================

I am currently involved in a design involving several chips using a 0.25 micron process from a large, respected vendor (who will remain undisclosed due to NDAs) and we have found metastability to be a very real potential problem. The chips in question fall into the "system on a chip" category and as such have several high-speed clock domains. Analysis by the vendor has predicted an MTBF of less than a minute for the standard 2 flip-flop synchronizer when going between a 60 MHz and a 75 MHz clock domain. This is obviously not something "we can be more relaxed about".

no, this is not something to be relaxed about. i started to run some numbers on some different technologies, but i would like to run exactly your case. first, how often does the signal on the 60 MHz clock change? second, how much time is available in the 75 MHz clock period for settling? lastly, did you select the flip-flop for its use in a synchronizer, or did the vendor analyze the flip-flop that the synthesis tool selected?
running some numbers on chip express, for example, one can see that the choice of flip-flop can make a big difference in the results. here's an example, assuming a sync running at 75 MHz, and the data coming from the 60 MHz domain switching at a 20 MHz average rate.

***************************************
Manufacturer    = ChipExpress
F-F Model       = CX2000_DMxPP_FALL
Minimum Slack   = 1.000
Maximum Slack   = 13.000
Slack Increment = 1.000
K1              = 0.300
K2              = 6.000
Clock Frequency = 7.5000E+0007
Data Frequency  = 2.0000E+0007

Slack Time   MTBF (sec)       MTBF (years)
  1.00       8.965084E-0004   2.842810E-0011
  2.00       3.616773E-0001   1.146871E-0008
  3.00       1.459110E+0002   4.626809E-0006
  4.00       5.886472E+0004   1.866588E-0003
  5.00       2.374772E+0007   7.530353E-0001
  6.00       9.580515E+0009   3.037961E+0002
  7.00       3.865055E+0012   1.225601E+0005
  8.00       1.559275E+0015   4.944427E+0007
  9.00       6.290563E+0017   1.994724E+0010
 10.00       2.537794E+0020   8.047293E+0012
 11.00       1.023819E+0023   3.246510E+0015
 12.00       4.130382E+0025   1.309735E+0018
 13.00       1.666315E+0028   5.283850E+0020

MTBF Improvement per increment = 403.429

***************************************
Manufacturer    = ChipExpress
F-F Model       = CX2000_TFxPC_FALL
Minimum Slack   = 1.000
Maximum Slack   = 13.000
Slack Increment = 1.000
K1              = 6.900
K2              = 3.400
Clock Frequency = 7.5000E+0007
Data Frequency  = 2.0000E+0007

Slack Time   MTBF (sec)       MTBF (years)
  1.00       2.895082E-0006   9.180245E-0014
  2.00       8.674853E-0005   2.750778E-0012
  3.00       2.599342E-0003   8.242458E-0011
  4.00       7.788693E-0002   2.469778E-0009
  5.00       2.333812E+0000   7.400469E-0008
  6.00       6.993057E+0001   2.217484E-0006
  7.00       2.095407E+0003   6.644491E-0005
  8.00       6.278697E+0004   1.990962E-0003
  9.00       1.881355E+0006   5.965738E-0002
 10.00       5.637312E+0007   1.787580E+0000
 11.00       1.689170E+0009   5.356322E+0001
 12.00       5.061445E+0010   1.604974E+0003
 13.00       1.516616E+0012   4.809159E+0004

MTBF Improvement per increment = 29.964

assuming 10 nsec of slack, and the above numbers, we have either 8E+12 years as a mtbf or 1.8 years, which is clearly not very good.
in any event, quite a large difference, which was the point. additionally, choice of edge makes a difference, too. the data below, on a xilinx part, shows the best performance.

================================================================

One of the biggest contributors to this problem turns out to be junction temperature. In these days of systems on a chip, the die will often have a worst case temperature spec of over 100C. The vendor's projections show a 10000-fold difference in MTBF across a die temperature range of 0C to 100C.

10^4 is quite a difference. did they supply a curve of mtbf vs. temperature?

=========================================================

> Your comment may be valid for FPGAs (I defer to your experience here)
> but I can't agree with it as a blanket statement.

here's some numbers, with the same assumptions, run on an FPGA:

***************************************
Manufacturer    = Xilinx
F-F Model       = XC4005E-3_CLB
Minimum Slack   = 1.000
Maximum Slack   = 13.000
Slack Increment = 1.000
K1              = 0.100
K2              = 19.400
Clock Frequency = 7.5000E+0007
Data Frequency  = 2.0000E+0007

Slack Time   MTBF (sec)       MTBF (years)
  1.00       1.775095E+0003   5.628790E-0005
  2.00       4.726445E+0011   1.498746E+0004
  3.00       1.258483E+0020   3.990626E+0012
  4.00       3.350893E+0028   1.062561E+0021
  5.00       8.922231E+0036   2.829221E+0029
  6.00       2.375672E+0045   7.533206E+0037
  7.00       6.325566E+0053   2.005824E+0046
  8.00       1.684272E+0062   5.340793E+0054
  9.00       4.484616E+0070   1.422062E+0063
 10.00       1.194093E+0079   3.786445E+0071
 11.00       3.179444E+0087   1.008195E+0080
 12.00       8.465724E+0095   2.684464E+0088
 13.00       2.254120E+0104   7.147768E+0096

MTBF Improvement per increment = 266264304.669

====================================================
Tim Hubberstey, P.Eng. . . . . . . . . . . . . . . Marmot Engineering
Vancouver, BC, Canada . . . . . Hardware/Software Consulting Engineer
Email: marmot@rogers.wave.ca . . . . . . VHDL, ASICs, embedded systems

Article: 14085
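The tables above all follow the standard metastability MTBF model, MTBF = e^(K2 * t_slack) / (f_clock * f_data * K1), with K2 in 1/ns and K1 entered in ns here. A small sketch (function and parameter names are mine) reproduces the first ChipExpress table and shows why each extra nanosecond of slack is worth a constant multiplicative factor:

```python
import math

def mtbf_seconds(t_slack_ns, k1_ns, k2_per_ns, f_clock_hz, f_data_hz):
    """Mean time between synchronizer failures for the classic model
    MTBF = exp(K2 * t_slack) / (f_clock * f_data * K1)."""
    return (math.exp(k2_per_ns * t_slack_ns)
            / (f_clock_hz * f_data_hz * k1_ns * 1e-9))

# First ChipExpress flip-flop from the tables above:
# K1 = 0.300 ns, K2 = 6.000 /ns, 75 MHz clock, 20 MHz data rate.
for slack in (1.0, 5.0, 10.0):
    m = mtbf_seconds(slack, 0.300, 6.000, 75e6, 20e6)
    print(f"slack {slack:4.1f} ns -> MTBF {m:.3e} s "
          f"({m / (365 * 24 * 3600):.3e} years)")

# Each extra nanosecond of slack multiplies the MTBF by e^K2
# (about 403x here), which is exactly the "MTBF Improvement per
# increment" figure quoted in the table.
```

The same function with K1 = 0.100 ns and K2 = 19.400 /ns reproduces the Xilinx table, which makes rk's point numerically: the flip-flop's K2 dominates, so the choice of cell matters far more than the raw clock rates.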
Will somebody give the detail on how to pre-simulate the netlist generated with SYNOPSYS fpga compiler? i.e. I've used Altera's flex10k as the target library, and now try to presim with flex10k's ftgs. In fact, I don't know even the differences between ftgs, ftsm, etc. What I did after synthesis was adding the two lines library flex10k_ftgs; use flex10k_ftgs.all; for each entity in the top netlist, and specify flex10k_ftgs in .synopsys_vss.setup file. The result from running vhdldbx was so many WARNINGs for a component being not instanced because it is unbound. For example, U147 - ATBL_1 which was instanced during synthesis, says the simulator, is unbound. What's wrong? Do I have to configure all the instances, which, I'm sure, is not the right way.Article: 14086
I am using FPGA Express with Foundation Series 1.4 to implement VHDL code. I have been coming across problems when using processes in my code, and I usually receive a warning that the output of the cell is constant. This causes the cell to be eliminated when it is optimized. I am using the student edition of this software if it makes any difference. Here is a simple VHDL program that this error occurs with...

*****************************************
library IEEE;
use IEEE.std_logic_1164.all;

entity dff is
  port(D     : in  std_logic;
       CLK   : in  std_logic;
       CLEAR : in  std_logic;
       Q     : out std_logic);
end dff;

architecture dff1 of dff is
  signal V : std_logic;
begin
  process(CLK, CLEAR)
  begin
    if CLEAR = '1' then
      V <= '0';
    elsif (CLK = '1' and CLK'event) then
      V <= D;
    end if;
  end process;
  Q <= V;
end dff1;
**************************************************

Any help in this matter would be greatly appreciated.

Thanks,
Nick

Article: 14087
rk wrote: > [snip] > Tim Hubberstey wrote: > Peter Alfke wrote: > > I was just pointing out that the best of modern CMOS > > flip-flops have reduced the likelihood of metastability to a > > point where we can be more relaxed about it. > > But it still makes for a spirited discussion... > > > > Peter Alfke > > Xilinx Apps. > I've heard these sentiments echoed in several posts in this thread and > I'm afraid I have to disagree. > i basically agree with peter. many situations a decade ago which drove a > design are now easily handled, as long as the design is reasonable (proper > syncing) and adequate settling time is provided. for instance, a 100 kHz > request to memory on a 2.25 MHz clock is easy with modern technologies.. > now, the case you describe is a bit different, with about 13 nsec for the > clock->clock delay. I agree with both of you 100%. *IF* you are dealing with a situation such as those that existed a decade ago then metastability is much less of a problem. My point, which I apparently didn't make clearly, is that my design problem is typical of those encountered in *CURRENT* leading-edge designs and that you cannot be complacent just because processes have improved. I have encountered similar problems in other recent designs. [lots of interesting stuff snipped] > One of the biggest contributors to this problem turns out to be > junction > temperature. In these days of systems on a chip, the die will often > have > a worst case temperature spec of over 100C. The vendor's projections > show a 10000-fold difference in MTBF across a die temperature range of > 0C to 100C. > 10^4 is quite a difference. did they supply a curve of mtbf vs. > temperature? Yes. That's where this data came from. Again, this is an example of a current design problem. 10 years ago I would never have considered designing for a die temperature this high but without expensive cooling, fancy packages and/or gated clocks, this is the reality. 
[FPGA data snipped] -- Tim Hubberstey, P.Eng. . . . . . . . . . . . . . . Marmot Engineering Vancouver, BC, Canada . . . . . Hardware/Software Consulting Engineer Email: marmot@rogers.wave.ca . . . . . . VHDL, ASICs, embedded systemsArticle: 14088
Hi everyone,

can someone tell me which hardware is present in the Cypress ISP cable? Are there just some buffers?

Thanks in advance

Matteo Ricchetti
ricchetti@eidomedia.com
(Remove NOSPAM from the address)

Article: 14089
Greetings,

Please check the Verilog FAQ for a list of free Verilog simulators. It is located at http://www.angelfire.com/in/verilogfaq/

There are three free Verilog simulators available with limited capabilities.

SILOS III from Simucad (www.simucad.com). SILOS III's high-performance logic and fault simulation environment supports the Verilog Hardware Description Language for simulation at multiple levels of abstraction. The environment's state-of-the-art architecture incorporates an exclusive integrated/interactive multi-tasking graphical debugging environment that provides unsurpassed accuracy and outstanding performance.

VeriLogger from SynaptiCAD. VeriLogger is a free, IEEE-1364-compliant Verilog simulator. VeriLogger combines many of the best ideas from modern programming IDEs and SynaptiCAD's timing diagram editing environment to create an interactive simulator with graphical stimulus generation. VeriLogger has a powerful hierarchical browser that displays the structural relationships of the modules. It also includes waveform viewing, single-step debugging, point-and-click breakpoints, and graphical and console execution (command line version). Download a free evaluation version of VeriLogger Pro from http://www.syncad.com/

SMASH from Dolphin Integration. Dolphin Integration offers an evaluation version of the SMASH simulator, which is a mixed-signal, multi-level simulator. SMASH implements the full Verilog-HDL IEEE standard. The implementation is based on the OVI Reference Manuals. SMASH supports the SDF (Standard Delay File) format, to allow backannotation from layout tools. This evaluation version is a full-featured system (they will not allow you to compile new behavioral models, though). It will not handle large circuits: the number of analog nodes is limited to 25, and the number of digital nodes is limited to 50. http://www.dolphin.fr/

Hope this helps.
Rajesh Verilog Page : http://www.angelfire.com/in/rajesh52/verilog.html In article <iUem2.3623$id.220@cabot.ops.attcanada.net>, "Michael Faltas" <mfaltas@attcanada.net> wrote: > Does anyone know of a free Verilog simulator for any platform? > > takehiro@rr.iij4u.or.jp wrote in message > <3698D73F.A044BCF8@rr.iij4u.or.jp>... > >Dear all, > > > >I look for free Verilog-HDL simulator which lets me experience PLI > >operation. > >Does anyone have information on this? > >I wish the Verilog-HDL simulator to run on Windows98(95)/NT. > > > >Thanks, > >Hiro > > > > > > -----------== Posted via Deja News, The Discussion Network ==---------- http://www.dejanews.com/ Search, Read, Discuss, or Start Your OwnArticle: 14090
Bruce Nepple wrote in message ...

>I have pretty strong opinions about this. Hopefully when you read this you
>will try to understand my points, not pick apart my choice of words:

I like strong opinions - it was the *crime against humanity* (rk wrote that) comparison that made me write my reply ;-)

>1. If you are a digital designer and you do not understand metastability,
>you cannot do reliable digital design (except by luck, or as a synchronous
>subsystem only)

Depends on how strictly you specify 'reliable'. Don't forget the possibilities to correct errors, by hardware and/or software or whatever. Even with circuits that simply can't suffer from metastability by design, it's not unusual to add some extra safety, if certain risks are at stake.

>2. If you do not understand exactly what metastable behavior will do to
>your design, your design should be considered "unsafe" in that it *might* do
>something unpredictable.

Point taken. OTOH, for practical reasons it's not always possible to mathematically prove that, for instance, a piece of software will always behave according to the intentions of its maker. Unpredictable systems are a part of life. Metastability or not...

>3. If you understand how often your design might fail due to metastability,
>and the nature of the failure, and you are satisfied with it, you have done
>your job.

A very good argument - the nature of the failure carries a very important weight.

>What was the failure rate *before* the second flip flop was added? The
>whole point is that the second flip-flop makes the failure rate tolerable

54 minutes. It was a circuit that clocked a d-flipflop (sn74als74) with a 10 MHz system clock, and the q-output was used to trigger an interrupt. This is - partially - the kind of circuit I sometimes use. The document also recommended a different logic family (sn74as74) for the particular circuit, resulting in 2.4 x 10^21 years.
If another logic family is out of the question, a two-stage circuit can be made, resulting in a 2 million year MTBF. Even if my input circuit fails once per hour, it would not hurt much, because I don't have exactly the same circuit as the example circuit. My input signal is much less than 10 MHz, and the interrupt will be generated anyway, although one clock cycle later. This is something I can live with, because nobody would notice. The interrupt latency is much, much bigger than this 0.1 usec.

>(in most situations). My rule of thumb is that if the metastable failure
>rate is better than the failure rate for the worst part in the system, then
>I'm in the right ballpark. If your software crashes once a week, (and you
>work for Microsoft so that is OK with you <g>), then your metastable
>requirements have opened up nicely.

>The important point is that every digital designer *must* consider
>metastability in the context in which the design is being done. It should
>never be totally overlooked.

Your last phrase puts it (imho) in a nice perspective.

With kind regards,
Frank Bemelman
(replying by email? then remove the 'x' from my email address)

Article: 14091
> > > Tom Lane <tgl@netcom.com> wrote: > >> 2. What you actually want is not a DCT but a quantized DCT. Any > >> multiply that you can push to the DCT-coefficient end of the operation > >> can be eliminated by folding its constant into the coefficient And the quantizing process can also be embedded into the DCT computation itself, as described in "Quantized Discrete Cosine Transform: A Combination of DCT and Scalar Quantization" by K.Nguyen-Phi et al., ICASSP'99. Khanh Nguyen-PhiArticle: 14092
Matteo Ricchetti wrote in message <369b7cb0.21532686@news.interbusiness.it>... >Hi everyone, >can someone tell me which harware is presente in the cypress ISP cable >? Are there just some buffers? It's just a buffer and a "brick" style DC to DC converter. The Cypress CPLDs use 12V during programming, and the Cypress ISP cable steals 5V from your board and runs it through the DC to DC converter to provide that 12V. I believe Cypress publishes the schematic of the thing somewhere. Interestingly enough, the early versions of the Cypress cables used a Cypress brand buffer that didn't interpret the output voltage levels of many parallel ports the way the parallel port intended. Newer versions use a non-Cypress chip (different logic family) with different voltage thresholds. :-) ---Joel KolstadArticle: 14093
I was wondering if anyone else has been having problems with Foundation v1.5i when compiling to a Spartan device (specifically an XCS30-PQ208). I have just installed the update and re-fit the design as a new revision. Everything seems to run fine. The processing reports about the same statistics, but when I go to download using XChecker I get an error about "INIT does not go high". Everything is set up exactly the same as when I ran it with v1.5. The hardware debugger gets all the way through the download before reporting this. I uninstalled 1.5i and reinstalled 1.5, recompiled, and was up and running again. When I initially installed v1.5i I uninstalled the previous version as the documentation stated.

Thanks,
Mark Sasten
Apex PC Solutions

Article: 14094
In article <77g460$57m$1@nnrp2.dejanews.com>, rajesh52@my-dejanews.com writes >SMASH from Dolphin Integration Dolphin Integration offers evaluation version >of SMASH simulator which is a mixed signal,multi-level simulator.SMASH >implements the full Verilog-HDL IEEE standard. The implementation is based on >the OVI Reference Manuals. SMASH supports the SDF (Standard Delay File) >format, to allow backannotation from layout tools. This evaluation version is >a full-featured system (they will not allow you to compile new behavioral >models though). They will not handle large circuits. The number of analog >nodes is limited to 25, and the number of digital nodes is limited to 50. >http://www.dolphin.fr/ > Since this says it is mixed signal then does it support VERILOG-AMS ? > >Hope this helps. >Rajesh > >Verilog Page : http://www.angelfire.com/in/rajesh52/verilog.html > > > >In article <iUem2.3623$id.220@cabot.ops.attcanada.net>, > "Michael Faltas" <mfaltas@attcanada.net> wrote: >> Does anyone know of a free Verilog simulator for any platform? >> >> takehiro@rr.iij4u.or.jp wrote in message >> <3698D73F.A044BCF8@rr.iij4u.or.jp>... >> >Dear all, >> > >> >I look for free Verilog-HDL simulator which lets me experience PLI >> >operation. >> >Does anyone have information on this? >> >I wish the Verilog-HDL simulator to run on Windows98(95)/NT. >> > >> >Thanks, >> >Hiro >> > >> > >> >> > >-----------== Posted via Deja News, The Discussion Network ==---------- >http://www.dejanews.com/ Search, Read, Discuss, or Start Your Own -- Andy BotterillArticle: 14095
Jeffrey L. Madden wrote in message <369B642A.843BCDF1@iupui.edu>...
>I am using FPGA Express with Foundation Series 1.4 to implement VHDL
>code. I have been coming across problems when using processes in my
>code, and I usually receive a warning that the output of the cell is
>constant. This causes the cell to be eliminated when it is optimized. I
>am using the student edition of this software if it makes any
>difference. Here is a simple VHDL program that this error occurs with...
>
>*****************************************
>library IEEE;
>use IEEE.std_logic_1164.all;
>
>entity dff is
>  port(D     : in  std_logic;
>       CLK   : in  std_logic;
>       CLEAR : in  std_logic;
>       Q     : out std_logic);
>end dff;
>
>architecture dff1 of dff is
>  signal V : std_logic;
>begin
>  process(CLK, CLEAR)
>  begin
>    if CLEAR = '1' then
>      V <= '0';
>    elsif (CLK = '1' and CLK'event) then
>      V <= D;
>    end if;
>  end process;
>  Q <= V;
>end dff1;
>**************************************************
>Any help in this matter would be greatly appreciated.
>
>  Thanks,
>  Nick

I think it might have to do with the way you test for the clock edge. I use
Express as well, and it is a bit picky about how the async clear and the
clock edge are tested for. Try this:

  process (clk, clear)
  begin
    if clear = '1' then
      V <= '0';
    else
      if (clk = '1' and clk'event) then
        V <= D;
      end if;
    end if;
  end process;

I had this trouble. In the help (somewhere) it says that the clock test
should simply be

  if (clk stuff) then
    code;
  end if;

and it should not have an else. See the FPGA Express VHDL Reference (p.169
in the .pdf version).

Something else: is there a particular reason that you have the signal V?
Can't you simply assign D to Q within the process?

Hope this helps
Jim

Article: 14096
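Combining Jim's two suggestions (the nested clock test, and dropping the intermediate signal V), the complete entity might be sketched roughly as below. This is only an illustration of the coding style he describes, not text from the FPGA Express manual; consult the tool's own reference for the templates it actually requires.

```vhdl
-- Sketch only: nested clock test per Jim's suggestion, with Q
-- assigned directly inside the process instead of via signal V.
library IEEE;
use IEEE.std_logic_1164.all;

entity dff is
  port(D     : in  std_logic;
       CLK   : in  std_logic;
       CLEAR : in  std_logic;
       Q     : out std_logic);
end dff;

architecture dff2 of dff is
begin
  process (CLK, CLEAR)
  begin
    if CLEAR = '1' then
      Q <= '0';                        -- asynchronous clear
    else
      if (CLK = '1' and CLK'event) then
        Q <= D;                        -- rising-edge capture
      end if;
    end if;
  end process;
end dff2;
```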
Jeffrey L. Madden wrote in message <369B642A.843BCDF1@iupui.edu>...
>I am using FPGA Express with Foundation Series 1.4 to implement VHDL
>code.

I am on Express version 2.3ish; the reference I mentioned is dated December
1997. I don't know if there are any differences, but it may affect the
usefulness of my previous post.

Cheers
Jim

Article: 14097
There is an I2C EEPROM simulator, written in VHDL, at the site below. See if
it is of any use.

http://www.eiv.vsnet.ch/electro/micro/pages/Vhdl.htm

Best Regards,
SoonHuat Goh <soonhuat@singnet.com.sg>

saffary wrote in message <7768i9$s6r$1@front5.grolier.fr>...
>Anyone know where I can find an i2c VHDL core.
>
>thanks.

Article: 14098
Dan Prysby (dprysby1@email.mot.com) wrote:
: So if a combination of 3 FFs sample 1 signal close enough in time
: and each has a different threshold by design, then only 1 will go
: metastable. Just a supposition.

Probably true, but in the marginal case 1 FF will vote "1", 1 FF will vote
"0", and the third will go metastable. Then whatever circuit combines the 3
votes can potentially give an invalid output level (part way between what it
would give if the third FF had voted "0" or "1"). In other words, your
suggestion doesn't solve the problem completely, but might affect the
probability of a metastable signal propagating.

Best regards,
Alan Marshall

Article: 14099
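The voting arrangement being debated can be sketched in VHDL roughly as follows. This is a hypothetical illustration, not code from either post, and the entity and signal names are invented. Alan's objection shows up in the last line: the majority vote is a purely combinational function of q1..q3, so if one flip-flop output is metastable, the voted output can itself sit at an invalid level.

```vhdl
-- Hypothetical sketch of the 3-FF voting scheme under discussion.
-- Names (sync_vote, q1..q3) are invented for illustration.
library IEEE;
use IEEE.std_logic_1164.all;

entity sync_vote is
  port(CLK : in  std_logic;
       D   : in  std_logic;    -- asynchronous input
       Q   : out std_logic);   -- majority-voted output
end sync_vote;

architecture rtl of sync_vote is
  signal q1, q2, q3 : std_logic;
begin
  process (CLK)
  begin
    if (CLK = '1' and CLK'event) then
      q1 <= D;  -- in Dan's proposal each FF would have a
      q2 <= D;  -- different input threshold by design, so
      q3 <= D;  -- at most one of them goes metastable
    end if;
  end process;

  -- 2-of-3 majority vote: combinational, so a metastable q1/q2/q3
  -- can still drive this output to an invalid level (Alan's point).
  Q <= (q1 and q2) or (q2 and q3) or (q1 and q3);
end rtl;
```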
Hi:

Can anybody tell me where I can get 1-wire interface code for Dallas parts?

Henry
--