Alasdair MacLean wrote:
> We tend to use antifuse parts (Actel) which are OTP rather
> than SRAM based parts. I'm fairly sure we would not be allowed
> to use SRAM parts in "flight-critical" applications.

Let's just assume:
1- FPGAs allow you to weed out 'all' the BAD devices via extensive device testing.
3- FPGAs do not ever just flip a program bit randomly.
4- Loading of configuration data into an FPGA is 100% reliable if timing and voltage margins are met.

If anyone has any evidence contrary to the above assumptions I'd like to see it. Let's also assume there are 0 bugs in the firmware. Following these assumptions, the failures we will be left with will include real hardware failures over time.

The 9/96 Xilinx data book includes some curves (page 11-2) for FPGA failure rates at 70 deg C. These are based on 20,000 devices and 36,000,000 hours of testing. Here are a few representative data points for 12/95:

XC4000 devices 20 FITS
XC3000 devices 15 FITS
XC3100 devices 0 FITS
XC2000 devices 0 FITS

A FIT is a device failure in 1 billion operating hours. I assume this implies that a system of 100, 10 FIT devices would run for an average of 100 years before failing due to a hardware failure. The failure curves are extrapolated from high temperature testing at 125-145 deg C.

I wonder if custom devices will have had so much measured testing?

Of course, we have to wonder what to make of such low rates. The curves also show the XC3100 curves to be -1 FITS in August of 95. I am trying to determine the physical meaning of this.

- Brad

Article: 4801
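The FIT-to-MTBF arithmetic in the post above can be checked with a few lines. This is only a sketch under the standard simplifying assumptions (independent parts, constant failure rates, i.e. an exponential failure model), which is also what the data book curves imply.

    # Back-of-the-envelope check of the FIT arithmetic above.
    # Assumes constant, independent failure rates (exponential model);
    # 1 FIT = 1 failure per 1e9 device-hours.
    fit_per_device = 10          # failure rate of one device, in FITs
    n_devices = 100              # devices in the system
    system_fit = fit_per_device * n_devices    # 1000 FITs for the whole system
    mtbf_hours = 1e9 / system_fit              # 1,000,000 hours
    mtbf_years = mtbf_hours / (24 * 365)
    print(f"System MTBF: {mtbf_hours:.0f} hours = {mtbf_years:.0f} years")
    # Prints roughly 114 years -- consistent with Brad's "average of 100 years".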
On Mon, 16 Dec 1996, Randy Tietz wrote:
> DISCLAIMER: The opinions expressed are my own and do not necessarily
> reflect those of my employer. My experience is in commercial avionics
> and may not be applicable to other safety critical apps.
>
> If I had my druthers, I'd also steer away from SRAM-based FPGAs. It is
> not that they are technically inferior to antifuse parts, but rather that
> SRAM-based FPGAs bring with them some extra baggage from a safety
> assessment and reliability perspective. For safety assessment you have
> to be able to prove to people who don't know all the technical details
> of any particular FPGA technology that bad things won't happen. The
> fact that the hardware is dynamically configurable immediately raises
> red flags to even the most naive. Now it is not that you can't make
> a valid argument, it is just that you don't need to make that argument
> with antifuse parts. From a reliability perspective you also now will
> most likely have two parts to complete the intended function instead
> of just one. You may also need to add monitoring circuitry to verify
> that everything has been loaded as one would expect.
>
> Other concerns are the actual configuration time, whether it be on power
> up or after an error is detected, the reaction time from when a wrong
> configuration is detected until it is corrected (exposure time), and
> the susceptibility to short power interrupts.
>
> Don't get me wrong - I KNOW there are SRAM-based FPGAs flying in critical
> systems. It is just my opinion that antifuse FPGAs are easier to design
> into critical systems.

This really gets to the root of the problem. At the design review, a non-expert would probably feel very uncomfortable that, out of all the perhaps thousands of times the re-loading is done, for some reason (which the designer must now put forth) the loading may be in error. What is more disturbing is that if it is likely to occur at least once in the lifetime of the system, why can't it occur twice or more times if the conditions persist?

Putting these parts in a design for critical systems can open up a "can of worms" during the design review, and also perhaps later if the system undergoes rigorous testing. For example, during shock tests on a shaker table to simulate dropping the package, the system operation may be inconsistent because of an FPGA reload -- the analysis of the test result will be complicated and acceptance of the package may be held up.

**************************************************************
Alvin E. Toda          aet@lava.net
sr. engineer           Phone: 1-808-455-1331
2-Sigma                WEB: http://www.lava.net/~aet/2-sigma.html
1363-A Hoowali St.
Pearl City, Hawaii, USA

Article: 4802
=============================================================================
                             Call for Papers
              1997 International Symposium on Physical Design
                             April 14-16, 1997
                          Napa Valley, California

Sponsored by the ACM SIGDA in cooperation with the IEEE Circuits and Systems Society

The International Symposium on Physical Design provides a forum to exchange ideas and promote research on critical areas related to the physical design of VLSI systems. All aspects of physical design, from interactions with behavior- and logic-level synthesis, to back-end performance analysis and verification, are within the scope of the Symposium. Target domains include semi-custom and full-custom IC, MCM and FPGA based systems. The Symposium is an outgrowth of the ACM/SIGDA Physical Design Workshop. Following its five predecessors, the symposium will highlight key new directions and leading-edge theoretical and experimental contributions to the field. Accepted papers will be published by ACM Press in the Symposium proceedings.

Topics of interest include but are not limited to:
 1. Management of design data and constraints
 2. Interactions with behavior-level synthesis flows
 3. Interactions with logic-level (re-)synthesis flows
 4. Analysis and management of power dissipation
 5. Techniques for high-performance design
 6. Floorplanning and building-block assembly
 7. Estimation and point-tool modeling
 8. Partitioning, placement and routing
 9. Special structures for clock, power, or test
10. Compaction and layout verification
11. Performance analysis and physical verification
12. Physical design for manufacturability and yield
13. Mixed-signal and system-level issues

IMPORTANT DATES:
Submission deadline:              December 20, 1996
Acceptance notification:          February 1, 1997
Camera-ready (6 page limit) due:  March 1, 1997

SUBMISSION OF PAPERS:
Authors should submit full-length, original, unpublished papers (maximum 20 pages double spaced) along with an abstract of at most 200 words and contact author information (name, street/mailing address, telephone/fax, e-mail). Electronic submission via uuencoded e-mail is encouraged (single postscript file, formatted for 8 1/2" x 11" paper, compressed with Unix "compress" or "gzip"). Email to: ispd97@ece.nwu.edu

Alternatively, send ten (10) copies of the paper to:
Prof. Majid Sarrafzadeh
Technical Program Chair, ISPD-97
Dept. of ECE, Northwestern University
2145 Sheridan Road, Evanston, IL 60208 USA
Tel 847-491-7378 / Fax 847-467-4144

SYMPOSIUM INFORMATION:
To obtain information regarding the Symposium or to be added to the Symposium mailing list, please send e-mail to ispd97@cs.virginia.edu. Information can also be found on the ISPD-97 web page: http://www.cs.virginia.edu/~ispd97/

SYMPOSIUM ORGANIZATION:
General Chair: A. B. Kahng (UCLA and Cadence)
Past Chair: G. Robins (Virginia)
Steering Committee: J. Cohoon (Virginia), S. Dasgupta (Sematech), S. M. Kang (Illinois), B. Preas (Xerox PARC)
Program Chair: M. Sarrafzadeh (Northwestern)
Keynote Address: T. C. Hu (UC San Diego) & E. S. Kuh (UC Berkeley)
Special Address: R. Camposano (Synopsys)
Publicity Chair: M. J. Alexander (Washington State)
Local Arrangements Chair: J. Lillis (UC Berkeley)

Technical Program Committee:
C. K. Cheng (UC San Diego), W. W.-M. Dai (UC Santa Cruz), J. Frankle (Xilinx), D. D. Hill (Synopsys), M. A. B. Jackson (Motorola), J. A. G. Jess (Eindhoven), Y.-L. Lin (Tsing Hua), C. L. Liu (Illinois), M. Marek-Sadowska (UC Santa Barbara), M. Sarrafzadeh (Northwestern), C. Sechen (Washington), K. Takamizawa (NEC), M. Wiesel (Intel), D. F. Wong (Texas-Austin), E. Yoffa (IBM)
=============================================================================

Article: 4803
Brad Taylor wrote:
> The 9/96 Xilinx data book includes some curves (page 11-2) for FPGA
> failure rates at 70 deg C. These are based on 20,000 devices and
> 36,000,000 hours of testing.
>
> Here are a few representative data points for 12/95:
>
> XC4000 devices 20 FITS
> XC3000 devices 15 FITS
> XC3100 devices 0 FITS
> XC2000 devices 0 FITS
>
> A FIT is a device failure in 1 billion operating hours. I assume this
> implies that a system of 100, 10 FIT devices would run for an average of
> 100 years before failing due to a hardware failure. The failure
> curves are extrapolated from high temperature testing at 125-145 deg C.
>
> I wonder if custom devices will have had so much measured testing?
>
> Of course, we have to wonder what to make of such low rates. The curves
> also show the XC3100 curves to be -1 FITS in August of 95. I am trying
> to determine the physical meaning of this.

-1 FITS: Perfect parts get more perfect; use a bad part for a billion hours and it will fix itself. Amazing devices ;o)

--
John L. Smith, Pr. Engr.     | Sometimes we are inclined to class
Univision Technologies, Inc. | those who are once-and-a-half witted
6 Fortune Dr.                | with the half-witted, because we
Billerica, MA 01821-3917     | appreciate only a third part of their wit.
jsmith@univision.com         |   - Henry David Thoreau

Article: 4804
I'm doing an overview of the FPGA market. Does anyone know of a review which compares the FPGAs of the leading vendors in the market today (ALTERA, XILINX, ACTEL, ORCA, etc.)?

Alon Hazay

Article: 4805
>the reliability of the part. Naturally, a system that reloads several
>times in the course of operation has a higher probability of a bad
>load. However, by utilizing the CRC and readback features, a designer
>can ensure that the load was performed properly. It is incumbent on the
>designer of the load logic to use these features properly if the
>application demands it.

This is an understatement, IMO. It is quite easy to have problems with loading up these devices. I have done a number of (very different) designs using SRAM-based FPGAs, and have had far more problems than should have been the case.

One little example, discussed here ad nauseam some months ago: the edge on CCLK needs to have its transition time within certain limits - for no obvious reason. Glitches (or whatever) which are not visible on a 350MHz scope can prevent reliable config. This is on an input which never sees anything remotely that fast. They could so easily have put a Schmitt trigger in there. I know Xilinx will say 99.99999% of their customers never have config problems, but this is of no help to me when it happens.

I personally have never done any safety-critical systems, but I know of people who refuse to use SRAM-based FPGAs in them, and *their* customers also ban their use. I think this is somewhat short-sighted, because any glitch which can corrupt an FPGA can equally crash the CPU, as well as any other digital logic. The designer must ensure these glitches never get inside the box in the first place, because there is a very small margin (in terms of the spike's power) between crashing logic and blowing up logic. There is one difference however: you can generally detect a corrupted CPU with a watchdog timer, but you cannot put a watchdog on an FPGA.

>The SRAM based FPGA will usually come through a power anomaly better
>than other RAM or even a microprocessor (which can do some screwy things
>under adverse power conditions). Either way, a power upset that is big
>enough to potentially upset system operation should be dealt with using
>a complete reinitialization. If this is a problem, some steps should be
>taken to fortify the power system to prevent such upsets.

Another understatement :) I now never use anything less than a TL7705 (power monitor with reset) and some external gating (HC132) to generate the proper D/P signal timing, and a low-going *pulse* (rather than just a low level) on /RST.

Peter.

Return address is invalid to help stop junk mail. E-mail replies to z80@digiserve.com.

Article: 4806
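For readers who have not used the CRC/readback features the quoted poster mentions, here is a minimal sketch of the load-verify-retry structure being described. Every device call in it (write_config, read_back, and so on) is a hypothetical placeholder, not any vendor's real API; the point is the shape of the loader, not the interface.

    # Hypothetical host-side loader: shift in the bitstream, read it back,
    # and retry a bounded number of times if verification fails.
    # None of these device methods are real vendor APIs.

    def load_and_verify(bitstream: bytes, device, max_attempts: int = 3) -> bool:
        for _ in range(max_attempts):
            device.assert_program_pin()        # reset configuration logic (hypothetical)
            device.write_config(bitstream)     # shift in the bitstream (hypothetical)
            if not device.done_pin_high():     # DONE never rose: the load clearly failed
                continue
            readback = device.read_back()      # read the configuration back out (hypothetical)
            if device.compare_with_mask(bitstream, readback):
                return True                    # configuration verified
        return False                           # caller must treat the device as unusable

    # Usage: if load_and_verify(bits, dev) returns False, hold the system in a
    # safe state rather than running with an unverified configuration.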
Simon wrote:
> I am aware that a 40 MSPS 1024-point complex data FFT engine has
> been built using 24 XC4010. The total cost of the silicon is only
> around £2000 and it is faster than a CRAY supercomputer.
> Performance of 10 to 100 times that of a DSP chip is also
> achievable using single devices.

I note that Altera are offering an FFT MegaCore function from December. Quotes are "significantly faster than DSP processors and substantially faster than previously available programmable logic implementations" and "512 point, 8bit data, 8 bit twiddle, 1150 LEs, 94uS". Lots of other figures are available in the ad; life's too short to retype them. I'd guess their web page http://www.altera.com will have more data.

Please note that I have _no_ idea how fast a hardware FFT _ought_ to go, so I'm not commenting, just passing on the ad.

A side issue: has anyone out there used any of the AMPP modules? Success? Nightmare? In between?

Steve

--
Steve Wiseman, Senior Systems Engineer, SJ Consulting Ltd, Cambridge, UK
Desk +44 1223 578524 (Fax 578525)  Group +44 1223 578518  steve@sj.co.uk

Article: 4807
I am aware that a 40 MSPS 1024-point complex data FFT engine has been built using 24 XC4010. The total cost of the silicon is only around £2000 and it is faster than a CRAY supercomputer. Performance of 10 to 100 times that of a DSP chip is also achievable using single devices.

--
Simon
106072.1620@Compuserve.com
These opinions are entirely my own and, in keeping with the true nature of opinions, are not always valid or rational.

Article: 4808
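A rough sanity check of the 40 MSPS figure, assuming a plain radix-2 FFT (the actual engine's architecture is not stated, so the operation count below is only indicative):

    # Rough throughput estimate for a 1024-point complex FFT keeping up
    # with a 40 MSPS input stream, assuming a radix-2 algorithm.
    import math

    n = 1024                      # FFT length
    fs = 40e6                     # input rate, samples/second
    frame_time = n / fs           # one 1024-sample frame every 25.6 microseconds
    butterflies = (n // 2) * int(math.log2(n))   # 5120 radix-2 butterflies per frame
    ops_per_butterfly = 10        # 1 complex multiply + 2 complex adds, in real operations
    ops_per_second = butterflies * ops_per_butterfly / frame_time
    print(f"{ops_per_second / 1e9:.1f} billion real operations per second")
    # ~2 GOPS sustained -- in the same ballpark as early-1990s vector
    # supercomputer peak rates, which is presumably the basis of the CRAY comparison.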
Xilinx FPGAs, although of an SRAM nature, are in fact around 10 times more reliable than a comparable SRAM part. Also, it is possible to have a system whereby, in the unlikely event that the FPGA fails to power up correctly, you can detect it and re-initialise it. If your ASIC develops a fault, you're buggered.

Also, I am aware of a number of safety-critical systems which depend on Xilinx FPGAs. Most notably a well known ship-to-shore, explosive delivery system (made famous in the Gulf War) which uses Xilinx FPGAs in its guidance system.

--
Simon
106072.1620@Compuserve.com
These opinions are entirely my own and, in keeping with the true nature of opinions, are not always valid or rational.

Article: 4809
Just thought that I ought to point out that, despite calling them CPLDs, the Altera Flex10K parts do not have the predictable timing they claim. In order to achieve the densities that they claim, Altera have had to sacrifice routing resources to try and keep the die size down. As a consequence, as the routing complexity increases the Altera parts start having to use logic cells to perform routing functions, which adds additional delay and removes any sense of predictability that may have been there originally.

As an aside, if you look at Altera Flex10K pricing vs Xilinx 4KE pricing you'll see that Altera have accepted that the Flex10K50 is only really the equivalent of the Xilinx XC4028E in terms of usable gates.

--
Simon
106072.1620@Compuserve.com
These opinions are entirely my own and, in keeping with the true nature of opinions, are not always valid or rational.

Article: 4810
In article <32b223f4.187444683@news.smart.net>, bkwilli@smart.net (Bryan Williams) wrote:
>Know anything about using Data I/O Unisite to program the Atmel
>17C128's? I use it but there's a confusing issue - the Atmel, like
>the Xilinx 17128, claims to have a programmable reset polarity. How
>you specify this is a mystery -- the algorithm mentions the
>programmable polarity, but how you specify it is unclear. I've been
>just doing it the way I did the Xilinx part -- for the 128kbit part, the
>Xilinx algorithm reports a 4004 (hex) device size, the final 4 bytes =
>0 for active-low reset, ff for active-high. The Atmel reports a 4000
>(hex) device size, no mention of the extra bytes, but once I tried it
>with FF's in the same place the Xilinx expects them - no dice (the
>board was designed for active low reset). Set them to zeros like the
>Xilinx and it worked.

We use the Data I/O Unisite, and the only way I know to set the polarity is to manually set it - I didn't change the polarity from the default, so I can't say that it actually worked, although perhaps a colleague has done this...

****************************************************************************
Dan Bartram, Jr.        Internet: dan.bartram@gtri.gatech.edu
****************************************************************************

Article: 4811
On Thu, 12 Dec 1996 15:47:39 -0700, peter@xilinx.com (Peter Alfke) wrote:
>EPLDs and CPLDs are derived from the PAL architecture and thus have an
>AND-OR logic structure (with most of the programmability in the AND array)
>feeding a flip-flop macrocell.
>That means less logic flexibility, but predictable delays, fewer
>flip-flops, but faster compile times than for FPGAs.
>Also, the technology used tends to have substantial static power consumption.

I certainly would like to see what Mr. Alfke can do with his boolean gates (in SRAM look-up tables) that offers more "logic flexibility" than PAL / PLA boolean structures. Perhaps he can demonstrate doing a 37-wide full sum-of-products equation in 8 ns from external pin-to-pin, like I can in a single macrocell with CoolRunner CPLDs, in any of his FPGA (or CPLD) products.

And while most CPLDs (including Xilinx's) have 'substantial static power consumption' - he obviously hasn't seen (or chooses to ignore) our recent introduction that offers 'zero' static power (less than 50 microamps) without sacrificing very deterministic high speeds (6 ns pin-to-pin at 5 product terms wide, 8 ns at 37).

>Altera confuses the issue by calling their SRAM-based FPGA-like families
>"CPLDs" for peculiar, non-technical reasons.

Ah yes - the continuing Altera misinformation... perhaps it is Xilinx's litigation in progress that caused Altera to call their FPGA a CPLD... hmmm.

Mark Aaldering
Philips Programmable Products Group
Mark.Aaldering@abq.sc.philips.com
www.coolpld.com

Article: 4812
In article <32B61C94.54B9@univision.com>, "John L. Smith" <jsmith@univision.com> wrote:
> > Of course, we have to wonder what to make of such low rates. The curves
> > also show the XC3100 curves to be -1 FITS in August of 95. I am trying
> > to determine the physical meaning of this.
>
> -1 FITS: Perfect parts get more perfect; use a bad part for a billion
> hours and it will fix itself. Amazing devices ;o)

Bit rot is a known problem. Leave your code and CPU alone and untouched for a long time and it mysteriously decays. Obviously Xilinx have found a way to turn this phenomenon to their advantage.

Maynard

--
My opinion only

Article: 4813
In article <5961mp$lh5$3@mhade.production.compuserve.com>, Simon <106072.1620@CompuServe.COM> wrote:
>Just thought that I ought to point out that, despite calling them
>CPLDs, the Altera Flex10K parts do not have the predictable timing
>they claim. In order to achieve the densities that they claim,
>Altera have had to sacrifice routing resources to try and keep the
>die size down. As a consequence, as the routing complexity
>increases the Altera parts start having to use logic cells to
>perform routing functions, which adds additional delay and removes
>any sense of predictability that may have been there originally.

Really? The Flex 10K has more routing resources both per row and per column than its predecessor, the Flex 8000, as well as additional routing resources added to every row/column corner to allow more row-to-column connections. Feel free to offer data to support your claim of "less routing resources".

>As an aside, if you look at Altera Flex10K pricing vs Xilinx 4KE
>pricing you'll see that Altera have accepted that the Flex10K50
>is only really the equivalent of the Xilinx XC4028E in terms of
>usable gates.

Incredible. From the Xilinx data book, the 4028E is a 28,000 logic gate device or a 32,768 bit RAM device (but don't forget, you can't do both, because CLBs can only be RAM *OR* logic, not both) with 2560 flops. The Flex 10K50 is a 50,000 gate device (there is a white paper available on the gate-counting methodology, if you like) with 3184 flops (624 or 25% MORE), PLUS you can still use all 20,480 bits of RAM even if all of the logic is used (because there is no trade-off in Altera devices: RAM is RAM and LOGIC is LOGIC).

So you are saying that because Altera will give you a part that has 25% more flops PLUS over 20,000 more bits of RAM (when both devices are fully utilized) for the same price as the 4028E, Altera considers the devices equivalent? I just call that a better deal...

Wayne

Article: 4814
Would you like to get MORE ORDERS for ANYTHING YOU SELL? Do you market products/services with your computer? Would you like to learn how to do so QUITE a bit BETTER than what you are already doing? If so just E-mail me the words "MORE ORDERS" and I'll send you a free file that will show you how.

Best regards,
Jim (in California)

Article: 4815
In article <32B61C94.54B9@univision.com> "John L. Smith" <jsmith@univision.com> writes:
>Brad Taylor wrote:
>>
>> The 9/96 Xilinx data book includes some curves (page 11-2) for FPGA
>> failure rates at 70 deg C. These are based on 20,000 devices and
>> 36,000,000 hours of testing.
>>
>> Here are a few representative data points for 12/95:
>>
>> XC4000 devices 20 FITS
>> XC3000 devices 15 FITS
>> XC3100 devices 0 FITS
>> XC2000 devices 0 FITS
>>
>> A FIT is a device failure in 1 billion operating hours. I assume this
>> implies that a system of 100, 10 FIT devices would run for an average of
>> 100 years before failing due to a hardware failure. The failure
>> curves are extrapolated from high temperature testing at 125-145 deg C.

One expectation in flight critical applications is to have a part that is not expected to fail during the lifetime of the *fleet* of planes. This is much different than one system running 100 years.

The service life of a commercial airliner is typically 50,000 to 100,000 hours. A production run of 1,000 planes, while high, is not out of the question, and many avionics systems are shared between different aircraft models. (Boeing has delivered or has orders for about 3,500 737s.)

This leads to a total fleet lifetime of 100,000,000 (10^8) hours. Thus a 10 FIT part has a significant chance of failing in that time period. And it is almost certain that a system of 100 such parts would fail.

If the failure of such a part causes the crash of a commercial airliner even once during the lifetime of the system, I would consider that unacceptable.

--
James Thiele
jet@eskimo.com

Article: 4816
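James's fleet-level numbers can be reproduced with the same constant-failure-rate assumptions used earlier in the thread (independent failures, one part or one 100-part system per aircraft):

    # Expected failures across a fleet, under a constant-failure-rate model.
    import math

    fleet_hours = 1_000 * 100_000          # 1,000 aircraft x 100,000 flight hours = 1e8 hours
    lam_part = 10e-9                       # 10 FIT part = 1e-8 failures/hour
    lam_system = 100 * lam_part            # 100 such parts per system (independent)

    expected_part_failures = lam_part * fleet_hours           # = 1.0
    p_no_system_failure = math.exp(-lam_system * fleet_hours)
    print(f"Expected single-part failures over the fleet: {expected_part_failures:.1f}")
    print(f"Probability the 100-part system never fails:  {p_no_system_failure:.2e}")
    # About 1 expected failure for a lone 10 FIT part, and essentially zero chance
    # (exp(-100), roughly 4e-44) that a 100-part system survives the whole fleet
    # life -- which is James's point about "almost certain" failure.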
James Thiele wrote:
> One expectation in flight critical applications is to have a part
> that is not expected to fail during the lifetime of the *fleet* of
> planes. This is much different than one system running 100 years.
>
> The service life of a commercial airliner is typically 50,000 to
> 100,000 hours. A production run of 1,000 planes, while high, is
> not out of the question, and many avionics systems are shared between
> different aircraft models. (Boeing has delivered or has orders for
> about 3,500 737s.)
>
> This leads to a total fleet lifetime of 100,000,000 (10^8) hours.
> Thus a 10 FIT part has a significant chance of failing in that
> time period. And it is almost certain that a system of 100 such
> parts would fail.
>
> If the failure of such a part causes the crash of a commercial
> airliner even once during the lifetime of the system, I would
> consider that unacceptable.

This has got me thinking. Isn't the real problem undetected failures? With an SRAM-based FPGA, the logic can be tested on boot. This cannot be done with an ASIC. Also, it is easy to build redundant systems with FPGAs: you just parallel two FPGAs and run only one at a time. Given the apparent opportunity to increase reliability with intelligent use of FPGAs, doesn't it make sense to figure out how to use them, how to qualify them, and to develop the methodologies necessary? As you say, people's lives are at stake.

- Brad

Article: 4817
Brad Taylor wrote:
> James Thiele wrote:
> >
> > One expectation in flight critical applications is to have a part
> > that is not expected to fail during the lifetime of the *fleet* of
> > planes. This is much different than one system running 100 years.
> >
> > The service life of a commercial airliner is typically 50,000 to
> > 100,000 hours. A production run of 1,000 planes, while high, is
> > not out of the question, and many avionics systems are shared between
> > different aircraft models. (Boeing has delivered or has orders for
> > about 3,500 737s.)
> >
> > This leads to a total fleet lifetime of 100,000,000 (10^8) hours.
> > Thus a 10 FIT part has a significant chance of failing in that
> > time period. And it is almost certain that a system of 100 such
> > parts would fail.
> >
> > If the failure of such a part causes the crash of a commercial
> > airliner even once during the lifetime of the system, I would
> > consider that unacceptable.
>
> This has got me thinking. Isn't the real problem undetected failures?
> With an SRAM-based FPGA, the logic can be tested on boot. This cannot
> be done with an ASIC. Also, it is easy to build redundant systems with
> FPGAs: you just parallel two FPGAs and run only one at a time. Given the
> apparent opportunity to increase reliability with intelligent use of
> FPGAs, doesn't it make sense to figure out how to use them, how to
> qualify them, and to develop the methodologies necessary? As you say,
> people's lives are at stake.

Agreed! What's wrong with reading back the configuration now and then, just to make sure everything is OK?

And let's not forget that many of the EPROM-based CPLDs are really SRAM based. The SRAM switches are loaded at startup from the EPROM array. As there is no means of verifying these, I vastly prefer pure SRAM-based devices. And how many modern peripheral devices have write-only configuration registers? How do you know that these things haven't gone silly?

A little care in the design of a device can mitigate many of the problems (perhaps even the imagined ones) with the technologies used therein.

Regards,
Scott

Article: 4818
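To make Scott's "read back the configuration now and then" concrete, here is a minimal sketch of a periodic integrity check. The device calls are hypothetical placeholders rather than any vendor's readback API, and the reaction (safe state, reload) would in practice be driven by the exposure-time requirements Randy Tietz raised earlier in the thread.

    # Hypothetical periodic configuration scrub: read the configuration back
    # at intervals and react if its checksum no longer matches the value
    # captured immediately after a verified load.
    import time
    import zlib

    def config_monitor(device, reference_crc: int, period_s: float = 60.0):
        while True:
            readback = device.read_back()            # hypothetical readback call
            if zlib.crc32(readback) != reference_crc:
                device.enter_safe_state()            # hypothetical: stop using the bad logic
                device.request_reload()              # hypothetical: trigger reconfiguration
                return
            time.sleep(period_s)

    # In a real system this would run on a supervisor CPU or in dedicated
    # monitoring hardware, and readback data that legitimately changes (e.g.
    # user RAM bits) would have to be masked out before checksumming.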
Hi,

Despite the references line, I'm not responding to any particular post.

If you're looking at parts in isolation, you are doing something wrong. If your FPGA absolutely, positively must not fail, then at some point you are going to end up in court explaining why you placed people's lives at the mercy of a single electronic component. You have to accept that these parts will fail. You design your system to accommodate the expected failure rate (plus a safety margin, of course). If one particular type of device is more prone to failure, then you may require a higher level of redundancy, or a more robust sanity checking system, or whatever... There is no reason why you can't use components with an MTBF of 5 minutes; it just means that the redundancy required probably makes your kit somewhat bigger and bulkier than you want (and maintenance is probably not cheap either).

What if you decide you can solder the SRAM device to the board, but you have to put the OTP device in a socket to allow for upgrades? I bet the SRAM-based device suddenly looks a lot more reliable... (Of course, this ignores where the configuration is actually stored, but you get the point.)

Sure, it's nice to know if OTP devices are more reliable when sitting on their own on the bench doing nothing. In the end though, it doesn't actually buy you much.

Dale.

--
**************************************************************************
* Dale Shuttleworth                                                      *
* Email: dale@giskard.demon.co.uk                                        *
**************************************************************************

Article: 4819
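To put rough numbers on Dale's point about redundancy, here is a deliberately crude comparison (independent failures, no common-mode effects, perfect failure detection and switchover assumed):

    # Crude comparison: one part vs. two independent redundant parts,
    # over a single 10-hour flight and over a 100,000-hour service life.
    import math

    lam = 10e-9                     # 10 FIT part: failures per hour
    for hours in (10, 100_000):
        p_one = 1 - math.exp(-lam * hours)      # probability a single part fails
        p_both = p_one ** 2                     # both of two independent parts fail
        print(f"{hours:>7} h: single part {p_one:.1e}, duplicated pair {p_both:.1e}")
    # At 100,000 hours the single-part failure probability is about 1e-3,
    # while the duplicated pair is about 1e-6 -- ignoring common-mode failures
    # and the detection/switchover logic, which is where the real work is.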
Are you willing to share this overview with us? You would get some unbiased or biased opinions, since several of us are not shy and willing to agree or disagree with you.

I want to say it once more: It is a joy to be a part of this professional group, where controversial subjects are being discussed in a reasonably civilized manner. Intelligent people know how to disagree without getting personal or vulgar about it. Life would be boring if we all had the same opinion.

Peter Alfke, Xilinx Applications

Article: 4820
Phil Ptkwt Kristin wrote:
> I've heard that Exemplar will release a Linux version of their Leonardo
> synthesis tool for Linux early next year. Has anyone else heard this?
> It would be good if someone from Exemplar could comment on this.

The port is complete for Leonardo 4.0.3. Available in January 97. It's wonderful now to be able to rlogin to a PC and run jobs on it just like any UNIX workstation. And getting the EE Times front page (12/9/96), 'What's Hot', was really nice. I'll just make one minor edit: EE Times referred to Leonardo as an FPGA synthesis tool. It is also an ASIC synthesis tool.

> If this is true it is very good news for the Linux engineering community.
> Leonardo is a popular synthesis tool on NT, having it ported to Linux
> could very well be the catalyst that causes other EDA vendors to follow
> suit.
>
> Also, Exemplar is part of Antares now, which is owned by Mentor. Can we
> look forward to seeing other Antares family companies (like Model
> Technologies - hint, hint) follow Exemplar's lead?

I think user pressure would be required for that. As far as MTI is concerned, a simulator port is more involved than a synthesizer port. Although Frontline is already on Linux. And Speedsim.

Regards,
David Emrich
Exemplar Logic

Article: 4821
This is a test

Article: 4822
Help! I am attempting to create a small device programmer for a CPLD (more for an academic challenge than anything practical) and I am trying to decipher the downloadable JEDEC (.jed) file. I have a programming specification for the CPLD that details the mechanics of the programming sequence, however it does not provide any details on JEDEC file interpretation. So I have the JED file, and the info on how to get binary data into the part, but I have no way of determining the correct address within the part from the JED file [must be an obvious issue and left to the student :)].

Any help on this would be greatly appreciated.

Regards,
Frank.

P.S. Yeah, I know this is not an FPGA issue, but my news server has no CPLD groups. Sigh.... feel free to flame.

Article: 4823
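Not a complete answer to Frank's question, but the JEDEC (JESD3) file format itself is simple: fields terminated by '*', with QF giving the total fuse count, F the default fuse state, L fields giving a starting fuse number followed by 0/1 data, and C a fuse checksum. The sketch below recovers the linear fuse map; mapping linear fuse numbers onto the physical row/column addresses the programming spec wants is device-specific, which is exactly the part the vendor has to document (or that you reverse-engineer from a .jed generated for a known design).

    # Minimal JEDEC (.jed) fuse-map reader: returns a list of 0/1 fuse states
    # indexed by linear fuse number. Assumes QF appears before any L field.
    # Device-specific address mapping is NOT handled here.

    def read_jedec(path: str):
        text = open(path, "r", errors="ignore").read()
        # Keep only the part between STX (0x02) and ETX (0x03), if present.
        start = text.find("\x02") + 1
        end = text.find("\x03")
        body = text[start:end if end != -1 else len(text)]

        fuses, default = [], 0
        for field in body.split("*"):
            field = field.strip()
            if field.startswith("QF"):                        # total number of fuses
                fuses = [None] * int(field[2:])
            elif field.startswith("F") and field[1:].strip() in ("0", "1"):
                default = int(field[1:])                      # default fuse state
            elif field.startswith("L"):                       # fuse data from an address
                parts = field[1:].split(None, 1)
                addr = int(parts[0])
                bits = parts[1] if len(parts) > 1 else ""
                for i, b in enumerate(c for c in bits if c in "01"):
                    fuses[addr + i] = int(b)
        return [default if f is None else f for f in fuses]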
Symon, (and other interested readers :-)

Your readings (when you were shorter) were true, and in particular, in systems with different clock domains that are asynchronous and pass data between them, metastability is indeed unavoidable. If you understand the problem and can use devices that have been characterized for metastability, you can reduce the probability of metastability affecting your system to an acceptable level. An example of 'acceptable' is 'twice the duration you intend to be working for that employer'.

In my article I described 3 types of FIFOs, and pointed out that the first two have arbiters (the thing that crosses the clock domains) and that these are NOT characterized for metastability. In the third case (included below), the device is self clocking, and uses asynchronous state machines to move the data through the FIFO. Within itself, it can't go metastable. It does however present at least two flags, an input ready and an output available signal. Assuming that both the source and destination environments have their own clocks, then they will still need to deal with metastability on either side of the FIFO, since it can be thought of as representing a third domain (actually a fourth too, since the input and output are asynchronous to each other).

I consider this to be better than the first two architectures (for this issue), since the management of the metastability is left up to me, and I know how to design a suitable level of robustness. So metastability does not disappear, it just moves from being inside the FIFO where I can't help it and it isn't characterized, to outside where it is clearly my domain.

Philip

In article <32B52ADF.7441@wago.de> symon.brewer@wago.de writes:
>Philip,
>  I once read somewhere, when I was little, that metastability was as
>unavoidable as death and taxes. This is something I've always found to
>be true except in the 'trivial' case of fully synchronous systems. (I
>say trivial because almost all systems I've had experience with accept
>data that are not synchronised to the system master clock). I'm kind of
>interested how the fifo system you describe can avoid metastability
>issues if the input and output systems' clocks are not phase locked
>somehow. Otherwise, doesn't any fifo need at least one status flag so
>that the circuitry knows when the register bank is either empty or full?
> regards, Symon.
>
>Philip Freidin wrote:
>***snipped fifo stuff***
>> Some fifo's are built out of an array of registers, with each register
>> having a validity flag (i.e. the register has valid data in it). These
>> registers are cascaded together (sort of like a shift register), and they
>> autonomously move data from one register to the next, if the current
>> register has valid data and the next does not. The data therefore ripples
>> through the array. All this logic is self clocking. The output basically
>> looks at the last register's validity flag, and the input uses the first
>> register's validity flag. If designed right, this type of fifo does not
>> have metastability issues, unless it includes additional status flags.
>>
>> The set of validity flags, and their control logic, is what you were
>> asking about: "the token chain".
>>
>> Philip Freidin.

Article: 4824
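For anyone wanting to quantify Philip's "acceptable level": the usual synchronizer model is MTBF = exp(t_settle/tau) / (T0 * f_clk * f_data), where tau and T0 come from the device's metastability characterization. The constants in the sketch below are made-up illustrative values, not data for any real part; the point is only the exponential payoff from allowing more settling time.

    # Metastability MTBF of a single synchronizer flip-flop, standard model:
    #   MTBF = exp(t_settle / tau) / (T0 * f_clk * f_data)
    # tau and T0 below are made-up illustrative values, NOT vendor data.
    import math

    tau = 0.5e-9        # metastability resolution time constant (illustrative)
    T0 = 1.0e-9         # metastability "window" constant, seconds (illustrative)
    f_clk = 40e6        # sampling clock
    f_data = 10e6       # rate of asynchronous events

    for t_settle in (5e-9, 10e-9, 20e-9):
        mtbf_s = math.exp(t_settle / tau) / (T0 * f_clk * f_data)
        print(f"settling time {t_settle * 1e9:4.0f} ns -> MTBF {mtbf_s / 3.15e7:10.3e} years")
    # Every extra tau of settling time multiplies the MTBF by e, which is why
    # adding one more synchronizer stage (roughly one more clock period of
    # settling) makes such a dramatic difference.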
jet@eskimo.com (James Thiele) writes:
> This leads to a total fleet lifetime of 100,000,000 (10^8) hours.
> Thus a 10 FIT part has a significant chance of failing in that
> time period. And it is almost certain that a system of 100 such
> parts would fail.
>
> If the failure of such a part causes the crash of a commercial
> airliner even once during the lifetime of the system, I would
> consider that unacceptable.

Is this a reasonable target, compared to other components of the system (especially the human part)? That is, aren't we holding the electronic component to (much?) higher standards than the rest of the system, and to much higher standards than we historically tolerated in a system of this type?

Jan