Hal, I very much agree with you. Peter is a very valuable resource to the group. But smilies or no smilies (yes, I saw it), I felt that it was a cheap shot and not needed. "Hal Murray" <murray@pa.dec.com> wrote in message news:8nnfqj$p72@src-news.pa.dec.com... > > > What a cheap shot Peter! > > I saw the smilies. Didn't you? > > Peter's track record here is very good. He's one of the best examples > I can think of for a vendor contributing to a newsgroup. I used to > follow some network group where somebody from Cisco had a similar > reputation. It's pretty easy to spot if you follow a newsgroup for > a while. > > I think he has been very fair when discussing other vendors' products. > If I've been missing things, please point them out in the future. > > > > > Good designers that follow a disciplined design flow have no problem using > > Antifuse based FPGAs. > > > > It is very unfortunate that the RE-PROGRAMMABLE DESIGN BY THE SEAT OF YOUR > > PANTS BIGOTs insist that the design community is not good enough to use > > Antifuse based FPGAs. Fortunately there are many good designers out there > > using this technology! > > Sounds to me like there are bigots on the other side too. Shouting > doesn't make your case any stronger. > > People build ASICs and fully custom chips too. They are all > various points on a spectrum of cost, time to market, and speed/size > availability. > > > I admit that I belong to the seat-of-the-pants school, but I'm > willing to learn, or at least try. Do you have suggestions for how > to design a chip when the specs aren't firm? > > > Being able to reprogram a chip after the board has shipped has saved > my ass a couple of times. Both quirks involved fuzzy areas in the > specs. No amount of testing would have uncovered the problem. > (Having another person working on the project and looking at that > area might have found them. But I'll bet I could have explained > why I had done what I did.)
> > > > Perhaps this newsgroup should be renamed comp.arch.XILINX_FPGA, as most of > > the issues/problems that come up in this group are in regard to Xilinx and > > not Altera, Actel, Quicklogic, or Lattice/Vantis. > > Perhaps all the good designers are using non-Xilinx chips and don't have > any questions to ask here. ;) > > I haven't seen any serious complaints about too much Xilinx traffic. > I'll be glad to support creating another group if that's a real problem. > > > -- > These are my opinions, not necessarily my employer's. I hate spam. >
Article: 24826
Hal Murray wrote: > I frequently see comments here about silicon getting faster > so metastability is less of a problem. Is that really correct? > > Are there other mechanisms that work the other way, such as > chips getting bigger and clock cycle times getting shorter? > Chips getting bigger is irrelevant, since metastability is located in a latch. Clock cycle times getting shorter is, of course, a problem. But flip-flops are getting faster at a greater rate than the clocks are, simply because interconnect delays are becoming more dominant. We are now measuring clock-to-Q as low as 200 ps, but the interconnect delays are longer. Why did I not repeat the 1988 experiments, now with Virtex? I would love to. But in order to do the exponential extrapolation, I must actually count metastable events with two frequencies (the ratio of 65,000 used to be helpful). Then I can measure something in minutes and extrapolate to MTBFs of thousands of years. Log paper is great! If I see NO metastable events within my attention span, then I cannot extrapolate. Sad to say. To tell you that there was no metastable event in an hour, when you allow, say, 2 extra nanoseconds of settling time (or even 1 ns), is worthless information. You want to know the probability of one metastable excessive delay of a few ns once per thousand years. I don't have the time or patience to measure that. :-( So, until I find a clever way to engage the DLL and increment the clock delay in steps of 40 picoseconds (that may be possible!), I am not sure that I can say more than: common sense tells me that the new flip-flops are much better than the old XC4000 ones in resolving metastability. But I know you want quantitative information, not arm-waving generalities.... Peter Alfke
Article: 24827
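Peter's two-frequency extrapolation can be put in numbers. The sketch below is only an illustration with made-up figures, not Xilinx data: fit the exponential settling model rate(t) = rate1 * exp(-(t - t1)/tau) from two measured settling times, then extrapolate the MTBF out to slack values you could never measure directly. This is exactly the "log paper" trick — on semi-log axes the two points define a straight line.

```python
import math

def fit_metastability(t1, rate1, t2, rate2):
    """Fit the exponential settling model from two (extra settling time,
    observed failure rate) measurements.  Returns (tau, c) such that
    MTBF(t) = c * exp(t / tau)."""
    tau = (t2 - t1) / math.log(rate1 / rate2)
    c = math.exp(-t1 / tau) / rate1          # anchors MTBF(t1) = 1/rate1
    return tau, c

def mtbf(t, tau, c):
    """Extrapolated mean time between metastable failures at settling time t."""
    return c * math.exp(t / tau)

# Made-up numbers: 1000 events/s with 0.5 ns of slack, 1 event/s with 1.5 ns.
tau, c = fit_metastability(0.5e-9, 1000.0, 1.5e-9, 1.0)
years = mtbf(5.0e-9, tau, c) / (365.25 * 24 * 3600)  # roughly 1000 years
```

With these invented inputs, minutes of counting at small slack extrapolate to an MTBF of about a thousand years at 5 ns of slack — which is why seeing zero events at large slack tells you nothing: there is no second point to draw the line through.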
Good suggestions! I don't want to sound defensive, although I am explaining a bad situation. Peter Alfke Hal Murray wrote: > > Recently, we tried it with Virtex parts, and did not immediately get any > > meaningful results, so we abandoned the tests for the time being. We > > were not able to bring the measuring clock edge close enough to the > > first clock. The flip-flops are just too fast in resolving > > metastability. > > Perhaps it's time for a new approach. We've got a lot of smart > people here - many are interested in this problem. > > I think there are two issues. One is social, the other is technical. > > Here are some technical suggestions/questions: > > Why can't we use two clocks rather than the rising/falling edge? > I assume the problem is that there would be skew within the chip. > Can we find a way to correct for that? How about two runs, one with > clock A ahead of clock B and the other with clock B ahead of clock A? > > Can we find a way to collect data faster? There are a lot of FFs > in a modern chip. Maybe we could run 100s of copies of Peter's basic > circuit. Peter's recipe involved 1 second runs. We could also set > things up to run overnight - 10,000 seconds. > > What sort of clock difference do we need? > > What sort of delays can we get with a bit of hackery? I'm thinking > of tricks like running a signal through a chain of CLBs to > make a short delay. Adding or deleting a CLB would make a pretty > fine adjustment. > > I can't quickly find or derive the equations to compute the > K1 and K2 type parameters from a pair of Peter's test runs. > > There are two unknowns and two data points, so it seems like > a reasonable match. But suppose you are using two clocks > rather than both edges. How well will two clock distribution > chains track in a modern chip? Can we correct for any skew > with a 3rd data point? (3 unknowns need 3 data points)... > > Now for some social issues. > > What can we do to make vendors pay attention to this issue? 
> > Is there any good journal that would take a letter to the > editor or a short editorial-type article? > > What do ASIC vendors do in this area? > > Would any of the companies who make FPGA prototyping boards > be willing to let this sort of test run overnight and on > weekends, perhaps while burning in a board? I'm just fishing > for a way to collect a lot more data. > > It might be neat to see if there are any trends. > > Is this an appropriate topic for an undergraduate thesis? > Anybody know a school that would be interested in this > area? Anybody willing to donate a board? > > Anybody got any other ideas? > > -- > These are my opinions, not necessarily my employer's. I hate spam.
Article: 24828
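Hal's back-of-envelope about running hundreds of copies overnight is easy to formalize. Under the standard two-parameter model MTBF = exp(t/C2) / (C1 * f_clk * f_data), the expected event count scales linearly in both the number of test circuits and the run time. The constants below are purely illustrative, not measured values for any device:

```python
import math

def expected_events(n_copies, f_clk, f_data, t_run, c1, c2, t_settle):
    """Expected number of metastable failures seen by n_copies of the test
    circuit in t_run seconds, under MTBF = exp(t/c2) / (c1 * f_clk * f_data)."""
    rate_per_copy = c1 * f_clk * f_data * math.exp(-t_settle / c2)
    return n_copies * rate_per_copy * t_run

# Illustrative constants only: c1 = 1e-10 s, c2 = 0.2 ns, 2 ns of slack.
one = expected_events(1,   50e6, 1e6, 1.0,     1e-10, 0.2e-9, 2e-9)
lot = expected_events(400, 50e6, 1e6, 10000.0, 1e-10, 0.2e-9, 2e-9)
# 400 copies for 10,000 s collect 4,000,000x the events of 1 copy for 1 s.
```

The linearity is the whole point: a single copy that sees a fraction of an event per second becomes hundreds of thousands of countable events when you replicate the circuit across the chip and run it overnight.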
> > These good designers that follow a disciplined anti-fuse design > > flow better start getting it EXACTLY correct the very FIRST time, > > in spite of design changes mandated by design specification > > changes. The reason I say this is because FPGA design jobs are > > getting so big that they demand BGAs to handle the extremely > > large IO. If one of these really good designers does a BIG job > > and blows it (not the anti-fuses), it's going to be tough getting > > that anti-fuse FPGA BGA off that board and putting another on. > > It can be done, but it's a big pain in the pad layout. > > Heck, he doesn't even have to blow it. As I said earlier, > > all that is needed is a change to the design specification. > > So these really good designers either are going to be stuck > > doing small FPGA designs or they're going to have to have > > really good design specifications, have lots of time to do > > extremely good simulation up front, and get everything extremely > > correct the FIRST time. > > It is no wonder that Actel is desperately looking into > > reprogrammable parts that seem to be unobtainium. > > I agree that this newsgroup is dominated by Xilinx and Altera, > > but so is the market pie. > > -Simon Ramirez > > ******************************************************************** > Simon, > > All good points. Issues that ASIC designers have faced forever. > When companies such as Juniper, RedBack, Nexabit/Lucent, > Cisco, etc.... go to the investment community, they are not > highlighting their FPGA designs, they are highlighting their ASIC > designs. > Which only proves that complex designs can be done correctly > the first time and EXACTLY correct the very FIRST time. > > Of course none of this is possible if engineering management > does not demand a solid design specification. It is common > sense practice for ASIC designs to have a detailed design > specification locked down before the design is taped out. 
> > I spent 10 years designing ASICs with multiple first-pass silicon > success. Each of these designs required and had a detailed > specification. > > I then spent 5 years designing FPGAs and PLDs (Xilinx, Altera, > Actel, Lattice, AMD), for which management never required any > specification at all. I always wrote one, but was later dinged on > my performance reviews for having done so. My team's designs > also set new company standards for prototype hardware > availability to the software team. Sorry, but I think that it is the > design engineer's responsibility to ensure that the design has a > well-defined and locked-down specification, before venturing into > the design! It is unfortunate that design management has > decided that a specification is not needed for FPGA designs and > that designers no longer see the value in a design specification. > > If you're happy with only having two choices when you choose a > device for a programmable design, so be it. But I believe that the > design community is better served by multiple vendors offering > multiple technologies to the design community. I know that the > community of designers implementing designs for the space, > avionics, military, and medical markets appreciate the fact that > they have antifuse technology as a choice. > > All BGA devices offered by Actel and Quicklogic have a very > easy-to-use socketing solution, and with improving technologies > for BGA soldering and removal > (http://www.zephyrtronics.com/frameset.htm), designers do have a > workable technology choice, and we as a design community need > to think twice before discarding this viable technology choice. > > Yes, Actel is investing in differentiated re-programmable > technologies and will deliver them to the design community when > they are ready.
But they also continue to invest in antifuse > technology, because in areas of high reliability, design security, > and increasing speed demands, designers do want a quick-turn > programmable ASIC solution. > > Quicklogic also continues to churn out very good differentiated > products. > > Both of these companies continue to survive and turn a profit > despite the best efforts of the two titans to discredit and eliminate > antifuse technology. > ************************************************************************ Elftmann, Likewise you make some good points in your dialogue above. I disagree, though, about ASICs always being EXACTLY right the first time. I have seen ASICs churned out after much simulation that still had bugs. Usually it was because the design specification was "designed" wrong. The end result was that software had to overcome these bugs through patches that made for awkward coding and usage. Many products are churned out with ASICs that have bugs. I've used consumer electronics that had quirks, and some of these quirks were related to ASIC bugs. In my experience, I've seen ASICs churned out after being simulated in a system, i.e., the whole system was simulated -- ASICs, FPGAs, board, interfaces, computer hardware, computer software, etc. The simulations were incredible and fascinating. There is nothing like them to get it right the first time. The problem was that there turned out to be an error with modeling some of the parts. No one caught it until after the product was well in production and being used by the end users. They caught the errors. Software patched them, and the system worked with quirks. I also agree with you that a solid design specification is a must for good design methodology; however, with bean counters running the show and managers counting design minutes until the next milestone, sometimes it is hard to write a good design specification. I am a consultant who often walks in on something that has to be done yesterday.
I value and treasure a good design specification, but sometimes it just isn't possible. In this case, it is better to use reprogrammable parts and live with design changes and iterations. I think that is why most engineers prefer reprogrammable parts over one time programmable parts. And I do acknowledge that there is a lot of sloppy engineering going on. In fact, I am aware of companies where the engineers don't simulate! Some of these guys use the excuse that simulation tools cost too much! It's obvious that these guys are going to blow their schedule and budget if they start designing big parts. I remember simulating 22V10s, for God's sake! I agree with you that there are some applications where anti-fuse devices are the only solution. You mentioned space and military applications, and I have actually designed in Actel devices into such applications. But that was a while back when device densities were not as great as they are today. More importantly, the packaging issue that I brought up earlier is catching up to the anti-fuse technology. RK posted a message saying that he hasn't tried out BGA sockets, but he may try them later. I would suggest that as IO densities increase, the reliability of BGA, especially FBGA, sockets will suffer. I personally do not like interconnect problems interfering with my debugging. I have seen interconnect problems make me think that they were functional problems inside the FPGA. If you have interconnect problems and are trying to fix them functionally, then a great amount of time will be wasted. This is why I am not a proponent of anti-fuse FBGA (and denser) technology. Finally, I agree with you that there should be several players in the FPGA market. Competition is what made today's FPGA vendors produce much better parts than 1,2,3,4,5,6,7,8,9 and even 10 years ago. There is room for all the players today, or they wouldn't be profitable as you say. 
But the reality is that there are two big vendors out there taking the majority of the market pie, and both of them are in the reprogrammable parts business. It is not a coincidence; it is a fact that designers today prefer reprogrammable parts over one-time programmable parts. I do admire you for wanting all engineers to have good design methodologies where the outcomes are genuinely good working parts the first time. This is a goal that I try to achieve, and I have achieved that goal some but not always. It seems that there are always design specification changes, screw ups on my part (no pun intended), mounting pressure on schedules that cause me to rush, and countless other reasons/excuses. Sometimes when I am being rushed, I think back and remember when I flipped the power switch on with my manager watching and it worked the first time! I was not amazed, but he certainly was! He had never seen that before. Good luck on all of your present and future designs. -Simon Ramirez, ConsultantArticle: 24829
C'mon you guys, loosen up. This brouhaha started with Renzo Venturi inserting "Use Actel.." into a discussion of how to get from a Xilinx download cable to Master Serial programming. Pretty trivial problem. And I gave a tongue-in-cheek reprimand " use :-) when you do this..." Now this has blown up into something big. That was not my intent. I will fight any commercials in this newsgroup, but we also must have a sense of humor, or at least of perspective. Engineering is fun, especially when you can fix your own mistakes, as well as accommodate your boss's oversight, or the evolution of the marketplace, even after the system has shipped. That's one of the reasons why I like it where I am. Others may be proud to have designed a million-gate system without a single mistake, and exactly to spec. Not my cup of tea. We are all different... (Thank God!) Have a nice weekend, and smile occasionally ! Peter Alfke ================================================= Renzo Venturi wrote: > " > > Also, If any of you know some web-sites that could provide information on > programming FPGAs with PROM that would be helpful too. > > > > Thanks, > > Ramy > > Use Actel anti-fuse FPGA... > > > Renzo Venturi
Article: 24830
I don't buy the "metastability is becoming a non-issue" thing at all. Not one little bit. As the flip-flops get faster, so do the clock rates. Sure, if you are keeping the same clock rate, the metastability problem diminishes. If, however, you are pushing the performance of these parts to the edges of the envelope, metastability is lurking behind every turn, waiting to bite you in the behind. It all comes down to the clock speed, the rate of change of the async data, and the gain-bandwidth of the flip-flops. Push up the clock and/or the rate of change high enough and you can get yourself into problems, period. Hal Murray wrote: > > I frequently see comments here about silicon getting faster > so metastability is less of a problem. Is that really correct? > > Are there other mechanisms that work the other way, such as > chips getting bigger and clock cycle times getting shorter? > > Can we make a Moore's Law type graph? > > -- > These are my opinions, not necessarily my employer's. I hate spam. -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com or http://www.fpga-guru.com
Article: 24831
Phil Hays wrote: > > Peter Alfke wrote: > > > To tell you that there was no metastable event in an hour, when you > > allow, say, 2 extra nanoseconds of settling time ( or even 1 ns ) is > > worthless information. You want to know the probability of one > > metastable excessive delay of a few ns once per thousand years. I don't > > have the time or patience to measure that . :-( > > Thank you, Peter. I was under the mistaken impression your excess delay was > tens of ps rather than 1 or 2 ns, and you still were not seeing any events. > With a designed excess delay of a few ns, I thought that might be useful > information. > > Wit and humor: With 1 ns excess delay, 520 V3200E parts, and 10000 metastable > measuring FFs per V3200E, and look for a week? See no events, that would imply > MTBF estimated to 10,000 years. Practical to implement? Probably not. > I see a big problem in the future: clock jitter. You still have metastability problems of a different sort as clock speeds get higher. Your data can be on time for setup/hold requirements, but if your clock jitters too much, there go the setup and hold times. Ben.
Article: 24832
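Ben's point is plain arithmetic on the timing budget: peak-to-peak clock jitter comes straight out of the setup slack, so margin that looks comfortable on paper shrinks as jitter grows. A toy budget (all numbers invented for illustration; real jitter accounting is more subtle than a straight subtraction):

```python
def setup_slack(t_period, t_clk_to_q, t_logic, t_setup, t_jitter_pp):
    """Setup-side slack for a synchronous path; peak-to-peak cycle jitter is
    charged against the budget (pessimistic but simple)."""
    return t_period - (t_clk_to_q + t_logic + t_setup + t_jitter_pp)

# 100 MHz clock, 0.2 ns clock-to-Q, 7 ns of logic+routing, 0.5 ns setup.
clean   = setup_slack(10e-9, 0.2e-9, 7e-9, 0.5e-9, 0.0)     # 2.3 ns of slack
jittery = setup_slack(10e-9, 0.2e-9, 7e-9, 0.5e-9, 0.3e-9)  # 2.0 ns of slack
```

Push the clock rate up and the fixed jitter term consumes a growing fraction of an ever-smaller period — which is Ben's "there go the setup and hold times."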
> I also agree with you that a solid design specification is a must for > good design methodology; however, with bean counters running the show and > managers counting design minutes until the next milestone, sometimes it is > hard to write a good design specification. It's part of my job and yours to tell the bean-counters to jump in a lake if they are asking for stupid things. Writing code before you know what you are doing has a long history of contributing to botched projects. For anybody who wants a good read, the classic book for software people is "The Mythical Man-Month" by Fred Brooks. My copy says it was initially published in 1975. There is a 25th anniversary edition with another chapter. If you have a problem with a bean counter, I suggest you give them a copy. (I'm talking cash out of your own pocket.) My favorite paragraph is on page 47: "It is a very humbling experience to make a multimillion-dollar mistake, but it is also very memorable. I vividly recall the night ... (Remember, those were 1960-1970 dollars.) Note that there are two types of specs - the ones you control and the ones provided by external organizations. You can't fix the external ones. They are often fuzzy and sometimes even buggy. Reprogrammable devices are just another tool. Maybe not the answer to every problem, but a nice option to have available. -- These are my opinions, not necessarily my employer's. I hate spam.
Article: 24833
What sort of metastability data is available for antifuse parts? How do they measure it? Any reason that technique can't be applied to SRAM-based parts? How do ASIC vendors get metastability data? -- These are my opinions, not necessarily my employer's. I hate spam.
Article: 24834
In article <399F587E.7C0F4A56@andraka.com>, Ray Andraka <ray@andraka.com> writes: > I don't buy the "metastability is becoming a non-issue" thing at all. Not one > little bit. As the flip-flops get faster, so do the clock rates. Sure, if you > are keeping the same clock rate the metastability problem diminishes. If > however you are pushing the performance of these parts to the edges of the > envelope, metastability is lurking behind every turn, waiting to bite you in the > behind. It all comes down to the clock speed, the rate of change of the async > data and the gain-bandwidth of the flip-flops. Push up the clock and/or the rate of > change high enough and you can get yourself into problems, period. It sure isn't a non-issue, not with all the recent traffic on the subject. :) But Peter reports that he couldn't measure it, so something interesting has changed. I suspect there is a parameter that we aren't considering yet - some way to look at the problem that will give us some insight. What's the ratio of raw FF speed to typical cycle time, even if we consider the cycle time when pushing the envelope? Do modern chips spend more time in routing relative to the routing for the FF-FF pattern needed for a synchronizer? Has the basic design of the master latch improved? -- These are my opinions, not necessarily my employer's. I hate spam.
Article: 24835
> XESS Corp. is releasing the eighth section of its "myCSoC" tutorial for > free downloading at http://www.xess.com/myCSoC-CDROM.html. We will > release a new section each week. > Each section describes a design example for the Triscend configurable > system-on-chip device (CSoC). The Triscend TE505 CSoC integrates an > 8051 microcontroller core with a programmable logic array to create a > chip whose software and hardware are both reprogrammable. Why do all the design tutorials settle on 8-bit CPUs or 16-bit RISCs or PDP-8s? Would not a medium-sized project (24/32/36) be a better example to show both the power and the limitations of FPGAs? Or is it that the low-cost/student development systems can only handle the low-end (i.e. small) FPGAs? Also, a tutorial using FPGAs from different manufacturers would be useful too. What kinds of designs map better to what style of hardware? For example, I am developing a 24-bit home-brew CPU on a low-cost <$100 prototyping board. The only FPGA that will meet my size requirements, and still be in a low-cost PLCC package and have easy-to-find development software, is Altera's 10K10. I would rather use Actel's 42MX09? but I am still waiting on the FPGA software. Ben.
Article: 24836
Elftmann wrote: > Good designers that follow a disciplined design flow have no problem using > Antifuse based FPGAs. It's hard to follow a disciplined design flow when you have two different specifications (and both are incomplete!) for the same function, and both are different from what the programmers claim the function does. I had this joyful task as part of my first FPGA design. We were designing a replacement for hardware designed in 1969. The ability to upgrade both hardware and software without opening the box also can be very useful. I think that EEPROM, antifuse and SRAM based programmable parts all have advantages, and I expect that all of them will be around for quite a while. -- Phil Hays
Article: 24837
Most of this sub-thread has gotten into things that I don't care to discuss, but I want to respond to one part. Jon Kirwan wrote: > > On Sat, 19 Aug 2000 15:05:48 -0700, Neil Nelson <n_nelson@pacbell.net> > wrote: > >If you actually need a job, you need to stop screwing yourself > >and get politically savvy. And it is quite easy to search the > >newsgroups by poster name to find out a person's attitudes and > >opinions. > > If acting like an obsequious sycophant, fawning for a job, is your > idea of being "politically savvy" then I hope fewer are as savvy. > > I don't think anything that Rick Cole said comes even close to > reducing his appeal to employers, not here in this thread and > certainly not in his other excellent posts. Do you think so? > > >Otherwise, it has been a very interesting thread. > > I agree. > > Jon I have had several people contact me about work from reading this thread. So I don't think that I have scared anyone off with my "attitude". On the other hand, one of them indicated that I would have to sign an NDA to interview with them. :-) -- Rick Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design Arius 4 King Ave Frederick, MD 21701-3110 301-682-7772 Voice 301-682-7666 FAX Internet URL http://www.arius.com
Article: 24838
Peter Alfke wrote: > To tell you that there was no metastable event in an hour, when you > allow, say, 2 extra nanoseconds of settling time ( or even 1 ns ) is > worthless information. You want to know the probability of one > metastable excessive delay of a few ns once per thousand years. I don't > have the time or patience to measure that . :-( Thank you, Peter. I was under the mistaken impression your excess delay was tens of ps rather than 1 or 2 ns, and you still were not seeing any events. With a designed excess delay of a few ns, I thought that might be useful information. Wit and humor: With 1 ns excess delay, 520 V3200E parts, and 10000 metastable measuring FFs per V3200E, and look for a week? See no events, that would imply MTBF estimated to 10,000 years. Practical to implement? Probably not. -- Phil HaysArticle: 24839
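Phil's "see no events for a week" thought experiment can be tightened a little with the statisticians' rule of three: observing zero events in T unit-seconds of exposure bounds the per-unit failure rate below roughly 3/T at 95% confidence. Plugging in his (hypothetical) numbers — 520 parts, 10,000 measuring flip-flops each, one week:

```python
def mtbf_lower_bound(n_units, t_observe_s):
    """Rule of three: zero failures in n_units * t_observe_s unit-seconds
    bounds the per-unit failure rate below 3/total at ~95% confidence,
    i.e. the per-unit MTBF is at least total/3."""
    return n_units * t_observe_s / 3.0

seconds_per_year = 365.25 * 24 * 3600
bound_years = mtbf_lower_bound(520 * 10000, 7 * 24 * 3600) / seconds_per_year
# roughly 33,000 years per flip-flop -- the same ballpark as Phil's 10,000
```

So the joke setup really would support the claim, statistically — the impractical part is, as Phil says, the implementation, plus the caveat from earlier in the thread that a single zero-event bound still gives no slope for extrapolating to other settling times.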
This might be a crazy idea, but then maybe I'll learn something when it gets shot down. Peter reports that he didn't get enough errors to measure. How about making errors happen (much) more often? Suppose the input data isn't asynchronous, but we adjust it to have the worst possible timing. I'm thinking of a feedback loop (PLL) to adjust the timing of the input data transition so that the output follows it half the time and misses the other half. Is that likely to make enough errors to measure conveniently? If the answer is still "no", would you believe there is no longer a metastability problem? 1/2 :) Note that you still need another mechanism to adjust the delay between the main clock and the clock for the FF that catches the errors. It might help to have a setup where you can adjust the clock-high time and keep the period fixed rather than adjust the clock frequency. If we had data from this sort of setup, can we compute the traditional metastability parameters? Does the frequency and/or PLL filter parameters matter, or do they drop out? I think we could build that setup with a pair of analog comparators. The idea is to feed a sawtooth into one side and adjust the DC (well filtered) signal going into the other. One comparator would generate the clock. The other would generate the data. I'm thinking of a simple R-C filter. We could generate the sawtooth input with a similar R-C filter that has a much faster time constant. -- These are my opinions, not necessarily my employer's. I hate spam.
Article: 24840
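Hal's feedback idea can at least be sanity-checked numerically. The sketch below is a toy model — the capture probability is an invented sigmoid, not real flip-flop physics — but it shows the servo parking the data edge at the 50% capture point, which is exactly the worst-case timing the test wants to dwell on:

```python
import math
import random

def capture(offset_ps, aperture_ps=10.0):
    """One trial: does the FF capture the new data?  Capture probability is
    modeled as a smooth (invented) function of the data-edge offset relative
    to the clock edge."""
    p = 1.0 / (1.0 + math.exp(-offset_ps / aperture_ps))
    return random.random() < p

def servo_to_balance(trials=20000, gain=0.5, seed=1):
    """Feedback loop: nudge the data-edge offset later on a miss and earlier
    on a capture, so the capture rate converges to 50% -- the center of the
    decision window, where metastable events are most likely."""
    random.seed(seed)
    offset = 100.0  # start well away from the window (picoseconds)
    for _ in range(trials):
        offset += -gain if capture(offset) else +gain
    return offset   # should settle near 0 ps in this toy model
```

In hardware the "nudge" would be the filtered comparator output Hal describes; here it is just a bang-bang step, which is enough to show that balancing captures against misses finds the metastable window without knowing where it is in advance.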
XESS Corp. is releasing the eighth section of its "myCSoC" tutorial for free downloading at http://www.xess.com/myCSoC-CDROM.html. We will release a new section each week. Each section describes a design example for the Triscend configurable system-on-chip device (CSoC). The Triscend TE505 CSoC integrates an 8051 microcontroller core with a programmable logic array to create a chip whose software and hardware are both reprogrammable. The tutorial examples show how the Triscend FastChip development software is used to configure the TE505's programmable logic into peripheral functions that cooperate with the microcontroller core. -- || Dr. Dave Van den Bout XESS Corp. (919) 387-0076 || || devb@xess.com 2608 Sweetgum Dr. (800) 549-9377 || || http://www.xess.com Apex, NC 27502 USA FAX:(919) 387-1302 ||Article: 24841
Ben Franchuk <bfranchuk@jetnet.ab.ca> writes > > Why do all the design tutorials settle on 8 bit cpu's or >16 bit RISC's or PDP-8's. Would not a medium sized project >(24/32/36) be better examples to show both the power and the >limitations of FPGA's. Or is it that the low cost/student development >systems can only handle the low end (ie small) FPGA's. Because according to most industry figures I see, the 8-bit embedded industry is between 55 and 65% of the whole. A small but significant market is still the 4-bit systems. The 16-bit market is also large and growing. There is a small 64-bit market (2%?). This leaves less than 15% of the market for 24/32/36-bit systems. The architecture with by far the largest market share is the 8051, with 20-30% of the WHOLE embedded market. No other single architecture comes close. This is not a technical argument but a simple commercial one. The 8051 may be a pig technically, but it has the market. /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ \/\/\/\/\ Chris Hills Staffs England /\/\/\/\/\ /\/\/ chris@phaedsys.org www.phaedsys.org \/\/ \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/
Article: 24842
Metastability is caused by the (limited) gain-bandwidth product of the master latch in the flip-flop. As such it has nothing to do with the programming technology. Xilinx ( and Altera :-) ) flip-flops are hard-implemented, optimized structures that contain no programming elements in the critical path. I suppose the same is true of most ASICs. The original Actel devices composed their latches out of multiplexers, and we always pointed out their (theoretically) poor metastable performance. I assume that Actel now also has "hard-coded" flip-flops. (?) So the physics is the same. It's just that SRAM-based FPGAs are easier to "play with". Peter Alfke ======================================= Hal Murray wrote: > What sort of metastability data is available for antifuse parts? > > How do they measure it? Any reason that technique can't be > applied to SRAM based parts? > > How do ASIC vendors get metastability data? > -- > These are my opinions, not necessarily my employer's. I hate spam.
Article: 24843
Elftmann,

It's not just hardware design managers that are blowing off design
specs. I am always shocked and then sickened when US Govt project
leaders for software development worth $100K's and above refuse to
write specs and refuse to pay for a spec to be written. Then when the
project fails to meet schedule or fails to function as "imagined" they
request more money to fix things. In the meantime the whole project
management team gets awards for view-graph presentations.

These people should all be working at McDonald's. That would be safe
for me. I never go there!!

Eric

Article: 24844
I tried exactly that, 18 years ago, while at AMD. I used a neat
feedback system, to force metastable events to occur all the time.
I failed miserably.
Well, that was then... I still like the idea.

Peter Alfke
=============================
Hal Murray wrote:

> This might be a crazy idea, but then maybe I'll learn something
> when it gets shot down.
>
> Peter reports that he didn't get enough errors to measure. How
> about making errors happen (much) more often?
>
> Suppose the input data isn't asynchronous but we adjust it to
> have at the worst possible timing. I'm thinking of a feedback
> loop (PLL) to adjust the timing of the input data transition
> so that the output follows it half the time and misses the other
> half.
>
> Is that likely to make enough errors to measure conveniently?
> If the answer is still "no", would you believe there is no
> longer a metastability problem? 1/2 :)
>
> Note that you still need another mechanisim to adjust the
> delay between the main clock and the clock for the FF
> that catches the errors. It might help to have a setup
> where you can adjust the clock-high time and keep the period
> fixed rather than adjust the clock frequency.
>
> If we had data from this sort of setup, can we compute
> the traditional mestability parameters? Does the frequency
> and/or PLL filter parameters matter, or do they drop out?
>
> I think we could build that setup with a pair of analog
> comparators. The idea is to feed a sawtooth into one side
> and adjust the DC (well filtered) signal going into the
> other. One comparator would generate the clock. The other
> would generate the data. I'm thinking of a simple R-C filter.
> We could generate the sawtooth input with a similar R-C filter
> that has a much faster time constant.
>
> --
> These are my opinions, not necessarily my employers. I hate spam.

Article: 24845
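[Editor's aside] Hal's feedback loop can be sanity-checked in software
before anyone builds the comparator circuit. Below is a toy Monte
Carlo sketch; the capture model, the 20 ps aperture, and all other
numbers are invented for illustration only. The flip-flop's capture
probability ramps across a small aperture around the clock edge, and a
bang-bang servo nudges the data-edge timing until the flop resolves
each way half the time, which is the point where metastable events are
most likely:

```python
import random

random.seed(0)  # reproducible run

APERTURE_PS = 20.0  # assumed width of the capture-probability ramp

def flop_captures(edge_offset_ps):
    """Toy model of a flip-flop sampling a data edge.

    edge_offset_ps is the data-edge time minus the clock-edge time:
    negative means the data arrived early (likely captured), positive
    means late (likely missed). Near zero the outcome is random.
    """
    p = min(1.0, max(0.0, 0.5 - edge_offset_ps / APERTURE_PS))
    return random.random() < p

offset_ps = -40.0  # start with the data edge comfortably early
step_ps = 0.5      # bang-bang servo step

for _ in range(20000):
    # Captured -> push the edge later; missed -> pull it earlier.
    # This servos toward the 50/50 point, where offset_ps ~ 0.
    offset_ps += step_ps if flop_captures(offset_ps) else -step_ps
```

In hardware the same role would be played by Hal's PLL-style loop; the
simulation only shows that the servo settles at the balance point
instead of hunting away from it.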
Peter,

The A40MX02 and A40MX04 devices still require the master/slave latch
implementation; we call these CC-Flops. While the A40MX family is not
a qualified space-environment device, it is interesting to note that
the CC-Flops have traditionally had better "Single Event Upset" (SEU)
characteristics than the "hard-coded" flops in the RH1280 (same
architecture as A42MX). Please note that the previous statement is no
longer true with the SX, RTSX, and SXA devices.

The A42MX, SX, RTSX, and SXA devices all have "hard-coded" flip-flops.

Actel set out on the same mission to characterize metastability in our
newer devices, but ran into the same problem as the Xilinx team did.
I'll check back with Product Engineering next week and see where it
sits on the priority list these days. As a result of past discussions
in this group I continue to push on Product Engineering to make
metastability characterization standard operating procedure.

"Peter Alfke" <palfke@earthlink.net> wrote in message
news:399FFF7A.CF624270@earthlink.net...
> Metastability is caused by the (limited) gain bandwidth product of the
> master latch in the flip-flop. As such it has nothing to do with the
> programming technology. Xilinx ( and Altera :-) ) flip-flops are
> hard-implemented, optimized structures that contain no programming
> elements in the critical path.
> I suppose the same is true of most ASICs. The original Actel devices
> composed their latches out of multiplexers, and we always pointed out
> their (theoretically) poor metastable performance. I assume that Actel
> now also has "hard-coded" flip-flops. (?)
> So the physics is the same. It's just that SRAM-based FPGAs are easier
> to "play with".
>
> Peter Alfke
> =======================================
> Hal Murray wrote:
> > What sort of metastability data is available for antifuze parts?
> >
> > How do they measure it? Any reason that technique can't be
> > applied to SRAM based parts?
> >
> > How do ASIC vendors get metastability data?
> > --
> > These are my opinions, not necessarily my employers. I hate spam.

Article: 24846
rickman wrote:

> Most of this sub-thread has gotten into things that I don't care to
> discuss, but I want to respond to one part.
>
> Jon Kirwan wrote:
> >
> > On Sat, 19 Aug 2000 15:05:48 -0700, Neil Nelson <n_nelson@pacbell.net>
> > wrote:
> > >If you actually need a job, you need to stop screwing yourself
> > >and get politically savy. And it is quite easy to search the
> > >newsgroups by poster name to find out a person's attitudes and
> > >opinions.
> >
> > If acting like an obsequious sycophant, fawning for a job, is your
> > idea of being "politcally savy" then I hope fewer are as savy.
> >
> > I don't think anything that Rick Cole said comes even close to
> > reducing his appeal to employers, not here in this thread and
> > certainly not in his other excellent posts. Do you think so?
> >
> > >Otherwise, it has been a very interesting thread.
> >
> > I agree.
> >
> > Jon
>
> I have had several people contact me about work from reading this
> thread. So I don't think that I have scared anyone off with my
> "attitude". On the other hand one of them indicated that I would
> have to sign a NDA to interview with them. :-)
>
> --
> Rick Collins
> rick.collins@XYarius.com
>
> Ignore the reply address. To email me use the above address with the
> XY removed.
>
> Arius - A Signal Processing Solutions Company
> Specializing in DSP and FPGA design
>
> Arius
> 4 King Ave
> Frederick, MD 21701-3110
> 301-682-7772 Voice
> 301-682-7666 FAX
>
> Internet URL http://www.arius.com

All too often we are not very clear as to our objectives. If, e.g., an
objective requires a little fawning and we do not fawn sufficiently,
we will not reach our objective. Or if, e.g., our objective requires
less fawning and we fawn excessively, we will not reach our objective.
If our primary objective is to not fawn (or fawn, as the case may be),
then any other secondary objective that works against not fawning (the
primary) will not be obtained.
We might speak of how we would like the world to work, or how the
world should work if it were just or fair, but commonly we can only
deal with how the world actually does work. Clearly we should work for
the world we want and believe to be fair, but that objective is
usually not one we can effectively pursue, and one almost certainly at
odds with what others want and believe to be fair. Though when those
few occasions arise that allow us to make some headway toward a world
we want and believe to be fair, let us make the best of them that we
can.

What is our current, primary objective?

Is it to make a political (what we think is socially fair) statement
about NDAs?

Is it to get a large number of job interviews?

Is it to stand up for equality between employers and employees?

Is it to denounce fawning and other ``suggested'' undesirable
behaviors?

Is it to get a good job?

If our primary objective is to get a good job, then we will need a
sufficient number of job interviews, but a large number of job
interviews alone will not get us a good job. I.e., we could trash
every job interview and we would not receive a job offer. We need to
get a good job offer from a good company that we are willing to
accept, and it would be better to have several good offers from good
companies within a closely spaced period of time, which would allow us
to evaluate and choose the best. (The best method of getting a job
will depend on a number of factors, so the assumptions I am (we are)
making should not be taken as standard.)

Now that we have properly determined a best success path to our
primary objective, how can we fit in our position on NDAs without
needlessly injuring the expected success of our primary objective?

I personally think that NDAs have negligible or no impact on the
opportunities of the job seeker. Plus it would seem rather difficult
for a company to bring suit against a job seeker for violating an NDA
except in very extreme, obvious cases.
E.g., to violate an NDA, the NDA company would need to divulge some
unique, significant knowledge about their company that the job seeker
could then take to another company, which the second company could
then use in a manner that would make it apparent to the first company
that the specific information had been transferred by the particular
job seeker. This would be a very hard case to prove without showing
with reasonable certainty facts that the first company would not
reasonably have. I.e., when and to whom in the second company did the
job seeker give the NDA information? It would seem that only the job
seeker and the second company have this information, and as they would
be the defendants, they would not easily make this information known.

Rather, NDAs are a formalization of an expectation of trust and
loyalty that is a basic given of employment. The practical effect of
an NDA is to keep in mind something we should already have in mind:
that employers (just like individuals) do not want their
company-specific (personal) information spread around. The job seeker
or employee would want, as a given, to show respect for the privacy of
the companies and individuals they become privy to in the routine
course of performing their job-seeking and employment activities.

For example, when I worked at a hospital I became privy to medical
information for a large number of people that we would naturally think
it is the hospital's responsibility to keep private, and that is now
becoming a legal responsibility as well. What should a hospital do in
a case where a job seeker or employee shows an obvious disregard for
this non-disclosure requirement? I expect NDAs (and perhaps a little
fawning) have a rather different look when _you_ are the one being
protected (and perhaps fawned).

Regards,

Neil Nelson

Article: 24847
Hi -

Take a look at p. 121 of Johnson and Graham, which describes just such
a circuit. I haven't tried it, though.

Bob Perlman

On Sun, 20 Aug 2000 16:01:04 GMT, Peter Alfke <palfke@earthlink.net>
wrote:

>I tried exactly that, 18 years ago, while at AMD. I used a neat feedback
>system, to force metastable event to occur all the time.
>I failed miserably.
>Well, that was then... I still like the idea.
>
>Peter Alfke
>=============================
>Hal Murray wrote:
>
>> This might be a crazy idea, but then maybe I'll learn something
>> when it gets shot down.
>>
>> Peter reports that he didn't get enough errors to measure. How
>> about making errors happen (much) more often?
>>
>> Suppose the input data isn't asynchronous but we adjust it to
>> have at the worst possible timing. I'm thinking of a feedback
>> loop (PLL) to adjust the timing of the input data transition
>> so that the output follows it half the time and misses the other
>> half.
>>
>> Is that likely to make enough errors to measure conveniently?
>> If the answer is still "no", would you believe there is no
>> longer a metastability problem? 1/2 :)
>>
>> Note that you still need another mechanisim to adjust the
>> delay between the main clock and the clock for the FF
>> that catches the errors. It might help to have a setup
>> where you can adjust the clock-high time and keep the period
>> fixed rather than adjust the clock frequency.
>>
>> If we had data from this sort of setup, can we compute
>> the traditional mestability parameters? Does the frequency
>> and/or PLL filter parameters matter, or do they drop out?
>>
>> I think we could build that setup with a pair of analog
>> comparators. The idea is to feed a sawtooth into one side
>> and adjust the DC (well filtered) signal going into the
>> other. One comparator would generate the clock. The other
>> would generate the data. I'm thinking of a simple R-C filter.
>> We could generate the sawtooth input with a similar R-C filter
>> that has a much faster time constant.
>>
>> --
>> These are my opinions, not necessarily my employers. I hate spam.

-----------------------------------------------------
Bob Perlman
Cambrian Design Works
Digital Design, Signal Integrity
http://www.cambriandesign.com
Send e-mail replies to best<dot>com, username bobperl
-----------------------------------------------------

Article: 24848
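[Editor's aside] If a circuit like the one in Johnson and Graham did
yield error counts, Hal's question about computing the traditional
metastability parameters reduces to fitting the usual two-parameter
model MTBF = exp(t/tau) / (T0 * f_clk * f_data). Two MTBF measurements
at two different settling times are enough to solve for tau and T0. A
hedged sketch; the function name and every number below are invented
for illustration:

```python
import math

def fit_metastability_params(t1, mtbf1, t2, mtbf2, f_clk, f_data):
    """Solve MTBF = exp(t/tau) / (T0 * f_clk * f_data) for tau and T0,
    given the MTBF measured at two different settling times t1 and t2.

    Taking the ratio of the two MTBFs cancels T0, f_clk, and f_data:
        ln(mtbf1 / mtbf2) = (t1 - t2) / tau
    """
    tau = (t1 - t2) / math.log(mtbf1 / mtbf2)
    t0 = math.exp(t1 / tau) / (mtbf1 * f_clk * f_data)
    return tau, t0

# Invented example measurements: 970 s MTBF with 1.0 ns of settling
# time, 0.044 s with 0.5 ns, at a 50 MHz clock and 10 MHz data rate.
tau, t0 = fit_metastability_params(1.0e-9, 970.0, 0.5e-9, 0.044,
                                   50e6, 10e6)
```

Note that in this simple model the clock and data frequencies cancel
in the ratio, which bears on Hal's question of whether they drop out;
only the settling-time sweep determines tau.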
Chris Hills wrote:
>
> Ben Franchuk <bfranchuk@jetnet.ab.ca> writes
> >
> > Why do all the design tutorials settle on 8 bit cpu's or
> >16 bit RISC's or PDP-8's. Would not a medium sized project
> >(24/32/36) be better examples to show both the power and the
> >limitations of FPGA's. Or is it that the low cost/student development
> >systems can only handle the low end (ie small) FPGA's.
>
> Because according to most industry figures I see the 8-bit embedded
> industry is between 55 and 65% of the whole. A small but significant
> market is still the 4-bit systems. The 16-bit market is also large and
> growing. There is a small 64 bit market 2%?
>
> This leaves less than 15% of the market for 24/32/36 bit systems
>
> The architecture with by far the largest market share is the 8051 with
> 20-30% of the WHOLE embedded market. No other single
> architecture comes close.
>
> This not a technical argument but a simple commercial one. 8051 may
> be a pig technically but it has the market.

However, most of the embedded industry is still simple control, where
a 59-cent real 8051 can be used in Aunt Sandy's washing machine. It
does not make sense to use a $50 FPGA in a $49 modem. I am not sure
what the main generic FPGA market is, but it is not low-priced
computer products.

Ben.
--
"We do not inherit our time on this planet from our parents...
 We borrow it from our children."
"Octal Computers: Where a step backward is two steps forward!"
http://www.jetnet.ab.ca/users/bfranchuk/index.html

Article: 24849
On Sun, 20 Aug 2000 10:09:40 -0700, Neil Nelson <n_nelson@pacbell.net>
wrote:

>All too often we are not very clear as to our objectives. If, e.g.,
>an objective requires a little fawning and we do not fawn sufficiently,
>we will not reach our objective. Or if, e.g., our objective requires
>less fawning, and we fawn excessively, we will not reach our
>objective. If our primary objective is to not fawn (or fawn, as the
>case may be), then any other secondary objective that is against
>not fawning (the primary) will not be obtained.

Hehe. I love your way of writing all that!

>We might speak of how we would like the world to work or how
>the world should work if it was just or fair, but commonly we can
>only deal with how the world actually does work. Clearly we
>should work for world we want and believe to be fair, but that
>objective is usually not one we can effectively pursue and one
>almost certainly at odds with what others want and believe to be
>fair. Though when those few occasions arise that will allow us to
>make some headway for a world we want and believe to be fair let
>us make the best of it that we can.

This kind of "I'm practical about life" argument justifies all manner
of behavior and explains little. But I do also understand that it is a
common way of saying, "I'm doing what I have to." We all do what we
must, but in practice there are wide ranges of behavior that such
practical considerations allow us.

In the end, what really helps to make a better world is when each of
us takes small actions together, not when we allow others to divide
and split us into expedient behaviors, practical for the moment. The
neat thing that's even better about that is that it actually can work
out better in the long run, too, from a personal perspective. Some
folks will just fold their cards on the first push. They can make it
worse for themselves and others, being so easy.
Luckily, we do live in a day and age in the US where we actually *can*
take a hike from a company that's asking everyone to sign a visitor's
log that is also a sneaky contract, and not look back. It wasn't
always that way.

I was born to a very poor family. My dad died at 31 and we had very
little to live on. I understand deeply the need to "do what one must,"
since I've had to live on free dried bread from stores that couldn't
sell it, and milk and sugar to add taste. (I got to like it,
actually.) And I'm glad for the efforts others have taken that allow
me to live a wonderful life today, work on wonderful projects, and not
be treated poorly in spite of my meager beginnings. I owe a lot to the
efforts of others.

>What is our current, primary objective?
>
>Is it to make a political (what we think is socially fair) statement
>about NDAs?
>
>Is it to get a large number of job interviews?
>
>Is it to stand up for equality between employers and employees?
>
>Is it to denounce fawning and other ``suggested'' undesirable
>behaviors?
>
>Is it to get a good job?

Well, just to be argumentative while staying truthful, my primary
objective is to work on interesting projects with good-quality teams
of people. I care about learning, and I can't do that if I'm their
best resource, so I look for teams that have some folks who are better
than I am: smarter, more talented, and fond of teaching others. I also
look for projects that are, themselves, likely to educate me about
nature (physics) in some way, though they can be interesting in other
ways, too. That's what a "good job" is, I think.

>If our primary objective is to get a good job, then we will need a
>sufficient number of job interviews, but a large number of job
>interviews alone will not get us a good job. I.e., we could trash
>every job interview and we would not receive a job offer.

Accurately said. A "sufficient number" is by definition necessary, of
course. The problem is that you go on to smoothly add in "but a large
number of..."
as though "large number" is implied by "sufficient number." Sufficient
could be just one, but "1" is rarely seen as a "large number." So
sufficient, yes; that's a good premise. But "large number" does not
logically follow from this assumption, so your reasoning appears to be
slipping. If you now draw further conclusions from this departure from
sound logic, those conclusions will also be unsound.

>We need
>to get a good job offer from a good company that we are willing to
>accept, and it would be better to have several good offers from good
>companies within a closely spaced period of time that would allow us
>to evaluate and choose the best. (The best method of getting a job
>will depend on a number of factors such that those I am (we are)
>making should not be assumed to be standard.)

I like this point. It is better to have a choice between good choices.
Life is complex and there are a variety of factors, unspoken and
explicit, to match up. No choice will be perfect, so having more means
there is a better chance to find a more healthy and mutual match-up.

<snip>

>I personally think that NDAs have negligible or no impact on the
>opportunities of the job seeker.

That's your opinion, but the facts in my own life argue strongly
against it. Most folks coming out of college and going to a work-a-day
situation will go from one employer to another. If there are any
troubles in the process, it just winds up being on the shoulders of
their prior employer and their current one. The employee can let the
"big boys" settle these details amongst themselves. They are, after
all, just a lowly employee; they can't be expected to understand these
things.

For example, agreements about intellectual property rights are often
simply, "you own nothing and the company owns anything and everything
you think or do." Close enough, anyway. So, this employee now takes
another job, with a similar agreement.
Well, the employee has never owned any of their ideas anyway, since
every company they have worked for has always had some similar
agreement. They have nothing to protect and shouldn't care in the
least if they sign one more of them. Their new employer and their old
ones can work out amongst themselves who owns what, if it should ever
come to loggerheads. But the employee knows for sure that he/she
simply doesn't care; they own nothing, no matter how it plays out.

But I've been self-employed most of my adult life. I have my own
interests, skills, ideas, and prior and existing relationships to
protect. I have a past, in other words, and I actually own part of my
past, too. I can't just sign a standard intellectual property
agreement. Every case I've seen so far has required significant
modification -- and that includes those from Intel down to much
smaller companies I've worked with. Sometimes the company figures it's
not worth the negotiation; sometimes they figure otherwise. But either
way, I can't just sign a blind document. I have a life.

And getting back to your "no impact" point, the fact is that NDAs *do*
have an impact. I was involved in such a case only a few years ago,
where one company sought to block me from contract work they had no
business attempting to halt. After spending much time on the phone
with their president, it simply boiled down to this: they felt that my
skills working for this new company would significantly hamper their
ability to compete in the market, and that their NDA allowed them to
prevent me from working there, because I might accidentally spill
something they'd talked to me about. However, I didn't intend to work
on any competing products there, and that potential client of mine had
no intention of asking me to do so, either. I had to get a lawyer
involved to point out to them what I'm sure their own lawyers had
already told them -- that they had no case at all.
But the real pain in this is that I had only the best feelings for
these people. I liked them and wanted them to succeed. I still do, and
I wouldn't do anything to harm them. More, the new company I wanted to
work for also had their interests honestly in mind, in spite of the
fact that there was a slight competitive overlap, and had explicitly
told me that they would not ask me to get anywhere near any of their
projects that might be considered competitive.

This president and his company proceeded to destroy their working
relationships with several companies in the process, as well as their
ability to continue working with me, over a perception of harm that
simply didn't exist and still doesn't exist. And when I talked to the
president later, all he could point to was his own perception of the
words in the NDA and the power he felt it should entitle him to (and
it didn't).

NDAs sometimes destroy, not for what they legally are, but for what
they emotionally represent. I cannot ever forget that fact.

>Plus it would seem rather difficult
>for a company to bring suit against a job seeker for violating an
>NDA except in very extreme, obvious cases. E.g., to violate an
>NDA, the NDA company would need to divulge some unique,
>significant knowledge about their company that the job seeker
>could then take to another company, which the second company
>could then use in a manner that would be apparent to the first
>company that the specific information had been transferred by the
>particular job seeker.
>
>This would be a very hard case to prove
>without showing with reasonable certainty facts that the first
>company would not reasonably have. I.e., when and to whom
>in the second company did the job seeker give the NDA
>information? It would seem that only the job seeker and second
>company have this information, and as they would be the
>defendants, they would not easily make this information known.

Few things actually reach a court of law.
As I mentioned above, that case didn't reach court. But relationships
were destroyed anyway.

>Rather NDAs are a formalization of an expectation of trust and
>loyalty that is a basic given of employment. The practical effect
>of an NDA is to keep in mind something we should already have
>in mind that employers (just like individuals) do not want their
>company specific (personal) information spread around. The job
>seeker or employee would want, as a given, to show respect for
>the privacy of the companies and individuals they become privy
>to in the routine course of performing their job seeking and
>employment activities.

There remains no reasonable basis I can imagine where a company needs
to disclose NDA-quality information in a first interview. It's almost
silly to imagine a company so reckless as to start disclosing
proprietary information to each and every applicant the first time
they see them. I am still waiting for you to clarify a situation where
such would be appropriate and necessary.

>For example, when I worked at a hospital I became privy to
>medical information for a large number of people that we would
>naturally think is the hospital's responsibility to keep private and
>is becoming more now a legal responsibility. What should a
>hospital do in a case where a job seeker or employee shows an
>obvious disregard for this non-disclosure requirement? I expect
>NDAs (and perhaps a little fawning) have a rather different look
>when _you_ are the one being protected (and perhaps fawned).

Again, in a first interview I rather doubt a responsible employer
would start disclosing such information. An employee should be
responsible, of course. But you are mixing logical elements here:
we've been discussing the case of NDAs for first-time applicant
interviews, not NDAs generally. NDAs have a purpose, of course; I hope
nothing I've said suggests I think otherwise.
I just don't see a proper place for NDAs hidden in visitor logs and
explicitly required for the first-time interview.

Peace,
Jon