Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Antti Lukats wrote: > well I made a Xilinx version of it ;) project files are at OpenForge > http://gforge.openchip.org/projects/mico8/ > > the synthesis report is at end of message, 0% utilization is a nice > feature to see !!! > does it mean I can use an infinite number of Mico8 in S3-1500? ok joking ;) Do you happen to have the same summary for a like-resourced PicoBlaze? Be interesting to compare the two. -jgArticle: 87551
> A quick google led me to this tool: > http://www.eecg.toronto.edu/~jayar/software/edif2blif/edif2blif.html Wow, a blast from my past! I wrote that as a project quite a while ago. It was intended to take an EDIF file plus a library and convert it into a BLIF file. It was only tested on EDIF produced from MaxPlus II, and could handle only pure logic designs (no RAMs, not sure about adders, etc.). The problem with EDIF is not parsing it (that's easy). It's knowing what the underlying library is. The sort of flow you could use is: (1) Synthesize to some target architecture (2) Dump an EDIF netlist (3) Write a library file for edif2blif (and no, I don't really remember how any more) (4) Run edif2blif and hope for the best. Good luck! Paul Leventis Altera Corp.Article: 87552
"Jerry Avins" <jya@ieee.org> wrote in message news:8OOdnYevi6v6_3nfRVn-jw@rcn.net... > Before I write code, I want to know what it's supposed to do. Sure, but you have to know that before you write design docs, too. > When it's for a client, I want to be very sure that the client knows what it's > supposed to do. They know about it in their own head-space. If you ask an engineer to build you a bridge, for example, you probably know where you want it to go and what kind of traffic you want to drive across it, but that's not all a bridge does. It's up to the engineer to anticipate additional loads from wind, temperature changes, tides and currents, etc. What the bridge does is deal with all of these things, but it's not fair to ask a customer to describe the parameters to which he expects his bridge to conform, because he doesn't even know what's reasonable. > I did programming for hire for a time, sometimes moonlighting, and for a > while as part of my only professional activity. I learned from others' bad > experiences before I started to insist on a set of specifications that > would allow the client to prove that I didn't finish the job, and allow me > to prove that I did. > Some clients didn't like to be pinned down [...] There's some level of CYA you have to do when you're working on a contract basis, but even here I think that the onus is primarily on the software engineering professional to be personally sure that he is delivering what the customer needs. There is a lot of customer interaction required to create that surety, of course -- more than is required just to safely paint the customer into a workable corner. > The design part consists of deciding what the user will see and what he > must do. Coding comes after. Yes, there's a lot of that that needs to happen before coding, of course. -- MattArticle: 87553
> do have performance data for S-II in fabric speed? Yes. What would you like to know? If you compile a design in the freely available Quartus II Web Edition, it will tell you a fair bit about fabric speed (the timing models are very accurate). > I can't unfortunately measure it myself as I don't have S-IIs around but I > have > measured actual in-fabric clock speeds of 950MHz in lowest speed grade > V4 - > what would be the case in S-II lowest speed grade? Are 1GHz+ internal > signals possible in the fabric? While you can get the FPGA fabric to run very fast (probably a few GHz for shift registers, counters, local single-level logic functions, etc.), getting a clock to that fabric is another story. We limit the Stratix II clock timing model to 550 MHz; this is for the global clock network. Some of the specialized I/O-related clocks run faster. The reason is that things start getting funky when you run clocks at high speeds, and frankly, there aren't any applications (that I know of) that require operation in excess of 550 MHz. Our DSP/RAM blocks max out at that speed, and <2 ns isn't long enough to get very much at all done. That said, you could probably clock a SII unit up to a high frequency (I've seen some stuff up around 1 GHz from the lab, IIRC). Getting that to run reliably across process, temperature, and voltage is another story. Regards, Paul Leventis Altera Corp.Article: 87554
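As a quick back-of-envelope check of the numbers in Paul's post (my arithmetic, not part of the original thread), the clock periods behind the quoted frequencies:

```python
# Clock periods for the frequencies quoted above (550 MHz Stratix II
# global-clock model limit, 950 MHz measured V4 fabric speed).
def period_ns(freq_mhz):
    """Clock period in nanoseconds for a frequency given in MHz."""
    return 1e3 / freq_mhz

for f in (550, 950):
    print(f"{f} MHz -> {period_ns(f):.3f} ns")
```

At 550 MHz the period works out to about 1.82 ns, which is the "<2 ns" Paul refers to.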
I am trying to understand how a distributed arithmetic design can achieve a density of 1 LUT (4-input) per four taps per input data bit. I have read the www.andraka.com tutorial and a lot of the many previous posts on distributed arithmetic but still cannot see it.... I understand how the scaling accumulator implements a bit-serial multiply and I see how the partial product summation is moved to be in front of the scaling accumulator. What I can't see is how the partial products for four taps can be implemented in a single 4-input LUT? (I realise that a LUT = 16x1 RAM, in Xilinx anyway) To calculate the partial product for four taps and a single bit position of our input data we need to add four bits? If all four bits are 1's then our sum results in 3 bits (or 2 bits and a carry out). How can a single LUT4 represent that? A single LUT has only 1 output bit....Article: 87555
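As an illustration of the table being asked about (a sketch with made-up coefficients, not from the thread): each of the 16 LUT entries holds the sum of the coefficients selected by the four input bits. Those sums are indeed multi-bit, so a single 16x1 LUT can only hold one bit-slice of the table, i.e. one physical LUT4 per output bit.

```python
# Sketch of a distributed-arithmetic lookup table for 4 taps.
# Coefficients are hypothetical.  Entry `addr` is the sum of the
# coefficients whose corresponding input-sample bit is 1; in hardware
# each 16x1 LUT stores one bit-slice of these multi-bit entries.
coeffs = [3, -1, 4, 2]  # hypothetical 4-tap filter coefficients

table = [sum(c for bit, c in enumerate(coeffs) if (addr >> bit) & 1)
         for addr in range(16)]

print(table)
```

For example, address 0b0101 selects taps 0 and 2, so that entry is coeffs[0] + coeffs[2].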
Amr, what do you really want to know, and why? Your original question was very simplistic (I think you have used the word naïve before), but now you get into a fair amount of detail. What is really on your mind? Maybe we can answer a direct question more easily. Peter Alfke, XilinxArticle: 87556
scottfrye wrote: >>... For some >>reason, managers and customers seem to think that software can be >>changed much more easily than mechanical systems... so they do. >> >> > >Do you think that software is as hard to change as mechanical systems? > > Bad experiences in defence systems mean some defence customers are *much* happier to make large physical changes than small software ones. The software change itself can be simple, but testing it for unforeseen consequences can be a nightmare. Especially if it is safety critical (e.g. flight control systems). >or is it possible that, even though software is easier to change than a >mechanical system, it still requires some work and many >managers/customers assume easier change = free change? > > I think most managers are just in love with the notion they can fix things economically after the product has shipped. Viva flash! :-) Regards, SteveArticle: 87557
Peter K. wrote: >Randy Yates wrote: > > > >>I'm not familiar with the spiral/agile model, but you'll have to >>do some pretty fancy talking to convince me to agree that something >>other than a strict development cycle flow is going to produce a better >>system in the long run. >> >> > >Oh, it's still a strict development cycle... if anything, it's more >strict than the big design up front. It just panders a little better >to customers and, in my experience, handles changing requirements much >better. > > > >>The problem I've seen is that, in the process of stumbling over one's >>feet to get _something_ out that works, that the managers can >>touchie-feelie-see, one ends up skimping or dumbing down the high >>level design. >> >> > >Not at all. You don't dumb down the design, you just instantiate those >parts of it that are clearest first. > > > >>Then, when that happens, the remainder of the project's >>development is crippled, with the end result either that it took MORE >>time than it would have taken had a proper high level design been >>performed, or that the performance/maintainability/extendability of >>the system is greatly compromised. >> >> > >It depends on what you mean by "proper high level design" --- don't get >me wrong, there is still abstract, forward-leaning thought involved. >It's just that most designs I've seen try to get too stuck into details >too early. > > > >>>Big up-front design can be a killer when requirements change. For some >>>reason, managers and customers seem to think that software can be >>>changed much more easily than mechanical systems... so they do. The >>>trick is to figure out when they're really expressing a new >>>requirement, or just jumping on the latest Big Thing. >>> >>> >>You can't have your cake and eat it too. Either take a hit sticking >>to your requirements, or dumb-down your system. >> >> > >Actually, you can. 
That's part of the point of the spiral / agile >methods: allow some flux in requirements (it will always be there), but >don't make dumb decisions about it. > > The key problem with anything that produces an early prototype is that managers tend to mistake the prototype for something near to market ready, however emphatically they are told it isn't. Without the pressure that brings, I doubt people like Randy would be criticising more adaptive methodologies. My experience with completely up-front designs is that they always produce something obsolete by its shipping date. I think anything more adaptive than that has to be of huge benefit, whatever drawbacks it has. Regards, SteveArticle: 87558
Hi Peter, I swear there is nothing on my mind. It's just that I'm a little bit confused and I'm trying to clear things up. There is a lot of literature, books, publications and reports on testing techniques and mechanisms and I'm trying to figure out how the entire testing process is done from an industrial perspective. I'm a 4th year PhD student at the University of Cincinnati, and FPGA reliability is just one part of my PhD dissertation. So yes, there seems to be a rough idea on my mind but just for academic purposes. I'm still working on it and it all has to do with predicting reliability for FPGA devices. You can say a new prediction method. But I'm not yet sure of its feasibility so I'm just trying to clear things up. That's all. I'm not trying by any means to pull information out of anybody by asking indirect questions. Maybe my questions gave you that impression but again there is no purpose to them at all and this is again my fault :) Thanks a lot. AmrArticle: 87559
"Jerry Avins" <jya@ieee.org> wrote in message news:8OOdnYevi6v6_3nfRVn-jw@rcn.net... > Matt Timmermans wrote: > > Matt, > > Before I write code, I want to know what it's supposed to do. When it's for a > client, I want to be very sure that the client knows what it's supposed to do. I > did programming for hire for a time, sometimes moonlighting, and for a while > as part of my only professional activity. I learned from others' bad > experiences before I started to insist on a set of specifications that would > allow the client to prove that I didn't finish the job, and allow me to prove > that I did. Some clients didn't like to be pinned down, and if I liked the job > I would help to write the document. I excepted as a requirements specification > a users manual that explained how to use every feature and described > everything that a user would see. If it was loose, I reserved (in writing) the > right to fill in the details as I thought reasonable, and to charge extra for > changes. It was a good policy, allowing me to keep friendships that I would > otherwise have lost. Once, it sort of backfired. In my experience, those who are tasked with writing detailed software specifications (e.g. sales and marketing) are often not qualified to do so. It seems that the ability to write a very precise and detailed specification is a very similar skill set to the ability to write the software itself! So unless the spec writer is a programmer himself, it is usually severely lacking in detail and precision, at least in my experience. Your story (snipped) illustrates that concept nicely.Article: 87560
"Jan Panteltje" <pNaonStpealmtje@yahoo.com> wrote in message news:1122287353.66c85a156040e266c1dce57215063b78@teranews... > On a sunny day (Mon, 25 Jul 2005 10:23:45 +0800) it happened Steve Underwood > <steveu@dis.org> wrote in <dc1ifi$cs2$1@home.itg.ti.com>: > >>Oh what bliss. A man who's free of inner conflicts. I often vigorously >>disagree with myself - especially when my tongue is saying "eat that" >>while my brain is saying "now, just how many kilos did the bathroom >>scales show the other day?" :-) >> >>As America's great philosopher G.W. Bush so wisely said "I have opinions > You mean THIS philosopher: > ftp://panteltje.com/Bush_a_man_without_vision.jpg FYI, that photo is debunked here: http://www.snopes.com/photos/binoculars.aspArticle: 87561
"Fred Marshall" <fmarshallx@remove_the_x.acm.org> wrote in message news:roCdnQY0lNkxynjfRVn-uA@centurytel.net... > > "scottfrye" <scottf3095@aol.com> wrote in message > news:1122308450.491025.165850@g14g2000cwa.googlegroups.com... >> >... For some >>>reason, managers and customers seem to think that software can be >>>changed much more easily than mechanical systems... so they do. >> >> Do you think that software is as hard to change as mechanical systems? >> >> or is it possible that, even though software is easier to change than a >> mechanical system, it still requires some work and many >> managers/customers assume easier change = free change? >> > > It depends. It's pretty easy to change the diameter of a hole in AutoCAD. So, > if that's the context of a change to a mechanical system then it's just as > easy as changing a line of code. In fact, it's equivalent to changing the > value of a constant and that's how it's done. Now, if the next step is to > reprogram an NC machine, then that's something additional. But isn't that > similar to linking the new compiled code...? Maybe. At least in my company, after you change the diameter of the hole in AutoCAD, you have to update required documentation, notify the vendor, possibly pay a change fee, figure out how to deal with the existing units with the "wrong" hole size (see through, throw away, return for correction), verify the fix has been implemented, etc.. The software is more like change the code, test it, and post it on the web site. It still takes work, but it's all in house, requires less time and management, and usually less cost. But for a manufacturing company that out-sourced its software, it could be the opposite!Article: 87562
Amr, thanks for the clarification. High temperature, high voltage and excessive currents are the predominant reasons for semiconductors to fail. Nuclear radiation may be an exotic source of failure. Inside an FPGA, there is little chance to ever encounter these phenomena, unless you run the whole chip at very high temperature or high voltage. On the other hand, when circuits are this good and long-lasting, it gets increasingly difficult, and time-consuming, and expensive to prove how good they are. Austin mentioned some of these difficulties. Commercial users are really not so much interested in whether a particular FPGA lasts 50 or 100 or 150 years. They know that the equipment will be scrapped as obsolete long before that. They are interested in the "midlife mortality", how many out of 100,000 ICs fail during the nth year of life of the equipment, where n is between 2 and 20. That's why they are looking for, and are satisfied with, statistical reliability data. Peter Alfke, from home.Article: 87563
Jerry Avins wrote: Stung by the spell checker! > document. I excepted as a requirements specification a users manual that I accepted as a requirements ... Jerry -- Engineering is the art of making what you want from things you can get. ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯Article: 87564
Jon Harris wrote: > "Jerry Avins" <jya@ieee.org> wrote in message > news:8OOdnYevi6v6_3nfRVn-jw@rcn.net... > >>Matt Timmermans wrote: >> >>Matt, >> >>Before I write code, I want to know what it's supposed to do. When it's for a >>client, I want to be very sure that the client knows what it's supposed to do. I >>did programming for hire for a time, sometimes moonlighting, and for a while >>as part of my only professional activity. I learned from others' bad >>experiences before I started to insist on a set of specifications that would >>allow the client to prove that I didn't finish the job, and allow me to prove >>that I did. Some clients didn't like to be pinned down, and if I liked the job >>I would help to write the document. I excepted as a requirements specification >>a users manual that explained how to use every feature and described >>everything that a user would see. If it was loose, I reserved (in writing) the >>right to fill in the details as I thought reasonable, and to charge extra for >>changes. It was a good policy, allowing me to keep friendships that I would >>otherwise have lost. Once, it sort of backfired. > > > In my experience, those who are tasked with writing detailed software > specifications (e.g. sales and marketing) are often not qualified to do so. It > seems that the ability to write a very precise and detailed specification is a > very similar skill set to the ability to write the software itself! So unless > the spec writer is a programmer himself, it is usually severely lacking in > detail and precision, at least in my experience. Your story (snipped) > illustrates that concept nicely. Sort of. If you want to sell what I write, don't tell me what it should do. Describe to your prospective customer what it will do, and show me. Jerry -- Engineering is the art of making what you want from things you can get. ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯Article: 87565
Yes, I'm using ModelSim; is there some way to do that with this tool? I tried to perform a behavioral simulation, but I don't know how to check the memory (I'm not able to check it with the wave pane because the DIA is generated during the simulation). Thanks a lot GiovanniArticle: 87566
see comments below "Jim Granville" <no.spam@designtools.co.nz> schrieb im Newsbeitrag news:42e58544$1@clear.net.nz... > Antti Lukats wrote: > > > "Martin Schoeberl" <mschoebe@mail.tuwien.ac.at> schrieb im Newsbeitrag > > news:42e540a2$0$8024$3b214f66@tunews.univie.ac.at... > > > >>>Xilinx FPGA's are nice but all of them (after XC4K?) do not have any > > more > >>>access to the OnChipOscillator - it is not usually required also, but in > >>>some rare cases it may be useful to have some OnChip Clock available in > > case > >>>all external clock sources fail, or to have an emergency Watchdog timer to > >>>monitor some events also in the case of external clock circuitry > > failures. > >>>For this purpose we are developing OnChip Oscillator IP Cores. > >>> > >>>http://gforge.openchip.org/frs/?group_id=32 > > Interesting - any data on Vcc and temp variations, and on other > frequencies ? The OnChip Oscillator is part of a larger project and yes we do some measurements; the Vcc-temp variation measurements are not done yet. I have only measured some 5% frequency change in V4 when the chip's temp rises from ambient to normal working temp. But the maximal variation can actually be indirectly calculated from Xilinx datasheets; from my estimate the frequency should not be off more than +-20% over the full temp and Vcc range, or the Xilinx datasheet values would not match - that comes from the margin in the timing specs in the datasheets. Of course this applies for one given device/speed-grade combination and should be measured in the same CLB location > Power consumption ? > Seems to me, it would be better to create a clock as low as possible, > before driving the high-load clock buffers. - thus a cell that is > both OSC and Divider could be better ? > This cell is OSC and divider by 2; the high speed signal path (about 440MHz) is kept completely inside the routing switch of a single CLB, i.e. it is not driving any short or long connections at all, only goes to the switchbox and back. 
The only load on this net is the divide-by-2 stage, which is the actual output of the OCO cell. We are developing OCO cells with low clock output as well, more suitable for things like watchdogs, etc.. > An advantage of on chip Osc, is they self-margin, so track Vcc and Temp. > If I've understood your results, they show quite close correlation > across the die. > You understand correctly; the frequencies shown in the screenshot are from different CLB locations across the die, and the correlation is about what was to be expected. > This would also be a good way to see if faster speed grades REALLY are > faster, or just stamped to match the market :) > Yes, you are a mindreader - this is part of an FPGA fine-tuning project targeted exactly at timing measurements of the FPGA internals. > > >>> > >> > >>Just looked at the sources - There are binary VHDL files. What does > >>this mean? > >> > >>Martin > >> > > > > This means they can only be used by ISE tools. There should be no problems > > using those files with ISE 6.2 to 7.1, just add to your project and > > synthesise as normal. If there are any problems let me know. > > ?! - do you mean you cannot save as ASCII source code, or use any other > editor ? > Surely this nonsense can be disabled ? > > -jg > Sure, ask for commercial licensing and you will get readable source code instantly. At the moment the sources are encrypted to prevent modification. AnttiArticle: 87567
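A back-of-envelope sketch of the figures in this exchange (my arithmetic; the +/-20% band is Antti's datasheet-derived estimate, not a spec):

```python
# Numbers from the exchange above: a ~440 MHz ring path divided by 2,
# with the +/-20% process/voltage/temperature spread the poster infers
# from datasheet timing guard-bands.  Estimates, not specifications.
ring_hz = 440e6          # approximate internal ring frequency
out_hz = ring_hz / 2     # divide-by-2 output of the OCO cell
lo, hi = out_hz * 0.8, out_hz * 1.2

print(f"nominal {out_hz/1e6:.0f} MHz, expected band {lo/1e6:.0f}-{hi/1e6:.0f} MHz")
```

So the cell's output would be expected to sit somewhere in a roughly 176-264 MHz band over the full operating range.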
"praetorian" <Hua.Zheng@jpl.nasa.gov> schrieb im Newsbeitrag news:dc3gfl$2ka$1@nntp1.jpl.nasa.gov... > Are there any constraints for the V2P to disallow routing in a certain area? Read cgd.pdf - but I think no. You can only disable some CLBs from being used, but not prohibit the routing fabric areas from being used. AnttiArticle: 87568
Hi, I think no, too. In the constraints guide (cgd.pdf) there are no constraints defined to exclude areas from routing for the implementation process (map and place & route). You can only define constraints for placing logic, BRAM etc. in specific areas. Sven Antti Lukats schrieb: > "praetorian" <Hua.Zheng@jpl.nasa.gov> schrieb im Newsbeitrag > news:dc3gfl$2ka$1@nntp1.jpl.nasa.gov... > > Is there any contrainsts for the V2P to disallow routing in a certain > area? > > read cgd.pdf - but I think no. you can only disable some CLBs to used but > not prohibit the routing fabric areas from being used. > > AnttiArticle: 87569
I'm struggling to see that, when it appears you can use the design without distributing the source. Cheers, JonArticle: 87570
Hi, I want to understand the basic path types such as: clock to setup; clock to pad. Because I have many problems when I run static timing after place and route (XST tool of Xilinx) and also the timing verification.... thanks a lotArticle: 87571
Hi all, I am looking for information about licensing of HW designs. To be more specific I would like to find out what kind of license comes with the WishBone specification (rev B.3). I read on the OpenCores web site that they refer to the GPL licence but after having read it on http://www.gnu.org I have a lot of questions. It is not clear to me whether, once a GPL-licensed softcore is included in a larger design, the whole design MUST be made available under the same conditions as the GPL core. Moreover on the OpenCores web site (http://opencores.nnytech.net/faq.cgi/section/4/4.1#4.1) it is written that: "Conversely, if you use a GPLed item, you MUST distribute all information about it and NOT prevent others from redistributing or modifying it. Stating it as an oversimplification: you cannot keep secrets unless you want your butt sued." Does this mean that a company who makes money on the designs they make cannot use free cores because they would be OBLIGED to "distribute all information about it"? If a design center uses a free IP in their design, must they make the source code of the whole IC available to everybody? Or the entire data base? Or none of them? And what if they design their IPs but all of them are equipped with a WishBone interface: they do not use any source code but just the WishBone specifications. Must they publish their source code? I've already googled but I was not able to find the information I was looking for. Suggestions, articles, URLs and anything useful to my purpose are welcomed. Thanks in advance. hataArticle: 87572
Steve Underwood wrote: > The key problem with anything that produces an early prototype is > managers tend to mistake the prototype for something near to market > ready, however emphatically they are told it isn't. Without the pressure > that brings I doubt people like Randy would be criticising more adaptive > methodologies. I agree that this is an issue. It's a matter of "training" your stakeholders, though, and I've had more success with incremental involvement of stakeholders (in parallel with the incremental implementation), than trying to get their whole-hearted attention for the times when a big design needs it. The phrase "attention span of a gnat" springs to mind. > My experience with completely up front designs is they always produce > something obsolete by its shipping date. I think anything more adaptive > than that has to be of huge benefit, whatever drawbacks it has. There are certainly benefits and costs to all approaches. As previous posters have said, you need to choose the approach that satisfies as many of the interested parties as possible. The more you know about each approach, the better you can pick the right(est) one. Ciao, Peter K.Article: 87573
Baxter wrote: > The Design can (and should) also tell you things that the code doesn't - > like the relationship between functions, the flow of data, etc. Ah, at last somebody who is capable of putting into words what I think! The reason why I got concerned about these things (apart from having learned x86 assembly under that ridiculous MS-DOS memory model with 64k segments and the memory beyond 1M all but unreachable) was a code for scientific computations I used during my MSc project. The MSc project, where I tested a method to do a certain type of data analysis, evolved into a code-development project for a company whose data I had used for testing. These people wanted to use my techniques in their R&D facility. They wanted to see if they could use the technique in their in-house data processing. The brief for the post-MSc project was reasonable: Take the eclectic program system, consisting of fortran, C and matlab code snippets, and convert it into a C demo program that could be used as a stand-alone application by the staff. Everybody understood that it was out of the question to re-implement the core scientific routine that was available in fortran. I did everything else: Wrote the main wrap-around, the data I/O, implemented the tricks I had come up with in the MSc project. All that remained was to make a GUI for some interactive handling of the data, and to marry the fortran code to the rest of the program. The GUI had to wait for now, it could be handled by other means. The important thing was to include the core fortran program, so the main functionality was available. The fortran program originated as some "example code" for a textbook on numerical modelling, that appeared around 1978-79. The five-page overview of the particular modeling technique, as found in that book, was the only "documentation" available for the code. The book gave an overview of the theory. It was of no use whatsoever as far as the code was concerned. 
The program itself was straight-forward. There was a main function that imported some parameters, organized some loops, and passed the parameters to the core subfunctions. The parameter I/O in the main program was by file. Not that it mattered, I could write a wrap-around function in C and call the fortran core subfunctions from there. Except it was not possible. Every status or error message generated by the core subfunctions was written to file, not passed back to the main function. Every computed parameter was written to file, not passed back. The concept of "returning the results in arrays for the main program to decide what to do" had not been respected. There was no way of implementing a similar code from scratch; doing that is a project at the PhD level. (I know people who got their PhD degrees by writing codes with exactly the same functionality, if not in the same way.) There was no way of re-structuring the existing code, since no relevant documentation existed. So the project blew sky high, if compared to the original brief. I was less than happy about that, but "luckily", the people involved decided it could continue as an R&D project, and so I got onto the path towards my PhD. Since then, I have been very aware about documentation, error handling, data structures, data flows etc. What I call "design". RuneArticle: 87574
"Fred Marshall" wrote.... >"scottfrye" <scottf3095@aol.com> wrote in message >news:1122308450.491025.165850@g14g2000cwa.googlegroups.com... >> >... For some >>>reason, managers and customers seem to think that software can be >>>changed much more easily than mechanical systems... so they do. >> >> Do you think that software is as hard to change as mechanical systems? >> >> or is it possible that, even though software is easier to change than a >> mechanical system, it still requires some work and many >> managers/customers assume easier change = free change? >> >It depends. It's pretty easy to change the diameter of a hole in AutoCAD. >So, if that's the context of a change to a mechanical system then it's just >as easy as changing a line of code. In fact, it's equivalent to changing >the value of a constant and that's how it's done. Now, if the next step is >to reprogram an NC machine, then that's something additional. But isn't >that similar to linking the new compiled code...? Maybe. > >Fred However, when you change the diameter of a hole in AutoCAD you then have to go thru the physical process of actually changing the hole diameter in the end part. With software, once the line of code is changed, you recompile and the change is done (minus any paperwork and distribution). - Scott Frye