Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
dwtdw <dwtdw@tulsa.oklahoma.net> wrote in article <34B6AB45.98333AE7@tulsa.oklahoma.net>...
: Attention Altera FLEX10K users:
:
: Is anyone else concerned about the imminent increase in Standby Current
: for the 10K100A from only 0.5mA to 10mA???
:
: I design mobile/battery-powered products and this increase is gigantic
: relative to my overall power budget. Is there anyone else designing
: mobile, battery-powered, and/or other products which require low-power
: consumption or do I have the only application that is power critical?
:
: It seems like Altera could continue to offer a low-power version of the
: device for those who need it. Maybe something like a 10K100AL?

out of curiosity (and an attempt to get some technical stuff discussed here), do you know where the power is going in the chip? 10 mA is a pile of current if you're doing low-powered stuff and yes, i do that sort of design too. most of the fpga's that i use consume < 1 mA for Iccstdby, with many models running around 130 uA or so. for a cmos part, 10 mA suggests that they're running some oscillators, sense amps w/ bias currents, or just have poor quality parts. any other ideas?

also, i assume that the 10 mA spec is for the inputs at cmos levels. is that number over temperature?

just a thought or two,
--------------------------------------------------------------
rk
"there's nothing like real data to screw up a great theory" - me (modified from original, slightly more colorful version)
--------------------------------------------------------------
Article: 8576
> >>What customers will be able to, or want to, use a million gate device?
> >A lot fewer than the PR machine claims.
>
> I agree with this, at least for the next five years.
>
> >>What the tool flow will be (Schematics anyone?).
> >Depends on the above. Schematics are certainly an easier way to get
> >into FPGAs in a quick and useful way than VHDL etc.

Either way is 'quick' to get into FPGAs, depending on what design methodology you are proficient at (er, and how 'easy' your tools are ;-) . I would say one can generate more 'random' VHDL code in a shorter time than one can generate 'random' schematics, but for data path, I believe schematics are faster and easier. Though, I have found that quick for HDLs does not necessarily mean useful or functional ;-)

> Obviously when you pretend to be a visinaire

I don't find that in the dictionary, did you mean 'visionary'?

> you cannot afford
> to write the statement mentioned above.
> A person that believes
> schematics are easier to design than VHDL absolutely cannot understand
> the high end FPGA market.

This is certainly incorrect. Today, schematics, especially given certain designs, CAN BE FAR EASIER to design with than VHDL, Verilog or any HDL. If you believe otherwise, then you are probably lacking experience, and possibly not very proficient in FPGA design tools and FPGA design. I have bailed out many companies who have taken the HDL route to FPGA design. Major problems have been speed (too many levels of logic) and size (logic not very well optimized). Granted, some of the tools are better today, but still not anywhere near as good as one can do with schematics (that is, unless you want to put an inordinate amount of work into massaging your VHDL). HDL might be quicker to implement for a low speed design. Actually, a blend of schematics and HDL can be a more appropriate design methodology. HDLs are hard to follow for data paths. Schematics, if drawn well, can give much visual information about the data flow; HDL does not.
For complex logic, HDL 'can', if done correctly, have a documentation advantage, but it loses that advantage if you need to map the logic for speed, or size.

> Stick to the low end stuff.

Open with an insult...and close with an insult. A tactful sort you are!

Austin Franklin
darkroom@ix.netcom.com

P.S. RM - Please leave the insults out of your posts, they are not necessary.
Article: 8577
This thread has gone on for a while (I obviously do not like the heading). Only the XC5200 family has this sensitivity to Vcc rise time.

All Xilinx FPGAs contain a Vcc monitor circuit, and start their operation after they have detected a "reasonable" voltage, AND after they have then waited for a specified time. In XC3000 and XC4000, this time is 64 milliseconds in master modes, and is 16 ms in the other modes. The XC5200 devices wait only 4 ms, and make no distinction between master and non-master modes. The rationale is that interconnecting the INIT pins takes care of any differences between devices in a daisy-chain. INIT goes High when the last or slowest of the devices is ready for configuration.

This would be a perfect solution, were it not for the SPROM, which starts up with its own voltage monitor and clear circuitry. That's why there can be problems with slow Vcc rise times in XC5200-based systems that use master serial mode. When the SPROM sense-threshold is higher than the XC5200's threshold, there is only a 4-ms delay to hold back the FPGA from starting the configuration process too early. The best solution is to use one of these little and very cheap (< $1) precision voltage monitor or watchdog circuits, available from many linear manufacturers. It is impossible to incorporate such precision linear circuitry in a big, high-performance digital device.

Louis mentioned something about Done going High without a proper configuration in the device. I do not believe that story. I have debugged and helped the debugging of hundreds of sick designs, and I have never found such a story to be true. We have CRC to make that impossible. Xilinx has shipped many tens of millions of devices, which means that the sum of all Xilinx devices has gone through many billions of successful configuration procedures.
Configuration is a reliable, state-machine-controlled procedure that works 100% of the time, as long as the user provides a reasonably clean system environment with no glitches on supplies and clocks. That is what the almost 1 billion dollar per year "SRAM"-FPGA business is built upon.

The 4-ms Vcc rise-time is an unfortunately tough requirement, but applies only to the XC5200, and only in master serial configuration mode.

Peter Alfke, Xilinx Applications
Article: 8578
On 8 Jan 1998 23:35:27 GMT, madarass@cats.ucsc.edu (Rita Madarassy) wrote:

>Obviously when you pretend to be a visinaire you cannot afford
>to write the statement mentioned above. A person that believes
>schematics are easier to design than VHDL absolutely cannot understand
>the high end FPGA market.

I thought he said "easier way to get into FPGA's". I'd completely agree with Peter's statement. However, being familiar with VHDL, I would not go back to schematics. I also would not employ an engineer who did not have HDL experience/knowledge, as those skills are a little more portable, and when provided with the correct tools, such engineers will be far more productive in medium to large designs than schematic gate-bashers. I'd also have the freedom (with a hierarchical design methodology) to switch vendors with much less effort if one crapped out on me.

Stuart
--
For Email remove "die.spammer." from the address
Article: 8579
I never pretended to be a "visionary", I was merely saying that most of the FPGA market is in small devices, and when using small devices it is far easier to get into them with schematics than with VHDL.

I have been on a 2-day VHDL course, and despite being in electronics for >30 years, and doing h/w and s/w design for >20 years, and FPGA design for nearly 10 years, I came to the conclusion that to be *productive* in VHDL you need to a) put in a lot more time than that, and b) do it regularly.

VHDL has so many quirks. Very subtle language features will generate subtly different (i.e. either working or not working) circuits. Until you learn all that stuff, you will have lots of fun with VHDL. And the stuff which is really trivial with VHDL is also usually really trivial with schematic entry - except state machines, for which I would always use a language of some sort.

It is a bit like the old C compilers. To generate good code, you had to know what sort of C generates what sort of assembler. OK if you are in a job and paid per hour. Not OK if you work for yourself.

If you are doing truly massive state machines, then VHDL etc is the only way. My point is that huge designs are rare.

Peter.

Return address is invalid to help stop junk mail. E-mail replies to z80@digiXYZserve.com but remove the XYZ.
Article: 8580
I recall a figure of 20ms given in a Xilinx data book, for the 3k devices.

I have had many problems, in various different designs, with powering up these devices. Eventually I adopted a circuit, based on the old TL7705 reset controller, a HC132, and some Rs and Cs, which waits until VCC is up to spec (over 4.75V) and *then* it drops /RST and then about 10ms later it raises /RST. So, one does not even begin to apply /RST=0 until VCC is in spec.

The Xilinx data book mentions this method for designs where VCC rise is not monotonic. My opinion is that it is desirable even for situations where VCC rise *is* definitely monotonic.

Peter.

Return address is invalid to help stop junk mail. E-mail replies to z80@digiXYZserve.com but remove the XYZ.
Article: 8581
z80@ds.com (Peter) wrote:

>I have been on a 2-day VHDL course, and despite being in electronics
>for >30 years, and doing h/w and s/w design for >20 years, and FPGA
>design for nearly 10 years, I came to the conclusion that to be
>*productive* in VHDL you need to a) put in a lot more time than that,
>and b) do it regularly.
>
>VHDL has so many quirks. Very subtle language features will generate
>subtly different (i.e. either working or not working) circuits. Until
>you learn all that stuff, you will have lots of fun with VHDL.
>
>And the stuff which is really trivial with VHDL is also usually really
>trivial with schematic entry - except state machines for which I would
>always use a language of some sort.

Maybe you should try verilog (no wars please). I think you are missing some of the point: VHDL and verilog are simulation languages, parts of which happen to be synthesisable. For many designs you should be writing more code for the design testbench than the actual design. How do you simulate your schematic designs? Maybe you don't, but designs don't have to be very big for it to get extremely tedious to manually verify their function and timing margins.

Cheers Terry...
Article: 8582
Can anyone relate some experiences using the Xilinx PCI cores? I'm thinking of using them in my next design. Thanks.

cheers,
aaron
Article: 8583
Terry Harris <terry.harris@dial.pipex.com> wrote in article <34b9104c.22471184@news.dial.pipex.com>...
: z80@ds.com (Peter) wrote:
: <major snip>
:
: For many designs you should be writing more code for the design
: testbench than the actual design. How do you simulate your schematic
: designs? Maybe you don't, but designs don't have to be very big for it
: to get extremely tedious to manually verify their function and timing
: margins.
:
: Cheers Terry...

i think the amount of effort for simulating designs depends mostly on the structure of the circuit you're 'testing,' not the methodology. and i can do a 'coded' test bench OR a schematic data generator/checker for designs which are coded with a schematic or hdl.

to verify timing margins, i don't use the simulator but use the static timing analyzer. much, much quicker and it takes the burden off of the designer from having to go in and stimulate the worst-case path. for example, for long counters, this can be incredibly time consuming unless you design in extra circuitry to help with this - but this is a snap to analyze with a static timing analyzer.

lastly, all of the logic simulators that i have seen for fpgas are notoriously poor at simulating clock skew as they treat a net as something that switches at a single point in time. anyone have experience with a logic simulator that puts in back annotated delays for each branch of the clock tree?

just a thought,
--------------------------------------------------------------
rk
"there's nothing like real data to screw up a great theory" - me (modified from original, slightly more colorful version)
--------------------------------------------------------------
Article: 8584
Hi,

Does anybody know if it is possible to access one PCI board's resources from another PCI board without the CPU?

Thanks!
Alex
Article: 8585
Alexandre Pechev <A.Pechev@rdg.ac.uk> wrote in article <69adi8$586$1@susscsc1.reading.ac.uk>...
> Hi,
> Does anybody know
> is it possible to access a PCI's board resources from
> another PCI board without the CPU?

Yes, assuming that by CPU you mean as in an x86 based system, the CPU you are referring to is the x86 on the system board, as opposed to a CPU on the PCI board... The PCI board that wants the information has to be capable of being a master, and must know the address of the PCI board it wants to get the information from, and that board has to be capable of being a target.

Austin Franklin
darkroom@ix.netcom.com
Article: 8586
Aaron Holtzman <aholtzma@nospam.uvic.ca> wrote in article <34B84B67.50D82011@nospam.uvic.ca>...
> Can anyone relate some experiences using the Xilinx PCI cores? I'm
> thinking of using them in my next design. Thanks.

It is only a starting point, and an OK one at that. The back end interface is about 70% of the work, and do not underestimate it!

Austin Franklin
darkroom@ix.netcom.com
Article: 8587
> all of the logic simulators
> that i have seen for fpgas are notoriously poor at simulating clock skew as
> they treat a net as something that switches at a single point in time.
> anyone have experience with a logic simulator that puts in back annotated
> delays for each branch of the clock tree?

I believe ViewSim does this. I know the structure of the simulation file allows this, but whether the back annotation tools actually characterize this correctly is another question...

Austin Franklin
darkroom@ix.netcom.com
Article: 8588
Stuart Clubb wrote:
>
> On 8 Jan 1998 23:35:27 GMT, madarass@cats.ucsc.edu (Rita Madarassy)
> wrote:
>
> >Obviously when you pretend to be a visinaire you cannot afford
> >to write the statement mentioned above. A person that believes
> >schematics are easier to design than VHDL absolutely cannot understand
> >the high end FPGA market.
>
> I thought he said "easier way to get into FPGA's". I'd completely
> agree with Peter's statement. However, being familiar with VHDL, I
> would not go back to schematics. I also would not employ an engineer
> who did not have HDL experience/knowledge, as those skills are a
> little more portable, and when provided with the correct tools, such
> engineers will be far more productive in medium to large designs than
> schematic gate-bashers. I'd also have the freedom (with a hierarchical
> design methodology) to switch vendors with much less effort if one
> crapped out on me.
>
> Stuart

Why is it that the HDL proponents all seem to assume that schematic entry is not/cannot be done hierarchically? My schematics are typically only one page at a given level so that the function and architecture can be discerned at a glance. Need more detail? Push down a level to see what is inside the function. I continue to believe that a well laid out schematic is much more readable than HDL code. I will admit that it is easier to put comments on an HDL design (but I also think they are more necessary there).

In my business (pushing the performance envelope of FPGAs, mostly in DSP type applications) I find that using schematic entry rather than HDL gives me considerably more control over the design with less effort. To obtain the same performance from an HDL requires low level instantiation rather than synthesis, in which case you are doing at least as much work as with schematics. In both cases, the development of a hierarchical library of lower level functions speeds the design process in subsequent projects.

I am not saying that HDLs do not have a place: they can be useful if desired performance and density is low enough that the detail can be left to synthesis. In my experience however, the performance and/or density hit is significant in FPGA designs unless a large amount of effort is used in low level instantiation. The low level instantiation has to be device specific too, so the ballyhooed portability advantage evaporates for any design that pushes the part's performance or density.

I've always found it a bit amusing to observe that the electronics design industry is pushing away from a graphical design methodology (schematics) while most other disciplines are flocking toward graphical user interfaces.

-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka
Article: 8589
On Tue, 06 Jan 1998 19:45:08 +0000, allard jean-marc <jm.allard@hol.fr> wrote:

>Lars wrote:
>
>> Hi,
>> does anyone know where I can find a VHDL model for Synchronous DRAM?
>
>Virtual Chip sells such models.

and Micron gives them for free.

ftp://www.micron.com/pub/

>Bye

Udi
Article: 8590
Ray Andraka <no_spam_randraka@ids.net> wrote in article <34B90D3F.A75@ids.net>...
: Stuart Clubb wrote:
: >
: > On 8 Jan 1998 23:35:27 GMT, madarass@cats.ucsc.edu (Rita Madarassy)
: > wrote:
: >
: > >Obviously when you pretend to be a visinaire you cannot afford
<snip>
:
: I've always found it a bit amusing to observe that the electronics
: design industry is pushing away from a graphical design methodology
: (schematics) while most other disciplines are flocking toward graphical
: user interfaces.

it's interesting to see the advertisements for the graphical front ends to generate the HDL so you don't have to see the HDL source which replaced the graphical schematics. :-)

actually, i downloaded the statecad demo version and find it, so far, a really nice way to work as you can concentrate on the problem solving and not the mechanism of a state machine. i draw the state diagrams in the tool the same way that i do on a piece of paper and it has a nice graphical simulation capability too. anybody have any detailed experience with this tool or any other similar tools?

--------------------------------------------------------------
rk
"there's nothing like real data to screw up a great theory" - me (modified from original, slightly more colorful version)
--------------------------------------------------------------
Article: 8591
a geezer wrote:
: > all of the logic simulators
: > that i have seen for fpgas are notoriously poor at simulating clock skew
: > as they treat a net as something that switches at a single point in time.
: > anyone have experience with a logic simulator that puts in back annotated
: > delays for each branch of the clock tree?

Austin Franklin responded:
: I believe ViewSim does this. I know the structure of the simulation file
: allows this, but whether the back annotation tools actually characterize
: this correctly is another question...

i haven't figured out how to get viewsim to do this - perhaps the viewlogic guys can chime in. also, i've troubleshot a number of designs done on a number of systems (i.e., mentor, orcad, etc.) and clock skew problems were rampant and not modelled by the simulator, as used.

--------------------------------------------------------------
rk
"there's nothing like real data to screw up a great theory" - me (modified from original, slightly more colorful version)
--------------------------------------------------------------
Article: 8592
i would like to use an 89C51 at 11.059 MHz to make a download and continual-readback cable for any Xilinx FPGA (HEX or BIN). how would i do this project? please... thanks.
Article: 8593
Does Atmel carry any larger EEPROMs (larger than 256K)? I also saw that Atmel has a programmer for the EEPROMs. Are there schematics/software available for the ones who would like to build it themselves?

Thanks,
Ivan

>In article <34B33324.7BFA@netas.com.tr>,
> Yekta Ayduk <yekta@netas.com.tr> wrote:
>>
>> Do serial EEPROMS exist for Xilinx configuration in the market to
>> replace Xilinx OTP PROMS?
>
>Atmel makes them, for more details check out :
>
>http://www.atmel.com/atmel/products/prod182.htm
>
>Martin Mason
>Atmel Corp.
>
>-------------------==== Posted via Deja News ====-----------------------
> http://www.dejanews.com/ Search, Read, Post to Usenet
Article: 8594
The Xilinx stuff has allowed back-annotation of path delays from way back. I spent some time trying to make it work a while ago, and was not very successful.

Nowadays I use unit-delay simulation, and make sure that the worst-case path delay is shorter than the clock period (for a fully sync design). Very simple. I have never had problems when using this properly, i.e. using the global clock net etc. as one is supposed to.

BTW I don't think that setting up a simulation script is any different between a schematic and a HDL design. I cannot see why they should differ.

Peter.

Return address is invalid to help stop junk mail. E-mail replies to z80@digiXYZserve.com but remove the XYZ.
Article: 8595
As Austin writes below, ViewSim is capable of simulating clock skew; the issue is whether the back annotation supports this. ViewSim requires two things to support this. One is that there be a way to model the delay to each flipflop clock pin, and second, that this delay is set to something other than default. Looking at the ViewSim model for an XC4000E:FDCE flipflop, we find that there is a buf in the model for just this purpose. As to whether Xilinx back-annotates the per flipflop delays, the answer is I don't know, because my design approach matches Peter's described below, and with the Xilinx global clock nets, this then becomes a non issue. Clearly if the back annotation is not happening, this would be a Xilinx issue, not a Viewlogic one.

Philip Freidin.

In article <01bd1ef3$f82e1de0$d380accf@default> "richard katz" <stellare@erols.com.NOSPAM> writes:
>a geezer wrote:
>: > all of the logic simulators
>: > that i have seen for fpgas are notoriously poor at simulating clock
>: > skew as
>: > they treat a net as something that switches at a single point in time.
>: > anyone have experience with a logic simulator that puts in back
>: > annotated delays for each branch of the clock tree?
>
>Austin Franklin responded:
>: I believe ViewSim does this. I know the structure of the simulation file
>: allows this, but whether the back annotation tools actually characterize
>: this correctly is another question...
>
>i haven't figured out how to get viewsim to do this - perhaps the viewlogic
>guys can chime in. also, i've troubleshot a number of designs done on a
>number of systems (i.e., mentor, orcad, etc.) and clock skew problems were
>rampant and not modelled by the simulator, as used.
>

I believe that Richard's problems with rampant clock skews are because he is not using Xilinx global clock buffers.

Peter wrote:
>The Xilinx stuff has allowed back-annotation of path delays from way
>back. I spent some time trying to make it work a while ago, and was
>not very successful.
>Nowadays I use unit-delay simulation, and make sure that the
>worst-case path delay is shorter than the clock period (for a fully
>sync design). Very simple. I have never had problems when using this
>properly, i.e. using the global clock net etc. as one is supposed to.
>
>Peter.
>E-mail replies to z80@digiXYZserve.com but
>remove the XYZ.
Article: 8596
In article <34b84da0.10844503@news.demon.nl>, brian@shapes.demon.co.uk says...
>
>ees1ht@ee.surrey.ac.uk (Hans) wrote:
>
>>Hi,
>>
>>I wonder if anybody can help me. I am trying to synthesize a very large 24 by 1
>>bit look-up table. Of the 32M addresses 64231 should produce a "1".
>
>If possible, factorise it... (decompose it into smaller LUTs)
>(if the addresses are truly random, this won't help much...
>if there is some sort of pattern, either a regular one or a clustering,
>it can help enormously)
>

Brian, and others,

Thanks for all the help and recommendations I received. I did manage to get hold of a UNIX machine with Espresso. Unfortunately after 3 days of number crunching Espresso did not come up with any results. The loading on that machine was low at the time. The constant array suggestion could not be implemented because my synthesis tool did not support it. I did manage to reduce the number of case statements by using the vector_1 | vector_2 etc. construct. However, again my synthesis tool was not able to synthesize it.

I then looked at the original formula which was used to create this table. It consists of the following: a, b, c are the three input bytes (24 bit table), and LUT is an inverted lookup table derived from Y=1+X^2+X^3+X^4+X^8. Thus Y is given and X needs to be returned.

    Create_LUT_table();   // Create inverted polynomial
    for (a=1; a<256; a++) {
        for (b=1; b<256; b++) {
            for (c=1; c<256; c++) {
                if (((LUT[c] - LUT[a] + 255) % 255) ==
                    (2*((LUT[c] - LUT[a] + 255) % 255)) % 256) {
                    output = 1;
                } else {
                    output = 0;
                }
            }
        }
    }

Apart from the LUT the formula is relatively easy to implement. The inverted LUT is where the problem lies. This table requires a lot of logic to implement (600 logic modules of an ACT2). Does anybody know of a technique to synthesize polynomials without using a clock (i.e. not with a shift register and some feedback XOR gates)? Are there any mathematical tricks to solve this?

Thanks,
Hans.

>- Brian
>
Article: 8597
How can I transfer VHDL to a GigaOps board? How can I interface to it? I have looked at the GigaOps homepage; however, I can find nothing useful.

Thanks

--
+------------------------------------------------------------------+
Eric, Lo Man Fai 盧文輝
Dept. of Computer Science & Engineering 計算機科學及工程系
Shaw College - Computer Engineering 3/3 逸夫書院
The Chinese University of Hong Kong 計算機工程 三年級
E-Mail: mflo@cs.cuhk.edu.hk 香港中文大學
http://www.cs.cuhk.edu.hk/~mflo/ phone: 9419-3741
God so love the world that He give His only begotten son that whoever believes in Him shall never die and have everlasting life.
+-------------------------------------------------------------------+
Article: 8598
Philip Freidin says:
: As Austin writes below, ViewSim is capable of simulating clock skew, the
: issue is whether the back annotation supports this.
<snip>

Geezer Designer says:
: >i haven't figured out how to get viewsim to do this - perhaps the viewlogic
: >guys can chime in. also, i've troubleshot a number of designs done on a
: >number of systems (i.e., mentor, orcad, etc.) and clock skew problems were
: >rampant and not modelled by the simulator, as used.
: >

Geezer adds: and had to trouble shoot one that only manifested itself at -35C, pain in the hand.

Philip typed here, possibly a tad too fast:
: I believe that Richards problems with rampant clock skews are because he
: is not using Xilinx global clock buffers.

Me types here, possibly too early in the morning, slightly insulted. :-)

me, i have no problem with "rampant clock skews" - or even non-rampant ones. and i use the static timing analyzers to verify this in designs. however, i have been called in to trouble shoot "bad chips" on more than one occaision [too early in the morning to spell well] and have found that the devices were good but the designs had an unacceptable amount of clock skew. this problem has come up far too many times. and when we back track through the design/analysis data and flows, it was found that designers simply used back annotated logic simulations for timing analysis, which did not model the clock skew [viewlogic, mentor, orcad]. this was easily picked up by the static timing analyzer, which also requires far less work to verify timing than a logic simulator, with less chances for error, in my opinion.

actually, i find that the actel (most of my designs) global clock buffers work pretty well, don't really need the xilinx ones, but i'm sure that they work fine, too. :-)

so, i can repeat my question, possibly the fpga manufacturers can answer: do the models that they supply model clock skew in the logic simulation accurately enough for timing analysis?

--------------------------------------------------------------
rk
"there's nothing like real data to screw up a great theory" - me (modified from original, slightly more colorful version)
--------------------------------------------------------------
Article: 8599
In article <34bd4ad1.13434167@news.tau.ac.il>, Udi Finkelstein <udif@usa.net> writes:
>On Tue, 06 Jan 1998 19:45:08 +0000, allard jean-marc <jm.allard@hol.fr> wrote:
>
>>Lars wrote:
>>
>>> Hi,
>>> does anyone know where I can find a VHDL model for Synchronous DRAM?

Have a look at our web-site www.vulcanasic.com, there is an area called "Model Shop" with links to a number of companies providing HDL models. We also sell a tool from Denali called Memory Modeler which works with most VHDL simulators, allowing you to develop and simulate memory models quickly.

>>
>>Virtual Chip sells such models.
>
>and Micron gives them for free.
>
>ftp://www.micron.com/pub/
>
>>Bye
>
>Udi

--
Regards
Mark
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mark Goodson
Vulcan ASIC Ltd, Cambridge
The ASIC & EDA Solution Company
Tel: 01223 321391 Mob: 0374 168643 Fax: 01223 301165
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~