In Article <linchih.794184341@guitar> linchih@guitar.ece.ucsb.edu (Chih-chang Lin) writes: > >Hi, > > I am looking for the information (call for paper, registration) >for "FPGA Custom Computing Machine workshop". > > Help please! > >Chih-chang Lin >UC, Santa Barbara FCCM is April 11-14 in NAPA. For more info refer to the MOSAIC URL page: http://www.super.org:8000/FPGA/comp.arch.fpga. You can also email Ken Pocek at kpocek@sc.intel.com or Peter Athanas at athanas@vt.edu for details. They are the co-chairs for the workshop. -Ray Andraka Chairman, the Andraka Consulting Group 401/884-7930 FAX 401/884-7950 email randraka@ids.net The Andraka Consulting Group is a digital hardware design firm specializing in obtaining maximum performance from FPGAs. Services include complete design, development, simulation and integration of these devices and the surrounding circuits. We also evaluate, troubleshoot and improve existing designs. Please call or write for a brochure.Article: 801
get it from http://www.super.org:8000/FPGA/fccm95.html --- -------------------------------------------------------- Andreas Kugel Chair for Computer Science V Phone:(49)621-292-5755 University of Mannheim Fax:(49)621-292-5756 A5 D-68131 Mannheim Germany e-mail:kugel@mp-sun1.informatik.uni-mannheim.de --------------------------------------------------------Article: 802
Dear Sirs! I would like to get a price list for the XC3000 and XC4000 FPGA series from vendors in the USA. The second question: what is the price discount for the Xilinx hardwired gate arrays that are identical counterparts to the FPGAs? ---- Sergey Chernyshov, chern@unc.nnov.suArticle: 803
Sami Sallinen (sjs@varian.fi) wrote: : dong@icsl.ee.washinton.edu (Dong-Lok Kim) wrote: : > : > Hi, : > : > I happened to read a paper : > "Area & Time Limitations of FPGA-based Virtual Hardware", : .. : There might be a significant overhead when compared to a fixed VLSI solution : both real-estate- and performance-wise, but when you compare trying to implement : the same features with a software-only solution the odds are reversed. Yes, but the issue here is whether the "uncommitted logic" on the expensive real estate of the main processor can be justified or not. The cost for adding the FPGA versus VLSI is about 100 times, so it might not be feasible. But the above paper seems to compare the area-speed cost of FPGA in an unfair way, i.e., they did not consider the fact that the FPGA section can contain multiple functions. If the reconfigurability is considered, the area factor must be scaled accordingly (which they did not). -- Donglok Kim ICSL (Image Computing Systems Lab) Dept. of Electrical Eng., FT-10 University of Washington Seattle, WA 98195 Phone) 543-1019 FAX) 543-0977Article: 804
> Subject: Limits on on-chip FPGA virtual computing > From: dong@icsl.ee.washinton.edu (Dong-Lok Kim) > I happened to read a paper > "Area & Time Limitations of FPGA-based Virtual Hardware", > Osama T. Albaharna, etc, IEEE International Conference > on Computer Design 1994, pp. 184-189. > and found a quite interesting fact from it as follows: > > The author says they found the FPGA area to implement certain set of circuits > has overhead of ~100 times compared to the fixed VLSI implementation. Also, the > delay overhead is ~10 times, so using the FPGA on the same die with a RISC core > would not be feasible with the current technology. (Please forgive me if > I am misinterpreting his conclusion). Interestingly enough, they seem to make a very different conclusion in "Virtual Hardware and the Limits of Computational Speed-up" [same authors, ISCAS '94]. Quoting the final paragraph of the paper: \begin{quote} Our investigation indicates that even with these limitations, an FPGA-based platform can still outperform today's advanced general purpose processors. Furthermore, an adaptive platform is a better utilisation of the extra transistors than the integration of an additional processor which could only give, at best, a maximum overall speed-up of 2. Finally, as the enhancement area increases, the speed advantage over using multiple processors decreases. \end{quote} > Another paper that attempts such an integration of FPGA and a CPU core on the > same chip was > "DPGA-Coupled Microprocessors: Commodity ICs for the Early 21st > Century", Andre DeHon, IEEE Workshop on FPGAs for Custom Computing > Machines, 1994, pp. 31-39. > Here, the author just seems to assume that the integration of the FPGA area > on the same die is feasible (very well phrased, but without any proof). Thanks :-) Hmm, depends on what you call feasible.... I'm not aware of any technical limitations which make it difficult to integrate the two on the same die. 
After all, many high-end processors and FPGAs are run on the same fab lines. Of course, there still remains the question of whether or not it is profitable or beneficial. As both I and Albaharna et al. summarize, there is plenty of point evidence to suggest that mixed FPGA+uP systems can outperform conventional, uP-only systems on a variety of tasks with small cost additions (simply put, more than double the performance at less than half the cost). One thing to note here is that the overhead factors quoted from the earlier Albaharna et al. paper assume you build the *same* circuitry on the FPGA as you do in custom logic. The advantage of the FPGA is that you can build much more specialized circuitry which is tailored to a particular problem. In a general purpose architecture you don't get to build exactly the circuitry which would make a particular application (with a particular dataset) fast, you have to build circuitry which does a decent job across a wide range of applications. Conversely, when implementing a routine in an FPGA to accelerate an application, you don't build exactly the circuitry which the fixed processing unit provides, you build exactly the circuitry which is most beneficial to the problem. The overhead factor suggests that there is a tradeoff here. To beneficially employ the FPGA, you need to gain enough from function specialization in the FPGA to offset the overheads associated with using field programmable logic instead of custom, fixed logic. In fact, it is because of this tradeoff that designs which mix reconfigurable and fixed function logic on the same die are attractive. Understanding and quantifying this tradeoff and the regions of benefit more precisely is partially where the research lies. You might also want to check out: "A High-Performance Microarchitecture with Hardware-Programmable Functional Units" in Micro-27 by Rahul Razdan and Michael D. Smith. 
They take a particularly restricted view of how one might integrate FPGA-style logic into a processor (in order to make mapping and experimentation simple) and how it might be exploited. Despite the restrictions placed, they still find that the incorporation of the programmable logic is a profitable use of silicon area. > I just wonder if any of you want to discuss about the limits of this > idea (i.e., on-chip FPGA + CPU core) and the above papers. I can provide > further information about the papers if you want. (Do the authors read this > news group?) "DPGA-Coupled Microprocessors: Commodity ICs for the Early 21st Century" is available as: HTML: http://www.ai.mit.edu/projects/transit/tn100/tn100.html PS: ftp transit.ai.mit.edu:papers/dpga-proc-fccm94.ps.Z slides: ftp transit.ai.mit.edu:slides/dpga+proc-fccm94.ps.Z Andre'Article: 805
In article <3j6qgl$4la@euas20.eua.ericsson.se> ekatjr@eua.ericsson.se (Robert Tjarnstrom) writes: >From: ekatjr@eua.ericsson.se (Robert Tjarnstrom) >Subject: Power gain when moving from FPGA to Gate Array >Date: 3 Mar 1995 10:20:05 GMT >What are the experiences of reduction in power dissipation/consumption when moving an FPGA design to a Gate Array? >Obviously there are two different situations >A) 1 FPGA -> 1 Gate Array. Is a factor 4 power gain reasonable to expect? Depends upon the device utilization of the FPGA..... For instance in a 5K gate FPGA, with usable gate count of 2K..... there is a 2.5 reduction there.... but it really depends on the vendor of the FPGA, Gate array.... what process do they use ( size of cell... .6, .8u etc... ) >B) N FPGA -> 1 Gate Array. Here the power gain should be considerably larger due to reduced io power. >Opinions are welcome >Robert Tjarnstrom Wassail, MjodalfRArticle: 806
In article <3j873u$61c@paperboy.ids.net>, randraka@ids.net writes: |> |> FCCM is April 11-14 ... Correction. It is April 19-21. The correct date is on the mosaic page. -- Brad L. Hutchings (801) 378-2667 Assistant Professor Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602 Reconfigurable Logic LaboratoryArticle: 807
On Thu, 2 Mar 1995, John Cooley wrote: > As far as I know, Actel was the only company that signed an OEM deal with IST > to resell their software. If you were a user of this tool either through the > Actel OEM process or through some other independent IST sales channel, I'd > like to hear what you thought of it in a technical sense. Was this a case of > the financial people killing a possibly good product because they saw its > market as too competitive, was the tool they were offering buggy & ugly or > was this just a bad time to try to commercialize French university developed > FPGA synthesis tools? Compass Design Automation also resold IST's FPGA optimisation tools. the truth is that they were very buggy. i would say that IST was a particularly badly run company who put very little effort into supporting their tools. this is just my personal opinion after working at Compass for three months. \onathan (__) -- Jonathan AH Hogg, Computing Science Department, The University, Glasgow G12 8QQ jonathan@dcs.gla.ac.uk http://www.dcs.gla.ac.uk/~jonathan 0141 339 8855 x2069Article: 808
SRAM in Atmel FPGAs takes up about 3.5 cells per bit for standard SRAM; register files can be more efficient. The new Atmel architecture (which will be introduced in the second half of '96) will be much better in terms of on-chip SRAM.Article: 809
In article <199502280015.TAA02793@play.cs.columbia.edu>, cheng@news.cs.columbia.edu (Fu-Chiung Cheng) writes: |> We are considering implementing asynchronous circuits using FPGAs. |> We need to choose FPGAs such that hazard-free logic can be realized |> and the FPGAs can be reprogrammable in circuits. I've worked with the Xilinx 3000 series parts but don't know much about any other FPGAs. I'm not quite sure what you want to do. I think you want to build self timed state machines using RS flip flops made out of gates rather than the traditional edge triggered logic where everything runs off the same clock. First, the good news: If you look carefully in their app notes, you can find a place where they will promise that if you only change one input to a CLB at a time, the output won't glitch. I occasionally build RS flip flops out of gates. They work. Now the bad news: The architecture of the part encourages/expects designs where most of the logic runs on the same clock edge. The software does too. If you don't design that way you are giving up a major fraction of the resources on the chip. That may be OK for an educational experiment. Examples: Within a CLB, there are feedback paths from the FFs to the input. If you want similar feedback without using a FF you have to use external routing resources. The FFs have a clock-enable. This can frequently be used as an extra logic input. The FFs have an asynchronous reset input. Again, this is occasionally handy as an extra logic term. (I try to avoid it because it is asynchronous.) It is very handy for (re)initialization. There is also a reset pin on the chip that clears all the FFs in the chip. Again, handy for smashing things back into a known state. The timing analyzer knows how to analyze clock-FF=>gates=>setup-FF paths. If you have feedback paths within your gates it will scream at you. So, yes, you can do it. It may not be much fun.Article: 810
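The RS flip-flops built out of gates that the post above describes can be sketched as a pair of cross-coupled NOR gates. A minimal, illustrative gate-level simulation follows; the settling loop and function names are inventions of this sketch, not Xilinx code or tooling.

```python
# Sketch: an RS latch from two cross-coupled NOR gates, the structure one
# would map onto CLB function generators. Purely illustrative.

def nor(a, b):
    """Two-input NOR gate, operating on 0/1 values."""
    return int(not (a or b))

def rs_latch(s, r, q, qbar):
    """Iterate the cross-coupled NOR pair from state (q, qbar) until the
    outputs settle; returns the new stable (q, qbar)."""
    for _ in range(4):  # a few passes suffice for this tiny netlist
        q_next = nor(r, qbar)
        qbar_next = nor(s, q)
        if (q_next, qbar_next) == (q, qbar):
            break
        q, qbar = q_next, qbar_next
    return q, qbar

# Pulse S, then release both inputs: the latch holds its state.
q, qbar = rs_latch(s=1, r=0, q=0, qbar=1)      # set   -> q = 1
q, qbar = rs_latch(s=0, r=0, q=q, qbar=qbar)   # hold  -> q stays 1
```

As the post warns, only one input should change at a time; driving S and R together (or releasing both at once) is exactly the hazard case the app notes caution against.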
> Depends upon the device utilization of the FPGA..... For instance in a 5K >gate FPGA, with useable gate count of 2K..... there is a 2.5 reduction >there.... but it really depends on the vendor of the FPGA, Gate array.... what >process do they use ( size of cell... .6, .8u etc... ) Not quite. Since the unused logic blocks are usually not clocked, there is usually not much power drain from them. -- || Dave Van den Bout || || Xess Corporation ||Article: 811
I'm very interested in this argument. I have also heard different opinions. Some say that FPGAs need too much area, which could instead be spent on specialized VLSI functions in a tenth of the area. Others say that FPGAs represent a speed-up factor for implementing specialized functions at run time that can be managed by the compiler. An interesting paper is from Smith, published in MICRO-27; if you are interested I can send the www address. Are the articles you cite electronically available? It happens that I can't get access to the proceedings you mention, so I cannot read the papers; could you send me a copy? Alessandro De Gloria Dong-Lok Kim (dong@icsl.ee.washinton.edu) wrote: : Hi, : I happened to read a paper : "Area & Time Limitations of FPGA-based Virtual Hardware", : Osama T. Albaharna, etc, IEEE International Conference : on Computer Design 1994, pp. 184-189. : and found a quite interesting fact from it as follows: : The author says they found the FPGA area to implement certain set of circuits : has overhead of ~100 times compared to the fixed VLSI implementation. Also, the : delay overhead is ~10 times, so using the FPGA on the same die with a RISC core : would not be feasible with the current technology. (Please forgive me if : I am misinterpreting his conclusion). : Another paper that attempts such an integration of FPGA and a CPU core on the : same chip was : "DPGA-Coupled Microprocessors: Commodity ICs for the Early 21st : Century", Andre DeHon, IEEE Workshop on FPGAs for Custom Computing : Machines, 1994, pp. 31-39. : Here, the author just seems to assume that the integration of the FPGA area : on the same die is feasible (very well phrased, but without any proof). : I just wonder if any of you want to discuss about the limits of this : idea (i.e., on-chip FPGA + CPU core) and the above papers. I can provide : further information about the papers if you want. (Do the authors read this : news group?) : Thank you in advance. 
: -- : Donglok Kim : ICSL (Image Computing Systems Lab) : Dept. of Electrical Eng., FT-10 : University of Washington : Seattle, WA 98195 : Phone) 543-1019 : FAX) 543-0977Article: 812
In article <3janil$q4b$1@mhadf.production.compuserve.com>, Alfred <100441.524@CompuServe.COM> wrote: >SRAM in ATmel FPGAs takes up about 3.5 cells per bit for >standard SRAM, register files can be more efficient. The new i think you are saying it takes 3.5 logic cells to emulate an SRAM bit in an atmel fpga, but i don't think this answered the author's question. the question was how much die area (as a percentage of the size of a logic cell, for instance) does an SRAM configuration bit take? eg, how big is a bit in the lookup table, a bit which tells the flip-flop whether to operate in D/T/JK modes, etc. you know... those bits that get blasted in at power-up to configure the device :-) i think we can safely form an upper bound on the size of the SRAM cell := die size / number of configuration bits. although i've never thought hard about this, how about a 5-transistor SRAM cell (think of a pass-gate and 2 inverters) which can be chained together to scan in the configuration bits. you would need two of these per configuration bit and a two-phase non-overlapping clock to shift the data. if you can guarantee a minimum clock speed, you can probably replace one of these cells with just a pass gate and dynamic latch (i.e. inverter). thus, 8 transistors per configuration bit. i have designed something similar in a 1.2 micron double-metal single-poly process, and the size was about 45 x 32 microns for two 5-transistor cells (i.e., a single bit). being a novice rectangle hack, i must admit that this is far from optimal. i realized later that i could easily have saved 10-20% in area by rearranging things. also, note that the scan-path is not time critical so removing those extra 2 transistors, resizing the others, and possibly using poly instead of metal would probably save 50%. finally, another layer of metal or poly (as is common in current fpga processes) would help quite a bit. guyArticle: 813
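The upper bound proposed above is a one-line division. A quick back-of-envelope sketch: only the 45 x 32 micron layout figure comes from the post; the die size and configuration-bit count below are invented for illustration.

```python
# Upper bound on SRAM configuration-bit area, per the post:
#   area_per_bit <= die_area / number_of_configuration_bits

def config_bit_area_bound(die_area_um2, num_config_bits):
    """Upper bound on silicon area per configuration bit, in square microns."""
    return die_area_um2 / num_config_bits

# The post's measured layout: 45 x 32 microns held TWO chained 5-transistor
# cells, i.e. one full configuration bit, in a 1.2 micron process.
measured_bit_area = 45 * 32                            # 1440 um^2 per bit

# Hypothetical 8 mm x 8 mm die with 100,000 configuration bits:
bound = config_bit_area_bound(8000 * 8000, 100_000)    # um^2 per bit, at most
```

The bound is loose by construction, since it charges all die area (logic, routing, pads) to the configuration bits; the measured layout figure gives the tighter, direct estimate.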
I have a couple of questions on programming with Altera FPGA devices. I have written some code using PLDshell Plus / PLDasm and the design does not fit in one device (we have an NFX780_84). (1) I know that I can write different modules in different files and then MERGE the design. I guess this does not help in fitting the design in more than one device. In any case, if I use the MERGE option, does PLDshell consider one device per file OR does it merge all the files into ONE device? (2) If MERGE does not consider one device per file, how do I compile each module or group of modules into one device and the rest in one or more devices and then simulate the whole design? In other words, how can I program one design (with many modules) in two or more devices? I would appreciate any pointers to the above queries. Thanks for your time and help. Sincerely, Shakuntala -------------------------------------------------------------------------- -------------- e-mail : skarkada@aol.com OR sanjanai@buster.eng.ua.edu -------------------------------------------------------------------------- --------------Article: 814
Robert Tjarnstrom (ekatjr@eua.ericsson.se) wrote: : What are the experiences of reduction in power : dissipation/consumption when moving an FPGA design to a Gate Array? The main source of power reduction when moving a design from an FPGA to an MPGA (Mask Programmable Gate Array) is the reduction in capacitance (mainly from the programmable routing) that has to switch with each clock or data transition. The other lesser source could be the reduction in short-circuit current due to the smaller and fewer internal buffers in an MPGA. So, an estimate of the power reduction, in moving a design from an FPGA to an MPGA, can be computed from the reduction in delays of the various logic paths. The reduction in short-circuit current may also be computed in the same way. This could be a good research topic to extrapolate the predicted degradation of circuit speeds in FPGAs vs. MPGAs, and come up with a predicted increase in power. For example, some estimates suggest that the FPGAs could be up to 3 to 10 times slower than MPGAs. How much more power will the same circuit consume in an FPGA, if the clock rate is kept the same? Enough rambling! Satwant.Article: 815
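The capacitance argument above can be put through the standard dynamic-power relation P = C * V^2 * f. A minimal sketch with purely illustrative numbers (the capacitance values, supply voltage, and clock rate below are assumptions, not measured device data, and short-circuit current is ignored):

```python
# Dynamic switching power: P = C_switched * Vdd^2 * f.
# All numbers are illustrative, chosen only to show the scaling.

def dynamic_power(c_switched_farads, vdd_volts, freq_hz):
    """Average dynamic power dissipated switching capacitance C at rate f."""
    return c_switched_farads * vdd_volts**2 * freq_hz

# Suppose the FPGA's programmable routing makes it switch 4x the capacitance
# of the equivalent MPGA, at the same 5 V supply and 20 MHz clock:
fpga_p = dynamic_power(c_switched_farads=2.0e-9, vdd_volts=5.0, freq_hz=20e6)
mpga_p = dynamic_power(c_switched_farads=0.5e-9, vdd_volts=5.0, freq_hz=20e6)
reduction = fpga_p / mpga_p   # 4x, tracking the assumed capacitance ratio
```

At a fixed clock rate and voltage, the power ratio simply equals the switched-capacitance ratio, which is why the post's delay-based estimate (delays also track capacitance) is a reasonable proxy.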
In article <3jdb0i$6cn@alpha.cisi.unige.it>, adg@PROBLEM_WITH_INEWS_DOMAIN_FILE (alessandro de gloria) writes: |> the compiler. An interesting paper is from Smith published in MICRO27, if you |> are interested I can send the www address. |> Please post the www address here. -- Brad L. Hutchings (801) 378-2667 Assistant Professor Brigham Young University - Electrical Eng. Dept. - 459 CB - Provo, UT 84602 Reconfigurable Logic LaboratoryArticle: 816
>> There might be a significant overhead when compared to a fixed VLSI solution >> both real-estate- and performance-wise, but when you compare trying to implement >> the same features with a software-only solution the odds are reversed. >> Hi there, I'm joining the fray a little late on this thread, but I just can't believe the direction it's taking. Forgetting about software comparisons and focusing on "fixed VLSI solutions" for a moment, why would you ever compare a "fixed VLSI solution" to an equivalent FPGA implementation?? As my colleague Andre DeHon at the AI Lab points out in defense of his work, no reasonable person would ever do such a thing. And thus statements like you lose a factor of ~100 in density are pointless. You just can't make any reasonable comparison between apples and oranges. Look, modern high performance processors spend the good majority of their transistors doing things like optimizing cache performance, adding more ports to a register file, etc. ASICs also spend many transistors broadening their application base to ensure that their development costs can be amortized over a large enough user base to be cost effective. But FPGAs, by virtue of the fact that they are dynamically reconfigurable, do not have to waste logic resources on any of that extra "stuff" that makes GPPs and ASICs go. They allow you to implement architectures that are absolutely specific to your task. If your task changes, no problem, just drop a new configuration onto your working set. That is the essence of custom computing. Dynamic reconfiguration removes the need to lay down all that extra "stuff" in silicon. Thus designs targeted for FPGAs that take advantage of this fact would use far fewer gates than their "fixed VLSI design" counterparts. The relevant, and really interesting, question is: For any fixed VLSI design (here I mean ASICs) how few gates can you get away with at any one time in an equivalent FPGA design (taking advantage of D.R.) 
that accomplishes the same task? Has anyone done any work in this area? I'd certainly be interested to hear of it. I'll personally offer several of my favorite lollipops to the first person that writes up a study on this topic for a set of image processing type applications. You'd better hurry though, I'll be attacking this next month, and I like lollipops. Also just a really stray thought, does the old economics of silicon area really apply in an age of more silicon than we know what to do with and custom computing on the rise? How much is enough silicon area? Is there a point beyond which we don't care? Has anyone else had this strange gut feeling? I'd be interested in hearing! Thanks, Ed Acosta ---------------------------------------------------------------------------- | Edward Acosta phone: (617)253-2241 | Edward Acosta | | Research Associate fax: (617)258-6264 | 4 Longfellow Place #1504 | | MIT Media Laboratory | Boston, MA 02114 | | E15-319 | (617)227-9338 | | 20 Ames St. | | | Cambridge, MA 02139 | | ---------------------------------------------------------------------------- | Why go 90% of the way for 25% of the reward when you can go 100% of the | | way for all of it! | ----------------------------------------------------------------------------Article: 817
Subject: Re: Lattice ispLSI starter kit From: Sami Sallinen, sjs@varian.fi Date: 3 Mar 1995 08:11:48 GMT In article <3j6j04$v6@idefix.eunet.fi> Sami Sallinen, sjs@varian.fi writes: >iisakkil@alpha.hut.fi (Mika Iisakkila) wrote: >> >> Check out ftp.intel.com. PLDShell should be still available despite >> that the business was bought out by Altera. I haven't tried yet if the >> fuse maps generated for iPLD22V10 work with the ispGAL22V10, but I >> can't see why not. The software even includes a simulator - I'd use >> the Intel/Altera parts, if only I had a programmer... >> -- I have successfully programmed GAL22V10 (unfortunately, not the ISP variety) using the free PLDShell software. I would expect that it should work for ISP parts too.Article: 818
Hi all, Somebody had earlier posted an article with the same subject with a lot of questions. I want to add one more question. What is the basis of partitioning the circuit among multiple devices? Thanks AshutoshArticle: 819
Hi everybody, I want to implement a special purpose DSP algorithm in an FPGA, hoping that it will be faster and cheaper than the alternatives, which are: a general purpose DSP microprocessor, or an ALU chip using an external state machine. The design would basically be a 32-bit multiplier, adder, half a dozen or so registers, 3 busses, and an FSM implementing the algorithm. I want to put many of these on a single chip and exploit parallelism. I want to evaluate this idea without having to spend X thousand dollars on development software and a device programmer. Have there been any recent articles or papers on this particular topic? Has anyone been down this road who can let me know whether "chip densities and cost still have a ways to go yet" or "FPGAs really are cheap and fast, go for it"? Are there any vendor application engineers reading this group? FPGA consultants? Are there customizable circuits like OAK/PINE but for FPGAs? I suspect that the prices of the really high density devices have a ways to go before FPGA u-processors can compete. -robArticle: 820
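One cheap way to evaluate the datapath described above (32-bit multiplier, adder, registers, FSM) before buying any tools is a software model of the algorithm. A minimal sketch, with a made-up FIR-style multiply-accumulate loop and 32-bit wraparound standing in for the 32-bit ALU; none of this is vendor code.

```python
# Bit-accurate model of a 32-bit multiply-accumulate datapath. The & MASK32
# operations mimic the wraparound of 32-bit multiplier and adder hardware.

MASK32 = (1 << 32) - 1

def mac_fir(coeffs, samples):
    """FIR-style sum of products, with every result truncated to 32 bits."""
    acc = 0
    for c, x in zip(coeffs, samples):
        acc = (acc + ((c * x) & MASK32)) & MASK32
    return acc

y = mac_fir([1, 2, 3], [4, 5, 6])   # 1*4 + 2*5 + 3*6 = 32
```

A model like this checks the algorithm and word-length choices; it says nothing about achievable clock rate or device cost, which is the part that still needs vendor data.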
> > >FCCM is April 11-14 in NAPA. Wrong: April 19 - 21 > More more info refer to the MOSAIC URL page: >http://www.super.org:8000/FPGA/comp.arch.fpga. or http://www.super.org:8000/FPGA/fccm95.html --- -------------------------------------------------------- Andreas Kugel Chair for Computer Science V Phone:(49)621-292-5755 University of Mannheim Fax:(49)621-292-5756 A5 D-68131 Mannheim Germany e-mail:kugel@mp-sun1.informatik.uni-mannheim.de --------------------------------------------------------Article: 821
Before implementing asynchronous circuits on FPGAs it might be helpful to pin down your goals. If you are interested in testing an algorithm implemented from asynchronous building blocks (a la Philips labs) or trying out the micropipelines approach, an alternative is to use a fully synchronous implementation. John O'Leary and I have a paper exploring this idea which I will send to anyone who is interested. Geoffrey Brown Synchronous Emulation of Asynchronous Circuits We present a novel approach to prototyping asynchronous circuits which uses clocked field programmable gate arrays (FPGAs). Unlike other proposed techniques for implementing asynchronous circuits on FPGAs, our method does not attempt to preserve the pure asynchronous nature of the circuit. Rather, it preserves the communication behavior of the circuits and uses synchronous duals for common asynchronous modules.Article: 822
In article <199502280015.TAA02793@play.cs.columbia.edu> cheng@news.cs.columbia.edu (Fu-Chiung Cheng) writes: >Question 4. Any Email-address, WWW, or tel. no. related to the above products > are welcomed. > > Thanks a lot in advance. > -John > Email: cheng@cs.columbia.edu WWW addresses: Xilinx http://www.xilinx.com/xilinx.htm AMD (make PLDs) http://www.amd.com/ ArvinArticle: 823
XESS Corp. has just released the second chapter of "FPGA Workout II". This chapter covers the PLDasm hardware description language that's used to program PLDs and FPGAs. It's a hypertext document that will execute on a DOS machine with a VGA display. If interested, you can retrieve this file via anonymous FTP from ftp.vnet.net in directory pub/xess/hyperdoc. Get the ZIPPED and executable file pldasm.exe and install.txt. -- || Dave Van den Bout || || Xess Corporation ||Article: 824
Dear Fellows: I have to write about the estimation of cost for possible fabrication of a massively parallel system (probably MCM) using FPGA tech. Could you give some hints on how I can get the standard figures (amount/gates etc.) and breakup of cost etc.? Also the same info is required for ASIC design tech. Any suggestions! Saghir -- Peace Saghir A. Shaikh Email:saghir@ece.utexas.edu