I am delighted by the appearance of this Internet news.froup, and cannot resist the temptation to (attempt to) post the first article. In keeping with my interest and specialties, I invite everyone to read about "my" division's supercomputer. - michaelt/M6 ----------------------- ----------------------- ----------------------- -- *** <disclaimer> ** Michael R. Tchou ** <disclaimer> *** -- -- My inventions, produce, and the benefits thereof, are my -- -- employer's property. My opinions are my own, and _only_ -- -- my own. All in all, a reasonably equitable arrangement. -- ----------------------- ----------------------- ----------------------- >:> >:> Giant Paragon System at Sandia Predicts Comet Fireballs >:> >:> >:> Predictions based on shock wave simulations performed on the world's >:> most powerful computer were borne out this week as comet fragments >:> slammed into Jupiter and spawned spectacular fireballs visible in some >:> cases even to amateur astronomers. >:> >:> Scientists at Sandia National Laboratories used their 1,840-node Intel >:> Paragon XP/S 140 supercomputer to model the impact effects of the >:> Shoemaker-Levy 9 comet, accurately predicting the giant fireballs that >:> observatories around the world have captured this week. >:> >:> "No one really expected to see a fireball," said Sandia physicist Mark >:> Boslough, who, along with project co-leader David Crawford, observed >:> the astrophysical fireworks from the Air Force Maui Optical Station at the >:> summit of Haleakala in Hawaii. "We predicted the visible fireballs, and we >:> suggested that the trajectory of the fireball could be used to calculate the >:> size of the comet fragments. Using the trajectory from our simulation, the >:> biggest fragments would be approximately three kilometers in diameter, which >:> is larger than most astronomers predicted." 
>:> >:> To model the 3D hydrodynamics of the comet fragment interactions with the >:> Jovian atmosphere, Sandia scientists Boslough, David Crawford, Allen >:> Robinson and Timothy Trucano used PCTH, the parallel version of the widely >:> used CTH shock physics code. The software was developed by Sandia and >:> originally used in modeling a variety of nuclear weapons problems. >:> >:> The team's 10-hour simulations on the Paragon system showed each >:> fragment creating a bow shock similar to the shock wave at the tip of a >:> supersonic jet fighter, with temperatures inside the bow shock region >:> reaching as high as 35,000 degrees Centigrade. The visual output from >:> the model depicts the flattening and breakup of the fragment, as well as >:> the trail of hot, high-pressure air and comet debris that explodes into the >:> surrounding atmosphere and creates the fireballs that have been highly >:> visible this week. >:> >:> Taking advantage of Sandia's mammoth Paragon system, the code used a cell >:> size of 5 km on a side and a mesh of eight million cells. They modeled a >:> volume of 1,440 x 1,440 x 480 km, with bilateral symmetry making the total >:> dimensions 1,440 x 1,440 x 960 km. Their code is unique in offering a full >:> three-dimensional simulation with realistic materials modeling that can >:> distinguish between the comet's debris and the atmospheric materials. >:> >:> "We believe our results are exceptionally accurate because the Paragon >:> computer allowed us to model the comet behavior with higher resolution than >:> ever before possible," said Phil Stanton, manager of Sandia's Experimental >:> Impact Physics Department. "Resolution is especially important because the >:> codes 'smear' or average the pressures, temperatures and conditions over >:> a cell width, so having finer cells ends up giving us a much better look >:> at the shock wave. 
The shock propagates farther and faster in our >:> calculations compared to some of the work done on less powerful systems than >:> the Paragon computer." >:> >:> The higher resolution afforded by the Paragon system also allowed the Sandia >:> scientists to resolve a fireball's ejection velocity and the temperatures >:> within a fireball, according to Crawford. "Our simulation is the only one to >:> follow the cometary debris from the time of impact through the time when the >:> debris is ejected from the fireball," he said. "We're able to track where the >:> material falls back onto the planet, and we expect that information to be >:> helpful in answering our questions about some of the features of the black >:> spots we're seeing now in our observations of Jupiter." >:> >:> The Paragon system will now lend its computational muscle to the task of >:> determining the size of the comet fragments. Using its PCTH code on the >:> Paragon system, Sandia anticipates being able to rerun its calculations for >:> different size fragments to match up with the observed data. "At that point," >:> said Stanton, "we'll be able to say with some degree of confidence what the >:> actual fragment size was. I think that's going to be the best estimate >:> anybody will be able to provide of the true size of these fragments." >:> >:> The Paragon system at Sandia is the largest Intel supercomputer installation. >:> It has 1,840 computational nodes, 3,680 Intel i860 XP processors, 38 >:> Gbytes of memory and 330 Gbytes of disk storage. In May of this year, the >:> system set a new record for the Massively Parallel Linpack benchmark of >:> 143.4 Gflops double-precision, returning the "world's fastest computer" >:> banner to the United States. The previous record of 124.5 Gflops was set in >:> August, 1993, by Fujitsu's Numerical Wind Tunnel, a one-of-a-kind system >:> owned by the Japanese National Aerospace Laboratory. >:> >:> Sandia is a national laboratory based in Albuquerque, N.M. 
and Livermore, CA, >:> and operated by Martin Marietta Corp. for the U.S. Department of Energy. >:> The impact modeling of the comet was supported with funds from the National >:> Science Foundation. >:> >:> >:> Paragon is a trademark and i860 is a registered trademark of Intel >:> Corporation. >:> >:> ********************************************************************** >:> HPCwire has released all copyright restrictions for this item. >:> Please feel free to distribute this article to your friends and >:> colleagues. >:> >Article: 1
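The mesh figures quoted in the press release are easy to sanity-check: 5 km cells over the 1,440 x 1,440 x 480 km modeled half-volume (bilateral symmetry doubles it to the full 960 km) come out just under eight million. A quick arithmetic check:

```python
# Sanity check of the mesh in the Sandia press release:
# 5 km cells over a 1,440 x 1,440 x 480 km modeled half-volume.
cell_km = 5
nx = 1440 // cell_km   # 288 cells across
ny = 1440 // cell_km   # 288 cells across
nz = 480 // cell_km    # 96 cells deep
cells = nx * ny * nz
print(cells)           # 7962624 -- the "eight million cells" quoted
```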
Michael Tchou wrote: > I am delighted by the appearance of this Internet news.froup, > and cannot resist the temptation to (attempt to) post the > first article. In keeping with my interest and specialties, > I invite everyone to read about "my" division's supercomputer. Um, OK, given your posting, maybe you or someone else can clue me in to a few things: 1) I was under the impression that the "fpga" in "comp.arch.fpga" stood for "Field Programmable Gate Array", such as the devices made by Xilinx, Actel, AMD, Lattice, Altera, etc. (I apologize for any companies I missed). Why did you post an article about a supercomputer here? Does this supercomputer use FPGAs? Or is there another meaning to FPGAs that I'm not aware of? Or did you just post this so that you could claim to be the "first poster"? 2) Sorry if I'm not aware of the latest abbreviations, but what's a "froup"? As in: "I am delighted by the appearance of this Internet news.froup". Thanks for any pointers. Chuck Corley chuckc@sr.hp.comArticle: 2
Ok finally the group I have been looking for. Has anybody made a whole fpga based machine [Besides the Splash from SRC]. Any idea about the status of the virtual machine from VCC. -Baiju Baiju E Jacob Dept of Electrical Engineering, Temple University. Ph: (215) 204-5742 | bjacob@isac.temple.edu | http://astro.temple.edu/~jakeArticle: 3
>From Chuck Corley: > 1) I was under the impression that the "fpga" in "comp.arch.fpga" > stood for "Field Programmable Gate Array", such as the devices > made by Xilinx, Actel, AMD, Lattice, Altera, etc. (I apologize > for any companies I missed). I also thought this group was for "Field Programmable Gate Arrays". I am starting the design on my first FPGA (Xilinx) this week and look forward to reading about and discussing the technology here. ------------------------------------------------------------------------ Larry Weissman | All opinions expressed are my own and Martin Marietta Corp. | in no way related to my company. All my Moorestown, NJ USA | designs are my company's and in no way lweissma@motown.ge.com | considered my own.Article: 4
Welcome to the new Xilinx users: Get ready to spend a lot more time doing place and route than you EVER anticipated if you are using XC4010 or larger. Best bet, skip the Xilinx PPR software and get NeoCad! Charles F. Shelor SHELOR ENGINEERING VHDL Training, Consulting, and models 3308 Hollow Creek Rd Arlington, TX 76017-5346 cfshelor@acm.org (817) 467-9367Article: 5
I have always worked on the software side writing drivers and I would like to get started learning about programming (??) Field Programmable Gate Arrays. The only background that I have in electronics is a year's worth of electric circuits (Ohm's law, Kirchhoff's law, ...) back in school, so can someone please tell me the prerequisites and point me to the relevant books that will tell me how to get started working with FPGAs (preferably Xilinx's). Thanks!!Article: 6
Like it sez, does discussion of the Intel iFX780 FPGA belong here as well? It was easy to work with except: We didn't like the PC programmer as we were a Sun-Unix house. Intel did give us a Sun-based tool to build the JEDEC files, but we still have to fight tooth & nail to get their PC-based (PC-only!) pengn program to convert JEDEC to bitmap files. Intel support wasn't in any way ready to support downloading the bitmap files to daisy chained iFX780's. It took 3 weeks to get the info we needed to program the right sequence to download #2 & #3 parts. Otherwise, they worked like a champ. We used them on a board w/ 2 TMS320C31's. One did frame buffer arbitration, another did address relocation for a poor man's hardware anti-alias scheme, & the 3rd did video stync & tyming. What fun. Cheers, Bob -- <> Bob "Bear" Geer <> <> <> bgeer@xmission.com <> Even paranoids <> <> Salt Lake City, <> have enemies, too! <> <> Utah, USA <> <>Article: 7
Engineering students, engineering firms etc., take note! I need someone to design an interface circuit for me using a Xilinx FPGA. (I would do it myself, but I may run short of time.) The circuit takes data from a parallel bus into shift registers (2 of them) and clocks it out to serial D/A converters, and the same in the other direction (A/D -> processor). Fairly easy as such, but quite a few registers, so there is real work in it. You will get an exact description of the circuit and the necessary tools (schematic capture, autorouter etc., so there should be no problems). A good special project for a student. The work should get under way more or less right away, so that it would be finished in a couple of months (OK; three, if I get the pinout earlier). There is probably less than a month of work in it, but who would stare at a screen all day in weather like this. The price is negotiable, but if you ask too much, I will do it myself. -- Juha Kuusama Sample Rate Systems Oy tel. + 358 31 3165 045 Kanslerinkatu 14 fax + 358 31 3165 046 33720 Tampere, Finland e-mail: Juha.Kuusama@mail.sci.fiArticle: 8
In article <CtM2Dx.7AB@srgenprp.sr.hp.com> chuckc@sr.hp.com writes: >Michael Tchou wrote: >> I am delighted by the appearance of this Internet news.froup, >> and cannot resist the temptation to (attempt to) post the >> first article. In keeping with my interest and specialties, >> I invite everyone to read about "my" division's supercomputer. > > Um, OK, given your posting, maybe you or someone else can clue >me in to a few things: > > 1) I was under the impression that the "fpga" in "comp.arch.fpga" > stood for "Field Programmable Gate Array", such as the devices > made by Xilinx, Actel, AMD, Lattice, Altera, etc. (I apologize > for any companies I missed). > > Why did you post an article about a supercomputer here? Does this > supercomputer use FPGAs? Or is there another meaning to FPGAs > that I'm not aware of? Or did you just post this so that you > could claim to be the "first poster"? Indeed, both the GP and the MP variants of the Paragon supercomputer use one Xilinx FPGA per multiprocessor node-assembly. The MP variant also uses several of Intel's top-of-the-line iFX-780 FPGA devices per node. (I forgive you for not mentioning Intel in the FPGA vendor list above...) :-{) In truth, my primary motivation was to make the first posting (here). I posted the Jupiter article because I thought that..."Dude! Look! I'm the first person to ever post here!"...might be a little lacking in intellectual substance. > 2) Sorry if I'm not aware of the latest abbreviations, but what's a > "froup"? As in: "I am delighted by the appearance of this > Internet news.froup". > > > Thanks for any pointers. > > Chuck Corley > chuckc@sr.hp.com The separated.by.dots syntax is an internet ascii colloquialism. (If one may apply the word "colloquial" to a global electronic community.) You may already have seen this syntax used... net.thought.police, net.politics, net.abuser, etcetera. The best definition/description I have seen (to date) maintains that adding the prefix "net." 
is synonymous with including "of/to the internet" as a suffix. Thus a "net.addict" would be an "addict to/of the internet". ("Smileys" are also a net.colloquialism...) The...word?..."froup" comes from the frequent misspelling of the (more conventional) word "group" within the context of net.news. (Usually due to a participant's hasty typing and disinclination to make use of on-line spell-checkers before transmitting an article.) My own interpretation of "froup" is that it refers singularly and generically to individual internet (net.news) special interest news groups. I believe that it would be correct to refer to "comp.arch.fpga" as the "c.a.f froup" (or even as the "caf froup"). I have yet to see an explicit definition for "froup"...perhaps I should write one? (Or have I already done so?) - michaelt/M6 ----------------------- ----------------------- ----------------------- -- *** <disclaimer> ** Michael R. Tchou ** <disclaimer> *** -- -- My inventions, produce, and the benefits thereof, are my -- -- employer's property. My opinions are my own, and _only_ -- -- my own. All in all, a reasonably equitable arrangement. -- ----------------------- ----------------------- -----------------------Article: 9
In article <31768v$dva@access1.digex.net> nklein@access1.digex.net (Norman Klein) writes: > >I have always worked on the software side writing drivers and >I would like get started learning about programming (??) Field >Programmable Gate Arrays. The only background that I have in >electronics is a year's worth of electric circuits (Ohm's law, >Kirchoff's law, ...) back in school, so can someone please tell >me the prerequisites and point me to the revelant books that will >tell me how to get started working with FPGAs (preferably Xilinx's). > >Thanks!! >direction of >interfaced to h Hello to one and all. This sounds like it could be a good group. In response to the above: first you will need a basic understanding of at least TTL logic. Then I would suggest you start with the Xilinx data book. It is available from almost any Xilinx distributor. Now my question. I am using Viewlogic schematic capture, and the new XACT 5.0. I have designed a group of serial-to-parallel shift registers (8) that can be read/written from the EISA bus. The serial shifting clock runs at 40 MHz and the fastest parallel read is roughly about 8 MHz. I am using the 4000-series parts. My question is this. Has anyone used the TIMESPEC attributes and symbols with the 4K libraries in the past who would be able to answer a few questions for me? Specifically, how do I specify the 40 MHz serial shifting and the 8 MHz parallel read/write, using these symbols and attributes? Thanks in advance Chuck McGinley mcginley@ll.mit.edu p.s. I also have a call into customer support. p.p.s. This is my first design over 20 MHz, and I have never had to squeeze this kind of performance out of the devices.Article: 10
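One detail worth noting on the question above: TIMESPEC constraints are expressed as maximum delays, not frequencies, so the first step is converting the two clock rates to periods — 40 MHz is a 25 ns period and 8 MHz is 125 ns (in XACT-era syntax the constraint then looks something like `TIMESPEC TS01=FROM:FFS:TO:FFS:25NS;`, though the exact form should be checked against the libraries guide). The arithmetic, sketched:

```python
def period_ns(freq_mhz):
    """Period corresponding to a clock rate, i.e. the delay limit a
    TIMESPEC would state for flop-to-flop paths on that clock."""
    return 1000.0 / freq_mhz

print(period_ns(40))  # 25.0 ns for the 40 MHz serial shift clock
print(period_ns(8))   # 125.0 ns for the 8 MHz parallel read/write
```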
bgeer (bgeer@xmission.com) wrote: : Like it sez, does discussion of the Intel iFX780 FPGA belong here as : well? : It was easy to work with except: : We didn't like the PC programmer as we were a Sun-Unix house. : Intel did give us a Sun-based tool to build the JEDEC files, but we : still have to fight tooth & nail to get their PC-based (PC-only!) : pengn program to convert JEDEC to bitmap files. : Intel support wasn't in any way ready to support downloading : the bitmap files to daisy chained iFX780's. It took 3 weeks to get : the info we needed to program the right sequence to download #2 & #3 : parts. : Otherwise, they worked like a champ. : We used them on a board w/ 2 TMS320C31's. One did frame : buffer arbitration, another did address relocation for a poor man's : hardware anti-alias scheme, & the 3rd did video stync & tyming. What : fun. : Cheers, Bob : -- : <> Bob "Bear" Geer <> <> : <> bgeer@xmission.com <> Even paranoids <> : <> Salt Lake City, <> have enemies, too! <> : <> Utah, USA <> <> Bob and others, I'm just finishing up a project which uses three FX780s. I had very little trouble with the tools, but when I did run into the occasional bottleneck, I found their support (via internet Email) to be very rapid and helpful. I even got a free pen out of the deal for finding an authentic bug! I also used the tools to generate bitmaps for loading our three parts in daisy chain fashion; it worked with no hassle, first time. We create one file for each part, link them into our processor bootrom, and it loads the three parts at power-on time. Since the bootrom is flash memory, and it can reprogram itself, this makes design changes very easy. (Dos platform for all Intel tools) I did, however, find a very serious problem with the part itself, which they don't seem willing to talk about. I was originally planning to run the system at 80MHz, and found that the parts, under certain conditions, just plain don't go that fast. 
Apparently the internal feedback drivers are too wimpy, and if a signal gets fed back to too many CFBs, it doesn't meet spec. By bringing the lazy signal out to an output pin via an unused output, I could make a direct Tco2 measurement. The spec is 16 ns; I was measuring 17.2 ns at room temperature at one point. Before I used the workaround (see below), I even saw failures at 64 MHz. If I can believe Intel, I am the first person to report this behavior, and they were not aware previously that there might be a problem. They have NOT been responsive to telling me how they intend to rectify this shortcoming. For a workaround, feedback can be taken from the pin (PINFBK). Amazingly, this is faster than using internal feedback. The only problem then is in getting feedback from buried nodes. I have my parts FULL, and with the above exception, am very happy with them. =========================================================================== Rich Wilson Internet: richw@lsid.HP.COM Hewlett Packard Company Packet: N7WWU@N0ARY Lake Stevens Instrument Division Phone: 206-335-2245 This message is umop apisdn. (Thanks, Don)Article: 11
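The numbers above are easy to put side by side. Assuming the lazy feedback path has to settle within a single clock period (an assumption, since the post does not say how many cycles the path spans), neither the 16 ns Tco2 spec nor the measured 17.2 ns fits at the rates tried:

```python
# Compare clock periods against the feedback path delay reported above.
SPEC_NS = 16.0      # datasheet Tco2
MEASURED_NS = 17.2  # room-temperature measurement from the post

for mhz in (64, 80):
    period = 1000.0 / mhz
    print(f"{mhz} MHz: period {period:.3f} ns, "
          f"slack vs spec {period - SPEC_NS:+.3f} ns, "
          f"slack vs measured {period - MEASURED_NS:+.3f} ns")
# 64 MHz gives a 15.625 ns period and 80 MHz only 12.5 ns: negative
# slack either way if the path is single-cycle, consistent with the
# failures seen even below 80 MHz.
```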
Hi. I would like to have any references, particularly to magazine articles, about building programmers (EPROM, etc. and PAL/GAL, etc.) Also, I have heard of an FPGA from Intel (FLEXLogic, I believe) which is inexpensive to develop. There is a book available with a prototyping board for about $100. It apparently comes with an ordering form for software, but there is no mention of software price. Could somebody confirm this or add information: I had heard of Intel's FPGA's before, and I heard that they distribute, free of charge, plans to build a programmer and the software to run it. Anybody? Thanks in advance, Ken Geis kgeis@ucsee.EECS.berkeley.eduArticle: 12
In article <316qf6INN5o2@sun004.cpdsc.com>, Charles Shelor-Consultant <cshelor@cpdsc.com> wrote: >Get ready to spend a lot more time doing place and route than you >EVER anticipated if you are using XC4010 or larger. When else would I have time to read news??? \\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\////////////////////////////////////// // Brian Schott \/ email: schott@super.org \\ \\ Supercomputing Research Center \/ phone: (301) 805-7322 // // 17100 Science Drive /\ quote: WAIT UNTIL clock'event; \\ \\ Bowie, Maryland 20715 /\ clock <= snooze; // /////////////////////////////////////\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\Article: 13
In article <316blr$dnt@cronkite.ocis.temple.edu> jake@astro.ocis.temple.edu (Baiju Jacob) writes: >Ok finally the group I have been looking for. >Has anybody made a whole fpga based machine [Besides the Splash from SRC]. >Any idea about the status of the virtual machine from VCC. > >-Baiju >Baiju E Jacob Dept of Electrical Engineering, Temple University. >Ph: (215) 204-5742 | bjacob@isac.temple.edu | http://astro.temple.edu/~jake I had a similar question. We at school are building a new processor (very small: 16 bit data/address, ~50 instructions, register-register architecture), and we would like to implement it on FPGAs. Has anyone used Mentor Graphics (or some similar tools) to synthesize VHDL code / generic cell logic design to FPGA tools? How easy was it? How easy was it to debug physical timing errors that weren't detected by the simulations? Any other input would be great. Thanks! -EArticle: 14
FPGA '95: Call for Papers 1995 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays Santa Cruz, California February 12-14, 1995 This symposium has evolved from two previous workshops, FPGA '92 and FPGA '94. As Field-Programmable Gate Arrays become more essential to the design of digital systems, there is increased desire to improve their performance, density and automated design. This symposium, sponsored by ACM/SIGDA, seeks contributions in, but not limited to, the following areas: + FPGA Architecture: logic & routing, memory, I/O, new commercial architectures. + Interactions: between CAD, architecture, applications, and programming technology. + Applications: novel uses of FPGAs + Process Technology Issues Related to FPDs + CAD for FPGAs: Logic optimization, technology mapping, placement, routing + Field-Programmable Systems: emulation and computation, partitioning across chips + Field-Programmable Interconnect Chips/Devs + Field-Programmable Analog Arrays Authors should submit 20 copies of their work (maximum 10 pages, minimum point size 10) by October 3, 1994. Notification of acceptance will be sent by November 20, 1994. A proceedings of accepted papers will be published (which is different from the publication policy of the previous related workshop). Final papers will be limited in length to six pages. Submissions should be sent to: Jonathan Rose FPGA '95 Department of Electrical and Computer Engineering, 10 King's College Road, Toronto, Ontario Canada M5S 1A4 email: jayar@eecg.toronto.edu, phone: (416) 978-6992, fax: (416) 971-2286 General Chair: Pak Chan, UC Santa Cruz Program Chair: Jonathan Rose, University of Toronto. Program Committee (so far): Duncan Buell, SRC Pak Chan, UCSC Jason Cong, UCLA Ewald Detjens, Exemplar Carl Ebeling, U. Washington Frederic Furtek, Atmel Dwight Hill, Synopsys Sinan Kaptanoglu, Actel John McCollum, Actel Jonathan Rose, U. Toronto Richard Rudell, Synopsys Rob Rutenbar, CMU Takayasu Sakurai, Toshiba Martine Schlag, UCSC Steve Trimberger, XilinxArticle: 15
jake@astro.ocis.temple.edu (Baiju Jacob) writes: >Ok finally the group I have been looking for. >Has anybody made a whole fpga based machine [Besides the Splash from SRC]. >Any idea about the status of the virtual machine from VCC. >-Baiju >Baiju E Jacob Dept of Electrical Engineering, Temple University. >Ph: (215) 204-5742 | bjacob@isac.temple.edu | http://astro.temple.edu/~jake The SPLASH is pretty flaky from what I understand. I'm a senior at Virginia Tech and we have the thing. It only works sometimes, who knows why. Anyway, is it really all that intelligent to rely on FPGA's for an entire computer? I don't know much about FPGA's so please don't flame me. Anyway, the SPLASH is almost completely XILINX chips, and lots of them. It's in Dr. Armstrong's advanced computer design research lab, but somebody else, Dr. Athas, uses it for visualization and other things. oops, Dr. Athas is supposed to be Dr. Anthas. just in case anybody is interested in SPLASH. ChriSArticle: 16
Hello, I am working with a few of my friends to develop a parallel processing computer. In order to build the bus interface, we were wondering what would be more appropriate: FPGA's, PAL's, or EPLD's? I hear that the Xilinx FPGA's are hard to work with because of odd propagation times. I would like to use the iFX7(whatever the high-io one is). It has precise delays (so I'm told). Thank you. PS. I also would like to look for a group pertaining to computer architecture and p.p., is there such a group?Article: 17
FYI, Altera has purchased Intel's entire line of PLDs, from 22V10s to FLEXlogic, for around $50 million. So your iFX780 will belong to Altera by next quarter. I don't know what Altera plans to do with this Intel device... Fortunately for me, I use Xilinx (I used Actel and TI a few years ago). Good luck, Robert --- [DISCLAIMER: In the event that I am captured or killed, Alcatel Network Systems will destroy my files and disavow all knowledge of my existence.] ***************************************************************************** \ Robert F. Benningfield Jr. {benningf@aur.alcatel.com} / / Member Technical Staff, R&D Core Hardware Design Engineering \ < Alcatel Network Systems, 2912 Wake Forest Road, Raleigh, > \ North Carolina 27609, USA {NCSU Alumnus: MSEE '90, BSEE '89} / / Phone: 919/850-5569 (work) or 919/851-5562 (play), Fax: (919) 850-6590 \ *****************************************************************************Article: 18
cshelor@cpdsc.com writes >Get ready to spend a lot more time doing place and route than you >EVER anticipated if you are using XC4010 or larger. > >Best bet, skip the Xilinx PPR software and get NeoCad! I don't suppose you could be a bit more specific in this case? Have you got some side-by-side examples of the same data being submitted to PPR and to the NeoCAD tools? We are still using the 3K stuff principally, and the APR tool, so I am not willing to comment on the PPR stuff, but in our comparisons, from 2018 devices right through to 3195, there are some patterns. Firstly, the NeoCAD tools are a lot faster than the Xilinx factory tools, by a factor of 3-4 times, particularly on the larger designs. This is based on the APR V3.30 which was in the last pre-XACT5.0 release. (Haven't tried with the APR in XACT 5.0) The NeoCAD tools I've tried are V4.1. However, in defence of APR, it seems more flexible, allowing us to tailor the placement and routing in ways that the NeoCAD isolates the user from. For that matter, APR also seems to have that all over PPR too. Too bad it's being put on the shelf. The trend seems to be to isolate the user from any understanding of what's happening. We much prefer the use of intermediate files in ASCII, rather than hiding things in proprietary binary files. Nobody is perfect at error messages, and the more information we have to go on when the going gets tough, the more likely we are to get out of the situation. Xilinx seems to be moving away from ASCII outputs, and NeoCAD has never been that open, as far as I know. However, that's another issue altogether. Arnim Littek. arnim@digitech.co.nz -- arnim@actrix.gen.nzArticle: 19
In article <CtnpnA.9JJ@SSD.intel.com> michaelt@ssd.intel.com (Michael Tchou) writes: >use one Xilinx FPGA per multiprocessor node-assembly. The MP variant >also uses several of Intel's top-of-the-line iFX-780 FPGA devices per >node. (I forgive you for not mentioning Intel in the FPGA vendor list >above...) :-{) I dispute the description of the iFX-780 as an FPGA. It is just a big PAL. One of the advantages of FPGAs is that each macrocell has its own set of inputs which do not have to be shared with other macrocells. This is important when using one-hot encoded state machines as otherwise the increase in the number of bits in the state vector often exceeds the number of inputs available to a logic block. If you don't have to do Place and Route, it's not an FPGA... -- Good bye, Mr. Roberti. Thanks for playing.Article: 20
in article <1994Jul28.223308.29847@adobe.com> pngai@mv.us.adobe.com (Phil Ngai) scribbles in crayon: >In article <CtnpnA.9JJ@SSD.intel.com> michaelt@ssd.intel.com (Michael Tchou) writes: >>use one Xilinx FPGA per multiprocessor node-assembly. The MP variant >>also uses several of Intel's top-of-the-line iFX-780 FPGA devices per >>node. (I forgive you for not mentioning Intel in the FPGA vendor list >>above...) :-{) > >I dispute the description of the iFX-780 as an FPGA. It is just a big >PAL. One of the advantages of FPGAs is that each macrocell has its own >set of inputs which do not have to be shared with other macrocells. >This is important when using one-hot encoded state machines as >otherwise the increase in the number of bits in the state vector often >exceeds the number of inputs available to a logic block. > >If you don't have to do Place and Route, it's not an FPGA... > > >-- > Good bye, Mr. Roberti. Thanks for playing. you bring up a valid point, however since there are no current newsgroups to address cplds, eplds, and pals, i suggest this group encompass the whole range rather than attempt to resplinter into smaller subgroups each with an intrinsically overlapping area of discussion and a relatively puny readership. why not emulate the parts themselves and let the discussions flow in a continuum? as to your point about fpga's with their own independent set of inputs; i dont understand, how does this affect the ability to do one-hot encoding? seems like i should be able to do that in a 22v10, no? d -- this is a test of the emergency broadcasting system. had this been a real emergency, you would have been instructed where to send your IRS tax forms...Article: 21
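To make the input-count argument concrete: one-hot encoding trades a wide state vector for shallow per-bit logic. In a 22V10, every input and feedback reaches every product term, so the binding constraint there is product terms per macrocell rather than inputs; in block-structured devices with a shared, limited input set per block, it is the wider one-hot vector that bites. A small sketch of the widths involved (the per-block input limit below is an illustrative assumption, not any particular device's figure):

```python
import math

def state_vector_width(n_states, encoding):
    """State-register width for an n-state FSM under each encoding."""
    if encoding == "binary":
        return math.ceil(math.log2(n_states))  # ceil(log2 N) flip-flops
    if encoding == "onehot":
        return n_states                        # one flip-flop per state
    raise ValueError(encoding)

BLOCK_INPUTS = 18  # hypothetical per-block input limit
for n in (8, 16, 32):
    b = state_vector_width(n, "binary")
    o = state_vector_width(n, "onehot")
    note = " -- exceeds block inputs" if o > BLOCK_INPUTS else ""
    print(f"{n} states: binary {b} bits, one-hot {o} bits{note}")
```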
Not counting the silly (for this froup :) thread on supercomputers, I thought I would kick the ball off by asking for comments on software support for FPGA's. I have examined the data sheets for the Xilinx 3000 series chips, and I think I could do some really |<oo1 stuff with them, but I am seriously put off by the Xilinx software. I don't want to afford it, learn it, and be locked in to it. I will only consider working with hardware where I have the choice of what software to use. Can anybody tell me (and the rest of the group) if Xilinx or any other vendor of in-place reprogrammable logic (CMOS static cell or Flash storage) has: JEDEC standardized and documented "fuse" maps support from mainstream logic developers (like ABEL) freeware code fragments for downloading fuse maps I have read snippets in trade magazines about Altera's FLEX line, Intel's iFX line, and Cypress CY7C37x chips. With the right attitude from the manufacturer, these all sound like they could satisfy my wants. Thanks in advance. - Larry Doolittle doolittle@cebaf.govArticle: 22
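On the "documented fuse maps" point: JEDEC files (the JESD3 format) are plain ASCII with `*`-terminated fields — `QF` declares the fuse count, `F` a default fuse state, and `L` fields list fuse values starting at an address. A minimal reader is only a few lines. This is a sketch covering just those fields, not a full JESD3 implementation (no checksum handling, for instance), and the sample string is made up for illustration:

```python
def parse_jedec(text):
    """Minimal JEDEC (JESD3) fuse-map reader: returns a list of fuse values."""
    body = text.strip("\x02\x03 \r\n")      # drop optional STX/ETX framing
    fuses, default = [], 0
    for field in (f.strip() for f in body.split("*")):
        if field.startswith("QF"):          # QFnnnn: total fuse count
            fuses = [None] * int(field[2:])
        elif field.startswith("F") and field[1:].isdigit():
            default = int(field[1:])        # default state for unlisted fuses
        elif field.startswith("L"):         # Laddr bits...: fuse list
            addr_s, _, bits = field[1:].partition(" ")
            for i, b in enumerate(bits.replace(" ", "").replace("\n", "")):
                fuses[int(addr_s) + i] = int(b)
    return [default if f is None else f for f in fuses]

sample = "\x02QF8*F0*L0000 1011*\x03"       # hypothetical 8-fuse device
print(parse_jedec(sample))                  # [1, 0, 1, 1, 0, 0, 0, 0]
```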
Firstly, congratulations on the new newsgroup, well overdue! Secondly, might as well make use of it and see if anyone has info on the following matter: Our research involves using FPGAs as an architectural building block in constructing a scalable reconfigurable image processing co-processor. Initial work carried out involved implementing simple image processing operations using FPGAs (used Algotronix CHS2x4 board,now by Xilinx). The controlling of input and output from onboard memory to/from the FPGA was carried out by a general purpose on board controller, under supervision from the host What we would like, is to be able to have a reconfigurable controller that would carry out a specific task defined by the image processing operation required, and would carry out its operations without any interaction from the host. This controller itself would be a FPGA as it will be reconfigurable. This dynamic control FPGA would be required to generate address sequences, bus signals as well as clock pulses to drive the other FPGAs. Does anyone have any info about using FPGAs are contr ollers or any other projects in which control of this type was required. If so what devices were used. Don hesitate to mail me, if anyone needs more info or you have anything that may help. Thanks in advance ! Paul. -------------------------------------------------------------------- Paul Donachy pdonachy@elegabalus.cs.qub.ac.uk Dept.Computer Science, The Queens University of Belfast (QUB), Belfast, BT7 1NN, Northern Ireland, UK +44 232 245133 (Ext 3147) -------------------------------------------------------------------- c task defined by the image processing operation required, and would carry out its operations without any interaction from the host. This controller itself would be a FPGA as it will be reconfigurable. This dynamic control FPGA would be required to generate address sequences, bus signals as well as clock pulses to drive the other FPGAs. 
Article: 23
in article <Cto8vK.FzJ@actrix.gen.nz> arnim@actrix.gen.nz (Arnim Littek) scribbles in crayon:
>cshelor@cpdsc.com writes
>
>However, in defence of APR, it seems more flexible, allowing
>us to tailor the placement and routing in ways that the NeoCAD
>isolates the user from. For that matter, APR also seems to have that
>all over PPR too. Too bad it's being put on the shelf.
>

as a study in contrast, i have the opposite opinion. my last project involved such obnoxious schedules that i really didn't have the time or the inclination to play with the xilinx XDE and tweak this node here, move that line there. my experience with xilinx says that if you want to get *any* kind of performance at all from the part, you're going to have to design the circuitry to take advantage of the architecture and spend an enormous amount of time on advance tile placement. be aware of these restrictions if you are on a limited schedule. there is no magic software from xilinx that just does it for you.

on the other hand, actel does isolate me from their guts and i appreciate them for it. i've never used their parts for high speed applications but the punch-and-go software gives me fairly quick turn on a design without the headaches of nitpicky details internal to the part. i think it *should* be that way. unless you are assigned just the one part to design, you don't have the time to play doctor for a sick part.

>The trend seems to be to isolate the user from any understanding of
>what's happening. We much prefer the use of intermediate files in
>ASCII, rather than hiding things in proprietary binary files. Nobody
>is perfect at error messages, and the more information we have to go
>on when the going gets tough, the more likely we are to get out of
>the situation. Xilinx seems to be moving away from ASCII outputs,
>and NeoCAD has never been that open, as far as I know.

however, this i agree with.
support has never been a handholding affair, so the vendors as much as the customer would appreciate at least some preliminary level of screening by the customer before yelling help. hard to do with strictly binary files.

d
--
this is a test of the emergency broadcasting system. had this been a real emergency, you would have been instructed where to send your IRS tax forms...

Article: 24
In article <316blr$dnt@cronkite.ocis.temple.edu>, Baiju Jacob <jake@astro.ocis.temple.edu> wrote:
>Ok finally the group I have been looking for.
>Has anybody made a whole fpga based machine [Besides the Splash from SRC].
>Any idea about the status of the virtual machine from VCC.
>
>-Baiju
>Baiju E Jacob   Dept of Electrical Engineering, Temple University.
>Ph: (215) 204-5742 | bjacob@isac.temple.edu | http://astro.temple.edu/~jake

I have compiled a list of such machines, both commercial and research. Below is a copy. If anyone out there knows of a machine I have missed, drop me a line.

-- Steve -- guccione@ccwf.cc.utexas.edu or guccione@mcc.com -- 7/29/94

----------------------< FPGA-based machines >------------------

List of FPGA-based Computing Machines
Maintained by: Steve Guccione  guccione@ccwf.cc.utexas.edu
Last Updated: 6/8/94

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

System name:  ACME (Adaptive Connectionist Model Emulator)
FPGA Devices: 14 Xilinx XC4010s and 6 Xilinx XC3195s
On-board RAM: 7 4K dual-ported global memories; each 4010 has a 4K dual-ported memory
External bus: SBus
Interconnect: Clos network between the 4010s and 3195s; the 3195s are used as programmable interconnect among the 4010s and with global memory
Contact:      Pak K. Chan, Computer Engineering Board, 225 Applied Sciences, University of California, Santa Cruz, CA 95064
Email:        pak@cse.ucsc.edu
Notes:        See FPGA'94 Berkeley ACM Workshop

System name:  Anyboard
FPGA Devices: 5 Xilinx 3042
On-board RAM: 384K
External bus: ISA
Interconnect: Fixed buses
Contact:      David E. Van den Bout, ECE Department, North Carolina State University, Raleigh, NC 27695-7911
Notes:

System name:  ArMen
FPGA Devices: 1 3090 per node; the MIMD/FPGA parallel machine is modular and extensible
On-board RAM: 1, 2 or 4 MB/node; each board has a T805 processor with 4 20 Mb/s links
External bus: SBus Archipel board with a T805; I/O can be handled directly within ArMen using additional transputer/peripheral boards
Interconnect: Processor interconnection is host system dependent. We have two 8-node computers configured as cubes. The 3090 south and west ports are assembled into a linear ring with a 36-bit data path. North ports are mapped into the processor address space, so that they receive address/data from their local processor. The FPGA 32-bit south port is free for extensions or input/output on each node.
Contact:      Bernard Pottier, Laboratoire d'Informatique de Brest, Universite de Bretagne Occidentale, UFR Sciences, BP 802, Brest, 29285, FRANCE
Email:        pottier@univ-brest.fr
Notes:        See the Napa FCCM 93 and 94 or Hawaii HICSS-94 proceedings. ArMen can easily be connected to any host having an interface board for transputers. There are projects for commercial distribution.

System name:  BORG
FPGA Devices: 2 Xilinx 3030s and 2 Xilinx 3042s
On-board RAM: 2K
External bus: PC-bus interface in 5th FPGA
Interconnect: 4 FPGAs in a Clos network; 2 FPGAs can be used as interconnect or logic
Contact:      Pak K. Chan, Computer Engineering Board, 225 Applied Sciences, University of California, Santa Cruz, CA 95064
Email:        pak@cse.ucsc.edu
Notes:        25 boards made by Xilinx and distributed for educational purposes. See FPGA'92 Berkeley ACM Workshop

System name:  BORG II
FPGA Devices: 2 Xilinx 4003As and 2 Xilinx 4002As
On-board RAM: 8K
External bus: PC-bus interface in 5th FPGA
Interconnect: 4 FPGAs in a Clos network; 2 FPGAs can be used as interconnect or logic
Contact:      Pak K. Chan, Computer Engineering Board, 225 Applied Sciences, University of California, Santa Cruz, CA 95064
Email:        pak@cse.ucsc.edu
Notes:        100 boards made by Xilinx and distributed for educational purposes. FPGAs are socketed and can be replaced by any 4000-series pc84 part.
System name:  Chameleon
FPGA Devices: 7 Algotronix CAL
External bus:
Interconnect: Fixed mesh
Contact:      Cuno Pfister, pfister@cs.inf.ethz.ch
Notes:        Experimental workstation from ETH Zurich with FPGAs closely coupled to a MIPS R3000 processor and innovative object-based design software written in the Oberon language.

System name:  CHAMP (Configurable Hardware Algorithm Mappable Processor)
FPGA Devices: 16 Xilinx 4013
On-board RAM: 512K dual-ported
External bus: VME
Interconnect: Crossbar (using FPGAs)
Contact:      Brian Box, Lockheed Sanders, NCA01-2244, P.O. Box 868, Nashua, NH 03060
Phone:        (603) 885-7487  FAX: (603) 885-9056
Email:        box@nhquax.sanders.lockheed.com
Notes:

System name:  CHS 2x4
FPGA Devices: 9 Algotronix CALs (1 controller + 8 compute)
On-board RAM: 2 MB SRAM
External bus: ISA
Interconnect: Fixed mesh
Contact:      Tom Kean, Xilinx Development Corp., 53 Mortonhall Gate, Edinburgh EH16 6TJ
Phone:        44 31 666 2600 ext 204  Fax: 44 31 666 0222
Email:        tomk@xilinx.com
Notes:        Based on work at the University of Edinburgh by Tom Kean and John Gray. Commercialized by Algotronix. Algotronix purchased by Xilinx in 1993. System cascadable to 2 boards. No longer commercially available.

System name:  CM-2X
FPGA Devices: 16 Xilinx 4005
On-board RAM: None
External bus: Fixed
Interconnect: None
Contact:      Craig Reese, IDA Supercomputing Research Center, 17100 Science Drive, Bowie, MD 20715
Phone:        (301) 805-7479  FAX: (301) 805-7602
Email:        cfreese@super.org
Notes:        A Connection Machine 2 SIMD machine from Thinking Machines Corporation with the Weitek WTL3164 floating point processors replaced by Xilinx 4005s. De-commissioned 1994.

System name:  DSP-56X
FPGA Devices: 1 Xilinx 3042
On-board RAM: 32KW-128KW (shared with DSP56000)
External bus: SBus & flexible
Interconnect: See notes below
Contact:      Michael C. Peck, President, Berkeley Camera Engineering, 3616 Skyline Drive, Hayward, CA 94542-2521
Phone:        510-889-6960  Fax: 510-889-7606
Email:        mikep@bce.com
Notes:        The DSP-56X is an SBus card that contains a 40 MHz Motorola 5600x-family DSP, a Xilinx 3042, and memory (32K words or 128K words, I believe). The 3042 sits directly on the 56000 bus and can be accessed from either the 56000 or the SBus. Some of the Xilinx pins are connected to the SBus back panel connector.

System name:  DTM-1
FPGA Devices: 16 DTM chips
On-board RAM: 32 8Kx16 SRAM banks on separate DTM ports, dual-ported to host
External bus: VME
Interconnect: Packed Exponential Connections (a multi-grid mesh network)
Contact:      Worth Kirkman, MITRE Corporation, 7525 Colshire Dr., McLean, VA 22102
Phone:        (703) 883-7082  FAX: (703) 883-6708
Email:        kirkman@mitre.org
Notes:        Built from custom DTM chips. These devices are custom RAM-configurable 64x64 arrays of expandable-gate cells, each pipelined for 2 boolean input evaluations in a 100 MHz cycle. 256 I/O pins, each arbitrary direction with echo cancellation and programmable parallel<=>serial sub-sampling; normally run at 1/4 the internal rate.

System name:  EVC (The Engineers Virtual Computer)
FPGA Devices: 1 Xilinx 4010
On-board RAM: Daughter board (see notes)
External bus: SBus
Interconnect: None
Contact:      Steve Casselman, Virtual Computer Corporation, Reseda, CA 91335
Phone:        (818) 342-8294  FAX: (818) 342-0240
Email:        sc@vcc.com
Notes:        The EVC is a single-FPGA transformable computing system. It has a daughter board area that has 96 user I/O from the 4010. A 2 Meg fast SRAM daughter board is available now.
System name:  G-800 System
FPGA Devices: Grouped in modules (maximum 16; see notes below)
On-board RAM: See notes below
External bus: VESA (VL) local bus; VESA Media Channel, a 100 MB/sec video bus; 80-pin connector supports 32-bit devices
Interconnect: Bus-oriented communication (Virtual Bus). All interconnect via Xilinx 4010s on the G-800; 2x32-bit buses and 2x16-bit buses on the G-800; configuration is programmable (Virtual Bus)
Contact:      bovarga@gigaops.com, 2374 Eunice St., Berkeley, CA 94708
Notes:        Modules have standard form factor and pinout. All bus lines to the G-800 connect via FPGAs. Up to 16 modules of all types on 1 G-800 board.
              Visual Computing Module (VCM) = 2 x XC4005, 4 MB DRAM and 80 MIPS DSPs
              PGA10MOD = 1 x XC4010, 2 MB DRAM, 128K SRAM
              PROTOMOD = same as PGA10MOD with pinouts extended to pads for wirewrap, logic analyzer, etc.
              16 x VCMs = 32 XC4005s, 2 XC4010s on G-800, 64 MB DRAM
              16 x PGA10MODs = 16 XC4010s, 2 XC4010s on G-800, 32 MB DRAM and 2 MB SRAM
              XPGAMOD = 4 x XC4010, 8 MB DRAM, 512K SRAM (available Oct)
              16 x XPGAMODs = 32 XC4010s, 2 XC4010s on G-800, 128 MB DRAM and 8 MB SRAM

System name:  GANGLION
FPGA Devices: 24 Xilinx 3090
On-board RAM: 24K PROM
External bus: VME / Datacube MAXbus
Interconnect: Fixed
Contact:      Charles Cox, IBM Research Division, Almaden Research Center, San Jose, CA 95120-6099
Notes:        Used exclusively for neural networks.

System name:  Marc-1
FPGA Devices: 25 Xilinx 4005 (18 processing + 5 interconnect + 2 control)
On-board RAM: 6 MB
External bus: SBus
Interconnect: 5 Xilinx 4005
Contact:      David M. Lewis, Department of Electrical Engineering, University of Toronto, Toronto, Canada
Email:        lewis@eecg.toronto.edu
Notes:        Marc-1 consists of two modules. Each module contains an instruction unit of 3 Xilinx 4005s, a datapath of 6 Xilinx 4005s, a 256K x 64 instruction memory, a 256K x 32 data memory and a Weitek 3364. These are connected by an interconnect module of 5 Xilinx 4005s. Two more Xilinx 4005s are used to interface to the Sun Sparc host.
System name:  nP (The Nano Processor)
FPGA Devices: 2 Xilinx 3090
On-board RAM: 64K SRAM / 1M DRAM
External bus: ISA
Interconnect: Fixed
Contact:      National Technology, Inc., 9500 South 500 West, Suite #104, Sandy, UT 84070
Phone:        (801) 561-0114  FAX: (801) 561-4702
Email:        wirthlin@gecko.ee.byu.edu
Notes:

System name:  PAM (Programmable Active Memories) (PeRLe-0)
FPGA Devices: 25 Xilinx 3020
On-board RAM: 0.5 MB
External bus: VME
Interconnect: Fixed mesh
Contact:      Patrice Bertin, Paris Research Laboratory, Digital Equipment Corporation, 85, avenue Victor Hugo, 92500 Rueil-Malmaison, France
Email:        bertin@prl.dec.com
Notes:        Replaced by the DEC PAM PeRLe-1.

System name:  PAM (Programmable Active Memories) (PeRLe-1)
FPGA Devices: 24 Xilinx 3090
On-board RAM: 4 MB SRAM
External bus: DEC TURBOchannel
Interconnect: Fixed mesh
Contact:      Patrice Bertin, Paris Research Laboratory, Digital Equipment Corporation, 85, avenue Victor Hugo, 92500 Rueil-Malmaison, France
Email:        bertin@prl.dec.com
Notes:        Set the record for RSA encryption in 1990.

System name:  PRISM (Processor Reconfiguration through Instruction Set Metamorphosis)
FPGA Devices: 4 Xilinx 3090
On-board RAM: None
External bus: 16 bit
Interconnect: None
Contact:      Mike Wazlowski or Harvey Silverman, Laboratory for Engineering Man/Machine Systems, Brown University, Providence, RI 02912
Email:        {mew,hfs}@lems.brown.edu
Notes:        Notable for its use of C as the description language for the programmable logic.

System name:  PRISM-II (Processor Reconfiguration through Instruction Set Metamorphosis)
FPGA Devices: 3 Xilinx 4010 per processing node
On-board RAM: 128K x 32 per 4010
External bus: 64-bit writes, 32-bit reads, on the processor bus (it's not external)
Interconnect: Inverted tree, or none; application selectable
Contact:      Mike Wazlowski or Harvey Silverman, Laboratory for Engineering Man/Machine Systems, Brown University, Providence, RI 02912
Email:        {mew,hfs}@lems.brown.edu
Notes:        Each PRISM-II board is a node in the Armstrong III loosely-coupled parallel processor. The host CPU is a 33 MHz AMD Am29050 RISC processor. There are 20 nodes, connected by a reconfigurable (of course) interconnection topology.

System name:  R16 and RISC4005
FPGA Devices: 1 Xilinx XC4005
On-board RAM: 64K words (16-bit words)
External bus: R16 bus, 16-bit addr, 16-bit data, synchronous at 20 MHz
Interconnect: Any
Contact:      Philip Freidin, Fliptronics, 468 S. Frances St., Sunnyvale, CA 94086
Phone:        (408) 737-8060, or at Xilinx (408) 879-5180
Email:        philip@xilinx.com
Notes:        A 16-bit RISC processor that requires 75% of an XC4005; 16 general registers, 4-stage pipeline, target speed 20 MHz. Can be integrated with peripherals on 1 FPGA, and the instruction set can be extended.

System name:  Rasa Board
FPGA Devices: 3 Xilinx 4010
On-board RAM: 320K SRAM
External bus: ISA
Interconnect: 2 Aptix FPICs
Contact:      Herman Schmit, ECE Department, Carnegie Mellon University, Pittsburgh, PA 15213
Phone:        (412) 268-2476
Notes:        Integrated with a behavioral synthesis tool which allows specification of the desired algorithm in behavioral Verilog or C.

System name:  RIPP (Reconfigurable Interconnect Peripheral Processor)
FPGA Devices: 8 Altera FLEX 81188
On-board RAM: 2 MB SRAM
External bus: ISA
Interconnect: Fixed buses / programmable interconnect (see notes)
Contact:      Nick Tredennick, Altera Corporation, 2610 Orchard Parkway, San Jose, CA 95134-2020
Phone:        (408) 894-7000
Email:        nickt@altera.com
Notes:        Up to 8 Altera FLEX 81188 parts, each of which may be replaced by an ICUBE IQ160 Field Programmable Interconnect Device (FPID). Devices are grouped into 4 pairs of 2 devices, each pair sharing an SRAM device. Designed by David E. Van den Bout of the Anyboard project.

System name:  SPARXIL
FPGA Devices: 3 Xilinx XC4010s
On-board RAM: 2 256Kx32-bit SRAMs for user data; 1 128Kx8-bit SRAM for on-board configuration cache
External bus: SBus
Interconnect: Fixed
Contact:      Andreas Koch, Institut f"ur theoretische Informatik, Abteilung Entwurf Integrierter Schaltungen, Gaussstr. 11, D-38106 Braunschweig, Germany
Email:        a.koch@tu-bs.de
Notes:        See FPL'93 Oxford workshop

System name:  SPACE (Scalable Parallel Architecture for Concurrency Experiments)
FPGA Devices: 16 Algotronix CAL
On-board RAM:
External bus: Custom
Interconnect: Fixed grid
Contact:      George Milne, HardLab, Department of Computer Science, University of Strathclyde, Glasgow G1 1XH, Scotland, UK
Notes:        Used for physics research.

System name:  Spyder
FPGA Devices: 5 Xilinx 4003, 2 Actel A1280
On-board RAM: 128K SRAM plus 2K fast registers
External bus: VME and Sun SBus
Interconnect: Fixed
Contact:      Christian Iseli, Logic Systems Laboratory, Swiss Federal Institute of Technology, CH-1015 Lausanne, Switzerland
Email:        chris@lslsun.epfl.ch
Notes:        A reconfigurable VLIW machine.

System name:  Spyder (version 2)
FPGA Devices: 3 Xilinx 4008 (upgradable to 4010), 2 Xilinx 4005, 1 Actel A1280 and 1 Actel A1225
On-board RAM: 128K SRAM plus 4K fast registers
External bus: VME
Interconnect: Fixed
Contact:      Christian Iseli, Logic Systems Laboratory, Swiss Federal Institute of Technology, CH-1015 Lausanne, Switzerland
Email:        chris@lslsun.epfl.ch
Notes:        A reconfigurable VLIW machine. A newer version of Spyder.

System name:  SPLASH
FPGA Devices: 32 Xilinx 3090
On-board RAM: 4 MB SRAM
External bus: VME
Interconnect: Linear array
Contact:      Jeffrey M. Arnold, IDA Supercomputing Research Center, 17100 Science Drive, Bowie, MD 20715
Phone:        (301) 805-7479  FAX: (301) 805-7602
Email:        jma@super.org
Notes:        Replaced by SPLASH 2.

System name:  SPLASH 2
FPGA Devices: 16 Xilinx 4010
On-board RAM: 8 MB
External bus: Sun SBus
Interconnect: Linear array plus crossbar
Contact:      Jeffrey M. Arnold, IDA Supercomputing Research Center, 17100 Science Drive, Bowie, MD 20715
Phone:        (301) 805-7479  FAX: (301) 805-7602
Email:        jma@super.org
Notes:

System name:  TbC-Pamette (PAM, Programmable Active Memories)
FPGA Devices: 1 to 4 Xilinx 40XX in PQ-208 package; currently supported configurations: 4010 + 4003H, or 4 x 4010
On-board RAM: Daughter board (see notes)
External bus: DEC TURBOchannel
Interconnect: Fixed mesh, 2 x 2 matrix
Contact:      Mark Shand, Paris Research Laboratory, Digital Equipment Corporation, 85, avenue Victor Hugo, 92500 Rueil-Malmaison, France
Email:        shand@prl.dec.com
Notes:        128 user I/O to daughter board. A synchronous RAM daughter board is under development. Pamette is targeted as a generic I/O adapter with local compute capability.

System name:  TM-1 (Transmogrifier 1)
FPGA Devices: 4 Xilinx 4010
On-board RAM: 4 32Kx9 SRAMs
External bus: Custom, to Sun workstation
Interconnect: Entirely programmable using an Aptix AX1024 FPIC
Contact:      Jonathan Rose, Dept. of Electrical and Computer Engineering, University of Toronto, 6 King's College Road, Toronto, Ontario, Canada M5S 1A1
Email:        jayar@eecg.toronto.edu
Notes:        Intended more for rapid prototyping of circuits, but can be used for computing.

System name:  The Virtual Computer (P-Series)
FPGA Devices: Up to 52 Xilinx 4013
On-board RAM: Up to 8 MB SRAM, 256K dual-ported SRAM
External bus: Bus independent; current SBus interface
Interconnect: Up to 24 ICUBE FPIDs
Contact:      Steve Casselman, Virtual Computer Corporation, Reseda, CA 91335
Phone:        (818) 342-8294  FAX: (818) 342-0240
Email:        sc@vcc.com
Notes:        The Virtual Computer P-Series consists of the P1, P2, P3 and P4. The P1 has 14 4013s, the P2 26 4013s, the P3 40 4013s and the P4 52 4013s.

System name:  Windchime
FPGA Devices: Actel 1020A (1-3 per processor)
On-board RAM:
External bus:
Interconnect: Mesh connected, wormhole routing
Contact:      Erik Brunvand, CS Dept., University of Utah, Salt Lake City, UT 84112
Phone:        (801) 581-4345  FAX: (801) 581-5843
Email:        elb@cs.utah.edu
Notes:        MIMD multiprocessor. Used for self-timed circuit experimentation.

System name:  X-12
FPGA Devices: 12 Xilinx 3195
On-board RAM: 384K SRAM (32K per FPGA)
External bus: ISA
Interconnect: Fixed common bus
Contact:      National Technology, Inc., 9500 South 500 West, Suite #104, Sandy, UT 84070
Phone:        (801) 561-0114  FAX: (801) 561-4702
Email:        wirthlin@gecko.ee.byu.edu
Notes: