I was looking into this also. We use the Byteblaster from Altera all the time. Xilinx does not support the Jam player and from the FAE I talked to, they will not. They are looking into a java based programmer instead. Do a search on Xilinx's home page for java. I guess they just can't play together ...even if it is for the customer! Vasant Ram wrote: > Hello! I've got the Xilinx FPGA/JTAG parallel port cable that comes as > part of the XC9500 demo board, and wanted to know if it was at all > possible to use this with Altera devices. If not, is there any freeware > schematic so one could make an Altera JTAG programmer? Is it possible to > program a JTAG device independent of software (MAXPLUS II vs. Foundation, > etc)? > > Also, can one actually use JTAG to apply stimulus to the EPLD/observe > signals? I've always been a bit confused about that. > > Thanks for the help. > Vasant > > remove nospam to e-mail.Article: 16451
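On the question of whether any PC parallel-port cable can drive a JTAG chain at all: the TAP protocol itself is defined by IEEE 1149.1 and is vendor-neutral, so the real differences between the Altera and Xilinx cables are the pin wiring and the host software. Below is a minimal bit-bang sketch in C, for illustration only; it assumes Linux-style raw port I/O on LPT1 at 0x378 and a made-up pin mapping (TCK on data bit 0, TMS on bit 1, TDI on bit 6, TDO read back on the BUSY status bit), so take the real mapping from the cable schematic before trusting any of it.

/* Minimal JTAG bit-bang sketch for a PC parallel port (Linux/x86 only;
 * needs root for raw port I/O).  The pin mapping below is an assumption
 * made for illustration -- TCK on data bit 0, TMS on bit 1, TDI on bit 6,
 * TDO read back on the BUSY status bit -- check it against the actual
 * cable schematic. */
#include <stdio.h>
#include <sys/io.h>

#define LPT_DATA   0x378            /* data register of LPT1              */
#define LPT_STATUS (LPT_DATA + 1)   /* status register (TDO comes back)   */

#define TCK 0x01
#define TMS 0x02
#define TDI 0x40

/* Drive one TCK cycle with the given TMS/TDI values and sample TDO. */
static int jtag_clock(int tms, int tdi)
{
    unsigned char out = (tms ? TMS : 0) | (tdi ? TDI : 0);

    outb(out, LPT_DATA);               /* TCK low, present TMS and TDI    */
    outb(out | TCK, LPT_DATA);         /* rising edge: TAP samples inputs */
    return (inb(LPT_STATUS) >> 7) & 1; /* BUSY bit; may be inverted       */
}

int main(void)
{
    int i;

    if (ioperm(LPT_DATA, 3, 1) < 0) {
        perror("ioperm");
        return 1;
    }

    /* Five TCKs with TMS high put any 1149.1 TAP into Test-Logic-Reset,
     * one more with TMS low moves it to Run-Test/Idle.  This much is
     * identical for Altera and Xilinx parts; the instruction and data
     * sequences shifted afterwards are what differ between vendors.   */
    for (i = 0; i < 5; i++)
        jtag_clock(1, 0);
    jtag_clock(0, 0);

    printf("TAP should now be in Run-Test/Idle\n");
    return 0;
}

The same bit-banging also answers the stimulus/observe question in principle: once you can clock the TAP, the SAMPLE and EXTEST instructions let you read and drive the device pins through the boundary-scan register, though doing that by hand quickly gets tedious.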
Bob Sefton <rsefton@home.com> writes: > just not much you can do about it). Blame the GUI problems on > Microsoft not Xilinx. There's not a Windows application in > existence that runs on all Windows machines. Then I wonder why FPGA tool vendors push Windows so much ? Do they get paid a commission from Microsoft ? Can their CEOs play golf with Bill once a month ? Zoltan -- +------------------------------------------------------------------+ | ** To reach me write to zoltan in the domain of bendor com au ** | +--------------------------------+---------------------------------+ | Zoltan Kocsi | I don't believe in miracles | | Bendor Research Pty. Ltd. | but I rely on them. | +--------------------------------+---------------------------------+Article: 16452
In comp.arch.fpga brian_n_miller@yahoo.com wrote: : rolandpj@bigfoot.com wrote: :> I like to to view the problem as an extension of HotSpot/JIT. :> ... Why not do the same thing, but right down to the hardware? : Reliability. If an FPGA fails to reconfigure itself properly, : then how to recover? The issue of how to recover from errors in a parallel computing system is not an insoluble one - provided you are prepared to make some performance sacrifices en route to the goal. JVN made some of the first tentative steps in this direction in his "Synthesising Reliable Organisms from Unreliable Components" paper. -- __________ |im |yler The Mandala Centre http://www.mandala.co.uk/ tt@cryogen.com Vice versa - poems about brothels.Article: 16453
In comp.arch.fpga Roland PJ Pipex Account <rolandpj@bigfoot.com> wrote: :>Roland Paterson-Jones <rpjones@hursley.ibm.com> wrote: [HotSpotting Java bytecode onto FPGAs] :>This would probably wind up being a rather poor way of exploiting the :>power of the available hardware ;-/ : In the same way that a pentium/<paste your least favourite cpu> is a poor : way of exploiting 16m(?) transistors ;-) Yes, in /exactly/ that sense ;-) :>Apart from its thread support, Java's pretty inherently serial. The :>thought of attempting to extract parallelism from such a serial stream of :>instructions usually makes me feel nauseous. :> : ...but this is exactly what any optimising compiler does - evaluating : dependencies is quite close to extracting parallelism - you just need to : schedule for an infinite-scalar machine, plus conditionals and both their : branches can be evaluated simultaneously, plus loops can be unrolled to the : limit of the hardware... Extracting parallelism from serial code always seems like a dreadful idea to me. Sometimes to get a parallel version of a serial algorithm, you have to completely rewrite it from scratch using a completely new approach. While it's quite possible to extract a certain degree of parallelism from serial code, only extremely advanced artificial intelligence techniques will be able to deal with complicated cases, and in general, no finite state machine can be relied upon to perform the task. : The HANDEL-C examples where the explicit 'par' keyword is used to : parallelise loops, didn't look too hard to auto-detect. I imagine any : detection in C code is going to be more difficult than that for Java : (due to C's dirty semantics). Inserting specific keywords into Java would be virtually impossible without Sun's co-operation. No doubt making a variant of Java that targeted FPGAs would be a more sane approach (performance-wise) than using vanilla Java (which obviously wasn't designed with such applications in mind). Even plain ol' Java would no doubt experience /some/ acceleration if implemented partly on FPGAs (though the ~=400% slower clock speed might currently mean that very little code benefits very much at the moment). : I'm not too interested in extracting 100% from an architecture (the answer : to this is handcoding to the lowest level). What I'm interested in is the : fastest way to run software that is easily written. I'm convinced that the : compiler needs to do the hard work, not the programmer. This becomes crucial : as the scale of a program grows. For /most/ applications having software that's easy to write is an important requirement. It's not my own personal priority, though; I want lots of (largely asynchronous) parallelism, and very high speed - and don't care about much else... :>Java's ability to exploit parallelism (aside from threading code) :>really isn't very good. [...] Writing a smart compiler to examine loops to :>see if this sort of thing is occurring seems to me to be a backwards :>approach: a /sensible/ concurrent language should allow this type of :>parallelism to be explicitly indicated, rather than expecting advanced AI :>techniques to extract the information about when it can and can't be done :>from the code. : Good point. Are you suggesting a HANDEL-C-like approach? Not /really/, except in so far as explicit indication of which code may be parallelised seems desirable to me. : I'd like to be convinced that a compiler is really unable to detect these : things. Do you have an example that would demonstrate this to me? 
A compiler /could/ detect lots of sorts of parallelism (in principle). In general the problem of determining correctly whether a loop is capable of being executed in a different order from the way specified by the original programmer (while maintaining the same logical operation) is a problem equivalent to Turing's halting problem. The problem /may/ be soluble in a number of important/common cases, however. : (on the other hand, what if you're wrong, and it's not parallelisable? That's OK. The negative consequence is that if a programmer declares that some code may be executed concurrently and it turns out that it needs to be executed in a particular sequence, then the code may perform in a non-deterministic manner. As code which performs non-deterministically is easy to write in Java anyway (e.g. allow program flow to depend on the random number generator, or the timings of some operations, or start up a couple of interacting threads), I don't see this as much of a problem. : Should the compiler verify all such parallelisable assertions? No - that would probably be impossible ;-) : This is likely to be as difficult as auto-detecting them in the first place. Indeed. : If this stuff is difficult for a compiler to detect, then it's sure to be : difficult for a human to decide for all but the smallest examples.) Well, that's not /necessarily/ so. Human programmers can do things like trying the code either way and picking whichever way produces the desired result (an option not available to the compiler). It may even be that the code is /not/ strictly parallelisable, but that the indeterminacies resulting from executing code sections out of the original sequence are tiny or irrelevant for some reason (this would be the case with a large section of one of my programs). A compiler might have a hard time figuring out such cases without explicit clues. In some respects, I'm sympathetic to the view that human programmers shouldn't have to bother with concurrency issues. OTOH, I don't see that it can be avoided in practice - parallelising serial algorithms is currently /much/ too difficult a problem to be left to machines. Designing an electronic circuit is too different a design problem from writing a serial program to expect that a good solution to the programming problem will easily translate into a good solution to the circuit design one. Intelligence is needed to do much of the translation. It's a domain where human programmers are currently much more adept than machines. -- __________ |im |yler The Mandala Centre http://www.mandala.co.uk/ tt@cryogen.com Chaste makes waste.Article: 16454
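To make both halves of this exchange concrete, here are three small C loops (hypothetical code, names and all, written purely for illustration): the first is the easy case a HANDEL-C 'par' or a dependence-analysing compiler can flatten into parallel hardware; the second is only parallelisable after being recognised and rebuilt as a reduction, i.e. a rewrite rather than a reordering; the third is the awkward case, where parallelisability depends on run-time data and cannot be decided from the code alone.

/* Hypothetical loops written for this discussion. */
#define N 1024

void independent(int *a, const int *b, const int *c)
{
    /* No iteration touches another iteration's data, so all N iterations
     * may legally be evaluated at once - e.g. as N adders on an FPGA.   */
    int i;
    for (i = 0; i < N; i++)
        a[i] = b[i] + c[i];
}

int carried(const int *b)
{
    /* Every iteration needs the previous sum.  As written it is serial;
     * a parallel version means recognising the reduction and rebuilding
     * it as an adder tree - a different structure, not a reordering.   */
    int i, sum = 0;
    for (i = 0; i < N; i++)
        sum += b[i];
    return sum;
}

void scatter(int *a, const int *idx, const int *v, int n)
{
    /* Parallelisable only if the idx[] values never repeat - a property
     * of the run-time data, not of the code, so no static analysis can
     * settle it; an explicit programmer assertion can.                 */
    int i;
    for (i = 0; i < n; i++)
        a[idx[i]] = v[i];
}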
This is a common problem with high clock rates in larger parts. Distribute the CE in a pipelined tree structure, with each branch node on the tree driving only geographically close CLBs. Alternatively, you can replicate the CE generation logic several times and place each copy so that it drives only geographically close CLBs. Put any gating logic on the CE function ahead of the CE flip-flops (watch out for the Xilinx macros in that regard, many include an OR gate on the CE input to make sure CE gets applied when another macro input is applied - Load on the CC16CLE for instance). If possible, use the CE input to the CLB. If the CE has to use the LUT, then make sure the CE signal comes in at the last level of the logic, so that it does not have multiple CLB delays. > > Problem #1 - Clock enable > > The biggest problem I ran into was routing getting the clock enable to > > the logic to slow it down to 40 Mhz. In order for the clock enable to > > work properly, it has to be there in 1 80Mhz cycle = 12.5 ns. I found > > that using a bufg or bufgls did not work for especially for the iobs. > > The solution for me was to route the problem locally through another > > flop by reclocking the clock enable. > -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 16455
Jim, My experience with intelliflow is about the same. My initial impression of it was "What the Hell do I need that for, it just gets in the way of what I want to do". I can see perhaps where it might be useful for someone who didn't already know how to drive the tools separately and who didn't mind the learning curve for a "helper" application. I personally put it in about the same category as the dancing paper-clip in microsoft products, only it isn't quite as annoying. Jim Kipps wrote: > Bill- > > Thanks for the feedback. I've had my own struggles with early releases of > IntelliFlow and still have some, but the general useability of IntelliFlow > has > improved significantly since its first release. The current release in > Workview Office 7.53 is much improved with a new project wizard. It's > worth another look. May I ask which version you were using? > > Regards, > -Jim > > Bill Kury wrote: > > > In article <3741869F.9CFD2583@viewlogic.com>, > > fpga@viewlogic.com wrote: > > > > > > IntelliFlow Users, > > > > > > If you would like to contribute to a case study of Viewlogic's > > > IntelliFlow, please respond to this email. I'd like to know about > > > your experiences, good and bad, with this tool and what impact > > > it has had with your design methodology. I'll post a summary > > > of the results to comp.arch.fpga at the conclusion of the study. > > > > > > Thanks, > > > -Jim > > > > > Well..this is a thorny subject. I have the viewlogic tools for doing > > fpga's including viewsim,intelliflow, and Fpga express. Viewsim is > > excellent, it is very fast and allows you to see simulation in progress. > > Fpga express is also quite good. I have never used symplicity or > > exemplar so I can't compare to those two but, I have been able to get > > fpga express to do everything that I need. Intelliflow is another > > question entirely :( I am sorry but I have to be honest. I found it > > terrible! I had a tough time just trying to get it to work, the support > > was not very good, and, probably it's worst feature is that the > > documentation is about at good as a bump on log. I struggled with it > > for a couple of weeks, then I ended up going on a vhdl course where I > > was able to use the aldec tool's. There was no comparison. I bought > > the aldec tool's and use them to this day. > > I have to premise this conversation with a couple of things. First, > > when I was using the viewlogic tools, I was in the learning stage of > > vhdl. When you are at this stage, the viewlogic tools immediately show > > their flaws. If I remember correctly there is a compiler called vantage > > used in viewlogic. This compiler would give me the most wonderful > > messages if I had something wrong in my code. How that message > > correlated to the problem is beyond my scope of reasoning.( Maybe the > > problem is that I don't have enough scope :))) > > To be intelliflow specific, when a new vhdl file is added that would not > > analyze, it would come back with "File would not analayze". This is > > good except for the fact that some idea to what the problem is would be > > nice. > > I know that I have kind of gone off in various directions but to > > summarize, intelliflow is best rated poor. It does not let you spend > > your time sorting out design issues, instead, you spend your time > > sorting out tool problems. > > > > Sorry :( > > > > Bill > > > -- > > > -------------------------------------------------------- > > > James R. 
Kipps FPGA Marketing Manager > > > jkipps@viewlogic.com Phone: (508) 303-5246 > > > -------------------------------------------------------- > > > > > > > > > > --== Sent via Deja.com http://www.deja.com/ ==-- > > ---Share what you know. Learn what you don't.--- > > -- > -------------------------------------------------------- > James R. Kipps FPGA Marketing Manager > jkipps@viewlogic.com Phone: (508) 303-5246 > -------------------------------------------------------- -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 16456
I don't think so, Tim. The only way to create a schmitt trigger function in a PLD without schmitt trigger inputs is to use up an additional pin as an output to provide feedback to a resistive divider. The other option you have is to add a schmitt trigger chip between the input and your PLD. > I created a Schmitt trigger once on a Lattice CPLD, using schematic entry. > You can do it with a 2-input NAND and an inverter. > -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 16457
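For anyone actually sizing that divider, a quick derivation of the resulting hysteresis - assuming a series resistor R_s from the signal source to the input pin, a feedback resistor R_f from the spare output pin back to the same input pin, the PLD programmed so that the output simply follows the input (positive feedback), a rail-to-rail output swing of 0 to Vcc, and a nominal input threshold V_T; all of these are assumptions to be checked against the real device:

V_{pin} = V_{in}\,\frac{R_f}{R_s+R_f} + V_{out}\,\frac{R_s}{R_s+R_f}

Setting V_{pin} = V_T first with the output low and then with it high gives the two switching points and the hysteresis:

V_{IH} = V_T\,\frac{R_s+R_f}{R_f}, \qquad V_{IL} = \frac{V_T\,(R_s+R_f) - V_{cc}\,R_s}{R_f}, \qquad \Delta V = V_{IH} - V_{IL} = V_{cc}\,\frac{R_s}{R_f}

For example, with Vcc = 5 V, V_T = 2.5 V, R_s = 4.7k and R_f = 47k the thresholds work out to about 2.75 V and 2.25 V, i.e. roughly 0.5 V of hysteresis.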
Austin, Unless it is a board for something going into production, I'd rather have the PCI in a separate part so that I can get use of the entire virtex part, and so I don't have to worry about how the PCI interface is going to affect the floorplanning for my stuff. I only use these type of boards in one-off designs and as prototyping platforms. For the most part, they are way too expensive to use in a product that is going to be produced and sold in any kind of quantity. Austin Franklin wrote: > Why don't you guys have the PCI interface in the Xilinx? > > Malachy Devlin <m.devlin@nallatech.com> wrote in article > <B4000FE0503ED211864100104B4C66C30972A9@CONTEXT>... > > Nallatech has been supplying a Virtex development platform since > > February. This is a 32bit 33Mhz PCI card with a 300K - 800K Virtex > > device and 2 individual banks of 2MBytes 100Mhz ZBT SRAM. > > The PCI card, called the Ballynuey, handles all the PCI issues and comes > > with a pre-configured Spartan that handles the PCI interfacing and data > > buffering between the Virtex and the PC application. PCI drivers, Virtex > > debug tool, FPGA configuration and Application API are included with the > > card. > > Additionally the card includes 4 DIME modules for expansion and custom > > I/O. Currently there are modules for Image Capture and Display, a Dual > > XCV1000 module (yes over 2Million gates!) and various other I/O modules > > (e.g. LVDS) with more in the pipeline. The modules can provide over > > 2Gbytes/sec bandwidth and has over 200 I/O connections. > > > > Configuration of the on-board Virtex is configured dynamically over the > > PCI using the tools provided with the card (and is much faster than > > Xchecker!) If additional DIME modules are placed on the card their FPGAs > > are also individually configurable via PCI. > > > > > > Check out the web site for more details and new developments soon to be > > announced at http://www.nallatech.com/ > > > > > > Malachy Devlin > > Nallatech Ltd > > m.devlin@nallatech.com > > > > > > > -----Original Message----- > > > From: alfred fuchs [mailto:alfred.fuchs@siemens.at] > > > Posted At: 12 May 1999 19:09 > > > Posted To: fpga > > > Conversation: Virtex based PCI cards > > > Subject: Re: Virtex based PCI cards > > > > > > > > > I've just finished the design of a CompactPCI board (6U) with > > > one Virtex1000 > > > and two synchronous SRAM-modules (2Mx72). It mainly uses > > > rear-panel-I/O > > > (more than 100 signals) and is therefore open for various > > > applications. The > > > PCI-IF is a PLX9054, the FPGA is configured by the PCI-master. > > > Pricing is TBD, but we tend to be expensive. > > > > > > Alfred Fuchs > > > Siemens Austria > > > PSE PRO LMS2 > > > +43/1/1707-34113 > > > > > > Atif Zafar schrieb: > > > > > > > Hello: > > > > > > > > Does anyone know of any development boards (PCI) that > > > use the Virtex > > > > FPGA? I am interested in a board with preferably several > > > XV800 or XV1000 > > > > devices along with RAM for prototyping a custom graphics pipeline. I > > > > have heard of the PCI Pamette board, but to my knowledge > > > this does not > > > > have Virtex silicon. Thanks for any info. > > > > > > > > Atif Zafar > > > > Regenstrief Institute > > > > Zafar_A@regenstrief.iupui.edu > > > > > > > -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 16458
Here's the FAQ: 1. Q: Is there a FAQ? A: No, scan deja-news for your question. As for the rules: 1. Don't use the email addresses pulled off newsgroup postings for your spam mailing list. The regular contributors to this newsgroup generally post on their own time and provide a very valuable service for readers who may not have the expertise of these power users. There is little that is as annoying as a ton of unsolicited advertising junk filling up the email in-basket. 2. The usual other rules. Rob Putala wrote: > Where can I obtain a FAQ for this newsgroup? I would like to advise > subscribers of opportunities I am working on, but do not want to upset or > get flamed. > > Any help with the rules will be appreciated! Also, any suggestions on other > newsgroups will be welcomed. > > -- > rcp@frontiernet.net > F-O-R-T-U-N-E Personnel Consultants of Worcester > (978) 466 - 6800 -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 16459
Hello, I'm selling a full Altera MAX+Plus II package, V9.01. It's new and still sealed. This package is their "Magnum" product which supports full VHDL. Great for DSP design. Includes manual, CD-ROM and dongle. Doesn't include maintenance. I will transfer ownership through Altera upon purchase. Originally $7000, asking $1600, including shipping. - Chris Note: to reply change "spam" to "netcom" in the return address, thanks.Article: 16460
Hi, 1.5is2 allows you to generate a back annotated netlist in order to simulate the device, in that respect, with the hard macro. Be careful when timing the design as FFS to FFS does not work with the TRIFF. Use Period = etc. The only problem is functional simulation. 2.1i supports the correct use of the TRIFF in 4kXLA, and SpartanXL. Virtex is already supported in 1.5is2. 2.1i FCS is June. Regards, David Hawke. Xilinx. Edward Moore wrote in message <927376848.20543.0.nnrp-03.c2de43d1@news.demon.co.uk>... >Has anyone successfully used the iob tristate registers in a 4000-XLA device >?. > >I am trying to decrease the clock to tristate time of some outputs connected >to ZBT rams. I've read the relevant xilinx app-note on how to use the iob >.nmc hard-macro, and i can't see any sensible way of doing a front-end >simulation of the xilinx device and the rams, since the iob macro doesn't >have a PAD pin. > >It seems i would need to have one design for simulation, using a model of >the iob register connected to the rams, and another design for P&R which >doesn't have the i/o pins. Is this correct ?. > >I was hoping for Unified Library support for the iob tristate flip-flop, but >Xilinx seems to be moving away from using iob library primitives (they don't >seem to be used for Virtex's). So has anyone heard of when the P&R software >will be able to utilize the tristate registers automatically ? > >Edward Moore > >PS : you can use a -XL bitstream in a -XLA device, and suprisingly >vice-versa. > >Article: 16461
They push their Windows products vs. the Unix competition but I've never heard a tool vendor stupid enough to push the OS itself as superior to anything. Zoltan Kocsi wrote: > > Bob Sefton <rsefton@home.com> writes: > > > just not much you can do about it). Blame the GUI problems on > > Microsoft not Xilinx. There's not a Windows application in > > existence that runs on all Windows machines. > > Then I wonder why FPGA tool vendors push Windows so much ? > Do they get paid a commission from Microsoft ? > Can their CEOs play golf with Bill once a month ? > > Zoltan > > -- > +------------------------------------------------------------------+ > | ** To reach me write to zoltan in the domain of bendor com au ** | > +--------------------------------+---------------------------------+ > | Zoltan Kocsi | I don't believe in miracles | > | Bendor Research Pty. Ltd. | but I rely on them. | > +--------------------------------+---------------------------------+Article: 16462
Ray Andraka wrote: > > Austin, > > Unless it is a board for something going into production, I'd rather have the > PCI in a separate part so that I can get use of the entire virtex part, and so > I don't have to worry about how the PCI interface is going to affect the > floorplanning for my stuff. I only use these type of boards in one-off > designs and as prototyping platforms. For the most part, they are way too > expensive to use in a product that is going to be produced and sold in any > kind of quantity. > > -Ray Andraka, P.E. > President, the Andraka Consulting Group, Inc. > 401/884-7930 Fax 401/884-7950 > email randraka@ids.net > http://users.ids.net/~randraka How about a chip where the PCI interface was not done in the programmable logic, but instead was just added to the chip as a standard hardwired function? Just like the boundary scan or the DLLs. It would seem that there are enough PCI based designs being done that it would be a high volume product. -- Rick Collins rick.collins@XYarius.com remove the XY to email me. Arius - A Signal Processing Solutions Company Specializing in DSP and FPGA design Arius 4 King Ave Frederick, MD 21701-3110 301-682-7772 Voice 301-682-7666 FAX Internet URL http://www.arius.comArticle: 16463
Ask Lucent how many of those they are selling. They've got an FPGA with a PCI interface wrapped in. Rickman wrote: > Ray Andraka wrote: > > > > Austin, > > > > Unless it is a board for something going into production, I'd rather have the > > PCI in a separate part so that I can get use of the entire virtex part, and so > > I don't have to worry about how the PCI interface is going to affect the > > floorplanning for my stuff. I only use these type of boards in one-off > > designs and as prototyping platforms. For the most part, they are way too > > expensive to use in a product that is going to be produced and sold in any > > kind of quantity. > > > > -Ray Andraka, P.E. > > President, the Andraka Consulting Group, Inc. > > 401/884-7930 Fax 401/884-7950 > > email randraka@ids.net > > http://users.ids.net/~randraka > > How about a chip where the PCI interface was not done in the > programmable logic, but instead was just added to the chip as a standard > hardwired function? Just like the boundary scan or the DLLs. > > It would seem that there are enough PCI based designs being done that it > would be a high volume product. > > -- > > Rick Collins > > rick.collins@XYarius.com > > remove the XY to email me. > > Arius - A Signal Processing Solutions Company > Specializing in DSP and FPGA design > > Arius > 4 King Ave > Frederick, MD 21701-3110 > 301-682-7772 Voice > 301-682-7666 FAX > > Internet URL http://www.arius.com -- -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 16464
On Fri, 21 May 1999 09:41:43 +0100, David Pashley <David@edasource.com> wrote: >In article <37446aa8.4459222@news.dial.pipex.com>, ems@riverside- >machines.com.NOSPAM writes >>there is absolutely *no way* that fpga express is competitive in terms >>of vhdl language support, and i'm sure that you know this as well as >>the rest of us do. true, there have been major advances in both DC and >>express recently, but you've still got some way to go. >> >>on the other hand, there have been some really nice additions in 3.1 - >>FST/scripting and attribute passing primarily, which would make this a >>very competitive tool if you got the language support right. >> >>evan > >As I understand it, and as you suggest, the language support in DC and >FPGA Express is the same. i seem to have missed the party - i've been increasing my skills base in nappy changing... :*) first, DC and express use different analysers (although they presumably had a common ancestor at some time). my point was simply that both analysers have been improving, albeit in different directions. some obvious differences spring to mind - (1) DC (since 1998.08) now supports numeric_std (2) express now supports various language features related to die placement. some enhancements in express 3.0 appear to have been added specifically to allow handling of xilinx RPMs (something which i suspect this NG was largely responsible for). (3) express has had some token '93 support for at least a year. this wasn't present in 1998.08 - i don't know what's happened in 99.05. however, the fact is that both analysers do a very poor job of supporting the vhdl LRMs, not to mention additional specs such as numeric_std, in the case of express. this was the major complicating factor in 1076.6 (the vhdl synthesis interoperability 'standard'), and has resulted in a spec which has been crippled by the need to address the lowest common denominator (although, ironically, it didn't even manage to do this, and the synopsys products aren't compliant, but that's another story). >How does the fact that Synopsys goes from strength to strength, and is >unarguably both successful and competitive (see their website, Q2 sales >21% up at $190m) square with your comments? i'm not knocking DC. there's far more to synthesis than a language front end. but, as far as their revenues go, i suspect that F,U,&D are significant factors in their success. >Do you mean that the language requirements for FPGA are totally >different from those for ASIC? no; it's the development processes which are different. in an asic environment, you're much more concerned with the back-end flow, and the front end entry mechanism is a relatively small part of the flow. the fpga environment is very different. the vendors have invested in push-button flows, and implementation is a *much* smaller part of the process than in the asic world. the net result of this is that the front-end entry mechanism is, relatively, much more important. this is why, IMO, the driving force for language improvements has come from the fpga world, and not the asic world (just look at the example of '93 'support' in express). synopsys made the mistake of entering the fpga market without realising that it was different from the asic market. many users hadn't even heard of synopsys, and weren't willing to take a hit simply to buy into the name. ok, they're improving, but they don't seem to be in a hurry. evanArticle: 16465
The problem I have with that, is it locks me into THE interface THEY provide for the back end. The pins are used for the back end interface anyway, so why not just connect them to the PCI bus. Also, the PCI interface is supposedly only supposed to take up a few 10s of thousands of gates MAX, and that's hardly anything to worry about in a 300k-800k gate part. If it is, someone is doing something wrong. Austin Ray Andraka <randraka@ids.net> wrote in article <3748AE04.BF8B2490@ids.net>... > Austin, > > Unless it is a board for something going into production, I'd rather have the > PCI in a separate part so that I can get use of the entire virtex part, and so > I don't have to worry about how the PCI interface is going to affect the > floorplanning for my stuff. I only use these type of boards in one-off > designs and as prototyping platforms. For the most part, they are way too > expensive to use in a product that is going to be produced and sold in any > kind of quantity. > > Austin Franklin wrote: > > > Why don't you guys have the PCI interface in the Xilinx? > > > > Malachy Devlin <m.devlin@nallatech.com> wrote in article > > <B4000FE0503ED211864100104B4C66C30972A9@CONTEXT>... > > > Nallatech has been supplying a Virtex development platform since > > > February. This is a 32bit 33Mhz PCI card with a 300K - 800K Virtex > > > device and 2 individual banks of 2MBytes 100Mhz ZBT SRAM. > > > The PCI card, called the Ballynuey, handles all the PCI issues and comes > > > with a pre-configured Spartan that handles the PCI interfacing and data > > > buffering between the Virtex and the PC application. PCI drivers, Virtex > > > debug tool, FPGA configuration and Application API are included with the > > > card. > > > Additionally the card includes 4 DIME modules for expansion and custom > > > I/O. Currently there are modules for Image Capture and Display, a Dual > > > XCV1000 module (yes over 2Million gates!) and various other I/O modules > > > (e.g. LVDS) with more in the pipeline. The modules can provide over > > > 2Gbytes/sec bandwidth and has over 200 I/O connections. > > > > > > Configuration of the on-board Virtex is configured dynamically over the > > > PCI using the tools provided with the card (and is much faster than > > > Xchecker!) If additional DIME modules are placed on the card their FPGAs > > > are also individually configurable via PCI. > > > > > > > > > Check out the web site for more details and new developments soon to be > > > announced at http://www.nallatech.com/ > > > > > > > > > Malachy Devlin > > > Nallatech Ltd > > > m.devlin@nallatech.com > > > > > > > > > > -----Original Message----- > > > > From: alfred fuchs [mailto:alfred.fuchs@siemens.at] > > > > Posted At: 12 May 1999 19:09 > > > > Posted To: fpga > > > > Conversation: Virtex based PCI cards > > > > Subject: Re: Virtex based PCI cards > > > > > > > > > > > > I've just finished the design of a CompactPCI board (6U) with > > > > one Virtex1000 > > > > and two synchronous SRAM-modules (2Mx72). It mainly uses > > > > rear-panel-I/O > > > > (more than 100 signals) and is therefore open for various > > > > applications. The > > > > PCI-IF is a PLX9054, the FPGA is configured by the PCI-master. > > > > Pricing is TBD, but we tend to be expensive. > > > > > > > > Alfred Fuchs > > > > Siemens Austria > > > > PSE PRO LMS2 > > > > +43/1/1707-34113 > > > > > > > > Atif Zafar schrieb: > > > > > > > > > Hello: > > > > > > > > > > Does anyone know of any development boards (PCI) that > > > > use the Virtex > > > > > FPGA? 
I am interested in a board with preferably several > > > > XV800 or XV1000 > > > > > devices along with RAM for prototyping a custom graphics pipeline. I > > > > > have heard of the PCI Pamette board, but to my knowledge > > > > this does not > > > > > have Virtex silicon. Thanks for any info. > > > > > > > > > > Atif Zafar > > > > > Regenstrief Institute > > > > > Zafar_A@regenstrief.iupui.edu > > > > > > > > > > > > > > -- > -Ray Andraka, P.E. > President, the Andraka Consulting Group, Inc. > 401/884-7930 Fax 401/884-7950 > email randraka@ids.net > http://users.ids.net/~randraka > > >Article: 16466
Ray Andraka wrote: > Austin, > > Unless it is a board for something going into production, I'd rather have the > PCI in a separate part so that I can get use of the entire virtex part, and so > I don't have to worry about how the PCI interface is going to affect the > floorplanning for my stuff. I only use these type of boards in one-off > designs and as prototyping platforms. For the most part, they are way too > expensive to use in a product that is going to be produced and sold in any > kind of quantity. > The PCI interface takes so little in a virtex that using another chip does not make much sense. Also with a chip between the BUS (any bus) and your design you incur latency and expense. By having the logic right next to the bus interface you are as close to a "reconfigurable coprocessor" model as you are going to get. As for price the HOT2 is $695 for single units and around $400 in volume. This makes it viable for small production runs from 10-100. I'm sure the economics for a small virtex based system will be similar. -- Steve Casselman, President Virtual Computer Corporation http://www.vcc.comArticle: 16466
Evan- Thank you for mentioning other features of 3.1 in a good light, but I stand behind my statement that FPGA Express is competitive with regard to language. Synplify and Leonardo both support broader subsets of VHDL and Verilog, but only FPGA Express (and FPGA Compiler II) is fully Design Compiler compatible. While there are many FPGA designers who never intend to take their designs to ASICs, there are also many who use FPGAs for ASIC prototyping. For this latter group, DC compatibility is an important criterion for selecting an FPGA synthesis tool. (Yes, I know that Leonardo can target ASICs for less, but Leonardo does not have the market share that DC boasts and most ASIC design centers use DC). Regards, -Jim ems@riverside-machines.com.NOSPAM wrote: > On Fri, 14 May 1999 22:38:31 -0400, Jim Kipps <jkipps@viewlogic.com> > wrote: > > >What I look for in synthesis is: 1) library coverage > >(you can't synthesize what you can't target), 2) quality > >of results, and 3) language support. There are other > >things that are important, like ease of use, price, and > >runtime performance, but these are the three biggies. > > > >I've had experience with Leonardo (which also supports > >user attributes), Synplify, and FPGA Express and can > >honestly say that all three are competitive with regard > >to libraries, performance, and language. > > there is absolutely *no way* that fpga express is competitive in terms > of vhdl language support, and i'm sure that you know this as well as > the rest of us do. true, there have been major advances in both DC and > express recently, but you've still got some way to go. > > on the other hand, there have been some really nice additions in 3.1 - > FST/scripting and attribute passing primarily, which would make this a > very competitive tool if you got the language support right. > > evan -- -------------------------------------------------------- James R. Kipps FPGA Marketing Manager jkipps@viewlogic.com Phone: (508) 303-5246 --------------------------------------------------------Article: 16468
Tim Tyler wrote: > > While I'm all in favour of a portable, universal means of describing > algorithms that enables them to be implemented efficiently on parallel > hardware, unfortunately, Java doesn't look /remotely/ like what I > envisage. Tim, I agree, though there seems to be enough work using this as a start (J-Bits and JHDL are written in Java -- plus check out this cool product-in-process: http://www.lavalogic.com/products.html ) that I find it hard to ignore. I picture some sort of icon oriented language similar to the Khoros glyphs or the Labview environment. The designer may even use icon layout schemes to effectively floorplan. the "wires" that connect the icons would represent dataflow and hence causality that a compiler could use in determining possible time / logic switches, and parallelism with or without restrictions imposed by the engineer. > > I envisage something like a "rubber circuit" which 'stretches' to fit > the characteristics of the target hardware, while retaining the relevant > inter-component distances and ratios, for purposes of retaining correct > synchronous operation. This sounds like an interesting concept, but I'm not sure I follow. Does the "source design" have any knowledge of the the target hardware, or is there an intermediate representation? It sounds like you have in mind something more sophisticated than an FPGA vendor's macro library which has the same properties. Please explain some more about this. Cheers, Jonathan -- Jonathan F. Feifarek Consulting and design Programmable logic solutionsArticle: 16469
Michael Schuerig wrote: > > I have no clue about FPGAs, but, I'm wondering, aren't the switching > times prohibitive? Remember, you not only have to switch the hardware > around for thread switches in your own process, but also for context > switches among all the processes running on the hardware. Remember, you > don't own the processor. > Michael, In a way, you do own the processor with a Reconfigurable Computer, because of the degree of parallelism possible. In theory, if one process (thread) needs more logic than is available, the logic allotted to that process can be reconfigured to perform the rest of the thread without affecting the context of the other threads. Or "idle" threads could be pre-empted to free logic for the "active" thread (after saving the internal state of the replaced logic). In practice the switching times depend on the hardware (usually FPGAs). Standard FPGAs require the entire configuration to be loaded (serially to boot) - a growing problem as the number of configuration bits increases with larger components. Some FPGAs (notably Atmel, the defunct Xilinx 5000, 6000 series, and now the Xilinx Virtex series) allow partial reconfigurability, which usually reduces switching times. On-chip storage of configuration data which can be loaded in a single clock cycle was the original topic of discussion for this newsgroup thread. There was earlier reference to some DARPA funded research aimed at this goal. There is a Reconfigurable Computing chip just announced by NEC which apparently can store multiple configurations and switch between them. Check out this link: http://www.edtn.com/story/OEG19990215S0004 . Maybe the switching time problem has finally been solved. Jonathan -- Jonathan F. Feifarek Consulting and design Programmable logic solutionsArticle: 16470
Roland Paterson-Jones wrote: > > > > I like to to view the problem as an extension of HotSpot/JIT technologies > > in virtual machine implementation, most notably lately in Java Virtual > > Machines. What these technologies do is profile a program on-the-fly (with > > embedded profiling instructions, or through interpretation, or whatever). > > When they determine that a certain portion of a program is heavily > > exercised, then that portion is (re-)compiled with heavy optimisation. > > > > Now, why not do the same thing, but right down to the hardware, rather than > > down to machine code. What you need, however, is a general compiler from a > > high-level language (Java bytecode?) to fpga gates. According to the empty > > response to a previous posting, nobody is interested in such a thing. > > I believe that BYU has done work on this topic (hardware level on the fly reconfiguration). Of course, they've done a heck of a lot, so I can't recall exactly where I saw the reference. You may find something at http://splish.ee.byu.edu/docs/papers1.html . > > I have dreams of a single multitasked fpga doing all of the stuff that each > > separate component of a motherboard + cards currently does (or an array of > > fpga's multi-tasking this). Cheap and fast and simple (once you've > > implemented the JIT technology!). > > Yes! I have the same dreams! Basically, System On a Chip - chips are getting bigger, and with judicious multiplexing reasonably sized systems can be implemented this way now. The cheap (and simple) aren't quite there yet, though, and speed is traded with both these factors. Of course, interface circuitry is still required to connect to external systems or subsystems. The concept of reconfigurability can be extended to these circuits - work is already being done on reconfigurable analog and mixed mode circuitry. Who knows? JIT technology may be the way these systems are made cheaper and simpler. Jonathan -- Jonathan F. Feifarek Consulting and design Programmable logic solutionsArticle: 16471
hi, has anyone studied virtex vs apex20k for dsp applications and have postable results ? thanks for any and all comments. muzo Verilog, ASIC/FPGA and NT Driver Development Consulting (remove nospam from email)Article: 16472
Bob Sefton <rsefton@home.com> writes: > They push their Windows products vs. the Unix competition but I've > never heard a tool vendor stupid enough to push the OS itself as > superior to anything. Well, I didn't say they had said Windows was better. I only said they were pushing it. When you can get Windows tools free or very cheap and the same tool from the same vendor for unix for serious money or not at all, then one thinks about the cause. One possibility is that Windows is indeed superior and writing a synthesis engine or a P&R tool under Windows is a joy while under unix it's a nightmare. Along the same lines, Windows tools might be inherently glitch free while unix tools require very costly customer support which justifies the cost or abandonment of the platform. Alternatively, one might form a Conspiracy Theory along the lines of toolmakers receiving financial compensation or fringe benefits for their Windows efforts from some company which is interested in Windows gaining market share against unix. Since the first scenario doesn't sound very plausible and the second is rather malicious, I'm keen on learning the real reasons behind the apparent Windows orientation of the EDA industry. Zoltan -- +------------------------------------------------------------------+ | ** To reach me write to zoltan in the domain of bendor com au ** | +--------------------------------+---------------------------------+ | Zoltan Kocsi | I don't believe in miracles | | Bendor Research Pty. Ltd. | but I rely on them. | +--------------------------------+---------------------------------+Article: 16473
Hello, Does anyone know how the LUT is implemented, or how it can be changed, in the XC3000? For example, if I want to implement a Boolean equation, how do I know whether it is implemented through an AND-OR or AND-NOR type of LUT? Please advise. email: csoolan@dso.org.sgArticle: 16474
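Some background on what an XC3000-style function generator actually stores: the logic is held as a truth table in a look-up table, so there is no internal AND-OR or AND-NOR gate structure to pick between - any four-input equation, however it is written, reduces to the same sixteen stored bits. A small illustrative model in C follows (a sketch of the idea only, not Xilinx's implementation):

/* A 4-input LUT modelled in C: the function is just 16 stored bits,
 * indexed by the inputs.  Any Boolean equation of four variables --
 * whether written in AND-OR, AND-NOR or any other form -- reduces to
 * the same 16-bit truth table.  Illustrative model only. */
#include <stdio.h>

static int lut4(unsigned short truth, int a, int b, int c, int d)
{
    int index = (d << 3) | (c << 2) | (b << 1) | a;
    return (truth >> index) & 1;
}

int main(void)
{
    /* F = (A AND B) OR (NOT C AND D): build the table once...        */
    unsigned short truth = 0;
    int i;
    for (i = 0; i < 16; i++) {
        int a = i & 1, b = (i >> 1) & 1, c = (i >> 2) & 1, d = (i >> 3) & 1;
        if ((a && b) || (!c && d))
            truth |= 1u << i;
    }
    /* ...then evaluation is a pure table lookup, no gates involved.  */
    printf("F(1,1,0,0) = %d\n", lut4(truth, 1, 1, 0, 0));
    printf("F(0,0,0,1) = %d\n", lut4(truth, 0, 0, 0, 1));
    return 0;
}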
Tim Tyler wrote in message <927489612.169709@BITS.bris.ac.uk>... >In comp.arch.fpga Roland PJ Pipex Account <rolandpj@bigfoot.com> wrote: >:>Roland Paterson-Jones <rpjones@hursley.ibm.com> wrote: > >: ...but this is exactly what any optimising compiler does - evaluating >: dependencies is quite close to extracting parallelism - you just need to >: schedule for an infinite-scalar machine, plus conditionals and both their >: branches can be evaluated simultaneously, plus loops can be unrolled to the >: limit of the hardware... I had a look at Patterson and Hennessy last night. They had some empirical results for the average number of instructions schedulable on the same clock cycle for some standard SPEC benchmarks. They ranged from 17 to 150+ !!! They were motivating superscalar processors, and looking at how close you could come to this theoretical maximum with realistic processor designs (finite lookahead window, finite register renaming set, finite memory ports, imperfect branch prediction etc.) It seems to me that many of these 'realistic' limitations (bar memory ports) don't apply to compilation to fpga. But 17 I'd settle for. 150+ is huge. The large numbers tended to be floating-point tests. Roland
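To see why the measured ILP swings so widely, here is a hypothetical dot product written two ways in C. The algorithm is the same, but the single-accumulator version has one serial dependence chain, capping the schedulable parallelism at roughly one multiply-add per step, while the four-accumulator version exposes four independent chains to an 'infinite-scalar' scheduler (or to four hardware multipliers on an FPGA) - which is broadly why long, regular floating-point loops tend to show the larger numbers once unrolled.

/* Hypothetical example: same algorithm, very different available ILP. */
#define N 1024

int dot_serial(const int *a, const int *b)
{
    /* One accumulator: every multiply-add waits on the previous one,
     * so even an infinite-issue machine needs about N steps.         */
    int i, s = 0;
    for (i = 0; i < N; i++)
        s += a[i] * b[i];
    return s;
}

int dot_unrolled(const int *a, const int *b)
{
    /* Four independent accumulators (N assumed divisible by 4): the
     * four chains can run in parallel, so an ideal machine needs about
     * N/4 steps plus a short tail to combine the partial sums.        */
    int i, s0 = 0, s1 = 0, s2 = 0, s3 = 0;
    for (i = 0; i < N; i += 4) {
        s0 += a[i]     * b[i];
        s1 += a[i + 1] * b[i + 1];
        s2 += a[i + 2] * b[i + 2];
        s3 += a[i + 3] * b[i + 3];
    }
    return s0 + s1 + s2 + s3;
}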