I have a schematic design with VHDL modules. In my design I have a state
machine with registered outputs; one of these outputs is driving a pair
of FIFO read enables. I am short in simulation (late, not fast enough)
by about 600 ns; the signal has 25 loads. It gets optimized out if I
split it into two parts. I tried entering the MAX_LOAD attribute into
the VHDL module but it doesn't do anything. I don't want to buffer
anyway; I'd rather have two regs. I cleared the "equivalent register
removal" button in the synth options but that doesn't seem to help. When
I synth the module it doesn't remove the 2nd register, but when I try to
compile the whole design I seem to lose the 2 signals, and then the tool
dies when it runs across the following:

net "rd_fifo_a" period = 160 Mhz;
net "rd_fifo_b" period = 160 Mhz;
timegrp "tc_rd_fifo_a" = FFS (rd_fifo_a);
timespec "tsrd_fifo_a" = from "tc_rd_fifo_a" 4.23 ns;
timegrp "tc_rd_fifo_b" = FFS (rd_fifo_b);
timespec "tsrd_fifo_b" = from "tc_rd_fifo_b" 4.23 ns;

and can't find the signal names rd_fifo_a and rd_fifo_b listed in the
UCF file.

"Jay" <kayrock66@yahoo.com> wrote in message
news:d049f91b.0210112159.359d3164@posting.google.com...
> What is it that you are trying to accomplish by doing this?
>
> "C.W. THomas" <cwthomas@bittware.com> wrote in message
> news:<uqehv4ldnktrff@corp.supernews.com>...
> > HI;
> >
> >
> > What is the attribute to keep a component from being optimized away
> > in ise 4.2??
> >
> > thanks;
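
For reference, one way to keep both registers through synthesis is to
duplicate them by hand and tag them explicitly. A minimal VHDL sketch,
assuming XST-style KEEP and EQUIVALENT_REGISTER_REMOVAL attributes
(attribute, entity, and signal names here are illustrative; check your
synthesizer's manual for its exact spelling):

    library ieee;
    use ieee.std_logic_1164.all;

    entity rd_en_dup is
      port (clk        : in  std_logic;
            rd_en_next : in  std_logic;
            rd_fifo_a  : out std_logic;
            rd_fifo_b  : out std_logic);
    end rd_en_dup;

    architecture rtl of rd_en_dup is
      signal en_a, en_b : std_logic;

      -- keep the two nets as distinct objects through synthesis
      attribute keep : string;
      attribute keep of en_a : signal is "true";
      attribute keep of en_b : signal is "true";

      -- XST-style: forbid merging of the two equivalent registers
      attribute equivalent_register_removal : string;
      attribute equivalent_register_removal of en_a : signal is "no";
      attribute equivalent_register_removal of en_b : signal is "no";
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          en_a <= rd_en_next;  -- same D input, two physical registers,
          en_b <= rd_en_next;  -- each driving half of the 25 loads
        end if;
      end process;
      rd_fifo_a <= en_a;
      rd_fifo_b <= en_b;
    end rtl;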

Article: 48226

Peter, it has been a long time... I checked about availability: we have
the 3064 and 3164 in PQ160, as well as the 3164 in TQ144. Contact your
rep or disti (or Xilinx UK). (Might save you a redesign effort.)

BTW, the configuration problems are most likely a signal integrity
issue, and that is much better understood these days than it was years
ago. Proper PCB layout and termination at the far end have been a
tremendous help.

Peter Alfke, Xilinx Applications

Peter wrote:
> In 1992 I designed a product which was a PC (ISA) card containing 32
> (yes, thirty two) XC3064 devices. It is basically a complicated pulse
> generator. Each of the devices contains the same config data. There is
> also an XC3030 which implemented the PC interface and some other
> simple stuff.
>
> The FPGA config data, for all 33 devices, is loaded from a simple
> program running under MS-DOS which reads in a Motorola S-record file
> and loads it into the card.
>
> The customer had a few of these, then came back in 1996 for some more.
> By then, Xilinx had almost dropped these parts and I had to redesign
> the card to use a TQFP version of the 3064, of a higher speed than the
> original one. Fortunately it still worked!
>
> I say "fortunately" because there have always been config loading
> problems with Xilinx parts - if you had a lot of them on a board (I
> last did FPGA design in 1997). They were very sensitive to the CCLK
> edges, not too fast and not too slow. I had to play around with
> different types of CMOS drivers to get the edges exactly right, and I
> do have a 400MHz scope. There was no explanation for this behaviour,
> other than the CCLK input having multi-GHz speed and picking up
> non-monotonic risetimes which a 400MHz scope did not show.
>
> I never had this problem on any single-device FPGA products, which I
> suspect are the vast majority of FPGA applications.
>
> Also, as Xilinx parts got faster through the 1990s, the D-Q flip-flop
> propagation time got faster a lot faster than the interconnect delays
> did. This meant that designs which used long-lines (with some short
> local interconnect) for clock distribution (a method freely
> recommended by Xilinx in the early days) stopped working if
> implemented in faster parts. Eventually, one really did have to use
> just the global clock nets for any concurrently-clocked structures
> (e.g. shift registers and counters), in conjunction with the CE
> feature, to have any hope of reliability. An experienced Xilinx
> engineer confirmed this was indeed the case.
>
> The design I did had not been done "properly" as above, but once the
> config loaded OK, it was always rock solid.
>
> This customer now wants to buy some more of these cards. There is no
> way I am going to risk building some more - unless I could get the
> exact same parts, which I can't - because the cost of populating a
> card with 33 expensive parts, and finding it doesn't work properly, is
> just too high. I also don't have the time to do any major design work
> like this, so I am offering this project to somebody interested.
>
> Xilinx ran the XC3000 and XC4000 parts for a long time, but nowadays
> things change too fast.
>
> I will try to monitor this NG anyway, but if anyone is interested in
> actually doing this, taking on the risk, and paying me a small % when
> it's all done, I would give him the original VL4 designs, which are
> actually pretty simple, and whatever else I have. Please email me
> direct on the address below.
>
> The value of the design, including a PCB redesign if necessary, is
> likely to be of the order of GBP 20k. The customer wants only two of
> the cards but last time he paid about GBP 6k each.
>
> One can either do this with a board containing 32 cheap little Atmel
> AVRs (which, realistically, I think is a lot better than using FPGAs
> if one at all can, because FPGAs keep changing, the tools keep
> changing; I paid GBP 13,000 for Viewlogic 4 / XACT 6.01, 2 useless
> dongles, covering the XC3k and XC4k parts up to 1992 / 1996, etc). But
> on a 2-off project an FPGA is probably the best way. Preferably a
> common Xilinx part with a future upgrade route looking OK for another
> 10 years, which loads up the same way so the same PC loading program
> can be used.
>
> Peter.
> --
> Return address is invalid to help stop junk mail.
> E-mail replies to zX80@digiYserve.com but remove the X and the Y.
> Please do NOT copy usenet posts to email - it is NOT necessary.

Article: 48227

Why do you need several DLLs to divide a frequency? CLBs can do that
nicely...

Peter Alfke

S Embree wrote:
> I am trying to link several CLKDLL components in order to create a
> frequency divider. I am getting an error message that "Period
> specification references the TNM group, which contains only pad
> elements. Only synchronous elements are permitted in a period
> specification...."
>
> How should I set the period constraints when I am linking several
> CLKDLLs?
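
To illustrate Peter's point, a minimal divide-by-2**N counter in
ordinary CLB logic, as an alternative to chaining CLKDLLs (a sketch;
entity and signal names are illustrative, and a divided signal is
usually better used as a clock enable than as a derived clock):

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity clk_div is
      generic (N : integer := 4);        -- divide the input by 2**N
      port (clk_in  : in  std_logic;
            clk_out : out std_logic);    -- route through a BUFG if it
    end clk_div;                         -- must clock other logic

    architecture rtl of clk_div is
      signal count : unsigned(N-1 downto 0) := (others => '0');
    begin
      process (clk_in)
      begin
        if rising_edge(clk_in) then
          count <= count + 1;            -- plain CLB counter
        end if;
      end process;
      clk_out <= count(N-1);             -- MSB toggles at clk_in / 2**N
    end rtl;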

Article: 48228

nweaver@ribbit.CS.Berkeley.EDU (Nicholas C. Weaver) writes:

> Neil Franklin <neil@franklin.ch.remove> wrote:
> >
> > I do not consider my tools to be prototype quality. I intend them
> > for full real-world use.
> >
> > But that said, the tools should be up to at least the standards that
> > professional CPU programming tools had in the 1980s.
>
> You have at least skimmed the Mythical Man Month, right?

I own a copy and read it in its entirety, about 5 years ago.

> There really is a huge difference between making something to prove a
> point (a prototype) and a usable product.

You seem to have misread my answer. I consider "making a usable
product" the target. I have no interest in a "prove we could do it".
And yes, my time estimates are based on "full usable product".

What you are most likely not taking into account is that I am not
aiming for some complex massive feature set, but rather at "what are
the bare bones that will do a decent job".

> As for student work, the "free beer" tools are really strong these
> days, after all, they really are largely the full tools in
> "crippleware" form.

Their strengths, however many they are, do not cover their weaknesses:
being closed source software.

a) not user-compilable on any machine (yes, open source users regard
   this as very important)

b) dependent on vendors for whether future users may get a copy (see
   the rumors around JBits), no user distribution (forbidden)

c) not free to peek into, modify, extend, redistribute

And that is why _all_ of them, including the "cheap" back ends, are not
sufficient.

I fully accept that you may not understand these arguments, or more
likely simply attach a far lower importance to them. But as a long-time
open source user, and part of the community that makes/uses it, I am
aware of and demanding of such points in the software I put my time
into learning and writing for.

> The use of open tools is mostly for research: the ability to modify
> and try new things.

Error: the main reason for open source is independence. To do anything
you want to do, not what vendors allow you.

> > It is open source after all, and that grows by accretion of
> > different peoples' additions. The problem is the initial "get
> > moving" work. That is what I am presently doing.
>
> You need a pretty big base to get this for FPGA tools.

On that point I disagree. I see a possible simple toolset that can
work.

As an analogy: today C compilers are massive things (gcc >1 MByte); in
1973 the first C compiler was a small thing that ran in <64k RAM.
Today's compilers are no doubt more powerful, but the 1970s ones were
usable. Think more of a 1970-compiler-equivalent-sized toolset. Within
that spec set, then fully developed to usability. That is within my
reach.

> The Linux kernel got good traction mostly because everything outside
> was already done and coopted from other *nixes.

Still leaves the kernel. Which is quite a big thing. And quite a few of
the essential services (init, login, ...) were not pre-existing ones.

> And also got a lot of traction because of the foolish spending by
> some companies,

That postdates the first 10 years of development. The last 2-3 years
are just icing on the top.

> > Open source works bottom up. CS top down people may not like that.
> > Millions of Linux users regard it as adequate.
>
> Its not "top down" vs "bottom up", "modular" vs "monolithic", its
> trying to note that making a robust, quality system is a lot of
> uninteresting, additional work.

Work I am familiar with. I consider programs finished when they are
usable. So that is included in the calculations.

> > Which I solve by running my stuff over an existing network service,
> > such as ssh. So networking including security becomes as simple as
> > using stdin/stdout.
>
> Assuming, of course, you have the full *nix host now, or at least a
> good hunk of it,

I have. And any net-based design of mine would have an embedded Unix in
it. Note: I am not a high-volume maker, where unit cost is dominant.

> Security is HARD. Damn hard.

Which is why I leave it to the security professionals. And design my
stuff to use theirs.

> I'm just using this as a hypothetical example.
> If you want an FPGA on a network device that isn't on a PCI card or
> something, its now much harder still.

Today there exist nice little 486- or MediaGX-based single board
computers. And yes, they set a minimal price. But I can live with that.
Anyone else needing something smaller/cheaper will have to add their
own security to that.

> Partially, I am a pessimist.

And I am a known optimist :-)

> But more importantly, you seem both
> enthusiastic and knowledgeable. There are very many interesting
> prototype tools which you could make to target conventional FPGAs,

Which all have the problem of being outside of my viewfinder. My
experience (and so frustrations and desires) comes from JBits usage. I
have never used the traditional toolsets, in part because they looked
too complicated to me, in larger part because they will not run on my
Linux system.

That said, I have the impression that many of the problems (such as the
placement stuff you and rickman have mentioned) are inherent in the way
the traditional tools work (for one, using simulation languages which
treat an FPGA no differently than an ASIC). And it is the interfaces,
derived from such working methods, that perpetuate these problems.

> where you don't have to spend lots of time doing tedious CRAP like
> bitfile generation or routing.

I do not regard it as crap. Actually it looks like an interesting
puzzle. I have also enjoyed such things as writing microcontroller code
in assembly. My biggest gripe with Linux is that it is becoming too
fat.

> I've built tools to futz with Xilinx designs using XDL, and messed
> with Jbits for a while before deciding it was just too low level
> (really an API for setting named bits, where a little hierarchy would
> be a vast improvement).

Which is why I, as part of my first FPGA project, developed an
abstraction layer on top of JBits. And I am using that, and the
programming style that evolved with it, as the target to improve on. My
libvirtex part will be roughly JBits level, then vm and vas
successively higher levels.

> tendency to "damn, need another pass to translate to a slightly
> different datastructure to do X",

Ah, you see, that is what I like doing: finding the elegant way to
write something. I will actually change code just to make it nicer.
Even without added functions.

> I'd hate to see you waste your time constructing something which
> won't be used because your target audience would rather use the
> free-beer

I suspect that quite a few (even some writing in these threads) will
change, simply because it is open source. That is not about money, it
is about being less limited, about being able to rely on things working
how/where one wants them to and staying so.

> tools, when you could make real contributions to the community by
> showing how to do things significantly better.

And who says that my totally different approach may not turn out to
become (or trigger) something better? That is something I cannot
predict. But I can experiment. The best experiment is often the one
that shows something unexpected. And that is usually the off-beat one.
When something doesn't work, try/explore a totally different angle.

For one, I have decided to kill placement as a problem by simply having
the user's code explicitly place the stuff, and then solve the problem
of that being a lot of work by providing simple tools to make it easy.
My current JBits work uses that method with large success.

A totally different approach from defining pure logic and then having
the tools try to figure out how to do it. Cursing at their failure to
be intelligent.

--
Neil Franklin, neil@franklin.ch.remove http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Programmer, Archer, Roleplayer
- hardware runs the world, software controls the hardware
code generates the software, have you coded today?

Article: 48229

Hello Nacho,

Greetings from another Spaniard :o)

During the 'translate' step a block can be reported as 'unexpanded', and
an unexpanded block is a missing part of the design; therefore it can't
be mapped, and you should get an error during the mapping step, or for
some versions of ISE during the translation step.

What tool are you using for synthesis? In some synthesis tools you get a
detailed LOG where you should see a warning telling you that the
component OBUFE8 is inferred as a black box or something like that; that
means your package with the OBUFE8 definition doesn't match any known
Xilinx component. Compare your definition with the one in UNISIM or on
the Xilinx web.

What type of device are you targeting? Check the datasheet in case your
device doesn't contain that resource.

Another thing: you don't need to define all these buffers in a library;
most of them will be inferred by your synthesis tool automatically.
Sometimes it is useful to instantiate them in your RTL; I guess it
depends on the design.

Good luck. Regards.

--
Ulises Hernandez
Design Engineer
ECS Technology Limited
ulisesh@ecs-tech.com

"Nacho" <nanoscobra@yahoo.com> wrote in message
news:d311f859.0210140102.7c6a5dcc@posting.google.com...
> Hello:
>
> I'm a Spanish student, sorry for my bad English.
>
> I have created a project with Xilinx Foundation Series 2.1, all in
> VHDL. I have not created a library; I have created a package where I
> put all the components of my project. BUFE8, OBUFE8, OBUF4, BUFG,
> IBUF, OBUF, IBUF8 are declared as components in my package (without
> architecture) and afterwards are mapped in my top_myproyect.vhdl.
>
> There are two types of errors when I try to create myproyect.bit.
>
> 1) ERROR:NgdBuild:432 - logical block 'I1' with type 'BUFE8' is
> unexpanded.
> The same with the others (OBUFE8, OBUF4, ...)
> What happens?? Must I put another library in my project?
>
> 2) I have two BUFE8, both connected to an OBUFE8, and I get another
> error. What can I do about this? Two signals on the same input of
> OBUFE8?
>
> Thank you very much!!!
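
For reference, a minimal VHDL sketch of pulling Xilinx primitives in
through the UNISIM library, instead of declaring them as bare components
in a user package, so that NGDBuild can expand them (port and signal
names are illustrative, and macro parts such as OBUFE8 may not exist as
single UNISIM primitives, so check the library guide for your device):

    library ieee;
    use ieee.std_logic_1164.all;
    library unisim;
    use unisim.vcomponents.all;   -- Xilinx primitive declarations

    entity top is
      port (pad_in  : in  std_logic;
            pad_out : out std_logic);
    end top;

    architecture rtl of top is
      signal d : std_logic;
    begin
      u_ibuf : IBUF port map (I => pad_in, O => d);   -- input buffer
      u_obuf : OBUF port map (I => d, O => pad_out);  -- output buffer
    end rtl;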

Article: 48230

Russell wrote:
>
> rickman wrote:
> >
> > Neil Franklin wrote:
> > >
> > You are working from the wrong end. No one cares about what tools
> > will support a chip after it has been on the market for 2 years. By
> > that point all the *major* new designs that are going to be done
> > with it *have* been done and everything else is maintenance. At that
> > point the *new* designs will be on the *new* parts which the open
> > source tool will not support. So if you want to be guaranteed to be
> > behind the cutting edge, then by all means use tools that are
> > perpetually out of date.
>
> Many designs want availability and cheapness instead of cutting edge.
> With process problems, life cycles will get longer.

Actually the cheapest parts *are* the newest. Spartan2 and 2E are much
cheaper than Spartan or XC4000 or anything else that Xilinx makes. None
of these parts have been out more than a year if I am not mistaken, and
some are only out a few months. By the time open source tools are ready
for these parts, I expect Xilinx will have one or maybe even two new
low cost families around.

> > > All that in open source is better:
> > >
> > > - simple download/compile/install/use, any computer/OS native
> >
> > don't care, I only use one OS and that will be what the tool is
> > compatible with. I'm an FPGA engineer, not a SW engineer.
>
> Irrelevant.

That was my point!

> > > - no licensing, can give a copy to anyone. allows "config &
> > >   compile at customer's site" designs, and yes, that includes
> > >   "customer selects modules and generates the bitstream that runs
> > >   them" style adaptation to modular interface hardware (see
> > >   recompiling the Linux kernel as an example)
> >
> > don't care. X and A both give away tools and the paid-for versions
> > are affordable in the context of running a business.
>
> Irrelevant. The tools are broken compared to what a decent
> open source tool would be like.

Open source tools are more broken. :)

> > > - anyone can add their brains to development, wherever they have
> > >   the expertise (compare the open process of science vs
> > >   pronouncements from on high by any closed governing body)
> >
> > don't care. I want tools to use to do work, not tools I can work on.
>
> Irrelevant. The tools are broken compared to what a decent
> open source tool would be like.

Open source tools are far more broken :)

> > > For such features many (including myself) are willing to sacrifice
> > > using the latest chip family for the time needed to
> > > reverse-engineer, ditto also not wringing the last drop of power
> > > out of a chip.
> >
> > Many does not include the majority of FPGA engineers, IMHO. In the
> > FPGA world you have to work with the best chip for the job and that
> > is often the most current chip.
>
> Rarely. The biggest latest chips have the highest profile.

Don't know what you mean by profile. If you want to build a product
rather than play, you pick the best part for the job and use the tools
that support it. The commercial tools may have warts, but they let you
get the job done.

> > > They prove my point: vendor tools will not die (that was your
> > > claim) just because open source tools appear and reduce the user
> > > count of vendor tools. Vendor tools can survive in that market.
> >
> > I am not talking about Synthesis and Simulation. I am talking about
> > the back end tools. If Xilinx loses > 50% of its tool market to open
> > source or third party tools, I expect they will drop their inhouse
> > tools. They would have to either cut their tool staff by half, which
> > would ruin their tools in the future, or start charging more for the
> > chips to make up the difference, which would make it harder for them
> > to compete. So successful open source tools will drive FPGA vendors
> > out of the tool market. Then they would have to compete on just the
> > chips and be a driving force somehow with the tools.
>
> No. Internal tools can be fixed in short time if companies have
> a 4 digit support contract, which many will.

What does this mean? I don't get it.

> > > You are forgetting the most important part of open source:
> > > motivation. It takes a lot of it. To accept sacrificing the time
> > > to make software without pay. That motivation must come from
> > > somewhere. Look for the "somewheres" if you want to know where to
> > > look for the first appearances.
> >
> > Why would anyone be more motivated to develop backend tools? What is
> > their value without the front end tools?
>
> Without free chip information to make backend tools, frontend
> tools are useless. Compilers and fitters are fun to make.
> GUIs are boring.

Tell that to the frontend tool vendors. :) You are making up rules to
suit your purposes. No one is talking about GUIs. The compiler *is* a
front end tool.

--
Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design
URL http://www.arius.com
4 King Ave                           301-682-7772 Voice
Frederick, MD 21701-3110             301-682-7666 FAX

Article: 48231

Russell wrote:
>
> rickman wrote:
> > I don't know what motivates you, but I don't think it is the same as
> > most users of the tools. I would like to stop working with analogies
> > and hear what the game plan is. Can you give us a roadmap of how you
> > would proceed?
>
> You compile a design a subcircuit at a time in HDL. You manually
> place all the primitives into a compact area, or let a fitter do
> it for you with suitable constraints, making an RPM.
> Repeat for all subcircuits. Place these RPMs into bigger
> groups of RPMs if needed. Place these 'mega' RPMs. Proceed
> up the hierarchy until the whole design is done (one big RPM).
>
> This is what webpack 5.1i should allow you to do, but
> I still need to try it. 4.2i was aimed there, but was
> broken.
>
> Naturally, a GUI is involved in the floorplanning stage,
> and is a suitable technical challenge for an open-source
> project. The fitter would also be an excellent thing to
> work on. I once made a PCB autorouter in DOS that caused
> all the tracks to naturally 'coalesce' into parallel
> paths and minimize track lengths/bends.

You are talking about the implementation of the fitter and such. I want
to hear how you plan to execute this project. What are the parts to be
built and what will be the sequence of building them? How long will it
take? What resources or information do you need?

--
Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design
URL http://www.arius.com
4 King Ave                           301-682-7772 Voice
Frederick, MD 21701-3110             301-682-7666 FAX
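
For readers unfamiliar with RPMs: they are built from relative location
(RLOC) constraints on placed primitives; the tagged group then moves as
one unit during placement. A minimal VHDL sketch, assuming Virtex-style
RLOC coordinates (labels, ports, and offsets are illustrative):

    library ieee;
    use ieee.std_logic_1164.all;
    library unisim;
    use unisim.vcomponents.all;

    entity rpm_pair is
      port (clk, ce, rst : in  std_logic;
            d0, d1       : in  std_logic;
            q0, q1       : out std_logic);
    end rpm_pair;

    architecture rtl of rpm_pair is
      attribute rloc : string;
      attribute rloc of ff0 : label is "R0C0.S0";  -- row 0, col 0, slice 0
      attribute rloc of ff1 : label is "R1C0.S0";  -- one row below
    begin
      -- two flip-flops locked into a fixed relative pattern (a tiny RPM)
      ff0 : FDCE port map (C => clk, CE => ce, CLR => rst, D => d0, Q => q0);
      ff1 : FDCE port map (C => clk, CE => ce, CLR => rst, D => d1, Q => q1);
    end rtl;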

Article: 48232

rickman <spamgoeshere4@yahoo.com> writes:

> Neil Franklin wrote:
> >
> > So any tool supporting Virtex/Spartan-II in say 1-2 years, and
> > extendable to Virtex-E/Spartan-IIE, has at least a 5 year market
> > life in it. Reverse-engineering Virtex-II should not take 5 years,
> > so we can keep up...
>
> You are working from the wrong end. No one cares about what tools will
> support a chip after it has been on the market after 2 years.

You may not care about a 2 year old chip. Other "leading edge" people
may also take that approach. I care for whatever chip solves my
problem. The 2 year old ones do so.

> point all the *major* new designs that are going to be done with it
> *have* been done and every thing else is maintenance.

Makes one wonder why Xilinx re-warmed their over 2 years old Virtex in
the form of Spartan-II. After already doing the same trick with
Spartan, and having since repeated it with Spartan-IIE.

They seem to sell, so there seem to be quite a few people who do not
need the newest possible.

> > You can now see an actual project, started, that has a planned-out
> > path leading to tools.
>
> I did check out the web site and I am not clear about where it is
> going. Can you explain in terms that an FPGA engineer can understand?

I can only say stuff in whatever language I know. Which contains 74xx
logic, 8051s, Unix programming in C, shell/perl programming, and doing
FPGA in JBits. Sorry if your particular jargon is not supported.

> > For such features many (including myself) are willing to sacrifice
> > using the latest chip family for the time needed to
> > reverse-engineer, ditto also not wringing the last drop of power out
> > of a chip.
>
> Many does not include the majority of FPGA engineers, IMHO.

Even if that is the majority, there still is the rest.

> > Screws from one manufacturer once did not fit nuts from another,
> > about 120 years ago, due to everyone using their own threads. Once
> > the basic issues and design space in thread design were explored,
> > standards came.
>
> Bad analogy. It was in the best interest of the dozens or hundreds of
> manufacturers to be compatible.

Nope. It was in their best interest to lock in customers with
incompatible designs. It was users' professional societies that forced
the change in style.

> > It is horses for courses.
>
> Not sure what that means,

It means that what is best for one is not automatically best for
another. And that the other may be best served with something that the
first would never want.

> but I have seen $1000 FPGAs go on a board. If
> they had known that they would need a $2000 chip, the board would have
> never been designed.

Absolutely not relevant for me, not for my project sizes.

> > > tools (unless you are counting the pennies). The expensive tools
> > > are the synthesis and simulation tools.
> >
> > I was talking of those.
>
> And you can live a rich full life without the $10,000+ tools. Right?

Could you point out where I am supposed to have claimed contrary to
that?

I was pointing out that your "less users, A and X will not be able to
afford development" stuff was nonsense. That there exist people
prepared to pay large sums for tools proves that A and X can keep on
making tools, by simply raising their price a bit.

> I am not talking about Synthesis and Simulation. I am talking about
> the back end tools. If Xilinx loses > 50% of its tool market to open
> source or third party tools, I expect they will drop their inhouse
> tools.

Or raise their price to 2 times what it is today. Still a lot less than
what some people are prepared to pay.

> tools in the future or start charging more for the chips to make up
> the difference which would make it harder for them to compete.

And so what? Their competitors (ASIC vendors) also have to pay for
their tool development. Both can then choose how to distribute the
costs over software licenses (fewer, larger ones) or chip costs
(possibly more sales due to the open source software), or use some of
the open source stuff to reduce their costs.

May the best win.

> successful open source tools will drive FPGA vendors out of the tool
> market. Then they would have to compete on just the chips and be a

CPU manufacturers seem to be living OK, doing just that.

> > You are forgetting the most important part of open source:
> > motivation. It takes a lot of it. To accept sacrificing the time to
> > make software without pay. That motivation must come from somewhere.
> > Look for the "somewheres" if you want to know where to look for the
> > first appearances.
>
> Why would anyone be more motivated to develop backend tools? What is
> their value without the front end tools?

As a foundation to build front end tools on?

The bitstream is the target. The back end makes that. From there on to
the front is ever increasing comfort.

> > > they are much more like the gcc target. But the back end tools are
> > > very different. *That* is why there are no third party back end
> > > tool vendors.
> >
> > They are software, like all other. I.e. designs to be specced, lines
> > of implementing program code to be written, work time to be spent.
> > That process is well understood.
>
> I am glad that you think *all* software is the same.

All software in the end boils down to the same: time. Learning,
researching, coding, testing are all in the end time.

> matter of solving problems, not writing code. If you don't have good
> algorithms, your code will be lousy. Writing code is the *easy* part.
> Developing the algorithm is the hard part.

It is in the end just time.

> > Huh? I never said "all" CPUs, nor did I say all FPGAs. I clearly
> > stated 2 markets, "mass" and "specialist".
>
> What is your point? NO ONE can make a Xilinx compatible FPGA except
> Xilinx. NO ONE can make an Altera FPGA except Altera...

I have my doubts. Where there is a will (enough money) competitors will
appear.

> Just ask Clearlogic. :)

AFAIK, they did not make FPGAs. They made ASICs that were laid out
automatically from Altera bitstreams. And it was not their ASICs that
got them into trouble, but rather that every single use of their
technology helped Altera software licensees break their license.

So they are not a particularly good example of "cloning not possible".
In fact, in a hypothetical world with open source (non-Altera) tools,
users could develop on Altera and then manufacture on Clearlogic, and
that would not violate licenses, and so there would be a non-infringing
use for Clearlogic -> Altera loses the case.

> > You claimed that binary incompatibility is necessary, and as such
> > will break tools. I pointed out that binary compatibility is
> > possible in this market, just as it turned out to be in CPUs, and
> > sketched what its result could look like.
>
> How is compatibility necessary in CPUs or FPGAs? It only exists in the
> x86 world because the cat is out of the bag.

And who says it will not get out of the bag in the FPGA world?

> > It is interesting that Altera did not choose the direct course of
> > attacking them with their patents on the actual chip technology.
> > That they needed to use such an indirect method of helping users
> > breach the devel tools software license, which is less likely to
> > succeed in court, tells us a lot.
>
> You are talking about making chips, now you are talking about the
> tools.

Because the Clearlogic case was about tool misuse, or rather about
helping people misuse a tool, with no non-infringing use.

> > > AMD could make parts that fit the Pentium socket because they had
> > > a license for that. After Socket 7 (IIRC) they no longer had that
> > > license and they now have to make their own interfaces.
> >
> > Socket 7 did not require any license. It is only with Slot 1 and
> > later Socket 370 that Intel introduced a patented signalling
> > protocol (not pinout) which required a license that they then
> > refused to AMD. The pinout is copyable, but useless if one can not
> > implement the signalling.
>
> No, you are confused. Socket 7 required a license, but AMD and several
> other companies already had that license due to manufacturing
> agreements that Intel had set up previously. They were later
> interpreted to include the pinout, the instruction set and even the
> microcode for processors up to the 386.

AMD's license agreements only went up to the 486. Even there they were
not complete (the ICE code was not covered). Socket 7 is Pentium, and
so not covered by any 486 stuff.

Sockets can not be copyrighted, can not be patented, can not be
trademarked, so no protection. Signaling protocols can be patented;
that is what Intel then did on the PentiumII. Same issue as Intel had
with numbers not being trademarkable, so AMD copied the 486 name with
impunity. So Intel renamed the upcoming 586 to Pentium, to prevent AMD
being able to copy it.

> > Also known as: do the most/first needed part first, show an actually
> > usable result, and accept that obsolescence will happen and require
> > a "chase the moving target" attitude. gcc did/does this (different
> > CPUs), Linux did/does this (different computer architectures). Sure.
>
> I disagree that the backend is needed most or first. But then it is
> not my decision to make.

Exactly. That is my decision. And my knowledge of the open source
community that runs into it.

> > > Once you have built all the parts of the intended toolchain, what
> > > will the flow be?
> >
> > Tentatively (subject to changes while implementing):
> >
> > user's chosen language -> compiler (3rd party, multiple)
> >   -> design reduced to LUT-sized elements, relatively placed, their
> >      connections
> > reduced design -> vas (from my toolset)
> >   -> design fitted to LUTs/F5s/etc, absolutely placed, connections
> >      to PIP lists
> > placed/routed design -> vm and the libvirtex it calls (from my
> > toolset)
> >   -> .bit file to be used or displayed/debugged (using existing vd
> >      and vv)
>
> I don't understand any of this. What are you planning to do?

If we do not have that much common language to base our discourse on, I
might as well give up. Good bye.

--
Neil Franklin, neil@franklin.ch.remove http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Programmer, Archer, Roleplayer
- hardware runs the world, software controls the hardware
code generates the software, have you coded today?

Article: 48233

Hi,

I have a design which requires about 100 million multiplies and about
200 million add/subtracts per second. I'm implementing my design in an
FPGA. Does anyone have an idea how compute intensive this design would
be if implemented in a DSP? I'm not sure how these operations would
compare against the MFLOPS or MIPS figures that DSP data sheets refer
to. I would think my operations would need about 300 MIPS (multiply +
add), so a DSP claiming 300 MIPS or more of performance should be able
to deal with this design. Am I right?

Thanks,
Prashant
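
A rough back-of-envelope, assuming a DSP whose MAC unit retires one
multiply-accumulate per cycle (idealized numbers only; sustained
throughput is lower once loads, stores, and loop overhead are counted):

    100 M multiplies/s, each fused with one add  ->  100 M MAC ops/s
    remaining add/subtracts: 200 M - 100 M       ->  100 M ALU ops/s
    idealized minimum:       100 M + 100 M       ->  ~200 M instructions/s

So counting every operation separately (the 300 MIPS estimate) is
pessimistic if the multiplies pair with adds as MACs, but a "300 MIPS"
DSP running at its datasheet peak would still have little headroom; a
factor of 2-4 over the idealized 200 M instructions/s is a safer budget.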

Article: 48234

In article <6uelaskgw5.fsf@chonsp.franklin.ch>,
Neil Franklin <neil@franklin.ch.remove> wrote:
> You seem to have misread my answer. I consider "making a usable
> product" the target. I have no interest in a "prove we could do it".
> And yes, my time estimates are based on "full usable product".
>
> What you are most likely not taking into account is that I am not
> aiming for some complex massive feature set, but rather at "what are
> the bare bones that will do a decent job".

And my hunch is that it is a lot bigger than you think, for reasonable
levels of "decent". Nevertheless, if you truly wish to make a "full
usable product", more power to you. But I suspect that you will quickly
find all the corner cases (and for such simple silicon, there are A LOT
of corner cases: e.g., under what conditions can and can't you use the
flip-flops independent of logic in a LUT?), and all the tedious
crapwork involved in bitfile generation, static timing analysis, and
routing quite severe, with really no intellectual payoff.

> Their strengths, however many they are, do not cover their
> weaknesses: being closed source software.
>
> a) not user-compilable on any machine (yes, open source users regard
>    this as very important)

How many users (outside a small cadre of open source true-believers)
really do? How many of the apache websites are run by webmasters who
WANT to compile the web server, as opposed to ones who decide it has
the features they want at the price they want, with its extensible
form? How many linux users WANT to recompile the kernel? How many gimp
users WANT to recompile gimp, when even if they want something special,
they will probably extend functionality through a plugin?

> b) dependent on vendors for whether future users may get a copy (see
>    the rumors around JBits), no user distribution (forbidden)

Potential concern, but you can probably iron it out with the vendor if
you have a real reason (e.g., a textbook, a student board, etc). They
don't spend money supporting the "free-beer" tool side anyway.

> c) not free to peek into, modify, extend, redistribute

Wrong on the extend. That's the whole point of the public interfaces
(EDIF, XDL, JBits). And your extensions can be freely distributed as
well.

> I fully accept that you may not understand these arguments, or more
> likely simply attach a far lower importance to them. But as a
> long-time open source user, and part of the community that makes/uses
> it, I am aware of and demanding of such points in the software I put
> my time into learning and writing for.

> > You need a pretty big base to get this for FPGA tools.
>
> On that point I disagree. I see a possible simple toolset that can
> work.
>
> As an analogy: today C compilers are massive things (gcc >1 MByte);
> in 1973 the first C compiler was a small thing that ran in <64k RAM.
> Today's compilers are no doubt more powerful, but the 1970s ones were
> usable. Think more of a 1970-compiler-equivalent-sized toolset.
> Within that spec set, then fully developed to usability.
>
> That is within my reach.

A very poor analogy. C compilers can be simple (e.g. lcc), but you lose
a good deal of performance (50%). But partition, place and route is far
more complicated when dealing with an array like the Virtex. Even the
MINIMUM router is going to be a big hassle, let alone a quality router.
It isn't a matter of interconnect points, it's knowing the cost of
everything and the rather skewed flexibility/interconnect rules.

How many man-years have been spent on the existing free academic tools
like VPR and Pathfinder? Yet these invariably use much simpler array
models, to make the job much easier and to, by fiat, eliminate huge
numbers of corner cases and cruftiness.

> > But more importantly, you seem both
> > enthusiastic and knowledgeable. There are very many interesting
> > prototype tools which you could make to target conventional FPGAs,
>
> Which all have the problem of being outside of my viewfinder. My
> experience (and so frustrations and desires) comes from JBits usage.
> I have never used the traditional toolsets, in part because they
> looked too complicated to me, in larger part because they will not
> run on my Linux system.

Wrong on both accounts. Basic functionality is fairly straightforward
to USE, at least in the back end. The complications mostly come in
design entry, and that's really orthogonal to the back-end tools. And
the back end command line path (EDIF -> bitfile) does reportedly run
under Wine under Linux for Xilinx.

JBits has a sucky interface, agreed. A better JBits would be nice:
adding hierarchy to treat nets as nets, logic blocks as logic blocks,
would make a much nicer tool. But for a lot of the stuff a hierarchical
JBits would be used for, it is probably better to integrate at the XDL
level between placement and routing anyway, because a full JBits
router, without a LOT of design effort, is going to suffer in
comparison to the command line router. And even if it IS equal, without
static timing analysis it's still effectively useless.

> That said, I have the impression that many of the problems (such as
> the placement stuff you and rickman have mentioned) are inherent in
> the way the traditional tools work (for one, using simulation
> languages which treat an FPGA no differently than an ASIC). And it is
> the interfaces, derived from such working methods, that perpetuate
> these problems.

Not really. Routing, no matter the placement, is still a big crapfest
on something like the Xilinx, where you have sparsely populated
switchpoints and many different wire types. The same goes for static
timing analysis (an essential component of any viable toolflow for a
conventional FPGA) and bitfile generation.

There is really zero original intellectual content in building a
conventional router, static timing analyzer, or bitfile generator. It
is all straightforward "read the papers, add the appropriate knobs, and
implement". Different synthesis techniques can offer much better
placement (improving both toolflow time and performance), but that's
all in the FRONT end, not the back, which would allow that.

> Which is why I, as part of my first FPGA project, developed an
> abstraction layer on top of JBits. And I am using that and the
> programming style that evolved with it as the target to improve on.

See above.

> > I'd hate to see you waste your time constructing something which
> > won't be used because your target audience would rather use the
> > free-beer
>
> I suspect that quite a few (even some writing in these threads) will
> change, simply because it is open source. That is not about money, it
> is about being less limited, about being able to rely on things
> working how/where one wants them to and staying so.

I'd bet that the only significant users you get will be people using it
as a research framework, because someone else did the boring code. I
specifically WOULD be a target user, who would be happy with some
alternate stuff. But the alternate stuff would have to integrate with
the rest of the flow anyway; how ELSE am I going to get designs like
LEON 1 pushed through my hacked-up toolflow?

> > tools, when you could make real contributions to the community by
> > showing how to do things significantly better.
>
> And who says that my totally different approach may not turn out to
> become (or trigger) something better? That is something I cannot
> predict. But I can experiment. The best experiment is often the one
> that shows something unexpected. And that is usually the off-beat one.

Experiments are best done in prototypes, where you can say "Hmm, I need
another pass" and get away with it. And there is very little space to
experiment in the back end tools anyway.

The best programming language analogy is that the back end (simulated
annealing placement, routing, static timing analysis, and bitfile
generation) is akin to compiler front ends (lexing and parsing): we
know how to do it, it's just tedious. The only conventional FPGA
routing papers in the past 3 years have been of the "we got a 3%
improvement over last year's, in a theoretical architecture" sort. It's
actually getting to be something of a joke: "Not another *ROUTING*
paper". Yet unlike the compiler world (where there are meta-tools to
handle lexing and parsing), it's tedious implementation with no
automation, target specific, and effectively all uninteresting code.

> For one, I have decided to kill placement as a problem by simply
> having the user's code explicitly place the stuff, and then solve the
> problem of that being a lot of work by providing simple tools to make
> it easy. My current JBits work uses that method with large success.

You won't get anyone to do that unless and until you integrate it into
some existing toolflow (say a post-placement XDL -> your internal form
translator, and a path back to XDL for static timing analysis). But
that weds you, at least temporarily (and probably permanently), to the
conventional tools.

> A totally different approach from defining pure logic and then having
> the tools try to figure out how to do it. Cursing at their failure to
> be intelligent.

I know that, as a designer, I can beat placement by 15%+ (I've measured
it). But I still want placement tools for control logic and other crud.
And if I'm going to "fix" placement, it would be a pass BEFORE
placement to lock down modules and datapaths. I can't beat the router,
and I'm not going to try.

--
Nicholas C. Weaver                              nweaver@cs.berkeley.edu

Article: 48235

nweaver@ribbit.CS.Berkeley.EDU (Nicholas C. Weaver) writes:

> rickman <spamgoeshere4@yahoo.com> wrote:
> > So? Few if any significant companies can't afford the low end tools
> > that will get the job done. Not all tools are 5 figure. In fact very
> > few are. What is your point?
>
> When you are paying the engineers $50k/year, so they cost you
> $100k/year, you aren't going to blanch at the ~$1000 for the back-end
> Xilinx tools.

IF you are in that price range. Sure. I am not.

> > I am glad that you think *all* software is the same. Technology is a
> > matter of solving problems, not writing code. If you don't have good
> > algorithms, your code will be lousy. Writing code is the *easy*
> > part. Developing the algorithm is the hard part.
>
> Ohh, strongly disagree. Developing the algorithms and proving the
> concepts in code is the easy part (it's prototyping); it's turning or
> recreating that code into something robust and widely usable that's
> hard, IMO.

So the main 2 critics disagree totally with each other.

> {flow munched}
>
> > I don't understand any of this. What are you planning to do?
>
> Back end tools. Starting with bitgen and working backwards.
>
> Oh, where was static timing analysis?

Not mentioned in the post, as it was not part of the question I was
answering.

--
Neil Franklin, neil@franklin.ch.remove http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Programmer, Archer, Roleplayer
- hardware runs the world, software controls the hardware
code generates the software, have you coded today?

Article: 48236

In article <6ubs5wkexa.fsf@chonsp.franklin.ch>,
Neil Franklin <neil@franklin.ch.remove> wrote:
> > point all the *major* new designs that are going to be done with it
> > *have* been done and every thing else is maintenance.
>
> Makes one wonder why Xilinx re-warmed their over 2 years old Virtex
> in the form of Spartan-II. After already doing the same trick with
> Spartan, and having since repeated it with Spartan-IIE.
>
> They seem to sell, so there seem to be quite a few people who do not
> need the newest possible.

The latest Spartan xx is "what was the top part 1.5 process generations
ago" in the sweet spot die sizes. This is simply because 1.5 process
generations behind is the sweet spot for volume manufacturing, where
all the costs end up being low and you can crank out $10-30,
in-package, tested, signed, sealed, and delivered chips.

So why redesign the logic block?

> > And you can live a rich full life without the $10,000+ tools.
> > Right?
>
> Could you point out where I am supposed to have claimed contrary to
> that?
>
> I was pointing out that your "less users, A and X will not be able to
> afford development" stuff was nonsense. That there exist people
> prepared to pay large sums for tools proves that A and X can keep on
> making tools, by simply raising their price a bit.

Xilinx and Altera do NOT NOT NOT make money from their tools. The only
money they take is to act as a barrier, so they will only need to
support those likely to actually do something with their chips, and to
pay for whatever software is not developed in-house (which is a lot in
the case of Foundation).

People don't cry about using $2500 tools for putting designs on $2000
chips, or $0 tools for putting designs on $30 chips. Anyone who
actually has the resources to do a design has access to the tool: small
parts -> free beer, big parts -> full version.

> Or raise their price to 2 times what it is today. Still a lot less
> than what some people are prepared to pay.

XILINX AND ALTERA DO NOT MAKE MONEY ON THE BACK END TOOLS, and never
will, because the customer would know when they are being gouged and
refuse to deal with it, or switch brands. When you pay $2500 for
Foundation or $1100 for Alliance, you are paying for the ability to
call up and get (hopefully) clued people on the other end of the line.

> > tools in the future or start charging more for the chips to make up
> > the difference which would make it harder for them to compete.
>
> And so what? Their competitors (ASIC vendors) also have to pay for
> their tool development. Both can then choose how to distribute the
> costs over software licenses (fewer, larger ones) or chip costs
> (possibly more sales due to the open source software), or use some of
> the open source stuff to reduce their costs.
>
> May the best win.

Actually, ASIC vendors generally don't provide tools. TSMC etc. just
take designs and pop out chips. Cadence etc. provide the tools, which
the customers directly pay for.

> > Why would anyone be more motivated to develop backend tools? What
> > is their value without the front end tools?
>
> As a foundation to build front end tools on?
>
> The bitstream is the target. The back end makes that. From there on
> to the front is ever increasing comfort.

If I want a foundation to build INTERESTING tools on, I'd borrow the
existing back ends. All the interesting stuff is in the front.

> > What is your point? NO ONE can make a Xilinx compatible FPGA except
> > Xilinx. NO ONE can make an Altera FPGA except Altera...
>
> I have my doubts. Where there is a will (enough money) competitors
> will appear.

Wait another decade, and THEN you may see Xilinx 4000 compatible parts.
Patents are an enforced monopoly.

> > How is compatibility necessary in CPUs or FPGAs? It only exists in
> > the x86 world because the cat is out of the bag.
>
> And who says it will not get out of the bag in the FPGA world?

It doesn't even exist across parts; a V300 is NOT bitfile compatible
with a V400.

> > I disagree that the backend is needed most or first. But then it is
> > not my decision to make.
>
> Exactly. That is my decision. And my knowledge of the open source
> community that runs into it.

And I have given up dissuading you that this is the wrong end to flow
into. Ah well.

--
Nicholas C. Weaver                              nweaver@cs.berkeley.edu

Article: 48237

Nicholas C. Weaver wrote:
<snip>
> > > I disagree that the backend is needed most or first. But then it
> > > is not my decision to make.
> >
> > Exactly. That is my decision. And my knowledge of the open source
> > community that runs into it.
>
> And I have given up dissuading you that this is the wrong end to flow
> into. Ah well.

Steady on, guys....

If Neil F. wants to work up from JBits, that's his choice, and I can
see some merit in it. (I have given him some info on
reduced-eqn/vanilla outputs from other tools, to use in VAS.)

Some examples: I have used a humble uC disassembler as a '2nd opinion'
when chasing compiler/linker issues. It is often mentioned on this NG
how important it is to be able to 'see what the tools did (wrong)' -
things like encrypted netlists make that harder.

There is also plenty of education potential in a 'JBits up' approach -
system designers who 'have their head around' how it all comes together
at the lowest levels can get better results from higher level tools.
Reports and analysis are an important part of a design process, and it
seems this can only help that.

-jg

Article: 48238

Hi,

I am looking for a power consumption benchmark for FIR and FFT on
Xilinx Virtex (or another, newer version). I used XPower to estimate
the power consumption of a 32-tap linear phase FIR filter (sixteen
8-bit multipliers are used), with 1.8V supply voltage and 30 MHz
operation speed. The power is 35.3 mW (with power consumed by clock,
signal routing, and logic only), but I don't know if this number is
reasonable. Could anybody tell me where I can find a power consumption
benchmark?

thanks,
JackC

Article: 48240

You can do separate installs for 3.3 and 4.2 on the same machine. In
order to switch back and forth between them, you need to modify the
XILINX environment variable to point to the tools directory you are
currently using. Other than that, there is nothing special that needs
to be done. IIRC, there was a new Xilinx registration number to enter
when first installing 4.x, but that is a one-time thing.

I have both 3.3 and 4.2 installed on the same drive, in different
sub-directories, on my NT machine. To switch, I change just the
directory name in the XILINX environment variable, then I'm good to go.
You can't use both at the same time unless you are strictly command
line, in which case you can use the set command to change the
environment locally.

Noddy wrote:
> Hi,
>
> I am presently designing for a Spartan II using Foundation 3.3
> software. We have upgrades (still in their boxes) to Foundation 4.2
> as well as ISE 4.2. We need to upgrade the software sometime as I am
> going to be redesigning and modifying for a Spartan IIE.
>
> My question is the following: I was under the impression that ISE was
> really only useful if I was going to design for Virtex. Under this
> understanding, I will probably want to install the software for
> Foundation 4.2. However, I still want to use Foundation 3.3 as I am
> still working on the design, and am worried about the design having
> problems in 4.2. So what I want to do is put Foundation 4.2 onto a
> separate hard drive, and plug it into the same system. Will I have
> any registry issues? Is it possible to run two versions of the Xilinx
> Foundation software on the same machine? Do I have to do the entire
> Xilinx licensing thing again when I install 4.2? Finally, should I
> rather install ISE instead?
>
> Thanks
>
> Adrian

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
                                       -Benjamin Franklin, 1759
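
For the command-line case Ray mentions, a minimal NT batch sketch of
switching tool versions per console session (the install paths below
are illustrative assumptions, not defaults):

    REM Point the tools at the 3.3 install for this console only.
    set XILINX=C:\Xilinx33
    set PATH=%XILINX%\bin\nt;%PATH%

    REM To use 4.2 instead, open a fresh console and do:
    REM   set XILINX=C:\Xilinx42
    REM   set PATH=%XILINX%\bin\nt;%PATH%

Because set only affects the current console, the two versions stay
isolated without touching the global environment or the registry.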

Article: 48241

Just a small point. I was told (by the person who did it) that most of
the place and route software is written by one or two people. It is not
a huge job. Now the GUI and FPGA Editor, with lots of interfaces to
different tools... that is what takes all the programmers. The guts of
the place and route tools are done by a very small set of programmers.

Look at what you did with placement... you're just a part-timer with
lots of other things to do. If you had spent your time on routing, I
bet you'd have some results (I'll be an optimist for you ;-) All the
route data is right there in the XDL report; someone just has to hook
it up to the Toronto tool VPR...

Steve

> And my hunch is that it is a lot bigger than you think, for
> reasonable levels of "decent". Nevertheless, if you truly wish to
> make a "full usable product", more power to you.

Article: 48242

Depends heavily on your sample rate. You may also want to consider
distributed arithmetic, as it can reduce the physical size of the
filter. See the distributed arithmetic tutorial page on my website.
XPower will give a fairly accurate power estimate provided your
simulation vectors are representative of the actual circuit operation.

JackC wrote:
> Hi,
>
> I am looking for a power consumption benchmark for FIR and FFT on
> Xilinx Virtex (or another, newer version). I used XPower to estimate
> the power consumption of a 32-tap linear phase FIR filter (sixteen
> 8-bit multipliers are used), with 1.8V supply voltage. The power is
> 35.3 mW (with power consumed by clock, signal routing, and logic
> only), but I don't know if this number is reasonable. Could anybody
> tell me where I can find a power consumption benchmark?
>
> thanks,
> JackC

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930     Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little
temporary safety deserve neither liberty nor safety."
                                       -Benjamin Franklin, 1759
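
The reason the simulation vectors matter: CMOS dynamic power scales
with switching activity. The standard first-order model (a sketch; the
activity factor has to come from representative simulation, which is
exactly what XPower consumes):

    P_dyn ~= alpha * C * V^2 * f

    alpha : average switching activity (transitions per clock)
    C     : total switched capacitance (logic + routing + clock)
    V     : supply voltage (1.8 V here)
    f     : clock frequency (30 MHz here)

So the same netlist can report very different milliwatt figures
depending on the data activity in the vectors, which is why a single
"benchmark" number for an FIR or FFT is of limited use.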
nweaver@ribbit.CS.Berkeley.EDU (Nicholas C. Weaver) writes: > In article <6uelaskgw5.fsf@chonsp.franklin.ch>, > Neil Franklin <neil@franklin.ch.remove> wrote: > > >What you are most likely not taking into account is, that I am not > >aiming for some complex massive-features set, but rather an "what is > >the bare bones that will do an decent job". > > And my hunch is that is a lot bigger than you think, for reasonable So it is hunch against hunch. Only experiment can determine that. > conditions can and can't you use the flip flops independant of logic > in a LUT?), Already fully exposed when using JBits. Was no trouble for me. > >a) not user-compilable on any machine (yes, open source users regard > > this as very important) > > How many users (outside a small cadre of open source true-believers) > really do? As the "small cadre" is already in the 100'000s, those outside are not really that important. > How many of the apache websites are run by webmasters who > WANT to compile the web server, About 20-30% of those I know. Admittedly a skewed sample. > form? How many linux users WANT to recompile the kernel? Dito as above. About 5% of those I know have "fully self compiled" (i.e. every program) systems. They report CPU dependant 10-30% speed increase as the main reason. > >c) Not free to peek into, modify, extend, redistribute > > Wrong on the extend. Thats the whole point of the public interfaces > (EDIF, XDL, Jbits). And your extentions can be freely distributed as > well. I ment extending the actual code. Not adding a preprocessor (yes that is allways possible, but can be insufficient). > >As an analogy: Today C compilers are massive sized things (gcc >1MByte), > >in 1973 the first C compiler was a small thing that ran in <64k RAM. > >Todays compilers are no doubt more powerfull, but the 1970s ones were > >usable. Think more of an 1970-compiler equivalent sized toolset. > >Within that spec set, then fully developed to usability. > > A very poor analogy. C compilers can be simple (eg lcc), but you lose > a good deal of performance (50%). Which is acceptabe for simple designs. > But partition, place and route is far more complicated, when dealing > with an array like the Virtex. Even the MINIMUM router is going to be > a big hastle, let alone a quality router. It isn't a matter of > interconnect points, its knowing the cost of everything and the rather > skewed flexibility/interconnect rules. I have had a look at the PIPs documentation in JBits. I know what it looks like there: http://neil.franklin.ch/Projects/VirtexTools/Virtex-CLB-PIPs > Routing, no matter the placement, is still a big crapfest > on something like the Xilinx, where you have sparsely populated > switchpoints and many different wire types. See above. > >I have a suspect that quite a few (even some writing in these threads) > >will change, simply because it is open source. That is not about money, > >it is about being less limited, about being able to rely on things, > >working how/where one wants them to and staying so. > > I'd bet that the only significant users you get will be people using > it as a research framework, because someone else did the boring code. Also usefull. Even if not the original aim. > I specifically WOULD be a target user, who would be happy with some > alternate stuff. But the alternate stuff would have to integrate with > the rest of the flow anyway, how ELSE am I going to get designs like > LEON 1 pushed through my hacked-up toolflow? 
Run it (I assume it to be VHDL or Verilog code) through a compiler that targets my stuff as a back end. Compare the results.

--
Neil Franklin, neil@franklin.ch.remove
http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Programmer, Archer, Roleplayer
- hardware runs the world, software controls the hardware
  code generates the software, have you coded today?
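One concrete illustration for the routing side of this thread: minimal or not, the core of a router is a cost-driven shortest-path search over the routing-resource graph, and everything Nicholas calls hard lives in the cost model and the graph, not in the search itself. A bare-bones Python sketch, with a made-up graph format and nothing Virtex-specific:

    import heapq

    def route_net(graph, costs, source, sink):
        # Dijkstra over a routing-resource graph.
        # graph: dict node -> iterable of reachable nodes (the PIPs)
        # costs: dict node -> cost of using that node (wire type, congestion, ...)
        dist, prev = {source: 0}, {}
        heap = [(0, source)]
        while heap:
            d, node = heapq.heappop(heap)
            if node == sink:
                break
            if d > dist.get(node, float('inf')):
                continue                       # stale heap entry
            for nxt in graph.get(node, ()):
                nd = d + costs.get(nxt, 1)
                if nd < dist.get(nxt, float('inf')):
                    dist[nxt], prev[nxt] = nd, node
                    heapq.heappush(heap, (nd, nxt))
        if sink != source and sink not in prev:
            return None                        # unroutable with this graph
        path, node = [], sink
        while node != source:                  # walk back to recover the path
            path.append(node)
            node = prev[node]
        path.append(source)
        return path[::-1]

    # toy example: a source pin, two wire types, a load pin
    graph = {'out': ['single', 'hex'], 'single': ['in'], 'hex': ['in']}
    costs = {'single': 1, 'hex': 3, 'in': 0}
    print(route_net(graph, costs, 'out', 'in'))   # -> ['out', 'single', 'in']

A production router repeats this per net with congestion-adjusted costs (in the style of PathFinder); that outer loop, plus a faithful cost model for every wire type, is where the "big hassle" actually lives.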
Article: 48244

That is, provided XDL runs on your design. It gets into an infinite loop with one of my larger designs, apparently because of the size of the ncd file (it never writes an end record in 2GB of output, then stops generating output when the file reaches 2GB in length but keeps running internally).

Steve Casselman wrote:
> Just a small point. I was told (by the person who did it) that most of the
> place and route software is written by one or two people. It is not a huge
> job. Now the GUI and FPGA Editor... lots of interfaces to different tools...
> that is what takes all the programmers. The guts of the place and route
> tools are done by a very small set of programmers. Look at what you did with
> placement... you're just a part-timer with lots of other things to do. If you
> had spent your time on routing I bet you'd have some results (I'll be an
> optimist for you ;-). All the route data is right there in the XDL report;
> someone just has to hook it up to the Toronto tool VPR...
>
> Steve
>
> > And my hunch is that is a lot bigger than you think, for reasonable
> > levels of "decent". Nevertheless, if you truly wish to make a "fully
> > usable product", more power to you.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759
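Hooking the XDL report up to VPR, as Steve suggests, starts with pulling the inst and net records out of a report that (per Ray's experience above) can run to gigabytes, so it pays to stream rather than load the file whole. A rough Python sketch -- the record grammar here is deliberately simplified, a real XDL parser needs more care, and design.xdl is a placeholder name:

    def iter_xdl_records(path):
        # Yield top-level 'inst' and 'net' records one at a time, streaming
        # line by line so a multi-GB report never sits in memory. Assumes
        # each record ends with a ';' -- roughly true of XDL, but this is a
        # sketch, not a validated parser.
        record = []
        with open(path) as f:
            for line in f:
                stripped = line.strip()
                if not record:
                    if stripped.startswith(('inst ', 'net ')):
                        record.append(stripped)
                else:
                    record.append(stripped)
                if record and stripped.endswith(';'):
                    yield ' '.join(record)
                    record = []

    # e.g. count the nets without ever holding the report in memory
    nets = sum(1 for r in iter_xdl_records('design.xdl') if r.startswith('net '))
    print(nets, 'nets')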
Article: 48245

nweaver@ribbit.CS.Berkeley.EDU (Nicholas C. Weaver) writes:
> Neil Franklin <neil@franklin.ch.remove> wrote:
> >
> >Makes one wonder why Xilinx re-warmed their over-two-year-old Virtex in
> >the form of Spartan-II. After already doing the same trick with Spartan,
> >and having since repeated it with Spartan-IIE.
> >
> >They seem to sell, so there seem to be quite a few people who do not need
> >the newest possible.
>
> The latest Spartan-xx is "what was the top part 1.5 process
> generations ago" in the sweet-spot die sizes.

And it (Spartan-IIE) is also very similar to the Virtex-E and as such an easy target to expand to.

> So why redesign the logic block?

No need to. And that is why we have quite a few near-bit-compatible families.

> >> And you can live a rich full life without the $10,000+ tools. Right?
> >
> >Could you point out where I am supposed to have claimed the contrary?
> >
> >I was pointing out that your "fewer users, A and X will not be able
> >to afford development" stuff was nonsense. That there exist people
>
> Xilinx and Altera do NOT NOT NOT make money from their tools.

And that makes rickman's "open source is a threat" argument totally moot.

> >> tools in the future or start charging more for the chips to make up the
> >> difference which would make it harder for them to compete.
> >
> >And so what? Their competitors (ASIC vendors) also have to pay for
> >their tool development. Both can then choose how to distribute the
> >costs over software licenses (fewer, larger ones) or chip costs
> >(possibly more sales due to the open source software), or use some of
> >the open source stuff to reduce their costs.
> >
> >May the best win.
>
> Actually, ASIC vendors generally don't provide tools. TSMC etc. just
> takes designs and pops out chips. Cadence etc. provide the tools,
> which the customers pay for directly.

Which in the end, for the customer, is still just a total-cost comparison. Whether they pay Cadence directly or via TSMC is not really relevant. Just as it is irrelevant whether they pay Xilinx via software costs or chip costs.

> >> What is your point? NO ONE can make a Xilinx-compatible FPGA except
> >> Xilinx. NO ONE can make an Altera FPGA except Altera...
> >
> >I have my doubts. Where there is a will (enough money) competitors
> >will appear.
>
> Wait another decade, and THEN you may see Xilinx 4000-compatible
> parts. Patents are an enforced monopoly.

I know enough about patent law. It is just a matter of money to get around it (even if that means buying strategic patents to trip Xilinx up and make it cheaper for them to license their stuff).

> >> How is compatibility necessary in CPUs or FPGAs? It only exists in the
> >> x86 world because the cat is out of the bag.
> >
> >And who says it will not get out of the bag in the FPGA world?
>
> It doesn't even exist across parts; a V300 is NOT bitfile-compatible
> with a V400.

Depends on your definition of compatible. I regard them as identical, modulo some size parameters (one line in a table each). Even the XC2S30 and XCV300 are just table differences. Only from the E types on do the bit meanings change (this seems to be limited to the DLLs and IOBs). And XCVxxxE to XC2SxxxE seems to also be just table lines.

--
Neil Franklin, neil@franklin.ch.remove
http://neil.franklin.ch/
Hacker, Unix Guru, El Eng HTL/BSc, Programmer, Archer, Roleplayer
- hardware runs the world, software controls the hardware
  code generates the software, have you coded today?
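Neil's "one line in a table" view of near-compatible families is easy to picture: one parameterised bitstream generator plus a row of geometry numbers per part. A toy Python illustration -- the part names are real, but treat the numbers as placeholders to be checked against the data sheets:

    # One row per device; the generator reads the row instead of
    # hard-coding any geometry. Values are illustrative only.
    DEVICES = {
        #  part      (clb_rows, clb_cols)
        'XCV300':   (32, 48),
        'XCV400':   (40, 60),
        'XC2S30':   (12, 18),
    }

    def clb_count(part):
        # Anything geometry-dependent is derived from the table row.
        rows, cols = DEVICES[part]
        return rows * cols

    print(clb_count('XCV300'))   # 1536 with these placeholder numbers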
Article: 48246

In article <6un0pgioiv.fsf@chonsp.franklin.ch>, Neil Franklin <neil@franklin.ch.remove> wrote:

>> How many users (outside a small cadre of open source true-believers)
>> really do?
>
>As the "small cadre" is already in the 100'000s, those outside are not
>really that important.

Is the cadre of open source true believers really that high?

>> A very poor analogy. C compilers can be simple (eg lcc), but you lose
>> a good deal of performance (50%).
>
>Which is acceptable for simple designs.

We have this with lcc, and I'll argue that it is not acceptable to most. People use GCC because it is competitive with commercial tools. If they aren't competitive, all but a few early adopters will ignore them. Remember, people who are doing FPGA implementations tend to be a fairly performance-obsessed lot; why else would we be throwing silicon at the problem? Linus Torvalds uses BitKeeper because it does a better job, even if it does get RMS's panties in a twist. CVS may be free, but if it doesn't do as good a job, people will go with the closed source alternative.

>> I specifically WOULD be a target user, who would be happy with some
>> alternate stuff. But the alternate stuff would have to integrate with
>> the rest of the flow anyway; how ELSE am I going to get designs like
>> LEON 1 pushed through my hacked-up toolflow?
>
>Run it (I assume it to be VHDL or Verilog code) through a compiler
>that targets my stuff as a back end. Compare the results.

The only way this will occur is if you tie your flow in with the Xilinx flow: if you want to do routing, accept post-placement XDL. If you just want to do bitfile manipulation, only do layers on JBits. You will probably NEVER be able to do static timing analysis without being able to read the Xilinx databases. I doubt you could ever be independent of the Xilinx flow, but if you ever DO want to be, you are going to have to leverage that flow for as much as possible during development.

--
Nicholas C. Weaver nweaver@cs.berkeley.edu
Article: 48247

Did you connect all 32 devices to the same clock?

-Stan

"Peter" <z80@ds2.com> wrote in message news:o9blqucc7e0148kdj8e0lspftuhmcm0eq3@4ax.com...
>
> In 1992 I designed a product which was a PC (ISA) card containing 32
> (yes thirty two) XC3064 devices. It is basically a complicated pulse
> generator. Each of the devices contains the same config data. There is
> also an XC3030 which implemented the PC interface and some other
> simple stuff.
>
> The FPGA config data, for all 33 devices, is loaded from a simple
> program running under MS-DOS which reads in a Motorola S-record file
> and loads it into the card.
>
> The customer had a few of these, then came back in 1996 for some more.
> By then, Xilinx had almost dropped these parts and I had to redesign
> the card to use a TQFP version of the 3064, of a higher speed than the
> original one. Fortunately it still worked!
>
> I say "fortunately" because there have always been config loading
> problems with Xilinx parts - if you had a lot of them on a board (I
> last did FPGA design in 1997). They were very sensitive to the CCLK
> edges, not too fast and not too slow. I had to play around with
> different types of CMOS drivers to get the edges exactly right, and I
> do have a 400MHz scope. There was no explanation for this behaviour,
> other than the CCLK input having multi-GHz-speed and picking up
> non-monotonic risetimes which a 400MHz scope did not show.
>
> ...
Article: 48248

rickman wrote:
>
> Russell wrote:
> > ...
> You are talking about the implementation of the fitter and such. I want to
> hear how you plan to execute this project. What are the parts to be
> built and what will be the sequence of building them? How long will it
> take? What resources or information do you need?

One disadvantage of knowing how to do everything is having time to do just about nothing. It's a project I'll attempt some time, but it's too distracting to do right now. Anyway, I need to learn more about programming X Windows. I can do M$ Windows, but it's strangely unattractive to do so ;)
Article: 48249

rickman wrote:
>
> Russell wrote:
> >
> > rickman wrote:
> > > ...
> > > Many does not include the majority of FPGA engineers, IMHO. In the FPGA
> > > world you have to work with the best chip for the job, and that is often
> > > the most current chip.
> >
> > Rarely. The biggest, latest chips have the highest profile.
>
> Don't know what you mean by profile.

Just means that the latest and greatest big chip is what everyone hears about, but the garden-variety two-year-old chips are the biggest sellers and a good target for open-source tools. Once the tools are developed, adapting them to new chips would be easier and quicker.

> > No. Internal tools can be fixed in short order if companies have
> > a four-digit support contract, which many will.
>
> What does this mean? I don't get it.

Just means that open-source tools won't cause vendor-supplied tools to disappear due to lack of profit.

> > > Why would anyone be more motivated to develop backend tools? What is
> > > their value without the front end tools?
> >
> > Without free chip information to make backend tools, frontend
> > tools are useless. Compilers and fitters are fun to make.
> > GUIs are boring.
>
> Tell that to the frontend tool vendors. :) You are making up rules to
> suit your purposes. No one is talking about GUIs. The compiler *is* a
> front end tool.

I think a good graphical interface like the floorplanner is needed for doing manual routing. This is an intermediate tool after the HDL compilation step.