I think you are somewhat missing the point with the A & X question, in that you ask the wrong question. It's not who has the best architecture or which one is fastest; it actually doesn't really matter for 99% of designs, as Austin has pointed out before. Either is good enough, and if you're in the 1% where it does matter, then nothing you do will give you a good enough idea until you try to fit the final FF or CLB, and even then your design will be so customised that an A design is almost impossible to translate to X and vice versa.

What really matters is what price X's or A's FAE will sell you the parts at, what support they will give you, and what evaluation boards are around that cover some, if not all, of your needs. The decision at my work was which company gave us the best discount; that happened to be Xilinx. It also happened that they do bus LVDS, which we are using, so our design naturally forced A out anyway. We just didn't tell anyone :-)

If you are building a one-off then it really doesn't matter anyway. Use a dartboard and a blindfold; it will be as accurate as a detailed study. For a one-off, just choose an eval board with a largish device, get it all working and see how big it is, then choose a device twice the size required (for the inevitable fixups).

my two cents
Simon

"Joseph H Allen" <jhallen@TheWorld.com> wrote in message news:d633bt$g1u$1@pcls4.std.com...
> Thanks you all. This has been very helpful.
>
> --
> /* jhallen@world.std.com (192.74.137.5) */ /* Joseph H. Allen */
> int a[1817];main(z,p,q,r){for(p=80;q+p-80;p-=2*a[p])for(z=9;z--;)q=3&(r=time(0)
> +r*57)/7,q=q?q-1?q-2?1-p%79?-1:0:p%79-77?1:0:p<1659?79:0:p>158?-79:0,q?!a[p+ q*2
> ]?a[p+=a[p+=q]=q]=q:0:0;for(;q++-1817;)printf(q%79?"%c":"%c\n"," #"[!a[q-1]]);}

Article: 84201
Another option is to rewrite the parallel interface to use one of the FPGA's internal memories. It can form part of a circular buffer that the ISR would normally maintain, and then you can use it as if there were an ISR doing the work for you. All I am describing is a hardwired ISR, of course, but it is something that you will always use :-) (a sketch follows below this post).

Simon

"Big Boy" <bigboytemp@hotmail-dot-com.no-spam.invalid> wrote in message news:xoKdna6e28lETxnfRVn_vg@giganews.com...
> That's why the best way to implement is to create an interrupt service
> routine (ISR) that responds to the UART.
>
> This way, even if you do slow-speed processing in the background, the ISR
> will respond quickly to the UART.
>
> Sooner or later, you may have parts of your code that start taking
> more time and have slower response times. Of course, you're the one
> who knows what requirements your application has. If you think that
> a 16-byte buffer would be enough, then no need to trouble with
> interrupts ;)

Article: 84202
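To make the "hardwired ISR" idea above concrete, here is a minimal VHDL sketch of a circular buffer held in inferred block RAM: the parallel interface writes one word per strobe and the write pointer wraps by itself, so the consumer side can read the buffer as if an ISR had filled it. The entity name, widths and port names are illustrative assumptions, not taken from the original post.

    -- Sketch: hardware "ISR" capturing parallel data into a circular
    -- buffer in FPGA block RAM (simple dual-port RAM, inferred).
    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity circ_buf is
      generic (
        ADDR_BITS : natural := 9   -- 512 entries, fits one block RAM
      );
      port (
        clk      : in  std_logic;
        rst      : in  std_logic;
        -- producer side: parallel interface strobes one word per wr_en
        wr_en    : in  std_logic;
        wr_data  : in  std_logic_vector(7 downto 0);
        -- consumer side: read as if an ISR had filled the buffer
        rd_addr  : in  std_logic_vector(ADDR_BITS-1 downto 0);
        rd_data  : out std_logic_vector(7 downto 0);
        wr_ptr_o : out std_logic_vector(ADDR_BITS-1 downto 0)
      );
    end entity;

    architecture rtl of circ_buf is
      type ram_t is array (0 to 2**ADDR_BITS-1) of std_logic_vector(7 downto 0);
      signal ram    : ram_t;
      signal wr_ptr : unsigned(ADDR_BITS-1 downto 0) := (others => '0');
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if rst = '1' then
            wr_ptr <= (others => '0');
          elsif wr_en = '1' then
            ram(to_integer(wr_ptr)) <= wr_data;  -- store incoming word
            wr_ptr <= wr_ptr + 1;                -- wraps automatically
          end if;
          -- registered read port for the consumer
          rd_data <= ram(to_integer(unsigned(rd_addr)));
        end if;
      end process;

      wr_ptr_o <= std_logic_vector(wr_ptr);
    end architecture;

The consumer compares its own read pointer against wr_ptr_o to see how much data is waiting, exactly as software would with an ISR-maintained buffer.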
When you need to worry about floorplanning, you had better have a PhD. It's one of the black arts when you are concerned about arrival times. It's far better to let the tools do it for you in most cases, but there are occasions where the tools fall flat; even then you only fix the fault, you don't place the whole design. That's also why there are only a few books on it, I would suspect.

Simon

"Lukasz Salwinski" <lukasz@ucla.edu> wrote in message news:d630a5$jd9$1@zinnia.noc.ucla.edu...
> hello,
> there seems to be a plethora of textbooks on VHDL/Verilog
> available. But what about floorplanning ? Are there any
> resources (print/Web) available ?
>
> lukasz

Article: 84203
There are particular routes from a LUT output to particular LUT inputs that are faster; you can get down to about 10 ps, from memory. We did exactly this on our CRC32 core, which has three levels of LUTs and can run at about 380 MHz in Virtex-II. To do this, though, you have to spend a lot of time in FPGA Editor, going back and forward to get the timing. You get different timings from a LUT output to different inputs of the next (same) LUT, so controlling which output links to which input gets you reduced flight times.

Starting from the simple end, you can floorplan your LUTs roughly based on spatial distance before going to the extremes I describe above (see the placement sketch below this post). You can also use the multi-pass place and route feature of the tools, and that might get you close enough. Another thing to try is setting your synthesiser for area; it seems contradictory, but that sometimes goes faster than speed optimisation. Try all of these first, then try the more difficult techniques.

John Adair
Enterpoint Ltd. - Home of Spartan-3 PCI Development Boards.
http://www.enterpoint.co.uk

"Gunter Knittel" <knittel@gris.uni-tuebingen.de> wrote in message news:d631kk$clv$1@newsserv.zdv.uni-tuebingen.de...
> Hi,
>
> I'm trying hard to speed-optimize an arithmetic function
> on a VII 4000 - 4 device, using ISE 7.1.
> I have minimized the logic down to a few layers of 4-input
> LUTs. However, the routed design spends much more time
> on the wire than in the LUT. In particular, communication
> from one slice to the next in the same CLB through the Switch
> Matrix can be slow, sometimes in the order of 1ns.
>
> So my question is: is there no fast private communication between
> slices in the same CLB (other than shift and carry)? Is there any
> documentation available about the performance of the Switch Matrix,
> and how I should arrange the logic such that fastest interconnects
> can be made?
>
> Thanks a lot
> Gunter

Article: 84204
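As an illustration of the simpler "floorplan your LUTs based on spatial distance" suggestion above, here is a minimal VHDL sketch that pins a tiny two-level LUT tree into neighbouring slices with RLOC attributes on Xilinx unisim primitives. The entity, the XOR function and the slice coordinates are illustrative assumptions only; the coordinates that actually help depend on the family, the device and the surrounding design, and getting the last picoseconds still means FPGA Editor work as described in the post.

    -- Sketch: relative placement of a small LUT tree so both levels
    -- land next to each other instead of wherever the placer drops them.
    library ieee;
    use ieee.std_logic_1164.all;
    library unisim;
    use unisim.vcomponents.all;

    entity xor_tree is
      port (
        a, b, c, d : in  std_logic;
        y          : out std_logic
      );
    end entity;

    architecture rtl of xor_tree is
      signal t0, t1 : std_logic;

      attribute RLOC : string;
      -- first level in one slice, second level in the slice next door
      attribute RLOC of lut_ab  : label is "X0Y0";
      attribute RLOC of lut_cd  : label is "X0Y0";
      attribute RLOC of lut_out : label is "X1Y0";
    begin
      -- INIT X"6" is XOR of the two LUT inputs
      lut_ab : LUT2 generic map (INIT => X"6")
        port map (I0 => a, I1 => b, O => t0);

      lut_cd : LUT2 generic map (INIT => X"6")
        port map (I0 => c, I1 => d, O => t1);

      lut_out : LUT2 generic map (INIT => X"6")
        port map (I0 => t0, I1 => t1, O => y);
    end architecture;

The RLOCs form a relatively placed macro, so the whole group can still float as a unit while keeping the internal routes short.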
Funny that you think it takes a PhD to drive layout tools. In the olden days most chips were laid out by non-degreed draftspeople who did not do the circuit design, but kind of knew what looks right. There were at least two books for them; Motorola and Tanner EDA both have IC layout books, still valid today for polygon editing and even floorplanning, since it's the same idea no matter who you worked for. Even 3u vs .2u is just more rules and layers.

As for FPGA floorplanning, all I see is the (not much) help guides; I have never seen any books on it. They would date too quickly, and the knowledge is very specific to each family, even each member. It is a bit of a black art and takes a lot of practice too. For my own tastes the manual draw flow is about 100x too slow to be practical except for critical repeated blocks. It's probably worth doing some test layout work along the way to get a feel for what the tool will do with different options, but the results vary so much with the smallest changes. It's like throwing it all up in the air and seeing it land differently every time. When the timing is easy, the tool has zero incentive to do better. When the timing is almost impossible, the tool has only heuristics to find a result, and it never looks like what a person would produce. In an empty design there's not much call for it, and in a tightly packed design it's already too late.

But there is one time it is justified, and that is when you have N copies of the same mega cell that could be hand packed and stepped and repeated. It may not actually be much faster per cell, but it most certainly can improve packing density and makes all copies have more or less the same timing. Humans really can do better at highly structured layout, but the tools will exhaust your patience pretty quickly. Unless you are way over 120 MHz I doubt you need it, unless you already know how to do it anyway.

johnjakson at usa dot com

Article: 84205
I forgot to say that manual design is inherently bottom-up design, which means you assemble objects and see their interactions as you edit them, even if the logic is incomplete. The tools should not care as long as you believe the logic is correct. But the FPGA layout tools don't like that; they insist on a complete design, which interferes with what you are trying to accomplish.

I have been fighting this battle for 20+ years with EDA writers; they don't use their own tools the way some of us would wish, only in the prescribed way. At one time I did IC mask layout and had no way to turn DRC off, which is like driving around with a cop in your passenger seat, only much worse!

I wonder how much demand there would be for a really slick commercial FPGA layout tool that had at least a basic model of the LUTs and wiring delays that could be correlated with actual devices. I have some ideas on this, but other projects come first.

johnjakson at usa dot com

Article: 84206
Geogle <georgevarughese@indiatimes.com> wrote:
> Uwe Bonnes wrote:
> > Geogle <georgevarughese@indiatimes.com> wrote:
> > > Hi,
> > > Does this "Free ISE WebPACK 7.1i" for linux work
> > > with any distribution other than Red Hat Enterprise Linux 3 ?
> > > I tried to install this under Fedora Core 3 / Debian and
> > > installation didn't succeed. Looks like the installer is
> > > linked against libwiclient.so, libcommdlg50.so ....
> > > libodbc50.so etc, which are not there on the system.
> > > Does anyone know which package provides these ?
> >
> > The webpack installer is in charge of installing them. They are found in
> > ../bin/lin
>
> Thanks, they are in the BIN area!
> When I tried to install the software (running setup from the extracted
> files, and "sh Webpack*.sh" yielded the same result), I got the
> following message:
>
> Wind/U X-toolkit Error: wuDisplay: Can't open display

Normally $DISPLAY is ":0.0" on the local host. Zillions of applications work with that setting. The "WINDU" library that Xilinx uses for implementing a Win32 library layer on *NIX, however, doesn't understand it. Set it to ":0" and Xilinx should start up.

Bye
--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 84207
CPU2000 wrote:
> Why don't you take a look at the following 8051 from Hitech Global:
>
> http://www.hitechglobal.com/ipcores/dp8051.htm
>
> Their core is proven in multiple ASICs and FPGAs

They are also a bit more expensive than free. I think the parent poster was looking for a free IP core.

Article: 84208
Robert Au wrote:
> Hi,
>
> I have been using Altera Quartus II v4.2 SP1 for quite a long time.
> Recently, I have a problem where the fitter changes the logic equations
> compared to the synthesis.
>
> 1. I wonder why the fitter needs to change the post-synthesis logic
> equation during the fitter stage.
>
> More interesting is that the changes make my synchronous register
> behave asynchronously.
>
> Post-synthesis logic equation:
> ZD1_RD_PTR[1] = DFFEAS(ZD1_RD_PTR[0], H1_rd_fifo_rd_clk, !PIN_RSTN, , , , , , );
>
> Post-fitter logic equation:
> ZD1_RD_PTR[1] = DFFEAS( , GLOBAL(H1L39), !PIN_RSTN, , , ZD1_RD_PTR[0], , , VCC);
>
> Obviously, the behaviour will be totally different!
>
> 2. Can someone tell me how to turn off the fitter optimization?
>
> Robert

What is your design flow? Do you use internal synthesis or a third-party tool? Did you activate retiming (physical synthesis)?

Quartus II physical synthesis can now perform very clever optimizations, which often alter the local behaviour but (hopefully!) respect the macroscopic behaviour. Register retiming is one such well-known optimization technique. If in doubt, run a post-layout simulation to be sure. If it works, you shouldn't be worried by what the tool did to improve the design's performance.

If you do worry (or if you somehow need to preserve conformance between RTL and post-layout, for example for formal verification, or because you debug at the internal register level with SignalTap II), then you should disable physical synthesis. The relevant settings section is "Synthesis netlist optimization".

Bert Cuzeau

Article: 84209
Simon Peacock wrote:
> When you need to worry about floor planning.. you had better have a PhD.
> Its one of the black arts when you are concerned about arrival times ... its
> far better to let the tools do it for you in most cases, but there are
> occasions where the tools fall flat but then you only fix the fault not
> place the whole design.

I disagree. If you need the performance, floorplanning can make a huge difference, especially in designs that are heavily datapath. The automatic placement tools fall seriously short of what a human can do. No need for a PhD, though. I find that floorplanning skill seems to be more of an innate-ability thing: either you have it or you don't. Some people are really good at it from day one, while others never get the knack regardless of how long they study it.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930   Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 84210
kempaj@yahoo.com wrote:
> I don't think this point is quite fair.

Well, IMHO it is quite fair. The quality of support from the Xilinx-associated experts is far better than from Altera. A striking example: once I asked here about the clocking of the Cyclone 1C6 device (i.e. is it possible to use a sine wave as the clock source of a PLL?), and it was a Xilinx guy who gave me a comprehensive answer. :-)

Best regards
Piotr Wyderski

Article: 84211
"Simon Peacock" wrote: >When you need to worry about floor planning.. you had better have a PhD. >Its one of the black arts when you are concerned about arrival times ... its >far better to let the tools do it for you in most cases, but there are >occasions where the tools fall flat but then you only fix the fault not >place the whole design. Floorplanning doesn't take a PhD. Datapath floorplanning is more like doing a jigsaw puzzle than anything else. The tools can't and don't do a the best or even a very good job of placement. Not the fault of the vendors, it is a very hard problem to find the best answer in the general case. The tools do an acceptable job for most designs anymore. What humans can easily do is to find a very good answer, especially if the structure is regular or can be made to be regular, or if there is one key problem to solve. For the first case, think about a simple datapath. The logic, when built into LUTs (and other elements) often can be made as a regular structure where each bit uses basically the same logic, and if this logic is placed the same for each bit, then the control lines can be short and regular as well. This can make for a fairly simple to implement, fairly optimal design. Datapath floorplanning is an "easy" problem, as it can be done by a computer program that completes in reasonable time. If you design enough datapaths, you could save time building up a library of elements and building from those. There are dathpath building tools for ASICs, and they have been around for almost a decade. Maybe in another decade someone will bring one out for FPGAs, as part of a high end FPGA tool. For the second case, my most recent design is fairly low speed, ~50 MHz in a Spartan-3. At one point in time, I was getting a timing failure on a BlockRam -> some data path logic -> multiplier path. The only reason why this was failing was that the automatic placement put the BlockRam in one corner of the part and the multiplier in the far corner of the part. I'm not sure why the tools have such problems with the large blocks, but my experience is that the automatic placement of BlockRams and multipliers is often poor. The cure was simple: I fixed placed all the BlockRams and multipliers to something reasonable. I've also seen cases where, to meet timing, a LUT or FF had to be in a specific location or a few specific locations. The automatic placement would often get close, but close wasn't good enough. -- Phil Hays Phil-hays at posting domain (- .net + .com) should work for emailArticle: 84212
Thank you very much!

Article: 84213
Hello, group members!

"Peter Soerensen" <pbs@mortician.dk> wrote in message news:ee8e112.-1@webx.sUN8CHnE...
> Hi,
>
> I'm using the EDK 7.1 for a project and I need to use a memory manager
> like malloc. According to Xilinx' documentation there should be a
> xil_malloc() function. I have set flag "requires malloc" in the software
> properties in system platform and re-generated the libraries. The tool
> creates a nice xil_malloc.h file for me but when I try to link my program
> the linker cannot find it. What is wrong??
>
> I have been plagued with this for quite a while ;-)
>
> thanks, peter

Article: 84214
I have defined the type "Unchar32", a structure of 32-bit data. I cast the SDRAM address to an "Unchar32" pointer and assigned it to the pointer variable "SdramAddr". I then write data to the SDRAM by assigning to each field of "SdramAddr", but I get the same wrong data as when I write to the SDRAM directly. What is the matter?

Article: 84215
The problem is that most users design top-down, and when it comes to profit, it's top-down too. And I agree: quite a few years ago we used to argue with the vendors about the problems with their tools, but I am more philosophical these days.

Even if there were a basic model of a LUT, you would have to have a different model for each family, and probably for each device. The skill in designing this is far more than the 'average' task. Also, using it would force limitations on families or devices, which is the opposite of what any vendor wants. And I think you summed it up perfectly: nice concept, but who has the time to do it for a minority of the users?

Simon

"JJ" <johnjakson@yahoo.com> wrote in message news:1116076267.579559.109770@g14g2000cwa.googlegroups.com...
> I forgot to say that manual design is inherantly bottom up design which
> means you assemble objects and see their interactions as you edit them
> even if the logic is incomplete. The tools should not care as long as
> you believe the logic is correct.
>
> But the FPGA layout tools don't like that, they insist on a complete
> design that interferes with what you are trying to accomplish.
>
> I have been fighting this battle for 20+ years with EDA writers, they
> don't use their own tools the way some of us would wish, only in the
> proscribed way. At one time I did IC mask layout and had no way to turn
> DRC off which is like driving around with a cop in your passenger seat,
> only much worse!
>
> I wonder how much demand there would be for a realy slick and
> commercial FPGA layout tool that had at least a basic model of the LUTs
> and wiring delays that could be correlated with actual devices. I have
> some ideas on this but other projects come 1st.

Article: 84216
"Ray Andraka" <ray@andraka.com> wrote in message news:17ohe.16613$aB.5455@lakeread03... > Simon Peacock wrote: > > >When you need to worry about floor planning.. you had better have a PhD. > >Its one of the black arts when you are concerned about arrival times ... its > >far better to let the tools do it for you in most cases, but there are > >occasions where the tools fall flat but then you only fix the fault not > >place the whole design. > > > > > > > I disagree. If you need the performance, floorplanning can make a huge > difference, especially in designs that are heavily data path. The > automatic placement tools fall seriously short of what a human can do. > No need for a PhD though. I find that floorplanning skill seem to be > more of a inate ability thing: either you have it or you don't. Some > people are really good at it from day one, while others never do get the > knack regardless of how long they study it. > Performance is usually the best place for the tweak.. the problem with hand placing is that a placement for one family won't necessarily be any good for another.. and it will reach a point where within the same family.. placements won't necessarily be combatable IMO. I would also agree with the you've either got it or not... that's why engineers still have jobs and we haven't all been replaced by computers.. hence the black art crack :-) Floor planning is best learned by trial and error I think... I also suspect that most people are best ignoring it until the have to do otherwise.. debugging is best done at higher levels and ignorance can be bliss.. or at least let you sleep at night. My best is to use Symplify... I build constraints into my VHDL and use Xilinx unisim to force certain placement or a particular grouping (usually for IO) and check the floor planner to make sure it does it. More than that.. I ignore.. especially if the timings are OK.. I don't have to go deeper or work weekends and I finish on time with confidence (and a pay check!) SimonArticle: 84217
Are there any FPGA design tools which will run under Mac OS X? I've found that the Icarus Verilog simulator and synthesis tool will run under OS X, but I'm not sure whether that's actually useful for programming any current FPGA part.

Thanks.

--
Ron Nicholson   rhn AT nicholson DOT com   http://www.nicholson.com/rhn/
#include <canonical.disclaimer> // only my own opinions, etc.

Article: 84218
Nicholas Weaver wrote:
<snip>
> Thus I personally wonder whether the primary focus of the pissin match
> should be mostly about tools (both the vendor tools and support for
> third party tools, especially easy floorplanning, datapath aware
> placement, & retiming), density ($/LE), and features (Brand X has a
> big lead here), rather than who's lut is 10% faster on what functions,
> and who's interconnect might be slightly faster on some designs and
> slower on others.

Or the number of user designs that broke on the latest Vxyz release?

-jg

Article: 84219
AFAIK no, and likely never, I suspect. The Mac has had, and still has, some EDA tools for IC layout (4+), PCB (2+), spice simulation (a few), and open source (gEDA etc.), but those were open to any technology or company. FPGA is a lock-in game: you choose your language and then your device and vendor, and you need a tool with intimate knowledge of the device, which can't just be encoded in a portable technology file as it was for ASIC design.

I could imagine myself developing a platform-neutral FPGA floorplanning tool, probably using wxWidgets, maybe even Java. While such a tool could not have precise knowledge of FPGA slices, it could have a crude enough model to allow very responsive hand placing with immediate DRC and feedback about likely performance. It seems today half of all delays are in the wiring, and the current floorplanning tools are hopeless for the guy like me who wants complete visual control over some part of the layout. While the tools sort of work, the cycle time for changing the floorplan and rechecking each step is minutes when it should be under a second.

Input is another problem: the synthesis would still need to be done by another tool, and that means timing/layout-driven synthesis. The output would be pretty low-level instances of LUTs. If the synthesis is using layout to guide itself, it defeats the whole manual-driven thing. Output would be a ucf file, I guess.

So OS X is maybe 3% of the general market and probably even less in the EE world; why would anyone even bother? I know it xxxxx; I wouldn't mind getting a mini myself, kind of matches the other mini thing.

johnjakson at usa dot com

Article: 84220
"Maybe in another decade someone will bring one out for FPGAs, as part of a high end FPGA tool. " How many dollars do you think such a productivity tool would be worth? An ASIC guy will easily pay $100K for that sort of thing, I bet the average FPGA guy today would squirm at $5K. ASIC EDA tool developments are financed by VCs who expect a decent ROI. At few $k per seat, I see less chance of that even if volumes could be higher. Now if what X is saying about make it your asic is true, then many more ASIC guys will be changing over to FPGA design flows, but the problem is tools, ASIC guys are used to high value high cost tools for 1M gate designs. Well if FPGAs can do 1M equivalent size projects then will these ASIC guys want better more open tools or will they happily accept total vendor lock in. As an x ASIC guy I know the answer to that, they will hate it. But I suspect if a Meta FPGA model can be built for each popular FPGA device, I could see some useful tools being built. Not sure if it will happen. see other post on OSX Since I only see FPGAs through Webpack eyes, I can't know what Synplicity, Mentor offerings can do for the floor planning pros. It keeps coming back to the synthesis, who is in charge of the vision thing, timing driven synthesis has no grand vision internally. For a decent high end floor planning tool, you would need to either work with that synthesis internals or just replace it with human driven placement and let the synthasis finish up whats left over. In the past I suggested that schematic driven design (RIP) could easily be the basis of floorplanning, every schematic symbol drawn maps directly to an area of hand synthesied logic blocks, hiearchical driven design. Since that approach is dead, I would now suggest using the output schematic that come from synthesis and annotate that with floor planning hints. The problem there is that those schematics are worse than what a 3yr old can draw. As you said, when I make 16 copies of something, the average EE sees a bright light and draws an array and reduces the problem 2 fold, datapath plus control. The average dim witted tool (always written by PhDs no doubt) smashes away all logical structure and randomizes everything. Until the EDA tool guys understand this, the tools won't get better. If anything I believe they will get worse as FPGAs keep getting bigger. The key with regularity is that for humans it drastically simplifies the solution N fold and makes things managable. For tools, its just more cycles. end of rant johnjakson at usa dot com transputer2 at yahoo dot comArticle: 84221
Simon, I am not so sure the meta model would have to be super accurate, just reasonably so. It would have to describe the varying delays through LUTs and the switch fabric, and cover the more obvious features available, probably in an HDL model. That automatically sets off alarm bells at the vendor, since now anybody can see how their structures are built. Does anybody really care? It's not like you can go fab a clone even if you have a detailed model of a slice.

Even if the timing were 20% off, just being able to put FFs, adders and whatnot in the best possible place for a datapath, given that accuracy, would be far better than what the dim-wit software can do with likely not much better numbers. Once the ucf or xcf file is sent back to the vendor tool, we can see whether the result is better or not.

If the features can be openly described in the pdf spec, then they can be modelled and correlated with actual devices in an automated fashion against the vendor tools. In an ASIC, such foolery is limited since everybody's standard cell library is basically the same.

johnjakson at usa dot com

Article: 84222
"JJ" <johnjakson@yahoo.com> wrote in message news:1116153044.655599.218160@g44g2000cwa.googlegroups.com... > "Maybe in another decade someone will bring one out > for FPGAs, as part of a high end FPGA tool. " > > How many dollars do you think such a productivity tool would be worth? > > An ASIC guy will easily pay $100K for that sort of thing, I bet the > average FPGA guy today would squirm at $5K. ASIC EDA tool developments > are financed by VCs who expect a decent ROI. At few $k per seat, I see > less chance of that even if volumes could be higher. You can probably expect FPGA guys to spend up to 35k maybe even 100k... cost of one engineer... too much and you have to look at the life time of the product and the return on capital > > Now if what X is saying about make it your asic is true, then many more > ASIC guys will be changing over to FPGA design flows, but the problem > is tools, ASIC guys are used to high value high cost tools for 1M gate > designs. What will happen is the ASIC guys will retrain or end up working at MacDonnalds... Top down is a far better approact anyway.. at least the tools vendors will always tell you that.. and people leaning structured programming too.. bottom up will gradually become a forgotten aspect of design as the tools become good enough and the devices too fast. I have even put my OOP and strutured programming to work in my own designs... not as complex as Delphi OOP but same concepts apply. > Well if FPGAs can do 1M equivalent size projects then will these ASIC > guys want better more open tools or will they happily accept total > vendor lock in. As an x ASIC guy I know the answer to that, they will > hate it. Their tools are already propriatary.. they just don't realise it... try taking an ASIC design from one design house to another and see what the NRE cost is! > But I suspect if a Meta FPGA model can be built for each popular FPGA > device, I could see some useful tools being built. Not sure if it will > happen. You will probably have to wait for someone ding a PhD thesis before it happens... sorry :-) > see other post on OSX > > Since I only see FPGAs through Webpack eyes, I can't know what > Synplicity, Mentor offerings can do for the floor planning pros. It > keeps coming back to the synthesis, who is in charge of the vision > thing, timing driven synthesis has no grand vision internally. For a > decent high end floor planning tool, you would need to either work with > that synthesis internals or just replace it with human driven placement > and let the synthasis finish up whats left over. Synplicity Don't do placement.. not yet.. you can get Advantage or add ons (to bring the cost up to 100k of course) that can do some placement... but they optimise the code, put timing constraints in etc and rely on X or A to provide the final place and route. > In the past I suggested that schematic driven design (RIP) could easily > be the basis of floorplanning, every schematic symbol drawn maps > directly to an area of hand synthesied logic blocks, hiearchical driven > design. Schematic design is old school... VHDL and Verilog are the two main languages taught... even the schematics I enter into Mentor's HDL designer are actually just converted into VHDL. I doupt if universities are even teaching schematic design or good practice.. I have seen some atrocious designs from a university graduate. > Since that approach is dead, I would now suggest using the output > schematic that come from synthesis and annotate that with floor > planning hints. 
The problem there is that those schematics are worse > than what a 3yr old can draw. They are generally worse for documentation than the original design > As you said, when I make 16 copies of something, the average EE sees a > bright light and draws an array and reduces the problem 2 fold, > datapath plus control. The average dim witted tool (always written by > PhDs no doubt) smashes away all logical structure and randomizes > everything. Until the EDA tool guys understand this, the tools won't > get better. If anything I believe they will get worse as FPGAs keep > getting bigger. Most places actually place using a random number generator.. most of the time it works but its not very optimal Xilinx used to (maybe still do) allow you to fix the seed so the sequance is always the same. > The key with regularity is that for humans it drastically simplifies > the solution N fold and makes things managable. For tools, its just > more cycles. you forgot the key to business ... fast, quick, (mostly) working... so good enough is often all the accountants care about. If it does what's wanted then its out the door SimonArticle: 84223
But if they weren't accurate, people would complain... and if they were, then there would be another slinging match between the A and X marketing departments over which is better/faster. You really can't win. Besides, when it comes down to it, A and X produce silicon, not tools. They aim for 90% of users to be happy, and the last 10% aren't worth the trouble. Those users can conform, or X will read about their company in the Chapter 11 section of the paper.

I was once told that if you can sell a product to 1% of the American public you are a multi-millionaire... so imagine 90% of hi-tech companies. When it comes down to it, it's all about money: the most profit for the least amount of effort. The last 10% is always the most effort and least profitable. That is the bottom line of any business.

Simon

"JJ" <johnjakson@yahoo.com> wrote in message news:1116153841.698139.70400@z14g2000cwz.googlegroups.com...
> Simon, I am not so sure the Meta model would have to be super accurate,
> just reasonably so.
>
> It would have to describe the varying delays through LUTs, and switch
> fabric and cover the more obvious features available, probably in a HDL
> model. That automatically sets off alarm bells at the vendor since now
> anybody can see how their structures are built. Does anybody really
> care, its not like you can go fab a clone even if you have a detailed
> model of slice.
>
> Even if the timing was 20% off, just being able to put FFs and adders
> and what not in the best possible place for datapath given this
> accuracy would be far better than the dim wit SW can do with likely not
> much better nos. Once the ucf or xcf file is sent back to the vendor
> tool, we can see if the result is better or not.
>
> If the features can be openly described in the pdf spec, then it can be
> modelled and correlated with actual models in an automated fashion
> against the vendor tools.
>
> In an ASIC, such foolery is limited since every bodies std cell library
> is basically the same.
>
> johnjakson at usa dot com

Article: 84224
"What will happen is the ASIC guys will retrain or end up working at MacDonnalds..." I hope thats just your humour, ASIC guys are streets ahead in HW design over most newbie FPGA guys, but we bring with us a higher set of expectations. ASIC guys will have to get used to slightly different style of HW design, and poorer tools. I suppose some of the ASIC EDA companies will follow on. "Top down is a far better approact anyway.. at least the tools vendors will always tell you that.. and people leaning structured programming too.. bottom up will gradually become a forgotten aspect of design as the tools become good enough and the devices too fast. I have even put my OOP and strutured programming to work in my own designs... not as complex as Delphi OOP but same concepts apply. " No no no. For average so so design top down is the quickest way out the door but it also leads one straight into hidden brick wall on performance. Its only if you have the time to assemble bottom up that you can see where all the 0.1ns disappear that you have half a chance to get them back. Once you see that then iti possible to do top down and bottom up at the same time, but it does take longer. Its the same difference between a 32bit cpu that works at 150MHz v another at 300MHz. Top down and you automatically do 1 cycle, single threaded, looks like a DLX, type of design that is bottlenecked all over, no hope of ever speeding it up since control decisions can't be pipelined any further. Bottom up and you look at how DSP is essentially thread driven (ie super pipelined) with as little control decision per clock as possible ie highest possible clock rate. You end up with multicycle multithreaded design instead that don't look anything like the text book cpu designs, not even any bigger either. It will be interesting to see if ASIC guys change the FPGA business any, or the reverse. Most FPGA designs I guess are bespoke projects. Most ASIC designs are the opposite, highest possible volumes and highest possible performance to have perf edge. "you forgot the key to business ... fast, quick, (mostly) working... so good enough is often all the accountants care about. If it does what's wanted then its out the door " Thats probably true in the FPGA biz, but its not like that in the ASIC side of things. JJ