> Not knowing which device(s) he is using V2 and V2P. Generally using the middle speed grade. In the case of V2P it's not because of logic speed (plenty of it, even for the low speed grade), it's because of MGT clock rate requirements. -MartinArticle: 87351
Shanon Fernald wrote: >Gary, your algorithm works out on the trusty calculator, but Allan, >your algorithm "does not compute" :) Am I missing something or could >you walk me through this? > >Y1 = (X + X<<1) > >Y2 = (Y1 + Y1<<4) > >Y3 = (Y2 + Y2<<8) > >Because I don't see how this ever gets smaller than X, assuming X is >the number I'm trying to divide by 5. > >Thanks, >Shanon > > > There is an implied shift of the radix point. Also, I think it may be more intuitive to replace '<<' with '>>' in the above algorithm although in this case both are correct. The main thing you are missing is the implied shift. -- --Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email ray@andraka.com http://www.andraka.com "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759Article: 87352
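[Archive editor's note: the shift-and-add sequence quoted above computes 3 * 17 * 257 * X = 13107 * X, and since 13107/65536 is just under 1/5, the "implied shift of the radix point" Ray mentions is a final right shift by 16. The Python sketch below models this; the rounding constant added before the shift is my addition (the raw shift undershoots by one at exact multiples of 5), not part of the posted algorithm.]

```python
# Divide by 5 with shifts and adds: 3 * 17 * 257 = 13107, and
# 13107 / 2**16 is just under 1/5. The final ">> 16" is the implied
# shift of the radix point. Adding 13107 before shifting (my tweak,
# not in the original post) fixes the off-by-one at multiples of 5.
def div5(x):
    y1 = x + (x << 1)          # 3*x
    y2 = y1 + (y1 << 4)        # 3*x * 17   = 51*x
    y3 = y2 + (y2 << 8)        # 51*x * 257 = 13107*x
    return (y3 + 13107) >> 16  # ~= x * (13107 / 65536) ~= x / 5

# Exact for 16-bit inputs:
assert all(div5(x) == x // 5 for x in range(65536))
```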
Martin - On Thu, 21 Jul 2005 21:32:24 GMT, "Martin" <0_0_0_0_@pacbell.net> wrote: >Using 6.1.03i. Pondering whether or not to spend the money to update to >latest revision. The devices we are using are fully supported by what we >currently have. No real issues at all. > >That being the case, what would be the most significant reasons that would >justify an upgrade today? > >Thanks, > >-Martin > If I had a choice, I'd stick with the tool version that successfully produced the initial version of my design. Except, that is, for one thing: speed files. If you're absolutely confident that the speed files for your current version of software reflect the latest timing numbers for your part, or if you have so much timing margin that you have no concerns about changes to the speed files, stick with what you've got. But if any of your timing is tight, check with the factory to see if you're up to date. By the way, I just used my 7.1.03i version of speedprint on the 2v3000 and 2vp70. Here's what it reported: 2v3000: PRODUCTION 1.121 2005-05-13 2vp70: PRODUCTION 1.91 2005-05-13 Can the factory send you new speed files that work with your current software? Beats me. I know one or two folks who have managed to obtain special versions of updated speed files that work with older versions of the software. But I've also been told that sometimes it's impossible to do, because newer speed files may have formats incompatible with older versions of the tool, the newer formats being necessary for more accurate timing calculations. Or so I've heard. Good luck, Bob Perlman Cambrian Design WorksArticle: 87353
"Ray Andraka" <ray@andraka.com> wrote in message news:gWzDe.56401$FP2.44836@lakeread03... >I do find it a little humorous that about 5-10 years ago someone >published a paper at FPGA > that claimed better speed and utilization using a 3-LUT rather than a > 4-LUT, and that the presenter > was someone that was fairly closely aligned with Altera. I think it might > have come out of Jonathan Rose's > students in Toronto. Anyway, the pendulum has obviously swung to bigger > LUTs are better at A. Hi Ray, I think this is the paper you're talking about -- http://www.eecg.toronto.edu/~jayar/pubs/ahmed/tvlsi_march_04.pdf. It's the journal version of a paper from FPGA 2000, which matches the timeframe you're talking about, and it is by one of Jonathan's students (Elias Ahmed). The conclusions are the opposite of what you're saying though: larger LUTs are faster (which was known before), but also that 5 & 6 LUTs are much more area-efficient with modern CAD tools than was previously believed (in Jonathan's 1990 JSSC paper for example). We came to a similar conclusion independently at Altera. However, to improve the area-efficiency of larger LUTs further, we found we had to allow the larger LUT to be fracturable into smaller LUTs when appropriate, and that's the genesis of the ALM. > Again, for designs that use FPGA fabric correctly (ie, not levels upon > levels of combinatorial logic), the > LUT size is not as big a deal. Not many designs manage to keep all the logic to one level of LUT. The DSP designs you do are the most amenable type for deep pipelining, and I think you are a champion even among experts -- I don't see other DSP designs hitting that level of pipelining. Maybe they're so expert they never need our help though :). The high-speed designs I see today (typically 250 - 300 MHz, sometimes up to 400 MHz), have multiple levels of logic between the registers. Typical would be ~3 to 5 levels of logic (LUT) on the critical path using the Stratix II ALM, vs. 
~4 to 7 levels using Stratix's logic element (4-LUT based). That reduction of routing hops and logic levels by ~25% is a big help in getting performance in those clock ranges. Most communications designs have more complex logic, and it's typical to have ~6 to 10 levels of logic on the critical path, putting them in the ~120 - 220 MHz range on the main system clock. > Those bigger luts are going to tend to be underutilized. The bypass is of > limited value too if you attempt > to keep all your logic to one level of LUTs, as the LUTs will nearly > always be associated with a flip-flop. Using the larger LUTs in an ALM (5- or 6-LUT) still leaves both ALM registers usable, so there's no problem with register starvation even if you manage to keep all the logic one LUT deep. Regards, Vaughn Altera [v b e t z (at) altera.com]Article: 87354
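[Archive editor's note: the relationship Vaughn describes between logic depth and clock rate can be sketched with a back-of-envelope model, period ≈ levels × (LUT + routing delay per level). The ~0.9 ns per level used below is my assumed ballpark for 90 nm parts, not a vendor figure; it merely shows the trend is consistent with the ranges quoted in the post.]

```python
# Rough Fmax from logic depth: period = levels * per-level delay.
# The 0.9 ns LUT+routing delay per level is an assumed ballpark for
# 90 nm parts (my assumption, not Altera/Xilinx data).
def fmax_mhz(levels, ns_per_level=0.9):
    return 1000.0 / (levels * ns_per_level)

for levels in (3, 5, 6, 10):
    print(levels, round(fmax_mhz(levels)))
```

With these assumptions, 3-5 levels lands near the 250-400 MHz range and 6-10 levels near the 120-220 MHz range mentioned above, which is the point of the reduced-depth argument.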
<jjlindula@hotmail.com> wrote in message news:1121970065.257771.127820@g43g2000cwa.googlegroups.com... > Hello, I'm interested in how individuals or design groups manage > complexity in their design projects. What things do you do or things > the group does that can take complex tasks and break them into simpler > or more manageable tasks? It may sound like a weird question, but there > must be some guidelines, best practices, or habits used to achieve > success in designing/developing a complex project. I'm sure there must > be some individuals out there that are constantly taking complex tasks > and just about every time have success with it. Short of speaking, I > want to know what's the secret to their success. All comments are > welcomed, even the most obvious suggestions. > > As an engineer, I'm constantly trying to improve my design processes. > Joe, Here is another perspective: People are the source of complexity. "Better" is the enemy of "good enough". One reason the requirements need to be understood is to avoid changes. You should have a process to prevent changes. Design freezes are a way to do this. This avoids bringing in the latest and greatest idea and perhaps even doing that again and again and again..... Then you should probably have a higher-level process to embrace great ideas for change. It doesn't happen often that it isn't, but try to make sure that the thing is feasible in our lifetime - otherwise it isn't engineering, it's an experiment. Sometimes you won't know the requirements until you've tried some things. More wires mean more complexity. If you don't understand what you're doing, and understand it very well, then you'll make a lot of dumb mistakes. It's easy enough to make dumb mistakes when you *do* understand it very well! One person has to be able to wrap their head around the whole thing - but not necessarily all the parts as long as the parts are well known things and under somebody's cognizance and control. 
This is where you need to be careful about the thing (maybe one of the parts and maybe the whole) being feasible in our lifetime. One of the parts may be akin to "and then a miracle occurs". If it's deemed "complex" to begin with then the core staff should be fairly well full time. People need to wake up in the middle of the night wondering if they've taken care of this and that - and they need to have undivided attention to the challenges so they will solve a knotty problem while sailing on the bay. If they get assigned to other things, you lose this subconscious attention - which has really high value. I don't know how to best express how to apply ideas of hierarchy, partitioning and so forth. But, that seems where the best payoff is in reducing complexity. Maybe we might ask how that really keen understanding is achieved? It's through analysis, partitioning, trying out ideas, etc. until something that really makes sense results. I do it but I can't quite describe how. FredArticle: 87355
"Ray Andraka" <ray@andraka.com> wrote in message news:inODe.59117$FP2.41223@lakeread03... Philip Freidin wrote: Actually, what this does is benchmark your "experts". And the results at best are only valid if you then use that expert for your design :-) Philip, good point. I think you see what I am trying to say. Basically, that this "mine's bigger" pissing contest is at best a demonstration of how well the parts compare on an undisclosed set of benchmarks that are biased towards the favorite device. My point in my previous posts is that any design is going to be biased toward one of the devices. That bias can be toyed with by making changes in the benchmark designs (and I am talking about changes at the RTL level, not hand crafting). Even naive designs have this bias, but I suspect that the benchmarks were either designs done by the vendor's FAEs or by the vendor's customers, which would already introduce a distinct natural bias toward that vendor's devices. Even if he did use naive designs, marketing would have undoubtedly polished the numbers by either tweaking the designs or cherry picking the benchmarks to support his sales pitch. I'm not saying there is anything wrong with that, just trying to expose the nearly unavoidable bias that is naturally there.

Ray,

This is certainly a valid concern -- do the benchmark circuits presented favour Altera, either intentionally or unintentionally?

First, intentional bias: Real customer designs for benchmarking are considered a valuable engineering resource at Altera. That's because we believe the only way to make our CAD tools or architectures better is to be scientific and rigorously benchmark our ideas on real customer designs, and choose what ideas go into the CAD tools and architectures based on those results. That means monkeying with the HDL is a big no-no -- it would destroy the intent of the customer design, and would no longer be a valid test case. To allow a design that was targeted at one of Altera/Xilinx/Lattice to compile in the other vendor's tools, the megafunctions/coregen modules are replaced with equivalent modules for each manufacturer, but that's all that's allowed for design modification.

Next, unintentional bias. I don't think there's a significant unintentional bias, but of course it's impossible to be 100% sure of that. If all the designs we had were designed for Stratix II and had had their HDL extensively tuned by the customer for Stratix II or an FAE, a bias would certainly be possible. But the device our benchmark set originally targeted is all over the map: Stratix, VirtexII, VirtexII-Pro, Stratix II. A significant fraction of our design set was intentionally written by customers to let them target (with appropriate megafunctions/coregen modules included) both the Stratix & Virtex families, so they could do their own benchmark. Based on the result of that benchmark, they would eliminate one vendor if their device just couldn't hit the speed or density target, or if both worked, they would be happy to discuss pricing with both vendors :). We also have some designs where we were in competition with an ASIC, so again, the HDL was not written with a particular device in mind.

Since Stratix II is quite new, I believe we have more designs originally coded for Stratix and the Virtex family than were coded for Stratix II. That would tend to favour a 4-LUT architecture, if there was a bias. However, since almost no customers code their HDL thinking that carefully about how it will map to this LUT etc vs. that one, I don't think there's much bias against Stratix II either. Basically customers count on CAD tools to implement their logic & routing intelligently these days, and rarely intervene except to add deeper pipelining or perhaps retime part of the logic when it's clear that they aren't going to be fast enough. Those are really architecture-level changes, and don't directly favour one chip vs. another.

The main message I would hope people would take away from the NetSeminar is that if density is important to you, you should benchmark for density when choosing a device, rather than relying on the number attached to a device. Compile your design, or the opencores designs, and check the % utilization.

Regards, Vaughn Altera [v b e t z (at) altera.com]Article: 87356
jjlindula@hotmail.com wrote: > Hello, I'm interested in how individuals or design groups manage > complexity in their design projects. What things do you do or things > the group does that can take complex tasks and break them into simpler > or more manageable tasks? It may sound like a weird question, but there > must be some guidelines, best practices, or habits used to achieve > success in designing/developing a complex project. I'm sure there must > be some individuals out there that are constantly taking complex tasks > and just about every time have success with it. Short of speaking, I > want to know what's the secret to their success. All comments are > welcomed, even the most obvious suggestions. > > As an engineer, I'm constantly trying to improve my design processes. > > Thanks everyone, > joe Joe, One of the people I respect most in this area is Paul Bennett, http://www.amleth.demon.co.uk/hidecs/. He is a regular at comp.lang.forth and sci.engr.control. Jerry -- Engineering is the art of making what you want from things you can get. ¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯Article: 87357
Results from behavioural, post-translation and post-map simulation are correct. I think the main problem isn't the above code but something in the place & route process.Article: 87358
Hi Paul, You've found a case where Quartus synthesis isn't correctly implementing the Verilog standard for signed modulo operations. Currently for A mod B, Quartus is synthesizing a circuit that will always return a non-negative result. That does not match Verilog semantics when B is negative. That is why you're getting a mismatch between the ModelSim simulation (which is correctly modeling the Verilog semantics) and the design in hardware/the Quartus simulator. Quartus was synthesizing a signed modulus implementation that matches Matlab's interpretation of signed modulus, so ironically with the original design the hardware would match Matlab. Once you changed your design so that you added B to (A MOD B) when B < 0, you actually changed the hardware so that it wouldn't match Matlab anymore. This will be fixed in the upcoming Quartus 5.1 release. Thanks for bringing this up, and I hope this didn't cause you too much trouble. Regards, Vaughn Altera [v b e t z (at) altera.com]Article: 87359
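[Archive editor's note: the mismatch Vaughn describes comes down to two remainder conventions. Verilog's % truncates toward zero, so the result takes the sign of the dividend; Matlab's mod floors, so the result takes the sign of the divisor. A small Python model (the helper names are mine; this models only the arithmetic, not the Quartus or ModelSim behavior):]

```python
import math

# Truncated modulo (Verilog / C: result takes the sign of the dividend)
# vs. floored modulo (Matlab's mod / Python's %: sign of the divisor).
# Helper names are mine; this just contrasts the two conventions.
def trunc_mod(a, b):
    return a - b * math.trunc(a / b)   # truncate quotient toward zero

def floor_mod(a, b):
    return a - b * math.floor(a / b)   # floor quotient; same as a % b

assert (trunc_mod(-7, 5), floor_mod(-7, 5)) == (-2, 3)
assert (trunc_mod(7, -5), floor_mod(7, -5)) == (2, -3)
```

A circuit that always returns a non-negative result matches neither convention for all operand signs, which is why the simulations disagreed.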
On Thu, 21 Jul 2005 19:46:17 GMT, "Phlip" <phlip_cpp@yahoo.com> wrote: >jjlindula wrote: > >> Hello, I'm interested in how individuals or design groups manage >> complexity in their design projects. > >Read /Notes on the Synthesis of Form/ by Christopher Alexander. It makes the >best case possible for incremental design. You build a prototype, see if it >works, tweak it a little, and repeat over many iterations. > As opposed to, say, thinking it through carefully and getting it right the first time? JohnArticle: 87360
"there is an implied shift of the radix point." I'm still not quite understanding this, but I found a good tutorial on radix points and bit shifting, so hopefully it will make sense after reviewing this. In any case, I implemented Gary's algorithm and it works, with perhaps a slight glitch at the edge case, still looking into that. But the new version saved about 1100 LEs. Very nice.Article: 87361
I was arguing with a friend about SSE2 (Intel's SIMD vector CPU instructions), and how useless it is to most general computer programs. But he correctly pointed out that any analog simulation (SPICE) likely uses floating-point numeric computations. Furthermore, he claimed that in native 64-bit mode, the AMD64/EM64T instruction-set doesn't have x87 (legacy FPU) instructions. A 64-bit application MUST use SSE registers for all floating-point math. I'm not much of a programmer, so is this true? If so, are the 64-bit/linux versions of EDA tools specifically optimized for the SSE instruction set?Article: 87362
Well, thanks for all your suggestions. As far as BRAMs, I would rather use them elsewhere. I ended up with this rather verbose code shown below. And I don't know how well it synthesizes, probably not too well, because I think it is using several hundred LUTs. It's actually a 62-bit ones counter and the bits can be turned off from the center out with the B signals. Brad (editor: not worth archiving :-)Article: 87363
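[Archive editor's note: since Brad's code wasn't archived, here is the usual LUT-efficient structure for a wide ones-counter, an adder tree that sums bits pairwise and then sums the partial sums, log2(width) levels deep. This Python model is my sketch of the standard technique, not the posted RTL, and omits the center-out B-signal masking.]

```python
# Adder-tree popcount: sum the input bits in pairs, then pairs of
# partial sums, and so on -- the standard structure for counting ones
# in a wide word with LUT logic. A sketch, not Brad's archived code.
def popcount_tree(bits):
    sums = list(bits)
    while len(sums) > 1:
        if len(sums) % 2:
            sums.append(0)  # pad odd-length levels with a zero
        sums = [sums[i] + sums[i + 1] for i in range(0, len(sums), 2)]
    return sums[0] if sums else 0

word = [1, 0, 1, 1, 0, 1] * 10 + [1, 0]  # 62 input bits, as in the post
assert popcount_tree(word) == sum(word)
```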
John Larkin <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote in news:sdu0e1h5or9d6dbe6akavdov7bnqh6c9ld@4ax.com: > On Thu, 21 Jul 2005 19:46:17 GMT, "Phlip" <phlip_cpp@yahoo.com> wrote: > >>jjlindula wrote: >> >>> Hello, I'm interested in how individuals or design groups manage >>> complexity in their design projects. >> >>Read /Notes on the Synthesis of Form/ by Christopher Alexander. It >>makes the best case possible for incremental design. You build a >>prototype, see if it works, tweak it a little, and repeat over many >>iterations. >> > > As opposed to, say, thinking it through carefully and getting it right > the first time? > > John > Complexity causes projects to increase in time exponentially. Without a good assessment of the scope of the project, I think the approach taken can vary considerably. You need at least one project architect to guide the project. I am not a big proponent of extremely detailed project specifications. I think this all too often leads to a finished product that nobody wants. I think you need to leave some room for improved ideas and innovations that typically occur as you develop the product. Of course, this needs to be balanced with getting the project done. There is a time when you have to "shoot the engineer and start production." I also rarely take the build a little, tweak a little approach. Time to market will usually kill you. This doesn't mean that you ignore little problems or bugs in your design. These are going to come back and bite you. On the larger projects that I have worked on, we start with a good plan and a vision of what we want the outcome of our product to be, without trying to define every last detail. We might prototype the small sections that we know the least about. Most of the time there are large portions of a project that are already fairly well understood by our team. I like to design the early pieces close to the finished result. 
In some cases, an iteration or two is all you need to have a finished product. In other cases, you throw away most of the design and start over, with a much better understanding of where you need to go. I think this approach used to be called fast prototyping. -- Al Clark Danville Signal Processing, Inc. -------------------------------------------------------------------- Purveyors of Fine DSP Hardware and other Cool Stuff Available at http://www.danvillesignal.comArticle: 87364
I'm sure Altera will trot out a bunch of data showing how much more logic efficient the ALM is than the Slice. Otherwise they would not hold a net seminar. That said, I think the ALM has some nice features. I don't think the 6 and 7 give you much advantage. How often can you share 4 inputs with two separate functions? Or share 6 inputs with two separate functions? My guess is that the 5 + 3 or 5 + 4 is where it helps more. I think the biggest advantage is reduced levels of logic, isn't it? One important point will be what is the Fmax difference between these designs. Aren't more packed designs slower? Anyway, the truth is something that will not be talked about on here. What is the silicon area of an ALM? What is the silicon area of a SliceM and a SliceL? If the silicon area of a SliceM + SliceL is half the size of 2 ALMs then Xilinx is way ahead. If the silicon area of 2 ALMs is half the size of SliceM + SliceL then Altera is way ahead. An ALM has 64 SRAM bits I believe. 2 x 4 lut plus 4 x 3 lut. A slice has 32 SRAM bits. Last time I checked, 32 SRAM bits takes up less area than 64 SRAM bits. But SRL16 and distributed ram circuits blow up the size of a SliceM. The truth is that the 180 and the 200 are most likely both "reticle busters". The biggest chip you can make. One chip per 90 nm reticle shot. Since the LX200 has 200K and the S2 has 180K, my guess is that the SliceM + SliceL is a bit smaller than 2 x ALM. The laws of physics are the same for both Altera and Xilinx. Nothing is free. SRL16 and RAM cost area. ALM costs area. Routing costs area. Is the size of the silicon area defined by routing or by circuits? If the ALM has bigger silicon area then it should have lower resource usage. Vaughn Betz wrote: > "Ray Andraka" <ray@andraka.com> wrote in message news:inODe.59117$FP2.41223@lakeread03... > Philip Freidin wrote: > > Actually, what this does is benchmark your "experts". 
>> And the results at best are only valid if you then use that expert for
>> your design :-)
>
> Philip, good point. I think you see what I am trying to say. Basically,
> this "mine's bigger" pissing contest is at best a demonstration of how
> well the parts compare on an undisclosed set of benchmarks that are
> biased towards the favorite device. My point in my previous posts is
> that any design is going to be biased toward one of the devices. That
> bias can be toyed with by making changes in the benchmark designs (and
> I am talking about changes at the RTL level, not hand crafting). Even
> naive designs have this bias, but I suspect that the benchmarks were
> either designs done by the vendor's FAEs or by the vendor's customers,
> which would already introduce a distinct natural bias toward that
> vendor's devices. Even if he did use naive designs, marketing would
> have undoubtedly polished the numbers by either tweaking the designs
> or cherry-picking the benchmarks to support his sales pitch. I'm not
> saying there is anything wrong with that, just trying to expose the
> nearly unavoidable bias that is naturally there.

Ray,

This is certainly a valid concern -- do the benchmark circuits presented
favour Altera, either intentionally or unintentionally?

First, intentional bias: real customer designs for benchmarking are
considered a valuable engineering resource at Altera. That's because we
believe the only way to make our CAD tools or architectures better is to
be scientific and rigorously benchmark our ideas on real customer
designs, and choose what ideas go into the CAD tools and architectures
based on those results. That means monkeying with the HDL is a big no-no
-- it would destroy the intent of the customer design, and it would no
longer be a valid test case. To allow a design that was targeted at one
of Altera/Xilinx/Lattice to compile in the other vendor's tools, the
megafunctions/coregen modules are replaced with equivalent modules for
each manufacturer, but that's all that's allowed in the way of design
modification.

Next, unintentional bias. I don't think there's a significant
unintentional bias, but of course it's impossible to be 100% sure of
that. If all the designs we had were designed for Stratix II and had had
their HDL extensively tuned for Stratix II by the customer or an FAE, a
bias would certainly be possible. But the devices our benchmark set
originally targeted are all over the map: Stratix, Virtex-II,
Virtex-II Pro, Stratix II. A significant fraction of our design set was
intentionally written by customers to let them target (with appropriate
megafunctions/coregen modules included) both the Stratix and Virtex
families, so they could do their own benchmark. Based on the result of
that benchmark, they would eliminate one vendor if its device just
couldn't hit the speed or density target, or, if both worked, they would
be happy to discuss pricing with both vendors :). We also have some
designs where we were in competition with an ASIC, so again, the HDL was
not written with a particular device in mind.

Since Stratix II is quite new, I believe we have more designs originally
coded for Stratix and the Virtex family than were coded for Stratix II.
That would tend to favour a 4-LUT architecture, if there was a bias.
However, since almost no customers code their HDL thinking that
carefully about how it will map to this LUT vs. that one, I don't think
there's much bias against Stratix II either. Basically, customers count
on CAD tools to implement their logic and routing intelligently these
days, and rarely intervene except to add deeper pipelining or perhaps
retime part of the logic when it's clear that they aren't going to be
fast enough. Those are really architecture-level changes, and don't
directly favour one chip vs. another.

The main message I would hope people take away from the NetSeminar is
that if density is important to you, you should benchmark for density
when choosing a device, rather than relying on the number attached to a
device. Compile your design, or the opencores designs, and check the %
utilization.

Regards,

Vaughn
Altera
[v b e t z (at) altera.com]
Article: 87365
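The density check Vaughn suggests -- compile the same design on each
candidate device and compare the reported % utilization -- amounts to a
simple ratio. A minimal sketch of that bookkeeping; the device names and
resource counts below are made-up placeholders, not benchmark data:

```python
# Compare how full each candidate device is for the same design.
# All numbers here are hypothetical; read the real ones from the
# fitter/place-and-route report of each vendor's tool.

def utilization(units_used, units_available):
    """Fraction of the device's logic units consumed by the design."""
    return units_used / units_available

results = {
    "device_A": utilization(40000, 71760),   # e.g. ALMs used / available
    "device_B": utilization(90000, 186576),  # e.g. half-slices used / available
}

# Lower utilization = more headroom left in that device for this design.
for dev, u in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{dev}: {u:.1%} utilized")
```

Note that the two columns are in different units (ALMs vs. half-slices),
which is exactly why the percentage, not the raw count, is the number to
compare.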
Paul Leventis (at home) wrote:

> Hi Marc,
>
>> In short, please explain which of the above comparison columns is
>> incorrect, and why.
>
> I see your point now. The problem is the unfortunate use of the term
> "Equivalent Four-Input LUTs". This value does *not* represent a simple
> 4-LUT. What that column really represents is "Equivalent Benchmarked
> Logic Units". In other words, it is a measurement of capacity based on
> benchmark results. In this case, we have normalized the result so that
> our arbitrary unit of measurement matches one Xilinx half-slice (which
> is labeled "Actual LUTs" in your table).

Thank you for the reply, Paul. Now that you point this out explicitly, I
think I can see how Altera justifies their numbers (I have no idea if
they are truly accurate, but at least I understand them). As you said,
the label Altera has used is most unfortunate... "LUT" has a pretty
specific meaning to a very large number of people, and Altera seems to
be twisting that meaning to suit them -- and adding much confusion in
the process.

> These comparisons take into account all features of the respective
> logic elements, such as dedicated adders, little XOR and MUX widgets,
> etc. To obtain the Stratix II value of 186K, we benchmarked a large
> suite of designs by running Synplify + ISE vs. Quartus to figure out
> how many ALMs were needed vs. how many half-slices were needed.

Presumably F8 muxes were turned on. Presumably the suite didn't have
very many "128-number, 16-bit-per-number adder" trees. Presumably it
didn't force Xilinx to use SRLs when there are almost certainly
registers available for a simple four-bit shift register (the "0.5
slices" entry in table 11 of the white paper is just silly -- if the
Stratix can use four registers, so can Xilinx; Altera shouldn't try to
imply there is an automatic savings with the Stratix II).

> The result was that 2.6 half-slices were needed to implement the same
> functionality as 1 ALM (on average). Multiply by the number of ALMs in
> the 2S180 (71760) and you get 186576. So perhaps a more correct label
> would have been "Equivalent Virtex Half-Slices".

I believe at least half the confusion I had on this topic was that
Altera uses the term "equivalent four-input LUTs" in reference to Xilinx
also (in multiple places, but especially table 4 of the white paper). If
they'd just stuck with slice comparisons, it would have helped greatly.
Altera has defined "equivalent four-input LUTs" to mean ALMs * 2.6, so
how can that term apply to Xilinx? Not only that, but when you start
using the term LUT, it makes it seem like Altera is ignoring the other
features of the slice, like the F-muxes, counter/carry logic, etc. --
hence the strong wording in my previous email. I see now that they
aren't.

Thank you again for helping me to understand the madness,
Marc
Article: 87366
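The normalization Paul describes reduces to one multiplication. A sketch
using only the figures quoted in the post (the 2.6 benchmark-derived
ratio and the 71760 ALM count); the helper name is mine, not Altera's:

```python
# Altera's "Equivalent Four-Input LUTs" figure, per the post above,
# is really ALM count scaled by a benchmarked half-slice ratio.
ALMS_2S180 = 71760          # ALMs in the Stratix II 2S180 (from the post)
HALF_SLICES_PER_ALM = 2.6   # benchmark-derived average (from the post)

def equivalent_half_slices(alms, ratio=HALF_SLICES_PER_ALM):
    """Scale an ALM count into 'equivalent Virtex half-slice' units."""
    return round(alms * ratio)

print(equivalent_half_slices(ALMS_2S180))  # 186576 -- the 186K figure
```

This also makes Marc's complaint concrete: the output is in half-slice
units, so calling it a count of "four-input LUTs" for either vendor is a
labeling choice, not a measurement of actual LUTs.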
I think I will use a 120MHz clock to do the job.

Rgds,
André
Article: 87367
A number of people have talked about hierarchy. I have inherited a
number of designs, both PCB and firmware, where hierarchy was used only
because the piece of paper wasn't big enough. I would suggest that you
should be able to look at any hierarchical block and explain what it
does in a couple of sentences; then you enter the block to find out how
it does it.

Colin

jjlindula@hotmail.com wrote:

> Hello, I'm interested in how individuals or design groups manage
> complexity in their design projects. What things do you do, or things
> the group does, that can take complex tasks and break them into
> simpler or more manageable tasks? It may sound like a weird question,
> but there must be some guidelines, best practices, or habits used to
> achieve success in designing/developing a complex project. I'm sure
> there must be some individuals out there who are constantly taking on
> complex tasks and just about every time have success with them. In
> short, I want to know the secret to their success. All comments are
> welcomed, even the most obvious suggestions.
>
> As an engineer, I'm constantly trying to improve my design processes.
>
> Thanks everyone,
> joe
Article: 87368
> Maybe I am wrong, but where do you need to do everything in one cycle?

If I use the NXT signal directly from the pin and I use the PHY clock as
my FSM clock, I have one cycle to answer with the appropriate data byte.
The problem that arises is that I have to care about setup/hold timing,
because I check the NXT signal in almost all of my FSM states. If the
FSM becomes large, I have to ensure that this timing is violated in no
state of the FSM. When working directly from the pin, that is not as
easy as having NXT registered first. Am I right?

Rgds,
André
Article: 87369
The application involved a 32x oversampled 28MHz PSK stream from a
powerline. The logic ran at 28MHz behind a 32-tap analog DLL, so syncing
was done by looking for a correlation at each of 32 phases in parallel.
With even minor power line filtering, the bit edges are all over the
place, making it tough to say where bits start or end. Anyway, it was
derived from an ASIC design, and BRAMs were not plentiful in those early
Virtex parts. If I did it today, I'd probably use an N x faster clock on
the digital logic with N x less HW and factor the N out of the
oversampling front-end logic. I still wouldn't use BRAMs for this today;
I'd use them for other functions.

Using 63 bits rather than 64 bits takes precisely 63 adder cells, and
each doubling adds 2 adder delays (ASIC, that is), and 64 takes an extra
6 on top. When did adders become expensive?

johnjakson at usa dot com
Article: 87370
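As a rough sanity check on the adder bookkeeping above: a balanced
binary tree summing N inputs uses N-1 two-input adders and log2(N) tree
levels. This sketch counts tree adders and levels only; the post tallies
multi-bit carry cells and two adder delays per doubling (the ripple
widths grow each level), so treat this as an approximation, not the
post's exact arithmetic:

```python
import math

def tree_adders(n_inputs):
    """Two-input adders needed to sum n_inputs values in a binary tree."""
    return n_inputs - 1

def tree_depth(n_inputs):
    """Adder levels (delay) of the balanced tree."""
    return math.ceil(math.log2(n_inputs))

# 64 correlator taps: 63 adders, 6 levels deep
print(tree_adders(64), tree_depth(64))
```

Dropping from 64 to 63 taps removes one adder from the tree but does not
reduce its depth, which is consistent with the post's point that adders
are cheap here.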
John Larkin wrote:

> On Thu, 21 Jul 2005 19:46:17 GMT, "Phlip" <phlip_cpp@yahoo.com> wrote:
>
>> jjlindula wrote:
>>
>>> Hello, I'm interested in how individuals or design groups manage
>>> complexity in their design projects.
>>
>> Read /Notes on the Synthesis of Form/ by Christopher Alexander. It
>> makes the best case possible for incremental design. You build a
>> prototype, see if it works, tweak it a little, and repeat over many
>> iterations.
>
> As opposed to, say, thinking it through carefully and getting it right
> the first time?
>
> John

If you can think something through carefully and get it right first
time, without experimentation, you are either:

a) a genius who would make Newton look like a moron, or
b) doing something pretty trivial.

Regards,
Steve
Article: 87371
The design rule is Minimizing The Complexity. Do not rely on the false
and unfortunately quite popular idea that any simple HW/SW design is
'stupid'. It's better to make $$$ with a 'stupid' product than to be
stuck with a complex but 'clever' one...

Kind Russian regards,
Yuri
Article: 87372
< Using 63 bits rather than 64 bits takes precisely 63 adder cells >

Should say: using 63 bits rather than 64 bits takes precisely 63-6 adder
cells, and 64 bits take 63. It's been a while.

You probably didn't realize I was using 32x for oversampling.
Article: 87373
Hi All,

Can anyone suggest a method to convert a Verilog file into BLIF (LUT)
format? Do the Altera or Xilinx tools support this conversion? Kindly
help me in this regard.

Thanking you in advance
Article: 87374
Doesn't the ML401 board come with a .ucf?

Jim

"Nenad" <n_uzunovic@yahoo.com> wrote in message
news:1121968704.986524.226490@g47g2000cwa.googlegroups.com...
>
> Has anyone activated this yet?
>
> I found this DDR controller on the Xilinx website: the Memory
> Interface Generator (MIG). It was on Demos on Demand as well. It is a
> simple program that supposedly generates everything you need. It seems
> neat, and the HDL seems OK, though it's done for the ML461 board and
> it gives wrong .ucf files even for the ML461 itself (there is another
> sample of pretty much the same controller there (XAPP709), so I
> compared it with the generated one).
>
> Can someone please help me with the pinouts here? I've figured out the
> data and address pins (well, I think I have) and now I am left with
> the controls.
>
> Thanks