Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
I am a student who is trying to model a parallel hardware architecture for FFT using C. My aim is to verify the correctness of my architecture and also estimate the noise introduced when fixed point is used. Is there any tutorial/book or any help that can guide me in this process of C modelling --- and especially for fixed point models? Thank you. --------------------------------------- Posted through http://www.FPGARelated.comArticle: 147276
My XP box died the other day and was replaced by a 64 bit Windows7 machine. Now my $%^&Quartus II software won't run, and Altera says Win7 ain't supported. Anybody know of a workaround? I'm developing on the Altera Cyclone III FPGA on the Altium Designer NanoBoard 3000. Thx, BobArticle: 147277
It's running ok for me on Windows 7 64-bit. What particular part of the software are you having problems with? JonArticle: 147278
> I am a student who is trying to model a parallel hardware architecture for > FFT using C. My aim is to verify the correctness of my architecture and > also estimate the noise introduced when fixed point is used. > > Is there any tutorial/book or any help that can guide me in this process of > C modelling --- and especially for fixed point models? One alternative is to code your FFT in the high-level concurrent language Mobius which supports parameterized integers, fixed point and floating point. The Mobius compiler generates synthesizable Verilog or VHDL with excellent QoR. On www.codetronix.com there are several FFT demos including Cooley, combinatorial and r2^2sdf architectures demonstrating bit-accurate TLM simulations and synthesis. You could use these as a basis and vary the bitsizes to experimentally observe quantization noise. /PerArticle: 147279
On 4/21/2010 1:03 PM, onkars wrote: > I am a student who is trying to model a parallel hardware architecture for > FFT using a C. My aim is to verify the correctness of my architecture and > also estimate the noise introduced when fixed point is used. > > Is there any tutorial/book or any help that can guide me in this process of > C modelling --- and especially for fixed point models? > > > > Thank you. > > --------------------------------------- > Posted through http://www.FPGARelated.com You're going to have to write it in VHDL or Verilog eventually anyway. Might as well do the modeling there too. You could start by writing purely behavioral code for it, and then have a pretty straightforward path to making something synthesizable out of it. -- Rob Gaddi, Highland Technology Email address is currently out of orderArticle: 147280
On Wed, 21 Apr 2010 15:03:42 -0500, "onkars" wrote: >I am a student who is trying to model a parallel hardware architecture for >FFT using a C. My aim is to verify the correctness of my architecture and >also estimate the noise introduced when fixed point is used. > >Is there any tutorial/book or any help that can guide me in this process of >C modelling --- and especially for fixed point models? If you download the SystemC-2.0.1 class library from www.systemc.org you will find a comprehensive package of template classes for modelling fixed-point values in C++. You don't have to do the full SystemC thing to use it, and in any case the code should give you some useful ideas. However, folk who do this kind of stuff all the time tend to use Matlab, don't they? -- Jonathan BromleyArticle: 147281
On 4/21/2010 2:38 PM, Rob Gaddi wrote: > On 4/21/2010 1:03 PM, onkars wrote: >> I am a student who is trying to model a parallel hardware architecture >> for >> FFT using a C. My aim is to verify the correctness of my architecture and >> also estimate the noise introduced when fixed point is used. >> >> Is there any tutorial/book or any help that can guide me in this >> process of >> C modelling --- and especially for fixed point models? >> >> >> >> Thank you. >> >> --------------------------------------- >> Posted through http://www.FPGARelated.com > > You're going to have to write it in VHDL or Verilog eventually anyway. > Might as well do the modeling there too. You could start by writing > purely behavioral code for it, and then have a pretty straightforward > path to making something synthesizable out of it. > And goddammit, I just got to reading comp.dsp. http://www.blakjak.demon.co.uk/mul_crss.htm Go learn that before you use the Internet for anything again. Ever. -- Rob Gaddi, Highland Technology Email address is currently out of orderArticle: 147282
Wastrel <stephensdigital@gmail.com> wrote: > My XP box died the other day and was replaced by a 64 bit Windows7 > machine. Now my $%^&Quartus II software won't run, and Altera says > Win7 ain't supported. Anybody know of a workaround? I'm developing on > the Altera Cyclone III FPGA on the Altium Designer NanoBoard 3000. Win7 has an option to tell the program that it is running on a previous version of windows, as far as such checks are concerned. As far as the design software, that is likely to work. As you might need a device driver to talk to USB to download the bitstream, that might be system dependent, such that it won't work. -- glenArticle: 147283
Hi, I'm trying to calculate the absolute value of a signed number (two's complement). Right now, I sign extend the input, and when msb=1, inverse all bits and add 1. The sign extend is to take care of the most negative number. Is there a better way in terms of hardware utilization? Here is my verilog code: wire signed [w-1:0] a; wire signed [w:0] b, c; assign b = $signed(a); //sign extend input assign c = b[w] ? (~b+1'b1) : b; //invert all bits and add 1 if msb=1 Thanks, Diego --------------------------------------- Posted through http://www.FPGARelated.comArticle: 147284
On Apr 21, 4:08 pm, Jon Beniston <j...@beniston.com> wrote: > It's running ok for me on Windows 7 64-bit. > > What particular part of the software are you having problems with? > > Jon Have you found any advantages to running Quartus II on Windows 7 or a 64 bit OS? Thanks, DerekArticle: 147285
dlopez <d@n_o_s_p_a_m.n_o_s_p_a_m.designgame.ca> wrote: > I'm trying to calculate the absolute value of a signed > number (two's complement). > Right now, I sign extend the input, and when msb=1, inverse all bits and > add 1. The sign extend is to take care of the most negative number. I suppose you can do that. I have never known anyone else to do that, but then on an N bit processor with N bit registers, it doesn't make much sense to try to store an N+1 bit value. > Is there a better way in terms of hardware utilization? > Here is my verilog code: > wire signed [w-1:0] a; > wire signed [w:0] b, c; > assign b = $signed(a); //sign exted input > assign c = b[w] ? (~b+1'b1) : b; //inverse all bits and add 1 if msb=1 -- glenArticle: 147286
glen herrmannsfeldt wrote: > In comp.arch.fpga rickman <gnuarm@gmail.com> wrote: > > On Apr 17, 7:17 pm, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote: > (snip on test benches) > > >> Yes, I was describing real world (hardware) test benches. > > >> Depending on how close you are to a setup/hold violation, > >> it may take a long time for a failure to actually occur. > > > That is the point. Finding timing violations in a simulation is hard, > > finding them in physical hardware is not possible to do with any > > certainty. A timing violation depends on the actual delays on a chip > > and that will vary with temperature, power supply voltage and process > > variations between chips. > > But they have to be done for ASICs, and all other chips as > part of the fabrication process. For FPGAs you mostly don't > have to do such, relying on the specifications and that the chips > were tested appropriately in the factory. I don't follow your reasoning. Why is finding timing violations in ASICs any different from FPGA? If the makers of ASICs can't characterize their devices well enough for static timing analysis to find the timing problems then ASIC designers are screwed. > > I had to work on a problem design once > > because the timing analyzer did not work or the constraints did not > > cover (I firmly believe it was the tools, not the constraints since it > > failed on a number of different designs). We tried finding the chip > > that failed at the lowest temperature and then used that at an > > elevated temperature for our "final" timing verification. Even with > > that, I had little confidence that the design would never have a > > problem from timing. Of course on top of that the chip was being used > > at 90% capacity. This design is the reason I don't work for that > > company anymore. The section head knew about all of these problems > > before he assigned the task and then expected us to work 70 hour work > > weeks. 
At least we got them to buy us $100 worth of dinner each > > evening! > > One that I worked with, though not at all at that level, was > a programmable ASIC (for a systolic array processor). For some > reason that I never knew the timing was just a little bit off > regarding to writes to the internal RAM. The solution was to use > two successive writes, which seemed to work. In the usual operation > mode, the RAM was initialized once, so the extra cycle wasn't much > of a problem. There were also some modes where the RAM had to > be written while processing data, such that the extra cycle meant > that the processor ran that much slower. > > > The point is that if you don't do static timing analysis (or have an > > analyzer that is broken) timing verification is nearly impossible. > > And even if you do, the device might still have timing problems. You keep saying that, but you don't explain. > >> Yes, I was trying to cover the case of not using static timing > >> analysis but only testing actual hardware. ?For ASICs, it is > >> usually necessary to test the actual chips, though they should > >> have already passed static timing. ? > > > If you find a timing bug in the ASIC chip, isn't that a little too > > late? Do you test at elevated temperature? Do you generate special > > test vectors? How is this different from just testing the logic? > > It might be that it works at a lower clock rate, or other workarounds > can be used. Yes, it is part of testing the logic. > > (snip) > > >> If you only have one clock, it isn't so hard. ?As you add more, > >> with different frequencies and/or phases, it gets much harder, > >> I agree. ?It would be nice to get as much help as possible > >> from the tools. > > > The number of clocks is irrelevant. I don't consider timing issues of > > crossing clock domains to be "timing" problems. There you can only > > solve the problem with proper logic design, so it is a logic > > problem. 
> > Yes, there is nothing to do about asynchronous clocks. It just has > to work in all cases. But in the case of supposedly related > clocks, you have to verify it. There are designs that have one > clock a multiple of the other clock frequency, or multiple phases > with specified timing relationship. Or even single clocks with > specified duty cycle. (I still remember the 8086 with its 33% duty > cycle clock.) > > With one clock you can run combinations of voltage, temperature, > and clock rate, not so hard but still a lot of combinations. > With related clocks, you have to verify that the timing between > the clocks works. But you can't verify timing by testing. You can never have any level of certainty that you have tested all the ways the timing can fail. If the clocks are related, what exactly are you testing, that they *are* related? Timing is something that has to be correct by design. RickArticle: 147287
I also use separate sequential and combinatorial always blocks. At first I felt that I should be able to have just a single sequential block but quickly became accustomed to 2 blocks and it now feels natural and I don't think it limits my ability to express my intent at all. Most of the experienced designers I work with use this style but not all of them.Article: 147288
>I suppose you can do that. I have never known anyone else >to do that, but then on an N bit processor with N bit registers, >it doesn't make much sense to try to store an N+1 bit value. So on a N bit processor with N bit registers, would you round that most negative value that doesn't fit to the max possible positive value? For example, say 4 bit data, -8 becomes 7? Diego --------------------------------------- Posted through http://www.FPGARelated.comArticle: 147289
glen herrmannsfeldt wrote: > combinations. One that I have heard of, though haven't actually > tried, is having a logic block where the delay is greater than > one clock cycle, but less than two. Maybe some tools can do that, > but I don't believe that all can. Just normal multicycle path, has been normal thing in tools for a long time. At least Altera, Xilinx, Synplify, Primetime and Precision support it. --KimArticle: 147290
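For reference, the multicycle path Kim mentions is declared with a one-line timing constraint. In SDC syntax (as accepted by Synplify, Quartus and Primetime; the register names below are made up for illustration):

```tcl
# Allow two clock cycles for the path into 'result_reg': setup is checked
# against the second capturing edge instead of the first.
set_multicycle_path 2 -setup -from [get_registers operand_reg*] -to [get_registers result_reg*]

# Relax hold by one cycle to match, keeping the hold check at the
# original launch edge.
set_multicycle_path 1 -hold -from [get_registers operand_reg*] -to [get_registers result_reg*]
```

The `-hold` companion constraint is the part most often forgotten; without it the tool tries to meet hold against the shifted setup edge.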
rickman wrote: > But you can't verify timing by testing. You can never have any level > of certainty that you have tested all the ways the timing can fail. Especially with ASIC you can't verify the design by testing. There are so many signoff corners and modes in the timing analysis. The old worst/best case in normal and testmode are long gone. Even 6 corner analysis in 2+ modes is for low end processes with big extra margins. With multiple adjustable internal voltage areas, powerdown areas etc. the analysis is hard even with STA. --KimArticle: 147291
On 21 Apr., 18:15, Ed McGettigan <ed.mcgetti...@xilinx.com> wrote: > On Apr 20, 6:20 pm, "stephen.cra...@gmail.com" > > <stephen.cra...@gmail.com> wrote: > > The CTO of Xilinx, during his keynote this morning at the > > Reconfigurable Architectures Workshop in Atlanta, made mention of the > > recent announcement of the Virtex 7 architecture. My colleagues and I > > assumed that either the announcement was very recent or not very well > > publicized as none of us had heard anything official regarding Virtex > > 7. A subsequent web search returned little except for a white paper on > > 28nm technology. > > > Does anyone know what announcement the CTO was referring to? > > Either your colleagues misheard what was said or our CTO, Ivo Bolson, > misspoke. There has been no announcement of a Virtex-7 FPGA family. > > Xilinx did recently announce aspects of future families that will be > developed on the 28nm process node. http://www.xilinx.com/technology/roadmap/index.htm > > Ed McGettigan > -- > Xilinx Inc. Hi, in Elektronik issue 8/2010 (bimonthly leading German electronics magazine) there's a featured article about "The FPGA of the Future". There is a statement that says: "The fabrication of [Xilinx's] 28nm devices will take place at Samsung and TSMC. The Spartan and Virtex product lines will be joined into a single product family for the 28 nm devices by Xilinx - PROBABLY named Virtex-7" So, the name is in print already. It's NOT mentioned who came up with it, but unless Xilinx plans to name this new line totally differently it's an obvious guess. Rumors travel fast. :-) Regards EilertArticle: 147292
dlopez <d@n_o_s_p_a_m.n_o_s_p_a_m.designgame.ca> wrote: >>I suppose you can do that. I have never known anyone else >>to do that, but then on an N bit processor with N bit registers, >>it doesn't make much sense to try to store an N+1 bit value. > So on a N bit processor with N bit registers, would you round that most > negative value that doesn't fit to the max possible positive value? > For example, say 4 bit data, -8 becomes 7? On most processors, twos complement arithmetic wraps on overflow. In that case, absolute value of the most negative value gives the same, most negative, value. There are some with saturating arithmetic, such that give the largest value on overflow and smallest on underflow. In that case, I would expect the most positive value. Try it on your favorite processor and see what it does. -- glenArticle: 147293
On Apr 22, 3:59 am, John Adair <g...@enterpoint.co.uk> wrote: > We finally made an assembly slot and built the 4 remaining Polmaddie > CPLD and FPGA boards. These very low cost CPLD and FPGA boards will > sell to universities and colleges in prices as low as $30-40. One off > pricing starts at $60-70. > > The concept is a bit different to that offered by most development > board vendors and we have 5 solutions, from 4 different CPLD/FPGA > vendors, allowing you to evaluate different tool flows or even > different technologies with a common feature set. > > More details http://www.enterpoint.co.uk/polmaddie/polmaddie_family.html. > "ActelTM ProASIC3TM (Polmaddie5)." and the link says Polmaddie5 : " Software ISETM WebpackTM software is a free development tool suite from XilinxTM for CPLD + FPGA design development. More details http://www.xilinx.com/products/design_resources/design_tool/index.htm." oops... ;) and missing is any mention of which ProASIC3 is fitted - rather an important detail ? -jgArticle: 147294
On Apr 22, 3:59 am, John Adair <g...@enterpoint.co.uk> wrote: > We finally made an assembly slot and built the 4 remaining Polmaddie > CPLD and FPGA boards. These very low cost CPLD and FPGA boards will > sell to universities and colleges in prices as low as $30-40. One off > pricing starts at $60-70. > > The concept is a bit different to that offered by most development > board vendors and we have 5 solutions, from 4 different CPLD/FPGA > vendors, allowing you to evaluate different tool flows or even > different technologies with a common feature set. A very good idea. Since you do an EPM3128A and XC2C128, it's surprising to not see the Atmel ATF1508RE ? ATF1508RE has more logic than a XC2C128, and lower power than a EPM3128A. -jgArticle: 147295
On Apr 21, 4:34 pm, Patrick Maupin <pmau...@gmail.com> wrote: > On Apr 21, 8:19 am, Jan Decaluwe <j...@jandecaluwe.com> wrote: > > Your coding style provides a very verbose workaround for temporary > > variables. I just can't imagine this is how you do test benches, that > > are presumably much more complex than your RTL code. Presumably > > there you use temporary variables directly where you need them without > > great difficulty. Why would it have to be so different for > > synthesizable > > RTL? > > You're right. Testbenches do not suffer from this limitation. But, > in point of fact, I can use any sort of logic in my testbench. I use > constructs all the time that aren't realistically synthesizable, so > comparing how I code synthesizable RTL vs how I code testbenches would > turn up a lot more differences than just this. As you say, synthesizable RTL has a lot of inherent restrictions. I just don't see the logic in adding artificial restrictions on top of those. > > Most importantly: your coding style doesn't support non-temporary > > variables. In other words, register inferencing from variables is not > > supported and therefore ignored as technique. In this sense, this is > > actually a good illustration of the point I'm trying to make. > > Well, it may be a good illustration to you, but now you're waxing > philosophical again. Care to show an example (preferably in verilog) > of how not using this coding style supports your preferred technique? In my experience, we are talking about a paradigm shift here. Easy once you "get it", but apparently confusing to many engineers in the mean time. Therefore, I now think that a meaningful discussion must be more elaborate than a typical newsgroup post can bear :-) What I can offer you is a rather lengthy discussion of two design variants that highlight the issues through their (subtle) differences. The case is based on a real ambiguity that I once detected in the Xilinx ISE examples. 
Unfortunately, the source code is in Python :-) (MyHDL). However, there is equivalent converted Verilog available in the article. http://www.myhdl.org/doku.php/cookbook:jc2 JanArticle: 147296
Hi I have a simple multiplier custom ip (the tutorial is easily found online). The tutorial created 3 (internal) signals a, b and p. It multiplies a (16 bit) and b (16 bit) and sends the product to p (32 bit). It makes use of fifos (read fifo and write fifo). This works fine. But I am trying to make p external. I changed the necessary vhdl files by adding port p where it says --USER ports added here and --USER ports mapped here The thing is when I try simulating it, port p which is now external remains in an unknown state. I don't understand what is going wrong here. Could someone please help? --------------------------------------- Posted through http://www.FPGARelated.comArticle: 147297
The Polmaddie4/5 webpages will be improved. Obviously the software aspect will be different. We didn't do an Atmel part mainly because they are almost non-existent in the educational marketplace. There are a number of other vendors as well that we didn't do on that basis. Ultimately the first 5 boards are a market test and if there are other things that prove popular requests then maybe we might do them. There were also other design criteria that knocked many parts out including (1) TQ144 package to reuse tooling we did for Polmaddie1 (2) A free software tools criteria. I do know the Actel Igloo and SiliconBlue parts got thrown out on the basis of (1) much as they would be nice parts to try. All of these factors were important in delivering what is a very low cost set of boards. John Adair Enterpoint Ltd. On 22 Apr, 09:33, -jg <jim.granvi...@gmail.com> wrote: > On Apr 22, 3:59 am, John Adair <g...@enterpoint.co.uk> wrote: > > > We finally made an assembly slot and built the 4 remaining Polmaddie > > CPLD and FPGA boards. These very low cost CPLD and FPGA boards will > > sell to universities and colleges in prices as low as $30-40. One off > > pricing starts at $60-70. > > > The concept is a bit different to that offered by most development > > board vendors and we have 5 solutions, from 4 different CPLD/FPGA > > vendors, allowing you to evaluate different tool flows or even > > different technologies with a common feature set. > > A very good idea. > > Since you do an EPM3128A and XC2C128, it's surprising to not see the > Atmel ATF1508RE ? > ATF1508RE has more logic than a XC2C128, and lower power than a > EPM3128A. > > -jgArticle: 147298
> Well, some of the comments were regarding ASIC design, where > things aren't so sure. For FPGA designs, there is, as you say, > "properly constrained" which isn't true for all design and tool > combinations. One that I have heard of, though haven't actually > tried, is having a logic block where the delay is greater than > one clock cycle, but less than two. Maybe some tools can do that, > but I don't believe that all can. As Kim says multi-cycle paths have been 'constrainable' in any FPGA tool I have used for as long as I can remember. Nial.Article: 147299
On Apr 21, 10:26 pm, "dlopez" <d@n_o_s_p_a_m.n_o_s_p_a_m.designgame.ca> wrote: > Hi, > I'm trying to calculate the absolute value of a signed number (two's > complement). > > Right now, I sign extend the input, and when msb=1, inverse all bits and > add 1. The sign extend is to take care of the most negative number. > > Is there a better way in terms of hardware utilization? > > Here is my verilog code: > > wire signed [w-1:0] a; > wire signed [w:0] b, c; > > assign b = $signed(a); //sign extend input > assign c = b[w] ? (~b+1'b1) : b; //invert all bits and add 1 if msb=1 > > Thanks, > Diego > > --------------------------------------- > Posted through http://www.FPGARelated.com The absolute value of an n-bit signed value is at most an n-bit unsigned value. (An n-bit unsigned value is an n+1 bit signed value.) Does your absolute value need to be signed? The ~b+1 seems like the best approach from a hardware standpoint. Looking at your technology view for whatever family you're using, you might see that you're better off doing the addition outside the conditional: assign c = (b[w] ? ~b : b) + {1'b0, b[w]}; Given that signed math is one of the nastiest things in Verilog, I'm pretty certain the operations would be unsigned because the constant is unsigned but that shouldn't matter for absolute value.
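That "invert conditionally, add the sign bit outside the mux" structure can be sanity-checked with a quick C model of the (w+1)-bit datapath. A sketch, assuming a 16-bit input for illustration:

```c
#include <stdint.h>

/* Models: c = (b[w] ? ~b : b) + b[w], where b is the sign-extended input.
   Inverting on negative and adding the sign bit afterwards lets a single
   adder supply the '+1' of two's-complement negation. */
static int32_t abs_model(int16_t a) {
    int32_t b = a;              /* sign extension to more than w bits */
    int32_t sign = (b < 0);     /* b[w], the sign bit */
    return (sign ? ~b : b) + sign;
}
```

Because `b` is one bit wider than the input, even the most negative input comes out correctly: -32768 maps to +32768, which needs that extra result bit, matching the thread's point that an n-bit signed value's absolute value is an n-bit unsigned value.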