Is it possible to simulate logic cores with WebPack? I'm using WebPack 9.1.03i. First I tried the ISE Simulator, then the free ModelSim XE/III Starter, and failed with both. OTOH, I'm not sure that I am doing everything correctly...

Thanks,
Thomas

Article: 117926
"Newman" <newman5382@yahoo.com> wrote in message news:1176476922.900374.236090@b75g2000hsg.googlegroups.com... > Simulation is a necessary but insufficient for timing. After > everything is ready to go, I'll do a sanity check with a full timing > simulation mainly for primary input stimulus. I rarely get a timing > violation, but I cannot say never. > Hello Newman, OK, that's fine, but if you have an example where the post P&R failed the timing simulation but the P&R tool static timing said it passed, I'd be grateful if you could report it to the FPGA vendor. That would mean the design methodology followed by most folks on this newsgroup, myself included, is flawed, and something needs to be fixed. > > The information I gave is from an > experience I had over ten years ago. I found that a hold time > violation masqueraded as a too long prop delay. > Right, but the software's got better in the last ten years. On second thoughts, I take your point! :-) Thanks, Syms. p.s. I couldn't resist the Seinfeld opening! :-) From Iwo.Mergler@soton.sc.philips.com Fri Apr 13 08:55:24 2007 Path: newsdbm05.news.prodigy.net!newsdbm04.news.prodigy.net!newsdst01.news.prodigy.net!prodigy.com!newscon04.news.prodigy.net!prodigy.net!newsfeed.telusplanet.net!newsfeed2.telusplanet.net!newsfeed.telus.net!cycny01.gnilink.net!cyclone1.gnilink.net!gnilink.net!nx02.iad01.newshosting.com!newshosting.com!newsfeed.icl.net!newsfeed.fjserv.net!newsfeed.ision.net!newsfeed2.easynews.net!ision!news-lond.gip.net!news.gsl.net!gip.net!lon04-news-philips!53ab2750!not-for-mail Message-Id: <dol4f4-dbe.ln1@c2968.soton.sc.philips.com> From: Iwo Mergler <Iwo.Mergler@soton.sc.philips.com> Subject: Re: No login in uClinux (Petalinux) Newsgroups: comp.arch.fpga References: <1176466354.260015.265730@n76g2000hsh.googlegroups.com> Lines: 45 Organization: Not organised User-Agent: KNode/0.9.2 MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7Bit Date: Fri, 13 Apr 2007 16:55:24 +0100 NNTP-Posting-Host: 161.85.127.140 X-Complaints-To: newsmaster@rain.fr X-Trace: lon04-news-philips 1176483343 161.85.127.140 (Fri, 13 Apr 2007 16:55:43 GMT) NNTP-Posting-Date: Fri, 13 Apr 2007 16:55:43 GMT Xref: prodigy.net comp.arch.fpga:129559 Pablo wrote: > Is there some method to avoid "login prompt"?. The reason is that I > have designed an application over uClinux and I want that this app run > without introducing any information. Could I edit some inittab file? > > Thanks Hi Pablo, there are several ways, depending on what your root file system and init procedure is like. I'm guessing you are using a initrd (RAM/ROM) file system. In this case, the first thing the kernel does is executing /linuxrc Now, /linuxrc can be a shell script or a link to an executable, typically /sbin/init To answer your question, simply create a link from your application to /linuxrc and your application is the first thing to run after boot. However, there are a few things to think about. If you run an application from a shell, the shell will have set up some environmental variable, command line options, and so on. If you run your application directly, this won't be the case. If you find that your application doesn't like it, you can always change /linuxrc into a shell script and launch your application from there. Another option is, as you said, edit /etc/inittab. /etc/inittab is read by /sbin/init. You can place your application in this file, instead of the getty which is normally used. 
This would have the advantage that you can respawn your application in case it exits or crashes.

Kind regards,

Iwo

Article: 117927
Ken Soon wrote:
>> Adding output registers is simple...
>>
>> if rising_edge(clk) then
>>   memout_d1 <= memout;
>>   memout_d2 <= memout_d1;
>>   memout_d3 <= memout_d2;
>>   ...
>> end if;
>>
>> You could add a generic port to your memory template to automatically
>> generate these delays to keep your upper-level HDL clean.
>>
> Hmm...
> I dun suppose you mean by this way? :
>
> begin
>   process (wr_clk)
>   begin
>     if (wr_clk'event and wr_clk = '1') then
>       if (ce = '1') then
>         if (wr_en = '1') then
>           din2 <= din;
>           din3 <= din2;
>           mem_array(conv_integer('0' & wr_addr)) <= din3;
>         end if;
>       end if;
>     end if;
>   end process;

If you delay the write data, you should also delay the write address, otherwise you will have problems. Registers on the write side would mostly be there to reduce fanout; you probably do not need more than one extra input register here (for data, address and enable, since all three need to be equally delayed).

So, your write process (using your coding style) would resemble this:

  process (wr_clk)
  begin
    if (wr_clk'event and wr_clk = '1') then
      -- first determine if something needs to be written on the next cycle
      -- register duplication will be applied to these if necessary
      if (ce = '1') then
        wr_en1   <= wr_en;
        wr_addr1 <= wr_addr;
        din1     <= din;
      else
        wr_en1 <= '0';
      end if;
      -- then do the actual write
      if (wr_en1 = '1') then
        mem_array(conv_integer('0' & wr_addr1)) <= din1;
      end if;
    end if;
  end process;

> process (rd_clk)
> begin
>   if (rd_clk'event and rd_clk = '1') then
>     if (ce = '1') then
>       if (rd_en = '1') then
>         dout3 <= mem_array(conv_integer('0' & rd_addr));
>         dout2 <= dout3;
>         dout  <= dout2;
>       end if;
>     end if;
>   end if;
> end process;

You probably do not want to put the delays within your enable block... and like the write, you probably want one register level on the address. With both tweaks, the process should look like this:

  process (rd_clk)
  begin
    if (rd_clk'event and rd_clk = '1') then
      if (ce = '1') then
        rd_en1   <= rd_en;
        rd_addr1 <= rd_addr;
      else
        rd_en1 <= '0';
      end if;
      if (rd_en1 = '1') then
        dout2 <= mem_array(conv_integer('0' & rd_addr1));
      end if;
      dout1 <= dout2;
      dout  <= dout1;
    end if;
  end process;

> Oh yah, I went to use the FPGA editor. I found that this time, luckily, the
> 3 timing constraints all came from one CLB, so I can easily just focus on
> this CLB. I thought of shifting the CLB up closer to its sources/
> destinations and thereby shortening the route. But then... the timing delays
> increased and subsequent shifting did not even change the timing constraint
> by a bit no matter where I shifted above the original position.

Your slow paths were probably your clock/read/write enables, since you did not decouple the enable signal from the rest of the logic. The modified read/write processes in this message should fix this.

> Another engineer told me that it could have helped, as the route could have
> buffers which would have delayed the signals, so I had hope that it would work
> but alas it didn't.

When you have signals with large fan-outs and you do pipelining, you need to decouple your enable signals to keep fan-outs on enables in check. If you look at the two processes, you can see that I did this by combining all incoming enables to generate a single-signal enable for the following pipeline stage.

> Now I'm wondering if my adding of the register levels is wrong. Well, I should
> hope that it is wrong huh, would mean still got chance...

Since you are new to FPGAs, it is normal that you are not (yet) familiar with the fundamentals of working around common design issues... but most of these you should be able to deduce by reading your static timing analysis and thinking about the simplest ways to fix the problems it reveals.

Article: 117928
Consider the very simple VHDL code at the end of the message. For each clock cycle two operations are done:

1) A counter is incremented;
2) The bit 0 of the counter is checked. If it's '0', an output flag is triggered.

If you simulate the post-translated model of this module with the ModelSim starter edition shipped with the latest WebPack, the behaviour is correct (or at least is what I expect it to be): the flag is triggered at the very first clock cycle. When you exit from reset CNT is zero, so that at the first clock its first bit is zero too.

If you instead simulate the behavioural model (i.e. plain VHDL) of the same module, the flag is triggered at the *second* clock! Apparently the simulator considers the increment operation as if it were synchronous, so that when CNT(0) is checked, it already has the value 1. In fact, if you put the increment statement *after* the if statement, the behaviour changes again and the flag is correctly triggered at the first clock.

My question is quite simple: is this a simulator bug, or did I always misinterpret synchronous circuits at their own very basic level?

Thank you!

library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.STD_LOGIC_ARITH.ALL;
use IEEE.STD_LOGIC_UNSIGNED.ALL;

entity test is
  Port ( CLK   : in  STD_LOGIC;
         O     : out STD_LOGIC;
         RESET : in  STD_LOGIC);
end test;

architecture Behavioral of test is
begin
  main: process( CLK, reset)
    variable CNT: std_logic_vector(15 downto 0);
  begin
    if RESET='1' then
      CNT := X"0000";
      O <= '0';
    elsif CLK='1' and CLK'event then
      CNT := CNT+1;
      if CNT(0)='0' then
        O <= '1';
      end if;
    end if;
  end process;
end Behavioral;

--
emboliaschizoide.splinder.com

Article: 117929
Weng Tianxiang <wtxwtx@gmail.com> wrote:
> Which is the best book about CORDIC algorithms?

My recommendation:

Jean-Michel Muller: "Elementary Functions: Algorithms and Implementation". Birkhäuser 2006, ISBN 0-8176-4372-9.

It covers different shift-and-add algorithms like CORDIC or BKM in detail...

WD
--

Article: 117930
dalai lamah wrote:
> Consider the very simple VHDL code at the end of the message. For each
> clock cycle two operations are done:
>
> 1) A counter is incremented;
> 2) The bit 0 of the counter is checked. If it's '0', an output flag is
> triggered.
>
> If you simulate the post-translated model of this module with the ModelSim
> starter edition shipped with the latest WebPack, the behaviour is correct (or
> at least is what I expect it to be): the flag is triggered at the very first
> clock cycle. When you exit from reset CNT is zero, so that at the first
> clock its first bit is zero too.
> If you instead simulate the behavioural model (i.e. plain VHDL) of the same
> module, the flag is triggered at the *second* clock! Apparently the
> simulator considers the increment operation as if it were synchronous, so that
> when CNT(0) is checked, it already has the value 1. In fact, if you put the
> increment statement *after* the if statement, the behaviour changes again
> and the flag is correctly triggered at the first clock.
>
> My question is quite simple: is this a simulator bug, or did I always
> misinterpret synchronous circuits at their own very basic level?

Variable assignments are processed sequentially while signal assignments are processed synchronously... at least in principle. Since 'CNT' is a variable, its incrementation becomes effective immediately and the check done after the incrementation occurs on the incremented value. If you used a signal instead, the test would happen on the CNT value that was in effect before the process was evaluated - signal assignments become effective after an event's processing (re-evaluate all affected processes) is completed.

With the code you posted, the correct behavior is for the flag to be set on the first cycle... and if you put the increment after the test, the correct behavior becomes the flag being set on the second cycle. If you used a signal instead of a variable, the test result would be independent of its position relative to assignment operations, since the test would be done on inputs as of the moment the process' evaluation was triggered - the previous cycle's output.

Since not all synthesis and simulation tools agree on how to deal with variables in synthesizable code, some weird bugs can come up, so it is generally better to use signals, which more closely (less ambiguously) represent how synchronous hardware works.
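For reference, here is a minimal sketch (my own illustration, not code from either poster) of the same counter written with CNT as a signal. With a signal, the test always sees the value CNT had at the previous clock edge, no matter where the increment is written, so the position of the assignment stops mattering:

  library IEEE;
  use IEEE.STD_LOGIC_1164.ALL;
  use IEEE.STD_LOGIC_UNSIGNED.ALL;

  architecture Behavioral_sig of test is
    signal CNT : std_logic_vector(15 downto 0);
  begin
    main: process (CLK, RESET)
    begin
      if RESET = '1' then
        CNT <= X"0000";
        O   <= '0';
      elsif CLK = '1' and CLK'event then
        CNT <= CNT + 1;          -- scheduled; takes effect only after the process suspends
        if CNT(0) = '0' then     -- tests the registered (pre-increment) value of CNT
          O <= '1';
        end if;
      end if;
    end process;
  end Behavioral_sig;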
Article: 117931

Weng Tianxiang wrote:
> On Apr 12, 9:24 am, "Ben Jones" <ben.jo...@xilinx.com> wrote:
>
>> "Weng Tianxiang" <wtx...@gmail.com> wrote in message
>> news:1176392513.597727.256450@d57g2000hsg.googlegroups.com...
>>
>>> Thank you for your pointing to Ray Andraka website.
>>> I have read his paper, a very nice paper. I will print all references
>>> in his paper too. But I am afraid that the paper was published 1998,
>>> almost 9 years ago and there must have been new progresses on this
>>> topics.
>>
>> Not necessarily. All the books on my reference shelf I refer to most often
>> are >10 years old. :-)
>>
>> CORDIC was a great way to do many DSP functions before FPGAs with dedicated
>> fast low-power multipliers came along. Nowadays, a lot of its advantages are
>> waning. However, it's still a fascinating and elegant idea and worthy of
>> study (certainly I keep meaning to learn more about it!).
>>
>> I'd definitely start by absorbing all that Ray has to say on the subject,
>> then go from there.
>>
>>   -Ben-
>
> Hi Ben,
> I have read Ray paper and went to his website and read "How to
> calculate sine()". After reading I still have difficulties
> understanding its process.
>
> Can you give an example on how to calculate sine(x), where x = 35
> degrees?
>
> I don't understand why he could calculate the factor K before starting
> calculation. Because during calculating, one doesn't know how many
> iterations should be taken to reach required accuracy.
>
> Thank you.
>
> Weng

Weng,

First, thanks for looking at my paper. I'm not aware of much research on the CORDIC algorithm itself. There are a fair amount of applications papers that are using CORDIC in order to help solve problems, but nothing as far as I know regarding the algorithm itself. There also are not a lot of books that devote much attention to CORDIC, although I suppose there are more now than there were when I wrote that paper. There are a few books where I've seen a section on CORDIC. I think Frerking's Digital Communications (also ~10 years old now) has a section, as does Meyer-Baese's DSP with FPGAs. I don't think either has much more information than I have in my paper though.

Regarding the algorithm, the accuracy of the result is a function of the number of iterations you perform. Generally speaking, you compute the number of iterations required to meet the accuracy specification for your application and that becomes a system constant. From there, you compute K to determine the exact gain of your design in that application. In short, you do know how many iterations you are going to go before doing the calculation. Roughly speaking, the magnitude output for a vectoring CORDIC improves by two bits for each iteration. Phase output improves by one bit per iteration, and for rotation mode, the I and Q output accuracy improves by 1 bit per iteration.

Article: 117932
Walter Dvorak wrote:
> Weng Tianxiang <wtxwtx@gmail.com> wrote:
>> Which is the best book about CORDIC algorithms?
>
> My recommendation:
>
> Jean-Michel Muller: "Elementary Functions: Algorithms and Implementation".
> Birkhäuser 2006, ISBN 0-8176-4372-9
>
> It covers different shift-and-add algorithms like CORDIC or BKM in detail...
>
> WD

I think the treatment in Frerking as well as that in Meyer-Baese is more readable. Muller has a more complete treatise, but it is a difficult read comparatively speaking.

Article: 117933
Jonathan Bromley wrote:
> On 12 Apr 2007 12:40:55 -0700, "Weng Tianxiang"
> <wtxwtx@gmail.com> wrote:
>
>> I have read Ray paper and went to his website and read "How to
>> calculate sine()". After reading I still have difficulties
>> understanding its process.
>>
>> Can you give an example on how to calculate sine(x), where x = 35
>> degrees?
>>
>> I don't understand why he could calculate the factor K before starting
>> calculation. Because during calculating, one doesn't know how many
>> iterations should be taken to reach required accuracy.
>
> Ah, but that's not so. The whole point about CORDIC is that
> you *do* know in advance how many iterations it takes.
> You *never* stop the iterations early.
>
> Having said that... the fudge-factor K reaches an asymptote
> so quickly that the error (for any number of iterations greater
> than about 10) is negligibly small.
>
> I'll try an explanation... be sure to read it in a
> monospaced (Courier etc) font.
>
> I can rotate a vector (x,y) through any angle A, giving a
> rotated vector (x', y'), by using the matrix multiplication
>
>   / x' \   / cos(A)  -sin(A) \   / x \
>   |    | = |                 | * |   |
>   \ y' /   \ sin(A)   cos(A) /   \ y /
>
> We can divide both sides of the equation by cos(A)...
>
>   / x'/cos(A) \   / 1       -tan(A) \   / x \
>   |           | = |                 | * |   |
>   \ y'/cos(A) /   \ tan(A)   1      /   \ y /
>
> since tan(A) = sin(A)/cos(A). That works out
> fairly easily as
>
>   x'/cos(A) = x - y.tan(A)   or   x' = cos(A) * (x - y.tan(A))
>   y'/cos(A) = x.tan(A) + y   or   y' = cos(A) * (x.tan(A) + y)
>
> Now here comes the cunning part: Suppose we choose A
> so that tan(A) is exactly 2^(-N) for some integer N.
> The calculation of x.tan(A) and y.tan(A) is now a
> simple right shift (division by an exact power of 2).
> Note, too, that you can make the rotation negative
> simply by swapping the + and - operations.
>
> Of course, to get this we require
>   A = arctan(2^(-N))
> for some integer N. For example:
>
>   N=0: A = arctan(1)    = 45 degrees
>   N=1: A = arctan(0.5)  = 26.565... degrees
>   N=2: A = arctan(0.25) = 14.036... degrees
>
> All this would be harmless fun, until Volder (in his 1959
> paper) was able to prove that ANY angle within a certain
> range (roughly +/- 100 degrees) can be expressed as the
> sum of all these different A, for N=0,1,2....infinity,
> if each of those terms is either added or subtracted
> (i.e. multiplied by +1 or -1). Of course we don't need
> to go to infinity, but only "far enough" for any given
> accuracy. To take your 35-degree example:
>
>   start                  0                        ... too small
>   add arctan(1)          0 + 45          = 45     ... too big
>   subtract arctan(1/2)   45 - 26.565     = 18.435 ... too small
>   add arctan(1/4)        18.435 + 14.036 = 32.471 ... too small
>   add arctan(1/8)        32.471 + 7.125  = 39.696 ... too big
>   subtract arctan(1/16)  39.696 - 3.576  = 36.120 ... too big
>   ...
>
> and after, let's say, 16 such steps you will have converged
> on your desired 35 degree angle to within about 0.002%.
> Of course, you pre-calculate all these arctan angles and
> store them in a little lookup table.
>
> Meanwhile, as you have been adding and subtracting these
> angles you have also been doing the matrix multiplication
> (in the right direction at each step, of course).
> If you started with x = 1.00000 and y = 0, then
> by the time you have finished, the final x and y values
> will be the cosine and sine of your desired angle
> BUT they will also have been multiplied by all the
> 1/cos(A) factors at each step. So you get a
> scaling (the CORDIC gain) of
>
>   1/cos(arctan(1)) * 1/cos(arctan(0.5)) * ...
>
> Try it on a spreadsheet; you'll find this value very
> quickly converging on the limit value 1.646760258.
> To fix this problem, all you need to do is start with
> x=0.607252935... instead of 1.00000.
>
> SUCH a pretty algorithm... but, as others have said,
> in these days of fast, cheap multipliers it's not
> so useful as it once was.

CORDIC is still hard to beat for extracting magnitude and phase. Its advantage is much less clear for phase rotators, e.g. mixers, where there are plentiful fast multipliers on the FPGA. It is also still useful when you need a phase resolution on a rotator of more than about 2^12 points per revolution, as the sin/cos LUT starts getting unwieldy or needing more complicated interpolation to get the resolution.
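To connect the arithmetic in the quoted explanation to hardware: each iteration is just two add/subtracts on shifted copies of x and y plus an angle accumulator. Below is a minimal, hedged VHDL sketch of a single rotation-mode stage; the names, widths and fixed-point scaling are my own assumptions, not code from Ray's paper or Jonathan's post:

  library IEEE;
  use IEEE.STD_LOGIC_1164.ALL;
  use IEEE.NUMERIC_STD.ALL;

  entity cordic_stage is
    generic (
      WIDTH : integer := 18;  -- data width (assumed)
      N     : integer := 0    -- iteration index: shift amount, selects arctan(2^-N)
    );
    port (
      clk    : in  std_logic;
      x_in   : in  signed(WIDTH-1 downto 0);
      y_in   : in  signed(WIDTH-1 downto 0);
      z_in   : in  signed(WIDTH-1 downto 0);  -- residual angle still to rotate through
      atan_n : in  signed(WIDTH-1 downto 0);  -- pre-computed arctan(2^-N), same scaling as z
      x_out  : out signed(WIDTH-1 downto 0);
      y_out  : out signed(WIDTH-1 downto 0);
      z_out  : out signed(WIDTH-1 downto 0)
    );
  end cordic_stage;

  architecture rtl of cordic_stage is
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        if z_in >= 0 then              -- rotate by +arctan(2^-N)
          x_out <= x_in - shift_right(y_in, N);
          y_out <= y_in + shift_right(x_in, N);
          z_out <= z_in - atan_n;
        else                           -- rotate by -arctan(2^-N)
          x_out <= x_in + shift_right(y_in, N);
          y_out <= y_in - shift_right(x_in, N);
          z_out <= z_in + atan_n;
        end if;
      end if;
    end process;
  end rtl;

Chaining, say, 16 of these stages with N = 0..15, loading z with the desired angle, y with 0 and x with 0.607252935 (to absorb the 1.6467... gain up front), leaves the cosine in x and the sine in y, exactly as described above.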
Article: 117934

Hi experts, I want to pursue my PhD in SoC; would you tell me the best institutes or research groups in Europe in this area?

Thanks,
Baig

Article: 117935
Unless their tools have improved very significantly in the past couple of years, I think you are going to be disappointed with the amount of detail work you are going to need to do. Their tools are not in the same league as Xilinx's and Altera's. It is much more like working in the old Xilinx XDS back around 1990 (much like doing the design under today's FPGA Editor).

Article: 117936
Weng Tianxiang wrote:
> Which is the best book about CORDIC algorithms?

Walter Dvorak wrote:
> My recommendation:
>
> Jean-Michel Muller: "Elementary Functions: Algorithms and Implementation".
> Birkhäuser 2006, ISBN 0-8176-4372-9

I second that. It's an excellent book. Although there are other good sources of information on CORDIC, this book had the best explanations I've seen for CORDIC algorithms (including many variations) and other shift-and-add algorithms (e.g., e^x, ln). There was an earlier edition, so if you buy it used, you may want to make sure you get the 2006 edition.

Article: 117937
Austin Lesea wrote:
> Distributors have been squeezed pretty hard by price conscious
> customers: there is no such thing as shelf stock any longer. It is all
> "virtual stock."

I understand the reason, but it still seems to me that local distributors aren't providing nearly the value add that they did back in the good old days.

It used to be that if I urgently needed a part (and it wasn't too exotic), there was a reasonable chance that I could order it from a local distributor for will call, and pick it up later the same day, because they did have shelf stock. On the other hand, back then we didn't have overnight and two-day shipping options. If the part had to be shipped from another distributor location, it would take a week, and if it had to be ordered from the factory, it would take at least two weeks.

Nowadays, I buy everything I can from Digikey and Mouser, and my general rule of thumb is that if they don't have it in stock, it doesn't really exist. Of course, as with any rule of thumb, there are exceptions. Digikey doesn't seem to stock the latest Xilinx parts or development boards. Is that because they don't want to stock them, or because they aren't able to get them from Xilinx?

Anyhow, from the manufacturer's perspective, wasn't the main purpose of a network of distributors that the stock would be closer to the customers, and the manufacturer wouldn't have to deal with small orders? Otherwise what's the point? If the manufacturer is going to drop-ship orders anyhow, and the distributor can't/won't accept orders for less than the minimum order quantity, the manufacturer might as well cut out the middleman.

Eric

Article: 117938
On Apr 12, 3:32 pm, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com> wrote:
> On 12 Apr 2007 12:40:55 -0700, "Weng Tianxiang"
> <wtx...@gmail.com> wrote:
>
> > I have read Ray paper and went to his website and read "How to
> > calculate sine()". After reading I still have difficulties
> > understanding its process.
>
> > Can you give an example on how to calculate sine(x), where x = 35
> > degrees?
>
> > I don't understand why he could calculate the factor K before starting
> > calculation. Because during calculating, one doesn't know how many
> > iterations should be taken to reach required accuracy.
>
> Ah, but that's not so. The whole point about CORDIC is that
> you *do* know in advance how many iterations it takes.
> You *never* stop the iterations early.
>
> Having said that... the fudge-factor K reaches an asymptote
> so quickly that the error (for any number of iterations greater
> than about 10) is negligibly small.
>
> I'll try an explanation... be sure to read it in a
> monospaced (Courier etc) font.
>
> I can rotate a vector (x,y) through any angle A, giving a
> rotated vector (x', y'), by using the matrix multiplication
>
>   / x' \   / cos(A)  -sin(A) \   / x \
>   |    | = |                 | * |   |
>   \ y' /   \ sin(A)   cos(A) /   \ y /
>
> We can divide both sides of the equation by cos(A)...
>
>   / x'/cos(A) \   / 1       -tan(A) \   / x \
>   |           | = |                 | * |   |
>   \ y'/cos(A) /   \ tan(A)   1      /   \ y /
>
> since tan(A) = sin(A)/cos(A). That works out
> fairly easily as
>
>   x'/cos(A) = x - y.tan(A)   or   x' = cos(A) * (x - y.tan(A))
>   y'/cos(A) = x.tan(A) + y   or   y' = cos(A) * (x.tan(A) + y)
>
> Now here comes the cunning part: Suppose we choose A
> so that tan(A) is exactly 2^(-N) for some integer N.
> The calculation of x.tan(A) and y.tan(A) is now a
> simple right shift (division by an exact power of 2).
> Note, too, that you can make the rotation negative
> simply by swapping the + and - operations.
>
> Of course, to get this we require
>   A = arctan(2^(-N))
> for some integer N. For example:
>
>   N=0: A = arctan(1)    = 45 degrees
>   N=1: A = arctan(0.5)  = 26.565... degrees
>   N=2: A = arctan(0.25) = 14.036... degrees
>
> All this would be harmless fun, until Volder (in his 1959
> paper) was able to prove that ANY angle within a certain
> range (roughly +/- 100 degrees) can be expressed as the
> sum of all these different A, for N=0,1,2....infinity,
> if each of those terms is either added or subtracted
> (i.e. multiplied by +1 or -1). Of course we don't need
> to go to infinity, but only "far enough" for any given
> accuracy. To take your 35-degree example:
>
>   start                  0                        ... too small
>   add arctan(1)          0 + 45          = 45     ... too big
>   subtract arctan(1/2)   45 - 26.565     = 18.435 ... too small
>   add arctan(1/4)        18.435 + 14.036 = 32.471 ... too small
>   add arctan(1/8)        32.471 + 7.125  = 39.696 ... too big
>   subtract arctan(1/16)  39.696 - 3.576  = 36.120 ... too big
>   ...
>
> and after, let's say, 16 such steps you will have converged
> on your desired 35 degree angle to within about 0.002%.
> Of course, you pre-calculate all these arctan angles and
> store them in a little lookup table.
>
> Meanwhile, as you have been adding and subtracting these
> angles you have also been doing the matrix multiplication
> (in the right direction at each step, of course).
> If you started with x = 1.00000 and y = 0, then
> by the time you have finished, the final x and y values
> will be the cosine and sine of your desired angle
> BUT they will also have been multiplied by all the
> 1/cos(A) factors at each step. So you get a
> scaling (the CORDIC gain) of
>
>   1/cos(arctan(1)) * 1/cos(arctan(0.5)) * ...
>
> Try it on a spreadsheet; you'll find this value very
> quickly converging on the limit value 1.646760258.
> To fix this problem, all you need to do is start with
> x=0.607252935... instead of 1.00000.
>
> SUCH a pretty algorithm... but, as others have said,
> in these days of fast, cheap multipliers it's not
> so useful as it once was.
> --
> Jonathan Bromley, Consultant
>
> DOULOS - Developing Design Know-how
> VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services
>
> Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
> jonathan.brom...@MYCOMPANY.com
> http://www.MYCOMPANY.com
>
> The contents of this message may contain personal views which
> are not the views of Doulos Ltd., unless specifically stated.

Hi Jonathan,

Thank you very much for your explanation. Very excellent explanation!!!

This is the second time I have got your help. The first time was when I was programming a random generation circuit, copying your post in this group.

Weng

Article: 117939
On Apr 13, 11:02 am, Walter Dvorak <use-reply...@invalid.invalid> wrote:
> Weng Tianxiang <wtx...@gmail.com> wrote:
> > Which is the best book about CORDIC algorithms?
>
> My recommendation:
>
> Jean-Michel Muller: "Elementary Functions: Algorithms and Implementation".
> Birkhäuser 2006, ISBN 0-8176-4372-9
>
> It covers different shift-and-add algorithms like CORDIC or BKM in detail...
>
> WD
> --

Hi WD,

Thank you for your recommendation. I will review the book and decide whether to buy it.

Weng

Article: 117940
On 13 Apr 2007 12:48:50 -0700, "M Ihsan Baig" <mirzamihsan@hotmail.com> wrote:
> I want to pursue my PhD in SoC; would you tell me the best institutes or
> research groups in Europe in this area.

IMEC in Leuven, Belgium? ISLI in Livingston, Scotland? And plenty more, I'm sure. Take a look at the proceedings of FDL (the European near-equivalent of DVCon) to see who is most active in your area of interest.

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.

Article: 117941
On Fri, 13 Apr 2007 17:58:17 GMT, dalai lamah <antonio12358@hotmail.com> wrote:

> Consider the very simple VHDL code at the end of the message. For each
> clock cycle two operations are done:
>
> 1) A counter is incremented;
> 2) The bit 0 of the counter is checked. If it's '0', an output flag is
> triggered.

Not after reset, though. After reset, your output flag is stuck at zero.

> If you simulate the post-translated model of this module with the ModelSim
> starter edition shipped with the latest WebPack, the behaviour is correct (or
> at least is what I expect it to be): the flag is triggered at the very first
> clock cycle. When you exit from reset CNT is zero, so that at the first
> clock its first bit is zero too.

Really? That sounds wrong to me.

> If you instead simulate the behavioural model (i.e. plain VHDL) of the same
> module, the flag is triggered at the *second* clock! Apparently the
> simulator considers the increment operation as if it were synchronous, so that
> when CNT(0) is checked, it already has the value 1.

Exactly as it should.

> In fact, if you put the
> increment statement *after* the if statement, the behaviour changes again
> and the flag is correctly triggered at the first clock.

Also exactly as it should.

> My question is quite simple: is this a simulator bug,

Not in the behavioural case.

> or did I always misinterpret
> synchronous circuits at their own very basic level?

As Daniel S. said, it's not "basic" - people often get confused about the behaviour of variables in VHDL clocked processes. The action of the behavioural VHDL that you describe is exactly as it should be. The question that needs answering is: why does the post-synthesis simulation not match it?

On the first clock after reset, CNT:=CNT+1 immediately updates CNT to X"0001". Consequently the assignment to O does not happen. On the next clock, though, CNT is incremented to X"0002" and O should be set.

Immediately after the assignment CNT:=CNT+1, the variable CNT represents the next-state value of CNT - in other words, the value presented to the D-inputs of the CNT register. Output O should be synchronously set if this next-state value has '0' in its LSB.

If your observation is correct, then the Xilinx simulation or place-and-route tools are in error and you should raise a bug report.

>   main: process( CLK, reset)
>     variable CNT: std_logic_vector(15 downto 0);
>   begin
>     if RESET='1' then
>       CNT := X"0000";
>       O <= '0';
>     elsif CLK='1' and CLK'event then
>       CNT := CNT+1;
>       if CNT(0)='0' then
>         O <= '1';
>       end if;
>     end if;
>   end process;

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.

Article: 117942
On Fri, 13 Apr 2007 15:45:16 -0400, Ray Andraka <ray@andraka.com> wrote:

> CORDIC [...] is also still useful
> when you need a phase resolution on a rotator of more than about 2^12
> points per revolution, as the sin/cos LUT starts getting unwieldy or needing
> more complicated interpolation to get the resolution.

I'm not entirely sure I understand this, since you can rather easily extract any derivative of sin/cos from a sin/cos lookup table with the help of a multiplier. For example, if I have a sin/cos LUT with 2^10/rev precision, can't I get sin/cos to nearly 2^20 precision with only one first-order interpolation?

I agree that it's not so hardware-friendly a calculation as a CORDIC step, but it seems to offer much more bang per buck if you have hardware multipliers to hand.

Having said that, I probably should have done the sums before writing this post :-)

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.
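For what it's worth, the first-order trick is sin(a + d) ~ sin(a) + d*cos(a), with the cos entry of the same LUT supplying the derivative. A rough, hedged VHDL sketch of that correction follows; the phase format (20-bit accumulator, 1024-entry LUT), the Q1.17 data scaling and all names are my own assumptions, not anything from the thread:

  library IEEE;
  use IEEE.STD_LOGIC_1164.ALL;
  use IEEE.NUMERIC_STD.ALL;
  use IEEE.MATH_REAL.ALL;

  -- assumed phase format: 20-bit accumulator, 2**20 counts per revolution;
  -- phase(19 downto 10) addresses a 1024-entry sin/cos LUT, frac = phase(9 downto 0);
  -- sin_c/cos_c are the LUT outputs in Q1.17 (2**17 = 1.0)
  entity sin_interp is
    port (
      clk      : in  std_logic;
      frac     : in  unsigned(9 downto 0);
      sin_c    : in  signed(17 downto 0);
      cos_c    : in  signed(17 downto 0);
      sin_fine : out signed(17 downto 0)
    );
  end sin_interp;

  architecture rtl of sin_interp is
    -- radians per fraction count = 2*pi / 2**20, held here with 27 fraction bits
    constant DPHI : signed(17 downto 0) :=
      to_signed(integer(round(2.0 * MATH_PI / 2.0**20 * 2.0**27)), 18);
  begin
    process (clk)
      variable prod : signed(28 downto 0);
      variable corr : signed(46 downto 0);
    begin
      if rising_edge(clk) then
        prod := cos_c * signed(resize(frac, 11));               -- 18 x 11 -> 29 bits
        corr := prod * DPHI;                                    -- 29 x 18 -> 47 bits
        sin_fine <= sin_c + resize(shift_right(corr, 27), 18);  -- back to Q1.17
      end if;
    end process;
  end rtl;

The cosine output would take the mirrored correction, cos(a + d) ~ cos(a) - d*sin(a), with a second multiplier.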
Article: 117943

Eric,

I agree that the reason behind having stock was to provide the service of "nearly same day delivery." But, you are correct, that does not exist anymore, partially because you (and everyone else) order on-line and expect "just in time" delivery. The business model of a distributor is not what it used to be.

As for your comment: "so what exactly is the middleman providing in value added?" I will have to let you ask them (I am not an expert in that at all).

Austin

Article: 117944
Thank you, Newman, Peter, and Symon. As you have mentioned, I traced the problem down to a setup time problem reported as a hold time violation. I still don't understand the paragraph in question, and hopefully Peter will provide the answer.

> OK, that's fine, but if you have an example where the post-P&R timing
> simulation failed but the P&R tool's static timing said it passed, I'd be
> grateful if you could report it to the FPGA vendor.

This is very likely to happen if you have functional asynchronous resets or inferred latches. But as you mentioned, only if you don't follow proper design guidelines.

/MHS

Article: 117945
I was hoping to download Francesco Poderico's Picoblaze C compiler today, but unfortunately his domain has expired. Google didn't turn up any other sites from which I can download it; does anyone know of such a location, or would anyone be willing to make it available online or send me a copy? (Provided that doing so doesn't violate any license terms.)

Thanks!
Eric

Article: 117946
M. Hamed wrote:
> I traced the problem down to a setup time problem reported
> as a hold time violation. I still don't understand the paragraph
> in question, and hopefully Peter will provide the answer.

In some cases, the particulars of modeling the back-annotated clock & data path delays result in certain simulation 'features' that can cause the exact location & type of the reported timing violation to migrate about within the post-PAR simulation model.

I have listed a few typical Answer Records of that sort below. (You didn't post the details of your timing error, so I'm not suggesting that any of these are your exact problem.)

24220 9.1i Timing - Clock skew master record
24289 9.1i Timing - How does clock skew relate to Hold/Race Checks functionality?
17224 8.1i Timing Analyzer/TRACE - How does clock skew affect setup/hold calculations? (Hold violation)
11067 SimPrim - ModelSim Simulations: Input and Output clocks of the DCM and CLKDLL models do not appear to be de-skewed (VHDL, Verilog)
18668 7.1i Timing Analyzer/TRCE - Timing tools do not properly adjust for my equivalent positive phase shift
23689 LogiCORE Blk Mem Gen v2.1 - Virtex-4, timing simulation fails due to setup time violation in X_SFF

good luck,
Brian

Article: 117947
On Apr 13, 6:12 am, "Pablo" <pbantu...@gmail.com> wrote:
> Is there some method to avoid the "login prompt"? The reason is that I
> have designed an application over uClinux and I want this app to run
> without entering any information. Could I edit some inittab file?

If you have a dedicated application and do not need the bulk of the CLI tools, one small, fast, easy solution is to have the application be the "init" process, or to have /bin/sh be the init process and stick the startup scripting in the shell startup scripts. Do this either by replacing the init binary, or by using the kernel boot arg that overrides init.

This is also a usable "back door" for lost root passwords on most Linux systems booted with Lilo/Grub, which allow editing the kernel arguments at boot time.
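As a concrete illustration, the override is just the init= parameter on the kernel command line; the paths below are assumptions, not something from the original posts. In a GRUB (legacy) menu entry it might look like:

  # run the dedicated application (or a shell) instead of /sbin/init
  kernel /vmlinuz root=/dev/ram0 init=/bin/myapp
  # or, for the lost-root-password back door mentioned above:
  kernel /vmlinuz root=/dev/hda1 init=/bin/sh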
Article: 117948

On Mar 25, 11:11 pm, "Daniel S." <digitalmastrmind_no_s...@hotmail.com> wrote:
> While it would certainly be nice to have open-source tools to support FPGA
> development (more options is almost always good for those days where
> ISE&all keep on crashing and burning), the rather small world-wide pool of
> FPGA people with both the programming knowledge and motivation to build and
> maintain this sort of project as volunteers is, unfortunately, almost
> certainly far too small.

Actually, many of the large open source projects are full-time staffed by industry giants... for example, many of the Linux developers have a day job maintaining Linux for their respective employers' hardware platforms... at IBM, Sun, HP, etc.... while also wearing hats as core developers in various distros as a paid job. Much of the integration and testing of distros is also funded day work, via contracts... it's how RedHat engineers get paid. This is big business, not just volunteer work.

The hardware interfaces into computers are not "just easy" pieces of code to develop and maintain, and are loaded with heavy IP rights... just like the internal chip interfaces for FPGAs. The difference is that software developers took those interfaces public with open source operating systems. If it can work for operating systems and major distros... then it can work in other industries where there is leadership in open source to obtain advantages for both the corporations and their customers. So far, Xilinx and Altera are not taking that lead, a lead which should result in a significantly better tool set for the industry.

As I have suggested before... places where vendors are crying about not having enough funds to support their product lines (i.e. dropping support for entire product lines like the XC4K chips) while new product is on the shelf in distribution and production inventory... could have done that much better by turning them over to a customer/vendor open source partnership. We hear frequently here that vendors can't support anything other than the top few dozen customers... that changes with active open source industry partnerships which are led by industry-paid staff.

John

Article: 117949
I'd guess FPGA vendors are certainly not unaware of the advantages of the open-source model. I think that the problem for them is twofold:

1/ for old chips, netlists could be reverse-engineered from the bitstream easily with the chip specs. This would be a major blow to some FPGA users.

2/ for newer chips, even with strong crypto and/or eeproms available on all chip lines to prevent bitstream reversing, the major problem would be to still allow selling closed IP components for the platform. There's no solution to this problem -- it simply is unsolvable from an information theory standpoint.

The only thing possible would be to change the business model, and I guess this is not likely to happen anytime soon. The change won't come from the most established FPGA vendor, that's for sure -- maybe the contender could do something about it, though...

JB