On May 21, 12:18=A0am, "Mr.CRC" <crobcBO...@REMOVETHISsbcglobal.net> wrote: > 1c. =A0Is there any difference in the synthesis if you code it one bitwis= e > operation at a time, like the module above, vs. all in one equation? > No, because logically they are all representing the same thing. The Boolean equation for the implementation of 'y' will be the same. You can play games with non-portable synthesis directives to attempt to prevent any optomizations if you'd like...but that requires you to then manually check each and every build to make sure that it honored your request. There will likely be no warning or error produced if the request gets chucked out. > 1d. If you "infer" a mux using the code shown in the vendor's device > library, then do you get a good mux, or a glitchy mux? > You're assuming that the vendor's device has some non-glitching 'mux' primitive...typically they do not. If it did have a mux primitive then it would likely be able to pick it out from the source code...just like it can pick out to use the appropriate hardware primitives from the source code in the following situations - multiplication: y <=3D a * b; - memory > In the case of a CPLD, I would expect that I could implement the fixed > mux if I selected suitable synthesis properties, such as "mux > extraction" (which I think recognizes my intent to create a mux, Nope...again it comes down to whether or not there is a 'mux' primitive in the target device. CPLDs typically have and/or arrays to implement logic rather than memory, but that does not imply anything about what logic gets synthesized which depends on what optomizations the synthesis tool applies. Synthesis tools live to create the smallest, fastest logic that they can. Implementing redundant logic has a resource cost that the tools will not pay willingly (i.e. without special directive otherwise). > > But the FPGA with its LUTs is a different ball of wax. > Different, but many of the same principles still apply. CPLD with their and/or arrays are not immune to what you're seeing with FPGAs. One problem that I was brought into fix years ago had to do with a CPLD implementing a transparent latch. The basic equation for the latch is identical to the mux, just that in the usage the output of the mux happens to connect to one of the data inputs. Implementing a redundant logic term provides a fix, but again you have to explicitly disable optomization for that signal...and check with each build that it gets implemented as you require. > > I think that it is possible, though I don't yet know how, to see the > "RTL" output by the synth? =A0Are the answers to my questions to be found > there? > Viewing the resulting logic is a very good way to see what is going on. It is also a good way to see that writing different forms of source code results in identical logic. Kevin JenningsArticle: 151826
Article: 151826
On 21/05/11 14:57, wzab wrote:
. . .
>
> Well, unfortunately this is a rather low priority project for me, which
> I'm doing only in my spare time.
> In fact all I need at the moment is a possibility to define new words
> for J1.

Today it became possible to build an image and immediately test it using
the hardware model description in MyHDL.

> It could be done in a "Riscy Pygness" way, but then the tool
> running on the host (corresponding to riscy.tcl in RP) should
> record all the interactive session to allow further analysis, and
> extraction of best performing words defined during the session.

It should be possible to continue to use James Bowman's cross compiler
after the target is running, once simple read/write/execute access is
available.

ToDo:
1) Documentation.
2) Write a test suite for the j1 to cover the full instruction set.
3) Export Verilog/VHDL from MyHDL and import to Sim/Synth tools to come
   closer to a hardware run.
4) Add interrupt system, to allow debug and modification of a live system.
5) Add back in James Bowman's VGA, Ethernet, and other peripherals.

If there are enough volunteers I'll find a nice open home for the project.

Jan Coombs
On May 20, 11:48=A0pm, "Mr.CRC" <crobcBO...@REMOVETHISsbcglobal.net> wrote: > Don't worry so much. =A0Your input has been especially helpful, and I > think you are the one who's on target re: the real cause of my > glitch--ground bounce rather than the mux glitching. =A0I'm pretty sure > this is proved at this point by the various observations that have been > made so far. > At this point, you don't appear to have enough to reasonably conclude with high certainty on any cause (my opinion). The reality is that it is pretty much impossible to prove much of anything since you can't actually probe the offending signals so all you can do is hypothesize, test and develop a preponderance of evidence that suggests some particular cause more than other causes...and have to deal with the fact that each new test can change dramatically the test conditions since each new test could involve a new FPGA build. Some tests that one could apply to see where things lead: - Pull the unused clock input high. The failure rate should be unchanged if caused by ground bounce. - Pull the unused clock input low. The failure rate should decrease if caused by ground bounce. - Change the FPGA design to use just the one clock input. The failure rate should be unchanged if caused by ground bounce. The failure rate should decrease if caused by logic implementation glitch. - Change the FPGA design so that only one clock input is used by the one clock input still goes through a logic cell before being used as it is in the original design. This will require some manual effort to insure that this happens (see my reply to your other post regarding mux and optomiziations). The failure rate should be the same as with the original design. - Change the FPGA design so that the mux and counters are replicated many times. This will require preventing some optomizations so that the mux and counters really do get replicated and not just a single instance that gets fanned out. If ground bounce is the cause multiple counters should glitch - Change the FPGA design so that only the counters are replicated many times. Again, ground bounce should cause multiple counters to glitch. - Heat the device with a hair dryer or cool it with freeze spray. Temperature will affect the threshold voltages but not the ground bounce. If ground bounce is the cause then temperature should have not effect the failure rate. If you were to do all of these experiments (and others) you would likely find that there are some mixed results, the result of one test would seem to disprove one cause versus another. But each of these experiments will inherently change more than 'just one thing' so you will really have to dig into what other things changed in order to determine what parameters really were controlled and which actually varied. For example, adding additional counters clocked by the muxed clock will change the capacitance seen by the logic that drives that clock line through the FPGA. Increased capacitance makes it harder for a glitch to occur since the capacitance will tend to squelch any voltage change. All of this though is just for those who want to experiment. Practically for product design one tends to simply avoid the potholes in the road and design around them. However, some folks do get the time or the application is so critical that a deep understanding and many types of experiments are justified. 
> One nice thing about this group is that people have a much more
> professional demeanor than at certain other less civil neighborhoods
> (cough, cough, SED, cough cough).
>
They are a rowdy crowd over there in SED...

Kevin Jennings
Article: 151828
On May 21, 3:52 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>
> As I understand it, the FPGA LUTs are designed not to glitch
> in just this condition.  The exact implementation I don't know,
> but they should not glitch in the cases where a combination of
> gates that the LUT represents would not glitch.
>
If this were true, then building a transparent latch out of LUTs would be
reliable...but typically it's not.

Kevin Jennings
Article: 151829
On May 20, 7:03 pm, KJ <kkjenni...@sbcglobal.net> wrote:
> On May 20, 4:29 pm, rickman <gnu...@gmail.com> wrote:
>
> > On May 18, 11:15 pm, "Mr.CRC" <crobcBO...@REMOVETHISsbcglobal.net>
>
> > Just to be clear running a clock through logic is not really an issue
> > for the clock.
>
> This is not true.  If the memory implementing the logic has a couple
> of inputs changing together a glitch can occur.  Based on what CRC
> says this may not be his failure mode, but it is a failure caused by
> "running a clock through logic".
>
> > The only reason it is a bad thing is if you are using
> > the multiplexed clock for data on an IO pin.
>
> CRC's code does not clock in any data on an I/O pin...it simply clocks
> a counter, all internal.  So saying 'The *only* reason...' is not
> correct.
>
> > The LookUp Table
> > (LUT) structure in FPGAs is not the same as a semiconductor memory and
> > is designed to not glitch from a single input changing state.
>
> A plausible explanation though for the rogue clock edge that advanced
> CRC's counter at the falling edge is that the LUT structure did glitch
> as a result of a single input changing.  There are other plausible
> explanations, my point is that I wouldn't rule out this as a
> possibility.
>
> > In the
> > case of a multiplexer, the clock input that is not selected never has
> > an impact on the output regardless of what the other clock is doing.
>
> Not true.  What about when both clocks happen to switch simultaneously
> (or nearly so).  Now you'll have two of the three inputs to the LUT
> switching.  Even if you were guaranteed no glitches with one input
> changing, you're not guaranteed the same result with two.
>
> Kevin Jennings

Kevin,

You are working with a model of the LUT that likely is not how they are
implemented.  I was told by one of the experts at Xilinx, most likely here
in this group, that LUTs are designed to avoid this sort of "race" glitch
you are describing.  The LUT mux is implemented with pass transistors
rather than gates, and so one turning off before the other turns on does
not cause a glitch; it leaves a node undriven for a brief moment and the
capacitance holds the value.  At least this is what I remember, which may
not be exactly what I was told.  But it is approximate.

I think this was Peter Alfke who told me this.  Perhaps he is around to
verify that I remember it correctly.

As to CRC's problem, he has done some tests that pretty clearly show the
problem is ground noise perturbing a slow edge rate.  He sped up the edge
and the problem went away.

Rick
Article: 151830
On Sat, 21 May 2011 16:39:19 -0700, rickman wrote:

> On May 20, 7:03 pm, KJ <kkjenni...@sbcglobal.net> wrote:
>> On May 20, 4:29 pm, rickman <gnu...@gmail.com> wrote:
>>
>> > On May 18, 11:15 pm, "Mr.CRC" <crobcBO...@REMOVETHISsbcglobal.net>
>>
>> > Just to be clear running a clock through logic is not really an issue
>> > for the clock.
>>
>> This is not true.  If the memory implementing the logic has a couple of
>> inputs changing together a glitch can occur.  Based on what CRC says
>> this may not be his failure mode, but it is a failure caused by
>> "running a clock through logic".
>>
>> > The only reason it is a bad thing is if you are using the multiplexed
>> > clock for data on an IO pin.
>>
>> CRC's code does not clock in any data on an I/O pin...it simply clocks
>> a counter, all internal.  So saying 'The *only* reason...' is not
>> correct.
>>
>> > The LookUp Table
>> > (LUT) structure in FPGAs is not the same as a semiconductor memory
>> > and is designed to not glitch from a single input changing state.
>>
>> A plausible explanation though for the rogue clock edge that advanced
>> CRC's counter at the falling edge is that the LUT structure did glitch
>> as a result of a single input changing.  There are other plausible
>> explanations, my point is that I wouldn't rule out this as a
>> possibility.
>>
>> > In the
>> > case of a multiplexer, the clock input that is not selected never has
>> > an impact on the output regardless of what the other clock is doing.
>>
>> Not true.  What about when both clocks happen to switch simultaneously
>> (or nearly so).  Now you'll have two of the three inputs to the LUT
>> switching.  Even if you were guaranteed no glitches with one input
>> changing, you're not guaranteed the same result with two.
>>
>> Kevin Jennings
>
> Kevin,
>
> You are working with a model of the LUT that likely is not how they are
> implemented.  I was told by one of the experts at Xilinx, most likely
> here in this group, that LUTs are designed to avoid this sort of "race"
> glitch you are describing.  The LUT mux is implemented with pass
> transistors rather than gates and so one turning off before the other
> turns on does not cause a glitch, it leaves a node undriven for a brief
> moment and the capacitance holds the value.  At least this is what I
> remember, which may not be exactly what I was told.  But it is
> approximate.
>
> I think this was Peter Alfke who told me this.  Perhaps he is around to
> verify that I remember it correctly.

There was an appnote, dating from the XC3000 series parts.  IIRC, the LUT
wouldn't generate glitches provided that at most one input toggled within
some small timing margin (which was in the order of the LUT delay).

Kevin's statement of the restrictions of a single-LUT mux is quite valid.
It's trivial to work around these using two ranks of LUTs though.

Regards,
Allan
Article: 151831
On Sun, 22 May 2011 03:41:46 +0000, Allan Herriman wrote:

[snip]

Here 'tis, from a 2001 c.a.f thread:

http://groups.google.com/group/comp.arch.fpga/browse_thread/thread/17a018858cc39a2d

Here is what [Peter Alfke] wrote ten years ago (you can find it, among
other places, in the 1994 data book, page 9-5):

"Function Generator Avoids Glitches
...
Note that there can never be a decoding glitch when only one select input
changes.  Even a non-overlapping decoder cannot generate a glitch problem,
since the node capacitance would retain the previous logic level...
When more than one input changes "simultaneously", the user should analyze
the logic output for any intermediate code.  If any such code produces a
different result, the user must assume that such a glitch might occur, and
must make the system design immune to it...
If none of the address codes contained in the "simultaneously" changing
inputs produces a different output, the user can be sure that there will
be no glitch...."

This still applies today.
Article: 151832
I have made a tutorial using flash programs that will help you understand
how quadrature modulation and quadrature demodulation work.  It is located
here:

http://www.fourier-series.com/IQMod/index.html

I try to show you without getting bogged down in the math.  The hope is
that after going through the tutorial the math will be easier to
understand.  (The math for this stuff can be found many places online)
Article: 151833
On May 21, 11:50 pm, Allan Herriman <allanherri...@hotmail.com> wrote:
> On Sun, 22 May 2011 03:41:46 +0000, Allan Herriman wrote:
>
> [snip]
>
> Here 'tis, from a 2001 c.a.f thread:
>
> http://groups.google.com/group/comp.arch.fpga/browse_thread/thread/17a018858cc39a2d
>
> Here is what [Peter Alfke] wrote ten years ago (you can find it, among
> other places, in the 1994 data book, page 9-5):
>
> "Function Generator Avoids Glitches
> ...
> Note that there can never be a decoding glitch when only one select input
> changes.  Even a non-overlapping decoder cannot generate a glitch problem,
> since the node capacitance would retain the previous logic level...
> When more than one input changes "simultaneously", the user should
> analyze the logic output for any intermediate code.  If any such code
> produces a different result, the user must assume that such a glitch
> might occur, and must make the system design immune to it...
> If none of the address codes contained in the "simultaneously" changing
> inputs produces a different output, the user can be sure that there will
> be no glitch...."
>
> This still applies today.

Ok, I think that means there would be no glitch even if both clock inputs
change at the same time.

mux  clk1  clk2  output
 0    0     0      0
 0    0     1      0
 0    1     1      1

...or...

mux  clk1  clk2  output
 0    0     0      0
 0    1     0      1
 0    1     1      1

I don't see how this could cause a glitch.  I think what is being
remembered is the classic race condition where, if both inputs change at
the same time, the output does not change.  But given a small delta in the
propagation paths, the output can momentarily change to a value equal to
the output when only one input changes.  In this case the unselected clock
causes no change in the output, so none of the intermediate states are any
different, and so no glitch occurs.

Am I figuring this wrong?

Rick
Article: 151834
On May 21, 3:52 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Mr.CRC <crobcBO...@removethissbcglobal.net> wrote:
> > The simplest incarnation of a 2-to-1 multiplexer can be
> > described by the equation:
> > y = ~s & a | s & b
> > where 'y' is the output, 's' is the select input, with 'a' and 'b' the
> > data inputs.
> > Of course, this multiplexer is broken because a practical implementation
> > glitches if 'a' and 'b' are true, and the select line toggles.
>
> (snip)
>
> > The fix for the glitching is of course to implement an extra term
> > (Verilog not shown, but I think we can all handle it):
> > y = ~s & a | s & b | a & b
>
> As I understand it, the FPGA LUTs are designed not to glitch
> in just this condition.  The exact implementation I don't know,
> but they should not glitch in the cases where a combination of
> gates that the LUT represents would not glitch.
>
> -- glen

Actually, there is no reason for a LUT to glitch where logic wouldn't,
since they work like logic (a mux, in fact).  But a logic glitch depends
on the details of the logic in question.  What the LUTs are designed to
avoid is a glitch in some cases where logic WOULD glitch.  This mux is
such an example.

y = ~s & a | s & b

If the inputs a and b are both 1 and s changes, logic has the potential to
glitch since one AND gate can turn off before the other turns on,
outputting a 0 momentarily.  One LUT pass transistor will turn off, but
the driven node will not change state until the next pass transistor turns
on to change the stored value.
On May 21, 12:18=A0am, "Mr.CRC" <crobcBO...@REMOVETHISsbcglobal.net> wrote: > Hi: > > The simplest incarnation of a 2-to-1 multiplexer can be described by the > equation: > > y =3D ~s & a | s & b > > where 'y' is the output, 's' is the select input, with 'a' and 'b' the > data inputs. > > Of course, this multiplexer is broken because a practical implementation > =A0glitches in a if 'a' and 'b' are true, and the select line toggles. Who says this logic equation glitches in an FPGA, the book cited below? I don't think that is correct. It will definitely glitch if constructed in discrete gates because of the difference in delays of the two paths. But only because an intermediate state is driven to a value. In a pass transistor implementation the intermediate state is undriven and does not change value. > Such as this example from the book (1): > ----------------------------------------- > module mux21g ( > =A0 =A0 input wire a, > =A0 =A0 input wire b, > =A0 =A0 input wire s, > =A0 =A0 output wire y > ); > > wire _s; > wire c, d; > > assign #10 _s =3D ~s; > assign #10 c =3D _s & a; > assign #10 d =3D s & b; > assign #10 y =3D c | d; > ------------------------------------ Yes, this will glitch in simulation because of the delay in generating _s from s. I'm not as familiar with Verilog, so I'm not sure it will be apparent. In VHDL they use delta delays (zero time, but provides sequencing to intermediate values) which will create an intermediate value. It may not be visible on the simulation display, but in ActiveHDL it will be "touchable" by selecting the signal in the waveform display and scrolling the cursor from event to event. It will stop on what looks like a constant value showing the delta delay glitch. > The fix for the glitching is of course to implement an extra term > (Verilog not shown, but I think we can all handle it): > > y =3D ~s & a | s & b | a & b > > So the big questions are: > > 1. =A0What happens (ie., synthesizes) when you implement these equations > in an FPGA? Give some thought to the LUT structure. For this purpose it is just a memory. You have four inputs, sixteen states of those inputs and sixteen stored bits for the outputs for each input state. A mux selects the output based on the input state. In essence the Karnaugh map is directly implemented. It doesn't matter how you write the equation, you get the same map and the same implementation. > 1a. =A0What synthesizes if you do: > > assign y =3D ~s & a | s & b; // ??? > > 1b. =A0What synthesizes if you code the corrected mux equation: > > assign y =3D ~s & a | s & b | a & b; // ??? > > 1c. =A0Is there any difference in the synthesis if you code it one bitwis= e > operation at a time, like the module above, vs. all in one equation? > > 1d. If you "infer" a mux using the code shown in the vendor's device > library, then do you get a good mux, or a glitchy mux? What would be a "good mux"? You mean does it glitch when s changes, I don't think so. > In the case of a CPLD, I would expect that I could implement the fixed > mux if I selected suitable synthesis properties, such as "mux > extraction" (which I think recognizes my intent to create a mux, perhaps > whether I code it glitch free or not, and implements a correct mux--can > any tool experts clarify?) and/or "wysiwyg" which will probably even > implement the bad mux if I so choose. Yes, most CPLDs use product terms and fixed or arrays. So you get a straight forward sum of products. I can't say if they glitch or not without adding the "non-glitch" product term. 
> But the FPGA with its LUTs is a different ball of wax.
>
> I will be extremely interested to hear what the experts make of these
> questions.
>
> I think that it is possible, though I don't yet know how, to see the
> "RTL" output by the synth?  Are the answers to my questions to be found
> there?

I use Synplify and it has buttons to see either the RTL or the
"technology" view, which shows the inferred LUTs, counters, etc.

> Then there are pre-synthesis and post-synthesis (or is it pre and post
> fitting?) simulation models, etc., and whew!  There are quite a few
> things I haven't delved into yet.
>
> Have a nice weekend!

Already did!  I was out kayaking both days.

> (1) "Learning by Example using Verilog ..." Richard Haskell, et al., pg. 78.
>
> Disclaimer -- none of the above is intended to imply that one should
> ever route a clock through logic such as a mux!

Maybe I missed this thread the other day when I was responding to other
posts.

Rick
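A small testbench sketch for the mux21g listing quoted above (assuming
that module is compiled alongside; the testbench name and stimulus are
made up for illustration).  Run in any Verilog simulator, it makes the
decoding glitch visible: with a and b both held at 1, the select
transition whose newly selected AND term depends on the delayed ~s drops
y low for one gate delay (10 ns).

-----------------------------------------
`timescale 1ns / 1ns

module tb_mux21g;

reg  a = 1'b1, b = 1'b1, s = 1'b0;
wire y;

mux21g dut (.a(a), .b(b), .s(s), .y(y));

// Print every change on y; the extra 1 -> 0 -> 1 pulse is the glitch.
always @(y)
    $display("%0t ns: y = %b  (s = %b)", $time, y, s);

initial begin
    #100 s = 1'b1;   // no glitch: the newly selected term rises first
    #100 s = 1'b0;   // glitch: both AND terms are briefly 0
    #100 $finish;
end

endmodule
-----------------------------------------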
Article: 151836
Is that your site?  If so, I would like to tell you that your flash
programs are damn good.  Never saw something so intuitive to play with.  I
haven't looked at everything yet, but an FFT with the frequency and phase
of a sine function as inputs and the FFT modulus and phase as outputs
(bonus points if you can choose the window functions) would do wonders to
teach such concepts.

Very well done.

Daniel

On 22/05/2011 01:19, brent wrote:
> I have made a tutorial using flash programs that will help you
> understand how quadrature modulation and quadrature demodulation
> works.  It is located here:
>
> http://www.fourier-series.com/IQMod/index.html
>
> I try to show you without getting bogged down in the math.  The hope
> is that after going through the tutorial that the math will be easier
> to understand.  (The math for this stuff can be found many places
> online)
Article: 151837
Hi,

I'm trying to use Xilinx 10.1 SDK for PowerPC simulation.  However, for
unknown reasons SDK is unable to connect to the debugger.  This is a dump
from the XMD console:

<start here>
Accepted a new TCLSock connection from 127.0.0.1 on port 4760
Instruction Set Simulator (ISS) PPC405, PPC440 Version 1.9 (1.76)
(c) 1998, 2005 IBM Corporation
status: waiting to connect to controlling interface (port=6470, protocol=tcp)....
status: connected to Remote address (127.0.0.1) on port 4761
status: local Address = (127.0.0.1) on port 6470
0
Accepted a new GDB connection from 127.0.0.1 on port 4762
Closed the GDB connection from 127.0.0.1 on port 4762
<ends here>

Every try fails with a 'timed out' error.  I expected that software
simulation would run just out of the box, but apparently I've missed
something obvious.  Does anyone have a clue what to do?  Do I need to
configure something in order to launch the debugger?

Thanks and regards
GP
Article: 151838
On Sun, 22 May 2011 14:36:47 -0700, rickman wrote:

> On May 21, 11:50 pm, Allan Herriman <allanherri...@hotmail.com> wrote:
>> On Sun, 22 May 2011 03:41:46 +0000, Allan Herriman wrote:
>>
>> [snip]
>>
>> Here 'tis, from a 2001 c.a.f thread:
>>
>> http://groups.google.com/group/comp.arch.fpga/browse_thread/thread/17a018858cc39a2d
>>
>> Here is what [Peter Alfke] wrote ten years ago (you can find it, among
>> other places, in the 1994 data book, page 9-5):
>>
>> "Function Generator Avoids Glitches
>> ...
>> Note that there can never be a decoding glitch when only one select
>> input changes.  Even a non-overlapping decoder cannot generate a glitch
>> problem, since the node capacitance would retain the previous logic
>> level...  When more than one input changes "simultaneously", the user
>> should analyze the logic output for any intermediate code.  If any such
>> code produces a different result, the user must assume that such a
>> glitch might occur, and must make the system design immune to it...  If
>> none of the address codes contained in the "simultaneously" changing
>> inputs produces a different output, the user can be sure that there
>> will be no glitch...."
>>
>> This still applies today.
>
> Ok, I think that means there would be no glitch even if both clock
> inputs change at the same time.
>
> mux  clk1  clk2  output
>  0    0     0      0
>  0    0     1      0
>  0    1     1      1
>
> ...or...
>
> mux  clk1  clk2  output
>  0    0     0      0
>  0    1     0      1
>  0    1     1      1
>
> I don't see how this could cause a glitch.  I think what is being
> remembered is the classic race condition where, if both inputs change at
> the same time, the output does not change.  But given a small delta in
> the propagation paths, the output can momentarily change to a value
> equal to the output when only one input changes.  In this case the
> unselected clock causes no change in the output, so none of the
> intermediate states are any different, and so no glitch occurs.
>
> Am I figuring this wrong?

If I make some assumptions about how the LUTs work internally, I get the
same conclusion as you.

Working with Peter's conditions, something like an xor gate would
definitely not be safe, but a 2 to 1 mux might be.

Regards,
Allan
Article: 151839
Hi,

In my design I have two counters, a write_counter and a read_counter, both
11 bits wide.  I used a simple compare equation like this:

assign last_byte = odd_number_bytes ? (read_counter + 2 == write_counter)
                                    : (read_counter + 1 == write_counter);

and last_byte triggers the state machine etc. etc.

The logic created by this comparator is 6 logic levels deep, which is
causing a timing failure in my design.  I need to optimize this logic but
I can't seem to think of any fast implementation.  I tried to come up with
something like a lookup on the first two LSBs and XOR the other bits etc.,
but every solution that I come up with contains a lot of corner cases and
the whole thing starts to get messy.

So, can anyone help me with this logic's optimization?  Perhaps a fast
implementation or a way to optimize it.

Thanks a lot.

Regards
---------------------------------------
Posted through http://www.FPGARelated.com
Article: 151840
Something like prediction logic, where I can tell beforehand whether the
coming byte is the last one or not.
---------------------------------------
Posted through http://www.FPGARelated.com
Article: 151841
salimbaba <a1234573@n_o_s_p_a_m.n_o_s_p_a_m.owlpic.com> wrote:

> In my design I have two counters, a write_counter and a read_counter, both
> are 11 bits wide.  I used a simple compare equation like this:
>
> assign last_byte = odd_number_bytes ? (read_counter + 2 == write_counter)
> :(read_counter + 1 == write_counter);
>
> and last_byte triggers the state machine etc etc.
> now the logic designed by this comparator is of 6 logic levels which is
> causing a timing failure in my design.  I need to optimize this logic but
> I can't seem to think of any fast implementation.  I tried to come up with
> something like a lookup on the first two LSBs and XOR the other bits etc,
> but every solution that I come up with contains a lot of corner cases and
> the whole thing starts to get messy.

The usual way involves pipelining, separating the add from the compare.
That is, in one cycle compute counter+2 and counter+3, in the next compare
them to write_counter.  The addition is one higher to make up for the one
cycle delay.

Note that it fails in some cases where yours doesn't.  If those can occur,
then you have to special-case them out.  (Specifically, when read_counter
can initialize to write_counter-1 or write_counter-2, where there is no
time for the additional cycle.)

-- glen
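A hedged sketch of what glen describes, using the signal names from the
question.  The module wrapper and initial values are assumptions; the
offsets, and the initialization corner case glen warns about, still have
to be adapted to the real design.

-----------------------------------------
module last_byte_pipelined (
    input  wire        clk,
    input  wire        odd_number_bytes,
    input  wire [10:0] read_counter,
    input  wire [10:0] write_counter,
    output wire        last_byte
);

// Stage 1: register the incremented read counter.  The offsets are one
// higher than the original +1/+2 to absorb the added cycle of latency.
reg [10:0] rd_plus2_q = 11'd0;
reg [10:0] rd_plus3_q = 11'd0;

always @(posedge clk) begin
    rd_plus2_q <= read_counter + 11'd2;   // replaces the original +1
    rd_plus3_q <= read_counter + 11'd3;   // replaces the original +2
end

// Stage 2: only the equality compare is left in this path.
assign last_byte = odd_number_bytes ? (rd_plus3_q == write_counter)
                                    : (rd_plus2_q == write_counter);

endmodule
-----------------------------------------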
Article: 151842
On Mon, 23 May 2011 14:31:38 -0500, "salimbaba" wrote:

> Hi,
> In my design I have two counters, a write_counter and a read_counter, both
> are 11 bits wide.  I used a simple compare equation like this:
>
> assign last_byte = odd_number_bytes ? (read_counter + 2 == write_counter)
> :(read_counter + 1 == write_counter);
>
> and last_byte triggers the state machine etc etc.
> now the logic designed by this comparator is of 6 logic levels which is
> causing a timing failure in my design.

Not knowing the details it's a bit hard to suggest what you might be able
to change, but here are two suggestions that might get you somewhere:

1. Compute (write_counter - read_counter).  That should go in a nice fast
adder structure using the carry chain.  Then compare the output of the
subtract with the constants 1 and 2.  I think that should go in 4 levels
of logic in total, though I'm not certain!

2. (Only if you're desperate.)  Maintain three separate read counters.
Initialise them to 0, 1 and 2 respectively.  Increment them all together
on every read operation.  Now you can do the equality comparison without
an extra +:

   ... odd ? (read_2 == write) : (read_1 == write);

cheers
--
Jonathan Bromley
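A sketch of Jonathan's first suggestion with the names from the question
(the module wrapper is assumed).  The single subtraction rides the carry
chain and the two constant compares are cheap; whether it really lands at
4 logic levels depends on the device and is worth checking in the
technology view.

-----------------------------------------
module last_byte_subtract (
    input  wire        odd_number_bytes,
    input  wire [10:0] read_counter,
    input  wire [10:0] write_counter,
    output wire        last_byte
);

// One 11-bit subtraction on the carry chain...
wire [10:0] diff = write_counter - read_counter;

// ...followed by equality compares against the constants 1 and 2.
// (write - read == 2) is the same test as (read + 2 == write), so the
// original behaviour is unchanged.
assign last_byte = odd_number_bytes ? (diff == 11'd2)
                                    : (diff == 11'd1);

endmodule
-----------------------------------------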
Article: 151843
On May 22, 7:33 pm, Daniel Mendes <dmend...@gmail.com> wrote:
> Is that your site?  If so, I would like to tell you that your flash
> programs are damn good.  Never saw something so intuitive to play with.
> I haven't looked at everything yet, but an FFT with frequency and phase
> of a sine function as inputs and FFT modulus and phase as outputs
> (bonus points if you can choose the window functions) would do wonders
> to teach such concepts.
>
> Very well done.
>
> Daniel
>
> On 22/05/2011 01:19, brent wrote:
>
> > I have made a tutorial using flash programs that will help you
> > understand how quadrature modulation and quadrature demodulation
> > works.  It is located here:
> >
> > http://www.fourier-series.com/IQMod/index.html
> >
> > I try to show you without getting bogged down in the math.  The hope
> > is that after going through the tutorial that the math will be easier
> > to understand.  (The math for this stuff can be found many places
> > online)

Thanks
Article: 151844
Hey Glen,

I have taken care of those failure cases, forgot to mention it in the post
though.  I will definitely look at it.
---------------------------------------
Posted through http://www.FPGARelated.com
Article: 151845
On a bidirectional bus, if there is a strong pull-up and a device drives
the line low, can we reduce the fall time substantially by reducing the
pull-up on the lines?  Or does the low override the pull-up, so that it
does not affect the fall time at all?

Thanks
Shyam
Article: 151846
shyam <mail.ghanashyam.prabhu@gmail.com> wrote:

> If on a bidirectional bus, if there is a strong pull up and there is a
> device which drives the line low, can we reduce the fall time
> substantially, if we reduce the pull up on the lines?
> Or is it that the low overrides the pull up and does not affect fall
> time at all?

Did you do the math?  Did you try to simulate?

Take the pull-up current in relation to the low drive sink current...
--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------
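As a rough worked example of the math being suggested (numbers assumed,
not from the thread): with a 3.3 V supply, a 1 kOhm pull-up can only
source about 3.3 mA, while a push-pull driver rated to sink 8-24 mA with
an output impedance of a few tens of ohms easily overpowers it, so the
falling edge is set almost entirely by the driver's sink strength, the
trace impedance and the bus capacitance.  Weakening the pull-up to
10 kOhm cuts the contention current from roughly 3.3 mA to 0.33 mA and
slows the passive rise time on an open-drain style bus, but it changes an
actively driven fall time very little.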
Article: 151847
On May 24, 9:39 am, Uwe Bonnes <b...@elektron.ikp.physik.tu-darmstadt.de>
wrote:
> shyam <mail.ghanashyam.pra...@gmail.com> wrote:
> > If on a bidirectional bus, if there is a strong pull up and there is a
> > device which drives the line low, can we reduce the fall time
> > substantially, if we reduce the pull up on the lines?
> > Or is it that the low overrides the pull up and does not affect fall
> > time at all?
>
> Did you do the math?  Did you try to simulate?
>
> Take the pull-up current in relation to the low drive sink current...
> --
> Uwe Bonnes                b...@elektron.ikp.physik.tu-darmstadt.de
>
> Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
> --------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Let's give the poor guy a clue.  What impedance is a driver designed to
drive an edge into?  If you don't know what a transmission line is, you
are in it up to your neck.
Article: 151848
A few ideas:

- For your app: put regs on your "read_counter+2" and "read_counter+1"
signals.  You might have to adjust offsets using look-ahead logic.  I
don't know how soon you need "last_byte" relative to when those situations
occur.  Pipelining certainly requires you to account for the latency
elsewhere.

- For high performance, split and pipeline comparators according to the
fabric.  For example, with 6-input LUTs, compare 3 bits at a time, use a
pipeline reg for each 3-bit compare, then use straight logic for the last
stage (i.e. "if all sub-compares are equal, the entire comparison must be
equal").  You can't get much faster than breaking down your logic into
"single LUT-single REG" combinations.

- An idea for accounting for pipeline latencies is to use a look-ahead
version of a counter for one path while using a different version for the
other.  In that case, you'd either have to replicate the counter logic or
add pipeline regs to the actual counter to emulate look-ahead versions.
For high performance, it's always a good idea to 'buffer' your actual
counter from downstream comparators and such anyway, since your counter
typically is nothing but a register that already feeds back to its
internal adder logic.

John
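A sketch of the second idea above, sized for the 11-bit counters in the
question (module and signal names assumed).  It splits the equality test
into 3-bit sub-compares that each fit a single 6-input LUT plus register,
then ANDs the registered flags.  Note it adds two cycles of latency, which
the counters would have to absorb with the kind of look-ahead offsets John
mentions.

-----------------------------------------
module wide_eq_pipelined (
    input  wire        clk,
    input  wire [10:0] a,     // e.g. a look-ahead read_counter
    input  wire [10:0] b,     // e.g. write_counter
    output reg         equal  // a == b, two clocks later
);

reg [3:0] eq_q;

always @(posedge clk) begin
    // Stage 1: 3-bit (and one 2-bit) sub-compares, one LUT + FF each.
    eq_q[0] <= (a[2:0]  == b[2:0]);
    eq_q[1] <= (a[5:3]  == b[5:3]);
    eq_q[2] <= (a[8:6]  == b[8:6]);
    eq_q[3] <= (a[10:9] == b[10:9]);

    // Stage 2: the whole word matches iff every sub-compare matched.
    equal <= &eq_q;
end

endmodule
-----------------------------------------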
Article: 151849
Try this workaround:

http://forums.xilinx.com/t5/EDK-and-Platform-Studio/Simple-SDK-question-how-to-debug-as-simulation-in-SDK/m-p/124820#M17844