Antti, We do not support the use of a DCM being supplied a clock by the CLKFX of another DCM. There are just too many variables where the jitter from the first DCM will cause the second DCM to be unable to lock, or to lose lock. That said, your symptom does not match what I just described: the first DCM is losing lock. That is usually because the input clock to the first DCM is not stable when it initially locks, and then as the input clock stabilizes, the DCM cannot track it and loses lock. The LOCK pin can only go low if the delay line taps overflow or underflow, which suggests that the frequency was changing while the device was locking (or was out of range). Try delaying the startup of the DCM until the input clock is stable. The other cause is too much jitter on the CLKIN of the DCM. That you can determine with your LeCroy. Measure cycle-to-cycle jitter; if there is any cycle-to-cycle jitter greater than 1 ns for the low frequency range, or 300 ps for the high frequency range, you will lose lock. What is the M and D of the first DCM? Larger values of M result in larger jitter, and also have more trouble locking to a signal that is wandering around. Vccaux changes used to be a cause of concern, but in V4 we redesigned the regulator that provides the voltages for the delay lines. A sudden change in Vccaux should not affect the lock of the DCM in V4 (as long as that change is within the Vccaux specifications). Lastly, if the first DCM is multiplying up the frequency to be within the range of the second DCM, I am assuming its CLKIN input is less than the minimum input frequency. Thus the first DCM should not be using the CLKFB pin. If the CLKFB pin is connected, then the software will also be trying to get the DLL part of the DCM to lock, and it will fail to do so (with an input below its lock range).
Austin Antti wrote: > Hi > > if anyone has seen the same or has any ideas how to avoid the issue I > am facing please help - I am trying to get it solved myself (my deadline > is today 21:00 German time) but am kinda stuck currently. > > problem: > > Virtex-4FX12 > 2 DCMs in series > DCM1 uses only the FX output for the PPC system clock (to get the clock into > the DLL input range) > DCM2 generates the 3X processor clock for the PPC > > it all works for 360 milliseconds after configuration. then the first > DCM will remove lock, outputs stop, everything stops. the 360 > millisecond delay is not dependent on the first DCM clock ratio settings. if > the PPC is held in reset then the DCMs still shut down after the same 360 > milliseconds. > > any ideas? what to check? I have a LeCroy 2GS/s DSO on some signals and > power supply lines but am not seeing anything at the time when the DCM > shuts off. > > thanks in advance for any suggestions, > > Antti >Article: 107601
You can always run "partgen" from a command line window to generate the package file. Or if you prefer a nicer presentation of it, you can use ADEPT: http://home.comcast.net/~jimwu88/tools/adept/ HTH, Jim pmaupin@gmail.com wrote: > MM wrote: > > > Where can I find the ASCII package files? > > > > Try this link: > > http://www.xilinx.com/products/silicon_solutions/fpgas/virtex/virtex4/resources/virtex-4-pkgs.htm > > > > To get there from the page where you started you should have chosen All > > Virtex-4 Resources and then Package Files under the Documentation... > > > > Even there, not all the links are correct. For example the FF1513 > links (LX100, LX160, LX200) all point to the same file (LX100 > version). A bit of URL jiggery-pokery seems to lead to the right > files, though, so I think I'm good. > > Thanks, > PatArticle: 107602
Austin Lesea schrieb: > Antti, > [snip] > the min input frequency. Thus the first DCM should not be using the > CLKFB pin. If the CLKFB pin is connected, then the software will also > be trying to get the DLL part of the DCM to lock, and it will fail to do > so (with an input below its lock range). > > Austin weird - the thing is that the DCM chaining works fine, in ISE and sometimes in EDK, and the single DCM in DFS mode also always works in EDK with CLKFB not connected. but with a PPC design in EDK when the CLKFB is not connected then I think the DCM_STANDBY macro jumps in and shuts down the DCM; well, the Xilinx docs say the standby macro tracks the idle state for longer than 100 ms, hm, the 360 milliseconds still sounds like a plausible time for the shutdown macro to activate. connecting the 12 MHz input clock that *IS* definitely clean and stable to both CLKIN _and_ CLKFB fixed the issue. surprisingly I have EDK MicroBlaze systems with similarly chained DCMs where the CLKFB of the first DCM is not connected that work. BTW after reading the Xilinx AR's I did not find a clear note that DCM chaining from FX is not allowed; there was a note that the Arch wizard does not support it yet, but not that it won't work or cannot be used. AnttiArticle: 107603
Austin Lesea schrieb: > Antti, > > The PMV is not used for configuration. > > In fact, it was not intended to be used by anything at all. It is there > for test only. > > As such, it is uncharacterized. Anything that is uncharacterized can > NOT be made available for customer use. The DCM NBTI macro did make use > of it, as that macro needed an oscillator for when no clock was present, > and the less resource used by a macro, the better the macro (least > impact on any design). > > If you want a ring oscillator, (which is all that the PMV is), you can > easily make one out of a chain of LUTs. > > Austin > > Antti wrote: > > Hi All, > > > > as I had guessed for long time the PMV primitive is actually the > > on-chip oscillator, most likely it is the same oscillator that is used > > for configuration. And it can be used from user designs as well. PMV is > > present in all recent FPGAs. > > > > http://xilant.com/index.php?option=com_content&task=view&id=29&Itemid=32 > > > > When I opened webcase about the issue that Xilinx tools made fatal > > failure when I tried to use the PMV from an hard macro the response was > > that, "you dont need to know" - well now I know :) > > > > Antti > > Hi Austin, hmm, well the PMV sounds like a better oscillator than a chain of LUTs, as it is always in the same place and doesn't need LOC constraints etc.? The PMV looks like an available replacement for the actual config clock primitive. I still wish Xilinx FPGAs would have the config clock accessible again, as it was with old Xilinx FPGAs! AnttiArticle: 107604
Does anyone know of an FPGA that supports the DDR3 and/or GDDR3 DRAM protocols? These DRAMs use some new signalling levels and termination schemes that don't seem to be supported. Thanks! John ProvidenzaArticle: 107605
What is this device supposed to feature? Start the rumor mill. -EliArticle: 107606
Eli Hughes schrieb: > What is this device supposed to featured. Start the rumor mill. > > -Eli it's already started, I posted the info that I have gathered yesterday! looks like a partial downgrade from Spartan-3E, e.g. smaller devices, no SPI or NOR flash support, but as an added bonus a self-reconfiguration feature AnttiArticle: 107607
"rickman" <gnuarm@gmail.com> wrote in message news:1156939530.298888.12970@i42g2000cwa.googlegroups.com... > Symon wrote: > > So, you advocate decoupling the power plane without considering what effect > > this has on the IC? Why would you go to all that effort if the package > > you're stuck with prevents your efforts making any difference? > > A quiet plane is part of the solution. If your package produces ground > bounce that blows your noise budget, you have no hope of building a > good design. If so, you need to get a part with a better package. I > don't get what you are saying. > Rick, OK, I think you're correct, this conversation has reached an end. It can go nowhere with one of us posting data and sims (including ones that model the power planes, albeit as a lumped capacitance) that the other cannot or will not look at, the other posting hearsay from a class he went to. I will, however, try to make one last point based on the snippet above. I contend that the package impedance of modern FPGAs is such that any benefit that a board wide power plane's capacitance could provide to your design, over and above that which you can get from a small local plane and associated bypass capacitors, is negligible. The caps work up to a few hundred MHz, just about where the package stops working. Any noise on the supply above this frequency doesn't get to the silicon anyway. As posting a simulation to show this is wasted effort, I instead refer you to UG076, Figure 6-3, which shows how Xilinx power their Rocket-IO circuitry. I also offer http://www.eetasia.com/ARTICLES/2006MAY/PDF/EEOL_2006MAY16_POW_EMD_TA.pdf as further reading. Thanks for encouraging me to simulate this stuff again, it's given me some more ideas for my next layout! I'm also gladdened that, as you mention in one of your posts, you will be simulating your next board. If you are able to share your results, I for one would be most interested to see them. Yours &c, Syms.Article: 107608
"rickman" <gnuarm@gmail.com> writes: > Martin Thompson wrote: > Is TRW still around? I thought they were bought by Northrop Grumman. > I guess some part of TRW was not part of that deal? I used to word in > Defense Systems in McLean or whatever they called it that week. > Northrop bought the whole shebang and then the Automotive bit spun off as separate entity. > I guess I am too old school to feel good about using 'C' like code. > Sure if it works, do it. But I always think in terms of hardware and > like to know what I am building before I let the tool build it. I > guess I would not want to debug a design where I didn't know what the > tool was doing. Then I would be debugging software and not hardware. > Maybe that works for some people, but I like to know the hardware I am > building so I know exactly how to debug it. That also includes > avoiding certain types of bugs that are caused by poorly designed > hardware. If the tool generated the hardware then I can't say it > doesn't have race conditions and such. > We're not yet at the stage of "chuck any old algorithm at the tools", but I think synthesis tools get less cred for figuring things out than they sometimes should. Regarding tools - when I write software, I like to know what I'm aiming for in terms of which bits of the processor are going to be used as well. I wouldn't like to not know what my C-compiler is likely to be doing either. However, plenty of people write software without that understanding and in future plenty of people will write hardware similarly. For a lot of applications, it won't matter IMHO. It's all just a case of getting things done in the end :-) Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 107609
mikegurche@yahoo.com writes: > Martin Thompson wrote: > > mikegurche@yahoo.com writes: > > > > <snip commentary on variables> > > > > > > > > In synthesis, the problem is normally the abuse of sequential > > > statements, rather than the use of variable. I have seen people trying > > > to convert C segment into a VHDL process (you can have variables, for > > > loop, while loop, if, case, and even break inside a process) and > > > expecting synthesis software to figure out everything. > > > > > > > Why not do this? Synthesis software is good at figuring all this > > out. If it does what you need it to and meets timing, you're done. > > Move on to the next problem. > > > > If the synthesis software is really this capable, there is no need for > hardware engineers. Everyone can do hardware design after taking C > programming 101 and we all will become unemployed :( Well, eventually, it's going to happen. We'll be abstracted far enough away from the hardware to still be productive. > > Let me give an example. Assume that we want to design a sorting > circuit that sorts a register file of 1000 8-bit words with minimal > hardware. OK, but I notice you've said minimal hardware there - my point was "gets the job done"... maybe I have an enormous FPGA. > For simplicity, let us use the bubble sort algorithm: > > n = 1000 > for (i=0; i<n-1; i++) { > for (j=0; j<n-1-i; j++) > if (a[j+1] < a[j]) { /* compare the two neighbors */ > tmp = a[j]; /* swap a[j] and a[j+1] */ > a[j] = a[j+1]; > a[j+1] = tmp; > } > } > > The hardware designer's approach is to develop a control FSM to mimic > the algorithm. It can be done with one 8-bit comparator in > 0.5*1000*1000 clock cycles. > Yes. 
Maybe in 1 process, and maybe in 2, or maybe three :-) > If we ignore the underlying hardware structure and just translate C > constructs to corresponding VHDL constructs directly (the C > programmer's approach), we can still derive correct VHDL code: > > -- q, d: signals of an array type declared elsewhere, e.g. > -- type word_array is array (0 to N-1) of > -- std_logic_vector(7 downto 0); -- with N = 1000 > process(clock) > variable a: word_array; > variable tmp: std_logic_vector(7 downto 0); > begin > if (clock'event and clock='1') then > -- register > q <= d; > a := q; > -- combinational sorting circuit based on > -- one-to-one mapping of C constructs > for i in 0 to N-2 loop > for j in 0 to N-2-i loop > if (a(j+1) < a(j)) then > tmp := a(j); > a(j) := a(j+1); > a(j+1) := tmp; > end if; > end loop; > end loop; > -- result to register input > d <= a; > end if; > end process; > I wasn't suggesting directly mapping C code to VHDL like this, just that the "describe everything to the synthesizer" approach can be a bit heavy-handed and time consuming. > The resulting circuit can complete sorting in one clock cycle but > requires 0.5*1000*1000 8-bit comparators. We need an extremely large > target device to accommodate the synthesized circuit. It will be very > demanding for synthesis software to convert this code into a circuit > with only one comparator. I think my job is still safe, for now :) > I don't suggest that the synth *can* make those sorts of transformations. It doesn't have the constraints currently to know. As an aside, I think that FpgaC will do it with one comparator and a memory, (maybe fpga_toys will chip in here). In the future, we *will* be moving away from low-level stuff like this (personally, I don't think *C* is high enough level, but that's another debate :-) Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 107610
Here is my potential setup: 1. N boxes connected to a single box (spoke-to-hub config) via RocketIO running the simplex Aurora protocol. 2. Each channel would consist of four 3.125 Gb/s lanes bonded together. 3. Time-aligned samples are transmitted from each spoke to the hub. Here are my assumptions, please correct me if I am wrong: 1. If my data throughput is less than the max throughput of the link, the link layer will add filler data during the idle periods that will be stripped out on the receiver side. 2. The elastic buffers on the TX and RX sides are filled and emptied, respectively, by me. 3. If data were continuously fed into the TX buffer, sent, and correctly received by the receiver, the RX buffer would not reach an empty state. 4. I do not have to send alignment characters after the channel has been successfully initialized. I am concerned that if a sample took a bit hit, which caused the sample to be incorrectly interpreted as an idle or alignment character, the sample would basically be thrown away. This results in my data no longer being time aligned, which is catastrophic for this application. Any insight or suggestions? Is it possible to pass all data transmitted, user data and special characters, to the buffers? Given the number of channels, the length of operation, and the rate of transmission, I believe bit errors will be unavoidable. Incorrect data is relatively acceptable, but skewed data is not. Thanks!Article: 107611
David Brown wrote: > Thanks for taking the time explaining this - between you and Symon I'm > hopefully learning something! I had written a reply to this post, but I hit a wrong button as I typed and POOF! So here it is again... > However, I've a couple of issues here. First off, I can't see that the > power planes have much capacitive effect at these frequencies (the > "planes" being polygons, with other signals on the same layer, and thus > having plenty of gaps). But I'll happily admit to not having a clear > idea how to model such planes or polygons. If your planes are not designed to have good capacitance, then they won't provide it. They need to be complete on their own layer and closely spaced. It is not hard to get nFs from planes with very low inductance. > Secondly, I understand about different caps working better at different > frequencies, and obviously have bulk capacitors for the lower > frequencies (electrolytics near the regulators, and a few 4.7uF ceramics > around the board). But I still can't find any reason to expect a 0.001 > uF ceramic 0603 capacitor to be significantly better at higher > frequencies than a 0.1 uF ceramic (same dielectric) 0603 capacitor. It is not just that the caps work at different frequencies, it is how they work with the power planes. A cap closely coupled to the power planes will have a resonance (or anti-resonance) which will create a *higher* impedance in that range of frequencies than either the cap or plane alone. If you pick a cap of small value and low Q (high ESR) it will have a low-amplitude resonance, high in frequency. This same cap will require a lot of them to provide effective coupling at lower frequencies. So you can then use a smaller number of larger value caps to provide a lowered impedance at lower frequencies. Again it is important to not use parts with a high Q, as this will raise the amplitude of the impedance peaks due to parallel resonance. 
By using a range of cap values the impedance is kept low across a wide range of frequency and the resonances are kept to a minimum. > Using the muRata software, I picked a 0603 X7R 100 nF capacitor. The > software gives it an SRF of 21 MHz, L of 0.63 nH, R of 0.027 Ω, and an > impedance of 0.14 ohm at 10 MHz, 0.02 ohm at 20 MHz, 0.16 ohm at 50 MHz, > 0.38 ohm at 100 MHz, 0.78 at 200 MHz, and 1.97 ohm at 500 MHz. Picking > a 10 nF cap with the same setup gives an SRF of 67 MHz, and impedances > at these frequencies of 1.66, 0.77, 0.16, 0.24, 0.71 and 1.95 ohms. In > other words, it is better at around 100 MHz, but not vastly better. > Until we start looking at special 0306 caps for frequencies of several > hundred MHz, I just don't see the benefit of smaller capacitance values. > Even then, it is more economical to simply use a few extra caps of the > same type (assuming the board has space for it). But this does not take the parallel resonance into account. If parallel resonance did not matter we could decouple everything with a few tantalum caps. > It doesn't even take that many caps - I've got about a dozen for the > processor (which has two main supplies and a PLL supply), two or three > for each of the sdram chips, and one or two for each of the other major > chips. It sounds like this is a simple design, but have you tested the worst case? Try the situation where the address and data bus both change from all 0s to all 1s at the same moment (assuming this processor can do that). The DSP I last designed with would switch both data and address busses at the same time. Put a high-speed scope probe (with a very short ground) on a separate output from this part that is set to a 1 and watch the glitch; that is your total bounce including the inductance from the power pins and the plane bounce. Also measure the glitch on a power pin and you will have just the power plane noise. 
After you consider this noise and the other sources such as crosstalk, can you tell whether your design is quiet enough? Testing won't do it unless you explicitly test your worst cases. > One thing that makes a significant difference is that I'm not driving > any fast, high current lines - signalling is (almost) all TTL levels. > Higher current drives would mean more capacitors, but I'd still expect > to use the same types. If you are driving with fast edges, you are driving high current. Series-terminated 3.3 volt CMOS driving a 50 ohm transmission line will drive 33 mA per line. It will be much higher if it is not series terminated. Multiply that by 64 and you get 2 Amps! Did you consider this much current in your decoupling calculations? If you don't supply the current from the power plane, the caps can't really keep up with the fast rise time of many drivers (< 1 ns). It will create high noise on the planes and can trigger logic-level problems from bounce. If you are using an MCU with fully internal memory then we are talking about a different class of design and you can get by with a dozen or so single-value caps.Article: 107612
Symon wrote: > OK, I think you're correct, this conversation has reached an end. It can go > nowhere with one of us posting data and sims (including ones that model the > power planes, albeit as a lumped capacitance) that the other cannot or will > not look at, the other posting hearsay from a class he went to. I will, > however, try to make one last point based on the snippet above. Probably, but I also think this topic should probably be revisited from time to time as well. As Austin stated in the intro post, there are few reasons to stack caps (unless the dominant cause is lack of adequate capacitance), which simply shouldn't happen if you have a good idea what the worst-case current spikes from the chip are. That unfortunately isn't specified, because it's highly variable depending on the design, place/route, and other factors. If something is "fixed" by adding some additional medium/low speed capacitance, then you made some wrong assumption, or have a process problem (like the poor via plating I've seen before). My experience is that there are some designs which do not work in some packages, even with best possible practice on the user PCB, simply because of the inductance and resistances in the package. My REALLY BAD experience was XCV2000E's in BG560's. I've had similar problems with other parts that are not nearly as clear-cut, but take comfort that Xilinx is improving packaging and believes that XC4V and XC5V should not be a problem. When I have time, I may revisit the PCB layouts given your wonderful enlightenment, and see if there are improvements to be made. Maybe I'll even risk getting a few XC4VLX100, XC4VLX200, or XC5V parts and giving it a try. I suspect there may still be some land mines related to very dense designs which are optimized to one combinatorial delay based around SRLs, with minimum inter-LUT routing dominating the timing and power requirements, and which may result in very short power bursts several times the average current. 
In the largest parts, the clock skew may hide this, thus preventing the current stackup. If it's possible to juggle the routing to balance the clock skew, there may well still be "perverse" ways of getting the parts to fail, which can also be accidentally invoked by placement and routing variations. It would be interesting to spend a few days to verify this, and see if it really is safe not to worry about unexpected worst-case stackups. In the end, we may have to move to the next level, and get rid of the packages altogether. When I asked Xilinx about getting raw tested die for direct user PCB attach last year they were very resistant. With half or more of the inductance still remaining in the package, it's getting tougher, even with best practice, to meet the demands for high performance applications. thanks .. and Have Fun!! JohnArticle: 107613
KJ wrote: > What is missing is some form of control to say, how many clock cycles > it can take to implement the algorithm. Hmmm One way to do that is to declare i and/or j as variables rather than loop constants. Then throttle back to fewer loops per tick... -- Mike TreselerArticle: 107614
Antti, The chaining rule is there somewhere, and it should state that a CLKIN is not allowed to be driven from a CLKFX of a previous DCM. Also no more than two DCMs shall be cascaded. This is an old restriction, since V2, and has not changed (as the jitter on CLKFX has not changed). I have heard about the DCM NBTI macro causing headaches before, and that may be the problem. I do not know how to disable it. Maybe a quick email to the hotline would point you to how to disable it. It may be that the 12 MHz is too slow and trips the macro's "loss of clock" circuit? What value of M do you use to multiply the 12 MHz by? Austin Antti wrote: > Austin Lesea schrieb: > >> Antti, >> > [snip] > >> the min input frequency. Thus the first DCM should not be using the >> CLKFB pin. If the CLKFB pin is connected, then the software will also >> be trying to get the DLL part of the DCM to lock, and it will fail to do >> so (with an input below its lock range). >> >> Austin > > weird - the thing is that the DCM chaining works fine, in ISE > and in EDK sometimes, and the single DCM in DFS mode works > in EDK also always with CLKFB no connected. > > but with an PPC design in EDK when the CLKFB is not connected > then I think the DCM_STANDBY macro jumps in and shuts down > the DCM, well Xilinx docs say the standby macro tracks idle state > longer 100ms, hm the 360 milliseconds sounds still plausible time > for the shutdown macro to activate > > connecting the 12MHz input clock that *IS* defenetly clean > and stable to both CLKIN _and_ CLKFB fixed the issue. > > surpsingly I have EDK microblaze systems with similarly chained DCMs > where CLKFB of first DCM is not connected that work. > > BTW after reading xilinx AR's I did not find a clear note that DCM > chaining from FX is not allowed, there was note that Arch wizard does > not support it yet, but not that it want work or can not be used. > > Antti >Article: 107615
Antti, As convenient as the PMV is, it is actually 16 different ring oscillators. We would have to fully characterize all of them in order to pick the "best" one, and then create a user macro to instantiate it... Maybe some day, but right now it is not even on a list of "nice to have..." Austin Antti wrote: > Austin Lesea schrieb: > >> Antti, >> >> The PMV is not used for configuration. >> >> In fact, it was not intended to be used by anything at all. It is there >> for test only. >> >> As such, it is uncharacterized. Anything that is uncharacterized can >> NOT be made available for customer use. The DCM NBTI macro did make use >> of it, as that macro needed an oscillator for when no clock was present, >> and the less resource used by a macro, the better the macro (least >> impact on any design). >> >> If you want a ring oscillator, (which is all that the PMV is), you can >> easily make one out of a chain of LUTs. >> >> Austin >> >> Antti wrote: >>> Hi All, >>> >>> as I had guessed for long time the PMV primitive is actually the >>> on-chip oscillator, most likely it is the same oscillator that is used >>> for configuration. And it can be used from user designs as well. PMV is >>> present in all recent FPGAs. >>> >>> http://xilant.com/index.php?option=com_content&task=view&id=29&Itemid=32 >>> >>> When I opened webcase about the issue that Xilinx tools made fatal >>> failure when I tried to use the PMV from an hard macro the response was >>> that, "you dont need to know" - well now I know :) >>> >>> Antti >>> > > Hi Austin, > > hmm, well the PMV sounds like better oscillator than the chain of LUT's > as it always in the same place and doesnt need LOC constraints etc? The > PMV looks like a available replacement for the actual config clock > primitive. > > I still wish Xilinx FPGAs would have the config Clock accessible again > as it was with old Xilinx FPGAs ! > > Antti >Article: 107616
vt2001cpe, It seems that if bit errors are not acceptable, as you say, they WILL happen (if not because of marginal signal integrity of cables, connectors, etc. then because of EMI/RFI ... in effect "**it happens"). Given a bit error is unavoidable, that says you MUST use FEC, or an acknowledge/negative-acknowledge (resend) protocol. A bit error causing the link to lose frame synchronization, or channel synchronization, is something I have not studied for the Aurora core, and you could email a webcase to ask them exactly that question: "how does Aurora deal with a bit error that may cause loss of frame, or channel, synchronization? Can that even happen?" It may not be possible to corrupt an 8b/10b character such that you lose sync... after all, it is a well-tried and well-thought-out old coding (by IBM), and they were always good at making channel coding such that loss of link sync was not possible, even at very high error rates... For example, to lose frame sync on a T1 or E1 link requires an error rate of ~1E-3 or worse. Systems are usually designed such that single isolated bit errors don't cause problems (except to data). Austin vt2001cpe wrote: > Here is my potential setup: > 1. N-boxes connected to a single box (Spoke to Hub config) via RocketIO > running the simplex Aurora protocol. > 2. Each channel would consist of four 3.125Gb/s lanes bonded together. > 3. Time aligned samples are transmitted from each spoke to the hub. > > Here are my assumptions, please correct me if I am wrong: > > 1. If my data throughput is less than the max throughput of the link, > the Link Layer will add filler data during the idle periods that will > be stripped out on the receiver side. > 2. The elastic buffers on the TX and RX sides are filled and emptied, > respectively, by me. > 3. If data were continuously fed into the TX buffer, sent, and > correctly received by the receiver, the RX buffer would not reach an > empty state. > 4. 
I do not have to send alignment characters after the channel has > been successfully initialized. > > I am concerned that if a sample took a bit hit, which caused the sample > to be incorrectly interpreted as a idle or alignment character, the > sample would basically be thrown away. This results in my data no > longer being time aligned, which is catastrophic for this application. > Any insight or suggestions? Is it possible to pass all data > transmitted, user data and special characters to the buffers? > > Given the number of channels, the length of operation, and the rate of > transmission, I believe bit errors will be unavoidable. Incorrect data > is relatively acceptable, but skewed data is not. > > Thanks! >Article: 107617
"David Brown" <david@westcontrol.removethisbit.com> wrote in message news:44f59085$1@news.wineasy.se... > > Thanks for taking the time explaining this - between you and Symon I'm > hopefully learning something! > > However, I've a couple of issues here. First off, I can't see that the > power planes have much capacitive effect at these frequencies (the > "planes" being polygons, with other signals on the same layer, and thus > having plenty of gaps). But I'll happily admit to not having a clear > idea how to model such planes or polygons. > > Secondly, I understand about different caps working better at different > frequencies, and obviously have bulk capacitors for the lower > frequencies (electrolytics near the regulators, and a few 4.7uF ceramics > around the board). But I still can't find any reason to expect a 0.001 > uF ceramic 0603 capacitor to be significantly better at higher > frequencies than a 0.1 uF ceramic (same dialectric) 0603 capacitor. > > Using the muRata software, I picked a 0603 X7R 100 nF capacitor. The > software gives it an SRF of 21 MHz, L of 0.63 nH, R of 0.027 O, and an > impedance of 0.14 ohm at 10 MHz, 0.02 ohm at 20 MHz, 0.16 ohm at 50 MHz, > 0.38 ohm at 100 MHz, 0.78 at 200 MHz, and 1.97 ohm at 500 MHz. Picking > a 10 nF cap with the same setup gives an SFR of 67 MHz, and impedances > at these frequencies of 1.66, 0.77, 0.16, 0.24, 0.71 and 1.95 ohms. In > other words, it is a better at around 100 MHz, but not vastly better. > Until we start looking at special 0306 caps for frequencies of several > hundred MHz, I just don't see the benefit of smaller capacitance values. > Even then, it is more economical to simply use a few extra caps of the > same type (assuming the board has space for it). > > It doesn't even take that many caps - I've got about a dozen for the > processor (which as two main supplies and a PLL supply), two or three > for each of the sdram chips, and one or two for each of the other major > chips. 
> > One thing that makes a significant difference is that I'm not driving > any fast, high current lines - signalling is (almost) all TTL levels. > Higher current drives would mean more capacitors, but I'd still expect > to use the same types. > > Hi David, I think the main issue at stake in this thread isn't the impedances per se, but resonances between the various components of the board assembly. I recommend that you try using spice simulations to see these resonance mechanisms for yourself. This certainly increased my understanding of the subject. The LTSpice files I posted might help get you started. In summary, I think we have (at least) two different methodologies in this thread. 1) Rick's teacher has presented a way to prevent resonances between bypass caps and power planes. These resonances can be substantial because of the high Q of the plane capacitance. He prevents this serious resonance by using a bunch of different valued capacitors to move and spread out the resonance. This introduces new parallel resonances between these different valued caps, but these aren't as bad as the original plane resonance because the caps have low Q. 2) For FPGA boards, I suggest a solution whereby we dispense with the power plane. Hence no serious resonance, as we have no high Q components. Use one value of decoupling cap to prevent resonances between different values. Pick a value with crappy Q. We have lost the very high frequency decoupling capability of the plane capacitance, but that was no use anyway as we can't couple this plane capacitance to the device we're using because its package has too much inductance (from its balls and vias plus the PCB vias). Instead, we use a bunch (maybe even a bigger bunch than in (1)) of bypass caps (very) near the device and a small 'mini-plane' to parallel them together. The money you've saved by removing a PCB layer pays for the extra caps. Both methods will work. Each has pros and cons. But I use methodology 2). 
:-) As package technology advances, I will re-evaluate this position. I may also need to learn how to use a 3-D modelling package, as lumped simulation is not much help beyond 1GHz. Cheers, Syms. p.s. In both methods, the over-riding key issue is to have a decent ground. Without that, forget everything.Article: 107618
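The impedance figures David quotes and the cap-to-cap parallel resonance Symon describes can both be explored with a first-order series-RLC model per capacitor. A quick sketch (the 100 nF R/L/C values are the muRata numbers quoted above; the 10 nF branch's 0.06 ohm ESR is a hypothetical round figure; this simple model only approximates what the muRata software reports):

```python
import math

def branch_z(c, esl, esr, f):
    """Complex impedance of one capacitor modelled as series R-L-C."""
    w = 2 * math.pi * f
    return complex(esr, w * esl - 1.0 / (w * c))

def srf(c, esl):
    """Series resonant frequency, where |Z| dips down to the ESR."""
    return 1.0 / (2 * math.pi * math.sqrt(esl * c))

def parallel_z(branches, f):
    """Impedance of several capacitor branches in parallel."""
    return 1.0 / sum(1.0 / branch_z(c, esl, esr, f) for c, esl, esr in branches)

# 0603 X7R 100 nF: L = 0.63 nH, R = 0.027 ohm (values quoted above).
c1 = (100e-9, 0.63e-9, 0.027)
print("SRF = %.0f MHz" % (srf(c1[0], c1[1]) / 1e6))
for f in (10e6, 50e6, 100e6, 200e6, 500e6):
    print("%4.0f MHz: %.2f ohm" % (f / 1e6, abs(branch_z(*c1, f))))

# Add a 10 nF cap (same ESL, assumed 0.06 ohm ESR) in parallel and look
# for the anti-resonant peak between the two SRFs -- the parallel
# resonance between different valued caps that methodology (1) accepts.
c2 = (10e-9, 0.63e-9, 0.06)
freqs = [f * 1e6 for f in range(10, 101)]
peak_f = max(freqs, key=lambda f: abs(parallel_z([c1, c2], f)))
print("anti-resonance near %.0f MHz, |Z| = %.2f ohm"
      % (peak_f / 1e6, abs(parallel_z([c1, c2], peak_f))))
```

The model reproduces the quoted 100 nF numbers closely above the SRF (0.38 ohm at 100 MHz, 1.97 ohm at 500 MHz), and the parallel sweep shows the anti-resonant hump between the two SRFs staying modest because both branches have resistive (low-Q) ESR, which is Symon's point about crappy-Q caps versus a high-Q plane.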
Austin Lesea schrieb: > Antti, > > The chaining rule is there somewhere, and it should state that a CLKIN > is not allowed to be driven from a CLKFX of a previous DCM. > > Also no more than two DCMs shall be cascaded. > > This is an old restriction, since V2, and has not changed (as the jitter on > CLKFX has not changed). > > I have heard about the DCM NBTI macro causing headaches before, and that > may be the problem. I do not know how to disable it. Maybe a quick > email to the hotline would point you to how to disable it. > > It may be the 12 MHz is too slow and trips the macro's "loss of clock" > circuit? > > What value of M do you use to multiply the 12 MHz by? > > Austin > > Antti wrote: > > Austin Lesea schrieb: > > > >> Antti, > >> > > [snip] > > > >> the min input frequency. Thus the first DCM should not be using the > >> CLKFB pin. If the CLKFB pin is connected, then the software will also > >> be trying to get the DLL part of the DCM to lock, and it will fail to do > >> so (with an input below its lock range). > >> > >> Austin > > > > weird - the thing is that the DCM chaining works fine in ISE, > > and in EDK sometimes, and a single DCM in DFS mode works > > in EDK always with CLKFB not connected. > > > > but with a PPC design in EDK, when the CLKFB is not connected, > > then I think the DCM_STANDBY macro jumps in and shuts down > > the DCM. well, the Xilinx docs say the standby macro tracks an idle state > > longer than 100ms, so the 360 milliseconds sounds like a plausible time > > for the shutdown macro to activate. > > > > connecting the 12MHz input clock that *IS* definitely clean > > and stable to both CLKIN _and_ CLKFB fixed the issue. > > > > surprisingly I have EDK microblaze systems with similarly chained DCMs > > where CLKFB of the first DCM is not connected that work. 
> > > > BTW after reading Xilinx AR's I did not find a clear note that DCM > > chaining from FX is not allowed; there was a note that the Arch wizard does > > not support it yet, but not that it won't work or cannot be used. > > > > Antti > > 50, 72 and 100MHz out from the first DCM all had the 360 millisecond shutdown. For the moment, connecting the input clock to both CLKIN AND CLKFB of the first DCM seems to fix the issue. If there is a better workaround, well, I will possibly open a webcase or something; for tonight I got it working, which is what is needed right now. AnttiArticle: 107619
Awesome summary Symon :)Article: 107620
Antti wrote: > Hi > > if anyone has seen the same or has any ideas how to avoid the issue I > am facing, please help - I am trying to get it solved myself (my deadline > is today 21:00 German time) but am kinda stuck currently. > > problem: > > Virtex-4FX12 > 2 DCMs in series > DCM1 uses only the FX output for the PPC system clock (to get the clock into > the DLL input range) > DCM2 generates the 3X processor clock for the PPC > > it all works for 360 milliseconds after configuration. then the first > DCM will lose lock, outputs stop, everything stops. the 360 > millisecond delay is not dependent on the first DCM's clock ratio settings. if > the PPC is held in reset then the DCMs still shut down after the same 360 > milliseconds. > > any ideas? what to check? I have a Lecroy 2GS/s DSO on some signals and > power supply lines but am not seeing anything at the time when the DCMs > shut off. > > thanks in advance for any suggestions, > > Antti > Antti, I don't think this is your problem since you said it is the first DCM losing lock; however, check to make sure the jitter out of the first DCM is within the max input jitter specs for the second DCM. In most cases, you violate the max jitter spec when trying to drive a second DCM with the clkfx output of the first. Perhaps you can change it around so that the first is doing clkx2 and the second is doing the clkfx? Otherwise, at least check the jitter for your M and D values to satisfy yourself that the jitter is not too much for the second DCM. Then again, I'm sure you already know all this ;-) Second question: are you using the NBTI macro? I don't know what changes Xilinx has made to it in the past 16 months. The original version for the devices that don't have the DCM changes had race conditions that made it unreliable for higher clock input frequencies, at least on paper. We had no end of problems getting it to react properly when starting a clock up after the FPGA had been programmed. 
There were several revisions to the circuit since then, but we have not looked at them here to see if the race problem was addressed or not. We are leaving the NBTI circuit out for designs that have a crystal on the board now.Article: 107621
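Much of the back-and-forth above is range bookkeeping: is the 12 MHz input legal for the DFS, is CLKFX = CLKIN * M / D inside the second DCM's DLL input range, and should CLKFB even be connected on the first DCM. The arithmetic is easy to sketch; note the range limits below are placeholders I've invented for illustration, not real DS302 numbers, which vary by device and speed grade:

```python
# Placeholder range limits -- look up DS302 for your device/speed grade.
DFS_CLKIN_RANGE = (1e6, 350e6)    # assumed DFS (CLKFX path) input range
DLL_CLKIN_RANGE = (32e6, 450e6)   # assumed DLL (CLK0/CLKFB path) input range

def clkfx_out(f_in, m, d):
    """CLKFX output frequency of a DCM: f_in * M / D."""
    return f_in * m / d

def check_cascade(f_in, m, d):
    """Sanity-check a DCM1(CLKFX) -> DCM2(CLKIN) cascade against the
    assumed ranges, per the advice in the thread above."""
    f1 = clkfx_out(f_in, m, d)
    notes = []
    if not DFS_CLKIN_RANGE[0] <= f_in <= DFS_CLKIN_RANGE[1]:
        notes.append("DCM1 CLKIN outside DFS input range")
    if f_in < DLL_CLKIN_RANGE[0]:
        # Austin's point: below the DLL lock range, don't connect CLKFB.
        notes.append("DCM1 CLKIN below DLL range: CLKFB should be unused")
    if not DLL_CLKIN_RANGE[0] <= f1 <= DLL_CLKIN_RANGE[1]:
        notes.append("DCM2 CLKIN outside DLL input range")
    return f1, notes

f1, notes = check_cascade(12e6, 25, 3)   # 12 MHz * 25/3 = 100 MHz
print("DCM1 CLKFX = %.0f MHz" % (f1 / 1e6))
for n in notes:
    print("note:", n)
```

This only checks frequencies; it says nothing about the CLKFX jitter spec that, per Xilinx, makes CLKFX-to-CLKIN cascading unsupported in the first place.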
Antti wrote: > Austin Lesea schrieb: > > >>Antti, >> > > [snip] > > >>the min input frequency. Thus the first DCM should not be using the >>CLKFB pin. If the CLKFB pin is connected, then the software will also >>be trying to get the DLL part of the DCM to lock, and it will fail to do >>so (with an input below its lock range). >> >>Austin > > > weird - the thing is that the DCM chaining works fine in ISE, > and in EDK sometimes, and a single DCM in DFS mode works > in EDK always with CLKFB not connected. > > but with a PPC design in EDK, when the CLKFB is not connected, > then I think the DCM_STANDBY macro jumps in and shuts down > the DCM. well, the Xilinx docs say the standby macro tracks an idle state > longer than 100ms, so the 360 milliseconds sounds like a plausible time > for the shutdown macro to activate. > > connecting the 12MHz input clock that *IS* definitely clean > and stable to both CLKIN _and_ CLKFB fixed the issue. > > surprisingly I have EDK microblaze systems with similarly chained DCMs > where CLKFB of the first DCM is not connected that work. > > BTW after reading Xilinx AR's I did not find a clear note that DCM > chaining from FX is not allowed; there was a note that the Arch wizard does > not support it yet, but not that it won't work or cannot be used. > > Antti > Antti, see my note above. You might try taking out the DCM_STANDBY macros and replacing them with straight DCMs to see if it clears up the problem. If you are using 8.1, there is an environment variable to avoid the auto-insertion.Article: 107622
Jim, The automotive processors use a separate communication link (most are on-board packet-switched virtual links) to each processor, brought out through an interface on the chip (Nexus). There are support standards for this. The Asian processors that we have created support for used lockstepped simulation and hardware to extract more information for the developers. Most of the current processors that I am seeing use asynchronous background BDM or JTAG brought out through a limited number of pins. Watch this space later in the year for information on the consumer products multiprocessor debug support. w.. Jim Granville wrote: > Walter Banks wrote: > > Jim, > > > > We have certainly thought about it. Byte Craft has done quite a bit of > > instruction design work on embedded commercial processors. Internally > > for every C compiler we create an instruction set simulator with a lot of > > performance instrumentation. > > > > I expect the next round of processors will move towards multiple processor > > solutions to applications. Compilers and other HLL tools will be focused on > > application work division. > > > > w.. > > Sounds promising. What about debug pathways ? > > -jgArticle: 107623
Antti, Yes, it looks like the macro thinks the clock is not present. I would open a webcase just so that someone is aware of this. Austin Antti wrote: > Austin Lesea schrieb: > >> Antti, >> >> The chaining rule is there somewhere, and it should state that a CLKIN >> is not allowed to be driven from a CLKFX of a previous DCM. >> >> Also no more than two DCMs shall be cascaded. >> >> This is an old restriction, since V2, and has not changed (as the jitter on >> CLKFX has not changed). >> >> I have heard about the DCM NBTI macro causing headaches before, and that >> may be the problem. I do not know how to disable it. Maybe a quick >> email to the hotline would point you to how to disable it. >> >> It may be the 12 MHz is too slow and trips the macro's "loss of clock" >> circuit? >> >> What value of M do you use to multiply the 12 MHz by? >> >> Austin >> >> Antti wrote: >>> Austin Lesea schrieb: >>> >>>> Antti, >>>> >>> [snip] >>> >>>> the min input frequency. Thus the first DCM should not be using the >>>> CLKFB pin. If the CLKFB pin is connected, then the software will also >>>> be trying to get the DLL part of the DCM to lock, and it will fail to do >>>> so (with an input below its lock range). >>>> >>>> Austin >>> weird - the thing is that the DCM chaining works fine in ISE, >>> and in EDK sometimes, and a single DCM in DFS mode works >>> in EDK always with CLKFB not connected. >>> >>> but with a PPC design in EDK, when the CLKFB is not connected, >>> then I think the DCM_STANDBY macro jumps in and shuts down >>> the DCM. well, the Xilinx docs say the standby macro tracks an idle state >>> longer than 100ms, so the 360 milliseconds sounds like a plausible time >>> for the shutdown macro to activate. >>> >>> connecting the 12MHz input clock that *IS* definitely clean >>> and stable to both CLKIN _and_ CLKFB fixed the issue. >>> >>> surprisingly I have EDK microblaze systems with similarly chained DCMs >>> where CLKFB of the first DCM is not connected that work. 
>>> >>> BTW after reading Xilinx AR's I did not find a clear note that DCM >>> chaining from FX is not allowed; there was a note that the Arch wizard does >>> not support it yet, but not that it won't work or cannot be used. >>> >>> Antti >>> > > 50, 72 and 100MHz out from the first DCM all had the 360 millisecond > shutdown. For the moment, connecting the input clock to both CLKIN > AND CLKFB of the first DCM seems to fix the issue. > > If there is a better workaround, well, I will possibly open a > webcase or something; for tonight I got it working, which is what is needed > right now. > > Antti >Article: 107624
<heinerlitz@gmx.de> wrote in message news:1156924431.107740.20650@m73g2000cwd.googlegroups.com... > Hi > > I have several questions about powering the AVCCAUXRX/TX MGT power supply: > > -Are switching regulators allowed, or should linears be used? An > application note says a switcher will work up to 3.125 with extensive > filtering; some say you should use linears. We want to work with the > MGTs at 6.5. > Hi Heiner, I'd be interested in the response you get for this question. As linear regulators have a bandwidth of 100kHz or so, I fail to see how they provide an advantage over a filtered switcher. > > -How much power (amps) do unused MGTs consume? > Is that in DS302 anywhere? Or the power estimate spreadsheet thingy? > > -Do unused MGTs generate lots of noise on the power supply, so that other > components can be affected? > UG076 table 6-1 gives some indication of the filtering required, from which you might be able to make some assumptions. > > My plan was to supply the unused MGTs from the same source as the 1.2 V > FPGA core without filtering caps. This would consume less space on the > PCB; however, it is crucial to know how many amps unused MGTs consume > and whether they generate noise on the power supply. > You might want to get a development board and try removing some of the bypass stuff to see what happens. I doubt Xilinx will support what you suggest, but that doesn't mean it won't work. Good luck. Syms.
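On the switcher-vs-linear point: whether a filtered switcher is acceptable comes down to how far below the switching frequency the supply filter's corner sits. A sketch of the corner-frequency arithmetic (the ferrite-as-inductor value and bulk capacitance here are hypothetical round numbers, not UG076's recommended parts):

```python
import math

def lc_cutoff(l_henries, c_farads):
    """Resonant corner of a simple LC supply filter: 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henries * c_farads))

# Hypothetical filter: a ferrite behaving as ~1 uH at low frequency, with
# 22 uF of bulk bypass on the MGT side of it.
f_c = lc_cutoff(1e-6, 22e-6)
print("filter corner ~ %.0f kHz" % (f_c / 1e3))
```

With a corner in the tens of kHz, switching ripple at a few hundred kHz is attenuated well before the MGT supply pins, which is the basis for the "switcher plus extensive filtering" approach; a linear regulator with ~100 kHz bandwidth is doing a comparable job by different means.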