We have used the Xilinx XC5200 FPGA family and XC17XXXD serial PROM parts in a number of designs. We configure via master serial mode with the INIT~ line tied to the PROM OE~ line with an external 4.7K pullup resistor. After a number of customer complaints reporting product malfunctions, we investigated and learned that the Xilinx FPGA will not reliably configure on power up if the power supply has a slower than 4ms (typical) rise time from 3 volts to 4.75 volts. Note that 4ms is a short time, since we have a number of supplies that require greater than 50ms to rise from 3 volts to 4.75 volts. This is a serious problem! Has anyone else experienced this problem? Do you have any recommended solutions? One solution is to have all our customers use power supplies that power up quickly. This is not a realistic request. Another solution is to hold off configuration until a safe configuration voltage is reached (i.e. 4.75 volts). This requires a power supply monitor circuit that would hold the INIT~ line low until 4.75 volts is detected. This circuit can easily be included on the configuration PROM die. Since the PROM's OE~ signal is often tied to the INIT~ signal, the PROM can then hold off configuration (with its built-in power supply monitor circuit) until a safe programming voltage is detected. Does anyone know of such a PROM?Article: 8401
Hello to everyone, About two weeks ago, I posted an article concerning configuration troubles of an XC4013E-2 using an ATMEL 17C256 EEPROM. First - many thanks for the great number of helpful replies and Emails I received. An email message from ATMEL finally gave the "wall-breaking" hint: Some early delivered devices do not sense the RESET polarity on power-up correctly! They wrote that devices with the following codes (the number under the device designation 17C256) seem to have this problem: 6D0852 6D0853 6D0854 6D0855 I looked at my components and had one of them ... :( Now, I managed to get newer ones (7Axxx code) and they work without any problems :) . Best Regards, Tobias HilpertArticle: 8402
hi, I have a design which uses 55% of a 10K100, and within this design I have 12 8-bit registers which need to be selectively loaded into another register. IOW, I have an 8-bit bus from which 12 8-bit registers hang. Because the 10K doesn't have internal tristates, I have a big mux which selects the register. Currently this mux is the bottleneck in my design, which limits my speed to 11 MHz in a -3 part. Any ideas how to increase the speed in this design? The design is in Verilog. Are there any ways to get more performance by using the floorplan tool? I am already using FAST synthesis and maximum speed in the compiler options. thanks muzo WDM & NT Kernel Driver Development Consulting <muzok@pacbell.net>Article: 8403
John Woodgate wrote: [ Poorly worded statement that expected magnitude of integrals of random variables with symmetric distributions about zero is usually proportional to the square root of the integration length deleted. ] > Yes, of course. But you have assumed, AFAICS, in there somewhere a > non-linearity, either that the pdf of the noise is asymmetrical and the right > way round to produce a net movement away from metastability (which I > think is petitio principii) or that the integration somehow involves a > rectification. I would regard your case as still unproven by argument, > although I can accept that the effect is real. Well, that leaves math or actual circuits. But the whole issue is becoming large and therefore somewhat confused. This is my fault, but probability in and of itself is quite a large subject, and the integral referred to above is not IMHO trivial by any means, even if it should turn out to be mathematically trivial depending on the particular PDF (Probability Density Function). As I read your paragraph, two things I've written are intermeshed. One is the idea that in general integrating random displacements commonly has a non-zero expected result. An example: take a resistor to ground and place an integrator on the other end which integrates the noise current onto a capacitor. Now clearly the noise currents average to a DC zero. However, the expected magnitude of the output voltage of the integrator grows proportionally to sqrt(time). This is a classic random walk problem. The most likely place to be at any later time T is still 0, and the probability of having a negative or positive result is equal, but the average value of V(t)^2 will be found to be proportional to t, and hence the average magnitude of V(t) is proportional to sqrt(t). I brought up this thought in response to your statement that the high frequency noise seems to do nothing. I intended to point out that very likely it will cause the initial voltage to wander in time irrespective of the nonlinear behavior in the system or, for that matter, "diode action". This is by no means certain without nailing down the specifics of the problem, but it is the "usual" result. So far in this discussion neither the circuit nor the noise has been fully specified, so it's not possible to complete the math. For example, in the above resistor noise integrator problem, if we add a large capacitor in series with the integrator then the sqrt(t) behavior will only be valid for times small compared to RC. Now the nonlinear response function in this problem works in a manner opposite to the capacitor added above: it causes a small divergence to blow up instead of pushing it back to zero. This is a separate point from the one above, which considers the simplest case with no push one way or the other. I think it is time to resort to hard measurements or the literature. I am seriously considering building a test circuit. ChuckArticle: 8406
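To put rough numbers on the random walk described above (a sketch, assuming an ideal op-amp integrator with feedback capacitor C driven by the resistor's short-circuit Johnson noise current, one-sided density 4kT/R):

  E[V(t)] = 0, \qquad
  E[V(t)^2] = \frac{1}{C^2}\int_0^t\!\!\int_0^t E[\,i_n(t_1)\,i_n(t_2)\,]\,dt_1\,dt_2
            = \frac{2kT}{R\,C^2}\,t, \qquad
  E[\,|V(t)|\,] = \sqrt{\tfrac{2}{\pi}}\;\sqrt{\frac{2kT\,t}{R\,C^2}} \;\propto\; \sqrt{t}.

So the mean output stays at zero while its expected magnitude grows as sqrt(t), which is exactly the "wander" being argued for; the series capacitor mentioned above caps this growth once t approaches RC.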
rk wrote:
> here are some calculations i did using chip express cx2001 technology, based
> on their flip-flop parameters and example in the CX Technology Design Manual,
> so i could see how this technology performs. The CX2001 series uses a
> channelled module architecture (gate array) with each module consisting of
> three 2:1 muxes and an AND gate (a bit differently set up than Act 1 but not
> all that dissimilar); there are no hardwired flip-flops, as there are in the
> xilinx devices which peter a. previously talked about. I assumed a 50 MHz
> clock, a 10 MHz average incoming data rate, and made the extra settling time
> a parameter. here's what i got:
>
> extra delay      mtbf            mtbf
> (nsec)           (years)         (years)
>
> 1.0000e+0        448.25e-6       14.214e-12
> 2.0000e+0        180.84e-3       5.7344e-9
> 3.0000e+0        72.956e+0       2.3134e-6
> 4.0000e+0        29.432e+3       933.29e-6
> 5.0000e+0        11.874e+6       376.52e-3
> 6.0000e+0        4.7903e+9       151.90e+0
> 7.0000e+0        1.9325e+12      61.280e+3
>
Correction: The center column must be mtbf in seconds, not years. My opinion: Interesting results, dramatically showing the superiority of dedicated, tightly-designed and laid-out flip-flops. Under the above conditions, with the product of the two frequencies 500 times higher than in the Xilinx data book (page 13-43), just 2 ns of extra allowable resolution time gives an MTBF of about 20,000 years in the 2-year-old XC4005E-3. The gate array with its, -excuse the harsh word-, "spread-out" flip-flops, takes almost 7 ns to achieve the same MTBF. There is no substitute for high gain-bandwidth in the feedback path of the master latch, if you want to resolve metastability reasonably fast. Peter Alfke, Xilinx ApplicationsArticle: 8407
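For reference, numbers of this kind come from the standard metastability model (a sketch; T0 and tau are fitted, device-dependent constants that are not quoted in the post):

  \mathrm{MTBF}(t_x) \;=\; \frac{e^{\,t_x/\tau}}{T_0 \; f_{clk} \; f_{data}},

with f_clk = 50 MHz, f_data = 10 MHz and t_x the extra settling time in the left column. Each additional tau*ln(10) of settling time buys another decade of MTBF, which is why the table climbs so steeply and why the resolution time constant of the flip-flop matters so much.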
In article <3491BE33.36C5A13B@CatenaryScientific.com>, Chuck Parsons <chuck@CatenaryScientific.com> writes > > >John Woodgate wrote: > > [ Poorly worded statement that expected magnitude of integrals of > random variables with symmetric distributions about zero is usually > proportional to the square root of the integration length deleted. ] > >> Yes, of course. But you have assumed, AFAICS, in there somewhere a >> non-linearity, either that the pdf of the noise is asymmetrical and the right >> way round to produce a net movement away from metastability (which I >> think is petitio principii) or that the integration somehow involves a >> rectification. I would regard your case as still unproven by argument, >> although I can accept that the effect is real. > > Well that leaves math or actual circuits. But the whole issue is becoming >large and therefore somewhat confused. This is my fault, but probability >in of itself is quite a large subject, and the integral referred to above is not >IMHO trivial by any means even if it should turn out to be mathematically >trivial >depending on the particular PDF (Probability Density Function). > > As I read your paragraph, two things I've written are intermeshed. One is >the idea that in general integrating random displacements commonly has >a non-zero expected result. An example is take a resistor to ground >and place a integrator on the other end which integrates the noise current >onto a capacitor. Now clearly noise currents average to a DC zero. >However the expected magnitude of the output voltage of the integrator >grows proportionally to sqrt(time). This is a classic random walk problem. >The most likely place to be at any later time T is still 0 and the probability >of having a negative or positive result is equal, but the average value >of V(t)^2 will be found to be proportional to t and hence the average >magnitude of V(t) is proportional to sqrt(t). No, I can't go along with that. The circuit you describe is (theoretically, anyway) linear, and cannot produce what is effectively a growing d.c. voltage from the noise. Your math reasoning does not take into account that the sqrt function is two-valued. What grows proportionally to sqrt(t) is not sqrt(v^2), which can be either positive or negative, but |sqrt(V^2)|, which is positive. > I brought up this thought >in response to your statement that the high frequency noise seems to >do nothing. I intended to point out that very likely it will cause the >initial voltage to wander in time irrespective of the nonlinear behavior >in the system or for that matter "diode action". This is by no means certain >with out nailing down the specifics of the problem, but it is the "usual" >result. I suppose because there is always some non-linearity, and since the metastable condition is (theoretically) a single point in the circuit's state-space, whichever way the bias is, it moves the conditions away from the metastable point. > > So far in this discussion neither the circuit or the noise has been fully >specified >so its not possible to complete the math. For example in the above resistor >noise >integrator problem if we add a large capacitor in series with the integrator >then the sqrt(t) behavior will only be valid for small times compared to RC. > > Now the nonlinear response function in this problem works in a manner >opposite >to the capacitor added above it causes a small divergence to blow up, instead of >pushing them back to zero. 
This is a separate point from the one above which >considers >the simplest case with no push one way or the other. > > I think it is time to resort to hard measurements or the literature. I am >seriously >considering building a test circuit. > >Chuck > > Good idea! -- Regards, John Woodgate, Phone +44 (0)1268 747839 Fax +44 (0)1268 777124. OOO - Own Opinions Only. It is useless to threaten a strong man - he will ignore you. It is dangerous to threaten a weak man - he will kill you if he can.Article: 8408
I am working on a design with 30 16-bit adders, plus lots of latches, in an XC4013. It should be able to run at up to about 30 MHz with the 4013-2, but I am wondering about the dynamic power. If on average a signal changes every other clock cycle, how much power should I expect out of this? I don't want a big heatsink and fan for each chip! I tried to find this in "the programmable logic data book," but I didn't find it. I only need rough numbers right now, but I don't even have that, thanks, -- glenArticle: 8409
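As a rough first cut (a sketch in which every number below is an assumption, not a data book value), CMOS dynamic power is the sum over switching nodes of

  P \;\approx\; N_{sw} \cdot \tfrac{1}{2}\, C_{node}\, V_{CC}^{2} \cdot r,

where N_sw is the number of toggling nodes, C_node the average capacitance per node (logic plus routing), and r the transition rate per node, here about f_clk/2 = 15 MHz if a node changes every other cycle. Purely for illustration, 1500 active nodes at an assumed 5 pF each on a 5 V supply would give on the order of 1.4 W; the real answer depends entirely on the actual node capacitances and routing loads.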
Peter Alfke <peter@xilinx.com> wrote in article <3491C135.D65118EF@xilinx.com>... > rk wrote: > > > here are some calcuations i did using chip express cx2001 technology, > > based <snip> > > > > extra delay mtbf mtbf > > (nsec) (years) (years) > > > > 1.0000e+0 448.25e-6 14.214e-12 > > 2.0000e+0 180.84e-3 5.7344e-9 > > <snip> > > Correction: The center column must be mtbf in seconds, not years. **** yup, good eye-balls pete. i cut and pasted the numbers but the titles **** didn't come over (sigma splot) so i did those by hand. i was sort of **** paranoid and checked my numbers vs. chipx' for the case they did, **** checked each row for about the right divider from sec -> years, but **** missed the obvious. > My opinion: > Interesting results, dramatically showing the superiority of dedicated, > tightly-designed and laid out flip-flops. **** i thought it would make a nice comparison and hope from the description **** it was plain for all that it was a 'routed' flip-flop design, not a **** hardwired one. > Under the above conditions with the product of the two frequencies 500 > times higher than in the Xilinx data book ( page 13-43 ) , just 2 ns of > extra allowable resolution time give an MTBF of about 20,000 years in > the 2-year old XC4005E-3. The gate array with its, -excuse the harsh > word-, "spread-out" flip-flops, takes almost 7 ns to achieve the same > MTBF. **** no problem, don't consider it harsh, that's just the way it is. i **** call it a 'routed flip-flop' vs. a 'hard-wired flip-flop.' now, i'm **** not sure if there are official terms for these by those in the chip **** business. if so, please let us know so we all use consistent and **** correct language. > > There is no substitute for high gain-bandwidth in the feedback path of > the master latch, if you want to resolve metastability reasonably fast. **** yup, going through the routing network slows things down and it shows **** in this parameter. **** **** an interesting aside, however, is some simulations on the performance **** of routed vs. hard-wired flip-flops in the Actel RH1280 and the Act 3 **** devices (either A1460A or A14100A), where you can select either type **** of flip-flop. The results showed that for this technology the choice **** of implementation had a significant effect on tsu, not counting the **** ability to 'combine' in their s-module, which gives even a better tsu. **** however, clk -> q performance for the global routed clocks showed no **** significant difference. by careful logic design, the slower c-module **** flip-flops can be used with no loss in system performance. additionally, **** the slower routed flip-flops generally offer better radiation performance, **** so in the aerospace community they still have some advantages. **** lastly, i think it would be interesting to compare a lot of the different **** devices and technologies out there. so, if anyone wants to email me the **** parameters for the device that they are familiar with, i'll compute the **** results for some sample cases and post the results. > Peter Alfke, Xilinx Applications -------------------------------------------------------------- rk "there's nothing like real data to screw up a great theory" - me (modified from original, slightly more colorful version) --------------------------------------------------------------Article: 8410
JTAG configuration of Xilinx XC4000E FPGAs? Urgent! I need to do JTAG configuration of a single FPGA (Xilinx XC4013E). Can anyone advise/supply 3rd-party software to do this? Xilinx documentation details that it's indeed possible to configure an XC4000 FPGA via its JTAG interface, but Xilinx does not extend their Xchecker software to support this. Any suggestions? Thanks- Peter Fenn -------------------------------------- "CodeLogic" Digital & Software Design Services -------------------------------------- PeteFenn@iafrica.com TEL: (+27 21) 855-1354 FAX: (+27 21) 855-2807 -------------------------------------- P.O.Box 5098 Helderberg, 7135. South Africa --------------------------------------Article: 8412
Carmen Baena Oliva wrote: > > I'm trying to obtain a combinational multiplier using a Xilinx FPGA. > Can anybody give me some references about good structures for the > multiplier? > > Thanks in advance. > Carmen For the 4-LUT based Xilinx FPGA, your best bet is probably to produce a 1x, 2x (1x shifted) and 3x copy of one input. Then for every 2 bits in the other input, select one of these or zero to get a 2xN partial product. Then add all the partial products together with the appropriate alignment of the radix point to obtain the result. The adding is done in a tree structure to minimize the prop delays. I think the Xilinx DSP LogiCore tools use this approach for their multipliers. You can get the tools from Xilinx. -Ray Andraka, P.E. President, the Andraka Consulting Group, Inc. 401/884-7930 Fax 401/884-7950 email randraka@ids.net http://users.ids.net/~randrakaArticle: 8413
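A minimal combinational sketch of the scheme just described, for an 8x8 multiply (the module name, widths and the exact grouping of the adder tree are assumptions, not taken from the Xilinx LogiCore implementation):

module mult8x8_pp2 (
    input  wire [7:0]  a,
    input  wire [7:0]  b,
    output wire [15:0] p
);
    // 1x, 2x (1x shifted) and 3x (= 1x + 2x) copies of one operand
    wire [9:0] x1 = {2'b00, a};
    wire [9:0] x2 = {1'b0, a, 1'b0};
    wire [9:0] x3 = x1 + x2;

    // Each 2-bit slice of b picks 0, 1x, 2x or 3x as a 2xN partial product
    wire [9:0] pp0 = b[1] ? (b[0] ? x3 : x2) : (b[0] ? x1 : 10'd0);
    wire [9:0] pp1 = b[3] ? (b[2] ? x3 : x2) : (b[2] ? x1 : 10'd0);
    wire [9:0] pp2 = b[5] ? (b[4] ? x3 : x2) : (b[4] ? x1 : 10'd0);
    wire [9:0] pp3 = b[7] ? (b[6] ? x3 : x2) : (b[6] ? x1 : 10'd0);

    // Add the aligned partial products in a small tree to limit adder depth
    wire [15:0] s0 = {6'd0, pp0} + {4'd0, pp1, 2'b00};
    wire [15:0] s1 = {2'd0, pp2, 4'd0} + {pp3, 6'd0};
    assign p = s0 + s1;
endmodule

For wider operands the same pattern just adds more partial products and another level or two to the adder tree; pipeline registers between tree levels are the usual next step if the combinational delay is too long.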
In article <3491bc91.80619134@news.walltech.com>, muzok@pacbell.net writes... >hi, >I have a design which uses %55 of a 10K100 and within this design I have 12 8 bit >registers which need to be selectively loaded into another register, IOW, I have >an 8 bit bus from which 12 8 bit registers hang. Because 10K doesn't have >internal tristates, I have a big mux which selects the register. Currently this >mux is the bottleneck in my design which limits my speed to 11 MHz in a -3 part. >Any ideas how to increase the speed in this design? The design is in Verilog. > >Are there any ways to get more performance by using the floor plan tool ? I am >already using FAST synthesis and maximum speed in the compiler options. I have done my programming in AHDL, not Verilog, but I assume you have a way to access the LCELL and CASCADE primitives through Verilog. What I would do is write out the multiplexor levels explicitly. Use the LSB select bit to multiplex from 12 sets of lines to 6 sets with LCELLs on the output. Repeat this with the 2nd LSB to get down to 3 sets of 8 lines, again with LCELLs on the output. Finally, take the 2 MSB select bits along with 2 of the 3 remaining sets of 8 lines and cascade the 8 outputs into the last 8 lcells with the remaining set of 8 lines and select bits. This assures that any signal only has to propagate through 3 levels of LCELLs plus a small extra delay for the cascade. (I don't recommend trying to get down to 1 level of LCELLs using the cascade primitive as you may have fitting problems.) If this does not buy you enough speed, you might be able to convert the middle level of LCELLs to DFFs (an extra pipeline stage). Daniel Lang dbl@hydra0.caltech.eduArticle: 8414
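A sketch along the lines of the explicit layering described above (port names, the flattened register bus, and placing the pipeline register after the second level are all assumptions; mapping onto LCELL/CASCADE primitives is left to the fitter):

module mux12x8_pipe (
    input  wire        clk,
    input  wire [3:0]  sel,        // 0..11, assumed stable across the 2-cycle latency
    input  wire [95:0] regs_flat,  // 12 byte-wide registers packed together
    output reg  [7:0]  q
);
    // Level 1: 12 -> 6, steered by sel[0]
    wire [7:0] l1 [0:5];
    genvar i;
    generate
        for (i = 0; i < 6; i = i + 1) begin : lvl1
            assign l1[i] = sel[0] ? regs_flat[(2*i+1)*8 +: 8]
                                  : regs_flat[(2*i)*8   +: 8];
        end
    endgenerate

    // Level 2: 6 -> 3, steered by sel[1], registered to break the critical path
    reg [7:0] l2 [0:2];
    reg [1:0] sel_hi_q;
    always @(posedge clk) begin
        l2[0]    <= sel[1] ? l1[1] : l1[0];
        l2[1]    <= sel[1] ? l1[3] : l1[2];
        l2[2]    <= sel[1] ? l1[5] : l1[4];
        sel_hi_q <= sel[3:2];
    end

    // Level 3: 3 -> 1, steered by the delayed upper select bits
    always @(posedge clk)
        q <= (sel_hi_q == 2'd0) ? l2[0] :
             (sel_hi_q == 2'd1) ? l2[1] : l2[2];
endmodule

Dropping the middle register gives the purely combinational three-level version; either way no level has to fan in more than a handful of inputs.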
On Fri, 12 Dec 1997 22:43:10 GMT, muzok@pacbell.net (muzo) wrote: >I have a design which uses %55 of a 10K100 and within this design I have 12 8 bit >registers which need to be selectively loaded into another register, IOW, I have >an 8 bit bus from which 12 8 bit registers hang. Because 10K doesn't have >internal tristates, I have a big mux which selects the register. Currently this >mux is the bottleneck in my design which limits my speed to 11 MHz in a -3 part. What speed do you need? >Any ideas how to increase the speed in this design? The design is in Verilog. So am I right in saying that a 12 to 1 multiplexer, register to register (hence, including routing) takes 90 ns? As Altera claims a 16 to 1 mux takes 9.4 ns in a -3 speed grade, there are three possibilities:
1. You need a new synthesis tool (which are you using?)
2. There's more than the mux in your critical path (how many layers of logic and routing in the path?)
3. Altera isn't up to it, has a "printing error" in its data sheets, and you need a different FPGA.
1 & 2 seem more probable. Stuart -- For Email remove "die.spammer." from the addressArticle: 8415
Several times by E-mail I have been asked the following, which I quote from the last person who asked, without attribution since he E-mailed instead of posting >>I've been following your arguements and I'm >>mostly convinced, but I've also seen the arguement >>that if all it took was noise, then surely the >>manufacturere would be adding noise. If you've >>already addressed this point, sorry, I may >>have missed it. There have been a lot of >>articles about this subject, most, like yours >>have been interesting. This is a good question, and of course the answer is they are! Someone in an earlier post stated that things had improved by thirteen orders of magnitude relative to some old logic. (Sorry, wish I could remember who.) Many have said that increases in gain are responsible for this. Sure, increases in gain definitely help, and decreasing channel length leads to increased gain in a FET, but decreased channel width leads to decreased gain. How much has gain improved? Someone out there should have hard numbers; I don't. I don't dispute that that makes a big difference, but I would stipulate that a far bigger difference is made by the reduced capacitance of the modern circuits. This increases the slew rate of the circuit (without necessarily improving gain but only GBW), allowing it to move out of the metastable region faster for smaller perturbations. In addition, it also has the following interesting effect. The expected noise energy on any capacitor will be at least kT/2, where k is the Boltzmann constant. This leads to the equation (1/2)CV^2 = kT/2. We can solve this for V, finding |V| = sqrt(kT/C). Boltzmann's constant has a value of 1.4E-23 J/K. At a temperature of 300K, kT is 4.2E-21 J, or 4.2E-21 Farad Volt^2. This leads to: |Vnoise| = 65nV/sqrt(C in microfarads) = 2mV/sqrt(C in femtofarads). The noise will randomly fluctuate positive and negative. The time that this takes to happen will be determined by the impedance of the network it is connected to and the effective RC time constant. For a 10 femtofarad capacitor sitting unconnected, we only have the leakage resistance, which might be 10^15 ohms, leading to fluctuation times of about 10 seconds. For a femtofarad capacitor hooked up in a 100 ohm circuit the fluctuation times would be about 100 fs. This is quite a range, and I bring it up to point out that calculating the noise voltage on a capacitor without considering bandwidth is usually not useful. Now perhaps someone could post what the typical gate+interconnect capacitances in a modern process are, as well as the drive+interconnect resistance. For the sake of example I will assume they are something like 100fF and 100 ohm. This leads to a noise voltage of 200uV and a characteristic fluctuation time of 10ps. If an older process had 1000fF and 1000 ohm (how high was the polysilicon resistance on a longish run?) this would lead to 63uV of noise and a fluctuation time of 1ns. A final important fact is that as the input+stray capacitance goes down, leading to faster fluctuation noise, the speed of the circuit goes up as well, assuming this is the dominant load on the output or that the other loads on the output reduce capacitance at the same rate. So the circuit is able to track these faster fluctuations. To sum up: lower capacitance --> higher noise and faster noise and a circuit that can track the faster noise --> quicker expulsion from the metastable region. Secondly, of course, if you just inject noise without increasing slew rate, a number of bad things happen, particularly an increase in jitter. 
But basically, except for the metastable problem everything I can think of gets worse with increased noise, so one would want to study the tradeoffs of doing it carefully. As a postscript it is enlightening to consider the following post: >rk wrote: >> >>here are some calcuations i did using chip express cx2001 technology, >> based >> on their flip-flop parameters and example in the CX Technology Design >> Manual, so i could see how this technology performs. The CX2001 >> series >> uses a channelled module architecture (gate array) with each module >> consisting of three 2:1 muxes and an AND gate (a bit differently set >> up >> than Act 1 but not all that dissimilar); there are no hardwired >> flip-flops, >> as there is in the xilinx devices which peter a. previously talked >> about. >> I assumed a 50 MHz clock, a 10 MHz average incoming data rate, and >> made the >> extra settling time a paramter. here's what i got: >> >> extra delay mtbf mtbf >> (nsec) (years) (years) >> >> 1.0000e+0 448.25e-6 14.214e-12 >> 2.0000e+0 180.84e-3 5.7344e-9 >> 3.0000e+0 72.956e+0 2.3134e-6 >> 4.0000e+0 29.432e+3 933.29e-6 >> 5.0000e+0 11.874e+6 376.52e-3 >> 6.0000e+0 4.7903e+9 151.90e+0 >> 7.0000e+0 1.9325e+12 61.280e+3 >> >Correction: The center column must be mtbf in seconds, not years. > >My opinion: >Interesting results, dramatically showing the superiority of dedicated, >tightly-designed and laid out flip-flops. >Under the above conditions with the product of the two frequencies 500 >times higher than in the Xilinx data book ( page 13-43 ) , just 2 ns of >extra allowable resolution time give an MTBF of about 20,000 years in >the 2-year old XC4005E-3. The gate array with its, -excuse the harsh >word-, "spread-out" flip-flops, takes almost 7 ns to achieve the same >MTBF. > >There is no substitute for high gain-bandwidth in the feedback path of >the master latch, if you want to resolve metastability reasonably fast. > >Peter Alfke, Xilinx Applications I think the increased noise of a low capacitance layout helps too. ChuckArticle: 8416
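The kT/C figures above can be compressed into one line (C = 100 fF and R = 100 ohm are the assumed example values from the post, not measurements):

  \tfrac{1}{2} C \,\overline{V_n^2} = \tfrac{1}{2} kT
  \;\Rightarrow\;
  V_{n,\mathrm{rms}} = \sqrt{\frac{kT}{C}} \approx \frac{2\,\mathrm{mV}}{\sqrt{C\ \mathrm{in\ fF}}},
  \qquad \tau \approx R\,C .

With C = 100 fF and R = 100 ohm this gives V_rms of about 200 uV and tau of about 10 ps, matching the worked figures in the post.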
John Woodgate wrote: > In article <3491BE33.36C5A13B@CatenaryScientific.com>, Chuck Parsons > <chuck@CatenaryScientific.com> writes > > > As I read your paragraph, two things I've written are intermeshed. One is > >the idea that in general integrating random displacements commonly has > >a non-zero expected result. An example is take a resistor to ground > >and place a integrator on the other end which integrates the noise current > >onto a capacitor. Now clearly noise currents average to a DC zero. > >However the expected magnitude of the output voltage of the integrator > >grows proportionally to sqrt(time). This is a classic random walk problem. > >The most likely place to be at any later time T is still 0 and the probability > >of having a negative or positive result is equal, but the average value > >of V(t)^2 will be found to be proportional to t and hence the average > >magnitude of V(t) is proportional to sqrt(t). > > No, I can't go along with that. The circuit you describe is > (theoretically, anyway) linear, and cannot produce what is effectively a > growing d.c. voltage from the noise. Your math reasoning does not take > into account that the sqrt function is two-valued. What grows > proportionally to sqrt(t) is not sqrt(v^2), which can be either positive > or negative, but |sqrt(V^2)|, which is positive. > Yes it does, that is why I said "magnitude of V(t)". In fact, you don't know whether the output will be positive or negative, but you can calculate that on average the _magnitude_ of the difference from the initial voltage grows with time as I described. One can, for instance, calculate the statistically average time it takes the integrator to peg against its output range limits. If you use higher supply limits and double the output range of the integrator, it will take on average 4 times as long before the integrator pegs and stops working. If we start in the middle of the range, and any input bias current is perfectly nulled, 50% of the time the integrator will peg negative and 50% will peg positive, but it will peg in both cases. And if you reset the circuits 1000 times and calculate the average time to pegging, the latter circuit will take four times as long on average to peg. The situation is quite similar to two school children matching pennies. If they each start out with 100 pennies and don't cheat, eventually one or the other will run out of pennies. Obviously it is impossible to happen in less than 100 matches. But in fact it is very unlikely to happen in 400 matches. Mathematically one can calculate how long on average this takes; for equal stakes of 100 pennies each it works out to 100 x 100 = 10,000 matches on average. The probabilistic question here is: "What is the average number of matches until the difference in wins equals 100?" A closely related problem I do remember the answer to is "What is the probability distribution of winnings and losses after N matches?" The answer is it is gaussian distributed with a sigma equal to the square root of N. In fact the exact answer is binomial, not gaussian, but for large N the two functions converge very quickly. After 10,000 matches the distribution of expected winnings (losses) will be almost perfectly gaussian with a sigma of 100. After 40,000 matches the distribution will have a sigma of 200. If each school child has 200 pennies it takes 4 times as long on average though. We find that the expected magnitude of the difference grows as the square root of the number of matches. Just as for the integrator I described, where the expected deviation of the output from the starting point (breaking even) grows with the square root of time.Article: 8417
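The penny-matching version is the classical symmetric gambler's-ruin problem, for which the standard results are (a sketch, fair coin assumed):

  \sigma(N) = \sqrt{N}\ \text{pennies after}\ N\ \text{matches}, \qquad
  E[T_{\mathrm{ruin}}] = a\,b = 100 \times 100 = 10{,}000\ \text{matches}

for stakes of a and b pennies; doubling both stakes quadruples the expected duration, which is the same square-root scaling being claimed for the integrator.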
We just bought the Xilinx development software. However, we found that the copy protection works differently from the past. We were expecting to be able to install the software on several workstations and share a dongle. But now they protect the licence by keying the software to the serial number on your disk drive. I have been told that this serial number can be changed so that we can once again let each engineer work at his own desk. Does anyone know how to do this? Can you point me to a source of this information? Rick Collins rickman@erols.comArticle: 8418
In article <3492C41F.67A722C5@CatenaryScientific.com>, Chuck Parsons <chuck@CatenaryScientific.com> writes > > Yes it does, that is why I said "magnitude of V(t)". In fact, you don't know >whether the output will be positive or negative, but you can calculate that on >average the _magnitude_ of the difference from the initial voltage grows with >time >as I described. One can for instance calculate the statistically average time >it >takes the integrator to peg against it output range limits. If you use >higher supply limits and double the output range of the the integrator. It will >take >on average 4 times as long before the integrator pegs and stops working. If we >start >in the middle of the range, and any input bias current is perfectly nulled , 50% >of >the time the integrator will peg negative and 50% will peg positive, but it in >both >cases. But if you reset the circuits 1000 times and calculate the average >time to pegging the later circuit will take four times as long on average to >peg. I don't think I've quite grasped why you have brought in this doubling of the number of pennies. > > The situation is quite similar to two school children matching pennies. If >they >each start out with 100 pennies and don't cheat. Eventually one or the other >will >run out of pennies. Well, that is clearly not inevitable, AFAICS. An infinite exchange can be envisaged. >Obviously it is impossible to happen in less than 100 >matches. >But in fact it is very unlikely to happen in 400 matches. Mathematically one can >calculate how long on average this takes. I'm sorry, but I don't remember the >precise >answer but it is considerably less than 10,000 matches. The probabilistic >question >here is: "What is the average number of matches until the difference in wins >equals >100". A closely related problem I do remember the answer to is "What is the >probability distribution of winnings and losses after N matches?" The answer is >it is >gaussian distributed with a sigma equal to the square root of N. In fact the >exact >answer is binomial not gaussian, but for large N the two functions converge very >quickly. After 10,000 matches the distribution of expected winnings (losses) >will >be almost perfectly gaussian with a sigma of 100. After 40,000 matches the >distribution will have a sigma of 200. If each school child has 200 pennies it >takes >4 times as long on average though. We find that the expected magnitude of the >grows >as the square root of the number of matches. Just as for the integrator I >described >were the expected deviation of the output from the starting point (breaking >even) >grows with the square root of time. > I won't argue with your reasoning: questions of probability often have counter-intuitive answers. However, looking at the subject another way, the resistor/integrator device seems to be able to extract noise energy, and that means thermal energy, from the resistor and store a net amount of it in the capacitor. What happens then to the temperature of the resistor? What is wrong with this crazy result? -- Regards, John Woodgate, Phone +44 (0)1268 747839 Fax +44 (0)1268 777124. OOO - Own Opinions Only. It is useless to threaten a strong man - he will ignore you. It is dangerous to threaten a weak man - he will kill you if he can.Article: 8419
See the info on http://www.xilinx.com/techdocs/940.htm. This, plus the Xilinx documentation, should get you up to speed. ----------------------------------------------------------- Steven K. Knapp OptiMagic, Inc. -- "Great Designs Happen 'OptiMagic'-ally" E-mail: sknapp@optimagic.com Web: http://www.optimagic.com ----------------------------------------------------------- Peter Fenn wrote in message <66ssic$5pm$1@news01.iafrica.com>... >JTAG configuration of Xilinx XC4000E FPGAs? > >Urgent! I need to do JTAG configuration of a single FPGA (Xilinx XC4013E). >Can anyone advise/supply 3rd-party software to do this? > >Xilinx documentation details that it's indeed possible to configure a XC4000 >FPGA via its JTAG interface, but Xilinx does not extend their Xchecker >software to support of this. Any suggestions ? > >Thanks- > >Peter Fenn >-------------------------------------- > "CodeLogic" >Digital & Software Design Services >-------------------------------------- > PeteFenn@iafrica.com > TEL: (+27 21) 855-1354 > FAX: (+27 21) 855-2807 >-------------------------------------- > P.O.Box 5098 > Helderberg, 7135. > South Africa >-------------------------------------- > > > >Article: 8420
The following is from Xilinx' Answer Database at http://www.xilinx.com/techdocs/940.htm

--- BEGIN ---

General Description: With the exception of the XC3000 family, it is possible to configure XC4000 family devices and XC5200 devices via the boundary scan pins TMS, TCK, TDI, & TDO. This solution applies to the XC4000 (including XC4000EX) family and the XC5200 family of devices. NOTE: This solution record is the document on configuring an XC4000 or XC5200 FPGA via the JTAG TAP. Information in this solution record supersedes all other documents on FPGA JTAG configuration.

Solution 1: *NOTE* To understand this solution, you must have an understanding of JTAG/Boundary Scan. This solution applies to the XC4000 family and XC5200 family of devices.

CONFIGURE - Steps to follow to configure a Xilinx XC4000 or XC5200 via JTAG: The bitstream format is identical for all configuration modes. A user can use a .bit file or a .rbt file, depending on whether the user wants to read a binary file (.bit) or an ascii file (.rbt). Express mode bitstreams cannot be used in configuring via boundary scan. Xilinx also recommends that the mode pins of the device be tied low before starting the configuration.

1. Turn `on' the boundary scan circuitry. This can be done one of three ways: via powerup, via a configured device with boundary scan enabled, or by pulling the /PROGRAM pin low. If you want to do this via powerup, then just hold the INIT pin low when power is turned on. When VCC has reached VCC(min), the TAP can be toggled to enter JTAG instructions. The INIT pin can be held low one of two ways, either manually or with a pulldown. If you choose to manually hold INIT low, then the INIT pin must be held low until the CONFIGURE instruction is the current instruction. If you choose a pulldown, use a pulldown which pulls the INIT pin down to approximately 0.5V. The pulldown has the merit of holding INIT low whenever the FPGA is powered up, and letting the user `see' an attenuated INIT pin during configuration. After the FPGA has been configured, if you want to reconfigure a configured device that has boundary scan enabled after configuration, then just start toggling the boundary scan TAP pins.

2. Load the Xilinx CONFIGURE instruction into the IR. The Xilinx CONFIGURE instruction is 101 (I2 I1 I0). I0 is the bit shifted first into the IR.

3. After shifting in the Xilinx CONFIGURE instruction, make the CONFIGURE instruction the current JTAG instruction by going to the update-IR state. When TCK goes low in the update-IR state, the FPGA is now in the JTAG configuration mode and will start clearing the configuration memory; the CONFIGURE instruction is now the current instruction, which must be followed by a rising edge on TCK. If you chose to manually hold the INIT pin low, then the INIT pin must be held low until the CONFIGURE instruction is the current instruction.

4. Once the Xilinx CONFIGURE instruction has been made the current instruction, the user must go to the run-test/idle state, and remain in the run-test/idle state until the FPGA has finished clearing its configuration memory. The approximate time it takes to clear an FPGA's configuration memory is: 2 * 1 us * (# of frames per device bitstream). When the FPGA has finished clearing its configuration memory, the open-collector INIT has gone high impedance. At this point, the user should advance to the shift-dr state. Once the TAP is in the shift-dr state and the INIT pin has been released, clocks on the TCK pin will be considered configuration clocks for data and length count.

5. In the shift-DR state, start shifting in the bitstream. Continue shifting in the bitstream until DONE has gone high and the startup sequence has finished. During the time you are shifting in the bitstream via the TAP, the configuration pins LDC, HDC, INIT, PROGRAM, DOUT, and DONE all function as they normally do during non-JTAG configuration. These pins can be probed by the user, or, after completion of configuration or if configuration failed, the SAMPLE/PRELOAD instruction can be used to view these IOBs (except PROGRAM or DONE). LDC is low during configuration. HDC is high during configuration. INIT will be high impedance during configuration, but if a CRC error or frame error is detected, INIT will go low; if a pulldown is present on INIT then the user must probe INIT with a meter or scope; with a pulldown (as in step 1) attached to the INIT pin, the user will see a drop from approximately 0.5V to 0V if INIT drops low to indicate a data error. PROGRAM can still be used to abort the configuration process. DOUT and TDO will echo TDI until the preamble and length count are shifted into TDI; when the preamble and length count have been shifted into the FPGA, DOUT will remain high. DONE will go high when configuration is finished. Until configuration is finished, DONE will remain low.

Some Additional Notes:

(a) It is possible to configure several 4K and/or 5K devices in a JTAG chain. But unlike non-JTAG daisy-chain configuration, this doesn't necessarily mean merging all the bitstreams into one bitstream. In the case of JTAG configuration of Xilinx devices in a JTAG chain, all devices except the one being configured will be placed in BYPASS mode. The one device in CONFIGURE will have its bitstream downloaded to it. After configuring this device it will be placed in BYPASS, and another device will be taken out of BYPASS into CONFIGURE.

(b) If you are configuring a 'long' daisy-chain of JTAG devices (TDI connected to TDO of the previous device), the bitstream for the device with the CONFIGURE instruction may need to be modified. For example, let's say that a user has the following daisy-chain of devices: device1---device2---device3. Device1's TDO pin is connected to device2's TDI pin; device2's TDO pin is connected to device3's TDI pin. The way to configure this chain is to place one device in CONFIGURE, and the other two in BYPASS. Let's further assume that device1 and device2 configure this way, but device3 never configures. Specifically, device3's DONE pin never goes high; the problem is the bitstream length count. A possible cause, aside from bitstream corruption, is that the final value of the length count computed by the user/software was reached before the loading was complete. There are two solutions. One solution involves just continually clocking TCK (for about 15 seconds) until DONE goes high. The other solution is to modify the bitstream: increase the length count by the number of devices ahead of the device under configuration. For example, in the test case above, the user would increase the length count value by 2. (In a daisy-chain of devices configuring via boundary scan, devices in BYPASS will supply the extra 1's needed at the head of the bitstream.)

(c) In general for the XC4000 and XC5200, if you are configuring these devices via JTAG, finish configuring the device first before executing any other JTAG instructions; once configuration through boundary scan is started, the configuration operation must be finished.

(d) If boundary scan is not included in the design being configured, then make sure that the release of I/Os is the last event in the startup sequence. If boundary scan is not available, the FPGA is configured, and the I/Os are released before the startup sequence is finished, the FPGA will not respond to input signals and outputs won't respond at all.

(e) Re-issuing a boundary scan CONFIGURE instruction after the clearing of configuration memory will cancel the CONFIGURE instruction. The proper method of re-issuing a CONFIGURE instruction after the configuration memory is cleared is to issue another boundary scan instruction, and then follow it with the CONFIGURE instruction.

(f) If configuration through boundary scan fails, there are only two boundary scan instructions available: SAMPLE/PRELOAD and BYPASS. If another reconfiguration is to be attempted, then the PROGRAM pin must be pulled low, or the FPGA must be repowered.

(g) When the CONFIGURE instruction is the current instruction, clocks on the TCK pin are not considered configuration clocks until the /INIT pin has gone high impedance and the TAP is in the shift-dr state.

(h) If the user is attempting to configure a chain of devices, it is recommended that the user only configure the chain in all boundary scan mode, or use the non-boundary scan configuration modes. It is possible to configure a daisy-chain of devices, some in boundary scan and some in non-boundary scan configuration. Configuring in a mixed mode will not necessarily give the user a continuous boundary scan chain, which may or may not be a problem for a particular user's applications.

(j) Currently, there is no software to configure a Xilinx FPGA via the boundary scan pins. The user must provide this. NOTE: The intention of configuration for a daisy-chain was to use either all the devices in boundary scan, or all the devices in non-boundary scan configuration.

(k) When configuring a chain of Xilinx FPGAs via boundary scan, this doesn't require merging all the bitstreams into one bitstream, as in non-boundary scan configuration daisy-chains. When the FPGA is in boundary scan configuration, the same configuration circuitry used for non-boundary scan configuration is used. So, if a user would like, it is possible to merge all bitstreams into one bitstream, using the PROM Formatter or makeprom/promgen. In a case where the user wants to merge the bitstreams into one bitstream, the user should configure as in note (a) above. Additionally, the user will have to tie all /INIT pins together. All DONE pins will also have to be tied together. NOTE: The intention of configuration for a daisy-chain was to use either all the devices in boundary scan, or all the devices in non-boundary scan configuration.

-- END --

----------------------------------------------------------- Steven K. Knapp OptiMagic, Inc. -- "Great Designs Happen 'OptiMagic'-ally" E-mail: sknapp@optimagic.com Web: http://www.optimagic.com -----------------------------------------------------------Article: 8421
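A testbench-style sketch of the TAP sequence in steps 1-5 above (the module and task names, the clock period, and the fixed wait count are assumptions made for illustration; the 3-bit CONFIGURE opcode 101, shifted I0 first, is the one given in the record):

module jtag_cfg_sketch;
    reg tck = 1'b0;
    reg tms = 1'b1;
    reg tdi = 1'b1;

    // One TCK period with the given TMS/TDI values (assumed 100 time units)
    task tick(input m, input d);
        begin
            tms = m; tdi = d;
            #50 tck = 1'b1;
            #50 tck = 1'b0;
        end
    endtask

    reg [2:0] cfg_instr = 3'b101;   // CONFIGURE, written I2 I1 I0
    integer i;

    initial begin
        // Force Test-Logic-Reset, then walk to Shift-IR
        repeat (5) tick(1'b1, 1'b1);
        tick(1'b0, 1'b1);            // Run-Test/Idle
        tick(1'b1, 1'b1);            // Select-DR-Scan
        tick(1'b1, 1'b1);            // Select-IR-Scan
        tick(1'b0, 1'b1);            // Capture-IR
        tick(1'b0, 1'b1);            // Shift-IR

        // Shift the 3-bit CONFIGURE opcode, I0 first; TMS high on the last bit
        for (i = 0; i < 3; i = i + 1)
            tick(i == 2, cfg_instr[i]);
        tick(1'b1, 1'b1);            // Update-IR: CONFIGURE is now current

        // Sit in Run-Test/Idle while the configuration memory clears
        // (a real controller would watch INIT instead of a fixed count)
        tick(1'b0, 1'b1);
        repeat (20000) tick(1'b0, 1'b1);

        // Move to Shift-DR and clock the .bit/.rbt data in on TDI with TMS = 0
        tick(1'b1, 1'b1);            // Select-DR-Scan
        tick(1'b0, 1'b1);            // Capture-DR
        tick(1'b0, 1'b1);            // Shift-DR
        // ... shift bitstream bits here until DONE goes high ...
    end
endmodule

The same TMS/TDI sequence applies whether the host is a testbench, a microcontroller bit-banging the port, or a download cable driver.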
>I have heard that the code in the article was deliberately incomplete, >per an agreement with Zilog. They (Zilog) didn't want people to be >able to reproduce the Z80 core. Can't say that I'd blame them. Correct. EDN said as much at the time. This was done in VHDL. This was a pity, since IMHO anyone can just sit down and design a Z80-opcode-compatible processor and publish it, perfectly legally. I was told by Zilog that this was what NEC did with the UPD780 - they got no help whatever from Zilog. And now Toshiba does some Z80-compatible micros too. And such a design, published for all, would have been truly useful, to anyone doing an ASIC and looking for a decent CPU (and I don't mean a 6805 etc) to go inside it. Regarding the subject header: doing a Z80 in an FPGA would be unbelievably inefficient and expensive. FPGAs are not efficient for decoder-rich logic like micros tend to be. And by the time one implemented all the opcode decoders using cascaded CLBs etc, the speed would not be that good, either. Just for a laugh, I recall someone in the late 1970s doing an 8080 emulator using ECL (as a PhD post-grad project), and this ran at 80MHz. Peter. Return address is invalid to help stop junk mail. E-mail replies to z80@digiXYZserve.com but remove the XYZ.Article: 8422
Whatever answers you get, you will be able to improve on them by an order of magnitude if you can gate clocks, or avoid clocking altogether. But gated clocking is dangerous, unless the design is carefully done to tolerate skews. Peter. Return address is invalid to help stop junk mail. E-mail replies to z80@digiXYZserve.com but remove the XYZ.Article: 8423
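For what it's worth, the usual low-risk alternative to a true gated clock in an FPGA is a clock enable on the flip-flops; the gated clock saves more power because the clock branch itself stops toggling, but as noted above it has to be designed to tolerate the extra skew. A sketch of the two styles (all names assumed):

module ce_vs_gated (
    input  wire       clk,
    input  wire       en,
    input  wire [7:0] d,
    output reg  [7:0] q_ce,
    output reg  [7:0] q_gated
);
    // Style 1: clock enable -- the global clock keeps running and the
    // register simply holds its value when en is low. Safe, but the
    // clock network still burns dynamic power every cycle.
    always @(posedge clk)
        if (en) q_ce <= d;

    // Style 2: gated clock -- stops the clock edge itself, so that
    // branch of the clock tree goes quiet, but the AND gate adds skew
    // and can glitch if en is not synchronous to clk.
    wire gclk = clk & en;
    always @(posedge gclk)
        q_gated <= d;
endmodule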
John Woodgate wrote: > I won't argue with your reasoning: questions of probability often have > counter-intuitive answers. However, looking at the subject another way, > the resistor/integrator device seems to be able to extract noise energy, > and that means thermal energy, from the resistor and store a net amount > of it in the capacitor. What happens then to the temperature of the > resistor? What is wrong with this crazy result? Well, let's not get sidetracked on that. John, I think you may be trying to exhaust me ;-) The input node of an amp (as discussed in the thread on Johnson noise) can have a lower than thermal temperature than a resistor, and yes, heat flows from the resistor to the amp. It is a very small amount, similar in magnitude to the thermal conductivity of a single atom in a tube, because there is only one degree of freedom. Anyone wanting to know more about this, please ask Kendall ;-) But John, you know this energy is not what is being stored on the capacitor. The energy to charge the capacitor comes from the amplifier power supplies, at least the vast bulk of it. Just as in an amplifier, the energy from the microphone doesn't wind up in the speakers but drains down the base of a transistor to the supply rail. For instance, as the voltage on the capacitor grows, the power being stored on it goes as V*I, which is very different for large and small V. The thermal heat extraction from the resistor is basically constant. Well, it fluctuates based on chance, but it doesn't care what the voltage on the output of the integrator is.Article: 8424
Left out an important word. Chuck Parsons wrote: > John Woodgate wrote: > > > I won't argue with your reasoning: questions of probability often have > > counter-intuitive answers. However, looking at the subject another way, > > the resistor/integrator device seems to be able to extract noise energy, > > and that means thermal energy, from the resistor and store a net amount > > of it in the capacitor. What happens then to the temperature of the > > resistor? What is wrong with this crazy result? > > Well lets not get sidetracked on that. John, I think you may be trying to exhaust > me ;-) The input node of an amp (as discussed in the thread on Johnson noise) can > > have a lower than thermal **NOISE** temperature, than a resistor and yes heat flows > from the > > resistor to the amp. It is a very small amount similar in magnitude to the thermal > conductivity of a single atom in a tube, because there is only one degree of freedom. > Anyone, wanting to know more about this please ask Kendall ;-) > > But John, you know this energy is not what is being stored on the capacitor. The > energy to charge the capacitor comes from the amplifier power supplies, at least the > vast bulk of it. Just like in an amplifier the energy from the microphone doesn't > wind up in the speakers but drains down the base of a transistor to the supply rail. > For instance as the voltage on the capacitor grows the power being stored on it goes > as V*I which is very different for large and small V. The thermal heat extraction > from the resistor is basically constant. Well it fluctuates based on chance, but it > doesn't care what the voltage on the output of the integrator is.