In comp.arch.fpga John Larkin <jjlarkin@highlandtechnology.com> wrote: > We do have provision for adding a pin-fin heat sink and a fan directly > over the chip. Like this: > > https://dl.dropboxusercontent.com/u/53724080/Thermal/Uzed_Fan_Side.JPG > > Just yesterday someone was harassing me to make the fan speed software > controllable, so maybe I will. FWIW, we have a simple hardware control loop of the fan PWM. This is the source: https://github.com/CTSRD-CHERI/beri/blob/master/cherilibs/trunk/peripherals/FanControl/FanControl.bsv TheoArticle: 159851
On 11 Apr 2017 16:28:03 +0100 (BST), Theo Markettos <theom+news@chiark.greenend.org.uk> wrote: >In comp.arch.fpga John Larkin <jjlarkin@highlandtechnology.com> wrote: >> We do have provision for adding a pin-fin heat sink and a fan directly >> over the chip. Like this: >> >> https://dl.dropboxusercontent.com/u/53724080/Thermal/Uzed_Fan_Side.JPG >> >> Just yesterday someone was harassing me to make the fan speed software >> controllable, so maybe I will. > >FWIW, we have a simple hardware control loop of the fan PWM. This is the >source: >https://github.com/CTSRD-CHERI/beri/blob/master/cherilibs/trunk/peripherals/FanControl/FanControl.bsv > >Theo Looks to me, as far as I can read the code, like the control algorithm is an up/down counter that controls fan speed, and it's incremented or decremented by the temp being below/above the setpoint. Is that right? That's the algorithm that I have proposed. Fan speed changes will be slow and controlled, so there would be no acoustic drama. The min-to-max fan speed slew could take minutes. A cal table would include min fan voltage, max fan voltage, and the up/down increment, so tuning would be easy. -- John Larkin Highland Technology, Inc lunatic fringe electronicsArticle: 159852
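A minimal sketch of the up/down loop John describes, with hypothetical names standing in for the cal-table entries (min fan voltage, max fan voltage, up/down increment):

```python
def fan_step(setpoint_c, temp_c, drive_v, cal):
    """One control tick of the up/down fan loop.

    cal holds the cal-table entries: 'min_v' and 'max_v' bound the
    fan drive voltage, 'step_v' is the per-tick increment.
    """
    if temp_c > setpoint_c:
        drive_v += cal['step_v']   # too hot: nudge the fan up
    elif temp_c < setpoint_c:
        drive_v -= cal['step_v']   # too cool: nudge it down
    # clamp to the calibrated range so the fan never stalls
    return max(cal['min_v'], min(cal['max_v'], drive_v))
```

With a small step and a slow tick rate, the min-to-max slew takes minutes, which is what keeps the fan acoustically unobtrusive.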
On Monday, April 10, 2017 at 7:13:23 PM UTC-6, John Larkin wrote: > We have a ZYNQ whose predicted timing isn't meeting decent margins. > And we don't want a lot of output pin timing variation in real life. > > We can measure the chip temperature with the XADC thing. So, why not > make an on-chip heater? Use a PLL to clock a bunch of flops, and vary > the PLL output frequency to keep the chip temp roughly constant. I'm confused by the concept. Doesn't timing get *worse* as temp increases? How would a higher temperature help? By "output pin timing variation" do you mean that there are combinatorial output paths? I think the best bet is to stay as cool as possible and keep all outputs registered. If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line. I have used precision oscillators with built-in heaters. In that case, it's more important that the crystal stay at a constant temp than what the temp is. By making that temperature above the highest possible ambient temp, the heater can keep the crystal temp constant.Article: 159853
> https://dl.dropboxusercontent.com/u/53724080/Thermal/ESM_Ring_Oscillator.jpg > > The change in prop delay vs temp is fairly small. > That's more linear than I would've guessed. Is that the ambient temperature or junction temp?Article: 159854
In comp.arch.fpga John Larkin <jjlarkin@highlandtechnology.com> wrote: > Looks to me, as I can read the code, that the control algorithm is an > up/down counter that controls fan speed, and it's incremented or > decremented by the temp being below/above the setpoint. Is that right? That's right. Note that it's better to drive fans by PWM than with a linear voltage - at low voltages they can stall, while the PWM is enough to kick them into spinning. This also makes the whole system entirely digital (beyond the sensor), saving components. You just need one MOSFET. > That's the algorithm that I have proposed. Fan speed changes will be > slow and controlled, so there would be no acoustic drama. The > min-to-max fan speed slew could take minutes. A cal table would > include min fan voltage, max fan voltage, and the up/down increment, > so tuning would be easy. Min-to-max slew taking minutes seems like a bug not a feature - unless the junction temperature has a similar time constant. It is also worth reading the tach back from the fan. Fans die - and it's better for your system to shut down in a fault condition than continue melting because it hasn't noticed. (On a three-wire fan the tach gets chopped by the PWM and creates erroneous readings - four wire fans avoid that, or you can do tricks with your FPGA) TheoArticle: 159855
John Larkin wrote... > >There will be an overall box fan, and it is speed controlled. > >https://dl.dropboxusercontent.com/u/53724080/Circuits/Power/Fan_Regulator.jpg Here's my fan speed controller. Quite serious. https://www.dropbox.com/s/7gsrmb9uci1wdb9/RIS-764Gb_fan-speed-controller.JPG?dl=0 First there's an LM35 TO-220-package temp sensor mounted to the heat sink, amplify and offset its 10mV/deg signal by 11x, to generate a fan-speed voltage, present to a TC647 fan-speed PWM chip, add optional MOSFET for when using a non-PWM fan. E.g., cool, fan runs at 0%, ramps its speed over a 30 to 40 degree range, thereafter runs at 100%. TC647 chip senses stalled fan, makes error signal. -- Thanks, - WinArticle: 159856
On 11 Apr 2017 17:45:05 +0100 (BST), Theo Markettos <theom+news@chiark.greenend.org.uk> wrote: >In comp.arch.fpga John Larkin <jjlarkin@highlandtechnology.com> wrote: >> Looks to me, as I can read the code, that the control algorithm is an >> up/down counter that controls fan speed, and it's incremented or >> decremented by the temp being below/above the setpoint. Is that right? > >That's right. Note that it's better to drive fans by PWM than with a linear >voltage - at low voltages they can stall, while the PWM is enough to kick >them into spinning. > >This also makes the whole system entirely digital (beyond the sensor), >saving components. You just need one MOSFET. We have a nice fan in stock, and it really didn't like direct PWM drive. So we'd need an inductor, maybe drive it from a synchronous switcher or something. The linear thing should work; we will have a minimum voltage, to keep the fan spinning. This 24V fan starts spinning at about 7 volts, so we'd always give it more. > >> That's the algorithm that I have proposed. Fan speed changes will be >> slow and controlled, so there would be no acoustic drama. The >> min-to-max fan speed slew could take minutes. A cal table would >> include min fan voltage, max fan voltage, and the up/down increment, >> so tuning would be easy. > >Min-to-max slew taking minutes seems like a bug not a feature - unless the >junction temperature has a similar time constant. We just want to avoid acoustic drama. The box fan can have a slow slew rate. If we have a separate FPGA fan, that can be faster, since it won't be very audible from outside the box. All the params will be in a writable cal table, so we can play with things without recompiling. > >It is also worth reading the tach back from the fan. Fans die - and it's >better for your system to shut down in a fault condition than continue >melting because it hasn't noticed. The fan that we have doesn't have a tach, so we'll just look at temperatures. 
We have a thermistor on the PCB, and both FPGAs can report their die temp. > >(On a three-wire fan the tach gets chopped by the PWM and creates erroneous >readings - four wire fans avoid that, or you can do tricks with your FPGA) > >Theo -- John Larkin Highland Technology, Inc picosecond timing precision measurement jlarkin att highlandtechnology dott com http://www.highlandtechnology.comArticle: 159857
On 11 Apr 2017 09:52:20 -0700, Winfield Hill <hill@rowland.harvard.edu> wrote: >John Larkin wrote... >> >>There will be an overall box fan, and it is speed controlled. >> >>https://dl.dropboxusercontent.com/u/53724080/Circuits/Power/Fan_Regulator.jpg > > Here's my fan speed controller. Quite serious. >https://www.dropbox.com/s/7gsrmb9uci1wdb9/RIS-764Gb_fan-speed-controller.JPG?dl=0 > > First there's an LM35 TO-220-package temp sensor > mounted to the heat sink, amplify and offset its > 10mV/deg signal by 11x, to generate a fan-speed > voltage, present to a TC647 fan-speed PWM chip, > add optional MOSFET for when using a non-PWM fan. > E.g., cool, fan runs at 0%, ramps its speed over > a 30 to 40 degree range, thereafter runs at 100%. > TC647 chip senses stalled fan, makes error signal. That TO220 LM35 is nice. Our two FPGA die temps are only readable digitally. We have a thermistor on the PCB, digitized by the BIST analog mux thing. All the controls will be software. -- John Larkin Highland Technology, Inc picosecond timing precision measurement jlarkin att highlandtechnology dott com http://www.highlandtechnology.comArticle: 159858
On Mon, 10 Apr 2017 20:06:57 -0700, John Larkin <jjlarkin@highlandtechnology.com> wrote: >On Mon, 10 Apr 2017 22:15:50 -0400, krw@notreal.com wrote: > >>On Mon, 10 Apr 2017 18:13:13 -0700, John Larkin >><jjlarkin@highland_snip_technology.com> wrote: >> >>>We have a ZYNQ whose predicted timing isn't meeting decent margins. >>>And we don't want a lot of output pin timing variation in real life. >>> >>>We can measure the chip temperature with the XADC thing. So, why not >>>make an on-chip heater? Use a PLL to clock a bunch of flops, and vary >>>the PLL output frequency to keep the chip temp roughly constant. >> >>Why not? Don't bother with the output frequency, just vary the number >>of flops wiggling. > >That would work too. Maybe have a 2-bit heat control word, to get >coarse steps of power dissipation, 4 groups of flops. I suppose a >single on-off bit could be a simple bang-bang thermostat. > >The PLL thing would be elegant, proportional control of all the flops >in the distributed heater array. You can do the same thing with the flops. Use a shift register to enable flops in a "thermometer code" sort of thing. Too low - shift right. Wait. Still too low - shift right. Wait. Too high - shift left... There are all sorts of algorithms that can be built into spare flops. > >I'm thinking we could reduce the overall effect of ambient temp >changes by some healthy factor, 4:1 or 10:1 or something. Seems reasonable. IBM used to add heater chips for the same purpose (bipolar circuits run faster at high temperature).Article: 159859
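The thermometer-code scheme krw describes reduces to a bang-bang controller over banks of toggling flops. A sketch only; the bank counts and names are made up:

```python
def heater_step(setpoint_c, temp_c, banks_on, banks_total):
    """Thermometer-code heater: too cool, enable one more bank of
    toggling flops (shift right); too hot, disable one (shift left)."""
    if temp_c < setpoint_c and banks_on < banks_total:
        banks_on += 1   # more flops wiggling, more dissipation
    elif temp_c > setpoint_c and banks_on > 0:
        banks_on -= 1   # fewer flops, less dissipation
    return banks_on
```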
On Tue, 11 Apr 2017 21:09:52 -0400, krw@notreal.com wrote: >On Mon, 10 Apr 2017 20:06:57 -0700, John Larkin ><jjlarkin@highlandtechnology.com> wrote: > >>On Mon, 10 Apr 2017 22:15:50 -0400, krw@notreal.com wrote: >> >>>On Mon, 10 Apr 2017 18:13:13 -0700, John Larkin >>><jjlarkin@highland_snip_technology.com> wrote: >>> >>>>We have a ZYNQ whose predicted timing isn't meeting decent margins. >>>>And we don't want a lot of output pin timing variation in real life. >>>> >>>>We can measure the chip temperature with the XADC thing. So, why not >>>>make an on-chip heater? Use a PLL to clock a bunch of flops, and vary >>>>the PLL output frequency to keep the chip temp roughly constant. >>> >>>Why not? Don't bother with the output frequency, just vary the number >>>of flops wiggling. >> >>That would work too. Maybe have a 2-bit heat control word, to get >>coarse steps of power dissipation, 4 groups of flops. I suppose a >>single on-off bit could be a simple bang-bang thermostat. >> >>The PLL thing would be elegant, proportional control of all the flops >>in the distributed heater array. > >You can do the same thing with the flops. Use a shift register to >enable flops in a "thermometer code" sort of thing. Too low - shift >right. Wait. Still to low - shift right. Wait. Too high - shift >left... > >There are all sorts of algorithms that can be built into spare flops. >> >>I'm thinking we could reduce the overall effect of ambient temp >>changes by some healthy factor, 4:1 or 10:1 or something. > >Seems reasonable. IBM used to add heater chips for the same purpose >(bipolar circuits run faster at high temperature). CMOS is slower at high temps. Somewhere between about 1000 and 3000 PPM/K prop delay. -- John Larkin Highland Technology, Inc lunatic fringe electronicsArticle: 159860
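As a rough back-of-envelope for the tempco John quotes (the 10 ns path length and 40 K swing below are hypothetical, chosen only to show the scale):

```python
def delay_shift_ns(delay_ns, tempco_ppm_per_k, delta_k):
    """Absolute delay change for a given tempco and temperature swing."""
    return delay_ns * tempco_ppm_per_k * 1e-6 * delta_k

# a 10 ns clock-to-out path at 2000 ppm/K over a 40 K ambient swing
# moves by about 0.8 ns
```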
On Tue, 11 Apr 2017 19:26:01 -0700, John Larkin <jjlarkin@highlandtechnology.com> wrote: >On Tue, 11 Apr 2017 21:09:52 -0400, krw@notreal.com wrote: > >>On Mon, 10 Apr 2017 20:06:57 -0700, John Larkin >><jjlarkin@highlandtechnology.com> wrote: >> >>>On Mon, 10 Apr 2017 22:15:50 -0400, krw@notreal.com wrote: >>> >>>>On Mon, 10 Apr 2017 18:13:13 -0700, John Larkin >>>><jjlarkin@highland_snip_technology.com> wrote: >>>> >>>>>We have a ZYNQ whose predicted timing isn't meeting decent margins. >>>>>And we don't want a lot of output pin timing variation in real life. >>>>> >>>>>We can measure the chip temperature with the XADC thing. So, why not >>>>>make an on-chip heater? Use a PLL to clock a bunch of flops, and vary >>>>>the PLL output frequency to keep the chip temp roughly constant. >>>> >>>>Why not? Don't bother with the output frequency, just vary the number >>>>of flops wiggling. >>> >>>That would work too. Maybe have a 2-bit heat control word, to get >>>coarse steps of power dissipation, 4 groups of flops. I suppose a >>>single on-off bit could be a simple bang-bang thermostat. >>> >>>The PLL thing would be elegant, proportional control of all the flops >>>in the distributed heater array. >> >>You can do the same thing with the flops. Use a shift register to >>enable flops in a "thermometer code" sort of thing. Too low - shift >>right. Wait. Still to low - shift right. Wait. Too high - shift >>left... >> >>There are all sorts of algorithms that can be built into spare flops. >>> >>>I'm thinking we could reduce the overall effect of ambient temp >>>changes by some healthy factor, 4:1 or 10:1 or something. >> >>Seems reasonable. IBM used to add heater chips for the same purpose >>(bipolar circuits run faster at high temperature). > >CMOS is slower at high temps. Somewhere between about 1000 and 3000 >PPM/K prop delay. I understand but my point was that regulating temperature to control speed has been done. It's not a strange idea at all.Article: 159861
On Tue, 11 Apr 2017 09:29:03 -0700 (PDT), Kevin Neilson <kevin.neilson@xilinx.com> wrote: >On Monday, April 10, 2017 at 7:13:23 PM UTC-6, John Larkin wrote: >> We have a ZYNQ whose predicted timing isn't meeting decent margins. >> And we don't want a lot of output pin timing variation in real life. >> >> We can measure the chip temperature with the XADC thing. So, why not >> make an on-chip heater? Use a PLL to clock a bunch of flops, and vary >> the PLL output frequency to keep the chip temp roughly constant. > >I'm confused by the concept. Doesn't timing get *worse* as temp increases? Prop delays get slower. > How would a higher temperature help? High temperature is an unfortunate fact of life sometimes. I'm after constant temperature, to minimize delay variations as ambient temp and logic power dissipations change. > By "output pin timing variation" do you mean that there are combinatorial output paths? I think the best bet is to stay as cool as possible and keep all outputs registered. All our critical outputs are registered in the i/o cells. Xilinx tools report almost a 3:1 delay range from clock to outputs, over the full range of process, power supply, and temperature. Apparently the tools assume the max specified Vcc and temperature spreads for the part and don't let us tease out anything, or restrict the analysis to any narrower ranges. > If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line. Our output data-valid window is predicted by the tools to be very narrow relative to the clock period. We figure that controlling the temperature (and maybe controlling Vcc-core vs temperature) will open up the timing window. The final analysis will have to be experimental. We can't crank in a constant delay to fix anything; the problem is the predicted variation in delay. > >I have used precision oscillators with built-in heaters. 
> In that case, it's more important that the crystal stay at a constant temp than what the temp is. By making that temperature above the highest possible ambient temp, the heater can keep the crystal temp constant. That's the idea: keep the FPGA core near the max naturally-expected temperature, heat it up as needed, and that will reduce actual timing variations to below the worst-case predicted by the tools. I expect that the tools are grossly pessimistic. I sure hope so. -- John Larkin Highland Technology, Inc lunatic fringe electronicsArticle: 159862
On 4/11/2017 12:31 PM, Kevin Neilson wrote: > >> https://dl.dropboxusercontent.com/u/53724080/Thermal/ESM_Ring_Oscillator.jpg >> >> The change in prop delay vs temp is fairly small. >> > > That's more linear than I would've guessed. Is that the ambient temperature or junction temp? Even if it wasn't especially linear, the proportionality is based on absolute temperature in kelvins, so the non-linearity would not be terribly pronounced. That was part of the reason for the deflate-gate thing a couple of years ago. I remember that between the pressure being relative rather than absolute and the temperature being Celsius or Fahrenheit rather than Kelvin, the people here took some time to figure out that the reported pressures were easily explained by the difference in temperature between the locker rooms and the playing field. -- Rick CArticle: 159863
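Rick's point, that gauge pressure plus Celsius/Fahrenheit hides a simple absolute-units calculation, works out like this (Gay-Lussac's law at constant volume; the specific numbers below are illustrative, not the official game figures):

```python
def cooled_gauge_psi(gauge_psi, t_warm_k, t_cold_k, atm_psi=14.7):
    """Constant-volume gas law: convert to absolute pressure and
    kelvins, scale by the temperature ratio, convert back to gauge."""
    p_abs = gauge_psi + atm_psi
    return p_abs * (t_cold_k / t_warm_k) - atm_psi

# a ball at 12.5 psi gauge in a 294 K (~70 F) locker room reads about
# 11.5 psi gauge on a 283 K (~50 F) field, with no air removed
```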
On 4/11/2017 11:37 PM, John Larkin wrote: > On Tue, 11 Apr 2017 09:29:03 -0700 (PDT), Kevin Neilson > <kevin.neilson@xilinx.com> wrote: > >> On Monday, April 10, 2017 at 7:13:23 PM UTC-6, John Larkin wrote: >>> We have a ZYNQ whose predicted timing isn't meeting decent margins. >>> And we don't want a lot of output pin timing variation in real life. >>> >>> We can measure the chip temperature with the XADC thing. So, why not >>> make an on-chip heater? Use a PLL to clock a bunch of flops, and vary >>> the PLL output frequency to keep the chip temp roughly constant. >> >> I'm confused by the concept. Doesn't timing get *worse* as temp increases? > > Prop delays get slower. > >> How would a higher temperature help? > > High temperature is an unfortunate fact of life some times. I'm after > constant temperature, to minimize delay variations as ambient temp and > logic power dissipations change. > >> By "output pin timing variation" do you mean that there are combinatorial output paths? I think the best best is to stay as cool as possible and keep all outputs registered. > > All our critical outputs are registered in the i/o cells. Xilinx tools > report almost a 3:1 delay range from clock to outputs, over the full > range of process, power supply, and temperature. Apparently the tools > assume the max specified Vcc and temperature spreads for the part and > don't let us tease out anything, or restrict the analysis to any > narrower ranges. > > >> If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line. > > > Our output data-valid window is predicted by the tools to be very > narrow relative to the clock period. We figure that controlling the > temperature (and maybe controlling Vcc-core vs temperature) will open > up the timing window. The final analysis will have to be experimental. > > We can't crank in a constant delay to fix anything; the problem is the > predicted variation in delay. 
> >> >> I have used precision oscillators with built-in heaters. In that case, it's more important that the crystal stay at a constant temp than what the temp is. By making that temperature above the highest possible ambient temp, the heater can keep the crystal temp constant. > > That's the idea, keep the FPGA core near the max naturally-expected > temperature, heat it up as needed, and that will reduce actual timing > variations to below the worst-case predicted by the tools. > > I expect that the tools are grossly pessimistic. I sure hope so. The nature of designing synchronous logic is that you want to know the worst case delay so you can design to a constant period clock cycle. So the worst case is the design criteria. The timing analysis tools are naturally "pessimistic" in that sense. But that is intended so that the design process is a matter of getting all timing paths to meet the required timing rather than trying to compare delays on this path to delays on that path which would be a nightmare. When you need better timing on the I/Os, as you have done, the signals can be clocked in the IOB FFs which give the lowest variation in timing as well as the shortest delays from clock input to signal output. Typically I/O timing also needs to be designed for worst case as well because the need is to meet setup timing while hold timing is typically guaranteed by the spec on the I/Os. But if you are not doing synchronous design this may not be optimal. If you are trying to get a specific timing of an output edge, you may have to reclock the signals through discrete logic. -- Rick CArticle: 159864
Our biggest box takes about a kilowatt, which includes 70W for the fans. We build enough of them, running 24/7, to have worked out the total cost of ownership: running the box a little bit hotter reduces reliability a bit but saves enough electricity to make it worthwhile. ColinArticle: 159865
On Tue, 11 Apr 2017 09:31:20 -0700 (PDT), Kevin Neilson <kevin.neilson@xilinx.com> wrote: > >> https://dl.dropboxusercontent.com/u/53724080/Thermal/ESM_Ring_Oscillator.jpg >> >> The change in prop delay vs temp is fairly small. >> > >That's more linear than I would've guessed. Is that the ambient temperature or junction temp? Foil-sticky thermocouple on the top of the chip. It was an Altera Cyclone 3, clocked internally at 250 MHz. https://dl.dropboxusercontent.com/u/53724080/PCBs/ESM_rev_B.jpg The ring oscillator was divided internally before we counted it, by 16 as I recall. Newer chips tend to have an actual, fairly accurate, die temp sensor, which opens up complex schemes to control die temp, or measure it and tweak Vccint, or something. -- John Larkin Highland Technology, Inc lunatic fringe electronicsArticle: 159866
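For reference, recovering per-stage delay from a divided-down ring-oscillator count goes like this. The stage count and divider here are illustrative, not the actual values from John's design:

```python
def stage_delay_ns(counted_hz, divide_by, n_stages):
    """A ring oscillator of n_stages inverters oscillates with period
    2 * n_stages * t_stage; the counter sees f_osc / divide_by."""
    f_osc = counted_hz * divide_by       # undo the internal divider
    period_ns = 1e9 / f_osc              # oscillator period in ns
    return period_ns / (2 * n_stages)    # per-stage propagation delay

# e.g. a 1 MHz count through a divide-by-16 with 25 stages: 1.25 ns/stage
```

Tracking that count against the die-temp sensor gives the ppm/K tempco directly.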
On 4/11/2017 12:29 PM, Kevin Neilson wrote: > On Monday, April 10, 2017 at 7:13:23 PM UTC-6, John Larkin wrote: >> We have a ZYNQ whose predicted timing isn't meeting decent margins. >> And we don't want a lot of output pin timing variation in real life. >> >> We can measure the chip temperature with the XADC thing. So, why not >> make an on-chip heater? Use a PLL to clock a bunch of flops, and vary >> the PLL output frequency to keep the chip temp roughly constant. > > I'm confused by the concept. Doesn't timing get *worse* as temp increases? How would a higher temperature help? By "output pin timing variation" do you mean that there are combinatorial output paths? I think the best best is to stay as cool as possible and keep all outputs registered. If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line. > > I have used precision oscillators with built-in heaters. In that case, it's more important that the crystal stay at a constant temp than what the temp is. By making that temperature above the highest possible ambient temp, the heater can keep the crystal temp constant. That is exactly what John is talking about, except the heater will be on the FPGA itself. -- Rick CArticle: 159867
Hi, As part of a university research project, I am trying to convert the signal from an analog camera (1000TVL style) into a digital signal and save it to a file in H.264 format, all using SystemC. Note: I am just starting out with SystemC. Can someone help or guide me? Thank you.Article: 159868
Den onsdag den 12. april 2017 kl. 05.37.07 UTC+2 skrev John Larkin: > On Tue, 11 Apr 2017 09:29:03 -0700 (PDT), Kevin Neilson > <kevin.neilson@xilinx.com> wrote: > > >On Monday, April 10, 2017 at 7:13:23 PM UTC-6, John Larkin wrote: > >> We have a ZYNQ whose predicted timing isn't meeting decent margins. > >> And we don't want a lot of output pin timing variation in real life. > >> > >> We can measure the chip temperature with the XADC thing. So, why not > >> make an on-chip heater? Use a PLL to clock a bunch of flops, and vary > >> the PLL output frequency to keep the chip temp roughly constant. > > > >I'm confused by the concept. Doesn't timing get *worse* as temp increases? > > Prop delays get slower. > > > How would a higher temperature help? > > High temperature is an unfortunate fact of life some times. I'm after > constant temperature, to minimize delay variations as ambient temp and > logic power dissipations change. > > > By "output pin timing variation" do you mean that there are combinatorial output paths? I think the best best is to stay as cool as possible and keep all outputs registered. > > All our critical outputs are registered in the i/o cells. Xilinx tools > report almost a 3:1 delay range from clock to outputs, over the full > range of process, power supply, and temperature. Apparently the tools > assume the max specified Vcc and temperature spreads for the part and > don't let us tease out anything, or restrict the analysis to any > narrower ranges. > > > > If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line. > > > Our output data-valid window is predicted by the tools to be very > narrow relative to the clock period. We figure that controlling the > temperature (and maybe controlling Vcc-core vs temperature) will open > up the timing window. The final analysis will have to be experimental. 
> > We can't crank in a constant delay to fix anything; the problem is the > predicted variation in delay. > That is basically what the IDELAY/ODELAY blocks are for: you instantiate an IDELAYCTRL, feed it a ~200 MHz reference clock, and it uses that as a reference to reduce the effects of process, voltage, and temperature on the IODELAY.Article: 159869
On Wed, 12 Apr 2017 10:57:26 -0700, cmajdi wrote: > Hi In the context of a university research, I try to convert the signal > coming from an analog camera (1000tvl camera style) to obtain a digital > signal and save it in a file in format h264; All using SYSTEMC. > RQ: I start in systemc > > Someone can help me or guide me. thank you Step one: learn all you need about video. Step two: learn all you need about SystemC. Step three: put them together. Seriously, what other answer can someone give to such a general question? If you're capable of doing the job at all, this book should help with step one: <https://www.amazon.com/Video-Demystified-Handbook-Digital-Engineer/dp/0750683953> You may not find your particular camera's interface specification in there, but reading that book should help a lot in understanding what the camera's doing. A _really rough sketch_ of what you need to do is: * Synchronize to the incoming video. The camera will generate horizontal and vertical sync signals that you'll need to synchronize to with phase-locked loops. For best performance, you may want to have a dedicated analog pixel clock on the board that's not synthesized by the FPGA. * Sample the pixels at the right time. * Build frames in memory. (This ends the analog part.) * Convert those frames to the digital format of your choice. * Get them onto disk. Note that there are a LOT of options and tradeoffs involved with the "convert to digital" part -- mostly concerning what sort of compression you use and how good it is. I've seen this sort of thing done from scratch in commercial/military products. In that sort of environment I'd guess that it'd take a three- to six-man team about a year to get a prototype, and another six months to get into production. Getting a demonstration working on an eval board that only has to work at room temperature and with an expert running things should take a lot less effort. 
-- Tim Wescott Wescott Design Services http://www.wescottdesign.com I'm looking for work -- see my website!Article: 159870
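A very rough structural sketch of the frame-assembly step in Tim's outline (hypothetical callbacks; a real SystemC design would model these as clocked modules and channels rather than function calls):

```python
def capture_frame(sample_pixel, wait_hsync, wait_vsync, width, height):
    """Assemble one frame: wait for vertical sync, then for each line
    wait for horizontal sync and sample `width` pixels."""
    wait_vsync()                  # top of frame
    frame = []
    for _ in range(height):
        wait_hsync()              # start of line
        frame.append([sample_pixel() for _ in range(width)])
    return frame
```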
> > If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line. > > Our output data-valid window is predicted by the tools to be very narrow relative to the clock period. We figure that controlling the temperature (and maybe controlling Vcc-core vs temperature) will open up the timing window. The final analysis will have to be experimental. > > We can't crank in a constant delay to fix anything; the problem is the > predicted variation in delay. > I still think the IODELAY could help you. The output goes through an adjustable IODELAY, then you route the output back in through a pin, adjust the input IODELAY to figure out where the incoming edge is, and then use a feedback loop to keep the output delay constant. It's a technique used for deskewing DRAM data. I think the main clock would also have to be deskewed with a BUFG so you have a good reference for the input. Or, if you characterized the delay-vs-temp in the lab, you could run in open-loop mode by adjusting the IODELAY tap based on the temperature you read. Yes, the tools are definitely pessimistic. They're only useful for worst-case. I'm pretty sure you can put in the max temperature when doing PAR, so you could isolate the effects of just that, but it will still probably be worse variation than in reality.Article: 159871
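Kevin's loop-back scheme reduces to a tap-nudging feedback loop. A sketch only; the tap range and names are illustrative, modeled on how DRAM-style deskew is commonly done:

```python
def deskew_step(measured_taps, target_taps, odelay_tap, max_tap=31):
    """One iteration: compare the looped-back edge position against the
    target and nudge the output-delay tap one step toward it."""
    if measured_taps > target_taps and odelay_tap > 0:
        odelay_tap -= 1   # edge arriving late: back the delay off
    elif measured_taps < target_taps and odelay_tap < max_tap:
        odelay_tap += 1   # edge arriving early: add delay
    return odelay_tap
```

Run slowly, this tracks temperature drift the same way the up/down fan counter tracks the setpoint.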
On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson <kevin.neilson@xilinx.com> wrote: >> > If you really need to control output delay you can use the IODELAY block, possibly along with a copper trace feedback line. >> >> >> Our output data-valid window is predicted by the tools to be very >> narrow relative to the clock period. We figure that controlling the >> temperature (and maybe controlling Vcc-core vs temperature) will open >> up the timing window. The final analysis will have to be experimental. >> >> We can't crank in a constant delay to fix anything; the problem is the >> predicted variation in delay. >> > >I still think the IODELAY could help you. The output goes through an adjustable IODELAY, then you route the output back in through a pin, adjust the input IODELAY to figure out where the incoming edge is, and then use a feedback loop to keep the output delay constant. It's a technique used for deskewing DRAM data. I think the main clock would also have to be deskewed with a BUFG so you have a good reference for the input. Or, if you characterized the delay-vs-temp in the lab, you could run in open-loop mode by adjusting the IODELAY tap based on the temperature you read. > >Yes, the tools are definitely pessimistic. They're only useful for worst-case. I'm pretty sure you can put in the max temperature when doing PAR, so you could isolate the effects of just that, but it will still probably be worse variation than in reality. My FPGA guy says that the ZYNQ does not have adjustable delay after the i/o block flops. We can vary drive strength in four steps, and we may be able to do something with that. -- John Larkin Highland Technology, Inc picosecond timing precision measurement jlarkin att highlandtechnology dott com http://www.highlandtechnology.comArticle: 159872
On 4/12/2017 4:20 PM, John Larkin wrote:
> On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
> <kevin.neilson@xilinx.com> wrote:
>
>>>> If you really need to control output delay you can use the IODELAY
>>>> block, possibly along with a copper trace feedback line.
>>>
>>>
>>> Our output data-valid window is predicted by the tools to be very
>>> narrow relative to the clock period. We figure that controlling the
>>> temperature (and maybe controlling Vcc-core vs temperature) will open
>>> up the timing window. The final analysis will have to be experimental.
>>>
>>> We can't crank in a constant delay to fix anything; the problem is the
>>> predicted variation in delay.
>>>
>>
>> I still think the IODELAY could help you. The output goes through an
>> adjustable IODELAY, then you route the output back in through a pin,
>> adjust the input IODELAY to figure out where the incoming edge is, and
>> then use a feedback loop to keep the output delay constant. It's a
>> technique used for deskewing DRAM data. I think the main clock would
>> also have to be deskewed with a BUFG so you have a good reference for
>> the input. Or, if you characterized the delay-vs-temp in the lab, you
>> could run in open-loop mode by adjusting the IODELAY tap based on the
>> temperature you read.
>>
>> Yes, the tools are definitely pessimistic. They're only useful for
>> worst-case. I'm pretty sure you can put in the max temperature when
>> doing PAR, so you could isolate the effects of just that, but it will
>> still probably be worse variation than in reality.
>
> My FPGA guy says that the ZYNQ does not have adjustable delay after
> the i/o block flops. We can vary drive strength in four steps, and we
> may be able to do something with that.

That's also not adjustable in real time though.

I believe what the others are talking about is a real time adjustable
delay that is built into the clocking module. I don't know about the
Zynq, but Xilinx has what they call a delay locked loop which sounds
exactly like what you need. I believe it works by syncing the output
signal to the clock signal. There will be some signal path in the
feedback loop which will still cause timing variation with temperature
and I suppose voltage, but the variation in process can be compensated.

--

Rick C

Article: 159873
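The closed-loop variant Kevin and rickman describe — loop the output back in through a pin, find the incoming edge, and servo the output tap — amounts to the same one-step up/down control as the fan loop earlier in the thread. Here is a toy Python model under assumed numbers (78 ps/tap is illustrative); a real implementation would get the edge position from an input IODELAY scan rather than compute it, and `measured` below stands in for that measurement.

```python
# Toy model of a closed-loop delay trim: compare the measured edge
# position against a target and bump the output tap one step at a time,
# the same bang-bang up/down style as the fan-speed loop. The 78 ps/tap
# resolution and all delays are assumptions for illustration.

TAP_PS = 78.0   # assumed delay per tap, picoseconds

def trim_step(tap, measured_ps, target_ps, max_tap=31):
    """One servo iteration: move the tap one step toward the target delay."""
    if measured_ps > target_ps + TAP_PS / 2:
        tap -= 1          # path got slower: take a tap out
    elif measured_ps < target_ps - TAP_PS / 2:
        tap += 1          # path got faster: add a tap
    return max(0, min(max_tap, tap))

def settle(intrinsic_ps, target_ps, tap=16, iters=50):
    """Iterate until the tap compensates the intrinsic path delay."""
    for _ in range(iters):
        # Stand-in for the edge position an input IODELAY scan would report.
        measured = intrinsic_ps + tap * TAP_PS
        tap = trim_step(tap, measured, target_ps)
    return tap
```

The half-tap deadband keeps the loop from hunting between two adjacent taps once it has converged, which is the same reason the fan loop only moves its counter slowly.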
On Wednesday, 4/12/2017 4:27 PM, rickman wrote:
> On 4/12/2017 4:20 PM, John Larkin wrote:
>> On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
>> <kevin.neilson@xilinx.com> wrote:
>>
>>>>> If you really need to control output delay you can use the IODELAY
>>>>> block, possibly along with a copper trace feedback line.
>>>>
>>>>
>>>> Our output data-valid window is predicted by the tools to be very
>>>> narrow relative to the clock period. We figure that controlling the
>>>> temperature (and maybe controlling Vcc-core vs temperature) will open
>>>> up the timing window. The final analysis will have to be experimental.
>>>>
>>>> We can't crank in a constant delay to fix anything; the problem is the
>>>> predicted variation in delay.
>>>>
>>>
>>> I still think the IODELAY could help you. The output goes through an
>>> adjustable IODELAY, then you route the output back in through a pin,
>>> adjust the input IODELAY to figure out where the incoming edge is,
>>> and then use a feedback loop to keep the output delay constant. It's
>>> a technique used for deskewing DRAM data. I think the main clock
>>> would also have to be deskewed with a BUFG so you have a good
>>> reference for the input. Or, if you characterized the delay-vs-temp
>>> in the lab, you could run in open-loop mode by adjusting the IODELAY
>>> tap based on the temperature you read.
>>>
>>> Yes, the tools are definitely pessimistic. They're only useful for
>>> worst-case. I'm pretty sure you can put in the max temperature when
>>> doing PAR, so you could isolate the effects of just that, but it will
>>> still probably be worse variation than in reality.
>>
>> My FPGA guy says that the ZYNQ does not have adjustable delay after
>> the i/o block flops. We can vary drive strength in four steps, and we
>> may be able to do something with that.
>
> That's also not adjustable in real time though.
>
> I believe what the others are talking about is a real time adjustable
> delay that is built into the clocking module. I don't know about the
> Zynq, but Xilinx has what they call a delay locked loop which sounds
> exactly like what you need. I believe it works by syncing the output
> signal to the clock signal. There will be some signal path in the
> feedback loop which will still cause timing variation with temperature
> and I suppose voltage, but the variation in process can be compensated.
>

In the 7-series what you want is the MMCM, which has the ability to
adjust the output phase in steps of 1/56 of the VCO period. This
adjustment can be applied to a subset of the MMCM outputs, so you
can for example vary the outgoing clock phase while keeping the
data phase constant with respect to the clock driving the MMCM.

On the other hand, the whole point of a source synchronous interface
is to just need low skew between outputs - not low skew between the
input clock and the outputs. Typically just placing the outputs in
the IOB and using the same clock resource is good enough. Skew
between outputs is much lower than the variance in output delay.

--

Gabor

Article: 159874
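For a feel of the resolution Gabor mentions: the fine phase shift moves in steps of 1/56 of the VCO period, so the step size in picoseconds depends only on where the VCO runs. A small worked example; the 1 GHz VCO frequency in the comment is an arbitrary illustration, not a value from the thread.

```python
# Fine phase-shift resolution of a 7-series MMCM: 1/56 of the VCO period.
# The example VCO frequency is an assumption, not from the thread.

def mmcm_phase_step_ps(vco_mhz):
    """Phase-shift step size in picoseconds for a given VCO frequency."""
    vco_period_ps = 1e6 / vco_mhz   # MHz -> period in ps
    return vco_period_ps / 56

# At a 1 GHz VCO the period is 1000 ps, so each step is 1000/56, about
# 17.9 ps. Running the VCO faster makes the steps proportionally finer.
```

Since the step scales with the VCO period, designs wanting the finest adjustment push the VCO toward the top of its legal range.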
On 4/12/2017 5:16 PM, Gabor wrote:
> On Wednesday, 4/12/2017 4:27 PM, rickman wrote:
>> On 4/12/2017 4:20 PM, John Larkin wrote:
>>> On Wed, 12 Apr 2017 12:37:59 -0700 (PDT), Kevin Neilson
>>> <kevin.neilson@xilinx.com> wrote:
>>>
>>>>>> If you really need to control output delay you can use the IODELAY
>>>>>> block, possibly along with a copper trace feedback line.
>>>>>
>>>>>
>>>>> Our output data-valid window is predicted by the tools to be very
>>>>> narrow relative to the clock period. We figure that controlling the
>>>>> temperature (and maybe controlling Vcc-core vs temperature) will open
>>>>> up the timing window. The final analysis will have to be experimental.
>>>>>
>>>>> We can't crank in a constant delay to fix anything; the problem is the
>>>>> predicted variation in delay.
>>>>>
>>>>
>>>> I still think the IODELAY could help you. The output goes through
>>>> an adjustable IODELAY, then you route the output back in through a
>>>> pin, adjust the input IODELAY to figure out where the incoming edge
>>>> is, and then use a feedback loop to keep the output delay constant.
>>>> It's a technique used for deskewing DRAM data. I think the main
>>>> clock would also have to be deskewed with a BUFG so you have a good
>>>> reference for the input. Or, if you characterized the delay-vs-temp
>>>> in the lab, you could run in open-loop mode by adjusting the IODELAY
>>>> tap based on the temperature you read.
>>>>
>>>> Yes, the tools are definitely pessimistic. They're only useful for
>>>> worst-case. I'm pretty sure you can put in the max temperature when
>>>> doing PAR, so you could isolate the effects of just that, but it
>>>> will still probably be worse variation than in reality.
>>>
>>> My FPGA guy says that the ZYNQ does not have adjustable delay after
>>> the i/o block flops. We can vary drive strength in four steps, and we
>>> may be able to do something with that.
>>
>> That's also not adjustable in real time though.
>>
>> I believe what the others are talking about is a real time adjustable
>> delay that is built into the clocking module. I don't know about the
>> Zynq, but Xilinx has what they call a delay locked loop which sounds
>> exactly like what you need. I believe it works by syncing the output
>> signal to the clock signal. There will be some signal path in the
>> feedback loop which will still cause timing variation with temperature
>> and I suppose voltage, but the variation in process can be compensated.
>>
>
> In the 7-series what you want is the MMCM, which has the ability to
> adjust the output phase in steps of 1/56 of the VCO period. This
> adjustment can be applied to a subset of the MMCM outputs, so you
> can for example vary the outgoing clock phase while keeping the
> data phase constant with respect to the clock driving the MMCM.
>
> On the other hand, the whole point of a source synchronous interface
> is to just need low skew between outputs - not low skew between the
> input clock and the outputs. Typically just placing the outputs in
> the IOB and using the same clock resource is good enough. Skew
> between outputs is much lower than the variance in output delay.

Yeah, well, it's not like we really know the true and full problem. We
just know he doesn't like the timing range reported by the tools.

--

Rick C