Hi,

I am trying to figure out how to set up the external memory controller for data bus width matching. I have an 8-bit flash but a 32-bit PLB bus. I set up the EMC parameters as follows:

BEGIN xps_mch_emc
 PARAMETER INSTANCE = flash_16Mx8b
 PARAMETER HW_VER = 2.00.a
 PARAMETER C_NUM_BANKS_MEM = 1
 PARAMETER C_NUM_CHANNELS = 0
 PARAMETER C_MEM0_WIDTH = 8                        <-----------------------
 PARAMETER C_MAX_MEM_WIDTH = 8
 PARAMETER C_INCLUDE_DATAWIDTH_MATCHING_0 = 1      <-----------------------
 PARAMETER C_TCEDV_PS_MEM_0 = 90000
 PARAMETER C_TAVDV_PS_MEM_0 = 90000
 PARAMETER C_THZCE_PS_MEM_0 = 40000
 PARAMETER C_THZOE_PS_MEM_0 = 30000
 PARAMETER C_TWC_PS_MEM_0 = 90000
 PARAMETER C_TWP_PS_MEM_0 = 90000
 PARAMETER C_TLZWE_PS_MEM_0 = 0
 PARAMETER C_MCH_PLB_CLK_PERIOD_PS = 16000
 PARAMETER C_MEM0_BASEADDR = 0x80000000
 PARAMETER C_MEM0_HIGHADDR = 0x80ffffff
 BUS_INTERFACE SPLB = mb_plb
 PORT RdClk = sys_clk_s
 PORT Mem_A = flash_16Mx8b_Mem_A
 PORT Mem_DQ = flash_16Mx8b_Mem_DQ
 PORT Mem_WEN = flash_16Mx8b_Mem_WEN
 PORT Mem_OEN = flash_16Mx8b_Mem_OEN
 PORT Mem_CEN = flash_16Mx8b_Mem_CEN
END

I expected the EMC to perform 4 accesses to the flash to create a 32-bit data word every time I read or write to it. This is the code I use to read and write:

[CODE]
// Write 16 bits
XIo_Out8(FLASHBASEADDR + 8, 0x40);    // Send write byte command
XIo_Out16(FLASHBASEADDR + 8, 0x1234); // Write 16 bits to address 0
[/CODE]

It only writes 0x12 to address 8. Can anybody explain why data width matching is not working?

Thanks a lot,
Amish

Article: 139801
On Apr 14, 8:39 am, axr0284 <axr0...@yahoo.com> wrote:
> On Apr 13, 10:21 pm, halong <cco...@netscape.net> wrote:
> > On Apr 13, 4:58 pm, ivan <ivan.so...@gmail.com> wrote:
> > > On 13 tra, 23:22, Mike Harrison <m...@whitewing.co.uk> wrote:
> > > > What should happen is you save your source, then when you run Impact it runs all the processes
> > > > needed. If you didn't save the source it should prompt you.
> > >
> > > No, that's not it. I mean, I can't program the board twice in a row. Impact reports a blue
> > > message 'Program succeeded' every time, but only the first time does it transfer the new
> > > design. For every next time, it just keeps transferring the original, that is - the first
> > > transferred design since the ISE was loaded.
> > >
> > > So, that has nothing to do with saving the project, or running all the steps for
> > > synthesis/implementation/generation. Just to mention, I do save all the files before
> > > proceeding with other steps. Always.
> > >
> > > Thanks,
> > > Ivan.
> >
> > Thinking of Impact and ISE as two separate tools, with you yourself as the user:
> >
> > You would use ISE to generate the bit file, and you use IMPACT to download any bit file to
> > an X-device. Make sure you have selected the desired bit file every time you want to load it.
> >
> > btw, have you updated to the latest service pack yet?
>
> Are you absolutely sure the bit file on the hard drive has been updated? Maybe something went
> wrong during bitgen. Check the logs. Check the time stamp of the bit file right after you do a
> compile. Technically, Impact checks that the file on the disk is different from the file you
> downloaded before and would give you a pop-up message mentioning that fact.

Actually, timestamps are important. If iMPACT sees a change in the bit file it should give you a message like "configuration file has been modified on disk. using new version."

If you don't see this when you re-program after a new build there may be an issue with the system date. Are iMPACT and ISE both running on the same computer? Are the bit files also on that computer? If more than one computer is involved, are they set to the same time zone and are the clocks set appropriately?

Regards,
Gabor

Article: 139802
On Apr 14, 7:46 am, "Antti.Luk...@googlemail.com" <Antti.Luk...@googlemail.com> wrote:
> On Apr 14, 2:08 pm, acd <acd4use...@lycos.de> wrote:
> > On 14 Apr., 13:02, Petter Gustad <newsmailco...@gustad.com> wrote:
> > > Are there any press releases etc. which indicates that the Arria II
> > > will be competing with Cyclone III in price, e.g. that a 2AGX125 costs
> > > less than (or the same as) an EP3C120?
> >
> > The description of the Arria products states that they target cost-sensitive applications.
> > "Arria® II GX FPGAs are the lowest power FPGAs with up to 3.75-Gbps transceivers. Designed
> > for cost-sensitive applications, Arria II GX FPGAs are based on a 40-nm, full-featured FPGA
> > architecture that includes adaptive logic modules (ALMs), digital signal processing (DSP)
> > blocks, embedded RAM, and a hard PCIe IP core. This FPGA family provides the optimal logic,
> > memory, and DSP capabilities to address your needs. Unlike other 3-Gbps FPGAs, Arria II GX
> > FPGAs offer improvements in usability that allow you to complete your projects faster."
> > http://www.altera.com/products/devices/arria-fpgas/arria-ii-gx/aiigx-...
> >
> > But of course, the high-speed links make the devices more expensive.
> >
> > Andreas
>
> well at EW2009 there was a guy who claimed he got 10$ pricing for
> smallest Arria II (50k pcs)
> that is well below Lattice price what would be 17$ (1KLUT/1$)
>
> Antti

Which Lattice? ECP2M or ECP3?

Regards,
Gabor

Article: 139803
http://www.theregister.co.uk/2009/04/14/facebook_twitter_users_dunces_amoral/

"Vikram" <vkr101@gmail.com> wrote in message news:4af78531-9571-4cd3-9437-fed73274249f@n7g2000prc.googlegroups.com...
> Guys,
>
> Wanted to let you know about the FPGA Twitter at http://twitter.com/fpga
> , it is a great way to follow what is happening in the FPGA world..
>
> -Vikram

Article: 139804
Hi,

This is probably a very basic question, but do analog circuits need anything like reset? For example, given that a PLL is mostly analog, does it need a reset input?

Regards,

Article: 139805
Sharan <sharan.basappa@gmail.com> wrote:
> This is probably a very basic question, but do analog circuits need
> anything like reset?
> For example, given that a PLL is mostly analog, does it need reset
> input?

There are cases where analogue circuits such as signal conditioners or watchdogs might need to be started from a specific state, in which case something like a switchable voltage-clamp might be required. Whether or not you need a clamp for a PLL depends on the specific requirement. But you should not normally require a clamp to ensure that a circuit works correctly -- if the circuit is capable of getting locked into an aberrant metastable state, then its design is at fault.

--
Dave Farrance

Article: 139806
Yet another content-free post with abrasive comments about posting style.

Way to go, Petter!

On Apr 14, 12:35 am, Petter Gustad <newsmailco...@gustad.com> wrote:
> acd <acd4use...@lycos.de> writes:
> > (with Arria II recently announced)
>
> URL?
>
> Petter
> --
> A: Because it messes up the order in which people normally read text.
> Q: Why is top-posting such a bad thing?
> A: Top-posting.
> Q: What is the most annoying thing on usenet and in e-mail?

Article: 139807
On Apr 14, 7:38 pm, Dave Farrance <DaveFarra...@OMiTTHiSyahooANDTHiS.co.uk> wrote:
> Sharan <sharan.basa...@gmail.com> wrote:
> > This is probably a very basic question, but do analog circuits need
> > anything like reset?
> > For example, given that a PLL is mostly analog, does it need reset
> > input?
>
> There are cases where analogue circuits such as signal conditioners or
> watchdogs might need to be started from a specific state, in which case,
> something like a switchable voltage-clamp might be required. Whether or
> not you need a clamp for a PLL depends on the specific requirement. But
> you should not normally require a clamp to ensure that a circuit works
> correctly -- if the circuit is capable of getting locked into an aberrant
> metastable state, then its design is at fault.
>
> --
> Dave Farrance

True. I did check one PLL IC manufacturer's datasheet and it did not have a reset. But what was surprising is that the PLL has an SPI slave interface, so I would expect that part of the controller to have a reset.

Regards,

Article: 139808
Hi,

I'm very new to working with Altera and Quartus. I'm working with the NIOS II development board (Cyclone III EP3C25). I have to create a design which has a NIOS processor, SRAM, an Ethernet MAC, an Ethernet Management Interface and other components. But the MAC and MI are custom components, not Altera IPs.

I need to know how to instantiate these components in SOPC Builder, and I'm not aware of the settings to be done. What I did is copy my custom component folders into C:\altera\81\ip\altera and include the class.ptf files of the components in the alteracomponents.ipx file. Is this the correct method to create a component?

Thanks and regards,
Renu

Article: 139809
On Apr 14, 7:46 am, "Antti.Luk...@googlemail.com" <Antti.Luk...@googlemail.com> wrote:
> On Apr 14, 2:08 pm, acd <acd4use...@lycos.de> wrote:
> > On 14 Apr., 13:02, Petter Gustad <newsmailco...@gustad.com> wrote:
> > > Are there any press releases etc. which indicates that the Arria II
> > > will be competing with Cyclone III in price, e.g. that a 2AGX125 costs
> > > less than (or the same as) an EP3C120?
> >
> > The description of the Arria products states that they target cost-sensitive applications.
> > <snip Altera marketing quote>
> >
> > But of course, the high-speed links make the devices more expensive.
> >
> > Andreas
>
> well at EW2009 there was a guy who claimed he got 10$ pricing for
> smallest Arria II (50k pcs)
> that is well below Lattice price what would be 17$ (1KLUT/1$)
>
> Antti

I don't follow. I am getting the smallest Lattice XP part at below $10 qty 100. How is the Arria a better price? It may be more gates, but who cares if you don't need them? I don't price chips by the LUT or 1KLUT. I price them by the socket that will hold my design.

Rick

Article: 139810
This is the appropriate way to reply to that...

Rick

On Apr 14, 11:19 am, MadHatt...@myself.com wrote:
> Yet another content-free post with abrasive comments about posting
> style.
>
> Way to go, Petter!
>
> On Apr 14, 12:35 am, Petter Gustad <newsmailco...@gustad.com> wrote:
>
> > Petter
> > --
> > A: Because it messes up the order in which people normally read text.
> > Q: Why is top-posting such a bad thing?
> > A: Top-posting.
> > Q: What is the most annoying thing on usenet and in e-mail?
>
> > URL?
>
> > acd <acd4use...@lycos.de> writes:
> > > (with Arria II recently announced)

Article: 139811
On Apr 14, 3:47 pm, gabor <ga...@alacron.com> wrote:
> On Apr 14, 4:19 am, ales.gor...@gmail.com wrote:
> > Hi all,
> >
> > We are developing a new Spartan3A DSP board with DDR. Since we need a
> > x32 DDR a good and low cost option seems low power mobile DDR SDRAM
> > like this: http://www.micron.com/products/partdetail?part=MT46H16M32LFCM-6%20IT
> > It seems to be very popular since it has very low power consumption
> > (1.8V), high bandwidth and low cost; Beagle Board, DSP boards...
> >
> > Problem: It is not directly supported by Xilinx MIG (memory interface
> > generator). The question is: did anyone successfully use the MIG core
> > with such SDRAM?
> >
> > Cheers,
> >
> > Ales
>
> What speed are you running the DDR memory at? I have used Mobile
> DDR with Lattice ECP2, and found that their fancy DDR I/O built
> into the part does not work with LVCMOS because it needed a
> "preamble detector" that uses low voltage (1/4 Vcc) as a
> detection mechanism. I ended up rolling my own using generic
> DDR I/O and without using DQS as an input clock, but I should
> preface this with the fact that I only clock the parts at
> 125 MHz (DDR 250).
>
> You could theoretically use MIG to generate sources and then
> modify it yourself. The power-up reset sequence is very
> different for Mobile vs Standard DDR. Check the data sheets.
>
> If you're not up to the task of rolling your own, though,
> I would suggest looking at DDR2 instead. It runs at the
> same voltage and is very well supported by MIG. For a
> single-chip solution with very short routes, you don't even
> need to use external terminating resistors. Just set the
> IO standard to SSTL type I (low drive) and set the lowest
> drive on the DDR2 chip. You still use more pins to get
> Vref, etc. but you might get the bandwidth you need in
> a narrower (16b vs. 32b) package.
>
> regards,
> Gabor

Thanks Gabor,

The memory speed should probably go to max 150 MHz (DDR300).

I know that the power-up sequence is different; I read that in some Micron datasheet or technical note.

We want a x32 SDRAM, and DDR2 does not come in a single-chip x32 solution.

I was hoping that somebody had already successfully used MIG with LPDDR and would give me a hint on what to do. I am not so skilled and I do not have time to dig into the MIG core.

I also thought that anyone else would have noticed that x32 DDR is only available as LPDDR (mobile DDR), which has a 1.8V LVCMOS interface, needs no termination (point to point) and no current sink, not to mention several times lower power consumption... It seemed like a perfect choice, but maybe next time.

Regarding the weak response to this post, I doubt that Xilinx will add support for LPDDRs.

Cheers,

Ales

Article: 139812
On Apr 14, 10:31 pm, rickman <gnu...@gmail.com> wrote:
> On Apr 14, 7:46 am, "Antti.Luk...@googlemail.com"
> <Antti.Luk...@googlemail.com> wrote:
> > On Apr 14, 2:08 pm, acd <acd4use...@lycos.de> wrote:
> > > On 14 Apr., 13:02, Petter Gustad <newsmailco...@gustad.com> wrote:
> > > > Are there any press releases etc. which indicates that the Arria II
> > > > will be competing with Cyclone III in price, e.g. that a 2AGX125 costs
> > > > less than (or the same as) an EP3C120?
> > >
> > > <snip Altera marketing quote>
> >
> > well at EW2009 there was a guy who claimed he got 10$ pricing for
> > smallest Arria II (50k pcs)
> > that is well below Lattice price what would be 17$ (1KLUT/1$)
>
> I don't follow. I am getting the smallest Lattice XP part at below
> $10 qty 100. How is the Arria a better price? It may be more gates,
> but who cares if you don't need them? I don't price chips by the LUT
> or 1KLUT. I price them by the socket that will hold my design.
>
> Rick

Rick,

the price was for the smallest ECP3, which is 17 KLUT, and per-KLUT pricing yields 17$.
You do not get a 17 KLUT Lattice device in 100 pcs under 10$.
Not yet.

Antti

Article: 139813
ales.gorkic@gmail.com wrote:
> On Apr 14, 3:47 pm, gabor <ga...@alacron.com> wrote:
> > On Apr 14, 4:19 am, ales.gor...@gmail.com wrote:
> > > Hi all,
> > >
> > > We are developing a new Spartan3A DSP board with DDR. Since we need a
> > > x32 DDR a good and low cost option seems low power mobile DDR SDRAM
> > > like this: http://www.micron.com/products/partdetail?part=MT46H16M32LFCM-6%20IT
> > > It seems to be very popular since it has very low power consumption
> > > (1.8V), high bandwidth and low cost; Beagle Board, DSP boards...
> > >
> > > Problem: It is not directly supported by Xilinx MIG (memory interface
> > > generator). The question is: did anyone successfully use the MIG core
> > > with such SDRAM?
> > >
> > > Cheers,
> > >
> > > Ales
>
> Thanks Gabor,
>
> The memory speed should probably go to max 150 MHz (DDR300).
>
> I know that the power up sequence is different, I read that in some
> Micron datasheet or technical note.
>
> We want x32 SDRAM, and DDR2 does not come in a single chip solution.
>
> I was hoping that somebody already successfully used MIG with LPDDR and
> would give me a hint on what to do. I am not so skilled and I do not
> have time to dig into the MIG core.
>
> I also thought that anyone else noticed that x32 DDR is only available
> as LPDDR (mobile DDR) which has 1.8V LVCMOS interface and does not
> need any termination (point to point) and no current sink, not to
> mention several times lower power consumption...
>
> It seemed like a perfect choice, but maybe next time.
>
> Regarding the weak response to this post I doubt that Xilinx will add
> support for LPDDRs.

It's all about signal integrity. If the part you want to use supports 1.8V LVCMOS, it will work with 1.8V DDR SDRAM. Anyway, the last time I checked the MIG core it was a big fat pig - more like a demonstrator than something useful. You might be better off buying a DDR SDRAM core with proper documentation and support for the part you are going to use.

--
Failure does not prove something is impossible, failure simply indicates you are not using the right tools...
"If it doesn't fit, use a bigger hammer!"
--------------------------------------------------------------

Article: 139814
Gabor

Regards,

Ummm... Since we're on a top-posting roll, I thought to point out that the original post by Petter was not content-free. He was (in a way) asking for a link to the Altera Arria II GX information. Perhaps you thought "URL?" was some sort of caveman-like grunt?

On Apr 14, 3:33 pm, rickman <gnu...@gmail.com> wrote:
> This is the appropriate way to reply to that...
>
> Rick
>
> On Apr 14, 11:19 am, MadHatt...@myself.com wrote:
> > Yet another content-free post with abrasive comments about posting
> > style.
> >
> > Way to go, Petter!
> >
> > On Apr 14, 12:35 am, Petter Gustad <newsmailco...@gustad.com> wrote:
> > > Petter
> > > --
> > > A: Because it messes up the order in which people normally read text.
> > > Q: Why is top-posting such a bad thing?
> > > A: Top-posting.
> > > Q: What is the most annoying thing on usenet and in e-mail?
> > > URL?
> > > acd <acd4use...@lycos.de> writes:
> > > > (with Arria II recently announced)

Article: 139815
"Sharanbr" <sharan.basappa@gmail.com> wrote in message news:fdf33306-8c52-4fc1-8502-a7ca36c146cd@37g2000yqp.googlegroups.com...
>
> true. I did check one datasheet of pll ic manufacturer and he did not
> have reset.
> But what was surprising was that the pll has an SPI slave interface to
> it, so I would
> expect that part of the controller to have reset.
>

The SPI chip select signal should provide all the bit level synchronization that a SPI slave should require.

Kevin Jennings

Article: 139816
On 13 avr, 18:22, Vikram <vkr...@gmail.com> wrote:
> Guys,
>
> Wanted to let you know about the FPGA Twitter at http://twitter.com/fpga
> , it is a great way to follow what is happening in the FPGA world..
>
> -Vikram

No it isn't. Sorry.

Article: 139817
renupriya wrote:
> I need to know how to instantiate these components in SOPC builder and I'm
> not aware of the settings to be done. The thing I did is I copied my
> custom component folders into C:\altera\81\ip\altera and included the
> class.ptf files of components in alteracomponents.ipx file. Is this method
> correct to create a component?

No. Start with your HDL files in your own directory. Then use the component editor in SOPC to create a new component. To make the process easier, you should use the naming conventions described in the SOPC documentation for the top-level entity ports. What I do, if I'm using a 3rd-party module, is create a top-level wrapper around the core with the correct naming convention.

There may be a complication with the TCP/IP stack if you're using your own MAC. IIRC, at least some time ago, the system library dialogue wouldn't allow you to add the TCP/IP stack without a recognised MAC in the project. I'm sure it can be done, may just be some fiddling to do.

Regards,

--
Mark McDougall, Engineer
Virtual Logic Pty Ltd, <http://www.vl.com.au>
21-25 King St, Rockdale, 2216
Ph: +612-9599-3255 Fax: +612-9599-3266

Article: 139818
acd <acd4usenet@lycos.de> writes:
> But of course, the high-speed links make the devices more expensive.

The package is probably a little more expensive. Qualification and testing cost a bit more. There's most likely a royalty to the SERDES IP provider (unless it's Altera's own). Besides that, I can't see why they should be more expensive. A PCIe based networking card can be purchased for $10, so the ASIC on the board must be dirt cheap. So the additional cost can't be that high. The market is currently willing to pay a bit more for high speed serial interfaces, but soon people will expect them in FPGAs like they now do with LVDS interfaces.

Petter
--
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Article: 139819
Hi everybody,

I have been using the ISE environment to synthesize/implement my designs. I would like to know what is the minimum acceptable slack on a signal.

I have a crystal oscillator chip which feeds my DCM: Clk_in = 50 MHz. I then generate a 50 MHz (CLK0) and a 100 MHz clock (CLK2X) using the DCM.

Usually I will try to have a 10% slack on all generated clocks:
for 50 MHz (20 ns), I will try to get a worst case slack of 2 ns;
for 100 MHz (10 ns), I will try to get a worst case slack of 1 ns.

The person who told me to follow this rule has been working in the military domain for 20 years. Someone else told me a lower value for the worst case slack is acceptable; he told me a 0.00000x ns slack will work, because the vendor already includes margin in the calculation for the worst slack.

Could someone confirm this? Or could someone explain which rule they use to validate timing closure?

Thanks for your help,
Ben_Quem

Article: 139820
Although the NW Logic site doesn't explicitly say so, I was told that the NW Logic mobile-DDR core will work with Xilinx.
http://www.nwlogic.com/docs/Mobile_DDR_SDRAM_Controller_Core.pdf

The Xilinx MIG controller will not work as-is with Mobile DDR. The existing controller depends on the DDR synchronizing the clock. Mobile DDR does not have the PLL that standard DDR has, so the timing is not the same. The MIG controller could be modified, as described in this Xilinx Answer Record:
http://www.xilinx.com/support/answers/24713.htm

I think your best option with Xilinx and Mobile DDR is to move to Spartan-6, which has an integrated memory controller block that supports Mobile DDR.
http://www.xilinx.com/publications/prod_mktg/Spartan6_Overview.pdf

Bryan

Article: 139821
I have developed slack acceptance insurance. The fee is only 3% of the component cost plus a set-up fee. Our policy ensures that with zero slack the devices will not fail because of slow timing, and gives you a replacement of any failed component.

Article: 139822
On Apr 11, 2:12 am, "Antti.Luk...@googlemail.com" <Antti.Luk...@googlemail.com> wrote:
> On Apr 11, 10:07 am, Arnim <clv.5.min...@spamgourmet.com> wrote:
> > > now the funny thing comes:
> > > there are two different possibilities for the LEDs depending on the slide
> > > switch (reset!)
> > > this is normal (reset and run states)
> > > but there is also a 3rd possibility, when the LEDs are at about 50%
> > > intensity!!
> >
> > Which position is the switch then - reset or run?
> > What's the state of the LEDs during reset? What after reset, i.e. the
> > next pattern?
> >
> > > what is the reason? how can the LEDs be at 50% intensity?
> > > the FPGA is configured all the time, the leds are driven with a constant
> > > value. (no PWM)
> >
> > First guess: 50% intensity is caused by the design looping
> > reset->run->reset->run. I'd investigate for ground bounce affecting the
> > reset input. A weak pull-up is probably also involved.
> >
> > Arnim
>
> 100% hit (I was just curious if somebody would guess it)
>
> yes, that it was: the slide switch was in the middle position
> there is no onboard resistor and the FPGA pulls are disabled
>
> it worked also like a CAPACITIVE touch sensor
> touching the PCB at LED0 (close to the switch) made the leds come on;
> taking the finger away, leds off
>
> Antti

Floating causes headache ;-)

Article: 139823
On Wed, 15 Apr 2009 07:30:24 -0700 (PDT), halong <ccon67@netscape.net> wrote:

> Floating causes head ache

Indeed. Unhappy memories of my first-ever CMOS project, where I left the reset input floating. It would often work for a few minutes. Or it would fail to work until you moved a finger close to it.

Sensible people don't make that sort of mistake twice. I think it took me three or four such situations before I learnt my lesson :-(

--
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.

Article: 139824
Ben,

The advice you were given is safe, to be sure, but far too conservative.

The slack should cover the total of all instabilities in the clock period, both intrinsic and extrinsic. If there were no phase noise (jitter), the slack could be zero. However, there is jitter on any oscillator, which might be 35 to 100 ps peak to peak, plus the jitter caused by all the signals switching internally and all the IOs switching externally. This is "system jitter" and may vary from hundreds of ps to as much as two nanoseconds (in the case of poor bypassing, extreme ground bounce, bad signal integrity).

In the "old days" designers would not care about signal integrity, or power supply integrity, and then be overly conservative in their slack to make up for their neglect. Sometimes the system would still fail, due to their ignorance. Now, the practice is to perform a reasonably good signal integrity analysis, as well as follow best practices on bypassing and power distribution.

One adds all the peak to peak jitter sources (Xilinx ISE does this for you, but you need to supply the system jitter number) quadratically: that is, peak to peak values "add" as the square root of the sum of the squares (100 ps of clock jitter plus 200 ps of system jitter is sqrt(100^2 + 200^2), or ~224 ps peak to peak). The required slack is then just the peak value, i.e. the worst-case reduction in the clock period, which is 1/2 of the 224 ps, or ~112 ps.

Again, if you place a system jitter value in the tools, this will be taken into account, and any slack will already have the system jitter, DCM jitter, PLL jitter, and clock jitter accounted for. Thus, a value of 0 after taking these into account is acceptable. Once built, the system jitter should be measured, so that you are sure you still have 0 or positive slack.

Austin