"John Jacob" <jjacob@graphite.com> wrote in message news:<asbpov$1f5$2@iruka.swcp.com>... > > Take a look at the Bittware SharcFin ASIC. It was designed exactly for > what > > you seem to be asking for: > > http://www.bittware.com/products/app-specific/ic.stm > > > > A quad DSP board for a PCI slot runs ~5K, with another 3K for the Visual DSP > and 2K for their API - ~10K total. That seems expensive considering the > chips are only ~30, PCB manufactures are at 1/2 capacity for lack of work, > and shareware compilers are readily available. So you think their engineering time oughta be ignored? And one chip does not a design make. You may wish to recalibrate your expectations.Article: 50201
"John Jacob" <jjacob@graphite.com> wrote in message news:<asfl2n$jdp$1@iruka.swcp.com>... > My original question was to see if anyone had interfaced a SHARC to a PCI > bridge using a FPGA, and in particular, the details regarding the state > diagrams. Others have pointed out the problems with PCI soft cores for FPGAs. Here's my take: they're expensive! You may find it easier to go with a PLX chip to do all of the PCI heavy-lifting; your job is to interface to their local bus, which is not too difficult. If you're trying to attach one Sharc chip to the PCI bus, it oughta be simple. If you want, you should be able to attach more than one Sharc to the PLX' local bus, as they support local-bus mastership arbitration. I would imagine that the PLX part and the FPGA/CPLD required would be a frick of a lot cheaper than the Xilinx or Altera soft core, unless you're making a lot of boards. -aArticle: 50202
Ray Andraka <ray@andraka.com> wrote in message news:3DD9CD49.48C37CA6@andraka.com...
> Xilinx originally said that it was safe to go directly between the clock domains, however experience dictates otherwise.

I have a design where I go from rising edge to rising edge between 1X and 2X clock domains and haven't noticed any problems yet. But now I wonder if I should go back and redesign this to eliminate the possibility of timing glitches. I seem to remember Xilinx saying that was OK, but looking at the latest SpartanII datasheet I don't really see anything on it. I browsed the answers database and couldn't find anything on this. Peter or Austin, I would love to hear your learned opinion on going between 1X and 2X domains driven by a DLL.

> Instead, I recommend that you be very careful about how you transfer data across the 1x/2x clock boundaries, as the edge can fluctuate by several hundred ns relative to the edge in the other domain...enough to cause mis-clocking.

Yikes - several hundred ns? really? Or did you mean ps?

-Jeff

Article: 50203
> I'm not sure where you're getting your pricing information from, but it doesn't seem accurate to me. Note that a 21160 is not a $30 part, so maybe you're looking at our quad 21160 board, not our quad 21161 board, which is substantially less expensive. Also note that if you only ask for low volume quotes, the pricing will be MUCH higher than reasonable volume orders - support almost seems inversely proportional to order size. The toolkit (our windows and linux API) and VDSP are one time costs.

The quote I obtained was for a HH3U for a PCI bus. It indeed has the 21160, which according to the ADI pricing sheet lists for $145.00. But even subtracting 400 from the quote for a 21161 still puts it at a premium, at least from my perspective. Also, the quote was for a quantity of (1).

> Also, note that you'll need Visual DSP whether you buy boards or develop your own - there are no shareware compilers for the 21161 or 21160, and you'll also probably want an emulator, though with our boards we do have a software plugin for the VDSP debugger that is cheaper.

To date, I have found several sources for GCC source code for the SHARC DSP.

ftp://ftp.analog.com/pub/dsp/dev_tool/21k_tool/gnu_src/g21k-3.3-src.tar.Z

In addition, a release for the DSP 21xx has been included, presently not available from AD, but from:

ftp://ftp.dgii.com/users/rick/g21xx.tar.gz

See also http://www3.telus.net/sharpshin/index.html, which is another 21xxx open source set of SHARC DSP tools.

None of them have been personally tried, as I am still deciding which hardware options to pursue. However, after donating substantial funds to the Bill Gates retirement fund over the years, I find the open source rather refreshing. I am by no means denigrating your products, for they appear to have an excellent reputation from a number of sources and are constantly recommended. However, they simply appear too expensive for my application.

> Also, the $13K for a quad Tiger is way off as well - are you including the cost of VDSP and the toolkit in it, which are one time development costs?

Yes. And your product with software is about the cost of some of your competitors' boards alone. Again, you no doubt have an excellent product, but it is completely out of my price range.

Article: 50204
> So you think their engineering time oughta be ignored? And one chip does not a design make.

Absolutely not. But when undertaking a development program, one costs out the development time over the life of the product and adds it to the product cost over the units to be sold. After a certain point, this engineering time is completely allocated and the product becomes a manufacturing exercise. At least, that is how we cost out our engineering time and tooling molds for our gimbal designs.

I will also add that I have a lot more respect for a PCB populated with a number of high-performance ASIC chips throughout the surface. High frequency design requires a great deal of skill and technique. There was an interesting paper on a TigerSHARC cluster at http://www.analog.com/library/applicationNotes/dsp/applicationNotes.html that discussed the importance of good board design and layout. The pdf link is located at the bottom of the page with the description "ADSP-TS101S MP System Simulation and Analysis." Also see some of the design notes at http://www.pericom.com/docs/apps.php with respect to clock buffers and board layout. After reading the issues confronting the hardware designer when dealing with high-frequency chips, one develops a much better understanding of what is actually transpiring at the hardware level.

But after the development costs have been allocated out, drop the price of the board.

Article: 50205
Hello all,

Could someone please explain the following statement taken directly from www.xilinx.com. The sentence came from a description of the new System Generator.

> Designers can use 285MHz multipliers to easily develop higher performance, lower cost DSP systems.

I haven't been successful in finding any information that explains this statement. When I look at the Virtex-II datasheet I only find the following information about the performance of the built-in multiplier blocks:

Multiplier 18x18 (with Block RAM inputs), XC2V1000 -5: 88 MHz
Multiplier 18x18 (with Register inputs), XC2V1000 -5: 105 MHz

When I use coregen and implement a fully pipelined multiplier with registered inputs and outputs I get results similar to what the datasheet says. If anyone knows how to achieve this performance using the Virtex-II built-in multipliers your advice would be greatly appreciated!

Thanks,
Chip Lukes

Article: 50206
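[One thing worth trying, offered only as a minimal sketch and not as an explanation of where the 285 MHz marketing figure comes from: give the multiplier an extra pipeline register on the product, in addition to the input and output registers, so the tools can retime a register into the dedicated MULT18X18 block. The entity and signal names below are made up for illustration.]

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity mult18x18_pipe is
  port (
    clk : in  std_logic;
    a   : in  signed(17 downto 0);
    b   : in  signed(17 downto 0);
    p   : out signed(35 downto 0)
  );
end entity;

architecture rtl of mult18x18_pipe is
  signal a_r, b_r : signed(17 downto 0);
  signal p_m, p_r : signed(35 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      -- register the operands at the multiplier inputs
      a_r <= a;
      b_r <= b;
      -- first product register: synthesis can often retime this
      -- into the dedicated multiplier block
      p_m <= a_r * b_r;
      -- second product register: output pipeline stage
      p_r <= p_m;
    end if;
  end process;
  p <= p_r;
end architecture;

[Whether this actually reaches the advertised number will depend on the device, speed grade and software version, so treat the datasheet numbers for your specific part as the authority.]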
> I would imagine that the PLX part and the FPGA/CPLD required would be a frick of a lot cheaper than the Xilinx or Altera soft core, unless you're making a lot of boards.

A PCI core is available for the Xilinx device, but I believe the cost was about 14K. Also, some of the technical questions being asked about its implementation made me stop and think if I truly wanted to join them in that exercise. For a large volume application, it may be an ideal solution. But as you and others have suggested, a hard-wired chip is much easier to deal with.

Any experience with the Pericom line of PCI bridges? Their bridge chip data sheets look interesting, and their web site is full of useful design knowledge.

Article: 50207
"eric - Mtl" <notervme@sympatico.ca> wrote in message news:3LsH9.6981$3J2.987911@news20.bellglobal.com... > the processor can even rely on the VGA ram to hold it's software, thus > making the whole design very cost effective ... > Assuming one could find a ready supply of ISA video cards. RalphArticle: 50208
Hi, everybody,

I met with a problem when I simulated my program in QUARTUS 2.0. I want the simulator to be driven by the stimulus of a text file, for example a *.vec file. I can open *.vec in MAXPLUS2, but fail in QUARTUS. Although I tried to change the *.vec to *.tbl, it still can't be opened. Can Quartus open *.vec? And how do I open the stimulus from a text file in QUARTUS?

Thank you!

Best regards!
siriuswmx

Article: 50209
Shoulda been ps. IIRC, we saw on the order of about 600 ps jitter introduced in getting the clock onto the chip, and that was enough to cause a problem going between clock domains where there were no LUTs between the flip-flops. Since then, we've taken the better safe than sorry route. The little bit of extra consideration in design is saved ten fold in not having to chase it down in the lab (it was a bitch to find).

Jeff Cunningham wrote:
> Ray Andraka <ray@andraka.com> wrote in message news:3DD9CD49.48C37CA6@andraka.com...
> > Xilinx originally said that it was safe to go directly between the clock domains, however experience dictates otherwise.
>
> I have a design where I go from rising edge to rising edge between 1X and 2X clock domains and haven't noticed any problems yet. But now I wonder if I should go back and redesign this to eliminate the possibility of timing glitches. I seem to remember Xilinx saying that was OK, but looking at the latest SpartanII datasheet I don't really see anything on it. I browsed the answers database and couldn't find anything on this. Peter or Austin, I would love to hear your learned opinion on going between 1X and 2X domains driven by a DLL.
>
> > Instead, I recommend that you be very careful about how you transfer data across the 1x/2x clock boundaries, as the edge can fluctuate by several hundred ns relative to the edge in the other domain...enough to cause mis-clocking.
>
> Yikes - several hundred ns? really? Or did you mean ps?
>
> -Jeff

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759

Article: 50210
> Shoulda been ps. IIRC, we saw on the order of about 600 ps jitter introduced in getting the clock onto the chip, and that was enough to cause a problem going between clock domains where there were no LUTs between the flip-flops. Since then, we've taken the better safe than sorry route. The little bit of extra consideration in design is saved ten fold in not having to chase it down in the lab (it was a bitch to find).

Do the tools catch that case now? Does the data sheet mention it?

Would it be safe if you went through a (dummy) LUT?

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 50211
Jon Elson <jmelson@artsci.wustl.edu> wrote in message news:<3DEE9B1F.5060402@artsci.wustl.edu>...
> Kolin Paul wrote:
> > Hi,
> > I want to measure the actual power in a FPGA Board. Any Ideas on how to do it.
>
> You want to measure power consumption of a single chip, or the whole board? Measuring the power draw of a single chip in an operating system is tricky, as you need to break all the power connections between the chip and the rest of the board, and funnel it through a sensing resistor or hall effect probe.
>
> Measuring total system power consumption should be easier. A DVM with a current shunt can be used, just put it in series with each power source coming into the board. Multiply each current times the voltage to get power, add power for all voltages together to get total power.
>
> Without describing the board in more detail, it is hard to get more specific.
>
> Jon

Hi Jon,

Say, if I want to know the current going through a particular trace in the board... how can we measure it? Is that possible first of all...? (don't say, break the trace and insert a meter :-))

Best regards,
Muthu

Article: 50212
> Say, if I want to know the current going through a particular trace in the board... how can we measure it?
>
> Is that possible first of all...? (don't say, break the trace and insert a meter :-))

If you have enough current, you can measure the voltage drop on a trace. I doubt if that will be useful. I've only done it to find a shorted bypass cap on a wire-wrap board that had a comb/finger pattern for the power/ground connections. (Worked great that time.)

If you have a single trace, you might be able to measure the current through it by cutting the trace and soldering a cap across the cut and putting the probes for a current meter across the cap.

--
The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

Article: 50214
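[To put rough numbers on the voltage-drop idea, all values here being back-of-the-envelope assumptions rather than figures from the thread: 1 oz copper foil is on the order of 0.5 milliohm per square, so a 5 mil wide, 1 inch long trace is about 200 squares, i.e. roughly 0.1 ohm.]

$$R \approx 200 \times 0.5\,\mathrm{m\Omega} \approx 0.1\,\Omega, \qquad V = I R \approx 100\,\mathrm{mA} \times 0.1\,\Omega = 10\,\mathrm{mV}$$

[A drop of that size is readable on a decent DVM, but meter offsets and thermal EMFs are of a similar order, so calibrating against a known current is advisable if you need more than a rough estimate.]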
Check http://www.amontec.com/chameleon.shtml

This is a 'dongle' format CPLD dev. board based on the parallel port. Buying a Chameleon POD, you will receive our I2C Master controller for FREE (VHDL source code). So, you can generate and control a true 400 KHz I2C bus from your Parallel Port.

Be happy
Laurent

vasu wrote:
> hi all am doing a project which requires i2c bus interface ..is there any simple vhdl code which could do the function of sending around 400khz clock for scl and sda alone? since i am using spartan fpga for this project would like to get some codes for the above function ..pls send me at the earliest
> thanks
> vasu

Article: 50215
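[For the narrow question of generating a roughly 400 kHz SCL inside the FPGA, a simple clock divider is enough. Here is a minimal VHDL sketch assuming a 40 MHz system clock; the clock rate, names and reset style are assumptions, not taken from any of the products mentioned above.]

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Minimal SCL generator sketch: divides an assumed 40 MHz system clock
-- down to a 400 kHz square wave.
entity scl_gen is
  generic (
    CLK_HZ : natural := 40_000_000;  -- assumed system clock frequency
    SCL_HZ : natural := 400_000
  );
  port (
    clk : in  std_logic;
    rst : in  std_logic;
    scl : out std_logic
  );
end entity;

architecture rtl of scl_gen is
  constant HALF_DIV : natural := CLK_HZ / (2 * SCL_HZ);  -- 50 for 40 MHz
  signal count : natural range 0 to HALF_DIV - 1 := 0;
  signal scl_i : std_logic := '1';
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        count <= 0;
        scl_i <= '1';
      elsif count = HALF_DIV - 1 then
        count <= 0;
        scl_i <= not scl_i;  -- toggle every half period -> 400 kHz output
      else
        count <= count + 1;
      end if;
    end if;
  end process;
  scl <= scl_i;
end architecture;

[Note that a real I2C master also needs open-drain drive on SCL/SDA (drive '0' or release to 'Z'), start/stop generation and ideally clock stretching, so this divider is only the first small piece.]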
I'm using a C64 DSP on a board from Ateme. To be able to input analog signals I have an extra daughtercard with a Xilinx FPGA. The daughtercard connects through the standard TI expansion connection with two 80 pin connectors. Usually the communication between DSP and FPGA is realized with the EMIF protocol. Unfortunately this doesn't work correctly. If I try to address the A/D converters the DSP program crashes and only a hard reset helps.

Are there other possibilities to communicate between DSP and FPGA?

Thanks,
Stephan

--
Fraunhofer-Einrichtung Systeme der Kommunikationstechnik

Article: 50216
Hi there,

we're having some probs getting the software to run properly when using an external ACEX to implement a DMA controller transferring images from a CCD to the SDRAM memory banks.

That's how things go:
- The software starts the transfer by writing a word in an external (in the PLD) register.
- The PLD does the transfer by driving the SDRAM pins, obeying the MBREQ/MBGNT protocol with the StrongARM.
- At the end of the transfer, the ACEX drives a StrongARM GPIO which is bound to an ISR handler in the SW to process the acquired image.

Trouble is I never get the ISR handler to run, the micro gets hung and the HW watchdog resets everything!!?!

When running a similar operation from inside a small test prog it works fine (transfer works and ISR gets called), but from within the final app (380KB image) I get the described behaviour (i.e. ISR never triggers and micro gets hung).

Anybody ever had similar probs? or has got some hints/tips? Is there something to be VERY careful of when dealing with ACEX and SDRAMs?

Any help will be VERY appreciated.

Thanx in advance,
Mo (yet another embedded sw guy)

use e-mail below to contact me privately if you can suggest something.
ramui123__at__hotmail.com

--
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG

Article: 50217
Chen Wei Tseng <chenwei.tseng@xilinx.com> wrote in message news:<3DEE1ED8.7CD17D2E@xilinx.com>...
> Markus,
>
> Two ways.
>
> 1. Create RPM out of this EDIF file will get the placement preserved for future usage. (easy way) See Xilinx App Note 422 on how this can be accomplished.
>
> http://support.xilinx.com/xapp/xapp422.pdf
>
> If routing resources need to be preserved as well, try throwing FPGA Editor's directed routing constraints into the RPM's NCF file as well. Do note that directed routing is a feature that is still under quite a bit of development.
>
> 2. Create Hard Macro out of this EDIF's implementation. (hard way)
>
> http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=10901
>
> -Wei

Hi Wei,

The Xapp422 was quite useful for creating macros, and I am facing a warning message pop-up window, like "Not all CLB-type symbols are floorplanned, continue?". What does it actually mean?

Best regards,
Muthu

Article: 50218
John Jacob wrote:
> > I'm not sure where you're getting your pricing information from, but it doesn't seem accurate to me. Note that a 21160 is not a $30 part, so maybe you're looking at our quad 21160 board, not our quad 21161 board, which is substantially less expensive. Also note that if you only ask for low volume quotes, the pricing will be MUCH higher than reasonable volume orders - support almost seems inversely proportional to order size. The toolkit (our windows and linux API) and VDSP are one time costs.
>
> The quote I obtained was for a HH3U for a PCI bus. It indeed has the 21160, which according to the ADI pricing sheet lists for 145.00. But even subtracting 400 from the quote for a 21161 still puts it at a premium, at least from my perspective. Also, the quote was for a quantity of (1).

That explains the pricing - the 21160 board and a quantity of 1. If you have a volume requirement, talk to our sales people about a true volume quote on our boards. Note that the 21160 board is a high end board (66 MHz PCI, etc.) while the 21161 is designed as a more low cost board. Since it's not just the DSPs that differ on the boards, the quantity 1 price of a quad 21161 is almost 50% less than the quad 21160 price.

> > Also, note that you'll need Visual DSP whether you buy boards or develop your own - there are no shareware compilers for the 21161 or 21160, and you'll also probably want an emulator, though with our boards we do have a software plugin for the VDSP debugger that is cheaper.
>
> To date, I have found several sources for GCC source code for the SHARC DSP.
>
> ftp://ftp.analog.com/pub/dsp/dev_tool/21k_tool/gnu_src/g21k-3.3-src.tar.Z
>
> In addition, a release for the DSP 21xx has been included, presently not available from AD, but from:
>
> ftp://ftp.dgii.com/users/rick/g21xx.tar.gz
>
> See also http://www3.telus.net/sharpshin/index.html, which is another 21xxx open source set of SHARC DSP tools.
>
> None of them have been personally tried, as I am still deciding which hardware options to pursue. However, after donating substantial funds to the Bill Gates retirement fund over the years, I find the open source rather refreshing.

You might want to make sure that these support 21160s and 21161s. I didn't think they had even been updated for 21065s, but maybe I'm out of date on that ...

> I am by no means denigrating your products, for they appear to have an excellent reputation from a number of sources and are constantly recommended. However, they simply appear too expensive for my application.
>
> > Also, the $13K for a quad Tiger is way off as well - are you including the cost of VDSP and the toolkit in it, which are one time development costs?
>
> Yes. And your product with software is about the cost of some of your competitors' boards alone. Again, you no doubt have an excellent product, but it is completely out of my price range.

My only advice is to make sure you talk to our sales people about your true requirements and volumes. Generally our prices can come down considerably with volume, as many expenses tend to be per customer as opposed to order size (support, sales costs, etc), while of course the regular volume savings (larger part buys and builds) also come into account. So just make sure that you have an accurate picture before heading down your own development path.

The last thing I'll mention before letting this thread die, and by the way, thanks for your professional attitude throughout it, is that there are more advantages to going with COTS than just potential time and cost savings for the current product. The most significant, and why we have many repeat customers, is for future product upgrades. For example, if you design in our 21160 board, and in a year or two need more horsepower, you can upgrade to our TigerSharc boards. While there are obviously differences in the DSPs, we try to keep our architectures, tools and APIs as close as possible, so that your porting exercise is minimized. Also, we support the same architecture on all the PCI platforms (desktop, 6U, 3U, PMC, PC104Plus) so that you can move platforms without changing your code.

---
Ron Huizen
BittWare

Article: 50219
Hi everybody,

I have the following situation in my board design :

(A)FPGA-pin--"R"-------T------connector--cable--connector------T-----FPGA-pin(B)

R = series termination resistor
T = "5 mil width" PCB trace

More info :
The resistor is for series termination at the source. The termination resistor value is 33 ohms. The cable is a 3 feet long SCSI cable. The connectors are 68-pin SCSI female, high density connectors.

Suppose I know for sure that the (A) side FPGA pad will always be used only to drive the signal, and also that the (B) side FPGA pad will always be used only to receive the signal and "NEVER" to drive... given this assumption, is the above picture acceptable from the termination point-of-view?

please do write back with your suggestions,

thanks very much,
regards,
Anand Kulkarni

Article: 50220
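[As a rough sanity check on the resistor value, with the 50 ohm line impedance and the driver output impedance below being assumptions rather than measured values: for series termination at the source you want the resistor plus the driver's output impedance to roughly match the characteristic impedance of the trace/cable.]

$$R_{series} \approx Z_0 - Z_{out} \approx 50\,\Omega - 17\,\Omega \approx 33\,\Omega$$

[Source-only termination like this works cleanly when the far end is a single high-impedance receiver, which matches the stated assumption that the (B) pad never drives; the mid-line connectors and the long cable are the parts most likely to complicate the picture.]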
Hi,

Thank you for your suggestions. All this time, I have been trying out different things with a 3 nios system. In this system, one NIOS, which I call the 'master', is connected to the uart and runs the germs monitor while the other two run a specific user code stored in their respective local memories. Also the master nios has access to the local memory of the other two nios processors. This way, I can change the code which the two nios cpu's execute at run-time rather than go through the entire generation and compilation process. This saves a lot of time.

Thanks once again,
regards,
Satchit

kempaj@yahoo.com (Jesse Kempa) wrote in message news:<95776079.0211071322.7432ddd5@posting.google.com>...
> No problems doing this whatsoever, but there are some additional considerations that may or may not be immediately obvious if you are sharing a memory device for *data* (stack) memory:
>
> First, if you are using this shared global memory as data memory for both processors:
>
> You will need to modify your SDK slightly to divide up the memory for each processor's use - this is because SOPC Builder generates your SDK, and more specifically, stack size and address span, based on which memory device you choose for each CPU's data memory. Things like the stack and frame pointers are automatically adjusted by the CPU when entering and exiting functions, and having two CPUs each think they own common data areas will make things ugly fast.
>
> Let's say your global memory (called "global_memory" for argument's sake) is from 0x1000000 to 0x1FFFFFF. If you wish to use this common memory for data for two CPUs, then go ahead and generate the system with both CPU's data memory set to "global_memory". Once that's done, edit both the "excalibur.h" and "excalibur.s" (or "nios.h" and "nios.s") files. There is one set of header files like these for each CPU in your system (divided into separate SDK's for each CPU). These are the memory map files SOPC Builder generates that define, for the compiler, memory maps and other related things. In these files you'll be able to hard-code in start address, end address, range, and stack top address for each CPU in the system.
>
> As an example, dividing the memory into two even parts: CPU #1 could be defined as having data memory beginning at 0x1000000, ending at 0x17FFFFF, size = 0x800000, and stack top at 0x17FFFFF. CPU #2 could be set to have its data memory begin at 0x1800000, end at 0x1FFFFFF, size = 0x800000, stack top = 0x1FFFFFF. After this, compile your code for each CPU. You can then truly ensure each CPU will have its own private data memory/stack space.
>
> Second - If your global memory is *not* generic CPU data memory, but some other memory you're using for buffer space or shared data between two CPUs (that is, something you manually assign a pointer to, rather than generic variables like integer arrays):
>
> There is no special action required in your design. Just write your software to ensure proper sharing of the buffer space (if using an RTOS of some sort, this is nearly always done with semaphores...).
>
> Regardless of these, there is one more variable you can play with to tweak performance of a shared memory resource: the bus arbitration settings in SOPC Builder. By default, each CPU (master) accessing shared memory has equal "rights" to it. This is more of an advanced topic, but if you have one system CPU which needs unhindered access to the shared memory, you can give it a higher arbitration assignment than the other CPU (a good example is if you have one simple CPU for user interface, communication, or file system tasks, and a second for non-stop processing). For a conceptual overview, one of the Altera app notes, #184 I think, covers how the arbitration works and how conflicts on shared slaves are handled.
>
> PS: Sorry for the long-winded explanation. It really is simple if you keep the above in mind though!
>
> - Jesse Kempa
>
> satchit_h@hotmail.com (satchit) wrote in message news:<ddf018f6.0211061008.467ded5c@posting.google.com>...
> > Hi,
> >
> > Thank you for the reply. Currently I am trying out a simple 2-processor configuration where each processor in addition to some local ram, has access to some global memory. I am trying to sort out some bugs in my design. Did you encounter any problems while trying to simultaneously access global memory? thank you once again.
> >
> > regards,
> > Satchit

Article: 50221
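[As a quick arithmetic check on the even split in the quoted example, taken directly from the address bounds given there, each half of the 0x1000000 to 0x1FFFFFF region works out to 0x800000 bytes, i.e. 8 MB:]

$$\mathtt{0x17FFFFF} - \mathtt{0x1000000} + 1 = \mathtt{0x800000}, \qquad \mathtt{0x1FFFFFF} - \mathtt{0x1800000} + 1 = \mathtt{0x800000}$$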
Muthu,

Usually, when you create a RPM, you would want to RLOC everything. This warning message tells you that you still have components (LUTs/DFF/etc...) not placed when you are trying to create the RPM. This will result in the unplaced logic not being bound to the RPM shape.

Possible causes of this:

1. Using replace all with placement - and you have DFFs packed into the IOB. Since Floorplanner doesn't show components in the IOB, you'll have to manually place the DFF components or leave them out of the RPM shape.

2. When you're manually placing the components, you didn't place all the components other than the IOBs.

Regards,
Wei

Muthu wrote:
> Chen Wei Tseng <chenwei.tseng@xilinx.com> wrote in message news:<3DEE1ED8.7CD17D2E@xilinx.com>...
> > Markus,
> >
> > Two ways.
> >
> > 1. Create RPM out of this EDIF file will get the placement preserved for future usage. (easy way) See Xilinx App Note 422 on how this can be accomplished.
> >
> > http://support.xilinx.com/xapp/xapp422.pdf
> >
> > If routing resources need to be preserved as well, try throwing FPGA Editor's directed routing constraints into the RPM's NCF file as well. Do note that directed routing is a feature that is still under quite a bit of development.
> >
> > 2. Create Hard Macro out of this EDIF's implementation. (hard way)
> >
> > http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=10901
> >
> > -Wei
>
> Hi Wei,
>
> The Xapp422 was quite useful for creating macros, and I am facing a warning message pop-up window, like "Not all CLB-type symbols are floorplanned, continue?". What does it actually mean?
>
> Best regards,
> Muthu

Article: 50222
In this relatively slow case I would not gamble on the coincidence between f and 2f outputs, since it is not necessary. Just use the other 2f clock edge and you know that it will be far away from the rising edge of f (half period of 40 MHz = 12.5 ns, an eternity...). Why worry about subtleties when there is a brute-force solution?

But remember, 20 MHz is lower than the guaranteed min frequency for the DLL. That's a much stronger reason to worry...

Peter Alfke, Xilinx Applications
============================================

Ray Andraka wrote:
> Shoulda been ps. IIRC, we saw on the order of about 600 ps jitter introduced in getting the clock onto the chip, and that was enough to cause a problem going between clock domains where there were no LUTs between the flip-flops. Since then, we've taken the better safe than sorry route. The little bit of extra consideration in design is saved ten fold in not having to chase it down in the lab (it was a bitch to find).
>
> Jeff Cunningham wrote:
> > Ray Andraka <ray@andraka.com> wrote in message news:3DD9CD49.48C37CA6@andraka.com...
> > > Xilinx originally said that it was safe to go directly between the clock domains, however experience dictates otherwise.
> >
> > I have a design where I go from rising edge to rising edge between 1X and 2X clock domains and haven't noticed any problems yet. But now I wonder if I should go back and redesign this to eliminate the possibility of timing glitches. I seem to remember Xilinx saying that was OK, but looking at the latest SpartanII datasheet I don't really see anything on it. I browsed the answers database and couldn't find anything on this. Peter or Austin, I would love to hear your learned opinion on going between 1X and 2X domains driven by a DLL.
> >
> > > Instead, I recommend that you be very careful about how you transfer data across the 1x/2x clock boundaries, as the edge can fluctuate by several hundred ns relative to the edge in the other domain...enough to cause mis-clocking.
> >
> > Yikes - several hundred ns? really? Or did you mean ps?
> >
> > -Jeff
>
> --
> --Ray Andraka, P.E.
> President, the Andraka Consulting Group, Inc.
> 401/884-7930 Fax 401/884-7950
> email ray@andraka.com
> http://www.andraka.com
>
> "They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."
> -Benjamin Franklin, 1759

Article: 50223
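[A minimal VHDL sketch of what Peter is describing, assuming a 20 MHz 1x clock and a 40 MHz 2x clock from the same DLL; the signal names and the 8-bit width are made up for illustration. Launch in the 1x domain, capture on the falling edge of the 2x clock so the capture edge sits half a 2x period (12.5 ns) away from the nominally aligned rising edges, then re-register on the rising edge of the 2x clock.]

library ieee;
use ieee.std_logic_1164.all;

entity cross_1x_to_2x is
  port (
    clk1x : in  std_logic;  -- DLL 1x output, assumed 20 MHz
    clk2x : in  std_logic;  -- DLL 2x output, assumed 40 MHz
    d     : in  std_logic_vector(7 downto 0);
    q     : out std_logic_vector(7 downto 0)
  );
end entity;

architecture rtl of cross_1x_to_2x is
  signal d_1x   : std_logic_vector(7 downto 0);
  signal d_fall : std_logic_vector(7 downto 0);
begin
  -- launch register in the 1x domain
  process (clk1x)
  begin
    if rising_edge(clk1x) then
      d_1x <= d;
    end if;
  end process;

  -- capture on the falling edge of clk2x: roughly 12.5 ns of margin on
  -- either side of the aligned rising edges, so DLL jitter is not an issue
  process (clk2x)
  begin
    if falling_edge(clk2x) then
      d_fall <= d_1x;
    end if;
  end process;

  -- re-registration onto the rising edge of clk2x for the rest of the 2x logic
  process (clk2x)
  begin
    if rising_edge(clk2x) then
      q <= d_fall;
    end if;
  end process;
end architecture;

[The falling-to-rising re-registration inside the 2x domain still has half a 2x period available, which is ample at 40 MHz.]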
> Do the guts of your Spartans/System-bus run at 33MHz but then you're going to the 4X clock to time multiplex to save RAM chips?

Yes. The application is strictly memory bound: there are a large number of state machines that are "clocked" on the data_valid signal coming from their associated port of the memory controller. The controller's job is to accept addresses from the fairly modest number of asynchronous machines and to schedule them in FIFO order. Therefore, the more I can get out of a single SRAM, the better.

I've had an extensive off-line discussion about this topic with another newsgroup individual. It's starting to look more feasible to simply have a clock local to each FPGA and then handle data on the bus asynchronously.

--
Brian Wickman
Center for Advanced Computing
http://www.brianwickman.com
http://cac.engin.umich.edu

Article: 50224
No, they don't. It is not necessarily safe to assume that a LUT provides enough delay to ensure operation either; there is no stated minimum delay in a LUT, nor is there a stated maximum skew due to jitter introduced before the clock network and DLL.

Hal Murray wrote:
> > Shoulda been ps. IIRC, we saw on the order of about 600 ps jitter introduced in getting the clock onto the chip, and that was enough to cause a problem going between clock domains where there were no LUTs between the flip-flops. Since then, we've taken the better safe than sorry route. The little bit of extra consideration in design is saved ten fold in not having to chase it down in the lab (it was a bitch to find).
>
> Do the tools catch that case now? Does the data sheet mention it?
>
> Would it be safe if you went through a (dummy) LUT?
>
> --
> The suespammers.org mail server is located in California. So are all my other mailboxes. Please do not send unsolicited bulk e-mail or unsolicited commercial e-mail to my suespammers.org address or any of my other addresses. These are my opinions, not necessarily my employer's. I hate spam.

--
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety."
-Benjamin Franklin, 1759