Some systems require controlled behavior, even in the absence (due to failure, etc.) of a clock. In these cases, an asynchronously asserted, synchronously deasserted reset is one method:

process (rstin, clk) is
  variable rstmeta : std_logic;
begin
  if rstin = '1' then
    rstout <= '1';
    rstmeta := '1';
  elsif rising_edge(clk) then
    rstout <= rstmeta;
    rstmeta := '0';
  end if;
end process;

In the example above, rstout is routed to the async reset pins of all registers clocked by clk and needing a reset. Another method involves disabling the clock during asynchronous reset, then enabling it some number of clock cycles later. This can be very efficient with the enabled global clock buffers in xilinx parts.

Andy

mk wrote:
> On 5 Jul 2006 00:07:24 -0700, saumyajit_tech@yahoo.co.in wrote:
>
> >2) My point number 2 is reagrding putting some logic( SRL16, FFs..) in
> >the reset ckt. I am afraid, it vilaotes the basic rule of
> >controllability of DFT. I should be able to control the state of all
> >the FFs from a single reset pin out side the chip. is not it true ?
>
> You can always insert control points after your reset logic so that
> when you enable test the flops become externally controllable again.
> It's easy to do this with current scan insertion tools.

Article: 104801
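For illustration, a minimal sketch of the second method Andy mentions: hold the global clock off during asynchronous reset and release it a few cycles later through an enabled clock buffer (BUFGCE). The signal names (clk_in, clk_gated, clk_en, en_cnt) and the eight-cycle delay are assumptions for this example, and the raw input clock is assumed to keep running so the enable counter can advance.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;
library unisim;
use unisim.vcomponents.all;
...
signal en_cnt    : unsigned(2 downto 0) := (others => '0');
signal clk_en    : std_logic := '0';
signal clk_gated : std_logic;
...
-- Enabled global buffer: the fabric logic runs from clk_gated.
bufg_i : BUFGCE
  port map (
    O  => clk_gated,  -- gated global clock to the rest of the design
    CE => clk_en,     -- released a few cycles after reset goes away
    I  => clk_in);    -- free-running input clock

-- Hold the clock enable low during reset and for eight cycles after it.
process (rstin, clk_in)
begin
  if rstin = '1' then
    en_cnt <= (others => '0');
    clk_en <= '0';
  elsif rising_edge(clk_in) then
    if en_cnt = 7 then
      clk_en <= '1';
    else
      en_cnt <= en_cnt + 1;
    end if;
  end if;
end process;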
Or just infer the block ram from a 9/18/36 bit wide array... Then you can store anything you like in the ram (std_logic, std_logic_vector, unsigned, integer, boolean, enumerated types (states!), records, etc.).

Check your synthesis manual for templates for inferring block ram from arrays.

Andy

Ray Andraka wrote:
> PeterSmith1954@googlemail.com wrote:
>
> >
> > The last time I used *9 / *18 / *36 mode block rams, I instantiated
> > them as such and they exposed themselves as those *8 + the parity bit.
> > Look for the instantiation template and you'll see what I mean.
> >
> > Just assign your ninth bit (for each block ram) to the parity bit.
> >
> > Cheers
> >
> > PeteS
> >
> The primitives have the bits separated off as parity bits, but other
> than the addressing considerations if you have different depths on the
> dual ports, they are no different than the data bits. It may be easier
> to deal with if you make a wrapper for the Xilinx primitives that bring
> in/out an 18 bit bus.

Article: 104802
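A minimal inference template along the lines Andy suggests, for a 512 x 36 synchronous RAM described as a VHDL array. The entity and signal names are made up for the example; the element type could just as well be a record or an enumerated type, and the exact coding style your tool recognizes is what its synthesis manual template shows.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram36 is
  port (
    clk   : in  std_logic;
    we    : in  std_logic;
    waddr : in  unsigned(8 downto 0);
    raddr : in  unsigned(8 downto 0);
    din   : in  std_logic_vector(35 downto 0);
    dout  : out std_logic_vector(35 downto 0));
end entity;

architecture rtl of ram36 is
  type ram_t is array (0 to 511) of std_logic_vector(35 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(waddr)) <= din;   -- write port
      end if;
      dout <= ram(to_integer(raddr));    -- registered (synchronous) read port
    end if;
  end process;
end architecture;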
"Weng Tianxiang" <wtxwtx@gmail.com> wrote in message news:1152158287.898968.14220@l70g2000cwa.googlegroups.com... > Hi, > Can I find a website that lists the time for sorting 1 million > 64-bit/32-bit random data using the fastest software sorting algorithm > in a PC? > > Thank you. > > Weng Since you posted on fpga and vhdl newsgroups in addition to programming, are you interested in an FPGA solution? Most of the information that one *can* find on the web deals with serial implementations. Parallel implementations for an FPGA such as the Bitonic sort can speed up the implementation significantly.Article: 104803
"Andy" <jonesandy@comcast.net> wrote in message news:1152202901.742237.164620@a14g2000cwb.googlegroups.com... > Or just infer the block ram from an 9/18/36 bit wide array... > > Then you can store anything you like in the ram (std_logic, > std_logic_vector, unsigned, integer, boolean, enumerated types > (states!), records, etc. > > Check your synthesis manual for templates for inferring block ram from > arrays. > > Andy > Hi Andy, Back when I was young and idealistic, (well actually about 4 years ago!) I used to try and infer these things in my code. Trouble was, the Synplify tool wouldn't infer 18KiB RAMs. Has that changed now? Thanks, Syms.Article: 104804
Hi All,

I am a student of the University of Arizona. I am trying to get xilkernel to work with Microblaze on a spartan 3 FPGA board. My C application looks like this:

#include "xmk.h"
#include <stdio.h>
#include <pthread.h>
#include <os_config.h>
#include <sys/process.h>

void main()
{
    xilkernel_main();
}

I am using xmd to download this. I am able to connect to microblaze on the FPGA and I am able to download the C application onto the SRAM. When I give the run command and hit return, the following error comes up:

Error MDT :Fatal Error: Detected Software Breakpoint.

But Microblaze is running. Can somebody please explain why this is happening? and how I can overcome this? Why is there an error at this stage ?

Ajay

Article: 104805
"Symon" <symon_brewer@hotmail.com> wrote in message news:44ad423a$1_1@x-privat.org... > "Andy" <jonesandy@comcast.net> wrote in message > news:1152202901.742237.164620@a14g2000cwb.googlegroups.com... >> Or just infer the block ram from an 9/18/36 bit wide array... >> >> Then you can store anything you like in the ram (std_logic, >> std_logic_vector, unsigned, integer, boolean, enumerated types >> (states!), records, etc. >> >> Check your synthesis manual for templates for inferring block ram from >> arrays. >> >> Andy >> > Hi Andy, > Back when I was young and idealistic, (well actually about 4 years ago!) I > used to try and infer these things in my code. Trouble was, the Synplify > tool wouldn't infer 18KiB RAMs. Has that changed now? > Thanks, Syms. I've had no problem with Synplify inference for the last few years. The memories easily imply up to 9 bits, up to 18 bits, or up to 36 bits in a single BlockRAM. I still instantiate memories often in order to apply parameters such as READ_FIRST at the source level or to control more subtle aspects of dual-port memories.Article: 104806
I've been messing around with my own sort of development board for an Altera MAX 3064 because I have some downtime at work (I'm a co-op) and wanted to teach myself some PLD stuff. I wanted to put a manual clock button on it so I could just mess around with some simple designs to get a feel for how to use HDLs and Quartus. I have some DIP switches for inputs and LEDs for outputs, everything running on 3.3V. But I can't manage to get the clock to pulse just once when I press the button. Can anyone recommend a debouncing circuit to use for something like this?

Article: 104807
"Brian McFarland" <brian.mcf1985@gmail.com> wrote in message news:1152212019.602303.49750@s26g2000cwa.googlegroups.com... > I've been messing around with my own sort of development board for an > Altera MAX 3064 because I have some downtime at work (I'm a coop) and > wanted to teach myself some PLD stuff. I wanted ot put a manual clock > button on it so I could just mess around with some simple designs to > get a feel for how to use HDL's and Quartus. I have some DIPs for > inputs and LEDs for outputs, everything run on 3.3V. But the I can't > manage to get the clock to pulse just once when I press the button. > Can anyone recomend a debouncing circuit to use for something l like > this? > Best way of all is to use a single pole change over switch and a set-reset circuit (cross coupled two input nands). Guaranteed to give bounce free output without any tricky time constants/capacitor slugging/timers etc! http://www.wheelnut.plus.com/sr.gif (excuse poor drawing!) HTH SlurpArticle: 104808
The last time I tried, XST (ISE 7.1) would only infer *8/*16/*32-bit RAMs and did not use the "parity" bits. Xilinx answered that this is a known bug/limitation; I don't know if it is fixed now. So I ended up writing a wrapper for BRAM instantiation...

Thomas
www.entner-electronics.com

"Andy" <jonesandy@comcast.net> schrieb im Newsbeitrag news:1152202901.742237.164620@a14g2000cwb.googlegroups.com...
> Or just infer the block ram from an 9/18/36 bit wide array...
>
> Then you can store anything you like in the ram (std_logic,
> std_logic_vector, unsigned, integer, boolean, enumerated types
> (states!), records, etc.
>
> Check your synthesis manual for templates for inferring block ram from
> arrays.
>
> Andy
>
>
> Ray Andraka wrote:
>> PeterSmith1954@googlemail.com wrote:
>>
>>
>> >
>> > The last time I used *9 / *18 / *36 mode block rams, I instantiated
>> > them as such and they exposed themselves as those *8 + the parity bit.
>> > Look for the instantiation template and you'll see what I mean.
>> >
>> > Just assign your ninth bit (for each block ram) to the parity bit.
>> >
>> > Cheers
>> >
>> > PeteS
>> >
>> The primitives have the bits separated off as parity bits, but other
>> than the addressing considerations if you have different depths on the
>> dual ports, they are no different than the data bits. It may be easier
>> to deal with if you make a wrapper for the Xilinx primitives that bring
>> in/out an 18 bit bus.
>

Article: 104809
Weng Tianxiang wrote: > Hi, > Can I find a website that lists the time for sorting 1 million > 64-bit/32-bit random data using the fastest software sorting algorithm > in a PC? > > Thank you. > > Weng A post to c.p, c.a.f, c.l.v for 3 different sets of answers. wikipedia is probably a good place to start for software only solution. Largely depends on overall memory performance since very little work is done for the cost of memory access and also what you really want to sort for, whether it is already mostly sorted or always random ordered and if you are only counting frequencies rather than true sorting. Research for quicksort, mergesort plus radix sort and counter sorts for the larger cases and bubble or insertion sort for the tiny cases. If you have Knuths 3 vol set that will help, the material has been repeated many times in every CS text. Most give you the answer in terms of ideal machines using big O notation. Knuths reference books were written in the 60s when machines ran below 1mips with memory in same ballpark as cpu, so no caches. For the PC case For very small sorts say upto many thousands of items, the entire sort can be done from L1, L2 caches and is likely to be a good test of the cpu & cache system. Bigger sorts can use these smaller sorts in succession with some sort of merging. 1 random million isn't even remotely going to take longer than a blink since L2 caches are big enough to hold a good fraction of the 1M dataset. In any nix system you already have a basic quicksort on the cmd line but IIRC its not known to be stable for data that is already nearly sorted. If you code up quicksort, you can randomize the order to guarantee quicksort sticks to O N log N time otherwise it can degrade badly. Mergesort always takes bigger O N log N time since it does more work but it doesn't degrade with data patterns. A quicksort with pre randomization probably comes out even. Now if computers still had a form of memory that had the performance of SRAM but the cost and size of DRAM, the answer would be much easier to predict and radix sort would come out handily as that sort time follows the no of partitions performed on the word * no of items. Most radix sorts will use byte sorting so that would mean 4 passes of byte testing and writing items to 1 of 256 respective buckets which can get very large, esp if the data is mostly the same. If you have enough of this flat memory to hold the data on input and output, then you can do it in 2 passes with 65536 buckets, this will break modern PC caches though. Since most PCs don't have SRAM like memory they simulate that with cache hierarchy the answer gets more complicated to predict. Fully random uncached accesses into PC main memory are surprisingly slow, towards several 100ns. Thats called a Memory wall. If you sort for N random ordered items and sweep N from very small to large to enormous, the quicksort and others follow O N log N time but with abrupt steps in O that increase as the hardware works ever harder to simulate flat memory access times. IIRC O can vary over over a factor of 10. O isn't anywhere near constant except on ideal machines with something like RLDRAM or infinitely big caches. I believe that mergesort will have a much more constant O. For the FPGA case. Each compare and swap could be done in an atomic operation needing 2 reads and possibly 2 conditional writes in merge sort. WIth a dual port memory that might take 1 or 2 clocks in a pipeline fashion for sorts that fit into Blockram. 
Now if the data is spread over many sort engines, the parallelism may make up for the much lower clock. I might favor radix 3 sort since you could effectively use BlockRams as temporary buckets. Basically stream all the DRAM values through to 2048 distinct buckets depending in turn on upper, middle, lower 11b fields. As each bucket fills, save it back to DRAM and reset to empty. Same can be done slower with radix 4 with 256 buckets or even faster with radix 2 with 65536 buckets on a really big FPGA. The sort time will approach 6M (or resp 8M, 4M) memory cycles at the fastest possible rate for just the keys. Some complexity comes in managing the buckets in the output stream into seemingly contiguous buckets. If you are just doing counter sort of values limited to x values, things get so much simpler, just funnel a memory read stream through x counters. Does x have to be 2^32? John Jakson transputer guyArticle: 104810
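To make the counter-sort remark concrete, here is a minimal VHDL sketch: stream the keys past one counter per possible value, then read the counters back in address order to get the sorted (run-length) result. The 8-bit key, the port names, and the register-based counter bank are all assumptions for illustration; for the 2048-bucket radix passes described above, the counters or buckets would live in block RAM instead.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity counter_sort is
  port (
    clk       : in  std_logic;
    rst       : in  std_logic;
    din       : in  std_logic_vector(7 downto 0);  -- key stream
    din_valid : in  std_logic);
end entity;

architecture rtl of counter_sort is
  type count_array is array (0 to 255) of unsigned(31 downto 0);
  signal counts : count_array;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        counts <= (others => (others => '0'));
      elsif din_valid = '1' then
        -- one count per possible key value; reading the counters back
        -- in address order yields the sorted result
        counts(to_integer(unsigned(din))) <= counts(to_integer(unsigned(din))) + 1;
      end if;
    end if;
  end process;
end architecture;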
David <simianfever@gmail.com> wrote:

>Hi All,
>
>I'm trying to get a DDR controller for the Micron MT46V16M16TG-75 up and running on a Memec V2MB1000 dev board but not having much luck so far.
>
>I initially tried the Opencores ddr_sdr which seemed to be sending the correct signals to the RAM when I checked with a scope, but DQS was not being strobed during the read cycle and the controller was reading back the last value written - presumably because of the bus capacitance. After reading a few discussions I got the idea that my external clock might be skewed, but I was unsure of how to calculate the necessary phase shift as I have no idea how long the feedback trace is (the feedback and clock pins are right next to each other on the package however so I would imagine it would be pretty short). I tried various speculatory values for the phase shift to no avail.
>
>In the last couple of days I've been trying a similar thing in EDK 8.1 - I downloaded the XBD files for the board from Avnet and set up a simple memory test project using the OPB DDR controller. The test passes for the flash memory and then freezes when it gets to the DDR and returns neither a pass or a fail. I'm kinda stuck here and unsure what to try next - is it possible that there's a problem with the board? Any help you guys could give me would be much appreciated.
>

What kind of memory speed are you trying to achieve? Maybe you should try to use 100MHz first. Anyway, if the DDR memory is not responding, you should look into the initialisation and alignment of the clock and control signals. A logic analyzer (16 channels at 400MHz is enough) will help a lot.

--
Reply to nico@nctdevpuntnl (punt=.)
Bedrijven en winkels vindt U op www.adresboekje.nl

Article: 104811
"Alex" <alexmchale@gmail.com> wrote: >Thanks to everyone who has replied. > >Can anyone point me to a good resource on learning to interface the >FPGA with a RAM chip? I'm using VHDL for everything in the FPGA. The >more example VHDL I can see, the better. > >This is a project that landed in my lap that I'm having to learn a lot >for as I go. The more resources I have to learn how to use this FPGA >and VHDL, the better. This may be a good start: http://minila.sourceforge.net -- Reply to nico@nctdevpuntnl (punt=.) Bedrijven en winkels vindt U op www.adresboekje.nlArticle: 104812
Hi all,

I was wondering which PC upgrades can make ISE run faster? For example, can it take advantage of dual CPUs and/or dual-core Xeons, etc.? I am currently running a 2.6 GHz P4 with HT, 2GB 400MHz RAM, 800FSB and it is too slow...

Thanks,
/Mikhail

Article: 104813
rickman wrote: > Jim Granville wrote: > >>rickman wrote: >> >>>But you have not responded as to why the FF can not oscillate. The FF >>>is more than just a penl standing on end or a ball on a hill. A FF is >>>a dynamic system with feedback and delays. My schooling taught me that >>>with the right combination of delay and gain (or the wrong combination) >>>it can oscillate. What makes these FFs different? >>> >>>The pen analogy is not a lot different from the pendulum. Yet a >>>pendulum can be chaotic! I think the pen is only good as a first order >>>approximation. For this sort of issue, the analogy requires further >>>scrutiny. >> >>Put this into a Spice pgm, and try it. > > > Two reasons why I can't... 1) I don't have Spice 2) I don't have > "this". > > > >>In fact, (good) spice should be able to show the settling-time-extension >>effects of metastability quite well. It would need carefull sweep >>of the drive voltage, at the instant the clock does the hand-over. > > > I am sure it can, but several posts here claim that CMOS FFs can't > oscillate and I am asking how people know this is a true fact. > Obviously simulating it or hooking up a test circuit can't prove it > won't oscillate. That can only prove it won't oscillate under those > conditions. Hi rickman, I find spice very good, for getting a 'feel' of how a circuit behaves, and you can add parasitics as you wish. Of course, every instance is different, but spice gives a good data-point. It also depends on what you mean by 'oscillate' : If you mean settle-whilst-ringing, or non-monotonic settling, then I'd call that very probable : but best modeled as an extended settling time, in the digital domain. ( which is what metastability is ) If you mean run forever at XX MHz, then that becomes improbable to the point that any device so poorly designed, would be culled. If you want a historical/physical sort of proof, look at the old 2 transistor cross-coupled multivibrator. Same regenerative scheme, at the high frequency realm. Transistors have more gain than a single FET, but this circuit does not oscillate at the 4 x Tpd rate - it does not have sufficent gain/phase to do so. > >>The FF I am used to, is Analog transmission gates, around single >>CMOS INV/OR gates - two forming the regenerative latch. >> >>These simple 'unbuffered' CMOS structures have finite analog gain, even >>at their peak, in the linear region. (unlike TTL ones ) > > > Why do you call this "unbuffered"? Don't the gates create gain and > delay? "unbuffered' refers to the simplest 2 fet INV / 4 fet OR/NAND. Std CMOS logic is usually buffered ( chain of 3 gates ) - look a 74AHCU04 data - this is a unbuffered inverter, used for Xtal oscillators, and a similar structure is found in most micocontrollers. > The more I think about this, the more I am starting to believe > that a FF is capable of chotic behavior. Then we agree ? How practical it is to get a number of FF's into this zone, with usefull 'yield', is another question. -jgArticle: 104814
Brian McFarland wrote: > I've been messing around with my own sort of development board for an > Altera MAX 3064 because I have some downtime at work (I'm a coop) and > wanted to teach myself some PLD stuff. I wanted ot put a manual clock > button on it so I could just mess around with some simple designs to > get a feel for how to use HDL's and Quartus. I have some DIPs for > inputs and LEDs for outputs, everything run on 3.3V. But the I can't > manage to get the clock to pulse just once when I press the button. > Can anyone recomend a debouncing circuit to use for something l like > this? If you use a microswitch ( SPDT ) action (so it flips from GND to Vcc), and a regen-hold feedback (or pinkeep), this solves the bounce effects. -jgArticle: 104815
Weng Tianxiang wrote: > Hi, > Can I find a website that lists the time for sorting 1 million > 64-bit/32-bit random data using the fastest software sorting algorithm > in a PC? > > Thank you. > > Weng > About a half second on my machine. Is that what you really wanted to know? -- Joe Wright "Everything should be made as simple as possible, but not simpler." --- Albert Einstein ---Article: 104816
Hello everyone,

I have a question that I'm having a difficult time confirming from the datasheets, app notes, answer database,....

Can a BUFGMUX drive a global clock directly in the Spartan-3? I know that you can drive the input of a DCM from the output of a BUFGMUX and then drive the global clock from there. But can I do this without the DCM? And I don't want to go through the logic fabric :)

From the Spartan-3 datasheet (Figure 18) the BUFGMUX can drive the Top or Bottom Spines. I think that can then get on the horizontal spine and then drive one of the global clocks. But I can't find any text to confirm that.

Cheers,
James.

Article: 104817
On Thu, 06 Jul 2006 11:53:39 -0700, Brian McFarland wrote: > I've been messing around with my own sort of development board for an > Altera MAX 3064 because I have some downtime at work (I'm a coop) and > wanted to teach myself some PLD stuff. I wanted ot put a manual clock > button on it so I could just mess around with some simple designs to > get a feel for how to use HDL's and Quartus. I have some DIPs for > inputs and LEDs for outputs, everything run on 3.3V. But the I can't > manage to get the clock to pulse just once when I press the button. > Can anyone recomend a debouncing circuit to use for something l like > this? Google "switch debounce." ~Dave~Article: 104818
On Thu, 6 Jul 2006 16:34:13 -0400, "MM" <mbmsv@yahoo.com> wrote:

>Hi all,
>
>I was wondering which PC upgrades can make ISE to run faster? For example,
>can it take advantage of dual CPU, and/or dual-core Xeon, etc.? I am
>currently running at 2.6 GHz P4 with HT, 2GB 400MHz RAM, 800FSB and it is
>too slow...

One option is to get an AMD Athlon 64 at 2.4 GHz with 1 MB cache. You'll definitely see a pretty good jump with that. The other option is to wait a month or so and get a Core 2 (E6600 or E6700) with 4 MB cache. That's going to give you a good jump too. Don't in any circumstance get another P4 machine. A third option is to get a 3.6 GHz CPU and upgrade your existing system, which should still give you a nice jump. While you're at it, get another 2 GB of memory depending on how big your designs are.

Article: 104819
Hi,

What I really want to know is whether the following formula for the best sorting time is correct (it is copied from Knuth's "Sorting and Searching") and not too far from what a PC actually achieves: 14.5 * N * (lg N).

There are about 20 algorithms, and I really don't know which formula should be selected as representative of the best software algorithm running on 1 million random data items.

14.5 * N * (lg N) is likely the best one.

Thank you.

Weng

Article: 104820
"Weng Tianxiang" <wtxwtx@gmail.com> wrote in message news:1152158287.898968.14220@l70g2000cwa.googlegroups.com... > Hi, > Can I find a website that lists the time for sorting 1 million > 64-bit/32-bit random data using the fastest software sorting algorithm > in a PC? Here's a ballpark figure: Let n = 1000000 n*log2(n) / 3Ghz = 6.64385619 milliseconds - OliverArticle: 104821
Hi James, Can you tell me specifically what document you are looking at (document number, version, date)? I looked at Spartan-3 and Spartan-3E data sheets on the Xilinx website and Figure 18 (in either document) wasn't related to clocks. The global clock distribution networks in the FPGA are driven by "global clock buffers". In the Spartan-3 families, it happens that "global clock buffers" are BUFGMUX. If you call out a BUFG in your design, it is transformed into a BUFGMUX by the MAP program (with the unused inputs of the BUFGMUX tied to appropriate constants...) These global resources are located at the top center and bottom center of the device. Spartan-3E, specifically, has some additional BUFGMUX on the left and right sides -- they are not fully global but still very useful. You can route a signal to a BUFGMUX, and then onto the global clock distribution network. It can come from anywhere -- a DCM, another BUFGMUX, some internal logic, or from an I/O pin. It is a flexible arrangement. If you have specific requirements for FPGA pin-to-pin timing (chip-level setup, hold, and clock to out) you will want to pay attention to which resources you use and where they are located with respect to each other so that you obtain the best performance possible. Hope that helps, Eric "James Morrison" <spam1@emorrison.ca> wrote in message news:1152219180.4041.41.camel@spice.emorrison.ca... > Hello everyone, > > I have a question that I'm having a difficult time confirming from the > datasheets, app notes, answer database,.... > > Can a BUFGMUX drive a global clock directly in the Spartan-3? I know > that you can drive the input of a DCM from the output of a BUFGMUX and > then drive the global clock from there. But can I do this without the > DCM? And I don't want to go through the logic fabric :) > > >From the Spartan-3 datasheet (Figure 18) the BUFGMUX can drive the Top > or Bottom Spines. I think that can then get on the horizontal spine and > then drive one of the global clocks. But I can't find any text to > confirm that.Article: 104822
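For completeness, routing a signal onto the global network through a BUFGMUX looks like the following sketch; the clock and select names are placeholders, and this is the same primitive (with its unused inputs tied off by MAP) that a plain BUFG is transformed into.

library unisim;
use unisim.vcomponents.all;
...
clkmux_i : BUFGMUX
  port map (
    O  => clk_global,  -- onto the global clock distribution network
    I0 => clk_a,       -- selected when S = '0'
    I1 => clk_b,       -- selected when S = '1'
    S  => clk_sel);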
"Weng Tianxiang" <wtxwtx@gmail.com> wrote in message news:1152223321.982640.267120@m79g2000cwm.googlegroups.com... > Hi, > What I really want to know is that the following formula for best > sorting timing is correct (it is copied from Knuth's "Sorting and > Searching") and it is not too far away from the PC computer reality: > 14.5* N * (lg N). > > There are about 20 algorithms, and I reall don't know what formula > should be selected as a representative for best software algorithm > running for 1 million random data. > > 14.5* N * (lg N) is likely the best one. That will not be optimal. Integers would probably be a good candidate for Knuth's 'merge insertion' algorithm. This article "Implementing HEAPSORT with (n logn - 0.9n) and QUICKSORT with (n logn + 0.2n) comparisons" may be of some interest to you. He also supplies the source code and driver program for the article. It would be fairly simple to test on your platform, I imagine. In addition, this algorithm (especially for integer data type) may be of special interest for your particular case: http://www.nada.kth.se/~snilsson/public/code/nloglogn.c Of course, it is a distribution based sort (as a comparison sort operating that quickly is provably impossible). > Thank you. > > Weng >Article: 104823
Here comes some basic stuff. Excuse me if you find it boring.

Most metastable problems are caused by input set-up time violations on classical edge-triggered flip-flops. Such flip-flops consist of a master latch and a slave latch. When the clock is Low, the D input drives the master latch, which does not have any feedback during that time. The slave latch is isolated from the master, and retains the previous data through its slave-latch feedback. When the clock is High, the master latch is isolated from the D input, retaining data by means of its master-latch feedback. The slave latch then has no feedback, but is directly driven by the output of the master latch.

Metastability occurs when the input data happens to change exactly to a specific "bad" level, just at the moment when the clock rises. At that moment the master latch is being decoupled from the D input, and the master-latch feedback is being activated. That is the (only) moment when a (rising-edge triggered) flip-flop can go metastable. Only the master latch is responsible for metastability.

To analyze the behavior, we look at the innards of the master latch: it consists of two cascaded simple inverter stages, each with a p-channel pull-up transistor and an n-channel pull-down transistor (nothing but these four transistors with very short connections between them), plus a clock-controlled pass transistor feeding the output back to the input. That's all, none of all that TTL junk that caused those metastable problems decades ago.

What happens when the master latch exits the metastable state? We can model that by (conceptually or really) driving the input of the double-inverter to an input voltage that is identical with the output voltage, which means both inverters are in their linear range. Any two-stage (non-hysteresis) non-inverting amplifier has such an operating point where Vin = Vout. To go metastable, we have to hit exactly (or almost exactly) that point. Now we activate the pass transistor that connects Vout to Vin. Nothing happens momentarily, since, by definition, there is no voltage difference. But very soon a voltage difference will develop and will drive the output either to the positive or the negative rail.

The debate in this thread has been whether that recovery from metastability is monotonic, or can involve oscillation. I claim it is monotonic since the phase response of the circuit loop is dominated by the RC of pass transistor impedance times input capacitance. Perhaps somebody with a more recent education can take it from here.

If the structure is any more complicated, involving additional transistors or lengthy interconnects, all bets are off, and the circuit may have poor metastable behavior. But we are not interested in such a bad circuit.

Peter Alfke

Article: 104824
MM wrote:
> I was wondering which PC upgrades can make ISE to run faster? For example,
> can it take advantage of dual CPU, and/or dual-core Xeon, etc.? I am
> currently running at 2.6 GHz P4 with HT, 2GB 400MHz RAM, 800FSB and it is
> too slow...

I mostly use Quartus but expect similar behaviour for ISE. For non-trivial examples, compilation time is dominated by the "fitter", which amounts to placing and routing AFAICT. I look forward to testing P&R performance on the new Intel core as it becomes available.

I don't have enough data to conclude which of the many factors has the most impact, but on my designs an

  Athlon 64 3500+, 2.2 GHz, 0.5 MiB L2$, 2 GiB dual channel DDR 400

is roughly 10% faster than an

  Athlon 64 3200+, 2.0 GHz, 1 MiB L2$, 1 GiB single channel DDR 400,

which in turn is more than 10% faster than an

  Intel Dothan, 1.7 GHz, 1 MiB L2$, single channel DDR 333.

For me, FPGA compilation is one of very few workloads where performance is still a concern!

Tommy