Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Rick wrote:
> On Feb 10, 2:55 am, Olafur Gunnlaugsson <o...@audiotools.com> wrote:
>> On 05/02/2010 18:19, Eric Chomko wrote:
>>
>>> Has anyone created a copy of an old system using an FPGA? I
>>> was wondering if it would be possible to take an entire SWTPC 6800 and
>>> compile the schematics and have it run on an FPGA board? Wouldn't
>>> even have to be the latest Xilinx product, I suspect.
>> There are loads of such projects out there, even a commercial one called
>> C-One "the reconfigurable computer", here: http://www.c64upgra.de/c-one/
>
> It is a great effort but last time I checked it was a bit pricey, ~$300
> for a basic system.

Just out of curiosity, how old are you? Giving the decade is OK.
A game system is that price, so I'm wondering if "kids" think $300
is too much.

/BAH

Article: 145476
On Wed, 10 Feb 2010 11:06:30 -0800 (PST), Rick
<richardcortese@gmail.com> wrote:
|It is a great effort but last time I checked it was a bit pricey ~$300
|for a basic system.
|
|===================

Try 333 euros now, $453 US. That includes the Cyclone 3 FPGA extender
board.

james

Article: 145477
ISE drives me crazy!

One of the most powerful features of VHDL is the ability to handle
multiple architectures and configurations for the same entity. This
makes for efficient simulation and regression testing, and promotes
code reuse.

I've spent way too many hours trying to figure out problems with ISE
synthesis (XST). I have tried various permutations of configurations:
separate files for each configuration, using just a single
configuration file in the project, all to no avail. The GUI always
seems to display the incorrect architecture. Now, I "think" this is
just a GUI anomaly and in actuality the tool knows which architecture
to bind. But when you need to use the GUI to help diagnose a synthesis
problem, how do you trust it? The only thing you can do is let it rip
and then use the (terrible) schematic RTL viewer to get a sense of what
it did. This is not efficient if you tend to have a heavily scripted
development flow. Who wants to look at the RTL view for every
regression?

Same files, same configuration used with Synplicity and everything is
great! I always know which architecture it is going to use, and (get
this) I can select which top-level configuration to synthesize!
However, many times Synplicity has difficulty closing timing, or I
can't check out a license because someone else has it tied up (but
that's a rant for a later time). Plus, Synplicity is expensive compared
to ISE (OK, you get what you pay for).

It's one thing to be frustrated because I cannot get a design to
synthesize with XST. It's another problem altogether when it
synthesizes incorrectly! I spent a lot of time debugging why some logic
was getting optimized away. Turns out XST did not use the architecture
it was instructed to use at a lower level in the design.

Bottom line: Xilinx cannot handle VHDL configuration constructs
properly. So until it does, I will have to overcome my loathing and
dumb down my code and scripts in order to accommodate Xilinx.
---------------------------------------
Posted through http://www.FPGARelated.com

Article: 145478
On Thu, 11 Feb 2010 09:27:23 -0500, jmfbahciv <jmfbahciv@aol> wrote:
|Rick wrote:
|> On Feb 10, 2:55 am, Olafur Gunnlaugsson <o...@audiotools.com> wrote:
|>> On 05/02/2010 18:19, Eric Chomko wrote:
|>>
|>>> Has anyone created a copy of an old system using an FPGA? I
|>>> was wondering if it would be possible to take an entire SWTPC 6800 and
|>>> compile the schematics and have it run on an FPGA board? Wouldn't
|>>> even have to be the latest Xilinx product, I suspect.
|>> There are loads of such projects out there, even a commercial one called
|>> C-One "the reconfigurable computer", here: http://www.c64upgra.de/c-one/
|>
|> It is a great effort but last time I checked it was a bit pricey ~$300
|> for a basic system.
|>
|Just out of curiosity, how old are you? Giving the decade is OK.
|A game system is that price so I'm wondering if "kids" think $300
|is too much.
|
|/BAH
|===========

Kids today have little concept of what is too much. For what the board
does it is a good price even now at 333 euros. The boards are no longer
shipping at 269 euros. The increased price reflects that the system now
ships with the FPGA extender card. Also, the main board uses two older
FPGAs, an EP1K30 and an EP1K100.

You also need an SVGA monitor, a keyboard, memory, a PS2-style mouse,
floppy drives, a hard drive, an ATX case and an ATX power supply. So
getting a system up and minimally running is going to cost you at least
another $200, unless you have all those components lying around in your
junk box.

It is a great effort on the team's part. It can be a very flexible
system now with the Cyclone 3 extender board.

james

Article: 145479
<snip>
>Bottom line:
>Xilinx cannot handle VHDL configuration constructs properly. So until
>it does, I will have to overcome my loathing to dumb-down my code and
>scripts in order to accommodate Xilinx.
>

The Design Manager GUI isn't clever in the way it handles conditional
generates, either. Thus I haven't told it the whereabouts of the file
containing the behavioural model of one of the sub-modules that I
mostly use when simulating, in case XST tries to synthesize it!

My experience is that synthesis tools are less smart with regard to
generates and configurations than simulators (ModelSim, at least).

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 145480
On Feb 11, 6:27 am, jmfbahciv <jmfbahciv@aol> wrote:
> Rick wrote:
> > On Feb 10, 2:55 am, Olafur Gunnlaugsson <o...@audiotools.com> wrote:
> >> On 05/02/2010 18:19, Eric Chomko wrote:
> >
> >>> Has anyone created a copy of an old system using an FPGA? I
> >>> was wondering if it would be possible to take an entire SWTPC 6800 and
> >>> compile the schematics and have it run on an FPGA board? Wouldn't
> >>> even have to be the latest Xilinx product, I suspect.
> >> There are loads of such projects out there, even a commercial one called
> >> C-One "the reconfigurable computer", here: http://www.c64upgra.de/c-one/
> >
> > It is a great effort but last time I checked it was a bit pricey ~$300
> > for a basic system.
>
> Just out of curiosity, how old are you? Giving the decade is OK.
> A game system is that price so I'm wondering if "kids" think $300
> is too much.
>
> /BAH

59. Directed at Olafur too: it isn't a question of value for goods
delivered. Having recently priced getting some prototype boards made, I
think they are priced appropriately.

I have a pretty good idea of the ratio of, shall we say, sophisticated
users to the Plebs. An analogy would be that a Porsche is more
expensive than riding a bus. The Plebs aren't concerned with file I/O,
clock frequency, or storage. If they look back at all, it is just to
play a game of Pac-Man or Bard's Tale. An emulator or SoC is fine and
suited to their needs. This reduces the potential market to hard-core
users.

Rick

Article: 145481
Brian Davis wrote:
> Unless a hypothetical assembly house stuffed, say, a 1.2 ohm
> (1R2) where a one Kohm (102) NPN base resistor was supposed
> to go, lighting the DONE LED just fine but clamping the DONE
> voltage seen at the FPGA pin to one VBE drop such that the
> FPGA thinks DONE never went high.

Ha, THAT was it! Thank you, you made my day! Putting in the right value
fixes the issue: bit-files load without the DONE_cycle setting, plus
indirect SPI programming works now.

The only question that remains is: why doesn't iMPACT give a "DONE did
not go high" message? If it did, I'd probably have gotten there sooner.
And another question is: what else did the assembly house mess up on
this board? Lots of 0402 bird seed on this one, no way to tell by
looking at it... But at least all supply voltages are correct and so
on.

> It is a major nuisance that Xilinx doesn't provide the JTAG-SPI
> core for iMPACT in either source or black-box synthesizable form.

Yep. Antti did post his version of such a core in the Xilinx user
forums, though:

http://tinyurl.com/yjtharz

This is a start as well.

> This forces customers to reinvent the indirect SPI FLASH wheel.
>
> If you ever need to do this, I'd suggest starting with either the
> Ken Chapman Picoblaze flash example (S3E) or the Avnet V5 SPI
> flash eval board example, which demonstrates the V5 logic needed
> to do user access to the internal configuration logic.

Fortunately, in the "production" release it'll be simpler. A host PC
will send SPI commands which will basically just be passed through the
FPGA. The indirect programming bit is just something I wanted to try
since I've never used it before, and I was a bit puzzled when it didn't
work.

BTW, Xilinx' xapp1020 can come in handy, too:

http://tinyurl.com/y955nom

> Also of note, command line 10.1 PROMGEN has an undocumented-in-
> the-manuals " -spi " option that will let you generate an .mcs
> file for SPI proms with the proper bit order.

Good to know.
Did I mention the iMPACT GUI sucks? :)

Thanks for all the excellent pointers! Learned quite a bit from this.

--
Replace "MONTH" with the three-letter abbreviation of the current month
and the two-digit code for the current year (simple, eh?).

Article: 145482
My "religion" does not tell you how to make a stackup without more
information. Do you need to set specific impedance on any traces? If
so, what values, how many traces, and approximately how long? Do you
expect any EMI issues? What is your fastest edge rate?

The way you have your stack set up seems to be optimized for
controlling trace impedance by having a gnd/pwr layer immediately
adjacent to each signal layer. The stackup below is optimized instead
for power distribution and still allows for impedance control if you
can adjust your trace width to suit. I assume you have read the
arguments pro and con regarding the significance of inter-plane
capacitance. With the stackup below, you can still get good impedance
control on the outer layers with slightly wider traces. But you won't
be routing too many signals on the outer layers if your board is at all
dense; the components eat up all the routing space.

It will be important on *all* layers to minimize routing of signals one
above the other on adjacent layers, to prevent coupling and crosstalk.
The easy remedy is to route orthogonally on adjacent layers. It also
provides a good means of getting signals around the board with minimal
vias, which is important.

 1  signal - Top
 2  signal
 3  GND plane -- >= 5 mil to PWR plane 4
 4  PWR plane
 5  signal
 6  signal
 7  GND plane -- >= 5 mil to PWR plane 8
 8  PWR plane
 9  signal
10  signal - Bottom

With ten layers the average thickness (using 1/2 oz copper) is about 6
mil. You will likely make it a little thicker between layers 4, 5, 6
and 7 since that is stripline rather than microstrip. You will want to
use very thin layers between 3 & 4 and 7 & 8 to maximize PDS
capacitance.

If you want to go for maximum signal isolation from crosstalk, I would
isolate all signal layers with power and ground.

 1  signal - Top
 2  GND plane
 3  signal
 4  PWR plane
 5  signal
 6  signal
 7  GND plane
 8  signal
 9  PWR plane
10  signal - Bottom

Well, I guess 5 and 6 are still adjacent, but you can't have
everything...
This arrangement does lend itself to ground pours on all signal layers
other than 5 and 6. If you use ground pours on layers 5 and 6, it will
mess with the impedance control because of the routing on the adjacent
signal layer.

This is just a seat-of-the-pants analysis. If you need impedance
control on all layers, you would need to analyze each signal layer in
terms of impedance vs. trace width. You may find that your approach is
better for your specific needs. Can you explain the rationale behind
your stackup? I don't want to argue about it, I just want to learn what
you are thinking.

On Feb 11, 5:05 am, "Nial Stewart"
<nial*REMOVE_TH...@nialstewartdevelopments.co.uk> wrote:
> I'm about to start the layout on a board which I think needs
> a 10 layer stack.
>
> After the religious wars between rickman and Symon on decoupling I'm
> unsure on the best stack but am veering towards...
>
> 1  signal - Top
> 2  GND plane
> 3  signal
> 4  signal
> 5  PWR plane
> 6  GND plane
> 7  signal
> 8  signal
> 9  PWR & GND plane
> 10 signal - Bottom
>
> There will almost definitely be power pours on layer 9, the BGA
> will be on the top layer.
>
> Comments?

I wouldn't bother mixing ground and power on one layer. Even with a
couple of signal layers between, you will get significant capacitance
between the power and ground. Since the power layers act as ground for
high-frequency signals, you don't need the ground on layer 9 unless it
is for connectivity. Are you expecting gnd layers 6 and 2 to be chopped
up? With two power layers they will be much less chopped up if you have
more than two power voltages.

Rick

Article: 145483
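The "very thin layers to maximize PDS capacitance" point can be
sanity-checked with the parallel-plate formula. A minimal sketch in
Python: the FR-4 permittivity and the 10 cm x 10 cm plane area are
illustrative assumptions, not values from the thread.

```python
# Rough estimate of inter-plane (PDS) capacitance between a power and a
# ground plane, using the parallel-plate formula C = eps0 * er * A / d.
# The board area and dielectric constant are assumed for illustration.

EPS0 = 8.854e-12   # F/m, permittivity of free space
ER_FR4 = 4.5       # typical relative permittivity of FR-4 (assumed)
MIL = 25.4e-6      # 1 mil in metres

def plane_capacitance_nF(area_cm2, spacing_mil, er=ER_FR4):
    """Capacitance in nF between two parallel planes."""
    area_m2 = area_cm2 * 1e-4
    spacing_m = spacing_mil * MIL
    return EPS0 * er * area_m2 / spacing_m * 1e9

if __name__ == "__main__":
    board = 100.0  # cm^2: a hypothetical 10 cm x 10 cm plane pair
    for spacing in (5.0, 2.0):  # mil: the ">= 5 mil" case vs a thin layer
        print(f"{spacing:>4.1f} mil spacing: "
              f"{plane_capacitance_nF(board, spacing):.2f} nF")
```

For the assumed 100 cm^2 pair this gives roughly 3 nF at 5 mil spacing
and proportionally more as the dielectric thins, which is the effect
being argued over: a few nF of distributed, very low-inductance
capacitance.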
<snip>
>And another question is: what else did the assembly house mess up on
>this board? Lot of 0402 bird seed on this one, no way to tell by looking
>at it... But at least all supply voltages are correct and so on.
<snip>

This reminded me of an article I saw in Circuit Cellar on Smart
Tweezers (http://www.smarttweezers.com) which might help you out. I
expect there may be some problems in-circuit with parallel paths,
though; I think it is more meant for identifying mystery components in
isolation.

Peter Van Epp

Article: 145484
I have a really large lookup table (members of a finite field) encoded
as a function. This function is only ever invoked with a constant
argument, so it shouldn't actually be synthesized. Altera's synthesis
tool seems to handle this with no problem.

When I try to use Synplify to synthesize my design (particularly this
constant function), the verilog compiler (c_ver) runs out of memory
(hits the 4GB limit). Seems like Synplify isn't figuring out that the
function doesn't actually need to be synthesized.

So I guess the question is two-fold:

1. Is there a better way to encode a large lookup table like that,
   which should really only be used at elaboration time?
2. Is it possible to enable c_ver to run in 64-bit mode, so that it can
   go past the 4G boundary?

Thanks,
Ben

Article: 145485
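One general workaround (a sketch of a common technique, not something
the thread or Synplify documentation prescribes) is to move the
finite-field evaluation out of the HDL entirely: a small generator
script runs at build time and emits the table as a Verilog constant, so
the synthesizer never has to evaluate the function at all. The field,
polynomial and output format below are illustrative assumptions.

```python
# Build-time table generation: compute a GF(2^8) antilog table in
# Python and emit it as a flat Verilog localparam. The primitive
# polynomial 0x11B and generator 0x03 (the AES pair) are chosen only
# as a familiar example.

def gf256_antilog(poly=0x11B, generator=0x03):
    """Return the 255-entry antilog table (powers of the generator)."""
    table, x = [], 1
    for _ in range(255):
        table.append(x)
        # Multiply x by the generator, reducing modulo the field poly
        y, g = 0, generator
        while g:
            if g & 1:
                y ^= x
            g >>= 1
            x <<= 1
            if x & 0x100:
                x ^= poly
        x = y
    return table

def emit_verilog(table, name="ANTILOG"):
    """Emit the table as one wide Verilog localparam, entry 0 at LSB."""
    bits = 8 * len(table)
    words = "".join(f"{v:02x}" for v in reversed(table))
    return f"localparam [{bits-1}:0] {name} = {bits}'h{words};"

if __name__ == "__main__":
    t = gf256_antilog()
    print(emit_verilog(t)[:60], "...")
```

In the HDL, indexing a localparam like this is a pure constant-select
and tends to be far cheaper for tools to digest than elaborating a
recursive or loop-heavy function body.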
Hi,

I finally understand the reason why a flip-flop can be replaced by a
latch. Here is an excerpt from the paper "Atom Processor Core Made FPGA
Synthesizable":

"Optimized for a frequency range from 800MHz to 1.86GHz, the original
Atom design makes extensive use of latches to support time borrowing
along the critical timing paths. With level-sensitive latches, a signal
may have a delay larger than the clock period and may flush through the
latches without causing incorrect data propagation, whereas the delay
of a signal in designs with edge-triggered flip-flops must be smaller
than the clock period to ensure the correctness of data propagation
across flip-flop stages [3]. It is well known that the static timing
analysis of latch-based pipeline designs with level-sensitive latches
is challenging due to two salient characteristics of time borrowing [2,
3, 14]: (1) a delay in one pipeline stage depends on the delays in the
previous pipeline stage. (2) in a pipeline design, not only do the
longest and shortest delays from a primary input to a primary output
need to be propagated through the pipeline stages, but also the
critical probabilities that the delays on latches violate setup-time
and hold-time constraints. Such high dependency across the pipeline
stages makes it very difficult to gauge the impact of correlations
among delay random variables, especially the correlations resulting
from reconvergent fanouts. Due to this innate difficulty, synthesis
tools like DC-FPGA simply do not support latch analysis and synthesis
correctly."

In short, a pipeline with several FFs can be replaced with a pipeline
with two FFs at the ends and normal latches inserted between them to
steal time slack:

FF1 ---> FF2 ---> FF3 ---> FF4
FF1 ------->l2 --------> l3--> FF4.

I saw the circuits before, but had not realized what the basic reason
was. With the above paper, I now know that the technology is not new;
it originated in the 1980s.

Weng

Article: 145486
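The time borrowing the excerpt describes can be put in numbers with a
toy model (my own simplification for illustration, not the paper's
analysis): data launched from a flop may arrive at a level-sensitive
latch after the nominal clock edge, as long as it arrives before the
latch goes opaque, and any borrowed time must be repaid by faster
downstream stages. The latch phasing and all delay values below are
assumptions.

```python
# Toy model of time borrowing through level-sensitive latches.
# Assumed phasing: each latch opens at its nominal capture edge and
# stays transparent for half a period afterwards. Delays are in ns.

def flop_ok(delays, period):
    """Edge-triggered pipeline: every stage must fit in one period."""
    return all(d <= period for d in delays)

def latch_ok(delays, period):
    """Latch pipeline: data may borrow into the transparency window,
    but must arrive before each latch goes opaque."""
    borrow_window = period / 2.0
    t = 0.0  # data launched from a flop at t = 0
    for i, d in enumerate(delays):
        t += d
        edge = (i + 1) * period        # nominal capture edge
        if t > edge + borrow_window:   # latch already opaque: fail
            return False
        t = max(t, edge)               # early data waits for transparency
    return True

if __name__ == "__main__":
    stages = [7.0, 3.0, 6.0]  # stage 1 alone exceeds the 5 ns period
    print("flops  :", flop_ok(stages, 5.0))   # False
    print("latches:", latch_ok(stages, 5.0))  # True: stage 2 repays the debt
```

Note that borrowing cannot accumulate indefinitely: three back-to-back
7 ns stages still fail at a 5 ns period, because the second latch has
gone opaque before the data arrives.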
On Feb 11, 2:50 pm, van...@sfu.ca (Peter Van Epp) wrote:
> <snip>
> >And another question is: what else did the assembly house mess up on
> >this board? Lot of 0402 bird seed on this one, no way to tell by looking
> >at it... But at least all supply voltages are correct and so on.
> <snip>
>
> This reminded me of an article I saw in Circuit Cellar on Smart Tweezers
> (http://www.smarttweezers.com) which might help you out. I expect there may
> be some problems in circuit with parallel paths though, I think it is more
> meant for identifying mystery components in isolation.
>
> Peter Van Epp

We've got a set of these in our production area here, which is also the
test and rework area (small company). You can often get information
in-circuit, especially in cases like this where you might see a much
lower impedance than you were expecting. They're also great for
comparing two boards when one doesn't work, since whatever you measure
on the good board (R, L or C) is just a relative reference point rather
than the presumed component value.

Regards,
Gabor

Article: 145487
On Feb 11, 2:52 pm, Ben Gelb <ben.g...@gmail.com> wrote:
> I have a really large lookup table (members of a finite field) encoded
> as a function. This function is only ever invoked with a constant
> argument, so it shouldn't actually be synthesized. Altera's synthesis
> tool seems to handle this with no problem.
>
> When I try to use Synplify to synthesize my design (particularly this
> constant function), the verilog compiler (c_ver) runs out of memory
> (hits the 4GB limit).
>
> Seems like Synplify isn't figuring out that the function doesn't
> actually need to be synthesized.
>
> So I guess the question is two-fold:
>
> 1. is there a better way to encode a large lookup table like that that
> should really only be used at elaboration time?
> 2. is it possible to enable c_ver to run in 64-bit mode, so that it
> can go past the 4G boundary?

When you say the function is only called with a constant argument, is
it truly a constant in terms of the language? Maybe if you are using a
variable which is initialized to a value and not changed, this is not
so well understood. Otherwise, I can't think why the entire function
would need to be "implemented" rather than just evaluated.

Which language are you using?

Rick

Article: 145488
Yes, latch-based design is much older than flop-based design, for the
simple reason that it can be cheaper. Think about it -- every flop is
really two latches! (At least for static designs that can be clocked
down to DC...) Where I work (at a chip company), we're still
occasionally converting latch-based designs into flop-based ones.

But (and this is a big but) FPGAs themselves (not just the design
tools) are designed for flop-based design, so if you use latch-based
designs with FPGAs you are not only stressing the timing tools, you are
also avoiding the nice, packaged, back-to-back dedicated latches they
give you called flops.

Pat

On Feb 11, 2:05 pm, Weng Tianxiang <wtx...@gmail.com> wrote:
> Hi,
> I finally understand the reason why a flip-flop can be replaced by a
> latch.
>
> Here is the excerpt from the paper "Atom Processor Core Made FPGA
> Synthesizable":
>
> "Optimized for a frequency range from 800MHz to 1.86GHz, the original
> Atom design makes extensive use of latches to support time borrowing
> along the critical timing paths. With level-sensitive latches, a
> signal may have a delay larger than the clock period and may flush
> through the latches without causing incorrect data propagation,
> whereas the delay of a signal in designs with edge-triggered
> flip-flops must be smaller than the clock period to ensure the
> correctness of data propagation across flip-flop stages [3]. It is
> well known that the static timing analysis of latch-based pipeline
> designs with level-sensitive latches is challenging due to two salient
> characteristics of time borrowing [2, 3, 14]: (1) a delay in one
> pipeline stage depends on the delays in the previous pipeline stage.
> (2) in a pipeline design, not only do the longest and shortest delays
> from a primary input to a primary output need to be propagated through
> the pipeline stages, but also the critical probabilities that the
> delays on latches violate setup-time and hold-time constraints. Such
> high dependency across the pipeline stages makes it very difficult to
> gauge the impact of correlations among delay random variables,
> especially the correlations resulting from reconvergent fanouts. Due
> to this innate difficulty, synthesis tools like DC-FPGA simply do not
> support latch analysis and synthesis correctly."
>
> In short, a pipeline with several FFs can be replaced with a pipeline
> with two FFs at the ends and normal latches inserted between them to
> steal time slack:
>
> FF1 ---> FF2 ---> FF3 ---> FF4
> FF1 ------->l2 --------> l3--> FF4.
>
> I saw the circuits before, but had not realized what the basic reason
> was. With the above paper, I now know that the technology is not new;
> it originated in the 1980s.
>
> Weng

Article: 145489
Ben Gelb <ben.gelb@gmail.com> wrote:
> I have a really large lookup table (members of a finite field) encoded
> as a function. This function is only ever invoked with a constant
> argument, so it shouldn't actually be synthesized. Altera's synthesis
> tool seems to handle this with no problem.
> When I try to use Synplify to synthesize my design (particularly this
> constant function), the verilog compiler (c_ver) runs out of memory
> (hits the 4GB limit).
> Seems like Synplify isn't figuring out that the function doesn't
> actually need to be synthesized.

That doesn't necessarily follow. It may be doing a lot of work while
figuring out the value to be synthesized. I have seen similar effects
in Fortran compilers for compile-time constant tables, including
running out of memory. (Even for very small tables.)

-- glen

Article: 145490
Sean Durkin wrote: > >> <snip> clamping the DONE voltage seen at the FPGA pin to one >> VBE drop such that the FPGA thinks DONE never went high. > > Ha, THAT was it! Thank you, you made my day! Putting in the > right value fixes the issue: Bit-files load without the > DONE_cycle-setting, plus indirect SPI-programming works now. > Great! I hadn't seen that exact problem before, but the DONE LED circuit found on many Xilinx/Digilent boards made me think of it: : : !!! Don't ever do this !!! : Evil circuit clamps DONE high level to the LED's Vf : : DONE >---o---/\/\---> VCC : | : | : v : - LED : | : | : GND Given that most Xilinx FPGAs have a Vih minimum threshold on the DONE pin of 2.0 V (LVTTL, LVCMOS33) or 1.7 V (LVCMOS25), it gives me the heebie-jeebies to see a circuit like that clamping DONE to the LED Vf - depending upon the exact LED, Vf could easily be down in the 1.5 V to 1.7 V range. > > The only question that remains is: why doesn't iMPACT give > a "DONE did not go high"-message? If it did, I'd probably > gotten there sooner. > There are two DONE related bits in the V5 status register, an internal "I-have-released-done" signal and then the actual DONE pin state; I'd guess that iMPACT is reading the former. The FPGA startup state machine uses the external signal seen on the DONE pin, so the FPGA will stall the startup sequence with your accidental Vbe clamp on DONE. It would be interesting to compare the before and after board-resistor-change state of the status register [Debug->Read Device Status], post JTAG download attempt. There are other helpful bits in that status register for troubleshooting exactly where the FPGA got stuck, here's a few: " " STARTUP_STATE [20:18] CFG startup state machine [ NOT BINARY ENCODED!!!] 
" DONE [14] Value on DONE pin " RELEASE_DONE [13] Value of internal DONE signal " EOS [4] End of Startup signal from Startup Block " from UG191 v3.8, V5 FPGA Configuration guide, page 120: http://www.xilinx.com/support/documentation/user_guides/ug191.pdf > >Antti did post his version of such a core in the Xilinx user >forums, though: > >http://tinyurl.com/yjtharz > Thanks, I hadn't seen that one before. BrianArticle: 145491
In comp.arch.fpga Patrick Maupin <pmaupin@gmail.com> wrote:
> Yes, latch-based design is much older than flop-based design, for the
> simple reason that it can be cheaper. Think about it -- every flop is
> really two latches! (At least for static designs that can be clocked
> down to DC...) Where I work (at a chip company), we're still
> occasionally converting latch-based designs into flop-based ones.

Often using a two (or more) phase clock: some latches work on one
phase, some on the other. With appropriately non-overlapping phases,
one avoids race conditions and the timing isn't so hard to get right.

> But (and this is a big but) FPGAs themselves (not just the design
> tools) are designed for flop-based design, so if you use latch-based
> designs with FPGAs you are not only stressing the timing tools, you
> are also avoiding the nice, packaged, back-to-back dedicated latches
> they give you called flops.

Well, you could use a sequence of FFs, clocking on different clock
edges, or the same edge of two clocks. That allows for some of the
advantages. If there was enough demand, I suppose FPGA companies would
build transparent-latch-based devices. (Who remembers the 7475?)

In pipelined processors of years past, the Earle latch combined one
level of logic with the latch logic, reducing the latch delay.

-- glen

Article: 145492
Gabor <gabor@alacron.com> writes:
>We've got a set of these in our production area here, which is also
>the test and rework area (small company). You can often get
>information in-circuit, especially in cases like this where you might
>see a much lower impedance than you were expecting. They're also great
>for comparing two boards when one doesn't work, since whatever you
>measure on the good board (R, L or C) is just a relative reference
>point rather than the presumed component value.
>
>Regards,
>Gabor

Even better, someone that has actually used one (which I haven't :-)).

Peter Van Epp

Article: 145493
Hi Antti,

Do you happen to know if coreABC works in Libero IDE v8.6? I tried to
put in coreABC but the system prompted this error:
"CoreABC_Generator.exe failed with code 32768". Please advise. Thank
you.

Article: 145494
Using QDRII x18 in 2-burst mode, I assume that the data/adr/cmd buses
are all DDR signals using the same clock domain (except the read and
write signals, which are single data rate).

The docs say there are limitations on where to put the data bus, and
then I can put the adr/cmd on what is left in the bank. This appears to
be a bit weird, as in this mode they are all clocked in on the same DDR
(K clk) domain. Trying to compile it with swapped pins gives an error
like:

"Error: Can't place I/O "d[4]" to I/O location Pin_AG6 because it does
not support x18 mode memory interfaces"

AG6 is not in a x18 DQ bus, but it is on the same bank as the other
signals in the domain. I wonder if anyone has experience with this. Is
it just a lazy pinout guide that dates back to the modes where adr was
single data rate? Or is there a hardware explanation? Maybe there is a
way to override the error message? (I'm clocking at half the maximum
allowed, so there should be some overhead in timing.)

Article: 145495
On Feb 12, 4:55 am, RaulGonz <raull...@hotmail.com> wrote:
> Hi Antti,
>
> Do you happen to know if coreABC works in Libero IDE v8.6? I tried to
> put in coreABC but the system prompted this error
> "CoreABC_Generator.exe failed with code 32768". Please advise. Thank
> you.

Works with 8.5, not sure about anything else. Contact Actel support; it
should work with 8.6 also.

Antti

Article: 145496
Rick wrote:
> On Feb 11, 6:27 am, jmfbahciv <jmfbahciv@aol> wrote:
>> Rick wrote:
>>> On Feb 10, 2:55 am, Olafur Gunnlaugsson <o...@audiotools.com> wrote:
>>>> On 05/02/2010 18:19, Eric Chomko wrote:
>>>>> Has anyone created a copy of an old system using an FPGA? I
>>>>> was wondering if it would be possible to take an entire SWTPC 6800 and
>>>>> compile the schematics and have it run on an FPGA board? Wouldn't
>>>>> even have to be the latest Xilinx product, I suspect.
>>>> There are loads of such projects out there, even a commercial one called
>>>> C-One "the reconfigurable computer", here: http://www.c64upgra.de/c-one/
>>> It is a great effort but last time I checked it was a bit pricey ~$300
>>> for a basic system.
>> Just out of curiosity, how old are you? Giving the decade is OK.
>> A game system is that price so I'm wondering if "kids" think $300
>> is too much.
>>
>> /BAH
>
> 59. Directed at Olafur too, it isn't a question of value for goods
> delivered. Having recently priced getting some prototype boards made,
> I think they are priced appropriately.

Ah, OK.

> I have a pretty good idea of the ratio of, shall we say, sophisticated
> users to the Plebs. An analogy would be that a Porsche is more
> expensive than riding a bus. The Plebs aren't concerned with file I/O,
> clock frequency, or storage. If they look back at all, it is just to
> play a game of Pac-Man or Bard's Tale. An emulator or SoC is fine and
> suited to their needs. This reduces the potential market to hard-core
> users.

If you had been a youngster, I would have poked a little bit more to
try to find out what kinds of things would interest you ;-). I'm
hearing rumors that the current young generation is interested in
learning about how all the insides work.

/BAH

Article: 145497
I posted a query about 10 layer PCBs for a new board I'm doing. I caught a reply by Rickman at home via Google, but it seems to have disappeared now. My usenet connection's normally pretty reliable so I'm not sure what's going on. Will this get out? Nial.Article: 145498
On Feb 12, 9:18 am, "Nial Stewart"
<nial*REMOVE_TH...@nialstewartdevelopments.co.uk> wrote:
> I posted a query about 10 layer PCBs for a new board I'm
> doing. I caught a reply by Rickman at home via Google, but
> it seems to have disappeared now.
>
> My usenet connection's normally pretty reliable so I'm not sure
> what's going on.
>
> Will this get out?
>
> Nial.

Google Groups sees your post. Also, your thread is still showing up
there. It doesn't show your thread as new, but rather as a change in
subject:

  Board layout for FPGA
  Discussion subject changed to "10 layer stack for 1152 pin BGA
  routing (and decoupling)?" by Nial Stewart

Perhaps your newsreader still shows the original subject line?

Regards,
Gabor

Article: 145499
On Feb 11, 3:05 pm, Weng Tianxiang <wtx...@gmail.com> wrote:
> Hi,
> I finally understand the reason why a flip-flop can be replaced by a
> latch.
>
> Here is the excerpt from the paper "Atom Processor Core Made FPGA
> Synthesizable":
>
> "Optimized for a frequency range from 800MHz to 1.86GHz, the original
> Atom design makes extensive use of latches to support time borrowing
> along the critical timing paths. With level-sensitive latches, a
> signal may have a delay larger than the clock period and may flush
> through the latches without causing incorrect data propagation,
> whereas the delay of a signal in designs with edge-triggered
> flip-flops must be smaller than the clock period to ensure the
> correctness of data propagation across flip-flop stages [3]. It is
> well known that the static timing analysis of latch-based pipeline
> designs with level-sensitive latches is challenging due to two salient
> characteristics of time borrowing [2, 3, 14]: (1) a delay in one
> pipeline stage depends on the delays in the previous pipeline stage.
> (2) in a pipeline design, not only do the longest and shortest delays
> from a primary input to a primary output need to be propagated through
> the pipeline stages, but also the critical probabilities that the
> delays on latches violate setup-time and hold-time constraints. Such
> high dependency across the pipeline stages makes it very difficult to
> gauge the impact of correlations among delay random variables,
> especially the correlations resulting from reconvergent fanouts. Due
> to this innate difficulty, synthesis tools like DC-FPGA simply do not
> support latch analysis and synthesis correctly."
>
> In short, a pipeline with several FFs can be replaced with a pipeline
> with two FFs at the ends and normal latches inserted between them to
> steal time slack:
>
> FF1 ---> FF2 ---> FF3 ---> FF4
> FF1 ------->l2 --------> l3--> FF4.
>
> I saw the circuits before, but had not realized what the basic reason
> was. With the above paper, I now know that the technology is not new;
> it originated in the 1980s.
>
> Weng

I'm a little unclear on how this works. Is this just a matter of the
outputs of the latches settling earlier if the logic path is faster, so
that the next stage actually has more setup time? This requires that
there be a minimum delay in any given path so that the correct data is
latched on the current clock cycle while the result for the next clock
cycle is still propagating through the logic.

I can see where this might be helpful, but it would be a nightmare to
analyze in timing, mainly because of the wide range of delays with
process, voltage and temperature (PVT). I have been told you need to
allow a 2:1 range when considering all three.

I think similar issues are involved when considering async design (or,
more accurately termed, self-timed design). In that design method the
variations in delay affect the timing of both the data path and the
clock path, so that they are largely nulled out and the min delays do
not need to include the full 2:1 range compared to the max. Some amount
of slack time must be given so the clock arrives after the data, but
otherwise all the speed of the logic is utilized at all times. This
also is supposed to provide for lower-noise designs because there is no
chip-wide clock giving rise to simultaneous switching noise. Self-timed
logic does not really result in significant increases in processing
speed because, although the max speed can be faster, an application can
never rely on that faster speed being available. But for applications
where there is optional processing that can be done using the left-over
clock cycles (a poor term in this case, but you know what I mean) it
can be useful.

In the case of using latches in place of registers, the speed gains are
always usable. But can't the same sort of gains be made by register
leveling? If you have logic that is slower than a clock cycle followed
by logic that is faster than a clock cycle, why not just move some of
the slow logic across the register to the faster logic section?

Rick