"llandre" <llandre@libero.it> wrote: >In this message >http://groups.google.it/group/comp.arch.fpga/browse_thread/thread/4b6b9f8a976483ec/8dbea4a95f3e91f7?lnk=gst&q=polybus&rnum=2&hl=it# >Josh Rosen provided detailed information about comparative performance >tests he made (see also http://www.polybus.com/linux_hardware/ >index.htm). >When he posted that message AMD processors were definitively the best >choiche. Did something changed since then? Have Intel processors >released in the meanwhile filled the gap? Based on 10 years of experience I recommend to stay away from AMD based system for any serious computing. AMD based systems are low budget systems based on crappy chipsets and crappy components and because of that they lack a very important feature: stability. I've never seen an AMD based system survive a day in the office without crashing. Most people will tell you their AMD system _at home_ works perfectly. But tell me, is a PC at home used extensively for 10 hours straight? I don't think so. If you want a computer get an Intel cpu based professional workstation from the business section from Dell or HP. You'll probably notice the price difference between the computer shop around the corner, but believe me, the price difference is worth having a PC that just works fine every day. Large companies buy PCs like these by the thousands for a good reason: a PC which doesn't work/crashes costs a lot of money. -- Reply to nico@nctdevpuntnl (punt=.) Bedrijven en winkels vindt U op www.adresboekje.nlArticle: 115601
On Wed, 14 Feb 2007 10:42:55 -0800, llandre wrote:
> Joshua,
>
> thank you very much for this precious information.
> I'll try to convince my buying manager to get an E6700 with 4GB RAM and
> a Linux OS ...
>
> Best regards,
> llandre

A note on RAM for the Core2s. The 2G DIMMs are now affordable; I bought a
pair of DDR2 800 2G DIMMs from NewEgg for $500. If you plan on using the
largest FPGAs you'll want to be able to put 8G in the system, so even if you
don't want 8G now you should make sure that you use 2G DIMMs, so that you
can get to 8G in the future without having to throw out any DIMMs.

The speed of the RAM has no direct effect on performance, because the Core2s
are limited by their FSB, which runs at 1066MHz and exactly matches the
bandwidth of a pair of DDR2 533 DIMMs. I found that the difference between
running the DIMMs at 533MHz and at 800MHz was only about 1%.

However, if you are planning on overclocking the CPU then you'll need faster
DIMMs. The reason for this is that the RAM and the CPU share the same source
clock. The BIOS on my motherboard (Abit AB9 Pro) only offers three RAM
speeds: 533, 667 and 800. These are actually 2X, 2.5X and 3X the source
clock. If you raise the source clock speed from 266 to 300 (the E6700 has a
10X multiplier, so 300 is 3GHz), the RAM will run at 600MHz, 750MHz or
900MHz. DDR2 667 or better will allow you to remain within spec even if you
raise the source clock to 330MHz.

If you buy from HP or Dell you won't be able to overclock, so you can live
with DDR2 533 RAM; however, if you buy a whitebox system then you'll want
DDR2 667 or better.

Article: 115602
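To make the multiplier arithmetic in the post above concrete (the
2X/2.5X/3X ratios and the 266/300/330 MHz source clocks are the figures
Josh gives; values are rounded):

  RAM clock = ratio x source clock
    266 MHz source: 2.0 x 266 = 533    2.5 x 266 ~ 667    3.0 x 266 = 800 MHz
    300 MHz source: 2.0 x 300 = 600    2.5 x 300 = 750    3.0 x 300 = 900 MHz
    330 MHz source: 2.0 x 330 = 660    2.5 x 330 = 825    3.0 x 330 = 990 MHz
  CPU clock = multiplier x source clock, e.g. 10 x 300 MHz = 3 GHz (E6700)

At a 330 MHz source clock the 2X setting gives 660 MHz, which is why DDR2
667 parts are the slowest that stay within spec for that overclock.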
On Wed, 14 Feb 2007 19:29:14 +0000, Nico Coesel wrote:
> "llandre" <llandre@libero.it> wrote:
>
>>In this message
>>http://groups.google.it/group/comp.arch.fpga/browse_thread/thread/4b6b9f8a976483ec/8dbea4a95f3e91f7?lnk=gst&q=polybus&rnum=2&hl=it#
>>Josh Rosen provided detailed information about comparative performance
>>tests he made (see also http://www.polybus.com/linux_hardware/index.htm).
>>When he posted that message AMD processors were definitely the best
>>choice. Has anything changed since then? Have Intel processors
>>released in the meanwhile filled the gap?
>
> Based on 10 years of experience, I recommend staying away from AMD-based
> systems for any serious computing. AMD-based systems are low-budget
> systems built on crappy chipsets and crappy components, and because of
> that they lack a very important feature: stability. I've never seen an
> AMD-based system survive a day in the office without crashing. Most
> people will tell you their AMD system _at home_ works perfectly. But
> tell me, is a PC at home used extensively for 10 hours straight? I
> don't think so.
>
> If you want a computer, get an Intel-CPU-based professional workstation
> from the business section of Dell or HP. You'll probably notice the
> price difference compared with the computer shop around the corner, but
> believe me, the price difference is worth having a PC that just works
> fine every day. Large companies buy PCs like these by the thousands for
> a good reason: a PC which doesn't work or crashes costs a lot of money.

You are completely wrong about this. I've been running three Athlon 64
systems 24/7; the oldest is three years old, the newest is about 18 months
old. My systems are under heavy load doing simulations and place and
routes. My compute servers have never crashed; the only problem that I've
ever had is with X on my workstation.

The Nvidia chipsets for the A64s are every bit as good as the Intel
chipsets, and the Nvidia chipsets integrate more functions, such as dual
Gigabit MACs with IP offload engines. The Nvidia chipsets are also 100%
Linux compatible; you still have to do a little tweaking to get Intel 965
motherboards to work. The Core2s are better performers, but there isn't any
difference in reliability anymore.

Article: 115603
Hi,

I have an XPS project which allows me to fill BRAM (port A) with data over
Ethernet. I then have some custom hardware, written in VHDL in the pcores
folder and listed in the project repository, which uses the second BRAM
port (port B) to do some processing and write the output back to BRAM. This
part of the project is working well.

My problem is that I need to incorporate a Xilinx CoreGen IP core as a
module in this custom hardware block, and I can't figure out how to do
this. I tried adding the .xmp file as an embedded sub-module to an ISE
project, but the 'Instantiation Template' that was produced didn't contain
the output ports of the custom block that weren't connected to a bus.
Similarly, when I tried creating a peripheral (in XPS) the wizard would
only let me connect to a bus and not directly to the second, unused port of
the BRAM.

I hope my explanation is clear. Any help on this would be great.

Thanks,
S.

Article: 115604
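One approach that avoids the import wizard entirely (a sketch, not from the
thread): generate the core standalone in CoreGen, copy the resulting .ngc
netlist into the pcore's netlist directory and list it in the pcore's .bbd
(black-box definition) file, then instantiate it as a component in the
pcore's VHDL. All names below are hypothetical placeholders:

  -- Hypothetical component declaration; in practice copy it from the
  -- .vho template that CoreGen emits next to the netlist.
  component fir_core
    port (
      clk  : in  std_logic;
      din  : in  std_logic_vector(31 downto 0);
      dout : out std_logic_vector(31 downto 0)
    );
  end component;

  ...

  -- Instantiated inside the custom block, fed from BRAM port B.
  u_fir : fir_core
    port map (
      clk  => clk,
      din  => bram_portb_dout,  -- data read from BRAM port B
      dout => fir_result        -- result written back via port B
    );

The .ngc netlist is then merged in at translate (ngdbuild) time, so no HDL
source for the core is needed.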
On Fri, 26 Jan 2007, doug wrote:
> Just to be clear, in this case here are the differences between 7.1 and
> 8 or 9 for the same project:
> ISE 7.1: max memory usage about 120 MB, and 5 minutes.
> ISE 8/9: crashes at 2 GB after an hour.
>
> This is definitely in the memory-leak category. The other question, of
> course, is how they managed to write code bad enough to take 50 times
> as long (when 8 or 9 crashes, it is about 25% done). Even if it did not
> crash, this would be a major pain, as it implies an XST process time of
> around four hours.

I had a similar problem with a project where memory usage exploded between
7.1 and 8.2. I traced that to the way XST infers SRL16s. I have a number of
640x8 shift registers. When coded according to the XST manual they are
synthesized right away as SRLs in 7.1. In 8.2, XST produces a large number
of flip-flops which are optimized to SRLs later on. I had to rewrite my
shift registers to instantiate SRL16E directly.

--
Dmitry Teytelman

Article: 115605
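For reference, direct instantiation of the primitive looks like the sketch
below: one bit of one 16-deep stage, with hypothetical signal names (a
640-deep register needs 40 such stages cascaded per bit, each Q feeding the
next stage's D):

  library unisim;
  use unisim.vcomponents.all;

  ...

  -- One 16-deep shift register bit; A3..A0 = "1111" selects tap 16,
  -- i.e. the full depth of the SRL16E.
  srl_stage0 : SRL16E
    generic map (INIT => X"0000")
    port map (
      CLK => clk,
      CE  => shift_en,
      D   => din,        -- serial data in
      A3  => '1',
      A2  => '1',
      A1  => '1',
      A0  => '1',
      Q   => stage0_out  -- connect to D of the next cascaded SRL16E
    );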
Nico Coesel wrote:
> "llandre" <llandre@libero.it> wrote:
>
>>In this message
>>http://groups.google.it/group/comp.arch.fpga/browse_thread/thread/4b6b9f8a976483ec/8dbea4a95f3e91f7?lnk=gst&q=polybus&rnum=2&hl=it#
>>Josh Rosen provided detailed information about comparative performance
>>tests he made (see also http://www.polybus.com/linux_hardware/index.htm).
>>When he posted that message AMD processors were definitely the best
>>choice. Has anything changed since then? Have Intel processors
>>released in the meanwhile filled the gap?
>
> Based on 10 years of experience, I recommend staying away from AMD-based
> systems for any serious computing. AMD-based systems are low-budget
> systems built on crappy chipsets and crappy components, and because of
> that they lack a very important feature: stability. I've never seen an
> AMD-based system survive a day in the office without crashing. Most
> people will tell you their AMD system _at home_ works perfectly. But
> tell me, is a PC at home used extensively for 10 hours straight? I
> don't think so.
>
> If you want a computer, get an Intel-CPU-based professional workstation
> from the business section of Dell or HP. You'll probably notice the
> price difference compared with the computer shop around the corner, but
> believe me, the price difference is worth having a PC that just works
> fine every day. Large companies buy PCs like these by the thousands for
> a good reason: a PC which doesn't work or crashes costs a lot of money.

My work system is an AMD 6400 X2 dual-core (Hypersonic Cyclone) configured
with ECC RAM. It runs 24/7, usually with at least a simulation, a Synplify
synthesis, or a place and route running on it. I have not had any stability
problems with this machine: it is reliable. I've had some Windows issues
(memory leaks and file handle limits), but nothing that points to hardware.

This is my second AMD system. The previous one was a dual K7, which I also
had no issues with, other than the incredible amount of heat it produced
and the attendant fan noise needed to keep it from melting down.

Article: 115606
Hello All,

I have been having some trouble with a custom-designed FPGA board based on
the Xilinx Spartan-3 (XC3S400-FT256). I am hoping that someone here might
be able to shed some light on the problem. Essentially, pins on certain IO
banks don't work as outputs unless at least one pin on the affected bank is
designated as an input and is continuously receiving an active-high (3.3V)
signal.

First let me go into a bit more detail about the board and the FPGA
configuration. All 8 IO banks on the FPGA are configured to receive
VCCO=3.3V. Every IO pin used is designated for LVTTL signaling. 40 of the
FPGA pins, mostly from IO banks 0 and 5, attach to a general-purpose IO
header on the board. While attempting to program these pins for various
output functionality, we noticed that they did not work (the output is
always logic '0') unless at least one pin on the same IO bank was specified
as an input and was receiving a 3.3V (logic '1') signal. The problem
appears to be isolated to banks 0 and 5, though it is possible that other
banks may be affected.

When I look at the output of the non-working pins on a scope, I see that
when the signal should be driving high to 3.3V, it's only jumping 200mV or
so. It's as if the signal is somehow being pulled down to ground, though I
have no idea how or why. The 3.3V VCCO supply appears to be fine, and is
connected to all of the pins on the FPGA that it should be. Other
components on the board are also powered by the 3.3V supply, and they are
working fine.

I'm pretty stumped as to what might be causing this problem. Has anyone out
there ever experienced it? As far as I can tell, it certainly isn't a
"feature" of Spartan-3 FPGAs. Any help would be greatly appreciated.

Thanks,
-Ben

Article: 115607
Hey guys,

I am using the XUP2VP Virtex-II Pro board, and I haven't been able to
configure the second processor (ppc405_1) with the LEDs using EDK. Does EDK
support configuring ppc405_1?

Angelo

Article: 115608
Comment embedded:

"bengineerd" <bengineerd@gmail.com> wrote in message
news:1171494633.490932.137610@a34g2000cwb.googlegroups.com...
> Hello All,
>
> I have been having some trouble with a custom-designed FPGA board based
> on the Xilinx Spartan-3 (XC3S400-FT256). I am hoping that someone here
> might be able to shed some light on the problem. Essentially, pins on
> certain IO banks don't work as outputs unless at least one pin on the
> affected bank is designated as an input and is continuously receiving
> an active-high (3.3V) signal.

This screams that the Vcco for the bank isn't actually powered. The input
pin's protection diodes kick in to provide the Vcco to the rest of the bank
as Vin - Vdiode. It's quite possible you have soldering problems that are
causing issues with the Vcco balls on those banks.

> First let me go into a bit more detail about the board and the FPGA
> configuration. All 8 IO banks on the FPGA are configured to receive
> VCCO=3.3V. Every IO pin used is designated for LVTTL signaling. 40 of
> the FPGA pins, mostly from IO banks 0 and 5, attach to a general-purpose
> IO header on the board. While attempting to program these pins for
> various output functionality, we noticed that they did not work (the
> output is always logic '0') unless at least one pin on the same IO bank
> was specified as an input and was receiving a 3.3V (logic '1') signal.
> The problem appears to be isolated to banks 0 and 5, though it is
> possible that other banks may be affected.
>
> When I look at the output of the non-working pins on a scope, I see that
> when the signal should be driving high to 3.3V, it's only jumping 200mV
> or so. It's as if the signal is somehow being pulled down to ground,
> though I have no idea how or why. The 3.3V VCCO supply appears to be
> fine, and is connected to all of the pins on the FPGA that it should be.
> Other components on the board are also powered by the 3.3V supply, and
> they are working fine.
>
> I'm pretty stumped as to what might be causing this problem. Has anyone
> out there ever experienced it? As far as I can tell, it certainly isn't
> a "feature" of Spartan-3 FPGAs. Any help would be greatly appreciated.
>
> Thanks,
> -Ben

It all makes solid sense for non-powered I/O banks. Do you have more than
one board? Do you have access to the layout? If there are direct plane
connects for power but not for signals, there could be differences in the
soldering for the different pin types.

Good luck.

Article: 115609
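The diode theory also fits the numbers: with one pin in the bank driven
high externally, the clamp diode can back-power the bank's Vcco rail at
roughly

  Vcco ~ Vin - Vdiode = 3.3 V - 0.7 V = 2.6 V

which is enough for the other pins in the bank to drive plausible LVTTL
highs, while with no powered input the outputs can only manage the couple
of hundred millivolts observed on the scope.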
I've created a variant of the GSRD with a fairly large software application
that runs great using the iMPACT download and XMD debugger. However, I
can't get the software application to run using the SystemACE and CF. I
went back to the example programs supplied with the GSRD and couldn't get
them to run from the ACE either.

Using the example memtest program supplied with the GSRD, I created an ACE
file using the following command line:

C:\> xmd -tcl genace.tcl -jprog -board ml403 -hw implementation/download.bit -elf ppc405_0/code/ddr_mem_test.elf -target ppc_hw -ace mem.ace

I'm using EDK v7.1 and selected to initialize the BRAM with the bootloop. I
copied the resulting ACE file onto the CompactFlash. Upon reset my FPGA is
being loaded correctly (I can observe the activity of some GPIO LEDs I
added), however I never get my application to run. It should be running out
of the DDR SDRAM but shows no signs of life on the debug serial port. What
is preventing the application from starting?

The output from my ACE build:

Xilinx Microprocessor Debug (XMD) Engine
Xilinx EDK 7.1.2 Build EDK_H.12.5.1
Copyright (c) 1995-2005 Xilinx, Inc.  All rights reserved.

Executing xmd script : C:/EDK/data/xmd/genace.tcl
#######################################################################
XMD GenACE utility. Generate SystemACE File from bit/elf/data Files
#######################################################################
GenACE Options:
  Board      : ml403
  Target     : ppc_hw
  Elf Files  : ppc405_0/code/ddr_mem_test.elf
  Data Files :
  HW File    : implementation/download.bit
  ACE File   : mem.ace
  JPROG      : true
############################################################
Converting Bitstream 'implementation/download.bit' to SVF file
'implementation/download.svf'
Executing 'impact -batch bit2svf.scr'
Copying implementation/download.svf File to mem.svf File
############################################################
Converting ELF file 'ppc405_0/code/ddr_mem_test.elf' to SVF file
'ppc405_0/code/ddr_mem_test.svf'
Target reset successfully
section, .text: 0x00000000-0x00001980
section, .boot0: 0xffffc000-0xffffc010
section, .boot: 0xfffffffc-0x00000000
section, .data: 0x00001980-0x000024a2
section, .sdata: 0x000024a4-0x000024a8
section, .sbss: 0x000024a8-0x000024b8
section, .sdata2: 0x000024b8-0x000024b8
section, .bss: 0x000024b8-0x000124c0
Downloaded Program ppc405_0/code/ddr_mem_test.elf
Setting PC with program start addr = 0xfffffffc
WARNING:Portability:111 - Message file "EDK.msg" wasn't found.
PC reset to 0xfffffffc, Clearing MSR Register
Copying ppc405_0/code/ddr_mem_test.svf File to mem.svf File
############################################################
Writing Processor JTAG "continue" command to SVF file 'sw_suffix.svf'
Processor started. Type "stop" to stop processor
############################################################
Converting SVF file 'mem.svf' to SystemACE file 'mem.ace'
Executing 'impact -batch svf2ace.scr'
SystemACE file 'mem.ace' created successfully

Any or all ideas are appreciated. Thanks.

Article: 115610
Andreas Ehliar wrote:
> Hi, I've been wondering for a while if there is any data about typical
> clock frequencies of FPGA designs for various FPGA devices.
>
> What I'm curious about is whether there is any sort of published
> statistics about the clock frequencies used in different FPGA designs
> on different FPGA architectures.
>
> Basically I'd like to have some empirical data as to what to consider
> absurdly low, low, normal, high or extremely high in terms of clock
> frequency in a certain FPGA device.
>
> This is of course more complicated if you consider multiple clock
> domains, design complexity, hard IP cores running at speeds much higher
> than the surrounding logic, etc.
>
> From the limited experience I have, I would consider designs running at
> over say 200 MHz in a Virtex-4 to be high-speed designs, and designs
> running at lower than 100 MHz in such a device to be low speed, but I
> may be off the mark here by a significant margin :)
>
> That is why I'd really like to hear if anyone knows of any published
> statistics about this subject.
>
> /Andreas

For V4, 200 MHz isn't high speed; in fact, it is only half the clock rate
the DSP48s, BRAMs and DCMs are spec'd at for the slowest speed grade. At
200 MHz you don't really even need to worry much about placement, as long
as you are reasonably careful with the design. I'd put the dividing line
for high speed somewhat over 300 MHz, in other words where you start to
have to worry about placement in order to meet timing.

My floating-point FFT kernel makes timing in a -10 part at 400 MHz; it is
limited by the clock speed of the BRAMs and DSP48s. That probably counts as
extremely high, as the clock speed is limited by the minimum clock period
of the FPGA blocks rather than by the design. If you stick to just LUTs you
can push it quite a bit higher, but then you give up the big features too.

The V4 clocks don't really like to be run slower than about 150 MHz; I'd
consider anything less than that to be quite low. The flip side to this is
power consumption: at high utilization and high clock rates, cooling can
become a challenge.

FPGAs are much different from CPLDs as far as clocking goes, too. FPGAs
have an abundance of flip-flops, as well as clock managers that can
multiply the input clock. This makes it very attractive to use a multiplied
clock in order to reduce the footprint of the logic, which can often get
you into a significantly smaller device.

Article: 115611
On Feb 14, 2:25 pm, stephen...@gmail.com wrote:
> Hi, I have an XPS project which allows me to fill BRAM (port A) with
> data over Ethernet. I then have some custom hardware written in VHDL
> in the pcores folder and listed in the project repository which uses
> the second BRAM port (port B) to do some processing and write the
> output back to BRAM. This part of the project is working well. My
> problem is that I need to incorporate a Xilinx CoreGen IP core as a
> module in this custom hardware block and I can't figure out how to do
> this. I tried adding the .xmp file as an embedded sub-module to an ISE
> project but the 'Instantiation Template' that was produced didn't
> contain the output ports of the custom block that weren't connected to
> a bus. Similarly when I tried creating a peripheral (in XPS) the wizard
> would only let me connect to a bus and not directly to the second,
> unused port of the BRAM. I hope my explanation is clear. Any help on
> this would be great.
>
> Thanks,
> S.

Instead of using the import wizard, I suggest that you read the "Platform
Specification Format Manual", also known as UG131, located in the $EDK\doc
directory. I used the import wizard for the first sample pcore that I
created, but now I prefer to create all the EDK data files myself, because
I find it more flexible.

There are some examples of how to use CoreGen from EDK in the Xilinx
ChipScope cores in EDK. They use TCL files to create files for CoreGen and
then call CoreGen to create the core. Look in
$EDK\hw\XilinxProcessorIPLib\pcores\chipscope_*\data for the TCL files to
use as examples.

Regards,
John McCaskill
www.fastertechnology.com

Article: 115612
Hi Dave,

Dave H wrote:
> I've created a variant of the GSRD with a fairly large software
> application that runs great using the iMPACT download and XMD debugger.
> However, I can't get the software application to run using the
> SystemACE and CF. I went back to the example programs supplied with the
> GSRD and couldn't get them to run from the ACE either.

I've seen some weirdness with SystemACE downloading large ELF files into
off-chip RAM, whereby the first 10 or so words written into the memory were
corrupted, and the CPU start address was also wrong. This was with a
MicroBlaze on a custom board, but it points to a problem either in the
genace.tcl script, the SystemACE controller, or even maybe some
board-specific issue.

We diagnosed this by changing the ELF file to have a bunch of "bootloop"
type opcodes (bri 0 on a MicroBlaze) right at the start, so the CPU would
spin immediately after SystemACE boot. Once the SystemACE config/DRAM load
completed, we connected with XMD and could see the bogus memory contents
and also the bogus starting PC value. We characterised the badness and
worked around it, but I don't have a solution as yet.

All that said, I've done big ACE SW downloads (PPC Linux kernels) on an
ML403 previously with no issues at all.

Maybe this is relevant?

John

Article: 115613
Hallo,

I would like to connect a Virtex-4 FX and a CPLD to test an I2C slave
peripheral. Into the Virtex-4 I have programmed a small system with a
MicroBlaze, opb_i2c, and some other peripherals. Into the CPLD I would like
to program a small custom I2C slave peripheral. Is this feasible, or will I
have trouble due to bus sharing with the flash and RAM?

Many thanks in advance,
Marco

Article: 115614
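For the CPLD side, the main implementation detail is that SDA (and SCL, if
the slave needs to stretch the clock) must be handled as open-drain. A
minimal VHDL skeleton, with hypothetical names and the state machine
elided, not from the thread:

  library ieee;
  use ieee.std_logic_1164.all;

  entity i2c_slave is
    port (
      clk : in    std_logic;  -- local clock, several times faster than SCL
      scl : in    std_logic;  -- driven by the opb_i2c master in the Virtex-4
      sda : inout std_logic
    );
  end entity;

  architecture rtl of i2c_slave is
    signal sda_drive_low : std_logic := '0';
  begin
    -- Open-drain: only ever drive low, otherwise release to 'Z' and let
    -- the external pull-up resistor take the line high.
    sda <= '0' when sda_drive_low = '1' else 'Z';

    -- ... synchronize scl/sda into the clk domain, detect START/STOP,
    -- shift in the address and data bits, and assert sda_drive_low to
    -- ACK or to drive '0' data bits ...
  end architecture;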
On Feb 14, 6:31 am, backhus <n...@nirgends.xyz> wrote:
> > is there something special to get kcpsm3 running on linux/wine?
> > my wine conf seems ok (FileZilla win edition is running fine)
> > but KCPSM3.EXE fails (Unhandled page fault on read access to
> > 0xffffffff...)
>
> I also have a problem with kcpsm3, while kcpsm for the Spartan-II
> PicoBlaze runs fine under wine.
>
> I tried dosemu instead. (Version 1.2.2.0)
> It works.

I filed a bug report against Wine; see
http://bugs.winehq.org/show_bug.cgi?id=7431

- Dan

Article: 115615
Again, I'm looking for a diagram of frequency (some MHz to 10 GHz, for
example) versus loss tangent and/or epsilon R for FR4 or other common PCB
materials. I only found a poor black-and-white copy from 1991 in a paper
which I found with Google. I wouldn't have thought it would be so hard to
find such a graph, but as no one has replied to my previous question so
far, it does seem to be hard! :-)

Does anybody know where I can find that?

Regards,
Gero

Article: 115616
On Feb 14, 9:59 am, "jetq88" <jetq5...@gmail.com> wrote:
> Our design department is basically split down the middle: half our
> products were designed with Altera parts and half with Xilinx parts,
> and when it comes to choosing one main FPGA source, everyone voices a
> different opinion. I'm about to start a new design to process digital
> video, which requires large external memory, either DIMM DDR/DDR2 SDRAM
> or component DDR/DDR2 SDRAM.
>
> First I went for the Xilinx ISE 9.1 WebPack, quite a large program,
> went to CoreGen, and couldn't find a place to generate a memory
> controller; went to Xilinx to check the MIG tool, and there was nowhere
> to download MIG for ISE 9.1. I guess I have to use the old tool, then
> import it into ISE 9.1 and tweak it myself. I downloaded the Altera
> Quartus 6.1 web edition, went to the MegaWizard, chose memory
> controller, then DDR SDRAM, and it was right there; the only thing I
> needed to do was customize it. It looks simpler so far. Since I'm just
> getting started I'm not sure of the road ahead yet, but from the
> beginning the Xilinx road looks bumpy.
>
> I know that if I get a reference design from either one it should get
> the job done. I want to listen to others out there, especially those
> who have experience with both. What are your thoughts about both
> companies in terms of chip performance, development tools and support?
> I'd like to choose the company with overall better performance, stick
> with it, and forget the other one.

I do use both vendors; I have a board where a Stratix gets its bitstream
from a CoolRunner. The choice is much more subjective than objective. They
both make quality products, have good tools, and have smart support teams.
There are differences, but they are small. Example petty annoyances are
things like Xilinx not releasing the EDK at the same time as ISE, and the
way Quartus just doesn't run right if it's not connected to the Internet.

The question I have is why you feel you need to pick one vendor. It's lots
of fun to have a lot of Xilinx docs floating around when the Altera rep
comes to visit. ;)

Best of luck,
GH

Article: 115617
dank <daniel.r.kegel@gmail.com> wrote:
> On Feb 14, 6:31 am, backhus <n...@nirgends.xyz> wrote:
> > > is there something special to get kcpsm3 running on linux/wine?
> > > my wine conf seems ok (FileZilla win edition is running fine)
> > > but KCPSM3.EXE fails (Unhandled page fault on read access to
> > > 0xffffffff...)
> >
> > I also have a problem with kcpsm3, while kcpsm for the Spartan-II
> > PicoBlaze runs fine under wine.
> >
> > I tried dosemu instead. (Version 1.2.2.0)
> > It works.
>
> I filed a bug report against Wine; see
> http://bugs.winehq.org/show_bug.cgi?id=7431

Where is the download page for kcpsm3? Google wasn't that much of a help
for me.

--
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 115618
jetq88 wrote:
> Our design department is basically split down the middle: half our
> products were designed with Altera parts and half with Xilinx parts,
> and when it comes to choosing one main FPGA source, everyone voices a
> different opinion.

This is a common situation in politics, religion and FPGAs.

> I'm about to start a new design to process digital video, which
> requires large external memory, either DIMM DDR/DDR2 SDRAM or component
> DDR/DDR2 SDRAM. First I went for the Xilinx ISE 9.1 WebPack, quite a
> large program, went to CoreGen, and couldn't find a place to generate a
> memory controller; went to Xilinx to check the MIG tool, and there was
> nowhere to download MIG for ISE 9.1. I guess I have to use the old
> tool, then import it into ISE 9.1 and tweak it myself. I downloaded the
> Altera Quartus 6.1 web edition, went to the MegaWizard, chose memory
> controller, then DDR SDRAM, and it was right there; the only thing I
> needed to do was customize it. It looks simpler so far. Since I'm just
> getting started I'm not sure of the road ahead yet, but from the
> beginning the Xilinx road looks bumpy.

I would run a simplified design through both sets of tools and decide for
myself. Ease of simulation and the RTL viewers are my hot buttons.

> I know that if I get a reference design from either one it should get
> the job done. I want to listen to others out there, especially those
> who have experience with both. What are your thoughts about both
> companies in terms of chip performance, development tools and support?
> I'd like to choose the company with overall better performance, stick
> with it, and forget the other one.

Certainly, the designer has to choose one, but each design is a new game.

--
Mike Treseler

Article: 115619
On Feb 15, 2:49 am, Kevin Neilson <kevin_neil...@removethiscomcast.net>
wrote:
> I have been unable to find a definitive answer for this in my searches.
> I would like to operate DDR2 SDRAM below its minimum specified clock
> rate, which is usually 125MHz (DDR250). I assume that the minimum clock
> rate specified has nothing to do with refreshing, but rather with the
> lowest rate at which the DRAM's DLL can lock. I am wondering if I can
> operate below the minimum rate if I turn off the DLL. As far as I can
> tell, the only purpose of the DLL is to ensure that the output DQS
> strobe and the data are aligned with the input clock, so if I don't
> care about that relationship I should be able to disable the DLL.
> Anybody know for sure?
> -Kevin

Yes, you are right, but for the initialisation process you need the DLL
enabled; once in the active state you can disable it and operate slower.
The only thing is to make sure all the timings are still met, with margins,
at the minimum operating frequency, because we cannot be sure of the
internal timing parameters.

Article: 115620
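For what it's worth, on DDR2 the DLL is controlled through the extended
mode register EMR(1); in the JEDEC definition, address bit A0 = '0' enables
the DLL and A0 = '1' disables it (do check the bit map against your
specific DRAM datasheet). A hypothetical fragment of the init constants a
VHDL controller might use, with every other field left at placeholder
zeros:

  -- Hypothetical EMR(1) values for the DDR2 init sequence; only the DLL
  -- bit (A0) is shown, ODT/drive-strength/etc. fields are placeholders.
  constant EMR1_DLL_ENABLE  : std_logic_vector(12 downto 0) := (others => '0');
  constant EMR1_DLL_DISABLE : std_logic_vector(12 downto 0) :=
    (0 => '1', others => '0');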
Nico Coesel wrote:
> "llandre" <llandre@libero.it> wrote:
>
>> In this message
>> http://groups.google.it/group/comp.arch.fpga/browse_thread/thread/4b6b9f8a976483ec/8dbea4a95f3e91f7?lnk=gst&q=polybus&rnum=2&hl=it#
>> Josh Rosen provided detailed information about comparative performance
>> tests he made (see also http://www.polybus.com/linux_hardware/index.htm).
>> When he posted that message AMD processors were definitely the best
>> choice. Has anything changed since then? Have Intel processors
>> released in the meanwhile filled the gap?
>
> Based on 10 years of experience, I recommend staying away from AMD-based
> systems for any serious computing. AMD-based systems are low-budget
> systems built on crappy chipsets and crappy components, and because of
> that they lack a very important feature: stability. I've never seen an
> AMD-based system survive a day in the office without crashing. Most
> people will tell you their AMD system _at home_ works perfectly. But
> tell me, is a PC at home used extensively for 10 hours straight? I
> don't think so.

That may have had some merit as an argument 10 years ago, but it is totally
at odds with most people's experiences since then. AMD has been the
manufacturer of choice for serious computing since the Opterons first came
out: again and again, they have delivered more powerful and scalable
solutions than Intel's, and the processors left stability problems behind
with the K6 generation. There have been issues with heat (many of AMD's
chips in the last five years have run particularly hot), and if you buy a
cheap system then its cooling system might not be good enough. And if you
want to talk about motherboard and chipset issues, then Intel has far
outweighed AMD for problems in recent years, mostly because, until the Core
2, it was rushing out everything it could in hopes of competing with AMD.

In my own experience, I have picked AMD on almost every occasion in the
last fifteen years: first purely for value for money, and later for
reliability as well. Were I buying a new machine today, I would probably go
for a dual Core 2, simply because of better value for money at the moment,
although for a server I might pick AMD for stability (and for a four-core
or more machine, AMD is the only realistic choice).

If your machines crash after a day at the office, you are doing something
terribly wrong, and the processor is the least of your worries. Most of the
machines I use and administer, at home and at the office, are AMDs, and
most of them are never turned off. I have a server here at the office with
a 300 MHz AMD K6 that has been running for around 8 years, and has only
been off a half-dozen times for power cuts and a replacement power supply
(this is probably a world record for NT 4.0). And in the world of gaming,
people run their machines for much longer than 10 hours at a time, and
often with more demanding loads than any professional use; they generally
choose AMD. There is a reason why AMD captured a large proportion of the
server market, especially for multi-core systems, despite Intel's
entrenchment (and illegal and/or unethical behaviour, for which they are
currently on trial).

> If you want a computer, get an Intel-CPU-based professional workstation
> from the business section of Dell or HP. You'll probably notice the
> price difference compared with the computer shop around the corner, but
> believe me, the price difference is worth having a PC that just works
> fine every day. Large companies buy PCs like these by the thousands for
> a good reason: a PC which doesn't work or crashes costs a lot of money.

This is a totally different issue. If you want a reliable machine, be
prepared to spend money on it and get it from a reliable supplier. No one
will argue with that. Don't buy AMD processors because they are cheap; buy
the appropriate chip for the job.

Article: 115621
jetq88 wrote:
> ... in terms of chip performance, development tools and support, I'd
> like to choose the company with overall better performance, stick with
> it and forget the other one.

That is an impossible choice to make in reality. FPGA selection depends on
the project, schedules, part availability, price, etc. For example, the
FPGA vendors are not creating new product families at the same time, so one
vendor is sometimes ahead and sometimes behind. On average, both A and X
make good chips and tools. Things like packaging, IO-to-logic ratio,
embedded resources, IO standard support, etc. also have to be considered in
the selection.

The easiest way is to design for both families and decide on the chip when
there is something to synthesize and place-and-route, or at least make the
selection at the beginning of the project when all the requirements are
known.

--Kim

Article: 115622
From anandtech's article (http://www.anandtech.com/printarticle.aspx?i=2925)
on the 80-core Intel Terascale chip:

"The chip uses a LGA package like Intel's Core 2 and Pentium 4 processors,
but features 1248 pins. Of the 1248 pins on the package, 343 of them are
used for signaling while the rest are predominantly power and ground."

That's 905 power and ground balls! Remember when Trilogy went down, taking
over $200 million of start-up funding with it, attempting packages with
1200 pins. How we laughed ;-)

Article: 115623
Uwe Bonnes wrote:
> dank <daniel.r.kegel@gmail.com> wrote:
>> On Feb 14, 6:31 am, backhus <n...@nirgends.xyz> wrote:
>>>> is there something special to get kcpsm3 running on linux/wine?
>>>> my wine conf seems ok (FileZilla win edition is running fine)
>>>> but KCPSM3.EXE fails (Unhandled page fault on read access to
>>>> 0xffffffff...)
>>> I also have a problem with kcpsm3, while kcpsm for the Spartan-II
>>> PicoBlaze runs fine under wine.
>>>
>>> I tried dosemu instead. (Version 1.2.2.0)
>>> It works.
>
>> I filed a bug report against Wine; see
>> http://bugs.winehq.org/show_bug.cgi?id=7431
>
> Where is the download page for kcpsm3? Google wasn't that much of a
> help for me.

Hi Uwe,

maybe it's because there is no separate download page for kcpsm3. It's part
of the PicoBlaze for Spartan-3 design files:

http://www.xilinx.com/xlnx/xebiz/designResources/ip_product_details.jsp?key=picoblaze-S3-V2-Pro&sGlobalNavPick=&sSecondaryNavPick=

regards,
Eilert

Article: 115624
Does anyone have a good recipe for simulating an EDK project in NC-Sim? I
am looking at the NC Launch (GUI front end) revision, and it is 05.40-s015.
I am using EDK 8.2 with the latest service pack.

I have done this in the past, but abandoned it because I couldn't simulate
any useful amount of time before the thing blew up. I would think that a
moderately populated EDK design would simulate OK; after all, NC-Sim is
used for massive ASIC simulation. Just thought I'd ask on here to see if
there is a relatively 'easy' way.

Thanks!