Site Home Archive Home FAQ Home How to search the Archive How to Navigate the Archive
Compare FPGA features and resources
Threads starting:
Authors:A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
On Mar 19, 5:41 pm, "Bob Golenda" <bgoli...@nospam.net> wrote: > Thank you very much for that pointer. It still appears that aside from what > Antti did, no one (at least on this list) has successfully done a real JTAG > programmer for the XCF...just XSVF Player type programmers. I just find > that odd. > > "MM" <m...@yahoo.com> wrote in message > > news:5684abF27jrufU1@mid.individual.net... > > > > > Take a look at this old discussion: > >http://tinyurl.com/277l9a > > > /Mikhail Actually I did, with a PicoBlaze, for an XCF02. Really didn't take too long once I figured out what to do. Basically I used the IEEE 1532 and the JTAG 1149.1 specs as a guide. DaveArticle: 116901
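Working "from the 1149.1 spec as a guide" starts with the 16-state TAP controller that every JTAG programmer has to walk. A quick software model of the TMS-driven transitions (an illustrative sketch, not Dave's actual PicoBlaze code) looks like this:

```python
# IEEE 1149.1 TAP controller: each entry maps a state to its
# successor for TMS=0 and TMS=1, sampled on the rising edge of TCK.
TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle",  "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle",  "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",     "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",       "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",       "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",       "Update-DR"),
    "Pause-DR":         ("Pause-DR",       "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",       "Update-DR"),
    "Update-DR":        ("Run-Test/Idle",  "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",     "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",       "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",       "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",       "Update-IR"),
    "Pause-IR":         ("Pause-IR",       "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",       "Update-IR"),
    "Update-IR":        ("Run-Test/Idle",  "Select-DR-Scan"),
}

def walk(state, tms_bits):
    """Apply a sequence of TMS values, one per TCK rising edge."""
    for tms in tms_bits:
        state = TAP[state][tms]
    return state
```

A real XCF programmer layers the 1532 programming algorithm (instructions, erase/program waits) on top of exactly these walks; note that five TCKs with TMS high reach Test-Logic-Reset from any state.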
On 20 Mar, 20:01, "Paul" <pauljbenn...@gmail.com> wrote: > Sorry, but that's an amazingly unclear question.... what video > format? how is it stored? streaming input signal? analog input to an > ADC? You can't possibly expect anyone to help you with that > question... clarify and maybe you'll find some advice > > On Mar 20, 6:51 am, "kha_vhdl" <abai...@gmail.com> wrote: > > > hi every body , > > > i m asked to read a video using an FPGA , can you please give me some > > ways that let me adopt to create the test bench of the video? > > it means what are the different ways to read video into simulation : > > they told me there is an automatic way and a manual one , please can u > > explain it to me > > > thank you Hi, for now I don't have an idea about this (I mean, I am asking what details I need in order to read my video). It is my first time thinking about this, and I want help to learn these details.Article: 116902
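One common "manual" way to get video into a simulation is to dump frames to a plain text file offline, then have the VHDL testbench read it back word-by-word with std.textio. A sketch of the file-generation side (the file name and 8-bit grayscale format are my own assumptions, nothing the poster specified):

```python
def frame_to_hex_lines(pixels):
    """Turn raw 8-bit pixel values into one hex word per line,
    in a form a std.textio-based testbench can read back."""
    return ["{:02X}".format(p) for p in pixels]

def write_stimulus(pixels, path="frame.txt"):
    """Write one pixel per line; the testbench reads them in order."""
    with open(path, "w") as f:
        f.write("\n".join(frame_to_hex_lines(pixels)) + "\n")
```

The "automatic" alternative usually means a tool- or script-generated testbench that does the same file reading for you.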
Herbert Kleebauer wrote: > We use in a laboratory course still XILINX XC3000 FPGAs with > Viewlogic's Workview design entry (DOS version) and XILINX > XACT (also DOS). The problem is that we have to replace the > old PC's and that Viewlogic only supports a few graphics modes > and it is unlikely that it will run on new PC's. The last > version of XILINX ISE software which supports XC3000 FPGA's > isn't an alternative (and I'm not sure whether it will > run on W2k/XP) because the system must be extremely easy to > use so the students are able to design and implement a simple > CPU in about 10 hours (including the time to learn how to use > the schematic entry and simulation tool). What are the prime teaching targets: learning FPGA flows, or learning shematic entry ? > > Some questions: > > 1. I have tried to find an actual FPGA with a package which can > be soldered with a non professional equipment, something like > a PLCC84 where you can get cheap sockets which can be used on > self made PCBs and if possible with a VCC of 5 V to easy interface > with external TTL logic. XILINX and ACTEL only offers packages with > a pin distance of 0.5 mm. ATMEL's AT40K20 would fulfill this > requirements but I'm not sure if this architecture is still > supported (ATMEL's documentation is five years old) and whether > there exists good development software. Isn't '5 years old', actually new on your time scales ? Get the Atmel tools and try them > > - has anybody experience with ATMEL's AT40K20 and can suggest > development software (it must be a schematic entry, no VHDL > because the students have to "see" the processor at gate level. What about simpler HDLs, like CUPL or ABEL ? With those, you can 'see' the AND and OR terms ? What about 'seeing' the result in the report files - is that gate-level enough ? > > - does anybody know other FPGAs which could be used (or is the > hobby market completely uninteresting for the manufactures). 
Your best pathway into new devices, is a daughter card approach. Put the 'CPU'/REG/CAPS on a tiny PCB, with pin headers. > > > 2. Was somebody able to run Viewlogic (DOS version) in a virtual > PC emulation. The problem is, the virtual PC must provide > the proper graphics mode, mouse type and support a physical > dongle on the virtual parallel port. Keys on virtual parallel ports ?! Nope... > > > Here a description of the students project: > ftp://137.193.64.130/pub/mproz/mproz_e.pdf Interesting, a 3 opcode CPU. I'd look at the CPLDs, and which devices support simpler HDLs CUPL/ABEL, or even Altera's AHDL - the biggest FPGAs are all Verilog/VHDL flows, but you are looking at the simpler end of the scale, so a simpler HDL might fit the teaching targets better. Atmel have up to ATF1508, which could do a 3 opcode CPU, but maybe not this one, if you want to clone the SCHs precisely, as there seems to be many layers of logic. Xilinx have XC95xx and Coolrunner II, I think with ABEL flows on all CPLDs, Lattice have IspMACH4000 family, and Abel in their CPLD flows. -jgArticle: 116903
"jd" <jaggunitj@yahoo.co.uk> wrote in message news:1174332556.604594.212410@d57g2000hsg.googlegroups.com... > hi all > > i am an engineering final year student in english > i am doing a project on "CONFIGURING FPGA USING XC9500 CPLD AND > PARALLEL PROM" > i am staring frm the scratch > i studied some literature... > but i am not understanding the following things > > * how to create bitstream > * how to create configuration file > * how to load bitstream > > how to use and what is the role of vhdl in this application.. > > plz help > You definately need to start with simple VHDL/Verilog program, like blinky LED, synthesize the code and download bitstream into FPGA for testing. This will give you at least basic idea of the design flow. Check www.xess.com site, they sell XSA-3S1000 eval kit which implements your project ideas of configuring FPGA from parallel flash with the help of CPLD. They have schematics of the board posted in User's Manual. Although new Spartan3E family is capabale of self-configuring from parallel flash. good luck!Article: 116904
Herbert Going sideways on what you are looking for it is worth looking at a couple of ideas from our product line to allow the easy use of modern FPGAs. The first is our Craignell family http://www.enterpoint.co.uk/component_replacements/craignell.html which operate from 5V, in a DIL format, and are fully 5V tolerant. At the moment we do 32,36,40 pin versions but I expect to have 28 and 48 pin versions added to the range. Maybe a few others if someone gives us a good reason. Almost a bigger brother our product Darnaw1 is waiting in our lab for a couple of days test before it goes into mass manufacture. This is a 2.54mm pitch PGA style module that lets you use a XC3S1200E/1600E Spartan. This module is 3.3V tolerant and operates from a single 3.3V input. The module also has spi flash and sdram to allow the implementation of fairly powerful processor applications. If you like the concepts of these modules have a look at our university program (UAP). It offers discount and various other academic support things. Details here http://www.enterpoint.co.uk/uap/uap.html. John Adair Enterpoint Ltd. On 20 Mar, 16:23, Herbert Kleebauer <k...@unibwm.de> wrote: > We use in a laboratory course still XILINX XC3000 FPGAs with > Viewlogic's Workview design entry (DOS version) and XILINX > XACT (also DOS). The problem is that we have to replace the > old PC's and that Viewlogic only supports a few graphics modes > and it is unlikely that it will run on new PC's. The last > version of XILINX ISE software which supports XC3000 FPGA's > isn't an alternative (and I'm not sure whether it will > run on W2k/XP) because the system must be extremely easy to > use so the students are able to design and implement a simple > CPU in about 10 hours (including the time to learn how to use > the schematic entry and simulation tool). > > Some questions: > > 1. 
I have tried to find an actual FPGA with a package which can > be soldered with a non professional equipment, something like > a PLCC84 where you can get cheap sockets which can be used on > self made PCBs and if possible with a VCC of 5 V to easy interface > with external TTL logic. XILINX and ACTEL only offers packages with > a pin distance of 0.5 mm. ATMEL's AT40K20 would fulfill this > requirements but I'm not sure if this architecture is still > supported (ATMEL's documentation is five years old) and whether > there exists good development software. > > - has anybody experience with ATMEL's AT40K20 and can suggest > development software (it must be a schematic entry, no VHDL > because the students have to "see" the processor at gate level. > > - does anybody know other FPGAs which could be used (or is the > hobby market completely uninteresting for the manufactures). > > 2. Was somebody able to run Viewlogic (DOS version) in a virtual > PC emulation. The problem is, the virtual PC must provide > the proper graphics mode, mouse type and support a physical > dongle on the virtual parallel port. > > Here a description of the students project:ftp://137.193.64.130/pub/mproz/mproz_e.pdfArticle: 116905
I've played around with Xilinx PicoBlaze processor, but it's time to step up into 32-bit softcore CPU world for more serious designs, potentially getting in line with embedded OS. I am facing a choice, whether to use always up to date Xilinx EDK tools integrated with ISE and MicroBlaze, which comes with good documentation, there is a third party uClinux port. The other alternative is using Altium Designer FPGA goodies bag. They offer a wide range of Wishbone compatible cores (no source code though), platform independent primitive libraries, a choice of few softcore processors including 32-bit RISC core TSK3000, and even support for the same MicroBlaze, but I've noticed that supported version is a bit behind. Currently I am considering Altium route as it brings more value to the table, providing vendor independence, but at the same time I would entirely depend on Altium continuous support towards FPGAs. I would appreciate if anybody could share a similar experience or thoughts.Article: 116906
InmateRemo, Thoughts to share: I would suggest that Xilinx is the only provider with a continuous history of providing a code compatible (MicroBlaze) soft processor. Where others had their first version (which did not work very well) and then abandoned it (leaving all their customers with useless code). Xilinx recognizes that to be a serious player in the embedded processor space there must be backward compatible code (forever). Intel's rule is very simple, you can research and play with any architecture you wish, but there is one, and only one instruction set (x86). In a similar fashion, we have PicoBlaze (still the KCPSM core from long ago), MicroBlaze (32 bit Harvard architecture soft core optimized for our architecture -- unchanged as far as instructions from day 1), and the IBM Power PC family (405, 4??, ???: the roadmap being IBM's "power" architecture roadmap, just delayed). With as many customers as we have, with all of their designs, and as many seats of software (more than 250,000 installed), and our long history (invented the FPGA in 1984), besides our business position (took PowerPC(tm IBM) architecture from ~33% in embedded systems when we introduced Virtex II Pro, to more than 50% of embedded systems today); you would be well served to stick with Xilinx. AustinArticle: 116907
Hi folks, Is it possible to get EPROMs for Altera (Cyclone 2) devices that work at 1.8V? I appreciate the help and insights. Best, SanjayArticle: 116908
On Feb 12, 3:07 pm, kjasap...@yahoo.com wrote: > Hello, > > I am trying to debug a scenario where I am unable to upload the FPGA > image from the prom to the FPGA. The image downloaded to the prom seems > correct. > > iMPACT : 8.1.03i, Prom xcf32p, and Xilinx V Pro 100. > > Downloading the .mcs file via iMPACT always works. But if I bitbang > the generated .xsvf file to the prom it sometimes does not work. > Although the prom verifies okay (through iMPACT) and also the checksums > of the various regions are also fine. > > I am using region 0 only and with compression mode enabled (the image > size without compression is about 102%.). > > The FPGA is in slave mode and apparently the clock also seems fine. I > have verified the clock both after power up(reset) and with LOAD FPGA > option enabled. The Prom seems to be clocking okay (internally > clocked) but apparently the data bits are all high (parallel mode). > > Is there a way to dump the xcf32p status registers? > > Any suggestions would be helpful. > > Thanks in advance, > -Kalpesh Try using iMPACT 9.1 to generate the xsvf file. I think 8.1 has a bug in its generation of xsvf files for xcf32p parts. -BrianArticle: 116909
Hello, I have a design on a SX55 that consists of two complex (real + imaginary) datapaths. I've been trying to debug the code in Chipscope using ILAs before and after my processing block. Both channels are identical in vhdl as they use the same entity. For some reason the output of datapath #1 is showing all zeroes in Chipscope. I checked the design in the FPGA editor and it looks like all of the components and nets are where they should be. I don't have any relevant errors on constraints either. Please see screenshot linked below: http://img400.imageshack.us/my.php?image=ila4tz5.png For some reason I'm only getting zero-valued data out of channel 1, and consequently it never gets enabled, unlike channel 2... Any ideas? Thanks, -BrandonArticle: 116910
Hi, Sure, DSP48 has more stuff grafted to the multiplier but the multiplier is by far the most costly and performance-critical function. Adders and muxes are not particularly expensive by comparison. Since the OP said the application was video scaling, the access pattern should be highly linear and require loading only a few consecutive lines of video data for highly efficient processing. The (usually) highly linear structure of scaling functions and data usually lends itself quite nicely to pipelining so sharing some multipliers, assuming the overall count cannot be reduced in the first place, should not be too problematic at medium/low resolutions. Building a custom DRAM controller is certainly not a novice's job: most companies that use DRAMs in high-end applications have at least one skilled engineer working full-time on optimizing, updating and testing memory controllers to achieve the highest throughput, lowest and steadiest latencies possible or making sure outsourced/third-party controllers work as advertised. Reworking any design to move from BRAMs to DRAMs (or even SRAMs, to a lesser extent) will be pretty far from trivial in all but a few rare cases since DRAM R/W control is of such a drastically different nature from BRAMs'. Hours/days/weeks... possibly months of fun ahead. Paul wrote: > Ken, > > The Spartan & Virtex parts are VERY different parts... you will > certainly encounter a handful of problems trying to port this design. > First, the spartan has multipliers, but the DSP slices include MUCH > more than that. You will find that if the old design used the > multiplier-accumulate portion of the DSP slices, each of those is > going to eat up a bunch of FPGA fabric (slices) in the ported > version. As for using the DRAM... you will need to either design a > controller or use an existing one. Xilinx has a memory interface > generator that will generate a controller for you. There are also > probably a couple on opencores.org. 
If you are brand new to FPGA > design, I would definitely not suggest trying to design your own - > DRAM controllers can be tricky. But you will need to rewrite parts > of your code to properly interface to the controller, as it will > likely not look exactly like a block ram. Also, if the design made > use of the dual-ports on the BRAMs, then you're going to have to come > up with a way to get around the fact that you no longer have the > luxury of dual ports. Good luck, you've got a hefty project ahead of > you. > >> Hmm, for the time being, I shall try to find the information about the peak >> bandwidth first. Then later on, i could move on to the logic for prefetch >> data and BRAMs as IO buffers. >> >> Heh, I guessed everyone got their own problems too. IOBs..huh.. Can't help >> you much on that though. >> Good luck to you! Thanks a lot!~ > >Article: 116911
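Finding the peak bandwidth, as suggested above, is worth doing before any controller work starts. A back-of-the-envelope estimate (the resolution and pixel-format numbers below are illustrative only, not from the thread):

```python
def video_bandwidth_mb_s(width, height, fps, bytes_per_pixel, passes=2):
    """Sustained memory bandwidth in MB/s. passes=2 counts one read
    plus one write of every pixel per frame, as a scaler typically
    streams a frame in and a frame out."""
    return width * height * fps * bytes_per_pixel * passes / 1e6

# e.g. 640x480 @ 60 Hz, 2 bytes/pixel, read + write -> ~74 MB/s,
# which then has to be compared against the DRAM's *achievable*
# (not peak) bandwidth after refresh and bank-turnaround overhead.
```

If the required figure is a large fraction of the DRAM's peak, the access pattern and burst efficiency of the controller start to matter a great deal.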
Hi, It would depend on whether you want to use MicroBlaze or not, and on how important new MicroBlaze features will be. Altium is also a nice tool, and you should compare it with the Xilinx tools to see which one provides the best features for your needs. But the Xilinx tools will be the ones with the best support for MicroBlaze and all its new features. Being vendor independent is nice, but it also comes at a cost in performance and price. Good luck in your decision Göran "InmateRemo" <remis4pro@yahoo.com.extra> wrote in message news:iaYLh.18079$MR6.3246@fe1.news.blueyonder.co.uk... > I've played around with Xilinx PicoBlaze processor, but it's time to step > up into 32-bit softcore CPU world for more serious designs, potentially > getting in line with embedded OS. > > > > I am facing a choice, whether to use always up to date Xilinx EDK tools > integrated with ISE and MicroBlaze, which comes with good documentation, > there is a third party uClinux port. > > The other alternative is using Altium Designer FPGA goodies bag. They > offer a wide range of Wishbone compatible cores (no source code though), > platform independent primitive libraries, a choice of few softcore > processors including 32-bit RISC core TSK3000, and even support for the > same MicroBlaze, but I've noticed that supported version is a bit behind. > > > > Currently I am considering Altium route as it brings more value to the > table, providing vendor independence, but at the same time I would > entirely depend on Altium continuous support towards FPGAs. > > > > I would appreciate if anybody could share a similar experience or > thoughts. > > > >Article: 116912
Herbert Kleebauer wrote: > We use in a laboratory course still XILINX XC3000 FPGAs with > Viewlogic's Workview design entry (DOS version) and XILINX > XACT (also DOS). The problem is that we have to replace the > old PC's and that Viewlogic only supports a few graphics modes > and it is unlikely that it will run on new PC's. The last > version of XILINX ISE software which supports XC3000 FPGA's > isn't an alternative (and I'm not sure whether it will > run on W2k/XP) because the system must be extremely easy to > use so the students are able to design and implement a simple > CPU in about 10 hours (including the time to learn how to use > the schematic entry and simulation tool). > > Some questions: > > 1. I have tried to find an actual FPGA with a package which can > be soldered with a non professional equipment, something like > a PLCC84 where you can get cheap sockets which can be used on > self made PCBs and if possible with a VCC of 5 V to easy interface > with external TTL logic. XILINX and ACTEL only offers packages with > a pin distance of 0.5 mm. ATMEL's AT40K20 would fulfill this > requirements but I'm not sure if this architecture is still > supported (ATMEL's documentation is five years old) and whether > there exists good development software. > > - has anybody experience with ATMEL's AT40K20 and can suggest > development software (it must be a schematic entry, no VHDL > because the students have to "see" the processor at gate level. > > - does anybody know other FPGAs which could be used (or is the > hobby market completely uninteresting for the manufactures). > > > 2. Was somebody able to run Viewlogic (DOS version) in a virtual > PC emulation. The problem is, the virtual PC must provide > the proper graphics mode, mouse type and support a physical > dongle on the virtual parallel port. 
> > > Here a description of the students project:
> ftp://137.193.64.130/pub/mproz/mproz_e.pdf

This is such a simple CPU, that I quickly tried it targeting the ATF1508 - which _is_ 5V and _is_ PLCC84, so you can keep all your design notes, and just re-map the pins on the PCB layout you have. Below is the CUPL code (minus STEU and full adder), which is close enough to registers that the students should be able to "see" the processor at gate level. This FITs with 46 spare macrocells in an ATF1508, which should be plenty to complete the adder, and STEU state engine. As you can see, this is probably easier to read than the SCHs, 8 blocks in the main diagram become 8 (very simple) equation sets.

FIELD Sta = [Sta2..Sta0];
FIELD Din = [Din15..Din0];
FIELD Adr = [Adr15..Adr0];
FIELD PC = [PC15..PC0];
FIELD XReg = [XReg15..XReg0];
FIELD YReg = [YReg15..YReg0];
FIELD XGate = [XGate15..XGate0];
FIELD YGate = [YGate15..YGate0];
FIELD ALU = [ALU15..ALU0];

/* ~~~~~~~~~ XReg, YReg, PC Simple Registers ~~~~~~~~~~~~ */
XReg.d = Din; XReg.ck = CLK; XReg.ce = s1;
YReg.d = Din; YReg.ck = CLK; YReg.ce = s2;
PC.d = ALU; PC.ck = CLK; PC.ce = s3;

/* ~~~~~~~~~~~~~~~~ Adr is AMUX out ~~~~~~~~~~~~~~ */
/* AMUX: IF (s4=0) THEN out=in1 ELSE out=in2 */
Adr = s4 & YReg # !s4 & PC;

/* ~~~~~~~~~~~~~~~~ XGate, YGate ~~~~~~~~~~~~~~~~~ */
/* XGATE: IF (s5=0) THEN out=$0000 ELSE out=in */
XGate = s5 & XReg # !s5 & 'h'0000;
/* YGATE: IF (s6=0) THEN out=$0001 ELSE out=in */
YGate = s6 & YReg # !s6 & 'h'0001;

/* ~~~~~~~~~~~~~~~ ALU, pre-adder ~~~~~~~~~~~~~~~~~ */
/* ALU: IF (s7=0) THEN out=in1 ADD in2 , F=carry ELSE out=in1 NOR in2 , F=zero */
ALU = s7 & !(XGate # YGate) # !s7 & (XGate $ YGate); /* Dummy, until adder done */
F.d = s7 & 'b'0 # !s7 & (XGate0 $ YGate0); /* Dummy, until adder-CY done */
F.ck = CLK; F.ce = s8;Article: 116913
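Those same eight blocks can also be sanity-checked in plain software before a fitter run. The sketch below models the muxes/gates above plus the *intended* ALU behavior (the CUPL above still has XOR/carry dummies in place of the full adder); it is my reading of the control-signal spec, not something tested against the real design:

```python
MASK = 0xFFFF  # 16-bit datapath

def amux(s4, yreg, pc):
    """AMUX: s4=0 -> PC drives Adr, s4=1 -> YReg drives Adr."""
    return yreg if s4 else pc

def xgate(s5, xreg):
    """XGATE: s5=0 -> 0x0000, else pass XReg."""
    return xreg if s5 else 0x0000

def ygate(s6, yreg):
    """YGATE: s6=0 -> 0x0001, else pass YReg."""
    return yreg if s6 else 0x0001

def alu(s7, a, b):
    """s7=0: a ADD b, flag = carry out.
       s7=1: a NOR b, flag = result-is-zero."""
    if s7:
        r = ~(a | b) & MASK
        return r, int(r == 0)
    r = a + b
    return r & MASK, int(r > MASK)
```

With YGATE's constant 0x0001 feeding the adder, PC <= ALU gives the usual "PC + 1" increment without any dedicated counter, which is the trick that keeps this CPU down to so few equations.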
Hello, I am running into a strange problem with the dual-port block RAM in Virtex-II and at this point have run out of ideas :( In my design I am trying to perform data acquisition on a 32-bit parallel data stream running at 125 MHz. I have two memory blocks into which I want to be able to direct the data: internal block RAM and external SRAM. Internal RAM is dual ported with one port connected to the local bus. The SRAM controller can be switched between the local bus and the data port. Data source is the same for both block RAM and the SRAM - their inputs are taken from a single signal. Within the design I have a little test pattern generator to produce a fake data stream for testing. So here is the problem: the data is written correctly into the SRAM, but not into the block RAM. It is a timing problem - errors go away if I lower the clock frequency. The same problem persists with both ISE 8.2i and 9.1i. Here are some numbers: The part is XC2V3000-4FF1152. Clock constraint is 7.6 ns. Static timing analysis gives me 7.632 ns. Experimentally-determined maximum clock frequency for error-free acquisition into the block RAM is 105 MHz. Maximum clock frequency for SRAM - around 150 MHz. As I play around with block RAM (instantiated vs. inferred, pipelining in front of the memory), maximum frequency moves in the range from 90 to 120 MHz. SRAM maximum frequency is consistently around 150 MHz. Block RAM errors are typically single-bit, sometimes two bits. Which bit it is seems to move around from one compilation to another. -- Dmitry TeytelmanArticle: 116914
InmateRemo wrote: > I've played around with Xilinx PicoBlaze processor, but it's time to step up > into 32-bit softcore CPU world for more serious designs, potentially getting > in line with embedded OS. > > > > I am facing a choice, whether to use always up to date Xilinx EDK tools > integrated with ISE and MicroBlaze, which comes with good documentation, > there is a third party uClinux port. > > The other alternative is using Altium Designer FPGA goodies bag. They offer > a wide range of Wishbone compatible cores (no source code though), platform > independent primitive libraries, a choice of few softcore processors > including 32-bit RISC core TSK3000, and even support for the same > MicroBlaze, but I've noticed that supported version is a bit behind. > > > > Currently I am considering Altium route as it brings more value to the > table, providing vendor independence, but at the same time I would entirely > depend on Altium continuous support towards FPGAs. > > > > I would appreciate if anybody could share a similar experience or thoughts. Aren't these two mutually exclusive ?! "(no source code though)" and "providing vendor independence" ie, with no source code, you have just locked yourself into vendor Altium - surely a crazy thing to do ? If you like vendor independence, then look at Lattice Mico8/Mico32, which are open source. -jgArticle: 116915
Thanks for your answer. The libraries are compiled, they were found and used. Modelsim exits with the fatal error (a pop-up opens) and writes nothing more to console or logfile. The error occurs at the moment when all files and libs have already been found and used and the simulation starts - the moment when the simulator changes the display and opens a wave window or something like that (the moment when the real simulation should start). --- Original Message --- Sender: HT-Lab Date: 20.03.2007 14:46 > Try to run Modelsim in command line mode (vsim -c) you sometimes get some > extra info. Also make sure that all your primitive libraries are compiled > (use compxlib) with the version of Modelsim you are using, > > Hans > www.ht-lab.com > > "Markus" <outofmem@arcor.de> wrote in message > news:45ff2faf$0$23145$9b4e6d93@newsspool1.arcor-online.net... >> When I try to run a timing simulation (simprim is used) modelsim pe >> student exits with fatal error and exit code 211. >> Modelsim XE works fine, but sloooow. >> Does anybody has some experience with this problem and an advice maybe? > >Article: 116916
I've been using the Xilinx Webpack 8.2i since sometime in November, and I've become so irritated with their software that I'm about ready to just become a rabid Xilinx basher. I've encountered uncountable crashes while actually trying to use their horribly clunky ISE. On a recent weekend I found four different internal errors in XST while using their command line tools. For a product which boasts a copyright going back to 1995 -- that's a 12-year-old product -- it sure feels like alpha release software. My most recent issue is the fact that most VHDL attributes are absolutely broken in the VHDL compiler. How on earth could such a horribly engineered and maintained product last for 12 years? For example, this simple source will cause the XST product to produce one error. Only one. It gives up after one error, but if you reorder the assignments to 'i' and 'l' it will produce an error for the other use of 'pred' as well.

entity main is
  Port (clk : in boolean);
end main;

architecture are_xilinx_tools_inferior of main is
  type logic_level is (unknown, low, undriven, high);
  type index is range 5 downto 0;
  signal l : logic_level := undriven;
  signal i : index := 4;
begin
  driver : process (clk) is
  begin
    if clk then
      i <= index'pred(i);
      l <= logic_level'pred(l);
    end if;
  end process driver;
end are_xilinx_tools_inferior;

The error it produces is:

ERROR:Xst:772 - "main.vhd" line 18: Attribute is not authorized : 'pred'.

Not authorized? Do I need to call a bank or some police department to obtain 'authorization'? (Incidentally, the freely available VHDL Simili compiler will compile the above source with no complaint). Attempts to file reports about Xilinx's terrible software with the company itself have fallen upon the canonical "three monkeys" (see no evil, hear no evil, speak no evil). You can't file reports to the company until you register. When you register you are asked ONLY for a name, email address and password -- and that's all you can provide.
Then, after a few days, they'll deny you access to file reports with them because you failed to provide a company name, street address and job title. Holy cow, Batman. How can a company thrive when it strives to avoid bettering their software through 12 years of releases, and then further sets up insurmountable hurdles in order for customers to report issues to them? Does anyone know how to get Xilinx's attention, or how to get their software to actually work? Frankly, I don't care about the ISE environment -- that's way beyond being fixable. I'd be happy if I could just get a VHDL compiler which actually produces images which can be loaded on the Xilinx Spartan 3E board. Any help would be greatly appreciated. thuttArticle: 116917
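For what it's worth, 'pred is well-defined in the VHDL LRM: T'pred(x) is the value at the position one below x's position, for both enumeration and integer types. A quick software model of what the two rejected uses above should evaluate to (this is my reading of the LRM semantics, hedged accordingly):

```python
def vhdl_pred(values, x):
    """Model of T'pred over an ordered list of values: one position
    to the left; out of range at the leftmost value."""
    i = values.index(x)
    if i == 0:
        raise ValueError("'pred past the left bound")
    return values[i - 1]

logic_level = ["unknown", "low", "undriven", "high"]
# For an integer type, position == value, so index'pred(4) is
# simply 3 -- even though the range is declared "5 downto 0";
# the downto direction does not change 'pred/'succ.
```

So logic_level'pred(undriven) = low, and index'pred(4) = 3, exactly as Simili presumably computes.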
"Taylor Hutt" <thutt151@comcast.net> wrote in message news:m38xdrqykd.fsf@localhost.localdomain... > > I've been using the Xilinx Webpack 8.2i since sometime in November, > and I've become so irritated with their software that I'm about ready > to just become a rabid Xilinx basher. > snipped rabid bashing... > > Does anyone know how to get Xilinx's attention, Get a purchasing job at Cisco? HTH, Syms. p.s. You've really lasted since November? Dude, RESPECT! :-)Article: 116918
Taylor Hutt wrote: > I've been using the Xilinx Webpack 8.2i since sometime in November, > and I've become so irritated with their software that I'm about ready > to just become a rabid Xilinx basher. <snip> > Does anyone know how to get Xilinx's attention, or how to get their > software to actually work? Try the code in Altera's Quartus, and quote that - nothing like being seen to be last, to hurry someone along.... While you have Quartus up, you could even see how the new Cyclone III looks for the design. Mention that to Xilinx as well... -jgArticle: 116919
Dmitry, whatever timing problems you have, they have nothing to do with the BlockRAM itself. It is a synchronous device (think of it as a flip-flop or register) with a data and address input set-up time below 1 ns, and no hold time requirement. Clock-to-out (for reading) can be up to 3 ns. If you have problems below 150 MHz, those timing problems are elsewhere. I hope you use a global clock for clocking the BRAM and the adjacent logic... Peter Alfke, Xilinx Applications On Mar 20, 4:07 pm, dim...@moc.liamg wrote: > Hello, > > I am running into a strange problem with the dual-port block RAM in > Virtex-II and at this point have run out of ideas :( > > In my design I am trying to perform data acquisition on a 32-bit parallel > data stream running at 125 MHz. I have two memory blocks into which I want > to be able to direct the data: internal block RAM and external SRAM. > Internal RAM is dual ported with one port connected to the local bus. SRAM > controller has can be switched between the local bus and the data port. > Data source is the same for both block RAM and the SRAM - their inputs are > taken from a single signal. Within the design I have a little test pattern > generator to produce a fake data stream for testing. > > So here is the problem: the data is written correctly into the SRAM, but > not into the block RAM. It is a timing problem - errors go away if I lower > the clock frequency. > > The same problem persist with both ISE 8.2i and 9.1i. Here are some > numbers: > > The part is XC2V3000-4FF1152. Clock constraint is 7.6 ns. Static timing > analysis gives me 7.632 ns. Experimentally-determined maximum clock > frequency for error-free acquisition into the block RAM is 105 MHz. > Maximum clock frequency for SRAM - around 150 MHz. > > As I play around with block RAM (instantiated vs. inferred, pipelining in > front of the memory), maximum frequency moves in the range from 90 to > 120 MHz. SRAM maximum frequency is consistently around 150 MHz. 
Block RAM > errors are typically single-bit, sometimes two bits. Which bit it is seems > to move around from one compilation to another. > > -- > Dmitry TeytelmanArticle: 116920
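Peter's point can be put in numbers: the 7.632 ns static-timing path only explains failures above roughly 131 MHz, so errors already appearing at 105 MHz must come from somewhere the analysis does not cover (an unconstrained path, a non-global clock, I/O timing). A trivial check, using the figures quoted in the post:

```python
def fmax_mhz(worst_path_ns):
    """Maximum clock frequency implied by a worst-case path delay."""
    return 1000.0 / worst_path_ns

# STA reports 7.632 ns -> ~131 MHz, yet BRAM writes fail at 105 MHz,
# while the BRAM primitive itself needs <1 ns setup with ~3 ns
# clock-to-out -- so the BRAM is not the bottleneck; something
# outside the constrained paths is.
```

The moving single-bit errors across compilations also fit an unconstrained or unanalyzed path better than a BRAM limitation.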
Taylor, Well, Peter and I read the newsgroup, so we can provide an omudsman function. Did you register? Did you get confirmation back? What was it that did not allow you to file a webcase? If possible, send me what went wrong (links, steps taken, etc.). With 250,000 seats of software, some people are able to file webcases...so it can't be completely broken! For example, I see the reports on what cases get filed, and what category they are assigned to (software, hardware, and so on). One other comment: our synthesis tool (XST) was never meant to compete with the "real" tools that exist. XST is a vehicle for research into synthesis, where we have an opportunity to test how synthesis works with our FPGAs. We share all synthesis ideas and improvements with the "real" synthesis tool vendors, so that they may add value by performing more efficient synthesis using our devices. This is in no way an apology for bugs, but a statement of fact. XST is not intended to compete with "real" synthesis tools. It is made available in Webpack, as a means to allow others to get some feeling for the flow, and the potential. The XST team is dedicated to pioneering improvements, and they very much like to get feedback. Efficient synthesis for things like our DSP48e, and other features, is not trivial: an older tool may synthesize correctly, yet be horribly inefficient, and turn out huge areas and slow logic. In any event, I will pass your issues along to the XST development team once I get the details of what did not work. AustinArticle: 116921
Ombudsman... Perhaps the previous spelling was a subconscious error? In any event, send me the problems. AustinArticle: 116922
dimtey@moc.liamg wrote: > ... > So here is the problem: the data is written correctly into the SRAM, but > not into the block RAM. It is a timing problem - errors go away if I lower > the clock frequency. > ... Show the actual clock constraints you are using.Article: 116923
> > One other comment: our synthesis tool (XST) was never meant to compete > with the "real" tools that exist. XST is a vehicle for research into > synthesis, where we have an opportunity to test how synthesis works with > our FPGAs. We share all synthesis ideas and improvements with the > "real" synthesis tool vendors, so that they may add value by performing > more efficient synthesis using our devices. This is in no way an > apology for bugs, but a statement of fact. XST is not intended to > compete with "real" synthesis tools. It is made available in Webpack, > as a means to allow others to get some feeling for the flow, and the > potential. Wow, I've been using xst (and webpack, base-x, and foundation) for years now and never once heard this. I've used it in over 20 designs, including some moderately-high-end virtex-4 stuff, and never had the slightest idea that I was using inferior tools. Who makes these other synthesis tools? Are they expensive? Is this what happens when you do all of your engineering at a university? ...EricArticle: 116924
On Mar 20, 12:07 pm, "bwilso...@gmail.com" <bwilso...@gmail.com> wrote: > On Mar 19, 5:31 am, "Torsten Landschoff" <t.landsch...@gmx.de> wrote: > > > Hi there! > > > I am wondering what the default IOSTANDARD is on pins for which it is > > not explicitly assigned in the UCF of the project. Here, the project > > uses LVCMOS25 for some pins where nothing is set explicitly - is that > > always the default value? Is it a good style to always define the > > IOSTANDARD in any case? > > > Thanks for any hints, Torsten > > May I recommend ALWAYS specifying the defaults? The designer should > know what the IOSTANDARD and other attributes should be for each port, > and should not rely on the tools for a default. What if the software > were to change the defaults? No "what if" about it - ISE did exactly that sometime around 5.1 (+/- 1 :-) I now always specify my IOSTANDARD. Marc
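Always specifying the IOSTANDARD, as recommended above, is a one-liner per pin in the UCF; a minimal example (the net names and pin locations here are made up for illustration):

```
NET "clk"        LOC = "C9" | IOSTANDARD = LVCMOS33 ;
NET "data_in<0>" LOC = "A7" | IOSTANDARD = LVCMOS25 ;
```

With every pin pinned down like this, a change in the tools' default (as happened around ISE 5.1) cannot silently alter the I/O behavior of the design.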