Christian E. Boehme wrote:
> I am using a Xilinx Spartan II FPGA on a prototype PCI add-in card
> as the PCI device attached to the bus. According to the Spartan II
> data sheet it appears that 5V PCI compatible IOs are indeed instantiable
> which was the very reason for going with a Spartan II in the first place.
>
> However, after correlating the appropriate requirements from the PCI spec
> with what is claimed in the Spartan II data sheet I am not 100% positive
> about true 5V PCI compliancy without extra circuitry (namely clamping
> diodes to the 5V rail) because the PCI spec requires that a device
> withstand AC worst case voltages of +11V down to -5.5V respectively while the
> data sheet gives +7V down to -2V which more or less resembles only the
> 3.3V PCI AC requirements. Notice that I am concerned about the over
> and under voltages during switching and not the DC or ``5V tolerance''
> behaviour of the device.

PCI spec waveforms (+11V for overshoot, -5.5V for undershoot) are specified at the resistor (55 Ohm for overshoot, 25 Ohm for undershoot) that is part of the test setup, not at the input pin (section 4.2.1.3). When you overshoot or undershoot, the clamp diodes pass current, causing a voltage drop across the resistor and a reduced voltage seen at the input pin.

-- Paul Fulghum
paulkf@microgate.com
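A rough illustration of that point, using assumed round numbers rather than figures taken from the PCI spec or the Spartan-II data sheet: if the overshoot test source drives +11V through the 55 Ohm resistor while the input clamp holds the pin near VCCO + 0.7V (about 4V on a 3.3V rail), the clamp sinks roughly (11 - 4) / 55 = 0.13 A during the transient, and the pin itself only ever sees about 4V rather than 11V. The practical question to check against the data sheet is whether the clamp structure tolerates that transient current, not whether the pin survives 11V.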
Article: 72651
Vinod,

I do not fully understand your problem but I can guess that you may be having issues with order of events in the simulator other than what you are expecting. Try to offset your input stimulus from the clock so that the two do not happen at the same time to see if that makes things clearer as to how the logic operates. In other words try changing your input stimulus to change 1 ns after the clock edge and see if the results look more as you would expect.

-- Brian

vinod wrote:
> Brian Philofsky,
> Thank u for ur response..i made changes in my stimulus as u told
> stimuli : process
> begin
> wait for 20 ns;
> reset <= '1';
>
> wait for 120 ns;
> reset <= '0';
>
> wait;
> end process stimuli;
> and i made simulation once again but @180 ns the rising edge data '0'
> of d comes to q1 in same rising edge ..actually it shud come at
> falling edge.. it is correct in functional simulation... i am getting
> same wrong result even if I used IFFDDRSE primitive .
>
> can u check this plz..
>
> thank u
> vinod
>
> Brian Philofsky <brian.philofsky@no_xilinx_spam.com> wrote in message news:<412E568B.9020503@no_xilinx_spam.com>...
>
>>Vinod,
>>
>>I would guess that the reason you are seeing a functional difference
>>would be due to the fact that you are not waiting until the global
>>set/reset signal to complete. For all Xilinx gate-level simulations, a
>>global set/reset pulse is generated for the first 100 ns of the design
>>to initialize all of the registers and simulate the effect for the GSR
>>signal in simulation. For the first 100 ns of the design, the registers
>>will be in reset and will not change value. If you hold off you
>>stimulus for the first 100 ns of simulation (keep your clock running and
>>initialize the inputs but just don't wiggle them yet), you will likely
>>see more correlation between your behavioral and post-translate simulation.
>>
>>Also, if you are desiring to have the DDR register pulled into the I/O
>>and use the dedicated resource, you should be instantiating the IFDDRRSE
>>in the design. The code you are creating will use two separate
>>registers but not the dedicated DDR registers in the I/O.
>>
>>-- Brian
>>
>>vinod wrote:
>>
>>>i am vinod..i got problem with ddr for virtex2 fpga. i have written
>>>code and did functional simulation everything is correct but after
>>>post translate simulation i am not getting same result. here is my
>>>code and testbench
>>>
>>>code:
>>>library ieee;
>>>use ieee.std_logic_1164.all;
>>>library UNISIM;
>>>use UNISIM.VCOMPONENTS.ALL;
>>>
>>>entity input_ddr is
>>>Port ( d : in std_logic;
>>>reset : in std_logic;
>>>clk : in std_logic;
>>>dataoutx : out std_logic;
>>>dataouty : out std_logic
>>> );
>>>end input_ddr;
>>>
>>>architecture input_ddr_arch of input_ddr is
>>>
>>>signal q1, q2 : std_logic;
>>>
>>>begin
>>>
>>>process (clk,d,reset)
>>>begin
>>>if reset = '1' then
>>> q1 <= '0';
>>> dataoutx <= '0';
>>>elsif rising_edge(clk) then
>>>q1 <= d;
>>>dataoutx <= q1;
>>>end if;
>>>end process;
>>>
>>>process (clk,d,reset)
>>>begin
>>>if reset = '1' then
>>> q2 <= '0';
>>> dataouty <= '0';
>>>elsif falling_edge(clk) then
>>>q2 <= d;
>>>dataouty <= q2;
>>>end if;
>>>end process;
>>>
>>>end input_ddr_arch;
>>>
>>>testbench....
>>>library ieee;
>>>use ieee.std_logic_1164.all; library ieee;
>>>use ieee.std_logic_1164.all;
>>>use ieee.numeric_std.all;
>>>use IEEE.STD_LOGIC_1164.ALL;
>>>use IEEE.STD_LOGIC_ARITH.ALL;
>>>use IEEE.STD_LOGIC_UNSIGNED.ALL;
>>>library UNISIM;
>>>use UNISIM.VCOMPONENTS.ALL;
>>>
>>>entity tbddr is
>>>end entity tbddr;
>>>
>>>architecture test2 of tbddr is
>>>component input_ddr is
>>>Port ( d : in std_logic;
>>>reset : in std_logic;
>>>clk : in std_logic;
>>>dataoutx : out std_logic;
>>>dataouty : out std_logic
>>> );
>>>end component ;
>>>
>>>component FDDRRSE is
>>>port (
>>>Q : out STD_ULOGIC;
>>>C0 : IN STD_ULOGIC;
>>>C1 : IN STD_ULOGIC;
>>>CE :IN STD_ULOGIC;
>>>D0 : IN STD_ULOGIC;
>>>D1 : IN STD_ULOGIC;
>>>R :IN STD_ULOGIC ;
>>>S : IN STD_ULOGIC
>>>);
>>>end component;
>>>
>>>signal risdatax,faldatax : std_ulogic;
>>>signal dataoutx,dataouty : std_logic;
>>>signal count : natural range 0 to 15;
>>>--signal cnt : natural range 0 to 40;
>>>signal ind : std_logic;
>>>signal S,R,cE :std_logic;
>>>
>>>signal data : std_ulogic_vector(15 downto 0):="1101000111010101";
>>>signal data1 : std_ulogic_vector(15 downto 0):= "1001011100011100";
>>>--signal data2: std_ulogic_vector(15 downto 0):= "0010100111001011";
>>>signal clk,invclk, reset :std_logic;
>>>signal clk2 ,d : std_logic;
>>>begin
>>>
>>>--clk2 <= not clk;
>>>
>>>uut1: input_ddr
>>>port map (
>>> d => d,
>>> clk => clk,
>>>reset => reset,
>>>dataoutx => dataoutx,
>>>dataouty => dataouty
>>>);
>>>
>>>FDDRRSEx : FDDRRSE
>>>port map (
>>>Q => d,
>>>C0 => clk,
>>>c1 => invclk,
>>>CE => ce,
>>>d0 => risdatax,
>>>d1 => faldatax,
>>>r => '0',
>>>s => '0'
>>>);
>>>
>>>clock1 : process
>>>begin
>>> clk <= '1';
>>> invclk <= '0';
>>>wait for 10 ns;
>>>clk <= '0';
>>>invclk <= '1';
>>>wait for 10 ns;
>>>end process;
>>>
>>>stimuli : process
>>>begin
>>>wait for 20 ns;
>>>reset <= '1';
>>>
>>>wait for 40 ns;
>>>reset <= '0';
>>>
>>>wait;
>>>end process stimuli;
>>>
>>>process(clk,reset,ind)
>>>begin
>>> if reset = '1' then
>>> count <= 0;
>>> ce <= '0';
>>> risdatax <= '0';
>>> faldatax <= '0';
>>>elsif clk'event and clk = '1' then
>>> ce <= '1';
>>> risdatax <= data(count);
>>> faldatax <= data1(count);
>>> if count < 15 then
>>> count <= count+1;
>>> elsif count = 15 then
>>> count <= 0;
>>> end if;
>>> end if;
>>> end process;
>>>end test2;
>>>
>>>i have used FFDDRSE primitive in testbench for generating double data
>>>rate.
>>>
>>>i will be waiting for reply...
>>>
>>>thanks in advance
>>>bye
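As a rough sketch of what Brian is suggesting (this is not code from the thread, and the IFDDRRSE port names should be double-checked against the Virtex-II Libraries Guide), the dedicated input DDR register can be instantiated directly instead of inferring two fabric registers:

   -- Illustrative only: dedicated DDR input register in the IOB.
   library ieee;
   use ieee.std_logic_1164.all;
   library unisim;
   use unisim.vcomponents.all;

   entity input_ddr_iob is
     port (
       d        : in  std_logic;
       clk      : in  std_logic;
       reset    : in  std_logic;
       dataoutx : out std_logic;   -- sample captured on the rising edge
       dataouty : out std_logic    -- sample captured on the falling edge
     );
   end entity input_ddr_iob;

   architecture rtl of input_ddr_iob is
     signal clk_n : std_logic;
   begin
     clk_n <= not clk;

     ddr_in : IFDDRRSE
       port map (
         Q0 => dataoutx,  -- registered on rising edge of C0
         Q1 => dataouty,  -- registered on rising edge of C1
         C0 => clk,
         C1 => clk_n,
         CE => '1',
         D  => d,
         R  => reset,     -- synchronous reset
         S  => '0'
       );
   end architecture rtl;

And a possible drop-in replacement for the stimuli process in the testbench quoted above, illustrating both points Brian raises: keep the inputs quiet for the first 100 ns so the simulated GSR pulse can finish, and move input changes slightly off the clock edge so the order of events is unambiguous:

   stimuli : process
   begin
     wait for 100 ns;                  -- let the gate-level GSR pulse complete
     wait until rising_edge(clk);
     wait for 1 ns;                    -- drive inputs 1 ns after the clock edge
     reset <= '1';
     wait until rising_edge(clk);
     wait for 1 ns;
     reset <= '0';
     wait;
   end process stimuli;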
Article: 72652
Johan Bernspång wrote:
> One question is still unanswered though, and I'd really appreciate some
> input on this matter. How come that a system that synthesized perfectly
> fine in ISE 6.1 also does that in ISE 6.2, but with a lower performance
> (i.e. much more noise etc)?

Chipscope is not a completely passive observer, but is placed and routed along with your design (as is signalprobe from brand A). So there could be different interactions with each placement. Does "more noise" mean digital interference with the analog front end? Or some difference in a DSP process?

> I have checkes all the coregen cores that I
> utilize and they are all the same version. I have also checked my own
> logic, and it seem to synthesize the same way, but still the result is
> so much better using 6.1.

Some DSP guys like to use matlab to modelsim interfaces for simulation. Maybe one of the X-men will comment.

-- Mike Treseler
Article: 72653
Johan Bernspång wrote:
<snip>
> One question is still unanswered though, and I'd really appreciate some
> input on this matter. How come that a system that synthesized perfectly
> fine in ISE 6.1 also does that in ISE 6.2, but with a lower performance
> (i.e. much more noise etc)? I have checkes all the coregen cores that I
> utilize and they are all the same version. I have also checked my own
> logic, and it seem to synthesize the same way, but still the result is
> so much better using 6.1.

There are many possible reasons for this but rather than going there first, I would like to ask the question of whether timing constraints were provided to the design and if so, were they met?

The reason for the question is the way the software is designed to work is to look at the timing constraints provided and attempt to meet them. If they can be easily met, many times the software will give you that result without trying to see exactly how fast the device can really go. This allows the tools to operate much faster and still deliver the results that were requested. If no timing constraints are provided and a low effort level is used (such as that is by default) the tools generally run very fast however do not produce the best possible result since it has nothing it is trying to strive for. If no timing constraints are provided and a high effort level is used, the tools will provide a better result however without constraints being provided, the tools guess at the tradeoffs to make for timing so it still may not give as good of a result as if true timing constraints are provided.

Now if you are providing timing constraints that were met with a previous version of software and are not met now, that situation can be more difficult to explain. In general that should not happen however we all know that it does from time to time. The best thing that you can do if you are in this situation is to contact either your FAE or the Xilinx hotline to have this investigated. There may be a simple explanation or there may be a complicated one but generally it is very dependent on the design that is being run, the device being targeted, how the tools are being run and possibly a list of other factors. Without this investigation it would be difficult to fathom a guess at the reason for that.

-- Brian

>
> Johan
>
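For reference, providing a clock constraint is usually just a couple of UCF lines. A minimal sketch with placeholder net and group names (not taken from Johan's actual design; see the Constraints Guide for the exact syntax):

   # Put the clock net in a timing group and constrain its period.
   NET "clk" TNM_NET = "tnm_clk";
   TIMESPEC "TS_clk" = PERIOD "tnm_clk" 10 ns HIGH 50%;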
"Drew" <dhruvish@gmail.com> wrote in message news:ad2011c0.0408270636.6646866e@posting.google.com... > Helo Guys, > > I am sort of new to it. I need to buy some EPLD which can be socketed > so that I can remove and put another one it it. The problem I am > having is, looking solely by the device package name (PLCC, TQFP) I > cant tell its socketed or not. How do I know which one is socketed > simply by looking at the device name. Say, EPM3032ALC44-4 and > EPM3032ATC44-4. The various packages are described in a document on the Altera web site. They don't put these details in the data sheet, for some reason. LeonArticle: 72655
Article: 72655
These days you can buy a socket for most anything, but I think PLCC is what you want if you really need a socket. Why not use in-circuit programming?

/Mikhail

"Drew" <dhruvish@gmail.com> wrote in message news:ad2011c0.0408270636.6646866e@posting.google.com...
> Helo Guys,
>
> I am sort of new to it. I need to buy some EPLD which can be socketed
> so that I can remove and put another one it it. The problem I am
> having is, looking solely by the device package name (PLCC, TQFP) I
> cant tell its socketed or not. How do I know which one is socketed
> simply by looking at the device name. Say, EPM3032ALC44-4 and
> EPM3032ATC44-4.
>
> Please let me know,
> Drew
Article: 72656
I filed a WebCase over this, but I'm going to ask here in case others have run into this problem as well. I expect the Linux User Density to be higher here than at Xilinx Support :-)

I have iMPACT from ISE 6.2i installed on a Linux/Intel machine, so that I can use the PC-IV cable. It was working for a while, but now all of a sudden it is hopelessly non-functional. All I get is a dialog box with the message:

   Can't access this folder
   Path is too long

I don't know if this matters, but I've been running iMPACT on my main workstation for a while, making ACE files for another project (open source, stay tuned) and that has been and still does work fine. My home directory is shared. I've tried removing the .WindU file that keeps appearing.

(For the record, I find this Wind/U thing phenomenally klunky and embarrassingly slow.)

-- Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
Article: 72657
I noticed that for Spartan 3 the DCM status does not define bit 2 any more. In Virtex II it meant that the DFS was stopped. Does this mean the DFS never stops? I want to use DFS mode (no CLKFB). How do I know if I need to reset the DCM?

Thanks,
Tony
Article: 72658
Has anybody out there had any luck using GNU tools to compile source code into an ELF file that can be loaded into an ISE or EDK project? Is it possible? Easy?

Thanks
Matt
Article: 72659
I am trying to figure out if there is a way of obtaining the RTL netlist in Project Navigator when I synthesize. The RTL schematic that is obtained under the 'synthesize' option uses the UNISIM library, but for some reason I can only get the schematic (which is read-only) and not the actual netlist code that uses the UNISIM library.

I used the post-translate simulation model under the 'implement design' option but that only gives me the netlist for the simulation model, which uses the simulation primitives (SIMPRIMS library) which don't represent the true implementation of the device.

I am currently using the ISE WebPack 6.2i, the free version, but I am willing to upgrade to the full version if it provides this capability.
"Tony C" <tony.casorso@quantum.com> wrote in message news:92989e5a.0408271105.56c6d4bc@posting.google.com... > I noticed that for Spartan 3 the DCM status does not define bit 2 any > more. In Virtex II it meant that the DFS was stopped. Does this mean > the DFS never stops? I want to use DFS mode (no CLKFB). How do I know > if I need to reset the DCM? STATUS[2] is still the "CLKFX or CLKFX180 Output Stopped Indicator", as shown in XAPP462. http://www.xilinx.com/bvdocs/appnotes/xapp462.pdf I see, however, that it's missing from the latest data sheet! I'll file a bug report. --------------------------------- Steven K. Knapp Applications Manager, Xilinx Inc. General Products Division Spartan-3/II/IIE FPGAs http://www.xilinx.com/spartan3 --------------------------------- Spartan-3: Make it Your ASICArticle: 72661
Article: 72661
Umm, might not be that obvious, but how about checking out http://www.jedec.org/ ?

-- Marcus
Article: 72662
Does anyone here have experience transmitting and receiving Channel Link or Camera Link signals directly into a Virtex II Pro chip? Or into a Spartan 3? Does it work? Supposedly the Channel Link signals are LVDS signals.

Brad
Article: 72663
Maybe it's one of those spaces in the path names problems disguised as "path too long".
"Brad Smallridge" <bradsmallridge@dslextreme.com> wrote in message news:10iv9777uhbi2a5@corp.supernews.com... > Does anyone here have experience transmitting and receiving Channel Link or > Camera Link signals directly into a Virtex II Pro chip? Or into a Spartan > 3? Does it work? Supposedly the Channel Link signals are LVDS signals. > > Brad > Both Virtex-II Pro and Spartan-3 support LVDS signals. I don't have a Spartan-3 specific design but here's an example of one such product using Virtex-II. http://www.transtech-dsp.com/fpga/custom_mod_exam.asp Essentially, Channel Link is a 7:1 LVDS interface with a 66 MHz input clock. Use the CLKFX and CLKFX180 outputs from a DCM. Set the CLKFX_MULTIPLY=7, CLKFX_DIVIDE=2. Use DDR flip-flips and LVDS input or output buffers for either the input (receiver) or output (transmitter). 66 MHz * 7/2 * 2X (DDR) = 462 Mbps. --------------------------------- Steven K. Knapp Applications Manager, Xilinx Inc. General Products Division Spartan-3/II/IIE FPGAs http://www.xilinx.com/spartan3 --------------------------------- Spartan-3: Make it Your ASICArticle: 72665
Article: 72665
Brian Philofsky wrote:
>
> Simon wrote:
>
>> Brian Philofsky wrote:
>>>
>>> <snip>
>>
>> Actually in the newer BlockRAM implementations, you can choose the
>> behaviour (at least you can on the Spartan-3, whose datasheets I've
>> been poring over recently :-)
>>
>> There's:
>>
>>  o READ_FIRST, which will always report the value of the data
>>    in RAM before the write was committed
>>
>>  o WRITE_FIRST, which will provide the unpredictable behaviour
>>    you mention (and it's the default, to provide backwards
>>    compatibility)
>>
>>  o NO_CHANGE, which seems to disconnect the output RAM so the
>>    output value remains unchanged.
>>
>> I doubt anything would help you if you simultaneously write into the
>> same address on both ports, though :-)
>
> This is a mis-conception many fall into. Those modes apply only to the
> port that is being written to. In other words, the output of the port
> being written has this known behavior you specify above but as for the
> other port, everything I said still applies regardless of which mode you
> have it in.

Hmm, well XAPP463 (Using BlockRAM in spartan3 FPGA's) seems to cope with both ports, or am I misunderstanding what you are saying? I'm looking at the table (Table 8) on page 14, and it's laid out as:

   Write mode    Same Port   Other port (dual port only, same addr)
   -------------------------------------------------------------------
   WRITE_FIRST   blah        Invalidates data on DO,DOP
   READ_FIRST    blah        Data from RAM to DO,DOP
   NO_CHANGE     blah        Invalidates data on DO,DOP

I thought it was clear, but now I'm confused :-(
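For reference, the write mode being discussed is just a generic on the block RAM primitive. A minimal dual-port sketch with arbitrarily chosen widths and names (not from this thread; check the libraries guide for the full generic list):

   library ieee;
   use ieee.std_logic_1164.all;
   library unisim;
   use unisim.vcomponents.all;

   entity dp_ram_example is
     port (
       clka, clkb   : in  std_logic;
       ena, enb     : in  std_logic;
       wea, web     : in  std_logic;
       addra, addrb : in  std_logic_vector(8 downto 0);
       dia, dib     : in  std_logic_vector(31 downto 0);
       doa, dob     : out std_logic_vector(31 downto 0)
     );
   end entity dp_ram_example;

   architecture rtl of dp_ram_example is
   begin
     -- WRITE_MODE_x only defines the behaviour of that port's own output
     -- during its own write; a simultaneous write on one port and read of
     -- the same address on the other port is still a collision.
     ram_i : RAMB16_S36_S36
       generic map (
         WRITE_MODE_A => "READ_FIRST",
         WRITE_MODE_B => "READ_FIRST"
       )
       port map (
         CLKA  => clka,  ENA  => ena,  WEA => wea,  SSRA => '0',
         ADDRA => addra, DIA  => dia,  DIPA => "0000",
         DOA   => doa,   DOPA => open,
         CLKB  => clkb,  ENB  => enb,  WEB => web,  SSRB => '0',
         ADDRB => addrb, DIB  => dib,  DIPB => "0000",
         DOB   => dob,   DOPB => open
       );
   end architecture rtl;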
Article: 72666
Brad Smallridge wrote:
> Maybe it's one of those spaces in the path names problems disguised as path
> too long.

Might be.

I found that there was a ~/.windu.flask directory as well as a .WindU file that needed deleting. Deleted the directory and I am back to life. *Whew*!

I did notice that the various dialogue boxes get very confused and start storing garbage if you type control characters or arrow keys in the various text entry fields. It is possible that just the right bit of junk got into its brain that it finally went insane.

Have I mentioned yet that I hate this Wind/U atrocity?-)

Anyhow, I now get to deal with the consequences of accidentally pushing Vref of a PC-IV pod onto GND. You would think that a $100 device would make a bigger spark :-/

-- Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
Article: 72667
So, *when* do you get this message? When you are generating an ACE file? When you are accessing the cable? When you are trying to write the ACE files to the compact flash? Spontaneously for no apparent reason? What were you doing when it started to appear?

[The .WindU file is important and will always be regenerated so don't mess with it]

Stephen Williams wrote:
>
> I filed a WebCase over this, but I'm going to ask here in case
> others have run into this problem as well. I expect the Linux
> User Density to be higher here then at Xilinx Support:-)
>
> I have impact from ISE 6.2i installed on a Linux/Intel machine,
> so that I can use the PC-IV cable. It was working for a while,
> but now all the sudden it is hopelessly non-functional. All I
> get is a dialog box with the message:
>
>    Can't access this folder
>    Path is too long
>
> I don't know if this matters, but I've been running impact on my
> main workstation for a while, making ACE files for another project
> (open source, stay tuned) and that has been and still does work
> fine. My home directory is shared. I've tried removing the .WindU
> file that keeps appearing.
>
> (For the record, I find this Wind/U thing phenomenally klunky
> and embarassingly slow.)
Article: 72668
Neil Glenn Jacobson <n.e.i.l.j.a.c.o.b.s.o.n.a.t.x.i.l.i.n.x.c.o.m.> wrote:

: Uwe Bonnes wrote:
: > Neil Glenn Jacobson <n.e.i.l.j.a.c.o.b.s.o.n.a.t.x.i.l.i.n.x.c.o.m.> wrote:
: > ...
: > : > Ok, it's the MainWin porting tool that requires the per-seat license.
: > : > That's pretty much what keep Xilinx from having a set of free tools for
: > : > Linux, correct?
: > : Well, we don't use MainWin but, yes, that is correct.
: >
: > I don't understand that answer.
: >
: > Ise 6.2 clearly uses MainWin, or am I wrong?

: Yes, you are wrong. Xilinx does not use MainWin

So where do the Wind/U related messages come from that Stephen Williams reports about in the thread "Impact vs. Linux RedHat Linux"? Or is Wind/U something different than MainWin? In that case, excuse my ignorance.

B.t.w.: Does Wind/U require a per-seat license?

Bye
--
Uwe Bonnes bon@elektron.ikp.physik.tu-darmstadt.de
Institut fuer Kernphysik Schlossgartenstrasse 9 64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------
Article: 72669
Using the Xilinx-supplied .mcs files I can program the configuration PROM and run these reference designs:

   "Default Board Test MCS file"
   "Digital Clock PCB monitor"

But this design:

   "MicroBlaze Master System"

results in a non-responsive board. Anybody have any luck with this?

Thanks.
-Dave
Article: 72670
Neil Glenn Jacobson wrote:
> So, *when* do you get this message? When you are generating an ACE
> file? When you are accessing the cable? When you are trying to write
> the ACE files to the compact flash? Spotaneously for no apparent reason?
> What were you doing when it started to appear?

Any time it is about to ask the user for a file. For example, when I select (at startup) that I want to open an existing cdf file, it instead of asking me to browse for the file put up a little dialog box that said:

   Can't access this folder
   Path is too long

But like I said elsewhere in the thread, it appears there was some sort of poison in the ~/.windu.hostname directory. Removing that directory got me out of the problem. It may be (blind speculation) that the file got corrupted somehow, so that the initial directory where file selection boxes go was an invalid directory.

> [The .WindU file is important and will always be regenerated so don't
> mess with it]

So far as I can tell, there are only 2 things that can be safely done with the file: ignore it or remove it. Generally, ignoring it is the best plan, but this time I was left with the other option.

> Stephen Williams wrote:
>
>> I filed a WebCase over this, but I'm going to ask here in case
>> others have run into this problem as well. I expect the Linux
>> User Density to be higher here then at Xilinx Support:-)
>>
>> I have impact from ISE 6.2i installed on a Linux/Intel machine,
>> so that I can use the PC-IV cable. It was working for a while,
>> but now all the sudden it ishopelessly non-functional. All I
>> get is I dialog box with the message:
>>
>>    Can't access this folder
>>    Path is too long
>>
>> I don't know if this matters, but I've been running impact on my
>> main workstation for a while, making ACE files for another project
>> (open source, stay tuned) and that has been and still does work
>> fine. My home directory is shared. I've tried removing the .WindU
>> file that keeps appearing.
>>
>> (For the record, I find this Wind/U thing phenomenally klunky
>> and embarassingly slow.)

-- Steve Williams "The woods are lovely, dark and deep.
steve at icarus.com But I have promises to keep,
http://www.icarus.com and lines to code before I sleep,
http://www.picturel.com And lines to code before I sleep."
Article: 72671
Hi,

I generated two FIFOs using Xilinx ISE CoreGen. Both FIFOs are 64-bit wide, but one FIFO depth is 16, the other is 64. After doing the mapping, I was surprised to find that both FIFOs use the same amount of BlockRAM. I cannot understand the reason, since the 16-deep FIFO should definitely use less memory than the 64-deep FIFO. Please give some idea about this.

Thanks.
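A back-of-the-envelope check, assuming an 18 Kbit block RAM family (Virtex-II/Spartan-3 style) with a maximum port width of 36 bits: a 64-bit wide FIFO needs at least ceil(64/36) = 2 block RAMs just to cover the data width, and two such RAMs side by side already give 512 words of depth at 72 bits. So a 64x16 and a 64x64 FIFO both round up to the same two block RAMs; depth only starts to change the count once it exceeds what the width-determined number of RAMs can hold. This is one likely explanation rather than a statement about what CoreGen actually built.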
Article: 72672
Hi !

According to Xilinx documentation, the PLB bus is limited to 100MHz. We have a need to run at higher frequencies, 125MHz and even 250 MHz. We are using a VP20 -6 FPGA.

If I do not use the EDK flow and any of the EDK modules, but just a PPC405 and write my own modules for internal and external memory management, etc., is it possible to run the PLB at higher speeds ?

I know the PPC405 has an OCM bus, and that supposedly supports up to 375 MHz system clock. However the overall bandwidth is listed as 500 MBytes/sec. So I presume there are some sort of other limitations there.

My main concern is overall data throughput. The speed of the PPC405 is irrelevant as it is used for maintenance tasks only ... we could even use MicroBlaze for that. It's just that we need for the CPU to be able to access the memory data bus as well.

Any suggestions, recommendations, war stories ?

Best Regards,
rudi
=============================================================
Rudolf Usselmann, ASICS World Services, http://www.asics.ws
Your Partner for IP Cores, Design, Verification and Synthesis
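As a rough sanity check on the quoted numbers (assuming a 64-bit PLB data path and the 32-bit data-side OCM, which should be verified against the Virtex-II Pro documentation): a 64-bit PLB at 100 MHz moves at most 8 bytes x 100 MHz = 800 Mbyte/s per direction before arbitration and wait states, while 500 Mbyte/s over a 32-bit interface corresponds to about 4 bytes x 125 MHz. That would suggest the quoted OCM bandwidth is set by the interface width and achievable transfer rate rather than by the 375 MHz maximum clock, but this is back-of-the-envelope arithmetic, not a measured figure.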
Article: 72673
EDK ships with GNU tools.

- Peter

Matthew E Rosenthal wrote:
> Has anybody out there had any luck using GNU tools to compile source
> code into an ELF file that can be loaded into an ISE or EDK project?
>
> is it possible? easy?
>
> Thanks
>
> Matt
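For example (file names and the BMM file here are placeholders; check "data2mem -h" and the EDK documentation for the exact options and the processor instance tag), the EDK-supplied cross compilers can be driven from an ordinary command line, and data2mem can merge the resulting ELF into the block RAMs of an existing bitstream:

   # PowerPC target; use mb-gcc instead for MicroBlaze
   powerpc-eabi-gcc -O2 -g -o myapp.elf main.c

   # Initialise the processor block RAMs in a bitstream with the ELF contents
   data2mem -bm system_bd.bmm -bd myapp.elf -bt system.bit -o b download.bit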
Article: 72674
Stephen Williams wrote:
> Brad Smallridge wrote:
>> Maybe it's one of those spaces in the path names problems disguised as
>> path too long.
>
> Might be.
>
> I found that there was a ~/.windu.flask directory as well as a
> .WindU file that needed deleting. Deleted the directory and I am
> back to life. *Whew*!
>
> I did notice that the various dialogue boxes get very confused
> and start storing garbage if you type control chracters or arrow
> keys in the various text entry fields. It is possible that just
> the right bit of junk got into its brain that it finally went
> insane.
>
> Have I mentioned yet that I hate this Wind/U atrocity?-)
>

I have to jump in here !

This Wind/U crap is made by some sort of a third party, to "ease" the "porting" of Windows applications to Unix-like systems. I can't see the usability of any GUIs made with this Wind/U stuff. Did anybody try to use EDK on Linux ? C'mon, I am trying to get a job done, not finding out alternative ways to use a mouse and a keyboard.

Great to see the results from the Wind/U "porting" tools. Don't have to bother evaluating or trying it out for any of our projects - what a piece of crap !!!

If Xilinx could find an alternative to this Wind/U, I think they would have an excellent tool flow for Linux (plus fixing the parallel port support to something native of course).

Regards,
rudi
=============================================================
Rudolf Usselmann, ASICS World Services, http://www.asics.ws
Your Partner for IP Cores, Design, Verification and Synthesis

> Anyhow, I now get to deal with the consequences of accidently
> pushing Vref of a PC-IV pod on to GND. You would think that a
> $100 device would make a bigger spark:-/