"Ruzica" <ruzica@die.upm.es> wrote in message news:1174397350.142307.92220@y80g2000hsf.googlegroups.com... > On Mar 20, 1:42 pm, "Symon" <symon_bre...@hotmail.com> wrote: > > Hello Symon, > > 300ps seems like a lot of time to me. Is it possible that this time > difference is only due to the different inputs of the LUT? > > Greetings, > Ruzica > Hi, Do you have the FPGA Editor software? Open it up, try swapping the pins and see what happens to the delay. The program can report delays for you. HTH, Syms. p.s. Don't forget to report back what you find! :-)Article: 116876
Try to run Modelsim in command line mode (vsim -c); you sometimes get some extra info. Also make sure that all your primitive libraries are compiled (use compxlib) with the version of Modelsim you are using,

Hans
www.ht-lab.com

"Markus" <outofmem@arcor.de> wrote in message news:45ff2faf$0$23145$9b4e6d93@newsspool1.arcor-online.net...
> When I try to run a timing simulation (simprim is used) modelsim pe
> student exits with fatal error and exit code 211.
> Modelsim XE works fine, but sloooow.
> Does anybody have some experience with this problem and an advice maybe?

Article: 116877
On Mar 20, 9:28 am, John_H <newsgr...@johnhandwork.com> wrote:
> dlharmon wrote:
> > I am getting a warning on 18 and 36 bit wide block rams inferred in my
> > Verilog code in ISE 9.1. The following code is an example of what
> > causes the warning. I do not get warnings on 16 or 32 bit wide
> > blockrams inferred this way. The resulting block rams do not work
> > properly when loaded into an FPGA.
> >
> > module bramtest(din, dout, ain, aout, wr, clk);
> >   input [17:0] din;
> >   output [17:0] dout;
> >   reg [17:0] dout;
> >   input [9:0] ain, aout;
> >   input wr, clk;
> >   reg [17:0] bram_test[0:1023];
> >   always @ (posedge clk)
> >   begin
> >     if(wr)
> >       bram_test[ain] <= din;
> >     dout <= bram_test[aout];
> >   end // always @ (posedge clk)
> > endmodule // bramtest
> >
> > (From map report)
> >
> > WARNING:PhysDesignRules:1060 - Dangling pins on
> >    block:<Mram_bram_test/Mram_bram_test.A>:<RAMB16_RAMB16A>. The block is
> >    configured to use input parity pins. There are dangling output
> >    parity pins.
> >
> > I would appreciate any suggestions. I would really like to stick with
> > the inferred memory, but may consider using alternative methods.
> >
> > Thanks for your help
> >
> > Darrell Harmon
>
> I've seen this harmless warning for way too long though I often get it
> for instantiated BlockRAMs where I don't actually read the parity bits
> but I write them with zero values so the port isn't undefined.
>
> It's possible the complaint is because the write port is written with
> parity bits but the write port output parity bits are unused. Who cares?

O.K., but if it's the write-port parity outputs, why not also complain (warn) about the write-port data outputs being unconnected?

> I'd suggest you take a look at the FPGA Editor (or some other technology
> view) to see if all your pins are hooked up properly. If your design
> truly does not work - as opposed to being a coding/debug problem which
> it often is for my development - then there is something wrong with the
> synthesis which has to be addressed by the vendor. I often infer
> memories with SynplifyPro but haven't used XST for any development.

Also do you see a difference in simulation between behavioral and post-translate? This could point to a bug in the tools...

> I'd love to see the warning actually mean something but I've lost any
> confidence that it's communicating anything real because of my past
> experience with it.
>
> - John_H

Article: 116878
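A minimal sketch of one way to sidestep the parity-pin mapping discussed in the article above, assuming the dangling-parity warning really does come from folding the 18-bit word onto a RAMB16 data+parity port: store the word as a 16-bit array plus a 2-bit array, so the extra two bits are treated as ordinary data and no parity pins are inferred. The port list mirrors Darrell's example; whether the 2-bit half ends up in block or distributed RAM is up to the synthesizer.

module bramtest_split(din, dout, ain, aout, wr, clk);
  input  [17:0] din;
  output [17:0] dout;
  reg    [17:0] dout;
  input  [9:0]  ain, aout;
  input         wr, clk;

  // 18-bit word split into a 16-bit and a 2-bit memory
  reg [15:0] ram_lo [0:1023];
  reg [1:0]  ram_hi [0:1023];

  always @ (posedge clk) begin
    if (wr) begin
      ram_lo[ain] <= din[15:0];
      ram_hi[ain] <= din[17:16];
    end
    dout <= {ram_hi[aout], ram_lo[aout]};  // synchronous read, as in the original
  end
endmodule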
I'm working with a custom board with a V4 on it. It had been working fine for a couple of months until last week, when the FPGA would just not come out of reset. I tried running a JTAG boundary scan using Impact, and got the following error:

"ERROR:iMPACT:585 - A problem may exist in the hardware configuration. Check that the cable, scan chain, and power connections are intact, that the specified scan chain configuration matches the actual hardware, and that the power supply is adequate and delivering the correct voltage.
Chain TCK freq = 100000000.
Chain TCK freq = 100000000.
Validating chain...
Boundary-scan chain validated successfully."

Now it does say 'boundary-scan chain validated successfully', so what's going on here? I probed the prog_b signal on the V4 and it never goes high. There is an external pull-up on it per spec. Any ideas why prog_b would be held low? Any help is much appreciated.

-Thanks!

Article: 116879
On Mar 20, 2:43 pm, "Symon" <symon_bre...@hotmail.com> wrote:
> "Ruzica" <ruz...@die.upm.es> wrote in message
> news:1174397350.142307.92220@y80g2000hsf.googlegroups.com...
> > On Mar 20, 1:42 pm, "Symon" <symon_bre...@hotmail.com> wrote:
> >
> > Hello Symon,
> >
> > 300ps seems like a lot of time to me. Is it possible that this time
> > difference is only due to the different inputs of the LUT?
> >
> > Greetings,
> > Ruzica
>
> Hi,
> Do you have the FPGA Editor software? Open it up, try swapping the pins and
> see what happens to the delay. The program can report delays for you.
> HTH, Syms.
> p.s. Don't forget to report back what you find! :-)

As I am using cores, FPGA Editor doesn't allow me to change the pins, but I will try to do it with some other design. When I get some results, I'll post it here. Thanks!

Article: 116880
Is the V4 being held in reset causing the JTAG boundary scan to fail, or is it the other way around? There are 3 devices on the JTAG chain (PROM, CPLD and V4), so could one of the other devices be causing the V4 to be held in reset? I didn't think that would make sense. Even if the V4 doesn't get configured correctly from the PROM, it 'shouldn't' hold prog_b low, right??

I checked the voltage ramps on the V4; they looked OK. Also, I noticed the power consumption of the board increased by about 0.5W, which kind of makes me want to think there's a component failure (bad cap? bad resistor?), but again, I checked the voltage ramps and they looked fine.

Article: 116881
John,

Well, I just went on-line after reading the thread, clicked on their button, and it said '0 available'.

I was very surprised.

It seems that some links are not working, and some tables do not supply the correct information.

OK?

That said, you are correct: we still do not have a suitable 'onsey-twosey' means of supplying the chips. A million in one month, no problem. One? Two?

At least we were able to get the latest parts stocked on the distributors' shelves recently.

I suppose if the only process that is available (now) from TSMC is the "low power" 65nm one, it makes good business sense to make a low cost FPGA in that process (if you are given lemons, make lemonade). It seems that they will have to wait for the "high performance" 65nm process to be developed and debugged, whereas when we asked two fabs for 65nm triple oxide, we got it right away.

Since I am in the "high performance group" (Virtex), I am enjoying our 9 month lead on them in 65nm.

I wish them luck.

Austin

Article: 116882
Ruzica,

Remember the FPGA Editor is a fiction. It is a software version of the hardware. It has no basis in reality, but is a complete abstraction of what is really there.

That said, there is serious 'magic' that happens underneath, and if you are really interested, you will have to get a job here working for us.

Austin

Article: 116883
Hello everybody!

I am slowly making my way getting our custom Xilinx Virtex-4 based board to work. Today I got as far as running the Base System Builder (BSB) from the Embedded Development Kit, selecting our own board and getting a memory test by just clicking "Next" a few times and "Finish" at the end.

What I would like to do is to have BSB add some tweaks required for the early silicon Virtex-4 on the development board, in order to work around the issue described at
http://www.xilinx.com/xlnx/xil_ans_display.jsp?getPagePath=22471

What I managed to do is to create a custom peripheral, based on a netlist from Xilinx, that implements this for any given MGT pair. The peripheral also allows me to select the MGTs to connect graphically.

As I know that I will forget to add these peripherals sooner or later, I'd like to automate adding them. What I tried is to add an IO_ADAPTER section to the XBD file, but for some reason it is ignored by BSB. The only port that is connected to the work around component is a clock line. Interestingly, when I connect the clock from the system reset input, as in

BEGIN IO_ADAPTER
 ATTRIBUTE CORENAME = rio_workaround
 ATTRIBUTE INSTANCE = rio_workaround_0
 PORT GREFCLK_IN = sys_rst_n       # works!?
 # PORT GREFCLK_IN = ext_osc_clock !!! does not work, peripheral not created
 PARAMETER C_MGT_LOC_A = GT11_X0Y2, IO_IS = C_MGT_LOC_A
 PARAMETER C_MGT_LOC_B = GT11_X0Y3, IO_IS = C_MGT_LOC_B
END

in the board definition file, it "works" - kind of, as the work around needs a real clock. Using ext_osc_clock instead, the peripheral is not even instantiated.

What I really want is the output of the DCM module that BSB creates automatically, but I have no idea how to access it in the XBD file. The clock source is defined like this:

BEGIN IO_INTERFACE
 ATTRIBUTE IOTYPE = XIL_CLOCK_V1
 ATTRIBUTE INSTANCE = clk_100
 PARAMETER CLK_FREQ = 100000000, IO_IS=clk_freq   # 100 MHz
 PORT SYS_CLK = ext_osc_clock, IO_IS=ext_clk
END

Any hints from some Xilinx expert?

Greetings, Torsten

Article: 116884
Austin,

According to the rattling in the newsgroup, information could not be had from the distributor websites for ANY quantity from Xilinx for new parts. This was my disappointment: that you thought the Altera web link information of "0 available" meant something more than Xilinx's "0 available."

Congrats on your 65 nm lead. I love that Xilinx is pushing the technology. Unfortunately I don't have the luxury of using it since our office marketplace can't afford the high costs associated with performance FPGAs. If I'm going to touch 65 nm anytime soon, it'll be Altera. If I'm going for PCIe anytime soon, it's probably Lattice to get the embedded phy in the low cost part. I have yet to swim outside the Xilinx pool in my current role but I've certainly stuck my toe in others' waters.

I liked the Spartan-3 route of pushing 90 nm technology first but I'm a little disappointed that the non-performance product line seems a little stale with incremental changes into S3L, S3E, S3A, S3AN.... I can understand the difficult experience getting the S3 up to speed for yield, power, and such on the new process, so I'm not disturbed that Xilinx hasn't hit 65 nm yet for the cost-sensitive products, but with the V5 success I'm a little surprised that I haven't heard anything concrete yet from Xilinx as to the 65 nm low-cost route.

I was happy to see lower max power numbers for the S3E family the Friday before the Altera "low power 65 nm process" announcement but their numbers still put the triple oxide S3X family numbers to shame. Damned good lemons, I guess!

Yay Xilinx! Yay Altera! Whatever. I enjoy the competition because I get to enjoy the fruits of competitive technology, citrus or otherwise. I'm looking forward to the next Xilinx advancements, as well.

- John_H

"Austin Lesea" <austin@xilinx.com> wrote in message news:etor9h$q791@cnn.xsj.xilinx.com...
> John,
>
> Well, I just went on-line after reading the thread, clicked on their
> button, and it said '0 available'.
>
> I was very surprised.
>
> It seems that some links are not working, and some tables do not supply
> the correct information.
>
> OK?
>
> That said, you are correct: we still do not have a suitable
> 'onsey-twosey' means of supplying the chips. A million in one month, no
> problem.
>
> One? Two?
>
> At least we were able to get the latest parts stocked on the
> distributors shelves recently.
>
> I suppose if the only process that is available (now) from TSMC is the
> "low power" 65nm one, that it makes good business sense to make a low
> cost FPGA in that process (if you are given lemons, make lemonade).
> Seems that they will have to wait for the "high performance" 65nm
> process to be developed and debugged, where when we asked two fabs for
> 65nm triple oxide, we got it right away.
>
> Since I am in the "high performance group" (Virtex), I am enjoying our 9
> month lead on them in 65nm.
>
> I wish them luck.
>
> Austin

Article: 116885
"Gabor" <gabor@alacron.com> wrote in message news:1174398637.932534.169030@l75g2000hse.googlegroups.com... > On Mar 20, 9:28 am, John_H <newsgr...@johnhandwork.com> wrote: <snip> >> I've seen this harmless warning for way too long though I often get it >> for instantiated BlockRAMs where I don't actually read the parity bits >> but I write them with zero values so the port isn't undefined. >> >> It's possible the complaint is because the write port is written with >> parity bits but the write port output parity bits are unused. Who cares? >> > > O.K., but if it's the write-port parity outputs, why not also complain > (warn) about the write-port data outputs being unconnected? Because it's an extremely poor warning. It takes one programmer a few minutes to produce something nonsensical that doesn't get changed through peer review processes but it takes months and a lot of pain for something to come out of software. The right thing is to be consistent. It takes one person too little effort to produce inconsistent messages. >> I'd suggest you take a look at the FPGA Editor (or some other technology >> view) to see if all your pins are hooked up properly. If your design >> truly does not work - as opposed to being a coding/debug problem which >> it often is for my development - then there is something wrong with the >> synthesis which has to be addressed by the vendor. I often infer >> memories with SynplifyPro but haven't used XST for any development. >> > > Also do you see a difference in simulation between behavioral and > post-translate? This could point to a bug in the tools... I tend to skip the post-translate simulation in favor of live testing except for special cases. The FPGA tools have done a superb job historically of producing a target FPGA design that matches the original RTL. While the ASIC industry needs a significant investment in equivalence checking to make sure the output from a tool matches the input, I've only come across one or two instances over the years where the final result isn't a 100% match to the input code. It's possible a tool problem is in the way of a good simulation. I'd do a quick check to see if the "post-translate" results are producing misconnected hardware or if it looks like the right 18 bits are connected to the right 18 lines in and out of the BlockRAM. If the physical connections aren't there, it's probably a synthesis problem. If the physical connections are there and the warning is its usual bogus self, a mismatch between RTL and "post-translate" simulation would tend to point to the simulation models. If the problem doesn't appear with 16 and 32-bit memories, the troubles probably aren't with simultaneous read/write operation coming up different in different simulations but this failure mode would be something else I'd look for in bad memory simulations if I didn't have the additional good-inference information. >> I'd love to see the warning actually mean something but I've lost any >> confidence that it's communicating anything real because of my past >> experience with it. >> >> - John_HArticle: 116886
Ken Soon wrote:
> ...
> Yeh for the hardware multipliers, I guessed it automatically changed the
> DSP48s to the multipliers, but alas, shortage problems again. 60 multipliers
> needed for 36 available.

Analyze the design a bit; 60 multipliers sounds like a lot, though I have not done video work. Maybe you don't need all of them, or maybe some are being used to multiply small numbers that could be handled in LUTs. If some of the multipliers are only doing a multiply every 2 or 3 or 4 clocks, maybe some could be shared, as sketched below.

Article: 116887
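A sketch of that sharing idea, under the assumption that each channel only needs a new result every four clocks (all names and widths below are invented for illustration): one 18x18 multiplier is time-multiplexed across four operand pairs, trading four multipliers for one multiplier plus a small mux, demux and counter.

module shared_mult (
  input             clk,
  input      [17:0] a0, a1, a2, a3,
  input      [17:0] b0, b1, b2, b3,
  output reg [35:0] p0, p1, p2, p3
);
  reg [1:0]  sel = 2'd0;            // channel counter
  reg [1:0]  sel_d1, sel_d2;        // tracks which channel is in the pipeline
  reg [17:0] a_mux, b_mux;
  reg [35:0] prod;

  always @ (posedge clk) begin
    sel <= sel + 2'd1;

    // registered operand mux in front of the one physical multiplier
    case (sel)
      2'd0: begin a_mux <= a0; b_mux <= b0; end
      2'd1: begin a_mux <= a1; b_mux <= b1; end
      2'd2: begin a_mux <= a2; b_mux <= b2; end
      2'd3: begin a_mux <= a3; b_mux <= b3; end
    endcase

    prod <= a_mux * b_mux;          // the single shared multiplier

    sel_d1 <= sel;
    sel_d2 <= sel_d1;

    // demultiplex the pipelined product back to its channel
    case (sel_d2)
      2'd0: p0 <= prod;
      2'd1: p1 <= prod;
      2'd2: p2 <= prod;
      2'd3: p3 <= prod;
    endcase
  end
endmodule

Each output updates once every four clocks, a few cycles after its operands are sampled, so this only pays off when the surrounding datapath can tolerate that rate.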
In a laboratory course we still use XILINX XC3000 FPGAs with Viewlogic's Workview design entry (DOS version) and XILINX XACT (also DOS). The problem is that we have to replace the old PCs, and Viewlogic only supports a few graphics modes, so it is unlikely that it will run on new PCs. The last version of the XILINX ISE software which supports XC3000 FPGAs isn't an alternative (and I'm not sure whether it will run on W2k/XP), because the system must be extremely easy to use so the students are able to design and implement a simple CPU in about 10 hours (including the time to learn how to use the schematic entry and simulation tool).

Some questions:

1. I have tried to find a current FPGA in a package which can be soldered with non-professional equipment, something like a PLCC84 where you can get cheap sockets which can be used on self-made PCBs, and if possible with a VCC of 5 V to easily interface with external TTL logic. XILINX and ACTEL only offer packages with a pin distance of 0.5 mm. ATMEL's AT40K20 would fulfil these requirements, but I'm not sure if this architecture is still supported (ATMEL's documentation is five years old) and whether there exists good development software.

- Has anybody experience with ATMEL's AT40K20 and can suggest development software? (It must be a schematic entry, no VHDL, because the students have to "see" the processor at gate level.)
- Does anybody know other FPGAs which could be used (or is the hobby market completely uninteresting for the manufacturers)?

2. Was somebody able to run Viewlogic (DOS version) in a virtual PC emulation? The problem is, the virtual PC must provide the proper graphics mode and mouse type, and support a physical dongle on the virtual parallel port.

Here is a description of the students' project:
ftp://137.193.64.130/pub/mproz/mproz_e.pdf

Article: 116888
John,

I just compared their typical power with our typical power in Virtex 5 by using spreadsheets. Very impressive. I am surprised that their 65nm part has a Vccint of 1.2 volts, however. I might be concerned about gate oxide life. I notice their process is specified at 10 years, max, at 85 degrees Tj. It is still far better than Intel or AMD, which are at 3 to 5 years at 65nm. That we are at >20 years at 85C at 65nm has surprised (and delighted) our customers.

1/2 watt for a design in C3, vs 1 watt for the same design in V5. I also get ~0.85W for that design in Spartan 3E. 19000 LEs, 200 slow 4 mA IOs, all the RAM, 2 PLLs or DCMs, similar DSP, 12.5% duty cycle, 100 MHz. 3E doesn't have enough RAM, but then the RAM adds a very tiny value to power.

The only problem I have is that once you see how much 65nm varies due to process (on the same die, let alone the same wafer), 'typical' ends up being pretty meaningless. For example, the "maximum" value for Spartan 3E goes from 0.85W to about 1 watt. At least that is a guaranteed value for the worst case power one could get. Unless they choose to bin so they don't ship the leaky and high static power parts, they will have to be honest with how much power it might actually have to use, not just the typical value.

Still, this was our typical vs their typical, and it was half for C3 vs V5, which is not a big surprise because C3 is the low power process, and V5 has elements of the highest performance 65nm process (lowest Vt, thinnest oxide), as well as medium power elements.

Generally, the V5 has a "typical worst case at X degrees C" value for static power, and I really do not know how the C3 is specified for this value. They are honest and say that all that data is presently preliminary, and being characterized.

Austin

Article: 116889
Hi,

is there a way to get the Xilkernel to run with a C++ application? I didn't find any option for this in the XPS "Software Platform Settings".

The problem I'm having is that my thread function ("TSK_Main", which is declared in the C++ application) cannot be called from libxil.a, which is a C library. This is the error message that I'm getting while trying to build the C++ application:

../../ppc405_0_sw_platform/ppc405_0/lib/libxil.a(init.o):(.sdata+0x0): undefined reference to `TSK_Main'

Maybe someone knows a workaround for this?

Many thanks, Guy.

Article: 116890
Herbert Kleebauer wrote:
<snip>
> 2. Was somebody able to run Viewlogic (DOS version) in a virtual
> PC emulation. The problem is, the virtual PC must provide
> the proper graphics mode, mouse type and support a physical
> dongle on the virtual parallel port.

Do you have electronics recycling centers in your region? In the U.S., these places accumulate vast quantities of serviceable (usable) laptops which would be ideal platforms for the ongoing hosting of the DOS software, and one can obtain quantities of the machines for next to nothing.

Also, it would be an interesting experiment to try running the software on a VMWare MSDOS VM (a no-cost experiment); please report the results!

Regards,
Michael

Article: 116891
On Mar 19, 5:31 am, "Torsten Landschoff" <t.landsch...@gmx.de> wrote:
> Hi there!
>
> I am wondering what the default IOSTANDARD is on pins for which it is
> not explicitly assigned in the UCF of the project. Here, the project
> uses LVCMOS25 for some pins where nothing is set explicitly - is that
> always the default value? Is it a good style to always define the
> IOSTANDARD in any case?
>
> Thanks for any hints, Torsten

May I recommend ALWAYS specifying the defaults? The designer should know what the IOSTANDARD and other attributes should be for each port, and should not rely on the tools for a default. What if the software were to change the defaults?

Article: 116892
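A sketch of "always specify it". The port names below are invented, and attribute support should be checked against the XST version in use; the UCF form (e.g. NET "sys_clk" IOSTANDARD = LVCMOS25;) is the more traditional place to state it, with the Verilog-2001 attributes shown here as an in-source alternative so the intent travels with the code.

module top (
  (* IOSTANDARD = "LVCMOS25" *) input        sys_clk,
  (* IOSTANDARD = "LVCMOS25" *) input        rst_n,
  (* IOSTANDARD = "LVTTL"    *) output [3:0] led
);
  // trivial body just to make the sketch self-contained
  reg [3:0] count;

  always @ (posedge sys_clk or negedge rst_n)
    if (!rst_n) count <= 4'd0;
    else        count <= count + 4'd1;

  assign led = count;
endmodule

Either way, every pin ends up with an explicit standard, so a change in the tools' default can't silently change the board interface.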
"Herbert Kleebauer" <klee@unibwm.de> wrote in message news:46000A78.F8457E53@unibwm.de... > We use in a laboratory course still XILINX XC3000 FPGAs with > Viewlogic's Workview design entry (DOS version) and XILINX > XACT (also DOS). The problem is that we have to replace the > old PC's and that Viewlogic only supports a few graphics modes > and it is unlikely that it will run on new PC's. The last > version of XILINX ISE software which supports XC3000 FPGA's > isn't an alternative (and I'm not sure whether it will > run on W2k/XP) because the system must be extremely easy to > use so the students are able to design and implement a simple > CPU in about 10 hours (including the time to learn how to use > the schematic entry and simulation tool). > Hi Herbert, If it's OK, I have an observation. I wonder why these students are being taught design methods on design tools and FPGA parts that most folks on this newsgroup haven't used for a long time. The schematic vs. HDL wars have long since died down because modern FPGA designs are generally 'better' implemented using HDLs. Anyway, I'm sure you have a good reason for the approach you outline; I'm interested as to what this is. If you can post your goals, maybe the group could suggest some up-to-date alternatives? Best regards, Syms. p.s. I HATE Viewlogic. I wasted a day on a legacy design a while back because a wire had the wrong shaped dot on it. The worst part was the bloody software guy spotted the mistake!Article: 116893
I have a byte array of size 10 and I need to shift the values in the array to the left by 5 positions in 5 clocks. Here is the code I'm using:

reg [7:0] data[0:9];

// data shifting process
reg ps_shift_start;
reg ps_done;

reg [1:0] ps_state;
parameter ps_s0 = 2'b00;
parameter ps_s1 = 2'b01;

reg [12:0] ps_shift;
integer ps_index;

always @ (posedge clk) begin
  if(reset == 1) begin
    ps_done <= 0;
    ps_state <= ps_s0;
  end
  else begin
    case(ps_state)
      ps_s0: begin
        if(ps_shift_start == 1) begin
          ps_done <= 0;
          ps_shift <= 0;
          ps_state <= ps_s1;
        end
      end
      //
      ps_s1: begin
        if(ps_shift < 5) begin
          for(ps_index = 0; ps_index < 10; ps_index = ps_index + 1) begin
            data[ps_index] <= data[ps_index + 1];
          end
        end
        else begin
          ps_done <= 0;
          ps_state <= ps_s0;
        end
      end
    endcase
  end
end

The problem is that the XST synthesizer infers a lot of flip-flops for the signal 'data'. Is it possible to use distributed RAM for 'data'? If possible, how do I do that?

Thank you

Article: 116894
On Mar 20, 11:23 am, Herbert Kleebauer <k...@unibwm.de> wrote:
> The last
> version of XILINX ISE software which supports XC3000 FPGA's
> isn't an alternative (and I'm not sure whether it will
> run on W2k/XP) because the system must be extremely easy to
> use so the students are able to design and implement a simple
> CPU in about 10 hours (including the time to learn how to use
> the schematic entry and simulation tool).

As a suggestion, drop the schematic entry approach and introduce a hardware design language such as Verilog, or VHDL, or some academic invention that can be translated - these are much more powerful and extensible to real world applications. They are also much more portable.

Separate the simulation solution from the hardware implementation. That should give you many choices, both commercial and free/open source, on many platforms - with no lock-in to the FPGA hardware vendor.

Finally, for the actual implementation in an FPGA, try to give them an already canned project file to which they simply add their HDL source code. Ideally their HDL has already been vetted by the simulator and shouldn't have errors, but learning how to click on an error in the window to be taken to the offending line should not be complicated - much simpler than learning about any schematic entry tool. Alternatively, set it up with command line tools and a makefile type environment.

You should be able to fit a project of this scope into the free-license versions of tools, so even if you do end up having to use an emulated environment to run old software, dongles at least wouldn't be an issue.

Article: 116895
On Mar 20, 10:26 am, "CMOS" <manu...@millenniumit.com> wrote:
> i have a byte array of size 10 and i need to shift values in the array
> to left by 5 positions in 5 clocks.
> here is the code im using.
>
> reg [7:0] data[0:9];
>
> // data shifting process
> reg ps_shift_start;
> reg ps_done;
>
> reg [1:0] ps_state;
> parameter ps_s0 = 2'b00;
> parameter ps_s1 = 2'b01;
>
> reg [12:0] ps_shift;
> integer ps_index;
>
> always @ (posedge clk) begin
>   if(reset == 1) begin
>     ps_done <= 0;
>     ps_state <= ps_s0;
>   end
>   else begin
>     case(ps_state)
>       ps_s0: begin
>         if(ps_shift_start == 1) begin
>           ps_done <= 0;
>           ps_shift <= 0;
>           ps_state <= ps_s1;
>         end
>       end
>       //
>       ps_s1: begin
>         if(ps_shift < 5) begin
>           for(ps_index = 0; ps_index < 10; ps_index = ps_index + 1) begin
>             data[ps_index] <= data[ps_index + 1];
>           end
>         end
>         else begin
>           ps_done <= 0;
>           ps_state <= ps_s0;
>         end
>       end
>     endcase
>   end
> end
>
> the problem is XST syntherziser infferes lot of flip-flops for the
> signal 'data'. is't is possible to use
> distributed RAM for signal 'data'. if posssible, how do i do that?
>
> thank you

Put the code that infers the RAM into its own process, and make sure you follow the template.

-a

Article: 116896
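A sketch of that template advice, under the assumption that only locations 0..4 have to be valid after the shift, so the parallel move can become five single-byte copies (data[i] <= data[i+5]), one per clock. The memory sits in its own always block with a synchronous write and an asynchronous read, which is the pattern the XST templates use for dual-port distributed RAM; note also that the original code never increments ps_shift, a job the counter below takes over. Names are new, and the ports for loading and reading out the array are omitted.

module shift5_ram (
  input      clk,
  input      reset,
  input      start,
  output reg done
  // ports for filling and reading out 'data' omitted for brevity
);
  reg  [7:0] data [0:9];          // the byte array

  reg        busy  = 1'b0;
  reg  [3:0] waddr = 4'd0;
  wire [3:0] raddr = waddr + 4'd5;

  wire       we    = busy;        // one copy per clock while busy
  wire [7:0] rdata = data[raddr]; // asynchronous read port

  // The memory in its own process: synchronous write only.
  always @ (posedge clk)
    if (we)
      data[waddr] <= rdata;       // data[i] <= data[i+5]

  // Control: five copies (waddr = 0..4), then done.
  always @ (posedge clk) begin
    if (reset) begin
      busy  <= 1'b0;
      done  <= 1'b0;
      waddr <= 4'd0;
    end
    else if (!busy) begin
      done <= 1'b0;
      if (start) begin
        busy  <= 1'b1;
        waddr <= 4'd0;
      end
    end
    else if (waddr == 4'd4) begin
      busy  <= 1'b0;
      done  <= 1'b1;
      waddr <= 4'd0;
    end
    else
      waddr <= waddr + 4'd1;
  end
endmodule

If all ten positions genuinely have to move on every clock, a memory with a single write port cannot do it, and the flip-flop implementation XST produced is essentially the hardware the original code asks for.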
Peter Alfke wrote:
> Glen, ten or fifteen years ago, it was possible to design latches =
> memory cells in a slightly asymmetric way, so that they were
> guaranteed to power-up in a specific state. That's what Xilinx did
> originally with the many configuration memory cells. With smaller
> geometries, this "trick" became unreliable, and Xilinx had to find a
> different way to power up without massive contention. And we found :-)
> Playing analog tricks becomes increasingly more cumbersome (and
> unreliable) as we now are deep, deep in sub-micron territory.

Yes, I was considering more than 15 years ago (when the 8080 was a popular processor). I believe the thought at the time was an initialized RAM that would power up with some useful data, which could then be changed.

-- glen

Article: 116897
Ace wrote:
> sorry 'bout that. Yup, I was directing to the "general purpose
> procesor"

The easy answer is that general purpose processors are better for general purpose problems.

Assuming you are asking about FPGAs for use as processors (see http://www.fccm.org/), they work well for specific problems. Specifically, if you want to do some simple operation a very large number of times, an FPGA or an array of FPGAs may be a good solution.

-- glen

Article: 116898
"Bob Golenda" <bgolinda@nospam.net> writes: > Thank you very much for that pointer. It still appears that aside > from what Antti did, no one (at least on this list) has successfully > done a real JTAG programmer for the XCF...just XSVF Player type > programmers. I just find that odd. Not the XCF (haven't tried yet), but I've made a JTAG programmer which I have used to program 18V04's and several Xilinx and Altera PLD's/FPGA's. I don't use XSVF but plain SVF. I've made several versions of the programmer. One variant program over PCI, another over USB, one over the a serial port and a couple over Ethernet. One of the latter used an Altera NIOS. I can compress the programming data, but this require more resources in terms of CPU, gates or LUT's when you want to program. If you have a lot of space available (e.g. a CF type device) you can use a very simple device to drive the JTAG signals. Petter -- A: Because it messes up the order in which people normally read text. Q: Why is top-posting such a bad thing? A: Top-posting. Q: What is the most annoying thing on usenet and in e-mail?Article: 116899
Sorry, but that's an amazingly unclear question... What video format? How is it stored? Streaming input signal? Analog input to an ADC? You can't possibly expect anyone to help you with that question... clarify and maybe you'll find some advice.

On Mar 20, 6:51 am, "kha_vhdl" <abai...@gmail.com> wrote:
> hi every body ,
>
> i m asked to read a video using an FPGA , can you please give me some
> ways that let me adopt to create the test bench of the video?
> it means what are the different ways to read video into simulation :
> they told me there is an automatic way and a manual one , please can u
> explain it to me
>
> thank you