On Dec 4, 3:39 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
> On Mon, 03 Dec 2007 18:27:50 +0100, rickman <gnu...@gmail.com> wrote:
> > On Dec 3, 4:14 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote: ...snip...
> >> Given that uncompressible data often resembles noise, you have to ask
> >> yourself: what would be lost?
> >
> > The message! Just because the message "resembles" noise does not mean
> > it has no information. In fact, just the opposite.
>
> If you are compressing reliably transmitted pure binary data, then you are
> absolutely right. But if there is less information per datum, like in an
> analog TV signal, something that resembles noise might very well be noise.

But noise and signal that "resembles noise" are two different things. You can characterize noise and send a description of it. But if it *isn't* noise, you have just turned part of your signal into noise. So to take advantage of the fact that noise can be compressed by saying "this is noise" requires that you separate the noise from the signal. If you could do that, why would you even transmit the noise? You wouldn't; you would remove it.

So the only type of "noise like" signal left is the part that *is* signal and the part that can't be separated from the signal. Since you can't distinguish between the two, you have to transmit them both and suffer the inability to compress them.

> > Once you have a message with no redundancy, you have a message with optimum
> > information content and it will appear exactly like noise.
> >
> > Compression takes advantage of the portion of a message that is
> > predictable based on what you have seen previously in the message.
> > This is the content that does not look like noise. Once you take
> > advantage of this and recode to eliminate it, the message looks like
> > pure noise and is no longer compressible. But it is still a unique
> > message with information content that you need to convey.
> ...snip...
> >> If you can identify the estimated compression beforehand and then split
> >> the stream into a 'hard' part and an 'easy' part, then you have a way to
> >> retain the average.
> >
> > Doesn't that require sending additional information that is part of
> > the message?
>
> Usually, yes.

How can you flag the "easy" (compressible) part vs. the "hard" part without sending more bits?

> > On the average, this will add as much, if not more to
> > the message than you are removing...
>
> Possibly.

As I describe below, compression only saves bits if your *average* content has sufficient redundancy. So what does "possibly" mean?

> > If you are trying to compress data without loss, you can only compress
> > the redundant information. If the message has no redundancy, then it
> > is not compressible and, with *any* coding scheme, will require more
> > bandwidth than if it were not coded at all.
> >
> > Think of your message as a binary number of n bits. If you want to
> > compress it to m bits, you can identify the 2**m most often
> > transmitted numbers and represent them with m bits. But the remaining
> > numbers can not be transmitted in m bits at all. If you want to send
> > those you have to have a flag that says, "do not decode this number".
> > Now you have to transmit all n or m bits, plus the flag bit. Since
> > there are 2**n-2**m messages with n+1 bits and 2**m messages with m+1
> > bits, I think you will find the total number of bits is not less than
> > just sending all messages with n bits. But if the messages in the m
> > bit group are much more frequent, then you can reduce your *average*
> > number of bits sent. If you can say you will *never* send the numbers
> > that aren't in the m bit group, then you can compress the message
> > losslessly in m bits.

Article: 126851
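To make the counting argument in the post above concrete, here is a short worked calculation. The probability p of a message landing in the frequent group is an assumed parameter, and the notation is added here rather than taken from the thread.

Let $p$ be the probability that a message falls in the $2^m$ "frequent" group. With the flag-bit scheme, those messages cost $m+1$ bits and all others cost $n+1$ bits, so the expected length is
\[
E[L] = p\,(m+1) + (1-p)\,(n+1) = (n+1) - p\,(n-m),
\]
which beats the plain $n$-bit encoding only when $p\,(n-m) > 1$, i.e. $p > 1/(n-m)$. The worst case is still $n+1$ bits, so nothing is saved unless the frequent group really is hit often enough on average.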
On Dec 4, 7:51 am, "KJ" <kkjenni...@sbcglobal.net> wrote: > "rickman" <gnu...@gmail.com> wrote in message > > news:58740cd5-a79f-4306-bff9-c274852f2a01@e6g2000prf.googlegroups.com... > > > > > On Nov 30, 5:15 pm, "KJ" <kkjenni...@sbcglobal.net> wrote: > >> "rickman" <gnu...@gmail.com> wrote in message > > >> > The examples are far too numerous to list, but here is one. > >> <snip> > >> > To make this unsynthesizable in a way that is sometimes attempted by > >> > newbies... > > >> > Example2: process (SysClk, Reset) begin > >> > if (Reset = '1') then > >> > DataOutReg <= (others => '0'); > >> > elsif (rising_edge(SysClk) or falling_edge(SysClk)) then > >> > if (SCFG_CMD = '1') THEN > >> > DataOutReg <= TT & SD & PCMT0 & PCMT1 & WP_SDO0 & WP_SDO1 & DTR & > >> > RTS; > >> > end if; > >> > end if; > >> > end process Example2; > > >> > You can imagine a register that clocks on both the rising and falling > >> > edge, but you can't build it in an FPGA. > > >> But that does not imply that it couldn't be synthesized using two sets of > >> flip flops whose results get combined. You might not find a synthesis > >> tool > >> in 2007 that accepts the above code, but that doesn't mean that there > >> won't > >> be one in 2008 that will. Whether there is such a tool or not depends on > >> how many users scream to brand A and X that they really need this. It > >> can > >> be synthesized, just not how you are focusing on how you think it must be > >> synthesized. > > > If you can build the second description, I would like to see that. Do > > you know this is possible or are you just speculating? I have never > > seen a good example of a register clocked on both edges done in an > > FPGA. > > Like I said, your description does not imply that it couldn't be synthesized > using two sets of flops suitably combined. See code below for functionally > equivalent code that implements your example 2 but does so in a way that I'm > sure you can see that it can be synthesized. Note, I'm not suggesting that > writing dual edge flop code is good practice or anything, I'm simply saying > that the code that you presented is synthesizable, since it is functionally > equivalent to the code that I list below which clearly is synthesizable. > That implies that your Example 2 code is just not supported by today's > tools, quite possibly due to the lack of any real demand for support for > such coding....but, like I said in the earlier post, 'not supported' is not > the same as 'not synthesizable'. > > KJ > > Example2a: process (SysClk, Reset) begin > if (Reset = '1') then > DataOutReg_re <= (others => '0'); > elsif rising_edge(SysClk) then > if (SCFG_CMD = '1') THEN > DataOutReg_re <= TT & SD & PCMT0 & PCMT1 & WP_SDO0 & WP_SDO1 & DTR & > RTS; > end if; > end if; > end process Example2a; > > Example2b: process (SysClk, Reset) begin > if (Reset = '1') then > DataOutReg_fe <= (others => '0'); > elsif falling_edge(SysClk) then > if (SCFG_CMD = '1') THEN > DataOutReg_fe <= TT & SD & PCMT0 & PCMT1 & WP_SDO0 & WP_SDO1 & DTR & > RTS; > end if; > end if; > end process Example2b; > > DataOutReg <= DataOutReg_re when (SysClk = '1') else DataOutReg_fe; I am not sure you are correct in that the two circuits are equivalent. I was thinking about how such a circuit would not work in a practical sense, although logically it is the same as the one I wrote. So from a synthesis standpoint, this circuit could be synthesized. However, it would likely not work as intended since it depends too much on controlling delays. 
So I see your point, but I disagree that the two circuits are the same. I don't argue that "not supported" is the same as "not synthesizable". But I think there is a very small set of useful designs that are synthesizable but not supported by most vendors.Article: 126852
Though not what you were asking for, you might find this bibliography page interesting: http://splish.ee.byu.edu/bib/bibpage.html You might want to look at some of the projects being done on the "RAMP": http://ramp.eecs.berkeley.edu/ -- JecelArticle: 126853
How do I get Quartus to infer a dual port RAM? I've tried a couple of coding styles; neither worked. I've also tried putting a /* syn_ramstyle = "M4K" */ on the module statement and on the ram declaration; that didn't help either. I've also tried the /* syn_ramstyle = "no_rw_check" */ attribute. The Quartus manual claims that it can infer a dual port RAM; I'm using version 7.2.

The first style is from the Quartus manual:

reg [WIDTH-1:0] a_ram;
reg [WIDTH-1:0] b_ram;
reg [WIDTH-1:0] ram[DEPTH-1:0] /* syn_ramstyle = "M4K" */;

always@(posedge sysclk)
  begin
    if(a_wrt) begin
      ram[a_addr] <= a_din;
      a_ram <= a_din;
    end
    else begin
      a_ram <= ram[a_addr];
    end // else: !if(a_wrt)
    if(b_wrt) begin
      ram[b_addr] <= b_din;
      b_ram <= b_din;
    end
    else begin
      b_ram <= ram[b_addr];
    end // else: !if(b_wrt)
    if(a_ld) a_q <= a_ram;
    if(b_ld) b_q <= b_ram;
  end // always@ (posedge sysclk)

The second is in the Xilinx style:

wire [WIDTH-1:0] a_ram;
wire [WIDTH-1:0] b_ram;
reg [WIDTH-1:0] ram[DEPTH-1:0] /* syn_ramstyle = "M4K" */;
reg [ADDR_WIDTH-1:0] a_indx;
reg [ADDR_WIDTH-1:0] b_indx;

assign a_ram = ram[a_indx];
assign b_ram = ram[b_indx];

always@(posedge sysclk)
  begin
    if(a_wrt) begin
      ram[a_addr] <= a_din;
    end
    if(b_wrt) begin
      ram[b_addr] <= b_din;
    end
    a_indx <= a_addr;
    b_indx <= b_addr;
    if(a_ld) a_q <= a_ram;
    if(b_ld) b_q <= b_ram;
  end

Article: 126854
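For comparison, here is a generic simple-dual-port template (one write port, one read port) of the kind most RAM-inference guides describe. This is an illustrative sketch, not text from the Quartus manual; the module, parameter and signal names are made up, and whether a given Quartus version maps it into M4K blocks has to be checked in the fitter report.

module sdp_ram #(parameter WIDTH = 8, DEPTH = 256, AW = 8)
(
  input                  clk,
  input                  wr_en,
  input      [AW-1:0]    wr_addr,
  input      [WIDTH-1:0] wr_data,
  input      [AW-1:0]    rd_addr,
  output reg [WIDTH-1:0] rd_data
);
  // Inferred memory array: one write port, one synchronous read port.
  reg [WIDTH-1:0] mem [0:DEPTH-1];

  always @(posedge clk) begin
    if (wr_en)
      mem[wr_addr] <= wr_data;  // write port
    rd_data <= mem[rd_addr];    // registered read, as block RAM requires
  end
endmodule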
fazulu deen wrote: > Is there any formula to calculate processor clock cycles per > Instructions with given parameters as FPGA implemented processor clock > frequency and instruction bytes... This is a design tradeoff between Fmax, latency and device utilization. Edit code, run a sim, check Fmax, repeat. -- Mike TreselerArticle: 126855
fazulu deen <fazulu.vlsi@gmail.com> wrote: >Hai all, > >Is there any formula to calculate processor clock cycles per >Instructions with given parameters as FPGA implemented processor clock >frequency and instruction bytes... > >pls suggest.. No, this depends entirely on the architecture of the processor. Some processors may require 1 clock cycle for 1 instruction, others may need a variable number of clock cycles. -- Reply to nico@nctdevpuntnl (punt=.) Bedrijven en winkels vindt U op www.adresboekje.nlArticle: 126856
On 4 Dec., 09:39, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
> On Mon, 03 Dec 2007 18:27:50 +0100, rickman <gnu...@gmail.com> wrote:
> > On Dec 3, 4:14 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
> >> On Thu, 29 Nov 2007 15:42:45 +0100, Denkedran Joe <denkedran...@googlemail.com> wrote:
> >> > I'm working on a hardware implementation (FPGA) of a lossless compression
> >> > algorithm for a real-time application. The data will be fed in to the
> >> > system, will then be compressed on-the-fly and then transmitted further.
> >> >
> >> > The average compression ratio is 3:1, so I'm gonna use some FIFOs of a
> >> > certain size and start reading data out of the FIFO after a fixed
> >> > startup-time. The readout rate will be 1/3 of the input data rate. The size
> >> > of the FIFOs is determined by the experimental variance of the mean
> >> > compression ratio. Nonetheless there are possible circumstances in which
> >> > no compression can be achieved.
> >>
> >> Given that uncompressible data often resembles noise, you have to ask
> >> yourself: what would be lost?
> >
> > The message! Just because the message "resembles" noise does not mean
> > it has no information. In fact, just the opposite.
>
> If you are compressing reliably transmitted pure binary data, then you are
> absolutely right. But if there is less information per datum, like in an
> analog TV signal, something that resembles noise might very well be noise.

The algorithm is either lossless, or it isn't. Identifying and removing the noise is exactly what a lossy algorithm does.

For any lossless algorithm there must be an input that produces an output at least the size of the input. The proof is rather simple. You just count the number of possible inputs (2^N for N bits) and the number of bits necessary to distinguish that many different outputs. (That's N bits, surprise.) With fewer bits, two inputs would code to the same output.

Guaranteed compression with a lossless algorithm is therefore only possible if most input sequences can't occur. If you can guarantee that half of the input combinations can't happen, you could guarantee to save a single bit. A case where that occurs is image halftoning with a fixed frequency. In that case you have a worst-case run-length distribution.

Kolja Sulimma

Article: 126857
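The counting step in the proof above can be stated in one line; this is the standard pigeonhole argument, with notation added here rather than taken from the post.

The number of binary strings shorter than $N$ bits is
\[
\sum_{k=0}^{N-1} 2^k = 2^N - 1 < 2^N,
\]
so no injective (lossless) code can map every $N$-bit input to a shorter output: at least one input must map to an output of $N$ bits or more.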
Michael Laajanen wrote: > Hi, > > Is there anyone that has XABEL for Sun available, due to some internal > mess we have managed to loose our old XABEL installation. > > It was used with XACT 5.2.1 and Viewlogic powerview which we still have > along with licenses for XABEL only the distribution CD is missing. > > /michael Have you asked Xilinx ? - give them some Date codes, for the other stuff, so they know which dusty store-room to look in. -jgArticle: 126858
I finally gave up on inferring RAM blocks - the synthesizer is just too fickle. However, if you still want to try, use the templates. You'll find them under the Edit menu.

Good luck,
Tommy

Article: 126859
Hi, I am reading the Spartan-3E user guide and I have a question regarding the clock infrastructure. It says that there are 8 "quadrant clock lines" A - H. Does that mean that my code can ONLY have eight different clock frequencies? What happens if I have more than that? Will they still work, although they would not be routed on the clock lines?

Thanks for the help,
Amish

Article: 126860
Hello, I am trying to convert the following Verilog code to VHDL:

assign Q = (rst==0)?Q_int:1'b0;

How do I convert this to VHDL? I have to use a concurrent statement, as this statement is not in an always block and hence is concurrent. I cannot use an if-then-else statement as it is sequential. Please help.

Article: 126861
Anuja wrote:
> assign Q = (rst==0)?Q_int:1'b0;
>
> How do I convert this to VHDL? I have to use a concurrent statement as

Q <= Q_int when rst = '0' else '0';

Article: 126862
KJ wrote: (snip) > As I pointed out in my first post, what many people refer to as 'not > synthesizable' really means that they can't find a tool that supports the > code as it is written. Something that is 'not synthesizable' can never be > built. Living with the limitations of a particular tool(s) is not the same > thing at all. >>Simulation operates on the >>full language. Synthesis only works with a subset that actually >>describes hardware. > Agreed. Absolute delays will likely never be synthesizable, for example. >>The examples are far too numerous to list, but here is one. (snip) >>You can imagine a register that clocks on both the rising and falling >>edge, but you can't build it in an FPGA. > But that does not imply that it couldn't be synthesized using two sets of > flip flops whose results get combined. You might not find a synthesis tool > in 2007 that accepts the above code, but that doesn't mean that there won't > be one in 2008 that will. Whether there is such a tool or not depends on > how many users scream to brand A and X that they really need this. It can > be synthesized, just not how you are focusing on how you think it must be > synthesized. On the other hand, if it can be done why not do it as a module. (That would be verilog, but VHDL has something similar.) Most don't synthesize a variable divide, but usually one would want to use a module, anyway. (For example, to do a pipelined divide.) Unless the hardware implemented a two edge FF, I don't think one should synthesize one. It is likely to be slow, exactly what you don't want in a two edge FF. -- glenArticle: 126863
On Dec 4, 1:56 pm, axr0284 <axr0...@yahoo.com> wrote: > Hi, > I am reading the Spartan3E user guide and I had a question regarding > the clock infrastructure. > It says that there are 8 "quadrant clock lines" A - H. Does that mean > that my code can ONLY have eight different clock frequencies. What > happens if I have more than that. Will they still work although they > would not be routed on the clock lines. Thanks for the help > Amish You can have only 8 global clocks in each quadrant. There are 32 global resources available with right/left half clock pins that are different from the global clock pins available at the top and bottom of the device. You can have a design that uses all 32 on global resources. If you exceed the 32 clocks (or 8 clocks within any one quadrant) then local routing can be used but the ability to properly place your part to meet setup/hold timing could be difficult. While general clock routing can be poor, if you synchronize your interface near the edge of the device, local routing can do a very reasonable job of keeping the setup/hold times in check. You might find good app notes on "local clocks" for those uses. But you have to ask yourself: do you really need such a multitude of clocks?! All those asynchronous boundary crossings must be a MESS. You do know about multi-cycle paths and clock enables to work at an effective lower speed from a high speed clock, don't you? I've never had more than 3 clocks in my designs that I can recall; there are designs that have a wide variety of interfaces, but I've never seen a "clock killer" app up close and personal. Please work toward a design that uses clocks sparingly and you'll find the reduction in asynchronous interfaces worth the hassle. - John_HArticle: 126864
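A minimal sketch of the clock-enable idea John_H mentions above: instead of dividing the clock and routing a new clock net, generate a one-cycle-wide enable from the fast clock and qualify the slow registers with it. The divide ratio, widths and signal names here are illustrative, not from the thread.

module ce_divide #(parameter DIV = 10)
(
  input            clk,
  input      [7:0] slow_next,  // example payload that should update at the slow rate
  output reg [7:0] slow_reg,
  output reg       slow_en
);
  reg [3:0] div_cnt = 0;  // wide enough for DIV up to 16

  // Produce a single-cycle enable pulse every DIV fast-clock cycles.
  always @(posedge clk) begin
    if (div_cnt == DIV-1) begin
      div_cnt <= 0;
      slow_en <= 1'b1;   // high for exactly one clk period
    end else begin
      div_cnt <= div_cnt + 1;
      slow_en <= 1'b0;
    end
  end

  // The "slow" register stays in the fast clock domain; it simply updates
  // only when the enable is high, so no extra clock net or domain crossing
  // is needed.
  always @(posedge clk)
    if (slow_en)
      slow_reg <= slow_next;
endmodule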
On Tue, 04 Dec 2007 13:35:55 -0800, Tommy Thorn wrote:
> I finally gave up on inferring RAM blocks - the synthesizer is just too
> fickle. However, if you still want to try, use the templates. You'll
> find them under the Edit menu.
>
> Good luck,
> Tommy

I managed to make it work, but it's incredibly sensitive. The two ports have to be in different always blocks even though they share the same clock. Also, there doesn't seem to be a way to put a clock enable on the read. I put in a workaround that saves the address in a register and then selects it if there is no clock enable, but that's a kludge that wastes logic and will slow down the address path. Altera needs to fix this.

assign a_adr = a_ce ? a_addr : a_indx;
assign b_adr = b_ce ? b_addr : b_indx;

always@(posedge sysclk)
  begin
    if(a_wrt && a_ce) begin
      ram[a_adr] <= a_din;
      a_ram <= a_din;
    end
    else begin
      a_ram <= ram[a_adr];
    end // else: !if(a_wrt)
    if(a_ce) a_indx <= a_addr;
    if(a_ld) a_q <= a_ram;
  end // always@ (posedge sysclk)

always@(posedge sysclk)
  begin
    if(b_wrt && b_ce) begin
      ram[b_adr] <= b_din;
      b_ram <= b_din;
    end
    else begin
      b_ram <= ram[b_adr];
    end // else: !if(b_wrt)
    if(b_ce) b_indx <= b_addr;
    if(b_ld) b_q <= b_ram;
  end // always@ (posedge sysclk)

Article: 126865
Hi,

Jim Granville wrote:
> Michael Laajanen wrote:
>> Hi,
>>
>> Is there anyone that has XABEL for Sun available? Due to some internal
>> mess we have managed to lose our old XABEL installation.
>>
>> It was used with XACT 5.2.1 and Viewlogic Powerview, which we still
>> have along with licenses for XABEL; only the distribution CD is missing.
>>
>> /michael
>
> Have you asked Xilinx? - give them some date codes for the other
> stuff, so they know which dusty store-room to look in.
>
> -jg

I have tried our local Xilinx, but this predates most people who work there :) I can try and contact Xilinx in the USA and see, will do.

/michael

Article: 126866
Hi all,

I am trying to calculate clock cycles per instruction by running the test cases and monitoring the waveforms. I am confused about how to count the cycles for memory- and IO-related instructions...

For example:

MOV AL,33H ------> took 3 clk cycles for my design
OUT 32H,AL ------> fetching and decoding take 10 clock cycles; the write into
                   memory starts three clock cycles into the fetch/decode
                   process and ends 1 clk after fetch/decode ends -- 8 clk
                   cycles; and execution takes 1 clk cycle...
HLT

My question is: should I also count the overlapping time of the write into memory when calculating the clock cycles of the OUT instruction, or should I subtract the writing time? So does the OUT instruction take 10+1 = 11 clock cycles, or 3+1 = 4 clk cycles? Which is correct?

regards,
faz

Article: 126867
Thanks for your answers! But an open question is still how ISE comes up with the clock speed after synthesis. As far as I have read and learned, there is no timing information in that step and only a netlist is generated. Or am I wrong? How reliable are those numbers in the real world? Say my design synthesizes at 240 MHz; will it run at 200 MHz on the FPGA?

Another thing is the timing reports. I searched the Xilinx website and googled, but I couldn't find any useful answers. Is there a document or tutorial for those things? I'm a bit overwhelmed with all the information and I'm trying to figure out the important stuff....

Thanks,
urban

Article: 126868
On Tue, 04 Dec 2007 17:50:46 +0100, rickman <gnuarm@gmail.com> wrote:
> On Dec 4, 3:39 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
>> On Mon, 03 Dec 2007 18:27:50 +0100, rickman <gnu...@gmail.com> wrote:
>> > On Dec 3, 4:14 am, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
> ...snip...
>> >> Given that uncompressible data often resembles noise, you have to ask
>> >> yourself: what would be lost?
>>
>> > The message! Just because the message "resembles" noise does not mean
>> > it has no information. In fact, just the opposite.
>>
>> If you are compressing reliably transmitted pure binary data, then you are
>> absolutely right. But if there is less information per datum, like in an
>> analog TV signal, something that resembles noise might very well be noise.
>
> But noise and signal that "resembles noise" are two different things.
> You can characterize noise and send a description of it. But if it
> *isn't* noise you have just turned part of your signal into noise. So
> to take advantage of the fact that noise can be compressed by saying
> "this is noise" requires that you separate the noise from the signal.
> If you could do that, why would you even transmit the noise? You
> wouldn't, you would remove it.

If you could, yes. Costs put limits on the available processing power.

> So the only type of "noise like" signal left is the part that *is*
> signal and the part that can't be separated from the signal. Since
> you can't distinguish between the two, you have to transmit them both
> and suffer the inability to compress them.
>
>> > Once you have a message with no redundancy, you have a message with optimum
>> > information content and it will appear exactly like noise.
>>
>> > Compression takes advantage of the portion of a message that is
>> > predictable based on what you have seen previously in the message.
>> > This is the content that does not look like noise. Once you take
>> > advantage of this and recode to eliminate it, the message looks like
>> > pure noise and is no longer compressible. But it is still a unique
>> > message with information content that you need to convey.
>> ...snip...
>> >> If you can identify the estimated compression beforehand and then split
>> >> the stream into a 'hard' part and an 'easy' part, then you have a way to
>> >> retain the average.
>>
>> > Doesn't that require sending additional information that is part of
>> > the message?
>>
>> Usually, yes.
>
> How can you flag the "easy" (compressible) part vs. the "hard" part
> without sending more bits?

In the context of the OP's hardware implementation, you may be able to distribute these two streams over the available output pins without sending extra bits.

>> > On the average, this will add as much, if not more to
>> > the message than you are removing...
>>
>> Possibly.
>
> As I describe below, compression only saves bits if your *average*
> content has sufficient redundancy. So what does "possibly" mean?

If compression saves 'a lot of' bits and flagging needs 'a few' bits, then it will not "add as much, if not more to the message than [I am] removing..." Your description below only applies to certain compression algorithms, so any conclusion derived from it may or may not apply to the general case.

>> > If you are trying to compress data without loss, you can only compress
>> > the redundant information. If the message has no redundancy, then it
>> > is not compressible and, with *any* coding scheme, will require more
>> > bandwidth than if it were not coded at all.
>>
>> > Think of your message as a binary number of n bits. If you want to
>> > compress it to m bits, you can identify the 2**m most often
>> > transmitted numbers and represent them with m bits. But the remaining
>> > numbers can not be transmitted in m bits at all. If you want to send
>> > those you have to have a flag that says, "do not decode this number".
>> > Now you have to transmit all n or m bits, plus the flag bit. Since
>> > there are 2**n-2**m messages with n+1 bits and 2**m messages with m+1
>> > bits, I think you will find the total number of bits is not less than
>> > just sending all messages with n bits. But if the messages in the m
>> > bit group are much more frequent, then you can reduce your *average*
>> > number of bits sent. If you can say you will *never* send the numbers
>> > that aren't in the m bit group, then you can compress the message
>> > losslessly in m bits.

--
Made with Opera's revolutionary e-mail program: http://www.opera.com/mail/

Article: 126869
On 29 Nov., 16:38, Anton Kowalski <AntonKowal...@gmail.com> wrote:
> I am new to EDK (but not ISE) and have some questions about the
> workflow for developing a custom IPIF peripheral.
>
> The documentation implies that the peripheral is re-imported into EDK
> once its development is *complete*. But what if one wants to work
> iteratively? That is, I would like to start with a stubbed-out design
> (the one provided in user_logic.vhd) and add to it incrementally,
> debugging and testing the peripheral from the processor along the way.
> Is there a simple way to do this? Or does one have to re-import the
> design every time an internal structural change is made? (The external
> specification will remain the same.)

Let me tell you some lessons that I learned over my last couple of EDK projects.

1. The address decodes and register implementation in user_logic.vhd are awful. I stripped 2 ns out of the critical path with 5 minutes of editing.

2. In the design flow we are using, we have a rather stable CPU system with complex hardware in development. When implementing the user logic in the EDK project, every time you add a register you need to open EDK and change and wire up multiple hierarchy levels of entities, component declarations, etc.

The solution to both problems is simple: we create an IPIF with only one user address range, usually with a size of 24 bits. (A larger address range means less hardware in the decoders.) Instead of instantiating the user logic inside EDK, we bring the IPIF signals out to ports of the EDK system. We instantiate the EDK system in an ISE project and implement the user logic there, doing our own address decoding. This greatly speeds up iterations on the register set. We hardly ever touch EDK again during the design process.

Kolja Sulimma

Article: 126870
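A rough sketch of the flow described above, for orientation only: "edk_system" stands for the EDK-generated design with its IPIF-side user signals exported as ports, and every port name, width and address split shown here is an assumption, not EDK's actual naming. The user registers and their address decode live in the ISE-level wrapper.

module top
(
  input            sys_clk,
  input            sys_rst,
  output reg [7:0] leds
);
  wire        ipif_clk;
  wire        ipif_rst;
  wire [23:0] ipif_addr;     // the single 24-bit user address range
  wire [31:0] ipif_wrdata;
  wire        ipif_wr;
  reg  [31:0] ipif_rddata;

  // EDK-generated system instantiated as a black box; its bus and pcores
  // stay inside, only the user-side IPIF signals come out as ports.
  edk_system u_sys (
    .sys_clk     (sys_clk),
    .sys_rst     (sys_rst),
    .user_clk    (ipif_clk),
    .user_rst    (ipif_rst),
    .user_addr   (ipif_addr),
    .user_wrdata (ipif_wrdata),
    .user_wr     (ipif_wr),
    .user_rddata (ipif_rddata)
  );

  // User registers and address decoding live out here in the ISE project,
  // so adding a register is an ordinary HDL edit, not an EDK round trip.
  reg [31:0] reg0, reg1;

  always @(posedge ipif_clk) begin
    if (ipif_rst) begin
      reg0 <= 32'h0;
      reg1 <= 32'h0;
    end else if (ipif_wr) begin
      case (ipif_addr[3:2])
        2'd0: reg0 <= ipif_wrdata;
        2'd1: reg1 <= ipif_wrdata;
      endcase
    end
    case (ipif_addr[3:2])          // registered read-back mux
      2'd0:    ipif_rddata <= reg0;
      2'd1:    ipif_rddata <= reg1;
      default: ipif_rddata <= 32'h0;
    endcase
  end

  always @(posedge ipif_clk)
    leds <= reg0[7:0];
endmodule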
On 5 Dec., 09:57, "Boudewijn Dijkstra" <boudew...@indes.com> wrote:
> Your description below only applies to certain compression
> algorithms, so any conclusion derived from it may or may not apply to the
> general case.

ROTFL. Did you even read it? He outlined the formal proof that I was referring to in a little more detail. This proof shows that for ANY lossless algorithm there is an input that can't be compressed. I find it rather funny that you counter that proof with the assertion that it only applies to certain algorithms.

For the fun of it: would you be so kind as to present a single example of a compression algorithm that the proof does not apply to? It could be worth a PhD if you manage.

Kolja Sulimma

Article: 126871
Hi, I have a question about the use of a BUFGCE in a Xilinx design (currently using a Virtex-4). When I enable the buffer it seems to lose one clock cycle.

            1       2       3       4
           ___     ___     ___     ___     ___
CLK_IN  __|   |___|   |___|   |___|   |___|   |
                _______________________________
ENABLE  _______|
                           ___     ___     ___
CLK_OUT __________________|   |___|   |___|   |

(hope the drawing doesn't get messed up) Can anyone tell me why I don't see clock cycle number 2 on the output? I read in the Virtex-4 datasheet (if I understood it right) that the second clock cycle should be on the output. Any ideas what I'm doing wrong?

Thanks,
urban

Article: 126872
On Nov 29, 8:36 am, Mark McDougall <ma...@vl.com.au> wrote:
> Hi,
>
> I have a design that works fine in Quartus.
>
> In the process of porting it to ISE, I'm getting a series of these
> warnings and can't for the life of me work out why...
>
> An example:
> WARNING:Xst:647 - Input <vblank> is never used.
>
> But it clearly _is_ being used!?! Same for all the other signals that it's
> complaining about.
>
> Normally I'd suspect a missing clock but that doesn't appear to be the
> case. (For the record the clock for this process isn't actually meeting
> all timing constraints at the moment.)
>
> Any tips would be appreciated - it's driving me (more) insane!
> Regards,
>
> --
> Mark McDougall, Engineer
> Virtual Logic Pty Ltd, <http://www.vl.com.au>
> 21-25 King St, Rockdale, 2216
> Ph: +612-9599-3255 Fax: +612-9599-3266

If you connect submodules to each other (port connections) and you don't use all of the signals in the second module, ISE will issue that warning. Even if you do use the port inside a submodule, ISE will still warn if it makes no contribution to your logic, i.e. to any path from an input to an output, because during optimization ISE creates no hardware for unused logic. If you also bring the ports in question out through your top module, then no warning will be issued.

Regards
Shahid

Article: 126873
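A minimal illustration of the situation described above; the names are made up and the exact warning text varies by tool version. The input reaches the submodule, but nothing it drives ever reaches an output, so the synthesizer trims the logic and may report the input as unused.

// Submodule: 'vblank' is connected but only drives a register that no
// output depends on, so the whole cone of logic gets optimized away.
module video_sub (
  input            clk,
  input            vblank,
  input      [7:0] pixel_in,
  output reg [7:0] pixel_out
);
  reg vblank_seen;                 // never read by anything downstream

  always @(posedge clk) begin
    vblank_seen <= vblank;         // dead logic after trimming
    pixel_out   <= pixel_in;       // only this path survives
  end
endmodule

// Top level: passes vblank down; synthesis may still report
// "Input <vblank> is never used" even though it is connected.
module video_top (
  input        clk,
  input        vblank,
  input  [7:0] pixel_in,
  output [7:0] pixel_out
);
  video_sub u_sub (
    .clk(clk), .vblank(vblank), .pixel_in(pixel_in), .pixel_out(pixel_out)
  );
endmodule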
I'm planning to use Spartan 3e and SDRAM for a product - sort of a simple video "card" for an embedded CPU system. I got myself the Spartan 3e STARTER kit and I'm trying to use the SDRAM on board. Found the MIG 1.6 and the pre-configured "bl2cl2" set of files. Got them to synthesize by editing a full (wrong for me) path to "params" file. So far so good. Changed the UCF to use the 50 MHz clock rather than the external one. The resulting bit file loads and does *something*. At least there are pulses on "data valid" LED. I may get somewhere if I continue on this path but it will take me a long time to figure out how this core works. I wonder if anyone knows of an existing design that uses SDRAM on this board interfaced to something: soft CPU, video generator, etc. Thanks. -Alex.Article: 126874
>Hi all, > >I am trying to calculate clock cycles per instructions by running the >test cases and monitoring the waveforms. > >I am confused to calculate the cycles for memory and IO related >instructions... > Code profiling is a complicated thing, and is definitely off-topic for an FPGA forum. Perhaps your architecture is completely wrong if you need to profile the code that closely. HTH!