On May 22, 10:07 pm, nandit...@gmail.com wrote:
> Hi,
>
> I am using Altera Quartus II version 4.0. The EP1S25 device has
> 138 M4K blocks, 224 M512 blocks and 2 M-RAM blocks of 512 Kbits each,
> for a total of 1,944,576 bits of RAM.
> (http://www.altera.com/products/devices/stratix/features/stx-trimatrix.html)
>
> The problem I am facing is that I am trying to fit in 4 buffer data blocks,
> each of size 16384 x 14 bits (total 229,376 x 4 = 917,504 bits),
> plus an additional FIFO and a single-port RAM of 8192 and 1500 bits.
>
> Although there is adequate RAM available, I am not able to fit all the
> memory; only 48% of the RAM is used. The fitter resource summary says more
> M-RAM blocks are required, but most of the M4K and M512 blocks are not
> used: 40/224 M512 blocks, 2/138 M4K blocks and the 2 M-RAM blocks are used.
>
> I am fairly new to FPGAs, so I would appreciate any help.
>
> Thanks and regards.

See if http://www.altera.com/literature/ug/ug_fifo_partitioner.pdf fits your
needs; it allows you to have multiple FIFOs in one M-RAM.

Moti

Article: 119601
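A complementary approach, if the FIFO partitioner does not cover the plain
buffers, is to steer some of them explicitly into M4K blocks so that the two
M-RAM blocks are not over-subscribed. A minimal sketch follows, assuming
Quartus' "ramstyle" synthesis attribute (the attribute spelling and accepted
values vary between Quartus versions, so check the handbook for 4.0); the
module and signal names are only illustrative:

    // One of the 16384 x 14 buffers, forced into M4K blocks instead of
    // M-RAM via the (assumed) Quartus "ramstyle" attribute.
    module buf16k14 (
        input             clk,
        input             we,
        input      [13:0] waddr,
        input      [13:0] raddr,
        input      [13:0] din,
        output reg [13:0] dout
    );
        (* ramstyle = "M4K" *) reg [13:0] mem [0:16383];

        always @(posedge clk) begin
            if (we)
                mem[waddr] <= din;      // write port
            dout <= mem[raddr];         // registered read port
        end
    endmodule

With two of the four buffers marked this way, the fitter only needs the two
M-RAM blocks for the remaining buffers, and the small FIFO and 1500-bit RAM
can fall into M512 blocks as usual.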
I'm having a problem with Xilinx Navigator "discovering" a Verilog design
hierarchy. I've inherited some IP that requires that a Verilog `define be set
to specify the modules to include in the design. However, when Navigator
starts up and builds the design hierarchy, I see no way to tell it about
global macro definitions.

I can manually go in and add `define values in the files (yuck), but I'm
hoping someone has already figured out how to get Navigator to do this.

Verilog allows you to define a macro via the command line using +define, and
XST apparently also allows macros to be defined via the synthesis property
"Verilog Macros". I've found no way to do something similar with Navigator.

Has anyone else run into this and found a work-around?

Thanks!

John Providenza

Article: 119602
Hello all.

I'm very new to VHDL and stuck with a simple task. This code should convert a
binary number to a BCD number using the shift-and-add-3 algorithm:
http://www.engr.udayton.edu/faculty/jloomis/ece314/notes/devices/binary_to_BCD/bin_to_BCD.html

After this code executes I always get "0000" in digit and unit. You don't
have to analyze the whole code, just tell me what is generally wrong. Thank
you people!

    variable temp  : bit_vector(7 downto 0) := "00011000";  -- 24 (decimal)
    variable unit  : bit_vector(3 downto 0) := "0000";
    variable digit : bit_vector(3 downto 0) := "0000";
    begin
      for i in 0 to 7 loop
        digit    := digit sll 1;
        digit(0) := unit(3);
        unit     := unit sll 1;
        unit(0)  := temp(7);
        temp     := temp sll 1;

        -- This is the part where I add 3, is there any other way?
        -- It must work on an FPGA.
        case digit is
          when "0101" => digit := "1000";
          when "0110" => digit := "1001";
          when "0111" => digit := "1010";
          when "1000" => digit := "1011";
          when "1001" => digit := "1100";
          when others => digit := digit;
        end case;

        case unit is
          when "0101" => unit := "1000";
          when "0110" => unit := "1001";
          when "0111" => unit := "1010";
          when "1000" => unit := "1011";
          when "1001" => unit := "1100";
          when others => unit := digit;
        end case;
      end loop;

Article: 119603
On 23 May, 07:05, "J.Ram" <jrgod...@gmail.com> wrote:
> Finally I downloaded this design to the board and see that the design is
> working at 90 MHz.
> My question: why is the tool not meeting timing, yet the design works on
> the board?

If a tool states a max frequency, this frequency has to be valid for the
worst-case scenario. That usually means min voltage, max temperature and
worst-case silicon (and maybe some derating factors for ageing effects). Your
design will run much faster with a device from a "good" wafer run, at max
voltage and min temperature, at the beginning of its lifetime.

The next point is that the max frequency may be determined by a few paths
which are seldom exercised. You might have a good time with overclocking
until you need the longest path in the design. E.g. your longest path is the
ripple carry of an adder, which is the only path really exceeding the clock
period. That adder will do a good job until the day you get the full ripple
over all bits: it will produce n+1 for each increment of n, except when
n = max(n)/2.

bye Thomas

Article: 119604
Test01 wrote:
> Also you are suggesting to raise the sink current value to 24 mA. Currently
> I am using a 16 mA driver. I am assuming that the 24 mA sink current will
> somehow lower the resistance of the n-channel MOSFET in the FPGA when it is
> on and help reduce the voltage drop across it.

Correct. The 24 mA setting has a lower-resistance MOSFET than the 16 mA one.

> It will be great if you can expand on this. I am assuming that this
> solution will not require any external transistor to ground.

No external transistor is needed; it will decrease the 400 mV loss to 2/3 of
that. If that is still too much, you could parallel pins on the FPGA.

-jg

Article: 119605
On May 20, 6:30 pm, Mark McDougall <m...@vl.com.au> wrote:
> e...@idcomm.com wrote:
> > Has anybody figured out how to get ONLY the CORRECT pin assignments to
> > be used in the design?
>
> Use a HDL?
>
> --
> Mark McDougall, Engineer
> Virtual Logic Pty Ltd, <http://www.vl.com.au>
> 21-25 King St, Rockdale, 2216
> Ph: +612-9599-3255 Fax: +612-9599-3266

The problem isn't with the HDL/schematic debate, but with the fact that
there's a problem in how Quartus interprets the diagram. For a 2k-gate
device, HDLs are nice and simple to use, but for a design with eight 5M-gate
devices, you want to have a block diagram at some point. Most designs fall
between those. Since it's the interpretation of the diagram/schematic that's
messed up, you can't very well use HDL to fix it if you ultimately want a
block diagram.

Another problem arises when you attempt to review a 3000-page HDL design, in
that there get to be disagreements between the individuals performing the
review, and with a BIG design it means the folks paying for the design have
to pay more for the review, which, with HDL, takes 3-4 weeks, while with
schematics/block diagrams it takes half a day. Most of the guys I work for
are engineers, and even the civil-engineer corporate officers can read and
correctly interpret a schematic, while most recent EE grads, who were taught
about HDLs, still have trouble with a big HDL design. The competent
old-timers, the ones paying for the work to be done, just scratch their heads
and frown at HDL listings.

The problem is with the documentation, if not with the way in which signal
names are interpreted. As far as I'm able to tell there is too little of it,
and much of it is out of date or otherwise in conflict with the way in which
the software behaves. Since the task is to produce a design that's documented
in schematic form, there's little to be done with HDL until the schematic
editor works better and the code is brought into sync with the documentation.

Richard Erlacher
Erlacher Associates
Denver, CO

Article: 119606
On May 20, 2:22 pm, "Icky Thwacket" <i...@it.it> wrote:
> <e...@idcomm.com> wrote in message
> news:1179683343.527144.240710@y18g2000prd.googlegroups.com...
> > When I assign a bus named <NAME[K..0]> the software arbitrarily
> > appends another signal group <NAMEK..NAME0> and binds them to
> > arbitrary pins and misassigns my signals NAME[K] .. NAME[0] to other
> > pin numbers, which, among other things, produces conflicts of
> > assignment, not to mention that it completely fouls up my signal
> > assignments.
>
> Its not a bug, you clearly do not fully understand the rules for bus
> naming.
>
> FRED[7..0] includes the members FRED[7] down to FRED[0] and also the
> members FRED7 down to FRED0.
>
> FRED[7] is exactly the same as FRED7 etc.
>
> If you have already independently named a node FRED7 for instance, it will
> become part of the FRED[7..0] bus - with the bus conflicts/pin renaming
> you describe.
>
> If you want really confusing fun try naming a bus like FRED372[7..0]
>
> Icky

Icky,

I agree that this is a mess, but the Altera folks have freely declared this
to be a bug since v2.2 of Quartus, and it's not due to be remedied until
later this year (purportedly in October, when the "next release" is scheduled
to occur). I may not understand the rules to which the software was written,
but I have read what little available documentation there is, and none of the
approaches it proposes, nor any of those proposed by the support staff, works
properly.

What puzzles me is that the doc suggests that one bus name should be A[15..0]
and another might be D[7..0]. It handles the latter just fine but always
makes a mess of the assignments and interpretation of the former. It has done
that since the original release of Quartus, and I've complained. The Altera
folks claim they're fixing it, but there's room for doubt, since it's taken
them over four years to address the matter.

I have, in my current quest, four named busses, and three are handled exactly
as I wish. Why do you think it handles the bus named A[15..0] (yes,
STD_LOGIC_VECTOR(15 downto 0);) differently from the other three? What can I
do to make it work, in schematic form?

What you said gives me pause, as I'm looking at having to change a bunch of
signal names that it presently doesn't screw up, e.g. nOEU6 and nOEU7, etc.,
all of which, as you suggest, should be part of a bus if that's how it works.
They're not, however, and don't appear in a group, though all the named
busses, aside from the A[15..0] bus, are presented as groups.

I don't generally use Altera parts/software unless I have to have 5-volt
logic. I'm using this stuff because I have a number of boards I use and reuse
in test fixtures and proof-of-concept work, and I generally don't have data
busses to contend with. This case is an exception. All the bus names in this
case are similarly constructed, yet the software chooses to mess up one of
them. I'd really like to know how to fix this.

Richard Erlacher
Erlacher Associates
Denver, CO

Article: 119607
On May 23, 2:48 pm, johnp <johnp3+nos...@probo.com> wrote:
> I'm having a problem with Xilinx Navigator "discovering" a Verilog
> design hierarchy. I've inherited some IP that requires that a Verilog
> `define be set to specify the modules to include in the design.
> However, when Navigator starts up and builds the design hierarchy,
> I see no way to tell it about global macro definitions.
>
> Has anyone else run into this and found a work-around?

One way is to include a common file in each file that needs it. The common
file would set the defines. This gives one a common point of control.

-Newman

Article: 119608
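A minimal sketch of that workaround (the file and macro names are just
examples, not from the original post): keep all the global macros in one
header-style file and `include it at the top of every source file that needs
them, so there is a single point of control even though Project Navigator
offers no global-macro setting.

    // ---- global_defines.vh (assumed file name) ----
    `ifndef GLOBAL_DEFINES_VH
    `define GLOBAL_DEFINES_VH
    `define USE_FAST_VARIANT        // selects which module variant is compiled
    `define DATA_WIDTH 16
    `endif

    // ---- any_module.v ----
    `include "global_defines.vh"

    module any_module (
        input  [`DATA_WIDTH-1:0] din,
        output [`DATA_WIDTH-1:0] dout
    );
    `ifdef USE_FAST_VARIANT
        assign dout = din;              // variant built when the macro is set
    `else
        assign dout = ~din;             // fallback variant
    `endif
    endmodule

For synthesis, the same macros can also be passed to XST through its
"Verilog Macros" property mentioned above, which keeps the `include file and
the synthesis settings in agreement.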
Hi,

> For me, XST simply black-boxed the EDK components below the top-level
> "system.vhd" file ...

That's right.

> ... and the PAR tools picked them up later. I don't recall
> using "black-box" attributes; XST simply left them un-instantiated.

I have to manipulate the system.vhd from the EDK inside my ISE project.
That's why I would like to do syntax checking. Isn't syntax checking the
first step one does when working under ISE?

> However I think what you are doing SHOULD work. Except...
> it sounds like you have another problem: XST's handling of libraries
> seems to be consistently broken (at least on 6.1 and 7.1; I haven't
> tried newer versions). And this is completely independent of EDK; it
> seems to do this with any libraries.
>
> Which ISE version are you using?

I'm working under 7.x. I will try version 8 in the next days or weeks; maybe
it will work better. I have had a license for version 8 since today.

I tried a little bit last week and I think I accidentally worked around it,
but I'm not sure how. I added the missing library manually and did syntax
checking. ISE does some mysterious magic (:-D) and "compiles" the VHDL module
file. Then you can delete the library from the project and it still works.
Now you are able to add the second library, with the eponymous module inside,
without problems. OK, that's tricky or dirty?! ;-) I think this is how it
worked one late evening last week, but I will have to try again in the next
days anyway. I had to redesign something; that's why this problem had to
wait.

> It is perfectly valid in VHDL to have the same entity name in several
> different libraries, and select a particular one for instantiation in
> any place with "library/use" clauses, and embedded configuration
> statements "for ... use " etc.

Yes, I know. That's an advantage of name hierarchies. I think it's more like
a name conflict in the internal organisation or implementation of ISE that
causes the error. It's absolutely legitimate in VHDL to have eponymous
modules as long as they are in different libraries, but ISE reports some
strange error when adding the second module, even when the two are included
in different libraries. I can't remember the words the ISE error popup used;
I can post them when I'm at this state of the redesign again.

> But ISE simply instantiates the first component it finds with the right
> name and ignores your explicit bindings (in my experience)

thx

Article: 119609
If we design an S3-based system using an internal clock doubler, the design
software wants to know our input frequency. In our case, we want to double
the incoming 20 MHz clock to 40 MHz for internal use.

In other applications, the incoming clock may range from, say, 18 to 25 MHz,
and we'd like to use the same FPGA design without recompiling. Looking at the
Xapps and datasheets, it's implied that the clock doubler will double an
incoming clock over the specified range (18 to 167 MHz in) without otherwise
being "told" the nominal input frequency. Is that right?

Thanks,

John

Article: 119610
On Wed, 23 May 2007 15:00:27 -0700, John Larkin
<jjlarkin@highNOTlandTHIStechnologyPART.com> wrote:
> Looking at the Xapps and datasheets, it's implied that the clock
> doubler will double an incoming clock over the specified range (18 to
> 167 MHz in) without otherwise being "told" the nominal input
> frequency. Is that right?

And I guess there are two parts to the question: how do the DFS and the DLL
blocks operate in this regard? If we use the DFS, we can go below 18 MHz as
the input, which would be nice.

John

Article: 119611
John,

18 MHz is the lower limit for the Spartan 3 DLL, and the Spartan 3 DFS could
go even lower than 18 MHz -- down to 9 MHz (to multiply and provide a clock
out that is 2X the clock in).

25 MHz is well below the low frequency mode of the DLL/DCM, so one bitstream
fits all (one setting will lock for any input in the range of 18 to 25 MHz).

If the frequency changes while it is running, you may have to reset the DCM
for it to lock. The DCM tracks up to a point, but delay line overflow or
underflow may happen, which then requires a reset to restart.

Austin

John Larkin wrote:
> And I guess there are two parts to the question: how do the DFS and
> the DLL blocks operate in this regard? If we use the DFS, we can go
> below 18 MHz as the input, which would be nice.

Article: 119612
The "one bitstream for two frequencies" might be confusing because of the frequency setting on the DCM itself (the primitive's parameter). The mention that the value is used "to optimize jitter" never was very clear to me. I'd suggest that for a DCM with zero phase offset, compiling with the highest frequency value as the specification for the UCF will keep the system running at a fixed frequency across that entire range. For a DCM with fixed phase offset, however, I'd suggest due-diligence to make sure that neither the setup nor the hold times on the various paths are compromised. Except for this rare situation, it appears the DFS-only mode should support everything nicely. - John_H "austin" <austin@xilinx.com> wrote in message news:f32fq6$6f21@cnn.xilinx.com... > John, > > 18 MHz is the lower limit for the Spartan 3 DLL, and the Spartan 3 DFS > could go even lower than 18 MHz -- down to 9 MHz (to multiply and > provide a clock out that is 2X the clock in). > > 25 MHz is well below the low frequency mode of the DLL/DCM, so one > bitstream fits all (one setting will lock for any input in the range of > 18 to 25 MHz). > > If the frequency changes while it is running, you may have to reset the > DCM for it to lock. The DCM tracks up to a point, but delay line > overflow or underflow may happen which then requires a reset to restart. > > Austin > > John Larkin wrote: >> On Wed, 23 May 2007 15:00:27 -0700, John Larkin >> <jjlarkin@highNOTlandTHIStechnologyPART.com> wrote: >> >>> >>> If we design an S3-based system using an internal clock doubler, the >>> design software wants to know our input frequency. In our case, we >>> want to double the incoming 20 MHz clock to 40 MHz for internal use. >>> >>> In other applications, the incoming clock may range from, say, 18 to >>> 25 MHz, and we'd like to use the same FPGA design without recompiling. >>> Looking at the Xapps and datasheets, it's implied that the clock >>> doubler will double an incoming clock over the specified range (18 to >>> 167 MHz in) without otherwise being "told" the nominal input >>> frequency. Is that right? >>> >>> Thanks, >>> >>> John >> >> And I guess there are two parts to the question: how do the DFS and >> the DLL blocks operate in this regard? If we use the DFS, we can go >> below 18 MHz as the input, which would be nice. >> >> John >>Article: 119613
John,

The input frequency is used only for estimating jitter, and to help calculate
any timing period numbers. It is also used to set low or high frequency mode,
if not set explicitly.

It does not affect the bitstream.

Austin

John_H wrote:
> The "one bitstream for two frequencies" might be confusing because of the
> frequency setting on the DCM itself (the primitive's parameter). The
> mention that the value is used "to optimize jitter" never was very clear
> to me.

Article: 119614
Lastly,

The DFS can not be sync'd to the DLL when the input clock is too low for the
DLL (below 18 MHz in this case).

If phase synchronization is required, then the range starts at 18 MHz.

If phase synchronization is not required, then do not use the CLK0, 90, 180,
270, 2X, 2X_B, DV outputs, and only use the CLKFX, CLKFX_B outputs.

Leave the CLKFB input not connected.

This instantiates ONLY the DFS element.

Now it works from 9 MHz up, and the CLKFX outputs are 2X the input.

There is no guaranteed phase relationship from the CLKIN to the DFS to the
CLKFX outputs in the DFS-only mode.

Austin

Article: 119615
On Wed, 23 May 2007 16:21:42 -0700, austin <austin@xilinx.com> wrote:
> If phase synchronization is not required, then do not use the CLK0, 90,
> 180, 270, 2X, 2X_B, DV outputs, and only use the CLKFX, CLKFX_B outputs.
>
> Leave the CLKFB input not connected.
>
> This instantiates ONLY the DFS element.
>
> Now it works from 9 MHz up, and the CLKFX outputs are 2X the input.

Thanks. We just need a clock, with no phase requirements, so we'll do as you
suggest. The CPU clock might run from 16 to maybe 25 MHz, so this will work
nicely.

John

Article: 119616
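For reference, a hedged sketch of the DFS-only hookup Austin describes. The
port and generic names follow the Spartan-3 DCM primitive; the clock and
reset signal names here are assumptions, and CLKIN_PERIOD is set for a
nominal 20 MHz input.

    // DFS-only clock doubler: only CLKFX/CLKFX180 are used and CLKFB is
    // left unconnected, so just the frequency synthesizer is instantiated.
    module clk_doubler (
        input  wire clk_in,      // 16-25 MHz nominal
        input  wire dcm_rst,
        output wire clk_2x,      // 2x the input frequency
        output wire dcm_locked
    );
        DCM #(
            .CLKFX_MULTIPLY (2),
            .CLKFX_DIVIDE   (1),
            .CLK_FEEDBACK   ("NONE"),   // no feedback -> DFS-only mode
            .CLKIN_PERIOD   (50.0)      // 20 MHz nominal; used for jitter estimates
        ) dcm_dfs (
            .CLKIN    (clk_in),
            .CLKFB    (),               // left unconnected per the DFS-only recipe
            .RST      (dcm_rst),
            .CLKFX    (clk_2x),
            .CLKFX180 (),
            .LOCKED   (dcm_locked),
            // DLL outputs unused in this mode
            .CLK0(), .CLK90(), .CLK180(), .CLK270(),
            .CLK2X(), .CLK2X180(), .CLKDV(), .PSDONE(), .STATUS(),
            .PSEN     (1'b0),
            .PSINCDEC (1'b0),
            .PSCLK    (1'b0),
            .DSSEN    (1'b0)
        );
    endmodule

The CLKFX output would normally still go through a BUFG before driving
internal logic; as Austin notes, there is no guaranteed phase relationship
between CLKIN and CLKFX in this mode.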
> "NA" <madid87-MAKNI-@yahoo.com> wrote in message > news:op.tsslp11d3ik8el@soba... > Hello all. > I'm very new to VHDL and stuck with a simple task. > This code should convert binary number to BCD number using shift and add 3 > algoritam > http://www.engr.udayton.edu/faculty/jloomis/ece314/notes/devices/binary_to_BCD/bin_to_BCD.html > After this code executes I always get "0000" in digit and unit > You don't have to analyze the whole code, > just tell me what is generally wrong. Thank you people! Please do your own homework, thank you.Article: 119617
himassk wrote:
> Hi,
>
> Please suggest how to transfer a single-clock-wide pulse from a high
> frequency clock domain and create a single-clock-wide pulse in a slow
> clock domain. What are the different methods available?
>
> Thanks in advance.
>
> Regards,
> Himassk.

Use the pulse in one domain to toggle a semaphore each time the pulse occurs.
Synchronize that semaphore across the clock domain boundary, then
synchronously edge detect it in the second domain.

    process(clk1)
    begin
      if rising_edge(clk1) then
        if pulse1 = '1' then
          toggle <= not toggle;
        end if;
      end if;
    end process;

    process(clk2)
    begin
      if rising_edge(clk2) then
        toggle_sync1 <= toggle;
        toggle_sync  <= toggle_sync1;
        toggle_z     <= toggle_sync;
        pulse2       <= toggle_z xor toggle_sync;
      end if;
    end process;

This makes it reliable regardless of the relative clock frequencies, provided
the frequency of pulse occurrence is less than half the clk2 frequency.

Article: 119618
Pasacco wrote:
> Hi,
>
> I need to implement a 33-bit data BRAM. Obviously, the following BRAM will
> not work. Does anyone suggest how to implement 33-bit data, if possible?
> Thank you in advance.

It depends on how deep you need the memory to be. Size the memory primitives
to match your depth requirement, then instance as many as are needed for your
data width. If, for example, you need 512 or fewer words, then a single
512x36 BRAM does the trick. If you need 16K depth, then you need to use 33
1x16K BRAMs.

Article: 119619
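As a concrete illustration of the shallow case, here is a minimal sketch
(names assumed) of a 512-deep, 33-bit-wide RAM that the tools should be able
to place in a single 512x36 block RAM; for deeper memories the same
description simply spreads across more primitives.

    module ram512x33 (
        input              clk,
        input              we,
        input       [8:0]  addr,    // 512 words
        input       [32:0] din,     // 33-bit data
        output reg  [32:0] dout
    );
        reg [32:0] mem [0:511];

        always @(posedge clk) begin
            if (we)
                mem[addr] <= din;
            dout <= mem[addr];       // synchronous read, maps onto block RAM
        end
    endmodule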
Subin, you're thinking out loud now -- keep it up, and never let it down;
only that way can you learn new things.

About your doubt: what you are doing in the algorithm is the same to us
humans, but not to the machine. Your first two cases are at least comparable,
and we need to think about why they differed. As John suggested, it may be
due to the blocking assignments. In the first case the synthesizer can view
it as four adds and can pump all the available optimization into that. In the
second case we are forcing the synthesizer to look at it as three separate
add operations, and at that point it may not be able to apply all the
optimization techniques. As John suggested, try the non-blocking assignments;
that may free the synthesizer to look at the problem as a single operation.

But the third one is simply a different thing. It is actually a decoder: it
decodes the {in4,in2,in1} variable, then some logic forms the operand to add
to in3, plus of course an adder to add that variable to in3. I think you will
get the same result with the following code:

    case ({in4, in2, in1})
        // op = value based on the different cases
        ...
    endcase
    result = in3 = op;

Article: 119620
Hi to all,

I am currently working on a DDR2 controller for burst length 8. My own code
gives good results when I verify it with the memory model from Micron. Now my
problem is that the memory on the board is not sending 4 DQS clock pulses; it
seems to be sending them for burst length 4.

Some results I observed on the C.R.O.:

If I don't initialize properly, it is not responding at all -- no DQS,
nothing is coming. If I initialize it properly, it gives a DQS signal of two
sine-looking clock pulses. If I write 101010101... to each location, I get
two sine-looking clock pulses on the DQ pin while reading. If I write all
zeros, I get zeros on the DQ pin. So I think initialization is happening
properly, but some problem still persists somewhere.

If anybody has an idea, please help me with this.

With regards,
sudhakar

Article: 119621
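Since two DQS pulses per access is exactly what a burst-length-4 setting
produces, it is worth checking what the initialization sequence actually
loads into the DDR2 mode register. A hedged sketch of the relevant field
follows (bit positions follow the JEDEC DDR2 mode register layout; the other
fields shown are only example placeholders, so verify everything against the
Micron data sheet for the exact part):

    // Burst length lives in mode-register bits A2:A0, programmed with the
    // LOAD MODE command during initialization, not per read/write command.
    localparam BL4 = 3'b010;   // burst length 4
    localparam BL8 = 3'b011;   // burst length 8

    // Example mode-register word for BL8 with sequential bursts; the CAS
    // latency and write-recovery fields below are placeholders only.
    wire [12:0] mode_reg = {1'b0,     // A12  : power-down exit mode
                            3'b011,   // A11:9: write recovery (example)
                            1'b0,     // A8   : DLL reset
                            1'b0,     // A7   : normal operation
                            3'b100,   // A6:4 : CAS latency (example)
                            1'b0,     // A3   : sequential burst type
                            BL8};     // A2:0 : burst length = 8

If the controller issues reads expecting BL8 but the mode register was
programmed for BL4, the device returns only four data beats (two DQS pulses)
per read, which matches the symptom described.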
On Thu, 24 May 2007 02:38:49 +0200, Kryten
<kryten_droid_obfusticator@ntlworld.com> wrote:
> Please do your own homework, thank you.

This is not homework and I NEVER asked "you" to write this for me. All I
asked is what is wrong, like people ask in 90% of other posts. So your reply
is very rude, but you already know that.

Anyway, I found my problem. That piece of code was in a process sensitive to
a clock (50 MHz) and it would execute many times instead of once. If you sll
something "many" times you get zeros.

Article: 119622
http://www.heise.de/mobil/artikel/88916/2
http://www.heise.de/bilder/88916/2/1

Pretty interesting -- so funny it makes you want to be a child again ;)

OK, the price is not 100 but 150; I guess the manufacturer did not meet the
100 USD goal.

Antti

Article: 119623
On May 24, 1:46 am, vssumesh <vssumesh_a...@yahoo.com> wrote:
> ... i think u will get the same result with the following code
>
>     case ({in4, in2, in1})
>         op = value based on different cases
>         ...
>     endcase
>     result = in3 = op;

// typo? result = in3 + op;

It looks like Synplicity thinks methods 1 and 2 are identical. I would think
that a variation of method 2 should be able to get skinnied down a bit in the
area of XORCY, LUT1, .... I was wondering if the map routine during the
implementation phase would trim some of these out.

Interesting academic exercise. For practical purposes, I think John had it
about right for how to group things together. It would be interesting what
the resource usage of coregen components, structurally connected together as
stated by John, would be.

-Newman

Article: 119624
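To make the comparison concrete, here is a minimal sketch of the two coding
styles under discussion (widths and port names are assumed, since the
original post is not reproduced above): one flat four-input sum, and the same
arithmetic written as chained blocking adds.

    module sum4_flat (
        input             clk,
        input      [7:0]  in1, in2, in3, in4,
        output reg [9:0]  result
    );
        // One expression: the synthesizer sees a single 4-input add tree.
        always @(posedge clk)
            result <= in1 + in2 + in3 + in4;
    endmodule

    module sum4_chained (
        input             clk,
        input      [7:0]  in1, in2, in3, in4,
        output reg [9:0]  result
    );
        reg [9:0] t1, t2;   // intermediate sums, blocking-assigned
        // Same arithmetic, but described as three separate adds in order.
        always @(posedge clk) begin
            t1 = in1 + in2;
            t2 = t1 + in3;
            result <= t2 + in4;
        end
    endmodule

A good synthesizer should reduce both to the same carry-chain logic, which is
consistent with the observation above that Synplicity treats methods 1 and 2
as identical.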
Hi,

I need your help with some problems using the Xilinx EDK in simulation and
profiling. The system I have designed is composed of a MicroBlaze and a
customized OPB peripheral, built with the peripheral wizard in EDK 8.1. I
need to do simulation (behavioral at the moment) of the system with ModelSim
SE.

This peripheral (written in Verilog) is built as an overclocked dual-port RAM
as shown in Xapp228 to obtain a quad-port memory. It is implemented by some
parallel block RAMs (using the primitive ramb16_s1_s1). I would like to
initialize that memory with global variables defined in my program and placed
in that memory by means of an attribute and a custom linker script, which
defines the section DATAWELL in my memory. For example, in main.c:

    int vector[10] __attribute__((section("DATAWELL"))) = {...my data...};

This can be done for a block RAM IP predefined by EDK, instead of the memory
in my peripheral, because the script which initializes the memory recognises
it as well as the MicroBlaze data and instruction memory. Unfortunately it
does not recognise my memory for initialization. In fact, the wrapper
"system_init.vhd", which EDK generates for behavioral simulation, contains
the instruction and data memory initialization (and that of block RAM IP
predefined by EDK, if present), but not my peripheral memory data. What can I
do to initialize that memory in simulation?

On profiling my program by means of XMD and the ISS (commands: connect mb
sim, etc.), I have noticed that the main has not been executed, even though
it has been executed in every behavioral simulation with ModelSim SE I have
done. The commands I typed are:

    connect mb sim
    dow <elf_path>
    profile + other statistics-gathering commands
    stop

This is with the Debug configuration, as recommended in the SDK help chapter
about profiling. The profiling results cover only the following MicroBlaze
initialization routines:

    _crtinit
    _start1
    _frame_dummy
    _program_init

None of the functions defined in the main have been executed, and no error
has been reported. The main program contains only dummy functions (such as
"int add(int a, int b);") in order to try profiling. In the future I will
need to profile a complex program interacting with my OPB peripheral by means
of "xio.h" macros.

Can anyone give me a solution?

Thank you in advance,
Giovanni