Hello everybody, I am looking for SDH/SONET simulation & certification libraries, to be used in either a VHDL, Verilog or C simulation environment (e.g. ModelSim or similar). I already know and currently use Synopsys Telecom Workbench, which is rather over-dimensioned for my test purposes (at STM-1 level) and very, very, very expensive. Can anyone suggest a 3rd-party simulation toolset? Thanks a lot, best regards. DO NOT SPAM WITH JUNK, PLEASE.
Article: 58651
Peter, I've tried but it fails. I use Synplify for synthesis. When I remove some peripherals by changing the yes/no table, after the design is downloaded to the ML300 the "OPB ERROR" LED indicates that there is an error. Despite this, the "hello_tft" program runs OK, but not Linux (the corresponding peripheral support is removed from the kernel). Even when no program is downloaded for running, the "OPB ERROR" LED also indicates an error. After changing the yes/no table, is there anything else I need to pay attention to? Thanks in advance !! tk "Peter Ryser" <ryserp@xilinx.com> wrote in message news:3F27650B.57F7FAB4@xilinx.com... > Tk, > > removing peripherals in V2PDK is as simple as modifying flow.cfg and > changing the yes/no table at the end of the file. Of course, if > you remove PCI from the hardware you will also have to remove PCI support > from the Linux kernel. > > - Peter > > > tk wrote: > > > Hi, > > > > I'm now finding the way to contact "my FAE" ..... > > > > I really hope that Peter can send me a copy of it ... > > bcoz I've spent one week modifying the reference > > design (just wanna remove the AC97 and PCI) > > but failed .... the Linux can't boot ...... > > > > tk > > > > "Antti Lukats" <antti@case2000.com> wrote in message > > news:80a3aea5.0307272123.2567fa16@posting.google.com... > > > hln01@uow.edu.au (Lan Nguyen) wrote in message > > news:<70360b52.0307271802.7509c4db@posting.google.com>... > > > > Hi Peter, > > > > > > > > I've got the Developer's Kit V2PDK VP4. I wanted to run the reference > > > > designs and test the results via the serial port. I tried and got > > > > nothing in the HyperTerminal. > > > > > > > > Does XST work for the synthesis ? If so, what modifications do I have > > > > to make ? 
> > > > > > > > (I was told that the only way is to get the Synplify synthesis tool) > > > Hi Lan, > > > > > > yes and no - > > > V2PDK was targeted for Synplify synthesis, but you can use portions > > > of the reference platforms also with XST synthesis > > > > > > --- flow.cfg --------- > > > # Synthesis (Synplify) > > > #SYN_TOOL = synplify > > > #SYN_CMD = synplify # synplify / synplify_pro > > > # add -batch to SYN_OPT to invoke synplify in batch mode (non-GUI) > > > #SYN_OPT = # inferred (synplify): <none> > > > > > > # Synthesis (XST) > > > SYN_TOOL = xst > > > SYN_CMD = xst > > > SYN_OPT = -hierarchy_separator / -keep_hierarchy Yes -opt_level 2 > > > -opt_mode area -iobuf no # inferred (xst): <none> > > > > > > ----- end cut --- > > > above is the modified flow.cfg > > > > > > > > > notice: you can use XST for the simple vhdl design only; for the > > > embedded_vhdl it will not work, only the verilog version works, and there > > > you need to disable most of the peripherals (at the end of flow.cfg) > > > to get it to fit into a VP7 > > > > > > antti > > > PS has anybody seen the ref. design Peter has been talking about? >
Article: 58652
"Lorenzo Lutti" <lorenzo.lutti@DOHtiscalinet.it> wrote in message news:%eCVa.212902$lK4.6197687@twister1.libero.it... > Do you know some books more focused to a specific vendor (i.e. Xilinx)? > I have a couple of "generic" VHDL books, but often I have to spend a lot > of time seeking some "vendor dependent" informations here and there on > the Internet. A comprehensive guide would be very useful. I don't know of anything that's reasonably up-to-date, and in general it's hard to see how the book writing and publishing process could keep up with the pace of FPGA development. In any case, I have a small duty to my employer to point out that our Xilinx TechClass is intended to fill exactly that hole :-) http://www.doulos.com/frxtc.html has the details. > One drawback of VHDL compared to "pure software" languages is the lack > of multi-line comments. Maybe the inventor of VHDL thought that one line > would have been just enough for everything. :) Grin. Find yourself a civilised editor with a block/column select and fill facility. On Windoze we generally use TextPad, which is pretty good and can be had in a free eval version if you don't mind the nag screens: www.textpad.com Or NEdit on Unix/Linux. cheers -- Jonathan Bromley, Consultant DOULOS - Developing Design Know-how VHDL * Verilog * SystemC * Perl * Tcl/Tk * Verification * Project Services Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24 1AW, UK Tel: +44 (0)1425 471223 mail: jonathan.bromley@doulos.com Fax: +44 (0)1425 471573 Web: http://www.doulos.com The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.
Article: 58653
In the archive of this NG, some articles said Xilinx has been recommending against use of the global reset logic. But I never found any Xilinx application note about this issue. Could you please tell me where the related statement or document is? "Peter Alfke" <peter@xilinx.com> news:3F215808.BE8CF213@xilinx.com... : Global reset is obviously a good thing, but it also has a problem: : What happens after the trailing edge of global reset? : Can you take away the global reset "simultaneously" from all affected : circuits, and if you cannot, what happens when one circuit stays in : reset a few clock ticks longer than the others ? : All Xilinx FPGAs have a built-in global reset that gets released at the : end of configuration, and that release can also automatically be : internally synchronized with the user clock. In spite of this, at high : clock rates some state machines may need individually resynchronized : reset. This has been discussed in many threads in this ng. : : Peter Alfke, Xilinx Applications
Article: 58654
vchen@uiuc.edu wrote: > Hi. I am new to VHDL and FPGA. May I ask what is the difference between > behavioral and other simulation? What would have caused the result in > behavioral simulation to be different from other simulations? > (That is the problem I am having now. One of the messages is > WARNING:NetListWriters:431 - Design does not contain hierarchical blocks > with KEEP_HIERARCHY property. Hierarchy will not be retained. ) > > Which simulation would approximate most closely the behavior when the > code is loaded into the FPGA? I am using Xilinx Webpack 5.2 03i. > Thank you =) Behavioral simulation is the simulation of the pure VHDL code. That is, when you simulate your design in that way and it works well, it is a good sign. But it does not mean that it will work in reality. "Other" simulation (often called back-annotated simulation) is the simulation of the synthesized design. In comparison to the behavioral simulation this includes: - timing information, i.e. how long the delays through logic elements, delays through connections, etc. are - bugs of your synthesis tool ;-) That is no joke! - badly described VHDL constructs that might work for behavioral simulation but that are not synthesized in the way you might expect - ... (very likely I forgot something, others will extend the list ;-) As you see, it is the back-annotated simulation that is the closest-to-hardware one. Regards, Mario
Article: 58655
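A minimal sketch of Mario's third point - a construct that passes behavioral simulation but behaves differently in the synthesized design (entity and signal names here are made up for illustration):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity mismatch is
  port (a, b : in std_logic; y : out std_logic);
end mismatch;

architecture rtl of mismatch is
begin
  -- 'b' is missing from the sensitivity list:
  process (a)
  begin
    y <= a and b;   -- RTL simulation only re-evaluates this when 'a' changes
  end process;
  -- Synthesis still builds a plain AND gate that also reacts to 'b', so
  -- back-annotated simulation (and the real FPGA) will disagree with the
  -- behavioral simulation whenever 'b' changes while 'a' is stable.
end rtl;
```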
Dirk Dörr wrote: > Hi! > > How do you do the readout in Linux? Are you using a device driver or plain > inp(0x37A)? With the latter you will not get more than 500 KByte/s (at > least, that's what I experienced). You can easily speed this up by reading the EPP Address Port (0x37B) and the EPP Data Port (0x37C). Laurent Gauch www.amontec.com
Article: 58656
Peter Ryser <ryserp@xilinx.com> wrote in message news:<3F276600.BB66B635@xilinx.com>... > Antti, > > Ethernet should work, even with XST. > Did you see any problems with having Ethernet enabled? > > - Peter sorry, I don't always say things very clearly. I think most of V2PDK embedded_verilog (definitely Ethernet) should work with XST Verilog (I had intended to say something else). I had to remove Ethernet as it took too many resources and the design did not fit (not because it wouldn't work). Also, I do have a fully working Ethernet-enabled EDK reference design (VHDL). antti
Article: 58657
Six months after I said "it's nearly done", I've now finished an alpha-test version of the synthesisable VHDL fixed-point package I promised. Doesn't time fly when you're enjoying yourself? :-) I've published it on our corporate website but, as yet, there are no links to it from elsewhere on the site. So you need to go straight to it using this link: http://www.doulos.co.uk/knowhow/vhdl_models/fp_arith/ At present, all you get is a VHDL package and package body, and a PDF doc describing it. Any feedback is welcome. Please note that it is very much in an experimental state at present. It is in desperate need of example designs and a validation test suite; contributions towards either will be gratefully received, and acknowledged in future releases. I would be especially grateful for any indication of whether it's aiming in the right direction, and how it could be enhanced or made more useful. My employers are not responsible for any part of the package. Any comments on it should come directly to me at the address given below. -- Jonathan Bromley, Consultant DOULOS - Developing Design Know-how VHDL * Verilog * SystemC * Perl * Tcl/Tk * Verification * Project Services Doulos Ltd. Church Hatch, 22 Market Place, Ringwood, Hampshire, BH24 1AW, UK Tel: +44 (0)1425 471223 mail: jonathan.bromley@doulos.com Fax: +44 (0)1425 471573 Web: http://www.doulos.com The contents of this message may contain personal views which are not the views of Doulos Ltd., unless specifically stated.Article: 58658
John, Hyperlynx handles connectors, daughter cards, etc. It just gets complex, which is not so much of a problem, as it is reality. One good reason to put SDRAM close to the FPGA is so you can cheat. By that, I mean that many people "discover" that they do not need all of the SSTL or HSTL parallel terminations on really short runs, so they simulate it (actually, they just removed the resistors, first) and found that it worked just fine. Not recommended (obviously), but professional drivers on closed courses can do fancy tricks and get away with it. So it isn't to specification, but it had all of the necessary SI issues well in hand. So 'close is nice' (because you can simplify matters), but 'far' works too, you just have to follow all of the "rules." Austin John Williams wrote: > Hi Austin, > > Austin Lesea wrote: > > I recommend you get Mentor's Hyperlynx IBIS simulation tools. These are some of the best value/$ tools that are out there. > > Thanks, I'll check it out. The uni may already have a license - that > would be nice! > > > After you finish the layout, you can extract the pcb parameters, and model the actual traces, and see if that works, and > > make the necessary changes to make it work well before the board is fabricated. > > Do you know of any packages that can do SI modelling across connector > structures, for example modelling mezzanine / daughter board > architectures? > > Here's why I ask: we're considering stackable memory "modules" for the > platform we're developing (think PC104+ for FPGAs and you're part way > there), but I've been warned off trying to put SDRAM (pref. DDR) > anywhere except on the main board nice and close to the FPGA. > > Obviously this problem can be solved - case in point being commodity DDR > SDRAM modules in desktop PCs. But whether mere mortals can do it - > that's what I need to know before we get too far into the design. > > Cheers, > > JohnArticle: 58659
Hello, I am designing a circuit for an Apex20ke FPGA with Synopsys v2001.8 and Quartus2. After compilation I want to simulate the design in ModelSim 5.5e. Therefore I saved the current design using: write -format v -hierarchy -output top.v write_sdf -version 2.1 top.sdf In ModelSim I could compile fine the verilog netlist but could not load it due to an enormous number of errors such as: # Region: :top:I0:I1 # Searched libraries: # work # ERROR: /pe_users/nagel/vhdl/smartcam/synopsys_linux/top.v(40686): Instantiation of 'ATBL_1' failed (design unit not found). Now I included a number of libraries from Altera in my work library: lpm altera_mf apex20ke But these ATBL_* - that are in the apex20ke-3.db library (as reported by report_library) - are still not found... Any help would be highly appreciated ! Thank you, Jean-LucArticle: 58660
Hello everyone, I've racked my brain over this problem for a day and I realize I'm missing something trivial, I'm hoping someone here will be kind enough to help... I am designing a Digital-PLL and I'm trying to figure out what DAC resolution I need to drive my VCO given a .1 degree phase accuracy requirement (that is, reference and output need to be within .1 degrees of each other). The DPLL output operates over a range of 1 Hz to 50 Hz (not too tough), and my VCO gain is 11.8333 Hz / Volt. My phase detector is the standard two-DFF type and can detect a minimum time difference of 20 ns between the two waveforms. DAC output (which controls the VCO) range is 0 - 5V. My problem is that I'm having trouble relating my phase requirement to DAC voltage step-size (number of bits). I realize that .1 deg phase accuracy means at 50 Hz my waveforms will need to be no more than approx. 5.5 us off from each other. I calculate this by doing (.1 deg / 360 deg) * period_of_waveform = delta_time, since my system will go through 360 degrees in one period of 50 Hz. I also realize that dPhase/dt = Frequency, so phase = Integral(Frequency)*dt, but I'm still having trouble relating the phase requirement to the VCO gain factor (which is in terms of Hz/Volt) and the DAC output (which is in volts). I would appreciate any suggestions or insights. As usual, thanks in advance! =) -- Jay.
Article: 58661
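One way to close the loop between the units - a sketch, assuming the DAC is corrected once per reference period and the worst case is therefore the lowest output frequency, where a given frequency error has the longest time to accumulate phase (loop-dynamics refinements such as the half-LSB residual are left out for simplicity):

```
phase drift per output period:  dphi = 360 * (dF / F) degrees
worst case at F = 1 Hz:         dF_max = (0.1 / 360) * 1 Hz  ~= 2.78e-4 Hz
DAC step producing dF_max:      V_lsb = dF_max / Kvco
                                      = 2.78e-4 / 11.8333    ~= 23.5 uV
steps over the 0 - 5 V range:   5 / 23.5e-6                  ~= 213,000
required resolution:            N = ceil(log2(213,000))      = 18 bits
```

At 50 Hz the same bound gives only about 13 bits, so the 1 Hz end of the range dominates; correcting the DAC more often than once per period would relax the requirement proportionally.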
Jon, attached find a work around for the TLB errata for Linux. Instructions on how to apply the patch are here: ------ For embedded Linux a patch is available that works around the errata described in answer record 14052 (http://support.xilinx.com/xlnx/xil_ans_display.jsp?iLanguageID=1&iCountryID=1&getPagePath=14052). The patch applies cleanly against the MontaVista Professional Linux 3.0 kernel for ML300. To apply the patch follow these steps: 1. change into the root directory of your Linux kernel 2. dry-run to apply the patch by executing the command: $ gunzip -c tlb_errata.patch.gz | patch --dry-run -p0 -F0 You will see the following output: patching file arch/ppc/kernel/head_4xx.S patching file arch/ppc/kernel/misc.S 3. If the dry-run was successful apply the patch $ gunzip -c tlb_errata.patch.gz | patch -p0 -F0 4. recompile the Linux kernel ------------ Rev 1 and 2 of the 2VPx FG456 Insight boards had a problem with the byte enable signals for the SDRAM. AFAIK, the problem was fixed for Rev 3. The work-around for Rev 1 and 2 boards is to write a small boot loader that turns on the caches before jumping to the Linux start address (0x400000). After that the Linux kernel will boot and run fine on these boards. You should not see any machine check exceptions. Please be aware that the 405 remembers bus errors, ie. once the bus error signal has been asserted it will result in a machine check exception as soon as the corresponding bit in the MSR is turned on. There is no way around this. This means that all code needs to be clean and not make references to non-existent memory and/or peripherals. - Peter Jon Masters wrote: > Hi, > > The chip revision is 20010820 which I also recently discovered has a > nice list of errata. I especially like CPU_210 and the uTLB one. > > CPU_210 issue rules out a lot of standard PowerPC atomic operation > sequences accessing system memory while the latter affects TLBs. > > Jon.Article: 58662
Jean-Luc wrote: > Hello, > > I am designing a circuit for an Apex20ke FPGA with Synopsys v2001.8 > and Quartus2. > After compilation I want to simulate the design in ModelSim 5.5e. > Therefore I saved the current design using: > > write -format v -hierarchy -output top.v > write_sdf -version 2.1 top.sdf Consider a functional sim of your source code before synthesis. -- Mike TreselerArticle: 58663
> I am currently trying to perform a readout from FPGA to a LINUX PC using > parallel port. I have implemented the state-machine for EPP communication > in the FPGA and it works well however the system is slow. > > I think this is because EPP devices are supposed to negotiate the best > available transfer mode during initialization but the FPGA is not > currently setup to do that. As a result, I had to fall back on software > emulation of the data transfer handshaking. > > I was wondering if anyone has experience performing readout from FPGA > using EPP. My aim is to get 1MByte/sec communication. Any help would be > appreciated. > I had the same experience: the PC parallel port HW wouldn't switch into EPP mode without the proper negotiation. You will stay in SPP mode on the SW side, though you can emulate the EPP protocol from SW. And yes, it's slow. Andras Tantos
Article: 58664
> I am trying to get the Leonardo synthesis tool to use the available > block ram within the FPGA. I have written a dual port RAM in VHDL and > I am trying to synthesize on a Virtex II pro FPGA, which contains > loads of available block RAM. > > My design synthesizes fine, but does not make use of the available > block RAM. > only LCs. Hi! I don't know about Leonardo and Synplify, but Xilinx XST recognizes *single port* RAMs OK. Dual port RAMs are a no-go in that tool. Also: if you use async read, the tool will infer distributed RAMs, since the block RAMs in Xilinx devices cannot support that. If you use sync reads and writes, the tool (again, XST at least) properly recognizes and uses a block RAM. Andras Tantos
Article: 58665
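For what it's worth, here is the kind of single-port, synchronous-read template that XST's inference is known to accept (a sketch; the widths and names are arbitrary, and Leonardo may want a slightly different idiom or a pragma - check its own inference documentation):

```vhdl
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram_sp is
  port (
    clk  : in  std_logic;
    we   : in  std_logic;
    addr : in  std_logic_vector(9 downto 0);
    din  : in  std_logic_vector(7 downto 0);
    dout : out std_logic_vector(7 downto 0));
end ram_sp;

architecture rtl of ram_sp is
  type ram_t is array (0 to 1023) of std_logic_vector(7 downto 0);
  signal ram : ram_t;
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if we = '1' then
        ram(to_integer(unsigned(addr))) <= din;
      end if;
      -- registered (synchronous) read: eligible for block RAM;
      -- an unregistered read outside the clocked process would
      -- force distributed RAM instead
      dout <= ram(to_integer(unsigned(addr)));
    end if;
  end process;
end rtl;
```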
Jay <se10110@yahoo.com> wrote in message news:MPG.198be88c83c8b6aa9896b4@news.surfcity.net... > Ok, > I have what might be a simple question for a circuit and a design I came > up with. I just want to know if I could have made life easier... > > I have a signal (call it "go_high") that's synchronous to my system > clock (output is from a DFF) that goes high and stays high for at least > 2 clock cycles, but up to 200. > > I wanted to make a circuit that detects an edge on "go_high" and gives > an output for no more than 2 clock cycles. As Paul says below, it's not a good idea to generate clocks, the tools don't like it and it can give you glitches. Number 1 rule of FPGA design - clock everything off one clock if possible. The following does what you want in VHDL.

signal go_high_d  : std_logic;
signal go_high_d2 : std_logic;
signal edge_event : std_logic;
:
process(clk, rst)
begin
  if (rst = '1') then
    go_high_d  <= '0';
    go_high_d2 <= '0';
  elsif (rising_edge(clk)) then
    go_high_d  <= go_high;
    go_high_d2 <= go_high_d;
    edge_event <= go_high xor go_high_d2;
  end if;
end process;

NOTE: This might give an edge_event after reset depending on the initial state of go_high, and if go_high is active for only two clock cycles then changes again you'll get one 4-clock-cycle-wide edge_event signal out. Hope this helps, Nial ------------------------------------------------ Nial Stewart Developments Ltd FPGA and High Speed Digital Design www.nialstewartdevelopments.co.uk
Article: 58666
Hi fellows, I have a main architecture that consists of different components. All these components are defined in different *.vhd files; I combine all these VHDL files into one *.vhd file, download it into the chip, and it works fine according to the logic. Let's call this main block BLOCK1. Now I need 6 of these blocks on the same chip (I am using an XCV600 device), and I need to interconnect different signals defined in the block - say, for example, I need to loop all of them together. How can I accomplish this task? I want only one *.vhd file so that it would be easy to download one *.rbt file into the chip at one time. 1. Do I have to give every BLOCK different input/output signal names, and then use component declarations and port maps in a top-level *.vhd file to accomplish the task? That way all these *.vhd files would be in one *.vhd file. For example, in one BLOCK I have 4 entity/arch pairs for 4 components and 1 main entity/arch where the port mapping and component declarations are defined, so I have 5 entity/arch pairs in one BLOCK, all in one *.vhd file. So for 6 of these BLOCKs I would have 30 entity/arch pairs + 1 main entity/arch in which all of these components (6, one per BLOCK) are declared along with the port mapping - 31 entity/arch pairs in one *.vhd file in all. Do you think this is correct, or is there a better way to do this? 2. How can I use a makefile to compile 6 different *.vhd files (one per block) and 1 main *.vhd file (for the component declarations and port mapping) to generate one *.rbt file? I am worried because I also have to use 12 BLOCKs of different logic design, each having 12 entity/arch pairs (145 entity/arch pairs in all), and interconnect all these 12 blocks with the 6 BLOCKs mentioned above. Cheers gurus, any help would be appreciated. Isaac
Article: 58667
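On question 1: yes, component declaration plus instantiation in a top-level file is the standard way, but you do not need uniquely named copies of the logic - one BLOCK1 entity can be instantiated six times, and a generate loop handles regular interconnect such as the loop described above. A sketch with made-up port names:

```vhdl
library ieee;
use ieee.std_logic_1164.all;

entity top is
  port (
    clk   : in  std_logic;
    d_in  : in  std_logic;    -- feeds the first block in the chain
    d_out : out std_logic);
end top;

architecture structural of top is
  -- BLOCK1's entity stays in its own .vhd file; only this component
  -- declaration is repeated here
  component block1
    port (clk : in std_logic; d : in std_logic; q : out std_logic);
  end component;
  signal link : std_logic_vector(0 to 6);
begin
  link(0) <= d_in;
  gen : for i in 0 to 5 generate
    u : block1 port map (clk => clk, d => link(i), q => link(i + 1));
  end generate;
  d_out <= link(6);
end structural;
```

On question 2: there is no need to merge everything into one *.vhd file to get one *.rbt - add all the source files to the synthesis project (or list them for the command-line tools in a makefile) and the flow still produces a single bitstream.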
Does anybody have a schematic of the ALTERA ByteBlaster II? Or can anybody send me a link to this schematic? Thanks in advance Lev
Article: 58668
Thanx for the explanation. It was really helpful. > The fact that it is not zero is (almost certainly) correct. In order to have > more manageable I/O timing in the default situation, Xilinx has tweaked the > performance of the DCM to pull the clock back slightly - this has the effect > of increasing the setup time requirement to the IOB flop (Tpsdcm), and > reducing the hold time (Tphdcm), making it negative for the Virtex family. > Negative hold times make syncronous interfaces "easier" to implement. This is the number I am interested in. How do I know (without running TRCE) as to what the actual DCM delay is going to be ? I did Place and Route of the design ( the previous timing report is for a mapped design ) and this is the report for the same path Clock Path: clk_PN to t2 Location Delay type Delay(ns) Physical Resource Logical Resource(s) ------------------------------------------------- ------------------- A6.I Tiopi 0.653 clk_PN clk_PN d1_u1 DCM_X1Y1.CLKIN net (fanout=1) 0.630 d1_clk_int DCM_X1Y1.CLK0 Tdcmino -3.307 d1_u2 d1_u2 BUFGMUX1P.I0 net (fanout=1) 0.674 d1_clk_dcm BUFGMUX1P.O Tgi0o 0.465 d1_u3 d1_u3.GCLKMUX d1_u3 SLICE_X6Y14.CLK net (fanout=3) 0.507 c1 ------------------------------------------------- --------------------------- Total -0.378ns (-2.189ns logic, 1.811ns route) -------------------------------------------------------------------------------- I am trying to run PrimeTime timing analyzer on this design and for that I need to model the DCM -ve delay accurately. If I don't do that, my slack numbers get messed up. I have all the numbers ( I think ) that should be able to give me Tdcmino but even then I am off by some amount. > To run an accurate, full timing simulation, this delay must be annotated > onto the DCM as a delay path from the CLKIN to the CLK0. 
Since simulators > cannot deal with negative propagation delays (although static timing > analysis tools can), you might have to add a full clock period to the delay > to be annotated - so at 100MHz (10ns), you would annotate 9.287 onto this > path. If you get this value wrong, then all the IOB timing will be incorrect > in your simulation. > > As for calculating it in advance, it should always be almost exactly the > same value for a given process/temperature/voltage (PVT).. The actual value > will depend on the delay through the IBUFG (which is a constant at a given > PVT), the delay through the dedicated routing to the CLKIN of the DCM (the > path from every IBUFG to all reachable DCMs is balanced, and hence should be > a constant), the delay through the BUFG, and the delay through the dedicated > routing back to the CLKFB, which are also constants at a given PVT. Since > Xilinx has set the DCM to result in a -0.713ns effective delay, and all the > components are constant, the Tdcmino should also be a constant for a given > PVT when CLKOUT_PHASE_SHIFT=NONE. If CLKOUT_PHASE_SHIFT is not NONE, then > the phase delay of the DCM will be added to Tdcmino (CLKOUT_PHASE*Tper/256). > > Avrum > > > "Ab Ran" <aran_jan@yahoo.com> wrote in message > news:b7c69989.0307291801.5724477b@posting.google.com... > > Hi, > > > > I was wondering if someone could help me with -ve delay number > > associated with a DCM in the TRCE report. How is this number > > calculated ? If I try to do a timing simulation of this > > design, how can I calculate this delay before-hand ? > > > > Thanx in advance. > > > > ---- Ab. > > > > > > I am attaching a sample path report below. 
> > > > -------------------------------------------------------------------------- > > > > Clock Path: clk_PN to t2 > > Location Delay type Delay(ns) Physical > > Resource > > Logical > > Resource(s) > > ------------------------------------------------- > > ------------------- > > IOB.I Tiopi 0.653 clk_PN > > clk_PN > > d1_u1 > > DCM.CLKIN net (fanout=1) e 0.100 d1_clk_int > > DCM.CLK0 Tdcmino -2.131 d1_u2 > > d1_u2 > > BUFGMUX.I0 net (fanout=1) e 0.100 d1_clk_dcm > > BUFGMUX.O Tgi0o 0.465 d1_u3 > > d1_u3.GCLKMUX > > d1_u3 > > SLICE.CLK net (fanout=3) e 0.100 c1 > > ------------------------------------------------- > > --------------------------- > > Total -0.713ns (-1.013ns > > logic, 0.300ns route) > > > > -------------------------------------------------------------------------- > ------Article: 58669
"Ab Ran" <aran_jan@yahoo.com> wrote in message news:b7c69989.0307300839.237294a4@posting.google.com... > Thanx for the explanation. It was really helpful. > > > The fact that it is not zero is (almost certainly) correct. In order to have > > more manageable I/O timing in the default situation, Xilinx has tweaked the > > performance of the DCM to pull the clock back slightly - this has the effect > > of increasing the setup time requirement to the IOB flop (Tpsdcm), and > > reducing the hold time (Tphdcm), making it negative for the Virtex family. > > Negative hold times make syncronous interfaces "easier" to implement. > > This is the number I am interested in. How do I know (without running > TRCE) as to what the actual DCM delay is going to be ? I did Place and > Route of the design ( the previous timing report is for a mapped > design ) > and this is the report for the same path > > Clock Path: clk_PN to t2 > Location Delay type Delay(ns) Physical > Resource > Logical > Resource(s) > ------------------------------------------------- > ------------------- > A6.I Tiopi 0.653 clk_PN > clk_PN > d1_u1 > DCM_X1Y1.CLKIN net (fanout=1) 0.630 d1_clk_int > DCM_X1Y1.CLK0 Tdcmino -3.307 d1_u2 > d1_u2 > BUFGMUX1P.I0 net (fanout=1) 0.674 d1_clk_dcm > BUFGMUX1P.O Tgi0o 0.465 d1_u3 > d1_u3.GCLKMUX > d1_u3 > SLICE_X6Y14.CLK net (fanout=3) 0.507 c1 > ------------------------------------------------- > --------------------------- > Total -0.378ns (-2.189ns > logic, 1.811ns route) > > -------------------------------------------------------------------------- ------ > > I am trying to run PrimeTime timing analyzer on this design and > for that I need to model the DCM -ve delay accurately. If I don't > do that, my slack numbers get messed up. I have all the numbers > ( I think ) that should be able to give me Tdcmino but even then > I am off by some amount. I have not worked with the Xilinx->Primetime link. 
However, I assume the flow is to load both a netlist and an standard delay format file (SDF) into Primetime. The netlist describes the circuit topology to primetime, and the SDF describes the timing. In order for the sdf to be meaningful, it must be generated from a post P&R ncd file. That SDF file should contain ALL the annotated delays, including the annotation for Tdcmino. From PrimeTime's point of view, the DCM is simply a clock buffer, which should have an annotated delay from CLKIN->CLK0, which is the value of Tdcmino. The fact that it is negative will not be a problem for PrimeTime (at least I don't think so). If the sdf file from Xilinx does NOT include an annotation for Tdcmino, I would call that a bug. If (for some reason) Xilinx has chosen not to annotate Tdcmino, you will have to do it yourself. In that case (I hate to say this), your best bet is to run trce first to get Tdcmino, and then annotate it manually. I don't think Xilinx gives information on the Tdcmclkinoffset they talk about in the appnote. Avrum > > > To run an accurate, full timing simulation, this delay must be annotated > > onto the DCM as a delay path from the CLKIN to the CLK0. Since simulators > > cannot deal with negative propagation delays (although static timing > > analysis tools can), you might have to add a full clock period to the delay > > to be annotated - so at 100MHz (10ns), you would annotate 9.287 onto this > > path. If you get this value wrong, then all the IOB timing will be incorrect > > in your simulation. > > > > As for calculating it in advance, it should always be almost exactly the > > same value for a given process/temperature/voltage (PVT).. 
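If it does come to manual annotation, the entry would look roughly like the following SDF fragment for PrimeTime's consumption, using the instance name from the trace report above (a dynamic simulator would need the period-shifted positive value instead, and the exact CELLTYPE string and port names depend on what the Xilinx netlist writer emits, so treat this as a sketch):

```
(CELL
  (CELLTYPE "DCM")
  (INSTANCE d1_u2)
  (DELAY
    (ABSOLUTE
      (IOPATH CLKIN CLK0 (-3.307) (-3.307))
    )
  )
)
```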
The actual value > > will depend on the delay through the IBUFG (which is a constant at a given > > PVT), the delay through the dedicated routing to the CLKIN of the DCM (the > > path from every IBUFG to all reachable DCMs is balanced, and hence should be > > a constant), the delay through the BUFG, and the delay through the dedicated > > routing back to the CLKFB, which are also constants at a given PVT. Since > > Xilinx has set the DCM to result in a -0.713ns effective delay, and all the > > components are constant, the Tdcmino should also be a constant for a given > > PVT when CLKOUT_PHASE_SHIFT=NONE. If CLKOUT_PHASE_SHIFT is not NONE, then > > the phase delay of the DCM will be added to Tdcmino (CLKOUT_PHASE*Tper/256). > > > > Avrum > > > > > > "Ab Ran" <aran_jan@yahoo.com> wrote in message > > news:b7c69989.0307291801.5724477b@posting.google.com... > > > Hi, > > > > > > I was wondering if someone could help me with -ve delay number > > > associated with a DCM in the TRCE report. How is this number > > > calculated ? If I try to do a timing simulation of this > > > design, how can I calculate this delay before-hand ? > > > > > > Thanx in advance. > > > > > > ---- Ab. > > > > > > > > > I am attaching a sample path report below. 
> > > > > > -------------------------------------------------------------------------- > > > > > > Clock Path: clk_PN to t2 > > > Location Delay type Delay(ns) Physical > > > Resource > > > Logical > > > Resource(s) > > > ------------------------------------------------- > > > ------------------- > > > IOB.I Tiopi 0.653 clk_PN > > > clk_PN > > > d1_u1 > > > DCM.CLKIN net (fanout=1) e 0.100 d1_clk_int > > > DCM.CLK0 Tdcmino -2.131 d1_u2 > > > d1_u2 > > > BUFGMUX.I0 net (fanout=1) e 0.100 d1_clk_dcm > > > BUFGMUX.O Tgi0o 0.465 d1_u3 > > > d1_u3.GCLKMUX > > > d1_u3 > > > SLICE.CLK net (fanout=3) e 0.100 c1 > > > ------------------------------------------------- > > > --------------------------- > > > Total -0.713ns (-1.013ns > > > logic, 0.300ns route) > > > > > > -------------------------------------------------------------------------- > > ------Article: 58670
Jonathan Bromley wrote: > > "Lorenzo Lutti" <lorenzo.lutti@DOHtiscalinet.it> wrote in > message news:%eCVa.212902$lK4.6197687@twister1.libero.it... > > > Do you know some books more focused to a specific vendor (i.e. Xilinx)? > > I have a couple of "generic" VHDL books, but often I have to spend a lot > > of time seeking some "vendor dependent" informations here and there on > > the Internet. A comprehensive guide would be very useful. > > I don't know of anything that's reasonably up-to-date, and in > general it's hard to see how the book writing and publishing > process could keep up with the pace of FPGA development. > > In any case, I have a small duty to my employer to point out > that our Xilinx TechClass is intended to fill exactly that hole :-) > http://www.doulos.com/frxtc.html has the details. > > > One drawback of VHDL compared to "pure software" languages is the lack > > of multi-line comments. Maybe the inventor of VHDL thought that one line > > would have been just enough for everything. :) > > Grin. > > Find yourself a civilised editor with a block/column select and > fill facility. On Windoze we generally use TextPad, which is > pretty goodand can be had in a free eval version if you don't > mind the nag screens: > www.textpad.com > > Or NEdit on Unix/Linux. Just my two cents worth on the editor for comments. CodeWright is not free, and it is not even real cheap the last time I looked, but it is very good. It has programmable comment definitions for different file types. It also has a generic "Prompted Slide In" and out. This lets you insert or remove anything you want at the start of each line in a range. The only thing you need to specify is the text to insert or remove. But I agree that it is downright silly that they don't have block comments in VHDL. To quote Dr. Phil, "What were they thinking"? -- Rick "rickman" Collins rick.collins@XYarius.com Ignore the reply address. To email me use the above address with the XY removed. 
Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design
URL http://www.arius.com
4 King Ave                      301-682-7772 Voice
Frederick, MD 21701-3110        301-682-7666 FAX
Article: 58671
Hi Salvo,

The Nios GCC linker looks at the excalibur.h and excalibur.s files (located in your 'sdk/inc' directory) for the starting and ending addresses (and range) of data memory and program memory. These addresses are derived from the SOPC Builder GUI page where you specify which memory you want to use for data and program space for each CPU in the system.

I suggest you copy the entire SDK into a new folder (so that your changes won't be overwritten if you re-generate the SDK), and edit the starting and ending addresses of data and program memory as appropriate. For example, if you have an SRAM based at 0x800000 spanning to 0x900000, you could change the ending address in excalibur.[h & s] to 0x87FFFF. The span of the memory would then change from 0x100000 to 0x80000, thereby dividing the memory in half from the linker's perspective. This would allow you to use the remaining SRAM in any way you see fit (software or hardware) without worrying about a conflict.

After making this change, I suggest you re-compile your library (in sdk/lib, "make all"), and then re-compile any source code.

Jesse Kempa
Altera Corp.
jkempa at altera dot com

"SDL" <S.DeLuca@nospamUSA.NET> wrote in message news:<HvAVa.1210242$ZC.177011@news.easynews.com>...
> Hi,
> Can I decide the address of some variables in the SRAM of the Altera Cyclone
> Development board? In which way can I reserve an area for them, from my
> Nios C code?
> Thanks
> Salvo
Article: 58672
louis lin wrote:
>
> In the archive of this NG, some article said Xilinx has been recommending
> against use of the global reset logic. But I never found any application note
> from Xilinx about this issue.
> Could you please tell me where the related statement or document is?
>
> "Peter Alfke" <peter@xilinx.com> news:3F215808.BE8CF213@xilinx.com...
> : Global reset is obviously a good thing, but it also has a problem:
> : What happens after the trailing edge of global reset?
> : Can you take away the global reset "simultaneously" from all affected
> : circuits, and if you cannot, what happens when one circuit stays in
> : reset a few clock ticks longer than the others?
> : All Xilinx FPGAs have a built-in global reset that gets released at the
> : end of configuration, and that release can also automatically be
> : internally synchronized with the user clock. In spite of this, at high
> : clock rates some state machines may need an individually resynchronized
> : reset. This has been discussed in many threads in this ng.
> :
> : Peter Alfke, Xilinx Applications

I don't think anyone has said that the global reset should not be used. I believe the issue is that, because of its slow propagation delay, it cannot guarantee a clean exit from reset. That is, it can't guarantee a clean exit unless you do some thinking about your design. For example, if you use FSMs, the initial transition should depend on an external signal or some other delayed reset. That way you can be sure that the reset has been removed from all FFs in the FSM and they will not start up out of sync. In essence, any part of your design that can get out of kilter if the reset ends on different clock cycles should be synchronized using some other signal. You can think of this as a post-reset enable signal.

--
Rick "rickman" Collins
rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY removed.
Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design
URL http://www.arius.com
4 King Ave                      301-682-7772 Voice
Frederick, MD 21701-3110        301-682-7666 FAX
Article: 58673
Jay wrote:
> I am designing a digital PLL and I'm trying to figure out what DAC
> resolution I need to drive my VCO, given a 0.1 degree phase accuracy
> requirement (that is, reference and output need to be within 0.1 degrees
> of each other).
>
> The DPLL output operates over a range of 1 Hz to 50 Hz (not too tough),
> and my VCO gain is 11.8333 Hz/V. My phase detector is the standard
> two-DFF type and can detect a minimum time difference of 20 ns between the
> two waveforms. The DAC output (which controls the VCO) range is 0-5 V.
>
> My problem is that I'm having trouble relating my phase requirement to
> DAC voltage step size (number of bits).

Assuming that the DAC is updated once each cycle of the output frequency, you want your frequency to be within f(1 +/- 1/3600), which would generate the maximum phase error, assuming that the phase was exactly matched at the beginning. (0.1 degrees out of 360 is 1/3600 of a cycle.) That suggests that you want at least a 12-bit converter. If you did external filtering, you could use fewer bits and toggle the DAC setting more frequently such that the average voltage is for the correct frequency.

Thad
Article: 58674
On Wed, 30 Jul 2003, Andras Tantos wrote:
> > I am currently trying to perform a readout from an FPGA to a Linux PC using
> > the parallel port. I have implemented the state machine for EPP communication
> > in the FPGA and it works well; however, the system is slow.
> >
> > I think this is because EPP devices are supposed to negotiate the best
> > available transfer mode during initialization, but the FPGA is not
> > currently set up to do that. As a result, I had to fall back on software
> > emulation of the data transfer handshaking.
> >
> > I was wondering if anyone has experience performing readout from an FPGA
> > using EPP. My aim is to get 1 MByte/sec communication. Any help would be
> > appreciated.
>
> I had the same experience: the PC parallel port HW wouldn't switch into EPP
> mode without the proper negotiation. You will stay in SPP mode on the SW
> side, though you can emulate the EPP protocol from SW. And yes, it's slow.
>
> Andras Tantos

Were you able to implement the proper negotiation and avoid staying in SPP? If so, what do you recommend?

-Yash