
Messages from 119200

Article: 119200
Subject: Re: Uart problem, xapp223 + Spartan3A
From: mh <moazzamhussain@gmail.com>
Date: 15 May 2007 01:35:14 -0700
Hi Borge

Place the EDN file in the project directory and declare the instance as a
black box. To avoid this synthesis error, add a Verilog file with the
same module name and port list (and no RTL), followed by endmodule.
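A stub along those lines might look like this; the port names are taken
from the instantiation quoted later in this thread, but the data width
and port directions are assumptions, so check them against xapp223:

```verilog
// Black-box stub for the EDN macro: same module name and port list,
// no RTL. The actual logic comes from uart_tx.EDN at implementation.
module uart_tx (
    input  [7:0] data_in,          // width assumed, verify in xapp223
    input        write_buffer,
    input        reset_buffer,
    input        en_16_x_baud,
    output       serial_out,
    output       buffer_full,
    output       buffer_half_full,
    input        clk
);
// Intentionally empty: synthesis treats this as a black box.
endmodule
```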

Hope it works
/MH

Borge wrote:
> I'm using Ken Chapman's uart_tx in a Spartan3A design.
>
> The HDL compiler can't find the module/primitive, so I wonder if I did
> everything right here.
>
> What I did was the following:
> - Download and unpack xapp223.zip from Xilinx
> - Include uart_tx.EDN in my design, no problem
> - Instantiate it in my code, copy/paste from Spartan3A demo source
> "terminal.v"
> - Compile, got error
>
> I know the uart documentation says it's fit for any Virtex or Spartan-
> II, but seeing that it works in the Spartan3A demo I should be able to
> use it in my code on the same board.
>
> Can you explain what went wrong here?
>
> Thanks,
> Børge
>
>
> Here's my instantiation:
>     uart_tx uart_transmitter (
>         .data_in(outdata_uart),
>         .write_buffer(write_to_uart),
>         .reset_buffer(reset_local),
>         .en_16_x_baud(en_16_x_baud),
>         .serial_out(RS232_DCE_TXD),
>         .buffer_full(uart_buffer_full),
>         .buffer_half_full(),
>         .clk(clk_100)
>     );
>
> Here's my compile error message:
> ================================================================
> *                   Design Hierarchy Analysis                  *
> ================================================================
> ERROR:HDLCompilers:87 - "sources/implementation.v" line 346 Could not
> find module/primitive 'uart_tx'
> -->


Article: 119201
Subject: Re: An Open-Source suggestion for Xilinx
From: "comp.arch.fpga" <ksulimma@googlemail.com>
Date: 15 May 2007 01:46:03 -0700
On 14 May, 19:42, fpga_t...@yahoo.com wrote:

> The biggest problem with much of this, is a mine field caused by the
> Xilinx NDA in their software license,

I did not sign an NDA prior to installing the Xilinx software.
There was some text shown to me during installation, but the terms of
the sales contract had already been negotiated between me and Avnet at
that point. Goods and money had even been exchanged by then.
That is far too late to add additional restrictions to the sales
contract.
Also, you can read most of the information on the DVD without
installing anything and clicking through the EULA.

> Selling products which violate Xilinx IP rights, or giving them away
> as open source, will give Xilinx the power to pick and choose who they
> wish to enforce their rights against. The Xilinx software NDA terms
> are very broad,
That's another point of attack. They are probably too broad to be
applicable under EU law.

> so if the information you use for the reverse
> engineering is included in their software products, watch out ...
> carefully follow both the US and EU laws.
>
> It's this simple problem of Xilinx NDA and license terms that makes
> the company very very high risk to support with open source
> development. They include/use major Open Source products freely,

That is the third angle of attack. The EULA you click through for ISE
includes the GPL and other open source licenses. (They ship Tcl and
other software.) It is not clear from the installation process which
parts of the software are covered by which of these EULAs.
In German law, BGB §305c(2) states that if a contract formulated by
one party is ambiguous, it must be interpreted in the interest of the
other party. This means that if the EULA is valid, anything that might
be covered by the GPL can be used by me under GPL terms. Good for
Xilinx that the EULA is not valid.

Also, protocols, ISAs, and similar material are covered by neither EU
copyright nor patent law. And finally, Xilinx has a history of hiding
information, but not of being aggressive against open source projects
interfacing to their parts or their software.

As a result, open source projects should not be scared of using
information derived from the software tools. Just don't include any
code, netlists or binaries from Xilinx in your projects.

Kolja Sulimma


Article: 119202
Subject: Re: An Open-Source suggestion for Xilinx
From: fpga_toys@yahoo.com
Date: 15 May 2007 01:50:56 -0700
On May 15, 12:46 am, Eric Smith <e...@brouhaha.com> wrote:
> If your mention of US copyright laws is meant to reference the DMCA
> provisions against reverse engineering for circumvention, it should
> be noted that the DMCA specifically exempts reverse-engineering for
> the purposes of achieving iteroperability.

Assume you ask first and get permission, so that it will not violate
license restrictions, or violate NDA terms for trade secrets and other
Xilinx IP rights.  One problem with Open Source programs is that they
frequently provide much more than interoperability; they also
disclose the broader IP protected by trade secret and NDA provisions.
So while the DMCA may permit reverse engineering for interoperability
relating to copyrightable material if Xilinx provides permission, it
doesn't take you very far otherwise.

The Xilinx license starts out with the statement:

1. License and Ownership Rights
(a) XILINX License.The Integrated Software Environment (ISE) software,
including any updates and/or associated documentation thereto,
(collectively, the "Software") contains copyrighted material, trade
secrets, and other proprietary information (collectively,
"Intellectual Property") owned by XILINX, Inc. ("XILINX") and its
third-party licensors. With respect to the Intellectual Property in
the Software that is owned by XILINX, XILINX hereby grants you a
nonexclusive license for a period of one (1) year from the date of
purchase or annual renewal thereof to use the Software solely for your
use in developing designs for XILINX programmable logic devices.

and then places the following restriction on use:

3. Restrictions. The Software contains copyrighted material, trade
secrets, and other proprietary information. In order to protect them
you may not decompile, translate, reverse-engineer, disassemble, or
otherwise reduce the Software to a human-perceivable form.

Plus, to comply with EU provisions:

8. Interoperability. If you acquired the Software in the European
Union (EU), even if you believe you require information related to the
interoperability of the Software with other programs, you shall not
decompile or disassemble the Software to obtain such information, and
you agree to request such information from XILINX at the address
listed above. Upon receiving such a request, XILINX shall determine
whether you require such information for a legitimate purpose and, if
so, XILINX will provide such information to you within a reasonable
time and on reasonable conditions.

and the termination clause includes:

This License will terminate immediately without notice from XILINX if
you fail to comply with any of the terms and conditions of this
License.

Which causes problems for open source reverse engineering, because
the DMCA requires that you have a legal copy, with permission, for
reverse engineering, and that legal copy disappears when you violate
the reverse engineering restrictions.

The only legal path, is via the EU provisions, requiring the developer
of an interoperable product to ask for the documentation needed,
without reverse engineering, with the hope that Xilinx will not claim
trade secret and refuse.

Given the OP's claim about existing reverse engineering work, I have
some serious questions about liability for anyone involved in the
project, doing development based on the RE information, or
distributing the resulting work. This is one point where one really
needs an attorney before even volunteering the existence of a
project.


Article: 119203
Subject: Re: An Open-Source suggestion for Xilinx
From: fpga_toys@yahoo.com
Date: 15 May 2007 01:58:42 -0700
On May 15, 2:46 am, "comp.arch.fpga" <ksuli...@googlemail.com> wrote:
> As a result open source projects should not be scared using
> information derived from the
> software tools. Just don't include any code, netlists or binaries from
> Xilinx into your projects.

Should not be scared to openly violate the direct terms of the
license?

See an attorney first ....



Article: 119204
Subject: Re: An Open-Source suggestion for Xilinx
From: "comp.arch.fpga" <ksulimma@googlemail.com>
Date: 15 May 2007 01:59:00 -0700
On 8 May, 23:12, <steve.l...@xilinx.com> wrote:
> There are a few programs that we could open-source, but they
> only have a few engineers working on them, so it would not save
> resources since we have to manage the open-source projects.

Not necessarily. Besides opening and supporting your tools, there
are three other options:

1) Opening interfaces, file formats, APIs, etc.
This allows people to create tools that are completely independent of
your software. Xilinx did that with XNF and JBits, and there was quite
a lot of activity in academia using these. For the configuration tools
issue this is likely to solve most of the problems.

2) Open source your code without managing the project
You can release code for less important projects like Impact without
taking care of the open source development process. The community can
create a fork that competes with your implementation. You can
formulate the license in a way that allows you to port back any code
into your closed source branch. There is no need for you to provide
any support at all to the open source branch. This is similar to the
old days of Star Office.

3) Open source abandoned projects
This is what IBM did with OpenDX. IBM was still actively using this
tool internally but decided that it was not successful as a product.
By opening it up they made sure that it continued to be maintained to
some extent.
How about the source code for JBits, JINI, etc.?

Kolja Sulimma


Article: 119205
Subject: Re: Xilinx software quality - how low can it go ?!
From: ammonton@cc.full.stop.helsinki.fi
Date: 15 May 2007 09:22:24 GMT
steve.lass@xilinx.com wrote:

>  - Most of our time is spent doing new device support (and this is not just
> within the map and par groups). How do we get these "volunteers" to
> deliver new devices when we need them?
>  - We start the software for new devices about 2 years before the software
> for that family is released. Making the details of a new architecture public
> at that time is not an option.

There's nothing stopping Xilinx from developing in private repositories
and contributing the changes when the new device is launched. The
private repository would also be the natural source for Xilinx's
"official" supported tool releases. This is how much development is
done for e.g. GCC and the GNU binutils. The benefit is that since it's
all open source, changes flow both ways.

-a

Article: 119206
Subject: Re: Timing constraint question
From: veeresh <veereshin@gmail.com>
Date: 15 May 2007 03:06:45 -0700
Hi,

On May 15, 1:27 am, Dima <Dmitriy.Bek...@gmail.com> wrote:
> Hello,
>
> I am trying to specify a timing constraint for a latch that I have in
> my design. I need to make sure that from the rising edge of the clock
> to when a control signal goes high that causes the latch to switch, I
> have less than one clock cycle delay.

I suppose you have two outputs: a clock out and one more signal, i.e.
the control signal, and the edge on the control signal has to be kept
at a fixed time gap from the rising edge of the clock. If the control
signal is generated w.r.t. the same clock, a clock-to-output delay
constraint can be used. Otherwise, sample this signal again with the
same clock, and use the clock-to-output delay constraint.
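The second suggestion, resampling the control signal, can be sketched
as a small Verilog fragment. This is a hypothetical illustration with
placeholder signal names, not code from the thread:

```verilog
// Hypothetical sketch: re-register the control signal with the same
// clock, so the path to the latch enable becomes an ordinary
// register-to-register path already covered by the PERIOD constraint.
module resync_control (
    input  wire clk,
    input  wire control,     // original control signal
    output reg  control_q    // resampled copy, aligned to clk
);
    always @(posedge clk)
        control_q <= control;
endmodule
```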


Regards,
Veeresh
>
> This is my UCF file:
>
> #######################
> ## System level constraints
> Net CLK TNM_NET = CLK;
> TIMEGRP "RISING_CLK" = RISING "CLK";
> NET "add1_rdy" TPTHRU = "ADD1_RDY";
> NET "accumulate<31>" TNM_NET = ACC;
> NET "accumulate<30>" TNM_NET = ACC;
> NET "accumulate<29>" TNM_NET = ACC;
> NET "accumulate<28>" TNM_NET = ACC;
> NET "accumulate<27>" TNM_NET = ACC;
> NET "accumulate<26>" TNM_NET = ACC;
> NET "accumulate<25>" TNM_NET = ACC;
> NET "accumulate<24>" TNM_NET = ACC;
> NET "accumulate<23>" TNM_NET = ACC;
> NET "accumulate<22>" TNM_NET = ACC;
> NET "accumulate<21>" TNM_NET = ACC;
> NET "accumulate<20>" TNM_NET = ACC;
> NET "accumulate<19>" TNM_NET = ACC;
> NET "accumulate<18>" TNM_NET = ACC;
> NET "accumulate<17>" TNM_NET = ACC;
> NET "accumulate<16>" TNM_NET = ACC;
> NET "accumulate<15>" TNM_NET = ACC;
> NET "accumulate<14>" TNM_NET = ACC;
> NET "accumulate<13>" TNM_NET = ACC;
> NET "accumulate<12>" TNM_NET = ACC;
> NET "accumulate<11>" TNM_NET = ACC;
> NET "accumulate<10>" TNM_NET = ACC;
> NET "accumulate<9>" TNM_NET = ACC;
> NET "accumulate<8>" TNM_NET = ACC;
> NET "accumulate<7>" TNM_NET = ACC;
> NET "accumulate<6>" TNM_NET = ACC;
> NET "accumulate<5>" TNM_NET = ACC;
> NET "accumulate<4>" TNM_NET = ACC;
> NET "accumulate<3>" TNM_NET = ACC;
> NET "accumulate<2>" TNM_NET = ACC;
> NET "accumulate<1>" TNM_NET = ACC;
> NET "accumulate<0>" TNM_NET = ACC;
>
> # Constrain CLK to 200 MHz
> TIMESPEC TS_CLK = PERIOD CLK 5 ns;
>
> # Constrain the accumulate latch (one CLK cycle)
> TIMESPEC TS_ACC_LATCH = FROM "RISING_CLK" THRU "ADD1_RDY" TO "ACC"
> TS_CLK * 0.99;
> #######################
>
> During PAR, I always get the message that the following constraint is
> ignored:
> WARNING:Timing:3223 - Timing constraint TS_ACC_LATCH = MAXDELAY FROM
> TIMEGRP "RISING_CLK" THRU TIMEGRP "ADD1_RDY" TO TIMEGRP "ACC" TS_CLK *
> 0.99; ignored during timing analysis.
>
> Why is that? What is the right way to specify what I'm trying to
> constrain?
>
> Thanks



Article: 119207
Subject: reading IDCODE from parallel bus?
From: "Morten Leikvoll" <mleikvol@yahoo.nospam>
Date: Tue, 15 May 2007 12:07:41 +0200
Is there any way to read the IDCODE (and execute other JTAG commands)
using the parallel config bus? I can't find any information on this,
mostly because search results are polluted.

Thanks



Article: 119208
Subject: Re: downto usage in EDK
From: Andrew Greensted <ajg112@ohm.york.ac.uk>
Date: Tue, 15 May 2007 11:47:34 +0100
Manny wrote:
> Hi,
> 
> I vaguely seem to remember a provision on the use of downto in EDK. I
> tried to dig this out before posting this with no luck---Can't
> remember where I came across this though I tried a couple of keyword
> searches in random EDK pdf docs. Kinda doubtful about something I've
> just written in EDK using many downto's. Would appreciate a hint on
> this.
> 
> BTW, is Xilinx planning any short-term remedy for this? Keeping track
> of minor details is a bit daunting, especially when you're supposed to
> do loads of things, from electronic circuitry design and DSP algorithmic
> development to VHDL coding---You know academia!
> 
> Regards,
> -Manny
> 

The ordering in EDK caused me quite a headache too. I tend to use (N
downto 0) for my own HDL. The following function is very handy though:

signal dataA : std_logic_vector(0 to 31);
signal dataB : std_logic_vector(31 downto 0);

function reverseVector (a: in std_logic_vector) return std_logic_vector is
  variable result: std_logic_vector(a'reverse_range);
begin
  for i in a'range loop
    result((a'length-1)-i) := a(i);
  end loop;
  return result;
end;

dataB <= reverseVector(dataA);

Hope that helps
Andy

Article: 119209
Subject: Xilinx EDK: Slow OPB write speeds
From: Andrew Greensted <ajg112@ohm.york.ac.uk>
Date: Tue, 15 May 2007 12:00:12 +0100
Hi All,

I've a simple peripheral with an OPB interface. In a nutshell I've been
getting some very slow write speeds over the OPB and wanted to see if
this was normal, or if there was anything I can do to speed things up.

I've tried a number of configurations. Trying PPC and microblaze based
systems. Using the OPB_IPIF and connecting directly to the bus. The
results are:

Virtex2Pro + PPC
cpu Freq: 100Mhz, bus Freq: 100Mhz
memory write freq about 1.25MHz
1 write / 800ns or every 80 clock cycles

Virtex2Pro + microblaze
cpu Freq: 100Mhz, bus Freq: 100Mhz
memory write freq about 12.5MHz
1 write / 80ns or every 8 clock cycles

Obviously the microblaze approach is faster. Is this simply because the
PPC system has to use a plb2opb bridge? Including the IPIF doesn't seem
to slow things down. The peripheral does an immediate acknowledge of
the data write, so there should be no delays there.

Are there any tricks to speed up access to OPB based peripherals?

For reference, the test code is:
#include "xparameters.h"
#include "xutil.h"
#include "xio.h"

int main(void)
{
  print("Starting OPB Test\n");

  Xuint32 dataOut = 0;

  while (1)
  {
//  XIo_Out32(XPAR_OPB_TEST_0_BASEADDR, dataOut);	// IPIF
    XIo_Out32(XPAR_OPB_IPIF_TEST_0_BASEADDR, dataOut);	// Direct
    dataOut ^= 0x1;
  }
}

Many Thanks
Andy

Article: 119210
Subject: coregen -> simulation error in modelsim
From: kislo <kislo02@student.sdu.dk>
Date: 15 May 2007 04:06:17 -0700
When I try to simulate a coregen generated single port RAM, I get an
error from modelsim:

# -- Loading package blkmemsp_pkg_v6_2
# -- Loading entity blkmemsp_v6_2
# ** Error: ram.vhd(112): Internal error: ../../../src/vcom/
genexpr.c(5483)
# ** Error: ram.vhd(112): VHDL Compiler exiting
# ** Error: D:/modelsimXE/win32xoem/vcom failed.


What can be the cause of this error? I can simulate an async FIFO, and
I have the newest upgrades/service packs.

from google search i found a guy having the same problem with another
coregen component:
http://www.mikrocontroller.net/topic/68567
he says (translated from German):

"In any case, the problem was that the generics were only listed in the
generic map and not in the declaration part of the architecture of the
coregen-generated file. That should, no, that must be changed by hand,
and then everything is fine :o)"

What exactly am I supposed to do to get it to work?
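Reading that advice, the manual fix appears to be adding a matching
generic clause to the component declaration in the declarative part of
the generated file's architecture. A hypothetical fragment, with
placeholder names and generics rather than actual coregen output:

```vhdl
-- Hypothetical sketch, not actual coregen output: the generic clause
-- below must appear on the component declaration itself (in the
-- architecture's declarative part), not only in the "generic map"
-- at the instantiation. Drop this into the generated file by hand.
component blkmemsp_v6_2
  generic (
    c_width : integer := 8;     -- placeholder generic
    c_depth : integer := 256    -- placeholder generic
  );
  port (
    clk  : in  std_logic;
    addr : in  std_logic_vector(7 downto 0);
    din  : in  std_logic_vector(7 downto 0);
    we   : in  std_logic;
    dout : out std_logic_vector(7 downto 0)
  );
end component;
```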


Article: 119211
Subject: Xilinx ISE 9.1 Simulator does not work with glibc 2.5
From: Thomas Feller <tmueller@rbg.informatik.tu-darmstadt.de>
Date: Tue, 15 May 2007 13:25:27 +0200
Hi List,

I cannot run the ISE-Simulator on a Gentoo Laptop with glibc 2.5.
The simulation seems to run fine, but the Simulation window is not
showing up.
I've found that recently someone else had experienced this problem with
the WebPack version of ISE. While talking to some other people at the
University here the problem seems not to be a specific Gentoo issue.
I've tried to use LD_LIBRARY_PATH with a glibc I compiled only for this
task, but I haven't got it working.

Has anyone ever tried this or experienced this problem?
Is there anyone who can tell me what I have to compile and put in
LD_WHATEVER to get this working?
My C/C++ skills are very limited, so please give me some nice keywords I
can google for.

On my Core2 Duo machine, which is a 64-bit install, I didn't have that
problem, even though it is also a Gentoo machine with glibc 2.5. I think
that some of the 64-to-32-bit emulation libraries do the trick.

Hopefully this provides enough information for a respectful and polite answer :)

Regards
	Thomas

Article: 119212
Subject: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Amit <amit3007@gmail.com>
Date: 15 May 2007 04:34:28 -0700
Hello,

I have a DDR2 controller ASIC RTL which I need to put on an FPGA and
validate. The problem is that I am not able to get this controller to
run at more than 50-60 MHz on a Virtex4 FPGA. Since everyone says the
minimum clock frequency for DDR2 devices is 100 MHz, I am simply not
able to get the DDR2 interface working on the FPGA.

I want to know if DDR2 devices can work at clock frequencies much
lower than 100 MHz. Has anyone tried it with any success?

Please share your insights.

Regards,
Amit


Article: 119213
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Antti <Antti.Lukats@googlemail.com>
Date: 15 May 2007 05:02:46 -0700
On 15 May, 13:34, Amit <amit3...@gmail.com> wrote:
> Hello,
>
> I have a DDR2 Controller ASIC rtl, which i need to put on FPGA and
> validate it. The problem is, i am not able to get this controller run
> on more than 50-60 Mhz on Virtex4 FPGA. Now, as everyone says minimum
> clock frequency for DDR2 devices is 100Mhz, i am simply not able to
> get DDR2 interface working on FPGA.
>
> I want to know if the DDR2 Devices can work on the clock frequencies
> which are much lower than 100Mhz. Has anyone tried it and got any
> success?
>
> please put your insights.
>
> Regards,
> Amit

Try faster FPGA?

Antti

Article: 119214
Subject: Re: Xilinx EDK: Slow OPB write speeds
From: Guru <ales.gorkic@email.si>
Date: 15 May 2007 05:04:32 -0700
On May 15, 1:00 pm, Andrew Greensted <ajg...@ohm.york.ac.uk> wrote:
> Hi All,
>
> I've a simple peripheral with an OPB interface. In a nutshell I've been
> getting some very slow write speeds over the OPB and wanted to see if
> this was normal, or if there was anything I can do to speed things up.
>
> I've tried a number of configurations. Trying PPC and microblaze based
> systems. Using the OPB_IPIF and connecting directly to the bus. The
> results are:
>
> Virtex2Pro + PPC
> cpu Freq: 100Mhz, bus Freq: 100Mhz
> memory write freq about 1.25MHz
> 1 write / 800ns or every 80 clock cylces
>
> Virtex2Pro + microblaze
> cpu Freq: 100Mhz, bus Freq: 100Mhz
> memory write freq about 12.5MHz
> 1 write / 80ns or every 8 clock cylces
>
> Obviously the microblaze approach is faster. Is this simply because the
> PPC system has to use a plb2opb bridge? Including the IPIF doesn't seem
> to slow things down. The peripheral does and immediate acknowledge of
> the data write, so there should be no delays there.
>
> Are them some tricks to speed up access to OPB based peripherals?
>
> For reference, the test code is:
> #include "xparameters.h"
> #include "xutil.h"
> #include "xio.h"
>
> int main(void)
> {
>   print("Starting OPB Test\n");
>
>   Xuint32 dataOut = 0;
>
>   while (1)
>   {
> //  XIo_Out32(XPAR_OPB_TEST_0_BASEADDR, dataOut);       // IPIF
>     XIo_Out32(XPAR_OPB_IPIF_TEST_0_BASEADDR, dataOut);  // Direct
>     dataOut ^= 0x1;
>   }
>
> }
>
> Many Thanks
> Andy

 Try using PARAMETER C_INCLUDE_BURST_SUPPORT = 1 in MHS.

Guru


Article: 119215
Subject: Re: Xilinx EDK: Slow OPB write speeds
From: Andrew Greensted <ajg112@ohm.york.ac.uk>
Date: Tue, 15 May 2007 13:18:33 +0100
Guru wrote:
>  Try using PARAMETER C_INCLUDE_BURST_SUPPORT = 1 in MHS.
> 
> Guru

As far as I can see, this parameter does not apply to any of the parts
of my system. Can this be used as a global MHS parameter? Or did you
mean it to go in a specific peripheral?

Cheers
Andy

-- 
Dr. Andrew Greensted      Department of Electronics
Bio-Inspired Engineering  University of York, YO10 5DD, UK

Tel: +44(0)1904 432828    Mailto: ajg112@ohm.york.ac.uk
Fax: +44(0)1904 432335    Web: www.bioinspired.com/users/ajg112

Article: 119216
Subject: Re: Digital gain and offset correction
From: "Marco T." <marcotoschi@gmail.com>
Date: 15 May 2007 05:41:02 -0700

Sean Durkin wrote:

> Marco T. wrote:
> > I wasn't asking about how to perform addition and multiplication.
> >
> > I was asking if there are other implementations or auto tuning-
> > techniques, in example if someone has made a loopback to perform it.
> That's *not* what you were asking.
>
> You were asking:
> "Which is the way to perform digital vgain and offset correction using a
> fpga?"
>
> How is anyone supposed to see that you want to know about loopback and
> auto-tuning from that?
>
> How are we supposed to give you decent answers if you don't state your
> question clearly?
>
> It's like calling your auto shop and asking "How do I fix a broken
> car?". No mechanic will be able to give you a decent answer, because he
> doesn't know what kind of car you are talking about, what is broken,
> what kind of tools you have to repair it and if you know how to use them.
> Without more information, your question is meaningless and much too
> vague to answer.
> That's why Antti reacted the way he did when you first asked this here.
> It was his ironic way of letting you know that your question doesn't
> really make sense.
>
> So: Without knowing what it is you want to do gain/offset correction
> for, no one will be able to give you a decent answer. If, for example,
> you want to do gain/offset-correction for a CMOS image sensor, it's
> different from doing GOC for RF receivers. In the end it all amounts to
> + and *, but how you get the values to add or multiply by differs a lot
> depending on the system you are looking at.
>
> So actually, you don't want to know about *PERFORMING*
> gain/offset-correction at all (because that's trivial, just + and *)...
> Instead, what you want to know is how one obtains the correction factors
> (maybe through auto-tuning and/or loopbacks), which is a totally
> different subject.
>
> --
> My email address is only valid until the end of the month.
> Try figuring out what the address is going to be after that...


Sorry for my bad English... I asked a generic question to open the
thread, intending to get more specific afterwards.

I need to connect two types of sensors to an ADC: one with a
single-ended output (0 to 5 Volt) and the second with a Wheatstone
bridge.

I would like to know the best techniques for performing
auto-calibration.

Sorry for the other previous posts.

Thanks,
Marco T.
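As the earlier reply points out, the correction itself is just a
multiply and an add; one common auto-calibration approach is two-point
calibration: measure the raw ADC reading at two known reference inputs
and solve for gain and offset. A minimal sketch (the ADC codes and
reference voltages below are made-up illustrations, not values from
this thread):

```python
def two_point_calibration(raw_lo, raw_hi, ref_lo, ref_hi):
    """Solve corrected = gain * raw + offset so that the two measured
    raw codes map exactly onto the two known reference values."""
    gain = (ref_hi - ref_lo) / (raw_hi - raw_lo)
    offset = ref_lo - gain * raw_lo
    return gain, offset

# Example: the ADC reads code 100 at 0.0 V and code 900 at 5.0 V.
gain, offset = two_point_calibration(100, 900, 0.0, 5.0)
corrected = gain * 500 + offset   # correct a mid-scale reading
print(gain, offset, corrected)    # 0.00625 -0.625 2.5
```

In an FPGA the resulting gain/offset pair would typically be scaled to
fixed point and applied with one multiplier and one adder per channel.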


Article: 119217
Subject: Re: Digital gain and offset correction
From: Antti <Antti.Lukats@googlemail.com>
Date: 15 May 2007 05:57:39 -0700
On 15 May, 14:41, "Marco T." <marcotos...@gmail.com> wrote:
> Sean Durkin ha scritto:
> > Marco T. wrote:
> > > I wasn't asking about how to perform addition and multiplication.
>
> > > I was asking if there are other implementations or auto tuning-
> > > techniques, in example if someone has made a loopback to perform it.
> > That's *not* what you were asking.
>
> > You were asking:
> > "Which is the way to perform digital vgain and offset correction using a
> > fpga?"
>
> > How is anyone supposed to see that you want to know about loopback and
> > auto-tuning from that?
>
> > How are we supposed to give you decent answers if you don't state your
> > question clearly?
>
> > It's like calling your auto shop and asking "How do I fix a broken
> > car?". No mechanic will be able to give you a decent answer, because he
> > doesn't know what kind of car you are talking about, what is broken,
> > what kind of tools you have to repair it and if you know how to use them.
> > Without more information, your question is meaningless and much too
> > vague to answer.
> > That's why Antti reacted the way he did when you first asked this here.
> > It was his ironic way of letting you know that your question doesn't
> > really make sense.
>
> > So: Without knowing what it is you want to do gain/offset correction
> > for, no one will be able to give you a decent answer. If, for example,
> > you want to do gain/offset-correction for a CMOS image sensor, it's
> > different from doing GOC for RF receivers. In the end it all amounts to
> > + and *, but how you get the values to add or multiply by differs a lot
> > depending on the system you are looking at.
>
> > So actually, you don't want to know about *PERFORMING*
> > gain/offset-correction at all (because that's trivial, just + and *)...
> > Instead, what you want to know is how one obtains the correction factors
> > (maybe through auto-tuning and/or loopbacks), which is a totally
> > different subject.
>
> > --
> > My email address is only valid until the end of the month.
> > Try figuring out what the address is going to be after that...
>
> Sorry for my bad english... I made a generic question to open the
> thread and then going more specific.
>
> I should connect two type of sensors, one with single ended output
> (from 0 to 5 Volt) and the second with a Wheatstone bridge, to an ADC.
>
> I would like to know which are the best techniques to perform auto
> calibration.
>
> Sorry for other previous posts.
>
> Thanks,
> Marco T.

This has nothing to do with the FPGA; it is more related to the analog
domain and sensor conditioning. So try reading some books and look in
relevant places about analog measurements.

Antti

Article: 119218
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Jonathan Bromley <jonathan.bromley@MYCOMPANY.com>
Date: Tue, 15 May 2007 14:00:57 +0100
On 15 May 2007 05:02:46 -0700, Antti <Antti.Lukats@googlemail.com>
wrote:

>On 15 May, 13:34, Amit <amit3...@gmail.com> wrote:
>> Hello,
>>
>> I have a DDR2 Controller ASIC rtl, which i need to put on FPGA and
>> validate it. The problem is, i am not able to get this controller run
>> on more than 50-60 Mhz on Virtex4 FPGA.
>
>Try faster FPGA?

This probably isn't very helpful, as I guess Amit is using the FPGA
for ASIC prototyping and therefore his RTL will be targeted at the
ASIC and will be poorly optimised for FPGA.

The maximum clock frequency for DDR2 SDRAMs is quite aggressively 
specified, as Amit says.  I have to say that my own knee-jerk 
reaction would be simply to try it; I'd be amazed if the 
DDR2's internal clock generators could not cope with a clock
that's slower than nominal by a factor of 4.
But they have hideously complicated internal clock re-timing 
circuits, so maybe it would indeed all go horribly wrong.  
Does anyone know for sure?  Micron's data sheet and Verilog 
model both enforce an 8ns maximum clock period on all their 
current (>=200MHz) DDR2 parts.

Is there a memory expert in the house? :-)
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 119219
Subject: Re: Xilinx EDK: Slow OPB write speeds
From: Andrew Greensted <ajg112@ohm.york.ac.uk>
Date: Tue, 15 May 2007 14:04:38 +0100
Links: << >>  << T >>  << A >>
It struck me that part of the speed problem with the PPC based system
was having the main system memory on the PLB bus. By using the OCM
interface for data and instructions things got slightly faster:

Virtex2Pro + PPC
CPU freq: 100 MHz, bus freq: 100 MHz
memory write freq: about 2.941 MHz
(1 write / 340 ns)

However, this still seems very slow for a 100MHz bus.
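The arithmetic behind those numbers, as a quick Python sketch:

```python
# One OPB write every 340 ns, on a 100 MHz (10 ns period) bus.
write_period_ns = 340.0
bus_period_ns = 10.0

write_freq_mhz = 1000.0 / write_period_ns        # write rate in MHz
cycles_per_write = write_period_ns / bus_period_ns  # bus cycles per write

print(round(write_freq_mhz, 3))  # 2.941
print(cycles_per_write)          # 34.0
```

So each single write is costing around 34 bus cycles, which is why it
feels slow for a 100 MHz bus.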

Andy

Article: 119220
Subject: Re: Xilinx ISE 9.1 Simulator does not work with glibc 2.5
From: Colin Paul Gloster <Colin_Paul_Gloster@ACM.org>
Date: 15 May 2007 13:07:11 GMT
Links: << >>  << T >>  << A >>
Hello,

We are currently waiting for Xilinx ISE 9.1 to be delivered, so I do
not have all the answers for you. We have GNU LibC 2.3.x. In the
meantime I thought of installing Xilinx WebPACK 9.1, but the unzipping
tools I tried (viz. unzip from Info-ZIP and p7zip_4.45) on
WebPACK_SFD_91i.zip did not successfully unzip the whole thing, so I
did not bother trying to run it.

Thomas Feller <tmueller@rbg.informatik.tu-darmstadt.de> sent:

"I cannot run the ISE Simulator on a Gentoo laptop with glibc 2.5.
The simulation seems to run fine, but the simulation window does not
show up.
I've found that recently someone else experienced this problem with
the WebPack version of ISE. From talking to some other people at the
university here, the problem seems not to be a Gentoo-specific issue.
I've tried to use LD_LIBRARY_PATH with a glibc I compiled just for
this task, but I haven't got it working.

[..]"

I would suggest that if you are not using a compatible operating
system for software you are trying to run, that you try using a
compatible operating system. If you do not want to replace Gentoo, you
could emulate a different operating system in something such as QEMU (
WWW.QEMU.org ).

It is quite possible that the incompatibility you suffer is to do with
GLibC 2.5 and that not all of ISE 9.1 is incompatible. I do not
know. Do you have a reason to particularly suspect GLibC 2.5?

If, and this is a very big if, the only thing you need to add is X
Windows compiled in a compatible manner, then you could compile X
Windows against a suitable version of GLibC with crosstool (
WWW.Kegel.com/crosstool/
), a very convenient tool which I have used when I needed to run
third-party GLibC 2.3.x code alongside third-party GLibC 2.2.y code
on the same machine. Unfortunately for you, it might not be so simple:
the closed-source program from Xilinx which calls X Windows quite
possibly requires X Windows to use the same version of GLibC that it
uses itself.

Regards,
Colin Paul Gloster

Article: 119221
Subject: Re: How low DDR2 Clock Frequency can be? To make it work on FPGA.
From: Sean Durkin <news_may07@durkin.de>
Date: Tue, 15 May 2007 15:28:46 +0200
Links: << >>  << T >>  << A >>
Amit wrote:
> Hello,
> 
> I have a DDR2 controller ASIC RTL which I need to put on an FPGA and
> validate. The problem is, I am not able to get this controller to run
> at more than 50-60 MHz on a Virtex-4 FPGA. Now, as everyone says the
> minimum clock frequency for DDR2 devices is 100 MHz, I am simply not
> able to get the DDR2 interface working on the FPGA.
> 
> I want to know whether DDR2 devices can work at clock frequencies
> much lower than 100 MHz. Has anyone tried it with any success?
125MHz is the lowest specified clock rate for DDR2 SDRAM, not 100MHz.

The problem is that inside the DRAM chip there is a DLL that makes sure
the data output is edge-aligned with the data strobe. This DLL only
works over a specific frequency range, usually down to 125MHz. The
JEDEC spec requires the DLL to work down to 125MHz, but it does not
need to work at frequencies below that.

So basically this means that maybe it works, maybe it doesn't. You could
be lucky and have DRAM chips that support it, but you can't count on it.
It might work with one shipment of chips, but might not with another. It
may vary from manufacturer to manufacturer, and from die revision to die
revision. So even though technically slower clock speeds should be
possible, this is just something that is out of spec, and even if it
happens to work the functionality might go away at any time.

So in your case if you try it and it doesn't work, you can never be sure
if the problem is with the IP core or the DRAM chip...
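The spec boundary described above amounts to a trivial check (a sketch,
assuming the 125 MHz JEDEC lower limit quoted):

```python
JEDEC_DLL_MIN_MHZ = 125.0  # lowest clock the DDR2 DLL is specified for

def dll_in_spec(clk_mhz):
    """Return True if the DDR2 DLL is guaranteed to work at clk_mhz."""
    return clk_mhz >= JEDEC_DLL_MIN_MHZ

print(dll_in_spec(200.0))  # True: within spec
print(dll_in_spec(60.0))   # False: may happen to work, not guaranteed
```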

-- 
My email address is only valid until the end of the month.
Try figuring out what the address is going to be after that...

Article: 119222
Subject: Re: Digital gain and offset correction
From: Jonathan Bromley <jonathan.bromley@MYCOMPANY.com>
Date: Tue, 15 May 2007 14:31:20 +0100
Links: << >>  << T >>  << A >>
On 15 May 2007 05:41:02 -0700, "Marco T." <marcotoschi@gmail.com>
wrote:

>I should connect two types of sensors, one with a single-ended output
>(from 0 to 5 Volts) and the second with a Wheatstone bridge, to an ADC.
>
>I would like to know which are the best techniques to perform auto
>calibration.

It depends very much on what's on the other side of the sensor -
the physical thing you're trying to measure.  For example,
if you know that the thing being measured has a long-term average
of zero, it's easy to log the measured long-term average and
subtract it from the measurements (offset).  Similarly, you
may be able to make inferences about the system gain from
observations of long-term peak-to-peak swing.  The kind of
example I have in mind is an electronic compass (navigation):
if you can go through one complete rotation, you know that
you have seen peak-to-peak swing on both the north/south
and the east/west signals, and you also know that when 
either of the two signals is at a peak, the other one is at
zero.  That explains why many car navigation systems 
sometimes ask you to drive round in a circle!
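The long-term-average and peak-to-peak ideas can be sketched as follows
(a hypothetical illustration in Python; `expected_pp` stands for
whatever swing you know the physical input produces, e.g. one full
compass rotation):

```python
def estimate_calibration(samples, expected_pp):
    """Estimate offset from the long-term average, and gain from the
    observed peak-to-peak swing versus the known physical swing."""
    offset = sum(samples) / len(samples)
    observed_pp = max(samples) - min(samples)
    gain = expected_pp / observed_pp
    return offset, gain

def correct(raw, offset, gain):
    """Apply the estimated corrections to a raw reading."""
    return (raw - offset) * gain

# e.g. a signal known to swing +/-1 about zero, seen through a
# sensor with an offset of 2 and a gain error of 0.5:
samples = [1.5, 2.0, 2.5, 2.0, 1.5, 2.0, 2.5, 2.0]
offset, gain = estimate_calibration(samples, expected_pp=2.0)
print(offset, gain)                # 2.0 2.0
print(correct(2.5, offset, gain))  # 1.0
```

Note this only works once you have seen the full swing, which is exactly
why the navigation system asks you to drive in a circle first.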

There is NOTHING about the output from any sensor that 
inherently allows you to calibrate it.  You MUST know
something about the input to the sensor.  Maybe you can
zero the input?  For example, if it's a light sensor and
it has a chopper disc rotating in front of it, you can
be sure that the sensor is detecting zero when the chopper
disc is interrupting the light.  If you are very lucky, you
may have some way of introducing a known calibration input
to the sensor (put a known weight on the strain gauge...).

Some of the ideas I've mentioned depend on you performing
an explicit calibration step.  You will then need to consider
how frequently to do this.  Obviously this depends on how
quickly the sensor becomes de-calibrated.  Remember that you
have to consider thermal effects, changes in supply voltage,
and many other possible factors.  If you can measure some
of these, it may be possible to apply some kind of first-order
compensation to the sensor output and thereby reduce the need
to recalibrate very frequently.

Automotive sensors provide some very interesting examples.
Take, for instance, the manifold-absolute-pressure (MAP)
sensor that detects how hard the engine is sucking air in.
When the engine is not rotating (easy to tell from the
crankshaft sensor) you can be 100% confident that the inlet
pressure is the same as atmospheric pressure.  This provides
a calibration opportunity every time you start the engine.
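That opportunistic re-zeroing might look something like this (a
hypothetical sketch; the names and units are invented for illustration):

```python
def update_map_offset(map_reading_kpa, engine_rpm, offset_kpa,
                      atmospheric_kpa=101.325):
    """With the engine stopped, the MAP sensor should read atmospheric
    pressure; any difference becomes the new offset estimate."""
    if engine_rpm == 0:
        offset_kpa = map_reading_kpa - atmospheric_kpa
    return offset_kpa

# Engine stopped: reading is ~2 kPa above atmospheric, so recalibrate.
offset = update_map_offset(103.325, engine_rpm=0, offset_kpa=0.0)
# Engine running: keep the previous offset untouched.
offset = update_map_offset(55.0, engine_rpm=800, offset_kpa=offset)
print(offset)
```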

FINALLY....  For some sensors that can have large offset
or gain errors, it may be preferable to apply your
corrections by injecting analog voltages into the sensor
using a DAC driven from your system.  In this way you can
pull the sensor's useful output range so that it matches
your ADC's conversion range, thus improving dynamic range.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Is it not the case that almost everything I've said here
is just simple scientific common-sense?

-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 119223
Subject: Re: coregen -> simulation error in modelsim
From: Colin Paul Gloster <Colin_Paul_Gloster@ACM.org>
Date: 15 May 2007 13:33:26 GMT
Links: << >>  << T >>  << A >>
Kislo <kislo02@student.SDU.Dk> wrote:
"When I try to simulate a coregen-generated single-port RAM, I get an
error from ModelSim:

# -- Loading package blkmemsp_pkg_v6_2
# -- Loading entity blkmemsp_v6_2
# ** Error: ram.vhd(112): Internal error: ../../../src/vcom/
genexpr.c(5483)
# ** Error: ram.vhd(112): VHDL Compiler exiting
# ** Error: D:/modelsimXE/win32xoem/vcom failed.


What can be the cause of the error? I can simulate an async FIFO, and I
have the newest upgrades/service packs.

from google search i found a guy having the same problem with another
coregen component:
http://www.mikrocontroller.net/topic/68567
he says:

"jedenfalls war das problem, dass die generics nur im mapping aufgeführt
waren und nicht im deklarationsteil der architecture des von coregen
generierten files. das sollte, nein das muss man von hand ändern und
alles ist gut :o)"

What exactly am I supposed to do to get it to work?"

Hej till Danskland.

Please bear in mind that even though the error message which Seb reported is
identical to yours, it seems that it could result from many different
errors. Anyway, I will try to translate Seb's tips from pseudo-German
into actual English for you, and hopefully you will not need any more
help.

"jedenfalls war das problem, dass die generics nur im mapping aufgeführt
waren und nicht im deklarationsteil der architecture des von coregen
generierten files."

Anyway, the problem was that the generics were only in the mapping
(maybe he meant port maps... I do not use Coregen, you figure it out!)
and not in the declarative part of the architecture of the files
generated by Coregen.

" das sollte, nein das muss man von hand ändern und
alles ist gut :o)"

So that should, no, that must, be modified by hand and everything will
be fine.

Also, you edited out Seb's previous two pseudo-sentences which may
also be useful:
"es lag nicht an den libraries."

The problem was not in the libraries.

" hatte auch noch nen syntaxfehler drin -
die guten alten semikolons :o)"

I also had a syntax error in there - the good old semicolons.

Regards,
Colin Paul Gloster

Article: 119224
Subject: debit- xilinx bitstream decompiler project has been vanished? or does someone know the URL
From: Antti <Antti.Lukats@googlemail.com>
Date: 15 May 2007 06:40:11 -0700
Links: << >>  << T >>  << A >>
Good things do not seem to stay online for long; the bitstream
decompiler project seems to be unavailable, at least I cannot find the
relevant weblinks any more.

Maybe there is a new hidden URL?

the last one used to be
http://www.ulogic.org/
or
http://www.ulogic.org/trac

but those links are now dead :(

Antti



