
Messages from 96675

Article: 96675
Subject: Re: vhdl to edif
From: Duane Clark <dclark@junkmail.com>
Date: Wed, 08 Feb 2006 21:21:35 GMT
Leow Yuan Yeow wrote:
> Hi, may I know whether there is any free program that is able to convert a 
> VHDL file to a .edf file? I am unable to find such options in the Xilinx ISE 
> Navigator. I have tried using the Xilinx ngc2edif converter, but when I try 
> to generate a bit file from the .edf file it says:
> 
> ERROR:NgdBuild:766 - The EDIF netlist 'synthetic2.edf' was created by the
> Xilinx
>    NGC2EDIF program and is not a valid input netlist.  Note that this EDIF
>    netlist is intended for communicating timing information to third-party
>    synthesis tools. Specifically, no user modifications to the contents of
> this
>    file will effect the final implementation of the design.
> ERROR:NgdBuild:276 - edif2ngd exited with errors (return code 1).
> ERROR:NgdBuild:28 - Top-level input design file "synthetic2.edf" cannot be
> found
>    or created. Please make sure the source file exists and is of a
> recognized
>    netlist format (e.g., ngo, ngc, edif, edn, or edf).
> 
> Any help is appreciated!

Xilinx's XST can be told to generate EDIF instead of NGC, though since 
ngdbuild understands the NGC format, I am not sure what you expect to 
gain by doing so. You can mix NGC and EDIF files as inputs to ngdbuild, 
and it should combine them fine.

Anyway, XST takes an "-ofmt" parameter, which can be set to "NGC" or 
"EDIF". However, the GUI does not provide a way to set it, so you would 
need to run XST from the command line.
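For reference, a minimal command-line run might look like the sketch below; XST reads its options from a script file, and the project file, part number, and top-level name here are hypothetical placeholders, not taken from the original post:

```shell
# Hypothetical project: top-level entity "top" listed in top.prj,
# targeting a Spartan-3 part. "-ofmt EDIF" makes XST emit top.edf
# instead of the default NGC netlist.
cat > xst_edif.scr <<'EOF'
run -ifn top.prj -ifmt VHDL -top top -p xc3s200-4-ft256 -ofmt EDIF -ofn top.edf
EOF
xst -ifn xst_edif.scr -ofn xst_edif.log
```

The resulting top.edf is a real synthesis netlist, so ngdbuild will accept it, unlike the timing-only EDIF that ngc2edif produces.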

Article: 96676
Subject: Async Processors
From: Jim Granville <no.spam@designtools.co.nz>
Date: Thu, 09 Feb 2006 10:25:02 +1300
  Further to an earlier thread on ASYNC design, and cores, this in the 
news :
http://www.eet.com/news/design/showArticle.jhtml?articleID=179101800

  and a little more is here
http://www.handshakesolutions.com/Products_Services/ARM996HS/Index.html

  with some plots here:
http://www.handshakesolutions.com/assets/downloadablefile/ARM996HS_leaflet_feb06-13004.pdf

  They don't mention the Vcc of the compared 968E-S, but the Joules and 
EMC look compelling, as does the 'self tracking' aspect of Async.

  They also have an Async 8051
http://www.handshakesolutions.com/Products_Services/HT-80C51/Index.html

  -jg


Article: 96677
Subject: Re: NMEA Decoder/Display
From: Jim Granville <no.spam@designtools.co.nz>
Date: Thu, 09 Feb 2006 10:41:16 +1300
al99999 wrote:

>>PicoBlaze should handle it if it's not too complex.  You may run out
>>of instruction space (the 8-bit PicoBlaze for Spartan 3 is limited to
>>1K instructions if memory serves me right).  This does not require
>>the EDK and is published as a reference design with source.
>>
> 
> 
> Is there a C compiler for PicoBlaze?

:) - No, not at that code size / core resource level...
These are tiny cores, intended for simple SW-assisting tasks.

There are assemblers, and the AS MacroAssembler now supports both the
Xilinx PicoBlaze, and the open source Lattice Mico8.

See
http://john.ccac.rwth-aachen.de:8000/as/download.html

-jg



Article: 96678
Subject: Re: latest XILINX WebPack is totally broken
From: Rich Grise <richgrise@example.net>
Date: Wed, 08 Feb 2006 21:46:34 GMT
On Tue, 07 Feb 2006 16:20:49 -0800, Brian Davis wrote:

> Sylvain Munaut wrote:
>>
>> I wanted to submit a webcase but when I login into the webcase system
>> all I get is "Internal Server Error" ...
>>
>  I've been having similar problems trying to look up an old webcase
> for a couple of weeks now, and either :
> 
>  a) can't log in
> 
> or
> 
>  b) can log in, search, and find the webcase title; but when I click
>    on the webcase link: "The page cannot be displayed"
> 
> Brian

Sounds like, after having released 8.1, they sent the programmers to the
Bahamas or something. ;-)

Thanks!
Rich



Article: 96679
Subject: Which workstation or server should I take to build a state-of-the-art
From: Roel <electronics_designer@hotmail.com>
Date: Wed, 08 Feb 2006 23:57:06 +0100
I was wondering which PC configuration is the best for:
Running ISE
Running Modelsim
Running Synplify Premier

Just to simulate and synthesize / place and route complex Xilinx FPGAs 
like the V4 FX100s.

Would it be Intel or AMD? 32-bit or 64-bit? Multi-core? 
Hyper-Threading? Would 4GB be enough? Dual-channel memory, I guess? Linux 
or Windows? Which filesystem? Other things that make sense?

It would be nice just to have a list of FPGA CAE tool benchmark results 
for some nice servers/workstations. Is such a list available?

Roel

Article: 96680
Subject: Re: why does speed grade effect VHDL program??
From: "ernie" <ernielin@gmail.com>
Date: 8 Feb 2006 15:42:08 -0800
Hi,

One thing I would try is to declare "CLK" as a global wire.  You can do
this in Quartus by opening the Assignment Editor, entering "CLK" as the
From name, and then selecting "Global Signal" under Assignment Name.
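For scripted flows, the same assignment can, I believe, be placed directly in the project's .qsf file; treat this as a sketch, since the exact assignment-value string varies by Quartus version:

```tcl
# Quartus .qsf fragment: promote CLK onto a global routing line.
# The "Global Clock" value string is an assumption -- check the
# Assignment Editor in your Quartus version for the exact name.
set_instance_assignment -name GLOBAL_SIGNAL "Global Clock" -to CLK
```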

Cheers.


Article: 96681
Subject: Re: porting linux on ml403
From: Peter Ryser <peter.ryser@xilinx.com>
Date: Wed, 08 Feb 2006 15:50:56 -0800
On the ML403 you could start with the .config file in the reference 
design at 
http://www.xilinx.com/products/boards/ml403/files/ml403_emb_ref_ppc_71.zip

- Peter

> I'd like a copy of your .config file?


Article: 96682
Subject: Re: porting linux on ml403
From: Peter Ryser <peter.ryser@xilinx.com>
Date: Wed, 08 Feb 2006 15:51:55 -0800
For example, see the ML403 User Guide, p. 37:
http://www.xilinx.com/bvdocs/userguides/ug082.pdf

- Peter

ramesh wrote:

> Hi ,
> Do you have any kind of documentation or any URL where i can find how
> to port linux on ML403?
> 
> Ramesh
> gnathita wrote:
> 
> 
>>Hello,
>>
>>I ported linux 2.4_devel to the ml403 board.
>>
>>The zImage that you get is fine for loading into the Virtex4 PPC. The
>>only thing you need to do is "cp" it to another directory with the .elf
>>extension.
>>
>>I integrated it with my download.bit into an .ace file and it works ok
>>both for the reference design and for my custom design (a basic one).
>>
>>Feel free to ask me about what I did if you need help.
>>Paula
> 
> 


Article: 96683
Subject: Re: Which workstation or server should I take to build a state-of-the-art FPGA CAE tool workstation?
From: Josh Rosen <bjrosen@polybusPleaseDontSPAMme.com>
Date: Wed, 08 Feb 2006 19:15:50 -0500
On Wed, 08 Feb 2006 23:57:06 +0100, Roel wrote:

> I was wondering which PC configuration is the best for:
> Running ISE
> Running Modelsim
> Running Synplify Premier
> 
> Just to simulate and synthesize / place and route complex Xilinx FPGAs 
> like the V4 FX100s.
> 
> Would it be Intel or AMD? 32-bit or 64-bit? Multi-core? 
> Hyper-Threading? Would 4GB be enough? Dual-channel memory, I guess? Linux 
> or Windows? Which filesystem? Other things that make sense?
> 
> It would be nice just to have a list of FPGA CAE tool benchmark results 
> for some nice servers/workstations. Is such a list available?
> 
> Roel

Athlon64 with 1M caches. I haven't updated these for a while, but the
benchmarks are still relevant:

 http://www.polybus.com/linux_hardware/index.htm

What's not on there are dual cores; however, I can tell you that the Athlon
64 X2 4400+ runs about 10% faster than the 3400+ on single-threaded
applications, which is to be expected because each core is basically a
3400+, i.e. 2.2GHz with a 1M cache, but it has a much faster main memory
system (dual channel vs. single channel). The X2s handle multiprocessing
very well; I typically run simultaneous simulations and place-and-routes.

As you can see from my benchmarks, a 1M cache on the A64 is extremely
important. There is a 2-to-1 difference in performance on NCVerilog
between a 1M A64 and a 1/2M A64.

I'm currently using an X2 4400+ with 4G of RAM and 64 bit Fedora Core 4.
I'm running the latest Xilinx and Altera tools as well as NCSim. I highly
recommend this setup for FPGA development.


Article: 96684
Subject: Re: ISE Simulator
From: "Brad Smallridge" <bradsmallridge@dslextreme.com>
Date: Wed, 8 Feb 2006 16:28:58 -0800
OK, I have it now.  Enter the testbench waveform, run the simulation,
open up the UUT, drag and drop, then restart. Thanks. 



Article: 96685
Subject: Re: Async Processors
From: wtxwtx@gmail.com
Date: 8 Feb 2006 18:37:01 -0800
Hi,
Having always dealt with clock cycles, I am really surprised to learn
how a clockless CPU works. Amazing!

Weng


Article: 96686
Subject: Re: NMEA Decoder/Display
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 8 Feb 2006 18:59:01 -0800
al99999 wrote:
> Thanks for all the advice.  I have a Xilinx spartan 3 starter kit that
> i'd like to use to implement this, but not EDK.  Any suggestions for a
> simple CPU to use on the FPGA?

Actually, I think I would try doing it as a state machine before
tackling the complexity of adding a CPU with memory and I/O, then
trying to get a program written, compiled or assembled, loaded into
memory, and finding a way to debug the whole thing.  If you had a
working CPU with a fully integrated toolset, that would be different.
But the sequence you are parsing should be pretty simple to do in an
FSM (finite state machine).

What you do to update the display is a different matter.  But then
that might not be easy in software either.  What is your display
task like?
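To illustrate how small the parsing side of such an FSM is, here is a behavioral software model of a byte-at-a-time header matcher. The "$GPGGA," header is just an illustrative choice (the original post never names a sentence), and a hardware FSM would follow the same state/transition structure:

```python
# State-machine sketch for recognizing the start of an NMEA sentence.
# The state is simply the number of header bytes matched so far.
HEADER = b"$GPGGA,"  # hypothetical sentence choice

def feed(state, byte):
    """Advance the matcher by one received byte.

    Returns (next_state, header_complete)."""
    if byte == HEADER[state]:
        state += 1
        if state == len(HEADER):
            return 0, True   # full header seen; payload bytes follow
        return state, False
    # Mismatch: restart, but a '$' always re-arms the matcher
    return (1 if byte == HEADER[0] else 0), False
```

Each quoted state is one-hot or binary-encoded in hardware, and the byte compare is against a constant, so the whole matcher is a handful of LUTs plus a small counter.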


Article: 96687
Subject: Parallel NCO (DDS) in Spartan3 for clock synthesis - highest possible speed?
From: "PeterC" <peter@geckoaudio.com>
Date: 8 Feb 2006 19:08:26 -0800

Gurus,

I have built and tested a numerically-controlled oscillator (clock
generator) using a simple phase accumulator (adder) and two registers.
One register contains the tuning word (N), and the other is used in the
feedback loop into the second input of the adder.

I take the MSB of the feedback register as my synthesised clock. I am
generating sub-50kHz clock frequencies by clocking the feedback
register at 100 MHz. The accumulator is a 32-bit adder, as is the
feedback register (of course). It works nicely on a board (my tuning word
comes from a processor chip, and my spectrum analyzer tells the truth
when I look at my MSB-generated clock).
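As a sanity check on the arithmetic, the accumulator can be modeled in a few lines of software. This is a behavioral sketch only: the 100 MHz clock and 32-bit width are taken from the post, while the 50 kHz target in the note below is merely illustrative:

```python
# Behavioral model of the phase-accumulator NCO described above.
F_CLK = 100_000_000      # accumulator clock, Hz (from the post)
BITS = 32                # accumulator width (from the post)
MASK = (1 << BITS) - 1

def tuning_word(f_out):
    """Tuning word N such that f_out ~= N * F_CLK / 2**BITS."""
    return round(f_out * (1 << BITS) / F_CLK)

def msb_stream(n, cycles):
    """MSB of the accumulator on each clock: the synthesized clock,
    whose edges jitter by up to one F_CLK period."""
    acc, out = 0, []
    for _ in range(cycles):
        acc = (acc + n) & MASK
        out.append(acc >> (BITS - 1))  # MSB is the output clock
    return out
```

For a 50 kHz target, tuning_word(50_000) gives N = 2147484 and an actual output of N*F_CLK/2**32, about 50000.008 Hz; the jitter appears as rising edges of msb_stream landing one F_CLK period early or late, which is exactly what the parallel/MUX scheme is meant to reduce.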

To reduce the jitter I would like to run two or more phase accumulators
in parallel which are clock-enabled on every-other clock cycle (as per
Ray Andraka's suggestion from the "how to speed up my accumulator" post
by Moti in Dec 2004) and then switch between the MSBs of each
accumulator using a MUX on the MSBs.

The problem then comes down to how fast I can switch the MUX - the
faster the better.

1. Is the Xilinx CoreGen 1-bit MUX a good option?

2. For a 4-input 1-output MUX I would need a 2 bit counter counting the
select word in sequence 00, 01, 10, 11, 00 .... - how fast could this
be done?

3. What about using a fast parallel-to-serial converter approach?
(Feeding the outputs of each NCO into a shift register and then
blasting out the bits really fast to a pin - effectively doing
round-robin switching between the MSB of each NCO.)

I have designed (but not yet implemented) this scheme, and I would like
some advice on how best to do this.

I look forward to everyone's replies!

Cheers,
PeterC.


Article: 96688
Subject: Re: Protected power calculation spread sheets
From: paul.leventis@gmail.com
Date: 8 Feb 2006 19:09:28 -0800
Hi Rick,

I am guessing you are trying to use Altera's Early Power Estimator,
since as best as I know Xilinx only has the web thing.  Please send me
the details on your problem -- the anti-virus software you are using,
exact error message, and version & target architecture of the EPE you
are using.

Sheet protection & code hiding are used for a few reasons.  First, to
protect our users from themselves -- we would probably have a lot of
"bugs", or worse "bad estimates" that were actually due to a
copy-and-paste gone mad.  Trust me, I've done that to myself enough
times with the unprotected version.  Second, we would rather not have
to explain exactly how everything works under the hood, and we wouldn't
want to share those workings with our competitors.  Sure, password
cracking is readily available, but I am unsure of the legality (never
mind the ethics) of using such a technique, particularly for
competitive analysis purposes.

Regards,

Paul Leventis
Altera Corp.


Article: 96689
Subject: Re: Async Processors
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 8 Feb 2006 19:35:56 -0800
Jim Granville wrote:
> Further to an earlier thread on ASYNC design, and cores, this in the
> news :
> http://www.eet.com/news/design/showArticle.jhtml?articleID=179101800
>
>   and a little more is here
> http://www.handshakesolutions.com/Products_Services/ARM996HS/Index.html
>
>   with some plots here:
> http://www.handshakesolutions.com/assets/downloadablefile/ARM996HS_leaflet_feb06-13004.pdf
>
>   They don't mention the Vcc of the compared 968E-S, but the Joules and
> EMC look compelling, as does the 'self tracking' aspect of Async.
>
>   They also have an Async 8051
> http://www.handshakesolutions.com/Products_Services/HT-80C51/Index.html

I seem to recall participating in a discussion of asynch processors a
while back and came to the conclusion that they had few advantages in
the real world.  The claim of improved speed is a red herring.  The
clock cycle of a processor is fixed by the longest delay path which is
at lowest voltage and highest temperature.  The same is true of the
async processor, but the place where you have to deal with the
variability is at the system level, not the clock cycle.  So at room
temperature you may find that the processor runs faster, but under
worst case conditions you still have to get X amount of computations
done in Y amount of time.  The two processors will likely be the same
speed or the async processor may even be slower.  With a clocked
processor, you can calculate exactly how fast each path will be and
margin is added to the clock cycle to deal with worst case wafer
processing.  The async processor has a data path and a handshake path
with the handshake being designed for a longer delay.  This delay delta
also has to have margin and likely more than the clocked processor to
account for two paths.  This may make the async processor slower in the
worst case conditions.

Since your system timing must work under all cases, you can't really
use the extra computations that are available when not running under
worst case conditions, unless you can do SETI calculations or something
that is not required to get done.

I can't say for sure that the async processor does not use less power
than a clocked processor, but I can't see why that would be true.  Both
are clocked.  The async processor is clocked locally and dedicates lots
of logic to generating and propagating the clock.  A clocked chip just
has to distribute the clock.  The rest of the logic is the same between
the two.

I suppose that the async processor does have an advantage in the area
of noise.  As SOC designs add more and more analog and even RF onto the
same die, this will become more important.  But if EMI with the outside
world is the consideration, there are techniques to spread the spectrum
of the clock that reduce the generated EMI.  This won't help on-chip
because each clock edge generates large transients which upset analog
signals.

I can't comment on the data provided by the manufacturer.  I expect
that you can achieve similar results with very aggressive clock
management.  I don't recall the name of the company, but I remember
recently reading about one that has cut CPU power significantly that
way.  I think they were building a processor to power a desktop
computer and got Pentium 4 processing speeds at just 25 Watts, compared
to 80+ Watts for the Pentium 4.  That may not carry over well to the
embedded world, where there is less parallelism.  So I am not a convert
to async processing as yet.


Article: 96690
Subject: Re: Protected power calculation spread sheets
From: "rickman" <spamgoeshere4@yahoo.com>
Date: 8 Feb 2006 19:46:26 -0800
paul.leventis@gmail.com wrote:
> Hi Rick,
>
> I am guessing you are trying to use Altera's Early Power Estimator,
> since as best as I know Xilinx only has the web thing.  Please send me
> the details on your problem -- the anti-virus software you are using,
> exact error message, and version & target architecture of the EPE you
> are using.
>
> Sheet protection & code hiding are used for a few reasons.  First, to
> protect our users from themselves -- we would probably have a lot of
> "bugs", or worse "bad estimates" that were actually due to a
> copy-and-paste gone mad.  Trust me, I've done that to myself enough
> times with the unprotected version.  Second, we would rather not have
> to explain exactly how everything works under the hood, and we wouldn't
> want to share those workings with our competitors.  Sure, password
> cracking is readily available, but I am unsure of the legality (never
> mind the ethics) of using such a technique, particularly for
> competitive analysis purposes.

Paul,

Thanks for your interest.  Yes, this is an Altera spreadsheet. I seem
to recall having one from Xilinx as well, but that would be pretty old
by now. This one is old too; I'm just a pack rat when it comes to my
hard drive.

I'm not going to worry about it further.  These are only two blips in a
very lengthy error report.  The AVS is Sophos, and it makes multiple
reports on some files when they are accessed, listing the same error
several hundred times.  I'm not sure what is up with that, but it may
have to do with AV scans being done on the same hard drive by two
different machines at the same time.

It just happened that these two files were the first in the report so I
tackled them first.  I have since moved on to more fruitful error
reports.


Article: 96691
Subject: Re: Async Processors
From: Jim Granville <no.spam@designtools.co.nz>
Date: Thu, 09 Feb 2006 18:20:12 +1300
rickman wrote:

> Jim Granville wrote:
> 
>>Further to an earlier thread on ASYNC design, and cores, this in the
>>news :
>>http://www.eet.com/news/design/showArticle.jhtml?articleID=179101800
>>
>>  and a little more is here
>>http://www.handshakesolutions.com/Products_Services/ARM996HS/Index.html
>>
>>  with some plots here:
>>http://www.handshakesolutions.com/assets/downloadablefile/ARM996HS_leaflet_feb06-13004.pdf
>>
>>  They don't mention the Vcc of the compared 968E-S, but the Joules and
>>EMC look compelling, as does the 'self tracking' aspect of Async.
>>
>>  They also have an Async 8051
>>http://www.handshakesolutions.com/Products_Services/HT-80C51/Index.html
> 
> 
> I seem to recall participating in a discussion of asynch processors a
> while back and came to the conclusion that they had few advantages in
> the real world.  The claim of improved speed is a red herring.  The
> clock cycle of a processor is fixed by the longest delay path which is
> at lowest voltage and highest temperature.  The same is true of the
> async processor, but the place where you have to deal with the
> variability is at the system level, not the clock cycle.  So at room
> temperature you may find that the processor runs faster, but under
> worst case conditions you still have to get X amount of computations
> done in Y amount of time.

Yes, but systems commonly spend a LOT of time waiting on external
events, or on timed events.

> The two processors will likely be the same
> speed or the async processor may even be slower.  With a clocked
> processor, you can calculate exactly how fast each path will be and
> margin is added to the clock cycle to deal with worst case wafer
> processing.  The async processor has a data path and a handshake path
> with the handshake being designed for a longer delay.  This delay delta
> also has to have margin and likely more than the clocked processor to
> account for two paths.

  Why? In the clocked case, you have to spec to cover process spreads,
and also Vcc and temperature. That's three spreads.
  The async design self-tracks all three, and the margin is there by ratio.


> This may make the async processor slower in the
> worst case conditions.
> 
> Since your system timing must work under all cases, you can't really
> use the extra computations that are available when not running under
> worst case conditions, unless you can do SETI calculations or something
> that is not required to get done.
> 
> I can't say for sure that the async processor does not use less power
> than a clocked processor, but I can't see why that would be true.  

You did look at their Joule plots?

> Both
> are clocked.  The async processor is clocked locally and dedicates lots
> of logic to generating and propagating the clock. 

Their gate-count comparisons suggest this cost is not as great as one
would first think.

> A clocked chip just has to distribute the clock.  

... and that involves massive clock trees, and amps of clock-driver 
spikes, in some devices... (not to mention electromigration issues...)

> The rest of the logic is the same between
> the two.
> 
> I suppose that the async processor does have an advantage in the area
> of noise.  

yes.  [and probably makes some code-cracking much harder...]

> As SOC designs add more and more analog and even RF onto the
> same die, this will become more important.  But if EMI with the outside
> world is the consideration, there are techniques to spread the spectrum
> of the clock that reduce the generated EMI.  This won't help on-chip
> because each clock edge generates large transients which upset analog
> signals.
> 
> I can't comment on the data provided by the manufacturer.  I expect
> that you can achieve similar results with very aggressive clock
> management.  

Perhaps, in the limiting case, yes - but you have two problems:
a) That is a LOT of NEW system overhead to manage all that aggressive
clock management...
b) The async core does this clock management for free - it is part of
the design.

> I don't recall the name of the company, but I remember
> recently reading about one that has cut CPU power significantly that
> way.  I think they were building a processor to power a desktop
> computer and got Pentium 4 processing speeds at just 25 Watts compared
> to 80+ Watts for the Pentium 4.  

  Intel are now talking of Multiple/Split Vccs on a die, including
some mention of magnetic layers, and inductors, but that is horizon
stuff, not their current volume die.
  I am sure they have an impressive road map, as that is one thing that 
swung Apple... :)


> That may not convey well to the
> embedded world where there is less paralellism.  So I am not a convert
> to async processing as yet.

  I'd like to see a more complete data sheet, and some real silicon, but 
the EMC plot of the HT80C51 running identical code is certainly an 
eye-opener (if it is a true comparison).

  It is nice to see (pico)Joules/opcode quoted, and that is the right
unit to be thinking in.

-jg




Article: 96692
Subject: Re: Async Processors
From: fpga_toys@yahoo.com
Date: 8 Feb 2006 21:31:57 -0800

rickman wrote:
>The two processors will likely be the same
> speed or the async processor may even be slower.  With a clocked
> processor, you can calculate exactly how fast each path will be and
> margin is added to the clock cycle to deal with worst case wafer
> processing.  The async processor has a data path and a handshake path
> with the handshake being designed for a longer delay.  This delay delta
> also has to have margin and likely more than the clocked processor to
> account for two paths.  This may make the async processor slower in the
> worst case conditions.

There are a lot of different async technologies; not all suffer from
this. Dual-rail with an active ack does not rely on the handshake
having a longer time to envelope the data path's worst case. Phased
Logic designs are one example.
> Since your system timing must work under all cases, you can't really
> use the extra computations that are available when not running under
> worst case conditions, unless you can do SETI calculations or something
> that is not required to get done.

Using dual-rail with ack, there is no worst-case design consideration
internal to the logic... it's just functionally correct by design at
any speed. So if the chip is running fast, so does the logic, up until
it must synchronize with the outside world.

> I can't say for sure that the async processor does not use less power
> than a clocked processor, but I can't see why that would be true.  Both
> are clocked.  The async processor is clocked locally and dedicates lots
> of logic to generating and propagating the clock.  A clocked chip just
> has to distribute the clock.  The rest of the logic is the same between
> the two.

For fine-grained async, there is very little cascaded logic, and as
such very little transitional glitching in comparison to the relatively
deep combinatorial paths that are clocked. This transitional glitching
at clock edges consumes more power than the best-case behavioral
picture of clean transitions of all signals at clock edges with no
propagation or routing delays.

For coarse-grained async, the advantage obviously goes away.

> I suppose that the async processor does have an advantage in the area
> of noise.  As SOC designs add more and more analog and even RF onto the
> same die, this will become more important.  But if EMI with the outside
> world is the consideration, there are techniques to spread the spectrum
> of the clock that reduce the generated EMI.  This won't help on-chip
> because each clock edge generates large transients which upset analog
> signals.

By design, a clocked chip creates a distribution of additive current
spikes following clock edges, even if spread spectrum is used. This is
simply less of a problem, if a problem at all, in async designs. Async
has a much better chance of creating a larger DC component in the power
demand by time-spreading transitions, so that the on-chip capacitance
can filter the smaller transition spikes, instead of the many-frequency
AC components that you get with clocked designs.

In the whole discussion about the current at the center of the ball
array and DC currents, this was the point that was missed. If you slow
the clock down enough, the current will go from zero, to a peak shortly
after a clock edge, and back to zero, with any clocked design. To get
the current profile to maintain a significant DC level for dynamic
currents requires carefully balancing multiple clock domains and using
deeper than one level of LUTs with long routing to time-spread the
clock currents.  Very regular designs, with short routing and a single
LUT depth, will generate a dynamic current spike 1-3 LUT delays from
the clock transition. On small chips which do not have a huge clock net
skew, this means most of the dynamic current will occur in a two- or
three-LUT-delay window following clock transitions. Larger designs with
a wide distribution of logic levels and routing delays flatten this
distribution out.

Dual-rail-with-ack designs just completely avoid this problem.


Article: 96693
Subject: Re: BGA central ground matrix
From: fpga_toys@yahoo.com
Date: 8 Feb 2006 21:39:19 -0800

Jim Granville wrote:
> Please do, we can agree there is an effect, my antennae just question
> how much of an effect at DC ?.
>
>   You still have to satisfy ohms law, so any push effects that favour
> flow, have to model somehow as mV(uV) generators....
>   To skew Ball DC currents 7/8 or 15/16, frankly sounds implausible, and
> maybe the models there forgot to include resistance balancing effects ?
>   [ ie do not believe everything you are 'told' ]

The problem is that there may not be ANY DC component.  Consider slowly
lowering the clock rate so that all logic paths settle well before the
next clock edge. In this case the current goes from zero, to one or
more peaks, and back to zero, for each clock cycle.

Given that the clock in this stable case will be in the megahertz
range, it's quite justified to say that the DC effects may just
completely vanish, or be insignificant at best.

With some very careful design, using multiple clock domains and phased
clocks to time-spread (with overlap) the distribution of dynamic
currents, you can create some DC component, given the minimal filtering
effects of the on-die capacitances.

Async designs have a much better chance of creating some DC component
out of the dynamic currents.

There might be a DC path in the I/Os from pull-ups, pull-downs, and
slower clock rates.


Article: 96694
Subject: vhdl to edif
From: "Leow Yuan Yeow" <nordicelf@msn.com>
Date: Wed, 8 Feb 2006 22:37:59 -0800
Hi, may I know whether there is any free program that is able to convert a 
VHDL file to a .edf file? I am unable to find such options in the Xilinx ISE 
Navigator. I have tried using the Xilinx ngc2edif converter, but when I try 
to generate a bit file from the .edf file it says:

ERROR:NgdBuild:766 - The EDIF netlist 'synthetic2.edf' was created by the
Xilinx
   NGC2EDIF program and is not a valid input netlist.  Note that this EDIF
   netlist is intended for communicating timing information to third-party
   synthesis tools. Specifically, no user modifications to the contents of
this
   file will effect the final implementation of the design.
ERROR:NgdBuild:276 - edif2ngd exited with errors (return code 1).
ERROR:NgdBuild:28 - Top-level input design file "synthetic2.edf" cannot be
found
   or created. Please make sure the source file exists and is of a
recognized
   netlist format (e.g., ngo, ngc, edif, edn, or edf).

Any help is appreciated!
YY 



Article: 96695
Subject: Re: question for the EDK users out there...
From: Greg Brown <gbrown@xilinx.REMOVETHISBIT.com>
Date: Wed, 08 Feb 2006 23:43:53 -0700
Hi Mordehay:

Go to www.xilinx.com/edk

On the lower right, under documentation, select the Embedded Design 
Examples. In the table of examples, you will find MB - ChipScope Design.

-Greg

me_2003@walla.co.il wrote:
> Hi John,
> Thanks for the answer, is there anywhere where I can get a reference
> design or appnote
> that describes such a design (CS + MDM) ?
> Thanks, Mordehay.
> 

Article: 96696
Subject: Re: Software reset for the MicroBlaze
From: "Simon Peacock" <simon$actrix.co.nz>
Date: Thu, 9 Feb 2006 19:45:19 +1300
That would be a hardware reset .. not software :-).... but it depends on
what you call a hard reset

Simon

"John Williams" <jwilliams@itee.uq.edu.au> wrote in message
news:43E985C4.8010303@itee.uq.edu.au...
> Simon Peacock wrote:
> > Jump to the reset vector ?
>
> You need to generate an OPB reset signal, otherwise your peripherals are
> not reset.
>
> A single-bit GPIO driven through a pulse widening state machine, with
> its output tied to the OPB_Rst signal, is one simple way.
>
> John
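John's pulse-widening suggestion can be sketched behaviorally. The following is a hypothetical Python model (not the actual VHDL/EDK implementation, and `PulseWidener`, `width`, and `tick` are illustrative names): a single-cycle GPIO pulse is stretched to N clock cycles, so a signal like OPB_Rst stays asserted long enough for the peripherals to see it.

```python
class PulseWidener:
    """Behavioral model of a pulse-widening state machine:
    a single-cycle trigger is stretched to `width` clock cycles."""

    def __init__(self, width):
        self.width = width
        self.count = 0  # remaining cycles to hold the output asserted

    def tick(self, trigger):
        """Advance one clock cycle; return the widened output."""
        if trigger:
            self.count = self.width  # reload on every trigger pulse
        out = self.count > 0
        if self.count > 0:
            self.count -= 1
        return out

# A one-cycle GPIO pulse becomes a 4-cycle reset pulse:
pw = PulseWidener(width=4)
stimulus = [1, 0, 0, 0, 0, 0, 0]
response = [pw.tick(t) for t in stimulus]
# response -> [True, True, True, True, False, False, False]
```

In hardware this would be a small down-counter clocked by the OPB clock, with its non-zero output driving OPB_Rst.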



Article: 96697
Subject: Re: BGA central ground matrix
From: "Peter Alfke" <alfke@sbcglobal.net>
Date: 8 Feb 2006 23:02:36 -0800

fpga_toys@yahoo.com wrote:
> >
> The problem is that there may not be ANY DC component.

Well, well. It does not take a genius to find out that all current (or
power) consumption ends up as a (pulsating) DC current through the
chip, from Vcc to ground.
Just imagine the transistors as simple switches, and the loads as
capacitors. When driving High, charge (current) flows in from Vcc. When
driving low, that same charge gets dumped into the ground leads.
I call that DC current. There isn't even any reversal of the current
direction.
Isn't that pretty basic?
Peter Alfke
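Peter's switch-and-capacitor picture can be put into numbers. A minimal sketch with hypothetical figures (not measured data): if C is the total switched capacitance, each active cycle moves a charge C*Vcc from Vcc to ground, so the average supply current is I = C * Vcc * f * activity.

```python
def average_dc_current(c_switched, vcc, f_clk, activity=1.0):
    """Average DC supply current from dynamic switching:
    each active cycle moves charge C*Vcc from Vcc to ground,
    so the mean current is C * Vcc * f * activity."""
    return c_switched * vcc * f_clk * activity

# Hypothetical example: 1 nF of switched capacitance at 1.2 V and
# 100 MHz with 25% activity -> 30 mA average Vcc-to-ground current.
i_avg = average_dc_current(1e-9, 1.2, 100e6, activity=0.25)
print(f"{i_avg * 1e3:.1f} mA")  # prints "30.0 mA"
```

The instantaneous current is a train of spikes, but its mean (the "pulsating DC" Peter describes) never reverses sign.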


Article: 96698
Subject: Re: Virtex4 Powerdown, Vcco questions
From: Sean Durkin <smd@despammed.com>
Date: Thu, 09 Feb 2006 08:08:00 +0100
austin wrote on 08.02.2006 20:04:
> Sean,
> 
> Nothing bad happens.
> 
> The IO banks draw 2 to 8 mA from 3.3V while remaining tristate. 
OK, perfect, just as I would've expected.

> The center banks are smaller and draw 2 mA, the larger banks all draw ~ 8mA
> each.
Very good to know, thanks for the info!

But, another question: What happens if I turn Vccint and Vccaux back on
again? According to the User Guide, it doesn't matter if Vcco is powered
up before Vccaux:

"The VCCAUX and VCCO power supplies can be turned on in
any sequence, though VCCAUX must powered on before or
with VCCO for the specifications shown in Table 5. Xilinx
does not specify the current when VCCAUX is not powered
on first."

But Vccint isn't mentioned at all here... So, can I just power up the
FPGA-core again and reprogram the part (assuming I don't violate Tconfig)?

cu,
Sean

Article: 96699
Subject: Re: question for the EDK users out there...
From: Peter Ryser <peter.ryser@xilinx.com>
Date: Wed, 08 Feb 2006 23:27:25 -0800
> Unfortunately it doesn't work in V4 ES (early silicon) parts due to a
> silicon bug.  I assume it was fixed for production silicon.

That was only a problem for very very early LX silicon and has been 
fixed for quite some time now.

- Peter



