Messages from 62825

Article: 62825
Subject: Re: FPGAs and DRAM bandwidth
From: Phil Hays <SpamPostmaster@attbi.com>
Date: Sat, 08 Nov 2003 18:46:04 GMT
Fernando wrote:

> How fast can you really get data in and out of an FPGA?
> With current pin layouts it is possible to hook four (or maybe even
> five) DDR memory DIMM modules to a single chip.
> 
> Let's say you can create memory controllers that run at 200MHz (as
> claimed in an Xcell article), for a total bandwidth of
> 5(modules/FPGA) * 64(bits/word) * 200e6(cycles/sec) * (2words/cycle) *
> (1byte/8bits)=
> 5*3.2GB/s=16GB/s
> 
> Assuming an application that needs more BW than this, does anyone know
> a way around this bottleneck? Is this a physical limit with current
> memory technology?

Probably can get a little better.  With a 2V8000 in a FF1517 package, 
there are 1,108 IOs.  (!)  If we shared address and control lines between 
banks (timing is easier on these lines), it looks to me like 11 DIMMs 
could be supported.

Data pins 64
DQS pins   8
CS,CAS,
RAS,addr  12 (with sharing)
        ====
          92

1108/92 = 11 with 100 pins left over for VTH, VRP, VRN, clock, reset, ...

Of course, the communication to the outside world would also need to go 
somewhere...


-- 
Phil Hays

Article: 62826
Subject: Re: latch and shift 15 bits.
From: Nate Goldshlag <nateg@remove_me_first_pobox.com>
Date: Sat, 08 Nov 2003 13:59:54 -0500
In article <184c35f9.0311070333.7a6acaae@posting.google.com>, Denis
Gleeson <dgleeson-2@utvinternet.com> wrote:

>  
> always @ (ACB_Decade_Count_Enable or OUT_Acquisition_Count or clear)
>    if(clear) 
>          Store_Trigger_Acquisition_Count <= 14'b0;
>    else   
>     begin
>       if(ACB_Decade_Count_Enable)  // event happened input is high.
>            Store_Trigger_Acquisition_Count <= OUT_Acquisition_Count;
>     end

You have a fundamental problem here - the design is not synchronous. 
If ACB_Decade_Count_Enable is a synchronous signal created by your
system clock then you have a race condition here.  If
OUT_Acquisition_Count changes before ACB_Decade_Count_Enable goes away,
you may not latch the proper data.

A better way would be the following:

always @(posedge clear or posedge clk)
   if (clear)
      Store_Trigger_Acquisition_Count <= 15'b0;
   else
      if (ACB_Decade_Count_Enable)
         Store_Trigger_Acquisition_Count <= OUT_Acquisition_Count;
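
A self-contained version of that suggestion (the port names and the 15-bit
width follow the thread; the module wrapper and comments are only a sketch):

module acq_count_latch (
    input  wire        clk,
    input  wire        clear,                     // asynchronous clear
    input  wire        ACB_Decade_Count_Enable,   // capture enable
    input  wire [14:0] OUT_Acquisition_Count,     // value to capture
    output reg  [14:0] Store_Trigger_Acquisition_Count
);

    // Capture happens only on the clock edge, so there is no race against
    // changes on OUT_Acquisition_Count while the enable is high.
    always @(posedge clear or posedge clk)
        if (clear)
            Store_Trigger_Acquisition_Count <= 15'b0;
        else if (ACB_Decade_Count_Enable)
            Store_Trigger_Acquisition_Count <= OUT_Acquisition_Count;

endmodule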

Article: 62827
Subject: Re: 0.13u device with 5V I/O
From: Eric Crabill <eric.crabill@xilinx.com>
Date: Sat, 08 Nov 2003 16:01:11 -0800

Hi,

> + 0.13u/150MHz core, but they manage to deliver 5V I/O,
> ADCs etc [ FPGA vendors could learn from this ]

I think your last comment is unfair.  I'm sure their 5V I/O
doesn't even begin to approach the performance levels that
you can obtain from the programmable I/O in recent FPGAs.

There's nothing impossible about 5V I/O using .13u (or 90 nm).
It just takes more processing steps, which means more $$$ to
build the chip.  On top of that, the structures that would
result may be good for 5V I/O, but not for 840 Mbps LVDS...

I believe the "features" available in recent FPGAs is the
result of market forces at work.

Eric

Article: 62828
Subject: Re: 0.13u device with 5V I/O
From: Jim Granville <jim.granville@designtools.co.nz>
Date: Sun, 09 Nov 2003 14:22:42 +1300
Eric Crabill wrote:
> 
> Hi,
> 
> > + 0.13u/150MHz core, but they manage to deliver 5V I/O,
> > ADCs etc [ FPGA vendors could learn from this ]
> 
> I think your last comment is unfair.  I'm sure their 5V I/O
> doesn't even begin to approach the performance levels that
> you can obtain from the programmable I/O in recent FPGAs.

 There are many different units to measure 'performance levels',
- if you choose VOLTS, then they actually exceed recent FPGA.
- if you choose MHz of LVDS IO, then clearly FPGA exceeds
  the uC, as the uC does not even offer LVDS....

> There's nothing impossible about 5V I/O using .13u (or 90 nm).
> It just takes more processing steps, which means more $$$ to
> build the chip.  On top of that, the structures that would
> result may be good for 5V I/O, but not for 840 Mbps LVDS...

 Can you think of an instance where a customer would need 
both at once, on the same pin ?

> I believe the "features" available in recent FPGAs is the
> result of market forces at work.

 That's one spin on it. A more realistic pathway may be 
(significant) process improvements, and market guesses :)
 
 The point is, if FPGA vendors wish to broaden their market with 
processor cores, and so play in the larger controller sandpit,
they are going to have to address issues that normally
go into the 'too hard / not enough market' reflex-box. 

 Over recent years, 5V IO has been added back to a number
of uC devices where 'process rush' meant it was 'improved out'.
 They found out the hard way that lack of 5V IO => fewer design wins.

 Error-correcting code storage (2MB Flash) was interesting to see 
in the Motorola uC, as I am sure that was not a zero-cost item, 
but the result of 'market forces' == customer demand for reliability.

 -jg

Article: 62829
Subject: Re: FPGAs and DRAM bandwidth
From: fortiz80@tutopia.com (Fernando)
Date: 9 Nov 2003 02:56:26 -0800
Sharing the control pins is a good idea; the only thing that concerns
me is the PCB layout.  This is not my area of expertise, but seems to
me that it would be pretty challenging to put (let's say) 10 DRAM
DIMMs and a big FPGA on a single board.

It can get even uglier if symmetric traces are required to each memory
sharing the control lines...(not sure if this is required)

Anyway, I'll start looking into it 

Thanks

Phil Hays <SpamPostmaster@attbi.com> wrote in message news:<3FAD39F6.5572E42C@attbi.com>...
> Fernando wrote:
> 
> > How fast can you really get data in and out of an FPGA?
> > With current pin layouts it is possible to hook four (or maybe even
> > five) DDR memory DIMM modules to a single chip.
> > 
> > Let's say you can create memory controllers that run at 200MHz (as
> > claimed in an Xcell article), for a total bandwidth of
> > 5(modules/FPGA) * 64(bits/word) * 200e6(cycles/sec) * (2words/cycle) *
> > (1byte/8bits)=
> > 5*3.2GB/s=16GB/s
> > 
> > Assuming an application that needs more BW than this, does anyone know
> > a way around this bottleneck? Is this a physical limit with current
> > memory technology?
> 
> Probably can get a little better.  With a 2V8000 in a FF1517 package, 
> there are 1,108 IOs.  (!)  If we shared address and control lines between 
> banks (timing is easier on these lines), it looks to me like 11 DIMMs 
> could be supported.
> 
> Data pins 64
> DQS pins   8
> CS,CAS,
> RAS,addr  12 (with sharing)
>         ====
>           92
> 
> 1108/92 = 11 with 100 pins left over for VTH, VRP, VRN, clock, reset, ...
> 
> Of course, the communication to the outside world would also need go 
> somewhere...

Article: 62830
Subject: Re: Building the 'uber processor'
From: jetmarc@hotmail.com (jetmarc)
Date: 9 Nov 2003 03:15:53 -0800
Links: << >>  << T >>  << A >>
> do you know about this nice stuff developed by Cradle
> (http://www.cradle.com) ?
> 
> They have developed something like an FPGA. But the PFUs 
> do not consist of generic logic blocks but small processors.

This reminds me of the PACT XPP, which is an array of ALUs with
reconfigurable interconnect.  Basically you replace the LUT
with a math component.

New ideas have a hard time, unless there's a real advantage
over traditional technology.  PACT tries to find their niche
by offering DSP IP, eg for the upcoming UMTS cellular market.

Here's their URL: http://www.pactcorp.com/

Marc

Article: 62831
(removed)


Article: 62832
Subject: Re: FPGAs and DRAM bandwidth
From: Marc Randolph <mrand@my-deja.com>
Date: Sun, 09 Nov 2003 15:08:08 GMT
Fernando wrote:

> Phil Hays <SpamPostmaster@attbi.com> wrote in message news:<3FAD39F6.5572E42C@attbi.com>...
> 
>>Fernando wrote:
>>
>>
>>>How fast can you really get data in and out of an FPGA?
>>>With current pin layouts it is possible to hook four (or maybe even
>>>five) DDR memory DIMM modules to a single chip.
>>>
>>>Let's say you can create memory controllers that run at 200MHz (as
>>>claimed in an Xcell article), for a total bandwidth of
>>>5(modules/FPGA) * 64(bits/word) * 200e6(cycles/sec) * (2words/cycle) *
>>>(1byte/8bits)=
>>>5*3.2GB/s=16GB/s
>>>
>>>Assuming an application that needs more BW than this, does anyone know
>>>a way around this bottleneck? Is this a physical limit with current
>>>memory technology?
>>
>>Probably can get a little better.  With a 2V8000 in a FF1517 package, 
>>there are 1,108 IOs.  (!)  If we shared address and control lines between 
>>banks (timing is easier on these lines), it looks to me like 11 DIMMs 
>>could be supported.
>>
>>Data pins 64
>>DQS pins   8
>>CS,CAS,
>>RAS,addr  12 (with sharing)
>>        ====
>>          92
>>
>>1108/92 = 11 with 100 pins left over for VTH, VRP, VRN, clock, reset, ...
>>
 >>Of course, the communication to the outside world would also need go
 >>somewhere...

Of course, the 2V8000 is REALLY expensive. I'm sure there is a pricing 
sweet spot where it makes sense to break it up into multiple smaller 
parts, providing both more pins and lower cost (something like two 
2VP30's or 2VP40's [between the two: 1288 to 1608 I/Os, depending on 
package]).  They could be interconnected using the internal SERDES. 
The SERDES could also be used for communicating with the outside world.

 > Sharing the control pins is a good idea; the only thing that concerns
 > me is the PCB layout.  This is not my area of expertise, but seems to
 > me that it would be pretty challenging to put (let's say) 10 DRAM
 > DIMMs and a big FPGA on a single board.

It may be challenging, but that is what is encountered when trying to 
push the envelope, as it appears you are trying to do.  This sometimes 
entails accepting a bit less design margin to fulfill the requirements 
in the allotted space or budget.  Knowing what you can safely give up, 
and where you can give it up, requires expertise (and so if you don't 
have that expertise, you'll need to find someone that does).

If you are really set on meeting the memory requirements, you may need 
to be open to something besides DIMM's (or perhaps make your own custom 
DIMM's).  A possible alternative: it looks like Toshiba is in the 
process of releasing their 512 Mbit FCRAM.  It supposedly provides 400 
Mbps per data bit (using 200 MHz DDR... not a problem for modern FPGAs).

 > It can get even uglier if symmetric traces are required to each memory
 > sharing the control lines...(not sure if this is required)

I don't know what tools/budget you have available to you.  Cadence 
allows you to put a bus property on as many nets as you want.  You can 
then constrain all nets that form that bus to be within X% of each other 
(in terms of length).

Good luck,

    Marc


Article: 62833
Subject: Re: FPGAs and DRAM bandwidth
From: nweaver@ribbit.CS.Berkeley.EDU (Nicholas C. Weaver)
Date: Sun, 9 Nov 2003 15:49:53 +0000 (UTC)
In article <2658f0d3.0311090256.21ce5a9a@posting.google.com>,
Fernando <fortiz80@tutopia.com> wrote:
>Sharing the control pins is a good idea; the only thing that concerns
>me is the PCB layout.  This is not my area of expertise, but seems to
>me that it would be pretty challenging to put (let's say) 10 DRAM
>DIMMs and a big FPGA on a single board.

Simple.  Use external registers for the control lines, and drive 4
registers which then drive 4 DIMMs each.  Adds a cycle of latency, but
so what?
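
A minimal Verilog sketch of the FPGA side of that scheme (signal names and
widths are assumptions, not taken from the thread): the shared
address/control outputs get one flop stage at the pins, and the external
registers that re-drive each group of DIMMs add the extra cycle mentioned
above.

module ctrl_fanout (
    input  wire        clk,
    input  wire [12:0] addr,      // address from the memory controller
    input  wire        ras_n,
    input  wire        cas_n,
    input  wire        we_n,
    output reg  [12:0] addr_q,    // registered copy driven to the external
    output reg         ras_n_q,   // registers, which each re-drive 4 DIMMs
    output reg         cas_n_q,
    output reg         we_n_q
);

    // One register stage at the FPGA pins; the external registers add a
    // second stage, so the controller state machine must issue commands
    // one cycle earlier than with directly attached DIMMs.
    always @(posedge clk) begin
        addr_q  <= addr;
        ras_n_q <= ras_n;
        cas_n_q <= cas_n;
        we_n_q  <= we_n;
    end

endmodule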
-- 
Nicholas C. Weaver                                 nweaver@cs.berkeley.edu

Article: 62834
Subject: Re: 0.13u device with 5V I/O
From: "Garden Gnome" <mjd03003@yahoo.com>
Date: Sun, 09 Nov 2003 17:48:15 GMT
Five volts would have been nice for some incremental respins of older
products. Instead I had to level translate (using an IDT device) which was
$$$, real estate etc. I definitely would have liked it to be supported in
the FPGA.


"Eric Crabill" <eric.crabill@xilinx.com> wrote in message
news:3FAD83C7.8C6E6C71@xilinx.com...
>
> Hi,
>
> > + 0.13u/150MHz core, but they manage to deliver 5V I/O,
> > ADCs etc [ FPGA vendors could learn from this ]
>
> I think your last comment is unfair.  I'm sure their 5V I/O
> doesn't even begin to approach the performance levels that
> you can obtain from the programmable I/O in recent FPGAs.
>
> There's nothing impossible about 5V I/O using .13u (or 90 nm).
> It just takes more processing steps, which means more $$$ to
> build the chip.  On top of that, the structures that would
> result may be good for 5V I/O, but not for 840 Mbps LVDS...
>
> I believe the "features" available in recent FPGAs is the
> result of market forces at work.
>
> Eric



Article: 62835
Subject: ASIC vs FPGA
From: "newbie" <nospam@nospam.org>
Date: Mon, 10 Nov 2003 07:59:05 +1300
Hi all

Can someone help in understanding the main difference between ASIC and FPGA.
I keep hearing both these terms and am not fully clear.
Is there a website explaning this?

Thanks



Article: 62836
Subject: Home grown CPU core legal?
From: bpride@monad.net (Bruce P.)
Date: 9 Nov 2003 11:30:24 -0800
I've always been interested in designing my own 8-bit CPU core in an
FPGA for educational purposes.  After visiting www.opencores.org, it
seems the easiest/most popular way to go about this is to make the CPU
core be compatible with an existing ISA (Instruction Set Architecture)
from an available device (e.g. 8051, PIC, etc.).  That way I could use
readily available development tools to write code, debug, create a hex
file, etc.

If by some chance I ever used my home-grown, ISA-compatible core in a
commercialized product, would there be legal issues?   Chances are I
would never use my own and would probably use a Nios or Microblaze
instead, but if I just needed a simple little core, it could prove
useful.

I know very little about the IP core business, but I've seen off the
shelf compatible CPU cores for sale, so I'm guessing these IP
companies must pay companies like Microchip when they sell a PIC
compatible core?

Just curious if anyone has any insight into how all this works. 

Thanks.

-Bruce

Article: 62837
Subject: Re: 0.13u device with 5V I/O
From: Eric Crabill <eric.crabill@xilinx.com>
Date: Sun, 09 Nov 2003 11:30:54 -0800

Hi,

> Five volts would have been nice for some incremental respins
> of older products. Instead I had to level translate (using an
> IDT device) which was $$$, real estate etc. I definitely would
> have liked it to be supported in the FPGA.

Don't get me wrong.  As a standard bus interface IP developer
(PCI, PCI-X, and now PCI Express...) I like 5V I/O even more
than the next guy.  I'd love to be able to directly support
5V PCI on newer Xilinx parts without external components.

What I was trying to point out is that the economic/market
reality of 5V support on newer FPGA devices has resulted in
a tradeoff: faster, low voltage I/O and less costly devices
at the expense of 5V I/O support.

I believe every major programmable logic manufacturer has
made this tradeoff.  It isn't Xilinx trying to alienate
users of 5V logic.  If someone can show me a commercial
FPGA at .15u or below that has real 5V I/O support, I'll
eat humble pie.

Like you pointed out, those who need a lot of 5V I/O end up
paying for it, either by using older parts (more $/logic)
or external (more $) components such as level translators.
It is the unfortunate cost of designing with I/O signaling
levels that are no longer mainstream.

Speaking entirely for myself,
Eric

Article: 62838
Subject: Re: 0.13u device with 5V I/O
From: Eric Crabill <eric.crabill@xilinx.com>
Date: Sun, 09 Nov 2003 11:58:40 -0800

Hi,

>  There are many different units to measure 'performance levels',
> - if you choose VOLTS, then they actually exceed recent FPGA.
> - if you choose MHz of LVDS IO, then clearly FPGA exceeds
>   the uC, as the uC does not even offer LVDS....

I'd agree with that, if you are looking at a 5V application
then high speed LVDS I/O probably doesn't get you very far...

>  Can you think of an instance where a customer would need
> both at once, on the same pin ?

For a general purpose FPGA, with general purpose programmable
I/O, you need this on every pin...  While the end user makes
a selection via the bitstream, the actual hardware has to be
capable of handling all possible configurations.

On recent devices, 5V I/O is gone...  I would not be surprised
if it's the same issue all over again with 3.3V in a few years.

As I had mentioned, it is possible to build a device to support
5V I/O.  That device will cost more for everyone, even if they
are not using 5V I/O.  Those I/O will also be slower.  I believe
few people would buy these parts, due to the higher cost and
lower performance.

There are other approaches -- things like dedicated banks of I/O
just for a specific purpose.  Xilinx uses this for the gigabit
serial transceivers in V2pro.  One could do something similar
but for a bank of 5V I/O.  For 5V I/O, it would still make the
devices more expensive.

I do not believe making general purpose devices more expensive
to cater to a declining market is a good business decision.  I
think, for better or worse, we're all being swept along by the
tide of economics.

> The point is, if FPGA wish to broaden their market with
> processor cores, and so play in the larger controller
> sandpit, they are going to have to address issues that
> normally go into the 'too hard / not enough market'
> reflex-box.

If there's "not enough market" I doubt anyone trying to make
money is going to address it.  If there were a significant
market, but there were technical hurdles, I am sure people
here, and at other FPGA vendors, would be researching a way
to cash in on it.

In the near future, I don't think the FPGA will be a direct
drop in replacement for a controller with tons of 5V I/O.

The vision of a programmable system is that the entire system
goes into the FPGA.  What might have been 5V signals between
all the modules are now low voltage signals running over the
internal FPGA routing, because almost everything is inside
the FPGA.  There will still be things outside.  But most of
those that use a large number of I/O (large memories, etc...)
are no longer designed with 5V I/O.  A quick survey of Micron,
Cypress, and IDT websites will confirm this.

For smaller RAMs (things like a 6116, etc...) those can be
implemented in the FPGA block memory.  So the need for these
things with high pincount 5V I/O goes away...

I'm not denying that you will need 5V I/O.  Just that you
probably don't need much, unless you're doing a legacy
design, and in that case you might anticipate having to
pay for a feature that is not in "mainstream" use anymore.
That's how it appears to me, at least in the FPGA market...

These are entirely my opinions,
Eric

Article: 62839
Subject: Re: ASIC vs FPGA
From: Rene Tschaggelar <none@none.none>
Date: Sun, 09 Nov 2003 21:17:18 GMT
newbie wrote:
> Hi all
> 
> Can someone help in understanding the main difference between ASIC and FPGA.
> I keep hearing both these terms and am not fully clear.
> Is there a website explaning this?

An ASIC (Application Specific Integrated Circuit) is a custom device,
whereas an FPGA is a generic multipurpose device.
The ASIC is cost efficient at high volumes, whereas an FPGA is
cost efficient for development and for smaller volumes.
Many an ASIC is developed as an FPGA first.

Rene
-- 
Ing.Buero R.Tschaggelar - http://www.ibrtses.com
& commercial newsgroups - http://www.talkto.net


Article: 62840
Subject: Re: 0.13u device with 5V I/O
From: "Garden Gnome" <mjd03003@yahoo.com>
Date: Sun, 09 Nov 2003 22:56:06 GMT
> What I was trying to point out is that the economic/market
> reality of 5V support on newer FPGA devices has resulted in
> a tradeoff: faster, low voltage I/O and less costly devices
> at the expense of 5V I/O support.


You do get the speed for these new I/O voltages - for new designs this is
great. I was wondering if Xilinx et al consider respins a design win as
well? I've been presented with the situation that if we can upgrade
performance/function on an existing 5V design, the benefits are faster time
to market and lower test risk. I believe the major concern for a 5V I/O
sink/source pin is capacitance, regardless of the speed it's used at. I'd
like to see 5V I/O, or an acceptable work-around, while still meeting the
requirements of the other speeds.



Article: 62841
Subject: Re: 0.13u device with 5V I/O
From: hmurray@suespammers.org (Hal Murray)
Date: Sun, 09 Nov 2003 23:43:04 -0000
> Don't get me wrong.  As a standard bus interface IP developer
> (PCI, PCI-X, and now PCI Express...) I like 5V I/O even more
> than the next guy.  I'd love to be able to directly support
> 5V PCI on newer Xilinx parts without external components.

What's the newest/best/cheapest part that's reasonable to
connect to old 5V PCI?

Is there a legal recipe using external parts?  I'd expect
that approach would have troubles meeting the loading rules.

-- 
The suespammers.org mail server is located in California.  So are all my
other mailboxes.  Please do not send unsolicited bulk e-mail or unsolicited
commercial e-mail to my suespammers.org address or any of my other addresses.
These are my opinions, not necessarily my employer's.  I hate spam.


Article: 62842
Subject: Re: FPGAs and DRAM bandwidth
From: Jeff Cunningham <jcc@sover.net>
Date: Mon, 10 Nov 2003 00:26:39 GMT
Fernando wrote:
> Sharing the control pins is a good idea; the only thing that concerns
> me is the PCB layout.  This is not my area of expertise, but seems to
> me that it would be pretty challenging to put (let's say) 10 DRAM
> DIMMs and a big FPGA on a single board.

Don't forget simultaneous switching considerations. Driving 640 pins at 
200 MHz would probably require a bit of cleverness. Maybe you could run 
different banks at different phases of the clock. Hopefully your app 
does not need to write all DIMMs at once.
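
One way to sketch that phase-staggering idea in Verilog (this assumes a
Virtex-II DCM and BUFG from the unisim library; port and signal names are
illustrative only, not from the thread):

module phase_stagger (
    input  wire        clk_in,    // reference clock, e.g. 200 MHz
    input  wire        rst,
    input  wire [63:0] data_a,    // data for DIMM group A
    input  wire [63:0] data_b,    // data for DIMM group B
    output reg  [63:0] dq_a,
    output reg  [63:0] dq_b,
    output wire        locked
);

    wire clk0_unbuf, clk180_unbuf, clk0, clk180;

    // DCM generates 0 and 180 degree phases of the input clock.
    DCM dcm_i (
        .CLKIN    (clk_in),
        .CLKFB    (clk0),
        .RST      (rst),
        .DSSEN    (1'b0),
        .PSEN     (1'b0),
        .PSINCDEC (1'b0),
        .PSCLK    (1'b0),
        .CLK0     (clk0_unbuf),
        .CLK180   (clk180_unbuf),
        .LOCKED   (locked)
    );
    BUFG bufg0   (.I(clk0_unbuf),   .O(clk0));
    BUFG bufg180 (.I(clk180_unbuf), .O(clk180));

    // Group A launches on CLK0, group B half a cycle later on CLK180, so
    // the two groups never switch their output drivers on the same edge.
    always @(posedge clk0)   dq_a <= data_a;
    always @(posedge clk180) dq_b <= data_b;

endmodule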

Jeff


Article: 62843
Subject: ISE 5.2 to 6.1
From: "colin hankins" <colinhankins@cox.net>
Date: Sun, 9 Nov 2003 18:29:46 -0800
I installed ISE 6.1 with the service pack and my Spartan IIe design that
worked fine under 5.2 no longer works in the timing simulation in 6.1. Also,
the number of Slices used increased by 10% as well as the TBUFS. I have
fiddled with almost every combination of options in the synth, map, and par
to try and recreate the better results I had in ISE 5.2. I talked to Xilinx
and they just told me to fiddle with the parameters some more. Has anyone
else had similar problems migrating a project from 5.2 to 6.1?

Regards,
Colin



Article: 62844
Subject: Re: FPGAs and DRAM bandwidth
From: "Robert Sefton" <rsefton@abc.net>
Date: Sun, 9 Nov 2003 18:29:59 -0800
Fernando -

Your instincts are right on with respect to the difficulty of fitting
that many DIMMs on a board and interfacing to them from a single FPGA.
Forget about it. The bottom line is that there's a trade-off between
memory size and speed, and memory is almost always the limiting factor
in system throughput. If you need lots of memory then DRAM is probably
your best/only option, and the max reasonable throughput is about what
you calculated, but even the 5-DIMM 320-bit-wide data bus in your
example would be a very tough PCB layout.

If you can partition your memory into smaller fast-path memory and
slower bulk memory, then on-chip memory is the fastest you'll find and
you can use SDRAM for the bulk. Another option, if you can tolerate
latency, is to spread the memory out to multiple PCBs/daughtercards,
each with a dedicated memory controller, and use multiple lanes of
extremely fast serial I/O between the master and slave memory
controllers.

A hierarchy of smaller/faster and larger/slower memories is a common
approach, e.g., on-chip core-rate L1 cache, off-chip fast L2 cache, and
slower bulk SDRAM in the case of microprocessors. If you tossed out some
specific system requirements here you'd probably get some good feedback
because this is a common dilemma.

Robert

"Fernando" <fortiz80@tutopia.com> wrote in message
news:2658f0d3.0311090256.21ce5a9a@posting.google.com...
> Sharing the control pins is a good idea; the only thing that concerns
> me is the PCB layout.  This is not my area of expertise, but seems to
> me that it would be pretty challenging to put (let's say) 10 DRAM
> DIMMs and a big FPGA on a single board.
>
> It can get even uglier if symmetric traces are required to each memory
> sharing the control lines...(not sure if this is required)
>
> Anyway, I'll start looking into it
>
> Thanks
>
> Phil Hays <SpamPostmaster@attbi.com> wrote in message
news:<3FAD39F6.5572E42C@attbi.com>...
> > Fernando wrote:
> >
> > > How fast can you really get data in and out of an FPGA?
> > > With current pin layouts it is possible to hook four (or maybe
even
> > > five) DDR memory DIMM modules to a single chip.
> > >
> > > Let's say you can create memory controllers that run at 200MHz (as
> > > claimed in an Xcell article), for a total bandwidth of
> > > 5(modules/FPGA) * 64(bits/word) * 200e6(cycles/sec) *
(2words/cycle) *
> > > (1byte/8bits)=
> > > 5*3.2GB/s=16GB/s
> > >
> > > Assuming an application that needs more BW than this, does anyone
know
> > > a way around this bottleneck? Is this a physical limit with
current
> > > memory technology?
> >
> > Probably can get a little better.  With a 2V8000 in a FF1517
package,
> > there are 1,108 IOs.  (!)  If we shared address and control lines
between
> > banks (timing is easier on these lines), it looks to me like 11
DIMMs
> > could be supported.
> >
> > Data pins 64
> > DQS pins   8
> > CS,CAS,
> > RAS,addr  12 (with sharing)
> >         ====
> >           92
> >
> > 1108/92 = 11 with 100 pins left over for VTH, VRP, VRN, clock,
reset, ...
> >
> > Of course, the communication to the outside world would also need go
> > somewhere...



Article: 62845
Subject: CF card problem in Virtex-II Multimedia Board
From: "#YU WEI#" <yuwei@pmail.ntu.edu.sg>
Date: Mon, 10 Nov 2003 12:18:05 +0800
We have got a Virtex-II Multimedia Board which includes a CF card slot.

http://www.xilinx.com/xlnx/xebiz/board_detail.jsp?=&category=-21481&iLanguageID=1&key=HW-V2000-MLTA

When we downloaded the SystemACE file to the board, the status LED showed
that the CF card was not working. Does anyone know how to make it work?




Article: 62846
Subject: Re: FPGAs and DRAM bandwidth
From: "Martin Euredjian" <0_0_0_0_@pacbell.net>
Date: Mon, 10 Nov 2003 05:55:00 GMT
It would seem to me that the idea of using custom "serial dimms" combined
with Virtex II Pro high speed serial I/O capabilities might be the best way
to get a boost in data moving capabilities.  This would avoid having to
drive hundreds of pins (and related issues) and would definitely simplify 
board layout.

I haven't done the numbers.  I'm just thinking out loud.


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where
"0_0_0_0_"  =  "martineu"



"Robert Sefton" <rsefton@abc.net> wrote in message
news:bomt69$1equ1q$1@ID-212988.news.uni-berlin.de...
> Fernando -
>
> Your instincts are right on with respect to the difficulty of fitting
> that many DIMMs on a board and interfacing to them from a single FPGA.
> Forget about it. The bottom line is that there's a trade-off between
> memory size and speed, and memory is almost always the limiting factor
> in system throughput. If you need lots of memory then DRAM is probably
> your best/only option, and the max reasonable throughput is about what
> you calculated, but even the 5-DIMM 320-bit-wide data bus in your
> example would be a very tough PCB layout.
>
> If you can partition your memory into smaller fast-path memory and
> slower bulk memory, then on-chip memory is the fastest you'll find and
> you can use SDRAM for the bulk. Another option, if you can tolerate
> latency, is to spread the memory out to multiple PCBs/daughtercards,
> each with a dedicated memory controller, and use multiple lanes of
> extremely fast serial I/O between the master and slave memory
> controllers.
>
> A hierarchy of smaller/faster and larger/slower memories is a common
> approach, e.g., on-chip core-rate L1 cache, off-chip fast L2 cache, and
> slower bulk SDRAM in the case of microprocessors. If you tossed out some
> specific system requirements here you'd probably get some good feedback
> because this is a common dilemma.
>
> Robert
>
> "Fernando" <fortiz80@tutopia.com> wrote in message
> news:2658f0d3.0311090256.21ce5a9a@posting.google.com...
> > Sharing the control pins is a good idea; the only thing that concerns
> > me is the PCB layout.  This is not my area of expertise, but seems to
> > me that it would be pretty challenging to put (let's say) 10 DRAM
> > DIMMs and a big FPGA on a single board.
> >
> > It can get even uglier if symmetric traces are required to each memory
> > sharing the control lines...(not sure if this is required)
> >
> > Anyway, I'll start looking into it
> >
> > Thanks
> >
> > Phil Hays <SpamPostmaster@attbi.com> wrote in message
> news:<3FAD39F6.5572E42C@attbi.com>...
> > > Fernando wrote:
> > >
> > > > How fast can you really get data in and out of an FPGA?
> > > > With current pin layouts it is possible to hook four (or maybe
> even
> > > > five) DDR memory DIMM modules to a single chip.
> > > >
> > > > Let's say you can create memory controllers that run at 200MHz (as
> > > > claimed in an Xcell article), for a total bandwidth of
> > > > 5(modules/FPGA) * 64(bits/word) * 200e6(cycles/sec) *
> (2words/cycle) *
> > > > (1byte/8bits)=
> > > > 5*3.2GB/s=16GB/s
> > > >
> > > > Assuming an application that needs more BW than this, does anyone
> know
> > > > a way around this bottleneck? Is this a physical limit with
> current
> > > > memory technology?
> > >
> > > Probably can get a little better.  With a 2V8000 in a FF1517
> package,
> > > there are 1,108 IOs.  (!)  If we shared address and control lines
> between
> > > banks (timing is easier on these lines), it looks to me like 11
> DIMMs
> > > could be supported.
> > >
> > > Data pins 64
> > > DQS pins   8
> > > CS,CAS,
> > > RAS,addr  12 (with sharing)
> > >         ====
> > >           92
> > >
> > > 1108/92 = 11 with 100 pins left over for VTH, VRP, VRN, clock,
> reset, ...
> > >
> > > Of course, the communication to the outside world would also need go
> > > somewhere...
>
>



Article: 62847
Subject: Re: Building the 'uber processor'
From: Goran Bilski <goran@xilinx.com>
Date: Mon, 10 Nov 2003 10:02:02 +0100

Hi John,

john jakson wrote:

>Hi Goran
>
>  
>
>>The new instruction in MicroBlaze for handling these locallinks are 
>>simple but there is no HW scheduler in MicroBlaze. I have done processor 
>>before with complete Ada RTOS in HW but it would be an overkill in a FPGA:
>>
>>    
>>
>
>.. now that sounds like something we could chat about for some time.
>An Ada RTOS in HW certainly would be heavy, but the Occam model is
>very light. The Burns book on Occam compares them, the jist being that
>ADA has something for everybody, and Occam is maybe too light. Anyway
>they both rendezvous. At the beginning of my Inmos days we were
>following ADA and the iAPX32 very closely to see where concurrency on
>other cpus might go (or not as the case turned out). Inmos went for
>simplicity, ADA went for complexity.
>
>Thanks for all the gory details.
>  
>
Actually the Ada RTOS was not that large. The whole processor THOR was 
created to run Ada as efficiently as possible 
(http://www.space.se/node3066.asp?product={7EA5439E-962C-11D5-B730-00508B63C9B4}&category=148)
The processor only supported 16 tasks in HW but you could have more tasks 
that had to be swapped in and out.
The other thing was that the processor didn't have interrupts, only 
external rendezvous.
There were also other implementation features to handle Ada much better: HW stack 
control, HW-handled exception handling, ...

>  
>
>>The Locallinks for MicroBlaze is 32-bit wide so they are not serial.
>>They can handle a new word every clock cycle.
>>
>>You could also connect up a massive array of MicroBlaze over FSL ala 
>>transputer but I think that the usage of the FPGA logic as SW 
>>accelarators will be a more popular way since FPGA logic can be many 
>>magnitudes faster than any processor and with the ease of interconnect 
>>as the FSL provides it will be the most used case.
>>
>>    
>>
>
>I am curious what the typ power useage of MicroBlaze is per node, and
>has anybody actually tried to hook any no of them up. If I wanted
>large no of cpus to work on some project that weren't Transputers, I
>might also look at PicoTurbo, Clearspeed or some other BOPSy cpu
>array, but they would all be hard to program and I wouldn't be able to
>customize them. Having lots of cpus in FPGA brings up the issue of how
>to organize memory hierarchy. Most US architects seem to favor the
>complexity of shared memory and complicated coherent caches, Europeans
>seem to favor strict message passing (as I do).
>
>We agree that if SW can be turned into HW engines quickly and
>obviously, for the kernals, sure they should be mapped right onto FPGA
>fabric for whatever speed up. That brings up some points, 1st P4
>outruns typ FPGA app maybe 50x on clockspeed. 2nd converting C code to
>FPGA is likely to be a few x less efficient than an EE designed
>engine, I guess 5x. IO bandwidth to FPGA engine from PC is a killer.
>It means FPGAs best suited to continuous streaming engines like real
>time DSP. When hooked to PC, FPGA would need to be doing between
>50-250x more work in parallel just to be even. But then I thinks most
>PCs run far slower than Intel/AMD would have us believe because they
>too have been turned into streaming engines that stall on cache misses
>all too often.
>
>But SW tends to follow 80/20 (or whatever xx/yy) rule, some little
>piece of code takes up most of the time. What about the rest of it, it
>will still be sequential code that interacts with the engine(s). We
>would still be forced to rewrite the code and cut it with an axe and
>keep one side in C and one part in HDL. If C is used as a HDL, we know
>thats already very inefficient compared to EE HDL code.
>
>The Transputer & mixed language approach allows a middle road between
>the PC cluster and raw FPGA accelerator. It uses less resources than
>cluster but more than the dedicated accelerator. Being more general
>means that code can run on an array of cpus can leave decision to
>commit to HW for later or never. The less efficient approach also
>sells more FPGAs or Transputer nodes than one committed engine. In the
>Bioinformatics case, a whole family of algorithms need to be
>implemented, all in C, some need FP. An accelerator board that suits
>one problem may not suit others, so does Bio guy get another board,
>probably not. TimeLogic is an interesting case study, the only
>commercial FPGA solution left for Bio.
>
>My favourite candidate for acceleration is in our own backyard, EDA,
>esp P/R, I used to spend days waiting for it to finish on much smaller
>ASICs and FPGAs. I don't see how it can get better as designs are
>getting bigger much faster than pentium can fake up its speed. One
>thing EDA SW must do is to use ever increasingly complex algorithms to
>make up the short fall, but that then becomes a roadblock to turning
>it to HW so it protects itself in clutter. Not as important as the Bio
>problem (growing at 3x Moores law), but its in my backyard.
>
>rant_mode_off
>
>Regards
>
>johnjakson_usa_com
>  
>




Article: 62848
Subject: Re: ISE 5.2 to 6.1
From: muthu_nano@yahoo.co.in (Muthu)
Date: 10 Nov 2003 01:47:43 -0800
"colin hankins" <colinhankins@cox.net> wrote in message news:<kzCrb.14571$Zb7.8816@fed1read01>...
> I installed ISE 6.1 with the service pack and my Spartan IIe design that
> worked fine under 5.2 no longer works in the timing simulation in 6.1. Also,
> the number of Slices used increased by 10% as well as the TBUFS. I have
> fiddled with almost every combination of options in the synth, map, and par
> to try and recreate the better results I had in ISE 5.2. I talked to Xilinx
> and they just told me to fiddle with the parameters some more. Has anyone
> else had similar problems migrating a project from 5.2 to 6.1?
> 
> Regards,
> Colin

Hi,

I was using ISE5.1i. Now I have moved to ISE6.1i.

From a timing perspective, ISE6.1i gives excellent timing compared to
ISE5.1i.

Almost a 7+ MHz improvement for the same constraints, options and
of course the same design :-)

Regards,
Muthu

Article: 62849
Subject: Re: ISE 5.2 to 6.1
From: anjanr@yahoo.com (Anjan)
Date: 10 Nov 2003 01:53:40 -0800
Hi
I am facing the same problem. This version uses more logic. I installed sp2
but it was no use. The tool has to mature more.
Rgd
Anjan
"colin hankins" <colinhankins@cox.net> wrote in message news:<kzCrb.14571$Zb7.8816@fed1read01>...
> I installed ISE 6.1 with the service pack and my Spartan IIe design that
> worked fine under 5.2 no longer works in the timing simulation in 6.1. Also,
> the number of Slices used increased by 10% as well as the TBUFS. I have
> fiddled with almost every combination of options in the synth, map, and par
> to try and recreate the better results I had in ISE 5.2. I talked to Xilinx
> and they just told me to fiddle with the parameters some more. Has anyone
> else had similar problems migrating a project from 5.2 to 6.1?
> 
> Regards,
> Colin


