
Messages from 151300

Article: 151300
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of
From: Mittens <mittens@_nospam_hush.ai>
Date: Tue, 22 Mar 2011 00:48:56 -0000
On Mon, 21 Mar 2011 14:11:54 -0000, PovTruffe <PovTache@gaga.invalid> wrote:

> "PovTruffe" <PovTache@gaga.invalid> a écrit :
>> Hi,
>>
>> Yet another newbie question:
>> Is SDRAM fast enough to generate a 720p or 1024p video stream (VGA or
>> DVI output) using a Spartan-3 or -3E FPGA ?
>
> No response  :-((
> Maybe my question was not very clear. Let me paraphrase it:
> What kind of RAM would you use for a video frame buffer (Spartan-3E) ?
> Or would either type of RAM work ?

SDRAM most certainly is fast enough for frame buffering. Think of low end PC
chipsets which borrow some of the main memory for the video processor.

If you are going to write the controller from scratch, it will be so
much easier to use QDR SRAM (dual unidirectional ports) instead.
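The "fast enough" claim is easy to sanity-check with arithmetic. A minimal sketch follows; the 100 MHz clock and 16-bit bus width are illustrative assumptions of mine, not figures from the thread:

```python
# Rough frame-buffer bandwidth estimate for a 720p60 stream
# (illustrative figures, not from the original posts).

def stream_bandwidth_bits(width, height, fps, bits_per_pixel):
    """Raw bandwidth of an uncompressed video stream in bits/s."""
    return width * height * fps * bits_per_pixel

# 720p at 60 Hz, 24-bit colour:
need = stream_bandwidth_bits(1280, 720, 60, 24)   # 1,327,104,000 bits/s

# Assumed single 16-bit SDR SDRAM interface at 100 MHz, peak rate:
peak = 100_000_000 * 16                            # 1,600,000,000 bits/s

print(f"stream needs {need/1e9:.2f} Gbit/s, SDRAM peak {peak/1e9:.2f} Gbit/s")
```

With these assumed numbers a single 720p60 read stream already consumes most of the interface's peak rate, which is why later replies in the thread stress counting reads plus writes before trusting any rule of thumb.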

Article: 151301
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: Gabor <gabor@alacron.com>
Date: Mon, 21 Mar 2011 18:58:28 -0700 (PDT)
On Monday, March 21, 2011 7:08:32 PM UTC-4, PovTruffe wrote:
> > The original question is too ill-posed - I wouldn't take any "rule of thumb"
> > type response with respect to video - the numbers add up too fast.
> 
> OK but I have the freedom to choose whatever video size, rate, # of bits I like.
> If I can generate only a 100 x 100 pixel video at 10Hz, that's fine.
> 
> > One would assume you're not just reading or writing to the DDR - you probably
> > need to do (at least one) of both a frame-buffer read, and a frame-buffer write.
> > So 2X (at least) the BW requirements there.  How are you going to pack
> > (20-bit, 24-bit, 30-bit and/or 32-bit?) pixel data onto a (16/32/48/64 bit)
> > memory interface?  Pack it, or throw away bandwidth?
> 
> Yes I am aware of the multiple accesses, read and write, that will be required.
> Because of the PQ208 package the memory interface will probably be limited
> to 16 bit. And some address lines may not be used as well. I am mainly worried
> about the PQ208 high pin capacitance.
> 

You should be worried about pin inductance in the PQ208.  In
my experience these packages are not suitable for high-speed
I/O.

> > Reads and Writes at same time? - can you still guarantee "sequential access"
> > enough so you don't lose bandwidth efficiencies to the DRAM?  Is there
> > anything else (a CPU?) using the DRAM too that throws this off?
> 
> I will probably include a CPU later and try to access the RAM as a learning
> exercise.
> 
> > To the OP - there's no "rule of thumb".  Sit down with a pen and paper,
> > or an Excel spreadsheet, and calculate your requirements.
> 
> I will do that later. I tried to make it clear that for this project I do not have
> the professional / engineering approach that most of you in this group are
> used to. There are no predefined and rigid features for the board. I will just
> choose an FPGA, throw in a RAM and a few peripherals, then play with the board.
> However the PCB will be designed as optimally as possible (shortest trace
> lengths, equal length for buses, etc). Later things will become clearer and I will
> get a much better feel for the capabilities of an FPGA. Because of the steep
> learning curve, if I begin working with all the details, the board will never be
> finished this year and I would probably run out of motivation...

The problem with starting with a flawed design just
to "finish this year" is that your board won't do what
you wanted, and then you could be more discouraged.

If you really want to use the large PQ package, you need to
resign yourself to using low slew-rate I/O because of the
horrendous ground-bounce in those packages (even when
using additional "virtual ground" pins).

Single data-rate Mobile SDRAM may be your best choice
because it can run at low clock rates and use only
LVCMOS levels.  You won't find a pre-made MIG core
for it, though, as you might for DDR memory types.
Writing your own Mobile DDR (also called lpDDR)
controller would be a good exercise as well.  I
have used single data rate memories for a number
of video products including UXGA resolution video
capture.  You can realize any data rate you need
by making the memory wide enough.  It doesn't even
have to be a power of two width.  The UXGA capture
board uses 48-bit wide memory which matches the
RGB 2-pixel-wide data stream.

Video buffering is an easier design than a general
purpose memory controller for fully random access.
My controllers usually have a minimum transfer of
one burst to each of the banks in the memory.  The
bank overlap allows me to "bury" the precharge and
activate time and keep the interface streaming at
almost 100% of the peak rate.

Good luck on your projects.

-- Gabor

Article: 151302
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of
From: Joel Williams <nospamwhydontyoublogaboutit@nospamgmail.com>
Date: Tue, 22 Mar 2011 12:58:37 +1030
On 22/03/11 10:22 AM, glen herrmannsfeldt wrote:
> PovTruffe<PovTache@gaga.invalid>  wrote:
> (snip)
>
>> OK but I have the freedom to choose whatever video size, rate, #
>> of bits I like.
>> If I can generate only a 100 x 100 pixel video at 10Hz, that's fine.
>>
>>> One would assume you're not just reading or writing to the
>> DDR - you probably
>>> need to do (at least one) of both a frame-buffer read, and
>> a frame-buffer write.
>>> So 2X (at least) the BW requirements there.  How are you going to pack
>>> (20-bit, 24-bit, 30-bit and/or 32-bit?) pixel data
>> onto a (16/32/48/64 bit)
>>> memory interface?  Pack it, or throw away bandwidth?
>
> You might look at some of the Digilent boards.  The Spartan3E board
> has on-board DDR, and a VGA connector, though only one bit per color.
> (It should be possible to dither, though, otherwise only eight colors.)
>
> There are other boards with FPGA, RAM, and VGA connector, too.

I second that - the Digilent Atlys is quite inexpensive (especially if 
you're a student) and has four HDMI interfaces (two in, two out) and DDR2.

It all depends on whether this is a PCB and circuit design exercise or 
an FPGA development exercise. If it's the latter, spending a couple of 
hundred dollars on a professionally designed and tested board will save 
you dozens of hours and a lot of frustration.

Trying to get high speed designs working on two layer boards can be a 
complete waste of time when things just plain don't work and you have to 
start again. A four layer board is actually considerably easier to lay 
out for complicated designs and if you're not in much of a hurry, some 
of the hobbyist orientated manufacturing services are incredibly cheap. 
For example, http://dorkbotpdx.org/wiki/pcb_order charge $10 per square 
inch with four layers, and that's for three boards. I'd seriously 
consider doing this even if I was using the PQ208 package.

If you're interested in soldering BGA packages at home, have a look at 
this project: 
http://danstrother.com/2011/01/16/spartan-6-bga-test-board/ . It hasn't 
yet been built, but if the reflow oven method works it would be very 
easy to adapt this (open source) design for your uses, either by adding 
a simple VGA output or an HDMI buffer.

Joel

Article: 151303
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: rickman <gnuarm@gmail.com>
Date: Mon, 21 Mar 2011 21:59:38 -0700 (PDT)
On Mar 21, 11:28 am, "PovTruffe" <PovTa...@gaga.invalid> wrote:
> "maxascent" <maxascent@n_o_s_p_a_m.n_o_s_p_a_m.yahoo.co.uk> a écrit :
>
> >>No response  :-((
> >>Maybe my question was not very clear. Let me paraphrase it:
> >>What kind of RAM would you use for a video frame buffer (Spartan-3E) ?
> >>Or would either type of RAM work ?
>
> > For any application you must calculate what size and speed of RAM you
> > require. So for your application you must determine the memory bandwidth
> > and the size of memory needed to fit the data into the memory. You know
> > what your application is and the relevant parameters so you just need to
> > match those to the standard RAM available. You may find that it is not
> > possible with the FPGA you want to use and you need to choose a higher
> > spec device.
>
> Thank you for your response. However I was in fact expecting more of a rule
> of thumb response such as for example "SDRAM would probably work for
> VGA resolution at 30Hz rate, no more...".
>
> I am still choosing the right components for my first FPGA design, which is
> just a learning project with no other specific purpose. If I can generate a
> video stream that's fine, if not I will do something else (or lower the
> frame size, refresh rate, etc).
>
> The challenge is also to design a working Spartan-3 FPGA board with the
> largest non-BGA package and with only 2 layers. The risk of course is the
> board will never work.

If you think about it for a moment, what do they use in PCs for frame
buffer memory?  In all of the lower end systems they *share* the main
SDRAM based memory with the CPU.  Video does not require a large
bandwidth compared to what SDRAM is capable of.  Many years ago when
DRAM had a lower throughput they made special DRAM which could load a
page from memory into a fast on-chip SRAM.  This could then be shifted
out at very high rates to support very fast displays of 1280 x 1024.
Today displays are 2048 x 1152 and larger.  SDRAM still works and
doesn't even need a special interface.

Rick

Article: 151304
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: rickman <gnuarm@gmail.com>
Date: Mon, 21 Mar 2011 22:06:34 -0700 (PDT)
On Mar 21, 4:17 pm, gtw...@sonic.net (Mark Curry) wrote:
> In article <8JKdnaSMPotTFxrQnZ2dnUVZ8qOdn...@brightview.co.uk>,
> Phil Jessop <p...@noname.org> wrote:
>
> >"PovTruffe" <PovTa...@gaga.invalid> wrote in message
> >news:4d876ea8$0$19933$426a74cc@news.free.fr...
> >> "maxascent" <maxascent@n_o_s_p_a_m.n_o_s_p_a_m.yahoo.co.uk> a écrit :
> >>>>No response  :-((
> >>>>Maybe my question was not very clear. Let me paraphrase it:
> >>>>What kind of RAM would you use for a video frame buffer (Spartan-3E) ?
> >>>>Or would either type of RAM work ?
> >
> >>> For any application you must calculate what size and speed of RAM you
> >>> require. So for your application you must determine the memory bandwidth
> >>> and the size of memory needed to fit the data into the memory. You know
> >>> what your application is and the relevant parameters so you just need
> >>> to match those to the standard RAM available. You may find that it is
> >>> not possible with the FPGA you want to use and you need to choose a
> >>> higher spec device.
> >
> >> Thank you for your response. However I was in fact expecting more of a
> >> rule of thumb response such as for example "SDRAM would probably work for
> >> VGA resolution at 30Hz rate, no more...".
> >
> >> I am still choosing the right components for my first FPGA design that is
> >> just a learning project with no other specific purpose. If I can generate
> >> a video stream that's fine, if not I will do something else (or lower the
> >> frame size, refresh rate, etc).
> >
> >> The challenge is also to design a working Spartan-3 FPGA board with the
> >> largest non-BGA package and with only 2 layers. The risk of course is the
> >> board will never work.
> >
> >For a full HD display running at 60Hz frame rate you need about 125Mpix/s x
> >24 or 30 bits. Easily within the lowest spec DDR SDRAM as you only need
> >sequential access. You can always up the bit width of the memory to increase
> >bandwidth if access time proves inadequate but obviously you'll need to
> >determine that at the outset.
>
> The original question is too ill-posed - I wouldn't take any "rule of thumb"
> type response with respect to video - the numbers add up too fast.
>
> One would assume you're not just reading or writing to the DDR - you probably
> need to do (at least one) of both a frame-buffer read, and a frame-buffer write.
> So 2X (at least) the BW requirements there.  How are you going to pack
> (20-bit, 24-bit, 30-bit and/or 32-bit?) pixel data onto a (16/32/48/64 bit)
> memory interface?  Pack it, or throw away bandwidth?
>
> Reads and Writes at same time? - can you still guarantee "sequential access"
> enough so you don't lose bandwidth efficiencies to the DRAM?  Is there
> anything else (a CPU?) using the DRAM too that throws this off?
>
> To the OP - there's no "rule of thumb".  Sit down with a pen and paper,
> or an Excel spreadsheet, and calculate your requirements.
>
> --Mark
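Mark's "pen and paper" checklist above turns into a few lines of arithmetic. This is a sketch with invented device figures; the 133 MHz DDR clock, 32-bit bus, and one-pixel-per-32-bit-word packing are my assumptions, not numbers from the thread:

```python
# Sketch of the requirements calculation Mark describes.
# All device figures here are illustrative assumptions.

def required_bandwidth(width, height, fps, bits_per_pixel,
                       read_and_write=True, packing_efficiency=1.0):
    """Bits/s the frame-buffer memory must sustain.

    read_and_write doubles the figure (capture in + display out);
    packing_efficiency < 1.0 models wasted bits when pixels don't
    fill the memory word (e.g. one 24-bit pixel per 32-bit word).
    """
    bw = width * height * fps * bits_per_pixel
    if read_and_write:
        bw *= 2
    return bw / packing_efficiency

# Full HD at 60 Hz, 24-bit pixels, read + write, 24-bit pixels stored
# one per 32-bit word -> 75% packing efficiency:
need = required_bandwidth(1920, 1080, 60, 24,
                          read_and_write=True,
                          packing_efficiency=24 / 32)

# Peak rate of an assumed 32-bit DDR interface at 133 MHz (both edges):
peak = 133_000_000 * 2 * 32

print(f"need {need/1e9:.2f} Gbit/s of {peak/1e9:.2f} Gbit/s peak "
      f"({100*need/peak:.0f}% utilisation before DRAM overheads)")
```

With these assumptions the numbers do "add up fast": read-plus-write at 75% packing already demands over 90% of the interface's peak, before precharge/activate/refresh overheads are even counted.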

This reminds me of myself many, many years ago when I was dying to
build a terminal from a TV and a variety of ICs.  I read Don
Lancaster's book, "TV Typewriter Cookbook" that explained enough of
how it could work that I was able to cobble together my own design.  I
think I actually designed it a dozen times on paper before I ever
constructed it.  I did eventually get it built and modified a 12" TV
to use as the monitor.  64x25 characters!  Back then my eyes could
actually see that.

I would suggest to the OP that he spend a lot of time reading about
other people's designs and do a lot of paper designing before he
builds anything.  I learned a lot that way and I think there is a lot
he can learn before he is ready to build the device he is thinking
of.

Rick

Article: 151305
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Tue, 22 Mar 2011 05:28:27 +0000 (UTC)
rickman <gnuarm@gmail.com> wrote:
(snip)

> This reminds me of myself many, many years ago when I was dying to
> build a terminal from a TV and a variety of ICs.  I read Don
> Lancaster's book, "TV Typewriter Cookbook" that explained enough of
> how it could work that I was able to cobble together my own design.  I
> think I actually designed it a dozen times on paper before I ever
> constructed it.  I did eventually get it built and modified a 12" TV
> to use as the monitor.  64x25 characters!  Back then my eyes could
> actually see that.

I remember when the TV typewriter was a Radio-Electronics
magazine series.  I was interested, but couldn't afford one.

Later, when I was in college, I had the idea of building
a bit-map display storing the data compressed, as memory was
still pretty expensive.  As far as I know, no-one ever tried
that.

-- glen

Article: 151306
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: hal-usenet@ip-64-139-1-69.sjc.megapath.net (Hal Murray)
Date: Tue, 22 Mar 2011 01:10:05 -0500
In article <im9c1r$ofe$1@news.eternal-september.org>,
 glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:

>Later, when I was in college, I had the idea of building
>a bit-map display storing the data compressed, as memory was
>still pretty expensive.  As far as I know, no-one ever tried
>that.

Compressed?

Back in the way old days, they had character generator ROMs.
The input (address) was the character code and the bottom bits of the
vertical line address.  The output was the bit pattern for that chunk
of the horizontal line.

So instead of storing the display screen bits, you stored the
characters on the appropriate grid.
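The ROM lookup Hal describes can be sketched as follows; the 8-row character cell and the glyph data are invented for illustration, not from any real character generator:

```python
# Character-generator ROM addressing as Hal describes it:
# address = character code combined with the low bits of the scan line,
# output  = the dot pattern for that slice of the horizontal line.
# The font data below is invented for illustration.

FONT_ROWS = 8  # scan lines per character cell (assumed)

# A tiny "ROM": glyph row patterns indexed by character code.
ROM = {
    ord('A'): [0b00111000, 0b01000100, 0b01000100, 0b01111100,
               0b01000100, 0b01000100, 0b01000100, 0b00000000],
}

def char_rom(char_code, scan_line):
    """Return one horizontal slice of a character's dot pattern."""
    row = scan_line % FONT_ROWS        # bottom bits of the line address
    return ROM[char_code][row]

# Scan line 3 of 'A' is the crossbar:
assert char_rom(ord('A'), 3) == 0b01111100
```

The saving is exactly the point Hal makes: the frame store holds one character code per cell instead of one bit per dot, and the ROM regenerates the dots on the fly each scan line.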

-- 
These are my opinions, not necessarily my employer's.  I hate spam.

Article: 151307
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Tue, 22 Mar 2011 06:26:33 +0000 (UTC)
Hal Murray <hal-usenet@ip-64-139-1-69.sjc.megapath.net> wrote:
> In article <im9c1r$ofe$1@news.eternal-september.org>,
> glen herrmannsfeldt <gah@ugcs.caltech.edu> writes:
 
>>Later, when I was in college, I had the idea of building
>>a bit-map display storing the data compressed, as memory was
>>still pretty expensive.  As far as I know, no-one ever tried
>>that.
 
> Compressed?
 
> Back in the way old days, they had character generator ROMs.
> The input (address) was the character code and the bottom bits of the
> vertical line address.  The output was the bit pattern for that chunk
> of the horizontal line.

No, I meant for bit map graphics.  The idea at the time was
to do run-length coding for the bit map.  Then decode each scan
line just before displaying it.  Software would be needed to
update the display memory, and that might be complicated and slow.
(It would have been on an 8080 at the time.)  It is a tradeoff
of memory vs. display logic, and complexity of the display.

I do remember stories about Versatec printers in bitmap mode
decoding compressed data while printing, but I don't know about
doing it each time through a video display buffer.   I believe
HP PCL printers also store the page in memory compressed, and
then decompress while printing.  There are some pages where the
decompression isn't fast enough, such that that part of the page
has already been printed.  The printer prints it anyway, and in
the wrong place on the page.

Now, it used to be that vector graphics, which take up much less
display memory, were common.  That doesn't fit with a TV 
(or video monitor) display, though.  Also, vector graphics works
a little better with a light pen.

-- glen

Article: 151308
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: "PovTruffe" <PovTache@gaga.invalid>
Date: Tue, 22 Mar 2011 12:04:40 +0100
"Mittens" <mittens@_nospam_hush.ai> a écrit :
> If you are going to write the controller from scratch, it will be so much easier to use QDR SRAM (dual
> unidirectional ports) instead.

The smallest QDR SRAM I could find on Digikey has a 165-pin BGA package!



Article: 151309
Subject: SRL as a synchroniser
From: Allan Herriman <allanherriman@hotmail.com>
Date: 22 Mar 2011 12:27:10 GMT
Hi,

At a client's site I have some legacy VHDL code that is being synthesised 
with Xilinx XST 13.1 with Virtex 6 as a target.

This code has some clock domain crossing circuits that use two flip flops 
cascaded in the destination clock domain (it's the usual synchroniser 
that you've all seen before).

This code worked fine in whatever version of XST it was originally 
written for, but with 13.1 I find that these flip flops are being 
replaced by SRL32 primitives, the default inference rule being that 
whenever XST sees two or more flip flops (without a reset and with the 
same control inputs) in series, it replaces them with an SRL.

Obviously, the XST team thought that this was an appropriate thing to do, 
but anyone with a modicum of experience should know better.


Question: just how bad are the SRL for this application?  Have they even 
been characterised?


BTW, there are many possible fixes:

- Put an attribute (SHREG_something_or_other) on each and every 
synchroniser to disable SRL inference.

- Disable SRL inference globally (not good).

- Change the XST setting (SHREG_something_else) to 3 so that chains of 2 
flip flops are left alone.

- Add a reset (or other control signal that will stop SRL inference) to 
each and every synchroniser in the design.

- Wait for XST to be fixed (but don't hold your breath).


Thanks,
Allan

Article: 151310
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: rickman <gnuarm@gmail.com>
Date: Tue, 22 Mar 2011 07:15:34 -0700 (PDT)
On Mar 22, 2:26 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Hal Murray <hal-use...@ip-64-139-1-69.sjc.megapath.net> wrote:
> > In article <im9c1r$of...@news.eternal-september.org>,
> > glen herrmannsfeldt <g...@ugcs.caltech.edu> writes:
> >>Later, when I was in college, I had the idea of building
> >>a bit-map display storing the data compressed, as memory was
> >>still pretty expensive.  As far as I know, no-one ever tried
> >>that.
> > Compressed?
> > Back in the way old days, they had character generator ROMs.
> > The input (address) was the character code and the bottom bits of the
> > vertical line address.  The output was the bit pattern for that chunk
> > of the horizontal line.
>
> No, I meant for bit map graphics.  The idea at the time was
> to do run-length coding for the bit map.  Then decode each scan
> line just before displaying it.  Software would be needed to
> update the display memory, and that might be complicated and slow.
> (It would have been on an 8080 at the time.)  It is a tradeoff
> of memory vs. display logic, and complexity of the display.

That would be an interesting idea.  I suppose the hardware to do this
would not be complex.  A simple counter would do the trick, but it
would have to run faster than the dot clock and there is plenty of
potential for the size to blow up with certain degenerate patterns
such as alternating 1's and 0's.
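Glen's run-length scheme, and the degenerate case above, can be sketched in a few lines. The encoding (starting colour plus a list of run lengths) is my own toy choice, not a description of any real display hardware:

```python
# Toy run-length coding of a 1-bit scan line, as in Glen's compressed
# frame-buffer idea.  The format here is an illustrative choice.

def rle_encode(bits):
    """Encode a list of 0/1 pixels as (first_pixel, [run lengths])."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return bits[0], runs

def rle_decode(first_pixel, runs):
    """Expand runs back into pixels, as the scan-out logic would."""
    out, colour = [], first_pixel
    for run in runs:
        out.extend([colour] * run)
        colour ^= 1                   # runs alternate colour
    return out

line = [0] * 20 + [1] * 5 + [0] * 15
enc = rle_encode(line)                # (0, [20, 5, 15])
assert rle_decode(*enc) == line

# The degenerate case: alternating pixels give one run per pixel, so
# the "compressed" form is no smaller than the raw bitmap.
worst = [0, 1] * 20
assert len(rle_encode(worst)[1]) == len(worst)
```

The decoder is essentially the counter described above: load a run length, count it down at the dot clock, toggle the colour, repeat, which is why pathological patterns blow up the storage rather than the logic.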

I seem to recall that most designers were more worried about improving
the quality of the display since they were often rather limited.
Heck, 1024x whatever was state of the art then.  It was hard to find a
monitor that could actually show that level of detail.


> I do remember stories about Versatec printers in bitmap mode
> decoding compressed data while printing, but I don't know about
> doing it each time through a video display buffer.  I believe
> HP PCL printers also store the page in memory compressed, and
> then decompress while printing.  There are some pages where the
> decompression isn't fast enough, such that that part of the page
> has already been printed.  The printer prints it anyway, and in
> the wrong place on the page.
>
> Now, it used to be that vector graphics, which take up much less
> display memory, were common.  That doesn't fit with a TV
> (or video monitor) display, though.  Also, vector graphics works
> a little better with a light pen.

I was under the impression that the vector graphic displays came from
the CRT side of the design.  That was the most efficient means of
displaying lines on the screen at the time and you could even use a
display that held the last image drawn so it didn't need to be
refreshed.  This would allow the construction of images too slow to be
drawn in real time.  Once the raster graphic hardware got to be better
quality and speed the vector hardware got pushed out rather quickly.
I guess the cost of the digital part of the hardware also had to come
down, but it wasn't like vector graphics electronics was cheap!

Rick

Article: 151311
Subject: Re: SRL as a synchroniser
From: rickman <gnuarm@gmail.com>
Date: Tue, 22 Mar 2011 07:21:47 -0700 (PDT)
On Mar 22, 8:27 am, Allan Herriman <allanherri...@hotmail.com> wrote:
> Hi,
>
> At a client's site I have some legacy VHDL code that is being synthesised
> with Xilinx XST 13.1 with Virtex 6 as a target.
>
> This code has some clock domain crossing circuits that use two flip flops
> cascaded in the destination clock domain (it's the usual synchroniser
> that you've all seen before).
>
> This code worked fine in whatever version of XST it was originally
> written for, but with 13.1 I find that these flip flops are being
> replaced by SRL32 primitives, the default inference rule being that
> whenever XST sees two or more flip flops (without a reset and with the
> same control inputs) in series, it replaces them with an SRL.
>
> Obviously, the XST team thought that this was an appropriate thing to do,
> but anyone with a modicum of experience should know better.
>
> Question: just how bad are the SRL for this application?  Have they even
> been characterised?
>
> BTW, there are many possible fixes:
>
> - Put an attribute (SHREG_something_or_other) on each and every
> synchroniser to disable SRL inference.
>
> - Disable SRL inference globally (not good).
>
> - Change the XST setting (SHREG_something_else) to 3 so that chains of 2
> flip flops are left alone.
>
> - Add a reset (or other control signal that will stop SRL inference) to
> each and every synchroniser in the design.
>
> - Wait for XST to be fixed (but don't hold your breath).

Interesting.  I just had a conversation about synchronizing across
clock domains and I recalled being told by some knowledgeable folks
from Xilinx that with "modern" FPGAs (this was maybe 7 years ago) the
gain-bandwidth product of the buffer was high enough that even with
data and clock running at 100 MHz the MTTF would be in the ballpark
of a million years... or was it a billion years?

The FFs were intentionally designed to minimize issues with clock
domain crossing.  I have no idea if the SRLs are good for this at all
really.  I'm pretty sure they are a very different design optimized
for minimum die size.  I would say, talk to Xilinx... or add the
resets.
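Figures like "a million years" come from the standard metastability model, MTTF = exp(t_met / tau) / (T0 * f_clk * f_data). Below is a sketch of that calculation; tau and T0 are invented constants for illustration, not characterised values for any Xilinx part:

```python
import math

# Classic synchronizer MTTF model.  tau and T0 below are made-up
# illustrative constants; real values come from device characterisation.

def mttf_seconds(f_clk, f_data, t_met, tau, T0):
    """Mean time to failure of a synchronizer stage.

    f_clk, f_data : clock and data toggle rates (Hz)
    t_met         : time allowed for metastability to resolve (s)
    tau           : regeneration time constant of the flip-flop (s)
    T0            : metastability aperture constant (s)
    """
    return math.exp(t_met / tau) / (T0 * f_clk * f_data)

# 100 MHz clock and data, ~2.3 ns of settling slack before the second
# FF samples, with assumed tau = 50 ps and T0 = 0.5 ns:
mttf = mttf_seconds(100e6, 100e6, 2.3e-9, 50e-12, 0.5e-9)
years = mttf / (365 * 24 * 3600)
print(f"MTTF ~ {years:.1e} years")   # around a million years here
```

The exponential term dominates: with these assumed constants each extra nanosecond of slack multiplies the MTTF by exp(1 ns / tau), which is why a proper flip-flop (fast regeneration, generous slack) can look fine while an SRL with unknown tau cannot be trusted without characterisation.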

Rick

Article: 151312
Subject: Re: Alternative To Altera's Cyclone III Starter Board
From: rickman <gnuarm@gmail.com>
Date: Tue, 22 Mar 2011 07:23:49 -0700 (PDT)
On Mar 14, 6:38 pm, "Abby Brown" <abbybr...@charter.net> wrote:
> Hi,
>
> Does someone produce a cheaper and simpler substitute for
> Altera's Cyclone III starter board?  It needs to connect to a
> laptop to download configuration and test cases and upload
> results (ICT).  A driver that connects to Windows .NET would be
> ideal.
>
> Thanks,
> Gary

I have to say I am pretty ignorant of .NET other than this was
Microsoft's response when Sun told them to stop tinkering with JAVA.
What would the driver accomplish and how would you use it?

Rick

Article: 151313
Subject: Re: SRL as a synchroniser
From: Aleš Svetek <ales.svetek@gmDELail.com>
Date: Tue, 22 Mar 2011 15:42:12 +0100
On 22/03/2011 13:27, Allan Herriman wrote:
> Hi,
>
> At a client's site I have some legacy VHDL code that is being synthesised
> with Xilinx XST 13.1 with Virtex 6 as a target.
>
> This code has some clock domain crossing circuits that use two flip flops
> cascaded in the destination clock domain (it's the usual synchroniser
> that you've all seen before).
>
> This code worked fine in whatever version of XST it was originally
> written for, but with 13.1 I find that these flip flops are being
> replaced by SRL32 primitives, the default inference rule being that
> whenever XST sees two or more flip flops (without a reset and with the
> same control inputs) in series, it replaces them with an SRL.
>
> Obviously, the XST team thought that this was an appropriate thing to do,
> but anyone with a modicum of experience should know better.
>
>
> Question: just how bad are the SRL for this application?  Have they even
> been characterised?

Exactly the same question was asked quite some time ago (7 years). You 
can read the discussion at:

http://www.fpgarelated.com/usenet/fpga/show/4224-1.php


Also, Austin Lesea (Principal Engineer at Xilinx) said:

"The SR16 is not a chain of master slave flip flops, so their
metastability resolution is not going to very good at all.  In fact,
they may be exactly the wrong choice to use as asynchronous signal
synchronizers!"

http://groups.google.com/group/comp.arch.fpga/browse_thread/thread/e7148c35e7313032/b7e62b1e636df590?q=avoid+metastability+austin&lnk=nl&#

So, I would conclude that for clock-domain crossing one should avoid 
using SRLs and use fabric flip-flops placed relatively closely together.

Regards,
~ Ales


Article: 151314
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: "PovTruffe" <PovTache@gaga.invalid>
Date: Tue, 22 Mar 2011 15:53:17 +0100
"rickman" <gnuarm@gmail.com> a écrit :
> I would suggest to the OP that he spend a lot of time reading about
> other people's designs and do a lot of paper designing before he
> builds anything.  I learned a lot that way and I think there is a lot
> he can learn before he is ready to build the device he is thinking of.

This is what I have been doing for the past 2-3 months, and I still have
thousands of pages to study (hopefully aspirin and coffee are not that
expensive...). At some point I feel the need to get my hands dirty. I
already have experience with digital design, though not at very high
speeds. I also have experience in PCB design, and I have some idea of
what is specific to FPGA layout (I read about this too), but no
practical experience with those devices yet.



Article: 151315
Subject: Re: SRL as a synchroniser
From: Muzaffer Kal <kal@dspia.com>
Date: Tue, 22 Mar 2011 07:56:17 -0700
On 22 Mar 2011 12:27:10 GMT, Allan Herriman
<allanherriman@hotmail.com> wrote:

>Question: just how bad are the SRL for this application?  Have they even 
>been characterised?
>
As far as I understand, the SRL is made from the small block of SRAM
that implements the LUT, so it would be a rather poor synchronizer. A
series of closely packed DFFs is best for synchronization.
>
>BTW, there are many possible fixes:
>
>- Put an attribute (SHREG_something_or_other) on each and every 
>synchroniser to disable SRL inference.
>
>- Disable SRL inference globally (not good).
>
>- Change the XST setting (SHREG_something_else) to 3 so that chains of 2 
>flip flops are left alone.
>
>- Add a reset (or other control signal that will stop SRL inference) to 
>each and every synchroniser in the design.
>
>- Wait for XST to be fixed (but don't hold your breath).

I am a heavy user of X parts but I would never go with the last
choice. Individual attributes or reset are the best options in my
opinion.
-- 
Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services

http://www.dspia.com

Article: 151316
Subject: Via in Hyperlynx linesim
From: Mark <markjsunil@gmail.com>
Date: Tue, 22 Mar 2011 08:47:35 -0700 (PDT)
Is there any way to specify a via in HyperLynx LineSim? For example, I
have a transmission line with 20 mm of microstrip on the top layer,
which then goes through a via to an inner layer and then
through another via back to the top layer before reaching the receiver. I
want to simulate this in HyperLynx LineSim but I don't know how to
specify it in the LineSim schematic.

-mark

Article: 151317
Subject: Re: Via in Hyperlynx linesim
From: KJ <kkjennings@sbcglobal.net>
Date: Tue, 22 Mar 2011 09:38:49 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 22, 11:47 am, Mark <markjsu...@gmail.com> wrote:
> Is there anyway to specify via in HyperLynx LineSim? For example, I
> have a transmission line with microstrip of length 20mm on top layer ,
> then goes through a via to inner layer for microstrip and again
> through another via back to top layer before going to the receiver. I
> want simulate this in Hyperlynx LineSim but doesn't know how to
> specify in the LineSim schematic.
>

Create a PCB stackup and then make your net in Linesim by having
segments of that net travel on whatever PCB layers you want (i.e. 1
inch on Layer 1; 1.5 inch on Layer 2; 0.5 inch on Layer 1 again).

KJ

Article: 151318
Subject: Re: Via in Hyperlynx linesim
From: Mark <markjsunil@gmail.com>
Date: Tue, 22 Mar 2011 10:31:11 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 22, 9:38 pm, KJ <kkjenni...@sbcglobal.net> wrote:
> On Mar 22, 11:47 am, Mark <markjsu...@gmail.com> wrote:
>
> > Is there anyway to specify via in HyperLynx LineSim? For example, I
> > have a transmission line with microstrip of length 20mm on top layer ,
> > then goes through a via to inner layer for microstrip and again
> > through another via back to top layer before going to the receiver. I
> > want simulate this in Hyperlynx LineSim but doesn't know how to
> > specify in the LineSim schematic.
>
> Create a PCB stackup and then make your net in Linesim by having
> segments of that net travel on whatever PCB layers you want (i.e. 1
> inch on Layer 1; 1.5 inch on Layer 2; 0.5 inch on Layer 1 again).
>
> KJ

Thanks KJ. I quickly created a simple LineSim schematic for this
clarification. Please find it at the link below.
https://picasaweb.google.com/115209162259670831525/MyAlbum?authkey=Gv1sRgCPjvkPjw26WhPQ#

In the screen shot at the link above, there should be a via from
microstrip to stripline (top layer to inner layer) and again from
stripline to microstrip (inner layer to top layer). How do I specify
that? Or does HyperLynx automatically assume a via wherever the net
moves between microstrip and stripline? If so, does the via size have
no impact on SI?

-mark

Article: 151319
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of
From: Mittens <mittens@_nospam_hush.ai>
Date: Tue, 22 Mar 2011 18:47:54 -0000
Links: << >>  << T >>  << A >>
On Tue, 22 Mar 2011 11:04:40 -0000, PovTruffe <PovTache@gaga.invalid> wrote:

> "Mittens" <mittens@_nospam_hush.ai> wrote:
>> If you are going to write the controller from scratch, it will be
>> so much easier to use QDR SRAM (dual
>> unidirectional port) instead.
>
> The smallest QDR SRAM I could find on Digikey has a 165 pin BGA package!

Presumably the FPGA you will eventually need will have many more balls
than that though.

Article: 151320
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Tue, 22 Mar 2011 20:57:49 +0000 (UTC)
Links: << >>  << T >>  << A >>
rickman <gnuarm@gmail.com> wrote:

(snip)
>> No, I meant for bit map graphics.  The idea at the time was
>> to do run-length coding for the bit map.  Then decode each scan
>> line just before displaying it.  Software would be needed to
>> update the display memory, and that might be complicated and slow.
>> (It would have been on an 8080 at the time.)  It is a tradeoff
>> of memory vs. display logic, and complexity of the display.
 
> That would be an interesting idea.  I suppose the hardware to do this
> would not be complex.  A simple counter would do the trick, but it
> would have to run faster than the dot clock and there is plenty of
> potential for the size to blow up with certain degenerate patterns
> such as alternating 1's and 0's.

I believe the idea was that display memory would be bytes with one
nybble the number of zero bits before the next one, and the other
the number of one bits before the next zero.  I didn't get far into
the design before I got distracted by other things (like classwork).
Yes, it fails with many complicated displays.  (The 4+4 bit allocation
is probably not optimal, either.)
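That 4+4 scheme is easy to sketch in C. The function below is purely
illustrative (the name, the one-byte-per-bit input format, and the
15-bit run cap are my assumptions, not anything from the original
design):

```c
#include <stddef.h>
#include <stdint.h>

/* Sketch of the 4+4 run-length scheme described above: each output
 * byte packs a run of zero bits (high nybble) followed by a run of
 * one bits (low nybble), each capped at 15.  A run longer than 15
 * simply spills into the next byte with a zero-length partner run.
 * bits[] must contain only 0s and 1s; returns the encoded length. */
size_t rle_encode_scanline(const uint8_t *bits, size_t nbits, uint8_t *out)
{
    size_t n = 0, i = 0;
    while (i < nbits) {
        uint8_t zeros = 0, ones = 0;
        while (i < nbits && bits[i] == 0 && zeros < 15) { zeros++; i++; }
        while (i < nbits && bits[i] == 1 && ones < 15)  { ones++;  i++; }
        out[n++] = (uint8_t)((zeros << 4) | ones);
    }
    return n;
}
```

Note the degenerate case rickman mentions: with alternating 1's and 0's
each output byte covers only two pixels, so the "compressed" scan line
is four times larger than the raw bitmap.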
 
> I seem to recall that most designers were more worried about improving
> the quality of the display since they were often rather limited.
> Heck, 1024x whatever was state of the art then.  It was hard to find a
> monitor that could actually show that level of detail.

1024 required a pretty expensive monitor.  

(snip)
>> Now, it used to be that vector graphics, which take up much less
>> display memory, were common.  That doesn't fit with a TV
>> (or video monitor) display, though.  Also, vector graphics works
>> a little better with a light pen.
 
> I was under the impression that the vector graphic displays came from
> the CRT side of the design.  That was the most efficient means of
> displaying lines on the screen at the time 

Yes, but I believe that it was efficient use of memory that
drove the design.  The first graphic display device I remember
is the IBM 2250, well described in a page in www.ibm1130.net,
and was introduced around 1965.  I believe display memory was 4K,
but I don't know the word size (maybe 16 bits).  

> and you could even use a
> display that held the last image drawn so it didn't need to be
> refreshed.  This would allow the construction of images too slow to be
> drawn in real time.  

The Tektronix storage-tube terminals, using technology developed for
storage oscilloscopes.  The problem was that once something was drawn,
you could only erase the whole screen.  But pretty good for the time,
allowing complicated graphics without large display buffers.  I believe
they used analog circuitry to do the line drawing.

> Once the raster graphic hardware got to be better
> quality and speed the vector hardware got pushed out rather quickly.
> I guess the cost of the digital part of the hardware also had to come
> down, but it wasn't like vector graphics electronics was cheap!

The analog electronics got a little cheaper over the years, as
economy of scale helped.  The digital part got a lot cheaper,
especially large display memories.  Also, raster display allows the
use of ordinary video monitors, instead of custom displays
(all the way to the CRT, which likely needed a long-persistence 
phosphor.)  For the 2250, the refresh rate slowed as the display
got more complicated.  

Which reminds me of the story from the Boeing 777, the first
airplane designed, as I understand it, entirely by computer.
The first comment from the designers when they saw an actual
airplane was how big it looked.  (After seeing it on computer
displays for so long.)

-- glen

Article: 151321
Subject: Re: RAM - DIMM vs SO-DIMM: price vs. (hardware & software) ease of use
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Tue, 22 Mar 2011 21:06:03 +0000 (UTC)
Links: << >>  << T >>  << A >>
rickman <gnuarm@gmail.com> wrote:
(snip)

> That would be an interesting idea.  I suppose the hardware to do this
> would not be complex.  A simple counter would do the trick, but it
> would have to run faster than the dot clock and there is plenty of
> potential for the size to blow up with certain degenerate patterns
> such as alternating 1's and 0's.

Oh, I forgot in my last post: remember that the Don Lancaster TV
Typewriter used MOS dynamic shift registers for display memory.
512 bits (a lot for the time) in an 8 pin package.  You have
to keep shifting or the bits go away.  That was before affordable
RAM, either DRAM or SRAM (around the time of the 1101 SRAM and 
1103 DRAM).  The HP 9810A desktop programmable calculator used
1103 1K-bit PMOS DRAM.

In the 8080 days, the 2102 1Kx1 SRAM was popular, especially as
reliable DRAM boards were hard to find.

-- glen

Article: 151322
Subject: Re: SRL as a synchroniser
From: hal-usenet@ip-64-139-1-69.sjc.megapath.net (Hal Murray)
Date: Tue, 22 Mar 2011 19:53:58 -0500
Links: << >>  << T >>  << A >>
In article <4d88959e$0$11105$c3e8da3@news.astraweb.com>,
 Allan Herriman <allanherriman@hotmail.com> writes:

>This code has some clock domain crossing circuits that use two flip flops 
>cascaded in the destination clock domain (it's the usual synchroniser 
>that you've all seen before).
>
>This code worked fine in whatever version of XST it was originally 
>written for, but with 13.1 I find that these flip flops are being 
>replaced by SRL32 primitives, the default inference rule being that 
>whenever XST sees two or more flip flops (without a reset and with the 
>same control inputs) in series, it replaces them with an SRL.

Good catch.  How did you notice it?


>BTW, there are many possible fixes:
>
>- Put an attribute (SHREG_something_or_other) on each and every 
>synchroniser to disable SRL inference.

Can you kill two birds with one stone by requiring timing that
the SRLs can't meet?  That would also constrain the FFs to be
close together so you didn't use up all the slack time routing
across the chip.

-- 
These are my opinions, not necessarily my employer's.  I hate spam.


Article: 151323
Subject: Re: Video Framebuffer using Nexys2 (Spartan-3E)
From: "Ste" <lordste2@n_o_s_p_a_m.n_o_s_p_a_m.hotmail.com>
Date: Wed, 23 Mar 2011 01:44:28 -0500
Links: << >>  << T >>  << A >>
Thanks, Joel, for all the answers and the motivating attitude.  One of the
other members of our squad is pretty keen on VHDL.  We'll try our best!
	   
					
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 151324
Subject: Xilinx EDK - max array size
From: Tobias Baumann <ttobsen@hotmail.com>
Date: Wed, 23 Mar 2011 12:00:31 +0100
Links: << >>  << T >>  << A >>
Hi

I want to handle an array with some hundred elements.

Here is an example code:

#include "xmk.h"
#include "sys/init.h"
#include "platform.h"
#include "xbasic_types.h"

#include <stdio.h>

#define BUFFER_LENGTH 496

void printBuffer(Xuint8* buffer, Xuint16 length)
{

	Xuint16 i = 0;

	for(i=0; i<length; i++) {
		printf("buffer %u: %x\r\n", i, buffer[i]);
	}

	return;
}

void *hello_world(void *arg)
{

	Xuint32 i = 0;
	Xuint8 buffer[BUFFER_LENGTH];

	for(i=0; i<BUFFER_LENGTH; i++) {
		buffer[i] = 0xAA;
	}

	printBuffer(buffer, BUFFER_LENGTH);

     print("Hello World\r\n");

     return 0;
}

int main()
{
     init_platform();

     /* Initialize xilkernel */
     xilkernel_init();

     /* add a thread to be launched once xilkernel starts */
     xmk_add_static_thread(hello_world, 0);

     /* start xilkernel - does not return control */
     xilkernel_start();

     /* Never reached */
     cleanup_platform();

     return 0;
}

The maximum buffer length can be 496. If I increase the heap and stack 
size from 1 kB to 2 kB, the maximum buffer length can be 772. It's a 
bit weird, because I doubled the stack size but can't double the array 
size. In another program I have a stack and heap size of 20 MB and can 
only create an array with 256 elements.

So I need some basic information about where to look for the mistake in 
my program and how to solve it, and about where the limits are. For 
example, even when I increase the stack size to 1000 kB, the program 
above won't run with a 5000-element array. I have no idea where the 
problem is or where to search.
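For readers hitting the same wall: a plausible explanation (an
assumption on my part, worth checking against the Xilkernel docs) is
that hello_world runs as a Xilkernel thread, and each thread gets its
own fixed-size stack set by the pthread_stack_size parameter in the MSS
file; the linker script's stack/heap sizes only apply to main(). A
sketch of one workaround is to keep large buffers out of the thread
stack entirely (standard uint8_t is used here in place of Xuint8 so the
snippet stands alone):

```c
#include <stdint.h>
#include <string.h>

#define BUFFER_LENGTH 5000

/* Moving the buffer out of the thread function puts it in .bss,
 * sized by the linker at build time, instead of on the thread's
 * fixed-size Xilkernel stack. */
static uint8_t buffer[BUFFER_LENGTH];

void fill_buffer(void)
{
    memset(buffer, 0xAA, sizeof buffer);
}
```

malloc from the heap would work too once the heap is actually large
enough, but static storage makes the memory cost visible at link time.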

Thanks a lot for helping.

Greets,
Tobias


