Messages from 155325

Article: 155325
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sun, 23 Jun 2013 14:10:19 -0700 (PDT)
Links: << >>  << T >>  << A >>
I can see the need for some kind of semaphore mechanism if you have one
or more caches sitting between the processor and the main memory (a
"memory hierarchy") but that certainly isn't the case for my (Hive)
processor, which is targeted towards small processor-centric tasks in
FPGAs.  Main memory is static, dual port, and connected directly to the
core.

Article: 155326
Subject: Re: Ask about finding maximum and second's maximum number in array
From: Richard Damon <Richard@Damon-Family.org>
Date: Sun, 23 Jun 2013 17:10:42 -0400
On 6/19/13 9:39 PM, rickman wrote:
> On 6/19/2013 11:40 AM, jonesandy@comcast.net wrote:
>> To borrow Gabor's card game analogy...
>>
>> You have two stacks, (highest and 2nd highest)
>>
>> If the drawn card is same or higher than the highest stack, then
>>
>>    move the top card from the highest stack to the 2nd highest stack,
>>    move the drawn card to the highest stack.
>>
>> else if the drawn card is same or higher than the 2nd highest stack, then
>>
>>    move the drawn card to the 2nd highest stack.
>>
>> draw another card and repeat.
> 
> They don't need to be stacks.  You just need to have two holding spots
> (registers) and initialize them to something less than anything you will
> have on the input.  Then on each draw of a card (or sample on the input)
> you compare to both spots, if the input is higher than the "highest"
> spot you save it there and put the old highest on the "second highest"
> spot.  If not, but it is higher than the "second highest" you put it there.
> 
> Gabor was using a stack because he thought it would get him both the
> highest and the second highest with one compare operation, but it didn't
> work.  Two compares are needed for each input.
> 
> In your approach your compare is "higher or same", why do you need to do
> anything if they are the same?  Not that it is a big deal, but in some
> situations this could require extra work.
> 
You actually only need to compare most of the entries to the second
highest register; if the entry isn't higher, you can discard it.
Only if it is higher than the second highest do you need to compare it
to the highest, to see whether the new item goes into the highest or
second highest spot.
I.e.:

Compare the drawn card to the 2nd highest stack; if it is not higher,
discard it and repeat.  If it is higher (a tie doesn't really matter),
discard the 2nd highest stack and compare the new card to the highest
stack.

If it is not higher, the new card goes into the 2nd highest stack; if
it is higher, the item in the highest goes to 2nd highest, and the new
card goes to highest.
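
Richard's refinement above (one compare for most entries) can be
sketched in C; the names and the top2_t struct are illustrative, not
from anyone's posted code:

```c
#include <limits.h>

/* Track the highest and second-highest values seen in a stream.
 * Most entries cost one compare (against the second highest); only
 * entries that beat it need the second compare against the highest. */
typedef struct { int hi, second; } top2_t;

static void top2_init(top2_t *t) {
    t->hi = INT_MIN;        /* "less than anything on the input" */
    t->second = INT_MIN;
}

static void top2_add(top2_t *t, int x) {
    if (x <= t->second)
        return;                 /* common case: one compare, discard */
    if (x > t->hi) {
        t->second = t->hi;      /* old highest demoted to second */
        t->hi = x;
    } else {
        t->second = x;          /* between the two (ties with highest land here) */
    }
}
```

A tie with the current highest falls into the second-highest slot,
matching the card-game handling discussed above.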



Article: 155327
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 18:30:28 -0400
On 6/23/2013 4:52 PM, Eric Wallin wrote:
> On Sunday, June 23, 2013 1:24:48 PM UTC-4, rickman wrote:
>
>> I'd be interested in reading the design document, but this is what I
>> find at Opencores...
>>
>> SVN: No files checked in
>
> I believe SVN is for the verilog, which isn't there quite yet, but the document is.  Click on "Downloads" at the upper right.
>
> Here is a link to it:
>
> http://opencores.org/usercontent,doc,1371986749

Ok, this is certainly a lot more document than is typical for CPU 
designs on opencores.

I'm not sure why you need to insert so much opinion of stack machines in 
the discussions of the paper.  Some of what I have read so far is not 
very clear exactly what your point is and just comes off as a general 
bias about stack machines including those who promote them.  I don't 
mind at all when technical shortcomings are pointed out, but I'm not 
excited about reading the sort of opinion shown...

"Stack machines are (perhaps somewhat inadvertently) portrayed as a 
panacea for all computing ills"  I don't recall ever hearing anyone 
saying that.  Certainly there are a lot of claims for stack machines, 
but the above is almost hyperbole.

There is a lot to digest in your document.  I'll spend some time looking 
at it.

-- 

Rick

Article: 155328
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sun, 23 Jun 2013 23:43:06 +0100
Eric Wallin wrote:
> I can see the need for some kind of semaphore mechanism if you have one or more caches sitting between the processor and the main memory (a "memory hierarchy") but that certainly isn't the case for my (Hive) processor, which is targeted towards small processor-centric tasks in FPGAs.  Main memory is static, dual port, and connected directly to the core.

Unless your system is constrained in ways you haven't mentioned...

Do you have interrupts? If so you need semaphores.

Can more than one "source" cause a memory location
to be read or written within one processor instruction
cycle? If so you need semaphores.

I first realised the need for atomic operations when
doing hard real-time work on a 6800 (no caches,
single processor) as a vacation student. Then I did
some research and found out about semaphores. Atomicity
could only be guaranteed by disabling interrupts
for the critical operations. And if you ran two 6809s
off opposite clock phases, even that wasn't sufficient.
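
A minimal sketch of the technique Tom describes, under the
single-processor assumption: a Dijkstra-style binary semaphore whose
read-modify-write is made atomic only by masking interrupts.  The
irq_disable()/irq_enable() names are hypothetical stand-ins for the
real mask instructions (SEI/CLI on the 6800); here they just toggle a
host-side flag so the sketch runs anywhere.

```c
/* Binary semaphore for a single-core, cacheless target: the
 * read-modify-write on 'count' is made atomic by masking interrupts
 * around it.  irq_disable()/irq_enable() are hypothetical stand-ins
 * for the real mask instructions; on a host they just set a flag. */
static int irq_masked;
static void irq_disable(void) { irq_masked = 1; }   /* e.g. SEI */
static void irq_enable(void)  { irq_masked = 0; }   /* e.g. CLI */

typedef struct { volatile int count; } bsem_t;

static int bsem_take(bsem_t *s) {   /* nonblocking P(): 1 = acquired */
    int got = 0;
    irq_disable();                  /* nothing can interleave from here... */
    if (s->count > 0) { s->count--; got = 1; }
    irq_enable();                   /* ...to here, on a single processor */
    return got;
}

static void bsem_give(bsem_t *s) {  /* V() */
    irq_disable();
    s->count++;
    irq_enable();
}
```

As Tom notes, this stops working the moment a second bus master can
slip between the masked load and store, which is why multiprocessor
systems need an atomic read-modify-write in the memory system itself.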

Article: 155329
Subject: Re: New soft processor core paper publisher?
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 24 Jun 2013 00:15:09 +0000 (UTC)
rickman <gnuarm@gmail.com> wrote:
> On 6/23/2013 4:52 PM, Eric Wallin wrote:

(snip)
>> I believe SVN is for the verilog, which isn't there quite yet, 
>> but the document is.  Click on "Downloads" at the upper right.

(snip)
>> http://opencores.org/usercontent,doc,1371986749
 
> Ok, this is certainly a lot more document than is typical for CPU 
> designs on opencores.
 
> I'm not sure why you need to insert so much opinion of stack 
> machines in the discussions of the paper.  Some of what I have 
> read so far is not very clear exactly what your point is and 
> just comes off as a general bias about stack machines including 
> those who promote them.  I don't mind at all when technical 
> shortcomings are pointed out, but I'm not excited about reading 
> the sort of opinion shown...
 
> "Stack machines are (perhaps somewhat inadvertently) portrayed as a 
> panacea for all computing ills"  I don't recall ever hearing anyone 
> saying that.  Certainly there are a lot of claims for stack 
> machines, but the above is almost hyperbole.

I suppose. Stack machines are pretty much out of style now.
One reason is that current compiler technology has a hard
time generating good code for them. 

Well, stack machines, such as the Burroughs B5500, were popular
when machines had a small number of registers. They could be
implemented with most or all of the stack in main memory
(usually magnetic core). They allow for smaller instructions,
even as addressing space gets larger. (The base-displacement
addressing for S/360 was also to help with addressing.)

Now, I suppose if stack machines had stayed popular, that compiler
technology would have developed to use them more efficiently,
but general registers, 16 or more of them, allow for flexibility
in addressing that stacks make difficult.

Now, you could do like the x87, with a stack that also allows
one to address any stack element. The best, and some of the 
worst, of both worlds.

-- glen
 
> There is a lot to digest in your document.  I'll spend some 
> time looking at it.
 

Article: 155330
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sun, 23 Jun 2013 18:23:26 -0700 (PDT)
On Sunday, June 23, 2013 6:43:06 PM UTC-4, Tom Gardner wrote:

> Do you have interrupts?

Yes, one per thread.

> If so you need semaphores.

Not sure I follow, but I'm not sure you've read the paper.

> Can more than one "source" cause a memory location
> to be read or written within one processor instruction
> cycle? If so you need semaphores.

If the programmer writes the individual thread programs so that two
threads never write to the same address then by definition it can't
happen (unless there is a bug in the code).  I probably haven't thought
about this as much as you have, but I don't see the fundamental need
for more hardware if the programmer does his/her job.

Article: 155331
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 21:57:19 -0400
On 6/23/2013 9:23 PM, Eric Wallin wrote:
> On Sunday, June 23, 2013 6:43:06 PM UTC-4, Tom Gardner wrote:
>
>> Do you have interrupts?
>
> Yes, one per thread.
>
>> If so you need semaphores.
>
> Not sure I follow, but I'm not sure you've read the paper.

I used to know this stuff, but it has been a long time.  I think what 
Tom is referring to may not apply if you don't run more than one task on 
a given processor.  The issue is that to implement a semaphore you have 
to do a read-modify-write operation on a word in memory.  If "anyone" 
else can get in the middle of your operation the semaphore can be 
corrupted or fail.  But I'm not sure just using an interrupt means you 
will have problems; I think it simply means the door is open, since 
context can be switched, causing the semaphore to fail.

But as I say, it has been a long time and there are different reasons 
for semaphores and different implementations.


>> Can more than one "source" cause a memory location
>> to be read or written within one processor instruction
>> cycle? If so you need semaphores.
>
> If the programmer writes the individual thread programs so that two threads never write to the same address then by definition it can't happen (unless there is a bug in the code).  I probably haven't thought about this as much as you have, but I don't see the fundamental need for more hardware if the programmer does his/her job.

There are other resources that might be shared.  Or maybe not, but if 
so, you need to manage it.

Wow, I never realized how much I have forgotten.

-- 

Rick

Article: 155332
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 22:06:58 -0400
On 6/23/2013 8:15 PM, glen herrmannsfeldt wrote:
> rickman<gnuarm@gmail.com>  wrote:
>> On 6/23/2013 4:52 PM, Eric Wallin wrote:
>
> (snip)
>>> I believe SVN is for the verilog, which isn't there quite yet,
>>> but the document is.  Click on "Downloads" at the upper right.
>
> (snip)
>>> http://opencores.org/usercontent,doc,1371986749
>
>> Ok, this is certainly a lot more document than is typical for CPU
>> designs on opencores.
>
>> I'm not sure why you need to insert so much opinion of stack
>> machines in the discussions of the paper.  Some of what I have
>> read so far is not very clear exactly what your point is and
>> just comes off as a general bias about stack machines including
>> those who promote them.  I don't mind at all when technical
>> shortcomings are pointed out, but I'm not excited about reading
>> the sort of opinion shown...
>
>> "Stack machines are (perhaps somewhat inadvertently) portrayed as a
>> panacea for all computing ills"  I don't recall ever hearing anyone
>> saying that.  Certainly there are a lot of claims for stack
>> machines, but the above is almost hyperbole.
>
> I suppose. Stack machines are pretty much out of style now.
> One reason is that current compiler technology has a hard
> time generating good code for them.

I think you might be referring to the sort of stack machines used in 
minicomputers 30 years ago.  For FPGA implementations stack CPUs are 
alive and kicking.  Forth seems to do a pretty good job with them.  What 
is the problem with other languages?


> Well, stack machines, such as the Burroughs B5500, were popular
> when machines had a small number of registers. They could be
> implemented with most or all of the stack in main memory
> (usually magnetic core). They allow for smaller instructions,
> even as addressing space gets larger. (The base-displacement
> addressing for S/360 was also to help with addressing.)

Yes, you *are* talking about 30 year old machines, or even 40 year old 
machines.


> Now, I suppose if stack machines had stayed popular, that compiler
> technology would have developed to use them more efficiently,
> but general registers, 16 or more of them, allow for flexibility
> in addressing that stacks make difficult.
>
> Now, you could do like the x87, with a stack that also allows
> one to address any stack element. The best, and some of the
> worst, of both worlds.

You are reading my mind!  That is what I spent some time looking at this 
past winter.  Then I got busy with work and have had to put it aside.

Eric's machine is a bit different having four stacks for each processor 
and allowing each one to be popped or not rather than any addressing on 
the stack itself.  Interesting, but not so small as the two stack CPUs.

-- 

Rick

Article: 155333
Subject: Re: New soft processor core paper publisher?
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 24 Jun 2013 02:07:37 +0000 (UTC)
Eric Wallin <tammie.eric@gmail.com> wrote:

(snip)
> If the programmer writes the individual thread programs so that 
> two threads never write to the same address then by definition 
> it can't happen (unless there is a bug in the code).  

The question is related to communication between threads.
If they are independent, processing independent data, then no
problem. Usually they at least need to communicate with the OS
(or outside world, in general), which often needs semaphores.

> I probably haven't thought about this as much as you have, 
> but I don't see the fundamental need for more hardware if 
> the programmer does his/her job.

-- glen

Article: 155334
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sun, 23 Jun 2013 19:54:14 -0700 (PDT)
On Sunday, June 23, 2013 6:30:28 PM UTC-4, rickman wrote:

> I'm not sure why you need to insert so much opinion of stack machines in
> the discussions of the paper.  Some of what I have read so far is not
> very clear exactly what your point is and just comes off as a general
> bias about stack machines including those who promote them.  I don't
> mind at all when technical shortcomings are pointed out, but I'm not
> excited about reading the sort of opinion shown...

Point taken.  I suppose I'm trying to spare others from wasting too
much time and energy on canonical one and two stack machines.  There
just aren't enough stacks, so unless you want to deal with the top
entry or two right now you'll be digging around, wasting both
programming and real time, and getting confused.  And they
automatically toss data away that you often very much need, so you
waste more time copying it or reloading it or whatever.  I spent years
trying to like them, thinking the problem was me.  The J processor
really helped break the spell.

Not saying I have all the answers, I hope the paper doesn't come across
that way, but I do have to sell it to some degree (the paper ends with
the down sides that I'm aware of, I'm sure there are more).

> "Stack machines are (perhaps somewhat inadvertently) portrayed as a
> panacea for all computing ills"  I don't recall ever hearing anyone
> saying that.  Certainly there are a lot of claims for stack machines,
> but the above is almost hyperbole.

Defense exhibit A:

http://www.ultratechnology.com/cowboys.html

Maybe I'm seeing things that aren't there, but almost every web site,
paper, and book on stack machines and Forth that I've encountered has a
vibe of "look at this revolutionary idea that the man has managed to
keep down!"  Absolutely no down sides mentioned, so the hapless noob is
left with much too flattering of an impression.  In my case this false
impression was quite lasting, so I guess I've got something of an axe
to grind.  Perhaps I'll moderate this in future releases of the design
document.

Article: 155335
Subject: Re: New soft processor core paper publisher?
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 24 Jun 2013 03:18:21 +0000 (UTC)
rickman <gnuarm@gmail.com> wrote:

(snip, I wrote)
>> I suppose. Stack machines are pretty much out of style now.
>> One reason is that current compiler technology has a hard
>> time generating good code for them.
 
> I think you might be referring to the sort of stack machines used in 
> minicomputers 30 years ago.  For FPGA implementations stack CPUs are 
> alive and kicking.  Forth seems to do a pretty good job with them.  What 
> is the problem with other languages?

The code generators designed for register machines, such as that
used by GCC or LCC, don't adapt to stack machines well. 

As users of HP calculators know, given an expression with unrelated
arguments, it isn't hard to evaluate using a stack. But suppose the
expression has some common subexpressions. You want to evaluate the
expression, computing each common subexpression only
once. It is not so easy to get things into the right place on
the stack, such that they are at the top at the right time.
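
The point can be seen with a toy stack evaluator (entirely
illustrative C, not any real ISA): evaluating (2+3)*(2+3) with the sum
computed only once forces an explicit DUP so the shared value is on
top of the stack at the right time.

```c
/* Toy stack machine: just enough to show that reusing a common
 * subexpression needs explicit stack shuffling (DUP here). */
enum { PUSH, ADD, MUL, DUP };
typedef struct { int op, arg; } insn_t;

static int run(const insn_t *prog, int n) {
    int stk[16], sp = 0;
    for (int i = 0; i < n; i++) {
        switch (prog[i].op) {
        case PUSH: stk[sp++] = prog[i].arg; break;
        case ADD:  sp--; stk[sp - 1] += stk[sp]; break;
        case MUL:  sp--; stk[sp - 1] *= stk[sp]; break;
        case DUP:  stk[sp] = stk[sp - 1]; sp++; break;
        }
    }
    return stk[sp - 1];
}

/* (2+3)*(2+3): the ADD result is duplicated rather than recomputed. */
static const insn_t square_sum[] = {
    {PUSH, 2}, {PUSH, 3}, {ADD, 0}, {DUP, 0}, {MUL, 0}
};
```

With more than one live subexpression, or uses that are not adjacent,
a single DUP no longer suffices and the code degenerates into
SWAP/ROT traffic, which is exactly the bookkeeping a register
allocator avoids.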

-- glen

Article: 155336
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Sun, 23 Jun 2013 22:21:14 -0500
Rob Doyle wrote:
> On 6/23/2013 10:27 AM, rickman wrote:
>> On 6/23/2013 5:31 AM, Tom Gardner wrote:
>>> rickman wrote:
>>>> On 6/22/2013 5:57 PM, Tom Gardner wrote:
>>>
>>>>> How do you propose to implement mailboxes reliably?
>>>>> You need to think of all the possible memory-access
>>>>> sequences, of course.
>>>>
>>>> I don't get the question. Weren't semaphores invented a long time ago
>>>> and require no special support from the processor?
>>>
>>> Of course they are one communications mechanism, but not
>>> the only one. Implementation can be made impossible by
>>> some design decisions. Whether support is "special" depends
>>> on what you regard as "normal", so I can't give you an
>>> answer to that one!
>>
>> What aspect of a processor can make implementation of semaphores
>> impossible?
>
> Lack of atomic operations.
>
> Rob.
>
>

No. The only requirement for semaphores
to work is to be able to turn off interrupts briefly.


--
Les Cargill

Article: 155337
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 23:42:33 -0400
On 6/23/2013 11:18 PM, glen herrmannsfeldt wrote:
> rickman<gnuarm@gmail.com>  wrote:
>
> (snip, I wrote)
>>> I suppose. Stack machines are pretty much out of style now.
>>> One reason is that current compiler technology has a hard
>>> time generating good code for them.
>
>> I think you might be referring to the sort of stack machines used in
>> minicomputers 30 years ago.  For FPGA implementations stack CPUs are
>> alive and kicking.  Forth seems to do a pretty good job with them.  What
>> is the problem with other languages?
>
> The code generators designed for register machines, such as that
> used by GCC or LCC, don't adapt to stack machines well.

That shouldn't be a surprise to anyone.  The guy who designed the ZPU 
found that out the hard way.


> As users of HP calculators know, given an expression with unrelated
> arguments, it isn't hard to evaluate using a stack. But consider that
> the expression might have some common subexpressions? You want to
> evaluate the expression, evaluating the common subexpressions only
> once. It is not so easy to get things into the right place on
> the stack, such that they are at the top at the right time.

Tell me about it ;^)

-- 

Rick

Article: 155338
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Mon, 24 Jun 2013 00:07:23 -0400
On 6/23/2013 10:54 PM, Eric Wallin wrote:
> On Sunday, June 23, 2013 6:30:28 PM UTC-4, rickman wrote:
>
>> I'm not sure why you need to insert so much opinion of stack machines in
>> the discussions of the paper.  Some of what I have read so far is not
>> very clear exactly what your point is and just comes off as a general
>> bias about stack machines including those who promote them.  I don't
>> mind at all when technical shortcomings are pointed out, but I'm not
>> excited about reading the sort of opinion shown...
>
> Point taken.  I suppose I'm trying to spare others from wasting too much time and energy on canonical one and two stack machines.  There just aren't enough stacks, so unless you want to deal with the top entry or two right now you'll be digging around, wasting both programming and real time, and getting confused.  And they automatically toss data away that you often very much need, so you waste more time copying it or reloading it or whatever.  I spent years trying to like them, thinking the problem was me.  The J processor really helped break the spell.
>
> Not saying I have all the answers, I hope the paper doesn't come across that way, but I do have to sell it to some degree (the paper ends with the down sides that I'm aware of, I'm sure there are more).

I'm glad you can take (hopefully) constructive criticism.  I was 
concerned when I wrote the above that it might be a bit too blunt.

It will be a while before I get to the end of your paper.  Do you 
describe the applications you think the design would be good for?  One 
reason I don't completely agree with you about the suitability of MISC 
type CPUs is that there are many apps with different requirements.  Some 
will definitely do better with a design other than yours.  I wonder if 
you had some specific class of applications that you were seeing that 
you didn't think the MISC approach was optimal for or if it was just the 
various "features" of MISC that didn't suit your tastes.


>> "Stack machines are (perhaps somewhat inadvertently) portrayed as a
>> panacea for all computing ills"  I don't recall ever hearing anyone
>> saying that.  Certainly there are a lot of claims for stack machines,
>> but the above is almost hyperbole.
>
> Defense exhibit A:
>
> http://www.ultratechnology.com/cowboys.html
>
> Maybe I'm seeing things that aren't there, but almost every web site, paper, and book on stack machines and Forth that I've encountered has a vibe of "look at this revolutionary idea that the man has managed to keep down!"  Absolutely no down sides mentioned, so the hapless noob is left with much too flattering of an impression.  In my case this false impression was quite lasting, so I guess I've got something of an axe to grind.  Perhaps I'll moderate this in future releases of the design document.

I can't argue with you on this one.  When I first saw the GA144 design 
it sounded fantastic!  But that is typical corporate product hype.  The 
reality of the chip is very different.  When it comes to CPU cores for 
FPGAs I don't see a lot of difference.  Check out some of the other 
offerings on Opencores.  Everyone touts their design as something pretty 
special even if they are just one of two or three that do the same 
thing!  I think they had some five or six PIC implementations and all 
seemed to say they were the best!

I do have to say I am not in complete agreement with you about the 
issues of MISC machines.  Yes, there can be a lot of stack ops compared 
to a register machine.  But these can be minimized with careful 
programming.  I know that from experience.  However, part of the utility 
of a design is the ease of programming efficiently.  I haven't looked at 
yours yet, but just picturing the four stacks makes it seem pretty 
simple... so far. :^)

I have to say I'm not crazy about the large instruction word.  That is 
one of the appealing things about MISC to me.  I work in very small 
FPGAs and 16 bit instructions are better avoided if possible, but that 
may be a red herring.  What matters is how many bytes a given program 
uses, not how many bits are in an instruction.

I am supposed to present to the SVFIG and I think your design would be a 
very interesting part of the presentation unless you think you would 
rather present yourself.  I'm sure they would like to hear about it and 
they likely would be interested in your opinions on MISC.  I know I am.

-- 

Rick

Article: 155339
Subject: Re: New soft processor core paper publisher?
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 24 Jun 2013 04:49:50 +0000 (UTC)
Les Cargill <lcargill99@comcast.com> wrote:
> Rob Doyle wrote:

(snip)

>> Lack of atomic operations.
 
> No. The only requirement for semaphores
> to work is to be able to turn off interrupts briefly.

What about other processors or I/O using the same memory?

-- glen

Article: 155340
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Mon, 24 Jun 2013 08:24:44 +0100
Eric Wallin wrote:
> On Sunday, June 23, 2013 6:43:06 PM UTC-4, Tom Gardner wrote:
>
>> Do you have interrupts?
>
> Yes, one per thread.
>
>> If so you need semaphores.
>
> Not sure I follow, but I'm not sure you've read the paper.

Nope. I've too many other things to understand in detail.
I have no bandwidth to debug your design.


>> Can more than one "source" cause a memory location
>> to be read or written within one processor instruction
>> cycle? If so you need semaphores.
>
> If the programmer writes the individual thread programs so that two threads never write to the same address then by definition it can't happen (unless there is a bug in the code).  I probably haven't thought about this as much as you have, but I don't see the fundamental need for more hardware if the programmer does his/her job.

The problems that arise in the absence of atomic operations
and/or semaphores are well known. Any respectable
university-level software course will cover them
and the various solutions.

Consider trying to pass a message consisting of one
integer from one thread to another such that the
receiving thread is guaranteed to be able to pick
it up exactly once.



Article: 155341
Subject: Pure HDL Xilinx Zynq Arm Instantiation
From: peter dudley <padudle@gmail.com>
Date: Mon, 24 Jun 2013 03:03:15 -0700 (PDT)
Hello All,

I have a Xilinx Zynq development board and I am starting to teach
myself to build systems for Zynq.  The recommended flow described in
UG873 is a very long sequence of graphical menu clicks, pull-downs and
forms.  The tools then produce a great deal of machine generated code.

I am wondering if it is possible to use a more conventional approach
to building hardware and connecting it to the AXI bus of the ARM
processor.  I greatly prefer to directly instantiate components in my
HDL code.  I find straight HDL development easier to maintain in the
long run and less sensitive to changes in FPGA compiler tools.

Has anyone on this group succeeded in going around the PlanAhead/XPS
graphical flow for building systems for the Zynq ARM?

Any advice or opinions are appreciated.

  Pete




Article: 155342
Subject: FPGA Exchange
From: Guy Eschemann <Guy.Eschemann@gmail.com>
Date: Mon, 24 Jun 2013 03:07:07 -0700 (PDT)
I'd like to introduce a new FPGA discussion forum. It's called FPGA Exchange, and you can check it out at: http://fpga-exchange.com

Feel free to jump in, create new topics, or answer existing ones.

Guy.

Article: 155343
Subject: Re: FPGA Exchange
From: Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de>
Date: Mon, 24 Jun 2013 11:20:13 +0000 (UTC)
Guy Eschemann <Guy.Eschemann@gmail.com> wrote:
> I'd like to introduce a new FPGA discussion forum. It's called
> FPGA Exchange, and you can check it out at: http://fpga-exchange.com

> Feel free to jump in, create new topics, or answer existing ones.

Any reason for trying to split up the community?

-- 
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------

Article: 155344
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Mon, 24 Jun 2013 05:03:52 -0700 (PDT)
On Monday, June 24, 2013 3:24:44 AM UTC-4, Tom Gardner wrote:

> Consider trying to pass a message consisting of one
> integer from one thread to another such that the
> receiving thread is guaranteed to be able to pick
> it up exactly once.

Thread A works on the integer value and when it is done it writes it
to location Z.  It then reads a value at location X, increments it,
and writes it back to location X.

Thread B has been repeatedly reading location X and notices it has
been incremented.  It reads the integer value at Z, performs some
function on it, and writes it back to location Z.  It then reads a
value at Y, increments it, and writes it back to location Y to let
thread A know it took, worked on, and replaced the integer at Z.

The above seems airtight to me if reads and writes to memory are not
cached or otherwise delayed, and I don't see how interrupts are
germane, but perhaps I haven't taken everything into account.
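
A host-side C sketch of the handoff as Eric describes it (names are
illustrative; it is run here as straight-line code, since the argument
assumes uncached, in-order memory and exactly one writer per
location):

```c
/* One-word mailbox: A writes Z then bumps X; B sees X change,
 * transforms Z in place, then bumps Y as the acknowledgement.
 * Each location has exactly one writer. */
static volatile int Z;   /* the message word          */
static volatile int X;   /* written only by thread A  */
static volatile int Y;   /* written only by thread B  */

static void a_send(int v) {
    Z = v;               /* publish the value first...     */
    X = X + 1;           /* ...then signal "new data at Z" */
}

static int b_poll(int last_x) {   /* returns 1 if a message was taken */
    if (X == last_x)
        return 0;                 /* nothing new yet */
    Z = Z * 2;                    /* "performs some function on it"   */
    Y = Y + 1;                    /* ack: took, worked on, replaced Z */
    return 1;
}
```

The ordering of the two stores in a_send is the load-bearing part: if
the hardware can reorder them, or another context can interleave
between them, B can observe the flag before the data is in place.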

Article: 155345
Subject: Re: Pure HDL Xilinx Zynq Arm Instantiation
From: Guy Eschemann <Guy.Eschemann@gmail.com>
Date: Mon, 24 Jun 2013 05:20:00 -0700 (PDT)
I've seen people do just that, but it's a very tedious and error-prone
task.  If I had to do it myself, I would first generate a known-good
system in Xilinx Platform Studio (XPS), and use the generated HDL code
as a starting point.

Guy Eschemann
Ingenieurbüro ESCHEMANN
Am Sandfeld 17a
76149 Karlsruhe, Germany

Tel.: +49 (0) 721 170 293 89
Fax: +49 (0) 721 170 293 89 - 9
Guy.Eschemann@gmail.com
Follow me on Twitter: @geschema
http://noasic.com
NEW: http://fpga-exchange.com
http://fpga-news.de

Article: 155346
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Mon, 24 Jun 2013 13:30:46 +0100
Links: << >>  << T >>  << A >>
Eric Wallin wrote:
> On Monday, June 24, 2013 3:24:44 AM UTC-4, Tom Gardner wrote:
>
>> Consider trying to pass a message consisting of one
>> integer from one thread to another such that the
>> receiving thread is guaranteed to be able to pick
>> it up exactly once.
>
> Thread A works on the integer value and when it is done it writes it to location Z.  It then reads a value at location X, increments it, and writes it back to location X.
>
> Thread B has been repeatedly reading location X and notices it has been incremented.  It reads the integer value at Z, performs some function on it, and writes it back to location Z.  It then reads a value at Y, increments it, and writes it back to location Y to let thread A know it took, worked on, and replaced the integer at Z.
>
> The above seems airtight to me if reads and writes to memory are not cached or otherwise delayed, and I don't see how interrupts are germane, but perhaps I haven't taken everything into account.
>

Consider what happens if an interrupt occurs at an inopportune moment in the above sequence and the other thread runs: you can get double or missed updates.

Do some research to find out why "test and set" and "compare and swap" instructions exist.
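For readers following up on Tom's suggestion: a test-and-set instruction atomically reads a flag and sets it in one indivisible step, which is precisely what a plain read-modify-write sequence cannot guarantee when two threads contend for the same location. A Python sketch of a spinlock built on it (the `AtomicFlag` class fakes the hardware instruction with a hidden lock, purely for illustration; names are mine, not from the thread):

```python
import threading

class AtomicFlag:
    """Stand-in for a hardware test-and-set location (illustrative only)."""
    def __init__(self):
        self._flag = False
        self._hw = threading.Lock()   # models the bus's atomicity guarantee

    def test_and_set(self):
        # Atomically: old = flag; flag = True; return old
        with self._hw:
            old = self._flag
            self._flag = True
            return old

    def clear(self):
        self._flag = False

lock_flag = AtomicFlag()
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        while lock_flag.test_and_set():   # spin until we observe False
            pass
        counter += 1                      # critical section
        lock_flag.clear()                 # release the lock

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()

assert counter == 40_000
```

The point of the atomicity is in the spin loop: only the thread that saw `False` come back owns the lock, no matter how the threads interleave, and no interrupt or context switch can split the read from the write.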

Article: 155347
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Mon, 24 Jun 2013 07:31:58 -0500
Links: << >>  << T >>  << A >>
glen herrmannsfeldt wrote:
> Les Cargill <lcargill99@comcast.com> wrote:
>> Rob Doyle wrote:
>
> (snip)
>
>>> Lack of atomic operations.
>
>> No. The only requirement for semaphores
>> to work is to be able to turn off interrupts briefly.
>
> What about other processors or I/O using the same memory?
>
> -- glen
>

Then it's no longer what I would call a semaphore. A
semaphore is, SFAIK, only a Dijkstra P() or V() operation.

It's not that things like this don't exist but rather that
they should be called something else, like "bus arbitration
scheme."

-- 
Les Cargill
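Les's point, that on a uniprocessor P() and V() only need interrupts masked around the counter update, can be sketched as follows. The `interrupts_disabled` context manager stands in for the CPU's interrupt-mask instructions (e.g. cli/sti on x86); here a global lock plays that role, purely for illustration, and glen's objection still applies: masking interrupts does nothing about another processor or a DMA engine touching the same memory.

```python
import threading
from contextlib import contextmanager

# On a single-CPU system, masking interrupts prevents preemption, so the
# few instructions between "disable" and "enable" execute atomically.
# A global lock models the interrupt mask in this sketch.
_interrupt_mask = threading.Lock()

@contextmanager
def interrupts_disabled():
    with _interrupt_mask:
        yield

class Semaphore:
    def __init__(self, count=0):
        self.count = count

    def P(self):                        # Dijkstra's "proberen" (wait)
        while True:
            with interrupts_disabled():
                if self.count > 0:
                    self.count -= 1
                    return
            # count was zero: spin (a real kernel would block the thread)

    def V(self):                        # Dijkstra's "verhogen" (signal)
        with interrupts_disabled():
            self.count += 1

sem = Semaphore(1)
shared = 0

def worker():
    global shared
    for _ in range(10_000):
        sem.P()
        shared += 1
        sem.V()

ts = [threading.Thread(target=worker) for _ in range(2)]
for t in ts: t.start()
for t in ts: t.join()
assert shared == 20_000
```

Anything beyond this single-CPU case, multiple processors or bus masters sharing the memory, needs hardware support (atomic instructions or, as Les puts it, a bus arbitration scheme).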



Article: 155348
Subject: Re: Pure HDL Xilinx Zynq Arm Instantiation
From: Allan Herriman <allanherriman@hotmail.com>
Date: 24 Jun 2013 12:46:35 GMT
Links: << >>  << T >>  << A >>
On Mon, 24 Jun 2013 03:03:15 -0700, peter dudley wrote:

> Hello All,
> 
> I have a Xilinx Zynq development board and I am starting to teach myself
> to build systems for Zynq.  The recommended flow described in UG873 is a
> very long sequence of graphical menu clicks, pull-downs and forms.  The
> tools then produce a great deal of machine generated code.
> 
> I am wondering if it is possible to use a more conventional approach to
> building hardware and connecting it to the AXI bus of the ARM processor.
>  I greatly prefer to directly instantiate components in my HDL code.  I
> find straight HDL development easier to maintain in the long run and less
> sensitive to changes in FPGA compiler tools.
> 
> Has anyone on this group succeeded in going around the PlanAhead/XPS
> graphical flow for building systems for the Zynq ARM?
> 
> Any advice or opinions are appreciated.


Best advice I can give is just to accept that the tools were written to 
make it difficult to do what you want to do.  Try not to get upset at 
them - it is counterproductive.

I've done it though - GUI-free Zynq scripting nirvana achieved.

The documentation is poor; the only way to work out what to do is to run 
through the GUI and see what it does.
That said, the following guides are indispensable:

- Xilinx Platform Specification Format Reference Manual (psf_rm.pdf)
- Xilinx PlanAhead Tcl Command Reference Guide (UG789)

A file & directory diff tool is useful for before/after comparisons.  I 
use winmerge.

PlanAhead can be scripted in Tcl without needing the GUI.  It saves a 
journal file (.jou maybe?) that you can use as a starting point for your 
own script.

You will need to write your design in the form of an XPS peripheral and 
integrate it using XPS.

Again, run through the tools, creating a dummy peripheral template to see 
where it puts all the files.  

Constraints:
- No inferred FPGA bidirectional I/O.  Inputs ok, outputs ok.  Inout not 
ok unless using instantiated iobuffers.  Deal with this either by 
instantiating buffers or by having separate foo_I, foo_O and foo_T ports 
instead of a single "inout" foo port on your design.  They must have 
those exact _I, _O and _T suffixes.  (BTW, _T is an active low enable.)  
The tools will do the right thing and infer the buffers for you.
N.B. Most Xilinx cores, e.g. from MIG will instantiate their I/O 
primitives, so aren't a problem here.

- Your "peripheral" must have a lower case name.

- The Zynq PS has an AXI-3 interface in silicon.  You must make an AXI-4 
or AXI-4-lite interface, and the tools will insert an AXI-3 to AXI-4 shim 
for you.  I did not find a way around this (not that it's a problem).

- Not a hard constraint, but it makes things a lot easier: Use the 
official names for the AXI-4 signals, as the tools can recognise them.

- XPS wants the peripheral files buried about 7 levels of directory 
down.  Your .pao file (see below) can point back up to your original 
source files (in a more reasonable location) but if running on Windows 
(yes, even 64 bit Windows) you will run into a hard limit of 260 
character paths.  Solution: use Linux, or copy all your source files down 
into the bowels of XPS's directory structure.

- You will need to keep the exact directory structure that the GUI tools 
create.  The tools crash a lot if you change anything from the way the 
programmers at Xilinx tested it.  This gets really hard to debug, as the 
error message is usually just "unexpected termination" or something 
equally unhelpful.


Important files:

- my_project.ppr - planahead project file, points to fileset.xml

- sources_1/fileset.xml - points to my_system.xmp

- my_system.xmp - project file, points to my_system.mhs.

- my_system.mhs - a sort of crude HDL definition of the connections 
between your core "my_peripheral", the AXI fabric, the ARM (curiously 
known as processing_system7) and the FPGA I/O.  This is created by XPS, 
but you can edit it by hand.

- my_peripheral.mpd - port definitions of your core

- my_peripheral.pao - list of source files for your core.



I hope this helps, and let us know how you get on.


Regards,
Allan

Article: 155349
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Mon, 24 Jun 2013 07:50:04 -0500
Links: << >>  << T >>  << A >>
Eric Wallin wrote:
> On Monday, June 24, 2013 3:24:44 AM UTC-4, Tom Gardner wrote:
>
>> Consider trying to pass a message consisting of one integer from
>> one thread to another such that the receiving thread is guaranteed
>> to be able to pick it up exactly once.
>
> Thread A works on the integer value and when it is done it writes it
> to location Z.  It then reads a value at location X, increments it,
> and writes it back to location X.
>
> Thread B has been repeatedly reading location X and notices it has
> been incremented.  It reads the integer value at Z, performs some
> function on it, and writes it back to location Z.  It then reads a
> value at Y, increments it, and writes it back to location Y to let
> thread A know it took, worked on, and replaced the integer at Z.
>
> The above seems airtight to me if reads and writes to memory are not
> cached or otherwise delayed, and I don't see how interrupts are
> germane, but perhaps I haven't taken everything into account.
>

http://www.acm.uiuc.edu/sigops/roll_your_own/6.a.html

--
Les Cargill


