Messages from 155300

Article: 155300
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sat, 22 Jun 2013 16:37:00 -0400
Links: << >>  << T >>  << A >>
On 6/22/2013 4:11 PM, Eric Wallin wrote:
> On Saturday, June 22, 2013 2:08:20 PM UTC-4, rickman wrote:
>
>> So what clock speeds does your processor achieve?  It is an interesting
>> idea to pipeline everything and then treat the one processor as N
>> processors running in parallel.  I think you have mentioned that here
>> before and I seem to recall taking a quick look at the idea some time
>> back.  It fits well with many of the features available in FPGAs and
>> likely would do ok in an ASIC.  I just would not have much need for it
>> in most of the things I am looking at doing.
>
> The core will do ~200 MHz in the smallest Cyclone 3 or 4 speed grade 8 (the cheapest and slowest).  It looks to the outside world like 8 independent processors (threads) running at 25 MHz, each with its own independent interrupt.  Internally each thread has 4 private general purpose stacks that are each 32 entries deep, but all threads fully share main memory (combined instruction/data).
>
>> Rather than N totally independent processors, have you considered using
>> pipelining to implement SIMD?  This could get around some of the
>> difficulties in the N wide processor like memory bandwidth.
>
> I haven't given this very much thought.  But different cores could simultaneously work on different byte fields in a word in main memory so I'm not sure HW SIMD support is all that necessary.
>
> This is just a small FPGA core, not an x86 killer.  Though it beats me why more muscular processors don't employ these simple techniques.

That is my point.  With SIMD you have 1/8th the instruction rate saving 
memory accesses, but the same amount of data can be processed.  Of 
course, it all depends on your app.

-- 

Rick

Article: 155301
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sat, 22 Jun 2013 14:11:56 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Saturday, June 22, 2013 4:22:02 PM UTC-4, Tom Gardner wrote:

> Have you defined what happens when one processor writes to a
> memory location that is being read by another processor?

Main memory is accessed by the threads sequentially, so there is no real contention possible.

> In other words, what primitives do you provide that allow
> one processor to reliably communicate with another?

None, it's all up to the programmer.  Off the top of my head, one thread might keep tabs on a certain memory location A looking for a change of some sort, perform some activity in response to this, then write to a separate location B that one or more other threads are similarly watching.

Another option (that I didn't implement, but it would be simple to do) would be to enable interrupt access via the local register set, giving threads the ability to interrupt one another for whatever reason.  But doing this via a single register could lead to confusion because there is no atomic read/write access (and I don't think it's worth implementing atomics just for this).  Each thread interrupt could be in a separate register I suppose.  With an ocean of main memory available for flags and mail boxes and such I guess I don't see the need for added complexity.
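
A minimal C sketch of the flag scheme described above, assuming each hardware thread runs ordinary C against the shared main memory; FLAG_A and FLAG_B are hypothetical fixed addresses chosen purely for illustration, not anything defined by the core:

#include <stdint.h>

/* Hypothetical fixed locations in the shared instruction/data memory. */
#define FLAG_A ((volatile uint32_t *)0x0100)  /* written by the requesting thread */
#define FLAG_B ((volatile uint32_t *)0x0104)  /* written by the watching thread   */

/* Requesting thread: post a value to A and wait for it to be echoed on B. */
static void request_and_wait(uint32_t value)
{
    *FLAG_A = value;
    while (*FLAG_B != value)
        ;                       /* threads are time-sliced, so spinning just burns this thread's slots */
}

/* Watching thread: poll A for a change, act on it, then acknowledge on B. */
static void watch_and_ack(void (*action)(uint32_t))
{
    uint32_t last = *FLAG_A;
    for (;;) {
        uint32_t now = *FLAG_A;
        if (now != last) {
            action(now);        /* perform some activity in response */
            *FLAG_B = now;      /* write the separate location B     */
            last = now;
        }
    }
}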

Article: 155302
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sat, 22 Jun 2013 14:21:51 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Saturday, June 22, 2013 4:37:00 PM UTC-4, rickman wrote:

> That is my point.  With SIMD you have 1/8th the instruction rate saving 
> memory accesses, but the same amount of data can be processed.  Of 
> course, it all depends on your app.

But this processor core doesn't have a memory bandwidth bottleneck, so the instruction rate is moot.

Main memory is a full dual port BRAM, so each thread gets a chance to read/write data and fetch an instruction every cycle.  The bandwidth is actually overkill - the fetch side write port is unused.

Article: 155303
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sat, 22 Jun 2013 14:32:43 -0700 (PDT)
Links: << >>  << T >>  << A >>
I think the design document is good enough for general public consumption, so I applied for a project over at opencores.org (they say they'll get around to it in one working day).

Still doing verification and minor code polishing, no bugs so far.  All branch immediate distances and conditionals check out; interrupts are working as expected; stack functionality, depth, and error reporting via the local register set checks out.  A log base 2 subroutine returns the same values as a spreadsheet, ditto for restoring unsigned division.  I just need to confirm a few more things like logical and arithmetic ALU operations and the code should be good to go.

Article: 155304
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 22 Jun 2013 22:57:23 +0100
Links: << >>  << T >>  << A >>
Eric Wallin wrote:
> On Saturday, June 22, 2013 4:22:02 PM UTC-4, Tom Gardner wrote:
>
>> Have you defined what happens when one processor writes to a
>> memory location that is being read by another processor?
>
> Main memory is accessed by the threads sequentially, so there is no real contention possible.

OK, so what *atomic* synchronisation primitives are available?
Classic examples involve atomic read-modify-write operations (e.g.
test and set, compare and swap). And they are bloody difficult
and non-scalable if there is any memory hierarchy.
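
For concreteness, a minimal sketch of the kind of test-and-set spinlock such primitives make possible, written against the standard C11 stdatomic.h interface rather than anything this core actually provides:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire(void)
{
    /* Atomically set the flag and get its old value; spin while it was already set. */
    while (atomic_flag_test_and_set(&lock))
        ;
}

void release(void)
{
    atomic_flag_clear(&lock);
}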


>> In other words, what primitives do you provide that allow
>> one processor to reliably communicate with another?
>
> None, it's all up to the programmer.

That raises red flags with software engineers. Infamously
with the Itanic, for example!


> Off the top of my head, one thread might keep tabs on a certain memory location A looking for a change of some sort, perform some activity in response to this, then write to a separate location B that one or more other threads are similarly watching.
>
> Another option (that I didn't implement, but it would be simple to do) would be to enable interrupt access via the local register set, giving threads the ability to interrupt one another for whatever reason.  But doing this via a single register could lead to confusion because there is no atomic read/write access (and I don't think it's worth implementing atomics just for this).  Each thread interrupt could be in a separate register I suppose.  With an ocean of main memory available for flags and mail boxes and such I guess I don't see the need for added complexity.

How do you propose to implement mailboxes reliably?
You need to think of all the possible memory-access
sequences, of course.


Article: 155305
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sat, 22 Jun 2013 23:44:44 -0400
Links: << >>  << T >>  << A >>
On 6/22/2013 5:57 PM, Tom Gardner wrote:
> Eric Wallin wrote:
>> On Saturday, June 22, 2013 4:22:02 PM UTC-4, Tom Gardner wrote:
>>
>>> Have you defined what happens when one processor writes to a
>>> memory location that is being read by another processor?
>>
>> Main memory is accessed by the threads sequentially, so there is no
>> real contention possible.
>
> OK, so what *atomic* synchronisation primitives are available?
> Classic examples involve atomic read-modify-write operations (e.g.
> test and set, compare and swap). And they are bloody difficult
> and non-scalable if there is any memory hierarchy.
>
>
>>> In other words, what primitives do you provide that allow
>>> one processor to reliably communicate with another?
>>
>> None, it's all up to the programmer.
>
> That raises red flags with software engineers. Infamously
> with the Itanic, for example!
>
>
>> Off the top of my head, one thread might keep tabs on a certain memory
>> location A looking for a change of some sort, perform some activity in
>> response to this, then write to a separate location B that one or more
>> other threads are similarly watching.
>>
>> Another option (that I didn't implement, but it would be simple to do)
>> would be to enable interrupt access via the local register set, giving
>> threads the ability to interrupt one another for whatever reason. But
>> doing this via a single register could lead to confusion because there
>> is no atomic read/write access (and I don't think it's worth
>> implementing atomics just for this). Each thread interrupt could be in
>> a separate register I suppose. With an ocean of main memory available
>> for flags and mail boxes and such I guess I don't see the need for
>> added complexity.
>
> How do you propose to implement mailboxes reliably?
> You need to think of all the possible memory-access
> sequences, of course.

I don't get the question.  Weren't semaphores invented a long time ago 
and require no special support from the processor?

-- 

Rick

Article: 155306
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sat, 22 Jun 2013 23:45:56 -0400
Links: << >>  << T >>  << A >>
On 6/22/2013 5:21 PM, Eric Wallin wrote:
> On Saturday, June 22, 2013 4:37:00 PM UTC-4, rickman wrote:
>
>> That is my point.  With SIMD you have 1/8th the instruction rate saving
>> memory accesses, but the same amount of data can be processed.  Of
>> course, it all depends on your app.
>
> But this processor core doesn't have a memory bandwidth bottleneck, so the instruction rate is moot.
>
> Main memory is a full dual port BRAM, so each thread gets a chance to read/write data and fetch an instruction every cycle.  The bandwidth is actually overkill - the fetch side write port is unused.

But that is only true if you limit yourself to on chip memory.

What is the app you designed this processor for?

-- 

Rick

Article: 155307
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sat, 22 Jun 2013 23:47:49 -0400
Links: << >>  << T >>  << A >>
On 6/22/2013 5:32 PM, Eric Wallin wrote:
> I think the design document is good enough for general public consumption, so I applied for a project over at opencores.org (they say they'll get around to it in one working day).
>
> Still doing verification and minor code polishing, no bugs so far.  All branch immediate distances and conditionals check out; interrupts are working as expected; stack functionality, depth, and error reporting via the local register set checks out.  A log base 2 subroutine returns the same values as a spreadsheet, ditto for restoring unsigned division.  I just need to confirm a few more things like logical and arithmetic ALU operations and the code should be good to go.

Someone was talking about this recently, I don't recall if it was you or 
someone else.  It was pointed out that the most important aspect of any 
core is the documentation.  opencores has lots of pretty worthless cores 
because you have to reverse engineer them to do anything with them.

-- 

Rick

Article: 155308
Subject: Re: New soft processor core paper publisher?
From: David Brown <david@westcontrol.removethisbit.com>
Date: Sun, 23 Jun 2013 09:54:10 +0200
Links: << >>  << T >>  << A >>
On 22/06/13 20:08, rickman wrote:
> On 6/22/2013 12:26 PM, Eric Wallin wrote:
>> I find it exceeding odd that the PC industry is still using x86
>> _anything_ at this point.  Apple showed us you can just dump your
>> processor and switch horses in midstream pretty much whenever you feel
>> like it (68k =>  PowerPC =>  x86) and not torch your product line /
>> lose your customer base.  I suppose having Intel and MS go belly up
>> overnight is beyond the pale and at the root of why we can't have nice
>> things.  I remember buying my first 286, imagining of all the
>> wonderful projects it would enable, and then finding out what complete
>> dogs the processor and OS were - it was quite disillusioning for the
>> big boys to sell me a lump of shit like that (and for a lot more than
>> 3 farthings).
> 
> You know why the x86 is still in use.  It is not really that bad in
> relation to the other architectures when measured objectively.  It may
> not be the best, but there is a large investment, mostly by Intel.  

The x86 was already considered an old-fashioned architecture the day it
was first released.  It was picked for the IBM PC (against the opinion
of all the technical people, who wanted the 68000) because some PHB
decided that the PC was a marketing experiment of no more than 1000 or
so units, so the processor didn't matter and they could pick the cheaper
x86 chip.

Modern x86 chips are fantastic pieces of engineering - but they are
fantastic implementations of a terrible original design.  They are the
world's best example that given enough money and clever people, you
/can/ polish a turd.

> if
> Intel doesn't change why would anyone else?  But that is being eroded by
> the ARM processors in the handheld market.  We'll see if Intel can
> continue to adapt the x86 to low power and maintain a low cost.
> 
> I don't think MS is propping up the x86.  They offer a version of
> Windows for the ARM don't they?  As you say, there is a bit of processor
> specific code but the vast bulk of it is just a matter of saying ARMxyz
> rather than X86xyz.  Developers are another matter.  Not many want to
> support yet another target, period.  If the market opens up for Windows
> on ARM devices then that can change.  In the mean time it will be
> business as usual for desktop computing.
> 

MS props up the x86 - of that there is no doubt.  MS doesn't really care
if the chips are made by Intel, AMD, or any of the other x86
manufacturers that have come and gone.

MS tried to make Windows independent of the processor architecture when
they first made Windows NT.  That original ran on x86, MIPS, PPC and
Alpha.  But they charged MIPS, PPC and Alpha for the privilege - and
when they couldn't afford the high costs to MS, they stopped paying and
MS stopped making these Windows ports.  MS did virtually nothing to
promote these ports of Windows, and almost nothing to encourage any
other developers to target them.  They just took the cash from the
processor manufacturers, and used it to split the workstation market
(which was dominated by Unix on non-x86 processors) and discourage Unix.

I don't think MS particularly cares about x86 in any way - they just
care that you run /their/ software.  Pushing x86 above every other
architecture just makes things easier and cheaper for them.

Part of the problem is that there is a non-negligible proportion of
Windows (and many third-party programs) design and code that /is/
x86-specific, and it is not separated into portability layers because it
was never designed for portability - so porting is a lot more work than
just a re-compile.  It is even more work if you want to make the code
run fast - there is a lot of "manual optimisation" in key windows code
that is fine-tuned to the x86.  For example, sometimes 8-bit variables
will be used because they are the fastest choice on old x86 processors
and modern x86 cpus handle them fine - but 32-bit RISC processors will
need extra masking and extending instructions to use them.  To be fair
on MS, such code was written long before int_fast8_t and friends came
into use.
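
A small illustration of the point, assuming a C99 toolchain with <stdint.h>; the width uint_fast8_t actually resolves to is implementation-defined:

#include <stdint.h>

/* Exactly 8 bits: a 32-bit RISC typically needs extra mask/extend
   instructions to keep 'i' wrapping correctly at 255. */
uint32_t sum_exact(const uint8_t *p)    /* sums 200 bytes */
{
    uint32_t s = 0;
    for (uint8_t i = 0; i < 200; i++)
        s += p[i];
    return s;
}

/* At least 8 bits, whatever is fastest: usually the native register
   width, so no masking or extending is required. */
uint32_t sum_fast(const uint8_t *p)     /* sums 200 bytes */
{
    uint32_t s = 0;
    for (uint_fast8_t i = 0; i < 200; i++)
        s += p[i];
    return s;
}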

However, it is a lot better these days than it used to be - there is much
less processor-specific code now that more code is written in higher level
languages, and in particular, little of the old assembly code remains.
 It is also easier on the third-party side, as steadily more developers
are making cross-platform code for Windows, MacOS and Linux - such code
is far easier to port to other processors.

As for Windows on the ARM, it is widely considered to be a bad joke.  It
exists to try and take some of the ARM tablet market, but is basically a
con - it is a poor substitute for Android or iOS as a pad system, and
like other pads, it is a poor substitute for a "real" PC for content
creation rather than just viewing.  People buy it thinking they can run
Windows programs on their new pad (they can't), or that they can use it
for MS Office applications (they can't - it's a cut-down version that
has fewer features than Polaris office on Android, and you can't do
sensible work on a pad anyway).

If ARM takes off as a replacement for x86, it will not be due to MS - it
will be due to Linux.  The first "target" is the server world - software
running on Linux servers is already highly portable across processors.
For most of the software, you have the source code and you just
re-compile (by "you", I mean usually Red Hat, Suse, or other distro).
Proprietary Linux server apps are also usually equally portable - if
Oracle sees a market for their software on ARM Linux servers, they'll do
the re-compile quickly and easily.

/Real/ Windows for ARM, if we ever see it, will therefore come first to
servers.


Article: 155309
Subject: Re: New soft processor core paper publisher?
From: David Brown <david@westcontrol.removethisbit.com>
Date: Sun, 23 Jun 2013 10:00:24 +0200
Links: << >>  << T >>  << A >>
On 23/06/13 05:44, rickman wrote:
> On 6/22/2013 5:57 PM, Tom Gardner wrote:
>> Eric Wallin wrote:
>>> On Saturday, June 22, 2013 4:22:02 PM UTC-4, Tom Gardner wrote:
>>>
>>>> Have you defined what happens when one processor writes to a
>>>> memory location that is being read by another processor?
>>>
>>> Main memory is accessed by the threads sequentially, so there is no
>>> real contention possible.
>>
>> OK, so what *atomic* synchronisation primitives are available?
>> Classic examples involve atomic read-modify-write operations (e.g.
>> test and set, compare and swap). And they are bloody difficult
>> and non-scalable if there is any memory hierarchy.
>>
>>
>>>> In other words, what primitives do you provide that allow
>>>> one processor to reliably communicate with another?
>>>
>>> None, it's all up to the programmer.
>>
>> That raises red flags with software engineers. Infamously
>> with the Itanic, for example!
>>
>>
>>> Off the top of my head, one thread might keep tabs on a certain memory
>>> location A looking for a change of some sort, perform some activity in
>>> response to this, then write to a separate location B that one or more
>>> other threads are similarly watching.
>>>
>>> Another option (that I didn't implement, but it would be simple to do)
>>> would be to enable interrupt access via the local register set, giving
>>> threads the ability to interrupt one another for whatever reason. But
>>> doing this via a single register could lead to confusion because there
>>> is no atomic read/write access (and I don't think it's worth
>>> implementing atomics just for this). Each thread interrupt could be in
>>> a separate register I suppose. With an ocean of main memory available
>>> for flags and mail boxes and such I guess I don't see the need for
>>> added complexity.
>>
>> How do you propose to implement mailboxes reliably?
>> You need to think of all the possible memory-access
>> sequences, of course.
> 
> I don't get the question.  Weren't semaphores invented a long time ago
> and require no special support from the processor?
> 

If threads can do some sort of compare-and-swap instruction, then
semaphores should be no problem with this architecture.  Without them,
there are algorithms to make semaphores but they don't scale well
(typically each semaphore needs a memory location for each thread that
might want access, and taking the semaphore requires a read of each of
these locations).  It helps if you can guarantee that your thread has a
certain proportion of the processor time - i.e., there is a limit to how
much other threads can do in between instructions of your thread.
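
Peterson's algorithm is the textbook version of such a scheme for two threads: one flag per thread plus a shared turn word.  The sketch below is plain C; whether the necessary ordering guarantees actually hold is a property of the core's strictly sequential round-robin memory access, which the sketch assumes rather than establishes:

#include <stdbool.h>

static volatile bool wants[2];   /* one entry per thread               */
static volatile int  turn;       /* arbitrates when both want the lock */

void lock(int self)              /* self is 0 or 1 */
{
    int other = 1 - self;
    wants[self] = true;
    turn = other;                            /* give way first              */
    while (wants[other] && turn == other)
        ;                                    /* spin until the other yields */
}

void unlock(int self)
{
    wants[self] = false;
}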


Article: 155310
Subject: Re: New soft processor core paper publisher?
From: David Brown <david@westcontrol.removethisbit.com>
Date: Sun, 23 Jun 2013 10:38:04 +0200
Links: << >>  << T >>  << A >>
On 22/06/13 18:18, rickman wrote:
> On 6/22/2013 11:21 AM, David Brown wrote:
>> On 22/06/13 07:23, rickman wrote:
>>> On 6/21/2013 7:18 AM, David Brown wrote:
>>>> On 21/06/13 11:30, Tom Gardner wrote:
>>>>
>>>>> I suppose I ought to change "nobody writes Forth" to
>>>>> "almost nobody writes Forth.
>>>>>
>>>>
>>>> Shouldn't that be "almost nobody Forth writes" ?
>>>
>>> I would say that was "nobody almost Forth writes". Wouldn't it be [noun
>>> [adjective] [noun [adjective]]] verb?
>>
>> I thought about that, but I was not sure. When I say "work with Forth
>> again", I have only "played" with Forth, not "worked" with it, and it
>> was a couple of decades ago.
> 
> Hey, it's not like this is *real* forth.  But looking at how some Forth
> code works for things like assemblers and my own projects, the data is
> dealt with first starting with some sort of a noun type piece of data
> (like a register) which may be modified by an adjective (perhaps an
> addressing mode) followed by others, then the final verb to complete the
> action (operation).
> 
> 
>>>> (I too would like an excuse to work with Forth again.)
>>>
>>> What do you do instead?
>>>
>>
>> I do mostly small-systems embedded programming, which is mostly in C. It
>> used to include a lot more assembly, but that's quite rare now (though
>> it is not uncommon to have to make little snippets in assembly, or to
>> study compiler-generated assembly), and perhaps in the future it will
>> include more C++ (especially with C++11 features). I also do desktop and
>> server programming, mostly in Python, and I have done a bit of FPGA work
>> (but not for a number of years).
> 
> Similar to myself, but with the opposite emphasis.  I mostly do hardware
> and FPGA work with embedded programming which has been rare for some years.
> 
> I think Python is the language a customer recommended to me.  He said
> that some languages are good for this or good for that, but Python
> incorporates a lot of the various features that makes it good for most
> things.  They write code running under Linux on IP chassis.  I think
> they use Python a lot.
> 
> 
>> I don't think of Forth as being a suitable choice of language for the
>> kind of systems I work with - but I do think it would be fun to work
>> with the kind of systems for which Forth is the best choice. However, I
>> suspect that is unlikely to happen in practice. (Many years ago, my
>> company looked at a potential project for which Atmel's Marc-4
>> processors were a possibility, but that's the nearest I've come to Forth
>> at work.)
> 
> So why can't you consider Forth for processors that aren't stack based?
> 

There are two main reasons.

The first, and perhaps most important, is non-technical - C (and to a
much smaller extent, C++) is the most popular language for embedded
development.  That means it is the best supported by tools, best
understood by other developers, has the most sample code and libraries,
etc.  There are a few niches where other languages are used - assembly,
Ada, etc.  And of course there are hobby developers, lone wolves, and
amateurs pretending to be professionals who pick Pascal, Basic, or Forth.

I get to pick these things myself to a fair extent (with some FPGA work
long ago, I used confluence rather than the standard VHDL/Verilog).  But
I would need very strong reasons to pick anything other than C or C++
for embedded development.


The other reason is more technical - Forth is simply not a great
language for embedded development work.

It certainly has some good points - its interactivity is very nice, and
you can write very compact source code.

But the stack model makes it hard to work with more complex functions,
so it is difficult to be sure your code is correct and maintainable.
The traditional Forth solution is to break your code into lots of tiny
pieces - but that means the programmer is jumping back and forth in the
code, rather than working with sequential events in a logical sequence.
 The arithmetic model makes it hard to work with different sized types,
which are essential in embedded systems - the lack of overloading on
arithmetic operators means a lot of manual work in manipulating types
and getting the correct variant of the operator you want.  The highly
flexible syntax means that static error checking is almost non-existent.


> 
>> I just think it's fun to work with different types of language - it
>> gives you a better understanding of programming in general, and new
>> ideas of different ways to handle tasks.
> 
> I'm beyond "playing" in this stuff and I don't mean "playing" in a
> derogatory way, I mean I just want to get my work done.  I'm all but
> retired and although some of my projects are not truly profit motivated,
> I want to get them done with a minimum of fuss.  I look at the tools
> used to code in C on embedded systems and it scares me off really,
> especially the open source ones that require you to learn so much before
> you can become productive or even get the "hello world" program to work.
>  That's why I haven't done anything with the rPi or the Beagle Boards.
> 


Article: 155311
Subject: Re: New soft processor core paper publisher?
From: Andrew Haley <andrew29@littlepinkcloud.invalid>
Date: Sun, 23 Jun 2013 04:04:56 -0500
Links: << >>  << T >>  << A >>
In comp.lang.forth rickman <gnuarm@gmail.com> wrote:

> Backing up a bit, it strikes me as a bit crazy to make a language
> based on the concept of a weird target processor.  I mean, I get the
> portability thing, but at what cost?  If my experience as a casual
> user (not programmer) of Java on my PC is any indication (data point
> of one, the plural of anecdote isn't data, etc.), the virtual
> stack-based processor paradigm has failed, as the constant updates,
> security issues, etc. pretty much forced me to uninstall it.  And I
> would think that a language targeting a processor model that is
> radically different than the physically underlying one would be
> terribly inefficient unless the compiler can do hand stands while
> juggling spinning plates on fire - even if it is, god knows what it
> spits out.

Let's pick this apart a bit.  Firstly, most Java updates and security
bugs have nothing whatsoever to do with the concept of a virtual
machine.  They're almost always caused by coding errors in the
library, and they'd be bugs regardless of the architecture of the
virtual machine.  Secondly, targeting a processor model that is
radically different than the physically underlying one is what every
optimizing compiler does every day, and Java is no different.

> Canonical stack processors and their languages (Forth, Java,
> Postscript) at this point seem to be hanging by a legacy thread
> (even if every PC runs one peripherally at one time or another).

Not even remotely true.  Java is either the most popular or the second
most popular programming language in the world.  Most Java runs on
servers; the desktop is such a tiny part of the market that even if
everyone drops Java in the browser it will make almost no difference.

Andrew.

Article: 155312
Subject: Re: New soft processor core paper publisher?
From: Paul Rubin <no.email@nospam.invalid>
Date: Sun, 23 Jun 2013 02:09:09 -0700
Links: << >>  << T >>  << A >>
> Java is either the most popular or the second most popular programming
> language in the world.  Most Java runs on servers;

I think most Java runs on SIM cards.  Of course there are more of those
than desktops and servers put together.

Article: 155313
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sun, 23 Jun 2013 10:31:24 +0100
Links: << >>  << T >>  << A >>
rickman wrote:
> On 6/22/2013 5:57 PM, Tom Gardner wrote:

>> How do you propose to implement mailboxes reliably?
>> You need to think of all the possible memory-access
>> sequences, of course.
>
> I don't get the question.  Weren't semaphores invented a long time ago and require no special support from the processor?

Of course they are one communications mechanism, but not
the only one. Implementation can be made impossible by
some design decisions. Whether support is "special" depends
on what you regard as "normal", so I can't give you an
answer to that one!


Article: 155314
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sun, 23 Jun 2013 04:34:26 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Saturday, June 22, 2013 11:47:49 PM UTC-4, rickman wrote:

> Someone was talking about this recently, I don't recall if it was you or 
> someone else.  It was pointed out that the most important aspect of any 
> core is the documentation.  opencores has lots of pretty worthless cores 
> because you have to reverse engineer them to do anything with them.

I just posted the design document:

http://opencores.org/project,hive

I'd be interested in any comments, my email address is in the document.  I'll post the verilog soon.

Cheers!

Article: 155315
Subject: Re: New soft processor core paper publisher?
From: Andrew Haley <andrew29@littlepinkcloud.invalid>
Date: Sun, 23 Jun 2013 12:09:23 -0500
Links: << >>  << T >>  << A >>
In comp.lang.forth Paul Rubin <no.email@nospam.invalid> wrote:
>> Java is either the most popular or the second most popular programming
>> language in the world.  Most Java runs on servers;
> 
> I think most Java runs on SIM cards.

Err, what is this belief based on?  I mean, you might be right, but I
never heard that before.

Andrew.

Article: 155316
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 13:18:44 -0400
Links: << >>  << T >>  << A >>
You might want to fix your attribution line...  What you replied to were 
not my words.

Rick


On 6/23/2013 5:04 AM, Andrew Haley wrote:
> In comp.lang.forth rickman<gnuarm@gmail.com>  wrote:
>
>> Backing up a bit, it strikes me as a bit crazy to make a language
>> based on the concept of a weird target processor.  I mean, I get the
>> portability thing, but at what cost?  If my experience as a casual
>> user (not programmer) of Java on my PC is any indication (data point
>> of one, the plural of anecdote isn't data, etc.), the virtual
>> stack-based processor paradigm has failed, as the constant updates,
>> security issues, etc. pretty much forced me to uninstall it.  And I
>> would think that a language targeting a processor model that is
>> radically different than the physically underlying one would be
>> terribly inefficient unless the compiler can do hand stands while
>> juggling spinning plates on fire - even if it is, god knows what it
>> spits out.
>
> Let's pick this apart a bit.  Firstly, most Java updates and security
> bugs have nothing whatsoever to do with the concept of a virtual
> machine.  They're almost always caused by coding errors in the
> library, and they'd be bugs regardless of the architecture of the
> virtual machine.  Secondly, targeting a processor model that is
> radically different than the physically underlying one is what every
> optimizing compiler does every day, and Java is no different.
>
>> Canonical stack processors and their languages (Forth, Java,
>> Postscript) at this point seem to be hanging by a legacy thread
>> (even if every PC runs one peripherally at one time or another).
>
> Not even remotely true.  Java is either the most popular or the second
> most popular programming language in the world.  Most Java runs on
> servers; the desktop is such a tiny part of the market that even if
> everyone drops Java in the browser it will make almost no difference.
>
> Andrew.

-- 

Rick

Article: 155317
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 13:24:48 -0400
Links: << >>  << T >>  << A >>
On 6/23/2013 7:34 AM, Eric Wallin wrote:
> On Saturday, June 22, 2013 11:47:49 PM UTC-4, rickman wrote:
>
>> Someone was talking about this recently, I don't recall if it was you or
>> someone else.  It was pointed out that the most important aspect of any
>> core is the documentation.  opencores has lots of pretty worthless cores
>> because you have to reverse engineer them to do anything with them.
>
> I just posted the design document:
>
> http://opencores.org/project,hive
>
> I'd be interested in any comments, my email address is in the document.  I'll post the verilog soon.
>
> Cheers!

I'd be interested in reading the design document, but this is what I 
find at Opencores...


HIVE - a 32 bit, 8 thread, 4 register/stack hybrid, pipelined verilog 
soft processor core :: Overview Overview News Downloads Bugtracker

Project maintainers

Wallin, Eric
Details

Name: hive
Created: Jun 22, 2013
Updated: Jun 23, 2013
SVN: No files checked in


-- 

Rick

Article: 155318
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 13:27:16 -0400
Links: << >>  << T >>  << A >>
On 6/23/2013 5:31 AM, Tom Gardner wrote:
> rickman wrote:
>> On 6/22/2013 5:57 PM, Tom Gardner wrote:
>
>>> How do you propose to implement mailboxes reliably?
>>> You need to think of all the possible memory-access
>>> sequences, of course.
>>
>> I don't get the question. Weren't semaphores invented a long time ago
>> and require no special support from the processor?
>
> Of course they are one communications mechanism, but not
> the only one. Implementation can be made impossible by
> some design decisions. Whether support is "special" depends
> on what you regard as "normal", so I can't give you an
> answer to that one!

What aspect of a processor can make implementation of semaphores 
impossible?

-- 

Rick

Article: 155319
Subject: Re: New soft processor core paper publisher?
From: Rob Doyle <radioengr@gmail.com>
Date: Sun, 23 Jun 2013 10:34:09 -0700
Links: << >>  << T >>  << A >>
On 6/23/2013 10:27 AM, rickman wrote:
> On 6/23/2013 5:31 AM, Tom Gardner wrote:
>> rickman wrote:
>>> On 6/22/2013 5:57 PM, Tom Gardner wrote:
>>
>>>> How do you propose to implement mailboxes reliably?
>>>> You need to think of all the possible memory-access
>>>> sequences, of course.
>>>
>>> I don't get the question. Weren't semaphores invented a long time ago
>>> and require no special support from the processor?
>>
>> Of course they are one communications mechanism, but not
>> the only one. Implementation can be made impossible by
>> some design decisions. Whether support is "special" depends
>> on what you regard as "normal", so I can't give you an
>> answer to that one!
>
> What aspect of a processor can make implementation of semaphores
> impossible?

Lack of atomic operations.

Rob.



Article: 155320
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 13:45:03 -0400
Links: << >>  << T >>  << A >>
On 6/23/2013 4:38 AM, David Brown wrote:
> On 22/06/13 18:18, rickman wrote:
>> On 6/22/2013 11:21 AM, David Brown wrote:
>>> On 22/06/13 07:23, rickman wrote:
>>>> On 6/21/2013 7:18 AM, David Brown wrote:
>>>>> On 21/06/13 11:30, Tom Gardner wrote:
>>>>>
>>>>>> I suppose I ought to change "nobody writes Forth" to
>>>>>> "almost nobody writes Forth.
>>>>>>
>>>>>
>>>>> Shouldn't that be "almost nobody Forth writes" ?
>>>>
>>>> I would say that was "nobody almost Forth writes". Wouldn't it be [noun
>>>> [adjective] [noun [adjective]]] verb?
>>>
>>> I thought about that, but I was not sure. When I say "work with Forth
>>> again", I have only "played" with Forth, not "worked" with it, and it
>>> was a couple of decades ago.
>>
>> Hey, it's not like this is *real* forth.  But looking at how some Forth
>> code works for things like assemblers and my own projects, the data is
>> dealt with first starting with some sort of a noun type piece of data
>> (like a register) which may be modified by an adjective (perhaps an
>> addressing mode) followed by others, then the final verb to complete the
>> action (operation).
>>
>>
>>>>> (I too would like an excuse to work with Forth again.)
>>>>
>>>> What do you do instead?
>>>>
>>>
>>> I do mostly small-systems embedded programming, which is mostly in C. It
>>> used to include a lot more assembly, but that's quite rare now (though
>>> it is not uncommon to have to make little snippets in assembly, or to
>>> study compiler-generated assembly), and perhaps in the future it will
>>> include more C++ (especially with C++11 features). I also do desktop and
>>> server programming, mostly in Python, and I have done a bit of FPGA work
>>> (but not for a number of years).
>>
>> Similar to myself, but with the opposite emphasis.  I mostly do hardware
>> and FPGA work with embedded programming which has been rare for some years.
>>
>> I think Python is the language a customer recommended to me.  He said
>> that some languages are good for this or good for that, but Python
>> incorporates a lot of the various features that makes it good for most
>> things.  They write code running under Linux on IP chassis.  I think
>> they use Python a lot.
>>
>>
>>> I don't think of Forth as being a suitable choice of language for the
>>> kind of systems I work with - but I do think it would be fun to work
>>> with the kind of systems for which Forth is the best choice. However, I
>>> suspect that is unlikely to happen in practice. (Many years ago, my
>>> company looked at a potential project for which Atmel's Marc-4
>>> processors were a possibility, but that's the nearest I've come to Forth
>>> at work.)
>>
>> So why can't you consider Forth for processors that aren't stack based?
>>
>
> There are two main reasons.
>
> The first, and perhaps most important, is non-technical - C (and to a
> much smaller extent, C++) is the most popular language for embedded
> development.  That means it is the best supported by tools, best
> understood by other developers, has the most sample code and libraries,
> etc.  There are a few niches where other languages are used - assembly,
> Ada, etc.  And of course there are hobby developers, lone wolves, and
> amateurs pretending to be professionals who pick Pascal, Basic, or Forth.
>
> I get to pick these things myself to a fair extent (with some FPGA work
> long ago, I used confluence rather than the standard VHDL/Verilog).  But
> I would need very strong reasons to pick anything other than C or C++
> for embedded development.

What you just said in response to my question about why you *can't* pick 
Forth is, "because it doesn't suit me".  That's fair enough, but not as 
much about Forth as it is about your preferences and biases.


> The other reason is more technical - Forth is simply not a great
> language for embedded development work.
>
> It certainly has some good points - its interactivity is very nice, and
> you can write very compact source code.
>
> But the stack model makes it hard to work with more complex functions,
> so it is difficult to be sure your code is correct and maintainable.

I think you will get some disagreement on that point.


> The traditional Forth solution is to break your code into lots of tiny
> pieces - but that means the programmer is jumping back and forth in the
> code, rather than working with sequential events in a logical sequence.

I am no expert, so far be it from me to defend Forth in this regard, but 
my experience is that if you are having trouble writing code in Forth, 
you don't "get it".

I've mentioned many times I think the first time I can recall hearing 
"the word".  I liked the idea of Forth, but was having trouble writing 
code in it for the various reasons that people give, one of which is 
your issue above.  One time I was complaining that it was hard to find 
stack mismatches where words were leaving too many parameters on the 
stack or not enough.  Jeff Fox weighed in (as he often would) and told 
me I didn't need debuggers and such, stack mismatches just showed that 
I couldn't count...  That hit me between the eyes and I realized he was 
right.  Balancing the stack is just a matter of counting... *and* 
keeping your word definitions small so that you aren't prone to 
miscounting.  That was the real lesson, keep the definitions small.

You don't need to jump "back and forth" so much, you just need to learn 
to decompose the code so that each word is small enough to debug 
visually.  It was recognized a long time ago, even in C programming, 
that smaller is better.  I don't recall the expert, but one of the 
programming gurus of yesteryear had a guideline that C routines should 
fit on a screen which was 24 lines at the time.  But do people listen? 
No.  They write large routines that are hard to debug.


>   The arithmetic model makes it hard to work with different sized types,
> which are essential in embedded systems - the lack of overloading on
> arithmetic operators means a lot of manual work in manipulating types
> and getting the correct variant of the operator you want.  The highly
> flexible syntax means that static error checking is almost non-existent.

Really?  I have never considered data types to be a problem in Forth. 
Using S>D or just typing 0 to convert from single to double precision 
isn't so hard.  You could also define words that are C like, 
(signed_double) and (unsigned_double), but then I don't know if this 
would help you since I don't understand your concern.

Yes, in terms of error checking, Forth is at the other end of the 
universe (almost) from Ada or VHDL (I'm pretty proficient at VHDL, not 
so much with Ada).  I can tell you that in VHDL you spend almost as much 
time specifying and converting data types as you do the rest of coding. 
  The only difference from not having the type checking is that the tool 
catches the "bugs" and you spend your time figuring out how to make it 
happy, vs. debugging the usual way.  I'm not sure which is really 
faster.  I honestly can't recall having a bug from data types in Forth, 
but it could have happened.

-- 

Rick

Article: 155321
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 23 Jun 2013 13:46:26 -0400
Links: << >>  << T >>  << A >>
On 6/23/2013 1:34 PM, Rob Doyle wrote:
> On 6/23/2013 10:27 AM, rickman wrote:
>> On 6/23/2013 5:31 AM, Tom Gardner wrote:
>>> rickman wrote:
>>>> On 6/22/2013 5:57 PM, Tom Gardner wrote:
>>>
>>>>> How do you propose to implement mailboxes reliably?
>>>>> You need to think of all the possible memory-access
>>>>> sequences, of course.
>>>>
>>>> I don't get the question. Weren't semaphores invented a long time ago
>>>> and require no special support from the processor?
>>>
>>> Of course they are one communications mechanism, but not
>>> the only one. Implementation can be made impossible by
>>> some design decisions. Whether support is "special" depends
>>> on what you regard as "normal", so I can't give you an
>>> answer to that one!
>>
>> What aspect of a processor can make implementation of semaphores
>> impossible?
>
> Lack of atomic operations.

Lol, so semaphores were never implemented on a machine without an atomic 
read modify write?

-- 

Rick

Article: 155322
Subject: Re: New soft processor core paper publisher?
From: Rob Doyle <radioengr@gmail.com>
Date: Sun, 23 Jun 2013 11:05:14 -0700
Links: << >>  << T >>  << A >>
On 6/23/2013 10:46 AM, rickman wrote:
> On 6/23/2013 1:34 PM, Rob Doyle wrote:
>> On 6/23/2013 10:27 AM, rickman wrote:
>>> On 6/23/2013 5:31 AM, Tom Gardner wrote:
>>>> rickman wrote:
>>>>> On 6/22/2013 5:57 PM, Tom Gardner wrote:
>>>>
>>>>>> How do you propose to implement mailboxes reliably?
>>>>>> You need to think of all the possible memory-access
>>>>>> sequences, of course.
>>>>>
>>>>> I don't get the question. Weren't semaphores invented a long time ago
>>>>> and require no special support from the processor?
>>>>
>>>> Of course they are one communications mechanism, but not
>>>> the only one. Implementation can be made impossible by
>>>> some design decisions. Whether support is "special" depends
>>>> on what you regard as "normal", so I can't give you an
>>>> answer to that one!
>>>
>>> What aspect of a processor can make implementation of semaphores
>>> impossible?
>>
>> Lack of atomic operations.
>
> Lol, so semaphores were never implemented on a machine without an atomic
> read modify write?

You don't need a read/modify/write instruction.   You need to perform a 
read/modify/write sequence of instructions atomically.   On simple 
processors that could be accomplished by disabling interrupts around the 
critical section of code.
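
A rough sketch of that pattern on a single-threaded microcontroller; disable_interrupts() and restore_interrupts() are hypothetical stand-ins for whatever the target really provides (cpsid/cpsie, cli/sei, an intrinsic, etc.):

#include <stdint.h>

/* Hypothetical port-specific primitives. */
extern uint32_t disable_interrupts(void);           /* returns previous state */
extern void     restore_interrupts(uint32_t state);

static volatile int sem = 1;

/* Returns 1 if the semaphore was taken, 0 otherwise.  The read-modify-write
   sequence cannot be interrupted, so on a single core it behaves atomically. */
int sem_try_take(void)
{
    uint32_t state = disable_interrupts();
    int taken = (sem > 0);
    if (taken)
        sem--;
    restore_interrupts(state);
    return taken;
}

void sem_give(void)
{
    uint32_t state = disable_interrupts();
    sem++;
    restore_interrupts(state);
}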

I don't know if there is a machine that /can't/ implement a semaphore - 
but that is not the question that you asked.

I suppose I could contrive one.  For example, if you had a processor 
that required disabling interrupts as described above and you had a had 
to support a non-maskable interrupt...

Rob.


Article: 155323
Subject: Re: New soft processor core paper publisher?
From: David Brown <david.brown@removethis.hesbynett.no>
Date: Sun, 23 Jun 2013 22:51:34 +0200
Links: << >>  << T >>  << A >>
On 23/06/13 19:45, rickman wrote:
> On 6/23/2013 4:38 AM, David Brown wrote:
>> On 22/06/13 18:18, rickman wrote:
>>> On 6/22/2013 11:21 AM, David Brown wrote:
>>>> On 22/06/13 07:23, rickman wrote:
>>>>> On 6/21/2013 7:18 AM, David Brown wrote:
>>>>>> On 21/06/13 11:30, Tom Gardner wrote:
>>>>>>
>>>>>>> I suppose I ought to change "nobody writes Forth" to
>>>>>>> "almost nobody writes Forth.
>>>>>>>
>>>>>>
>>>>>> Shouldn't that be "almost nobody Forth writes" ?
>>>>>
>>>>> I would say that was "nobody almost Forth writes". Wouldn't it be
>>>>> [noun
>>>>> [adjective] [noun [adjective]]] verb?
>>>>
>>>> I thought about that, but I was not sure. When I say "work with Forth
>>>> again", I have only "played" with Forth, not "worked" with it, and it
>>>> was a couple of decades ago.
>>>
>>> Hey, it's not like this is *real* forth.  But looking at how some Forth
>>> code works for things like assemblers and my own projects, the data is
>>> dealt with first starting with some sort of a noun type piece of data
>>> (like a register) which may be modified by an adjective (perhaps an
>>> addressing mode) followed by others, then the final verb to complete the
>>> action (operation).
>>>
>>>
>>>>>> (I too would like an excuse to work with Forth again.)
>>>>>
>>>>> What do you do instead?
>>>>>
>>>>
>>>> I do mostly small-systems embedded programming, which is mostly in
>>>> C. It
>>>> used to include a lot more assembly, but that's quite rare now (though
>>>> it is not uncommon to have to make little snippets in assembly, or to
>>>> study compiler-generated assembly), and perhaps in the future it will
>>>> include more C++ (especially with C++11 features). I also do desktop
>>>> and
>>>> server programming, mostly in Python, and I have done a bit of FPGA
>>>> work
>>>> (but not for a number of years).
>>>
>>> Similar to myself, but with the opposite emphasis.  I mostly do hardware
>>> and FPGA work with embedded programming which has been rare for some
>>> years.
>>>
>>> I think Python is the language a customer recommended to me.  He said
>>> that some languages are good for this or good for that, but Python
>>> incorporates a lot of the various features that makes it good for most
>>> things.  They write code running under Linux on IP chassis.  I think
>>> they use Python a lot.
>>>
>>>
>>>> I don't think of Forth as being a suitable choice of language for the
>>>> kind of systems I work with - but I do think it would be fun to work
>>>> with the kind of systems for which Forth is the best choice. However, I
>>>> suspect that is unlikely to happen in practice. (Many years ago, my
>>>> company looked at a potential project for which Atmel's Marc-4
>>>> processors were a possibility, but that's the nearest I've come to
>>>> Forth
>>>> at work.)
>>>
>>> So why can't you consider Forth for processors that aren't stack based?
>>>
>>
>> There are two main reasons.
>>
>> The first, and perhaps most important, is non-technical - C (and to a
>> much smaller extent, C++) is the most popular language for embedded
>> development.  That means it is the best supported by tools, best
>> understood by other developers, has the most sample code and libraries,
>> etc.  There are a few niches where other languages are used - assembly,
>> Ada, etc.  And of course there are hobby developers, lone wolves, and
>> amateurs pretending to be professionals who pick Pascal, Basic, or Forth.
>>
>> I get to pick these things myself to a fair extent (with some FPGA work
>> long ago, I used confluence rather than the standard VHDL/Verilog).  But
>> I would need very strong reasons to pick anything other than C or C++
>> for embedded development.
>
> What you just said in response to my question about why you *can't* pick
> Forth is, "because it doesn't suit me".  That's fair enough, but not as
> much about Forth as it is about your preferences and biases.


I viewed the question as "why *you* can't pick Forth" - I can only 
really answer for myself.

You say "preferences and biases" - I say "experience and understanding" :-)

>
>
>> The other reason is more technical - Forth is simply not a great
>> language for embedded development work.
>>
>> It certainly has some good points - its interactivity is very nice, and
>> you can write very compact source code.
>>
>> But the stack model makes it hard to work with more complex functions,
>> so it is difficult to be sure your code is correct and maintainable.
>
> I think you will get some disagreement on that point.
>

No doubt I will.

Of course, remember your own preferences and biases - I say Forth makes 
these things hard or difficult, but not impossible.  If you are very 
experienced with Forth, you'll find them easier.  In fact, you will 
forget that you ever found them hard, and can't see why it's not easy 
for everyone.

>
>> The traditional Forth solution is to break your code into lots of tiny
>> pieces - but that means the programmer is jumping back and forth in the
>> code, rather than working with sequential events in a logical sequence.
>
> I am no expert, so far be it from me to defend Forth in this regard, but
> my experience is that if you are having trouble writing code in Forth,
> you don't "get it".
>

I can agree with that to a fair extent.  Forth requires you to think in 
a different manner than procedural languages (just as object oriented 
languages, functional languages, etc., all require different ways to think 
about the task).

My claim is that even when you do "get it", there are disadvantages and 
limitations to Forth.

> I've mentioned many times I think the first time I can recall hearing
> "the word".  I liked the idea of Forth, but was having trouble writing
> code in it for the various reasons that people give, one of which is
> your issue above.  One time I was complaining that it was hard to find
> stack mismatches where words were leaving too many parameters on the
> stack or not enough.  Jeff Fox weighed in (as he often would) and told
> me I didn't need debuggers and such, stack mismatches just showed that
> I couldn't count...  That hit me between the eyes and I realized he was
> right.  Balancing the stack is just a matter of counting... *and*
> keeping your word definitions small so that you aren't prone to
> miscounting.  That was the real lesson, keep the definitions small.

There are times when code is complex, because the task in hand is 
complex, and it cannot sensibly be reduced into small parts without a 
lot of duplication, inefficiency, or confusing structure (the same 
applies to procedural programming - sometimes the best choice really is 
a huge switch statement).  I don't want to deal with trial-and-error 
debugging in the hope that I've tested all cases of miscounting - I want 
a compiler that handles the drudge work automatically and lets me 
concentrate on the important things.

>
> You don't need to jump "back and forth" so much, you just need to learn
> to decompose the code so that each word is small enough to debug
> visually.  It was recognized a long time ago, even in C programming,
> that smaller is better.  I don't recall the expert, but one of the
> programming gurus of yesteryear had a guideline that C routines should
> fit on a screen which was 24 lines at the time.  But do people listen?
> No.  They write large routines that are hard to debug.

Don't kid yourself here - people write crap in all languages.  And most 
programmers - of all languages - are pretty bad at it.  Forth might 
encourage you to split up the code into small parts, but there will be 
people who call these "part1", "part2", "part1b", etc.

>
>
>>   The arithmetic model makes it hard to work with different sized types,
>> which are essential in embedded systems - the lack of overloading on
>> arithmetic operators means a lot of manual work in manipulating types
>> and getting the correct variant of the operator you want.  The highly
>> flexible syntax means that static error checking is almost non-existent.
>
> Really?  I have never considered data types to be a problem in Forth.
> Using S>D or just typing 0 to convert from single to double precision
> isn't so hard.  You could also define words that are C like,
> (signed_double) and (unsigned_double), but then I don't know if this
> would help you since I don't understand your concern.
>

I need to easily and reliably deal with data that is 8-bit, 16-bit, 
32-bit and 64-bit.  Sometimes I need bit fields that are a different 
size.  Sometimes I work with processors that have 20-bit, 24-bit or 
40-bit data.  I need to know exactly what I am getting, and exactly what 
I am doing with it.  Working with a "cell" or "double cell" is not good 
enough - just like C "int" or "short int" is unacceptable.

If Forth has the equivalent of "uint8_t", "int_fast16_t", etc., then it 
could work - but as far as I know, it does not.  Unless I am missing 
something, there is no easy way to write code that is portable between 
different Forth targets if cell width is different.  You would have to 
define your own special set of words and operators, with different 
definitions depending on the cell size.  You are no longer working in 
Forth, but your own little private language.

> Yes, in terms of error checking, Forth is at the other end of the
> universe (almost) from Ada or VHDL (I'm pretty proficient at VHDL, not
> so much with Ada).  I can tell you that in VHDL you spend almost as much
> time specifying and converting data types as you do the rest of coding.

Yes, I dislike that about VHDL and Ada, though I have done little work 
with either.  C is a bit more of a happy medium - though sometimes the 
extra protection you can get (but only if you want it) with C++ can be a 
good idea.

>   The only difference from not having the type checking is that the tool
> catches the "bugs" and you spend your time figuring out how to make it
> happy, vs. debugging the usual way.  I'm not sure which is really
> faster.  I honestly can't recall having a bug from data types in Forth,
> but it could have happened.
>

I use Python quite a bit - it has strong typing, but the types are 
dynamic.  This means there is very little compile-time checking.  I 
definitely miss that in the language - you waste a lot of time debugging 
by trial-and-error when a statically typed language would spot your 
error for you immediately.


Article: 155324
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sun, 23 Jun 2013 13:52:57 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Sunday, June 23, 2013 1:24:48 PM UTC-4, rickman wrote:

> I'd be interested in reading the design document, but this is what I 
> find at Opencores...
>
> SVN: No files checked in

I believe SVN is for the verilog, which isn't there quite yet, but the document is.  Click on "Downloads" at the upper right.  

Here is a link to it:

http://opencores.org/usercontent,doc,1371986749


