Article: 155450
Subject: Re: New soft processor core paper publisher?
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Sat, 29 Jun 2013 02:56:17 +0000 (UTC)
Les Cargill <lcargill99@comcast.com> wrote:

(snip)
> RAM was both large and expensive until recently. Different people
> made RAM than made processors and it would have been challenging to get
> the business arrangements such that they'd glue up.

Not so long ago, some used separate chips for cache in the same
package as the CPU die. 

I have wondered if processors with large enough cache can run
with no external RAM, as long as you stay within the cache.

Then you could call your L3 (I believe) cache the system
main memory, either on chip or in the same package.
 
> Plus, beginning not long ago, you're really dealing with cache directly, 
> not RAM. Throw in that main memory is DRAM, and it gets a lot more 
> complicated.

-- glen

Article: 155451
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Fri, 28 Jun 2013 23:55:07 -0400
On 6/28/2013 10:44 PM, Les Cargill wrote:
> Eric Wallin wrote:
>> On Friday, June 28, 2013 9:02:10 PM UTC-4, rickman wrote:
>>
>>> You are still thinking von Neumann. Any application can be broken
>>> down into small units and parceled out to small processors. But
>>> you have to think in those terms rather than just saying, "it
>>> doesn't fit". Of course it can fit!
>>
>> Intra-brain communications are hierarchical as well.
>>
>> I'm nobody, but one of the reasons for designing Hive was because I
>> feel processors in general are much too complex, to the point where
>> I'm repelled by them. I believe one of the drivers for this
>> over-complexity is the fact that main memory is external. I've been
>> assembling PCs since the 286 days, and I've never understood why main
>> memory wasn't tightly integrated onto the uP die.
>
> RAM was both large and expensive until recently. Different people
> made RAM than made processors and it would have been challenging to get
> the business arrangements such that they'd glue up.

That's not the reason.  Intel could buy any of the DRAM makers any day 
of the week.  At several points DRAM divisions of companies were sold 
off to form merged DRAM specialty companies primarily so the risk was 
shared by several companies and they didn't have to take such a large 
hit to their bottom line when DRAM was in the down phase of the business 
cycle.  Commodity parts like DRAMs are difficult to make money on and 
there is always one of the makers who could be bought easily.

In fact, Intel started out making DRAMs!  The main reason why main 
memory isn't on the CPU chip is that there are lots of variations in 
size *and* that it just wouldn't fit!  You don't put one DRAM chip in a 
computer; they used to need a minimum of four, IIRC, to make up a module, 
often they were 8 to a module and sometimes double sided with 16 chips 
to a DRAM module.

The next bigger problem is that CPUs and DRAMs use highly optimized 
processes and are not very compatible.  A combined chip would likely not 
have as fast a CPU and would have poor DRAM on board.


> Plus, beginning not long ago, you're really dealing with cache directly,
> not RAM. Throw in that main memory is DRAM, and it gets a lot more
> complicated.
>
> Building a BSP for a new board from scratch with a DRAM controller
> is a lot of work.
>
>> Everyone pretty
>> much gets the same ballpark memory size when putting a PC together,
>> and I can remember only once or twice upgrading memory after the
>> initial build (for someone else's Dell or similar where the initial
>> build was anemically low-balled for "value" reasons). Here we are in
>> 2013, the memory is several light cm away from the processor on the
>> MB, talking in cache lines, and I still don't get why we have this
>> gross inefficiency.
>>
>
> That's not generally the bottleneck, though.

I'm not so sure.  With the multicore processors my understanding is that 
memory bandwidth *is* the main bottleneck.  If you could move the DRAM 
on chip it could run faster but more importantly it could be split into 
a bank for each processor giving each one all the bandwidth it could want.
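
(For a concrete sense of that bottleneck, here is a minimal
STREAM-style triad sketch in C.  The array size, repeat count and
build flags are arbitrary choices, and the number it prints is only
a ballpark for one core streaming through DRAM.)

/* Cut-down STREAM-style triad: three arrays much larger than any
   cache, so the loop runs at memory speed, not CPU speed.  Build
   with e.g. "gcc -O2 triad.c" (add -lrt on older glibc). */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)             /* 16M doubles per array, ~128 MB each */
#define REPS 10

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int r = 0; r < REPS; r++)
        for (long i = 0; i < N; i++)
            a[i] = b[i] + 3.0 * c[i];       /* 2 reads + 1 write */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.2f GB/s (a[0]=%g)\n",         /* print a[0] so the store
                                               loop can't be elided */
           (double)REPS * 3 * N * sizeof(double) / sec / 1e9, a[0]);
    return 0;
}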

I think a large part of the problem is that we have been designing more 
and more complex machines so that the majority of the CPU cycles are 
spent supporting the framework rather than doing the work the user 
actually cares about.  It is a bit like the amount of fuel needed to go 
into space.  Add one pound of payload and you need some hundred or 
thousand more pounds of fuel to launch it.  If you want to travel 
further out into space, the amount of fuel goes up exponentially.  We 
seem to be reaching the point that the improvements in processor speed 
are all being consumed by the support software rather than getting to 
the apps.

-- 

Rick

Article: 155452
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Fri, 28 Jun 2013 22:57:21 -0500
glen herrmannsfeldt wrote:
> Les Cargill <lcargill99@comcast.com> wrote:
>
> (snip)
>> RAM was both large and expensive until recently. Different people
>> made RAM than made processors and it would have been challenging to get
>> the business arrangements such that they'd glue up.
>
> Not so long ago, some used separate chips for cache in the same
> package as the CPU die.
>

Yes. Or even farther back, the chips were remote
from the CPU package.

> I have wondered if processors with large enough cache can run
> with no external RAM, as long as you stay within the cache.
>

I am sure there are drawers full of papers on the subject :)
If you were clever about managing cache misses* and they were
sufficiently infrequent, it might work out to be a fraction
of the same thing.

*as in "oops, your process gets queued if you have a cache miss".
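
(The cost being dodged there is easy to measure: chase a random
pointer cycle bigger than the cache and every step is a full memory
round trip.  A sketch in C; the array size and timing method are
arbitrary.)

/* Pointer-chasing sketch: once the working set exceeds the cache,
   each dependent load costs roughly one DRAM access. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)                 /* 16M entries = 128 MB of pointers */

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* Sattolo's shuffle: one random cycle through the whole array,
       so the hardware prefetcher can't guess the next address. */
    for (size_t i = 0; i < N; i++) next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    size_t p = 0;
    for (long i = 0; i < N; i++)
        p = next[p];                /* each load depends on the last */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = ((t1.tv_sec - t0.tv_sec) * 1e9
               + (t1.tv_nsec - t0.tv_nsec)) / (double)N;
    printf("%.1f ns per dependent load (p=%zu)\n", ns, p);
    return 0;
}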

> Then you could call your L3 (I believe) cache the system
> main memory, either on chip or in the same package.
>

I am not sure what prevents massive cache from being universal in
the first place. I expect it's pricing.

>> Plus, beginning not long ago, you're really dealing with cache directly,
>> not RAM. Throw in that main memory is DRAM, and it gets a lot more
>> complicated.
>
> -- glen
>

--
Les Cargill

Article: 155453
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 10:14:17 +0100
On 29/06/13 02:02, rickman wrote:
> On 6/28/2013 5:11 PM, Tom Gardner wrote:
>> On 28/06/13 20:06, rickman wrote:
>>> I think the trick will be in finding ways of dividing up the programs
>>> so they can meld to the hardware rather than trying to optimize
>>> everything.
>>
>> My suspicion is that, except for compute-bound
>> problems that only require "local" data, the
>> granularity will be too small.
>>
>> Examples where it will work, e.g. protein folding,
>> will rapidly migrate to CUDA and graphics processors.
>
> You are still thinking von Neumann.  Any application can be broken down into small units and parceled out to small processors.  But you have to think in those terms rather than just saying, "it
> doesn't fit".  Of course it can fit!

Regrettably not. People have been trying different
techniques for ~50 years, with varying degrees of
success as technology bottlenecks change.
The people working in those areas are highly
intelligent and motivated (e.g. high performance
computing research) and there is serious money
available (e.g. life sciences, big energy).

As a good rule of thumb, if you can think of it,
they've already tried it and found where it does
and doesn't work.


>>> Consider a chip where you have literally a trillion operations per
>>> second available all the time. Do you really care if half go to waste?
>>> I don't! I design FPGAs and I have never felt obliged (not
>>> since the early days anyway) to optimize the utility of each LUT and
>>> FF. No, it turns out the precious resource in FPGAs is routing and you
>>> can't do much but let the tools manage that anyway.
>>
>> Those internal FPGA constraints also have analogues at
>> a larger scale, e.g. ic pinout, backplanes, networks...
>>
>>
>>> So a fine grained processor array could be very effective if the
>>> programming can be divided down to suit. Maybe it takes 10 of these
>>> cores to handle 100 Mbps Ethernet, so what? Something like a
>>> browser might need to harness a couple of dozen. If the load slacks
>>> off and they are idling, so what?
>>
>> The fundamental problem is that in general as you make the
>> granularity smaller, the communications requirements
>> get larger. And vice versa :(
>
> Actually not.  The aggregate comms requirements may increase, but we aren't sharing an Ethernet bus.  All of the local processors talk to each other and less often have to talk to non-local
> processors. I think the phone company knows something about that.

That works to an extent, particularly in "embarrassingly parallel"
problems such as telco systems. I know: I've architected and
implemented some :)

It still has its limits in most interesting computing systems.


>>
>> I'm sort-of retired (I got sick of corporate in-fighting,
>> and I have my "drop dead money", so...)
>
> That's me too, but I found some work that is paying off very well now. So I've got a foot in both camps, retired, not retired... both are fun in their own way.  But dealing with international shipping
> is a PITA.

Or even sourcing some components, e.g. a MAX9979KCTK+D or +TD :(


>> I regard golf as silly, despite having two courses in
>> walking distance. My equivalent of kayaking is flying
>> gliders.
>
> That has got to be fun!

Probably better than you imagine (and that's recursive
without a terminating condition). I know instructors
that still have pleasant surprises after 50 years :)

I did a tiny bit of kayaking on flat water, but now
I wear hearing aids :(


> I've never worked up the whatever to learn to fly.

Going solo is about as difficult as learning to drive
a car. And then the learning really starts :)


> It seems like a big investment and not so cheap overall.

Not in money. In the UK club membership is $500/year,
a launch + 10 mins instruction is $10, and an hour
instruction in the air is $30. The real cost is time:
club members help you get airborne, and you help them
in return. Very sociable, unlike aircraft with air
conditioning fans up front or scythes above.


> But there is clearly a great thrill there.

0-40kt in 3s, 0-50kt in 5s, climb with your feet
above your head, fly in close formation with raptors,
eyeball sheep on a hillside as you whizz past
below them at 60kt, 10-20kft, 40kt-150kt, hundreds
and thousands of km range, pre-solo spinning at
altitudes that make power pilots blanch, and
pre-solo flying in loose formation with other
aircraft.

Let me know if you want pointers to youtube vids.



Article: 155454
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 10:31:44 +0100
On 29/06/13 03:15, Eric Wallin wrote:
> On Friday, June 28, 2013 9:02:10 PM UTC-4, rickman wrote:
>
>> You are still thinking von Neumann.  Any application can be broken down
>> into small units and parceled out to small processors.  But you have to
>> think in those terms rather than just saying, "it doesn't fit".  Of
>> course it can fit!
>
> Intra-brain communications are hierarchical as well.
>
> I'm nobody, but one of the reasons for designing Hive was because I feel processors in general are much too complex, to the point where I'm repelled by them.

> I believe one of the drivers for this over-complexity is the fact that main memory is external.

Partly. The random-access latency is also a killer (Sun's
Niagara-series SPARCs have an interesting work-around).
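
(The Niagara trick is to issue another hardware thread's load on
every cache miss, so the latencies overlap.  Software can fake the
same effect: walk several independent pointer chains in one loop and
the misses are all outstanding at once.  An illustrative sketch; N
and K are arbitrary, and make_cycle builds a random cycle as in any
pointer-chase benchmark.)

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 23)         /* entries per chain */
#define K 8                 /* independent chains */

static size_t *make_cycle(size_t n)
{
    size_t *next = malloc(n * sizeof *next);
    if (!next) exit(1);
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {      /* Sattolo: one big cycle */
        size_t j = rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    return next;
}

int main(void)
{
    size_t *chain[K];
    size_t p[K] = { 0 };
    for (int k = 0; k < K; k++) chain[k] = make_cycle(N);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long i = 0; i < N; i++)
        for (int k = 0; k < K; k++)           /* K misses in flight */
            p[k] = chain[k][p[k]];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Per-load time drops well below the single-chain figure until
       the memory system runs out of outstanding-miss slots. */
    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f ns per load (p0=%zu)\n", sec * 1e9 / ((double)N * K), p[0]);
    return 0;
}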

> I've never understood why main memory wasn't tightly integrated onto the uP die.

1) Yield => cost, which is highly non-linear with
increasing area. Tolerable for big iron where the
processor cost is a small fraction of the total.

2) Different semiconductor structures, which
can't be used on the same die.

> My dual core multi-GHz PC with SSD often just sits there for many seconds after I click on something, and malware is now taking me sometimes days to fix.  Windows 7 is a dog to install, with relentless updates that often completely hose it rather than improve it.  The future isn't looking too bright for the desktop with the way we're going.

Download xubuntu, blow it onto a cd/dvd or usb.
Reboot and try that "live cd" version without
touching your disk.

Live cd has slow disk accesses since everything
has to be fetched from cd. But the desktop is
blindingly fast, even on notebooks.

No resident malware to worry about (just spearphishing
and man-in-the-browser attacks).

No endless re-booting whenever updates arrive -
and they arrive daily. The only reboots are for
kernel upgrades.

Speedy installation: I get a fully-patched installed
system in well under an hour. Last time MS would
let me (!) install XP, it took me well over a day
because of all the reboots.

Speedy re-installation once every 3 years: your
files are untouched so you just upgrade the o/s
(trivial precondition: put /home on a separate
disk partition). Takes < 1 hour.

Article: 155455
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 10:34:01 +0100
On 29/06/13 03:56, glen herrmannsfeldt wrote:
> Les Cargill <lcargill99@comcast.com> wrote:
>
> (snip)
>> RAM was both large and expensive until recently. Different people
>> made RAM than made processors and it would have been challenging to get
>> the business arrangements such that they'd glue up.
>
> Not so long ago, some used separate chips for cache in the same
> package as the CPU die.
>
> I have wondered if processors with large enough cache can run
> with no external RAM, as long as you stay within the cache.

From the point of view of speed, yes. The DRAM becomes the
equivalent of paging to disk. But you still need DRAM to boot :)

> Then you could call your L3 (I believe) cache the system
> main memory, either on chip or in the same package.

Yup, that's the effect. Doesn't work with bloatware :(

Article: 155456
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 10:35:15 +0100
On 29/06/13 04:57, Les Cargill wrote:
> *as in "oops, your process gets queued if you have a cache miss".

Almost, but see how Sun's Niagara architecture circumvented
that easily and cheaply.



Article: 155457
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 10:39:36 +0100
On 29/06/13 04:55, rickman wrote:
> With the multicore processors my understanding is that memory bandwidth *is* the main bottleneck.

Bandwidth and latency, particularly the mismatch
between processor cycle speed and DRAM random-access cycle time.

> If you could move the DRAM on chip it could run faster but more importantly it
> could be split into a bank for each processor giving each one all the bandwidth it could want.

You've just re-invented the L3 cache on AMD's
Opteron processors :)

>
> I think a large part of the problem is that we have been designing more and more complex machines so that the majority of the CPU cycles are spent supporting the framework rather than doing the work
> the user actually cares about.  It is a bit like the amount of fuel needed to go into space.  Add one pound of payload and you need some hundred or thousand more pounds of fuel to launch it.  If you
> want to travel further out into space, the amount of fuel goes up exponentially.  We seem to be reaching the point that the improvements in processor speed are all being consumed by the support
> software rather than getting to the apps.

In general purpose desktop work, that's true.

For high-performance and industrial computing,
it is less clear cut.


Article: 155458
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 11:06:56 +0100
On 29/06/13 03:55, Les Cargill wrote:
> Bakul Shah wrote:
>> Most of the concepts
>> are from ~40 years back (CSP, guarded commands etc.).
>
> Most *all* concepts in computers are from that long ago or longer.
> The "new stuff" is more about arbitraging market forces than getting real work done.

There's some truth in that. For most people
re-inventing the wheel is completely unprofitable.

Who wants to learn iron-mining, smelting, forging, and
finishing when all you need to do is cut up this
evening's meal?



>> Turning serial programs
>> into parallel versions is manual, laborious, error prone
>> and not very successful.
>
> So don't do that. Write them to be parallel from the
> git-go. Write them to be event-driven. It's better in
> all dimensions.

But not at all scales; there's a reason fine-grained
dataflow failed.
And not with all timescales :)

> After all, we're all really clockmakers. Events regulate our
> "wheels" just like the escapement on a pendulum clock. .
> When you get that happening, things get to be a lot more
> deterministic and that is what parallelism needs the most.

Don't get me wrong, I really like event-driven programming,
and some programming types are triumphantly re-inventing
it yet again, many-many layers up the software stack!

For example, and to torture the nomenclature:
  - unreliable photons/electrons at the PMD level
  - unreliable bits at the PHY level
  - reliable bits, unreliable frames at the MAC level
  - reliable frames, unreliable packets at the IP level
  - reliable packets, unreliable streams at the TCP level
  - reliable streams, unreliable conversations at the app level
  - app protocols to make conversations reliable
  - reliable conversations within apps:
    - protocols to make apps reliable
    - streams to send unreliable message events
    - frameworks to make message events reliable
where some of the app and framework stuff looks *very* like
some of the networking stuff.

But I'm not going to throw the baby out with the
bathwater: there are *very* good reasons why most
(not all) of those levels are there.


Article: 155459
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sat, 29 Jun 2013 06:10:51 -0700 (PDT)
On Friday, June 28, 2013 10:44:37 PM UTC-4, Les Cargill wrote:

> Geez. Ever use virtual machines? If you break/infect one,
> just roll it back.

My XP PC doesn't get infected too often, but I build PCs and do tech
support on the side for family and friends, so their problems become
my problems.  My last Win7 build took several days just to stabilize
with all the updates.  One update put it in a permanent update loop
and that took me a while to even notice.  And I've got a Win7 laptop
sitting here that for the life of me I can't get to run outside of
safe mode.  I did a repair install (malware knocked out system restore
rollback) and it works fine until the .net v4 updates hit after which
it stutters and the HD disappears.  I don't get all the accolades for
Win7, it's a dog.

Article: 155460
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 14:56:30 +0100
On 29/06/13 14:10, Eric Wallin wrote:
> I build PCs and do tech support on the side for family and friends, so their problems become my problems

I refuse, unless they let me install some version of Linux; so far the people that have accepted have been happy.


> My last Win7 build took several days just to stabilize with all the updates.

What?! That's ridiculous. I can just about understand that for the decade old XP (kudos to MS for supporting it for that long).
But I sure can't see why that should be true for Win7.

> One update put it in a permanent update loop and that took me a while to even notice.
 > And I've got a Win7 laptop sitting here that for the life of me I can't get to run outside of safe mode.
 > I did a repair install (malware knocked out system restore rollback)

Why not just do a full re-install from CD?

I was thinking of getting Win7 to replace XP when MS withdraw support next year. Now I'm in doubt.
As for Win8: "just say no" (everyone else does).


> and it works fine until the .net v4 updates hit after which it stutters and the HD disappears.


> I don't get all the accolades for Win7, it's a dog.

Yes, but it is better than Vista, and the hacks don't feel so guilty about supporting it.

Good things about linux: fanbois are vocally and acerbically critical when
things don't work smoothly, and then point you towards the many alternatives
that /do/ work smoothly.




Article: 155461
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sat, 29 Jun 2013 08:03:34 -0700 (PDT)
On Saturday, June 29, 2013 9:56:30 AM UTC-4, Tom Gardner wrote:

> But I sure can't see why that should be true for Win7.

Zillions of updates that only begin to slow down coming at you after
two days or so.

> Why not just do a full re-install from CD?

It's a couple of days trying to repair vs. a couple of days
reinstalling and updating.  The former is usually the safe bet but I
think I've met my match in this laptop (which I previously did a
reinstall on due to a hard drive crash).

> I was thinking of getting Win7 to replace XP when MS withdraw
> support next year. Now I'm in doubt.

I'm riding XP Pro until the hubcaps fall off.

> Yes, but it is better than Vista, and the hacks don't feel so guilty
> about supporting it.

I'm beginning to think the whole "every other MS OS is a POS, and
every other one is golden" meme is 99% marketing.  I work on a couple
of Vista machines here and there and Win7 seems about the same in
terms of fixing things (i.e. a dog).  XP has its issues as well, but
it is simpler and there are more ways to fix it without blowing
absolutely everything off the HD.  I just want an OS that mounts a
drive, garbage collects, and runs the programs I'm familiar with (the
last is the kicker).

Article: 155462
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Sat, 29 Jun 2013 11:50:32 -0500
rickman wrote:
> On 6/28/2013 10:44 PM, Les Cargill wrote:
>> Eric Wallin wrote:
>>> On Friday, June 28, 2013 9:02:10 PM UTC-4, rickman wrote:
>>>
>>>> You are still thinking von Neumann. Any application can be broken
>>>> down into small units and parceled out to small processors. But
>>>> you have to think in those terms rather than just saying, "it
>>>> doesn't fit". Of course it can fit!
>>>
>>> Intra-brain communications are hierarchical as well.
>>>
>>> I'm nobody, but one of the reasons for designing Hive was because I
>>> feel processors in general are much too complex, to the point where
>>> I'm repelled by them. I believe one of the drivers for this
>>> over-complexity is the fact that main memory is external. I've been
>>> assembling PCs since the 286 days, and I've never understood why main
>>> memory wasn't tightly integrated onto the uP die.
>>
>> RAM was both large and expensive until recently. Different people
>> made RAM than made processors and it would have been challenging to get
>> the business arrangements such that they'd glue up.
>
> That's not the reason.  Intel could buy any of the DRAM makers any day
> of the week.


I have to presume they "couldn't", because they didn't. But memory
architectures did evolve over time - I believe 286 machines still
used DIP packages for DRAM. And the target computers I used
from the mid-80s to the mid-90s may well have still used SRAM.

At that point, by the time SIP/SIMM/DIMM modules were available,
the culture expected things to be separate. We were also arbitraging
RAM prices - we'd buy less-quantity, more-expensive DRAM now, then buy
bigger later when the price dropped.

Some of that was doubtless retail behavioral stuff.

> At several points DRAM divisions of companies were sold
> off to form merged DRAM specialty companies primarily so the risk was
> shared by several companies and they didn't have to take such a large
> hit to their bottom line when DRAM was in the down phase of the business
> cycle.  Commodity parts like DRAMs are difficult to make money on and
> there is always one of the makers who could be bought easily.
>

Right. So if you integrate it into the main core package, you no
longer have to suffer as a commodity vendor. It's a captive market.

I'm sure it's not that simple.

> In fact, Intel started out making DRAMs!

Precisely!  They did one of the first "bet the company" moves
that resulted in the 4004.

> The main reason why main
>> memory isn't on the CPU chip is that there are lots of variations in
> size *and* that it just wouldn't fit!  You don't put one DRAM chip in a
>> computer; they used to need a minimum of four, IIRC, to make up a module,
> often they were 8 to a module and sometimes double sided with 16 chips
> to a DRAM module.
>

If you were integrating inside the package, you could use any
physical configuration you wanted. But the thing would still have been 
too big.

> The next bigger problem is that CPUs and DRAMs use highly optimized
> processes and are not very compatible.  A combined chip would likely not
> have as fast a CPU and would have poor DRAM on board.
>
>

I also have to wonder if the ability to cool things was involved.

>> Plus, beginning not long ago, you're really dealing with cache directly,
>> not RAM. Throw in that main memory is DRAM, and it gets a lot more
>> complicated.
>>
>> Building a BSP for a new board from scratch with a DRAM controller
>> is a lot of work.
>>
>>> Everyone pretty
>>> much gets the same ballpark memory size when putting a PC together,
>>> and I can remember only once or twice upgrading memory after the
>>> initial build (for someone else's Dell or similar where the initial
>>> build was anemically low-balled for "value" reasons). Here we are in
>>> 2013, the memory is several light cm away from the processor on the
>>> MB, talking in cache lines, and I still don't get why we have this
>>> gross inefficiency.
>>>
>>
>> That's not generally the bottleneck, though.
>
> I'm not so sure.  With the multicore processors my understanding is that
> memory bandwidth *is* the main bottleneck.  If you could move the DRAM
> on chip it could run faster but more importantly it could be split into
> a bank for each processor giving each one all the bandwidth it could want.
>

That's consistent with my understanding as well. The big thing on 
transputers in the '80s was the 100MBit links between them. As
we used to say - "the bus is usually the bottleneck". Er,
at least once you got past 10MHz clock speeds...


> I think a large part of the problem is that we have been designing more
> and more complex machines so that the majority of the CPU cycles are
> spent supporting the framework rather than doing the work the user
> actually cares about.

Yep - although it's eminently possible to avoid this problem. I use
a lot of old programs - some going back to Win 3.1.

Really, pure 64-bit computers would have completely failed
had there not been the ability to run a legacy O/S in a VM
or run 32 bit progs through the main O/S.

> It is a bit like the amount of fuel needed to go
> into space.  Add one pound of payload and you need some hundred or
> thousand more pounds of fuel to launch it.  If you want to travel
> further out into space, the amount of fuel goes up exponentially.

So Project X is trying to do something about that. There is something
about engineering culture that "wants scale" - a Saturn V is a really
impressive thing to watch, I am sure.

>  We
> seem to be reaching the point that the improvements in processor speed
> are all being consumed by the support software rather than getting to
> the apps.
>

But things like BeOS and the like have been available, and remain
widely unused. There is some massive culture fail in play; either
that or things are just good enough.

Heck, pad/phone computers do much much *less* than desktops and
have the bullet in the market. You can't even type on them but people
still try...

--
Les Cargill


Article: 155463
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Sat, 29 Jun 2013 11:53:27 -0500
Eric Wallin wrote:
> On Friday, June 28, 2013 10:44:37 PM UTC-4, Les Cargill wrote:
>
>> Geez. Ever use virtual machines? If you break/infect one, just roll
>> it back.
>
> My XP PC doesn't get infected too often, but I build PCs and do tech
> support on the side for family and friends, so their problems become
> my problems.

Ah! Right.

> My last Win7 build took several days just to stabilize
> with all the updates.

Holy cow. I would be tempted to just not do that, then.

> One update put it in a permanent update loop
> and that took me a while to even notice.  And I've got a Win7 laptop
> sitting here that for the life of me I can't get to run outside of
> safe mode.  I did a repair install (malware knocked out system
> restore rollback) and it works fine until the .net v4 updates hit
> after which it stutters and the HD disappears.  I don't get all the
> accolades for Win7, it's a dog.
>

Yeah, that's ugly. Although that's more the update infrastructure
that's ugly rather than Win7 itself.

-- 
Les Cargill


Article: 155464
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Sat, 29 Jun 2013 11:58:28 -0500
Tom Gardner wrote:
> On 29/06/13 03:15, Eric Wallin wrote:
<snip>
>
> Speedy installation: I get a fully-patched installed
> system in well under an hour. Last time MS would
> let me (!) install XP, it took me well over a day
> because of all the reboots.
>
> Speedy re-installation once every 3 years: your
> files are untouched so you just upgrade the o/s
> (trivial precondition: put /home on a separate
> disk partition). Takes < 1 hour.

And things like virtualbox make running a
Windows guest pretty simple. I'm stuck with a
Win7 host for now because of one PCI card, but
virtualbox claims to be able to publish PCI cards to
guests presently but only on a Linux host.


-- 
Les Cargill

Article: 155465
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Sat, 29 Jun 2013 12:13:29 -0500
Tom Gardner wrote:
> On 29/06/13 03:55, Les Cargill wrote:
>> Bakul Shah wrote:
>>> Most of the concepts
>>> are from ~40 years back (CSP, guarded commands etc.).
>>
>> Most *all* concepts in computers are from that long ago or longer.
>> The "new stuff" is more about arbitraging market forces than getting
>> real work done.
>
> There's some truth in that. For most people
> re-inventing the wheel is completely unprofitable.
>
> Who wants to learn iron-mining, smelting, forging, and
> finishing when all you need to do is cut up this
> evening's meal?
>

Yep. Although there is a place in the world for katana-makers.
That's almost a ... religious devotion.

http://video.pbs.org/video/1150578495/

>
>
>>> Turning serial programs
>>> into parallel versions is manual, laborious, error prone
>>> and not very successful.
>>
>> So don't do that. Write them to be parallel from the
>> git-go. Write them to be event-driven. It's better in
>> all dimensions.
>
> But not at all scales; there's a reason fine-grained
> dataflow failed.
> And not with all timescales :)
>

Of course. But we do what we can.
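
(For the flavor of it, the skeleton most such programs share: one
blocking wait, with handlers that fire per event.  A minimal sketch,
nothing more; the handler names are made up, and a real program
registers many descriptors and keeps a proper timer queue.)

#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

static void on_line(const char *line) { printf("event: %s", line); }
static void on_tick(void)             { printf("tick\n"); }

int main(void)
{
    char buf[256];
    for (;;) {
        fd_set rd;
        FD_ZERO(&rd);
        FD_SET(STDIN_FILENO, &rd);
        struct timeval tv = { 1, 0 };        /* 1-second timer event */

        int n = select(STDIN_FILENO + 1, &rd, NULL, NULL, &tv);
        if (n < 0)
            break;                           /* error: bail out */
        else if (n == 0)
            on_tick();                       /* timeout fired */
        else if (FD_ISSET(STDIN_FILENO, &rd)) {
            if (!fgets(buf, sizeof buf, stdin))
                break;                       /* EOF: quit the loop */
            on_line(buf);                    /* input event */
        }
    }
    return 0;
}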

>> After all, we're all really clockmakers. Events regulate our
>> "wheels" just like the escapement on a pendulum clock. .
>> When you get that happening, things get to be a lot more
>> deterministic and that is what parallelism needs the most.
>
> Don't get me wrong, I really like event-driven programming,
> and some programming types are triumphantly re-inventing
> it yet again, many-many layers up the software stack!
>

Har! All that's old is new again. :)

> For example, and to torture the nomenclature:
>   - unreliable photons/electrons at the PMD level
>   - unreliable bits at the PHY level
>   - reliable bits, unreliable frames at the MAC level
>   - reliable frames, unreliable packets at the IP level
>   - reliable packets, unreliable streams at the TCP level
>   - reliable streams, unreliable conversations at the app level
>   - app protocols to make conversations reliable
>   - reliable conversations within apps:
>     - protocols to make apps reliable
>     - streams to send unreliable message events
>     - frameworks to make message events reliable
> where some of the app and framework stuff looks *very* like
> some of the networking stuff.
>

And so you end up throwing all that out and writing one layer
with the business rules, and another that does transport
and event management on top of UDP*.

*or something less sophisticated, like a serial port.

Then you write a GUI if you need it that uses pipes/sockets
to talk to the middleware.

Same as it ever was...
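
(That home-grown transport layer is usually some variant of
stop-and-wait: one sequence number, resend until acked.  A sketch of
the sending side in C; the port, timeout, and retry count are made
up, and real code would at least window the transfers.)

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/time.h>

struct pkt {
    uint32_t seq;                  /* network byte order */
    char     data[512];
};

/* Send one datagram and wait for its ack; retransmit on timeout.
   len must fit in pkt.data. */
static int send_reliably(int s, const struct sockaddr_in *dst,
                         uint32_t seq, const void *data, size_t len)
{
    struct pkt p;
    p.seq = htonl(seq);
    memcpy(p.data, data, len);

    for (int tries = 0; tries < 8; tries++) {
        sendto(s, &p, sizeof p.seq + len, 0,
               (const struct sockaddr *)dst, sizeof *dst);
        uint32_t ack;
        ssize_t n = recv(s, &ack, sizeof ack, 0);   /* SO_RCVTIMEO set */
        if (n == (ssize_t)sizeof ack && ntohl(ack) == seq)
            return 0;              /* acked: done */
        /* timeout or stray datagram: fall through and resend */
    }
    return -1;                     /* peer presumed dead */
}

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct timeval to = { 0, 200 * 1000 };          /* 200 ms timeout */
    setsockopt(s, SOL_SOCKET, SO_RCVTIMEO, &to, sizeof to);

    struct sockaddr_in dst;
    memset(&dst, 0, sizeof dst);
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9999);                     /* made-up port */
    inet_pton(AF_INET, "127.0.0.1", &dst.sin_addr);

    const char msg[] = "business rules go here\n";
    return send_reliably(s, &dst, 1, msg, sizeof msg) ? 1 : 0;
}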

> But I'm not going to throw the baby out with the
> bathwater: there are *very* good reasons why most
> (not all) of those levels are there.
>

The Bad Things are that you end up making assumptions
about the defect rates in the libraries you link in. I am
relatively secure in the knowledge that it's easier to do all that
from scratch. That should not be so, but it frequently
is.

--
Les Cargill

Article: 155466
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sat, 29 Jun 2013 19:32:09 +0100
On 29/06/13 17:58, Les Cargill wrote:
> Tom Gardner wrote:
>> On 29/06/13 03:15, Eric Wallin wrote:
> <snip>
>>
>> Speedy installation: I get a fully-patched installed
>> system in well under an hour. Last time MS would
>> let me (!) install XP, it took me well over a day
>> because of all the reboots.
>>
>> Speedy re-installation once every 3 years: your
>> files are untouched so you just upgrade the o/s
>> (trivial precondition: put /home on a separate
>> disk partition). Takes < 1 hour.
>
> And things like virtualbox make running a
> Windows guest pretty simple. I'm stuck with a
> Win7 host for now because of one PCI card, but
> virtualbox claims to be able to publish PCI cards to
> guests presently but only on a Linux host.

I'm not going to comment on Win in a VM,
because I only use win98 like that :)

But shortly before XP is discontinued (and MS shoots
its corporate customers in the foot!), I'll be
putting a clean WinXP inside at least one VM.

Does MS squeal about putting its o/s inside a VM?
They certainly stop me re-installing my perfectly
legal version of XP on a laptop, even though I have
the product code for that laptop! They sure do make
it difficult for me to use their products, sigh.


Article: 155467
Subject: Re: New soft processor core paper publisher?
From: Eric Wallin <tammie.eric@gmail.com>
Date: Sat, 29 Jun 2013 20:38:02 -0700 (PDT)
On Saturday, June 29, 2013 12:53:27 PM UTC-4, Les Cargill wrote:

> Yeah, that's ugly. Although that's more the update infrastructure
> that's ugly rather than Win7 itself.

Part and parcel.  The modern OS seems to be a constantly moving target.

I want an OS that has all the bad bugs wrung out of it and is stuck in amber (ROM) for a couple of decades so I might actually get some work done already.

Article: 155468
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 30 Jun 2013 02:21:20 -0400
On 6/29/2013 12:50 PM, Les Cargill wrote:
> rickman wrote:
>> On 6/28/2013 10:44 PM, Les Cargill wrote:
>>> Eric Wallin wrote:
>>>> On Friday, June 28, 2013 9:02:10 PM UTC-4, rickman wrote:
>>>>
>>>>> You are still thinking von Neumann. Any application can be broken
>>>>> down into small units and parceled out to small processors. But
>>>>> you have to think in those terms rather than just saying, "it
>>>>> doesn't fit". Of course it can fit!
>>>>
>>>> Intra-brain communications are hierarchical as well.
>>>>
>>>> I'm nobody, but one of the reasons for designing Hive was because I
>>>> feel processors in general are much too complex, to the point where
>>>> I'm repelled by them. I believe one of the drivers for this
>>>> over-complexity is the fact that main memory is external. I've been
>>>> assembling PCs since the 286 days, and I've never understood why main
>>>> memory wasn't tightly integrated onto the uP die.
>>>
>>> RAM was both large and expensive until recently. Different people
>>> made RAM than made processors and it would have been challenging to get
>>> the business arrangements such that they'd glue up.
>>
>> That's not the reason. Intel could buy any of the DRAM makers any day
>> of the week.
>
>
> I have to presume they "couldn't", because they didn't.

That's a bit silly.  Why would they want to?  They don't even need to 
buy an SDRAM company to add SDRAM to their chips.


> But memory
> architectures did evolve over time - I believe 286 machines still
> used DIP packages for DRAM. And the target computers I used
> from the mid-80s to the mid-90s may well have still used SRAM.
>
> At that point, by the time SIP/SIMM/DIMM modules were available,
> the culture expected things to be seperate. We were also arbitraging
> RAM prices - we'd buy less-quantity, more-expensive DRAM now, then buy
> bigger later when the price dropped.

Trust me, it's not about "culture", it is about what they can make work 
the best at the lowest price.  That's why they added cache memory, then 
put the cache memory on a module with the CPU then on the chip itself. 
Did they stick with a "culture" that cache should be chips on the 
motherboard, or stick with separate cache chips?  No, they continued to 
improve to what the current technology would support.


> Some of that was doubtless retail behavioral stuff.
>
>> At several points DRAM divisions of companies were sold
>> off to form merged DRAM specialty companies primarily so the risk was
>> shared by several companies and they didn't have to take such a large
>> hit to their bottom line when DRAM was in the down phase of the business
>> cycle. Commodity parts like DRAMs are difficult to make money on and
>> there is always one of the makers who could be bought easily.
>>
>
> Right. So if you integrate it into the main core package, you no
> longer have to suffer as a commodity vendor. It's a captive market.
>
> I'm sure it's not that simple.

Yes, it's not that simple.  Adding main memory to the CPU chip has all 
sorts of problems.  But knowing how to make SDRAM is not one of them.


>> In fact, Intel started out making DRAMs!
>
> Precisely! They did one of the first "bet the company" moves
> that resulted in the 4004.

Making the 4004 was *not* a "bet the company" design.  They did it under 
contract for a calculator company who paid for the work.  Intel took 
virtually no risk in the matter.


>> The main reason why main
>> memory isn't on the CPU chip is that there are lots of variations in
>> size *and* that it just wouldn't fit! You don't put one DRAM chip in a
>> computer; they used to need a minimum of four, IIRC, to make up a module,
>> often they were 8 to a module and sometimes double sided with 16 chips
>> to a DRAM module.
>>
>
> If you were integrating inside the package, you could use any
> physical configuration you wanted. But the thing would still have been
> too big.

Yes, I agree, main memory is too big to fit on the CPU die for any size 
memory in common use at the time.  Isn't that what I said?


>> The next bigger problem is that CPUs and DRAMs use highly optimized
>> processes and are not very compatible. A combined chip would likely not
>> have as fast a CPU and would have poor DRAM on board.
>>
>>
>
> I also have to wonder if the ability to cool things was involved.

SDRAM does not use a lot of power.  It is cooler running than the CPU.


>>>> Everyone pretty
>>>> much gets the same ballpark memory size when putting a PC together,
>>>> and I can remember only once or twice upgrading memory after the
>>>> initial build (for someone else's Dell or similar where the initial
>>>> build was anemically low-balled for "value" reasons). Here we are in
>>>> 2013, the memory is several light cm away from the processor on the
>>>> MB, talking in cache lines, and I still don't get why we have this
>>>> gross inefficiency.
>>>>
>>>
>>> That's not generally the bottleneck, though.
>>
>> I'm not so sure. With the multicore processors my understanding is that
>> memory bandwidth *is* the main bottleneck. If you could move the DRAM
>> on chip it could run faster but more importantly it could be split into
>> a bank for each processor giving each one all the bandwidth it could
>> want.
>>
>
> That's consistent with my understanding as well. The big thing on
> transputers in the '80s was the 100MBit links between them. As
> we used to say - "the bus is usually the bottleneck". Er,
> at least once you got past 10MHz clock speeds...

Then why did you write "That's not generally the bottleneck"?


>> I think a large part of the problem is that we have been designing more
>> and more complex machines so that the majority of the CPU cycles are
>> spent supporting the framework rather than doing the work the user
>> actually cares about.
>
> Yep - although it's eminently possible to avoid this problem. I use
> a lot of old programs - some going back to Win 3.1.
>
> Really, pure 64-bit computers would have completely failed
> had there not been the ability to run a legacy O/S in a VM
> or run 32 bit progs through the main O/S.
>
>> It is a bit like the amount of fuel needed to go
>> into space. Add one pound of payload and you need some hundred or
>> thousand more pounds of fuel to launch it. If you want to travel
>> further out into space, the amount of fuel goes up exponentially.
>
> So Project X is trying to do something about that. There is something
> about engineering culture that "wants scale" - a Saturn V is a really
> impressive thing to watch, I am sure.
>
>> We
>> seem to be reaching the point that the improvements in processor speed
>> are all being consumed by the support software rather than getting to
>> the apps.
>>
>
> But things like BeOS and the like have been available, and remain
> widely unused. There is some massive culture fail in play; either
> that or things are just good enough.

I'm not sure why you consider this to be a "culture" issue.  Windows is 
the dominant OS.  It is very hard to work with other OSs because there 
are so many fewer apps.  I can design FPGAs only with Windows or Linux 
and at one time I couldn't even use Linux unless I paid for the 
software.  BeOS doesn't run current Windows programs, does it?


> Heck, pad/phone computers do much much *less* than desktops and
> have the bullet in the market. You can't even type on them but people
> still try...

They do some of the same things which are what most people need, but 
they are very different products than what computers are.  The market 
evolved because the technology evolved.  10 years ago pads were mostly a 
joke and smart phones weren't really possible/practical.  Now the 
processors are fast enough running from battery that hand held computing 
is practical and the market will turn that way some 99%.  Desktops will 
always be around just as "workstations" are still around, but only in 
very specialized, demanding applications.

-- 

Rick

Article: 155469
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 30 Jun 2013 02:25:22 -0400
On 6/29/2013 9:56 AM, Tom Gardner wrote:
> On 29/06/13 14:10, Eric Wallin wrote:
>
>> I don't get all the accolades for Win7, it's a dog.
>
> Yes, but it is better than Vista, and the hacks don't feel so guilty
> about supporting it.
>
> Good things about linux: fanbois are vocally and acerbically critical when
> things don't work smoothly, and then point you towards the many
> alternatives
> that /do/ work smoothly.

Many of the Linux "fanbois" also expect all users to be geeks who are 
happy to dig into the machine to keep it humming.  Most people don't 
want to know how it works under the hood, they just want it to work... 
like a car.  Linux is no family sedan.  That is what Windows tries to be 
with some moderate level of success.

-- 

Rick

Article: 155470
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sun, 30 Jun 2013 10:01:09 +0100
On 30/06/13 07:25, rickman wrote:
> On 6/29/2013 9:56 AM, Tom Gardner wrote:
>> On 29/06/13 14:10, Eric Wallin wrote:
>>
>>> I don't get all the accolades for Win7, it's a dog.
>>
>> Yes, but it is better than Vista, and the hacks don't feel so guilty
>> about supporting it.
>>
>> Good things about linux: fanbois are vocally and acerbically critical when
>> things don't work smoothly, and then point you towards the many
>> alternatives
>> that /do/ work smoothly.
>
> Many of the Linux "fanbois" also expect all users to be geeks who are happy to dig into the machine to keep it humming.  Most people don't want to know how it works under the hood, they just want it
> to work... like a car.  Linux is no family sedan.  That is what Windows tries to be with some moderate level of success.

Many, but not all.

One deep geek whose idea of an ideal distro is that "it just
works and lets me get on with what I want to do" is
http://www.dedoimedo.com/
He savages distros that don't work out of the box.

Have you looked at some of the modern distros?
They are easy to get going and easy to learn - arguably
easier than Windows8 judging by its reviews and
lack of uptake.

Try Mint, or xubuntu.

Article: 155471
Subject: Re: New soft processor core paper publisher?
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Sun, 30 Jun 2013 10:03:52 +0100
On 30/06/13 04:38, Eric Wallin wrote:
> On Saturday, June 29, 2013 12:53:27 PM UTC-4, Les Cargill wrote:
>
>> Yeah, that's ugly. Although that's more the update infrastructure
>> that's ugly rather than Win7 itself.
>
> Part and parcel.  The modern OS seems to be a constantly moving target.
>
> I want an OS that has all the bad bugs wrung out of it and is stuck in amber (ROM) for a couple of decades so I might actually get some work done already.

If you want an o/s in ROM, will CD-ROM do? If so, try any
modern linux liveCD!

If you want security, try Lightweight Portable Security,
by the US DoD, for accessing sensitive information, e.g.
your bank account.

If you want multimedia, try Mint.


Article: 155472
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Sun, 30 Jun 2013 10:36:03 -0500
Tom Gardner wrote:
> On 29/06/13 17:58, Les Cargill wrote:
>> Tom Gardner wrote:
>>> On 29/06/13 03:15, Eric Wallin wrote:
>> <snip>
>>>
>>> Speedy installation: I get a fully-patched installed
>>> system in well under an hour. Last time MS would
>>> let me (!) install XP, it took me well over a day
>>> because of all the reboots.
>>>
>>> Speedy re-installation once every 3 years: your
>>> files are untouched so you just upgrade the o/s
>>> (trivial precondition: put /home on a separate
>>> disk partition). Takes < 1 hour.
>>
>> And things like virtualbox make running a
>> Windows guest pretty simple. I'm stuck with a
>> Win7 host for now because of one PCI card, but
>> virtualbox claims to be able to publish PCI cards to
>> guests presently but only on a Linux host.
>
> I'm not going to comment on Win in a VM,
> because I only use win98 like that :)
>
> But shortly before XP is discontinued (and MS shoots
> its corporate customers in the foot!), I'll be
> putting a clean WinXP inside at least one VM.
>

It works well.

> Does MS squeal about putting its o/s inside a VM?

Not in my experience. Even OEM versions can be activated.

> They certainly stop me re-installing my perfectly
> legal version of XP on a laptop, even though I have
> the product code for that laptop! They sure do make
> it difficult for me to use their products, sigh.
>

That's bizarre. I know the activation process is unreliable;
that's why you may have to call the phone number on some
reinstalls.


--
Les Cargill


Article: 155473
Subject: Re: New soft processor core paper publisher?
From: Les Cargill <lcargill99@comcast.com>
Date: Sun, 30 Jun 2013 11:03:45 -0500
rickman wrote:
> On 6/29/2013 12:50 PM, Les Cargill wrote:
>> rickman wrote:
>>> On 6/28/2013 10:44 PM, Les Cargill wrote:
>>>> Eric Wallin wrote:
>>>>> On Friday, June 28, 2013 9:02:10 PM UTC-4, rickman wrote:
>>>>>
>>>>>> You are still thinking von Neumann. Any application can be broken
>>>>>> down into small units and parceled out to small processors. But
>>>>>> you have to think in those terms rather than just saying, "it
>>>>>> doesn't fit". Of course it can fit!
>>>>>
>>>>> Intra-brain communications are hierarchical as well.
>>>>>
>>>>> I'm nobody, but one of the reasons for designing Hive was because I
>>>>> feel processors in general are much too complex, to the point where
>>>>> I'm repelled by them. I believe one of the drivers for this
>>>>> over-complexity is the fact that main memory is external. I've been
>>>>> assembling PCs since the 286 days, and I've never understood why main
>>>>> memory wasn't tightly integrated onto the uP die.
>>>>
>>>> RAM was both large and expensive until recently. Different people
>>>> made RAM than made processors and it would have been challenging to get
>>>> the business arrangements such that they'd glue up.
>>>
>>> That's not the reason. Intel could buy any of the DRAM makers any day
>>> of the week.
>>
>>
>> I have to presume they "couldn't", because they didn't.
>
> That's a bit silly.  Why would they want to?


I presume for the same reasons that "soundcards" were added to 
motherboards. You lose traces, you lose connectors.

 > They don't even need to
 > buy an SDRAM company to add SDRAM to their chips.
 >

Right.

>
>> But memory
>> architectures did evolve over time - I believe 286 machines still
>> used DIP packages for DRAM. And the target computers I used
>> from the mid-80s to the mid-90s may well have still used SRAM.
>>
>> At that point, by the time SIP/SIMM/DIMM modules were available,
>> the culture expected things to be separate. We were also arbitraging
>> RAM prices - we'd buy less-quantity, more-expensive DRAM now, then buy
>> bigger later when the price dropped.
>
> Trust me, it's not about "culture", it is about what they can make work
> the best at the lowest price.  That's why they added cache memory, then
> put the cache memory on a module with the CPU then on the chip itself.
> Did they stick with a "culture" that cache should be chips on the
> motherboard or stick with separate cache chips, no, they continued to
> improve to what the current technology would support.
>

Right!

>
>> Some of that was doubtless retail behavioral stuff.
>>
>>> At several points DRAM divisions of companies were sold
>>> off to form merged DRAM specialty companies primarily so the risk was
>>> shared by several companies and they didn't have to take such a large
>>> hit to their bottom line when DRAM was in the down phase of the business
>>> cycle. Commodity parts like DRAMs are difficult to make money on and
>>> there is always one of the makers who could be bought easily.
>>>
>>
>> Right. So if you integrate it into the main core package, you no
>> longer have to suffer as a commodity vendor. It's a captive market.
>>
>> I'm sure it's not that simple.
>
> Yes, it's not that simple.  Adding main memory to the CPU chip has all
> sorts of problems.  But knowing how to make SDRAM is not one of them.
>
>
>>> In fact, Intel started out making DRAMs!
>>
>> Precisely! They did one of the first "bet the company" moves
>> that resulted in the 4004.
>
> Making the 4004 was *not* a "bet the company" design.  They did it under
> contract for a calculator company who paid for the work.  Intel took
> virtually no risk in the matter.
>

Interestingly, many people say they took considerable risk. It was 
certainly disruptive.

>
>>> The main reason why main
>>> memory isn't on the CPU chip is that there are lots of variations in
>>> size *and* that it just wouldn't fit! You don't put one DRAM chip in a
>>> computer; they used to need a minimum of four, IIRC, to make up a module,
>>> often they were 8 to a module and sometimes double sided with 16 chips
>>> to a DRAM module.
>>>
>>
>> If you were integrating inside the package, you could use any
>> physical configuration you wanted. But the thing would still have been
>> too big.
>
> Yes, I agree, main memory is too big to fit on the CPU die for any size
> memory in common use at the time.  Isn't that what I said?
>

If you did, I missed it.

>
>>> The next bigger problem is that CPUs and DRAMs use highly optimized
>>> processes and are not very compatible. A combined chip would likely not
>>> have as fast a CPU and would have poor DRAM on board.
>>>
>>>
>>
>> I also have to wonder if the ability to cool things was involved.
>
> SDRAM does not use a lot of power.  It is cooler running than the CPU.
>
>
>>>>> Everyone pretty
>>>>> much gets the same ballpark memory size when putting a PC together,
>>>>> and I can remember only once or twice upgrading memory after the
>>>>> initial build (for someone else's Dell or similar where the initial
>>>>> build was anemically low-balled for "value" reasons). Here we are in
>>>>> 2013, the memory is several light cm away from the processor on the
>>>>> MB, talking in cache lines, and I still don't get why we have this
>>>>> gross inefficiency.
>>>>>
>>>>
>>>> That's not generally the bottleneck, though.
>>>
>>> I'm not so sure. With the multicore processors my understanding is that
>>> memory bandwidth *is* the main bottleneck. If you could move the DRAM
>>> on chip it could run faster but more importantly it could be split into
>>> a bank for each processor giving each one all the bandwidth it could
>>> want.
>>>
>>
>> That's consistent with my understanding as well. The big thing on
>> transputers in the '80s was the 100MBit links between them. As
>> we used to say - "the bus is usually the bottleneck". Er,
>> at least once you got past 10MHz clock speeds...
>
> Then why did you write "That's not generally the bottleneck"?
>

Because on most designs I have seen for the last decade or
more, the memory bus is not the processor interconnect bus.

>
>>> I think a large part of the problem is that we have been designing more
>>> and more complex machines so that the majority of the CPU cycles are
>>> spent supporting the framework rather than doing the work the user
>>> actually cares about.
>>
>> Yep - although it's eminently possible to avoid this problem. I use
>> a lot of old programs - some going back to Win 3.1.
>>
>> Really, pure 64-bit computers would have completely failed
>> had there not been the ability to run a legacy O/S in a VM
>> or run 32-bit progs through the main O/S.
>>
>>> It is a bit like the amount of fuel needed to go
>>> into space. Add one pound of payload and you need some hundred or
>>> thousand more pounds of fuel to launch it. If you want to travel
>>> further out into space, the amount of fuel goes up exponentially.
>>
>> So Project X is trying to do something about that. There is something
>> about engineering culture that "wants scale" - a Saturn V is a really
>> impressive thing to watch, I am sure.
>>
>>> We
>>> seem to be reaching the point that the improvements in processor speed
>>> are all being consumed by the support software rather than getting to
>>> the apps.
>>>
>>
>> But things like BeOS and the like have been available, and remain
>> widely unused. There is some massive culture fail in play; either
>> that or things are just good enough.
>
> I'm not sure why you consider this to be a "culture" issue.  Windows is
> the dominant OS.

Well - that is a cultural artifact. What else can it be? There is no
feedback path in the marketplace for us to express our disdain
over bloatware.

> It is very hard to work with other OSs because there
> are so many fewer apps.  I can design FPGAs only with Windows or Linux
> and at one time I couldn't even use Linux unless I paid for the
> software.  BeOS doesn't run current Windows programs does it?
>

No. My point is that the culture does not reward minimalist
software solutions unless they're in the control or embedded
space.


>
>> Heck, pad/phone computers do much much *less* than desktops and
>> have the bullet in the market. You can't even type on them but people
>> still try...
>
> They do some of the same things, which are what most people need, but
> they are very different products from what computers are.  The market
> evolved because the technology evolved.  10 years ago pads were mostly a
> joke and smart phones weren't really possible/practical.  Now the
> processors are fast enough running from battery that handheld computing
> is practical, and the market will turn that way, probably 99% of it.

Phones and tablets are and will always be cheesy little non-computers.
They don't have enough peripheral options to do anything
besides post cat pictures to social media sites.

You *can* make serious control surface computers out of them, but
they're no longer at a consumer-friendly price. And the purchasing
window for them is very narrow, so managing market thrash is
a problem.

> Desktops will
> always be around just as "workstations" are still around, but only in
> very specialized, demanding applications.
>

Or they can be a laptop in a box. The world* is glued together by
Visual Basic. Dunno if the Win8 tablets can be relied on to run VB
well enough to support all that.

*as opposed to the fantasy world - the Net - which is glued with Java.

I expect the death of the desktop is greatly exaggerated.


-- 
Les Cargill


Article: 155474
Subject: Re: New soft processor core paper publisher?
From: rickman <gnuarm@gmail.com>
Date: Sun, 30 Jun 2013 19:10:33 -0400
Links: << >>  << T >>  << A >>
On 6/30/2013 12:03 PM, Les Cargill wrote:
> rickman wrote:
>> On 6/29/2013 12:50 PM, Les Cargill wrote:
>>> rickman wrote:
>>>>
>>>> That's not the reason. Intel could buy any of the DRAM makers any day
>>>> of the week.
>>>
>>>
>>> I have to presume they "couldn't", because they didn't.
>>
>> That's a bit silly. Why would they want to?
>
> I presume for the same reasons that "soundcards" were added to
> motherboards. You lose traces, you lose connectors.

The only fly in the ointment is that it isn't practical to combine an 
x86 CPU with 4 GB of DRAM on a single chip.  Oh well, otherwise a great 
idea.  It might become practical in another 5 years, except by then low 
end computers will commonly be using more than 16 GB of DRAM on the board.
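
Back of the envelope (rough, illustrative numbers, not from any 
datasheet): a DRAM cell is about 6F^2, so on a 32 nm DRAM process 4 GB 
(32 Gbit) needs roughly

    32e9 bits x 6 x (32 nm)^2 = ~200 mm^2

for the cell array alone, before sense amps, decoders and redundancy, 
and before you add a 150-250 mm^2 CPU.  And that assumes a real DRAM 
process; embedded DRAM on a logic process is several times less dense.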

You presume a lot.  That is not the same as it being correct.


> Right.


> Right!

You seem to be learning... ;^)


>>>> In fact, Intel started out making DRAMs!
>>>
>>> Precisely! They did one of the first "bet the company" moves
>>> that resulted in the 4004.
>>
>> Making the 4004 was *not* a "bet the company" design. They did it under
>> contract for a calculator company who paid for the work. Intel took
>> virtually no risk in the matter.
>>
>
> Interestingly, many people say they took considerable risk. It was
> certainly disruptive.


Like who?  What was the risk, that the calculator wouldn't work, they 
wouldn't get the contract???  Where was the "considerable" risk?

Actually, there was little risk.  Once they convinced the calculator 
company that they could do it more cheaply, it was an obvious move to 
make.  The technology was to the point where they could put a small CPU 
on a chip (or chips) and make a fully functional computer.  There was no 
plan then of becoming the huge computer giant, though I am sure they 
realized this could become the basis of a very significant industry.  So 
where was the risk?


>>>> The main reason why main
>>>> memory isn't on the CPU chip is because there are lots of variations in
>>>> size *and* that it just wouldn't fit! You don't put one DRAM chip in a
>>>> computer; they used to need a minimum of four, IIRC, to make up a
>>>> module,
>>>> often they were 8 to a module and sometimes double sided with 16 chips
>>>> to a DRAM module.
>>>>
>>>
>>> If you were integrating inside the package, you could use any
>>> physical configuration you wanted. But the thing would still have been
>>> too big.
>>
>> Yes, I agree, main memory is too big to fit on the CPU die for any size
>> memory in common use at the time. Isn't that what I said?
>>
>
> If you did, I missed it.

Uh, look above...

"The main reason why main memory isn't on the CPU chip is because there 
are lots of variations in size *and* that it just wouldn't fit!"


>>>> The next bigger problem is that CPUs and DRAMs use highly optimized
>>>> processes and are not very compatible. A combined chip would likely not
>>>> have as fast a CPU and would have poor DRAM on board.
>>>>
>>>>
>>>
>>> I also have to wonder if the ability to cool things was involved.
>>
>> SDRAM does not use a lot of power. It is cooler running than the CPU.
>>
>>
>>>>>> Everyone pretty
>>>>>> much gets the same ballpark memory size when putting a PC together,
>>>>>> and I can remember only once or twice upgrading memory after the
>>>>>> initial build (for someone else's Dell or similar where the initial
>>>>>> build was anemically low-balled for "value" reasons). Here we are in
>>>>>> 2013, the memory is several light cm away from the processor on the
>>>>>> MB, talking in cache lines, and I still don't get why we have this
>>>>>> gross inefficiency.
>>>>>>
>>>>>
>>>>> That's not generally the bottleneck, though.
>>>>
>>>> I'm not so sure. With the multicore processors my understanding is that
>>>> memory bandwidth *is* the main bottleneck. If you could move the DRAM
>>>> on chip it could run faster but more importantly it could be split into
>>>> a bank for each processor giving each one all the bandwidth it could
>>>> want.
>>>>
>>>
>>> That's consistent with my understanding as well. The big thing on
>>> transputers in the '80s was the 100MBit links between them. As
>>> we used to say - "the bus is usually the bottleneck". Er,
>>> at least once you got past 10MHz clock speeds...
>>
>> Then why did you write "That's not generally the bottleneck"?
>>
>
> Because on most designs I have seen for the last decade or
> more, the memory bus is not the processor interconnect bus.

What does that mean?  I don't know what processor designs you have seen, 
but all of the multicore stuff (which is what they have been building 
for nearly a decade) is memory-bandwidth constrained, because you have 
two, three, four or eight processors sharing just one memory interface, 
or in some cases I believe two.  This is a classic problem at this 
point, referred to as the "memory wall".  Google it.
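
If you want to see it yourself, here is a minimal sketch (mine, purely 
illustrative -- assumes gcc with OpenMP) of a STREAM-style triad.  The 
arrays are far too big for any cache, so the loop is pure memory 
traffic, and the reported GB/s flattens out as you add threads because 
every core funnels through the same memory interface:

    /* triad.c -- build with: gcc -O2 -fopenmp triad.c -o triad */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N (32L * 1024 * 1024)   /* 32M doubles = 256 MB per array */

    int main(void)
    {
        double *a = malloc(N * sizeof *a);
        double *b = malloc(N * sizeof *b);
        double *c = malloc(N * sizeof *c);
        if (!a || !b || !c) return 1;

        for (long i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

        for (int t = 1; t <= 8; t *= 2) {
            omp_set_num_threads(t);
            double t0 = omp_get_wtime();
            #pragma omp parallel for
            for (long i = 0; i < N; i++)
                a[i] = b[i] + 3.0 * c[i];  /* 24 bytes of traffic per element */
            double dt = omp_get_wtime() - t0;
            printf("%d thread(s): %6.2f GB/s\n",
                   t, 3.0 * N * sizeof(double) / dt / 1e9);
        }
        free(a); free(b); free(c);
        return 0;
    }

On a compute-bound loop the same experiment scales nearly linearly; on 
this one it stalls at whatever the DRAM interface can deliver.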


>>>> We
>>>> seem to be reaching the point that the improvements in processor speed
>>>> are all being consumed by the support software rather than getting to
>>>> the apps.
>>>>
>>>
>>> But things like BeOS and the like have been available, and remain
>>> widely unused. There is some massive culture fail in play; either
>>> that or things are just good enough.
>>
>> I'm not sure why you consider this to be a "culture" issue. Windows is
>> the dominant OS.
>
> Well - that is a cultural artifact. What else can it be? There is no
> feedback path in the marketplace for us to express our disdain
> over bloatware.

You can't buy a computer with Linux or some other OS?


>> It is very hard to work with other OSs because there
>> are so many fewer apps. I can design FPGAs only with Windows or Linux
>> and at one time I couldn't even use Linux unless I paid for the
>> software. BeOS doesn't run current Windows programs does it?
>>
>
> No. My point is that the culture does not reward minimalist
> software solutions unless they're in the control or embedded
> space.

Why is that?  Of course rewards are there for anyone who makes a better 
product.


>>> Heck, pad/phone computers do much much *less* than desktops and
>>> have the bullet in the market. You can't even type on them but people
>>> still try...
>>
>> They do some of the same things, which are what most people need, but
>> they are very different products from what computers are. The market
>> evolved because the technology evolved. 10 years ago pads were mostly a
>> joke and smart phones weren't really possible/practical. Now the
>> processors are fast enough running from battery that handheld computing
>> is practical, and the market will turn that way, probably 99% of it.
>
> Phones and tablets are and will always be cheesy little non-computers.
> They don't have enough peripheral options to do anything
> besides post cat pictures to social media sites.

Ok, another quote to go up there with "No one will need more than 640 
kBytes" and "I see little commercial potential for the internet for the 
next 10 years."

I'll bet you have one of these things as a significant computing 
platform in four years... you can quote me on that!


> You *can* make serious control surface computers out of them, but
> they're no longer at a consumer-friendly price. And the purchasing
> window for them is very narrow, so managing market thrash is
> a problem.
>
>> Desktops will
>> always be around just as "workstations" are still around, but only in
>> very specialized, demanding applications.
>>
>
> Or they can be a laptop in a box. The world* is glued together by
> Visual Basic. Dunno if the Win8 tablets can be relied on to run VB
> well enough to support all that.
>
> *as opposed to the fantasy world - the Net - which is glued with Java.
>
> I expect the death of the desktop is greatly exaggerated.

I don't know what the "death of the desktop" is, but I think you and I 
will no longer have traditional computers (aka, laptops and desktops) as 
anything but reserve computing platforms in six years.

I am pretty much a Luddite when it comes to new technology.  I think most 
of it is bogus crap.  But I have seen the light of phones and tablets 
and I am a believer.  I have been shown the way and the way is good.

Here's a clue to the future.  How many here want to use Windows after 
XP?  Who likes Vista?  Who likes Win7?  Win8?  Is your new PC any faster 
than your old PC (other than the increased memory for memory-bound 
apps)?  PCs are reaching the wall while handheld devices aren't. 
Handhelds will be catching up in six years and will be able to do all 
the stuff you want from your computer today.  Tomorrow's PCs, 
meanwhile, won't be doing a lot more.  So the gap will narrow, and who 
wants all the baggage of traditional PCs when they can use much more 
convenient handhelds?  I/O won't be a problem.  I think all the tablets 
plug into a TV via HDMI and you can add a keyboard and mouse easily.  So 
there you have all the utility of a PC in a tiny form factor along with 
all the advantages of the handheld when you want a handheld.

If the FPGA design software ran on them well, I'd get one today.  But I 
need to wait a few more years for the gap to close.

-- 

Rick


