Is there an equivalent .tdf file in Aldec Active-HDL? One can find "text design file" in Quartus II. I'd like the same in Active-HDL. Thanks.Article: 155276
Elizabeth D. Rather wrote: > I'm also baffled about your comment about "programs that emit > Forth." Although PostScript has many features in common with > Forth, it is quite different, both in terms of command set > and programming model. It was my comment, based on experience of writing a few Forth and PostScript programs back in the mid 80s. I know there are differences between Forth and PostScript, but the similarities are /more/ significant. Forth is like PostScript in the same way that Delphi is like Pascal, C# is like Java, Scheme is like Lisp, Objective-C is like Smalltalk (and unlike C++) etc. And, more importantly, Forth/PostScript is unlike C, is unlike Prolog, is unlike Lisp, is unlike Smalltalk, is unlike Pascal. I was beguiled by Forth and to some extent remain so: I'd /love/ to find a good justification to use it again, and might do so, just for the hell of it. I'm sure Forth has moved on since the 80s, but it cannot escape its ethos any more than C, Prolog, Lisp, Smalltalk can. Or rather, if it did change then it wouldn't be Forth anymore. > Modern Forths (e.g. since the release of ANS Forth 94) > feature a variety of implementation strategies, ranging > from fairly conventional compilers that generate > optimized machine code to more traditional threaded code models. I'm sure that's true, but it misses the point. Interpreted Java is the same as jitted Java is the same as Hotspotted Java. Ditto Forth; if it wasn't then it simply wouldn't be Forth! I suppose I ought to change "nobody writes Forth" to "almost nobody writes Forth". I would, however, be interested to know whether Forth has a defined memory model, since that's a _necessary_ _precondition_ to be able to write portable multithreaded code that runs on multicore processors. Hell's teeth, even C has now finally decided that they need to define a memory model, only 20 years after those With Clue knew it was required. (I don't think there are any implementations though!)Article: 155277
On 21/06/13 11:30, Tom Gardner wrote: > I suppose I ought to change "nobody writes Forth" to > "almost nobody writes Forth. > Shouldn't that be "almost nobody Forth writes" ? I wonder if Yoda programs in Forth... (I too would like an excuse to work with Forth again.)Article: 155278
David Brown wrote: > On 21/06/13 11:30, Tom Gardner wrote: > >> I suppose I ought to change "nobody writes Forth" to >> "almost nobody writes Forth. >> > > Shouldn't that be "almost nobody Forth writes" ? :) Still use my HP calculators in preference to algebraic and half-algebraic calculators!Article: 155279
On Friday, June 21, 2013 10:25:55 AM UTC-4, Tom Gardner wrote: > Still use my HP calculators in preference to > algebraic and half-algebraic calculators! Same here! When manually calculating, give me HP (or similar) or give me death. But Forth kind of sucks (and it pains me deeply to have discovered that). The failure (as I see it) of Forth is the awkward target programming model (or virtual machine as the kids say these days - virtual anything is sexy). There's a pretty big gulf between manual data & operation entry on something like an HP (or any hand held calculator) and programming with Forth (or any language). A single stack very much facilitates the former but places handcuffs on the latter. These are two fairly different activities that IMO don't exactly have boatloads of overlap, however much we might want or personally need for it to be so.Article: 155280
On 6/20/2013 8:34 PM, Kevin Neilson wrote: > I'd really like to make some cores in my spare time, but the revenues would be pretty small, and there is no way it would be worthwhile to buy Synplify and Modelsim licenses for such a small endeavor. I don't know exactly what that would cost, but I'm sure it's tens of thousands. It'd be great if I could use the tools online for a few hours here and there and just pay for that. Even if I couldn't use the GUI--if I could just get an EDIF and .srr file back--that would be useful. I know Modelsim isn't that high, or at least it wasn't some 5 years ago. I think I was quoted $5k give or take. But then it is another $1k per year maintenance. The point is the company has to have a given level of revenue and even if they adopt a per use based pricing structure, they would have to charge pretty steeply for each use to get that same level of revenue. The only way it could become less expensive is if they ended up with a lot more sales. I'm sure they have looked at it and found the "sweet spot" for pricing maximizing their profit. After all, that is what it is about. > I guess I could use Icarus or something, but I'm sure it's not going to parse the nice SysVerilog / VHDL 2008 code I write, and who wants to buy a core that comes with an Icarus project file? Why can't you use the free versions of the tools provided by the FPGA vendors? You get Active-HDL from Lattice, I think Xilinx has their own simulator and I don't know what Altera offers. What does Microsemi offer these days since they bought Actel? -- RickArticle: 155281
On 6/20/2013 11:38 PM, Eric Wallin wrote: > On Thursday, June 20, 2013 7:50:09 PM UTC-4, rickman wrote: >> Quitter! If the syntax (or near total lack thereof) bothers you, then >> you must have a very thin skin. > > Ha ha! And I see what you did there. > > > On Thursday, June 20, 2013 9:31:53 PM UTC-4, Elizabeth D. Rather wrote: >> Eric, I'm curious what books these were that you found so offensive? > > The books ("Starting Forth", "Thinking Forth", "Forth Programmer's Handbook") weren't themselves offensive, but they revealed Forth to be much lamer than I expected for all the stick-it-to-the-man ethos surrounding it. I was totally stoked for a stack-based language that would solve all my problems, but all I got was some books gathering dust. > >> I'm also baffled about your comment about "programs that emit Forth." >> Although PostScript has many features in common with Forth, it is quite >> different, both in terms of command set and programming model. > > You're looking for Tom Gardner, he's down the hall near the elevators using a little stamp at the bottom of his cane to make little chicken footprints on the floor. I just saw that episode a few weeks ago. The chicken contest was pretty cool actually. I actually identify with House. Not that I am as smart as he is, but I have a bad hip (waiting for Obamacare to kick in so I can get a new one) and have a natural tendency to tick off people unless I work to rein it in. :] -- RickArticle: 155282
On 6/21/2013 5:30 AM, Tom Gardner wrote: > Elizabeth D. Rather wrote: > > I'm also baffled about your comment about "programs that emit > > Forth." Although PostScript has many features in common with > > Forth, it is quite different, both in terms of command set > > and programming model. > > It was my comment, based on experience of writing > a few Forth and PostScript programs back in the mid 80s. > > I'm know there are differences between Forth and > PostScript, but the similarities are /more/ significant. > Forth is like PostScript in the same way that Delphi > is like Pascal, C# is like Java, Scheme is like Lisp, > Objective-C is like SmallTalk (and unlike C++) etc. > > And, more importantly, ForthPostscript is unlike C, > is unlike Prolog, is unlike Lisp, is unlike Smalltalk > is unlike Pascal. > > I was beguiled by Forth and to some extend remain so: > I'd /love/ to find a good justification to use it again, > and might do so, just for the hell of it. If you ever need to control hardware, you will want it. The interactivity is great. Actually, when doing things in real time the interactivity sort of goes away (or at least the utility of it) since you can't type fast enough to control a robot or intercept a serial port running at 38.4 kbps. But you can very easily test small portions of your code in ways that are tricky in C or the other languages you mention. Then those ideas can be applied to any app... even if not real time. > I'm sure Forth has moved on since the 80s, but it cannot > escape its ethos any more than C, Prolog, Lisp, > Smalltalk can. Or rather, if it did chance then it wouldn't > be Forth anymore. Can *any* of us ever escape our ethos? I know I've been trying for many a year and it is still right here by my side, aren't you Ethos? Actually, that would be a good name for a couple of dogs, Ethos and Pathos, Chesapeake Bay retrievers. > > Modern Forths (e.g. 
since the release of ANS Forth 94) > > feature a variety of implementation strategies, ranging > > from fairly conventional compilers that generate > > optimized machine code to more traditional threaded code models. > > I'm sure that's true, but it misses the point. > Interpreted Java is the same as jitted Java is > the same as Hotspotted Java. > > Ditto Forth; if it wasn't then it simply wouldn't be Forth! > > I suppose I ought to change "nobody writes Forth" to > "almost nobody writes Forth". So now I've been demoted to "almost" nobody? What do I have to do to work my way *up* to nobody? > I would, however, be interested to know whether Forth > has a defined memory model, since that's a _necessary_ > _precondition_ to be able to write portable multithreaded > code that runs on multicore processors. Hell's teeth, > even C has now finally decided that they need to define > a memory model, only 20 years after those With Clue > knew it was required. (I don't think there are any > implementations though!) Uh, can *someone* who knows something answer that one? Do you have a lot of call to program multithreaded code for multicore processors? What sort of apps are you coding? -- RickArticle: 155283
On 6/21/2013 7:18 AM, David Brown wrote: > On 21/06/13 11:30, Tom Gardner wrote: > >> I suppose I ought to change "nobody writes Forth" to >> "almost nobody writes Forth. >> > > Shouldn't that be "almost nobody Forth writes" ? I would say that was "nobody almost Forth writes". Wouldn't it be [noun [adjective] [noun [adjective]]] verb? > (I too would like an excuse to work with Forth again.) What do you do instead? -- RickArticle: 155284
On 6/22/2013 12:44 AM, Eric Wallin wrote: > On Friday, June 21, 2013 10:25:55 AM UTC-4, Tom Gardner wrote: > >> Still use my HP calculators in preference to >> algebraic and half-algebraic calculators! > > Same here! When manually calculating, give me HP (or similar) or give me death. But Forth kind of sucks (and it pains me deeply to have discovered that). The failure (as I see it) of Forth is the awkward target programming model (or virtual machine as the kids say these days - virtual anything is sexy). > > There's a pretty big gulf between between manual data& operation entry on something like an HP (or any hand held calculator) and programming with Forth (or any language). A single stack very much facilitates the former but places handcuffs on the latter. These are two fairly different activities that IMO don't exactly have boatloads of overlap, however much we might want or personally need for it to be so. I won't argue with any of that, or I might... So why are you here exactly? I'm not saying you shouldn't be here or that you shouldn't be saying what you are saying. But given how you feel about Forth, I'm just curious why you want to have the conversation you are having? Are you exploring your inner curmudgeon? -- RickArticle: 155285
rickman wrote: > On 6/21/2013 5:30 AM, Tom Gardner wrote: >> Elizabeth D. Rather wrote: >> > I'm also baffled about your comment about "programs that emit >> > Forth." Although PostScript has many features in common with >> > Forth, it is quite different, both in terms of command set >> > and programming model. >> >> I was beguiled by Forth and to some extend remain so: >> I'd /love/ to find a good justification to use it again, >> and might do so, just for the hell of it. > > If you ever need to control hardware, you will want it. The interactivity is great. The first time I wished I had Forth was when manually testing prototype hardware with a z80-class processor c1983. Forth would have been ideal, and significantly better than the rubbish I threw together. That experience has shaped my views of "domain specific languages" vs libraries to this day. > Actually, when doing things in real time the interactivity sort of goes away (or at least the utility of it) since you can't type fast enough to control a robot or intercept a serial port running at > 38.4 kbps. But you can very easily test small portions of your code in ways that are tricky in C or the other languages you mention. Then those ideas can be applied to any app... even if not real time. Just so. It seems we are in violent agreement. One also has to consider the tools that other people are familiar with; changing away from them requires very significant advantages, and Forth just doesn't have those. >> I suppose I ought to change "nobody writes Forth" to >> "almost nobody writes Forth. > > So now I've been demoted to to "almost" nobody? What do I have to do to work my way *up* to nobody? Why are you assuming "demoted"? Is a screw above or below a nail in some imagined hierarchy! >> I would, however, be interested to know whether Forth >> has a defined memory model, since that's a _necessary_ >> _precondition_ to be able to write portable multithreaded >> code that runs on multicore processors. 
Hell's teeth, >> even C has now finally decided that they need to define >> a memory model, only 20 years after those With Clue >> knew it was required. (I don't think there are any >> implementations though!) > > Uh, can *someone* who knows something answer that one? > > Do you have a lot of call to program multithreaded code for multicore processors? What sort of apps are you coding? Even cellphones have dual and quad processors nowadays. And, of course, there are outliers like the Parallax Propeller. Multicore will become the norm in the near future; the only constraint in embedded systems will be memory bandwidth.Article: 155286
rickman wrote: > I won't argue with any of that, or I might... > > So why are you here exactly? I'm not saying you shouldn't be here or that you shouldn't be saying what you are saying. But given how you feel about Forth, I'm just curious why you want to have the > conversation you are having? Are you exploring your inner curmudgeon? (a) because it is fun (b) to gain a personal understanding of the advantages of nails and screws, and when each should/shouldn't be usedArticle: 155287
On 22/06/13 06:03, rickman wrote: > On 6/20/2013 8:34 PM, Kevin Neilson wrote: >> I'd really like to make some cores in my spare time, but the revenues would be pretty small, and there is no way it would be worthwhile to buy Synplify and Modelsim licenses for such a small endeavor. I don't know exactly what that would cost, but I'm sure it's tens of thousands. It'd be great if I could use the tools online for a few hours here and there and just pay for that. Even if I couldn't use the GUI--if I could just get an EDIF and .srr file back--that would be useful. > > I know Modelsim isn't that high, or at least it wasn't some 5 years ago. > I think I was quoted $5k give or take. But then it is another $1k per > year maintenance. The point is the company has to have a given level of > revenue and even if they adopt a per use based pricing structure, they > would have to charge pretty steeply for each use to get that same level > of revenue. > > The only way it could become less expensive is if they ended up with a > lot more sales. I'm sure they have looked at it and found the "sweet > spot" for pricing maximizing their profit. After all, that is what it > is about. > > >> I guess I could use Icarus or something, but I'm sure it's not going to parse the nice SysVerilog / VHDL 2008 code I write, and who wants to buy a core that comes with an Icarus project file? > > Why can't you use the free versions of the tools provided by the FPGA > vendors? You get Active-HDL from Lattice, I think Xilinx has their own > simulator and I don't know what Altera offers. What does Microsemi > offer these days since they bought Actel? > Altera offer Modelsim Altera Edition (linked to the subscription edition of Quartus), and Modelsim Altera Starter Edition (completely free). Obviously they have limited capacity compared to the full edition of Modelsim - but they do have a surprising range of language support, including the design features of SystemVerilog. 
There is a "cloud" version of Aldec Riviera - I don't know how much it costs. See http://www.aldec.com/en/solutions/functional_verification/aldec_cloud Microsemi Libero SoC comes with a version of Modelsim, and of Synplify. regards Alan -- Alan FitchArticle: 155288
On 6/22/2013 4:34 AM, Tom Gardner wrote: > rickman wrote: >> On 6/21/2013 5:30 AM, Tom Gardner wrote: >>> Elizabeth D. Rather wrote: >>> > I'm also baffled about your comment about "programs that emit >>> > Forth." Although PostScript has many features in common with >>> > Forth, it is quite different, both in terms of command set >>> > and programming model. >>> >>> I was beguiled by Forth and to some extend remain so: >>> I'd /love/ to find a good justification to use it again, >>> and might do so, just for the hell of it. >> >> If you ever need to control hardware, you will want it. The >> interactivity is great. > > The first time I wished I had Forth was when manually > testing prototype hardware with a z80-class processor > c1983. Forth would have been ideal, and significantly > better than the rubbish I threw together. > > That experience has shaped my views of "domain specific > languages" vs libraries to this day. I'm not sure what that implies, but I'll go with it. >> Actually, when doing things in real time the interactivity sort of >> goes away (or at least the utility of it) since you can't type fast >> enough to control a robot or intercept a serial port running at >> 38.4 kbps. But you can very easily test small portions of your code in >> ways that are tricky in C or the other languages you mention. Then >> those ideas can be applied to any app... even if not real time. > > Just so. It seems we are in violent agreement. Not *too* violent. I don't have guns or anything like that. > One also has to consider the tools that other people > are familiar with; changing away from them requires > very significant advantages, and Forth just doesn't > have those. I'm not sure what tools you mean. Sometimes I miss using a debugger where I can step through my code. Actually I bet Win32Forth has that somewhere, but the docs are not so great and I am likely missing some 95% of what is in it. 
Still, debuggers have their limits, notably they don't do a great job of getting you in the vicinity of the bug so you can step through the code to find it. My willingness to give up debuggers and "go with the Forth" was largely prompted by a statement by Jeff Fox. I don't recall the context exactly but I was talking about the difficulties of tracking down stack underflows. My context was thinking like I was coding in C where I would write a bunch of code and then start in on the bugs often adding new ones in the process. Jeff pointed out that in Forth, if you have a stack mismatch it shows that *you can't count*. Even though (or perhaps because) it was so simple, that really struck me. If I can't write a Forth word that balances the stack effects, it means I can't count to three or four. Counting to three or four is a lot easier than using a debugger! >>> I suppose I ought to change "nobody writes Forth" to >>> "almost nobody writes Forth. >> >> So now I've been demoted to to "almost" nobody? What do I have to do >> to work my way *up* to nobody? > > Why are you assuming "demoted"? Is a screw above or > below a nail in some imagined hierarchy! Context is everything. I have no idea what the context of screw v nail is about. >>> I would, however, be interested to know whether Forth >>> has a defined memory model, since that's a _necessary_ >>> _precondition_ to be able to write portable multithreaded >>> code that runs on multicore processors. Hells teeth, >>> even C has now has finally decided that they need to define >>> a memory model, only 20 years after those With Clue >>> knew it was required. (I don't think there are any >>> implementation though!) >> >> Uh, can *someone* who knows something answer that one? >> >> Do you have a lot of call to program multithreaded code for multicore >> processors? What sort of apps are you coding? > > Even cellphones have dual and quad processors nowadays. > > And, of course, there are outliers like the Parallax > Propellor. 
> > Multicore will become the norm in the near future; the > only constraint in embedded systems will be memory bandwidth. So which are you writing code for? Or is this just a theoretical discussion? As to multicore being the norm... well, the last toaster I bought does seem to have a micro in it, *really*. It's a four slice unit, each half has three little buttons for bagel, frozen and reheat, one button for cancel and a darkness knob. You have up to five seconds after pushing down the handle to make your selections. I'll bet you anything this has a GA4 in it! I can't imagine doing this without a multicore processor... oh, wait, that is programmed in Forth isn't it... what is that memory model again? -- RickArticle: 155289
On 6/22/2013 1:20 AM, rickman wrote: >... > Actually, that would be a good name for a couple of dogs, Ethos and > Pathos, Chesapeake Bay retrievers. >... What happened to the third dog, Logos? A funny omission, given that this group is primarily logic related (only secondarily philosophy related). ChrisArticle: 155290
On 22/06/13 07:23, rickman wrote: > On 6/21/2013 7:18 AM, David Brown wrote: >> On 21/06/13 11:30, Tom Gardner wrote: >> >>> I suppose I ought to change "nobody writes Forth" to >>> "almost nobody writes Forth. >>> >> >> Shouldn't that be "almost nobody Forth writes" ? > > I would say that was "nobody almost Forth writes". Wouldn't it be [noun > [adjective] [noun [adjective]]] verb? I thought about that, but I was not sure. When I say "work with Forth again", I have only "played" with Forth, not "worked" with it, and it was a couple of decades ago. > >> (I too would like an excuse to work with Forth again.) > > What do you do instead? > I do mostly small-systems embedded programming, which is mostly in C. It used to include a lot more assembly, but that's quite rare now (though it is not uncommon to have to make little snippets in assembly, or to study compiler-generated assembly), and perhaps in the future it will include more C++ (especially with C++11 features). I also do desktop and server programming, mostly in Python, and I have done a bit of FPGA work (but not for a number of years). I don't think of Forth as being a suitable choice of language for the kind of systems I work with - but I do think it would be fun to work with the kind of systems for which Forth is the best choice. However, I suspect that is unlikely to happen in practice. (Many years ago, my company looked at a potential project for which Atmel's Marc-4 processors were a possibility, but that's the nearest I've come to Forth at work.) I just think it's fun to work with different types of language - it gives you a better understanding of programming in general, and new ideas of different ways to handle tasks.Article: 155291
On 6/22/2013 11:21 AM, David Brown wrote: > On 22/06/13 07:23, rickman wrote: >> On 6/21/2013 7:18 AM, David Brown wrote: >>> On 21/06/13 11:30, Tom Gardner wrote: >>> >>>> I suppose I ought to change "nobody writes Forth" to >>>> "almost nobody writes Forth. >>>> >>> >>> Shouldn't that be "almost nobody Forth writes" ? >> >> I would say that was "nobody almost Forth writes". Wouldn't it be [noun >> [adjective] [noun [adjective]]] verb? > > I thought about that, but I was not sure. When I say "work with Forth > again", I have only "played" with Forth, not "worked" with it, and it > was a couple of decades ago. Hey, it's not like this is *real* forth. But looking at how some Forth code works for things like assemblers and my own projects, the data is dealt with first starting with some sort of a noun type piece of data (like a register) which may be modified by an adjective (perhaps an addressing mode) followed by others, then the final verb to complete the action (operation). >>> (I too would like an excuse to work with Forth again.) >> >> What do you do instead? >> > > I do mostly small-systems embedded programming, which is mostly in C. It > used to include a lot more assembly, but that's quite rare now (though > it is not uncommon to have to make little snippets in assembly, or to > study compiler-generated assembly), and perhaps in the future it will > include more C++ (especially with C++11 features). I also do desktop and > server programming, mostly in Python, and I have done a bit of FPGA work > (but not for a number of years). Similar to myself, but with the opposite emphasis. I mostly do hardware and FPGA work with embedded programming which has been rare for some years. I think Python is the language a customer recommended to me. He said that some languages are good for this or good for that, but Python incorporates a lot of the various features that makes it good for most things. They write code running under Linux on IP chassis. 
I think they use Python a lot. > I don't think of Forth as being a suitable choice of language for the > kind of systems I work with - but I do think it would be fun to work > with the kind of systems for which Forth is the best choice. However, I > suspect that is unlikely to happen in practice. (Many years ago, my > company looked at a potential project for which Atmel's Marc-4 > processors were a possibility, but that's the nearest I've come to Forth > at work.) So why can't you consider Forth for processors that aren't stack based? > I just think it's fun to work with different types of language - it > gives you a better understanding of programming in general, and new > ideas of different ways to handle tasks. I'm beyond "playing" in this stuff and I don't mean "playing" in a derogatory way, I mean I just want to get my work done. I'm all but retired and although some of my projects are not truly profit motivated, I want to get them done with a minimum of fuss. I look at the tools used to code in C on embedded systems and it scares me off really, especially the open source ones that require you to learn so much before you can become productive or even get the "hello world" program to work. That's why I haven't done anything with the rPi or the Beagle Boards. -- RickArticle: 155292
On Saturday, June 22, 2013 1:26:53 AM UTC-4, rickman wrote: > So why are you here exactly? I'm not saying you shouldn't be here or > that you shouldn't be saying what you are saying. But given how you > feel about Forth, I'm just curious why you want to have the conversation > you are having? Are you exploring your inner curmudgeon? Just venting at the industry, and procrastinating a bit (putting off the final verification of the processor). Apparently OT is my favorite subject as it seems I'm always busy derailing my own (and others) threads. That, and y'all have very interesting takes on these and various and sundry other things. Backing up a bit, it strikes me as a bit crazy to make a language based on the concept of a weird target processor. I mean, I get the portability thing, but at what cost? If my experience as a casual user (not programmer) of Java on my PC is any indication (data point of one, the plural of anecdote isn't data, etc.), the virtual stack-based processor paradigm has failed, as the constant updates, security issues, etc. pretty much forced me to uninstall it. And I would think that a language targeting a processor model that is radically different than the physically underlying one would be terribly inefficient unless the compiler can do hand stands while juggling spinning plates on fire - even if it is, god knows what it spits out. Canonical stack processors and their languages (Forth, Java, Postscript) at this point seem to be hanging by a legacy thread (even if every PC runs one peripherally at one time or another). I suspect that multiple independent equal bandwidth threads (as I strongly suspect the Propeller has, and my processor definitely has) is such a natural construct - it fully utilizes the HW pipeline by eliminating all hazards, bubbles, stalls, branch prediction, etc. and uses the interstage registering for data and control value storage - that it will come more into common usage as compilers better adapt to multi-cores and threads. Then again, the industry never met a billion transistor bizarro world processor it didn't absolutely love, so what do I know? I find it exceedingly odd that the PC industry is still using x86 _anything_ at this point. Apple showed us you can just dump your processor and switch horses in midstream pretty much whenever you feel like it (68k => PowerPC => x86) and not torch your product line / lose your customer base. I suppose having Intel and MS go belly up overnight is beyond the pale and at the root of why we can't have nice things. I remember buying my first 286, imagining all the wonderful projects it would enable, and then finding out what complete dogs the processor and OS were - it was quite disillusioning for the big boys to sell me a lump of shit like that (and for a lot more than 3 farthings).Article: 155293
Why is it that every "serious" processor is a ball of cruft with multiple bags on the side, some much worse than others, but none even a mother could love? Is it because overly complex compilers have made HW developers disillusioned / complacent? I honestly don't get how we got here from there. If it's anything, engineering is an exercise in complexity management, but processor design is somehow flying under the radar.Article: 155294
Eric Wallin wrote: > Why is it that every "serious" processor is a ball of cruft with multiple bags on the side, some much worse than others, but none even a mother could love? Is it because overly complex compilers have made HW developers disillusioned / complacent? 1) an adherence to the von Neumann concepts of what a processor should be 2) processing models that worked well given previous technology limits, but which don't scale to modern technology limits (e.g. processor/memory speed ratio, number of IC pins, cache coherence) 3) backwards compatibility (*the* dominant commercial consideration) > I honestly don't get how we got here from there. If it's anything, engineering is an exercise in complexity management, but processor design is somehow flying under the radar. 1) look at The Mill on comp.arch, for one beguiling possibility 2) message passing between processors/threads executing on multiprocessor machines with non-coherent memoriesArticle: 155295
Eric Wallin wrote: > On Saturday, June 22, 2013 1:26:53 AM UTC-4, rickman wrote: > >> So why are you here exactly? I'm not saying you shouldn't be here or >> that you shouldn't be saying what you are saying. But given how you >> feel about Forth, I'm just curious why you want to have the conversation >> you are having? Are you exploring your inner curmudgeon? > > Just venting at the industry, and procrastinating a bit (putting off the final verification of the processor). Apparently OT is my favorite subject as it seems I'm always busy derailing my own (and others) threads. That, and Y'all have very interesting takes on these and various and sundry other things. > > Backing up a bit, it strikes me as a bit crazy to make a > language based on the concept of a weird target processor. > I mean, I get the portability thing, but at what cost? > If my experience as a casual user (not programmer) of > Java on my PC is any indication (data point of one, > the plural of anecdote isn't data, etc.), the virtual stack-based > processor paradigm has failed, as the constant updates, security > issues, etc. pretty much forced me to uninstall it. None of those real problems are related to "virtual stack-based processor paradigm". > And I would think that a language targeting a processor model > that is radically different than the physically underlying one > would be terribly inefficient unless the compiler can do hand > stands while juggling spinning plates on fire - even if it is, > god knows what it spits out. That is just about what HotSpot does :) As L Peter Deutsch remarked when people had similar reservations about the first (Smalltalk) JIT in the mid 80s, if you can't tell the difference externally, the internals don't matter. 
Now for some fun: how emulated processors can be faster than native processors *even when both processors are the same* - take processor X executing CPU-intensive C benchmarks compiled with optimisation on, and measure speed S1 - write an emulator for processor X and run that emulator on processor X - run those C benchmarks in the emulator, see what the code is actually doing (as opposed to what the compiler dared not assume) - use that knowledge to "patch the optimised binaries" - run the patched binaries in the emulator, and measure speed S2 - note that S2 can be faster than S1 http://archive.arstechnica.com/reviews/1q00/dynamo/dynamo-1.html > Canonical stack processors and their languages (Forth, Java, Postscript) at this point seem to be hanging by a legacy thread (even if every PC runs one peripherally at one time or another). That's a bizarre assertion. > I find it exceedingly odd that the PC industry is still using x86 _anything_ at this point. backwards compatibility is the dominant commercial imperative: don't inconvenience your existing customers. (Windows 8? Tee hee) > Apple showed us you can just dump your processor and switch horses in midstream pretty much whenever you feel like it (68k => PowerPC => x86) and not torch your product line / lose your customer base. Only given preconditions that don't apply in the Wintel world.Article: 155296
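The Dynamo trick, optimising for what the running code actually does rather than what the compiler dared assume, can be caricatured in a few lines. The dispatcher below watches which operation is hot and patches in a specialised fast path once a threshold is crossed; the names and the threshold are invented for illustration and are not taken from the HP Dynamo work:

```python
# Hedged sketch of runtime specialisation: a generic dispatcher that
# observes its own behaviour and rewrites itself for the common case.

def make_dispatch(handlers):
    counts = {}
    state = {"fast": None}

    def dispatch(op, x):
        fast = state["fast"]
        if fast is not None and op == fast[0]:
            return fast[1](x)              # specialised path: no lookup
        counts[op] = counts.get(op, 0) + 1
        if counts[op] > 2:                 # op is "hot": patch in fast path
            state["fast"] = (op, handlers[op])
        return handlers[op](x)             # generic path: dictionary lookup
    return dispatch

dispatch = make_dispatch({"inc": lambda x: x + 1, "dbl": lambda x: 2 * x})
out = [dispatch("inc", n) for n in range(5)]   # "inc" becomes hot
print(out)  # [1, 2, 3, 4, 5]
```

A static compiler could not assume "inc" dominates; the running system can observe it, which is the essence of the S2 > S1 result.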
David Brown wrote: > I do mostly small-systems embedded programming, which is mostly in C. It used to include a lot more assembly, but that's quite rare now (though it is not uncommon to have to make little snippets in > assembly, or to study compiler-generated assembly), and perhaps in the future it will include more C++ (especially with C++11 features). I also do desktop and server programming, mostly in Python, > and I have done a bit of FPGA work (but not for a number of years). If C++ is the answer, I want to know what the question is. How many *years* does it take before the first commercial implementation of a C++ standard becomes available? Yes, I know partial implementations become available pretty quickly. Stroustrup's tome describing "this is what I meant by the various bits of C++" started out at ~400 pages and is now around 1300 pages. 'Nuff said. Don't forget that it is possible to get the compiler to emit the sequence of prime numbers during the (non-terminating) compilation process. The language designers didn't realise they had created such a monster until it was demonstrated to them! <http://en.wikibooks.org/wiki/C%2B%2B_Programming/Templates/Template_Meta-Programming#History_of_TMP> > I just think it's fun to work with different types of language - it gives you a better understanding of programming in general, and new ideas of different ways to handle tasks. Very true. Choose the right tool for the task at hand.Article: 155297
On 6/22/2013 12:26 PM, Eric Wallin wrote: > On Saturday, June 22, 2013 1:26:53 AM UTC-4, rickman wrote: > >> So why are you here exactly? I'm not saying you shouldn't be here or >> that you shouldn't be saying what you are saying. But given how you >> feel about Forth, I'm just curious why you want to have the conversation >> you are having? Are you exploring your inner curmudgeon? > > Just venting at the industry, and procrastinating a bit (putting off the final verification of the processor). Apparently OT is my favorite subject as it seems I'm always busy derailing my own (and others) threads. That, and Y'all have very interesting takes on these and various and sundry other things. I am cc'ing this to the forth group in case anyone there cares to join in. I'm still a novice at the language so I can only give you my take on things. > Backing up a bit, it strikes me as a bit crazy to make a language based on the concept of a weird target processor. I mean, I get the portability thing, but at what cost? If my experience as a casual user (not programmer) of Java on my PC is any indication (data point of one, the plural of anecdote isn't data, etc.), the virtual stack-based processor paradigm has failed, as the constant updates, security issues, etc. pretty much forced me to uninstall it. And I would think that a language targeting a processor model that is radically different than the physically underlying one would be terribly inefficient unless the compiler can do hand stands while juggling spinning plates on fire - even if it is, god knows what it spits out. Canonical stack processors and their languages (Forth, Java, Postscript) at this point seem to be hanging by a legacy thread (even if every PC runs one peripherally at one time or another). Off the topic at hand, here is one of thunderbird's many issues as a news reader. 
It displays messages just fine in the reading window, but in the edit window all of your quoted paragraphs show as single lines going far off the right side of the screen. I have to switch back and forth to read the text I am replying to! Back to the discussion... By weird target processor you mean the virtual machine? That is because it is a very simple model. It does seem odd that such a model would be adopted, but the use of the stack makes for a very simple parameter passing method supported by very simple language features. There is no need for syntax other than spaces. That is *very* powerful and allows the tool to be kept very small. Chuck Moore is all about simplicity and this is how he got this level of simplicity in the language. > I suspect that multiple independent equal bandwidth threads (as I strongly suspect the Propeller has, and my processor definitely has) is such a natural construct - it fully utilizes the HW pipeline by eliminating all hazards, bubbles, stalls, branch prediction, etc. and uses the interstage registering for data and control value storage - that it will come more into common usage as compilers better adapt to multi-cores and threads. Then again, the industry never met a billion transistor bizarro world processor it didn't absolutely love, so what do I know? So what clock speeds does your processor achieve? It is an interesting idea to pipeline everything and then treat the one processor as N processors running in parallel. I think you have mentioned that here before and I seem to recall taking a quick look at the idea some time back. It fits well with many of the features available in FPGAs and likely would do ok in an ASIC. I just would not have much need for it in most of the things I am looking at doing. Rather than N totally independent processors, have you considered using pipelining to implement SIMD? This could get around some of the difficulties in the N wide processor like memory bandwidth. 
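The virtual stack machine described above, where all parameters pass on one data stack and whitespace is the entire grammar, can be sketched in a few lines. This is a toy for illustration, not any real Forth system; the word set is deliberately tiny:

```python
# Hedged sketch of a Forth-style virtual machine: one data stack,
# a dictionary of words, and nothing but spaces for syntax.
def forth(source):
    stack = []
    words = {
        "+":    lambda: stack.append(stack.pop() + stack.pop()),
        "*":    lambda: stack.append(stack.pop() * stack.pop()),
        "dup":  lambda: stack.append(stack[-1]),
        "drop": lambda: stack.pop(),
    }
    for token in source.split():      # whitespace is the whole grammar
        if token in words:
            words[token]()            # execute a defined word
        else:
            stack.append(int(token))  # anything else is a number literal
    return stack

print(forth("2 3 + dup *"))  # [25]
```

Note how little machinery is needed: no parser, no precedence, no declarations, which is exactly the simplicity argument being made for Chuck Moore's design.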
> I find it exceedingly odd that the PC industry is still using x86 _anything_ at this point. Apple showed us you can just dump your processor and switch horses in midstream pretty much whenever you feel like it (68k => PowerPC => x86) and not torch your product line / lose your customer base. I suppose having Intel and MS go belly up overnight is beyond the pale and at the root of why we can't have nice things. I remember buying my first 286, imagining all the wonderful projects it would enable, and then finding out what complete dogs the processor and OS were - it was quite disillusioning for the big boys to sell me a lump of shit like that (and for a lot more than 3 farthings). You know why the x86 is still in use. It is not really that bad in relation to the other architectures when measured objectively. It may not be the best, but there is a large investment, mostly by Intel. If Intel doesn't change, why would anyone else? But that is being eroded by the ARM processors in the handheld market. We'll see if Intel can continue to adapt the x86 to low power and maintain a low cost. I don't think MS is propping up the x86. They offer a version of Windows for the ARM, don't they? As you say, there is a bit of processor specific code but the vast bulk of it is just a matter of saying ARMxyz rather than X86xyz. Developers are another matter. Not many want to support yet another target, period. If the market opens up for Windows on ARM devices then that can change. In the mean time it will be business as usual for desktop computing. -- RickArticle: 155298
On Saturday, June 22, 2013 2:08:20 PM UTC-4, rickman wrote: > So what clock speeds does your processor achieve? It is an interesting idea to pipeline everything and then treat the one processor as N processors running in parallel. I think you have mentioned that here before and I seem to recall taking a quick look at the idea some time back. It fits well with many of the features available in FPGAs and likely would do ok in an ASIC. I just would not have much need for it in most of the things I am looking at doing. The core will do ~200 MHz in the smallest Cyclone 3 or 4 speed grade 8 (the cheapest and slowest). It looks to the outside world like 8 independent processors (threads) running at 25 MHz, each with its own independent interrupt. Internally each thread has 4 private general purpose stacks that are each 32 entries deep, but all threads fully share main memory (combined instruction/data). > Rather than N totally independent processors, have you considered using pipelining to implement SIMD? This could get around some of the difficulties in the N wide processor like memory bandwidth. I haven't given this very much thought. But different cores could simultaneously work on different byte fields in a word in main memory so I'm not sure HW SIMD support is all that necessary. This is just a small FPGA core, not an x86 killer. Though it beats me why more muscular processors don't employ these simple techniques.Article: 155299
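The barrel-processor scheme Eric describes, one pipeline time-sliced among N thread contexts in strict rotation so no thread ever has two instructions in flight at once, can be sketched as a round-robin scheduler. Everything here (the instruction format, the single "add" operation, per-thread accumulators) is invented for illustration and is not Eric's actual design:

```python
# Hedged sketch of a barrel processor: N thread contexts share one
# execution pipeline, selected round-robin each cycle.  Because a
# thread only issues every Nth cycle, its previous instruction has
# retired before the next one enters the pipe - no hazards to interlock.

def barrel_run(programs, cycles):
    """programs: one instruction list per thread; each thread gets a
    private accumulator (the threads could also share main memory)."""
    n = len(programs)
    pc = [0] * n                      # per-thread program counters
    acc = [0] * n                     # per-thread accumulators
    trace = []                        # which thread issued each cycle
    for cycle in range(cycles):
        t = cycle % n                 # strict round-robin thread select
        if pc[t] < len(programs[t]):
            op, arg = programs[t][pc[t]]
            if op == "add":
                acc[t] += arg
            pc[t] += 1
            trace.append(t)
    return acc, trace

acc, trace = barrel_run([[("add", 1)] * 4, [("add", 10)] * 4], cycles=8)
print(acc)    # [4, 40]
print(trace)  # [0, 1, 0, 1, 0, 1, 0, 1]
```

With 8 contexts and a 200 MHz pipeline, each thread sees a hazard-free 25 MHz machine, which matches the arithmetic in the post.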
Eric Wallin wrote: > The core will do ~200 MHz in the smallest Cyclone 3 or 4 speed grade > 8 (the cheapest and slowest). It looks to the outside world like > 8 independent processors (threads) running at 25 MHz, each with > its own independent interrupt. Internally each thread has 4 > private general purpose stacks that are each 32 entries deep, > but all threads fully share main memory (combined instruction/data). Have you defined what happens when one processor writes to a memory location that is being read by another processor? In other words, what primitives do you provide that allow one processor to reliably communicate with another?
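For concreteness, the kind of primitive the question asks about might be a publish flag: the writer stores its data, then sets the flag; the reader waits on the flag before loading. In this Python sketch the `threading.Event` supplies the ordering guarantee; on real shared-memory hardware the analogous guarantee has to come from the memory model itself (e.g. release/acquire semantics on the flag), which is exactly what the question is probing:

```python
# Hedged sketch of a write-then-publish handshake between two
# "processors" sharing memory.  The Event plays the role of an
# atomic flag with release/acquire ordering.
import threading

data = []
ready = threading.Event()

def writer():
    data.append(42)   # write the payload first...
    ready.set()       # ...then publish: the flag acts as the release

def reader(out):
    ready.wait()      # acquire: do not read until the flag is observed
    out.append(data[0])

out = []
t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader, args=(out,))
t2.start(); t1.start()
t1.join(); t2.join()
print(out)  # [42]
```

Without some such defined primitive, a reader on another core polling a plain shared word could legally observe the flag before the payload, which is why the "have you defined what happens" question matters.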