On Mar 5, 3:54 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
> Quadibloc wrote:
> > On Feb 22, 3:53 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
> >> PL/I can be, but doesn't have to be.  If the arguments of a procedure
> >> match the parameters, only the argument address (and possibly a
> >> descriptor address for strings, structures, and arrays) is passed.
> >
> > Doesn't PL/I (or, rather, normal implementations thereof) support
> > separate compilation of subroutines, just like FORTRAN and COBOL?
>
> Yes, but the calling sequence among PL/I programs passes information on
> string lengths, array bounds,

which is why the "bloat" is unavoidable in PL/I, not unnecessary as you
had previously claimed, and as I was trying to contradict. Yes, I'm
well aware PL/I does not use the standard S-type calling sequence.

John Savard

Article: 146126
In comp.arch.fpga Quadibloc <jsavard@ecn.ab.ca> wrote:
(snip)

>> [pfeiffer@snowball ~/temp]# ./awry
>> a[2] at 0xbfff97b8
>> 2[a] at 0xbfff97b8
>> (a+2) is 0xbfff97b8
>> (2+a) is 0xbfff97b8

> The 2[a] syntax actually *works* in C the way it was described? I am
> astonished. I would expect it to yield the contents of the memory
> location a+&2 assuming that &2 can be persuaded to yield up the
> location where the value of the constant "2" is stored.

> Evidently there is some discrepancy between C and FORTRAN.

I believe the standard requires it.  The subscript operator, [],
is defined such that a[b] is equivalent to *(a+b).

    a[b] is *(a+b) is *(b+a) is b[a]

It gets even more interesting with more subscripts, but still works.

-- glen

Article: 146127
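For anyone who wants to see the equivalence first-hand, here is a minimal
standalone C program in the same spirit as the ./awry run quoted above (the
original awry.c was not posted here, so this is only an illustrative sketch):

#include <stdio.h>

int main(void)
{
    int a[4] = {10, 20, 30, 40};

    /* a[b] is defined as *(a + b); pointer/integer addition commutes,
     * so 2[a] is *(2 + a), the very same object as a[2]. */
    printf("a[2]   = %d at %p\n", a[2],   (void *)&a[2]);
    printf("2[a]   = %d at %p\n", 2[a],   (void *)&2[a]);
    printf("*(a+2) = %d at %p\n", *(a+2), (void *)(a+2));
    printf("*(2+a) = %d at %p\n", *(2+a), (void *)(2+a));
    return 0;
}

All four lines print the same value at the same address, which is exactly the
point made above: a[b], *(a+b), *(b+a) and b[a] all name the same element.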
Quadibloc wrote:
> On Mar 5, 3:54 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
>> Quadibloc wrote:
>>> On Feb 22, 3:53 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
>>>> PL/I can be, but doesn't have to be.  If the arguments of a procedure
>>>> match the parameters, only the argument address (and possibly a
>>>> descriptor address for strings, structures, and arrays) is passed.
>>> Doesn't PL/I (or, rather, normal implementations thereof) support
>>> separate compilation of subroutines, just like FORTRAN and COBOL?
>> Yes, but the calling sequence among PL/I programs passes information on
>> string lengths, array bounds,
>
> which is why the "bloat" is unavoidable in PL/I, not unnecessary as you
> had previously claimed, and as I was trying to contradict. Yes, I'm
> well aware PL/I does not use the standard S-type calling sequence.

Yes, but is it bloat?  When I've converted C functions that work with
arrays, most of them need to be passed the [address of the] array and
the upper bound.  The first thing I do is get rid of the latter, since
it's handled automatically, but it's necessary one way or the other, so
why not have the language do it?

Article: 146128
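The C pattern being described looks something like the sketch below
(illustrative only, not code from the thread): the bound travels as a
separate argument, and keeping it consistent with the array is entirely the
caller's responsibility.

#include <stddef.h>

/* The caller must pass the bound alongside the array; nothing in the
 * language checks that the two actually agree. */
double sum(const double *a, size_t upper_bound)
{
    double total = 0.0;
    for (size_t i = 0; i < upper_bound; i++)
        total += a[i];
    return total;
}

/* typical call site:
 *   double my_array[100];
 *   ...
 *   total = sum(my_array, sizeof my_array / sizeof my_array[0]);
 */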
Our Prog3 cable http://www.enterpoint.co.uk/programming_solutions/prog3.html
is just starting to ship and will do anything a Xilinx USB cable will do.
We are awaiting some more cases before shipping more, and you can buy this
programmer with our Polmaddie1 CPLD board as a dongle for about US $140, or
a bit less for academic use.  We are hoping to have the cases in the next
couple of weeks.

John Adair
Enterpoint Ltd.

On 4 Mar, 20:24, Jason Thibodeau <jason.p.thibod...@gmail.com> wrote:
> Hello,
>
> I have a Spartan 3 Starter board. I used to program it with my parallel
> JTAG cable, but I now do my implementations on a laptop without a
> parallel port. I am in the market for a USB JTAG cable.
>
> Requirements: Xilinx ISE 11.1 running on Fedora 12.
>
> Any suggestions? Will the digilent cable suffice?
>
> Thanks in advance.
> --
> Jason Thibodeau
> www.jayt.org

Article: 146129
Peter Flass wrote:
> Quadibloc wrote:
>> On Mar 5, 3:54 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
>>> Quadibloc wrote:
>>>> On Feb 22, 3:53 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
>>>>> PL/I can be, but doesn't have to be.  If the arguments of a procedure
>>>>> match the parameters, only the argument address (and possibly a
>>>>> descriptor address for strings, structures, and arrays) is passed.
>>>> Doesn't PL/I (or, rather, normal implementations thereof) support
>>>> separate compilation of subroutines, just like FORTRAN and COBOL?
>>> Yes, but the calling sequence among PL/I programs passes information on
>>> string lengths, array bounds,
>>
>> which is why the "bloat" is unavoidable in PL/I, not unnecessary as you
>> had previously claimed, and as I was trying to contradict. Yes, I'm
>> well aware PL/I does not use the standard S-type calling sequence.
>
> Yes, but is it bloat?  When I've converted C functions that work with
> arrays, most of them need to be passed the [address of the] array and
> the upper bound.  The first thing I do is get rid of the latter, since
> it's handled automatically, but it's necessary one way or the other, so
> why not have the language do it?

Sorry to reply to my own post, but I just thought of something I should
have said.  If you want to eliminate all the "bloat" and emulate C
directly in PL/I, it's possible.

DECLARE my_array (0:some_upper_bound-1) ...attributes... ;
  /* By default, PL/I arrays start with subscript one,
   * to exactly duplicate C change the lower bound to zero.
   * Also, C arrays are declared with [number_of_elements] rather
   * than highest subscript.  Since the lower bound is zero
   * the number of elements is always one greater than the highest
   * subscript, which is always a source of confusion
   * for new C programmers. */

DECLARE my_C_func ENTRY(POINTER, FIXED BINARY) OPTIONS(..options...);
  /* The "OPTIONS" keyword would indicate that this function is
   * a C or C-like function.  For IBM this is "BYVALUE LINKAGE(SYSTEM)". */

CALL my_C_func( addr(my_array), some_upper_bound );
  /* This calls the function exactly as in C, passing the address
   * of the array and the upper bound by value */

Now try to replicate PL/I calls in C.

Article: 146130
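Going the other way, replicating a PL/I call in C, means building and passing
the descriptor by hand.  A rough sketch of the idea (the struct name, fields
and layout here are invented for illustration; real PL/I descriptor formats
are implementation-defined):

#include <stddef.h>

/* Hypothetical stand-in for what a PL/I compiler passes automatically:
 * the data address plus the bounds/length information. */
struct array_descriptor {
    void   *base;          /* address of the first element */
    size_t  element_size;  /* size of one element in bytes */
    long    lower_bound;
    long    upper_bound;
};

void my_pl1_like_func(const struct array_descriptor *desc)
{
    long n = desc->upper_bound - desc->lower_bound + 1;
    /* the callee can recover the bounds, but only because the caller
     * filled the descriptor in by hand at every call site */
    (void)n;
}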
The starting point for this is storage for data and the bandwidth to access
it.  For those reasons I would suggest our Drigmorn3 as a starting point,
with 1Gbit of DDR3.  The advantage of the DDR3 is density and speed.  For the
manipulation there will be a big difference between possible
control/processing engines, and it is difficult to be accurate about what
size is best.  The LX16 on the Drigmorn3 is a reasonable size, but if you
think that is not enough there will be another product shortly with a bigger
part.

John Adair
Enterpoint Ltd. - Home of Merrick1. The HPC Solutions.

On 5 Mar, 15:00, "Maurice Branson" <trauben...@arcor.de> wrote:
> Hi folks,
>
> I have been introduced to a project where I have to implement a TFT display
> controller on a low-cost FPGA like Spartan-3 or Spartan-6. The controller
> has to support at least XGA - 1024x768, SXGA - 1280x1024, UXGA - 1600x1200
> and WUXGA 1920x1200. I have to choose an appropriate platform now. As I am a
> total newbie in the field of image processing on FPGAs I would ask you for
> your recommendation about size and type of FPGA, especially in terms of
> logic for video scaling and SRAM memories for frame buffer storage.
>
> Many thanks in advance.
>
> KR, Maurice

Article: 146131
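For a rough sense of the storage and bandwidth numbers (my own
back-of-the-envelope figures, not from the thread): a WUXGA frame at an
assumed 24 bits per pixel is about 55 Mbit, so a 1Gbit DDR3 part holds
roughly 18 frames, while refreshing the panel at 60 Hz needs on the order of
3.3 Gbit/s of read bandwidth before any scaling traffic is added.

#include <stdio.h>

int main(void)
{
    /* assumed figures for illustration: 24 bpp, 60 Hz refresh */
    const long long width = 1920, height = 1200;
    const long long bits_per_pixel = 24, refresh_hz = 60;

    long long frame_bits  = width * height * bits_per_pixel;
    long long refresh_bps = frame_bits * refresh_hz;

    printf("frame buffer          : %.1f Mbit\n", frame_bits / 1e6);
    printf("refresh read bandwidth: %.2f Gbit/s\n", refresh_bps / 1e9);
    printf("frames in 1Gbit DDR3  : %.1f\n", 1e9 / frame_bits);
    return 0;
}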
We have a 1Gbps PHY module coming that will fit most of our range.  It will
fit most of our development board range
http://www.enterpoint.co.uk/boardproducts.html.  We also have a Virtex-6
board coming that will support two 1Gbps ports natively.

John Adair
Enterpoint Ltd. - Home of Drigmorn3. The Spartan-6 Starter Board.

On 4 Mar, 13:04, Kastil Jan <ikas...@stud.fit.vutbr.cz> wrote:
> Hi folks,
> I would like to ask you for a recommendation of an ethernet development kit
> with FPGA (much preferably Xilinx's one). Our requirements are low
> power, as big an FPGA as possible and at least 3 ethernet ports at 1Gbps.
>
> I am not sure if there is currently such a kit being distributed, because I
> was not able to find it by google.
>
> Please, could you send information about kits that could meet the
> requirements? (or are close to them?)
>
> Thank you very much
>
> Jan

Article: 146132
I7 laptops, if you can get the mobile versions, are best, but do watch the
battery lifetime.  Most software isn't using the multiple cores, so the next
best, Core2 Duo (T9800) based, can be good too.  Have a look at a HP 8730w
for one based on the T9800.  This can go 3-4hrs, and double that with an
extension battery they have that clips on.  That's serious computing on
battery for most of a normal man's working day.  There are quad core ones
too, but I don't think the extra money is worth spending.  Better to spend
money on a good SSD drive, which is great for making things go faster too if
you can get the right one.

HP are going to release a 8740w at some point in time.  Hopefully soon, and
that is I7 based from what little is in the public domain.

HP are still offering XP downgrades last time I looked, if you want to use
Windows as the OS.  That is one of the reasons I use them.  Dell do the same.

John Adair
Enterpoint Ltd.

On 3 Mar, 14:48, "Pete Fraser" <pfra...@covad.net> wrote:
> I'm going to be travelling soon, and will continue to
> do FPGA design from the road. I'll need to get a
> new laptop for this.
>
> Any thoughts?
> I think something based on the Core i7-620M might
> be fast enough and low power, but they seem rare.
> Looks like I'll probably end up with something with
> a Core i7-720QM or a Core i7-820QM.
> Anybody here have any experience with one of these
> machines? Is there another processor I should be looking at?
>
> The obvious OS with a new machine would be Windows 7,
> 64-bit, but I'm not sure my software will run on that.
> I'm running ISE Foundation 10.1 (and don't plan on
> upgrading quite yet). I also use Modelsim XE, but will
> be upgrading to Modelsim PE or Aldec.
>
> It's not clear what software runs on what OS. It seems
> that I might be safer with 32-bit XP for the Modelsim
> and the Xilinx software. Windows 7 Professional
> seems to have a downgrade option to XP. Does that
> mean I choose to install one or the other OS, or can
> I install both and switch between them? 7 Pro seems
> to have some sort of XP mode. Will that work for these
> tools? Is there a performance penalty over a real XP
> installation? Can I emulate XP 32-bit under W7 64-bit?
>
> Thanks for your thoughts and suggestions.
>
> Pete

Article: 146133
On Mar 5, 5:34 am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
>
> Am I the only one that makes *no* use of the various "project things"
> (either in Modelsim or Aldec)?  I just have a makefile and use the GUI
> to run the sim (from "their" command-line) and show me the waveforms.
> I guess I don't like to be tied to a tool (as much as I can manage)
> much as I don't like to be tied to a particular silicon vendor (as
> much as I can manage :)
>

But you're also running *their* commands to compile, run and view so
you're not really any more independent.  Maintaining make files can be a
chore also, unless you use something to help you manage it...but then
you're now dependent on that tool as well.

> Am I missing something valuable, or is it just different?
>

Probably depends on which scenario is more likely to occur
1. Change sim tools
2. Add new developers (temporary, or because you move on to something
   else in the company)

If #1 is prevalent, then maybe using other tools to help you manage
'make' is better.  If #2 is more prevalent, then using the tool's
project system is probably better in easing the transition.  If neither
is particularly likely...well...then it probably doesn't much matter
since one can probably be just as productive with various approaches.

Kevin Jennings

Article: 146134
On Mar 5, 4:53 pm, Andy Peters <goo...@latke.net> wrote:
>
> It turns out that it is reasonable to create one workspace for an FPGA
> project and within this workspace create a "design" for the
> subentities and the top level. If you let it use the design name as
> the working library for the design, then as long as you "use" the
> library in a higher-level source, that source can see those other
> libraries.

Why do you think that you need to segregate the library that the
source files get compiled into?  In other words, what is wrong with
compiling everything into 'work'?  That's not a source file, it's an
intermediate folder(s) that gets created along the way to doing what
you need to have done.  What do you gain by trying to have tidy
intermediate folders?

Having a separate library helps you avoid name clashes, but for things
that you're developing yourself this is more easily avoided by
considering some of the following points:
- Question the validity of why you have two things named the same
  (presumably doing the same thing)
- Consider parameterizing the design instead so that there is only one
  thing with that name, but now you have a parameter that can select
  what needs to be different.
- If the differences between the designs are significant such that
  parameterizing simply encapsulates each approach within big 'if xyz
  generate...end generate', 'if abc generate...end generate' statements,
  then consider collecting each design as simply differently named
  architectures of the same entity (i.e. 'architecture RTL1 of widget',
  'architecture RTL2 of widget')
- Avoid the name clash by renaming one of them to be more descriptive
  to distinguish it from its sibling

> Now I'm thinking that the usual method of doing:
>
>     u_foo : entity work.foo port map (bar => bar, bletch => bletch);
>
> might be better as:
>
>     u_foo : entity foo.foo port map (bar => bar, bletch => bletch);
>
> The other option is to create a package with a component definition
> for foo, and analyze that package into the foo library, so the
> instantiation can be:
>
>     u_foo : foo port map (bar => bar, bletch => bletch);
>
> I really don't know which is "better."

Neither one is particularly good in my opinion.  The reasons against
the first approach I've mentioned above (i.e. what do you really get
for not simply compiling everything into 'work'?).  The only place I've
found a component declaration to be useful is when you would like to
use a configuration to swap things out and about.  The only time I've
found configurations to be useful really is when the VHDL source is not
really under my control (such as when a PCBA model is generated by a
CAD tool).  With a component declaration, you still have to decide
where to put that declaration.  The best place is in the source file
with the entity so that changes to one are more likely to get changed
in both places.  Given that, I don't see how components will help you
manage anything better....my two or three cents

Kevin Jennings

Article: 146135
On Mar 6, 1:59 am, whygee <y...@yg.yg> wrote:
> Peter Alfke wrote:
> > I did repeatedly offer my services as a consultant, even without pay,
> > but there are no takers.
>
> Is it because of "budget restrictions" or because...
> you're outside the company's loop now ?
> are you an "outcast" already ?
>
> > Peter A.
>
> yg
> --
> http://ygdes.com/
> http://yasep.org

All of it...  But don't feel sorry for me, I am fine.  Feel sorry for a
company that is too hung up to take advantage of an available resource.

Peter A

Article: 146136
In article <20100305171635.e538ef18.steveo@eircom.net>,
 Ahem A Rivet's Shot <steveo@eircom.net> wrote:

> On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
> Quadibloc <jsavard@ecn.ab.ca> wrote:
>
> > On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> >
> > >         No, he's saying that C doesn't really implement an array type,
> > > the var[offset] syntax is just syntactic sugar for *(var + offset)
> > > which is why things like 3[x] work the same as x[3] in C.
> >
> > Um, no.
> >
> > x = y + 3 ;
> >
> > in a C program will _not_ store in x the value of y plus the contents
> > of memory location 3.
>
> No but x = *(y + 3) will store in x the contents of the memory
> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
> which is what I stated. You missed out the all important * and ()s.

No, that will compare x and the right val.

= is a comparasion operator in c.

--
A computer without Microsoft is like a chocolate cake without mustard.

Article: 146137
On Mar 6, 8:10 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
> Peter Alfke <al...@sbcglobal.net> wrote:
> (snip)
>
> > Peter Alfke, formerly Xilinx Applications (some of you may remember me)
>
> and also notice that you don't post as often as before.
>
> I was not so long ago thinking of asking:
>
>    There is picoblaze (8 bit), and microblaze (32 bit), but no
>    nanoblaze (16 bit) or milliblaze (64 bit).  It might even
>    be interesting to have a femtoblaze (4 bit) processor.

There is a newish 4 bit core here, that could be a template?
GC49C50x series
http://www.coreriver.co.kr/product-lines/top_corerivermcu.html

Atom summary:
# CPU
- 4-bit reduced 8051 architecture
- Continuous program addressing, not paged.
- 51 instructions including push, pop and logic inst.
- Instruction cycle : fSYS/6
- Multi-level subroutine nesting with RAM based stack.
# On-chip Memories :
- FLASH : 1024 Bytes (including EEPROM : 128 Bytes )
- RAM : 64 nibbles (including stack)

It could also be timely for someone to target the new QuadSPI Flash to
an FPGA core.  Code memory is always the Achilles heel of a Soft-CPU.

-jg

Article: 146138
On Mar 6, 10:55 pm, -jg <jim.granvi...@gmail.com> wrote:
> On Mar 6, 8:10 am, glen herrmannsfeldt <g...@ugcs.caltech.edu> wrote:
>
> > Peter Alfke <al...@sbcglobal.net> wrote:
> > (snip)
>
> > > Peter Alfke, formerly Xilinx Applications (some of you may remember me)
>
> > and also notice that you don't post as often as before.
>
> > I was not so long ago thinking of asking:
>
> >    There is picoblaze (8 bit), and microblaze (32 bit), but no
> >    nanoblaze (16 bit) or milliblaze (64 bit).  It might even
> >    be interesting to have a femtoblaze (4 bit) processor.
>
> There is a newish 4 bit core here, that could be a template?
> GC49C50x series
> http://www.coreriver.co.kr/product-lines/top_corerivermcu.html
>
> Atom summary:
> # CPU
> - 4-bit reduced 8051 architecture
> - Continuous program addressing, not paged.
> - 51 instructions including push, pop and logic inst.
> - Instruction cycle : fSYS/6
> - Multi-level subroutine nesting with RAM based stack.
> # On-chip Memories :
> - FLASH : 1024 Bytes (including EEPROM : 128 Bytes )
> - RAM : 64 nibbles (including stack)
>
> It could also be timely for someone to target the new QuadSPI Flash to
> an FPGA core.  Code memory is always the Achilles heel of a Soft-CPU.
>
> -jg

Jim,

1) the 4-bit "Atom" isn't so new ;)
2) NanoBlaze is a registered trademark of Xilinx

hm... one of my softcores has been pushed into usable status.  It's a
small core that is optimized to run from one single block RAM and to
NOT use distributed RAM, so it is very small in all vendors' FPGAs.  It
does have a compromise, 1 instruction takes 4 clocks, but then it has a
windowed register file and no-overhead context switching.  I do
consider it much more interesting than ATOM, at least as far as soft
cores for FPGAs go.

Antti

Article: 146139
In article <proto-18F31F.15035106032010@news.panix.com>,
Walter Bushell <proto@panix.com> wrote:

>In article <20100305171635.e538ef18.steveo@eircom.net>,
> Ahem A Rivet's Shot <steveo@eircom.net> wrote:
>
>> On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
>> Quadibloc <jsavard@ecn.ab.ca> wrote:
>>
>> > On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>> >
>> > >         No, he's saying that C doesn't really implement an array type,
>> > > the var[offset] syntax is just syntactic sugar for *(var + offset)
>> > > which is why things like 3[x] work the same as x[3] in C.
>> >
>> > Um, no.
>> >
>> > x = y + 3 ;
>> >
>> > in a C program will _not_ store in x the value of y plus the contents
>> > of memory location 3.
>>
>> No but x = *(y + 3) will store in x the contents of the memory
>> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
>> which is what I stated. You missed out the all important * and ()s.
>
> No, that will compare x and the right val.
>
> = is a comparasion operator in c.

No it isn't.  '=' is assignment; '==' is comparison.

Article: 146140
On Mar 6, 11:52 am, "HT-Lab" <han...@ht-lab.com> wrote:
> "Adam Górski" <totutousungors...@malpawp.pl> wrote in message
>
> Just as a warning a number of Flexlm based software is blocking remote desktop
> so you won't be able to run a node-locked license using remote desktop. VNC and
> others work fine,
>

Good old telnet is my preferred solution to this particular annoyance.
But I only work over LAN.  If I'd ever want to work over the Internet
I'd probably tunnel the telnet over SSH.

Article: 146141
On Mar 6, 5:37 pm, John Adair <g...@enterpoint.co.uk> wrote:
> I7 laptops if you can get the mobile versions are best but do watch
> the battery lifetime. Most software isn't using the multiple cores so
> the next best Core2 duo (T9800) based can be good too.

Why not T9900?  At single FPGA compilation it should easily beat any 35W
member of the core-i7 family, including the i7-620M.  FPGA tools love
on-chip cache above anything else.  In fact, it's possible that even a
T9600 is faster than an i7-620M.  An i7-820QM would be faster yet, but
at 45W TDP you will likely find it only in special heavyweight models.

Of course, if you often find yourself compiling several variants in
parallel, stick with i7/i5, since in that scenario the core2duo is
pretty weak.

> Have a look at
> a HP 8730w for one based on T9800. This can go 3-4hrs and double that
> with an extension battery they have that clips on. That's serious
> computing on battery for most of a normal man's working day. There are
> quad core ones too but I don't think the extra money is worth
> spending. Better to spend money on a good SSD drive which are great
> for making things go faster too if you can get the right one.
>
> HP are going to release a 8740w at some point in time. Hopefully soon
> and that is I7 based from what little is in the public domain.
>
> HP are still offering XP downgrades last time I looked if you want to
> use Windows as OS. That is one of the reasons I use them. Dell do the
> same.

Do they offer XP64 drivers?  XP32 is sufficient for 98% of today's
FPGAs, but the upper 2% (the biggest Stratix-IV devices, for example)
require 64-bit tools.

> John Adair
> Enterpoint Ltd.

Article: 146142
Antti <antti.lukats@googlemail.com> wrote:
(snip)

> 1) the 4-bit "Atom" isn't so new ;)
> 2) NanoBlaze is a registered trademark of Xilinx

A quick search of uspto.gov didn't find nanoblaze, but, following the
pattern, the 4 bit processor would be femtoblaze, which also doesn't
seem to be registered.

-- glen

Article: 146143
Walter Bushell <proto@panix.com> writes:

> In article <20100305171635.e538ef18.steveo@eircom.net>,
> Ahem A Rivet's Shot <steveo@eircom.net> wrote:
>
> > On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
> > Quadibloc <jsavard@ecn.ab.ca> wrote:
> >
> > > On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> > >
> > > >         No, he's saying that C doesn't really implement an array type,
> > > > the var[offset] syntax is just syntactic sugar for *(var + offset)
> > > > which is why things like 3[x] work the same as x[3] in C.
> > >
> > > Um, no.
> > >
> > > x = y + 3 ;
> > >
> > > in a C program will _not_ store in x the value of y plus the contents
> > > of memory location 3.
> >
> > No but x = *(y + 3) will store in x the contents of the memory
> > location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
> > which is what I stated. You missed out the all important * and ()s.
>
> No, that will compare x and the right val.
>
> = is a comparasion operator in c.

I think you've got them mixed up.  = is the assignment operator, == is
the comparison operator.

--
Patrick

Article: 146144

Walter Bushell <proto@panix.com> writes:

> In article <20100305171635.e538ef18.steveo@eircom.net>,
> Ahem A Rivet's Shot <steveo@eircom.net> wrote:
>
>> On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
>> Quadibloc <jsavard@ecn.ab.ca> wrote:
>>
>> > On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>> >
>> > >         No, he's saying that C doesn't really implement an array type,
>> > > the var[offset] syntax is just syntactic sugar for *(var + offset)
>> > > which is why things like 3[x] work the same as x[3] in C.
>> >
>> > Um, no.
>> >
>> > x = y + 3 ;
>> >
>> > in a C program will _not_ store in x the value of y plus the contents
>> > of memory location 3.
>>
>> No but x = *(y + 3) will store in x the contents of the memory
>> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
>> which is what I stated. You missed out the all important * and ()s.
>
> No, that will compare x and the right val.
>
> = is a comparasion operator in c.

Umm..... no.  It's not.  It's the assignment operator.

--
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously.  (Benjamin Franklin)

Article: 146145

Walter Bushell wrote:
> In article <20100305171635.e538ef18.steveo@eircom.net>,
> Ahem A Rivet's Shot <steveo@eircom.net> wrote:
>
>> On Fri, 5 Mar 2010 09:07:31 -0800 (PST)
>> Quadibloc <jsavard@ecn.ab.ca> wrote:
>>
>>> On Feb 26, 4:56 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>>>
>>>>         No, he's saying that C doesn't really implement an array type,
>>>> the var[offset] syntax is just syntactic sugar for *(var + offset)
>>>> which is why things like 3[x] work the same as x[3] in C.
>>> Um, no.
>>>
>>> x = y + 3 ;
>>>
>>> in a C program will _not_ store in x the value of y plus the contents
>>> of memory location 3.
>> No but x = *(y + 3) will store in x the contents of the memory
>> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
>> which is what I stated. You missed out the all important * and ()s.
>
> No, that will compare x and the right val.
>
> = is a comparasion operator in c.
>

'=' is assignment, '==' is comparison.

Article: 146146
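To spell out the distinction these replies are making with a compilable
fragment (my own illustration, not code from the thread): in C, = assigns and
== compares, and both are expressions with values, which is exactly why
mixing them up still compiles.

#include <stdio.h>

int main(void)
{
    int y[5] = {0, 1, 2, 3, 99};
    int x;

    x = *(y + 3);      /* assignment: x now holds y[3], i.e. 3 */
    printf("x = %d\n", x);

    if (x == y[3])     /* comparison: evaluates to 1 (true) here */
        printf("x == y[3]\n");

    if (x = y[4]) {    /* legal, but almost certainly a bug: this */
        printf("x is now %d\n", x);  /* assigns 99 to x, then tests it */
    }
    return 0;
}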
On Mar 7, 10:06 am, Antti <antti.luk...@googlemail.com> wrote:
>
> Jim,
>
> 1) the 4-bit "Atom" isn't so new ;)

It is, relative to an 80C51 ;)

> hm... one of my softcores has been pushed into usable status.  It's a
> small core that is optimized to run from one single block RAM and to
> NOT use distributed RAM, so it is very small in all vendors' FPGAs.  It
> does have a compromise, 1 instruction takes 4 clocks, but then it has a
> windowed register file and no-overhead context switching.  I do
> consider it much more interesting than ATOM, at least as far as soft
> cores for FPGAs go.

Yes, a core that targets BlockRam is going to be a better fit.
Windowed registers are too often overlooked.  Single cycle opcodes are
over-rated, and I like the XMOS approach, where they time-slice to give
the illusion of 4 x 100MHz cores.

In a serial Flash CPU, to avoid thrashing the serial memory, and give
superfast interrupts, you would allocate a small, fast interrupt/FSM
type area, and then have one thread allowed to access serial flash.

Serial flash also naturally has a clocks/opcode number.  One chip will
feed 16b in 4 clocks, 24b in 6 clocks, and a pair would give 32b in 4
clocks, 40b in 5 clocks.

-jg

Article: 146147
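Those clocks-per-fetch figures follow directly from quad SPI shifting 4 data
bits per clock per device (8 bits per clock for a pair); a quick check of the
arithmetic (my own sketch, not from the post):

#include <stdio.h>

/* Quad SPI shifts 4 data bits per clock per device. */
static int clocks_for_fetch(int opcode_bits, int devices)
{
    int bits_per_clock = 4 * devices;
    return opcode_bits / bits_per_clock;   /* assumes an exact multiple */
}

int main(void)
{
    printf("16-bit opcode, 1 chip : %d clocks\n", clocks_for_fetch(16, 1)); /* 4 */
    printf("24-bit opcode, 1 chip : %d clocks\n", clocks_for_fetch(24, 1)); /* 6 */
    printf("32-bit opcode, 2 chips: %d clocks\n", clocks_for_fetch(32, 2)); /* 4 */
    printf("40-bit opcode, 2 chips: %d clocks\n", clocks_for_fetch(40, 2)); /* 5 */
    return 0;
}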
On Mar 4, 1:45 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
> Eric Chomko wrote:
> > On Mar 3, 7:06 pm, Greg Menke <guse...@comcast.net> wrote:
> >> Peter Flass <Peter_Fl...@Yahoo.com> writes:
> >>> Michael Wojcik wrote:
> >>>> Peter Flass wrote:
> >>>>> Hey!  C's finally caught up to PL/I.  Only took them 50 years, and then
> >>>>> of course all the features are just tacked-on in true C fashion, instead
> >>>>> of thought-through.
> >>>> Well, that's rather insulting to the members of WG14, who spent a
> >>>> decade designing those features. Fortunately, they published the
> >>>> Rationale showing that, in fact, they were thought through.[1] And a
> >>>> great deal of documentation describing the process is available in the
> >>>> archives.[2]
> >>>> If you'd care to show why you think otherwise, perhaps there would be
> >>>> some grounds for debate.
> >>> "The flexible array must be last"?
> >>> "sizeof applied to the structure ignores the array but counts any
> >>> padding before it"?
> >>> C is a collection of ad-hoc ideas.  WG14 may have put a great deal of
> >>> thought into how to extend it without breaking the existing mosh, but
> >>> that's my point, it's still a mosh.
> >> iostream formatting operators, because we really need more operator
> >> overloading and no enhancements are too bizarre in service of making
> >> everything, (for particular values of everything), specialized?
> >> Oh but wait, you can compile, install and dig your way through Boost so
> >> as to avoid the fun & games of vanilla iostream.
> >> Thank goodness printf and friends are still around.
> > More generally when speaking about C++, thank goodness C is still
> > around.
> I've said before, C started out as a fairly simple and clean language,
> with possibly a few rough spots.  Unfortunately instead of accepting it
> on its own terms, and maybe coming up with "D", people tried to turn it
> into a real 3GL, but then it gets in its own way.  It's lots of fun
> reading some of the Gnu stuff to see a language less readable than APL.

I'm pretty much in Rip Van Winkle mode.  Took a C class back in 1988 for
the fun of it, managed to get an A in it, and kind of liked it.  Twenty
years later I pop my head up and it might as well be Hieroglyphs.  Funny
thing <to me> is the people running the show, or part of it, love the
way it evolved.  I'm definitely in the minority.

*BUT* not trying to wrap the drift back to FPGA or anything, since I am
frequently guilty of drift myself, it is all interesting.  What I meant
was more along the lines of hardware that would make a language writer's
job easier, a language that would execute faster, an implementation that
would be more robust.

Just as an example, since the link I posted was a ref to the 65C02: the
hardware stack is only 256 bytes, so passing parameters on the stack can
run into problems.  You can have a software stack, but branches have to
be within ~+-128.  This leaves you with a software stack => 16 bit
pointers, a 64k limit, and slower than the other tools.

One of the other old processors from my stone knives and bear skin days
was the RCA 1802.  It had, IMHO, a great feature for calling
subroutines: any one of the 16 general purpose registers could be made
the program counter with a single instruction.  The author of that web
page decoded instructions on the bus to implement his own set of
registers and instructions.  IIRC he called one register 'W' and it
acted as an extended program counter so the memory could be expanded to
~4 megs.  Of course he did a lot more than that with adding useful
instructions.

I was just thinking, if I wanted to have a faster C, one of the things I
would like to do is implement the single-instruction PC change.  If, for
example, you had say 255 external 24 bit hardware registers in an FPGA
that could function as a PC, you could have that many subroutines
implemented in a 2 byte instruction, i.e. an op code saying the next
byte selects the PC register.  I would have to look at the op codes
available, but you may be able to implement ~16 subroutine calls in a
single byte.  I think it could be done on the cheap and simple with a
small GAL and maybe a fast static RAM to contain the microcode and
function as the W and PC registers.  The 16 external registers would
probably be enough if you choose the 16 most called routines from the
run-time code.  You could also implement runtime read-only memory,
memory moves by blit, pattern matching for things like string search
functions in a form similar to blit, and a second 'hardware stack'
separate from main memory.

Of course it comes back to my original statement that people could just
tell me to throw a Pentium at the problem, and it would be a valid
point.

Rick

Article: 146148
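The 1802 feature Rick describes is its SEP instruction, which simply selects
which of the sixteen scratch registers acts as the program counter, so
transferring control to a routine costs one byte.  A toy C decode loop
sketching the idea (my own illustration; only SEP's 0xDN encoding is taken
from the real 1802, everything else here is invented for the sketch):

#include <stdint.h>

/* Toy sketch of the 1802-style "any register can be the PC" idea:
 * SEP n (encoded 0xD0 | n) just changes which scratch register is
 * used as the program counter. */
struct cpu {
    uint16_t r[16];   /* sixteen general-purpose 16-bit registers      */
    unsigned p;       /* index of the register currently acting as PC  */
};

static void step(struct cpu *c, const uint8_t *mem)
{
    uint8_t op = mem[c->r[c->p]++];   /* fetch through the current PC */

    if ((op & 0xF0) == 0xD0) {        /* SEP n: make register n the PC */
        c->p = op & 0x0F;             /* execution continues wherever  */
    }                                 /* register n already points     */
    /* ... decoding of the remaining opcodes would go here ... */
}

Rick's proposal amounts to widening those PC registers (or moving them
outside the part) so the same one- or two-byte dispatch can reach a larger
address space.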
On Sat, 6 Mar 2010 08:41:57 -0800 (PST), KJ <kkjennings@sbcglobal.net> wrote:

>On Mar 5, 4:53 pm, Andy Peters <goo...@latke.net> wrote:
>>
>> It turns out that it is reasonable to create one workspace for an FPGA
>> project and within this workspace create a "design" for the
>> subentities and the top level. If you let it use the design name as
>> the working library for the design, then as long as you "use" the
>> library in a higher-level source, that source can see those other
>> libraries.
>
>Why do you think that you need to segregate the library that the
>source files get compiled into?  In other words, what is wrong with
>compiling everything into 'work'?  That's not a source file, it's an
>intermediate folder(s) that gets created along the way to doing what
>you need to have done.  What do you gain by trying to have tidy
>intermediate folders?

as you said, tidiness...

I use separate libraries for major categories within the design; e.g.
memory interface, core logic, common (reusable) blocks, testbench - not
separate libraries for foo, bar and bletch.  I can't say it buys me a
whole lot but it does help me keep the design hierarchy straighter -
e.g. if the synthesis project contains something from the Testbench
library, the design has gone seriously astray somewhere!

though Xilinx EDK also uses it to version libraries that are common
(apart from being different versions) to different IP blocks.

>Having a separate library helps you avoid name clashes,

not if you're using Xilinx XST it doesn't... unfortunately.

Incredibly, XST doesn't support VHDL qualified identifiers from
different libraries.  In the presence of unqualified name clashes it
will pick the right name from whichever library it wants, so WILL
synthesise something completely different from what you intended, and
tested in simulation.  This has been around since 6.1 at least, and may
or may not be fixed in ISE 12.

which, as far as I can see, is the ONLY reason why all those EDK blocks
get separately synthesised, then black boxed into the top level design.

- Brian

Article: 146149
Peter Flass wrote:
> Quadibloc wrote:
>> On Feb 22, 3:53 pm, Peter Flass <Peter_Fl...@Yahoo.com> wrote:
>>
>>> PL/I can be, but doesn't have to be.  If the arguments of a procedure
>>> match the parameters, only the argument address (and possibly a
>>> descriptor address for strings, structures, and arrays) is passed.
>>
>> Doesn't PL/I (or, rather, normal implementations thereof) support
>> separate compilation of subroutines, just like FORTRAN and COBOL?
>>
>
> Yes, but the calling sequence among PL/I programs passes information on
> string lengths, array bounds, etc.  You have to dumb it down to call
> other languages.
>
> For example, you can code:
>
> a: PROCEDURE( array );
>   DECLARE array (*) CHARACTER(*);
>   DO i=1 to HBOUND(array);
>     DO j=1 TO LENGTH(array(i)); ... END;
>   END;
>
> where both the upper limit of "array" is passed along with the string
> length of the element.

ISTM that one of the design goals of C was to make function calls have a
smaller overhead than other HLLs.

--
+----------------------------------------+
|     Charles and Francis Richmond       |
|                                        |
|  plano dot net at aquaporin4 dot com   |
+----------------------------------------+