Thanks for the response John! > Do you also have an assembler, C++ compiler and debugger for this beast? > You should have a reference design running on a FPGA board if you want to > attract a following. Ideally it should also run linux. See my response to Nikolaos above. Full-blown OS support was not the development target. But it's not a pico-blaze either. Somewhere in the middle, mainly for FPGA algorithms that can benefit from serialization. > Why can't you do both. Post the code to opencores.org and then write a > paper about it and publish. That's probably the route I'll end up taking.
Article: 155226
I have come across a VHDL Free Model Foundry mt47h16m16.vhd which gives me some errors. Has anyone else used this model? If so has anyone had issues with twr timing errors? Am I right in assuming that this model doesn't feature Concurrent Auto Precharge? Is DDR2 like SDR SDRAMs where some devices can cope with concurrent auto-precharge and others not? Where if the datasheet doesn't mention it then it's an unsupported feature? -- Mike Perkins Video Solutions Ltd www.videosolutions.ltd.uk
Article: 155227
Eric Wallin <tammie.eric@gmail.com> wrote: > Thanks for the response John! > > > Do you also have an assembler, C++ compiler and debugger for this beast? > > You should have a reference design running on a FPGA board if you want to > > attract a following. Ideally it should also run linux. > > See my response to Nikolaos above. Full-blown OS support was not the > development target. But it's not a pico-blaze either. Somewhere in the > middle, mainly for FPGA algorithms that can benefit from serialization. Benchmarks. Tell us why we should use your processor. How does it win compared with the alternatives? How easy is it to program? An assembler or a C compiler are really necessary to make something usable - LLVM may come in handy as a C compiler toolkit, I'm not sure what's an equivalent assembler toolkit. Actually synthesise the thing. It's hard to take seriously something that's never actually been tested for real, especially if it makes assumptions like having gigabytes of single-cycle-latency memory. Debug it and make sure it works in real hardware. > > Why can't you do both. Post the code to opencores.org and then write a > > paper about it and publish. If you put it on opencores, document document document. There are tons of half-baked projects with lame or nonexistent documentation, that kind of half work on the author's dev system but fall over in real life for one reason or another. Is it vendor-independent, or does it use Xilinx/Altera/etc special stuff? If so, how easily can that be replaced with an alternative vendor? Regression tests and test suites. How do we know it's working? Can we work on the code and make sure we don't break anything? What does 'working' mean in the first place? If you're trying to make an argument in computer architecture you can get away without some of this stuff (a research prototype can have rough edges because it's only to prove a point, as long as you tell us what they are). 
Generally you need to tell a convincing story, and either the story is that XYZ is a useful approach to take (so we can throw away the prototype and build something better) or XYZ is a component people should use (when it becomes more convincing if there's more support). Some lists of well-known conferences: http://sites.google.com/site/calasweb/fpga-conferences-and-workshops http://tcfpga.org/conferences.html Good luck :) Theo
Article: 155228
Thanks for your response Theo! On Thursday, June 13, 2013 8:44:03 PM UTC-4, Theo Markettos wrote: > Benchmarks.  Tell us why we should use your processor.  How does it win > compared with the alternatives? Good point.  So far I've coded a verification boot code gauntlet that it has passed, as well as restoring division and log2.  If I had more code to push through it I could statistically tailor the instruction set (size the immediates, etc.) but I don't.  I may at some point but I may not either.  This is mainly for me, to help me to implement various projects that require complex computations in an FPGA (I currently need it for a digital Theremin that is under development), but I want to release it so others may examine and possibly use it or help me make it better, or use some of the ideas in there in their own stuff. > How easy is it to program?  An assembler or a C compiler are really > necessary to make something usable - LLVM may come in handy as a C compiler > toolkit, I'm not sure what's an equivalent assembler toolkit. It's fairly general purpose and I think if you read the paper you might (or might not) find it easy to understand and program by hand using verilog initial statements.  My main goals were that it be simple enough to grasp without tools, complex and fast enough to do real things, have compact opcodes so BRAM isn't wasted, etc.  A compiler, OS, etc. are overkill and definitely not the intended target. There is a middle ground between trivial and full-blown processors (particularly for FPGA logical use).  Of all the commercial offerings in this range that I'm aware of, my processor is probably most similar to the Parallax Propeller, which is almost certainly pipeline threaded (though they don't tell you that in the documentation).  The Propeller has a video generator; character, sine, and log tables; and other stuff mine doesn't.  But mine has a simpler, more unified overall concept and programming model.
It is a true middle ground between register and stack machines. > Actually synthesise the thing.  It's hard to take seriously something that's > never actually been tested for real, especially if it makes assumptions like > having gigabytes of single-cycle-latency memory.  Debug it and make sure it > works in real hardware. Not trying to argue from authority, but I've got 10 years of professional HDL experience, and have made several processors in the past for my own edification and had them up and running on Xilinx demo boards.  This one hasn't actually run in the flesh yet, but it has gone through the build process many times and has been pretty thoroughly verified, so I would be amazed if there were any issues (famous last words).  But I'll run it on a Cyclone IV board before releasing it. > If you put it on opencores, document document document.  There are tons of > half-baked projects with lame or nonexistent documentation, that kind of > half work on the author's dev system but fall over in real life for one > reason or another. I know what you mean, I never use any code directly from there.  To be fair, most of the code I ran across in industry was fairly poor as well.  Anyway, I've got a really nice document that took me about a month to write, with lots of drawings, tables, examples, etc. describing the design and my thoughts behind it.  Even if people don't particularly like my processor they might be able to get something out of the background info in the paper (FPGA multipliers and RAM, LIFO & ALU design, pipelining, register set construction, etc.). > Is it vendor-independent, or does it use Xilinx/Altera/etc special stuff? > If so, how easily can that be replaced with an alternative vendor? I was careful to not use vendor specific constructs in the verilog.  The block RAM for main memory and the stacks is inferred, as are the ALU signed multipliers.
I spent a long time on the modular partitioning of the code with a strong eye towards verification (as I usually do).  The code was developed in Quartus, and has been compiled many, many times, but I haven't run it through XST yet. > Regression tests and test suites.  How do we know it's working?  Can we work > on the code and make sure we don't break anything?  What does 'working' mean > in the first place? I'm probably an odd man out, but I don't agree with a lot of "standard" industry verification methodology.  Test benches are fine for really complex code and / or data environments, but there is no substitute for good coding, proper modular partitioning, and thorough hand testing of each module.  I've seen too many out of control projects with designers throwing things over various walls, leaving the verification up to the next guy who usually isn't familiar enough with it to really bang on the sensitive parts.  And I kind of hate modelsim. Anyone that codes should spend a lot of time verifying - I do, and for the most part really enjoy it.  The industry has turned this essential activity into something most people loathe, so it just doesn't happen unless people get pushed into doing it.  And even then it usually doesn't get done very thoroughly.  Co-developing in environments like that is a nightmare. > Some lists of well-known conferences: > http://sites.google.com/site/calasweb/fpga-conferences-and-workshops > http://tcfpga.org/conferences.html Thanks, I'll check them out!
Article: 155229
Eric Wallin <tammie.eric@gmail.com> wrote: > Thanks for your response Theo! > > On Thursday, June 13, 2013 8:44:03 PM UTC-4, Theo Markettos wrote: > > > Benchmarks. Tell us why we should use your processor. How does it win > > compared with the alternatives? > > Good point. So far I've coded a verification boot code gauntlet that it > has passed, as well as restoring division and log2. If I had more code to > push through it I could statistically tailor the instruction set (size the > immediates, etc.) but I don't. I may at some point but I may not either. > This is mainly for me, to help me to implement various projects that > require complex computations in an FPGA (I currently need it for a digital > Theremin that is under development), but I want to release it so others > may examine and possibly use it or help me make it better, or use some of > the ideas in there in their own stuff. FWIW 'benchmarks' doesn't necessarily mean running SPECfoo at 2.7 times quicker than a 4004, but things like 'how many instructions does it take to write division/FFT/quicksort/whatever' compared with the leading brand. Or how many LEs, BRAMs, mW, etc. Numbers are good (as is publishing the source so we can reproduce them). > > How easy is it to program? An assembler or a C compiler are really > > necessary to make something usable - LLVM may come in handy as a C compiler > > toolkit, I'm not sure what's an equivalent assembler toolkit. > > It's fairly general purpose and I think if you read the paper you might > (or might not) find it easy to understand and program by hand using > verilog initial statements. My main goals were that it be simple enough > to grasp without tools, complex and fast enough to do real things, have > compact opcodes so BRAM isn't wasted, etc. A compiler, OS, etc. are > overkill and definitely not the intended target. Fair enough. If you're making architectural points, you can probably get away with assembly examples. 
A simple assembler is good for developer sanity, though. Could probably be knocked up in Python reasonably fast. > > Actually synthesise the thing. It's hard to take seriously something that's > > never actually been tested for real, especially if it makes assumptions like > > having gigabytes of single-cycle-latency memory. Debug it and make sure it > > works in real hardware. > > Not trying to argue from authority, but I've got 10 years of professional > HDL experience, and have made several processors in the past for my own > edification and had them up and running on Xilinx demo boards. This one > hasn't actually run in the flesh yet, but it has gone through the build > process many times and has been pretty thoroughly verified, so I would be > amazed if there were any issues (famous last words). But I'll run it on a > Cyclone IV board before releasing it. I'm just a bit jaded from seeing papers at conferences where somebody wrote some verilog which they only ran in modelsim, and never had to worry about limited BRAM, or meeting timing, or multiple clock domains, or... > > If you put it on opencores, document document document. There are tons of > > half-baked projects with lame or nonexistent documentation, that kind of > > half work on the author's dev system but fall over in real life for one > > reason or another. > > I know what you mean, I never use any code directly from there. To be > fair, most of the code I ran across in industry was fairly poor as well. > Anyway, I've got a really nice document that took me about a month to > write, with lots of drawings, tables, examples, etc. describing the > design and my thoughts behind it. Even if people don't particularly like > my processor they might be able to get something out of the background > info in the paper (FPGA multipliers and RAM, LIFO & ALU design, > pipelining, register set construction, etc.). This is good. 
Just a thought - could you angle it as 'how to do processor design' using your processor as a case study?  That makes it more of a useful tutorial than 'buy our brand, it's great'... > I'm probably an odd man out, but I don't agree with a lot of "standard" > industry verification methodology.  Test benches are fine for really > complex code and / or data environments, but there is no substitute for > good coding, proper modular partitioning, and thorough hand testing of > each module.  I've seen too many out of control projects with designers > throwing things over various walls, leaving the verification up to the > next guy who usually isn't familiar enough with it to really bang on the > sensitive parts.  And I kind of hate modelsim. That's not exactly what I meant... let's say you rearrange the pipelining on your CPU.  It turns out you introduce some obscure bug that causes branches to jump to the wrong place if there's a multiply 3 instructions back from the branch.  How would you know if you did this, and make sure it didn't happen again?  Hand testing modules won't catch that. It's worse if there's an OS involved, of course.  But it can be easy to introduce stupid bugs when you're refactoring something, and waste a lot of time tracking them down. We use Bluespec so avoid modelsim ;-) (with Jenkins so we run the test suite for every commit.  A bit overkill for your needs, perhaps) > Anyone that codes should spend a lot of time verifying - I do, and for the > most part really enjoy it.  The industry has turned this essential > activity into something most people loathe, so it just doesn't happen > unless people get pushed into doing it.  And even then it usually doesn't > get done very thoroughly.  Co-developing in environments like that is a > nightmare. I admit the tools don't always make it easy... Theo
Article: 155230
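[Editor's aside: since "knock up a simple assembler in Python" keeps coming up in this exchange, here is a minimal two-pass sketch of what that might look like. The three-instruction ISA and its encodings below are entirely hypothetical, for illustration only — they are not Eric's actual opcodes.]

```python
# Minimal two-pass assembler sketch: pass 1 records label addresses,
# pass 2 emits 16-bit words of (opcode << 8) | 8-bit operand.
# The NOP/LIT/JMP mnemonics and encodings are made up for this example.
OPCODES = {"nop": 0x00, "lit": 0x10, "jmp": 0x20}

def assemble(lines):
    labels, words = {}, []
    addr = 0
    for line in lines:                         # pass 1: label addresses
        line = line.split("#")[0].strip()      # strip comments and blanks
        if not line:
            continue
        if line.endswith(":"):
            labels[line[:-1]] = addr
        else:
            addr += 1                          # one word per instruction
    for line in lines:                         # pass 2: emit machine words
        line = line.split("#")[0].strip()
        if not line or line.endswith(":"):
            continue
        mnem, *ops = line.split()
        operand = 0
        if ops:
            operand = labels.get(ops[0])
            if operand is None:
                operand = int(ops[0], 0)       # numeric literal (any base)
        words.append((OPCODES[mnem] << 8) | (operand & 0xFF))
    return words

prog = ["start:", "lit 5", "jmp start", "nop"]
print([hex(w) for w in assemble(prog)])
```

The output of such a tool could be dumped straight into the kind of verilog initial statement boot file Eric describes below.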
On Friday, June 14, 2013 5:29:15 PM UTC-4, Theo Markettos wrote: > FWIW 'benchmarks' doesn't necessarily mean running SPECfoo at 2.7 times > quicker than a 4004, but things like 'how many instructions does it take to > write division/FFT/quicksort/whatever' compared with the leading brand.  Or > how many LEs, BRAMs, mW, etc.  Numbers are good (as is publishing the source > so we can reproduce them). I have FPGA resource numbers for the Cyclone III target in the paper.  Briefly, it consumes ~1800 LEs, 4 18x18 multipliers, 4 BRAMs for the stacks, plus whatever the main memory needs.  This is roughly 1/3 of the smallest Cyclone III part.  I have a restoring division example in the paper that gives 197 / 293 cycles best / worst case (a thread cycle is 8 200MHz clocks, but there are 8 threads running at this speed so aggregate throughput is potentially 200 MIPs if all threads are busy doing something). I've seen lots of papers that claim speed numbers but don't give the speed grade, or tell you what hoops they jumped through to get those speeds.  Without that info the speeds are meaningless. > Fair enough.  If you're making architectural points, you can probably get > away with assembly examples.  A simple assembler is good for developer > sanity, though.  Could probably be knocked up in Python reasonably fast. That's certainly possible.  At this point I'm writing code for it directly in verilog using an initial statement text file that gets included in the main memory.  Several define statements make this clearer and actually fairly easy.  But uploading code to a boot loader would require something like an assembler.  I'm really trying to stay away from the need for toolsets. > This is good.  Just a thought - could you angle it as 'how to do processor > design' using your processor as a case study?  That makes it more of a > useful tutorial than 'buy our brand, it's great'...
The paper is kind of that, background and general how to, but my processor doesn't have caches, branch prediction, pipeline hazards, TLBs, etc. so people wanting to know how to do that stuff will come up totally empty. > That's not exactly what I meant... let's say you rearrange the pipelining on > your CPU.  It turns out you introduce some obscure bug that causes branches to > jump to the wrong place if there's a multiply 3 instructions back from the > branch.  How would you know if you did this, and make sure it didn't happen > again?  Hand testing modules won't catch that. It's correct by construction!  ;-)  Seriously though, there are no hazards to speak of and very little internal state, so branches pretty much either work or they don't. Once basic functionality was confirmed in simulation, I used processor code to check the processor itself e.g. I wrote some code that checks all branches against all possible branch conditions.  Each test increments a count if it passes or decrements if it fails.  The final passing number can only be reached if all tests pass.  I've got simple code like this to test all of the opcodes.  This exercise can help give an early feel for the completeness of the instruction set as well.  Verifying something like the Pentium must be one agonizingly painful mountain to climb.  Verifying each silicon copy must be a bear as well.
Article: 155231
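[Editor's aside: the restoring division Eric benchmarks above (197 / 293 cycles best / worst case) is the textbook shift-and-subtract algorithm. A plain-Python sketch of the algorithm itself — my own illustration, not his processor code — looks like this:]

```python
def restoring_divide(dividend, divisor, bits=16):
    """Textbook restoring division: one trial subtract per quotient bit."""
    rem, quot = 0, 0
    for i in range(bits - 1, -1, -1):
        rem = (rem << 1) | ((dividend >> i) & 1)  # bring down next bit
        if rem >= divisor:
            rem -= divisor       # subtraction sticks; set the quotient bit
            quot |= 1 << i
        # else: the trial subtraction is "restored" (skipped here)
    return quot, rem

print(restoring_divide(50000, 7))
```

The data-dependent restore/skip path per bit is exactly why a best-case and a worst-case cycle count differ on a real machine.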
So is there anything like the old Byte magazine (or a web equivalent) where enthusiasts and other non-academic, non-industry types can publish articles on computers / computing hardware?
Article: 155232
On 6/13/2013 1:07 PM, Eric Wallin wrote: > Thank you for your reply Nikolaos! > >> This reads like a "fourstack" architecture on steroids. It seems >> good! > > "A Four Stack Processor" by Bernd Paysan?  I ran across that paper several years ago (thanks!).  Very interesting, but with multiple ALUs, access to data below the LIFO tops, TLBs, security, etc. it is much more complex than my processor.  It looks like a real bear to program and manage at the lowest level. > >> How do you compare with more classic RISC-like soft-cores like >> MicroBlaze, Nios-II, LEON, etc? > > The target audience for my processor is an FPGA developer who needs to implement complex functionality that tolerates latency but requires deterministic timing.  Hand coding with no toolchain (verilog initial statement boot code).  Simple enough to keep the processor model and current state in one's head (with room to spare).  Small enough to fit in the smallest of FPGAs (with room to spare).  Not meant at all to run a full-blown OS, but not a trivial processor. That's the ground I have been plowing off and on for the last 10 years. >> There is also a classic book on stack-based computers, you really need >> to go through this and reference it in your publication. > > "Stack Computers: The New Wave" by Philip J. Koopman, Jr.?  Also ran across that many years ago (thanks!).  The main thrust of it seems to be the advocating of single data stack, single return stack, zero operand machines, which I feel (nothing personal) are crap.  Easy to design and implement (I've made several while under the spell) but impossible to program in an efficient manner (gobs of real time wasted on stack thrash, the minimization of which leads directly to unreadable procedural coding practices, which leads to catastrophic stack faults). I assume that you do understand that the point of MISC is that the implementation can be minimized so that the instructions run faster.
In theory this makes up for the extra instructions needed to manipulate the stack on occasion.  But I understand your interest in minimizing the inconvenience of stack ops.  I spent a little time looking at alternatives and am currently looking at a stack CPU design that allows offsets into the stack to get around the extra stack ops.  I'm not sure how this compares to your ideas.  It is still a dual stack design as I have an interest in keeping the size of the implementation at a minimum.  1800 LEs won't even fit on the FPGAs I am targeting. > My processor incorporates what I believe are a couple of new innovations (but who ever really knows?) that I'd like to get out there if possible.  And I wouldn't mind a bit of personal recognition if only for my efforts. I would like to hear about your innovations.  As you seem to understand, it is hard to be truly innovative finding new ideas that others have not uncovered.  But I think you are certainly in an area that is not thoroughly explored. > IEEE is probably out.  I fundamentally disagree with the hoarding of technical papers behind a greedy paywall. I won't argue with that.  Even when I was an IEEE member, I never found a document I didn't have to pay for. When can we expect to see your paper? -- Rick
Article: 155233
Thanks for your response rickman! On Saturday, June 15, 2013 8:40:27 PM UTC-4, rickman wrote: > That's the ground I have been plowing off and on for the last 10 years. Ooo, same here, and my condolences.  I caught a break a couple of months ago and have been beavering away on it ever since, and I finally have something that doesn't cause me to vomit when I code for it.  Multiple indexed simple stacks with explicit pointer control makes everything a lot easier than a bog standard stack machine.  I think the auto-consumption of literally everything, particularly the data, indexes, and pointers you dearly want to use again is at the bottom of all the craziness people just accept with stack machines.  This mechanism works great for manual data entry on HP calculators, but not so much for stack machines IMHO.  Auto consumption also pretty much rules out conditional execution of single instructions. > I assume that you do understand that the point of MISC is that the > implementation can be minimized so that the instructions run faster.  In > theory this makes up for the extra instructions needed to manipulate the > stack on occasion.  But I understand your interest in minimizing the > inconvenience of stack ops.  I spent a little time looking at > alternatives and am currently looking at a stack CPU design that allows > offsets into the stack to get around the extra stack ops.  I'm not sure > how this compares to your ideas.  It is still a dual stack design as I > have an interest in keeping the size of the implementation at a minimum. MISC is interesting, but you have to consider that all ops, including simple stack manipulations, will generally consume as much real time as a multiply, which suddenly makes all of those confusing stack gymnastics you have to perform to dig out your loop index or whatever from underneath your read/write pointer from underneath your data and such overly burdensome.
Indexes into a moving stack - that way lies insanity.  Ever hit the roll down button on an HP calculator and get instantly flummoxed?  Maybe a compiler can keep track of that kind of stuff, but my weak brain isn't up to the task. Altera BRAM doesn't go as wide as Xilinx with true dual port.  When I was working in Xilinx I was able to use a single BRAM for both the data and return stacks (16 bit data). > 1800 LEs won't even fit on the FPGAs I am targeting. I'm not sure anything less than the smallest Cyclone 2 is really worth developing in.  A lot of the stuff below that is often more expensive due to the built-in configuration memory and such.  There are quite inexpensive Cyclone dev boards on eBay from China. > I would like to hear about your innovations.  As you seem to understand, > it is hard to be truly innovative finding new ideas that others have not > uncovered.  But I think you are certainly in an area that is not > thoroughly explored. I haven't seen anything exactly like it, certainly not the way the stacks are implemented.  And I deal with extended arithmetic results in an unusual way.  In terms of scheduling and pipelining, the Parallax Propeller is probably the closest in architecture (you can infer from the specs and operational model what they don't explicitly tell you in the datasheet). > I won't argue with that.  Even when I was an IEEE member, I never found > a document I didn't have to pay for. I was a member too right out of grad school.  But, like Janet Jackson sang: "What have they done for me lately?" > When can we expect to see your paper? It's all but done, just picking around the edges at this point.  As soon as the code is verified to my satisfaction I'll release both and post here.
Article: 155234
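[Editor's aside: the stack-thrash argument in this exchange can be made concrete with a toy Python contrast between a classic auto-consuming stack and a stack readable at an offset. Both machines below are hypothetical sketches, not Eric's or rickman's actual designs.]

```python
class AutoConsumeStack:
    """Classic zero-operand stack: every ALU op pops its inputs."""
    def __init__(self): self.s = []
    def push(self, v): self.s.append(v)
    def add(self):                      # consumes both operands
        b, a = self.s.pop(), self.s.pop()
        self.s.append(a + b)
    def over(self):                     # shuffle op needed to reuse a value
        self.s.append(self.s[-2])

class IndexedStack:
    """Stack readable at an offset: operands survive unless explicitly popped."""
    def __init__(self): self.s = []
    def push(self, v): self.s.append(v)
    def read(self, n): return self.s[-1 - n]   # peek n below the top
    def add(self, a_off, b_off):               # non-destructive add
        self.s.append(self.read(a_off) + self.read(b_off))

# Reusing a value (e.g. a loop index): the classic machine burns a
# shuffle op first; the indexed machine reads it in place.
a = AutoConsumeStack(); a.push(10); a.push(3)
a.over(); a.add()          # OVER then ADD, just to keep the 10 around
print(a.s)                 # the 10 survives only because OVER copied it

i = IndexedStack(); i.push(10); i.push(3)
i.add(1, 0)                # both operands read at offsets, no shuffling
print(i.s)
```

When every op costs as much real time as a multiply, as Eric notes above, that extra OVER is a full instruction slot wasted.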
On 6/13/2013 6:12 PM, Mike Perkins wrote: > I have come across a VHDL Free Model Foundry mt47h16m16.vhd which gives > me some errors. > > Has anyone else used this model? If so has anyone had issues with twr > timing errors? Am I right in assuming that this model doesn't feature > Concurrent Auto Precharge? > > Is DDR2 like SDR SDRAMs where some devices can cope with concurrent > auto-precharge and others not? Where if the datasheet doesn't mention > it then it's an unsupported feature? I haven't worked with DDR2, but I have worked with SDR SDRAM.  I don't recall the specific mode you mention, but I'm pretty sure if a given device data sheet does not mention any given feature, it isn't in the part. I believe these things are standardized or at least a minimum set of features should be standardized.  Using a vendor unique feature means you are locked into that vendor.  On the other hand, I believe there is a way to read the specifics of the brand and model of RAM you are using in a standard way.  So you should be able to query the device to see if it supports the feature you want to use. What does the data sheet for your part say about status and configuration settings? -- Rick
Article: 155235
Eric Wallin wrote: > Indexes into a moving stack - that way lies insanity.  Ever hit the roll down button on an HP calculator and get instantly flummoxed?  Maybe a compiler can keep track of that kind of stuff, but my weak brain isn't up to the task. Have a look at comp.arch, in particular the current discussion about the "belt" in the Mill processor. Start by watching the video. The Mill is a radical architecture that offers far greater instruction level parallelism than existing processors, partly by having no general purpose registers. The Mill is irrelevant to FPGA processors; it is aimed at beating x86 machines.
Article: 155236
On Sunday, June 16, 2013 5:23:01 AM UTC-4, Tom Gardner wrote: > The Mill is irrelevant to FPGA processors... The Mill looks vaguely interesting (if you're into billion transistor processors) but as you indicated I'm not sure how it is relevant to this thread?
Article: 155237
Eric Wallin wrote: > On Sunday, June 16, 2013 5:23:01 AM UTC-4, Tom Gardner wrote: > >> The Mill is irrelevant to FPGA processors... > > The Mill looks vaguely interesting (if you're into billion transistor processors) but as you indicated I'm not sure how it is relevant to this thread? You wrote "Indexes into a moving stack - that way lies insanity." The Mill's belt is effectively exactly that, and they appear not to have gone insane.
Article: 155238
On 16/06/2013 04:24, rickman wrote: > On 6/13/2013 6:12 PM, Mike Perkins wrote: >> I have come across a VHDL Free Model Foundry mt47h16m16.vhd which >> gives me some errors. >> >> Has anyone else used this model? If so has anyone had issues with >> twr timing errors? Am I right in assuming that this model doesn't >> feature Concurrent Auto Precharge? >> >> Is DDR2 like SDR SDRAMs where some devices can cope with >> concurrent auto-precharge and others not? Where if the datasheet >> doesn't mention it then it's an unsupported feature? > > I haven't worked with DDR2, but I have worked with SDR SDRAM.  I > don't recall the specific mode you mention, but I'm pretty sure if a > given device data sheet does not mention any given feature, it isn't > in the part. I know Micron SDR devices feature concurrent auto-precharge, and another that didn't, or should I say I had a sample where it didn't work. > I believe these things are standardized or at least a minimum set of > features should be standardized.  Using a vendor unique feature > means you are locked into that vendor.  On the other hand, I believe > there is a way to read the specifics of the brand and model of RAM > you are using in a standard way.  So you should be able to query the > device to see if it supports the feature you want to use. I would have thought that in the passage of time this feature would be an industry standard.  Hence my uncertainty! > What does the data sheet for your part say about status and > configuration settings? It's not a configuration issue, the part either supports concurrent auto-precharge or it doesn't. The part is a Winbond W9725G6KB and I'm beginning to assume that if the datasheet doesn't mention it then it's not a feature I can rely upon.  However the absence of any restriction of the time between two auto-precharge instructions on different banks does tend to suggest that concurrent auto-precharge is inherently possible.
I'm now using a Micron verilog model which is not only unencrypted but almost holds your hand through the initialisation procedures, and tells you what data has been written to which row and column!  Superb!  Being unencrypted means I can also shorten the 200us period where the clock must be stable for faster simulation! -- Mike Perkins Video Solutions Ltd www.videosolutions.ltd.uk
Article: 155239
On Sunday, June 16, 2013 1:16:42 PM UTC-4, Tom Gardner wrote: > You wrote "Indexes into a moving stack - that way lies insanity." > The Mill's belt is effectively exactly that, and they appear not to have gone insane. I bet they would if they tried to hand code it in assembly!  ;-) The first video comment is priceless: "Gandalf?"
Article: 155240
Eric Wallin wrote: > On Sunday, June 16, 2013 1:16:42 PM UTC-4, Tom Gardner wrote: > >> You wrote "Indexes into a moving stack - that way lies insanity." >> The Mill's belt is effectively exactly that, and they appear not to have gone insane. > > I bet they would if they tried to hand code it in assembly!  ;-) It *is* considerably easier than hand-coding Itanium.  With that you change *any* aspect of the microarchitecture and you go back to the beginning.  How do I know?  I asked someone that was doing it to assess its performance, and decided to Run Away from anything to do with the Itanium. > The first video comment is priceless: "Gandalf?" How shallow :)
Article: 155241
I've used the Micron Verilog models a couple of times.  They're written in oldschool Verilog, but they're pretty good.  I have to go in and change some parameters to make them work with my parts, and then they are very detailed.  I also put in the Modelsim metacomment to make the RAM array sparse which speeds up the sim a lot.
Article: 155242
On Tuesday, April 23, 2013 1:13:42 PM UTC-7, Kevin Neilson wrote: > Why is Modelsim so expensive?  It is a mature product and yet it segfaults > on me all the time.  Constantly.  Often, when it ought to give me warnings > or errors (such as when there is a port width mismatch) it just core dumps > instead, leaving me to comment out lines one at a time until I figure out > why it's crashing.  That's my rant.  It's still pretty decent, but ought to > be cheaper if it's going to coredump like freeware. I use Questa on Windows 7 64 bit on an Intel-based machine for my SystemVerilog designs.  I can tell you that I have never seen a coredump lately.  Nonetheless, occasionally, the tool would simply exit with a cryptic code during compile, optimization or elaboration stage.  At which point I have to file a support request.  Sometimes, I would be able to get an idea on the problem - something silly, by turning off the vopt switch, at which point I would think why wasn't the tool able to flag this without giving up! What would be the groups' sentiment if there was a cloud service to use the tool on a timeshare basis? I can see a win-win scenario for vendors and the developer community.
Article: 155243
On 6/16/2013 5:40 PM, Mike Perkins wrote: > On 16/06/2013 04:24, rickman wrote: >> On 6/13/2013 6:12 PM, Mike Perkins wrote: >>> I have come across a VHDL Free Model Foundry mt47h16m16.vhd which >>> gives me some errors. >>> >>> Has anyone else used this model? If so has anyone had issues with >>> twr timing errors? Am I right in assuming that this model doesn't >>> feature Concurrent Auto Precharge? >>> >>> Is DDR2 like SDR SDRAMs where some devices can cope with >>> concurrent auto-precharge and others not? Where if the datasheet >>> doesn't mention it then it's an unsupported feature? >> >> I haven't worked with DDR2, but I have worked with SDR SDRAM. I >> don't recall the specific mode you mention, but I'm pretty sure if a >> given device data sheet does not mention any given feature, it isn't >> in the part. > > I know Micron SDR devices feature concurrent auto-precharge, and another > that didn't, or should I say I had a sample where it didn't work. > >> I believe these things are standardized or at least a minimum set of >> features should be standardized. Using a vendor unique feature >> means you are locked into that vendor. On the other hand, I believe >> there is a way to read the specifics of the brand and model of RAM >> you are using in a standard way. So you should be able to query the >> device to see if it supports the feature you want to use. > > I would have thought that in the passage of time this feature would be > an industry standard. Hence my uncertainly! > >> What does the data sheet for your part say about status and >> configuration settings? >> > > It's not a configuration issue, the part either support concurrent > auto-precharge or it doesn't. I mean there may be a status something indicating what features are supported. It has been a long time since I read an SDRAM data sheet, but I recall something like this in the part I used. There were extended bits of some sort for extended features. 
> The part is a Winbond W9725G6KB and I'm beginning to assume that if the > datasheet doesn't mention it then it's not a feature I can rely upon. > However the absence of any restriction on the time between two > auto-precharge instructions on different banks does tend to suggest > that concurrent auto-precharge is inherently possible. > > I'm now using a Micron verilog model which is not only unencrypted but > almost holds your hand through the initialisation procedures, and tells > you what data has been written to which row and column! Superb! Being > unencrypted means I can also shorten the 200us period where the clock > must be stable for faster simulation! -- RickArticle: 155244
I have a bug in a test fixture that is FPGA based. I had thought it was in the software which controls it, but after many hours of chasing it around I've concluded it must be in the FPGA code. I didn't think it was in the VHDL because it had been simulated well and the nature of the bug is an occasional dropped character on the receive side. Who can't design a UART? Well, it could be in the handshake with the state machine, but still... So I finally got around to adding some debug signals which I would monitor on an analyzer and guess what, the bug is gone! I *hate* when that happens. I can change the code so the debug signals only appear when a control register is set to enable them, but still, I don't like this. I want to know what is causing this DURN THING! Anyone see this happen to them before? Oh yeah, someone in another thread (that I can't find, likely because I don't recall the group I posted it in) suggested I add synchronizing FFs to the serial data in. Sure enough I had forgotten to do that. Maybe that was the fix... of course! It wasn't metastability, I bet it was feeding multiple bits of the state machine! Durn, I never make that sort of error. Thanks to whoever it was that suggested the obvious that I had forgotten. -- RickArticle: 155245
On Mon, 17 Jun 2013 20:00:01 -0400 rickman <gnuarm@gmail.com> wrote: > So I finally got around to adding some debug signals which I would > monitor on an analyzer and guess what, the bug is gone! I *hate* when > that happens. I can change the code so the debug signals only appear > when a control register is set to enable them, but still, I don't like > this. I want to know what is causing this DURN THING! > > Anyone see this happen to them before? > > Oh yeah, someone in another thread (that I can't find, likely because I > don't recall the group I posted it in) suggested I add synchronizing FFs > to the serial data in. Sure enough I had forgotten to do that. Maybe > that was the fix... of course! It wasn't metastability, I bet it was > feeding multiple bits of the state machine! Durn, I never make that > sort of error. Thanks to whoever it was that suggested the obvious that > I had forgotten. > > -- > > Rick Not metastability, a race condition. Asynchronous external input headed to multiple clocked elements, each of which it reaches via a different path with a different delay. When you added debugging signals you changed the netlist, which changed the place and route, making unpredictable changes to those delays. In this case, it happened to push it into a place where _as far as you tested_, it seems happy. But it's still unsafe, because as you change other parts of the design, the P&R of that section will still change anyhow, and you start getting my favorite situation, the problem that comes and goes based on entirely unrelated factors. The fix you fixed fixes it. When you resynchronized it on the same clock as you're running around the rest of the logic, you forced that path to become timing constrained. As such, the P&R takes it upon itself to make sure that the timing of that route is irrelevant with respect to the clock period, and your problem goes away for good. 
-- Rob Gaddi, Highland Technology -- www.highlandtechnology.com Email address domain is currently out of order. See above to fix.Article: 155246
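Rob's fix - registering the asynchronous serial input before it fans out to the state machine - is the classic two-flop synchronizer. A minimal VHDL sketch (entity and signal names here are illustrative, not taken from rickman's actual design):

```vhdl
-- Two-flop synchronizer: the async input is sampled by one flip-flop,
-- then re-sampled by a second, so only sync_ff1 can ever go metastable
-- and the state machine sees a single clean, timing-constrained signal.
library ieee;
use ieee.std_logic_1164.all;

entity sync_2ff is
  port (
    clk      : in  std_logic;   -- the state machine's clock
    async_in : in  std_logic;   -- e.g. the raw UART RX line
    sync_out : out std_logic    -- safe to fan out to the FSM
  );
end entity;

architecture rtl of sync_2ff is
  signal sync_ff1, sync_ff2 : std_logic := '1';  -- UART line idles high
begin
  process (clk)
  begin
    if rising_edge(clk) then
      sync_ff1 <= async_in;   -- may go metastable; never used directly
      sync_ff2 <= sync_ff1;   -- has a full clock period to settle
    end if;
  end process;
  sync_out <= sync_ff2;
end architecture;
```

Most vendors also provide attributes (e.g. Xilinx's ASYNC_REG) to keep the two flops adjacent and stop the synthesizer optimizing them away; check your tool's documentation.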
I am starting to study VHDL. Now, I have to do an exercise with the following content: I have to define an array of 10 elements (8-bit range) ([3,4,2,8,9,0,1,5,7,6] for example). The 10 elements are shifted in over 10 clock cycles. The question is to find the maximum number and second maximum number in this array after 10 clock cycles. Can anyone show me a method to solve it using VHDL? Thank you.Article: 155247
On 18 Jun., 03:19, phanhuyich <khanhnguyent...@gmail.com> wrote: > I am starting to study VHDL. Now, I have to do an exercise with the following content: > > I have to define an array of 10 elements (8 bit range) ([3,4,2,8,9,0,1,5,7,6] for example). And 10 elements were imported within 10 clock cycles. The question is find the maximum number and second maximum number in this array after 10 clock cycles. > Anyone help to show me the method to solve it using VHDL ? No problem. Just write down your solution to that problem in "not VHDL". Then ask what part of the algorithm is hard for you to transfer into VHDL and why, so we can help. Hint: it helps to think about the RTL and draw a picture of how the data flow might be, then it is easy to write it down in VHDL. regards ThomasArticle: 155248
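Since this one comes up often as an exercise, here is the standard streaming approach for reference: keep two registers and do two comparisons per incoming sample, so no array storage is needed at all. The entity and signal names below are my own, and the sketch assumes distinct input values (exact duplicates of the maximum would need one extra compare):

```vhdl
-- Streaming max / second-max: compare each incoming sample against the
-- current maximum; when a new maximum arrives, the old one is demoted
-- to second place. After the ten samples [3,4,2,8,9,0,1,5,7,6] the
-- registers hold largest = 9 and second = 8.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity max_two is
  port (
    clk     : in  std_logic;
    rst     : in  std_logic;
    valid   : in  std_logic;               -- one new sample this cycle
    din     : in  unsigned(7 downto 0);
    largest : out unsigned(7 downto 0);
    second  : out unsigned(7 downto 0)
  );
end entity;

architecture rtl of max_two is
  signal r1, r2 : unsigned(7 downto 0);
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        r1 <= (others => '0');
        r2 <= (others => '0');
      elsif valid = '1' then
        if din > r1 then        -- new overall maximum
          r2 <= r1;             -- old maximum becomes second largest
          r1 <= din;
        elsif din > r2 then     -- beats only the runner-up
          r2 <= din;
        end if;
      end if;
    end if;
  end process;
  largest <= r1;
  second  <= r2;
end architecture;
```

As Thomas says, draw the dataflow first - two comparators and two registers - and the VHDL falls out of the picture almost line for line.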
>So I finally got around to adding some debug signals which I would >monitor on an analyzer and guess what, the bug is gone! I *hate* when >that happens. I can change the code so the debug signals only appear >when a control register is set to enable them, but still, I don't like >this. I want to know what is causing this DURN THING! > >Anyone see this happen to them before? > >-- > >Rick > Yes, This is called a "Heisenbug". Usually involves a clock domain crossing mistake. John Eaton --------------------------------------- Posted through http://www.FPGARelated.comArticle: 155249
On 18/06/2013 00:55, rickman wrote: > On 6/16/2013 5:40 PM, Mike Perkins wrote: >> On 16/06/2013 04:24, rickman wrote: >>> On 6/13/2013 6:12 PM, Mike Perkins wrote: >>>> I have come across a VHDL Free Model Foundry mt47h16m16.vhd >>>> which gives me some errors. >>>> >>>> Has anyone else used this model? If so has anyone had issues >>>> with twr timing errors? Am I right in assuming that this model >>>> doesn't feature Concurrent Auto Precharge? >>>> >>>> Is DDR2 like SDR SDRAMs where some devices can cope with >>>> concurrent auto-precharge and others not? Where if the >>>> datasheet doesn't mention it then it's an unsupported feature? >>> >>> I haven't worked with DDR2, but I have worked with SDR SDRAM. I >>> don't recall the specific mode you mention, but I'm pretty sure >>> if a given device data sheet does not mention any given feature, >>> it isn't in the part. >> >> I know Micron SDR devices feature concurrent auto-precharge, and >> another that didn't, or should I say I had a sample where it didn't >> work. >> >>> I believe these things are standardized or at least a minimum set >>> of features should be standardized. Using a vendor unique >>> feature means you are locked into that vendor. On the other hand, >>> I believe there is a way to read the specifics of the brand and >>> model of RAM you are using in a standard way. So you should be >>> able to query the device to see if it supports the feature you >>> want to use. >> >> I would have thought that in the passage of time this feature would >> be an industry standard. Hence my uncertainly! >> >>> What does the data sheet for your part say about status and >>> configuration settings? >>> >> >> It's not a configuration issue, the part either support concurrent >> auto-precharge or it doesn't. > > I mean there may be a status something indicating what features are > supported. It has been a long time since I read an SDRAM data sheet, > but I recall something like this in the part I used. 
> There were extended bits of some sort for extended features. Sorry, I didn't mean to be disingenuous. I'm not aware of there being a status bit and, as I've been told, Concurrent Auto Precharge is not a JEDEC specified feature. The only way to find out, apart from reading the datasheet of course, is to try to perform an Auto Precharge on more than one bank at once and see if the data is corrupted! -- Mike Perkins Video Solutions Ltd www.videosolutions.ltd.uk