On 06.08.2015 00:52, thomas.entner99@gmail.com wrote: > For FPGA design, the user base is much smaller than for a C compiler. > How much of them would really use the open source alternatives when > there are very advanced free vendor tools? And how much of them are > really skilled software gurus? And have enough spare time? Of course > you would find some students which are contributing (e.g. for their > thesis), but I doubt that it will be enough to get a competitve > product and to maintain it. New devices should be supported with > short delay, otherwise the tool would not be very useful. I don't see the big difference from compilers targeting microcontrollers here. There are plenty of older FPGA types, such as the Xilinx XC9500, still in use. A free toolchain for them would be useful, and having advanced optimizations would be beneficial there as well. On the microcontroller side, SDCC also targets mostly older architectures, and a few newer ones, such as the Freescale S08 and STMicroelectronics STM8. You don't need every user to become a developer. A few are enough. PhilippArticle: 158101
On Sat, 6 Jun 2015 03:53:37 -0700 (PDT) cpandya@yahoo.com wrote: > Hi, > > I am trying create verilog module that can support > parameterized instance name. I understand that the signal > width and other such things can be parameterized. But can we > also parameterize the module instance name? There is a trivial example here: http://rosettacode.org/wiki/Four_bit_adder search page for Professional Code - with test bench Notice that there is no width parameter nor global to indicate how wide Multibit_Adder should be. Jan Coombs -- email valid, else fix dots and hyphen jan4clf2014@murrayhyphenmicroftdotcodotukArticle: 158102
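A short illustration of the point above: Verilog has no way to make an instance name itself a parameter, but a generate loop effectively does the same job, since each iteration produces a distinct, indexed instance name (gen_bits[0].u_fa, gen_bits[1].u_fa, ...). The sketch below is hypothetical and not taken from the Rosetta Code page; the module and port names are made up for illustration.

// Width-parameterized ripple-carry adder; the generate loop gives each
// full-adder instance its own indexed name.
module ripple_adder #(parameter WIDTH = 4) (
    input  wire [WIDTH-1:0] a, b,
    input  wire             cin,
    output wire [WIDTH-1:0] sum,
    output wire             cout
);
    wire [WIDTH:0] carry;
    assign carry[0] = cin;
    assign cout     = carry[WIDTH];

    genvar i;
    generate
        for (i = 0; i < WIDTH; i = i + 1) begin : gen_bits
            full_adder u_fa (
                .a(a[i]), .b(b[i]), .cin(carry[i]),
                .sum(sum[i]), .cout(carry[i+1])
            );
        end
    endgenerate
endmodule

// One-bit full adder used by the loop above.
module full_adder (input wire a, b, cin, output wire sum, cout);
    assign {cout, sum} = a + b + cin;
endmodule

Instantiating it as "ripple_adder #(.WIDTH(8)) u_add8 (...);" then yields eight instances named u_add8.gen_bits[0].u_fa through u_add8.gen_bits[7].u_fa, which is usually what people are after when they ask for a "parameterized instance name".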
On Thursday, August 6, 2015 at 9:48:11 AM UTC+8, John Miles wrote: > On Tuesday, August 4, 2015 at 4:46:49 PM UTC-7, rickman wrote: > > Not trying to give anyone grief. I'd just like to understand what > > people expect to happen with FOSS that isn't happening with the vendor's > > closed, but free tools. > > > Here's one example: during development, I'm targeting an FPGA that's several times larger than it needs to be, and the design has plenty of timing margin. So why in the name of Woz do I have to cool my heels for 10 minutes every time I tweak a single line of Verilog? > > If the tools were subject to community development, they probably wouldn't waste enormous amounts of time generating 99.9% of the same logic as last time. Incremental compilation and linking is ubiquitous in the software world, but as usual the FPGA tools are decades behind. That's the sort of improvement that could be expected with an open toolchain. > > It's as if Intel had insisted on keeping the x86 ISA closed, and you couldn't get a C compiler or even an assembler from anyone else. How much farther behind would we be? Well, there's your answer. > > -- john, KE5FX Incremental synthesis/compilation is supported by both Xilinx (ISE and Vivado) and Altera (Quartus) tools, even in the latest versions. One needs to use the appropriate switch/options. Of course, their definition of incremental compile/synthesis may not match exactly with yours. They tend to support more at the block level using partitions etc.Article: 158103
The quoted post has been turned upside-down for the purposes of my typing. On Wed, 05 Aug 2015 18:50:11 -0400, rickman wrote: > Maybe I just don't have enough imagination. A distinct possibility. On Wed, 05 Aug 2015 18:50:11 -0400, rickman wrote: > On 8/5/2015 5:30 PM, Philipp Klaus Krause wrote: >> On 05.08.2015 01:46, rickman wrote: >>> On 8/4/2015 7:05 PM, Aleksandar Kuktin wrote: >>>> >>>> Hackability. If you have an itch, you can scratch it yourself with >>>> FOSS tools. If you discover a bug, you can fix it yourself. If you >>>> want to repurpose, optimize or otherwise change the tool, you can do >>>> it with FOSS. >>> >>> That's great. But only important to a small few. Few matter. How many ISA designers are there? Yet, if they get good tools that let them creatively hack out the solution, we're all better off. Same with random dudes banging on some FPGA somewhere. You never know where the next thing you want will appear, and having good peer-reviewed tools creates more potential for good stuff to be made. >>> I use tools to get work done. I have zero interest in digging into >>> the code of the tools without a real need. I have not found any bugs >>> in the vendor's tools that would make me want to spend weeks learning >>> how they work in the, most likely, vain hope that I could fix them. Maybe you just didn't try hard enough? Maybe you did but didn't notice you found a gaping bug in vendor tools. >>> Not trying to give anyone grief. I'd just like to understand what >>> people expect to happen with FOSS that isn't happening with the >>> vendor's closed, but free tools. Maybe you would be able to generate a FPGA handheld device that can reconfigure itself on the fly. Like a smartphone^H^H^H^H^H^H^H^H^H^H PDA^H^H^H trikoder that runs on some energy-efficient MIPS and that has a scriptable (meaning CLI) synthesizer that you can feed random Verilog sources and then instantiate an Ethernet device so you can jack yourself in while at home, an FM radio to listen to while driving down the road, a TV receiver with HDMI output so you can view the news and maybe a vibrator or something for the evening. Anyway, that's what I want to have and can't right now but COULD have with FOSS tools (since I'm not gonna use QEMU to instantiate a VM so I could synthesize on my phone). >> This resulted in some quite unusual optimizations in SDCC currently >> not found in any other compiler. Okay, now we need to check out SDCC. > I think this is the point some are making. The examples of the utility > of FOSS often point to more obscure examples which impact a relatively > small number of users. I appreciate the fact that being able to tinker > with the tools can be very useful to a few. But those few must have the > need as well as the ability. > With hardware development both are less likely to happen. But they will happen nevertheless.Article: 158104
On Wed, 05 Aug 2015 15:52:52 -0700, thomas.entner99 wrote: > Of course you > would find some students which are contributing (e.g. for their thesis), > but I doubt that it will be enough to get a competitve product and to > maintain it. New devices should be supported with short delay, otherwise > the tool would not be very useful. With GCC, Linux and their ilk, it's actually the other way around. They add support for new CPUs before the new CPUs hit the market (x86_64, for example). This is partially due to hardware producers understanding they need toolchain support and working actively on getting that support. If even a single FOSS FPGA toolchain gets to a similar penetration, you can count on FPGA houses paying their own people to hack those leading FOSS toolchains, for the benefit of all.Article: 158105
On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote: > On Wed, 05 Aug 2015 15:52:52 -0700, thomas.entner99 wrote: > >> Of course you >> would find some students which are contributing (e.g. for their thesis), >> but I doubt that it will be enough to get a competitve product and to >> maintain it. New devices should be supported with short delay, otherwise >> the tool would not be very useful. > > With GCC, Linux and the ilk, it's actually the other way around. They add > support for new CPUs before the new CPUs hit the market (quoting x86_64). > This is partially due to hardware producers understanding they need > toolchain support and working actively on getting that support. If even a > single FOSS FPGA toolchain gets to a similar penetration, you can count > on FPGA houses paying their own people to hack those leading FOSS > toolchains, for the benefit of all. How will any FPGA toolchain get "a similar penetration" if the vendors don't open the spec on the bitstream? Do you see lots of people coming together to reverse engineer the many brands and flavors of FPGA devices to make this even possible? Remember that CPU makers have *always* released detailed info on their instruction sets because it was useful even if, no, *especially if* coding in assembly. -- RickArticle: 158106
rickman <gnuarm@gmail.com> wrote: > On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote: (snip) >> With GCC, Linux and the ilk, it's actually the other way around. They add >> support for new CPUs before the new CPUs hit the market (quoting x86_64). >> This is partially due to hardware producers understanding they need >> toolchain support and working actively on getting that support. If even a >> single FOSS FPGA toolchain gets to a similar penetration, you can count >> on FPGA houses paying their own people to hack those leading FOSS >> toolchains, for the benefit of all. OK, but that is relatively (in the life of gcc) recent. The early gcc were replacements for existing C compilers on systems that already had C compilers. > How will any FPGA toolchain get "a similar penetration" if the vendors > don't open the spec on the bitstream? Do you see lots of people coming > together to reverse engineer the many brands and flavors of FPGA devices > to make this even possible? Only the final stage of processing needs to know the real details of the bitstream. I don't know so well the current tool chain, but it might be that you could replace most of the steps, and use the vendor supplied final step. Remember the early gcc before glibc? They used the vendor supplied libc, which meant that it had to use the same call convention. > Remember that CPU makers have *always* released detailed info on their > instruction sets because it was useful even if, no, *especially if* > coding in assembly. If FOSS tools were available, there would be reason to release those details. But note that you don't really need bit level to write assembly code, only to write assemblers. You need to know how many bits there are (for example, in an address) but not which bits. Now, most assemblers do print out the hex codes, but most often that isn't needed for actual programming, sometimes for debugging. -- glenArticle: 158107
On 8/10/2015 2:33 AM, glen herrmannsfeldt wrote: > rickman <gnuarm@gmail.com> wrote: >> On 8/8/2015 12:51 PM, Aleksandar Kuktin wrote: > > (snip) > >> How will any FPGA toolchain get "a similar penetration" if the vendors >> don't open the spec on the bitstream? Do you see lots of people coming >> together to reverse engineer the many brands and flavors of FPGA devices >> to make this even possible? > > Only the final stage of processing needs to know the real details > of the bitstream. I don't know so well the current tool chain, but > it might be that you could replace most of the steps, and use the > vendor supplied final step. > > Remember the early gcc before glibc? They used the vendor supplied > libc, which meant that it had to use the same call convention. We have had open source compilers and simulators for some time now. When will the "similar penetration" happen? >> Remember that CPU makers have *always* released detailed info on their >> instruction sets because it was useful even if, no, *especially if* >> coding in assembly. > > If FOSS tools were available, there would be reason to release > those details. But note that you don't really need bit level to > write assembly code, only to write assemblers. You need to know how > many bits there are (for example, in an address) but not which bits. > > Now, most assemblers do print out the hex codes, but most often > that isn't needed for actual programming, sometimes for debugging. I'm not sure what your point is. You may disagree with details of what I wrote, but I don't get what you are trying to say about the topic of interest. -- RickArticle: 158108
On 09/08/2015 3:45 PM, rickman wrote: > > How will any FPGA toolchain get "a similar penetration" if the vendors > don't open the spec on the bitstream? Do you see lots of people coming > together to reverse engineer the many brands and flavors of FPGA devices > to make this even possible? > > Remember that CPU makers have *always* released detailed info on their > instruction sets because it was useful even if, no, *especially if* > coding in assembly. > This thread implies that a bitstream is like a processor ISA, and to some extent it is. FOSS tools have for the most part avoided the minor variations in processor ISAs, preferring to use a subset of the instruction set to support a broad base of processors in a family. The problem in some FPGA devices is that a much larger set of implementation rules is required to produce effective code implementations. This doesn't say that FOSS can't or won't do it, but it would require much closer attention to detail than I have seen in the FOSS tools I have looked at. w..Article: 158109
On 8/10/2015 5:07 PM, Walter Banks wrote: > On 09/08/2015 3:45 PM, rickman wrote: > >> >> How will any FPGA toolchain get "a similar penetration" if the vendors >> don't open the spec on the bitstream? Do you see lots of people coming >> together to reverse engineer the many brands and flavors of FPGA devices >> to make this even possible? >> >> Remember that CPU makers have *always* released detailed info on their >> instruction sets because it was useful even if, no, *especially if* >> coding in assembly. >> > > This thread implies that a bitstream is like a processor ISA and to some > extent it is. FOSS tools have for the avoided the minor variations in > processor ISA's preferring to use a subset of the instruction set to > support a broad base of processors in a family. > > The problem in some FPGA devices is that a much larger set of > implementation rules are required to produce effective code > implementations. This doesn't say that FOSS can't or won't do it but it > would require a much more detailed attention to detail than the FOSS > tools that I have looked at. I don't know for sure, but I think this is carrying the analogy a bit too far. If FOSS compilers for CPUs have mostly limited code to subsets of instructions to make the compiler easier to code and maintain that's fine. Obviously the pressure to further optimize the output code just isn't there. I have no reason to think the tools for FPGA development don't have their own set of tradeoffs and unique pressures for optimization. So it is hard to tell where they will end up if they become mainstream FPGA development tools which I don't believe they are currently, regardless of the issues of bitstream generation. I can't say just how important users find the various optimizations possible with different FPGAs. I remember working for a test equipment maker who was using Xilinx in a particular product. They did not want us to code the unique HDL patterns required to utilize some of the architectural features because the code would not be very portable to other brands which they might use in other products in the future. In other words, they didn't feel the optimizations were worth limiting their choice of vendors in the future. I guess that is another reason why the FPGA vendors like having their own tools. They want to be able to control the optimizations for their architectural features. I think they could do this just fine with FOSS tools as well as proprietary, but they would have to share their code which the competition might be able to take advantage of. -- RickArticle: 158110
rickman <gnuarm@gmail.com> writes: > If FOSS compilers for CPUs have mostly limited code to subsets of > instructions to make the compiler easier to code and maintain that's > fine. As one of the GCC maintainers, I can tell you that the opposite is true. We take advantage of everything the ISA offers.Article: 158111
On 11/08/15 02:51, DJ Delorie wrote: > > rickman <gnuarm@gmail.com> writes: >> If FOSS compilers for CPUs have mostly limited code to subsets of >> instructions to make the compiler easier to code and maintain that's >> fine. > > As one of the GCC maintainers, I can tell you that the opposite is true. > We take advantage of everything the ISA offers. > My guess is that Walter's experience here is with SDCC rather than gcc, since he writes compilers that - like SDCC - target small, awkward 8-bit architectures. In that world, there are often many variants of the cpu - the 8051 is particularly notorious - and getting the best out of these devices often means making sure you use the extra architectural features your particular device provides. SDCC is an excellent tool, but as Walter says it works with various subsets of ISA provided by common 8051, Z80, etc., variants. The big commercial toolchains for such devices, such as from Keil, IAR and Walter's own Bytecraft, provide better support for the range of commercially available parts. gcc is in a different world - it is a much bigger compiler suite, with more developers than SDCC, and a great deal more support from the cpu manufacturers and other commercial groups. One does not need to dig further than the manual pages to see the huge range of options for optimising use of different variants of many of the targets it supports - including not just use of differences in the ISA, but also differences in timings and instruction scheduling.Article: 158112
DJ Delorie <dj@delorie.com> wrote: > > rickman <gnuarm@gmail.com> writes: > > If FOSS compilers for CPUs have mostly limited code to subsets of > > instructions to make the compiler easier to code and maintain that's > > fine. > > As one of the GCC maintainers, I can tell you that the opposite is true. > We take advantage of everything the ISA offers. But the point is the ISA is the software-level API for the processor. There's a lot more fancy stuff in the microarchitecture that you don't get exposed to as a compiler writer[1]. The contract between programmers and the CPU vendor is the vendor will implement the ISA API, and software authors can be confident their software will work.[2] You don't get exposed to things like branch latency, pipeline hazards, control flow graph dependencies, and so on, because microarchitectural techniques like branch predictors, register renaming and out-of-order execution do a massive amount of work to hide those details from the software world. The nearest we came is VLIW designs like Itanium where more microarchitectural detail was exposed to the compiler - which turned out to be very painful for the compiler writer. There is no such API for FPGAs - the compiler has to drive the raw transistors to set up the routing for the exact example of the chip being programmed. Not only that, there are no safeguards - if you drive those transistors wrong, your chip catches fire. Theo [1] There is a certain amount of performance tweaking you can do with knowledge of caching, prefetching, etc - but you rarely have the problem of functional correctness; the ISA is not violated, even if slightly slower [2] To a greater or lesser degree - Intel takes this to extremes, supporting binary compatibility of OSes back to the 1970s; ARM requires the OS to co-evolve but userland programs are (mostly) unchangedArticle: 158113
On 11/08/15 10:59, Theo Markettos wrote: > DJ Delorie <dj@delorie.com> wrote: >> >> rickman <gnuarm@gmail.com> writes: >>> If FOSS compilers for CPUs have mostly limited code to subsets of >>> instructions to make the compiler easier to code and maintain that's >>> fine. >> >> As one of the GCC maintainers, I can tell you that the opposite is true. >> We take advantage of everything the ISA offers. > > But the point is the ISA is the software-level API for the processor. > There's a lot more fancy stuff in the microarchitecture that you don't get > exposed to as a compiler writer[1]. The contract between programmers and > the CPU vendor is the vendor will implement the ISA API, and software > authors can be confident their software will work.[2] > > You don't get exposed to things like branch latency, pipeline hazards, > control flow graph dependencies, and so on, because microarchitectural > techniques like branch predictors, register renaming and out-of-order > execution do a massive amount of work to hide those details from the > software world. As you note below, that is true regarding the functional execution behaviour - but not regarding the speed. For many targets, gcc can take such non-ISA details into account as well as a large proportion of the device-specific ISA (contrary to what Walter thought). > > The nearest we came is VLIW designs like Itanium where more > microarchitectural detail was exposed to the compiler - which turned out to > be very painful for the compiler writer. > > There is no such API for FPGAs - the compiler has to drive the raw > transistors to set up the routing for the exact example of the chip being > programmed. Not only that, there are no safeguards - if you drive those > transistors wrong, your chip catches fire. > Indeed. The bitstream and the match between configuration bits and functionality in an FPGA do not really correspond to cpu's ISA. They are at a level of detail and complexity that is /way/ beyond an ISA. > Theo > > > [1] There is a certain amount of performance tweaking you can do with > knowledge of caching, prefetching, etc - but you rarely have the problem of > functional correctness; the ISA is not violated, even if slightly slower > > [2] To a greater or lesser degree - Intel takes this to extremes, > supporting binary compatibility of OSes back to the 1970s; ARM requires the > OS to co-evolve but userland programs are (mostly) unchanged >Article: 158114
On 11/08/2015 2:32 AM, David Brown wrote: > On 11/08/15 02:51, DJ Delorie wrote: >> >> rickman <gnuarm@gmail.com> writes: >>> If FOSS compilers for CPUs have mostly limited code to subsets >>> of instructions to make the compiler easier to code and maintain >>> that's fine. >> >> As one of the GCC maintainers, I can tell you that the opposite is >> true. We take advantage of everything the ISA offers. >> > > My guess is that Walter's experience here is with SDCC rather than > gcc, since he writes compilers that - like SDCC - target small, > awkward 8-bit architectures. In that world, there are often many > variants of the cpu - the 8051 is particularly notorious - and > getting the best out of these devices often means making sure you use > the extra architectural features your particular device provides. > SDCC is an excellent tool, but as Walter says it works with various > subsets of ISA provided by common 8051, Z80, etc., variants. The big > commercial toolchains for such devices, such as from Keil, IAR and > Walter's own Bytecraft, provide better support for the range of > commercially available parts. That frames the point I was making about bitstream information. My limited understanding of the issue is that getting the bitstream correct for a specific part goes beyond making the internal interconnects functional; it extends to issues of timing, power, gate position and data loads. This is not to say that FOSS couldn't or shouldn't do it, but it would change a lot of things in both the FOSS and fpga world. The chip companies have traded speed for detail complexity, in the same way that speed has been traded for ISA use restrictions (specific instruction combinations) in many of the embedded system processors we have supported. w..Article: 158115
On 11/08/15 13:20, Walter Banks wrote: > On 11/08/2015 2:32 AM, David Brown wrote: >> On 11/08/15 02:51, DJ Delorie wrote: >>> >>> rickman <gnuarm@gmail.com> writes: >>>> If FOSS compilers for CPUs have mostly limited code to subsets >>>> of instructions to make the compiler easier to code and maintain >>>> that's fine. >>> >>> As one of the GCC maintainers, I can tell you that the opposite is >>> true. We take advantage of everything the ISA offers. >>> >> >> My guess is that Walter's experience here is with SDCC rather than >> gcc, since he writes compilers that - like SDCC - target small, >> awkward 8-bit architectures. In that world, there are often many >> variants of the cpu - the 8051 is particularly notorious - and >> getting the best out of these devices often means making sure you use >> the extra architectural features your particular device provides. >> SDCC is an excellent tool, but as Walter says it works with various >> subsets of ISA provided by common 8051, Z80, etc., variants. The big >> commercial toolchains for such devices, such as from Keil, IAR and >> Walter's own Bytecraft, provide better support for the range of >> commercially available parts. > > That frames the point I was making about bitstream information. My > limited understanding of the issue is getting the bitstream information > correct for a specific part goes beyond getting the internal > interconnects being functional and goes to issues dealing with timing, > power, gate position and data loads. > > It is not saying that FOSS couldn't or shouldn't do it but it would > change a lot of things in both the FOSS and fpga world. The chip > companies have traded speed for detail complexity. In the same way that > speed has been traded for ISA use restrictions (specific instruction > combinations) in many of the embedded system processors we have supported. > This is not really a FOSS / Closed software issue (despite the thread). Bitstream information in FPGA's is not really suitable for /any/ third parties - it doesn't matter significantly if they are open or closed development. When an FPGA company makes a new design, there will be automatic flow of the details from the FPGA design details into the placer/router/generator software - the information content and the detail is far too high to deal sensibly with documentation or any other interchange between significantly separated groups. Though I have no "inside information" about how FPGA companies do their development, I would expect there is a great deal of back-and-forth work between the hardware designers, the software designers, and the groups testing simulations to figure out how well the devices work in practice. Whereas with a cpu design, the ISA is at least mostly fixed early in the design process, and also the chip can be simulated and tested without compilers or anything more than a simple assembler, for FPGA's your bitstream will not be solidified until the final hardware design is complete, and you are totally dependent on the placer/router/generator software while doing the design. All this means that it is almost infeasible for anyone to make a sensible third-party generator, at least for large FPGAs. And the FPGA manufacturers cannot avoid making such tools anyway. At best, third-parties (FOSS or not) can hope to make limited bitstream models of a few small FPGAs, and get something that works but is far from optimal for the device. 
Of course, there are many interesting ideas that can come out of even such limited tools as this, so it is still worth making them and "opening" the bitstream models for a few small FPGAs. For some uses, it is an advantage that all software in the chain is open source, even if the result is not as speed or space optimal. For academic use, it makes research and study much easier, and can lead to new ideas or algorithms for improving the FPGA development process. And you can do weird things - I remember long ago reading of someone who used a genetic algorithm on bitstreams for a small FPGA to make a filter system without actually knowing /how/ it worked!Article: 158116
On 8/10/2015 8:51 PM, DJ Delorie wrote: > > rickman <gnuarm@gmail.com> writes: >> If FOSS compilers for CPUs have mostly limited code to subsets of >> instructions to make the compiler easier to code and maintain that's >> fine. > > As one of the GCC maintainers, I can tell you that the opposite is true. > We take advantage of everything the ISA offers. You are replying to the wrong person. I was not saying GCC limited the instruction set used, I was positing a reason for Walter Bank's claim this was true. My point is that there are different pressures in compiling for FPGAs and CPUs. -- RickArticle: 158117
On 8/11/2015 5:14 AM, David Brown wrote: > On 11/08/15 10:59, Theo Markettos wrote: >> DJ Delorie <dj@delorie.com> wrote: >>> >>> rickman <gnuarm@gmail.com> writes: >>>> If FOSS compilers for CPUs have mostly limited code to subsets of >>>> instructions to make the compiler easier to code and maintain that's >>>> fine. >>> >>> As one of the GCC maintainers, I can tell you that the opposite is true. >>> We take advantage of everything the ISA offers. >> >> But the point is the ISA is the software-level API for the processor. >> There's a lot more fancy stuff in the microarchitecture that you don't get >> exposed to as a compiler writer[1]. The contract between programmers and >> the CPU vendor is the vendor will implement the ISA API, and software >> authors can be confident their software will work.[2] >> >> You don't get exposed to things like branch latency, pipeline hazards, >> control flow graph dependencies, and so on, because microarchitectural >> techniques like branch predictors, register renaming and out-of-order >> execution do a massive amount of work to hide those details from the >> software world. > > As you note below, that is true regarding the functional execution > behaviour - but not regarding the speed. For many targets, gcc can take > such non-ISA details into account as well as a large proportion of the > device-specific ISA (contrary to what Walter thought). I'm not clear on what is being said about speed. It is my understanding that compiler writers often consider the speed of the output and try hard to optimize that for each particular generation of processor ISA or even versions of processor with the same ISA. So I don't see that as being particularly different from FPGAs. Sure, FPGAs require a *lot* of work to get routing to meet timing. That is the primary purpose of one of the three steps in FPGA design tools, compile, place, route. I don't see this as fundamentally different from CPU compilers in a way that affects the FOSS issue. >> The nearest we came is VLIW designs like Itanium where more >> microarchitectural detail was exposed to the compiler - which turned out to >> be very painful for the compiler writer. >> >> There is no such API for FPGAs - the compiler has to drive the raw >> transistors to set up the routing for the exact example of the chip being >> programmed. Not only that, there are no safeguards - if you drive those >> transistors wrong, your chip catches fire. >> > > Indeed. The bitstream and the match between configuration bits and > functionality in an FPGA do not really correspond to cpu's ISA. They > are at a level of detail and complexity that is /way/ beyond an ISA. I think that is not a useful distinction. If you include all aspects of writing compilers, the ISA has to be supplemented by other information to get good output code. If you only consider the ISA your code will never be very good. In the end the only useful distinction between the CPU tools and FPGA tools is that FPGA users are, in general, not as capable in modifying the tools. -- RickArticle: 158118
Hello, colleagues. I'm fixing issues in a design with many clocks and their interaction on an Artix-7 FPGA. In order to move data from one clock domain to another I'm using a dual-port distributed memory with synchronous read, but this design doesn't meet its timing constraints (250 MHz). The critical path consists of a flip-flop holding the read address of the distributed RAM, the distributed RAM primitive, and a read-data flip-flop. This route has a total slack of -3 ns. These two flip-flops are in neighbouring slices and the RAM primitive is only a few slices away from them, so everything should look good. But the route report of the Vivado tool shows that the source clock (of the address flip-flop) has a delay of 6 ns, the destination clock (of the data flip-flop) has a delay of 10 ns (which is strange - it's a neighbouring slice), and to meet this timing the tool routes the path from the RAM primitive to the destination flop across the entire FPGA and ends up with a 13 ns delay, which is 3 ns more than it should be. Here are my questions:
1. Why do two flip-flops (clocked by the same clock and placed in neighbouring slices) have such a big difference in clock arrival time (6 and 10 ns)?
2. Why can't the routing tool find a path with an arrival time of 10 ns when it has found one of 13 ns and the FPGA is almost empty?
3. How can I solve this problem?
4. Is there any document to read on how to deal with this kind of problem?Article: 158119
Maybe it's important: there are a lot of different clocks in the design derived from one source, and most of the clock boundary crossings are timed, i.e. the designer implied that the design tool could manage to make these crossings safe. I think this big difference in clock arrival time may be caused by attempts of Vivado to meet the requirements of such clock crossings. But I'm not sure.Article: 158120
On Tuesday, August 11, 2015 at 3:59:22 AM UTC-5, Theo Markettos wrote: > DJ Delorie <dj@....com> wrote: > > > rickman <gnuarm@....com> writes: > > > If FOSS compilers for CPUs have mostly limited code to subsets of > > > instructions to make the compiler easier to code and maintain that's > > > fine. > > > > As one of the GCC maintainers, I can tell you that the opposite is true. > > We take advantage of everything the ISA offers. > > But the point is the ISA is the software-level API for the processor. > There's a lot more fancy stuff in the microarchitecture that you don't get > exposed to as a compiler writer[1]. The contract between programmers and > the CPU vendor is the vendor will implement the ISA API, and software > authors can be confident their software will work.[2] > > You don't get exposed to things like branch latency, pipeline hazards, > control flow graph dependencies, and so on, because microarchitectural > techniques like branch predictors, register renaming and out-of-order > execution do a massive amount of work to hide those details from the > software world. > > The nearest we came is VLIW designs like Itanium where more > microarchitectural detail was exposed to the compiler - which turned out to > be very painful for the compiler writer. > > There is no such API for FPGAs - the compiler has to drive the raw > transistors to set up the routing for the exact example of the chip being > programmed. Not only that, there are no safeguards - if you drive those > transistors wrong, your chip catches fire. > > Theo > > [1] There is a certain amount of performance tweaking you can do with > knowledge of caching, prefetching, etc - but you rarely have the problem of > functional correctness; the ISA is not violated, even if slightly slower > > [2] To a greater or lesser degree - Intel takes this to extremes, > supporting binary compatibility of OSes back to the 1970s; ARM requires the > OS to co-evolve but userland programs are (mostly) unchanged One could make the analogy that a FPGA's ISA is the LUT, register, ALU & RAM primitives that the mapper generates from the EDIF. There is no suitable analogy for the router phase of bitstream generation. The router resources are a hierarchy of variable length wires in an assortment of directions (horizontal, vertical, sometimes diagonal) with pass transistors used to connect wires, sources and destinations. Timing driven place & route is easy to express, difficult to implement. Register and/or logic replication may be performed to improve timing. There are some open? router tools at Un. Toronto: http://www.eecg.toronto.edu/~jayar/software/software.html Jim BrakefieldArticle: 158121
On Tuesday, August 11, 2015 at 5:17:48 PM UTC-4, Ilya Kalistru wrote: > May be it's important: > there is a lot of different clocks in design derived from one source and most of clock boundary crossings are timed i.e. designer implied that design tool could manage to make this crossing safe. I think that this big difference of clock arrival time is may be caused by attempts of vivado to meet requirements of such clock crossing. But I'm not sure. Are these clocks only from a clock input pin of the device, or the output of a PLL/DLL? Or are these clocks generated from logic (either gates or flip flop outputs)? If these are generated clocks, there is your problem. Kevin JenningsArticle: 158122
One clock is an output clock of PCIe block and other are derived from it by MMCM. Nothing wrong here. No gating clocks, no logic on the clock path.Article: 158123
There were a lot of different clocks in the design derived from one source, and most of the clock boundary crossings were timed, i.e. the designer implied that the design tool could manage to make these crossings safe. It looks like this phenomenon was caused by attempts of Vivado to meet the requirements of crossing the interclock boundary. After I got rid of all clock crossings that are not covered by "false path" constraints, this huge clock skew disappeared and now everything is all right.Article: 158124
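For readers hitting the same problem: the fix described above amounts to letting signals cross between the derived clocks only through explicit synchronizer logic (or an asynchronous FIFO for multi-bit data), and then telling the tool that the raw crossing itself is not a timed path. A minimal sketch, with placeholder clock and signal names rather than anything from the original design:

// Two-flip-flop synchronizer for a single-bit control signal crossing
// from a clk_src domain into the clk_dst domain.
module sync_2ff (
    input  wire clk_dst,
    input  wire din,    // launched by a register in the clk_src domain
    output wire dout    // safe to use in the clk_dst domain
);
    (* ASYNC_REG = "TRUE" *) reg s0 = 1'b0, s1 = 1'b0;

    always @(posedge clk_dst) begin
        s0 <= din;   // may go metastable; resolves before s1 samples it
        s1 <= s0;
    end

    assign dout = s1;
endmodule

The crossing is then excluded from timing analysis with a constraint along the lines of "set_false_path -from [get_clocks clk_src] -to [get_clocks clk_dst]" (or set_max_delay with -datapath_only), which is what stops the router from chasing an impossible interclock requirement across the whole device.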
On 11.08.2015 08:32, David Brown wrote: > On 11/08/15 02:51, DJ Delorie wrote: >> >> rickman <gnuarm@gmail.com> writes: >>> If FOSS compilers for CPUs have mostly limited code to subsets of >>> instructions to make the compiler easier to code and maintain that's >>> fine. >> >> As one of the GCC maintainers, I can tell you that the opposite is true. >> We take advantage of everything the ISA offers. >> > > My guess is that Walter's experience here is with SDCC rather than gcc, > since he writes compilers that - like SDCC - target small, awkward 8-bit > architectures. In that world, there are often many variants of the cpu > - the 8051 is particularly notorious - and getting the best out of these > devices often means making sure you use the extra architectural features > your particular device provides. SDCC is an excellent tool, but as > Walter says it works with various subsets of ISA provided by common > 8051, Z80, etc., variants. The big commercial toolchains for such > devices, such as from Keil, IAR and Walter's own Bytecraft, provide > better support for the range of commercially available parts. > > gcc is in a different world - it is a much bigger compiler suite, with > more developers than SDCC, and a great deal more support from the cpu > manufacturers and other commercial groups. One does not need to dig > further than the manual pages to see the huge range of options for > optimising use of different variants of many of the targets it supports - > including not just use of differences in the ISA, but also differences > in timings and instruction scheduling. > I'd say the SDCC situation is more complex, and it seems to do quite well compared to other compilers for the same architectures. On one hand, SDCC always has had few developers. It has some quite advanced optimizations, but on the other hand it is lacking in some standard optimizations and features (SDCC's pointer analysis is not that good, we don't have generalized constant propagation yet, there are some standard C features still missing - see below, after the discussion of the ports). IMO, the biggest weaknesses are there, and not in the use of exotic instructions. The 8051 has many variants, and SDCC currently does not support some of the advanced features available in some of them, such as 4 dptrs, etc. I do not know how SDCC compares to non-free compilers in that respect. The Z80 is already a bit different. We use the differences in the instruction sets of the Z80, Z180, LR35902, Rabbit, TLCS-90. SDCC does not use the undocumented instructions available in some Z80 variants, and does not use the alternate register set for code generation; there definitely is potential for further improvement, but: Last time I did a comparison of compilers for these architectures, IAR was the only one that did better than SDCC for some of them. Newer architectures supported by SDCC are the Freescale HC08, S08 and the STMicroelectronics STM8. The non-free compilers for these targets seem to be able to often generate better code, but SDCC is not far behind. The SDCC PIC backends are not up to the standard of the others. In terms of standards compliance, IMO, SDCC is doing better than the non-free compilers, with the exception of IAR. Most non-free compilers support something resembling C90 with a few deviations from the standard, IAR seems to support mostly standard C99. SDCC has a few gaps, even in C90 (such as K&R functions and assignment of structs). 
On the other hand, SDCC supports most of the new features of C99 and C11 (the only missing feature introduced in C11 seems to be UTF-8 strings). Philipp