On 10/17/2016 12:15 PM, John Larkin wrote:
> On Sun, 16 Oct 2016 19:55:13 -0500, Tim Wescott
> <tim@seemywebsite.really> wrote:
>
>> On Sun, 16 Oct 2016 20:22:29 -0400, rickman wrote:
>>
>>> I found this pretty impressive. I wonder if this is why Intel bought
>>> Altera or if they are not working together on this? Ulpp! Seak and yea
>>> shall find....
>>>
>>> "Microsoft is using so many FPGA the company has a direct influence over
>>> the global FPGA supply and demand. Intel executive vice president, Diane
>>> Bryant, has already stated that Microsoft is the main reason behind
>>> Intel's decision to acquire FPGA-maker, Altera."
>>>
>>> #Microsoft's #FPGA Translates #Wikipedia in less than a Tenth of a
>>> Second http://hubs.ly/H04JLSp0
>>>
>>> I guess this will only steer the FPGA market more in the direction of
>>> larger and faster rather than giving us much at the low end of energy
>>> efficient and small FPGAs. That's where I like to live.
>>
>> Hopefully it'll create a vacuum into which other companies will grow.
>> Very possibly not without some pain in the interim. Markets change, we
>> have to adapt.
>
> The interim pain includes an almost total absence of tech support for
> the smaller users. The biggies get a team of full-time, on-site
> support people; small users can't get any support from the principals,
> and maybe a little mediocre support from distributors.
>
> That trend is almost universal, but it's worst with FPGAs, where the
> tools are enormously complex and correspondingly buggy. Got a problem?
> Post it on a forum.

The tools for the Zynq are complex. For the most part the FPGA tools are fine and no more problematic than tools for CPUs. It's the funky interface stuff inside the Zynq that makes it complex. It's new, so the tools aren't polished.

Bottom line: they aren't going to spend kilobucks supporting a sub-thousand-part user when they have mega-part users they need to support. That's pretty universal, not just FPGAs.

-- Rick C

Article: 159376
On Mon, 17 Oct 2016 03:56:46 -0400, rickman <gnuarm@gmail.com> wrote:

>On 10/16/2016 8:55 PM, Tim Wescott wrote:
[snip]
>
>I've never been clear on the fundamental forces in the FPGA business.
>The major FPGA companies have operated very similarly catering to the
>telecom markets while giving pretty much lip service to the rest of the
>electronics world.

Lattice has some pretty nice low-end parts. Altera did, too, but that'll probably change.

>I suppose there is a difference in technology requirements between MCUs
>and FPGAs. MCUs often are not even near the bleeding edge of process
>technology while FPGAs seem to drive it to some extent. Other than
>Intel who seems to always be the first to bring chips out at a given
>process node, the FPGA companies are a close second. But again, I think
>that is driven by their serving the telecom market where density is king.

MCUs have a huge development cost/cycle. FPGAs are arrays of the same thing. Being arrays, they're easier to debug/test, as well. Telecom and defense.

>So I don't see any fundamental reasons why FPGAs can't be built on older
>processes to keep price down. If MCUs can be made in a million
>combinations of RAM, Flash and peripherals, why can't FPGAs? Even
>analog is used in MCUs, why can't FPGAs be made with the same processes
>giving us programmable logic combined with a variety of ADC, DAC and
>comparators on the same die. Put them in smaller packages (lower pin
>counts, not the micro pitch BGAs) and let them to be used like MCUs.

The different MCU combinations are often the same chips with fuse-blows for the different configurations, not that this couldn't be (or isn't) done with FPGAs, as well.

>Maybe the market just isn't there. Many seem to feel FPGAs are much
>harder to work with than MCUs. To me they are much simpler.

Article: 159377
On 10/17/2016 9:13 PM, krw wrote:
> On Mon, 17 Oct 2016 03:56:46 -0400, rickman <gnuarm@gmail.com> wrote:
[snip]
>
> The different MCU combinations are often the same chips with
> fuse-blows for the different configurations, not that this couldn't be
> (or isn't) done with FPGAs, as well.

It definitely is done with FPGAs. I make a board with a Lattice part that is EOL. The part is a 3.3 volt only version, but I had added a voltage regulator so the 1.2 volt core version can be used. With this last batch of boards the 1.2 volt version was much cheaper than the 3.3 volt version. But I didn't want to have to run the tools to produce a new bit file. The FAE said he thought they might use the same die and so would have the same bit stream most likely. I found what I had to change in the tools to get it to let me download the old version of the bit stream into the 1.2 volt device and it worked. So clearly the die is the same. There is just a bonding difference that bypasses the 1.2 volt internal regulator.

-- Rick C

Article: 159378
On 18/10/16 00:45, rickman wrote:
> On 10/17/2016 6:25 AM, David Brown wrote:
>> On 17/10/16 09:56, rickman wrote:
[snip]
>>>
>>> So I don't see any fundamental reasons why FPGAs can't be built on older
>>> processes to keep price down. If MCUs can be made in a million
>>> combinations of RAM, Flash and peripherals, why can't FPGAs? Even
>>> analog is used in MCUs, why can't FPGAs be made with the same processes
>>> giving us programmable logic combined with a variety of ADC, DAC and
>>> comparators on the same die. Put them in smaller packages (lower pin
>>> counts, not the micro pitch BGAs) and let them to be used like MCUs.
>>
>> As far as I understand it, there is quite a variation in the types of
>> processes used - it's not just about the feature size. The number of
>> layers, the types of layers, the types of doping, the fault tolerance,
>> etc., all play a part in what fits well on the same die. So you might
>> easily find that if you put an ADC on a die setup that was good for FPGA
>> fabric, then the ADC would be a lot worse (speed, accuracy, power
>> consumption, noise, cost) than usual. Alternatively, your die setup
>> could be good for the ADC - and then it would give a poor quality FPGA
>> part.
>
> What's a "poor" FPGA?

What is a "good" FPGA? It has fast switching, predictable timing, low power, low cost, lots of gates, registers and memory, flexible routing, etc. A "poor" FPGA is one that is significantly worse in some or all of these features than you might otherwise expect.

> MCUs have digital and usually as fast as possible
> digital. They also want the lowest possible power consumption. What
> part of that is bad for an FPGA?

The digital parts of an MCU are fixed.
Each gate in an MCU design is a tiny fraction of the size, cost, power and latency of a logic element in an FPGA. Just compare the speed, die size and power of a hard cpu macro in an FPGA with a soft cpu on the same device - the hard macro is hugely superior in every way except flexibility.

Now, I don't have any good references for what I am writing here - just "things I have read" and "things I know about". So if you or anyone else knows better, I am happy to be corrected - and if any of this is important to you (rather than just for interest), please check it with more knowledgeable people. With that disclaimer,...

There are important differences between the die stackup for FPGA design and other types of digital logic. The most obvious feature is that for high-end FPGA's, there are many more layers in the die than you usually get for microcontrollers or even fast cpus. FPGA's need a /lot/ more routing lines than fixed digital parts. These routes are mostly highly symmetrical, and can be tightly packed because only a small fraction of them are ever active in any given design - you don't need enough power or heat dissipation for them all. On a microcontroller or other digital part, you have far more complex routing patterns, with most routes being short distance and you can have most of them active at a time. On memory parts, you have a different type of routing pattern again - few layers, with a lot of symmetry, and a lot of simultaneous switching on some of the buses.

The point is, the optimal die stackup and process technology for an FPGA is different from the optimal setup for an MCU, a memory block, analogue blocks, etc. So when you combine these, you are making a lot of compromises.

It is relatively easy and cost-effective to take a big, expensive FPGA die design and stick a little processor on it somewhere. You can spread the normal cpu routing amongst the many FPGA routing layers for better power and heat spreading. It will be a little bigger and slower than a dedicated cpu die could be, and the cost per mm² is higher - but that extra cost is small in the total cost of the chip.

But you cannot take an optimised microcontroller or cpu die design and add serious FPGA hardware to it - you simply don't have the routing space. You can add some CPLD-type programmable logic without too much extra cost (look at the AVR XMega E series, or the PSoC devices) because that kind of programmable logic puts more weight on complex logic blocks and less on the routing.

Note that flash is also a poor fit for both MCU and FPGA die stacks. For flash, you want a different kind of transistor than for the purely digital parts, you have significant analogue areas, and you need to deal with high voltages and a charge pump. The match between an MCU and flash is not too bad - so the combination of the two parts on the same die is clearly a "win" overall. But if you want the best (cheapest, fastest, highest density, lowest power) flash block, you don't mix it with a cpu on the same die - similarly if you want the best cpu block. As far as I know (and as noted above, I may be wrong), Flash FPGA devices are made with a large FPGA die and a small serial flash die packaged together.

You get the same for analogue parts. You can buy devices that are good microcontrollers with okay analogue parts built in. You can buy devices that are basically high-end analogue parts with a half-decent microcontroller tagged on.
But you /cannot/ buy a device that has high-end analogue interfaces /and/ a high-end processor or microcontroller, all on the same die.

It is just like PCB design. You do not easily match 1000V IGBT switchers, 1000-pin 0.4mm pitch BGAs, and 24-bit ADCs on the same board.

> Forget the analog. What do you
> sacrifice by building FPGAs on a line that works well for CPUs with
> Flash and RAM? If you can also build decent analog with that you get an
> MCU/FPGA/Analog device that is no worse than current MCUs.
>
>> Microcontrollers are made with a compromise. The cpu part is not as
>> fast or efficient as a pure cpu could be, nor is the flash part, nor the
>> analogue parts. But they are all good enough that the combination is a
>> saving (in dollars and watts, as well as mm²) overall.
>
> It's not much of a compromise. As you say, they are all good enough. I
> am sure an FPGA could be combined with little loss of what defines an FPGA.

As I wrote above, the compromise is significant. It is certainly worth making in some cases - and I too would like to see such combined devices. And I think we will see such devices turning up - technology progress will reduce the technical disadvantages, and economy of scale will reduce the cost disadvantages. But it is not as simple a matter as you might think.

And then, of course, there are the joys of making tools that let developers work easily with the whole system - that is not a small matter.

I believe that what we will see first is something more like the above-mentioned Atmel XMega E series, or some of the PIC devices (AFAIK), where you have a "normal" microcontroller with a bit of programmable logic. This will give designers a good deal more flexibility in their layouts. Rather than buying a part with 3 UARTs and 2 SPI where one of the SPI's shares the pins of one of the UARTs, the developer could use the chip's pin switch matrix to get all 5 interfaces at once. Some simple PLD blocks could give you high-speed interfaces without external glue logic, and they could let the chip support a wide range of timer functions without the chip designer having to think of every desirable combination in advance.

>> But I think there are some FPGA's with basic analogue parts, and
>> certainly with flash. There are also microcontrollers with some
>> programmable logic (more CPLD-type logic than FPGA). Maybe we will see
>> more "compromise" parts in the future, but I doubt if we will see good
>> analogue bits and good FPGA bits on the same die.
>
> I know of one (well one line) from Microsemi (formerly Actel),
> SmartFusion (not to be confused with SmartFusion2). They have a CM3
> with SAR ADC and sigma-delta DAC, comparators, etc in addition to the
> FPGA. So clearly this is possible and it is really a marketing issue,
> not a technical one.

No, it is a combination of many issues and compromises. When Actel saw the success of SmartFusion and thought how they could make a new SmartFusion2 family, they did not think "no one really wants analogue interfaces, so we can remove that" - they made the sacrifices needed to get the other features they needed. It was very much a compromise.

But you are right that the SmartFusion shows that combinations can be made - just as the SmartFusion2 shows that it is not a simple matter.

> The focus seems to be on the FPGA

Indeed. And that is how it (currently, at least) must be if you want decent FPGA on the device.

> , but they do give a decent amount of
> Flash and RAM (up to 512 and 64 kB respectively).
> My main issue is the very large packages, all BGA except for the
> ginormous TQ144. I'd like to see 64 and 100 pin QFPs.

The packaging is something that should be easier to change - there is no technical reason not to put the same chip in a lower pin package (as long as the package is big enough for the die and a carrier pcb, of course).

>> What will, I think, make more of a difference is multi-die packaging -
>> either as side-by-side dies or horizontally layered dies. But I expect
>> that to be more on the high-end first (like FPGA die combined with big
>> ram blocks).
>
> Very pointless not to mention costly. You lose a lot running the FPGA
> to MCU interface through I/O pads for some applications. That is how
> Intel combined FPGA with their x86 CPUs initially though. But it is a
> very pricey result.

No, it is certainly not pointless - although it certainly /is/ costly at the moment. Horizontal side-by-side packaging is an established technique, and is used in a number of high-end devices. If you have a wide and fast memory bus, then the whole thing can be much smaller, simpler and lower power if the dies are adjacent and you have short, thin traces between dies on a carrier pcb within the package. The board designer has no issues with length or impedance matching, and the line drivers are far smaller and lower power.

Vertical die-on-die stacking is a newer technology, with a good deal of research into a variety of techniques. It is already in use for symmetrical designs such as multi-die DRAM and Flash packages. But the real benefit will come with DRAM dies connected to processor or FPGA dies. Rather than having a 64-bit wide databus with powerful bi-directional drivers, complex serialisation/deserialisation hardware, PLL's, etc., a 20-bit address/command bus with tracking of pages, pre-fetches, etc., you could just have a full-duplex 512-bit wide databus and full address bus, with everything running at a lower clock rate and data lines driven over a distance of a mm or two. Total system power would be cut drastically, as would latency, and you could drop much of the complex interface and control circuitry on both sides of the link. Your DRAM starts to look more like wide tightly-coupled SRAM - your processor can drop all but its L0 cache.

There are still many manufacturing challenges to overcome, and heat management is hard, but it will come - the potential benefits are enormous.

>>> Maybe the market just isn't there. Many seem to feel FPGAs are much
>>> harder to work with than MCUs. To me they are much simpler.
>>
>> I think that is habit and familiarity - there is a lot of difference to
>> the mindset for FPGA programming and MCU programming. I don't think you
>> can say that one type of development is fundamentally harder or easier
>> than the other, but the simple fact is that a great deal more people are
>> familiar with programming serial execution devices than with developing
>> for programmable logic.
>
> The main difference between programming MCUs and FPGAs is you don't need
> to be concerned with the problems of virtual multitasking (sharing one
> processor between many tasks). Otherwise FPGAs are pretty durn simple
> to use really. For sure, some tasks fit well in an MCU. If you have
> the performance they can be done in an MCU, but that is not a reason why
> they can't be done in an FPGA just as easily. I know, many times I've
> taken an MCU algorithm and coded it into HDL. The hard part is
> understanding what the MCU code is doing.

Article: 159379
On 10/18/2016 4:35 AM, David Brown wrote:
> On 18/10/16 00:45, rickman wrote:
[snip]
>>
>> What's a "poor" FPGA?
>
> What is a "good" FPGA? It has fast switching, predictable timing, low
> power, low cost, lots of gates, registers and memory, flexible routing,
> etc. A "poor" FPGA is one that is significantly worse in some or all of
> these features than you might otherwise expect.

So which of these go to hell when you use a process in use by MCU makers?
Heck, you mention Flash putting the clock back a couple of process nodes, but that is what I am using, Lattice Flash FPGAs.

>> MCUs have digital and usually as fast as possible
>> digital. They also want the lowest possible power consumption. What
>> part of that is bad for an FPGA?
>
> The digital parts of an MCU are fixed. Each gate in an MCU design is a
> tiny fraction of the size, cost, power and latency of a logic element in
> an FPGA. Just compare the speed, die size and power of a hard cpu macro
> in an FPGA with a soft cpu on the same device - the hard macro is hugely
> superior in every way except flexibility.

None of that is relevant. The point is the process used for MCUs with Flash, RAM and analog is just as good for FPGAs if you aren't trying to be on the bleeding edge.

> Now, I don't have any good references for what I am writing here - just
> "things I have read" and "things I know about".
[snip]
>
> There are important differences between the die stackup for FPGA design
> and other types of digital logic. The most obvious feature is that for
> high-end FPGA's, there are many more layers in the die than you usually
> get for microcontrollers or even fast cpus.
[snip]

You said the key words... "high-end FPGA's" [sic]. I'm not talking about high end FPGAs. I'm talking about small parts at the low end combined with an MCU and analog. Even the Xilinx Zynq parts use very fast, very power hungry CPUs that require off chip memory. Totally different market... as usual, the telecom market.

> The point is, the optimal die stackup and process technology for an FPGA
> is different from the optimal setup for an MCU, a memory block, analogue
> blocks, etc. So when you combine these, you are making a lot of
> compromises.

Compromises, yes, "lot of"... I don't know. That's my point. They make those compromises for MCUs and seem to make it work. You have said nothing about why a useful FPGA can't be made using the same process as an MCU. You just talk about what they do when trying to squeeze every bit out of the silicon for the express purpose of large, fast FPGAs. Not every design needs large or fast.

> It is relatively easy and cost-effective to take a big, expensive FPGA
> die design and stick a little processor on it somewhere. You can spread
> the normal cpu routing amongst the many FPGA routing layers for better
> power and heat spreading. It will be a little bigger and slower than a
> dedicated cpu die could be, and the cost per mm² is higher - but that
> extra cost is small in the total cost of the chip.
>
> But you cannot take an optimised microcontroller or cpu die design and
> add serious FPGA hardware to it - you simply don't have the routing
> space.
> You can add some CPLD-type programmable logic without too much
> extra cost (look at the AVR XMega E series, or the PSoC devices) because
> that kind of programmable logic puts more weight on complex logic blocks
> and less on the routing.

This does not make sense at all. First, most CPLD type devices are actually FPGA type devices in a smaller capacity. Second, an FPGA uses die space. There is nothing magical about how much space is needed for routing or anything else. Just add a block of FPGA fabric to an MCU with an appropriate special interface and Bob's your uncle. The proof of the pudding is the fact that it has been done. My question is why this isn't done more often with a wider variety of parts, in particular more vendors.

> Note that flash is also a poor fit for both MCU and FPGA die stacks.
> For flash, you want a different kind of transistor than for the purely
> digital parts, you have significant analogue areas, and you need to deal
> with high voltages and a charge pump.
[snip]
> As far as I know (and as noted above, I may be wrong), Flash FPGA
> devices are made with a large FPGA die and a small serial flash die
> packaged together.

Yes, none of these things want to be on the same die, and yet it happens. You are mistaken about the Flash FPGAs. Only Xilinx adds a flash chip to an FPGA chip in one package. That offers little advantage. The Lattice parts have the flash on the die and offer *much* faster configuration load times, on the order of 1 ms instead of 100s of ms.

> You get the same for analogue parts. You can buy devices that are good
> microcontrollers with okay analogue parts built in. You can buy devices
> that are basically high-end analogue parts with a half-decent
> microcontroller tagged on. But you /cannot/ buy a device that has
> high-end analogue interfaces /and/ a high-end processor or
> microcontroller, all on the same die.

So?

> It is just like PCB design. You do not easily match 1000V IGBT
> switchers, 1000-pin 0.4mm pitch BGAs, and 24-bit ADCs on the same board.
>
>> Forget the analog. What do you
>> sacrifice by building FPGAs on a line that works well for CPUs with
>> Flash and RAM? If you can also build decent analog with that you get an
>> MCU/FPGA/Analog device that is no worse than current MCUs.
>
>>> Microcontrollers are made with a compromise. The cpu part is not as
>>> fast or efficient as a pure cpu could be, nor is the flash part, nor the
>>> analogue parts. But they are all good enough that the combination is a
>>> saving (in dollars and watts, as well as mm²) overall.
>>
>> It's not much of a compromise. As you say, they are all good enough. I
>> am sure an FPGA could be combined with little loss of what defines an FPGA.
>
> As I wrote above, the compromise is significant. It is certainly worth
> making in some cases - and I too would like to see such combined
> devices. And I think we will see such devices turning up - technology
> progress will reduce the technical disadvantages, and economy of scale
> will reduce the cost disadvantages. But it is not as simple a matter as
> you might think.

I don't know where you get the "significant" part. They sell literally billions of MCUs with analog on them each year.
Obviously the compromise is not so bad.

> And then, of course, there are the joys of making tools that let
> developers work easily with the whole system - that is not a small matter.

Only if you try to make it complex and ugly. Interfacing an FPGA to a CPU is not hard.

> I believe that what we will see first is something more like the
> above-mentioned Atmel XMega E series, or some of the PIC devices
> (AFAIK), where you have a "normal" microcontroller with a bit of
> programmable logic. This will give designers a good deal more
> flexibility in their layouts. Rather than buying a part with 3 UARTs
> and 2 SPI where one of the SPI's shares the pins of one of the UARTs,
> the developer could use the chip's pin switch matrix to get all 5
> interfaces at once. Some simple PLD blocks could give you high-speed
> interfaces without external glue logic, and they could let the chip
> support a wide range of timer functions without the chip designer having
> to think of every desirable combination in advance.

You mean you have seen these before Microsemi (formerly Actel) came out with their SmartFusion and SmartFusion2 devices?

>>> But I think there are some FPGA's with basic analogue parts, and
>>> certainly with flash. There are also microcontrollers with some
>>> programmable logic (more CPLD-type logic than FPGA). Maybe we will see
>>> more "compromise" parts in the future, but I doubt if we will see good
>>> analogue bits and good FPGA bits on the same die.
>>
>> I know of one (well one line) from Microsemi (formerly Actel),
>> SmartFusion (not to be confused with SmartFusion2). They have a CM3
>> with SAR ADC and sigma-delta DAC, comparators, etc in addition to the
>> FPGA. So clearly this is possible and it is really a marketing issue,
>> not a technical one.
>
> No, it is a combination of many issues and compromises. When Actel saw
> the success of SmartFusion and thought how they could make a new
> SmartFusion2 family, they did not think "no one really wants analogue
> interfaces, so we can remove that" - they made the sacrifices needed to
> get the other features they needed. It was very much a compromise.

What other features? Engineering doesn't dictate products. Marketing does. Clearly Microsemi feels there is not enough of a market to provide the "everything" chip. I've had this discussion with Xilinx and they don't say it is too hard to do. They say it makes the number of different line items they have to inventory far too large. That's not an engineering problem. That is exactly what they do with MCUs, dozens or even hundreds of different versions. It just needs to be what your company wants to do... as decided by marketing.

> But you are right that the SmartFusion shows that combinations can be
> made - just as the SmartFusion2 shows that it is not a simple matter.
>
>> The focus seems to be on the FPGA
>
> Indeed. And that is how it (currently, at least) must be if you want
> decent FPGA on the device.
>
>> , but they do give a decent amount of
>> Flash and RAM (up to 512 and 64 kB respectively). My main issue is the
>> very large packages, all BGA except for the ginormous TQ144. I'd like
>> to see 64 and 100 pin QFPs.
>
> The packaging is something that should be easier to change - there is no
> technical reason not to put the same chip in a lower pin package (as
> long as the package is big enough for the die and a carrier pcb, of course).
>
>>> What will, I think, make more of a difference is multi-die packaging -
>>> either as side-by-side dies or horizontally layered dies. But I expect
>>> that to be more on the high-end first (like FPGA die combined with big
>>> ram blocks).
>>
>> Very pointless not to mention costly. You lose a lot running the FPGA
>> to MCU interface through I/O pads for some applications. That is how
>> Intel combined FPGA with their x86 CPUs initially though. But it is a
>> very pricey result.
>
> No, it is certainly not pointless - although it certainly /is/ costly at
> the moment. Horizontal side-by-side packaging is an established
> technique, and is used in a number of high-end devices.
[snip]
> There are still many manufacturing challenges to overcome, and heat
> management is hard, but it will come - the potential benefits are enormous.

You are talking about an entirely different world than I am. You are still talking about the markets Xilinx and Altera are going for, large fast FPGAs. Only expensive parts can use multiple die packages and the large complex functions you are describing. That is exactly what I don't need or want.

Look at the data sheet for a 64 pin ARM CM3 CPU chip. You will find lots of Flash, RAM and analog peripherals. None of them work poorly. They sell TONS of them, literally. MCU makers are afraid of FPGAs and don't have access to the patents. FPGA makers have their telecom blinders on and now, with Microsoft getting into the server hardware market, that may be the next big thing for FPGAs.

That's my point. There is no reason why smaller devices can't be made like the SmartFusion and SmartFusion2. Even those parts are more FPGA than MCU with hundreds of pins and large packages. I would like to see products just like a 64 pin MCU with some analog, clock oscillators, brownout, etc. There is no technical reason why this can't be done.

-- Rick C

Article: 159380
rickman <gnuarm@gmail.com> writes:

> Even analog is used in MCUs, why can't FPGAs be made with the same
> processes giving us programmable logic combined with a variety of ADC,
> DAC and comparators on the same die. Put them in smaller packages
> (lower pin counts, not the micro pitch BGAs) and let them to be used
> like MCUs.
>
> Maybe the market just isn't there.

How many would you buy if the product existed? Do you know others with your kind of needs?

I'd say anyone who seriously wanted to do an "everything" chip thing would need to have both FPGA and MCU know-how and customers. That would mean basically Xilinx buying up some MCU company or vice versa. Doesn't seem likely. Xilinx did announce some slightly lower end parts with the new Spartan 7s (with ADC) and single core Zynqs but they aren't even close to what you want. I think Altera's Max 10 is a little bit closer (flash + ADC).

Maybe if some MCU company bought Lattice? But why would an MCU company ever think they need HW programmability if they don't know the first thing about it? I don't think Lattice has the money to buy anyone. Intel's acquisition of Altera got them more high priced chip business, which is what they seem to want instead of cheap chips.

Article: 159381
On Mon, 17 Oct 2016 04:00:55 -0400, rickman <gnuarm@gmail.com> wrote:

>On 10/16/2016 11:00 PM, quiasmox@yahoo.com wrote:
>> On Sun, 16 Oct 2016 20:22:29 -0400, rickman <gnuarm@gmail.com> wrote:
>>
>>> I found this pretty impressive.
>>
>> Translates it where? Across the room?
>> To what? Rot13?
>
>Did you read the article? They are designing Internet servers that will
>operate much faster and at lower power levels. I believe a translation
>app is being used as a benchmark. It's not like websites are never
>translated.

I did read it. It was no more specific than the headline.

-- John

Article: 159382
On 10/18/2016 6:52 AM, Anssi Saari wrote:
> rickman <gnuarm@gmail.com> writes:
[snip]
>
> I'd say anyone who seriously wanted to do an "everything" chip thing
> would need to have both FPGA and MCU know-how and customers. That would
> mean basically Xilinx buying up some MCU company or vice versa. Doesn't
> seem likely. Xilinx did announce some slightly lower end parts with the
> new Spartan 7s (with ADC) and single core Zynqs but they aren't even
> close to what you want. I think Altera's Max 10 is a little bit closer
> (flash + ADC).

Really? You have to buy an MCU company to add a CPU to an FPGA? Xilinx already makes the Zynq with an ARM. Seems adding ARMs to any digital chip these days is like falling off a log. No need to buy anything remotely like a company. Maybe a couple of good engineers from an MCU company.

I was not aware of the MAX 10. I don't peruse the Altera or Xilinx sites much anymore. The MAX 10 has an interesting ADC, but only one. It could be multiplexed and integrated to produce a pair of 48 kHz, 15 bit data streams. Not quite 16 bits, but close. No direct support for DAC and none of the other MCU features (that I could find in a brief look) like internal clock oscillator, internal POR, brownout detector, clock divider for low power operation, etc, etc, etc.

> Maybe if some MCU company bought Lattice? But why would an MCU company
> ever think they need HW programmability if they don't know the first
> thing about it? I don't think Lattice has the money to buy anyone.
> Intel's acquisition of Altera got them more high priced chip business,
> which is what they seem to want instead of cheap chips.

Or any company who makes FPGAs can integrate an ARM... oh, wait! All four of them have! Five if you count Atmel who made an old FPGA with an 8 bit processor (may have been an AVR). The problem has nearly nothing to do with integrating a digital CPU into the digital FPGA, that's falling off a log. Making it work in place of an MCU with all the features is the part that seems to be missing.

I don't know for sure why this hasn't happened to date. There are the reasons I've been told and the reasons that I think. Who knows what they really are. I believe an FPGA + a full MCU would be a big winner. I have a board design that I was going to redo as the FPGA is EOL. The combo of an MCU, small FPGA (3000 4LUTs would be gravy for the fast interfaces) and the equivalent of a 16 bit, 48 kHz stereo CODEC would be all my digital logic in one package, well, all of it if the I/Os are 5 volt tolerant, lol. I have to use a pair of largish quick switches to interface some 10 signals from a 5 volt interface. Then it needs to be in an MCU package (64/100 pin QFP and/or QFN).

The FPGA really doesn't need to be at all fast for most designs. If it ran half as fast as an FPGA in the same process node, that would handle some 99% of designs I expect. I've only worked on one where we were pushing the speed of the part and that was an existing part in a some 5-8 year old product, the TTC T-Berd.
We were asking it to handle an interface that was 4 times faster than the interface it was designed to handle originally. Usually it is more the density, that is, trying to use all the LUTs in the part. Usually there isn't enough routing to get much past 80 or 90 percent at best.

-- Rick C

Article: 159383
On 18/10/16 17:23, quiasmox@yahoo.com wrote:
> On Mon, 17 Oct 2016 04:00:55 -0400, rickman <gnuarm@gmail.com> wrote:
>
>> On 10/16/2016 11:00 PM, quiasmox@yahoo.com wrote:
>>> On Sun, 16 Oct 2016 20:22:29 -0400, rickman <gnuarm@gmail.com> wrote:
>>>
>>>> I found this pretty impressive.
>>>
>>> Translates it where? Across the room?
>>> To what? Rot13?
>>
>> Did you read the article? They are designing Internet servers that will
>> operate much faster and at lower power levels. I believe a translation
>> app is being used as a benchmark. It's not like websites are never
>> translated.
>
> I did read it. It was no more specific than the headline.

Same here. Just headlines and no content.

Article: 159384
On 10/19/2016 4:42 AM, o pere o wrote:
> On 18/10/16 17:23, quiasmox@yahoo.com wrote:
[snip]
>> I did read it. It was no more specific than the headline.
>
> Same here. Just headlines and no content.

The content is that they are building servers that are very much faster and use much less power. What other content would you like to see? The development is not done but it is pretty clear they are on to something significant. No?

-- Rick C

Article: 159385
> What other content would you like to see?

They claim something impressive ("Translate Wikipedia in less than a Tenth of a Second") but give no details about the task or the system.

If the claim is not total marketing nonsense, I would assume that they mean translating from one language to another (e.g. English to German).

From the article link (and the picture) you could also infer that one FPGA (or the card in the hand of the guy) does this. But this is simply unbelievable. So the question is: how many FPGAs are involved? Without this, the claimed time is simply not meaningful, as double the number of FPGAs will mean half the time (every Wikipedia article can be translated individually, so it is easy to execute the task in parallel...).

But I guess this is all not Microsoft's fault, but the problem of that specific link. I found the following, which gives much more insight at the end of the page:
https://www.top500.org/news/microsoft-goes-all-in-for-fpgas-to-build-out-cloud-based-ai/

There it says that 4 FPGAs (Stratix V D5, ca. 500k LE) would require 4 hours to translate Wikipedia. The 0.1 seconds are achieved with a huge cloud of such FPGA-equipped systems...

Of course still impressive, but not the same as most people might think after reading the headline. (And it also makes me wonder about the future of the Altera/Intel low cost FPGAs, when they want to sell a Stratix into every server...)

Regards,

Thomas

www.entner-electronics.com - Home of EEBlaster and JPEG Codec

Article: 159386
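For scale, the arithmetic behind those two figures, assuming the numbers in the top500 article and linear scaling across machines:

    4 hours on one 4-FPGA node  = 14,400 s
    14,400 s / 0.1 s            = 144,000x parallelism required
    144,000 nodes x 4 FPGAs     ~ 576,000 FPGAs

So the 0.1 second headline implies on the order of half a million Stratix V class devices working in parallel, not a single card - consistent with a fleet-wide deployment rather than any one board being that fast.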
May I suggest Waveme?

waveme.weebly.com

It is a new, free, GUI-based, digital timing diagram drawing software for Windows (and Linux/MacOS via Wine).

Waveme is intended primarily for documentation purposes, where a diagram can be exported (stored) to an image file (PNG, BMP or TIFF) or a PDF document.

Waveme can be used to draw waveforms (signals and buses), gaps, arrows and labels (see attached images).

Article: 159387
On 2016-10-18, quiasmox@yahoo.com <quiasmox@yahoo.com> wrote:
>
> I did read it. It was no more specific than the headline.
>

It could be that it was merely translating from wiki markup to HTML.

-- This email has not been checked by half-arsed antivirus software

Article: 159388
On Fri, 21 Oct 2016 14:48:43 -0700, wavemediagram wrote:

> May I suggest Waveme?
>
> waveme.weebly.com
>
> It is a new, free, GUI-based, digital timing diagram drawing software
> for Windows (and Linux/MacOS via Wine).
>
> Waveme is intended primarily for documentation purposes,
> where a diagram can be exported (stored) to an image file (PNG, BMP or
> TIFF) or a PDF document.

You need to add SVG or other vector formats to that list. Possibly EPS as well. Bitmap formats (PNG, BMP, TIFF) aren't really that great for exporting something that is inherently vector based.

Allan

Article: 159389
I have binding warnings (http://paste2.org/La9jIxbF) with http://paste2.org/wnHDY0g3. How do I solve them? Even if I modify the configuration as in http://paste2.org/bJZJPdWt, I get this error: http://paste2.org/BLW1yg32

Article: 159390
On Sun, 23 Oct 2016 00:15:03 -0700, Marvin L wrote:

> I have binding warnings (http://paste2.org/La9jIxbF) with
> http://paste2.org/wnHDY0g3. How do I solve them? Even if I modify the
> configuration as in http://paste2.org/bJZJPdWt, I get this error:
> http://paste2.org/BLW1yg32

The first error message:
key_expansion.vhd:46:1:warning: 's0' is not bound

Line 46 looks like this:
s0: subbytes port map ( sbox_in => tmp_w(23 downto 16) , sbox_out => subword(31 downto 24) );

Your component declaration looks ok (based on what I would expect subbytes to look like).

Did you remember to compile subbytes.vhd?

BTW, you can download free versions of AES crypto engine VHDL source in all shapes and sizes. Many of them will even work correctly.

Regards,
Allan

Article: 159391
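For anyone hitting the same warning: GHDL leaves a component instance unbound (and prints this kind of warning) when the matching entity has not been analyzed before elaboration. A minimal session might look like the following - the file and unit names are taken from this thread, but the testbench name is an assumption:

    # analyze every unit the design depends on, bottom-up
    ghdl -a subbytes.vhd
    ghdl -a round_constant.vhd
    ghdl -a key_expansion.vhd
    ghdl -a key_expansion_tb.vhd   # hypothetical testbench file

    # elaborate the top level, then run it and dump a GHW wave file
    ghdl -e key_expansion_tb
    ghdl -r key_expansion_tb --wave=key_expansion.ghw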
On Sunday, October 23, 2016 at 5:17:57 PM UTC+8, Allan Herriman wrote:
> On Sun, 23 Oct 2016 00:15:03 -0700, Marvin L wrote:
[snip]
>
> Did you remember to compile subbytes.vhd?
>
> BTW, you can download free versions of AES crypto engine VHDL source in
> all shapes and sizes. Many of them will even work correctly.
>
> Regards,
> Allan

I have solved the problem. I forgot to compile subbytes.vhd and round_constant.vhd together with key_expansion.vhd. But in GHDL, when I open the output in .ghw format I still cannot view internal signals such as w0, w1, w2, w3 and tmp_w. Why?

Article: 159392
On Sunday, October 23, 2016 at 11:49:37 PM UTC+8, Marvin L wrote:
> On Sunday, October 23, 2016 at 5:17:57 PM UTC+8, Allan Herriman wrote:
[snip]
>
> I have solved the problem. I forgot to compile subbytes.vhd and
> round_constant.vhd together with key_expansion.vhd. But in GHDL, when I
> open the output in .ghw format I still cannot view internal signals such
> as w0, w1, w2, w3 and tmp_w. Why?

I could not view the internal signals (http://i.imgur.com/w4jwnN1.png) even though I am using the proper format (*.ghw), with http://paste2.org/mVMOJZYA , http://paste2.org/vpbvXcID , http://paste2.org/1pAZac73 , http://paste2.org/FDh4c6Av and http://paste2.org/UwgnBnds

Article: 159393
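A GHW file written with ghdl -r ... --wave= normally records the whole design hierarchy, including internal signals; in GTKWave they simply don't appear until the instance tree is expanded. A typical session, with names assumed from this thread:

    ghdl -r key_expansion_tb --wave=key_expansion.ghw
    gtkwave key_expansion.ghw
    # In GTKWave, expand the hierarchy in the SST panel down to the
    # key_expansion instance, then drag w0..w3 and tmp_w into the wave pane.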
Hi, I want Verilog code for RS232.

Article: 159394
On Monday, October 24, 2016 at 8:22:59 PM UTC+10:30, korada...@gmail.com wrote:
> Hi, I want Verilog code for RS232.

https://www.inf.ethz.ch/personal/wirth/ProjectOberon/index.html

(RS232T.v and RS232R.v)

Article: 159395
cfbsoftware@gmail.com wrote:
> On Monday, October 24, 2016 at 8:22:59 PM UTC+10:30, korada...@gmail.com wrote:
>> Hi, I want Verilog code for RS232.
>
> https://www.inf.ethz.ch/personal/wirth/ProjectOberon/index.html
>
> (RS232T.v and RS232R.v)

Both the question and the answer are misusing the term RS232. RS232 is only an electrical interface standard defining the voltages used for a standard modem interface. What the Verilog code does, and I presume what the OP asked for, is called a UART. To have communication over RS232 you typically need both a UART (or USART) and an electrical interface (drivers and receivers or transceivers) to convert the LVCMOS signals to RS232 levels.

-- Gabor

Article: 159396
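To make the distinction concrete, here is a minimal sketch of the UART half - the transmitter - in Verilog, i.e. the part that belongs inside the FPGA; the RS232 part would be an external level translator such as a MAX232-class transceiver. The module and port names are illustrative, not from the Project Oberon files, and it assumes 8N1 framing:

    // Minimal 8N1 UART transmitter sketch.
    // CLKS_PER_BIT = clock frequency / baud rate,
    // e.g. 50 MHz / 115200 baud ~= 434.
    module uart_tx #(
        parameter CLKS_PER_BIT = 434
    ) (
        input  wire       clk,
        input  wire       tx_start,  // pulse for one clock with tx_data valid
        input  wire [7:0] tx_data,
        output reg        tx   = 1'b1,  // serial line idles high
        output reg        busy = 1'b0
    );
        reg [15:0] clk_cnt = 0;
        reg [3:0]  bit_idx = 0;
        reg [8:0]  shifter = 9'h1FF;   // stop bit + 8 data bits

        always @(posedge clk) begin
            if (!busy) begin
                if (tx_start) begin
                    tx      <= 1'b0;             // start bit
                    shifter <= {1'b1, tx_data};  // stop bit + data, LSB first
                    busy    <= 1'b1;
                    clk_cnt <= 0;
                    bit_idx <= 0;
                end
            end else if (clk_cnt == CLKS_PER_BIT - 1) begin
                clk_cnt <= 0;
                tx      <= shifter[0];           // next bit onto the line
                shifter <= {1'b1, shifter[8:1]};
                if (bit_idx == 9)                // start + 8 data + stop sent
                    busy <= 1'b0;
                else
                    bit_idx <= bit_idx + 1;
            end else begin
                clk_cnt <= clk_cnt + 1;
            end
        end
    endmodule

A receiver is the same idea run backwards: wait for the falling edge of the start bit, then sample each bit in the middle of its period. None of this involves RS232 voltages at all - that conversion happens entirely in the external transceiver.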
On 10/19/2016 7:47 PM, thomas.entner99@gmail.com wrote:
>> What other content would you like to see?
>
> They claim something impressive ("Translate Wikipedia in less than a Tenth
> of a Second") but give no details about the task or the system.
[snip]
> There it says that 4 FPGAs (Stratix V D5, ca. 500k LE) would require 4
> hours to translate Wikipedia. The 0.1 seconds are achieved with a huge
> cloud of such FPGA-equipped systems...
>
> Of course still impressive, but not the same as most people might think
> after reading the headline. (And it also makes me wonder about the future
> of the Altera/Intel low cost FPGAs, when they want to sell a Stratix into
> every server...)

For sure the release is short of engineering data... it *is* a marketing pitch. The point is they plan to be providing a combination of FPGA and CPU which will run much faster and use less power than the CPU alone. No, they aren't offering hard numbers, and the task of translating Wikipedia is not really the best benchmark for serving up or searching web pages. It is meant to offer a metric that even laymen can relate to. In other words, it's meant to sound good to those who would not understand more engineering information.

Microsoft has no incentive to sell FPGAs. Their incentive is to provide the software on faster hardware. If the hardware doesn't pan out, Microsoft gets nothing but expenses.

-- Rick C

Article: 159397
On 10/21/2016 5:48 PM, wavemediagram@gmail.com wrote:
> May I suggest Waveme?
>
> waveme.weebly.com
>
> It is a new, free, GUI-based, digital timing diagram drawing software
> for Windows (and Linux/MacOS via Wine).
>
> Waveme is intended primarily for documentation purposes,
> where a diagram can be exported (stored) to an image file (PNG, BMP or
> TIFF) or a PDF document.
>
> Waveme can be used to draw waveforms (signals and buses), gaps, arrows
> and labels (see attached images).

This is "free" software in the sense of "free beer", but not as in "free speech", right? It doesn't appear that there is an interest in making money from this, at least not for now. Why not make it open source?

I've seen too many special purpose graphical tools go by the wayside to consider spending time to learn a tool like this that I would only use sporadically. If this tool ends up with no support I don't think I would want to be using it unless the source were available.

I have an email program like that which I don't want to stop using because it works well and I'd have a learning curve to switch. But no more bug fixes and one of these days it won't port to the new machine.

-- Rick C

Article: 159398
On 10/24/2016 10:17 AM, rickman wrote:
> On 10/21/2016 5:48 PM, wavemediagram@gmail.com wrote:
[snip]
>
> I have an email program like that which I don't want to stop using
> because it works well and I'd have a learning curve to switch. But no
> more bug fixes and one of these days it won't port to the new machine.

Eudora?

-- Cecil - k5nwa

Article: 159399
On 10/24/2016 11:33 AM, Cecil Bayona wrote:
> On 10/24/2016 10:17 AM, rickman wrote:
[snip]
>>
>> I have an email program like that which I don't want to stop using
>> because it works well and I'd have a learning curve to switch. But no
>> more bug fixes and one of these days it won't port to the new machine.
>
> Eudora?

Yeah. I use T-bird for newsgroups, but I've never gotten used to how it would work with filters and such for my regular email. Eudora is a great program, but some day I won't be able to use it anymore.

-- Rick C