Hi all,

Do you really expect anyone will not grab as much data of interest as possible if they can? Are people really that naive? This case has been caught only because it has been done in a clumsy way - a plain HTTP transaction. What about all those "autoupdates" and stuff all the Windows etc. software does? How difficult is it to interleave the update data with the secretly sent one? How difficult is it to encrypt it a bit so no grep will catch it?

Get real: using a Wintel PC means Big Brother is watching you. Some of his smaller cousins included, probably (as in this case).

Some of those who have wondered why I am so fierce about using my own software under my own OS etc. may now begin to understand part of my motivation ...

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------

stenasc@yahoo.com wrote:
> Hi Folks,
>
> Altera does the same as well. In Quartus II, they have a feature called "TalkBack" which reports back to Altera, via an XML file, details about the software tools you are using (including synthesis, simulation, timing analysis and "others"), design constraints, IP usage, name of top level file, time of compilation etc. Also reported back are the hostid, NIC ID and C: drive info, which they state in their EULA (whoever reads that!!!) they may use to determine the identity of the user.
>
> Even if you are disconnected from the Net, all details are saved for later transmission. However, I'm sure it is all used to help the user... hmmm.
>
> Bob
>
> PS: you can disable this feature, but I'll let you read the EULA to find out how to do this.
>
> Jim wrote:
> > This afternoon I ran a flow (ISE 8, Linux) and got a warning message in the log. The word 'Warning' was a hyperlink, so I clicked on it hoping to get a more detailed description of the warning.
> >
> > A few seconds later I was surprised to find myself on the Xilinx site, on a page that displayed the full path of the file I had compiled. The path included sensitive information such as my name, the name of my employer and the code name of the project I was working on.
> >
> > Xilinx, please be more sensitive to the privacy of your customers.
> >
> > Jim
Article: 103076

Tomasz Dziecielewski <tomdz@NOSPAMmultimedia.edu.pl> wrote:
> Please excuse me posting a presumably lame question here, but despite a rather thorough search I can't find answers to two bothering questions.
>
> 1. I understand that when, say, a D flop's input changes along with the arrival of the active clock edge, the flop is likely to go metastable. But assuming it does, what will happen when no timing violations occur on the next active clock edge (i.e. the flop's input is ready and steady)? Will the flop remain metastable or will its output settle to the valid input?

The odds of the flop remaining metastable decrease with time in the absence of a clock. The arrival of the next active clock edge (with correct setup and hold time for data) will end the metastable state.

State in a flip-flop is maintained by a positive feedback loop, except during a very short time near the clock edge, when the input to the FF is transferred to the output of the FF. If the input is stable during this short time, the output will be stable after the normal propagation delay, regardless of the previous state of the FF.

> 2. What happens if a metastable flop's output is presented to the following (in a chain) flop's input? Will it go metastable too? Or is its action undefined?

There is a low probability that the following flip-flop will also go metastable. In buffered CMOS logic (almost anything modern), metastability usually shows up as a slower output time. If this slower output causes the input to the next flip-flop to change at just the wrong time, then that flip-flop may go metastable.

--
Phil Hays (Xilinx)
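A commonly quoted first-order model for this resolution behaviour is the exponential MTBF formula below. It is a general textbook expression rather than something stated in these posts, and the constants tau (the regeneration time constant) and T_0 (the effective capture window) are device-dependent numbers that would have to come from the vendor's characterization data:

\[
\mathrm{MTBF} \;=\; \frac{e^{\,t_r/\tau}}{T_0 \cdot f_{clk} \cdot f_{data}}
\]

Here t_r is the settling time allowed before the next stage samples the signal, f_clk is the sampling clock frequency and f_data is the rate of asynchronous input transitions. The exponential in the numerator is why giving a flip-flop one extra clock period to resolve - the cascaded two-flip-flop arrangement discussed in this thread - improves reliability so dramatically.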
Article: 103077

Good questions.

1. On the next clock, with valid and stable data on D, the flip-flop will react normally.

2. The metastable output of a modern CMOS flip-flop is actually not at a strange level, but it can change between 0 and 1 at an uncontrolled time. Therefore, the cascaded flip-flop has no reason to go metastable, unless (and this is very unlikely) its D input happens to change exactly at the "moment of truth" when the second flip-flop is being clocked. That moment of truth is a very tiny window, measured in femtoseconds (millionths of a nanosecond). That's why cascading two flip-flops is the standard method to effectively avoid metastability. (Avoid means: reducing its probability to a tolerably low level.)

Look for the Xilinx app note XAPP094. (Or google with my name.)

Peter Alfke, Xilinx Applications
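For reference, here is a minimal sketch of the cascaded two-flip-flop synchronizer described above, in generic VHDL. The entity and signal names are invented for illustration; a real design would normally also tag the two registers with the vendor's asynchronous-register constraint and make sure no other logic reads the first stage:

library ieee;
use ieee.std_logic_1164.all;

-- Two-flip-flop synchronizer: brings an asynchronous input into the
-- clk domain. The first flop may occasionally go metastable; the
-- second flop samples it a full clock period later, by which time
-- the first stage has almost certainly resolved to a clean 0 or 1.
entity sync_2ff is
  port (
    clk      : in  std_logic;
    async_in : in  std_logic;   -- signal from another clock domain
    sync_out : out std_logic    -- version synchronized to clk
  );
end entity sync_2ff;

architecture rtl of sync_2ff is
  signal meta_ff : std_logic := '0';  -- first stage (may go metastable)
  signal sync_ff : std_logic := '0';  -- second stage (clean)
begin
  process (clk)
  begin
    if rising_edge(clk) then
      meta_ff <= async_in;  -- may violate setup/hold: metastability risk here
      sync_ff <= meta_ff;   -- sampled one period later: risk reduced enormously
    end if;
  end process;

  sync_out <= sync_ff;
end architecture rtl;

This only works for single-bit signals (or Gray-coded buses); multi-bit values crossing clock domains need a handshake or an asynchronous FIFO instead.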
Article: 103078

An update if anyone was interested: I tried to constrain the net in the system.vhd file as stated below. These constraints were noted by the synthesizer, but for some reason the fanout is as before, according to the timing analyzer. Any thoughts?

Matt

Matt Blanton wrote:
> I want to constrain the max fanout for a particular net in my design. I am using the XPS flow, and the net I want to constrain is a BRAM address signal that is generated by XPS, during platgen I suppose. I don't think I can just go in and add attribute constraints to the system.vhd file in the hdl/ directory because those files seem to be regenerated every time. How can I constrain the fanout for this net? Thanks for any pointers.
>
> Matt

Article: 103079

"Jan Panteltje" <pNaonStpealmtje@yahoo.com> wrote in message news:e54brn$ce3$1@news.datemas.de...
> Now, to do the synthesis, many people have to buy advanced (very fast) hardware. A FPGA vendor could team up with say (for example) Sun, and you would use their server farm. The FPGA vendor would take care of all updates and software related problems transparent to the customer.

It's a very interesting idea (I was thinking about this sort of model the other day). Some of the barriers to adoption would be:

(1) Privacy concerns - do you want your company's crown jewels stored on someone else's server? I think vendors would have to guarantee not only an encrypted link, but strongly encrypted storage for your files as well (and only you have the key).

(2) Tool versions - you'd have to be sure that the vendor wasn't going to switch your build process over to a new version of XST (or whatever) without your permission, or you'll never get any stability.

(3) Availability - could you trust your FPGA vendor to provide 100% uptime? Recent experiences with companies A and X's websites point to "no". :) What about DoS vulnerabilities?

(4) Licensing - the vendor will really have you in a corner if you don't pay up (they could potentially just cut you off and confiscate all your files). Note that I'm not advocating that as a business practice, just pointing out that many people have trust issues here.

Most of those problems are things that "web application delivery" advocates are already thinking hard about anyway, so if that idea gains traction I expect these barriers will slowly fall.

When "compute" truly becomes a commodity, the most likely model would not be "you get to use your FPGA vendor's server farm", but "your FPGA vendor re-sells compute from someone else who runs a server farm, with the FPGA toolchain as a value-add". Which I think is what you meant by "teaming up". I mean, who wants to maintain a data centre / server farm on their own premises any more, anyway? It's expensive, labour intensive and unrelated to most firms' core competencies.

In conclusion, the future looks like the 1970s, just overclocked and with better hair...

-Ben-
Article: 103080

Matt, why do you think that you need to constrain the fan-out? Do you have an indication that delays are excessive?

Peter Alfke, Xilinx
Article: 103081

Peter,

Yes, the fanout is excessive. I have a fanout of 32. With 0 logic levels, the delay from that net exceeds my required period. I am hoping that reducing the fanout will reduce the delay from that net.

Matt

Peter Alfke wrote:
> Matt, why do you think that you need to constrain the fan-out? Do you have an indication that delays are excessive?
> Peter Alfke, Xilinx
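One common way to attack an excessive-fanout path is to duplicate the register that drives it, so that each copy only sees part of the load. The sketch below shows the idea in plain VHDL; it is only an illustration (the entity, the port names and the "keep" attribute follow common XST usage, but check the constraints guide for your tool version), and because it adds a register stage it also adds a clock cycle of latency - in an XPS-generated design the duplication would more likely be done at the source register inside a custom pcore, since system.vhd is regenerated by platgen:

library ieee;
use ieee.std_logic_1164.all;

-- Fanout reduction by register duplication: instead of one address
-- register driving all 32 BRAM loads, two copies each drive 16.
-- The "keep" attributes are intended to stop the synthesizer from
-- merging the two identical registers back into one.
entity addr_fanout_split is
  port (
    clk        : in  std_logic;
    addr_in    : in  std_logic_vector(9 downto 0);
    addr_out_a : out std_logic_vector(9 downto 0);  -- drives BRAM group A
    addr_out_b : out std_logic_vector(9 downto 0)   -- drives BRAM group B
  );
end entity addr_fanout_split;

architecture rtl of addr_fanout_split is
  signal addr_a, addr_b : std_logic_vector(9 downto 0);
  attribute keep : string;
  attribute keep of addr_a : signal is "true";
  attribute keep of addr_b : signal is "true";
begin
  process (clk)
  begin
    if rising_edge(clk) then
      addr_a <= addr_in;  -- copy 1
      addr_b <= addr_in;  -- copy 2, identical contents, half the fanout each
    end if;
  end process;

  addr_out_a <= addr_a;
  addr_out_b <= addr_b;
end architecture rtl;

XST also understands a MAX_FANOUT attribute/option that asks the synthesizer to do this kind of duplication automatically, which may be easier than editing generated HDL, though as noted above it did not seem to take effect when applied to system.vhd.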
Article: 103082

Yes, I use the method of adding signals to ILA through FPGA Editor all the time ... since ISE 7.1 SP4 I haven't had any problems with it ...

Kind regards,
y
Article: 103083

Hi,

What is this .ant file? I'm working with ISE 7.1. I added a test bench waveform to my project, then I chose "Generate expected simulation results" and got back the error:

Compiling vhdl file "//Server/FPGA/VHDL/prova_simulazione/test.ant" in Library work.
ERROR:HDLParsers:3264 - Can't read file "//Server/FPGA/VHDL/prova_simulazione/test.ant": No such file or directory
Parsing "test_gen.prj": 0.05
ERROR: Fuse failed

I then made another project with just one input mirrored to the output to see if I would get the same issue, and I do! What is this? Did something get broken with the ISE files?

Thanks,
Marco
Article: 103084

On Wed, 24 May 2006 21:14:17 +0200, Kolja Sulimma <news@sulimma.de> wrote:

>John Adair schrieb:
>> I don't understand where there are issues of jitter unless you use a DCM, in which case you will have issues with all Xilinx FPGAs and to varying extent other vendors too. The point of our board is that you can have a high speed clock with low jitter and not necessarily use the DCM, which does have jitter of some 10s of picoseconds.
>
>Even with a zero jitter 1GHz clock the generated delay will jitter 1ns. The output will arrive anytime in a clock period, but the output will be generated a fixed time after a clock edge. The delay is the difference between input and output. It will have +-500ps error.
>
>The 50ps jitter in the DG535 spec really means that the delay is fixed with 50ps accuracy, not only that the output time can be predicted with 50ps accuracy.

The DG535 jitter is spec'd at 60 ps trigger-to-output and is often not that good in units I've measured. Accuracy is spec'd at +-1.5 ns, interesting given the speed of the output edges.

I know of a few ways to get low delay jitter:

SRS generates delay using counters clocked by a crystal oscillator at 80 MHz, followed by an analog ramp vernier delay. At every trigger, a front-end circuit measures the time offset from the trigger to the local clock, as an analog signal, and applies it to the vernier to correct for the 1-clock jitter. This adds a lot of insertion delay, requires precise ramp calibrations, and has s/h drift problems for longer delays.

Signal Recovery (formerly EG&G) uses the fiendishly clever interrupted-ramp technique, Pepper's patent. Imagine a simple trigger + analog-ramp + comparator delay generator, very accurate and jitter-free for short delays. Now interrupt, freeze, the ramp for N cycles of a crystal-controlled clock. That extends the delay by the freeze time and adds no jitter, even though the freeze clock is unrelated to the trigger. The only serious problem is analog drift of the ramp capacitor during the freeze interval, similar to the SRS drift issue.

Several companies make (or made) a number of DDGs based on starting an oscillator at trigger time, using a digital counter for coarse delay, and an analog vernier for fine delay. To maintain jitter performance for long delays, the timing oscillator must be phase-locked to a good crystal oscillator while maintaining the original phase offset. HP, copied by LeCroy and BNC, used a heterodyne PLL technique to accomplish this.

My company uses a DSP system: after the triggered oscillator has started, we digitize its waveform using a flash ADC clocked by the crystal oscillator, figure out the phase difference, and close a digital servo loop onto the oscillator. Our technique is (he says modestly) clearly the best.

http://www.highlandtechnology.com/DSS/P400DS.html
http://www.highlandtechnology.com/DSS/V850DS.html

Both of these have jitters in the single digits of ps for delays into the tens of us, and long-term jitter limited only by the phase noise of the xo. Both use ECL for the critical signal path.

This one runs all the fast stuff through the FPGA...

http://www.highlandtechnology.com/DSS/T560DS.html

...which adds a lot of jitter from crosstalk, ground bounce, and FPGA delay variations as a function of tiny power supply changes and millikelvin temperature noise, still under 50 ps.

John
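To put numbers on Kolja's point about resynchronizing an asynchronous trigger to a free-running clock: the trigger lands with uniform probability anywhere within one clock period T_clk, so the added delay error is bounded and has a simple RMS value (this is the standard uniform-distribution result, added here for context rather than taken from the posts):

\[
\Delta t \in \left[-\tfrac{T_{clk}}{2},\; +\tfrac{T_{clk}}{2}\right],
\qquad
\sigma_{\Delta t} \;=\; \frac{T_{clk}}{\sqrt{12}} \;\approx\; 0.29\,T_{clk}
\]

For a 1 GHz clock that is +-500 ps peak and roughly 290 ps RMS, which is why every scheme described here adds some form of analog or DSP vernier on top of the digital counter.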
Article: 103085

> > Now, to do the synthesis, many people have to buy advanced (very fast) hardware.

Such nonsense. If a GHz range CPU is not enough I don't know what is. The hardware costs are negligible from a developer's point of view nowadays. Now if the software has been explicitly written in a way to clog the system, this is of course another matter...

> (1) Privacy concerns - do you want your company's crown jewels stored on someone else's server?

This is only a relatively small part of the problem. The thing is, they want to sell you something - some sort of software - which afterwards will make you pay them on a regular basis. More than that, they want to be in a position to take your IP - encrypt it as much as you like, the chip vendor can reverse engineer it any time he wants - and, if of interest, incorporate it as a "core" or whatever; you will be allowed to complain from the cold outside at will...

For me, anyone selling me a product incorporating secrets which can be used to make me pay more or control me in some other way at a later stage is in the blackmail business. Unfortunately a great part of today's industry is heading this way (PLD vendors being actually a not so big part of the phenomenon).

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------

Ben Jones wrote:
> "Jan Panteltje" <pNaonStpealmtje@yahoo.com> wrote in message news:e54brn$ce3$1@news.datemas.de...
>
> > Now, to do the synthesis, many people have to buy advanced (very fast) hardware. A FPGA vendor could team up with say (for example) Sun, and you would use their server farm. The FPGA vendor would take care of all updates and software related problems transparent to the customer.
>
> It's a very interesting idea (I was thinking about this sort of model the other day). Some of the barriers to adoption would be:
>
> (1) Privacy concerns - do you want your company's crown jewels stored on someone else's server? I think vendors would have to guarantee not only an encrypted link, but strongly encrypted storage for your files as well (and only you have the key).
>
> (2) Tool versions - you'd have to be sure that the vendor wasn't going to switch your build process over to a new version of XST (or whatever) without your permission, or you'll never get any stability.
>
> (3) Availability - could you trust your FPGA vendor to provide 100% uptime? Recent experiences with companies A and X's websites point to "no". :) What about DoS vulnerabilities?
>
> (4) Licensing - the vendor will really have you in a corner if you don't pay up (they could potentially just cut you off and confiscate all your files). Note that I'm not advocating that as a business practice, just pointing out that many people have trust issues here.
>
> Most of those problems are things that "web application delivery" advocates are already thinking hard about anyway, so if that idea gains traction I expect these barriers will slowly fall.
>
> When "compute" truly becomes a commodity, the most likely model would not be "you get to use your FPGA vendor's server farm", but "your FPGA vendor re-sells compute from someone else who runs a server farm, with the FPGA toolchain as a value-add". Which I think is what you meant by "teaming up". I mean, who wants to maintain a data centre / server farm on their own premises any more, anyway? It's expensive, labour intensive and unrelated to most firms' core competencies.
>
> In conclusion, the future looks like the 1970s, just overclocked and with better hair...
>
> -Ben-
Article: 103086

Hi Keith,

"krw" <krw@att.bizzzz> wrote in message news:MPG.1edee8d2bcf4487c9896d0@news.individual.net...
> What's so hard to understand about "hack"?

Ah, sorry, I was thinking "hack" in the "relatively sophisticated endeavor given very few resources... and you might have to crack open a book or two..." sense (like the guy who did a full-up HID USB device using only bit-banging with some low-end AVR microcontroller) rather than the "Make magazine" sense (which is closer to "kinda cool things that, yeah, your beer-guzzling neighbor down the street won't be doing anytime soon, but that smart high school kid next door could probably pull off"). :-)

I see your point more clearly now.

---Joel

Article: 103087

"John Aderseen" <John@nospam.com> wrote in message news:44755bcf$0$295$7a628cd7@news.club-internet.fr...
> Why is ISA still out there? Look at the PowerPC architecture, there's ISA in it. Look at any PC - there's ISA in it (even if the bus does not come out on connectors on the mobo, it's in the chipset).

To a certain extent, pretty much everybody who ever had to hang a handful of peripherals off of a bus has designed their own bus "standard"; it's just that some are more sophisticated than others and only a small handful ever become known outside of a given design group. The guys at nVidia, VIA, SiS, etc. have been enjoying themselves coming up with new busses between their north and south bridge chipsets for years now... :-)

Article: 103088

"dp" <dp@tgi-sci.com> wrote in message news:1148572767.071777.138910@g10g2000cwb.googlegroups.com...
> > > Now, to do the synthesis, many people have to buy advanced (very fast) hardware.
>
> Such nonsense. If a GHz range CPU is not enough I don't know what is.

And 640K will be enough for anybody...

> The hardware costs are negligible from a developer's point of view nowadays. Now if the software has been explicitly written in a way to clog the system this is of course another matter...

Have you ever tried multi-pass P&Ring a multi-million gate FPGA design with marginal timing? Plenty of people would pay good money to have their compile times dropped from 12 hours to 12 minutes if the technology were available.

> encrypt it as much as you like, the chip vendor can reverse engineer it any time he wants

Not if he doesn't have the key, he can't. Of course, the design has to be decrypted and loaded into memory for the tools to process it, I guess...

> For me, anyone selling me a product incorporating secrets which can be used to make me pay more or control me in some other way at a later stage is in the blackmail business.

That'll be those trust issues I was talking about, right there. :)

-Ben-
Article: 103089

In article <1148542032.255347.136790@i39g2000cwa.googlegroups.com>, fpgabuilder-groups@yahoo.com says...
> To each his own.
>
> PCI is just a vehicle for me to get data from the CPU to my logic. It could be anything else for that matter. Like it has been said earlier in the post, there are many vendors with standard PCI controllers that cost less than a low end Spartan. I therefore would like to see a board that uses these off-the-shelf PCI semiconductors.

They're out there. Look for prototyping boards. I had one for the PLX-9054 some time back. Programming them still isn't trivial (as ISA is).

> It would not be very productive and cost effective to integrate a 3rd party PCI core into the FPGA. Not sure with PCI Express, but it certainly isn't as easy and cheap to use as PCI.

I came to the same conclusion a few years back. Why use a hunk of an *expensive* (at the time about $1K) Virtex-E when a $25 PLX-9054 works out of the gate, sorta.

--
Keith
Article: 103090

>I think you miss the point.

No, I think you're just annoyed that I make my point forcefully.

>ESPECIALLY if the server farm was significantly faster than the normal high end PC used by (for example) you today.

Well, I use a 3GHz Pentium with 2GB RAM. It wasn't expensive. What would these servers use? Something like a 20GHz Pentium maybe? How much are those? Where do they get them?

When I compile a design on my PC, it gets 100% of the CPU right away. Would your imaginary "server farmers" be happy to have their machines idle most of the time, ready to respond like my PC does, or would my design often have to queue? Don't forget these downsides to the glorious world you imagine.

>If I really listen to your blunt remarks I almost think you have no clue about software at all (regarding the graphics remark)

Actually, I've done little but software for the last 25 years, but that's not the point. You claim you can strip down the needs of my desktop PC (presumably to something like a 286 with a few meg of RAM). Well, to reduce the cost significantly, you'll also need to take out the graphics card and some other complex bits and pieces. But I like my graphics card. It means I can have fast, hi-res graphics in colour. You can't send me those graphics as bitmaps from your imaginary servers, and I don't think you can send them effectively in any other form either, even if you allow me a more powerful processor to render the images.

Being happy with a PC isn't about feeling that you got super value-for-money. It's much more about response times and having those response times predictable. If you feel irked when Google pauses for a few seconds, think what it will be like when the same happens as you try to flick between two pages of your EDA tool.

>it would make sense to work it out in detail.

Nobody's stopping you. But I'd like to see the results, not the optimistic theory. Why not contact Larry Ellison with your ideas? After all, his was perhaps the highest-profile attempt to sell this idea of thin clients and powerful servers (though we don't hear so much of it now).
Article: 103091

Austin Lesea wrote:
> Kolja,
>
> Software is immediately available for the early access folks.
>
> The general release of Virtex-5 LX (minus the LX330) will be in the initial release of 8.2i, which is about a month away.
>
> Austin
>
> Kolja Sulimma wrote:
> > Antti schrieb:
> > > 1) Xilinx website says to the general public 'start designing' NOW. To my understanding it means that software support is available NOW, or is there any other way to see it?
> >
> > You can either download a text editor from the Xilinx website or select the "pencil and paper package" (5 weeks lead time) from the webshop to start your HDL capture now.

Why do we change SW every time a new device comes out? Every 6 months? Why can't we have just a service pack...?

Unhappy sw user
Article: 103092

MikeShepherd564@btinternet.com wrote:
> > > That, of course, is called "time sharing", and is what we used 30 years ago, before PCs arrived. Back to the future...
> >
> > Not all old ideas are bad....
>
> There's no point pretending that processing power and RAM are still expensive. They're cheap. They're very cheap. That's why we now have it locally. Do you want a server to render your graphics images, too? Or maybe you'd prefer to do everything on a command-line? Get real.

Actually, we run a lot of our software here through Terminal Services on a Win2K Server. There are lots of arguments for doing it. The high cost of processing power and RAM is not really one of them. The arguments that convinced us to do this:

1. Only need to install software on one computer rather than many. Software installation and upgrades can be a very time consuming (and expensive) process.

2. Only one central computer (or rack of computers) needs to be protected in the event of power outages or data loss. All of the terminals can be treated as disposable, not needing much performance, and in the event something happens, wiped clean or replaced.

3. I can start a job before I leave work (or from my laptop when I get on an airplane), suspend the terminal session, and log in from somewhere else (even Kinkos), on a different computer, at some point later to see the status. All of a sudden nobody needs an expensive laptop (or a tiny, not so high performance one is good enough).

4. My laptop fried a couple months back, and I was forced to think about these things. I wasn't able to simply sit down at another computer in the office and get to all of my applications and data immediately. It took 24 hours. This was an unacceptable failure point in the process.

And yes, the GUI is rendered on the server and exported to the client. We obviously don't use this strategy for running something like Solidworks (a 3D solid modeler), as it has very high bandwidth from the application to the display. Most EE and business applications do not have high bandwidth GUIs. All of the bandwidth is used to process the data sets. A Verizon EVDO cellular modem has high enough bandwidth to use our accounting software (Quickbooks Enterprise), which has a somewhat graphically intensive GUI, and exporting the display is much higher performance than querying the database server using a local copy of the software over this link (or even a hardwired LAN).

Things like JTAG emulation, device programming and debug, etc., run on clients. In the event that one wants to sit at his desk while his hardware is sitting in the lab, he can use terminal services with WinXP to run these applications. In the event that this computer melts down, it can be wiped clean, get a standard disk image, and he is up and running with the installation of a single application. Or, simpler yet, in the meantime he can grab another computer and install a single application.

Strategies like this start to make a lot of sense even if there are only two or three users of any given application, and there are at least a few different applications used by any given person.

Regards,
Erik.

---
Erik Widding
President
Birger Engineering, Inc.

(mail) 100 Boylston St #1070; Boston, MA 02116
(voice) 617.695.9233
(fax) 617.695.9234
(web) http://www.birger.com
Article: 103093

antti.tyrvainen@luukku.com wrote:
> I'm running Quartus on a remote Linux workstation and I use the Cygwin X-server on my PC. If I use a gnome desktop (xwin -query), Quartus opens nicely. But if I use a single X-terminal (ssh -X) to open Quartus, it doesn't work: I receive an empty white Quartus splash screen. The option -no_splash doesn't help; then the main window is empty white.

Could you run Quartus locally and with vpn for the remote license and CVS servers?

--
Mike Treseler
Article: 103094

On a sunny day (Thu, 25 May 2006 17:45:18 +0100) it happened MikeShepherd564@btinternet.com wrote in <l2kb72hg3ss6icmne96jvupnipjrf27ob1@4ax.com>:

>>it would make sense to work it out in detail.
>Nobody's stopping you. But I'd like to see the results, not the optimistic theory. Why not contact Larry Ellison with your ideas? After all, his was perhaps the highest-profile attempt to sell this idea of thin clients and powerful servers (though we don't hear so much of it now).

Somebody will do this, somebody who possibly has a financial interest. I for myself, me, would like to have that possibility, so I do not have to buy n packages (from n tool vendors): just open an account, use so much computing time, see the credit card bill at the end of the month. And I would have top of the line tools.

That graphics card thing you mention makes no sense to me. You are not talking about layout or something; the most detailed pics will be for the floorplanner perhaps. Hey, we can have google maps of the world, zoom in. This is the age of H264 hi-def via Internet, 1920x1280 @ 25fps.

Maybe you are still on a dialup. OK, for those it will work too: to send an 800x600 screen (all you really need) takes about 11.5 Mbit for 8 bits per channel full colour RGB, without any compression (800x600x3x8). When you zoom or move, only the changes (the new edge[s] that come[s] in) are needed. Add a bit of compression; you do not need 8-bit RGB for the floorplanner, graphs can go in gnu whatsitsname format, error messages are text, so is your listing. Hey, I am working with video, this is kids' stuff.

So, no tech limits. And yes, Sun was looking for projects for their server farms. FARM, more than one, comprendre? The soft could well split you up into several threads on several machines; I would not worry too much about speed. Add a bit of competition, and speed would go up and price (= cost) down.

From a philosophical point of view, maybe we all want our own: our own music recordings, our own car, our own whatever. And maybe they want to sell everyone that 1000$ software tool, even if you use it only every now and then. But in some cases (like we do not all have our own power plant either), systems like I proposed make sense.
Article: 103095

John,

The link does not clearly mention PCI 32/33 or PCI 64/66. Looks quite similar to the one from http://www.4dsp.com/PCI.htm. What's your experience been with it?

Thanks.

-sanjay
Article: 103096

Good god, perhaps you're suffering from aluminium poisoning from excessive wearing of your tinfoil hat, and debating you is useless, but, well, here we go anyway:

dp wrote:
> > > Now, to do the synthesis, many people have to buy advanced (very fast) hardware.
>
> Such nonsense. If a GHz range CPU is not enough I don't know what is. The hardware costs are negligible from a developer's point of view nowadays. Now if the software has been explicitly written in a way to clog the system this is of course another matter...

Well, for place and route you could certainly conceive of dedicated systems that could better tackle the problem. Any significant reduction in the time it takes to create bitfiles speeds up the design process and increases productivity. Plus you can avoid the wasting of potential compute power that you have in existing systems. It's more efficient and you have economies of scale too, so it ends up being cheaper for the user. Xilinx, being Xilinx, could even look to the possibilities of using large scale reconfigurable computers to tackle the problem.

Let me get this straight though: you believe that Xilinx have written software that deliberately disadvantages their customers by clogging up their machines? If that were so, wouldn't Altera have noticeably better performing software, or are they in on it too?

> > (1) Privacy concerns - do you want your company's crown jewels stored on someone else's server?
>
> This is only a relatively small part of the problem. The thing is, they want to sell you something - some sort of software - which afterwards will make you pay them on a regular basis. More than that, they want to be in a position to take your IP - encrypt it as much as you like, the chip vendor can reverse engineer it any time he wants - and if of interest, incorporate it as a "core" or whatever; you will be allowed to complain from the cold outside at will... For me, anyone selling me a product incorporating secrets which can be used to make me pay more or control me in some other way at a later stage is in the blackmail business. Unfortunately a great part of today's industry is heading this way (PLD vendors being actually a not so big part of the phenomenon).
>
> Dimiter

OK, again you think that Xilinx are out to get you... If remote application delivery offers increased productivity at reduced risk and cost, then at least some companies will make the not-too-great leap of faith in trusting the company to deliver honestly on their promises. The companies that make the leap to a more productive, efficient and lower risk and cost process will be more profitable, and the practice will become standard.

When you concentrate computing, you can do a better job of reducing energy use in computation, and the incentive would certainly be there, as power usage costs would be in the M$/year for remote application delivery systems. So, you see, it would be good for the environment too.

Why are you so mistrusting of Xilinx? They want to make a profit, and they won't do it by putting their customers out of business. Civilisation itself is based on trust. You can put safeguards in place to prevent fraud and theft, but at the end of the day everything we do is based on mutual trust. Think about when you walk into a bank and hand over a wad of notes to a complete stranger, someone who you've never seen before and may never see again. This doesn't seem to faze us because we, to the greater extent, trust the bank. In exchange for goods and services we're happy to accept bits of paper that correspond to nothing, printed off by the government as and when they feel like it, but we trust them to adopt a rational monetary policy, and so we trust that we'll be able to exchange the money for goods and services in turn.

Why stop at running your own software on your own OS? Why not make your own shoes in case Xilinx have bugged them too? Grow your own food, adopt your own currency for trading with other people, build your own house, I could go on...

I'm not saying power isn't abused, and that we shouldn't question those in a position of authority, but we should certainly have some perspective...

Robin
Article: 103097

> ..... Of course, the design has to be decrypted and loaded into memory for the tools to process it, I guess...

You got that right :-). And even without the sources - just from the bitstream - the device manufacturer would have no trouble at all understanding the design. They have all the information about the chips they make, you know.

> > For me, anyone selling me a product incorporating secrets which can be used to make me pay more or control me in some other way at a later stage is in the blackmail business.
>
> That'll be those trust issues I was talking about, right there. :)

It boils down to one single thing: does the chip vendor have access to your programming data or not. If you use a Wintel PC, the chip manufacturer typically has access to your data. Has had for over a decade, to be more precise. You may still be able to protect your data if you never connect the PC to a network - and make sure there is no wireless hardware beyond your control on it - and never exchange disks with other systems - etc.

Dimiter

------------------------------------------------------
Dimiter Popoff               Transgalactic Instruments
http://www.tgi-sci.com
------------------------------------------------------

Ben Jones wrote:
> "dp" <dp@tgi-sci.com> wrote in message news:1148572767.071777.138910@g10g2000cwb.googlegroups.com...
>
> > > > Now, to do the synthesis, many people have to buy advanced (very fast) hardware.
> >
> > Such nonsense. If a GHz range CPU is not enough I don't know what is.
>
> And 640K will be enough for anybody...
>
> > The hardware costs are negligible from a developer's point of view nowadays. Now if the software has been explicitly written in a way to clog the system this is of course another matter...
>
> Have you ever tried multi-pass P&Ring a multi-million gate FPGA design with marginal timing? Plenty of people would pay good money to have their compile times dropped from 12 hours to 12 minutes if the technology were available.
>
> > encrypt it as much as you like, the chip vendor can reverse engineer it any time he wants
>
> Not if he doesn't have the key, he can't. Of course, the design has to be decrypted and loaded into memory for the tools to process it, I guess...
>
> > For me, anyone selling me a product incorporating secrets which can be used to make me pay more or control me in some other way at a later stage is in the blackmail business.
>
> That'll be those trust issues I was talking about, right there. :)
>
> -Ben-
Article: 103098

What device, and what delay at fanout=32?

Peter Alfke
Article: 103099

Even though DXP2004 doesn't support the Spartan-3/Virtex-4 architectures (those are supported only in Altium Designer 6), it is still possible to use DXP2004/Nexar designs targeting Spartan-3 or Virtex-4.

Here is a working example (DXP and ISE projects):

http://hydraxc.xilant.com/CMS/index.php?option=com_remository&Itemid=41&func=fileinfo&id=9

Direct link to the project archive:

http://hydraxc.xilant.com/downloads/XCAPP005.zip

The project is just the original "MorseCode TSK51" demo from DXP2004. DXP is used to generate the EDIF files and the hex code for the TSK51 (8051 core) processor, then ISE is used to make the bit file and init the code memory block. This example also shows how to use the processor IP cores that ship with DXP2004 outside the DXP environment.

The demo does work, it beeps some morse code on my desk :)

Antti