In article <3501b34d.270562177@news.netcomuk.co.uk>, Peter <z80@ds2.com> wrote: >It is not crashproof, of course. It is easily crashed when there are >hardware-related problems. It is very hard to crash it with most >normal windoze apps though. This is where NT differs from Unix. NT is designed to resist standard bugs in standard applications. Unix is designed to resist malicious attacks in handmade applications. This is why most people experience much better stability with Unix (especially, but not only, Linux) than with NT. Of course there must be, and there are, NT configurations that truly work and are stable. Just a matter of luck. If you are in such a situation, consider yourself lucky. --Thomas PorninArticle: 9276
Thank you for the replies, Gavin and Peter. I managed to find the reason for the VL malfunction. The problem was a path that was too long. I had a path about 180 chars long. None of the old programs I was using complained about it. VL was the first one. I had to shorten the path (now ~65 chars). Since then all works perfectly. Probably VL has a workspace for the path limited to 128 chars (the DOS <=6.22 limit IIRC). Hope it helps someone. Regards Chris -- Christopher Rozniak Gdansk, Poland, Europe, Earth E-mail: k.rozniak@XXX.ien.gda.pl remove anti-spam XXX. to emailArticle: 9277
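[Editor's note: the failure mode Chris describes is easy to sketch. The 128-character figure is his guess, not a documented ViewLogic constant; this hypothetical check just shows the kind of guard a legacy tool with a fixed-size path buffer is missing.]

```python
import os

# Hypothetical limit: Chris suspects VL reserves 128 chars for the path
# (the old DOS <=6.22 ceiling). The number is his guess, not a
# documented ViewLogic constant.
LEGACY_PATH_LIMIT = 128

def path_fits(path, limit=LEGACY_PATH_LIMIT):
    """True if `path` would fit in a legacy fixed-size path buffer."""
    return len(path) < limit

if __name__ == "__main__":
    path = os.environ.get("PATH", "")
    if not path_fits(path):
        print("warning: PATH is %d chars; legacy tools may truncate it"
              % len(path))
```

A ~180-char path (Chris's original) fails the check; his shortened ~65-char path passes.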
> paul: > : I'd like to find the file format for the symbols created in Viewdraw. > : These are ASCII files, and I've already done some reverse-engineering > : and found some of the info. However, as long as this format has been > : around, I'd be surprised if the information isn't already available. > > rk: > and i'd like to have a good definition of the sch and wir files too. i'm > writing a program to do some automagic and semi-automagic editing of a > design. dh: I've written several tools that manipulate viewlogic files: symbol creators, extensions to put PAL-like equations directly on the schematic page, and a net-list flattener with mapping control. The viewlogic file format is pretty straightforward and only takes a few minutes to figure out most of the details. If you want any of these tools, send me email. -- Don Husby <husby@fnal.gov> Phone: 630-840-3668 Fermi National Accelerator Lab Fax: 630-840-5406 Batavia, IL 60510Article: 9278
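[Editor's note: the "few minutes to figure out" approach Don describes usually amounts to treating the file as line-oriented ASCII records keyed by their first token. A sketch in that spirit; the record types here ("N", "C") are made-up stand-ins, since the thread never spells out the real ViewDraw grammar.]

```python
# Hypothetical record types: "N" (net) and "C" (component) are
# illustrative only, not the actual ViewDraw sch/wir grammar.
# The technique is the point: dispatch on each line's first token.
def parse_records(text):
    """Group whitespace-separated record lines by their leading token."""
    records = {}
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue  # skip blank lines
        records.setdefault(fields[0], []).append(fields[1:])
    return records
```

Once the records are grouped this way, reverse-engineering reduces to diffing files before and after small edits in the schematic editor.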
Harris have a few: CD22100 CMOS 4 x 4 Crosspoint Switch with Control Memory High-Voltage Type (20V Rating) CD74HC22106, CD74HCT22106 QMOS 8 x 8 x 1 Crosspoint Switches with Memory Control CD22M3493 12 x 8 x 1 BiMOS-E Crosspoint Switch CD22M3494 16 x 8 x 1 BiMOS-E Crosspoint Switch CD22101, CD22102 CMOS 4 x 4 x 2 Crosspoint Switch with Control Memory HA4404B 330MHz, 4 x 1 Video Crosspoint Switch with Tally Outputs HA4314B 400MHz, 4 x 1 Video Crosspoint Switch HA4201 480MHz, 1 x 1 Video Crosspoint Switch with Tally Output HA4344B 350MHz, 4 x 1 Video Crosspoint Switch with Synchronous Controls HA4244 480MHz, 1 x 1 Video Crosspoint Switch with Synchronous Enable HA456 120MHz, Low Power, 8 x 8 Video Crosspoint Switch (14 pages) FN4153.1 www.semi.harris.com Andy.Article: 9279
gogo@netcom.com (R. Mark Gogolewski) writes: > Rick, > > Great points. > > You are very correct on the following two things: > > [] A port from any other Unix OS to Linux is essentially cake. > > [] Unix houses can very easily switch to Linux while NT houses > would have more difficulty. > > BTW, I _love_ the idea of Linux. However, if I ran a pure > Solaris group, the only $$ I would really save would be on hardware. > The EDA software won't get cheaper on Linux. The admin support > won't get cheaper, etc., etc. In fact, I'll have to support > both for awhile when switching, so my admin costs go up. Well, I've been waiting for a thread like this for 4 years now! :-) Our group is designing some really weird (superconductive) circuitry and we use both our specialized tools: physical-level simulation, inductance extractor, etc., AND standard tools that we "borrow" from what the semiconductor industry has: schematic capture, layout editor, VHDL simulator (no synthesis yet). I have been using Linux on my desktop machine since v0.97, and now we have a Cadence license (really cheap since we are a university lab) to perform all standard tasks (well, it took us probably a year and a half to customize Cadence for our technology!). Cadence runs on HP machines (HP also offers nice educational discounts). Anyway, my experience shows that when you take a bare-bones HP box (or SUN; SGI is nicer, but it is not a well-supported EDA platform either) you have to spend lots and lots of effort (and/or money) if you want to use it as your desktop machine. I mean, you have to install gcc (or buy an expensive C compiler from HP), one can insist on having emacs, TeX (hey, we need to report our achievements once in a while!), presentation graphics software, etc, etc, and Netscape on top of that! 
If you are connected to the network, you need to install all the latest security patches; when you get a nice large monitor it might not be supported by the proprietary X server and you need to get a newer one somewhere (helped one EE guy to do that on his Solaris). * The point is that when you install Linux you have all this included BY * DEFAULT, you can easily upgrade the system or any system component * whenever you want and as often as you want. And you can have your familiar OpenLook or CDE or a bunch of other window managers (including one with Win95 look-and-feel!) instantly! And if someone wants, he can have a commercial-grade relational database (Postgres) up and running for free. And httpd, etc. So, it is not just hardware cost: any default distribution of Linux is much more user-friendly (in the nice UNIX meaning of the word ;-) ) than a standard HP or SUN distribution, and you also have to count the cost of making your proprietary UNIX box as convenient! Anyway, just sharing some experience. > > I think this is one reason that we do not see large groups in > many companies strongly pushing global adoption of Linux. Small > groups and individuals are pushing it because you can buy a sweet > Linux machine for less than $5K, and a Solaris machine is going > to be quite a bit more. You are right! Paul -- ("`-''-/").___..--''"`-._ UNIX *is* user-friendly, he is just very `6_ 6 ) `-. ( ).`-.__.`) picky about who his friends are... (_Y_.)' ._ ) `._ `. ``-..-' Paul Bunyk, Research Scientist _..`--'_..-_/ /--'_.' ,'art by (and part-time UN*X sysadm) (il),-'' (li),' ((!.-' F. Lee http://pbunyk.physics.sunysb.edu/~paulArticle: 9280
rk wrote: > > someone said: > : >>4) Stability. Linux typically has uptimes measured in months while NT > : >>crashes about twice a week (in my experience, and in the experience of > : >>others I've talked to). > > peter: > : This is simply not true. I am the last one to defend MS but NT is very > : reliable. I use it all day. If you find regular crashes, as some > : people indeed do, you very probably have hardware problems, or you > : need a decent UPS. > > rk: > at day job, i'm running two NT machines. basically never shut them off and > rarely do they get in trouble. however, bad software can crash them, it's > NOT crashproof, but that's another story. I am primarily a UNIX developer; so I have had to rely on NT experts to get me past some wicked problems. They made an interesting observation: if you develop your application within the strict confines of the "Microsoft path," you should have no problem; but if you try to do something really innovative that they haven't thought of, you are going to get into lots of trouble (which apparently was why my problems were so wicked :-). I draw from that: if you basically build an app based on Microsoft's MFC classes, you're okay. If you try to use their POSIX personality (I tried), you get into a lot of trouble (which I've had). In spite of commercial portability packages for UNIX->NT, people tell me that you ultimately will have to port your UNIX/POSIX interfaces to WIN32, or face big performance losses. > someone said: > : >>5) Performance. When NT is doing any kind of disk access it seems to > be > : >>very unresponsive. > > peter: > : Not true. > > rk: > we haven't witnessed this problem. in fact, our #1 test engineer (day job) > remarked positively how fast the NT machine seemed to run. was using it > for real-time data acquisition and storing to disk (and we considered just > adding more memory and storing to ram but having the disk grind away wasn't > an observed problem). NT can be very fast. 
If your machine is constantly focused on doing the same thing repeatedly, and your program and data buffers are paged/swapped in, yes, it will be very fast. (But then, under those same conditions, any modern demand paged OS with enough memory will be fast. This argues that simulation should run equally well on both UNIX and NT, given enough memory.) But when you try to switch between tasks, or worse still, create new ones, there is a marked sluggishness. Here, Microsoft will argue that you should be using threads. But now we're talking about debugging asynchronous behavior... which I started to write about, but I'll spare you... for now. :-) --RickArticle: 9281
In article <01bd47c5$c95a2720$6e84accf@homepc>, rk <stellare@erols.com.NOSPAM> wrote: >andrew: >: I suspect that part of the problem is that EDA companies want to charge >: UNIX CAD prices for Linux software, rather than NT CAD software prices >: which seem to be much less (for less functionality in most cases). > >rk: >thought i just read in ee times that synopsys will charge same $ for unix >and NT (marvelous). also said they would not do linux, no customer demand. > interHDL supports all of its products including Verilint and the Verilog/VHDL bidirectional translators on Linux. Eli -- Eli Sternheim interHDL, Inc. 4984 El Camino Real, Suite 210 Los Altos, CA. 94022-1433 phone: 650-428-4200 fax: 650-428-4201 email: eli@interhdl.comArticle: 9282
rk wrote: > > unix versions of software have traditionally cost far more than the dos/win > versions. yes, same code and all (in principle). but it is a different > market. and, i believe, it will continue to be a different market. The embedded posts were getting kinda long, so I'll apologize up front. Pricing. This is an interesting artificial, obsolete separator between UNIX and PC platforms. I am arguing here for equivalent application pricing for UNIX and NT systems. When PCs were limited to 4 MB of memory, there was a case for it. However, Microsoft is clearly trying to tell us that UNIX and NT are of the same class. For software licensing purposes, we should take their word for it. (Clearly, that is where their design objectives are.) In earlier times, the key principle for what to charge was the amount of work your system could do. (Ideally, equal dollars for equal work.) About 20 years ago, on centralized mainframes, you could rent out a large application, and charge the computer center according to metered usage. Alternatively, you could lease it according to the power of the mainframe. The latter model continued onto Sun workstations. Of course, the software was licensed against a particular hardware key that you had to report. Built into the key was the class of machine. As a result, the vendor could verify you were really running on a Sun-3/50 and not a 3/280, and charge you accordingly. With PCs, this model has broken down. PC manufacturers want us to believe that their machines perform as well as traditional workstations. Furthermore, today you might have a 150 MHz machine; tomorrow, it might be 333 MHz. You can no longer base a software license fee on machine power. Furthermore, human productivity is increasingly the gating item. The speed of your machine is less a factor than the functionality of your OS and toolset. (Yes, I know. Those really big simulations will forever be CPU-bound.) 
If we believe Microsoft, this argues that tools for NT and UNIX should have equivalent pricing. Take *that* message to your software vendor. (Hmm... can of worms here. You bought the cheaper PC product, you say?) From the software vendor's point of view, pricing should reflect development and support costs. If it is the same on both NT and UNIX, then it should be priced the same. If it is not, you are getting ripped off on one platform or the other. (Oh, BTW, compilers and debuggers come free on Linux and FreeBSD, not to mention the OS. :-) --Rick Kwan humble developer (today, anyway :-)Article: 9283
Andrew Phillips wrote: - -Hi, - -Are you looking for information about the Texas Instruments TMS320C6x -digital signal processors? Please check out my website: - -http://www.scs.ch/~andrew/c6x.html - -Here you'll find the latest documentation and silicon availability -information. There is plenty of stuff about doing hardware and software -design with these processors, some application notes and a comprehensive -bug list. Also, stacks of info about commercially available 'C6x -processor boards and lots of other stuff ..... - -Have a look and please send me any comments. Don't forget to join my -mailing list if you want to be notified when the site is updated ... - -Cheers, - -Andrew Phillips -Supercomputing Systems AG -Zurich, Switzerland How about a valid URL?Article: 9284
In article <34FEEFBC.2530@home.sleeping>, me@home.sleeping says... > Andrew Phillips wrote: > - > -Hi, > - > -Are you looking for information about the Texas Instruments TMS320C6x > -digital signal processors? Please check out my website: > - > -http://www.scs.ch/~andrew/c6x.html > - <snip> > > How about a valid URL? > URL works fine for me. -- Casey Lang clang@mindspring.com Opinions expressed are mine alone... unless you agree.Article: 9285
Hi, all, Does anyone know if tools exist for using Java as a Verilog PLI, or if it's even possible? How about using C++ classes in PLI code? I need them to model some complex telecomm devices and it's very AWKWARD to use C and downright impossible to use vhdl/verilog. As an observation: EDA languages (vhdl/Verilog) and tools (Modelsim, VSS, Cadence's tools) seem way behind the software world. Formal analysis tools are useful to verify the gate level against rtl designs. But to do system modeling or verify rtl designs, more powerful languages such as Java or C++ would be greatly helpful. Any info or leads will be greatly appreciated. Comments are also welcome. Jian Zhang ASIC Designer Petaluma, CAArticle: 9286
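[Editor's note: short of a Java PLI, the usual workaround is to keep the device model in a general-purpose, class-based language and let a thin C PLI shim call into it. As a sketch of the kind of stateful telecomm model Jian means — the choice of a CRC-16/XMODEM generator (poly 0x1021) is the editor's illustration, not anything from the thread, and the class is not tied to any simulator's API:]

```python
# Illustrative class-based device model: a bit-serial CRC-16/XMODEM
# generator (polynomial 0x1021, init 0x0000, MSB first). Keeping the
# state in an object is exactly what's awkward in plain C PLI code.
class Crc16Xmodem:
    POLY = 0x1021

    def __init__(self):
        self.crc = 0x0000

    def feed(self, byte):
        """Clock one byte through the CRC register; return new CRC."""
        self.crc ^= byte << 8
        for _ in range(8):
            if self.crc & 0x8000:
                self.crc = ((self.crc << 1) ^ self.POLY) & 0xFFFF
            else:
                self.crc = (self.crc << 1) & 0xFFFF
        return self.crc

    def feed_bytes(self, data):
        for b in data:
            self.feed(b)
        return self.crc
```

A C-language PLI task could hold one such object per device instance and forward each bus transaction to `feed`, keeping the protocol logic out of the HDL testbench.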
rk: : >at day job, i'm running two NT machines. basically never shut them off and : >rarely do they get in trouble. however, bad software can crash them, it's : >NOT crashproof, but that's another story. peter: : It is not crashproof, of course. It is easily crashed when there are : hardware-related problems. It is very hard to crash it with most : normal windoze apps though. rk: hi peter, in this case, it was software. learning how to access our custom hardware on the isa bus, had some oopses. but if the software is good, the machines seem to be quite stable, don't have many problems at all. and the crash protection in nt is normally quite good. -------------------------------------------------------------- rk "there's nothing like real data to screw up a great theory" - me (modified from original, slightly more colorful version) --------------------------------------------------------------Article: 9287
peter: : >It is not crashproof, of course. It is easily crashed when there are : >hardware-related problems. It is very hard to crash it with most : >normal windoze apps though. thomas: : This is where NT differs from Unix. NT is designed to resist standard bugs : in standard applications. Unix is designed to resist malicious attacks in : handmade applications. This is why most people experience much better : stability with Unix (especially Linux but not only) than with NT. rk: perhaps current unix implementations are. back in the '80s, when unix wasn't exactly a spring chicken, unix security and stability and system availability was basically a joke. but over the years things get better. and, most likely, they will for nt too. but my observations is that working hardware with working software on NT results in a stable system that runs without problems, the original point of this sub-thread. thomas: : Of course there must be, and there are, NT configurations that truly work : and are stable. Just a matter of luck. If you are in such a situation, : consider yourself as lucky. rk: well, guess i'm lucky then. ;)Article: 9288
In article <34FEE93E.9EC4786F@ix.netcom.com>, Rick Kwan <rkwan@ix.netcom.com> wrote: >Pricing. This is an interesting artificial, obsolete separator >between UNIX and PC platforms. I am arguing here for equivalent >application pricing for UNIX and NT systems. <snip> >From the software vendor's point of view, pricing should reflect >development and support costs. If it is the same on both NT >and UNIX, then it should be priced the same. If it is not, >you are getting ripped off on one platform or the other. That is definitely the crux. As to "rk"'s points, there is certainly a market out there to lower performance/cheaper software. That makes the development/support costs much less. You should get what you pay for. >(Oh, BTW, compilers and debuggers come free on Linux and >FreeBSD, not to mention the OS. :-) Yeah, and that represents < 5% of development costs. I would find it difficult to argue that any specific platform significantly changes the development costs vis-a-vis compilers, debuggers, etc... MarkArticle: 9289
In comp.arch.fpga Peter <z80@ds2.com> wrote: : There are lots of stupid things in NT but none of them make it in any : way unsuitable for EDA. I would have to disagree with that statement for the following reason: When you do EDA, you don't just use a tool in isolation. You use it with a host of other tools, and those tools are either broken, nonexistent, or only occasionally present on an NT box. They work on UNIX. (I use NetBSD UNIX, www.netbsd.org for more info.) The tools I refer to are: CVS diff gcc emacs/vi make perl shell scripts screen grep and many more... One of the most important tools I use is X windows. It does me absolutely no good to have an NT box or server across campus. However, if I have a UNIX box, I just log in remotely and run my CAD tool while displaying the application GUI on my local X server. Also, the beauty of the UNIX system is that you can quickly build tools from smaller tools. Case in point: while typing this, I am debugging a physical layout by extracting a netlist from it and running the simulator based on a model which I created in Verilog. The Verilog is written by hand and also automatically generated by a perl script. We also automatically run regressions on the design to ensure that DRC is satisfied and that our new changes still agree with the spec. The whole process is automated by "make", and we use CVS to coordinate work between developers. Our results are on the web page in real time - all UNIX. And I set it up myself in a few hours' time. I think this is the real NT/UNIX issue. And now that UNIX (Free UNIX) is as capable on the workstation and small server level as the big UNIX iron, it's accessible to a small company or single consultant/developer. If the CAD vendors build it, they will win that (perhaps small margin) market. Do they want it? To me, an NT port makes no sense and I can't imagine how anyone coordinates a full design on one of those boxes. Is it actually being done? 
Every company that I have worked for or looked into is using UNIX for all things related to design and doesn't appear to be wanting to change. Questions that I have: Is there any chip maker that is using NT? Who? What project? Can one coordinate a whole design on NT? How do you do DA with so many missing or buggy ports? What about compute servers? Can you fill a machine room with headless NT and have people farm jobs off to them? Is the cost actually effective? The incremental admin cost has got to be so much greater than a UNIX box... is that true? MarkArticle: 9290
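[Editor's note: the regression flow Mark describes — regenerate a netlist, compare against a golden copy, flag drift — reduces to a comparison step that is easy to script. A minimal sketch of just that step, independent of any particular extractor, simulator, or the perl/make glue he actually uses; the "NET ..." line format in the test is made up for illustration:]

```python
def netlist_drift(golden, actual):
    """Return (line_number, golden_line, actual_line) triples where the
    regenerated netlist disagrees with the golden copy. Blank lines are
    ignored so cosmetic whitespace doesn't fail the regression."""
    g = [l for l in golden.splitlines() if l.strip()]
    a = [l for l in actual.splitlines() if l.strip()]
    drift = []
    for i in range(max(len(g), len(a))):
        gl = g[i] if i < len(g) else "<missing>"
        al = a[i] if i < len(a) else "<missing>"
        if gl != al:
            drift.append((i + 1, gl, al))
    return drift
```

Wired into a `make` target, a non-empty result fails the build, which is all a nightly regression needs.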
<snip> : > rk: : > at day job, i'm running two NT machines. basically never shut them off and : > rarely do they get in trouble. however, bad software can crash them, it's : > NOT crashproof, but that's another story. rick kwan (also an rk) : I am primarily a UNIX developer; so I have had to rely on NT experts to : get me past some wicked problems. They made an interesting observation: : if you develop your application within the strict confines of the : "Microsoft path," you should have no problem; but if you try to do : something really innovative that they haven't thought of, you are going : to get into lots of trouble (which apparently was why my problems were : so wicked :-). : : I draw from that: if you basically build an app based on Microsoft's : MFC classes, you're okay. If you try to use their POSIX personality : (I tried), you get into a lot of trouble (which I've had). In spite of : commercial portability packages for UNIX->NT, people tell me that you : ultimately will have to port your UNIX/POSIX interfaces to WIN32, or : face big performance losses. rk: our application was written in pascal (delphi version). got into a tad of trouble, worked out relatively quickly (< 24 hours of clock time), interfacing to our custom h/w on the backplane. it ran well and faster than we expected. and handled rather large volumes of data. someone said: : > : >>5) Performance. When NT is doing any kind of disk access it seems to be : > : >>very unresponsive. peter: : > : Not true. rk: : > we haven't witnessed this problem. in fact, our #1 test engineer (day job) : > remarked positively how fast the NT machine seemed to run. was using it : > for real-time data acquisition and storing to disk (and we considered just : > adding more memory and storing to ram but having the disk grind away wasn't : > an observed problem). rick kwan (the other rk): : NT can be very fast. 
If your machine is constantly focused on : doing the same thing repeatedly, and your program and data : buffers are paged/swapped in, yes, it will be very fast. : (But then, under those same conditions, any modern demand paged : OS with enough memory will be fast. This argues that simulation : should run equally well on both UNIX and NT, given enough memory.) rk: i would expect that a large simulation run would have the app in electronic memory as well as the data. and an emphasis, i would think, should be on a good, large cache as well as LOTS of ram, since a simulation wouldn't have locality properties for the data. and swapping will be death. these assumptions correct? if you have enough ram, the OS shouldn't really make that much of a difference, as i would hope it would do what it could to just stay out of the way. interestingly, in the integrated system design article, "testing the viability of eda applications on a pc," they cited the compaq machine as the leader in pc's, saying its boost in performance was from its memory controller.Article: 9291
A datapoint on the speed of OSes: TR-09-95 J. Bradley Chen, Yasuhiro Endo, Kee Chan, David Mazieres, Antonio Dias, Margo Seltzer, and Michael Smith. 1995. <a href= "ftp://deas-ftp.harvard.edu/techreports/tr-09-95.ps.gz"> The Impact of Operating System Structure on Personal Computer Performance</a>. Check it out and see what conclusions they came to. Any well-tuned system will be fast. And some OSes are inherently a little faster due to their organization. This report is a little dated, and in the near future, you will see some even better performance from the PC UNIX world, I believe. Mark In comp.arch.fpga Rick Kwan <rkwan@ix.netcom.com> wrote: : rk wrote: :> :> rk: :> we haven't witnessed this problem. in fact, our #1 test engineer (day job) :> remarked positively how fast the NT machine seemed to run. was using it :> for real-time data acquisition and storing to disk (and we considered just :> adding more memory and storing to ram but having the disk grind away wasn't :> an observed problem). : NT can be very fast. If your machine is constantly focused on : doing the same thing repeatedly, and your program and data : buffers are paged/swapped in, yes, it will be very fast. : (But then, under those same conditions, any modern demand paged : OS with enough memory will be fast. This argues that simulation : should run equally well on both UNIX and NT, given enough memory.) : But when you try to switch between tasks, or worse : still, create new ones, there is a marked sluggishness. Here, : Microsoft will argue that you should be using threads. But now : we're talking about debugging asynchronous behavior... which : I started to write about, but I'll spare you... for now. :-) : --RickArticle: 9292
>I am primarily a UNIX developer; so I have had to rely on NT experts to >get me past some wicked problems. They made an interesting observation: >if you develop your application within the strict confines of the >"Microsoft path," you should have no problem; but if you try to do >something really innovative that they haven't thought of, you are going >to get into lots of trouble (which apparently was why my problems were >so wicked :-). Not sure what this means in specifics. All apps end up calling the API, one way or another. Maybe some version of unix does more rigorous checking on the entry parameters than NT, so a dodgy app is less likely to crash under it. >NT can be very fast. If your machine is constantly focused on >doing the same thing repeatedly, and your program and data >buffers are paged/swapped in, yes, it will be very fast. This is IMO the major problem with NT: its virtual memory implementation is poor. I have 128MB RAM, and still I get lots of swapfile accesses. In contrast, good ole windoze 3.11 never touches the swapfile if there is enough RAM to hold everything. >But when you try to switch between tasks, or worse >still, create new ones, there is a marked sluggishness. Here, >Microsoft will argue that you should be using threads. But now >we're talking about debugging asynchronous behavior... which >I started to write about, but I'll spare you... for now. :-) Unix has a far longer pedigree in real time control than NT. It is only now that people are starting to use NT in factory automation. And NT was originally designed as a file server O/S. This is its other major problem: it does not properly multitask. If a DOS app goes round in a tight loop, it will grab 99% of CPU time. There is no way to tell NT "this app is to get max 35% CPU time". In most ways, NT is a product which could have been delivered by the likes of Digital Research, 10 years ago. 
I therefore particularly dislike MS being continually pushed as an "innovative" company, with BG being a "brilliant" programmer. But DR failed, MS are still there. *That* is what matters if you use it in a commercial situation. Peter. Return address is invalid to help stop junk mail. E-mail replies to zX80@digiYserve.com but remove the X and the Y.Article: 9293
In article <6dn451$b0c$1@nntp.Stanford.EDU>, Mark Willey <willey@etla.ml.org> wrote: >The tools I refer to are: [...several useful tools...] >One of the most important tools I use is X windows. It does me absolutely >no good to have an NT box or server across campus. However, if I have a >UNIX box, I just log in remotely and run my CAD tool while displaying the >application GUI on my local X server. Actually, all (or almost all) the tools you describe are available for NT for free; the "export DISPLAY" one exists also and is called NTrig (or something like that)(and it costs a lot!)(there is also something free I saw once, but it was not very efficient in terms of network consumption). So you can get many POSIX tools and make your NT look like a Unix. The point is: if you want something like Unix, why do you use NT? Get Linux, it is faster and cheaper, and at least as reliable as NT. Anyway, I once met a guy who was doing some development on FPGAs using only NT -- and he was quite happy and efficient, more than he could be with a Unix shell. I think I represent the opposite: I am painfully slow with a mouse, but I am also some god of the keyboard, and a strong Linux user and advocate. To sum up: some people prefer NT, others prefer Unix. So it is a pity that tools are not as available under Unix as they are under NT. --Thomas PorninArticle: 9294
rk: : > : > unix versions of software have traditionally cost far more than the dos/win : > versions. yes, same code and all (in principle). but it is a different : > market. and, i believe, it will continue to be a different market. rk: one little paragraph generated a large response! ok, let's move ahead. rick kwan: : The embedded posts were getting kinda long, so I'll : apologize up front. : : Pricing. This is an interesting artificial, obsolete separator : between UNIX and PC platforms. I am arguing here for equivalent : application pricing for UNIX and NT systems. rk: me argues for lowest pricing period, regulated by the laws of supply and demand and competition. and what helps this is avoiding proprietary formats and tools to avoid getting locked in. a nice example is edif. now i can generate an edif netlist from any number of different tools from a number of different manufacturers, and, as an example, move it into viewlogic, run my low level logic simulations, cut another composite netlist, pop into my p&r tool, and i'm off and running. typically, i take 3 different types of input and put it into my designs (schematic, macro generators, and vhdl). and edif is the glue that makes it happen. if a tool is too expensive, then i have the choice to possibly go with some other manufacturer. like the mentor $35,000 station vs. the orcad $500 pc package for card-level design and netlisting from a while back. and there are a number of companies in between. obviously, sales volumes will make prices drop, since the design and maintenance costs of a tool are fixed per version but the manufacturing costs are really quite low. support costs should be linear w.r.t. the # of users. if the tools are well designed and have good doc, the support costs should be low and the users happy. and we're all happy, right?!?!?!?!?!?!?!?!?! rick kwan: : With PCs, this model has broken down. 
PC manufacturers : want us to believe that their machines perform as well as : traditional workstations. Furthermore, today you might have : a 150 MHz machine; tomorrow, it might be 333 MHz. You can no : longer base a software license fee on machine power. rk: well, i run on both unix and pc but don't think the two are equivalent. they're not. but if the pc is good enough for an app, and some studies have shown better performance / $, then it makes sense. it won't make sense for the million gate asics. but it probably will for 100,000 gates and less. and for < $1k, you can go down to the store, pick up a new motherboard and cpu and memory, and have something that really flies. lots of m-board competition, and with the amd k6 and others, there's getting to be some real competition for cpu slots. and the cpu market is now unbelievable w.r.t. performance and it's accelerating, with prices dropping. intel can no longer decide when to put out a new cpu and control pricing. somebody else will come along and beat them. an interesting aside. you can go down to the local hole in the wall shop and pick up a m-board+cpu that runs at 333 MHz. that's a 3 nanosecond cycle time. here's a quote from one of my architecture books, (c) 1980. ... we cannot fail to mention that the clock cycle of 12.5 nS could only be achieved through some amazing technological advances in LSI ... and the cooling system. ... this is a description of the cray-1 supercomputer. <snipped lots of good stuff> i don't think unix or linux or domain or win '95 or win nt or dos (yes, people still do good engineering in dos) or exec 8 or rt-ll or vms or mvs or whatever is really what's important. it's staying away from proprietary models that lock you in. a good healthy market with competition will lower prices while improving efficiency and performance. some companies will want to stay on top. and startups will want to take the market and money away from them. 
gotta beat the standard and convince people to change.

--------------------------------------------------------------
rk
"there's nothing like real data to screw up a great theory"
  - me (modified from original, slightly more colorful version)
--------------------------------------------------------------

Article: 9295
peter:
: There are lots of stupid things in NT but none of them make it in any
: way unsuitable for EDA.

mark:
: I would have to disagree with that statement for the following reason:
:
: When you do EDA, you don't just use a tool in isolation. You use it with a
: host of other tools, and those tools are either broken, nonexistent, or
: occasionally present on an NT box. They work on UNIX. (I use NetBSD UNIX,
: www.netbsd.org for more info.)
:
: The tools I refer to are:
:
:   CVS
:   diff
:   gcc
:   emacs/vi
:   make
:   perl
:   shell scripts
:   screen
:   grep
:   and many more...

<snip some good points esp. for large designs>

rk: ok, i do eda, but i also do engineering and, well, regular office work too. here's a list of the tools/apps/whatever that i use, and sort of need to have on the pc. is the pc good for everything? nope, so we get big unix/sun machines to use too, for when it's needed. perhaps we can write down what people use and what is doable on linux. but, as peter said, pc/win are fine for eda. not for designing the pentium 99 or whatever. here's a list:

  eudora for mail
  ms office - standard format for passing lots of documents and info
  spss
  sigma plot for graphics
  actel designer and programming - for fpga
  quicklogic tools
  borland delphi - for lab apps and analysis
  viewlogic - have pc and unix, no linux
  synopsys - have unix, moving to NT, no linux
  ftp and www servers - available for all
  statecad - pc and unix
  dos programs - got a lot of them
  drivers for my national instruments hpib cards
  isa bus for all of the custom cards we bought and built ourselves
  adobe acrobat writer
  adobe photoshop
  vendor support for chip express
  orcad

made lots of chips using dos and windows. and unix works too. but it's not the only way for many jobs.
--------------------------------------------------------------
rk
"there's nothing like real data to screw up a great theory"
  - me (modified from original, slightly more colorful version)
--------------------------------------------------------------

Article: 9296
Jan Zegers wrote:
>
> > The Questions are:-
> >
> > 1) What file name should I use to contain the listing given above? Should
> > it be my_pkg.vhd, comp1.vhd, comp2.vhd or something else?
>
> There is no relation between the filename and the VHDL object(s)
> in a filename. So, a very bad idea could be to put everything
> in one file. Best is to have a "convention" and to split in

Except with the Altera tool, where there is a direct link between the entity name and the file name. I think it was to make it easier to enter the entity into an Altera schematic design, or to instantiate it into a gdf file, or... Anyway, Altera requires you to name the file with the same name as the entity if you use their tools for synthesis. Quoting from page 126 of the Max+Plus VHDL manual for version 8:

  In the hierarchy display window, a filename, along with the file icon
  and filename extension, represents a file in the current hierarchy
  tree.

As a further comment, this really annoys me. I have a package containing generic declarations for things like synchronizers that I use as required. When I ported the design to an Altera CPLD and attempted to use the Altera tools, I started to get error messages telling me that I had not named the file the same as the entity within it. So a file which I had successfully compiled using the Xilinx Foundation tools, Warp from Cypress, Viewsynthesis and Synplicity was suddenly unacceptable to the Altera tools. Did I feel stupid for not following the appropriate design guidelines from the LRM, or did I start looking to blame Altera for their particular implementation?

<stuff deleted>

Garry Allen
ABC Technology Research and Development
Ultimo AUSTRALIA
ph +61-2-93335248

Article: 9297
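The Altera rule Garry ran into boils down to: the base filename must match the entity name the file declares. A minimal sketch of that check, assuming hypothetical helper names (the regex is a simplification, not how MAX+PLUS II actually parses VHDL):

```python
# Preflight check mimicking the MAX+PLUS II convention that a VHDL file
# must be named after the entity it declares.
import os
import re

def entity_name(vhdl_text):
    """Return the first entity name declared in the VHDL source, or None."""
    m = re.search(r'\bentity\s+(\w+)\s+is\b', vhdl_text, re.IGNORECASE)
    return m.group(1) if m else None

def altera_filename_ok(path, vhdl_text):
    """True if the file's base name matches its entity name (case-insensitive)."""
    base = os.path.splitext(os.path.basename(path))[0]
    ent = entity_name(vhdl_text)
    return ent is not None and base.lower() == ent.lower()

src = "entity synchronizer is\n  port (clk : in bit);\nend synchronizer;\n"
print(altera_filename_ok("synchronizer.vhd", src))  # True
print(altera_filename_ok("sync2.vhd", src))         # False -- the Altera error
```

A check like this would have flagged the mismatch before the Altera tools did, even though no other synthesis tool cares.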
In a previous article z80@ds2.com (Peter) writes:
:
:>>4) Stability. Linux typically has uptimes measured in months while NT
:>>crashes about twice a week (in my experience, and in the experience of
:>>others I've talked to).
:
:This is simply not true. I am the last one to defend MS but NT is very
:reliable. I use it all day. If you find regular crashes, as some
:people indeed do, you very probably have hardware problems, or you
:need a decent UPS.
:
:>>5) Performance. When NT is doing any kind of disk access it seems to be
:>>very unresponsive.
:
:Not true.
:
:There are lots of stupid things in NT but none of them make it in any
:way unsuitable for EDA.

Oh yes. For one, it prevents me from using the tools if I am not sitting in front of the machine.

Article: 9298
Jian_Zhang wrote:
>
> Hi, all,
>
> Does anyone know if there exist tools, or if it's even possible, to use
> Java as a verilog PLI? How about using C++ classes in the code for PLI?
> I need them to model some complex telecomm devices, and it's very AWKWARD
> to use C and downright impossible to use vhdl/verilog.
>
> As an observation: EDA languages (vhdl/Verilog) and tools (Modelsim,
> VSS, Cadence's tools) seem way behind the software world. Formal
> analysis tools are useful to verify the gate level against rtl designs.
> But for system modeling or verifying rtl designs, more powerful languages
> such as Java or C++ would be greatly helpful.
>
> Any info or leads will be greatly appreciated. Comments are also
> welcome.
>
> Jian Zhang
> ASIC Designer
> Petaluma, CA

I don't know of anyone currently using Java (sort of a back-burner project of mine :), but people DO use C++ for PLI/FLI code (also for system test and verification). There will be a paper presented on one method of "hooking up" tests written in C (or C++) to Verilog at the upcoming IVC conference in Santa Clara. There's also a commercial product available that pretty much does the same thing (VCPU from SimTech). Is this what you're talking about?

--Bob
--
Bob Beckwith
To reply, remove NOSPAM. from the email address above.

Article: 9299
On Thu, 05 Mar 1998 14:45:00 GMT, k.rozniak@XXX.ien.gda.pl (Krzysztof Rozniak) wrote:

>Thank you for reply, Gavin and Peter.
>
>I managed to find the reason of VL malfunction. The problem was with
>too long path. I had path about 180 chars long. None of old programs I
>was using complained about it. VL was the first one. I had to shorten
>path (now ~65 chars). Since then all works perfectly. Probably VL has
>a workspace for path limited to 128 chars (DOS <=6.22 limit IIRC).
>Hope it helps someone.

Oh -- THAT version. I had not read the original post very carefully. It is actually a little more subtle than that -- not only must the path be less than 127 characters long, but ALL the directories in that path must exist.

--
Gavin Melville
gavin@cypher.co.nz
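Both gotchas Gavin describes — the path must fit under the length limit, and every directory in it must exist — are easy to check up front. A hedged sketch (`check_path` is a made-up helper, and the 127-character limit is the one cited in the thread, not a figure from Viewlogic's docs):

```python
# Preflight check for a DOS-style PATH against the two Viewlogic gotchas:
# overall length, and the existence of every directory listed.
import os

def check_path(path_value, limit=127):
    """Return a list of problems with a PATH string; empty list means OK."""
    problems = []
    if len(path_value) > limit:
        problems.append("PATH is %d chars, limit is %d" % (len(path_value), limit))
    for d in path_value.split(os.pathsep):
        if d and not os.path.isdir(d):
            problems.append("missing directory: %s" % d)
    return problems

# A too-long PATH of nonexistent directories trips both checks:
fake = os.pathsep.join(["/no/such/dir%d" % i for i in range(10)])
for p in check_path(fake):
    print(p)
```

Running it against the real environment would be `check_path(os.environ.get("PATH", ""))`; a ~180-character path like Chris's would fail the first check immediately.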