On May 5, 10:49 pm, Eric <delage.e...@gmail.com> wrote:
> Hi,
>
> Does someone have some benchmarks comparing the compilation time
> between the Windows 64b and Linux 64b editions of the Xilinx ISE
> Design Suite? I need some arguments to invest in the right development
> platform.
>
> Many thanks.
>
> Eric

Sorry, no real benchmarks here, but what's to invest? Ubuntu is free :-)

Regards,
Pat

Article: 147601
Patrick Maupin wrote:
> On May 5, 5:47 pm, Sharmila <sharmi...@gmail.com> wrote:
>> I am working on a bit of a complex DSP design and generating .vhdl
>> files based on the MATLAB/Simulink design. These vhdl files are then
>> imported into a Xilinx project file and mapped, placed, routed, and
>> tested for timing to eventually generate a bit file for a Virtex
>> SX95T. When I get a failed timing constraint, I manually place slices
>> or add latency to meet timing.
>> I succeed in fixing this failed timing constraint but get a new one
>> in an unrelated block. My question is whether this new timing failure
>> is a side effect of my change or whether Xilinx reports failed
>> timing errors sequentially.
>> I'm running Xilinx ISE 10.1.03 Foundation and the appropriate sysgen.
>> Regards,
>> Sharmila
>
> Under the properties tab for the post-PAR timing analysis, you can
> tell it how many paths per clock you want to see. I think the default
> is only 3. You can also ask it to perform "advanced analysis" and
> tell it you want a "verbose report" and get information on paths that
> pass, but are marginal.
>
> But the whole thing is kind of like a balloon animal -- press over
> here, and it will pop out over there. That certainly could be
> happening to you.
>
Good advice, and also my experience. To speed up synthesis time and fitting, I often compile the modules separately with a tight timing constraint in a corresponding UCF file. If timing is critical, the design-change/fit iteration time is greatly shortened, and critical areas can be identified with a timing constraint tighter than the design actually requires. Multicycle constraint policies can also be applied. It's all a bit late when it takes half an hour to find out your design doesn't meet a timing constraint!

Article: 147602
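As a rough sketch of what such a per-module constraint file might contain: the net, instance, and group names below are made up, the 8 ns period is an arbitrary "tighter than needed" choice, and the exact multicycle syntax should be checked against the constraints guide for your ISE version.

    # Deliberately tight clock constraint for a module-level trial fit
    NET "clk" TNM_NET = "tnm_clk";
    TIMESPEC "TS_clk" = PERIOD "tnm_clk" 8 ns HIGH 50%;

    # Hypothetical multicycle exception: paths from grp_src to grp_dst get two periods
    INST "u_slow_src/*" TNM = "grp_src";
    INST "u_slow_dst/*" TNM = "grp_dst";
    TIMESPEC "TS_mc" = FROM "grp_src" TO "grp_dst" TS_clk * 2;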
On 5/6/2010 3:33 AM, Patrick Maupin wrote:
> On May 5, 7:38 pm, Symon <symon_bre...@hotmail.com> wrote:
>> On 5/5/2010 10:03 PM, glen herrmannsfeldt wrote:
>>> Also, the RS232 asynchronous communication stop bit is needed
>>> to allow for possible clock differences between two stations.
>>
>> Also, I am interested in your concluding statement about RS232 stop
>> bits. I gather you live in a world of half-duplex.
>>
>> How would you propose to eradicate the stop bit in a world where we are
>> all synchronised?
>>
>> How would we synchronise ourselves?
>>
>> Would we need to be adjacent?
>
> First of all, I'm never sure exactly how serious some of your
> questions are. Especially on the latter two questions, I'm not sure
> exactly where your tongue is. But on the off-chance that it's not
> firmly planted in your cheek, I will endeavor to answer some of these
> questions.
>
In this case the questions are rhetorical, to provoke Glen into thinking about what he posts.

Thanks, Symon.

Article: 147603
"bhaskar" <bhaskar15nov@n_o_s_p_a_m.gmail.com> writes: > Hi to all, > I am the begginer in microblaze. I want to just print the value of > the float variable "fpu" (=2.345) on the uart. can you please tell me how > to write it in the microblaze (C), Can you tell me right from the include > statements, because I not even know what are the header files it usesw to > include floating point unit. I have just added FPU in the BSB launcher when > the EDK invokes. I dont know what should i do later. Plz help me Have you tried - does something like this not work? #include <stdio.h> int main(void) { float somefloat=2.345f; printf("%f\n", somefloat); } XPS should sort everything out behind the scenes for this to work.You do have to use "proper" printf() rather than xil_printf() which means your elf file gets pretty big all of a sudden though. Don't try and run it out of internal memory! Cheers, Martin -- martin.j.thompson@trw.com TRW Conekt - Consultancy in Engineering, Knowledge and Technology http://www.conekt.net/electronics.htmlArticle: 147604
On Wed, 05 May 2010 20:49:18 -0700, Eric wrote:

> Hi,
>
> Does someone have some benchmarks comparing the compilation time between
> the Windows 64b and Linux 64b editions of the Xilinx ISE Design Suite? I
> need some arguments to invest in the right development platform.
>
> Many thanks.
>
> Eric

The parallel processing modes only work in Linux, at least up to ISE 11. I don't know if they've added parallel support for Windows in ISE 12. Quartus 9.1 has parallel processing support on Linux; I've never used it on Windows, so I don't know if it also has parallel support there.

Article: 147605
> The parallel processing modes only work in Linux, at least up to ISE 11.
> I don't know if they've added parallel support for Windows in ISE 12.
> Quartus 9.1 has parallel processing support on Linux, I've never used it
> on Windows so I don't know if it also has parallel support on Windows.

'General',

Do the parallel processing modes make much difference in your experience?


Nial.

Article: 147606
General Schvantzkoph <schvantzkoph@yahoo.com> writes:

> The parallel processing modes only work in Linux, at least up to ISE 11.
> I don't know if they've added parallel support for Windows in ISE 12.

Is this parallel processing for a single build, or is it building different seeds on different machines? I remember that the latter was only available on Linux (and Solaris at the time) for ISE, since it required remote shell operation.

> Quartus 9.1 has parallel processing support on Linux, I've never used it
> on Windows so I don't know if it also has parallel support on Windows.

I think it's the same on Windows, even though I mostly run Quartus under Linux. In my experience the Linux and Windows versions run at pretty much the same pace.

Petter
-- 
.sig removed by request.

Article: 147607
Thanks.

LC

Article: 147608
On 05/06/2010 10:44 AM, Petter Gustad wrote:
> In my experience the Linux and Windows versions run at pretty much the
> same pace.
>
> Petter

My completely unscientific, 'seat of the pants' comparisons have found the same result.

-- 
Jason Thibodeau

Article: 147609
Hi,

Can I set the Xilinx FFT core such that the early stages (maybe the first 5 out of 10) use fewer precision bits (the same as the input precision) together with scaling, while the later 5 stages don't use scaling and are instead allowed more precision bits?

This would let me use less precision (and hence less hardware) with scaling wherever possible. (Using the small precision throughout the FFT gives underflow results.) With the initial stages scaled and the later stages allowed bit growth, I can prevent overflow as well as get meaningful results (with higher-precision output) while saving hardware, since only the later stages use high precision.

Does this make sense? Is this possible with the Xilinx FFT core?


Onkar

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 147610
On Thu, 06 May 2010 15:18:46 +0100, Nial Stewart wrote:

>> The parallel processing modes only work in Linux, at least up to ISE
>> 11. I don't know if they've added parallel support for Windows in ISE
>> 12. Quartus 9.1 has parallel processing support on Linux, I've never
>> used it on Windows so I don't know if it also has parallel support on
>> Windows.
>
> 'General',
>
> Do the parallel processing modes make much difference in your
> experience?
>
> Nial.

I'm using the full parallel mode on Quartus right now and I'm getting pretty good times on a Core i7 machine. However, I haven't taken the time to benchmark it, so I don't actually know if it helps. I don't use Windows except in a VM to run Word and QuickBooks, so I don't have any numbers that I've generated myself. Xilinx FAEs have told me that the Linux versions of ISE are faster; Altera FAEs have told me that the Windows version of Quartus is a little faster. However, I don't know how true either statement is with the latest versions. The code base for the core applications is identical, it's only the GUI code that's different, so you wouldn't expect a giant difference. Linux had a huge advantage until recently because it has been 64 bits for years; until Win7 very few people were using a 64-bit version of Windows. Large Xilinx FPGAs require a lot of RAM (I've found that the V5 300s require about 10G), so 64 bits is an absolute requirement for the big parts, which made Linux an absolute requirement.

Article: 147611
Any help will be really appreciated.

> Hi,
>
> Can I set the Xilinx FFT core such that the early stages (maybe the first
> 5 out of 10) use fewer precision bits (the same as the input precision)
> together with scaling, while the later 5 stages don't use scaling and are
> instead allowed more precision bits?
>
> This would let me use less precision (and hence less hardware) with
> scaling wherever possible. (Using the small precision throughout the FFT
> gives underflow results.) With the initial stages scaled and the later
> stages allowed bit growth, I can prevent overflow as well as get
> meaningful results (with higher-precision output) while saving hardware,
> since only the later stages use high precision.
>
> Does this make sense? Is this possible with the Xilinx FFT core?
>
> Onkar
>
> ---------------------------------------
> Posted through http://www.FPGARelated.com

---------------------------------------
Posted through http://www.FPGARelated.com

Article: 147612
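Whether the Xilinx core itself supports mixing the two policies is a question for its data sheet, but the per-stage idea described above is easy to express in RTL. Below is a minimal, unverified sketch of one butterfly output path; the entity, generic, and port names are made up and have nothing to do with the core's actual interface. A per-stage shift of 1 keeps the word at the input width (a scaled stage), while a shift of 0 lets the word grow by the butterfly's carry bit (an unscaled stage).

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity bfly_stage_out is
  generic (
    WIDTH_IN : natural := 16;  -- width of this stage's input samples
    SCALE_SH : natural := 1    -- 1 = scale (shift right), 0 = allow bit growth
  );
  port (
    sum_full : in  signed(WIDTH_IN downto 0);            -- butterfly add/sub result, one growth bit
    dout     : out signed(WIDTH_IN - SCALE_SH downto 0)  -- WIDTH_IN bits if scaled, WIDTH_IN+1 if not
  );
end entity bfly_stage_out;

architecture rtl of bfly_stage_out is
begin
  -- Arithmetic shift right by the per-stage amount, then resize to the stage's
  -- chosen output width.  With SCALE_SH = 1 the shifted result always fits back
  -- into WIDTH_IN bits; with SCALE_SH = 0 this is just a pass-through.
  dout <= resize(shift_right(sum_full, SCALE_SH), dout'length);
end architecture rtl;

Chaining ten such stages with a constant table of per-stage shifts gives the "scale the early stages, grow the later ones" arrangement described in the post.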
hi !

General Schvantzkoph wrote:
> Linux had a huge advantage until
> recently because it has been 64 bits for years, until Win7 very few
> people were using a 64 bit version of Windows. Large Xilinx FPGAs require
> a lot of RAM, I've found that the V5 300s require about 10G, so 64 bits
> is an absolute requirement for the big parts which made Linux an absolute
> requirement.

Does it mean that... Win7 has killed Linux? :-D

yg
-- 
http://ygdes.com / http://yasep.org

Article: 147613
On May 5, 11:49 pm, Eric <delage.e...@gmail.com> wrote:
> Hi,
>
> Does someone have some benchmarks comparing the compilation time
> between the Windows 64b and Linux 64b editions of the Xilinx ISE
> Design Suite? I need some arguments to invest in the right development
> platform.
>
> Many thanks.
>
> Eric

This is somewhat anecdotal, as I almost always use 32-bit ISE on a 32-bit Windows XP machine, but I have tried out some other combos and this is what I've got:

- This applies only to ngdbuild/map/par/bitgen; I don't use XST.
- The last time I tried this was circa ISE 11.1, I think.
- 64-bit ISE on a 64-bit Windows XP machine is 2-3 times slower in map/par than 32-bit ISE on the same 64-bit Windows XP machine.
- 32-bit ISE is essentially identical in performance on both 32-bit WinXP and 64-bit WinXP.
- 64-bit ISE inherently uses much more memory than 32-bit ISE.
- 64-bit ISE allows you to use more memory (for very large designs).
- Supposedly, 64-bit Linux ISE is more on par with 32-bit Win ISE in terms of performance than 64-bit Win ISE.

Chris

Article: 147614
On Fri, 07 May 2010 06:43:54 +0200, whygee wrote:

> hi !
>
> General Schvantzkoph wrote:
>> Linux had a huge advantage until
>> recently because it has been 64 bits for years, until Win7 very few
>> people were using a 64 bit version of Windows. Large Xilinx FPGAs
>> require a lot of RAM, I've found that the V5 300s require about 10G, so
>> 64 bits is an absolute requirement for the big parts which made Linux
>> an absolute requirement.
>
> Does it mean that... Win7 has killed Linux? :-D
>
> yg

Of course not, Linux is a vastly more productive environment. The high-performance simulators, NC and VCS, are Linux only. It's vastly easier to use multiple machines: I'm currently sshed into a 4-core i7 and two Core2 boxes that are running Verilog simulations and Quartus builds. I'm also sshed into a remote system that has SignalTap running. I'm doing the Quartus builds on the i7 and I'm rsyncing the results over to the remote system. Try doing any of that with Windows. When I run Xilinx tools I do that with scripts, which is easier to do on Linux, although the scripts can run on Windows if you have Cygwin installed.

Article: 147615
General Schvantzkoph wrote:
>> Does it mean that... Win7 has killed Linux? :-D
> Of course not, Linux is a vastly more productive environment.

no doubt about it ;-)

> The high
> performance simulators, NC and VCS, are Linux only. It's vastly easier to
> use multiple machines, I'm currently sshed into a 4 core i7 and two Core2
> boxes that are running Verilog simulations and Quartus builds. I'm also
> sshed into a remote system that has SignalTap running. I'm doing the
> Quartus builds on the i7 and I'm rsyncing the results over to the remote
> system. Try doing any of that with Windows. When I run Xilinx tools I do
> that with scripts which is easier to do on Linux although the scripts can
> run on Windows if you have Cygwin installed.

thanks for the details :-) I'll maybe quote or use them one day...

yg
-- 
http://ygdes.com / http://yasep.org

Article: 147616
On May 5, 11:52 am, wallge <wal...@gmail.com> wrote:
>
> Another intractable solution would be to remove all the custom types
> which are used all throughout my custom component and use generic
> parameters and std_logic_vector types
> all the way through the project. At this point the component is too
> complex to try and remove all of the custom types and replace them
> with standard types (whose bitwidths, etc. would be set via generics).
>

Is it still too much work if only the top-level entity used generics and the rest is left nearly all alone? Seems to me like it shouldn't be; the 'intractable' part, I'm suspecting, has to do with having to replace all usages in the architecture. The thought here would be...

- Modify the entity to parameterize with generics, using std_logic_vector types per your post.
- For each custom type, write a 'to_std_ulogic_vector' and a 'from_std_ulogic_vector' function that converts between std_ulogic_vector and that particular custom type.
- For each entity interface signal, add a signal in the architecture that converts to or from the custom type. Use the signal name of the old entity signal in the architecture, and come up with a new name for the entity port. That way, nothing else inside the architecture needs to be changed.

As an example...

Before...

entity Foo is
  generic(...)
  port(
    Inp1: in  t_type1;
    Inp2: in  t_type2;
    Out3: out t_type3
  );
end Foo;

After...

entity Foo is
  generic(
    T1_WIDTH: natural;
    T2_WIDTH: natural;
    T3_WIDTH: natural
    ...)
  port(
    Inp1_sulv: in  std_ulogic_vector(T1_WIDTH-1 downto 0);
    Inp2_sulv: in  std_ulogic_vector(T2_WIDTH-1 downto 0);
    Out3_sulv: out std_ulogic_vector(T3_WIDTH-1 downto 0)
  );
end Foo;

architecture RTL of Foo is
  signal Inp1: t_type1;
  signal Inp2: t_type2;
  signal Out3: t_type3;

  -- In most cases, the following functions reeeeeally should be put in the
  -- package where the types are defined, not in the architecture.
  function to_std_ulogic_vector(L: t_type1) return std_ulogic_vector is
  begin
    ...
  end function to_std_ulogic_vector;

  function from_std_ulogic_vector(L: std_ulogic_vector) return t_type1 is
  begin
    ...
  end function from_std_ulogic_vector;

  -- Repeat the to/from functions for each custom type.
  -- Strictly speaking, you only need the 'to' functions for custom-type
  -- entity outputs and the 'from' functions for custom-type entity inputs.
  --
  -- The reality is that once you create these functions you'll find many
  -- uses for them beyond your immediate needs. It should take less than
  -- a minute to copy/paste/edit to convert a record type into to/from
  -- vector functions. Once you see that, you'll probably be able to
  -- macro-ize the steps using whatever text editor you use.
begin
  Inp1      <= from_std_ulogic_vector(Inp1_sulv);
  Inp2      <= from_std_ulogic_vector(Inp2_sulv);
  Out3_sulv <= to_std_ulogic_vector(Out3);

  ...rest of the code that you currently have remains unchanged

end RTL;

Kevin Jennings

Article: 147617
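To give a concrete shape to the bodies elided with "..." above, here is an unverified sketch for a purely hypothetical record type; the type, field, and package names are made up, and real widths and packing order would of course follow the actual custom types in the design.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package t_type1_pkg is
  type t_type1 is record
    addr : unsigned(7 downto 0);
    we   : std_ulogic;
  end record;
  constant T_TYPE1_WIDTH : natural := 9;
  function to_std_ulogic_vector(L : t_type1) return std_ulogic_vector;
  function from_std_ulogic_vector(L : std_ulogic_vector) return t_type1;
end package t_type1_pkg;

package body t_type1_pkg is

  function to_std_ulogic_vector(L : t_type1) return std_ulogic_vector is
    variable v : std_ulogic_vector(T_TYPE1_WIDTH-1 downto 0);
  begin
    v(8 downto 1) := std_ulogic_vector(L.addr);  -- pack each field into a slice
    v(0)          := L.we;
    return v;
  end function to_std_ulogic_vector;

  function from_std_ulogic_vector(L : std_ulogic_vector) return t_type1 is
    variable v : std_ulogic_vector(T_TYPE1_WIDTH-1 downto 0) := L;  -- normalize the range
    variable r : t_type1;
  begin
    r.addr := unsigned(v(8 downto 1));           -- unpack the same slices
    r.we   := v(0);
    return r;
  end function from_std_ulogic_vector;

end package body t_type1_pkg;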
Hello,

I am trying to implement (simulation + synthesis) a 32-bit floating point division unit. To perform the division, basically the 23+1 bit (1 hidden bit) mantissa is divided by the other mantissa, then the 8-bit exponents are subtracted, and finally normalization is applied.

For the mantissa division part I am following the binary division by shift and subtract method (http://courses.cs.vt.edu/~cs1104/Division/ShiftSubtract/Shift.Subtract.html).

I can use this algorithm if the mantissas are such that no remainder is left (i.e. remainder = 0), but if the mantissas are such that a remainder is left, then how can I proceed with the division? If I proceed, the quotient will be inaccurate.

I have already searched Google for the SRT division algorithm but I am not able to find a simple example. If someone could give me an SRT division example/algorithm for a value of 22/7 I would really appreciate it.

Thanks

Article: 147618
niyander <mightycatniyander@gmail.com> wrote:

> I am trying to implement (simulation + synthesis) a 32-bit floating
> point division unit.
> To perform the division, basically the 23+1 bit (1 hidden bit) mantissa
> is divided by the other mantissa, then the 8-bit exponents are
> subtracted, and finally normalization is applied.
> For the mantissa division part I am following the binary division by
> shift and subtract method (http://courses.cs.vt.edu/~cs1104/Division/ShiftSubtract/Shift.Subtract.html).
> I can use this algorithm if the mantissas are such that no
> remainder is left (i.e. remainder = 0), but if the mantissas are such
> that a remainder is left, then how can I proceed with the division? If I
> proceed, the quotient will be inaccurate.

You either truncate or round. Unless you are implementing an existing architecture, it is your choice. IBM hex floating point truncates; most of the others, including IEEE, round.

> I have already searched Google for the SRT division algorithm but I am
> not able to find a simple example. If someone could give me an SRT
> division example/algorithm for a value of 22/7 I would really
> appreciate it.

That will help you do it faster, but it won't change the question about what to do with a remainder. If shift and subtract, or more likely a non-restoring algorithm, is fast enough then you might just as well use it.

-- glen

Article: 147619
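For what it's worth, here is a minimal, unverified sketch of shift-and-subtract (restoring) mantissa division that folds the leftover remainder into a sticky bit, which is exactly what the rounding step needs. The names and widths are arbitrary choices for illustration, not taken from any particular core, and the rounding/normalization logic itself is not shown.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package fp_div_sketch is
  -- a, b: normalized mantissas in [1.0, 2.0), hidden bit included (1.23 format).
  -- Returns 27 bits: bits 26..1 are the quotient (1 integer bit plus 25 fraction
  -- bits, i.e. room for the guard and round positions), bit 0 is the sticky bit,
  -- set whenever a non-zero remainder is left over.
  function mant_divide(a, b : unsigned(23 downto 0)) return unsigned;
end package fp_div_sketch;

package body fp_div_sketch is

  function mant_divide(a, b : unsigned(23 downto 0)) return unsigned is
    variable r      : unsigned(24 downto 0) := resize(a, 25);  -- partial remainder
    variable d      : unsigned(24 downto 0) := resize(b, 25);
    variable q      : unsigned(25 downto 0) := (others => '0');
    variable sticky : unsigned(0 downto 0)  := "0";
  begin
    -- Classic restoring division: one quotient bit per iteration, MSB first.
    for i in q'high downto 0 loop
      if r >= d then
        q(i) := '1';
        r    := r - d;
      end if;
      r := shift_left(r, 1);     -- move on to the next quotient bit position
    end loop;
    if r /= 0 then               -- whatever is left over feeds the rounding decision
      sticky(0) := '1';
    end if;
    return q & sticky;
  end function mant_divide;

end package body fp_div_sketch;

With the quotient kept a couple of bits wider than the target mantissa, round-to-nearest-even is then a matter of inspecting the guard, round, and sticky bits; truncation simply drops them.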
On Apr 30, 8:52 pm, KJ <kkjenni...@sbcglobal.net> wrote:
> On Apr 30, 6:56 pm, Brian Drummond <brian_drumm...@btconnect.com>
> wrote:
>
>> On Fri, 30 Apr 2010 14:06:19 -0700 (PDT), rickman <gnu...@gmail.com>
>>> In fact, it is only a warning when you have two drivers of an
>>> std_logic signal. I would like to make that an error by using
>>> std_ulogic, but I'm not sure how that works with the other types. I
>>> need to investigate that sometime.
> <snip>
>> I suspect there may be some tools issues in the less-well-trodden path
>> of std_ulogic. And I have a nagging suspicion that numeric_std is
>> compatible with std_logic and may be harder to use with its unresolved
>> cousin. (But I hope not)
>>
>> Again, if you get a chance to investigate, I would be interested to hear
>> how you get on.
>
> I've been using std_ulogic/std_ulogic_vector for a while...no issues
> with Quartus or Synplify on the synthesis front.

My concern is compatibility with the numeric_std types. I almost never use std_logic_vector, if for no other reason, the name is soooooo long to type. I much prefer signed/unsigned types. I guess the real issue is that if I am using signed/unsigned, I am using slv, not sulv... end of story, right? Would I need to make my own library to use ulogic-based signed/unsigned types?

> The main place the mixing of std_logic_vector and std_ulogic_vector
> occurs is instantiating some outside widget that uses std_logic_vector
> on the interface. Once I learned that type conversions can be put
> into the port map and you didn't need to create std_ulogic 'wrappers',
> or use intermediate signals to connect the vectors, it all came
> together rather nicely.
>
> Example:
>
> Inst_Some_Widget : entity work.widget
> port map(
>    Gazinta_slv => std_logic_vector(Gazinta_sulv),
>    std_logic_vector(Gazouta_slv) => Gazouta_sulv
> );
>
> std_logic and std_ulogic can be freely assigned without any type
> conversions

I know I have run into trouble with this in the past. In fact, I thought there were some limitations in the standard, not just tool limitations. Rather than learn to work around the limitations, I have always used "wrapper" signals for the conversion.

Rick

Article: 147620
General Schvantzkoph <schvantzkoph@yahoo.com> writes:

> that with scripts which is easier to do on Linux although the scripts can
> run on Windows if you have Cygwin installed.

But Cygwin is slooow, especially file access; e.g. using git under Cygwin is almost impossible.

Petter
-- 
.sig removed by request.

Article: 147621
On May 8, 3:40 am, rickman <gnu...@gmail.com> wrote:
>>>> Again, if you get a chance to investigate, I would be interested to hear
>>>> how you get on.
>>>
>>> I've been using std_ulogic/std_ulogic_vector for a while...no issues
>>> with Quartus or Synplify on the synthesis front.
>
> I guess the real issue
> is that if I am using signed/unsigned, I am using slv, not sulv... end
> of story, right?

No, start of story...but it's the story of strong typing that you object to that started this thread, so I'm guessing you won't like the story, but here it is anyway.

The definitions of the types 'signed', 'unsigned', 'std_logic_vector' and 'std_ulogic_vector' are...

type signed is array (NATURAL range <>) of std_logic;
type unsigned is array (NATURAL range <>) of std_logic;
type std_logic_vector is array (NATURAL RANGE <>) of std_logic;
type std_ulogic_vector is array (NATURAL RANGE <>) of std_ulogic;

As you can see, they all have essentially the same definition...but that doesn't make them the same from the perspective of the language. They are different types; none of them is a subtype of anything more general. If you have a signal or variable of any of the above types and you want to assign it to something of any of the other types, you will need to do a type conversion, because they are different *types*, not just different *subtypes*.

Now let's take a look at the definition of std_logic for a moment. It is...

SUBTYPE std_logic IS resolved std_ulogic;

Since std_logic is defined as a *subtype* of the more general std_ulogic type, you can freely assign two such signals/variables without a type conversion. Note though that while std_logic is a subtype of std_ulogic, the previously mentioned std_ulogic_vector is NOT a subtype of std_logic_vector (or vice versa). That is why std_logic and std_ulogic can be freely assigned without type conversions, but std_logic_vector and std_ulogic_vector can not.

I don't know why the vector versions were defined this way, and maybe whoever decided this wishes they had done it differently, but in any case it is the way it is...but before completely throwing in the towel on the language itself, also recognize that the definitions of those types are in packages that are outside of the language definition itself. If you want to create your own types and subtypes without this limitation, you can do so.

> Would I need to make my own library to use ulogic
> based signed/unsigned types?

No.

>> The main place the mixing of std_logic_vector and std_ulogic_vector
>> occurs is instantiating some outside widget that uses std_logic_vector
>> on the interface. Once I learned that type conversions can be put
>> into the port map and you didn't need to create std_ulogic 'wrappers',
>> or use intermediate signals to connect the vectors, it all came
>> together rather nicely.
>>
>> Example:
>>
>> Inst_Some_Widget : entity work.widget
>> port map(
>>    Gazinta_slv => std_logic_vector(Gazinta_sulv),
>>    std_logic_vector(Gazouta_slv) => Gazouta_sulv
>> );
>>
>> std_logic and std_ulogic can be freely assigned without any type
>> conversions
>
> I know I have run into trouble with this in the past. In fact, I
> thought there were some limitations in the standard, not just tool
> limitations. Rather than learn to work around the limitations, I have
> always used "wrapper" signals for the conversion.

I've never had any problems with this approach. Tool limitations, though, are not only a function of which tool you are using; they also change over time. Perhaps if you can find and dust off the example where you thought this was a limitation of the tool, the standard or both, you might find that it was something different. In my case, the fact that you can put a type conversion on the left side of the port map was my "learn something new every day" moment several years back...and the end of any need for wrappers for conversions on entity outputs.

Kevin Jennings

Article: 147622
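A short, self-contained illustration of the asymmetry described above; the entity and signal names are made up, and this reflects the pre-VHDL-2008 packages the discussion refers to (VHDL-2008 later redefined std_logic_vector as a resolved subtype of std_ulogic_vector, which removes the vector-level conversion).

library ieee;
use ieee.std_logic_1164.all;

entity sulv_demo is
  port (
    a_sl   : in  std_logic;
    v_slv  : in  std_logic_vector(3 downto 0);
    a_sul  : out std_ulogic;
    v_sulv : out std_ulogic_vector(3 downto 0)
  );
end entity sulv_demo;

architecture rtl of sulv_demo is
begin
  a_sul  <= a_sl;                      -- fine as-is: std_logic is a subtype of std_ulogic
  v_sulv <= std_ulogic_vector(v_slv);  -- needs an explicit type conversion: the vector types
                                       -- are merely closely related array types, not subtypes
end architecture rtl;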
Hello everybody!!

I spent 3 days on it without success, and after that I decided to search for help. I have a Virtex-II system with MicroBlaze 7.10.d and external SDRAM memory. The hardware and software system works correctly and the usual "hello world" and "blinking leds" programs work properly.

Since I plan to use Linux on it, I started preparing a bootloader to be stored in BRAM. In order to perform some tests, I prepared two projects in EDK: the bootloader and the SDRAM application. The bootloader waits for a "magic word" (0xDEADBEEF) from the PCI bus before starting. From the PCI bus I load the second application into SDRAM at the correct location (I create the bin file using mb-objcopy -O binary). When the program receives the "magic word", it jumps to the SDRAM address. But no console output is visible... I tried to manually insert the interrupt/reset/exception vectors but nothing changed.

Here's the code:

// BOOTLOADER PROGRAM
#include "xparameters.h"
#include "xbasic_types.h"
#include "xgpio.h"
#include "gpio_header.h"

//====================================================

void (*app_start)();

int main (void)
{
#if XPAR_MICROBLAZE_0_USE_ICACHE
    microblaze_init_icache_range(0, XPAR_MICROBLAZE_0_CACHE_BYTE_SIZE);
    microblaze_enable_icache();
#endif
#if XPAR_MICROBLAZE_0_USE_DCACHE
    microblaze_init_dcache_range(0, XPAR_MICROBLAZE_0_DCACHE_BYTE_SIZE);
    microblaze_enable_dcache();
#endif

    xil_printf("waiting for magic word\n");

    Xuint32 magic_word = 0;
    do {
        magic_word = XIo_In32(XPAR_PLBV46_LOCAL_BRIDGE_0_BASEADDR + 0x10);
    } while (magic_word != 0xDEADBEEF);

    XIo_Out32(XPAR_OPB_GPIO_0_BASEADDR, 0xFFFFFFFF);
    xil_printf("magic word received\n");

#if XPAR_MICROBLAZE_0_USE_DCACHE
    microblaze_disable_dcache();
    microblaze_init_dcache_range(0, XPAR_MICROBLAZE_0_DCACHE_BYTE_SIZE);
#endif
#if XPAR_MICROBLAZE_0_USE_ICACHE
    microblaze_disable_icache();
    microblaze_init_icache_range(0, XPAR_MICROBLAZE_0_CACHE_BYTE_SIZE);
#endif

    /* Jump to the application loaded at the SDRAM base address. */
    app_start = (void (*)()) 0x44000000;
    app_start();

    return 0;
}

// SDRAM APPLICATION
#include "xparameters.h"
#include "xbasic_types.h"
#include "xgpio.h"

int main (void)
{
#if XPAR_MICROBLAZE_0_USE_ICACHE
    microblaze_init_icache_range(0, XPAR_MICROBLAZE_0_CACHE_BYTE_SIZE);
    microblaze_enable_icache();
#endif
#if XPAR_MICROBLAZE_0_USE_DCACHE
    microblaze_init_dcache_range(0, XPAR_MICROBLAZE_0_DCACHE_BYTE_SIZE);
    microblaze_enable_dcache();
#endif

    xil_printf("Hello World, I'm the one in the SDRAM!!\n");

#if XPAR_MICROBLAZE_0_USE_DCACHE
    microblaze_disable_dcache();
    microblaze_init_dcache_range(0, XPAR_MICROBLAZE_0_DCACHE_BYTE_SIZE);
#endif
#if XPAR_MICROBLAZE_0_USE_ICACHE
    microblaze_disable_icache();
    microblaze_init_icache_range(0, XPAR_MICROBLAZE_0_CACHE_BYTE_SIZE);
#endif

    return 0;
}

What's wrong?!? Please help me!!!

Thanks for your attention

Article: 147623
Everything snipped...

That is why I am going to take a good look at Verilog. I've been using VHDL for some 12 years and I still don't feel like I completely understand even basic things like how signed/unsigned relate to std_ulogic and how closely related types... well, relate!

When you convert slv to signed or unsigned using unsigned(), this is not really a conversion, is it? It is not the same as using to_integer() to convert signed to integer. In the numeric_std library they include conversion functions between integer and signed/unsigned. But there are no functions to convert between slv and these types. So it would seem this is not a conversion by function. So what is it?

At one time I thought I understood all this, but it is so far removed from getting work done that I typically adopt standard practices and forget the details. Then when I need to figure out something new I have to go back to basics. It just gets so time consuming. I want to focus on the work, not the method.

Rick

Article: 147624
"rickman" <gnuarm@gmail.com> wrote in message news:34aaac95-f886-481d-a4bb-a6b9c63b336f@r11g2000yqa.googlegroups.com... > When you convert slv to unsigned or unsigned using unsigned(), this is > not really a conversion is it? It is not the same as using > to_integer() to convert signed to integer. In the std_numeric library > they include conversion functions between integer and signed/ > unsigned. But there are no functions to convert slv and these types. > So it would seem this is not a conversion by function. So what is > it? If you're not doing arithemetic on it, nobody cares, and slv is fine. If you're doing arithmetic (adding, subtracting, multiplying, comparing to integer, etc) it tells the synthesizer and / or simulator whether you consider the N bits to represent an unsigned number (0 to 2^N -1) or a two's complement signed number (-(2^(N-1)) to 2^(N-1) -1). Pete