Computer Science Canada :: Need help!!! some problems in c++
Author: cqhqm150 [ Thu Oct 16, 2008 3:57 pm ]
Post subject: Need help!!! some problems in c++
When we want to assign a value to a specific position of a vector, there are two ways: myvector.at(i) = 10 and myvector[i] = 10. I just do not know which one is faster and why. (I think the second one is faster, but I cannot figure out the reason.) Another problem: I use a makefile to compile my program, but it only compiles my program without creating any a.out file, so I cannot execute my program after compilation. Here is my makefile:
[edit by md] Fixed makefile block
Author: md [ Thu Oct 16, 2008 5:09 pm ]
Post subject: RE:Need help!!! some problems in c++
Both C++ methods should be nearly as fast, though theoretically the first is slower since it returns a reference which is then assigned. The difference in instruction count is rather small. The problem with your makefile is that you have nothing to build the object files: no dependencies on the source files.
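md's fix (explicit object-file rules with dependencies on the sources) can be sketched as a minimal Makefile. This is only a guess at the structure; the file names (file.cpp, filedriver.cpp, and the complexnumber target) are inferred from names mentioned later in the thread, not from the original makefile, which was not preserved.

```make
# Hypothetical sketch; file names are assumptions based on the thread.
CXX      = g++
CXXFLAGS = -Wall -g

# Link the executable from the object files.
complexnumber: file.o filedriver.o
	$(CXX) $(CXXFLAGS) -o complexnumber file.o filedriver.o

# Explicit compile rules that depend on the source files, so editing
# a .cpp actually triggers a recompile.
file.o: file.cpp
	$(CXX) $(CXXFLAGS) -c file.cpp

filedriver.o: filedriver.cpp
	$(CXX) $(CXXFLAGS) -c filedriver.cpp

clean:
	rm -f *.o complexnumber
```

Without the `file.o: file.cpp` style rules, make has no dependency chain from objects back to sources, which is exactly the problem md describes.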
Author: S_Grimm [ Thu Oct 16, 2008 5:26 pm ]
Post subject: RE:Need help!!! some problems in c++
Please do not post "I need help" titles. Also, md is right: your compiler needs source files to compile, and yours has none. It's like parachuting without a chute.
Author: cqhqm150 [ Thu Oct 16, 2008 6:21 pm ]
Post subject: RE:Need help!!! some problems in c++
Thank you. I will not post "need help" next time.
Author: cqhqm150 [ Thu Oct 16, 2008 6:39 pm ]
Post subject: RE:Need help!!! some problems in c++
I solved my makefile problem. Because the executable file is complexnumber instead of a.out, make creates a file named complexnumber. That's why I could not find a.out after compilation.
Author: wtd [ Thu Oct 16, 2008 7:57 pm ]
Post subject: RE:Need help!!! some problems in c++
That may have made it appear to work, but you need to actually compile some source files. The file.o and filedriver.o rules do not require that the source (presumably *.cpp) even be present, and they certainly don't perform any sort of compilation.
Author: btiffin [ Fri Oct 17, 2008 1:13 am ]
Post subject: RE:Need help!!! some problems in c++
Old guy ramble; Re the vector method or indexing overload: try running some tiny source that includes the two different

Re make (for everyone): don't forget that make usually ships with a huge ugly bag of built-in rules. For instance, running

    make program

will look in the dir (with no Makefile whatsoever) for program.c and actually add -o program to the cc command it uses by default. If there is no program.c and make finds program.cpp, c++ will be used for the compile. I have no clue about the actual rule chain for default make, but I know .c trumps .cpp. cqhqm150's make commands would behave just as he implied. Overrides in the Makefile will change cxx to g++ etc., but it is mostly the default rules that get the build done in this case: make simply tries all its rules on complexnumber. As wtd points out, the built-in rule chain makes it appear to work, and by fluke the build actually does work for this simple case, with all the potential problems lying in wait.

cqhqm150: This may be poor advice, but I usually write a script that includes all the tectonics for a build while first developing anything. Explicitly state the build commands and comment in where any special external libraries come from. When the tectonic works, then I move to the efficiency of a Makefile and make.

Cheers
Author: Rigby5 [ Mon Oct 20, 2008 9:03 pm ]
Post subject: RE:Need help!!! some problems in c++
myvector.at(i) = 10; is much slower than myvector[i] = 10; The size of the program has nothing to do with it. The difference is that the first calls a function, while the second is done entirely inline by the code generated by the compiler, which calculates the address ahead of time. These days the difference is trivial, but in the old days you sometimes needed to avoid excess function calls due to the need to push and pop all the registers.
Author: md [ Mon Oct 20, 2008 11:12 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
Rigby5 @ 2008-10-20, 9:03 pm wrote: myvector.at(i) = 10; Is much slower then myvector[i] = 10; ...

Don't be so sure. std::vector<T>::at() could very easily be an inline function just like std::vector<T>::operator[](). The only real difference between the two methods is actually in the C++ source. Both in fact generate remarkably similar assembly, especially since the address cannot be calculated in advance for a std::vector (with the possible exception of a static std::vector, which is very uncommon). For example, in this very quick (and fast) implementation both are *exactly* the same.
Author: DemonWasp [ Tue Oct 21, 2008 11:47 pm ]
Post subject: RE:Need help!!! some problems in c++
I just dealt with this exact issue on an assignment for one of my own CS courses (CS246 at UW, incidentally). Here are some test results (all times in seconds, all executed on the same machine, all compiled with the same compiler):

Vector, checked access, no optimisation: 2.380
Vector, checked access, -O2 optimisation: 0.152
Vector, unchecked access, no optimisation: 1.028
Vector, unchecked access, -O2 optimisation: 0.120

Here "checked access" means using .at(), which checks bounds, and "unchecked" means using the access operator, which does not. That's the key difference here: std::vector<T>::at() checks that the requested index is not out of bounds, whereas std::vector<T>::operator[]() does not.
Author: Pockets [ Thu Oct 23, 2008 11:13 am ]
Post subject: RE:Need help!!! some problems in c++
Spot on there, DemonWasp. Checked access is great for reducing the likelihood that someone can overflow your program (but it's always a possibility). Unchecked access is the old standard, built for speed above all else. Unchecked is fine for things like school programs (unless the assignment includes making it secure), but the function call is there for a good reason. Funny thing about things built for speed: some guys a while back thought it'd be faster to drop the first two digits of the year date...
Author: btiffin [ Fri Oct 24, 2008 1:00 am ]
Post subject: Re: RE:Need help!!! some problems in c++
Pockets @ Thu Oct 23, 2008 11:13 am wrote: Funny thing about things built for speed. Some guys a while back thought it'd be faster to drop the first two digits of the year date...

Re dates; not quite. In 1960, hard disk cost many, many thousands of dollars per meg. Many, many: some $100,000 for a 25 meg disc. So if a bank could save 2 bytes off each of 1 million transactions, well, that was like saving $10,000 (in 1960s dollars). Huge. Paid the programmer for his year. The culture stuck with the industry right up till the late 90's. Very few old-timer programmers would truncate a date for speed, not from a CPU perspective anyway; bus speed to storage would be a larger concern. Data space was the primary reason the COBOLers of old got away with this.

Old guy portents of doom and a ranting follows. Anyone in high school now will be just hitting their mid 40's when the Unix 32-bit date epoch rolls over, and that really could take some systems for a loop or two. Y2K was a visible problem. The 32-bit epoch problem is less visible, more insidious. People may seem "smarter" now, but the 1970-based datetime routines are embedded in a LOT of systems. The esteemed members of compsci.ca will be in for an employment boom as the world bitches about "those foolish and crotchety old out-of-date C programmers" that are causing everyone grief. At 03:14:07 AM January 19, 2038, any existent 32-bit clib based system will wrap its counter (a signed 32-bit time_t overflows to December 1901, not cleanly back to January 1, 1970). And anyone reading this thinking "oh, all that software will be replaced by then", I'll ask: do you think that code you write now is so shitty that it won't last 30 years? 30 years is not a long time. Look at the years of service expected from aircraft or power plants. From personal experience, I still get called in to support a 25-year-old system that I was in on the initial development of. And so you know that I've eaten my own dog food:

We built an index that included a Priority Category Realtime Outage key smudged into a 6-byte field. Someone wanted it separated by In Service, Out of Service, so we stole a bit from the 32-bit datetime field. That index will become unusable somewhen in 2015. The system won't fail in any critical sense. It's a Forth system that compiles from source every morning after the backup. There is an assert to cause the load to fail on the magic day. So somewhen in 2015 I'll probably get a call that the system didn't load, and the quick fix will be to remove the assertion and let the index sort wrong. This is not a hidden fact of the system; it was and is well documented, and we got approval from management.

And on a personal note: sorry Pockets, I'm not trying to pick on you. It's an old fart ranting to everyone to be careful: when you ignore history, you are bound to repeat its mistakes. Y2K didn't really teach anyone anything as far as I can tell. But it sure scared the bejeezus out of people in positions of fiduciary responsibility. So after the world made it through the year 2000 mythical meltdown, the big boys looked at IT, mad at having been forced to spend millions on systems that they had already spent millions on, for no benefit other than "just in case", and yanked all our play money away, sealing the fate of the dot com bust. And more ranting: COBOL and string-held dates are rarely in control of large electromagnet power generating systems. Unix epoch 32-bit C datetimes ... hmm. Let these AC systems drift out of phase, leading to voltage drop, without compensation with either a change to resistance and/or current, and you get a boat load of heat ... boat loads. Y2K may end up looking like a cake walk.

Cheers, and sorry if I went too far off topic for the thread.
Author: Rigby5 [ Fri Oct 24, 2008 2:22 pm ]
Post subject: RE:Need help!!! some problems in c++
md, good point with the inline function. However, I think the compiler can do even better at array indexing when not constrained by the code of an inline function.
Author: md [ Fri Oct 24, 2008 11:01 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
Rigby5 @ 2008-10-24, 2:22 pm wrote: md, good point with the inline function. However, I think the compiler can do even better at array indexing when not constrained by the code of an inline function.

The underlying memory layout of a vector may not be an array. It most likely is, but there are alternate and equally fast methods for creating vectors that are a bit more complex. In either case the compiler is far from constrained when you tell it to optimize anyway. It will inline functions, rearrange code, even get rid of things you wrote if it can do the same thing faster. Compilers are very good at optimization, and without too much work on the programmer's part they will work very well. Code optimization is an assembly skill, not really a C++ skill.
Author: OneOffDriveByPoster [ Sun Oct 26, 2008 3:10 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
md @ Fri Oct 24, 2008 11:01 pm wrote: The underlying memory layout of a vector may not be an array. It most likely is... but there are alternate and equally fast methods for creating vectors that are a bit more complex.

I'm interested in knowing what these ways are. In C++, you are limited by the following:

Quote: The elements of a vector are stored contiguously, meaning that if v is a vector<T, Allocator> where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size().
Author: md [ Sun Oct 26, 2008 7:00 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
OneOffDriveByPoster @ 2008-10-26, 3:10 pm wrote: I'm interested in knowing what these ways are. In C++, you are limited by the following: The elements of a vector are stored contiguously, meaning that if v is a vector<T, Allocator> where T is some type other than bool, then it obeys the identity &v[n] == &v[0] + n for all 0 <= n < v.size().

Ok, so perhaps implementing it differently would not be contiguous; however, unless you are purposely going around the abstraction of iterators, they would be no different. My point was more that premature optimization is bad, and guessing at what compilers will or will not do is equally haphazard.

Incidentally, there are binary tree/block structures which, while not quite as fast for random access, are just as fast for traversal and quicker for insertion. They just fail the test of contiguousness.
Author: Rigby5 [ Sun Oct 26, 2008 11:28 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
md @ Fri Oct 24, 2008 11:01 pm wrote: The underlying memory layout of a vector may not be an array. ... Code optimization is a assembly skill, not really a C++ skill.

I did not mean to imply one should do their own optimization, but that templates overriding operators can inhibit the best optimization. In fact, C++ in general is much slower than C because of the late binding and the additional code that requires. These days computers are so fast it hardly matters. But in the old days, C++ was a real dog compared to C. C allows the compiler to be much more efficient at optimization. With C++, too much happens at run time.
Author: wtd [ Mon Oct 27, 2008 1:52 am ]
Post subject: RE:Need help!!! some problems in c++
With C you either have programmers who are much less productive, or they use libraries that do a lot at run time, and probably less efficiently than a C++ compiler.
Author: OneOffDriveByPoster [ Mon Oct 27, 2008 8:14 am ]
Post subject: Re: RE:Need help!!! some problems in c++
Rigby5 @ Sun Oct 26, 2008 11:28 pm wrote: I did not mean to imply one should do their own optimization, but that templates over riding operators can inhibit the best optimization. In fact, C++ in general is much slower then C because of the late binding, and the additional code that requires.

I think you are confused between templates (handled at compile time) and virtual functions (handled at run time).
Author: Rigby5 [ Wed Oct 29, 2008 11:41 am ]
Post subject: Re: RE:Need help!!! some problems in c++
wtd @ Mon Oct 27, 2008 1:52 am wrote: With C you either have programmers who are much less productive, or they use libraries that do a lot at run time, and probably less efficiently than a C++ compiler.

I don't think so. C libraries are always faster than C++ libraries, and C++ compilers can't be as efficient because they don't know as much as C compilers do, because they have to wait for run-time information. C compilers have all the information at compile time, so they can easily optimize much better.
Author: Rigby5 [ Wed Oct 29, 2008 11:45 am ]
Post subject: Re: RE:Need help!!! some problems in c++
OneOffDriveByPoster @ Mon Oct 27, 2008 8:14 am wrote: I think you are confused between templates (handled at compile time) and virtual functions (handled at run time).

No, what I meant was that the compiler knows how to do array indexing, so it can easily optimize for that. But if you override the indexing operator yourself, you take away the compiler's ability to do that. The fact that there also has to be additional code for run-time resolution of late binding and virtual function table lookup is an additional reason why C++ is much slower than C, just as C is much slower than assembly. But these days, that is not as important as what is easiest to maintain.
Author: OneOffDriveByPoster [ Wed Oct 29, 2008 3:07 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
Rigby5 @ Wed Oct 29, 2008 11:41 am wrote: C libraries are always faster then C++ libraries, and C++ compilers can't be as efficient because they don't know as much as C compilers do, because they have to wait for run time information. C compliers have all the information at compile time, so can easily optimize much better.
Author: Rigby5 [ Fri Oct 31, 2008 11:58 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
OneOffDriveByPoster @ Wed Oct 29, 2008 3:07 pm wrote: Rigby5 @ Wed Oct 29, 2008 11:41 am wrote: C libraries are always faster then C++ libraries, ...

Not true. In C++ there are things like polymorphism that prevent the compiler from knowing the size of the object that could be passed ahead of time. Objects and pointers are not bound until run time, so the compiler cannot know about them at compile time, and therefore cannot optimize for them. Can the programmer prevent that and ensure the compiler does have the same information as in C? Yes, but then he has to essentially write in C, without any run-time virtual function table lookups, dynamic casts, base class pointers, or anything like that.
Author: OneOffDriveByPoster [ Sat Nov 01, 2008 10:01 am ]
Post subject: Re: RE:Need help!!! some problems in c++
Rigby5 @ Fri Oct 31, 2008 11:58 pm wrote: Not true. In C++ there are things like polymorphism that prevent the compilor from knowing the size of object is could be passed, ahead of time. ...

Truth is, hiding things in libraries is the worst thing you could do for compiler optimization. In C, that is much more likely to happen. In C++, things like templates reveal more to the compiler than C would: they provide compile-time binding and type checking. Have you seen the hideous qsort() and bsearch() functions in C? Or how the POSIX libraries have struct hacks which allow you to pass different-size structs through a pointer? Have you considered link-time or whole-program optimization? The compiler can perform devirtualization, for example. Not to say that the compiler is magic, though.
Author: Rigby5 [ Tue Nov 04, 2008 4:22 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
OneOffDriveByPoster @ Sat Nov 01, 2008 10:01 am wrote: Truth is, hiding things in libraries is the worse thing you could do for compiler optimization. ... Not to say that the compiler is magic though.

Not at all true. Libraries do not hide anything from the compiler because they are already optimized by the compiler. Libraries are the most optimized code one can create. In contrast, templates reveal nothing to the compiler, because they are run time. The compiler is long gone by then. There is nothing complex or slow about dealing with different types without trying to do run-time binding. People have been using simple void pointers for many decades. Embedded data structures make a lot more sense than casting through base class pointers. The POSIX different-size struct problem is no different in C++. C++ compilers still are not uniform as to struct or class sizes, because different compilers pad differently. Do a sizeof() on different compilers and see. Yes, I have used link-time and whole-program optimization.

It became very important to the Itanium project, because the virtual nature of the 64-bit Itanium was hard to optimize by a compiler. And it did not work well. Compiler optimization turned out to be better. The reality is that much of what people are doing with C++ is an abuse that is slower, more error prone, and illogical. C++ has strengths in modularizing functions along with their corresponding data, and allows avoiding long switch statements. But there are things like multiple inheritance that should just never be done. And anyone who inherits classes more than 5 deep should be shot.
Author: md [ Tue Nov 04, 2008 6:54 pm ]
Post subject: Re: RE:Need help!!! some problems in c++
Rigby5 @ 2008-11-04, 4:22 pm wrote: Not at all true.

So you claim; now let's look at these claims.

Rigby5 @ 2008-11-04, 4:22 pm wrote: Libraries do not hide anything from the compiler because they are already optimized by the compiler.

Yes, libraries *can* be optimized by the compiler; however, that depends entirely on how they were compiled. Let's assume that they are optimized in and of themselves, though.

Rigby5 @ 2008-11-04, 4:22 pm wrote: Libraries are the most optimized code one can create.

False. You could rewrite your C library in assembler and it would be more optimized. You could pass -O3 instead of -O2. Maybe you want size optimized instead of speed optimized; now you need to recompile again. Libraries are optimized, but never ideally.

Rigby5 @ 2008-11-04, 4:22 pm wrote: In contrast, templates reveal nothing to the compiler, because they are run time.

Templates reveal everything to the compiler because they are a compile-time construct. They are not run time. The compiler generates the necessary template code at compile time and the code is static. There is no run-time overhead for templates. None.
Rigby5 @ 2008-11-04, 4:22 pm wrote: There is nothing complex or slow about dealing with different types without trying to do run time binding.

Dealing with different types at runtime requires attempting to figure out what they are, which is more work than already knowing what they are, and is therefore slower. Sure, it can be optimized out; after all, a pointer is a pointer is a pointer, but not always.

Rigby5 @ 2008-11-04, 4:22 pm wrote: People have been using simple void pointers for many decades.

That does not make it right, nor a good solution. People have been writing programs with buffer overflow holes for decades too; is that a good idea?

Rigby5 @ 2008-11-04, 4:22 pm wrote: Embedded data structures makes a lot more sense then casting through base class pointers.

What? First, you cannot embed a data structure; they are compile-time constructs. After that it's all memory addresses and offsets. Interestingly enough, it's exactly the same for derived classes! The difference is that inheritance makes it much easier to write sensible code.

Rigby5 @ 2008-11-04, 4:22 pm wrote: The POSIX different size struct problem is no different in C++.

Yes, when using POSIX functions and many APIs you still need to deal with variable-sized structures in C++. It's annoying.

Rigby5 @ 2008-11-04, 4:22 pm wrote: C++ compilers still are not uniform as to struct or class sizes, because different compilers pad differently.

That's why when writing APIs you specify offsets, not C-style structs. Or you specify how it is to be padded (not at all) and tell the compiler that. If such things were not possible then binary APIs would not be possible. And yet...

Rigby5 @ 2008-11-04, 4:22 pm wrote: Do a sizeof() on different compilors, and see.

Or different architectures, or languages, etc. APIs are language independent and compiler independent. They specify the sizes they expect and the offsets they expect them at.

Rigby5 @ 2008-11-04, 4:22 pm wrote: Yes, I have used link time and whole-program optimization. It became very important to the Itanium project, because the virtual nature of the 64 bit Itanium was hard to optimize by a compiler. And it did not work well. Compiler optimization turned out to be better.

Compiler optimization was hard but it turned out better? For what? When doing what? Actually, how about not using an anecdote. Care to cite something?

Rigby5 @ 2008-11-04, 4:22 pm wrote: The reality is that much of what people are doing with C++ is an abuse, that is slower, more error prone, and illogical.

Character strings are so often abused and so often lead to buffer overflows that they are the leading cause of bugs and security holes. std::strings, on the other hand, not so much. Much of what people do regardless of language is stupidly thought out, error prone, and illogical; however, C makes it much easier to do than C++.

Rigby5 @ 2008-11-04, 4:22 pm wrote: C++ has strengths in modularizing functions along with their corresponding data, and allows avoiding long switch statements.

Modularization makes testing better, security auditing better, and modifying code easier; hiding case statements is a side effect, nothing more.

Rigby5 @ 2008-11-04, 4:22 pm wrote: But there are things like multiple inheritance, that should just never be done. And anyone who inherits classes more than 5 deep, should be shot.

I cannot agree. Abusing multiple inheritance should be punishable by extreme pain, but abusing any construct should be treated likewise. And there is nothing wrong with inheriting more than 5 levels deep where it makes sense. Artificially limiting yourself is always a silly thing to do.
Author: | Rigby5 [ Wed Nov 05, 2008 4:55 pm ] |
Post subject: | Re: RE:Need help!!! some problems in c++ |
md @ Tue Nov 04, 2008 6:54 pm wrote: ...
So you Claim, now let's look at these claims. ... False. You could re-write your C library in assembler and it would be more optimized. You could pass -O3 instead of -O2. Maybe you want size optimized instead of speed optimized; now you need to recompile again. Libraries are optimized, but never ideally. Optimizing compilors have been able to beat writing your own assembly for awhile now. That is because the internal processor is virutal, and for example, you get a stack of registers instead of the actual 8 historical registers that had to be pushed and popped. To beat the compilor you need to set your own dependency points for the pipeline optimization, and humans are not good at that. Quote: Templates reveal everything to the compiler because they are a compile time contruct. They are not run time. The compiler generates the necessary template code at compile time and the code is static. There is no run-time overhead for templates. None.
I disagree. If templates were only compile time, then there would be no virtual function look ups, no code run to determine types, etc. And they would run as fast as C. But they don't. Besides, did you ever try to step into template code? It is horrendous. Not only is it lacking symbol information, but totally unoptimized. Quote: Dealing with different types at runtime requires attempting to figure out what they are, which is more work then already knowing what they are and therefor slower. Sure it can be optimized out, after all a pointer is a pointer is a pointer, but not always.
It is not hard to know types in C, people have done it for decades. And it does not need to be done at runtime, if one decides to use function pointers. In contrast, C++ casts back and forth all the time, especially through base pointers, so there is even more opportunity for a run time crash. Quote: That does not make it right, nor a good solution. People have been writing programs with buffer overflow holes for decades too, is that a good idea?
Safe arrays and strings are not a C++ invention. They originated in Pascale, migrated to BASIC, and were used in C for decades. Quote: What? First, you cannot embed a data structure, they are compile time constructs. After that it's all memory addresses and offsets. Interestingly enough it's exactly the same for derived classes! There difference is that inheritance makes it much easier to sensible code.
Embedding in this context means adding the data to the linkage structure by setting the void ptr of the linkage cell to that of the data struct. The alternative is to derive the new class from the linkage class, and add the new data members. Both can be done in C++, but the later does not keep the linkage class and its data contents separate, as it should. It ends up with massive class bloat as you end up adding different data contents. Quote: That's why when writing APIs you specify offsets, not C style structs. Or you specify how it is to be padded (not at all) and tell the compiler that. If such things were not possible then binary APIs would not be possible. And yet...
Sure it is possible to make binary compatible APIs, but it is harder in C++, because C++ compilers are less consistent as to what they add and what size they are. In general one uses macros to standardize API entities. C++ itself is not at all good at it. Quote: Compiler optimization was hard but it turned out better? For what? When doing what? Actually, how about not using an anecdote. Care to cite something?
With the Itanium projects I did, it just so happened that we had access to the Intel compiler with standard optimization, and a Microsoft compiler with whole program optimization. And the comparison was in favor of the Intel compiler, by over a factor of 10. I have done similar comparisons before and after as well. But perhaps whole program optimization was in its infancy then (6 years ago). Quote: Character strings are so often abused and so often lead to buffer overflows that they are the leading cause of bugs and security holes. std::strings on the other hand, not so much. Much of what people do regardless of language is stupidly thought out, error-prone and illogical; however C makes it much easier to do than C++.
As I said already, safe strings are not a C++ invention. Strings are not really part of C. You can use any string library you want. Quote: Modularization makes testing better, security auditing better, and modifying code easier; hiding case statements is a side effect, nothing more.
I will agree with that. Quote: I cannot agree. Abusing multiple inheritance should be punishable by extreme pain, but abusing any construct should be treated likewise. And there is nothing wrong with inheriting more than 5 levels deep where it makes sense. Artificially limiting yourself is always a silly thing to do.
It seems to me that deep inheritance is anti-modular. Forcing one to look at and remember more than 5 .hpp file classes is against human abilities, and is bound to add mistakes. Why should anyone need more than 5? There could be no benefit to it. We make artificial limits all the time, because we know the limits of human nature. Just like the limit for the number of parameters to a function call is 6. If you want more, you must put them into a struct and pass a pointer to that. Similarly, one is not supposed to nest loops more than 4 deep. It is artificial but not arbitrary. The statistics prove what causes errors. |
Author: | OneOffDriveByPoster [ Wed Nov 05, 2008 5:34 pm ] |
Post subject: | Re: RE:Need help!!! some problems in c++ |
Rigby5 @ Wed Nov 05, 2008 4:55 pm wrote: Optimizing compilers have been able to beat writing your own assembly for a while now. That is because the internal processor is virtual, and for example, you get a stack of registers instead of the actual 8 historical registers that had to be pushed and popped. To beat the compiler you need to set your own dependency points for the pipeline optimization, and humans are not good at that. I agree that optimizing compilers can be better than writing your own assembly. I must point out that you seem to be locked in on CISC architectures though.
Rigby5 @ Wed Nov 05, 2008 4:55 pm wrote: I disagree. If templates were purely compile-time, then there would be no virtual function lookups, no code run to determine types, etc. And they would run as fast as C. But they don't. Besides, did you ever try to step into template code? It is horrendous. Not only is it lacking symbol information, it is totally unoptimized. Templates themselves are completely compile time. Virtual functions and RTTI are separate features of C++. What compiler are you using? Unless you did a study on multiple implementations, I would not call your experience representative.
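That distinction can be sketched in a few lines (type names invented; exact object sizes are implementation-dependent, though the relationship below holds on mainstream ABIs): a template instantiation is resolved entirely at compile time and carries no hidden machinery, while declaring a virtual function adds a vtable pointer to every object.

```cpp
// A template is stamped out per type at compile time; calls resolve directly.
template <typename T>
struct Box {
    T value;
    T get() const { return value; }   // ordinary direct call, no lookup
};

// Virtual dispatch is a separate feature: it adds a vtable pointer per object.
struct VirtualBox {
    virtual ~VirtualBox() {}
    virtual int get() const { return value; }
    int value;
};
```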
Rigby5 @ Wed Nov 05, 2008 4:55 pm wrote: Embedding in this context means adding the data to the linkage structure by setting the void ptr of the linkage cell to that of the data struct.
It sounds like you are advocating additional pointer indirection... Why should a base class contain a pointer to allow you to "derive" classes from it? What specific class bloat are you talking about? Derived classes contain base-class subobjects in most C++ object models--they do not cause the compiler to put more into "normal" base class objects.
The alternative is to derive the new class from the linkage class, and add the new data members. Both can be done in C++, but the latter does not keep the linkage class and its data contents separate, as it should. It ends up with massive class bloat as you end up adding different data contents. Rigby5 @ Wed Nov 05, 2008 4:55 pm wrote: Sure it is possible to make binary compatible APIs, but it is harder in C++, because C++ compilers are less consistent as to what they add and what size they are. In general one uses macros to standardize API entities. C++ itself is not at all good at it. C++03 has the same preprocessor as C90. The upcoming C++ standard has the same preprocessor as C99. POD types in C++ are more than likely to be compatible with the corresponding C compiler.
Rigby5 @ Wed Nov 05, 2008 4:55 pm wrote: With the Itanium projects I did, it just so happened that we had access to the Intel compiler with standard optimization, and a Microsoft compiler with whole program optimization. And the comparison was in favor of the Intel compiler, by over a factor of 10.
You do know that the Intel compiler works better on an Intel processor for a reason...
I have done similar comparisons before and after as well. But perhaps whole program optimization was in its infancy then (6 years ago). Rigby5 @ Wed Nov 05, 2008 4:55 pm wrote: It seems to me that deep inheritance is anti-modular.
Those sound like good rules of thumb. You still should not lock yourself in. There will be exceptions, and you can stay out of trouble with a good high-level design. Inheritance hierarchy graphs are meant to help. Forcing one to look at and remember more than 5 .hpp file classes is against human abilities, and is bound to add mistakes. Why should anyone need more than 5? There could be no benefit to it. We make artificial limits all the time, because we know the limits of human nature. Just like the limit for the number of parameters to a function call is 6. If you want more, you must put them into a struct and pass a pointer to that. Similarly, one is not supposed to nest loops more than 4 deep. It is artificial but not arbitrary. The statistics prove what causes errors. |
Author: | Rigby5 [ Wed Nov 05, 2008 8:16 pm ] |
Post subject: | Re: RE:Need help!!! some problems in c++ |
OneOffDriveByPoster wrote: I must point out that you seem to be locked in on CISC architectures though. Not really, I think. RISC also has pipelining with dependency point requirements. It is a question of letting the hardware know how far it can look ahead. Quote: Templates themselves are completely compile time. Virtual functions and RTTI are separate features of C++. What compiler are you using? Unless you did a study on multiple implementations, I would not call your experience representative.
The point was that if you are not using any virtual functions or run-time type information, what is the point of using a template at all? Why not just put it in a library that would be an independent module? Templates are not better, because they are much harder to debug, if nothing else. Quote: It sounds like you are advocating additional pointer indirection... Why should a base class contain a pointer to allow you to "derive" classes from it? What specific class bloat are you talking about? Derived classes contain base-class subobjects in most C++ object models--they do not cause the compiler to put more into "normal" base class objects.
No, the idea is simply that you can nest an instance of one class into another, or you can merge them. And merging is more confusing, since they have different functionalities, but may have identical method or member names. The whole thing can be extremely wasteful. For example, if you have a linked queue for data storage, why does it have to know anything about the class you want to store in it? Why should you have to track iterators or anything else? Quote: C++03 has the same preprocessor as C90. The upcoming C++ standard has the same preprocessor as C99. POD types in C++ are more than likely to be compatible with the corresponding C compiler.
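A minimal sketch of the kind of type-agnostic queue being described (all names invented): the queue stores opaque pointers and never needs to know anything about the payload class.

```cpp
// Linkage-only queue: payloads are opaque void pointers.
struct QNode {
    QNode *next;
    void *data;
};

struct Queue {
    QNode *head = nullptr;
    QNode *tail = nullptr;
};

// Append a payload pointer; the queue never inspects it.
void enqueue(Queue &q, void *data) {
    QNode *n = new QNode{nullptr, data};
    if (q.tail) q.tail->next = n; else q.head = n;
    q.tail = n;
}

// Remove and return the oldest payload pointer, or nullptr when empty.
void *dequeue(Queue &q) {
    if (!q.head) return nullptr;
    QNode *n = q.head;
    q.head = n->next;
    if (!q.head) q.tail = nullptr;
    void *data = n->data;
    delete n;
    return data;
}
```

Note the trade-off: the caller must cast the returned pointer back and gets no compile-time check that the cast matches what was stored.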
It is not the preprocessor that is different between C and C++ external entry point macros, but the contents of the macros. And I believe you are wrong about API compatibility. I believe that there is no direct linkage between modules of different languages, platforms, or even compilers. I believe a lot of thunking code in run-time libraries makes up for all the incompatibilities. I do not believe it is ever possible to ensure padding is identical between different compilers. There are even issues like byte swapping between Intel and Motorola byte ordering. Quote: You do know that the Intel compiler works better on an Intel processor for a reason...
Good point, but Intel compilers do not work better in general, because they are not sold and therefore don't get worked on as much. Quote: Sound like good rules of thumb. You still should not lock yourself in. There will be exceptions, and you can stay out of trouble with a good high-level design. Inheritance hierarchy graphs are meant to help.
Agreed. I often break the rules with quick fixes that would take too long to redo completely. And a good IDE helps quite a bit. |
Author: | OneOffDriveByPoster [ Wed Nov 05, 2008 10:22 pm ] |
Post subject: | Re: RE:Need help!!! some problems in c++ |
Rigby5 @ Wed Nov 05, 2008 8:16 pm wrote: The point was that if you are not using any virtual functions or run-time type information, what is the point of using a template at all? Why not just put it in a library that would be an independent module? Templates are not better, because they are much harder to debug, if nothing else. The point is to have compile time type checking. A container object can also be optimized for each specific type that the template is instantiated for, based on either the primary (general) template or on a specialized (more suitable) template. vector<bool> is a wonderful example of how templates can be specialized to provide better efficiency. Yes, templates can be hard to debug, but then the upcoming C++ standard is meant to make that easier.
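The specialization mechanism behind vector<bool> can be sketched like this (names invented for illustration): the primary template describes the general case, and an explicit specialization substitutes a denser representation for one particular type, all resolved at compile time.

```cpp
// Primary (general) template: one full object per element.
template <typename T>
struct Storage {
    static const int bits_per_element = sizeof(T) * 8;
};

// Explicit specialization: bool elements can be packed one bit each,
// which is the idea behind the std::vector<bool> specialization.
template <>
struct Storage<bool> {
    static const int bits_per_element = 1;
};
```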
Rigby5 @ Wed Nov 05, 2008 8:16 pm wrote: No, the idea is simply that you can nest an instance of one class into another, or you can merge them.
The programmer is supposed to handle is-a and has-a relationships properly. Iterators are meant for abstraction. As for knowing about the class, what about the class do you think it needs to know? Again, the primary template is meant to be coded for the general case. Telling it what class you are storing helps with compile-time type checking and optimization. In a C library, you might have to pass the size of the stored object for example. You can use the C++ containers with pointer types anyway.
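The size-passing convention mentioned above might look like this (names invented): a C-style generic routine has no type parameter to consult, so the caller must supply the element size explicitly.

```cpp
#include <cstdlib>
#include <cstring>

// C-style generic copy: without a type parameter, the routine cannot know
// sizeof(T), so the caller passes the element size in.
void *dup_elements(const void *src, std::size_t count, std::size_t elem_size) {
    void *dst = std::malloc(count * elem_size);
    if (dst)
        std::memcpy(dst, src, count * elem_size);
    return dst;
}
```

A C++ template would deduce the element type, and therefore its size, at compile time instead.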
And merging is more confusing, since they have different functionalities, but may have identical method or member names. The whole thing can be extremely wasteful. For example, if you have a linked queue for data storage, why does it have to know anything about the class you want to store in it? Why should you have to track iterators or anything else? Of course, if you are talking about code that goes around using multiple inheritance by having a class inherit from a template base class that adds "prev" and "next" pointers, I sympathize. Rigby5 @ Wed Nov 05, 2008 8:16 pm wrote: It is not the preprocessor that is different between C and C++ external entry point macros, but the contents of the macros.
extern "C" works wonders. Linux has the Itanium ABI (not API) for C++. Most compilers provide bit-packed structs.
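A one-line sketch of what extern "C" does (function name invented): it gives the function C language linkage, suppressing C++ name mangling, so C code or another compiler's output can call it by its plain symbol name.

```cpp
// C linkage: the symbol is exported unmangled, callable from C.
extern "C" int add_ints(int a, int b) {
    return a + b;
}
```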
And I believe you are wrong about API compatibility. I believe that there is no direct linkage between modules of different languages, platforms, or even compilers. I believe a lot of thunking code in run-time libraries makes up for all the incompatibilities. I do not believe it is ever possible to ensure padding is identical between different compilers. There are even issues like byte swapping between Intel and Motorola byte ordering. Rigby5 @ Wed Nov 05, 2008 8:16 pm wrote: Good point, but Intel compilers do not work better in general, because they are not sold and therefore don't get worked on as much. Intel gets their front-end from EDG. I also think that they get their standard library from another vendor. They don't need to do as much in-house. |
Author: | Rigby5 [ Thu Nov 06, 2008 12:42 am ] |
Post subject: | Re: RE:Need help!!! some problems in c++ |
OneOffDriveByPoster @ Wed Nov 05, 2008 10:22 pm wrote: The point is to have compile time type checking. A container object can also be optimized for each specific type that the template is instantiated for, based on either the primary (general) template or on a specialized (more suitable) template. vector<bool> is a wonderful example of how templates can be specialized to provide better efficiency. Yes, templates can be hard to debug, but then the upcoming C++ standard is meant to make that easier.
But when you are just passing the data to the container via a pointer, there does not need to be any type checking. And since the container only has to hold a pointer, there is no way it can optimize the data. But I also know that vector<bool> does not save any space. The sizeof(bool) is that of a char in Java, and of a short in C# and in MSVC C++. That is because the hardware is optimized for shorts. Chars take longer, and bitfields take 10 times as long. That is because since MMX, registers are totally virtual. A 32-bit register may simply be some portion of a much larger register. So calculating bitfields can take many dozens of instructions. Quote: In a C library, you might have to pass the size of the stored object for example. You can use the C++ containers with pointer types anyway.
The only reason a C library would need to know the size would be if you were not using pointers and needed to index sequentially through. And yes, I agree that you can use pointers in C++ containers. The point is, why would we want to use anything else? Quote: extern "C" works wonders. Linux has the Itanium ABI (not API) for C++. Most compilers provide bit-packed structs.
Microsoft used Common Language Runtime libraries to convert API calls between different languages. That is not just simple entry points like with libraries, and data needs extensive marshaling. For example, C# uses strings that have a length prepended in front of them, because it is actually derived from Visual Basic, with its variant string. The C++ wrapper class for it is _bstr_t. So the .Net framework would freak out if C# ever got a null-terminated string, and .Net uses Unicode. |