Bregalad wrote:
There's zero reason to force you to use a painful language just because it's used in the industry
You're getting a bit ridiculous here. "Painful language"? I use C++ in my own hobby stuff because I like the way it works and what it can do.
Give me a break.
Bregalad wrote:
(I can't optimize because ++i should use inc and i+=1 should not) ...the ++ operator is basically completely useless, they might just as well not have it in the language at all.
No, to be optimized it should use whatever method of incrementing is most efficient or practical for the situation, and that's what it does.
I think you're mistaking "optimization" for "direct control of assembly output".
I'd agree ++ is not a great feature; it's a little vestigial, and there is something in it genuinely worth complaining about (the prefix/postfix side-effect rules), but in general it's very easy to use and completely normal to understand.
Bregalad wrote:
(I can't optimize because I can't specify when to use the zero flag with ! vs ==0)
Again, it does use the zero flag when it's efficient or practical to do so. Again, this is not an optimization concern, but an assembly output concern.
C actually usually has inline assembly built into the language for addressing that kind of concern if you need to. (MSVC has started to phase it out with its 64-bit compilers though, in favour of other techniques.)
Bregalad wrote:
The "const" keyword will make you think it helps the compiler to optimize the code, by putting things in ROM instead of RAM, or by avoiding useless copy of data. Actually it does none of this and this keyword is useless when it comes to optimization. Any data with "const" keyword is still writeable, just not through the name you're declaring.
Not really true. On a platform that has ROM, or can write-protect memory, const data very often does go somewhere that is physically read-only.
In some cases you can't protect it that way, e.g. a const temporary on the stack. Attempts to assign its address to a non-const pointer are a type-safety violation and the compiler will stop you. You're allowed to explicitly override this, but in that case you're deliberately sabotaging yourself.
There is the problem that direct memory access is not prohibited in general, and the most common source of errors here is that arrays have no bounds checking (e.g. buffer overflow attacks).
Of course in C++ they added mechanisms that provide bounds checking for arrays, so that problem is already solved if you're not obstinate about it.
Bregalad wrote:
The ANSI C standard requires all operations to be performed to at least 16-bit ints by default, killing any hope of getting a decent performance on any 8-bit CPU.
No it doesn't. Maybe you're thinking that the default "int" is going to be at least 16-bit, but under the standard's "as-if" rule, if the result goes back into an 8-bit type the compiler is by no means required to do any 16-bit calculations at all.
Bregalad wrote:
Another problem is how arguments are passed by value, killing many optimisation possibilities, especially when passing large object. You need to either pass a pointer (wastes resources) or copy the data to the stack (wastes resources) or use C++ and the tedious "const& " all over the place. I know in some cases those copies can still be optimized out, but only at the price of a major effort from the compiler and full control-flow analyzis of what your program is doing, definitely not thanks to the language.
Okay so you just described that you can pass by value, by reference, or inline. What is the other optimization possibility that is being killed here?
In a lot of other high-level languages you don't even have a choice about whether something is passed by value or by reference. Java is an example that bothers me, where primitive and object types are passed in different ways without any syntactic difference. Python, on the other hand, has types that are implicitly mutable or not, and you often don't find out which until a (cryptic) runtime exception occurs (e.g. is this a bytes or a bytearray?).
Bregalad wrote:
For instance, in 6502 it's common to store LSB and MSB of arrays in two separate 8-bit arrays. C could never achieve such an optimisation, because arrays of 16-bit ints are explicit to the user and accessible with pointer arithmetic. When higher level languages such as Python (or any other langauge without explicit pointers) COULD optimize it that way.
C can do this through macros, to some degree.
C++ actually can do this efficiently and effectively, though.
Anyway, I honestly find this nitpicky attack on C really strange. What are you comparing against? Is there a language that you use all the time about which you really don't have any qualms? Everything you've picked on here was either trivial or wrong. If you named any language I could say a thousand similar things about it. What's the point? They've all got some problems, but well-used languages have plenty of good ways to cope with those problems (i.e. practice effective coding styles, and learn the ropes).