On Wed, 2016-07-13 at 19:01 +0100, Anton Shterenlikht wrote:
> There's nothing wrong with C++.

Leslie Hatton has actually looked extensively and quantitatively at
errors in programs. He has a collection of what he calls "T"
experiments, wherein he compared equivalent nontrivial programs written
in Fortran 77, C, C++, and Ada. He found that the lifetime ownership
costs of C++ programs were six times those of equivalent programs
written in C, Fortran or Ada, and that C++ programs had a higher density
of statically detectable errors. He observed that fewer than one in
six programs begun in C++ is ever deployed. He has written papers with
such titles as "Does OO sync with how we think?"

C++ has cool features, such as templates, but it has weird gotchas,
such as the need to declare destructors virtual. Why isn't that the
default? And why do methods have to be declared virtual at all? Is it
because there's no distinction between monomorphic and polymorphic
variables, as Fortran draws with TYPE vs. CLASS?

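For anyone who hasn't been bitten by the destructor gotcha, here is a
minimal sketch (the class names are mine, invented for illustration):

    #include <cstdio>

    struct Base {
        ~Base() { std::puts("~Base"); }           // not virtual
    };

    struct Derived : Base {
        char *buf;
        Derived() : buf(new char[1024]) {}
        ~Derived() { delete[] buf; std::puts("~Derived"); }
    };

    int main() {
        Base *p = new Derived;
        delete p;   // undefined behaviour: ~Derived() never runs,
                    // so buf leaks
    }

Declaring ~Base() virtual makes the delete dispatch to ~Derived()
first, and the leak disappears.
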
I taught a C++ class four times. Every time, I gave students a simple
exercise: write a package for complex arithmetic. I always warned
them, "Do not return a reference to a local object." They always did,
because a function result in C++ can only be a scalar -- and their
codes eventually all went "bang."

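A minimal sketch of the trap, with my own names rather than any
student's actual code:

    #include <iostream>

    struct Complex {
        double re, im;
    };

    // What the students kept writing -- the sum is a local, so the
    // returned reference dangles the moment operator+ returns:
    //
    //   const Complex &operator+(const Complex &a, const Complex &b) {
    //       Complex sum = { a.re + b.re, a.im + b.im };
    //       return sum;              // reference to a local: "bang"
    //   }

    // Returning the result by value avoids the dangling reference:
    Complex operator+(const Complex &a, const Complex &b) {
        return Complex{ a.re + b.re, a.im + b.im };
    }

    int main() {
        Complex x = { 1.0, 2.0 }, y = { 3.0, 4.0 };
        Complex z = x + y;
        std::cout << z.re << " + " << z.im << "i\n";  // prints 4 + 6i
    }
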
Hatton's quantitative investigations suggest there is in fact something
wrong with C++.