IMHO D just missed the mark with the GC in core. It was released at a time when a replacement for C++ was sorely needed, and it tried to position itself as that (obvious from the name).
But by including the GC/runtime it went into a category with C# and Java which are much better options if you're fine with shipping a runtime and GC. Eventually Go showed up to crowd out this space even further.
Meanwhile in the C/C++ replacement camp there was nothing credible until Rust showed up, and nowadays I think Zig is what D wanted to be with more momentum behind it.
Still kind of salty about the directions they took because we could have had a viable C++ alternative way earlier - I remember getting excited about the language a lifetime ago :D
I'd rather say that the GC is the superpower of the language. It allows you to quickly prototype without focusing too much on performance, but it also allows you to come back to the exact same piece of code and rewrite it using malloc at any time. C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
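To make "eagerly free" concrete, here's a minimal sketch of the knobs I mean, using the GC API from core.memory (the API names are real; the buffer sizes are made up):

    import core.memory : GC;
    import core.stdc.stdlib : malloc, free;

    void main()
    {
        // Prototype phase: just let the GC own everything.
        auto buf = new ubyte[](64 * 1024);

        // Eagerly give a chunk back instead of waiting for a collection.
        GC.free(buf.ptr);

        // Hot path: suspend collections around a critical section.
        GC.disable();
        scope(exit) GC.enable();

        // Or rewrite the same code against the C heap, in the same file.
        auto p = cast(ubyte*) malloc(64 * 1024);
        scope(exit) free(p);
    }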
Do you know of any popular real-time (for some definition of real-time) applications written in D? Like, streaming music or video? C has FFmpeg [0]:
> FFmpeg is proudly written in the C programming language for the highest performance. Other fashionable languages like C++, C#, Rust, Go etc do not meet the needs of FFmpeg.
How does D perform in benchmarks against other programming languages?

[0] https://www.linkedin.com/posts/ffmpeg_ffmpeg-is-proudly-writ...
D by definition meets FFmpeg's criteria because it's also a C compiler. Because of that, I never wondered how D performs in the benchmarks, as I know for sure that it can give me the performance of C where I need it.
But then, to use D for performance, would I have to master both D and C, and their interaction? That doesn't seem great. It's like having to learn 2 languages and also how they interact.
> C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
C# C interop is pretty smooth, Java is a different story. The fact that C# is becoming the GC language in game dev is proving my point.
>Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
Yes, and the no-gc stuff was just attempts to backpedal on the wrong initial decision to fit into the use-cases they should have targeted from the start in my opinion.
Look, D was an OK language but it had no corporate backing and there was no case where it was "the only good solution". If it was an actual C++ modernization attempt that stayed C compatible it would have seen much better adoption.
True, but you still need to either generate or manually write the bindings. In D, you just import the C headers directly without depending on the bindings' maintainers.
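Roughly like this (a sketch; square.c is a made-up file assumed to be on the import path, and a .c file that just #includes the headers you want works the same way):

    // square.c -- plain C, no hand-written bindings anywhere
    int square(int x) { return x * x; }

    // app.d
    import square;        // ImportC compiles square.c and imports its symbols
    import std.stdio;

    void main()
    {
        writeln(square(7));   // 49
    }

The build is then something like `dmd app.d square.c`.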
> If it was an actual C++ modernization attempt that stayed C compatible it would have seen much better
Any D compiler is literally also a C compiler. I sincerely don't know how one can be more C compatible than that.
> Yes, and the no-gc stuff was just attempts to backpedal on the wrong initial decision
I think that it was more of an attempt to appease folks who won't use GC even with a gun to their head.
I'm not saying D didn't have nice features - but if D/C#/Java are valid options I'm never picking D - language benefits cannot outweigh the ecosystem/support behind those two. Go picked a niche with backend plumbing and got Google backing to push it through.
Meanwhile look at how popular Zig is getting 2 decades later. Why is that not D? D also has comptime and has had it for over a decade, I think? Zig proves there's a need that D was in the perfect spot to fill if it did not make the GC decision - and we could have had 2 decades of software written in D instead of C++ :)
> D was in the perfect spot to fill if it did not make the GC decision
I just find it hard to believe that the GC is the one big wart that pushed everyone away from the language. To me, the GC combined with the full power of a systems language are the killer features that made me stick to D. The language is not perfect and has bad parts too, but I really don't see the GC as one of them.
> The fact that C# is becoming the GC language in game dev is proving my point.
Respectfully, it doesn't prove your point. Unity is a commercial product that employed C# because they could sell it easily, not because it's the best language for game dev.
Godot supports C# because Microsoft sponsored the maintainers precisely on that condition.
> The fact that C# is becoming the GC language in game dev is proving my point.
That is just the Unity effect. Godot adopted C# because they get paid to do so by Microsoft.
C# allows for far less control over garbage collection compared to D. The decision to use C# is partly responsible for the bad reputation of Unity games, as it causes a lot of stutters when people are not very careful about how they manage memory.
The creator of the Mono runtime actually calls using C# his multi-million-dollar mistake and instead works on Swift bindings for Godot: https://www.youtube.com/watch?v=tzt36EGKEZo
1. Runtime: A runtime is any code that is not a direct result of compiling the program's code (i.e. it is used across different programs) that is linked, either statically or dynamically, into the executable. I remember that when I learnt C in the eighties, the book said that C isn't just a language but a rich runtime. Rust also has a rich runtime. It's true that you can write Rust in a mode without a runtime, but then you can barely even use strings, and most Rust programs use the runtime. What's different about Java (in the way it's most commonly used) isn't that it has a runtime, but that it relies on a JIT compiler included in the runtime. A JIT has pros and cons, but they're not a general feature of "a runtime".
2. GC: A garbage collector is any mechanism that automatically reuses a heap object's memory after it becomes unreachable. The two classic GC designs, reference counting and tracing, date back to the sixties, and have evolved in different ways. E.g. in the eighties and nineties there were GC designs where the compiler could either infer a non-escaping object's lifetime and statically insert a `free`, or have the language track lifetimes ("regions", 1994) and have the compiler statically insert a `free` based on information annotated in the language. On the other hand, in the eighties Andrew Appel famously showed that moving tracing collectors "can be faster than stack allocation". So different GCs employ different combinations of static inference and dynamic information on object reachability to optimise for different things, such as footprint or throughput. There are tradeoffs between having a GC or not, and they also exist between Rust (GC) and Zig (no GC), e.g. around arenas, but most tradeoffs are among the different GC algorithms. Java, Go, and Rust use very different GCs with different tradeoffs.
So the problem with using the terms "runtime" and "GC" colloquially as they're used today is not so much that it differs from the literature, but that it misses what the actual tradeoffs are. We can talk about the pros and cons of linking a runtime statically or dynamically, we can talk about the pros and cons of AOT vs. JIT compilation, and we can talk about the pros and cons of a refcounting/"static" GC algorithm vs a moving tracing algorithm, but talking in general about having a GC/runtime or not, even if these things mean something specific in the colloquial usage, is not very useful because it doesn't express the most relevant properties.
OP saying Rust has a kind of GC is absurd. Rust keeps track of the lifetime of variables and drops them at the end of their lifecycle. If you really want to call that a GC you should at least make a huge distinction that it works at compile time: the generated code will have drop calls inserted without any overhead at runtime. But no one calls that a GC.
You see, OP is trying to muddy the waters when they claim C has a runtime. While there is a tiny amount of truth to that, in the sense that there's some code you don't write present at runtime, if that's how you define runtime the term loses all meaning, since even assemblers insert code you don't have to write yourself, like keeping track of offsets and so on.
Languages like Java and D have a runtime that includes lots of things you don't call yourself, like GC obviously, but also many stdlib functions that are needed and which you can't remove because they may be used internally. That's a huge difference from inserting some code like Rust and C do.
To be fair, D does let you remove the runtime or even replace it. But it’s not easy by any means.
> If you really want to call that a GC you should at least make a huge distinction that it works at compile time: the generated code will have drop calls inserted without any overhead at runtime. But no one calls that a GC.
Except for the memory management literature, because it's interested in the actual tradeoffs of memory management. A compiler inferring lifetimes, either automatically for some objects or for most objects based on language annotations, has been part of GC research for decades now.
The distinction of working at compile time or runtime is far from huge. Working at compile time reduces the work associated with modifying the counters in a refcounting GC in many situations, but the bigger differences are between optimising for footprint or for throughput. When you mathematically model the amount of CPU spent on memory management and the heap size as functions of the allocation rate and live set size (residency), the big differences are not whether calling `free` is determined statically or not.
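To put rough numbers on "modeling the CPU spent": the usual first-order model for a tracing collector (the textbook form, my sketch, not something claimed elsewhere in this thread) is

    GC CPU per second  ~  c * a * L / (H - L)

where a is the allocation rate, L the live set, H the heap size, and c a per-collector constant: each collection traces the live set (cost ~ L) and happens roughly every (H - L) / a seconds. Note there is no term for whether the frees were decided statically or at runtime; headroom and residency dominate.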
So you can call that GC (as is done in academic memory management research) or not (as is done in colloquial use), but that's not where the main distinction is. A refcounting algorithm, like that found in Rust's (and C++'s) runtime is such a classic GC that not calling it a GC is just confusing.
My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
Do you want GC? Great! Don't want GC? Well, you can turn it off and lose access to most things. Do you want a borrow-checker? Great, D does that as well, though less wholeheartedly than Rust. Do you want a safer C/memory safety? There's the SafeD mode. And probably more that I forget.
I wonder if all these different (often incompatible) ways of using D end up fragmenting the D ecosystem, and in turn make it that much harder for it to gain critical mass.
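To be fair, the SafeD mode at least is just an attribute. A rough sketch (my made-up example; the exact set of rejected operations depends on flags like -preview=dip1000):

    @safe int deref(int* p)
    {
        return *p;                 // fine: plain dereference is allowed
        // auto q = p + 1;         // error: pointer arithmetic not allowed in @safe
        // auto r = cast(long*) p; // error: unsafe cast not allowed in @safe
    }

    @system void poke(int* p)
    {
        auto q = p + 1;            // anything goes in @system code
    }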
> My (likely unfair) impression of D is that it feels a bit rudderless
The more positive phrasing would be that it is a very pragmatic language. And I really like this.
Currently opinionated languages are really in vogue. Yes, they are easier to market, but I have personally soured on this approach now that I am a bit older.
There is not one right way to program. It is fun to use an opinionated language until you hit a problem that it doesn't cover very well, and suddenly you are in a world of pain. I like languages that give me escape hatches. That allow me to program the way I want to.
>My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
This can very clearly be said about C++ as well, which may have started out as C With Classes but became very kitchen sinky. Most things that get used accrete a lot of features over time, though.
FWIW, I think "standing out" due to paradigm commitment is mostly downstream of "xyz-purity => fewer ways to do things => have to think/work more within the constraints given". This then raises various other important questions, of course. E.g., do said constraints actually buy users things of value overcoming their costs, and if so for what user subpopulations? Most adoption is just hype-driven, though. Not claiming you said otherwise, but I also don't think the kind of standing out you're talking about correlates so well to marketing. E.g., browsers marketed Javascript (which few praised for its PLang properties in early versions).
Re: the point about Zig: Especially considering I used and played a lot with D's BetterC model when I was a student, I wonder what Walter, as a language designer, thinks about the development and rise in popularity of Zig. Of course, thinking "strategically" about a language's adoption comes off as Machiavellian in a crowd of tinkerers/engineers, but I can't help but wonder.
Zig went so deep into avoiding "hidden behavior" that destructors and operator overloading were banned. Operator overloading is indeed a mess, but destructors are too useful. The only compromise for destructors was adding the "defer" feature. (Was there ever a corresponding "error if you don't defer" feature?)
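For contrast, a rough D sketch of the two styles (AutoFile is made up; the RAII version cleans up on every exit path with no call-site ceremony, while scope(exit) is the defer-style equivalent you must repeat at each use):

    import core.stdc.stdio : FILE, fopen, fclose;

    struct AutoFile
    {
        FILE* f;
        ~this() { if (f) fclose(f); }   // destructor: runs at scope end, always
    }

    void withRaii()
    {
        auto file = AutoFile(fopen("log.txt", "r"));
        // ... use file.f; cleanup happens automatically on return or exception
    }

    void withDefer()
    {
        auto f = fopen("log.txt", "r");
        scope(exit) if (f) fclose(f);   // D's spelling of Zig's defer
        // ... use f
    }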
Fil-C, the new memory-safe C/C++ compiler, actually achieved that by introducing a GC; with that in mind, I'd say D was kind of a misunderstood prodigy in retrospect.
There are two classes of programs: stuff written in C for historic reasons that could have been written in a higher-level language, but where a rewrite is too expensive - that's Fil-C's niche.
And stuff where you need low-level control - Rust/C++/Zig.
Fil-C works fine with all C code no matter how low level. There's a small performance overhead, but for almost every scenario it's acceptable!
I often see people lament the lack of popularity for D in comparison to Rust. I've always been curious about D as I like a lot of what Rust does, but never found the time to deep dive, and would appreciate someone whetting my appetite.
Are there technical reasons that Rust took off and D didn't?
What are some advantages of D over Rust (and vice versa)?
D and Rust are on opposite sides in dealing with memory safety. Rust ensures safety by constantly making you think about memory with its highly sophisticated compile-time checks. D, on the other hand, lets you either employ a GC and forget about (almost) all memory-safety concerns, or opt out block by block into cowboy-style manual memory management.
D retains object-oriented programming but also allows functional programming, while Rust seems to be specifically designed for functional programming and does not allow OOP in the conventional sense.
I've been working with D for a couple of months now and I noticed that it's almost a no-brainer to port C/C++ code to D because it mostly builds on the same semantics. With Rust, porting a piece of code may often require rethinking the whole thing from scratch.
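For example, the block-scoped opt-out I mentioned is the @nogc attribute. A minimal sketch (sumSquares is a made-up example; the commented line is what the compiler would reject):

    import core.stdc.stdlib : malloc, free;

    @nogc nothrow int sumSquares(int n)
    {
        // auto tmp = new int[](n);  // error: GC allocation in @nogc code
        auto tmp = cast(int*) malloc(n * int.sizeof);
        scope(exit) free(tmp);       // manual memory is still fine here

        int total = 0;
        foreach (i; 0 .. n)
        {
            tmp[i] = i * i;
            total += tmp[i];
        }
        return total;
    }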
The term 'Cowboy coder' has been around for some time. Everybody's favourite unreliable source of knowledge has issues dating back to 2011: <https://en.wikipedia.org/wiki/Cowboy_coding>
> Are there technical reasons that Rust took off and D didn't?
As someone who considered it back then when it actually stood a chance to become the next big thing, from what I remember, the whole ecosystem was just too confusing and simply didn't look stable and reliable enough to build upon long-term. A few examples:
* The compiler situation: The official compiler was not yet FOSS and other compilers were not available, or at least not usable. The switch to FOSS happened way too late, and GCC support took too long to mature.
* This whole D version 1 vs version 2 thingy
* This whole Phobos vs Tango standard library thingy
* This whole GC vs no-GC thingy
This is not a judgement on D itself or its governance. I always thought it's a very nice language and the project simply lacked manpower and commercial backing to overcome the magical barrier of wide adoption. There was some excitement when Facebook picked it up, but unfortunately, it seems it didn't really stick.
1. D had a split similar to Python 2 vs 3 early on with having the garbage collector or not (and therefore effectively 2 standard libraries), but unlike Python it didn't already have a massive community that was willing to suffer through it.
2. It didn't really have any big backing. Rust having Mozilla backing it for integration with Firefox makes a pretty big difference.
3. D wasn't different enough; it felt much more like "this is C++ done better" than its own language, but unlike C++, which is mostly a superset of C, you couldn't do "C with classes"-style migrations.
D has much better metaprogramming compared to Rust. That has been one of the only things making me still write a few D programs. You can do compile time type introspection to generate types or functions from other elements without having to create a compiler plug-in parsing Rust and manipulating syntax trees.
Rust has some of the functional programming niceties like algebraic data types and that's something lacking in D.
One feature of D that I really wish other languages would adopt is its metaprogramming and compile-time code evaluation (IIRC you can use most of the language at compile time, as it runs in a bytecode VM), down to even having functions that generate source code which is then treated as part of the compilation process. I'm not sure Rust has it, but if it does to a similar extent as D, that might be the reason I check it again more seriously.
Of course you can make codegen as part of your build process with any language, but that can be kludgy (and often limited to a single project).
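As a small made-up sketch of what I mean: the generator below is ordinary D, the compiler runs it (CTFE), and mixin() splices the returned string back in as source:

    // An ordinary function; when called from mixin() it runs at compile time.
    string makeGetters(string[] fields)
    {
        string code;
        foreach (f; fields)
            code ~= "int " ~ f ~ "() const { return _" ~ f ~ "; }\n";
        return code;
    }

    struct Point
    {
        private int _x, _y;
        mixin(makeGetters(["x", "y"]));  // generates x() and y() at compile time
    }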
Arguably, most of the metaprogramming in D is done with templates, and it comes with all the flaws of templates in C++. The error messages are long and it's hard to decipher what exactly went wrong (static asserts help a lot with this, when they actually exist). IDE support is non-existent after a certain point, because the IDE can't reason about code that doesn't exist yet. And code gets less self-documenting because it's all Output(T,U) foo(T, U)(T t, U u), and even the official samples use auto everywhere because it's hard to get the actual output types.
I'd say D's template error messages are much better than C++'s, because D prints the instantiation stack with exact locations in the code and the whole message is just more concise. In C++, it just prints a bunch of gibberish, and you're basically left guessing.
It is quite ridiculous to compare C++'s metaprogramming with D's. For one, in D it's the same language, and one can choose whether to execute compile-time-constant parts at compile time or run time. In C++ it's a completely different language that was bolted on. C++ did adopt compile-time constant expressions from D, though.
> Are there technical reasons that Rust took off and D didn't?
My (somewhat outdated) experience is that D feels like a better and more elegant C++. Rust certainly has been influenced by C and C++, but it also took a lot of inspiration from the ML-family of languages and it has a much stronger type system as a consequence.
More like the companies that jumped into D versus Rust: D only had Facebook and Remedy Games toying a bit with it.
Many of us believe in automatic memory management for systems programming, having used quite a few such languages in those scenarios, so that is already one thing that D does better than Rust.
There is the GC phobia, mostly by folks who don't get that not all GCs are born alike; just like you need to pick and choose your malloc()/free() implementation depending on the scenario, there are many ways to implement a GC, and having a GC doesn't preclude having value types, stack and global memory segment allocation.
D has compile-time reflection, its compile-time metaprogramming is much easier to use than Rust macros, and it does compile-time execution as well.
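For instance (a quick made-up sketch), deriving a serializer by walking a struct's fields at compile time; the foreach over tupleof is unrolled by the compiler, one iteration per field:

    import std.conv : to;

    string toCsv(T)(T t)
    {
        string row;
        foreach (i, field; t.tupleof)  // compile-time unrolled over the fields
        {
            if (i) row ~= ",";
            row ~= field.to!string;
        }
        return row;
    }

    struct Reading { int id; double celsius; }
    // toCsv(Reading(3, 21.5)) == "3,21.5"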
And the compile times! It is like using Turbo Pascal, Delphi,... even though the language is like C++ in capabilities. Yet another proof that complexity doesn't imply slow compile times in a native systems language.
For me, C# and Swift cover the tasks at work where, in the past, I could have reached for D, mostly due to who is behind those languages, and I don't want to be that guy who leaves and was the only one who knew the stack.
> Many of us believe in automatic memory management for systems programming
The problem is the term "systems programming". For some, it's kernels and device drivers. For some, it's embedded real-time systems. For some, it's databases, game engines, compilers, language run-times, whatever.
There is no GC that could possibly handle all these use-cases.
Except there is, only among GC-haters there is not.
People forget there isn't ONE GC, rather several of possible implementations depending on the use case.
Java Real-Time GC implementations are quite capable of powering weapon targeting systems on the battlefield, where a failure causes the wrong side to die.
> Aonix PERC Ultra Virtual Machine supports Lockheed Martin's Java components in Aegis Weapon System aboard guided missile cruiser USS Bunker Hill
Look, when someone says "There's no thing that could handle A,B,C, and D at the same time", answering "But there's one handling B" is not very convincing.
(Also, what's with this stupid "hater" thing, it's garbage collection we're talking about, not war crimes)
It is, because there isn't a single language that is a hammer for all types of nails.
It isn't stupid, it is the reality of how many have behaved for decades.
Thankfully, that issue has been slowly sorting itself out through generational replacement.
I already enjoy that on some platforms we have reached a point where the old ways are constrained to a few scenarios, and that's it.
Go wasn't around when D was released and Java has for the longest time been quite horrible (I first learnt it before diamond inference was a thing, but leaving that aside it's been overly verbose and awkward until relatively recently).
Depends if one considers writing compilers, linkers, JITs, database engines, and running bare metal on embedded real time systems "systems programming".
> (People who want that sort of stupidity already have Go and Java, they don't need D.)
Go wasn't around when D was created, and Java was an unbelievable memory hog, with execution speeds that could only be described as "glacial".
As an example, using my 2001 desktop, the `ls` program at the time was a few kb, needed about the same in runtime RAM and started up and completed execution in under 100ms.
The almost equivalent Java program I wrote in 2001 to list files (with `ls` options) took over 5s just to start up and chewed through about 16MB of RAM (around 1/4 of my system's RAM).
Java was a non-starter at the time D came out - the difference in execution speed between C++ systems programs and Java systems programs felt, to me (i.e. my perception), larger than the current difference in performance between C++/C/Rust programs and Bash shell scripts.
As far as adoption is concerned, I'm not sure it should be that big of a concern.
After all, D is supported by GCC and Clang and continually being maintained, and if updates stopped coming at some point in the future, anyone who knew a bit of C / Java / insert language here could easily port it to their language of choice.
Meanwhile, its syntax is more expressive than many other compiled languages, the library is feature-rich and fairly tidy, and for me it's been a joy to use.
GCC usually drops frontends if there are no maintainers around, it already happened to gcj, and I am waiting for the same to happen to gccgo any time now, as it has hardly gotten any updates since Go 1.18.
The team is quite small and mostly volunteers, so there is the question of how long Walter Bright can keep at it, and who will keep it going when he passes the torch.
I like D in general; however, it is missing out in WASM, where other languages like Rust, Zig, even Go are thriving. The official reasoning usually included waiting for GC support from the WASM runtime, but other GC languages seem to just ship their own GC and move on.
I tried some D some time ago; it is a nice language. Given today's landscape of programming languages, I think it's difficult to justify why a program should be written in D when there are more programming languages that overlap in features. It also depends on how fast you need to scale in developers and how quickly people can learn a language (and not just the syntax), so popularity is also important. I work in consultancy and this is what I always factor in for a client.
When I was a student, our group was forced to use D instead of C++ for CS2* classes. That was back in 2009. After 16 years, I see that the level of adoption did not change at all.
To be the modern and sane C++ that C++ could have been (rather than the complex collection of tacked-on languages that C++ is), with modules instead of the mess of C++'s headers, and with instant compilation times that do not need a compilation server farm.
One good case for it that I see is a viable basis for cross-platform desktop apps. Today, cross-platform desktop GUI apps are either just a snapshot of the website contained inside Electron, or a C/C++ code base with manual memory management. D can serve as a nice middle ground in that space.
Where is the extensive tooling support for this use case if that is where you think it fits?
Apple is all in on Swift, so you will not be writing native macOS or iOS UI code in D; best case, you put your business logic in D, but you can do that in any language which has bindings to Swift/Obj-C.
Android is all in on Kotlin/Java, again not D.
Microsoft is all in on C#, again not D.
On Linux, your two best options for UI are GTK and Qt, C and C++ respectively.
So the only place where you could have seamless integration is Linux, through FFI.
Here's the thing though, for building a core layer that you can wrap a UI around, Rust has insanely good ergonomics, with very good third-party libraries to automatically generate safe bindings to a decent amount of languages, at least all those listed above and WASM for web.
It's true that there is no off the shelf tool that you can use right now to write your app, but it certainly doesn't prove that making such a tool is impossible or even complicated.
It makes sense for a complex productivity app (e.g. an office suite editor) to implement the UI from scratch anyway, and for that they may choose D. If Jane Street didn't pick OCaml, it would've died long ago -- in the same manner, some company might pick D to do UI or anything else really.
Handling energy efficiency/a11y/i18n is non-trivial in any language; using the paved road of the system's native implementation solves many of those problems out of the box.
You would need to reimplement all of that in D for your UI layer, when all you wanted to do was build an application to solve a problem; you weren't in the business of building a UI library to begin with.
You'd be surprised to see how active the D community is, despite your fair point that it's noticeably smaller than in the "competing" (in quotes because it's not a competition, actually) languages.
The latest release [1] was on Jan 7th, and it contains more updates than, say, the latest release of Dart, which has one of the largest corporations behind it.
> This very post is probably his too, under an alt :)
The probability of that is virtually zero. Walter is a principled person, has better things to do, and his writing style is vastly different from the OP's.
When you design a language for everything, nobody will use it for anything. When you design a language to simply accomplish one thing, people will use it for everything. This is because people get the most efficient training using that simple language for one thing. From there it is only marginally more effort to carry some boilerplate.
That "one thing" could be real or propaganda. Rust's one thing is writing "memory-safe" without GC.
Eventually the marginal cost becomes too high, or you are tricked by advertising and "graduate" from awk to perl. From there, depending on the pull of the community or the actual utility of the language, you will use it for more and more tasks. If the community pull is strong, your programs start to look like line noise or boilerplate hell. If the utility for your problems is genuine, they remain simple, but you probably aren't producing the most efficient binaries.
As for why C programmers don't just use -betterC: well, some do, but for most people the reality is that they can just do it in C, and those who want more prefer the C -> C++ path (ofc the vast majority of projects just start as C++, which makes -betterC moot).
C++'s one thing is C with objects.
If you learned to code writing Go what did you do?
If you learned to code writing D what did you do?
That's not to say you can't learn to code from writing D, just that it takes discipline. Most people don't even know a problem exists before they are already learning some language or tool, nor do they have the goal of building everything; most programmers are lazy, they want to build the minimal amount and end up building everything by accident.
Why don't experienced devs use D then? They think if they strive for ideological purity that they won't "build everything" next time, or they just enjoy ideological purity as its own mental exercise. Unix faithfuls want to show that computing can be (conceptually) simple in implementation and use. Rust programmers want to show that those simple (to use) unix programs can be (memory) safe. To a senior engineer, D is just too good and easy to take.
I don't use D... I find Nim helps with even lower ceremony. That said, it's hard for me to understand how "getting into gcc" is failure. The list of such PLangs is very short. People can be very parochial, though. They probably mean pretty shallow things (just one example, but something like "failed to convert me, personally", or "jobs I'd like to apply for", etc.).
Maybe people should instead talk about how they use D or what they'd like to see added to D? In an attempt to be the change one wants to see, I'd say named arguments are a real win and there seem to be some stalled proposals for that in D last I checked.
Years ago I got interested in D. It's a great language, but at the time its garbage collector was leaky. There weren't any D entries on the Benchmarks Game back then, so I ported most of the programs to D and optimized them as best I could as a newcomer. Performance-wise, D was in the C/Rust/C++ range, and in many cases it even beat Rust and C++. I tried to get the community involved to help the language gain wider adoption, but nothing really happened. I think everything has its moment, and D's moment has passed. They didn't make the most of the window when D could have gone mainstream.
This is a somewhat simplistic view of ownership and borrowing for modern programming languages.
Pointers are not the only 'pointers' to resources. You can have handles specific to your codebase or system, you can have indices to objects in some flat array that the rest of your codebase uses, even temporary file names.
An object oriented (or 'multi paradigm') language has to account for these and not just literal pointers.
This is handled reasonably well in both Rust and C++. (In the spirit of avoiding yet another C++ vs Rust flamewar here: yes, the semantics are different; no, it does not make sense for C++ to adopt Rust semantics.)
I don't know D so I'm probably missing some basic syntax. If pointers cannot be copied how do you have multiple objects referencing the same shared object?
OOP and ownership are two concepts that mix poorly - ownership in the presence of OOP-like constructs is never simple.
The reason for that is OOP tends to favor constructs where each object holds references to other objects, creating whole graphs; it's not uncommon that from a single object, hundreds of others can be traversed.
Even something so simple as calling a member function from a member function becomes incredibly difficult to handle.
Tbh - this is with good reason, one of the biggest flaws of OOP is that if x.foo() calls x.bar() in the middle, x.bar() can clobber a lot of local state, and result in code that's very difficult to reason about, both for the compiler and the programmer.
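A tiny made-up sketch of the trap:

    class Cursor
    {
        int pos;

        void foo()
        {
            pos = 10;
            bar();              // looks harmless at the call site...
            assert(pos == 10);  // ...but this fails now
        }

        void bar() { pos = 0; } // and any subclass override could do worse
    }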
And it's a simple case; OOP offers tons of tools to make the programmer's job even more difficult - virtual methods, object chains with callbacks, etc. It's just not a clean programming style.
Edit: Just to make it clear, I am not pointing out these problems to sell you a solution, or even to imply that I have one. I'm not saying programming style X is better.
I work at a D company. We tend to use OOP only for state owners with strict dependencies, so it's rare to even get cycles. It is extremely useful for modeling application state. However, all the domain data is described by immutable values and objects are accessed via parameters as much as fields.
When commandline apps were everywhere, people dreamed of graphical interfaces. Burdened by having to also do jobs that it was bad at, the commandline got a bad reputation. It took the dominance of the desktop for commandline apps to find their niche.
In a similar way, OOP is cursed by its popularity. It has to become part of a mixed diet so that people can put it where it has advantages, and it does have advantages.
It worked alright for Rust, and yes Rust does support OOP, there are many meanings to what is OOP from CS point of view.
I have ported Ray Tracing in One Weekend into Rust, while keeping the same OOP design from the tutorial, and affine types were not an impediment to interfaces, polymorphism and dynamic dispatch.
>one of the biggest flaws of OOP is that if x.foo() calls x.bar() in the middle, x.bar() can clobber a lot of local state, and result in code that's very difficult to reason about
That's more a problem of having mutable references, you'd have the same problem in a procedural language.
On the flipside, with OOP is usually quite easy to put a debugger breakpoint on a particular line and see the full picture of what the program is doing.
In diehard FP (e.g. Haskell) it's hard to even place a breakpoint, let alone see the complete state. In many cases, where implementing a piece of logic without carrying a lot of state is impossible, functional programming can also become very confusing. This is especially true when introducing certain theoretical concepts that facilitate working with IO and state, such as Monad Transformers.
That is true, but on the flip-flip side, while procedural or FP programs are usually easy to run piecewise, with OOP, you have to run the entire app, and navigate to the statement in question to be even able to debug it.
Imho, most FP languages have very serious human-interface issues.
It's no accident that C likes statements (and not too complex ones at that). You can read and parse a statement atomically, which makes the code much easier to read.
In contrast, FP tends to be very, very dense, or even worse, have a density that's super inconsistent.
I agree with the sentiment; I really like D and find it a missed opportunity that it never took off in adoption.
Most of what made D special is nowadays partially available in mainstream languages, making the adoption pitch even harder, and the lack of LLM training data doesn't help either.
Eventually yes, when "incapable" becomes synonymous with finding a job in an AI-dominated software factory industry.
Enterprise CMS deployment projects have already dropped a number of asset teams, translators, integration teams and backend devs, replaced by a mix of AI, SaaS and iPaaS tools.
Now the teams are a fraction of the size they were five years ago.
Fear not, there will be always a place for the few ones that can invert a tree, calculate how many golf balls fit into a plane, and are elected to work at the AI dungeons as the new druids.
While I don't share this cynical worldview, I am mildly amused by the concept of a future where, Warhammer 40,000 style, us code monkeys get replaced by tech priests who appease the machine gods by burning incense and invoking hymns.
Same for ERP/CRM/HRM and some financial systems; all systems that were heavily 'no-code' (or a lot of configuration with knobs and switches rather than code) before AI are now just going to lose their programmers (and the other roles); the business logic / financial calcs etc. were already done by other people upfront in Excel, Visio etc.; now you can just throw that into Claude Code. These systems have decades of rigid code practices, so there is not a lot of architecting/design to be done in the first place.
> Yeah, that is why carpenters are still around and no one buys Ikea.
I'm sorry, what? Are you suggesting that Ikea made carpenters obsolete? It's been less than 6 months since last I had a professional carpenter do work in my house. He seemed very real. And charged very real prices. This despite the fact that I've got lots of Ikea stuff.
Nah, IKEA has replaced moving furniture with throwing it away and rebuying it. Prior to IKEA, hiring a carpenter was also something done a few times in a lifetime/century. If anything, it has commoditized creating new furniture.
> Compared to before, not a lot of carpenters/furniture makers are left.
Which is it? Carpenters or furniture makers? Because the two have nothing in common beyond the fact that both professions primarily work with wood. The former has been unaffected by automation – or even might plausibly have more demand due to the overall economic activity caused by automation! The latter certainly has been greatly affected.
The fact that people all over the thread are mixing up the two is mindboggling. Is there a language issue or something?
There is a language issue: carpenter is used as synonym of woodworker. It's like someone who doesn't know anything about computers using the term 'memory' to mean storage rather than working memory (i.e. RAM).
> that is why carpenters are still around and no one buys Ikea
The irony in this statement is hilarious, and perfectly sums up the reality of the situation IMO.
For anyone who doesn't understand the irony: a carpenter is someone who makes things like houses, out of wood. They absolutely still fucking exist.
Industrialised furniture such as IKEA sells has reduced the reliance on a workforce of cabinet makers - people who make furniture using joinery.
Now if you want to go ask a carpenter to make you a table he can probably make one, but it's going to look like construction lumber nailed together. Which is also quite a coincidence when you consider the results of asking spicy autocomplete to do anything more complex than auto-complete a half-written line of code.
> I think you have misunderstood what a carpenter is. A carpenter is someone who makes wooden furniture (among other things).
I think _you_ have misunderstood what a carpenter is. At least where I live, you might get a carpenter to erect the wood framing for a house. Or build a wooden staircase. Or erect a drywall. I'm sure most carpenters worth their salt could plausibly also make wooden furniture, at an exorbitant cost, but it's not at all what they do.
I sanity checked with Wiktionary, and it agrees: "A person skilled at carpentry, the trade of cutting and joining timber in order to construct buildings or other structures."
My experience is that all LLMs that I have tested so far did a very good job producing D code.
I actually think that the average D code produced has been superior to the code produced for the C++ problems I tested. This may be an outlier (the problems are quite different), but the quality issues I saw on the C++ side came partially from the ease with which the language enables incompatible use of different features to achieve similar goals (e.g. smart_ptrs vs new/delete).
I work with D, and LLMs do very well with it. I don't know if it could be better, but they do D well enough. The only problem is working on a complex system that cannot all be held in context at once.
Let's be serious, most people are regulars and this has been on the front page multiple times like constantly. And it was upvoted 4 times on new to get to the front page rapidly. It's not something new that we're all "Oh that's cool".
We also know there are tons of sock accounts.
And no, half of the posts on the front page can't be put in that bucket, since they aren't constantly reposted like this.
So, while there are a few people who will have learnt about this for the first time, most of you know what it is and somehow feel like this is your chance to go "look, I'm smarter than Iain". And I think you've failed again.
Do you know the joke with "I'll repeat the joke to you until you understand it?".
That's why some things get reposted and upvoted. In hope of getting someone else to understand them.
By the way, do you complain about sock accounts when yet another "Here is this problem, and by the way we sell a product that claims to solve it" gets upvoted?
> Do you know the joke with "I'll repeat the joke to you until you understand it?".
Nope. That's not a joke. That's not funny.
> That's why some things get reposted and upvoted. In hope of getting someone else to understand them.
No, they get reposted and upvoted by sock accounts in hope that someone will finally be interested in a 30 year old programming language.
> By the way, do you complain about sock accounts when yet another "Here is this problem, and by the way we sell a product that claims to solve it" gets upvoted?
What does content marketing have to do with sock accounts?
I'm honestly not sure what point you thought was getting made. Do you honestly think people don't understand D? It's been looked at repeatedly and still nothing cool is built in it.
> What does content marketing have to do with sock accounts?
If you accuse interesting subjects of being pushed by sock accounts, why wouldn't content marketing, which has even more interest in getting to the front page, be pushed by sock accounts?
Interesting subjects? A 30(?) year old programming language that has been on here repeatedly is not an interesting subject. New programming languages are. Cool things written in obscure languages is also interesting.
The content marketing that is pushed by sock accounts is wank, generally drops pretty quickly, and gets called out like this. But you just complain about content marketing because you're one of those people who think you should make money but no one else should.
You're harsh but that's OK. There is a lot of truth in what you're saying. I really wish people would quit downvoting everything they disagree with. HN would be 100x better if both the downvote and flag buttons were removed.
To me, a C guy, the focus on garbage collection is a turn-off. I'm aware that D can work without it, but it's unclear how much of the standard library etc works fine with no garbage collection. That hasn't been explained, that I saw at least.
The biggest problem however is the bootstrapping requirement, which is annoyingly difficult or too involved. (See explanation in my other post.)
I'm not sure how I'm being harsh. It's literally a somewhat well known programming language being reposted for the 100th time or something silly like that. I'm literally just pointing out the truth and it's almost certainly the main poster downvoting things.
As evidenced by several other comments, even if someone already knows about D they can still use posts like this as a prompt for talking about their experiences and current thoughts about it (which can be different from 1, 5 or 10 years ago).
In all seriousness, do you honestly think this site has 10,000 new users a day? How many people do you think are on here who aren't very well informed? Honestly, I'm just wondering.
Also, do you know it only gets to the front page if the hardcore who go to /new upvote it? How many of those hardcore people don't know what D is?
It was competing with C and Java when it came out. People who like C will not use a language with garbage collection, even one that allows you to not use it. Against Java, it was a losing battle due to Java being backed by a giant (Sun, then Oracle) and basically taking the world by storm. Then there were also license problems in early versions of D, and two incompatible and competing standard libraries dividing the community. By the time all these problems were fixed, like a decade ago, it was already too late to make a comeback. Today D is a nice language with 3 different compilers with different strengths: one compiles very fast, one produces faster binaries, and one also does that but works in the GCC ecosystem. That's something few languages have. D even has a betterC mode now which makes it very good as a C replacement, with speed and size equivalent to or better than an equivalent C binary... and D has arguably the best metaprogramming capabilities of any language that is not a Lisp, including Zig. But no one seems to care anymore, as all the hotness is now with Rust and Zig in the systems languages space.
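And the betterC mode really is minimal. A sketch, compiled with `dmd -betterC hello.d`, where only the C standard library is assumed and GC-backed features become compile errors:

    // hello.d -- no D runtime, no GC, no TypeInfo
    import core.stdc.stdio : printf;

    extern (C) int main()
    {
        printf("hello from -betterC\n");
        return 0;
    }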
I like and use D but Nim has better metaprogramming capabilities (but D's templates are top-notch except for the error message cascades). (And Zig's metaprogramming is severely hobbled by Andrew's hatred of macros, mixins, and anything else that smells of code generation.)
> Note: ImportC and BetterC are very different. ImportC is an actual C compiler. BetterC is a subset of D that relies only on the existence of the C Standard library. BetterC code can be linked with ImportC code, too.
D contains an actual C compiler because Walter Bright wrote one long ago and then incorporated it into D.
Zig also contains an actual C compiler, based on clang, and has a @cImport directive.
I had D support in my distro for a while, but regrettably had to remove it. There's just too many problems with this language and how it's packaged and offered to the end user, IMO. It was too much hassle to keep it around.
To get it onto one's system, a bootstrapping step is required. Either building gcc 9 (and only gcc 9) with D support, then using that gcc to bootstrap a later version, or bootstrapping dmd with itself.
In the former case I'm already having to bootstrap Ada onto the system, so D just adds another level of pain. It also doesn't support all the same architectures as other gcc languages.
In the case of dmd, last I checked they just shove a tarball at you containing vague instructions and dead FTP links. Later I think they "updated" this to some kind of fancy script that autodownloads things. Neither is acceptable for my purposes.
I just want a simple tarball containing everything needed with clear instructions, and no auto downloading anything, like at least 90% of other packages provide. Why is this so hard?
Tip: pretend it's still the BBS days and you are distributing your software. How would you do it? That's how you should still do it.
I haven't tried the LLVM D compiler, and at this point quite frankly I don't want to waste any more time with the language, in its current form at least--with apologies to Walter Bright, who is truly a smart and likeable guy. Like I said, it's regrettable.
The only way to revive interest in D is through a well planned rebranding and marketing campaign. I think the technical foundation is pretty sound, but the whole image and presentation needs a major overhaul. I have an idea of how to approach that, were there interest.
The first step would be to revive and update the C/C++ version of the D compiler for gcc so as to remove the bootstrapping requirement and allow the latest D to be built, plus a commitment to keeping this up to date indefinitely. It needs to support all architectures that GCC does.
Next, a rebranding focused on the power of D without garbage collection.
I'm willing to offer ongoing consultation in this area and assistance in the form of distro support and promotion, in exchange for a Broadwell or later Xeon workstation with at least 40 cores. (Approx $350 on Ebay.) That's the cost of entry for me as I have way too much work to do and too few available CPU cycles to process it.
Otherwise, I sincerely wish the D folks best of luck. The language has a lot of good ideas and I trust that Walter knows what he is doing from a technical standpoint. The marketing has not been successful however, sadly.
"We all know of the language and chosen not to use it."
Is a strange claim, and hard to cite. But I think many HNers have tried out D and decided that it's not good enough for them for anything. It is certainly advertised hard here.
Never has an old language gained traction, it's all about the initial network effects created by excitement.
No matter how much better it is than C now, C is slowly losing traction, and its potential replacements already have up-and-running communities (Rust, Zig, etc.).
Not everything needs to have "traction", "excitement" or the biggest community. D is a useful, well designed programming language that many thousands of people in this vast world enjoy using, and if you enjoy it too, you can use it. Isn't that nice?
Oh a programming language certainly needs to have traction and community for it to succeed, or be a viable option for serious projects.
You can code your quines in whatever you'd like, but a serious project needs existence of good tooling, good libraries, proven track record & devs that speak the language.
"Good tooling, good libraries, proven track record" are all relative concepts, it's not something you have or don't have.
There are serious projects being written in D as we speak, I'm sure, and the language has a track record of having been consistently maintained and improved since 2001, and has some very good libraries and tooling (very nice standard library, three independent and supported compiler implementations!) It does not have good libraries and tooling for all things; certainly integrations with other libs and systems often lag behind more popular languages, but no programming language is suitable for everything.
What I'm saying is there's a big world out there, not all programmers are burdened with having to care about CV-maxxing, community or the preferences of other devs, some of them can just do things in the language they prefer. And therefore, not everything benefits from being written in Rust or whatever the top #1 Most Popular! Trending! Best Choice for System Programming 2026! programming language of the week happens to be.
D has three high quality compiler implementations. It has been around for ages and is very stable and has a proven track record.
Zig has one implementation and constant breaking changes.
D is the far more pragmatic and safer choice for serious projects.
Not that Zig is a bad choice, but to say that an unstable lang in active development like Zig would be a better choice for "serious projects" compared to a very well established but less popular lang shows the insanity of hype-driven development.
That's not how I remember it. Excitement for Python strongly predated ML and data science. I remember Python being the cool new language in 1997 when I was still in high school. Python 1.4 was already out, and O'Reilly had already put out several books on the topic. Python was known as this almost pseudocode-like language that used indentation for blocking. MIT was considering switching to it for its introductory classes. It was definitely already hyped back then -- which led to U Toronto picking it for its first ML projects that eventually everyone adopted when deep learning got started.
It was popular as a teaching language when it started out, alongside BASIC or Pascal. When the Web took off, it was one of a few that took off for scripting simple backends, alongside PHP, JS and Ruby.
I agree with the person you're replying to. Python was definitely already a thing before ML. The way I remember it, it started taking off as a nice scripting language that was more user-friendly than Perl, the king of scripting languages at the time. The popularity gain accelerated with the proliferation of web frameworks, with Django tailgating the immensely popular (at the time) Ruby on Rails, and Flask capturing the micro-framework enthusiast crowd. At the same time, the perceived ease of use and availability of numeric libraries established Python in scientific circles. By the time ML started breaking into the mainstream, Python was already one of the most popular programming languages.
Sure, but the point was that it being used for web backends was years after it was invented, an area in which it never ruled the roost. ML is where it has gained massive traction outside SW dev.
Python was commonplace long before ML. Ever since 1991, it would jump in popularity every now and then, collect enough mindshare, then dive again once people found better tools for the job. It long took the place of Perl as the quick "Linux script that's too complex for bash", especially when python2 was shipping with almost all distros.
For example, Python got a similar boost in popularity in the late 2000s and early 2010s when almost every startup was either Ruby on Rails or Django. Then again in the mid 2010s when "data science" got popular with pandas. Then again at the end of the 2010s with ML. Then again in the 2020s with LLMs. Every time, people eventually drop it for something else. It's arguably in a much better place with types, asyncio, and a much better ecosystem in general these days than it was back then. As someone who worked on developer tools and devops for most of that time, I always dread dealing with Python developers though, tbh.
There are plenty of brilliant people who use Python. However, in every one of these boom cycles with Python I dealt with A LOT of developers with horrific software engineering practices, little understanding of how their applications and dependencies work, and just plain bizarre ideas of how services work. Like the one who comes with one 8k-line run.py with like 3 functions asking to "deploy it as a service", expecting it to literally launch `python3 run.py` for every request. It takes 5 minutes to run. It assumes there is only 1 execution at a time per VM because it always writes to /tmp/data.tmp. Then poses a lot of "You guys don't know what you're doing" questions like "yeah, it takes a minute, but can't you just return a progress bar?" In a REST API? Or "yeah, just run one per machine. Shouldn't you provide isolation?". Then there is the guy who zips up their venv from a Mac or Windows machine and expects it to just run on a Linux server. Or the guy who has no idea what system libs their application needs and is so confused we're not running a full Ubuntu desktop in a server environment. Or the guy who gives you a 12GB docker image because "well, I'm using anaconda".
Containers have certainly helped a lot with Python deployments these days, even if the Python community was late to adopt them for some reason. Throughout the 2010s, when containers would have provided a much better story, especially for Python where most libraries are just C wrappers and you must pip install on the same target environment, the Python developers I dealt with were all very dismissive of them and just wanted to upload a zip or tarball because "python is cross platform. It shouldn't matter". Then we had to invent all sorts of workarounds to make sure we had hundreds of random system libs installed, because who knows what they are using and what pip will need to build their things. Prebuilt wheels were a lot less common back then too, causing pip installs to be very resource-intensive, slow and flaky because some system lib is missing or was updated. Still, Python application docker images always range in the 10s of GBs.
Python crossed the chasm in the early 2000s with scripting, web applications, and teaching. Yes, it's riding an ML rocket, but it didn't become popular because it was used for ML, it was chosen for ML because it was popular.
Oh? How about Raymond's "Why python?" article that basically described the language as the best thing since sliced bread? Published in 2000, and my first contact with python.
Python had already exploded in popularity in the early 2000s, and for all sorts of things (like cross-platform shell scripting or as scripting/plugin system for native applications).
Not really; back in 2003 when I joined CERN it was already the official scripting language on ATLAS, our build pipeline at the time (CMT) used Python, there was Python training available for the staff, and it was a required skill for anyone working in Grid Computing.
I started using Python at version 1.6, when there were already several O'Reilly books and Dr. Dobb's issues dedicated to Python.
This is not true. It took about 20 years for Python to reach today's levels of popularity. JavaScript also wasn't so dominant and omnipresent until the Chrome era.
Also, many languages that see a lot of hype initially lose most of their admirers in the long run, e.g. Scala.
> Never has an old language gained traction, its all about the initial network effects created by excitement.
Python?! Created in 1991, it became increasingly popular – especially in university circles – only in the mid-2000s, and then completely exploded thanks to the ML/DL boom of the 2010s. That boom fed back into programming education, and it's now a very popular first language too.
Love it or hate it, Python was a teenager by the time it properly took off.
IMHO D just missed the mark with the GC in core. It was released in a time where a replacement for C++ was sorely needed, and it tried to position itself as that (obvious from the name).
But by including the GC/runtime it went into a category with C# and Java which are much better options if you're fine with shipping a runtime and GC. Eventually Go showed up to crowd out this space even further.
Meanwhile in the C/C++ replacement camp there was nothing credible until Rust showed up, and nowadays I think Zig is what D wanted to be with more momentum behind it.
Still kind of salty about the directions they took because we could have had a viable C++ alternative way earlier - I remember getting excited about the language a lifetime ago :D
I'd rather say that the GC is the superpower of the language. It allows you to quickly prototype without focusing too much on performance, but it also allows you to come back to the exact same piece of code and rewrite it using malloc at any time. C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
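To make that concrete, here is a minimal sketch of that kind of eager control using druntime's core.memory module (the buffer and sizes are just for illustration):

    import core.memory : GC;

    void main()
    {
        auto buf = new ubyte[](64 * 1024); // ordinary GC allocation
        // ... use buf as scratch space ...
        GC.free(buf.ptr); // eagerly return the block, no collection cycle needed
        GC.disable();     // forbid collections during a latency-sensitive section
        // ... hot path ...
        GC.enable();
    }

GC.free, GC.disable, and GC.enable are part of the documented core.memory API; the surrounding usage pattern is made up for the example.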
Do you know of any popular real-time (for some definition of real-time) applications written in D? Like, streaming music or video? C has FFmpeg [0]:
> FFmpeg is proudly written in the C programming language for the highest performance. Other fashionable languages like C++, C#, Rust, Go etc do not meet the needs of FFmpeg.
How does D perform in benchmarks against other programming languages?
[0] https://www.linkedin.com/posts/ffmpeg_ffmpeg-is-proudly-writ...
D by definition meets the FFmpeg's criteria because it's also a C compiler. Because of that I never wondered how D performs in the benchmarks, as I know for sure that it can give me the performance of C where I need it.
But then, to use D for performance, would I then have to master both D, C and their interaction? That doesn't seem great. It's like having to learn 2 languages and also how they interact.
Sociomantic (bought by dunnhumby, now defunct IIRC) had a real-time advertising business built in D.
Weka have a realtime distributed filesystem written in D, used for ML/HPC workloads.
> Weka have a realtime distributed filesystem written in D, used for ML/HPC workloads.
This https://github.com/weka ?
Most of the D repositories appear to have very little activity; the Go repositories seem more active.
> C# or Java don't have this, nor can they compile C code and seamlessly interoperate with it — but in D, this is effortless.
C# C interop is pretty smooth, Java is a different story. The fact that C# is becoming the GC language in game dev is proving my point.
>Furthermore, if you dig deeper, you'll find that D offers far greater control over its garbage collector than any other high-level language, to the point that you can eagerly free chunks of allocated memory, minimizing or eliminating garbage collector stops where it matters.
Yes, and the no-gc stuff was just attempts to backpedal on the wrong initial decision to fit into the use-cases they should have targeted from the start in my opinion.
Look D was an OK language but it had no corporate backing and there was no case where it was "the only good solution". If it was an actual C++ modernization attempt that stayed C compatible it would have seen much better adoption.
> C# C interop is pretty smooth
True, but you still need to either generate or manually write the bindings. In D, you just import the C headers directly without depending on the bindings' maintainers.
> If it was an actual C++ modernization attempt that stayed C compatible it would have seen much better
Any D compiler is literally also a C compiler. I sincerely don't know how can one be more C compatible than that.
> Yes, and the no-gc stuff was just attempts to backpedal on the wrong initial decision
I think that it was more of an attempt to appease folks who won't use GC even with a gun to their head.
I'm not saying D didn't have nice features - but if D/C#/Java are valid options I'm never picking D - language benefits cannot outweigh the ecosystem/support behind those two. Go picked a niche with backend plumbing and got Google backing to push it through.
Meanwhile, look at how popular Zig is getting two decades later. Why is that not D? D also has comptime and has had it for over a decade, I think? Zig proves there's a need that D was in the perfect spot to fill if it hadn't made the GC decision - and we could have had two decades of software written in D instead of C++ :)
> D/C#/Java are valid options I'm never picking D
This is perfectly fair.
> D was in the perfect spot to fill if it did not make the GC decision
I just find it hard to believe that the GC is the one big wart that pushed everyone away from the language. To me, the GC combined with the full power of a systems language are the killer features that made me stick to D. The language is not perfect and has bad parts too, but I really don't see the GC as one of them.
> The fact that C# is becoming the GC language in game dev is proving my point.
Respectfully, it doesn't prove your point. Unity is a commercial product that employed C# because they could sell it easily, not because it's the best language for game dev.
Godot supports C# because Microsoft sponsored the maintainers precisely on that condition.
> The fact that C# is becoming the GC language in game dev is proving my point.
Popularity is not proof of anything. C# is popular because it’s made by Microsoft and rode the OOP hype.
> The fact that C# is becoming the GC language in game dev is proving my point.
That is just the Unity effect. Godot adopted C# because they get paid to do so by Microsoft.
C# allows far less control over garbage collection compared to D. The decision to use C# is partly responsible for the bad reputation of Unity games, as it causes a lot of stutters when people are not very careful about how they manage memory.
The creator of the Mono runtime actually calls using C# his multi-million-dollar mistake and instead works on Swift bindings for Godot: https://www.youtube.com/watch?v=tzt36EGKEZo
> GC/runtime
1. Runtime: A runtime is any code that is not a direct result of compiling the program's code (i.e. it is used across different programs) that is linked, either statically or dynamically, into the executable. I remember that when I learnt C in the eighties, the book said that C isn't just a language but a rich runtime. Rust also has a rich runtime. It's true that you can write Rust in a mode without a runtime, but then you can barely even use strings, and most Rust programs use the runtime. What's different about Java (in the way it's most commonly used) isn't that it has a runtime, but that it relies on a JIT compiler included in the runtime. A JIT has pros and cons, but they're not a general feature of "a runtime".
2. GC: A garbage collector is any mechanism that automatically reuses a heap object's memory after it becomes unreachable. The two classic GC designs, reference counting and tracing, date back to the sixties and have evolved in different ways. E.g. in the eighties and nineties there were GC designs where the compiler could either infer a non-escaping object's lifetime and statically insert a `free`, or have the language track lifetimes ("regions", 1994) and have the compiler statically insert a `free` based on information annotated in the language. On the other hand, in the eighties Andrew Appel famously showed that moving tracing collectors "can be faster than stack allocation". So different GCs employ different combinations of static inference and dynamic information on object reachability to optimise for different things, such as footprint or throughput. There are tradeoffs between having a GC and not having one, and they also exist between Rust (GC) and Zig (no GC), e.g. around arenas, but most tradeoffs are among the different GC algorithms. Java, Go, and Rust use very different GCs with different tradeoffs.
So the problem with using the terms "runtime" and "GC" colloquially as they're used today is not so much that it differs from the literature, but that it misses what the actual tradeoffs are. We can talk about the pros and cons of linking a runtime statically or dynamically, we can talk about the pros and cons of AOT vs. JIT compilation, and we can talk about the pros and cons of a refcounting/"static" GC algorithm vs a moving tracing algorithm, but talking in general about having a GC/runtime or not, even if these things mean something specific in colloquial usage, is not very useful because it doesn't express the most relevant properties.
Does Rust really require reference counting? I thought Rust programs only used reference counting when types like Rc and Arc are used.
Swift seems to require reference counting significantly more than Rust.
OP saying Rust has a kind of GC is absurd. Rust keeps track of the lifetime of variables and drops them at the end of their lifecycle. If you really want to call that a GC, you should at least make a huge distinction that it works at compile time: the generated code will have drop calls inserted without any overhead at runtime. But no one calls that a GC.
You see, OP is trying to muddy the waters when they claim C has a runtime. While there is a tiny amount of truth to that, in the sense that there's some code you don't write present at runtime, if that's how you define runtime, the term loses all meaning, since even assemblers insert code you don't have to write yourself, like keeping track of offsets and so on. Languages like Java and D have a runtime that includes lots of things you don't call yourself, like the GC obviously, but also many stdlib functions that are needed and that you can't remove because they may be used internally. That's a huge difference from inserting some code, like Rust and C do. To be fair, D does let you remove the runtime or even replace it. But it's not easy by any means.
> If you really want to call that a GC you should at least make a huge distinction that it works at compile time: the generated code will have drop calls inserted without any overhead at runtime. But no one calls that a GC.
Except for the memory management literature, because it's interested in the actual tradeoffs of memory management. A compiler inferring lifetimes, either automatically for some objects or for most objects based on language annotations, has been part of GC research for decades now.
The distinction of working at compile time or runtime is far from huge. Working at compile time reduces the work associated with modifying the counters in a refcounting GC in many situations, but the bigger differences are between optimising for footprint or for throughput. When you mathematically model the amount of CPU spent on memory management and the heap size as functions of the allocation rate and live set size (residency), the big differences are not whether calling `free` is determined statically or not.
So you can call that GC (as is done in academic memory management research) or not (as is done in colloquial use), but that's not where the main distinction is. A refcounting algorithm, like that found in Rust's (and C++'s) runtime is such a classic GC that not calling it a GC is just confusing.
> A refcounting algorithm, like that found in Rust's (and C++'s) runtime is such a classic GC that not calling it a GC is just confusing.
But is it not easy to opt out of in C, C++, Zig and Rust, by simply not using the types that use reference counting?
And how does your performance analysis consider techniques like arenas and allocating at startup only?
My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
Do you want GC? Great! Don't want GC? Well, you can turn it off and lose access to most things. Do you want a borrow-checker? Great, D does that as well, though less wholeheartedly than Rust. Do you want a safer C/memory safety? There's the SafeD mode. And probably more that I forget.
I wonder if all these different (often incompatible) ways of using D end up fragmenting the D ecosystem, and in turn make it that much harder for the language to gain critical mass.
> My (likely unfair) impression of D is that it feels a bit rudderless
The more positive phrasing would be that it is a very pragmatic language. And I really like this.
Currently, opinionated languages are really in vogue. Yes, they are easier to market, but I have personally soured on this approach now that I am a bit older.
There is not one right way to program. It is fun to use an opinionated language until you hit a problem that it doesn't cover very well, and suddenly you are in a world of pain. I like languages that give me escape hatches. That allow me to program the way I want to.
>My (likely unfair) impression of D is that it feels a bit rudderless: It is trying to be too many things to too many people, and as a consequence it doesn't really stand out compared to the languages that commit to a paradigm.
Nim kind of does that, too.
This can very clearly be said about C++ as well, which may have started out as C With Classes but became very kitchen sinky. Most things that get used accrete a lot of features over time, though.
FWIW, I think "standing out" due to paradigm commitment is mostly downstream of "xyz-purity => fewer ways to do things => have to think/work more within the constraints given". This then begs various other important questions, of course.. E.g., do said constraints actually buy users things of value overcoming their costs, and if so for what user subpopulations? Most adoption is just hype-driven, though. Not claiming you said otherwise, but I also don't think the kind of standing out you're talking about correlates so well to marketing. E.g., browsers marketed Javascript (which few praised for its PLang properties in early versions).
Re: the point about Zig: especially since I used and played a lot with D's BetterC model when I was a student, I wonder as a language designer what Walter thinks about the development and rise in popularity of Zig. Of course, thinking "strategically" about a language's adoption comes off as Machiavellian in a crowd of tinkerers/engineers, but I can't help but wonder.
Zig got so deep into avoiding "hidden behavior" that destructors and operator overloading were banned. Operator overloading is indeed a mess, but destructors are too useful. The only compromise for destructors was adding the "defer" feature. (Was there ever a corresponding "error if you don't defer" feature?)
Fil-C, the new memory-safe C/C++ compiler, actually achieved that by introducing a GC. With that in mind, I'd say D was kind of a misunderstood prodigy in retrospect.
There are two classes of programs: stuff written in C for historic reasons that could have been written in a higher-level language, but where a rewrite is too expensive - that's Fil-C. And stuff where you genuinely need low-level control - Rust/C++/Zig.
Fil-C works fine with all C code, no matter how low level. There's a small performance overhead, but for almost every scenario it's an acceptable overhead!
I often see people lament the lack of popularity for D in comparison to Rust. I've always been curious about D, as I like a lot of what Rust does, but never found the time to deep-dive, and would appreciate someone whetting my appetite.
Are there technical reasons that Rust took off and D didn't?
What are some advantages of D over Rust (and vice versa)?
D and Rust are at opposite ends in dealing with memory safety. Rust ensures safety by constantly making you think about memory with its highly sophisticated compile-time checks. D, on the other hand, lets you either employ a GC and forget about (almost) all memory-safety concerns, or use a block scoped opt-out with cowboy-style manual memory management.
D retains object-oriented programming but also allows functional programming, while Rust seems to be specifically designed for functional programming and does not allow OOP in the conventional sense.
I've been working with D for a couple of months now and I noticed that it's almost a no-brainer to port C/C++ code to D because it mostly builds on the same semantics. With Rust, porting a piece of code may often require rethinking the whole thing from scratch.
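For readers unfamiliar with that opt-out, here is a minimal sketch of what it can look like, assuming a hot function you want to keep allocation-free (the function and its body are hypothetical). The @nogc attribute makes the compiler reject any GC allocation in that scope, so you drop down to malloc/free exactly where it matters:

    import core.stdc.stdlib : free, malloc;

    @nogc nothrow void hotPath(size_t n)
    {
        // auto a = new double[](n); // would be a compile error under @nogc
        auto p = cast(double*) malloc(n * double.sizeof);
        if (p is null) return;
        scope(exit) free(p); // manual, deterministic cleanup
        // ... crunch numbers in p[0 .. n] ...
    }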
> block scoped opt-out with cowboy-style manual memory management
Is this a Walter Bright alt? I've seen him use the cowboy programmer term a few times on the forum before.
The term 'Cowboy coder' has been around for some time. Everybody's favourite unreliable source of knowledge has issues dating back to 2011: <https://en.wikipedia.org/wiki/Cowboy_coding>
Yeah, I just saw his posts too and picked up the term :)
It makes sense for someone who has read about D to pick up on Bright phrasing.
> Are there technical reasons that Rust took off and D didn't?
As someone who considered it back then when it actually stood a chance to become the next big thing, from what I remember, the whole ecosystem was just too confusing and simply didn't look stable and reliable enough to build upon long-term. A few examples:
* The compiler situation: the official compiler was not yet FOSS, and other compilers were either unavailable or not really usable. The switch to FOSS happened way too late, and GCC support took too long to mature.
* This whole D version 1 vs version 2 thingy
* This whole Phobos vs Tango standard library thingy
* This whole GC vs no-GC thingy
This is not a judgement on D itself or its governance. I always thought it's a very nice language and the project simply lacked man-power and commercial backing to overcome the magical barrier of wide adoption. There was some excitement when Facebook picked it up, but unfortunately, it seems it didn't really stick.
How many people were working on the core compiler/language at the time versus Rust? This could explain it.
D has always been a handful of people.
Can you elaborate on the points? I know nothing about D, but I'm just curious about old drama
I think 3 things
1. D had a split similar to Python 2 vs 3 early on, with having the garbage collector or not (and therefore effectively 2 standard libraries), but unlike Python it didn't already have a massive community that was willing to suffer through it.
2. It didn't really have any big backing. Rust having Mozilla backing it for integration with Firefox makes a pretty big difference.
3. D wasn't different enough; it felt much more like "this is C++ done better" than its own language, but unlike C++, which is mostly a superset of C, you couldn't do "C with classes"-style migrations.
D has much better metaprogramming compared to Rust. That has been one of the only things making me still write a few D programs. You can do compile time type introspection to generate types or functions from other elements without having to create a compiler plug-in parsing Rust and manipulating syntax trees.
Rust has some of the functional programming niceties like algebraic data types and that's something lacking in D.
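To make the introspection point concrete, a small sketch (the struct and function names are invented) that walks a type's fields at compile time, with no compiler plugin involved:

    import std.stdio : writeln;
    import std.traits : FieldNameTuple;

    void dumpFields(T)(T value)
    {
        // Unrolled at compile time: one writeln per field of T.
        static foreach (name; FieldNameTuple!T)
            writeln(name, " = ", __traits(getMember, value, name));
    }

    struct Config { int port; string host; }

    void main() { dumpFields(Config(8080, "localhost")); }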
One feature of D that I really wish other languages would adopt is its metaprogramming and compile-time code evaluation (IIRC you can use most of the language at compile time, since it runs in a bytecode VM), down to having functions that generate source code which is then treated as part of the compilation process. I'm not sure about Rust, but I think it lacks this too; though if it has it to a similar extent as D, that might be reason enough for me to check it again more seriously.
Of course you can make codegen as part of your build process with any language, but that can be kludgy (and often limited to a single project).
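As a minimal illustration of that in-language codegen (the getter generator here is made up): an ordinary D function is evaluated at compile time, and its string result is compiled via mixin as if you had typed it yourself:

    // An ordinary function, evaluated at compile time via CTFE.
    string makeGetter(string field)
    {
        return "int " ~ field ~ "() const { return _" ~ field ~ "; }";
    }

    struct Point
    {
        private int _x, _y;
        mixin(makeGetter("x")); // generated source becomes part of the build
        mixin(makeGetter("y"));
    }

    // The generated code even runs under CTFE itself:
    static assert(Point(3, 4).x() == 3);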
Arguably, most of the metaprogramming in D is done with templates, and it comes with all the flaws of templates in C++. The error messages are long and it's hard to decipher what exactly went wrong (static asserts help a lot here, when they actually exist). IDE support is non-existent after a certain point, because an IDE can't reason about code that doesn't exist yet. And code gets less self-documenting, because it's all Output(T,U) foo(T, U)(T t, U u), and even the official samples use auto everywhere because it's hard to get the actual output types.
I'd say D's template error messages are much better than C++'s, because D prints the instantiation stack with exact locations in the code and the whole message is just more concise. In C++, it just prints a bunch of gibberish, and you're basically left guessing.
It is quite ridiculous to put C++'s metaprogramming and D's in the same category. For one, in D it's the same language, and one can choose whether to execute compile-time-constant parts at compile time or at run time. In C++ it's a completely different language that was bolted on. C++ did adopt compile-time constant expressions from D, though.
> Are there technical reasons that Rust took off and D didn't?
My (somewhat outdated) experience is that D feels like a better and more elegant C++. Rust certainly has been influenced by C and C++, but it also took a lot of inspiration from the ML-family of languages and it has a much stronger type system as a consequence.
More like the companies that jumped onto Rust versus D; D only had Facebook and Remedy Games toy with it a bit.
Many of us believe in automatic memory management for systems programming, having used quite a few of them in such scenarios, so that is already one thing that D does better than Rust.
There is the GC phobia, mostly from folks who don't get that not all GCs are born alike: just like you need to pick and choose your malloc()/free() implementation depending on the scenario, there are many ways to implement a GC, and having a GC doesn't preclude having value types, stack and global memory segment allocation.
D has compile-time reflection, its compile-time metaprogramming is much easier to use than Rust macros, and it does compile-time execution as well.
And the compile times! It is like using Turbo Pascal or Delphi, even though the language is like C++ in capabilities. Yet another proof that complexity doesn't imply slow compile times in a native systems language.
For me, C# and Swift now cover the tasks at work where in the past I could have reached for D, mostly due to who is behind those languages, and because I don't want to be that guy who leaves and was the only one who knew the stack.
> Many of us believe on automatic memory management for systems programming
The problem is the term "systems programming". For some, it's kernels and device drivers. For some, it's embedded real-time systems. For some, it's databases, game engines, compilers, language run-times, whatever.
There is no GC that could possibly handle all these use-cases.
But there could be a smoother path between having a GC and having no GC.
Right now, you'd have to switch languages.
But in a Great Language you'd just have to refactor some code.
Except there is, only among GC-haters there is not.
People forget there isn't ONE GC, but rather several possible implementations depending on the use case.
Java real-time GC implementations are quite capable of powering weapon targeting systems on the battlefield, where a failure causes the wrong side to die.
> Aonix PERC Ultra Virtual Machine supports Lockheed Martin's Java components in Aegis Weapon System aboard guided missile cruiser USS Bunker Hill
https://www.militaryaerospace.com/computers/article/16724324...
> Thales Air Systems Selects Aonix PERC Ultra For Java Execution on Ground Radar Systems
https://vita.militaryembedded.com/5922-thales-execution-grou...
Aonix is nowadays owned by PTC, and there are other companies in the field offering similar implementations.
Look, when someone says "There's no thing that could handle A,B,C, and D at the same time", answering "But there's one handling B" is not very convincing.
(Also, what's with this stupid "hater" thing, it's garbage collection we're talking about, not war crimes)
It is, because there isn't a single language that is a hammer for all types of nails.
It isn't stupid; it is the reality of how many have behaved for decades.
Thankfully, that issue has been slowly sorting out throughout generation replacement.
I already enjoy that on some platforms we have reached the point where the old ways are constrained to a few scenarios, and that's it.
> Are there technical reasons that Rust took off and D didn't?
Yes. D tried to jump on the "systems programming with garbage collection" dead horse, with predictable results.
(People who want that sort of stupidity already have Go and Java, they don't need D.)
Go wasn't around when D was released and Java has for the longest time been quite horrible (I first learnt it before diamond inference was a thing, but leaving that aside it's been overly verbose and awkward until relatively recently).
Is Java even a "systems programming" language?
I don't even know what that term means anymore; but afaik Java didn't really have reliable low-level APIs until recently.
Depends if one considers writing compilers, linkers, JITs, database engines, and running bare metal on embedded real time systems "systems programming".
> (People who want that sort of stupidity already have Go and Java, they don't need D.)
Go wasn't around when D was created, and Java was an unbelievable memory hog, with execution speeds that could only be described as "glacial".
As an example, using my 2001 desktop, the `ls` program at the time was a few kb, needed about the same in runtime RAM and started up and completed execution in under 100ms.
The almost equivalent Java program I wrote in 2001 to list files (with `ls` options) took over 5s just to start up and chewed through about 16MB of RAM (around 1/4 of my system's RAM).
Java was a non-starter at the time D came out - the difference in execution speed between C++ systems programs and Java systems programs felt, to me (i.e. my perception), larger than the current difference in performance between C++/C/Rust programs and Bash shell scripts.
As far as adoption is concerned, I'm not sure it should be that big of a concern.
After all, D is supported by GCC and Clang and continually being maintained, and if updates stopped coming at some point in the future, anyone who knew a bit of C / Java / insert language here could easily port it to their language of choice.
Meanwhile, its syntax is more expressive than many other compiled languages, the library is feature-rich and fairly tidy, and for me it's been a joy to use.
GCC usually drops frontends if there are no maintainers around, it already happened to gcj, and I am waiting for the same to happen to gccgo any time now, as it has hardly gotten any updates since Go 1.18.
The team is quite small and mostly volunteers, so there is the question how long can Walter Bright keep at it, and who will keep it going afterwards when he passes the torch.
It has an LLVM backend, LDC, that is separate from the LLVM project/Clang.
I like D in general; however, it is missing out on WASM, where other languages like Rust, Zig, and even Go are thriving. The official reasoning usually involved waiting for GC support in the WASM runtime, but other GC languages seem to just ship their own GC and move on.
Off topic: back in the day, the C++ programming books by Andrei Alexandrescu were a joy to read, especially Modern C++ Design.
Also, this presentation https://accu.org/conf-docs/PDFs_2007/Alexandrescu-Choose_You... killed a lot of bike shedding!
I tried some D a while ago; it is a nice language. Given today's landscape of programming languages, though, I think it's difficult to justify why a program should be written in D when so many languages overlap with it in features. It also depends on how fast you need to scale in developers and how quickly people can learn a language (and not just its syntax), so popularity matters as well. I work in consultancy, and this is what I always factor in for a client.
When I was a student, our group was forced to use D instead of C++ for CS2* classes. That was back in 2009. After 16 years, I see that the level of adoption did not change at all.
D is a treasure we should continue to cherish and protect
A language with sane Compile Time features (Type Introspection, CTFE, mixins, etc)
A language that can embrace C ecosystem with sane diagnostics
A language that ships with its own optimizing code generator and inline assembler!
A compiler that compiles code VERY fast
A compiler with a readable source code that bootstraps itself in just 5 seconds
People who dunk on it "bEcAuSe iT Is nOt MaInsTrEaM" are clueless
What can D do other languages can't?
Say you're starting a new Staff Engineer or Tech Lead job. What gets you to convince a CTO that we need to have a team learn D?
On the flip side, where are the 200k base salary D positions.
Get me an interview in 2 months and I'll drop 10 hours a week into learning
Well, I would say it's more like glasses - you can't convince those who don't wear them, and you don't need to convince those who need them either.
What problem is D solving?
To be the modern and sane C++ that C++ could have been (rather than the complex collection of tacked-on languages that C++ is), with modules instead of the mess of C++'s headers, and with instant compilation times that do not need a compilation server farm.
One good case for it that I see is a viable basis for cross-platform desktop apps. Today, cross-platform desktop GUI apps are either just a snapshot of the website contained inside Electron, or a C/C++ code base with manual memory management. D can serve as a nice middle ground in that space.
Where is the extensive tooling support for this use case if that is where you think it fits?
Apple is all in on Swift, so you will not be writing native macOS or iOS UI code in D; best case you put your business logic in D, but you can do that in any language which has bindings to Swift/Obj-C.
Android is all in on Kotlin/Java, not D again
Microsoft is all in on C#, again not D.
On Linux, your two best options for UI are GTK and Qt, C and C++ respectively.
So the only place where you could have seamless integration is Linux, through FFI.
Here's the thing though, for building a core layer that you can wrap a UI around, Rust has insanely good ergonomics, with very good third-party libraries to automatically generate safe bindings to a decent amount of languages, at least all those listed above and WASM for web.
None of those use cases are painless in D.
It's true that there is no off the shelf tool that you can use right now to write your app, but it certainly doesn't prove that making such a tool is impossible or even complicated.
It makes sense for a complex productivity app (e.g. an office suite editor) to implement the UI from scratch anyway, and for that they may choose D. If Jane Street didn't pick OCaml, it would've died long ago -- in the same manner, some company might pick D to do UI or anything else really.
It is extremely complicated to do so yourself.
Handling energy efficiency/a11y/i18n is non-trivial in any language; using the paved road of the system's native implementation solves many of those problems out of the box.
You would need to reimplement all of that in D lang for your UI layer, and all you wanted to do was build an application to solve a problem, you weren't in the business of building a UI library to begin with.
Seen D being posted regularly on here, seems like flogging a dead horse. It's the equivalent of keeping grandma on life support when there is no hope.
You'd be surprised to see how active the D community is, despite your fair point that it's noticeably smaller than in the "competing" (in quotes because it's not a competition, actually) languages.
The latest release [1] was on Jan 7th, and it contains more updates than, say, the latest release of Dart, which has one of the largest corporations behind it.
1. https://dlang.org/changelog/2.112.0.html
I was personally a lot more excited by D and subsequently Nim, but ultimately it's Rust and Zig that got adoption. Sigh.
I remember the creator of D programming Language replying to me on HN on one of my posts!
https://news.ycombinator.com/item?id=46261452
Walter's a regular on HN.
This very post is probably his too, under an alt :)
> This very post is probably his too, under an alt :)
The probability of that is virtually zero. Walter is a principled person, has better things to do, and his writing style is vastly different from the OP's.
I had an interview at Facebook 10+ years ago and my interviewer was the other creator!
D is boring, let's see how to recreate the B language:
https://www.youtube.com/playlist?list=PLpM-Dvs8t0VZn81xEz6Ng...
How good are the big LLMs at writing D code? Just curious.
When you design a language for everything, nobody will use it for anything. When you design a language to simply accomplish one thing, people will use it for everything. This is because people get the most efficient training using that simple language for its one thing. From there it is only marginally more effort to carry some boilerplate.
That "one thing" could be real or propaganda. Rust's one thing is writing "memory-safe" code without a GC. Eventually the marginal cost becomes too high, or you are tricked by advertising, and you "graduate" from awk to perl. From there, depending on the pull of the community or the actual utility of the language, you will use it for more and more tasks. If the community pull is strong, your programs start to look like line noise or boilerplate hell. If the utility for your problems is genuine, they remain simple, but you probably aren't producing the most efficient binaries.
As for why C programmers don't just use -betterC: well, some do, but for most people the reality is that they can just do it in C, and those who want more prefer going C -> C++ (of course, the vast majority of projects just start as C++, which makes -betterC moot for them).
C++'s one thing is C with objects.
If you learned to code writing Go what did you do?
If you learned to code writing D what did you do?
That's not to say you can't learn to code by writing D, just that it takes discipline; most people don't even know a problem exists before they are already learning some language or tool, nor do they have the goal of building everything. Most programmers are lazy: they want to build the minimal amount, and they end up building everything by accident.
Why don't experienced devs use D then? They think that if they strive for ideological purity they won't "build everything" next time, or they just enjoy ideological purity as its own mental exercise. Unix faithfuls want to show that computing can be (conceptually) simple in implementation and use. Rust programmers want to show that those simple (to use) Unix programs can be (memory) safe. To a senior engineer, D is just too good and easy to take.
I never understood why this language didn't gain much traction. It seems very solid.
At the same time, I've never used it, I'm not sure why.
Anyway, the author of D language is here on HN (Walter Bright).
Every talk about D here seems to transform into "why D failed?".
I don't use D.. I find Nim helps with even lower ceremony. That said, it's hard for me to understand how "getting into gcc" is failure. The list of such PLangs is very short. People can be very parochial, though. They probably mean pretty shallow things (just one example, but something like "failed to convert me, personally", or "jobs I'd like to apply for", or etc.).
Maybe people should instead talk about how they use D or what they'd like to see added to D? In an attempt to be the change one wants to see, I'd say named arguments are a real win and there seem to be some stalled proposals for that in D last I checked.
Maybe because there were some expectations when "The D Programming Language" was published back in 2010, regarding where D would be 16 years later.
Years ago I got interested in D. It's a great language, but at the time its garbage collector was leaky. There weren't any D entries on the Benchmarks Game back then, so I ported most of the programs to D and optimized them as best I could as a newcomer. Performance wise, D was in the C/Rust/C++ range and in many cases it even beat Rust and C++. I tried to get the community involved to help the language gain wider adoption, but nothing really happened. I think everything has its moment, and D's moment has passed. They didn't make the most of the window when D could have gone mainstream.
Sigh.
Ownership and borrowing are so much less baroque in D than in Rust. And compile times are superb.
In a better world, we would all be using D instead of C, C++ or Rust.
However in this age of Kali...
For those curious what ownership and borrowing looks like in D: https://dlang.org/blog/2019/07/15/ownership-and-borrowing-in...
This is a somewhat simplistic view of ownership and borrowing for modern programming languages.
Pointers are not the only "pointers" to resources. You can have handles specific to your codebase or system, you can have indices into objects in some flat array that the rest of your codebase uses, or even temporary file names.
An object oriented (or 'multi paradigm') language has to account for these and not just literal pointers.
This is handled reasonably well in both Rust and C++. (In the spirit of avoiding yet another C++ vs Rust flamewar here: yes, the semantics are different; no, it does not make sense for C++ to adopt Rust semantics.)
How does Rust (or C++) treat array indices as resources? And won't that defy the reason to use indices over pointers?
I don't know D so I'm probably missing some basic syntax. If pointers cannot be copied how do you have multiple objects referencing the same shared object?
> If pointers cannot be copied
They can.
Is there any experience on how this works in practice?
OOP and ownership are two concepts that mix poorly - ownership in the presence of OOP-like constructs is never simple.
The reason for that is that OOP tends to favor constructs where each object holds references to other objects, creating whole graphs; it's not uncommon that from a single object, hundreds of others can be traversed.
Even something so simple as calling a member function from a member function becomes incredibly difficult to handle.
Tbh - this is with good reason, one of the biggest flaws of OOP is that if x.foo() calls x.bar() in the middle, x.bar() can clobber a lot of local state, and result in code that's very difficult to reason about, both for the compiler and the programmer.
And it's a simple case, OOP offers tons of tools to make the programmers job even more difficult - virtual methods, object chains with callbacks, etc. It's just not a clean programming style.
Edit: Just to make it clear, I am not pointing out these problems, to sell you or even imply that I have the solution. I'm not saying programming style X is better.
I work at a D company. We tend to use OOP only for state owners with strict dependencies, so it's rare to even get cycles. It is extremely useful for modeling application state. However, all the domain data is described by immutable values and objects are accessed via parameters as much as fields.
When commandline apps were everywhere, people dreamed of graphical interfaces. Burdened by having to also do jobs that it was bad at, the commandline got a bad reputation. It took the dominance of the desktop for commandline apps to find their niche.
In a similar way, OOP is cursed by its popularity. It has to become part of a mixed diet so that people can put it where it has advantages, and it does have advantages.
It worked alright for Rust, and yes, Rust does support OOP; there are many meanings of what OOP is from a CS point of view.
I have ported Ray Tracing in One Weekend into Rust, while keeping the same OOP design from the tutorial, and affine types were not an impediment to interfaces, polymorphism and dynamic dispatch.
>one of the biggest flaws of OOP is that if x.foo() calls x.bar() in the middle, x.bar() can clobber a lot of local state, and result in code that's very difficult to reason about
That's more a problem of having mutable references, you'd have the same problem in a procedural language.
On the flipside, with OOP is usually quite easy to put a debugger breakpoint on a particular line and see the full picture of what the program is doing.
In diehard FP (e.g. Haskell) it's hard to even place a breakpoint, let alone see the complete state. In many cases, where implementing a piece of logic without carrying a lot of state is impossible, functional programming can also become very confusing. This is especially true when introducing certain theoretical concepts that facilitate working with IO and state, such as Monad Transformers.
That is true, but on the flip-flip side, while procedural or FP programs are usually easy to run piecewise, with OOP, you have to run the entire app, and navigate to the statement in question to be even able to debug it.
Imho, most FP languages have very serious human-interface issues.
It's no accident that C likes statements (and not too complex ones at that). You can read and parse a statement atomically, which makes the code much easier to read.
In contrast, FP tends to be very, very dense, or even worse, have a density that's super inconsistent.
Slowly it is going to be only skills.md.
I agree with the sentiment; I really like D and consider it a missed opportunity that it never took off in terms of adoption.
Most of what made D special is nowadays at least partially available in mainstream languages, making the adoption pitch even harder, and the lack of LLM training data doesn't help either.
> lack of LLM training data doesn't help either.
That shouldn't stop any self-respecting programmer.
Self respecting developers are an endangered species, otherwise we would not have so much Electron crap.
Those that learn to do robot maintenance are the ones left at the factory.
Nor does it stop self-respecting LLMs.
Exactly. We wrote code before LLMs and we can after their advent too
Yeah, that is why carpenters are still around and no one buys Ikea.
Is your proposition that programmers are now incapable of writing code?
Eventually yes, when incapable becomes synonymous with finding a job in an AI-dominated software factory industry.
Enterprise CMS deployment projects have already dropped any number of asset teams, translators, integration teams, and backend devs, replaced by a mix of AI, SaaS, and iPaaS tools.
Now the teams are a fraction of the size they used to be like five years ago.
Fear not, there will be always a place for the few ones that can invert a tree, calculate how many golf balls fit into a plane, and are elected to work at the AI dungeons as the new druids.
While I don't share this cynical worldview, I am mildly amused by the concept of a future where, Warhammer 40,000 style, us code monkeys get replaced by tech priests who appease the machine gods by burning incense and invoking hymns.
Same for ERP/CRM/HRM and some financial systems ; all systems that were heavy 'no-code' (or a lot of configuration with knobs and switches rather than code) before AI are now just going to lose their programmers (and the other roles); the business logic / financial calcs etc were already done by other people upfront in excel, visio etc ; now you can just throw that into Claude Code. These systems have decades of rigid code practices so there is not a lot of architecting/design to be done in the first place.
> Yeah, that is why carpenters are still around and no one buys Ikea.
I'm sorry, what? Are you suggesting that Ikea made carpenters obsolete? It's been less than 6 months since last I had a professional carpenter do work in my house. He seemed very real. And charged very real prices. This despite the fact that I've got lots of Ikea stuff.
Compared to before, not a lot of carpenters/furniture makers are left. This is due to automation.
Nah, IKEA has replaced moving furniture with throwing it away and rebuying it. Prior to IKEA, hiring a carpenter was also something done a few times in a lifetime/century. If anything, it has commoditized creating new furniture.
> Compared to before, not a lot of carpenters/furniture makers are left.
Which is it? Carpenters or furniture makers? Because the two have nothing in common beyond the fact that both professions primarily work with wood. The former has been unaffected by automation – or even might plausibly have more demand due to the overall economic activity caused by automation! The latter certainly has been greatly affected.
The fact that people all over the thread are mixing up the two is mindboggling. Is there a language issue or something?
There is a language issue: carpenter is used as synonym of woodworker. It's like someone who doesn't know anything about computers using the term 'memory' to mean storage rather than working memory (i.e. RAM).
From the context it was pretty obvious what the original poster meant, as long as you charitably interpret their message. As per the site guidelines.
> that is why carpenters are still around and no one buys Ikea
The irony in this statement is hilarious, and perfectly sums up the reality of the situation IMO.
For anyone who doesn't understand the irony: a carpenter is someone who makes things like houses, out of wood. They absolutely still fucking exist.
Industrialised furniture such as IKEA sells has reduced the reliance on a workforce of cabinet makers - people who make furniture using joinery.
Now if you want to go ask a carpenter to make you a table he can probably make one, but it's going to look like construction lumber nailed together. Which is also quite a coincidence when you consider the results of asking spicy autocomplete to do anything more complex than auto-complete a half-written line of code.
I think you have misunderstood what a carpenter is. A carpenter is someone who makes wooden furniture (among other things).
> I think you have misunderstood what a carpenter is. A carpenter is someone who makes wooden furniture (among other things).
I think _you_ have misunderstood what a carpenter is. At least where I live, you might get a carpenter to erect the wood framing for a house. Or build a wooden staircase. Or erect a drywall. I'm sure most carpenters worth their salt could plausibly also make wooden furniture, at an exorbitant cost, but it's not at all what they do.
I sanity checked with Wiktionary, and it agrees: "A person skilled at carpentry, the trade of cutting and joining timber in order to construct buildings or other structures."
https://dictionary.cambridge.org/dictionary/english/carpente...
a person whose job is making and repairing wooden objects and structures
https://en.wikipedia.org/wiki/Carpentry
Carpenters make many things besides houses.
See the section "Types of carpentry".
Self-respecting programmers write assembly for the machines they built themselves. I swear, kids these days have no respect for the craft
My experience is that all LLMs that I have tested so far did a very good job producing D code.
I actually think that the average D code produced has been superior to the code produced for the C++ problems I tested. This may be an outlier (the problems are quite different), but the quality issues I saw on the C++ side came partially from the ease with which the language enables incompatible use of different features to achieve similar goals (e.g. smart_ptr vs new/delete).
I work with D and LLMs do very well with it. I don't know if it could be better but it does D well enough. The problem is only working on a complex system that cannot all be held in context at once.
I based my opinion on this recent thread, https://forum.dlang.org/thread/bvteanmgrxnjiknrkeyg@forum.dl...
The discussion there seems to imply it kind of works, but not without a few pain points.
Kali Yuga.
https://en.wikipedia.org/wiki/Kali_Yuga
Serious question, how is this on the front page? We all know of the language and chosen not to use it.
Edit: Instead of downvoting, just answer the question if you've upvoted it. But I'm guessing it's the same sock accounts that upvoted it.
> We all know...
HN isn't as homogeneous as you think. By this measuring stick, half of the posts on the front page can be put into question every day.
Let's be serious, most people are regulars and this has been on the front page multiple times like constantly. And it was upvoted 4 times on new to get to the front page rapidly. It's not something new that we're all "Oh that's cool".
We also know there are tons of sock accounts.
And no, half of the posts on the front page can't be put in that category, since they aren't constantly reposted like this.
So, while there are a few people who will have learnt about this for the first time, most of you know what it is and somehow feel like this is your chance to go "look, I'm smarter than Iain". And I think you've failed again.
Do you know the joke with "I'll repeat the joke to you until you understand it?".
That's why some things get reposted and upvoted. In hope of getting someone else to understand them.
By the way, do you complain about sock accounts when yet another "Here is this problem, and by the way we sell a product that claims to solve it" gets upvoted?
> Do you know the joke with "I'll repeat the joke to you until you understand it?".
Nope. That's not a joke. That's not funny.
> That's why some things get reposted and upvoted. In hope of getting someone else to understand them.
No, they get reposted and upvoted by sock accounts in hope that someone will finally be interested in a 30 year old programming language.
> By the way, do you complain about sock accounts when yet another "Here is this problem, and by the way we sell a product that claims to solve it" gets upvoted?
What does content marketing have to do with sock accounts?
I'm honestly not sure what point you thought was getting made. Do you honestly think people don't understand D? It's been looked at repeatedly and still nothing cool is built in it.
> What does content marketing have to do with sock accounts?
If you accuse interesting subjects of being pushed by sock accounts, why wouldn't content marketing, which has even more interest in getting to the front page, be pushed by sock accounts?
Interesting subjects? A 30(?) year old programming language that has been on here repeatedly is not an interesting subject. New programming languages are. Cool things written in obscure languages is also interesting.
The content marketing that is pushed by sock accounts is wank and generally drops pretty quickly and called out like this. But you just complain about content marketing because you're one of those people who think you should make money but no one else.
You're harsh but that's OK. There is a lot of truth in what you're saying. I really wish people would quit downvoting everything they disagree with. HN would be 100x better if both the downvote and flag buttons were removed.
To me, a C guy, the focus on garbage collection is a turn-off. I'm aware that D can work without it, but it's unclear how much of the standard library etc works fine with no garbage collection. That hasn't been explained, that I saw at least.
The biggest problem however is the bootstrapping requirement, which is annoyingly difficult or too involved. (See explanation in my other post.)
I'm not sure how I'm being harsh. It's literally a somewhat well known programming language being reposted for the 100th time or something silly like that. I'm literally just pointing out the truth and it's almost certainly the main poster downvoting things.
> I'm literally just pointing out the truth
Problem identified.
That's not popular here.
As evidenced by several other comments, even if someone already knows about D they can still use posts like this as a prompt for talking about their experiences and current thoughts about it (which can be different from 1, 5 or 10 years ago).
Weird post. How does one of today's 10,000 who have never heard of a subject learn about it?
Interestingly, today someone can be one of the lucky 10,000 who get to learn about the lucky 10,000:
https://xkcd.com/1053/
meta
In all seriousness, do you honestly think this site has 10,000 new users a day? How many people do you think are on here that aren't very well informed? Honestly, I'm just wondering.
Also, do you know it only gets to the front page if the hardcore people who browse the new page upvote it? How many of those hardcore people don't know what D is?
https://xkcd.com/1053/
And if you've never heard of the lucky 10000, QED.
Genuinely curious as I'm relatively new compared to the time of inception of this language. Can you cite the reasons why people didn't choose D?
It was competing with C and Java when it came out. People who like C will not use a language with garbage collection, even one that allows you to not use it. Against Java, it was a losing battle due to Java being backed by a giant (Sun, then Oracle) and basically taking the world by storm. Then there were also license problems in early versions of D, and two incompatible and competing standard libraries dividing the community. By the time all these problems were fixed, like a decade ago, it was already too late to make a comeback. Today D is a nice language with 3 different compilers with different strengths: one compiles very fast, one produces faster binaries, and one also produces fast binaries while working within the GCC ecosystem. That's something few languages have. D even has a betterC mode now which makes it very good as a C replacement, with speed and size equivalent to or better than an equivalent C binary... and D has arguably the best metaprogramming capabilities of any language that is not a Lisp, including Zig. But no one seems to care anymore, as all the hotness in the systems languages space is now with Rust and Zig.
I like and use D but Nim has better metaprogramming capabilities (but D's templates are top-notch except for the error message cascades). (And Zig's metaprogramming is severely hobbled by Andrew's hatred of macros, mixins, and anything else that smells of code generation.)
Can you explain what BetterC is, and what it is used for?
I think there's also something called ImportC. Not sure what that is either.
I read the D blog sometimes, and have written some programs in D, but am not quite clear about these two terms.
https://dlang.org/spec/betterc.html
https://dlang.org/spec/importc.html
> Note: ImportC and BetterC are very different. ImportC is an actual C compiler. BetterC is a subset of D that relies only on the existence of the C Standard library. BetterC code can be linked with ImportC code, too.
D contains an actual C compiler because Walter Bright wrote one long ago and then incorporated it into D.
Zig also contains an actual C compiler, based on clang, and has a @cImport directive.
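For the flavor of both, a tiny sketch under assumed file names (mathutil.c and app.d are invented). ImportC means the D compiler compiles a plain C file that you import like a module; -betterC means linking only against the C standard library, with no druntime:

    /* mathutil.c - plain C, compiled by the D compiler's ImportC */
    int add(int a, int b) { return a + b; }

    // app.d - the C file is imported as if it were a D module.
    // Build: dmd app.d mathutil.c   (add -betterC to drop druntime)
    import mathutil;
    import core.stdc.stdio : printf;

    extern(C) void main() // C-style entry point, required under -betterC
    {
        printf("%d\n", add(20, 22));
    }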
I had D support in my distro for a while, but regrettably had to remove it. There's just too many problems with this language and how it's packaged and offered to the end user, IMO. It was too much hassle to keep it around.
To get it onto one's system, a bootstrapping step is required. Either building gcc 9 (and only gcc 9) with D support, then using that gcc to bootstrap a later version, or bootstrapping dmd with itself.
In the former case I'm already having to bootstrap Ada onto the system, so D just adds another level of pain. It also doesn't support all the same architectures as other gcc languages.
In the case of dmd, last I checked they just shove a tarball at you containing vague instructions and dead FTP links. Later I think they "updated" this to some kind of fancy script that auto-downloads things. Neither is acceptable for my purposes.
I just want a simple tarball containing everything needed, with clear instructions and no auto-downloading of anything, like at least 90% of other packages provide. Why is this so hard?
Tip: pretend it's still the BBS days and you are distributing your software. How would you do it? That's how you should still do it.
I haven't tried the LLVM D compiler, and at this point quite frankly I don't want to waste any more time with the language, in its current form at least--with apologies to Walter Bright, who is truly a smart and likeable guy. Like I said, it's regrettable.
The only way to revive interest in D is through a well planned rebranding and marketing campaign. I think the technical foundation is pretty sound, but the whole image and presentation needs a major overhaul. I have an idea of how to approach that, were there interest.
The first step would be to revive and update the C/C++ version of the D compiler for gcc so as to remove the bootstrapping requirement and allow the latest D to be built, plus a commitment to keeping this up to date indefinitely. It needs to support all architectures that GCC does.
Next, a rebranding focused on the power of D without garbage collection.
I'm willing to offer ongoing consultation in this area and assistance in the form of distro support and promotion, in exchange for a Broadwell or later Xeon workstation with at least 40 cores. (Approx $350 on Ebay.) That's the cost of entry for me as I have way too much work to do and too few available CPU cycles to process it.
Otherwise, I sincerely wish the D folks the best of luck. The language has a lot of good ideas, and I trust that Walter knows what he is doing from a technical standpoint. The marketing, sadly, has not been successful.
"We all know of the language and chosen not to use it."
Is a strange claim, and hard to cite. But I think many HNers have tried D and decided that it's not good enough for anything they do. It is certainly advertised hard here.
Maybe you should Ask HN.
Even in this empty thread there are people who don't know it.
It's a programming language that some people like, and or would like to see become more mainstream?
I think any presumption about what "we all know" will earn you downvotes.
You should familiarize yourself with these: https://news.ycombinator.com/newsguidelines.html
D is like a forced meme at this point.
Never has an old language gained traction; it's all about the initial network effects created by excitement.
No matter how much better than C it is now, C is slowly losing traction, and its potential replacements already have up-and-running communities (Rust, Zig, etc.).
Not everything needs to have "traction", "excitement" or the biggest community. D is a useful, well designed programming language that many thousands of people in this vast world enjoy using, and if you enjoy it too, you can use it. Isn't that nice?
Oh, a programming language certainly needs traction and a community for it to succeed, or to be a viable option for serious projects.
You can code your quines in whatever you'd like, but a serious project needs good tooling, good libraries, a proven track record, and devs that speak the language.
"Good tooling, good libraries, proven track record" are all relative concepts, it's not something you have or don't have.
There are serious projects being written in D as we speak, I'm sure, and the language has a track record of having been consistently maintained and improved since 2001, and has some very good libraries and tooling (very nice standard library, three independent and supported compiler implementations!) It does not have good libraries and tooling for all things; certainly integrations with other libs and systems often lag behind more popular languages, but no programming language is suitable for everything.
What I'm saying is there's a big world out there, not all programmers are burdened with having to care about CV-maxxing, community or the preferences of other devs, some of them can just do things in the language they prefer. And therefore, not everything benefits from being written in Rust or whatever the top #1 Most Popular! Trending! Best Choice for System Programming 2026! programming language of the week happens to be.
D has three high quality compiler implementations. It has been around for ages and is very stable and has a proven track record.
Zig has one implementation and constant breaking changes.
D is the far more pragmatic and safer choice for serious projects.
Not that Zig is a bad choice, but to say that an unstable language in active development like Zig would be a better choice for "serious projects" than a very well established but less popular language shows the insanity of hype-driven development.
Python was first released in 1991. It rumbled along for about 20 years until exploding in popularity with ML and the rise of data science.
That's not how I remember it. Excitement for Python strongly predated ML and data science. I remember Python being the cool new language in 1997 when I was still in high school. Python 1.4 was already out, and O'Reilly had already published several books on it. Python was known as this almost pseudocode-like language that used indentation for blocking. MIT was considering switching to it for its introductory classes. It was definitely already hyped back then -- which led to U of Toronto picking it for its early ML projects, which everyone eventually adopted when deep learning got started.
It was popular as a teaching language when it started out, alongside BASIC or Pascal. When the Web took off, it was one of a handful of languages used for scripting simple backends, alongside PHP, JS, and Ruby.
But the real explosion happened with ML.
I agree with the person you're replying to. Python was definitely already a thing before ML. The way I remember it, it started taking off as a nice scripting language that was more user-friendly than Perl, the king of scripting languages at the time. The popularity gain accelerated with the proliferation of web frameworks, with Django riding the coattails of the then immensely popular Ruby on Rails, and Flask capturing the micro-framework enthusiast crowd. At the same time, the perceived ease of use and the availability of numeric libraries established Python in scientific circles. By the time ML started breaking into the mainstream, Python was already one of the most popular programming languages.
As I remember it there was a time when Ruby and Python were the two big up-and-coming scripting languages while Perl was in decline.
Sure, but the point was that its use for web backends came years after it was invented, and it never ruled the roost in that area. ML is where it has gained massive traction outside SW dev.
Python was commonplace long before ML. Ever since 1991, it would jump in popularity every now and then, collect enough mindshare, then dive again once people found better tools for the job. It long ago took the place of Perl for the quick "Linux script that's too complex for bash", especially when Python 2 was shipping with almost all distros.
For example, Python got a similar boost in popularity in the late 2000s and early 2010s, when almost every startup was either Ruby on Rails or Django. Then again in the mid-2010s when "data science" got popular with pandas. Then again at the end of the 2010s with ML. Then again in the 2020s with LLMs. Every time, people eventually drop it for something else. It's arguably in a much better place these days, with types, asyncio, and a much better ecosystem in general than it had back then. As someone who worked on developer tools and devops for most of that time, I always dread dealing with Python developers though, tbh.
> I always dread dealing with python developers though tbh.
Out of curiosity, why is that?
There are plenty of brilliant people who use Python. However, in every one of these boom cycles I dealt with A LOT of developers with horrific software engineering practices, little understanding of how their applications and dependencies work, and just plain bizarre ideas of how services work. Like the one who shows up with a single 8k-line run.py containing maybe 3 functions, asking to "deploy it as a service" and expecting it to literally launch `python3 run.py` for every request. It takes 5 minutes to run. It assumes there is only one execution at a time per VM because it always writes to /tmp/data.tmp. Then he poses a lot of "you guys don't know what you're doing" questions, like "yeah, it takes a minute, but can't you just return a progress bar?" In a REST API? Or "yeah, just run one per machine; shouldn't you provide isolation?" Then there's the guy who zips up his venv from a Mac or Windows machine and expects it to just run on a Linux server. Or the guy who has no idea what system libs his application needs and is confused that we're not running a full Ubuntu desktop in a server environment. Or the guy who hands you a 12GB Docker image because "well, I'm using Anaconda."
Containers have certainly helped a lot with Python deployments these days, even if the Python community was late to adopt them for some reason. Throughout the 2010s, when containers would have provided a much better story -- especially for Python, where most libraries are just C wrappers and you must pip install on the same environment as the target -- the Python developers I dealt with were all very dismissive of them and just wanted to upload a zip or tarball because "Python is cross-platform; it shouldn't matter." Then we had to invent all sorts of workarounds to make sure we had hundreds of random system libs installed, because who knows what they were using and what pip would need to build their things. Prebuilt wheels were a lot less common back then too, making pip installs very resource-intensive, slow, and flaky because some system lib was missing or had been updated. Still, Python application Docker images always range in the tens of GBs.
Python crossed the chasm in the early 2000s with scripting, web applications, and teaching. Yes, it's riding an ML rocket, but it didn't become popular because it was used for ML, it was chosen for ML because it was popular.
Oh? How about Raymond's "Why Python?" article, which basically described the language as the best thing since sliced bread? Published in 2000, and my first contact with Python.
Python had already exploded in popularity in the early 2000s, and for all sorts of things (like cross-platform shell scripting or as scripting/plugin system for native applications).
Not really; back in 2003 when I joined CERN it was already the official scripting language on ATLAS, our build pipeline at the time (CMT) used Python, there were Python trainings available for the staff, and it was a required skill for anyone working in Grid Computing.
I started using Python at version 1.6; there were already several O'Reilly books and Dr. Dobb's issues dedicated to Python.
This is not true. It took about 20 years for Python to reach its current levels of popularity. JavaScript also wasn't so dominant and omnipresent until the Chrome era.
Also, many languages that see a lot of hype initially lose most of their admirers in the long run, e.g. Scala.
> Never has an old language gained traction, its all about the initial network effects created by excitement.
Python?! Created in 1991, became increasingly popular – especially in university circles – only in the mid-2000s, and then completely exploded thanks to the ML/DL boom of the 2010s. That boom fed back into programming training, and it's now a very popular first language too.
Love it or hate it, Python was a teenager by the time it properly took off.
Oohh, riiiighttt, D is new(s).
Slow day?