The commit removing Swift has a little bit more detail:
https://github.com/LadybirdBrowser/ladybird/commit/e87f889e3...
Everywhere: Abandon Swift adoption
After making no progress on this for a very long time, let's acknowledge
it's not going anywhere and remove it from the codebase.
Some more context here too:
https://github.com/LadybirdBrowser/ladybird/issues/933
As someone who began using Swift in 2021, after almost 10 years in C#/.NET land, I was already a bit grumpy at how complex C# was (it was 21 years old at that point). But then coming to Swift, I couldn't believe how much more complex it was than C#. Swift was released in 2014, so it would've been 8 years old in 2022. How is a language less than half the age of C# MORE complex than C#?
And this was me trying to use Swift for a data access layer + backend web API. There's barely any guidance or existing knowledge on using Swift for backend APIs, let alone a web browser of all projects.
There's no precedent or existing implementation you can look at for reference; known best practices in Swift are geared almost entirely towards using it with Apple platform APIs, so tons of knowledge about using the language itself simply cannot be applied outside the domain of building client-running apps for Apple hardware.
To use Swift outside its usual domain is to become a pioneer and try something truly untested. It was always a long shot.
In recent years, simplistic languages such as Python and Go have “made the case” that complexity is bad, period. But when humans communicate expertly in English (Shakespeare, JK Rowling, etc) they use its vast wealth of nuance, shading and subtlety to create a better product. Sure you have to learn all the corners to have full command of the language, to wield all that expressive power (and newcomers to English are limited to the shallow end of the pool). But writing and reading are asymmetrical, and a more expressive language used well can expose the code patterns and algorithms in a way that is easier for multiple maintainers to read and comprehend. We need to match the impedance of the tool to the problem. [I paraphrase Larry Wall, inventor of the gloriously expressive https://raku.org]
Not sure how I feel about Shakespeare and JK Rowling living in the same parenthesis!
Computer languages are the opposite of natural languages - they are for formalising and limiting thought, the exact opposite of literature. These two things are not comparable.
If natural language were so good for programs, we’d be using it - many, many people have tried, from literate programming onward.
I fully accept that formalism is an important factor in programming language design. But all HLLs (well, even ASM) are a compromise between machine speak (https://youtu.be/CTjolEUj00g?si=79zMVRl0oMQo4Tby) and human speak. My case is that the current fashion is to draw the line at an overly simple level, and that there are ways to wrap the formalism in more natural constructs that trigger the parts of the brain that have evolved to handle language (nouns, verbs, adverbs, prepositions and so on).
Here's a very simple, lexical declaration made more human friendly by use of the preposition `my` (or `our` if it is package scoped)...
I started using it around 2018. After being reasonably conversant in Objective-C, I fully adopted Swift for a new iOS app and thought it was a big improvement.
But there's a lot of hokey, amateurish stuff in there... with more added all the time. Let's start with the arbitrary "structs are passed by value, classes by reference." And along with that: "Prefer structs over classes."
But then: "Have one source of truth." Um... you can't do that when every data structure is COPIED on every function call. So now what? I spent so much time dicking around trying to conform to Swift's contradictory "best practices" that developing became a joyless trudge with glacial progress. I finally realized that a lot of the sources I was reading didn't know WTF they were talking about and shitcanned their edicts.
A lot of the crap in Swift and SwiftUI reminds me of object orientation, and how experienced programmers arrived at a distilled version of it that kept the useful parts and rejected the dumb or utterly impractical ideas that were preached in the early days.
I think Swift was developed to keep a number of constituencies happy.
You can do classic OOP, FP, Protocol-Oriented Programming, etc., or mix them all (like I do).
A lot of purists get salty that it doesn’t force implementation of their choice, but I’m actually fine with it. I tend to have a “chimeric” approach, so it suits me.
Been using it since 2014 (the day it was announced). I enjoy it.
No, Swift was developed as a strategic moat around Apple's devices. They cannot be dependent on any other party for the main language that runs on their hardware. Controlling your own destiny full stack means having your own language.
There are plenty of valid reasons to use classes in Swift. For example if you want to have shared state you will need to use a class so that each client has the same reference instead of a copy.
> But there's a lot of hokey, amateurish stuff in there... with more added all the time. Let's start with the arbitrary "structs are passed by value, classes by reference." And along with that: "Prefer structs over classes."
This is the same way that C# works, and C and C++ too - why is this a surprise?
Nowhere does it say structs provide “one source of truth”. It says the opposite, actually - that classes are to be used when unique instances are required. All classes have a unique ID, which is simply their virtual memory address. Structs by contrast get memcpy’d left and right and have no uniqueness.
You can also look at the source code for the language if any of it is confusing. It’s very readable.
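And the identity distinction is easy to see for yourself; a quick illustrative sketch (made-up types, not from the docs):

    final class Session {}       // reference type: has identity
    struct Point { var x = 0 }   // value type: no identity, freely copied

    let a = Session()
    let b = a
    print(a === b)   // true - `===` compares references, same instance

    var p = Point()
    var q = p        // a fresh copy; `===` doesn't even exist for structs
    q.x = 5
    print(p.x)       // 0 - mutating the copy never touched the original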
Regardless of the language it is written in, one thing that I hope Ladybird will focus on when the time comes is a user-respecting Javascript implementation. Regardless of what the Web standards say, it is unacceptable that websites can (ab)use JS against the users for things such as monitoring presence/activity, disabling paste, and extracting device information beyond what is strictly necessary for an acceptably formatted website. One approach could be to report standardized (spoofed) values across the user base so that Ladybird users are essentially indistinguishable from each other (beyond the originating IP). This is more or less the approach taken by Tor, and where a project like Ladybird could make a real difference.
There are just too many defense mechanisms on popular websites that would simply get Ladybird flagged as a bot and render the website unusable. I wouldn't mind a toggle to switch between this and normal behavior, but having that as a default would be bad for wider adoption.
If those "popular websites" are the likes of Facebook and Instagram, I don't see that as a big loss. That being said, I find that most of the Web works just fine on Tor, so it's certainly possible. Most of the issues seem related to the (known) the exit IP being overused or identified as Tor.
> If those "popular websites" are the likes of Facebook and Instagram, I don't see that as a big loss.
Personally I wouldn't mind either but my point is that they probably want to cater to the average person, and not just security conscious tech savvy people, and if that's the case, then you really can't exclude FB/IG/YT and others from working properly in your browser.
> they probably want to cater to the average person, and not just security conscious tech savvy people
Why? The average person is well served by large existing players, whereas security conscious tech people are extremely underserved and often actually willing to pay.
Specific numbers aside, one possible reason is they want to increase adoption to gain user volume, in order to have an effect on the larger ecosystem.
Once you have non-trivial network effects, you could continue to influence the ecosystem (see: MSIE, Firefox in its early days, and Google Chrome). There are probably multiple paths to this. This is one.
Firefox tries to position itself as that secure and private alternative, but this is mostly marketing. For a long time, Chromium had better site isolation, and the default Firefox settings are permissive when it comes to fingerprinting. Out of the box, it seems that Brave wins here, but for now using Brave means accepting a lot of extra commercial stuff that should not be in a browser in the first place (and that increases the attack surface). I have been using the Arkenfox user.js for Firefox, but it's unclear how much good it does or if it isn't counterproductive (by making the user stand out).
It looked to me like it was just due to recurring build issues. Lots of "Swift can't import these conflicting C++ versioned libraries concurrently" and "can't use some operator due to versioning or build conflicts". Basically it sounds like trying to add Swift to the project was breaking too many things, and they decided it wasn't worth it.
It's a shame - I think Swift is an underappreciated language - but I understand their reasoning. I think if they had tried to just use Swift from the beginning it would have been too ambitious, and trying to add Swift to a fragile, massive project was probably too complex.
Why did Ladybird even attempt this with Swift, but (I presume) not with Rust? If they're going to go to the trouble of adding another language, does Rust not have a better history of C++ interop? Not to mention, Swift's automatic reference counting doesn't seem great for the browser's performance.
> Swift is strictly better in OO support and C++ interop
Fascinating.
They've shown that the idea that it's better at C++ interop is wrong.
I don't know enough to say Rust has the same OO support as Swift, but I'm pretty sure it does. (My guess as a former Swift dev: "protocol-oriented programming" was a buzzy thing that would have sounded novel, but amounted to "use traits" in Rust parlance - see the sketch below.)
EDIT: Happy to hear a reply re: why downvotes, -3 is a little wild, given current replies don't raise any issues.
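To illustrate that guess with a hedged sketch (made-up names): a Swift protocol with a default implementation is more or less what a Rust trait with default methods gives you.

    struct Rect { var w = 0.0, h = 0.0 }

    protocol Drawable {
        var bounds: Rect { get }
        func draw()
    }

    // The default implementation lives at the protocol level,
    // much like a default method on a Rust trait.
    extension Drawable {
        func draw() { print("drawing \(bounds.w) x \(bounds.h)") }
    }

    struct Circle: Drawable {
        var bounds: Rect   // picks up draw() for free
    }

    Circle(bounds: Rect(w: 2, h: 2)).draw()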
Rust has straightforward support for every part of OOP other than implementation inheritance, and even implementation inheritance can be rephrased elegantly as the generic typestate pattern. (The two are effectively one and the same; if anything, generic typestate is likely more general.)
> Rust has straightforward support for every part of OOP other than implementation inheritance
Except the only thing that makes OOP OOP: Message passing.
Granted, Swift only just barely supports it, and only for the sake of interop with Objective-C. Still, Swift has better OO support because of it. Rust doesn't even try.
Not that OOP is much of a goal. There is likely good reason why Smalltalk, Objective-C, and Ruby are really the only OOP languages in existence (some esoteric language nobody has ever heard of notwithstanding).
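For the record, the Objective-C bridge mentioned above looks like this in practice - a hedged sketch (assumes Foundation, i.e. an Apple platform or swift-corelibs-foundation):

    import Foundation

    class Greeter: NSObject {
        @objc func greet() { print("hello") }
    }

    let receiver: NSObject = Greeter()
    let message = #selector(Greeter.greet)

    // Smalltalk-style message passing: the method is looked up by selector
    // at runtime, and the receiver is free to not respond at all.
    if receiver.responds(to: message) {
        _ = receiver.perform(message)
    }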
You can, but then you don't get any of what OOP actually offers. Message passing isn't the same thing as dynamic dispatch. OOP is a very different paradigm.
I think you are both unknowingly talking past each other: my understanding is that Smalltalk-style "object-oriented programming" ("everything is a message!") is quite distinct from C++/C#/Java/Rust "object-oriented programming" ("my structs have methods!")
I think we have seen enough: the best example of a Rust browser, Servo, has taken 14 years to reach v0.0.1.
So the approach of having a new language that requires a full rewrite (even with an LLM) is still a bad approach.
Fil-C can likely do the job without a massive rewrite, achieving safety for C and C++.
Job done.
EDIT: The authors of Ladybird have already dismissed using Rust, and Servo's slow pace clearly shows why the Ladybird authors do not want something like that to happen to their project.
Until just a couple years ago, Servo had been a pure research project with no goal of ever releasing a full browser (and it was abandoned by Mozilla in 2020).
Igalia had five engineers working full time who turned that science project into v0.0.1 in less than two years.
The Rust ecosystem has only just started getting into shape on the GUI toolkit front... So perhaps save your criticism for something that wasn't born in a vacuum.
> Why did Ladybird even attempt this with Swift, but (I presume) not with Rust?
Probably the same reason why Rust is problematic in game development. The borrow checker and idiomatic Rust do not go well together with things that demand cyclic dependencies/references. Obviously there are ways around it but they're not very ergonomic/productive.
I think that's fair. Funny to have a language that makes it prohibitively difficult to use most of the core computer science constructs (lists, graphs etc.).
Binding to C++ is an extremely difficult and complex problem for any language that is similarly rich and has lots of (seemingly) equivalent features. The number of subtle incompatibilities and edge cases becomes nearly endless. It's not surprising that some C++ code can't be bound properly.
Yeah, that's what I realised. But I just wanted to mention that this is not what I was expecting from "excellent" interop. I would say that C has excellent interop, in general.
I did this a long time ago as Swift calling Objective-C++ which can call C++ libs, in that case OpenCV. So it wasn't awful but did require making an ObjC++ wrapper, unless I did something wrong which is also possible.
Andreas Kling said Rust lacks OO, which he says is useful for GUI coding.
He even made an attempt at creating his own language, Jakt, under SerenityOS, but perhaps felt that C++ (earlier with, now without, Swift) was the pragmatic choice for Ladybird.
Rust initially started as the hobby project of a person who happened to be a Mozilla employee, and it later got sponsored by the foundation; however, it was not a language that was specifically designed with browsers in mind.
The language's largest project before it hit 1.0 was Servo. The language wasn't designed for browsers, but it certainly was informed by them and their struggles with maintaining and developing Firefox.
Do your hobbies revolve around the benefits for your employer? I don't mean it in a snarky way either, but given that Rust was initially written in OCaml, you could see how it could go like "I like programming, I like type systems but I want something procedural over functional so let me give it a go".
It can be described as a hobby project only in the sense that his employer would probably prefer that he spend all his time working on Firefox.
Tools to do X better are often designed by people who get paid a lot to do X and worry about losing their job if they are not good enough at X.
If he were to tell me that he didn't imagine Rust's helping with browser dev when he designed Rust, then I'd believe him, but the "circumstantial" evidence points strongly in the other direction.
> Rust designed specifically for being a language for developing a rendering engine
Rust was born at Mozilla, sort of. It was created by a Mozilla employee. The first "real" project to put it into action was Servo of which parts were adopted into Firefox. While Rust may not have been developed "specifically" to create a browser, it is a fair comment.
That said, Ladybird was started as part of the SerenityOS project. That entire project was built using C++. If the original goal of Serenity was to build an operating system, C++ would have felt like a reasonable choice at the time.
By the time Ladybird was looking for "better" languages than C++, Ladybird was already a large project and was making very heavy use of traditional OOP. Rust was evaluated but rejected because it did not support OOP well. Or, at least, it did not support integration into a large, C++ based, OOP project.
Perhaps, if Ladybird had first selected a language to write a browser from scratch, they would have gone with Rust. We will never know.
We do know that Mozilla, despite being the de facto stewards of Rust at the time, and having a prototype web browser written in Rust (Servo), decided to drop both Rust and Servo. So, perhaps using Rust for browsers is not as open and shut as you imply.
As someone who was on that team for a long time, we took that into consideration, but it was never specifically for that. There was some stuff the Servo team would have liked us to have implemented that we didn’t.
I so wholeheartedly agree. You are making a new web browser - akin to a new OS - and you want it open source for everybody, but you choose Swift, not Rust?
Lots of people seem really committed to OOP. Rust is definitely a bad fit if you can't imagine writing code without classes and objects. I don't think this makes Rust a bad language for the problem. It just, perhaps, makes Rust a bad language for some programmers.
It doesn't seem uncommon for someone to generally like Rust but still want to use something OO for UI. I'm in that boat. Never liked OOP much, but it makes sense sometimes.
Implementation inheritance can be achieved with good old shared functions that take trait arguments, like this:
    // Shared behavior lives in a free function constrained by capability
    // traits, rather than in a base class.
    fn paint_if_visible<W>(widget: &W, ctx: &mut PaintCtx)
    where
        W: HasBounds + HasVisibility,
    {
        if widget.is_visible() {
            ctx.paint_rect(widget.bounds());
        }
    }
You can also define default methods at the trait level.
This all ends up being much more precise, clear, and strongly typed than the typical OO inheritance model, while still following a similar overall structure.
You can see real world examples of this kind of thing in the various GUI toolkits for Rust, like Iced, gpui, egui, Dioxus, etc.
Not too surprising. Swift is too tied to Apple and it's not really clear what the benefit would be relative to a subset of C++ written with contemporary memory safety practices. It's a battle tested choice and pretty much every browser actually in use is written in C++.
> It's a battle tested choice and pretty much every browser actually in use is written in C++.
Every browser in use is stuck with C++ because they're in way too deep at this point, but Chromium and Firefox are both chipping away at it bit by bit and replacing it with safer alternatives where they feasibly can. Chromium even blocked JPEG-XL adoption until there was a safe implementation because they saw the reference C++ decoder as such a colossal liability.
IMO the takeaway is that although those browsers do use a ton of C++ and probably always will, their hard-won lessons have led them to wish they didn't have to, and to write a brand new browser in C++ is just asking to needlessly repeat all of the same mistakes. Chromium uses C++ because Webkit used C++ because KHTML used C++ in 1998. Today we have the benefit of hindsight.
> Chromium even blocked JPEG-XL adoption until there was a safe implementation because they saw the reference C++ decoder as such a colossal liability.
Quickly followed by several vulnerabilities in that reference library as well; good move
TBF that's less a C++ thing and more that there have been several high profile decoder vulnerabilities over the past however many years. Enough that Google created the custom language WUFFS for the express purpose of implementing secure parsers for arbitrary file formats.
It's emblematic of the C++ devs' penchant for not implementing error handling on invalid input, relying on the "safety net" of exceptions without bothering to properly handle them.
It's probably okay to solve one problem at a time: first solve the "free open source browser, developed from the Web standard specs" problem in an established language (C++), and then the "reimplement all or part of it in a more suitable (safer, higher-productivity) language - yet to be devised" problem.
And Andreas Kling already proved the naysayers wrong when he showed that a new operating system and Web browser can be written entirely from scratch, the former not even using any standard libraries; so beware when you are inclined to say 'not feasible'.
Maybe? I feel like there's been lots of efforts to migrate large C++ codebases over the years, and few actually complete the migration. Heck, Google is even making Carbon to try to solve this.
Migrating any large project is going to be billions of dollars' worth of labor. Language isn't a large factor in that cost; you can save a few tens of millions at most with a better language.
> a subset of C++ written with contemporary memory safety practices
What is this mythical subset of C++? Does it include use of contemporary STL features like string_view? (Don’t get me wrong — modern STL is considerably improved, but it’s not even close to being memory-safe.)
Ladybird inherits its C++ from SerenityOS. Ladybird has an almost completely homegrown standard library, including its own pointer classes and a couple of different string classes that do some interesting things with memory. But perhaps the most novel stuff is things like TRY and MUST:
https://github.com/SerenityOS/serenity/blob/master/Documenta...
You see this reflected all the way back to the main function: the entry point for the entire browser returns an ErrorOr and is written in terms of this same TRY machinery.
If Ladybird is successful, I would not be surprised to see its standard library take off with other projects. Again, it is really the SerenityOS standard library but the SerenityOS founder left the project to focus on Ladybird. So, that is where this stuff evolves now.
So interesting to hear about the internals of the browser, how it evolved from a standard library in an OS project. It's the kind of insight that's rarely documented, spoken about, or even known to anybody other than the author.
I can totally imagine how a prolific and ambitious developer would create a world of their own, essentially another language with domain-specific vocabulary and primitives. People often talk about using a "subset of C++" to make it manageable for mortals, and I think the somewhat unusual consideration of Swift was related to this desire for an ergonomic language to express and solve the needs of the project.
Memory safety isn't really much of a problem with modern C++. We have the ranges library now, for instance. What's nice about modern C++ is you can avoid almost all manual loops and talk at the algorithm level.
Are we talking about the same ranges library? The one that showed up in C++20 and is basically just iterator pairs dressed up nicely? The one where somehow the standards committee thought all memory safety issues with iteration could be summed up with a single “borrowed” bit? The one where instead of having a nice loop you can also build a little loop-body pipeline and pass it as a parameter and have exactly the same iterator invalidation and borrowing problems that C++ has had since day 1?
Having a checklist of "things not to do" is historically a pretty ineffective way to ensure memory safety, which is why the parent comment was asking for details. The fact that this type of thing gets dismissed as a non-issue is honestly a huge part of the problem, in my opinion; it's time to move on from pretending this is a skill issue.
Servo is slowly but steadily getting there. The thing with Servo is that it's highly modularized and some of its components are widely used by the larger Rust ecosystem, even if the whole browser engine isn't. So there's multi-pronged vested interest in developing it.
Moreover, Servo aims to be embeddable (there are some working examples already), which is where other non-Chrome/ium browsers are failing (and Firefox too).
Thanks to this it has a much better chance at wider adoption and actually spawning multiple browsers.
> The thing with Servo is that it's highly modularized and some of its components are widely used by the larger Rust ecosystem, even if the whole browser engine isn't.
Alas not nearly as modularized as it could be. I think it's mainly just Stylo and WebRender (the components that got pulled into Firefox), and html5ever (the HTML parser) that are externally consumable.
Text and layout support are two things that could easily be ecosystem modules but aren't, seemingly (from my perspective) because the ambition to be modular has been lost.
I've seen recent talk about swappable js engine, so I'm unsure about the ambition being lost.
I'm eyeing Blitz too (actually tried to use it in one of my projects but the deps fucked me up).
Servo's history is much more complicated; it was originally planned to be used for the HoloLens before the layoffs. Comparing trajectories doesn't make sense - they had completely different goals and directions.
What are you talking about? It doesn't have a "browser", it has a testing shell. For a time there was an actual attempt with the Verso experiment, but it got shelved just recently.
Servo is working on being embeddable at the same time as Rust GUI toolkits are maturing. Once it gets embedding stabilized, that will be the time for full-blown browser development.
> It doesn't have a "browser", it has a testing shell.
So, yes it is still pre-historic.
> Once it gets embedding stabilized that will be the time for a full blown browser developement.
Servo development began in 2012. [0] 14 years later we get a v0.0.1.
At this point, Ladybird will likely reach 1.0 faster than Servo could, and the latter is not even remotely close to being usable even in 14 years of waiting.
This is disingenuous. Servo is using Rust, a language which pretty much grew up together with it, along with all the components surrounding it.
C++ is how old, please remind me?
> At this point, Ladybird will likely reach 1.0 faster than Servo could, and the latter is not even remotely close to being usable even in 14 years of waiting.
Well, it was a terrible idea in any case unless it was for high-level-ish code only. Swift generally can't compete with C++ in raw performance (in the same way as Java - yeah, there are benchmarks where it's faster, but it basically doesn't happen in real programs).
Performance wasn't really the issue here though. The issue was that Swift's C++ interop is still half-baked and kept breaking the build. You can write a perfectly fast browser in Swift for the non-hot-path stuff, which is most of a browser. They killed it because the tooling wasn't ready, not because the language is slow.
It wasn't the reason why it was removed, but, well, we agree - it would have been a problem if used indiscriminately. I didn't do any additional research, but what I read in public was simply "Ladybird is going to use Swift".
Hurray for microbenchmarks. Anyway, every language can be abused. I can make Java run slower than Ruby. Given that it runs on microcontrollers on billions of devices, I don't think Swift is necessarily the problem in whatever case you have in mind. (And yes, I stole Oracle's Java marketing there for Swift; it is true though.)
For their sake, I hope not. I don't think an outside-donation-financed project with this much ADD can survive in the long term.
It's frustrating to discuss. It is a wonderful case study in how not to make engineering management decisions, and yet, they've occurred over enough time, and the cause is appealing enough, that it's hard to talk about out loud in toto without sounding like a dismissive jerk.
> From what I can tell they're pretty laser focused on making a browser
I agree with you. I also agree that this decision is an example of that.
SerenityOS had an "everything from scratch in one giant mono-repo" rule. It was, explicitly, a hobby project, one rooted in enjoyment and idealism from the get-go. It was founded by a man looking for something productive to focus on instead of drugs. It was therapy. Hence the name.
Ladybird, as an independent project, was founded with the goal of being the only truly independent web browser (independent from corporate control generally and Google specifically).
They have been very focussed on that, have not had any sacred cows, and have shed A LOT of the home-grown infrastructure they inherited from being part of SerenityOS. Sometimes that saddens me a little but there is no denying that it has sped them up.
Their progress has been incredible. This comment is being written in Ladybird. I have managed GitHub projects in Ladybird. I have sent Gmail messages in Ladybird. It is not "ready" but it blows my mind how close it is.
I think Ladybird will be a "usable" browser before we enter 2027. That is just plain amazing.
Yeah, SerenityOS was "build everything from scratch for fun." Ladybird is "build where an alternative implementation is going to add value." No need to get sidetracked reinventing SSL or ffmpeg.
Guy writes that they won't be using Swift and the comments are filled with Rust this and Rust that. The stereotypes never die. One day the whole tech world will just collapse because of how rotten, degenerate and toxic the Rust community is.
There's already a stereotype that Rust people will just carpet bomb any discussion with aggressively promoting Rust, people expect it to happen at this point and it just annoys people.
So I think it probably meets the threshold of "toxic", but more importantly - it's not effective. Everyone and their dog has already heard of Rust, aggressive proselytising is not going to help drive Rust adoption, it's just pissing people off
Swift never felt truly open source either. That people can submit evolution proposals doesn't change the fact that Apple still holds all the keys and pushes whatever priorities it needs, even if they're not a good idea (e.g. Concurrency, Swift Testing, etc.)
Also, funnily enough, all the cross-platform work is done by small work groups, some even looking for funding... anyway.
Apple has always been 'transactional' when it comes to OSS - they open source things only when it serves a strategic purpose. They open-sourced Swift only because they needed the community to build an ecosystem around their platform.
Yeah, well, sure they've done some work around LLVM/Clang, WebKit, CUPS, but it's really not proportional to the size and the influence they still have.
Compare them to Google, with TensorFlow, k8s, Android (nominally), Golang, Chrome, and a long tail of other shit. Or Meta - PyTorch and the Llama model series. Or even Microsoft, which has dramatically reversed course from its "open source is a cancer" era (yeah, they were openly saying that, can you believe it?) to becoming one of the largest contributors on GitHub.
Apple, I've heard, even has the harshest restrictions about it - some teams are just not permitted to contribute to OSS in any way. Obsessively secretive, and at what price? No wonder Apple's software products are just horrendously bad - well, not all the time, but too often. And on their own hardware, too.
I wouldn't mind if Swift dies, I'm glad Objective-C is no longer relevant. In fact, I can't wait for Swift to die sooner.
>> some teams are just not permitted to contribute to OSS in any way
My understanding is that by default you are not allowed to contribute to open source even if it's your own project. Exceptions are made for teams whose function is to work on those open-source projects, e.g. Swift/LLVM/etc...
I talked to an apple engineer at a bar years ago and he said they aren’t allowed to work on _anything_ including side projects without getting approval first. Seemed like a total wtf moment to me.
I have never had a non wtf moment talking to an apple software engineer at a bar.
I can recall one explaining to me in the mid-2010s that the next iPhone would be literally impossible to jailbreak in any capacity, with 100% confidence.
I could not understand how someone that capable (he was truly bright) could be that certain. That is pure 90s security arrogance. The only secure computer is one powered off in a vault, and even then I am not convinced.
Multiple exploits were eventually found anyway.
We never exchanged names. That’s the only way to interact with engineers like that and talk in real terms.
No, as far as I know, at Apple this is strict - you cannot contribute to OSS, period. Not from your own equipment nor your friend's, not even during a vacation. It may cost you your job. Of course, it's not universal for every team, but on the teams where I know a few people, that's what I heard. Some companies just don't give a single fuck about what you want or need, or where your ideals lie.
I suspect it's not just Apple. I have "lost" so many good GitHub friends - incredible artisans and contributors; they've gotten well-paid jobs and then suddenly... not a single green dot on the wall since. That's sad. I hope they're getting paid more than enough.
Every programming job I've ever had, I've been required at certain points to make open source contributions. Granted, that was always "we have an issue with this OSS library/software we use, your task this sprint is to get that fixed".
I won't say never, but it would take an exceedingly large comp plan for me to sign paperwork forbidding me from working on hobby projects. That's pretty orwellian. I'm not allowed to work on hobby projects on company time, but that seems fair, since I also can't spend work hours doing non-programming hobbies either.
Sort of an exception that proves the rule. Yes, it's great and was released for free. But at least partially that's not a strategic decision from Apple but just a requirement of the LGPLv2 license[1] under which they received it (as KHTML) originally.
And even then, it was Blink and not WebKit that ended up providing better value to the community.
[1] It does bear pointing out that lots of the new work is dual-licensed as 2-clause BSD also. Though no one is really trying to test a BSD-only WebKit derivative, as the resulting "Here's why this is not a derived work of the software's obvious ancestor" argument would be awfully dicey to try to defend. The Ship of Theseus is not a recognized legal principle, and clean rooms have historically been clean for a reason.
The fact that Swift is an Apple baby should indeed be considered a red flag. I know there are some Objective-C lovers out there but I think it is an abomination.
Apple is (was?) good at hardware design and UX, but they are pretty bad at producing software.
For what it’s worth, ObjC is not Apple’s brainchild. It just came along for the ride when they chose NEXTSTEP as the basis for Mac OS X.
I haven’t used it in a couple decades, but I do remember it fondly. I also suspect I’d hate it nowadays. Its roots are in a language that seemed revolutionary in the 80s and 90s - Smalltalk - and the melding of it with C also seemed revolutionary at the time. But the very same features that made it great then probably (just speculating - again I haven’t used it in a couple decades) aren’t so great now because a different evolutionary tree leapfrogged ahead of it. So most investment went into developing different solutions to the same problems, and ObjC, like Smalltalk, ends up being a weird anachronism that doesn’t play so nicely with modern tooling.
I've never written whole applications in ObjC but have had to dabble with it as part of Ardour (ardour.org) implementation details for macOS.
I think it's a great language! As long as you can tolerate dynamic dispatch, you really do get the best of C/C++ combined with its run-time manipulable object type system. I have no reason to use it for more code than I have to, but I never grimace if I know I'm going to have to deal with it. Method swizzling is such a neat trick!
It is, and that’s part of what I loved about it. But it’s also the kind of trick that can quickly become a source of chaos on a project with many contributors and a lot of contributor churn, like we tend to get nowadays. Because - and this was the real point of Dijkstra’s famous paper; GOTO was just the most salient concrete example at the time - control flow mechanisms tend to be inscrutable in proportion to their power.
And, much like what happened to GOTO 40 years ago, language designers have invented less powerful language features that are perfectly acceptable 90% solutions. e.g. nowadays I’d generally pick higher order functions or the strategy pattern over method swizzling because they’re more amenable to static analysis and easier to trace with typical IDE tooling.
I don't really want to defend method swizzling (it's grotesque from some entirely reasonable perspectives). However, it does work on external/3rd party code (e.g. audio plugins) even when you don't have control over their source code. I'm not sure you can pull that off with "better" approaches ...
Many of the built-in types in Objective-C have names beginning with “NS”, like “NSString”. The NS stands for NeXTSTEP. I always found it insane that so many years later, every iPhone on Earth was running software written in a language released in the 80s. It’s definitely a weird language, but really quite pleasant once you get used to it, especially compared to other languages from the same time period. It’s truly remarkable they made something with such staying power.
>It’s truly remarkable they made something with such staying power
What has had the staying power is the API because that API is for an operating system that has had that staying power. As you hint, the macOS of today is simply the evolution of NeXTSTEP (released in 1989). And iOS is just a light version of it.
But 1989 is not all that remarkable. The Linux API (POSIX) was introduced in 1988 but started in 1984 and based on an API that emerged in the 70s. And the Windows API goes back to 1985. Apple is the newest API of the three.
As far as languages go, the Ladybird team is abandoning Swift to stick with C++ which was released back in 1979. And of course C++ is just an evolution of C which goes back to 1972 and which almost all of Linux is still written in.
And what is Ladybird even? It is an HTML interpreter. HTML was introduced in 1993. Guess what operating system HTML and the first web browser were created on. That is right... NeXTSTEP.
In some ways ObjC’s and the NEXTSTEP API’s staying power is more impressive because they survived the failure of their relatively small patron organization. POSIX and C++ were developed at and supported by tech titans - the 1970s and 1980s equivalents of FAANG. Meanwhile back at the turn of the century we had all witnessed the demise of NeXT and many of us were anticipating the demise of Apple, and there was no particularly strong reason to believe that a union of the two would fare any better, let alone grow to become one of the A’s in FAANG.
I actually suspect that ObjC and the NeXT APIs played a big part in that success. I know they’ve fallen out of favor now, and for reasons I have to assume are good. But back in the early 2000s, the difference in how quickly I could develop a good GUI for OS X compared to what I was used to on Windows and GNOME was life changing. It attracted a bunch of developers to the platform, not just me, which spurred an accumulation of applications with noticeably better UX that, in turn, helped fuel Apple’s consumer sentiment revival.
Good take. Even back in the 1990s, OpenStep was thought to be the best way to develop a Windows app. But NeXT charged per-seat licenses, so it didn't get much use outside of Wall Street or other places where Jobs would personally show up. And of course something like iPhone is easier when they already had a UI framework and an IDE and etc.
Assuming you mean C (C++ is an 80s child), that’s trivially true because devices with an ObjC SDK are a strict subset of devices that are running on C.
Yes, that is why I don't find it "insane" like the grandparent does, like yeah, devices run old languages because those languages work well for their intended purpose.
You should feel that C’s longevity is insane. How many languages have come and gone in the meantime? C is truly an impressive language that profoundly moved humanity forward. If that’s not insane (used colloquially) to you, then what is?
NeXT was more or less an Apple spinoff that was later acquired by Apple. Objective-C was created because using standards is contrary to the company culture. And with Swift they are painting themselves into a corner.
> Objective-C was created because using standards is contrary to the company culture
Objective-C was actually created by a company called Stepstone that wanted what they saw as the productivity benefits of Smalltalk (OOP) with the performance and portability of C. Originally, Objective-C was seen as a C "pre-compiler".
One of the companies that licensed Objective-C was NeXT. They also saw pervasive OOP as a more productive way to build GUI applications. That was the core value proposition of NeXT.
NeXT ended up basically taking over Objective-C, and then it became a core part of Apple when Apple bought NeXT to create the next generation of macOS (the one we have now).
So, Objective-C was actually born attempting to "use standards" (C instead of Smalltalk) and really has nothing to do with Apple culture. Of course, Apple and NeXT were both brought into the world by Steve Jobs.
> Objective-C was created because using standards is contrary to the company culture.
What language would you have suggested for that mission and that era? Self or Smalltalk and give up on performance on 25-MHz-class processors? C or Pascal and give up an excellent object system with dynamic dispatch?
C's a great language in 1985, and a great starting point. But development of UI software is one of those areas where object oriented software really shines. What if we could get all the advantages of C as a procedural language, but graft on top an extremely lightweight object system with a spec of < 20 pages to take advantage of these new 1980s-era developments in software engineering, while keeping 100% of the maturity and performance of the C ecosystem? We could call it Objective-C.
Years ago I wrote a toy Lisp implementation in Objective-C, ignoring Apple’s standard library and implementing my own class hierarchy. At that point it was basically standard C plus Smalltalk object dispatch, and it was a very cool language for that type of project.
I haven’t used it in Apple’s ecosystem, so maybe I am way off base here. But it seems to me that it was Apple’s effort to evolve the language away from its systems roots into a more suitable applications language that caused all the ugliness.
Some refer to the “Tim Cook doctrine” as a reason for Swift’s existence. It’s not meant to be good, just to fulfill the purpose of controlling that part of their products, so they don’t have to rely on someone else’s tooling.
That doesn’t really make sense though. I thought that they hired Lattner to work on LLVM/clang so they could have a non-gpl compiler and to make whatever extensions they wanted to C/Obj-C. Remember when they added (essentially) closures to C to serve their internal purposes?
So they already got what they wanted without inventing a new language. There must be some other reason.
The Accidental Tech podcast had a long interview with Lattner about Swift in 2017 [0]. He makes it out as something that started as a side-project/exploration thing without much of an agenda, which grew mostly because of the positive feedback the project got from other developers. He had recently left Apple back then, and supposedly left the future of Swift in other people's hands.
I definitely agree with the first point - it's not meant to be the best.
On the second part, I think the big thing was that they needed something that would interop with Objective-C well and that's not something that any language was going to do if Apple didn't make it. Swift gave Apple something that software engineers would like a ton more than Objective-C.
I think it's also important to remember that in 2010/2014 (when swift started and when it was released), the ecosystem was a lot different. Oracle v Google was still going on and wasn't finished until 2021. So Java really wasn't on the table. Kotlin hit 1.0 in 2016 and really wasn't at a stage to be used when Apple was creating Swift. Rust was still undergoing massive changes.
And a big part of it was simply that they wanted something that would be an easy transition from Objective-C without requiring a lot of bridging or wrappers. Swift accomplished that, but it also meant that a lot of decisions around Swift were made to accommodate Apple, not things that might be generally useful to the larger community.
All languages have this to an extent. For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory. This decision probably isn't what would be the best for most of the non-Google community, but it's also something that could be reconsidered in the future since it's an implementation detail rather than a language detail.
I'm not sure any non-Apple language would have bent over backwards to accommodate Objective-C. But also, what would Apple have chosen circa 2010, when work on Swift started? Go was (and to an extent still is) "we only do things these three Googlers think are a good idea", and it was basically brand-new at the time; even today Go doesn't really have a UI framework. Kotlin hadn't been released when work started on Swift. C# was still closed source. Rust hadn't publicly appeared yet and was still undergoing a lot of big changes through Swift's release. Python and other dynamic languages weren't going to fit the bill. There really wasn't anything that existed then which could have been used instead of Swift. Maybe D could have been used.
But also, is Swift bad? I think that some of the type inference stuff that makes compiles slow is genuinely a bad choice and I think the language could have used a little more editing, but it's pretty good. What's better that doesn't come with a garbage collector? I think Rust's borrow checker would have pissed off way too many people. I think Apple needed a language without a garbage collector for their desktop OS and it's also meant better battery life and lower RAM usage on mobile.
If you're looking for a language that doesn't have a garbage collector, what's better? Heck, what's even available? Zig is nice, but you're kinda doing manual memory management. I like Rust, but it's a much steeper learning curve than most languages. There's Nim, but its ARC-style system came 5+ years after Swift's introduction.
So even today and even without Objective-C, it's hard to see a language that would fit what Apple wants: a safe, non-GC language that doesn't require Rust-style stuff.
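For anyone unfamiliar with what Swift does instead of a tracing GC: automatic reference counting, with `weak` to break cycles by hand. A rough sketch of the trade:

    class Node {
        var next: Node?
        weak var prev: Node?   // weak: doesn't keep its target alive, so no retain cycle
        deinit { print("node freed") }
    }

    var a: Node? = Node()
    var b: Node? = Node()
    a?.next = b
    b?.prev = a
    a = nil   // first node's refcount hits zero and it is freed right here
    b = nil   // second node follows; deterministic, no collector pauses

The trade-off: the deterministic frees and lower RAM usage mentioned above, but cycles become the programmer's problem.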
I think that their culture of trying to invent their own standards is generally bad, but it is even worse when it is a programming language. I believe they are painting themselves into a corner.
>For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory.
Do you have a source for this?
C# has a copying GC, and easy interop with C has always been one of its strengths. From the perspective of the user, all you need to do is to "pin" a pointer to a GC-allocated object before you access it from C so that the collector avoids moving it.
I always thought it had more to do with making the implementation simpler during the early stages of development, with the possibility of making it a copying GC some time in the future (mentioned somewhere in the stdlib's sources, I think), but it never came to fruition because Go's non-copying GC was fast enough and a lot of code has since been written with the assumption that memory never moves. Adding a copying GC today would probably break a lot of existing code.
To add to this, whatever was to become Obj-C's successor needed to be just as or more well-suited for UI programming with AppKit/UIKit as Obj-C was. That alone narrows the list of candidates a lot.
Swift has its problems, and I certainly wouldn't use it for anything outside of development for Apple platforms, but saying they had no experts on the team is a stretch. Most Swift leads were highly regarded members of the C++ world, even if you discount Chris Lattner.
I meant no Swift experts on the Ladybird team. Their expertise is C++. You may think the transition is easy, and it can be pretty painless at first, but true language expertise means knowing how to work around its flaws and adapting your patterns to its strengths. Cool for a hobby, but switching language in the middle of a herculean work is suicide.
> switching language in the middle of an herculean work is suicide
My read is that that this is the main thing that happened here.
The Ladybird team is quite pragmatic. Or, at least their founder is. I think they understood the scale of building a browser. They understood that it was a massive undertaking. They also understood the security challenge of building an application that big whose main job is processing untrusted input. So, they thought that perhaps they needed a better language than C++ for these reasons. They evaluated a few options and came away thinking that Swift was the best, so they announced that they expected to move towards Swift in the future.
But the other side of being pragmatic is that, as time passed, they also realized how hard it would be to change horses. And they are quite productivity driven. They produce a YouTube video every month detailing their progress, including charts and numbers.
Making no progress on the "real" job of building a browser to make progress on the somewhat artificial job of moving to a new programming language just never made sense I guess.
And the final part of being pragmatic is that, after months of not really making any progress on the switch, the founder posted this patch essentially admitting the reality and suggesting they just formalize the lack of progress and move on.
The lack of progress on Swift that is. Their progress on making a browser has been absolutely mind-blowing. This comment is being written in Ladybird.
The point of Swift is not really the language, it's the standard ABI for dynamic code. The Rust folks should commit to supporting it as a kind of extern FFI/interop alongside C, at least on platforms where a standard Swift implementation exists.
It depends. Many languages are a poor fit for certain use cases, and some are bad at everything beyond a very specific niche. Some are rather unpleasant to write any kind of substantial UI with.
What projects are trying something similar to Ladybird? Well, nobody really. But Servo is pretty close, though they are not writing their own Javascript engine or anything.
But you should perhaps give your attention to Servo. They were founded as a project to write a modern browser in Rust. So, no hand-waving there.
No hand-waving on the Ladybird team either, in my opinion. They have very strong technical leadership. The idea that building a massive application designed to process untrusted user input at scale might need a better language than C++ seems like a pretty solid technical suggestion. Making incredible progress month after month using the language you started with seems pretty good too. And deciding, given the progress building and the lack of progress exploring the new language, that perhaps it would be best to formally abandon the idea of a language switch... well, that seems like a pretty solid decision as well.
At least, that is my view.
Oh, and I was a massive Servo fan before the Ladybird project even began. But, given how much further Ladybird has gotten than Servo has, despite being at it for less time and taking on a larger scope...well, I am giving my attention to Ladybird these days.
There's no way to say this without sounding mean: Everything Chris Lattner has done has been a "successful mess". He's obviously smart, but a horrible engineer. No one should allow him to design anything.
LLVM: Pretty much everyone who has created a programming language with it has complained about its design. gingerbill, Jon Blow, and Andrew Kelley have all complained about it. LLVM is a good idea, but that idea was executed better by Ken Thompson with his C compiler for Plan 9, and then again with his Go compiler design. Ken decided to create his own "architecture agnostic" assembly, which is very similar to the IR idea in LLVM.
Swift: I was very excited with the first release of Swift. But it ultimately did not have a very focused vision outlined for it. Because of this, it has morphed into a mess. It tries to be everything for everyone, like C++, and winds up being mediocre, and slow to compile to top it off.
Mojo doesn't exist for the public yet. I hope it turns out to be awesome, but I'm just not going to get my hopes up this time.
Yes. I've also written a compiler and I've also complained about LLVM.
LLVM is:
- Slow to compile
- Breaks compilers/doesn't have a stable ABI
- Optimizes poorly (at least, worse than GCC)
Swift I never used, but I tried compiling it once and it was among the bottom two slowest compilers I ever tested. The only thing nearly as bad was Kotlin, but 1) I don't actually remember which of the two was worse, and 2) Kotlin wasn't meant to be a CLI compiler; it was meant to compile in the background as a language server, so it was designed around that.
Mojo... I have things I could say... But I'll stick to this. I talked to engineers there and I asked one how they expected any python developers to use the planned borrow checker. The engineer said "Don't worry about it" ie they didn't have a plan. The nicest thing I can say is they didn't bullshit me 100% of the time when I directly asked a question privately. That's the only nice or neutral thing I could say
> LLVM: Pretty much everyone who has created a programming language with it has complained about its design. gingerbill, Jon Blow, and Andrew Kelley have all complained about it. LLVM is a good idea, but that idea was executed better by Ken Thompson with his C compiler for Plan 9, and then again with his Go compiler design. Ken decided to create his own "architecture agnostic" assembly, which is very similar to the IR idea in LLVM.
I suggest you ask around to see what the consensus is for which compiler is actually mature. Hint: for all its warts, nobody is writing a seriously optimized language in any of the options you listed besides LLVM.
As far as I know, only Go uses Go's back end because it was specifically designed for Go. But the architecture is such that it makes it trivial for Go to cross compile for any OS and architecture. This is something that LLVM cannot do. You have to compile a new compiler for every OS and arch combo you wish to compile to.
You could imagine creating a modified Go assembler that is more generic and not tied to Go's ABI that could accomplish the same effect as LLVM. However, it'd probably be better to create a project like that from scratch, because most of Go's optimizations happen before reaching the assembler stage.
It would probably be best to have the intermediate language that QBE has and transform that into "intermediate assembly" (IA) very similar to Go's assembly. That way the IL stage could contain nearly all the optimization passes, and the IA stage would focus on code generation that would translate to any OS/arch combo.
> As far as I know, only Go uses Go's back end because it was specifically designed for Go. But the architecture is such that it makes it trivial for Go to cross compile for any OS and architecture. This is something that LLVM cannot do. You have to compile a new compiler for every OS and arch combo you wish to compile to.
I don't think that's true. Zig has a cross-compiler (that also compiles C and C++) based on LLVM. I believe LLVM (unlike GCC) is inherently a cross-compiler, and it's mostly just shipping header files for every platform that `zig cc` is adding.
I do not have enough knowledge to say anything bad about LLVM. As an "amateur" compiler writer, it did confuse me a bit though.
What I will say is that it seems popular to start with LLVM and then move away from it. Zig is doing that. Rust is heading in that direction, perhaps, with Cranelift. It feels that, if LLVM had completely nailed its mission, these kinds of projects would be less common.
It is also notable that the Dragonegg project to bring GCC languages to LLVM died but we have multiple projects porting Rust to GCC.
You don't explain or support your position, you are calling Lattner names. That's not helpful to me or anyone else if we are trying to evaluate his work. Swift has millions of users as does Mojo and Modular in general. These are not trivial accomplishments.
> Everything Chris Lattner has done has been a "successful mess".
I don't have an emotional reaction to this, i.e. I don't think you're being mean, but it is wrong and reductive, which people usually will concisely, and perhaps reductively, describe as "mean".
Why is it wrong?
LLVM is great.
Chris Lattner left Apple a *decade* ago, & thus has ~0 impact or responsibility on Swift interop with C++ today.
Swift is a fun language to write, hence why they shoehorned it in in the first place.
Mojo is fine, but I wouldn't really know how you or I would judge it. For me, I'm not super-opinionated on Python, and it doesn't diverge heavily from it afaik.
I was hired at Google around the same time, but not nearly as famous :)
AFAICouldT it was a "hire first, figure out what to do later", and it ended up being Swift for TensorFlow. That went ~nowhere, and he left within 2 years.
That's fine and doesn't reflect on him, in general, that's Google for ya. At least that era of Google.
Except performance isn't great and it covers far fewer platforms. It aims for 70% performance but the few benchmarks I've seen show more like 30-50% performance.
It's a cool project and I'd consider it for a toy language but it's far from an LLVM replacement.
The commit removing Swift has a little bit more detail:
https://github.com/LadybirdBrowser/ladybird/commit/e87f889e3...
Some more context here too:
https://github.com/LadybirdBrowser/ladybird/issues/933
A lot of the crap in Swift and SwiftUI reminds me of object orientation, and how experienced programmers arrived at a distilled version of it that kept the useful parts and rejected dumb or utterly impractical ideas that were preached in the early days.
I think Swift was developed to keep a number of constituencies happy.
You can do classic OOP, FP, Protocol-Oriented Programming, etc., or mix them all (like I do).
A lot of purists get salty that it doesn’t force implementation of their choice, but I’m actually fine with it. I tend to have a “chimeric” approach, so it suits me.
Been using it since 2014 (the day it was announced). I enjoy it.
No, Swift was developed as a strategic moat around Apple's devices. They cannot be dependent on any other party for the main language that runs on their hardware. Controlling your own destiny full stack means having your own language.
Prefer structs over classes != only use structs.
There are plenty of valid reasons to use classes in Swift. For example if you want to have shared state you will need to use a class so that each client has the same reference instead of a copy.
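Since half this thread is comparing the two languages anyway, here's the same distinction sketched in Rust terms (a loose analogy, not Swift semantics exactly): a cloned value diverges, a reference-counted handle is shared:

```rust
use std::cell::RefCell;
use std::rc::Rc;

#[derive(Clone, Debug)]
struct Settings {
    volume: u8,
}

fn main() {
    // Value semantics (Swift-struct-ish): each client gets its own copy.
    let original = Settings { volume: 5 };
    let mut client_copy = original.clone();
    client_copy.volume = 11;
    println!("{} vs {}", original.volume, client_copy.volume); // 5 vs 11

    // Reference semantics (Swift-class-ish): every client sees one instance.
    let shared = Rc::new(RefCell::new(Settings { volume: 5 }));
    let other_client = Rc::clone(&shared);
    other_client.borrow_mut().volume = 11;
    println!("{}", shared.borrow().volume); // 11: same underlying state
}
```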
> But there's a lot of hokey, amateurish stuff in there... with more added all the time. Let's start with the arbitrary "structs are passed by value, classes by reference." And along with that: "Prefer structs over classes."
This is the same way that C# works, and C and C++. Why is this a surprise?
Neither C++ nor C pass classes by reference by default (what even is a C "class" other than a struct?).
You are correct - it’s been ages since I’ve done C. The distinction is in C#.
> when every data structure is COPIED on every function call
Swift's standard collection types (Array, String, Dictionary) use copy-on-write, so they aren't actually copied on every function call.
They are, as far as "Have one source of truth" is concerned. That is what parent is talking about.
Nowhere does it say structs provide "one source of truth". It says the opposite actually - that classes are to be used when unique instances are required. All classes have a unique ID, which is simply its virtual memory address. Structs by contrast get memcpy'd left and right and have no uniqueness.
You can also look at the source code for the language if any of it is confusing. It's very readable.
You're re-stating his exact problem while trying to refute him.
Not to mention how heated my laptop gets when I try to compile a new Vapor template. On an M1.
So did you go back to C#/.NET and keep using it?
Regardless of the language it is written in, one thing that I hope Ladybird will focus on when the time comes is a user-respecting Javascript implementation. Regardless of what the Web standards say, it is unacceptable that websites can (ab)use JS against the users for things such as monitoring presence/activity, disabling paste, and extracting device information beyond what is strictly necessary for an acceptably formatted website. One approach could be to report standardized (spoofed) values across the user base so that Ladybird users are essentially indistinguishable from each other (beyond the originating IP). This is more or less the approach taken by Tor, and where a project like Ladybird could make a real difference.
There are just too many defense mechanisms on popular websites that would simply get Ladybird flagged as a bot and render the website unusable. I wouldn't mind a toggle to switch between this and normal behavior, but having that as a default would be bad for wider adoption.
If those "popular websites" are the likes of Facebook and Instagram, I don't see that as a big loss. That being said, I find that most of the Web works just fine on Tor, so it's certainly possible. Most of the issues seem related to the (known) exit IP being overused or identified as Tor.
> If those "popular websites" are the likes of Facebook and Instagram, I don't see that as a big loss.
Personally I wouldn't mind either but my point is that they probably want to cater to the average person, and not just security conscious tech savvy people, and if that's the case, then you really can't exclude FB/IG/YT and others from working properly in your browser.
> they probably want to cater to the average person, and not just security conscious tech savvy people
Why? The average person is well served by large existing players, whereas security conscious tech people are extremely underserved and often actually willing to pay.
Specific numbers aside, one possible reason is they want to increase adoption to gain user volume, in order to have an effect on the larger ecosystem.
Once you have non-trivial network effects, you could continue to influence the ecosystem (see: MSIE, Firefox in its early days, and Google Chrome). There are probably multiple paths to this. This is one.
Firefox tries to position itself as that secure and private alternative, but this is mostly marketing. For a long time, Chromium had better site isolation, and the default Firefox settings are permissive when it comes to fingerprinting. Out of the box, it seems that Brave wins here, but for now using Brave means accepting a lot of extra commercial stuff that should not be in a browser in the first place (and that increases the attack surface). I have been using the Arkenfox user.js for Firefox, but it's unclear how much good it does or if it isn't counterproductive (by making the user stand out).
Most of the web works with Tor, but to make Tor successful at the things it is intended to do you have to disable JavaScript.
This kills the internet.
Only if there is not widespread adoption.
A web browser that explicitly does its own thing regardless of web standards is the last browser in the world I would consider using.
That's interesting, what happened? They don't explain it there.
For the record, I don't have a dog in this fight. As long as it runs on Linux, I'm willing to test drive it when it's ready.
It looked to me like it was just due to recurring build issues. Lots of "swift can't import these conflicting C++ versioned libraries concurrently" and "can't use some operator due to versioning or build conflicts". Basically it sounds like trying to add swift to the project was breaking too many things, and they decided it wasn't worth it.
It's a shame; I think Swift is an underappreciated language. However, I understand their reasoning. I think if they had tried to just use Swift from the beginning it would have been too ambitious, and trying to add Swift to a fragile, massive project was probably too complex.
Looking at their integration with CMake, they definitely took the hard-mode approach to adoption.
The list of issues does not seem to stem from the choice of build tool, whether CMake, others, or the official build environments.
Why did Ladybird even attempt this with Swift, but (I presume) not with Rust? If they're going to go to the trouble of adding another language, does Rust not have a better history of C++ interop? Not to mention, Swift's GC doesn't seem great for the browser's performance.
https://x.com/awesomekling/status/1822236888188498031 https://x.com/awesomekling/status/1822239138038382684 "In the end it came down to Swift vs Rust, and Swift is strictly better in OO support and C++ interop."
> In the end it came down to Swift vs Rust, and Swift is strictly better in OO support and C++ interop
Why not D?
Why not Rust? It's popular, in wide adoption, with wide support, without the baggage of C++. What's the downside?
It is not backward compatible, the library system is immature, and there is no variety of different compilers for the language.
> Swift is strictly better in OO support and C++ interop
Fascinating.
They've shown that the idea that it is better at C++ interop is wrong.
I don't know enough to say Rust has the same OO support as Swift, but I'm pretty sure it does. (My guess as a former Swift dev: "protocol oriented programming" was a buzzy thing that would have sounded novel, but amounted to "use traits" in Rust parlance.)
EDIT: Happy to hear a reply re: why downvotes, -3 is a little wild, given current replies don't raise any issues.
Rust has straightforward support for every part of OOP other than implementation inheritance, and even implementation inheritance can be rephrased elegantly as the generic typestate pattern. (The two are effectively one and the same; if anything, generic typestate is likely more general.)
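A minimal sketch of that rephrasing, with invented names: the base behaviour lives on a generic struct, and each "subclass" is a concrete instantiation that adds its own fields and methods:

```rust
// Base "class": shared fields and behaviour, generic over an extension type.
struct Widget<E> {
    x: i32,
    y: i32,
    ext: E, // the "derived" part lives here
}

impl<E> Widget<E> {
    // Behaviour every instantiation "inherits".
    fn move_to(&mut self, x: i32, y: i32) {
        self.x = x;
        self.y = y;
    }
}

// Two "subclasses" as concrete instantiations.
struct Button { label: String }
struct Slider { value: f32 }

impl Widget<Button> {
    fn click(&self) {
        println!("clicked '{}' at ({}, {})", self.ext.label, self.x, self.y);
    }
}

impl Widget<Slider> {
    fn set(&mut self, v: f32) {
        self.ext.value = v;
    }
}

fn main() {
    let mut ok = Widget { x: 0, y: 0, ext: Button { label: "OK".into() } };
    ok.move_to(10, 20); // "inherited" behaviour
    ok.click();         // "subclass" behaviour
}
```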
> Rust has straightforward support for every part of OOP other than implementation inheritance
Except the only thing that makes OOP OOP: Message passing.
Granted, Swift only just barely supports it, and only for the sake of interop with Objective-C. Still, Swift has better OO support because of it. Rust doesn't even try.
Not that OOP is much of a goal. There is likely good reason why Smalltalk, Objective-C, and Ruby are really the only OOP languages in existence (some esoteric language nobody has ever heard of notwithstanding).
You just need to define a trait, then you can use dynamic dispatch.
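Something like this, to make it concrete (toy names):

```rust
trait Paintable {
    fn paint(&self);
}

struct Circle;
struct Square;

impl Paintable for Circle {
    fn paint(&self) { println!("circle"); }
}

impl Paintable for Square {
    fn paint(&self) { println!("square"); }
}

fn main() {
    // Heterogeneous collection, dispatched at runtime through a vtable.
    let shapes: Vec<Box<dyn Paintable>> = vec![Box::new(Circle), Box::new(Square)];
    for s in &shapes {
        s.paint();
    }
}
```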
You can, but then you don't get any of what OOP actually offers. Message passing isn't the same thing as dynamic dispatch. OOP is a very different paradigm.
I think you are both unknowingly talking past each other: my understanding is that Smalltalk-style "object-oriented programming" ("everything is a message!") is quite distinct from C++/C#/Java/Rust "object-oriented programming" ("my structs have methods!")
I think we have seen enough, since the best example of a Rust browser, Servo, has taken 14 years to reach v0.0.1.
So the approach of having a new language that requires a full rewrite (even with an LLM) is still a bad approach.
Fil-C likely can do the job without a massive rewrite, while achieving safety for C and C++.
Job done.
EDIT: The authors of Ladybird have already dismissed using Rust, and with Servo progressing at a slow pace it clearly shows that Ladybird authors do not want something like that to happen to the project.
Until just a couple years ago, Servo had been a pure research project with no goal of ever releasing a full browser (and it was abandoned by Mozilla in 2020).
Igalia had five engineers working full time who turned that science project into v0.0.1 in less than two years.
> Fil-C likely can do the job without a massive rewrite, while achieving safety for C and C++.
So long as you don't mind a 2-4x performance & memory usage cost.
Parts of Servo were integrated into Firefox. It was not a browser in itself until it was moved to a foundation of its own.
The Rust ecosystem has only just started getting into shape on the GUI toolkit front... So perhaps save your criticisms for something that wasn't born in a vacuum.
> Fil-C likely can do the job
> Job done.
Seems like you forgot a few stops in your train of thought, Speed Racer.
> Why did Ladybird even attempt this with Swift, but (I presume) not with Rust?
Probably the same reason why Rust is problematic in game development. The borrow checker and idiomatic Rust do not go well together with things that demand cyclic dependencies/references. Obviously there are ways around it but they're not very ergonomic/productive.
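To make the ergonomics point concrete, here is roughly what a parent/child back-pointer (the bread and butter of DOMs and scene graphs) costs you in safe Rust: reference counting plus weak pointers plus interior mutability. Illustrative sketch, not code from any real browser:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Node {
    name: String,
    parent: RefCell<Weak<Node>>,        // weak to avoid a retain cycle
    children: RefCell<Vec<Rc<Node>>>,   // strong downward edges
}

fn main() {
    let parent = Rc::new(Node {
        name: "body".into(),
        parent: RefCell::new(Weak::new()),
        children: RefCell::new(vec![]),
    });
    let child = Rc::new(Node {
        name: "div".into(),
        parent: RefCell::new(Rc::downgrade(&parent)),
        children: RefCell::new(vec![]),
    });
    parent.children.borrow_mut().push(Rc::clone(&child));

    // Three nested wrapper types just to say "child points back at parent".
    if let Some(p) = child.parent.borrow().upgrade() {
        println!("{} is inside {}", child.name, p.name);
    }
}
```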
Here's Andreas Kling's general thoughts on Rust:
- Excellent for short-lived programs that transform input A to output B
- Clunky for long-lived programs that maintain large complex object graphs
- Really impressive ecosystem
- Toxic community
https://x.com/awesomekling/status/1822241531501162806
I think that's fair. Funny to have a language that makes it prohibitively difficult to use most of the core computer science constructs (lists, graphs etc.).
Swift actually has excellent C++ interop [1] (compared to other languages, but, I guess, not good enough for Ladybird).
[1] https://www.swift.org/documentation/cxx-interop/
I actually looked into that recently (calling C++ from Swift), and I was surprised by the amount of limitations.
Said differently: the C++ interop did not support calling the C++ library I wanted to use, so I wrote a C wrapper.
Binding to C++ is an extremely difficult and complex problem for any language that is similarly rich and has lots of (seemingly) equivalent features. The number of subtle incompatibilities and edge cases becomes nearly endless. It's not surprising that some C++ code can't be bound properly.
Yeah, that's what I realised. But I just wanted to mention that this is not what I was expecting from "excellent" interop. I would say that C has excellent interop, in general.
I did this a long time ago as Swift calling Objective-C++ which can call C++ libs, in that case OpenCV. So it wasn't awful but did require making an ObjC++ wrapper, unless I did something wrong which is also possible.
Yes that makes sense. I would just rather make a C wrapper than an ObjC++ one, because then that C wrapper can be used with many other languages.
Andreas Kling said Rust lacks OO, which he says is useful for GUI coding.
He even made an attempt at creating his own language, Jakt, under SerenityOS, but perhaps felt that C++ (earlier with, now without, Swift) was the pragmatic choice for Ladybird.
But wasn’t Rust designed specifically for being a language for developing a rendering engine / web browser?
Rust initially started as a hobby project of a person who happened to be a Mozilla employee, and it later got sponsored by the foundation; however, it was not a language that was specifically designed with browsers in mind.
The language's largest project before it hit 1.0 was Servo. The language wasn't designed for browsers, but it certainly was informed by them and their struggles with maintaining and developing Firefox.
A lot of early Rust design was driven by Servo, an internal Mozilla project, and Firefox component prototypes.
How could browsers not be on his mind when his job was to contribute to Firefox as a dev?
Do your hobbies revolve around the benefits for your employer? I don't mean it in a snarky way either, but given that Rust was initially written in OCaml, you could see how it could go like "I like programming, I like type systems but I want something procedural over functional so let me give it a go".
It can be described as a hobby project only in the sense that his employer would probably prefer that he spend all his time working on Firefox.
Tools to do X better are often designed by people who get paid a lot to do X and worry about losing their job if they are not good enough at X.
If he were to tell me that he didn't imagine Rust's helping with browser dev when he designed Rust, then I'd believe him, but the "circumstantial" evidence points strongly in the other direction.
> Rust designed specifically for being a language for developing a rendering engine
Rust was born at Mozilla, sort of. It was created by a Mozilla employee. The first "real" project to put it into action was Servo of which parts were adopted into Firefox. While Rust may not have been developed "specifically" to create a browser, it is a fair comment.
That said, Ladybird was started as part of the SerenityOS project. That entire project was built using C++. If the original goal of Serenity was to build an operating system, C++ would have felt like a reasonable choice at the time.
By the time Ladybird was looking for "better" languages than C++, Ladybird was already a large project and was making very heavy use of traditional OOP. Rust was evaluated but rejected because it did not support OOP well. Or, at least, it did not support integration into a large, C++ based, OOP project.
Perhaps, if Ladybird had first selected a language to write a browser from scratch, they would have gone with Rust. We will never know.
We do know that Mozilla, despite being the de facto stewards of Rust at the time, and having a prototype web browser written in Rust (Servo), decided to drop both Rust and Servo. So, perhaps using Rust for browsers is not as open and shut as you imply.
I stand corrected, I was always under the impression that Rust was created specifically for Servo; TIL.
No. It was developed as a general purpose language.
I think you are conflating the development of Servo with the design and development of Rust.
As someone who was on that team for a long time, we took that into consideration, but it was never specifically for that. There was some stuff the Servo team would have liked us to have implemented that we didn’t.
Might not be the best choice for browser chrome, where an OOP paradigm for GUIs might make sense.
It will be interesting to see any further justification; I believe Rust was rejected previously because of the DOM hierarchy/OOP but not sure IIRC.
20240810 https://news.ycombinator.com/item?id=41208836 Ladybird browser to start using Swift language this fall
I so wholeheartedly agree. You are making a new web browser - akin to a new OS - and you want it open source for everybody but you choose swift not rust?
This experiment has shown that both are actually bad choices.
Oh? They tried rust?
Lots of people seem really committed to OOP. Rust is definitely a bad fit if you can't imagine writing code without classes and objects. I don't think this makes Rust a bad language for the problem. It just, perhaps, makes Rust a bad language for some programmers.
It doesn't seem uncommon for someone to generally like Rust but still want to use something OO for UI. I'm in that boat. Never liked OOP much, but it makes sense sometimes.
What OO features are you thinking of that Rust doesn't have?
Traits give you the ability to model typical GUI OO hierarchies, e.g.:
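(Sketch with invented names:)

```rust
trait View {
    fn draw(&self);
}

trait Clickable: View {
    fn on_click(&mut self);
}

struct Label { text: String }

impl View for Label {
    fn draw(&self) { println!("[{}]", self.text); }
}

struct Button { caption: String, clicks: u32 }

impl View for Button {
    fn draw(&self) { println!("({}) clicked {} times", self.caption, self.clicks); }
}

impl Clickable for Button {
    fn on_click(&mut self) { self.clicks += 1; }
}

fn main() {
    let title = Label { text: "Settings".into() };
    let mut ok = Button { caption: "OK".into(), clicks: 0 };
    ok.on_click();
    // Render a mixed list of widgets through the common supertrait.
    let views: Vec<&dyn View> = vec![&title, &ok];
    for v in &views {
        v.draw();
    }
}
```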
Implementation inheritance can be achieved with good old shared functions that take trait arguments, like the second sketch below. You can also define default methods at the trait level. This all ends up being much more precise, clear, and strongly typed than the typical OO inheritance model, while still following a similar overall structure.
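(Second sketch, again with invented names:)

```rust
trait View {
    fn draw(&self);

    // Default method at the trait level: every implementor gets it for
    // free and can override it, much like a base-class method.
    fn draw_with_border(&self) {
        println!("+--------+");
        self.draw();
        println!("+--------+");
    }
}

struct Label(String);

impl View for Label {
    fn draw(&self) { println!("[{}]", self.0); }
}

// Shared behaviour as a plain function over a trait argument: reuse
// without an inheritance hierarchy.
fn blink(view: &dyn View, times: u32) {
    for _ in 0..times {
        view.draw();
    }
}

fn main() {
    let title = Label("Hello".into());
    title.draw_with_border();
    blink(&title, 2);
}
```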
You can see real world examples of this kind of thing in the various GUI toolkits for Rust, like Iced, gpui, egui, Dioxus, etc.
Also, I believe one of the core Ladybird devs was an ex-Apple employee who worked on WebKit, which has been using Swift as well.
The Ladybird founder was one of the original KHTML devs and worked on Safari at Apple.
I’m not even sure he was at Apple when Swift came out. WebKit integration is very recent.
The ladybird developers tried Rust and Swift both and voted to adopt Swift.
Swift != GC
Not too surprising. Swift is too tied to Apple and it's not really clear what the benefit would be relative to a subset of C++ written with contemporary memory safety practices. It's a battle tested choice and pretty much every browser actually in use is written in C++.
> It's a battle tested choice and pretty much every browser actually in use is written in C++.
Every browser in use is stuck with C++ because they're in way too deep at this point, but Chromium and Firefox are both chipping away at it bit by bit and replacing it with safer alternatives where they feasibly can. Chromium even blocked JPEG-XL adoption until there was a safe implementation because they saw the reference C++ decoder as such a colossal liability.
IMO the takeaway is that although those browsers do use a ton of C++ and probably always will, their hard-won lessons have led them to wish they didn't have to, and to write a brand new browser in C++ is just asking to needlessly repeat all of the same mistakes. Chromium uses C++ because WebKit used C++ because KHTML used C++ in 1998. Today we have the benefit of hindsight.
> Chromium even blocked JPEG-XL adoption until there was a safe implementation because they saw the reference C++ decoder as such a colossal liability.
Quickly followed by several vulnerabilities in that reference library as well; good move
TBF that's less a C++ thing and more that there have been several high profile decoder vulnerabilities over the past however many years. Enough that Google created the custom language WUFFS for the express purpose of implementing secure parsers for arbitrary file formats.
It's emblematic of C++ devs' penchant for not implementing error handling on invalid input, because of the "safety net" of exceptions, and then not bothering to properly handle errors or exceptions.
It's probably okay to solve one problem at a time: first solve the "free open source browser, developed from the Web standard specs" problem in an established language (C++), and then the "reimplement all or part of it in a more suitable (safer, higher-productivity) language, yet to be devised" problem.
And Andreas Kling already proved the naysayers wrong when he showed that a new operating system and Web browser can be written entirely from scratch, the former not even using any standard libraries; so beware when you are inclined to say 'not feasible'.
Maybe? I feel like there's been lots of efforts to migrate large C++ codebases over the years, and few actually complete the migration. Heck, Google is even making Carbon to try to solve this.
Migrating any large project is going to be billions of dollars worth of labor. Language isn't a large factor in that cost; you can save a few tens of millions at most with a better language.
> a subset of C++ written with contemporary memory safety practices
What is this mythical subset of C++? Does it include use of contemporary STL features like string_view? (Don’t get me wrong — modern STL is considerably improved, but it’s not even close to being memory-safe.)
> What is this mythical subset of C++
Ladybird inherits its C++ from SerenityOS. Ladybird has an almost completely homegrown standard library including their own pointer classes and a couple of different string classes that do some interesting things with memory. But perhaps the most novel stuff are things like TRY and MUST: https://github.com/SerenityOS/serenity/blob/master/Documenta...
You see this reflected all the way back to the main function. Here is the main entry function for the entire browser:
ErrorOr<int> ladybird_main(Main::Arguments arguments).
https://github.com/LadybirdBrowser/ladybird/blob/master/UI/Q...
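For Rust readers, a rough analogy (my sketch, not Ladybird code): ErrorOr<T> plays the role of Result<T, Error>, TRY(expr) is roughly expr?, and MUST(expr) is roughly .expect():

```rust
use std::num::ParseIntError;

// Like ErrorOr<u16>: the error is part of the return type.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    let port: u16 = s.parse()?; // like TRY(...): propagate the error upward
    Ok(port)
}

// Like ErrorOr<int> ladybird_main(...): even main returns errors.
fn main() -> Result<(), ParseIntError> {
    let port = parse_port("8080")?;       // propagate, like TRY
    let fallback = "80".parse::<u16>()    // abort on failure, like MUST
        .expect("hardcoded default must parse");
    println!("port={port}, fallback={fallback}");
    Ok(())
}
```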
If Ladybird is successful, I would not be surprised to see its standard library take off with other projects. Again, it is really the SerenityOS standard library but the SerenityOS founder left the project to focus on Ladybird. So, that is where this stuff evolves now.
So interesting to hear about the internals of the browser, how it evolved from a standard library in an OS project. It's the kind of insight that's rarely documented, spoken about, or even known to anybody other than the author.
I can totally imagine how a prolific and ambitious developer would create a world of their own, essentially another language with domain-specific vocabulary and primitives. People often talk about using a "subset of C++" to make it manageable for mortals, and I think the somewhat unusual consideration of Swift was related to this desire for an ergonomic language to express and solve the needs of the project.
They probably mean safe code like this:
Memory safety isn't really much of a problem with modern C++. We have the range library now, for instance. What's nice about modern C++ is you can almost avoid most manual loops and talk at the algorithm level.
Are we talking about the same range library? The one that showed up in C++20 and is basically just iterator pairs dressed up nicely? The one where somehow the standard thought all memory safety issues with iterations could be summed up with a single “borrowed” bit? The one where instead of having a nice loop you can also build a little loop body pipeline and pass it as a parameter and have exactly the same iterator invalidation and borrowing problems that C++ has had since day 1?
Ranges are not memory safe. Sorry.
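For contrast, and purely as an aside from the Rust side of the fence: the classic iterator-invalidation footgun that ranges inherit is a compile error there. Sketch:

```rust
fn main() {
    let mut items = vec![1, 2, 3];

    // The C++ footgun: pushing while iterating may reallocate and
    // invalidate the iterators. The literal Rust equivalent is rejected:
    //
    // for x in &items {
    //     items.push(*x); // error[E0502]: cannot borrow `items` as mutable
    // }

    // You are forced to restructure so the borrows don't overlap.
    let snapshot: Vec<i32> = items.clone();
    for x in &snapshot {
        items.push(*x);
    }
    println!("{items:?}"); // [1, 2, 3, 1, 2, 3]
}
```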
And yet in practice, it's been less than a week since a major CVE in Chromium due to memory unsafety: https://chromereleases.googleblog.com/2026/02/stable-channel...
Having a checklist of "things not to do" is historically a pretty ineffective way to ensure memory safety, which is why the parent comment was asking for details. The fact that this type of thing gets dismissed as a non-issue is honestly a huge part of the problem, in my opinion; it's time to move on from pretending this is a skill issue.
Wasn't Rust developed specifically for Mozilla? Isn't Mozilla written in Rust?
Only parts of it. Servo is the engine written in Rust, some of which ended up in Firefox.
Firefox has some Rust components but it isn't written in Rust overall. Servo is written in Rust but it isn't a full browser.
Servo is slowly but steadily getting there. The thing with Servo is that it's highly modularized and some of its components are widely used by the larger Rust ecosystem, even if the whole browser engine isn't. So there's multi-pronged vested interest in developing it.
Moreover, Servo aims to be embeddable (there are some working examples already), which is where other non-Chrome/ium browsers are failing (and Firefox too). Thanks to this it has a much better chance at wider adoption and actually spawning multiple browsers.
> The thing with Servo is that it's highly modularized and some of its components are widely used by the larger Rust ecosystem, even if the whole browser engine isn't.
Alas not nearly as modularized as it could be. I think it's mainly just Stylo and WebRender (the components that got pulled into Firefox), and html5ever (the HTML parser) that are externally consumable.
Text and layout support are two things that could easily be ecosystem modules but aren't, seemingly (from my perspective) because the ambition to be modular has been lost.
I've seen recent talk about swappable js engine, so I'm unsure about the ambition being lost. I'm eyeing Blitz too (actually tried to use it in one of my projects but the deps fucked me up).
Mozilla laid off the Servo team years ago.
Servo was passed onto Linux Foundation and is still being developed, some of its components are shared with Firefox.
Yet, after all these years its browser is quite frankly pre-historic.
Servo's history is much more complicated; it was originally planned to be used for the HoloLens before the layoff. Comparing trajectories doesn't make sense; they had completely different goals and directions.
What are you talking about? It doesn't have a "browser", it has a testing shell. For a time there was an actual attempt with the Verso experiment, but it got shelved just recently. Servo is working on being embeddable at the same time as Rust GUI toolkits are maturing. Once it gets embedding stabilized, that will be the time for full-blown browser development.
> It doesn't have a "browser", it has a testing shell.
So, yes it is still pre-historic.
> Once it gets embedding stabilized, that will be the time for full-blown browser development.
Servo development began in 2012. [0] 14 years later we get a v0.0.1.
At this point, Ladybird will likely reach 1.0 faster than Servo could, and the latter is not even remotely close to being usable even in 14 years of waiting.
[0] https://en.wikipedia.org/wiki/Servo_(software)
It's by no means accurate, but the comparative histories of Ladybird vs. Servo sure has some parallels with Linux vs. GNU Hurd.
This is disingenuous. Servo is using Rust, a language which pretty much grew up together with it, as did all the components surrounding it. C++ is how old, please remind me?
You could make almost any non-C non-C++ project good by that metric.
And no, they're not being disingenuous.
> At this point, Ladybird will likely reach 1.0 faster than Servo could, and the latter is not even remotely close to being usable even in 14 years of waiting.
When Servo is done, it's going to be a beast.
It's getting hundreds of commits per week:
https://github.com/servo/servo/graphs/commit-activity
Yes and yes. Firefox is partially written in Rust.
Well, it was a terrible idea in any case unless it was for high-level-ish code only. Swift generally can't compete with C++ in raw performance (in the same way as Java - yeah, there are benchmarks where it's faster, but it basically doesn't happen in real programs).
Performance wasn't really the issue here though. The issue was that Swift's C++ interop is still half-baked and kept breaking the build. You can write a perfectly fast browser in Swift for the non-hot-path stuff, which is most of a browser. They killed it because the tooling wasn't ready, not because the language is slow.
It wasn't the reason why it was removed, but, well, we agree: it would have been a problem if used indiscriminately. I didn't do any additional research, but what I read in public was simply "Ladybird is going to use Swift".
Hurray for microbenchmarks. Anyway, every language can be abused. I can make Java run slower than Ruby. Given that it runs on microcontrollers on billions of devices, I don't think Swift is necessarily the problem in whatever case you have in mind. (And yes, I stole Oracle's Java marketing there for Swift; it is true, though.)
Swift is Apple's toy language and they cannot and will not allow it to be anything more than that.
Ah, that's too bad. Does that mean their own programming language, Jakt, is back on the table?
Ladybird split from SerenityOS a while ago, Jakt is not "their" language. And no, I don't think a niche programming language is on the table.
For their sake, I hope not. I don't think an outside-donation-financed project with this much ADD can survive in the long term.
It's frustrating to discuss. It is a wonderful case study in how not to make engineering management decisions, and yet, they've occurred over enough time, and the cause is appealing enough, that it's hard to talk about out loud in toto without sounding like a dismissive jerk.
Maybe I misunderstand, but I thought Jakt was part of SerenityOS, not Ladybird.
From what I can tell they're pretty laser focused on making a browser (even in this issue, they're abandoning Swift).
> From what I can tell they're pretty laser focused on making a browser
I agree with you. I also agree that this decision is an example of that.
SerenityOS had an "everything from scratch in one giant mono-repo" rule. It was, explicitly, a hobby project, and one rooted in enjoyment and idealism from the get-go. It was founded by a man looking for something productive to focus on instead of drugs. It was therapy. Hence the name.
Ladybird, as an independent project, was founded with the goal of being the only truly independent web browser (independent from corporate control generally and Google specifically).
They have been very focussed on that, have not had any sacred cows, and have shed A LOT of the home-grown infrastructure they inherited from being part of SerenityOS. Sometimes that saddens me a little but there is no denying that it has sped them up.
Their progress has been incredible. This comment is being written in Ladybird. I have managed GitHub projects in Ladybird. I have sent Gmail messages in Ladybird. It is not "ready" but it blows my mind how close it is.
I think Ladybird will be a "usable" browser before we enter 2027. That is just plain amazing.
Yeah, SerenityOS was "build everything from scratch for fun." Ladybird is "build where an alternative implementation is going to add value." No need to get sidetracked reinventing SSL or ffmpeg.
Jfc why do they not just use rust? Is it the c++ addiction dominating?
Rust not being OO is their main issue at this point.
Their Mac UI is a thin layer of AppKit. Even there they're currently using Objective-C++ it looks like, not Swift:
https://github.com/LadybirdBrowser/ladybird/tree/master/UI/A...
Guy writes that they won't be using Swift and the comments are filled with rust this and rust that. The stereotypes never die. One day the whole tech will just collapse because of how rotten, degenerate and toxic the rust community is.
It is a genuinely strange social phenomenon. What is it about Rust specifically that attracts these people?
Bringing up Rust as an obvious alternative is not toxic.
There's already a stereotype that Rust people will just carpet bomb any discussion with aggressively promoting Rust, people expect it to happen at this point and it just annoys people.
So I think it probably meets the threshold of "toxic", but more importantly - it's not effective. Everyone and their dog has already heard of Rust, aggressive proselytising is not going to help drive Rust adoption, it's just pissing people off
I remember mocking the switch to Swift back then.
Swift is a poorly designed language, slow to compile, visibly not on a path to become a major systems language, and they had no expert on the team.
I am glad they are cutting their losses.
Swift never felt truly open source either. That people can propose evolution points doesn’t change the fact that Apple still holds all the keys and pushes whatever priorities they need, even if they’re not a good idea (e.g. Concurrency, Swift Testing etc)
Also, funny enough, all the cross-platform work is done by small work groups, some even looking for funding... anyway.
> Swift never felt truly open source either.
Apple has always been 'transactional' when it comes to OSS - they open source things only when it serves a strategic purpose. They open-sourced Swift only because they needed the community to build an ecosystem around their platform.
Yeah, well, sure they've done some work around LLVM/Clang, WebKit, CUPS, but it's really not proportional to the size and the influence they still have.
Compare them to Google, with TensorFlow, k8s, Android (nominally), Golang, Chrome, and a long tail of other shit. Or Meta - PyTorch and the Llama model series. Or even Microsoft, which has dramatically reversed course from its "open source is a cancer" era (yeah, they were openly saying that, can you believe it?) to becoming one of the largest contributors on GitHub.
Apple, I've heard, even has the harshest restrictions about it - some teams are just not permitted to contribute to OSS in any way. Obsessively secretive, and at what price? No wonder Apple's software products are just horrendously bad, if not all the time then too often. And on their own hardware, too.
I wouldn't mind if Swift dies, I'm glad Objective-C is no longer relevant. In fact, I can't wait for Swift to die sooner.
>> some teams are just not permitted to contribute to OSS in any way
My understanding is that by default you are not allowed to contribute to open source even if it's your own project. Exceptions are made for teams whose function is to work on those open-source projects, e.g. Swift/LLVM/etc...
I talked to an apple engineer at a bar years ago and he said they aren’t allowed to work on _anything_ including side projects without getting approval first. Seemed like a total wtf moment to me.
I have never had a non wtf moment talking to an apple software engineer at a bar.
I can recall one explaining to me in the mid-2010s that the next iPhone would be literally impossible to jailbreak in any capacity, with 100% confidence.
I could not understand how someone that capable (he was truly bright) could be that certain. That is pure 90s security arrogance. The only secure computer is one powered off in a vault, and even then I am not convinced.
Multiple exploits were eventually found anyway.
We never exchanged names. That’s the only way to interact with engineers like that and talk in real terms.
This is interesting, I knew a workplace where open source contributions are fine as long as its not on company PC and network.
No, as far as I know, at Apple, this is strict - you cannot contribute to OSS, period. Not from your own equipment nor your friend's, not even during a vacation. It may cost you your job. Of course, it's not universal for every team, but on teams I know a few people - that's what I heard. Some companies just don't give a single fuck of what you want or need, or where your ideals lie.
I suspect it's not just Apple. I have "lost" so many good GitHub friends - incredible artisans and contributors; they've gotten well-paid jobs and then suddenly... not a single green dot on the wall since. That's sad. I hope they're getting paid more than enough.
Every programming job I've ever had, I've been required at certain points to make open source contributions. Granted, that was always "we have an issue with this OSS library/software we use, your task this sprint is to get that fixed".
I won't say never, but it would take an exceedingly large comp plan for me to sign paperwork forbidding me from working on hobby projects. That's pretty Orwellian. I'm not allowed to work on hobby projects on company time, but that seems fair, since I also can't spend work hours doing non-programming hobbies either.
WebKit started as a fork of the KHTML and KJS libraries from KDE.
> WebKit
Sort of an exception that proves the rule. Yes, it's great and was released for free. But at least partially that's not a strategic decision from Apple but just a requirement of the LGPLv2 license[1] under which they received it (as KHTML) originally.
And even then, it was Blink and not WebKit that ended up providing better value to the community.
[1] It does bear pointing out that lots of the new work is dual-licensed as 2-clause BSD also. Though no one is really trying to test a BSD-only WebKit derivative, as the resulting "Here's why this is not a derived work of the software's obvious ancestor" argument would be awfully dicey to try to defend. The Ship of Theseus is not a recognized legal principle, and clean rooms have historically been clean for a reason.
The fact that Swift is an Apple baby should indeed be considered a red flag. I know there are some Objective-C lovers out there but I think it is an abomination.
Apple is (was?) good at hardware design and UX, but they're pretty bad at producing software.
For what it’s worth, ObjC is not Apple’s brainchild. It just came along for the ride when they chose NEXTSTEP as the basis for Mac OS X.
I haven’t used it in a couple decades, but I do remember it fondly. I also suspect I’d hate it nowadays. Its roots are in a language that seemed revolutionary in the 80s and 90s - Smalltalk - and the melding of it with C also seemed revolutionary at the time. But the very same features that made it great then probably (just speculating - again I haven’t used it in a couple decades) aren’t so great now because a different evolutionary tree leapfrogged ahead of it. So most investment went into developing different solutions to the same problems, and ObjC, like Smalltalk, ends up being a weird anachronism that doesn’t play so nicely with modern tooling.
I've never written whole applications in ObjC but have had to dabble with it as part of Ardour (ardour.org) implementation details for macOS.
I think it's a great language! As long as you can tolerate dynamic dispatch, you really do get the best of C/C++ combined with its run-time manipulable object type system. I have no reason to use it for more code than I have to, but I never grimace if I know I'm going to have to deal with it. Method swizzling is such a neat trick!
It is, and that’s part of what I loved about it. But it’s also the kind of trick that can quickly become a source of chaos on a project with many contributors and a lot of contributor churn, like we tend to get nowadays. Because - and this was the real point of Dijkstra’s famous paper; GOTO was just the most salient concrete example at the time - control flow mechanisms tend to be inscrutable in proportion to their power.
And, much like what happened to GOTO 40 years ago, language designers have invented less powerful language features that are perfectly acceptable 90% solutions. e.g. nowadays I’d generally pick higher order functions or the strategy pattern over method swizzling because they’re more amenable to static analysis and easier to trace with typical IDE tooling.
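For instance, the strategy-pattern version of "swap this method at runtime" makes the replaceable behaviour an explicit, typed field. A sketch in Rust, with invented names:

```rust
// The replaceable behaviour is a visible, typed field, so readers and
// tooling can see exactly where it can change - unlike patching a method
// table behind everyone's back.
struct TextField {
    text: String,
    on_paste: Box<dyn Fn(&str) -> String>,
}

impl TextField {
    fn paste(&mut self, input: &str) {
        self.text = (self.on_paste)(input);
    }
}

fn main() {
    let mut field = TextField {
        text: String::new(),
        on_paste: Box::new(|s| s.to_string()), // default behaviour
    };
    field.paste("hello");
    println!("{}", field.text); // hello

    // The "swizzle": replace the behaviour, explicitly and type-checked.
    field.on_paste = Box::new(|s| s.to_uppercase());
    field.paste("hello");
    println!("{}", field.text); // HELLO
}
```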
I don't really want to defend method swizzling (it's grotesque from some entirely reasonable perspectives). However, it does work on external/3rd party code (e.g. audio plugins) even when you don't have control over their source code. I'm not sure you can pull that off with "better" approaches ...
Many of the built-in types in Objective-C have names beginning with "NS", like "NSString". The NS stands for NeXTSTEP. I always found it insane that so many years later, every iPhone on Earth was running software written in a language released in the 80s. It's definitely a weird language, but really quite pleasant once you get used to it, especially compared to other languages from the same time period. It's truly remarkable they made something with such staying power.
>It’s truly remarkable they made something with such staying power
What has had the staying power is the API because that API is for an operating system that has had that staying power. As you hint, the macOS of today is simply the evolution of NeXTSTEP (released in 1989). And iOS is just a light version of it.
But 1989 is not all that remarkable. The Linux API (POSIX) was introduced in 1988, but started in 1984 and was based on an API that emerged in the 70s. And the Windows API goes back to 1985. Apple's is the newest API of the three.
As far as languages go, the Ladybird team is abandoning Swift to stick with C++ which was released back in 1979. And of course C++ is just an evolution of C which goes back to 1972 and which almost all of Linux is still written in.
And what is Ladybird even? It is an HTML interpreter. HTML was introduced in 1993. Guess what operating system HTML and the first web browser were created on. That is right... NeXTSTEP.
In some ways ObjC’s and the NEXTSTEP API’s staying power is more impressive because they survived the failure of their relatively small patron organization. POSIX and C++ were developed at and supported by tech titans - the 1970s and 1980s equivalents of FAANG. Meanwhile back at the turn of the century we had all witnessed the demise of NeXT and many of us were anticipating the demise of Apple, and there was no particularly strong reason to believe that a union of the two would fare any better, let alone grow to become one of the A’s in FAANG.
I actually suspect that ObjC and the NeXT APIs played a big part in that success. I know they’ve fallen out of favor now, and for reasons I have to assume are good. But back in the early 2000s, the difference in how quickly I could develop a good GUI for OS X compared to what I was used to on Windows and GNOME was life changing. It attracted a bunch of developers to the platform, not just me, which spurred an accumulation of applications with noticeably better UX that, in turn, helped fuel Apple’s consumer sentiment revival.
Good take. Even back in the 1990s, OpenStep was thought to be the best way to develop a Windows app. But NeXT charged per-seat licenses, so it didn't get much use outside of Wall Street or other places where Jobs would personally show up. And of course something like iPhone is easier when they already had a UI framework and an IDE and etc.
Well, there are many more devices running on a language written in the 70s.
Assuming you mean C (C++ is an 80s child), that’s trivially true because devices with an ObjC SDK are a strict subset of devices that are running on C.
Yes, that is why I don't find it "insane" like the grandparent does, like yeah, devices run old languages because those languages work well for their intended purpose.
You should feel that C’s longevity is insane. How many languages have come and gone in the meantime? C is truly an impressive language that profoundly moved humanity forward. If that’s not insane (used colloquially) to you, then what is?
NeXT was more or less an Apple spinoff that was later acquired by Apple. Objective-C was created because using standards is contrary to the company culture. And with Swift they are painting themselves into a corner.
> Objective-C was created because using standards is contrary to the company culture
Objective-C was actually created by a company called Stepstone that wanted what they saw as the productivity benefits of Smalltalk (OOP) with the performance and portability of C. Originally, Objective-C was seen as a C "pre-compiler".
One of the companies that licensed Objective-C was NeXT. They also saw pervasive OOP as a more productive way to build GUI applications. That was the core value proposition of NeXT.
NeXT ended up basically taking over Objective-C and then it became of a core part of Apple when Apple bought NeXT to create the next-generation of macOS (the one we have now).
So, Objective-C was actually born attempting to "use standards" (C instead of Smalltalk) and really has nothing to do with Apple culture. Of course, Apple and NeXT were brought into the world by Steve Jobs.
> Objective-C was created because using standards is contrary to the company culture.
What language would you have suggested for that mission and that era? Self or Smalltalk and give up on performance on 25-MHz-class processors? C or Pascal and give up an excellent object system with dynamic dispatch?
C.
C's a great language in 1985, and a great starting point. But development of UI software is one of those areas where object oriented software really shines. What if we could get all the advantages of C as a procedural language, but graft on top an extremely lightweight object system with a spec of < 20 pages to take advantage of these new 1980s-era developments in software engineering, while keeping 100% of the maturity and performance of the C ecosystem? We could call it Objective-C.
Years ago I wrote a toy Lisp implementation in Objective-C, ignoring Apple’s standard library and implementing my own class hierarchy. At that point it was basically standard C plus Smalltalk object dispatch, and it was a very cool language for that type of project.
I haven’t used it in Apple’s ecosystem, so maybe I am way off base here. But it seems to me that it was Apple’s effort to evolve the language away from its systems roots into a more suitable applications language that caused all the ugliness.
Some refer to the “Tim Cook doctrine” as a reason for Swift’s existence. It’s not meant to be good, just to fulfill the purpose of controlling that part of their products, so they don’t have to rely on someone else’s tooling.
That doesn’t really make sense though. I thought that they hired Lattner to work on LLVM/clang so they could have a non-gpl compiler and to make whatever extensions they wanted to C/Obj-C. Remember when they added (essentially) closures to C to serve their internal purposes?
So they already got what they wanted without inventing a new language. There must be some other reason.
The Accidental Tech podcast had a long interview with Lattner about Swift in 2017 [0]. He makes it out as something that had started as a side-project / exploration thing without much of an agenda, which grew mostly because of the positive feedback the project got from other developers. He had recently left Apple back then, and supposedly left the future of Swift in other peoples' hands.
[0] https://atp.fm/205-chris-lattner-interview-transcript#swiftc...
That sounds like Microsoft's doctrine!
To be fair, work on Swift began in 2010, which would technically predate Tim Cook's accession to the position of CEO by a year or so.
I definitely agree with the first point - it's not meant to be the best.
On the second part, I think the big thing was that they needed something that would interop with Objective-C well and that's not something that any language was going to do if Apple didn't make it. Swift gave Apple something that software engineers would like a ton more than Objective-C.
I think it's also important to remember that in 2010/2014 (when swift started and when it was released), the ecosystem was a lot different. Oracle v Google was still going on and wasn't finished until 2021. So Java really wasn't on the table. Kotlin hit 1.0 in 2016 and really wasn't at a stage to be used when Apple was creating Swift. Rust was still undergoing massive changes.
And a big part of it was simply that they wanted something that would be an easy transition from Objective-C without requiring a lot of bridging or wrappers. Swift accomplished that, but it also meant that a lot of decisions around Swift were made to accommodate Apple, not things that might be generally useful to the larger community.
All languages have this to an extent. For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory. This decision probably isn't what would be the best for most of the non-Google community, but it's also something that could be reconsidered in the future since it's an implementation detail rather than a language detail.
I'm not sure any non-Apple language would have bent over backwards to accommodate Objective-C. But also, what would Apple have chosen circa 2010, when work on Swift started? Go was (and to an extent still is) "we only do things these three Googlers think is a good idea", it was basically brand-new at the time, and even today Go doesn't really have a UI framework. Kotlin hadn't been released when work started on Swift. C# was still closed source. Rust hadn't appeared yet and was still undergoing a lot of big changes through Swift's release. Python and other dynamic languages weren't going to fit the bill. There really wasn't anything that existed then which could have been used instead of Swift. Maybe D could have been used.
But also, is Swift bad? I think that some of the type inference stuff that makes compiles slow is genuinely a bad choice and I think the language could have used a little more editing, but it's pretty good. What's better that doesn't come with a garbage collector? I think Rust's borrow checker would have pissed off way too many people. I think Apple needed a language without a garbage collector for their desktop OS and it's also meant better battery life and lower RAM usage on mobile.
If you're looking for a language that doesn't have a garbage collector, what's better? Heck, what's even available? Zig is nice, but you're kinda doing manual memory management. I like Rust, but it's a much steeper learning curve than most languages. There's Nim, but its ARC-style system came 5+ years after Swift's introduction.
So even today and even without Objective-C, it's hard to see a language that would fit what Apple wants: a safe, non-GC language that doesn't require Rust-style stuff.
I think that their culture of trying to invent their own standards is generally bad, but it is even worse when it is a programming language. I believe they are painting themselves into a corner.
>For example, Go uses a non-copying GC because Google wanted it to work with their existing C++ code more easily. Copying GCs are hard to get 100% correct when you're dealing with an outside runtime that doesn't expect things to be moved around in memory.
Do you have a source for this?
C# has a copying GC, and easy interop with C has always been one of its strengths. From the perspective of the user, all you need to do is to "pin" a pointer to a GC-allocated object before you access it from C so that the collector avoids moving it.
I always thought it had more to do with making the implementation simpler during the early stages of development, with the possibility of making it a copying GC some time in the future (mentioned somewhere in the stdlib's sources, I think), but it never came to fruition because Go's non-copying GC was fast enough and a lot of code has since been written with the assumption that memory never moves. Adding a copying GC today would probably break a lot of existing code.
To add to this, whatever was to become Obj-C's successor needed to be just as or more well-suited for UI programming with AppKit/UIKit as Obj-C was. That alone narrows the list of candidates a lot.
Swift has it's problems, and I certainly wouldn't use it for anything outside of development for Apple platforms, but saying they had no experts on the team is a stretch. Most Swift leads were highly regarded members of the C++ world, even if you discount Chris Lattner.
I meant no Swift experts on the Ladybird team. Their expertise is C++. You may think the transition is easy, and it can be pretty painless at first, but true language expertise means knowing how to work around a language's flaws and adapting your patterns to its strengths. Cool for a hobby, but switching languages in the middle of a herculean work is suicide.
> switching language in the middle of an herculean work is suicide
My read is that this is the main thing that happened here.
The Ladybird team is quite pragmatic. Or, at least their founder is. I think they understood the scale of building a browser. They understood that it was a massive undertaking. They also understood the security challenge of building an application that big whose main job is processing untrusted input. So they thought that perhaps they needed a better language than C++ for these reasons. They evaluated a few options, came away thinking that Swift was the best, and announced that they expected to move towards Swift in the future.
But the other side of being pragmatic is that, as time passed, they also realized how hard it would be to change horses. And they are quite productivity-driven. They produce a YouTube video every month detailing their progress, including charts and numbers.
Making no progress on the "real" job of building a browser in order to make progress on the somewhat artificial job of moving to a new programming language just never made sense, I guess.
And the final part of being pragmatic is that, after months of not really making any progress on the switch, the founder posted this patch essentially admitting the reality and suggesting they just formalize the lack of progress and move on.
The lack of progress on Swift that is. Their progress on making a browser has been absolutely mind-blowing. This comment is being written in Ladybird.
Oh my bad, I totally misread you. I concur with the point you were actually making!
I think they meant that there were no Swift experts on the Ladybird team.
The point of Swift is not really the language, it's the standard ABI for dynamic code. The Rust folks should commit to supporting it as a kind of extern FFI/interop alongside C, at least on platforms where a standard Swift implementation exists.
What language do you recommend?
The best tool for the job is the one you know and love.
It depends. Many languages are a poor fit for certain use cases, and some are bad at everything beyond a very specific niche. Some are rather unpleasant to write any kind of substantial UI with.
By that definition you will be stuck on the first language you love.
And someone else will be stuck doing nothing because they are unsatisfied with all languages. :-)
I have not developed a deep love for a language yet. Swift has been interesting to me, but so has Zig.
Then you should consider the Lindy effect. Newer languages, counter-intuitively, have a shorter life expectancy than older ones.
Is Go the same? What is the consensus best pick right now, I wonder. Is it C#?
Best pick for what? It always depends, and there is certainly no consensus.
When does the migration to Rust start? /s
Kling was praising Charlie Kirk back in the day. Who cares what fascists build with?
I'm as interested in reading about whether Palantir spies with FP or OOP.
I hate to be the one to point this out, but really: in 10 years, how many Rust ports will face the same fate?
Some? It can happen with any language.
none
Great, some languages do not need to be hacked into a project.
Hard to feel excited for this project when it feels so handwavey and when basic technical decisions have never been nailed down.
What are other projects trying something similar that deserve attention?
What projects are trying something similar to Ladybird? Well, nobody really. But Servo is pretty close, though they are not writing their own JavaScript engine or anything.
But you should perhaps give your attention to Servo. They were founded as a project to write a modern browser in Rust. So, no hand-waving there.
No hand-waving on the Ladybird team either, in my opinion. They have very strong technical leadership. The idea that building a massive application designed to process untrusted user input at scale might need a better language than C++ seems like a pretty solid technical suggestion. Making incredible progress month after month using the language you started with seems pretty good too. And deciding, given the progress building and the lack of progress exploring the new language, that perhaps it would be best to formally abandon the idea of a language switch... well, that seems like a pretty solid decision as well.
At least, that is my view.
Oh, and I was a massive Servo fan before the Ladybird project even began. But, given how much further Ladybird has gotten than Servo has, despite being at it for less time and taking on a larger scope...well, I am giving my attention to Ladybird these days.
This comment was written in Ladybird.
> when it feels so handwavey
Carefully making decisions and then reassessing those choices later on when they prove to be problematic is the opposite of handwavey...
There's no way to say this without sounding mean: Everything Chris Lattner has done has been a "successful mess". He's obviously smart, but a horrible engineer. No one should allow him to design anything.
Edit: I explained my position better below.
People are correct that I didn't explain my position.
LLVM: Pretty much everyone who has created a programming language with it has complained about its design. gingerbill, Jon Blow, and Andrew Kelley have all complained about it. LLVM is a good idea, but that idea was executed better by Ken Thompson with his C compiler for Plan 9, and then again with his Go compiler design. Ken decided to create his own "architecture-agnostic" assembly, which is very similar to LLVM's IR idea.
Swift: I was very excited with the first release of Swift. But it ultimately did not have a very focused vision outlined for it. Because of this, it has morphed into a mess. It tries to be everything for everyone, like C++, and winds up being mediocre, and slow to compile to top it off.
Mojo doesn't exist for the public yet. I hope it turns out to be awesome, but I'm just not going to get my hopes up this time.
Yes. I've also written a compiler, and I also complained about LLVM.
LLVM is
Swift I never used, but I tried compiling it once and it was among the two slowest compilers I ever tested. The only thing nearly as bad was Kotlin, but 1) I don't actually remember which of the two was worse, and 2) Kotlin wasn't meant to be a CLI compiler; it was meant to compile in the background as a language server, so it was designed around that.

Mojo... I have things I could say... but I'll stick to this. I talked to engineers there, and I asked one how they expected any Python developers to use the planned borrow checker. The engineer said "Don't worry about it", i.e. they didn't have a plan. The nicest thing I can say is that they didn't bullshit me 100% of the time when I directly asked a question privately. That's the only nice or neutral thing I can say.
> LLVM: Pretty much everyone who has created a programming language with it has complained about its design. gingerbill, Jon Blow, and Andrew Kelley have all complained about it. LLVM is a good idea, but that idea was executed better by Ken Thompson with his C compiler for Plan 9, and then again with his Go compiler design. Ken decided to create his own "architecture-agnostic" assembly, which is very similar to LLVM's IR idea.
I suggest you ask around to see what the consensus is for which compiler is actually mature. Hint: for all its warts, nobody is writing a seriously optimized language in any of the options you listed besides LLVM.
How many languages are using LLVM as their backend vs. Go's?
As far as I know, only Go uses Go's backend, because it was specifically designed for Go. But the architecture makes it trivial for Go to cross-compile for any OS and architecture. This is something LLVM cannot do: you have to compile a new compiler for every OS and arch combo you wish to target.
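To illustrate the cross-compilation point, a minimal sketch, assuming a stock Go toolchain (the target pairs are arbitrary examples):

```go
// hello.go — cross-compiling a pure-Go program only takes two
// environment variables; no per-target compiler build is required:
//
//	GOOS=linux   GOARCH=arm64 go build hello.go
//	GOOS=windows GOARCH=amd64 go build hello.go
//	GOOS=darwin  GOARCH=arm64 go build hello.go
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// runtime.GOOS and runtime.GOARCH report the platform the binary
	// was built for.
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```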
You could imagine creating a modified Go assembler, more generic and not tied to Go's ABI, that could accomplish the same effect as LLVM. However, it'd probably be better to create a project like that from scratch, because most of Go's optimizations happen before reaching the assembler stage.
It would probably be best to have an intermediate language like QBE's and transform that into an "intermediate assembly" (IA) very similar to Go's assembly. That way the IL stage could contain nearly all the optimization passes, and the IA stage would focus on code generation that would translate to any OS/arch combo.
> As far as I know, only Go uses Go's backend, because it was specifically designed for Go. But the architecture makes it trivial for Go to cross-compile for any OS and architecture. This is something LLVM cannot do: you have to compile a new compiler for every OS and arch combo you wish to target.
I don't think that's true. Zig has a cross-compiler (that also compiles C and C++) based on LLVM. I believe LLVM (unlike gcc) is inherently a cross-compiler, and what `zig cc` adds is mostly shipping the headers and libraries for every platform.
I do not have enough knowledge to say anything bad about LLVM. As an "amateur" compiler writer, it did confuse me a bit though.
What I will say is that it seems popular to start with LLVM and then move away from it. Zig is doing that. Rust is heading in that direction, perhaps, with Cranelift. It feels like, if LLVM had completely nailed its mission, these kinds of projects would be less common.
It is also notable that the Dragonegg project to bring GCC languages to LLVM died but we have multiple projects porting Rust to GCC.
Go never advertised, designed for, or supported external usage of its backend.
You don't explain or support your position; you're just calling Lattner names. That's not helpful to me or anyone else trying to evaluate his work. Swift has millions of users, as do Mojo and Modular in general. These are not trivial accomplishments.
Mojo and Modular have millions of users?
You can answer that question yourself.
> Everything Chris Lattner has done has been a "successful mess".
I don't have an emotional reaction to this, i.e. I don't think you're being mean, but it is wrong and reductive, which people usually will concisely, and perhaps reductively, describe as "mean".
Why is it wrong?
LLVM is great.
Chris Lattner left Apple a *decade* ago, & thus has ~0 impact or responsibility on Swift interop with C++ today.
Swift is a fun language to write, hence why they shoehorned it in in the first place.
Mojo is fine, but I wouldn't really know how you or I would judge it. For me, I'm not super-opinionated on Python, and it doesn't diverge heavily from it afaik.
Not just LLVM, but Google's TPU seems to be doing fine also. Honestly it's an impressive track record.
He had 0 to do with the TPU.
I was hired at Google around the same time, but not nearly as famous :)
AFAICouldT it was a "hire first, figure out what to do later", and it ended up being Swift for TensorFlow. That went ~nowhere, and he left within 2 years.
That's fine and doesn't reflect on him; in general, that's Google for ya. At least that era of Google.
Ahh, thanks for the info. Yeah, I heard Google was a bit messy from colleagues who went there.
That's why there's nothing that comes close to LLVM and MLIR, right?
If he's such a horrible engineer then we should have lots of LLVM replacements, right?
QBE is a tiny project, but I think it illustrates a better intermediate-language design. https://c9x.me/compile/
Except the performance isn't great, and it covers far fewer platforms. It aims for 70% of LLVM's performance, but the few benchmarks I've seen show more like 30-50%.
It's a cool project and I'd consider it for a toy language but it's far from an LLVM replacement.
Many compilers, including my own, use C89.
You'll still need a C compiler...
I've never heard of hardware without one.
Avoiding interacting with LLVM as a user doesn't mean you've created something equivalent to LLVM.
And if the C compiler you use is clang then you're still literally making use of LLVM.