To be fair, `/bin/sh` is supposed to be the POSIX shell, so as long as they are compliant it really doesn't matter. The problem with `/bin/sh` being bash is that it provides extensions (even when running in POSIX mode, from what I remember), and relying on them makes those scripts technically broken.
It would be more of a comparison if dash was aiming to be a drop-in replacement for bash and not a POSIX shell.
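For illustration, a minimal sketch of the kind of breakage meant here (hypothetical script, assuming dash is /bin/sh):

#!/bin/sh
if [[ -n "$1" ]]; then   # [[ ]] is a bash extension, not POSIX; dash errors with something like "[[: not found"
  echo "got an argument"
fi

The POSIX-portable form is `[ -n "$1" ]`, which works in both shells.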
Well that's kind of my point. `ls` and `find` are "supposed to be" POSIX `ls` and `find`.
Obviously on a GNU system one can't just drop the GNU extensions, but what Rust Coreutils is fighting for is (IIUC) to a large extent the goal of becoming bug-compatible, or unspecified-behavior-compatible.
As for unspecified-behavior compatibility, to support users who relied on it (Hyrum's law): the same case could be made that lots of people relied on /bin/sh being bash.
dash may not have been aiming to be drop-in replacement for bash, but when OS vendors change /bin/sh to dash, what's the difference?
> Well that's kind of my point. `ls` and `find` are "supposed to be" POSIX `ls` and `find`.
I don't know about that; I at least wouldn't 100% agree. Since POSIX only defines the short args, it kinda makes me just not want to use them, since I like to spell out the long arg in scripts for clarity. So by default I just assume I am using GNU coreutils (since BSD coreutils have slightly different names in some places IIRC). And since there isn't such a distinction between "POSIX coreutils" and "GNU coreutils" like there is with "POSIX shell" (sh) and "bash", I wouldn't call the situations equivalent.
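For example, a small sketch of the short/long split (POSIX specifies only the short form; the long form is a GNU extension that uutils also implements):

sort -r data.txt          # POSIX-portable short option
sort --reverse data.txt   # GNU coreutils long option, clearer in scripts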
Not very. Most notably, `env -S` doesn't work on some systems. I did get bitten by one script on Ubuntu using uutils where the script expected `uname -p` to work but it just printed `unknown` (technically legal, I guess). But when I went to fix it, they already had.
Probably was a bit premature for Ubuntu to enable it by default. Looking at the graph, uutils will be fully compatible (or as close as makes no difference) in about 2 years, so I would have waited until then.
Still, I think most of the push-back is just the usual anti-Rust luddites.
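For reference, the `env -S` case mentioned above looks roughly like this (hypothetical script):

#!/usr/bin/env -S awk -f
BEGIN { print "hello" }

The kernel passes everything after the interpreter path as a single argument; -S tells env to split "awk -f" into separate words. An env without -S support chokes on that single argument and the script fails.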
What's the point of that? The C version has been around for ages, works well, and is GPL'ed. Reexamining the code carefully for a rewrite does provide the opportunity to catch some bugs, but the rewrite will most likely introduce countless new ones. It's not clear what is gained by the Rust rewrite; it's not like Rust is faster than C, so performance should be very similar, or perhaps slower.
This is going to sound like a gripe and I swear it isn't, but is there a plan for a reasonably full suite of minimal userspace tools? I don't expect util-linux itself in Rust, but something like:
- coreutils
- findutils
- libmagic and file
- tar and some compression libs
- grep, awk, sed
- the shell and all of its builtins
- something functionally equivalent to openssl or GnuTLS
- some ssh client and server
- curl
- a terminal-based editor
- man-db and texinfo
- some init system and bootloader
- pick a package manager, any package manager, and rewrite it in Rust
Barring all of that, maybe just busybox but written in Rust. That should give you roughly what you need for a non-graphical system. coreutils isn't nothing, but it's a pretty small part of the system, with much of it ending up implemented by the shell in most distros.
systemd sort of did this to certain parts and it's the worst thing that happened to Linux. Standardizing on Rust political rewrites of tar and libmagic is going to be an epic disaster and a decade of never-ending fun for Rust evangelicals.
Also worth mentioning that the ones that are user-facing are much lower priority than the machine-facing ones. No one cares about CVEs in ed, but a CVE in cp would be really bad.
I really don't understand the hate. It's not 1.0, and the majority of unsupported cases probably haven't seen use since the 1990s, especially the dumb cases where it's 100x slower when given unrealistic values like e9000000, which happens because it actually parses the value (it has bigint support) instead of clamping to i128.
Also, people complaining about its inclusion in Ubuntu releases: wait till you find out about the Linux kernel.
This has nothing to do with Rust; the kernel regularly ships rather experimental features in some releases, and there is a reason why only specific kernel versions become LTS.
Userspace doesn't get broken; you just kernel panic or experience severe slowdown due to some bug (like the one that existed on 6.18 and was fixed in 6.18.1; not sure what the exact bug was, but updating to 6.18.1 made it go away).
The fallacy here is that code is either "good code that works" or "bad code that needs to be rewritten". It doesn't work like that. "If it ain't broke don't fix it" is actually terrible advice.
> "If it ain't broke don't fix it" is actually terrible advice.
We want to give up good things to get better things.
What is the vision for improving these tools with Rust? What user benefit are they promising?
I've heard nothing but nebulous arguments about memory or security purity. There is no insight into how to make them better or faster, and in most cases they have demonstrated that they didn't even fully understand them to begin with, such as the missing locales.
There isn't even subjective value for users demanding it; it's driven by Rust developers.
From their README, they promise better error messages, extensions when relevant (example: --progress), and Linux, macOS, Windows and other platforms support. It is my understanding that the GnuWin32 coreutils were last updated in 2017 and had some subtle differences to regular GNU coreutils, so that's one set of users for whom there's clear benefit. Faster speed could be accomplished if architectural changes are viable, but for many of these tools they are IO-bound.
Yep, nuance is definitely needed. The greater issue is orthogonal: pushing out unfinished, incompatible code as production-usable, rather than the mere fact that the code is written in another language.
Rust, by itself, isn't a panacea for formal verification, but it is one leg on the footstool of formal-verification methodologies that produce safe(r/ish) software rather than subtly buggy software that's difficult to prove correct and more expensive to maintain.
"If it ain't broke don't fix it" =~= "I've never gotten into an accident until now, so airbags and seatbelts are pointless." ==> reactive methodology / failure / hubris
Ubuntu is evaluating it as the default in order to see if it’s ready. That’s something you want to do before declaring something 1.0.
If it’s not ready, they’ll roll it back.
Part of why you have to do something like this is because the test suite just isn’t comprehensive, nor should we expect it to be. Real world usage is what shakes out the long tail of bugs. You just have to have some sort of stage like this in order to get things into a good state.
No. Because the tests that don’t pass are edge cases and corners that most people wouldn’t notice. It’s arguably more important to fix bugs that impact actual usage, so it can be a valid strategy to do this even before you hit 100% coverage, to help you prioritize the remaining bugs to fix.
In other words, there may be more serious bugs not in the test suite than the ones that aren’t passing that are in the suite. And you only find that out through real usage.
> Because the tests that don’t pass are edge cases and corners that most people wouldn’t notice.
This standard may be justified when there is significant benefit. There is not in this case. And some projects have stricter standards.[1]
> In other words, there may be more serious bugs not in the test suite than the ones that aren’t passing that are in the suite. And you only find that out through real usage.
You should assume everyone understands how Ubuntu's decision would benefit this project. You should assume most Ubuntu users do not care. You replied to a comment which told you this before.[2]
Many of the utils, such as sort, aren't locale-aware. Considering that most of the world does not use English/ASCII, do you still consider that an irrelevant edge case?
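A quick illustration of what locale-aware collation changes (a sketch; assumes the en_US.UTF-8 locale is installed, and the output order is the point):

printf 'a\nB\n' | LC_ALL=C sort            # byte order: prints B, then a
printf 'a\nB\n' | LC_ALL=en_US.UTF-8 sort  # locale collation: prints a, then B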
I don't consider it irrelevant, but neither does uutils. However, it's also not something that is currently at zero. I'm not even sure that this percentage of tests is related to locale support specifically. I'm sure parity will be reached here.
So they see issues that arise from real-world usage that tests might not cover? The same Ubuntu version also bundles the latest kernel, which is not considered stable to begin with.
Also bear in mind these tools have a long tail of obscure features that aren’t used much. There’s a useful subset of coreutils that is common across GNU coreutils, busybox, and FreeBSD & macOS. Full coreutils compatibility is laudable, and of course needed if it’ll ever be a viable replacement. But most shell scripts people actually write probably already work on top of this port.
Are they bugs, though? Or just ambiguities where they want bug-for-bug compat?
Another commenter said "dumb cases where it's 100x slower when given unrealistic values like e9000000, which happens because it actually parses the value (it has bigint support) instead of clamping to i128".
Who is declaring the compatibility percentage? Given that GNU coreutils is GPL and Rust Coreutils is MIT, they must implement clean room development practices.
87.75% compatibility, as measured by a comprehensive but incomplete test suite. They want 87.75% to be read as an accurate measure, but in reality the true number must be lower.
Also, I have major issues with dumping GPL userspace utilities for an MIT-licensed suite that is known not to be feature-complete, only and literally only because it was written in Rust. This does not make sense, and it is not good for users.
The question is going to be how much of that unknown/untested percentage actually matters. I mean, there's even a question of how much the 12.25% of known test regressions actually matter.
> Also, I have major issues with dumping GPL userspace utilities for an MIT-licensed suite that is known not to be feature-complete, only and literally only because it was written in Rust. This does not make sense, and it is not good for users.
Thinking about it, I guess I have to agree. This allows Ubuntu to avoid releasing security-fixing patches if they so choose. You can't do that with GPL'ed code. It means they can send out binary security fixes and delay the source code release for as long as they like, or indefinitely. Which is pretty convenient for a company that sells extended security support packages.
> This allows Ubuntu to avoid releasing security-fixing patches if they so choose. You can't do that with GPL'ed code. It means they can send out binary security fixes and delay the source code release for as long as they like, or indefinitely
The GPL does not state that the source code for any modification must be released immediately; it doesn't even set any kind of time limit, so technically it doesn't prevent indefinite delays either.
> there's even a question of how much the 12.25% of known test regressions actually matter.
I would think that the regression tests are actually the most worthwhile targets for the new project to validate against: they represent real-world usage and logic corner cases that are evidently easy to get wrong. These are not the kind of bugs that Rust is designed to eliminate.
I agree. But I don't know whether the 12.25% of test regressions are regression tests or unit tests from GNU coreutils.
I believe Ubuntu simply copied and transposed a bunch of tests from GNU coreutils, and that's where these ultimately came from. That doesn't really mean that all of these tests arose from regressions (for sure some probably did).
To be clear, Ubuntu did nothing. This is a third party implementation that Ubuntu decided to ship in their OS.
To me moving from MIT to GPL is a downgrade regardless of features. Not everything is about features. Some people also care that their work can't be re-utilized as a tool by Big Corp in their march forward to subjugate their users.
You meant moving to MIT from GPL?
Yes. Sorry, my mistake.
Part of this project has been writing a lot of new tests, which are run on both GNU coreutils and rust coreutils. Some of these tests have found bugs in the original GNU coreutils.
This does not make sense to you because you are looking at it from a technological standpoint. The reason to rewrite coreutils (or sudo) in Rust is not technological, as there is no merit. Coreutils are titanium, rock-stable tools that no one asked to rewrite.
And this is precisely why the worst Rust evangelists aim to rewrite it: virtue signaling with no suffering of the opposing party is not good enough.
Also, I don't really get why coreutils would be a worthwhile candidate for a Rust rewrite. A rewrite of curl/wget or sudo I can understand, but what's the security benefit to improved memory safety for CLI tools that are only ever run with same-user privileges? Even if there's an exploitable bug there, there's no avenue for privilege escalation.
> CLI tools that are only ever run with same-user privileges?
You don't think these are ever run with sudo/runas/pkexec/run0 or otherwise invoked by a program running as root?
That said, I do think things like sudo, ssh, gpg, maybe systemd, HTTP servers like nginx and Apache, etc. are more valuable to replace with tools written in Rust (or, more generally, a "memory safe language"). But that doesn't mean rewriting coreutils isn't valuable.
Because the reasons to replace coreutils with the Rust rewrite are not technological, they are political. And thus aiming to rewrite something very core and stable is the correct approach to enrage the opposite party.
> comprehensive, but incomplete
????
The gnu project is more than welcome to make its own moves away from C.
The GNU project can't go to the men's room without a thumbs up from Stallman, who is so disconnected from how real people do their computing that by his own statement he hasn't written any material amount of code in almost 20 years and can't even figure out how to update his own website, instead relying on volunteers to do so.
Stallman comes from the era when C was good enough, because computing was not a hostile environment like it is today.
GNU is never going to "rewrite it in rust" as long as he's living, and probably for several years afterwards.
In other words, it's a social problem not a technical one.
In fact, it is not a problem at all.
Let new generations of Free Software orgs come along and supplant GNU with a GBIR (GNU But In Rust), but don't insist on existing, established things that are perfectly good for who and what they are to change into whatever you prefer at any given moment.
If you want 100% compatibility:
sudo apt purge --autoremove --allow-remove-essential coreutils-from-uutils # reinstalls gnu coreutils
sudo update-alternatives --config sudo # can switch back to regular sudo from sudo-rs
(for Ubuntu 25.10)
why isn't that the default?
why ship a broken implementation prematurely?
Because Canonical has completely lost the plot and become infected by Rust zealots.
If you want 100% compatibility with GNU coreutils plus memory safety, just compile coreutils with Fil-C. 100% compatibility with zero rewrite.
In the short term, yeah, 4x-slower coreutils is probably worth it for memory-safe code. In the long run, I’m glad rust coreutils is slowly working through these compatibility issues so we don’t have to pay for memory safety. Yeah, it'll have new bugs. And right now it’s slightly incompatible. But give it a few more years to cook. I’m sure they’ll get there.
As an aside, I find it weird how much negativity rewrites like this get. If someone decided to make a new web browser, C compiler, or kernel, people would be congratulating them. I really don’t understand the conservatism when it comes to Linux. Is the current implementation perfect? Should it be preserved in amber? The GNU runtime seems like a messy, badly specified hairball of hacky, inconsistent scripts to me. Some guys in a certain room in the 70s and 80s wrote some C programs. And now every bad idea they had lives in perpetuity in my /usr/bin directory? In the decades since, these tools have sprouted hundreds of weird features that almost nobody uses. And now what, people care what language it’s all written in? This code must never be changed?? Who cares.
GNU utils are battle-tested, well reviewed, and STABLE. That's really what I want in an OS: stability. Rust solves only one class of security issues; it cannot solve logical errors, of which there will be many in a new software project.
I just don't see what there is to gain in suffering through years of instability, waiting for a userspace suite to mature and reach feature parity, when we have a well-understood and safe tool set now.
Maybe in five years, when coreutils is complete, I'd be okay with Ubuntu replacing user land with it. But we're not there, and it's a problem we shouldn't have to tolerate.
Also, I can't stand that we're leaving GPL code behind for MIT.
Luckily, the existence of uutils doesn’t change the fact that GNU coreutils exists. In fact, it’s helped improve the stability of the GNU coreutils by clarifying intended behavior and adding test cases. So if you prefer them, you should stick to them. Nobody is taking anything from you.
So I guess to properly clarify, I absolutely do not mind that someone wants to build coreutils in Rust. I don't have a problem with Rust Coreutils existing.
The problem, and the real issue I have, is that this project is being used as the default in major Linux distros. Eager adoption of this project, and making it the production target, does take things away from me. The interface has changed; stability is affected. Correctness is now measured against this incomplete implementation first, not against the known-correct and stable GNU coreutils.
That’s not what is happening. One distro is kicking the tires on using this by default. The purpose is exactly because the GNU versions are being treated as the proper versions. Divergences from them are being fixed, so that this new version follows those. You can only do that by actually trying them out, because it’s impossible for the test suite to cover every behavior.
> That’s not what is happening. One distro is kicking the tires on using this by default.
Many people call Ubuntu flavors distributions. This includes Ubuntu developers.
Ubuntu made it the default. The tire-kicking analogy was incorrect.
> The purpose is exactly because the GNU versions are being treated as the proper versions. Divergences from them are being fixed, so that this new version follows those. You can only do that by actually trying them out, because it’s impossible for the test suite to cover every behavior.
You should assume everyone understands how Ubuntu's decision would benefit this project. You should assume most Ubuntu users do not care.
> Many people call Ubuntu flavors distributions. This includes Ubuntu developers.
You seem mad that a Linux distribution (Ubuntu) is trying this software out. Why do you care so much? Do you expect some of the programs you use to break? Have they?
If you don’t want to use uutils, I have good news. You can opt out. Or use Ubuntu LTS. Or use a different distribution entirely. I suspect you’re mad for a different reason. If all the tests passed, would you still be mad? Do you feel a similar way about projects like Alpine Linux, which ship code built on musl? All the same compatibility arguments apply there. Musl is also not 100% compatible with glibc. How about LLVM? Do you wish we had fewer web browsers?
Or maybe, is it a Rust thing in particular? Like, if this rewrite were in C, C++, or Go, would you feel the same way? Are you worried more components of Linux will be ported to Rust? (And if so, why?)
Ultimately the strength (and weakness) of Linux is that you’re not locked in to anything. I don’t understand how the existence of this software could make your life worse. If anything, it sounds like it might be helping to clarify your stance on OS stability. If you want to make a principled stand there, there are plenty of stable Linux distributions that will mirror your values (e.g. Debian, Ubuntu LTS, etc.). Or you can just opt out of this experiment.
Given all of that, the tone I’m inferring from your comments seems disproportionate. What’s going on? Or am I misreading you?
You thought I was angry? What would you call Linus Torvalds when someone broke user space?[1]
It seems you confused blunt responses to repetitive, condescending, specious, or false statements with anger at Canonical.
I made no objection to any software existing.
I like Rust. It was unfortunate this experiment supported stereotypes of Rust fanatics promoting Rust without respect for stability.
I reject the view that users should have to wait two years for bug fixes and features, silently accept all experiments, or silently switch to a distribution that inevitably has less third-party support and other issues.
The opt-out process I saw required --allow-remove-essential. It would be irresponsible to recommend this.
A more responsible way to conduct this experiment would have been opt-in first. Then phased. Then opt-out for everyone. And waiting until all tests passed would have been better, of course.
[1] https://lkml.org/lkml/2012/12/23/75
It is expressly described as an experiment. Making it the default does not preclude it being an experiment. It’s how you get broad enough usage to see if it’s ready. If it isn’t by the time for LTS, then it’ll be unmade as the default. That’s what an experiment is.
> It is expressly described as an experiment. Making it the default does not preclude it being an experiment.
Calling something an experiment does not make it exempt from criticism.
> It’s how you get broad enough usage to see if it’s ready.
My understanding was that it was known not to be 100% compatible. And what did I say you should assume?
> If it isn’t by the time for LTS, then it’ll be unmade as the default.
People use non-LTS releases for non-experimental purposes.
Of course it’s not exempt from criticism. But suggesting something is permanent and final when it expressly is not is a poor criticism.
All software has bugs. Plus, not every bug is in the test suite. There are open bugs in all of the software shipped by every distro. Software can be ready for use even if there are known bugs in corner cases. Regular coreutils has open bugs as well.
> But suggesting something is permanent and final when it expressly is not is a poor criticism.
No one did this.
> All software has bugs. Plus, not every bug is in the test suite. There are open bugs in all of the software shipped by every distro. Software can be ready for use even if there are known bugs in corner cases. Regular coreutils has open bugs as well.
Stop speaking as if other people know nothing of software development. GNU does not break compatibility knowingly and with no user benefit.
This project is not knowingly breaking compatibility. It expressly considers the GNU behavior to be the correct one.
Canonical broke compatibility knowingly and with no user benefit when they made these utilities the default in Ubuntu 25.10. The point was that saying the GNU utilities have bugs was specious.
Ubuntu is using uutils experimentally in a non-LTS release. This kind of widespread testing will speed up the development process. It won't be long before it catches up with and surpasses GNU coreutils. Then what? You want people to not use it? Why?
One of the major problems with C (one that, like a lot of C's issues, Rust just doesn't have) is that it's getting more difficult to find young, eager programmers willing to maintain a C codebase. The hassle of C outweighs the rewards, especially when Rust exists. So, ceteris paribus, development on the Rust version will outpace the C version, and you'll get more and smarter eyes on the codebase.
Best to put the C code out to pasture, i.e. in maintenance mode only, with a deprecation plan in place.
It sounds like your beef is with Ubuntu for shipping some of this code. Not with the project for existing and fixing all the compatibility issues that you seem to care a great deal about.
If you want a purely gnu userland with gpl code and strong stability guarantees, Ubuntu is almost certainly the wrong distribution for you. Plenty of Linux distributions are far more stable, and won’t replace coreutils, maybe forever. (And if this is aiming to be bug for bug compatible, they won’t ever have to.)
As for the GPL, this isn’t new. There have been BSD/MIT-licensed alternatives to coreutils for decades; you know, in FreeBSD and friends. It’s only the aim of 100% Linux compatibility that’s new. And, I guess, shipping it in Linux. But let’s be real, the GPLv3 is a pretty toxic license. By trying so hard to preserve user freedom, it becomes a new tyranny for developers. If you build a web-based startup today hosted on top of Linux, you might be in breach of the GPL. What a waste of everyone’s time. The point of open source, to me, is that nobody can tell me what I’m allowed to do with my computer. And that includes RMS.
well, sudo-rs had a few privilege escalation CVEs recently. So there has been some recent evidence in favor of the stability argument. I think it’s worthwhile to RiiR in general but I’ll be waiting a few more years for things to mature.
> well, sudo-rs had a few privilege escalation CVEs recently. So there has been some recent evidence in favor of the stability argument.
It would probably be a lot stronger an argument if sudo hadn’t also had a few privilege escalation CVEs recently.
> 4x slower coreutils
I doubt this is true in practice. The majority of coreutils spend the majority of their time waiting for the results of IO/syscalls. (The exception would probably be the hashing utilities like md5sum.)
I tried md5sum and sha256sum and there was exactly zero difference in runtime (the Fil-C version of sha256sum was consistently faster, in fact...)
Why would it be faster? That seems suspicious to me.
Is all the code being compiled with the same flags? Shasum probably benefits a lot from intrinsics that are only available on newer CPU targets.
Those are also highly algorithmic tools. Probably few code paths and a lot of vector operations where competing compilers may not have much room to differentiate.
It really depends on how shasum is implemented. If it's implemented in assembly, it'll perform the same no matter how it's compiled. But if it's written in C, the compiler has a lot of latitude to vectorize based on target CPU features, and not just SSE2 and AVX: even popcnt isn't available in the baseline x86_64 target for LLVM.
The Fil-C compiler is a fork of LLVM. There's no way all that garbage collection code would make Fil-C faster. So if it's faster, it's probably using different target flags. And in that case, it's not a fair benchmark comparison.
I'm sure there's a difference in the binary; for a real comparison you would need to compile the same coreutils version with the same options.
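A sketch of what a fairer comparison could look like, with hypothetical paths, an assumed coreutils version, and `filcc` standing in for whatever the Fil-C clang driver is actually called on your system:

tar xf coreutils-9.5.tar.xz && cd coreutils-9.5
./configure CC=clang CFLAGS='-O2 -march=x86-64-v3' && make    # baseline build
cp src/sha256sum ../sha256sum-clang
make distclean
./configure CC=filcc CFLAGS='-O2 -march=x86-64-v3' && make    # same source, same flags
cp src/sha256sum ../sha256sum-filc
cd .. && hyperfine './sha256sum-clang big.img' './sha256sum-filc big.img'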
I just think the assertion that "compute-heavy" tools like sha256sum would be especially affected by Fil-C is not true; and if it were true, given the "baseline slowdown" of 4x, surely it would show up even in this sloppy test.
Is Fil-C free of runtime overhead?
No, it's GC-like. Up to 4x slowdown, IIRC.
For porting C to Rust, there are tools like 3C (Checked C), c2rust, Crown (ownership analysis), RustMap, c2saferrust (LLM-based), and Laertes.
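As a rough sketch of how one of these is typically driven (hedged; check the c2rust documentation for the current flags):

bear -- make                             # record a compile_commands.json for the C build
c2rust transpile compile_commands.json   # emit unsafe-but-equivalent .rs files to refactor from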
The run time overhead is 2-4x.
Isn't this putting it in Java territory?
Yeah, pretty much. If someone is OK with a 3-4x slower program with higher memory consumption, that’s great if it saves you development time. But I can’t see someone starting a new project in Fil-C when more performant and ergonomic options exist: Java, C#, Go, Swift, Rust. Even JavaScript.
The problem is in the unknowns. The unknowns in a C implementation mean arbitrary code execution and all kinds of nasty. The unknowns in a Rust implementation mean a crash/denial of service.
I know which I prefer.
> The "unknowns" in a C implementation means arbitrary code execution and all kinds of nasty.
In Fil-C?
There is no "arbitrary code execution and all kinds of nasty" in the Fil-C version, and it benefits from the decades spent fixing all the logic bugs, races, and environment-variable messes in coreutils.
Meanwhile, the Rust version of course is vulnerable to all of those: https://ubuntu.com/security/notices/USN-7867-1
Thanks for fixing imaginary memory safety issues in /bin/cat...
It's not only about memory safety.
Correct. It's almost not at all about memory safety.
No, it's all about corporate greed.
Don't be ridiculous.
Not ridiculous at all.
Rust coreutils are MIT licensed.
Canonical sells extended support and security packages.
It’s completely reasonable to think they will want to monetize additional patches on coreutils as part of their commercial offerings.
Honest question. Why is a Rust rewrite of coreutils getting traction? Nobody thought it’s a good idea to rewrite coreutils with Go, Java, Python, C++, etc etc. It can’t just be memory safety.
> uutils coreutils aims to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.
> Our key objectives include:
> Matching GNU's output (stdout and error code) exactly
> Better error messages
> Providing comprehensive internationalization support (UTF-8)
> Improved performances
> Extensions when relevant (example: --progress)
> uutils aims to work on as many platforms as possible, to be able to use the same utils on Linux, macOS, Windows and other platforms. This ensures, for example, that scripts can be easily transferred between platforms.
Experimenting with better error messages, serving as a test-bed for extensions that might not be tried or accepted in GNU coreutils (for technical, social, or other reasons), and being able to use the same tools on all major OSes are very reasonable ways to diverge from GNU's project and "justify" its existence.
The project was originally just a learning project for someone who wanted to learn Rust by reimplementing tools that were small, not a moving target, and useful. From there, it grew as it found an audience of developers interested in productionizing it. There have been coreutils ports for the languages you mention (go-coreutils, pycoreutils, coreutils-cpp, etc.); they just didn't (yet?) hit critical mass. It is a harder sell for GC-based projects in this case because they are unlikely to ever be included as part of a distribution's base. Let's not forget that coreutils themselves are a rewrite of previously existing tools to begin with.
It’s a cool project, but does anyone else find the choice of MIT kinda icky/disrespectful? Like, maintainers have put decades of work into the GNU coreutils under the GPL and all that entails, and then some people decide to rewrite it and just say “nah”.
I know they claim it’s a clean implementation, but c’mon, there’s no way they aren’t peeking at the existing coreutils source.
Do you think it was disrespectful for the GNU project to reimplement the original tools in a different license than the original authors had written them in?
AFAIK the GNU authors didn’t have access to the original source code because it was proprietary. I don’t know why this matters, but it feels different in a purely “feels” way.
The GNU authors almost certainly did have access to the AT&T UNIX source code, and they had to be reminded not to refer to UNIX source code when writing GNU replacements. GNU made intentional efforts to design their programs along completely different lines to avoid similarity to the originals. This is described at https://www.gnu.org/prep/standards/standards.html#Reading-No... under "Referring to Proprietary Programs".
Gotcha! I don’t, but that’s why I asked, I wasn’t sure if this was about any specific license or what.
I’m also curious about this: does the fact that it’s in a different language make any difference here? Like, I could maybe also see what you’re saying if these were also in C; but being in Rust, it’s not like they can literally copy the code, regardless. I know you’re talking about feelings and not hard-and-fast rules, but do you think that plays into any of the feelings at all?
Not really. I love Rust. It’s all I want to write these days.
My feelings stem from what I perceive as the degradation of the old school hacker ethos into a more corporate friendly environment. Especially during this time when the bigger companies are salivating at the mouth to replace SWEs with AI at the same time encouraging us to pick friendly licenses so they can take advantage of our volunteer work…
I didn’t mean that it was about Rust specifically, just that if a language change factored in.
Anyway, thanks for replying. It’s always interesting to hear how people think. I personally feel differently, but I’m sure it’s nothing you haven’t heard before. :)
Yes and no; due to the antitrust constraints on AT&T, almost everyone had copies of UNIX source code, especially if you were near any universities (why does BSD still honor UC Berkeley on bootup, do you think?). Easy as pie to get, but extremely difficult to legally use without a license.
The question about whether Linux and GNU copied from the proprietary originals caused the famous SCO lawsuits. Even though this was proven false, there’s very little chance the originals weren’t used as reference in GNU.
Were they also open source?
Legally, no, but plenty of people had copies of the source.
Their point, I thought, was that creating the first open-source version was a good reason not to use the original license.
No. The original UNIX utilities were under proprietary licenses for an extremely long time before they eventually broke free under BSD. The BSD tools are descendants of the originals and are also the versions used by macOS.
BSD wasn’t under an open license when GNU got started, so GNU reimplemented the proprietary UNIX utilities with their own enhancements and their own GPL license.
As such, complaining about the license is rich, considering GNU itself did the same thing the first time around. And to this day, HN complaining about macOS’s utilities is also rich, considering they are actually more standard and authentically UNIX than GNU’s.
macOS's utilities are really just FreeBSD's with some patches.
https://github.com/apple-oss-distributions/text_cmds
https://github.com/apple-oss-distributions/system_cmds
https://github.com/apple-oss-distributions/file_cmds
https://github.com/apple-oss-distributions/adv_cmds
https://github.com/apple-oss-distributions/shell_cmds
https://github.com/apple-oss-distributions/misc_cmds
That is because Apple took the entire FreeBSD userspace as part of Mac OS X.
Only if you think it’s also icky for OpenJDK to be a clean-room port of Java. I’m sure Oracle would love to force Android and everyone else using Java to pay licensing fees in perpetuity.
OpenJDK isn't a clean-room port; it was created from the original source code of Sun's JDK.
It’s uncharitable to assume they’re lying. And if they aren’t, it’s perfectly ethical and legal to reimplement an existing program.
While I <3 Rust, rewriting major things in Rust and then taking the additional step of releasing incomplete, incompatible code as "production", as Canonical has done without day-1 full compatibility, is inherently unethical, problematic, and counterproductive.
Related: because of Red Hat/Fedora's decision, the CentOS Stream, Alma, and Rocky 9 & 10 Docker images use a statically linked "multi-call" variant of coreutils that is also problematic in real-world usage. This can be fixed with the following:
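(The comment doesn't spell the actual fix out; presumably it's swapping the single-binary package for the full one, something like this, assuming the standard RHEL packaging where coreutils-single conflicts with the full coreutils package:)

    # Inside the container: swap the multi-call binary for the full per-tool package
    dnf swap coreutils-single coreutils
    # Or, where "dnf swap" isn't available, resolve the conflict explicitly:
    dnf install --allowerasing coreutils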
> and then taking the additional step of releasing incomplete, incompatible code as "production"
Nobody runs Ubuntu's six-month releases in production. Like Fedora, it's cutting-edge in the literal sense of the term, and if you deploy Ubuntu anywhere in the business world, you're on the LTS release.
That the Rust coreutils aren't at full parity yet is explicitly mentioned in the release notes for 25.10. If you're installing a distribution with a lifespan of half a year, it pretty much goes without saying that you're a beta tester for the next LTS release. Like, even if everything were pristine and stable, it makes no sense to use an operating system with six months of support in production.
True that; however, coreutils-single is already rolled out in Docker images for RH-derived distros. It's not the same thing, but it has breaking incompatibilities that don't appear when the regular coreutils is installed. Small changes done unwisely lead to big breakage, in ways arrogant distro maintainers shove down users' throats. Canonical makes reckless changes regularly.
For context, how "compatible" is BSD utils with either one?
IOW: how much does it matter?
These utilities' purpose is to replace GNU utilities. BSD utilities' purpose is not.
Seems like there's more uproar over this than over changing the /bin/sh symlink to dash, and that difference seems much bigger.
To be fair, `/bin/sh` is supposed to be the POSIX shell, so as long as they are compliant it really doesn't matter. The problem with `/bin/sh` being bash is that it provides extensions (even when running in POSIX mode, from what I remember), and relying on them makes those scripts technically broken.
It would be more of a comparison if dash were aiming to be a drop-in replacement for bash rather than a POSIX shell.
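A minimal illustration of the breakage (hypothetical script, but a real failure mode): `[[ ]]` is a bash extension, not POSIX, so a `#!/bin/sh` script that uses it works while /bin/sh is bash and dies under dash.

    #!/bin/sh
    # Works when /bin/sh is bash; under dash this fails with
    # "[[: not found", because [[ ]] is a bash extension, not POSIX.
    if [[ "$1" == foo* ]]; then
        echo "argument starts with foo"
    fi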
Well that's kind of my point. `ls` and `find` are "supposed to be" POSIX `ls` and `find`.
Obviously, on a GNU system one can't just drop the GNU extensions, but what Rust coreutils is wrestling with is (IIUC) to a large extent the goal of becoming bug-compatible, or unspecified-behavior-compatible.
As for unspecified-behavior compatibility, to support users who relied on it (Hyrum's law): the same case could be made that lots of people relied on /bin/sh being bash.
dash may not have been aiming to be drop-in replacement for bash, but when OS vendors change /bin/sh to dash, what's the difference?
> Well that's kind of my point. `ls` and `find` are "supposed to be" POSIX `ls` and `find`.
I don't know about that; at least, I wouldn't 100% agree. Mostly, since POSIX only defines the short args, it kinda makes me just not want to use them, since I like to spell out the long arg in scripts for clarity. So by default I just assume I am using GNU coreutils (since BSD coreutils have slightly different names in some places, IIRC). And since there isn't such a distinction between "POSIX coreutils" and "GNU coreutils" like there is with "POSIX shell" (sh) and "bash", I wouldn't call the situations equivalent.
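For example (illustrative; both lines do the same thing, but only the first is POSIX-portable):

    # POSIX short options: portable across GNU, BSD and busybox
    rm -rf build/
    # GNU long options: self-documenting in scripts, but a GNU extension
    rm --recursive --force build/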
Ubuntu made /bin/sh dash in 2006 according to Wikipedia. Many fewer people used Ubuntu then.
Not very. Most notably, `env -S` doesn't work on some systems. I did get bitten by one script on Ubuntu using uutils where the script expected `uname -p` to work but it just prints `unknown` (technically legal, I guess). But by the time I went to fix it, they already had.
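For anyone who hasn't hit it: `env -S` is the GNU extension that splits a single shebang argument into several, which matters because Linux passes everything after the interpreter path as one argument. A shebang like this (hypothetical script) is where the incompatibility typically bites:

    #!/usr/bin/env -S awk -f
    # The kernel hands env the single argument "-S awk -f"; an env with -S
    # support splits it into "awk" and "-f", one without it fails to exec.
    BEGIN { print "hello" }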
It probably was a bit premature for Ubuntu to enable it by default. Looking at the graph, uutils will be fully compatible (or as close as makes no difference) in about two years, so I would have waited until then.
Still, I think most of the push-back is just the usual anti-Rust luddites.
> expected `uname -p` to work but it just prints `unknown`
I just tried that on three very different Linux systems of mine, and all print `unknown`.
What's the point of that? The C version has been around for ages, works well, and is GPL'ed. Re-examining the code carefully for a rewrite does provide the opportunity to catch some bugs, but the rewrite will most likely introduce countless new ones. It's not clear what is gained by the Rust rewrite; it's not like Rust is faster than C, so performance should be very similar or perhaps slower.
The point is virtue signaling. There is no technical merit to it.
The point is that they get to rewrite it in their favourite language.
This is going to sound like a gripe and I swear it isn't, but is there a plan for a reasonably full suite of minimal userspace tools? I don't expect util-linux itself in Rust, but something like:
- coreutils
- findutils
- libmagic and file
- tar and some compression libs
- grep, awk, sed
- the shell and all of its builtins
- something functionally equivalent to openssl or GnuTLS
- some ssh client and server
- curl
- a terminal-based editor
- man-db and texinfo
- some init system and bootloader
- pick a package manager, any package manager, and rewrite it in Rust
Short of all that, maybe just busybox but written in Rust. That should give you roughly what you need for a non-graphical system. coreutils isn't nothing, but it's a pretty small part of the system, with much of it ending up implemented by the shell in most distros.
systemd sort of did this to certain parts and it's the worst thing that happened to Linux. Standardizing on Rust political rewrites of tar and libmagic is going to be an epic disaster and a decade of never-ending fun for Rust evangelicals.
I’m not aware of a unified plan to do this. uutils was started as a fun project, not some plan to build a Rust-only userland.
Also worth mentioning that the user-facing tools are much lower priority than the machine-facing ones. No one cares about CVEs in ed, but a CVE in cp would be really bad.
There are already many of these. fd, httpie, rg, and so on.
There is of course rustls, which already exists and is quite mature.
For a package manager, see for example AerynOS (formerly Serpent OS).
There are lots of Rust editors, terminal-based and otherwise.
And of course the complete COSMIC desktop is written in Rust, so you don't have to go 'non-graphical' at all.
So AerynOS, with uutils and COSMIC, is very close, I would say.
Most (all?) of these projects aren’t aiming for drop-in compatibility.
I didn't realize this is what the person was wanting.
I could be wrong!
I really don't understand the hate. It's not 1.0 yet, and the majority of unsupported cases probably haven't seen use since the 1990s, especially the dumb cases where it's 100x slower when given unrealistic values, like parsing e9000000: that's actually because it attempts to parse the number (it has bigint support) instead of clamping to i128.
Also, to the people complaining about its inclusion in Ubuntu releases: wait till you find out about the Linux kernel.
The hateable part is the license. I love Rust but this is a huge step backwards.
And that's what I was looking for: finally, a real answer.
> wait till you find out about the Linux kernel
Check the extent to which this is true. Also, are we rewriting good kernel code that works?
This has nothing to do with Rust; the kernel regularly ships rather experimental features in some of its releases. There is a reason why only specific kernel versions become LTS.
> the kernel regularly ships rather experimental features in some of its releases
Does the Linux kernel ship with rather experimental features as default? When was the rule not to break user space revoked?
Userspace doesn't get broken; you just get a kernel panic or experience a severe slowdown due to some bug (like the one that existed in 6.18 and was fixed in 6.18.1; not sure what the exact bug was, but updating to 6.18.1 made it go away).
> Does the Linux kernel ship with rather experimental features as default?
> good kernel code that works
The fallacy here is that code is either "good code that works" or "bad code that needs to be rewritten". It doesn't work like that. "If it ain't broke, don't fix it" is actually terrible advice.
> "If it ain't broke, don't fix it" is actually terrible advice.
We want to give up good things to get better things.
What is the vision for improving these tools with Rust? What user benefit are they promising?
I’ve heard nothing but nebulous arguments about memory or security purity. There is no insight into how to make them better or faster, and in most cases they have demonstrated they didn’t even fully understand the tools to begin with, such as the missing locale support.
There isn’t even subjective value for users demanding it; it’s driven by Rust developers.
> What user benefit are they promising?
From their README, they promise better error messages, extensions where relevant (example: --progress), and support for Linux, macOS, Windows, and other platforms. It is my understanding that the GnuWin32 coreutils were last updated in 2017 and had some subtle differences from regular GNU coreutils, so that's one set of users for whom there's a clear benefit. Faster speed could be achieved if architectural changes prove viable, but many of these tools are IO-bound.
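Taking the --progress example from their README at face value, the extension looks like this in use (hypothetical invocation; check `cp --help` on an actual uutils install):

    # uutils extension: display a progress bar during a long copy
    cp --progress large-disk.img /mnt/backup/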
Yep, nuance is definitely needed. The greater issue is orthogonal: it's the pushing out of unfinished, incompatible code as production-usable, not the fact that the code is written in another language.
Rust, by itself, isn't a panacea that adds formal verification, but it is one leg on the footstool of formal-verification methodologies for producing safe(r/ish) software rather than subtly buggy software that's difficult to prove correct and more expensive to maintain.
"If it ain't broke, don't fix it" =~= "I've never gotten into an accident until now, so airbags and seatbelts are pointless." ==> reactive methodology / failure / hubris
The seatbelt analogy makes no sense. The new tools have entirely new vulnerabilities and problems.
I mean, why is Ubuntu using it as the default when it isn't 1.0?
Ubuntu is evaluating it as the default in order to see if it’s ready. That’s something you want to do before declaring something 1.0.
If it’s not ready, they’ll roll it back.
Part of why you have to do something like this is because the test suite just isn’t comprehensive, nor should we expect it to be. Real world usage is what shakes out the long tail of bugs. You just have to have some sort of stage like this in order to get things into a good state.
> they’ll roll it back.
They will absolutely not roll it back, no matter how broken the tools are.
The reasons to switch from coreutils to the Rust rewrite are purely political.
> Ubuntu is evaluating it as the default in order to see if it’s ready.
Did 100% of tests pass when Ubuntu made this decision? My understanding was no.
No. Because the tests that don’t pass are edge cases and corners that most people wouldn’t notice. It’s arguably more important to fix bugs that impact actual usage, so it can be a valid strategy to do this even before you hit 100% coverage, to help you prioritize the remaining bugs to fix.
In other words, there may be more serious bugs not in the test suite than the ones that aren’t passing that are in the suite. And you only find that out through real usage.
> edge cases and corners that most people wouldn’t notice.
This is a horrible mindset for developing software people are supposed to rely on.
Every project prioritizes bugs. It is impossible to work on everything all at once.
> Because the tests that don’t pass are edge cases and corners that most people wouldn’t notice.
This standard may be justified when there is significant benefit. There is not in this case. And some projects have stricter standards.[1]
> In other words, there may be more serious bugs not in the test suite than the ones that aren’t passing that are in the suite. And you only find that out through real usage.
You should assume everyone understands how Ubuntu's decision would benefit this project. You should assume most Ubuntu users do not care. You replied to a comment which told you this before.[2]
[1] https://lkml.org/lkml/2012/12/23/75
[2] https://news.ycombinator.com/item?id=46267541
Many of the utils, such as sort, aren't locale-aware. Considering that most of the world does not use English/ASCII, do you still consider that an irrelevant edge case?
I don't consider it irrelevant, but neither does uutils. However, it's also not something that is currently at zero. I'm not even sure that this percentage of failing tests is related to locale support specifically. I'm sure parity will be reached here.
For example, sort has an open PR on it right now.
> I'm sure parity will be reached here.
This was not the question.
They are testing it in a real-world scenario before putting it into one of their LTS releases.
So they see issues arising from real-world usage that tests might not cover? The same Ubuntu version also bundles the latest kernel, which is not considered stable to begin with.
Another way to pitch the same result: Rust coreutils 0.5.0 ships with 88 documented bugs, in addition to the usual unknown ones.
I like the project, but beware.
How many documented and undocumented bugs does GNU coreutils ship with? 0.5.0 is explicitly a pre-stable version.
(Sure, I realize that GNUcu is old and mature enough to have been pretty thoroughly debugged by this point.)
The uutils project has found bugs in upstream, added extra tests, and clarified behavior. It’s helped both projects improve.
Also bear in mind these tools have a long tail of obscure features that aren't used much. There's a useful subset of coreutils that is common across GNU coreutils, busybox, and FreeBSD & macOS. Full coreutils compatibility is laudable, and of course needed if it'll ever be a viable replacement. But most shell scripts people actually write probably already work on top of this port.
Are they bugs, though? Or just ambiguities where they want bug-for-bug compat?
Another commenter said the slowdowns are "dumb cases where it's 100x slower when given unrealistic values, like parsing e9000000", which is actually because it attempts to parse the number (bigint support) instead of clamping to i128.
Who is declaring the compatibility percentage? Given that GNU coreutils is GPL and Rust coreutils is MIT, they must follow clean-room development practices.
They test Rust coreutils against the GNU coreutils test suite, with 87.75% of the test cases passing.
https://uutils.github.io/coreutils/docs/test_coverage.html
Just a curiosity: why not try putting Gemini or GPT in a loop and waiting until 100% of the test suite passes?