The ISA PnP initialization process is actually very interesting:
All the ISA PnP cards power up in a disabled state. They all respond only to a specific address reserved for PnP initialization. Each card has a unique serial number written at the factory. The BIOS scans for serial numbers, not by brute force (that would take too long), but bit by bit.
Let's say there are 3 cards:
A: 010...
B: 011...
C: 100...
BIOS sends an "init command" to the reserved initialization address. All cards enter the selection process.
BIOS asks for bit 0 of the serial number. Cards A and B pull down the line for bit 0. ISA lines are normally pulled up by the chipset when receiving data from the cards. The BIOS remembers "0". Card C notices that the line is down, in conflict with its own bit (it has "1"), and disables itself until the next init command.
BIOS asks for bit 1. No cards pull down the line, both A and B have "1". BIOS adds a "1" to the serial number (now "01").
BIOS asks for bit 2. Card A pulls down the line. BIOS remembers "010". Card B is in conflict and disables itself.
Continue until the last bit. Only card A remains active. For each bit, the card either pulls down the line (and the BIOS adds a "0") or stays silent (and the BIOS adds a "1"). There can't be any more conflicts to disable it, since card A is the only one remaining. When the BIOS reaches the last bit, only one card can remain, no matter how many were initially active.
The BIOS then asks for config requirements, and the only remaining active card answers. BIOS configures it with bus addresses, IRQs, DMAs, etc.
BIOS sends the "init command" again. Card A now has specific addresses configured and will ignore the reserved init address. Only cards B and C enter the selection process.
BIOS asks for bit 0. Card B pulls down the line. Card C is in conflict and disables itself. Card B remains the only one active and will be configured.
Repeat the process and configure remaining card C.
At the end, when no more cards remain, the serial number scan returns "1111111..." (no cards pull down any lines), which means the scan is finished.
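The bit-by-bit arbitration described above can be sketched as a toy simulation. This is not the real ISAPnP register protocol (which also involves LFSR keys and checksums); it only models the open-collector "pull the line low for a 0" elimination logic, with made-up 3-bit serial numbers:

```python
# Toy model of the ISA PnP serial-isolation scan described above.
# Real ISAPnP uses specific registers and checksums; this only models
# the per-bit "pull the pulled-up line low" arbitration.

def isolate(cards):
    """Return card serial numbers in the order the BIOS would find them."""
    found = []
    active = set(cards)          # cards still listening at the init address
    while active:
        contenders = set(active) # init command: all active cards participate
        serial = ""
        for bit in range(len(next(iter(cards)))):
            # A card drives the (normally pulled-up) line low for a '0' bit.
            line_low = any(c[bit] == "0" for c in contenders)
            seen = "0" if line_low else "1"
            serial += seen
            # Cards whose own bit conflicts with what's on the line drop
            # out until the next init command.
            contenders = {c for c in contenders if c[bit] == seen}
        found.append(serial)     # exactly one card remains; configure it
        active.discard(serial)   # configured card ignores the init address
    return found

print(isolate({"010", "011", "100"}))  # ['010', '011', '100']
```

Note how the winner of each round is always the card whose serial sorts first when "0" beats "1", which is why the scan is deterministic no matter how many cards are plugged in.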
> looking to remove old drivers due to the surge of AI/LLM bug reports
I wonder how OpenBSD's careful code quality and hygiene (maybe there's a better word) has affected its vulnerability to LLM bug finding. Did their approach pay off in this case?
Now, most will say "but why, 1995 is ancient history, no such hardware exists anymore". The thing is ... should Linux get rid of what is old? I understand you have a smaller kernel when you have less code, and less cost to maintain, I get it. Still, I wonder whether this should be the only allowed opinion. Would it not be better to, kind of, transition into a situation where any hardware built in the future stays supported? So in 2050, we'd not say "damn, computers from 2026 are obsolete now". We could say "no problem, Linux is forever". Everything is supported. I would actually prefer that to "older than 30 years, we no longer support it".
> Would it not be better to, kind of, transition into a situation where any hardware built in the future, would be supported?
Easier said than done -- the kernel's internal interfaces aren't static, they change often. The project has never committed to stabilizing its driver API, so every driver takes non-zero work to maintain.
I would assume computers that are still running these old ISA mouses (mice?) probably are also running an older version of Linux; and if they're running a new kernel then it'll be somebody's job to port the drivers forward. There's some likelihood this will end up maintained by someone out-of-tree, which is a nice way of saying "we've sent your dog to a farm upstate..."
To add to this, as long as the diff representing the removal of the driver is kept in the git history it would be trivial for someone in the far future to say to an AI agent:
"Please take this linux source and patch the Bus mouse driver back in but match the new driver interface".
With code preserved in git history it's never actually "removed". It's just, disconnected.
That date feels a little bit late. The PS/2 devices that superseded the bus mouse started appearing around 1987. There were certainly still bus mice around in 1995, but they were thoroughly obsolete.
The real issue is that they don't have stable intra-kernel ABI/APIs. It should entirely be the case that technologies that are 10+ years old are stable and a clean abstraction layer can be created. You maintain the abstraction layer and all the things on the other side of it don't have to track random kernel changes. Things like this just keep working indefinitely.
This is the kind of spring cleaning I crave. Deleting busted drivers that haven't worked in over a decade? Fantastic!
Some of this hardware likely has exactly zero users because the material it's made from can't possibly have survived. Look at the cord on the mouse in the photo: you might be able to plug it in, but I wouldn't bet money that a signal can still make it down the wire.
My bus mouse still works just fine; things built in the 80s tended to be pretty solid.
However, you would be hard pressed to find a machine with ISA slots that has enough resources to run Linux 7.1 acceptably.
There are still ISA slots in new systems with fairly modern processors and plenty of RAM, if you don’t mind buying specific models of industrial PCs for way too much money.
For $1100 or so you, too, could have a 4th generation Core i3 machine. https://www.rampcsystems.com/product/2-isa-slot
Or maybe you need 4 PCI and 9 ISA for some reason. DuroPC’s got you, if you can drop $1800 on a system with the same generation of processor. https://duropc.com/product/r810-4p9i-4
ISA slots are all identical. If you have one slot, you can multiply it to 100 slots just by connecting the wires.
There was at least one non-identical ISA slot:
https://www.lo-tech.co.uk/wiki/IBM_Personal_Computer_XT_Syst... https://www.lo-tech.co.uk/understanding-pcxt-slot-8/
IBM was always special. :-) Aren't they the ones who invented the MCA bus abomination that required a floppy disk to configure each card?
That’s one of those facts that’s always good to know, but in practice people tend to put one card in one slot with no expanders.
I'm pretty sure the host will run out of IRQs long before 100. Don't most systems only have 16?
You don't really need IRQs for most ISA boards. OPL3/Adlib sound cards don't need one, MIDI doesn't, joystick port doesn't. I saw various I/O boards that don't need IRQ. Soundblaster does, but I don't know for what purpose. Maybe someone here can explain?
Coincidentally I'm currently working on a Sound Blaster driver for some DOS homebrew, so here's a quick rundown of how an SB is programmed and what its resources do:
Base Address: This is the beginning of the IO port range you use to program the card, commonly it's 0x220, but can be configured with jumpers (or software on later cards). You can add offsets to this address to access different functionality of the card, such as the OPL chip or the Mixer chip.
IRQ: The interrupt number that will be fired when the soundcard finishes playback of an audio chunk. Early cards usually used 7, later models defaulting to 5. More on this below.
DMA Channel: Which channel of the PC's DMA controller will be supplying audio data to the card. Usually 1 for 8-bit cards, with 5 being used for 16-bit cards.
The general process for playback is as follows:
- Program the DMA controller with the address and size of an audio buffer you'll be using to mix your PCM sound into. This buffer will conventionally be used in 2 halves by the interrupt service routine, a front buffer and backbuffer, similar to what you'd have for double buffered video. The DMA channel should also be put in "auto-init" mode so that the DMA transfer will loop back to the start when it finishes, which allows continuous playback.
- Install an interrupt service routine to write data into the "backbuffer" half of the DMA buffer, which switches back and forth each time an IRQ fires.
- Initialize the DSP chip via its IO port, pick a sample rate (usually around 11 kHz for most DOS games), then issue a continuous playback command. For this part, you tell the sound card that your playback buffer is half the size it actually is, which causes the IRQ to fire once in the middle of the buffer, and again at the end before looping back to the start. These halfway IRQs let you fill the unused half of the buffer while the other half is playing, for smooth gapless playback with no clicks or pops.
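The bookkeeping in those three steps can be modeled without any hardware. The sketch below simulates a tiny looping (auto-init) DMA buffer, with an "IRQ" firing each time the play position crosses a half-buffer boundary and the service routine refilling the half that just finished; buffer sizes and the sample source are made-up illustration values:

```python
# A minimal model of the half-buffer scheme described above: the DMA
# buffer is split in two, an "IRQ" fires each time playback crosses a
# half, and the ISR refills the half that just finished while the other
# half keeps playing.

BUF = 8                      # tiny DMA buffer: two halves of 4 samples
HALF = BUF // 2

dma = [None] * BUF           # the looping (auto-init) DMA buffer
src = iter(range(100))       # pretend mixer output: 0, 1, 2, ...

def fill(half):              # the interrupt service routine's job
    for i in range(half * HALF, half * HALF + HALF):
        dma[i] = next(src)

fill(0); fill(1)             # prime both halves before starting playback
played = []
for pos in range(BUF * 3):   # simulate three loops of the buffer
    played.append(dma[pos % BUF])
    if pos % HALF == HALF - 1:            # "IRQ": a half just finished
        fill((pos % BUF) // HALF)         # refill it; other half plays on

print(played[:12])           # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

Even though the 8-slot buffer loops three times, every sample comes out in order with no gap, because each half is always rewritten while the other half is being consumed.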
This is probably more info than you or anyone actually wanted, but it's a fun topic so I couldn't help myself.
No, I really appreciate the detailed answer. Things were so simple back then.
I thought the OPL chip was addressed via 388h (adlib/fm), not 220h (wave)?
Sound Blasters and compatible cards used IRQ lines because back in the bad old days CPUs were slow, bandwidth was tiny, and buffers were minuscule.
To get responsive/real time audio the card needs to signal to the CPU, not the other way around, and at the time IRQs were the way to do that on ISA busses.
I would imagine that ISA cards that didn't need IRQs either required CPU polling or DMA.
I imagined that the game / audio driver would just send data to the card at regular intervals and that's it. I realize now that the card uses its own clock, which can drift relative to the system timer, so this method would have a buffer underrun/overrun problem.
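The drift problem is easy to put numbers on. All figures below are hypothetical, just to show the shape of the arithmetic: even a small rate mismatch between the host's timer and the card's crystal steadily eats the buffer slack.

```python
# Illustrative drift arithmetic: the host pushes samples paced by its
# own timer while the card consumes them paced by its crystal. Any
# rate mismatch slowly drains (or overfills) the buffer.

nominal_hz = 11025          # rate both sides think they run at
card_hz    = 11025 * 1.001  # card crystal 0.1% fast (hypothetical)
slack      = 2048           # samples of headroom in the buffer

drift_per_sec = card_hz - nominal_hz       # ~11 samples/s shortfall
seconds_to_underrun = slack / drift_per_sec

print(round(seconds_to_underrun, 1))       # ~185.8 s until underrun
```

So with open-loop timed writes, playback would click every few minutes even on a well-matched pair of clocks, which is why the card-driven IRQ (card tells the CPU when it needs data) is the robust design.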
This is what the word "bus" used to mean on a hardware level: a backplane of connections to which multiple peripherals could be attached. These days a bus is a LAN of point-to-point serial connections which, it turns out, is much more viable at the high communication rates demanded of modern hardware.
How were different devices addressed? I assume it’s a master and slave system, but even then were address collisions automatically resolved?
In original ISA none of this is managed, the owner of the PC is expected to manually configure both hardware and software appropriately.
So e.g. [with the PC turned off!] you move a tiny jumper (basically just a piece of conductive metal with a plastic housing) to the "IRQ 8" position and you pick "IRQ 8" in some menu or set it in an environment variable in DOS or whatever.
By the time PCI is starting to appear there is some level of "Plug and Play ISA", but it's fairly messy because all the old stuff still exists, whereas for PCI the bus always had this intelligence baked in, so no device just assumes it can pick its own resources.
It can't be IRQ 8 on an ISA board. That's the IRQ for the RTC.
That's correct. I considered whether I should dig out a manual and decided that I should do the exact opposite and pick a value I know won't exist for ISA.
To avoid collisions, you moved physical jumpers on cards that might conflict, to select among a small range of addresses, I/O ports and/or IRQ numbers.
For example if you had two identical network cards, or SCSI disk controllers, you would need to physically reconfigure one of them away from its defaults.
There were only a small number of configurations available on each type of device, and some weren't configurable at all, so you could still get irreconcilable conflicts.
The Linux kernel of the time was full of hard-coded "probe" addresses and I/O ports, probe sequences to see if there was a device there, and IRQ auto-detection routines that triggered an interrupt to find out which IRQ line was asserted. Some of the probes had to be run in a particular order, so that probes for one type of device wouldn't break another type.
Later came ISAPnP, meaning Plug'n'Play for ISA, which allowed the operating system to use a clever protocol to talk over ISA with all devices on the bus that support it, identify and select them individually, query what they required, and configure their addresses, I/O ports and IRQs to avoid overlap (or permit overlap where it was ok for IRQs). After the operating system was done configuring them, they operated as if they had been configured physically like the older ISA cards. If necessary this could be implemented cheaply by adding an ISAPnP module to an existing ISA card design.
Eventually ISA was superseded by PCI, which had better, well-defined enumeration and configuration methods from the start, which all devices had to implement. PCI also allowed MMIO and IO base addresses to be set anywhere in a 32-bit range, not just the small number of options (or single option) ISA cards usually had, so there were no more address conflicts. The operating system still had to find the PCI bus registers itself, but after that, probing was simpler and more reliable than with ISA.
USB also arrived around the same time, and also had well-defined enumeration and configuration methods. Many simpler ISA devices were replaced by equivalent USB devices. Although USB was (and is) complex to implement at a low level, the complexity was handled very well by low-cost, generic USB modules on the device side, so it was easy for device manufacturers to use.
> How were different devices addressed?
There's a shared address bus. Each device responds to the i/o and/or memory addresses it's configured for. Configuration can be static, jumpers, isapnp.
> I assume it’s a master and slave system, but even then were address collisions automatically resolved?
No. If two devices want to use the same address space, you'll have problems. isapnp might help you out, but it was added in the second decade of ISA, so ... lots of things don't use it.
All cards receive the same signals (address lines, data lines, IRQ lines and everything else). They just ignore all data for addresses (on address lines) that are not theirs.
Each card must have a unique I/O address, sometimes more than one and sometimes an IRQ and DMA too. For example, Soundblaster cards had an OPL3/Adlib/FM synth chip at address 388h (it's fixed, you can't have two in the same system, or maybe you can and they would play the same tune, I don't know...), the main chip (wave playback and recording) at 220h or 240h configurable by a jumper, IRQ 2, 5, 7 or 9 (two jumpers), a MIDI port at 300h or 330h (another jumper), two DMA channels (another 4 jumpers), and an IDE port (2 more jumpers).
When you install the card, you set those jumpers according to the manual and what other cards you have installed and their addresses, so that there are no conflicts. Then you add "SET BLASTER=A220 I5 D1 H7 T6 P330" to AUTOEXEC.BAT so that games know which I/O addresses, IRQ and DMA channels to use, so the data reaches the correct card.
Then, PnP was invented, because changing those jumpers and avoiding conflicts was very hard, as you can imagine.
On a PnP system, you would enter the BIOS setup and reserve the IRQs of any non-PnP cards you may have, so that they are not auto-assigned to PnP cards. I/O addresses are managed automatically.
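For the curious, the BLASTER variable mentioned above is just a space-separated list of letter-prefixed values. Here's a sketch of how a game's setup code might parse it (real games vary in which keys they accept and how strict they are; the key names in the dict are my own labels):

```python
# Sketch of parsing the DOS BLASTER environment variable, e.g.
#   SET BLASTER=A220 I5 D1 H7 T6 P330
# A = base I/O port (hex), I = IRQ, D = 8-bit DMA, H = 16-bit DMA,
# T = card type, P = MIDI port (hex).

def parse_blaster(value):
    keys = {"A": "base", "I": "irq", "D": "dma8", "H": "dma16",
            "T": "card_type", "P": "midi"}
    cfg = {}
    for token in value.split():
        key, num = token[0].upper(), token[1:]
        if key in ("A", "P"):            # port addresses are hexadecimal
            cfg[keys[key]] = int(num, 16)
        elif key in keys:                # the rest are plain decimal
            cfg[keys[key]] = int(num)
    return cfg

cfg = parse_blaster("A220 I5 D1 H7 T6 P330")
print(hex(cfg["base"]), cfg["irq"], cfg["dma8"])  # 0x220 5 1
```

The variable exists precisely because there was no enumeration: the card can't tell the game where it lives, so the user's jumper choices have to be re-declared in software.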
They used 16-bit addresses from 0x0 to 0xffff.
> things built in the 80s tended to be pretty solid
Survivorship bias. We built a lot of crap stuff in the 80s, too. Most stuff built in the 80s is probably in landfills now.
Most things.
Plastics and rubbers tend to not survive well a lot of the time just because of the chemistry. There's really no way around plastic embrittlement and rubber decomposing. You can prolong it with the right storage conditions, but those molecules are gonna break down sooner or later.
The mouse in the photo was made somewhere between 1987 and 1993. I have computers older than that which work just fine.
Same, but I didn't spend a lot of time putting my grubby hands on those computers.
My Z840 server I use for self hosting has both an original IBM keyboard and a bus mouse attached. Either works just fine.
Microkernels have lost the open kernel wars because of their speed problems, but this is a great example of a driver that should have been running in userspace a long time ago, just like how Windows has been moving in that direction.
Isn't Linux planning to do the same?
I guess things are going in that direction naturally, but not officially. eBPF is helping move deep kernel functionality into userspace, and there's some resurgence of out-of-tree graphics drivers, especially for gaming.
Userspace drivers are much more powerful and easier to build than 10 years ago, but that's not because the kernel requires it.
Who knows, maybe we will get a smaller (instead of bigger) kernel in 10-20 years.
Luckily I'm not a kernel maintainer, but it seems like they don't have 10-20 years to make hard practical decisions. It's easy to get rid of old unmaintained drivers, but they have to solidify interfaces much more, as it is getting exponentially easier to find and exploit bugs or any unspecified behavior of the kernel.
There was a very interesting point when the people creating Rust interfaces were asking hard questions about ownership and lifetimes in driver interfaces, and the C Linux maintainers didn't really care to answer (they just wanted to wish Rust away).
Now with AI these questions are getting practical. Fortunately big companies have a big stake in keeping Linux secure, so I'm not worried about it going unaddressed.
> this is a great example of a driver that should have been running in userspace a long time ago, just like how Windows has been moving in that direction.
Hasn't windows (nt lineage) moved solidly in the opposite direction? Used to be you could reload/restart the video card ("GPU") driver if the driver crashed?
I think this conflates two different eras/layers. NT 4 famously moved the window manager/GDI/graphics subsystem into kernel mode, so that’s probably the “opposite direction” history. But modern GPU-driver recovery is WDDM/TDR, and it very much still exists: WDDM splits the display driver into user-mode and kernel-mode components, and TDR resets/recovers a hung GPU/driver instead of requiring a reboot.
https://learn.microsoft.com/en-us/windows-hardware/drivers/d... https://learn.microsoft.com/en-us/windows-hardware/drivers/d...
I also update NVIDIA drivers regularly on Windows 11 without rebooting, though that’s install-time driver reload rather than exactly the same thing as TDR.
No, it's the opposite. WDDM and DirectX are constantly being updated and have been improving GPU crash recovery, driver updates without reboots, power management, and abstractions for features like video encoding and storage DMA, among many things. In Linux it is taking ages: the first proposal for DRM to support 2010-era WDDM features was in 2021, and it still does not exist. Graphics is one of the few places where parts of Microsoft still innovate. Not in the sense of having great code; they just put in the work to coordinate these changes among the handful of vendors. If only someone hosted more steak dinners for Linux.
If anyone is curious, here is the actual commit that removed the drivers: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
What the hell. Kernel.org is mining in my browser to “make sure I’m not a bot”?
Is this due to Mythos and other LLMs finding a bunch of obscure bugs, or simply a precaution? If someone (a normie, not a gentooman) wanted to run Linux on retro hardware, how would they do it? Boot Debian Sarge?
The old-fashioned way: build your own kernel with the driver included.
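A rough sketch of what that looks like, run from inside a kernel source tree that still contains the driver. The config symbol here is assumed to be `CONFIG_INPUT_LOGIBM` (the Logitech bus mouse driver, `drivers/input/mouse/logibm.c`); other bus mice have their own symbols.

```shell
# Start from a default config, then enable bus mouse support.
# CONFIG_INPUT_LOGIBM is assumed; check drivers/input/mouse/Kconfig
# in your tree for the symbol matching your hardware.
make defconfig
./scripts/config --enable CONFIG_INPUT_MOUSE \
                 --enable CONFIG_INPUT_LOGIBM
make olddefconfig          # resolve any dependent options
make -j"$(nproc)"          # build the kernel with the driver included
```

The same approach works for any driver that is still in the tree but not enabled by your distro's config.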
Raises the question whether a bug in code that's never called actually exists ;)
Code that is never normally used can sometimes still be gotten to run by an attacker, and therefore can still be a security risk.
But that code would have to be selected in menuconfig, compiled, and the module loaded. I assume that nobody does that for bus mice, and even if someone, by mistake, selects one of the drivers, that's 1 machine in a billion. Who would target that?
Same argument for any retro-tech. What hacker would spend hours/days to hack my bare-metal DOS box running Arachne + a packet driver just to mine bitcoins on a K6-2 for a couple of hours until I turn it off from the AT power switch (not button).
Good point. I guess I have this issue in the back of my mind [1], which was widely shipped with ffmpeg despite being basically never needed.
[1] https://x.com/FFmpeg/status/1983949866725437791
From my understanding, that isn't how drivers in Linux work. Nearly no kernels will have that code compiled into them because kconfig won't call for it. It is "opt-in", and it is so niche that few distros would have enabled it.
Linux only ships with a tiny sub-set of the drivers in the source tree.
schrödinbug?
https://softwareengineering.stackexchange.com/questions/9697...
What is a bus mouse? Is it using the old PS/2 port?
No, it's an interface that predates PS/2 and needs a dedicated ISA card.
https://en.wikipedia.org/wiki/Bus_mouse https://en.wikipedia.org/wiki/Industry_Standard_Architecture
The PS/2 connector is what came after the bus mouse. Back then the mouse was connected to a specialised add-in card, probably an ISA card if I remember correctly.
Damn, there goes Linux as my retro computing target.
Guess I’ll be porting dahdi to netbsd soon lol.
Tbf, I get why Linux is dropping all this stuff. I wouldn’t mind becoming a maintainer of smaller drivers myself, but I doubt I have the skill level.
> looking to remove old drivers due to the surge of AI/LLM bug reports
I wonder how OpenBSD's careful code quality and hygiene (maybe there's a better word) has affected its vulnerability to LLM bug finding. Did their approach pay off in this case?
This makes me sad.
Now, most will say "but why, 1995 is ancient history, no such hardware exists anymore". The thing is ... should Linux get rid of what is old? I understand you have a smaller kernel with less code and less cost to maintain, I get it. Still, I wonder whether this should be the only allowed opinion. Would it not be better to transition into a situation where any hardware built in the future would be supported? So in 2050, we'd not say "damn, computers from 2026 are obsolete now". We could say "no problem, Linux is forever". Everything is supported. I actually would prefer that to "older than 30 years, we no longer support it".
> Would it not be better to, kind of, transition into a situation where any hardware built in the future, would be supported?
Easier said than done -- the kernel's internal interfaces aren't static; they change often. The project has never committed to stabilizing its driver API, so every driver takes non-zero work to maintain.
I would assume computers that are still running these old ISA mouses (mice?) are probably also running an older version of Linux; and if they're running a new kernel, then it'll be somebody's job to port the drivers forward. There's some likelihood this will end up maintained by someone out-of-tree, which is a nice way of saying "we've sent your dog to a farm upstate..."
To add to this, as long as the diff representing the removal of the driver is kept in the git history it would be trivial for someone in the far future to say to an AI agent:
"Please take this linux source and patch the Bus mouse driver back in but match the new driver interface".
With code preserved in git history it's never actually "removed". It's just, disconnected.
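As a concrete sketch, recovering a deleted file from history is a two-command job in git. The demo below uses a throwaway repository as a stand-in for the kernel tree, so the file name and contents are invented:

```shell
# Demo: recover a deleted file from git history, using a scratch repo
# as a stand-in for the kernel tree (file name and contents invented).
set -e
cd "$(mktemp -d)"
git init -q demo && cd demo
git config user.email dev@example.com && git config user.name dev

echo "old bus mouse driver" > busmouse.c
git add busmouse.c && git commit -qm "add driver"
git rm -q busmouse.c && git commit -qm "remove driver"

# Find the commit that deleted the file ...
removal=$(git log --diff-filter=D --format=%H -- busmouse.c | head -n1)
# ... and check the file back out from that commit's parent.
git checkout -q "$removal^" -- busmouse.c
cat busmouse.c
```

Of course, as the comments above note, the checkout is the easy part; the real work is porting the restored driver to whatever the internal interfaces look like by then.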
That date feels a little bit late. The PS/2 devices that superseded the bus mouse started appearing around 1987. There were certainly still bus mice around in 1995, but they were thoroughly obsolete.
The real issue is that the kernel doesn't have a stable intra-kernel ABI/API. Technologies that are 10+ years old should absolutely be stable enough that a clean abstraction layer can be created. You maintain the abstraction layer, and everything on the other side of it doesn't have to track random kernel changes. Things like this would just keep working indefinitely.
that pee-stained Microsoft mouse really drives it home
That's the next Apple Neo color for you.
I'm sure that photo was chosen rather deliberately to garner support from a wide cross-section of greybeards.