Soooo my previous toots ended up on Phoronix and here come the entitled users saying how dare you tell me to switch to Wayland.
Repeat after me: Xorg is dead. It is unmaintained. It is buggy and those bugs are not getting fixed. *THIS IS FROM ITS OWN DEVELOPERS*. The people previously working on Xorg are now working on Wayland. They are literally part of the same organization FFS.
If you want Xorg to keep working, fix it yourself. Oh, not interested? Nobody else is either. Guess what, if nobody works on it, it will bitrot into oblivion. Nobody has signed up to fix it. No amount of wishful thinking is going to change that. You can keep using it all you like, but unless YOU sign up to maintain it, it's going to die.
Want Xorg to survive? Take over maintenance. We're all waiting.
*crickets*
Seriously, it's like you all think just shouting "I will never leave Xorg and my X11-only software" is going to magically save it from withering away and dying.
Sign up to do the work yourselves or deal with it. If so many people love Xorg so much, why is literally *nobody* signing up to save it? It's not going to save itself no matter how much you comment on Phoronix.
@getimiskon It's unmaintained because it's a giant mess of legacy upon legacy upon legacy and everyone who tried to maintain it ended up burning out.
At some point you just have to give up and start over, and that is Wayland. No, it's not perfect. But it has a future where Xorg doesn't.
@getimiskon @marcan Like with many systems, the old version is horribly design-rotten and unfixable, but has lots of current software; developers won't target the new platform and users won't adopt it, each because the other hasn't yet. That leads to everybody staying on the horrible platform.
See also GCC 2.95 and the GCC 3.x branch. Many large GCC users took years to switch, but as soon as they did nobody looked back.
@getimiskon @marcan the GPU story in *BSD is so sad that I don't think it's worth caring about at all.
It's super frustrating and there is no point in being held back just because *BSD isn't ready to move.
@karolherbst @getimiskon @marcan How much of that is the BSDs' fault? Right now, every open source graphics driver is written for Linux and must be ported to the BSDs, so the BSDs invariably fall behind. I would much prefer if there were an actual API used by the drivers to interface with the kernel, precisely because it would give non-Linux OSs a chance.
@alwayscurious @getimiskon @marcan
hard to tell. There is a "no pointless abstraction" policy within Linux, because those abstraction layers inevitably turn out to be terrible. Ever checked Nvidia's driver? It has a lot of pointless core code, because they can't even expect a kernel to have a linked list.
But also, some BSD developers decide to write their own drivers instead of porting the Linux ones, which... always fails due to the massive scope.
@alwayscurious @getimiskon @marcan I think the only solution really is to just suck it up and say if you have a GPU and want to use it, don't install BSD. I honestly don't see any other way without some amount of pain.
GPU drivers are way too entangled with memory management and other core kernel bits, so the BSDs' only real choice is to either layer the Linux driver or find 50 developers to write drivers for them.
@karolherbst @getimiskon @marcan I understand why Linux has the policy it does, but at the same time it currently means that non-Linux operating systems are dead on arrival outside of no-GUI applications, unless they are okay with having all the GUI stuff run in a Linux VM. That is a serious problem, because Linux is a monolithic kernel, and monolithic kernels are known to be broken when it comes to security.
@karolherbst @getimiskon @marcan What I would much prefer is for the _entire_ GPU driver (both kernel and userspace) to be part of Mesa, and for OSs that want GPU support to include Mesa as a git submodule or similar. Supporting GPUs in a new operating system would still be a lot of work, but one would at least have some chance of success.
@alwayscurious @getimiskon @marcan I think there is no actual proof for it besides "in theory"
@karolherbst @getimiskon @marcan One proof is that no public cloud provider will run user-provided containers without an additional layer of isolation, despite the performance and density advantages of doing so.
@alwayscurious @getimiskon @marcan so is there a public cloud provider running non monolithic kernels not doing this isolation?
@karolherbst @getimiskon @marcan Not that I am aware of, but not because of security! A microkernel can provide as much isolation as one wants (up to and including perfect, as https://sel4.systems shows) without needing virtualization at all.
@karolherbst @getimiskon @marcan Also, a major reason Qubes OS, OpenXT, and Spectrum OS exist at all is that monolithic kernels fail to provide sufficient isolation.
(Disclaimer: am a Qubes OS developer.)
@alwayscurious @getimiskon @marcan sure, but that comes at the cost of performance/features, right?
You always have to draw the line somewhere. And yes, more isolation would be good, but I think there are some people working on per kernel module VMs and stuff like that.
My point is more, what does all that "in theory" stuff help if there is almost nobody willing to use it or invest into it.
I'll just wait for the first serious non-monolithic kernel project to emerge.
@karolherbst @getimiskon @marcan Google Fuchsia is based on a microkernel (Zircon) and there are a LOT of microkernels in the embedded space where backwards compatibility is not a concern. Also QNX is based on a microkernel.
@alwayscurious @getimiskon @marcan sure, niches always exist; the point was more about a general purpose kernel covering more use cases.
Maybe the future is having different kernels for different use cases, I don't know. But even Fuchsia I'd still say is just experimental.
I kinda want to see a proper microkernel running well on a desktop, but it's also a huge investment, and it's probably easier to rework Linux than to recreate all of it.
@karolherbst @getimiskon @marcan I absolutely do not want to see the enormous amount of work put into Linux go to waste. I just wish that some parts of that work (GPU drivers in this case) were easier to reuse in other contexts.
Maybe what I really want is something like wlroots, but for operating systems instead of Wayland compositors. A bunch of modular, unopinionated components that one can use to make an OS without having to do everything from scratch.
@alwayscurious @getimiskon @marcan yeah.. I mean, I see the point, it's just that GPU drivers touch a lot of core parts, like memory management, so it's not really all that trivial to abstract it out.
Maybe you could share some OS agnostic bits (like code directly accessing hardware), but all the code between APIs and hardware probably needs to be written against each kernel, which is the biggest part anyway.
@karolherbst @getimiskon @marcan How much of the complexity would go away if one decided to require that all GPU-accessible memory must be pinned, unless the GPU supports retryable page faults?
@alwayscurious @getimiskon @marcan the issue is at a much lower level.
Well, that's already happening: all host memory the GPU accesses in a submission gets pinned. But you still need APIs for that. And a lot of the pain already starts with C not having a proper stdlib with basic structures like linked lists.
It's worth taking a look at how Nvidia wrote their driver to abstract over kernels; it's often pure pain.
@karolherbst @getimiskon @marcan What if one is using a language with a decent standard library and native support for interfaces/traits/etc?
@alwayscurious @getimiskon @marcan I mean, the more APIs kernels have in common the easier it will be to abstract target kernels.
At the moment you can't even use linked lists without providing your own, and Linux upstream will complain if you bring your own instead of using theirs.
Would it make it easier to interface with other subsystems? Not sure, but I suspect that would also become simpler.
@marcan @getimiskon OpenBSD maintains its own X server, Xenocara. So they still have a working X server.
@alwayscurious @karolherbst @marcan @getimiskon A big part of the problem is that, due to Linux’s license, driver code cannot be copied onto the BSDs and other OSs.
@whynothugo @alwayscurious @marcan @getimiskon that's not true.
@karolherbst @alwayscurious @getimiskon @marcan
wow, i am SO out of the loop on new Star Trek.
@alwayscurious @karolherbst @getimiskon Our GPU driver *is* written in Rust and BSD not supporting Rust in their kernel is actually the reason they can't use it sooooo
@marcan @alwayscurious @getimiskon are there any plans of changing that by the way? I don't follow BSD development at all, so I'm not in the loop at all.
@karolherbst @alwayscurious @getimiskon Not to my knowledge.
@marcan @alwayscurious @getimiskon apparently there are some people having written prototypes for FreeBSD, but they all come with their own wrappers, soo.. I guess this will take a while still.
@marcan @karolherbst @getimiskon will this be what it takes for *BSD to start using Rust in base?
@alwayscurious @marcan @getimiskon honestly? I have no expectations here.
@karolherbst @getimiskon @marcan Chimera Linux seems interesting. At a certain point, there stops being any point trying to catch up with everything happening in the Linux kernel. So, you might as well just join the party and assert your independence on the stuff that actually matters.
@cwize1 @getimiskon @marcan yeah, it just means no proper GPU support in BSD
@alwayscurious @marcan @getimiskon there is no sustainable path for any GPU driver in any BSD; I don't see why the Asahi one would be any different.
@karolherbst @getimiskon @marcan BSD is dead. Long live usermode BSD. 😉
@karolherbst @marcan @getimiskon does that mean that *BSD will be relegated to server use-cases?
@alwayscurious @marcan @getimiskon was it ever any different?
@karolherbst @marcan @getimiskon The OpenBSD developers run OpenBSD on their dev machines IIRC.
@alwayscurious @marcan @getimiskon I mean, sure, but should the normal user do it? no
@alwayscurious @marcan @getimiskon seems like FreeBSD actually caught up to 5.16, so maybe FreeBSD has the people to work on it. My last update was that they struggled to update the tree, so I guess they figured it out, kinda.
@karolherbst @marcan @getimiskon FreeBSD and OpenBSD at least only support Intel and AMD GPUs, which helps. (I don’t count the FreeBSD Nvidia support as Nvidia provides that.)
@alwayscurious @marcan @getimiskon yeah, seems like they figured it out last December, so that's why I missed that.
@alwayscurious @marcan @getimiskon but still, this is nowhere close to being sustainable, especially if you want to start supporting more drivers.
@karolherbst @marcan @getimiskon Hopefully virtio-GPU native contexts will allow any OS with sufficient virt support to spin up a tiny Linux VM and use its drivers that way.
@alwayscurious @karolherbst @getimiskon @marcan there was a project called OSKit that tried to do that, guess what happened to it...
@alwayscurious @karolherbst @getimiskon @marcan also there is genode os framework
@karolherbst @alwayscurious @marcan @getimiskon it’s always a bit non-trivial to upgrade, the kernel APIs change all the time and drivers (i915 i mean) continuously grow the API surface they rely on, but 🤷 good enough for me.
BTW about the Asahi GPU driver – don’t think Rust will be as much of a problem for FreeBSD as still not having any OFW/FDT stuff in our LinuxKPI layer xD There is a Panfrost port actually but it’s not the same codebase as the desktop one and there’s sort of movement towards reconciling that but eh it’ll take time.
Obviously things are different for OpenBSD.
@alwayscurious @karolherbst @getimiskon @marcan I think the only realistic option is that you pass the gpu device to a linux vm, run the full linux gpu stack in there with a virgl virtual gpu, and then export that virgl gpu to your desktop/app session vms.
that way all you need is a virgl/virtio kernel driver, and you can reuse the full Linux GPU driver support. if you're good it might even be possible to make that GPU VM restartable if it crashes.
still a lot of typing, but all hw agnostic
@marcan you probably already saw it but just in case: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/9.0_release_notes/deprecated_functionality#deprecated-functionality_graphics-infrastructures
@valpackett @marcan @karolherbst @getimiskon Maybe it is time to revive (and finally upstream) the Linux kernel library project?
@karolherbst @getimiskon @marcan I wonder if these problems could be solved with a new arch port. User-mode Linux works just fine, and I suspect it is significantly more complex than porting Linux to run as a user-space process on <microkernel of choice>.
@alwayscurious @getimiskon @marcan I'd say that address space isolation is the more promising path honestly.
And then we could just isolate certain linux subsystems or out of tree drivers or other fun things.
@karolherbst @getimiskon @marcan Could address space isolation eventually turn Linux into a whole bunch of (mostly) mutually-distrusting userspace tasks running on a microkernel?
And then we start getting into _really_ awesome stuff. The microkernel could be made pluggable, so that one can use e.g. #seL4 instead of the one Linux provides. Userspace could start bypassing the clunky POSIX API in favor of one that used the underlying microkernel’s IPC primitives directly. The userspace tasks’s APIs could themselves become public interfaces, allowing other programs running on the microkernel to use them. Eventually, I could even see the ability to run standard Linux userspace become optional.
@alwayscurious @getimiskon @marcan address space isolation kinda makes it pointless to move the entire driver into userspace, no?
@karolherbst @getimiskon @marcan Address space isolation requires the driver to run in userspace if you want the security benefits, unless I have missed something. Otherwise the driver could just change the page table root pointer or something else equally bad.
@alwayscurious @getimiskon @marcan why would it? They just get their own virtual memory mappings: read-only for things they are not supposed to change but have to read, and unmapped for things they shouldn't read at all.
There are some folks working on it to isolate certain bits inside the kernel. I think it's mostly focused on KVM use cases for now, but I can also see it being used outside of it.
@karolherbst @getimiskon @marcan I am talking strictly about _hardware_ privilege levels here. The component being sandboxed needs to run in user mode, not kernel mode.
@alwayscurious @getimiskon @marcan yeah.. no idea about that. But they probably run at a lower privilege level than the core kernel. Never checked that out though.
@alwayscurious @karolherbst @getimiskon @marcan has there been any effort to run them on a #microkernel like seL4 on a processor like #riscv?