Personally, I’m looking forward to native Wayland support for Wine and KDE’s port to Qt 6.
Linux phones are getting closer and closer to usability every day. I don’t care that they’ll always be less polished than iOS or Android, I want a Linux phone.
I’ve been curious about Linux phones. Can you recommend any devices or operating systems to watch? Thanks.
Your best bet right now IMO would be flashing PostmarketOS onto a used OnePlus 6, which is cheap, has good specs, and has none of the battery issues plaguing the Pinephone Pro. That said, it’s not 100% ready to be a phone yet; for now its best use case is as a mini-tablet / PDA kind of thing. It really feels like carrying a pocket laptop around, which is pretty fun as a starting point.
Cool, thank you!
Pinephone has a great active community, and the device itself is dirt cheap (also pretty low-specced). There’s a Pro version with much better specs on paper, but its development state is much rougher. Not that the basic model is anywhere near daily-driver material yet, but the progress is very noticeable every time I check in.
Linux phones
Will we be able to use messaging apps such as WhatsApp and Signal on Linux phones?
Yes, since you can run Android apps on them. They will be slower and have some quirks though I’m sure.
Wine + Wayland for sure. It’s time to let X11 rest, it’s earned it.
It’s all finished; the main developer is upstreaming the source code in smaller patches so it’s easier for the MRs to get accepted by the Wine devs.
What do you mean? I play games with Lutris on Wayland without issues.
It goes through XWayland, whereas Wine on Wayland would do away with that layer.
Linux phones for me. Really impressed by how far these things have come in the last 3-4 years, and now we’re getting close to having at least one that’s usable day-to-day (with plenty of rough edges, obviously). As soon as that happens I hope more people will decide to take the plunge and really start pushing things forward.
Oh yea, I’m very excited to give Plasma Mobile a go in earnest
Plasma’s scalable applications paradigm has been around for coming up on 15 years. Gnome’s isn’t far behind.
I’m just disappointed in the direction of UX they’re all taking. Ubuntu Touch was looking innovative and made me excited. Then that didn’t happen and now we just have a bunch of Android look-alikes but worse and buggier. Don’t get me wrong, I’m very glad to have GNU/Linux on a phone either way (especially NixOS Mobile), but I’m not excited to use one.
I don’t know if it’s just me getting older or if innovation in how we interface with technology has just sort of stagnated. In the past there was so much happening. New input methods (all kinds of pointer devices, joysticks, weird keyboards); graphical paradigms (floating windows vs tiling panes, tabs, stacking, grouping, virtual desktops); display technologies (vector graphics, convex screens, flat screens, projectors, VR headsets, e-ink); even machine architectures (eg Lisp machines) and how you interacted with your computer environment as a result.
As far as I can tell, VR systems are the latest innovation and they haven’t changed significantly in close to a decade. E-ink displays are almost nowhere to be found, or only attached to shitty devices (thanks, patent laws) - although I’m excited for the PineNote to eventually happen.
How do we still not have radial menus?! Or visual graph-like pipelining for composing input-outputs between bespoke programs?! We’ve all settled on a very homogenous way of interacting with computers, and I don’t believe for a second that it’s the best way.
Just want to add that I don’t think it’s a technological plateau. I think it’s capitalism producing shiny and “upgraded” versions of things that are easy to sell. Things that enable accessible and rapid consumption. High refresh rate, vertical high-resolution screens for endless scrolling in apps optimised for ads-scrolled-past-per-second. E-ink devices only good enough that you can clearly see the ads on them as you read your books. Things are just not made for humans. They’re made for corporations to extract value out of humans.
Having used Ubuntu touch for a bit I’m way more excited about gnome mobile. I just think it’s overall a better paradigm. Ubuntu had some neat ideas but overall it just didn’t do it for me.
Yeah, the desktops have been A++ for the last 10 years; it’s the phones that I’m excited to get to a similar level. I have one and it’s an expensive dust collector: I dust it off every few months and not much has changed.
A WINE-type app but for OSX (or really just iOS) apps would be awesome to have on both desktop and phone. Call it CIDER or something similar. I reckon the way Apple does their app stores these days it would be hard to actually get most software working, but I don’t think that alone is a showstopper.
Having both that and Waydroid on a phone would be pretty great. You might want to check out Darling for running Mac apps on Linux in the meantime, since its goals are similar to Wine’s (but it’s still early in development in comparison)
I am looking forward to Wayland being a problem-free experience. Well, rather, I don’t care if it’s X11 or Wayland; I don’t want to have to think about the underlying system.
Also, software becoming distributable in a uniform way. Though here, I would strongly advocate for Flatpak.
Two things at completely opposite ends of the “Linux world”:
- eBPF. It seems super promising for improving observability and security, especially the performance of tooling for those concerns. It also strikes me as a risky architectural decision: programmable privileged kernel code + JIT. What could go wrong… that verifier sure is doing a lot of heavy lifting.
- Valve flexing more muscle in developing Proton as it comes to terms with the fact that Microsoft’s vertical integration (and monopolistic practices increasingly unfettered by government) will eventually be an existential risk to it. It is now ridiculously easy to install and run so many games on Linux, so long as you accept the devil you know and its DRM-y platform. Definitely not perfect, but it’s so vastly improved I’m comfortable calling it “night and day”.
The Valve one has been the most exciting for me. AFAIK Valve has been thinking about the issues with Windows controlling PC gaming since Windows 8 first came out. The Steam Machines were a flop at the time, but in recent years they’ve been able to make big moves for Linux gaming, and instead of giving up they’ve been doubling down on the importance of it.
Ahh yes, the Steam Machines. Definitely contemporaneous with Windows 8.
I think it’s likely Valve have intensified efforts recently for a number of reasons, not least of which is the ongoing encroachment of Microsoft turning the Windows PC experience into more of a walled garden across more segments. It can’t have gone unnoticed that Microsoft are 1) selling games on the Microsoft Store and 2) normalising the concept of a hardware root of trust etc. with the Windows 11 TPM requirement.
EFI Secure Boot was one thing. Setting conditions up so every PC in the world has hardware support for verifying that user-space programs are signed by Microsoft is another. I’m not saying they’ll flick a switch overnight and every Windows installation in the world will be in S mode, but it’s clearly trending that way. That would be goodnight for Steam if they so chose. And clearly Microsoft believe they can fob off regulators well enough.
RISC-V laptops and precompiled binaries in package managers.
My dream is to have a RISC-V phone running Linux
RISC-V laptops with a battery that can handle 3 days of juice while doing work. And it should be powered by Linux, either Fedora or a derivative of it (imho).
What’s all of this for?
A fully working Linux phone with good battery life that supports a good Matrix client with E2E encryption. GrapheneOS is good, but we need initiatives independent from Google.
Technically Android is Linux. I know what you mean though, and it would be great. Maybe some day…
- bcachefs; I currently use ZFS and am not a huge fan of btrfs. Having another filesystem mainlined will be fun.
- eBPF, particularly if somebody picks up after the presumably abandoned bpfilter.
- Improved/matured support for drivers written in Rust. I’m not so fussed about in-tree work, but future third-party drivers being written in a safer language would be a nice benefit.
- Long term: the newly introduced accelerator section of the kernel might give SoCs with NPUs and the like better software support.
- Very hyped for both Plasma 6 and Cosmic. I’ve got a lot of confidence in the KDE devs, and the Cosmic previews look very nice.
- NixOS has been a really cool distro for a while, but it also looks to have a solid build system from which interesting derivatives will show up.
> from which interesting derivatives will show up.
I don’t think that will happen and hope it won’t because NixOS can handle the usual preferences people might have internally.
Don’t like glibc? pkgsMusl is the entire package set but with musl instead of glibc.
Want static compilation? pkgsStatic.
Afraid of systemd? Well okay, we don’t have that right now, but I don’t think anyone would be opposed to optional support for worse service managers. It’d just be an opt-in toggle that we could support with enough people interested in it.
Nah, people always want to put their own spin on things, and I welcome the diversity.
On Arch you can bring in all the necessary packages yourself, but Garuda exists and people enjoy using it. Horses for courses.
Garuda only exists because the only way to distribute a set of default configurations in regular distros is to create a whole new distro/installer. We don’t have that problem in NixOS because all configuration is declarative and composable.
In the NixOS world, Garuda would be a NixOS base config which users would import in their own config and extend with their own configuration. You’d still be using NixOS though.
If you’re packaging enough changes that somebody would say it’s a different experience, calling it the “X configuration” vs “X distribution” based on how it’s packaged is just splitting hairs.
What’s eBPF?
It’s a technology that lets you load small programs into the kernel, where they’re verified and then run through the kernel’s JIT compiler. It’s an extremely flexible way to run code in kernel space; the typical example is using it to build XDP programs for networking, which can deeply analyse network packets without incurring the performance penalty of a context switch to userspace.
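For a rough idea of what that looks like, here’s a minimal XDP sketch in C (assuming clang and the libbpf headers are available; the map and function names are just made up for illustration). It counts every packet on an interface and passes it through unchanged:

```c
// Minimal XDP sketch: count packets and let them all through.
// Build with something like: clang -O2 -g -target bpf -c xdp_count.c -o xdp_count.o
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

// Single-slot array map; userspace tooling (e.g. bpftool) can read the counter.
struct {
    __uint(type, BPF_MAP_TYPE_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} pkt_count SEC(".maps");

SEC("xdp")
int count_packets(struct xdp_md *ctx)
{
    __u32 key = 0;
    __u64 *count = bpf_map_lookup_elem(&pkt_count, &key);

    // The verifier rejects the program unless the lookup result is NULL-checked.
    if (count)
        __sync_fetch_and_add(count, 1);

    return XDP_PASS; // hand the packet on to the normal network stack
}

char LICENSE[] SEC("license") = "GPL";
```

Attach it to a NIC with something like `ip link set dev eth0 xdp obj xdp_count.o sec xdp`, and the counter updates for every packet without anything ever crossing into userspace.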
Just to be sure, what’s wrong with ARC and L2ARC?
My issue is not with the ARC, it’s a few things:
- Kernel integration is iffy; I don’t want to attach a module to my system every time I compile the kernel and pray that the difference in pace between the release schedules of OpenZFS and Linux hasn’t caused issues, and because of the licensing issues my options for a distro with ZFS built in are very limited.
- Its performance isn’t excellent from an NVMe standpoint. It’s not terrible, but it could be better.
- It has a massive code base, making introducing things like performance improvements and new features quite a challenge (though the OpenZFS team are doing a bang-up job despite this).
Ultimately if I was still holding on to 40+TB of important data, I’d be using ZFS and be happy about it. I want snapshots on my workstation, without all the strange issues I’ve had with btrfs. I’m sure bcachefs will have its own issues but it’s better to have options.
Sure, I understand the part about having to compile the ZFS module every time alongside the kernel. But that must be some heavy-lifting you’re doing if you’re regularly compiling your own kernel. I’d be interested in what you’re running that requires such efforts.
I don’t understand why you would need NVMe for ARC. Doesn’t it run in RAM only? Isn’t L2ARC what runs on storage devices?
Not really heavy lifting; I’m just running the Xanmod kernel and need to turn on some features for eBPF development. I’m also keeping up to date with kernel releases, so every 6 weeks or so I need to rebuild.
The ARC runs in RAM, but is generally best when it’s given:
- A consistent amount of memory.
- An easily predictable workload.
- Long periods of time between restarts.
Conditions great for a server, but not so much for a workstation. I don’t intend for my cache misses to go to spinning rust, so I have two 2TB NVMe drives. SSDs are cheap as chips currently.
The L2ARC is a victim cache of the ARC, and while it is persistent, it’s still much more effective for me to just use an NVMe drive for my pool.
Just went through Xanmod’s page: the list of features provided seems exciting, although I don’t really know much about some of them. Do you need these features for eBPF development?
Well, you’re right: ARC is best used in a server. What problems did you have with BTRFS that prompted you to switch?
I use Xanmod for gaming (fsync & related tweaks), but need other flags for development on the same machine.
My issues with BTRFS were mainly in their userspace tooling; ZFS volume management is just glorious, and it felt like a significant downgrade to use BTRFS.
Better tools for graphic design. Maybe a port of the Affinity suite, or a big push towards GIMP, Inkscape, and Scribus development. GIMP… I feel like people have dreamed for more than a decade about essential photo editor functionality like CMYK support and non-destructive editing. At least the first one is coming in the next version (partially).
I switched my design workflow to FLOSS tools exclusively. Krita is a perfectly competent Photoshop replacement; Inkscape has been developed at a breakneck pace in the past year (the workflow is different, but it’s every bit as good as Illustrator); and Scribus is great once you get used to the workflow. If anything, Scribus’ workflow helps you plan and structure your projects better. IMHO FLOSS tools are absolutely ready for professional work, but you cannot expect the workflow to match existing proprietary tools.
Would absolutely love for Serif Labs to create a port of Affinity Photo and Designer. Of the programs I’ve tried, those two have the closest UX to Photoshop and Illustrator without the software-as-a-service model.
Hell, I’d even take it if all they did was support it working under WINE. While I would prefer a seamless UI that fits in with both GTK and Qt, it’s understandable that they might not consider it worth the effort.
If Affinity apps worked natively on Linux I’d ditch Windows for good.
Krita was developed for graphic design specifically. GIMP tackles other, simpler use cases.
HDR and wide color gamut! While the displays are still only really available in the mid to high end (I don’t count HDR400), it’s no longer just pro gear, and I recently upgraded to a new display that I’d love to take advantage of. I’ve been using the new, still-in-testing Variable Refresh Rate support on GNOME, and this would be the final piece of the puzzle for ditching Windows 100% when it comes to gaming, as Proton has basically solved every other issue for me; I’m primarily a singleplayer gamer.
I think KDE and Gamescope have experimental HDR working with Windows games.
I don’t know if KDE got it working yet, but Gamescope’s works pretty well out of the box. Nobara and maybe ChimeraOS already have this ready with a session for Steam Big Picture mode.
Kinda funny that Windows games seem to always get compatibility with these things first. I guess just adding support in Wine means more games get the functionality at once than developers adding it on a per-game basis.
Looking forward to seeing Cosmic get an alpha/beta release; I love what they’ve shown, and since I can never get used to tiling window managers, it looks like a very nice middle ground between a DE and a WM. I’ve also been eyeing their Virgo laptop. I doubt I’ll get one since EU shipping is a nightmare (though they’re supposed to open an EU warehouse soon-ish), but more repairable laptops, especially one using GPLv3 for every bit, are amazing. Looking forward to seeing more about the FW16 too; not Linux per se, but still cool.
Plasma 6, ofc. Way, way in the future (probably) is seeing more DEs make their way to Wayland, like XFCE/Cinnamon/Budgie.
AMD is planning to release OpenSIL in 2027, which should, in theory, accelerate the development of Coreboot and Libreboot and bring them to modern AMD motherboards
I’m curious: will that work with motherboards released before then, or just with new motherboards from that point onwards?
New motherboards. Unless AMD collaborates with board makers to push updates to their BIOS/UEFI to include OpenSIL compatibility, which is likely not going to be the case in my opinion
IIRC the next few Wayland updates this year will solve and improve a lot of problems.
Like what? Have you got any examples?
- More/better atomic distros, like Silverblue, Kinoite, VanillaOS, etc. Silverblue is already excellent, easy to use, and extremely solid, but there are still some odd rough edges that I think would make it less appealing to new users. When we can offer newbies a practically unbreakable Linux system that does basically everything they want and more, then I think it’ll be easy to recommend. At this point it’s hard to imagine going back to a traditionally updated distro.
- The next steps for PipeWire, which has improved and streamlined audio (and sometimes video) handling and production immensely. I can imagine a future where we can easily send audio, video, MIDI, and all kinds of other data streams between arbitrary programs on Linux, easily routing things with GUI frontends, having connections established automatically, etc. I don’t know how much of this is in the works, but I think PipeWire has a ton of potential left to be explored.
I’m a happy user of Fedora workstation. What makes Silverblue better? I’ve never tried it. I’ve done lots of changes but my system has been rock solid since Fedora 36.
I was on Fedora workstation before switching to Silverblue, and they’re both quite solid, to be fair. The big feature that differentiates Silverblue is immutability: you can’t easily make changes to the base system.
Now, to some people I think that’s going to sound awful, but it has its pros and cons. The biggest benefit being that your base system is solid (and not just solid as in unlikely to break, but literally unchanging over time). Updating your system is effectively replacing it with a different system entirely (delta compressed, so it’s not too inefficient, if I understand correctly), and you can rollback/revert/swap between systems on the fly, in the unlikely event that an update makes something worse, though I haven’t needed to. You can even rebase your Silverblue (Gnome) system into a Kinoite (KDE Plasma) system, pin both “commits” and swap between them. I haven’t tried that though, since I’m pretty happy with the Gnome workflow. Long story short, immutable distros like Silverblue are basically as solid as solid can be.
There are two drawbacks that I can think of, and then a couple of minor nitpicks. The biggest is that you need to restart your system after making changes or installing packages. You don’t need to restart between each package install or anything, but any system-level changes that you make won’t take effect until you restart. The second drawback is that layering packages is not always ideal, and working inside docker/podman containers (often via toolbx/distrobox) is the best way to do some tasks. For example, if you’re a programmer and need to install a lot of dependencies to build some program, I find it’s best to create a “pet container” to work in. That doesn’t bother me much though; in fact I kind of like that workflow.
So basically, it’s probably not for everyone, especially people who really love to tinker and customize everything. But if you want a basically unbreakable Linux machine, it’s worth looking into.
Thanks much for the detailed reply. It’s obviously not for me since I do a lot of tinkering and I’m used to the traditional system. But it definitely should be suitable for some scenarios. Schools and kiosks come to mind.
The base os is immutable, but you can still change configuration files, compile and install local software (but not in the immutable directories), install desktop environment extensions, add custom repositories, etc. You can also layer packages, but most graphical software is best installed as flatpaks (but not mandatory). So it depends on what tinkering means for you. If it means messing around with binaries in the default locations, like /usr/bin, then it’s not for you, but for many other things there is a way, it’s just a matter of getting used to the separation between the immutable base layer and the things that you build around and on top of it.
HDR and HDMI 2.1 support would be nice.
Some TVs don’t have DisplayPort, eh.
And maybe we wanna enjoy 7.1 audio on our fancy Atmos setups.