The Console Is a Lie — Braille on Linux Is a Lovecraftian Joke
Welcome back to "I Want to Love Linux. It Doesn’t Love Me Back." In this post, we’re stepping into the part of Linux that everyone says “just works” — the console. The raw, text-only interface you drop into when X dies, the GUI fails, or you're installing a system from scratch.
Sighted users can rely on it without thinking. But if you’re blind? That safety net comes with holes, duct tape, and a list of arcane incantations.
This is about braille, speech, and the illusion of accessibility in text mode. Not your GNOME terminal, not Konsole, but the real TTY — Ctrl+Alt+F2, system recovery shells, live install environments. The kind of places where graphical desktops are gone and the console is your last, best hope.
Linux claims to support blind users here. It even ships the tools. But using them? Getting speech or braille output when you need it most? That’s a punishing mess of driver quirks, missing defaults, audio stack failures, and layers of modern regression hidden under the surface.
Speakup: The Crutch That's Always on Fire
Let’s talk about Speakup, the kernel-space screen reader that ties into the virtual console. It’s been around forever. It works, or at least can work — if everything is aligned, and you make the right sacrifices to the right deprecated mailing lists.
To Arch’s Credit...
Arch does include Speakup in the install ISO. And it even speaks, provided you're:
- Using a single, sane sound card
- Not dealing with HDMI audio devices masquerading as real outputs
- Not booting on hardware with weird port enumerations
- And able to press the down arrow and Enter to start the talking installer prompt.
That's a lot of caveats for what should be basic functionality. But I'll give Arch credit: it tries, and it's the only distro that lets me manually bash something together that mostly works. Not by default, not out of the box, but at least not actively hostile to the idea.
Debian Tries Too… and Actually Remembers You Exist
Say what you will about Debian (and I have), but credit where credit’s due: it’s one of the very few distros that makes a serious, working effort to support blind users during installation — not just speech, but actual device selection.
Boot the installer, press `s`, and Speakup comes up. But here’s where Debian stands alone: if you’ve got multiple sound cards — including those HDMI audio traps that pretend to be speakers but don’t actually output anything — Debian asks you which one you want. And then it saves that choice. That’s not just a quality-of-life feature. That’s a life-saving one.
It means I can say, with confidence:
“If I don’t hear anything, the system didn’t boot or this hardware’s unsupported.”
Debian treats the installer like it matters, because for a blind user, it does. It’s not a graphical toy that you’ll only run once. It’s the first time the OS proves it knows you exist. And Debian actually shows up.
After installation? It still enables Speakup via a systemd service — and that’s more than most. But here’s where Debian gets tripped up by the ecosystem: the moment you hit a login prompt, you enter a session with user-locked audio. This isn’t Debian’s fault. It’s the fault of PulseAudio, PipeWire, and the entire philosophy of session-bound audio daemons that don’t care what the kernel is doing.
So Speakup’s still running. It’s loaded. It’s waiting. But your login session closes the door and walks away with the audio keys. And now you’ve got a working screen reader screaming silently into the void.
BRLTTY: The Underrated Backbone — and a Source of Pain
Let’s talk about BRLTTY. It’s one of the most powerful accessibility tools on Linux, and somehow one of the least discussed. This is its first proper appearance in the series — and that’s a problem, because BRLTTY is essential.
It’s the tool that makes braille work. It supports dozens of braille displays over USB, Bluetooth, and serial connections. It can drive displays directly from the TTY, pipe input through speech, integrate with graphical screen readers like Orca, and even act as a bridge between user-space applications and low-level terminal buffers. BRLTTY is the thing that lets blind and deafblind users actually use a Linux system — not just when it’s booted, but when it’s broken, when it’s offline, when the network’s down and your last hope is a blinking cursor in a recovery shell.
Especially for deafblind users, BRLTTY isn't a luxury — it's the entire UI. When speech fails, you can fall back to braille. When graphics fail, you can fall back to the console. But when braille fails, and you can't hear output, and you can't see the screen, you're done. There is no fallback. There is no workaround. If BRLTTY doesn't come up clean, or it binds the wrong port, or it races with udev and loses — that's it. You're locked out.
And it does fail:

- On non-English locales, BRLTTY will frequently misrender or drop characters outright.
- When switching to a graphical session, it often says "screen not in text mode" — which is accurate, but not helpful.
- Sometimes it just gives up and says "no screen." No diagnostic. No explanation. Just that.
- Integration with Orca can break silently. No crash, no error message, just a display that doesn’t update.
I’ve debugged BRLTTY systems for other blind users before. In one case, a deafblind person was completely locked out of their system after a routine upgrade. BRLTTY was running — sort of. The daemon was active, but the display was frozen. No output. They couldn’t read logs, couldn’t even tell if the system had booted. They had to get another blind user to SSH in, trace the USB detection, discover that `modemmanager` had eaten the braille port, and rip it out manually. That's the kind of debugging you can't do if you're the one who needs the accessibility support.
And then there’s BRLTTY’s habit of eating serial and USB ports. Plug in an audio interface or communication device, and it might not appear at all — because BRLTTY bound to the port first. The solution? According to some documentation and forum threads: just disable BRLTTY. Right. Turn off the screen reader for your only input/output device. Perfect.
Want BRLTTY to work reliably at boot? You need to:
- Add it to your initramfs with the right module options and dependencies.
- Write `udev` rules that avoid conflicts.
- Disable `modemmanager`, and maybe even `bluetoothd`, depending on your device.
- Ensure your kernel has support for the USB-to-serial adapter in question, loaded early.
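To make the udev step concrete, here is a sketch of the kind of rule involved. `ID_MM_DEVICE_IGNORE` is ModemManager's real opt-out property; the rule filename and the USB IDs below are placeholders (check your display's actual IDs with `lsusb` first):

```
# /etc/udev/rules.d/99-braille-ignore.rules  (hypothetical filename)
# 0403:6001 is a placeholder vendor:product ID; substitute your display's.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ENV{ID_MM_DEVICE_IGNORE}="1"
```

Reload with `udevadm control --reload` and replug the display. If ModemManager still wins the race, `systemctl mask ModemManager` removes the contender entirely, at the cost of mobile-broadband support.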
This isn’t optional extra functionality. This is core accessibility infrastructure. And yet Linux treats it like an afterthought — something for edge cases, or niche embedded setups. There's no standard distro integration. There's no fallback detection. There's no sanity checking if the configured device fails. Just silence.
BRLTTY is powerful. It has scripting, speech integration, braille table customization, and the ability to control the system at a level most sighted users never touch. It deserves full-time support, upstream coordination, and testing pipelines. Instead, it gets relegated to a handful of package maintainers and bug reporters in the dark.
Linux doesn’t support braille. It ships BRLTTY and prays it starts.
Fenrir: Surprisingly Capable — If You Survive the Kernel Bugs
Enter Fenrir, the Python-based userland console screen reader that was supposed to be the modern alternative to Speakup.
And you know what? It’s actually pretty great.

- It supports scripting.
- It works with BRLTTY for braille output.
- It’s easy to configure once installed.
- And unlike Speakup, it lives entirely in user space, so you don’t need to mess with kernel modules.
If — and this is a big if — it’s included in your distro’s repositories and your kernel upgrades don’t suddenly lock up your keyboard when it’s active, then Fenrir can deliver a genuinely usable console experience. That’s not theoretical. That’s from daily use.
But here’s the problem: you still can’t use it to install a distro, or boot into a live ISO, or rescue a broken system. Because:
- It’s not included in any installer by default.
- It needs a working Python environment, audio, shell, and sane locales.
- It doesn’t autostart without user setup.
So even though it’s modern, modular, and capable, it’s still gated behind a successful install — which, for a blind user, is the part we need help with the most.
Fenrir deserves praise for being actively maintained, scriptable, and clean. But like every other tool in this space, it suffers from the absence of upstream integration and predictable defaults. It’s the best tool we have — and it still requires more bootstrapping than most people realize.
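To give a sense of how small the missing pieces are: the autostart gap, at least, closes with one unit file. This is a sketch, not an official Fenrir unit; the binary path and unit name are assumptions, and some packages ship their own service, so check first:

```
# /etc/systemd/system/fenrir.service  (hypothetical; adjust ExecStart to your install)
[Unit]
Description=Fenrir console screen reader
After=sound.target

[Service]
ExecStart=/usr/bin/fenrir
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then `systemctl enable --now fenrir.service`. That is roughly all the upstream integration anyone would have to ship, which is rather the point.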
The Console Can Be the Whole System
This is true whether you're using Speakup or Fenrir. Once you're past the installation and configuration barriers, the Linux console can become far more than a fallback. It can be a complete and powerful daily environment for blind users — and for me, it was. For years.
- You can browse the web with tools like `edbrowse`, `w3m`, or `lynx`.
- Handle email through `alpine` or `mutt`.
- Write and code inside `vim`, `nano`, or `emacs`, with full screen reading.
- Use `emacspeak` as an entire self-voicing interface — giving you a speech-first experience across text editing, email, file management, and more.
This isn’t just theoretical. The console was my entire system. Not a backup. Not a novelty. A real workspace — fast, reliable, and built for speech.
The TTY is efficient and free from the visual clutter and accessibility regressions that plague modern desktops. Eventually, I moved away from the console — not because it failed me, but because I wanted to try something more modern: a full desktop with graphical applications, real-time collaboration tools, and the kind of integrated software that still doesn't exist in TTY land. But I left the console behind reluctantly. It had been my whole world, and it worked. It’s not some legacy curiosity. It’s a platform that should be first-class for blind users. And it nearly was.
What I Gained — and What I Lost
Switching to a graphical desktop gave me things I couldn’t easily replicate on the console:

- Real-time collaboration through web apps and chat clients.
- Full-featured graphical browsers that could handle modern, JavaScript-heavy websites.
- A larger ecosystem of mainstream software, from productivity suites to communication tools.
That’s what I gained: access to the same digital world as everyone else.
But I lost things, too:
- The speed and responsiveness of the TTY.
- The simplicity and stability of a system where every part was speech-controlled and scriptable.
- The ability to fix a broken system from the inside, without needing to boot external media or beg for sighted assistance.
Most of all, I lost confidence. On the console, I knew every part of the stack. I could trust it. With a desktop, I’m always watching out for the next GNOME regression, the next broken AT-SPI hook, the next time a browser update silently kills accessibility.
Modern Linux desktops offer a broader world — but the console, when it works, offers control. And no blind user should be forced to choose between the two.
This is the tension at the heart of blind accessibility on Linux: the GUI is advancing in features, while the TTY is decaying from neglect. And yet the console remains the most stable, most direct, most reliable interface — when it’s allowed to work. That’s what makes all of this so heartbreaking.
We don’t need miracles. We just need the console — the one place Linux has always claimed to be universal — to actually include us.
This isn’t just about accessibility. This is about power and independence — being able to administer servers, work offline, or troubleshoot real-world problems with tools that speak to you on your terms. And it’s infuriating that we are still this close to having that experience work out of the box, but only if we fight for it, patch it, and guard it from every system update like it’s a fragile shrine.
The Chicken-and-Egg Hell
Let’s say you’re blind, you want to install Linux, and you want both speech and braille at boot. Here’s what you have to do:
- Hope your distro’s ISO even boots with audio or braille output.
- If not, remaster it.
- Preinstall BRLTTY with exact driver configs.
- Pre-enable Speakup.
- Choose between espeakup (which may fail with PipeWire or PulseAudio) or Fenrir (which won’t run unless you’re already inside a shell).
- Fix udev, fix systemd ordering, patch initramfs, and disable audio user-locking.
All just to get a text prompt you can hear or feel.
And every part of that pipeline is fragile, undocumented, and increasingly incompatible with “modern Linux.” There are no presets. No safe defaults. And absolutely no respect for the fact that blind users might need to use the TTY when their desktop breaks.
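For comparison, here is what the speech half of that pipeline reduces to on a distro that actually ships the pieces, as Debian does. A sketch, assuming the `speakup_soft` module and the `espeakup` service are installed, and assuming espeakup survives the session-audio fight covered in the sidebar below:

```shell
# Load Speakup's software-synth connector and start the espeak bridge.
modprobe speakup_soft
systemctl enable --now espeakup

# Persist the module across reboots:
echo speakup_soft > /etc/modules-load.d/speakup.conf
```

Four lines. That is the entire distance between "silent" and "talking" on a cooperative system, which makes the fragility everywhere else all the more absurd.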
SIDEBAR: Why Session-Locked Audio Screws Blind Users (Even When It Can Be Fixed)
Modern Linux audio — PulseAudio, and now PipeWire — isn’t just “sandboxed” in the Flatpak sense. It’s session-locked by design. Only processes running inside the currently active user session are allowed to produce audio output. That’s fine for desktop apps. It’s a disaster for kernel-level screen readers like Speakup.
Speakup vs. the Session Wall
When you boot, Speakup runs outside your user session. It tries to speak using `espeakup`, which outputs through ALSA. But once PulseAudio or PipeWire grabs the audio stack, only the current user's session is allowed to use it — and Speakup isn't in that session. Result: silence.
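You can see the wall directly. As the logged-in user, the Pulse client finds the session daemon through `$XDG_RUNTIME_DIR`; as root, there is no session and nothing to connect to. A rough transcript, with exact strings varying by version and setup:

```
user$ pactl info | head -1
Server String: /run/user/1000/pulse/native

root# pactl info
Connection failure: Connection refused
```

Same machine, same speakers, one command apart. The kernel-side screen reader lives on the wrong side of that refusal.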
The PipeWire Fix (Yes, It Works)
PipeWire is actually flexible enough to fix this. The Fenrir screen reader project provides a script that:
- Adds a second Pulse-compatible socket at `/tmp/pulse.sock`, accessible to root processes.
- Sets PipeWire and WirePlumber configs to keep sinks alive, prevent suspend, and avoid TTY-related disconnects.
- Adds a `client.conf` for root, telling system processes to use that alternate socket.
Run it once as your user and once as root. Done. You now have working audio for both the console and graphical environment — simultaneously.
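I won’t reproduce the whole script, but the heart of the trick is small enough to show. The option names are real PipeWire and PulseAudio client settings; the drop-in filenames and the `/tmp/pulse.sock` path follow the script’s convention as I understand it, so treat the specifics as an assumption:

```
# ~/.config/pipewire/pipewire-pulse.conf.d/10-extra-socket.conf  (user side)
# Tell pipewire-pulse to listen on a second socket besides the session one.
pulse.properties = {
    server.address = [ "unix:native" "unix:/tmp/pulse.sock" ]
}
```

```
# /root/.config/pulse/client.conf  (root side)
# Point root's Pulse clients (espeakup among them) at the user's extra socket.
default-server = unix:/tmp/pulse.sock
```

That’s it: two tiny files to undo a design decision nobody made on purpose.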
I didn’t learn this from upstream docs. I learned it from a reader of my first blog post who figured it out and shared it. That’s the story of Linux accessibility in a nutshell.
The PulseAudio Maybe-Fix
Fenrir also includes a similar script for PulseAudio. It does the same dual-socket trick: one config for the user to serve the audio, and one for root to connect to it. It might work. I haven’t tested it deeply — and honestly, Pulse is older, more fragile, and less predictable.
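For the record, the PulseAudio flavor of the dual-socket trick looks roughly like this. `module-native-protocol-unix` and its `socket`/`auth-anonymous` options are real PulseAudio features; `auth-anonymous=1` is what lets root connect without the user’s cookie, and it is a deliberate security trade-off on shared machines:

```
# Appended to the user's ~/.config/pulse/default.pa
load-module module-native-protocol-unix socket=/tmp/pulse.sock auth-anonymous=1

# /root/.config/pulse/client.conf
default-server = unix:/tmp/pulse.sock
```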
Still: someone in the community thought to do this. Upstream didn’t.
The Real Problem
None of this should be necessary. There is no standard way to say:
“This process is accessibility-critical. Let it speak.”
No XDG spec. No PipeWire setting. No distro guideline. Nothing.
So when a blind user logs in, modern Linux says:
"Welcome, human! You may now hear sound."
And to Speakup, the one thing that’s been speaking this whole time:
"Shut up. You're not part of the session."
Blind users? Accessibility? System-wide speech?
Still treated as noise.
The Brutal Truth
If you're blind, here's what Linux offers you in 2025:

- Speech output in the console, maybe, on some distros, under strict conditions.
- Braille output at boot, if you manually configure it in the initramfs.
- User sessions that break accessibility by default.
- Screen readers that only work after install, when it's already too late.
- And an entire ecosystem that pretends this isn’t a problem.
Console accessibility isn’t a solved problem. It’s a decaying pile of partial solutions that only work if you never make a mistake, never change hardware, and never let your audio stack change.
I want to love Linux. I want to trust it to recover a broken boot, install a headless server, or drop into a rescue shell when everything else fails. But unless I build it all myself — and know every damned quirk of every component — Linux will leave me blind and locked out of the very tools it pretends are accessible.
hi, thanks for this post - really enlightening for a long-term (sighted) linux user. one nit-pick: all the bulleted lists in this post are broken. assuming this is rendered from markdown, you're probably missing an empty line in front of each list.
Well, Linux has the potential to be one of the most accessible OSes, so it is sad that this potential is not realized.
I think those are all legit problems. I can't say anything about the state of Wayland accessibility (wayland-protocols etc., but see also https://github.com/splondike/wayland-accessibility-notes/blob/main/talon-requirements.md).
For the console there may be solutions. I think the first thing would be to allow in-kernel ALSA to mix multiple clients (so PipeWire in the user session and espeak in a root session could coexist), and also to add support in ALSA for outputting on all devices, so apps that want that can do it (so espeak would not have problems outputting audio even with a "fake" HDMI audio output).
Also, a good idea to implement would be an "accessibility file" that would be visible inside Flatpak and also copied to the initramfs, for example /.accessible-system.conf, so apps could enable accessibility by default if it exists.
It could be ~/.accessible-system.conf; if that does not exist, check /.accessible-system.conf.
The installer could just create that file, and software could fall back to it (or systemd could create it after the first login to the console when some key is pressed, for example pressing Shift 5 times to enable accessibility).
That file could list types of disability separated by commas, for example b,d (for blind and deaf), so software could use it; if a letter is unrecognized, just ignore it (so future disabilities could be added). On Debian, apt could auto-install accessibility software in that case.
That is my idea, which maybe would help, though the Wayland problem in the GUI remains (but I guess the console would be mostly solved).
Have you tried alternative operating systems that may have a better or more cohesive sound design that's less prone to breakage? Like FreeBSD with OSS, virtual_oss, and mixer; OpenBSD has sndio and sndioctl. Ports (software that gets ported from Linux and other places to these operating systems) sometimes use Linux's systems instead of their own audio systems, but I think OpenBSD ports are a little better about keeping to native solutions. On FreeBSD there has been some action taken toward improving future accessibility for the console: https://reviews.freebsd.org/D35772 https://reviews.freebsd.org/D35754 https://reviews.freebsd.org/D35776 https://freebsdfoundation.org/project/vision-accessibility-subsystem-for-freebsd/ https://lists.freebsd.org/subscription/freebsd-accessibility
Nice article about an unfortunate challenge. Looking forward to reading some of your opinions on at-spi or the lower level a11y stack generally if you have experience in that space.
Hi,
Thank you for these posts.
I’m developing a CLI-based browser myself. The goal is to type a command and get the page on stdout. To follow a link, you simply type the number next to that link.
The browser is called offpunk: https://offpunk.net
I would be very interested in feedback from blind users, so I can make it better for blind people.