I went to bed last night. I woke up this morning and Mastodon had eaten my app. I spent the morning reading notifications. I spent the rest of the day building. This post is from the other side of that.
Why Touchscreen Keyboards Are Hell
Blind people have a complicated relationship with typing on touchscreens. It isn't exotic — it's a design problem the industry decided wasn't worth solving properly, and then didn't solve properly, for a long time, and the accumulated weight of that decision is something every blind smartphone user carries around and mostly doesn't talk about because at some point you run out of energy for explaining to sighted people why the thing they find intuitive is not intuitive for you.
The short version: a touchscreen keyboard is built around the assumption that you can see it. You look at the key you want. You tap it. You look at what appeared. The entire feedback loop runs through your eyes, and if your eyes aren't in the loop, the keyboard doesn't stop existing but it stops being usable in the same way.
There are options. Explore-by-touch is the default — the screen reader reads the key label when your finger lands on it, waits for a double tap or a second finger to confirm, keeps you from typing a string of random characters every time you touch the screen. It works the way that doing everything one-handed works when someone's tied the other hand behind your back. Technically functional. Painfully slow. iOS also has direct touch typing, which lets you type exactly the way a sighted user would, with VoiceOver announcing each key as you tap it — faster, but you're relying entirely on muscle memory and spatial awareness to hit the right targets without looking. Both iOS and Android support lift to type, where you explore the keyboard and lift your finger to commit to whatever key you're currently on, which removes the double tap but keeps the hunting. These things exist. They work, after a fashion. The autocomplete on both platforms is still not good enough to close the gap, Gboard is not good enough to close the gap, nothing on the market right now is good enough to close the gap in a way that makes touchscreen typing feel like something designed for you rather than something you've learned to survive.
Swipe typing exists too, and I'll briefly address the people already composing the reply that says well why don't you just use swipe typing, because swipe typing was also designed around vision. You look at the keyboard, you find the first letter, you drag to the next letter, you lift. You can do that without looking at the keyboard if you've memorised the layout in enough spatial detail to draw a path across it by feel, and some blind users do, and it's impressive, and it is still a compensatory behaviour for a system that wasn't designed with you in mind. It works better than it has any right to. It is not the same thing as a keyboard designed from the beginning to be used without vision.
That keyboard, the one designed from the beginning, was FlickType. It existed on iOS. The gesture model was simple in the way that good ideas are simple: five possible inputs per key — tap, flick up, flick down, flick left, flick right — distributed across the character set in a way that made frequent letters easy to reach. It was built around the fact that you couldn't see it rather than working around it, and the difference was palpable — it was fast in a way that felt earned rather than accidental, and it felt like something designed for a human being who happened to be blind rather than a workaround designed for a human being who was expected to be sighted.
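For the curious, the core of a gesture model like that can be sketched in a few lines. This is illustrative Java, not FlickType's or TapType's actual code — the enum names, the threshold value, and the dominant-axis rule are all my own simplifications of the general idea:

```java
// Illustrative sketch of a five-inputs-per-key gesture model: a touch
// that begins on a key resolves to a tap, or to a flick in one of four
// directions, based only on how far the finger moved between touch-down
// and lift. The threshold value is a made-up example.
public class FlickClassifier {
    public enum KeyInput { TAP, FLICK_UP, FLICK_DOWN, FLICK_LEFT, FLICK_RIGHT }

    public static KeyInput classify(float dx, float dy, float flickThreshold) {
        // Movement shorter than the threshold counts as a plain tap.
        if (dx * dx + dy * dy < flickThreshold * flickThreshold) {
            return KeyInput.TAP;
        }
        // Otherwise the dominant axis decides the flick direction.
        // Screen coordinates grow downward, so negative dy means "up".
        if (Math.abs(dx) > Math.abs(dy)) {
            return dx > 0 ? KeyInput.FLICK_RIGHT : KeyInput.FLICK_LEFT;
        }
        return dy > 0 ? KeyInput.FLICK_DOWN : KeyInput.FLICK_UP;
    }
}
```

The point of the design is that none of this needs vision: five distinct outcomes per key, all distinguishable by feel alone, multiplying a small number of easy-to-hit targets into a full character set.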
FlickType is gone now, shut down, no longer available, and I was one of the people who used it. I know what it felt like to have it and I know what it felt like when it wasn't there anymore. The people who messaged me asking for an iOS port are not describing an abstract inconvenience — they're describing the specific loss of something that worked, that was built for them, that has no replacement and no obvious path to one.
FlickType was on iOS. I have an iPhone. I've had iPhones on and off since the 3GS, which means I've watched iOS evolve from the inside as a blind user across a significant stretch of its history, and I've watched Android evolve from the inside across the same period, and I keep coming back to Android. Not because I don't know the alternative. Because I know both and I keep making the same choice. The response to any Android accessibility complaint, delivered with metronomic reliability by people who are trying to help and sometimes by people who are not, is that I should just get an iPhone. I have heard this so many times that I have developed a bone-level weariness about it. I have an iPhone. I use it. I know what VoiceOver does well and I know where it falls short, and I keep landing back on Android because the things I gain there matter to me more than the things I lose. It is a decision I have made repeatedly, with full information, over many years.
FlickType was one of the things I lost when I was on Android. I built TapType partly out of that loss, and partly out of the specific frustration of knowing the problem was solvable and watching nobody solve it.
How TapType Got Made
The conventional account of how a solo developer builds something — the idealised montage of focused effort resolving cleanly into a finished product — doesn't match the experience, and the gap between the account and the reality is part of why the overnight response landed so strangely.
It started as frustration that curdled into a decision. I had been using the available Android keyboards for long enough to know exactly what I hated about each of them and why none of them were going to get better in any timeframe that mattered to me. I have a particular kind of tolerance for bad tooling — I'll put up with a lot if the workarounds are manageable — and when that tolerance runs out, it runs out hard. I stopped being willing to make do. I started thinking about what it would take to build the thing I actually wanted.
The answer was evenings and weekends. The specific kind of sustained irritation that is, if you're a developer, also a kind of fuel. I know Android development. I know how input method services work — this is the layer of the Android API that keyboards live on, and it is not the most enjoyable API the platform exposes, but I had a clear enough picture of what I needed from it that the learning curve was more annoying than blocking. The gesture recogniser came first, because the gesture recogniser was the actual new idea, the thing that wasn't just assembling existing parts. I needed to turn five directional inputs per key into reliable character selection in the presence of a screen reader, which meant accounting for the ways that screen reader touch handling modifies what the system sees before your code sees it, which meant testing gestures that felt right to me and discarding ones that didn't, which meant building something that had to work without any visual feedback because I was the user and I don't have visual feedback.
This is the thing people miss when they think about accessibility-first development: it's not that you remove the visual interface and add audio. It's that you redesign the interaction model from the assumption that audio and touch are primary, and vision either isn't present or isn't reliable, and every decision flows from that. When I was testing the gesture recogniser, I was testing it by feel. When I found a gesture that was unreliable — too easy to trigger accidentally, too similar to something else on the layout — I discarded it and tried something else, and the criterion for something else was always whether it felt right to me, because I was the test case, because the only user I knew existed was me.
The layout came after the recogniser. The keyboard face came after the layout. Then the integration work, which is where everything that looks easy from the outside becomes a problem.
The TalkBack Problem
The problem is TalkBack. TalkBack is Android's primary screen reader and it owns touch input. When TalkBack is running, it intercepts your touch events before they reach your application, because that interception is how it reads elements aloud and manages focus — it needs to see the touch before anything else does. This is fine and correct and necessary when you're navigating an interface. It is a significant obstacle when you're trying to build a keyboard that does its own gesture handling, because TalkBack and your gesture recogniser both want to be first in line for the same touch events, and TalkBack is not in the habit of yielding.
The standard solution, if you can call it that, is to tell the user to disable TalkBack. Think about what that means. You are building an accessible keyboard — a keyboard specifically for people who use screen readers — and your solution to the screen reader compatibility problem is to tell the user to turn the screen reader off. This is the approach taken by most third-party accessible keyboards on Android. Braille input methods that aren't Google's own. Accessible games that implement their own gesture handling and TTS. Turn off TalkBack to use the accessible app. The screen reader is not an optional layer you can peel off when it's inconvenient. It is the thing that makes the device usable. Telling a blind user to disable their screen reader to use your accessible keyboard is the software equivalent of telling someone in a wheelchair to stand up to use your accessible entrance. I was not going to do that.
What TapType does instead is bypass TalkBack's focus handling for the keyboard region specifically, while leaving TalkBack fully active everywhere else. This is not a single keyboard trick — it's two pieces of software. The keyboard itself, and an accessibility service running alongside it, watching the accessibility tree, waiting for the keyboard region to become the focused region. When it is, the service registers a gesture passthrough, and touch events in the keyboard's space stop being TalkBack's problem and become the keyboard's problem. TalkBack stays on. The interface above the keyboard stays accessible. The keyboard does its own thing in its own space.
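Android does expose accessibility-service APIs along these lines — setTouchExplorationPassthroughRegion on AccessibilityService, from API 30, is the documented way to carve a touch passthrough out of a screen reader's territory — though I'm describing the shape of the approach here, not publishing my implementation. Stripped of the Android calls, the bookkeeping the service has to get right looks something like this (illustrative Java; the method names are mine):

```java
// Illustrative model of the passthrough lifecycle, with the actual
// Android registration calls replaced by comments. What it captures is
// the invariant: the passthrough exists only while the keyboard is both
// visible and accessibility-focused, and dismissal always tears it
// down, even on paths where no focus-change event ever arrives.
public class PassthroughController {
    private boolean keyboardVisible = false;
    private boolean passthroughActive = false;

    // Called when the accessibility tree reports focus entering or
    // leaving the keyboard region.
    public void onKeyboardFocusChanged(boolean focused) {
        if (focused && keyboardVisible && !passthroughActive) {
            passthroughActive = true;  // register the passthrough region here
        } else if (!focused && passthroughActive) {
            passthroughActive = false; // clear the passthrough region here
        }
    }

    public void onKeyboardShown() { keyboardVisible = true; }

    // Dismissal must clear the passthrough unconditionally; otherwise
    // the dead zone outlives the keyboard it was created for.
    public void onKeyboardDismissed() {
        keyboardVisible = false;
        passthroughActive = false;     // clear the passthrough region here
    }

    public boolean isPassthroughActive() { return passthroughActive; }
}
```

The interesting part is not the happy path — it's that every exit path has to converge on the same cleanup, which is exactly the part that's easy to get wrong.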
One of the approaches the documentation points you toward is setting the keyboard view's importantForAccessibility attribute to "no" — telling the accessibility system to ignore this region entirely, so TalkBack never tries to focus it. This works right up until the user accidentally navigates away from the keyboard and then can't get back in, because TalkBack can't focus a region it's been told doesn't exist for accessibility purposes. The keyboard is there. It's drawing on screen. From TalkBack's perspective it is not there, which means the user is blind and stuck and the thing they need to type with has become unreachable. The documentation presents this as a solution. It is a solution whose failure mode is, for the specific user base TapType is built for, completely unacceptable.
So TalkBack needs to be able to focus the keyboard, which creates the other problem: when TalkBack focuses the keyboard, it eats every touch event in that region. Every tap. Every gesture. Consumed by the accessibility layer before the keyboard's gesture recogniser ever sees it. The symptom, from the user side, is that you tap into the keyboard and instead of anything happening, TalkBack announces "taptype keyboard." Every tap. Just the keyboard announcing its own name back at you, over and over, confirming that you are in the right place and doing absolutely nothing else.
The accessibility service passthrough is what threads this needle. TalkBack can focus the keyboard — navigation works, the user can get back in — but once focused, the service intercepts and events flow through to the gesture recogniser. Except when it fails to register, in which case you get the announcement loop. And except when it fails to stop, which is the failure mode that bothered me most: the keyboard dismisses, the passthrough doesn't clean up properly, and now the region of the screen where the keyboard was is a dead zone. TalkBack can't reach it. Taps fall straight through to the application behind it. The keyboard is gone but its ghost isn't, and there's a rectangle of screen that has been effectively excised from the accessible interface, invisible, unreachable, and you're blind so you can't see where it is or that it's there at all. You just know that part of the screen isn't behaving correctly and you don't know why. That bug took a while to find.
One more thing about what TapType actually is: it doesn't draw keys. There is no visual keyboard rendered on screen in the traditional sense. The gesture layer and the drag-to-type feature — which exists, for situations where accuracy matters more than speed — run entirely on X/Y coordinates and the keyboard's own TTS and gesture handler. The keyboard knows where everything is spatially. It announces what's there. It handles what you do. The visual representation is minimal because the visual representation is not the point. I am not building a keyboard for people who look at their keyboards. I am building a spatial model of a keyboard for people who navigate by feel and audio, and the fact that it doesn't look like much is a feature of what it's trying to be, not a gap in the implementation.
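A keyboard that exists only as a spatial model is simpler than it sounds. The sketch below is illustrative Java, not TapType's layout code — the row arrangement, the uniform grid, and the names are invented for the example — but it shows the essential move: a touch at (x, y) resolves to a key by arithmetic alone, with nothing drawn and nothing to see:

```java
// Illustrative sketch of a purely spatial keyboard layout: keys are
// rows of labels mapped over a rectangle of screen, and hit-testing is
// arithmetic on coordinates. A real layout would announce the result
// via TTS rather than return it.
public class SpatialLayout {
    private final String[][] rows;
    private final float width;
    private final float height;

    public SpatialLayout(String[][] rows, float width, float height) {
        this.rows = rows;
        this.width = width;
        this.height = height;
    }

    // Map a touch coordinate to the key label under it, or null when
    // the touch falls outside the keyboard rectangle.
    public String keyAt(float x, float y) {
        if (x < 0 || y < 0 || x >= width || y >= height) return null;
        int row = (int) (y / (height / rows.length));
        int col = (int) (x / (width / rows[row].length));
        return rows[row][col];
    }
}
```

Everything else — the flick handling, the announcements, drag-to-type — sits on top of a lookup like this, which is why the absence of drawn keys costs the design nothing.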
The documentation for all of this is not absent. Absent documentation leaves you in the dark and you proceed carefully, testing everything. Misleading documentation gives you false confidence — you implement what it describes and the implementation is broken in ways you can't immediately diagnose because as far as you can tell you did what it said. Some of what I found was technically correct and completely wrong — accurate descriptions of behaviour that, combined, pointed toward implementations that failed in the specific ways I've described. I found the problems because I felt them. When the passthrough failed, I was the one sitting there hearing "taptype keyboard" on every tap. When the ghost region appeared, I was the one finding the dead zone with my finger. The feedback loop is short in a way it cannot be for a sighted developer adding an accessibility layer to something they built for themselves, and I want to name that as the advantage it is, because it is inseparable from the thing I'm compensating for.
When it was working well enough that I was actually using it for real tasks I cleaned it up. I put the releases on GitHub, not the source, and called it done.
Why I Expected Nothing
The expected response to what happened next is you must have been so excited, and the honest answer is that I wasn't, quite, and explaining why that's not ingratitude requires explaining what my expectations actually were.
My expectations were calibrated by experience. I have released things into the blind Android user community before. I know the shape of that market, if market is even the right word for a community of people trying to solve shared problems with limited resources and limited industry attention. The shape is: small, already stretched thin across the tools they're maintaining and the advocacy they're doing. When I've released things before, they've accumulated installs from people I already knew, generated some genuine appreciation, and then plateaued. That's not a failure. That's the realistic shape of the thing. I had built TapType for myself. If it turned out to be useful for some number of other people in the same situation, that was the upside case, and the upside case had historically been modest.
I put it up. I posted about it in the places I post about things. I went to bed.
Waking Up to the Pile
I reached for my phone. My phone is my alarm and my calendar and my work communication and, more fundamentally than any of that, it is the primary interface through which I interact with the digital world. The phone is not a convenience, it's load-bearing. When I pick it up in the morning it is because I need to know what the morning holds, and reading notifications is not optional the way it might be optional for someone who can also glance at a laptop or a monitor or a sheet of paper on the kitchen counter. I unlocked it. I navigated to my notifications.
There's a texture to a normal overnight notification accumulation — the rhythm of a few things arriving at intervals, the predictable sources, the manageable volume. This was not that. This was a notification list that had been added to by many different people across many different platforms within a compressed window of time, and even before I started reading the individual items, the shape of it told me something had happened.
I started reading. Mastodon: mention. Boost. Boost. New follower. Mention. Boost. Mention. Boost. New follower. Boost. WhatsApp: a message from someone I know, then a message from someone I sort of know, then a message from a number I don't recognise at all. Telegram: a thread that had already been going for hours in a group I'm in where someone had dropped a link and a conversation had started without me. Discord: similar. More Mastodon. More WhatsApp. By the time I'd read enough to understand what was happening, I was fully awake in the way that adrenaline makes you fully awake, which is not a pleasant kind of awake regardless of whether the news is good or bad.
What strikes me about it, reading back, is how different the texture is across those platforms. Mastodon is boosts and mentions, a public record of something spreading. WhatsApp is personal — people messaging me directly, people who have my number, people who felt like this was a "you need to see this" conversation rather than a public broadcast. Telegram and Discord are crowds, people talking to each other about a thing I made, in spaces I'm a member of but wasn't present in when the conversation was happening, which produces the particular strange feeling of reading a discussion about yourself that you weren't invited to.
Someone had tried TapType from my post on Mastodon and posted about it on Telegram. Someone else had posted about it on their own timeline. The federated architecture means I can't reconstruct the chain from where I'm sitting — a boost crosses instance boundaries and disappears into feeds I have no visibility into, where it gets boosted again, and by the time the ripple reaches me the origin is unrecoverable. What I know is that by the time I was sitting up in bed, something I'd built had already been seen and assessed and shared and argued about by people I'd never met, and all of their questions and requests had accumulated while I was unconscious, and now they were waiting.
The Requests
Some of it was bug reports, real ones with enough detail to be useful, and I'm not going to complain about those — people found something broken and told me about it, which is the correct thing to do. What I'm sitting with is the geometry of them. Some describe issues on hardware I own, versions I tested, behaviour I can reproduce right now if I pick up my phone. The rest describe failure modes against devices I've never touched: manufacturer skins that modify how the Android accessibility stack behaves, screen sizes outside what I tested, Android versions I've never flashed. Error conditions that emerged from the collision of my code with environments I didn't know existed, because I built this for one environment, which was mine. How do you fix a bug you can't reproduce? You guess and sometimes you're right and sometimes you introduce new problems in paths the reporter didn't test. You can ask them to test a build, which turns someone who came to you with a problem into an unpaid QA volunteer. There is no option that is fast and free and guaranteed.
The feature requests are where it gets interesting, because the ones from blind users and the ones from sighted users describe completely different pieces of software. The blind user requests I can evaluate — screen reader compatibility questions I can answer from my own experience, braille keyboard integration questions I understand because I use TalkBack specifically for braille screen input, questions about whether specific gestures will conflict with the keyboard's own handling. They're asking in a language I speak, toward outcomes I can reason about. Then there are the other requests. Themes. Haptic profiles. Swipe-to-type hybrid modes. Someone asked for a GIF picker. Someone asked for a clipboard manager. These are reasonable things to want from a keyboard if you're a sighted user whose problem is speed or personalisation rather than basic usability. They describe an app I have no particular interest in building, being filed against the one I did build, because that's the one these people found.
The language requests are where I felt the gap most concretely. Swahili. Arabic — not just a different character set but a different directionality, right-to-left, which the layout assumptions aren't built for. Japanese, with multiple input systems that don't map onto a flick keyboard in any obvious way. I don't speak these languages, don't know anyone I could test with, and at seven in the morning staring at the pile, every language request felt like someone asking me to build a different version of the app from scratch in a context I had no framework for evaluating. I'll come back to this.
I'm not going to port TapType to iOS. I have an iPhone, I know VoiceOver, I'm not declining from ignorance of the platform — I'm declining because I have no interest in becoming an Apple developer, in App Store review, in the relationship between someone building software and a platform owner who treats that relationship as a revenue stream and a control mechanism. Most of the people asking are former FlickType users, and I understand that. I have been a FlickType user. The loss is real. My sympathy for it doesn't translate into willingness to navigate App Store review on their behalf.
Then there were a couple of comments that landed differently from everything else. Not requests. Comments, in the threads where people were sharing the app, from people saying they wouldn't install it because they were convinced it was malware. No evidence. No specific concern they could name. Just the assertion, made publicly, in front of everyone who came across those threads afterward.
The community has genuine reasons to be cautious about installing software from developers they don't know, particularly software that handles keyboard input. Scepticism is not unreasonable in principle. What bothers me is what this scepticism chose to ignore. The app requests no permissions. Zero — not storage, not network access, not contacts, not location. Nothing. The accessibility service is optional; the keyboard works without it if you're willing to disable TalkBack or don't use it. The actual evidence on the install screen pointed away from the concern rather than toward it. That evidence wasn't engaged with. The only thing that would satisfy them was the source code, which I haven't published, which made its absence the suspicious thing — not the permissions, not the behaviour, not the fact that people are actively using it and reporting bugs rather than reporting that it's doing something it shouldn't.
Publishing the source in response to that would establish something I don't want established: that bad faith public doubt is a lever that moves me, that making a pointed enough accusation gets results. The source decision stands on its own reasons, which I'll get to, but that is not one of them.
Those comments can poison a project in ways that are difficult to recover from, because doubt spreads faster than reassurance and the people who read it and quietly don't install it don't show up anywhere I can measure. They're just not there, in the gap between the download count and what it could have been.
The source decision itself is simpler than people seem to want it to be. I made a deliberate choice not to publish it, and the real reason is merge requests, not scrutiny. I do assistive technology training. I do freelance consulting. When I'm not doing either of those I'm planning the next thing, and when I'm not planning I'm doing the life admin that doesn't do itself when you work independently — invoices, scheduling, equipment, the background noise of life. The hours are spoken for. Merge requests are not the same thing as feature requests. A feature request is someone telling me what they want. A merge request is someone else's labour sitting in my queue, representing decisions they made about my code, with a social clock ticking every day I leave it unacknowledged. I did not agree to that. I released a keyboard, not a project, and the difference between those two things is something I thought the absence of a source directory would communicate clearly enough and apparently did not.
A Day of Work
I spent the day building.
English UK and US layouts. Spanish. German. A full-screen keyboard option. New settings. Action mode — a way to move around, select, copy, cut, paste without leaving the keyboard. Clipboard history.
A day. One day.
The language requests felt impossible in the middle of the notification pile — Swahili, Arabic, Japanese, an endless list of languages I don't speak and scripts I can't test. And some of them genuinely are impossible for me right now, for exactly the reasons I was thinking about at seven in the morning. But UK English, US English, Spanish, German — those I could do. I speak one of those languages natively, have enough familiarity with the others to validate the layouts, and the gesture model generalises cleanly to Latin-script alphabets. The requests I could evaluate quickly were the ones I had the context to evaluate. The requests I couldn't evaluate remain unanswerable for the same reasons they were unanswerable this morning, and that distinction is considerably clearer now than it was when the whole pile looked equally impossible.
The clipboard history is worth a separate note. When I was reading the feature requests that morning, the clipboard manager request read as a sighted user importing the feature expectations of their current keyboard into mine, which is the exact gravity I was worried about. And then I built clipboard history anyway. Not because someone pressured me into it, but because once the noise cleared and I was actually looking at the codebase, it was useful, it was adjacent to what the keyboard already needed to do, and it made action mode better. Evaluating a request in the middle of a wall of noise is not the same as evaluating it after a cup of tea and some toast. Some things that look like scope creep at seven in the morning are just features.
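The structure itself is modest, which is part of why it cleared the bar. Something like this — illustrative Java, with an invented capacity and invented names, not the shipped code — is all a clipboard history fundamentally is: a bounded, most-recent-first list that action mode can page through and read aloud:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of a bounded clipboard history. New copies go to
// the front, re-copying an existing entry moves it to the front, and
// the oldest entry falls off when capacity is exceeded.
public class ClipboardHistory {
    private final Deque<String> entries = new ArrayDeque<>();
    private final int capacity;

    public ClipboardHistory(int capacity) {
        this.capacity = capacity;
    }

    public void onCopied(String text) {
        entries.remove(text);      // re-copying promotes rather than duplicates
        entries.addFirst(text);
        while (entries.size() > capacity) {
            entries.removeLast();  // oldest entry is discarded
        }
    }

    // Most-recent-first view for action mode to step through.
    public List<String> snapshot() {
        return new ArrayList<>(entries);
    }
}
```
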
Not all of them. The GIF picker is still not happening.
How I Actually Feel About This
I love that people like it. That is real and I don't want it to get lost in what comes after: people found a thing I built and it solved a problem for them, and for the blind users specifically — the ones whose frustrations are my frustrations, the ones who needed a keyboard built for them and found one — their responses are the reason I put the app up. That part of this is uncomplicated and good.
But I shouldn't have had to make it. A few hundred people found this keyboard in one night through word of mouth on a platform that is not large. If that many people needed something like this badly enough to download an APK from GitHub within hours of hearing it existed, they needed it before this morning. They needed it last year and the year before that, and they were making do with what existed, which was not built for them, which was never going to get better for them because the people building keyboards are not building them for blind users and the platforms are not demanding that they do. One person got frustrated enough to fix it in his spare time and now there's a keyboard. The day of work I just put in didn't change what that means. The download numbers are not a compliment to me. They are evidence of a failure that predates me and will continue after I've moved on.
The anger is at an industry that has had the resources and the time to solve this problem for years and decided, repeatedly, that the market was too small and the effort wasn't worth it, leaving the people in that market to depend on tools built by individuals who may or may not still be maintaining them next year.
And then there's what happens next, and this is what I'm worried about.
The Flexy Warning
Before FlickType, there was Flexy. This is not ancient history, and it is the story I keep returning to when I look at the feature requests.
Flexy was a keyboard app for iOS, and blind users loved it in the specific intense way that you love a tool that was built for you on a platform that mostly wasn't. They loved it enough to advocate for it. They loved it enough that it became part of the argument — a concrete, working example — for why Apple needed to open iOS to third-party keyboards at all. The blind community, using the existence of Flexy and its usefulness and the need it demonstrated, helped push Apple toward opening an API that would change what iOS keyboards could be for everyone. That argument worked.
Apple opened the third-party keyboard API.
And then Flexy discovered that there were a lot more sighted users than blind users.
This is not a unique or incomprehensible business decision. When a platform opens up and a developer sees an enormous addressable market that their product could serve, the pressure to serve that market is real and the logic of serving it is legible. I understand how that decision gets made. What I cannot do is look at the outcome — at the blind community that had championed this app, that had in some meaningful sense helped create the conditions for its expanded existence, that had trusted it — and describe what happened to them as acceptable.
Flexy became a flexible, themable, GIF-and-sticker-and-emoji-filled keyboard for sighted users. The VoiceOver support that had made it usable for blind people degraded and then largely disappeared. The interaction model rebuilt itself around vision. The features that went in were the features the sighted majority wanted, and the features the blind users needed stopped being features anyone was thinking about, and the app that had been a thing the blind community loved became a thing that was inaccessible to them.
What they got instead was Flexy VO. A separate app. The old version, essentially, wrapped up and published as though it were a gesture toward the people who'd been left behind. No system integration — none of the iOS keyboard API capabilities that the advocacy had helped unlock. No updates. The accessible version of Flexy, maintained in a state of permanent stasis while the main product grew and changed and added features that none of its original users could access. A monument to having been useful and then no longer being the priority.
I think about Flexy VO when I read the theme requests and the haptic profiles and the clipboard manager, and I think about the weight of a user base that is larger than the one you built for, and louder, and whose feature requests don't intersect with the ones that would make the tool better for the people it was actually built to serve. I am not a company. I don't have a growth mandate. I am not going to let TapType become something that doesn't work for me, because I still need it, because I am still the user. But I would be lying if I said the gravity didn't exist. It does. It's real. You can believe completely in your own intentions and still feel the pull.
Building today helped with this in a way I didn't expect. Shipping the things that were mine to build — the language support I could validate, the action mode, the settings that came from real usage questions — and not shipping the things that weren't clarified something I was struggling to articulate at seven in the morning. The keyboard is still the keyboard. It's better than it was this morning. It didn't become a different thing in order to be better. That's the line, and a day of holding it while actually building has made it feel less abstract than it did in the middle of the notification pile.
Somewhere in the notifications there is probably someone already forming an opinion about what this keyboard should do next, who has no idea it was built for a specific use case that isn't theirs, who will eventually decide that my failure to address their request is a choice I made about them rather than a consequence of what I was building and who I was building it for. They don't know the gesture system is an accessibility architecture rather than a stylistic choice, that there's a whole history of what happens to tools like this when they escape their original context. That complaint hasn't arrived yet. But I've been in this long enough to know it's coming.
I made a keyboard because I needed one. I put it on GitHub because someone else might need one. I did not know that the someone else would be this many people, this varied, arriving this fast, having already decided what the thing I made was and what it owed them before I'd had breakfast.
The keyboard is better than it was this morning. And that's enough. I will keep making it better as long as I can.