The manufacturers claimed that most people could learn to use it in just a couple of hours. With some practice, it is possible to become a faster typist with the Microwriter than with a conventional keyboard, providing that what is being entered is just text. Typing is slowed if a substantial number of special characters have to be entered using the “shifting” mechanism.
One of the coolest parts is that it could be used completely detached from a computer:
At the top end of the unit is a 25 pin D-type connector providing an RS-232 port, an interface to an audio cassette player for saving and restoring files
Add one more key and you could implement the Microwriter input method: https://en.wikipedia.org/wiki/Microwriter#Keyboard
That’s very interesting, thanks! Looks like what’s old is new again.
Believe it or not there are products that implement this: https://www.in10did.com/
My use case was for terminal commands and using my phone as an audio terminal (termux and espeak).
I recorded all my key presses on my laptop, including non-printing commands like ctrl, alt, esc, etc., built a key map based on weighted finger positions, flashed the new key map onto the thing, and it nearly worked. Unfortunately anything more complex than ed had too much state I needed to keep in mind while working, so I needed to look at the screen anyway.
Which brought me to the solution that actually works: https://en.wikipedia.org/wiki/MessagEase along with emacs.
I can enter all the weird symbols I want using software, and it’s kind of usable for entering text. Using that inside termux with emacs I have something that is good enough to edit files and run commands on any server that I could possibly want.
If I wanted to take notes though both are terrible. I’d look into stenography, but with how good speech-to-text has become I’d just dictate to an app with a red button.
Do you have longer write-ups of your setup and/or videos of it running? I am extremely interested in an audio terminal setup.
Unfortunately no.
The setup was always buggy as hell on regular bash+linux. I vaguely remember some sort of C monstrosity that dispatched an espeak process for each key press, and another one that killed whatever was running when a new key was pressed. I never managed to get anything other than full text lines to audio in termux.
What did work as an audio terminal was running a shell in emacspeak. That’s how I got to the point of realizing that it was too much mental overhead for me to use a chorded keyboard along with an audio-only emacs interface. The only programs that worked well were the oldest unix utilities that used to output to actual teletypes; never was the terseness of ed more appreciated.
Again, that was on Linux only, with the keyboard connected by bluetooth. Currently emacspeak does not compile for termux. If you know enough about emacs to compile it for termux, that would help the project tremendously.
That said, I am one of the few people to genuinely use ed from a shell in emacs, which is something I’m both proud and ashamed of.
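That kill-and-respeak pair can be collapsed into a single process. A minimal sketch (a hypothetical reconstruction, not the original C programs; an `espeak` binary on PATH is assumed, and any command that takes the text as its last argument would do):

```python
import subprocess

class KeyEchoSpeaker:
    """Speak each key press, killing any utterance still in progress
    so the newest key is always the one you hear."""

    def __init__(self, cmd=("espeak",)):
        self.cmd = list(cmd)  # speech command; the text is appended as the last argument
        self.proc = None

    def on_key(self, text):
        # Silence the previous key immediately, then start speaking the new one.
        if self.proc is not None and self.proc.poll() is None:
            self.proc.kill()
            self.proc.wait()
        self.proc = subprocess.Popen(self.cmd + [text])

    def close(self):
        # Stop whatever is still talking.
        if self.proc is not None and self.proc.poll() is None:
            self.proc.kill()
            self.proc.wait()
```

The per-key and per-word cases differ only in what text gets passed to `on_key`.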
I’m actually more interested in your keybindings and how your interactions with it went when it did work. For the audio output part, it sounds like you relied on the underlying utility, like ed, being nice about its output. But what of the keyboard input part? Did that work well?

Currently emacspeak does not compile for termux.

I’m a bit baffled by what this means and why it’s needed. What do you mean by “compile for”? Also, how important is it to use termux versus another shell or terminal?
I made espeakui, which has an Espeaker class that is IMO a better API for espeak. Would this help with the bash+linux approach? You can start, stop and pause speech easily without having to kill an espeak process. I use it to change the speech rate mid-sentence. It uses the same underlying speaklib library as espeak (but can also use espeak-ng with minimal changes).

The key chords worked as expected. All the smart stuff was done on board the keyboard’s microcontroller, with only regular key presses being sent out over bluetooth, so the computer couldn’t tell the difference between it and a regular keyboard. I don’t have the map any more, and I was naively trying to use the same key bindings as I did on a regular keyboard. The amount of mental state I needed to keep track of was extremely high.
For my use case emacspeak is a better fit than trying to use a terminal, and when I finally got it working on my main machine all the projects I’d been trying to look at became pointless.
Termux is an android terminal emulator and Linux environment. My use case was to try to build an audio-only workstation on mobile so I could use the same setup wherever I was. I did not succeed, though I did get to the point where I had emacspeak + terminal working on my desktop along with a chorded keyboard. The combination was a lot less usable than I thought it would be.
Ultimately I moved to an e-ink android tablet + keyboard + battery. The setup lasts me weeks between charges and is only marginally more difficult to carry around than a regular laptop. Since emacs works under termux I have the same setup on all my machines for on-the-go editing. It is rather pleasant to work outdoors in the bright sunshine, and for ‘serious’ work I ssh into my home network.
From a cursory look, no. You need to spawn a process for every letter press and every word. Basically have a look at what emacspeak does and you’ll see something I’m 90% happy with for coding on.
That said the use case for that software is mostly covered by the following line of bash:
I will give yours a try later, since I never did manage to get espeak to display the text currently being read, and never cared enough to look into how to solve the problem.
Not sure if that’s any help at all.
So key chords worked just as well as a keyboard? It wasn’t much slower to use one?
I’ve only looked at emacspeak briefly, and that was a long time ago. Isn’t it basically a screen reader built on emacs? Any significant difference between using that and, say, yasr or espeakup (other than being inside emacs)?
Ah, that’s unfortunate. It sounds like you were mostly going the “regular” terminal and screen reader route. I want to know if there’s some design that would specifically be good for an audio-only linux setup with a keyboard (or other input device) that has fewer keys.
Yes, you can do that (with mplayer -af scaletempo), but for large webpages the wave file takes quite some time to generate, and there’s significant audio distortion at >2x speed (which might be why you set your initial speed to 950?). I do still use the pipe-to-mplayer method on a phone, though.
I was pointing at the library rather than the application, though. It looks like you want a mostly completed screen reader, which indeed it isn’t. It’s better suited for building “custom” audio applications.
Huh, MessagEase looks interesting, thanks! Though I don’t know how much of a fit it is for non-keypad phones. Taking notes via speech recognition in a noisy environment would probably not be too great, and you’d need to summarize them later.
It works vastly better on touch screens. Typing one symbol at a time is still much slower than SwiftKey-like swipes for words, but for accuracy it beats anything I’ve tried, including phones with physical keyboards and chorded keyboards.
What you can do with audio engineering today is out of this world: https://www.youtube.com/watch?v=hrQ_2JhEKpY
The last time I played around with speech to text was when Mozilla released their audio thing way back when. Using it on a bunch of librivox recordings, it averaged less than 1 error per thousand words.
Chorded keyboards are a thing. I have a Georgi (https://www.gboards.ca/product/georgi), and apart from taking a bit of time to get used to, it’s a nice way to input.
Steno keyboards are a thing. Look at Plover if interested.
Hahah, this is the keyboard that always breaks my autocompletion! I made a (full-size) keyboard called “George”, and whenever I try to cd into its QMK directory, autocompletion always annoyingly stops at “georg” because there’s “georgi” too. I had no idea it was chorded as well, thanks for the link!
The caption on the photo with your fist was excellent. Never use this in public!
Yeah, I laughed out loud when I saw that. The whole post was quite amusing, though. Loved the writing style.
Awesome!
If you did 10 keys (two hands), could you arrange things so that no prefix of a chord was also a chord? So that you could always unambiguously detect when an intended input event had occurred. With five keys, you probably don’t have enough bits for this.
Alternately, you could split chords between “bottom-up chords” and “top-down chords”, depending on the order in which your fingers hit the keys. That way, you could probably even get an additional bit out of the five-key setup. Maybe top-down chords could be capital letters.
Hmm, that’s an intriguing idea, using the order of keys to tell if they should be capitals. Some keys are single-press, though, so I don’t know if it’d be worth the tradeoff. Josh suggested an accelerometer and turning the keyboard itself at an angle, but I don’t know how convenient that’d be either.
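The prefix-chord question a couple of comments up can be checked mechanically. Treating each chord as a set of keys, decoding is unambiguous exactly when no assigned chord is a proper subset of another (a sketch; the chord assignments below are made-up examples):

```python
from itertools import combinations

def subset_conflicts(chords):
    """Return the pairs of chords where one is a proper subset of the
    other. Any such pair is ambiguous for press-and-detect decoding:
    while rolling into the bigger chord, the keyboard momentarily
    sees exactly the smaller one."""
    cs = list(chords)
    return [(a, b) for a, b in combinations(cs, 2) if a < b or b < a]

# Five keys with single-key chords plus one two-key chord: ambiguous.
five = [frozenset(k) for k in "abcde"] + [frozenset("ab")]

# Ten keys using only five-key chords: no chord contains another.
ten = [frozenset(c) for c in combinations("abcdefghij", 5)]
```

By Sperner’s theorem the largest subset-free family over 5 keys has only C(5,2) = 10 chords, not enough for a full alphabet, which matches the “probably don’t have enough bits” hunch; over 10 keys the cap is C(10,5) = 252.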
I once made a three-key keyboard. I first wrote an input filter for BeOS and logged every keystroke over a period of weeks, then used that log to create a modified Huffman encoding based on how heavily each key was used. It worked, but I can’t say it was very practical. I didn’t spend a lot of time trying to learn the alphabet, though, so maybe it could have been useful…
That’s fantastic! Why three and not two, though?
More fingers/keys means shorter sequences!
In the data I generated I think the longest sequence was four or five keypresses to generate a character.
I did this for a project in college. Used Cherry switches and 3D printed piano keys, inspired by the MOAD. Maybe I should do it again, only this time, document it somewhere I don’t throw away immediately after graduating…
Long long ago, when both I and my college’s EE department were too cheap to purchase a PIC programmer, I made a two-switch input device. BIT (toggle) and CLK (momentary, debounced) are enough to slam code into a PIC, if you’re spiteful enough (and if you are using PIC asm, you are filled with spite). It worked and everything.
Funny hack, but ultimately I ponied up for a BASIC Stamp and the project got a lot easier.
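The BIT/CLK trick is essentially a hand-operated shift register: on every clock press, the current level of the toggle switch is shifted in as the next bit. A sketch of the decoding side (the framing here — MSB first, eight bits per byte — is an assumption, since the original scheme isn’t described in that detail):

```python
class TwoSwitchLoader:
    """Decode a two-switch input stream: BIT is a toggle that sets the
    current level, CLK shifts that level in as the next bit. Every
    eight clocks emits one byte (MSB-first; framing is assumed)."""

    def __init__(self):
        self.bit = 0        # current level of the BIT toggle
        self.acc = 0        # bits shifted in so far
        self.nbits = 0
        self.out = bytearray()

    def toggle_bit(self):
        self.bit ^= 1

    def clock(self):
        self.acc = (self.acc << 1) | self.bit
        self.nbits += 1
        if self.nbits == 8:
            self.out.append(self.acc)
            self.acc = self.nbits = 0

# Entering 0x41 ('A') takes at most 16 switch actions:
ld = TwoSwitchLoader()
for b in "01000001":
    if int(b) != ld.bit:
        ld.toggle_bit()   # flip the level only when the next bit differs
    ld.clock()
```

Since the toggle holds its level, runs of identical bits cost only one clock press each, which is what makes the scheme merely painful rather than impossible.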
Do you mount the keyboard to something before using it? From the photo in the article, it looks like it would be difficult to hold it without accidentally pressing any buttons. Or is that not the case?
People keep pointing out that there are products available that do this. Given how simple this one is, it seems there is room for an open hardware version based on similar principles. The only one I could find in a quick google search is the Twiddler, which is far more complex than this one at 12 buttons, and costs $200.
The one in this article could probably be mass produced for less than $10, and is far more practical for simple note-taking for people who don’t have hundreds of hours to learn a 12-button (4096-combination) keyboard.
I was actually looking to buy something a while back but the consumer market had nothing to offer that was even close to what I want.
I had fun reading that.
The caption on the photo with your fist was excellent. Never use this in public!