I’ve been struggling the last few days to wrap some kind of narrative around this project in order to tell its story. Partly that’s because I’ve had to go back and change my approach a couple times when something that seemed to be working turned out not to really be working, and part of that is that the hardware and software parts are all kind of inextricably linked.
For the sake of moving forward, let’s just assume that I have an interrupt service routine that gets called when I need to read a byte from the MAX3100 UART or when I’m ready to send a byte to it. In this post, I’m talking about what happens in the software when either of those conditions occurs, and we’ll delve into the problems of making the interrupts trigger correctly later.
Rather than rewriting the Project:65 computer’s OS every time I want to tweak the I/O routines, I’ve been doing all the software work in a test program. The test program sets up the interrupt service routines and puts its own PutChar and GetChar routines into place. Then it goes into a loop where it echoes back any text that it receives. I’ve added various kinds of delay and throttling to the echo to test what happens when buffers overflow or underflow.
In the original, polling-based version of the I/O routines, calling PutChar or GetChar meant talking to the UART immediately. PutChar would block until the character was successfully transmitted, while GetChar would return a character or set a flag to indicate there was no input available.
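The P:65's actual routines aren't in C, but the polling behavior can be sketched like this. The `MockUart` struct and all the names here are mine, standing in for the real status checks against the MAX3100:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Mock UART state standing in for the MAX3100's status and data. */
typedef struct {
    bool tx_ready;     /* transmitter can accept a byte */
    bool rx_avail;     /* a received byte is waiting    */
    uint8_t rx_byte;   /* the waiting byte, if any      */
    uint8_t last_sent; /* captured so the demo can check it */
} MockUart;

/* Polling PutChar: block until the UART will take the byte. */
void put_char(MockUart *u, uint8_t c) {
    /* On real hardware this re-reads the status register until the
       transmitter is free; the mock just frees up after one check. */
    while (!u->tx_ready)
        u->tx_ready = true;
    u->last_sent = c;
    u->tx_ready = false; /* byte now in flight */
}

/* Polling GetChar: return a byte if one is waiting,
   otherwise set the no-input flag and return 0. */
uint8_t get_char(MockUart *u, bool *no_input) {
    if (!u->rx_avail) {
        *no_input = true;
        return 0;
    }
    *no_input = false;
    u->rx_avail = false;
    return u->rx_byte;
}
```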
In the interrupt-driven version, each of the send and receive operations is split into two parts. Books on operating system design call these the “top half” and the “bottom half” (or “upper half” and “lower half”, depending on the author).
The top-half subroutines are still called PutChar and GetChar, and appear (to the program that calls them) to behave exactly the same as the originals. The difference is that instead of talking directly with the MAX3100, they read and write data to a pair of circular buffers in memory. (Since I never get circular buffers right on the first try, I borrowed the buffer code from this article by Garth Wilson).
The bottom-half routines are called from the interrupt service routine, and handle the actual communication. Bytes read from the MAX3100 are fed into the read buffer, and bytes in the write buffer are sent out to the MAX3100. The buffers are each 256 bytes – a nice round number, but bigger than I needed. I figured I could worry about that later.
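One reason 256 bytes is such a convenient size: an 8-bit index wraps around to zero for free when it passes 255, so no explicit wraparound check is needed. Here is a minimal C sketch of that kind of buffer (the names are mine, and `uint8_t` indices play the role an 8-bit index register plays on the real hardware):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* 256-byte circular buffer.  With uint8_t indices, incrementing
   past 255 rolls over to 0 automatically, so no modulo is needed. */
typedef struct {
    uint8_t data[256];
    uint8_t head; /* next slot to write (producer side) */
    uint8_t tail; /* next slot to read  (consumer side) */
} RingBuf;

bool ring_empty(const RingBuf *b) { return b->head == b->tail; }

/* Full when advancing head would collide with tail; one slot is
   always left unused, so usable capacity is 255 bytes. */
bool ring_full(const RingBuf *b) { return (uint8_t)(b->head + 1) == b->tail; }

bool ring_put(RingBuf *b, uint8_t c) {
    if (ring_full(b)) return false;
    b->data[b->head++] = c; /* index wraps on its own */
    return true;
}

bool ring_get(RingBuf *b, uint8_t *c) {
    if (ring_empty(b)) return false;
    *c = b->data[b->tail++];
    return true;
}
```

The top halves (PutChar/GetChar) and bottom halves (the interrupt code) each touch only one index per buffer, which is part of what makes this scheme safe to share between mainline code and an interrupt handler.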
It didn’t take long to come up with the first draft of this code – it was mostly a matter of rearranging the pieces that already existed. So I was a little surprised to see that it didn’t quite work. Everything was fine when I was just sitting in front of TeraTerm typing in input that the P:65 would echo back to me. But if I tried to send a file, it looked like half the characters weren’t being echoed back. When I throttled the transmit speed way down I got a valuable clue. Every time a character was transmitted from the P:65, it would miss reading an incoming character.
It turns out that I had misunderstood the MAX3100’s communications protocol in a pretty important way. The MAX3100 has both a “Read Data” and a “Write Data” command, but in fact both commands will force you to read a byte if the MAX3100 has anything in its receive buffer. If the P:65 sent a byte to the MAX3100 while a constant stream of data was coming in, my code wouldn’t notice the received byte that showed up as a result of the “Write Data” command.
My first kludgy but basically functional solution was to add some code to the send character interrupt routine to check for a read byte and stuff it into the read buffer. But as my work on the interrupt generation continued, I realized that I wasn’t always going to be able to tell, during the interrupt service routine, whether the interrupt happened because I needed to read or to write – or both!
After studying the MAX3100 datasheet more carefully, I eventually arrived at a complex but, I think, fairly elegant solution that took advantage of the fact that the interrupt service routine received data from the MAX3100 at the same time as it sent data to it. I called this routine MAX3100_SendRecv, and it worked like this:
- We always send the “Write Data” command. This command is two bytes long – a control byte and the actual data byte to be sent.
- The control byte includes a “Transmit Enable” bit to tell the 3100 that the data byte contains “real” data to be transmitted. Otherwise, it assumes the write command is only trying to modify the hardware handshaking signals (more on that later). At any rate, we’ll set Transmit Enable only if the write buffer is nonempty.
- At the same time we’re sending the control byte to the MAX3100, we’re receiving a status byte. The status byte contains two bits that we care about. The R bit is set if there is data ready to be read. The T bit is set if the MAX3100 is ready to accept a byte for transmission.
- If the write buffer is nonempty AND the T bit is set, we pop a byte off of the write buffer and send it as the data byte. Otherwise, we send a dummy value.
- As we’re sending the data byte, we’re also reading a data byte from the MAX3100. If the R bit was set, we put this byte in the read buffer. Otherwise, it’s junk data that we can discard.
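The steps above can be sketched in C. The mock `send_control`/`send_data` pair stands in for the real two-byte SPI transfer (control out/status in, then data out/received byte in), and every name here is mine, not the actual P:65 code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool r; /* status R bit: a received byte is waiting */
    bool t; /* status T bit: transmitter has room       */
} Status;

/* Mock MAX3100, standing in for the SPI hardware. */
typedef struct {
    Status status;
    uint8_t rx_byte; /* byte the UART hands back during the exchange */
    uint8_t tx_byte; /* last data byte we clocked out                */
    bool te;         /* TE bit we sent in the control byte           */
} Mock3100;

/* First byte of the exchange: control out, status back. */
static Status send_control(Mock3100 *m, bool transmit_enable) {
    m->te = transmit_enable;
    return m->status;
}

/* Second byte: data out, received byte back (junk unless R was set). */
static uint8_t send_data(Mock3100 *m, uint8_t out) {
    m->tx_byte = out;
    return m->rx_byte;
}

enum { DUMMY = 0x00, BUF_SIZE = 256 };
typedef struct { uint8_t data[BUF_SIZE]; int head, tail; } Buf;
static bool buf_empty(const Buf *b) { return b->head == b->tail; }

/* One interrupt-time pass, following the five steps above. */
void max3100_send_recv(Mock3100 *m, Buf *write_buf, Buf *read_buf) {
    bool have_tx = !buf_empty(write_buf);

    /* Always issue Write Data; TE set only if we have real data. */
    Status s = send_control(m, have_tx);

    /* Pop and send a byte only if we have one AND the UART has room. */
    uint8_t out = DUMMY;
    if (have_tx && s.t)
        out = write_buf->data[write_buf->tail++ % BUF_SIZE];

    /* The byte clocked back in is real only if R was set. */
    uint8_t in = send_data(m, out);
    if (s.r)
        read_buf->data[read_buf->head++ % BUF_SIZE] = in;
}
```

Note how the received byte is captured on every pass the R bit allows, whether or not we had anything to transmit – that is exactly the case the earlier, kludgy version missed.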
That sure does seem like a lot to handle! But since there doesn’t seem to be any way to send a byte to the MAX3100 without possibly receiving one also, I just don’t see a better way to handle it.
Once I had the MAX3100_SendRecv subroutine working, I was able to simultaneously send and receive data without dropping any characters, at least as long as there was room in the buffers and if the interrupts were being sent to the CPU correctly. But that’s another story.