Rethinking

This week the goal was to write more tests and implement more functionality for the GDB wrapper. However, I faced some problems that led me to rethink the library design and change my plans for the project implementation. So, let's go through it step by step.

Library design

Initially I wanted gdb-js to maintain state, because I thought it would be more efficient and easier for the end user to consume. But back then I didn't understand how the GDB/MI interface actually works. While reading through the documentation I realized that it makes much more sense for a GDB wrapper to provide methods for getting the state and events for state changes (e.g. "stopped", "running") than a bunch of "update:whatever" events. And it doesn't make it any harder to connect with a React application. So the library should really do only two things well: provide asynchronous methods for GDB commands that return promises resolving with a JSON object, and expose events for state changes. In the end it should be just a seamless and thin wrapper around the GDB/MI parser (which is already written). The GDB/MI interface is perfect for this because every command returns exactly one result line.

However, not everything in the garden is rosy... GDB/MI doesn't support some commands that I need. For example, the -symbol-list-variables command, which I need for getting globals, is absent even though it's in the documentation. The corresponding GDB command is info variables, but using it is far from comfortable since it doesn't return a single result line. It does seem to be a synchronous operation, though, and every line between our command and the ^done line is the result of this operation, so I believe I can just parse those lines somehow. At this stage I just did some refactoring on the library and disabled all the previous tests.
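To make this concrete, here is a minimal sketch of the shape I'm aiming for: asynchronous commands resolving with parsed results, plus plain events. The names and the parsing are illustrative, not the final gdb-js API:

```js
import { EventEmitter } from 'events'

// Hypothetical sketch, not the final gdb-js API.
class GDB extends EventEmitter {
  constructor (child) {
    super()
    this.child = child // a spawned GDB process with --interpreter=mi
    this.queue = [] // GDB/MI answers commands in order, so a FIFO suffices
    child.stdout.on('data', (data) => {
      for (let line of data.toString().split('\n')) this._parse(line)
    })
  }

  // Every MI command produces exactly one result record (^done, ^error...)
  execMI (command) {
    return new Promise((resolve, reject) => {
      this.queue.push({ resolve, reject })
      this.child.stdin.write(command + '\n')
    })
  }

  _parse (line) {
    if (line.startsWith('^done')) {
      this.queue.shift().resolve(line) // parse the MI record into JSON here
    } else if (line.startsWith('^error')) {
      this.queue.shift().reject(new Error(line))
    } else if (line.startsWith('*stopped')) {
      this.emit('stopped') // parse and pass the payload here
    } else if (line.startsWith('*running')) {
      this.emit('running')
    }
    // console/target/log stream records are omitted for brevity
  }
}
```

For commands like info variables, the same loop would just accumulate the stream records between the command and its ^done record and treat them as the result.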

The next problem with GDB/MI is that it's impossible to distinguish between target program output and GDB/MI output, even though the documentation says that all target program output will be prefixed with @. Sounds good, but it's not true (see the #8759 bug). So I took the simple route and decided to treat every line the wrapper can't parse as target program output. However, for this specific project (i.e. react-gdb) I believe it makes more sense to require the server to run GDB with --tty=/what/ever and leave it to the user to decide how to interact with that tty.
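For the react-gdb case that would look roughly like this (the tty path below is just a placeholder):

```js
import { spawn } from 'child_process'

// Target program I/O goes to a separate tty, so GDB's stdout carries
// nothing but GDB/MI records. The tty path is a placeholder.
const gdb = spawn('gdb', [
  '--interpreter=mi',
  '--tty=/dev/pts/5',
  './program'
])
```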

In general, I want to finish the GDB wrapper during the next week and start working on the UI. But before doing the UI I need to understand how the interaction with the server should work, and that's another matter...

Server interaction

Again, docker-exec-websocket-server was initially suggested for use with react-gdb. Although it does a good job of exposing process streams from Docker running on the server, it's not enough for a debugger frontend. The debugger should also be a file explorer and (maybe) something else, so it needs its own protocol. I considered three options:

  1. Still use docker-exec-websocket-server protocol, for everything else send HTTP requests
  2. Modify docker-exec-websocket-server by adding new message types
  3. Dirty hacks with remote procedure calls

The first option feels like overkead of complexity: such a mixed protocol would be hard to document as well as to implement. The third option has a whole bunch of pitfalls, so I don't even want to think about it. So I decided to go with the second option. While reading through the sources of docker-exec-websocket-server again, I noticed that the only tricky part of it is the back-pressure implementation (it also occupies the largest part of the code). So I thought, why not decouple it, so that it would be easy to implement the new protocol on top of streams with back-pressure? As a result I made the ws-streamify package.
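With back-pressure factored out, wiring a process through a WebSocket becomes a couple of pipe calls. A rough usage sketch (the exact ws-streamify API may differ):

```js
import WebSocket from 'ws'
import WebSocketStream from 'ws-streamify'

// Wrap the socket in a Node.js stream that respects back-pressure;
// framing for extra message types can live on a separate channel.
const socket = new WebSocket('ws://localhost:8080')
const stream = new WebSocketStream(socket)

// e.g. pipe some stdio through the socket without flooding it
process.stdin.pipe(stream)
stream.pipe(process.stdout)
```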

ws-streamify

The first thing I realized is that back-pressure is already implemented in Node.js streams, so there's no need to reinvent the wheel. All we need is to extend them and correctly wire up the connection between sockets. Then I checked out the existing implementations and found out that they don't support back-pressure, so I had to make it myself. Initially I implemented the same approach as in docker-exec-websocket-server, with pause and resume codes, and wrote tests for it. Everything looked pretty good until I decided to pipe a stream into it and check the network profile in Chrome DevTools :) It turned out that this approach is not reliable. When the stream being piped is too fast, it manages to send enormous amounts of data before it receives the consumer's message asking it to stop, and the buffers constantly overflow badly. By the way, I couldn't find any tests for back-pressure handling in the docker-exec-websocket-server sources.

So I thought a better idea would be an approach similar to TCP's: an explicit acknowledgement of whether the other side may send the next message. It adds a little overhead, but it's reliable. I've tested it and it works much better: the buffers fill up to acceptable levels. Performance is also fine, and if it's not enough you can simply increase the chunk size. The library is already on npm, has tests, and CI is configured.
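The core of the idea fits in a small Duplex stream. This is a simplified sketch of the acknowledgement scheme, not the actual ws-streamify source (framing, options, and error handling are omitted, and the control code is chosen here just for illustration):

```js
import { Duplex } from 'stream'

const ACK = Buffer.from([0x01]) // illustrative control code

class AckStream extends Duplex {
  constructor (socket, options) {
    super(options)
    this.socket = socket
    this.pendingWrite = null // callback of the chunk awaiting an ACK
    this.pendingAck = false // ACK we owe once our reader catches up
    socket.on('message', (data) => {
      if (data.length === 1 && data[0] === ACK[0]) {
        // Peer consumed our last chunk: now we may send the next one.
        const done = this.pendingWrite
        this.pendingWrite = null
        if (done) done()
      } else if (this.push(data)) {
        socket.send(ACK) // room in our buffer, ask for more right away
      } else {
        this.pendingAck = true // buffer full, hold the ACK until _read()
      }
    })
  }

  _write (chunk, encoding, callback) {
    // Holding the callback until the ACK arrives is what throttles a
    // fast producer: pipe() won't feed us another chunk before then.
    this.socket.send(chunk)
    this.pendingWrite = callback
  }

  _read () {
    if (this.pendingAck) {
      this.pendingAck = false
      this.socket.send(ACK)
    }
  }
}
```

Unlike the pause/resume scheme, the sender never has more than one unacknowledged chunk in flight, so a fast producer physically cannot flood the socket's buffers.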

Final

In the upcoming week I want to make a server using ws-streamify and Docker containers (to make it fit more naturally into the Taskcluster environment). I'm already familiar with using the Docker API from JavaScript (by the way, I managed to get rid of that hack with sleep in the tests!). Also, I want to finish the GDB wrapper (well, almost finish :)).

P.S.

  • Seems like I broke the RSS feed last time... Sorry, it's fixed now :)
  • Since I decided to use a different protocol for react-gdb, I don't need ws-exec anymore, so I decided to delete it. Anyway, with ws-streamify it can be reimplemented very easily.
  • I hate exams (ノಥДಥ)ノ︵┻━┻