I keep talking about the future VyOS 2.0 and how we all should be doing it, but I guess my biggest mistake is not being public enough, and not being structured enough.
In the early days of VyOS, I used to post development updates, which no one would read or comment upon, so I gave up on it. Now that I think of it, I shouldn't have expected much: the community was very small at the time, so there were hardly any people to read them in the first place, even though it was a critical time for the project and input from readers would have been very valuable.
Well, this is a critical time for the project too, and we need your input and your contributions more than ever, so I need to get to fixing my mistakes and try to make it easy for everyone to see what's going on and what we need help with.
Getting a steady stream of contributions is a very important goal. While the commercial support we offer may let the maintainers focus on VyOS and ensure that things like security fixes and release builds get guaranteed attention in time, without occasional contributors who add things they personally need (which the maintainers may not; I think I myself use maybe 30% of all VyOS features with any regularity) the project will never realize its full potential, and may go stale.
But to make the project easy to manage and easy to contribute to, we need to solve multiple hard problems. It can be hard to get oneself to do things that promise no immediate returns, but if you look at it the other way, we have a chance to build the system of our dreams together. As for 1.1.x and 1.2.x (the jessie branch), we'll figure out how to maintain them until we solve those problems, but that's for another post. Right now we are talking about VyOS 2.0, which gets to be a cleanroom rewrite.
Why VyOS isn't as good as it could be, and can't be improved
I considered using "Why VyOS sucks" to catch the reader's attention. That's a harsh phrase, and it may not be all that true, given that VyOS in its current state is way ahead of many other systems that don't even have system-wide config consistency checks, or revisions, or safe upgrades. Still, there are multiple problems so fundamental that they are impossible to fix without rewriting at least a very large part of the code.
I'll state the design problems that cannot be fixed in the current system. They affect both end users and contributors, sometimes indirectly, but very seriously.
Design problem #1: partial commits
You've seen it. You commit, there's an error somewhere, and one part of the config is applied while the other isn't. Most of the time it's just a nuisance: you fix the issue and commit again. But if you, say, change an interface address along with the firewall rule that is supposed to allow SSH to it, you can get locked out of your system.
The worst case, however, is when a commit fails at boot. While it's good that you at least get SSH access, debugging this can be very frustrating: something doesn't work, and you have no idea why until you inspect the running config and see that something is simply missing. (If you run into it in VyOS 1.x, do "load /config/config.boot" and commit; this will either work or show you why it failed.) It's made worse by the lack of notification about config load failures for remote users: you can only see that error on the console.
The feature that can't be implemented because of this is what goes by "commit check" in JunOS: you can't test whether your configuration will apply cleanly without actually committing it.
This is because in the scripts, the logic for consistency checking and for generating real configs (and sometimes applying them too) is mixed together. Regardless of the backend issues, every script needs to be taken apart and rewritten to separate that logic. We'll talk more about it later.
Design problem #2: read and write operations disparity
Config reads and writes are implemented in completely different ways. There is no easy programmatic API for modifying the config, and it's very hard to implement one because the binaries that do it rely on a specific environment setup. Not impossible, but very hard to do right, and to maintain afterwards.
This blocks many things: a network API (and thus an easy-to-implement GUI), and modifying the config from scripts in sane ways (we do have the script-template which does the trick, kinda, but it could be a lot better).
Design problem #3: internal representation
Now we are getting to really bad stuff. The running config is represented as a directory tree in tmpfs. If you find it hard to believe, browse /opt/vyatta/config/active, e.g. /opt/vyatta/config/active/system/time-zone/node.val
Config levels are directories, and node values are in node.val files. For every config session, a copy of the active directory is made, and mounted together with the original directory in union mount through UnionFS.
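To make the layout concrete, here is a small Python sketch that reads such a directory tree back into a nested dictionary. The directory names and the "value" key are illustrative, and this is not how VyOS itself reads the config:

```python
import os
import tempfile

def read_config_dir(path):
    """Recursively read a Vyatta-style config directory into a nested dict.

    Directories are config levels; a node.val file holds the node's value.
    """
    node = {}
    for entry in sorted(os.listdir(path)):
        full = os.path.join(path, entry)
        if entry == "node.val":
            with open(full) as f:
                node["value"] = f.read().strip()
        elif os.path.isdir(full):
            node[entry] = read_config_dir(full)
    return node

# Build a miniature tree like the time-zone example above
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "system", "time-zone"))
with open(os.path.join(root, "system", "time-zone", "node.val"), "w") as f:
    f.write("UTC\n")

print(read_config_dir(root))  # {'system': {'time-zone': {'value': 'UTC'}}}
```

Note how even this toy reader needs one stat/open/read round trip per node, which hints at why large configs generate so many system calls.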
There are lots of reasons why it's bad:
- It relies on the behaviour of UnionFS; OverlayFS or another filesystem won't do. We are at the mercy of the unionfs-fuse developers now, and if they stop maintaining it (and I can see why they might, as OverlayFS has many advantages over it), things will get interesting for us
- It requires watching file ownership and permissions. Scripts that modify the config need to run as the vyattacfg group, and if you forget to use sg, you end up with a system where no one but you (or root) can make any new commits until you fix it by hand or reboot
- It keeps us from implementing role-based access control, since config permissions are tied to UNIX permissions; we'd have to map it to POSIX ACLs or SELinux and re-create those access rules at boot, since the running config dir is populated by loading the config
- For large configs, it creates a fair number of system calls and context switches, which may make the system run slower than it could
Design problem #4: rollback mechanism
Due to certain implementation details (mostly the handling of default values), and the way config scripts work, rollback cannot be done without a reboot. The same issue once made Vyatta developers revert the activate/deactivate feature.
This makes confirmed commit a lot less useful than it should be, especially in telecom, where routers cannot be rebooted arbitrarily even in maintenance windows.
Implementation problem #1: untestable logic
We already discussed this a bit. The logic for reading the config, validating it, and generating application configs is mixed together in most of the scripts. It may not look like a big deal, but for maintainers and contributors it is. The problem is amplified by the fact that there is no way to create and manipulate configs separately: the only way you can test anything is to build a complete image, boot it, and painstakingly test everything by hand, or have an expect-like tool emulate testing it by hand.
You never know whether your changes can possibly work until you get them onto a live system. This allows syntax errors in command definitions and compilation errors in scripts to make it into builds, and such errors made it into a release more than once, when the breakage wasn't immediately apparent and only appeared with a certain combination of options.
This can be improved a lot by testing components in isolation, but that requires that the code is written in an appropriate way. If you write a calculator and start with add(), sub(), mul() etc. functions, and then use them in a GUI form, you can test the logic on its own automatically: does add(2,3) equal 5, does mul(9, 0) equal 0, does sqrt(-3) raise an exception, and so on. But if you embed that logic in button event handlers, you are out of luck. That's how VyOS is for the most part: even if you mock the config subsystem so that config read functions return test data, you still need to redo the scripts so that every function does exactly one thing testable in isolation.
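To make the calculator analogy concrete, here is a minimal Python sketch of logic written to be testable in isolation, with the checks from the paragraph above run automatically:

```python
# Testable-by-design: pure logic functions, kept separate from any UI handler.
import math

def add(a, b):
    return a + b

def mul(a, b):
    return a * b

def sqrt(x):
    if x < 0:
        raise ValueError("square root of a negative number")
    return math.sqrt(x)

# The logic can now be exercised without a GUI, a live system, or a human:
assert add(2, 3) == 5
assert mul(9, 0) == 0
try:
    sqrt(-3)
    raise AssertionError("expected sqrt(-3) to fail")
except ValueError:
    pass
```

Embedding the same arithmetic inside event handlers would leave nothing for a test harness to call, which is exactly the situation most VyOS scripts are in today.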
This is one of the reasons 1.2.0 is taking so long: without tests, or even the ability to add them, we don't know what's broken until we stumble upon it in manual testing.
Implementation problem #2: command definitions
This is a design problem too, but it's not as fundamental. Right now we use a custom syntax for command definitions (aka "templates"), which have tags such as help: or type: and embedded shell scripts. There are multiple problems with it. For example, it's not easy to automatically generate even a command reference from them, and you need a complete live system for that, since part of the templates is autogenerated. The other issue is that right now some components make very extensive use of embedded shell, and some things are implemented entirely in embedded shell scripts inside templates, which makes testing even harder than it already is.
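For readers who have never opened one, a node template looks roughly like this. The general shape (type:, help:, and an embedded shell check) is accurate, but the exact node and script path here are an illustrative reconstruction, not a verbatim file:

```
type: txt
help: Local time zone
syntax:expression: exec "/opt/vyatta/sbin/check-timezone.sh $VAR(@)"
```

Everything after exec is shell code that only runs on a live system, which is precisely what makes these templates hard to test and to generate documentation from.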
We could talk about the upgrade mechanism too, but I guess I'll leave it for another post. Right now I'd like to talk about the proposed solutions, what's being done already, and what kind of work you can join.
The plan to get out of this situation is the following:
- Design and implement a new configuration backend
- Decide on the new base distro and implement a new image build and upgrade mechanism on top of it
- Rewrite the integration scripts
Briefly, on the new distro: while Debian is a great distro and no one doubts it, there are new developments in package managers that can, potentially at least, give us safe and reversible upgrades for free. NixOS and the Nix package manager look like an interesting candidate. Right now we will focus on the backend, however.
The working title for the new backend is "vyconf", I'll use "vyconf" and "the backend" interchangeably from now on.
What does the backend do?
If you are new to VyOS development, you need to know that there are two main parts: the config backend that loads and saves the config, handles set/delete/commit operations, checks if node values are valid, and runs the pre/post-commit hooks, and scripts that use the config read API of the backend to produce real configs for openvpn, iptables, and everything else that VyOS uses.
Therefore to get a working backend we need to design and implement the following pieces:
- Config format grammar and a lexer/parser for it
- The datastructure that represents the running config and proposed configs from sessions
- Command definition syntax, representation of the command hierarchy, and value checking mechanism
- set/delete/rename/copy operations logic
- commit and rollback logic
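As a rough illustration of the first two pieces, here is a toy Python lexer/parser for a much-simplified, VyOS-like curly-brace config format. The real grammar is still to be designed and will be richer than this; the sample config below is invented:

```python
def parse_config(text):
    """Parse a simplified curly-brace config into a nested dict.

    A line ending in '{' opens a child node, '}' closes it,
    and a 'name value' line becomes a leaf node.
    """
    root = {}
    stack = [root]  # current nesting path, innermost node last
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line == "}":
            stack.pop()
        elif line.endswith("{"):
            name = line[:-1].strip()
            child = {}
            stack[-1][name] = child
            stack.append(child)
        else:
            parts = line.split(None, 1)
            stack[-1][parts[0]] = parts[1] if len(parts) > 1 else None
    return root

sample = """
interfaces {
    ethernet eth0 {
        address 192.0.2.1/24
        description Uplink
    }
}
"""
print(parse_config(sample))
```

The output is the nested dictionary {'interfaces': {'ethernet eth0': {'address': '192.0.2.1/24', 'description': 'Uplink'}}}, i.e. exactly the kind of in-memory tree the rest of the backend would operate on.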
Of course we also need frontends that can communicate with it. I'm planning the following frontend applications:
- Non-interactive CLI client (like cli-shell-api, except with config modification functionality)
- Interactive shell (replacement of vbash in current VyOS)
- HTTP bridge (for both remote API and GUI)
They all need a common library for the actual communication. Since the non-interactive client needs the least amount of wrapping, it makes sense to start with it.
Is the backend just for VyOS?
It's a bit of a digression, but my observation is that hardly any of those functions are unique to VyOS; they could be used in any software appliance. Imagine if, instead of writing everything by hand, people could take an existing, stable and robust config backend and build their own software appliance on top of it. Rather than reimplement the config handling logic, they could focus on their config scripts and custom frontends.
For us, it also means that we avoid unwarranted assumptions about the environment (in VyOS 1.2.0 jessie migration, we found enormous amounts of such assumptions deeply entrenched in the system!), and we also get more testers and contributors from outside the VyOS community.
Myself, I'm not going to try to account for hypothetical scenarios that may only occur outside VyOS, but I think we should keep the backend as OS/distro-independent and standalone as feasible and only make config scripts specific to VyOS.
What language am I writing it in?
This gets objections all the time, so I'll answer the question here. I'm writing it in ML; specifically, in OCaml, a language of the ML family (the other notable members are Standard ML and F#, though F# doesn't implement a number of important ML features, such as the module system). This means you need to learn it too if you want to join the work on the backend, but it's fairly easy to learn, as it has a fairly compact syntax and well-defined behaviour. The main stumbling block is the semantics, which is very different from that of imperative languages.
While it's little known outside of academia and investment banks, it has important properties that make it very well suited for this task. First, it compiles to native code and is very fast (it can compile itself in less than 10 minutes, for instance). It offers a very expressive and safe type system that can detect infinite loops in some cases (http://perl.plover.com/yak/typing/notes.html , the example is in slide 27) and allows you to build, in a few lines, a typed printf where something like printf("%d", "foo") is a type error (http://www.brics.dk/RS/98/12/).
The ML family and Haskell are closely related, and they share a common trait: all data is immutable unless specified otherwise. This has two important effects. First, nothing can be modified accidentally. Unlike Haskell, OCaml does support mutable variables, but they have to be declared as mutable. Second, the guarantee that nothing is accidentally modified allows the language to implement structural sharing: no values are copied unless they really have to be.
My early prototypes of the underlying datastructures were in Python. While I like Python and use it for many tasks (and I think we can use it for config scripts too), its pervasive mutability, combined with the destructiveness of functions from the standard library, created a need for copy or even deepcopy at virtually every step.
There are at least two examples of projects that migrated to OCaml at some stage: the Unison file synchronizer and the 0install package manager. The maintainer of 0install documented his search for a new language better than I possibly could; read his blog series on it if you are interested: http://roscidus.com/blog/blog/2013/06/09/choosing-a-python-replacement-for-0install/
The academic applications are more numerous and, for a certain type of geek, way more exciting (the Coq proof assistant, CompCert the formally verified C compiler, FFTW the Fourier transform generator, and many more), though yesterday's academic techniques are, luckily, making their way into everyday software development, with things like http://fbinfer.com/ and https://github.com/facebook/pfff by Facebook.
If you are new to it, you can get one of the books or read tutorials on ocaml.org.
Possible solutions to design problems
The obvious solution is to make commits all-or-nothing, but that has an unfortunate implication: if config load (which is a commit too) fails at boot, we are left with an unusable system. One way around it is to add the concept of a fallback config. People can make a minimal config that at least gives them access to the router and save it to the fallback config file, so that if bad things happen, they can still log in and debug.
To support commit dry run and enforce good separation of concerns, at the backend level we can introduce three types of scripts: check, generate, and apply. When a user commits, first check scripts for all components are run, and if any of them fails, the commit fails. If checks pass, then generate scripts are run to produce real configs from the VyOS config, and then apply scripts are run to reload/restart daemons and do other things necessary to apply the changes to the underlying system.
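The three script types could be orchestrated along these lines. The sketch below is hypothetical Python, not the actual vyconf interface; the Component class and the toy "ntp" component are invented for illustration:

```python
# Sketch of a three-phase commit: check everything, then generate,
# and only then apply. Nothing touches the system unless all checks pass,
# which also gives us "commit check" (dry run) for free: run phase 1 alone.

class Component:
    def __init__(self, name, check, generate, apply):
        self.name, self.check, self.generate, self.apply = name, check, generate, apply

def commit(components, proposed_config):
    # Phase 1: every component validates the proposed config
    for c in components:
        error = c.check(proposed_config)
        if error:
            return f"Commit failed in {c.name}: {error}"
    # Phase 2: produce real application configs from the VyOS config
    generated = {c.name: c.generate(proposed_config) for c in components}
    # Phase 3: apply to the running system (reload/restart daemons etc.)
    for c in components:
        c.apply(generated[c.name])
    return "Commit succeeded"

# Toy component: rejects an empty NTP server list instead of half-applying it
ntp = Component(
    "ntp",
    check=lambda cfg: None if cfg.get("ntp_servers") else "no NTP servers set",
    generate=lambda cfg: "server " + "\n".join(cfg["ntp_servers"]),
    apply=lambda conf: None,  # a real apply would write the file and restart ntpd
)

print(commit([ntp], {"ntp_servers": []}))
print(commit([ntp], {"ntp_servers": ["192.0.2.1"]}))
```

The key design point is that check scripts are forbidden from changing anything, so running them alone is a safe dry run, and a failing check stops the whole commit before any component has been applied.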
Read/write disparity and testability
The new backend uses in-memory datastructures for everything. The config can be represented as a multi-way tree (a rose tree, if you prefer more flowery language, in the literal sense). The main observation is that the config and the hierarchy of command definitions can be represented by the same data structure, the only difference being the data attached to the nodes. Thanks to support for parametric polymorphism in the ML type system, we don't even need to do anything special for this (in, say, C++ we'd make it a template).
Since it doesn't need any directories or files to work, the tree can be populated by hand, or initialized from a file and then manipulated in any way we want. This allows us to test its correctness in isolation, and I already have quite a few unit tests for it.
I call the command hierarchy tree a "reference tree", since set/delete operations use it as a reference to check which paths are allowed and what kinds of values are valid for them.
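Here is a small Python sketch of that idea: one multi-way tree structure carries either config values or reference-tree data, with only the attached payload differing. The class and field names are invented for illustration (in the OCaml backend the payload type is a type parameter):

```python
class Node:
    """A multi-way (rose) tree node with an arbitrary payload."""

    def __init__(self, name, data=None):
        self.name = name
        self.data = data      # payload: a value, or a command definition
        self.children = []

    def insert(self, path, data=None):
        """Create intermediate nodes as needed, attach data to the last one."""
        if not path:
            self.data = data
            return self
        child = next((c for c in self.children if c.name == path[0]), None)
        if child is None:
            child = Node(path[0])
            self.children.append(child)
        return child.insert(path[1:], data)

    def get(self, path):
        if not path:
            return self
        child = next((c for c in self.children if c.name == path[0]), None)
        return child.get(path[1:]) if child else None

# The config tree holds values...
config = Node("root")
config.insert(["system", "time-zone"], "UTC")
# ...while the reference tree holds command definitions in the same structure
reference = Node("root")
reference.insert(["system", "time-zone"], {"type": "txt", "help": "Time zone"})

print(config.get(["system", "time-zone"]).data)  # UTC
```

Because the tree lives entirely in memory, tests can build it by hand and check set/delete behaviour without any filesystem, union mounts, or live system.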
A tricky point: we need local processes to be able to communicate with the backend daemon without explicit authentication (having to enter a password in the interactive shell after authenticating over SSH or on the local console would be very weird), but we also need to prevent operator-level users from entering configuration mode, and if we ever want RBAC, we need to know who the user is. Short of deploying Kerberos or another SSO mechanism locally, this leaves us with only one option: UNIX domain sockets, which, on Linux and FreeBSD at least, support retrieving the peer's UID and GID.
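As an illustration of the mechanism (not vyconf code), here is how a process on Linux can read the peer's credentials from a UNIX domain socket with the SO_PEERCRED socket option, sketched in Python:

```python
import os
import socket
import struct

def peer_credentials(sock):
    """Return (pid, uid, gid) of the peer of a connected UNIX domain socket.

    On Linux, SO_PEERCRED yields a struct ucred: three native ints.
    """
    creds = sock.getsockopt(socket.SOL_SOCKET, socket.SO_PEERCRED,
                            struct.calcsize("3i"))
    return struct.unpack("3i", creds)

# Demonstrate on a connected pair of UNIX sockets within one process
server, client = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
pid, uid, gid = peer_credentials(server)
print(uid == os.getuid())  # True: the peer is this very process
server.close()
client.close()
```

A daemon accepting connections on a UNIX socket can thus learn who connected without any password exchange, and use the UID/GID to decide whether to allow configuration mode.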
I guess that's all for now. If you want to see the backend code as it's being written, look here: https://github.com/vyos/vyconf
To build it, you need a working OCaml setup, which is not hard to get. First, install OPAM (the OCaml package manager): http://opam.ocaml.org/doc/Install.html Then do "opam switch 4.03.0" and install the build tools and dependencies with "opam install oasis ppx_deriving_yojson lwt ounit xml-light pcre". When you have all that installed, do "oasis setup -setup-update dynamic", which will give you the setup.ml file and the ./configure and Makefile wrappers. Then run the configure step ("ocaml setup.ml -configure --enable-tests"), and run "make" or "make test".
If you have any issues with it, feel free to ask me.