20200919

FreeBSD Subversion to Git Migration: Pt 1 Why?

 FreeBSD moving to Git: Why

With luck, I'll be writing a few blogs on FreeBSD's move to git later this year. Today, we'll start with "why."

Why?

There are a number of factors motivating the change. We'll explore the reasons, from the long-term viability of Subversion to wider support for tools that will make the project better. Today I'll enumerate these points. There are some logistical points around how the decision was made, but I'll not get into the politics of how we got here. While interesting for insiders who like to argue and quibble, they are no more relevant to the larger community than the color of the delivery truck that delivered groceries to your grocer this morning (even if it had the latest episode of a cool, scrappy cartoon cat that was involved in a multi-year arc wooing the love of his life by buying food at this store).

Apache has moved on, so has LLVM

The Apache Foundation used to be the caretaker and main user of Subversion. They used Subversion for all their repos. While they are still technically the caretaker of Subversion, they've moved all their repositories to git. This is a worrying development because the foreseeable outcome is less Subversion development. It means the FreeBSD project would need to take on supporting Subversion itself if we remain on it in the long term. FreeBSD is now the last large Open Source project using Subversion. LLVM recently made its transition to git. There are very real concerns about the health and viability of the Subversion ecosystem, especially when compared to the thriving, vibrant git ecosystem.

Better CI support

Git has better support for newer CI tools than Subversion. This will allow us, once things are fully phased in, to increase the quality of the code going into the tree, as well as greatly reduce build breakages and accidental regressions. While one can use CI tools outside of git, integration into a git workflow requires less discipline on the part of developers, making it easy for them to fix issues found by CI as part of the commit/merge process before they affect others.

Better Merging

Git's merging facilities are much better than Subversion's. You can also more easily curate patches, since git supports a rebase workflow. This allows cleaner patches that are logical pieces of larger submissions. Subversion can't do this.

Git also allows integration of multiple git repositories with subtree rewriting via 'git subtree merge'. This allows for easier tracking of upstream projects and will allow us to improve the vendor import work flow.
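As a concrete illustration, here is a throwaway demo of the classic subtree-merge recipe for grafting an upstream project under a subdirectory. All paths, repo names and the contrib/vendor prefix are invented for the sketch; the real vendor work flow will differ.

```shell
# Throwaway demo of the subtree merge recipe (all names invented).
tmp=$(mktemp -d)

# A stand-in "vendor" repository with one file.
git init -q "$tmp/vendor"
(cd "$tmp/vendor" && echo 'vendor code' > lib.c && git add lib.c &&
    git -c user.name=v -c user.email=v@example.org commit -qm 'vendor release')

# Our tree, which will carry the vendor code under contrib/vendor.
git init -q "$tmp/main"
cd "$tmp/main"
echo 'our code' > README
git add README
git -c user.name=m -c user.email=m@example.org commit -qm 'initial'

git remote add vendor "$tmp/vendor"
git fetch -q vendor
# Record the vendor history without changing our files, then graft the
# vendor tree under contrib/vendor/ and commit the result.
git merge -s ours --no-commit --allow-unrelated-histories FETCH_HEAD
git read-tree --prefix=contrib/vendor/ -u FETCH_HEAD
git -c user.name=m -c user.email=m@example.org commit -qm 'import vendor code'
```

Because the vendor history is now an ancestor of ours, later vendor updates can be merged normally and git lines the subtree up for us.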

Robust Mirroring

Git can easily and robustly be mirrored. Subversion can be mirrored, but that mirroring is far from robust. One of the snags in the git migration is that different svn mirrors have different data than the main repo, and than each other. Mirroring in git is built into the work flow. Since every repo is cloned, mirroring comes along for free. And there are a number of third-party mirroring sites available, such as GitHub, GitLab and SourceForge. These sites offer collaboration and CI add-ons as well.
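In its simplest form, a mirror is just a bare clone that tracks every ref. A throwaway sketch (all paths invented):

```shell
# Throwaway demo: a mirror is a bare clone of everything (paths invented).
tmp=$(mktemp -d)
git init -q "$tmp/origin"
(cd "$tmp/origin" && echo one > f && git add f &&
    git -c user.name=x -c user.email=x@example.org commit -qm 'first')

# Every branch and tag comes along; the clone is a full backup.
git clone -q --mirror "$tmp/origin" "$tmp/mirror.git"

# Refreshing the mirror later is one command; --prune drops deleted refs.
git -C "$tmp/mirror.git" fetch -q --prune origin
```

After the fetch, the mirror's refs are bit-for-bit the same as the origin's, which is the robustness svnsync never quite delivered.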

Git can sign tags and commits; Subversion cannot. We can increase the integrity of the source of truth through these measures.
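For instance, a developer could opt in to signing everything with a couple of configuration knobs. This is only a sketch: the key id below is made up, and `tag.gpgSign` requires a reasonably recent git.

```ini
# ~/.gitconfig fragment (illustrative only)
[user]
        signingkey = 0xDEADBEEFCAFE1234   ; hypothetical GPG key id
[commit]
        gpgsign = true                    ; sign every commit automatically
[tag]
        gpgSign = true                    ; sign annotated tags too
```

Consumers can then check the signatures with 'git verify-commit' and 'git verify-tag'.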

Features from 3rd Party Sites

Mirroring also opens up more 3rd party plug-ins. GitLab can do some things, GitHub others, in terms of automated testing and continuous integration. Tests can be run when branches are pushed. Both these platforms have significant collaboration tools as well, which will support groups going off and creating new features for FreeBSD. While one can use these things to a limited degree with Subversion mirrored to GitHub, the full power of these tools isn't realized without a conversion to git.

The wide range of tools available on these sites, or in stand-alone configurations, will allow us to have both pre-commit checks, as well as long-term post-commit tests running on a number of different platforms. This will allow the project to leverage existing infrastructure where it makes financial sense to let others run the tests, while still allowing the project to retain control of the bits that are critical to our operations.

Improved User-Submitted Patch Tracking and Integration

One area we've struggled with as a project is patch integration. We have no clear way to submit patches that will be acted on in a timely fashion, or at least that's the criticism. We do have ways, but they are only partially effective at integrating patches into the tree. Pull requests, of various flavors, offer a centralized way to accept patches, and have tools to review them. This should lower the friction for people submitting patches, as well as making it easier for developers to review the patches. Other projects have reported increased patch flow when moving to git. This can also be coupled with automated testing and other validation of the patch before a developer looks at it, which addresses one of the big issues with past systems: a very low signal-to-noise ratio. While not a panacea, it will make things better and make wider use of scarce developer time.

Collaboration

Some have said that git, strictly speaking, isn't a pure source code management system. It is really a collaboration tool that also supports versioning. This may sound like a knock on git, but really it's git's greatest strength. Git's distributed model allows for easier and more frequent collaboration. Whole web sites have been built around this concept, and they show the power of easy collaboration to amplify efforts and build community.

Skill Set

All of this is before the skill-set arguments are made about kids today just knowing git, but needing to learn Subversion. That has increasingly been a source of friction. This argument is supported by anecdotal evidence of people complaining about having to learn Subversion, of professors having to take time to teach it, and so on. In addition, studies in the industry have shown a migration to git away from other alternatives. Git now has between 80% and 90% of the market, depending on which data you look at. It's so widely used that our non-use of it is a source of friction for new developers.

Developers are already moving

More and more of the active developers on the project use git for all or part of their development efforts. This has led to a minor amount of friction because these ways are not standardized and have fit-and-finish issues when pushing the changes into Subversion. That friction is somewhat offset by the increase in productivity git offers these developers. The thinking is that having git as the source of truth will unlock even more of that productivity.

Downsides

First, git has no keyword expansion support for implementing $FreeBSD$. While one can quibble over the git add-ins that do this sort of thing, the information added isn't nearly as useful as Subversion's. This will represent a loss. However, tagged keyword expansion has been going out of style for a long time. It used to be that source code control systems would embed the commit history in files committed, only to discover weird things happened when they were imported into other projects, so the practice withered. We ran into a similar issue years ago with $Id$, and all the projects switched to $FooBSD$ instead. This was useful when source code tracking was weak and companies randomly imported versions: it told us what versions were there. Now, companies tend to do this with git, which has better abilities to track back to the source. The value isn't zero, but it's much lower than when we adopted it in the 90s. Git also doesn't deal very well with all the merge conflicts keyword expansion causes. Since the tool rewards people for importing in a more proper way, more companies appear to be using it that way, lessening the need for explicit source marking. [edit: some don't consider this a loss at all].

Second, git doesn't have a running count of commits. One can work around this in a number of ways, but there's nothing fundamental that can be used as a commit number as reliably as Subversion's revision number. Even so, the workarounds suffice for most uses, and many projects are using git and commit counts successfully today.
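The usual workaround is to count the commits reachable from a branch head. A throwaway sketch:

```shell
# Sketch of the usual substitute for svn revision numbers (throwaway repo).
tmp=$(mktemp -d)
git init -q "$tmp/r"
cd "$tmp/r"
for i in 1 2 3; do
    echo "$i" > file
    git add file
    git -c user.name=x -c user.email=x@example.org commit -qm "change $i"
done
# The count is stable for a given branch history, but unlike an svn
# revision it is per-branch, not global across the repository.
git rev-list --count HEAD
```

This prints 3 for the repo above. The caveat is the one in the comment: the number is only meaningful relative to one branch's history, which is why it's a workaround rather than a replacement.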

Third, the BSDL git clients are much less mature than the GPL ones. Until recently, there was no C client that could be imported into the tree. While one might debate whether or not that's a good idea, there's a strong cultural heritage of having all you need in the base system that's hard to shrug off. OpenBSD recently wrote got, which has an agreeable license (but a completely different command-line interface, for good or ill). It has its issues, which aren't relevant here, but is maturing nicely. Even with the current restrictions, it is usable. Porting got to FreeBSD is an active effort, complicated by the large number of OpenBSDisms that are baked in (some necessary, some gratuitous). The OpenBSD people are open to having a portable version of got, so this is encouraging.

Finally, Change Is HARD. It's easy to keep using the same tools, with the same work flows, with the same people as you did yesterday. Learning a new system is difficult. Migrating one work flow to another is tricky. You have to balance the accumulated knowledge and tooling benefits against the cost it will take to move to something new. The git migration team views moving from Subversion to git as the first of many steps in improving and refining FreeBSD's work flow. As such, we've tried to create a git workflow that will be familiar to old users and developers, while at the same time allowing for further innovation in the future.

Conclusion

Although it's not without risk and engineering trade-offs, the bulk of the evidence strongly suggests that moving to git will increase our productivity, improve project participation, grow the community, increase quality and produce a better system in the end. It will give us a platform on which we can re-engineer other aspects of the project. This will come at the cost of retooling our release process and our source distribution infrastructure, and of developers needing to learn new tools. The cost will be worth it, as it will pay dividends for years to come.

20200831

How to transport a branch from one git tree to another

git branch transport

Recently, I had need to move a branch from one repo to another.

Normally, one would do a 'git push' or 'git fetch' to do this. It's fundamental to git. If that's what you're looking for, you should look elsewhere.

So what abnormal thing am I doing?

git svn.

So I have two git svn trees. They both point to FreeBSD's upstream subversion repo. git svn is cool and all, but has some issues.

First, it creates unique branches between the two trees. The git hashes are different. This makes it harder to move patches between trees. It's possible, but ugly (too ugly to document here).

Second, sometimes (though rarely) git svn loses its mind. If it does this and you have a dozen branches in flight when it happens, what do you do?

Third, git svn fetch, the first time, takes over a day. So recreating the tree isn't to be undertaken lightly.

How to export / import the branch

Git has a nice feature to send mail to maintainers. You can take all the patches on a branch and email-bomb a mailing list. This is a direct result of the Linux (and friends) work flow. Leaving aside the wisdom of that work flow, git's support for it is helpful, because git also implements a 'hey, I was mail bombed, please sift through it and apply it to my tree' feature. Between the two we have the makings of a solution to my problem.

The 'export' side is 'git format-patch'. It normally exports each patch as a separate file. However, the '--stdout' flag allows you to export the whole series as a stream.

The 'import' side is 'git am'. One could use 'git apply', but it requires you to write a loop that 'git am' already provides.
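Put together locally, the round trip looks like this. It's a sketch with throwaway repos (no ssh, and the repo and branch names are invented); the real use is between two git svn trees.

```shell
# Local round trip of the export/import pair (throwaway repos).
tmp=$(mktemp -d)
git init -q "$tmp/one"
cd "$tmp/one"
base=$(git symbolic-ref --short HEAD)   # default branch name varies by git version
echo base > f
git add f
git -c user.name=x -c user.email=x@example.org commit -qm 'base'

git clone -q "$tmp/one" "$tmp/two"      # stands in for the other tree

git checkout -qb foo
echo changed > f
git -c user.name=x -c user.email=x@example.org commit -qam 'a change on foo'

# Export the whole branch as a mail stream and replay it in the other tree.
git format-patch --stdout "$base"..foo |
    (cd "$tmp/two" && git checkout -qb foo &&
        git -c user.name=x -c user.email=x@example.org am -q)
```

After this, the second tree has a foo branch with the same change, but with freshly created commits rather than shared hashes.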

So, recently, when I had another instance of my every-year-or-two 'git svn screwed the tree up' experience, I was able to transport my branches by putting these two together and understanding a bit about what is going on with git.

The Hack

bad-tree% git format-patch --stdout main..foo | \
    ssh remote "(cd some/path && git checkout main && \
        git checkout -b foo && git am)"

So that's it. We format the patches for everything from the mainline through the end of foo. It doesn't matter where 'foo' branched off main. The main.. notation is such that it does the right thing (see gitrevisions(7) for all you could do here).

On the receiving side, we cd to the git svn repo (or really any repo; this technique generalizes if you're moving patches between projects, though you may need to apply odd pipeline filters to change paths), make sure we have a clean tree with 'git checkout main' (though I suppose I could make that more robust). We create a new branch foo and then apply the patches. Easy, simple, no muss, no fuss.

BTW, I use '&&' above so that if I've fat-fingered anything, or there's already a foo branch, etc., things will fail as early as possible.

But what about when things go wrong...

Well, things go wrong from time to time. Sadly, this blog doesn't have all the answers; you'll basically have to slog through it manually. The few times it has happened to me, I've copied the patch over and consulted git-am(1) to see how each of the weird things I hit had to be dealt with.

'git am --show-current-patch' is the most helpful command I've found to deal with the occasional oops.

I've also found that the fewer patches on a branch I have to do this with, the more likely it is to apply cleanly on the remote side.

One Last Hack

So I was helping out with the OpenZFS merge, and as part of that created a bunch of changes to the FreeBSD spl in OpenZFS. I did this in the FreeBSD tree, but then needed to create a pull request. I used a variation of this technique to automate sending the branch from one place to another.

cd freebsd
git format-patch --stdout main..zstd sys/contrib/openzfs | \
    sed -e 's=/sys/contrib/openzfs/=/=g' | \
    (cd ../zfs ; git checkout master && git checkout -b zstd && \
        git am)

This nicely moved the patches from one tree to the other to make it easier for me to create my pull request. Your mileage may vary, and I've had to play around with the filter to make sure I didn't catch unwanted things in it... I've not taken the time to create a regexp for the lines that I need to apply the sed to for maximum safety, but so far I've gotten lucky that the above path isn't in any of the files I want to transport this way...

Final Thought

One could likely also use subtree merging to accomplish this. I've had issues in the past, though, when there wasn't a common ancestor. Caveat Emptor. It's not the right tool for this problem, but it often is for related problems.

Postscript

Git rebase was originally implemented this way, but with -3 passed to 'git am' on the command line, which is quite handy.

20200816

A 35-year-old bug in patch found in efforts to restore 29 year old 2.11BSD

 A 35 Year Old Bug in Patch.

Larry Wall posted patch 1.3 to mod.sources on May 8, 1985. A number of versions followed over the years. It's been a faithful ally for a long, long time. I've never had a problem with patch until I embarked on the 2.11BSD restoration project. In going over the logs very carefully, I've discovered a bug that bites this effort twice. It's quite interesting to use 27-year-old patches to find this bug while restoring a 29-year-old OS...

After some careful research, this turned out to be a fairly obscure bug in an odd edge case caused by "the state of email in the 1980s," which can now be relegated to the dustbin of history...

What is the bug?

Why has no one else noticed this bug? Well, it only happens when we're processing the last patch hunk in a file, that hunk only deletes lines, and the 'new-style' context diff omits the replacement text (since it's implied). Oh, and you also have to be applying the patch with -R. That's pretty obscure, eh?

I found it with the following patch from the 2.11BSD series (patch 107). It ends like so:
*** /usr/src/bin/sh/mac.h.old   Fri Dec 24 18:44:31 1982
--- /usr/src/bin/sh/mac.h       Mon Jan 18 08:45:24 1993
***************
*** 59,63 ****
  #define RQ    '\''
  #define MINUS '-'
  #define COLON ':'
-
- #define MAX(a,b)      ((a)>(b)?(a):(b))
--- 59,61 ----

which seems fairly routine and pedestrian, no? However, this hunk runs afoul of a very old bug in the patch code when one tries to reverse apply (-R) it. I got the following output:
--------------------------
|*** /usr/src/bin/sh/mac.h.old  Fri Dec 24 18:44:31 1982
|--- /usr/src/bin/sh/mac.h      Mon Jan 18 08:45:24 1993
--------------------------
Patching file usr/src/bin/sh/mac.h using Plan A...
No such line 62 in input file, ignoring
Hunk #1 succeeded at 53 with fuzz 1 (offset -6 lines).
done

Which looks odd. Why is it complaining about a line that isn't there? Why did it misapply the patch 6 lines earlier? It thinks it succeeded, but really it added back the MAX macro line too early.

Where is the bug?

While debugging this, I quickly discovered that the inverse patch file looks weird (patch will generate it for you in the .rej file):

***************
*** 59,61 ****



--- 59,63 ----
  #define RQ    '\''
  #define MINUS '-'
  #define COLON ':'
+
+ #define MAX(a,b)      ((a)>(b)?(a):(b))

Notice the blank lines; they will become important later. They shouldn't be there. The start of the patch should look like:

***************
*** 59,61 ****
--- 59,63 ----

with things snugged together. That's our first clue as to what's going wrong. Since this applies only to reverse patches, we need to make sure that pch_swap is doing what it's supposed to be doing. It's the thing that touches the internal representation when the -R flag is given to 'rewrite' the normalized form of the patch.

Setting breakpoints shows that pch_swap is producing garbage out because it's getting garbage in: for some reason, the 3 extra blank lines come into this routine for swapping. So it's not a bug in reversing patches. Which is good: this bug doesn't bite if it isn't the last hunk in the patch file.

So what is inserting those blank lines?  A little debugging later, lands us on the following code (in FreeBSD, other implementations are similar) in another_hunk() in pch.c:
    len = pgets(true);
    p_input_line++;
    if (len == 0) {
        if (p_max - p_end < 4) {
            /* assume blank lines got chopped */
            strlcpy(buf, "  \n", buf_size);
        } else {
            if (repl_beginning && repl_could_be_missing) {
                repl_missing = true;
                goto hunk_done;
            }
            fatal("unexpected end of file in patch\n");
        }
    }
This is a little hard to follow, but it basically says that if pgets() returns 0 (which it does at the end of the file), then we try to bail out. If p_max - p_end < 4, it will insert a blank line. Otherwise, it will assume the replacement text is missing if we've started looking at the replacement and it could be missing. Fairly straightforward.

p_max gets set to the largest possible extent of the patch elsewhere in another_hunk(), when the "--- 59,61 ----" line of the original patch is parsed. In this case, p_max is 9 and p_end is 6 (p_max is set to p_end + 61 - 59 + 1). For normal context diffs, we'd expect there to be an additional 3 lines of context here. But we don't have that with this diff, since they are omitted.

So why '4' in the second 'if' in the quoted code above? What's so magic about it? Indeed, if we hack the patch to have 6 lines of context instead of 3, it applies correctly. So what gives? If we remove that entire if, the patch applies correctly as well. So that's a possible fix, but what are we losing by doing this?

The Fix

As noted, if we just remove the second if entirely and replace it with the lines from the 'else' clause, the patch applies. Now I need to justify just removing the if. An alternate fix would be to apply the heuristic if p_end != repl_beginning, but otherwise not. However, I think that fix is worse, because the whole if isn't needed.

The oldest patch version I can find is patch 1.3 which Larry Wall posted May 8, 1985 to mod.sources in the old USENET hierarchy (well, I guess it's all old now, so maybe the pre-reorg hierarchy). The SCCS comments in the file suggest it was started around Christmas the prior year, but I can't find any of those versions extant. The code is clearly there:
            ret = pgets(buf,sizeof buf, pfp);
            if (ret == Nullch) {
                if (p_max - p_end < 4)
                    Strcpy(buf,"  \n"); /* assume blank lines got chopped */
                else
                    fatal("Unexpected end of file in patch.\n");
            }
though I don't think that the bug actually bit that version since it didn't try to fill in the blanks. The 2.0 version, released on Oct 27, 1986 does have code very similar to the code we use today:
            ret = pgets(buf, sizeof buf, pfp);
            if (ret == Nullch) {
                if (p_max - p_end < 4)
                    Strcpy(buf, "  \n");  /* assume blank lines got chopped */
                else {
                    if (repl_beginning && repl_could_be_missing) {
                        repl_missing = TRUE;
                        goto hunk_done;
                    }
                    fatal1("Unexpected end of file in patch.\n");
                }
            }
which has this bug for the same reason modern code has this bug...

So 'assume blank lines got chopped' is really only relevant to other types of patches (old-style context diffs, I believe). One could also perhaps fix this only for old-style context and normal diffs. However, I think that's the wrong fix too. It's one of many heuristics that deal with 'diff going from A to B gets distorted in some predictable ways', a problem we no longer have to deal with.

So why was the code added? I've sent an email to Larry Wall, but I've not heard back from him (he's gone on to perl fame, and hasn't usually messed with patch issues since maybe 1990, so I'm not hopeful of a reply). Absent that, though, I can relate my limited experiences of USENET in the late 1980s that are likely relevant. Email was viewed by many authors of mail software as just a way to get text from point A to point B over very expensive data links. As such, there was little compunction about making minor edits to the email that was sent to facilitate these goals. The 'shar' programs of the era recognized this problem and prepended 'X' to all the lines in files run through them. A common issue was leading white space being deleted, and this solved that. Other issues with mailers and mail software included white space being inserted at the start of every line for replies. patch(1) itself deals with this case by trying to adjust for indented patch files, removing just enough leading white space to dig the applicable part of the diff out of these distorting influences. The notion of fuzz and other heuristics in patch cope, in part, with these difficulties. It's small wonder that, in addition to all these issues, patch coped with a few lines of trailing white space being deleted and corrupting the patch.

We no longer live in a world where patches are subjected to such hostile conditions. Rather than tweak this heuristic, designed to cope with BITNET, UUCP, SMTP, VMS, VM and any number of other mailers in the wild, to deal with my case, I would suggest that we delete the heuristic as no longer relevant. Patch files are no longer subject to this level of mischief. And if they are, adding a few blank lines to the end of a corrupt patch seems like a much smaller universe of issues than having basic functionality broken. This runs the risk of breaking no well-formed patches. The new-style context diffs that are padded ignore this padding. Unified diffs and the other variants patch supports don't need this padding and will ignore it. 'ed' scripts don't take this code path. Old-style context diffs are an extremely rare bird these days.

Side note: Old Style Context Diffs

So what program produced the so-called old-style context diffs? The earliest diff I could find that produced context diffs was in 4.0BSD. The patch program looks for "*** XX,YY" for old style, but "*** XX,YY ****" for new. Looking at 4BSD sources, we see that they produce the former style. Releases through 4.2BSD included this style. Starting with 4.3BSD, the new style was produced. So any system that was 4.2BSD based had the old style, and everything since 4.3BSD has had the new style (including GNU diff, which never produced the old style that I can tell). All diff programs since then have produced new-style context diffs (or the newer unified diffs, which are even shorter). 4.3BSD was released in 1986, after the first release of patch but before 2.0, which accounts for patch understanding both variants.

FreeBSD Fix

I've committed the fix for FreeBSD here. It should be trivial to adapt for the other versions of patch that I've reviewed.

Conclusion

So, a minor glitch I'd noticed in my reconstruction of 2.11BSD as released led me to find a bug in patch that's been in the code for 35 years (and been a bug for at least 34 of those years). The bug is an extreme edge case that triggers a heuristic for deleted trailing blank lines, which in turn causes a problem reversing the patch, but only when it's the last hunk at the end of a patch file and only if it just deletes lines. Still, it's rare enough to find and fix a 35-year-old bug that I thought I'd write this up. It's also nuts that I found this using 27-year-old patches...

Addendum

On Hacker News, I see that modern GNU patch doesn't suffer from this issue. It would appear that GNU patch corrected this some time ago; I was looking at an old version when I thought that it hadn't been fixed...

20200806

Recovering 2.11BSD, fighting the patches

Recovering 2.11BSD, fighting the patches

2.11BSD was released March 14, 1991 to celebrate the 30th anniversary of the PDP-11. It came a bit over two years after 2.10.1BSD was released in January 1989.

2.11BSD was quite the ambitious release. The high points include:
  1. The kernel logger (/dev/klog)
  2. The  namei cache and argument encapsulation calling sequence
  3. readv(2)/writev(2) as system calls rather than emulation/compatibility routines
  4. Shadow password file implementation (the May 1989 4.3BSD update)
  5. A TMSCP (TK50/TU81) driver with standalone support (bootblock and standalone driver)
  6. Pronet and LH/DH IMP networking support
  7. The portable ascii archive file format (ar, ranlib)
  8. The Unibus Mapping Register (UMR) handling of the network was rewritten to avoid allocating excessive UMRs.
  9. The necessary mods to the IP portion of the networking were made to allow traceroute (which is present in 2.11BSD) to run.
  10. Long filenames in the file system
In addition, there was a completely new syscall setup. Item #10 required a full new install, restoring from a backup tape. Item #7 would be a thorn in my side.

However, following on the heels of 2.10.1BSD as it did, a number of changes followed quickly after the release, so by November of 1994, 195 patches had been released for the tree. Well, there were more patches than that (patches 2 and 3 had several parts, and my work has uncovered a number of hidden patches along the way, but more about those later).

The Problem(s)

Well, if we have patch 195, and all 195 patches, what's the problem? Why can't you do a simple for loop and patch -R to get back to original? And for that matter, why were no copies of the original saved?

Turns out the root of both of these problems can be summarized as 'resource shortage'. Back in the day when this was released, 100MB disks were large. The release came on two mag tapes that held 40MB each. Saving a copy of the originals required a substantial amount of space, and it was more important to have the latest release, not the original release, for running the system. It was more efficient and better anyway.

In addition to having small disks, these small systems were connected via USENET or UUCP. These connections tended to be slow. Coupled with the small size of the storage on the PDP-11s running 2.11BSD, the patches weren't what we think of as modern patches. The patches started before the newer unified diff format was created. That format is much more efficient than the traditional context diffs. In addition, compress(1) was the only thing available to compress things, giving poor compression ratios. The UUCP transport of usenet messages also meant that the messages had to be relatively short. So the 'patches' were really an upgrade process that often included patches. But just as often, it included instructions like this from patch 4:
Fix:
        Apply the patch "/tmp/dif" (multiple files updated by it) and
        install the new scripts ":splfix.mfps" and "splfix.movb+mfps".
which included a shar file that needed to be run through /bin/sh (or unshar, if you were lucky). You then needed to follow the instructions, which included patching. Patch 14 illustrates the problem: it includes instructions that say to run /tmp/c, created by the shar. It looks like this:
#! /bin/sh -v
cd /usr/src/usr.bin
rm lint/*
zcat PORT/lint.tar.Z | tar xf -
cd lint
rm :yyfix llib-*
cp -p /tmp/libs libs
So this is a very efficient way to remove files. Often the diffs were larger than the original files, so it was more efficient to remove and replace. But this presents two problems: what files were there before? And even in this case, what files were removed? These operations were not uncommon, and they destroyed information. It's a bit like running the sausage mill backwards to rebuild the pig.

Another problem for this project is phantom patches in the first 80 patches. A number of changes appear only in the patches that come with the catch-up kit. These aren't a surprise, as a number of patches going backwards had unexpected slight line number anomalies in them. After patch 80, patch generation appears to be better managed, but at the time of this writing, the author hasn't tried to reapply all the patches to get back to the p195 tape from the reconstructed release tape. The author also has not audited the comp.bugs.2bsd mailing list between the March 1991 release and the November 1994 p195 tape. Given the surprises so far (tftp and tftpd were updated in May 1991, patches were posted in comp.bugs.2bsd, but no formal patch was released and this change does appear in the catch-up patches), additional discoveries like this cannot be ruled out.

The Path Forward

Taken at face value, it's hopeless. There's no way to definitively reconstruct the lost information. But there are some glimmers of hope. First, there weren't a lot of files deleted like this. lint, pcc, ld, ranlib and a few others were removed. However, what's missing can be rather critical. Next, we have the prior release (2.10.1BSD). A good first approximation of what was in the release is what's in the prior release. Many files aren't changed often in 2BSD. So we have some data present there, even if some was destroyed.

There were also a number of patches. These can provide data about the missing files. There's all the patches in the 2.11BSD series, as well as a number of patches posted to comp.bugs.2bsd after 2.10.1BSD was released. We know from the release notes that these were included in the 2.11BSD release.

Next, we have 4.2BSD, 4.3BSD and even 4.4BSD and the SCCS files that go along with them. This means that we can see changes that were going on in the 4BSD series, and we know much code came from there in 2.11BSD, both before and after the release.

Next, we have dates. We are fortunate that all the files are dated in the tree. This means we have a way of testing if we've missed anything in our reconstruction. If we reconstruct something that's dated after the release of 2.11BSD, we have a very good reason to believe that we've missed something.

There was a catch-up kit produced around patch 80. It contained a number of commands to re-remove what should have been removed, as well as to patch everything up. This gives us a second way of checking: can we apply this kit to get to patch 80? Does it work? And if we replay the individual patches, we should get the same thing, which in turn should match what we got unapplying the patches back from 195. A word of caution: as with all other things around this project, we must always be open to the possibility of a mistake in the catch-up kit. There are at least a few files that it tries to remove that aren't necessarily there.

Finally, we know that the system was released as binaries, which means that the system built. So we can test the reconstruction by compiling it. We should be able to build the final system. But we need to do that with the final system itself, so some way must be found to bootstrap it for this test. I'll talk more about that in another blog.

The Slog Backward

The slog backwards proceeded in fits and starts. For the first 20 or so patches (that is, patches 177-195), it was fairly straightforward. I was able to use them to develop the framework to proceed. It wasn't until I hit the a.out binary format changes that I had problems (looking back, the first of these is patch 176).

The Framework

I settled on a fairly simple framework going backwards. There are three types of patches in the archive. First, there are the pure patches. These have a simple patch set to apply, and maybe some verbal description of commands to run. Often, these commands were just the minimal amount needed to rebuild, so you didn't have to suffer through a day-long compile. Sometimes, they included sneaky instructions. Patches 89-99 all contain the following text:
The /GENALLSYS script is obsolete and has been removed
so in addition to applying the patches, the script needed to be removed. To complicate things, some patches were generated from the root directory, while others were generated in a leaf node. To this end, when the script detects a simple patch, it tries to locate a hints/patch#.setup script. If one is found, that script is sourced. Setup scripts are expected to be idempotent so we can reuse them for the forward trip. They all cd to where the patch was generated. After the setup, if any, the mk211bsd script will apply the patch.
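The core of a backward step is just reverse-applying a patch with patch -R. Here's a self-contained sketch of what that amounts to; the file and patch names are invented, not taken from mk211bsd:

```shell
# Toy demonstration of the backward step: regenerate the pre-patch
# file by reverse-applying the forward patch.  Names are invented.
set -e
work=$(mktemp -d); cd "$work"

printf 'old line\n' > prog.c            # state before the patch
cp prog.c prog.c.orig                   # keep a copy to check against
printf 'new line\n' > prog.c            # state after the patch

# Generate the forward patch, as the original authors would have.
diff -u prog.c.orig prog.c > 42.patch || true

# Going backwards: reverse-apply it to recover the older file.
patch -R prog.c < 42.patch
cmp prog.c prog.c.orig && echo "recovered pre-patch file"
```

The real script wraps this with the setup-script sourcing described above, since each patch may have been generated from a different directory.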

However, when there are other things to do, that won't work. So there's a second way to assist: if there's a hints/patch#.unpatch, then that will be run, and it is responsible for moving the patch backwards. About 20 patches need this assistance. For example, patch 118 replaces ucb/Mail with a port from 4.3. So I need to reconstruct what was there. In this case, I copy the ucb/Mail files from 2.10.1 and then check for patches in comp.bugs.2bsd (there aren't any) and in the patch stream (patch 10 tweaks it). So I have to reapply all those patches in the unpatch script.

The second type of file is the shar file. Here, one has to run the shar file through /bin/sh (or unshar) and use the extracted files to do something. Oftentimes there's a short script to run and a patch to apply, but other times it's more complicated. In these cases, you must provide a hints/patch#.unshar file. This file takes care of unsharing things and running whatever you need to do to unapply it. This varies a lot. In some cases, you extract a patch (or set of patches) and reverse-apply them. In other cases, you need to remove files. In still others, you need to snag them from 2.10.1 and touch them up (like we did for ucb/Mail).
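A shar file is nothing more than a shell script that recreates its own payload, which is why /bin/sh is all you need to unpack one. A toy example (the archive contents are invented):

```shell
# A minimal shar-style archive: running it through sh extracts fix.c.
set -e
work=$(mktemp -d); cd "$work"

cat > patch90.shar <<'EOF'
# This is a shell archive.  Run it through /bin/sh to extract fix.c.
cat > fix.c <<'SHAR_EOF'
int fixed = 1;
SHAR_EOF
exit 0
EOF

sh patch90.shar        # the "unshar" step
cat fix.c
```

Real shar files from the patch stream also carry checksums and wc counts, but the mechanism is the same.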

The final type is a tarball. These files are treated like patches, with the usual unpatch protocol. However, the unpatch script has to know how the tar file encoded its patches.

Interesting Patches

There are 195 patches (and a few sub-patches). Describing all of them would be quite tedious. Only about 80 hints are needed (and a few additional files for reconstructed fixups). I'll describe some of the more interesting ones briefly here. Most of the interesting ones are in the last 50 or so. And by interesting, I mean ones that destroyed information such that it had to be reconstructed, or that otherwise affected the reconstruction.

Patch 185 replaces m4, so we have to snag the one from 2.10.1BSD to replace it, as well as unapply a patch in the shar file. Patch 115 had done a fixup on the Makefile, so we have to reapply that fixup so that patch 115 will unapply correctly later.

Patch 184 fixes the makefiles to use pipelines instead of temporary files, now that the assembler can cope with pipelines. At the same time, it removed a few obsolete compatibility system calls that need to be added back from 2.10.1BSD. Adding them back messes up the directory times, which make depends on, so workarounds have to be deployed in build2.sh, the bootstrap script (make in 2.11BSD assumes times in the future mean the directory is up to date and doesn't need to be rebuilt, while newer makes ignore that time and always descend into the directory).

Patch 180 updates symcompact.c and strcompact.c by replacing them entirely. These files weren't in 2.10.1BSD but came in with patch 172, so they needed to be recovered from there. It also adds symdump.c, which needs to be removed. There's no data loss here, but it illustrates the need to look at all the patches to see what might need to be recovered (and what doesn't).

Patch 178 removes an unused header and fixes up another one. The headers were resynchronized in patch 175, so a small fixup is needed here. I need to investigate this and other 'fixup' patches. One unknown is whether 2.11BSD shipped with the unsynchronized /usr/include and /usr/src/include or not. The reconstruction tries to get the right ones in each place, but uses the /usr/include versions in /usr/src/include when we can't otherwise reconstruct them.

Patches 158-176 remove the 8-character limit on symbol names in programs. These are quite disruptive, but for the most part just large patches. ranlib(1) was grabbed from 2.10.1 and patched to cope with the new archive format to undo these patches. ld(1) needed similar work (and once I started rebuilding things, I had to fix it so it would compile). It also needed some fixes from comp.bugs.2bsd. Initially, I'd kept them separate, but I eventually needed to merge them together. ranlib(1) and ld(1) represent the largest retro-programming effort in the reconstruction (code was cut and pasted from other programs that survived, then minimally changed to be functionally correct).

Patches 151 and 152 rework the assembler. A number of patches to as were posted to comp.bugs.2bsd that needed to be applied to 2.10.1's assembler to make things work out. A couple of minor tweaks were also needed after all these changes were applied.

Patch 142 unapplies cleanly, but then we need to unapply changes to all the kernel config files. This is a recurring issue for the generated files, so some fancy scripting is needed to apply them to all the kernel config directories. Patches 124, 121, 93 (which has to remove some directories), 84, 83, 72, 42, 36, 4, and 2 need the same treatment. There are a number of issues with patching the kernel, not least of which is that overlay structures were often hacked on the fly, and preserving them is hard. The restoration has only tried to get the GENERIC (and possibly KAZOO) kernel configs correct; the others it has largely ignored. I'll do a full blog on all the issues to detail what is and isn't recovered.
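The "fancy scripting" boils down to replaying the same edit in every config directory. A hedged, self-contained sketch; the directory names and the OLDFLAG/NEWFLAG edit are stand-ins, not the real patch content:

```shell
# Sketch: apply one textual fixup to the generated Makefile in every
# kernel config directory.  Names and the edit itself are invented.
set -e
sys=$(mktemp -d)

# Stand-ins for /usr/src/sys's per-config directories.
for conf in GENERIC KAZOO OTHER; do
    mkdir -p "$sys/$conf"
    printf 'options OLDFLAG\n' > "$sys/$conf/Makefile"
done

# Replay the same fixup in each config directory.
for dir in "$sys"/*; do
    sed 's/OLDFLAG/NEWFLAG/' "$dir/Makefile" > "$dir/Makefile.new" &&
        mv "$dir/Makefile.new" "$dir/Makefile"
done

grep -l NEWFLAG "$sys"/*/Makefile
```

The catch, as noted above, is that the real per-config Makefiles diverged over time (hand-hacked overlays), so a uniform replay like this only safely covers configs that stayed close to GENERIC.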

Patch 132 brings in named, so it needed to be recovered from 2.10.1, plus patches from patch 106.

Patch 123 is actually missing a part. It updates the documentation for how to install the system. This means we need to apply some kind of fixup. But all we know for sure is that the SCCS id changed, so that's all we fix up. If I find more data to support other changes, I'll update to include them, but so far nothing has surfaced.

Patch 118 replaces ucb/Mail with the one from 4.3BSD. To undo it, revert to 2.10.1BSD's version and apply a patch from patch 10. There are no publicly posted patches to 2.10.1BSD for ucb/Mail.

Patch 80 is the catch-up kit patch. A lot of files were added from USENET. Undoing it is straightforward; making it go forward will be tough, but we have the files at least. This is both a blessing and a curse to the project. On the one hand, we have a second way of cross-checking things. On the other, the cross-checks reveal many problems, including an updated kermit being missing entirely (though, to be fair, /usr/src/new and /usr/src/local weren't considered part of the system until patch 195, so prior to that, patches to this part of the tree were irregular).

Patch 40 is an announcement of corrupt files in the original release in /usr/games. I've chosen to restore them to what was intended for the original tape.

Patch 17 massively reworked pcc. It's not quite right in the reconstruction yet; however, since the 'redo catch-up kit' test fails to delete some expected files, what shipped in 2.11BSD isn't quite right either. It may never be right, and that may not matter since it didn't work anyway.

Patch 14 fixes lint. It was totally broken in 2.11 (but worked in 2.10.1). Undoing it may also be incorrect, for a similar reason as patch 17. Since neither of these worked, it may be quite hard to know what 2.11 shipped with. A case could be made, though, that it doesn't matter...

Patch 2 removed tmscpboot.s. The rest of the patch undoes easily, but tmscpboot.s was new in 2.11BSD. We know it is derived from tkboot.s, which comes from Ultrix-11, but I've not yet been able to tweak it to work...

Current Status

At this point, I've reconstructed a possible release tape. A series of them, actually. All of them fail the tests set forth above (though in fewer and fewer ways each iteration). I'm able to use the different tests to suggest fixes. I've recovered 5 lost patches (which I write about elsewhere). There are at least 5 more, I believe, based on the attempts to apply the catch-up kit. I discovered the first 5 based entirely on bad dates. The current reconstruction is decent (maybe 30-40 files still aren't quite right, and a similar number are reconstructions that work, but it's impossible to know if they are missing bug fixes). The tmscpboot stuff is still missing, and there are a number of minor fixups that need to be reconciled with the known missing patches. All very careful, detailed work, so it may take some time to work through them.

20200804

Bootstrapping 2.11BSD (no patches) from 2.11BSD pl 195

Bootstrapping 2.11BSD (original)

I've had the sources for what I think is the original 2.11BSD for some time now. However, how do I know these sources are good? That's a very good question. I have a series of tests that I'm doing to verify that the sources are consistent with what we know, or have some kind of known deviation / reconstruction when not. They had passed only the first of my many tests (they were consistent with the patches themselves, but nothing else). It was time to see if I could build what I'd made.

One thing we know for sure: the 2.11BSD release happened. This means the sources for the release must be buildable, in some way. The 2.11BSD release notes don't mention any reproducibility issues, so presumably the documented way will work. However, patches 106-111 fix dozens of build issues that affected reproducibility of the build. In addition, one should be able to build twice in a row and get identical results, modulo a few binaries that encode dates and such. Experience has shown that many programs in /usr/local or /usr/new are the worst offenders. I've made the decision that if make install doesn't install it from the top level, then it won't be in the release I recreate. Though I also made the decision that building some man pages by hand was OK to make that happen...

Part of building it twice is building it at all. In patches 158-178, the binary format of the .o files changed to accommodate longer symbol names. As a result, the binaries in the 195 image don't produce binaries that work on the unpatched release (well, the binaries themselves run, but the .o's they produce are wrong, as are all programs that read symbols). In addition, there are issues just building everything on the 195 image: as, nm, and ld don't even build, and without those, you won't get far. In fact, the 195 assembler won't even assemble the assembler I've recreated. Since the straightforward way won't work, I thought I'd document what does.

For a background on the toolchains, please see an earlier blog post. It goes over all the basics of toolchains, which I assume people are familiar with.

Bootstrapping as

So, we have to bootstrap the assembler. The 2.11pl195 assembler won't assemble it properly. The v7 assembler will. However, building it on a v7 system isn't the solution: the resulting binaries won't run on the 2.11BSD system. The system call format changed with 2.11BSD, so even the 2.10BSD binaries won't run. One advantage, though, of either the 2.11BSD or the V7 assembler is that it will run under apout.

Apout is a tool that the unix-1972 crowd over at TUHS created to run PDP-11 binaries on modern hardware. It doesn't implement all the system calls; the C compiler, which forks other programs, won't run, for example. However, the assembler will. And the loader. And cpp. Why's the last one important? Well, if we have cpp, then we can assemble the 2.11BSD system call glue in libc.

The assembler is written in fairly low-level code. It calls half a dozen system calls, so this is easy, right? For the system calls, one needs only cpp and the assembler to create the glue. However, there's one other function it calls: signal. Signal used to be a system call when as was written. In 2BSD, Berkeley reworked how signals worked, so they created a compatibility shim, written in C, for the old way. That presents a problem... Getting the C compiler going would be a lot of effort because it has so many passes and I'd have to string them all together by hand. My solution was to look at the sources and notice that signal was only called to register a handler that cleans up tmp files when SIGINT is received. This is important for real, old-school PDP-11 hardware that measured speed in hundreds of thousands of operations a second (or worse!): it means ^C cleans up the temp files. But for bootstrapping? It's not really needed. So I created a .s file that was just '_signal: rts pc', which does nothing but satisfy the linkage...
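The whole workaround fits in a few lines. This is my reconstruction of the idea, not the author's exact file; only the '_signal: rts pc' line comes from the account above, and the .globl line and comments (using the PDP-11 assembler's '/' comment syntax) are my framing:

```shell
# Write out a do-nothing _signal stub that satisfies linkage.
set -e
work=$(mktemp -d)

cat > "$work/signal_stub.s" <<'EOF'
/ Bootstrap-only stub: as calls signal() to arrange tmp-file cleanup
/ on ^C.  For a one-shot bootstrap run, doing nothing is fine.
.globl	_signal
_signal:
	rts	pc
EOF

cat "$work/signal_stub.s"
```

rts pc is the PDP-11 subroutine return, so callers get back a harmless no-op wherever signal() was invoked.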

To make things simple, I used ld's partial-link functionality to link all the .o's together into a bootstrap.o. This took the place of libc. So I was able to bootstrap the assembler using the V7 as and ld binaries, plus the 2.11BSD cpp binary to preprocess the 2.11 sources. I did this twice, once for each pass of the assembler. I added the code to the script that I use to create the 2.11BSD (original) tree, and that script takes care of copying the results into the tree. The assembler was able to assemble itself, so on to the next step.

Now that I had the assembler bootstrapped, I could move on. Here we shift from the FreeBSD host that was creating the 2.11BSD (original) tree to a 2.11BSD pl 195 simh image with a copy of that tree (which I'll call ur2.11 below, to distinguish it from 2.11BSD pl 195, which I'll just call '195) mounted on /scratch. FYI: the 'ur' prefix is often used in linguistics to describe the original version of something, now lost but reconstructed.

Bootstrapping ranlib (and to get there ld and nm)

So, one of the things you need is something called ranlib. It reads through a library, collects a table of all the symbols in that library, and puts it in the first member of the archive. ld then uses that to pull in what it needs from the library. This eliminates the need to worry about cycles and other strange things. Without a table of contents, ld will just make a single pass through the .a file, pulling in everything that's needed. When there are no cycles in the dependencies, this works great: you create the library with 'lorder *.o | tsort' so that it can be pulled in with one pass. If there are cycles, the library has to be listed multiple times to resolve them all.
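The 'lorder *.o | tsort' trick is just a topological sort of use/used-by pairs. Since lorder(1) is a BSD tool that isn't common on modern systems, this sketch writes the pairs by hand; the object names are invented:

```shell
# tsort(1) linearizes "user used" pairs, as lorder(1) emits them, so
# each object comes before the objects it pulls symbols from.  Here,
# main.o references util.o, and util.o references misc.o.
printf 'main.o util.o\nutil.o misc.o\n' | tsort
```

With that ordering, ld's single pass always sees a reference before the member that satisfies it, which is exactly what a table-of-contents-free archive needs.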

libc, of course, has cycles. So, how do we fix that? Well, we need to build ranlib (since the newer ranlib uses a different table-of-contents format, because why would it be easy?). To make matters worse, 2.11BSD changed the archive format from the old PDP-11 format to the portable archive format.

So, to build ranlib, we need libc and ld. For libc, we need nm, because the lorder shell script uses it and I didn't want to hack the build process. Let's focus on the first two of those. In an ideal world, we could just build them on the '195 image. For once in this project, that's entirely possible, but with a caveat: the include files have changed, so I needed to build on the '195 system using the ur2.11 includes (not the '195 ones; they had been rototilled in the 158-178 patch sequence for the new binary format). I needed to do this on the '195 system because it could create new binaries (chrooted into the ur2.11 system, it could not). I was able to do this simply enough:
cd /scratch/usr/src/bin
cc -o ld -O -i ld.c -I/scratch/usr/include
cc -o nm -O -i nm.c -I/scratch/usr/include
Now I had everything I needed to bootstrap ranlib... almost.

Drop into the chroot

As readers of my blog know, I recently did some research into chroot. The reason was this effort. I'd recalled reading that it was added in 4.2BSD, etc. So I went looking and found an interesting story (which I've already told).

Now you know why I was looking: the next step is to chroot into /scratch. Once we're there, we need to do a few things. First, let's copy things over:
chroot /scratch
cd usr/src/bin/as
cp as /bin
cp as2 /lib
rm as as2
cd ..
cp nm ld /bin
rm nm ld
cp /bin/true /usr/bin/ranlib
OK. That gives us a working assembler, loader, and nm. What about cc? Don't we need to rebuild it? It turns out, no. It's already working, creating perfectly fine assembly. Since we just swapped out the assembler, we're good: it produces the new format. And the loader can combine the results into binaries that will run (we're quite fortunate that the '195 loader can create binaries that work on ur2.11). What about ar(1)? Well, we don't have to bootstrap that either (at least not yet) since the format is the same, even if the program was imported from 4.3BSD in the 158-178 patch series. Finally, we avoid an extra step later by copying /bin/true to ranlib. This means the ranlib in the ur2.11 tree right now (which came from '195) won't create an entry in libc.a that we'd have to delete later.

Building libc.a and crt0.o

So, next up, we need to rebuild libc.a and crt0.o. cc uses these to create working binaries, and we need cc to rebuild ranlib. Thankfully, it's relatively straightforward to rebuild and install libc:
cd /usr/src/lib/libc
make clean
# Hack around make sometimes failing to descend on some runs
(cd pdp/compat-4.1; make)
(cd pdp; make)
make
make install
make clean
so now we've replaced the '195 libc.a, with its newer-format binaries, with the ur2.11 libc.a in the proper format for this version. When building, you may have noticed tsort reported a cycle in the dependency graph. It's safe to ignore that for now; we'll work around it in a minute. Depending on the dates of directories, you may need to build deep directories by hand, because directories dated in the future aren't considered out of date, so they aren't rebuilt...

Building ranlib (for real)

Now we can build ranlib and use it to add a table of contents to libc.a. We'll need to specify libc.a twice in order to resolve the circular dependency. When linking libraries without the ranlib table of contents, ld only makes one pass through the library. So, if we list it twice, ld will pick up the remaining dependencies on its second pass through. Since all the other symbols are already resolved, we don't wind up with two copies of anything.
cd /usr/src/usr.bin
cc -o ranlib -O -i ranlib.c -lc
cp ranlib /usr/bin
ranlib /lib/libc.a
So, now we have a sane libc.a and ranlib.

Finishing up the Bootstrapping

OK. We could go on from here and make a lot of progress. Along the way, though, we'd discover that there are some programs whose Makefile assumes certain things about ar, or wants to exec the strip program, etc. So we'll build those now and install them to make for smoother sailing later. All the other dependencies are properly handled.
cd /usr/src/bin
make ar strip
cp ar strip /bin
And we're not quite done: install groks the binary format, so it has to be bootstrapped now, before we use install -s as part of many make install targets:
cd /usr/src/usr.bin
make xinstall
cp xinstall /usr/bin/install 

Doing the Build

At this point, the simple way to build is to do the following:
cd /usr/src
make clean
make all
make install
make clean
make all
make install
which builds everything twice. This is far from optimal, but it works. The things that fail the first time around, due to missing libraries and such, succeed the second time through.

One could look in the sources and find there's another process, 'make build'. It installs the includes (well, that part is commented out, and that caused version skew between /usr/src/include and /usr/include), builds and installs libc, builds and installs the C compiler, rebuilds libc, rebuilds and reinstalls the C compiler, then builds and installs usr.lib before building and installing the 'bin usr.bin etc ucb new games' directories. This works mostly OK. However, in our situation, it leaves a big hole: there are programs in /usr/src/usr.lib that need other libraries in /usr/src/usr.lib, so they fail to build in the make build scenario. Plus, I've had it fail in the second build of libc for reasons unknown (it just fails to descend into the pdp directory, which it had no trouble doing the first time).

So if you go and look at the bootstrap program, you'll see the following crazy dance. Of course, it knows it's already built libc once (and while it lacks the above workaround for libc, the actual automation has it):
cd /usr/src
make clean
cd lib
for i in ccom cpp c2 libc ccom cpp c2; do
    (cd $i; make all install)
done
cd ../usr.lib
for i in lib[0-9A-Za-z]*; do
    (cd $i; make all install; make clean)
done
ln /usr/lib/libom.a /usr/lib/libm.a
cd ..
make all
make install
make clean
which is similar enough to 'make build' but avoids the holes in it, and avoids having to build absolutely everything twice (though it does build libc and the C compiler 3 times total, which is likely overkill). The funky pattern for building libraries is because there's a lib.b that's installed (it's just a text file with what appears to be B code in it). The link for libm afterwards mimics what the make install target does in usr.lib, since we're not using that target and libom.a is used for libm.a on 2.11BSD. Since we remove all .a's in creating the root, we have to recreate this link here.

In the end, we're left with a complete userland that we can move to the next phase with. Once we have a kernel, we can rebuild the release tapes, which I'll leave as a topic for another day. With the boot block rework, the disk label changes, and the changing needs of the 2.11BSD community, rebuilding them for ur2.11 is somewhat different than for 2.11BSD patch 469.

As a workaround for some build issues, I also needed to build a number of man pages so the programs associated with them would be properly installed... Suffice to say, I rebuilt all the man pages in the end as part of the bootstrap script, but they aren't strictly required to run the system.

Building the Kernel

Normally, one would reconfig the kernel and build it. However, in 2.11BSD as released, there were a number of hacks made to the kernel Makefile to get it to fit into memory. Normally, one would make these hacks in /sys/conf/Make.sunix so that configuring GENERIC wouldn't destroy any carefully worked-out overlay scheme, but that wasn't done initially. So we have to be careful how we build.

Also, in the initial version, the root partition was hard-coded into the kernel. There was a script called /GENALLSYS that would create all the variants: rpunix, raunix, xpunix, hkunix, etc. When installing, one needs to know the proper one to use. So, putting that all together, we can just do this:

cd /usr/src/sys/GENERIC
make && make install && (cd / ; cp unix genunix; sh -x /GENALLSYS)

which builds all possible bootable kernels... 

Building all the Standalone Programs

When we built everything, a few things still weren't built: the boot loader, autoconfig, and the boot program (which is different from the boot loader). One just needs to build in /sys/mdec, /sys/autoconfig and /sys/pdpstand:
cd /sys/mdec
make && make install && make clean
cd /sys/autoconfig
make && make install && make clean
cd /sys/pdpstand
make && make install && make clean
Once one has mdec installed, one needs to dd the boot blocks onto the disk to make it bootable. When I was bootstrapping this disk, I did it with the intention of making a bootable system. I had to add /usr to /etc/fstab too, but all the things I did might fill another blog entry...

Conclusion

Building entire systems is messy, and has always been messy. Unless you skipped to the conclusion, I suspect you've already formed this opinion about the 2.11BSD build process. I've managed to enshrine everything above into build.sh and build2.sh to make things automated. Using this technique I've managed to build a ur2.11BSD boot disk, create boot tapes, and install from those tapes. Automation was key, though, to recording all the right steps in the right order.

20200803

Missing 2.11BSD patches

2.11BSD Missing Patches

While looking into some date anomalies in the final image (since I'd like to get the dates right), I discovered a number of source directories had dates slightly newer than the date in the announcement. This led me to discover some missing patches in a couple of different places.

The Anomaly

I've automated the system generation, tape generation, and installation from tapes, to allow me to make small tweaks and get end-to-end testing. As part of this, after the system is installed, I'll do a test boot, as if I'd installed the system on April 10th, 1991 and booted it on April 15th. The boot looks something like this:
sim> boot rq

boot: 73Boot
: ra(0,0)unix

2.11 BSD UNIX #1: Fri Mar 15 15:48:55 PST 1991
    root@wlonex.imsd.contel.com:/usr/src/sys/GENERIC

phys mem  = 4186112
avail mem = 4008640
user mem  = 307200

Apr 10 13:50:01 init: configure system
ra 0 csr 172150 vector 154 attached
rl 0 csr 174400 vector 160 attached
tms 0 csr 174500 vector 260 attached
ts 0 csr 172520 vector 224 attached
xp 0 csr 176700 vector 254 attached
erase, kill ^U, intr ^C
# date 9104151234
date: can't write wtmp file.
Mon Apr 15 12:34:00 PDT 1991
# Fast boot ... skipping disk checks
/dev/ra0c on /usr: Device busy
checking quotas: done.
Assuming non-networking system ...
preserving editor files
clearing /tmp
standard daemons: update cron accounting.
starting lpd
starting local daemons:.
Mon Apr 15 12:34:01 PDT 1991


2.10 BSD UNIX (my.domain.name) (console)

login:
I set the date in single-user mode, then bring the system up to multiuser. In one of my tests, I found the following:
-rw-r--r--  1 imp  imp   9777 Aug 31  1991 alloc.c
-rw-r--r--  1 imp  imp   4817 Aug 31  1991 alloc11.c
-rw-r--r--  1 imp  imp  12474 Aug 31  1991 doprnt.c
-rw-r--r--  1 imp  imp   3299 Feb 23  1987 doprnt11.s
-rw-r--r--  1 imp  imp    831 Aug 31  1991 printf.c
-rw-r--r--  1 imp  imp  20446 Aug 31  1991 sh.c
-rw-r--r--  1 imp  imp   1771 Aug 31  1991 sh.char.c
which I thought was quite strange. There shouldn't be any files dated newer than the release in the tree. I know patched files don't get their times set right, so I exclude those from my search (I plan on fixing that bug later). The above files (and others; it's just a short list) shouldn't be there. So I started looking...

The Diffs

Running diffs against 2.10.1, I discovered that the csh files were almost all the same. However, a typical diff looked like:

diff -ur root-2.10.1/usr/src/bin/csh/alloc.c root-2.11/usr/src/bin/csh/alloc.c
--- root-2.10.1/usr/src/bin/csh/alloc.c 1987-02-08 15:27:23.000000000 -0700
+++ root-2.11/usr/src/bin/csh/alloc.c   1991-08-31 01:03:00.000000000 -0600
@@ -4,10 +4,10 @@
  * specifies the terms and conditions for redistribution.
  */

-#ifndef lint
+#if    !defined(lint) && defined(DOSCCS)
 /* From "@(#)malloc.c  5.5 (Berkeley) 2/25/86"; */
 static char *sccsid = "@(#)alloc.c     5.3 (Berkeley) 3/29/86";
-#endif not lint
+#endif

 /*
  * malloc.c (Caltech) 2/21/82

which removed the SCCS IDs from the binary to save space. Other changes included introducing overlays for the first time. This indicated a size issue. Let's take a look at what else was going on around 31 Aug 91 in the patch stream. Looking, we find that this is just after patch 18 (which fixed a long vs int bug in test) as well as patch 17, which updated pcc. This sounds like a size hack by someone who had just updated the compiler, or was testing with pcc (the normal system compiler wasn't pcc, but the earlier Thompson compiler). Another of the changes also fixes an issue with character handling, which other patches have done to reduce the size of binaries that got too big.

So, in context, this change makes perfect sense. The only trouble is that it wasn't posted to comp.bugs.2bsd, nor did it make it into Steven Schultz's patch repo. And csh isn't the only troublesome one. There are issues in rn and games/warp.

The Catch Up Patch

There was a catch-up kit that was issued officially as patch 80 (though it omitted patch 79). Looking in that kit, we find this change! So it's a change that was intended. So what to do? And it turns out there are 5 such patches (but only 4 of them made it into the kit... I'll talk about #5 in a minute).

I've decided to look at the dates of each of these patches and pretend they happened just after the patch whose date is closest. I've updated my mk211bsd script to extract these from the catch-up kit.

Oh, and there were a number of new programs added in the catch-up kit as well. These must be deleted too, but I'd already noticed that and deleted them.

tftp changes

So, on May 4th, 1991, a patch to dd.c was posted to comp.bugs.2bsd. It's also included in the official archive as patch 1. The release announcement was dated March 14th, 1991. But there are tftp files dated May 15th, 1991. What's up with those? It turns out this is another missed patch (though one the catch-up kit assumes is already in place, because it's not in there; well, it's partially in the patches and partially in the scripts). It's an update of tftp and tftpd to a new version. It was posted to comp.bugs.2bsd on May 15th, 1991, but isn't in the official list of patches. So not only do we have to dig it out of the catch-up patches (from two different files), we also have to restore the old man pages from 2.10.1BSD, but in a different place. So this patch will be a patch + rm (which, going backwards, is a patch + cp).

csh changes

As discussed above, these are various hacks to get the size of csh down.

warp changes

The changes here are around the config script used to generate files for the build. The changes use full path names, and cope with the new shadow password format changes.

pcc changes

[[ edit -- these were poorly documented: they are in patch 17, but not called out specifically to apply ]]

rn changes

As part of the catch-up, there are a number of minor patches to rn that were included in the kit but were never formally published, nor do they occupy a number of their own.

My Work

So, how does this affect me? Well, it means that I need to understand the catch-up patch a lot better. I had hoped to use it later as a cross-check against my work. I didn't anticipate that I'd be using it this soon, to recover missing bits I'd found using other techniques. I've had to update mk211bsd to extract those bits, as well as create a couple of hints files to help me undo the changes.

And when the time comes to replay all the patches, I'll need to take these anomalies into account as well. But that's a problem for future me.

I also have to complete the audit of the weird file dates. There are 63 of them right now, 29 of them still in /usr/src. Some are clearly old man pages that I can remove. Some are the result of running 'configure' or a similar script (rn has 7 of these). Some are config files that change over time (like the one for the root name server). Some may be just left-over detritus of a running system. I need to see which ones fall into which categories and update accordingly. This may dovetail back into needing to bring them back to make sure I can march back to pl195 and get the same system. Since I started with ~1500 such anomalies, I think being down to 63 is quite good. And there are others elsewhere in the system...

Current status

As you might guess, if I'm finding things like this, that means I'm getting closer. I've shared a lot of this on my @bsdimp twitter account, but now is a good time for a wrap up. Here's what's done currently:
  1. Script to undo all the patches, including helper 'hints' scripts, where possible from existing artifacts.
  2. Missing patches reconstructed and integrated into the build
  3. Automated installing of the 2.11BSD pl195 image
  4. Automated bootstrapping back from 195 -> 0. There are a number of interesting problems here that I'll blog about soon
  5. Building the 2.11BSD pl 0 tapes automatically
  6. Test installing the pl 0 system from the pl 0 tapes.
The missing bits include
  1. Getting the dates right (or, failing that, plausible) for the patched files
  2. Finishing the date audit and tracking all anomalies to ground.
  3. Cleaning up my helper scripts off the image
  4. Creating a github repo with all the patches in it
  5. Reproducing the build on a second system
  6. Getting the ownership right for some files (eg using the mtree hack to get the ownership and permissions right, generating it from the pl195 tape/image, etc)
  7. Getting dates right on /; right now restor(8) doesn't restore the date in one-at-a-time mode, so these are all wrong.
  8. Fixing tmscp boot. It's broken. The tmscpboot.s, ported from tkboot.s, only existed for a short period of time and has been lost. My reconstruction has issues (it won't boot), and I've not delved into why.
  9. Creating automation to ensure that the 'catch up' kit will apply cleanly.
And of course, I need to figure out the best way to publish the artifacts when I think I'm done.

20200727

When Unix learned to reboot(2).

History of Reboot(2)

Recently, a friend asked me the history of halt, and when did we have to stop with the sync / sync / sync dance before running halt or reboot. The two are related, it turns out.

That sync; sync; sync Thing...

If you go looking around the net, you'll find some people giving advice like "when shutting down, type 'sync; sync; sync; halt' to be safe." There are good reasons behind this advice which aren't immediately clear and are interesting to explore. Before exploring, I'd been told that the reason for the sync dance was a driver bug in v6 that was fixed 45 years ago... But it turns out whoever told me that must have been mistaken, because the code tells a different story...

The sync program called the sync system call and exited (and still does). The sync system call in research editions of Unix was implemented approximately as:
for each mount point
    write the superblock with bwrite
for each dirty inode
    write the inode with iupdat
bflush
 
It would step through the fixed list of buffers in the system, writing the dirty ones out. It used bwrite() to do this, which was synchronous: each write had to complete before the next one started. iupdat would read the inode off the disk, update it, and write it out, again synchronously. bflush also writes everything with bwrite, but first marks the buffers B_ASYNC, in which case bwrite won't wait. And nothing else waits either. So the recommendation to type sync three times, one line at a time, was to give time for the buffers to drain (the subsequent syncs would schedule no new I/O on a quiet system). Typing all three on one line with semicolons didn't give this time...
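
To make the timing issue concrete, here's a toy C model of that behavior. All the names and the tiny "buffer cache" are invented for illustration; this is not v7 code, just the shape of it:

```c
#define NBUF 8

/* toy buffer cache: dirty = needs writing, inflight = write scheduled
   but not yet on disk */
static int dirty[NBUF], inflight[NBUF];

/* models bwrite with B_ASYNC set: schedule the write, don't wait */
static void bawrite(int i) { inflight[i] = 1; dirty[i] = 0; }

/* models sync(2): schedule all the dirty buffers, then return at once */
static void sync_call(void) {
    for (int i = 0; i < NBUF; i++)
        if (dirty[i]) bawrite(i);
}

/* models time passing: the scheduled I/O finally completes */
static void disk_tick(void) {
    for (int i = 0; i < NBUF; i++) inflight[i] = 0;
}

/* buffers whose contents are not yet safely on disk */
static int unsafe(void) {
    int n = 0;
    for (int i = 0; i < NBUF; i++) n += dirty[i] || inflight[i];
    return n;
}
```

After one sync_call() everything is merely in flight; it's the pause while the operator types the next sync (disk_tick here) that actually gets the data onto the disk.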

If you look at the recommendation, it's actually quite smart. Typing one sync per line, waiting for the prompt each time, would schedule a lot of I/O the first time, then give the operator a harmless task to do for a few seconds, allowing the I/O to complete before they did anything else. The kernel also avoided all kinds of nasty deadlocks that later systems would face when they implemented waiting for the I/O to complete.

Edit: One bit of lore that was passed on to me was that the first sync returned right away, but the second one blocked... I've found no evidence of that in BSD or System V based systems... although there is an increasing amount of protection against multiple threads being in the sync code as concurrency in Unix increased.

Why not do this in reboot(2)?

None of the versions of Research Unix had a system call to reboot. To restart things, one killed init with SIGHUP, which would in turn kill everything else and fork a new shell in single user mode. There was no other way to restart the system, and bad things happened if init actually died. There was no clean reboot option, nor any way to stop the kernel cleanly (apart from the power switch).
Looking at the sources, there was one small hint that something was planned, but never executed. All the system calls were defined in /usr/include/sys.s. A close examination shows the following:
lock    = 53.
ioctl   = 54.
reboot  = 55.
mpx     = 56.
setinf  = 59.
which proves me wrong, right? Well, maybe not. Looking at sysent.c, we see the following:
1, 0, syslock,          /* 53 = lock user in core */
3, 0, ioctl,            /* 54 = ioctl */
0, 0, nosys,            /* 55 = readwrite (in abeyance) */
4, 0, mpxchan,          /* 56 = creat mpx comm channel */
0, 0, nosys,            /* 57 = reserved for USG */
0, 0, nosys,            /* 58 = reserved for USG */
3, 0, exece,            /* 59 = exece */
which lists 'nosys' as the handler, so there's no implementation. There's no reboot system call. I'll also note that there's another disconnect: system call 59 is listed as setinf (whatever that is), but is implemented as exece.
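
The dispatch mechanism itself is just a table of argument counts and function pointers indexed by syscall number, with nosys filling the unimplemented slots. A minimal modern-C sketch of the idea (not the actual v7 code; the handlers here are stand-ins):

```c
static int nosys(void)     { return -1; }   /* unimplemented: caller gets an error */
static int sys_ioctl(void) { return 0;  }   /* stand-in for the real handler */

struct sysent { int nargs; int (*call)(void); };

static const struct sysent sysent[] = {
    [54] = { 3, sys_ioctl },   /* 54 = ioctl */
    [55] = { 0, nosys     },   /* 55 = reserved: reboot's future slot */
};
```

Reserving slot 55 by pointing it at nosys is exactly the "in abeyance" trick above: the number is claimed, but any process that invokes it just gets an error back.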

Enter 4BSD

The first reference to reboot(2) I can find is in 4.0BSD. In its sysent.c, we see the following:
3, 0, ioctl,            /* 54 = ioctl */
1, 0, reboot,           /* 55 = reboot */
4, 0, mpxchan,          /* 56 = creat mpx comm channel */
where reboot landed in the slot allocated for it. There's a new command in /etc/ that calls it (by syscall number, not through the normal wrapper). It wasn't in 2.8BSD, but it is present in a similar form in 2.9BSD and later. 3BSD still has the placeholder pointing at nosys. Since the 2BSD line tracked 4BSD's evolution here, I'll not call it out further.

In 4.0BSD (1980), reboot() just called a machine-dependent boot() routine. That called update(), which scheduled the writes as described above, and printed that it was waiting for the I/O to finish. However, the 'wait for it' code was basically 'sleep(5)', so if all the data didn't get out in 5 seconds, bad things would happen. So the "sync sync sync halt" dance was still useful advice: it would get the ball rolling and easily double the amount of time the data had to make it to the disk, depending on the typist... 4.1c (1982) bumped this to 10s and had ifdef'd-out code to try to wait for all the dirty buffers to clear.

In 4.2BSD (1983), the wait-for-the-writes code is engaged. It tries up to 20 times to walk the list of bufs in the system to let the buffers drain. So progress was made here. 4.3BSD adds a delay of 40ms * iteration (8.4s in total), which remains through at least 4.4BSD (1993)... So while things got better, systems got bigger too, and more and more I/O could pile up. Successor BSD systems improved on this as well, with various ways to solve it (mostly once systems got big enough that update(8) couldn't flush all the I/O in the 30s before its next sync call started).
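
The 8.4s figure falls straight out of the arithmetic: 40ms times the iteration number, summed over 20 iterations.

```c
/* 4.3BSD retried up to 20 times, delaying 40 ms * iteration each pass */
static int boot_wait_ms(void) {
    int total = 0;
    for (int iter = 1; iter <= 20; iter++)
        total += 40 * iter;
    return total;   /* 40 * (1 + 2 + ... + 20) = 40 * 210 = 8400 ms */
}
```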

AT&T Unix

Meanwhile, on the AT&T side of things... System III (1980) didn't have anything. System Vr1 didn't have anything either. None of the Programmer's Workbench (PWB) releases had a reboot(2).

System Vr2 (1984) defined a new uadmin(2) system call. It acted like an indirect system call (you passed it what you wanted it to do as the first arg). One of these, A_REBOOT, is called from fsck, but the kernel doesn't implement it.

Fast forward to System Vr3 (1987) and we find an implementation. It's basically a call to umount for the / filesystem followed by a call to reset the CPU. The other filesystems are unmounted as part of the shutdown process before uadmin(A_REBOOT, ..) gets called, so only / remains mounted by the time it runs. The call to umount flushes the dirty buffers and cleanly unwinds everything else, so nothing is pending when the CPU reset happens. So finally the 'sync sync sync halt' problem had been solved... Well, maybe... There's no timeout, nor any way to avoid deadlock. Still, a fairly clean solution to the issue, especially relative to what BSD was doing at the time.
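
That flow is easy to sketch in toy form. Everything below is invented for illustration (the names, the state, even the A_REBOOT value), not SVr3 code; the point is the ordering: unmounting root flushes its buffers, and only then does the CPU reset.

```c
#define A_REBOOT 1   /* illustrative value, not the real SVr3 constant */

static int root_dirty_bufs = 12;   /* pretend dirty state on / */
static int cpu_was_reset   = 0;

static void umount_root(void) { root_dirty_bufs = 0; }  /* umount flushes synchronously */
static void cpu_reset(void)   { cpu_was_reset = 1; }

static int uadmin(int cmd) {
    if (cmd != A_REBOOT) return -1;
    umount_root();   /* shutdown already unmounted everything else */
    cpu_reset();     /* nothing is pending by the time we get here */
    return 0;
}
```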

Commercial Unix

Source for the early commercial Unixes, even leaked source, is largely unavailable. By the time SunOS got to 4.1, however, there was a vfs_syncall() that the machine-dependent boot() function called to synchronize everything (it only returned when all the scheduled I/O was done). I've not checked whether there was a halting problem here, but empirical evidence from rebooting a lot of Suns suggests that I/Os getting wedged was rarely a problem in practice... Or maybe I was lucky enough not to have a large enough fleet of machines for flaky disk problems to start showing up... I can't find any earlier versions of the Sun sources, so it's hard to know when this solution entered the tree (the code I looked at dates from 1994, 11 years after the initial release). I suspect Sun solved this problem early, but I have no proof of this beyond a hunch.

Other Unixes that aren't just System V ports are hard to find in source form, so I can't say from original sources whether or not they solved the issue prior to System Vr3. A lot of 4.2BSD and 4.3BSD ports didn't survive to be examined, either.

The one exception to this general rule was a copy of the Unisoft 1.0 kernel I found on bitsavers. It dates from 1986 (so relatively late). It has a reboot system call (number 64, not 55). That system call calls update(), like 4BSD's, but then does a big for() loop (1 to 1,000,000) before calling a routine that resets the CPU. This kernel is a V7 port, and most likely got the idea (and maybe the code) from one of the 4BSD releases. This kernel appears to be the basis of Sony's SUNIX (which appears to be a 7th Edition-based Unix that predated Sony's NEWS-OS, itself based on 4.2BSD). NEWS-OS likely behaved like 4.2BSD, but I can't confirm that due to lack of sources. If you know more about SUNIX or NEWS-OS, please leave a comment.

Linux

Linux's sync call is synchronous: you get the same guarantees as you do from fsync. This behavior was introduced in 1.3.20, released in 1995. Prior to that, the same sync-dance advice was useful, since early versions of Linux were more aggressively asynchronous in their handling of disk writes than other contemporary systems. While this helped it compete in benchmarks, it caused data integrity problems once Linux machines started to be put into production (which was one of the reasons motivating the change). Modern Linux systems flush all the dirty buffers as part of the shutdown sequence and wait for the flush to complete before rebooting, powering off, or halting.
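
For a single file, that fsync-style guarantee looks like this in modern C. A small sketch (the function name is mine); the key point is that fsync(2) doesn't return until the data is on stable storage, which is the promise sync(2) itself didn't make until 1.3.20:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write msg to path and only report success once it's on stable storage. */
static int write_durably(const char *path, const char *msg) {
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) return -1;
    size_t len = strlen(msg);
    if (write(fd, msg, len) != (ssize_t)len) { close(fd); return -1; }
    if (fsync(fd) != 0) { close(fd); return -1; }   /* blocks until durable */
    return close(fd);
}
```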

Conclusion

For years, I'd been told the reason for the three-sync dance was a driver bug, long since fixed, in a DEC disk driver in v6. However, digging into it shows that there were decent reasons for doing this dance, even after Unix learned to reboot() itself.